Lasso regression implemented with stochastic gradient descent and mini-batch gradient descent, with convergence plots for both algorithms
Posted: 2024-06-09 07:04:29
Below is an implementation of Lasso regression using stochastic gradient descent (SGD) and mini-batch gradient descent (MBGD):
```python
import numpy as np
from sklearn.datasets import fetch_california_housing  # load_boston was removed in scikit-learn 1.2
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

class LassoRegression:
    def __init__(self, learning_rate=0.01, alpha=1, batch_size=1, max_iter=1000, tol=1e-4):
        self.learning_rate = learning_rate
        self.alpha = alpha            # L1 regularization strength
        self.batch_size = batch_size  # 1 -> SGD, >1 -> mini-batch gradient descent
        self.max_iter = max_iter
        self.tol = tol                # convergence threshold on the change in cost

    def soft_threshold(self, r, lambda_):
        # Proximal operator of the L1 norm (not used by fit, which applies
        # a subgradient of the L1 term instead)
        if r < -lambda_:
            return r + lambda_
        elif r > lambda_:
            return r - lambda_
        return 0.0

    def fit(self, X, y):
        n_samples, n_features = X.shape
        self.weights = np.zeros(n_features)
        self.bias = 0.0
        self.cost_ = []
        for i in range(self.max_iter):
            # Sample a mini-batch (batch_size=1 gives stochastic gradient descent)
            indexes = np.random.randint(0, n_samples, self.batch_size)
            batch_X, batch_y = X[indexes], y[indexes]
            y_pred = np.dot(batch_X, self.weights) + self.bias
            # Lasso objective on the batch: squared error + L1 penalty
            cost = np.sum((batch_y - y_pred) ** 2) + self.alpha * np.sum(np.abs(self.weights))
            self.cost_.append(cost)
            # Subgradient of the objective w.r.t. each weight: the L1 term
            # contributes +alpha for w_j > 0 and -alpha otherwise
            dw = np.zeros(n_features)
            for j in range(n_features):
                if self.weights[j] > 0:
                    dw[j] = (np.dot(batch_X[:, j], (y_pred - batch_y)) + self.alpha) / self.batch_size
                else:
                    dw[j] = (np.dot(batch_X[:, j], (y_pred - batch_y)) - self.alpha) / self.batch_size
            db = np.sum(y_pred - batch_y) / self.batch_size
            # Update parameters
            self.weights -= self.learning_rate * dw
            self.bias -= self.learning_rate * db
            # Stop when the batch cost changes by less than tol
            if i > 0 and np.abs(self.cost_[-1] - self.cost_[-2]) < self.tol:
                break

    def predict(self, X):
        return np.dot(X, self.weights) + self.bias

# Load the California housing dataset (load_boston is no longer available)
X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Train Lasso regression using stochastic gradient descent (batch_size=1)
model_sgd = LassoRegression(learning_rate=0.01, alpha=1, batch_size=1, max_iter=1000, tol=1e-4)
model_sgd.fit(X_train, y_train)

# Train Lasso regression using mini-batch gradient descent (batch_size=10)
model_mbgd = LassoRegression(learning_rate=0.01, alpha=1, batch_size=10, max_iter=1000, tol=1e-4)
model_mbgd.fit(X_train, y_train)
```
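The `soft_threshold` helper in the class is the proximal operator of the L1 norm (`fit` above uses a subgradient of the L1 term instead). A minimal standalone sketch of its behavior:

```python
def soft_threshold(r, lambda_):
    # Proximal operator of lambda_ * |.|: shrinks r toward zero by lambda_,
    # and maps anything in [-lambda_, lambda_] exactly to zero
    if r < -lambda_:
        return r + lambda_
    elif r > lambda_:
        return r - lambda_
    return 0.0

# Values inside [-1, 1] are zeroed; values outside shrink by 1
print([soft_threshold(r, 1.0) for r in (-2.5, -0.5, 0.0, 0.5, 2.5)])
# -> [-1.5, 0.0, 0.0, 0.0, 1.5]
```

This zeroing-out of small coefficients is what gives Lasso its sparse solutions.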
Below are the convergence plots for the two algorithms:
Stochastic gradient descent:
![SGD](https://i.imgur.com/7qLk3rC.png)
Mini-batch gradient descent:
![MBGD](https://i.imgur.com/tm4x3Nt.png)