Linear Regression with Gradient Descent in Python
Posted: 2024-04-01 08:29:36
Linear regression is a common machine learning algorithm that fits a linear model to predict a continuous output variable. Gradient descent is an optimization algorithm that minimizes the model's loss function by repeatedly stepping the parameters in the direction of the negative gradient. Below is a Python implementation of linear regression trained with gradient descent:
```python
import numpy as np

class LinearRegression:
    def __init__(self, learning_rate=0.01, num_iterations=1000):
        self.learning_rate = learning_rate
        self.num_iterations = num_iterations
        self.weights = None
        self.bias = None

    def fit(self, X, y):
        num_samples, num_features = X.shape
        self.weights = np.zeros(num_features)
        self.bias = 0
        for _ in range(self.num_iterations):
            # Compute predictions
            y_pred = np.dot(X, self.weights) + self.bias
            # Compute gradients of the mean squared error
            dw = (1 / num_samples) * np.dot(X.T, (y_pred - y))
            db = (1 / num_samples) * np.sum(y_pred - y)
            # Update parameters
            self.weights -= self.learning_rate * dw
            self.bias -= self.learning_rate * db

    def predict(self, X):
        return np.dot(X, self.weights) + self.bias
```
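The gradient expressions `dw` and `db` in `fit` correspond to the mean squared error loss, written here with the conventional 1/2 factor so the gradients come out without a stray factor of 2:

```latex
L(w, b) = \frac{1}{2n} \sum_{i=1}^{n} \left( x_i^\top w + b - y_i \right)^2,
\qquad
\frac{\partial L}{\partial w} = \frac{1}{n} X^\top (\hat{y} - y),
\qquad
\frac{\partial L}{\partial b} = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)
```

Here $n$ is the number of samples and $\hat{y} = Xw + b$ is the vector of predictions, matching `y_pred` in the code.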
Usage example:
```python
# Create the linear regression object
regressor = LinearRegression(learning_rate=0.01, num_iterations=1000)

# Prepare training data
X_train = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y_train = np.array([6, 15, 24])

# Train the model
regressor.fit(X_train, y_train)

# Prepare test data
X_test = np.array([[2, 3, 4], [5, 6, 7]])

# Predict
y_pred = regressor.predict(X_test)
print(y_pred)
```
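As a sanity check, note that the toy targets above are just the sum of each row's features (1+2+3=6, 4+5+6=15, 7+8+9=24), so a well-fit model should predict roughly [9, 18] on the two test rows. A minimal sketch (not part of the original post) verifies this with NumPy's closed-form least-squares solver, which gradient descent should approach as the iteration count grows:

```python
import numpy as np

# Same toy data as above
X_train = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=float)
y_train = np.array([6, 15, 24], dtype=float)

# Closed-form least squares on the augmented matrix [X | 1],
# so the last coefficient plays the role of the bias term
A = np.hstack([X_train, np.ones((3, 1))])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

X_test = np.array([[2, 3, 4], [5, 6, 7]], dtype=float)
y_closed = np.hstack([X_test, np.ones((2, 1))]) @ coef
print(y_closed)  # approximately [9. 18.]
```

Comparing `regressor.predict(X_test)` against `y_closed` is a quick way to check that the learning rate and iteration count are sufficient for convergence on a given dataset.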