Write detailed Python code
Sure — here is a detailed Python code example covering the linear regression workflow we discussed earlier: data generation, parameter estimation, gradient computation, and gradient descent optimization:
```python
import numpy as np
import torch
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
# Step 1: Define the model parameters and generate the data
np.random.seed(42)  # set the random seed first so the whole run is reproducible
true_w = np.random.uniform(-10, 10)
true_b = np.random.uniform(-10, 10)
n_samples = 1000
x_data = np.random.uniform(-1, 1, n_samples)
noise = np.random.normal(0, 0.05, n_samples)
y_data = true_w * x_data + true_b + noise
# Step 2: Estimate the parameters by ordinary least squares
linear_regression = LinearRegression()  # sklearn's linear regression (closed-form solution)
linear_regression.fit(x_data.reshape(-1, 1), y_data)
w_estimated = linear_regression.coef_[0]
b_estimated = linear_regression.intercept_
# Step 3: Compute the gradient on the first three samples (using PyTorch's autograd)
torch.manual_seed(42)  # seed PyTorch so the random weight initialisation is reproducible
x_torch = torch.tensor(x_data[:3], dtype=torch.float32).reshape(-1, 1)
y_torch = torch.tensor(y_data[:3], dtype=torch.float32).reshape(-1, 1)
model = torch.nn.Linear(1, 1)  # a single linear layer: y = w*x + b
loss_fn = torch.nn.MSELoss()   # mean squared error loss
y_pred = model(x_torch)
grad_w = torch.autograd.grad(loss_fn(y_pred, y_torch), model.weight)[0]
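# Optional sanity check (added for illustration, not part of the original walkthrough):
# for the MSE loss L = mean((w*x + b - y)^2), the analytic gradient w.r.t. w is
# 2 * mean(x * (w*x + b - y)), which should agree with autograd's grad_w above.
with torch.no_grad():
    analytic_grad_w = 2 * torch.mean(x_torch * (model(x_torch) - y_torch))
print(f"autograd: {grad_w.item():.6f}, analytic: {analytic_grad_w.item():.6f}")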
# Step 4: Gradient descent optimisation
learning_rate = 0.01
n_iterations = 1000

# Optional: split into training and test sets (80/20) once, before the loop
split_idx = int(0.8 * len(x_data))
train_x = torch.tensor(x_data[:split_idx], dtype=torch.float32).reshape(-1, 1)
train_y = torch.tensor(y_data[:split_idx], dtype=torch.float32).reshape(-1, 1)
test_x = torch.tensor(x_data[split_idx:], dtype=torch.float32).reshape(-1, 1)
test_y = torch.tensor(y_data[split_idx:], dtype=torch.float32).reshape(-1, 1)

w_gradient_descent = []
train_errors = []
test_errors = []
for _ in range(n_iterations):
    y_pred = model(train_x)
    loss = loss_fn(y_pred, train_y)
    train_errors.append(loss.item())

    model.zero_grad()
    loss.backward()
    with torch.no_grad():  # manual gradient-descent step on both weight and bias
        for param in model.parameters():
            param -= learning_rate * param.grad
    w_gradient_descent.append(model.weight.item())

    with torch.no_grad():  # track the test error after each update
        test_errors.append(loss_fn(model(test_x), test_y).item())
# Step 5: Compare the true value with the OLS and gradient-descent estimates
print(f"True w: {true_w}, OLS estimate: {w_estimated}, "
      f"Gradient descent estimate: {w_gradient_descent[-1]}")
# Step 6: Plot the error curves
plt.plot(range(1, len(train_errors) + 1), train_errors, label='Train Error')
if test_errors:
plt.plot(range(1, len(test_errors) + 1), test_errors, label='Test Error')
plt.xlabel('Iterations')
plt.ylabel('Error')
plt.legend()
plt.show()
```
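As a cross-check on Step 2, the same parameters can also be obtained directly from the least-squares normal equations with plain NumPy. The sketch below is illustrative only and regenerates its own toy data (the true values w = 3, b = -2 are made up for the example); if you run it after the code above, you can simply reuse `x_data` and `y_data` instead:
```python
import numpy as np

rng = np.random.default_rng(42)
x_data = rng.uniform(-1, 1, 1000)                        # stand-in for the data generated above
y_data = 3.0 * x_data - 2.0 + rng.normal(0, 0.05, 1000)  # hypothetical true w = 3, b = -2

# Design matrix [x, 1] so that X @ [w, b] approximates y = w*x + b
X = np.column_stack([x_data, np.ones_like(x_data)])
# Least-squares solution of X @ [w, b] ≈ y
(w_lstsq, b_lstsq), *_ = np.linalg.lstsq(X, y_data, rcond=None)
print(f"w ≈ {w_lstsq:.3f}, b ≈ {b_lstsq:.3f}")
```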
This code walks through building a linear regression model from start to finish, including parameter estimation, gradient computation, and optimization. Make sure the required libraries (`numpy`, `torch`, `scikit-learn`, `matplotlib`) are installed before running it. If you have any questions, feel free to ask!