Provide a PyTorch code example of stochastic gradient descent (SGD)
Date: 2024-09-13 13:07:40
A minimal example: a linear model trained with `torch.optim.SGD` on random synthetic data.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Define a simple linear model
class LinearModel(nn.Module):
    def __init__(self, input_size, output_size):
        super(LinearModel, self).__init__()
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, x):
        return self.linear(x)

# Hyperparameters
input_size = 10
output_size = 1
learning_rate = 0.01
epochs = 100

# Create the model
model = LinearModel(input_size, output_size)

# Loss function and optimizer
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate)

# Generate synthetic data
x = torch.randn(100, input_size)
y = torch.randn(100, output_size)

# Training loop
for epoch in range(epochs):
    # Forward pass
    outputs = model(x)
    loss = criterion(outputs, y)

    # Backward pass and parameter update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Print the loss every 10 epochs
    if (epoch + 1) % 10 == 0:
        print(f'Epoch [{epoch+1}/{epochs}], Loss: {loss.item():.4f}')
```
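Note that the loop above computes the loss over the entire dataset each step, which is technically full-batch gradient descent even though the optimizer is `optim.SGD`. The "stochastic" behavior comes from updating on randomly sampled mini-batches. A sketch of that variant using `TensorDataset` and `DataLoader` (the batch size of 16 and the momentum value are illustrative choices, not from the original answer):

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader

# Synthetic data with the same shapes as the example above
x = torch.randn(100, 10)
y = torch.randn(100, 1)

# Wrap the tensors in a dataset and shuffle into mini-batches of 16
dataset = TensorDataset(x, y)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

model = nn.Linear(10, 1)
criterion = nn.MSELoss()
# Momentum is optional; 0.9 is a common illustrative value
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(20):
    for batch_x, batch_y in loader:
        optimizer.zero_grad()
        loss = criterion(model(batch_x), batch_y)  # loss on one mini-batch only
        loss.backward()
        optimizer.step()
```

Each `optimizer.step()` now uses the gradient of a random 16-sample batch, so the parameter updates are noisy estimates of the full gradient, which is the defining property of stochastic/mini-batch gradient descent.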