Implement a linear regression model and optimize it with autograd in PyTorch.
Here is an example implementation of a linear regression model in PyTorch, trained using autograd-based optimization:
```python
import torch
import numpy as np

# Generate some random data: y = 2 + 3x + Gaussian noise
np.random.seed(42)
x = np.random.rand(100, 1)
y = 2 + 3 * x + 0.1 * np.random.randn(100, 1)

# Convert data to PyTorch tensors
x_tensor = torch.from_numpy(x).float()
y_tensor = torch.from_numpy(y).float()

# Define the model: a single linear layer (one input, one output)
class LinearRegression(torch.nn.Module):
    def __init__(self):
        super(LinearRegression, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        return self.linear(x)

model = LinearRegression()

# Define the loss function
criterion = torch.nn.MSELoss()

# Define the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Train the model
num_epochs = 1000
for epoch in range(num_epochs):
    # Forward pass
    y_pred = model(x_tensor)
    loss = criterion(y_pred, y_tensor)

    # Backward pass and optimization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Print progress
    if (epoch + 1) % 100 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, loss.item()))

# Print the learned parameters
w, b = model.parameters()
print('w =', w.item())
print('b =', b.item())
```
In this example, we define a linear regression model as a subclass of `torch.nn.Module` with a single linear layer. We use mean squared error as the loss function and `torch.optim.SGD` as the optimizer (since each step consumes the full dataset, this is effectively batch gradient descent). In each epoch, `backward()` backpropagates the loss to compute gradients for the model parameters, and the optimizer's `step()` applies the update; `zero_grad()` clears the gradients first, because PyTorch accumulates them by default. After training, the printed slope and intercept should be close to the true values of 3 and 2 used to generate the data.
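The `nn.Module`/optimizer pattern hides the autograd mechanics. As a minimal sketch of what happens under the hood (my own variant, not part of the original answer; it regenerates the same kind of synthetic data directly with torch), the same fit can be done with raw tensors: mark the parameters with `requires_grad=True`, call `backward()` on the loss, and apply the gradient update by hand:

```python
import torch

torch.manual_seed(42)

# Same kind of synthetic data as above: y = 2 + 3x + noise
x = torch.rand(100, 1)
y = 2 + 3 * x + 0.1 * torch.randn(100, 1)

# Parameters as leaf tensors tracked by autograd
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

lr = 0.01
for epoch in range(1000):
    y_pred = x * w + b                  # forward pass
    loss = ((y_pred - y) ** 2).mean()   # MSE loss
    loss.backward()                     # autograd computes dloss/dw, dloss/db
    with torch.no_grad():               # update parameters outside the graph
        w -= lr * w.grad
        b -= lr * b.grad
        w.grad.zero_()                  # clear accumulated gradients
        b.grad.zero_()

print('w =', w.item(), 'b =', b.item())
```

The explicit `grad.zero_()` calls and in-place updates here are what `optimizer.zero_grad()` and `optimizer.step()` do for you in the version above.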