loss.backward()
Posted: 2023-09-12 14:10:30 · Views: 47
The method `loss.backward()` is used in PyTorch to compute the gradients of the loss with respect to the parameters of the neural network. It triggers backpropagation: starting from the loss, the gradient of each parameter in the network is calculated via the chain rule of differentiation and accumulated into that parameter's `.grad` attribute.
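As a minimal illustration of what `backward()` computes, consider a scalar function rather than a full network (the variable names here are just for this sketch). For y = x², the chain rule gives dy/dx = 2x, so at x = 2.0 the gradient should be 4.0:

```python
import torch

# A scalar function y = x^2; dy/dx = 2x, so at x = 2.0 the gradient is 4.0
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2
y.backward()   # populates x.grad via the chain rule
print(x.grad)  # tensor(4.)
```

The same mechanism applies to a network: every tensor with `requires_grad=True` that participated in computing the loss receives a gradient in its `.grad` attribute.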
This method is typically called in the training loop of a neural network, after computing the loss for a batch of data. The gradients produced by `loss.backward()` are then used to update the parameters via an optimization algorithm such as stochastic gradient descent (SGD). Note that PyTorch accumulates gradients by default, so `optimizer.zero_grad()` is usually called before each backward pass to clear the gradients from the previous step.
Example usage:
```python
import torch
# Define a simple neural network
model = torch.nn.Sequential(
    torch.nn.Linear(10, 20),
    torch.nn.ReLU(),
    torch.nn.Linear(20, 1),
)
# Define a loss function
loss_fn = torch.nn.MSELoss()
# Generate some dummy data
x = torch.randn(32, 10)
y = torch.randn(32, 1)
# Compute the output of the neural network
y_pred = model(x)
# Compute the loss
loss = loss_fn(y_pred, y)
# Define an optimizer (e.g. SGD)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Clear any previously accumulated gradients
optimizer.zero_grad()
# Compute the gradients of the loss with respect to the parameters
loss.backward()
# Update the parameters
optimizer.step()
```
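In practice these steps repeat inside a training loop, and because gradients accumulate across calls to `backward()`, each iteration must start by clearing them. A minimal sketch, reusing the same toy model and dummy data as above (the epoch count and learning rate are arbitrary choices for illustration):

```python
import torch

# Same toy setup as the example above
model = torch.nn.Sequential(
    torch.nn.Linear(10, 20),
    torch.nn.ReLU(),
    torch.nn.Linear(20, 1),
)
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 10)
y = torch.randn(32, 1)

for epoch in range(5):
    optimizer.zero_grad()          # clear gradients from the previous iteration
    loss = loss_fn(model(x), y)    # forward pass and loss
    loss.backward()                # compute fresh gradients
    optimizer.step()               # apply the SGD update
```

Without the `zero_grad()` call, the gradients from every iteration would be summed together, and the parameter updates would quickly diverge from the intended SGD step.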