torch.no_grad()
torch.no_grad() is a context manager that disables gradient tracking for all tensor operations executed inside it. This is useful when you only want to use a trained model for inference or prediction and do not need to compute gradients or update the model's weights. Disabling gradient tracking saves memory (no computation graph is built) and speeds up the forward pass.
For example, consider the following code:
```
with torch.no_grad():
    output = model(input)
```
In this code, the no_grad() context manager disables gradient tracking for the forward pass of the model. Tensors produced inside the block have requires_grad set to False, so no computation graph is recorded and calling backward() on them would raise an error. This is exactly what you want for inference, but not for training.
Note that no_grad() only controls gradient tracking; it does not change the model's mode. You should still call model.eval() separately to put layers such as dropout and batch normalization into evaluation behavior before running inference.
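The two points above can be checked directly: tensors produced inside the context report requires_grad as False, while the same forward pass outside the context resumes tracking. A minimal sketch, using a small hypothetical nn.Linear model for illustration:

```python
import torch
import torch.nn as nn

# Small hypothetical model, just for demonstration
model = nn.Linear(4, 2)
model.eval()  # evaluation mode; separate from gradient tracking

x = torch.randn(3, 4)

with torch.no_grad():
    out_infer = model(x)

# No computation graph was built inside the context
print(out_infer.requires_grad)  # False

# Outside the context, gradient tracking resumes
out_train = model(x)
print(out_train.requires_grad)  # True
```

Because out_infer carries no graph, calling out_infer.sum().backward() would fail, whereas out_train supports backpropagation as usual.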