with torch.no_grad()
The `torch.no_grad()` context manager disables PyTorch's gradient computation. It is typically used during inference or evaluation, where gradients are not needed and computing them would only waste time and memory.
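For instance, a typical evaluation pass might look like the sketch below (the toy model and input shapes here are made up purely for illustration):
```
import torch
import torch.nn as nn

# A hypothetical toy model, just for illustration
model = nn.Linear(10, 2)
model.eval()  # switch layers such as dropout/batchnorm to eval behavior

inputs = torch.randn(4, 10)
with torch.no_grad():  # no computation graph is recorded inside this block
    outputs = model(inputs)
print(outputs.requires_grad)  # False
```
Note that `model.eval()` and `torch.no_grad()` are complementary: the former changes the behavior of certain layers, while the latter disables gradient tracking.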
When we wrap a block of code in `torch.no_grad()`, every tensor produced inside that block has `requires_grad=False`, even if its inputs were created with `requires_grad=True`. No computation graph is recorded for those operations, so gradients cannot be backpropagated through them.
Here is an example of using `torch.no_grad()`:
```
import torch

# Define a tensor with requires_grad=True
x = torch.tensor([1., 2., 3.], requires_grad=True)

# Wrap a block of code with torch.no_grad()
with torch.no_grad():
    # Operations on x are not recorded in the computation graph
    y = x * 2
    z = y.mean()
    # No grad_fn is attached to y or z
    print(y.grad_fn)  # None
    print(z.grad_fn)  # None

# Outside the block, autograd is active again, so the same
# computation now builds a graph and can be backpropagated.
# (Calling backward() on the z from inside the block would raise
# a RuntimeError, since it does not require gradients.)
z = (x * 2).mean()
z.backward()
print(x.grad)  # tensor([0.6667, 0.6667, 0.6667])
```
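`torch.no_grad()` can also be applied as a function decorator, which disables gradient tracking for the entire function body; a minimal sketch (the `predict` function is hypothetical):
```
import torch

@torch.no_grad()  # the same context manager also works as a decorator
def predict(x):
    return x * 2

x = torch.tensor([1., 2., 3.], requires_grad=True)
print(predict(x).requires_grad)  # False
```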