torch.no_grad()
Posted: 2024-05-11 21:15:20
`torch.no_grad()` is a context manager that disables gradient calculation. It is used during inference or evaluation to speed up computation and reduce memory usage. When `torch.no_grad()` is active, tensor operations still run, but PyTorch does not record them in the autograd graph, so no intermediate values are kept for backpropagation; this makes execution faster and more memory-efficient. It is commonly used with neural network models to evaluate their performance on test data or to make predictions on new data.
Example usage:
```python
with torch.no_grad():
    # Gradient tracking is disabled inside this block
    output = model(input)
    # Perform other operations that don't require gradients
```
In this example, the `model(input)` call does not build an autograd graph because it is executed within the `torch.no_grad()` context, so no gradients can be computed from its output. This is exactly what you want for inference or evaluation, where gradients are not required.
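A minimal runnable sketch of this behavior (the model and tensor shapes here are illustrative, not from the original text):

```python
import torch
import torch.nn as nn

# A tiny model for demonstration; any nn.Module behaves the same way
model = nn.Linear(4, 2)
model.eval()  # also switches layers like dropout/batchnorm to eval behavior

x = torch.randn(3, 4)

# Outside no_grad, the output tracks gradients through the model's parameters
out_train = model(x)
print(out_train.requires_grad)  # True

# Inside no_grad, no autograd graph is built
with torch.no_grad():
    out_eval = model(x)
print(out_eval.requires_grad)  # False
```

Note that `torch.no_grad()` is independent of `model.eval()`: the former controls gradient tracking, while the latter changes the behavior of layers such as dropout and batch norm; evaluation code typically uses both.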