para.requires_grad_()
`para.requires_grad_()` is a PyTorch tensor method that enables or disables gradient tracking for a tensor in place. It takes a single boolean argument, `requires_grad`, which defaults to `True`. When `requires_grad` is `True`, PyTorch records the operations performed on the tensor and computes gradients with respect to it during backpropagation. The trailing `_` follows PyTorch's naming convention for in-place operations: the method modifies the tensor itself rather than returning a new one.
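As an illustration, here is a minimal sketch of the in-place method itself (the variable name `para` is chosen to match the title and is otherwise arbitrary):
```
import torch

# requires_grad defaults to False for ordinary tensors
para = torch.ones(3)
print(para.requires_grad)   # False

# Enable gradient tracking in place
para.requires_grad_()
print(para.requires_grad)   # True

# Pass False to disable tracking again
para.requires_grad_(False)
print(para.requires_grad)   # False
```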
Alternatively, `requires_grad` can be set when the tensor is constructed. The following snippet creates a tensor `x` with `requires_grad=True`, indicating that gradients should be computed for `x`:
```
import torch
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
```
We can then perform operations on `x` and trigger backpropagation by calling `backward()` on a scalar result:
```
# Summing produces a scalar, so backward() needs no gradient argument
y = x.sum()
y.backward()
```
After calling `y.backward()`, the gradient with respect to `x` is stored in `x.grad`. Since `y` is the sum of the elements of `x`, the partial derivative of `y` with respect to each element is 1:
```
print(x.grad)
```
Output:
```
tensor([1., 1., 1.])
```
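In practice, `requires_grad_()` is often called on model parameters, for example to freeze layers during fine-tuning. A minimal sketch of that pattern (the model architecture here is hypothetical):
```
import torch.nn as nn

# A hypothetical two-layer model
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Freeze the first linear layer so its parameters receive no gradients
for para in model[0].parameters():
    para.requires_grad_(False)

# Only parameters that still require gradients need to be optimized
trainable = [p for p in model.parameters() if p.requires_grad]
```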