torch.nn.ReLU
The torch.nn.ReLU module in PyTorch implements the Rectified Linear Unit (ReLU) activation function. It is one of the most commonly used activation functions in neural networks and is defined as:
f(x) = max(0, x)
where x is the input to the function and f(x) is the output. ReLU applies a simple threshold to its input: all negative values are set to zero and positive values are passed through unchanged. This produces sparse activations, which can help reduce overfitting and improve the generalization of the model.
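To make the thresholding behavior concrete, here is a minimal sketch (the tensor values are purely illustrative) showing ReLU applied element-wise to a tensor:
```
import torch

x = torch.tensor([-2.0, -0.5, 0.0, 1.5, 3.0])
# negative entries become 0, positive entries are unchanged
print(torch.relu(x))  # tensor([0.0000, 0.0000, 0.0000, 1.5000, 3.0000])
```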
In PyTorch, ReLU is available as the nn.ReLU module, which can be added to a neural network like any other layer:
```
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 20),   # input layer: 10 features -> 20 hidden units
    nn.ReLU(),           # element-wise non-linearity
    nn.Linear(20, 1)     # output layer: 20 hidden units -> 1 output
)
```
In this example, a two-layer neural network is defined with a 10-dimensional input, a hidden layer with 20 units, and a single output unit. The ReLU activation function is applied after the first linear layer to introduce non-linearity into the model.
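As a usage sketch (the batch size and random input below are made up for illustration), a forward pass through this model looks like:
```
import torch

x = torch.randn(4, 10)   # a batch of 4 samples, each with 10 features
output = model(x)        # output shape: torch.Size([4, 1])
```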