Batch Normalization code in PyTorch
Sure! Here's an example of how to implement Batch Normalization in PyTorch:
```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(10, 20)
        self.bn1 = nn.BatchNorm1d(20)  # normalizes the 20 features across the batch
        self.fc2 = nn.Linear(20, 2)

    def forward(self, x):
        x = self.fc1(x)
        x = self.bn1(x)      # batch norm goes before the activation
        x = torch.relu(x)
        x = self.fc2(x)
        return x

# Create an instance of the network
net = Net()

# Define your input data: a batch of 5 samples with 10 features each
input_data = torch.randn(5, 10)

# Pass the input data through the network
output = net(input_data)

# Print the output
print(output)
```
In this example, we define a simple neural network with one hidden layer and one output layer. The Batch Normalization layer is applied after the first fully connected layer (`self.fc1`) and before the activation function (`torch.relu`). The `nn.BatchNorm1d` class creates the Batch Normalization layer; its argument (20) must match the number of features produced by the preceding layer.
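One detail worth remembering is that `nn.BatchNorm1d` behaves differently in training and evaluation mode: during training it normalizes with per-batch statistics and updates running estimates, while in eval mode it uses those stored running statistics (which also makes single-sample inference possible). A minimal sketch, reusing the `net` and imports defined above:
```python
# Training mode (the default): per-batch statistics are used, and the
# running_mean / running_var buffers are updated on every forward pass.
net.train()
_ = net(torch.randn(5, 10))
print(net.bn1.running_mean)  # running estimates accumulated so far

# Evaluation mode: the stored running statistics are used instead, so a
# batch of size 1 works and the output is deterministic.
net.eval()
with torch.no_grad():
    single = net(torch.randn(1, 10))
print(single)
```
Forgetting to call `net.eval()` at inference time is a common source of unstable predictions with Batch Normalization.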
Please note that this is just a basic example to demonstrate how to use Batch Normalization in PyTorch. In real-world scenarios, you may need to adjust the network architecture and hyperparameters according to your specific task.
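For example, in convolutional networks the analogous layer is `nn.BatchNorm2d`, which keeps one (mean, variance) pair per channel and normalizes over the batch and spatial dimensions. A minimal sketch following the same conv -> batch norm -> activation pattern (the layer sizes here are illustrative, not from the original example):
```python
import torch
import torch.nn as nn

class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(16)  # one (mean, var) pair per channel

    def forward(self, x):
        # Same pattern as before: conv -> batch norm -> activation
        return torch.relu(self.bn1(self.conv1(x)))

conv_net = ConvNet()
out = conv_net(torch.randn(5, 3, 32, 32))  # batch of 5 RGB 32x32 images
print(out.shape)  # torch.Size([5, 16, 32, 32])
```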