PyTorch BP Neural Network Regression
Below is example code that implements BP (backpropagation) neural network regression with PyTorch:
```python
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np

# Define the neural network model
class Net(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

# Define the training function
def train(net, X_train, y_train, num_epochs, learning_rate):
    criterion = nn.MSELoss()
    optimizer = optim.SGD(net.parameters(), lr=learning_rate)
    inputs = torch.from_numpy(X_train).float()
    targets = torch.from_numpy(y_train).float()
    for epoch in range(num_epochs):
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        if (epoch + 1) % 1000 == 0:
            print('Epoch [%d/%d], Loss: %.4f' % (epoch + 1, num_epochs, loss.item()))

# Prepare the data (XOR truth table)
X_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_train = np.array([[0], [1], [1], [0]])

# Define hyperparameters
input_size = 2
hidden_size = 2
output_size = 1
num_epochs = 10000
learning_rate = 0.1

# Create the model and train it
net = Net(input_size, hidden_size, output_size)
train(net, X_train, y_train, num_epochs, learning_rate)

# Test the model on the training inputs
with torch.no_grad():
    inputs = torch.from_numpy(X_train).float()
    outputs = net(inputs)
    print(outputs)
```
This example implements a simple BP neural network regression model and fits it to the XOR truth table. The `Net` class defines the network architecture, and the `train` function runs the training loop. `X_train` and `y_train` hold the input and target data; `input_size`, `hidden_size`, and `output_size` set the number of neurons in the input, hidden, and output layers; `num_epochs` is the number of training epochs; and `learning_rate` is the learning rate. After training, the model is run on the training inputs and the predictions are printed.
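Since the XOR data has only binary targets, it may help to see the same network applied to a genuinely continuous regression target. The sketch below is an illustrative adaptation rather than part of the original example: it fits the `Net` class defined above to noisy samples of sin(x). The hidden size, the Adam optimizer, and the number of epochs are assumptions chosen for this toy problem.
```python
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np

# Toy continuous-regression data: noisy samples of sin(x) on [-3, 3]
# (illustrative values; not from the original example)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X) + 0.1 * np.random.randn(*X.shape)

inputs = torch.from_numpy(X).float()
targets = torch.from_numpy(y).float()

# Reuse the Net class defined above; a wider hidden layer is assumed
# here so the network can represent the sine curve.
net = Net(input_size=1, hidden_size=32, output_size=1)
criterion = nn.MSELoss()
optimizer = optim.Adam(net.parameters(), lr=0.01)  # Adam is an assumption, not from the original

for epoch in range(2000):
    optimizer.zero_grad()
    loss = criterion(net(inputs), targets)
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 500 == 0:
        print('Epoch [%d/2000], Loss: %.4f' % (epoch + 1, loss.item()))

# Predict on new points without tracking gradients
with torch.no_grad():
    X_new = torch.tensor([[0.0], [1.5708]])  # sin(0) ~ 0, sin(pi/2) ~ 1
    print(net(X_new))
```
For larger real-world datasets, normalizing the inputs and iterating over mini-batches with a `DataLoader` would be typical next steps.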