BP Neural Network Regression: Three-Layer Network with ReLU, Example Code
Below is example code that uses Python and the PyTorch library to implement a three-layer BP (backpropagation) neural network for regression with ReLU activations:
```python
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
from sklearn.datasets import make_regression
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Generate a random regression dataset
X, y = make_regression(n_samples=100, n_features=2, noise=0.1, random_state=1)

# Standardize features and target (separate scalers so each keeps its own statistics)
scaler_X = StandardScaler()
scaler_y = StandardScaler()
X = scaler_X.fit_transform(X)
y = scaler_y.fit_transform(y.reshape(-1, 1)).reshape(-1)

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# Define the neural network model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(2, 10)   # input layer -> hidden layer 1
        self.relu1 = nn.ReLU()        # activation for hidden layer 1
        self.fc2 = nn.Linear(10, 5)   # hidden layer 1 -> hidden layer 2
        self.relu2 = nn.ReLU()        # activation for hidden layer 2
        self.fc3 = nn.Linear(5, 1)    # hidden layer 2 -> output layer

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu1(x)
        x = self.fc2(x)
        x = self.relu2(x)
        x = self.fc3(x)
        return x

# Initialize the model, optimizer, and loss function
net = Net()
optimizer = optim.SGD(net.parameters(), lr=0.01)
criterion = nn.MSELoss()

# Convert the data to tensors once, outside the training loop
X_train_t = torch.FloatTensor(X_train)
y_train_t = torch.FloatTensor(y_train)
X_test_t = torch.FloatTensor(X_test)
y_test_t = torch.FloatTensor(y_test)

# Train the model
for epoch in range(1000):
    optimizer.zero_grad()                          # clear gradients
    outputs = net(X_train_t)
    loss = criterion(outputs.squeeze(), y_train_t)
    loss.backward()                                # backpropagation
    optimizer.step()                               # update weights
    if (epoch + 1) % 100 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, 1000, loss.item()))

# Evaluate on the test set
with torch.no_grad():
    outputs = net(X_test_t)
    loss = criterion(outputs.squeeze(), y_test_t)
    print('Test Loss: {:.4f}'.format(loss.item()))

# Compute the R-squared score
y_pred = net(X_test_t).squeeze().detach().numpy()
r2 = 1 - np.sum(np.square(y_test - y_pred)) / np.sum(np.square(y_test - np.mean(y_test)))
print('R2 Score: {:.4f}'.format(r2))
```
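Because both the features and the target were standardized, predictions for new, unscaled inputs need to be mapped back to the original target scale. Here is a minimal sketch building on the scalers fitted above (the sample values are purely illustrative):
```python
# Predict for a new sample and map the result back to the original target scale.
new_sample = np.array([[0.5, -1.2]])                       # illustrative feature values
new_scaled = scaler_X.transform(new_sample)                # apply the same feature scaling
with torch.no_grad():
    pred_scaled = net(torch.FloatTensor(new_scaled)).numpy()
pred = scaler_y.inverse_transform(pred_scaled.reshape(-1, 1)).reshape(-1)
print('Prediction (original scale):', pred)
```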
Compared with the previous example, this version adds one more hidden layer, turning the network into a three-layer network with two hidden layers. ReLU is still used as the activation function for the hidden layers, the loss is MSE, and the model is trained with gradient descent. Testing and performance evaluation follow the same approach as before, with R² computed on the held-out test set, as shown in the check below.
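As a sanity check on the manually computed R² above, you can compare it against scikit-learn's `r2_score`, which implements the same formula; a short sketch:
```python
from sklearn.metrics import r2_score

# Should match the manually computed r2 up to floating-point precision
print('R2 (sklearn): {:.4f}'.format(r2_score(y_test, y_pred)))
```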