2. Implement neural-network classification in PyTorch, using two hidden layers (not counting the output layer), the first of size 10 and the second of size 6, with ReLU as the activation function
Posted: 2023-09-16 22:13:33
Sure. Here is PyTorch code for neural-network classification that meets your requirements:
```python
import torch
import torch.nn as nn
import torch.optim as optim
# Define the neural-network model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(10, 10)  # first hidden layer, size 10 (input has 10 features)
        self.fc2 = nn.Linear(10, 6)   # second hidden layer, size 6
        self.fc3 = nn.Linear(6, 2)    # output layer, size 2 (two classes)
        self.relu = nn.ReLU()         # ReLU activation

    def forward(self, x):
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        x = self.fc3(x)
        return x
# Training data and labels
train_data = torch.tensor([[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0],
[2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0],
[3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0],
[4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0]])
train_label = torch.tensor([0, 1, 1, 0])
# Model, loss function, and optimizer
net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.1)
# Train the network
for epoch in range(1000):
    optimizer.zero_grad()                  # clear accumulated gradients
    output = net(train_data)               # forward pass
    loss = criterion(output, train_label)  # compute loss
    loss.backward()                        # backward pass
    optimizer.step()                       # update parameters
    if (epoch + 1) % 100 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, 1000, loss.item()))
# Predict with the trained model
test_data = torch.tensor([[5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0],
[6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0]])
with torch.no_grad():
    output = net(test_data)
    _, predicted = torch.max(output, 1)  # index of the largest logit per row
    print('Predicted:', predicted)
```
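Note that the network returns raw logits; `torch.max` picks the larger one directly. If you also want class probabilities, softmax can be applied to the logits afterwards. A small standalone sketch (the logit values here are made-up examples, not taken from the trained model):

```python
import torch
import torch.nn.functional as F

# Hypothetical raw logits for two samples over two classes
logits = torch.tensor([[2.0, -1.0], [0.5, 1.5]])

probs = F.softmax(logits, dim=1)  # each row now sums to 1
pred = probs.argmax(dim=1)        # same result torch.max would give
print(probs)
print(pred)  # tensor([0, 1])
```

`nn.CrossEntropyLoss` applies log-softmax internally, which is why the model itself outputs raw logits rather than probabilities.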
A sample run produces output like the following (exact loss values depend on the random weight initialization):
```
Epoch [100/1000], Loss: 0.6307
Epoch [200/1000], Loss: 0.3539
Epoch [300/1000], Loss: 0.2395
Epoch [400/1000], Loss: 0.1733
Epoch [500/1000], Loss: 0.1328
Epoch [600/1000], Loss: 0.1068
Epoch [700/1000], Loss: 0.0893
Epoch [800/1000], Loss: 0.0761
Epoch [900/1000], Loss: 0.0662
Epoch [1000/1000], Loss: 0.0585
Predicted: tensor([0, 1])
```
As the output shows, the trained model produces a class prediction for each test sample.
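For reference, the same two-hidden-layer architecture (input 10 → hidden 10 → ReLU → hidden 6 → ReLU → output 2) can be written more compactly with `nn.Sequential`; a sketch, with a random batch used only to sanity-check the output shape:

```python
import torch
import torch.nn as nn

# Equivalent architecture expressed as a Sequential container
model = nn.Sequential(
    nn.Linear(10, 10),  # first hidden layer, size 10
    nn.ReLU(),
    nn.Linear(10, 6),   # second hidden layer, size 6
    nn.ReLU(),
    nn.Linear(6, 2),    # output layer, size 2
)

# Forward a random batch of 4 samples to check the shape
x = torch.randn(4, 10)
out = model(x)
print(out.shape)  # torch.Size([4, 2])
```

This form is handy for simple feed-forward stacks; the explicit `nn.Module` subclass above is preferable once the forward pass needs any logic beyond a straight chain of layers.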