Build a deep neural network with 3 fully connected layers and use it to predict whether a sample is positive or negative (implemented in PyTorch)
Sure. Below is PyTorch code that builds a deep neural network with three fully connected layers and uses it to predict whether a given sample is positive or negative:
```python
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

# Input and layer dimensions
input_dim = 100
hidden1_dim = 50
hidden2_dim = 25
output_dim = 1   # a single logit for binary classification
# Define the network: three fully connected layers with ReLU activations
class NeuralNet(nn.Module):
    def __init__(self):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden1_dim)
        self.fc2 = nn.Linear(hidden1_dim, hidden2_dim)
        self.fc3 = nn.Linear(hidden2_dim, output_dim)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)   # raw logit; BCEWithLogitsLoss applies the sigmoid internally
        return x
# Load the dataset (load_data() must be supplied by the caller and return
# NumPy arrays: train_data, train_labels, test_data, test_labels)
train_data, train_labels, test_data, test_labels = load_data()

# Instantiate the model, loss function, and optimizer
model = NeuralNet()
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Train the model
num_iterations = 1000
batch_size = 32
for i in range(num_iterations):
    # Sample a random mini-batch
    batch_indices = np.random.choice(len(train_data), batch_size, replace=False)
    batch_inputs = torch.from_numpy(train_data[batch_indices]).float()
    batch_labels = torch.from_numpy(train_labels[batch_indices]).float()

    # Forward pass, loss computation, backward pass, and parameter update
    optimizer.zero_grad()
    outputs = model(batch_inputs)                       # shape: (batch_size, 1)
    loss = criterion(outputs.squeeze(1), batch_labels)  # squeeze to match label shape (batch_size,)
    loss.backward()
    optimizer.step()

    # Report the loss every 100 iterations
    if (i + 1) % 100 == 0:
        print('Iteration {}, loss: {:.4f}'.format(i + 1, loss.item()))
# Evaluate the model on the test set
test_inputs = torch.from_numpy(test_data).float()
test_labels = torch.from_numpy(test_labels).float()
with torch.no_grad():
    logits = model(test_inputs).squeeze(1)          # shape: (num_test,)
    predicted_labels = torch.round(torch.sigmoid(logits))
    accuracy = (predicted_labels == test_labels).float().mean().item()
print('Accuracy: {:.4f}'.format(accuracy))
```
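The script calls `load_data()` without defining it. Below is a minimal sketch of such a helper, assuming a synthetic, linearly separable dataset generated with NumPy; the function name and signature are just placeholders matching the call above, and in practice you would replace it with your own loading code:
```python
import numpy as np

def load_data(n_train=800, n_test=200, dim=100, seed=0):
    """Illustrative stand-in for a real data loader: draws Gaussian
    features and labels each sample by the sign of a random linear rule."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=dim)                              # hidden ground-truth weights
    X = rng.normal(size=(n_train + n_test, dim)).astype(np.float32)
    y = (X @ w > 0).astype(np.float32)                    # 1 = positive, 0 = negative
    return X[:n_train], y[:n_train], X[n_train:], y[n_train:]
```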
That is the complete PyTorch implementation of a three-layer fully connected network for predicting whether a sample is positive or negative. In the code, `load_data()` is assumed to return the training and test sets as NumPy arrays, `nn.Linear()` defines each fully connected layer, `nn.BCEWithLogitsLoss()` combines a sigmoid with binary cross-entropy (so the model outputs raw logits), `optim.Adam()` creates the optimizer, `torch.sigmoid()` converts logits to probabilities, `torch.round()` thresholds those probabilities at 0.5 into hard 0/1 labels, and `(predicted_labels == test_labels).float().mean().item()` computes the accuracy on the test set.
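As a quick usage check after training, a single new sample can be scored the same way; the input below is random and purely illustrative:
```python
# Score one new sample: sigmoid turns the logit into P(positive),
# and thresholding at 0.5 gives the hard class label.
new_sample = torch.randn(1, input_dim)
with torch.no_grad():
    prob = torch.sigmoid(model(new_sample)).item()
print('P(positive) = {:.4f} -> {}'.format(prob, 'positive' if prob >= 0.5 else 'negative'))
```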