Design a five-layer fully connected neural network (named model6) to classify the MNIST dataset. Requirements: 1) batch_size = 32, learning_rate = 1e-2, epochs = 100, and the hidden layers have 400, 300, 200, and 100 neurons in turn. 2) The hidden layers include ReLU() and batch normalization.
Sure, here is a code implementation:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor

# Define the model: five fully connected layers, with BatchNorm + ReLU in the hidden layers
class Model6(nn.Module):
    def __init__(self):
        super(Model6, self).__init__()
        self.fc1 = nn.Linear(784, 400)
        self.bn1 = nn.BatchNorm1d(400)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(400, 300)
        self.bn2 = nn.BatchNorm1d(300)
        self.relu2 = nn.ReLU()
        self.fc3 = nn.Linear(300, 200)
        self.bn3 = nn.BatchNorm1d(200)
        self.relu3 = nn.ReLU()
        self.fc4 = nn.Linear(200, 100)
        self.bn4 = nn.BatchNorm1d(100)
        self.relu4 = nn.ReLU()
        self.fc5 = nn.Linear(100, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)              # flatten 28x28 images into 784-dim vectors
        x = self.relu1(self.bn1(self.fc1(x)))
        x = self.relu2(self.bn2(self.fc2(x)))
        x = self.relu3(self.bn3(self.fc3(x)))
        x = self.relu4(self.bn4(self.fc4(x)))
        x = self.fc5(x)                        # output layer: raw logits for 10 classes
        return x

# Hyperparameters
batch_size = 32
learning_rate = 1e-2
epochs = 100

# Load the dataset and split it into training and validation sets
dataset = MNIST(root='data/', train=True, transform=ToTensor(), download=True)
train_ds, val_ds = random_split(dataset, [50000, 10000])
train_loader = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(val_ds, batch_size=batch_size)

# Initialize the model, loss function, and optimizer
model = Model6()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate)

# Train the model
for epoch in range(epochs):
    model.train()                              # BatchNorm uses batch statistics during training
    train_loss = 0.0
    train_correct = 0
    for images, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * images.size(0)
        _, predicted = torch.max(outputs, 1)
        train_correct += (predicted == labels).sum().item()
    train_loss /= len(train_loader.dataset)
    train_accuracy = 100.0 * train_correct / len(train_loader.dataset)

    # Evaluate the model on the validation set
    model.eval()                               # BatchNorm uses running statistics during evaluation
    val_loss = 0.0
    val_correct = 0
    with torch.no_grad():
        for images, labels in val_loader:
            outputs = model(images)
            loss = criterion(outputs, labels)
            val_loss += loss.item() * images.size(0)
            _, predicted = torch.max(outputs, 1)
            val_correct += (predicted == labels).sum().item()
    val_loss /= len(val_loader.dataset)
    val_accuracy = 100.0 * val_correct / len(val_loader.dataset)

    # Print training and validation loss and accuracy
    print('Epoch [{}/{}], Train Loss: {:.4f}, Train Accuracy: {:.2f}%, Val Loss: {:.4f}, Val Accuracy: {:.2f}%'
          .format(epoch+1, epochs, train_loss, train_accuracy, val_loss, val_accuracy))
```
This model has five fully connected layers: the four hidden layers each apply batch normalization followed by a ReLU activation, and the final layer is the output layer with no activation. Training uses a stochastic gradient descent optimizer with the cross-entropy loss. At the end of each epoch the model is evaluated on the validation set, and the training and validation loss and accuracy are printed.
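If you also want a final score on the held-out MNIST test split (not used above), a minimal sketch along the same lines might look like this; it assumes the trained `model`, plus `batch_size`, `criterion`, `MNIST`, `DataLoader`, and `ToTensor` from the script above, are still in scope:

```python
# Sketch: evaluate the trained model on the official MNIST test split.
# Assumes `model`, `criterion`, and `batch_size` from the training script above.
test_ds = MNIST(root='data/', train=False, transform=ToTensor(), download=True)
test_loader = DataLoader(test_ds, batch_size=batch_size)

model.eval()                       # BatchNorm switches to running statistics
test_loss = 0.0
test_correct = 0
with torch.no_grad():
    for images, labels in test_loader:
        outputs = model(images)
        test_loss += criterion(outputs, labels).item() * images.size(0)
        _, predicted = torch.max(outputs, 1)
        test_correct += (predicted == labels).sum().item()

test_loss /= len(test_ds)
test_accuracy = 100.0 * test_correct / len(test_ds)
print('Test Loss: {:.4f}, Test Accuracy: {:.2f}%'.format(test_loss, test_accuracy))
```

Keeping the test split out of the training loop (and only reporting it once at the end) avoids tuning hyperparameters against it; the validation split carved out by `random_split` is what you use for that.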