Using the PyTorch framework and the MNIST dataset, select a typical convolutional model and build a LeNet convolutional model; using classification accuracy and the confusion matrix as evaluation metrics, analyze the classification accuracy of the two models.
Sure, I understand your question. LeNet is one of the earliest convolutional neural network architectures and is well suited to computer-vision tasks such as handwritten-character recognition. Building on that, we can train a LeNet model on the MNIST dataset and compare it with another convolutional model implemented in PyTorch.
First, we import the necessary libraries and load the data. Here we use the MNIST dataset bundled with torchvision.
```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as datasets
import torchvision.transforms as transforms
# Define the training and test datasets
train_dataset = datasets.MNIST(root='data/', train=True, transform=transforms.ToTensor(), download=True)
test_dataset = datasets.MNIST(root='data/', train=False, transform=transforms.ToTensor(), download=True)
# Define data loaders for the training and test datasets
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=64, shuffle=False)
```
Next, we define the LeNet model. It consists of two convolutional layers and three fully connected layers. The convolutional layers use 5x5 kernels followed by ReLU activations and 2x2 max pooling; the fully connected layers use ReLU activations, with Dropout applied to reduce overfitting. For a 28x28 MNIST input, the feature map shrinks from 28x28 to 24x24 after the first convolution, to 12x12 after pooling, to 8x8 after the second convolution, and to 4x4 after the second pooling, which is why the first fully connected layer takes 16*4*4 = 256 inputs.
```python
class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        # Two convolutional layers with 5x5 kernels
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
        # Three fully connected layers; 16*4*4 matches the flattened feature-map size
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.dropout = nn.Dropout()

    def forward(self, x):
        x = self.conv1(x)            # 1x28x28 -> 6x24x24
        x = self.relu(x)
        x = self.pool(x)             # 6x24x24 -> 6x12x12
        x = self.conv2(x)            # 6x12x12 -> 16x8x8
        x = self.relu(x)
        x = self.pool(x)             # 16x8x8 -> 16x4x4
        x = x.view(-1, 16 * 4 * 4)   # flatten to a 256-dim vector
        x = self.dropout(x)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.dropout(x)
        x = self.fc2(x)
        x = self.relu(x)
        x = self.fc3(x)              # raw logits for the 10 digit classes
        return x
```
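The question and the note above ask for a comparison with a second convolutional model, but only LeNet is defined here. Below is a minimal sketch of one possible comparison model, assuming a slightly deeper network with 3x3 kernels; the name `SimpleCNN` and its exact layer sizes are illustrative assumptions, not part of the original answer.
```python
class SimpleCNN(nn.Module):
    """A hypothetical deeper baseline with 3x3 kernels, for comparison against LeNet."""
    def __init__(self):
        super(SimpleCNN, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x28x28 -> 16x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x28x28 -> 16x14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 16x14x14 -> 32x14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x14x14 -> 32x7x7
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # 32x7x7 -> 64x7x7
            nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                 # flatten to a 64*7*7 = 3136-dim vector
            nn.Linear(64 * 7 * 7, 128),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(128, 10),           # logits for the 10 digit classes
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```
To run the comparison, instantiate `SimpleCNN()` in place of `LeNet()` in the main function below and reuse the same training and test functions, then compare the two models' accuracy figures and confusion matrices.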
Next, we define the training and evaluation functions. The training function optimizes the model with the cross-entropy loss and the Adam optimizer; the test function computes the classification accuracy and the confusion matrix.
```python
def train(model, loader, criterion, optimizer, device):
    """Run one training epoch and return the average loss and accuracy."""
    model.train()
    train_loss = 0
    train_correct = 0
    for data, target in loader:
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * data.size(0)
        _, pred = torch.max(output, 1)
        train_correct += (pred == target).sum().item()
    train_loss /= len(loader.dataset)
    train_acc = train_correct / len(loader.dataset)
    return train_loss, train_acc


def test(model, loader, criterion, device):
    """Evaluate the model and return the loss, accuracy, and 10x10 confusion matrix."""
    model.eval()
    test_loss = 0
    test_correct = 0
    confusion_matrix = torch.zeros(10, 10)
    with torch.no_grad():
        for data, target in loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += criterion(output, target).item() * data.size(0)
            _, pred = torch.max(output, 1)
            test_correct += (pred == target).sum().item()
            # Accumulate the confusion matrix: rows are true labels, columns are predictions
            for t, p in zip(target.view(-1).cpu(), pred.view(-1).cpu()):
                confusion_matrix[t.long(), p.long()] += 1
    test_loss /= len(loader.dataset)
    test_acc = test_correct / len(loader.dataset)
    return test_loss, test_acc, confusion_matrix
```
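As a brief follow-up, per-class accuracy (recall) can be read directly off the confusion matrix returned by `test()`: the diagonal holds the correctly classified samples and the row sums hold the true counts per digit. A minimal sketch; the helper name `per_class_accuracy` is an assumption for illustration.
```python
def per_class_accuracy(confusion_matrix):
    """Return a 10-element tensor with the recall for each digit class."""
    correct = confusion_matrix.diag()       # correctly classified samples per digit
    totals = confusion_matrix.sum(dim=1)    # total true samples per digit
    return correct / totals.clamp(min=1)    # clamp avoids division by zero
```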
Finally, we define a main function to train and evaluate the model.
```python
def main():
    # Use the GPU when available
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = LeNet().to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=0.001)
    for epoch in range(10):
        train_loss, train_acc = train(model, train_loader, criterion, optimizer, device)
        test_loss, test_acc, confusion_matrix = test(model, test_loader, criterion, device)
        print('Epoch {}, Train Loss: {:.4f}, Train Acc: {:.4f}, Test Loss: {:.4f}, Test Acc: {:.4f}'.format(
            epoch + 1, train_loss, train_acc, test_loss, test_acc))
    # Report the confusion matrix from the final epoch
    print(confusion_matrix)
```
Running the main function trains and evaluates the LeNet model and reports the classification accuracy and the confusion matrix.
```python
if __name__ == '__main__':
    main()
```
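The confusion matrix printed above can also be visualized as a heatmap, which makes the commonly confused digit pairs easier to spot. A minimal sketch, assuming matplotlib is installed; the helper name `plot_confusion_matrix` is an assumption for illustration.
```python
import matplotlib.pyplot as plt

def plot_confusion_matrix(cm):
    """Plot a row-normalized confusion matrix as a heatmap (rows = true labels)."""
    cm_norm = cm / cm.sum(dim=1, keepdim=True).clamp(min=1)  # per-class recall on the diagonal
    fig, ax = plt.subplots(figsize=(6, 6))
    im = ax.imshow(cm_norm.numpy(), cmap='Blues')
    ax.set_xticks(range(10))
    ax.set_yticks(range(10))
    ax.set_xlabel('Predicted label')
    ax.set_ylabel('True label')
    fig.colorbar(im)
    plt.show()
```
Calling `plot_confusion_matrix(confusion_matrix)` after the training loop in `main()` produces a row-normalized heatmap whose diagonal corresponds to per-class recall.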
References:
1. LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
2. PyTorch documentation: https://pytorch.org/docs/stable/index.html