MNIST Handwritten Digit Recognition with PyTorch: Implementing a CNN (Convolutional Neural Network) Classifier
MNIST handwritten digit recognition with PyTorch typically involves the following steps:
1. **Import libraries**:
First, import the required libraries, such as `torch`, `torch.nn` (for building the network), and `torchvision` (for loading the image dataset).
```python
import torch                                   # core PyTorch
import torch.nn as nn                          # layers and loss functions (used in steps 4-5)
from torchvision import datasets, transforms   # MNIST dataset and preprocessing transforms
```
2. **Data preprocessing**:
Apply the appropriate transforms to the MNIST input images: convert them to tensors and normalize the pixel values (MNIST images are already single-channel grayscale).
```python
transform = transforms.Compose([
    transforms.ToTensor(),                 # PIL image -> float tensor in [0, 1], shape (1, 28, 28)
    transforms.Normalize((0.5,), (0.5,))   # scale pixel values to roughly [-1, 1]
])
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(root='./data', train=False, download=True, transform=transform)
```
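As a quick sanity check (illustrative only, not part of the original walkthrough), you can inspect the dataset sizes and the shape and value range of one preprocessed sample:
```python
# Illustrative check: dataset sizes and the shape/range of one preprocessed sample.
print(len(train_dataset), len(test_dataset))   # 60000 training images, 10000 test images

image, label = train_dataset[0]
print(image.shape)                             # torch.Size([1, 28, 28])
print(image.min().item(), image.max().item())  # roughly -1.0 to 1.0 after normalization
print(label)                                   # the digit label as a Python int (0-9)
```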
3. **Create data loaders**:
Use `DataLoader` to split the data into mini-batches for training.
```python
batch_size = 64
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
```
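For illustration (again, an addition rather than part of the original steps), pulling one batch from the loader confirms the tensor shapes the model will receive:
```python
# Illustrative check of the shapes produced by the DataLoader.
images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([64, 1, 28, 28]) -> (batch, channels, height, width)
print(labels.shape)  # torch.Size([64])
```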
4. **Build the CNN model**:
Use PyTorch to build a simple convolutional neural network consisting of a convolutional layer, a pooling layer, fully connected layers, and activation functions.
```python
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1)  # 1x28x28 -> 16x28x28
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)                  # 16x28x28 -> 16x14x14
        self.fc1 = nn.Linear(16 * 14 * 14, 128)                            # flattened feature maps -> 128
        self.fc2 = nn.Linear(128, 10)                                      # output layer: 10 classes for digits 0-9

    def forward(self, x):
        x = self.pool(self.relu(self.conv1(x)))
        x = x.view(x.size(0), -1)   # flatten to (batch_size, 16 * 14 * 14) for the fully connected layers
        x = self.relu(self.fc1(x))
        x = self.fc2(x)             # raw logits; CrossEntropyLoss applies softmax internally
        return x

model = Net()
```
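A minimal sanity check (not in the original text) is to pass a random dummy batch through the untrained model and confirm that the output shape is (batch_size, 10):
```python
# Illustrative shape check with a random dummy batch.
dummy = torch.randn(4, 1, 28, 28)  # 4 fake grayscale 28x28 images
print(model(dummy).shape)          # expected: torch.Size([4, 10])
```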
5. **Train the model**:
Define a loss function (such as cross-entropy) and an optimizer (such as SGD or Adam), then iteratively update the model parameters in a training loop.
```python
criterion = nn.CrossEntropyLoss()                           # cross-entropy loss for multi-class classification
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)   # plain SGD; torch.optim.Adam is a common alternative

num_epochs = 10
model.train()
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        optimizer.zero_grad()                # clear gradients from the previous step
        outputs = model(images)              # forward pass
        loss = criterion(outputs, labels)    # loss for this batch
        loss.backward()                      # backpropagate
        optimizer.step()                     # update model parameters
```
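The loop above runs on the CPU. As an optional, illustrative variant (assuming a CUDA-capable GPU; this is an addition, not part of the original steps), the same training can be run on a GPU by placing the model and each batch on a `torch.device`; note that the model is recreated on the target device before the optimizer is built:
```python
# Optional sketch: the same training loop on GPU when one is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Net().to(device)                                    # fresh model on the chosen device
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

for epoch in range(num_epochs):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)  # move each batch to the same device
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
```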
6. **Evaluate the model**:
Test the model's performance on the test set and compute its accuracy.
```python
model.eval()                                 # switch to evaluation mode
correct_count = 0
all_count = len(test_loader.dataset)         # 10000 test images
with torch.no_grad():                        # no gradients needed during evaluation
    for images, labels in test_loader:
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)            # index of the highest logit = predicted digit
        correct_count += (predicted == labels).sum().item()

accuracy = correct_count / all_count
print(f"Accuracy on test set: {accuracy * 100:.2f}%")
```
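Once the accuracy looks reasonable, a common follow-up (sketched here as an addition; `mnist_cnn.pth` is just an arbitrary file name) is to save the trained weights and reload them later for inference:
```python
# Optional sketch: persist and restore the trained weights.
torch.save(model.state_dict(), "mnist_cnn.pth")   # save only the parameters

restored = Net()
restored.load_state_dict(torch.load("mnist_cnn.pth"))
restored.eval()                                   # evaluation mode for inference
```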