Handwritten Digit Recognition on MNIST with PyTorch
The following is a code example that implements handwritten digit recognition on MNIST with PyTorch.
First, import the necessary libraries and modules:
```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
```
Next, define the model:
```python
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3)
        self.dropout1 = nn.Dropout2d(0.25)   # channel-wise dropout on the conv feature maps
        self.dropout2 = nn.Dropout(0.5)      # plain dropout on the flattened features
        self.fc1 = nn.Linear(9216, 128)      # 64 channels * 12 * 12 after the conv/pool stack
        self.fc2 = nn.Linear(128, 10)        # 10 digit classes

    def forward(self, x):
        x = nn.functional.relu(self.conv1(x))
        x = nn.functional.relu(self.conv2(x))
        x = nn.functional.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = nn.functional.relu(self.fc1(x))
        x = self.dropout2(x)
        # Return raw logits: nn.CrossEntropyLoss (used below) applies log-softmax itself,
        # so applying log_softmax here as well would do it twice.
        return self.fc2(x)
```
The model uses two convolutional layers, one max-pooling layer, two fully connected layers, and dropout for regularization. The forward pass returns raw logits, which is exactly what the nn.CrossEntropyLoss used below expects.
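If you are wondering where the 9216 in fc1 comes from, a quick sketch (assuming the standard 28x28 MNIST input) is to push a dummy batch through the convolutional part and inspect the shape: 28 -> 26 -> 24 after the two 3x3 convolutions, then 12 after the 2x2 max-pool, and 64 * 12 * 12 = 9216.
```python
# Sketch: verify the flattened feature size for a 28x28 MNIST input.
net = Net()
dummy = torch.zeros(1, 1, 28, 28)             # one fake grayscale image
with torch.no_grad():
    x = nn.functional.relu(net.conv1(dummy))  # -> (1, 32, 26, 26)
    x = nn.functional.relu(net.conv2(x))      # -> (1, 64, 24, 24)
    x = nn.functional.max_pool2d(x, 2)        # -> (1, 64, 12, 12)
print(torch.flatten(x, 1).shape)              # torch.Size([1, 9216])
```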
Next, prepare the dataset:
```python
transform = transforms.Compose([
    transforms.ToTensor(),                        # convert PIL images to [0, 1] float tensors
    transforms.Normalize((0.1307,), (0.3081,)),   # MNIST mean and std (single channel)
])
train_dataset = MNIST(root='./data', train=True, download=True, transform=transform)
test_dataset = MNIST(root='./data', train=False, download=True, transform=transform)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)
```
Here we use the MNIST dataset, normalize the images with the MNIST mean and standard deviation, and batch the data with DataLoader.
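The constants 0.1307 and 0.3081 are the commonly quoted mean and standard deviation of the MNIST training pixels. If you would rather compute them than hard-code them, a rough sketch (loading the training set once without normalization) looks like this:
```python
# Sketch: estimate the MNIST pixel mean/std instead of hard-coding 0.1307 / 0.3081.
raw_train = MNIST(root='./data', train=True, download=True,
                  transform=transforms.ToTensor())
pixels = torch.stack([img for img, _ in raw_train])   # shape: (60000, 1, 28, 28)
print(pixels.mean().item(), pixels.std().item())      # roughly 0.1307 and 0.3081
```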
Finally, define the training and evaluation loop:
```python
model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

def train(model, train_loader, criterion, optimizer):
    model.train()                       # enable dropout
    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()

def test(model, test_loader, criterion):
    model.eval()                        # disable dropout
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            output = model(data)
            # criterion returns the batch mean, so scale by the batch size to
            # accumulate a sum that can be averaged over the whole test set
            test_loss += criterion(output, target).item() * data.size(0)
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    accuracy = correct / len(test_loader.dataset)
    print('Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)'.format(
        test_loss, correct, len(test_loader.dataset), 100. * accuracy))

for epoch in range(1, 11):
    train(model, train_loader, criterion, optimizer)
    test(model, test_loader, criterion)
```
Here we train and evaluate the model with the Adam optimizer and the cross-entropy loss. During training, model.train() puts the model in training mode; during evaluation, model.eval() switches it to evaluation mode, which disables the dropout layers so that the test results are deterministic and use the full network.
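If you want to see this switch in action, a small sketch with a fixed dummy input shows that dropout makes two forward passes differ in training mode but not in evaluation mode:
```python
# Sketch: dropout is stochastic in train mode and a no-op in eval mode.
x = torch.randn(1, 1, 28, 28)

model.train()
print(torch.allclose(model(x), model(x)))   # usually False: different dropout masks per call

model.eval()
print(torch.allclose(model(x), model(x)))   # True: dropout is disabled
```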
Finally, assuming the code above is saved as mnist.py, run it from the command line:
```bash
python mnist.py
```
After training, the program prints output similar to the following (exact numbers vary from run to run):
```
Test set: Average loss: 0.0009, Accuracy: 9869/10000 (99%)
```
This shows that the model reaches a high accuracy on handwritten digit recognition.
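With the trained model in hand, predicting a single image is straightforward. A minimal sketch using the first image of the test set (the file name mnist_cnn.pt in the optional save step is just a placeholder):
```python
# Sketch: classify one test image with the trained model.
model.eval()
image, label = test_dataset[0]                 # image shape: (1, 28, 28)
with torch.no_grad():
    logits = model(image.unsqueeze(0))         # add a batch dimension -> (1, 1, 28, 28)
    prediction = logits.argmax(dim=1).item()
print(f'predicted: {prediction}, actual: {label}')

# Optionally save the weights so they can be reloaded later with load_state_dict.
torch.save(model.state_dict(), 'mnist_cnn.pt')
```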