Implementing handwritten digit recognition with a CNN, and designing the CNN network parameters in code
Sure. Below is example code for handwritten digit recognition with a CNN. It uses the PyTorch framework, so you need to install PyTorch and torchvision first (e.g. via pip install torch torchvision).
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
import torchvision.datasets as datasets

# Define the network structure
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        # Two convolutional layers: 1 -> 16 -> 32 channels, 5x5 kernels
        self.conv1 = nn.Conv2d(1, 16, kernel_size=5)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5)
        # After two conv + 2x2 max-pool stages, a 28x28 input becomes 32 x 4 x 4 = 512 features
        self.fc1 = nn.Linear(512, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2(x), 2))
        x = x.view(-1, 512)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

# Load the MNIST dataset
train_dataset = datasets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True)
test_dataset = datasets.MNIST(root='./data', train=False, transform=transforms.ToTensor(), download=True)

# Training hyperparameters
batch_size = 64
epochs = 10
learning_rate = 0.01

train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

# Define the loss function and optimizer.
# The model outputs log-probabilities, so NLLLoss is the matching criterion
# (equivalent to cross-entropy applied to raw logits).
model = CNN()
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate)

# Training loop
for epoch in range(1, epochs + 1):
    model.train()
    train_loss = 0.0
    train_correct = 0
    for data, target in train_loader:
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
        pred = output.argmax(dim=1, keepdim=True)
        train_correct += pred.eq(target.view_as(pred)).sum().item()
    print('Epoch [{}/{}], Train Loss: {:.4f}, Train Accuracy: {:.2f}%'.format(
        epoch, epochs, train_loss / len(train_loader), 100.0 * train_correct / len(train_dataset)))

# Evaluate the model on the test set
model.eval()
test_loss = 0.0
test_correct = 0
with torch.no_grad():
    for data, target in test_loader:
        output = model(data)
        test_loss += criterion(output, target).item()
        pred = output.argmax(dim=1, keepdim=True)
        test_correct += pred.eq(target.view_as(pred)).sum().item()
print('Test Loss: {:.4f}, Test Accuracy: {:.2f}%'.format(
    test_loss / len(test_loader), 100.0 * test_correct / len(test_dataset)))
```
The code above implements a simple CNN with two convolutional layers and two fully connected layers. You can change the parameters of these layers in `__init__`, for example the kernel size, the number of filters, or the sizes of the fully connected layers. In `forward`, ReLU is used as the activation function, max pooling for downsampling, and log-softmax as the output layer. If you change the convolutional layers, remember that the input size of `fc1` (512 for a 28x28 input with this architecture) changes as well; one way to handle that is sketched below.
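As a minimal sketch of designing the network parameters in code, the constructor arguments below (`channels`, `kernel_size`, `hidden`, `num_classes`) are hypothetical names not taken from the original code; the flattened feature size is inferred from a dummy 28x28 input so you can change kernel sizes or channel counts without recomputing it by hand:
```
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfigurableCNN(nn.Module):
    # Hypothetical parameterized variant: channel counts, kernel size and
    # hidden-layer width are passed in instead of being hard-coded.
    def __init__(self, channels=(16, 32), kernel_size=5, hidden=128, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, channels[0], kernel_size=kernel_size)
        self.conv2 = nn.Conv2d(channels[0], channels[1], kernel_size=kernel_size)
        # Infer the flattened feature size by running a dummy 28x28 input
        # through the convolutional part once.
        with torch.no_grad():
            flat = self._features(torch.zeros(1, 1, 28, 28)).shape[1]
        self.fc1 = nn.Linear(flat, hidden)
        self.fc2 = nn.Linear(hidden, num_classes)

    def _features(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2(x), 2))
        return x.view(x.size(0), -1)

    def forward(self, x):
        x = F.relu(self.fc1(self._features(x)))
        return F.log_softmax(self.fc2(x), dim=1)

# Example: a wider network with 3x3 kernels
model = ConfigurableCNN(channels=(32, 64), kernel_size=3, hidden=256)
```
A class like this can be dropped into the training script above in place of `CNN`; only the construction line needs to change.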
Training and testing use the MNIST dataset. During training, the SGD optimizer is used together with NLLLoss, which, applied to the model's log-softmax output, is equivalent to the cross-entropy loss. At the end of each epoch, the training loss and accuracy are printed, and the same metrics are then computed on the test set.
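If you would rather use `nn.CrossEntropyLoss`, as many tutorials do, simply return the raw logits from `forward` instead of applying log-softmax; the two formulations give the same loss value. A quick sanity check of that equivalence, using random tensors purely for illustration:
```
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)           # fake batch of 4 samples, 10 classes
target = torch.randint(0, 10, (4,))   # fake integer labels

ce = F.cross_entropy(logits, target)                     # cross-entropy on raw logits
nll = F.nll_loss(F.log_softmax(logits, dim=1), target)   # NLLLoss on log-probabilities
print(torch.allclose(ce, nll))  # True: the two losses are numerically identical
```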