Detailed code for character classification with a PyTorch convolutional network
Date: 2023-06-12
Below is a detailed code example of character classification (on MNIST digits) with a convolutional network in PyTorch:
```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
import torchvision.datasets as datasets

# Define the convolutional neural network
class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.drop_out = nn.Dropout()
        self.fc1 = nn.Linear(7 * 7 * 32, 1000)
        self.fc2 = nn.Linear(1000, 10)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)  # flatten to (batch, 7*7*32)
        out = self.drop_out(out)
        out = self.fc1(out)
        out = self.fc2(out)
        return out

# Helper: classification accuracy over a data loader
def evaluate(model, loader):
    model.eval()  # disable Dropout and use BatchNorm running statistics
    correct = 0
    total = 0
    with torch.no_grad():  # no gradients needed for evaluation
        for images, labels in loader:
            outputs = model(images)
            _, predicted = torch.max(outputs, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    model.train()  # restore training mode
    return 100 * correct / total

# Load the training and test datasets
train_dataset = datasets.MNIST(root='./data', train=True,
                               transform=transforms.ToTensor(), download=True)
test_dataset = datasets.MNIST(root='./data', train=False,
                              transform=transforms.ToTensor())

# Derive the number of epochs from a target iteration budget
batch_size = 100
num_iters = 3000
num_epochs = int(num_iters / (len(train_dataset) / batch_size))

# Data loaders
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size, shuffle=False)

# Instantiate the model and loss function
model = ConvNet()
criterion = nn.CrossEntropyLoss()

# Optimizer
learning_rate = 0.1
optimizer = optim.SGD(model.parameters(), lr=learning_rate)

# Train the model
iteration = 0
model.train()
for epoch in range(num_epochs):
    for images, labels in train_loader:
        # Forward pass and loss
        outputs = model(images)
        loss = criterion(outputs, labels)
        # Backward pass and parameter update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        iteration += 1
        # Log every 100 iterations (note: scoring the full training set each
        # time is slow; widen the interval or subsample if it matters)
        if iteration % 100 == 0:
            train_accuracy = evaluate(model, train_loader)
            test_accuracy = evaluate(model, test_loader)
            print('Iteration: {}. Loss: {:.4f}. Train Accuracy: {:.2f}. '
                  'Test Accuracy: {:.2f}'.format(iteration, loss.item(),
                                                 train_accuracy, test_accuracy))
```
In the code above, we first define a convolutional neural network, `ConvNet`, with two convolutional layers and two fully connected layers. Each convolutional layer is followed by batch normalization, a ReLU activation, and max pooling; the flattened feature map then passes through a Dropout layer before the first fully connected layer, and the second fully connected layer produces the 10 class scores.
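The `7 * 7 * 32` input size of `fc1` follows from the standard convolution/pooling output-size formula applied to 28x28 MNIST images. A quick sketch to verify it (plain Python, no torch required):

```python
def conv_out(size, kernel, stride, padding):
    # Standard output-size formula: floor((size + 2p - k) / s) + 1
    return (size + 2 * padding - kernel) // stride + 1

size = 28                       # MNIST images are 28x28
size = conv_out(size, 5, 1, 2)  # conv1: padding=2 keeps 28
size = conv_out(size, 2, 2, 0)  # pool1: halves to 14
size = conv_out(size, 5, 1, 2)  # conv2: keeps 14
size = conv_out(size, 2, 2, 0)  # pool2: halves to 7
print(size * size * 32)         # 1568 = 7 * 7 * 32 flattened features
```

If you change the input resolution or the kernel/stride/padding settings, rerun this arithmetic and adjust `fc1` accordingly, or a shape-mismatch error will surface at the `reshape` call.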
We then load the MNIST dataset and create the data loaders. The model is trained with the cross-entropy loss and a stochastic gradient descent (SGD) optimizer.
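As a sketch of what `nn.CrossEntropyLoss` computes under the hood, here is a plain-Python version for a single sample: a log-softmax over the raw logits followed by the negative log-likelihood of the target class (the numbers are illustrative):

```python
import math

def cross_entropy(logits, target):
    # log(sum(exp(z))) - z[target], with the max subtracted for
    # numerical stability (same trick PyTorch uses internally)
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[target]

logits = [2.0, 0.5, 0.1]              # raw scores for 3 classes
print(round(cross_entropy(logits, 0), 4))  # ~0.3168
```

This is why the network's `forward` returns raw logits with no final softmax: the loss applies log-softmax itself, and adding an explicit softmax first would be redundant and numerically worse.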
In each iteration, one batch of data goes through a forward and a backward pass, and the model parameters are updated. Every 100 iterations we log the current iteration count, the loss, and the training- and test-set accuracies (computed in `eval` mode with gradients disabled) so we can track the model's performance.
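The epoch count in the script is derived from a target iteration budget rather than chosen directly; the arithmetic can be checked in isolation (using the known MNIST training-set size of 60,000):

```python
batch_size = 100
train_size = 60000                            # MNIST training-set size
num_iters = 3000                              # desired total parameter updates
iters_per_epoch = train_size // batch_size    # 600 batches per epoch
num_epochs = num_iters // iters_per_epoch     # 3000 / 600 = 5 epochs
print(num_epochs)
```

So with these settings the loop runs 5 full epochs, and the 100-iteration logging interval fires 6 times per epoch.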