Implementing Handwritten Digit Recognition with a Multilayer Perceptron
Below is a code example of handwritten digit recognition with a multilayer perceptron (MLP), based on Python and PyTorch.
First, we import the necessary libraries and the dataset. Here we use the MNIST dataset provided by torchvision.
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
# Path where the dataset will be stored
data_path = './data'

# Preprocessing: convert images to tensors, then normalize to [-1, 1]
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5,), std=(0.5,))
])

# Load the training and test splits of MNIST
train_dataset = datasets.MNIST(root=data_path, train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(root=data_path, train=False, download=True, transform=transform)

# Batch size
batch_size = 128

# Create the data loaders (the test set does not need shuffling)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
```
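As a quick sanity check on the preprocessing above: `Normalize(mean=(0.5,), std=(0.5,))` applies `(x - mean) / std` to each pixel, mapping `ToTensor`'s `[0, 1]` range onto `[-1, 1]`. A minimal, framework-free sketch of that arithmetic:

```python
# Normalize with mean=0.5 and std=0.5 applies (x - 0.5) / 0.5 per pixel.
def normalize(x, mean=0.5, std=0.5):
    return (x - mean) / std

# ToTensor scales pixels into [0, 1]; after normalization they span [-1, 1].
print(normalize(0.0))  # -1.0 (black pixel)
print(normalize(1.0))  # 1.0  (white pixel)
print(normalize(0.5))  # 0.0  (mid gray)
```

Centering the inputs around zero like this generally helps gradient-based training converge faster than raw `[0, 1]` pixels.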
Next, we define the multilayer perceptron model, here with two fully connected layers.
```python
class MLP(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, output_dim)
        self.relu = nn.ReLU()

    def forward(self, x):
        # Flatten each 1x28x28 image into a 784-dimensional vector
        x = x.view(x.size(0), -1)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        return x
```
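For reference, the parameter count of this two-layer MLP can be worked out by hand: each `nn.Linear(in_dim, out_dim)` holds `in_dim * out_dim` weights plus `out_dim` biases, and ReLU has no parameters. A quick sketch of that arithmetic:

```python
def linear_params(in_dim, out_dim):
    # nn.Linear stores an (out_dim, in_dim) weight matrix and an out_dim bias vector.
    return in_dim * out_dim + out_dim

# fc1: 784 -> 128, fc2: 128 -> 10 (ReLU adds no parameters).
total = linear_params(784, 128) + linear_params(128, 10)
print(total)  # 101770
```

This matches what `sum(p.numel() for p in model.parameters())` would report on the instantiated model.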
Then we define the model's hyperparameters and instantiate the model and the optimizer.
```python
# Hyperparameters
input_dim = 784
hidden_dim = 128
output_dim = 10
learning_rate = 0.001
num_epochs = 10

# Instantiate the model and the optimizer
model = MLP(input_dim, hidden_dim, output_dim)
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
```
Next, we define the training and evaluation functions.
```python
def train(model, train_loader, optimizer, criterion):
    model.train()
    train_loss = 0
    train_correct = 0
    for data, target in train_loader:
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        # loss.item() is the batch mean, so weight it by the batch size
        # before averaging over the whole dataset
        train_loss += loss.item() * data.size(0)
        pred = output.argmax(dim=1, keepdim=True)
        train_correct += pred.eq(target.view_as(pred)).sum().item()
    train_loss /= len(train_loader.dataset)
    train_acc = train_correct / len(train_loader.dataset)
    return train_loss, train_acc

def test(model, test_loader, criterion):
    model.eval()
    test_loss = 0
    test_correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            output = model(data)
            test_loss += criterion(output, target).item() * data.size(0)
            pred = output.argmax(dim=1, keepdim=True)
            test_correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    test_acc = test_correct / len(test_loader.dataset)
    return test_loss, test_acc
```
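The accuracy bookkeeping in both functions follows the same pattern: take the argmax over the class scores of each sample, compare with the label, and count matches. A minimal illustration of that logic without tensors:

```python
def accuracy(logits, targets):
    # Predicted class = index of the largest score in each row,
    # mirroring output.argmax(dim=1) in the loops above.
    preds = [row.index(max(row)) for row in logits]
    correct = sum(p == t for p, t in zip(preds, targets))
    return correct / len(targets)

# Three samples over three classes; two predictions match the labels.
logits = [[0.1, 0.7, 0.2], [0.9, 0.05, 0.05], [0.2, 0.3, 0.5]]
print(round(accuracy(logits, [1, 0, 1]), 3))  # 0.667
```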
Finally, we train and evaluate the model.
```python
criterion = nn.CrossEntropyLoss()
for epoch in range(1, num_epochs + 1):
    train_loss, train_acc = train(model, train_loader, optimizer, criterion)
    test_loss, test_acc = test(model, test_loader, criterion)
    print('Epoch [{}/{}], Train Loss: {:.4f}, Train Acc: {:.4f}, Test Loss: {:.4f}, Test Acc: {:.4f}'
          .format(epoch, num_epochs, train_loss, train_acc, test_loss, test_acc))
```
Once training is complete, we can use the model for prediction.
```python
import matplotlib.pyplot as plt
import numpy as np

# Pick a random test image
index = np.random.randint(0, len(test_dataset))
image, target = test_dataset[index]
image = image.unsqueeze(0)  # add a batch dimension

# Run the model in evaluation mode, without tracking gradients
model.eval()
with torch.no_grad():
    output = model(image)
pred = output.argmax(dim=1, keepdim=True)

# Display the image along with the ground truth and the prediction
plt.imshow(image.squeeze(), cmap='gray')
plt.title('Ground Truth: {}, Prediction: {}'.format(target, pred.item()))
plt.axis('off')
plt.show()
```
That concludes the code example of handwritten digit recognition with a multilayer perceptron, based on Python and PyTorch.