PyTorch Example
Sure! Below is a simple PyTorch example that implements a small neural network for classifying handwritten digits (the MNIST dataset):
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms

# Hyperparameters
input_size = 784      # MNIST images are 28x28 = 784 pixels when flattened
hidden_size = 128     # Hidden layer size
num_classes = 10      # Number of classes (digits 0-9)
num_epochs = 5        # Number of training epochs
batch_size = 64       # Batch size
learning_rate = 0.001 # Learning rate

# Load and preprocess the dataset
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
train_dataset = datasets.MNIST(root='./data', train=True, transform=transform, download=True)
test_dataset = datasets.MNIST(root='./data', train=False, transform=transform, download=True)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

# Define the neural network model
class Net(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = Net(input_size, hidden_size, num_classes)

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.reshape(-1, input_size)  # Flatten 1x28x28 images into 784-dim vectors
        outputs = model(images)
        loss = criterion(outputs, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if (i + 1) % 100 == 0:
            print(f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{len(train_loader)}], Loss: {loss.item():.4f}')

# Evaluate the model on the test set
correct = 0
total = 0
with torch.no_grad():
    for images, labels in test_loader:
        images = images.reshape(-1, input_size)  # Flatten test images as well
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print(f'Accuracy of the network on the test images: {100 * correct / total}%')
```
This example builds a simple neural network in PyTorch with an input layer, one hidden layer, and an output layer. During training, the Adam optimizer and the cross-entropy loss are used to update the model's parameters. In the test phase, the MNIST test split is used to evaluate the model's performance; even this simple network classifies the handwritten digits with fairly high accuracy.
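As a follow-up, here is a minimal sketch of how the trained model could be used to predict a single digit. It assumes the script above has already been run, so `model` and `test_dataset` are in scope:

```python
import torch

# Take one (image, label) pair from the MNIST test split defined above
image, label = test_dataset[0]

# Flatten the 1x28x28 tensor into a 1x784 batch, matching the model's input size
image = image.reshape(1, -1)

model.eval()              # switch to evaluation mode
with torch.no_grad():     # no gradients needed for inference
    logits = model(image)
    predicted = torch.argmax(logits, dim=1).item()

print(f'Predicted digit: {predicted}, true label: {label}')
```

Calling `model.eval()` is a no-op for this particular network (it has no dropout or batch-norm layers), but it is a good habit before inference.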