Implement a multilayer perceptron with PyTorch and plot how the training loss, training error, and test error change with each epoch.
In Python, implementing a multilayer perceptron (MLP) with the PyTorch library and visualizing the training process typically involves the following steps:
1. **Import the necessary libraries**:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset
import matplotlib.pyplot as plt
```
2. **Define the model (MLP)**:
```python
class MLP(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)   # input layer -> hidden layer
        self.relu = nn.ReLU()                         # non-linear activation
        self.fc2 = nn.Linear(hidden_dim, output_dim)  # hidden layer -> output layer

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out
```
3. **Data preprocessing and loading**:
Create a custom dataset and use DataLoader to batch the data.
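As a sketch of this step (the class name, field names, and tensor shapes below are illustrative; the data is random placeholder data, not from the original answer):

```python
import torch
from torch.utils.data import Dataset, DataLoader

# A minimal custom dataset wrapping pre-built feature/label tensors.
class SimpleDataset(Dataset):
    def __init__(self, features, labels):
        self.features = features  # shape: (num_samples, input_dim)
        self.labels = labels      # shape: (num_samples,)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

# Synthetic data just to demonstrate batching
X = torch.randn(100, 20)            # 100 samples, 20 features each
y = torch.randint(0, 3, (100,))     # 3-class labels
train_loader = DataLoader(SimpleDataset(X, y), batch_size=16, shuffle=True)

xb, yb = next(iter(train_loader))
print(xb.shape)  # torch.Size([16, 20])
```

For tensors that are already paired like this, `torch.utils.data.TensorDataset` achieves the same thing without a custom class.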
4. **Create and configure the model, optimizer, and loss function**:
```python
model = MLP(input_dim, hidden_dim, num_classes)  # set dims to match your data
criterion = nn.CrossEntropyLoss()  # standard choice for classification
optimizer = optim.SGD(model.parameters(), lr=0.01)
```
5. **Training loop and visualization**:
```python
num_epochs = 20  # total number of epochs
train_losses, train_accs = [], []
test_losses, test_accs = [], []

for epoch in range(num_epochs):
    # Training phase
    model.train()
    running_loss, correct, total = 0.0, 0, 0
    for inputs, targets in train_loader:
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * targets.size(0)
        _, preds = torch.max(outputs, 1)
        correct += (preds == targets).sum().item()
        total += targets.size(0)
    train_losses.append(running_loss / total)  # one value per epoch
    train_accs.append(correct / total)

    # Evaluation phase
    model.eval()
    running_loss, correct, total = 0.0, 0, 0
    with torch.no_grad():
        for inputs, targets in test_loader:
            outputs = model(inputs)
            loss = criterion(outputs, targets)
            running_loss += loss.item() * targets.size(0)
            _, preds = torch.max(outputs, 1)
            correct += (preds == targets).sum().item()
            total += targets.size(0)
    test_losses.append(running_loss / total)
    test_accs.append(correct / total)

# Plot once after training: one point per epoch
plt.figure(figsize=(10, 6))
plt.plot(train_losses, label='Train Loss')
plt.plot(test_losses, label='Test Loss')
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.show()
# The accuracy (or error = 1 - accuracy) curves can be plotted the same way
```
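Since the question asks for error curves specifically, here is a small sketch that converts per-epoch accuracies into error rates; the accuracy values below are made-up placeholders standing in for lists recorded during training:

```python
import matplotlib.pyplot as plt

# Hypothetical per-epoch accuracies recorded during training
train_accs = [0.60, 0.72, 0.80, 0.85]
test_accs = [0.58, 0.68, 0.74, 0.78]

# Error rate = 1 - accuracy
train_errs = [1 - a for a in train_accs]
test_errs = [1 - a for a in test_accs]

plt.figure(figsize=(10, 6))
plt.plot(train_errs, label='Train Error')
plt.plot(test_errs, label='Test Error')
plt.xlabel("Epoch")
plt.ylabel("Error rate")
plt.legend()
plt.show()
```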
6. **Run the training loop**:
The epoch loop in step 5 already iterates `num_epochs` times, so running the script as written trains the model end to end; no additional outer loop is needed.
After completing these steps, you should see line plots of the training and test loss, and of the accuracy, as they change with each epoch.
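Putting the pieces together, a complete runnable sketch on synthetic data; all sizes, hyperparameters, and the `nn.Sequential` stand-in for the MLP class are illustrative choices, made only so the script is self-contained:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
import matplotlib.pyplot as plt

torch.manual_seed(0)

# Synthetic 3-class classification data: 240 train / 60 test samples
X = torch.randn(300, 20)
y = torch.randint(0, 3, (300,))
train_loader = DataLoader(TensorDataset(X[:240], y[:240]), batch_size=32, shuffle=True)
test_loader = DataLoader(TensorDataset(X[240:], y[240:]), batch_size=32)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

train_losses, test_losses = [], []
for epoch in range(5):
    # Training phase: accumulate a sample-weighted average loss per epoch
    model.train()
    running, n = 0.0, 0
    for xb, yb in train_loader:
        loss = criterion(model(xb), yb)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running += loss.item() * yb.size(0)
        n += yb.size(0)
    train_losses.append(running / n)

    # Evaluation phase: no gradients needed
    model.eval()
    running, n = 0.0, 0
    with torch.no_grad():
        for xb, yb in test_loader:
            running += criterion(model(xb), yb).item() * yb.size(0)
            n += yb.size(0)
    test_losses.append(running / n)

plt.plot(train_losses, label='Train Loss')
plt.plot(test_losses, label='Test Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
```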