Randomly generate a simple deep learning script for PyCharm that runs directly without errors, uses only the PyTorch library, trains for 100 epochs, and produces accuracy curves, loss curves, and cross-entropy curves for the training and validation sets.
Sure, here is a simple deep learning script for PyCharm that uses the PyTorch library to train a simple neural network model:
```
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
# Define the neural network model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 10)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = x.view(-1, 784)           # flatten 28x28 images into 784-dim vectors
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        x = self.fc3(x)               # raw logits; CrossEntropyLoss applies softmax internally
        return x
# Training function: one full pass over the training set
def train(model, train_loader, optimizer, criterion):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for i, (inputs, labels) in enumerate(train_loader):
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    acc_train = 100 * correct / total
    loss_train = running_loss / len(train_loader)
    return acc_train, loss_train
# Validation function: evaluate on the validation set without gradient tracking
def validate(model, val_loader, criterion):
    model.eval()
    running_loss = 0.0
    correct = 0
    total = 0
    with torch.no_grad():
        for i, (inputs, labels) in enumerate(val_loader):
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            running_loss += loss.item()
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    acc_val = 100 * correct / total
    loss_val = running_loss / len(val_loader)
    return acc_val, loss_val
# Load the MNIST dataset
train_dataset = torchvision.datasets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True)
val_dataset = torchvision.datasets.MNIST(root='./data', train=False, transform=transforms.ToTensor(), download=True)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=64, shuffle=False)
# Initialize the model, loss function, and optimizer
model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Train the model and record the results
n_epochs = 100
train_acc_list = []
train_loss_list = []
val_acc_list = []
val_loss_list = []
for epoch in range(n_epochs):
    acc_train, loss_train = train(model, train_loader, optimizer, criterion)
    acc_val, loss_val = validate(model, val_loader, criterion)
    train_acc_list.append(acc_train)
    train_loss_list.append(loss_train)
    val_acc_list.append(acc_val)
    val_loss_list.append(loss_val)
    print('Epoch [{}/{}], Train Loss: {:.4f}, Train Acc: {:.2f}%, Val Loss: {:.4f}, Val Acc: {:.2f}%'.format(epoch+1, n_epochs, loss_train, acc_train, loss_val, acc_val))
# Visualize the results
import matplotlib.pyplot as plt

plt.plot(train_acc_list, label='Train Acc')
plt.plot(val_acc_list, label='Val Acc')
plt.legend()
plt.show()

plt.plot(train_loss_list, label='Train Loss')
plt.plot(val_loss_list, label='Val Loss')
plt.legend()
plt.show()

# The criterion is cross-entropy, so the cross-entropy curve is the same as the loss curve
plt.plot(train_loss_list, label='Train CE Loss')
plt.plot(val_loss_list, label='Val CE Loss')
plt.legend()
plt.show()
```
This script trains a three-layer fully connected neural network on the MNIST dataset, using the Adam optimizer and the cross-entropy loss function.
During training, each epoch records the accuracy and loss on the training and validation sets and appends them to the corresponding lists. Finally, Matplotlib is used to plot these results so you can better see how training is progressing.
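A side note on the requested cross-entropy curve: `nn.CrossEntropyLoss` already computes the cross-entropy (log-softmax followed by negative log-likelihood), so the loss curve and the cross-entropy curve are the same quantity. Here is a minimal sketch illustrating that equivalence, using made-up logits and labels:
```
import torch
import torch.nn as nn
import torch.nn.functional as F

# Made-up logits for a batch of 3 samples over 10 classes, plus their labels
logits = torch.randn(3, 10)
labels = torch.tensor([2, 7, 0])

ce = nn.CrossEntropyLoss()(logits, labels)                  # built-in cross-entropy loss
manual = F.nll_loss(F.log_softmax(logits, dim=1), labels)   # log-softmax + negative log-likelihood

# The two values agree up to floating-point error
print(ce.item(), manual.item())
```
This is why the third figure in the script simply re-plots the recorded loss values under a cross-entropy label.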