A simple deep-learning script for PyCharm that runs without errors, uses only PyTorch, trains for 100 epochs, and produces accuracy, loss, and cross-entropy curves for the training and validation sets
Below is a simple deep-learning script that runs in PyCharm without errors, relies on the PyTorch ecosystem (PyTorch itself, plus torchvision for the data and matplotlib for the plots), trains for 100 epochs, and plots the accuracy curves, loss curves, and cross-entropy curves for the training and validation sets. It trains and evaluates on the MNIST dataset.
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
import matplotlib.pyplot as plt

# Hyperparameters
batch_size = 64
learning_rate = 0.01
num_epochs = 100

# Data preprocessing: convert to tensor and normalize with the MNIST mean/std
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])

# Load the datasets
train_dataset = datasets.MNIST(root='./data', train=True, transform=transform, download=True)
test_dataset = datasets.MNIST(root='./data', train=False, transform=transform, download=True)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)

# Define the model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=5)   # 28x28 -> 24x24
        self.maxpool1 = nn.MaxPool2d(kernel_size=2)    # 24x24 -> 12x12
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5)  # 12x12 -> 8x8
        self.maxpool2 = nn.MaxPool2d(kernel_size=2)    # 8x8  -> 4x4
        self.fc1 = nn.Linear(64 * 4 * 4, 512)          # 1024 flattened features
        self.fc2 = nn.Linear(512, 10)

    def forward(self, x):
        x = self.maxpool1(torch.relu(self.conv1(x)))
        x = self.maxpool2(torch.relu(self.conv2(x)))
        x = x.view(-1, 64 * 4 * 4)
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Instantiate the model, loss function and optimizer
model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate)

train_losses = []
train_accs = []
test_losses = []
test_accs = []

# Train the model
for epoch in range(num_epochs):
    train_loss = 0
    train_acc = 0
    test_loss = 0
    test_acc = 0

    # Training mode
    model.train()
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * inputs.size(0)  # accumulate per-sample loss
        _, predicted = torch.max(outputs, 1)
        train_acc += (predicted == targets).sum().item()

    # Evaluation mode
    model.eval()
    with torch.no_grad():
        for inputs, targets in test_loader:
            outputs = model(inputs)
            loss = criterion(outputs, targets)
            test_loss += loss.item() * inputs.size(0)
            _, predicted = torch.max(outputs, 1)
            test_acc += (predicted == targets).sum().item()

    # Record the per-sample average loss and the accuracy for this epoch
    train_loss /= len(train_loader.dataset)
    train_acc /= len(train_loader.dataset)
    test_loss /= len(test_loader.dataset)
    test_acc /= len(test_loader.dataset)
    train_losses.append(train_loss)
    train_accs.append(train_acc)
    test_losses.append(test_loss)
    test_accs.append(test_acc)

    # Print progress
    print('Epoch [{}/{}], Train Loss: {:.4f}, Train Acc: {:.4f}, Test Loss: {:.4f}, Test Acc: {:.4f}'
          .format(epoch + 1, num_epochs, train_loss, train_acc, test_loss, test_acc))

# Plot the loss and accuracy curves for the training and test sets
plt.plot(train_losses, label='Train Loss')
plt.plot(test_losses, label='Test Loss')
plt.legend()
plt.show()

plt.plot(train_accs, label='Train Acc')
plt.plot(test_accs, label='Test Acc')
plt.legend()
plt.show()

# Since the criterion is CrossEntropyLoss, the cross-entropy curve coincides with the loss curve
plt.plot(train_losses, label='Train Cross Entropy')
plt.plot(test_losses, label='Test Cross Entropy')
plt.legend()
plt.show()
```
This script trains a small convolutional neural network to classify MNIST digits. During training it records the loss and accuracy on the training set and on the test set (used here as the validation set), and at the end it plots the accuracy curves, the loss curves, and the cross-entropy curves (which are identical to the loss curves, since the loss function is cross-entropy). You can experiment by changing the hyperparameters or the network structure, for example as sketched below.
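As a starting point for such experiments, here is a minimal sketch of two common tweaks: swapping the SGD optimizer for Adam and running on a GPU when one is available. It reuses the `Net` class defined in the script above; everything else is standard PyTorch. The batch tensors inside the training and evaluation loops would then also need to be moved to the same device.

```python
import torch
import torch.optim as optim

# Pick a GPU if one is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = Net().to(device)  # Net is the class defined in the script above
optimizer = optim.Adam(model.parameters(), lr=1e-3)

# Inside the training and evaluation loops, move each batch to the same device:
#     inputs, targets = inputs.to(device), targets.to(device)
```

If the script runs in an environment without an interactive display, calling `plt.savefig('curves.png')` before each `plt.show()` writes the figure to disk instead of relying on a pop-up window.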