PyTorch code for binary classification that saves the best model
Posted: 2023-11-16 13:06:39
The following is an example of binary classification in PyTorch that saves the best-performing model:
```python
import torch
import torch.nn as nn
import torch.optim as optim

# Define the model: 10 input features -> 5 hidden units -> 2 output logits
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(10, 5)
        self.fc2 = nn.Linear(5, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Train for one epoch and return the average loss
def train(model, train_loader, optimizer, criterion):
    model.train()
    running_loss = 0.0
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    return running_loss / len(train_loader)

# Evaluate and return accuracy on the test set
def test(model, test_loader):
    model.eval()
    correct = 0
    total = 0
    with torch.no_grad():
        for inputs, labels in test_loader:
            outputs = model(inputs)
            _, predicted = torch.max(outputs, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    return correct / total

# Build the data loaders (train_dataset and test_dataset must already be defined)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=32, shuffle=False)

# Initialize the model, loss function, and optimizer
model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# Train, keeping the checkpoint with the best test accuracy
best_acc = 0.0
for epoch in range(10):
    train_loss = train(model, train_loader, optimizer, criterion)
    test_acc = test(model, test_loader)
    if test_acc > best_acc:
        best_acc = test_acc
        torch.save(model.state_dict(), 'best_model.pth')
    print('Epoch: {}, Train Loss: {:.4f}, Test Acc: {:.4f}'.format(epoch + 1, train_loss, test_acc))
```
In the code above, we define a simple neural network `Net` and train it with a cross-entropy loss and SGD with momentum. At the end of each epoch we compute accuracy on the test set; whenever it exceeds the best accuracy seen so far, we save the current model's `state_dict` to `best_model.pth`, so that file always holds the best-performing checkpoint.
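The snippet assumes `train_dataset` and `test_dataset` already exist. Below is a minimal sketch, with synthetic stand-in data, of how those datasets could be built and how the saved checkpoint would be reloaded for inference afterwards (the `Net` definition is repeated here only so the sketch is self-contained; the synthetic tensors and the 80/20 split are illustrative assumptions, not part of the original answer):

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset

class Net(nn.Module):  # same architecture as in the example above
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 5)
        self.fc2 = nn.Linear(5, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Synthetic stand-in data: 100 samples, 10 features, labels in {0, 1}
X = torch.randn(100, 10)
y = torch.randint(0, 2, (100,))
train_dataset = TensorDataset(X[:80], y[:80])  # 80/20 split, chosen arbitrarily
test_dataset = TensorDataset(X[80:], y[80:])

# ... the training loop above would run here and write 'best_model.pth' ...
torch.save(Net().state_dict(), 'best_model.pth')  # placeholder for that loop

# Reload the best checkpoint for inference
best = Net()
best.load_state_dict(torch.load('best_model.pth'))
best.eval()
with torch.no_grad():
    logits = best(torch.randn(4, 10))   # a batch of 4 new samples
    preds = logits.argmax(dim=1)        # predicted class indices, 0 or 1
```

Note that `model.eval()` plus `torch.no_grad()` is the standard pattern for inference: it disables training-only behavior (dropout, batch-norm updates) and skips gradient tracking.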