Use any framework to train on Tiny ImageNet and run classification prediction, implemented in a Jupyter notebook
Posted: 2024-02-19 22:02:19 · Views: 253
Sure — here is a PyTorch implementation you can run directly in a Jupyter notebook. Make sure you have the PyTorch and torchvision libraries installed.
### Training a model from scratch
#### Dataset preparation
First, download the Tiny ImageNet dataset. You can get it from the [official site](https://tiny-imagenet.herokuapp.com/) or from the [dataset](https://www.kaggle.com/c/tiny-imagenet/data) on Kaggle.
Then unzip the downloaded archive into a target directory:
```python
!unzip -q "tiny-imagenet-200.zip" -d "./data"
```
Next we preprocess the data: each image is resized to a fixed size (Tiny ImageNet images are 64x64; 224x224 matches the ResNet input size), `ToTensor` scales pixel values into [0, 1], and `Normalize` then standardizes each channel with the ImageNet mean and standard deviation. The `transforms` module from torchvision handles reading and transforming the images.
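As a quick arithmetic check of what this two-step pipeline does to a single pixel (a minimal sketch; the `normalize_pixel` helper is illustrative, not part of torchvision):

```python
# ToTensor() maps a raw byte value b in [0, 255] to b / 255,
# then Normalize() standardizes each channel: (x - mean) / std.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

def normalize_pixel(raw, channel):
    """Apply the ToTensor + Normalize arithmetic to one raw byte value (0-255)."""
    x = raw / 255.0                             # ToTensor: scale to [0, 1]
    return (x - mean[channel]) / std[channel]   # Normalize: standardize per channel

# A mid-gray pixel (128) on the red channel ends up slightly above zero:
print(round(normalize_pixel(128, 0), 4))
# → 0.0741
```

Note the output is no longer in [0, 1]: after standardization, values are roughly zero-centered, which is what the pretrained ResNet expects.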
```python
import os
import torch
from torchvision import datasets, transforms

data_dir = './data/tiny-imagenet-200'
train_dir = os.path.join(data_dir, 'train')
val_dir = os.path.join(data_dir, 'val')
test_dir = os.path.join(data_dir, 'test')

# Define the image preprocessing pipelines
train_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])
val_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])

# Load the datasets
train_dataset = datasets.ImageFolder(train_dir, transform=train_transform)
val_dataset = datasets.ImageFolder(val_dir, transform=val_transform)
test_dataset = datasets.ImageFolder(test_dir, transform=val_transform)

# Create DataLoaders for batched loading
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=32, shuffle=False)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=32, shuffle=False)
```
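One caveat: the archive's `val/` split keeps every image in `val/images/` with labels listed in `val_annotations.txt`, so `ImageFolder` cannot see per-class subdirectories until the folder is reorganized (and `test/` ships without labels at all). A minimal reorganization sketch, assuming the layout as shipped in the zip; the helper names are ours:

```python
import os
import shutil

def parse_val_annotations(text):
    """Parse val_annotations.txt lines of the form <filename>\t<wnid>\t<bbox...>."""
    mapping = {}
    for line in text.strip().splitlines():
        parts = line.split('\t')
        mapping[parts[0]] = parts[1]   # filename -> WordNet class id
    return mapping

def reorganize_val(val_dir):
    """Move val/images/* into val/<wnid>/* so ImageFolder can read the split."""
    with open(os.path.join(val_dir, 'val_annotations.txt')) as f:
        mapping = parse_val_annotations(f.read())
    for filename, wnid in mapping.items():
        class_dir = os.path.join(val_dir, wnid)
        os.makedirs(class_dir, exist_ok=True)
        shutil.move(os.path.join(val_dir, 'images', filename),
                    os.path.join(class_dir, filename))

# Example annotation line (tab-separated):
print(parse_val_annotations('val_0.JPEG\tn03444034\t0\t32\t44\t62'))
# → {'val_0.JPEG': 'n03444034'}
```

Run `reorganize_val(val_dir)` once, before creating `val_dataset`, and remove the now-empty `val/images` folder afterwards.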
#### Model construction and training
We train the torchvision ResNet18 model. Tiny ImageNet already ships pre-split into training, validation, and test sets; the `DataLoader` objects created above load each split in batches.
```python
import torch
import torch.nn as nn
import torch.optim as optim

# Load ResNet18 with randomly initialized weights (training from scratch);
# set pretrained=True to start from ImageNet weights instead
model = torch.hub.load('pytorch/vision:v0.9.0', 'resnet18', pretrained=False)

# Replace the final layer so its output size matches the number of classes
num_classes = len(train_dataset.classes)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Move the model to the GPU if one is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)

# Train the model
num_epochs = 10
for epoch in range(num_epochs):
    # Training phase
    model.train()
    train_loss = 0.0
    train_acc = 0.0
    for i, (inputs, labels) in enumerate(train_loader):
        inputs = inputs.to(device)
        labels = labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * inputs.size(0)
        _, preds = torch.max(outputs, 1)
        train_acc += torch.sum(preds == labels.data)
    train_loss = train_loss / len(train_loader.dataset)
    train_acc = train_acc.double() / len(train_loader.dataset)

    # Evaluate on the validation set
    model.eval()
    val_loss = 0.0
    val_acc = 0.0
    with torch.no_grad():
        for inputs, labels in val_loader:
            inputs = inputs.to(device)
            labels = labels.to(device)
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            val_loss += loss.item() * inputs.size(0)
            _, preds = torch.max(outputs, 1)
            val_acc += torch.sum(preds == labels.data)
    val_loss = val_loss / len(val_loader.dataset)
    val_acc = val_acc.double() / len(val_loader.dataset)

    print('Epoch [{}/{}], Train Loss: {:.4f}, Train Acc: {:.4f}, Val Loss: {:.4f}, Val Acc: {:.4f}'
          .format(epoch+1, num_epochs, train_loss, train_acc, val_loss, val_acc))
```
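Since validation accuracy often peaks before the last epoch, it is worth keeping the weights from the best epoch rather than the final one. A minimal tracker sketch (the `BestTracker` name and the patience value are illustrative, not from the original; call `torch.save(model.state_dict(), path)` whenever `update` returns True):

```python
class BestTracker:
    """Track the best validation accuracy seen so far and support early stopping."""
    def __init__(self, patience=3):
        self.best_acc = float('-inf')
        self.patience = patience
        self.bad_epochs = 0   # epochs since the last improvement

    def update(self, val_acc):
        """Return True if val_acc is a new best (i.e. worth checkpointing)."""
        if val_acc > self.best_acc:
            self.best_acc = val_acc
            self.bad_epochs = 0
            return True
        self.bad_epochs += 1
        return False

    def should_stop(self):
        """True once patience consecutive epochs have failed to improve."""
        return self.bad_epochs >= self.patience

tracker = BestTracker(patience=2)
print([tracker.update(a) for a in [0.31, 0.40, 0.38, 0.39]])
# → [True, True, False, False]
print(tracker.should_stop())
# → True
```

Inside the epoch loop above, call `tracker.update(val_acc)` after the validation pass and `break` when `tracker.should_stop()` is true.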
#### Classification prediction
After training, we can evaluate the model on the test set and compute its test accuracy.
```python
# Evaluate the model on the test set
model.eval()
test_acc = 0.0
with torch.no_grad():
    for inputs, labels in test_loader:
        inputs = inputs.to(device)
        labels = labels.to(device)
        outputs = model(inputs)
        _, preds = torch.max(outputs, 1)
        test_acc += torch.sum(preds == labels.data)
test_acc = test_acc.double() / len(test_loader.dataset)
print('Test Acc: {:.4f}'.format(test_acc))
```
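To turn a predicted index into a human-readable label, index into `train_dataset.classes` (for Tiny ImageNet these are WordNet IDs such as `n01443537`). A torch-free sketch of selecting the top-k classes from one row of logits; the logit values and short class list here are made up for illustration:

```python
def topk_classes(logits, classes, k=3):
    """Return the k class names with the highest logits, best first."""
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    return [classes[i] for i in ranked[:k]]

classes = ['n01443537', 'n01629819', 'n01641577', 'n01644900']
logits = [0.2, 3.1, -0.7, 1.5]
print(topk_classes(logits, classes, k=2))
# → ['n01629819', 'n01644900']
```

With the trained model, the equivalent per-image call would pass `outputs[0].tolist()` and `train_dataset.classes` to this helper.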
### Fine-tuning a pretrained model
#### Dataset preparation
Same as in the from-scratch section above.
#### Model construction and training
We can also fine-tune the torchvision ResNet18. The early layers of a pretrained network learn generic features, so a common approach is to freeze their parameters and train only the final layer(s). This speeds up training considerably and, on a small dataset, can reduce overfitting.
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Load the pretrained model
model = models.resnet18(pretrained=True)

# Replace the final layer so its output size matches the number of classes
num_classes = len(train_dataset.classes)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Freeze all layers, then unfreeze only the new fully connected layer
for param in model.parameters():
    param.requires_grad = False
for param in model.fc.parameters():
    param.requires_grad = True

# Move the model to the GPU if one is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

# Define the loss function and an optimizer over the trainable parameters only
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.fc.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)

# Train the model
num_epochs = 10
for epoch in range(num_epochs):
    # Training phase
    model.train()
    train_loss = 0.0
    train_acc = 0.0
    for i, (inputs, labels) in enumerate(train_loader):
        inputs = inputs.to(device)
        labels = labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * inputs.size(0)
        _, preds = torch.max(outputs, 1)
        train_acc += torch.sum(preds == labels.data)
    train_loss = train_loss / len(train_loader.dataset)
    train_acc = train_acc.double() / len(train_loader.dataset)

    # Evaluate on the validation set
    model.eval()
    val_loss = 0.0
    val_acc = 0.0
    with torch.no_grad():
        for inputs, labels in val_loader:
            inputs = inputs.to(device)
            labels = labels.to(device)
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            val_loss += loss.item() * inputs.size(0)
            _, preds = torch.max(outputs, 1)
            val_acc += torch.sum(preds == labels.data)
    val_loss = val_loss / len(val_loader.dataset)
    val_acc = val_acc.double() / len(val_loader.dataset)

    print('Epoch [{}/{}], Train Loss: {:.4f}, Train Acc: {:.4f}, Val Loss: {:.4f}, Val Acc: {:.4f}'
          .format(epoch+1, num_epochs, train_loss, train_acc, val_loss, val_acc))
```
#### Classification prediction
As before, after training we evaluate the fine-tuned model on the test set and compute its accuracy.
```python
# Evaluate the model on the test set
model.eval()
test_acc = 0.0
with torch.no_grad():
    for inputs, labels in test_loader:
        inputs = inputs.to(device)
        labels = labels.to(device)
        outputs = model(inputs)
        _, preds = torch.max(outputs, 1)
        test_acc += torch.sum(preds == labels.data)
test_acc = test_acc.double() / len(test_loader.dataset)
print('Test Acc: {:.4f}'.format(test_acc))
```
I hope this code is helpful!