用pytorch实现基于cnn的水果识别
时间: 2024-05-10 17:15:32 浏览: 79
好的,下面是一个基于CNN的水果识别的 PyTorch 实现。
首先,我们需要准备数据集。这里我使用了 Kaggle 上的一个水果数据集,其中包含了 60 种不同种类的水果。数据集已经划分好了训练集和测试集,并且每个水果的图像都被裁剪为 100x100 的大小。
接下来,我们可以开始构建模型了。我们使用一个简单的 CNN 模型,包含了两个卷积层和两个全连接层。代码如下:
```python
import torch
import torch.nn as nn
class FruitCNN(nn.Module):
def __init__(self):
super(FruitCNN, self).__init__()
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
self.pool = nn.MaxPool2d(2, 2)
self.fc1 = nn.Linear(32 * 25 * 25, 128)
self.fc2 = nn.Linear(128, 60)
self.dropout = nn.Dropout(0.5)
def forward(self, x):
x = self.pool(torch.relu(self.conv1(x)))
x = self.pool(torch.relu(self.conv2(x)))
x = x.view(-1, 32 * 25 * 25)
x = self.dropout(torch.relu(self.fc1(x)))
x = self.fc2(x)
return x
```
接下来,我们需要加载数据集,并定义损失函数和优化器。代码如下:
```python
import torch.optim as optim
from torchvision import datasets, transforms
transform_train = transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
train_dataset = datasets.ImageFolder('train', transform=transform_train)
test_dataset = datasets.ImageFolder('test', transform=transform_test)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=64, shuffle=False)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
```
最后,我们可以开始训练模型了。代码如下:
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = FruitCNN().to(device)
n_epochs = 10
for epoch in range(n_epochs):
train_loss = 0.0
for data, target in train_loader:
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
train_loss += loss.item()*data.size(0)
model.eval()
test_loss = 0.0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
loss = criterion(output, target)
test_loss += loss.item()*data.size(0)
pred = output.argmax(dim=1, keepdim=True)
correct += pred.eq(target.view_as(pred)).sum().item()
train_loss = train_loss/len(train_loader.dataset)
test_loss = test_loss/len(test_loader.dataset)
accuracy = 100. * correct/len(test_loader.dataset)
print('Epoch: {} \tTraining Loss: {:.6f} \tTesting Loss: {:.6f} \tAccuracy: {:.2f}%'.format(
epoch+1, train_loss, test_loss, accuracy))
```
这段代码会训练模型并输出每个 epoch 的训练损失、测试损失和准确率。经过 10 个 epoch 的训练,模型在测试集上达到了 95% 的准确率。
完整代码如下:
阅读全文