Use torch to build a convolutional network model and optimize its parameters with a genetic algorithm, with example code and an explanation of what each part of the code does
Below is an example of building a convolutional neural network with PyTorch and optimizing its parameters with a genetic algorithm:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import numpy as np
import random

# Define the convolutional neural network model
class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, stride=1, padding=1)
        self.fc1 = nn.Linear(8 * 8 * 32, 256)
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(x, 2)      # 32x32 -> 16x16
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2)      # 16x16 -> 8x8
        x = x.view(-1, 8 * 8 * 32)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Initialise the population: each chromosome is a flat list of genes,
# one gene per model parameter
def init_population(pop_size, gene_length):
    population = []
    for _ in range(pop_size):
        chromosome = [random.uniform(0, 1) for _ in range(gene_length)]
        population.append(chromosome)
    return population

# Compute fitness: copy the genes into the model's parameters, run one
# training pass, and use the inverse of the accumulated loss as fitness
def fitness_function(chromosome, model, train_loader, optimizer, criterion):
    # Convert the genes into the model's parameters
    gene_vector = torch.tensor(chromosome, dtype=torch.float32)
    nn.utils.vector_to_parameters(gene_vector, model.parameters())

    # Train the model for one pass over the training data
    model.train()
    total_loss = 0.0
    for images, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()

    # Lower loss means higher fitness
    fitness = 1 / (total_loss + 1e-10)
    return fitness

# Selection: keep the elite_size individuals with the highest fitness
def selection(population, fitness_values, elite_size):
    elite_indices = np.argsort(fitness_values)[-elite_size:]
    elite_population = [population[i] for i in elite_indices]
    return elite_population

# Uniform crossover: each gene is exchanged between the two children
# with probability 0.5
def crossover(parent1, parent2):
    child1 = []
    child2 = []
    for i in range(len(parent1)):
        if random.random() < 0.5:
            child1.append(parent1[i])
            child2.append(parent2[i])
        else:
            child1.append(parent2[i])
            child2.append(parent1[i])
    return child1, child2

# Mutation: with probability mutation_rate, replace a gene with a new random value
def mutation(chromosome, mutation_rate):
    for i in range(len(chromosome)):
        if random.random() < mutation_rate:
            chromosome[i] = random.uniform(0, 1)
    return chromosome

# Genetic-algorithm optimisation loop
def genetic_algorithm(model, train_loader, pop_size, elite_size, mutation_rate, num_generations):
    # Chromosome length equals the total number of model parameters
    gene_length = sum(p.numel() for p in model.parameters())
    population = init_population(pop_size, gene_length)

    # Optimizer and loss function used inside the fitness evaluation
    optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    for generation in range(num_generations):
        # Evaluate the fitness of every chromosome
        fitness_values = []
        for chromosome in population:
            fitness_values.append(
                fitness_function(chromosome, model, train_loader, optimizer, criterion))

        # Selection: keep the elite
        elite_population = selection(population, fitness_values, elite_size)

        # Crossover: breed offspring from randomly chosen elite parents
        offspring_population = []
        num_offspring = pop_size - elite_size
        for _ in range(num_offspring // 2):
            parent1 = random.choice(elite_population)
            parent2 = random.choice(elite_population)
            child1, child2 = crossover(parent1, parent2)
            offspring_population.append(child1)
            offspring_population.append(child2)

        # Mutation
        for i in range(len(offspring_population)):
            offspring_population[i] = mutation(offspring_population[i], mutation_rate)

        # Next generation = elite + offspring
        population = elite_population + offspring_population

        # Report the best fitness of this generation
        best_fitness = max(fitness_values)
        print('Generation {}: best fitness = {}'.format(generation, best_fitness))

# Load the CIFAR-10 training set
train_data = torch.utils.data.DataLoader(
    dataset=torchvision.datasets.CIFAR10(root='./data', train=True,
                                         transform=transforms.ToTensor(),
                                         download=True),
    batch_size=64, shuffle=True)

# Create the convolutional neural network model
model = ConvNet()

# Optimise its parameters with the genetic algorithm
genetic_algorithm(model, train_data, pop_size=20, elite_size=2,
                  mutation_rate=0.1, num_generations=10)
```
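To make the gene-to-parameter mapping used in `fitness_function` concrete, here is a minimal standalone sketch (not part of the original answer): it shows that the chromosome length used by `init_population` equals the total number of model parameters, and how `torch.nn.utils.vector_to_parameters` scatters a flat vector back into the individual weight and bias tensors. `ConvNet` is the class defined above; everything else is only for illustration.
```python
import torch
import torch.nn as nn

model = ConvNet()

# The chromosome length equals the total number of scalar parameters
gene_length = sum(p.numel() for p in model.parameters())
print(gene_length)  # roughly half a million for the ConvNet above

# Flatten all current parameters into a single vector (the "chromosome" view)
flat = nn.utils.parameters_to_vector(model.parameters())
print(flat.shape)   # torch.Size([gene_length])

# Scatter a flat vector of the same length back into the parameter tensors;
# this is what fitness_function does with each chromosome
new_genes = torch.rand(gene_length)
nn.utils.vector_to_parameters(new_genes, model.parameters())
```
Note that evolving every raw weight of a network this size is expensive, since each fitness evaluation runs a full training pass; in practice genetic algorithms are more often applied to a smaller search space such as hyperparameters or layer sizes.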
The code is explained as follows:
1. First, import the required libraries: PyTorch (`torch`, `torch.nn`, `torch.nn.functional`, `torch.optim`), `torchvision` for the CIFAR-10 dataset, NumPy, and `random`.
2. Define a convolutional neural network model `ConvNet` containing two convolutional layers and two fully connected layers.
3. The population initialisation function `init_population` takes two arguments, the population size `pop_size` and the gene length `gene_length`, and returns a population list in which each chromosome is a list of random floats.
4. The fitness function `fitness_function` takes five arguments: the chromosome `chromosome`, the model `model`, the training data `train_loader`, the optimizer `optimizer`, and the loss function `criterion`. It copies the genes into the model's parameters (via `nn.utils.vector_to_parameters`, illustrated in the short sketch after the code block), runs one training pass over the data while accumulating the loss, and returns the inverse of the total loss as the fitness.
5. The selection function `selection` takes three arguments: the population `population`, the fitness values `fitness_values`, and the elite count `elite_size`. It sorts the individuals by fitness and keeps the `elite_size` fittest ones as the elite.
6. The crossover function `crossover` takes two parent chromosomes `parent1` and `parent2`. It performs uniform crossover: each gene is exchanged between the two children with probability 0.5.
7. The mutation function `mutation` takes a chromosome `chromosome` and a mutation rate `mutation_rate`. Each gene is replaced, with probability `mutation_rate`, by a newly drawn random float.
8. The genetic-algorithm driver `genetic_algorithm` takes six arguments: the model `model`, the training data `train_loader`, the population size `pop_size`, the elite count `elite_size`, the mutation rate `mutation_rate`, and the number of generations `num_generations`. It first initialises the population and then iterates: in each generation it evaluates fitness, selects the elite, applies crossover and mutation, updates the population, and prints the best fitness of that generation.
9. Load the CIFAR-10 dataset and create the convolutional neural network model.
10. Run the genetic algorithm, where `pop_size` is the population size, `elite_size` the elite count, `mutation_rate` the mutation rate, and `num_generations` the number of generations; a sketch of how the best chromosome could be kept and reused afterwards follows this list.
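One limitation of the example as written is that `genetic_algorithm` only prints the best fitness per generation and does not return the best individual. The following is a minimal sketch under two assumptions that are not part of the original code: that the function is changed to return the best chromosome it has seen (for example by adding a `return best_chromosome` after the generation loop), and that a CIFAR-10 test `DataLoader` named `test_loader` is built alongside the training one.
```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Assumption: genetic_algorithm was modified to return the best chromosome it saw
best_chromosome = genetic_algorithm(model, train_data, pop_size=20, elite_size=2,
                                    mutation_rate=0.1, num_generations=10)

# Assumption: a test-set DataLoader built the same way as train_data
test_loader = torch.utils.data.DataLoader(
    dataset=torchvision.datasets.CIFAR10(root='./data', train=False,
                                         transform=transforms.ToTensor(),
                                         download=True),
    batch_size=64, shuffle=False)

# Load the best genes into the model and evaluate accuracy without gradients
nn.utils.vector_to_parameters(torch.tensor(best_chromosome, dtype=torch.float32),
                              model.parameters())
model.eval()
correct, total = 0, 0
with torch.no_grad():
    for images, labels in test_loader:
        predictions = model(images).argmax(dim=1)
        correct += (predictions == labels).sum().item()
        total += labels.size(0)
print('Accuracy of the best individual: {:.2%}'.format(correct / total))
```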