Use PyTorch to implement a genetic algorithm that plays the Snake game — please give concrete code

Sure. Below is a simple example that uses PyTorch and a genetic algorithm to evolve a neural-network controller for the Snake game:

```python
import copy
import random

import numpy as np
import torch

# Game parameters
WIDTH = 20
HEIGHT = 20
FOOD_REWARD = 10
DEATH_PENALTY = -10
MOVE_PENALTY = -1
MAX_STEPS = 200  # per-episode step limit so that evaluation always terminates


class SnakeGame:
    def __init__(self):
        self.width = WIDTH
        self.height = HEIGHT
        self.food_reward = FOOD_REWARD
        self.death_penalty = DEATH_PENALTY
        self.move_penalty = MOVE_PENALTY
        self.reset()

    def reset(self):
        self.snake = [(0, 0)]
        self.food = self.generate_food()
        self.direction = 0
        self.score = 0
        self.steps = 0

    def generate_food(self):
        # Place food on a random cell that the snake does not occupy
        while True:
            food = (random.randint(0, self.width - 1),
                    random.randint(0, self.height - 1))
            if food not in self.snake:
                return food

    def get_state(self):
        head = self.snake[0]
        left = (head[0] - 1, head[1])
        right = (head[0] + 1, head[1])
        up = (head[0], head[1] - 1)
        down = (head[0], head[1] + 1)
        state = [
            # Danger in each of the four absolute directions
            int((left in self.snake or left[0] < 0) and self.direction != 1),
            int((right in self.snake or right[0] >= self.width) and self.direction != 0),
            int((up in self.snake or up[1] < 0) and self.direction != 3),
            int((down in self.snake or down[1] >= self.height) and self.direction != 2),
            # Danger relative to the current heading
            int(self.direction == 0 and down in self.snake),
            int(self.direction == 0 and up in self.snake),
            int(self.direction == 1 and left in self.snake),
            int(self.direction == 1 and up in self.snake),
            int(self.direction == 2 and up in self.snake),
            int(self.direction == 2 and right in self.snake),
            int(self.direction == 3 and right in self.snake),
            int(self.direction == 3 and down in self.snake),
            # Relative position of the food
            self.food[0] - head[0],
            self.food[1] - head[1],
        ]
        return np.array(state, dtype=int)

    def play_step(self, action):
        # Actions: 0 = up, 1 = down, 2 = left, 3 = right
        self.steps += 1
        done = False
        if action == 0:
            new_head = (self.snake[0][0], self.snake[0][1] - 1)
        elif action == 1:
            new_head = (self.snake[0][0], self.snake[0][1] + 1)
        elif action == 2:
            new_head = (self.snake[0][0] - 1, self.snake[0][1])
        else:
            new_head = (self.snake[0][0] + 1, self.snake[0][1])

        if (new_head[0] < 0 or new_head[0] >= self.width
                or new_head[1] < 0 or new_head[1] >= self.height
                or new_head in self.snake):
            # Hit a wall or the snake's own body: the episode ends
            self.score += self.death_penalty
            reward = self.death_penalty
            done = True
        elif new_head == self.food:
            # Ate the food: grow and spawn new food
            self.score += self.food_reward
            self.snake.insert(0, new_head)
            self.food = self.generate_food()
            reward = self.food_reward
        else:
            # Ordinary move
            self.snake.insert(0, new_head)
            self.snake.pop()
            reward = self.move_penalty

        if not done:
            self.direction = self.get_direction()
        if self.steps >= MAX_STEPS:
            done = True
        return self.get_state(), reward, done

    def get_direction(self):
        if len(self.snake) < 2:
            return self.direction
        dx = self.snake[0][0] - self.snake[1][0]
        dy = self.snake[0][1] - self.snake[1][1]
        if dx == 0:
            return 0 if dy == -1 else 1
        else:
            return 2 if dx == -1 else 3


class GeneticAlgorithm:
    def __init__(self, population_size, mutation_rate, model_fn):
        self.population_size = population_size
        self.mutation_rate = mutation_rate
        self.model_fn = model_fn
        self.population = [model_fn() for _ in range(population_size)]
        self.fitness = [0 for _ in range(population_size)]

    def select(self):
        # Fitness-proportional selection; shift fitness so every weight is positive
        min_fitness = min(self.fitness)
        weights = [f - min_fitness + 1 for f in self.fitness]
        parent1_idx = random.choices(range(self.population_size), weights=weights)[0]
        parent2_idx = random.choices(range(self.population_size), weights=weights)[0]
        return parent1_idx, parent2_idx

    def crossover(self, parent1, parent2):
        # Uniform crossover: swap each weight between the two children with probability 0.5
        child1 = copy.deepcopy(parent1)
        child2 = copy.deepcopy(parent2)
        with torch.no_grad():
            for param1, param2 in zip(child1.parameters(), child2.parameters()):
                mask = torch.empty_like(param1).uniform_() < 0.5
                param1[mask], param2[mask] = param2[mask], param1[mask]
        return child1, child2

    def mutate(self, model):
        # Gaussian mutation: perturb a random subset of the weights
        with torch.no_grad():
            for param in model.parameters():
                mask = torch.empty_like(param).uniform_() < self.mutation_rate
                delta = torch.empty_like(param).normal_(0, 0.1)
                param[mask] += delta[mask]

    def evolve(self):
        # Compute the fitness of the current generation
        for i, model in enumerate(self.population):
            self.fitness[i] = self.evaluate(model)
        # Selection and reproduction of the next generation
        new_population = []
        while len(new_population) < self.population_size:
            parent1_idx, parent2_idx = self.select()
            child1, child2 = self.crossover(self.population[parent1_idx],
                                            self.population[parent2_idx])
            self.mutate(child1)
            self.mutate(child2)
            new_population.extend([child1, child2])
        self.population = new_population[:self.population_size]

    def evaluate(self, model):
        # Let the model play one game; its accumulated reward is its fitness
        game = SnakeGame()
        state = game.get_state()
        done = False
        fitness = 0
        while not done:
            with torch.no_grad():
                action = model(torch.tensor(state).float().unsqueeze(0)).argmax(dim=1).item()
            state, reward, done = game.play_step(action)
            fitness += reward
        return fitness

    def train(self, num_generations):
        for generation in range(num_generations):
            self.evolve()
            fitness = self.fitness
            print(f"Generation: {generation}, Max Fitness: {max(fitness)}, "
                  f"Avg Fitness: {sum(fitness) / len(fitness)}")


if __name__ == "__main__":
    # Factory that builds one individual's model; the state vector has 14 entries
    def make_model():
        return torch.nn.Sequential(
            torch.nn.Linear(14, 32),
            torch.nn.ReLU(),
            torch.nn.Linear(32, 16),
            torch.nn.ReLU(),
            torch.nn.Linear(16, 4),
        )

    # Set up and run the genetic algorithm
    ga = GeneticAlgorithm(population_size=50, mutation_rate=0.1, model_fn=make_model)
    ga.train(num_generations=100)
```

The code defines a `SnakeGame` class that implements the Snake game and a `GeneticAlgorithm` class that implements the genetic algorithm. The `model_fn` parameter of `GeneticAlgorithm` is a factory function used to initialize each individual's model; in this example every individual is a small feed-forward neural network. Inside the `train` method, each generation first evaluates the fitness of every individual, then applies selection, crossover, and mutation to produce the next generation. During training, the maximum and average fitness of each generation are printed.
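After training, you may want to watch the best controller play. The `replay_best` helper below is a hypothetical sketch that is not part of the original answer; it re-evaluates the final population (after the last `evolve` call, `self.fitness` still refers to the previous generation) and replays one episode with the fittest individual, assuming the classes defined above.

```python
# Hypothetical helper (not from the original answer): replay the fittest individual.
def replay_best(ga):
    # Re-evaluate the current population so fitness matches the final individuals
    final_fitness = [ga.evaluate(model) for model in ga.population]
    best_model = ga.population[int(np.argmax(final_fitness))]

    game = SnakeGame()
    state = game.get_state()
    done = False
    total_reward = 0
    while not done:
        with torch.no_grad():
            action = best_model(torch.tensor(state).float().unsqueeze(0)).argmax(dim=1).item()
        state, reward, done = game.play_step(action)
        total_reward += reward
    print(f"Best individual finished with total reward {total_reward}")

# Example usage after ga.train(num_generations=100):
# replay_best(ga)
```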
