Using Keras, write a fully connected neural network with an input dimension of 10, a single output, and three hidden layers of 100 neurons each, using ReLU activation; then optimize the hyperparameters with a genetic algorithm. Please give example code.
Posted: 2024-05-02 13:21:03
Since a genetic algorithm has to search over hyperparameters, we need a function that builds the neural network and returns a performance metric. (The example below evaluates on the MNIST dataset; for the 10-input, single-output network in the question, you would call the same builder with `n_inputs=10` and `n_outputs=1`, adjusting the output activation and loss accordingly.)
First, import the necessary libraries:
```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from keras.datasets import mnist
from keras.utils import to_categorical  # replaces the removed np_utils module
from sklearn.metrics import accuracy_score
```
Next, define a function that builds the neural network:
```python
def build_model(n_inputs, n_outputs, n_layers, n_neurons, activation):
    model = Sequential()
    # The first hidden layer also declares the input dimension
    model.add(Dense(n_neurons, input_dim=n_inputs, activation=activation))
    # Remaining hidden layers (n_layers hidden layers in total)
    for _ in range(n_layers - 1):
        model.add(Dense(n_neurons, activation=activation))
    model.add(Dense(n_outputs, activation='softmax'))
    model.compile(loss='categorical_crossentropy',
                  optimizer=Adam(learning_rate=0.001),  # 'lr' is deprecated
                  metrics=['accuracy'])
    return model
```
The function's parameters are:
- n_inputs: input dimension
- n_outputs: output dimension
- n_layers: number of hidden layers
- n_neurons: number of neurons per hidden layer
- activation: activation function
It returns a compiled Keras model object.
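As a quick sanity check on what this builder produces, the trainable-parameter count for the network in the original question (10 inputs, three hidden layers of 100 neurons, one output) can be worked out by hand; the following sketch needs no Keras:

```python
def dense_params(n_in, n_out):
    # A Dense layer holds an n_in x n_out weight matrix plus n_out biases
    return n_in * n_out + n_out

# Input, three hidden layers of 100 neurons, single output
layer_sizes = [10, 100, 100, 100, 1]
total = sum(dense_params(a, b) for a, b in zip(layer_sizes, layer_sizes[1:]))
print(total)  # 1100 + 10100 + 10100 + 101 = 21401 trainable parameters
```

This matches what `model.summary()` would report for `build_model(10, 1, 3, 100, 'relu')`.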
Next, define a function that computes the model's performance metric (here we use the MNIST dataset as an example):
```python
def evaluate_model(model, X_train, y_train, X_test, y_test):
    # Train silently, then score accuracy on the held-out test set
    model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=0)
    y_pred = np.argmax(model.predict(X_test), axis=1)
    y_true = np.argmax(y_test, axis=1)
    return accuracy_score(y_true, y_pred)
```
The function's parameters are:
- model: a Keras model object
- X_train: training data
- y_train: training labels
- X_test: test data
- y_test: test labels
It returns the model's accuracy on the test data.
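The `np.argmax` step is what turns softmax probabilities and one-hot labels back into class indices before scoring; a minimal illustration with made-up numbers:

```python
import numpy as np

# Fake softmax outputs for 4 samples over 3 classes
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1],
                  [0.2, 0.3, 0.5],
                  [0.6, 0.3, 0.1]])
y_pred = np.argmax(probs, axis=1)          # predicted classes: [1, 0, 2, 0]

# One-hot ground truth for the same samples
y_true_onehot = np.array([[0, 1, 0],
                          [1, 0, 0],
                          [0, 0, 1],
                          [0, 1, 0]])
y_true = np.argmax(y_true_onehot, axis=1)  # true classes: [1, 0, 2, 1]

accuracy = np.mean(y_pred == y_true)       # 3 of 4 correct -> 0.75
```

`accuracy_score(y_true, y_pred)` computes exactly this fraction.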
Next, we define a genetic algorithm, using the DEAP library, to optimize the hyperparameters (here, the number of neurons in each hidden layer):
```python
from deap import algorithms, base, creator, tools
from keras.utils import to_categorical
import random

# Genetic-algorithm parameters
POPULATION_SIZE = 10
P_CROSSOVER = 0.9
P_MUTATION = 0.1
MAX_GENERATIONS = 10
HALL_OF_FAME_SIZE = 3
RANDOM_SEED = 42

# Neural-network parameters
N_INPUTS = 784
N_OUTPUTS = 10
N_LAYERS = 3
N_NEURONS = 100  # default width; the GA searches widths in [50, 500]
ACTIVATION = 'relu'

# Load and normalize the dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], N_INPUTS).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], N_INPUTS).astype('float32') / 255
y_train = to_categorical(y_train, N_OUTPUTS)
y_test = to_categorical(y_test, N_OUTPUTS)

# Seed the RNGs before any stochastic toolbox calls
random.seed(RANDOM_SEED)
np.random.seed(RANDOM_SEED)

# An individual is a list of N_LAYERS integer neuron counts in [50, 500]
creator.create('FitnessMax', base.Fitness, weights=(1.0,))
creator.create('Individual', list, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register('attr_neurons', random.randint, 50, 500)
toolbox.register('individual', tools.initRepeat, creator.Individual,
                 toolbox.attr_neurons, n=N_LAYERS)
toolbox.register('population', tools.initRepeat, list, toolbox.individual)

def fitness(individual):
    # DEAP passes the individual itself, not a model: build a network from
    # the evolved neuron counts, train it, and return a one-element tuple.
    # build_model uses a single width for every layer, so we take the mean.
    n_neurons = int(np.mean(individual))
    model = build_model(N_INPUTS, N_OUTPUTS, N_LAYERS, n_neurons, ACTIVATION)
    return (evaluate_model(model, X_train, y_train, X_test, y_test),)

toolbox.register('evaluate', fitness)
toolbox.register('mate', tools.cxTwoPoint)
toolbox.register('mutate', tools.mutUniformInt, low=50, up=500, indpb=0.05)
toolbox.register('select', tools.selTournament, tournsize=3)

# Run the genetic algorithm
population = toolbox.population(n=POPULATION_SIZE)
hall_of_fame = tools.HallOfFame(HALL_OF_FAME_SIZE)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register('avg', np.mean)
stats.register('min', np.min)
stats.register('max', np.max)
population, logbook = algorithms.eaSimple(
    population, toolbox, cxpb=P_CROSSOVER, mutpb=P_MUTATION,
    ngen=MAX_GENERATIONS, stats=stats, halloffame=hall_of_fame, verbose=True)

# Retrain the best individual from scratch and report its accuracy
best = hall_of_fame.items[0]
model = build_model(N_INPUTS, N_OUTPUTS, len(best), int(np.mean(best)), ACTIVATION)
print('Test accuracy:', evaluate_model(model, X_train, y_train, X_test, y_test))
print('Best solution:', best)
```
The parameters in this code are:
- POPULATION_SIZE: population size
- P_CROSSOVER: crossover probability
- P_MUTATION: mutation probability
- MAX_GENERATIONS: maximum number of generations
- HALL_OF_FAME_SIZE: size of the hall of fame (elite set)
- RANDOM_SEED: random seed
- N_INPUTS: input dimension
- N_OUTPUTS: output dimension
- N_LAYERS: number of hidden layers
- N_NEURONS: neurons per hidden layer
- ACTIVATION: activation function
The code first loads and normalizes the MNIST dataset. It then defines a DEAP toolbox, registers the necessary operators, creates an initial population, and runs the genetic algorithm with the eaSimple function. Finally, it prints the accuracy on the test data and the best hyperparameter solution found. Note that training a network for every individual in every generation is expensive; in practice you may want fewer epochs or a smaller data subset during the search.
Please note that because a genetic algorithm is a stochastic search method, different runs may produce different results.
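To make the mechanics of the search concrete, here is a minimal, DEAP-free sketch of the same generational loop (tournament selection, one-point crossover, uniform mutation) on a toy fitness function; the target width of 300 is purely hypothetical, standing in for the expensive Keras training above:

```python
import random

random.seed(42)
N_LAYERS = 3

def toy_fitness(individual):
    # Toy stand-in for test accuracy: pretend the best networks have
    # every hidden layer near 300 neurons (hypothetical, for illustration)
    return -sum(abs(n - 300) for n in individual)

def random_individual():
    return [random.randint(50, 500) for _ in range(N_LAYERS)]

def tournament_select(pop, k=3):
    # Pick the fittest of k random contestants
    return max(random.sample(pop, k), key=toy_fitness)

def crossover(a, b):
    # One-point crossover on the neuron-count lists
    point = random.randint(1, N_LAYERS - 1)
    return a[:point] + b[point:]

def mutate(ind, indpb=0.2):
    # Each gene is independently redrawn with probability indpb
    return [random.randint(50, 500) if random.random() < indpb else n
            for n in ind]

population = [random_individual() for _ in range(10)]
for generation in range(20):
    population = [mutate(crossover(tournament_select(population),
                                   tournament_select(population)))
                  for _ in range(len(population))]

best = max(population, key=toy_fitness)
print(best, toy_fitness(best))
```

DEAP's eaSimple performs essentially this loop, with the crossover and mutation rates applied per pair/individual and a hall of fame tracking the best solutions seen.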