Keras: loading your own trained model and removing the fully connected layers
This article shows how to load a model you trained yourself in Keras and strip off its fully connected layers. It should serve as a useful reference; hopefully it helps.
It is actually quite simple:
from keras.models import load_model
base_model = load_model('model_resenet.h5')  # load the saved model
base_model.summary()  # print the network architecture; summary() already prints, so wrapping it in print() just adds a stray "None"
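With the model loaded, removing the fully connected head amounts to building a new `Model` that reuses the same input but stops at an intermediate layer. A minimal self-contained sketch of the idea (a tiny stand-in network is built here instead of loading `model_resenet.h5`, and the layer name `conv_keep` is made up for illustration):

```python
from keras.layers import Input, Conv2D, Flatten, Dense
from keras.models import Model

# Tiny stand-in network; in the article this would come from
# load_model('model_resenet.h5') instead.
inp = Input(shape=(8, 8, 1))
features = Conv2D(4, 3, activation='relu', name='conv_keep')(inp)
preds = Dense(10, activation='softmax')(Flatten()(features))
full_model = Model(inp, preds)

# Cut off the head: same input, but output taken from a named
# intermediate layer instead of the final Dense classifier.
headless = Model(inputs=full_model.input,
                 outputs=full_model.get_layer('conv_keep').output)

print(headless.output_shape)  # the Dense head is gone
```

In practice you would pick the layer name from the summary printed above (for example, the last activation before the classifier) and pass it to `get_layer`.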
This is the output for my model, i.e. its architecture summary:
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 227, 227, 1) 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 225, 225, 32) 320 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 225, 225, 32) 128 conv2d_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 225, 225, 32) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 225, 225, 32) 9248 activation_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 225, 225, 32) 128 conv2d_2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 225, 225, 32) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 225, 225, 32) 9248 activation_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 225, 225, 32) 128 conv2d_3[0][0]
__________________________________________________________________________________________________
merge_1 (Merge) (None, 225, 225, 32) 0 batch_normalization_3[0][0]
activation_1[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 225, 225, 32) 0 merge_1[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 225, 225, 32) 9248 activation_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 225, 225, 32) 128 conv2d_4[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 225, 225, 32) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 225, 225, 32) 9248 activation_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 225, 225, 32) 128 conv2d_5[0][0]
__________________________________________________________________________________________________
merge_2 (Merge) (None, 225, 225, 32) 0 batch_normalization_5[0][0]
activation_3[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 225, 225, 32) 0 merge_2[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 112, 112, 32) 0 activation_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 110, 110, 64) 18496 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 110, 110, 64) 256 conv2d_6[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 110, 110, 64) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 110, 110, 64) 36928 activation_6[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 110, 110, 64) 256 conv2d_7[0][0]
__________________________________________________________________________________________________
activation_7 (Activation) (None, 110, 110, 64) 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 110, 110, 64) 36928 activation_7[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 110, 110, 64) 256 conv2d_8[0][0]
__________________________________________________________________________________________________
merge_3 (Merge) (None, 110, 110, 64) 0 batch_normalization_8[0][0]
activation_6[0][0]
__________________________________________________________________________________________________
activation_8 (Activation) (None, 110, 110, 64) 0 merge_3[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 110, 110, 64) 36928 activation_8[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 110, 110, 64) 256 conv2d_9[0][0]
__________________________________________________________________________________________________
activation_9 (Activation) (None, 110, 110, 64) 0 batch_normalization_9[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 110, 110, 64) 36928 activation_9[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 110, 110, 64) 256 conv2d_10[0][0]
__________________________________________________________________________________________________
merge_4 (Merge) (None, 110, 110, 64) 0 batch_normalization_10[0][0]
activation_8[0][0]
__________________________________________________________________________________________________
activation_10 (Activation) (None, 110, 110, 64) 0 merge_4[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 55, 55, 64) 0 activation_10[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 53, 53, 64) 36928 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 53, 53, 64) 256 conv2d_11[0][0]
__________________________________________________________________________________________________
activation_11 (Activation) (None, 53, 53, 64) 0 batch_normalization_11[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 26, 26, 64) 0 activation_11[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 26, 26, 64) 36928 max_pooling2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 26, 26, 64) 256 conv2d_12[0][0]