Build a simple convolutional neural network in TensorFlow with three convolutional layers and pooling layers, train it on CIFAR-10, describe the network architecture, and print the complete code and output.
Posted: 2023-06-12 07:08:15
The network architecture is as follows:

Input -> Conv1 -> Pool1 -> Conv2 -> Pool2 -> Conv3 -> Pool3 -> Fully connected 1 -> Output

All three convolutional layers use 3x3 kernels with stride 1, `same` padding, and ReLU activation. All three pooling layers use 2x2 max-pooling with stride 2. Fully connected layer 1 has 128 neurons with ReLU activation, and the output layer has 10 neurons with softmax activation.
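Because `same` padding preserves spatial size and each 2x2/stride-2 pooling halves it, a 32x32x3 input shrinks 32 -> 16 -> 8 -> 4 before the flatten. This shape bookkeeping can be checked with a few lines of plain Python (no TensorFlow needed; the filter counts 32/64/128 match the code below):

```python
# Shape bookkeeping for the architecture above.
# 'same' padding with stride 1 keeps H and W; 2x2 max-pooling with stride 2 halves them.
h = w = 32      # CIFAR-10 images are 32x32x3
channels = 3
for filters in (32, 64, 128):
    channels = filters      # conv layer: spatial size unchanged, channels = filter count
    h, w = h // 2, w // 2   # pooling layer halves each spatial dimension
flat = h * w * channels     # size of the Flatten output feeding the 128-unit dense layer
print(h, w, channels, flat)  # 4 4 128 2048
```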
The complete code is as follows:
```python
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
# Load the CIFAR-10 dataset
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
# Normalize pixel values to [0, 1]
train_images, test_images = train_images / 255.0, test_images / 255.0
# Build the model
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# Train for 10 epochs, validating on the test set
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))
# Evaluate and print the final test accuracy
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
```
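As a sanity check on the architecture, the trainable parameter count can be computed by hand (Conv2D: `(kh*kw*in_ch + 1) * filters`; Dense: `(in + 1) * units`, the `+1` being biases). A small sketch, independent of TensorFlow, whose totals should match `model.summary()`:

```python
# Hand-computed parameter counts for the model above.
def conv_params(kh, kw, in_ch, filters):
    return (kh * kw * in_ch + 1) * filters  # +1 for each filter's bias

def dense_params(in_units, units):
    return (in_units + 1) * units  # +1 row of biases

counts = [
    conv_params(3, 3, 3, 32),        # Conv2D(32)  -> 896
    conv_params(3, 3, 32, 64),       # Conv2D(64)  -> 18496
    conv_params(3, 3, 64, 128),      # Conv2D(128) -> 73856
    dense_params(4 * 4 * 128, 128),  # Dense(128) on the 2048-dim flatten -> 262272
    dense_params(128, 10),           # Dense(10)   -> 1290
]
print(sum(counts))  # 356810 trainable parameters
```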
The output is as follows:
```
Epoch 1/10
1563/1563 [==============================] - 67s 43ms/step - loss: 1.4496 - accuracy: 0.4795 - val_loss: 1.1835 - val_accuracy: 0.5788
Epoch 2/10
1563/1563 [==============================] - 67s 43ms/step - loss: 1.0275 - accuracy: 0.6395 - val_loss: 0.9430 - val_accuracy: 0.6691
Epoch 3/10
1563/1563 [==============================] - 68s 43ms/step - loss: 0.8442 - accuracy: 0.7063 - val_loss: 0.8638 - val_accuracy: 0.6996
Epoch 4/10
1563/1563 [==============================] - 68s 43ms/step - loss: 0.7157 - accuracy: 0.7519 - val_loss: 0.8371 - val_accuracy: 0.7113
Epoch 5/10
1563/1563 [==============================] - 68s 43ms/step - loss: 0.6147 - accuracy: 0.7883 - val_loss: 0.8075 - val_accuracy: 0.7249
Epoch 6/10
1563/1563 [==============================] - 68s 44ms/step - loss: 0.5215 - accuracy: 0.8175 - val_loss: 0.8597 - val_accuracy: 0.7230
Epoch 7/10
1563/1563 [==============================] - 68s 44ms/step - loss: 0.4465 - accuracy: 0.8420 - val_loss: 0.8782 - val_accuracy: 0.7270
Epoch 8/10
1563/1563 [==============================] - 70s 45ms/step - loss: 0.3764 - accuracy: 0.8680 - val_loss: 0.9094 - val_accuracy: 0.7235
Epoch 9/10
1563/1563 [==============================] - 71s 45ms/step - loss: 0.3163 - accuracy: 0.8899 - val_loss: 0.9812 - val_accuracy: 0.7185
Epoch 10/10
1563/1563 [==============================] - 72s 46ms/step - loss: 0.2670 - accuracy: 0.9067 - val_loss: 1.0317 - val_accuracy: 0.7199
313/313 [==============================] - 3s 11ms/step - loss: 1.0317 - accuracy: 0.7199
Test accuracy: 0.7198999524116516
```
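Note that `val_loss` bottoms out at epoch 5 (0.8075) and rises afterwards while training accuracy keeps climbing, a sign of overfitting. In practice a Keras `EarlyStopping(monitor='val_loss', patience=...)` callback would stop training around that point; as an illustration, here is that patience logic replayed in plain Python on the `val_loss` values from the log above:

```python
# Replaying EarlyStopping(monitor='val_loss', patience=2) logic on the logged values.
val_losses = [1.1835, 0.9430, 0.8638, 0.8371, 0.8075,
              0.8597, 0.8782, 0.9094, 0.9812, 1.0317]
patience = 2
best, wait, stopped_epoch = float('inf'), 0, None
for epoch, loss in enumerate(val_losses, start=1):
    if loss < best:
        best, wait = loss, 0  # improvement: reset the patience counter
    else:
        wait += 1             # no improvement this epoch
        if wait >= patience:
            stopped_epoch = epoch
            break
print(best, stopped_epoch)  # 0.8075 7 -- training would stop after epoch 7
```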