A Two-Dimensional Convolutional Neural Network Example in TensorFlow
Below is a simple TensorFlow implementation of a two-dimensional convolutional neural network, trained on MNIST:
```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load MNIST; reshape=False keeps the images as 28x28x1 tensors
# (this example uses the TensorFlow 1.x API: placeholders and sessions)
mnist = input_data.read_data_sets('MNIST_data', one_hot=True, reshape=False)

# Placeholders for the input images, the labels, and the dropout keep probability
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)
# Kernel weights and bias for the first convolutional layer
w_conv1 = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
b_conv1 = tf.Variable(tf.constant(0.1, shape=[32]))
# Kernel weights and bias for the second convolutional layer
w_conv2 = tf.Variable(tf.truncated_normal([5, 5, 32, 64], stddev=0.1))
b_conv2 = tf.Variable(tf.constant(0.1, shape=[64]))
# Weights and biases for the fully connected layers
w_fc1 = tf.Variable(tf.truncated_normal([7 * 7 * 64, 1024], stddev=0.1))
b_fc1 = tf.Variable(tf.constant(0.1, shape=[1024]))
w_fc2 = tf.Variable(tf.truncated_normal([1024, 10], stddev=0.1))
b_fc2 = tf.Variable(tf.constant(0.1, shape=[10]))
# Convolution and max-pooling helpers
def conv2d(x, w):
    return tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')
# First convolution + pooling: 28x28x1 -> 14x14x32
h_conv1 = tf.nn.relu(conv2d(x, w_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
# Second convolution + pooling: 14x14x32 -> 7x7x64
h_conv2 = tf.nn.relu(conv2d(h_pool1, w_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
# Flatten the pooled feature maps
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
# Fully connected layer with dropout, then the output logits
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, w_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob=keep_prob)
y_ = tf.matmul(h_fc1_drop, w_fc2) + b_fc2
# Cross-entropy loss and Adam optimizer
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# Accuracy of the predictions
correct_prediction = tf.equal(tf.argmax(y_, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train the model
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        batch = mnist.train.next_batch(50)
        if i % 100 == 0:
            train_accuracy = accuracy.eval(feed_dict={
                x: batch[0], y: batch[1], keep_prob: 1.0})
            print('step %d, training accuracy %g' % (i, train_accuracy))
        train_step.run(feed_dict={x: batch[0], y: batch[1], keep_prob: 0.5})
    # Evaluate accuracy on the test set
    print('test accuracy %g' % accuracy.eval(feed_dict={
        x: mnist.test.images, y: mnist.test.labels, keep_prob: 1.0}))
```
The model consists of two convolution-and-pooling stages followed by two fully connected layers. Training uses cross-entropy as the loss function, the Adam optimizer for optimization, and dropout for regularization; performance is then measured by computing accuracy on the test set. Note that the code above uses the TensorFlow 1.x API (placeholders and sessions), which is not available by default in TensorFlow 2.x.
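For readers on TensorFlow 2.x, a minimal tf.keras sketch of the same architecture (same layer sizes, dropout rate, and learning rate) might look as follows. This is an illustrative equivalent, not part of the original example, and the commented training call assumes you have already loaded data as `x_train`/`y_train` with shape (N, 28, 28, 1) and one-hot labels:
```
import tensorflow as tf

# Same structure as the TF 1.x graph above: two conv+pool stages, FC(1024), dropout, logits
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 5, padding='same', activation='relu',
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(64, 5, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dropout(0.5),   # Keras takes a drop rate, not a keep probability
    tf.keras.layers.Dense(10),      # logits for the 10 digit classes
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Hypothetical training call, assuming x_train/y_train are prepared as described above:
# model.fit(x_train, y_train, batch_size=50, epochs=1, validation_split=0.1)
```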