TensorFlow Program for a 2D Convolutional Neural Network
Below is a simple example of a 2D convolutional neural network written with the TensorFlow 1.x graph-mode API:
```python
import tensorflow as tf

# Placeholders for the input images (28x28 grayscale), the one-hot labels,
# and a flag that switches dropout on during training only
x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
y_true = tf.placeholder(tf.float32, shape=[None, 10])
is_training = tf.placeholder(tf.bool)

# Network structure: two conv + max-pooling stages followed by a dense head
conv_layer_1 = tf.layers.conv2d(x, filters=32, kernel_size=[5, 5], activation=tf.nn.relu)
pool_layer_1 = tf.layers.max_pooling2d(conv_layer_1, pool_size=[2, 2], strides=2)
conv_layer_2 = tf.layers.conv2d(pool_layer_1, filters=64, kernel_size=[5, 5], activation=tf.nn.relu)
pool_layer_2 = tf.layers.max_pooling2d(conv_layer_2, pool_size=[2, 2], strides=2)
flatten_layer = tf.layers.flatten(pool_layer_2)
dense_layer = tf.layers.dense(flatten_layer, units=1024, activation=tf.nn.relu)
dropout_layer = tf.layers.dropout(dense_layer, rate=0.4, training=is_training)
logits = tf.layers.dense(dropout_layer, units=10)

# Loss: softmax cross-entropy between the logits and the one-hot labels
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_true, logits=logits))

# Optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
train = optimizer.minimize(cross_entropy)

# Evaluation metric: classification accuracy
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y_true, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Variable initializer
init = tf.global_variables_initializer()

# Training loop; train_data, train_labels, test_data, test_labels and
# next_batch() must be supplied by the caller (see the sketch below)
with tf.Session() as sess:
    sess.run(init)
    for i in range(1000):
        batch_x, batch_y = next_batch(train_data, train_labels, batch_size=50)
        sess.run(train, feed_dict={x: batch_x, y_true: batch_y, is_training: True})
        if i % 100 == 0:
            acc = sess.run(accuracy,
                           feed_dict={x: test_data, y_true: test_labels, is_training: False})
            print("Step:", i, "Accuracy:", acc)
```
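The example relies on TensorFlow 1.x graph-mode APIs (`tf.placeholder`, `tf.layers`, `tf.Session`), which are no longer exposed at the top level in TensorFlow 2.x. A minimal compatibility sketch, assuming you have a TensorFlow 2.x release where the `tf.compat.v1` module is still available, would be:
```python
# Compatibility sketch (assumption: TensorFlow 2.x with tf.compat.v1 available).
# Importing the v1 namespace as `tf` and disabling v2 behavior restores the
# placeholders, sessions, tf.layers and tf.train.AdamOptimizer used above.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# ...the network definition and training loop from the example run unchanged.
```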
In this example we define a convolutional neural network with two convolutional layers and two max-pooling layers, followed by a fully connected layer and a dropout layer to reduce overfitting (the `is_training` placeholder ensures dropout is only applied while training). The loss is softmax cross-entropy, optimization uses the Adam optimizer, and accuracy serves as the evaluation metric. Data is passed to the placeholders through the `feed_dict` argument of `sess.run()`; during training, a `next_batch()` helper draws mini-batches of 50 examples from the training data (a possible implementation is sketched below). The model is trained by running the `train` op in a loop, and accuracy on the test set is reported every 100 steps.
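The names `next_batch()`, `train_data`, `train_labels`, `test_data`, and `test_labels` are not defined in the example. One possible way to provide them, assuming MNIST-style data loaded through `tf.keras.datasets.mnist`, is the following sketch:
```python
import numpy as np
import tensorflow as tf

# Hypothetical data preparation: reshape MNIST images to [N, 28, 28, 1],
# scale pixels to [0, 1], and one-hot encode the labels to [N, 10].
(train_images, train_lbls), (test_images, test_lbls) = tf.keras.datasets.mnist.load_data()
train_data = train_images.reshape(-1, 28, 28, 1).astype("float32") / 255.0
test_data = test_images.reshape(-1, 28, 28, 1).astype("float32") / 255.0
train_labels = np.eye(10, dtype="float32")[train_lbls]
test_labels = np.eye(10, dtype="float32")[test_lbls]

def next_batch(data, labels, batch_size):
    """Sample a random mini-batch from the training data."""
    idx = np.random.choice(len(data), batch_size, replace=False)
    return data[idx], labels[idx]
```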