Deep Learning CNN Example Code
Posted: 2023-09-10 15:06:22
The following is example code for a convolutional neural network (CNN) built with TensorFlow 1.x:
```python
import tensorflow as tf
# Requires TensorFlow 1.x (tf.placeholder, tf.layers, tf.Session)
from tensorflow.examples.tutorials.mnist import input_data

# Load MNIST; reshape=False keeps images as 28x28x1 instead of flat 784 vectors
mnist = input_data.read_data_sets('MNIST_data', one_hot=True, reshape=False)

# Hyperparameters
learning_rate = 0.001
num_epochs = 10
batch_size = 128
dropout_rate = 0.5  # keep probability fed to dropout during training

# Inputs and targets
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1], name='inputs')
targets = tf.placeholder(tf.float32, [None, 10], name='targets')
keep_prob = tf.placeholder(tf.float32)

# Convolutional layers
conv1 = tf.layers.conv2d(inputs, filters=32, kernel_size=(3, 3), activation=tf.nn.relu, padding='same')
pool1 = tf.layers.max_pooling2d(conv1, pool_size=(2, 2), strides=(2, 2), padding='same')
conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=(3, 3), activation=tf.nn.relu, padding='same')
pool2 = tf.layers.max_pooling2d(conv2, pool_size=(2, 2), strides=(2, 2), padding='same')
flatten = tf.layers.flatten(pool2)

# Fully connected layers
fc1 = tf.layers.dense(flatten, units=128, activation=tf.nn.relu)
fc1_dropout = tf.nn.dropout(fc1, keep_prob=keep_prob)
logits = tf.layers.dense(fc1_dropout, units=10, activation=None)

# Loss and optimizer
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=targets))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)

# Accuracy
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(targets, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Train the model
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(num_epochs):
        for batch in range(mnist.train.num_examples // batch_size):
            batch_inputs, batch_targets = mnist.train.next_batch(batch_size)
            sess.run(optimizer, feed_dict={inputs: batch_inputs, targets: batch_targets, keep_prob: dropout_rate})
        # Evaluate once per epoch with dropout disabled (keep_prob=1.0)
        train_accuracy = sess.run(accuracy, feed_dict={inputs: mnist.train.images, targets: mnist.train.labels, keep_prob: 1.0})
        test_accuracy = sess.run(accuracy, feed_dict={inputs: mnist.test.images, targets: mnist.test.labels, keep_prob: 1.0})
        print('Epoch: {}, Train Accuracy: {:.4f}, Test Accuracy: {:.4f}'.format(epoch + 1, train_accuracy, test_accuracy))
```
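The shape bookkeeping in the two conv/pool stages (28x28 -> 14x14 -> 7x7, thanks to 'same' padding and 2x2 pooling) can be checked with a small NumPy sketch of a single-channel convolution and max pool. This is an illustrative re-implementation, not TensorFlow code:

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive 'same'-padded 2-D cross-correlation (what TF calls convolution)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero-pad so output keeps input size
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2(x):
    """2x2 max pooling with stride 2 (assumes even height and width)."""
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

img = np.random.rand(28, 28)   # one MNIST-sized channel
k = np.random.rand(3, 3)       # one 3x3 kernel
a = maxpool2(conv2d_same(img, k))  # 28x28 -> 14x14
b = maxpool2(conv2d_same(a, k))    # 14x14 -> 7x7
```

After the second pool, flattening the 7x7x64 feature maps gives the 3136-dimensional vector that feeds the dense layers in the TensorFlow code above.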
The code above is a simple CNN for digit classification on the MNIST dataset. It contains two convolutional layers and two fully connected layers, and uses ReLU activations together with dropout to improve generalization. During training, the Adam optimizer minimizes the softmax cross-entropy loss, and accuracy is computed to evaluate the model.
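The softmax cross-entropy loss used above can be written out explicitly for one-hot labels. A minimal NumPy sketch (numerically stabilized by subtracting the row max, the same trick TF applies internally):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Per-example softmax cross-entropy for one-hot labels."""
    z = logits - logits.max(axis=1, keepdims=True)      # stabilize exponentials
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(labels * log_probs).sum(axis=1)            # -log p(correct class)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])   # correct class is index 0
loss = softmax_cross_entropy(logits, labels).mean()
```

Averaging this quantity over a batch is exactly what `tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(...))` computes in the training code.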