tf.nn.dropout()
Date: 2023-09-17
`tf.nn.dropout()` is a TensorFlow function that applies dropout inside a neural network. Dropout is a widely used regularization technique that effectively reduces the risk of overfitting: during training, it randomly sets the outputs of a fraction of neurons to 0, weakening co-adaptation between neurons and making the network more robust.

Concretely, `tf.nn.dropout()` takes a tensor and a keep probability, and returns a tensor of the same shape in which some elements are randomly zeroed while the surviving elements are scaled by `1 / keep_prob`, so the expected sum of activations is unchanged. The following example shows typical usage (TensorFlow 1.x API):
```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load MNIST with one-hot labels
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

x = tf.placeholder(tf.float32, shape=[None, 784])
keep_prob = tf.placeholder(tf.float32)
# Fully connected hidden layer
W_fc1 = tf.Variable(tf.truncated_normal([784, 512], stddev=0.1))
b_fc1 = tf.Variable(tf.constant(0.1, shape=[512]))
h_fc1 = tf.nn.relu(tf.matmul(x, W_fc1) + b_fc1)
# Dropout: keep each activation with probability keep_prob
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
# Output layer
W_fc2 = tf.Variable(tf.truncated_normal([512, 10], stddev=0.1))
b_fc2 = tf.Variable(tf.constant(0.1, shape=[10]))
y = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
# Loss function
y_ = tf.placeholder(tf.float32, shape=[None, 10])
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
# Optimizer
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# Accuracy metric
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train the model; dropout is disabled (keep_prob=1.0) when evaluating
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        batch = mnist.train.next_batch(50)
        if i % 100 == 0:
            train_accuracy = accuracy.eval(
                feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
            print('step %d, training accuracy %g' % (i, train_accuracy))
        train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
```
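The scaling behavior described above, where kept elements are divided by the keep probability so the expected activation is unchanged (so-called "inverted dropout"), can be sketched in a few lines of plain NumPy. This is a minimal illustration of the mechanics only, not TensorFlow's actual implementation:

```
import numpy as np

def inverted_dropout(x, keep_prob, rng=None):
    # Keep each element with probability keep_prob and scale
    # survivors by 1/keep_prob so E[output] == E[input].
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones((4, 4))
y = inverted_dropout(x, keep_prob=0.5)
# Surviving entries become 1 / 0.5 = 2.0; the rest are 0.0
```

Because the scaling happens at training time, no rescaling is needed at inference time: the network is simply run with dropout turned off (`keep_prob = 1.0` in the TF1 example above).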