Write a snippet of Python deep learning code
Here is a Python deep learning snippet based on TensorFlow (it uses the TensorFlow 1.x graph API):
```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the MNIST dataset (downloaded on first use)
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Define the input and output placeholders
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])

# Define the weights and bias
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Define the model: softmax regression
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Define the loss function: cross-entropy
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

# Define the optimizer
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

# Initialize the variables (tf.initialize_all_variables is deprecated)
init = tf.global_variables_initializer()

# Create a session and run the initializer
sess = tf.Session()
sess.run(init)

# Train the model on mini-batches of 100 examples
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```
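The snippet above only trains the model. A minimal accuracy check can be appended after the training loop; this is a sketch that assumes the `sess`, `mnist`, `x`, `y`, and `y_` objects defined above:

```
# Fraction of test images whose highest-probability class matches the true label
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Accuracy:", sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
```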
Related questions
Write some Python deep learning code
I can write a deep learning snippet for you (again in the TensorFlow 1.x style):
```
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load MNIST (the original snippet referenced an undefined `data` object)
data = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Define the input and output
X = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
# Define the model parameters
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
# Define the model
y_pred = tf.nn.softmax(tf.matmul(X, W) + b)
# Define the loss function and optimizer
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(y_pred), reduction_indices=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cost)
# Initialize the variables
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
    sess.run(init)
    # Iterate: full-batch gradient descent over the training set
    for epoch in range(1000):
        _, c = sess.run([optimizer, cost], feed_dict={X: data.train.images, y: data.train.labels})
        if (epoch + 1) % 100 == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(c))
    print("Optimization Finished!")
    # Compute accuracy on the test set
    correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print("Accuracy:", accuracy.eval({X: data.test.images, y: data.test.labels}))
```
Write a deep learning snippet in Python
Sure, here is a Python deep learning example (this one uses the TensorFlow 2.x Keras API):
```
import numpy as np
import tensorflow as tf

# Load the dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Scale pixel values into the [0, 1] range
train_images, test_images = train_images / 255.0, test_images / 255.0

# Build the model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])

# Define the loss function and optimizer
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

# Train the model
model.compile(optimizer=optimizer,
              loss=loss_fn,
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5, validation_data=(test_images, test_labels))
```
This example is a simple TensorFlow-based deep learning model for MNIST digit classification. It feeds the flattened input through a fully connected network with a single 128-unit hidden layer (plus dropout for regularization) and trains it with the `Adam` optimizer.
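After training, the model can be scored on the held-out test set and used for predictions. The following is a minimal sketch that assumes the `model`, `test_images`, and `test_labels` objects from the example above:

```
# Report loss and accuracy on the test set
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print("Test accuracy:", test_acc)

# The Dense(10) output layer emits logits; attach a softmax to get probabilities
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
predictions = probability_model(test_images[:5])
print("Predicted digits:", np.argmax(predictions, axis=1))
```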