Backpropagation Code for a Simple Neural Network
Date: 2024-09-29 21:06:19
In deep learning, backpropagation is the method used to compute gradients in a neural network; those gradients are then used to optimize the network's weights so that predictions move closer to the true values. Below is a simple backpropagation example implemented in Python with TensorFlow, assuming a feedforward network with a single hidden layer:
```python
import tensorflow as tf

# Network dimensions and example data
input_size, hidden_units, output_size = 4, 8, 1
X = tf.random.normal([32, input_size])        # batch of 32 samples
y_true = tf.random.normal([32, output_size])  # matching targets

# Model parameters
weights = {
    'hidden': tf.Variable(tf.random.normal([input_size, hidden_units])),
    'output': tf.Variable(tf.random.normal([hidden_units, output_size]))
}
biases = {
    'hidden': tf.Variable(tf.zeros(hidden_units)),
    'output': tf.Variable(tf.zeros(output_size))
}

# Forward pass: one hidden layer with ReLU activation
def forward_pass(X):
    hidden_layer = tf.nn.relu(tf.matmul(X, weights['hidden']) + biases['hidden'])
    return tf.matmul(hidden_layer, weights['output']) + biases['output']

# Loss function and optimizer for training
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

# Backpropagation: record the forward pass on a tape,
# then compute gradients and apply them to the parameters
trainable_vars = [weights['hidden'], biases['hidden'],
                  weights['output'], biases['output']]
with tf.GradientTape() as tape:
    y_pred = forward_pass(X)
    loss_value = loss_fn(y_true, y_pred)
gradients = tape.gradient(loss_value, trainable_vars)
optimizer.apply_gradients(zip(gradients, trainable_vars))
```
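In practice, a single gradient step like the one above is repeated over many epochs until the loss stops decreasing. The following minimal sketch illustrates such a training loop; the toy data (`X`, `y_true`, here a learnable sum-of-inputs target), the layer sizes, the learning rate, and the epoch count are all illustrative assumptions, not part of the original snippet:

```python
import tensorflow as tf

tf.random.set_seed(0)

# Toy regression data (illustrative only): learn y = sum of inputs
input_size, hidden_units, output_size = 4, 8, 1
X = tf.random.normal([64, input_size])
y_true = tf.reduce_sum(X, axis=1, keepdims=True)

# Parameters, forward pass, loss, and optimizer as in the snippet above
weights = {
    'hidden': tf.Variable(tf.random.normal([input_size, hidden_units], stddev=0.1)),
    'output': tf.Variable(tf.random.normal([hidden_units, output_size], stddev=0.1)),
}
biases = {
    'hidden': tf.Variable(tf.zeros(hidden_units)),
    'output': tf.Variable(tf.zeros(output_size)),
}
trainable_vars = list(weights.values()) + list(biases.values())

def forward_pass(X):
    hidden = tf.nn.relu(tf.matmul(X, weights['hidden']) + biases['hidden'])
    return tf.matmul(hidden, weights['output']) + biases['output']

loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.05)

# Repeat the backpropagation step for several epochs
losses = []
for epoch in range(200):
    with tf.GradientTape() as tape:
        loss_value = loss_fn(y_true, forward_pass(X))
    grads = tape.gradient(loss_value, trainable_vars)
    optimizer.apply_gradients(zip(grads, trainable_vars))
    losses.append(float(loss_value))

print(f"initial loss: {losses[0]:.4f}, final loss: {losses[-1]:.4f}")
```

On this toy problem the recorded loss should fall steadily; tracking the loss per epoch is also the usual way to decide when to stop training or adjust the learning rate.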