Write Python code implementing the BP algorithm for a simple 3-layer network; M, N, and L are each 4.
Python Implementation of the Neural Network Backpropagation (BP) Algorithm
In Python, you could build a simple three-layer neural network with a deep learning library such as TensorFlow or PyTorch. Here we use plain NumPy for the numerical computation instead: it avoids heavy dependencies and suits a small-scale example that implements backpropagation (BP) from scratch.
```python
import numpy as np

# Per the problem statement, M (number of samples), N (input size),
# and L (hidden-layer size) are all 4
M, N, L = 4, 4, 4

# Input data has shape (M, N); the hidden layer has L nodes and the
# output layer has 1 node
input_size = N
hidden_layer_size = L
output_size = 1

# Initialize weight matrices and bias vectors for both layers
W1 = np.random.randn(input_size, hidden_layer_size)
b1 = np.zeros((1, hidden_layer_size))
W2 = np.random.randn(hidden_layer_size, output_size)
b2 = np.zeros((1, output_size))

learning_rate = 0.01

# Simulated sample data
X = np.random.rand(M, input_size)
y = np.random.rand(M, output_size)  # corresponding target values

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def forward_pass(X):
    Z1 = np.dot(X, W1) + b1
    A1 = sigmoid(Z1)
    Z2 = np.dot(A1, W2) + b2
    Y_hat = sigmoid(Z2)
    return Y_hat, A1

def loss_function(Y, Y_hat):
    # Binary cross-entropy, averaged over the samples
    return np.mean(-Y * np.log(Y_hat) - (1 - Y) * np.log(1 - Y_hat))

def backward_pass(X, Y, Y_hat, A1):
    # With a sigmoid output and cross-entropy loss, dL/dZ2 simplifies to
    # (Y_hat - Y); divide by the batch size because the loss is a mean
    dZ2 = (Y_hat - Y) / Y.shape[0]
    dW2 = np.dot(A1.T, dZ2)
    db2 = np.sum(dZ2, axis=0, keepdims=True)
    # Propagate through W2, then through the hidden sigmoid,
    # whose derivative is A1 * (1 - A1)
    dA1 = np.dot(dZ2, W2.T)
    dZ1 = dA1 * A1 * (1 - A1)
    dW1 = np.dot(X.T, dZ1)
    db1 = np.sum(dZ1, axis=0, keepdims=True)
    return dW1, db1, dW2, db2

def update_weights(dW1, db1, dW2, db2):
    # The parameters live at module level, so declare them global
    # before applying the in-place gradient step
    global W1, b1, W2, b2
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

for i in range(1000):  # number of iterations; adjust as needed
    Y_hat, A1 = forward_pass(X)
    loss = loss_function(y, Y_hat)
    dW1, db1, dW2, db2 = backward_pass(X, y, Y_hat, A1)
    update_weights(dW1, db1, dW2, db2)
    if i % 100 == 0:
        print(f"iteration {i}, loss: {loss:.4f}")

print(f"Final weights after training:\nW1: {W1}\nW2: {W2}")
```
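A useful habit when hand-deriving gradients like these is to compare them against numerical finite differences. Below is a minimal sketch of such a check; the `numerical_grad_W2` helper is illustrative (not part of any library) and assumes the `forward_pass`, `backward_pass`, `loss_function`, `W2`, `X`, and `y` defined above are in scope.

```python
def numerical_grad_W2(i, j, eps=1e-5):
    """Finite-difference estimate of dL/dW2[i, j] (illustrative helper)."""
    original = W2[i, j]
    W2[i, j] = original + eps        # nudge the weight up
    loss_plus = loss_function(y, forward_pass(X)[0])
    W2[i, j] = original - eps        # nudge the weight down
    loss_minus = loss_function(y, forward_pass(X)[0])
    W2[i, j] = original              # restore the original weight
    return (loss_plus - loss_minus) / (2 * eps)

# Compare one analytic gradient entry against the numerical estimate;
# the two values should agree to several decimal places
Y_hat, A1 = forward_pass(X)
_, _, dW2, _ = backward_pass(X, y, Y_hat, A1)
print("analytic:", dW2[0, 0], "numerical:", numerical_grad_W2(0, 0))
```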
Note: this is a very basic implementation and is not suited to large-scale training. Real applications are considerably more complex, involving mini-batching, optimizers, gradient clipping, and so on. The example also omits regularization, dropout, and similar techniques; it is only meant to demonstrate the basic skeleton of a three-layer BP algorithm.
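As one illustration of the mini-batching mentioned above, the full-batch loop could be adapted as sketched below. The `batch_size` value and the per-epoch shuffling are assumptions for demonstration, not part of the original answer; because `backward_pass` divides by `Y.shape[0]`, the gradients stay correctly averaged for any batch size.

```python
batch_size = 2  # illustrative choice; with M = 4 this gives two batches per epoch

for epoch in range(1000):
    perm = np.random.permutation(M)  # visit samples in a fresh random order
    for start in range(0, M, batch_size):
        idx = perm[start:start + batch_size]
        X_batch, y_batch = X[idx], y[idx]
        Y_hat, A1 = forward_pass(X_batch)
        dW1, db1, dW2, db2 = backward_pass(X_batch, y_batch, Y_hat, A1)
        update_weights(dW1, db1, dW2, db2)
```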