Python code for the backpropagation algorithm in deep learning
Backpropagation code for deep learning
Deep learning is a machine learning approach whose core idea is to learn layered abstractions of the data by building multi-layer neural network models, enabling effective classification, recognition, and prediction. Backpropagation is the standard algorithm for training such networks: it propagates the loss gradient backwards through the layers via the chain rule, so the weights can be updated by gradient descent. Below is a Python example of the backpropagation algorithm:
```python
# Python example of the backpropagation algorithm
import numpy as np

# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Forward pass of the two-layer network
def forward(X, W1, b1, W2, b2):
    z1 = np.dot(X, W1) + b1
    a1 = sigmoid(z1)
    z2 = np.dot(a1, W2) + b2
    y_hat = sigmoid(z2)
    return y_hat, z1, a1, z2

# Binary cross-entropy loss
def loss(y, y_hat):
    L = -y * np.log(y_hat) - (1 - y) * np.log(1 - y_hat)
    return L
# Backward pass: compute gradients via the chain rule
def backward(X, y, y_hat, z1, a1, z2, W2):
    # With a sigmoid output and cross-entropy loss, dL/dz2 simplifies to (y_hat - y)
    delta2 = y_hat - y
    dW2 = np.dot(a1.T, delta2)
    db2 = np.sum(delta2, axis=0)
    delta1 = np.dot(delta2, W2.T) * a1 * (1 - a1)
    dW1 = np.dot(X.T, delta1)
    db1 = np.sum(delta1, axis=0)
    return dW1, db1, dW2, db2
# Train the network with full-batch gradient descent
def train(X, y, hidden_size, epochs, lr):
    input_size = X.shape[1]
    output_size = y.shape[1]
    # Randomly initialize the weights; biases start at zero
    W1 = np.random.randn(input_size, hidden_size)
    b1 = np.zeros((1, hidden_size))
    W2 = np.random.randn(hidden_size, output_size)
    b2 = np.zeros((1, output_size))
    # Training loop
    for i in range(epochs):
        # Forward pass: compute the output and the loss
        y_hat, z1, a1, z2 = forward(X, W1, b1, W2, b2)
        L = loss(y, y_hat)
        # Backward pass: compute the gradients
        dW1, db1, dW2, db2 = backward(X, y, y_hat, z1, a1, z2, W2)
        # Gradient descent update of the weights and biases
        W1 -= lr * dW1
        b1 -= lr * db1
        W2 -= lr * dW2
        b2 -= lr * db2
        # Print the mean loss every 10 epochs
        if i % 10 == 0:
            print("Epoch %d Loss %.4f" % (i, np.mean(L)))
    return W1, b1, W2, b2
# Test the model
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])  # XOR targets (assumed)
hidden_size = 4
epochs = 1000
lr = 0.5
W1, b1, W2, b2 = train(X, y, hidden_size, epochs, lr)
y_hat_test, _, _, _ = forward(X, W1, b1, W2, b2)  # forward returns a tuple; keep only y_hat
print("Final Prediction")
print(y_hat_test)
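# Illustrative extra step: threshold the sigmoid outputs at 0.5 to turn
# the predicted probabilities into 0/1 class labels
print("Predicted labels")
print((y_hat_test > 0.5).astype(int))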
```
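To verify that `backward` really computes the gradients of the loss above, a numerical gradient check can be compared against the analytical result. The sketch below is a minimal, illustrative example that reuses the `forward`, `loss`, and `backward` functions defined above; the helper `grad_check_W2`, the fresh random parameters, and the `eps` value are assumptions made for this sketch, not part of the original code.
```python
# Minimal numerical gradient check for dW2 (illustrative sketch)
import numpy as np

def grad_check_W2(X, y, W1, b1, W2, b2, eps=1e-5):
    # Analytical gradient from the backward pass
    y_hat, z1, a1, z2 = forward(X, W1, b1, W2, b2)
    _, _, dW2, _ = backward(X, y, y_hat, z1, a1, z2, W2)
    # Numerical gradient of the summed loss via central differences
    num = np.zeros_like(W2)
    for idx in np.ndindex(*W2.shape):
        W2_plus = W2.copy()
        W2_plus[idx] += eps
        W2_minus = W2.copy()
        W2_minus[idx] -= eps
        L_plus = np.sum(loss(y, forward(X, W1, b1, W2_plus, b2)[0]))
        L_minus = np.sum(loss(y, forward(X, W1, b1, W2_minus, b2)[0]))
        num[idx] = (L_plus - L_minus) / (2 * eps)
    # The relative error should be tiny if backward() matches the loss
    rel_err = np.abs(num - dW2) / (np.abs(num) + np.abs(dW2) + 1e-12)
    print("max relative error for dW2:", rel_err.max())

# Run the check on fresh random parameters, where the gradients are not near zero
rng = np.random.default_rng(0)
W1c = rng.standard_normal((2, 4))
b1c = np.zeros((1, 4))
W2c = rng.standard_normal((4, 1))
b2c = np.zeros((1, 1))
grad_check_W2(X, y, W1c, b1c, W2c, b2c)
```
Central differences are used because their discretization error scales with eps squared, which makes the comparison with the analytical gradient much tighter than a one-sided difference.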