BP Neural Network in Python
To implement a BP (backpropagation) neural network in Python, you can use the NumPy library for the matrix operations. Below is a simple example implementation of a BP neural network:
```python
import numpy as np


class NeuralNetwork:
    def __init__(self, layers, learning_rate=0.1):
        self.layers = layers
        self.learning_rate = learning_rate
        # weights[i] has shape (layers[i], layers[i+1]); biases[i] has shape (layers[i+1], 1)
        self.weights = [np.random.randn(x, y) for x, y in zip(layers[:-1], layers[1:])]
        self.biases = [np.random.randn(y, 1) for y in layers[1:]]

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # expects the sigmoid activation a, since sigmoid'(z) = a * (1 - a)
        return x * (1 - x)

    def feed_forward(self, x):
        a = np.array(x).reshape(-1, 1)
        for w, b in zip(self.weights, self.biases):
            z = np.dot(w.T, a) + b
            a = self.sigmoid(z)
        return a

    def back_propagation(self, x, y):
        a = np.array(x).reshape(-1, 1)
        y = np.array(y).reshape(-1, 1)
        # feed forward, keeping every layer's activations
        a_layers = [a]
        z_layers = []
        for w, b in zip(self.weights, self.biases):
            z = np.dot(w.T, a) + b
            z_layers.append(z)
            a = self.sigmoid(z)
            a_layers.append(a)
        # backward propagation of the output error
        delta = (a_layers[-1] - y) * self.sigmoid_derivative(a_layers[-1])
        delta_layers = [delta]
        for i in range(len(self.layers) - 2):
            delta = np.dot(self.weights[-(i + 1)], delta) * self.sigmoid_derivative(a_layers[-(i + 2)])
            delta_layers.append(delta)
        delta_layers.reverse()
        # update weights and biases by gradient descent
        for i in range(len(self.weights)):
            self.weights[i] -= self.learning_rate * np.dot(a_layers[i], delta_layers[i].T)
            self.biases[i] -= self.learning_rate * delta_layers[i]

    def train(self, X, y, epochs=1000):
        for i in range(epochs):
            for j in range(len(X)):
                self.back_propagation(X[j], y[j])

    def predict(self, X):
        y_pred = []
        for x in X:
            y_pred.append(self.feed_forward(x))
        return y_pred
```
In this implementation, `layers` is a list containing the number of neurons in each layer. For example, `layers=[2, 3, 1]` means the input layer has 2 neurons, the hidden layer has 3, and the output layer has 1. `learning_rate` is the learning rate, which controls how quickly the weights and biases are updated. `weights` and `biases` hold each layer's weights and biases, initialized to random values. `sigmoid` and `sigmoid_derivative` are the sigmoid activation function and its derivative.
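As a quick illustration (my own check, not part of the original answer), the constructor for `layers=[2, 3, 1]` produces the following array shapes:

```python
nn = NeuralNetwork(layers=[2, 3, 1])
print([w.shape for w in nn.weights])  # [(2, 3), (3, 1)]
print([b.shape for b in nn.biases])   # [(3, 1), (1, 1)]
```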
The `feed_forward` method passes an input through the network and computes its output. The `back_propagation` method implements backpropagation and updates the weights and biases. The `train` method trains the network, and the `predict` method produces predictions.
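As a small sanity check of `back_propagation` (a sketch of my own, not from the original answer), a single gradient step on one sample should typically reduce that sample's squared error:

```python
import numpy as np

nn = NeuralNetwork(layers=[2, 3, 1], learning_rate=0.1)
x_sample, y_sample = [0, 1], [1]

def squared_error(net, x, y):
    out = net.feed_forward(x)
    return float(np.sum((out - np.array(y).reshape(-1, 1)) ** 2))

before = squared_error(nn, x_sample, y_sample)
nn.back_propagation(x_sample, y_sample)  # one gradient-descent step on this sample
after = squared_error(nn, x_sample, y_sample)
print(f"squared error before: {before:.4f}, after: {after:.4f}")  # 'after' is typically smaller
```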
With this BP neural network, training and prediction can be done as follows:
```python
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
nn = NeuralNetwork(layers=[2, 3, 1], learning_rate=0.1)
nn.train(X, y, epochs=10000)
print(nn.predict([[0, 0], [0, 1], [1, 0], [1, 1]]))
```
This simple example trains the network to learn the XOR logic function. First, create the input matrix `X` and the target output matrix `y`, then create a `NeuralNetwork` instance, passing the layer sizes and learning rate. Call `train` with the inputs, targets, and number of epochs, and finally use `predict` to obtain the outputs.
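The raw predictions are sigmoid outputs in (0, 1). If you want hard 0/1 labels for the XOR task, one common convention (my addition, not specified by the class above) is to threshold at 0.5:

```python
# Convert the sigmoid outputs to hard 0/1 labels with a 0.5 threshold
# (my own convention for the XOR demo, not required by the class above).
predictions = nn.predict(X)
labels = [int(p[0, 0] > 0.5) for p in predictions]
print(labels)  # after successful training this is usually [0, 1, 1, 0]
```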