Write Python code for a BP (backpropagation) neural network model
Posted: 2023-09-01 20:11:07
Below is a simple Python implementation of a BP neural network:
```python
import numpy as np

class BPNeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        # Randomly initialize the weight matrices and bias vectors
        self.weights1 = np.random.randn(self.input_size, self.hidden_size)
        self.bias1 = np.random.randn(self.hidden_size)
        self.weights2 = np.random.randn(self.hidden_size, self.output_size)
        self.bias2 = np.random.randn(self.output_size)
        # Sigmoid activation function
        self.activation_func = lambda x: 1 / (1 + np.exp(-x))

    def forward(self, X):
        # Forward pass: compute hidden and output activations
        self.hidden_layer = self.activation_func(np.dot(X, self.weights1) + self.bias1)
        self.output_layer = self.activation_func(np.dot(self.hidden_layer, self.weights2) + self.bias2)
        return self.output_layer

    def backward(self, X, y, learning_rate):
        # Backward pass: compute gradients and update weights and biases
        error = y - self.output_layer
        output_delta = error * self.output_layer * (1 - self.output_layer)
        hidden_error = np.dot(output_delta, self.weights2.T)
        hidden_delta = hidden_error * self.hidden_layer * (1 - self.hidden_layer)
        self.weights2 += learning_rate * np.dot(self.hidden_layer.T, output_delta)
        self.bias2 += learning_rate * np.sum(output_delta, axis=0)
        self.weights1 += learning_rate * np.dot(X.T, hidden_delta)
        self.bias1 += learning_rate * np.sum(hidden_delta, axis=0)

    def train(self, X, y, epochs, learning_rate):
        for i in range(epochs):
            output = self.forward(X)
            self.backward(X, y, learning_rate)
            loss = np.mean(np.square(y - output))
            if i % 100 == 0:
                print("Epoch: {0}, Loss: {1:.4f}".format(i, loss))

    def predict(self, X):
        return np.round(self.forward(X))
```
The model takes an `input_size`-dimensional feature vector as input and produces an `output_size`-dimensional prediction vector; the hidden layer has `hidden_size` units. Weights and biases are randomly initialized at construction time. The forward pass uses the sigmoid activation function, and the backward pass updates the weights and biases by gradient descent on the mean squared error. The `train` method fits the model, and the `predict` method makes predictions on new data by rounding the sigmoid outputs to 0 or 1.
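As a minimal end-to-end sketch, the network can be trained on the XOR problem. The class is repeated here (slightly condensed, without the loss printout) so the snippet runs standalone; the fixed seed, hidden-layer size of 4, learning rate of 0.5, and epoch count are illustrative choices, not part of the original answer:

```python
import numpy as np

# Condensed copy of the BPNeuralNetwork class so this snippet runs standalone
class BPNeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.weights1 = np.random.randn(input_size, hidden_size)
        self.bias1 = np.random.randn(hidden_size)
        self.weights2 = np.random.randn(hidden_size, output_size)
        self.bias2 = np.random.randn(output_size)
        self.activation_func = lambda x: 1 / (1 + np.exp(-x))

    def forward(self, X):
        self.hidden_layer = self.activation_func(np.dot(X, self.weights1) + self.bias1)
        self.output_layer = self.activation_func(np.dot(self.hidden_layer, self.weights2) + self.bias2)
        return self.output_layer

    def backward(self, X, y, learning_rate):
        error = y - self.output_layer
        output_delta = error * self.output_layer * (1 - self.output_layer)
        hidden_delta = np.dot(output_delta, self.weights2.T) * self.hidden_layer * (1 - self.hidden_layer)
        self.weights2 += learning_rate * np.dot(self.hidden_layer.T, output_delta)
        self.bias2 += learning_rate * np.sum(output_delta, axis=0)
        self.weights1 += learning_rate * np.dot(X.T, hidden_delta)
        self.bias1 += learning_rate * np.sum(hidden_delta, axis=0)

    def train(self, X, y, epochs, learning_rate):
        for i in range(epochs):
            self.forward(X)
            self.backward(X, y, learning_rate)

    def predict(self, X):
        return np.round(self.forward(X))

# XOR truth table: 2 input features, 1 binary output
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

np.random.seed(0)                 # fixed seed so the run is repeatable
net = BPNeuralNetwork(2, 4, 1)
initial_loss = np.mean(np.square(y - net.forward(X)))
net.train(X, y, epochs=5000, learning_rate=0.5)
final_loss = np.mean(np.square(y - net.forward(X)))
preds = net.predict(X)            # each entry rounds to 0.0 or 1.0
print(preds.ravel())
print(final_loss < initial_loss)
```

Because `predict` simply rounds the sigmoid output, this setup is limited to binary targets; for multi-class output one would typically switch the output layer to softmax with a cross-entropy loss.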