Write a BP neural network model
Sure, here is a simple implementation of a BP (backpropagation) neural network:
```python
import numpy as np

class BPNN:
    def __init__(self, input_size, hidden_size, output_size, learning_rate=0.1):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.learning_rate = learning_rate
        # Initialize weights randomly and biases to zero
        self.weights1 = np.random.randn(self.input_size, self.hidden_size)
        self.bias1 = np.zeros((1, self.hidden_size))
        self.weights2 = np.random.randn(self.hidden_size, self.output_size)
        self.bias2 = np.zeros((1, self.output_size))

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # x is expected to be an already-activated value, i.e. sigmoid(z)
        return x * (1 - x)

    def forward(self, x):
        # Hidden layer output
        self.hidden_output = self.sigmoid(np.dot(x, self.weights1) + self.bias1)
        # Output layer output
        self.output = self.sigmoid(np.dot(self.hidden_output, self.weights2) + self.bias2)
        return self.output

    def backward(self, x, y, output):
        # Output layer error (negative gradient of MSE loss w.r.t. the output)
        output_error = y - output
        # Output layer gradient
        output_gradient = output_error * self.sigmoid_derivative(output)
        # Hidden layer error, propagated back through weights2
        hidden_error = np.dot(output_gradient, self.weights2.T)
        # Hidden layer gradient
        hidden_gradient = hidden_error * self.sigmoid_derivative(self.hidden_output)
        # Update weights and biases (the sign convention above makes += a descent step)
        self.weights2 += self.learning_rate * np.dot(self.hidden_output.T, output_gradient)
        self.bias2 += self.learning_rate * np.sum(output_gradient, axis=0, keepdims=True)
        self.weights1 += self.learning_rate * np.dot(x.T, hidden_gradient)
        self.bias1 += self.learning_rate * np.sum(hidden_gradient, axis=0, keepdims=True)

    def train(self, x, y, epochs):
        for i in range(epochs):
            output = self.forward(x)
            self.backward(x, y, output)

    def predict(self, x):
        return self.forward(x)
```
The model takes the input size, hidden-layer size, output size, and learning rate as constructor parameters, initializes the weights randomly, and starts the biases at zero.
The `sigmoid` and `sigmoid_derivative` methods compute the sigmoid function and its derivative. Note that the derivative is written in terms of the sigmoid's output, since d/dz sigmoid(z) = sigmoid(z) * (1 - sigmoid(z)); this is why `backward` passes the already-activated values to `sigmoid_derivative` rather than the pre-activations.
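A quick numerical check (illustrative, not part of the original answer) confirms this convention by comparing the analytic derivative against a central finite difference:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = 0.5
a = sigmoid(x)                   # the activated value
analytic = a * (1 - a)           # what sigmoid_derivative(a) computes
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
print(analytic, numeric)         # the two values agree to roughly 1e-10
```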
`forward` computes the network's output for a given input and caches the hidden-layer output for use in backpropagation. `backward` computes the gradients and updates the weights and biases.
Finally, `train` runs the backpropagation algorithm for a fixed number of epochs, and `predict` returns the network's output for a given input.
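As a minimal usage sketch (not part of the original answer; the hyperparameters here, `hidden_size=4`, `learning_rate=0.5`, and 10000 epochs, are illustrative assumptions), the class can be trained on the classic XOR problem:

```python
import numpy as np

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

np.random.seed(42)  # for reproducible weight initialization
nn = BPNN(input_size=2, hidden_size=4, output_size=1, learning_rate=0.5)
nn.train(X, y, epochs=10000)

# Predictions should move toward [[0], [1], [1], [0]] as training converges
print(np.round(nn.predict(X), 3))
```

Because training is full-batch gradient descent from a random initialization, convergence on XOR is typical but not guaranteed for every seed; rerunning with a different seed or more epochs usually resolves a stuck run.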