BP neural network multivariate regression prediction with a single hidden layer and ReLU activation (code)
Sure. Below is a multivariate regression implementation of a single-hidden-layer neural network trained with the backpropagation algorithm, using ReLU as the hidden-layer activation function:
```
import numpy as np

class NeuralNetwork:
    def __init__(self, num_inputs, num_hidden, num_outputs):
        self.input_size = num_inputs
        self.hidden_size = num_hidden
        self.output_size = num_outputs
        # Small random initial weights, zero biases
        self.W1 = np.random.randn(self.input_size, self.hidden_size) * 0.01
        self.b1 = np.zeros((1, self.hidden_size))
        self.W2 = np.random.randn(self.hidden_size, self.output_size) * 0.01
        self.b2 = np.zeros((1, self.output_size))

    def relu(self, Z):
        return np.maximum(0, Z)

    def relu_derivative(self, Z):
        return np.where(Z > 0, 1.0, 0.0)

    def forward(self, X):
        # Cache the intermediate values so backward() can reuse them
        self.Z1 = np.dot(X, self.W1) + self.b1
        self.A1 = self.relu(self.Z1)
        Z2 = np.dot(self.A1, self.W2) + self.b2
        y_hat = Z2  # linear output layer, as appropriate for regression
        return y_hat

    def backward(self, X, y, learning_rate):
        m = X.shape[0]
        y_hat = self.forward(X)
        # Output-layer error, scaled by batch size so the step size
        # does not depend on the number of samples
        dZ2 = (y_hat - y) / m
        dW2 = np.dot(self.A1.T, dZ2)
        db2 = np.sum(dZ2, axis=0, keepdims=True)
        # Backpropagate through W2 and the ReLU at the hidden layer
        dZ1 = np.dot(dZ2, self.W2.T) * self.relu_derivative(self.Z1)
        dW1 = np.dot(X.T, dZ1)
        db1 = np.sum(dZ1, axis=0, keepdims=True)
        # Gradient-descent updates
        self.W2 -= learning_rate * dW2
        self.b2 -= learning_rate * db2
        self.W1 -= learning_rate * dW1
        self.b1 -= learning_rate * db1

    def train(self, X, y, num_epochs, learning_rate):
        for i in range(num_epochs):
            self.backward(X, y, learning_rate)
            if i % 100 == 0:
                loss = np.mean(np.square(y - self.forward(X)))
                print(f"Epoch {i} - Loss: {loss:.4f}")
```
Here, `num_inputs`, `num_hidden`, and `num_outputs` are the numbers of neurons in the input, hidden, and output layers. In the constructor, the two weight matrices `W1` and `W2` are initialized with small random values, and the two bias vectors `b1` and `b2` with zeros. In `forward`, we first compute the hidden layer's pre-activation `Z1`, pass it through the ReLU function to obtain the hidden-layer output `A1`, and then compute the network output `y_hat`; since this is a regression network, the output layer is linear. In `backward`, we first compute the output-layer error `dZ2`, backpropagate it through `W2` and the ReLU derivative to obtain the hidden-layer error `dZ1`, and then use these errors to form the weight and bias gradients and apply gradient-descent updates. The `train` function runs full-batch backpropagation for `num_epochs` iterations, printing the mean squared error every 100 epochs.
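For completeness, here is a minimal usage sketch. The synthetic linear dataset and the hyperparameters (`num_hidden=16`, `learning_rate=0.1`, `num_epochs=1000`) are illustrative assumptions, not values from the original answer:
```
# Hypothetical example: fit y = X @ w + noise with the network above.
import numpy as np

np.random.seed(0)
X = np.random.randn(200, 3)                 # 200 samples, 3 input features
true_w = np.array([[1.5], [-2.0], [0.5]])   # used only to generate the data
y = np.dot(X, true_w) + 0.1 * np.random.randn(200, 1)

nn = NeuralNetwork(num_inputs=3, num_hidden=16, num_outputs=1)
nn.train(X, y, num_epochs=1000, learning_rate=0.1)

y_pred = nn.forward(X)
print("final MSE:", np.mean(np.square(y - y_pred)))
```
Because the target here is nearly linear, even this small network should drive the printed loss down toward the noise floor within a few hundred epochs.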