Three-layer BP neural network for multivariate regression, with activation function, loss function, and optimizer, using numpy and pandas
Date: 2024-03-16 11:42:44
The following code implements a three-layer neural network for multivariate regression using numpy and pandas, including definitions of the activation function, the loss function, and the optimizer:
```python
import numpy as np
import pandas as pd

# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Mean-squared-error loss function
def mse_loss(y_true, y_pred):
    return np.mean(np.square(y_true - y_pred))

# Gradient-descent optimizer (full-batch updates)
def sgd_optimizer(w, b, dw, db, learning_rate):
    w -= learning_rate * dw
    b -= learning_rate * db
    return w, b

# Data preparation: last column is the target, the rest are features
data = pd.read_csv("data.csv")
X = np.array(data.iloc[:, :-1])
Y = np.array(data.iloc[:, -1]).reshape(-1, 1)
n_samples = X.shape[0]

# Hyperparameters
input_dim = X.shape[1]
hidden_dim = 4
output_dim = 1
learning_rate = 0.01
epochs = 1000

# Initialize weights and biases
w1 = np.random.randn(input_dim, hidden_dim)
b1 = np.zeros((1, hidden_dim))
w2 = np.random.randn(hidden_dim, output_dim)
b2 = np.zeros((1, output_dim))

# Training loop
for i in range(epochs):
    # Forward pass
    z1 = np.dot(X, w1) + b1
    a1 = sigmoid(z1)
    z2 = np.dot(a1, w2) + b2
    y_pred = z2  # linear output layer for regression

    # Compute loss
    loss = mse_loss(Y, y_pred)

    # Backward pass (divide by n_samples to match the mean in mse_loss)
    dy = 2 * (y_pred - Y) / n_samples
    dw2 = np.dot(a1.T, dy)
    db2 = np.sum(dy, axis=0, keepdims=True)
    da1 = np.dot(dy, w2.T)
    dz1 = da1 * a1 * (1 - a1)
    dw1 = np.dot(X.T, dz1)
    db1 = np.sum(dz1, axis=0, keepdims=True)

    # Parameter updates
    w2, b2 = sgd_optimizer(w2, b2, dw2, db2, learning_rate)
    w1, b1 = sgd_optimizer(w1, b1, dw1, db1, learning_rate)

    # Print loss every 100 epochs
    if i % 100 == 0:
        print("Epoch %d loss: %.4f" % (i, loss))
```
In the code above, the sigmoid function serves as the activation function, mse_loss as the loss function, and sgd_optimizer as the optimizer. The dataset is loaded as a pandas DataFrame and converted to numpy arrays. During training, the weights and biases are updated with full-batch gradient descent (the function is named for SGD, but every update uses the entire dataset), and the loss is printed every 100 epochs to track progress.
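One way to confirm that the backpropagation formulas above are correct is a numerical gradient check: perturb one weight by a small amount and compare the resulting change in the loss against the analytic gradient. The sketch below is self-contained with toy random data (`data.csv` is not required); the chain-rule steps mirror the training loop.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))   # 5 toy samples, 3 features
Y = rng.normal(size=(5, 1))
n = X.shape[0]
w1 = rng.normal(size=(3, 4)); b1 = np.zeros((1, 4))
w2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def loss_fn(w1, b1, w2, b2):
    a1 = sigmoid(X @ w1 + b1)
    return np.mean((Y - (a1 @ w2 + b2)) ** 2)

# Analytic gradients, same chain rule as in the training loop
a1 = sigmoid(X @ w1 + b1)
y_pred = a1 @ w2 + b2
dy = 2 * (y_pred - Y) / n
dw2 = a1.T @ dy
dz1 = (dy @ w2.T) * a1 * (1 - a1)
dw1 = X.T @ dz1

# Numerical gradient for w1[0, 0] via central differences
eps = 1e-6
w1p = w1.copy(); w1p[0, 0] += eps
w1m = w1.copy(); w1m[0, 0] -= eps
num_grad = (loss_fn(w1p, b1, w2, b2) - loss_fn(w1m, b1, w2, b2)) / (2 * eps)
print(abs(num_grad - dw1[0, 0]))  # difference should be tiny
```

If the analytic and numerical values agree to several decimal places, the backward pass is consistent with the loss definition; a large discrepancy usually points to a missing 1/n factor or a wrong transpose.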
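Because sigmoid saturates for inputs far from zero, raw features on large scales can stall training. Standardizing the features (and optionally the target) before the loop usually helps. A minimal sketch, using toy stand-ins for the `X` and `Y` arrays built from `data.csv` above:

```python
import numpy as np

# Toy stand-ins for the feature matrix X and target Y from the article
rng = np.random.default_rng(1)
X = rng.normal(loc=50.0, scale=10.0, size=(100, 3))
Y = X @ np.array([[1.0], [2.0], [3.0]]) + rng.normal(size=(100, 1))

# Z-score standardization: zero mean, unit variance per column
X_mean, X_std = X.mean(axis=0), X.std(axis=0)
Y_mean, Y_std = Y.mean(axis=0), Y.std(axis=0)
X_norm = (X - X_mean) / X_std
Y_norm = (Y - Y_mean) / Y_std

# Predictions made on the normalized scale are mapped back with:
# y_original = y_norm * Y_std + Y_mean
```

Train on `X_norm`/`Y_norm` and keep the stored means and standard deviations so that predictions can be converted back to the original units.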