Implementing an AND gate with a two-layer neural network in NumPy and plotting the loss curve
Posted: 2023-08-02 16:06:25
Sure, here is the code implementation:
```python
import numpy as np
import matplotlib.pyplot as plt

# Sigmoid activation
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Mean-squared-error loss
def loss(y, y_pred):
    return np.mean((y - y_pred) ** 2)

# AND-gate inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [0], [0], [1]])

# Hyperparameters
num_epochs = 10000
learning_rate = 0.1

# Network dimensions
input_dim = 2
hidden_dim = 3
output_dim = 1

# Initialize weights and biases
W1 = np.random.randn(input_dim, hidden_dim)
b1 = np.zeros((1, hidden_dim))
W2 = np.random.randn(hidden_dim, output_dim)
b2 = np.zeros((1, output_dim))

# Train the network with full-batch gradient descent
losses = []
for epoch in range(num_epochs):
    # Forward pass
    hidden = sigmoid(np.dot(X, W1) + b1)
    y_pred = sigmoid(np.dot(hidden, W2) + b2)

    # Record the loss
    l = loss(y, y_pred)
    losses.append(l)

    # Backward pass: gradients of the MSE loss
    grad_y_pred = 2 * (y_pred - y) / len(y)                      # dL/dy_pred
    grad_z2 = grad_y_pred * y_pred * (1 - y_pred)                # through output sigmoid
    grad_W2 = np.dot(hidden.T, grad_z2)
    grad_b2 = np.sum(grad_z2, axis=0, keepdims=True)
    grad_hidden = np.dot(grad_z2, W2.T) * hidden * (1 - hidden)  # through hidden sigmoid
    grad_W1 = np.dot(X.T, grad_hidden)
    grad_b1 = np.sum(grad_hidden, axis=0, keepdims=True)

    # Gradient-descent update
    W1 -= learning_rate * grad_W1
    b1 -= learning_rate * grad_b1
    W2 -= learning_rate * grad_W2
    b2 -= learning_rate * grad_b2

# Plot the loss curve
plt.figure()
plt.plot(losses)
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
```
Running the code above trains the two-layer AND-gate network and displays its loss curve.
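As a sanity check, the same training loop can be rerun with a fixed seed and the final predictions compared against the AND truth table. This is a minimal, self-contained sketch of the script above; the fixed seed, `default_rng`, and the compacted update lines are my additions, not part of the original answer:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# AND-gate data, same as in the script above
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [0], [0], [1]])

rng = np.random.default_rng(0)  # fixed seed so the check is repeatable
W1, b1 = rng.standard_normal((2, 3)), np.zeros((1, 3))
W2, b2 = rng.standard_normal((3, 1)), np.zeros((1, 1))

losses = []
for _ in range(10000):
    # Forward pass
    hidden = sigmoid(X @ W1 + b1)
    y_pred = sigmoid(hidden @ W2 + b2)
    losses.append(np.mean((y - y_pred) ** 2))
    # Backward pass (MSE gradients, as in the script above)
    grad_z2 = 2 * (y_pred - y) / len(y) * y_pred * (1 - y_pred)
    grad_hidden = grad_z2 @ W2.T * hidden * (1 - hidden)
    W2 -= 0.1 * hidden.T @ grad_z2
    b2 -= 0.1 * grad_z2.sum(axis=0, keepdims=True)
    W1 -= 0.1 * X.T @ grad_hidden
    b1 -= 0.1 * grad_hidden.sum(axis=0, keepdims=True)

# After training, the rounded outputs should ideally match [0, 0, 0, 1]
final = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print("loss:", losses[0], "->", losses[-1])
print("rounded predictions:", np.round(final).ravel())
```

If the rounded predictions do not match the truth table, the loss curve usually shows training stalled on a plateau; more epochs or a different random seed typically resolves it.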