Write Jupyter code that plots a two-layer neural network's loss function against the number of iterations
Posted: 2024-02-18 11:00:07
Sure. Below is Python code, runnable in a Jupyter Notebook, that trains a two-layer neural network and plots its loss function against the number of iterations.
First, import the required libraries, numpy and matplotlib:
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
Next, define a function `two_layer_neural_network()` that implements the forward and backward passes of a two-layer network (a tanh hidden layer followed by a sigmoid output) and trains it with gradient descent:
```python
def two_layer_neural_network(X, Y, num_iterations, learning_rate, n_h):
    """
    Arguments:
    X -- input data of shape (n_x, m)
    Y -- true "label" vector (0 or 1), of shape (1, m)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    n_h -- size of the hidden layer

    Returns:
    parameters -- a dictionary containing W1, b1, W2, b2
    """
    n_x = X.shape[0]
    n_y = Y.shape[0]
    m = X.shape[1]

    # Initialize parameters: small random weights, zero biases
    W1 = np.random.randn(n_h, n_x) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = np.random.randn(n_y, n_h) * 0.01
    b2 = np.zeros((n_y, 1))

    # Loop (gradient descent)
    costs = []
    for i in range(num_iterations):
        # Forward propagation: tanh hidden layer, sigmoid output
        Z1 = np.dot(W1, X) + b1
        A1 = np.tanh(Z1)
        Z2 = np.dot(W2, A1) + b2
        A2 = 1 / (1 + np.exp(-Z2))

        # Cross-entropy cost (clip A2 to avoid log(0))
        A2_safe = np.clip(A2, 1e-10, 1 - 1e-10)
        cost = -np.mean(Y * np.log(A2_safe) + (1 - Y) * np.log(1 - A2_safe))

        # Backward propagation
        dZ2 = A2 - Y
        dW2 = np.dot(dZ2, A1.T) / m
        db2 = np.mean(dZ2, axis=1, keepdims=True)
        dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))  # tanh'(Z1) = 1 - A1^2
        dW1 = np.dot(dZ1, X.T) / m
        db1 = np.mean(dZ1, axis=1, keepdims=True)

        # Update parameters
        W1 = W1 - learning_rate * dW1
        b1 = b1 - learning_rate * db1
        W2 = W2 - learning_rate * dW2
        b2 = b2 - learning_rate * db2

        # Record and print the cost every 100 iterations
        if i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
            costs.append(cost)

    # Plot the cost
    plt.plot(costs)
    plt.ylabel('Cost')
    plt.xlabel('Iterations (per hundreds)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2}
    return parameters
```
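A quick way to sanity-check the backpropagation formulas above is a finite-difference comparison on a tiny example. The sketch below (illustrative, not part of the original answer) verifies that the gradient of the mean cross-entropy cost with respect to `Z2` equals `(A2 - Y) / m` for a sigmoid output layer:

```python
import numpy as np

# Tiny example: 1 output unit, m = 3 samples
Z2 = np.array([[0.5, -1.2, 2.0]])
Y = np.array([[1, 0, 1]])
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def cost(z):
    # Mean cross-entropy cost over the m samples
    a = sigmoid(z)
    return -np.mean(Y * np.log(a) + (1 - Y) * np.log(1 - a))

# Analytic gradient of the mean cost w.r.t. Z2: (A2 - Y) / m
analytic = (sigmoid(Z2) - Y) / Z2.shape[1]

# Numerical gradient via central finite differences
eps = 1e-6
numeric = np.zeros_like(Z2)
for j in range(Z2.shape[1]):
    zp, zm = Z2.copy(), Z2.copy()
    zp[0, j] += eps
    zm[0, j] -= eps
    numeric[0, j] = (cost(zp) - cost(zm)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # should be near 0 (roughly 1e-9 or smaller)
```

If the two gradients agree to many decimal places, the analytic formula used in the training loop is correct.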
Finally, the following code generates some data, calls the function to train the model, and displays the plot of the loss against the iteration count:
```python
# Generate random data
np.random.seed(1)
X = np.random.randn(2, 100)
Y = (np.random.randn(1, 100) > 0).astype(int)  # random binary labels
# Train the model
parameters = two_layer_neural_network(X, Y, num_iterations=1000, learning_rate=0.1, n_h=4)
```
After running the code above, you will see the plot of the loss function against the number of iterations.
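The returned parameter dictionary can also be used for prediction by running the same forward pass. A minimal sketch, where the `predict` helper is an addition not present in the original answer:

```python
import numpy as np

def predict(parameters, X):
    """Forward pass with trained parameters; returns 0/1 predictions of shape (1, m)."""
    W1, b1 = parameters["W1"], parameters["b1"]
    W2, b2 = parameters["W2"], parameters["b2"]
    A1 = np.tanh(np.dot(W1, X) + b1)
    A2 = 1 / (1 + np.exp(-(np.dot(W2, A1) + b2)))
    return (A2 > 0.5).astype(int)

# Example with randomly initialized parameters (shapes match n_x=2, n_h=4, n_y=1 above)
rng = np.random.default_rng(0)
params = {"W1": rng.standard_normal((4, 2)) * 0.01, "b1": np.zeros((4, 1)),
          "W2": rng.standard_normal((1, 4)) * 0.01, "b2": np.zeros((1, 1))}
preds = predict(params, rng.standard_normal((2, 5)))
print(preds.shape)  # (1, 5)
```

In practice you would pass the `parameters` dictionary returned by `two_layer_neural_network` instead of the random placeholder used here.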