Please provide a Python implementation, with a plot of the results, for the following exercise. 1. Problem: implement the backpropagation parameter-update process on the Iris dataset.
Posted: 2024-03-25 17:40:55
Sure. Here is the code implementation and its output:
```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Load the Iris dataset
iris = load_iris()
X = iris.data    # feature matrix, shape (150, 4)
y = iris.target  # class labels, shape (150,)

# One-hot encode the class labels
Y = np.zeros((len(y), 3))
Y[np.arange(len(y)), y] = 1

# Split into training and test sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# Network architecture
input_size = 4
hidden_size = 8
output_size = 3
learning_rate = 0.01

# Initialize weights and biases
W1 = np.random.randn(input_size, hidden_size)
b1 = np.zeros((1, hidden_size))
W2 = np.random.randn(hidden_size, output_size)
b2 = np.zeros((1, output_size))

# Sigmoid activation
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Cross-entropy loss, with predictions clipped to avoid log(0)
def cross_entropy_loss(Y, Y_hat):
    m = Y.shape[0]
    Y_hat = np.clip(Y_hat, 1e-12, 1 - 1e-12)
    return -np.sum(Y * np.log(Y_hat) + (1 - Y) * np.log(1 - Y_hat)) / m

# Train the network
for i in range(2000):
    # Forward pass
    Z1 = np.dot(X_train, W1) + b1
    A1 = sigmoid(Z1)
    Z2 = np.dot(A1, W2) + b2
    Y_hat = sigmoid(Z2)

    # Compute the loss
    loss = cross_entropy_loss(Y_train, Y_hat)

    # Backward pass
    dZ2 = Y_hat - Y_train
    dW2 = np.dot(A1.T, dZ2)
    db2 = np.sum(dZ2, axis=0, keepdims=True)
    dA1 = np.dot(dZ2, W2.T)
    dZ1 = dA1 * A1 * (1 - A1)
    dW1 = np.dot(X_train.T, dZ1)
    db1 = np.sum(dZ1, axis=0, keepdims=True)

    # Gradient-descent parameter update
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

    # Report the loss every 100 iterations
    if i % 100 == 0:
        print("Epoch {}: loss = {}".format(i, loss))

# Evaluate accuracy on the test set
Z1 = np.dot(X_test, W1) + b1
A1 = sigmoid(Z1)
Z2 = np.dot(A1, W2) + b2
Y_hat = sigmoid(Z2)
Y_pred = np.argmax(Y_hat, axis=1)
Y_true = np.argmax(Y_test, axis=1)
accuracy = np.mean(Y_pred == Y_true)
print("Test accuracy = {}".format(accuracy))
```
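Before trusting hand-derived gradients like the ones above, it is worth sanity-checking them against a numerical finite-difference estimate. Below is a minimal sketch of such a gradient check, using small random data rather than the Iris split; note that it uses the un-averaged (summed) loss, which is what the script's gradient formulas (`dZ2 = Y_hat - Y_train` without a `1/m` factor) actually correspond to:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Small random problem with the same shapes of computation as the training script
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
Y = np.eye(3)[rng.integers(0, 3, size=5)]  # random one-hot targets
W1 = rng.normal(size=(4, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 3)); b2 = np.zeros((1, 3))

def forward_loss(W1, b1, W2, b2):
    # Same forward pass; summed (not averaged) cross-entropy
    A1 = sigmoid(X @ W1 + b1)
    Y_hat = sigmoid(A1 @ W2 + b2)
    return -np.sum(Y * np.log(Y_hat) + (1 - Y) * np.log(1 - Y_hat))

# Analytic gradients, mirroring the backward pass in the script
A1 = sigmoid(X @ W1 + b1)
Y_hat = sigmoid(A1 @ W2 + b2)
dZ2 = Y_hat - Y
dW2 = A1.T @ dZ2
dZ1 = (dZ2 @ W2.T) * A1 * (1 - A1)
dW1 = X.T @ dZ1

# Central-difference estimate for one entry of W1
h = 1e-6
i, j = 2, 3
W1p = W1.copy(); W1p[i, j] += h
W1m = W1.copy(); W1m[i, j] -= h
num = (forward_loss(W1p, b1, W2, b2) - forward_loss(W1m, b1, W2, b2)) / (2 * h)
print(abs(num - dW1[i, j]))  # should be very small (near machine precision)
```

If the analytic and numerical values disagree by more than roughly the square root of machine epsilon, the backward pass has a bug.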
Running the training script produces output like this:
```
Epoch 0: loss = 3.0342246409343695
Epoch 100: loss = 0.5845408184198743
Epoch 200: loss = 0.44794879471294625
Epoch 300: loss = 0.3784966643448144
Epoch 400: loss = 0.3330423195532943
Epoch 500: loss = 0.2994422521650917
Epoch 600: loss = 0.2730672150951615
Epoch 700: loss = 0.2511180503246416
Epoch 800: loss = 0.23278021532865936
Epoch 900: loss = 0.2173198310203272
Epoch 1000: loss = 0.20419501474058066
Epoch 1100: loss = 0.19298224478785348
Epoch 1200: loss = 0.18334416999465592
Epoch 1300: loss = 0.1750142534284785
Epoch 1400: loss = 0.1677782842417299
Epoch 1500: loss = 0.16146604372637838
Epoch 1600: loss = 0.15593897796931287
Epoch 1700: loss = 0.15108169484401134
Epoch 1800: loss = 0.1468001702211099
Epoch 1900: loss = 0.14301446358369747
Test accuracy = 1.0
```
As the output shows, after 2000 training iterations the model reaches 100% accuracy on this test split. (Since the weights are initialized randomly without a fixed seed, exact loss values can vary between runs.)
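One design note: the script uses a sigmoid output with element-wise cross-entropy, which treats the three classes as independent binary problems. For mutually exclusive classes, a softmax output with categorical cross-entropy is the more standard choice, and conveniently it yields the same output-layer gradient form, `dZ2 = Y_hat - Y`. A small sketch verifying this property on toy logits (not the Iris data):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 3))        # toy logits for 4 samples, 3 classes
Y = np.eye(3)[[0, 2, 1, 0]]        # one-hot targets

P = softmax(Z)                     # each row sums to 1
grad = P - Y                       # analytic gradient of -sum(Y * log(P)) w.r.t. Z

# Central-difference check on one logit entry
h = 1e-6
Zp = Z.copy(); Zp[1, 2] += h
Zm = Z.copy(); Zm[1, 2] -= h
num = (-np.sum(Y * np.log(softmax(Zp))) + np.sum(Y * np.log(softmax(Zm)))) / (2 * h)
print(abs(num - grad[1, 2]))  # should be very small
```

With this output layer, the rest of the backward pass in the script is unchanged.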