Iris Classification and Prediction with a BP Neural Network (Using a Hidden Layer)
Iris classification is a classic classification problem, and the BP (back-propagation) neural network is a commonly used model for solving it. In a BP network, the hidden layer is important for model performance because it allows the network to learn more complex, non-linear features.
The main steps of iris classification with a BP neural network are:
1. Data preprocessing: split the dataset into a training set and a test set, and standardize the features to zero mean and unit variance.
2. Model construction: build a BP network with one input layer, one hidden layer, and one output layer. The input layer has 4 nodes (one per feature), the number of hidden nodes can be tuned as needed, and the output layer has 3 nodes, one for each iris class.
3. Training: fit the model on the training set, using the back-propagation algorithm to update the weights and biases so that the error between the predictions and the true labels is minimized.
4. Evaluation: evaluate the model on the test set with metrics such as accuracy, precision, and recall (a quick library-based baseline is sketched right after this list).
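Before the hand-written implementation, here is a minimal sanity-check sketch of the same pipeline using scikit-learn's built-in MLPClassifier. This baseline is an illustration only; the hidden_layer_sizes, max_iter, and random_state values are assumptions rather than part of the original example.
```python
# Minimal baseline sketch: the same split/scale/fit/score pipeline with scikit-learn's MLP.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# One hidden layer with 10 units, roughly matching the hand-written network below.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("Baseline test accuracy: %.4f" % clf.score(X_test, y_test))
```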
Below is a simple code example that implements iris classification with a hand-written BP neural network.
```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
class NeuralNetwork:
    def __init__(self, input_dim, hidden_dim, output_dim):
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        # Randomly initialize the weights; biases start at zero.
        self.W1 = np.random.randn(self.input_dim, self.hidden_dim)
        self.b1 = np.zeros((1, self.hidden_dim))
        self.W2 = np.random.randn(self.hidden_dim, self.output_dim)
        self.b2 = np.zeros((1, self.output_dim))

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_deriv(self, a):
        # Derivative of the sigmoid, expressed in terms of its output a = sigmoid(x).
        return a * (1 - a)

    def softmax(self, x):
        # Subtract the row-wise max for numerical stability.
        exp_x = np.exp(x - np.max(x, axis=1, keepdims=True))
        return exp_x / np.sum(exp_x, axis=1, keepdims=True)

    def forward(self, X):
        # Input -> hidden layer (sigmoid) -> output layer (softmax).
        self.z1 = np.dot(X, self.W1) + self.b1
        self.a1 = self.sigmoid(self.z1)
        self.z2 = np.dot(self.a1, self.W2) + self.b2
        self.output = self.softmax(self.z2)

    def backward(self, X, y, learning_rate):
        # Gradient of the cross-entropy loss with respect to the softmax input.
        delta3 = self.output - y
        dW2 = np.dot(self.a1.T, delta3)
        db2 = np.sum(delta3, axis=0, keepdims=True)
        # Back-propagate the error through the hidden layer.
        delta2 = np.dot(delta3, self.W2.T) * self.sigmoid_deriv(self.a1)
        dW1 = np.dot(X.T, delta2)
        db1 = np.sum(delta2, axis=0)
        # Gradient-descent updates.
        self.W2 -= learning_rate * dW2
        self.b2 -= learning_rate * db2
        self.W1 -= learning_rate * dW1
        self.b1 -= learning_rate * db1

    def train(self, X_train, y_train, X_test, y_test, learning_rate, epochs):
        for epoch in range(epochs):
            # Full-batch forward and backward pass on the training set.
            self.forward(X_train)
            self.backward(X_train, y_train, learning_rate)
            # Cross-entropy loss; a small epsilon avoids log(0).
            train_loss = np.mean(-np.sum(y_train * np.log(self.output + 1e-12), axis=1))
            self.forward(X_test)
            test_loss = np.mean(-np.sum(y_test * np.log(self.output + 1e-12), axis=1))
            train_acc = self.accuracy(X_train, y_train)
            test_acc = self.accuracy(X_test, y_test)
            if epoch % 100 == 0:
                print("Epoch %d - Train loss: %.4f - Test loss: %.4f - Train acc: %.4f - Test acc: %.4f"
                      % (epoch, train_loss, test_loss, train_acc, test_acc))

    def predict(self, X):
        # Return the index of the most probable class for each sample.
        self.forward(X)
        return np.argmax(self.output, axis=1)

    def accuracy(self, X, y):
        # y is one-hot encoded, so compare the argmax of predictions and labels.
        y_pred = self.predict(X)
        return np.mean(y_pred == np.argmax(y, axis=1))
# Load the iris dataset
iris = load_iris()
X = iris.data
y = iris.target
# Split the dataset into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Standardize the features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Convert the labels to one-hot encoding
y_train_onehot = np.eye(3)[y_train]
y_test_onehot = np.eye(3)[y_test]
# Build the BP neural network model
nn = NeuralNetwork(input_dim=4, hidden_dim=10, output_dim=3)
# Train the model
nn.train(X_train, y_train_onehot, X_test, y_test_onehot, learning_rate=0.01, epochs=1000)
# Evaluate the model on the test set
accuracy = nn.accuracy(X_test, y_test_onehot)
print("Accuracy on test set: %.4f" % accuracy)
```
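Step 4 above also mentions precision and recall, which the example does not compute. The snippet below is a minimal sketch of how they could be obtained from the trained network's predictions with scikit-learn's classification_report; it is an addition for illustration, not part of the original example, and it assumes the variables nn, X_test, y_test_onehot, and iris from the code above are in scope.
```python
# Illustrative sketch: per-class precision/recall for the trained network.
from sklearn.metrics import classification_report

y_pred = nn.predict(X_test)                    # predicted class indices
y_true = np.argmax(y_test_onehot, axis=1)      # true class indices from one-hot labels
print(classification_report(y_true, y_pred, target_names=iris.target_names))
```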
Note that BP neural networks are prone to overfitting, so techniques such as regularization should be added during training to keep the model from overfitting.
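As a minimal sketch of one such technique, an L2 (weight-decay) penalty can be folded into the gradient-descent update. The helper function and the reg_lambda value below are illustrative assumptions, not part of the original code.
```python
import numpy as np

def update_with_weight_decay(W, dW, learning_rate, reg_lambda):
    # Gradient-descent step with an L2 (weight-decay) penalty added to the gradient.
    return W - learning_rate * (dW + reg_lambda * W)

# Tiny standalone usage example; the shapes match the hidden layer of the network above.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 10))
dW1 = rng.standard_normal((4, 10))
W1 = update_with_weight_decay(W1, dW1, learning_rate=0.01, reg_lambda=0.01)
```
In the backward method above, this would amount to adding reg_lambda * self.W1 and reg_lambda * self.W2 to dW1 and dW2 before the weight updates.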