Stacking Ensemble Learning in Python
Date: 2023-09-05 22:08:58
Stacking in ensemble learning and its Python implementation
Below is a simple Python example of stacking ensemble learning:
```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# Load the data
X, y = load_iris(return_X_y=True)

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize the base models
rf = RandomForestClassifier(n_estimators=50, random_state=42)
lr = LogisticRegression(max_iter=1000, random_state=42)
knn = KNeighborsClassifier(n_neighbors=3)
nb = GaussianNB()
base_models = [('Random Forest', rf), ('Logistic Regression', lr),
               ('KNN', knn), ('Naive Bayes', nb)]

# Build the meta-model's training set from out-of-fold predictions on the
# training data. (Fitting the meta-model on test-set predictions against
# y_test would leak the test labels and inflate the reported accuracy.)
train_meta = np.column_stack([
    cross_val_predict(model, X_train, y_train, cv=5) for _, model in base_models
])

# Fit each base model on the full training set and evaluate it on the test set
for name, model in base_models:
    model.fit(X_train, y_train)
    print(name, 'accuracy:', accuracy_score(y_test, model.predict(X_test)))

# Build the meta-model's test set from the base models' test-set predictions
test_meta = np.column_stack([model.predict(X_test) for _, model in base_models])

# Train the meta-model on the out-of-fold meta-features and the training labels
meta_model = RandomForestClassifier(n_estimators=50, random_state=42)
meta_model.fit(train_meta, y_train)

# Evaluate the stacked ensemble on the test set
meta_pred = meta_model.predict(test_meta)
print('Stacking accuracy:', accuracy_score(y_test, meta_pred))
```
This code demonstrates stacking ensemble learning on the iris dataset from scikit-learn. First, the data is split into training and test sets. Then the base models (a random forest, logistic regression, KNN, and naive Bayes) are trained and evaluated on the test set. Next, the base models' predictions are stacked column-wise as meta-features to build the meta-model's training and test sets. Finally, a random forest is used as the meta-model, and the stacked ensemble's accuracy is reported on the test set.
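As an alternative to assembling the meta-features by hand, scikit-learn also ships a built-in `StackingClassifier` that performs the cross-validated stacking internally. A minimal sketch on the same iris split (the estimator names `'rf'`, `'lr'`, `'knn'`, `'nb'` are arbitrary labels chosen here):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# estimators: the base models; final_estimator: the meta-model.
# cv=5 makes StackingClassifier build its meta-features from
# out-of-fold predictions on the training set automatically.
stack = StackingClassifier(
    estimators=[
        ('rf', RandomForestClassifier(n_estimators=50, random_state=42)),
        ('lr', LogisticRegression(max_iter=1000, random_state=42)),
        ('knn', KNeighborsClassifier(n_neighbors=3)),
        ('nb', GaussianNB()),
    ],
    final_estimator=RandomForestClassifier(n_estimators=50, random_state=42),
    cv=5,
)
stack.fit(X_train, y_train)
print('StackingClassifier accuracy:', accuracy_score(y_test, stack.predict(X_test)))
```

Note that by default `StackingClassifier` feeds the base models' predicted class probabilities (rather than hard labels) to the meta-model, which typically carries more information than stacking the predicted labels directly.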