Code Implementation of an XGBoost Classification Model
XGBoost is an efficient and accurate implementation of GBDT (gradient boosted decision trees), widely used for classification, regression, and ranking tasks. The following is a step-by-step code implementation of an XGBoost classification model:
1. Import the required libraries
```python
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
```
2. Load the dataset and split it into training and test sets
```python
iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# DMatrix is XGBoost's internal data structure, optimised for training
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
```
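A side note on the split: on a small dataset such as iris it can help to keep the class proportions identical in the training and test sets. `train_test_split` supports this through its `stratify` argument; a minimal variation of the line above:
```python
# Optional: a stratified split keeps the class distribution equal in train and test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)
```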
3. Set the parameters of the XGBoost classifier
```python
params = {
    'max_depth': 3,                 # maximum tree depth
    'eta': 0.3,                     # learning rate
    'objective': 'multi:softmax',   # multi-class classification with the softmax objective
    'num_class': 3                  # number of classes
}
num_round = 50  # number of boosting rounds
```
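One thing worth noting about the objective: with `'multi:softmax'`, `predict` returns the predicted class label for each sample directly. If class probabilities are needed (for thresholding or log-loss evaluation, for example), `'multi:softprob'` can be used instead, in which case `predict` returns one probability per class. A small sketch of that variant:
```python
# Sketch: 'multi:softprob' yields per-class probabilities instead of hard labels
params_prob = dict(params, objective='multi:softprob')
model_prob = xgb.train(params_prob, dtrain, num_round)
proba = model_prob.predict(dtest)        # shape: (n_samples, num_class)
y_pred_prob = np.argmax(proba, axis=1)   # recover hard class labels
```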
4. Train the model and make predictions
```python
model = xgb.train(params, dtrain, num_round)
y_pred = model.predict(dtest)
```
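If overfitting is a concern, `xgb.train` can also monitor a validation set and stop adding trees once the metric stops improving, via its `evals` and `early_stopping_rounds` arguments. A rough sketch, reusing the test split as the watched set purely for illustration (in practice a separate validation split is preferable):
```python
# Sketch: early stopping on a watched evaluation set
watchlist = [(dtrain, 'train'), (dtest, 'eval')]
params_es = dict(params, eval_metric='mlogloss')   # multi-class log loss
model_es = xgb.train(
    params_es, dtrain, num_boost_round=200,
    evals=watchlist,
    early_stopping_rounds=10,   # stop after 10 rounds with no improvement on 'eval'
    verbose_eval=False
)
```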
5. Compute the accuracy
```python
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
```
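Accuracy alone hides per-class behaviour; scikit-learn's `confusion_matrix` and `classification_report` show precision, recall, and F1 for each class. A short addition using the predictions from above:
```python
from sklearn.metrics import classification_report, confusion_matrix

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred, target_names=iris.target_names))
```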
The complete code is as follows:
```python
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Load the dataset and split it into training and test sets
iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

# Set the parameters of the XGBoost classifier
params = {
    'max_depth': 3,
    'eta': 0.3,
    'objective': 'multi:softmax',
    'num_class': 3
}
num_round = 50

# Train the model and make predictions
model = xgb.train(params, dtrain, num_round)
y_pred = model.predict(dtest)

# Compute the accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: %.2f%%" % (accuracy * 100.0))
```
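For completeness, XGBoost also ships a scikit-learn-compatible wrapper, `XGBClassifier`, which skips the explicit `DMatrix` construction and plugs directly into sklearn pipelines and cross-validation. A roughly equivalent sketch (the wrapper infers the number of classes from the labels; `learning_rate` and `n_estimators` correspond to `eta` and the number of boosting rounds):
```python
from xgboost import XGBClassifier

# Roughly equivalent model via the sklearn-style API
clf = XGBClassifier(max_depth=3, learning_rate=0.3, n_estimators=50)
clf.fit(X_train, y_train)
y_pred_sk = clf.predict(X_test)
print("Accuracy: %.2f%%" % (accuracy_score(y_test, y_pred_sk) * 100.0))
```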