Grid search for the optimal combination of XGBoost's n_estimators and learning_rate, visualized as a heatmap
Posted: 2024-01-22 15:20:55
First, import the necessary libraries and load the data. Here we use the Iris dataset that ships with sklearn.
```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Load the dataset
iris = load_iris()
X = iris.data
y = iris.target
```
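If you want to confirm what was loaded, a quick (purely optional) shape check shows the familiar 150 samples and 4 features of Iris:
```python
# Quick sanity check of the loaded data
print(X.shape, y.shape)   # (150, 4) (150,)
```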
Next, define a dictionary `param_grid` that specifies the search space for the grid search. Here we set ranges for the two parameters `n_estimators` and `learning_rate`.
```python
param_grid = {
    'n_estimators': [50, 100, 200, 300],
    'learning_rate': [0.01, 0.1, 0.5, 1]
}
```
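Before fitting, it can help to gauge the cost of the search. The sketch below uses sklearn's `ParameterGrid` to count the candidate combinations and multiplies by the 5 folds used in the next step:
```python
from sklearn.model_selection import ParameterGrid

# 4 values of n_estimators x 4 values of learning_rate = 16 candidate combinations
n_candidates = len(ParameterGrid(param_grid))
print(n_candidates)        # 16
print(n_candidates * 5)    # 80 model fits with 5-fold cross-validation
```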
Then instantiate an XGBClassifier and pass it to GridSearchCV to run the grid search.
```python
# Instantiate the XGBClassifier
xgb_model = XGBClassifier()

# Run the grid search with 5-fold cross-validation
grid_search = GridSearchCV(xgb_model, param_grid=param_grid, cv=5, scoring='accuracy')
grid_search.fit(X, y)
```
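Since the goal is to find the best combination, you can also read it straight off the fitted search object via the standard `best_params_` and `best_score_` attributes before plotting anything:
```python
# The winning hyperparameter combination and its mean cross-validated accuracy
print(grid_search.best_params_)
print(grid_search.best_score_)
```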
Finally, visualize the mean test score of every hyperparameter combination as a heatmap, which makes it easy to spot the best combination.
```python
# Collect the mean test score for every parameter combination
results = grid_search.cv_results_
params = results['params']
scores = results['mean_test_score']

# Reshape the results into a matrix and plot it as a heatmap
results_df = pd.DataFrame(params)
results_df['score'] = scores
results_df = results_df.pivot(index='n_estimators', columns='learning_rate', values='score')
sns.heatmap(results_df, annot=True, cmap='YlGnBu')
plt.show()
```
The complete code is as follows:
```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Load the dataset
iris = load_iris()
X = iris.data
y = iris.target

# Define the parameter grid
param_grid = {
    'n_estimators': [50, 100, 200, 300],
    'learning_rate': [0.01, 0.1, 0.5, 1]
}

# Instantiate the XGBClassifier
xgb_model = XGBClassifier()

# Run the grid search with 5-fold cross-validation
grid_search = GridSearchCV(xgb_model, param_grid=param_grid, cv=5, scoring='accuracy')
grid_search.fit(X, y)

# Collect the mean test score for every parameter combination
results = grid_search.cv_results_
params = results['params']
scores = results['mean_test_score']

# Reshape the results into a matrix and plot it as a heatmap
results_df = pd.DataFrame(params)
results_df['score'] = scores
results_df = results_df.pivot(index='n_estimators', columns='learning_rate', values='score')
sns.heatmap(results_df, annot=True, cmap='YlGnBu')
plt.show()
```
Running the script produces a heatmap; with the 'YlGnBu' colormap, the darkest cell corresponds to the highest mean test accuracy, i.e. the optimal combination of n_estimators and learning_rate.
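To double-check that the darkest cell agrees with what GridSearchCV reports, you can also locate the maximum of the pivoted score table directly; this is a small optional sketch using the `results_df` built above.
```python
# Optional cross-check: coordinates of the highest mean test score
best_n_estimators, best_learning_rate = results_df.stack().idxmax()
print(best_n_estimators, best_learning_rate)
print(results_df.stack().max())   # should match grid_search.best_score_
```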