```
> fit_xgb_cls <- xgb.train(
+   data = dtrain,
+   eta = 0.3,
+   gamma = 0.001,
+   max_depth = 2,
+   subsample = 0.7,
+   colsample_bytree = 0.4,
+   objective = "binary:cox",
+   nrounds = 1000,
+   verbose = 1,
+   print_every_n = 100,
+   early_stopping_rounds = 200
+ )
Error in xgb.iter.update(bst$handle, dtrain, iteration - 1, obj) :
  [22:34:14] amalgamation/../src/objective/objective.cc:26: Unknown objective function: `binary:cox`
Objective candidate: reg:squarederror
Objective candidate: reg:squaredlogerror
Objective candidate: reg:logistic
Objective candidate: reg:pseudohubererror
Objective candidate: binary:logistic
Objective candidate: binary:logitraw
Objective candidate: reg:linear
Objective candidate: count:poisson
Objective candidate: survival:cox
Objective candidate: reg:gamma
Objective candidate: reg:tweedie
Objective candidate: multi:softmax
Objective candidate: multi:softprob
Objective candidate: rank:pairwise
Objective candidate: rank:ndcg
Objective candidate: rank:map
Objective candidate: binary:hinge
Objective candidate: survival:aft
```
The error message shows that the objective function `binary:cox` given in the `objective` parameter is not recognized. XGBoost supports many common objective functions, but `binary:cox` is not one of them. Here are some commonly used objectives:
- `binary:logistic`: logistic regression for binary classification; outputs predicted probabilities.
- `binary:hinge`: SVM-style hinge loss; outputs the predicted class, 0 or 1.
- `binary:logitraw`: outputs the raw log-odds score before the sigmoid transformation.
- `rank:pairwise`: for ranking tasks, trained with a pairwise ranking loss.
You need to set the `objective` parameter to one of the supported objectives. If you want to fit a Cox proportional hazards model for survival analysis, set `objective` to `survival:cox`. If you want to do binary classification, set it to `binary:logistic`.
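Note that when you switch to `survival:cox`, the labels must carry the censoring information: XGBoost treats negative label values as right-censored. A minimal sketch of the label encoding, assuming hypothetical `time`, `status`, and feature matrix `X` (not from your code):
```
# Sketch (assumed data): for survival:cox the label is the observed time,
# with right-censored observations marked by a negative sign.
# time:   follow-up time; status: 1 = event observed, 0 = right censored
library(xgboost)
label  <- ifelse(status == 1, time, -time)
dtrain <- xgb.DMatrix(data = as.matrix(X), label = label)
```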
Related questions
```
> fit_xgb_cls <- xgb.train(
+   data = dtrain,
+   eta = 0.3,
+   gamma = 0.001,
+   max_depth = 2,
+   subsample = 0.7,
+   colsample_bytree = 0.4,
+   objective = "survival:cox",
+   nrounds = 1000,
+   verbose = 1,
+   print_every_n = 100,
+   early_stopping_rounds = 200
+ )
Error in init(env) : For early stopping, watchlist must have at least one element
```
This error message indicates that you set the `early_stopping_rounds` parameter, but the `watchlist` parameter is empty. The `watchlist` parameter is used to track the model's performance during training so that training can stop once that performance no longer improves. Therefore, when using `early_stopping_rounds`, you must provide a `watchlist` with at least one dataset.
You need to add the `watchlist` argument to the `xgb.train` call. For example, you can define it as follows:
```
watchlist <- list(train=dtrain, eval=dtest)
```
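Here `dtest` is assumed to be a second `xgb.DMatrix` built from held-out data; a minimal sketch (the `X_test` and `label_test` names are hypothetical):
```
# Sketch (assumed split): an evaluation DMatrix for the watchlist,
# with labels encoded the same way as for dtrain
dtest <- xgb.DMatrix(data = as.matrix(X_test), label = label_test)
```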
Then pass the `watchlist` argument to `xgb.train`:
```
fit_xgb_cls <- xgb.train(
data = dtrain,
eta = 0.3,
gamma = 0.001,
max_depth = 2,
subsample = 0.7,
colsample_bytree = 0.4,
objective = "survival:cox",
nrounds = 1000,
verbose = 1,
print_every_n = 100,
early_stopping_rounds = 200,
watchlist = watchlist
)
```
This should resolve the error.
In the feature importance plot produced by the following code, keep the values to four decimal places:
```
params_0 = {
    'booster': 'gbtree',
    'objective': 'multi:softmax',
    'num_class': 4,
    'gamma': 0.1,
    'max_depth': 7,
    'lambda': 2,
    'subsample': 0.8,
    'colsample_bytree': 0.8,
    'min_child_weight': 3,
    'eta': 0.1,
    'seed': 1000,
    'gain': True,
    'learning_rate': 0.1
}
model_XGB_clf = xgb.train(params=params_0, dtrain=dtrain_0, num_boost_round=30)
# Predict
ans = model_XGB_clf.predict(dtest_0)
# Accuracy
acc = metrics.accuracy_score(y_test, ans)
print(acc)  # 0.625866050808314
# Feature importance plot
xgb.plot_importance(model_XGB_clf, importance_type='gain')
plt.show()  # the figure is saved in the working directory
```
Since I cannot run this code, I cannot generate the actual feature importance plot. However, the following code can serve as a reference for generating the plot and saving it to a file path of your choice:
```
import xgboost as xgb
import matplotlib.pyplot as plt
from sklearn import metrics  # needed for accuracy_score below

params_0 = {
    'booster': 'gbtree',
    'objective': 'multi:softmax',
    'num_class': 4,
    'gamma': 0.1,
    'max_depth': 7,
    'lambda': 2,
    'subsample': 0.8,
    'colsample_bytree': 0.8,
    'min_child_weight': 3,
    'eta': 0.1,          # alias of learning_rate; setting both is redundant
    'seed': 1000,
    'gain': True,        # not a standard XGBoost parameter; it is ignored
    'learning_rate': 0.1
}
model_XGB_clf = xgb.train(params=params_0, dtrain=dtrain_0, num_boost_round=30)
# Predict
ans = model_XGB_clf.predict(dtest_0)
# Accuracy
acc = metrics.accuracy_score(y_test, ans)
print(acc)  # 0.625866050808314
# Feature importance plot, ranked by gain
fig, ax = plt.subplots(figsize=(12, 8))
xgb.plot_importance(model_XGB_clf, importance_type='gain', ax=ax)
plt.savefig('path/to/save/figure', dpi=300, bbox_inches='tight')
```
Replace `path/to/save/figure` with the file path where you want to save the figure.
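The original question also asks for the importance values rounded to four decimal places, which the code above does not do: `plot_importance` draws the raw gain values. A minimal sketch of one way to achieve this, assuming the trained booster above: pull the gain scores with `get_score`, round them, and pass the resulting dict to `plot_importance`, which also accepts a plain dict of scores:
```
# Sketch: round gain-based importance scores to four decimal places
scores = model_XGB_clf.get_score(importance_type='gain')
scores_rounded = {feat: round(val, 4) for feat, val in scores.items()}

# plot_importance accepts a dict of {feature: score}
fig, ax = plt.subplots(figsize=(12, 8))
xgb.plot_importance(scores_rounded, ax=ax)
plt.savefig('path/to/save/figure_rounded', dpi=300, bbox_inches='tight')
```
If your xgboost version supports it, `plot_importance` also takes a `values_format` argument (e.g. `values_format='{v:.4f}'`) that formats the displayed numbers directly.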