What does `X = df_copy_Logistic.drop(columns=['HeartDisease'])` do?
ChatGPT: `X = df_copy_Logistic.drop(columns=['HeartDisease'])` removes the column named 'HeartDisease' from `df_copy_Logistic` and stores the result in the variable `X`. By default, `drop` returns a new DataFrame and leaves `df_copy_Logistic` unchanged, so this is the usual way of separating the feature matrix `X` from the target column.
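A minimal sketch of this behavior, assuming a toy DataFrame (the 'Age' and 'Cholesterol' columns are made up for illustration; only 'HeartDisease' comes from the question):
```python
import pandas as pd

# Hypothetical data; only the 'HeartDisease' column name is taken from the question.
df_copy_Logistic = pd.DataFrame({
    'Age': [40, 55, 63],
    'Cholesterol': [210, 180, 250],
    'HeartDisease': [0, 1, 1],
})

# Drop the target column to obtain the feature matrix X.
X = df_copy_Logistic.drop(columns=['HeartDisease'])

print(X.columns.tolist())                           # ['Age', 'Cholesterol']
print('HeartDisease' in df_copy_Logistic.columns)   # True -- the original DataFrame is unchanged
```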
Related questions
```python
df_new = df.drop(columns=["姓名", "位置", "球队"])  # remove non-numeric columns
X_train, X_test, Y_train, Y_test = train_test_split(df_new, Pie1_array, test_size=0.2, random_state=180)

# define the model
def basic_logosticregression(X_train, X_test, Y_train, Y_test):
    model = LogisticRegression(random_state=0, solver='lbfgs')
    model.fit(X_train, Y_train)
    Y_train_pre = model.predict(X_train)
    Y_test_pre = model.predict(X_test)
    train_predict_proba = model.predict_proba(X_train)
    test_predict_proba = model.predict_proba(X_test)
    confusion_matrix_result = metrics.confusion_matrix(Y_test_pre, Y_test)
    print('Confusion matrix:\n', confusion_matrix_result)
    plt.figure(figsize=(8, 6))
    sns.heatmap(confusion_matrix_result, annot=True, cmap='Blues')
    # plt.xlabel('Predicted label')
    # plt.ylabel('Actual label')
    print("Logistic regression")
    print("score_train: " + str(model.score(X_train, Y_train)))
    print("score_test: " + str(model.score(X_test, Y_test)))

basic_logosticregression(X_train, X_test, Y_train, Y_test)

Pie_array = np.array(df['球员贡献度'])
df_new = df.drop(columns=["姓名", "位置", "球队", '球员贡献度'])
data = df_new.values.tolist()
```
In this snippet, `df.drop` is first used to remove the non-numeric columns from the DataFrame `df`, and `train_test_split` then splits the data into training and test sets. A logistic regression routine, `basic_logosticregression`, is defined; inside it a `LogisticRegression` model is fitted and used for prediction. Finally, the player-contribution column (`球员贡献度`) is pulled out of the DataFrame, the non-numeric columns are dropped again with `df.drop`, and the remaining values are converted to a list.
Note that the snippet never shows where `Pie1_array` is defined, so its meaning and type cannot be determined, which limits how deeply the code can be analyzed. Also, `metrics.confusion_matrix` expects `(y_true, y_pred)`, so the call `metrics.confusion_matrix(Y_test_pre, Y_test)` has its arguments swapped, and the logistic regression model is trained with default hyperparameters and no tuning, which may limit its performance. For a more thorough analysis or for model optimization, consult the relevant tutorials or experiment further yourself.
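As a sketch of what such tuning could look like (the use of `GridSearchCV` and the parameter grid below are illustrative assumptions, not part of the original code):
```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Illustrative search space; the values are assumptions, not taken from the question.
param_grid = {
    'C': [0.01, 0.1, 1.0, 10.0],
    'solver': ['lbfgs'],
}

grid = GridSearchCV(
    LogisticRegression(random_state=0, max_iter=1000),
    param_grid,
    cv=5,
    scoring='accuracy',
)
grid.fit(X_train, Y_train)  # X_train / Y_train as produced by train_test_split above

print("Best parameters:", grid.best_params_)
print("Cross-validated accuracy:", grid.best_score_)
```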
```python
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

df = pd.read_csv('heart.csv')

corrmat = df.corr()
top_corr_features = corrmat.index
plt.figure(figsize=(16, 16))
sns.heatmap(df[top_corr_features].corr(), annot=True, cmap="RdYlGn")
plt.show()

sns.set_style('whitegrid')
sns.countplot(x='target', data=df, palette='RdBu_r')
plt.show()

dataset = pd.get_dummies(df, columns=['sex', 'cp', 'fbs', 'restecg', 'exang', 'slope', 'ca', 'thal'])
columns_to_scale = ['age', 'trestbps', 'chol', 'thalach', 'oldpeak']
scaler = StandardScaler()
dataset[columns_to_scale] = scaler.fit_transform(dataset[columns_to_scale])
dataset.head()

y = dataset['target']
X = dataset.drop(['target'], axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

logreg = LogisticRegression()
logreg.fit(X_train, y_train)
print("Training accuracy: {:.3f}".format(logreg.score(X_train, y_train)))
print("Test accuracy: {:.3f}".format(logreg.score(X_test, y_test)))
```
Rewrite this so that the model is evaluated with a ROC curve.
Here is the code with the Logistic Regression evaluation changed to a ROC curve:
```python
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc
df = pd.read_csv('heart.csv')
# inspect feature correlations
corrmat = df.corr()
top_corr_features = corrmat.index
plt.figure(figsize=(16,16))
sns.heatmap(df[top_corr_features].corr(),annot=True,cmap="RdYlGn")
plt.show()
# inspect the class distribution
sns.set_style('whitegrid')
sns.countplot(x='target',data=df,palette='RdBu_r')
plt.show()
# one-hot encode categorical features and standardize numeric ones
dataset = pd.get_dummies(df, columns=['sex', 'cp', 'fbs', 'restecg', 'exang', 'slope', 'ca', 'thal'])
columns_to_scale = ['age', 'trestbps', 'chol', 'thalach', 'oldpeak']
scaler = StandardScaler()
dataset[columns_to_scale] = scaler.fit_transform(dataset[columns_to_scale])
dataset.head()
# split into training and test sets
y = dataset['target']
X = dataset.drop(['target'], axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
# train the model
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
# evaluate the model: predicted probabilities and ROC points
y_train_pred = logreg.predict_proba(X_train)[:, 1]
y_test_pred = logreg.predict_proba(X_test)[:, 1]
fpr_train, tpr_train, thresholds_train = roc_curve(y_train, y_train_pred)
fpr_test, tpr_test, thresholds_test = roc_curve(y_test, y_test_pred)
roc_auc_train = auc(fpr_train, tpr_train)
roc_auc_test = auc(fpr_test, tpr_test)
# plot the ROC curves
plt.figure()
plt.plot(fpr_train, tpr_train, color='darkorange', lw=2, label='Train ROC curve (area = %0.2f)' % roc_auc_train)
plt.plot(fpr_test, tpr_test, color='navy', lw=2, label='Test ROC curve (area = %0.2f)' % roc_auc_test)
plt.plot([0, 1], [0, 1], color='black', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
```
In this code, `roc_curve` computes the FPR and TPR for the training and test sets, `auc` computes the area under each ROC curve, and `matplotlib` is used to plot the curves.
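If only the AUC value is needed, a shorter route (a sketch assuming the same `logreg`, `X_test`, and `y_test` as above) is `roc_auc_score`:
```python
from sklearn.metrics import roc_auc_score

# Probability of the positive class for the test set.
y_test_proba = logreg.predict_proba(X_test)[:, 1]

# Equivalent to auc(fpr_test, tpr_test) computed from roc_curve above.
print("Test ROC AUC: {:.3f}".format(roc_auc_score(y_test, y_test_proba)))
```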