train_data = pd.get_dummies(train_data)
This line uses pandas' `get_dummies` function to one-hot encode `train_data`: each categorical column is replaced by a set of binary dummy columns, one per category value found in the original data. This converts categorical variables into a numeric form that machine-learning models can work with.
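For example, here is a minimal sketch with a made-up DataFrame (the column names `age` and `city` are illustrative, not from the original question) showing what `get_dummies` produces:
```
import pandas as pd

# Hypothetical toy data: one numeric column and one categorical column
train_data = pd.DataFrame({'age': [22, 35, 58],
                           'city': ['Paris', 'London', 'Paris']})

# One-hot encode every categorical (object/category) column;
# numeric columns are passed through unchanged
train_data = pd.get_dummies(train_data)
print(train_data)
# Resulting columns: age, city_London, city_Paris
```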
Related questions
The following code raises an error:
```
import pandas as pd

data = pd.read_csv('housing.csv')
total_bedrooms_mean = data['total_bedrooms'].mean()
data['total_bedrooms'].fillna(total_bedrooms_mean, inplace=True)

onehot = pd.get_dummies(data[['ocean_proximity']], prefix='ocean_proximity')
data.drop(columns=['ocean_proximity'], inplace=True)

X = pd.concat([data['housing_median_age'], data['total_rooms'], data['total_bedrooms'],
               data['population'], data['households'], data['median_income'], onehot], axis=1)
y = data[["median_house_value"]]

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X_train, y_train)
y_pre = lin_reg.predict(X_test)

from sklearn import metrics
metrics.accuracy_score(y_test, y_pre)
```
This code fails because `metrics.accuracy_score()` is a metric for classification problems; this is a regression problem, so that metric cannot be used here.
To evaluate a regression model, use a regression metric instead, such as mean squared error (MSE), root mean squared error (RMSE), or mean absolute error (MAE). For example, MSE can be computed like this:
```
from sklearn.metrics import mean_squared_error
mse = mean_squared_error(y_test, y_pre)
print(mse)
```
Note that different metrics are interpreted in different ways, so choose the one that matches your specific problem and data.
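As a further sketch, reusing `y_test` and `y_pre` from the snippet above, RMSE and MAE can be computed with scikit-learn as well:
```
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Root mean squared error: the square root of MSE, in the target's units
rmse = np.sqrt(mean_squared_error(y_test, y_pre))
# Mean absolute error: average absolute deviation between predictions and targets
mae = mean_absolute_error(y_test, y_pre)
print(rmse, mae)
```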
```
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Read the training and test sets
train_data = pd.read_csv(r'C:\ADULT\Titanic\train.csv')
test_data = pd.read_csv(r'C:\ADULT\Titanic\test.csv')

# Count missing values in the training and test sets
print(train_data.isnull().sum())
print(test_data.isnull().sum())

# Fill missing values in Age, Fare and Embarked
most_lists = ['Age', 'Fare', 'Embarked']
for col in most_lists:
    train_data[col] = train_data[col].fillna(train_data[col].mode()[0])
    test_data[col] = test_data[col].fillna(test_data[col].mode()[0])

# Split X and y, and one-hot encode the categorical variables
y_train_data = train_data['Survived']
features = ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare', 'Sex', 'Embarked']
X_train_data = pd.get_dummies(train_data[features])
X_test_data = pd.get_dummies(test_data[features])

# Merge the training-set y and X, and create binned passenger categories
train_data_selected = pd.concat([y_train_data, X_train_data], axis=1)
print(train_data_selected)
cate_features = ['Pclass', 'SibSp', 'Parch', 'Sex', 'Embarked', 'Age_category', 'Fare_category']
train_data['Age_category'] = pd.cut(train_data.Fare, bins=range(0, 100, 10)).astype(str)
train_data['Fare_category'] = pd.cut(train_data.Fare, bins=list(range(-20, 110, 20)) + [800]).astype(str)
print(train_data)

# Plot the distribution of each categorical variable and its relation to Survived
plt.figure(figsize=(18, 16))
plt.subplots_adjust(hspace=0.3, wspace=0.3)
for i, cate_feature in enumerate(cate_features):
    plt.subplot(7, 2, 2 * i + 1)
    sns.histplot(x=cate_feature, data=train_data, stat="density")
    plt.xlabel(cate_feature)
    plt.ylabel('Density')
    plt.subplot(7, 2, 2 * i + 2)
    sns.lineplot(x=cate_feature, y='Survived', data=train_data)
    plt.xlabel(cate_feature)
    plt.ylabel('Survived')
plt.show()

# Plot a correlation heatmap
plt.figure(figsize=(12, 8))
sns.heatmap(train_data_selected.corr(), vmin=-1, vmax=1, annot=True)
plt.show()

sourceRow = 891
output = pd.DataFrame({'PassengerId': test_data.PassengerId, 'Survived': predictions})
output.head()
# Save the result
output.to_csv('gender_submission.csv', index=False)
print(output)

train_X, test_X, train_y, test_y = train_test_split(X_train_data, y_train_data, train_size=0.8, random_state=42)
print("Random forest classification results")
y_pred_train1 = train_data.predict(train_X)
y_pred_test1 = train_data.predict(test_X)
accuracy_train1 = accuracy_score(train_y, y_pred_train1)
accuracy_test1 = accuracy_score(test_y, y_pred_test1)
print("Training set accuracy of the random forest classifier:", accuracy_train1)
print("Test set accuracy of the random forest classifier:", accuracy_train1)
```
In your code you are calling a method named `predict` on `train_data`, but `train_data` is a DataFrame and has no `predict` method. You need to create a `RandomForestClassifier`, fit it, and call `predict` on that object. Replace this code:
```
y_pred_train1 = train_data.predict(train_X)
y_pred_test1 = train_data.predict(test_X)
```
with this:
```
# Define the random forest classifier
rfc = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=42)
# Fit the classifier on the training data
rfc.fit(train_X, train_y)
# Predict on the training and test sets
y_pred_train1 = rfc.predict(train_X)
y_pred_test1 = rfc.predict(test_X)
```
This defines the random forest classifier `rfc`, fits it on the training data with `rfc.fit(train_X, train_y)`, and then uses `rfc.predict(train_X)` and `rfc.predict(test_X)` to generate predictions on the training and test sets.
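As a minimal sketch of the remaining evaluation step (and note that the original script prints `accuracy_train1` for both splits, so the second print presumably should use `accuracy_test1`), the accuracies could then be reported like this:
```
from sklearn.metrics import accuracy_score

# Compare the corrected predictions against the true labels on both splits
accuracy_train1 = accuracy_score(train_y, y_pred_train1)
accuracy_test1 = accuracy_score(test_y, y_pred_test1)
print("Training set accuracy of the random forest classifier:", accuracy_train1)
print("Test set accuracy of the random forest classifier:", accuracy_test1)
```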