Use code to process `high_diamond_ranked_10min.csv` and carry out feature engineering. First explore and analyze the data, then preprocess it with normalization. Apply filter-based feature selection to pick an optimal number of features. Reduce the two classes of data to 2 dimensions with PCA and visualize them. Apply PCA to the full dataset and use a scree plot to choose a suitable range for the reduced dimensionality. In a single figure, draw line charts comparing decision-tree and random-forest accuracy across different feature counts. Finally, output the total feature count with its accuracy, the post-filtering feature count with its accuracy, and the post-PCA feature count with its accuracy, for both the decision tree and the random forest.
Sure, let me walk through this step by step.
First, import the required libraries and load the data. Assuming the file is named `high_diamond_ranked_10min.csv`, it can be loaded and explored like this:
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Load the data
df = pd.read_csv("high_diamond_ranked_10min.csv")
# Column types and non-null counts
print(df.info())
# Summary statistics
print(df.describe())
# First five rows
print(df.head())
```
Next, preprocess the data: handle missing values, drop the non-predictive identifier column, and min-max normalize everything to [0, 1]:
```python
# Drop rows with missing values (this dataset is normally complete; this is just a safeguard)
df.dropna(inplace=True)
# Drop the match-identifier column, which carries no predictive signal
df = df.drop(columns=["gameId"], errors="ignore")
# Min-max normalization to the [0, 1] range
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
df_scaled = pd.DataFrame(scaler.fit_transform(df), columns=df.columns)
```
Then comes feature selection with a filter method, for example variance thresholding or correlation with the target. Here is an example of the correlation approach:
```python
# Absolute correlation of each column with the target blueWins
corr = df_scaled.corr()
corr_target = abs(corr["blueWins"])
# Keep features with |correlation| above 0.2, excluding the target itself
relevant_features = corr_target[corr_target > 0.2].drop("blueWins")
print(relevant_features)
```
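The variance-threshold filter mentioned above can be sketched as follows. This is a minimal, self-contained example on synthetic data; the `0.01` threshold is an illustrative assumption, not a value tuned for the LoL dataset:

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# Synthetic data: three columns, the middle one is constant
X_toy = np.array([[0.0, 1.0, 0.1],
                  [1.0, 1.0, 0.9],
                  [0.0, 1.0, 0.2],
                  [1.0, 1.0, 0.8]])

# Remove features whose variance falls below the threshold
selector = VarianceThreshold(threshold=0.01)
X_toy_reduced = selector.fit_transform(X_toy)

print(X_toy_reduced.shape)     # the constant column is dropped
print(selector.get_support())  # boolean mask of the kept columns
```

On the real data, the same selector would be fitted on `df_scaled` with the `blueWins` target column removed first.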
Next, PCA can project the data down to 2 dimensions so the two classes (blue wins vs. blue losses) can be visualized in a scatter plot:
```python
# PCA to 2 components, fitted on the features only (target excluded)
from sklearn.decomposition import PCA
X_features = df_scaled.drop(columns=["blueWins"])
pca = PCA(n_components=2)
pca_result = pca.fit_transform(X_features)
df_pca = pd.DataFrame(data=pca_result, columns=["PC1", "PC2"])
# Scatter plot of the two classes in the PCA plane
plt.scatter(df_pca["PC1"], df_pca["PC2"], c=df_scaled["blueWins"], cmap="coolwarm", s=5, alpha=0.5)
plt.xlabel("PC1"); plt.ylabel("PC2")
plt.show()
```
Then a scree plot of the explained variance of each principal component lets us choose a suitable range for the number of components to keep:
```python
# Scree plot: per-component and cumulative explained variance
pca_full = PCA().fit(df_scaled.drop(columns=["blueWins"]))
evr = pca_full.explained_variance_ratio_
plt.plot(range(1, len(evr) + 1), evr, marker="o", label="per component")
plt.plot(range(1, len(evr) + 1), np.cumsum(evr), marker="s", label="cumulative")
plt.title("Scree Plot")
plt.xlabel("Principal Component")
plt.ylabel("Explained Variance Ratio")
plt.legend()
plt.show()
```
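Beyond eyeballing the scree plot, scikit-learn's `PCA` also accepts a float `n_components`, meaning "keep enough components to explain that fraction of the variance". A small sketch on synthetic data (the 0.95 cutoff is an assumed choice for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic data: 5 latent dimensions mixed into 20 observed ones, plus tiny noise
latent = rng.normal(size=(500, 5))
mixing = rng.normal(size=(5, 20))
X_syn = latent @ mixing + 0.01 * rng.normal(size=(500, 20))

# Float n_components = retain enough components to reach 95% explained variance
pca_auto = PCA(n_components=0.95)
pca_auto.fit(X_syn)
print(pca_auto.n_components_)                    # number of components kept
print(pca_auto.explained_variance_ratio_.sum())  # at least 0.95 by construction
```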
Next, we can plot how decision-tree and random-forest accuracy change with the number of selected features, adding features in order of decreasing correlation with the target:
```python
# Compare decision-tree and random-forest accuracy as the feature count grows
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# Rank the filtered features by |correlation| with the target
ranked = relevant_features.drop("blueWins", errors="ignore").sort_values(ascending=False)
X = df_scaled[ranked.index]
y = df_scaled["blueWins"]
dt_scores = []
rf_scores = []
for i in range(1, len(X.columns) + 1):
    X_i = X.iloc[:, :i]  # top-i most correlated features
    X_train, X_test, y_train, y_test = train_test_split(X_i, y, test_size=0.3, random_state=0)
    dt = DecisionTreeClassifier(random_state=0)
    dt.fit(X_train, y_train)
    dt_scores.append(dt.score(X_test, y_test))
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X_train, y_train)
    rf_scores.append(rf.score(X_test, y_test))
plt.plot(range(1, len(X.columns) + 1), dt_scores, label="Decision Tree")
plt.plot(range(1, len(X.columns) + 1), rf_scores, label="Random Forest")
plt.legend()
plt.title("Accuracy vs. Number of Features")
plt.xlabel("Number of Features")
plt.ylabel("Accuracy")
plt.show()
```
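One caveat: a single train/test split can make these accuracy curves noisy. A more stable alternative is k-fold cross-validation. The sketch below uses synthetic data from `make_classification` as a stand-in for the match data, and `cv=5` is an assumed fold count:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic binary-classification stand-in for the match data
X_cv, y_cv = make_classification(n_samples=500, n_features=10, n_informative=5, random_state=0)

# Mean accuracy over 5 folds is more stable than a single split
dt_cv = cross_val_score(DecisionTreeClassifier(random_state=0), X_cv, y_cv, cv=5)
rf_cv = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0), X_cv, y_cv, cv=5)
print("Decision Tree CV accuracy:", round(dt_cv.mean(), 3))
print("Random Forest CV accuracy:", round(rf_cv.mean(), 3))
```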
Finally, output the feature counts and the corresponding accuracies for the full feature set, the filtered feature set, and the PCA-reduced features:
```python
# Accuracy with the full feature set
X_full = df_scaled.drop(columns=["blueWins"])
X_full_train, X_full_test, y_train, y_test = train_test_split(X_full, y, test_size=0.3, random_state=0)
dt_full = DecisionTreeClassifier(random_state=0)
dt_full.fit(X_full_train, y_train)
rf_full = RandomForestClassifier(n_estimators=100, random_state=0)
rf_full.fit(X_full_train, y_train)
print("All Features:")
print("Number of Features:", len(X_full.columns))
print("Decision Tree Accuracy:", dt_full.score(X_full_test, y_test))
print("Random Forest Accuracy:", rf_full.score(X_full_test, y_test))

# Feature count and accuracy after correlation filtering
# (drop the target defensively in case it slipped into the selection)
X_filtered = df_scaled[relevant_features.index].drop(columns=["blueWins"], errors="ignore")
X_f_train, X_f_test, y_train, y_test = train_test_split(X_filtered, y, test_size=0.3, random_state=0)
dt_filtered = DecisionTreeClassifier(random_state=0)
dt_filtered.fit(X_f_train, y_train)
rf_filtered = RandomForestClassifier(n_estimators=100, random_state=0)
rf_filtered.fit(X_f_train, y_train)
print("Filtered Features:")
print("Number of Features:", len(X_filtered.columns))
print("Decision Tree Accuracy:", dt_filtered.score(X_f_test, y_test))
print("Random Forest Accuracy:", rf_filtered.score(X_f_test, y_test))

# Feature count and accuracy after PCA (8 components, chosen from the scree plot)
pca8 = PCA(n_components=8)
X_pca = pca8.fit_transform(df_scaled.drop(columns=["blueWins"]))
X_p_train, X_p_test, y_train, y_test = train_test_split(X_pca, y, test_size=0.3, random_state=0)
dt_pca = DecisionTreeClassifier(random_state=0)
dt_pca.fit(X_p_train, y_train)
rf_pca = RandomForestClassifier(n_estimators=100, random_state=0)
rf_pca.fit(X_p_train, y_train)
print("PCA Features:")
print("Number of Features:", X_pca.shape[1])
print("Decision Tree Accuracy:", dt_pca.score(X_p_test, y_test))
print("Random Forest Accuracy:", rf_pca.score(X_p_test, y_test))
```
I hope this code helps you move your project forward.