Building a Decision Tree in Python Based on the Gini Index
Posted: 2023-06-16 22:06:54
A decision tree is a tree-structured classification algorithm: it recursively partitions the training set into ever smaller subsets until each subset is (nearly) pure, at which point classification is done by walking a sample down to a leaf. Choosing each split so as to minimize the Gini index is one of the most common construction methods (it is the criterion used by CART).
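As a quick sanity check of the underlying formula, Gini = 1 − Σₖ pₖ², here is a tiny standalone sketch (the helper name `gini` is just for illustration):

```python
# Gini index of a label list: 1 minus the sum of squared class probabilities
def gini(labels):
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

print(gini([1, 1, 1, 1]))  # 0.0 -- a pure node has zero impurity
print(gini([0, 0, 1, 1]))  # 0.5 -- a maximally mixed binary node
```

A pure node scores 0 and a 50/50 binary mix scores 0.5, which is why lower Gini means a better split.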
Below is Python code for building a decision tree based on the Gini index:
```python
import numpy as np
import pandas as pd

# Gini index of a label set: 1 - sum of squared class probabilities
def get_gini(labels):
    n = len(labels)
    cnt = {}
    for label in labels:
        cnt[label] = cnt.get(label, 0) + 1
    gini = 1.0
    for label in cnt:
        prob = cnt[label] / n
        gini -= prob * prob
    return gini

# Decision tree classifier that uses the Gini index as its split criterion
class DecisionTree:
    def __init__(self, max_depth=10, min_samples_split=2):
        self.tree = None
        self.max_depth = max_depth
        self.min_samples_split = min_samples_split

    # Train: build the tree from the training data
    def fit(self, X, y):
        self.tree = self.build_tree(np.asarray(X), np.asarray(y), 0)

    # Predict labels for a batch of samples
    def predict(self, X):
        return [self.predict_one(x, self.tree) for x in X]

    # Find the (feature, threshold) pair with the lowest weighted Gini.
    # Every observed value of every feature is tried as a candidate
    # threshold, so this is O(m^2 * n) -- fine for small data sets.
    def split(self, X, y):
        m, n = X.shape
        best_gini = float('inf')
        best_feature = None
        best_value = None
        for j in range(n):
            for i in range(m):
                mask = X[:, j] < X[i][j]
                left_y = y[mask]
                right_y = y[~mask]
                if len(left_y) < self.min_samples_split or len(right_y) < self.min_samples_split:
                    continue
                # Weighted average of the two child impurities
                gini = (get_gini(left_y) * len(left_y) + get_gini(right_y) * len(right_y)) / m
                if gini < best_gini:
                    best_gini = gini
                    best_feature = j
                    best_value = X[i][j]
        return best_feature, best_value

    # Recursively build the tree; leaves store the majority class label
    def build_tree(self, X, y, depth):
        if depth >= self.max_depth or len(X) < self.min_samples_split:
            return int(np.argmax(np.bincount(y)))
        feature, value = self.split(X, y)
        if feature is None:  # no valid split was found
            return int(np.argmax(np.bincount(y)))
        mask = X[:, feature] < value
        node = {
            'feature': feature,
            'value': value,
            'left': self.build_tree(X[mask], y[mask], depth + 1),
            'right': self.build_tree(X[~mask], y[~mask], depth + 1),
        }
        return node

    # Predict a single sample by walking the tree down to a leaf
    def predict_one(self, x, tree):
        if not isinstance(tree, dict):  # leaf node: tree is a class label
            return tree
        if x[tree['feature']] < tree['value']:
            return self.predict_one(x, tree['left'])
        else:
            return self.predict_one(x, tree['right'])
```
The code above implements a Gini-based DecisionTree class, comprising the Gini computation, training, prediction, node-splitting, and tree-building functions. You can adjust the constructor parameters as needed, such as the maximum depth and the minimum number of samples required to split a node.
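To see how the splitting criterion scores a candidate split, here is a hand-worked sketch of the weighted child Gini used in `split` (the sample labels are made up for illustration):

```python
# Gini index of a label list: 1 minus the sum of squared class probabilities
def gini(labels):
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

# A candidate split sends 4 samples left and 2 samples right
left_y, right_y = [0, 0, 0, 1], [1, 1]
m = len(left_y) + len(right_y)

# Weighted average of the child impurities, as computed in split()
weighted = gini(left_y) * len(left_y) / m + gini(right_y) * len(right_y) / m
print(weighted)  # 0.25: the left child contributes 0.375 * 4/6, the right is pure
```

The split search simply repeats this computation for every candidate threshold and keeps the one with the smallest weighted impurity.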
Example code for training and predicting with this class:
```python
# Load the data (the last column is assumed to be the label)
data = pd.read_csv('data.csv')
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values

# Instantiate the decision tree model
model = DecisionTree()

# Train the model
model.fit(X, y)

# Predict new samples
x_new = [[1, 1, 0, 0],
         [1, 1, 1, 0],
         [0, 1, 0, 1]]
y_pred = model.predict(x_new)
print(y_pred)
```
The code above first loads the data, then instantiates the decision-tree model, trains it, and finally predicts the new samples and prints the results. Note that the leaf labels are produced with np.bincount, so the class labels must be non-negative integers.
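For anything beyond toy data you would normally reach for scikit-learn, whose DecisionTreeClassifier also defaults to the Gini criterion. A minimal cross-check sketch, assuming scikit-learn is installed (the toy data here is made up for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Tiny separable toy data: the label simply follows the first feature
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 1, 1])

# Same hyperparameters as the hand-rolled class above
clf = DecisionTreeClassifier(criterion='gini', max_depth=10, min_samples_split=2)
clf.fit(X, y)
print(clf.predict([[0, 1], [1, 0]]))  # [0 1]
```

The library version uses an optimized split search, so it scales far better than the O(m²·n) loop in the hand-written `split`.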