Decision Tree Algorithms in Supervised Learning
The decision tree is a tree-structured classification algorithm for supervised learning. It recursively partitions the dataset to build a tree in which every leaf node represents a class. To classify a new sample, you start at the root and follow the branch matching the sample's feature value at each node until you reach a leaf, whose class is the prediction. There are several decision tree algorithms, including ID3, C4.5, and CART: ID3 chooses splits by information gain, C4.5 by gain ratio, and CART by the Gini index.
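As a rough illustration, the sketch below computes the entropy used by ID3 and the Gini impurity used by CART for a toy list of labels (C4.5's gain ratio is the information gain divided by the split's intrinsic information). The helper names here are my own and not from any particular library:
```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels (used by ID3's information gain)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity of a list of class labels (used by CART)."""
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

labels = [1, 1, 0, 0]        # a perfectly balanced toy example
print(entropy(labels))       # 1.0 for a 50/50 split
print(gini(labels))          # 0.5 for a 50/50 split
```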
Related questions
Write a decision tree algorithm for supervised learning in Python
OK. Below is an ID3 decision tree (based on information gain) implemented in Python. Suppose we have a binary classification problem with the following dataset:
| Attribute 1 | Attribute 2 | Class |
|-------------|-------------|-------|
| 1           | 1           | 1     |
| 1           | 0           | 1     |
| 0           | 1           | 0     |
| 0           | 0           | 0     |
First, we need a function to compute information entropy:
```python
import math
def calc_entropy(data):
    """Information entropy of a dataset; the class label is the last element of each row."""
    n = len(data)
    if n == 0:
        return 0
    # Count how many samples fall into each class
    count = {}
    for d in data:
        label = d[-1]
        if label not in count:
            count[label] = 0
        count[label] += 1
    # Entropy = -sum(p * log2(p)) over all classes
    entropy = 0
    for c in count.values():
        p = c / n
        entropy -= p * math.log2(p)
    return entropy
```
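A quick sanity check on the four-row dataset above, with each row written as an [attribute 1, attribute 2, class] list:
```python
data = [[1, 1, 1], [1, 0, 1], [0, 1, 0], [0, 0, 0]]
print(calc_entropy(data))  # 1.0: the two classes are evenly balanced
```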
Next, we need a function to compute information gain:
```python
def calc_gain(data, feature_idx):
    """Information gain obtained by splitting the dataset on the feature at feature_idx."""
    n = len(data)
    if n == 0:
        return 0
    entropy_before = calc_entropy(data)
    # Partition the dataset by the value of the chosen feature
    subsets = {}
    for d in data:
        subsets.setdefault(d[feature_idx], []).append(d)
    # Conditional entropy: weighted average of the entropy of each subset
    entropy_after = 0
    for subset in subsets.values():
        p = len(subset) / n
        entropy_after += p * calc_entropy(subset)
    return entropy_before - entropy_after
```
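On the same dataset, splitting on attribute 1 (index 0) separates the two classes perfectly, while attribute 2 (index 1) carries no information about the class:
```python
data = [[1, 1, 1], [1, 0, 1], [0, 1, 0], [0, 0, 0]]
print(calc_gain(data, 0))  # 1.0: attribute 1 splits the classes perfectly
print(calc_gain(data, 1))  # 0.0: attribute 2 tells us nothing about the class
```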
Next, we can write a recursive function to build the decision tree:
```python
def build_tree(data, feature_list):
    # If the dataset is empty, return an empty node
    if len(data) == 0:
        return None
    # If all samples belong to the same class, return a leaf node with that class
    labels = set(d[-1] for d in data)
    if len(labels) == 1:
        return labels.pop()
    # If the feature list is empty, return a leaf node labelled with the majority class
    if len(feature_list) == 0:
        label_counts = {}
        for d in data:
            label = d[-1]
            if label not in label_counts:
                label_counts[label] = 0
            label_counts[label] += 1
        return max(label_counts, key=label_counts.get)
    # Choose the feature with the largest information gain
    best_feature = max(feature_list, key=lambda x: calc_gain(data, x))
    # Build the root node
    root = {'feature': best_feature, 'children': {}}
    # Split the dataset into subsets and build the subtrees recursively
    for feature_value in set(d[best_feature] for d in data):
        sub_data = [d for d in data if d[best_feature] == feature_value]
        sub_tree = build_tree(sub_data, [f for f in feature_list if f != best_feature])
        root['children'][feature_value] = sub_tree
    return root
```
Finally, we can use the built decision tree to make predictions:
```python
def predict(tree, sample):
    # If the current node is a leaf node, return its class
    if type(tree) != dict:
        return tree
    # Otherwise, follow the branch that matches the sample's feature value and recurse
    feature_value = sample[tree['feature']]
    if feature_value not in tree['children']:
        return None
    sub_tree = tree['children'][feature_value]
    return predict(sub_tree, sample)
```
Now we can use the functions above to build the decision tree and make a prediction:
```python
data = [
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0]
]
feature_list = [0, 1]                  # indices of the candidate features
tree = build_tree(data, feature_list)  # build the decision tree
sample = [1, 0]                        # sample to classify
print(predict(tree, sample))           # print the predicted class
```
The output is 1, i.e. the sample belongs to the first class.
In Python: (1) implement a simple decision tree algorithm for supervised learning; (2) implement a simple naive Bayes algorithm for supervised learning; (3) implement a K-nearest-neighbour algorithm for unsupervised learning
OK, I will answer your three questions in turn:
1. A simple decision tree algorithm for supervised learning:
```python
import numpy as np

class DecisionTree:
    def __init__(self):
        self.tree = {}

    def fit(self, X, y):
        self.tree = self.build_tree(X, y)

    def predict(self, X):
        return [self.predict_instance(x, self.tree) for x in X]

    def build_tree(self, X, y):
        n_samples, n_features = X.shape
        if n_samples == 0:
            return None
        # If all samples share one class, return a leaf
        if len(set(y)) == 1:
            return y[0]
        best_feature, best_threshold = self.get_best_split(X, y)
        # No valid split found: return the majority class
        if best_feature is None:
            values, counts = np.unique(y, return_counts=True)
            return values[np.argmax(counts)]
        left_indices = X[:, best_feature] < best_threshold
        right_indices = X[:, best_feature] >= best_threshold
        left_tree = self.build_tree(X[left_indices], y[left_indices])
        right_tree = self.build_tree(X[right_indices], y[right_indices])
        return {'feature': best_feature, 'threshold': best_threshold,
                'left_tree': left_tree, 'right_tree': right_tree}

    def get_best_split(self, X, y):
        best_feature = None
        best_threshold = None
        best_score = float('inf')  # weighted Gini impurity: lower is better
        for feature in range(X.shape[1]):
            feature_values = X[:, feature]
            possible_thresholds = np.unique(feature_values)
            for threshold in possible_thresholds:
                left_indices = feature_values < threshold
                right_indices = feature_values >= threshold
                # Skip splits that leave either side empty
                if np.sum(left_indices) == 0 or np.sum(right_indices) == 0:
                    continue
                left_y = y[left_indices]
                right_y = y[right_indices]
                score = self.gini_index(y, left_y, right_y)
                if score < best_score:
                    best_score = score
                    best_feature = feature
                    best_threshold = threshold
        return best_feature, best_threshold

    def gini_index(self, y, left_y, right_y):
        # Weighted Gini impurity of the two child nodes
        p = len(left_y) / len(y)
        gini_left = 1 - sum([(np.sum(left_y == c) / len(left_y)) ** 2 for c in set(left_y)])
        gini_right = 1 - sum([(np.sum(right_y == c) / len(right_y)) ** 2 for c in set(right_y)])
        return p * gini_left + (1 - p) * gini_right

    def predict_instance(self, x, tree):
        # Leaf node: return its class
        if type(tree) != dict:
            return tree
        feature, threshold = tree['feature'], tree['threshold']
        if x[feature] < threshold:
            return self.predict_instance(x, tree['left_tree'])
        else:
            return self.predict_instance(x, tree['right_tree'])
```
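A minimal usage sketch; the data below is made up for illustration and assumes NumPy arrays with integer class labels:
```python
import numpy as np

# Tiny, made-up dataset: two numeric features, binary labels
X = np.array([[2.0, 3.0],
              [1.0, 1.0],
              [3.5, 2.0],
              [4.0, 4.5]])
y = np.array([0, 0, 1, 1])

clf = DecisionTree()
clf.fit(X, y)
print(clf.predict(np.array([[3.8, 4.0], [1.2, 1.5]])))  # predicts class 1 and class 0
```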
2. A simple naive Bayes algorithm for supervised learning:
```python
import numpy as np

class NaiveBayes:
    def __init__(self):
        self.classes = None
        self.priors = None
        self.means = None
        self.stds = None

    def fit(self, X, y):
        self.classes = np.unique(y)
        n_classes = len(self.classes)
        n_features = X.shape[1]
        self.priors = np.zeros(n_classes)
        self.means = np.zeros((n_classes, n_features))
        self.stds = np.zeros((n_classes, n_features))
        for i, c in enumerate(self.classes):
            X_c = X[y == c]
            self.priors[i] = X_c.shape[0] / X.shape[0]
            self.means[i] = X_c.mean(axis=0)
            # Add a tiny constant so a zero standard deviation cannot cause division by zero
            self.stds[i] = X_c.std(axis=0) + 1e-9

    def predict(self, X):
        return [self.predict_instance(x) for x in X]

    def predict_instance(self, x):
        # Pick the class with the largest log-posterior: log prior + sum of log-likelihoods
        posteriors = []
        for i, c in enumerate(self.classes):
            prior = np.log(self.priors[i])
            class_conditional = np.sum(np.log(self.gaussian_pdf(x, self.means[i], self.stds[i])))
            posterior = prior + class_conditional
            posteriors.append(posterior)
        return self.classes[np.argmax(posteriors)]

    def gaussian_pdf(self, x, mean, std):
        # Gaussian probability density, evaluated element-wise per feature
        return (1 / (np.sqrt(2 * np.pi) * std)) * np.exp(-(x - mean) ** 2 / (2 * std ** 2))
```
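A quick usage sketch with made-up data:
```python
import numpy as np

# Made-up dataset: two well-separated clusters, one per class
X = np.array([[1.0, 2.1], [0.9, 1.8], [1.2, 2.0],
              [5.0, 6.2], [5.2, 5.9], [4.8, 6.1]])
y = np.array([0, 0, 0, 1, 1, 1])

nb = NaiveBayes()
nb.fit(X, y)
print(nb.predict(np.array([[1.1, 2.0], [5.1, 6.0]])))  # predicts class 0 and class 1
```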
3. A K-nearest-neighbour (KNN) algorithm. Note that KNN classification actually needs labelled training data, so it is a supervised method rather than an unsupervised one; the implementation below therefore stores the training labels in fit:
```python
import numpy as np

class KNN:
    def __init__(self, k):
        self.k = k

    def fit(self, X, y):
        # Store the training samples and their labels
        self.X = X
        self.y = y

    def predict(self, X):
        return [self.predict_instance(x) for x in X]

    def predict_instance(self, x):
        # Euclidean distance from x to every training sample
        distances = np.sqrt(np.sum((self.X - x) ** 2, axis=1))
        # Indices of the k nearest neighbours
        nearest_indices = np.argsort(distances)[:self.k]
        nearest_labels = self.y[nearest_indices]
        # Majority vote among the neighbours (labels must be non-negative integers)
        return np.bincount(nearest_labels).argmax()
```
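A quick usage sketch with made-up data (note that fit now takes the training labels as well):
```python
import numpy as np

# Made-up labelled points in two clusters
X_train = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
                    [3.0, 3.0], [3.2, 2.9], [2.8, 3.1]])
y_train = np.array([0, 0, 0, 1, 1, 1])

knn = KNN(k=3)
knn.fit(X_train, y_train)
print(knn.predict(np.array([[0.1, 0.1], [3.1, 3.0]])))  # predicts class 0 and class 1
```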