Naive Bayes Text Classification: Example Code
Posted: 2023-08-10 07:06:01
Below is an example of text classification implemented with the naive Bayes algorithm:
```python
import numpy as np

# Define the training and test sets
train_data = [
    ['I love this sandwich', 'pos'],
    ['This is an amazing place', 'pos'],
    ['I feel very good about these beers', 'pos'],
    ['This is my best work', 'pos'],
    ['What an awesome view', 'pos'],
    ['I do not like this restaurant', 'neg'],
    ['I am tired of this stuff', 'neg'],
    ["I can't deal with this", 'neg'],
    ['He is my sworn enemy', 'neg'],
    ['My boss is horrible', 'neg']
]
test_data = [
    'The beer was good',
    'I do not enjoy my job',
    "I ain't feeling dandy today",
    'I feel amazing!'
]

# Separate the training texts from their labels
train_texts, train_labels = zip(*train_data)
train_labels = np.array(train_labels)  # convert to an array so boolean masks work below

# Build the vocabulary and convert each text into a word-count vector
vocab = sorted(set(word for text in train_texts for word in text.split()))
word2idx = {word: i for i, word in enumerate(vocab)}
train_vectors = np.zeros((len(train_data), len(vocab)))
for i, text in enumerate(train_texts):
    for word in text.split():
        train_vectors[i, word2idx[word]] += 1

# Train the model: class priors and Laplace-smoothed word likelihoods
pos_vectors = train_vectors[train_labels == 'pos']
neg_vectors = train_vectors[train_labels == 'neg']
pos_prior = len(pos_vectors) / len(train_labels)
neg_prior = len(neg_vectors) / len(train_labels)
pos_likelihood = (pos_vectors.sum(axis=0) + 1) / (pos_vectors.sum() + len(vocab))
neg_likelihood = (neg_vectors.sum(axis=0) + 1) / (neg_vectors.sum() + len(vocab))

# Classify the test set by comparing log-probability scores
for text in test_data:
    vector = np.zeros(len(vocab))
    for word in text.split():
        if word in word2idx:  # skip words never seen during training
            vector[word2idx[word]] += 1
    pos_score = np.log(pos_prior) + np.sum(np.log(pos_likelihood) * vector)
    neg_score = np.log(neg_prior) + np.sum(np.log(neg_likelihood) * vector)
    if pos_score > neg_score:
        print(f"{text} is positive")
    else:
        print(f"{text} is negative")
```
Here, the training set pairs each text with a label, while the test set contains only texts. The training texts are converted into word-count vectors, and the probability of each word appearing in the positive and negative classes is estimated with add-one (Laplace) smoothing so that unseen word counts never produce a zero probability. Each test text is then vectorized the same way, and naive Bayes log-probability scores for the positive and negative classes are computed; the class with the higher score is output as the prediction.
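The same prior/likelihood estimation with add-one smoothing is what scikit-learn's `MultinomialNB` implements, so the pipeline above can be sketched more compactly with library components (a sketch assuming scikit-learn is installed; note that `CountVectorizer` additionally lowercases, strips punctuation, and drops single-character tokens, so its vocabulary differs slightly from the hand-built one):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Same data as above, with texts and labels kept in parallel lists
train_texts = [
    'I love this sandwich', 'This is an amazing place',
    'I feel very good about these beers', 'This is my best work',
    'What an awesome view', 'I do not like this restaurant',
    'I am tired of this stuff', "I can't deal with this",
    'He is my sworn enemy', 'My boss is horrible',
]
train_labels = ['pos'] * 5 + ['neg'] * 5
test_data = [
    'The beer was good', 'I do not enjoy my job',
    "I ain't feeling dandy today", 'I feel amazing!',
]

# Build word-count vectors for the training texts
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_texts)

# alpha=1.0 is the same Laplace (add-one) smoothing used above
clf = MultinomialNB(alpha=1.0)
clf.fit(X_train, train_labels)

# Vectorize the test texts against the training vocabulary and predict
X_test = vectorizer.transform(test_data)
for text, label in zip(test_data, clf.predict(X_test)):
    print(f"{text} -> {label}")
```

`transform` (rather than `fit_transform`) is used on the test set so that only words from the training vocabulary are counted, mirroring the `if word in word2idx` check in the manual version.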