Python code for text classification on the THUCNews dataset
Posted: 2024-02-06 13:04:00
Sure, here is a Python implementation of text classification on the THUCNews dataset using the naive Bayes algorithm:
```python
import os
import jieba
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
# Load the dataset (assumes the corpus has been flattened into a CSV
# with "content" and "category" columns)
data_path = "/path/to/data"
df = pd.read_csv(os.path.join(data_path, "THUCNews.csv"))

# Segment the Chinese text with jieba, joining tokens with spaces
df["content"] = df["content"].apply(lambda x: " ".join(jieba.cut(x)))

# Build a bag-of-words representation (top 5,000 terms)
vectorizer = CountVectorizer(stop_words=None, max_features=5000)
X = vectorizer.fit_transform(df["content"])
y = df["category"].values

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a multinomial naive Bayes classifier
clf = MultinomialNB(alpha=0.01)
clf.fit(X_train, y_train)

# Predict on the test set
y_pred = clf.predict(X_test)

# Evaluate accuracy
acc = accuracy_score(y_test, y_pred)
print("Test accuracy: {:.4f}".format(acc))
```
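Once the model is trained, classifying a new document follows the same path: segment it, vectorize it with the *already-fitted* vectorizer (`transform`, never `fit_transform`, so the vocabulary stays fixed), then call `predict`. A minimal self-contained sketch of that flow, using a tiny synthetic corpus in place of THUCNews (the texts and labels below are made up for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical stand-in for segmented THUCNews texts:
# tokens are already space-separated, as jieba.cut output would be.
texts = [
    "股票 市场 上涨 投资",
    "股票 基金 收益 投资",
    "球队 比赛 胜利 进球",
    "比赛 球员 进球 冠军",
]
labels = ["finance", "finance", "sports", "sports"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

clf = MultinomialNB(alpha=0.01)
clf.fit(X, labels)

# Classify an unseen, already-segmented document: reuse transform()
# so the new text is mapped onto the training vocabulary.
new_doc = ["球队 球员 比赛"]
pred = clf.predict(vectorizer.transform(new_doc))
print(pred[0])  # → sports
```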
This implementation uses a bag-of-words model to convert the text into count vectors, trains a multinomial naive Bayes classifier on them, and finally evaluates the model's accuracy on the held-out test set.
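A common refinement (not part of the original answer) is to replace raw counts with TF-IDF weights, which down-weight terms that appear in most documents. scikit-learn's `Pipeline` bundles the vectorizer and classifier into one estimator, so the same object handles both training and prediction. A hedged sketch, again on a made-up miniature corpus:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical segmented texts and labels, for illustration only.
texts = [
    "股票 市场 上涨 投资",
    "基金 收益 股票 投资",
    "球队 比赛 胜利 进球",
    "球员 比赛 进球 冠军",
]
labels = ["finance", "finance", "sports", "sports"]

# Chain TF-IDF weighting and naive Bayes into a single estimator;
# fit() runs both steps in order, predict() reuses the fitted vocabulary.
model = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=5000)),
    ("nb", MultinomialNB(alpha=0.01)),
])
model.fit(texts, labels)

pred = model.predict(["股票 投资 收益"])
print(pred[0])  # → finance
```

With a pipeline, swapping `TfidfVectorizer` back to `CountVectorizer` (or tuning `alpha` via `GridSearchCV`) only requires changing one step definition.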