Using Python LTP to segment a text file, remove stopwords with a stopword list, sort words by frequency with POS tags, and export the result to an Excel spreadsheet
Posted: 2024-05-06
Below is example code that uses the Python pyltp library to segment a text file, load a stopword list and filter out stopwords, sort the remaining words by frequency with their POS tags, and write the result to an Excel spreadsheet:
```python
import os
import xlwt
from pyltp import SentenceSplitter, Segmentor, Postagger

# Paths to the LTP model files (LTP 3.4.0 models, downloaded separately)
LTP_DATA_DIR = 'ltp_data_v3.4.0'
cws_model_path = os.path.join(LTP_DATA_DIR, 'cws.model')
pos_model_path = os.path.join(LTP_DATA_DIR, 'pos.model')

# Load the stopword list
stopwords = set()
with open('stopwords.txt', 'r', encoding='utf-8') as f:
    for line in f:
        stopwords.add(line.strip())

# Initialize the LTP segmenter and POS tagger
segmentor = Segmentor()
segmentor.load(cws_model_path)
postagger = Postagger()
postagger.load(pos_model_path)

# Read the input text file
with open('input.txt', 'r', encoding='utf-8') as f:
    text = f.read()

# Split the text into sentences
sentences = SentenceSplitter.split(text)

# word -> [frequency, POS tag]
word_freq_dict = {}

for sentence in sentences:
    # Segment the sentence into words
    words = segmentor.segment(sentence)
    # Tag the segmented words with parts of speech
    postags = postagger.postag(words)
    for word, pos in zip(words, postags):
        # Skip stopwords
        if word in stopwords:
            continue
        # Count the word and keep the POS tag seen in context
        # (re-tagging a word in isolation later is less reliable)
        if word in word_freq_dict:
            word_freq_dict[word][0] += 1
        else:
            word_freq_dict[word] = [1, pos]

# Release the models
segmentor.release()
postagger.release()

# Sort by frequency, descending
sorted_word_freq = sorted(word_freq_dict.items(),
                          key=lambda x: x[1][0], reverse=True)

# Write the result to an Excel sheet
workbook = xlwt.Workbook(encoding='utf-8')
worksheet = workbook.add_sheet('Sheet1')
# Header row
worksheet.write(0, 0, '词语')   # word
worksheet.write(0, 1, '词性')   # part of speech
worksheet.write(0, 2, '词频')   # frequency
# One row per word, most frequent first
for i, (word, (freq, pos)) in enumerate(sorted_word_freq, start=1):
    worksheet.write(i, 0, word)
    worksheet.write(i, 1, pos)
    worksheet.write(i, 2, freq)
# Save the workbook
workbook.save('output.xls')
```
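Running the full script requires the LTP model files, but the counting-and-sorting step in the middle can be exercised on its own with `collections.Counter` and hand-made (word, POS) pairs. This is a minimal sketch; the sample pairs below are hypothetical stand-ins for what `segment()` and `postag()` would produce:

```python
from collections import Counter

# Hypothetical (word, POS) pairs, as produced by segment() + postag()
tagged = [('中国', 'ns'), ('的', 'u'), ('发展', 'v'),
          ('中国', 'ns'), ('的', 'u'), ('的', 'u')]
stopwords = {'的'}

# Count every non-stopword occurrence
freq = Counter(w for w, _ in tagged if w not in stopwords)
# Remember one POS tag per word (last occurrence wins)
pos_of = {w: p for w, p in tagged}
# most_common() returns (word, count) pairs sorted by count, descending
sorted_word_freq = freq.most_common()
```

`Counter.most_common()` replaces the manual `sorted(..., key=..., reverse=True)` call and keeps the code to a few lines.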
Here, `input.txt` is the text file to process and `stopwords.txt` is the stopword list; the program writes its result to `output.xls`. Note that the pyltp API shown above (the `load()`/`release()` methods) matches pyltp 0.2.x, that the LTP 3.4.0 model files must be downloaded separately, and that `xlwt` only produces the legacy `.xls` format, whose worksheets are limited to 65,536 rows.
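If the Excel format is not a hard requirement, the same three-column table can be written with only the standard library's `csv` module and no row limit. A small sketch, with made-up placeholder rows in the same (word, POS, frequency) layout:

```python
import csv

# Hypothetical rows; in the real script these come from sorted_word_freq
rows = [('中国', 'ns', 12), ('发展', 'v', 9)]

with open('output.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    writer.writerow(['词语', '词性', '词频'])  # header row
    writer.writerows(rows)
```

The `utf-8-sig` encoding prepends a BOM so that Excel detects UTF-8 correctly when opening the CSV.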