The following code:

```
import jieba
seg_list = [jieba.lcut(text) for text in new_data]
jieba.analyse.set_stop_words('stopwords.txt')  # set stop words
keywords = [jieba.analyse.extract_tags(text) for text in new_data]  # extract keywords
```

raises the error `module 'jieba' has no attribute 'analyse'`. Please modify the original code.
Posted: 2024-02-20 22:58:12
This error occurs because `jieba.analyse` is a submodule that is not loaded automatically by `import jieba`; it has to be imported explicitly. Try the following code:
```
import jieba
import jieba.analyse
seg_list = [jieba.lcut(text) for text in new_data]
jieba.analyse.set_stop_words('stopwords.txt')  # set stop words
keywords = [jieba.analyse.extract_tags(text) for text in new_data]  # extract keywords
```
With the explicit `import jieba.analyse`, the submodule is loaded and its functions can be used.
Related questions
Explain this code:

```
import jieba
import jieba.posseg as pseg
from utils.dbutils import *
from question_answer.获取天气情况 import *
import re
from utils.user_base import *

jieba.enable_paddle()

def get_loc_list(text):
    per_list = []  # list of person names
    word_list = jieba.lcut(t
```
This code imports the required modules and functions and defines a function named `get_loc_list()`. Specifically, it imports the `jieba` library and the `jieba.posseg` module (aliased as `pseg`), along with the custom `dbutils` module, the `获取天气情况` (weather lookup) module, the `re` module, and the `user_base` module. It then enables jieba's Paddle mode. Finally, it defines `get_loc_list()`, which takes a text argument, segments and part-of-speech tags the text with jieba, extracts the person names it contains, and returns them as a list.
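The person-name step at the end can be sketched independently of Paddle mode: `pseg.cut()` yields `(word, flag)` pairs, and person names carry the part-of-speech flag `'nr'`. Below is a hypothetical reconstruction of just the filtering logic, with a hand-written pair list standing in for real `pseg.cut()` output:

```python
def filter_person_names(tagged_words):
    """Keep words whose part-of-speech flag is 'nr' (person name)."""
    return [word for word, flag in tagged_words if flag == "nr"]

# Stand-in for pseg.cut() output on a sentence like "张三去北京见李四"
tagged = [("张三", "nr"), ("去", "v"), ("北京", "ns"), ("见", "v"), ("李四", "nr")]
print(filter_person_names(tagged))
```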
Please fill in the marked comment to complete jieba word segmentation for the training and test sets:

```
from paddlenlp.datasets import load_dataset

def read(data_path):
    data_set = []
    with open(data_path, 'r', encoding='utf-8') as f:
        for line in f:
            l = line.strip('\n').split('\t')
            if len(l) != 2:
                print(len(l), line)
            words, labels = line.strip('\n').split('\t')
            data_set.append((words, labels))
    return data_set

train_ds = read(data_path='train.txt')
dev_ds = read(data_path='dev.txt')
test_ds = read(data_path='test.txt')

for i in range(5):
    print("sentence %d" % (i), train_ds[i][0])
    print("sentence %d" % (i), train_ds[i][1])
print(len(train_ds), len(dev_ds))

import jieba

def data_preprocess(corpus):
    data_set = []
    #### fill in the jieba segmentation code here
    for text in corpus:
        seg_list = jieba.cut(text)
        data_set.append(" ".join(seg_list))
    return data_set

train_corpus = data_preprocess(train_ds)
test_corpus = data_preprocess(test_ds)
print(train_corpus[:2])
print(test_corpus[:2])
```
```
from paddlenlp.datasets import load_dataset

import jieba

def read(data_path):
    data_set = []
    with open(data_path, 'r', encoding='utf-8') as f:
        for line in f:
            l = line.strip('\n').split('\t')
            if len(l) != 2:
                print(len(l), line)
            words, labels = line.strip('\n').split('\t')
            data_set.append((words, labels))
    return data_set

train_ds = read(data_path='train.txt')
dev_ds = read(data_path='dev.txt')
test_ds = read(data_path='test.txt')

for i in range(5):
    print("sentence %d" % (i), train_ds[i][0])
    print("sentence %d" % (i), train_ds[i][1])
print(len(train_ds), len(dev_ds))

def data_preprocess(corpus):
    data_set = []
    #### jieba segmentation
    for text in corpus:
        seg_list = jieba.cut(text[0])
        data_set.append((" ".join(seg_list), text[1]))
    return data_set

train_corpus = data_preprocess(train_ds)
test_corpus = data_preprocess(test_ds)
print(train_corpus[:2])
print(test_corpus[:2])
```
Note that each dataset element is a `(words, labels)` tuple, so the segmentation step runs `jieba.cut` on `text[0]` and keeps the label `text[1]` alongside the segmented text.