Python: tokenize English text, remove stopwords, normalize case, lemmatize, strip punctuation, then extract keywords with TF-IDF and list their TF-IDF scores, with code
Posted: 2023-12-30 12:05:01
Sure. Per your request, here is Python code that tokenizes English text, removes stopwords, normalizes case, lemmatizes, strips punctuation, and then uses the TF-IDF method to extract keywords along with their TF-IDF scores.
You will need the following libraries installed:
- nltk
- scikit-learn
Here is the code:
```python
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from string import punctuation
from sklearn.feature_extraction.text import TfidfVectorizer

# Download the NLTK data packages needed below
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')

# Read the text file
with open('example.txt', 'r') as f:
    text = f.read()

# Lowercase the text
text = text.lower()

# Tokenize
tokens = word_tokenize(text)

# Remove tokens that consist entirely of punctuation
# (word_tokenize also emits multi-character punctuation tokens such as "``" and "...")
tokens = [t for t in tokens if not all(ch in punctuation for ch in t)]

# Filter out stopwords
stop_words = set(stopwords.words('english'))
tokens = [t for t in tokens if t not in stop_words]

# Lemmatize (WordNetLemmatizer treats each token as a noun by default)
lemmatizer = WordNetLemmatizer()
tokens = [lemmatizer.lemmatize(t) for t in tokens]

# Join the cleaned tokens back into a space-separated string
text_cleaned = ' '.join(tokens)

# Extract keywords and their TF-IDF scores
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform([text_cleaned])
# get_feature_names() was removed in scikit-learn 1.2; use get_feature_names_out()
feature_names = vectorizer.get_feature_names_out()
tfidf_scores = tfidf.toarray()[0]

# Print each keyword with its TF-IDF score
for name, score in zip(feature_names, tfidf_scores):
    print(name, score)
```
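To make the printed numbers interpretable, the formula scikit-learn applies by default can be reproduced by hand: the term frequency is the raw count, the smoothed IDF is ln((1 + n) / (1 + df)) + 1, and each document vector is L2-normalized. Here is a minimal standard-library sketch of that computation (the three sample documents are made up for illustration):

```python
import math
from collections import Counter

def tfidf(docs):
    """Smoothed, L2-normalized TF-IDF per document,
    mirroring scikit-learn's TfidfVectorizer defaults."""
    n = len(docs)
    tokenized = [doc.split() for doc in docs]
    # Document frequency: how many documents contain each term
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    # Smoothed IDF: ln((1 + n) / (1 + df)) + 1
    idf = {t: math.log((1 + n) / (1 + d)) + 1 for t, d in df.items()}
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vec = {t: c * idf[t] for t, c in tf.items()}
        norm = math.sqrt(sum(v * v for v in vec.values()))
        scores.append({t: v / norm for t, v in vec.items()})
    return scores

docs = ["cat sat mat", "cat sat log", "dog ate bone"]
scores = tfidf(docs)
# "mat" appears in only one document while "cat" appears in two,
# so "mat" gets the higher score within the first document.
print(scores[0])
```

Note that TF-IDF only discriminates when fit on several documents: with a single document, every term shares the same IDF and the scores reduce to normalized term frequencies.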
Replace `'example.txt'` in the code with the path to the text file you want to analyze; running the script prints each keyword with its TF-IDF score.