Optimize this code:

s_len = []
s_tokens = sent_tokenize(text)
for i in s_tokens:
    w_tokens = word_tokenize(i)
    s_len.append(len(w_tokens))
x_axis = np.arange(0, len(s_len))
plt.scatter(x_axis, s_len)
plt.show()
Posted: 2023-05-25 19:02:54 · Views: 158
The explicit loop can be collapsed into a list comprehension, and np.arange only needs the stop value. With the required imports added:

from nltk.tokenize import sent_tokenize, word_tokenize
import numpy as np
import matplotlib.pyplot as plt

# Word count per sentence, built in a single comprehension
s_len = [len(word_tokenize(s)) for s in sent_tokenize(text)]
x_axis = np.arange(len(s_len))
plt.scatter(x_axis, s_len)
plt.show()
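To sanity-check the per-sentence word counts without downloading NLTK's punkt data, here is a rough sketch that substitutes plain string splitting for sent_tokenize/word_tokenize (an assumption for illustration only; the naive splits will differ from NLTK's tokenizers on real text):

```python
# Stand-in for sent_tokenize/word_tokenize using plain string
# methods (illustration only, NOT NLTK's actual tokenizers).
text = "NLTK splits text into sentences. Each sentence is then split into words."

sentences = [s for s in text.split(". ") if s]   # naive sentence split
s_len = [len(s.split()) for s in sentences]      # word count per sentence

print(s_len)  # → [5, 7]
```

Plugging a list like s_len into plt.scatter(np.arange(len(s_len)), s_len) then gives the same sentence-index vs. sentence-length plot as the answer above.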