```
Traceback (most recent call last):
  File "E:/PyCharm Community Edition 2020.2.2/Project/WordDict/newsim.py", line 410, in <module>
    similarities = cosine_similarity(seed_vectors + corpus_vectors)
  File "E:\anaconda\envs\TF2.4\lib\site-packages\sklearn\metrics\pairwise.py", line 1251, in cosine_similarity
    X, Y = check_pairwise_arrays(X, Y)
  File "E:\anaconda\envs\TF2.4\lib\site-packages\sklearn\metrics\pairwise.py", line 153, in check_pairwise_arrays
    estimator=estimator,
  File "E:\anaconda\envs\TF2.4\lib\site-packages\sklearn\utils\validation.py", line 796, in check_array
    % (array.ndim, estimator_name)
ValueError: Found array with dim 3. check_pairwise_arrays expected <= 2.
```
How do I fix this error?
The error occurs because the input array has more dimensions than `check_pairwise_arrays` allows: the function only accepts arrays with at most two dimensions, but it received a 3-D array. Flatten the 3-D arrays to 2-D before computing the similarities, for example with NumPy's `reshape()`. The steps are:
1. Import NumPy:
```
import numpy as np
```
2. Reshape the 3-D arrays `seed_vectors` and `corpus_vectors` into 2-D arrays (one row per item):
```
seed_vectors = np.reshape(seed_vectors, (seed_vectors.shape[0], -1))
corpus_vectors = np.reshape(corpus_vectors, (corpus_vectors.shape[0], -1))
```
3. Then compute the similarities. Note that once both variables are NumPy arrays, `+` performs element-wise addition rather than concatenation, so if the goal is one similarity matrix over both sets, stack the rows explicitly:
```
all_vectors = np.vstack([seed_vectors, corpus_vectors])
similarities = cosine_similarity(all_vectors)
```
That should resolve the error.
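For reference, here is a minimal, self-contained sketch of the whole flow using made-up shapes (5 seed items and 20 corpus items, each encoded as a 10×768 matrix); only the reshape-then-stack pattern matters, not the exact numbers:
```
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Dummy 3-D batches: (n_items, seq_len, hidden_size)
seed_vectors = np.random.rand(5, 10, 768)
corpus_vectors = np.random.rand(20, 10, 768)

# Flatten each batch to 2-D: one row per item
seed_2d = seed_vectors.reshape(seed_vectors.shape[0], -1)        # (5, 7680)
corpus_2d = corpus_vectors.reshape(corpus_vectors.shape[0], -1)  # (20, 7680)

# Stack row-wise and compute the full pairwise similarity matrix
all_vectors = np.vstack([seed_2d, corpus_2d])                    # (25, 7680)
similarities = cosine_similarity(all_vectors)                    # (25, 25)
print(similarities.shape)
```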
Related questions
It errored again; the traceback is:
```
Traceback (most recent call last):
  File "E:/PyCharm Community Edition 2020.2.2/Project/WordDict/newsim.py", line 473, in <module>
    fine_tune(model, data)
  File "E:/PyCharm Community Edition 2020.2.2/Project/WordDict/newsim.py", line 468, in fine_tune
    loss.backward()
AttributeError: 'str' object has no attribute 'backward'
```
Please help me fix it.
The error means that somewhere in your code a string ended up where a PyTorch tensor is expected: you are calling `.backward()` on a `str`, but backpropagation only works on a tensor, typically the scalar loss returned by a loss function.
A likely cause is that a string was assigned to the variable that should hold the loss, or that the input data was never converted to PyTorch tensors.
Check every place that builds the loss and the model inputs, and make sure each one produces a proper tensor before `loss.backward()` is called.
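Since the body of `fine_tune` is not shown, this is only a minimal sketch of what a correct training step looks like; the model, criterion, and data below are placeholders, not your actual code:
```
import torch
import torch.nn as nn

model = nn.Linear(768, 2)                      # stand-in for the real model
criterion = nn.CrossEntropyLoss()              # returns a tensor, not a string
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

inputs = torch.randn(4, 768)                   # dummy batch of features
labels = torch.tensor([0, 1, 0, 1])            # dummy labels

optimizer.zero_grad()
outputs = model(inputs)                        # (4, 2) logits
loss = criterion(outputs, labels)              # scalar tensor, so .backward() works
loss.backward()
optimizer.step()
```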
```
import jieba
import torch
from sklearn.metrics.pairwise import cosine_similarity
from transformers import BertTokenizer, BertModel

seed_words = ['姓名']

# Load the Weibo text data
text_data = []
with open("output/weibo1.txt", "r", encoding="utf-8") as f:
    for line in f:
        text_data.append(line.strip())

# Load the BERT model and tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertModel.from_pretrained('bert-base-chinese')

seed_tokens = ["[CLS]"] + seed_words + ["[SEP]"]
seed_token_ids = tokenizer.convert_tokens_to_ids(seed_tokens)
seed_segment_ids = [0] * len(seed_token_ids)

# Convert to tensors and encode the seed with BERT
seed_token_tensor = torch.tensor([seed_token_ids])
seed_segment_tensor = torch.tensor([seed_segment_ids])
with torch.no_grad():
    seed_outputs = model(seed_token_tensor, seed_segment_tensor)
    seed_encoded_layers = seed_outputs[0]

jieba.load_userdict('data/userdict.txt')

# Build the privacy word lexicon
privacy_words = set()
for text in text_data:
    words = jieba.lcut(text.strip())
    tokens = ["[CLS]"] + words + ["[SEP]"]
    token_ids = tokenizer.convert_tokens_to_ids(tokens)
    segment_ids = [0] * len(token_ids)
    # Convert to tensors and encode with BERT
    token_tensor = torch.tensor([token_ids])
    segment_tensor = torch.tensor([segment_ids])
    with torch.no_grad():
        outputs = model(token_tensor, segment_tensor)
        encoded_layers = outputs[0]
    # For each word, compute its similarity to the seed word
    for i in range(1, len(tokens) - 1):
        word = tokens[i]
        if word in seed_words:
            continue
        word_tensor = encoded_layers[0][i].reshape(1, -1)
        seed_tensors = seed_encoded_layers[0][i].reshape(1, -1)
        # Similarity between the current Weibo word and the seed word
        sim = cosine_similarity(word_tensor, seed_tensors, dense_output=False)[0].max()
        print(sim, word)
        if sim > 0.5 and len(word) > 1:
            privacy_words.add(word)

print(privacy_words)
```
Running the code above fails with this traceback:
```
Traceback (most recent call last):
  File "E:/PyCharm Community Edition 2020.2.2/Project/WordDict/newsim.py", line 397, in <module>
    seed_tensors = seed_encoded_layers[0][i].reshape(1, -1)
IndexError: index 3 is out of bounds for dimension 0 with size 3
```
Please help me fix it.
According to the traceback, the code reads position 3 along a dimension of `seed_encoded_layers` that only has 3 entries, so the index is out of range. Print the shape of `seed_encoded_layers` to confirm the tensor's dimensions match what you expect.
Also check the values of `seed_tokens` and `seed_token_ids` to make sure they are what you intended.
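A quick check like this (added only for diagnosis, not part of the fix) shows why indexing with `i` fails: with one seed word the sequence is just `[CLS]`, the seed word, and `[SEP]`, so the second dimension has size 3:
```
print(seed_encoded_layers.shape)   # expected: (1, 3, 768) for a single seed word
print(seed_tokens, seed_token_ids)
```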
Here is the modified code:
```
import jieba
import torch
from sklearn.metrics.pairwise import cosine_similarity
from transformers import BertTokenizer, BertModel

seed_words = ['姓名']

# Load the Weibo text data
text_data = []
with open("output/weibo1.txt", "r", encoding="utf-8") as f:
    for line in f:
        text_data.append(line.strip())

# Load the BERT model and tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertModel.from_pretrained('bert-base-chinese')

seed_tokens = ["[CLS]"] + seed_words + ["[SEP]"]
seed_token_ids = tokenizer.convert_tokens_to_ids(seed_tokens)
seed_segment_ids = [0] * len(seed_token_ids)

# Convert to tensors and encode the seed with BERT
# (pass segment ids by keyword so they are not taken as the attention mask)
seed_token_tensor = torch.tensor([seed_token_ids])
seed_segment_tensor = torch.tensor([seed_segment_ids])
with torch.no_grad():
    seed_outputs = model(seed_token_tensor, token_type_ids=seed_segment_tensor)
    seed_encoded_layers = seed_outputs[0]

jieba.load_userdict('data/userdict.txt')

# Build the privacy word lexicon
privacy_words = set()
for text in text_data:
    words = jieba.lcut(text.strip())
    tokens = ["[CLS]"] + words + ["[SEP]"]
    token_ids = tokenizer.convert_tokens_to_ids(tokens)
    segment_ids = [0] * len(token_ids)
    # Convert to tensors and encode with BERT
    token_tensor = torch.tensor([token_ids])
    segment_tensor = torch.tensor([segment_ids])
    with torch.no_grad():
        outputs = model(token_tensor, token_type_ids=segment_tensor)
        encoded_layers = outputs[0]
    # For each word, compute its similarity to the seed
    for i in range(1, len(tokens) - 1):
        word = tokens[i]
        if word in seed_words:
            continue
        word_tensor = encoded_layers[0][i].reshape(1, -1)
        seed_tensors = seed_encoded_layers[0][0].reshape(1, -1)  # changed: use a fixed position (the first one) instead of i
        # Similarity between the current Weibo word and the seed
        sim = cosine_similarity(word_tensor, seed_tensors, dense_output=False)[0].max()
        print(sim, word)
        if sim > 0.5 and len(word) > 1:
            privacy_words.add(word)

print(privacy_words)
```
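A note on the design choice: with a single seed word, `seed_encoded_layers` has shape `(1, 3, 768)` (`[CLS]`, the seed word, `[SEP]`), so indexing it with the corpus position `i` is what overran the tensor. The fix above compares every corpus word against position 0, i.e. the `[CLS]` embedding of the seed sequence. If you would rather compare against the seed word's own token embedding, position 1 is an alternative:
```
# Optional alternative: compare against the seed word itself rather than [CLS]
seed_tensors = seed_encoded_layers[0][1].reshape(1, -1)
```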