Got another error: Traceback (most recent call last): File "E:/PyCharm Community Edition 2020.2.2/Project/WordDict/newsim.py", line 389, in <module> seed_vectors = np.reshape(seed_vectors, (seed_vectors.shape[0], -1)) AttributeError: 'list' object has no attribute 'shape'
This error occurs because `seed_vectors` is a plain Python list, which has no `shape` attribute, so the expression `seed_vectors.shape[0]` fails. The reshape step needs a NumPy array, so convert the list first with NumPy's `array()` function. The fix is as follows:
1. Import the NumPy library:
```
import numpy as np
```
2. Convert the list `seed_vectors` into a NumPy array:
```
seed_vectors = np.array(seed_vectors)
```
3. Then reshape it:
```
seed_vectors = np.reshape(seed_vectors, (seed_vectors.shape[0], -1))
```
That should resolve the error.
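Put together, a minimal sketch (the sample values are placeholders; this assumes `seed_vectors` is a list of equal-length numeric vectors, as the traceback suggests):
```
import numpy as np

seed_vectors = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # placeholder data for illustration
seed_vectors = np.array(seed_vectors)                 # list -> ndarray, .shape now exists
seed_vectors = np.reshape(seed_vectors, (seed_vectors.shape[0], -1))
print(seed_vectors.shape)                             # (3, 2)
```
Note that if the inner vectors have different lengths, `np.array()` produces an object array and the reshape will still fail, so make sure every vector has the same dimension.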
Related questions
Another error. Error message: Traceback (most recent call last): File "E:/PyCharm Community Edition 2020.2.2/Project/WordDict/newsim.py", line 473, in <module> fine_tune(model, data) File "E:/PyCharm Community Edition 2020.2.2/Project/WordDict/newsim.py", line 468, in fine_tune loss.backward() AttributeError: 'str' object has no attribute 'backward'. Please help me fix it.
This error indicates that your code is calling `backward()` on a string object. Check your code, find where that string comes from, and replace it with a PyTorch tensor so that `backward()` can actually be called on it.
A likely cause is that a string was assigned to a variable that should hold a tensor (for example, `loss` being a formatted message or a value read from a file instead of the output of a loss function), or that the input data was never converted into PyTorch tensors.
Go through every part of your code that handles the input data and the loss computation and make sure they produce proper tensors; a sketch of what the training step should look like follows.
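For reference, a minimal fine-tuning loop sketch. The names `fine_tune`, `model`, and `data` come from your traceback, but their contents here are assumptions; the key point is that `loss` must be returned by a loss function, so it is a tensor and has a `backward()` method:
```
import torch
import torch.nn as nn

def fine_tune(model, data, lr=1e-5):
    # data is assumed to be an iterable of (inputs, labels) pairs of tensors
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for inputs, labels in data:
        optimizer.zero_grad()
        outputs = model(inputs)            # forward pass
        loss = criterion(outputs, labels)  # loss is a tensor, not a string
        loss.backward()                    # backward() only exists on tensors
        optimizer.step()
```
If `loss` in your code is built via string formatting or read from a log file, that is exactly what produces the `'str' object has no attribute 'backward'` error.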
```
import jieba
import torch
from sklearn.metrics.pairwise import cosine_similarity
from transformers import BertTokenizer, BertModel

seed_words = ['姓名']

# Load the Weibo text data
text_data = []
with open("output/weibo1.txt", "r", encoding="utf-8") as f:
    for line in f:
        text_data.append(line.strip())

# Load the BERT model and tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertModel.from_pretrained('bert-base-chinese')

seed_tokens = ["[CLS]"] + seed_words + ["[SEP]"]
seed_token_ids = tokenizer.convert_tokens_to_ids(seed_tokens)
seed_segment_ids = [0] * len(seed_token_ids)

# Convert to tensors and encode with BERT
seed_token_tensor = torch.tensor([seed_token_ids])
seed_segment_tensor = torch.tensor([seed_segment_ids])
with torch.no_grad():
    seed_outputs = model(seed_token_tensor, seed_segment_tensor)
    seed_encoded_layers = seed_outputs[0]

jieba.load_userdict('data/userdict.txt')

# Build the privacy word lexicon
privacy_words = set()
for text in text_data:
    words = jieba.lcut(text.strip())
    tokens = ["[CLS]"] + words + ["[SEP]"]
    token_ids = tokenizer.convert_tokens_to_ids(tokens)
    segment_ids = [0] * len(token_ids)
    # Convert to tensors and encode with BERT
    token_tensor = torch.tensor([token_ids])
    segment_tensor = torch.tensor([segment_ids])
    with torch.no_grad():
        outputs = model(token_tensor, segment_tensor)
        encoded_layers = outputs[0]
    # For each word, compute its similarity to the seed words
    for i in range(1, len(tokens) - 1):
        word = tokens[i]
        if word in seed_words:
            continue
        word_tensor = encoded_layers[0][i].reshape(1, -1)
        seed_tensors = seed_encoded_layers[0][i].reshape(1, -1)
        # Similarity between the current Weibo word and the seed word
        sim = cosine_similarity(word_tensor, seed_tensors, dense_output=False)[0].max()
        print(sim, word)
        if sim > 0.5 and len(word) > 1:
            privacy_words.add(word)

print(privacy_words)
```
Running the code above produces an error: Traceback (most recent call last): File "E:/PyCharm Community Edition 2020.2.2/Project/WordDict/newsim.py", line 397, in <module> seed_tensors =seed_encoded_layers[0][i].reshape(1, -1) IndexError: index 3 is out of bounds for dimension 0 with size 3. Please help me fix it.
According to the error message, the problem is the indexing of `seed_encoded_layers`: the seed sequence contains only 3 tokens ([CLS], the seed word, [SEP]), so dimension 0 of `seed_encoded_layers[0]` has size 3, while `i` iterates over the (usually longer) Weibo sentence and eventually goes out of range. You can print the shape of `seed_encoded_layers` to confirm that the tensor's dimensions match your expectations.
Also check the values of `seed_tokens` and `seed_token_ids` to make sure they are correct.
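For example, a quick check like this (the expected sizes are assumptions based on `bert-base-chinese`, hidden size 768, and a single seed word):
```
print(seed_tokens)                 # ['[CLS]', '姓名', '[SEP]']
print(len(seed_token_ids))         # expected: 3
print(seed_encoded_layers.shape)   # expected: torch.Size([1, 3, 768])
# indexing seed_encoded_layers[0][i] with i >= 3 raises the IndexError above
```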
Here is the modified code:
```
import jieba
import torch
from sklearn.metrics.pairwise import cosine_similarity
from transformers import BertTokenizer, BertModel

seed_words = ['姓名']

# Load the Weibo text data
text_data = []
with open("output/weibo1.txt", "r", encoding="utf-8") as f:
    for line in f:
        text_data.append(line.strip())

# Load the BERT model and tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertModel.from_pretrained('bert-base-chinese')

seed_tokens = ["[CLS]"] + seed_words + ["[SEP]"]
seed_token_ids = tokenizer.convert_tokens_to_ids(seed_tokens)
seed_segment_ids = [0] * len(seed_token_ids)

# Convert to tensors and encode with BERT
# (pass the segment ids as token_type_ids; the second positional argument of BertModel is attention_mask)
seed_token_tensor = torch.tensor([seed_token_ids])
seed_segment_tensor = torch.tensor([seed_segment_ids])
with torch.no_grad():
    seed_outputs = model(seed_token_tensor, token_type_ids=seed_segment_tensor)
    seed_encoded_layers = seed_outputs[0]

jieba.load_userdict('data/userdict.txt')

# Build the privacy word lexicon
privacy_words = set()
for text in text_data:
    words = jieba.lcut(text.strip())
    tokens = ["[CLS]"] + words + ["[SEP]"]
    token_ids = tokenizer.convert_tokens_to_ids(tokens)
    segment_ids = [0] * len(token_ids)
    # Convert to tensors and encode with BERT
    token_tensor = torch.tensor([token_ids])
    segment_tensor = torch.tensor([segment_ids])
    with torch.no_grad():
        outputs = model(token_tensor, token_type_ids=segment_tensor)
        encoded_layers = outputs[0]
    # For each word, compute its similarity to the seed words
    for i in range(1, len(tokens) - 1):
        word = tokens[i]
        if word in seed_words:
            continue
        word_tensor = encoded_layers[0][i].reshape(1, -1)
        seed_tensors = seed_encoded_layers[0][0].reshape(1, -1)  # changed here: take the first element instead of index i
        # Similarity between the current Weibo word and the seed encoding
        sim = cosine_similarity(word_tensor, seed_tensors, dense_output=False)[0].max()
        print(sim, word)
        if sim > 0.5 and len(word) > 1:
            privacy_words.add(word)

print(privacy_words)
```
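Note that `seed_encoded_layers[0][0]` is the embedding of the `[CLS]` token of the seed sequence. If you would rather compare each Weibo word with the embedding of the seed word itself (an assumption about your intent, not part of the fix above), index 1 is the position of '姓名' in `["[CLS]", "姓名", "[SEP]"]`:
```
# inside the inner loop, instead of the line marked "changed here" above
seed_tensors = seed_encoded_layers[0][1].reshape(1, -1)  # embedding of the seed word itself
sim = cosine_similarity(word_tensor, seed_tensors)[0].max()
```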