```
def doTrainSubClassOf(self, ids):
    concept_embs_a = self.concept_vec(ids[[0, 2], :])
    concept_embs_b = self.concept_vec(ids[[1, 3], :])
    radius_a = concept_embs_a[:, :, -1]
    radius_b = concept_embs_b[:, :, -1]
    concept_embs_a = concept_embs_a[:, :, :-1]
    concept_embs_b = concept_embs_b[:, :, :-1]
    if self.args.pnorm == 1:
        dis = F.relu(norm(concept_embs_a - concept_embs_b, pnorm=self.args.pnorm)
                     + torch.abs(radius_a) - torch.abs(radius_b))
    else:
        dis = F.relu(norm(concept_embs_a - concept_embs_b, pnorm=self.args.pnorm)
                     + radius_a ** 2 - radius_b ** 2)
    loss = F.relu(dis[0] + self.args.margin_sub - dis[1]).sum()
    return loss
```
How can I change the code above to use a classification loss?
Posted: 2023-06-13 19:05:54 · Views: 117
To change the code above to a classification loss, you can use the cross-entropy loss function. First, concatenate the two concept embeddings along the feature dimension, which gives a tensor of shape (2, batch_size, 2 * embedding_size). Then pass this tensor through a fully connected layer that outputs shape (2, batch_size, num_classes), where num_classes is the number of classes. Finally, flatten the logits, pass them together with the ground-truth labels into the cross-entropy loss, and you get the classification loss. Here is the modified code:
```
def doTrainSubClassOf(self, ids, labels):
    concept_embs_a = self.concept_vec(ids[[0, 2], :])
    concept_embs_b = self.concept_vec(ids[[1, 3], :])
    # Drop the radius component; only the embedding part is used for classification
    concept_embs_a = concept_embs_a[:, :, :-1]
    concept_embs_b = concept_embs_b[:, :, :-1]
    # Concatenate the two embeddings along the feature dimension:
    # shape (2, batch_size, 2 * embedding_size)
    concept_embs = torch.cat([concept_embs_a, concept_embs_b], dim=-1)
    # Fully connected layer maps to class logits: (2, batch_size, num_classes)
    logits = self.fc(concept_embs)
    # Flatten to (2 * batch_size, num_classes) for cross-entropy
    loss_fn = torch.nn.CrossEntropyLoss()
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))
    return loss
```
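As a sanity check, here is a minimal self-contained sketch of the same shape logic, using hypothetical dimensions and a stand-alone `nn.Linear` in place of the model's `self.fc` (both are illustrative assumptions, not part of the original model):

```
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration only
batch_size, embedding_size, num_classes = 4, 8, 2

# Stand-in for self.fc: maps concatenated embeddings to class logits
fc = nn.Linear(2 * embedding_size, num_classes)

# Fake concept embeddings, as they look after stripping the radius column
concept_embs_a = torch.randn(2, batch_size, embedding_size)
concept_embs_b = torch.randn(2, batch_size, embedding_size)

# Concatenate along the feature dimension: (2, batch_size, 2 * embedding_size)
concept_embs = torch.cat([concept_embs_a, concept_embs_b], dim=-1)
logits = fc(concept_embs)  # (2, batch_size, num_classes)

# Flatten logits and labels before computing cross-entropy
labels = torch.randint(0, num_classes, (2, batch_size))
loss = nn.CrossEntropyLoss()(logits.reshape(-1, num_classes), labels.reshape(-1))
print(loss.item())
```

Note that `CrossEntropyLoss` expects raw logits of shape (N, num_classes) and integer class labels of shape (N,), which is why both tensors are flattened across the first two dimensions.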