```
def calc_gradient_penalty(self, netD, real_data, fake_data):
    alpha = torch.rand(1, 1)
    alpha = alpha.expand(real_data.size())
    alpha = alpha.cuda()
    interpolates = alpha * real_data + ((1 - alpha) * fake_data)
    interpolates = interpolates.cuda()
    interpolates = Variable(interpolates, requires_grad=True)
    disc_interpolates, s = netD.forward(interpolates)
    s = torch.autograd.Variable(torch.tensor(0.0), requires_grad=True).cuda()
    gradients1 = autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                               grad_outputs=torch.ones(disc_interpolates.size()).cuda(),
                               create_graph=True, retain_graph=True,
                               only_inputs=True, allow_unused=True)[0]
    gradients2 = autograd.grad(outputs=s, inputs=interpolates,
                               grad_outputs=torch.ones(s.size()).cuda(),
                               create_graph=True, retain_graph=True,
                               only_inputs=True, allow_unused=True)[0]
    gradient_penalty = (((gradients1.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA) + \
                       (((gradients2.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA)
    return gradient_penalty
```
In the code above, the term `(((gradients2.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA)` raises an error: RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
This error means that `autograd.grad` could not find any path in the computation graph from the given output back to one of the inputs. In your code the culprit is the line `s = torch.autograd.Variable(torch.tensor(0.0), requires_grad=True).cuda()`: it overwrites the `s` returned by `netD.forward(interpolates)` with a brand-new leaf tensor that has no dependence on `interpolates`, so the gradient of `s` with respect to `interpolates` does not exist. The RuntimeError is raised when `allow_unused` is left at its default of `False`; with `allow_unused=True` the call instead returns `None` for the unused input, so you also have to guard against that `None` before calling `.norm()` on it. Following the error message's suggestion, you can change this:
```
gradients2 = autograd.grad(outputs=s, inputs=interpolates,
                           grad_outputs=torch.ones(s.size()).cuda(),
                           create_graph=True, retain_graph=True,
                           only_inputs=True, allow_unused=True)[0]
```
to this:
```
gradients2 = autograd.grad(outputs=s, inputs=interpolates,
                           grad_outputs=torch.ones(s.size()).cuda(),
                           create_graph=True, retain_graph=True,
                           only_inputs=True, allow_unused=True)[0]
# With allow_unused=True, autograd.grad returns None for an input the
# output does not depend on; guard against it before calling .norm()
if gradients2 is None:
    return None
```
This avoids the crash, though the caller must now be prepared to receive `None` instead of a penalty value.
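If overwriting `s` was accidental rather than intentional, the cleaner fix is to delete that assignment and differentiate the `s` that `netD` actually returns, so both penalty terms stay connected to `interpolates`. Below is a minimal sketch under that assumption (it keeps the two-output discriminator and `self.LAMBDA` from your code; `Variable` is dropped since modern PyTorch tensors carry `requires_grad` directly):
```
import torch
from torch import autograd

def calc_gradient_penalty(self, netD, real_data, fake_data):
    # Random interpolation point between each real and fake sample
    alpha = torch.rand(1, 1).expand(real_data.size()).cuda()
    interpolates = (alpha * real_data + (1 - alpha) * fake_data).detach()
    interpolates = interpolates.cuda().requires_grad_(True)

    # Keep BOTH outputs; overwriting `s` with a fresh tensor is what
    # disconnected it from `interpolates` and caused the RuntimeError
    disc_interpolates, s = netD(interpolates)

    penalty = 0.0
    for out in (disc_interpolates, s):
        grads = autograd.grad(outputs=out, inputs=interpolates,
                              grad_outputs=torch.ones_like(out),
                              create_graph=True, retain_graph=True,
                              only_inputs=True)[0]
        # Standard WGAN-GP penalty: push per-sample gradient norms toward 1
        penalty = penalty + ((grads.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA
    return penalty
```
With both outputs genuinely depending on `interpolates`, `allow_unused` is no longer needed and `autograd.grad` cannot return `None` here.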