```
def calc_gradient_penalty(self, netD, real_data, fake_data):
    alpha = torch.rand(1, 1)
    alpha = alpha.expand(real_data.size())
    alpha = alpha.cuda()
    interpolates = alpha * real_data + ((1 - alpha) * fake_data)
    interpolates = interpolates.cuda()
    interpolates = Variable(interpolates, requires_grad=True)
    disc_interpolates, s = netD.forward(interpolates)
    gradients1 = autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                               grad_outputs=torch.ones(disc_interpolates.size()).cuda(),
                               create_graph=True, retain_graph=True, only_inputs=True)[0]
    gradients2 = autograd.grad(outputs=s, inputs=interpolates,
                               grad_outputs=torch.ones(s.size()).cuda(),
                               create_graph=True, retain_graph=True, only_inputs=True)[0]
    gradient_penalty = (((gradients1.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA) + \
                       (((gradients2.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA)
    return gradient_penalty
```
Running the code above raises the following error: RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
This error is usually raised when one of the tensors passed as `inputs` to `autograd.grad` does not appear in the computation graph of the given `outputs`. Here, the second discriminator output `s` most likely has no dependence on `interpolates`. You can suppress the error by adding the `allow_unused=True` parameter to the `grad` calls, as shown below:
```
gradients1 = autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                           grad_outputs=torch.ones(disc_interpolates.size()).cuda(),
                           create_graph=True, retain_graph=True, only_inputs=True,
                           allow_unused=True)[0]
gradients2 = autograd.grad(outputs=s, inputs=interpolates,
                           grad_outputs=torch.ones(s.size()).cuda(),
                           create_graph=True, retain_graph=True, only_inputs=True,
                           allow_unused=True)[0]
```
This makes the error go away, but be aware that with `allow_unused=True`, `grad` returns `None` for any input the output does not use. In this code `gradients2` would then be `None` and `gradients2.norm(2, dim=1)` would fail, so either guard against `None` or fix `netD` so that `s` actually depends on `interpolates`. See the sketch below.
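A minimal, self-contained sketch of this behavior (the `toy_netD` below is a hypothetical stand-in for the asker's two-headed discriminator, not the real model): its second output requires grad but has no path to the input, which reproduces the RuntimeError without `allow_unused=True` and yields `None` with it.
```
import torch
from torch import autograd

# Hypothetical stand-in for the two-headed discriminator: `d` depends on
# the input x, but `s` only depends on an unrelated parameter `w`, so `s`
# has no path to x in the autograd graph.
w = torch.randn(3, requires_grad=True)

def toy_netD(x):
    d = (x * 2).sum(dim=1)               # depends on x
    s = (w * w).sum().expand(x.size(0))  # requires grad, but not via x
    return d, s

interpolates = torch.randn(4, 3, requires_grad=True)
d_out, s_out = toy_netD(interpolates)

# Without allow_unused=True this call raises:
# "One of the differentiated Tensors appears to not have been used in the graph."
grads2 = autograd.grad(outputs=s_out, inputs=interpolates,
                       grad_outputs=torch.ones_like(s_out),
                       create_graph=True, retain_graph=True,
                       allow_unused=True)[0]

# With allow_unused=True, grad returns None for unused inputs, so guard
# before computing the norm-based penalty term.
if grads2 is None:
    penalty2 = torch.tensor(0.0)  # drop this term (or fix netD so s uses x)
else:
    penalty2 = ((grads2.norm(2, dim=1) - 1) ** 2).mean()

print(penalty2)  # tensor(0.)
```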