Running `grad_i = torch.FloatTensor(net.zero_grad())` raises the following error: TypeError: new(): data must be a sequence (got NoneType). How should this be fixed?
You can change the code to:
```
# Build one zero tensor per parameter, matching each parameter's shape
grad_i = [torch.zeros_like(param) for param in net.parameters()]
```
Note that `torch.zeros_like()` expects a single tensor, so it cannot be applied directly to the generator returned by `net.parameters()`. Alternatively, if the intent was only to clear the existing gradients, call `net.zero_grad()` on its own without wrapping its return value:
```
net.zero_grad()  # clears each parameter's .grad in place and returns None
```
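If the intent behind the original line was to take a snapshot of the current gradients, a minimal sketch of that pattern might look like the following (the `nn.Linear` toy model and the dummy loss are illustrative stand-ins, not the asker's actual network):
```
import torch
import torch.nn as nn

net = nn.Linear(4, 2)                    # hypothetical stand-in for `net`
loss = net(torch.randn(8, 4)).sum()      # dummy loss just to populate .grad
loss.backward()

# One zero buffer per parameter, then copy in the freshly computed gradients
grad_i = [torch.zeros_like(p) for p in net.parameters()]
for buf, p in zip(grad_i, net.parameters()):
    if p.grad is not None:
        buf.copy_(p.grad)
```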
Related questions
Running the code produced this error: `grad_i = torch.FloatTensor(net.zero_grad())` TypeError: new(): data must be a sequence (got NoneType)
This error occurs because the data passed to the `torch.FloatTensor` constructor is `NoneType` rather than a sequence. In this particular case, `net.zero_grad()` clears the gradients in place and returns `None`, not a tensor of zeros.
To fix this, replace `net.zero_grad()` with `torch.zeros_like()`, which creates a zero tensor with the same shape as a given tensor. Since `net.parameters()` yields several parameter tensors, apply it to each one in turn. For example:
```
grad_i = [torch.zeros_like(param) for param in net.parameters()]
```
This creates `grad_i` correctly.
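A quick way to confirm the cause, using a small `nn.Linear` model as an illustrative stand-in:
```
import torch
import torch.nn as nn

net = nn.Linear(3, 1)             # stand-in model for illustration
print(net.zero_grad())            # None: zero_grad() works in place
grad_i = [torch.zeros_like(p) for p in net.parameters()]
print([g.shape for g in grad_i])  # [torch.Size([1, 3]), torch.Size([1])]
```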
With the following code:
```
def calc_gradient_penalty(self, netD, real_data, fake_data):
    alpha = torch.rand(1, 1)
    alpha = alpha.expand(real_data.size())
    alpha = alpha.cuda()
    interpolates = alpha * real_data + ((1 - alpha) * fake_data)
    interpolates = interpolates.cuda()
    interpolates = Variable(interpolates, requires_grad=True)
    disc_interpolates, s = netD.forward(interpolates)
    s = torch.autograd.Variable(torch.tensor(0.0), requires_grad=True).cuda()
    gradients1 = autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                               grad_outputs=torch.ones(disc_interpolates.size()).cuda(),
                               create_graph=True, retain_graph=True,
                               only_inputs=True, allow_unused=True)[0]
    gradients2 = autograd.grad(outputs=s, inputs=interpolates,
                               grad_outputs=torch.ones(s.size()).cuda(),
                               create_graph=True, retain_graph=True,
                               only_inputs=True, allow_unused=True)[0]
    if gradients2 is None:
        return None
    gradient_penalty = (((gradients1.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA) + \
                       (((gradients2.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA)
    return gradient_penalty

def get_loss(self, net, fakeB, realB):
    self.D_fake, x = net.forward(fakeB.detach())
    self.D_fake = self.D_fake.mean()
    self.D_fake = (self.D_fake + x).mean()
    # Real
    self.D_real, x = net.forward(realB)
    self.D_real = (self.D_real + x).mean()
    # Combined loss
    self.loss_D = self.D_fake - self.D_real
    gradient_penalty = self.calc_gradient_penalty(net, realB.data, fakeB.data)
    return self.loss_D + gradient_penalty
```
the line `return self.loss_D + gradient_penalty` raises the error: TypeError: unsupported operand type(s) for +: 'Tensor' and 'NoneType'
This error occurs because `calc_gradient_penalty` returns `None` when `gradients2` is `None`. In `get_loss`, if `gradient_penalty` is `None`, the function returns `self.loss_D + None`, which triggers the TypeError.
To fix this, change the check in `calc_gradient_penalty` so that it returns 0.0 instead of `None` when `gradients2` is `None`. That way the gradient penalty can still be added to the loss even when `gradients2` is `None`. The modified code looks like this:
```
import torch
from torch import autograd
from torch.autograd import Variable

def calc_gradient_penalty(self, netD, real_data, fake_data):
    alpha = torch.rand(1, 1)
    alpha = alpha.expand(real_data.size())
    alpha = alpha.cuda()
    interpolates = alpha * real_data + ((1 - alpha) * fake_data)
    interpolates = interpolates.cuda()
    interpolates = Variable(interpolates, requires_grad=True)
    disc_interpolates, s = netD.forward(interpolates)
    # Note: this overwrites the network output s with a fresh constant that does
    # not depend on interpolates, so the grad call below returns None for it.
    s = torch.autograd.Variable(torch.tensor(0.0), requires_grad=True).cuda()
    gradients1 = autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                               grad_outputs=torch.ones(disc_interpolates.size()).cuda(),
                               create_graph=True, retain_graph=True,
                               only_inputs=True, allow_unused=True)[0]
    gradients2 = autograd.grad(outputs=s, inputs=interpolates,
                               grad_outputs=torch.ones(s.size()).cuda(),
                               create_graph=True, retain_graph=True,
                               only_inputs=True, allow_unused=True)[0]
    if gradients2 is None:
        # Return a numeric zero instead of None so the caller can add it to a Tensor
        return 0.0
    gradient_penalty = (((gradients1.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA) + \
                       (((gradients2.norm(2, dim=1) - 1) ** 2).mean() * self.LAMBDA)
    return gradient_penalty

def get_loss(self, net, fakeB, realB):
    self.D_fake, x = net.forward(fakeB.detach())
    self.D_fake = self.D_fake.mean()
    self.D_fake = (self.D_fake + x).mean()
    # Real
    self.D_real, x = net.forward(realB)
    self.D_real = (self.D_real + x).mean()
    # Combined loss
    self.loss_D = self.D_fake - self.D_real
    gradient_penalty = self.calc_gradient_penalty(net, realB.data, fakeB.data)
    if gradient_penalty is None:  # defensive: the fixed function returns 0.0 instead
        gradient_penalty = 0.0
    return self.loss_D + gradient_penalty
```
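For reference, here is a self-contained sketch (with toy CPU tensors, not the code above) of the behavior that makes `gradients2` come back as `None`: with `allow_unused=True`, `autograd.grad` returns `None` for any input the output does not depend on:
```
import torch
from torch import autograd

x = torch.randn(5, requires_grad=True)
y = (x * 2).sum()                    # y depends on x
w = torch.randn(3, requires_grad=True)
z = w.sum()                          # z does NOT depend on x

print(autograd.grad(y, x)[0])                     # tensor([2., 2., 2., 2., 2.])
print(autograd.grad(z, x, allow_unused=True)[0])  # None -- the gradients2 case
```
In the questioner's code, `s` is overwritten with a fresh constant right after the forward pass, so it never depends on `interpolates` and `gradients2` is always `None`; returning 0.0 keeps the loss computation valid.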