```
def forward(self, pointcloud: torch.cuda.FloatTensor):
    xyz, features = self._break_up_pc(pointcloud)
    l_xyz, l_features = [xyz], [features]
    channel_out = None
    for i in range(len(self.SA_modules)):
        li_xyz, li_features = self.SA_modules[i][0](l_xyz[i], l_features[i])
        l_xyz.append(li_xyz)
        l_features.append(li_features)
        # channel_out = li_features.size(1)  # edit: to pass these into the SA module, define channel_out in the loop and pass it as an argument, together with l_xyz[i] and l_features[i]
        l_features[i] = self.SA_modules[i][1](l_features[i])  # the SelfAttention module takes the input feature tensor l_features[i]
    for i in range(-1, -(len(self.FP_modules) + 1), -1):
        l_features[i - 1] = self.FP_modules[i](
            l_xyz[i - 1], l_xyz[i], l_features[i - 1], l_features[i]
        )
    return l_xyz[0], l_features[0]
```
Why is l_features[i] in the call self.SA_modules[i][1](l_features[i]) reported as empty (None)?
This is likely because the variable channel_out is defined in the loop but never actually passed into the SA module, so the second component (the self-attention block) is called with an l_features[i] it cannot use properly. You could try modifying the code as follows:
```
def forward(self, pointcloud: torch.cuda.FloatTensor):
    xyz, features = self._break_up_pc(pointcloud)
    l_xyz, l_features = [xyz], [features]
    channel_out = None
    for i in range(len(self.SA_modules)):
        # First component: set-abstraction layer producing the next level.
        li_xyz, li_features = self.SA_modules[i][0](l_xyz[i], l_features[i])
        l_xyz.append(li_xyz)
        l_features.append(li_features)
        # Record the channel dimension of the new features.
        channel_out = li_features.size(1)
        # Second component: self-attention over the freshly appended level,
        # receiving channel_out and both levels' xyz/features.
        l_features[i + 1] = self.SA_modules[i][1](
            l_xyz[i + 1], l_features[i + 1], l_xyz[i], l_features[i], channel_out
        )
    for i in range(-1, -(len(self.FP_modules) + 1), -1):
        l_features[i - 1] = self.FP_modules[i](
            l_xyz[i - 1], l_xyz[i], l_features[i - 1], l_features[i]
        )
    return l_xyz[0], l_features[0]
```
With this change, channel_out is computed from the output of the set-abstraction component and passed into the second (self-attention) component, which now updates l_features[i+1], the level it actually attends over, instead of overwriting l_features[i].
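One more thing worth checking, independent of channel_out: in common PointNet++ codebases (e.g. Pointnet2_PyTorch), `_break_up_pc` returns `features = None` whenever the input cloud carries nothing beyond xyz, so `l_features[0]` is None on the very first loop iteration and the self-attention call sees an "empty" input. A minimal sketch of that helper, assuming the usual (B, N, C) input layout:

```
import torch

def _break_up_pc(pc: torch.Tensor):
    # First three channels are the xyz coordinates.
    xyz = pc[..., 0:3].contiguous()
    # Remaining channels (if any) become per-point features, transposed to
    # (B, C-3, N) for conv-style SA layers; otherwise features is None.
    features = (
        pc[..., 3:].transpose(1, 2).contiguous() if pc.size(-1) > 3 else None
    )
    return xyz, features
```

If that is what is happening here, either feed the network point clouds with extra channels beyond xyz, or guard the attention call with `if l_features[i] is not None:`.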