```
Traceback (most recent call last):
  File "E:/pycharm/AHEcode/train.py", line 229, in <module>
    outputs = model(images)
  File "E:\conda\CONDA\envs\hu-torch\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:/pycharm/AHEcode/train.py", line 63, in forward
    x_final = torch.cat([x3_flat, lbp_output], dim=1)  # concatenate x3_flat and lbp_output
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 6272 but got size 8 for tensor number 1 in the list.
```
The full code is as follows:
```
def forward(self, x):
    x1 = F.relu(self.bn1(torch.cat([self.conv1_3x3(x), self.conv1_5x5(x), self.conv1_7x7(x)], dim=1)))
    x1 = F.max_pool2d(x1, 2)
    x2 = F.relu(self.bn2(torch.cat([self.conv2_3x3(x1), self.conv2_5x5(x1), self.conv2_7x7(x1)], dim=1)))
    x2 = F.max_pool2d(x2, 2)
    x3 = F.relu(self.bn3(torch.cat([self.conv3_3x3(x2), self.conv3_5x5(x2), self.conv3_7x7(x2)], dim=1)))
    x3 = F.max_pool2d(x3, 2)
    x3_flat = x3.view(-1, 768)
    print(f'x3_flat size: {x3_flat.size()}')
    clahe_output = self.clahe_module(x)
    print(clahe_output.shape)
    lbp_output = self.lbp_layer(clahe_output)
    print(f'lbp_output size: {lbp_output.size()}')
    lbp_output = lbp_output.to(x3_flat.device)
    # print(f'lbp_output expanded size: {lbp_output.size()}')
    x_final = torch.cat([x3_flat, lbp_output], dim=1)  # concatenate x3_flat and lbp_output
    output = self.fc(x_final)  # fully connected layer produces the final output
    return output
```
This error occurs because torch.cat requires all dimensions except the concatenation dimension (dim=1 here) to match, and in your code they don't, for two reasons. First, lbp_output is still a 4-D tensor of shape [batch_size, 1, height, width] and has not been flattened. Second, x3.view(-1, 768) does not flatten per sample; it folds the spatial positions of x3 into the batch dimension, which is why dim 0 of x3_flat is 6272 (= 8 × 784, consistent with a batch of 8 and a 28 × 28 feature map) instead of the batch size 8. Flatten both tensors to [batch_size, features] before concatenating. The modified code is as follows:
```
def forward(self, x):
    x1 = F.relu(self.bn1(torch.cat([self.conv1_3x3(x), self.conv1_5x5(x), self.conv1_7x7(x)], dim=1)))
    x1 = F.max_pool2d(x1, 2)
    x2 = F.relu(self.bn2(torch.cat([self.conv2_3x3(x1), self.conv2_5x5(x1), self.conv2_7x7(x1)], dim=1)))
    x2 = F.max_pool2d(x2, 2)
    x3 = F.relu(self.bn3(torch.cat([self.conv3_3x3(x2), self.conv3_5x5(x2), self.conv3_7x7(x2)], dim=1)))
    x3 = F.max_pool2d(x3, 2)
    x3_flat = x3.view(x3.size(0), -1)  # flatten per sample, keeping the batch dimension intact
    print(f'x3_flat size: {x3_flat.size()}')
    clahe_output = self.clahe_module(x)
    print(clahe_output.shape)
    lbp_output = self.lbp_layer(clahe_output)
    lbp_output = lbp_output.view(lbp_output.size(0), -1)  # flatten lbp_output to [batch_size, height*width]
    print(f'lbp_output size: {lbp_output.size()}')
    lbp_output = lbp_output.to(x3_flat.device)
    x_final = torch.cat([x3_flat, lbp_output], dim=1)  # concatenate x3_flat and lbp_output along dim 1
    output = self.fc(x_final)  # fully connected layer produces the final output
    return output
```
With both tensors flattened to [batch_size, features], the concatenation should run. One caveat: self.fc must be constructed with in_features equal to the new combined width (the flattened size of x3 plus the flattened size of lbp_output), otherwise the linear layer will raise the next shape error.
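For reference, here is a minimal standalone sketch of the rule torch.cat enforces. The shapes below are hypothetical (a batch of 8 and 28 × 28 feature maps, chosen only to match the numbers in your traceback), not taken from your actual model:
```
import torch

batch = 8
x3 = torch.randn(batch, 768, 28, 28)   # hypothetical conv feature map
lbp = torch.randn(batch, 1, 28, 28)    # hypothetical LBP map

# Flatten each tensor per sample so only dim 1 (the feature dim) differs.
x3_flat = x3.view(x3.size(0), -1)      # [8, 768*28*28] = [8, 602112]
lbp_flat = lbp.view(lbp.size(0), -1)   # [8, 784]

merged = torch.cat([x3_flat, lbp_flat], dim=1)
print(merged.shape)                    # torch.Size([8, 602896])

# By contrast, view(-1, 768) folds the 28*28 spatial positions into
# the batch dimension, reproducing the original error:
bad = x3.view(-1, 768)                 # [6272, 768]
# torch.cat([bad, lbp_flat], dim=1)    # RuntimeError: dim 0 is 6272 vs 8
```
This also shows where self.fc's in_features comes from: it is the width of merged (602896 under these hypothetical shapes), not 768.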