CBAM Attention Mechanism Code
Here is a simple code example of the CBAM attention mechanism:
```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction_ratio=16):
        super(ChannelAttention, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        # Shared MLP applied to both the avg-pooled and max-pooled descriptors
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction_ratio),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction_ratio, channels),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Squeeze the spatial dimensions into per-channel descriptors
        avg_out = self.fc(self.avg_pool(x).view(x.size(0), -1))
        max_out = self.fc(self.max_pool(x).view(x.size(0), -1))
        # Sigmoid is applied after summing the two branches, as in CBAM
        out = self.sigmoid(avg_out + max_out)
        out = out.view(x.size(0), x.size(1), 1, 1)
        return x * out  # rescale each channel

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super(SpatialAttention, self).__init__()
        # 2 input channels: the channel-wise average and max maps
        self.conv = nn.Conv2d(2, 1, kernel_size=kernel_size,
                              padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_out = torch.mean(x, dim=1, keepdim=True)
        max_out, _ = torch.max(x, dim=1, keepdim=True)
        out = torch.cat([avg_out, max_out], dim=1)
        out = self.sigmoid(self.conv(out))
        return x * out  # rescale each spatial location

class CBAMModule(nn.Module):
    def __init__(self, channels, reduction_ratio=16):
        super(CBAMModule, self).__init__()
        self.channel_attention = ChannelAttention(channels, reduction_ratio)
        self.spatial_attention = SpatialAttention()

    def forward(self, x):
        # Channel attention first, then spatial attention
        x = self.channel_attention(x)
        x = self.spatial_attention(x)
        return x

# Using the CBAM module
cbam = CBAMModule(channels=64)
input_tensor = torch.randn(1, 64, 32, 32)
output_tensor = cbam(input_tensor)
print(output_tensor.shape)  # torch.Size([1, 64, 32, 32])
```
This code defines a `CBAMModule` implementing the CBAM attention mechanism, built from two sub-modules, `ChannelAttention` and `SpatialAttention`. In the forward pass, the input is first reweighted along the channel dimension by the channel attention sub-module, and the resulting feature map is then reweighted along the spatial dimensions by the spatial attention sub-module. The output has the same shape as the input, so the module can be dropped into a network without changing tensor dimensions. The usage example at the end shows how to apply the CBAM module to an input tensor.
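In practice, CBAM is usually inserted into an existing convolutional block rather than used standalone; the CBAM paper places it on the output of the convolutions inside each residual block, before the skip connection. Below is a minimal sketch of that placement, reusing the `CBAMModule` defined above. The `BasicBlockWithCBAM` name and the layer sizes are illustrative assumptions, not part of the original code.
```python
class BasicBlockWithCBAM(nn.Module):
    """Illustrative residual block with CBAM applied to the conv output
    before the residual addition (the placement used in the CBAM paper)."""

    def __init__(self, channels):
        super(BasicBlockWithCBAM, self).__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.cbam = CBAMModule(channels)  # CBAMModule defined above

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = self.cbam(out)  # refine features before the residual add
        return self.relu(out + identity)

block = BasicBlockWithCBAM(channels=64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```
Because both attention maps only rescale the input, the block's output shape is unchanged, which is what makes CBAM easy to retrofit into existing architectures.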