FCA Attention Mechanism Code
FCA (Feature-wise Linear Modulation with Composite Attention) is an attention mechanism used in computer vision tasks. The example code below shows how an FCA-style attention module can be implemented, here as a simple learned weighting over a set of feature vectors:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FCAAttention(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super(FCAAttention, self).__init__()
        # Two-layer scoring network that maps each feature vector to a scalar attention score
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, 1)

    def forward(self, input):
        # input: (batch_size, seq_len, input_dim)
        x = F.relu(self.fc1(input))                           # (batch_size, seq_len, hidden_dim)
        attention_weights = F.softmax(self.fc2(x), dim=1)     # (batch_size, seq_len, 1), normalized over seq_len
        attended_input = torch.mul(attention_weights, input)  # weight each position's features
        output = torch.sum(attended_input, dim=1)             # (batch_size, input_dim): attention-weighted sum
        return output


# Example usage
input_dim = 512   # input feature dimension
hidden_dim = 256  # hidden layer dimension
seq_len = 10      # number of positions (e.g. spatial locations) to attend over

attention = FCAAttention(input_dim, hidden_dim)
input = torch.randn(32, seq_len, input_dim)  # input data, 32 is the batch size
output = attention(input)                    # output of the FCA attention module
print(output.shape)                          # torch.Size([32, 512])
```
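As a further illustration, here is a minimal sketch of how the pooled output of `FCAAttention` could feed a downstream task head, for example a small classifier over a set of region features. The `FCAClassifier` wrapper, the `num_classes` value, and the random input shapes are illustrative assumptions for this sketch, not part of any published FCA implementation.
```python
import torch
import torch.nn as nn


class FCAClassifier(nn.Module):
    """Hypothetical wrapper: FCA attention pooling followed by a linear classifier."""

    def __init__(self, input_dim, hidden_dim, num_classes):
        super(FCAClassifier, self).__init__()
        self.attention = FCAAttention(input_dim, hidden_dim)  # module defined above
        self.classifier = nn.Linear(input_dim, num_classes)

    def forward(self, features):
        # features: (batch_size, seq_len, input_dim), e.g. flattened CNN feature-map regions
        pooled = self.attention(features)  # (batch_size, input_dim)
        logits = self.classifier(pooled)   # (batch_size, num_classes)
        return logits


# Example usage (illustrative shapes)
model = FCAClassifier(input_dim=512, hidden_dim=256, num_classes=10)
features = torch.randn(32, 10, 512)  # 32 samples, 10 regions, 512-dim features
logits = model(features)
print(logits.shape)  # torch.Size([32, 10])
```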