Transformer Multi-Head Attention Code
Multi-head attention is one of the core components of the Transformer model. It uses several attention heads in parallel so that each head can capture different relationships and features in the input sequence. Below is a simple code example of Transformer multi-head attention:
```python
import math

import torch
import torch.nn as nn


class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):
        super(MultiHeadAttention, self).__init__()
        self.num_heads = num_heads
        self.d_model = d_model
        self.d_k = d_model // num_heads
        # Linear projections for queries, keys, values, and the final output
        self.W_Q = nn.Linear(d_model, d_model)
        self.W_K = nn.Linear(d_model, d_model)
        self.W_V = nn.Linear(d_model, d_model)
        self.W_O = nn.Linear(d_model, d_model)

    def forward(self, Q, K, V, mask=None):
        batch_size = Q.size(0)
        # Project the inputs to Q, K, V
        Q = self.W_Q(Q)
        K = self.W_K(K)
        V = self.W_V(V)
        # Split Q, K, V into multiple heads:
        # (batch, seq_len, d_model) -> (batch, num_heads, seq_len, d_k)
        Q = Q.view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)
        K = K.view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)
        V = V.view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)
        # Scaled dot-product attention scores: (batch, num_heads, seq_len_q, seq_len_k)
        scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(self.d_k)
        # Mask out positions where mask == 0
        if mask is not None:
            scores = scores.masked_fill(mask == 0, -1e9)
        # Attention weights
        attention_weights = torch.softmax(scores, dim=-1)
        # Weighted sum over the values
        attention_output = torch.matmul(attention_weights, V)
        # Concatenate the heads: (batch, num_heads, seq_len, d_k) -> (batch, seq_len, d_model)
        attention_output = attention_output.transpose(1, 2).contiguous().view(batch_size, -1, self.d_model)
        # Final output projection
        output = self.W_O(attention_output)
        return output, attention_weights
```
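As a quick sanity check, here is a minimal usage sketch for the module above; the batch size, sequence length, and model dimensions are illustrative choices, not values from the original post:

```python
import torch

d_model, num_heads = 512, 8
mha = MultiHeadAttention(d_model, num_heads)

x = torch.randn(2, 10, d_model)   # (batch, seq_len, d_model), illustrative shape
output, weights = mha(x, x, x)    # self-attention: Q = K = V = x

print(output.shape)   # torch.Size([2, 10, 512])
print(weights.shape)  # torch.Size([2, 8, 10, 10]) -> one attention map per head
```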
This code implements a simple multi-head attention mechanism, where `d_model` is the input and output dimension and `num_heads` is the number of attention heads. In the `forward` method, the inputs Q, K, and V are first mapped through linear layers, then split into multiple heads, and the scaled dot-product attention scores are computed. If a mask is provided, masked positions are filled with a large negative value before the softmax, so their attention weights become effectively zero; the resulting weights are then used to take a weighted sum over V. Finally, the outputs of all heads are concatenated and passed through the output linear layer to produce the final result.
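One common use of the `mask` argument is a causal (look-ahead) mask for decoder self-attention. The sketch below is an assumption-laden illustration (the mask shape and sizes are chosen for this example): a lower-triangular matrix of ones, reshaped so it broadcasts over the batch and head dimensions, blocks every position from attending to later positions.

```python
import torch

seq_len, d_model, num_heads = 10, 512, 8
mha = MultiHeadAttention(d_model, num_heads)
x = torch.randn(2, seq_len, d_model)

# Lower-triangular matrix: 1 where attending is allowed, 0 above the diagonal.
# Shape (1, 1, seq_len, seq_len) so it broadcasts over batch and heads.
causal_mask = torch.tril(torch.ones(seq_len, seq_len)).view(1, 1, seq_len, seq_len)

output, weights = mha(x, x, x, mask=causal_mask)
print(weights[0, 0, 0])  # first query attends only to the first key position
```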