Self-Attention Module Code
Notes on attention-mechanism code
Here is the code for a self-attention module:
```python
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    def __init__(self, embed_size, heads):
        super(SelfAttention, self).__init__()
        self.embed_size = embed_size
        self.heads = heads
        self.head_dim = embed_size // heads

        assert (self.head_dim * heads == embed_size), "Embed size needs to be divisible by heads"

        # Per-head projections for values, keys, and queries, plus the output projection
        self.values = nn.Linear(self.head_dim, self.head_dim, bias=False)
        self.keys = nn.Linear(self.head_dim, self.head_dim, bias=False)
        self.queries = nn.Linear(self.head_dim, self.head_dim, bias=False)
        self.fc_out = nn.Linear(heads * self.head_dim, embed_size)

    def forward(self, values, keys, query, mask):
        N = query.shape[0]
        value_len, key_len, query_len = values.shape[1], keys.shape[1], query.shape[1]

        # Split embedding into self.heads pieces
        values = values.reshape(N, value_len, self.heads, self.head_dim)
        keys = keys.reshape(N, key_len, self.heads, self.head_dim)
        query = query.reshape(N, query_len, self.heads, self.head_dim)

        # Apply the per-head linear projections
        values = self.values(values)
        keys = self.keys(keys)
        query = self.queries(query)

        # Compute energy between query and key for each head: (N, heads, query_len, key_len)
        energy = torch.einsum("nqhd,nkhd->nhqk", [query, keys])

        # Apply mask: positions where mask == 0 are set to a very negative score
        if mask is not None:
            energy = energy.masked_fill(mask == 0, float("-1e20"))

        # Scale and softmax over the key dimension to get attention weights
        attention = torch.softmax(energy / (self.embed_size ** (1 / 2)), dim=3)

        # Weighted sum of values, then concatenate heads: (N, query_len, heads * head_dim)
        out = torch.einsum("nhql,nlhd->nqhd", [attention, values]).reshape(
            N, query_len, self.heads * self.head_dim
        )

        # Apply final linear layer
        out = self.fc_out(out)
        return out
```
This is a multi-head self-attention module implemented in PyTorch. It contains three linear layers for the values, keys, and queries, plus an output projection. In the forward method, the input values, keys, and query tensors are first reshaped to (batch_size, seq_len, heads, head_dim) and passed through their per-head linear projections. torch.einsum then computes the attention scores between queries and keys for each head; the scores are scaled, optionally masked, and normalized with a softmax. Multiplying the resulting weights with the values gives each head's output, and the per-head outputs are concatenated and passed through the output layer fc_out to produce the final result.
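As a quick sanity check, here is a minimal usage sketch. The batch size, sequence length, embedding size, head count, and the causal mask are illustrative assumptions, not part of the original snippet:

```python
import torch

# Hypothetical dimensions chosen for illustration only
batch_size, seq_len, embed_size, heads = 2, 10, 256, 8

attention = SelfAttention(embed_size, heads)
x = torch.rand(batch_size, seq_len, embed_size)

# Example causal mask, broadcastable to (N, heads, query_len, key_len):
# position i may only attend to positions <= i
mask = torch.tril(torch.ones(seq_len, seq_len)).unsqueeze(0).unsqueeze(0)

out = attention(x, x, x, mask)  # self-attention: values = keys = queries = x
print(out.shape)                # torch.Size([2, 10, 256])
```

Because the mask is compared against 0 inside forward, any tensor broadcastable to the energy shape (N, heads, query_len, key_len) with zeros at the positions to be masked will work; passing None skips masking entirely.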