Transformer positional encoding dimension
Positional Encoding in the Transformer
In the Transformer model, the positional encoding has the same dimension as the input embedding. As the cited posts explain, for position information to participate in training, one constructs a matrix with the same shape as the input embedding and adds it elementwise to that embedding; the elementwise addition is exactly why the positional encoding's dimension must equal the embedding dimension (d_model). [1][2][3]
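To make the dimension match concrete, here is a minimal NumPy sketch of the sinusoidal encoding from "Attention Is All You Need", where PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). The sizes `seq_len=10` and `d_model=512` and the random `token_emb` stand-in are illustrative assumptions, not taken from the cited posts.

```python
import numpy as np

def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encoding of shape (seq_len, d_model),
    matching the embedding dimension so it can be added elementwise."""
    positions = np.arange(seq_len)[:, np.newaxis]           # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]          # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)  # (seq_len, d_model/2)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions use sine
    pe[:, 1::2] = np.cos(angles)  # odd dimensions use cosine
    return pe

# Hypothetical usage: add the encoding to a stand-in for token embeddings.
seq_len, d_model = 10, 512
token_emb = np.random.randn(seq_len, d_model)  # placeholder for learned embeddings
x = token_emb + positional_encoding(seq_len, d_model)
print(x.shape)  # (10, 512): addition changes the values, not the dimension
```

Because the two matrices have identical shapes, the sum feeds into the encoder with the same d_model, which is the property the answer above describes.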
#### References
- [1] [2] [Transformer的position embedding](https://blog.csdn.net/weixin_37539396/article/details/105974779)
- [3] [Transformer:Position Embedding解读](https://blog.csdn.net/weixin_45424997/article/details/108503792)