Citation Formats for Transformer References
Date: 2024-01-10 21:21:26 · Views: 226
The Transformer is a deep neural network based on the self-attention mechanism. Citations to Transformer-related papers commonly follow one of these formats:
1. ACL Anthology: Transformer-related papers presented at ACL conferences (e.g., ACL 2018) are archived in the ACL Anthology and cited as: Author(s). Title. Year. Conference name. Volume(issue): pages.
2. NAACL: Transformer-related papers from NAACL (e.g., NAACL 2019) follow the same pattern: Author(s). Title. Year. Conference name. Volume(issue): pages.
3. EMNLP: Transformer-related papers from EMNLP (e.g., EMNLP 2018) are cited likewise: Author(s). Title. Year. Conference name. Volume(issue): pages.
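As a concrete illustration of the format above, here is the official ACL Anthology citation for a well-known NAACL 2019 Transformer paper (BERT), used purely as an example:

```text
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019.
BERT: Pre-training of Deep Bidirectional Transformers for Language
Understanding. In Proceedings of the 2019 Conference of the North American
Chapter of the Association for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.
```

Note that conference papers often omit the volume(issue) field; the ACL Anthology page for each paper provides a ready-made citation in this style.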
Related questions
Classic Transformer references and how to cite them
The Transformer is a neural network model based on the self-attention mechanism that has achieved great success in natural language processing. Here are some classic Transformer references:
- Vision Transformers: An Overview of Models and Applications
- A Survey on Deep Learning for Image Classification: From Traditional Convolutional Neural Networks to the Transformer
- Attention Is All You Need
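For the foundational paper above ("Attention Is All You Need", Vaswani et al., NeurIPS 2017), a commonly used BibTeX entry, as provided by sources such as Google Scholar, looks like:

```bibtex
@inproceedings{vaswani2017attention,
  title     = {Attention Is All You Need},
  author    = {Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and
               Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N. and
               Kaiser, {\L}ukasz and Polosukhin, Illia},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2017}
}
```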
Convolutional neural network references
Based on keywords related to "convolutional neural networks" extracted by AMiner (a big-data mining and service platform for scientific and technical intelligence) from top international AI conferences and journals, 100 classic must-read papers were selected and recommended, covering CNN theory and practice in detection, recognition, classification, segmentation, tracking, and related areas, sorted by citation count. In addition, the latest advances in CNN-based classification regularly appear at major computer-vision venues such as CVPR, ICCV, and ECCV, and in journals such as TPAMI and IJCV. Here are some CNN references:
1. "ImageNet Classification with Deep Convolutional Neural Networks" by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton (2012)
2. "Very Deep Convolutional Networks for Large-Scale Image Recognition" by Karen Simonyan and Andrew Zisserman (2015)
3. "Going Deeper with Convolutions" by Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich (2015)
4. "Rethinking the Inception Architecture for Computer Vision" by Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna (2016)
5. "Deep Residual Learning for Image Recognition" by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun (2016)
6. "Mask R-CNN" by Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick (2017)
7. "YOLOv3: An Incremental Improvement" by Joseph Redmon and Ali Farhadi (2018)
8. "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks" by Mingxing Tan and Quoc V. Le (2019)
9. "An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale" (the Vision Transformer, ViT) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby (2020)
10. "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo (2021)
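As an example of how the references above translate into machine-readable form, here is a typical BibTeX entry for the first paper in the list (AlexNet, NeurIPS 2012); the remaining entries follow the same pattern:

```bibtex
@inproceedings{krizhevsky2012imagenet,
  title     = {ImageNet Classification with Deep Convolutional Neural Networks},
  author    = {Krizhevsky, Alex and Sutskever, Ilya and Hinton, Geoffrey E.},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2012}
}
```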