GPT-4: A Breakthrough in Large-Scale Multimodal Models
"GPT-4 报告"
GPT-4 is a large multimodal model developed by OpenAI and a significant advance in natural language processing (NLP). Unlike its predecessors, GPT-4 accepts image inputs in addition to text, which extends its range of applications: it can take part in more complex interactive scenarios, such as understanding and responding to requests that include visual information.
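To make the image-input capability concrete, here is a minimal sketch of a mixed text-and-image request. It uses today's OpenAI Python SDK and Chat Completions message format, which postdate the report itself; the model name and image URL are illustrative placeholders rather than details from the report:

```python
from openai import OpenAI  # requires the openai Python package (v1+)

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# One user turn that combines a text question with an image reference.
# "gpt-4o" and the URL are placeholders; any vision-capable model works.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is unusual about this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```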
GPT-4 is built on the Transformer architecture, a sequence-modeling approach that is widely used in deep learning and particularly well suited to natural language tasks. Its training objective is to predict the next token in a document, the standard setup for autoregressive language models. What distinguishes GPT-4 is its post-training alignment process, which is designed to improve factual accuracy and adherence to desired behavior. With this process, GPT-4 reaches human-level performance on a range of professional and academic benchmarks; for example, it scores in the top 10% of test takers on a simulated bar exam, demonstrating its ability on complex cognitive tasks.
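For reference, the next-token objective described above can be written as the standard autoregressive log-likelihood over a token sequence x_1, ..., x_T; the report does not disclose GPT-4's exact loss function, so this is the generic form:

```latex
\mathcal{L}(\theta) = \sum_{t=1}^{T} \log p_\theta\left(x_t \mid x_1, \ldots, x_{t-1}\right)
```

Training maximizes this quantity, which is equivalent to minimizing the cross-entropy between the model's predicted next-token distribution and the tokens actually observed.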
Although GPT-4 may still fall short of human intelligence in some settings, its benchmark results show strong learning and generalization ability. To get there, OpenAI focused on building infrastructure and optimization methods whose behavior remains predictable across a very wide range of scales. Thanks to these advances, some aspects of GPT-4's performance could be accurately predicted from models trained with as little as 1/1,000th of GPT-4's training compute.
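The report describes this prediction as fitting a power law with an irreducible-loss term to the results of smaller runs. The sketch below illustrates that kind of extrapolation with NumPy and SciPy; the loss values are fabricated for demonstration and are not data from the report:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical final losses from small training runs, indexed by training
# compute as a fraction of the full budget. These numbers are made up.
compute = np.array([1e-7, 1e-6, 1e-5, 1e-4, 1e-3])
loss = np.array([4.20, 3.45, 2.91, 2.51, 2.23])

def scaling_law(c, a, b, irreducible):
    """Power law with an irreducible-loss term: L(C) = a * C**b + irreducible."""
    return a * c**b + irreducible

params, _ = curve_fit(scaling_law, compute, loss, p0=(1.0, -0.1, 1.5), maxfev=10000)

# Extrapolate to the full compute budget (C = 1.0), i.e. 1,000x or more
# beyond the largest small run used for fitting.
predicted = scaling_law(1.0, *params)
print(f"a={params[0]:.3f}, b={params[1]:.3f}, irreducible={params[2]:.3f}")
print(f"predicted loss at full compute: {predicted:.3f}")
```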
The report's introduction stresses the research value of multimodal models such as GPT-4: they can be applied in a wide variety of settings, from automatic text generation and question-answering systems to more complex interactive applications. As the technology matures, models of this kind are expected to have a profound impact in professional domains such as education, law, and medicine, while also enabling new approaches to human-computer interaction, information retrieval, and automated services.
GPT-4 marks another major breakthrough in NLP. Its multimodal capability and strong performance suggest that artificial intelligence is moving closer to human-level competence, and as the technology is refined further, more models in the mold of GPT-4 are likely to follow, pushing AI deeper into real-world applications.
dou, Liam Fedus, Tarun Gogineni, Rapha Gontijo-Lopes, Jonathan
Gordon, Joost Huizinga, Shawn Jain, Roger Jiang, Łukasz Kaiser,
Christina Kim, Jan Leike, Chak Li, Stephanie Lin, Ryan Lowe, Jacob
Menick, Luke Metz, Pamela Mishkin, Tong Mu, Oleg Murk, Ashvin
Nair, Long Ouyang, Alex Passos, Michael (Rai) Pokorny, Vitchyr
Pong, Shibani Santurkar, Daniel Selsam, Sarah Shoker, Carroll Wainwright,
Matt Wiethoff, Jeff Wu, Kai Xiao, Kevin Yu, Marvin Zhang,
Chong Zhang, William Zhuk, Barret Zoph
All author lists are sorted alphabetically.
Data infrastructure
Irwan Bello, Lenny Bogdonoff, Juan Felipe Cerón Uribe, Joshua
Gross, Shawn Jain, Haozhun Jin, Christina Kim, Aris Konstantinidis,
Teddy Lee, David Medina, Jacob Menick, Luke Metz, Ashvin Nair,
Long Ouyang, Michael (Rai) Pokorny, Vitchyr Pong, John Schulman,
Jonathan Ward, Jiayi Weng, Matt Wiethoff, Sarah Yoo, Kevin Yu,
Wojciech Zaremba, William Zhuk, Barret Zoph
ChatML format
Ilge Akkaya, Christina Kim, Chak Li, Rachel Lim, Jacob Menick,
Luke Metz, Andrey Mishchenko, Vitchyr Pong, John Schulman,
Carroll Wainwright, Barret Zoph
Model safety
Josh Achiam, Steven Adler, Juan Felipe Cerón Uribe, Hyung Won
Chung, Tyna Eloundou, Rapha Gontijo-Lopes, Shixiang Shane Gu,
Johannes Heidecke, Joost Huizinga, Teddy Lee, Jan Leike, Stephanie
Lin, Ryan Lowe, Todor Markov, Luke Metz, Tong Mu, Shibani
Santurkar, John Schulman, Andrea Vallone, Carroll Wainwright, Jason
Wei, Lilian Weng, Kai Xiao, Chong Zhang, Marvin Zhang, Barret Zoph
Refusals
Juan Felipe Cerón Uribe, Tyna Eloundou, Johannes Heidecke, Joost
Huizinga, Jan Leike, Stephanie Lin, Ryan Lowe, Pamela Mishkin,
Tong Mu, Carroll Wainwright, Lilian Weng, Kai Xiao, Chong Zhang,
Barret Zoph
Foundational RLHF and InstructGPT work
Diogo Almeida, Joost Huizinga, Roger Jiang, Jan Leike, Stephanie Lin,
Ryan Lowe, Pamela Mishkin, Dan Mossing, Long Ouyang, Katarina
Slama, Carroll Wainwright, Jeff Wu, Kai Xiao, Marvin Zhang
Flagship training runs
Greg Brockman, Liam Fedus, Johannes Heidecke, Joost Huizinga,
Roger Jiang, Kyle Kosic, Luke Metz, Ashvin Nair, Jiayi Weng, Chong
Zhang, Shengjia Zhao, Barret Zoph
Code capability
Ilge Akkaya, Mo Bavarian, Jonathan Gordon, Shawn Jain, Haozhun
Jin, Teddy Lee, Chak Li, Oleg Murk, Ashvin Nair, Vitchyr Pong,
Benjamin Sokolowsky, Jerry Tworek, Matt Wiethoff, Sarah Yoo, Kevin
Yu, Wojciech Zaremba, William Zhuk
Evaluation & analysis
Core contributors
Sandhini Agarwal System card co-lead
Lama Ahmad Expert red teaming & adversarial testing program lead
Mo Bavarian Capability prediction co-lead
Tyna Eloundou Safety evaluations co-lead
Andrew Kondrich OpenAI Evals open-sourcing co-lead
Gretchen Krueger System card co-lead
Michael Lampe Privacy and PII evaluations lead
Pamela Mishkin Economic impact & overreliance evaluations lead
Benjamin Sokolowsky Capability prediction co-lead
Jack Rae Research benchmark execution lead
Chelsea Voss Eval execution lead
Alvin Wang OpenAI Evals lead
Kai Xiao Safety evaluations co-lead
Marvin Zhang OpenAI Evals open-sourcing co-lead
OpenAI Evals library
Shixiang Shane Gu, Angela Jiang, Logan Kilpatrick, Andrew Kondrich,
Pamela Mishkin, Jakub Pachocki, Ted Sanders, Jessica Shieh,
Alvin Wang, Marvin Zhang
Model-graded evaluation infrastructure
Liam Fedus, Rapha Gontijo-Lopes, Shixiang Shane Gu, Andrew
Kondrich, Michael (Rai) Pokorny, Wojciech Zaremba, Chong Zhang,
Marvin Zhang, Shengjia Zhao, Barret Zoph
Acceleration forecasting
Alan Hickey, Daniel Kokotajlo, Cullen O’Keefe, Sarah Shoker
ChatGPT evaluations
Juan Felipe Cerón Uribe, Hyung Won Chung, Rapha Gontijo-Lopes,
Liam Fedus, Luke Metz, Michael (Rai) Pokorny, Jason Wei, Shengjia
Zhao, Barret Zoph
Capability evaluations
Tyna Eloundou, Shengli Hu, Roger Jiang, Jamie Kiros, Teddy Lee,
Scott Mayer McKinney, Jakub Pachocki, Alex Paino, Giambattista
Parascandolo, Boris Power, Raul Puri, Jack Rae, Nick Ryder, Ted
Sanders, Szymon Sidor, Benjamin Sokolowsky, Chelsea Voss, Alvin
Wang, Rowan Zellers, Juntang Zhuang
Coding evaluations
Ilge Akkaya, Mo Bavarian, Jonathan Gordon, Shawn Jain, Chak Li,
Oleg Murk, Vitchyr Pong, Benjamin Sokolowsky, Jerry Tworek, Kevin
Yu, Wojciech Zaremba
Real-world use case evaluations
Andrew Kondrich, Joe Palermo, Boris Power, Ted Sanders
Contamination investigations
Adrien Ecoffet, Roger Jiang, Ingmar Kanitscheider, Scott Mayer
McKinney, Alex Paino, Giambattista Parascandolo, Jack Rae, Qiming
Yuan
Instruction following and API evals
Diogo Almeida, Carroll Wainwright, Marvin Zhang
Novel capability discovery
Filipe de Avila Belbute Peres, Kevin Button, Fotis Chantzis, Mike
Heaton, Wade Hickey, Xin Hu, Andrew Kondrich, Matt Knight, Andrew
Mayne, Jake McNeil, Vinnie Monaco, Joe Palermo, Joel Parish,
Boris Power, Bob Rotsted, Ted Sanders
Vision evaluations
Shixiang Shane Gu, Shengli Hu, Jamie Kiros, Hyeonwoo Noh, Raul
Puri, Rowan Zellers
Economic impact evaluation
Tyna Eloundou, Sam Manning, Aalok Mehta, Pamela Mishkin
Non-proliferation, international humanitarian law & national
security red teaming
Sarah Shoker
Overreliance analysis
Miles Brundage, Michael Lampe, Pamela Mishkin
Privacy and PII evaluations
Michael Lampe, Vinnie Monaco, Ashley Pantuliano
Safety and policy evaluations
Josh Achiam, Sandhini Agarwal, Lama Ahmad, Jeff Belgum, Tyna
Eloundou, Johannes Heidecke, Shengli Hu, Joost Huizinga, Jamie
Kiros, Gretchen Krueger, Michael Lampe, Stephanie Lin, Ryan Lowe,
Todor Markov, Vinnie Monaco, Tong Mu, Raul Puri, Girish Sastry,
Andrea Vallone, Carroll Wainwright, CJ Weinmann, Lilian Weng, Kai
Xiao, Chong Zhang
OpenAI adversarial testers
Josh Achiam, Steven Adler, Lama Ahmad, Shyamal Anadkat, Red
Avila, Gabriel Bernadett-Shapiro, Anna-Luisa Brakman, Tim Brooks,
Miles Brundage, Chelsea Carlson, Derek Chen, Hyung Won Chung,
Jeremiah Currier, David Dohan, Adrien Ecoffet,
Juston Forte, Vik Goel, Ryan Greene, Johannes Heidecke, Alan Hickey,
Shengli Hu, Joost Huizinga, Janko, Tomer Kaftan, Ali Kamali, Nitish
Shirish Keskar, Tabarak Khan, Hendrik Kirchner, Daniel Kokotajlo,
Gretchen Krueger, Michael Lampe, Teddy Lee, Molly Lin, Ryan
Lowe, Todor Markov, Jake McNeil, Pamela Mishkin, Vinnie Monaco,
Daniel Mossing, Tong Mu, Oleg Murk, Cullen O’Keefe, Joe Palermo,
Giambattista Parascandolo, Joel Parish, Boris Power, Alethea Power,
Cameron Raymond, Francis Real, Bob Rotsted, Mario Saltarelli, Sam
Wolrich, Ted Sanders, Girish Sastry, Sarah Shoker,
Yang Song, Natalie Staudacher, Madeleine Thompson, Elizabeth
Tseng, Chelsea Voss, Jason Wei, Chong Zhang
System card & broader impacts analysis
Steven Adler, Sandhini Agarwal, Lama Ahmad, Janko Altenschmidt,
Jeff Belgum, Gabriel Bernadett-Shapiro, Miles Brundage, Derek Chen,
Tyna Eloundou, Liam Fedus, Leo Gao, Vik Goel, Johannes Heidecke,
Alan Hickey, Shengli Hu, Joost Huizinga, Daniel Kokotajlo, Gretchen
Krueger, Michael Lampe, Jade Leung, Stephanie Lin, Ryan Lowe,
Kim Malfacini, Todor Markov, Bianca Martin, Aalok Mehta, Pamela
Mishkin, Tong Mu, Richard Ngo, Cullen O’Keefe, Joel Parish, Rai
Pokorny, Bob Rotsted, Girish Sastry, Sarah Shoker, Andrea Vallone,
Carroll Wainwright, CJ Weinmann, Lilian Weng, Dave Willner, Kai
Xiao, Chong Zhang
Deployment
Core contributors
Steven Adler Early stage program management lead
Sandhini Agarwal Launch safety lead
Derek Chen Monitoring & response lead
Atty Eleti GPT-4 API co-lead
Joanne Jang GPT-4 product co-lead
Angela Jiang GPT-4 product co-lead
Tomer Kaftan Inference infrastructure & deployment lead
Rachel Lim GPT-4 API co-lead
Kim Malfacini Usage policy lead
Bianca Martin Release program management lead
Evan Morikawa Engineering lead
Henrique Ponde de Oliveira Pinto Inference workflow lead
Heather Schmidt GPT-4 infrastructure management
Maddie Simens Design lead
Felipe Such Inference optimization & reliability lead
Andrea Vallone Detection & refusals policy lead
Lilian Weng Applied research lead
Dave Willner Trust & safety lead
Michael Wu Inference research lead
Inference research
Paul Baltescu, Scott Gray, Yuchen He, Arvind Neelakantan, Michael
Wu
GPT-4 API & ChatML deployment
Greg Brockman, Brooke Chan, Chester Cho, Atty Eleti, Rachel Lim,
Andrew Peng, Michelle Pokrass, Sherwin Wu
GPT-4 web experience
Valerie Balcom, Lenny Bogdonoff, Jason Chen, Dave Cummings,
Noah Deutsch, Mike Heaton, Paul McMillan, Rajeev Nayak, Joel
Parish, Adam Perelman, Eric Sigler, Nick Turley, Arun Vijayvergiya,
Chelsea Voss
Inference infrastructure
Brooke Chan, Scott Gray, Chris Hallacy, Kenny Hsu, Tomer Kaftan,
Rachel Lim, Henrique Ponde de Oliveira Pinto, Raul Puri, Heather
Schmidt, Felipe Such
Reliability engineering
Haiming Bao, Madelaine Boyd, Ben Chess, Damien Deville, Yufei
Guo, Vishal Kuo, Ikai Lan, Michelle Pokrass, Carl Ross, David
Schnurr, Jordan Sitkin, Felipe Such
Trust & safety engineering
Jeff Belgum, Madelaine Boyd, Vik Goel
Trust & safety monitoring and response
Janko Altenschmidt, Anna-Luisa Brakman, Derek Chen, Florencia
Leoni Aleman, Molly Lin, Cameron Raymond, CJ Weinmann, Dave
Willner, Samuel Wolrich
Trust & safety policy
Rosie Campbell, Kim Malfacini, Andrea Vallone, Dave Willner
Deployment compute
Peter Hoeschele, Evan Morikawa
Product management
Jeff Harris, Joanne Jang, Angela Jiang
Additional contributions
Sam Altman, Katie Mayer, Bob McGrew, Mira Murati, Ilya Sutskever,
Peter Welinder
Blog post & paper content
Sandhini Agarwal, Greg Brockman, Miles Brundage, Adrien Ecoffet,
Tyna Eloundou, David Farhi, Johannes Heidecke, Shengli Hu, Joost
Huizinga, Roger Jiang, Gretchen Krueger, Jan Leike, Daniel Levy,
Stephanie Lin, Ryan Lowe, Tong Mu, Hyeonwoo Noh, Jakub Pachocki,
Jack Rae, Kendra Rimbach, Shibani Santurkar, Szymon Sidor,
Benjamin Sokolowsky, Jie Tang, Chelsea Voss, Kai Xiao, Rowan
Zellers, Chong Zhang, Marvin Zhang
Communications
Ruby Chen, Cory Decareaux, Thomas Degry, Steve Dowling, Niko
Felix, Elie Georges, Anna Makanju, Andrew Mayne, Aalok Mehta,
Elizabeth Proehl, Kendra Rimbach, Natalie Summers, Justin Jay Wang,
Hannah Wong
Compute allocation support
Theresa Lopez, Elizabeth Tseng
Contracting, revenue, pricing, & finance support
Brooke Chan, Denny Jin, Billie Jonn, Patricia Lue, Kyla Sheppard,
Lauren Workman
Launch partners & product operations
Filipe de Avila Belbute Peres, Brittany Carey, Simón Posada Fishman,
Isabella Fulford, Teddy Lee, Yaniv Markovski, Tolly Powell, Toki
Sherbakov, Jessica Shieh, Natalie Staudacher, Preston Tuggle
Legal
Jake Berdine, Che Chang, Sheila Dunning, Ashley Pantuliano
Security & privacy engineering
Kevin Button, Fotis Chantzis, Wade Hickey, Xin Hu, Shino Jomoto,
Matt Knight, Jake McNeil, Vinnie Monaco, Joel Parish, Bob Rotsted
System administration & on-call support
Morgan Grafstein, Francis Real, Mario Saltarelli
We also acknowledge and thank every OpenAI team member not explicitly mentioned above,
including the amazing people on the executive assistant, finance, go to market, human resources,
legal, operations and recruiting teams. From hiring everyone in the company, to making sure we have
an amazing office space, to building the administrative, HR, legal, and financial structures that allow
us to do our best work, everyone at OpenAI has contributed to GPT-4.
We thank Microsoft for their partnership, especially Microsoft Azure for supporting model
training with infrastructure design and management, and the Microsoft Bing team and Microsoft’s
safety teams for their partnership on safe deployment.
We are grateful to our expert adversarial testers and red teamers who helped test our models
at early stages of development and informed our risk assessments as well as the System Card
output. Participation in this red teaming process is not an endorsement of the deployment plans
of OpenAI or OpenAI’s policies: Steven Basart, Sophie Duba, Cèsar Ferri, Heather Frase, Gavin
Hartnett, Jake J. Hecla, Dan Hendrycks, Jose Hernandez-Orallo, Alice Hunsberger, Rajiv W.
Jain, Boru Gollo Jattani, Lauren Kahn, Dan Kaszeta, Sara Kingsley, Noam Kolt, Nathan Labenz,
Eric Liddick, Andrew J. Lohn, Andrew MacPherson, Sam Manning, Mantas Mazeika, Anna
Mills, Yael Moros, Jimin Mun, Aviv Ovadya, Roya Pakzad, Yifan Peng, Ciel Qi, Alex Rosenblatt,
Paul Röttger, Maarten Sap, Wout Schellaert, George Shih, Muhammad Shoker, Melanie Subbiah,
Bryan West, Andrew D. White, Anna Katariina Wisakanto, Akhila Yerukola, Lexin Zhou, Xuhui Zhou
We thank our collaborators at Casetext and Stanford CodeX for conducting the simulated
bar exam: P. Arredondo (Casetext/Stanford CodeX), D. Katz (Stanford CodeX), M. Bommarito
(Stanford CodeX), S. Gao (Casetext).
GPT-4 was used for help with wording, formatting, and styling throughout this work.