Generative Adversarial Imitation Learning
Jonathan Ho
OpenAI
hoj@openai.com
Stefano Ermon
Stanford University
ermon@cs.stanford.edu
Abstract
Consider learning a policy from example expert behavior, without interaction with
the expert or access to a reinforcement signal. One approach is to recover the
expert’s cost function with inverse reinforcement learning, then extract a policy
from that cost function with reinforcement learning. This approach is indirect
and can be slow. We propose a new general framework for directly extracting a
policy from data as if it were obtained by reinforcement learning following inverse
reinforcement learning. We show that a certain instantiation of our framework
draws an analogy between imitation learning and generative adversarial networks,
from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments.
1 Introduction
We are interested in a specific setting of imitation learning—the problem of learning to perform a
task from expert demonstrations—in which the learner is given only samples of trajectories from
the expert, is not allowed to query the expert for more data while training, and is not provided a
reinforcement signal of any kind. There are two main approaches suitable for this setting: behavioral
cloning [18], which learns a policy as a supervised learning problem over state-action pairs from expert trajectories; and inverse reinforcement learning [23, 16], which finds a cost function under which the expert is uniquely optimal.
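As a deliberately tiny sketch of the behavioral-cloning approach just described, the following fits a policy to expert state-action pairs by supervised learning. The linear expert `W_expert` and the least-squares fit are illustrative assumptions, not the paper's setup; in practice the policy would be a neural network trained by gradient descent.

```python
# Behavioral cloning as supervised learning on expert (state, action) pairs.
# Illustrative: the "expert" here is a hypothetical linear policy a = W s.
import numpy as np

rng = np.random.default_rng(0)
W_expert = np.array([[1.0, -0.5], [0.3, 2.0]])  # hypothetical expert policy

# Expert demonstrations: sampled states and the actions the expert took.
states = rng.normal(size=(500, 2))
actions = states @ W_expert.T

# Behavioral cloning = regress actions on states (ordinary least squares).
W_clone, *_ = np.linalg.lstsq(states, actions, rcond=None)
W_clone = W_clone.T  # lstsq solves states @ X = actions, so X = W_clone.T

print(np.allclose(W_clone, W_expert, atol=1e-6))
```

Note that the cloned policy is fit only on states the expert visited, which is exactly what exposes it to the covariate-shift problem discussed next.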
Behavioral cloning, while appealingly simple, only tends to succeed with large amounts of data, due
to compounding error caused by covariate shift [21, 22]. Inverse reinforcement learning (IRL), on
the other hand, learns a cost function that prioritizes entire trajectories over others, so compounding
error, a problem for methods that fit single-timestep decisions, is not an issue. Accordingly, IRL has
succeeded in a wide range of problems, from predicting behaviors of taxi drivers [29] to planning
footsteps for quadruped robots [20].
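The compounding-error argument can be made concrete with a small worst-case calculation (illustrative only, not from the paper): assume the cloned policy slips with probability `eps` at each on-distribution step, and that a single slip pushes it off the expert's state distribution for good. The expected number of off-distribution timesteps then grows superlinearly in the horizon `T`.

```python
# Worst-case illustration of compounding error under covariate shift:
# one mistake is absorbing, so errors accumulate over the horizon.

def expected_off_distribution_steps(T: int, eps: float) -> float:
    """Expected timesteps spent off the expert's state distribution."""
    total = 0.0
    p_on = 1.0  # probability no mistake has been made yet
    for _ in range(T):
        total += 1.0 - p_on  # this step is off-distribution w.p. 1 - p_on
        p_on *= 1.0 - eps    # survive one more step w.p. (1 - eps)
    return total

for T in (10, 100, 1000):
    print(T, round(expected_off_distribution_steps(T, eps=0.01), 3))
```

For `eps * T` small, the expected error is roughly `eps * T**2 / 2`: quadratic in the horizon, which is the growth that motivates the covariate-shift critique of single-timestep methods.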
Unfortunately, many IRL algorithms are extremely expensive to run, requiring reinforcement learning
in an inner loop. Scaling IRL methods to large environments has thus been the focus of much recent
work [6, 13]. Fundamentally, however, IRL learns a cost function, which explains expert behavior
but does not directly tell the learner how to act. Given that the learner’s true goal often is to take
actions imitating the expert—indeed, many IRL algorithms are evaluated on the quality of the optimal
actions of the costs they learn—why, then, must we learn a cost function, if doing so possibly incurs
significant computational expense yet fails to directly yield actions?
We desire an algorithm that tells us explicitly how to act by directly learning a policy. To develop such
an algorithm, we begin in Section 3, where we characterize the policy given by running reinforcement
learning on a cost function learned by maximum causal entropy IRL [29, 30]. Our characterization
introduces a framework for directly learning policies from data, bypassing any intermediate IRL step.
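For reference, the maximum causal entropy IRL problem that this characterization starts from can be written (in the paper's notation, with cost class C, policy class Π, and H(π) = E_π[−log π(a|s)] the γ-discounted causal entropy) as:

```latex
% Maximum causal entropy IRL: find a cost under which the expert
% outperforms all other policies, regularized by causal entropy.
\mathrm{IRL}(\pi_E) \;=\; \operatorname*{arg\,max}_{c \in \mathcal{C}}
  \Bigl( \min_{\pi \in \Pi} \, -H(\pi) + \mathbb{E}_{\pi}[c(s,a)] \Bigr)
  \;-\; \mathbb{E}_{\pi_E}[c(s,a)]
```

Running reinforcement learning on the recovered cost, RL(c) = arg min_{π∈Π} −H(π) + E_π[c(s,a)], then yields the imitating policy; it is this RL∘IRL composition that the framework characterizes directly.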
Then, we instantiate our framework in Sections 4 and 5 with a new model-free imitation learning
algorithm. We show that our resulting algorithm is intimately connected to generative adversarial networks.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.