1"
DARPA"XAI"Literature"Review"" " " " " " " " " p."
"
"
Explanation in Human-AI Systems:
A Literature Meta-Review
Synopsis of Key Ideas and Publications
and
Bibliography for Explainable AI
Prepared by Task Area 2
Shane T. Mueller
Michigan Technological University
Robert R. Hoffman, William Clancey, Abigail Emrey
Institute for Human and Machine Cognition
Gary Klein
MacroCognition, LLC
DARPA XAI Program
February 2019
Explanation in Human-AI Systems
Executive Summary
This is an integrative review that addresses the question, "What makes for a good
explanation?" with reference to AI systems. The pertinent literatures are vast; thus, this review is
necessarily selective. That said, most of the key concepts and issues are expressed in this Report.
The Report encapsulates the history of computer science efforts to create systems that explain
and instruct (intelligent tutoring systems and expert systems). The Report expresses the
explainability issues and challenges in modern AI, and presents capsule views of the leading
psychological theories of explanation. Certain articles stand out by virtue of their particular
relevance to XAI, and their methods, results, and key points are highlighted.
It is recommended that AI/XAI researchers be encouraged to include in their research
reports fuller details on their empirical or experimental methods, in the fashion of experimental
psychology research reports: details on Participants, Instructions, Procedures, Tasks, Dependent
Variables (operational definitions of the measures and metrics), Independent Variables
(conditions), and Control Conditions.
In the papers reviewed in this Report one can find methodological guidance for the
evaluation of XAI systems. But the Report highlights some noteworthy considerations: the
differences between global and local explanations; the need to evaluate the performance of the
human-machine work system (and not just the performance of the AI or the performance of the
users); and the need to recognize that experimental procedures tacitly impose on the user the
burden of self-explanation.
Corrective/contrastive user tasks support self-explanation or explanation-as-exploration.
Tasks that involve human-AI interactivity and co-adaptation, such as bug or oddball detection,
hold promise for XAI evaluation since they too conform to the notions of "explanation-as-
exploration" and explanation as a co-adaptive dialog process. Tasks that involve predicting the
AI's determinations, combined with post-experimental interviews, hold promise for the study of
mental models in the XAI context.

3"
DARPA"XAI"Literature"Review"" " " " " " " " " p."
"
"
Preface
This Report is an expansion of a previous Report on the DARPA XAI Program, which was titled
"Literature Review and Integration of Key Ideas for Explainable AI," and was dated February
2018. This new version integrates nearly 200 additional references that have been discovered.
This Report includes a new section titled "Review of Human Evaluation of XAI Systems." This
section focuses on reports—many of them recent—on projects in which human-machine AI or
XAI systems underwent some sort of empirical evaluation. This new section is particularly
relevant to the empirical and experimental activities in the DARPA XAI Program.
Acknowledgements
Contributions to this Report were made by Sara Tan and Brittany Nelson of the Michigan
Technological University, and Jared Van Dam of the Institute for Human and Machine
Cognition.
This material is approved for public release. Distribution is unlimited. This material is based on
research sponsored by the Air Force Research Lab (AFRL) under agreement number FA8650-
17-2-7711. The U.S. Government is authorized to reproduce and distribute reprints for
Governmental purposes notwithstanding any copyright notation thereon.
Disclaimer
The views and conclusions contained herein are those of the authors and should not be
interpreted as necessarily representing the official policies or endorsements, either expressed or
implied, of AFRL or the U.S. Government.
Outline
Executive Summary
Preface
Acknowledgement and Disclaimer
1. Purpose, Scope, and Organization
2. Disciplinary Perspectives
3. Findings From Research on Pertinent Topics
4. Key Papers and Their Contributions that are Specifically Pertinent to XAI
5. Explanation in Artificial Intelligence Systems: An Historical Perspective
6. Psychological Theories, Hypotheses and Models
7. Synopsis of Key XAI Concepts
8. Evaluation of XAI Systems: Performance Evaluation Using Human Participants
9. Bibliography
APPENDIX: Evaluations of XAI System Performance Using Human Participants

5"
DARPA"XAI"Literature"Review"" " " " " " " " " p."
"
"
1. Purpose, Scope, and Organization
The purpose of this document is to distill from existing scientific literatures the recent key ideas
that pertain to the DARPA XAI Program.
Importance of the Topic
For decision makers who rely upon analytics and data science, explainability is a real issue. If a
computational system relies on a simple decision model such as logistic regression, they can
understand it and convince the executives who must sign off on the system that it is reasonable
and fair. They can justify the analytical results to shareholders, regulators, etc. But for
"Deep Net" and "Machine Learning" systems, they can no longer do this. There is a need to find
ways to explain the system to the decision maker so that they know that their decisions are going
to be reasonable; simply invoking a neurological metaphor might not be sufficient. The goals of
explanation involve persuasion, but persuasion comes only as a consequence of understanding
how the AI works, the mistakes the system can make, and the safety measures surrounding it.
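To make the contrast concrete, here is a minimal sketch (in Python with scikit-learn; the
feature names, data, and labels are hypothetical) of why a logistic regression is auditable by
inspection: each learned coefficient can be read directly as a feature's contribution to the
decision, a reading that the weights of a deep network do not admit.

    # A minimal, hypothetical example: the coefficients of a logistic
    # regression map directly onto feature contributions (log-odds per
    # unit), so a reviewer can read the decision policy off the model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["income", "debt_ratio", "late_payments"]  # hypothetical
    X = np.array([[55.0, 0.30, 1],
                  [32.0, 0.55, 4],
                  [78.0, 0.10, 0],
                  [41.0, 0.45, 3]])
    y = np.array([1, 0, 1, 0])  # 1 = approve, 0 = deny (hypothetical labels)

    model = LogisticRegression().fit(X, y)

    # Each coefficient is the change in log-odds per unit of that feature.
    for name, coef in zip(feature_names, model.coef_[0]):
        print(f"{name}: {coef:+.3f} log-odds per unit")

No analogous line-by-line reading exists for the millions of weights in a deep network, which
is precisely the gap that XAI aims to fill.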
... current efforts face unprecedented difficulties: contemporary models are more
complex and less interpretable than ever; [AI systems are] used for a wider array
of tasks, and are more pervasive in everyday life than in the past; and [AI is]
increasingly allowed to make (and take) more autonomous decisions (and
actions). Justifying these decisions will only become more crucial, and there is
little doubt that this field will continue to rise in prominence and produce exciting
and much needed work in the future (Biran and Cotton, 2017, p. 4).
This quotation brings into relief the importance of XAI. Governments and the general public are
expressing concern about the emerging "black box society." A proposed regulation before the
European Union (Goodman and Flaxman, 2016) prohibits "automatic processing" unless users'
rights are safeguarded. Users have a "right to an explanation" concerning algorithm-created
decisions.