This Is a Publication of
The American Association for Artificial Intelligence
This electronic document has been retrieved from the
American Association for Artificial Intelligence
445 Burgess Drive
Menlo Park, California 94025
(415) 328-3123
(415) 321-4457
info@aaai.org
http://www.aaai.org
(For membership information,
consult our web page)
The material herein is copyrighted material. It may not be
reproduced in any form by any electronic or
mechanical means (including photocopying, recording,
or information storage and retrieval) without permission
in writing from AAAI.
Articles
50 AI MAGAZINE
Bayesian Networks without Tears
Eugene Charniak

I give an introduction to Bayesian networks for AI researchers with a limited grounding in probability theory. Over the last few years, this method of reasoning using probabilities has become popular within the AI probability and uncertainty community. Indeed, it is probably fair to say that Bayesian networks are to a large segment of the AI-uncertainty community what resolution theorem proving is to the AI-logic community. Nevertheless, despite what seems to be their obvious importance, the ideas and techniques have not spread much beyond the research community responsible for them. This is probably because the ideas and techniques are not that easy to understand. I hope to rectify this situation by making Bayesian networks more accessible to the probabilistically unsophisticated.

0738-4602/91/$4.00 ©1991 AAAI

Over the last few years, a method of reasoning using probabilities, variously called belief networks, Bayesian networks, knowledge maps, probabilistic causal networks, and so on, has become popular within the AI probability and uncertainty community. This method is best summarized in Judea Pearl’s (1988) book, but the ideas are a product of many hands. I adopted Pearl’s name, Bayesian networks, on the grounds that the name is completely neutral about the status of the networks (do they really represent beliefs, causality, or what?). Bayesian networks have been applied to problems in medical diagnosis (Heckerman 1990; Spiegelhalter, Franklin, and Bull 1989), map learning (Dean 1990), language understanding (Charniak and Goldman 1989a, 1989b; Goldman 1990), vision (Levitt, Mullin, and Binford 1989), heuristic search (Hansson and Mayer 1989), and so on. It is probably fair to say that Bayesian networks are to a large segment of the AI-uncertainty community what resolution theorem proving is to the AI-logic community.

Nevertheless, despite what seems to be their obvious importance, the ideas and techniques have not spread much beyond the research community responsible for them. This is probably because the ideas and techniques are not that easy to understand. I hope to rectify this situation by making Bayesian networks more accessible to the probabilistically unsophisticated. That is, this article tries to make the basic ideas and intuitions accessible to someone with a limited grounding in probability theory (equivalent to what is presented in Charniak and McDermott [1985]).
An Example Bayesian Network
The best way to understand Bayesian networks is to imagine trying to model a situation in which causality plays a role but where our understanding of what is actually going on is incomplete, so we need to describe things probabilistically. Suppose when I go home at night, I want to know if my family is home before I try the doors. (Perhaps the most convenient door to enter is double locked when nobody is home.) Now, often when my wife leaves the house, she turns on an outdoor light. However, she sometimes turns on this light if she is expecting a guest. Also, we have a dog. When nobody is home, the dog is put in the back yard. The same is true if the dog has bowel troubles. Finally, if the dog is in the backyard, I will probably hear her barking (or what I think is her barking), but sometimes I can be confused by other dogs barking. This example, partially inspired by Pearl’s (1988) earthquake example, is illustrated in figure 1. There we find a graph not unlike many we see in AI. We might want to use such diagrams to predict what will happen (if my family goes out, the dog goes out) or to infer causes from observed effects (if the light is on and the dog is out, then my family is probably out).
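The graph of figure 1 can be written down as a simple parents dictionary mapping each variable to its direct causes. The encoding here is a sketch of my own for illustration, not a notation from the article:

```python
# The causal graph of figure 1 as a parents dictionary
# (variable -> list of direct causes).
parents = {
    "family-out":    [],                 # root node
    "bowel-problem": [],                 # root node
    "light-on":      ["family-out"],
    "dog-out":       ["family-out", "bowel-problem"],
    "hear-bark":     ["dog-out"],
}

# Following arcs downward (cause to effect) supports prediction;
# following them upward supports inferring causes from observed effects.
def children(var):
    return [v for v, ps in parents.items() if var in ps]

print(children("family-out"))   # ['light-on', 'dog-out']
```

Reading the arcs downward from family-out recovers the predictive direction described in the text; reading them upward from hear-bark recovers the diagnostic one.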
The important thing to note about this example is that the causal connections are not absolute. Often, my family will have left without putting out the dog or turning on a light. Sometimes we can use these diagrams anyway, but in such cases, it is hard to know what to infer when not all the evidence points the same way. Should I assume the family is out if the light is on, but I do not hear the dog? What if I hear the dog, but the light is out? Naturally, if we knew the relevant probabilities, such as P(family-out | light-on, ¬hear-bark), then we would be all set. However, typically, such numbers are not available for all possible combinations of circumstances. Bayesian networks allow us to calculate them from a small set of probabilities, relating only neighboring nodes.
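The calculation just alluded to can be sketched by brute-force enumeration over the joint distribution. The structure follows figure 1, but the numeric probabilities below are illustrative assumptions of mine, not values given in this excerpt:

```python
import itertools

# Assumed local probabilities for the figure 1 network; each number
# relates only a node to its direct neighbors, as the text describes.
P_fo = 0.15                                  # P(family-out)
P_bp = 0.01                                  # P(bowel-problem)
P_lo = {True: 0.60, False: 0.05}             # P(light-on | family-out)
P_do = {(True, True): 0.99, (True, False): 0.90,
        (False, True): 0.97, (False, False): 0.30}  # P(dog-out | fo, bp)
P_hb = {True: 0.70, False: 0.01}             # P(hear-bark | dog-out)

def joint(fo, bp, lo, do, hb):
    """Joint probability as a product of the local (neighbor-only) terms."""
    p = (P_fo if fo else 1 - P_fo)
    p *= (P_bp if bp else 1 - P_bp)
    p *= (P_lo[fo] if lo else 1 - P_lo[fo])
    p *= (P_do[(fo, bp)] if do else 1 - P_do[(fo, bp)])
    p *= (P_hb[do] if hb else 1 - P_hb[do])
    return p

def prob(query, evidence):
    """P(query | evidence) by summing the joint over all consistent worlds."""
    names = ["fo", "bp", "lo", "do", "hb"]
    num = den = 0.0
    for values in itertools.product([True, False], repeat=5):
        world = dict(zip(names, values))
        if any(world[v] != val for v, val in evidence.items()):
            continue
        p = joint(**world)
        den += p
        if world[query]:
            num += p
    return num / den

# P(family-out | light-on, ¬hear-bark); with these assumed numbers
# the answer comes out close to 0.5.
print(round(prob("fo", {"lo": True, "hb": False}), 3))
```

Enumerating all worlds like this is exponential in the number of variables; the point of the sketch is only that ten local numbers suffice to answer any query about the five variables.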
Bayesian networks are directed acyclic graphs (DAGs) (like figure 1), where the nodes are random variables, and certain independence assumptions hold, the nature of which I discuss later. (I assume without loss of generality that the DAG is connected.) Often, as in figure 1, the random variables can be thought of as states of affairs, and the variables have two possible values, true and false. However, this need not be the case. We could, say, have a node denoting the intensity of an earthquake with values no-quake, trembler, rattler, major, and catastrophe. Indeed, the variable values do not even need to be discrete. For example, the value of the variable earthquake might be a Richter scale number. (However, the algorithms I discuss only work for discrete values, so I stick to this case.)
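A multivalued discrete node like the earthquake variable just described might carry a distribution such as the following. The numbers are invented for illustration; the only real constraint is that a distribution over the values must sum to 1:

```python
# A non-Boolean discrete node: the earthquake variable ranges over
# five values rather than true/false. Probabilities are made up.
earthquake = {
    "no-quake":    0.95,
    "trembler":    0.03,
    "rattler":     0.015,
    "major":       0.004,
    "catastrophe": 0.001,
}

# Any distribution over a node's values must sum to 1.
assert abs(sum(earthquake.values()) - 1.0) < 1e-9

# A proposition such as "earthquake = no-quake" picks out one value.
most_likely = max(earthquake, key=earthquake.get)
```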
In what follows, I use a sans serif font for the names of random variables, as in earthquake. I use the name of the variable in italics to denote the proposition that the variable takes on some particular value (but where we are not concerned with which one), for example, earthquake. For the special case of Boolean variables (with values true and false), I use the variable name in a sans serif font to denote the proposition that the variable has the value true (for example, family-out). I also show the arrows pointing downward so that “above” and “below” can be understood to indicate arrow direction.
The arcs in a Bayesian network specify the independence assumptions that must hold between the random variables. These independence assumptions determine what probability information is required to specify the probability distribution among the random variables in the network. The reader should note that in informally talking about the DAG, I said that the arcs denote causality, whereas in the Bayesian network, I am saying that they specify things about the probabilities. The next section resolves this conflict.

To specify the probability distribution of a Bayesian network, one must give the prior probabilities of all root nodes (nodes with no predecessors) and the conditional probabilities of all nonroot nodes, given all possible combinations of their direct predecessors.
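A back-of-the-envelope count (my own, not from the article) makes concrete how much the independence assumptions save for the five Boolean variables of figure 1:

```python
# Counting the probability numbers required with and without the
# network's independence assumptions, for figure 1's five Boolean nodes.

# A full joint distribution over n Boolean variables needs 2**n - 1 numbers.
n = 5
full_joint = 2 ** n - 1

# The network needs one prior per root node and, for each nonroot node,
# one conditional probability per combination of its parents' values.
num_parents = {
    "family-out": 0, "bowel-problem": 0,   # roots
    "light-on": 1, "hear-bark": 1, "dog-out": 2,
}
network = sum(2 ** k for k in num_parents.values())  # 1 + 1 + 2 + 2 + 4

print(full_joint, network)   # 31 versus 10
```

Ten numbers relating only neighbors, instead of thirty-one for the unconstrained joint; the gap widens exponentially as networks grow.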
Figure 1. A Causal Graph.
The nodes denote states of affairs, and the arcs can be interpreted as causal connections. (Nodes: family-out, bowel-problem, light-on, dog-out, hear-bark.)