
IPAM Summer School 2012
Tutorial on:
Deep Learning
Geoffrey Hinton
Canadian Institute for Advanced Research
&
Department of Computer Science
University of Toronto

Overview of the tutorial
• A brief history of deep learning.
• How to learn multi-layer generative models of unlabelled
data by learning one layer of features at a time.
– What is really going on when we stack RBMs to form a
deep belief net.
• How to use generative models to make discriminative
training methods work much better for classification and
regression.
• How to modify RBMs to deal with real-valued input.
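The layer-by-layer generative recipe in the overview can be sketched with its basic building block: a binary restricted Boltzmann machine (RBM) trained with one step of contrastive divergence (CD-1) on unlabelled data. A deep belief net is built by stacking such layers, each trained on the hidden activities of the layer below. The toy data, layer sizes, and learning rate here are illustrative assumptions, not taken from the slides.

```python
import numpy as np

# Hedged sketch of learning one layer of features at a time: a binary RBM
# trained with CD-1. Data, sizes, and learning rate are illustrative only.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid = 6, 3
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
a = np.zeros(n_vis)  # visible biases
b = np.zeros(n_hid)  # hidden biases

# Toy unlabelled data: two repeated binary patterns.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 20, dtype=float)

lr = 0.1
recon_errors = []
for epoch in range(200):
    v0 = data
    # Positive phase: hidden probabilities and samples given the data.
    p_h0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back down and up again (the "CD-1"
    # reconstruction).
    p_v1 = sigmoid(h0 @ W.T + a)
    p_h1 = sigmoid(p_v1 @ W + b)
    recon_errors.append(float(((v0 - p_v1) ** 2).mean()))
    # CD-1 update: visible-hidden correlations under the data minus those
    # under the one-step reconstruction.
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
    a += lr * (v0 - p_v1).mean(axis=0)
    b += lr * (p_h0 - p_h1).mean(axis=0)

print(f"reconstruction error: {recon_errors[0]:.3f} -> {recon_errors[-1]:.3f}")
```

To stack a second layer, one would treat `p_h0` (the hidden activities of this trained RBM) as the "data" for the next RBM.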

PART 1
Introduction to Deep Learning
&
Deep Belief Nets

A brief history of deep learning
• The backpropagation algorithm for learning
multiple layers of non-linear features was
invented several times in the 1970’s and 1980’s
(Werbos, Amari?, Parker, LeCun, Rumelhart et al.)
• Backprop clearly had great promise, but by the
1990’s people in machine learning had largely
given up on it because:
– It did not seem to be able to make good use
of multiple hidden layers (except in “time-
delay” and convolutional nets).
– It did not work well in recurrent networks.

How to learn many layers of features (~1985)
[Figure: a feed-forward net drawn bottom-up — input vector, hidden layers, outputs. Compare the outputs with the correct answer to get an error signal, then back-propagate that error signal to get the derivatives for learning.]
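The ~1985 recipe on this slide can be sketched as a tiny NumPy program: run the input through a hidden layer, compare the outputs with the correct answer to get an error signal, and back-propagate that signal to get derivatives for learning. The XOR task, layer sizes, and learning rate are illustrative choices, not from the slides.

```python
import numpy as np

# Minimal backpropagation sketch: forward pass, error signal at the output,
# back-propagated derivatives, gradient-descent update. Hyperparameters are
# illustrative assumptions.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros(1)

# XOR: a task that needs a hidden layer of non-linear features.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

lr = 0.5
losses = []
for step in range(20000):
    # Forward pass: input vector -> hidden layer -> outputs.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Compare outputs with the correct answer to get the error signal.
    err = out - y
    losses.append(float((err ** 2).mean()))

    # Back-propagate the error signal to get derivatives for learning.
    d_out = err * out * (1.0 - out)        # through the output sigmoid
    d_h = (d_out @ W2.T) * h * (1.0 - h)   # through the hidden sigmoid

    # Gradient-descent update on all weights and biases.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(f"squared error: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

With only the output layer trainable this would be a perceptron; it is the back-propagated `d_h` term that lets the hidden features themselves be learned.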