UFLDL Tutorial
ver 201406∗
1 Introduction
This tutorial will teach you the main ideas of Unsupervised Feature Learning and Deep Learning.
By working through it, you will also get to implement several feature learning/deep learning
algorithms, get to see them work for yourself, and learn how to apply/adapt these ideas to new
problems.
This tutorial assumes a basic knowledge of machine learning (specifically, familiarity with the
ideas of supervised learning, logistic regression, gradient descent). If you are not familiar with
these ideas, we suggest you go to this Machine Learning course and complete sections II, III, IV
(up to Logistic Regression) first.
Material contributed by: Andrew Ng, Jiquan Ngiam, Chuan Yu Foo, Yifan Mai, Caroline Suen
∗ This document was formatted based on the online UFLDL Tutorial as of 2014-06-11. You may get the latest version from GitHub. Please don't hesitate to leave a message on the web site if you have any comments or suggestions. Yes, the format of this document is not perfect, and you may adjust it yourself if you want. Don't forget to share your version, and notify me to update this document. Thanks, have fun!

2 Sparse Autoencoder
2.1 Neural Networks
Consider a supervised learning problem where we have access to labeled training examples $(x^{(i)}, y^{(i)})$. Neural networks give a way of defining a complex, non-linear form of hypotheses $h_{W,b}(x)$, with parameters $W, b$ that we can fit to our data.
To describe neural networks, we will begin by describing the simplest possible neural network, one which comprises a single “neuron.” We will use the following diagram to denote a single neuron:
This “neuron" is a computational unit that takes as input x
1
, x
2
, x
3
(and a +1 intercept term),
and outputs h
W,b
(x) = f(W
T
x) = f(
3
i=1
W
i
x
i
+ b), where f : ℜ 7→ ℜ is called the activation
function. In these notes, we will choose f(·) to be the function:
f(z) =
1
1 + exp(−z)
.
Thus, our single neuron corresponds exactly to the input-output mapping defined by logistic
regression. Although these notes will use the sigmoid function, it is worth noting that another
common choice for f is the hyperbolic tangent, or tanh, function:
$$f(z) = \tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}}.$$
Here are plots of the sigmoid and tanh functions (Figure 1):
The tanh(z) function is a rescaled version of the sigmoid, and its output range is [−1, 1]
instead of [0, 1].
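To make the relationship concrete, here is a minimal NumPy sketch (not part of the original tutorial) that evaluates both activation functions and numerically checks that $\tanh(z) = 2\,f(2z) - 1$, i.e., that tanh is the sigmoid rescaled and shifted:

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-4, 4, 9)
# tanh is the sigmoid rescaled to the output range (-1, 1):
# tanh(z) = 2 * sigmoid(2z) - 1
assert np.allclose(np.tanh(z), 2 * sigmoid(2 * z) - 1)
```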
Note that unlike some other venues (including the OpenClassroom videos, and parts of CS229), we are not using the convention here of $x_0 = 1$. Instead, the intercept term is handled separately by the parameter $b$.
Finally, one identity that'll be useful later: if $f(z) = 1/(1 + \exp(-z))$ is the sigmoid function, then its derivative is given by $f'(z) = f(z)(1 - f(z))$. (If $f$ is the tanh function, then its derivative is given by $f'(z) = 1 - (f(z))^2$.) You can derive this yourself using the definition of the sigmoid (or tanh) function.
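As a quick sanity check (a sketch, not from the original text), the closed-form derivatives can be compared against central finite differences:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-3, 3, 13)
eps = 1e-6

# f'(z) = f(z) * (1 - f(z)) for the sigmoid
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
assert np.allclose(numeric, sigmoid(z) * (1 - sigmoid(z)), atol=1e-8)

# f'(z) = 1 - f(z)^2 for tanh
numeric = (np.tanh(z + eps) - np.tanh(z - eps)) / (2 * eps)
assert np.allclose(numeric, 1 - np.tanh(z) ** 2, atol=1e-8)
```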
2.1.1 Neural Network model
A neural network is put together by hooking together many of our simple “neurons,” so that the output of a neuron can be the input of another. For example, here is a small neural network:
[Figure 1: Activation functions. Plots of the sigmoid $1/(1+\exp(-z))$ and $\tanh(z)$ as functions of $z$.]
In this figure, we have used circles to also denote the inputs to the network. The circles labeled “+1” are called bias units, and correspond to the intercept term. The leftmost layer of the network is called the input layer, and the rightmost layer the output layer (which, in this example, has only one node). The middle layer of nodes is called the hidden layer, because its values are not observed in the training set. We also say that our example neural network has 3 input units (not counting the bias unit), 3 hidden units, and 1 output unit.
We will let $n_l$ denote the number of layers in our network; thus $n_l = 3$ in our example. We label layer $l$ as $L_l$, so layer $L_1$ is the input layer, and layer $L_{n_l}$ the output layer. Our neural network has parameters $(W, b) = (W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)})$, where we write $W^{(l)}_{ij}$ to denote the parameter (or weight) associated with the connection between unit $j$ in layer $l$, and unit $i$ in layer $l + 1$. (Note the order of the indices.) Also, $b^{(l)}_i$ is the bias associated with unit $i$ in layer $l + 1$. Thus, in our example, we have $W^{(1)} \in \Re^{3 \times 3}$, and $W^{(2)} \in \Re^{1 \times 3}$. Note that bias units don't have inputs or connections going into them, since they always output the value +1. We also let $s_l$ denote the number of nodes in layer $l$ (not counting the bias unit).
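As an illustration (a sketch with assumed variable names, not code from the tutorial), the parameters of the example network could be held in NumPy arrays with exactly the shapes just described:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: s_1 = 3 input units, s_2 = 3 hidden units, s_3 = 1 output unit
s = [3, 3, 1]

# W^{(l)} has shape (s_{l+1}, s_l); b^{(l)} has shape (s_{l+1},)
W1 = rng.normal(scale=0.01, size=(s[1], s[0]))  # W^{(1)} in R^{3x3}
b1 = np.zeros(s[1])                             # b^{(1)} in R^3
W2 = rng.normal(scale=0.01, size=(s[2], s[1]))  # W^{(2)} in R^{1x3}
b2 = np.zeros(s[2])                             # b^{(2)} in R^1
```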
We will write $a^{(l)}_i$ to denote the activation (meaning output value) of unit $i$ in layer $l$. For $l = 1$, we also use $a^{(1)}_i = x_i$ to denote the $i$-th input. Given a fixed setting of the parameters $W, b$, our neural network defines a hypothesis $h_{W,b}(x)$ that outputs a real number. Specifically, the computation that this neural network represents is given by:
$$a^{(2)}_1 = f(W^{(1)}_{11} x_1 + W^{(1)}_{12} x_2 + W^{(1)}_{13} x_3 + b^{(1)}_1)$$
$$a^{(2)}_2 = f(W^{(1)}_{21} x_1 + W^{(1)}_{22} x_2 + W^{(1)}_{23} x_3 + b^{(1)}_2)$$
$$a^{(2)}_3 = f(W^{(1)}_{31} x_1 + W^{(1)}_{32} x_2 + W^{(1)}_{33} x_3 + b^{(1)}_3)$$
$$h_{W,b}(x) = a^{(3)}_1 = f(W^{(2)}_{11} a^{(2)}_1 + W^{(2)}_{12} a^{(2)}_2 + W^{(2)}_{13} a^{(2)}_3 + b^{(2)}_1)$$
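Written out in code, these four equations amount to the following (a sketch with assumed names, using small random parameters of the shapes described above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small random parameters with the shapes of the example network
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 3)), np.zeros(3)   # W^{(1)}, b^{(1)}
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # W^{(2)}, b^{(2)}

x = np.array([0.5, -1.0, 2.0])  # an arbitrary input

# a^{(2)}_i = f( sum_j W^{(1)}_{ij} x_j + b^{(1)}_i ), unit by unit
a2 = np.array([sigmoid(W1[i, 0]*x[0] + W1[i, 1]*x[1] + W1[i, 2]*x[2] + b1[i])
               for i in range(3)])

# h_{W,b}(x) = a^{(3)}_1 = f( sum_j W^{(2)}_{1j} a^{(2)}_j + b^{(2)}_1 )
h = sigmoid(W2[0, 0]*a2[0] + W2[0, 1]*a2[1] + W2[0, 2]*a2[2] + b2[0])

# The vectorized form introduced below gives the same result:
assert np.allclose(h, sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2))
```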
In the sequel, we also let $z^{(l)}_i$ denote the total weighted sum of inputs to unit $i$ in layer $l$, including the bias term (e.g., $z^{(2)}_i = \sum_{j=1}^{n} W^{(1)}_{ij} x_j + b^{(1)}_i$), so that $a^{(l)}_i = f(z^{(l)}_i)$.

Note that this easily lends itself to a more compact notation. Specifically, if we extend the activation function $f(\cdot)$ to apply to vectors in an element-wise fashion (i.e., $f([z_1, z_2, z_3]) = [f(z_1), f(z_2), f(z_3)]$), then we can write the equations above more compactly as:
$$z^{(2)} = W^{(1)} x + b^{(1)}$$
$$a^{(2)} = f(z^{(2)})$$
$$z^{(3)} = W^{(2)} a^{(2)} + b^{(2)}$$
$$h_{W,b}(x) = a^{(3)} = f(z^{(3)})$$
We call this step forward propagation. More generally, recalling that we also use $a^{(1)} = x$ to denote the values from the input layer, then given layer $l$'s activations $a^{(l)}$, we can compute layer $l + 1$'s activations $a^{(l+1)}$ as:
$$z^{(l+1)} = W^{(l)} a^{(l)} + b^{(l)}$$
$$a^{(l+1)} = f(z^{(l+1)})$$
By organizing our parameters in matrices and using matrix-vector operations, we can take
advantage of fast linear algebra routines to quickly perform calculations in our network.
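For instance, a minimal vectorized forward pass over an arbitrary list of layers might look like this (a sketch under assumed names, not the tutorial's own code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, Ws, bs):
    """Forward propagation: Ws[l-1], bs[l-1] hold W^{(l)}, b^{(l)}.

    Starting from a^{(1)} = x, repeatedly apply
    z^{(l+1)} = W^{(l)} a^{(l)} + b^{(l)} and a^{(l+1)} = f(z^{(l+1)}).
    """
    a = x
    for W, b in zip(Ws, bs):
        a = sigmoid(W @ a + b)  # one matrix-vector product per layer
    return a  # a^{(n_l)} = h_{W,b}(x)

# Example: the 3-3-1 network from before
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(3, 3)), rng.normal(size=(1, 3))]
bs = [np.zeros(3), np.zeros(1)]
print(forward(np.array([0.5, -1.0, 2.0]), Ws, bs))
```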
We have so far focused on one example neural network, but one can also build neural networks
with other architectures (meaning patterns of connectivity between neurons), including ones with
multiple hidden layers. The most common choice is an $n_l$-layered network where layer 1 is the input layer, layer $n_l$ is the output layer, and each layer $l$ is densely connected to layer $l + 1$. In this setting, to compute the output of the network, we can successively compute all the activations in layer $L_2$, then layer $L_3$, and so on, up to layer $L_{n_l}$, using the equations above that describe the forward propagation step. This is one example of a feedforward neural network, since the connectivity graph does not have any directed loops or cycles.
Neural networks can also have multiple output units. For example, here is a network with two hidden layers, $L_2$ and $L_3$, and two output units in layer $L_4$:

To train this network, we would need training examples $(x^{(i)}, y^{(i)})$ where $y^{(i)} \in \Re^2$. This sort of network is useful if there are multiple outputs that you're interested in predicting. (For example, in a medical diagnosis application, the vector $x$ might give the input features of a patient, and the different outputs $y_i$'s might indicate the presence or absence of different diseases.)
2.2 Backpropagation Algorithm
Suppose we have a fixed training set $\{(x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)})\}$ of $m$ training examples. We can train our neural network using batch gradient descent. In detail, for a single training example $(x, y)$, we define the cost function with respect to that single example to be:
$$J(W, b; x, y) = \frac{1}{2} \left\| h_{W,b}(x) - y \right\|^2. \tag{1}$$
This is a (one-half) squared-error cost function. Given a training set of $m$ examples, we then define the overall cost function to be:
$$J(W, b) = \left[ \frac{1}{m} \sum_{i=1}^{m} J(W, b; x^{(i)}, y^{(i)}) \right] + \frac{\lambda}{2} \sum_{l=1}^{n_l - 1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^2 \tag{2}$$
$$= \left[ \frac{1}{m} \sum_{i=1}^{m} \frac{1}{2} \left\| h_{W,b}(x^{(i)}) - y^{(i)} \right\|^2 \right] + \frac{\lambda}{2} \sum_{l=1}^{n_l - 1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^2 \tag{3}$$
The first term in the definition of $J(W, b)$ is an average sum-of-squares error term. The second term is a regularization term (also called a weight decay term) that tends to decrease the magnitude of the weights, and helps prevent overfitting.¹
¹ Usually weight decay is not applied to the bias terms $b^{(l)}_i$, as reflected in our definition for $J(W, b)$. Applying weight decay to the bias units usually makes only a small difference to the final network, however. If you've taken CS229 (Machine Learning) at Stanford or watched the course's videos on YouTube, you may also recognize this weight decay as essentially a variant of the Bayesian regularization method you saw there, where we placed a Gaussian prior on the parameters and did MAP (instead of maximum likelihood) estimation.
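To make equations (1)-(3) concrete, here is a minimal NumPy sketch (assumed helper names, not the tutorial's reference code) that evaluates the regularized cost $J(W, b)$, leaving the bias terms out of the weight decay sum as the footnote describes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, Ws, bs):
    for W, b in zip(Ws, bs):
        x = sigmoid(W @ x + b)
    return x

def cost(Ws, bs, X, Y, lam):
    """Overall cost J(W,b), equations (2)-(3); columns of X, Y are examples."""
    m = X.shape[1]
    # Average of the per-example (one-half) squared errors, equation (1)
    err = sum(0.5 * np.sum((forward(X[:, i], Ws, bs) - Y[:, i]) ** 2)
              for i in range(m)) / m
    # Weight decay over all W^{(l)}_{ji}; the bias terms b are not penalized
    decay = (lam / 2.0) * sum(np.sum(W ** 2) for W in Ws)
    return err + decay
```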