UFLDL Tutorial
Description: This tutorial will teach you the main ideas of Unsupervised Feature
Learning and Deep Learning. By working through it, you will also get to implement
several feature learning/deep learning algorithms, get to see them work for
yourself, and learn how to apply/adapt these ideas to new problems.
This tutorial assumes a basic knowledge of machine learning (specifically,
familiarity with the ideas of supervised learning, logistic regression, gradient
descent). If you are not familiar with these ideas, we suggest you go to this
Machine Learning course
(http://openclassroom.stanford.edu/MainFolder/CoursePage.php?course=MachineLearning)
and complete sections II, III, IV (up to Logistic Regression) first.
Sparse Autoencoder
Neural Networks
Backpropagation Algorithm
Gradient checking and advanced optimization
Autoencoders and Sparsity
Visualizing a Trained Autoencoder
Sparse Autoencoder Notation Summary
Exercise:Sparse Autoencoder
Vectorized implementation
Vectorization
Logistic Regression Vectorization Example
Neural Network Vectorization
Exercise:Vectorization
Preprocessing: PCA and Whitening
PCA
Whitening
Implementing PCA/Whitening
Exercise:PCA in 2D
Exercise:PCA and Whitening
Softmax Regression
Softmax Regression
Exercise:Softmax Regression
Self-Taught Learning and Unsupervised Feature Learning
Self-Taught Learning
Exercise:Self-Taught Learning
Building Deep Networks for Classification
From Self-Taught Learning to Deep Networks
Deep Networks: Overview
Stacked Autoencoders
Fine-tuning Stacked AEs
Exercise: Implement deep networks for digit classification
Linear Decoders with Autoencoders
Linear Decoders
Exercise:Learning color features with Sparse Autoencoders
Working with Large Images
Feature extraction using convolution
Pooling
Exercise:Convolution and Pooling
Note: The sections above this line are stable. The sections below are still under
construction, and may change without notice. Feel free to browse around however, and
feedback/suggestions are welcome.
Miscellaneous
MATLAB Modules
Style Guide
Useful Links
Miscellaneous Topics
Data Preprocessing
Deriving gradients using the backpropagation idea
Advanced Topics:
Sparse Coding
Sparse Coding
Sparse Coding: Autoencoder Interpretation
Exercise:Sparse Coding
ICA Style Models
Independent Component Analysis
Exercise:Independent Component Analysis
Others
Convolutional training
Restricted Boltzmann Machines
Deep Belief Networks
Denoising Autoencoders
K-means
Spatial pyramids / Multiscale
Slow Feature Analysis
Tiled Convolution Networks
Material contributed by: Andrew Ng, Jiquan Ngiam, Chuan Yu Foo, Yifan Mai, Caroline
Suen
Retrieved from "http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial"
This page was last modified on 20 October 2011, at 01:28.
Neural Networks
Consider a supervised learning problem where we have access to labeled training
examples $(x^{(i)}, y^{(i)})$. Neural networks give a way of defining a complex,
non-linear form of hypotheses $h_{W,b}(x)$, with parameters $W, b$ that we can fit
to our data.
To describe neural networks, we will begin by describing the simplest possible
neural network, one which comprises a single "neuron." We will use the following
diagram to denote a single neuron:

[Diagram: a single neuron with inputs $x_1, x_2, x_3$, a +1 intercept term, and output $h_{W,b}(x)$]
This "neuron" is a computational unit that takes as input x
1
,x
2
,x
3
(and a +1
intercept term), and outputs , where
is called the activation function. In these notes, we will choose
to be the sigmoid function:
Thus, our single neuron corresponds exactly to the input-output mapping defined by
logistic regression.
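
To make the single-neuron mapping concrete, here is a minimal Python/NumPy sketch
(our own illustration, not code from the tutorial; the tutorial's exercises use
MATLAB, and the function names below are ours):

```python
import numpy as np

def sigmoid(z):
    # The activation function f(z) = 1 / (1 + exp(-z)).
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(W, b, x):
    # A single neuron: h_{W,b}(x) = f(sum_i W_i * x_i + b).
    return sigmoid(np.dot(W, x) + b)

# Three inputs x1, x2, x3; the +1 intercept is carried separately by b.
x = np.array([0.5, -1.0, 2.0])
W = np.array([0.1, 0.4, -0.3])
b = 0.2
print(neuron_output(W, b, x))  # a value in (0, 1), exactly as in logistic regression
```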
Although these notes will use the sigmoid function, it is worth noting that another
common choice for $f$ is the hyperbolic tangent, or tanh, function:

$$f(z) = \tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}$$
Here are plots of the sigmoid and tanh functions:

[Plot: the sigmoid function (range $[0, 1]$) and the tanh function (range $[-1, 1]$)]
The $\tanh(z)$ function is a rescaled version of the sigmoid, and its output range
is $[-1, 1]$ instead of $[0, 1]$.
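
As a small added step (this identity is not spelled out in the notes, but follows
directly from the two definitions), the rescaling is exact:

$$\tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}} = \frac{2}{1 + e^{-2z}} - 1 = 2f(2z) - 1,$$

where $f$ is the sigmoid function above, so tanh is just the sigmoid's output
shifted and stretched from $[0, 1]$ to $[-1, 1]$.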
Note that unlike some other venues (including the OpenClassroom videos, and parts of
CS229), we are not using the convention here of $x_0 = 1$. Instead, the intercept
term is handled separately by the parameter $b$.
Finally, one identity that'll be useful later: if $f(z) = 1/(1 + \exp(-z))$ is the
sigmoid function, then its derivative is given by $f'(z) = f(z)(1 - f(z))$. (If $f$
is the tanh function, then its derivative is given by $f'(z) = 1 - (f(z))^2$.) You
can derive this yourself using the definition of the sigmoid (or tanh) function.
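
As a quick sanity check of these identities (our own addition, anticipating the
gradient-checking section listed above), the analytic derivatives can be compared
against centered finite differences in Python/NumPy:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # Analytic form: f'(z) = f(z) * (1 - f(z))
    s = sigmoid(z)
    return s * (1.0 - s)

def tanh_prime(z):
    # Analytic form: f'(z) = 1 - (f(z))^2
    return 1.0 - np.tanh(z) ** 2

def numerical_derivative(f, z, eps=1e-5):
    # Centered difference: (f(z + eps) - f(z - eps)) / (2 * eps)
    return (f(z + eps) - f(z - eps)) / (2.0 * eps)

z = np.linspace(-5.0, 5.0, 11)
assert np.allclose(sigmoid_prime(z), numerical_derivative(sigmoid, z))
assert np.allclose(tanh_prime(z), numerical_derivative(np.tanh, z))
print("analytic derivatives match finite differences")
```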
Neural Network model

A neural network is put together by hooking together many of our simple "neurons,"
so that the output of a neuron can be the input of another. For example, here is a
small neural network: