Grokking Deep Learning (latest English original edition) [Andrew_W._Trask], watermark-free, bookmarked, high-resolution, full-color, complete-text PDF.
This PDF is watermark-free, bookmarked, high-resolution, full-color, and complete, covering all 16 chapters (including the closing chapter "where to go from here: a brief guide"). Andrew Trask is a scientist at DeepMind and the leader of OpenMined. He is the author of Grokking Deep Learning.

grokking
Deep Learning
Andrew W. Trask
MANNING
Shelter Island

contents
preface xv
about this book xvii
1 Introducing deep learning: why you should learn it 3
Welcome to Grokking Deep Learning 3
Why you should learn deep learning 4
Will this be difficult to learn? 5
Why you should read this book 5
What you need to get started 7
You’ll probably need some Python knowledge 8
Summary 8
2 Fundamental concepts: how do machines learn? 9
What is deep learning? 10
What is machine learning? 11
Supervised machine learning 12
Unsupervised machine learning 13
Parametric vs. nonparametric learning 14
Supervised parametric learning 15
Unsupervised parametric learning 17
Nonparametric learning 18
Summary 19

3 Introduction to neural prediction: forward propagation 21
Step 1: Predict 22
A simple neural network making a prediction 24
What is a neural network? 25
What does this neural network do? 26
Making a prediction with multiple inputs 28
Multiple inputs: What does this neural network do? 30
Multiple inputs: Complete runnable code 35
Making a prediction with multiple outputs 36
Predicting with multiple inputs and outputs 38
Multiple inputs and outputs: How does it work? 40
Predicting on predictions 42
A quick primer on NumPy 44
Summary 46
4 Introduction to neural learning: gradient descent 47
Predict, compare, and learn 48
Compare 48
Learn 49
Compare: Does your network make good predictions? 50
Why measure error? 51
What’s the simplest form of neural learning? 52
Hot and cold learning 54
Characteristics of hot and cold learning 55
Calculating both direction and amount from error 56
One iteration of gradient descent 58
Learning is just reducing error 60
Let’s watch several steps of learning 62
Why does this work? What is weight_delta, really? 64
Tunnel vision on one concept 66
A box with rods poking out of it 67
Derivatives: Take two 68
What you really need to know 69
What you don’t really need to know 69
How to use a derivative to learn 70
Look familiar? 71

Breaking gradient descent 72
Visualizing the overcorrections 73
Divergence 74
Introducing alpha 75
Alpha in code 76
Memorizing 77
5 Learning multiple weights at a time:
generalizing gradient descent 79
Gradient descent learning with multiple inputs 80
Gradient descent with multiple inputs explained 82
Let’s watch several steps of learning 86
Freezing one weight: What does it do? 88
Gradient descent learning with multiple outputs 90
Gradient descent with multiple inputs and outputs 92
What do these weights learn? 94
Visualizing weight values 96
Visualizing dot products (weighted sums) 97
Summary 98
6 Building your first deep neural network:
introduction to backpropagation 99
The streetlight problem 100
Preparing the data 102
Matrices and the matrix relationship 103
Creating a matrix or two in Python 106
Building a neural network 107
Learning the whole dataset 108
Full, batch, and stochastic gradient descent 109
Neural networks learn correlation 110
Up and down pressure 111
Edge case: Overfitting 113
Edge case: Conflicting pressure 114
Learning indirect correlation 116
Creating correlation 117
Stacking neural networks: A review 118
Backpropagation: Long-distance error attribution 119