What You’ll Learn
The chapters of this book are as follows:
Chapter 1 — Mathematical Foundations: This chapter discusses in detail the
relevant concepts from linear algebra, probability, calculus, and optimization,
along with the formulation of machine-learning problems, to lay the mathematical
foundation required for deep learning. Each concept is explained with a focus
on its use in the fields of machine learning and deep learning.
Chapter 2 — Introduction to Deep-Learning Concepts and TensorFlow: This
chapter introduces the world of deep learning and discusses its evolution
over the years. The key building blocks of neural networks are discussed in
detail, along with several learning methods, such as the perceptron-learning
rule and backpropagation. Also, this chapter introduces the paradigm of
TensorFlow coding so that readers become accustomed to the basic syntax before
moving on to more involved implementations in TensorFlow.
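As a small taste of that syntax, here is a minimal sketch of basic tensor
operations, written against TensorFlow 2.x's eager API (the surface syntax of
versions built around explicit graphs and sessions differs):

import tensorflow as tf

# Two constant tensors.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0], [1.0]])

# Matrix multiplication returns a new tensor.
c = tf.matmul(a, b)
print(c.numpy())  # [[3.] [7.]]

# A trainable variable, the building block of model parameters.
w = tf.Variable(tf.zeros([2, 1]))
w.assign_add(b)  # in-place update: w is now [[1.] [1.]]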
Chapter 3 — Convolutional Neural Networks: This chapter deals with
convolutional neural networks and their use in image processing, a
computer-vision domain in which they have brought huge performance boosts to
object recognition and detection, classification, localization, and
segmentation. The chapter starts by illustrating the convolution operation in
detail and then moves on to the working principles of a convolutional neural
network. Much emphasis is given to the building blocks of a convolutional
neural network to give the reader the tools needed to experiment with and
extend networks in interesting ways. Further, backpropagation through
convolutional and pooling layers is discussed in detail so that the reader has
a holistic view of the training process of convolutional networks. Also covered
in this chapter are the properties of equivariance and translation invariance,
which are central to the success of convolutional neural networks.
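To make the convolution operation concrete ahead of Chapter 3's full treatment,
here is a minimal NumPy sketch of a 2D "valid" convolution; the kernel flip is
what distinguishes true convolution from cross-correlation:

import numpy as np

def conv2d_valid(image, kernel):
    # Flip the kernel (true convolution), then slide it over every
    # position where it fully overlaps the image.
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, -1.0]])  # a toy horizontal edge detector
print(conv2d_valid(image, kernel))  # 4x3 output, all ones for this ramp image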
Chapter 4 — Natural Language Processing Using Recurrent Neural Networks: This
chapter deals with natural language processing using deep learning. It starts
with different vector-space models for text processing and word-to-vector
embedding models, such as the continuous-bag-of-words method and skip-grams,
and then moves on to more advanced topics involving recurrent neural networks
(RNNs), LSTMs, bidirectional RNNs, and GRUs. Language modeling is covered in
detail in this chapter to help the reader apply these networks to real-world
problems. Also, the mechanism of backpropagation through RNNs and LSTMs, as
well as the vanishing-gradient problem, is discussed in detail.
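As a preview of the skip-gram model mentioned above, here is a minimal sketch
of how its (center word, context word) training pairs are generated from a toy
sentence (the sentence and window size are illustrative, not from the book):

# Generate (center, context) skip-gram training pairs with a window of 1.
sentence = "the cat sat on the mat".split()
window = 1

pairs = []
for i, center in enumerate(sentence):
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            pairs.append((center, sentence[j]))

print(pairs[:4])
# [('the', 'cat'), ('cat', 'the'), ('cat', 'sat'), ('sat', 'cat')]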
Chapter 5 — Unsupervised Learning with Restricted Boltzmann Machines and
Auto-encoders: In this chapter, you will learn about unsupervised methods
in deep learning that use restricted Boltzmann machines (RBMs) and auto-
encoders. Also, the chapter will touch upon Bayesian inference and Markov
chain Monte Carlo (MCMC) methods, such as the Metropolis algorithm and
Gibbs sampling, since the RBM training process requires some knowledge of
sampling. Further, this chapter will discuss contrastive divergence, a customized
version of Gibbs sampling that allows for the practical training of RBMs. We
will also discuss how RBMs can be used for collaborative filtering in
recommender systems and for the unsupervised pre-training of deep belief
networks (DBNs).
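Since Gibbs sampling underpins RBM training, here is a minimal sketch of the
idea on a toy bivariate Gaussian: each step resamples one variable from its
conditional distribution given the other (the correlation value is an assumed
illustration, not from the book):

import numpy as np

rng = np.random.default_rng(0)
rho = 0.8  # assumed correlation of the toy 2D standard Gaussian
x, y = 0.0, 0.0
samples = []

# Alternately draw each coordinate from its conditional given the other;
# for a standard bivariate Gaussian, x | y ~ N(rho * y, 1 - rho**2).
for _ in range(10000):
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))
    samples.append((x, y))

samples = np.array(samples)
print(np.corrcoef(samples[:, 0], samples[:, 1])[0, 1])  # roughly 0.8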