Probabilistic Deep Learning: Bayes by Backprop
Having understood all the fundamentals, we can now proceed and
apply them to deep learning. If you cannot follow my steps here,
please go back to my previous posts, or comment on this one and I’ll
answer you. We’ll have some examples, questions and neat graphics
on the way, so it won’t be too arid.
We will not explain what deep learning or neural networks are, but
if you don’t feel you have a solid understanding, read Shridhar’s
post as an introduction or attend Andrew Ng’s coursera course for
more details.
The very base of probabilistic deep learning is understanding a neural
network as a conditional model p that is parameterised by the
parameters or weights θ of the network and outputs y when some
input x is given. Mathematically, we can write this as follows:

p(y | x, θ)
Example:
We feed an image of a red Tesla Model S into the network and it puts
out “red car”.
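To make the idea of a network as a conditional model p(y | x, θ) concrete, here is a minimal sketch. It assumes a toy “network” consisting of a single linear layer followed by a softmax; the weight matrix `theta`, the input `x`, and the class count are made up for illustration and are not part of the original post.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: subtract the max before exponentiating.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical toy "network": one linear layer with weights theta,
# mapping 4 input features to 3 output classes.
rng = np.random.default_rng(0)
theta = rng.normal(size=(3, 4))

x = np.array([1.0, 0.5, -0.2, 0.3])  # some input x (e.g. image features)
p_y_given_x = softmax(theta @ x)     # p(y | x, theta): distribution over classes

print(p_y_given_x)        # a probability vector over the 3 classes
print(p_y_given_x.sum())  # the probabilities sum to 1
```

Every entry of the output is a probability for one class, so the network’s output really is a distribution over y conditioned on the input x and the weights θ, which is exactly the reading the red-Tesla example above relies on.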