MIA Talks

NN I. Reverse-mode differentiation and autograd

November 2, 2015
Harvard Intelligent Probabilistic Systems, Harvard University

Much of machine learning boils down to constructing a loss function and optimizing it, often using gradients. Reverse-mode differentiation (sometimes called "backpropagation") is a general and computationally efficient way to compute these gradients. I'll explain reverse-mode differentiation and show how we've implemented it for Python/Numpy in our automatic differentiation package autograd. I'll finish with some demos showing how easy it is to implement several machine learning models once you have automatic differentiation in your toolbox.
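
As a rough illustration of the workflow the abstract describes, here is a minimal sketch of getting a gradient from autograd's grad function and using it for optimization; the logistic-regression loss, toy data, and step size below are invented for illustration, not taken from the talk.

    # Minimal sketch: differentiate a loss with autograd and run gradient descent.
    import autograd.numpy as np   # autograd's thin wrapper around NumPy
    from autograd import grad

    def logistic_loss(weights, inputs, targets):
        # Negative log-likelihood of a logistic-regression model (illustrative).
        preds = 1.0 / (1.0 + np.exp(-np.dot(inputs, weights)))
        return -np.sum(targets * np.log(preds) + (1 - targets) * np.log(1 - preds))

    # grad() returns a function that computes d(loss)/d(weights)
    # via reverse-mode differentiation of the Python/Numpy code above.
    loss_grad = grad(logistic_loss)

    # Toy data (purely illustrative).
    inputs = np.array([[0.5, -1.2], [1.0, 0.3], [-0.7, 2.0]])
    targets = np.array([1.0, 0.0, 1.0])
    weights = np.zeros(2)

    # A few plain gradient-descent steps on the loss.
    for _ in range(100):
        weights -= 0.1 * loss_grad(weights, inputs, targets)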