Deep Learning Group, Microsoft Research

"You can't train GPT-3 on a single GPU, much less tune its hyperparameters (HPs). I'm here to tell you this is not true: you can tune its HPs on a single GPU — even if you can't train it that way!
 
In the first hour of this talk, I'll describe how, in the so-called “maximal update parametrization” (abbreviated µP), narrow and wide neural networks share the same set of optimal hyperparameters. This lets us tune any large model by just tuning a small version of it — we call this µTransfer. In particular, this allowed us to tune the 6.7-billion-parameter version of GPT-3 using only 7% of its pretraining compute budget and, with some asterisks, obtain performance comparable to the original GPT-3 model with twice the parameter count.
 
In the second hour of this talk, I'll discuss the theoretical reason µP has this special property, as well as its connection to the study of infinite-width neural networks and, more generally, the theory of Tensor Programs.
 
The first hour targets general practitioners and empirical researchers in machine learning, while the second hour targets those who are more theoretically curious."
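
To make the tune-small-then-transfer recipe from the first hour concrete, here is a minimal sketch of the µTransfer workflow, assuming PyTorch plus the `mup` package (github.com/microsoft/mup). The toy MLP, random data, widths, and learning-rate grid below are hypothetical placeholders; only the µP plumbing (MuReadout, set_base_shapes, MuAdam) is intended to follow that package's documented usage.

```python
# Sketch of the µTransfer workflow: tune hyperparameters on a narrow proxy model,
# then reuse them unchanged on the wide target model. Model, data, and grid are
# placeholders; MuReadout/set_base_shapes/MuAdam follow the mup package's usage.
import torch
import torch.nn as nn
from mup import MuReadout, set_base_shapes, MuAdam


class MLP(nn.Module):
    """A toy MLP whose hidden width is the dimension being scaled up."""

    def __init__(self, width: int, d_in: int = 32, d_out: int = 10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, width), nn.ReLU(),
                                  nn.Linear(width, width), nn.ReLU())
        # Under µP the output ("readout") layer is replaced by MuReadout so its
        # multiplier can be scaled correctly with width.
        self.readout = MuReadout(width, d_out)

    def forward(self, x):
        return self.readout(self.body(x))


def make_model(width: int) -> MLP:
    model = MLP(width)
    # Record how each weight dimension scales with width by comparing a base
    # model against a "delta" model of a different width.
    set_base_shapes(model, MLP(width=64), delta=MLP(width=128))
    return model


def train_loss(model: MLP, lr: float, steps: int = 200) -> float:
    # MuAdam applies the µP per-layer learning-rate scaling.
    opt = MuAdam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    torch.manual_seed(0)
    x, y = torch.randn(512, 32), torch.randint(0, 10, (512,))  # toy data
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()


if __name__ == "__main__":
    # 1) Sweep the learning rate on a narrow (cheap) proxy model.
    candidate_lrs = [3e-4, 1e-3, 3e-3, 1e-2]
    best_lr = min(candidate_lrs,
                  key=lambda lr: train_loss(make_model(width=256), lr))
    # 2) Train the wide (expensive) model once with the winning learning rate.
    #    Under µP, the optimum transfers across width, so no re-tuning is done.
    final_loss = train_loss(make_model(width=4096), best_lr)
    print(f"best proxy lr={best_lr}, wide-model loss={final_loss:.3f}")
```

The point of the sketch is the last few lines: the sweep runs only on the narrow proxy, and the wide model is trained a single time with the transferred hyperparameter.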
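As rough background for the second hour, the sketch below states the abc-parametrization studied in the Tensor Programs line of work and the defining property of µP as I understand it. The notation ($n$, $a_\ell$, $b_\ell$, $c$, $\eta$) is only approximate, and the exact µP exponents are left to the papers rather than asserted here.

```latex
% A sketch (not verbatim from the papers) of the abc-parametrization setup;
% the symbols n, a_l, b_l, c, eta follow the Tensor Programs notation only
% approximately, and the exact muP exponents are omitted.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For a network of width $n$, an \emph{abc-parametrization} writes each layer's
weights as
\[
  W^{\ell} = n^{-a_{\ell}}\, w^{\ell},
  \qquad
  w^{\ell}_{ij} \sim \mathcal{N}\!\left(0,\, n^{-2 b_{\ell}}\right),
\]
with the trainable $w^{\ell}$ updated by SGD at learning rate $\eta\, n^{-c}$.
The maximal update parametrization ($\mu$P) is the particular choice of
exponents for which, roughly speaking, every layer's activations still change
by $\Theta(1)$ per optimization step as $n \to \infty$, so every layer keeps
learning features in the infinite-width limit. This width-independence of the
training dynamics is the property that lets optimal hyperparameters transfer
from narrow to wide models.
\end{document}
```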
