Abstract: Deep learning uses the language and tools of statistics and classical machine learning, including empirical and population losses and optimizing a hypothesis on a training set. But it uses these tools in regimes where they should not be applicable: the optimization task is non-convex, models are often large enough to overfit, and the training and deployment tasks can differ radically. In this talk I will survey the relation between deep learning and statistics. In particular, we will discuss recent work supporting the emerging intuition that deep learning is closer in some aspects to human learning than to classical statistics. Rather than estimating quantities from samples, deep neural nets develop broadly applicable representations and skills through their training.
The talk will not assume background knowledge in artificial intelligence or deep learning.
Bio: Boaz Barak is the Gordon McKay Professor of Computer Science at Harvard University’s John A. Paulson School of Engineering and Applied Sciences. His research interests span all areas of theoretical computer science, in particular cryptography, computational complexity, and the foundations of machine learning. Previously, he was a principal researcher at Microsoft Research New England, and before that an associate professor (with tenure) in Princeton University’s computer science department. Barak has won the ACM Doctoral Dissertation Award and the Packard and Sloan fellowships, and was selected for Foreign Policy magazine’s list of 100 leading global thinkers for 2014. He was also chosen as a Simons Investigator and a Fellow of the ACM. Barak serves on the scientific advisory boards of Quanta Magazine and the Simons Institute for the Theory of Computing, and is a board member of AddisCoder, a non-profit organization that teaches algorithms and coding to high-school students in Ethiopia and Jamaica. With Sanjeev Arora, he co-authored the textbook “Computational Complexity: A Modern Approach”.