Abstract: How can models with more parameters than training examples generalize well, and generalize even better as we add more parameters, even without explicit complexity control? In recent years, it has become increasingly clear that much, or perhaps all, of the complexity control and generalization ability of deep learning comes from the optimization bias, or implicit bias, of the training procedures. In this talk, I will survey our work from the past several years highlighting the role of optimization geometry in determining this implicit bias, using it to understand deep learning, and showing how this view informs the study of further deep learning phenomena.
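As a minimal illustration of the kind of implicit bias the abstract refers to (a sketch, not material from the talk): for an underdetermined least-squares problem with more parameters than examples, gradient descent initialized at zero converges to the minimum-Euclidean-norm interpolating solution, even though no explicit regularization is used. The dimensions, step size, and iteration count below are arbitrary assumptions chosen just to make the check run.

```python
# Sketch (assumed example, not from the talk): gradient descent on an
# underdetermined least-squares problem with more parameters (d) than
# examples (n). Starting from zero, GD interpolates the data and lands on
# the minimum-L2-norm solution -- implicit bias from the optimizer itself.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 200                        # fewer examples than parameters
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                       # zero initialization keeps w in the row space of X
lr = 1e-2
for _ in range(50_000):
    grad = X.T @ (X @ w - y) / n      # gradient of (1/2n) * ||Xw - y||^2
    w -= lr * grad

w_min_norm = np.linalg.pinv(X) @ y    # minimum-norm interpolating solution

print("training residual:", np.linalg.norm(X @ w - y))                   # ~0: fits the data
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))  # ~0: same solution
```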
Bio: Nati (Nathan) Srebro is a professor at the Toyota Technological Institute at Chicago, with cross-appointments in the University of Chicago’s Department of Computer Science and Committee on Computational and Applied Mathematics. He obtained his PhD from the Massachusetts Institute of Technology in 2004, and was previously a postdoctoral fellow at the University of Toronto, a visiting scientist at IBM, and an associate professor at the Technion, and has held visiting positions at the Weizmann Institute and at École Polytechnique Fédérale de Lausanne.
Dr. Srebro’s research encompasses methodological, statistical, and computational aspects of machine learning, as well as related problems in optimization. Some of his significant contributions include work on learning “wider” Markov networks, introducing the use of the nuclear norm in machine learning, introducing the “equalized odds” fairness notion for non-discrimination, work on fast optimization techniques for machine learning, and work on the relationship between learning and optimization.
Website: https://nati.ttic.edu/