Acceleration in Optimization, Sampling, and Machine Learning
SDSC, The Auditorium, 9836 Hopkins Dr, La Jolla, San Diego, CA, United States

Optimization, sampling, and machine learning are essential components of data science. In this talk, I will cover my work on accelerated methods in these fields and highlight some connections between them.
In optimization, I will present a view of convex optimization as a two-player zero-sum game, a modular approach to designing and analyzing convex optimization algorithms by pitting a pair of no-regret learning strategies against each other. This approach not only recovers several existing algorithms but also gives rise to new ones. I will also discuss Heavy Ball, a popular momentum method in deep learning, in the setting of non-convex optimization. Despite its success in practice, Heavy Ball currently lacks theoretical evidence of acceleration in non-convex optimization. To bridge this gap, I will present some non-convex problems on which Heavy Ball provably achieves acceleration.
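For context on the momentum discussion, here is a minimal sketch of Polyak's Heavy Ball update, in which the gradient step is augmented with a multiple of the previous displacement. The function name, step size, and momentum value below are illustrative assumptions, not parameters from the talk.

```python
import numpy as np

def heavy_ball(grad, x0, step_size=0.01, momentum=0.9, n_iters=1000):
    """Heavy Ball iteration:
    x_{k+1} = x_k - step_size * grad(x_k) + momentum * (x_k - x_{k-1})."""
    x_prev = np.array(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(n_iters):
        x_next = x - step_size * grad(x) + momentum * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Example: minimize the quadratic f(x) = 0.5 * ||x||^2, whose gradient is x.
x_min = heavy_ball(grad=lambda x: x, x0=np.ones(5))
```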
In sampling, I will describe how to accelerate a classical sampling method, Hamiltonian Monte Carlo, by setting its integration time appropriately, building on a connection between sampling and optimization. In machine learning, I will talk about Gradient Descent with pseudo-labels for fast test-time adaptation in the context of tackling distribution shifts.
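For concreteness, here is a minimal sketch of one Hamiltonian Monte Carlo transition using a leapfrog integrator; the total integration time is step_size * n_leapfrog, the quantity whose tuning the talk relates to acceleration. The interface and the Gaussian example are assumptions for illustration, not the talk's specific construction.

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size, n_leapfrog, rng):
    """One HMC transition: sample a momentum, simulate Hamiltonian dynamics
    with a leapfrog integrator for time step_size * n_leapfrog, then apply
    a Metropolis accept/reject step to correct the discretization error."""
    p = rng.standard_normal(x.shape)
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_prob(x_new)   # initial half step
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * grad_log_prob(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(x_new)   # final half step
    # Hamiltonian = -log_prob(x) + 0.5 * ||p||^2; standard MH acceptance rule.
    log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new) - (log_prob(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_accept else x

# Example: sampling from a standard Gaussian target.
rng = np.random.default_rng(0)
x = np.zeros(3)
for _ in range(100):
    x = hmc_step(x, lambda z: -0.5 * z @ z, lambda z: -z,
                 step_size=0.1, n_leapfrog=10, rng=rng)
```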
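And here is a minimal sketch of test-time adaptation with pseudo-labels, assuming a pretrained PyTorch classifier: the model's own argmax predictions on an unlabeled test batch serve as targets for a few gradient descent steps. The function name and hyperparameters are illustrative, not the talk's specific procedure.

```python
import torch
import torch.nn.functional as F

def pseudo_label_adapt(model, test_batch, lr=1e-3, n_steps=1):
    """Adapt a pretrained classifier to a shifted test distribution by running
    gradient descent on cross-entropy against its own pseudo-labels."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(n_steps):
        logits = model(test_batch)
        pseudo_labels = logits.argmax(dim=1).detach()  # hard pseudo-labels
        loss = F.cross_entropy(logits, pseudo_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```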