Optimization, sampling, and machine learning are essential components of data science. In this talk, I will cover my work on accelerated methods in these fields and highlight some connections between them.
In optimization, I will present a view of convex optimization as a two-player zero-sum game: a modular approach to designing and analyzing convex optimization algorithms by pitting a pair of no-regret learning strategies against each other. This approach not only recovers several existing algorithms but also gives rise to new ones. I will also discuss Heavy Ball, a popular momentum method in deep learning, in the non-convex setting. Despite its success in practice, Heavy Ball currently lacks theoretical evidence of acceleration in non-convex optimization. To bridge this gap, I will present non-convex problems on which Heavy Ball enjoys provable acceleration guarantees.
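To fix notation, here is a minimal sketch of the Heavy Ball update itself, run on a toy quadratic; the objective, step size, and momentum value are illustrative choices of mine, not the settings analyzed in the talk:

```python
import numpy as np

# Heavy Ball (Polyak momentum) update:
#   x_{t+1} = x_t - eta * grad f(x_t) + beta * (x_t - x_{t-1})
# demonstrated on f(x) = 0.5 * x^T A x for an ill-conditioned diagonal A.

def heavy_ball(grad, x0, eta=0.1, beta=0.9, steps=300):
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(steps):
        x_next = x - eta * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

A = np.diag([1.0, 10.0])        # toy ill-conditioned quadratic
grad = lambda x: A @ x          # gradient of 0.5 * x^T A x
print(heavy_ball(grad, np.array([5.0, 5.0])))   # approaches the minimizer at 0
```

The extra beta * (x_t - x_{t-1}) term is what distinguishes the method from plain gradient descent, and it is this term whose effect in the non-convex setting the talk addresses.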
In sampling, I will describe how to accelerate a classical sampling method, Hamiltonian Monte Carlo, by setting its integration time appropriately; this result builds on a connection between sampling and optimization (a sketch of the sampler appears below). In machine learning, I will discuss gradient descent with pseudo-labels for fast test-time adaptation, in the context of tackling distribution shift (also sketched below).
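For the sampling part, here is a minimal sketch of Hamiltonian Monte Carlo with a leapfrog integrator. The integration time is the product of the leapfrog step size and the number of steps, and is the knob being tuned; the standard-Gaussian target and all parameter values below are illustrative assumptions, not the talk's actual setup:

```python
import numpy as np

def leapfrog(x, p, grad_U, step_size, num_steps):
    # Simulate Hamiltonian dynamics for integration time T = step_size * num_steps.
    x, p = x.copy(), p.copy()
    p -= 0.5 * step_size * grad_U(x)        # initial half step for momentum
    for _ in range(num_steps - 1):
        x += step_size * p                  # full step for position
        p -= step_size * grad_U(x)          # full step for momentum
    x += step_size * p
    p -= 0.5 * step_size * grad_U(x)        # final half step for momentum
    return x, p

def hmc(U, grad_U, x0, step_size=0.1, num_steps=10, iters=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    x, samples = x0.copy(), []
    for _ in range(iters):
        p = rng.standard_normal(x.shape)    # resample momentum each iteration
        x_new, p_new = leapfrog(x, p, grad_U, step_size, num_steps)
        # Metropolis correction for the discretization error of leapfrog
        dH = (U(x_new) + 0.5 * p_new @ p_new) - (U(x) + 0.5 * p @ p)
        if np.log(rng.uniform()) < -dH:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

U = lambda x: 0.5 * x @ x                   # standard Gaussian target
grad_U = lambda x: x
samples = hmc(U, grad_U, np.zeros(2))
print(samples.mean(axis=0), samples.var(axis=0))    # roughly 0 and 1
```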
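And for the test-time adaptation part, a minimal sketch (in PyTorch) of gradient descent on pseudo-labels: at test time, the model labels an unlabeled batch with its own argmax predictions and takes gradient steps on the resulting loss. The optimizer, learning rate, and number of steps here are placeholder choices, not the specific procedure from the talk:

```python
import torch
import torch.nn.functional as F

def pseudo_label_adapt(model, x_test, lr=1e-3, steps=1):
    # Adapt model parameters on an unlabeled test batch x_test.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        logits = model(x_test)
        pseudo = logits.argmax(dim=1).detach()   # model's own hard labels
        loss = F.cross_entropy(logits, pseudo)   # fit the pseudo-labels
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Example with a toy linear classifier and a random "test" batch:
pseudo_label_adapt(torch.nn.Linear(5, 3), torch.randn(8, 5))
```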
Bio: Jun-Kun Wang is a postdoctoral researcher in the Department of Computer Science at Yale University, working with Dr. Andre Wibisono. He received his Ph.D. in Computer Science from the Georgia Institute of Technology in 2021, advised by Dr. Jacob Abernethy. He earned an MS in Communication Engineering and a BS in Electrical Engineering from National Taiwan University. His research interests are in the theoretical and algorithmic foundations of optimization, sampling, and machine learning.