

Overparameterization without Overfitting


Title: Samet Oymak: Overparameterization without Overfitting: From Compressed Sensing to Deep Learning
Agenda: April 1, 2 – 3:30 pm, Computer Science and Engineering Building, Room 1202
Abstract: Modern machine learning models such as deep networks typically contain more parameters than training data. While this overparameterization results in ill-posed problems, these models often perform well in practice and are successfully deployed in data-driven applications. In this talk, I will present theoretical results demystifying this success by focusing on two classes of problems.
In the first class of problems, we avoid overfitting by using a model prior, such as sparsity, to narrow down the algorithm's search space via a proper regularization. For these problems, we introduce a general framework that quantifies the benefit of prior knowledge in terms of the problem geometry. This leads to a remarkably accurate characterization of algorithmic behavior, including the estimation error, rate of convergence, and sample complexity.
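As a generic illustration of this idea (not taken from the talk), the sketch below recovers a sparse signal from fewer measurements than unknowns by combining least squares with an l1 penalty, solved via proximal gradient descent (ISTA). The problem sizes and regularization strength are assumed values chosen only for demonstration.

```python
import numpy as np

# Illustrative sketch: sparsity prior via l1 regularization (lasso),
# solved by proximal gradient descent (ISTA). All sizes are assumptions.
rng = np.random.default_rng(0)

n, p, k = 50, 200, 5                    # 50 measurements, 200 unknowns, 5 nonzeros
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                          # noiseless measurements

lam = 0.01                              # regularization strength (assumed)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the spectral norm
x = np.zeros(p)
for _ in range(500):
    grad = A.T @ (A @ x - y)            # gradient of 0.5 * ||A x - y||^2
    z = x - step * grad
    # soft-thresholding: the proximal operator of lam * ||x||_1
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

Without the l1 term, this system is ill-posed (200 unknowns, 50 equations); the sparsity prior is what makes accurate recovery possible.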
The second class of problems typically arises in deep learning and requires no explicit regularization. While neural networks have the capacity to overfit any dataset, including noise, somewhat paradoxically they continue to predict well on unseen test data.
Toward explaining this phenomenon, we show that neural networks trained by gradient descent (1) are provably robust to noise/corruption on a constant fraction of the labels and (2) provably generalize to test data despite overparameterization.
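One simple mechanism behind benign overparameterization, sketched below as a toy example (not from the talk), is the implicit regularization of gradient descent: on an overparameterized least-squares problem, gradient descent started from zero converges to the minimum-norm interpolating solution rather than an arbitrary one. The problem sizes are assumed values.

```python
import numpy as np

# Illustrative sketch: gradient descent on an overparameterized
# least-squares problem (more unknowns than equations) initialized at
# zero converges to the minimum-norm interpolant -- a simple form of
# implicit regularization.
rng = np.random.default_rng(1)
n, p = 20, 100                          # 20 equations, 100 unknowns (assumed)
A = rng.standard_normal((n, p))
y = rng.standard_normal(n)

step = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the spectral norm
w = np.zeros(p)
for _ in range(5000):
    w -= step * A.T @ (A @ w - y)       # gradient of 0.5 * ||A w - y||^2

w_min_norm = np.linalg.pinv(A) @ y      # the minimum-norm interpolant
print(np.linalg.norm(w - w_min_norm))   # close to zero
```

Infinitely many parameter vectors fit the data exactly here; gradient descent selects a specific, small-norm one, which is one reason such models can still generalize.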
Short Bio: Samet Oymak is an assistant professor in the Department of Electrical and Computer Engineering and a cooperating faculty member in the Department of Computer Science at UC Riverside. He received his MS and PhD degrees from the California Institute of Technology, where he was awarded the Wilts Prize for the best thesis in Electrical Engineering. Before joining UCR, he spent time at Google and in the financial industry; prior to that, he was a fellow at the Simons Institute and a postdoctoral scholar at UC Berkeley. His research explores the mathematical foundations of data science and machine learning using tools from optimization and statistics. His research interests include mathematical optimization, reinforcement learning, deep learning theory, and high-dimensional problems.
