

  • This event has passed.

Deep Learning: a Non-parametric Statistical Viewpoint

October 24, 2024 @ 10:00 am - 11:00 am

ABSTRACT

The advent of deep learning has revolutionized how we process data, achieving superhuman performance across many fields of modern science. However, despite the remarkable empirical successes of deep learners, the theoretical guarantees for their statistical accuracy remain rather pessimistic. In particular, the data distributions on which deep learners are generally applied, such as natural images, are often hypothesized to have an intrinsic low-dimensional structure in a typically high-dimensional feature space. Yet this structure is often not reflected in the rates derived in state-of-the-art analyses. This talk aims to bridge the gap between the theory and practice of deep learning from a statistical perspective. We demonstrate that deep learners exhibit a convergence rate determined solely by the intrinsic dimensionality of the data, rather than its nominal high-dimensional feature representation. Our work not only provides practical guidelines for selecting suitable network architectures but also connects the theoretical analyses of these models to established convergence rates in the optimal transport and non-parametric statistics literature. In particular, we derive the sharpest convergence rates for various learning scenarios, including Generative Adversarial Networks (GANs), Wasserstein Autoencoders (WAEs), federated learning, Bi-directional GANs, and general deep supervised learners. Furthermore, we introduce a novel measure, called the entropic dimension, to characterize the intrinsic dimension of probability measures and achieve the sharpest known approximation results for neural networks employing Rectified Linear Unit (ReLU) activation, improving upon classical benchmarks.
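As background for the dimension dependence discussed in the abstract, the classical minimax rate from non-parametric statistics (Stone's theorem) shows how the ambient dimension enters the exponent; the intrinsic-dimension analyses described above improve this exponent. This is a standard textbook result stated for context, not the speaker's specific theorem:

```latex
% Minimax rate for estimating a \beta-smooth regression function
% from n i.i.d. samples in ambient dimension D:
\[
  \inf_{\hat{f}} \; \sup_{f \in \mathcal{H}^{\beta}}
  \mathbb{E}\, \lVert \hat{f} - f \rVert_2^2
  \;\asymp\; n^{-\frac{2\beta}{2\beta + D}} .
\]
% Intrinsic-dimension analyses of the kind sketched in the abstract
% replace the ambient dimension D in the exponent by an intrinsic
% dimension d \ll D, yielding a much faster rate of order
% n^{-2\beta/(2\beta + d)} when the data lie near a d-dimensional set.
```

For high-dimensional data such as images (large D), the classical exponent is close to zero, which is why rates depending only on a small intrinsic dimension d are substantially sharper.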

BIOGRAPHY

Saptarshi Chakraborty is a fifth-year Ph.D. student in Statistics at the University of California, Berkeley, advised by Prof. Peter Bartlett. Prior to joining Berkeley, he earned his M.Stat. and B.Stat. (Hons.) degrees in Statistics from the Indian Statistical Institute (ISI), Kolkata, India. He is primarily interested in the theoretical and methodological foundations of machine learning, especially deep learning theory, unsupervised learning, dimensionality reduction, optimal transport, and optimization.

ZOOM LINK: https://ucsd.zoom.us/j/93363424503

Details

Date: October 24, 2024
Time: 10:00 am - 11:00 am
Website: https://ucsd.zoom.us/j/93363424503

Organizer

EnCORE

Other

Format: Hybrid
Speaker: Saptarshi Chakraborty, UC Berkeley