
Andrej Risteski: Better understanding of modern paradigms in probabilistic models

March 12, 2019 @ 2:00 pm

Abstract: In recent years, one of the areas of machine learning that has seen the most exciting progress is unsupervised learning, namely learning in the absence of labels or annotation. An integral part of these advances has been complex probabilistic models for high-dimensional data, capturing different types of intricate latent structure. As a consequence, a number of statistical and algorithmic issues have emerged, stemming from all major aspects of probabilistic models: representation (expressivity and interpretability of the model), learning (fitting a model from raw data), and inference (probabilistic queries and sampling from a known model). A common theme is that the models used in practice are often intractable in the worst case (either computationally or statistically), yet even simple algorithms are, to borrow from Wigner, unreasonably effective in practice. It thus behooves us to ask why this happens. I will showcase some of my research addressing this question, in the context of (i) computationally efficient inference using Langevin dynamics in the presence of multimodality; (ii) statistical guarantees for learning distributions using GANs (Generative Adversarial Networks); and (iii) explaining surprising properties of vector representations of words (word embeddings).
Short Bio: Andrej Risteski holds a joint position as the Norbert Wiener Fellow at the Institute for Data Science and Statistics (IDSS) and an Instructor of Applied Mathematics at MIT. Before MIT, he was a PhD student in the Computer Science Department at Princeton University, advised by Sanjeev Arora. Prior to that, he received his B.S.E. degree from Princeton University as well. His work lies at the intersection of machine learning and theoretical computer science. The broad goal of his research is to theoretically understand statistical and algorithmic phenomena and problems arising in modern machine learning.
