

  • January 5, 2024
  • HDSIComm

Uncertainty Quantification for Interpretable Machine Learning | Lili Zheng

Interpretable machine learning has been widely deployed for scientific discovery and decision-making, but its reliability hinges on uncertainty quantification (UQ). In this talk, I will discuss UQ in two challenging scenarios motivated by scientific and societal applications: selective inference for large-scale graph learning and UQ for model-agnostic machine learning interpretations. Specifically, the first part concerns graphical model inference when only irregular, patchwise observations are available, a common setting in neuroscience, healthcare, genomics, and econometrics. To filter out low-confidence edges due to the irregular measurements, I will present a novel inference method that quantifies the uneven edgewise uncertainty levels over the graph, as well as an FDR control procedure; this is achieved by carefully disentangling the dependencies across the graph and consequently yields more reliable graph selection. In the second part, I will discuss the computational and statistical challenges associated with UQ for the feature importance of any machine learning model. I will take inspiration from recent advances in conformal inference and utilize an ensemble framework to address these challenges. This leads to an almost computationally free, assumption-light, and statistically powerful inference approach for occlusion-based feature importance. For both parts of the talk, I will highlight the potential applications of my research in science and society, as well as how it contributes to more reliable and trustworthy data science.
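As a reference point for what "occlusion-based feature importance" measures, the minimal sketch below scores each feature by how much the held-out prediction error grows when that feature is occluded (here, replaced by its training mean). The synthetic data, the random-forest model, and the mean-imputation occlusion are illustrative assumptions; the ensemble and conformal-inference machinery for attaching uncertainty to these scores, which is the subject of the talk, is not reproduced here.

```python
# Minimal sketch of occlusion-based feature importance: importance of feature j
# = increase in held-out error when feature j is occluded. Data and model are
# placeholders, not the speaker's method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
base_err = mean_squared_error(y_te, model.predict(X_te))

for j in range(X.shape[1]):
    X_occ = X_te.copy()
    X_occ[:, j] = X_tr[:, j].mean()          # occlude feature j
    occ_err = mean_squared_error(y_te, model.predict(X_occ))
    print(f"feature {j}: importance = {occ_err - base_err:.3f}")
```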

Read More
  • October 2, 2023
  • HDSIComm

The Uneasy Relation Between Deep Learning and Statistics

Deep learning uses the language and tools of statistics and classical machine learning, including empirical and population losses and optimizing a hypothesis on a training set. But it uses these tools in regimes where they should not be applicable: the optimization task is non-convex, models are often large enough to overfit, and the training and deployment tasks can radically differ. In this talk, I will survey the relation between deep learning and statistics. In particular, we will discuss recent work supporting the emerging intuition that deep learning is closer in some respects to human learning than to classical statistics. Rather than estimating quantities from samples, deep neural nets develop broadly applicable representations and skills through their training. The talk will not assume background knowledge in artificial intelligence or deep learning.
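As a toy illustration of the gap between empirical and population losses in the over-parameterized regime (not an example from the talk), the sketch below fits an interpolating polynomial: the training loss is driven to essentially zero while the loss on a large held-out sample, standing in for the population loss, stays large.

```python
# Toy illustration (assumed setup): an over-parameterized model with near-zero
# empirical loss but a large population loss, approximated by a big test sample.
import numpy as np

rng = np.random.default_rng(0)
n_train, degree = 15, 14                       # as many coefficients as points
x_tr = rng.uniform(-1, 1, n_train)
y_tr = np.sin(3 * x_tr) + rng.normal(scale=0.3, size=n_train)

coefs = np.polyfit(x_tr, y_tr, deg=degree)     # interpolating polynomial fit
train_mse = np.mean((np.polyval(coefs, x_tr) - y_tr) ** 2)

x_te = rng.uniform(-1, 1, 10_000)              # proxy for the population
y_te = np.sin(3 * x_te) + rng.normal(scale=0.3, size=10_000)
test_mse = np.mean((np.polyval(coefs, x_te) - y_te) ** 2)

print(f"empirical loss: {train_mse:.4f}, population loss (approx.): {test_mse:.4f}")
```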

Read More
  • April 24, 2023
  • Kaleigh O'Merry

Leveraging Simulators for ML Inference in Particle Physics

Abstract: The field of research investigating machine-learning (ML) methods that can exploit a physical model of the world through simulators is rapidly growing, particularly for applications in particle physics. While these methods have shown considerable promise in phenomenological studies, they are also known to be susceptible to inaccuracies in the simulators used to train them. In this work, we design a novel analysis strategy that uses the concept of simulation-based inference for a crucial Higgs boson measurement, where traditional methods are rendered sub-optimal by quantum interference between Higgs and non-Higgs processes. Our work develops uncertainty quantification methods that account for the impact of inaccuracies in the simulators and for uncertainties in the ML predictions themselves, as well as novel strategies to test the coverage of the quoted uncertainties. These new ML methods leverage the vast computational resources that have recently become available to perform scientific measurements in a way that was not feasible before. In addition, this talk briefly discusses certain ML-bias-mitigation methods developed in particle physics and their potential wider applications.
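For readers unfamiliar with simulation-based inference, the sketch below shows one common variant of the idea: train a classifier to distinguish events drawn from two simulators and use its output as an approximate likelihood ratio. The Gaussian "simulators", the network size, and the observed point are placeholder assumptions; this is not the Higgs analysis or the uncertainty-quantification strategy described above.

```python
# Hedged sketch of the classifier-based likelihood-ratio idea used in
# simulation-based inference, with toy Gaussian simulators as stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 20_000
x0 = rng.normal(loc=0.0, scale=1.0, size=(n, 1))   # "background-like" simulator
x1 = rng.normal(loc=0.5, scale=1.2, size=(n, 1))   # "signal-like" simulator

X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(n), np.ones(n)])

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300, random_state=0)
clf.fit(X, y)

# Likelihood-ratio trick: for a well-calibrated classifier,
# p(y=1|x) / p(y=0|x) approximates p(x | hypothesis 1) / p(x | hypothesis 0).
x_obs = np.array([[0.3]])
p = clf.predict_proba(x_obs)[0, 1]
print(f"approximate likelihood ratio at x=0.3: {p / (1 - p):.2f}")
```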

Read More
  • April 7, 2023
  • Kaleigh O'Merry

Decoding Nature’s Message Through the Channel of Artificial Intelligence

Abstract: Nature holds much interesting physics we want to search for, but it does not speak it out loud. Physicists therefore build large particle physics experiments that encode nature's message into experimental data. My research leverages artificial intelligence and machine learning to decode as much of nature's message from those data as possible. The question I want to ask nature is: are neutrinos Majorana particles? The answer would fundamentally revise our understanding of physics and the cosmos. Currently, the most effective experimental probe of Majorana neutrinos is neutrinoless double-beta decay (0νββ). Cutting-edge AI algorithms could break down significant technological barriers and, in turn, deliver the world's most sensitive search for 0νββ. This talk will discuss one such algorithm, KamNet, which plays a pivotal role in the new result of the KamLAND-Zen experiment. With the help of KamNet, KamLAND-Zen provides a limit that reaches below 50 meV for the first time and is the first search for 0νββ in the inverted mass ordering region. Looking further ahead, the next-generation 0νββ experiment LEGEND has created the Germanium Machine Learning group to aid all aspects of LEGEND analysis and eventually build an independent AI analysis. As the odyssey continues, AI will light the way to a bright future for experimental particle physics.

Read More
  • March 15, 2023
  • Kaleigh O'Merry

Scientific Machine Learning Symposium

Recent progress in Artificial Intelligence (AI) and Machine Learning (ML) has provided groundbreaking methods for processing large data sets. These new techniques are particularly powerful when dealing with scientific data with complex structures, non-linear relationships, and unknown uncertainties that are challenging to model and analyze with traditional tools. This has triggered a flurry of activity in science and engineering, with new methods being developed to tackle problems that were previously impossible or extremely hard to deal with.

The goal of this symposium is to bring together researchers and practitioners at the intersection of AI and Science, to discuss opportunities to use AI to accelerate scientific discovery, and to explore the potential of scientific knowledge to guide AI development. The symposium will provide a platform to nurture the research community, to fertilize interdisciplinary ideas, and shape the vision of future developments in the rapidly growing field of AI + Science.

We plan to use the symposium as the launching event for the AI + Science event series, co-hosted by Computer Science and Engineering (CSE), the Halıcıoğlu Data Science Institute (HDSI), and the Scripps Institution of Oceanography (SIO) at UC San Diego. The symposium will include a combination of invited talks, posters, panel discussions, and social and networking events. The first event will place particular emphasis on AI + physical sciences. We will invite contributions and participation from physics, engineering, and oceanography, among others. Part of the program will highlight research from climate science, as a result of our DOE-funded scientific ML project for tackling climate extremes.

Read More
  • March 15, 2023
  • Kaleigh O'Merry

Optimal methods for reinforcement learning: Efficient algorithms with instance-dependent guarantees | Wenlong Mou

Reinforcement learning (RL) is a pillar of modern artificial intelligence. Compared to classical statistical learning, several new statistical and computational phenomena arise in RL problems, leading to different trade-offs in the choice of estimators, the tuning of their parameters, and the design of efficient algorithms. In many settings, asymptotic and/or worst-case theory fails to provide the relevant guidance.
In this talk, I present recent advances that involve a more refined approach to RL, one that leads to non-asymptotic and instance-optimal guarantees. The bulk of the talk focuses on function approximation methods for policy evaluation. I establish a novel class of optimal and instance-dependent oracle inequalities for projected Bellman equations, as well as efficient computational algorithms that achieve them. Among other results, I will highlight how the instance-optimal guarantees guide the selection of tuning parameters in temporal difference methods and tackle the instability issues that arise with general function classes. Drawing on this perspective, I will also discuss a novel class of stochastic approximation methods that yield optimal statistical guarantees for policy optimization problems.
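As background for "temporal difference methods" and policy evaluation with function approximation, the sketch below runs TD(0) with (tabular) linear features on a toy random-walk chain. The chain, the features, and the constant step size are illustrative assumptions, not the estimators or step-size choices analyzed in the talk; the instance-dependent theory discussed there is precisely about how such tuning should be done.

```python
# Minimal TD(0) policy-evaluation sketch with linear (one-hot) features on a
# toy random-walk chain; all quantities here are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma, alpha = 5, 0.9, 0.1
phi = np.eye(n_states)                        # one-hot feature vector per state
theta = np.zeros(n_states)                    # linear value-function parameters

def step(s):
    """Random-walk policy: move left or right; reward 1 when at the right end."""
    s_next = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

s = n_states // 2
for _ in range(50_000):
    s_next, r = step(s)
    td_error = r + gamma * phi[s_next] @ theta - phi[s] @ theta
    theta += alpha * td_error * phi[s]        # TD(0) stochastic-approximation update
    s = s_next

print("estimated state values:", np.round(theta, 2))
```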

Read More