Uncertainty Quantification for Interpretable Machine Learning | Lili Zheng
Interpretable machine learning is widely deployed for scientific discovery and decision-making, yet its reliability hinges on rigorous uncertainty quantification (UQ). In this talk, I will discuss UQ in two challenging scenarios motivated by scientific and societal applications: selective inference for large-scale graph learning and UQ for model-agnostic machine learning interpretations. The first part concerns graphical model inference when only irregular, patchwise observations are available, a common setting in neuroscience, healthcare, genomics, and econometrics. To filter out low-confidence edges arising from the irregular measurements, I will present a novel inference method that quantifies the uneven, edgewise uncertainty levels across the graph, together with a false discovery rate (FDR) control procedure for edge selection; by carefully disentangling the dependencies across the graph, this approach yields more reliable graph selection. In the second part, I will discuss the computational and statistical challenges of UQ for the feature importance of any machine learning model. Drawing inspiration from recent advances in conformal inference, I will use an ensemble framework to address these challenges, leading to an almost computationally free, assumption-light, and statistically powerful inference approach for occlusion-based feature importance. For both parts of the talk, I will highlight the potential applications of my research in science and society, as well as how it contributes to more reliable and trustworthy data science.
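For orientation only, the sketch below shows the generic Benjamini-Hochberg step that underlies standard edgewise FDR control: rank edgewise p-values and select all edges up to the largest rank meeting the FDR threshold. It is not the talk's procedure, which additionally models the uneven, edge-specific uncertainty and the dependence across the graph; the p-values here are placeholders supplied by the user.

```python
# Generic Benjamini-Hochberg selection over edgewise p-values (illustrative only;
# the talk's method further accounts for dependence and uneven edgewise uncertainty).
import numpy as np

def bh_select_edges(edge_pvalues, alpha=0.1):
    """Return indices of edges selected at FDR level alpha via Benjamini-Hochberg."""
    p = np.asarray(edge_pvalues)
    m = p.size
    order = np.argsort(p)                          # edges sorted by p-value
    thresholds = alpha * np.arange(1, m + 1) / m   # BH step-up thresholds
    passed = p[order] <= thresholds
    if not passed.any():
        return np.array([], dtype=int)
    k = np.max(np.where(passed)[0])                # largest rank meeting the criterion
    return order[: k + 1]                          # select all edges up to that rank

# Example: six candidate edges with hypothetical p-values
print(bh_select_edges([0.001, 0.04, 0.20, 0.003, 0.55, 0.012], alpha=0.1))
```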
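Likewise, the following is a naive sample-splitting baseline for occlusion-based (leave-one-covariate-out) feature importance, included only to make the target quantity concrete. The model class, synthetic data, and normal-approximation interval are illustrative assumptions; the ensemble approach discussed in the talk is designed precisely to avoid this kind of repeated refitting and the distributional assumptions used here.

```python
# Naive LOCO-style baseline: refit with and without one feature on a split sample,
# then form a normal-approximation interval for the excess test loss (illustrative only).
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def loco_importance_ci(X, y, feature, alpha=0.1, seed=0):
    """Approximate (1 - alpha) interval for the increase in squared error
    when `feature` is occluded (dropped) from the model."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=seed)
    full = RandomForestRegressor(random_state=seed).fit(X_tr, y_tr)
    reduced = RandomForestRegressor(random_state=seed).fit(
        np.delete(X_tr, feature, axis=1), y_tr)
    # Per-sample excess loss caused by occluding the feature
    d = (y_te - reduced.predict(np.delete(X_te, feature, axis=1))) ** 2 \
        - (y_te - full.predict(X_te)) ** 2
    half = norm.ppf(1 - alpha / 2) * d.std(ddof=1) / np.sqrt(d.size)
    return d.mean() - half, d.mean() + half

# Example with synthetic data: feature 0 is informative, feature 2 is pure noise
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = 2 * X[:, 0] + rng.normal(scale=0.5, size=400)
print(loco_importance_ci(X, y, feature=0))
print(loco_importance_ci(X, y, feature=2))
```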