The last AI seminar of the Fall quarter will take place next Monday (Dec 7) from 12:00–12:50pm over Zoom (https://ucsd.zoom.us/j/99067937524). Our speaker is Babak Salimi. Looking forward to seeing you there!
FYI: The recording and slides of the previous talks can be found here: https://shangjingbo1226.github.io/teaching/2020-fall-CSE259-AI-seminar
Title: Causal Inference for Responsible Data Science
Abstract: Scaling and democratizing access to big data promises to provide meaningful, actionable information that supports decision-making. Today, data-driven decisions profoundly affect the course of our lives, such as whether to admit applicants to a particular school, offer them a job, or grant them a mortgage. Unfair, inconsistent, or faulty decision-making raises serious concerns about ethics and responsibility. For example, we may know that our training data is biased, but how do we avoid propagating discrimination when we use this data? How do we avoid incorrect, spurious and non-reproducible findings? How can we curate and expose existing data to make it “safe” for informed decision-making?
In this talk, I describe how we can combine techniques from causal inference and data management to develop systems and algorithms that help answer some of these questions. Many popular existing notions of fairness in ML fail to distinguish between discriminatory, non-discriminatory, and spurious correlations between sensitive attributes and the outcomes of learning algorithms. I present a new notion of fairness that subsumes and improves upon previous definitions and correctly distinguishes between fairness violations and non-violations. Further, I describe an approach to removing discrimination by repairing training data so as to eliminate the effects of any inappropriate or discriminatory causal relationships between a protected attribute and classifier predictions. Finally, I present my most recent work, which uses counterfactual reasoning and provenance to explain black-box decision-making algorithms.
Speaker Bio: Babak Salimi is an assistant professor in HDSI at UC San Diego. Before joining UC San Diego, he was a postdoctoral research associate in the Department of Computer Science and Engineering at the University of Washington, where he worked with Prof. Dan Suciu and the database group. He received his Ph.D. from the School of Computer Science at Carleton University, advised by Prof. Leopoldo Bertossi. His research seeks to unify techniques from theoretical data management, causal inference, and machine learning to develop a new generation of decision-support systems that help people with heterogeneous backgrounds interpret data. His ongoing work in causal relational learning aims to develop the conceptual foundations necessary for causal inference from complex relational data. Further, his research in the area of responsible data science develops the foundations needed to ensure fairness and accountability in the era of data-driven decisions. His research contributions have been recognized with a Postdoc Research Award at the University of Washington, a Best Demonstration Paper Award at VLDB 2018, a Best Paper Award at SIGMOD 2019, and a Research Highlight Award at SIGMOD 2020.