Responsible AI: Privacy and Fairness in Decision and Learning Systems
Differential Privacy has become the go-to approach for protecting sensitive information in data releases and in learning tasks that inform critical decision processes. For example, census data is used to allocate funds and distribute benefits, while several corporations use machine learning systems for criminal risk assessments, hiring decisions, and more. While this privacy notion provides strong guarantees, we will show that it may also induce biases and fairness issues in downstream decision processes. These issues may adversely affect many individuals' health, well-being, and sense of belonging, and they are currently poorly understood.
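For readers unfamiliar with the guarantee behind this privacy notion, recall the standard definition: a randomized mechanism $\mathcal{M}$ satisfies $\epsilon$-differential privacy if, for every pair of datasets $D$ and $D'$ differing in a single individual's record, and every set $S$ of possible outputs,
\[
  \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\epsilon} \, \Pr[\mathcal{M}(D') \in S].
\]
Intuitively, the released output is almost equally likely whether or not any one individual's data is included; smaller values of $\epsilon$ mean stronger protection, typically obtained by injecting more noise.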
In this talk, we delve into the intersection of privacy, fairness, and decision processes, with a focus on understanding and addressing these fairness issues. We first provide an overview of Differential Privacy and its applications to data release and learning tasks. Next, we examine the societal impacts of privacy through a fairness lens and present a framework that illustrates which aspects of the private algorithms and the data may be responsible for exacerbating unfairness. We then show how to extend this framework to assess the disparate impacts arising in machine learning tasks. Finally, we propose a path to partially mitigate these fairness issues and discuss grand challenges that require further exploration.
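To make the kind of disparity we have in mind concrete, the following is a minimal simulation sketch, not taken from the talk itself: it applies the standard Laplace mechanism to population counts and then uses a threshold-based eligibility rule, as in a fund-allocation setting. The group names, privacy budget, and cutoff are illustrative assumptions; the point is that groups whose true counts sit near the decision threshold, often the smaller ones, absorb most of the decision errors introduced by the privacy noise.

    import numpy as np

    # Illustrative sketch: a benefit is granted when a group's released
    # (noisy) count exceeds a cutoff. All parameters below are assumptions
    # chosen for demonstration, not values from the talk.
    rng = np.random.default_rng(0)
    epsilon = 0.1                   # assumed privacy budget
    sensitivity = 1.0               # one person changes a count by at most 1
    scale = sensitivity / epsilon   # Laplace noise scale for a count query
    threshold = 100                 # hypothetical eligibility cutoff

    # Both groups truly qualify for the benefit; only their sizes differ.
    true_counts = {"large_group": 5000, "small_group": 110}

    trials = 100_000
    for name, count in true_counts.items():
        noisy_counts = count + rng.laplace(scale=scale, size=trials)
        denial_rate = (noisy_counts < threshold).mean()
        print(f"{name}: estimated P[wrongly denied] = {denial_rate:.3f}")

    # Typical outcome: the large group is essentially never denied, while
    # the small group near the cutoff is wrongly denied in a nontrivial
    # fraction of runs, even though both satisfy the true eligibility rule.

Note that both groups receive noise of identical scale; the disparate impact arises purely from how the downstream decision rule interacts with that noise, which is precisely the kind of interaction the framework above is designed to expose.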