Differential privacy (DP) is widely regarded as a gold standard for privacy-preserving computation over users’ data. A key challenge with DP is that its mathematical sophistication makes its privacy guarantees difficult to communicate to users, leaving them uncertain about how and whether they are protected. Despite DP’s recent widespread deployment, relatively little is known about what users think of it and how to effectively communicate the practical privacy guarantees it offers.
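For concreteness (the formal definition is assumed here rather than stated in the abstract): a randomized mechanism M satisfies epsilon-differential privacy if, for every pair of neighboring datasets D and D' (differing in one user's data) and every set of outputs S,

    \Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S].

Smaller values of epsilon give stronger protection; this single inequality is the guarantee that the studies below aim to convey in plain language.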
This talk will cover a series of recent and ongoing user studies aimed at measuring and improving communication with non-technical end users about differential privacy. The first set of studies explores users’ privacy expectations related to differential privacy and measures the efficacy of existing methods for communicating the privacy guarantees of DP systems. We find that users care about the kinds of information leaks against which differential privacy protects, and that they are more willing to share their private information when the risk of these leaks is reduced. We also find that the ways differential privacy is described in the wild set users’ privacy expectations haphazardly and can be misleading depending on the deployment.

Motivated by these findings, the second set of user studies develops and evaluates prototype descriptions designed to help end users understand DP guarantees. These descriptions target two important technical details of DP deployments that are often poorly communicated to end users: the privacy parameter epsilon, which governs the level of privacy protection, and the distinction between the local and central models of DP, which governs who can access exact user data (the two models are contrasted concretely in the sketch below).

Based on joint work with Gabriel Kaptchuk, Priyanka Nanayakkara, Elissa Redmiles, and Mary Anne Smart, including https://arxiv.org/abs/2110.06452 and https://arxiv.org/abs/2303.00738.
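As a companion to the local/central distinction above, here is a minimal Python sketch (an illustration under an assumed epsilon, not the mechanism of any deployment discussed in the talk) contrasting the local model, where each user randomizes their own data before it leaves their device, with the central model, where a trusted curator sees exact data and adds noise only to the aggregate:

    import math
    import random

    EPSILON = 1.0  # hypothetical privacy parameter; smaller = stronger privacy

    def local_report(true_bit: int) -> int:
        # Local model (randomized response): each user flips their own bit
        # with calibrated probability, so the collector never sees exact data.
        p_truth = math.exp(EPSILON) / (math.exp(EPSILON) + 1)
        return true_bit if random.random() < p_truth else 1 - true_bit

    def central_noisy_count(bits: list[int]) -> float:
        # Central model (Laplace mechanism): a trusted curator computes the
        # exact count, then adds Laplace noise with scale 1/epsilon.
        # (The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).)
        noise = random.expovariate(EPSILON) - random.expovariate(EPSILON)
        return sum(bits) + noise

    data = [random.randint(0, 1) for _ in range(1000)]

    # Central model: one noisy answer computed from exact data.
    print("central estimate:", central_noisy_count(data))

    # Local model: debias the randomized reports to estimate the same count.
    p = math.exp(EPSILON) / (math.exp(EPSILON) + 1)
    reports = [local_report(b) for b in data]
    local_estimate = (sum(reports) - len(data) * (1 - p)) / (2 * p - 1)
    print("local estimate:", local_estimate)

At the same epsilon, the local estimate is typically far noisier than the central one; who must be trusted versus how accurate the result is, is precisely the trade-off the prototype descriptions try to make legible to end users.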