
Event Series: IPE Data Lunchtime Series

The Interplay of Technology, Ethics, and Policy

Public Engagement Building (PEB) 721, 9625 Scholars Drive North, MC 0305, La Jolla, CA, United States

Abstract: Technology is often designed and deployed without critical reflection of the values that it embodies. Value trade-offs—between security and privacy, free speech and dignity, autonomy and human agency, and different conceptions of fairness—abound in many technologies that are now achieving great scale in commonly used tech platforms. The decisions made by the people inside […]

Event Series: IPE Data Lunchtime Series

Just Opt Out? Lessons Learned From a Decade of Evasion

Public Engagement Building (PEB) 721, 9625 Scholars Drive North, MC 0305, La Jolla, CA, United States

Abstract: With the rise of techlash, an increasing number of users wish they could just say no to data tracking, surveillance capitalism, and the socially divisive effects of creepy technologies in our daily lives. But can we truly walk away from these systems? And what do we learn when we do? In this talk, Vertesi tells […]

Event Series: IPE Data Lunchtime Series

Beyond ‘The Algorithm’: Fields, Drama, and Extreme Content Among Vegan Influencers

Public Engagement Building (PEB) 721, 9625 Scholars Drive North, MC 0305, La Jolla, CA, United States

Abstract: Existing research on polarization on social media platforms emphasizes the role of algorithmic "filter bubbles" and platform failure in amplifying extreme attitudes among online audiences. This article provides a different approach by focusing on online creators rather than audiences. Christin adapts field theory to examine the dynamics structuring exchanges between social media influencers, which […]

IPE Data talk: Berk Ustun on Personalization and Worsenalization

The Institute for Practical Ethics' working group on Data Governance and Accountability (aka IPE Data) is thrilled to announce our first talk, with UCSD's very own Prof. Berk Ustun, next Monday at 3 p.m., on "When Personalization Harms Performance." Please also save the date, Monday, 11/13 at 3 p.m., for a talk with Prof. Dan Ho from Stanford on […]

Enabling Equity Assessments of Government Programs: Law, Policy, and Methods

Governments have increasingly mandated equity assessments of potential demographic disparities in programs and services. This talk will consider some of the legal, policy, and methodological challenges of carrying out such mandates, particularly in the public-sector context. First, the talk will discuss emerging tensions between informational privacy and equity assessments, as illustrated by the data minimization principle under the Privacy Act of 1974. Second, it will discuss the range of methods available to improve disparity assessments, including imputation, record linkage, surveys, and form collection. Third, it will illustrate these tensions with a case study of an equity assessment of racial disparities in IRS tax audits.

The Ethical and Policy Implications of Artificial Intelligence

Sanford Consortium

The Institute for Practical Ethics welcomes David Danks as the 2024 keynote speaker.

Danks, a UC San Diego professor in the Department of Philosophy and the Halıcıoğlu Data Science Institute, is an expert researcher working at the intersection of philosophy, cognitive science, and machine learning. He serves on multiple boards, including the United States National AI Advisory Committee.

Artificial intelligence is seemingly everywhere today, both in public perception and in our everyday lives. This growth has led to many stories about the widespread harms that can result from AI done poorly. As a result, there are now numerous demands for ‘ethical AI,’ but relatively little understanding of what that might involve.

In this keynote, David Danks will explore the nature of responsible AI, arguing that it involves much more than code or data. He will critically assess current approaches to producing more responsible AI, then suggest key policy and practical approaches that would likely be more effective. It is critical that we create more responsible AI, but doing so will require rethinking many of our current practices in academia, government, and industry.