

Event Series: Special Seminar Series

The continuum of gene regulation at single cell resolution, from Drosophila development to human complex traits | Diego Calderon

Powell-Focht Bioengineering Hall (PFBH), FUNG Auditorium

Single-cell technologies have emerged as powerful tools for studying development, enabling comprehensive surveys of cellular diversity at the profiled time points. They shed light on the dynamics of regulatory element activity and gene expression changes during the emergence of each cell type. Despite their potential, nearly all atlases of embryogenesis are constrained by sampling density, i.e., the number of discrete time points at which individual embryos are harvested. This limitation caps the resolution at which regulatory transitions can be characterized. In this talk, I present a novel cell collection approach capable of constructing a continuous representation of dynamic regulatory processes. I applied this approach to generate a continuous, single-cell atlas of chromatin accessibility and gene expression spanning Drosophila embryogenesis. Additionally, I will discuss my past and future research, applying new genomic technologies to characterize gene regulation important for human diseases.
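To make the idea of a continuous developmental axis concrete, here is a minimal, self-contained sketch of pseudotime ordering, a generic technique for placing profiled cells along a continuous trajectory rather than into a few discrete harvest-time bins. It is an illustrative toy (the simulated data and the PCA-based ordering are our assumptions), not the speaker's collection approach.

```python
# Illustrative only: NOT the speaker's method. A toy example of recovering a
# continuous ordering of single cells instead of binning them by harvest time.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 cells whose expression of 50 genes drifts smoothly with a
# hidden developmental time t in [0, 1].
t = rng.uniform(0.0, 1.0, size=500)              # true, unobserved time
loadings = rng.normal(size=50)                   # per-gene response to time
X = np.outer(t, loadings) + rng.normal(scale=0.3, size=(500, 50))

# A crude "pseudotime": project centered expression onto the first principal
# component, which captures the dominant (time-driven) axis of variation.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pseudotime = Xc @ Vt[0]

# The recovered ordering correlates strongly with the hidden time axis
# (sign is arbitrary, hence the absolute value).
print(abs(np.corrcoef(pseudotime, t)[0, 1]))     # close to 1 in this toy setup
```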

Event Series: Special Seminar Series

Building Human-AI Alignment: Specifying, Inspecting, and Modeling AI Behaviors | Serena Booth

GPS, Robinson Building Complex (RBC), 3106

Abstract: The learned behaviors of AI and robot agents should align with the intentions of their human designers. Alignment is necessary for AI systems to be used in many sectors of the economy, and so the process of aligning AI systems becomes critical to study for defining effective AI policy. Toward this goal, people must be able to easily specify, inspect, and model agent behaviors.

For specifications, we will consider expert-written reward functions for reinforcement learning (RL) and non-expert preferences for reinforcement learning from human feedback (RLHF). I will show evidence that experts are bad at writing reward functions: even in a trivial setting, experts write specifications that are overfit to a particular RL algorithm, and they often write erroneous specifications for agents that fail to encode their true intent. I will also show that the common approach to learning a reward function from non-experts in RLHF uses an inductive bias that fails to encode how humans express preferences, and that our proposed bias better encodes human preferences both theoretically and empirically. I will discuss the policy implications: namely, that engineers' design processes and embedded assumptions in building AI must be considered.

For inspection, humans must be able to assess the behaviors an agent learns from a given specification. I will discuss a method to find settings that exhibit particular behaviors, like out-of-distribution failures. I will discuss the policy implications for testing AI systems, for example through red teaming.

Lastly, cognitive science theories attempt to show how people build conceptual models that explain agent behaviors. I will show evidence that some of these theories are used in research to support humans, but that we can still build better curricula for modeling. I will discuss the policy need for careful onboarding to AI systems.

I will end by discussing my current work in the U.S. Senate on responding to the proliferation of AI. Collectively, my research provides evidence that, even with the best of intentions, current human-AI systems often fail to induce alignment, and my research proposes promising directions for how to build better aligned human-AI systems.
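For context on the "common approach" to reward learning that the abstract critiques, the sketch below shows the standard inductive bias used in RLHF preference learning: a Bradley-Terry-style model that assumes a human's preference between two trajectory segments depends only on their summed learned rewards. This is a minimal, hypothetical illustration (the function names and toy data are ours), not code from the speaker's work, which argues for a different preference model.

```python
# Minimal sketch of the standard RLHF inductive bias: score each trajectory
# segment by its SUMMED learned reward and treat human preferences as noisy
# (logistic) comparisons of those sums. Illustrative only; the talk argues
# this bias mismatches how humans actually express preferences.
import numpy as np

def segment_return(reward_fn, segment):
    """Sum the learned reward over the (state, action) pairs in a segment."""
    return sum(reward_fn(s, a) for s, a in segment)

def preference_prob(reward_fn, seg_a, seg_b):
    """Bradley-Terry probability that a human prefers seg_a over seg_b."""
    ra = segment_return(reward_fn, seg_a)
    rb = segment_return(reward_fn, seg_b)
    return 1.0 / (1.0 + np.exp(rb - ra))   # logistic in the return gap

def nll(reward_fn, comparisons):
    """Negative log-likelihood of observed preferences (a preferred over b),
    the loss typically minimized to fit the reward function."""
    return -sum(np.log(preference_prob(reward_fn, a, b))
                for a, b in comparisons)

# Toy usage: states and actions are scalars, learned reward is linear.
w = np.array([0.5, -0.2])
reward_fn = lambda s, a: w @ np.array([s, a])
seg_a = [(1.0, 0.0), (0.5, 1.0)]
seg_b = [(0.0, 0.0), (0.2, 0.1)]
print(preference_prob(reward_fn, seg_a, seg_b))
print(nll(reward_fn, [(seg_a, seg_b)]))
```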

The Ethical and Policy Implications of Artificial Intelligence

Sanford Consortium

The Institute for Practical Ethics welcomes David Danks as the 2024 keynote speaker.

Danks, a UC San Diego professor in the Department of Philosophy and the Halıcıoğlu Data Science Institute, researches at the intersection of philosophy, cognitive science, and machine learning. He serves on multiple boards, including the United States National AI Advisory Committee.

Artificial intelligence is seemingly everywhere today, both in public perception and in our everyday lives. This growth has led to many stories about the widespread harms that can result from poorly designed or deployed AI. As a result, there are now numerous demands for ‘ethical AI,’ but relatively little understanding of what that might involve.

In this keynote, David Danks will explore the nature of responsible AI, arguing that it involves much more than code or data. He will critically assess current approaches to producing more responsible AI, and then suggest key policy and practical approaches that would likely be more effective. It is critical that we create more responsible AI, but doing so will require rethinking many of our current practices in academia, government, and industry.