Constrained, Causal, and Logical Reasoning for Neural Language Generation
SDSC, The Auditorium, 9836 Hopkins Dr, La Jolla, San Diego, CA, United States

Today’s language models (LMs) can produce fluent, human-like text. However, they generate words with no grounding in the world and cannot flexibly reason about everyday situations and events, such as counterfactual (“what if?”) and abductive (“what might explain these observations?”) reasoning, which are important forms of human cognition. In this talk, I will present my research on connecting reasoning with language generation. Reasoning for language generation poses several key challenges, including incorporating diverse contextual constraints on the fly, understanding cause and effect as events unfold, and grounding in logic structures for consistent reasoning. I will first discuss COLD decoding, a unified energy-based framework that enables any off-the-shelf LM to reason with arbitrary constraints; it also introduces differentiable reasoning over discrete symbolic text for improved efficiency. Second, I will focus on a particularly important form of reasoning, counterfactual reasoning, including its first formulation in language generation and our algorithm, DeLorean, which enables off-the-shelf LMs to capture causal invariance. Third, I will present Maieutic prompting, which improves the logical consistency of neural reasoning by integrating it with logic structures. I will conclude with future research directions toward more general, grounded, and trustworthy reasoning with language.