
Language Models

  • April 6, 2023
  • Kaleigh O'Merry

Language Models and Human Language Acquisition

Abstract: Children have a remarkable ability to acquire language. This propensity has been an object of fascination in science for millennia, but in just the last few years, neural language models (LMs) have also proven to be incredibly adept at learning human language. In this talk, I discuss scientific progress that uses recent developments in natural language processing to advance linguistics—and vice-versa. My research explores this intersection from three angles: evaluation, experimentation, and engineering. Using linguistically motivated benchmarks, I provide evidence that LMs share many aspects of human grammatical knowledge and probe how this knowledge varies across training regimes. I further argue that—under the right circumstances—we can use LMs to test hypotheses that have been difficult or impossible to evaluate with human subjects. Such experiments have the potential to transform debates about the roles of nature and nurture in human language learning. As a proof of concept, I describe a controlled experiment examining how the distribution of linguistic phenomena in the input affects syntactic generalization. While the results suggest that the linguistic stimulus may be richer than often thought, there is no avoiding the fact that current LMs and humans learn language in vastly different ways. I describe ongoing work to engineer learning environments and objectives for LM pretraining inspired by human development, with the goal of making LMs more data efficient and more plausible models of human learning.
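
The benchmark-driven evaluation mentioned in the abstract can be made concrete with a small sketch: BLiMP-style minimal pairs differ in a single grammatical feature, and an LM "passes" an item if it assigns higher probability to the acceptable sentence. The sketch below is illustrative only; the model (gpt2) and the example sentence pair are assumptions, not the specific benchmarks or models discussed in the talk.

```python
# Minimal-pair acceptability check with an off-the-shelf causal LM.
# Illustrative only: the model (gpt2) and the sentence pair are assumptions,
# not the specific benchmarks or models discussed in the talk.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability the LM assigns to a sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    # out.loss is the mean negative log-likelihood over the shifted targets,
    # so multiply by the number of scored tokens to get a total.
    n_scored = inputs["input_ids"].size(1) - 1
    return -out.loss.item() * n_scored

# A BLiMP-style minimal pair: the sentences differ only in verb agreement.
good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."

print("grammatical:  ", sentence_log_prob(good))
print("ungrammatical:", sentence_log_prob(bad))
print("LM prefers the grammatical variant:",
      sentence_log_prob(good) > sentence_log_prob(bad))
```

Aggregating this comparison over many such pairs, grouped by phenomenon, is one way a linguistically motivated benchmark can probe which aspects of grammatical knowledge an LM has acquired.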

  • March 23, 2023
  • Kaleigh O'Merry

Constrained, Causal, and Logical Reasoning for Neural Language Generation

Today’s language models (LMs) can produce fluent, human-like text. However, they generate words with no grounding in the world and cannot flexibly reason about everyday situations and events, for example through counterfactual (“what if?”) and abductive (“what might explain these observations?”) reasoning, which are important forms of human cognition. In this talk, I will present my research on connecting reasoning with language generation. Reasoning for language generation poses several key challenges: incorporating diverse contextual constraints on the fly, understanding cause and effect as events unfold, and grounding in logical structures for consistent reasoning. I will first discuss COLD decoding, a unified energy-based framework that lets any off-the-shelf LM reason with arbitrary constraints; it also introduces differentiable reasoning over discrete symbolic text for improved efficiency. Second, I will focus on a particularly important form of reasoning, counterfactual reasoning, including its first formulation in language generation and our algorithm, DeLorean, which enables off-the-shelf LMs to capture causal invariance. Third, I will present Maieutic prompting, which improves the logical consistency of neural reasoning by integrating it with logical structures. I will conclude with future research toward more general, grounded, and trustworthy reasoning with language.
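
To give a flavor of the energy-based framing, here is a deliberately simplified sketch: each candidate continuation receives an energy that adds an LM fluency term to a penalty for violated lexical constraints, and the lowest-energy candidate wins. The actual COLD decoding method optimizes a soft token sequence with Langevin dynamics rather than reranking discrete samples, and the model, prompt, and constraint words below are assumptions chosen for illustration.

```python
# Energy-style constrained generation, heavily simplified.
# COLD decoding itself optimizes a soft token sequence with Langevin dynamics;
# this sketch only illustrates the energy function by sampling discrete
# candidates and reranking them. Model, prompt, and keywords are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def fluency_energy(text: str) -> float:
    """Negative log-likelihood under the LM: lower means more fluent."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return out.loss.item() * (inputs["input_ids"].size(1) - 1)

def constraint_energy(text: str, keywords: list[str]) -> float:
    """Fixed penalty for every required keyword missing from the text."""
    return sum(5.0 for kw in keywords if kw.lower() not in text.lower())

prompt = "She forgot her umbrella, so"
keywords = ["rain", "wet"]  # lexical constraints the continuation should satisfy

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    samples = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.9,
        max_new_tokens=20,
        num_return_sequences=8,
        pad_token_id=tokenizer.eos_token_id,
    )
candidates = [tokenizer.decode(s, skip_special_tokens=True) for s in samples]

# Total energy = fluency term + constraint term; keep the lowest-energy text.
best = min(candidates,
           key=lambda t: fluency_energy(t) + constraint_energy(t, keywords))
print(best)
```

The appeal of the energy formulation is that new constraints can be folded in simply by adding terms to the energy, without retraining the underlying LM.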
