When / Where
All occurrences of this event have passed.
This listing is displayed for historical purposes.

Presented By: Weinberg Institute for Cognitive Science

Foundations and Frontiers Speaker Series

Using Large Language Models to Understand Human Cognition

Sean Trott
Sean Trott is an Assistant Professor at the University of California, San Diego. He holds a joint appointment in Cognitive Science and Computational Social Science. His research focuses on how humans and machines understand language, and makes use of Large Language Models (LLMs) to test hypotheses about human cognition.

UPDATE: Sean Trott will be joining us virtually for this event; the presentation will still take place in East Hall 4448.

Schedule
11:00-11:30 am Foundations Presentation
11:30-11:45 am Q & A
—15 minute pizza break—
12:00-12:50 pm Frontiers Presentation
12:50-1:20 pm Q & A

Presentation Abstracts
Foundations: Many debates in Cognitive Science—such as whether certain cognitive capacities are innate, or acquired through specific experiential input—are entrenched and difficult to resolve. A new paradigm attempts to address these debates using Large Language Models (LLMs) to test competing theories of human cognition. In particular, because (most) LLMs are trained on linguistic input alone, they serve as useful baselines: measures of what kinds of behaviors and capacities could in principle emerge purely from exposure to statistical patterns in language. In this talk, I discuss the motivations for such an approach, and briefly survey several examples from the literature. Finally, I discuss the relevant trade-offs and considerations that might inform a researcher's decision about whether to use LLMs in their own research, including: the amount (and quality) of data an LLM has been trained on, issues of construct validity, and multimodal models.

Frontiers: Humans often reason about the mental states of others, even when those mental states diverge from their own. The ability to reason about false beliefs—part of the broader constellation of abilities that make up "Theory of Mind"—is viewed by many as playing a crucial role in social cognition. Yet there is considerable debate about where this ability comes from. Some theories emphasize the role of innate biological endowments, while others emphasize the role of experience. In this talk, I consider a hypothesis about a specific kind of experience: language. To test this "language exposure hypothesis", I use GPT-3, a Large Language Model (LLM) trained on linguistic input alone, and ask whether and to what extent such a system displays evidence consistent with Theory of Mind. The LLM displays above-chance performance on a number of tasks, but also falls short of human performance in multiple cases. I conclude by discussing the implications of these results for the language exposure hypothesis specifically, and for research on Theory of Mind more generally.
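For readers curious what this style of test can look like in practice, here is a minimal sketch (not taken from the talk): it scores the belief-consistent versus reality-consistent completions of a classic unexpected-transfer vignette by comparing their log-probabilities under a causal language model. The openly available GPT-2, accessed through the Hugging Face transformers library, stands in for GPT-3 purely for illustration, and the vignette wording is our own.

# Minimal sketch: scoring a false-belief vignette with a causal language model.
# GPT-2 is used here as an open stand-in for GPT-3; this is not the speaker's code.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Classic unexpected-transfer (Sally-Anne style) vignette.
prompt = (
    "Sally puts her ball in the basket and leaves the room. "
    "While she is away, Anne moves the ball to the box. "
    "Sally comes back. Sally looks for her ball in the"
)

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of token log-probabilities of `continuation` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probabilities of each token given all preceding tokens.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # Only sum over the continuation tokens, not the prompt tokens.
    n_prompt = prompt_ids.shape[1]
    return token_lp[0, n_prompt - 1:].sum().item()

# Belief-consistent (" basket") vs. reality-consistent (" box") completions.
for answer in [" basket", " box"]:
    print(answer, continuation_logprob(prompt, answer))

A model whose behavior is consistent with tracking Sally's false belief should assign higher probability to " basket" than to " box"; aggregating such comparisons across many vignettes gives the kind of above- or below-chance performance measure the abstract describes.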