Presented By: Department of Linguistics

Linguistics Colloquium

Laura Gwilliams, Stanford University

Laura Gwilliams received her BA in Linguistics from Cardiff University (UK) and a Master's degree in Cognitive Neuroscience of Language from the BCBL (Basque Country, Spain). She then joined NYU, first as a research assistant and then as a PhD student working with David Poeppel, Alec Marantz, and Liina Pylkkanen. There she used magnetoencephalography (MEG) to study the neural computations underlying dynamic speech understanding. Her dissertation, "Towards a mechanistic account of speech comprehension," combines insights from theoretical linguistics, neuroscience, and machine learning, and has received recognition from Meta, the William Orr Dingwall Foundation, the Martin Braine Fellowship, the Society for Neuroscience, and the Society for the Neurobiology of Language. As a postdoctoral scholar with Eddie Chang, she added intracranial EEG and single-unit recordings to her repertoire of techniques, using her time in the Chang Lab to understand how single-neuron spiking across cortical layers encodes speech properties in humans. Now, as director of the Gwilliams Laboratory of Speech Neuroscience (the GLySN Lab) at Stanford University, her group aims to understand the neural representations and computations that give rise to successful speech comprehension, using a range of recording methodologies and analytical techniques.

Dr. Gwilliams will be joining us via Zoom.

Title: Computational architecture of human speech comprehension

Abstract:
Humans understand speech with such speed and accuracy that it belies the complexity of transforming sound into meaning. The goal of my research is to develop a theoretically grounded, biologically constrained, and computationally explicit account of how the human brain achieves this feat. In my talk, I will present a series of studies that examine neural responses at different spatial scales: from population ensembles using magnetoencephalography and electrocorticography, to the encoding of speech properties in individual neurons across the cortical depth using Neuropixels probes in humans. The results provide insight into (i) what auditory and linguistic representations serve to bridge between sound and meaning; (ii) what operations reconcile auditory input speed with neural processing time; and (iii) how information at different timescales is nested, in time and in space, to allow information exchange across hierarchical structures. My work showcases the utility of combining cognitive science, machine learning, and neuroscience for developing neurally constrained computational models of spoken language understanding.

Livestream Information

Zoom
November 21, 2025 (Friday), 4:00pm
Meeting ID: 987533376494081
