BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//UM//UM*Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:America/Detroit
TZURL:http://tzurl.org/zoneinfo/America/Detroit
X-LIC-LOCATION:America/Detroit
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20070311T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20071104T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20251009T103323Z
DTSTART;TZID=America/Detroit:20251121T160000
DTEND;TZID=America/Detroit:20251121T173000
SUMMARY:Lecture / Discussion: Linguistics Colloquium
DESCRIPTION:Laura Gwilliams received her BA in Linguistics from Cardiff University (UK) and a Master's degree in Cognitive Neuroscience of Language from the BCBL (Basque Country\, Spain). Laura then joined NYU\, first as a research assistant\, and then as a PhD student\, working with David Poeppel\, Alec Marantz and Liina Pylkkanen. There she used magnetoencephalography (MEG) to study the neural computations underlying dynamic speech understanding. Laura's dissertation 'Towards a mechanistic account of speech comprehension' aims to combine insight from theoretical linguistics\, neuroscience and machine learning\, and has received recognition from Meta\, the William Orr Dingwall Foundation\, the Martin Braine Fellowship\, the Society for Neuroscience\, and the Society for the Neurobiology of Language. As a post-doctoral scholar with Eddie Chang\, Laura added intracranial EEG and single-unit recordings to her repertoire of techniques. She used her time in the Chang Lab to understand how laminar single neuronal spiking encodes speech properties in humans. Now as the director of the Gwilliams Laboratory of Speech Neuroscience (the GLySN Lab) at Stanford University\, her group aims to understand the neural representations and computations that give rise to successful speech comprehension\, using a range of recording methodologies and analytical techniques.\n\nDr. Gwilliams will be joining us via Zoom.\n\nTitle: Computational architecture of human speech comprehension\n\nAbstract:\nHumans understand speech with such speed and accuracy that it belies the complexity of transforming sound into meaning. The goal of my research is to develop a theoretically grounded\, biologically constrained and computationally explicit account of how the human brain achieves this feat. 
 In my talk\, I will present a series of studies that examine neural responses at different spatial scales: from population ensembles using magnetoencephalography and electrocorticography\, to the encoding of speech properties in individual neurons across the cortical depth using Neuropixels probes in humans. The results provide insight into (i) what auditory and linguistic representations serve to bridge between sound and meaning\; (ii) what operations reconcile auditory input speed with neural processing time\; and (iii) how information at different timescales is nested\, in time and in space\, to allow information exchange across hierarchical structures. My work showcases the utility of combining cognitive science\, machine learning and neuroscience for developing neurally constrained computational models of spoken language understanding.
UID:140313-21886911@events.umich.edu
URL:https://events.umich.edu/event/140313
CLASS:PUBLIC
STATUS:CONFIRMED
CATEGORIES:Talk
LOCATION:Off Campus Location
END:VEVENT
END:VCALENDAR