
Presented By: Department of Linguistics

Linguistics Colloquium: Computational Models of Retrieval Processes

Shravan Vasishth, University of Potsdam

Virtual colloquium (via BlueJeans) featuring Shravan Vasishth, professor of linguistics at the University of Potsdam, Germany, where he holds the chair of Psycholinguistics and Neurolinguistics (Language Processing). His research focuses on computational cognitive modeling, in particular computational modeling of sentence processing in unimpaired and impaired populations, and on the application of mathematical, computational, experimental, and statistical methods (particularly Bayesian methods) in linguistics and psychology.

ABSTRACT
Computational models of retrieval processes: An evaluation using benchmark data

The talk will begin by revisiting the key predictions of the ACT-R based model of sentence processing (Lewis and Vasishth, 2005; henceforth LV05). As discussed in Engelmann, Jäger, and Vasishth (2020), the LV05 model predicts two classes of similarity-based interference effects: inhibitory and facilitatory interference. Jäger, Engelmann, and Vasishth (2017) carried out a meta-analysis of some 100 existing effect estimates (self-paced reading and eyetracking during reading). This work showed that the LV05 model's predictions are only partly consistent with the available evidence. A closer look at the data suggests that the published studies are likely to be severely underpowered. As Gelman and Carlin (2014) have pointed out, when power is low, statistically significant effect estimates will be highly misleading: either the effects will be overestimated, or the sign of the effect will be incorrect (for a real-life demonstration, see Vasishth, Mertzen, Jäger, and Gelman, 2018). Coupled with the problem of publication bias (in so-called high-impact journals, "big news" claims are published more often than "failed" studies or more tempered claims), these underpowered studies make theory evaluation difficult, if not impossible. What can we do as researchers? How should we proceed?
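
To make the low-power argument concrete, the following is a minimal simulation sketch of the Gelman and Carlin (2014) point: when the true effect is small relative to measurement noise, the subset of estimates that happen to reach significance will, on average, exaggerate the effect (Type M error) and will sometimes have the wrong sign (Type S error). The specific numbers (a 10 ms true effect, a 20 ms standard error) are illustrative assumptions, not estimates from any of the studies cited above.

    import numpy as np

    # Illustrative simulation: low power inflates significant effect estimates
    # (Type M error) and can flip their sign (Type S error).
    rng = np.random.default_rng(0)

    true_effect = 10.0   # assumed true effect in ms (hypothetical value)
    se = 20.0            # assumed standard error of the estimate (hypothetical value)
    n_sims = 100_000

    estimates = rng.normal(true_effect, se, n_sims)
    significant = np.abs(estimates / se) > 1.96   # "significant" at alpha = 0.05

    power = significant.mean()
    type_m = np.abs(estimates[significant]).mean() / true_effect   # exaggeration ratio
    type_s = (np.sign(estimates[significant]) != np.sign(true_effect)).mean()

    print(f"power        ~ {power:.2f}")    # roughly 0.08 under these assumptions
    print(f"Type M ratio ~ {type_m:.1f}x")  # significant estimates overstate the effect
    print(f"Type S rate  ~ {type_s:.2f}")   # share of significant estimates with wrong sign
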

In the second part of the talk, I show one way to resolve these problems. In their classic paper, Roberts and Pashler (2000) laid out two important criteria for model evaluation: the model needs to make quantitatively constrained predictions, and the effect estimates have to be measured with high precision. Modeling researchers usually add one more criterion: to be meaningful, model evaluation should always be carried out against a competing baseline model. As a case study of model evaluation, we compare the predictive performance (using k-fold cross-validation) of the LV05 model with a competing model of retrieval processes, the McElree (2003) direct-access model (Nicenboim and Vasishth, 2018). The evaluation dataset is a relatively high-precision study on inhibitory interference effects in German number agreement (Nicenboim, Vasishth, Engelmann, and Suckow, 2018).
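
As a rough illustration of how k-fold cross-validation compares two candidate models by held-out predictive performance, here is a generic sketch on simulated data. This is not the Nicenboim and Vasishth (2018) implementation, which evaluates hierarchical Bayesian models fit in Stan; the simulated data, the two polynomial candidate models, and the Gaussian likelihood below are stand-ins chosen only to show the mechanics of the procedure (fit on k-1 folds, score the held-out fold, sum the scores, prefer the model with better held-out fit).

    import numpy as np

    # Generic k-fold cross-validation sketch: compare two candidate models by
    # summed held-out log-likelihood. All models and data here are placeholders.
    rng = np.random.default_rng(1)
    n = 500
    x = rng.normal(size=n)
    y = 1.5 * x + rng.normal(scale=1.0, size=n)   # simulated outcome

    def fit_and_score(train_idx, test_idx, degree):
        """Fit a polynomial of the given degree on the training fold and
        return the Gaussian log-likelihood of the held-out observations."""
        coefs = np.polyfit(x[train_idx], y[train_idx], degree)
        sigma = (y[train_idx] - np.polyval(coefs, x[train_idx])).std(ddof=degree + 1)
        resid_test = y[test_idx] - np.polyval(coefs, x[test_idx])
        return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                      - resid_test**2 / (2 * sigma**2))

    k = 10
    folds = np.array_split(rng.permutation(n), k)
    scores = {"model A (linear)": 0.0, "model B (cubic)": 0.0}
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        scores["model A (linear)"] += fit_and_score(train_idx, test_idx, degree=1)
        scores["model B (cubic)"] += fit_and_score(train_idx, test_idx, degree=3)

    # The model with the higher summed held-out log-likelihood predicts better.
    for name, s in scores.items():
        print(f"{name}: held-out log-likelihood = {s:.1f}")
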
