Everyday speech is rife with errors and disfluencies, yet processing what we hear usually feels effortless. How does the language comprehension system accomplish such an impressive feat? The current experiment tests the hypothesis that listeners draw on relevant contextual and linguistic cues to anticipate speech errors and mentally correct them even before receiving an explicit correction from the speaker. In a visual-world eye-tracking experiment, we monitored participants’ eye movements to objects in a display while they listened to utterances containing reparandum-repair speech errors (e.g., …his cat, uh I mean his dog…). We systematically manipulated the contextual plausibility of the misspoken word and the certainty with which the speaker uttered it. Results showed that listeners immediately exploited these cues to generate top-down expectations regarding the speaker’s communicative intention. Crucially, listeners used these expectations to constrain the bottom-up speech input and mentally correct perceived speech errors even before the speaker initiated the correction. The results provide compelling evidence that correcting speech errors is a joint process involving both the speaker and the listener.

Document Type

Post-print Article

Publication Date

December 2018
Publisher Statement

Copyright © 2018 American Psychological Association. Article first published online: December 2018.

DOI: 10.1037/xge0000544

The definitive version is available at:

Please note that downloads of the article are for private/personal use only.

Full Citation:

Lowder, Matthew W. "I See What You Meant to Say: Anticipating Speech Errors During Online Sentence Processing." Journal of Experimental Psychology: General (December 17, 2018): 1-32.