What are the effects of word‐by‐word predictability on sentence processing times during the natural reading of a text? Although information complexity metrics such as surprisal and entropy reduction have been useful in addressing this question, these metrics tend to be estimated using computational language models, which require some degree of commitment to a particular theory of language processing. Taking a different approach, this study implemented a large‐scale cumulative cloze task to collect word‐by‐word predictability data for 40 passages and compute surprisal and entropy reduction values in a theory‐neutral manner. A separate group of participants read the same texts while their eye movements were recorded. Results showed that increases in surprisal and entropy reduction were both associated with increases in reading times. Furthermore, these effects did not depend on the global difficulty of the text. The findings suggest that surprisal and entropy reduction independently contribute to variation in reading times, as these metrics seem to capture different aspects of lexical predictability.
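For readers unfamiliar with the two metrics, their standard information-theoretic definitions (not stated in the abstract, but conventional in this literature following Hale and Levy) are:

```latex
% Surprisal of word w_i: its negative log-probability given the preceding context.
S(w_i) = -\log_2 P(w_i \mid w_1, \ldots, w_{i-1})

% Entropy reduction: the decrease (truncated at zero) in uncertainty about how
% the sentence will continue, incurred by processing w_i.
\Delta H_i = \max\bigl(0,\; H_{i-1} - H_i\bigr), \qquad
H_i = -\sum_{c} P(c \mid w_1, \ldots, w_i)\, \log_2 P(c \mid w_1, \ldots, w_i)
```

where $c$ ranges over possible continuations. In the cloze-based approach described above, the probabilities $P$ are estimated directly from participants' completions rather than from a computational language model.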

Document Type

Post-print Article

Publication Date

February 2018
Publisher Statement

Copyright © 2018 Cognitive Science Society, Inc. Article first published online: February 2018.

DOI: 10.1111/cogs.12597

The definitive version is available at:

Please note that downloads of the article are for private/personal use only.

Full citation:

Lowder, Matthew Warren, Wonil Choi, Fernanda Ferreira, and John M. Henderson. "Lexical Predictability during Natural Reading: Effects of Surprisal and Entropy Reduction." Cognitive Science: A Multidisciplinary Journal, published online (February 2018): 1-18.