Abstract
Cloze tests have a long history and have been used to measure various abilities, including intelligence, reading comprehension, and language proficiency. To locate cloze tests within a nomological network of cognitive abilities, we conducted a multilevel random-effects meta-analysis covering 110 years of research. Studies were eligible if they provided a measure of association between a cognitive fill-in-the-blank test and any cognitive ability test. We synthesized manifest correlations from 89 studies (N = 37,912, k = 634) and found an average correlation of r = .54 (95% CI [.49, .59], k = 485) with crystallized intelligence, r = .48 (95% CI [.42, .54], k = 69) with fluid intelligence, and r = .61 (95% CI [.46, .77], k = 32) with general intelligence. Although the typical cloze test is applied today to measure reading comprehension, our results revealed a similarly strong association with a broad range of crystallized abilities. Of the key moderators we investigated—text base, administration mode, deletion pattern, and response type—only response type showed a significant effect. Sensitivity analyses supported the robustness of our findings. We conclude by revisiting the origin of the cloze test and highlighting the need for systematic studies on how different cloze test designs affect construct validity. Whereas the meta-analytic database predominantly originates from language research, where cloze tests are entrenched as markers of language proficiency, we propose reframing cloze tests as a versatile intelligence test format—just as multiple-choice tests constitute a testing method—that can be tailored to assess various specific cognitive abilities.
