Tuesday, September 02, 2025

From the #Cattell-Horn-Carroll (#CHC) #cognitive #intelligence theory archives: Photos of an important 1999 Carroll, Horn, Woodcock, Roid et al. meeting in Chapel Hill, NC.

I was recently cleaning my office when I stumbled upon these priceless photos from a historic 1999 meeting in Chapel Hill, NC, that involved John Horn, Jack Carroll, Richard Woodcock, Gale Roid, John Wasserman, Fred Schrank, and myself.  The provenance (I’ve always wanted to use this word 😉) for the meeting is provided below the pictures in the form of extracted quotes from Wasserman (2019) and McGrew (2023) (links below), which I confirmed with John Wasserman via a personal email on August 30, 2025.

The CHC-based WJ-R had already been published in 1989, and the WJ III author team was nearing completion of the CHC-based WJ III (2001).  Unbeknownst to many, Woodcock was originally planned to be one of the coauthors of the SB5 (along with Gale Roid), which explains his presence in the photos that document one of several planning meetings for the CHC-based SB5.

I was also involved as a consultant during the early planning for the CHC-based SB5 because of my knowledge of the evolving CHC theory.  My role was to review and integrate all available published and unpublished factor-analysis research on all prior editions of the SB legacy tests.  I post these pictures with the names of the people in each photo listed immediately below it.  No other comments (save for the next paragraph) are provided.

To say the least, my presence at this meeting (as well as at many other meetings with Carroll and Horn, together and individually, that occurred during the planning of the various editions of the WJ batteries) was surreal.  One could sense a paradigm shift in intelligence testing happening in real time during the meetings!  The expertise of the leading theorists behind what became known as CHC theory, together with the applied test-development expertise of Woodcock and Roid, provided me with learning experiences that cannot be captured in any book or university coursework.

Click on images to enlarge.  

Be gentle; these are the best available copies of images taken with an old-school camera (not smartphone-based digital images).

(Carroll, Woodcock, McGrew, Schrank)

(Carroll, Woodcock, McGrew)

(Woodcock, Wasserman, Roid, Carroll, Horn)

(Wasserman, Roid, Carroll, Horn, McGrew)

(Carroll, Woodcock)


———————-


“It was only when I left TPC for employment with Riverside Publishing (now Houghton-Mifflin-Harcourt; HMH) in 1996 that I met Richard W. Woodcock and Kevin S. McGrew and became immersed in the extended Gf-Gc (fluid-crystallized)/Horn-Cattell theory, beginning to appreciate how Carroll's Three-Stratum (3S) model could be operationalized in cognitive-intellectual tests. Riverside had been the home of the first Gf-Gc intelligence test, the Stanford–Binet Intelligence Scale, Fourth Edition (SB IV; R. L. Thorndike, Hagen, & Sattler, 1986), which was structured hierarchically with Spearman's g at the apex, four broad ability factors at a lower level, and individual subtests at the lowest level. After acquiring the Woodcock–Johnson (WJ-R; Woodcock & Johnson, 1989) from DLM Teaching Resources, Riverside now held a second Gf-Gc measure. The WJ-R Tests of Cognitive Ability measured seven broad ability factors from Gf-Gc theory with an eighth broad ability factor possible through two quantitative tests from the WJ-R Tests of Achievement. When I arrived, planning was underway for new test editions – the WJ III (Woodcock, McGrew, & Mather, 2001) and the SB5 (Roid, 2003) – and Woodcock was then slated to co-author both tests, although he later stepped down from the SB5. Consequently, I had the privilege of participating in meetings in 1999 with John B. Carroll and John L. Horn, both of whom had been paid expert consultants to the development of the WJ-R” (Wasserman, 2019, p. 250)

——————-

“In 1999, Woodcock brokered the CHC umbrella term with Horn and Carroll for practical reasons (McGrew 2005)—to facilitate internal and external communication regarding the theoretical model of cognitive abilities underlying the then-overlapping test development activities (and some overlapping consultants, test authors, and test publisher project directors; John Horn, Jack Carroll, Richard Woodcock, Gale Roid, Kevin McGrew, Fred Schrank, and John Wasserman) of the Woodcock–Johnson III and the Stanford Binet–Fifth Edition by Riverside Publishing” (McGrew, 2023, p. 3)

Wednesday, August 27, 2025

IQs Corner: Practice effects persist over two decades of cognitive testing: Implications for longitudinal research - #practiceeffect #cognitive #neurocognit #IQ #intelligence #schoolpsychology #schoolpsychologists

Click on image to enlarge for easy reading


The medRxiv preprint is available at https://doi.org/10.1101/2025.06.16.25329587

Elman et al. (2025)


ABSTRACT 

Background: Repeated cognitive testing can boost scores due to practice effects (PEs), yet it remains unclear whether PEs persist across multiple follow-ups and long durations. We examined PEs across multiple assessments from midlife to old age in a nonclinical sample.

Method: Men (N=1,608) in the Vietnam Era Twin Study of Aging (VETSA) underwent neuropsychological assessment comprising 30 measures across 4 waves (~6-year testing intervals) spanning up to 20 years. We leveraged age-matched replacement participants to estimate PEs at each wave. We compared cognitive trajectories and MCI prevalence using unadjusted versus PE-adjusted scores.

Results: Across follow-ups, a range of 7-12 tests (out of 30) demonstrated significant PEs, especially in episodic memory and visuospatial domains. Adjusting for PEs resulted in improved detection of cognitive decline and MCI, with up to 20% higher MCI prevalence.  

Conclusion: PEs persist across multiple assessments and decades, underscoring the importance of accounting for PEs in longitudinal studies.
  
Keywords: practice effects; repeat testing; serial testing; longitudinal testing; mild cognitive impairment; cognitive change
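
Comment: The replacement-participants design described in the Method section is conceptually simple: at each wave, the practice effect (PE) is estimated as the difference between the returnees’ mean score and the mean score of age-matched participants taking the test for the first time. Below is a minimal Python sketch of that logic, using simulated data and hypothetical variable names (the actual VETSA analysis is considerably more elaborate):

# Sketch of a replacement-participant practice-effect (PE) estimate.
# Simulated scores; illustration only, not the VETSA analysis code.
import numpy as np

rng = np.random.default_rng(0)

# Returnees: tested at prior waves, so wave-2 scores include a practice boost.
returnees_wave2 = rng.normal(52.0, 10.0, size=400)

# Replacements: same age, tested for the first time; no prior exposure.
replacements_wave2 = rng.normal(50.0, 10.0, size=100)

# PE estimate = returnee mean minus age-matched replacement mean.
pe = returnees_wave2.mean() - replacements_wave2.mean()
print(f"Estimated practice effect at wave 2: {pe:.2f} points")

# PE-adjusted scores subtract the estimated boost before modeling decline,
# which is what improves detection of cognitive decline and MCI.
adjusted_wave2 = returnees_wave2 - pe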

Tuesday, August 26, 2025

IQs Corner: What is happening in gifted/high ability research from 2013 to 2023? - #gifted #talented #highability #EDPSY #intelligence #achievement #schoolpsychologists #schoolpsychology

 Click on image to enlarge for easier reading



Trends and Topics Evolution in Research on Giftedness in Education: A Bibliometric Analysis.  Psychology in the Schools, 2025; 62:3403–3413


Rius, C., Aguilar‐Moya, R., Martínez‐Córdoba, C., Cantos‐Roldan, B., Vidal‐Infer, A.

Open access article that can be downloaded and read for free at this link.

ABSTRACT

The article explores the evolution of research on giftedness and high ability through a bibliometric analysis. It highlights challenges in identifying gifted individuals, who represent approximately 6.5% of students, noting that biased instruments and discriminatory selection practices may affect the identification of high-ability students. The tripartite model, defining giftedness as a combination of high intellectual ability, exceptional achievement, and potential for excellence, serves as a fundamental framework for this study. Using a latent Dirichlet allocation (LDA) topic model, major research topics were identified, and trends from 2013 to 2023 were analyzed based on 1071 publications in the Web of Science database. The analysis revealed that publications focus on topics such as giftedness, talent management, and educational programs, showing a significant increase in research on these areas over the past decade. Key topics included psychometrics, gifted programs, and environmental factors. The United States, Germany, and Spain led in productivity, with prominent publications addressing cognitive and socio‐emotional aspects of giftedness. Findings underscore the need for targeted educational interventions, including acceleration and enrichment programs, to address the academic and emotional challenges faced by gifted students. Research is shifting toward understanding the environmental influences on these students, highlighting the importance of supportive educational environments for their success.
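
Comment: For readers unfamiliar with the topic-modeling method, latent Dirichlet allocation (LDA) treats each document as a mixture of topics and each topic as a distribution over words. Below is a minimal Python sketch using scikit-learn and a few hypothetical abstract strings; it illustrates the general technique, not the authors’ actual corpus or pipeline:

# Minimal LDA topic-modeling sketch (hypothetical documents).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder stand-ins for publication abstracts.
abstracts = [
    "gifted identification psychometrics intelligence testing",
    "talent management acceleration enrichment programs schools",
    "socio-emotional development of high ability students",
]

# Bag-of-words representation of the documents.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

# Fit an LDA model with a small number of topics for illustration.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

# Print the top words for each topic.
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")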

Monday, August 25, 2025

IQs Corner: What is (and what is not) clinical judgment in intelligence test interpretation? - #IQ #intelligence #ID #intellectualdisability #schoolpsychologists #schoolpsychology #diagnosis

What is clinical judgment in intelligence testing?  

This term is frequently invoked when psychologists explain or defend their intelligence test interpretations.  Below is a brief explanation I’ve used to describe what it is…and what it is not, based on several sources.  Schalock and Luckasson’s AAIDD Clinical Judgment book (now in a revised 2014 edition) is the best single source I have found that addresses this slippery concept in intelligence testing, particularly in the context of a potential diagnosis of intellectual disability (ID)—it is recommended reading.

—————

Clinical judgment is a process based on solid scientific knowledge and is characterized as being “systematic (i.e., organized, sequential, and logical), formal (i.e., explicit and reasoned), and transparent (i.e., apparent and communicated clearly)” (Schalock & Luckasson, 2005, p. 1). The application of clinical judgment in the evaluation of IQ scores in the diagnosis of intellectual disability includes consideration of multiple factors that might influence the accuracy of an assessment of general intellectual ability (APA: DSM-5, 2013). There is “unanimous professional consensus that the diagnosis of intellectual disability requires comprehensive assessment and the application of clinical judgment” (Brief of Amici Curiae American Psychological Association, American Psychiatric Association, American Academy of Psychiatry and the Law, Florida Psychological Association, National Association of Social Workers, and National Association of Social Workers Florida Chapter, in Support of Petitioner; Hall v. Florida; S.Ct., No. 12-10882; 2014; p. 8).

Clinical judgment in the interpretation of scores from intelligence test batteries should not be used as the basis for “gut instinct” or “seat-of-the-pants” impressions and conclusions of the assessment professional (MacVaugh & Cunningham, 2009), nor as justification for shortened evaluations, a means to convey stereotypes or prejudices, a substitute for insufficiently explored questions, or an excuse for incomplete testing and missing data (Schalock & Luckasson, 2005). Idiosyncratic methods and intuitive conclusions are not scientifically based and have unknown reliability and validity.

If clinical judgment interpretations and opinions regarding an individual’s level of general intelligence are based on novel or emerging research-based principles, the assessment professional must document the bases for these new interpretations as well as the limitations of these principles and methods. This requirement is consistent with Standard 9.4 of the Standards for Educational and Psychological Testing, which states:

When a test is to be used for a purpose for which little or no validity evidence is available, the user is responsible for documenting the rationale for the selection of the test and obtaining evidence of the reliability/precision of the test scores and the validity of the interpretations supporting the use of the scores for this purpose (p. 143).


American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014).  Standards for educational and psychological testing.  Washington, DC:  Author. 

American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.

Brief of Amici Curiae American Psychological Association, American Psychiatric Association, American Academy of Psychiatry and the Law, Florida Psychological Association, National Association of Social Workers, and National Association of Social Workers Florida Chapter, in Support of Petitioner; Hall v. Florida; S.Ct., No. 12-10882; 2014; p. 8.

MacVaugh, G. S. & Cunningham, M. D. (2009). Atkins v. Virginia: Implications and recommendations for forensic practice.  The Journal of Psychiatry and Law, 37, 131-187.

Schalock, R. L. & Luckasson, R. (2005). Clinical judgment. Washington, DC: American Association on Intellectual and Developmental Disabilities. 

—————

Kevin S. McGrew, PhD.

Educational Psychologist

Director 

Institute for Applied Psychometrics (IAP)

www.theMindHub.com


Sunday, August 17, 2025

Thoughts on the definition of dyslexia: More on the ongoing debate re the definition of dyslexia - #dyslexia #SLD #schoolpsychologists #schoolpsychology #SPED #reading

Thoughts on the Definition of Dyslexia.  Annals of Dyslexia (click here to read or download - open access)

Linda S. Siegel,  David P. Hurford, Jamie L. Metsala, Michaela R. Ozier, & Alex C. Fender

Abstract 

The International Dyslexia Association's current definition of dyslexia was approved by its Board of Directors on November 12, 2002. After two decades of scientific inquiry into the nature of dyslexia, it is time to reconsider and potentially revise the definition in light of what has been learned. We propose a definition of dyslexia based on its essential nature. Dyslexia is a specific learning disability in reading at the word level. It involves difficulty with accurate and/or fluent word recognition and/or pseudoword reading. We also suggest that the definition should focus solely on dyslexia's core features and should not include risk factors, potential secondary consequences, or other characteristics. Until those factors can reliably differentiate between those with and without dyslexia at an individual level, they should not be included in the definition.

Monday, August 11, 2025

WJ V Technical Manual Abstract assessment service bulletin now available for download - #WJV #technicalmanual #schoolpsychologists #schoolpsychology #SLD #SPED #assessment #achievement #intelligence

Click on image to enlarge



The WJ V Technical Manual Abstract assessment service bulletin is now available via Riverside Insights (click here to download and read).  Think of it as an abridged version of the massive WJ V Technical Manual (LaForte, Dailey & McGrew, 2025).  Required reading for anyone interested in the WJ V.  Of course, reading the complete “manifesto” is highly recommended.
 
This is a technical abstract for the Woodcock-Johnson® V (WJ V™; McGrew, Mather, LaForte, & Wendling, 2025), a comprehensive assessment system for measuring general intellectual ability (g), specific cognitive abilities, oral language abilities, and academic achievement from age 4 through 90+. It describes the updates, organization, and technical aspects of the WJ V, including reliability information and evidence to support the validity of the WJ V test and cluster score interpretations. While this document provides a high-level summary of these topics, readers should consult the Woodcock-Johnson V Technical Manual (LaForte et al., 2025) for more comprehensive documentation.

A #metaanalysis of #assessment of self-regulated learning (#SRL) - #selfregulatedlearning #learning #motivation #CAMML #EDPSY #schoolpsychologists #schoolpsychology #conative


Self-regulated learning (SRL) strategies are an important component of models of school learning.  Below is a new meta-analysis of SRL assessment methods.  Overall effect sizes are not large.  More R&D is needed to develop practical applied SRL measurement tools.  SRL is one of the major components of the 2022 Cognitive-Affective-Motivation Model of Learning (CAMML; click here to access the article).

Multimethod assessment of self-regulated learning in primary, secondary, and tertiary education – A meta-analysis.  Learning and Individual Differences (open access—click here to access).

Abstract

Self-regulated learning (SRL) can be measured in several ways, which can be broadly classified into online and offline instruments. Although both online and offline measurements have advantages and disadvantages, the over-dependence of SRL research on offline measurements has been criticised considerably. Currently, efforts are being made to use multimethod SRL assessments. We examined 20 articles with 351 effect sizes that assessed SRL with at least two instruments on at least two SRL components. Most effect sizes were not statistically significant but descriptively higher than others. Combinations of two online instruments showed the highest effect size (r = 0.24). Overall correlations between instruments were highest for university students (r = 0.21). Additionally, results for cognition showed the highest effect size measured with behavioural traces (r = 0.28), and for metacognition measured with microanalysis (r = 0.35). The component of motivation was best measured using self-report questionnaires (r = 0.29).
Educational relevance statement
Self-regulated learning is an important predictor of academic success. It is therefore necessary to measure it as precisely and comprehensively as possible. Knowing which instruments are best suited for each age group or SRL component, or which reliably predict a specific achievement variable, can help educators pick the best instrument for their needs.
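
Comment: For context on how instrument-agreement correlations like those above are typically pooled across studies, below is a minimal Python sketch of a Fisher-z random-effects (DerSimonian-Laird) average applied to hypothetical correlations and sample sizes; it illustrates the general method, not the authors’ analysis:

# Fisher-z pooling of correlations (DerSimonian-Laird random effects).
# Hypothetical r values and sample sizes for illustration.
import numpy as np

r = np.array([0.24, 0.21, 0.28, 0.35, 0.29])  # instrument-pair correlations
n = np.array([120, 300, 85, 60, 210])         # per-study sample sizes

z = np.arctanh(r)    # Fisher z transform
v = 1.0 / (n - 3)    # sampling variance of each z
w = 1.0 / v          # fixed-effect weights

# Between-study heterogeneity (DerSimonian-Laird tau-squared).
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(z) - 1)) / C)

# Random-effects pooled estimate, back-transformed to r.
w_re = 1.0 / (v + tau2)
z_re = np.sum(w_re * z) / np.sum(w_re)
print(f"Pooled r = {np.tanh(z_re):.3f}")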

Wednesday, August 06, 2025

Leaving no child behind—Beyond cognitive and achievement abilities - #CAMML source “fugitive/grey” working paper now available. Enjoy - #NCLB #learning #EDPSY #motivation #affective #cognitive #intelligence #conative #noncognitive #schoolpsychology #schoolpsychologists



I’ve recently made several posts regarding the importance of conative (i.e., motivation, self-regulated learning strategies, etc.) learner characteristics and how they should be integrated with cognitive abilities (as per the CHC theory of cognitive abilities) to better understand the interplay between learner characteristics and school learning.  These posts mentioned (and linked to) my recent 2022 article where I articulate the Cognitive-Affective-Motivation Model of Learning (CAMML; click here to access).

In the article I mention that the 2022 CAMML model had its roots in early work I completed as one of the first set of Principal Investigators during the first five years of the University of Minnesota’s National Center on Educational Outcomes (NCEO).  As a result of those posts, I’ve had several requests for the original working paper, which is best characterized as “fugitive” or “grey” literature.

The brief backstory is that the original 2004 document was a working paper (6-15-04; Increasing the Chance of No Child Being Left Behind: Beyond Cognitive and Achievement Abilities, by Kevin McGrew, David Johnson, Anna Casio, Jeffrey Evans) written with the aid of discretionary funds from the then U.S. Department of Education Office of Special Education Programs (OSEP) during the NCLB era.  The working draft was submitted but curiously never saw the light of day.

With this post I’m now making the complete 2004 working paper (with its writing, spelling, and grammar blemishes in their full glory) available as a PDF.  Click here to access.  Although dated (now 20 years old), IMHO the lengthy paper provides a good accounting of the relevant literature up to 2004, much of which is still relevant.  Below are images of the TOC pages, which should give you a hint of the treasure trove of information and literature reviewed.  Enjoy.  Hopefully this MIA paper may help others pursue research and theoretical study in this important area.

Click on images to enlarge for easy reading







Saturday, August 02, 2025

Research Byte: Is trying harder enough? Causal analysis of the effort-IQ relationship suggests not - #intelligence #IQ #motivation #volition #CAMML #conative #noncognitive



Is Trying Harder Enough? Causal Analysis of the Effort-IQ Relationship Suggests Not.  Timothy Bates. Intelligence and Cognitive Abilities (open access—click here to locate article to read or download)


Abstract


Claims that effort increases cognitive scores are now under great doubt. What is needed is randomized controlled trials optimized for testing causal influence and avoiding confounding of self-evaluation of performance with feelings of good effort. Here we report three large studies using unconfounded measures of effort and instrumental analysis to isolate any causal effect of effort on cognitive score. An initial study (N = 393) validated an appropriate effort measure, demonstrating excellent external and convergent validity (β = .61). Study 2 (N = 500, preregistered) randomly allocated subjects to a performance incentive, using an instrumental variable analysis to detect causal effects of effort. The incentive successfully manipulated effort (β = .18, p = .001). However, the causal effect of effort on scores was near-zero and non-significant (β = .04, p = .886). Study 3 (N = 1,237) replicated this null result with preregistered analysis and an externally developed measure of effort: incentive again raised reported effort (β = .17, p < .001), but effort had no significant causal effect on cognitive score (β = .27 [−0.07, 0.62], p = .15). Alongside evidence of research fraud and confounding in earlier studies, the present evidence for the absence of any causal effects of effort on cognitive scores suggests that effort research should shift its focus to goal setting – where effort is useful – rather than raising basic ability, which it appears unable to do.
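
Comment (methods note): The causal logic of Studies 2 and 3 rests on instrumental variable analysis: because the incentive is randomly assigned, it can serve as an instrument for effort, separating effort’s causal effect on test scores from the usual confounds. Below is a minimal two-stage least squares (2SLS) sketch in Python with simulated data (the true causal effect is set to zero); it illustrates the technique, not the study’s own code:

# Two-stage least squares (2SLS) sketch of the incentive -> effort -> score design.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

incentive = rng.integers(0, 2, size=n)          # randomized instrument
effort = 0.5 * incentive + rng.normal(size=n)   # instrument raises effort
score = 0.0 * effort + rng.normal(size=n)       # true effort -> score effect is zero

# Stage 1: regress effort on the instrument (with intercept).
Z = np.column_stack([np.ones(n), incentive])
b1 = np.linalg.lstsq(Z, effort, rcond=None)[0]
effort_hat = Z @ b1

# Stage 2: regress score on the predicted (exogenous) part of effort.
X2 = np.column_stack([np.ones(n), effort_hat])
b2 = np.linalg.lstsq(X2, score, rcond=None)[0]
print(f"2SLS estimate of effort -> score: {b2[1]:.3f} (true value: 0)")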


Select quote from the discussion: “The present results suggest a potential ‘central dogma of cognition’: that volitional effort can direct cognitive resources but cannot fundamentally alter or bypass the efficacy of the underlying cognitive systems themselves.”


These findings are consistent with my proposed Cognitive-Affective-Motivation Model of Learning (CAMML), grounded extensively in Richard Snow’s concept of aptitude trait complexes, in which motivational constructs are seen as driving and directing the use of cognitive abilities (via personal investment mechanisms) but not as having a direct causal effect on cognitive abilities.  See the first of the two figures below.  Note the lack of causal arrows from conative and affective domain constructs to CHC cognitive abilities.  The paper can be accessed by clicking here.

Click on images to enlarge for easier viewing






Tuesday, July 29, 2025

Journal of Intelligence “Best Paper Award” for McGrew, Schneider, Decker & Bulut (2023) Psychometric network analysis of CHC measures - #psychometric #networkanalysis #intelligence #CHC #WJIV #bestpaper #schoolpsychology #schoolpsychologist


Today I (Kevin McGrew) and colleagues Joel Schneider, Scott Decker, and Okan Bulut were pleased to learn that our 2023 Journal of Intelligence article listed above (open access—click the link to read or download) was selected for one of the two 2023 “Best Paper Awards.”

As stated at the journal award page, “The Journal of Intelligence Best Paper Award is granted annually to highlight publications of high quality, scientific significance, and extensive influence. The evaluation committee members choose two articles of exceptional quality that were published in the journal the previous year and announce them online by the end of June.”

Below are the abstract and two figures that may pique your interest. We thank the members of the JOI evaluation committee.

Abstract
For over a century, the structure of intelligence has been dominated by factor analytic methods that presume tests are indicators of latent entities (e.g., general intelligence or g). Recently, psychometric network methods and theories (e.g., process overlap theory; dynamic mutualism) have provided alternatives to g-centric factor models. However, few studies have investigated contemporary cognitive measures using network methods. We apply a Gaussian graphical network model to the age 9–19 standardization sample of the Woodcock–Johnson Tests of Cognitive Ability—Fourth Edition. Results support the primary broad abilities from the Cattell–Horn–Carroll (CHC) theory and suggest that the working memory–attentional control complex may be central to understanding a CHC network model of intelligence. Supplementary multidimensional scaling analyses indicate the existence of possible higher-order dimensions (PPIK; triadic theory; System I-II cognitive processing) as well as separate learning and retrieval aspects of long-term memory. Overall, the network approach offers a viable alternative to factor models with a g-centric bias (i.e., bifactor models) that have led to erroneous conclusions regarding the utility of broad CHC scores in test interpretation beyond the full-scale IQ, g.
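
Comment: For readers new to psychometric network methods, a Gaussian graphical model estimates partial correlations among measures: each edge reflects the association between two tests after controlling for all the others, typically with regularization to shrink weak edges to zero. Below is a minimal Python sketch using scikit-learn’s graphical lasso on simulated test scores; it illustrates the model class, not the WJ IV analysis reported in the article:

# Gaussian graphical model sketch: partial-correlation network via graphical lasso.
# Simulated scores stand in for real standardization data.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(2)
n_people, n_tests = 500, 8

# Simulate positively correlated test scores (compound-symmetry covariance).
cov = 0.3 * np.ones((n_tests, n_tests))
np.fill_diagonal(cov, 1.0)
scores = rng.multivariate_normal(np.zeros(n_tests), cov, size=n_people)

# Fit the graphical lasso; the penalty is chosen by cross-validation.
model = GraphicalLassoCV().fit(scores)
precision = model.precision_  # sparse inverse covariance matrix

# Convert the precision matrix to partial correlations (network edge weights).
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
print(np.round(partial_corr, 2))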



Click on images to enlarge for easier viewing/reading






Wednesday, July 23, 2025

Research Byte: Let’s hear it (again) for #visual-spatial (#Gv) #workingmemory (#Gwm) and math #reasoning (#Gf-RQ) — #CHC #SPED #EDPSY #schoolpsychology #schoolpsychologist #WJV

From Spatial Construction to Mathematics: Exploring the Mediating Role of Visuospatial Working Memory.  Developmental Psychology.  An open access article that can be downloaded—Click here.

Yuxin Zhang, Rebecca Bull, and Emma C. Burns.

Abstract

This study examined the longitudinal pathways from children’s early spatial skills at 5 and 7 years to their mathematics reasoning abilities at 17 years in a large cohort sample (N = 16,338) from the Millennium Cohort Study. Children were assessed at four time points: Sweep 3 (mean age = 5.29), Sweep 4 (mean age = 7.23), Sweep 5 (mean age = 11.17), and Sweep 7 (mean age = 17.18), with measures including spatial construction skills, visuospatial working memory, mathematics achievement, and mathematics reasoning skills. Path analyses revealed that spatial construction at age 5 directly predicted mathematics achievement at age 7 after accounting for sex, age, socioeconomic status, vocabulary, and nonverbal reasoning ability. Furthermore, spatial construction at 5 and 7 years was directly associated with mathematics reasoning skills at 17, and spatial working memory at age 11 partially mediated this relationship. Notably, the direct effects of spatial construction on mathematics reasoning at age 17 remained significant and robust after accounting for the mediator and covariates. These findings highlight the potential value of early spatial construction skills as predictors of subsequent mathematical development over the long term.

Public Significance Statement. Children with stronger spatial skills at age 5 are more likely to achieve higher scores in mathematics at ages 7 and 17. Visuospatial working memory partly explained this link, and early spatial skills showed a direct and robust association with later mathematics. This study identified early spatial skills as an important long-term predictor of mathematics from preschool through adolescence. The findings highlight the potential of infusing spatial thinking and using spatial strategies to better understand and solve mathematics problems.
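
Comment: The mediation finding (spatial construction to visuospatial working memory to mathematics reasoning) can be illustrated with a simple indirect-effect calculation: the indirect effect is the product of the predictor-to-mediator path (a) and the mediator-to-outcome path (b). Below is a minimal Python sketch using statsmodels and simulated data; the published study used full path analysis with covariates on its N = 16,338 sample:

# Mediation sketch: spatial construction -> visuospatial working memory -> math.
# Simulated data; illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000

spatial = rng.normal(size=n)                               # spatial construction, age 5
vswm = 0.4 * spatial + rng.normal(size=n)                  # visuospatial WM, age 11
math17 = 0.3 * spatial + 0.5 * vswm + rng.normal(size=n)   # math reasoning, age 17

# Path a: predictor -> mediator.
a = sm.OLS(vswm, sm.add_constant(spatial)).fit().params[1]

# Paths c' (direct) and b (mediator), estimated jointly.
X = sm.add_constant(np.column_stack([spatial, vswm]))
fit = sm.OLS(math17, X).fit()
c_prime, b = fit.params[1], fit.params[2]

print(f"indirect effect (a*b) = {a * b:.3f}, direct effect (c') = {c_prime:.3f}")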

Click on image for easier viewing




Comment: I recently made a post regarding research demonstrating the importance of visual-spatial working memory abilities for spatial navigation, where I also mentioned the new (not yet online, as far as I know) WJ V Visual Working Memory test, which was decades in development—an interesting test development “back story.”