Monday, October 27, 2025
Let's hear it for morning Java/espresso!!! - #neurocognitive #brain #cognition #whitematter #java #coffee #morningjoe #espresso #Gf #fluidintelligence
Sunday, October 19, 2025
The effect of #processingspeed [#Gs] on academic #fluency in children with #neurodevelopmental disorders - #CHC #WISCV #WJV #intelligence #schoolpsychologists #schoolpsychology #SLD #SPED #fluency #EDPSY
PDF copy of article can be downloaded here.
Abstract
Poor processing speed (PS) is frequently observed in individuals with neurodevelopmental disorders. However, mixed findings exist on the predictive validity of such processing speed impairment and the role of working memory (WM). We conducted a retrospective chart review of patients evaluated at a developmental assessment clinic between March 2018 and December 2022. Patients with available data on the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V) and the Woodcock-Johnson, Fourth Edition, Tests of Achievement (WJ IV ACH) were included (n = 77, 69% male; Mage = 10.6, SDage = 2.5; FSIQ range = 47–129). We performed a mediation analysis with academic fluency (AF) as the dependent variable, PS as the predictor, WM as the mediator, and academic skills and general intelligence as covariates. Both the direct and indirect effects of PS were significant prior to adding covariates. However, only the direct effect of PS was robust, independent of the effects of academic skills and general intelligence. The indirect effect of PS through WM was nonsignificant after accounting for general academic skills and intelligence. Therefore, PS explains unique variance in AF. This finding suggests that PS may be an exception to the criticism of cognitive profile analysis. Interpreting the PS score as a relative strength or weakness within a cognitive profile may uniquely predict timed academic performance in youth with neurodevelopmental disorders.
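For readers who want to see the general shape of such an analysis, here is a minimal sketch of a covariate-adjusted mediation model (PS as predictor, WM as mediator, academic fluency as outcome) using simulated data. The variable names, simulated values, and the simple product-of-coefficients/bootstrap approach are illustrative assumptions; this is not the authors' code or analytic pipeline.

```python
# Minimal sketch of the mediation analysis described in the abstract:
# processing speed (PS) -> working memory (WM) -> academic fluency (AF),
# with academic skills (SKILLS) and general intelligence (G) as covariates.
# Column names and data are hypothetical; this is NOT the authors' code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 77  # sample size reported in the abstract

# Simulated standard scores for illustration only
df = pd.DataFrame({
    "PS": rng.normal(90, 15, n),
    "G": rng.normal(95, 15, n),
    "SKILLS": rng.normal(95, 15, n),
})
df["WM"] = 0.4 * df["PS"] + rng.normal(0, 10, n)
df["AF"] = 0.3 * df["PS"] + 0.2 * df["WM"] + 0.3 * df["SKILLS"] + rng.normal(0, 10, n)

def indirect_and_direct(data: pd.DataFrame) -> tuple[float, float]:
    """Product-of-coefficients mediation with covariates."""
    a = smf.ols("WM ~ PS + SKILLS + G", data=data).fit().params["PS"]
    out = smf.ols("AF ~ PS + WM + SKILLS + G", data=data).fit()
    b, direct = out.params["WM"], out.params["PS"]
    return a * b, direct

indirect, direct = indirect_and_direct(df)

# Percentile bootstrap CI for the indirect (PS -> WM -> AF) effect
boot = [indirect_and_direct(df.sample(n, replace=True))[0] for _ in range(2000)]
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"direct effect of PS: {direct:.3f}")
print(f"indirect effect via WM: {indirect:.3f}  (95% bootstrap CI {ci_lo:.3f}, {ci_hi:.3f})")
```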
[Do] Humans peak in midlife [?]: A combined #cognitive and #personality trait perspective - #intelligence #developmental #schoolpsychologists #schoolpsychology #CHC
Open access copy of article can be downloaded here.
Highlights
- Age trends reviewed across 16 key cognitive and personality-related dimensions.
- All variables plotted on a common scale to enable direct cross-domain comparisons.
- Age trajectories varied widely: some traits declined, others improved with age.
- A weighted composite index of functioning was developed from theory and evidence (see the illustrative sketch after this list).
- Overall cognitive-personality functioning peaks between ages 55 and 60.
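The second and fourth highlights describe the basic mechanics: put each trait's age trajectory on a common scale, then combine the trajectories into a weighted composite and read off the peak age. Here is a minimal sketch of that general idea, using invented trajectories and weights (they are not the article's data or weighting scheme).

```python
# Illustrative sketch only: a weighted composite of z-scored age trajectories.
# The trait curves and weights below are invented; they are NOT the article's data.
import numpy as np

ages = np.arange(20, 81)

# Hypothetical trajectories: one declining, one rising-then-plateauing, one slowly rising
processing_speed = 100 - 0.6 * (ages - 20)                           # declines with age
crystallized_knowledge = 70 + 30 * (1 - np.exp(-(ages - 20) / 25))   # improves, then levels off
conscientiousness = 50 + 0.25 * (ages - 20)                          # steadily improves

traits = np.vstack([processing_speed, crystallized_knowledge, conscientiousness])

# Put every trajectory on a common scale (z-scores across the age range)
z = (traits - traits.mean(axis=1, keepdims=True)) / traits.std(axis=1, keepdims=True)

weights = np.array([0.45, 0.35, 0.20])  # hypothetical theory-based weights
composite = weights @ z

print("Composite functioning peaks at age:", ages[composite.argmax()])
```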
Extremely important (IMHO) #CHC #cognitive #reading achievement #g+specific abilities #SEM paper - #schoolpsychology #schoolpsychologists #SPED #EDPSY #LD #SLD
Extremely interesting (important/intriguing) CHC cognitive-reading achievement relations meta-SEM paper. Why? Because, as far as I know, it is the first g+specific abilities paper to evaluate a model with causal relations specified within and between cognitive and reading achievement CHC constructs. Paper info below, as well as an open access link to the PDF. Also, this is the first time I've seen a meta-structural equation modeling analysis. Kudos to the authors.
Click on images to enlarge for easy reading
Abstract
Cognitive tests measure psychological constructs that predict the development of academic skills. Research on cognitive–reading achievement relations has primarily been completed with single-test batteries and samples, resulting in inconsistencies across studies. The current study developed a consensus model of cognitive–reading achievement relations using meta-structural equation modeling (meta-SEM) through a cross-sectional analysis of subtest correlations from English-language norm-referenced tests. The full dataset used for this study included 49,959 correlations across 599 distinct correlation matrices. These included correlations among 1112 subtests extracted from 137 different cognitive and achievement test batteries. The meta-SEM approach allowed for increased sampling of cognitive and academic reading skills measured by various test batteries to better inform the validity of construct relations. The findings were generally consistent with previous research, suggesting that cognitive abilities are important predictors of reading skills and generalize across different test batteries and samples. The findings are also consistent with integrated cognitive–reading models and have implications for assessment and intervention frameworks.
Keywords: cognitive abilities; reading skills; cognitive–achievement relations; CHC theory; meta-structural equation modeling
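Meta-SEM is typically a two-stage procedure: the correlations between each pair of measures are first pooled across samples (often after a Fisher r-to-z transform, weighted by sample size), and a structural model is then fit to the pooled correlation matrix. The toy sketch below illustrates only the first stage with invented numbers; it is not the authors' analysis, and the published study used far more sophisticated pooling across its 599 matrices.

```python
# Toy illustration of stage 1 of meta-SEM: pooling one correlation across studies.
# Values are invented; the actual study pooled 49,959 correlations.
import numpy as np

def pool_correlation(rs, ns):
    """Sample-size-weighted pooling via the Fisher r-to-z transform."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                 # Fisher r-to-z
    w = ns - 3                         # approximate inverse-variance weights
    z_pooled = np.sum(w * z) / np.sum(w)
    return np.tanh(z_pooled)           # back-transform to r

# Hypothetical correlations between a fluid reasoning (Gf) subtest and a
# reading comprehension subtest reported in three different test manuals
rs = [0.45, 0.52, 0.38]
ns = [350, 1200, 500]
print(f"Pooled Gf-reading correlation: {pool_correlation(rs, ns):.3f}")
# Stage 2 (not shown) fits the structural model to the full pooled matrix.
```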
Thursday, October 16, 2025
IQs Corner pub alert: CHC theory of cognitive abilities used to define and evaluate AI - #AI #CHC #intelligence #schoolpsychology #schoolpsychologists #IQ #EDPSY
- “AGI is an AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult.”
The lack of a concrete definition for Artificial General Intelligence (AGI) obscures the gap between today's specialized AI and human-level cognition. This paper introduces a quantifiable framework to address this, defining AGI as matching the cognitive versatility and proficiency of a well-educated adult. To operationalize this, we ground our methodology in Cattell-Horn-Carroll theory, the most empirically validated model of human cognition. The framework dissects general intelligence into ten core cognitive domains—including reasoning, memory, and perception—and adapts established human psychometric batteries to evaluate AI systems. Application of this framework reveals a highly “jagged” cognitive profile in contemporary models. While proficient in knowledge-intensive domains, current AI systems have critical deficits in foundational cognitive machinery, particularly long-term memory storage. The resulting AGI scores (e.g., GPT-4 at 27%, GPT-5 at 58%) concretely quantify both rapid progress and the substantial gap remaining before AGI.
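At its core, the framework's AGI score aggregates per-domain proficiency estimates across ten cognitive domains into a single percentage. A minimal sketch of that kind of scoring is shown below; the CHC-style domain labels, the equal weighting, and the proficiency values are illustrative assumptions, not the paper's actual battery, weights, or results.

```python
# Illustrative only: aggregating per-domain proficiency into a single AGI score.
# The domain list is CHC-inspired and the scores are invented, not the paper's.
domain_scores = {
    "reasoning (Gf)": 0.80,
    "knowledge (Gc)": 0.95,
    "reading/writing (Grw)": 0.90,
    "quantitative (Gq)": 0.75,
    "working memory (Gwm)": 0.55,
    "long-term memory storage (Gl)": 0.10,   # the "critical deficit" noted above
    "long-term retrieval (Gr)": 0.40,
    "visual processing (Gv)": 0.50,
    "auditory processing (Ga)": 0.45,
    "processing speed (Gs)": 0.60,
}

# Equal weighting of the ten domains; the paper's exact weighting may differ.
agi_score = sum(domain_scores.values()) / len(domain_scores)
print(f"Illustrative AGI score: {agi_score:.0%}")
```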
Wednesday, October 15, 2025
IQs Corner: AI and the Future of Skills, Volume 1, Capabilities and Assessments - #AI #intelligence #CHC #schoolpsychology #schoolpsychologists #education
Wednesday, October 08, 2025
IQs Corner. Comfortable fictions—The myth of #Gardner’s #multipleintelligences and other #educational fads (e.g., #learningstyles) - #schoolpsychology #schoolpsychologists #education #pseudoscience #intelligence
Monday, September 29, 2025
What does #ElonMusks #Grok (#xAI) think of Dr. Kevin #McGrew? - My Grok-based professional bio
I can now see how this technology can help save time with certain tasks. The bio is relatively accurate for a rough draft, but it did need some revisions and notable additions (here-and-there) to be more accurate and thorough. I’ve included a link to my formally posted bio for comparison purposes—yes, it is long as it is my “long” form bio I submit along with my CV when engaged in court/legal work.
The Grok bio does not include URL links to material. Perhaps there is a way to ask Grok to include links, but I don’t know if that is possible at this point. My guess is that it probably can. If you want to see more information on any topic, just go to theMindHub.com and navigate accordingly.
I did notice that during the 2+ minutes of Grok work, it spent a significant amount of time on my professional web page. I guess this means that if you want an AI tool to write your professional bio, you must have some decent content up on the web…it can’t make up material from thin air. Also, it would appear good at providing rough drafts that still require a content expert to revise accordingly.
All in all, an interesting exercise.
Dr. Kevin S. McGrew bio written by AI program Grok (xAI) on 09-29-25
(strikethrough and edits in red font)
Can be compared to Dr. McGrew’s formal bio posted at theMindHub.com
Dr. Kevin S. McGrew is a prominent educational psychologist and psychometrician whose career spans over five decades, beginning as a practicing school psychologist for 12 years (1975–1989) after earning his MS in School Psychology from Minnesota State University Moorhead. He later obtained a PhD in Educational Psychology (Special Education) from the University of Minnesota in 1989, focusing on research methods and applied psychometrics. McGrew has held academic roles such as Professor of Applied Psychology at St. Cloud State University (1990–2000) and Visiting Lecturer and Adjunct Research Professor at the University of Minnesota (2000–2024). He also served as Research Director for the Woodcock-Muñoz Foundation (2005-2014), Associate Director for Measurement Learning Consultants (2008-2014), and Director of the Institute for Applied Psychometrics (1998-current). His work has profoundly shaped the fields of human intelligence theory and testing, emphasizing empirical rigor, practical application in education and law, and integration of cognitive and non-cognitive factors.
Contributions to Intelligence Theory
McGrew is widely recognized as a leading scholar in the Cattell-Horn-Carroll (CHC) theory of cognitive abilities, a comprehensive psychometric framework that integrates fluid and crystallized intelligence (Cattell-Horn) with Carroll’s three-stratum model. Alongside Dr. Joel Schneider, he has served as an unofficial “gatekeeper” of CHC theory, authoring seminal updates and cross-battery interpretations that have made it the dominant model in contemporary intelligence research, test development, and interpretation of intellectual assessment results. His efforts have advanced CHC from a theoretical construct model to a practical tool for diagnosing learning disabilities, intellectual giftedness, and intellectual disabilities, influencing guidelines in the American Association on Intellectual and Developmental Disabilities (AAIDD) manual (2021).
McGrew has also pioneered integrative models that extend beyond pure cognition. He developed the Model of Academic Competence and Motivation (MACM) in the early 2000s, which posits that academic success arises from the interplay of cognitive abilities, conative (motivational) factors such as self-efficacy, achievement orientations, self-beliefs, and self-regulated learning, and affective elements such as personality, social-emotional skills, interest, and anxiety. This evolved into the broader Cognitive-Affective-Motivation Model of Learning (CAMML), emphasizing how these dimensions interact to predict school achievement and inform interventions. His research on psychometric network analysis has further refined CHC by modeling complex interrelationships among CHC abilities “beyond g” (general intelligence), as highlighted in his 2023 co-authored paper, named the Journal of Intelligence’s “Best Paper” of the year. McGrew has also explored the Flynn effect (rising IQ scores over time) and its implications for the interpretation of intelligence test scores in Atkins intellectual disability death penalty cases, as well as CHC’s links to adaptive behavior and neurotechnology applications for cognitive enhancement.
Contributions to Intelligence Testing
McGrew’s practical impact is most evident in intelligence test development and interpretation, where he championed “intelligent testing”—an art-and-science approach inspired by Alan Kaufman that prioritizes the interpretation of broad CHC composite score profiles over a single global IQ score. As primary measurement consultant for the Woodcock-Johnson Psychoeducational Battery—Revised (WJ-R, 1991), he authored its technical manual and conducted statistical analyses for the restandardization. The WJ-R was the first major battery of individually administered cognitive and achievement tests based on an integration of the psychometric intelligence research of Raymond Cattell and John Horn (aka, the Cattell-Horn Gf-Gc model of intelligence) and John Carroll’s seminal (1993) three-stratum model of intelligence. He co-authored the Woodcock-Johnson III (WJ III, 2001) and Woodcock-Johnson IV (WJ IV, 2014), which were the first major batteries explicitly grounded in CHC theory, introducing subtests for underrepresented abilities like auditory processing and long-term retrieval. As senior co-author, he led the development of the digitally administered Woodcock-Johnson V (WJ V, 2025), incorporating recent advances in CHC theory and psychometric network analysis, as well as conative measures.
Internationally, McGrew consulted on the Indonesian AJT Cognitive Assessment (2014–2017), creating that country’s first CHC-based individually administered intelligence test. He also advised the Ayrton Senna Institute on large-scale cognitive assessments in Brazil (2016–2025) and contributed to ASI research focused on integrating constructs from McGrew’s CAMML model with the ASI Big-5 personality-based social-emotional skill model. He has provided expert psychometric testimony (through written declarations and/or court testimony) in over 50 Atkins v. Virginia death penalty cases since 2009 and has contributed to refining intellectual disability criteria through CHC lenses.
Publications and Knowledge Dissemination
McGrew has authored or co-authored over 100 peer-reviewed journal articles, book chapters, and eight norm-referenced test batteries, alongside four books on intelligence test interpretation, including Clinical Interpretation of the Woodcock-Johnson Tests of Cognitive Ability (1997, revised 2005). His prolific output includes contributions to the APA Handbook of Intellectual and Developmental Disabilities (2021) and critiques of intellectual disability diagnostics. He maintains influential blogs like IQ’s Corner (www.iqscorner.com), which has synthesized CHC and intelligence theory and related assessment research for practitioners since 2004, and engages on platforms like X (@iqmobile), LinkedIn, and BlueSky (@iqmcgrew.bsky.social) to democratize complex psychometric concepts and share research and insights based on his multiple areas of expertise.
Awards and Legacy
McGrew’s influence is underscored by prestigious honors, including the University of Minnesota Distinguished Alumni Award (2016), Minnesota School Psychologists Association Lifetime Achievement Award (2015), Alan S. Kaufman Excellence in Assessment Award (2023), and the Dr. Richard W. Woodcock Award for Innovations in Ability Testing (2025). His work has bridged theory and practice, empowering educators, clinicians, and policymakers to use intelligence assessments more equitably and effectively, while advocating for a holistic view of human potential that includes motivation and self-regulation alongside cognition.
Thursday, September 25, 2025
IQs Corner. In what way are #intelligence testing (#IQ) and the US Supreme Court (#SCOTUS) alike?—SCOTUS will be hearing an important case addressing #multiple IQ scores and #intellectualdisability #Dx in the fall 2025 term
- The state of Alabama brief. Alabama is the petitioner. That is, if you want to read why the State of Alabama asked SCOTUS to hear this case, click on the link provided.
- The Alabama brief also includes a very long appendix for those who want to read the prior courts’ related testimony from the state and various experts. This is a very long read and is not necessary for readers who only want to understand the legal and professional issues.
- Supporting amicus brief from the US Justice Department Solicitor General.
- Two supporting briefs from legal groups—the American Legal Foundation and the Criminal Justice Legal Foundation.
- Supporting amicus briefs from other states (Idaho et al.; Kentucky)
- Smith’s respondent brief.
- Supporting amicus brief from the American Psychological Association (APA), American Psychiatric Association (ApA), and Alabama APA.
- Supporting amicus brief from AAIDD et al.
Tuesday, September 02, 2025
From the #Cattell-Horn-Carroll (#CHC) #cognitive #intelligence theory archives: Photos of important 1999 Carroll, Horn, Woodcock, Roid et al. meeting in Chapel Hill, NC.
I was recently cleaning my office when I stumbled upon these priceless photos from a 1999 historical meeting in Chapel Hill, NC that involved John Horn, Jack Carroll, Richard Woodcock, Gale Roid, John Wasserman, Fred Schrank, and myself. The provenance (I’ve always wanted to use this word 😉) for the meeting is provided below the pictures in the form of extracted quotes from Wasserman (2019) and McGrew (2023) (links below), which I confirmed with John Wasserman via a personal email on August 30, 2025.
The 1990 CHC-based WJ-R had already been published, and the WJ III author team was nearing completion of the CHC-based WJ III (2001). Unbeknownst to many is the fact that Woodcock was originally planned to be one of the coauthors of the SB5 (along with Gale Roid), which explains his presence in the photos that document one of several planning meetings for the CHC-based SB5.
I was also involved as a consultant during the early planning for the CHC-based SB5 because of my knowledge of the evolving CHC theory. My role was to review and integrate all available published and unpublished factor analysis research on all prior editions of the different SB legacy tests. I post these pictures with the names of the people included in each photo immediately below the photo. No other comments (save for the next paragraph) are provided.
To say the least, my presence at this meeting (as well as many other meetings with Carroll and Horn together, as well as with each alone, that occurred when planning the various editions of the WJ’s) was surrealistic. One could sense a paradigm shift in intelligence testing happening in real time during the meetings! The expertise of the leading theorists regarding what became known as CHC theory, together with the expertise of the applied test developers Woodcock and Roid, provided me with learning experiences that cannot be captured in any book or university course work.
Click on images to enlarge.
Be gentle; these are the best available copies of images taken with an old-school camera (not smartphone-based digital images).
“It was only when I left TPC for employment with Riverside Publishing (now Houghton-Mifflin-Harcourt; HMH) in 1996 that I met Richard W. Woodcock and Kevin S. McGrew and became immersed in the extended Gf-Gc (fluid-crystallized)/Horn-Cattell theory, beginning to appreciate how Carroll's Three-Stratum (3S) model could be operationalized in cognitive-intellectual tests. Riverside had been the home of the first Gf-Gc intelligence test, the Stanford–Binet Intelligence Scale, Fourth Edition (SB IV; R. L. Thorndike, Hagen, & Sattler, 1986), which was structured hierarchically with Spearman's g at the apex, four broad ability factors at a lower level, and individual subtests at the lowest level. After acquiring the Woodcock–Johnson (WJ-R; Woodcock & Johnson, 1989) from DLM Teaching Resources, Riverside now held a second Gf-Gc measure. The WJ-R Tests of Cognitive Ability measured seven broad ability factors from Gf-Gc theory with an eighth broad ability factor possible through two quantitative tests from the WJ-R Tests of Achievement. When I arrived, planning was underway for new test editions – the WJ III (Woodcock, McGrew, & Mather, 2001) and the SB5 (Roid, 2003) – and Woodcock was then slated to co-author both tests, although he later stepped down from the SB5. Consequently, I had the privilege of participating in meetings in 1999 with John B. Carroll and John L. Horn, both of whom had been paid expert consultants to the development of the WJ-R” (Wasserman, 2019, p. 250)
——————-
“In 1999, Woodcock brokered the CHC umbrella term with Horn and Carroll for practical reasons (McGrew 2005)—to facilitate internal and external communication regarding the theoretical model of cognitive abilities underlying the then-overlapping test development activities (and some overlapping consultants, test authors, and test publisher project directors; John Horn, Jack Carroll, Richard Woodcock, Gale Roid, Kevin McGrew, Fred Schrank, and John Wasserman) of the Woodcock–Johnson III and the Stanford Binet–Fifth Edition by Riverside Publishing” (McGrew, 2023, p. 3)
Wednesday, August 27, 2025
IQs Corner: Practice effects persist over two decades of cognitive testing: Implications for longitudinal research - #practiceeffect #cognitive #neurocognit #IQ #intelligence #schoolpsychology #schoolpsychologists
Background: Repeated cognitive testing can boost scores due to practice effects (PEs), yet it remains unclear whether PEs persist across multiple follow-ups and long durations. We examined PEs across multiple assessments from midlife to old age in a nonclinical sample.
Method: Men (N = 1,608) in the Vietnam Era Twin Study of Aging (VETSA) underwent neuropsychological assessment comprising 30 measures across 4 waves (~6-year testing intervals) spanning up to 20 years. We leveraged age-matched replacement participants to estimate PEs at each wave. We compared cognitive trajectories and MCI prevalence using unadjusted versus PE-adjusted scores.
Results: Across follow-ups, 7-12 tests (out of 30) demonstrated significant PEs, especially in episodic memory and visuospatial domains. Adjusting for PEs resulted in improved detection of cognitive decline and MCI, with up to 20% higher MCI prevalence.
Conclusion: PEs persist across multiple assessments and decades, underscoring the importance of accounting for PEs in longitudinal studies.
Keywords: practice effects; repeat testing; serial testing; longitudinal testing; mild cognitive impairment; cognitive change
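The replacement-participant logic in the Method section can be illustrated with a toy calculation: the practice effect at a follow-up wave is estimated as the difference between returnees and age-matched participants tested for the first time, and that estimate is subtracted from returnees' observed scores before examining decline or MCI status. The numbers and the simple mean-difference formula below are illustrative assumptions, not VETSA's actual adjustment model.

```python
# Toy illustration of the replacement-participant logic for practice effects (PEs).
# Scores are invented; the actual VETSA adjustment model is more elaborate
# (e.g., it also accounts for attrition effects).
import numpy as np

rng = np.random.default_rng(42)

# Wave-2 episodic memory scores (z-score metric), hypothetical data
returnees    = rng.normal(0.15, 1.0, 400)   # tested before at wave 1
replacements = rng.normal(-0.05, 1.0, 120)  # same age, tested for the first time

# Practice effect: how much higher returnees score than matched first-timers
practice_effect = returnees.mean() - replacements.mean()

# PE-adjusted returnee scores; decline (and MCI) detection uses these instead
returnees_adjusted = returnees - practice_effect

print(f"Estimated practice effect: {practice_effect:.2f} SD")
print(f"Unadjusted mean: {returnees.mean():.2f}, adjusted mean: {returnees_adjusted.mean():.2f}")
```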
Tuesday, August 26, 2025
IQs Corner: What is happening in gifted/high ability research from 2013 to 2023? - #gifted #talented #highability #EDPSY #intelligence #achievement #schoolpsychologists #schoolpsychology
Click on image to enlarge for easier reading
Trends and Topics Evolution in Research on Giftedness in Education: A Bibliometric Analysis. Psychology in the Schools, 2025; 62:3403–3413
Monday, August 25, 2025
IQs Corner: What is (and what is not) clinical judgment in intelligence test interpretation? - #IQ #intelligence #ID #intellectualdisability #schoolpsychologists #schoolpsychology #diagnosis
Clinical judgment is a process based on solid scientific knowledge and is characterized as being “systematic (i.e., organized, sequential, and logical), formal (i.e., explicit and reasoned), and transparent (i.e., apparent and communicated clearly)” (Schalock & Luckasson, 2005, p. 1). The application of clinical judgment in the evaluation of IQ scores in the diagnosis of intellectual disability includes consideration of multiple factors that might influence the accuracy of an assessment of general intellectual ability (APA: DSM-5, 2013). There is a “unanimous professional consensus that the diagnosis of intellectual disability requires comprehensive assessment and the application of clinical judgment” (Brief of Amici Curiae American Psychological Association, American Psychiatric Association, American Academy of Psychiatry and the Law, Florida Psychological Association, National Association of Social Workers, and National Association of Social Workers Florida Chapter, in Support of Petitioner; Hall v. Florida; S.Ct., No. 12-10882; 2014; p. 8).
Clinical judgment in the interpretation of scores from intelligence test batteries should not be misused as the basis for “gut instinct” or “seat-of-the-pants” impressions and conclusions by the assessment professional (MacVaugh & Cunningham, 2009), or as justification for shortened evaluations, a means to convey stereotypes or prejudices, a substitute for insufficiently explored questions, or an excuse for incomplete testing and missing data (Schalock & Luckasson, 2005). Idiosyncratic methods and intuitive conclusions are not scientifically based and have unknown reliability and validity.
If clinical judgment interpretations and opinions regarding an individual’s level of general intelligence are based on novel or emerging research-based principles, the assessment professional must document the bases for these new interpretations as well as the limitations of these principles and methods. This requirement is consistent with Standard 9.4 of the Standards for Educational and Psychological Testing, which states:
When a test is to be used for a purpose for which little or no validity evidence is available, the user is responsible for documenting the rationale for the selection of the test and obtaining evidence of the reliability/precision of the test scores and the validity of the interpretations supporting the use of the scores for this purpose (p. 143).
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014). Standards for educational and psychological testing. Washington, DC: Author.
American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders—Fifth Edition. Washington, DC: Author.
Brief of Amici Curiae American Psychological Association, American Psychiatric Association, American Academy of Psychiatry and the Law, Florida Psychological Association, National Association of Social Workers, and National Association of Social Workers Florida Chapter, in Support of Petitioner; Hall v. Florida; S.Ct., No. 12-10882; 2014; p. 8.
MacVaugh, G. S. & Cunningham, M. D. (2009). Atkins v. Virginia: Implications and recommendations for forensic practice. The Journal of Psychiatry and Law, 37, 131-187.
Schalock, R. L. & Luckasson, R. (2005). Clinical judgment. Washington, DC: American Association on Intellectual and Developmental Disabilities.
—————
Kevin S. McGrew, PhD.
Educational Psychologist
Director
Institute for Applied Psychometrics (IAP)