Sunday, October 19, 2025

The effect of #processingspeed [#Gs] on academic #fluency in children with #neurodevelopmental disorders - #CHC #WISCV #WJIV #intelligence #schoolpsychologists #schoolpsychology #SLD #SPED #fluency #EDPSY



A PDF copy of the article can be downloaded here.

Abstract 

Poor processing speed (PS) is frequently observed in individuals with neurodevelopmental disorders. However, mixed findings exist on the predictive validity of such processing speed impairment and the role of working memory (WM). We conducted a retrospective chart review of patients evaluated at a developmental assessment clinic between March 2018 and December 2022. Patients with available data on the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V) and the Woodcock-Johnson, Fourth Edition, Tests of Achievement (WJ IV ACH) were included (n = 77, 69% male; M_age = 10.6, SD_age = 2.5; FSIQ range = 47–129). We performed a mediation analysis with academic fluency (AF) as the dependent variable, PS as the predictor, WM as the mediator, and academic skills and general intelligence as covariates. Both the direct and indirect effects of PS were significant prior to adding covariates. However, only the direct effect of PS was robust, independent of the effects of academic skills and general intelligence. The indirect effect of PS through WM was nonsignificant after accounting for general academic skills and intelligence. Therefore, PS explains unique variance in AF. This finding suggests that PS may be an exception to the criticism of cognitive profile analysis. Interpreting the PS score as a relative strength or weakness within a cognitive profile may uniquely predict timed academic performance in youth with neurodevelopmental disorders.
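For readers who want to see the modeling logic concretely, below is a minimal sketch of the mediation design described in the abstract, using synthetic data. The variable names and effect sizes are mine, not the authors', and a published analysis would typically bootstrap a confidence interval for the indirect effect:

```python
# Minimal sketch of the mediation design in the abstract: academic
# fluency (AF) on processing speed (PS), with working memory (WM) as
# mediator and academic skills / FSIQ as covariates. Synthetic data;
# variable names and effect sizes are illustrative, not the authors'.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 77
df = pd.DataFrame({
    "PS": rng.normal(100, 15, n),      # processing speed index
    "FSIQ": rng.normal(100, 15, n),    # general intelligence covariate
    "SKILLS": rng.normal(100, 15, n),  # untimed academic skills covariate
})
df["WM"] = 0.4 * df["PS"] + rng.normal(0, 12, n)                  # mediator
df["AF"] = 0.5 * df["PS"] + 0.3 * df["WM"] + rng.normal(0, 12, n)

# Path a: predictor -> mediator (covariates included)
a = smf.ols("WM ~ PS + FSIQ + SKILLS", df).fit().params["PS"]
# Paths b and c': mediator and predictor -> outcome (covariates included)
outcome = smf.ols("AF ~ PS + WM + FSIQ + SKILLS", df).fit()
b, c_prime = outcome.params["WM"], outcome.params["PS"]

print(f"direct effect (c'): {c_prime:.3f}")
print(f"indirect effect (a*b): {a * b:.3f}")  # bootstrap this for a CI
```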

[Do] Humans peak in midlife [?]: A combined #cognitive and #personality trait perspective - #intelligence #developmental #schoolpsychologists #schoolpsychology #CHC


An open access copy of the article can be downloaded here.

 Highlights


  • Age trends reviewed across 16 key cognitive and personality-related dimensions.
  • All variables plotted on a common scale to enable direct cross-domain comparisons.
  • Age trajectories varied widely: some traits declined, others improved with age.
  • A weighted composite index of functioning was developed from theory and evidence.
  • Overall cognitive-personality functioning peaks between ages 55 and 60.
Abstract
Fluid intelligence, which peaks near age 20 and declines materially across adulthood, is often regarded as the most critical cognitive ability for predicting important life outcomes. Yet, human achievement in domains such as career success tends to peak much later, typically between the ages of 55 and 60. This discrepancy may reflect the fact that, while fluid intelligence may decline with age, other dimensions improve (e.g., crystallized intelligence, emotional intelligence). To examine this possibility, we analyzed age-related trends across nine constructs associated with life success: cognitive abilities, personality traits, emotional intelligence, financial literacy, moral reasoning, resistance to sunk cost bias, cognitive flexibility, cognitive empathy, and need for cognition. We extracted age-related findings from published studies for each dimension and standardized all scores to T-scores for comparability. We then constructed a Cognitive-Personality Functioning Index (CPFI) and compared two weighting approaches: a Conventional model, emphasizing intelligence and core personality traits, and a Comprehensive model, integrating a broader array of dimensions. Both models revealed a peak in overall functioning during late midlife (ages 55 to 60) but diverged at the younger and older ends of adulthood: under Conventional weighting, older adults scored well below young adults, whereas under Comprehensive weighting, the two groups were roughly equivalent. These findings suggest that functional capacity, defined in terms of key differential psychological traits, may peak in late midlife, closely aligning with the typical peak in career achievement. Also, individuals best suited for high-stakes decision-making roles are unlikely to be younger than 40 or older than 65.
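The standardization step described in the abstract is ordinary T-score rescaling (T = 50 + 10z), after which trajectories from different domains can be combined under a weighting scheme. A minimal sketch with placeholder weights (not the paper's Conventional or Comprehensive weights):

```python
# Sketch of the standardization-and-weighting logic: raw scores are
# rescaled to T-scores (mean 50, SD 10) so domains share a metric, then
# combined with weights. Weights here are placeholders, not the paper's.
import numpy as np

def to_t_scores(raw):
    """T = 50 + 10 * z, standardized within the values supplied."""
    raw = np.asarray(raw, dtype=float)
    return 50 + 10 * (raw - raw.mean()) / raw.std(ddof=1)

# One T-scored trajectory per dimension (rows) across age groups (columns)
trajectories = np.array([
    to_t_scores([55, 52, 45, 38]),  # e.g., a trait that declines with age
    to_t_scores([40, 46, 52, 54]),  # e.g., a trait that improves with age
])
weights = np.array([0.6, 0.4])      # illustrative weighting scheme

composite = weights @ trajectories  # weighted index per age group
print(composite)
```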

Extremely important (IMHO) #CHC #cognitive #reading achievement #g+specific abilities #SEM paper - #schoolpsychology #schoolpsychologists #SPED #EDPSY #LD #SLD

Extremely interesting (important/intriguing) CHC cognitive-reading achievement relations meta-SEM paper.  Why?  Because, as far as I know, it is the first g+specific abilities paper to evaluate a model with causal relations specified within and between cognitive and reading achievement CHC constructs.  Paper info is below, as well as an open access link to the PDF.  Also, this is the first time I’ve seen a meta-structural equation modeling analysis.  Kudos to the authors.



Abstract

Cognitive tests measure psychological constructs that predict the development of academic skills. Research on cognitive–reading achievement relations has primarily been completed with single-test batteries and samples, resulting in inconsistencies across studies. The current study developed a consensus model of cognitive–reading achievement relations using meta-structural equation modeling (meta-SEM) through a cross-sectional analysis of subtest correlations from English-language norm-referenced tests. The full dataset used for this study included 49,959 correlations across 599 distinct correlation matrices. These included correlations among 1112 subtests extracted from 137 different cognitive and achievement test batteries. The meta-SEM approach allowed for increased sampling of cognitive and academic reading skills measured by various test batteries to better inform the validity of construct relations. The findings were generally consistent with previous research, suggesting that cognitive abilities are important predictors of reading skills and generalize across different test batteries and samples. The findings are also consistent with integrated cognitive–reading models and have implications for assessment and intervention frameworks.

Keywords: cognitive abilities; reading skills; cognitive–achievement relations; CHC theory; meta-structural equation modeling
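For those new to meta-SEM, the usual two-stage logic is to (1) pool correlation matrices across studies and samples, then (2) fit the structural model to the pooled matrix. Below is a generic illustration of stage 1 using simple fixed-effects, sample-size weighting; it is not the authors' procedure, which would typically use more sophisticated (e.g., random-effects) pooling:

```python
# Generic two-stage meta-SEM idea, stage 1 only: pool study correlation
# matrices, weighting by sample size, before a structural model is fit
# to the pooled matrix. Illustrative; not the authors' procedure.
import numpy as np

def pool_correlations(matrices, ns):
    """Fixed-effects, n-weighted average of study correlation matrices."""
    matrices = np.asarray(matrices, dtype=float)
    weights = np.asarray(ns, dtype=float) / np.sum(ns)
    return np.einsum("s,sij->ij", weights, matrices)

study_1 = np.array([[1.0, 0.55], [0.55, 1.0]])  # e.g., Gf with reading
study_2 = np.array([[1.0, 0.45], [0.45, 1.0]])
pooled = pool_correlations([study_1, study_2], ns=[200, 800])
print(pooled)  # stage 2 would fit the SEM to this pooled matrix
```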


Thursday, October 16, 2025

IQs Corner pub alert: CHC theory of cognitive abilities used to define and evaluate AI - #AI #CHC #intelligence #schoolpsychology #schoolpsychologists #IQ #EDPSY

An exciting new paper from Dan Hendrycks et al. at the Center for AI Safety, a nonprofit with the mission “to reduce societal-scale risks from artificial intelligence.”  In this just-released paper, they propose a modified CHC theory-based definition/framework for evaluating AI:
  • “AGI is an AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult.”
Given my extensive research and publications regarding the Cattell-Horn-Carroll (CHC) theory of cognitive abilities, I was pleasantly surprised when Dan reached out for my comments and suggested revisions to the paper.  

I was extremely impressed, as Dan and his group had done a deep dive into the CHC literature and had developed, without my involvement, an ingenious internet-based set of CHC “test” items that can be submitted to different AI agents (GPT-4, GPT-5, Grok) to assess their CHC broad ability domain performance (i.e., to evaluate the extent to which AI agents demonstrate the “cognitive versatility and proficiency of a well-educated adult”).  I had zero involvement in the conceptualization or development of the AI modified/adapted CHC assessment framework and the resulting CHC AI metrics.

I want to express my appreciation to Dan for including me among the list of over 24 authors.  I’m very excited to monitor future developments by Dan and his group, as well as to see the impact of the CHC theory model on AI.

Links to secure copies of the paper (in various formats and social media platforms) are listed at the bottom of this post.  

Note.  Click on all images to enlarge for easy reading

Abstract

The lack of a concrete definition for Artificial General Intelligence (AGI) obscures the gap between today's specialized AI and human-level cognition. This paper introduces a quantifiable framework to address this, defining AGI as matching the cognitive versatility and proficiency of a well-educated adult. To operationalize this, we ground our methodology in Cattell-Horn-Carroll theory, the most empirically validated model of human cognition. The framework dissects general intelligence into ten core cognitive domains—including reasoning, memory, and perception—and adapts established human psychometric batteries to evaluate AI systems. Application of this framework reveals a highly “jagged” cognitive profile in contemporary models. While proficient in knowledge-intensive domains, current AI systems have critical deficits in foundational cognitive machinery, particularly long-term memory storage. The resulting AGI scores (e.g., GPT-4 at 27%, GPT-5 at 58%) concretely quantify both rapid progress and the substantial gap remaining before AGI.
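As I read the abstract, a model's overall AGI score aggregates its per-domain proficiencies. A toy sketch of that aggregation, assuming equal weights across ten CHC-derived domains (the domain labels are my shorthand and the proficiency values are invented; consult the paper for its actual scoring rubric):

```python
# Toy aggregation of per-domain proficiency into one AGI percentage,
# assuming equal weights across ten CHC-derived domains. Domain labels
# are shorthand and the values are invented, not the paper's results.
domain_scores = {              # proportion of "well-educated adult" level
    "Knowledge": 0.95,
    "Reading/Writing": 0.90,
    "Math": 0.80,
    "On-the-spot Reasoning": 0.85,
    "Working Memory": 0.60,
    "Long-Term Memory Storage": 0.00,  # the "critical deficit" noted above
    "Long-Term Memory Retrieval": 0.70,
    "Visual Processing": 0.50,
    "Auditory Processing": 0.50,
    "Speed": 0.40,
}
agi_score = 100 * sum(domain_scores.values()) / len(domain_scores)
print(f"AGI score: {agi_score:.0f}%")  # mean proficiency across domains
```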

Modified CHC model for evaluating AI agents



As mentioned in the abstract, the paper reports on the CHC AGI capabilities of GPT-4 and GPT-5 in the following figure.


I was pleased to see (on page 14 of the PDF) the following “intelligence as processor” figure, which is based on work by Joel Schneider and me. The model in Figure 3 (below) is based on: Kevin S. McGrew & W. Joel Schneider, CHC theory revised: A visual-graphic summary of Schneider and McGrew's 2018 CHC update chapter. MindHub/IAPsych working paper, 2018. http://www.iapsych.com/mindhubpub4.pdf

The Schneider and McGrew (2018) heuristic CHC information-processing model appears below Figure 3.




Dan Hendrycks and the Center for AI Safety provide brief overviews of this work on LinkedIn and Twitter/X (both of which can be monitored for comments).

A PDF copy of the paper can be downloaded here.  A clickable web-based version of the paper can be accessed here.

Exciting stuff!!

Wednesday, October 15, 2025

IQs Corner: AI and the Future of Skills, Volume 1, Capabilities and Assessments - #AI #intelligence #CHC #schoolpsychology #schoolpsychologists #education


Although four years may seem like a long time in the AI literature, readers interested in the potential interface of AI and human cognitive abilities theory and research should take a look at this lengthy OECD report, which is available for free download here.

Of course, given my research and interests, I found Chapter 3 (of 20) of particular interest.




Wednesday, October 08, 2025

IQs Corner. Comfortable fictions—The myth of #Gardner’s #multipleintelligences and other #educational fads (e.g., #learningstyles) - #schoolpsychology #schoolpsychologists #education #pseudoscience #intelligence

A thought-provoking take at the Learning Dispatch (Substack) re Gardner’s theory of multiple intelligences and other educational theories (e.g., learning styles) that appeal to comfortable fictions, which “describe the reassuring stories societies tell to preserve moral comfort in the face of contradiction….these narratives about history, identity, and progress allow dominant groups to evade the moral costs of their own actions.”



Monday, September 29, 2025

What does #ElonMusks #Grok (#xAI) think of Dr. Kevin #McGrew? - My Grok-based professional bio


I’ve finally started to play around with different AI programs.  I’ve been taking topics where I know the extant research well (e.g., CHC theory of intelligence, the WJ series of tests) and asking AI agents to summarize the information.  Today I had the idea of asking Grok to write me a professional bio.  After all, I know more about me than I do any other topic.  Below is the result.  I’ve made edits (strikethroughs) and added text (red font) to make it more accurate. 

I can now see how this technology can help save time with certain tasks. The bio is relatively accurate for a rough draft, but it did need some revisions and notable additions (here and there) to be more accurate and thorough.  I’ve included a link to my formally posted bio for comparison purposes—yes, it is long, as it is the “long form” bio I submit along with my CV when engaged in court/legal work.

The Grok bio does not include URL links to material.  Perhaps there is a way to ask Grok to include links, but I don’t know if that is possible at this point; my guess is that it probably can.  If you want to see more information on any topic, just go to theMindHub.com and navigate accordingly.

I did notice that during the 2+ minutes of Grok work, it spent a significant amount of time on my professional web page.  I guess this means that if you want an AI tool to write your professional bio, you must have some decent content up on the web…it can’t make up material from thin air.  Also, it appears to be good at providing rough drafts that still require a content expert to revise accordingly.

All in all, an interesting exercise.

Dr. Kevin S. McGrew bio written by AI program Grok (xAI) on 09-29-25

(strikethrough and edits in red font)

Can be compared to Dr. McGrew’s formal bio posted at theMindHub.com

 

Dr. Kevin S. McGrew is a prominent educational psychologist and psychometrician whose career spans over five decades, beginning as a practicing school psychologist for 12 years (1975–1989) after earning his MS in School Psychology from Minnesota State University Moorhead. He later obtained a PhD in Educational Psychology (Special Education) from the University of Minnesota in 1989, focusing on research methods and applied psychometrics. McGrew has held academic roles such as Professor of Applied Psychology at St. Cloud State University (1990–2000) and Visiting Lecturer and Adjunct Research Professor at the University of Minnesota (2000–2024). He also served as Research Director for the Woodcock-Muñoz Foundation (2005-2014), Associate Director for Measurement Learning Consultants (2008-2014), and Director of the Institute for Applied Psychometrics (1998-current). His work has profoundly shaped the fields of human intelligence theory and testing, emphasizing empirical rigor, practical application in education and law, and integration of cognitive and non-cognitive factors.

Contributions to Intelligence Theory

McGrew is widely recognized as a leading scholar in the Cattell-Horn-Carroll (CHC) theory of cognitive abilities, a comprehensive psychometric framework that integrates fluid and crystallized intelligence (Cattell-Horn) with Carroll’s three-stratum model. Alongside Dr. Joel Schneider, he has served as an unofficial “gatekeeper” of CHC theory, authoring seminal updates and cross-battery interpretations that have made it the dominant model in contemporary intelligence research, test development, and interpretation of intellectual assessment results.  His efforts have advanced CHC from a theoretical model to a practical tool for diagnosing learning disabilities, intellectual giftedness, and intellectual disabilities, influencing guidelines in the American Association on Intellectual and Developmental Disabilities (AAIDD) manual (2021). 

McGrew has also pioneered integrative models that extend beyond pure cognition. He developed the Model of Academic Competence and Motivation (MACM) in the early 2000s, which posits that academic success arises from the interplay of cognitive abilities, conative (motivational) factors (e.g., self-efficacy, achievement orientations, self-beliefs, and self-regulated learning), and affective elements (e.g., personality, social-emotional skills, interest, and anxiety).  This evolved into the broader Cognitive-Affective-Motivation Model of Learning (CAMML), emphasizing how these dimensions interact to predict school achievement and inform interventions.  His research on psychometric network analysis has further refined CHC by modeling complex interrelationships among CHC abilities “beyond g” (general intelligence), as highlighted in his 2023 co-authored paper, which was named the Journal of Intelligence’s “Best Paper” of the year.  McGrew has also explored the Flynn effect (rising IQ scores over time) and its implications for the interpretation of intelligence test scores in Atkins intellectual disability death penalty cases, as well as CHC’s links to adaptive behavior and neurotechnology applications for cognitive enhancement.

Contributions to Intelligence Testing

McGrew’s practical impact is most evident in intelligence test development and interpretation, where he championed “intelligent testing”—an art-and-science approach inspired by Alan Kaufman that prioritizes the interpretation of broad CHC composite scores over a single global IQ score.  As primary measurement consultant for the Woodcock-Johnson Psychoeducational Battery—Revised (WJ-R, 1991), he authored its technical manual and conducted statistical analyses for the restandardization.  The WJ-R was the first major battery of individually administered cognitive and achievement tests based on the psychometric intelligence research of Raymond Cattell and John Horn (aka the Cattell-Horn Gf-Gc model of intelligence).  He co-authored the Woodcock-Johnson III (WJ III, 2001) and Woodcock-Johnson IV (WJ IV, 2014), which were the first major batteries explicitly grounded in CHC theory (the integration of the Cattell-Horn Gf-Gc model with John Carroll’s seminal 1993 three-stratum model of intelligence), introducing subtests for underrepresented abilities like auditory processing and long-term retrieval.  As senior co-author, he led the development of the digitally administered Woodcock-Johnson V (WJ V, 2025), incorporating recent advances in CHC theory and psychometric network analysis, as well as conative measures.

Internationally, McGrew consulted on the Indonesian AJT Cognitive Assessment (2014–2017), helping create that country’s first CHC-based, individually administered intelligence test.  He also advised the Ayrton Senna Institute (ASI) on large-scale cognitive assessments in Brazil (2016–2025) and contributed to ASI research focused on integrating constructs from McGrew’s CAMML model with the ASI Big-5 personality-based social-emotional skills model.  He has provided expert psychometric consultation (through written declarations and/or court testimony) in over 50 Atkins death penalty cases since 2009 and has contributed to refining intellectual disability criteria through a CHC lens. 

Publications and Knowledge Dissemination

McGrew has authored or co-authored over 100 peer-reviewed journal articles and book chapters, eight norm-referenced test batteries, and four books on intelligence test interpretation, including Clinical Interpretation of the Woodcock-Johnson Tests of Cognitive Ability (1997, revised 2005).  His prolific output includes contributions to the APA Handbook of Intellectual and Developmental Disabilities (2021) and critiques of intellectual disability diagnostics.  He maintains influential blogs like IQ’s Corner (www.iqscorner.com), which has synthesized CHC and intelligence theory and related assessment research for practitioners since 2004, and engages on platforms like X (@iqmobile), LinkedIn, and BlueSky (@iqmcgrew.bsky.social) to democratize complex psychometric concepts and share research and insights based on his multiple areas of expertise.

Awards and Legacy

McGrew’s influence is underscored by prestigious honors, including the University of Minnesota Distinguished Alumni Award (2016), Minnesota School Psychologists Association Lifetime Achievement Award (2015), Alan S. Kaufman Excellence in Assessment Award (2023), and the Dr. Richard W. Woodcock Award for Innovations in Ability Testing (2025).  His work has bridged theory and practice, empowering educators, clinicians, and policymakers to use intelligence assessments more equitably and effectively, while advocating for a holistic view of human potential that includes motivation and self-regulation alongside cognition.

 


Thursday, September 25, 2025

IQs Corner. In what way are #intelligence testing (#IQ) and the US Supreme Court (#SCOTUS) alike?—SCOTUS will be hearing an important case addressing #multiple IQ scores and #intellectualdisability #Dx in the fall 2025 term

In its fall 2025 term, the Supreme Court of the United States (SCOTUS) will hear a case related to intelligence testing in the context of Atkins intellectual disability (ID) death penalty cases. The case is Hamm v. Smith.

The question before SCOTUS is: Whether and how courts may consider the cumulative effect of multiple IQ scores in assessing Atkins claims (in the context of diagnosing ID in death penalty cases)?

Note.  In order to save space and time, instead of writing “general intelligence” or “general intellectual functioning” every time, I use the abbreviation “IQ”.

The respondent (Joseph Smith) has five IQ test scores from comprehensive IQ tests.  He obtained two scores of 75 and 74 during the developmental period (before age 22), and three scores of 72, 78, and 74 between the ages of 28 and 46.  
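To make the “cumulative effect” question concrete, below is a deliberately simple numeric illustration using the five obtained scores. To be clear, this is not an endorsed aggregation method (the convergence-of-indicators and score-exchangeability issues noted in my critique quoted below are the real substance); the per-test SEM of roughly 3 points is an assumed textbook value, and treating measurement errors as independent across tests is a strong simplification:

```python
# Simple numeric illustration of combining the five obtained scores.
# NOT an endorsed method: the SEM of ~3 points is an assumed textbook
# value, and independence of measurement errors across tests is a
# strong simplification (scores also differ in norms, dates, and tests).
import statistics as stats

scores = [75, 74, 72, 78, 74]
sem = 3.0                               # assumed per-test SEM

mean_score = stats.mean(scores)         # 74.6
# If errors were independent, the SEM of the mean would shrink:
sem_of_mean = sem / len(scores) ** 0.5  # about 1.34
low = mean_score - 1.96 * sem_of_mean
high = mean_score + 1.96 * sem_of_mean
print(f"mean = {mean_score:.1f}, 95% CI about {low:.1f} to {high:.1f}")
```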

This case is important for assessment professionals who conduct intelligence testing in general, and potential ID diagnostic (Atkins) cases in particular.  I find this SCOTUS case particularly interesting given that in 2021, after the release of the latest official AAIDD manual (Intellectual disability: Definition, diagnosis, classification, and systems of supports), I published a critique in which I specifically stated, as one weakness of the new AAIDD manual, that “…many high-stakes ID cases often include case files that include multiple IQ scores across time or from different IQ tests. Some form of guidance, at minimum in a passing reference, to the issues of the convergence of indicators and IQ score exchangeability would have been useful. Users will need to go beyond the AAIDD manual for guidance (see Floyd et al., 2021; McGrew, 2015; and Watson, 2015)” (click here to download and read this critique).

As of yesterday, all official petitioner and respondent legal briefs (and amicus briefs) have been published at the SCOTUS blog.  Many documents are posted on the SCOTUS docket.  To help the reader determine which documents are most critical (the final briefs), instead of clicking away at the various links on the SCOTUS blog, I’ve organized the petitioner and respondent brief links below.

If you prefer not to wade through all the briefs (it is not for everyone), I would encourage practicing assessment professionals to read, at a minimum, the three respondent-related briefs.  The points made are relevant to all who conduct intellectual assessments.  As a potential conflict-of-interest notice: I (Dr. Kevin McGrew), together with Dr. Joel Schneider and Dr. Cecil Reynolds (as noted on page three of the APA amicus brief), served as consultants to APA in the drafting of that brief.  This work was performed pro bono.  If time permits, I would also suggest reading the petitioner’s Alabama brief and the US Justice Department Solicitor General’s brief to better understand the petitioner and respondent positions in Hamm v. Smith. 

Petitioner briefs
  • The state of Alabama brief.  Alabama is the petitioner.  That is, if you want to read why the State of Alabama asked SCOTUS to hear this case, click on the link provided.
    • The Alabama brief also includes a very long appendix for those who want to read the prior courts related testimony from the state and various experts. This is a very long read and is not necessary for readers who only want to understand the legal and professional issues. 
  • Supporting amicus brief from the US Justice Department Solicitor General.
  • Two supporting briefs from legal groups—the American Legal Foundation and the Criminal Justice Legal Foundation.
  • Supporting amicus briefs from other states (Idaho et al.; Kentucky)

Respondent briefs
Final comment.  Those from school psychology should note that we three consultants involved in drafting the APA/ApA/AL-APA brief all had our original educational roots in the profession of school psychology.  Furthermore, SP professionals should note the significant number of authoritative references to publications authored by school psychologists in the respondents’ briefs, as well as in some of the petitioners’ briefs.  I’ve been doing expert consultation, writing declarations, and testifying in court re: Atkins ID cases since 2009.  Joel Schneider and Cecil Reynolds have also been active in a similar capacity.  Still more psychologists who come from, or are affiliated with, the field of school psychology have been prominent consultants/experts to lawyers and the courts re Atkins cases.  

Perhaps some of these briefs should be assigned readings (in intellectual assessment courses or special topic seminars) for graduate students being trained in the art and science of intelligence testing and interpretation.




Tuesday, September 02, 2025

From the #Cattell-Horn-Carroll (#CHC) #cognitive #intelligence theory archives: Photos of an important 1999 Carroll, Horn, Woodcock, Roid et al. meeting in Chapel Hill, NC.

I was recently cleaning my office when I stumbled upon these priceless photos from a historic 1999 meeting in Chapel Hill, NC, that involved John Horn, Jack Carroll, Richard Woodcock, Gale Roid, John Wasserman, Fred Schrank, and myself.  The provenance (I’ve always wanted to use this word 😉) of the meeting is provided below the pictures, in the form of extracted quotes from Wasserman (2019) and McGrew (2023) (links below), which I confirmed with John Wasserman via a personal email on August 30, 2025.

The CHC-based WJ-R (1989) had already been published, and the WJ III author team was nearing completion of the CHC-based WJ III (2001).  Unbeknownst to many is the fact that Woodcock was originally slated to be one of the coauthors of the SB5 (along with Gale Roid), which explains his presence in the photos that document one of several planning meetings for the CHC-based SB5.  

I was also involved as a consultant during the early planning for the CHC-based SB5 because of my knowledge of the evolving CHC theory.  My role was to review and integrate all available published and unpublished factor analysis research on all prior editions of the different SB legacy tests. I post these pictures with the names of the people included in each photo immediately below the photo. No other comments (save for the next paragraph) are provided.  

To say the least, my presence at this meeting (as well as at many other meetings with Carroll and Horn together, and with each alone, that occurred when planning the various editions of the WJs) was surreal.  One could sense a paradigm shift in intelligence testing happening in real time during the meetings!  The expertise of the leading theorists behind what became known as CHC theory, together with the applied test-development expertise of Woodcock and Roid, provided me with learning experiences that cannot be captured in any book or university coursework. 


Be gentle: these are the best available copies of images taken with an old-school camera (not smartphone-based digital images).

(Carroll, Woodcock, McGrew, Schrank)

(Carroll, Woodcock, McGrew)

(Woodcock, Wasserman, Roid, Carroll, Horn)

(Wasserman, Roid, Carroll, Horn, McGrew)

(Carroll, Woodcock)


———————-


“It was only when I left TPC for employment with Riverside Publishing (now Houghton-Mifflin-Harcourt; HMH) in 1996 that I met Richard W. Woodcock and Kevin S. McGrew and became immersed in the extended Gf-Gc (fluid-crystallized)/ Horn-Cattell theory, beginning to appreciate how Carroll's Three-Stratum (3S) model could be operationalized in cognitive-intellectual tests. Riverside had been the home of the first Gf-Gc intelligence test, the Stanford–Binet Intelligence Scale, Fourth Edition (SB IV; R. L. Thorndike, Hagen, & Sattler, 1986), which was structured hierarchically with Spearman's g at the apex, four broad ability factors at a lower level, and individual subtests at the lowest level. After acquiring the Woodcock–Johnson (WJ-R; Woodcock & Johnson, 1989) from DLM Teaching Resources, Riverside now held a second Gf-Gc measure. The WJ-R Tests of Cognitive Ability measured seven broad ability factors from Gf-Gc theory with an eighth broad ability factor possible through two quantitative tests from the WJ-R Tests of Achievement. When I arrived, planning was underway for new test editions – the WJ III (Woodcock, McGrew, & Mather, 2001) and the SB5 (Roid, 2003) – and Woodcock was then slated to co-author both tests, although he later stepped down from the SB5. Consequently, I had the privilege of participating in meetings in 1999 with John B. Carroll and John L. Horn, both of whom had been paid expert consultants to the development of the WJ-R” (Wasserman, 2019, p. 250)

——————-

“In 1999, Woodcock brokered the CHC umbrella term with Horn and Carroll for practical reasons (McGrew 2005)—to facilitate internal and external communication regarding the theoretical model of cognitive abilities underlying the then-overlapping test development activities (and some overlapping consultants, test authors, and test publisher project directors; John Horn, Jack Carroll, Richard Woodcock, Gale Roid, Kevin McGrew, Fred Schrank, and John Wasserman) of the Woodcock–Johnson III and the Stanford Binet–Fifth Edition by Riverside Publishing” (McGrew, 2023, p. 3)

Wednesday, August 27, 2025

IQs Corner: Practice effects persist over two decades of cognitive testing: Implications for longitudinal research - #practiceeffect #cognitive #neurocognit #IQ #intelligence #schoolpsychology #schoolpsychologists



MedRxiv preprint available at: https://doi.org/10.1101/2025.06.16.25329587

Elman et al. (2025)


ABSTRACT 

Background: Repeated cognitive testing can boost scores due to practice effects (PEs), yet it remains unclear whether PEs persist across multiple follow-ups and long durations. We examined PEs across multiple assessments from midlife to old age in a nonclinical sample.

Method: Men (N=1,608) in the Vietnam Era Twin Study of Aging (VETSA) underwent neuropsychological assessment comprising 30 measures across 4 waves (~6-year testing intervals) spanning up to 20 years. We leveraged age-matched replacement participants to estimate PEs at each wave. We compared cognitive trajectories and MCI prevalence using unadjusted versus PE-adjusted scores.
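The replacement-participants logic in the Method is worth spelling out: at each wave, the practice effect is estimated as the difference between returnees' scores and the scores of demographically matched participants taking the tests for the first time, and that estimate is subtracted from returnees' scores. A generic sketch of the idea (not the authors' exact estimator, which would also account for attrition and covariates):

```python
# Generic sketch of the replacement-participant method for estimating a
# practice effect (PE): returnees' mean minus the mean of age-matched
# first-time testers at the same wave. Not the authors' exact estimator.
import numpy as np

returnees_wave2 = np.array([52.0, 55.0, 49.0, 58.0])     # retested scores
replacements_wave2 = np.array([48.0, 51.0, 47.0, 52.0])  # first exposure

pe = returnees_wave2.mean() - replacements_wave2.mean()  # estimated PE
adjusted = returnees_wave2 - pe                          # PE-adjusted scores
print(f"estimated practice effect: {pe:.1f}")
print(adjusted)
```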

Results: Across follow-ups, a range of 7-12 tests (out of 30) demonstrated significant PEs, especially in episodic memory and visuospatial domains. Adjusting for PEs resulted in improved detection of cognitive decline and MCI, with up to 20% higher MCI prevalence.  

Conclusion: PEs persist across multiple assessments and decades, underscoring the importance of accounting for PEs in longitudinal studies.
  
Keywords: practice effects; repeat testing; serial testing; longitudinal testing; mild cognitive impairment; cognitive change

Tuesday, August 26, 2025

IQs Corner: What is happening in gifted/high ability research from 2013 to 2023? - #gifted #talented #highability #EDPSY #intelligence #achievement #schoolpsychologists #schoolpsychology




Trends and Topics Evolution in Research on Giftedness in Education: A Bibliometric Analysis.  Psychology in the Schools, 2025; 62:3403–3413


Rius, C., Aguilar‐Moya, R., Martínez‐Córdoba, C., Cantos‐Roldan, B., & Vidal‐Infer, A.

Open access article that can be downloaded and read for free at this link.

ABSTRACT

The article explores the evolution of research on giftedness and high ability through a bibliometric analysis. It highlights challenges in identifying gifted individuals, who represent approximately 6.5% of students, although biased instruments and discriminatory selection practices may affect the identification of highly skilled students. The tripartite model, defining giftedness as a combination of high intellectual ability, exceptional achievement, and potential for excellence, serves as a fundamental framework for this study. Using a latent Dirichlet allocation (LDA) topic model, major research topics were identified, and trends from 2013 to 2023 were analyzed based on 1071 publications in the Web of Science database. The analysis revealed that publications focus on topics such as giftedness, talent management, and educational programs, showing a significant increase in research on these areas over the past decade. Key topics included psychometrics, gifted programs, and environmental factors. The United States, Germany, and Spain led in productivity with prominent publications addressing cognitive and socio‐emotional aspects of giftedness. Findings underscore the need for targeted educational interventions, including acceleration and enrichment programs, to address the academic and emotional challenges faced by gifted students. Research is shifting toward understanding the environmental influences on these students, highlighting the importance of supportive educational environments for their success.
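For readers unfamiliar with the topic-modeling method mentioned in the abstract, latent Dirichlet allocation treats each document as a mixture of topics and each topic as a distribution over words. A minimal scikit-learn sketch on a toy corpus standing in for the 1071 Web of Science records:

```python
# Minimal latent Dirichlet allocation (LDA) sketch of the abstract's
# topic-modeling step, using scikit-learn on a toy corpus that stands
# in for the 1071 Web of Science records.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "gifted identification psychometrics intelligence testing",
    "talent development enrichment acceleration programs",
    "socio-emotional needs of high ability gifted students",
]
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
print(lda.transform(counts))  # per-document topic proportions
```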

Monday, August 25, 2025

IQs Corner: What is (and what is not) clinical judgment in intelligence test interpretation? - #IQ #intelligence #ID #intellectualdisability #schoolpsychologists #schoolpsychology #diagnosis

What is clinical judgment in intelligence testing?  

This term is frequently invoked when psychologists explain or defend their intelligence test interpretations.  Below is a brief explanation I’ve used to describe what it is…and what it is not, based on several sources.  Schalock and Luckasson’s AAIDD Clinical Judgment book (now in a revised 2014 version) is the best single source I have found that addresses this slippery concept in intelligence testing, particularly in the context of a potential diagnosis of intellectual disability (ID)—it is recommended reading.

—————

Clinical judgment is a process based on solid scientific knowledge and is characterized as being “systematic (i.e., organized, sequential, and logical), formal (i.e., explicit and reasoned), and transparent (i.e., apparent and communicated clearly)” (Schalock & Luckasson, 2005, p. 1). The application of clinical judgment in the evaluation of IQ scores in the diagnosis of intellectual disability includes consideration of multiple factors that might influence the accuracy of an assessment of general intellectual ability (APA: DSM-5, 2013).  There is “unanimous professional consensus that the diagnosis of intellectual disability requires comprehensive assessment and the application of clinical judgment” (Brief of Amici Curiae American Psychological Association, American Psychiatric Association, American Academy of Psychiatry and the Law, Florida Psychological Association, National Association of Social Workers, and National Association of Social Workers Florida Chapter, in Support of Petitioner; Hall v. Florida; S.Ct., No. 12-10882; 2014; p. 8).

Clinical judgment in the interpretation of scores from intelligence test batteries should not be misused as a basis for the “gut instinct” or “seat-of-the-pants” impressions and conclusions of the assessment professional (MacVaugh & Cunningham, 2009), or as justification for shortened evaluations, a means to convey stereotypes or prejudices, a substitute for insufficiently explored questions, or an excuse for incomplete testing and missing data (Schalock & Luckasson, 2005). Idiosyncratic methods and intuitive conclusions are not scientifically based and have unknown reliability and validity. 

If clinical judgment interpretations and opinions regarding an individual’s level of general intelligence are based on novel or emerging research-based principles, the assessment professional must document the bases for these new interpretations, as well as the limitations of these principles and methods. This requirement is consistent with Standard 9.4 of the Standards for Educational and Psychological Testing, which states:

When a test is to be used for a purpose for which little or no validity evidence is available, the user is responsible for documenting the rationale for the selection of the test and obtaining evidence of the reliability/precision of the test scores and the validity of the interpretations supporting the use of the scores for this purpose (p. 143).


American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014).  Standards for educational and psychological testing.  Washington, DC:  Author. 

American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author. 

Brief of Amici Curiae American Psychological Association, American Psychiatric Association, American Academy of Psychiatry and the Law, Florida Psychological Association, National Association of Social Workers, and National Association of Social Workers Florida Chapter, in Support of Petitioner; Hall v. Florida; S.Ct., No. 12-10882; 2014; p. 8.

MacVaugh, G. S., & Cunningham, M. D. (2009). Atkins v. Virginia: Implications and recommendations for forensic practice. The Journal of Psychiatry and Law, 37, 131–187.

Schalock, R. L. & Luckasson, R. (2005). Clinical judgment. Washington, DC: American Association on Intellectual and Developmental Disabilities. 

—————

Kevin S. McGrew, PhD.

Educational Psychologist

Director 

Institute for Applied Psychometrics (IAP)

www.theMindHub.com


Sunday, August 17, 2025

Thoughts on the definition of dyslexia: More on the ongoing debate re the definition of dyslexia - #dyslexia #SLD #schoolpsychologists #schoolpsychology #SPED #reading

Thoughts on the Definition of Dyslexia.  Annals of Dyslexia (click here to read or download - open access)

Linda S. Siegel,  David P. Hurford, Jamie L. Metsala, Michaela R. Ozier, & Alex C. Fender

Abstract 

The International Dyslexia Association's current definition of dyslexia was approved by its Board of Directors on November 12, 2002. After two decades of scientific inquiry into the nature of dyslexia, it is time to reconsider and potentially revise the definition in light of what has been learned. We propose a definition of dyslexia based on its essential nature. Dyslexia is a specific learning disability in reading at the word level. It involves difficulty with accurate and/or fluent word recognition and/or pseudoword reading. We also suggest that the definition should focus solely on dyslexia's core features and should not include risk factors, potential secondary consequences, or other characteristics. Until those factors can reliably differentiate between those with and without dyslexia at an individual level, they should not be included in the definition.