https://psycnet.apa.org/fulltext/2027-14042-001.html
Kevin S. McGrew, PhD
Educational & School Psychologist
Director
Institute for Applied Psychometrics (IAP)
https://www.themindhub.com
******************************************
In 2022 I published an invited big-picture “thought piece” on a proposed CAMML (Cognitive-Affective-Motivation Model of Learning) in the Canadian Journal of School Psychology. The title was The Cognitive-Affective-Motivation Model of Learning (CAMML): Standing on the Shoulders of Giants.
I had hoped that by challenging existing narrow assessment practices in school psychology (SP), and proposing a more whole-child assessment model (where cognitive testing would be more limited and selective…not the knee-jerk practice of administering a complete intelligence test battery to nearly all children referred for learning problems), it would gain traction in some SP circles. From the informal and formal professional media sources I monitor, it has not…at least not yet.
Integrating CAMML aptitude-trait complexes, which emphasize that motivation and self-regulated learning (SRL) constructs are the focal personal investment learning mechanisms, into contemporary SP practice is an aspirational goal. The constraints of regulatory frameworks and the understandable skepticism of disability-specific advocacy groups will make such a paradigm shift difficult. However, embracing the model of CAMML aptitude complexes may be what SP and education need to better address the complex nuances of individual differences in student learning. Snow's concept of aptitude, if embraced in reborn form as the CAMML framework, could reduce the unbalanced emphasis on intelligence testing in SP's assessment practices. However, the greatest impediment to change may be the inertia of tradition in SP.
I just stumbled across a relatively new video covering the history of, and several major issues regarding, intelligence testing and IQ scores. Two scholars I respect (Dr. Cecil Reynolds; Dr. Stuart Ritchie) are featured in the video. I did see some spelling errors in the subtitles (Dr. Ian Dearie instead of Dr. Ian Deary; Benet instead of Binet; a capital G when referencing Spearman's concept of general intelligence, which is conventionally written as an italicized lowercase g; etc.) and heard several statements that made me cringe slightly.
The video also left the impression that fluid and crystallized intelligence (and, to a lesser extent, quantitative ability) are the primary recognized broad cognitive abilities measured by intelligence tests; it did not acknowledge contemporary CHC theory as the consensus taxonomy of human cognitive abilities. It also left the impression that IQ tests are "bubble-in" multiple-choice tests. This may be true for group tests, but it is not the case with individually administered intelligence tests.
Overall, it is a reasonable video to share with others as an introduction, possibly in college courses where the concepts of intelligence and IQ testing are being introduced. It did a good job of covering the historical misuses of IQ tests (e.g., discrimination, cultural bias, the eugenics movement).
The complete video is approximately 35 minutes. It did freeze for me at the 17-minute mark when it was about to display an ad, but I simply restarted the video, skipped ahead to that point, and it continued.
This is an open access article that can be read/downloaded here.
Abstract
The term “retest effects” refers to score gains on cognitive ability as well as educational achievement tests upon repeated administration of the same or a similar test. Previous research on this phenomenon has focused mainly on general cognitive ability scores—often using manifest difference scores—and has neglected differences in retest effects across different types of cognitive operations underlying general cognitive abilities. Additionally, these studies have focused primarily on average group-level test scores, neglecting interindividual differences in retest effects. To address these gaps, we used latent growth curve modeling to examine retest effects in N = 203 participants across three test sessions, considering both general cognitive ability and its four underlying operations according to the Berlin intelligence structure model, namely, processing capacity, processing speed, creativity, and memory. Results show a linear improvement in overall performance of 53.60 points (about 10.45 IQ points) with each assessment, corresponding to two thirds of a standard deviation. Participants' slopes—that is, their rates of improvement across test sessions—did not vary significantly, and thus did not correlate with their initial cognitive ability levels. Statistically significant operation-specific differences in the magnitude of retest effects were found, with memory showing the largest retest effect and creativity the smallest. Although participants did not vary in their rates of improvement on the processing-capacity and memory operation, there was significant interindividual variation in the slopes of the other two operations. These findings highlight the importance of considering operation-specific scores in research on retest effects. Implications for cognitive ability retesting practices are discussed.
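For readers who want to check the metric conversion in the abstract, here is a minimal back-of-the-envelope sketch, assuming the conventional IQ metric (mean = 100, SD = 15); the implied raw-scale SD at the end is my own inference from the reported numbers, not a value given by the authors.

    # Back-of-the-envelope check of the abstract's conversion, assuming the
    # conventional IQ metric (mean = 100, SD = 15). The implied raw-scale SD
    # below is an inference, not a value reported in the abstract.
    IQ_SD = 15.0

    gain_iq = 10.45                     # reported per-session gain, in IQ points
    gain_sd_units = gain_iq / IQ_SD     # the same gain expressed in SD units
    print(f"Gain in SD units: {gain_sd_units:.2f}")      # -> 0.70, i.e., about two-thirds SD

    raw_gain = 53.60                    # reported per-session gain, raw scale points
    implied_raw_sd = raw_gain / gain_sd_units
    print(f"Implied raw-scale SD: {implied_raw_sd:.1f}")  # -> ~76.9

As the sketch shows, 10.45 IQ points on an SD = 15 scale is 0.70 SD, which matches the abstract's "two thirds of a standard deviation" characterization of the 53.60-point gain.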
See my prior post regarding Hamm v. Smith, the Atkins ID death-penalty case in which the central issue is how to handle multiple IQ scores. All briefs are available at that prior blog post page.
Oral arguments before the SCOTUS justices occurred this past Wednesday, 12-10-25. One can download the audio file (the arguments lasted two hours) or the transcript of the arguments at this link.
Abstract
The rapid growth of research literature has made systematic reviews and meta-analyses increasingly time-consuming, limiting their utility in fast-evolving fields such as educational psychology. Artificial intelligence (AI) tools have enormous potential to streamline these processes, yet their adoption remains limited due to usability issues and a lack of systematic guidance. Out of 282 tools that we compiled from overviews that listed AI tools for research syntheses, we filtered a subset of 7 AI tools that met quality standards, such as transparency and accessibility. These tools were evaluated for their potential to support systematic reviews and meta-analyses in educational psychology. Our review highlights the tools' strengths, limitations, and ethical considerations for their responsible use by providing practical guidance and coding information.
Educational relevance statement: This research identifies and evaluates AI tools that streamline systematic reviews and meta-analyses, addressing critical challenges in synthesizing educational psychology research. By making these processes more efficient, accessible, and accurate, the study empowers educators and researchers to derive timely insights into diverse learner needs. Practically, the findings guide the adoption of AI tools that reduce workload and cognitive bias, enabling more evidence-based and inclusive educational practices. This work supports the advancement of scientifically rigorous methods that enhance understanding of individual differences in learning, directly contributing to improved educational interventions and outcomes.
Abstract
The Woodcock–Johnson V Tests of Cognitive Abilities (WJ V COG), published in February 2025, offers the latest edition of the WJ family of tests alongside tests of academic achievement and oral language. The WJ V COG has changed substantially from previous editions regarding administration, which is now entirely digital. Administration and scoring are housed within the Riverside Insights' online platform. The test battery features several changes, such as the addition of five new tests and the removal of three tests, including measures of Auditory Processing (Ga). The WJ V COG maintains a CHC-based theoretical framework, although updated to align with current theory. Psychometric evidence, including validity, reliability, and item-level analysis, is robust. Evidence is less convincing for children under six. The assessment was co-normed with measures of academic achievement, and the norm sample was gathered post-COVID. Although some may find requirements of digital administration limiting, the WJ V COG offers an engaging and psychometrically sound option for the assessment of intelligence.
Keywords: assessment, intelligence, cognitive, digital administration, test review
The above independent review of the WJ V cognitive battery is now available for reading (and downloading) as an open access article in the Journal of Psychoeducational Assessment.
COI statement: I, Dr. Kevin McGrew, am senior author of the WJ V. However, I no longer (as in the past with the WJ III through WJ IV) have a royalty interest in the WJ V—I make ZERO income based on how many are sold. The publisher moved to a new independent-contractor reimbursement model in which the authors were paid for work on the WJ V prior to publication. However, I clearly have a potential professional, non-income-based COI given my lengthy history with the WJ batteries and my professional reputation. I do still receive royalties for sales of the WJ IV. I also have no post-publication contract (or COI) to work on any new features that the publisher adds to the digital product after the formal release in February 2025; as a digital product, the WJ V can add new features on a semi-regular basis. In other words, consider me an “unrestricted free agent” in the intelligence testing space. 😉
I seldom designate an article as recommended reading. I typically make FYI posts about new research I find interesting in my small corner of the larger sandbox of psychology. I break with my typical FYI research-alert blogging behavior for this article by Dr. John D. Mayer. I recommend reading Mayer's thought-provoking article, especially since it is open access and can be downloaded and read for free (click here to access).
Why? Because it is a well-reasoned “thought piece” about the many unanswered questions regarding the potential positive and negative impacts of AI on humans, in this case on human personality and cognition. I’m relatively new to the fast-moving AI field and, as an educational psychologist, I’m interested in how certain cognitive abilities (especially CHC cognitive abilities) may become “skilled” or “deskilled” with greater reliance on AI.
This is an open access article that can be read/downloaded at this link.