Monday, September 29, 2025

What does #ElonMusks #Grok (#xAI) think of Dr. Kevin #McGrew? - My Grok-based professional bio


I’ve finally started to play around with different AI programs.  I’ve been taking topics where I know the extant research well (e.g., CHC theory of intelligence, the WJ series of tests) and asking AI agents to summarize the information.  Today I had the idea of asking Grok to write me a professional bio.  After all, I know more about me than I do any other topic.  Below is the result.  I’ve made edits (strikethroughs) and added text (red font) to make it more accurate. 

I can now see how this technology can help save time with certain tasks. The bio is relatively accurate for a rough draft, but it did need some revisions and notable additions (here and there) to be more accurate and thorough.  I’ve included a link to my formally posted bio for comparison purposes—yes, it is long, as it is the “long-form” bio I submit along with my CV when engaged in court/legal work.

The Grok bio does not include URL links to material.  Perhaps there is a way to ask Grok to include links, but I don’t know if that is possible at this point.  My guess is that it probably can.  If you want to see more information on any topic, just go to theMindHub.com and navigate accordingly.

I did notice that during the 2+ minutes of Grok work, it spent a significant amount of time on my professional web page.  I guess this means that if you want an AI tool to write your professional bio, you must have some decent content up on the web…it can’t make up material from thin air.  Also, it appears good at providing rough drafts that still require a content expert to revise accordingly.

All in all, an interesting exercise.

Dr. Kevin S. McGrew bio written by AI program Grok (xAI) on 09-29-25

(strikethrough and edits in red font)

Can be compared to Dr. McGrew’s formal bio posted at theMindHub.com

 

Dr. Kevin S. McGrew is a prominent educational psychologist and psychometrician whose career spans over five decades, beginning as a practicing school psychologist for 12 years (1975–1989) after earning his MS in School Psychology from Minnesota State University Moorhead. He later obtained a PhD in Educational Psychology (Special Education) from the University of Minnesota in 1989, focusing on research methods and applied psychometrics. McGrew has held academic roles such as Professor of Applied Psychology at St. Cloud State University (1990–2000) and Visiting Lecturer and Adjunct Research Professor at the University of Minnesota (2000–2024). He also served as Research Director for the Woodcock-Muñoz Foundation (2005–2014), Associate Director for Measurement Learning Consultants (2008–2014), and Director of the Institute for Applied Psychometrics (1998–present). His work has profoundly shaped the fields of human intelligence theory and testing, emphasizing empirical rigor, practical application in education and law, and integration of cognitive and non-cognitive factors.

Contributions to Intelligence Theory

McGrew is widely recognized as a leading scholar in the Cattell-Horn-Carroll (CHC) theory of cognitive abilities, a comprehensive psychometric framework that integrates fluid and crystallized intelligence (Cattell-Horn) with Carroll’s three-stratum model. Alongside Dr. Joel Schneider, he has served as an unofficial “gatekeeper” of CHC theory, authoring seminal updates and cross-battery interpretations that have made it the dominant model in contemporary intelligence research, test development, and interpretation of intellectual assessment results.  His efforts have advanced CHC from a theoretical model to a practical tool for diagnosing learning disabilities, intellectual giftedness, and intellectual disabilities, influencing guidelines in the American Association on Intellectual and Developmental Disabilities (AAIDD) manual (2021).

McGrew has also pioneered integrative models that extend beyond pure cognition. He developed the Model of Academic Competence and Motivation (MACM) in the early 2000s, which posits that academic success arises from the interplay of cognitive abilities, conative (motivational) factors (self-efficacy, achievement orientations, self-beliefs, and self-regulated learning), and affective elements (personality, social-emotional skills, interest, and anxiety).  This evolved into the broader Cognitive-Affective-Motivation Model of Learning (CAMML), emphasizing how these dimensions interact to predict school achievement and inform interventions.  His research on psychometric network analysis has further refined CHC theory by modeling complex interrelationships among CHC abilities “beyond g” (general intelligence), as highlighted in his 2023 co-authored paper, named the Journal of Intelligence’s “Best Paper” of the year.  McGrew has also explored the Flynn effect (rising IQ scores over time) and its implications for the interpretation of intelligence test scores in Atkins intellectual disability death penalty cases, as well as CHC’s links to adaptive behavior and neurotechnology applications for cognitive enhancement.

Contributions to Intelligence Testing

McGrew’s practical impact is most evident in intelligence test development and interpretation, where he championed “intelligent testing”—an art-and-science approach inspired by Alan Kaufman that prioritizes the interpretation of broad CHC composite scores over a single global IQ score.  As primary measurement consultant for the Woodcock-Johnson Psychoeducational Battery—Revised (WJ-R, 1991), he authored its technical manual and conducted statistical analyses for the restandardization.  The WJ-R was the first major battery of individually administered cognitive and achievement tests based on the first integration of the psychometric intelligence research of Raymond Cattell and John Horn (aka the Cattell-Horn Gf-Gc model of intelligence) and John Carroll’s seminal (1993) three-stratum model of intelligence.  He co-authored the Woodcock-Johnson III (WJ III, 2001) and Woodcock-Johnson IV (WJ IV, 2014), the first major batteries explicitly grounded in CHC theory, introducing subtests for underrepresented abilities like auditory processing and long-term retrieval.  As senior co-author, he led the development of the digitally administered Woodcock-Johnson V (WJ V, 2025), incorporating recent advances in CHC theory, psychometric network analysis, and conative measures.

Internationally, McGrew consulted on the Indonesian AJT Cognitive Assessment (2014–2017), the first CHC-based individually administered intelligence test developed in that country.  He also advised the Ayrton Senna Institute (ASI) on large-scale cognitive assessments in Brazil (2016–2025) and contributed to ASI research focused on integrating constructs from McGrew’s CAMML model with the ASI Big Five personality-based social-emotional skill model.  He has provided expert psychometric consultation (through written declarations and/or court testimony) in over 50 Atkins v. Virginia death penalty cases since 2009 and has contributed to refining intellectual disability criteria through a CHC lens.

Publications and Knowledge Dissemination

McGrew has authored or co-authored over 100 peer-reviewed journal articles, book chapters, and eight norm-referenced test batteries, alongside four books on intelligence test interpretation, including Clinical Interpretation of the Woodcock-Johnson Tests of Cognitive Ability (1997, revised 2005).  His prolific output includes contributions to the APA Handbook of Intellectual and Developmental Disabilities (2021) and critiques of intellectual disability diagnostics.  He maintains influential blogs like IQ’s Corner (www.iqscorner.com), which has synthesized CHC and intelligence theory and related assessment research for practitioners since 2004, and engages on platforms like X (@iqmobile), LinkedIn, and BlueSky (@iqmcgrew.bsky.social) to democratize complex psychometric concepts and share research and insights from his multiple areas of expertise.

Awards and Legacy

McGrew’s influence is underscored by prestigious honors, including the University of Minnesota Distinguished Alumni Award (2016), Minnesota School Psychologists Association Lifetime Achievement Award (2015), Alan S. Kaufman Excellence in Assessment Award (2023), and the Dr. Richard W. Woodcock Award for Innovations in Ability Testing (2025).  His work has bridged theory and practice, empowering educators, clinicians, and policymakers to use intelligence assessments more equitably and effectively, while advocating for a holistic view of human potential that includes motivation and self-regulation alongside cognition.

 


Thursday, September 25, 2025

IQs Corner: In what way are #intelligence testing (#IQ) and the US Supreme Court (#SCOTUS) alike?—SCOTUS will be hearing an important case addressing #multiple IQ scores and #intellectualdisability #Dx in its fall 2025 term

This fall 2025, the Supreme Court of the United States (SCOTUS) will be hearing a case related to intelligence testing in the context of Atkins intellectual disability (ID) death penalty cases. The case is Hamm v. Smith.

The question before SCOTUS is: Whether and how courts may consider the cumulative effect of multiple IQ scores in assessing Atkins claims (in the context of diagnosing ID in death penalty cases)?

Note.  In order to save space and time, instead of writing “general intelligence” or “general intellectual functioning” every time, I use the abbreviation “IQ”.

The respondent (Joseph Smith) has five IQ test scores from comprehensive IQ tests.  He obtained two scores of 75 and 74 during the developmental period (before age 22), and three scores of 72, 78, and 74 between the ages of 28 and 46.  

This case is important for assessment professionals who conduct intelligence testing in general, and potential ID diagnostic cases (Atkins cases in particular).  I find this SCOTUS case particularly interesting given that in 2021, after the release of the latest official AAIDD manual (Intellectual disability: Definition, diagnosis, classification, and systems of supports), I published a critique in which I specifically noted, as one weakness of the new AAIDD manual, that “…many high-stakes ID cases often include case files that include multiple IQ scores across time or from different IQ tests. Some form of guidance, at minimum in a passing reference, to the issues of the convergence of indicators and IQ score exchangeability would have been useful. Users will need to go beyond the AAIDD manual for guidance (see Floyd et al., 2021; McGrew, 2015; and Watson, 2015)” (click here to download and read this critique).
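To make the measurement issue concrete, below is a minimal sketch of the classical test theory (CTT) arithmetic behind combining multiple IQ scores. The reliability and SD values are illustrative assumptions on my part (not figures from the briefs), and real-world interpretation must also weigh norm obsolescence (the Flynn effect), practice effects, and the fact that errors across administrations are never fully independent.

```python
import math

# Illustrative CTT arithmetic for combining multiple IQ scores.
# Assumptions (mine, not from the briefs): reliability rxx = .95,
# population SD = 15, and independent errors across administrations.
scores = [75, 74, 72, 78, 74]   # the five scores reported for the respondent
sd, rxx = 15.0, 0.95

sem_single = sd * math.sqrt(1 - rxx)            # SEM of one score (~3.35)
mean_score = sum(scores) / len(scores)
sem_mean = sem_single / math.sqrt(len(scores))  # shrinks only if errors are independent

lo, hi = mean_score - 1.96 * sem_mean, mean_score + 1.96 * sem_mean
print(f"Mean = {mean_score:.1f}, 95% band = [{lo:.1f}, {hi:.1f}]")
```

The sketch shows why multiple convergent scores can narrow the uncertainty band around an estimate of general intellectual functioning—which is precisely why guidance on the convergence of indicators matters in high-stakes Atkins cases.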

All official petitioner and respondent legal briefs (and amicus briefs) have now been published at the SCOTUS blog as of yesterday.  Many documents are posted on the SCOTUS docket.  To help the reader determine which documents are most critical (the final briefs), instead of clicking away on the various links at the SCOTUS blog, I’ve organized the petitioner and respondent brief links below.

If you prefer not to wade through all the briefs (it is not for everyone), I would encourage practicing assessment professionals to read, at a minimum, the three respondent-related briefs.  The points made are relevant to all who conduct intellectual assessments.  As a potential conflict-of-interest notice, I (Dr. Kevin McGrew), together with Dr. Joel Schneider and Dr. Cecil Reynolds (as noted on page three of the APA amicus brief), served as a consultant to APA in the drafting of that brief.  This work was performed pro bono.  If time permits, I would also suggest reading the petitioner’s Alabama brief and the US Justice Department Solicitor General’s brief to better understand the petitioner and respondent positions in Hamm v. Smith.

Petitioner briefs
  • The state of Alabama brief.  Alabama is the petitioner.  That is, if you want to read why the State of Alabama asked SCOTUS to hear this case, click on the link provided.
    • The Alabama brief also includes a very long appendix for those who want to read the prior courts’ testimony from the state and various experts. This is a very long read and is not necessary for readers who only want to understand the legal and professional issues.
  • Supporting amicus brief from the US Justice Department Solicitor General.
  • Two supporting briefs from legal groups—the American Legal Foundation and the Criminal Justice Legal Foundation.
  • Supporting amicus briefs from other states (Idaho et al.; Kentucky)

Respondent briefs
Final comment.  Those from school psychology should note that we three consultants involved in drafting the APA/ApA/AL-APA brief all had our original educational roots in the profession of school psychology.  Furthermore, SP professionals should note the significant number of authoritative references to publications authored by school psychologists in the respondent briefs, as well as in some of the petitioner briefs.  I’ve been doing expert consultation, writing declarations, and testifying in court re: Atkins ID cases since 2009.  Joel Schneider and Cecil Reynolds have also been active in a similar capacity.  Still more psychologists who come from, or are affiliated with, the field of school psychology have been prominent consultants/experts to lawyers and the courts in Atkins cases.

Perhaps some of these briefs should be assigned readings (in intellectual assessment courses or special topic seminars) for graduate students being trained in the art and science of intelligence testing and interpretation.




Tuesday, September 02, 2025

From the #Cattell-Horn-Carroll (#CHC) #cognitive #intelligence theory archives: Photos of important 1999 Carroll, Horn, Woodcock, Roid et al. meeting in Chapel Hill, NC.

I was recently cleaning my office when I stumbled upon these priceless photos from a 1999 historical meeting in Chapel Hill, NC that involved John Horn, Jack Carroll, Richard Woodcock, Gale Roid, John Wasserman, Fred Schrank, and myself.  The provenance (I’ve always wanted to use this word 😉) for the meeting is provided below the pictures in the form of extracted quotes from Wasserman (2019) and McGrew (2023) (links below), which I confirmed with John Wasserman via a personal email on August 30, 2025.

The 1990 CHC-based WJ-R had already been published, and the WJ III author team was nearing completion of the CHC-based WJ III (2001).  Unbeknownst to many is the fact that Woodcock was originally planned to be one of the coauthors of the SB5 (along with Gale Roid), which explains his presence in the photos that document one of several planning meetings for the CHC-based SB5.

I was also involved as a consultant during the early planning for the CHC-based SB5 because of my knowledge of the evolving CHC theory.  My role was to review and integrate all available published and unpublished factor-analysis research on all prior editions of the different SB legacy tests.  I post these pictures with the names of the people in each photo listed immediately below it.  No other comments (save for the next paragraph) are provided.

To say the least, my presence at this meeting (as well as at many other meetings with Carroll and Horn together, and with each alone, that occurred during the planning of the various editions of the WJ batteries) was surrealistic.  One could sense a paradigm shift in intelligence testing happening in real time during the meetings!  The expertise of the leading theorists behind what became known as CHC theory, together with the expertise of the applied test developers Woodcock and Roid, provided me with learning experiences that cannot be captured in any book or university coursework.

Click on images to enlarge.  

Be gentle—these are the best available copies of images taken with an old-school camera (not smartphone-based digital images).

(Carroll, Woodcock, McGrew, Schrank)

(Carroll, Woodcock, McGrew)

(Woodcock, Wasserman, Roid, Carroll, Horn)

(Wasserman, Roid, Carroll, Horn, McGrew)

(Carroll, Woodcock)


———————-


“It was only when I left TPC for employment with Riverside Publishing (now Houghton-Mifflin-Harcourt; HMH) in 1996 that I met Richard W. Woodcock and Kevin S. McGrew and became immersed in the extended Gf-Gc (fluid-crystallized)/ Horn-Cattell theory, beginning to appreciate how Carroll's Three-Stratum (3S) model could be operationalized in cognitive-intellectual tests. Riverside had been the home of the first Gf-Gc intelligence test, the Stanford–Binet Intelligence Scale, Fourth Edition (SB IV; R. L. Thorndike, Hagen, & Sattler, 1986), which was structured hierarchically with Spearman's g at the apex, four broad ability factors at a lower level, and individual subtests at the lowest level. After acquiring the Woodcock–Johnson (WJ-R; Woodcock & Johnson, 1989) from DLM Teaching Resources, Riverside now held a second Gf-Gc measure. The WJ-R Tests of Cognitive Ability measured seven broad ability factors from Gf-Gc theory with an eighth broad ability factor possible through two quantitative tests from the WJ-R Tests of Achievement. When I arrived, planning was underway for new test editions – the WJ III (Woodcock, McGrew, & Mather, 2001) and the SB5 (Roid, 2003) – and Woodcock was then slated to co-author both tests, although he later stepped down from the SB5. Consequently, I had the privilege of participating in meetings in 1999 with John B. Carroll and John L. Horn, both of whom had been paid expert consultants to the development of the WJ-R” (Wasserman, 2019, p. 250)

——————-

“In 1999, Woodcock brokered the CHC umbrella term with Horn and Carroll for practical reasons (McGrew 2005)—to facilitate internal and external communication regarding the theoretical model of cognitive abilities underlying the then-overlapping test development activities (and some overlapping consultants, test authors, and test publisher project directors; John Horn, Jack Carroll, Richard Woodcock, Gale Roid, Kevin McGrew, Fred Schrank, and John Wasserman) of the Woodcock–Johnson III and the Stanford Binet–Fifth Edition by Riverside Publishing” (McGrew, 2023, p. 3)

Wednesday, August 27, 2025

IQs Corner: Practice effects persist over two decades of cognitive testing: Implications for longitudinal research - #practiceeffect #cognitive #neurocognit #IQ #intelligence #schoolpsychology #schoolpsychologists

Click on image to enlarge for easy reading


MedRxiv preprint available at: https://doi.org/10.1101/2025.06.16.25329587

Elman et al. (2025)


ABSTRACT 

Background: Repeated cognitive testing can boost scores due to practice effects (PEs), yet it remains unclear whether PEs persist across multiple follow-ups and long durations. We examined PEs across multiple assessments from midlife to old age in a nonclinical sample.

Method: Men (N = 1,608) in the Vietnam Era Twin Study of Aging (VETSA) underwent neuropsychological assessment comprising 30 measures across 4 waves (~6-year testing intervals) spanning up to 20 years. We leveraged age-matched replacement participants to estimate PEs at each wave. We compared cognitive trajectories and MCI prevalence using unadjusted versus PE-adjusted scores.

Results: Across follow-ups, a range of 7-12 tests (out of 30) demonstrated significant PEs, especially in episodic memory and visuospatial domains. Adjusting for PEs resulted in improved detection of cognitive decline and MCI, with up to 20% higher MCI prevalence.

Conclusion: PEs persist across multiple assessments and decades, underscoring the importance of accounting for PEs in longitudinal studies.
  
Keywords: practice effects; repeat testing; serial testing; longitudinal testing; mild cognitive impairment; cognitive change
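For readers unfamiliar with the replacement-participant method the authors describe, here is a toy simulation of the core logic (my own sketch with invented numbers, not the authors’ code): returnees have seen the test before, age-matched replacements have not, so the returnee-minus-replacement mean difference at a given wave estimates the practice effect.

```python
import numpy as np

# Toy sketch of the replacement-participant logic for estimating
# practice effects (PEs). All parameters below are invented.
rng = np.random.default_rng(0)

true_change = -2.0       # hypothetical aging-related decline since wave 1
practice_effect = 3.0    # hypothetical boost from prior exposure to the test

# Returnees carry both the decline and the PE; first-time, age-matched
# replacements carry only the decline.
returnees = rng.normal(50 + true_change + practice_effect, 10, 500)
replacements = rng.normal(50 + true_change, 10, 500)

pe_estimate = returnees.mean() - replacements.mean()
adjusted_returnees = returnees - pe_estimate  # PE-adjusted scores
print(f"Estimated PE = {pe_estimate:.2f}")    # recovers ~3.0
```

Note how, without the adjustment, the PE masks part of the true decline—which is exactly why the authors report better detection of cognitive decline and MCI after PE adjustment.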

Tuesday, August 26, 2025

IQs Corner: What is happening in gifted/high ability research from 2013 to 2023? - #gifted #talented #highability #EDPSY #intelligence #achievement #schoolpsychologists #schoolpsychology

 Click on image to enlarge for easier reading



Trends and Topics Evolution in Research on Giftedness in Education: A Bibliometric Analysis.  Psychology in the Schools, 2025; 62:3403–3413


Rius, C., Aguilar‐Moya, R., Martínez‐Córdoba, C., Cantos‐Roldan, B., Vidal‐Infer, A.

Open access article that can be downloaded and read for free at this link.

ABSTRACT

The article explores the evolution of research on giftedness and high ability through a bibliometric analysis. It highlights challenges in identifying gifted individuals, who represent approximately 6.5% of students, although biased instruments and discriminatory selection practices may affect the identification of high skilled students. The tripartite model, defining giftedness as a combination of high intellectual ability, exceptional achievement, and potential for excellence, serves as a fundamental framework for this study. Using a latent Dirichlet allocation (LDA) topic model, major research topics were identified, and trends from 2013 to 2023 were analyzed based on 1071 publications in the Web of Science database. The analysis revealed that publications focus on topics such as giftedness, talent management, and educational programs, showing a significant increase in research on these areas over the past decade. Key topics included psychometrics, gifted programs, and environmental factors. The United States, Germany, and Spain led in productivity with prominent publications addressing cognitive and socio‐emotional aspects of giftedness. Findings underscore the need for targeted educational interventions, including acceleration and enrichment programs, to address the academic and emotional challenges faced by gifted students. Research is shifting toward understanding the environmental influences on these students, highlighting the importance of supportive educational environments for their success.
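For readers unfamiliar with the topic-modeling method referenced in the abstract, latent Dirichlet allocation (LDA) treats each publication as a mixture of topics and each topic as a distribution over words. Below is a minimal, hypothetical scikit-learn sketch of this kind of analysis; the toy “abstracts” are my own invention, standing in for the 1,071 Web of Science records.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented mini-corpus standing in for Web of Science abstracts.
docs = [
    "gifted identification psychometrics bias assessment instruments",
    "talent management acceleration enrichment programs schools",
    "socio-emotional development environment support gifted students",
    "gifted education programs curriculum acceleration achievement",
    "psychometrics intelligence testing identification validity",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

# Fit a 2-topic LDA model and list the top terms per topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```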

Monday, August 25, 2025

IQs Corner: What is (and what is not) clinical judgment in intelligence test interpretation? - #IQ #intelligence #ID #intellectualdisability #schoolpsychologists #schoolpsychology #diagnosis

What is clinical judgment in intelligence testing?  

This term is frequently invoked when psychologists explain or defend their intelligence test interpretations.  Below is a brief explanation I’ve used to describe what it is…and what it is not, based on several sources.  Schalock and Luckasson’s AAIDD Clinical Judgment book (now in a 2014 revised version) is the best single source I have found that addresses this slippery concept in intelligence testing, particularly in the context of a potential diagnosis of intellectual disability (ID)—it is recommended reading.

—————

Clinical judgment is a process based on solid scientific knowledge and is characterized as being “systematic (i.e., organized, sequential, and logical), formal (i.e., explicit and reasoned), and transparent (i.e., apparent and communicated clearly)” (Schalock & Luckasson, 2005, p. 1). The application of clinical judgment in the evaluation of IQ scores in the diagnosis of intellectual disability includes consideration of multiple factors that might influence the accuracy of an assessment of general intellectual ability (APA: DSM-5, 2013).  There is “unanimous professional consensus that the diagnosis of intellectual disability requires comprehensive assessment and the application of clinical judgment” (Brief of Amici Curiae American Psychological Association et al., in Support of Petitioner, Hall v. Florida, S.Ct. No. 12-10882, 2014, p. 8).

Clinical judgment should not be misused in the interpretation of scores from intelligence test batteries as a basis for “gut instinct” or “seat-of-the-pants” impressions and conclusions by the assessment professional (MacVaugh & Cunningham, 2009), or as justification for shortened evaluations, a means to convey stereotypes or prejudices, a substitute for insufficiently explored questions, or an excuse for incomplete testing and missing data (Schalock & Luckasson, 2005). Idiosyncratic methods and intuitive conclusions are not scientifically based and have unknown reliability and validity.

If clinical judgment interpretations and opinions regarding an individual’s level of general intelligence are based on novel or emerging research-based principles, the assessment professional must document the bases for these new interpretations as well as the limitations of these principles and methods. This requirement is consistent with Standard 9.4 of the Standards for Educational and Psychological Testing, which states:

When a test is to be used for a purpose for which little or no validity evidence is available, the user is responsible for documenting the rationale for the selection of the test and obtaining evidence of the reliability/precision of the test scores and the validity of the interpretations supporting the use of the scores for this purpose (p. 143).


American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014).  Standards for educational and psychological testing.  Washington, DC:  Author. 

American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.

Brief of Amici Curiae American Psychological Association, American Psychiatric Association, American Academy of Psychiatry and the Law, Florida Psychological Association, National Association of Social Workers, and National Association of Social Workers Florida Chapter, in Support of Petitioner, Hall v. Florida, S.Ct. No. 12-10882 (2014).

MacVaugh, G. S. & Cunningham, M. D. (2009). Atkins v. Virginia: Implications and recommendations for forensic practice.  The Journal of Psychiatry and Law, 37, 131-187.

Schalock, R. L. & Luckasson, R. (2005). Clinical judgment. Washington, DC: American Association on Intellectual and Developmental Disabilities. 

—————

Kevin S. McGrew, PhD.

Educational Psychologist

Director 

Institute for Applied Psychometrics (IAP)

www.theMindHub.com


Sunday, August 17, 2025

Thoughts on the definition of dyslexia: More on the ongoing debate re the definition of dyslexia - #dyslexia #SLD #schoolpsychologists #schoolpsychology #SPED #reading

Thoughts on the Definition of Dyslexia.  Annals of Dyslexia (click here to read or download - open access)

Linda S. Siegel,  David P. Hurford, Jamie L. Metsala, Michaela R. Ozier, & Alex C. Fender

Abstract 

The International Dyslexia Association's current definition of dyslexia was approved by its Board of Directors on November 12, 2002. After two decades of scientific inquiry into the nature of dyslexia, it is time to reconsider and potentially revise the definition in light of what has been learned. We propose a definition of dyslexia based on its essential nature. Dyslexia is a specific learning disability in reading at the word level. It involves difficulty with accurate and/or fluent word recognition and/or pseudoword reading. We also suggest that the definition should focus solely on dyslexia's core features and should not include risk factors, potential secondary consequences, or other characteristics. Until those factors can reliably differentiate between those with and without dyslexia at an individual level, they should not be included in the definition.

Monday, August 11, 2025

WJ V Technical Manual Abstract assessment service bulletin now available for download - #WJV #technicalmanual #schoolpsychologists #schoolpsychology #SLD #SPED #assessment #achievement #intelligence

Click on image to enlarge



The WJ V Technical Manual Abstract assessment service bulletin is now available via Riverside Insights (click here to download and read).  Think of it as an abridged version of the massive WJ V Technical Manual (LaForte, Dailey & McGrew, 2025).  Required reading for anyone interested in the WJ V.  Of course, reading the complete “manifesto” is highly recommended.
 
This is a technical abstract for the Woodcock-Johnson® V (WJ V™; McGrew, Mather, LaForte, & Wendling, 2025), a comprehensive assessment system for measuring general intellectual ability (g), specific cognitive abilities, oral language abilities, and academic achievement from age 4 through 90+. It describes the updates, organization, and technical aspects of the WJ V, including reliability information and evidence to support the validity of the WJ V test and cluster score interpretations. While this document provides a high-level summary of these topics, readers should consult the Woodcock-Johnson V Technical Manual (LaForte et al., 2025) for more comprehensive documentation.

A #metaanalysis of #assessment of self-regulated learning (#SRL) - #selfregulatedlearning #learning #motivation #CAMML #EDPSY #schoolpsychologists #schoolpsychology #conative


Self-regulated learning (SRL) strategies are an important component of models of school learning.  Below is a new meta-analysis of SRL assessment methods.  Overall effect sizes are not large.  More R&D is needed to develop applied, practical SRL measurement tools.  SRL is one of the major components of the 2022 Cognitive-Affective-Motivation Model of Learning (CAMML; click here to access the article).

Multimethod assessment of self-regulated learning in primary, secondary, and tertiary education – A meta-analysis.  Learning and Individual Differences (open access—click here to access).

Abstract

Self-regulated learning (SRL) can be measured in several ways, which can be broadly classified into online and offline instruments. Although both online and offline measurements have advantages and disadvantages, the over-dependence of SRL research on offline measurements has been criticised considerably. Currently, efforts are being made to use multimethod SRL assessments. We examined 20 articles with 351 effect sizes that assessed SRL with at least two instruments on at least two SRL components. Most effect sizes were not statistically significant but descriptively higher than others. Combinations of two online instruments showed the highest effect size (r = 0.24). Overall correlations between instruments were highest for university students (r = 0.21). Additionally, results for cognition showed the highest effect size measured with behavioural traces (r = 0.28), and for metacognition measured with microanalysis (r = 0.35). The component of motivation was best measured using self-report questionnaires (r = 0.29).

Educational relevance statement

Self-regulated learning is an important predictor of academic success. It is therefore necessary to measure it as precisely and comprehensively as possible. Knowing which instruments are best suited for each age group or SRL component, or reliably predict a specific achievement variable, can help educators pick the best instrument for their needs.
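For those curious about the mechanics, a meta-analysis like this one typically pools correlations across studies after Fisher’s r-to-z transformation. Here is a minimal fixed-effect sketch; the r and n values are invented for illustration, not taken from the article.

```python
import math

# Fixed-effect pooling of correlations via Fisher's r-to-z.
# Each tuple is (observed r between two SRL instruments, sample size);
# the numbers are hypothetical.
studies = [(0.24, 120), (0.18, 85), (0.31, 200)]

num = sum((n - 3) * math.atanh(r) for r, n in studies)  # weight = n - 3
den = sum(n - 3 for _, n in studies)
pooled_r = math.tanh(num / den)                         # back-transform
print(f"Pooled r = {pooled_r:.2f}")
```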

Wednesday, August 06, 2025

Leaving no child behind—Beyond cognitive and achievement abilities - #CAMML source “fugitive/grey” working paper now available. Enjoy - #NCLB #learning #EDPSY #motivation #affective #cognitive #intelligence #conative #noncognitive #schoolpsychology #schoolpsychologists



I’ve recently made several posts regarding the importance of conative (i.e., motivation, self-regulated learning strategies, etc.) learner characteristics and how they should be integrated with cognitive abilities (as per the CHC theory of cognitive abilities) to better understand the interplay between learner characteristics and school learning.  These posts have mentioned (and provided a link to) my recent 2022 article where I articulate the Cognitive-Affective-Motivation Model of Learning (CAMML; click here to access).

In the article I mention that the 2022 CAMML model had its roots in early work I completed as one of the first set of Principal Investigators during the first five years of the University of Minnesota’s National Center on Educational Outcomes (NCEO).  As a result of those posts, I’ve had several requests for the original working paper, which is best characterized as “fugitive” or “grey” literature.

The brief backstory is that the original 2004 document was a “working paper” (6-15-04; Increasing the Chance of No Child Being Left Behind: Beyond Cognitive and Achievement Abilities, by Kevin McGrew, David Johnson, Anna Casio, Jeffrey Evans) written with the aid of discretionary funds from the then US Department of Education’s Office of Special Education Programs (OSEP) during the NCLB era.  The working draft was submitted but curiously never saw the light of day.

With this post I’m now making the complete 2004 “working paper” (with writing, spelling, and grammar blemishes in their full glory) available as a PDF.  Click here to access.  Although dated (20+ years old), IMHO the lengthy paper provides a good accounting of the relevant literature up to 2004, much of which is still relevant.  Below are images of the TOC pages, which should give you a hint of the treasure trove of information and literature reviewed.  Enjoy.  Hopefully this MIA paper may help others pursue research and theoretical study in this important area.

Click on images to enlarge for easy reading







Saturday, August 02, 2025

Research Byte: Is trying hard enough? Causal analysis of the effort-IQ relationship suggests not - #intelligence #IQ #motivation #volition #CAMML #conative #noncognitive



Is Trying Harder Enough? Causal Analysis of the Effort-IQ Relationship Suggests Not.  Timothy Bates. Intelligence and Cognitive Abilities (open access—click here to locate article to read or download)


Abstract


Claims that effort increases cognitive scores are now under great doubt. What is needed is randomized controlled trials optimized for testing causal influence and avoiding confounding of self-evaluation of performance with feelings of good effort. Here we report three large studies using unconfounded measures of effort and instrumental analysis to isolate any causal effect of effort on cognitive score. An initial study (N = 393) validated an appropriate effort measure, demonstrating excellent external and convergent validity (β = .61). Study 2 (N = 500, preregistered) randomly allocated subjects to a performance incentive, using an instrumental variable analysis to detect causal effects of effort. The incentive successfully manipulated effort (β = .18, p = .001). However, the causal effect of effort on scores was near-zero and non-significant (β = .04, p = .886). Study 3 (N = 1,237) replicated this null result with preregistered analysis and an externally developed measure of effort: incentive again raised reported effort (β = .17, p < .001), but effort had no significant causal effect on cognitive score (β = .27 [-0.07, 0.62], p = .15). Alongside evidence of research fraud and confounding in earlier studies, the present evidence for the absence of any causal effect of effort on cognitive scores suggests that effort research should shift its focus to goal setting – where effort is useful – rather than raising basic ability, which it appears unable to do.
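For readers unfamiliar with the instrumental-variable (IV) logic used in Studies 2 and 3: because the incentive is randomly assigned, any effect it has on scores must flow through effort, which isolates effort’s causal effect even when an unobserved confounder inflates the naive effort-score correlation. Below is a simulated sketch of that logic; the variable names, effect sizes, and data are my invention, not the paper’s.

```python
import numpy as np

# Simulated illustration of IV (Wald/2SLS) estimation with one instrument.
rng = np.random.default_rng(1)
n = 50_000

incentive = rng.integers(0, 2, n).astype(float)  # randomized instrument
confound = rng.normal(size=n)                    # unobserved confounder
effort = 0.2 * incentive + 0.5 * confound + rng.normal(size=n)
score = 0.0 * effort + 0.5 * confound + rng.normal(size=n)  # true effect = 0

# Naive OLS slope is biased upward by the confounder.
naive = np.cov(effort, score)[0, 1] / np.var(effort, ddof=1)

# Wald/IV estimator (equivalent to 2SLS with a single instrument)
# recovers the true near-zero causal effect.
iv = np.cov(incentive, score)[0, 1] / np.cov(incentive, effort)[0, 1]

print(f"Naive OLS: {naive:.3f}, IV estimate: {iv:.3f}")
```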


Select quote from discussion: “The present results suggest a potential ‘central dogma of cognition’: that volitional effort can direct cognitive resources but cannot fundamentally alter or bypass the efficacy of the underlying cognitive systems themselves.”


These findings are consistent with my proposed Cognitive-Affective-Motivation Model of Learning (CAMML), grounded extensively in Richard Snow’s concept of aptitude trait complexes, where motivational constructs are seen as driving and directing the use of cognitive abilities (via personal investment mechanisms), but not as directly having a causal effect on cognitive abilities.  See the first of the two figures below.  Note the lack of causal arrows from conative and affective domain constructs to CHC cognitive abilities.  The paper can be accessed by clicking here.

Click on images to enlarge for easier viewing