https://www.mdpi.com/2673-5318/7/3/104
Abstract
Kevin S. McGrew, PhD
Educational & School Psychologist
Director
Institute for Applied Psychometrics (IAP)
https://www.themindhub.com
******************************************
The following is a Grok AI-generated summary, produced when Dr. Kevin McGrew prompted Grok to explain Richard Snow’s aptitude complexes on March 7, 2026.
(Grok is a generative artificial intelligence developed by xAI, an Elon Musk company; the name is also a foundational science fiction term for deep, intuitive understanding. Several editorial changes were made to the Grok summary by McGrew to ensure accuracy, and select links were added.)
Richard E. Snow’s aptitude complexes (introduced in the 1980s–1990s, with key publications in 1987, 1992, and especially the 1996 chapter with Lyn Corno and Douglas Jackson) represent a major shift in how educational psychologists think about aptitude—moving far beyond the traditional narrow view of aptitude as “just cognitive ability” (e.g., IQ or a single test score).
Core Definition
Snow redefined aptitude as:
“the degree of readiness to learn and to perform well in a particular situation or in a fixed domain.”
It is a predisposition or propensity—a latent quality that enables more advanced performance under specific conditions. Aptitudes are situational and domain-specific: what makes someone ready to succeed in math class may be different from what works in a history seminar or a hands-on lab.
Aptitude Complexes (the key innovation)
Snow argued that single constructs (like “fluid reasoning” or “achievement motivation”) are insufficient. Instead, success in learning comes from aptitude complexes—dynamic constellations or critical combinations of variables that work together as a coordinated system.
These complexes draw from the classic “trilogy of the mind”:
• Cognition — abilities and processes for analyzing, interpreting, and solving (e.g., reasoning, knowledge, strategies, cognitive style, CHC abilities).
• Affect — emotions, anxiety, self-concept, emotion regulation, personality traits.
• Conation — motivation, volition, goal-setting, effort, persistence, will (the “want to” and “stick with it” aspects).
An aptitude complex is not just a list of traits—it is how these elements assemble and coordinate in real time within a specific task and context. They are amalgams of cognitive, conative, and affective characteristics.
The Two Pathways That Build Aptitude Complexes
Snow (and later Corno et al., 2002) described aptitudes developing through two parallel, interacting pathways (sometimes called the commitment pathway and the performance/action pathway):
1. Commitment Pathway (motivational/affective/volitional)
• Assembles motivational resources that energize effort.
• Affective and volitional processes modulate how the work proceeds (e.g., regulating anxiety, sustaining intention).
• Outcome: Propensity (how likely the person is to engage and persist).
2. Performance (Action) Pathway (cognitive)
• Assembles and deploys cognitive resources (abilities, strategies, knowledge) to do the task.
• Outcome: Ability/accuracy in execution.
When the two pathways coordinate effectively in a given “problem space” (the task + the broader situation/context), successful performance occurs. Over repeated experiences, these coordinated processes coalesce into stable aptitude complexes. The learner’s perception of the situation’s constraints and affordances is crucial—aptitudes are dynamic and transactional (person ↔ environment).
Aptitudes are both input (what the student brings to learning) and product (they develop and change through education).
Snow Academic Aptitude Model (SAAM)
Snow synthesized decades of research into a provisional taxonomy, called the Snow Academic Aptitude Model (SAAM) in an unpublished working white paper (6-15-04; Increasing the Chance of No Child Being Left Behind: Beyond Cognitive and Achievement Abilities, by Kevin McGrew, David Johnson, Anna Casio, Jeffrey Evans), which is now available for download here. The model treats academic aptitudes as a “multivariate repertoire” of readiness for general and domain-specific learning. It explicitly includes affect (personality and social-emotional skills) and conation-related skills and traits (e.g., achievement motivation, goal orientation, freedom from debilitating anxiety, positive self-concept, self-efficacy, interests) alongside cognitive abilities. The model emphasizes that “there is more to knowing about an individual’s propensity for academic learning than simply knowing their IQ scores!”
Why This Mattered (and Still Matters)
Snow’s work grew out of his famous Aptitude-Treatment Interaction (ATI) research with Lee Cronbach. He showed that simple aptitude scores rarely predict learning outcomes well because real-world performance depends on these richer, situation-specific complexes.
This framework directly inspired later theories (including Kevin McGrew’s 2022 CAMML model, which updates Snow’s complexes with contemporary CHC cognitive theory, Big Five personality, and self-regulated learning mechanisms while retaining the commitment/action pathway structure).
In short, Snow’s aptitude complexes gave us a whole-person, dynamic, domain-specific way to understand why two students with the same IQ can have dramatically different learning outcomes: their aptitude complexes (the coordinated cognitive + affective + conative “package” or “amalgam”) fit the demands of the task and context differently.
Snow’s legacy is summarized in the 2002 book Remaking the Concept of Aptitude (Corno et al.), which remains the definitive extension of his ideas.
Research highlights the importance of the early identification of social, emotional, and behavioral concerns in young children; however, there are limitations regarding the usability and technical adequacy of available measures. The purpose of the present study was the initial development and validation of the Social, Academic, and Emotional Behavior Risk Screener–Early Childhood measure, a novel tool designed to assess social, emotional, and behavioral functioning for preschool-aged children. Current analyses examined internal structure, reliability, concurrent validity, and diagnostic accuracy. Data were collected from 299 children, ages 2–6, and 42 educators from six early childhood centers in the Midwest and Southeastern regions of the United States. Results of a series of factor analyses provided support for a four-factor model and yielded adequate estimates of the internal consistency reliability of each factor. Correlational and receiver operating characteristic curve findings yielded strong support for the concurrent validity and diagnostic accuracy of the Social, Academic, and Emotional Behavior Risk Screener–Early Childhood Total Behavior, Social Behavior, Early Learning Behavior, and Challenging Behavior scales relative to Devereux Early Childhood Assessment for Preschoolers–Second Edition scales. Less support was found for the Anxious Behavior scale. Limitations, implications for practice, and future directions are also discussed.
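The abstract above summarizes diagnostic accuracy with receiver operating characteristic (ROC) analysis. As a minimal sketch of the underlying idea (not the study's actual analysis or data), the area under the ROC curve (AUC) equals the probability that a randomly chosen at-risk child receives a higher screener score than a randomly chosen not-at-risk child. All scores and labels below are hypothetical.

```python
# Hedged illustration of AUC as a pairwise-comparison probability.
# The data here are invented for illustration only; they are NOT from
# the screener study summarized above.

def auc(scores_positive, scores_negative):
    """Probability that a randomly chosen at-risk case scores higher
    than a randomly chosen not-at-risk case (ties count as 0.5)."""
    wins = 0.0
    for p in scores_positive:
        for n in scores_negative:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_positive) * len(scores_negative))

# Hypothetical screener scores (higher = more risk behaviors observed)
at_risk = [14, 17, 12, 19, 15]       # flagged by the criterion measure
not_at_risk = [6, 9, 11, 7, 13, 8]   # not flagged

print(round(auc(at_risk, not_at_risk), 3))  # → 0.967
```

An AUC of 0.5 is chance-level discrimination; values approaching 1.0 indicate the screener reliably separates at-risk from not-at-risk children.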
My comment: By changing their cognitive strategies (as evidenced by adaptations in their behavior), the non-standard, high-latency Android condition introduced construct-irrelevant variance into the subjects’ scores. That is, by changing their strategies to compensate for slower latencies, the subjects changed what the speeded test was measuring, a threat to construct validity. The authors recommend using the iPads noted in the WJ V Technical Manual (LaForte, Dailey, & McGrew, 2025).
When administering the WJ V, Riverside Insights recommends that the examinee’s device be an iPad with a screen size of 10” or larger, as that is how the test was standardized. Because the assessment is browser-based, we recognize the examinee can be on any touchscreen tablet with a screen of 10” or larger, and Riverside Insights cannot control the device used. Please note that using a tablet other than an iPad on timed tests may result in differences in scores, based on the latency times of different devices. Riverside Insights strongly encourages the use of an iPad, especially on timed tests.
I am currently working on expanding my skill set by incorporating AI tools. Although adapting to new technologies can be challenging, leveraging these resources offers significant benefits for professional growth.
AI Briefs, at IQs Corner, are produced by asking Google NotebookLM to generate a narrative summary of one or more PDF journal articles. NotebookLM takes an uploaded article and responds to the prompt to “write a narrative summary of the article.” While I find the first draft promising, I do find the need to make corrections, add important links and missing information, and do additional editing to enhance each brief’s accuracy and informativeness. Future plans include using this AI tool to summarize multiple articles, find similarities and differences between the articles, and create comparative tables.
These incremental steps mark my transition toward utilizing AI to support one of my primary professional interests: producing informative blog and social media posts aimed at professionals such as school psychologists and special education teachers working with students who often are marginalized in educational contexts. The goal is to help bridge the gap between theory, technology, research, and practical application.
Feedback is encouraged and may be directed to iqmcgrew@gmail.com or via the comment feature of the social media platform (LinkedIn, Twitter/X, BlueSky) where this blog post was discovered. I’m hoping to make AI Briefs a regular feature of the IQs Corner blog and associated social media platforms.
AI Brief: Is the Intellectual Functioning Component of AAIDD's 12th Manual Satisficing?
(McGrew, 2021)
Dr. Kevin McGrew with assist from Google NotebookLM
In a commentary published in Intellectual and Developmental Disabilities, Kevin S. McGrew evaluated the intellectual functioning section (prong 1) of the AAIDD’s 12th edition manual (2021) for diagnosing intellectual disabilities (ID). He commends the organization for finally adopting the Cattell-Horn-Carroll (CHC) theory, which aligns the manual with modern scientific consensus on cognitive abilities. However, the author expresses significant concern about the manual’s contradictory guidance on part scores, arguing that its ambiguous stance could lead to legal and diagnostic confusion. McGrew also highlights various technical measurement issues and numerous copyediting errors that he believes undermine the manual's status as an authoritative resource. He suggests that while the manual is satisfactory in its theoretical shift, it does not provide the precise clarity needed for high-stakes clinical and judicial settings.
_______________________
In his review of the 12th edition of the American Association on Intellectual and Developmental Disabilities (AAIDD) manual, McGrew (2021; link for downloading article) evaluates whether the "Intellectual Functioning Component" (i.e., prong 1 of the three-prong definition of intellectual disability, or ID) provides a "satisficing"—or satisfactory and sufficient—solution for practitioners and scholars.[1] McGrew draws on over 45 years of experience in school psychology and intelligence research, theory, and test development. In addition, he draws on his expert work and consultation (since 2009) on Atkins ID death penalty cases in legal settings. McGrew evaluates the theoretical grounding, technical guidance, and professional polish of the manual's prong 1 (intellectual functioning). He does not evaluate the other two ID prongs (adaptive behavior and age of onset).
Advancement in Intelligence Theory
McGrew awards the manual a Grade B+ for its formal adoption of the Cattell-Horn-Carroll (CHC) theory of intelligence. This shift aligns the AAIDD manual with the contemporary consensus taxonomy of cognitive abilities, moving away from outdated models. However, McGrew notes that the manual "muddies the CHC waters" by giving preferential treatment to fluid (Gf) and crystallized (Gc) intelligence while neglecting other broad CHC abilities like learning efficiency (Gl), working memory (Gwm), retrieval fluency (Gr), auditory processing (Ga), visual-spatial processing (Gv), and processing speed (Gs). He suggests that a visual-graphic model of the CHC hierarchical model would have been a beneficial addition for users.
Measurement and Organizational Challenges
The manual receives a Grade B- for its treatment of major measurement issues. While it provides adequate coverage of such measurement issues as the standard error of measurement (SEM), confidence intervals, and the Flynn effect (i.e., norm obsolescence), McGrew criticizes the lack of a topic index, which makes finding specific guidance very frustrating. For instance, practice effects are obscurely placed under "progressive error" in the glossary, and the Flynn effect is curiously categorized under "Making a Retrospective Diagnosis," despite being relevant to historical and current intellectual assessments.
The Part-Score Controversy
The most critical evaluation—a Grade C—is reserved for the manual's handling of part scores. McGrew identifies three primary failures in this area:
● Inconsistency: The manual contradicts itself by advising against the use of part scores as proxies for general intelligence (psychometric g) while simultaneously suggesting that their valid use requires 3–6 subtests of Gf and Gc.
● Variance with Other Authorities: This "just say no to part scores" stance conflicts with other major authoritative sources, such as the DSM-5, which acknowledges that highly discrepant subtest scores may invalidate an overall IQ score.
● Scientific and Legal Tensions: McGrew argues that the manual fails to address the "General-2-individual" (G2i) legal principle, which acknowledges that group-based scientific research (e.g., suggesting full-scale scores are always superior) may not apply to every unique individual case; the G2i conundrum is that scientists generalize, but courts must particularize to an individual. He warns that without clearer guidance, legal entities may fill the void with "remedies of dubious quality."
Editorial Quality and Professionalism
McGrew gives the manual a Grade D for style and substance, citing at least 20 copyedit errors in the sections relevant to the intellectual functioning prong alone. These include misspellings of prominent researchers, incorrect terminology like "test e-norms," and frequent "misplaced italics." He contends that such preventable errors tarnish the manual’s status as an "authoritative" and "definitive" source for diagnosing intellectual disabilities.
Conclusion
McGrew concludes that while the endorsement of CHC theory is a significant positive revision, the manual’s obfuscation regarding part scores and its numerous editorial flaws represent major missed opportunities. He emphasizes that practitioners cannot wait another decade for the next edition to offer more robust guidance, particularly in high-stakes legal and diagnostic settings. He concludes that while he may be a "tough grader," his critiques are intended to push AAIDD toward more robust and clearer guidance in future editions or supplements.
[1] Nobel laureate Herb Simon advanced the behavioral economics concept of satisficing (Simon, 1956)—the idea that, although we may aspire to optimal solutions, real-world constraints often require us to settle on what is both satisfactory and sufficient (hence, the portmanteau term satisficing).