Showing posts with label Jack Carroll. Show all posts

Tuesday, September 02, 2025

From the #Cattell-Horn-Carroll (#CHC) #cognitive #intelligence theory archives: Photos of important 1999 Carroll, Horn, Woodcock, Roid et al. meeting in Chapel Hill, NC.

I was recently cleaning my office when I stumbled upon these priceless photos from a historic 1999 meeting in Chapel Hill, NC that involved John Horn, Jack Carroll, Richard Woodcock, Gale Roid, John Wasserman, Fred Schrank, and myself.  The provenance (I’ve always wanted to use this word 😉) for the meeting is provided below the pictures in the form of extracted quotes from Wasserman (2019) and McGrew (2023) (links below), which I confirmed with John Wasserman via a personal email on August 30, 2025.

The 1989 CHC-based WJ-R had already been published, and the WJ III author team was nearing completion of the CHC-based WJ III (2001).  Unbeknownst to many, Woodcock was originally planned to be one of the coauthors of the SB5 (along with Gale Roid), which explains his presence in the photos that document one of several planning meetings for the CHC-based SB5.  

I was also involved as a consultant during the early planning for the CHC-based SB5 because of my knowledge of the evolving CHC theory.  My role was to review and integrate all available published and unpublished factor analysis research on all prior editions of the different SB legacy tests. I post these pictures with the names of the people included in each photo immediately below the photo. No other comments (save for the next paragraph) are provided.  

To say the least, my presence at this meeting (as well as at many other meetings with Carroll and Horn, together and individually, that occurred during the planning of the various WJ editions) was surreal.  One could sense a paradigm shift in intelligence testing happening in real time during the meetings!  The expertise of the leading theorists behind what became known as CHC theory, together with the expertise of applied test developers Woodcock and Roid, provided me with learning experiences that cannot be captured in any book or university coursework. 

Click on images to enlarge.  

Be gentle: these are the best available copies of images taken with an old-school camera (not smartphone-based digital images).

(Carroll, Woodcock, McGrew, Schrank)

(Carroll, Woodcock, McGrew)

(Woodcock, Wasserman, Roid, Carroll, Horn)

(Wasserman, Roid, Carroll, Horn, McGrew)

(Carroll, Woodcock)


———————-


“It was only when I left TPC for employment with Riverside Publishing (now Houghton-Mifflin-Harcourt; HMH) in 1996 that I met Richard W. Woodcock and Kevin S. McGrew and became immersed in the extended Gf-Gc (fluid-crystallized)/ Horn-Cattell theory, beginning to appreciate how Carroll's Three-Stratum (3S) model could be operationalized in cognitive-intellectual tests. Riverside had been the home of the first Gf-Gc intelligence test, the Stanford–Binet Intelligence Scale, Fourth Edition (SB IV; R. L. Thorndike, Hagen, & Sattler, 1986), which was structured hierarchically with Spearman's g at the apex, four broad ability factors at a lower level, and individual subtests at the lowest level. After acquiring the Woodcock–Johnson (WJ-R; Woodcock & Johnson, 1989) from DLM Teaching Resources, Riverside now held a second Gf-Gc measure. The WJ-R Tests of Cognitive Ability measured seven broad ability factors from Gf-Gc theory with an eighth broad ability factor possible through two quantitative tests from the WJ-R Tests of Achievement. When I arrived, planning was underway for new test editions – the WJ III (Woodcock, McGrew, & Mather, 2001) and the SB5 (Roid, 2003) – and Woodcock was then slated to co-author both tests, although he later stepped down from the SB5. Consequently, I had the privilege of participating in meetings in 1999 with John B. Carroll and John L. Horn, both of whom had been paid expert consultants to the development of the WJ-R” (Wasserman, 2019, p. 250)

——————-

“In 1999, Woodcock brokered the CHC umbrella term with Horn and Carroll for practical reasons (McGrew 2005)—to facilitate internal and external communication regarding the theoretical model of cognitive abilities underlying the then-overlapping test development activities (and some overlapping consultants, test authors, and test publisher project directors; John Horn, Jack Carroll, Richard Woodcock, Gale Roid, Kevin McGrew, Fred Schrank, and John Wasserman) of the Woodcock–Johnson III and the Stanford Binet–Fifth Edition by Riverside Publishing” (McGrew, 2023, p. 3)

Friday, November 22, 2024

The Evolution of #Intelligence (journals)—the two premier intelligence journals compared—shout out to two #schoolpsychologists

The Evolution of Intelligence: Analysis of the Journal of Intelligence and Intelligence 

Click here to read and download the paper.

by Fabio Andres Parra-Martinez 1,*, Ophélie Allyssa Desmet 2, and Jonathan Wai 1

1 Department of Education Reform, University of Arkansas, Fayetteville, AR 72701, USA
2 Department of Human Services, Valdosta State University, Valdosta, GA 31698, USA
* Author to whom correspondence should be addressed.

J. Intell. 2023, 11(2), 35; https://doi.org/10.3390/jintelligence11020035

Abstract

What are the current trends in intelligence research? This parallel bibliometric analysis covers the two premier journals in the field: Intelligence and the Journal of Intelligence (JOI) between 2013 and 2022. Using Scopus data, this paper extends prior bibliometric articles reporting the evolution of the journal Intelligence from 1977 up to 2018. It includes JOI from its inception, along with Intelligence to the present. Although the journal Intelligence’s growth has declined over time, it remains a stronghold for traditional influential research (average publications per year = 71.2, average citations per article = 17.07, average citations per year = 2.68). JOI shows a steady growth pattern in the number of publications and citations (average publications per year = 33.2, average citations per article = 6.48, total average citations per year = 1.48) since its inception in 2013. Common areas of study across both journals include cognitive ability, fluid intelligence, psychometrics–statistics, g-factor, and working memory. Intelligence includes core themes like the Flynn effect, individual differences, and geographic IQ variability. JOI addresses themes such as creativity, personality, and emotional intelligence. We discuss research trends, co-citation networks, thematic maps, and their implications for the future of the two journals and the evolution and future of the scientific study of intelligence.

Yes….a bit of a not-so-humble brag.  In the JOI co-citation figure below, the Schneider, W. J. entry is the Schneider & McGrew (2012) chapter, which has now been replaced by Schneider & McGrew (2018; sorry, I don’t have a good PDF copy to link).  In the second Intelligence co-citation network figure, the McGrew, K. S. (2009) paper, next to Carroll’s (1993) seminal work, is yours truly—my most cited journal article (see my Google Scholar profile).  The frequent citations of the Schneider & McGrew (2012) and McGrew (2009) publications are indicators of the “bridger” function Joel and I have provided—a bridge between intelligence research/theory and intelligence test development, use, and interpretation in school psychology.  

(Click on images to enlarge for better viewing)



Thursday, November 14, 2024

Stay tuned! #WJV g and non-g multiple #CHC theoretical models to be presented in the forthcoming (2025) technical manual: Senior author’s (McGrew) position re the #psychometric #g factor and #bifactor g models.

(c) Copyright, Dr. Kevin S. McGrew, Institute for Applied Psychometrics (11-14-24)

Warning: this may be TL;DR for many. :)  Also, I will reread it multiple times and may tweak minor (not substantive) errors and post updates….hey….blogging has an earthy quality to it. :)

        In a recent publication, Scott Decker, Joel Schneider, Okan Bulut and I (McGrew, 2023; click here to download and read) presented structural analyses of the WJ IV norm data using contemporary psychometric network analysis (PNA) methods.  As noted in a clip from the article below, we recommended that intelligence test researchers, and particularly authors and publishers of the technical manuals for cognitive test batteries, broaden the psychometric structural analysis of a test battery beyond the traditional (and almost exclusive) reliance on “common cause” factor analysis (EFA and CFA) methods to include PNA…to complement, not supplant, factor-based analyses.

(Click on image to enlarge for easier reading)


         Our (McGrew et al., 2023) recommendation is consistent with some critics of intelligence test structural research (e.g., see Dombrowski et al., 2018, 2019; Farmer et al., 2020) who have cogently argued that most intelligence test technical manuals typically present only one of the major classes of possible structural models of cognitive ability test batteries.  Interestingly, many school psychology scholars who conduct and report independent structural analyses of a test battery do something similar…they often present only one form of structural analysis—namely, bifactor g analyses.  
        In McGrew et al. (2023) we recommended that future cognitive ability test technical manuals embrace a more ecumenical multiple-method approach and include, when possible, all major classes of factor analysis models, as well as PNA. A multiple-methods research approach in test manuals (and in journal publications by independent researchers) can better inform users of the strengths and limitations of IQ test interpretations based on whatever conceptualization of psychometric general intelligence (including models with no such construct) underlies each type of dimensional analysis. Leaving PNA methods aside for now, the figure below presents the four major families of traditional CHC theoretical structural models.  These figures are conceptual and are not intended to represent all nuances of factor models. 



(Click on image for a larger image to view)


         Briefly, the four major families of traditional “common cause” CHC CFA structural models (Carroll, 2003; McGrew et al., 2023) vary primarily in the specification (or lack thereof) of a psychometric g factor. The different families of CHC models are conceptually represented in the figure above. In these conceptual representations, the rectangles represent individual (sub)tests, the circles represent latent ability factors at different levels of breadth or generality (stratum levels as per Carroll, 1993), the path arrows represent the direction of influence (the effect) of the latent CHC ability factors on the tests or lower-order factors, and the double-headed arrows represent all possible correlations among the broad CHC factors (in the Horn no-g model in panel D).  
        The classic hierarchical g model “places a psychometric g stratum III ability at the apex over multiple broad stratum II CHC abilities” (McGrew et al., 2023, p. 2). This model is most often associated with Carroll (1993, 2003) and is called (in panel A in the figure above) the Carroll hierarchical g broad CHC model. In this model, the shared variance of subsets of moderately to highly correlated tests is first specified as 10 CHC broad ability factors (i.e., the measurement model; Gf, Gc, Gv, etc.). Next, the covariances (latent factor correlations) among the broad CHC factors are specified as being the direct result of a higher-order psychometric g factor (i.e., the structural model). 
        A sub-model under the Carroll hierarchical g broad CHC model includes three levels of factors—several first-order narrow (stratum I) factors, 10 second-order broad (stratum II) CHC factors, and the psychometric g factor (stratum III). This is called the Carroll hierarchical g broad+narrow CHC model (panel B in the figure above). In the example, two first-order narrow CHC factors are specified: auditory short-term storage (Wa) and auditory working memory capacity (Wc). The latter is, in simple terms, a factor defined by auditory short-term memory tasks that also require heavy attentional-control-based (AC, per Schneider & McGrew, 2018) active manipulation of stimuli (the essence of Gwm, or working memory).  For illustrative purposes, a narrow naming facility (NA) first-order factor, which has higher-order effects or influences from broad Gs and Gr, is also specified for evaluation.  Wouldn’t you like to see the results of this hierarchical broad+narrow CHC model?  Well…stay tuned for the forthcoming WJ V technical manual (Q1 2025; LaForte, Dailey, & McGrew, 2025, in preparation) and your dream will come true.
        The third model is the Horn no-g model (McGrew et al., 2023).  John Horn long argued that psychometric g was nothing more than a statistical abstraction or artifact (Horn, 1998; Horn & Noll, 1997; McArdle, 2007; McArdle & Hofner, 2014; Ortiz, 2015) and did not represent a brain- or biologically-based real cognitive ability. This is represented by the Horn no-g broad CHC model in panel D. The Horn no-g broad CHC model is like the Carroll hierarchical g broad CHC model, but the 10 broad CHC factor intercorrelations are retained instead of specifying a higher- or second-order psychometric g factor. In other words, the measurement models are the same but the structural models are different. In some respects the Horn no-g broad CHC model is like contemporary no-g psychometric network analysis models (see McGrew, 2023) that eschew the notion of a higher-order latent psychometric g factor to explain the positive manifold of correlations among individual tests (or among first-order latent factors in the case of the Horn no-g model) in an intelligence battery (Burgoyne et al., 2022; Conway & Kovacs, 2015; Euler et al., 2023; Fried, 2020; Kan et al., 2019; Kievit et al., 2016; Kovacs & Conway, 2016, 2019; McGrew, 2023; McGrew et al., 2023; Protzko & Colom, 2021a, 2021b; van der Maas et al., 2006, 2014, 2019).  Over the past decade I’ve become more aligned with no-g psychometric network CHC models (e.g., process overlap theory, or POT) and Horn’s no-g CHC model, and have, tongue-in-cheek, referred to the elusive psychometric g ability (not the psychometric g factor) as the “Loch Ness Monster of psychology” (McGrew, 2021, 2022).
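To make the difference between these structural models concrete, here is a minimal numpy sketch (with made-up, purely illustrative loadings, not estimates from any WJ or SB battery) contrasting how the Carroll hierarchical g model and the Horn no-g model account for the correlations among broad factors:

```python
import numpy as np

# Hypothetical standardized loadings of four broad CHC factors
# (say Gf, Gc, Gv, Gwm) on a higher-order psychometric g factor.
# These values are illustrative only, not estimates from any battery.
g_loadings = np.array([0.85, 0.70, 0.60, 0.75])

# Carroll hierarchical g model: the broad-factor correlations are
# implied by the g loadings, r(Fi, Fj) = g_i * g_j for i != j, so the
# structural model estimates only 4 parameters (one loading per factor).
carroll_phi = np.outer(g_loadings, g_loadings)
np.fill_diagonal(carroll_phi, 1.0)  # factors standardized to unit variance

# Horn no-g model: identical measurement model, but the broad-factor
# correlations are free parameters, 4 * (4 - 1) / 2 = 6 of them,
# estimated directly from the data with no common-cause constraint.
n = len(g_loadings)
horn_free_params = n * (n - 1) // 2

print(round(float(carroll_phi[0, 1]), 3))  # 0.595 = 0.85 * 0.70 under the g constraint
print(horn_free_params)                    # 6 freely estimated correlations
```

With 10 broad CHC factors the contrast sharpens: a hierarchical g structural model uses 10 loadings to reproduce all 45 factor correlations, while the Horn model estimates the 45 correlations directly; because the g model is nested within the Horn model, the two can be compared with a chi-square difference test.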



        Three of these common cause CHC structural models (viz., the Carroll hierarchical g broad CHC model, the Carroll hierarchical g broad+narrow CHC model, and the Horn no-g broad CHC model), as well as Dr. Hudson Golino and colleagues’ hierarchical exploratory graph analysis psychometric network analysis models (a topic saved for another day), are to be presented in the structural analysis section of the validity chapter of the forthcoming WJ V technical manual.  Stay tuned for some interesting analyses and interpretations in the “must read” WJ V technical manual. Yes…assessment professionals, a well-written and thorough technical manual can be your BFF!
        Finally, the fourth family of models, which McGrew et al. (2023) called g-centric models, are commonly known as bifactor g models. In the bifactor g broad CHC model (panel C in the figure), the variance associated with a dominant psychometric g factor is first extracted from all individual tests. The residual (remaining) variance is then modeled as 10 uncorrelated (orthogonal) CHC broad factors. The bifactor model was excluded from the WJ V structural analysis. Why, after I (McGrew et al., 2023) recommended that all four classes of traditional CHC structural analysis models should be presented in a test battery’s technical manual?
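What “extracting a dominant psychometric g factor first” means can be illustrated with a small numpy sketch (again with hypothetical loadings, not WJ V values): because g and the group factors are mutually orthogonal in a bifactor model, each test’s variance partitions additively, and an omega-hierarchical coefficient summarizes how much composite variance the general factor claims:

```python
import numpy as np

# Hypothetical bifactor loadings for six standardized tests: every test
# loads on the general factor AND on exactly one of two orthogonal
# group factors (tests 0-2 on group A, tests 3-5 on group B).
# Values are illustrative only, not WJ V estimates.
g   = np.array([0.70, 0.65, 0.60, 0.55, 0.60, 0.50])  # general-factor loadings
grp = np.array([0.40, 0.35, 0.30, 0.45, 0.40, 0.35])  # group-factor loadings
uniq = 1.0 - g**2 - grp**2                             # unique variances

# Orthogonality makes the variance decomposition additive:
# g^2 + group^2 + uniqueness = 1 for each standardized test.
assert np.allclose(g**2 + grp**2 + uniq, 1.0)

# Omega-hierarchical for the total composite: variance attributable to
# the general factor divided by total composite variance.
var_g     = g.sum() ** 2
var_group = grp[:3].sum() ** 2 + grp[3:].sum() ** 2
var_total = var_g + var_group + uniq.sum()
omega_h = var_g / var_total
print(round(float(omega_h), 3))  # 0.702 with these made-up loadings
```

Note how the general factor claims the lion’s share of the composite variance by construction here; this is the “pre-ordained” dominance of psychometric g that the post discusses, not an empirical discovery about any particular battery.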
        Because specifying and evaluating bifactor g models with 60 cognitive and achievement tests proved extremely complex and fraught with statistical convergence issues.  Trust me…I tried long and hard to run bifactor g models on the WJ V norm data.  It was possible to run bifactor g models separately on the cognitive and achievement sets of WJ V tests, but that does not allow direct comparison with the other three structural models, which utilized all 60 cognitive and achievement tests in single CFA models.  Instead, as of the time the WJ V technical manual analyses were being completed and summarized, the Riverside Insights (RI) internal psychometric research team was tackling the complex issues involved in completing WJ V bifactor g models, first on the separate sets of cognitive and achievement tests.  Stay tuned for future professional conference presentations, white papers, or journal article submissions by the RI research team.
        Furthermore, the decision not to include bifactor g models does not suggest that the evaluation of WJ V bifactor g-centric CHC models is unimportant. As noted by Reynolds and Keith (2017), “bifactor models may serve as a useful mathematical convenience for partitioning variance in test scores” (p. 45; emphasis added). The bifactor g model pre-ordains “that the statistically significant lion’s share of IQ battery test variance must be of the form of a dominant psychometric g factor (Decker et al., 2021)” (McGrew et al., 2023, p. 3). Of the four families of CHC structural models, the bifactor g model is the conceptual and statistical model that supports the importance of general intelligence (psychometric g) and the preeminence of the full-scale or global IQ score over broad CHC test scores (e.g., see Dombrowski et al., 2021; Farmer et al., 2021a, 2021b; McGrew et al., 2023)—a theoretical position inconsistent with the position of the WJ V senior author (yours truly) and with Dr. Richard Woodcock’s legacy (see the additional footnote comments at the end). It is important to note that a growing body of research has questioned the preference for bifactor g cognitive models based only on statistical fit indices, as structural model fit statistics frequently are biased in favor of bifactor solutions. Per Bonifay et al. (2017), “the superior performance of the bifactor model may be a symptom of ‘overfitting’—that is, modeling not only the important trends in data but also capturing unwanted noise” (pp. 184–185). For more on this, see Decker (2021), Dueber and Toland (2021), Eid et al. (2018), Greene et al. (2022), and Murray and Johnson (2013). See Dombrowski et al. (2020) for a defense against some of the bifactor g criticisms.
        Recognizing the wisdom of Box’s (1976) well-known axiom that “all models are wrong, but some are useful,” the WJ V technical manual authors (LaForte, Dailey, & McGrew, 2025, in preparation) encourage independent researchers to use the WJ V norm data to evaluate and compare bifactor g CHC models with the models presented in the forthcoming WJ V technical manual, as well as alternative models (e.g., PASS, process overlap theory, Cattell’s triadic Gf-Gc theory) suggested in the technical manual.


Footnote:  Woodcock’s original (and enduring) position (Woodcock, 1978, 1997, 2002) regarding the validity and purpose of a composite IQ-type g score is at odds with the bifactor g CHC model. With the publication of the original WJ battery, Woodcock (1978) acknowledged the pragmatic predictive value of statistically partitioning cognitive ability test score variance into a single psychometric g factor, with the manifest total IQ score serving as a proxy for psychometric g. Woodcock stated, “it is frequently convenient to use some single index of cognitive ability that will predict the quality of cognitive behavior, on the average, across a wide variety of real-life situations. This is the [pragmatic] rationale for using a single score from a broad-based test of intelligence” (p. 126). However, Woodcock further stated that “one of the most common misconceptions about the nature of cognitive ability (particularly in discussions characterized by such labels as ‘IQ’ and ‘intelligence’) is that it is a single quality or trait held in varying degrees by individuals, something like [mental] height” (p. 126). In several publications Woodcock’s position regarding the importance of an overall general intelligence or IQ score was clear—“The primary purpose for cognitive testing should be to find out more about the problem, not to obtain an IQ” (Woodcock, 2002, p. 6; also see Woodcock, 1997, p. 235). Two of the primary WJ III, WJ IV, and WJ V authors have conducted research or published articles (see Mather & Schneider, 2023; McGrew, 2023; McGrew et al., 2023) consistent with Woodcock’s position and have advocated for a Horn no-g or emergent-property no-g CHC network model. 
Additionally, based on the failure to identify a brain-based biological g (i.e., neuro-g; Haier et al., 2024) in well over a century of research since Spearman first proposed g in the early 1900s, McGrew (2020, 2021) has suggested that g may be the “Loch Ness Monster of psychology.” This does not imply that psychometric g is unrelated to combinations of different neurocognitive mechanisms, such as brain-wide neural efficiency and the ability of the whole-brain network, which is comprised of various brain subnetworks and connections via white matter tracts, to efficiently and adaptively reconfigure the global network in response to changing cognitive demands (see Ng et al., 2024, for recent compelling research linking psychometric g to multiple brain network mechanisms and various contemporary neurocognitive theories of intelligence; NOTE…click the link to download a PDF of the article and read enough to impress your psychologist friends!).



Friday, November 08, 2024

On the origin and evolution (from the 1997 chapter to the 2025 #WJV) of the #CHC #intelligence theories’ definitions: The missing CHC definitions’ birth certificate

This is an updated version of an OBG (oldie but goodie) post originally made in 2017.  


The historical development of the CHC model of intelligence has been documented by McGrew (2005) and Schneider and McGrew (2012) and summarized by Kaufman and colleagues (Kaufman, 2009; Kaufman, Raiford & Coalson, 2016). Additional extensions and historical anecdotes were recently presented by McGrew (2023) in an article included in a special issue of the Journal of Intelligence focused on Jack Carroll’s three-stratum theory @ 30 years. McGrew (2023) recommended that CHC theory should now be referred to as a group of CHC theories (i.e., a family of related models) that recognizes the similarities and differences between the theoretical models of Cattell, Horn and Carroll.

A crucial, yet unexplained, missing piece of the CHC story is the origin of the original CHC broad and narrow ability definitions.  The CHC ability definitions’ birth certificate, until recently, had not been revealed.  To fend off possible CHC “birther” controversies, I will now set the record straight again (as was first done in 2017) regarding the heritage of the past and current CHC definitions.

Given the involvement of both John Horn and Jack Carroll in revisions of the WJ-R and WJ III, which was the impetus for the combined CHC theory, it is not surprising that the relations between the “official” CHC ability definitions and the WJ tests were “reciprocal in nature, with changes in one driving changes in the other” (Kaufman et al., 2016, p. 253).  Furthermore, “the WJ IV represented the first revision in which none of the original CHC theorists was alive at the time of publication,” producing an imbalance in this reciprocal relationship—“the WJ IV manuals now often served as the official source for the latest CHC theory and model of cognitive abilities (J. Schneider, personal communication, March 15, 2015)” (Kaufman et al., 2016, p. 253).  Kaufman et al. noted that with the development of subsequent non-WJ CHC assessment and interpretation frameworks (e.g., Flanagan and colleagues’ CHC cross-battery assessment; Miller’s integrated school neuropsychology/CHC assessment model), some confusion has crept into what represents the authoritative “official” and “unofficial” definitions and sources.  

In Schneider & McGrew (2012) and Schneider & McGrew (2018), the incestuous evolution of the CHC definitions continued, building primarily on the McGrew (2005) definitions, which in turn were reflected in the 2001 WJ III manuals, which in turn drew from McGrew (1997).  In my original 2017 post on this topic, it was judged time to divorce the official CHC definitions from the WJ series and authors (particularly myself, Kevin McGrew). 

However, the CHC birth certificate is still often questioned.  Did the CHC definitions magically appear?  Did they come down in tablet form from a mountain top?  After the Cattell-Horn and Carroll models were first married by McGrew (1997), were the definitions the result of some form of immaculate conception?  Did McGrew (1997) develop them unilaterally?  

Here is….the “rest of the story.”  

The original CHC definitions were first presented in McGrew’s (1997) chapter, where the individual tests from all major intelligence batteries were classified as per the first integration of the Cattell-Horn and Carroll models of cognitive abilities (then called a “proposed synthesized Carroll and Horn-Cattell Gf-Gc framework”).  In order to complete this analysis, I (Kevin McGrew) needed standard CHC broad and narrow ability definitions—but none existed.  I consulted the Bible…Carroll’s Human Cognitive Abilities (1993).



I developed the original definitions (primarily the narrow ability definitions) by abstracting definitions from Carroll’s (1993) book.  After completing the first draft of the definitions, I sent them to Carroll. He graciously took time to comment and edit the first draft. I subsequently revised the definitions and sent them back. Jack and I engaged in several iterations until he was comfortable with the working definitions. As a result, the original narrow ability definitions published in McGrew (1997) had the informal stamp of approval of Carroll, but not of Horn. The official CHC definition birth certificate should list Carroll and McGrew as the parents.  

Since then, the broad and narrow CHC ability definitions have been parented by McGrew (McGrew & Woodcock, 2001; McGrew, 2005; McGrew et al., 2014) and, more recently, uncle Joel Schneider (Schneider & McGrew, 2012; Schneider & McGrew, 2018). The other WJ III and WJ IV authors (Mather, Schrank, and Woodcock) served as aunts and uncles at various points in the evolution of the definitions, resulting in the current “unofficial” definitions in the WJ IV technical manual (McGrew et al., 2014) and the Schneider & McGrew (2018) chapter.




With new data-based insights from the validity analysis of the norm data from the forthcoming WJ V (LaForte, Dailey & McGrew, 2025, in preparation), the WJ V technical manual will provide, yet again, a slightly new and improved set of CHC definitions.  Stay tuned.

No doubt the WJ V 2025 updated CHC definitions will still have a clear Carroll/McGrew, WJ III/WJ IV/WJ V, and Joel Schneider genetic lineage (McGrew, 1997—>McGrew & Woodcock, 2001—>McGrew, 2005—>Schneider & McGrew, 2012—>McGrew et al., 2014—>Schneider & McGrew, 2018).  We (Schneider and McGrew) are reasonably comfortable with this fact.  However, we hope that the WJ—>WJ V set of CHC definitions will eventually move out of the influence of the WJ/CHC house and establish a separate residence, identity, and process for future growth.  I am aware that Dr. Dawn Flanagan and colleagues are working on a new revision of their CHC cross-battery book and related software and will most likely include a new set of revised definitions.  Perhaps a melding of the WJ V technical manual definition appendix with the work of Flanagan et al. would be a good starting point.  Perhaps some group or consortium of interested professionals could be established to nurture, revise, and grow the CHC definitions.

Thursday, March 23, 2017

The origins of the current CHC definitions: Where is the CHC definition birth certificate?



The historical development of the CHC model of intelligence has been documented by McGrew (2005) and Schneider and McGrew (2012) and summarized by Kaufman and colleagues (Kaufman, 2009; Kaufman, Raiford & Coalson, 2016).  A crucial, yet unexplained, missing piece of the CHC story is the origin of the original CHC broad and narrow ability definitions.  The CHC ability definitions’ birth certificate, until now, has not been located.  To fend off possible CHC “birther” controversies, I will now set the record straight regarding the heritage of the past and current CHC definitions.

Given the involvement of both John Horn and Jack Carroll in revisions of the WJ-R and WJ III, which was the impetus for the combined CHC theory, it is not surprising that the relations between the “official” CHC ability definitions and the WJ tests were “reciprocal in nature, with changes in one driving changes in the other” (Kaufman et al., 2016, p. 253).  Furthermore, “the WJ IV represents the first revision in which none of the original CHC theorists was alive at the time of publication, producing an imbalance in this reciprocal relationship, with the WJ IV manuals now serving as the official source for the latest CHC theory and model of cognitive abilities (J. Schneider, personal communication, March 15, 2015)” (Kaufman et al., 2016, p. 253).  Kaufman et al. noted that with the development of subsequent non-WJ CHC assessment and interpretation frameworks (e.g., Flanagan and colleagues’ cross-battery assessment; Miller’s integrated school neuropsychology/CHC assessment model), confusion has crept into what represents the authoritative “official” and “unofficial” definitions and sources.  


In Schneider & McGrew (2012), the incestuous evolution of the CHC definitions continued, building primarily on the McGrew (2005) definitions, which in turn were reflected in the 2001 WJ III manuals, which in turn drew from McGrew (1997).  It is time to divorce the official CHC definitions from the WJ series and authors (particularly myself, Kevin McGrew). 

However, the CHC birth certificate question is still present.  Did the CHC definitions magically appear?  After the Cattell-Horn and Carroll models were first married by McGrew (1997), were the definitions the result of some form of immaculate conception?  Did McGrew (1997) develop them unilaterally?  The original CHC definitions were presented in McGrew’s (1997) chapter, where the individual tests from all major intelligence batteries were first classified as per the first integration of the Cattell-Horn and Carroll models of cognitive abilities (then called a “proposed synthesized Carroll and Horn-Cattell Gf-Gc framework”).  In order to complete this analysis, I needed standard CHC broad and narrow definitions—but none existed.


I developed the original broad and narrow definitions by abstracting definitions from Carroll’s (1993) book.  After completing the first draft of the definitions, I sent them to Carroll. He graciously took time to comment and edit the first draft. I subsequently revised the definitions and sent them back. Carroll and I engaged in a number of iterations until he was comfortable with the working definitions. As a result, the original definitions published in 1997 had the informal stamp of approval of Carroll, but not of Horn.  The official CHC definition birth certificate should list Carroll and McGrew as the parents.  Since then the CHC definitions have been primarily parented by McGrew (McGrew & Woodcock, 2001; McGrew, 2005; McGrew et al., 2014) and, more recently, uncle Joel Schneider (Schneider & McGrew, 2012).  The other WJ III and WJ IV authors (Mather, Schrank, and Woodcock) have served as aunts and uncles at various points in the evolution of the definitions, resulting in the current “official” definitions in the WJ IV technical manual (McGrew et al., 2014).

No doubt the definitions that appear in the revision of the Schneider and McGrew (2012) chapter will likely be considered the new “official” CHC definitions, as they have a clear Carroll/McGrew and WJ III/WJ IV genetic lineage (McGrew, 1997—>McGrew & Woodcock, 2001—>McGrew, 2005—>Schneider & McGrew, 2012—>McGrew et al., 2014—>Schneider & McGrew, in press).  We (Schneider and McGrew) are reasonably comfortable with this fact.  However, we believe it is time the CHC definitions move out of the WJ/CHC house and establish a separate residence, identity, and process for future growth.  We will provide ideas on how this can be facilitated in our revised CHC chapter.

Wednesday, July 18, 2012

Clarification of Intellectual Ability Construct Terminology


      The terms ability, cognitive ability, achievement, aptitude, and aptitude-achievement are tossed around in contemporary psychological and educational assessment circles, often without a clear understanding of the similarities and differences among them.  For example, what does an “aptitude-achievement” discrepancy, in the context of contemporary models of SLD identification (see Flanagan & Fiorello, 2010), mean?  Where are the aptitudes in the CHC model?  It is argued here that it is critical that intelligence assessment professionals and researchers begin to use agreed-upon terms to avoid confusion, enhance collaboration, and facilitate research synthesis.  In this spirit, the figure below illustrates the conceptual distinction between abilities, cognitive abilities, achievement abilities, and aptitudes.  These conceptual distinctions are drawn primarily from Carroll (1993) and the work of Snow and colleagues (Corno et al., 2002).

            As reflected in the figure, all constructs in the CHC model are abilities.  As per Carroll (1993), “as used to describe an attribute of individuals, ability refers to the possible variations over individuals in the liminal levels of task difficulty (or in derived measurements based on such liminal levels) at which, on any given occasion in which all conditions appear favorable, individuals perform successfully on a defined class of tasks” (p. 8, italics in original).[1]  In simpler language, “every ability is defined in terms of some kind of performance, or potential for performance” (p. 4).  The overarching domain of abilities includes cognitive and achievement abilities as well as aptitudes (see figure).  Cognitive abilities are abilities on tasks “in which correct or appropriate processing of mental information is critical to successful performance” (p. 10; italics in original).  The key component of the operational definition of cognitive abilities is the processing of mental information (Carroll, 1993).  Achievement “refers to the degree of learning in some procedure intended to produce learning, such as a formal or informal course of instruction, or a period of self study of a topic, or practice of a skill” (p. 17).  As reflected in the above figure, the CHC domains of Grw and Gq are consistent with this definition and with Carroll’s indication that these abilities are typically measured with achievement tests.  Most assessment professionals use the terms cognitive and achievement abilities in accordance with these definitions.  However, the term aptitude is often misunderstood.
            Carroll (1993) uses a narrow definition of aptitude—“to refer to a cognitive ability that is possibly predictive of certain kinds of future learning success” (p. 16; emphasis added).  The functional emphasis on prediction is the key to this narrow definition of aptitude, as indicated by the two horizontal arrows in the figure.  These arrows, which connect the shaded CHC narrow abilities that are combined to predict an achievement ability outcome domain, represent the definition of aptitude used in this paper.
 This definition of aptitude is much narrower than the broader notion of aptitude reflected in the work of Richard Snow.  Snow’s notion of aptitude includes both cognitive and non-cognitive (conative) characteristics of individuals (Corno et al., 2002; Snow et al., 1996).  This broader definition focuses on human aptitudes, which represent “the characteristics of human beings that make for success or failure in life's important pursuits. Individual differences in aptitudes are displayed every time performance in challenging activities is assessed” (Corno et al., 2002, p. xxiii).  Contrary to many current assumptions, aptitude is not the same as ability.  According to Corno et al. (2002), ability is the power to carry out some type of specific task and comes in many forms—reading comprehension, mathematical reasoning, spatial ability, perceptual speed, domain-specific knowledge (e.g., humanities), physical coordination, etc.  This is consistent with Carroll’s definition of ability.  According to Snow and colleagues, aptitude is more aligned with the concepts of readiness, suitability, susceptibility, and proneness, all of which suggest a “predisposition to respond in a way that fits, or does not fit, a particular situation or class of situations. The common thread is potentiality—a latent quality that enables the development or production, given specified conditions, of some more advanced performance” (Corno et al., 2002, p. 3; see Scheffler, 1985).  This broader definition includes non-cognitive characteristics such as achievement motivation, freedom from anxiety, self-concept, control of impulses, and others (see the Beyond IQ project). 
As reflected in the model in the above figure, cognitive and achievement abilities differ primarily in the degree of emphasis on the processing of mental information (cognitive) and the degree to which the ability is an outcome acquired largely from informal and formal instruction (achievement).  Here, aptitude is defined as the combination, amalgam, or complex of specific cognitive abilities that, when combined, best predicts a specific achievement domain.  Cognitive abilities are always cognitive abilities.  Some cognitive abilities contribute to academic or scholastic aptitudes, which are pragmatic, functional measurement entities—not trait-like cognitive abilities.  Different academic or scholastic aptitudes, depending on the achievement domain of interest, likely share certain common cognitive abilities (domain-general) and also include cognitive abilities specific to certain achievement domains (domain-specific).  A simple and useful distinction is that cognitive abilities and achievements are more like unique abilities in a table of human cognitive elements, while different aptitudes represent combinations of different cognitive elements that serve a pragmatic predictive function.  For the quantoid readers, the distinction between factor-analysis-based latent traits (cognitive abilities) and multiple-regression-based functional predictors of achievement outcomes (aptitudes) may help clarify the sometimes murky discussion of cognitive and achievement abilities and aptitudes.
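For the quantoid readers, this regression view of aptitude can be made concrete in a few lines of code.  The sketch below is purely illustrative: the three ability scores, the true weights, and the data are all invented (a minimal simulation, not any published aptitude model), but it shows the key idea that an "aptitude" is a least-squares composite of cognitive abilities tuned to predict one achievement domain.

```python
import numpy as np

# Illustrative simulation (all names and values invented for this sketch).
rng = np.random.default_rng(0)
n = 500

# Simulated standardized scores on three narrow cognitive abilities
gc = rng.normal(size=n)    # comprehension-knowledge
gs = rng.normal(size=n)    # processing speed
gwm = rng.normal(size=n)   # working memory

# Simulated reading achievement: a weighted mix of abilities plus noise
reading = 0.5 * gc + 0.2 * gs + 0.3 * gwm + rng.normal(scale=0.5, size=n)

# The "aptitude" is the least-squares combination of the cognitive
# abilities that best predicts the achievement domain -- a pragmatic
# functional predictor, not a latent trait.
X = np.column_stack([gc, gs, gwm])
weights, *_ = np.linalg.lstsq(X, reading, rcond=None)
aptitude = X @ weights

r = np.corrcoef(aptitude, reading)[0, 1]
print("regression weights:", np.round(weights, 2))
print(f"aptitude-achievement correlation: {r:.2f}")
```

A different achievement domain (say, math rather than reading) would yield different weights over the same abilities, which is exactly why aptitudes are domain-specific predictive composites rather than traits.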



[1] As noted by Carroll (1993), liminal refers to specifying threshold values used “in order to take advantage of the fact that the most accurate measurements are obtained at those levels” (p. 8).

Thursday, December 29, 2011

WMF Human Cognitive Abilities Archive Project: Major update 12-29-11


Here is an early New Years present to those interested in the structure of human cognitive abilities and the seminal work of Dr. John Carroll.

The free on-line WMF Human Cognitive Abilities (HCA) archive project had a MAJOR update today. An overview of the project, with a direct link to the archive, can be found at the Woodcock-Muñoz Foundation web page (click on "Current Woodcock-Muñoz Foundation Human Cognitive Abilities Archive"). Also, an on-line PPT copy of a poster presentation I made at the 2008 (Dec) ISIR conference re: this project can be found by clicking here.

Today's update added the following 38 new data sets from John "Jack" Carroll's original collection.  We now have approximately 50% of Jack Carroll's original datasets archived on-line.  Of particular interest is the addition of one of Carroll's own data sets, three by John Horn, and 17 by Guilford et al.  Big names...and some correlation matrices with big numbers of variables.  Data parasites (er....secondary data analysts) should be happy.


  • CARR01.  Carroll, J.B. (1941).  A factor analysis of verbal abilities.  Psychometrika, 6, 279-307.
  • FAIR02.  Fairbank, B.A. Jr., Tirre, W., Anderson, N.S. (1991).  Measures of thirty cognitive tasks:  Intercorrelations and correlations with aptitude battery scores. In P.L. Dann, S. M. Irvine, & J. Collis (Eds.), Advances in computer-based human assessment (pp. 51-101).  Dordrecht & Boston: Kluwer Academic.
  • FLAN01.  Flanagan, J.C., Davis, F.B., Dailey, J.T., Shaycoft, M.F., Orr, D.B., Goldberg, I., Neyman, C.A. Jr. (1964).  The American high school student (Cooperative Research Project No. 635).  Pittsburgh:  University of Pittsburgh.
  • FULG21.  Fulgosi, A., Guilford, J. P. (1966).  Fluctuation of ambiguous figures and intellectual flexibility.  American Journal of Psychology, 79, 602-607.
  • GUIL11.  Guilford, J.P., Berger, R.M., Christensen, P.R. (1955).  A factor-analytic study of planning:  II. Administration of tests and analysis of results.  Los Angeles:  Reports from the Psychological Laboratory, University of Southern California, No. 12.
  • GUIL31 to GUIL46 (17).  Guilford, J.P., Lacey, J.I. (Eds.) (1947).  Printed classification tests.  Army Air Force Aviation Psychology Program Research Reports, No. 5.  Washington, DC: U.S. Government Printing Office. [discussed or re-analyzed by Lohman (1979)]
  • HARG12.  Hargreaves, H.L. (1927).  The 'faculty' of imagination:  An enquiry concerning the existence of a general 'faculty,' or group factor, of imagination.  British Journal of Psychology Monograph Supplement, 3, No. 10.
  • HECK01.  Heckman, R.W. (1967).  Aptitude-treatment interactions in learning from printed-instruction: A correlational study.  Unpublished Ph.D. thesis, Purdue University.  (University Microfilm 67-10202)
  • HEND01.  Hendricks, M., Guilford, J. P., Hoepfner, R. (1969). Measuring creative social abilities. Los Angeles: Reports from the Psychological Laboratory, University of Southern California, No. 42.
  • HEND11A.  Hendrickson, D.E. (1981). The biological basis of intelligence. Part II: Measurement. In H.J. Eysenck (Ed.), A model for intelligence (pp. 197-228). Berlin: Springer.
  • HIGG01.  Higgins, L. C. (1978).  A factor analytic study of children's picture interpretation behavior.  Educational Communication & Technology, 26, 215-232.
  • HISK03/04.  Hiskey, M. (1966). Manual for the Hiskey-Nebraska Test of Learning Aptitude. Lincoln, NE: Union College Press.
  • HORN25/26.  Horn, J. L., & Bramble, W. J. (1967). Second-order ability structure revealed in rights and wrongs scores. Journal of Educational Psychology, 58, 115-122.
  • HORN31.  Horn, J. L., & Stankov, L. (1982) Auditory and visual factors of intelligence. Intelligence, 6, 165-185.
  • KEIT21.  Keith, T. Z., & Novak, C. G. (1987). What is the g that the K-ABC measures? Paper presented at the meeting of the National Association of School Psychologists, New Orleans, LA.
  • KRAN01/KRAN01A.  Kranzler, J. H. (1990). The nature of intelligence: A unitary process or a number of independent processes? Unpublished doctoral dissertation, University of California at Berkeley.
  • LANS31.  Lansman, M., Donaldson, G., Hunt, E., & Yantis, S. (1982). Ability factors and cognitive processes. Intelligence, 6, 347-386.
  • LORD01.  Lord, F. M. (1956). A study of speed factors in tests and academic grades. Psychometrika, 21, 31-50.
  • LUNN21.  Lunneborg, C. E. (1977). Choice reaction time: What role in ability measurement? Applied Psychological Measurement, 1, 309-330.
  • WOTH01.  Wothke, W., Bock, R.D., Curran, L.T., Fairbank, B.A., Augustin, J.W., Gillet, A.H., Guerrero, C., Jr. (1991).  Factor analytic examination of the Armed Services Vocational Aptitude Battery (ASVAB) and the kit of factor-referenced tests.  Brooks Air Force Base, TX: Air Force Human Resources Laboratory Report AFHRL-TR-90-67.
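For secondary analysts, the archived correlation matrices can be reanalyzed directly.  Below is a minimal sketch of one common starting point: a principal-components solution extracted from a correlation matrix via eigendecomposition.  The tiny 4x4 matrix is invented for illustration (the real archive matrices are far larger), and this is only one of many extraction methods an analyst might choose.

```python
import numpy as np

# Invented 4x4 correlation matrix for illustration: two verbal-like
# variables (1-2) and two spatial-like variables (3-4).
R = np.array([
    [1.00, 0.60, 0.30, 0.20],
    [0.60, 1.00, 0.25, 0.15],
    [0.30, 0.25, 1.00, 0.55],
    [0.20, 0.15, 0.55, 1.00],
])

# Principal-components solution: eigendecomposition of R.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]            # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Retain components with eigenvalue > 1 (the Kaiser criterion) and
# scale eigenvectors to unrotated loadings.
k = int(np.sum(eigvals > 1.0))
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])

print("eigenvalues:", np.round(eigvals, 2))
print("retained components:", k)
print("unrotated loadings:\n", np.round(loadings, 2))
```

In practice an analyst would follow this with rotation, or use exploratory factor analysis with communality estimates, before interpreting the dimensions.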
Request for assistance: The HCA project needs help tracking down copies of old journal articles, dissertations, etc. for a number of datasets being archived. We have yet to locate copies of the original manuscripts for a significant number of datasets that have been posted to the archive. Help in locating copies of these MIA manuscripts would be appreciated.  Please visit the special "Requests for Assistance" section of this archive to view a more complete list of manuscripts that we are currently having trouble locating. If you have access to either a paper or e-copy of any of the designated "fugitive" documents, and would be willing to provide them to WMF to copy/scan (we would cover the costs), please contact Dr. Kevin McGrew at the email address listed at the site.  A copy of the complete list of datasets with missing manuscripts (in red font) can also be downloaded directly from here.

Please join the WMF HCA listserv to receive routine email updates regarding the WMF HCA project.

All posts regarding this project can be found here.

