
Tuesday, April 08, 2025

Research Byte: Conjectures and refutations in #cognitive ability #structuralvalidity research [with #WISC-V]: Insights from Bayesian structural equation modeling - #schoolpsychology #IQ #intelligence #Wechslers #WISC-V

Conjectures and refutations in cognitive ability structural validity research [with #WISC-V]: Insights from Bayesian structural equation modeling

Click here to view the Journal of School Psychology source of publication - not open access.

Abstract

The use of Bayesian structural equation modeling (BSEM) provided additional insight into the WISC–V theoretical structure beyond that offered by traditional factor analytic approaches (e.g., exploratory factor analysis and maximum likelihood confirmatory factor analysis) through the specification of all cross loadings and correlated residual terms. The results indicated that a five-factor higher-order model with a correlated residual between the Visual-Spatial and Fluid Reasoning group factors provided a superior fit to the four-factor bifactor model that has been preferred in prior research. There were no other statistically significant correlated residual terms or cross loadings in the measurement model. The results further suggest that the WISC–V ten-subtest primary battery readily attains simple structure and that its index-level scores may be interpreted as suggested in the WISC–V scoring and interpretive manual. Moreover, BSEM may help to advance IQ theory by providing contemporary intelligence researchers with a novel tool to explore complex interrelationships among cognitive abilities—relationships that traditional structural equation modeling methods may overlook. It can also help attenuate the replication crisis in school psychology within the area of cognitive assessment structural validity research through systematic evaluation of complex structural relationships, obviating the need for CFA-based post hoc specification searches, which can be prone to confirmation bias and capitalization on chance.
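For readers curious about what "specification of all cross loadings" looks like in practice, below is a minimal, hypothetical sketch of the BSEM idea (small-variance priors that shrink every cross loading toward zero unless the data demand otherwise) written in Python/PyMC. It is not the authors' model or code; the two-factor layout, priors, and placeholder data are illustrative assumptions only.

```python
# Minimal BSEM-style sketch (illustrative; NOT the article's model or code).
# Core idea: estimate every cross loading, but give cross loadings
# small-variance priors so they shrink toward zero unless the data say otherwise.
# Toy setup: 2 correlated factors, 6 subtests (3 per factor), placeholder data.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n, k = 300, 6
Y = rng.standard_normal((n, k))            # placeholder; use real subtest scores

main_ix = np.array([0, 0, 0, 1, 1, 1])     # primary factor for each subtest
cross_ix = 1 - main_ix                     # the other factor (cross loading)

with pm.Model() as bsem:
    # Factor correlation; factor variances fixed at 1 for identification
    phi = pm.Uniform("phi", lower=-1.0, upper=1.0)
    cov = pm.math.stack([pm.math.stack([1.0, phi]),
                         pm.math.stack([phi, 1.0])])
    eta = pm.MvNormal("eta", mu=np.zeros(2), cov=cov, shape=(n, 2))

    # Primary loadings: diffuse priors; cross loadings: small-variance priors
    lam_main = pm.Normal("lam_main", mu=0.0, sigma=1.0, shape=k)
    lam_cross = pm.Normal("lam_cross", mu=0.0, sigma=0.1, shape=k)

    mu = eta[:, main_ix] * lam_main + eta[:, cross_ix] * lam_cross
    sigma = pm.HalfNormal("sigma", sigma=1.0, shape=k)
    pm.Normal("Y_obs", mu=mu, sigma=sigma, observed=Y)

    idata = pm.sample(1000, tune=1000, target_accept=0.9)

# Cross loadings whose posterior intervals exclude zero are the candidates for
# "statistically significant" cross loadings; the same small-variance-prior
# logic can be extended to residual covariances (correlated residuals).
```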

Monday, December 16, 2024

“Be and see” the #WISC-V correlation matrix: Unpublished analyses of the WISC-V #intelligence test

 I often “play around” with data sets until I satisfy my curiosity…and never submit the results for publication.  These WISC-V analyses were completed 3+ years ago.  I stumbled upon the folder today and decided to simply post the information for assessment professionals interested in the WISC-V.  These results have not been peer-reviewed.  One must know the WISC-V subtest names to decipher the test abbreviations in some of the figures.  

This is a Gv (visual; 8 slides) summary of a set of exploratory structural analyses I completed with the WISC-V summary correlation matrix (Table 5.1 in the WISC-V manual). View and enjoy. 

You will need to click on the images to enlarge and read them.











Thursday, November 07, 2024

McGrew on #IQ scores: In what ways are a car engine, a starling bird #murmuration, and #g (general #intelligence) alike... how are they the same?

Kevin McGrew on IQ scores, borrowing from Detterman (2016) and McGrew et al. (2023)

“General intelligence (represented by a composite IQ score or the factor-analysis derived psychometric g factor) is a fallible summary statistical (numerical) index of the efficiency of a complex system of dynamically interacting multiple brain networks. Like the emergent statistical index of horsepower of a car engine, which does not represent a “thing” (a mechanism) in the engine, it reflects the current estimated efficiency of the processing of multiple interacting cognitive abilities and brain networks. It should not be interpreted as being the result of a single brain-based entity or mystical mental energy, as fixed, or reflecting biological/genetic destiny. The manifest expression of this statistical emergent property index is also influenced by other non-cognitive (conative) (click for relevant article) traits and temporary states of the individual and current environmental variables” (K. McGrew, 11-07-24)


Question: In what ways are a car engine, a starling bird murmuration, and general intelligence alike? How are they the same? See the slides and comments below for the answer.


 

(A starling bird murmuration)

Double click on the images for larger, more readable versions. 


















Wednesday, November 06, 2024

More on the conflation of #psychometric #g (general #intelligence): Is g the Loch Ness Monster of psychology?



From the McGrew et al. (2023) article (click here for the prior post and access to the article in the Journal of Intelligence). Click here for a series of slides regarding the theoretical and psychometric conflation of g.

The Problem of Conflating Theoretical and Psychometric g

“Contributing to the conflicting g-centric and mixed-g positions (regarding the interpretive value of broad CHC scores) is the largely unrecognized common practice of conflating theoretical and psychometric g. Psychometric g is the statistical extraction of a latent factor (via factor analysis) that accounts for the largest single source of common variance in a collection of cognitive abilities tests. It is an emergent property statistical index. Theoretical g refers to the underlying biological brain-based mechanism(s) that produce psychometric g. The global composite score from IQ test batteries is considered the best manifest proxy for psychometric g. The conflation of psychometric and theoretical g in IQ battery structural research ignores a simple fact—“general intelligence is not the primary fact of mainstream intelligence research; the primary fact is the positive manifold….general intelligence is but one interpretation of that primary fact” (Protzko and Colom 2021a, p. 2; italic emphasis added). As described later, contemporary intelligence and cognitive psychology research has provided reasonable and respected theories (e.g., dynamic mutualism; process overlap theory; wired cognition; attentional control), robust methods (psychometric network analysis), and supporting research (Burgoyne et al. 2022; Conway and Kovacs 2015; Kan et al. 2019; Kievit et al. 2016; Kovacs and Conway 2016, 2019; van der Maas et al. 2006, 2014, 2019) that accounts for the positive manifold of IQ test correlations in the absence of an underlying latent causal theoretical or psychometric g construct.” (p. 3; bold font emphasis added).

Monday, November 04, 2024

A Psychometric Network Analysis of CHC Intelligence Measures: Implications for Research, Theory, and Interpretation of Broad CHC Scores "Beyond g"

(Note: I’ve made several posts with a similar message on various social media outlets over the last 1.5 years.)

Yes, this may be seen as a brag post (I plead the fifth). But I really want (need?) to share this recent publication (January 2023). Why? Because, after 40 years of scholarship, I consider this article (which is open access and can be downloaded and read freely) to be one of my five top peer-reviewed research publications. The article is part of a special issue (Assessment of Human Intelligence: State of the Art in the 2020s) of the Journal of Intelligence, edited by Alan Kaufman et al. Warning—it is a long article. The article is the result of collaboration with Joel Schneider, Scott Decker, and Okan Bulut. 

The content of the article pushes the “edge of the envelope” regarding intelligence theories and testing via the use of exploratory psychometric network analysis (PNA) within the context of network non-g models of intelligence (i.e., models with no latent psychometric g). This approach represents an emerging paradigm shift in thinking about intelligence theories and testing. As stated by Savi et al. (2021), "factor analysis models dominated the 20th century of intelligence research, but network models will dominate the 21st." I believe Savi et al. are more-or-less correct. I believe PNA and non-g network models can move intelligence theories and testing forward—as they have become stagnant via the repeated use of "common cause" descriptive and taxonomic-generating factor analysis methods. Used in isolation, factor analysis-based intelligence test and theory models constrain school psychologists and other assessment professionals from moving forward (as described in the paper). For far too long, especially in school psychology, we have been "stuck on g" and on factor analysis-based models of test interpretation.
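For readers who have not seen a psychometric network model, here is a bare-bones sketch of the underlying idea (my illustration with placeholder data, not the exploratory hierarchical PNA used in the article): estimate a regularized partial-correlation network among subtests, so that edges represent direct pairwise associations rather than loadings on a latent g factor.

```python
# Bare-bones psychometric network sketch (placeholder data; not the article's
# exploratory hierarchical PNA). Edges are regularized partial correlations:
# the association between two subtests after conditioning on all others.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))      # placeholder: 500 examinees x 8 subtests

model = GraphicalLassoCV().fit(X)      # cross-validated sparsity penalty
P = model.precision_                   # estimated inverse covariance matrix

# Convert the precision matrix to partial correlations
d = np.sqrt(np.diag(P))
pcorr = -P / np.outer(d, d)
np.fill_diagonal(pcorr, 1.0)

# Nonzero off-diagonal entries are the network's edges; in a network (non-g)
# account, these direct links, not a latent common cause, generate the
# positive manifold of subtest correlations.
edges = np.argwhere(np.triu(np.abs(pcorr) > 1e-6, k=1))
print("number of edges:", len(edges))
```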

As stated in our article, "newer non-g emergent property theories of intelligence might lead to better intervention research for individuals who have been marginalized by society. Holden and Hart (2021) suggest that network-based non-g theories, particularly those that feature Gwm-AC mechanisms [the working memory-attentional control complex] (process overlap theory in particular) may hold promise as a vehicle for improving, and not harming, social justice and equity practices and valued outcomes for individuals in marginalized groups" (McGrew et al., 2023).  Read the original Holden and Hart article if you are interested in the social justice implications of a new way of thinking about intelligence grounded in modern network non-g conceptualizations of intelligence.


Even if the methodological material is not your cup of tea, much of the McGrew et al. (2023) introduction is relevant to assessment practitioners. Also, several sections in the discussion deal with practical implications for understanding new insights into intelligence theories, broad cluster test interpretation in general, and some strengths and weaknesses of the WJ IV CHC test and cluster scores. 


If you are not familiar with the Journal of Intelligence (JOI), I would suggest school psychologists take a look. It is not the Intelligence journal from ISIR. It is the "new kid on the block" and has quickly become a prestigious open access publication outlet with a top-notch editorial board. Since it is open access, all articles can be downloaded, read, and shared freely—an awesome free source of emerging thinking in the field of intelligence. JOI is publishing interesting articles from a wide variety of perspectives by a diverse group of scholars interested in intelligence, cognition, and related topics. It has become one of my favorite journals over the past few years. 


Finally, exploratory hierarchical psychometric network analysis methods (along with traditional structural analysis methods) were applied to the WJ V norm data—these results will be in the WJ V Technical Manual (LaForte, Dailey, McGrew, 2025).


 My WJ IV conflict of interest (COI) is included in the linked PDF article.  My WJ V COI and additional COI information can be found at the MindHub web portal.



Wednesday, October 24, 2018

Problems with bi-factor intelligence research - theoretically agnostic and psychologically naive

Kevin McGrew (@iqmobile)
Problems with #bifactor #intelligence #IQ test research studies. #gfactor may not represent a real thing or ability but may be an #emergent factor...like #SES or #DJI. #g and primary abilities uncorrelated....seriously????? Bifactor models are theoretically #agnostic pic.twitter.com/Go77F32UTI









Tuesday, September 16, 2014

Good intro overview article on exploratory factor analysis

This is a nice overview article on exploratory factor analysis. It includes a useful table of "rules of thumb" and an appendix with definitions and explanations of key concepts and terms. A good article for helping teach EFA to others. Click on the image to enlarge.
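If you want to experiment with EFA while reading the article, a minimal example follows; it uses simulated placeholder data and scikit-learn's factor analysis with a varimax rotation, so treat it as a toy illustration rather than a recommended workflow.

```python
# Toy EFA example (simulated placeholder data; illustration only).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 400
# Simulate two correlated latent abilities and six observed test scores
F = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)
true_loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.6, 0.2],
                          [0.1, 0.7], [0.0, 0.8], [0.2, 0.6]])
X = F @ true_loadings.T + rng.standard_normal((n, 6)) * 0.5

efa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
print(np.round(efa.components_.T, 2))  # rows = tests, columns = estimated loadings
```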




Monday, September 10, 2012

AP101 Brief # 16: Beyond CHC: Within-CHC Domain Complexity Optimized Measures

[Note: This is a working draft of a larger paper (Implications of 20 Years of CHC Cognitive-Achievement Research: Back-to-the-Future and Beyond CHC) that will be presented at the Inaugural Session of the Richard Woodcock Institute for Advancement of Contemporary Cognitive Assessment at Tufts University (Sept. 29, 2012): The Evolution of CHC Theory and Cognitive Assessment.] Working knowledge of the WJ III test battery will make this brief easier to understand, but it is not necessary.

Beyond CHC:  ITD—Within-CHC Domain Complexity Optimized Measures
            Optimizing Cognitive Complexity of CHC measures
I have recently begun to recognize the contribution that the Brunswik symmetry-derived Berlin Intelligence Structure (BIS) model can make to applied intelligence research, especially for increasing predictor-criterion relations by matching the predictor and criterion space on the dimension of cognitive complexity. What is cognitive complexity? Why is it important? More important, what role should it play in designing intelligence batteries to optimize CHC COG-ACH relations?
Cognitive complexity is often operationalized by inspecting individual test loadings on the first principal component from a principal component analysis (Jensen, 1998). The high-g test rationale is that tests that are more cognitively complex “invoke a wider range of elementary cognitive processes (Jensen, 1998; Stankov, 2000, 2005)” (McGrew, 2010b, p. 452). High g-loading tests are often at the center of MDS (multidimensional scaling) radex models (click here for AP101 Brief Report #15: Cognitive-Aptitude-Achievement Trait Complexes example)—but this isomorphism does not always hold. David Lohman, a student of Richard Snow’s, has made extensive use of MDS methods to study intelligence and has one of the best grasps of what cognitive complexity, as represented in the hyperspace of MDS figures, contributes to understanding intelligence and intelligence tests. According to Lohman (2011), tests closer to the center are more cognitively complex due to five possible factors—a larger number of cognitive component processes; accumulation of speed component differences; more important component processes (e.g., inference); increased demands on attentional control and working memory; and/or more demands on adaptive functions (assembly, control, and monitoring). Schneider’s (in press) level-of-abstraction description of broad CHC factors is similar to cognitive complexity. He uses the simple example of 100-meter hurdle performance. According to Schneider (in press), one could independently measure 100-meter sprinting speed and, separately, standing still and jumping over a hurdle (both examples of narrow abilities). However, running a 100-meter hurdle race is not the mere sum of the two narrow abilities; it is more of a non-additive combination and integration of narrow abilities. This analogy captures the essence of cognitive complexity—which, in the realm of cognitive measures, refers to tasks that involve more of the five factors listed by Lohman during successful task performance.
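To make the Jensen-style operationalization concrete, the short sketch below computes each test's loading on the first principal component of a hypothetical correlation matrix and ranks the tests by that loading; under this operationalization, higher first-component loadings are read as greater cognitive complexity. The numbers are made up and are not WJ III data.

```python
# Jensen-style complexity proxy (hypothetical correlations, not WJ III data):
# rank tests by their loadings on the first principal component.
import numpy as np

tests = ["TestA", "TestB", "TestC", "TestD", "TestE"]
R = np.array([
    [1.00, 0.62, 0.55, 0.35, 0.30],
    [0.62, 1.00, 0.58, 0.38, 0.33],
    [0.55, 0.58, 1.00, 0.36, 0.31],
    [0.35, 0.38, 0.36, 1.00, 0.28],
    [0.30, 0.33, 0.31, 0.28, 1.00],
])

eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
v1 = eigvecs[:, -1]
v1 = v1 if v1.sum() > 0 else -v1           # fix sign so loadings are positive
pc1_loadings = v1 * np.sqrt(eigvals[-1])   # loadings on the first component

# Higher loading = more cognitively complex under this operationalization
for name, loading in sorted(zip(tests, pc1_loadings), key=lambda t: -t[1]):
    print(f"{name}: {loading:.2f}")
```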
Of critical importance is the recognition that factor or ability domain breadth (i.e., broad or narrow) is not synonymous with cognitive complexity. More important, cognitive complexity has not always been a test design concept (as defined by Brunswik symmetry and the BIS model) explicitly incorporated into "intelligent" intelligence test design (ITD). A number of tests have incorporated the notion of cognitive complexity in their design plans, but I believe this type of cognitive complexity is different from the within-CHC domain cognitive complexity discussed here.
For example, according to Kaufman and Kaufman (2004), “in developing the KABC-II, the authors did not strive to develop ‘pure’ tasks for measuring the five CHC broad abilities. In theory, Gv tasks should exclude Gf or Gs, for example, and tests of other broad abilities, like Gc or Glr, should only measure that ability and none other. In practice, however, the goal of comprehensive tests of cognitive ability like the KABC-II is to measure problem solving in different contexts and under different conditions, with complexity being necessary to assess high-level functioning” (p. 16; italics emphasis added). Although the Kaufmans address the importance of cognitively complex measures in intelligence test batteries, their CHC-grounded description defines complex measures as those that are factorially complex or mixed measures of abilities from more than one broad CHC domain. The Kaufmans also address cognitive complexity from the non-CHC three-block functional Luria neurocognitive model when they indicate that it is important to provide measurement that evaluates the “dynamic integration of the three blocks” (Kaufman & Kaufman, 2004, p. 13). This emphasis on neurocognitive integration (and thus, complexity) is also an explicit design goal of the latest Wechsler batteries. As stated in the WAIS-IV manual (Wechsler, 2008), “although there are distinct advantages to the assessment and division of more narrow domains of cognitive functioning, several issues deserve note. First, cognitive functions are interrelated, functionally and neurologically, making it difficult to measure a pure domain of cognitive functioning” (p. 2). Furthermore, “measuring psychometrically pure factors of discrete domains may be useful for research, but it does not necessarily result in information that is clinically rich or practical in real world applications (Zachary, 1990)” (Wechsler, 2008, p. 3). Finally, Elliott (2007) similarly argues for the importance of recognizing neurocognitive-based “complex information processing” (p. 15; italics emphasis added) in the design of the DAS-II, which results in tests or composites measuring across CHC-described domains.
The ITD principle explicated and proposed here is that of striving to develop cognitively complex measures within broad CHC domains—that is, not attaining complexity via the blending of abilities across broad CHC domains and not attempting to directly link to neurocognitive network integration.[1] The Brunswik symmetry-based BIS model provides a framework for attaining this goal via the development and analysis of test complexity by paying attention to cognitive content and operations facets. 
Figure 12 presents the results of a 2-D MDS radex model of nearly all key WJ III broad and narrow CHC cognitive and achievement clusters (for all norm subjects from approximately 6 years of age through late adulthood).[2] The current focus of the interpretation of the results in Figure 12 is only on the degree of cognitive complexity (proximity to the center of the figure) of the broad and narrow WJ III clusters within the same domain (interpretations of the content and operations facets are not a focus of this material). Within a domain, the broadest three-test parent clusters are designated by black circles.[3] Two-test broad clusters are designated by gray circles. Two-test narrow offspring clusters within broad domains are designated by white circles. All clusters within a domain are connected to the broadest parent cluster by lines. The critically important information is the within-domain cognitive complexity of the respective parent and sibling clusters, as represented by their relative distances from the center of the figure. A number of interesting conclusions are apparent. [Click on image to enlarge]
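For readers who want to see how a figure like this can be generated, here is a rough sketch of a 2-D MDS radex analysis with distance-from-center used as a complexity index; the correlations and cluster names are placeholders, not the WJ III norm data behind Figure 12.

```python
# Sketch of a 2-D MDS "radex" analysis (placeholder correlations and names,
# not the WJ III data behind Figure 12). Correlations are converted to
# distances, scaled to two dimensions, and each cluster's distance from the
# centroid is used as a complexity index (closer to the center = more complex).
import numpy as np
from sklearn.manifold import MDS

names = ["ClusterA", "ClusterB", "ClusterC", "ClusterD", "ClusterE"]
R = np.array([
    [1.00, 0.65, 0.55, 0.40, 0.35],
    [0.65, 1.00, 0.60, 0.45, 0.38],
    [0.55, 0.60, 1.00, 0.42, 0.36],
    [0.40, 0.45, 0.42, 1.00, 0.30],
    [0.35, 0.38, 0.36, 0.30, 1.00],
])

D = np.sqrt(2.0 * (1.0 - R))                       # correlation -> distance
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

center = coords.mean(axis=0)
dist = np.linalg.norm(coords - center, axis=1)
for name, d in sorted(zip(names, dist), key=lambda t: t[1]):
    print(f"{name}: {d:.2f}")                      # smaller = more central/complex
```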

First, as expected, the WJ III GIA-Ext cluster is almost perfectly centered in the figure—it is clearly the most cognitively complex WJ III cluster. In comparison, the three WJ III Gv clusters are much weaker in cognitive complexity than all other cognitive clusters, with no particular Gv cluster demonstrating a clear cognitive complexity advantage. As expected, the reading and math achievement clusters are primarily cognitively complex measures. However, the achievement clusters that deal more with basic skills (Math Calculation—MTHCAL; Basic Reading Skills—RDGBS) are less complex than the application clusters (Reading Comprehension—RDGCMP; Math Reasoning—MTHREA). 
The most intriguing findings in Figure 12 are the differential cognitive complexity patterns within CHC domains (those with at least one parent and at least one offspring cluster). For example, the narrow Perceptual Speed (Gs-P) offspring cluster is more cognitively complex than the broad parent Gs cluster. The broad Gs cluster is comprised of the Visual Matching (Gs-P) and Decision Speed (Gs-R9; Glr-NA) tests, tests that measure different narrow abilities. In contrast, the Perceptual Speed (Gs-P) cluster is comprised of two tests that are both classified as measuring the same narrow ability (perceptual speed). This finding appears, on first blush, counterintuitive, as one would expect a cluster comprised of tests that measure different content and operations (the Gs cluster) to be more complex (as per the above definition and discussion) than one comprised of two measures of the same narrow ability (Gs-P). However, one must task analyze the two Perceptual Speed tests to realize that although both are classified as measuring the same narrow ability (perceptual speed), they differ in both stimulus content and cognitive operations. Visual Matching requires processing of numeric stimuli. Cross Out requires the processing of visual-figural stimuli. These are two different content facets in the BIS model. The Cross Out visual-figural stimuli are much more spatially challenging than the simple numerals in Visual Matching. Furthermore, the Visual Matching test requires the examinee to quickly seek out and mark the two numbers in a row that are identical. In contrast, in the Cross Out test the subject is provided a target visual-figural shape and must then quickly scan a row of complex visual images and mark two that are identical to the target. Interestingly, in other unpublished analyses I have completed, the Visual Matching test often loads on or groups with quantitative achievement tests, while Cross Out has frequently been shown to load on a Gv factor. Thus, task analysis of the content and cognitive operations of the WJ III Perceptual Speed tests suggests that although both are classified as narrow indicators of Gs-P, they differ markedly in task requirements. More important, the Perceptual Speed cluster tests, when combined, appear to require more cognitively complex processing than the broad Gs cluster. This finding is consistent with Ackerman, Beier, and Boyle's (2002) research, which suggests that perceptual speed has another level of factor breadth via the identification of four subtypes of perceptual speed (i.e., pattern recognition, scanning, memory, and complexity; see McGrew, 2005, and Schneider & McGrew, 2012, for discussion of a hierarchically organized model of speed abilities). Based on Brunswik symmetry/BIS cognitive complexity principles, one would predict that a Gs-P cluster comprised of two parallel forms of the same task (e.g., two Visual Matching or two Cross Out tests) would be less cognitively complex than broad Gs. A hint of the possible correctness of this hypothesis is present in the inspection of the Gsm-MS-MW domain results.
The WJ III Gsm cluster is the combination of the Numbers Reversed (MW) and Memory for Words (MS) tests. In contrast, the WJ III Auditory Memory Span (AUDMS; Gsm-MS) cluster is much less cognitively complex when compared to Gsm (see Figure 12). Like the Perceptual Speed (Gs-P) cluster described in the context of the processing speed family of clusters, the Auditory Memory Span cluster is comprised of two tests with the same memory span (MS) narrow ability classification (Memory for Words; Memory for Sentences). Why is this narrow cluster less complex than its broad parent Gsm cluster, while the opposite held true for Gs-P and Gs? Task analysis suggests that the two memory span tests are more alike than the two perceptual speed tests. The Memory for Words and Memory for Sentences tests require the same cognitive operation—simply repeating back, in order, words or sentences spoken to the subject. This differs from the WJ III Perceptual Speed cluster, as the similarly classified narrow Gs-P tests most likely invoke both common and different cognitive component operations. Also, the Memory Span cluster tests are comprised of stimuli from the same BIS content facet (i.e., words and sentences; auditory-linguistic/verbal). In contrast, the Gs-P Visual Matching and Cross Out tests involve two different content facets (numeric and visual-figural).
In contrast, the WJ III Working Memory cluster (Gsm-MW) is more cognitively complex than the parent Gsm cluster. This finding is consistent with the prior WJ III Gs/Perceptual Speed and WJ III Gsm/Auditory Memory Span discussion. The WJ III Working Memory cluster is comprised of the Numbers Reversed and Auditory Working Memory tests. Numbers Reversed requires the processing of stimuli from one BIS content facet—numeric stimuli. In contrast, Auditory Working Memory requires the processing of stimuli from two BIS content facets (numeric and auditory-linguistic/verbal; numbers and words). The cognitive operations of the two tests also differ. Both require holding the presented stimuli in active working memory space. Numbers Reversed then requires the simple reproduction of the numbers in reverse order. In contrast, the Auditory Working Memory test requires the storage of the numbers and words in separate chunks, and then the production of the forward sequence of each respective chunk (numbers or words), one chunk before the other. Greater reliance on divided attention is most likely occurring during the Auditory Working Memory test. 
In summary, the results presented in Figure 12 suggest that it is possible to develop cluster scores that vary by degree of cognitive complexity within the same broad CHC domain. More important is the finding that the classification of clusters as broad or narrow does not, by itself, provide information on a measure's cognitive complexity. Cognitive complexity, in the Lohman sense, can be achieved within CHC domains without resorting to mixing abilities across CHC domains. Finally, narrow clusters can be more cognitively complex, and thus likely better predictors of complex school achievement, than broad clusters or other narrow clusters. 

Implications for Test Battery Design and Assessment Strategies
The recognition of cognitive complexity as an important ITD principle suggests that the push to feature broad CHC clusters in contemporary test batteries, or in the construction of cross-battery assessments, fails to recognize the importance of cognitive complexity. I plead guilty to contributing to this focus via my role in the design of the WJ III, which focused extensively on broad CHC domain construct representation—most WJ III narrow CHC clusters require the use of the third WJ III cognitive book (the Diagnostic Supplement; Woodcock, McGrew, Mather, & Schrank, 2003). Similarly, I am guilty as charged regarding the dominance of broad CHC factor representation in the development of the original cross-battery assessment principles (Flanagan & McGrew, 1997; McGrew & Flanagan, 1998). 
It is also my conclusion that the "narrow is better" conclusion of McGrew and Wendling (2010) may need modification. Revisiting the McGrew and Wendling (2010) results suggests that the narrow CHC clusters that were more predictive of academic achievement may have been so not necessarily because they are narrow, but because they are more cognitively complex. I offer the hypothesis that a more correct principle is that cognitively complex measures are better. I welcome new research focused on testing this principle.
In retrospect, given the universe of WJ III clusters, a broad+narrow hybrid approach to intelligence battery configuration (or cross-battery assessment) may be more appropriate. Based exclusively on the results presented in Figure 12, the following clusters appear to be those that might best be featured in the “front end” of the WJ III or a selective-testing constructed assessment—those clusters that examiners should consider first within each broad CHC domain: Fluid Reasoning (Gf),[4] Comprehension-Knowledge (Gc), Long-term Retrieval (Glr), Working Memory (Gsm-MW), Phonemic Awareness 3 (Ga-PC), and Perceptual Speed (Gs-P). No clear winner is apparent for Gv, although the narrow Visualization cluster is slightly more cognitively complex than the Gv and Gv3 clusters. The above suggests that if broad clusters are desired for the domains of Gs, Gsm, and Gv, then additional testing beyond the “front end” or featured tests and clusters would require administration of the necessary Gs (Decision Speed), Gsm (Memory for Words), and Gv (Picture Recognition) tests.

Utilization of the ITD test design principle of optimizing the within-CHC cognitive complexity of clusters suggests that a different emphasis and configuration of WJ III tests might be more appropriate. It is proposed that the above WJ III cluster complexity priority or feature model would likely allow practitioners to administer the best predictors of school achievement. I further hypothesize that this cognitive complexity based broad+narrow test design principle most likely applies to other intelligence test batteries that have adhered to a primary focus on featuring tests that are the purest indicators of two or more narrow abilities within the provided broad CHC interpretation scheme. Of course, this is an empirical question that begs research with other batteries. More useful would be similar MDS radex cognitive complexity analyses of cross-battery intelligence data sets.[5]

References (not included in this post; the complete paper will be announced and made available for reading and download in the near future)



[1] This does not mean that cognitive complexity may not be related to the integrity of the human connectome or different brain networks. I am excited about contemporary brain network research (Bressler & Menon, 2010; Cole, Yarkoni, Repovs, Anticevic & Braver, 2012; Toga, Clark, Thompson, Shattuck, & Van Horn, 2012; van den Heuvel & Sporns, 2011), particularly that which has demonstrated links between neural network efficiency and working memory, controlled attention and clinical disorders such as ADHD (Brewer, Worunsky, Gray, Tang, Weber & Kober, 2011; Lutz, Slagter, Dunne, & Davidson, 2008; McVay & Kane, 2012). The Parietal-Frontal Integration (P-FIT) theory of intelligence is particularly intriguing as it has been linked to CHC psychometric measures (Colom, Haier, Head, Álvarez-Linera, Quiroga, Shih, & Jung, 2009; Deary, Penke, & Johnson, 2010; Haier, 2009; Jung & Haier, 2007) and could be linked to CHC cognitively-optimized psychometric measures.
[2] Only reading and math clusters were included, to simplify the presentation of the results and because, as reported previously, reading and writing measures typically do not differentiate well in multivariate analyses—hence the single Grw domain in CHC theory.
[3] GIA-Ext is also represented by a black circle.
[4] Although the WJ III Fluid Reasoning 3 cluster (Gf3) is slightly closer to the center of the figure, the difference from Fluid Reasoning (Gf) is not large and time efficiency would argue for the two-test Gf cluster.
[5] It is important to note that the cognitive complexity analysis and interpretation discussed here is specific to within the WJ III battery only. The degree of cognitive complexity in the WJ III cognitive clusters in comparison to composite scores from other intelligence batteries can only be ascertained by cross-battery MDS complexity analysis.

Friday, December 30, 2011

Dissertation Dish: Gf cognitive test analysis via CFA and task analysis






A comparison of confirmatory factor analysis and task analysis of fluid intelligence cognitive subtests, by Parkin, Jason R., Ph.D., University of Missouri - Columbia, 2010, 132 pages; AAT 3488814

Abstract

Cross-battery assessment relies on the classification of cognitive subtests into the Cattell-Horn-Carroll (CHC) theory's broad and narrow ability definitions. Generally, broad ability classifications have used ability data analyzed through factor analytic methods, while narrow ability classifications have used data about subtest task demands. The purpose of this investigation is to determine whether subtest similarity judgments based on task demands data and judgments based on ability measurement provide similar results. It includes two studies. First, middle school students (N = 63) completed six target fluid reasoning subtests that were subjected to confirmatory factor analyses to analyze subtest similarities. Second, school psychology practitioners (N = 32) sorted subtest descriptions into similarity groups. Their judgments were analyzed with multiple non-hierarchical cluster analyses. Results partially confirmed that the six target subtests were classified similarly using both data types, though the findings need to be interpreted cautiously due to limitations. Implications for assessment practices are discussed.
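As a rough illustration of the second study's general approach (not Parkin's actual data or procedure), practitioners' sorting judgments can be turned into a subtest-by-subtest co-occurrence matrix and then submitted to a non-hierarchical cluster analysis such as k-means:

```python
# Illustration of clustering sorting judgments (made-up data, not Parkin's).
# Each practitioner sorts six subtests into groups; subtests sorted together
# more often are more similar, and are then grouped with k-means
# (a non-hierarchical cluster analysis).
import numpy as np
from sklearn.cluster import KMeans

subtests = ["S1", "S2", "S3", "S4", "S5", "S6"]
# Hypothetical sorts: each row is one practitioner's group label per subtest
sorts = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1],
    [0, 1, 0, 1, 1, 1],
])

k = len(subtests)
co = np.zeros((k, k))
for row in sorts:
    co += (row[:, None] == row[None, :]).astype(float)
co /= len(sorts)          # proportion of practitioners sorting i and j together

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(co)
for name, lab in zip(subtests, labels):
    print(name, "-> cluster", lab)
```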





Thursday, December 29, 2011

WMF Human Cognitive Abilities Archive Project: Major update 12-29-11


Here is an early New Year's present for those interested in the structure of human cognitive abilities and the seminal work of Dr. John Carroll.

The free on-line WMF Human Cognitive Abilities (HCA) archive project had a MAJOR update today. An overview of the project, with a direct link to the archive, can be found at the Woodcock-Muñoz Foundation web page (click on "Current Woodcock-Muñoz Foundation Human Cognitive Abilities Archive"). Also, an on-line PPT copy of a poster presentation I made at the 2008 (Dec) ISIR conference regarding this project can be found by clicking here.

Today's update added the following 38 new data sets from John "Jack" Carroll's original collection. We now have approximately 50% of Jack Carroll's original datasets archived on-line. Of particular interest is the addition of one of Carroll's own data sets, three by John Horn, and 17 by Guilford et al. Big names...and some correlation matrices with large numbers of variables. Data parasites (er...secondary data analysts) should be happy.


  • CARR01.  Carroll, J.B. (1941).  A factor analysis of verbal abilities.  Psychometrika, 6, 279-307.
  • FAIR02.  Fairbank, B.A. Jr., Tirre, W., Anderson, N.S. (1991).  Measures of thirty cognitive tasks:  Intercorrelations and correlations with aptitude battery scores. In P.L. Dann, S. M. Irvine, & J. Collis (Eds.), Advances in computer-based human assessment (pp. 51-101).  Dordrecht & Boston: Kluwer Academic.
  • FLAN01.  Flanagan, J.C., Davis, F.B., Dailey, J.T., Shaycoft, M.F., Orr, D.B., Goldberg, I., & Neyman, C.A. Jr. (1964).  The American high school student (Cooperative Research Project No. 635).  Pittsburgh:  University of Pittsburgh.
  • FULG21.  Fulgosi, A., Guilford, J. P. (1966).  Fluctuation of ambiguous figures and intellectual flexibility.  American Journal of Psychology, 79, 602-607.
  • GUIL11.  Guilford, J.P., Berger, R.M., & Christensen, P.R. (1955).  A factor-analytic study of planning:  II. Administration of tests and analysis of results.  Los Angeles:  Reports from the Psychological Laboratory, University of Southern California, No. 12.
  • GUIL31 to GUIL46 (17).  Guilford, J.P., Lacey, J.I. (Eds.) (1947).  Printed classification tests.  Army Air Force Aviation Psychology Program Research Reports, No. 5.  Washington, DC: U.S. Government Printing Office. [discussed or re-analyzed by Lohman (1979)]
  • HARG12.  Hargreaves, H.L. (1927).  The 'faculty' of imagination:  An enquiry concerning the existence of a general 'faculty,' or group factor, of imagination.  British Journal of Psychology Monograph Supplement, 3, No. 10.
  • HECK01.  Heckman, R.W. (1967).  Aptitude-treatment interactions in learning from printed-instruction: A correlational study.  Unpublished Ph.D. thesis, Purdue University.  (University Microfilm 67-10202)
  • HEND01.  Hendricks, M., Guilford, J. P., Hoepfner, R. (1969). Measuring creative social abilities. Los Angeles: Reports from the Psychological Laboratory, University of Southern California, No. 42.
  • HEND11A.  Hendrickson, D.E. (1981). The biological basis of intelligence. Part II: Measurement. In H.J. Eysenck (Ed.), A model for intelligence (pp. 197-228). Berlin: Springer.
  • HIGG01.  Higgins, L. C. (1978).  A factor analytic study of children's picture interpretation behavior.  Educational Communication & Technology, 26, 215-232.
  • HISK03/04.  Hiskey, M. (1966). Manual for the Hiskey-Nebraska Test of Learning Aptitude. Lincoln, NE: Union College Press.
  • HORN25/26.  Horn, J. L., & Bramble, W. J. (1967). Second-order ability structure revealed in rights and wrongs scores. Journal of Educational Psychology, 58, 115-122.
  • HORN31.  Horn, J. L., & Stankov, L. (1982) Auditory and visual factors of intelligence. Intelligence, 6, 165-185.
  • KEIT21.  Keith, T. Z., & Novak, C. G. (1987).  What is the g that the K-ABC measures?  Paper presented at the meeting of the National Association of School Psychologists, New Orleans, LA.
  • KRAN01/KRAN01A.  Kranzler, J. H. (1990). The nature of intelligence: A unitary process or a number of independent processes? Unpublished doctoral dissertation, University of California at Berkeley.
  • LANS31.  Lansman, M., Donaldson, G., Hunt, E., & Yantis, S. (1982). Ability factors and cognitive processes. Intelligence, 6, 347-386.
  • LORD01.  Lord, F. M. (1956). A study of speed factors in tests and academic grades. Psychometrika, 21, 31-50.
  • LUN21.  Lunneborg, C. E. (1977). Choice reaction time: What role in ability measurement? Applied Psychological Measurement, 1, 309-330.
  • WOTH01.  Wothke, W., Bock, R.D., Curran, L.T., Fairbank, B.A., Augustin, J.W., Gillet, A.H., Guerrero, C., Jr. (1991).  Factor analytic examination of the Armed Services Vocational Aptitude Battery (ASVAB) and the kit of factor-referenced tests.  Brooks Air Force Base, TX: Air Force Human Resources Laboratory Report AFHRL-TR-90-67.
Request for assistance: The HCA project needs help tracking down copies of old journal articles, dissertations, etc. for a number of datasets being archived. We have yet to locate copies of the original manuscripts for a significant number of datasets that have been posted to the archive. Help in locating copies of these MIA manuscripts would be appreciated. Please visit the special "Requests for Assistance" section of the archive to view a more complete list of manuscripts that we are currently having trouble locating. If you have access to either a paper or e-copy of any of the designated "fugitive" documents, and would be willing to provide them to WMF to copy/scan (we would cover the costs), please contact Dr. Kevin McGrew at the email address listed at the site. A copy of the complete list of datasets with missing manuscripts (in red font) can also be downloaded directly from here.

Please join the WMF HCA listserv to receive routine email updates regarding the WMF HCA project.

All posts regarding this project can be found here.

