Friday, November 09, 2007

IQ Research bytes #2: WJ III, CoGAT, RIAS

Check out the following research articles re: commonly used group (CogAT) and individually administered (WJ III; RIAS) intelligence tests.

1. Although not that recent (2003), I just discovered a manuscript by David Lohman, coauthor of the CogAT (a group IQ test), in which he presents the results of an investigation of the relations between the group-based CogAT scores and scores from the individually administered Woodcock-Johnson Tests of Cognitive Abilities--Third Edition (WJ III; conflict of interest disclosure: I'm a coauthor of the WJ III, and both the CogAT and WJ III are published by Riverside Publishing). The title, abstract, and a link to a PDF copy of the manuscript are below.

  • Lohman, D. (2003, March). The Woodcock-Johnson III and the Cognitive Abilities Test (Form 6): A Concurrent Validity Study (click here to view)
  • Abstract - This study investigated the concurrent validity of the Woodcock-Johnson III (WJ-III; Woodcock, McGrew, & Mather, 2001) and Form 6 of the Cognitive Abilities Test (CogAT; Lohman & Hagen, 2001). A total of 178 students in grades 2, 5, and 9 were administered 13 tests from the WJ-III and the appropriate level of the CogAT. Interbattery confirmatory factor analyses showed that the general factors on the two batteries correlated r = .82. Correlations between broad-group clusters on the WJ-III and battery-level scores on the CogAT generally supported the construct interpretations of each, but also suggested important differences in the abilities measured by both batteries.

2. In another WJ III related study (see the conflict of interest disclosure above), Craig Frisby and Steven Osterlind present a follow-up to an earlier descriptive analysis of the WJ III Test Session Observation Checklist norm data. Their most recent analysis focused on potential differences in examiner ratings for self-identified Hispanics in the WJ III standardization sample. The abstract and article speak for themselves. See the information below.

  • Frisby, C., & Osterlind, S. (2007). Hispanic Test-Session Behavior on the Woodcock Johnson Psychoeducational Battery–Third Edition. Journal of Psychoeducational Assessment, 25(3), 257-270. (click here to view)
  • Abstract--This study examined potential differential examiner ratings for a large sample of self-identified Hispanics on the Woodcock Johnson Psychoeducational Battery–Third Edition (WJ-III) Test Session Observation Checklist (TSOC). Both between-group (Hispanics vs. non-Hispanics) and within-group analyses (Hispanics disaggregated by first spoken language, language spoken in the home, and mother’s highest educational level) were conducted. Four research hypotheses were tested through 44 analyses. Most comparisons were not statistically significant, and across- and within-group differences had minimal influence in analyses that were statistically significant. The authors conclude that there is no compelling evidence of substantial systematic differences in examiner ratings of Hispanics’ test-session behaviors on the WJ-III.

3. Finally, Nelson et al. (2007) report an independent investigation of the internal (structural) validity of the Reynolds Intellectual Assessment Scales (RIAS). The article information is below. Long story short: Nelson et al. conclude that their factor extraction methods were more rigorous and appropriate than the exploratory (EFA) and confirmatory factor analyses (CFA) reported by the RIAS authors. More importantly, Nelson et al. conclude that the RIAS should only be interpreted as a single-factor measure of g (general intelligence); their analysis did not support the three-factor structure presented by the RIAS authors. As we all know, the "my factor analysis methods are better than your factor analysis methods" argument has been a common battle cry in the factor analysis literature for decades. That caveat aside, I do have a couple of comments based on my quick skim of the article.

First, I concur with Nelson et al. that the EFA methods should include oblique (correlated) factors, and not orthogonal (uncorrelated) factors as reported by the test authors. Cognitive abilities are correlated, and this should be modeled in the analyses (a brief sketch of the distinction appears after this paragraph). Second, in defense of the RIAS authors, the Nelson et al. sample consists of referred subjects, not a nationally representative sample. Although referral samples are much better for investigating the factor structure of instruments than clinical samples (e.g., LD), it is possible that the Nelson et al. results are influenced by characteristics of their non-norm-based sample. Third, I believe that the continued criticism of the over-use of CFA methods in test development is misdirected. If one reads the writings of one of the fathers of SEM-based methods (Joreskog), SEM methods (in this case CFA) can be used not only for model confirmation, but may also play a critical role in "model development and generation." Those of us who develop measures of cognitive abilities often use CFA as a model generation/building tool, as per Joreskog. And this is OK.
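
To make the rotation point concrete, here is a quick sketch (mine, not from Nelson et al. or the RIAS manual) that fits a two-factor EFA to simulated subtest scores with both an orthogonal (varimax) and an oblique (oblimin) rotation. It assumes the third-party Python factor_analyzer package; the data, loadings, and sample size are purely illustrative.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # assumed third-party package

# Simulate scores on six subtests driven by two *correlated* abilities
# (illustrative data only; not drawn from any test's norm sample).
rng = np.random.default_rng(42)
n = 500
factors = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)
true_loadings = np.array([[0.80, 0.00], [0.70, 0.10], [0.75, 0.05],
                          [0.10, 0.80], [0.05, 0.70], [0.00, 0.75]])
X = factors @ true_loadings.T + 0.5 * rng.standard_normal((n, 6))

# Orthogonal (varimax) rotation forces the factors to be uncorrelated;
# oblique (oblimin) rotation lets the factor correlation be estimated.
for rotation in ("varimax", "oblimin"):
    fa = FactorAnalyzer(n_factors=2, rotation=rotation)
    fa.fit(X)
    print(rotation, "loadings:\n", np.round(fa.loadings_, 2))

# Correlation between estimated factor scores from the oblique solution:
# this is the relation between abilities that an orthogonal model assumes away.
fa_oblique = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa_oblique.fit(X)
scores = fa_oblique.transform(X)
print("oblique factor score correlation:\n",
      np.round(np.corrcoef(scores, rowvar=False), 2))
```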

Fourth, since all test manuals must report test intercorrelation matrices as per the Joint Test Standards, it is possible for independent researchers to import the published correlation matrices into standard stat software and factor analyze the original norm-based correlations. Given this possibility, it would have been nice if Nelson et al. had included, as a companion to their factor analysis in the referral sample, a similar analysis (using the same logic and methods) of the same correlation matrices the RIAS authors used. If similar results were found vis-a-vis the application of their methods in both the norm and referred samples, it would make their argument stronger.
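
As an illustration of that point, below is a minimal sketch of one of the extraction criteria Nelson et al. favor, Horn's parallel analysis, run directly on a correlation matrix with plain numpy. The matrix values and sample size are made up for illustration (they are not the RIAS norm data); the logic is simply to retain factors whose observed eigenvalues exceed those expected from random data of the same dimensions.

```python
import numpy as np

# Hypothetical 4 x 4 subtest intercorrelation matrix standing in for a
# matrix published in a test manual (values are illustrative only).
R = np.array([
    [1.00, 0.62, 0.55, 0.48],
    [0.62, 1.00, 0.58, 0.51],
    [0.55, 0.58, 1.00, 0.60],
    [0.48, 0.51, 0.60, 1.00],
])
n_subjects = 1000   # assumed norm-sample size; substitute the manual's reported N
n_vars = R.shape[0]
n_reps = 1000
rng = np.random.default_rng(0)

# Observed eigenvalues of the published correlation matrix (descending order)
observed = np.sort(np.linalg.eigvalsh(R))[::-1]

# Horn's parallel analysis: eigenvalues of correlation matrices computed from
# random normal data with the same number of subjects and variables
random_eigs = np.empty((n_reps, n_vars))
for i in range(n_reps):
    X = rng.standard_normal((n_subjects, n_vars))
    random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
threshold = random_eigs.mean(axis=0)  # or the 95th percentile, a common variant

n_factors = int(np.sum(observed > threshold))
print("Observed eigenvalues:        ", np.round(observed, 3))
print("Parallel-analysis thresholds:", np.round(threshold, 3))
print("Factors to retain:", n_factors)
```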

Cecil... I know you catch my blog every now and then. If you would like to prepare a response to the Nelson et al. article, I'd be happy to post it as a guest blog post.

  • Nelson, J., Canivez, G., Lindstrom, W., & Hatt, C. (2007). Higher-order exploratory factor analysis of the Reynolds Intellectual Assessment Scales with a referred sample. Journal of School Psychology, 45, 439-456. (click here to view)
  • Abstract--The factor structure of the Reynolds Intellectual Assessment Scales (RIAS; [Reynolds, C.R., & Kamphaus, R.W. (2003). Reynolds Intellectual Assessment Scales. Lutz, FL: Psychological Assessment Resources, Inc.]) was investigated with a large (N=1163) independent sample of referred students (ages 6–18). More rigorous factor extraction criteria (viz., Horn's parallel analysis (HPA); [Horn, J.L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179–185.], and Minimum Average Partial (MAP) analysis; [Velicer, W.F. (1976). Determining the number of components from the matrix of partial correlations. Psychometrika, 41, 321–327.]), in addition to those used in RIAS development, were investigated. Exploratory factor analyses using both orthogonal and oblique rotations and higher-order exploratory factor analyses using the Schmid and Leiman [Schmid, J., and Leiman, J.M. (1957). The development of hierarchical factor solutions. Psychometrika, 22, 53–61.] procedure were conducted. All factor extraction criteria indicated extraction of only one factor. Oblique rotations resulted in different results than orthogonal rotations, and higher-order factor analysis indicated the largest amount of variance was accounted for by the general intelligence factor. The proposed three-factor solution was not supported. Implications for the use of the RIAS with similarly referred students are discussed.



1 comment:

  1. Anonymous (7:11 PM)

    Some analysis on what the Cogat scores mean would be beneficial. Thank you.
