Joel Schneider comment:
Kevin's recent blog post about the Gf=g hypothesis is interesting and worth reading.
For most hypotheses about the structure of cognitive abilities, I can think of no better dataset on which to test them than the WJ-III standardization sample. However, in this particular case, I've always had my doubts about the WJ-III Gf tests. I am confident that both of the primary WJ-III Gf tests are excellent markers of Gf. However, I've always thought that they contained a hint of common variance that was non-Gf related. What that is, I can't quite put my finger on, but it has something to do with executive control of attention. Both involve a need to generate hypotheses and test them in working memory in ways that seem more involved than the traditional matrix Gf tests. Both of them also seem to require math-like thought processes, especially in the more difficult items.
Suppose that the Gf=g hypothesis were true. Let's say that Concept Formation and Analysis-Synthesis both consist of the following sources of variance:
CF = Gf + Something Extra + error
AS = Gf + Something Extra + error
The latent variable that would be constructed to represent Gf in a CFA would thus be:
WJ-III Gf = Gf + Something Extra
The chi-square test of the model constraining the Gf-to-g path to 1.0 would be significant, not because the Gf=g hypothesis is wrong, but because the two WJ-III Gf subtests are not pure enough markers of Gf. It would take only a little something extra for the chi-square test to be significant.
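This argument can be sketched numerically. Below is a minimal Python simulation; the variance shares are invented purely for illustration, not estimated from WJ-III data. With only two indicators, the common factor is whatever CF and AS share, so even a modest dose of shared non-Gf variance makes that factor correlate noticeably below 1.0 with true g. That is exactly the situation in which a large-sample chi-square test would reject the Gf-to-g = 1.0 constraint even though Gf=g holds at the latent level.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6000  # roughly standardization-sample size

# Hypothetical variance components, chosen only for illustration.
g     = rng.standard_normal(n)        # true general factor; under Gf=g, true Gf IS g
extra = 0.4 * rng.standard_normal(n)  # small shared non-Gf variance ("mathiness"?)
e_cf  = 0.6 * rng.standard_normal(n)  # unique error, Concept Formation
e_as  = 0.6 * rng.standard_normal(n)  # unique error, Analysis-Synthesis

cf  = g + extra + e_cf   # CF = Gf + Something Extra + error
as_ = g + extra + e_as   # AS = Gf + Something Extra + error

# With just two indicators, the modeled "Gf" factor is (up to scale)
# whatever CF and AS have in common: g + extra, not g alone.
wj_gf = g + extra
r = float(np.corrcoef(wj_gf, g)[0, 1])
print(f"corr(modeled 'Gf' factor, true g) = {r:.3f}")
```

With these toy numbers the expected correlation is 1/sqrt(1 + 0.16), about 0.93, a departure from unity small in absolute terms but easily flagged as significant in a sample of several thousand.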
I would think that adding one Raven-like matrix test to the Gf mix would reduce the problem (if there actually is a problem). These tests seem less mathy and more visual-spatialish and thus might dilute the non-Gf common variance.
The tables Kevin links to include a Gf latent variable that consists of:
Numerical Reasoning (Number Matrices + Number Series?)
If I am right about CF and AS being mathy, and if mathiness is not exactly the same as Gf, then this WJ-III Gf is likely to be: WJ-III Gf = Gf + Mathiness
I was very surprised to see how strong an indicator of Gf Quantitative Concepts is, given its Gc-like question format. Perhaps it is glomming onto Gf not because it has a lot of Gf in it but because it is attracted to the math-like elements of the other indicators. Even so, I am very much at a loss to understand why Quantitative Concepts has a higher loading on Gf than does Applied Problems.
Ruben Lopez responds:
Maybe the messiness of Gf's measurability, even in an exceptional measure like the WJ-III, may have more to do with abstraction and its relationship to g than with a separate Gf.
Consider Dr. David Lohman's discussion of Gf's relationship to Gq in "The Woodcock-Johnson III and the Cognitive Abilities Test (Form 6): A concurrent validity study" (March 2003):
"Recent discussions of the nature of general ability have emphasized the importance of physiological processes (Jensen, 1998), the role of working memory (Kyllonen, 1996), or the congruence between a primary Inductive Reasoning factor, the stratum II Fluid Ability factor (Gf), and g (Gustafsson, 2002). However, the present study supports Keith and Witta's (1997) hypothesis that quantitative reasoning may be an even better indicator of g. Quantitative reasoning has always been represented in some form in achievement test batteries, and in aptitude tests (such as the SAT) designed to predict academic success. But a broad quantitative knowledge factor (Gq) was not added to Gf-Gc theory until the late 1980s (Horn, 1989). Carroll's (1993) three-stratum theory, on the other hand, considers quantitative reasoning to be part of a broad fluid reasoning (Gf) factor. Confirmatory factor analyses of different ability test batteries mirror this ambivalence. Some studies find g and Gq indistinguishable [as in Keith & Bickley's (1992) factor analysis of the Stanford-Binet IV or Lohman & Hagen's (2002) factor analyses of the CogAT Primary Battery], other studies find Gq to be the best indicator of g [as in Keith & Witta's (1997) factor analyses of the WISC-III or Lohman & Hagen's (2002) factor analyses of the CogAT Multilevel Battery], and yet other studies find distinguishable g and Gq factors [as in Bickley, Keith, & Wolfe's (1995) factor analysis of the Woodcock-Johnson Psychoeducational Battery-Revised].
Paradoxically, quantitative reasoning has not been much studied because it is difficult to separate from g unless combined with tests of more specific mathematical knowledge and skill (as in the Gq factor). But it is this overlap with g that makes quantitative reasoning particularly interesting as a vehicle for understanding the nature of g. Perhaps the most salient characteristic of quantitative concepts is abstraction. Even elementary operations like counting require abstraction: two cats are in some way the same as two dogs or two anything. The number line itself is an abstraction, especially when it includes negative numbers. Abstraction is most obvious in understanding concepts such as variable or, later, imaginary number.
Several early definitions of g emphasized abstract thinking or reasoning abilities. And the transition from concrete to abstract thinking figured prominently in Piaget's theory of intelligence. Modern definitions of g emphasize the importance of working memory resources or even of reasoning, but do not have much to say about the role of abstract thinking. These analyses suggest that a closer study of quantitative reasoning might be a good place to begin in exploring this possibility."
And don't forget Keith and colleagues' recommendation that the Arithmetic subtest be added to the Perceptual Reasoning scale to assess Gf.
Cathy Fiorello chimes in: