Friday, April 29, 2005

Journal of Intelligence and ISIR - pre-pub "in press" feature

For those serious about staying current with contemporary intelligence research, the journal Intelligence is a must subscription. I would urge blog readers to visit the ISIR (International Society for Intelligence Research) web page to consider joining this organization. Membership includes a subscription to the journal, and members can also download pdf copies of articles from the web site.

Another nice feature is that "corrected proof" pdf copies of "in press" articles are made available regularly. This allows members to keep abreast of contemporary developments ASAP. I think this feature is a model that other journals should follow.

Below is a selection of abstracts from recently published "in press-corrected proof" articles that were available for download.


Ashton, M. C., & Lee, K. (2005). Problems with the method of correlated vectors. Intelligence, In Press, Corrected Proof.


  • The method of correlated vectors has been used widely to identify variables that are associated with general intelligence (g). Briefly, this method involves finding the correlation between the vector of intelligence subtests' g-loadings and the vector of those subtests' correlations with the variable in question. We describe two major problems with this method: first, associations of a variable with non-g sources of variance can produce a vector correlation of zero even when the variable is strongly associated with g; second, the g-loadings of subtests are highly sensitive to the nature of the other subtests in a battery, and a biased sample of subtests can cause a spurious correlation between the vectors.
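For the quantoid-inclined, the mechanics of the method are simple enough to sketch in a few lines of Python. The g-loadings and correlations below are made-up illustrative numbers, not values from the article:

```python
# Sketch of the method of correlated vectors: correlate the vector of
# subtests' g-loadings with the vector of those subtests' correlations
# with an external variable. All numbers are fabricated for illustration.
import numpy as np

g_loadings = np.array([0.80, 0.72, 0.65, 0.58, 0.50])  # hypothetical g-loadings
r_with_x = np.array([0.40, 0.35, 0.30, 0.25, 0.20])    # hypothetical correlations with variable X

# The "vector correlation" is simply the Pearson r between the two vectors.
vector_r = np.corrcoef(g_loadings, r_with_x)[0, 1]
print(round(vector_r, 3))  # near-perfect for these fabricated numbers
```

As Ashton and Lee point out, a high (or zero) vector correlation here says less than it appears to: it depends heavily on which subtests happen to be in the battery.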

Arendasy, M., & Sommer, M. (2005). The effect of different types of perceptual manipulations on the dimensionality of automatically generated figural matrices. Intelligence, In Press, Corrected Proof.

  • Two pilot studies (n1 = 155, n2 = 451) are presented in this article, which were carried out within the development of an item generator for the automatic generation of figural matrices items. The focus of the presented studies was to compare two types of item designs with regard to the effect of variations of the property "perceptual organization" on the psychometric properties and concurrent validity of figural matrices. The main results not only indicate a comparably high concurrent validity of the automatically generated figural matrices with regard to Raven matrices but also point to interesting differential effects of two kinds of implementations of perceptual organization. The conclusion of the two studies is that a more thorough understanding of the component processes of inductive reasoning and especially the impact of various item features on their difficulty and psychometric properties must be obtained.

Barber, N. (2005). Educational and ecological correlates of IQ: A cross-national investigation. Intelligence, In Press, Corrected Proof.

  • The new paradigm of evolutionary social science suggests that humans adjust rapidly to changing economic conditions, including cognitive changes in response to the economic significance of education. This research tested the predictions that cross-national differences in IQ scores would be positively correlated with education and negatively correlated with an agricultural way of life. Regression analysis found that much of the variance in IQ scores of 81 countries (derived from [Lynn, R., & Vanhanen, T. (2002). IQ and the wealth of nations. Westport, CT: Praeger]) was explained by enrollment in secondary education, illiteracy rates, and by the proportion of agricultural workers. Cross-national IQ scores were also related to low birth weights. These effects remained with national wealth, infant mortality, and geographic continent controlled (exception secondary education) and were largely due to variation within continents. Cross-national differences in IQ scores thus suggest that increasing cognitive demands in developed countries promote an adaptive increase in cognitive ability.


Johnson, W., & Bouchard, T. J., Jr. (2005). The structure of human intelligence: It is verbal, perceptual, and image rotation (VPR), not fluid and crystallized. Intelligence, In Press, Corrected Proof.

  • In a heterogeneous sample of 436 adult individuals who completed 42 mental ability tests, we evaluated the relative statistical performance of three major psychometric models of human intelligence--the Cattell-Horn fluid-crystallized model, Vernon's verbal-perceptual model, and Carroll's three-strata model. The verbal-perceptual model fit significantly better than the other two. We improved it by adding memory and higher-order image rotation factors. The results provide evidence for a four-stratum model with a g factor and three third-stratum factors. The model is consistent with the idea of coordination of function across brain regions and with the known importance of brain laterality in intellectual performance. We argue that this model is theoretically superior to the fluid-crystallized model and highlight the importance of image rotation in human intellectual function.

Johnson, W., & Bouchard, T. J., Jr. (2005). Constructive replication of the verbal-perceptual-image rotation model in Thurstone's (1941) battery of 60 tests of mental ability. Intelligence, In Press, Corrected Proof.

  • We recently evaluated the relative statistical performance of the Cattell-Horn fluid-crystallized model and the Vernon verbal-perceptual model of the structure of human intelligence in a sample of 436 adults heterogeneous for age, place of origin, and educational background who completed 42 separate tests of mental ability from three test batteries. We concluded that the Vernon model's performance was substantively superior but could be significantly improved. In so doing, we proposed a four-stratum model with a g factor at the top of the hierarchy and three factors at the third stratum. We termed this the Verbal-Perceptual-Image Rotation (VPR) model. In this study, we constructively replicated the model comparisons and development of the VPR model using the data matrix published by Thurstone and Thurstone (1941) [Thurstone, L. L., & Thurstone T. G. (1941). Factorial studies of intelligence. Chicago: University of Chicago Press]. The data matrix was generated by scores of 710 Chicago eighth graders on 60 tests of mental ability.

Kempel, P., Gohlke, B., Klempau, J., Zinsberger, P., Reuter, M., & Hennig, J. (2005). Second-to-fourth digit length, testosterone and spatial ability. Intelligence, In Press, Corrected Proof.

  • Based on stimulating findings suggesting that prenatal levels of steroids may influence cognitive functions, a study with N=40 healthy volunteers of both sexes was conducted. Prenatal levels of testosterone (T) were estimated by use of the second-to-fourth digit ratio (2D:4D), which is supposed to be controlled by the same genes involved in maturation of gonadal tissue and therefore may reflect the level of prenatal T. Moreover, activational effects of T were investigated by measuring T levels in saliva directly. Subjects completed several subtests of intelligence batteries for verbal, numerical and spatial abilities. Levels of T were not related to any of the cognitive functions. The 2D:4D was lower in males as compared to females. Males outperformed females on spatial ability. Moreover, females with low 2D:4D performed better on cognitive tests measuring spatial as well as numerical ability as compared to females with high 2D:4D. Results are discussed with respect to the assumed role of prenatal and present levels of T in brain development and cognitive functioning.


Luo, D., Thompson, L. A., & Detterman, D. K. (2005). The criterion validity of tasks of basic cognitive processes. Intelligence, In Press, Corrected Proof.

  • The present study evaluated the criterion validity of the aggregated tasks of basic cognitive processes (TBCP). In age groups from 6 to 19 of the Woodcock-Johnson III Cognitive Abilities and Achievement Tests normative sample, the aggregated TBCP, i.e., the processing speed and working memory clusters, correlate with measures of scholastic achievement as strongly as the conventional indexes of crystallized intelligence and fluid intelligence. These basic processing aggregates also mediate almost exhaustively the correlations between measures of fluid intelligence and achievement, and appear to explain substantially more of the achievement measures than the fluid ability index. The results from the Western Reserve Twin Project sample using TBCP with more rigorous experimental paradigms were similar, suggesting that it may be practically feasible to adopt TBCP with experimental paradigms into the psychometric testing tradition. Results based on the latent factors in structural equation models largely confirmed the findings based on the observed aggregates and composites.


McDaniel, M. A. (2005). Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, In Press, Corrected Proof.

  • The relationship between brain volume and intelligence has been a topic of a scientific debate since at least the 1830s. To address the debate, a meta-analysis of the relationship between in vivo brain volume and intelligence was conducted. Based on 37 samples across 1530 people, the population correlation was estimated at 0.33. The correlation is higher for females than males. It is also higher for adults than children. For all age and sex groups, it is clear that brain volume is positively correlated with intelligence.

Reed, T. E., Vernon, P. A., & Johnson, A. M. (2005). Confirmation of correlation between brain nerve conduction velocity and intelligence level in normal adults. Intelligence, In Press, Corrected Proof.

  • In 1992, Reed and Jensen [Intelligence 16 (1992) 259-272] reported a positive correlation (.26; p=.002; .37 after correcting for restricted intelligence range) between a brain nerve conduction velocity (NCV) and intelligence level in 147 normal male students. In the first follow-up of their study, we report on a study using similar NCV methodologies, but testing both male and female students and using more extensive measures of cognitive abilities. One hundred eighty-six males and 201 females, aged 18-25 years, were tested in three different NCV conditions and with nine cognitive tests, including the Raven Progressive Matrices as used by Reed and Jensen. None of the 27 independent correlations in either the males or the females are significant at Bonferroni-corrected probability levels, but 25 of 27 correlations in males and 20 of 27 correlations in females have positive signs. The exact binomial probabilities for these results are 5.6 × 10⁻⁶ and .002, respectively. We discuss possible reasons for the differences between the results of Reed and Jensen and our results. We also find that males have four percent faster NCVs than females in each of the three test conditions, probably due to their faster increase of white matter in the brain during adolescence.
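The exact binomial probability reported for the males (25 of 27 positive signs) is easy to verify with a short sign-test calculation. This sketch assumes a two-tailed test with P(positive sign) = .5 under the null hypothesis:

```python
# Exact (two-tailed) binomial probability of observing 25 or more of 27
# correlations with a positive sign, under a null of P(positive) = 0.5.
from math import comb

n, k = 27, 25
one_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
two_tail = 2 * one_tail
print(f"{two_tail:.1e}")  # prints 5.6e-06, matching the abstract's figure for the males
```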

Learning and the Brain conference

To regular blogsters...I'm back. I just returned (early) from the Learning and the Brain conference (Rewiring the Brain: Using Brain Research to Enhance Learning, Treatment and Teaching) in Cambridge, MA. This was my first time attending and presenting at this conference. I must say that my eyes were opened regarding the state of the art of research into neuroplasticity and learning/interventions. I think I shall keep this annual conference on my list of conferences to consider attending in the future.

Revised Code of Fair Testing Practices in Education

FYI - the Joint Committee on Testing Practices (JCTP) has released a Revised Code of Fair Testing Practices in Education. The code (in pdf format) can be downloaded by visiting the JCTP page and clicking on the appropriate link.

As described on the JCTP page:


  • The Code of Fair Testing Practices in Education was initially developed by JCTP in 1988 as a statement of the primary obligations that professionals who develop or use educational tests have toward test takers. The Code has been revised to be consistent with the 1999 Standards for Educational and Psychological Testing. The Code provides guidance to professionals who develop or use educational tests.

Monday, April 25, 2005

Blogs catch mainstream media's attention, again.

Blogs are catching the attention of the mainstream media.

Recently (May 2, 2005), Business Week magazine saw fit to feature, as its cover topic, a series of articles on the importance of blogs in contemporary business. If you are still wondering about the potential impact of blogs, and whether you need to "get up to speed" regarding all the hype surrounding blogs, blogsters, the blogosphere, etc., these articles might be worth a peek.

John Carroll "On learning a piece on the piano" - A historical note from Jack's daughter

As I mentioned in a prior post, I was fortunate to be the last scholar to visit and work with John "Jack" Carroll during the last few months of his retirement in Fairbanks, Alaska. At the time he was living with his daughter (Melissa "Mimi" Chapin), her husband, and their son.

During this visit I had the serendipitous opportunity to video Jack playing the piano one evening. Unfortunately, as I mention below in an editorial note, it was my first clumsy attempt at using the video feature on my new camera. I only captured a few minutes of his performance, and the picture quality was poor. But, I did capture a precious piece of history for those interested in the history of prominent psychologists.

Below is a post I'm making on behalf of Jack's daughter regarding this piano performance and subsequent private notes ("On learning a piece on the piano") she discovered while going through his boxes of private papers.


Note from Jack Carroll's daughter (Mimi Chapin)


I was pleased to see that your blog readers had asked for more anecdotes about my father. In connection with your visit to Fairbanks, they may be interested in the .mpg of my father playing the piano that you made while there. He listened to it later and commented that it wasn't his best performance. Nonetheless, for me that evening when he played for you was poignant.

During my childhood, my father used to sit down at the piano every evening and play for relaxation, and for joy, usually after a concentrated stint of work, before going to bed. I hadn't heard him do this for years. I think when he played for you it was his way of expressing some of the relief he felt that night that he had passed on what he could of his programs and methods.

Perhaps your blog readers would also be interested in reading some notes my father had made about learning an Etude by Scriabin which I found in his file cabinet labeled "biography". I am fairly sure that the piece he was playing in your excerpt was an Etude by Scriabin, though I haven't found the music to confirm this. Perhaps the piece to which he alludes in the notes is the same piece we have recorded!! Perhaps someone with a keener ear than mine can follow the steps he took to identify the piece.

------------------------------------------------------
[Editorial Note to readers from K. McGrew. By clicking here, blogsters can visit a web page where you can find and view (if you have a proper media player) the mpg file of J. Carroll playing this piano piece on 5-25-03, approximately one month prior to his passing away. Unfortunately, the video is of poor quality as it was the first time I had ever used the video function on a new camera. I only wish now that I had learned the video feature before I had this serendipitous opportunity to video Jack Carroll at the piano.

Can anyone confirm Mimi's belief, as stated above, that the piece he was playing was indeed an Etude by Scriabin?]
------------------------------------------------------

The notes start with a quote from Arthur Jensen about the nature of musical memory. My father describes becoming fascinated with an Etude by Scriabin, identifying it and marvelling at its composition. Then he starts to analyze the process by which it is learned, breaking it down into number of key strikings per second.

At the convention I attended with my father, I remember Arthur Jensen's description of experiments on intelligence measuring the speed of striking of keys....I wonder where my father was going with these notes. What is particularly interesting to me is the way these notes reveal the essentially romantic cast of my father's mind, which he then set about to codify and analyze.

I'll include the entire text of these notes below and have sent you a .pdf file of the original (blogsters can click here to view/download these type-written pieces of history. )

12/21/73 – the first day of winter, 1973 - notes by John "Jack" Carroll

Jensen, A. R. Social class and verbal learning. In M. Deutsch, I. Katz, and A. R. Jensen, Social class, race and psychological development. New York: Holt, Rinehart & Winston, 1968.

p. 121: "The pianist who has memorized a piece of music later finds he cannot recall the notes unless he sits at the piano and begins to play. His “memory” of the composition seems to lie more in his motor behavior than in any symbolic representation of the music; he can play the piece at the piano even though at his desk he would be quite unable to write out the score. And if he makes a mistake while playing, he cannot easily spot and start again at just any point. He usually has to go back to some “beginning” point in the music and continue from there."

On Learning a Piece on the Piano

It is called an Étude, and that word is to be taken literally in this case: a study, something to be studied. But it is not only that—it is something that, when faithfully studied, practiced and perfected, can become what it was intended to be—a work of art that, unlike a painting, can be realized only in its performance.

A conjunction of circumstances nudged me into testing my pianistic competence against this formidable piece—composed in about his twenty-second year by Alexander Scriabin. Fashions in music come and go, but it happens that in the last several years the music of Scriabin has once more come into favor and popularity—at least among that small minority of people whose ears are best attuned to what is known as “classical” music. New recordings of Scriabin’s piano works are being played almost daily on the classical radio station that I like to listen to, and it was on one of those broadcasts that I heard a recording of the Étude in G sharp minor, Opus 8, No. 9.

I was listening only rather casually, that afternoon, to the usual mélange of selections—ranging from Bach to Stravinsky—that had been programmed to occupy the 55 minutes between news broadcasts. The announcer’s mention of the name of Scriabin alerted me, however, to the possibility that I might be about to hear a rendition of one of the pieces in the collection of Scriabin's Preludes and Etudes that I had just acquired by mail-order from a well-known reprint publishing house. From the announcement I was able to apprehend the fact that the piece about to be aired was an Etude, but I was not listening carefully enough to catch the opus number or the key. A few seconds into the piece, however, I could tell that it had to be in G sharp minor, and a rapid scanning of the table of contents of the collection, which I had ready at hand, allowed me to know that it could only be Opus 8, No. 9, for there is only one Scriabin Etude in that key. Turning rapidly to the page where the score began, I was then able to have the rare double experience of listening to a new piece of music while at the same time watching its printed representation unfold.

Perhaps it is trite to say the music “cast its spell” on me, but that is the only phrase that I can use to describe what was happening. Whether it was the artistry of the pianist, the eerie quality of the music itself, or the particular mood I myself happened to be in—I do not know: something caused the piece to make a strong impression on my mind, such that I was to hear it long after the music came to an end. I marveled at the haunting yet simple character of the main theme, the rich harmonies that accompanied it, the subtle modulations through which it coursed, the calm assurance of the second theme, the passion with which the return to the main theme was gradually approached, and the crashing sound of the final section—sounds which in the end gave way to a faint, almost whimpering, desperate flight of rapid octaves terminated by a somber, sustained minor chord played twice, slowly. Throughout, I was impressed with the technical skill, on the part of the pianist, that had to be invoked to perform the piece in any reasonable way at all. It was truly an Etude, in the best tradition of Chopin and all who have essayed this genre.

Even as an amateur pianist, or perhaps precisely because I am an amateur pianist, I am challenged when I learn of a piece of music that offers stubborn technical difficulties but at the same time promises a rich artistic reward to anyone who can overcome those difficulties. This Etude by Scriabin struck me immediately as falling in that category. I wanted to learn to perform it, as well as I could.

Perhaps it is not the most difficult piece for the piano ever composed (at this level, I don’t know how difficulty can really be gauged), but it is difficult enough. Containing 103 measures, its performance requires 3665 separate strikings of piano keys in a space of about 220 seconds (if it is to be played at the slower range of the metronome marking of a quarter note = 120-136 proposed by the composer, allowing for a somewhat slower speed in the middle section), or about 16-17 strikings per second. (The rate actually averages about 18-19 strokes per second in the first and last sections, but about 12 per second in the short, slower, middle section.) The 3665 separate strikings are not like completely random events (as they might be in some contemporary music); there is much pattern and redundancy in them. Many notes are duplicated at the octave, or even quadrupled at further octaves; phrases follow systematic patterns, and segments of them may be repeated with no variation in different parts of the piece; groups of notes played simultaneously usually constitute harmonies which are familiar to the musician.

Thursday, April 21, 2005

Clarification of value of IQ tests in education: Response to the Reid Lyon DC gang

It has come to my attention that a report I previously directed blog readers to (Expectations for Students with Cognitive Disabilities: Is the Cup Half Empty or Half Full? Can the Cup Flow Over?) (IQ Scores, Forrest Gump & NCLB: Run Forrest..Run) has made it to high places....which is not always good....especially when those places are awash in a sea of politics.

Briefly, in the recent announcement from the U.S. Department of Education that there should be an increase in the percent of students with disabilities who can be excluded from state NCLB-related assessments (Raising Achievement: Alternative Assessment for Students with Disabilities), yours truly (and, indirectly, my associate Jeff Evans) is cited in support of what can only be called an anti-intelligence-testing position in special education. As a coauthor of the Woodcock-Johnson III battery, which includes a comprehensive CHC-based intelligence battery, the attribution of an anti-IQ position to me may strike readers as bizarre and, if believed, possibly suggestive of my experiencing a break from reality.

Below is the select quote (emphasis added by me) from the ED.gov web URL cited above:
  • "Research also supports the idea that IQ does not dictate achievement and, thus, cannot be used as a predictor. Kevin McGrew of the Institute for Applied Psychometrics notes that for most children with below average IQ scores, it is not possible to predict expected achievement with much accuracy. Lower-than-average IQ does not automatically translate into lower achievement or less ability to learn reading, language arts, mathematics, or other subjects. Other important variables affecting achievement appear to be interpersonal skills, motivation, engagement, and study skills, all of which can be positively influenced by high standards and expectations. Unfortunately, students are too often given a curriculum that is driven by educators' expectations of their students (based in part on a misunderstanding of IQ)."
First, the latter half of this statement is 100% true. As I've written elsewhere, non-cognitive factors (e.g., conative variables - motivation, self-regulated learning strategies, social and interpersonal abilities, etc.) are often ignored when making statements about a student's "aptitude" for school learning. I'm a firm believer in the Richard Snow approach to defining school aptitude as a combination of both cognitive and conative abilities. Nowhere have I ever written that valid measures of theory-based intelligence are not useful or predictive. In fact, I've authored/coauthored four different books on the interpretation of intelligence batteries. In the report drawn upon, I simply describe the normal curve of achievement scores that surrounds any specific IQ score; this variability makes it difficult to predict, with much precision, the exact level of expected current or future achievement for a specific student.

What I did write, and what is misconstrued in the ED.gov announcement, is the honest truth about our best measures of cognitive abilities. Namely, they are fallible predictors. Intelligence tests do not, never have, and likely never will account for more than a threshold of approximately 40-60% of academic achievement variance....which is a hell of a lot in the field of psychology!

All respected models of school learning (e.g., Carroll's Model of School Learning; Walberg's Model of Educational Productivity) include student characteristics, and cognitive characteristics in particular, as important contributing/predictive variables in explaining school learning. The point made in the Forrest Gump report is that even today's best available intelligence batteries are fallible predictors that do not allow educators to predict, beyond a reasonable doubt, specific expected levels of current or future achievement. But this degree of uncertainty (or error of prediction) is known and can be quantified. When combined with other variables (e.g., conative characteristics; home environment variables; quantity and quality of instruction), intelligence tests can provide valuable explanatory and predictive information.
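To make the "known and quantifiable" error of prediction concrete, here is a sketch using the standard error of estimate. The IQ-achievement correlation (r = .70) and the standard-score scales (mean 100, SD 15) are assumptions for illustration, not figures from the report:

```python
# Sketch: how much expected achievement varies around a prediction from IQ.
# Assumes an IQ-achievement correlation of r = .70 (i.e., ~49% of variance
# explained) and standard score scales (mean 100, SD 15). Illustrative only.
from math import sqrt

r, sd = 0.70, 15.0
see = sd * sqrt(1 - r ** 2)        # standard error of estimate

iq = 85                             # a specific (hypothetical) student's IQ score
predicted = 100 + r * (iq - 100)    # regression-predicted achievement
lo, hi = predicted - 1.96 * see, predicted + 1.96 * see  # ~95% interval

print(round(predicted, 1), round(see, 1), (round(lo, 1), round(hi, 1)))
```

Even with a strong predictor, the 95% band around the predicted achievement score spans roughly 40 standard-score points, which is exactly why a single IQ score cannot dictate a specific expected achievement level.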

The anti-IQ rhetoric coming out of many politically correct academic school psychology channels, as well as from the powers-that-be driving the special education reforms at the federal level, is accurate in one respect: the use of global/full-scale IQ scores in ability-achievement formulas for LD determination is not an empirically defensible procedure. I couldn't agree more.

However, cognitive measures, especially those that are based on contemporary psychometric theory (e.g., CHC theory) and that provide reliable and construct-valid measures of most of the major broad CHC ability domains, can, and do, provide useful information in the hands of skilled clinicians. In fact, those advocating against IQ tests offer, as alternatives, measures or "markers" of phonemic awareness, rapid automatized naming, working memory, and vocabulary to identify students at risk for reading disabilities. Anyone with any knowledge of CHC theory recognizes that these marker/screening abilities can be directly mapped to the CHC abilities of phonetic coding (Ga-PC), naming facility (Glr-NA), working memory (Gsm-MW), and lexical knowledge (Gc-VL). AND, all of these abilities are measured by the most comprehensive CHC-based intelligence batteries. Isn't the adage "don't throw the baby out with the bathwater" applicable?

The real irony (given the misrepresentation of the Forrest Gump report) in the U.S. Department of Education statement is that the primary premise of the report was that students with disabilities should not be denied high expectations, inclusion in state assessments, or access to the general education curriculum based only on a single point IQ score. The primary theme of the report was that, for too long, some educators, psychologists, and policy makers have believed in the supreme power of IQ tests, to the point that inappropriately low academic achievement expectations may be formed. Furthermore, the thesis of that report was that kids with disabilities (so classified based on intelligence test scores) should NOT be automatically excluded from high standards and state accountability systems--the exact opposite of the message the ED.gov statement uses our report to support (viz., increasing the percent of students with disabilities excluded from NCLB accountability activities).

Whatever happened to President Bush's statement, when unveiling NCLB, that for too long many children have been victims of "the soft bigotry of low expectations"? Isn't the exclusion of more students with disabilities from high-stakes state accountability systems--systems that require schools to raise their expectations for students--promoting the same "soft bigotry of low expectations"?

As I've often said, and I don't know to whom to attribute the original quote (I want to say Ralph Reitan, of Halstead-Reitan Neuropsychological Battery fame), "if you give a monkey a Stradivarius violin and you get bad music, you don't blame the violin." By extension, if you give a politically motivated federal bureaucrat honest scholarly information....... [never mind, you get the point].

Temporary MIA: I shall return

I apologize to blog readers for a lack of posts the past few days. Just wayyyyyy too busy to come up for air. But fear not, I've got plenty of stuff bouncing around my cranium and will start to post new info, musings, etc. by this weekend at the latest. Thanks.

So much data....so little time

Sunday, April 17, 2005

Gs, Glr, executive function and aging

I just skimmed the interesting article below. The abstract is pretty much self-explanatory. If one uses one's CHC-SL (CHC as a Second Language) skills, the CHC interpretation is that the study focuses on Glr (recognition memory), Gs, and executive functions (which seem to be Gf-related).

Bunce, D., & Macready, A. (2005). Processing speed, executive function, and age differences in remembering and knowing. Quarterly Journal of Experimental Psychology Section A Human Experimental Psychology, 58(1), 155-168.

Abstract
A group of young (n = 52, M = 23.27 years) and old (n = 52, M = 68.62 years) adults studied two lists of semantically unrelated nouns. For one list a time of 2 s was allowed for encoding, and for the other, 5 s. A recognition test followed where participants classified their responses according to Gardiner's (1988) remember-know procedure. Age differences for remembering and knowing were minimal in the faster 2-s encoding condition. However, in the longer 5-s encoding condition, younger persons produced significantly more remember responses, and older adults a greater number of know responses. This dissociation suggests that in the longer encoding condition, younger adults utilized a greater level of elaborative rehearsal governed by executive processes, whereas older persons employed maintenance rehearsal involving short-term memory. Statistical control procedures, however, found that independent measures of processing speed accounted for age differences in remembering and knowing and that independent measures of executive control had little influence. The findings are discussed in the light of contrasting theoretical accounts of recollective experience in old age.

fMRI of IQ blog readers

Below is a composite fMRI of the brain activity of readers of this blog. Cool...don't you think?



Honestly, this is just a tease. I just wanted a cool and colorful fMRI picture on my blog so I could look hip.

Saturday, April 16, 2005

WISC-III/WJ-III cluster analysis results

So much data....so little time!

In a prior shameless plug, I briefly summarized the results of a recently published CHC-based confirmatory factor analysis study of a WJ-III/WISC-III cross-battery data set (Phelps, McGrew, Knopik & Ford, 2005). Following a favorite quantoid mantra ("there is more than one way to explore a data set"), I couldn't resist conducting a more loosey-goosey exploratory analysis of the data.

One of my favorite exploratory tools, given the Gv presentation of the multivariate structure of the data, is hierarchical cluster analysis (sometimes referred to as the "poor man's" factor analysis). Without going into detail, I subjected the data set previously described to Ward's clustering algorithm. As a word of caution, it is important to note that cluster analysis will provide neat-looking cluster dendrograms even for random data....so one must be careful not to over-interpret the results. Yet, I find the looser constraints of cluster analysis and, in particular, the continued collapsing of clusters of tests (and lower-order clusters) into increasingly broad higher-order clusters very thought-provoking---the results often suggest different broad (stratum II) or intermediate-level strata (as per Carroll's 3-stratum model).
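For blogsters who want to see the mechanics, below is a minimal Python sketch of a Ward's-method cluster analysis. The test names and correlation values are invented for illustration only (they are NOT the WISC-III/WJ III data), and using 1 - r as the inter-test distance is just one common choice.

```python
# A toy Ward's-method hierarchical cluster analysis. All test names and
# correlations below are invented for illustration; 1 - r is used as the
# distance between tests so highly correlated tests cluster together.

tests = ["Vocabulary", "Similarities", "Block Design", "Spatial Relations"]
R = [  # invented inter-test correlation matrix (symmetric, 1s on diagonal)
    [1.00, 0.72, 0.35, 0.30],
    [0.72, 1.00, 0.38, 0.33],
    [0.35, 0.38, 1.00, 0.65],
    [0.30, 0.33, 0.65, 1.00],
]

def ward_clusters(R, names, k):
    """Agglomerate down to k clusters using the Lance-Williams update
    for Ward's method (operating on squared distances)."""
    clusters = [[n] for n in names]
    sizes = [1] * len(names)
    d2 = [[(1.0 - R[i][j]) ** 2 for j in range(len(names))]
          for i in range(len(names))]
    while len(clusters) > k:
        # find and merge the closest pair of clusters
        i, j = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda p: d2[p[0]][p[1]])
        ni, nj = sizes[i], sizes[j]
        merged = []
        for m in range(len(clusters)):
            if m not in (i, j):
                nm = sizes[m]
                merged.append(((ni + nm) * d2[i][m] + (nj + nm) * d2[j][m]
                               - nm * d2[i][j]) / (ni + nj + nm))
        keep = [m for m in range(len(clusters)) if m not in (i, j)]
        d2 = [[d2[a][b] for b in keep] for a in keep]
        for row, nd in zip(d2, merged):
            row.append(nd)
        d2.append(merged + [0.0])
        clusters = [clusters[m] for m in keep] + [clusters[i] + clusters[j]]
        sizes = [sizes[m] for m in keep] + [ni + nj]
    return clusters

clusters = ward_clusters(R, tests, 2)  # verbal vs. visual-spatial grouping
```

With these invented numbers the two verbal tests collapse into one cluster and the two spatial tests into the other; the interesting part in real data is watching which lower-order clusters merge as you allow fewer and fewer clusters.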

I present the current results "as is" (click here to view or download). Blogsters will need to consult prior posts to glean the necessary pieces of information to interpret the CHC factor codes and names, the abilities measured by the WJ III tests, etc.

To say the least, some interesting hypotheses are suggested. In particular, I continue to be intrigued by the possibility of a higher-order dual cognitive processing model structure (within the CHC taxonomy)--that is, a distinction between automatic vs. controlled/deliberate processing.

Bayley Scales of Infant Development-II: A review

FYI post.

Alfonso et al. (2005) have published a critical review of the Bayley Scales of Infant Development-2nd Edition, the venerable standardized measure of developmental abilities and milestones for infants up to 42 months of age. The review appears in Vol. 59(2) (the Spring issue) of APA's Division 16 newsletter, The School Psychologist. A copy can be viewed (or downloaded) by clicking on the Spring 2005 issue at the APA Division 16 web page.

Thursday, April 14, 2005

Student progress monitoring summer institute

I'm an equal-opportunity assessment methodology blogster.

Yes, although I've spent most of my career developing standardized measures of intelligence and achievement, at my core is the essence of a true quantoid---someone who believes that psychology and education benefit from the development of all kinds of good measurement tools that may serve different functions.

In this spirit, and in my belief in the importance of continuous student monitoring systems, I hereby notify readers of this blog of a potentially valuable summer training experience on continuous student progress monitoring systems.

The National Center on Student Progress Monitoring has just announced its 2005 Summer Institute (July 7-8 in Washington, DC). Registration is now open at the web site.

Long live good measurement on behalf of students.

Tuesday, April 12, 2005

'95 CHC Carroll mountain top mtg

I have been surprised by the number of blogsters who have an interest in history. My prior "In memory of Jack Carroll" post, which included a picture of Jack approximately 1 month prior to his passing away, generated a number of emails asking for more personal stories and/or pictures.

As I've written elsewhere (CHC Theory: Past, Present and Future), I believe that the recently declared CHC bandwagon has its roots in the cross-fertilization of the theoretical/research work of John Horn and John "Jack" Carroll and the applied psychometric work of Dr. Richard Woodcock, over 20 years ago (1985). Amazing...it has been twenty years.

The Horn-Carroll-Woodcock collaboration, as I describe historically in the link above, continued up through the publication of the WJ III in 2001. Below is another bit of history: a meeting 10 years ago (1-12-95) where (from left to right--after Jack Carroll on far left) Dr. Richard Woodcock, myself (don't ask why I'm so serious), and Dr. Fred Schrank gathered for one of our ongoing work sessions in Chapel Hill, NC. Yes, Woodcock, Schrank, and McGrew made routine visits to one of the CHC mountains (the other being John Horn) to sit in a circle, ask questions, and learn pearls of wisdom. These truly were once-in-a-lifetime opportunities to learn from one of the greatest educational psychologists of all time.

A handsome and bright looking group...don't you think?


Reading and phonemic awareness and RAN: The rest of the story

Is it possible that the NICHD sponsored reading disability research has produced an emperor that is only partially clothed? Have the constructs of rapid automatized naming (RAN) and phonological awareness (PA) become more powerful than they should?

While rooting around the literature, I recalled the following meta-analytic review that contributes to answering these questions. Two things to note:

  • The article is published in the none-too-shabby journal the Review of Educational Research. RER is one of the highly respected flagship journals of the American Educational Research Association (AERA). AERA is a tough crowd, as anyone who has presented a paper at their conference soon learns during the Q/A session. These folks know their research and methods. Making an AERA presentation is not for the weak.
  • By using meta-analysis, which is a statistical review technique that provides a quantitative summary of findings across an entire body of research, the study provides for a more quantitative/objective perspective regarding the importance of PA and RAN.
Article reference
  • Swanson, H. L., Trainin, G., Necoechea, D. M., & Hammill, D. D. (2003). Rapid naming, phonological awareness, and reading: A meta-analysis of the correlation evidence. Review of Educational Research, 73(4), 407-440.
Article Abstract
  • This study provides a meta-analysis of the correlational literature on measures of phonological awareness, rapid naming, reading, and related abilities. Correlations (N = 2,257) were corrected for sample size, restriction in range, and attenuation from 49 independent samples. Correlations between phonological awareness (PA) and rapid naming (RAN) were low (.38) and loaded on different factors. PA and RAN were moderately correlated with real-word reading (.48 and .46, respectively). Other findings were that (a) real-word reading was correlated best (r values were .60 to .80) with spelling and pseudoword reading, but correlations with RAN, PA, vocabulary, orthography, IQ, and memory measures were in the low-to-moderate range (.37 to .43); and (b) correlations between reading and RAN/PA varied minimally across age groups but were weaker in poor readers than in skilled readers. The results suggested that the importance of RAN and PA measures in accounting for reading performance has been overstated.
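For quantoids curious about the "corrected for..." phrase in the abstract, here is a small Python sketch of two classical corrections commonly applied in such meta-analyses. The formulas are standard (Spearman's disattenuation; Thorndike's Case II), but the correlations, reliabilities, and SD ratio below are invented for illustration:

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation: estimates the correlation
    between true scores given the observed r and the two reliabilities."""
    return r_xy / math.sqrt(rel_x * rel_y)

def correct_range_restriction(r, u):
    """Thorndike's Case II correction for direct range restriction;
    u = unrestricted SD / restricted SD (u > 1 widens the correlation)."""
    return (r * u) / math.sqrt((u ** 2 - 1) * r ** 2 + 1)

# Invented illustration: an observed PA-reading r of .40, with
# reliabilities of .80 and .90, disattenuates to roughly .47
r_true = disattenuate(0.40, 0.80, 0.90)

# Invented illustration: r = .30 observed in a sample whose SD is
# two-thirds of the reference SD (u = 1.5) corrects to roughly .43
r_full = correct_range_restriction(0.30, 1.5)
```

Both corrections push observed correlations upward, which is why comparing raw and corrected coefficients across studies without noting which is which can be misleading.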

My comments (aka Dr. IQ's 2 cents)

Unless they have been living in a cave for the last 10 years, it would be rare to find a school-based assessment professional who has not heard of the prominent double-deficit dyslexia hypothesis (Wolf & Bowers, 1999). This hypothesis suggests that some deficits in reading may be related to the speed with which one can name aloud a series of letters, objects, and numbers (rapid automatized naming-RAN), as well as to deficits in phonological awareness (PA). This reading disability model has dominated reading disability research this past decade (particularly that sponsored via NICHD). Few assessment professionals working in the schools have not learned the lingo of RAN and PA.

The Swanson et al. (2003) review raises questions about the strength of the RAN/PA double-deficit model position, a position that continues to exert a significant influence on federal educational policy decisions. The meta-analysis based conclusions suggest that the hot RAN/PA position may be overstated. The study serves to remind us of the potential problem of model specification error in research.

Briefly, specification error occurs when potentially important variables in predictive or explanatory studies are omitted, an error that can lead to biased estimates of the effects of predictive variables. In reading research, specification error can be particularly problematic as it may result in:
  • (a) important abilities being excluded from the analysis because they are incorrectly judged to be unimportant,
  • (b) the true relative contributions of multiple abilities to reading achievement remaining unknown,
  • (c) researchers following the “bandwagon” and continuing to build on studies in which an incomplete set of predictor/causal ability variables is included [which may lead to premature reliance on certain predictor measures – a "premature hardening of the predictors"], and
  • (d) consumers of this research being misled by less-than-accurate conclusions.
  • [See Evans, Floyd, McGrew (yes, it be me) & Leforgee, 2002. The relations between Cattell-Horn-Carroll (CHC) cognitive abilities and reading achievement during childhood and adolescence, School Psychology Review, 31(2), 246-262 – these issues are discussed in greater detail in the context of CHC theory in this article.]
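To make specification error concrete, here is a tiny invented simulation (the "abilities," weights, and sample are hypothetical, not estimates from any study discussed here): when a predictor that is correlated with an included predictor is omitted, the included predictor soaks up part of the omitted one's effect.

```python
import random

random.seed(1)
n = 5000
# two correlated invented "abilities" (think PA and an omitted ability)
pa = [random.gauss(0, 1) for _ in range(n)]
ma = [0.6 * p + 0.8 * random.gauss(0, 1) for p in pa]
# reading is generated to depend on BOTH abilities, each with weight 0.4
reading = [0.4 * p + 0.4 * m + random.gauss(0, 1) for p, m in zip(pa, ma)]

def slope(x, y):
    """Ordinary least-squares slope of y on a single predictor x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return num / sum((a - mx) ** 2 for a in x)

# regressing reading on PA alone over-credits PA: the expected slope is
# 0.4 + 0.4 * 0.6 = 0.64 rather than the true direct weight of 0.4
biased = slope(pa, reading)
```

The bivariate slope lands near .64, not the true .40, because PA is carrying the omitted ability's freight--exactly the worry raised about RAN/PA-only reading models.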

According to Swanson et al. (2003), aside from PA and RAN, a significant number of studies have identified additional cognitive processes as important to reading, such as those related to orthography, semantics, and working memory span. Our own research (Evans et al., 2002) has suggested that the CHC ability of associative memory (MA) may be a victim of specification error in contemporary reading disability research.

Swanson et al. (2003) concluded that:
  • “When corrected for sample size and sample heterogeneity (variations in SES, ethnicity, and age), the majority of correlations related to PA and RAN are in the low range (mean r = .38).”
  • “Furthermore, correlations between real-word reading, PA, and RAN are also in the low-to-moderate range (.35 to .50).”
  • “These findings remain stable even after partialing for variations in the samples as a function of age, SES, distribution of reading ability, gender, and ethnicity.”
  • “The present synthesis appears to support the observations of Wolf and Bowers that performance on RAN measures and performance on PA measures are the results of independent processes.”
  • “Thus, it appears to us that less emphasis should be placed on PA and RAN measures in attempts to classify children at risk for reading and more emphasis should be placed on spelling.”
  • “The double-deficit theory of reading disability does not fit at all.”
  • “Our synthesis is consistent with the current literature suggesting that isolated processes, such as phonological coding, do play a modest part in predicting real-world reading and pseudoword reading. However, our study highlights additional processes as playing equally important roles in reading. It suggests the importance of phonological awareness may have been overstated in the literature.”
Final musing/comment

Regardless of the independence of PA and RAN measures in predicting reading, the meta-analysis suggests that RAN and PA measures, although related to reading achievement and reading disabilities, are not the only measures of important constructs that should be used in reading research and reading-related applied assessments by practitioners.

From my reading of the literature, when cast within the context of CHC theory and when focusing on the early elementary grades, the potentially important set of reading “marker” variables (with CHC notations included) should include, in addition to PA (Phonetic Coding, Ga-PC) and RAN (Naming Facility, Glr-NA), associative memory (Glr-MA), working memory (Gsm-MW), processing speed (Gs), and vocabulary/semantics (lexical knowledge, Gc-VL).

Friday, April 08, 2005

IQ PREPLOG: Floyd & Bergeron KDEFS/WJ3 FA research

On behalf of Dr. Randy Floyd and one of his doctoral students, a copy of their NASP 2005 poster paper (Joint Exploratory Factor Analysis of the Delis-Kaplan Executive Function System and the Woodcock-Johnson III Tests of Cognitive Abilities) has been posted "as is" for online viewing or downloading (left- or right-click here).

All previously posted disclaimers regarding IQ PREPLOG posts, as well as the Gene Glass-based philosophy on evaluating the integrity of non-reviewed research reports on the internet, apply. I have NOT passed any judgment (positive or negative) regarding this courtesy post.

Interpreting differential working memory test performance

Ah ha!! A small, yet bright (I proclaim) light bulb fired in my cortex this evening while skimming an article on the “bimodal format effects in working memory” (Note to self – I need to poll blogsters to see if they read similar exciting literature on Friday evenings). Why? Because I detected a link between the theory and research articulated in the article and clinical reports from assessment practitioners who wonder why some individuals perform differently on different working memory tasks.

In particular, I’ve heard anecdotal reports of individuals who do not experience problems with simple digit reversal tasks (e.g., Wechsler Digit Span Backwards; WJ III Numbers Reversed), but who turn around and perform either noticeably higher (or lower) on what appears (at face value) to be a more difficult task, such as listening to numbers and words mixed together and then being required to “list the numbers in order first, followed by the words in order” (e.g., WAIS-III Letter-Number Sequencing; WJ III Auditory Working Memory). “How can this be?” The answer may lie in different working memory tasks placing differential demands on selective attention due to different format presentation modes.

As reported in a summary of two different experiments, Paula Goolkasian and Paul Foos’s (2005, American Journal of Psychology, 118[1], 61-77) research suggests that feature similarity effects can influence the ease of dividing attention. Working memory is hypothesized to include two different short-term working “slave” scratchpads (viz., an articulatory or phonological loop for words and sounds, and a visual-spatial sketchpad for visual and figural stimuli). If all the material presented is from the same general stimulus family (e.g., words and sounds--both language), the scratchpad devoted to this form of processing (i.e., the phonological loop) may become overloaded, and the chances increase that similar features will be “overwritten.” In other words, the phonological scratchpad may become too full if the two sets of stimuli (in a classic auditory working memory task) are more similar in general stimulus features.

Conversely, the more dissimilar the features of two stimulus sets are (e.g., words and pictures), the greater the probability that the full resources of both scratchpads will be utilized. According to the authors, the feature model predicts that “participants differentially allocate their attention resources across stimulus features, with distinctive features receiving more attention” (p. 63). Furthermore, the “probability of a feature being overwritten increases with increased similarity to subsequent events. Generalizing from this theory, picture and spoken word combinations should produce the most benefit because they are the most dissimilar.”

In the case of working memory tasks like the Wechsler Letter-Number Sequencing and WJ III Auditory Working Memory tests, because both sets of stimuli are generally from the same class of stimulus features (viz., both spoken words), some examinees may overload their phonological loop scratchpad, resulting in a significant decrement in performance. In contrast, the same examinee might perform substantially differently on a working memory task where the two stimulus sets differ markedly in format (e.g., spoken words and pictures of objects), given that each scratchpad resource can be maximized.

Long story short: assessment personnel, when comparing performance across different working memory tasks, should evaluate the extent to which different test paradigms increase or decrease the feature similarity of the classes of stimuli to which an examinee must divide their selective attention. Based on their collective program of research, the authors offer an educationally relevant hypothesis (a bit broad and sweeping, in my opinion) that also highlights the relative difficulty of different working memory tasks. They conclude:
  • "When information is presented in educational and other settings, our results suggest that dual-modality presentation is very beneficial, particularly when pictures and words are combined. When only one presentation mode is possible, printed words should be your last choice. Finally, as one presents information, the last thing anyone wants is irrelevant speech, which interferes with both printed words and pictures" (p. 75).

More FYI on dynamic assessment

On 3-24-05 I posted, in response to a question on the CHC listserv, an FYI regarding dynamic assessment references found in the IAP Procite Reference DataBase.

Today, on the NASP listserv, Dr. Mogens R. Jensen, Director of the International Center for Mediated Learning, provided the following information that may be of interest to those interested in dynamic assessment. All usual disclaimers apply (I make no endorsement, etc., regarding the site or materials....I'm just serving as an information conduit):

  • "An overview of the work that more precisely guides the dynamic assessment model which will be shared at the June 13-17 training is available in two parts in Educational and Child Psychology, 2003, Vol. 20(2). The references are provided below. Both articles can be downloaded in the pdf format by clicking on Library at www.mindladder.org.
  • Jensen, M. R. (2003a). Mediating knowledge construction: Towards a dynamic model of assessment and learning. Part I: Philosophy and theory. Educational and Child Psychology, Vol. 20(2), 100-117.
  • Jensen, M. R. (2003b). Mediating knowledge construction: Towards a dynamic model of assessment and learning. Part II: Applied programmes and research. Educational and Child Psychology, Vol. 20(2), 118-142.
  • A comprehensive resource for published research on dynamic assessment is available at www.dynamicassessment.com."

In memory of John"Jack" Carroll

A number of folks, either via this blog or during personal conversations, have expressed interest in learning more about my visit to see Dr. John "Jack" Carroll in May of 2003, approximately one month prior to his passing away.

The purpose of the trip was for me to be tutored on the suite of self-written exploratory factor analysis programs he used for his seminal 1993 factor analysis treatise. With time, I will try to provide brief snippets (and possibly more pictures) of this visit that convey the reverence and respect I have for Dr. Carroll and his work.

Below is a picture of Jack (and yours truly) out for dinner. The real prize of my trip is that his daughter Mimi later sent me the cardigan he is wearing in this picture. This cardigan was like his official uniform (and a topic of good-natured ribbing from family members) during his retirement days at his daughter's house in Alaska. I have it in a safe place and consider it an honor to have it in my possession.



CHC definition project

FYI post for those who may be unaware of this activity.

The Institute for Applied Psychometrics (IAP), in conjunction with Evans Consulting, recently initiated the Cattell-Horn-Carroll (CHC) Definition Project as part of the Carroll Human Cognitive Abilities Project.

The primary goal of the CHC Definition Project is to continue the legacy of intelligence scholars who have contributed to the development of the CHC (Gf-Gc) taxonomy of human cognitive abilities, via the provision of a clearinghouse mechanism by which to reach consensus definitions of the major narrow (stratum I) and broad (stratum II) abilities that have been identified. Based on Carroll's (1993) treatise on the factor structure of human cognitive abilities, I (McGrew, 1997--chapter in Flanagan et al., 1997, CAI1 book) originally abstracted brief definitions of the narrow and broad CHC abilities.

The real story behind the development of these definitions is that after abstracting rough draft definitions from Jack Carroll's book, I sent them to Dr. Carroll. He graciously took time to comment on and edit this draft. I subsequently revised the definitions and sent them back. Jack and I went through a number of iterations until he was generally comfortable with the working definitions. So, the original definitions published in my 1997 book chapter did have the informal stamp of approval of Dr. Carroll --- that, my folks, "is the rest of the story."

These definitions were recently revised, expanded, and clarified and have been posted at the CHC Definition Project web page. The revised "working" definitions are based on a review of ability definitions from a variety of sources, including Carroll (1993), the original ETS Factor Reference Work group publications (Ekstrom et al., 1979), the Encyclopedia of Human Intelligence (Sternberg, 1994), the Dictionary of Psychology (Corsini, 1999), and recently published research.

Please send comments, suggestions, etc., regarding any aspect of these definitions (i.e., organization, labels, examples, wording, etc.) to the blogmaster at iap@earthlink.net.

For those interested in the HCA project, IAP is continuing to search for funding mechanisms to "kick start" this time-intensive activity. If you have ideas, please send them along.

Keywords: CHC teaching tool

Thursday, April 07, 2005

Search engine for Gv thinkers: KartOO

For those who are strong Gv thinkers, there is a search engine my associate (Jeff Evans) pointed me to. It is KartOO. As described at the site:
  • KartOO is a metasearch engine with visual display interfaces. When you click on OK, KartOO launches the query to a set of search engines, gathers the results, compiles them and represents them in a series of interactive maps through a proprietary algorithm.
Just type in your regular search terms and you are presented with a Gv presentation of the search results. Information is presented in words, icons (for web pages), and different shades of color, and when you put your cursor over a term, lines appear that show links to other terms.

I saved a search and it can be viewed by clicking here. The line links are not present in the file I saved and can only be viewed by doing your own search and giving it a try.

I have added a window for this search engine below my blog counter in the sidebar. Enjoy.

Wednesday, April 06, 2005

Monitoring internet "chatter" via Technorati

Folks have been asking me about the Technorati search box next to my blog counter.

Below is some text from the Technorati site. The long-story-short -- if you want to monitor what folks are saying in blogs on the internet, for any topic, just do a search and it will show you the latest comments from different sources (and links to them). If you want to try one, try "CHC intelligence"---yep, this current blog is pretty much the active voice in the land of CHC.

What can be really cool is setting up your own "watchlists" to monitor and feed you information about new stuff that is being posted, almost in real time. Isn't technology grand? I can still remember the days of dial-up with a 9600-baud modem and using "Gopher" to find information on this funny thing called the internet.

You can also check out the top blogs overall as well as top blogs by certain general topic areas (e.g., politics).

Text "borrowed" from the Technorati web site.
  • "Technorati is the authority on what is happening on the real-time web. To find out what people are saying about any subject or website, just enter some text or a website address (URL) in the search box above and and click the "Search" button. Join Technorati today! It is free and you can setup a Watchlist for your favorite URLs and search terms. Watchlists are a powerful way to track what is happening on the web right now: news, sports, corporate information or whatever information is interesting and important to you. Start tracking the daily conversations emerging around the weblogs or websites that are important to you or your business."
"As a member, you can:
  • Add a photograph to your profile and it will appear next to every search referencing your site.
  • Help other people find your site's posts and learn more about you and your writing.
  • Create free watchlists utilizing RSS to stay informed and track conversations as they happen.
  • Enable your readers to search your blog on your own web pages with the Technorati Searchlet."

Ability composite scores in the face of variability: Lohman's thoughts

David Lohman (University of Iowa) made a post to the CHC listserv today regarding a recurring question that surfaces in the interpretation of composite scores in the face of significant variability in the tests that comprise a composite. I have reproduced his comments below and have embedded a link to a paper to which he refers.

This post is for the quantoids who read this blog and who are dying for their RDA of quantoid-speak.

David Lohman stated in a CHC listserv post on 4-6-05:

  • "As I see it, there are two issues in the spread or scatter of scores used to estimate a composite. The first is the dependability of the composite score and the second is whether the test measures more than one dimension. Most people worry about the latter problem, I and my students have worried about the former problem. The basic idea is that the composite or average score is most meaningful in those cases in which the scores that define it agree with one another and least meaningful when part or subtests scores vary markedly. The problem is how best to represent this variability in a way that is meaningful for test users. We have approached the problem from the standpoint of person fit in IRT, but expressed the degree of misfit in the metric of the score scale. Specifically, we use variability of subtest scores as one estimate of error of measurement in the composite score for the individual. Examinees who show much scatter have a composite scores with wide confidence intervals whereas examinees with minimal scatter have the confidence intervals typical for that score level (i.e., proportional to the conditional standard error of measurement). These procedured are actually implemented in the 6th edition of the Cognitive Abilities tests, which I co-author with Betty Hagen. For a brief description of how these individual confidence intervals are derived, see pp 62-63 in the CogAT6 Research Handbook."
For a more extensive discussion, click here to see (and/or download) a paper by Lohman that describes his approach to this issue.
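The general flavor of the idea (NOT Lohman's actual IRT-based person-fit procedure, which is described in the CogAT6 Research Handbook) can be caricatured in a few lines of Python. The base SEM, the quadrature combination, and all scores below are invented for illustration only:

```python
import statistics

def composite_ci(subtest_scores, base_sem=3.0, z=1.96):
    """Return (composite, lower, upper) on a standard-score metric.
    Subtest scatter (the SD across subtests) is treated as an extra
    error component combined in quadrature with a typical conditional
    SEM for that score level. This is a caricature of the idea only."""
    composite = statistics.mean(subtest_scores)
    scatter = statistics.stdev(subtest_scores)
    sem = (base_sem ** 2 + scatter ** 2) ** 0.5
    return composite, composite - z * sem, composite + z * sem

flat = composite_ci([100, 101, 99, 100])      # minimal scatter
scattered = composite_ci([85, 115, 95, 105])  # marked scatter
```

Both invented profiles yield the same composite of 100, but the scattered profile's confidence interval comes out several times wider, flagging that composite as less dependable.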

Working memory and "choking": More is less?

A recently published study (listed below) reports that, ironically, individuals most likely to fail under pressure are those who, in the absence of pressure, may have the highest capacity (viz., high working memory) for success. A situation where more can be less?


Beilock, S. L., & Carr, T. H. (2005). When high-powered people fail - Working memory and ''choking under pressure'' in math. Psychological Science, 16(2), 101-105.

Abstract
We examined the relation between pressure-induced performance decrements, or “choking under pressure,” in mathematical problem solving and individual differences in working memory capacity. In cognitively based academic skills such as math, pressure is thought to harm performance by reducing the working memory capacity available for skill execution. Results demonstrated that only individuals high in working memory capacity were harmed by performance pressure, and, furthermore, these skill decrements were limited to math problems with the highest demands on working memory capacity. These findings suggest that performance pressure harms individuals most qualified to succeed by consuming the working memory capacity that they rely on for their superior performance.

Study Highlights
Little research has addressed the causal mechanisms by which high-stakes performance situations result in disappointing performances. Even less is known about the specific characteristics of individuals most likely to experience unwanted failure in such situations. In this study, the researchers set out to explore the cognitive processes that may govern “choking under pressure”…especially in situations in which the desire for high-level performance is maximal.

The specific causal mechanism explored was working memory and the experimental situation was performance on mathematical problem solving under both low- and high-pressure conditions.

In general, the researchers found that:

  • As expected, performance decrements were limited to problems that made the largest demands on working memory. Surprisingly, however, only the individuals high in working memory (HWM) capacity demonstrated these decrements. Interestingly, high-pressure situations completely eliminated the advantage that HWMs enjoyed over LWMs.
  • It was hypothesized that high performance pressure consumes the extra working memory capacity that HWM-capacity individuals use to solve the most difficult problems (those with high working memory demands). Individuals with LWM capacity performed less well on the high-demand problems in the absence of pressure, but when pressure was applied, LWMs' disadvantage disappeared because their level of achievement did not decline under pressure.
  • The hypothesized construct central to these findings is executive or controlled attention (see the work of Engle, Kane, and associates). Working memory is, at heart, the ability to focus attention on a central task and execute its required operations while inhibiting irrelevant information. Under normal conditions, HWMs tend to outperform LWMs because they have superior attentional allocation capacities. However, when such attentional capacity is compromised, HWMs' advantage disappears. According to the researchers, “the idea that pressure specifically targets individuals who have high working memory capacity carries implications for interpreting performance in real-world high-pressure situations” (e.g., high-stakes exams like the GRE, SAT, and medical boards).

Tuesday, April 05, 2005

WISC-III/WJ III CHC study: Shameless plug

Yes.... a shameless (but non-commercial) plug. Be sure to read my stated conflict of interest statements on this blog's home page (WJ III coauthor).

The following Wechsler/WJ III cross-battery confirmatory factor analysis article was just published in the most recent edition of School Psychology Quarterly. Contact the journal to subscribe and get your copy now.....before supplies are depleted. Don't be the last one on your block to read it!

Phelps, L., McGrew, K. S., Knopik, S. N., & Ford, L. (2005). The general (g), broad, and narrow CHC stratum characteristics of the WJ III and WISC-III tests: A confirmatory cross-battery investigation. School Psychology Quarterly, 20(1), 66-88.

Abstract: One hundred forty-eight randomly selected children (grades three to five) were administered the WISC-III, WJ III Tests of Cognitive Abilities, WJ III Tests of Achievement, and seven research tests selected from the WJ III Diagnostic Supplement. The validity of the existing WISC-III and WJ III broad Cattell-Horn-Carroll (CHC) test classifications was investigated via the application of CHC-organized, broad-factor, cross-battery confirmatory factor analyses (CFA). Likewise, the validity of the WISC-III and WJ III narrow CHC ability classifications was investigated via the evaluation of a three-stratum hierarchical (narrow+broad+g) CHC CFA cross-battery model. The Tucker-Lewis Index, the Comparative Fit Index, and the Root Mean Square Error of Approximation were used to evaluate the fit of the resulting models. All statistical values indicated good to excellent fit.
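As a quantoid aside, RMSEA (one of the fit statistics named in the abstract) is easy to compute by hand. Below is one common formulation (some programs use N rather than N - 1); the chi-square and df values are invented for illustration, though N = 148 matches the study's sample size:

```python
import math

def rmsea(chi_sq, df, n):
    """Root Mean Square Error of Approximation (one common formulation).
    Values near .05 or below are conventionally read as close fit."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

fit = rmsea(chi_sq=210.0, df=180, n=148)  # invented chi-square and df
```

Note that when chi-square falls at or below its degrees of freedom, RMSEA bottoms out at zero, which is why well-fitting models in modest samples often report RMSEA = .00.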

Select study highlights
  • This study represented the first-ever three-stratum, CHC-based CFA of a joint Wechsler/Woodcock data set.
  • At the broad CHC factor level, prior WISC-III/WJ III test classifications were supported. The WISC-III included a greater proportion of Gv tests (Picture Completion, Picture Arrangement, Block Design, Object Assembly) than the WJ III (Spatial Relations and Block Rotation).
  • The WISC-III Block Design test was the strongest single indicator of Gv (and of an integrated narrow spatial relations/visualization [SR/Vz] ability factor). Consistent with prior CHC-based Wechsler studies, the results continued to NOT support the interpretation of Wechsler Block Design as a measure of “reasoning” (Gf). WISC-III Object Assembly also appeared to be a strong indicator of Gv, although its interpretation at the narrow ability level was indeterminate.
  • Relatively low Gv loadings for WISC-III Picture Completion and Picture Arrangement, plus a secondary Gc loading for Picture Completion, reinforced prior research suggesting that, in the context of CHC-defined assessments, the use of these two WISC-III tests is discouraged, as their scores may confound the interpretation of Gv composites.
  • The two WJ III tests (Visual Matching, Decision Speed) and two WISC-III speeded tests (Coding, Symbol Search) were all identified as strong indicators of broad Gs, but differentiation at the narrow Gs ability level was not supported.
  • At variance with prior suggestions that WISC-III Arithmetic be interpreted as an indicator of quantitative reasoning (under Gf), when included together with select WJ III math achievement measures, WISC-III Arithmetic was found to be a mixed measure of Gq (quantitative knowledge) and Gs.
  • At the narrow ability level, support was found for the interpretation of the WJ III Numbers Reversed and Auditory Working Memory tests as measures of working memory (MW), whereas WJ III Memory for Words and Memory for Sentences are best interpreted as measures of memory span (MS). In contrast to the a priori hypothesis, the WISC-III was found to be a measure of MS only, and not of both MS and MW.
  • The WJ III Planning test was found to be less of an indicator of Gv than reported in previous studies and, instead, loaded primarily on Gf. The possible involvement of working memory, an ability linked to Gf and g, was hypothesized as a reason.
  • Consistent with the extant Wechsler CHC CFA research, the WJ III was found to provide valid measures of three broad CHC domains (Gf, Glr, Ga) not measured by the WISC-III.

IQ PREPLOG: Dissemination of pre-pub scholarly information via the "invisible university" network

Often the time lag between the completion of a scholarly manuscript and its formal publication is measured in years. During this time, the importance of the information often goes unnoticed by other researchers and practitioners (except among the “invisible university” of scholars who happen to exchange papers amongst themselves).

With this post, I’m announcing another experimental IQ Blog feature..a feature I believe can help reduce this time-lag problem….the IQ PREPLOG (which stands for the dissemination of highlights of PRE-publication articles scholars would like to disseminate or “PLug” via the blOG---thus, PREPLOG…hey…..give me a few points for good Glr----some form of "acronym fluency" or creativity).

Vanessa Danthiir and associates in Germany have agreed to be the first guinea pigs. This research group, which has been extremely active in contemporary CHC structural evidence research during the past decade, has recently completed two manuscripts (both “in press”) that focus on the structure of human cognitive processing speed (Gs, CDS, Gt). With the permission of the authors, the manuscript citations and abstracts appear below. Interested readers are encouraged to contact Dr. Danthiir (danthiiv@rz.hu-berlin.de) for additional information and/or to inquire about receiving pre-publication versions of the manuscripts.

Also, according to the authors, the main findings of the paper-and-pencil mental speed study have since been replicated with computerised versions of the tasks. Those results are currently being written up.

If other scholars have manuscripts that fit within the scope of this blog, are dying to get “the word out” regarding their forthcoming publications, and, more importantly, want to connect with other intelligence scholars/practitioners via an internet-based invisible university structure, please contact me, the IQ Blogmaster, via email (iap@netlinkcom.com). As editor (dictator?) of this blog, I have final decision-making power over what is posted......It is good to be king….at least within my small private professional sandbox.


Danthiir, V., Wilhelm, O., Schulze, R., & Roberts, R. (2005; in press). Factor Structure and Validity of Paper-and-Pencil Measures of Mental Speed: Evidence for a Higher-Order Model? Intelligence.

This study explored the structure of elementary cognitive tasks (ECTs) and relations between the corresponding construct(s) with processing speed (Gs) and fluid intelligence (Gf). Participants (N = 321) completed 14 ECTs, 3 Gs, and 6 Gf marker tests, all administered in paper-and-pencil format to reduce potential confounds evident when tasks are presented using different media. Factor analysis of the ECTs resulted in a general mental speed factor, along with several task-class specific factors. General mental speed was indistinguishable from Gs and highly correlated with Gf. Significant correlations were also found between Gf and variance specific to task-class speed factors. The findings point to the non-unitary nature of mental speed and the potentially important role of specific speed factors for examining the relationship between speed and fluid intelligence.
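As a crude numerical illustration of what "a general mental speed factor" means in an analysis like the one the abstract describes, the first principal component of a correlation matrix of speeded tasks can stand in for a general factor. The correlation matrix below is entirely hypothetical (the study fit proper factor-analytic models, not a principal-components shortcut):

```python
import numpy as np

# Hypothetical correlation matrix for four speeded tasks (illustrative only)
R = np.array([
    [1.00, 0.55, 0.45, 0.40],
    [0.55, 1.00, 0.50, 0.42],
    [0.45, 0.50, 1.00, 0.48],
    [0.40, 0.42, 0.48, 1.00],
])

# Loadings on the first principal component, a rough stand-in for a
# general speed factor: eigenvector scaled by sqrt of its eigenvalue
eigvals, eigvecs = np.linalg.eigh(R)          # eigh sorts eigenvalues ascending
g_loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])
g_loadings *= np.sign(g_loadings.sum())       # fix arbitrary sign so loadings are positive
print(np.round(g_loadings, 2))
```

When all tasks correlate positively (a "positive manifold"), every task loads substantially on this first component; the task-class specific factors the abstract mentions would then be modeled from the correlations left over after that general factor is removed.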

Danthiir, V., Wilhelm, O. & Schacht, A. (2005, in press). Decision Speed in Intelligence Tasks: Correctly an Ability? Psychology Science.

Relatively little is known regarding the broad factor of correct decision speed (CDS), which is represented in the theory of fluid and crystallized intelligence. The current study (N = 186) examined the possibility that distinct CDS factors may exist that are specific to the broad ability assessed by the tasks from which the correct response latencies are derived, in this instance fluid and crystallized intelligence (Gf and Gc) tasks. Additionally, the relationships between the correct response latencies and Gf, Gc, and processing speed (Gs) were investigated. Two distinct yet correlated factors of CDS were identified for Gf and Gc tasks, respectively. Both CDS factors were related to their ability factor counterparts, and the CDS factor from the Gc tasks was only weakly related to Gs. However, item difficulty moderated the relationships between CDS and the abilities. When item difficulty was considered relative to groups of participants differing in ability level, differences in the speed of responses were found amongst the ability groups. The pattern of differences in speed amongst the ability groups was similar across all levels of item difficulty. It is argued that this method of analysis is the most appropriate for assessing the relationship between ability level and CDS. The status of CDS as a broad ability construct is considered in light of these findings.

Monday, April 04, 2005

IQ Blog readers: I shall return

To those who have sent me private supportive notes (regarding this blog), or who shared their positive comments with me at the NASP convention last week, be patient. I'm now back and need a day or two to clean the top of my desk and my email inbox before resuming my posts and musings via the blog. Your positive feedback and continued checking of the site are providing me the necessary reinforcement to keep this experiment going. I just need a day or two before I can find time to write and post material of reasonable worth. "I shall return."

Saturday, April 02, 2005

Evolution of CHC Cross-Battery Assessment: Dr. Dawn Flanagan's NASP comments

Yesterday, at the end of a NASP paper session on CHC Cross-Battery (CB) assessment, Dr. Dawn Flanagan, leading expert on CB, made an interesting comment.

She stated that CB was developed prior to the emergence of a number of well-standardized intelligence batteries (WJ III, SB5, KABC-II) that collectively provide "norm-based" composites of the major broad CHC abilities. Thus, she stated that the real value of the CB method should now morph towards supplementing CHC-designed batteries at the narrow ability level (e.g., obtaining a two-test Visual Memory composite when giving the WJ III would require supplementing the Picture Recognition test with another test of MV). I thought this was a very perceptive insight regarding the responsiveness of CB methods to the evolution of the major intelligence batteries.

Friday, April 01, 2005

CHC bandwagon officially declared: Observations from 2005 NASP convention

OK. I unilaterally declare that the CHC assessment bandwagon is here and gathering steam in school psychology.

After two days at the National Association of School Psychologists (NASP) annual convention in Atlanta, I’m officially declaring that the CHC “tipping point” occurred sometime during the past five years and that the CHC bandwagon is getting larger.

During breakfast this morning, while skimming the convention program, I counted at least 20 different workshops, papers, and/or posters that dealt with CHC-designed batteries (e.g., WJ III, KABC-II, SB5), CHC theory, or CHC Cross-Battery (CB) assessment, or that mentioned CHC in the program abstract. This represents, in my informal memory-based analysis, a significant increase in presentations related to Gf-Gc/CHC theory and measurement over the past decade.

Having been involved in the revision of the 1977 WJ into the WJ-R, a process that included both Dr. John “Jack” Carroll and Dr. John Horn as the primary theory consultants (back then the theory was referred to as Gf-Gc theory), it is now exciting to see that the theory-to-practice gap is finally being bridged, and that an ever-increasing number of test authors and assessment practitioners are on the bandwagon riding over the CHC bridge. This is good for the field. This is good for kids (the data being used to make decisions are now built on a solid foundation of validity evidence).

Welcome aboard, Gale Roid (SB5 author) and Alan and Nadeen Kaufman (KABC-II authors). [Note....I predict that the DAS-II will also have a strong CHC flavor.] It is good to see that respected scholars and test developers are now validating the “ahead of the curve” conclusion of Dr. Richard Woodcock, back in 1985, that the then Gf-Gc theory (now known as CHC theory) was “the” structural theory of intelligence with the most solid empirical and theoretical foundation from which to develop measures of intelligence.

For those who want a historical perspective on what happened, when, and how (with regard to the movement of CHC research into school psychology assessment practice), please read my historical account as posted “up in the sky” (click here...also published in the CAI2 book). The documented events provide the evidence for my claim of a CHC bandwagon effect.