Showing posts with label CHC listserv. Show all posts

Monday, August 22, 2016

"Intelligent" intelligence testing with the WJ IV COG #7: Why do some individuals obtain markedly different scores on the various WJ IV Ga tests?

This is #7 in the "Intelligent" intelligence testing with the WJ IV COG series at IQs Corner. Copies of the PPT module can be downloaded by clicking on the LinkedIn icon in the right-hand corner of the slide show below. A PDF copy of all slides can be found here.

This module was developed in response to a thread on the IAPCHC listserv where an individual asked for help in understanding why the WJ IV Phonological Processing test score could be so much different (lower) than the WJ IV Sound Blending and Segmentation test scores.

Enjoy.



Friday, May 15, 2009

The promise of CHC theory of intelligence

Combine the past 20 years of CHC-driven intelligence test development and research activities (click here and here) with the ongoing refinement and extension of CHC theory (McGrew, 2005; 2009) and one concludes that these are exciting times in the field of intelligence testing. But is this excitement warranted in school psychology? Has the drawing of a reasonably circumscribed “holy grail” taxonomy of cognitive abilities led us to the promised land of intelligence testing in the schools—using the results of cognitive assessments to better the education of children with special needs? Or, have we simply become more sophisticated in the range of measures and tools used to “sink shafts at more critical points” in the mind (see Lubinski, 2000) which, although important for understanding and studying human individual differences, fail to improve diagnosis, classification, and instruction in education?
It is an interesting coincidence that McDermott, Fantuzzo, and Glutting’s (1990) now infamous and catchy admonition to psychologists who administer intelligence tests to “just say no to subtest analysis” occurred almost 20 years ago—the time when contemporary CHC intelligence theory and assessment was emerging. By 1990, McDermott and colleagues had convincingly demonstrated, largely via core profile analysis of the then current Wechsler trilogy of batteries (WPPSI, WISC-R, WAIS-R) that ipsative strength and weakness interpretation of subtest profiles was not psychometrically sound. In essence, “beyond g (full scale IQ)—don’t bother.”
I believe that optimism is appropriate regarding the educational relevance of CHC-driven test development and research. Surprisingly, cautious optimism has been voiced by prominent school psychology critics of intelligence testing. In a review of the WJ-R, Ysseldyke (1990) described the WJ-R as representing “a significant milestone in the applied measurement of intellectual abilities” (p. 274). More importantly, Ysseldyke indicated he was “excited about a number of possibilities for use of the WJ-R in empirical investigations of important issues in psychology, education, and, specifically, in special education…we may now be able to investigate the extent to which knowledge of pupil performance on the various factors is prescriptively predictive of relative success in school. That is, we may now begin to address treatment relevance.” (p. 273). Reschly (1997), in response to the first CHC-based cognitive-achievement causal modeling research report (McGrew, Flanagan, Keith & Vanderwood, 1997) which demonstrated that some specific CHC abilities are important in understanding reading and math achievement above and beyond the effect of general intelligence (g), concluded that “the arguments were fairly convincing regarding the need to reconsider the specific versus general abilities conclusions. Clearly, some specific abilities appear to have potential for improving individual diagnoses. Note, however, that it is potential that has been demonstrated” (Reschly, 1997, p. 238).
Clearly the potential and promise of improved intelligence testing, vis-à-vis CHC organized test batteries, has been recognized since 1989. But has this promise been realized during the past 20 years? Has our measurement of CHC abilities improved? Has CHC-based cognitive assessment provided a better understanding of the relations between specific cognitive abilities and school achievement? Has it improved identification and classification? More importantly, in the current educational climate, where does CHC-grounded intelligence testing fit within the context of the emerging Response-to-Intervention (RTI) paradigm?
An attempt to answer these questions will be forthcoming in a manuscript submitted for publication (McGrew & Wendling, 2009) as well as a revision of the CHC COG-ACH Relations Research Synthesis Project available at IQs Corner (warning: the currently posted material is outdated and does not reflect the final conclusions of the McGrew & Wendling (2009) review; it is in the process of being revised and will be posted soon). Stay tuned to the IQs Corner blog or announcements via the NASP and CHC listservs.

Click here for other posts in this series.

Tuesday, February 10, 2009

CHC (Cattell-Horn-Carroll) listserv n=1111


I've noticed an uptick in email notifications indicating that new people have been joining the CHC listserv. So...I checked and saw that we have broken the 1000 barrier!!!!! As of today n=1111. Today 1111....next year 2000+

Spread the word.

Thursday, January 08, 2009

CHC COG-ACH research synthesis project important update 1-8-09


I'm pleased to announce another update and major revision to the Cattell-Horn-Carroll (CHC) Cognitive Abilities-Achievement Research Synthesis project, a project first described in a prior post. This is a work "in progress". The purpose of this project is to systematically synthesize the key research studies, grounded in the Cattell-Horn-Carroll (CHC) theory of cognitive abilities, that have investigated the relations between broad and narrow CHC abilities and school achievement.

The status of the project can be accessed via a clickable MindMap visual-graphic navigational tool (similar to the image above...but "active" and "dynamic") or via the more traditional web page outline navigational method. You can toggle back and forth between the different navigation methods via the options in the upper right-hand corner of the respective home web page.

Feedback is appreciated. I request that feedback be funneled to either the CHC and/or NASP professional listservs, mechanisms that provide for a more dynamic give-and-take exchange of ideas, thoughts, reactions, criticisms, suggestions, etc.

The most significant new revisions/additions/changes are in branches 4 and 6. A subtopic under branch 4 (studies included in the review) now includes the references for all 19 studies reviewed AND URL links to the actual studies. The most notable addition is branch 6, which provides access to four summary charts/figures (sample figure is above) that attempt to synthesize the massive amount of coded information in the tabular summary tables (branch 5).

Barb Wendling and I are now going to commence interpretation of these charts/figures (and the tabular summary tables). I wish I had my interpretation, caveats, and explanations of surprising findings written today, but it is going to take some time. To facilitate the process I would LOVE feedback from CHC experts on insights they may glean from a review of the four summary figures. Please discuss any hypotheses, insights, etc. on the CHC and/or NASP listservs.

This project is evolving into a manuscript to be submitted for publication and will also serve as the basis of my mini-skills workshop at NASP re: this research project.

Enjoy.



Friday, February 01, 2008

WJ III NU scoring issue explanation: Guest blog post by David Dailey


Recently a post was made to the CHC listserv asking for clarification regarding a particular score provided by the WJ III NU norms. I thought the question provided a "teachable moment" regarding certain psychometric principles and methods used in the WJ family of instruments.

I asked David Dailey, the resident statistician and technical consultant to the WJ author team, to write a brief explanation. His well written response is below. Enjoy.

[Interested readers may also want to consult a recently published WJ III NU Assessment Service Bulletin that explains why scores may differ between the WJ III and NU norms. Also, conflict of interest disclosure - I'm a coauthor of the WJ III]


Dear Ms. Jensen (person who posed the question):

Thank you for sharing the interesting profile of reading scores with the CHC mailing list. Kevin McGrew has asked me to write a few sentences about the phenomenon exhibited by these scores-- particularly, as you ask, why this 61-month-old child's Broad Reading score is "so low". I have been heavily involved in the development of the WJ III and WJ III NU norm tables, and I hope I will be able to shed some light on your question.

You reported that your subject earned a particular set of standard scores on the reading tests and clusters. I have augmented those scores with the approximate W, W-difference, and RPI scores that would also have appeared for that subject, in the following table (best viewed in a fixed-width font):

[I apologize for the formatting of the numbers below.....I tried hard to get a nice table format but was unable to get anything to work. I'm still a relative newbie when it comes to using blogging software]

Test/Cluster             SS    W    W-diff   RPI
Letter-Word ID           140   431    +87    100
Word Attack              138   463    +81    100
Reading Fluency          128   477    +13     97
Passage Comprehension    133   458    +56    100
Broad Reading            125   455    +52    100
Brief Reading            149   444    +71    100
Basic Reading Skills     145   448    +85    100


You can verify for yourself that the cluster W scores are the arithmetic means of the W scores for the tests making up the cluster. The W-differences and the RPIs show that this subject's reading development is far above that of his/her age peers-- but they also show that the Reading Fluency score is not nearly as exceptional as the remaining scores.
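That check can be sketched in a few lines of Python. The cluster compositions below are the standard WJ III reading clusters (Broad Reading = Letter-Word ID + Reading Fluency + Passage Comprehension; Brief Reading = Letter-Word ID + Passage Comprehension; Basic Reading Skills = Letter-Word ID + Word Attack); small rounding differences are expected because the published W scores are themselves rounded:

```python
# Verify that each cluster W is (approximately) the arithmetic mean
# of its component test W scores from the table above.

test_w = {
    "Letter-Word ID": 431,
    "Word Attack": 463,
    "Reading Fluency": 477,
    "Passage Comprehension": 458,
}

clusters = {
    "Broad Reading": ["Letter-Word ID", "Reading Fluency", "Passage Comprehension"],
    "Brief Reading": ["Letter-Word ID", "Passage Comprehension"],
    "Basic Reading Skills": ["Letter-Word ID", "Word Attack"],
}

reported_w = {"Broad Reading": 455, "Brief Reading": 444, "Basic Reading Skills": 448}

for name, tests in clusters.items():
    mean_w = sum(test_w[t] for t in tests) / len(tests)
    # Published Ws are rounded, so allow a +/- 1 point discrepancy.
    assert abs(mean_w - reported_w[name]) <= 1
    print(f"{name}: mean W = {mean_w:.1f} (reported {reported_w[name]})")
```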

You were concerned that the Broad Reading cluster standard score was so much lower than the other cluster standard scores. Although this subject's scores were exceptionally high for all the clusters (in terms of proficiency relative to age peers), the Broad Reading score is not as exceptional when compared to the other clusters. Its W-difference is lower than the other clusters because it includes Reading Fluency, for which the subject outperformed age peers by "only" 13 W points.

The W-difference score in the table above is one of two terms that go into calculating a subject's standard score. The other is a scaling factor (SD - standard deviation) that accounts for how widely or narrowly spread the test scores were in the reference peer group.

In Woodcock-Johnson products, the scaling factor (SD) for subjects performing below the median for the peer reference group is permitted to be, and often is, different from the scaling factor (SD) for subjects performing above the median. So the WJ scoring model has always been able to reflect different amounts of spread among high performers than among low performers.

It turns out that, for young subjects such as yours, the scaling factor (SD) for high performers on the reading clusters is quite large-- meaning it takes a very large W-difference to earn a standard score that is far away from the mean. This is because, for most of these reading skills, the scores for the above-median subjects are very widely spread out. For Broad Reading, a 61-month-old subject must earn 32 W points more than the median to receive a standard score of 115 (one standard deviation above the mean). For the other two reading clusters, the number is somewhat smaller; that, coupled with the higher W-differences your subject earned on those clusters, accounts for the standard-score pattern for your subject.

(You might notice that the scaling factor for Reading Fluency is quite small. This reflects the fact that there is very little variation among above-median subjects at this age on this task.)
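The arithmetic behind this can be sketched as a simple linear mapping from a W-difference to a standard score. The actual WJ norm tables use more precise, age- and direction-specific values, so the scaling factors below (32 W points per SD for Broad Reading, roughly 7 for Reading Fluency, inferred from the figures above) are illustrative assumptions only:

```python
def standard_score(w_diff, sd_w, mean_ss=100, sd_ss=15):
    """Illustrative linear mapping from a W-difference to a standard score.

    sd_w is the scaling factor: how many W points correspond to one
    standard deviation in the (above- or below-median) norm group.
    The real WJ scoring model is more complex than this sketch.
    """
    return mean_ss + sd_ss * w_diff / sd_w

# Broad Reading: 32 W points above the median earns SS 115 at this age,
# so the above-median scaling factor is roughly 32 W points per SD.
print(round(standard_score(w_diff=52, sd_w=32)))  # ~124, near the reported 125

# Reading Fluency: a W-diff of +13 yielded SS 128, implying a much
# smaller scaling factor of roughly 15 * 13 / 28, about 7 W points per SD.
print(round(standard_score(w_diff=13, sd_w=7)))   # ~128
```

With the same +52 W-difference, halving the scaling factor would roughly double the distance from the mean -- which is why the amount of spread in the norming sample matters as much as the subject's raw advantage.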

So the bottom line here is that the Broad Reading score suffers a "double whammy"-- a comparatively lower W-difference (due to the lower Reading Fluency) and a larger amount of above-median variation in the norming-sample scores. And this subject earned much higher standard scores on the other clusters because his or her relative performance (in terms of raw ability) on those clusters was much higher and the variation within the norming sample was smaller.

Thank you again for your question. I hope I have been able to help you understand more about how these scores work.


Thursday, January 10, 2008

CHC listserv n=1000+: Time to celebrate

Call MSNBC, CNN, the AP....spread the word.....the CHC listserv has reached the n=1000 membership level!!!!!!! As of this post, the current membership is n=1003.

The 1000th person to join was Jon Ross. Kudos to Jon. As a result of Jon's excellent timing, he is going to be shipped one of the few remaining "classic" IAP Gf/Fluid Intelligence coffee mugs. They are collectors' items.

Check out the picture of the mug and the prior message. This accomplishment has me contemplating the idea of bringing back these mugs for a moderate price. If enough folks would be interested to make it worth my time and costs, I might be willing to offer these for sale to dedicated CHC'ers. I would need to figure out costs to have them designed from scratch as well as the cost to package and ship...plus a little ching in my pocket for the time (I would probably pay a local kid to handle the fulfillment).

In order to ascertain interest, send me an email (iap@earthlink.net) indicating your interest, and more importantly the number of mugs you would be willing to purchase. Of course, multiple mugs would require some kind of special pricing. Suggested prices would also be welcome.

University professors - think of the idea of purchasing these for all students in your intellectual assessment classes...and awarding them at the successful completion of their assessment sequence, their first completed CHC assessment report, the successful defense of their dissertation, etc. If I could anticipate a regular stream of moderate bulk orders from trainers, that would spur me to contemplate this with greater fervor. The tricky part is how many to order.