
Tuesday, September 02, 2025

From the #Cattell-Horn-Carroll (#CHC) #cognitive #intelligence theory archives: Photos of important 1999 Carroll, Horn, Woodcock, Roid et al. meeting in Chapel Hill, NC.

I was recently cleaning my office when I stumbled upon these priceless photos from a 1999 historical meeting in Chapel Hill, NC that involved John Horn, Jack Carroll, Richard Woodcock, Gale Roid, John Wasserman, Fred Schrank, and myself.  The provenance (I’ve always wanted to use this word 😉) for the meeting is provided below the pictures in the form of extracted quotes from Wasserman (2019) and McGrew (2023) (links below), which I confirmed with John Wasserman via a personal email on August 30, 2025.

The CHC-based WJ-R (1989) had already been published, and the WJ III author team was nearing completion of the CHC-based WJ III (2001).  Unbeknownst to many is the fact that Woodcock was originally planned to be one of the coauthors of the SB5 (along with Gale Roid), which explains his presence in the photos that document one of several planning meetings for the CHC-based SB5.  

I was also involved as a consultant during the early planning for the CHC-based SB5 because of my knowledge of the evolving CHC theory.  My role was to review and integrate all available published and unpublished factor analysis research on all prior editions of the different SB legacy tests. I post these pictures with the names of the people included in each photo immediately below the photo. No other comments (save for the next paragraph) are provided.  

To say the least, my presence at this meeting (as well as many other meetings with Carroll and Horn, together and individually, that occurred when planning the various editions of the WJ) was surreal.  One could sense a paradigm shift in intelligence testing happening in real time during the meetings!  The expertise of the leading theorists behind what became known as CHC theory, together with the applied test-development expertise of Woodcock and Roid, provided me with learning experiences that cannot be captured in any book or university coursework. 

Click on images to enlarge.  

Be gentle; these are the best available copies of images taken with an old-school camera (not smartphone-based digital images).

(Carroll, Woodcock, McGrew, Schrank)

(Carroll, Woodcock, McGrew)

(Woodcock, Wasserman, Roid, Carroll, Horn)

(Wasserman, Roid, Carroll, Horn, McGrew)

(Carroll, Woodcock)


———————-


“It was only when I left TPC for employment with Riverside Publishing (now Houghton-Mifflin-Harcourt; HMH) in 1996 that I met Richard W. Woodcock and Kevin S. McGrew and became immersed in the extended Gf-Gc (fluid-crystallized)/Horn-Cattell theory, beginning to appreciate how Carroll's Three-Stratum (3S) model could be operationalized in cognitive-intellectual tests. Riverside had been the home of the first Gf-Gc intelligence test, the Stanford–Binet Intelligence Scale, Fourth Edition (SB IV; R. L. Thorndike, Hagen, & Sattler, 1986), which was structured hierarchically with Spearman's g at the apex, four broad ability factors at a lower level, and individual subtests at the lowest level. After acquiring the Woodcock–Johnson (WJ-R; Woodcock & Johnson, 1989) from DLM Teaching Resources, Riverside now held a second Gf-Gc measure. The WJ-R Tests of Cognitive Ability measured seven broad ability factors from Gf-Gc theory with an eighth broad ability factor possible through two quantitative tests from the WJ-R Tests of Achievement. When I arrived, planning was underway for new test editions – the WJ III (Woodcock, McGrew, & Mather, 2001) and the SB5 (Roid, 2003) – and Woodcock was then slated to co-author both tests, although he later stepped down from the SB5. Consequently, I had the privilege of participating in meetings in 1999 with John B. Carroll and John L. Horn, both of whom had been paid expert consultants to the development of the WJ-R” (Wasserman, 2019, p. 250).

——————-

“In 1999, Woodcock brokered the CHC umbrella term with Horn and Carroll for practical reasons (McGrew 2005)—to facilitate internal and external communication regarding the theoretical model of cognitive abilities underlying the then-overlapping test development activities (and some overlapping consultants, test authors, and test publisher project directors; John Horn, Jack Carroll, Richard Woodcock, Gale Roid, Kevin McGrew, Fred Schrank, and John Wasserman) of the Woodcock–Johnson III and the Stanford Binet–Fifth Edition by Riverside Publishing” (McGrew, 2023, p. 3).

Thursday, June 19, 2025

Research Byte: Individual differences in #spatial navigation and #workingmemory - let's hear it for the new #WJV visual working memory test—#CHC #Gv #Gwm #schoolpsychology #cognition #intelligence

Individual differences in spatial navigation and working memory
Published in Intelligence. Sorry, but it is not an open-access downloadable article 😕

Abstract

Spatial navigation is a complex skill that relies on many aspects of cognition. Our study aims to clarify the role of working memory in spatial navigation, and particularly, the potentially separate contributions of verbal and visuospatial working memory. We leverage individual differences to understand how working memory differs among types of navigators and the predictive utility of verbal and visuospatial working memory. Data were analyzed from N = 253 healthy, young adults. Participants completed multiple measures of verbal and visuospatial working memory and a spatial navigation task called Virtual Silcton. We found that better navigators may rely more on visuospatial working memory. Additionally, using a relative weights analysis, we found that visuospatial working memory accounts for a large majority of variance in spatial navigation when compared to verbal working memory. Our results suggest individual differences in working memory are domain-specific in this context of spatial navigation, with visuospatial working memory being the primary contributor.
————————
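For readers unfamiliar with the relative weights analysis mentioned in the abstract, below is a minimal, illustrative Python sketch of the general approach (Johnson, 2000). The data, variable names, and coefficients are hypothetical assumptions for demonstration only; this is not the study authors' code or data.

```python
# Illustrative sketch of Johnson's (2000) relative weights analysis.
# All data and variable names below are hypothetical.
import numpy as np

def relative_weights(X, y):
    """Estimate each correlated predictor's share of criterion variance."""
    X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize predictors
    y = (y - y.mean()) / y.std(ddof=1)                  # standardize criterion
    n = X.shape[0]
    R = np.corrcoef(X, rowvar=False)                    # predictor intercorrelation matrix
    evals, P = np.linalg.eigh(R)                        # eigendecomposition of R
    Lam = P @ np.diag(np.sqrt(evals)) @ P.T             # loadings of X on its orthogonal counterpart Z
    Z = X @ P @ np.diag(1.0 / np.sqrt(evals)) @ P.T     # orthogonalized predictors
    beta = Z.T @ y / (n - 1)                            # weights from regressing y on Z
    raw = (Lam ** 2) @ (beta ** 2)                      # raw relative weights; they sum to R^2
    return raw, raw / raw.sum()                         # raw weights and each predictor's share of R^2

# Hypothetical example: three visuospatial and two verbal working memory scores
# predicting a navigation score in a sample of 253 (the study's N).
rng = np.random.default_rng(2025)
X = rng.normal(size=(253, 5))
y = 0.6 * X[:, 0] + 0.5 * X[:, 1] + 0.4 * X[:, 2] + 0.1 * X[:, 3] + rng.normal(size=253)
raw, share = relative_weights(X, y)
print("R^2 =", round(raw.sum(), 3), "| share of R^2 per predictor:", np.round(share, 3))
```

The rescaled weights sum to 1.0, so a finding that "visuospatial working memory accounts for a large majority of variance" corresponds to those predictors holding most of the R² shares.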
As an FYI, the WJ V has a new cognitive Visual Working Memory test that I created.  Unfortunately, it was not included in the original WJ V launch and will be added in a later release…not sure when…no one has told me…but I think this fall.
The back story is that this test was in development by yours truly for over 30 years.  For the WJ III, I developed, and we normed, a visual working memory test in which examinees were shown an abstract line-based image on a dotted grid and were instructed to rotate the image in their mind (after the stimulus figure was removed) and then draw the rotated image on an identical blank grid.  Having examinees draw their responses was intended to add clinical information about visual-motor abilities, in addition to visual working memory.  Unfortunately, after the test was completely normed, we learned via inter-rater reliability studies that the scoring reliability was not adequate…darn.  
The second attempt was an earlier version of the current WJ V Visual Working Memory test that had already been printed for the WJ IV norming test books.  The WJ IV version was shelved at the last minute due to cost issues stemming from the financial crisis at the end of the Bush presidency.  We were instructed to reduce the cost of the WJ IV norming, and this test simply had too many printed test easel pages (it was called a "page eater") and was eliminated…double darn.  
However, this turned out to be a blessing in disguise.  With the new digital testing platform, the shelved WJ IV version could be presented without concern for the number of pages, and, more importantly, it could have a much more complex and informative underlying scoring system, since all taps on an asymmetrical response grid were recorded (a richer set of response data than the original WJ IV version would have yielded).  As stated in the WJ V technical manual (LaForte, Dailey & McGrew, 2025, p. 40):
The Visual Working Memory test requires the use of visual working memory “in the context of processing” (Maehara & Saito, 2007). For each item, the examinee briefly studies a pattern of stimulus dots inside of randomly placed squares on the screen and then must recall the specific locations of the dots. The presentation and recall screens are separated by a quick and simple visual discrimination distractor item. This test requires the examinee to maintain information in working memory while actively processing the distractor requirements. Once the distractor task is completed, it must be quickly removed from active memory to focus on recalling the locations of the stimulus dots (Burgoyne et al., 2022). Errors of both omission (i.e., erroneously recalling a dot in a box where no dot was present) and commission (i.e., failing to identify a box associated with a dot's correct location) are both factored into the test's scoring model; however, heavier emphasis is placed on visual recall through a relatively higher penalty for errors of commission.
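To make the weighting idea concrete, here is a purely illustrative sketch of how omission and commission errors could be penalized differently when scoring a single recalled dot pattern. The function name, penalty values, and grid representation are my own assumptions for demonstration; the actual WJ V scoring model is more elaborate and is documented in the technical manual.

```python
# Illustrative only: a toy weighted-error score for one visual working memory item.
# The penalty values and representation are assumptions, not the WJ V algorithm.
def score_item(target_cells, recalled_cells, commission_weight=1.5, omission_weight=1.0):
    """Score one item from sets of (row, col) grid cells: studied vs. recalled."""
    target, recalled = set(target_cells), set(recalled_cells)
    hits = len(target & recalled)             # dot locations recalled correctly
    omissions = len(target - recalled)        # studied locations the examinee failed to mark
    commissions = len(recalled - target)      # marked locations where no dot was shown
    raw = hits - commission_weight * commissions - omission_weight * omissions
    return max(raw, 0.0), {"hits": hits, "omissions": omissions, "commissions": commissions}

# Example: three studied locations; all three recalled, plus one false placement.
print(score_item({(0, 1), (2, 3), (4, 4)}, {(0, 1), (2, 3), (4, 4), (1, 1)}))
# -> (1.5, {'hits': 3, 'omissions': 0, 'commissions': 1})
```

Because the digital platform records every tap on the response grid, counts like these are available for each item, which is what makes a weighted, response-rich scoring model feasible.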
Validity information in the WJ V TM provides evidence that the new Visual Working Memory test is a mixed measure of Gv and Gwm.  Preliminary evidence (inspection of growth curves and standard deviation distributional characteristics) was interpreted as being consistent with other measures of executive functioning.  Additional concurrent validity studies with established measures of executive functioning are needed before an evidence-based claim of executive functioning score variance can clearly be established.
I think the 30+ year wait was worth it.  I’m very proud of this test in its current form.  A “shout out” to Dr. Erica LaForte and David Dailey for creating such a response-rich stream of data for scoring…something that was not possible in the planned non-digital WJ III and WJ IV versions.

Friday, November 08, 2024

On the origin and evolution (from 1997 chapter to 2025 #WJV) of the #CHC #intelligence theories' definitions: The missing CHC definitions' birth certificate

This is an updated version of an OBG (oldie but goodie) post originally made in 2017.  


The historical development of the CHC model of intelligence has been documented by McGrew (2005) and Schneider and McGrew (2012) and summarized by Kaufman and colleagues (Kaufman, 2009; Kaufman, Raiford & Coalson, 2016). Additional extensions and historical anecdotes were recently presented by McGrew (2023) in an article included in a special issue of the Journal of Intelligence focused on Jack Carroll’s three-stratum theory at 30 years. McGrew (2023) recommended that CHC theory should now be referred to as a group of CHC theories (i.e., a family of related models) that recognizes the similarities and differences between the theoretical models of Cattell, Horn, and Carroll.

A crucial, yet missing, piece of the CHC story is the origin of the original CHC broad and narrow ability definitions.  The CHC ability definitions' birth certificate, until recently, had not been revealed.  To fend off possible CHC “birther” controversies, I will now set the record straight again (as was first done in 2017) regarding the heritage of the past and current CHC definitions.

Given the involvement of both John Horn and Jack Carroll in revisions of the WJ-R and WJ III, which was the impetus for the combined CHC theory, it is not surprising that the relations between the “official” CHC ability definitions and the WJ tests were “reciprocal in nature, with changes in one driving changes in the other” (Kaufman et al., 2016, p. 253).  Furthermore, “the WJ IV represented the first revision in which none of the original CHC theorists was alive at the time of publication,” producing an imbalance in this reciprocal relationship: “the WJ IV manuals now often served as the official source for the latest CHC theory and model of cognitive abilities (J. Schneider, personal communication, March 15, 2015)” (Kaufman et al., 2016, p. 253).  Kaufman et al. noted that with the development of subsequent non-WJ CHC assessment and interpretation frameworks (e.g., Flanagan and colleagues’ CHC cross-battery assessment; Miller’s integrated school neuropsychology/CHC assessment model), some confusion has crept into what represents the authoritative “official” and “unofficial” definitions and sources.  

The incestuous evolution of the CHC definitions continued in Schneider and McGrew (2012) and Schneider and McGrew (2018), which built primarily on the McGrew (2005) definitions, which in turn were reflected in the 2001 WJ III manuals, which in turn drew from McGrew (1997).  In my original 2017 post on this topic, I judged it time to divorce the official CHC definitions from the WJ series and authors (particularly myself, Kevin McGrew). 

However, the CHC birth certificate is still often questioned.  Did the CHC definitions magically appear?  Did they come down in tablet form from a mountain top?  After the Cattell-Horn and Carroll models were first married by McGrew (1997), were the definitions the result of some form of immaculate conception?  Did  McGrew (1997) develop them unilaterally?  

Here is….the “rest of the story.”  

The original CHC definitions were first presented in McGrew’s (1997) chapter, in which the individual tests from all major intelligence batteries were classified as per the first integration of the Cattell-Horn and Carroll models of cognitive abilities (then called a “proposed synthesized Carroll and Horn-Cattell Gf-Gc framework”).  In order to complete this analysis, I (Kevin McGrew) needed standard CHC broad and narrow ability definitions—but none existed.  I consulted the Bible…Carroll’s Human Cognitive Abilities (1993).



I developed the original definitions (primarily the narrow ability definitions) by abstracting definitions from Carroll’s (1993) book.  After completing the first draft of the definitions, I sent them to Carroll. He graciously took time to comment and edit the first draft. I subsequently revised the definitions and sent them back. Jack and I engaged in several iterations until he was comfortable with the working definitions. As a result, the original narrow ability definitions published in McGrew (1997) had the informal stamp of approval of Carroll, but not of Horn. The official CHC definition birth certificate should list Carroll and McGrew as the parents.  

Since then, the broad and narrow CHC ability definitions have been parented by McGrew (McGrew & Woodcock, 2001; McGrew, 2005; McGrew et al., 2014) and, more recently, uncle Joel Schneider (Schneider & McGrew, 2012, 2018). The other WJ III and WJ IV authors (Mather, Schrank, and Woodcock) served as aunts and uncles at various points in the evolution of the definitions, resulting in the current “unofficial” definitions being those in the WJ IV technical manual (McGrew et al., 2014) and the Schneider and McGrew (2018) chapter.




With new data-based insights from the validity analysis of the norm data from the forthcoming WJ V (LaForte, Dailey & McGrew, 2025, in preparation), the WJ V technical manual will provide, yet again, a slightly new and improved set of CHC definitions.  Stay tuned.

No doubt the WJ V 2025 updated CHC definitions will still have a clear Carroll/McGrew, WJ III/WJ IV/WJ V, and Joel Schneider genetic lineage (McGrew, 1997 → McGrew & Woodcock, 2001 → McGrew, 2005 → Schneider & McGrew, 2012 → McGrew et al., 2014 → Schneider & McGrew, 2018).  We (Schneider and McGrew) are reasonably comfortable with this fact.  However, we hope that the WJ → WJ V set of CHC definitions will eventually move out of the influence of the WJ/CHC house and establish a separate residence, identity, and process for future growth.  I am aware that Dr. Dawn Flanagan and colleagues are working on a new revision of their CHC cross-battery book and related software and will most likely include a new set of revised definitions.  Perhaps melding the WJ V technical manual definition appendix with the work of Flanagan et al. would be a good starting point.  Perhaps some group or consortium of interested professionals could be established to nurture, revise, and grow the CHC definitions.

Saturday, April 24, 2021

The evolution of the Woodcock-Johnson (WJ to WJ IV) global IQ or g scores - The WJ is the Elon Musk of IQ testing

Across the various editions of the WJ Tests of Cognitive Abilities (WJ, WJ-R, WJ III, WJ IV), the authors have continually sought to improve the measurement of intelligence by following contemporary research and theory.  As a result, and in contrast to many other IQ tests, the WJ has been known for global IQ scores (originally called Broad Cognitive Ability, later changed to General Intellectual Ability) that changed rather dramatically from one revision to the next.  We WJ authors might be considered the Elon Musk of IQ test development.

I was recently asked to explain the changing nature of the BCA/GIA scores.  The result is the table inserted below (double click to enlarge).  I believe the table is self-explanatory.  A nice PDF copy can be downloaded here.  Enjoy.




Monday, March 06, 2017

CHC impact: The Cattell-Horn-Carroll (CHC) taxonomy of cognitive abilities has gone global



[Note.  Original post on March 6, 2017 has now been updated (March 7, 2017) to include reference to research in Spain]

The CHC taxonomy is officially a globetrotter with a large bank of frequent-flier miles.  One indicator of the increasing prominence and spread of the CHC taxonomy is the globalization of CHC assessment activities in countries beyond the United States.  Several examples, which are not exhaustive, are summarized below. 

The influence of CHC theory, primarily via university assessment training in the use of the CHC-based Batería III Woodcock-Muñoz (BAT III; Muñoz-Sandoval, Woodcock, McGrew, & Mather, 2005a, 2005b), is prominent in Spanish-speaking countries south of the US border.  This includes training, research, or clinical use of the BAT III in Cuba, Mexico, Chile, Costa Rica, Panama, and Guatemala.[1]  Farther south, researchers in Brazil were early adopters of the CHC taxonomy as a guide for intelligence test development (Primi, 2003; Wechsler & de Cassia Nakano, 2016).  For example, Wechsler and colleagues (Wechsler & Schelini, 2006; Wechsler, Nunes, Schelini, Pasian, Homsi, Moretti, & Anache, 2010; Wechsler, Vendramini, & Schelini, 2007) completed several studies in an attempt to adapt the CHC-based WJ III to Brazil.  More recently, Wechsler, Vendramini, Schelini, Lourenconi, de Souza, and Bundim (2014) developed the Brazilian Adult Intelligence Battery (BAIB), which, although it measures only Gf and Gc, is grounded in CHC theory.  Other Brazilian researchers have focused on the nature and measurement of Gf (Primi, Ferrão, & Almeida, 2010; Primi, 2014), with their research clearly couched in the context of the CHC model.  Even broader in scope, I (Kevin McGrew), together with Dr. Joel Schneider, consulted with researchers from the Brazilian Ayrton Senna Institute (from 2016 to 2017) on the use of the CHC model as the key cognitive ability framework for developing measures of critical thinking and creativity as per the Organization for Economic Cooperation and Development (OECD, 2016) efforts to develop 21st-century skills in students.

CHC influences are also present north of the US border in Canada.  The CHC-based WJ III has been used by practitioners in Canada based on a US-Canadian matched-sample comparison study (Ford, Swart, Negreiros, Lacroix & McGrew, 2010).  The WJ IV is also sold and used in Canada.  Additionally, a school-based, group-administered CHC test (Insight; Beal, 2011) measuring Gf, Gc, Gv, Ga, Gwm, Glr, Gs, and CDS (Gt) is available in Canada.  CHC theory and testing has a prominent place in school psychology assessment courses in several major Canadian universities (e.g., University of British Columbia; University of Alberta).[2]

One of the first systematic global CHC test development outreach projects was the effort, led by Richard Woodcock and the Woodcock-Muñoz Foundation, to provide Eastern European countries with cost-effective, briefer versions of the CHC-based Woodcock-Johnson Tests of Cognitive Abilities—Third Edition.  The WJ III-IE (international editions) projects started in the early 2000s and continued until approximately 2015.  WJ III-IE norming efforts occurred in the Czech Republic, Hungary, Latvia, Romania, and Slovakia.  Other European efforts include the Austrian-developed computerized Intelligence Structure Battery (INSBAT; Arendasy, Hornke, Sommer, Wagner-Menghin, Gittler, Hausler, Bongnar, & Wenzl, 2012), which measures six broad CHC abilities (Gf, Gq, Gc, Gwm, Gv, Glr).  The spread of CHC theory has also reached France and Spain.  French researchers have analyzed French versions of the various Wechsler scales from the perspective of the CHC framework (e.g., see Golay, Reverte, Rossier, Favez, & Lecerf, 2013; also Lecerf, Rossier, Favez, Reverte, & Coleaux, 2010).  In Spain, researchers in computer science education have used the CHC taxonomy to analyze the components of the Computational Thinking Test (CTt; Roman-Gonzalez, Perez-Gonzalez, & Jimenez-Fernandez, 2016).  German intelligence research has also been influenced by the CHC model (e.g., see Baghaei & Tabatabaee, 2015), as best illustrated by its incorporation in the popular German-based Berlin Intelligence Structure (BIS) research literature (Beauducel, Brocke, & Liepmann, 2001; Süß & Beauducel, 2015; Vock, Preckel, & Holling, 201X).  Additionally, the Wuerzburger Psychologische Kurz-Diagnostik (WUEP-KD), a neuropsychological battery used in German-speaking countries, is grounded in the CHC model (Ottensmeier, Zimolong, Wolff, Ehrich, Galley, von Hoff, Kuehl, & Rutkowski, 2015).

Additional emerging CHC outposts in northern Europe include the Netherlands and Belgium.  Hurks and Bakker (2016) reviewed the influence of CHC theory, as well as the neuropsychological PASS theory, in an article providing a historical overview of intelligence testing efforts in the Netherlands.  A strong indicator of the growing interest in CHC theory was a CHC theory and assessment conference (New angles on intelligence! A closer look on the CHC-model) held at Thomas More University in Antwerp, Belgium, in February 2015.  Faculty at Thomas More University have developed a CHC assessment battery (CoVaT-CHC) for children in Flanders that measures the CHC domains of Gf, Gc, Gv, Gwm, and Gs. 

Transported via the Chunnel to the United Kingdom and Northern Ireland, the CHC flame has been lit but has not yet resulted in significant CHC test development.  In the 1990s, the WJ III author team was consulted to develop Irish norms for the WJ III.  One of the WJ III authors (Fred Schrank) visited and consulted at several universities in Ireland (University College Dublin, in particular) and continues to do so regarding the WJ IV (Fred Schrank, personal communication, March 2, 2017).  CHC theory is now the dominant cognitive taxonomy taught in psychology departments there (Trevor James, personal communication, March 3, 2017). 

Traveling to the Middle East, known CHC activities have been occurring in Jordan and Turkey.  Under the direction of Bashir Abu-Hamour (Abu-Hamour, 2014; Abu-Hamour, Hmouz, Mattar, & Muhaidat, 2012), the CHC-based WJ III has received considerable attention, and the WJ IV was recently translated, adapted, and nationally normed in Jordan (Abu-Hamour & Al-Hmouz, 2017).  In Turkey, the first national intelligence test (Anadolu-Sak Intelligence Scale; ASIS) was developed between 2015 and 2017.  Although the ASIS composite scores are not couched in the CHC nomenclature, the theories listed as the foundation for the Turkish ASIS are general intelligence, CHC, and PASS.  Additionally, I (Kevin McGrew) worked with two universities in 2016 on the preparation of government-sponsored grant proposals for additional national intelligence test development in Turkey, both of which proposed to use the CHC taxonomy. 

Pivoting toward Asia and the world “down under” reveals major CHC test development efforts.  Since the publication of the CHC-based WJ III, several key universities and an Australian publisher have delved deep into CHC theory and assessment.  Psychological Assessments Australia (PAA) has translated, adapted, and normed the CHC-based WJ III and WJ IV in Australia and New Zealand.  The Melbourne area has been a particular hot spot for CHC training and research.  Neuropsychologist and researcher Stephen Bowden and his students at the University of Melbourne have produced a series of multiple-sample confirmatory factor analysis studies with markers of CHC abilities to investigate the constructs measured by neuropsychological test batteries.  Monash University, initially under the direction of John Roodenburg and subsequently through his students, placed the CHC model at the core of its assessment course sequence and has influenced the infusion of the CHC framework into the assessment practices of Australian psychologists (James, Jacobs, & Roodenburg, 2015). 

Finally, one of the most ambitious CHC test development projects has been occurring in Indonesia since 2013.  Sponsored and directed by the Yayasan Dharma Bermakna Foundation (YDB), a nationally normed (over 4,000 individuals), individually administered, CHC-based battery of tests for school-age children (ages 5-18) is, at the time of this writing, nearing completion.  The AJT Cognitive Assessment Test (AJT-CAT) will be one of the most comprehensive individually administered tests of cognitive abilities in the world.  The AJT-CAT currently consists of 27 individual cognitive tests designed to measure 21 different narrow CHC abilities (plus two psychomotor tests to screen for motor difficulties), and preliminary confirmatory factor analyses indicated that the battery measures eight broad CHC cognitive domains (Gf, Gc, Gv, Gwm, Ga, Gs, Gl, Gr) and the Gp motor domain.[3]





[1] Thanks to Dr. Todd Fletcher for providing this information.

[2] Thanks to Laurie Ford and Damien Cormier for this information.


[3] Kevin McGrew has served as the CHC and applied psychometric expert consultant on this project and helped complete these preliminary structural analyses. 

Thursday, October 15, 2015

Evolution of the WJ to WJ IV GIA and CHC clusters


Click on image to enlarge

Long-time users of the various editions of the WJ cognitive battery (WJ, WJ-R, WJ III, WJ IV) know that the battery has continued to evolve over time.  Above is a portion of a large table that summarizes which tests are the same and which differ across editions in the GIA (g score) and broad CHC clusters.  The complete table demonstrates that the WJ has not remained static, with each new edition evolving as per research and theory.  

The practical benefit of the complete table comes when examiners want to compare similarly named cluster scores across different editions of the WJ; different scores may be due, in part, to different mixtures of tests in the clusters across editions.  I hope this is helpful.

The complete table can be downloaded here.  The table is adapted from a similar table in Cormier, D., McGrew, K., Bulut, O., & Funamoto, A. (2015). Exploring the relationships between broad Cattell-Horn-Carroll (CHC) cognitive abilities and reading achievement during the school-age years. Manuscript submitted for publication.

Saturday, June 07, 2014

More research on the validity of the C-LIM framework in intelligence testing

Another article adding to the Culture-Language Interpretive Matrix (C-LIM) research literature. Click on images to enlarge. A copy of the article can be found here.

"Conclusion

The primary conclusion drawn from this study and previous research is that linguistic demand is an important consideration when selecting and interpreting tests of cognitive abilities. The implications of this study go beyond a re-classification of the C-LIM to emphasizing one of the underlying motivations of the C-LIM's initial inception—the importance of considering a student's linguistic background and abilities prior to selecting, administering, and interpreting tests of cognitive abilities. A comprehensive evaluation that takes a student's linguistic ability into consideration should consider that a student's language ability (i.e., conversational proficiency) might not be an accurate representation of a student's academic language abilities (Cummins, 2008). Thus, it would be beneficial to gather information on a student's academic language ability, due to the relationship between education and IQ (Matarazzo & Herman, 1984). A student's receptive and expressive language abilities may be a worthwhile pursuit in future research, as student's level of conversational proficiency in the classroom may mislead educators and psychologists to assume that the student has been exposed to English with the same frequency and depth as his or her peers (Cummins, 2008). Moreover, as suggested by the results of this study, considering the influence of linguistic ability when assessing cognitive abilities should continue to be supported by empirical evidence, instead of school psychologists continuing to rely on informal measures of linguistic ability through language samples and student interviews to gain information on language ability (Ochoa, Galarza, & Gonzalez, 1996).

A second conclusion is that it is unclear how cultural loading can be represented quantitatively in a way that is meaningful both theoretically and practically. An important, albeit unanswered question is, "What variables do practitioners take into account when making decisions about the cultural influences that may affect the selection and interpretation of tests from cognitive batteries?" Flanagan and Ortiz (2001) define cultural loading as "the degree to which a given test requires specific knowledge of or experience with mainstream culture" (p. 243). However, this broad definition does not identify specific variables that practitioners may consider in practice to make these decisions about whether a student's experiences are significantly different from mainstream culture. Given these unanswered questions, it is possible that the underlying reasoning that led to the creation of the C-LIM and its categorization system needs to be re-thought (as also suggested by Styck & Watkins, 2013), particularly with respect to cultural loading. Specifically, it would be important to consider what is occurring and possible in practice, as this is the intended use of the C-LIM."

 

 

Wednesday, February 26, 2014

Woodcock-Johnson IV (WJ IV) NASP 2014 introduction and overview workshop slide shows

(Click on images to enlarge)

Last week I, together with Dr. Fred Schrank and Dr. Nancy Mather, unveiled the new Woodcock-Johnson IV battery at the National Association of School Psychologists (NASP) annual 2014 convention in Washington, DC.  We presented a three-hour introductory and overview workshop.  NASP members can download the handouts we provided at the NASP website.  It is my understanding that NASP will eventually provide access to a video of the workshop that will allow NASP members to view it and earn CEU credits (I am not 100% sure of this; check with NASP, don't email me).

Since the information we presented is now public, we three coauthors wish to provide access to our presentation to others.  The three presentation title slides are below.  Each is followed by a link to my SlideShare account (click this link if you want to see all three listed, as well as all my other PPT modules) where the slide shows can be viewed.  You will note that not all the slides presented at the workshop session are included, due to test security issues and the pre-publication nature of various technical information from the forthcoming technical manual.

Enjoy.  Also, as coauthors of the WJ IV, we all have a financial interest in the instrument.  A disclosure statement is present in Part 1 of the slides.  My individual conflict of interest disclosure statement can be found at the MindHub web portal.

Additional information can be found at the official WJ IV Riverside Publishing web page. 


 (Click here for Part 1)


 (Click here for Part 2)


 (Click here for Part 3)