
Monday, June 15, 2015

AAIDD chapters on intellectual functioning and the Flynn effect - overdue post

I just made this post over at the Intellectual Competence and Death Penalty blog and am repeating it here for interested readers.





It has been a long time since I've been able to devote time to any of my three professional blogs.  I have been unbelievably busy with travel and professional presentations.  In fact, I have been so busy that I failed to feature two of my own recent Atkins death-penalty-related book chapters, which appeared in the new AAIDD book "Determining Intellectual Disability in the Courts: Focus on Capital Cases."  I have made both chapters available via the MindHub web portal but do not believe I featured them at this blog (or at IQ's Corner).  One chapter deals with the assessment of intellectual functioning; the other deals with IQ test norm obsolescence (a.k.a. the Flynn effect).  The references (with links) are below.

McGrew, K. (2015a). Intellectual functioning. In E. Polloway (Ed.), Determining intellectual disability in the courts: Focus on capital cases (pp. 85-111). Washington, DC: American Association on Intellectual and Developmental Disabilities.

McGrew, K. (2015b). Norm obsolescence: The Flynn effect. In E. Polloway (Ed.), Determining intellectual disability in the courts: Focus on capital cases (pp. 155-169). Washington, DC: American Association on Intellectual and Developmental Disabilities.

Tuesday, July 29, 2014

Quotes to note: Importance of high-quality psychological testing to psychologists




I just read this nice statement at the beginning of the following article by Robert J. Ivnik, PhD, ABPP, Professor of Psychology, Mayo Clinic College of Medicine, Rochester, MN.

The only professional services that are uniquely psychology's are testing-based assessments. Every other service that psychology offers can be obtained from other professions. In light of testing's central importance to our profession, and considering the number of years that psychologists have been practicing, we assure that our tests are scientifically sound and have been validated for the purposes to which they are put (e.g., research proves that our tests make accurate predictions). Correct? After all, in today's health care environment would any profession knowingly expose its core service to potential attack?

Although testing-based assessments are psychology's defining feature, they may also be our profession's Achilles' heel. Unfortunately, the manner in which many tests have been developed, standardized, normed, and validated may be most kindly described as "varied" when it comes to scientific rigor. The science behind some of psychology's older and commercially successful tests tends to be stronger when some of the profit accrued by their sale is devoted to improving the tests. Lacking similar financial resources, many other tests have simply not been developed or validated very well.
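Ivnik's parenthetical, "research proves that our tests make accurate predictions," is at bottom an empirical claim that can be checked. As a toy illustration (mine, not Ivnik's), here is the simplest form of a criterion-related validity check in Python; all scores are invented:

```python
# Hypothetical criterion-related validity check: correlate test scores
# with an external criterion measure. All numbers are made up.

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

test_scores = [85, 92, 100, 104, 110, 118, 125]  # hypothetical IQ-metric scores
criterion   = [78, 88, 95, 101, 108, 115, 130]   # hypothetical outcome measure

print(f"validity coefficient r = {pearson_r(test_scores, criterion):.2f}")
```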



- Posted using BlogPress from my iPad

Saturday, April 24, 2010

Journal of Psych Assessment: New editor's focus on measurement



Kudos to Cecil Reynolds.  I like the new emphasis on basic measurement for the journal.

Reynolds, C. (2010). Measurement and assessment: An editorial view. Psychological Assessment, 22(1), 1-4.

If a thing exists, it can be measured. Measurement is a central component of assessment if we believe that fear, anxiety, intelligence, self-esteem, attention, and similar latent variables exist and are useful to us in developing an understanding of the human condition and leading us to ways to improve it. Much of what is published in Psychological Assessment deals with the development and the application of measurement devices of various sorts with the end goal of applications in assessment practice. What is submitted but not published largely deals with the same topics. As the new Editor writing the inaugural editorial, I am focusing on this topic for two major reasons. The first is that the most frequent reason why manuscripts are rejected in the peer-review process for Psychological Assessment and other high-quality journals devoted to clinical or neuropsychological assessment is inadequate attention to sound and high-quality measurement practices. The second reason is my surmise that measurement as a science is no longer taught with the rigor that characterized the earlier years of professional psychology. One of the tasks of Psychological Assessment is to promote a strong science of clinical assessment as practiced throughout professional psychology. To that end, I have attempted to pull together an eclectic group of Associate Editors and Consulting Editors. Our hope is to attract more and better manuscripts that deal with issues focusing on all aspects of clinical assessment.
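As a small worked example of the measurement fundamentals Reynolds has in mind (my illustration, not the editorial's), consider the classical standard error of measurement, which turns a reliability coefficient into a confidence band around an obtained score:

```python
import math

# Classical test theory: SEM = SD * sqrt(1 - reliability).
# The values below are illustrative, not from any specific test.

def sem(sd, reliability):
    """Standard error of measurement."""
    return sd * math.sqrt(1.0 - reliability)

sd, rxx, observed = 15.0, 0.90, 103   # IQ-metric SD, test reliability, obtained score
e = sem(sd, rxx)
lo, hi = observed - 1.96 * e, observed + 1.96 * e
print(f"SEM = {e:.1f}; 95% band = {lo:.0f} to {hi:.0f}")
```

A reviewer's question "what is the SEM of your measure?" is answerable only if reliability was estimated carefully, which is exactly the rigor Reynolds is calling for.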


Wednesday, July 01, 2009

Applied Psych Test Development Series: Parts D/E--Develop norm plan and calculate norms

The fourth and fifth modules in the series Art and Science of Applied Test Development are now available.

The fourth module (Part D--Develop norm [standardization] plan) is now available.

The fifth module (Part E--Calculate norms and derived scores) is also now available.


These are the fourth and fifth in a series of PPT modules explicating the development of psychological tests in the domain of cognitive ability using contemporary methods (e.g., theory-driven test specification; IRT-Rasch scaling; etc.). The presentations are intended to be conceptual and not statistical in nature. Feedback is appreciated.
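As a taste of what Parts D and E cover, here is a minimal conceptual sketch (mine, not code from the modules) of how a raw score becomes a norm-referenced derived score: a z-score against the norm group, then a standard score (M = 100, SD = 15) and a percentile rank:

```python
from statistics import mean, stdev, NormalDist

# Hypothetical norm-group raw scores and one examinee's raw score.
norm_group_raw = [22, 25, 27, 28, 30, 31, 33, 35, 36, 40]
raw = 34

z = (raw - mean(norm_group_raw)) / stdev(norm_group_raw)   # z relative to norm group
standard_score = 100 + 15 * z                              # IQ-metric standard score
percentile = NormalDist().cdf(z) * 100                     # assumes normality

print(f"z = {z:.2f}, SS = {standard_score:.0f}, PR = {percentile:.0f}")
```

Real norming programs use large stratified samples and continuous (smoothed) norms rather than a single raw mean and SD, but the underlying logic is the same.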

This project can be tracked on the left-side pane of the blog under the heading of Applied Test Development Series.

The first module (Part A: Planning, development frameworks & domain/test specification blueprints) was posted previously and is accessible via SlideShare.

The second module (Part B: Test and item development) was posted previously and is accessible via SlideShare.

The third module (Part C--Use of Rasch scaling technology) was posted previously and is accessible via SlideShare.
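For readers new to the Rasch scaling covered in Part C, the core of the model fits in a few lines. This is a generic sketch of the dichotomous Rasch model, not code from the slides:

```python
import math

# Dichotomous Rasch model: the probability of a correct response depends
# only on the difference between person ability (theta) and item
# difficulty (b), both expressed on the same logit scale.

def rasch_p(theta, b):
    """P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

for b in (-1.0, 0.0, 1.0):   # an easy, a medium, and a hard item
    print(f"item b = {b:+.1f}: P(correct | theta = 0) = {rasch_p(0.0, b):.2f}")
```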

You are STRONGLY encouraged to view the modules in order, as the concepts and graphic representations of concepts and ideas build on each other from start to finish.

Enjoy...more to come.


Friday, June 26, 2009

Applied Psych Test Development Series: Part B-Test and Item Development

The second module in the series Art and Science of Applied Test Development (Part B: Test and Item Development) is now available.

This is the second in a series of PPT modules explicating the development of psychological tests in the domain of cognitive ability using contemporary methods (e.g., theory-driven test specification; IRT-Rasch scaling; etc.). The presentations are intended to be conceptual and not statistical in nature. Feedback is appreciated.

This project can be tracked on the left-side pane of the blog under the heading of Applied Test Development Series.

The first module (Part A: Planning, development frameworks & domain/test specification blueprints) was posted previously and is accessible via SlideShare.

Enjoy...more to come.


Applied Psych Test Development Series: Part A-Planning, development frameworks & domain/test specification blueprints

Announcement--the Art and Science of Applied Test Development. Let the games begin.

This is the first in a series of PPT modules explicating the development of psychological tests in the domain of cognitive ability using contemporary methods (e.g., theory-driven test specification; IRT-Rasch scaling; etc.). The presentations are intended to be conceptual and not statistical in nature. Feedback is appreciated.

This project can be tracked on the left-side pane of the blog under the heading of Applied Test Development Series.

The first module (Part A: Planning, development frameworks & domain/test specification blueprints) is now available for viewing via SlideShare.

Stay tuned.


Monday, March 02, 2009

CHC selective referral-focused testing scenarios

I just posted Part B of the mini-skills workshop I presented at NASP 2009 in Boston (CHC COG-ACH Relations Research Synthesis: What We've Learned From 20 Years of Research) as an on-line viewable PPT at SlideShare, in my SlideShare space in particular. You should view the prior description of this project and presentation at the link above.

The second part of the presentation applies the results of the research synthesis by demonstrating "CHC selective (branching tree) referral-focused testing scenarios" that are grounded in the synthesis.  The direct link to the slide show can be accessed by clicking here.

The description included with the slide show follows:

This is the second half of a mini-skills workshop presented at NASP 2009 in Boston (CHC COG-ACH Relations Research Synthesis: What We've Learned From 20 Years of Research). The first half of this presentation is also available at Kevin McGrew's SlideShare space and is called "CHC Cog-Ach Relations Research Synthesis." The current module is an attempt to demonstrate selective (branching-tree) referral-focused testing scenarios based on the results of the CHC Cog-Ach relations research synthesis, using the WJ III battery as the illustrative instrument. The viewer should first view the CHC Cog-Ach Relations Research Synthesis module prior to viewing this module.
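To give non-viewers a flavor of what "branching tree" means here, the sketch below is a deliberately oversimplified, hypothetical illustration of the idea (referral concern in, candidate CHC follow-up abilities out); the actual scenarios in the slide show are far richer and are tied to specific WJ III tests:

```python
# Hypothetical branching logic for selective, referral-focused testing.
# The concern-to-ability mapping is illustrative only, not the synthesis.

FOLLOW_UP = {
    "basic reading":      ["Ga (phonetic coding)", "Gc (language development)", "Gs"],
    "math reasoning":     ["Gf (induction/RQ)", "Gc", "Gsm (working memory)"],
    "written expression": ["Gc", "Gs", "Gsm (working memory)"],
}

def plan_assessment(referral_concern):
    """Return the CHC abilities to probe first, given a referral concern."""
    return FOLLOW_UP.get(referral_concern.lower(), ["start with a full CHC battery"])

print(plan_assessment("Basic Reading"))
```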

Enjoy.  Feedback is appreciated.

Testing and reading skills

From BPS blog

http://bps-research-digest.blogspot.com/2009/03/txtng-associated-wiv-superior-reading.html



Saturday, February 28, 2009

WJ III Achievement tests Braille Adaptation & conference info.



FYI.  If you are interested in the assessment of school achievement for individuals with serious vision problems, check out the forthcoming Woodcock-Johnson Tests of Achievement: Braille Edition.  A workshop on the new adaptation is scheduled August 28-29 in Phoenix, AZ, sponsored by the American Printing House for the Blind and hosted by Desert Valley's Regional Cooperative.

[conflict of interest disclosure - I'm a coauthor of the WJ III Battery]

Wednesday, February 18, 2009

CHC COG-ACH research synthesis project: 1-18-09 update and revision


I just posted another update to the on-line PPT SlideShare show that presents my current interpretation of the results of a "CHC cognitive-achievement relations research synthesis" project that I've been working on.   The newest feature is the inclusion of a set of "cheat-sheet" summary slides to be used by assessment professionals to engage in more selective referral-focused cognitive assessments.  These research-to-practice summary slides (click here if you want to see an example) are intended to take the research synthesis results (the first 100 slides....yes...the show has 130 in total and is not yet finished) and make the results practical.

This presentation presents an update of the "CHC COG-ACH correlates research synthesis" project described and hosted at IQ's Corner and IAP. The viewer should first read the background materials regarding this project at these sites (how to access them is included in the first slide). The results summarized in this on-line show are part of a manuscript in preparation with Barb Wendling and will also serve as the foundation for a mini-skills workshop at the 2009 NASP conference in Boston.

Revisit IQ's Corner to keep abreast of updates.



Saturday, February 14, 2009

Australian Psychologist special issue on culture, language, and cognitive assessment



The March 2009 issue of the Australian Psychologist (44-1) is a special issue dealing with cultural and language issues in cognitive assessment in Australia. The opening editorial by Stolk (2009) is "Approaches to the influence of culture and language on cognitive assessment instruments: The Australian context."


Assessing cognitive impairment in indigenous Australians: Kimberley instrument

For my friends "down under," an Australian Psychologist article by Smith et al. (2009): Assessing cognitive impairment in Indigenous Australians: Re-evaluation of the Kimberley Indigenous Cognitive Assessment in Western Australia and the Northern Territory.


Abstract

  • The Kimberley Indigenous Cognitive Assessment (KICA) was initially developed and validated as a culturally appropriate dementia screening tool for older Indigenous people living in the Kimberley. This paper describes the re-evaluation of the psychometric properties of the cognitive section (KICA-Cog) of this tool in two different populations, including a Northern Territory sample and a larger population-based cohort from the Kimberley. In both populations, participants were evaluated on the KICA-Cog tool and independently assessed by expert clinical raters blinded to the KICA scores, to determine the validity and reliability of dementia diagnosis for both groups. Community consultation, feedback, and education were integral parts of the research. For the Northern Territory sample, 52 participants were selected primarily through health services. Sensitivity was 82.4% and specificity was 87.5% for diagnosis of dementia, with an area under the curve (AUC) of .95, based on a cut-off score of 31/32 of a possible 39. For the Kimberley sample, 363 participants from multiple communities formed part of a prevalence study of dementia. Sensitivity was 93.3% and specificity was 98.4% for a cut-off score of 33/34, with AUC = .98 (95% confidence interval: 0.97-0.99). There was no education bias found. The KICA-Cog appears to be most reliable at a cut-off of 33/39.
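For readers rusty on screening statistics, the sensitivity and specificity figures above are simple ratios from a 2x2 table of screening decisions against expert diagnoses. The counts below are invented for illustration; they are not the study's data:

```python
# Sensitivity = TP / (TP + FN): proportion of true cases the screen catches.
# Specificity = TN / (TN + FP): proportion of non-cases the screen clears.

def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical: 100 people screened at some cut-off, 20 of whom have dementia.
sensitivity, specificity = sens_spec(tp=18, fn=2, tn=78, fp=2)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```

Moving the cut-off trades one statistic against the other, which is why the abstract reports different sensitivity/specificity pairs at cut-offs of 31/32 and 33/34.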

Friday, February 06, 2009

LD, RTI, cognitive testing: AGORA course

Interested in expert opinions on the interaction of LD identification, RTI, and the role of cognitive testing? Check out the AGORA multi-media course. I participated as one of the talking heads but receive no royalties [I got a small honorarium]. A copy of the flyer can be found by clicking here. The description is below.


  • Attached is a flyer describing a multi-media course on SLD identification that includes coverage of both RTI and Comprehensive Assessment and that culminates in seven best-practices principles based on current research. It is 6 hours long and is meant to be delivered in either two 3-hour half-day sessions or one 6-hour full-day session. It is intended to be purchased by districts and to be delivered by someone in-district (anyone who volunteers, as no special skill or knowledge base is necessary to be a facilitator). Continuing education credits may be obtained after completing the course. Note that some districts cannot afford to send school personnel to conferences, and many districts cannot afford to bring speakers in due to significant budget cuts. This course is very cost-effective and can serve to train many professionals in-house for a fraction of what it would cost to send them to conferences or bring in a speaker. Finally, this professional development program will enhance any course in assessment or Specific Learning Disability (SLD) identification in particular, because students are exposed to the research and viewpoints of many leaders on all sides of the SLD identification controversies.

Thursday, January 22, 2009

iAbstract: Data-based test analytic framework

From the latest issue of Psychological Methods.

See two prior posts for more info on this feature.

Monday, August 25, 2008

Early childhood assessment: National Academies Press pre-pub announcement

The National Academies Press has just announced pre-publication access to Early Childhood Assessment: Why, What and How? Electronic and hard-copy versions can be ordered now. In addition, if you don't mind reading PDF files on your computer, you can read the entire text for free on-line. A free copy of the Executive Summary is available (click here).

Tuesday, July 15, 2008

Dr. Richard Woodcock Neuropsych Keynote slide show: Evolution of Cognitive Assessments

A copy of Dr. Richard Woodcock's Keynote address (The Evolution of Cognitive Assessments) at the Third National School Psychology Neuropsychology conference (Dallas, July 10, 2008) is now available for viewing as an on-line PPT show (that can also be downloaded). Click here to go to my SlideShare site where this is now available. The slides do not contain everything Dr. Woodcock presented as certain copy-protected information was removed prior to posting. Also, his personal comments/stories about the serendipitous events that brought him to the field of psychoeducational test development are not included in the slides.


Friday, February 15, 2008

CHC interpretation of the KABC-II: Guest post by John Garruto


The following is a guest post by John Garruto, school psychologist with the Oswego School District and member of the IQ's Corner Virtual Community of Scholars. John reviewed the following article and has provided his comments below. [Blog dictator note - John's review is presented "as is" with only a few minor copy edits by the blog dictator and the insertion of some URL links]


Reynolds, M.R., Keith, T.Z., Goldenring-Fine, J., Fisher, M.E. & Low, J.A. (2007). Confirmatory Factor Structure of the Kaufman Assessment Battery for Children—Second Edition: Consistency With Cattell-Horn-Carroll Theory. School Psychology Quarterly, 22(4), 511-539. [click here to view article]


Abstract:
  • The Kaufman Assessment Battery for Children-Second Edition (KABC-II) is a departure from the original KABC in that it allows for interpretation via two theoretical models of intelligence. This study had two purposes: to determine whether the KABC-II measures the same constructs across ages and to investigate whether those constructs are consistent with Cattell-Horn-Carroll (CHC) theory. Multiple-sample analyses were used to test for equality of the variance-covariance matrices across the 3- to 18-year-old sample. Higher-order confirmatory factor analyses were used to compare the KABC-II model with rival CHC models for children ages 6 to 18. Results show that the KABC-II measures the same constructs across all ages. The KABC-II factor structure for school-age children is aligned closely with five broad abilities from CHC theory, although some inconsistencies were found. Models without time bonuses fit better than those with time bonuses. The results provide support for the construct validity of the KABC-II. Additional research is needed to more completely understand the measurement of fluid reasoning and the role of time bonuses on some tasks.
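Since much of the abstract turns on comparisons of model fit, a quick sketch of one common fit index may orient non-quant readers. The chi-square, df, and N values below are made up; they are not figures from Reynolds et al.:

```python
import math

# RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1))).
# By convention, values near .05 or below suggest close model fit.

def rmsea(chi2, df, n):
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(f"RMSEA = {rmsea(chi2=450.0, df=180, n=2025):.3f}")  # illustrative values only
```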
Okay, I have to tie in the Super Bowl somewhere because the New York Giants won and I waited seventeen years for this to happen again. Like the Super Bowl, there are many practitioners who are interested in the end results (the score at the end of the game), not necessarily how one got there (the whole study) or the analysis of each play (the statistics). For the ease of readers, I’m going to jump to the score at the end of the game.

The Results: The K-ABC II is emerging as a serious contender among cognitive assessment batteries. I also want to say that, from reading his posts on the CHC listserv and now this article, I'm expecting to see more good stuff from Matthew Reynolds (I've always been a fan of Tim Keith's research and really like his stuff on the WJ-III).

This study seeks to analyze the K-ABC II from a CHC perspective, which is one of the two theories the test is built upon (the other is the Luria-Das perspective, in which the original K-ABC has its origins). It's worth mentioning that Kaufman is not new to CHC theory: the Kaufman Adolescent and Adult Intelligence Test (KAIT) also used Gf-Gc theory as its basis. The current study is an internal validity study using factor analytic methods. Several analyses were performed to determine best model fit under certain manipulations of the analysis.

First, it bears mentioning that the tests' g (general intelligence) loadings are consistent with prior research (Gf and Gc being high g loaders; interestingly, Gv as well!). Reynolds et al. sought to answer some interesting hypotheses regarding model fit as well as cross-factor test loadings. Here are some questions posed and answers provided:

  • Does Gestalt Closure measure Gc? The test requires subjects to look at “inkblots” (Gv) that resemble familiar objects (Gc). It was concluded that there was a Gc load on this subtest. My own thoughts…it might be neat to show the child a list of objects that represented the stimuli after the assessment is complete. From there, one could rule in or rule out Gc contamination. Nevertheless, the Gc load is important because Gv is often thought to be (or supposed to be) an area where less “acquired cultural knowledge” should impact performance.
  • Do Hand Movements measure Gf? This subtest requires a pantomime of repeated hand movements and is purported to load on Gsm. The authors note a relationship to Gf. The hypothesis generated relates to strategy for success and working memory. My own thoughts…why isn't anyone talking Gv? Sure, this test requires motor planning (frontal activation?), but I argue that success can result from remembering a visual sequence. Furthermore, Gsm has often been related to verbal prompts/the auditory modality. Although the intertwining of working memory and fluid reasoning has been discussed…I'm not sure I see a huge component of either. The task appears very sequential to me. The visualization component is too hard to ignore. Given the lower load on Gsm, I would be interested in looking at a Gv link.
  • Does Pattern Reasoning measure Gv? The analysis suggested loadings on Gv as well as Gf. I have lately found this link to be of interest. It seems pure measures of Gf are hard to find. Sometimes the comparison on the WISC-IV of Picture Concepts (Gf-I) and Matrix Reasoning (Gf-I) is interesting, given that I see a major discrepancy in scores. Indeed, the former requires more Gc and the latter more Gv, especially if transformation of the stimuli is required in order to logically complete the puzzles. This becomes even more important if g = Gf...as some scholars have suggested. Under what presentation conditions, then, is Gf (g?) more successful? Could the learning modality trend be returning? (Just kidding-not touching that one!)
  • Does Story Completion measure Gc or Gv? The analysis suggested that the answer is "no." Story Completion appears to be a measure of Gf. It's interesting because I remember reading that a similar test on the WISC-III (Picture Arrangement) had similar loadings on VCI and POI. I might have thought that there would be more of a Gc load on Story Completion than on Gestalt Closure.....but then Gc also requires verbal recall of names, whereas this requires logical sequencing ability. I imagine there's probably some Gc necessary, but not enough that having a lot of it will predict success (or too little will predict failure).
  • Do Rover and Block Counting measure Gf as well as Gv? The analysis suggested…definitely. Gv for Block Counting (which I would intuitively agree with), and the jury is still out for Rover. The deductive reasoning element with Rover is certainly apparent...but I think it's important not to forget that Rover has some executive function elements to it (it's not unlike Planning on the WJ-III). Right now, though, it seems Gv is present for Block Counting and Rover.
  • The issue of time bonuses: This research question was very important to me. I recall giving this battery to someone and finding out that the difference on one of the tests (whether timed bonuses were provided or not) resulted in a scaled score difference of almost two standard deviations! I followed the manual, which endorses using timed points unless there's a reason not to. However, Reynolds et al. found a better model fit with no time bonus. This is not bad news. Sometimes we learn different practices after a test has been normed and published. I remember Kaufman's book on the WISC-III, where he indicated Symbol Search to be a higher 'g' loader than Coding, and that the informed practitioner may wish to have Symbol Search substitute for Coding as long as the decision was made in an a priori fashion. I guess for me, I do not want Gs contaminating a different factor I'm attempting to measure. I prefer to measure it far away from 'g' via a cross-battery technique, given that Gs has shown weaker relationships to 'g' but significant relationships to learning disabilities. Sometimes we learn new ways to practice as a result of follow-up research-this certainly fits that mold.
Overall Conclusion: I think the K-ABC II is going somewhere. It is receiving some interesting recognition by scholars and is even purported to have some utility with nonverbal, non-English-speaking, and/or autistic spectrum populations. Given the potential of this instrument in cognitive assessment, the research opportunities are certainly plentiful. I still see the Wechsler as my test of choice for Gc, and the WJ-III as my favorite test to fill in the holes left by many cognitive batteries...but there certainly seem to be significant practical implications for the K-ABC II. Certainly the relationships to CHC theory are again very much substantiated. Certainly there are plenty of Patriots who want to view assessment from a traditional framework. Also, like the Patriots, there are those who are jumping on the loudest flavor of the month (RTI as the only way to diagnose learning disabilities). However, the Reynolds et al. study continues to show that CHC theory has stood solid time and again as one of the "Giant" individual differences frameworks for use by school psychologists.



Wednesday, January 16, 2008

Gs (speed) invaders computerized game test

I just skimmed, with considerable interest, an article by McPherson and Burns (2007; Behavior Research Methods) that demonstrated a unique use of the CHC theory of cognitive abilities framework and validated markers of CHC abilities (viz., Gv and Gs).

Briefly, the authors developed a computer game-like test designed to assess cognitive processing speed (Gs) as defined by CHC theory. Although the two reported studies are based on small samples of undergraduate psychology students, the work demonstrates how contemporary CHC theory and research can be used as a framework to develop computer game-like measures of CHC abilities.

Cool.



Tuesday, November 27, 2007

WJ III Pair Cancellation test as measure of vigilance (sustained attention)

As a result of my recent "harvesting" of various unpublished CHC/WJ III-related theses and dissertations, my interest in the WJ III Pair Cancellation (PC) test has been rekindled. Since the WJ III was first published, I've maintained that the Pair Cancellation test is a good measure of sustained attention or vigilance, an aspect of executive functioning. Unfortunately, we (the WJ III authors) did not report much in the way of special validity studies in support of this interpretation.

[Conflict of interest note - I am a coauthor of the WJ III]

The purpose of this post is to share my recent thinking re: the WJ III PC test. My bottom-line conclusion - I still believe that the WJ III Pair Cancellation test is an underappreciated test in the WJ III battery. Because Pair Cancellation's administration is not required to obtain any of the primary cognitive clusters (General Intellectual Ability; CHC factor clusters), it is a test that is often ignored (not administered). I think practitioners need to pay closer attention to the potential of this test, particularly when issues of vigilance, ADHD, and executive functions are prominent in a referral for assessment. On what basis do I make this recommendation?

First, let's start with a description of the task. In the WJ III Pair Cancellation task, a subject is presented with rows containing repeating pictures of a dog and a ball (in no particular sequence) and must circle every instance where the ball is followed by the dog. The test has a three-minute time limit. Thus, a subject must locate and mark a repeated pattern of pictures while simultaneously controlling for interference from potentially distracting information (i.e., demonstrate good inhibition).
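For the concretely minded, the target-detection logic of the task can be sketched in a few lines. This is a toy of my own; the WJ III's actual administration and scoring rules are in the manual:

```python
# Toy version of the Pair Cancellation target rule: count positions in a
# row of pictures where a ball is immediately followed by a dog.

def count_targets(row):
    """Number of adjacent ("ball", "dog") pairs in a row of pictures."""
    return sum(1 for a, b in zip(row, row[1:]) if (a, b) == ("ball", "dog"))

row = ["dog", "ball", "dog", "ball", "ball", "dog", "dog", "ball"]
print(f"targets present in row: {count_targets(row)}")   # -> 2
```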

Second, let's consider the CHC basics for Pair Cancellation. As per my most recent CHC classification of all WJ III tests, Pair Cancellation (based on the published CFA analyses in the WJ III Technical Manual) is clearly a speeded test (Gs). Original logical narrow-ability content analysis suggested a classification as a measure of P (perceptual speed) and/or AC (sustained attention/concentration). Using Ackerman and colleagues' recent fine-grained analysis of perceptual speed measures (which suggests that perceptual speed may be an intermediate-stratum ability between narrow and broad abilities, defined by four narrow sub-abilities), the Pair Cancellation test might better be considered a measure of "complex perceptual speed" (Pc), which is the "ability to perform visual pattern recognition tasks that impose additional cognitive demands such as spatial visualization, estimating and interpolating, and heightened memory span loads."

Third, in a study of 39 subjects (21 with ADHD; 18 controls), Poock (2005) reported that the Pair Cancellation test, along with the Concept Formation and Auditory Working Memory tests, reliably differentiated ADHD and non-ADHD subjects.

Fourth, there is a rich base of neuropsychological literature that has demonstrated that various "cancellation tasks" are good measures of sustained attention or vigilance. Borrowing from Brawn's (2007) review of the literature:

  • Cancellation Tasks (CTs) are the immediate antecedents to CPTs [continuous performance tests]. Indeed, some researchers refer to CTs as "paper-and-pencil" CPTs (e.g., Barkley, 1998). They assess "...visual selectivity at a fast speed on a repetitive motor response task" (Lezak, 1995, p. 548) by requiring that a subject rapidly scan printed rows of digits, letters, symbols, or pictures in order to mark pre-specified targets interspersed throughout the array.
  • Cancellation Tasks have been demonstrated to be sensitive to response slowing and inattentiveness as a function of diffuse cerebral damage or acute brain conditions, and, like CPTs, they are classified as basic vigilance tests (Lezak, 1995). However, of the two, CPTs may be the purer measure of vigilance. Cancellation tasks require the subject to use a pencil, as well as to quickly and accurately scan rows of printed stimuli; thus, performance relies substantially on motor processing, visual-motor integration, and subject-driven visual scanning (Lezak, 1995; Wechsler, 1997b; Woodcock et al., 2001).

Interested readers may wish to check out the recent "meta-search" I completed (and posted) re: the cancellation task assessment paradigm.

Finally, in a previously reported "Carroll analysis" of the complete WJ III battery, Pair Cancellation was found to be the strongest-loading test on the broad "cognitive" processing speed (Gsc) factor [this analysis also produced a broad "achievement" processing speed factor, Gsa]. In my opinion, this is consistent with the Ackerman-based classification of Pair Cancellation as a measure of complex perceptual speed (Pc).

