Showing posts with label RTI.

Wednesday, August 18, 2010

Hale et al (2010) expert consensus specific learning disabilities (SLD) LDQ article

At Brad Hale's request (he has been spending lots of time emailing people copies of this paper and he needs a break), the final version of the Hale et al. (2010) expert consensus white paper on "Critical issues in response-to-intervention, comprehensive evaluation, and specific learning disabilities identification and intervention," published in Learning Disability Quarterly, can now be accessed by clicking here.




Tuesday, August 17, 2010

Reading fluency and reading LD/dyslexia: Guest post by John DeMann

The following is a guest blog post by John J. DeMann, NCSP, School Psychologist, North Allegheny School District (guest posts were previously called "virtual scholar" posts at this blog). John took advantage of my standing offer to readers of my blogs to receive a PDF copy of any article I mention in a research brief (or byte) or in a recent "IQs Corner Recent Literature of Interest" post. I know that many practitioners do not have access to journals, so if a person volunteers to make a brief written post, I'm willing to send them a PDF copy of the article in exchange for the post.

This feature benefits all readers, as the post provides added value and commentary, which in turn allows me to provide a link to the full article (via the "fair use" doctrine, especially for educational purposes) for all to read. So it is a win-win, "help your colleagues" type of exchange program.

John's post is very well written and provides a nice overview of the article along with some stimulating ideas and thoughts. Thanks, John. His post is reproduced below "as is" (save minor copy edits and the addition of URL links by the blogmaster). If you are considering a guest post, don't think your post has to be as long as John's. Individual differences in guest posting are valued and recognized.

Recently, increased interest in reading fluency has emerged in both the professional literature and applied practice. Oral reading fluency is typically the outcome variable by which response to intervention (RTI) models are evaluated, and it is usually measured by a child's rate and accuracy (words correct per minute) when reading connected text. With the ubiquity of interventions targeting core phonological awareness deficits, attention has shifted to other cognitive variables that influence reading development beyond single-word reading and decoding difficulties. Although traditional assessment and definitions of dyslexia focus on single-word reading and decoding deficits, difficulty with reading fluency has been increasingly recognized as an important characteristic of individuals with dyslexia. For example, the recent reauthorization of the Individuals with Disabilities Education Improvement Act (IDEA, 2004) now recognizes reading fluency as one of the eight areas of specific learning disability. More recent conceptualizations of the term dyslexia also include references to fluency as an area of difficulty experienced by individuals with dyslexia. Further, the authors of the forthcoming revision to the Diagnostic and Statistical Manual of Mental Disorders (5th edition) are proposing a revised definition of dyslexia that includes difficulties in accuracy or fluency.

This increased attention to fluency as an important aspect of reading may be the result of fluency being recognized as an important contributor to the overall goal of reading: comprehension. Reading fluency is essential for a child's academic success, as dysfluent reading is likely to significantly interfere with reading comprehension and thereby hamper the learning of content-area knowledge. Although intervention research has established reading fluency's importance in developing overall reading skills, more work is needed to explore dyslexia characterized primarily by a lack of fluency and to gain consensus regarding disability subtypes and the cognitive components of fluency.
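[Blogmaster note: for readers new to the metric, below is a minimal sketch of the words-correct-per-minute (WCPM) calculation John describes above. The probe lengths, error counts, and the use of the median of three probes are illustrative assumptions, not any particular test's scoring rules.]

```python
# Minimal sketch of the standard oral reading fluency metric:
# words correct per minute (WCPM) = (words attempted - errors) / minutes.
# The probe data below are hypothetical, for illustration only.

def words_correct_per_minute(words_attempted: int, errors: int, seconds: float) -> float:
    """Rate-and-accuracy fluency score from a timed oral reading probe."""
    return (words_attempted - errors) / (seconds / 60.0)

# Three hypothetical 60-second probes for one student; taking the median
# of several probes is a common way to reduce passage-difficulty noise.
probes = [(118, 6, 60), (131, 4, 60), (124, 9, 60)]
scores = sorted(words_correct_per_minute(w, e, s) for w, e, s in probes)
print(f"WCPM per probe: {scores}, median = {scores[1]:.0f}")
```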

Meisinger et al.'s article, "Reading Fluency: Implications for the Assessment of Children with Reading Disabilities" (Annals of Dyslexia, 2010, 60, 1-17), establishes an argument for the importance of fluency as an overall indicator of reading ability and stresses the importance of including standardized measures of fluency when conducting comprehensive assessments. In the current age of formative assessment and response-to-treatment models dominating the school psychology landscape, these authors argue that reliable and valid measures of fluency may be an overlooked aspect of assessment given the shortcomings of many assessment instruments. They argue that many common instruments that measure reading skills include measures of word reading, decoding, and comprehension, but seldom include measures of reading fluency. Additionally, they point out the inconsistency in how reading fluency is defined by various tests. For example, the Reading Fluency subtest from the Woodcock-Johnson Tests of Achievement - Third Edition (WJ-III ACH) measures an individual's ability to quickly read simple statements and decide whether they are accurate (i.e., it includes comprehension), whereas other measures characterize fluency as an individual's ability to quickly and accurately read larger blocks of text (e.g., the GORT-4). Regardless of how fluency is measured, Meisinger et al. caution that the omission of fluency from the assessment of an individual's reading skills may have important implications for diagnostic decision making. They reference recent research suggesting that word reading and reading fluency are distinct skills that each make unique contributions to an individual's reading comprehension. Therefore, evaluations that do not include measures of reading fluency may lead to erroneous or misleading conclusions regarding an individual's reading abilities.

As a result of this significant problem, Meisinger et al. chose to examine the diagnostic utility of reading fluency in identifying children with reading disabilities by (a) determining whether there are children who have typically developing word identification and decoding skills but show specific deficits in reading fluency; (b) examining which cognitive features differentiate children with specific reading fluency deficits from struggling and normal readers; and (c) investigating whether the omission of reading fluency from the assessment of children would result in the under-identification of children with reading disabilities. The results of their study suggest:

* reading fluency measures are more sensitive in detecting reading problems than word reading measures
* it is essential to evaluate reading fluency when assessing children referred for reading difficulties; failure to do so may result in the under-identification of children with reading disabilities
* results support the identification of a subgroup of children who exhibit specific deficits in reading fluency without concordant deficits in single-word reading in isolation or in decoding unknown words (consistent with "double-deficit" reading disability subtypes)
* RAN (rapid automatized naming) is an underlying process that plays an important role in determining the rate at which children read connected text
* compared to children with normal reading skills, children with deficits in reading fluency were characterized by deficits in rapid naming speed but not in phonological processing

As the authors suggest, these results have important implications for practitioners: psycho-educational assessment that does not include measures of reading fluency is at risk of under-identifying children who would otherwise be classified as reading disabled. These results also support the need for an increased focus on interventions that improve reading skills beyond the single-word level.

In reviewing this article, a few criticisms/caveats are worth considering. The authors indicate that a comprehensive, standardized test measuring word reading, decoding, fluency, and comprehension does not exist, making a cross-battery approach necessary to measure all of the variables in this study. Therefore, as the authors suggest, differences in test characteristics could account for the observed differences in performance on these measures. Although the WJ-III measures all aspects of reading used in their study, they chose to use a measure of fluency that aligns with more current definitions (e.g., the National Reading Panel's). It might be interesting to see how these tests choose to conceptualize fluency in future revisions. The new WIAT-III (which wasn't released until after this study was submitted for review) defines fluency much like the GORT-4 and benefits from being a comprehensive, co-normed battery. A replication of this study using the WIAT-III norming sample could mitigate the sampling and testing error differences reported in this study and determine whether these results generalize to a larger normative sample; the sample used in this study was drawn from a largely white, clinic-referred population of children previously diagnosed with a reading disability or suspected of having reading problems.

Lastly, the authors suggest that their results should be replicated and expanded upon by exploring other potentially important variables that may contribute to reading fluency performance. For example, working memory is offered as another potentially important cognitive variable that could be included in this model to predict variance in reading fluency performance. Despite the evidence that RAN is an underlying process that plays an important role in identifying reading difficulties, our understanding of why children with reading problems display these deficits is still limited. From a CHC perspective, RAN tasks share both cognitive speediness (Gs) and naming/retrieval (Glr) performance aspects; another question that remains after this study is whether RAN deficits represent generally slow processing speed (Gs) or a slowness specific to letters/numbers that hampers the development of fluent reading.
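[Blogmaster note: one empirical way to probe that last question is a hierarchical regression testing whether RAN adds variance in predicting fluency beyond a general speed (Gs) measure. Below is a minimal sketch with simulated data; the variable names and effect sizes are arbitrary assumptions for illustration, not results from the study.]

```python
# Hierarchical regression sketch: does RAN explain variance in reading
# fluency beyond general processing speed (Gs)? Data are simulated; the
# effect sizes are arbitrary assumptions chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
gs = rng.normal(size=n)                      # general cognitive speediness
ran = 0.5 * gs + rng.normal(size=n)          # rapid naming, partly speed-driven
fluency = 0.3 * gs + 0.4 * ran + rng.normal(size=n)

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R-squared from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gs = r_squared(gs.reshape(-1, 1), fluency)                 # step 1: Gs only
r2_full = r_squared(np.column_stack([gs, ran]), fluency)      # step 2: add RAN
print(f"R2 (Gs only) = {r2_gs:.3f}; incremental R2 for RAN = {r2_full - r2_gs:.3f}")
```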

It is apparent that reading fluency represents a largely under-studied area of reading research that may be a key area of assessment for children who experience reading problems. Most importantly, assessment practices that include standardized fluency measures may help differentiate intervention for students who experience difficulty developing fluency beyond word-identification skills.



Friday, July 16, 2010

New SLD identification model: Wayne County uses CHC theory for cognitive component

Many school districts are working to implement the new IDEA law and regulations regarding SLD identification in the context of a tier-based Response to Intervention (RTI) model.  Today I learned of one school system (Wayne County) that uses the Cattell-Horn-Carroll (CHC) model for the cognitive strengths and weaknesses component.  Information regarding their system can be found at their web page.

I would be interested in hearing of other school systems that have organized the cognitive pattern of strengths and weaknesses SLD component around the CHC model.  Contact me at iap@earthlink.net if you have something to share that I could pass along to IQs Corner readers.





Friday, May 15, 2009

The promise of CHC theory of intelligence

Combine the past 20 years of CHC-driven intelligence test development and research activities (click here and here) with the ongoing refinement and extension of CHC theory (McGrew, 2005; 2009) and one concludes that these are exciting times in the field of intelligence testing. But is this excitement warranted in school psychology? Has the drawing of a reasonably circumscribed “holy grail” taxonomy of cognitive abilities led us to the promised land of intelligence testing in the schools—using the results of cognitive assessments to better the education of children with special needs? Or have we simply become more sophisticated in the range of measures and tools used to “sink shafts at more critical points” in the mind (see Lubinski, 2000), which, although important for understanding and studying human individual differences, fails to improve diagnosis, classification, and instruction in education?
It is an interesting coincidence that McDermott, Fantuzzo, and Glutting’s (1990) now infamous and catchy admonition to psychologists who administer intelligence tests to “just say no to subtest analysis” occurred almost 20 years ago—the time when contemporary CHC intelligence theory and assessment was emerging. By 1990, McDermott and colleagues had convincingly demonstrated, largely via core profile analysis of the then-current Wechsler trilogy of batteries (WPPSI, WISC-R, WAIS-R), that ipsative strength and weakness interpretation of subtest profiles was not psychometrically sound. In essence, “beyond g (full scale IQ)—don’t bother.”
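To make the target of that critique concrete: ipsative analysis compares each subtest score with the examinee's own subtest mean, flagging relative strengths and weaknesses. A minimal sketch follows (hypothetical scaled scores; the 1.5-point flag threshold is an arbitrary illustration). The point of McDermott et al.'s critique was that such difference scores are too unreliable to interpret, not that the arithmetic is hard.

```python
# What "ipsative" subtest analysis means: each subtest score is compared
# with the examinee's own subtest mean, flagging relative strengths and
# weaknesses. Scores below are hypothetical scaled scores (mean 10, SD 3);
# the +/-1.5-point flagging rule is an arbitrary illustration.
scores = {"Vocabulary": 12, "Block Design": 7, "Digit Span": 9, "Coding": 8}

personal_mean = sum(scores.values()) / len(scores)
for subtest, s in scores.items():
    deviation = s - personal_mean
    label = ("strength" if deviation >= 1.5
             else "weakness" if deviation <= -1.5
             else "average")
    print(f"{subtest:12s} {s:3d}  deviation {deviation:+.1f}  -> {label}")
```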
I believe that optimism is appropriate regarding the educational relevance of CHC-driven test development and research. Surprisingly, cautious optimism has been voiced by prominent school psychology critics of intelligence testing. In a review of the WJ-R, Ysseldyke (1990) described the WJ-R as representing “a significant milestone in the applied measurement of intellectual abilities” (p. 274). More importantly, Ysseldyke indicated he was “excited about a number of possibilities for use of the WJ-R in empirical investigations of important issues in psychology, education, and, specifically, in special education…we may now be able to investigate the extent to which knowledge of pupil performance on the various factors is prescriptively predictive of relative success in school. That is, we may now begin to address treatment relevance” (p. 273). Reschly (1997), in response to the first CHC-based cognitive-achievement causal modeling research report (McGrew, Flanagan, Keith & Vanderwood, 1997), which demonstrated that some specific CHC abilities are important in understanding reading and math achievement above and beyond the effect of general intelligence (g), concluded that “the arguments were fairly convincing regarding the need to reconsider the specific versus general abilities conclusions. Clearly, some specific abilities appear to have potential for improving individual diagnoses. Note, however, that it is potential that has been demonstrated” (Reschly, 1997, p. 238).
Clearly the potential and promise of improved intelligence testing, vis-à-vis CHC-organized test batteries, has been recognized since 1989. But has this promise been realized during the past 20 years? Has our measurement of CHC abilities improved? Has CHC-based cognitive assessment provided a better understanding of the relations between specific cognitive abilities and school achievement? Has it improved identification and classification? More importantly, in the current educational climate, where does CHC-grounded intelligence testing fit within the context of the emerging Response-to-Intervention (RTI) paradigm?
An attempt to answer these questions is forthcoming in a manuscript submitted for publication (McGrew & Wendling, 2009), as well as in a revision of the CHC COG-ACH Relations Research Synthesis Project available at IQs Corner (warning: the currently posted material is outdated and does not reflect the final conclusions of the McGrew & Wendling, 2009, review; it is being revised and will be posted soon). Stay tuned to the IQs Corner blog or announcements via the NASP and CHC listservs.

Click here for other posts in this series.

Wednesday, February 18, 2009

CHC COG-ACH research synthesis project: 1-18-09 update and revision


I just posted another update to the on-line PPT SlideShare show that presents my current interpretation of the results of a "CHC cognitive-achievement relations research synthesis" project that I've been working on.   The newest feature is the inclusion of a set of "cheat-sheet" summary slides to be used by assessment professionals to engage in more selective referral-focused cognitive assessments.  These research-to-practice summary slides (click here if you want to see an example) are intended to take the research synthesis results (the first 100 slides....yes...the show has 130 in total and is not yet finished) and make the results practical.

This presentation presents an update of the "CHC COG-ACH correlates research synthesis" project described and hosted at IQ's Corner and IAP. The viewer should first read the background materials regarding this project at these sites (how to access them is also included in the first slide). The results summarized in this on-line show are part of a manuscript in preparation with Barb Wendling and will also serve as the foundation for a mini-skills workshop at the 2009 NASP conference in Boston.

Revisit IQ's Corner to keep abreast of updates.



Friday, February 06, 2009

LD, RTI, cognitive testing: AGORA course

Interested in expert opinions on the interaction of LD identification, RTI, and the role of cognitive testing? Check out the AGORA multi-media course. I participated as one of the talking heads, but receive no royalties [I did receive a small honorarium]. A copy of the flyer can be found by clicking here. A description is below.


  • Attached is a flyer describing a multi-media course on SLD identification that includes coverage of both RTI and comprehensive assessment and that culminates in seven best-practice principles based on current research. It is 6 hours long and is meant to be delivered either as two 3-hour half-day sessions or as one 6-hour full-day session. It is intended to be purchased by districts and delivered by someone in-district (anyone who volunteers, as no special skill or knowledge base is necessary to be a facilitator). Continuing education credits may be obtained after completing this course. Note that some districts cannot afford to send school personnel to conferences, and many districts cannot afford to bring speakers in due to significant budget cuts. This course is very cost effective and can serve to train many professionals in-house for a fraction of what it would cost to send them to conferences or bring in a speaker. Finally, this professional development program will enhance any course in assessment or Specific Learning Disability (SLD) identification in particular, because students are exposed to the research and viewpoints of many leaders on all sides of the SLD identification controversies.

Tuesday, April 29, 2008

Third National School Neuropsychology Conference - July 9-12, 2008

Registration for the Third National School Neuropsychology Conference (July 9-12; Grapevine, Texas) is now open. It looks like a very good conference, with presentations/workshops on the NEPSY-II, CHC cross-battery assessment, the CAS, working memory assessment, the D-KEFS, culturally and linguistically oriented assessment, LD/RTI, etc.

The Keynote Address is by Richard Woodcock (The evolution of the assessment of cognitive functions).

Shameless plug: I'm down for an invited address (immediately following Dr. Woodcock's address) on advances in the prediction of academic achievement using WJ III cognitive subtests. In reality this presentation will be CHC-focused, with research derived from the WJ III serving as the primary vehicle to illuminate CHC-achievement relations. It will be similar to the first half of my NASP08 workshop, where I unveiled the Cattell-Horn-Carroll (CHC) Cognitive Abilities Meta-Analysis project.

Kudos to Dr. Dan Miller for organizing an exciting conference.


Monday, March 31, 2008

Cognitive assessment and RtI: Excellent overview by Brad Hale

Kudos to Brad Hale for his well written explanation "Response to Intervention: Guidelines for Parents and Practitioners" at Wrightslaw. The article provides an excellent overview of RtI (Response to Intervention) and the use of cognitive/neuropsychological assessment during the Tier III component of RtI models.

Readers who want to consult the article Brad references (Hale et al., 2006), which is the primary foundation for his thoughts on the use of cognitive/neuropsychological assessment in an RtI framework, can find a copy (along with guest blog comments by John Garruto) by clicking here.



Thursday, March 06, 2008

Cognitive assessment and RTI: Shinn response, correction, plus...

My prior FYI post regarding the Kearns and Fuchs LDA presentation resulted in some spirited exchange over on the NASP listserv. The most detailed response came from Dr. Mark Shinn. Below are Mark's listserv comments "as is" (with some slight formatting changes by the blogmaster).

Mark was also gracious enough to provide me a full conflict of interest disclosure statement, which can be found at the bottom of this post. Finally, I think I have a correction to make. In an email response on the NASP list, I suggested that Fuchs was a student of Dr. Stan Deno, who is widely considered the father of CBM. Doug was a student at the U of M prior to my arrival, so I am not aware of the complete history. But I now believe that Doug was not involved with Dr. Deno during the development of CBM; I believe Doug's doctoral mentor was the late and great Bruce Balow. However, Doug has been involved in researching various aspects of CBM as it relates to LD identification. Enough said....I don't have time to run down the lineage of all fellow U of M scholars.

Also, I want to make a statement re: one of the products that Dr. Shinn mentions in his COI statement...namely...AIMSweb. Of all the tools for continuous progress monitoring, I've been most impressed with the AIMSweb product...just my two cents. Finally, I'm done commenting on this thread. Folks who want to track further developments should attend to the NASP listserv.

Mark Shinn responded to a member's "exciting" response to my prior blog post re: the Kearns & Fuchs LDA presentation.
  • Before going overboard with excitement, I'd encourage a careful read of the presentation.
  • This is not about the role/importance of cognitive assessment in LD identification. In particular, this is not a presentation about ATIs.
  • On slide 60, regarding the "potential concerns," note: "Many of the studies did not identify cognitive deficits at all" and "When they did, they did not always use cognitive assessment."
  • This is among a number of other weaknesses.
  • Slide 62 states "The Use of Cognitive Assessment Has Potential (their emphasis) Benefit." "May" is not the same as "does," and this review doesn't provide much of a compelling argument as to how or why. I could go on and on, but it would not be a good use of time. Note, however, among a number of concerns...
  • The authors seem to confuse the p value with impact...the lower the p, the greater the effect (slides 34, 42); Minnesota statisticians would be chastising beyond belief (see the blogmaster's illustration after this list). Effect sizes were reported only on slide 55. Only 10 of the 36 studies were judged to be of high quality, while 14 were judged to be of low quality--not excluded, but still interpreted anyway. Subjects were unspecified, but IF the topic was the role of cognitive assessment in SLD, then one would presume that the studies would have SLD students as subjects. A few clearly are not targeted on SLD. For example, on slide 48 the subjects are 14 students with low WMRT scores. Slide 41 lists subjects as ADHD. Hmmmm.
  • Dependent measures...visuo-spatial working memory circles, span boards, Raven's, head movements (slide 34). Perhaps most importantly, the presentation reports results of cognitive "interventions," not cognitive assessments. Let's see: slide 31 asks "Do cognitive interventions have a positive effect on cognitive outcomes?"
  • Slide 34 (findings): students in the intervention had greater improvement in performance on cognitive tasks. Slide 35: "Performance on cognitive tasks can be improved with a working memory intervention." Slide 36: "Cognitive interventions have a positive effect on cognitive outcomes." Slide 37: "Do studies with hybrid cognitive+academic interventions produce academic gains?"...and on and on and on.
  • As a final note, what is the difference between a "cognitive" intervention and an "academic" intervention? Seems like an artificial contrivance.
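[Blogmaster illustration of the p-value-versus-impact point above: with a large enough sample, a trivially small effect produces a vanishingly small p value. This is a minimal simulation sketch (fake data, arbitrary numbers; requires numpy and scipy), not anything drawn from the presentation under discussion.]

```python
# A tiny p value is not the same as a large effect: with a big sample,
# a trivial 1-point difference is "highly significant." Simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100.0, scale=15.0, size=20_000)
treated = rng.normal(loc=101.0, scale=15.0, size=20_000)  # a mere 1-point shift

t, p = stats.ttest_ind(treated, control)

# Cohen's d: mean difference scaled by the pooled standard deviation.
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"p = {p:.2e} (tiny), but Cohen's d = {cohens_d:.2f} (trivial)")
```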
Dr. Shinn's COI statement: Mark R. Shinn, Ph.D. has commercial relationships with three companies. He serves as a paid consultant to Pearson Assessment as Chief Scientist for AIMSweb, a company created by Gary Germann and Steven Jennen that was sold to the Psychological Corporation in 2006. His responsibilities include contributing to software/product development and improvement and field testing. He does not receive royalties or commissions on AIMSweb sales. He also is a consultant to Glencoe Publishing, a McGraw-Hill company, for their Jamestown Reading Navigator (JRN) product. JRN is a reading intervention for at-risk and very low-performing adolescents. His role has been to assist JRN in the use of CBM maze as part of their reading progress monitoring systems. He is scheduled to receive royalties (one-quarter of 1%) should the product achieve profitability. He also currently serves as an unpaid contributor (without royalties) to VMath, published by Voyager, a math intervention for at-risk students in Grades 3-12. His role has been to assist VMath in the development and use of CBM Math Computation as part of their math progress monitoring system.



Wednesday, August 29, 2007

LD and RTI - guest blog post by Jim Hanson

The following is a guest blog post by Jim Hanson (School Psychologist, M.Ed., Portland Public Schools, Portland, Oregon), a new member of IQs Corner Virtual Community of Scholars project.

Jim recently shared some material (on the CHC listserv) that he and his colleagues had developed in response to new regulations regarding the identification of children with specific learning disabilities (SLD). He received many "me too" requests for copies of the materials he was offering. IQ's Corner invited Jim to share his materials via a guest post and asked him to become a regular guest blogger. He agreed!

Below are links to the two documents he was distributing. One is in the form of a pdf file (click here to view). The other is a PowerPoint show, which I've made available via Slideshare (click here to view). Below are Jim's comments. His colleagues are listed on the title slide of the PPT show.

  • Federal and most state regulations have changed the criteria for identifying specific learning disabilities from the IQ/achievement discrepancy model to (1) response to intervention (RTI) and/or (2) a pattern of strengths and weaknesses in achievement or performance relative to age, state grade-level standards, and intellectual development (PSW). School districts are struggling to interpret what PSW means. Some administrators wish to continue using the IQ/achievement discrepancy model and call it PSW. This ignores voluminous research evidence on the nature of learning disabilities and the federal definition, which defines SLD as a weakness in one or more of the basic psychological processes. The reason for some districts' wish to continue with "business as usual" might be that district personnel are not familiar with the neurology of learning disabilities. If they are acquainted with cognitive science, they might still be daunted by the science's diversity of terms among researchers, its technological complexity, and questions about its effectiveness and ease of application across a wide variety of schools and school teams. The proposed reductionist model is based on models by several leading researchers in the field. It is designed as a first step in acquainting administrators with current cognitive science. It may also provide an acceptable research-based model until personnel can be trained in more expansive and technically adequate methods of identification. Interested persons are welcome to contact Jim Hanson at JaBrHanson@yahoo.com, or the Oregon School Psychologists Association, with further questions and comments.

Wednesday, April 18, 2007

Math screening and progress monitoring - Guest post by John Garruto

The following is a guest post by John Garruto, school psychologist with the Oswego School District and member of the IQs Corner Virtual Community of Scholars. John reviewed the following article and has provided his comments below. [Blog dictator note - John's review is presented "as is" with only a few minor copy edits by the blog dictator.]

Fuchs, L.S., Fuchs, D., Compton, D.L., Bryant, J.D., Hamlett, C.L., & Seethaler, P.M. (2007). Mathematics Screening and Progress Monitoring at First Grade: Implications for Responsiveness to Intervention. Exceptional Children, 73(3), 311-330.


Abstract
  • The predictive utility of screening measures for forecasting math disability (MD) at the end of 2nd grade and the predictive and discriminant validity of math progress-monitoring tools were assessed. Participants were 225 students who entered the study in 1st grade and completed data collection at the end of 2nd grade. Screening measures were Number Identification/Counting, Fact Retrieval, Curriculum-Based Measurement (CBM) Computation, and CBM Concepts/Applications. For Number Identification/Counting and CBM Computation, 27 weekly assessments were also collected. MD was defined as below the 10th percentile at the end of 2nd grade on calculations and word problems. Logistic regression showed that the 4-variable screening model produced good and similar fits in accounting for MD-calculation and MD-word problems. Classification accuracy was driven primarily by CBM Concepts/Applications and CBM Computation; CBM Concepts/Applications was the better of these predictors. CBM Computation, but not Number Identification/Counting, demonstrated validity for progress monitoring.
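[Blog dictator note: for readers who want to see the shape of this kind of analysis, below is a minimal sketch of a multi-predictor logistic screening model evaluated with ROC AUC and "false alarm"/"miss" counts. Everything in it is simulated and the screening cutoff is arbitrary; it illustrates the method only and does not reproduce Fuchs et al.'s data or results.]

```python
# Sketch of the analysis style the abstract describes: a 4-predictor
# logistic screening model for later math disability (MD), evaluated
# with ROC AUC plus counts of "false alarms" and "misses." All data are
# simulated; nothing below reproduces Fuchs et al.'s actual results.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(1)
n = 225  # matches the study's sample size, but the scores are fake
# Columns stand in for: Number ID/Counting, Fact Retrieval,
# CBM Computation, CBM Concepts/Applications.
X = rng.normal(size=(n, 4))
risk = -1.0 * X[:, 3] - 0.7 * X[:, 2] + rng.normal(scale=0.5, size=n)
md = (risk > np.quantile(risk, 0.90)).astype(int)  # ~10% later identified as MD

model = LogisticRegression().fit(X, md)
prob = model.predict_proba(X)[:, 1]
flagged = (prob > 0.15).astype(int)  # screening cutoff chosen only for illustration

tn, fp, fn, tp = confusion_matrix(md, flagged).ravel()
print(f"AUC = {roc_auc_score(md, prob):.2f}; false alarms = {fp}, misses = {fn}")
```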
John Garruto speaks (in his own words)
  • This article begins with a summary of research that has already been done to date regarding math disabilities. One of the most important conclusions reached through analysis of past studies is that screening outcomes will likely differentiate math computation from math reasoning skills--looking at math as a universal entity would likely be erroneous.
  • Examples of pre-existing research included reviewing the influence of various screeners in predicting outcome measures. Screeners across studies included (but were not restricted to) number knowledge, digit span backwards, and missing number. Outcome measures included various group-administered norm-referenced tests (such as the Stanford Achievement Test) or individually administered norm-referenced tests (such as the WJ-R). Most of the screening predictors had some degree of predictive power for outcome measures (correlations usually ranging from about .42 to .72). Not surprisingly, number knowledge was the strongest predictor, while missing number and digit span backwards also seemed to account for some degree of variance across studies. These findings are consistent with the CHC (Cattell-Horn-Carroll) framework, which holds that crystallized intelligence, fluid reasoning, and short-term memory are all important predictors of math performance (see Flanagan, Ortiz, Alfonso, & Mascolo, 2006).
  • Fuchs et al. used the WJ-III to obtain a profile of their subject sample at the beginning of first grade. They used four CBM techniques to predict performance on the dependent measures at the end of second grade (Number ID/Counting, Fact Retrieval, CBM-Computation, and CBM-Applied Problems). Outcome measures included the WRAT for calculation and Jordan's Story Problems (using local norms from a neighboring school district to establish percentiles) for word reasoning. Using ROC curves, they indicated that their screeners had a good fit in targeting the number of students who would be identified as math disabled. However, there were some concerns: 30 students were identified as MD (math disabled-calculation) by screening even though not by the normative measure, so-called "false alarms," and seven students were identified as not MD-calculation even though the criterion measure indicated that they were, so-called "misses." The numbers were 36 false alarms and 7 misses for MD-reasoning.
  • Of the different screening measures, all but Number ID/Counting predicted MD, with CBM-Applied Problems being the best predictor for both outcomes. The authors further hypothesize that including a wide variety of problems (as applied-problems measures do) is a characteristic that helps with predictive power.
  • I have a few thoughts on this study. Clearly, the previous research flags that we should probably continue to focus on cognitive factors as guides for hypothesis generation (although prior knowledge still seems to reign as the most important predictor, that was not the focus of this study).
  • I personally had concerns about the use of the WRAT alongside Jordan's Story Problems. One uses a nationally normed data set, while the other used local norms from a nearby location. The authors indicated the sample was "local but representative," but given that all other outcome measures were nationally normed instruments, this was a problem for me.
  • I was heartened to learn that three CBM measures performed adequately in predicting the presence of a MD, and I was also enlightened about the importance of applied problems (or math reasoning)--a construct that I think has been very much overlooked in much of the RTI literature on math, in contrast to number of digits correct. However, when I see that there were over thirty "false alarms," this is a cause for genuine concern. The implications go beyond providing "tutoring for those who do not need it." It can lead to an attribution of "disability" when one might not necessarily exist...a Type I error. This can lead us to lower our expectations of students who may not actually present with deficiencies.
  • My other concern was the difference in scores across measures. Although the WRAT is noted to correlate at .71 with the WJ-III, the mean SS on the WJ-III at the beginning of first grade was 91 for MD versus 71 on the WRAT at the end of second grade. For word problems, the mean score for MD on the WJ-III was 92 and about 66 on the Jordan. The standard deviations are pretty large for the WJ-III and WRAT (about 7-11 points) and smaller on the Jordan (about three points), but again I became concerned at how very dissimilar these profiles are. A large part of it can be explained by the calculation demands at the beginning of first grade (absolutely nothing beyond writing numbers) versus the end of second grade (subtraction, possibly with regrouping), although the magnitude is still very great. I might ask a quantitative expert to help me pull that apart further, as this "applied practitioner" has been known to "stumble on the stats" at times.


Friday, January 05, 2007

Can IQ scores predict response to intervention? Guest post by John Garruto

The following is a guest post by John Garruto, school psychologist with the Oswego School District and member of the IQs Corner Virtual Community of Scholars. John reviewed the following article and has provided his comments below. [Blog dictator note - John's review is presented "as is" with only a few minor copy edits by the blog dictator and the insertion of some URL links.]

Fuchs, D. & Young, C.L. (2006). On the Irrelevance of Intelligence in Predicting Responsiveness to Reading Instruction. Exceptional Children, 73(1), 8-30. (click here to view)

There has been considerable debate regarding the role of cognitive/intellectual assessment in the Response to Intervention (RTI) paradigm, primarily with regard to the identification of students with learning disabilities (LD). The purpose of this article was to review the relationship of IQ scores to performance in reading intervention programs. Fuchs and Young reviewed 13 studies with somewhat mixed results. Below is my brief synopsis of their main conclusions:
  • Eight of the studies found IQ to be a significant correlate of reading. However, in many cases fidelity of treatment was not established.
  • The authors noted a modest ATI (aptitude-treatment interaction) relating IQ to success with particular types of treatment (for example, those with higher IQs tended to do better with decoding, fluency, and comprehension training, but IQ was less related to success in phonemic awareness training). A minimal sketch of what testing such an interaction looks like statistically follows this list.
  • Overall, while conceding that IQ and a multi-factorial view of cognitive abilities are probably not as important to LD diagnosis as some proponents might espouse, Fuchs and Young suggest there is a role for the IQ test as part of the process of determining how to differentiate instruction.
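[Blog dictator note: an ATI is, statistically, an aptitude-by-treatment interaction: the benefit of a treatment differs across levels of aptitude (here, IQ). Below is a minimal sketch of testing such an interaction in regression. The data are simulated and every coefficient is an arbitrary assumption chosen for illustration; requires numpy, pandas, and statsmodels.]

```python
# An aptitude-treatment interaction (ATI) shows up as an IQ-by-treatment
# interaction term in a regression: treatment benefit varies with IQ.
# All data and coefficients below are simulated, arbitrary assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 400
df = pd.DataFrame({
    "iq": rng.normal(100, 15, size=n),
    "treatment": rng.integers(0, 2, size=n),  # 0 = PA training, 1 = decoding/fluency
})
# Simulate the hypothesized ATI: treatment 1's benefit grows with IQ.
df["gain"] = 5 + 0.02 * df.iq + df.treatment * (0.08 * (df.iq - 100)) + rng.normal(size=n)

fit = smf.ols("gain ~ iq * treatment", data=df).fit()
print(fit.params[["iq", "treatment", "iq:treatment"]])  # the interaction term carries the ATI
```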
An interesting twist in the article was its primary focus on ‘g’ (overall IQ; general intelligence), something that many cognitive processing theorists frown upon. However, a closer examination of the dynamics of the studies reviewed might lead to some conclusions not stated in the paper. I offer the following observations and comments:
  • Fuchs and Young appropriately suggest that further investigations need to examine the relations between multi-factorial models of intelligence (e.g., CHC theory) and response-to-treatment interventions. They do concede that all of their studies use overall ‘g’ as a single predictor. However, a careful inspection reveals that most of the studies used the Wechsler batteries (WISC-III or earlier versions). If one examines the Olde School Wechsler…half of the test measures Verbal IQ or crystallized intelligence (Gc). There is a substantial body of literature relating crystallized intelligence (Gc) to reading ability.
  • None of the IQ tests included in the review measured auditory processing abilities (Ga), like those measured in the WJ III and specialized batteries (e.g., CTOPP). Therefore, it is not surprising that the authors found little relationship between intelligence tests and phonemic awareness training.
  • Cognitive ability tests have changed since the days of David Wechsler. We now know (and as Fuchs and Young highlight) that skills such as phonological processing (Ga-PC), rapid naming (Glr-NA), orthographic processing, etc., are important in learning to read. Although the Wechsler batteries do not assess these skills, there are cognitive ability tests that currently do. I would suggest that cognitive batteries including measures of these important reading-related abilities would likely demonstrate stronger relations with intervention responsiveness. Of course, this is an empirical question begging further research.
  • Although Fuchs and Young do not necessarily espouse a multi-factorial model of intelligence for LD identification, there are clearly implications for a problem-solving model. If IQ (and let’s face it…IQ is predominantly Gc in the case of the Wechslers) accounts for unique variance in predicting treatment outcome, and explicit PA training is not related to IQ (but I would wager it would be related to auditory processing profiles), then using CHC theory and CHC-designed batteries (e.g., WJ III; cross-battery designed assessments) seems to fit well within the context of this study. Other significant correlates of reading have been processing speed (Gs), short-term memory (Gsm), and long-term storage and retrieval (Glr). All of the above abilities are either underrepresented or not represented in the Wechsler batteries, the primary batteries that served as the crux of this review.
  • The article recognizes and describes the two primary factions prominent in contemporary special education and school psychology--those who believe in response-to-intervention as the way to determine LD eligibility, and those who espouse the need for cognitive assessment. This article does not diminish the importance of RTI or the problem-solving model. In fact, it supports many of the changes noted in the regulations regarding the importance of RTI as part of the LD determination process. It places importance on using empirically based instruction and interventions. Fuchs and Young also highlight the significance of formative assessment and ongoing progress monitoring.


Friday, October 27, 2006

RTI and cognitive assessment--Guest post by John Garruto

The following is a guest post by John Garruto, school psychologist with the Oswego School District and member of the IQs Corner Virtual Community of Scholars. John reviewed the following article and has provided his comments below. [Blog dictator note - John's review is presented "as is" with only a few minor copy edits and the insertion of some URL links]

Hale, J.B., Kaufman, A., Naglieri, J.A. & Kavale, K.A. (2006). Implementation Of IDEA: Integrating Response To Intervention And Cognitive Assessment Methods. Psychology in the Schools, 43(7), 753-770. (click here to view)

This article (and the entire journal series in this special issue) articulates much of what I have been saying and thinking for a long time. Hale and colleagues open by discussing the RTI (response-to-intervention) and cognitive assessment “factions.” Although I had nothing to do with this article, I chuckled at the similarity to a PowerPoint I did for graduate study in July of 2005 (click here). I joked about these factions as having a paradigm analogous to “Star Wars.” I likened school psychologists who espoused both RTI and cognitive assessment as necessary requirements for the identification of SLD (Specific Learning Disability) to “a rebel alliance”…primarily because we were advocating such a balanced approach. Clearly this Psychology in the Schools special issue suggests there is an increasing number of professionals who advocate this approach.

Before beginning with a general summary and sharing my overall impressions, it is important to acknowledge the obvious conflict of interest of some of the authors: both Kaufman (KABC-II) and Naglieri (CAS) are intelligence test authors. That said, it is important to note that the two other authors are not test authors. In fact, Kavale (a.k.a. the intervention effect size guru) is frequently cited by many RTI-only proponents. Therefore, it is very unlikely that this article can be dismissed as nothing more than a product of conflicts of interest.

  • The Hale et al. article begins with the acknowledgment that there seem to be two factions in school psychology assessment circles--those who believe in response-to-intervention as the way to determine eligibility for SLD, and those who espouse the need for cognitive assessment. The Hale et al. article does not diminish the importance of RTI or the problem-solving model. In fact, it supports many of the changes noted in the regulations (e.g., the importance of looking at RTI as part of the process for determining eligibility for learning disabilities). It places emphasis on the use of empirically based instruction and interventions. It also highlights the significance of formative assessment and ongoing progress monitoring. Such practices will illustrate the effect of interventions.
  • After supporting the importance of RTI, the authors contend that at Tier III a responsible individualized assessment (including cognitive assessment) needs to occur. Clearly, jumping to conclusions about a neurologically based deficit based only on a failure to respond to intervention would lead to a significant number of false positives (Type I errors). The authors do an exemplary job of identifying the importance of cognitive processing deficits related to SLD in the problem-solving literature. This approach does not embrace the much-maligned ability-achievement discrepancy LD identification procedure, but instead endorses examining which processes (if any) are leading to the negative outcomes. The authors conclude with a case study describing a child who seemed to have one problem on the surface, but via cognitive assessment was discovered to have an underlying latent problem (i.e., one that was not observably manifest). The authors contended that this discovery, vis-à-vis appropriately designed cognitive assessment methods, facilitated the problem-solving model by allowing the team to implement new interventions. The beauty of this example is that the focus was not on eligibility as the end result, but instead on using individualized assessment to help piece the puzzle together.
  • I’ve spoken quite a bit about the authors and a possible conflict of interest. One thing I do want to mention is that I continue to be a school-based practitioner. This framework is one I have been endorsing (as a practitioner) for a long time (my presentation noted above has been online many months before this article went to press.) I’ve had many spirited debates with teachers, arguing that the spirit of formative assessment and research-based interventions has a very positive research history and we are remiss not to use these methods first. However, for those kids who are not responding, I can often complete a solid individualized assessment that provides logical reasons as to why they are not responding, and continue to provide interventions that are related to dynamics and skills that are not readily manifest. There is absolutely no doubt in my mind that combining both approaches will allow us to look beyond “eligibility” to determining what a child needs.
  • Another of my thoughts is that much of the criticism of cognitive assessment not leading to intervention has centered on the lack of research establishing ATIs (aptitude-treatment interactions). However, the fact that individualized interventions based on the needs of the child might not have a huge history of published research does not mean we throw them out. Many RTI-only proponents argue that we might as well go right to special education and simply intensify the research-based interventions that could be done within a special education paradigm. I argue that doing flash cards to aid sight reading might have an empirical support base, but doing flash cards all day long (one-on-one) with a blind student isn’t going to do a thing. However, designing an intervention around the varied needs and interests of the child can lead (and has led) to positive results.
  • Finally, my other concern with the RTI-only paradigm is that it seems “stuck” on reading…and on only three of the big five components identified by the National Reading Panel (phonemic awareness, phonics, and fluency). There is little research on using CBM for math reasoning or written expression (beyond spelling and perhaps writing fluency). I had hoped the most recent issue of School Psychology Review (35(3)), which focused on CBM for reading, writing, and math, might provide practice-based school psychologists with the research we need. Quite the contrary: almost all of the articles dealt with math calculation and fluency, as well as spelling, mechanics, and writing fluency. CBM/RTI research on higher-level reasoning processes--vocabulary, induction, deduction, inferential reasoning, and writing organization--was clearly lacking from this issue. Until RTI-only advocates start providing research and guidance in these areas, we would be remiss to discard relevant assessment techniques that provide insights into these important skills and abilities.


Monday, January 16, 2006

RTI and cognitive assessment EWOK

I'm trying an experiment in knowledge development and sharing.

I'm currently working on a literature review (for a journal article) related to the role of cognitive assessment in RTI (response to intervention) special education assessment models. As a known Gv guy (aka Dr. Gv), I make frequent use of a mind map program called Mind Manager (from Mindjet). It is a great tool for storing notes, copies of articles, etc., in a visual map. Then, as I continue to read, I rearrange the map as my thinking evolves.

Given the recent interest in RTI and the role of cognitive assessment, and given my interest in ascertaining whether people would find dissemination of information via web-based maps useful (what I am calling EWOKs - Evolving Webs of Knowledge :) ), I thought I'd post my current working notes for people to view. This accomplishes two goals.

1. Sharing the literature (on this important topic) I've located to date.
2. Demonstrating how this information dissemination mechanism works....and seeing if folks might find it useful as a means to build living and breathing EWOKs on topics.

Take a peek at this URL.

[Note to fellow Gv folks....click on the "overview map" link if you want to see the entire visual map]

Be gentle...all I've done is cut and paste text from various PDF copies of articles into branch nodes. I've made no attempt to format anything to look pretty. I've not yet started the step of taking the notes and starting to write. I'm not done with my literature acquisition and review. The cut-and-paste notes (with some bolding for me to think about) are presented "as is." This is NOT any kind of finished web product...it is just an attempt to present something "in process".......these are my working notes.

What is really cool (re: this product as a tool for writing and dissemination) is that once you save it in web format and upload via an ftp program, all the links work....and the links to the original articles, if they are also uploaded, are active.......so folks can click on the links and view the articles themselves.

Enough geek self-revelation for now. I'd be interested in feedback regarding the perceived value of EWOKs (much more polished ones, of course). Remember that the concept would be for special-topic EWOKs to be constantly updated, revised, and extended.

A nicer example (without the pdf article links) can be found at this link.
