
Thursday, November 14, 2024

Research Byte: Evaluating the treatment utility of the Cognitive Assessment System (#CAS): A #metaanalysis of #reading and #mathematics outcomes

Richard J. McNulty and Randy G. Floyd

https://doi.org/10.1016/j.jsp.2024.101384

Abstract

There has been a long search for cognitive assessments that reveal aptitudes thought to be useful for treatment planning. In this regard, since the 1990s, there has been some enthusiasm for the Cognitive Assessment System (CAS) and its potential promise for informing treatment due to its alignment of theory, assessment instrument, and suite of interventions. The purpose of this meta-analytic review was to synthesize research pertinent to the treatment utility of the CAS according to a taxonomy of treatment utility. A total of 252 articles were produced by an electronic search, and eligibility screening yielded 16 articles meeting criteria for consideration. Most studies described in these articles utilized obtained difference designs, focused on the Planning composite scores from the CAS, and addressed math interventions. Only seven studies with publication dates from 1995 to 2010 yielded sufficient information to be included in the meta-analysis. A random effects model was employed to determine the overall treatment utility effect across 114 participants apportioned to 14 groups and comprising eight comparisons. Results yielded an overall moderate effect size (0.64, 95% CI [0.24, 1.03], p = .002), but it was associated with significant imprecision (due to a low number of viable studies and small sample sizes across most studies) that prohibits reliable conclusions from being drawn. Assessment of between-study heterogeneity and moderator analysis were not possible. Considering these findings, additional research is needed to support the treatment utility of the CAS, even after more than 27 years of study. Furthermore, there are no published studies regarding the treatment utility of the second edition of the CAS, which was published in 2014. These results suggest that there is insufficient empirical grounding to enable practitioners to use this instrument to develop effective treatments for reading, mathematics, or writing. More direct interventions designed to enhance academic skill development should be employed.
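For readers curious about the mechanics behind the pooled estimate reported above, here is a minimal sketch of random-effects pooling using the DerSimonian-Laird estimator, one common way such a model is fit. The abstract does not specify the estimator the authors used, and the eight effect sizes and variances below are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of a DerSimonian-Laird random-effects meta-analysis.
# The inputs are HYPOTHETICAL placeholders, not the McNulty & Floyd data.
import math

def random_effects_pool(effects, variances):
    """Pool standardized mean differences under a random-effects model."""
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    z = pooled / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-tailed p
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, ci, p

# Hypothetical per-comparison effect sizes and sampling variances
effects = [0.9, 0.3, 0.7, 1.1, 0.2, 0.5, 0.8, 0.6]
variances = [0.25, 0.30, 0.28, 0.35, 0.22, 0.27, 0.31, 0.26]
pooled, ci, p = random_effects_pool(effects, variances)
print(f"Pooled effect = {pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], p = {p:.3f}")
```

Note that with only eight comparisons and small samples, the between-study variance estimate is itself unstable, which is exactly the imprecision problem the abstract flags.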

Tuesday, November 22, 2016

Research Bytes: A Systematic Examination of the Linguistic Demand of Cognitive Test Directions Administered to School-Age Populations


  1. Damien C. Cormier (1)
  2. Okan Bulut (1)
  3. Deepak Singh (1)
  4. Kathleen E. Kennedy (1)
  5. Kun Wang (1)
  6. Alethea Heudes (1)
  7. Adam J. Lekwa (2)

Affiliations: (1) University of Alberta, Edmonton, Canada; (2) Rutgers University, New Brunswick, NJ, USA

Corresponding author: Damien C. Cormier, Department of Educational Psychology, University of Alberta, 6-107E Education North, Edmonton, Alberta, Canada T6G 2G5. Email: dcormier@ualberta.ca

Abstract

The selection and interpretation of individually administered norm-referenced cognitive tests used with culturally and linguistically diverse (CLD) students continue to be important considerations within the psychoeducational assessment process. Understanding test directions during the assessment of cognitive abilities is important, considering the high-stakes nature of these assessments. Therefore, the linguistic demand of spoken test directions from the following commonly used cognitive test batteries was examined and compared: Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V), Woodcock–Johnson IV Tests of Cognitive Abilities (WJ IV COG), Cognitive Assessment System, Second Edition (CAS2), and Kaufman Assessment Battery for Children, Second Edition (KABC-II). On average, the linguistic demand of the standard test directions was greater than that of the supplementary test directions. When individual test characteristics were examined, very few individual tests were identified as outliers with respect to the linguistic demand of their directions. This finding differs from previous research and suggests that the linguistic demand of the required directions for most tests in commonly used cognitive batteries is similar. Implications for future research and test development are discussed.
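The abstract does not describe how linguistic demand was operationalized, so as a purely hypothetical illustration, the sketch below computes two crude proxies (mean sentence length and type-token ratio) for a block of test directions. This is not the authors' measure; it only shows the kind of quantification such a comparison requires.

```python
# Hypothetical proxies for the "linguistic demand" of spoken test directions.
# NOT the measure used by Cormier et al.; illustration only.
import re

def linguistic_demand_proxies(directions: str) -> dict:
    """Return crude linguistic-demand proxies for a block of test directions."""
    sentences = [s for s in re.split(r"[.!?]+", directions) if s.strip()]
    words = re.findall(r"[A-Za-z']+", directions.lower())
    return {
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

# Toy comparison of a longer "standard" direction vs. a shorter
# "supplementary" direction (both invented for this example)
standard = ("Listen carefully to each word I say. After I stop, repeat the "
            "words back to me in exactly the same order that I said them.")
supplementary = "Say the words back in the same order."
print(linguistic_demand_proxies(standard))
print(linguistic_demand_proxies(supplementary))
```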