Showing posts with label reading. Show all posts

Thursday, June 19, 2025

Research Byte: Positive #schoolclimate can make a difference in #reading, #mentalhealth and #cortical thinning - #schoolpsychology #SPED #EDPSY #cognition


Positive school climate boosts children’s reading achievement, mental health and cortical thinning.  

Brain and Cognition.  Sorry, not an open access article you can download.  😒


Abstract

Growing evidence underscores school climate as an important protective factor for children’s academic achievement and mental health. However, whether and how school climate impacts child development from behavioral to brain has remained largely unknown. This study aimed to investigate the protective roles of school climate in children’s reading achievement, mental health, and cortical thickness. Behavioral and neuroimaging data were obtained from 400 children aged 6–12 years (mean age = 9.65 years). First, results showed that a positive school climate was significantly associated with better reading performance and reduced internalizing/externalizing problems. Notably, school climate compensated for disadvantaged family environments, particularly among children with less educated parents. Second, externalizing problems significantly mediated the link between school climate and reading achievement. Third, compared with their peers, children from schools with more positive climate showed accelerated cortical thinning in the lingual/pericalcarine/cuneus and postcentral regions, the hubs for visual processing and sensorimotor integration. Fourth, the cortical thickness of the lingual/pericalcarine/cuneus and postcentral gyri significantly mediated the role of school climate in reading achievement. These results highlight school climate as a multi-level protective factor that fosters academic resilience via behavioral regulation and cortical thinning.

Thursday, February 27, 2025

What is #dyslexia? - An expert Delphi consensus on #dyslexia definition, #assessment and #identification - #SLD #dyslexia #SPED #schoolpsychology



An open access journal article that can be downloaded for reading.  Click here to access/download


ABSTRACT 

This paper discusses the findings of a Delphi study in which dyslexia experts, including academics, specialist teachers, educational psychologists, and individuals with dyslexia, were asked for their agreement with a set of key statements about defining and identifying dyslexia: why it should be assessed and how and when this assessment should be conducted. Two rounds of survey responses provided a vehicle for moving towards consensus on how to assess for dyslexia. Forty-two consensus statements were ultimately accepted. Findings suggested that assessment practice should take account of risks to the accurate identification of dyslexia. An assessment model, with guidelines for assessors, is presented, based on the Delphi's findings. This hypothesis-testing model requires assessors to investigate and weigh up the factors most likely to result in an accurate assessment before reaching conclusions, assigning terminology, and making recommendations for intervention and management.

Click on the following images for larger, more readable versions of the figures




Thursday, November 14, 2024

Research Byte: Evaluating the treatment utility of the Cognitive Assessment System (#CAS): A #metaanalysis of #reading and #mathematics outcomes

 


Richard J. McNulty and Randy G. Floyd

https://doi.org/10.1016/j.jsp.2024.101384

Abstract

There has been a long search for cognitive assessments that reveal aptitudes thought to be useful for treatment planning. In this regard, since the 1990s, there has been some enthusiasm for the Cognitive Assessment System (CAS) and its potential promise for informing treatment due to its alignment of theory, assessment instrument, and suite of interventions. The purpose of this meta-analytic review was to synthesize research pertinent to the treatment utility of the CAS according to a taxonomy of treatment utility. A total of 252 articles were produced by an electronic search and eligibility screening yielded 16 articles meeting criteria for consideration. Most studies described in these articles utilized obtained difference designs, focused on the Planning composite scores from the CAS, and addressed math interventions. Only seven studies with publication dates from 1995 to 2010 yielded sufficient information to be included in the meta-analysis. A random effects model was employed to determine the overall treatment utility effect across 114 participants apportioned to 14 groups and comprising eight comparisons. Results yielded an overall moderate effect size (0.64, 95% CI [0.24, 1.03], p = .002), but it was associated with significant imprecision (due to a low number of viable studies and small sample sizes across most studies) that prohibits reliable conclusions from being drawn. Assessment of between-study heterogeneity and moderator analysis was not possible. Considering these findings, additional research is needed to support the treatment utility of the CAS—even after more than 27 years of study. Furthermore, there are no published studies regarding the treatment utility of the second edition of the CAS, which was published in 2014. These results suggest that there is insufficient empirical grounding to enable practitioners to use this instrument to develop effective treatments for reading, mathematics, or writing. 
More direct interventions designed to enhance academic skill development should be employed.
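For readers curious about the mechanics behind a pooled effect like the 0.64 reported above, here is a rough sketch of DerSimonian-Laird random-effects pooling in Python. The effect sizes and sampling variances below are hypothetical placeholders, NOT the actual CAS study data:

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled effect and 95% CI."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)    # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * d for wi, d in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical standardized mean differences and sampling variances for
# eight comparisons (illustrative only, not the CAS meta-analysis data)
effects = [1.2, 0.1, 1.0, -0.1, 0.9, 0.2, 1.1, 0.3]
variances = [0.05, 0.05, 0.06, 0.05, 0.07, 0.06, 0.05, 0.06]
pooled, ci = random_effects_pool(effects, variances)
```

With numbers like these the pooled estimate is moderate but the confidence interval is wide, which mirrors the kind of imprecision the authors flag as prohibiting reliable conclusions.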

Sunday, March 13, 2016

Research Byte: Executive functioning and working memory deficits in kindergarten are predictive of reading and math deficits in first grade

 
Nice study.  For WJ III/WJ IV users, the measure of working memory (Gwm), which was the most predictive variable of first grade reading and math, was Numbers Reversed.
 
Available online 7 March 2016

Executive functioning deficits increase kindergarten children's risk for reading and mathematics difficulties in first grade

  • The Pennsylvania State University
  • University of California, Irvine

Highlights

• Executive functioning deficits in kindergarten uniquely predict reading and mathematics difficulties in first grade
• Executive functioning deficits more strongly predict mathematics difficulties than reading difficulties, although these deficits predict both types of difficulties
• Working memory deficits more strongly predict mathematics and reading difficulties than cognitive flexibility deficits

Abstract

Whether executive functioning deficits result in children experiencing learning difficulties is presently unclear. Yet evidence for these hypothesized causal relations has many implications for early intervention design and delivery. We used a multi-year panel design, multiple criterion and predictor variable measures, extensive statistical control for potential confounds including autoregressive prior histories of both reading and mathematics difficulties, and additional epidemiological methods to preliminarily examine these hypothesized relations. Results from multivariate logistic regression analyses of a nationally representative and longitudinal sample of 18,080 children (i.e., the Early Childhood Longitudinal Study-Kindergarten Cohort of 2011, or ECLS-K: 2011) indicated that working memory and, separately, cognitive flexibility deficits uniquely increased kindergarten children's risk of experiencing reading as well as mathematics difficulties in first grade. The risks associated with working memory deficits were particularly strong. Experimentally-evaluated, multi-component interventions designed to help young children with reading or mathematics difficulties may also need to remediate early deficits in executive function, particularly in working memory.

Keywords

  • Executive functioning;
  • working memory;
  • cognitive flexibility;
  • learning difficulties;
  • longitudinal
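As a companion to the abstract's methodology, here is a minimal sketch of the kind of multivariate logistic regression used to estimate risk from early deficits. The data are simulated and the predictor set is drastically simplified (the actual ECLS-K analyses included extensive statistical controls):

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical data: each child has binary kindergarten indicators for a
# working memory deficit, a cognitive flexibility deficit, and a prior
# (autoregressive) reading difficulty, plus a first-grade difficulty outcome.
def simulate(n=1500):
    rows = []
    for _ in range(n):
        wm = random.random() < 0.15
        cf = random.random() < 0.15
        prior = random.random() < 0.20
        logit = -2.0 + 1.2 * wm + 0.5 * cf + 1.5 * prior
        y = 1.0 if random.random() < sigmoid(logit) else 0.0
        rows.append(([1.0, float(wm), float(cf), float(prior)], y))
    return rows

def fit_logistic(rows, lr=1.0, epochs=400):
    """Batch gradient ascent on the mean log-likelihood."""
    k = len(rows[0][0])
    beta = [0.0] * k
    for _ in range(epochs):
        grad = [0.0] * k
        for x, y in rows:
            p = sigmoid(sum(b * xi for b, xi in zip(beta, x)))
            for j in range(k):
                grad[j] += (y - p) * x[j]
        beta = [b + lr * g / len(rows) for b, g in zip(beta, grad)]
    return beta

beta = fit_logistic(simulate())
# Odds ratio for a working memory deficit, other predictors held fixed
or_wm = math.exp(beta[1])
```

In the real study the fitted odds ratios quantify the unique increase in risk; here the recovered working memory coefficient should sit near the simulated value of 1.2.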

Tuesday, March 03, 2015

Music Makes You a Better Reader, Says Neuroscience

Dr. Nina Kraus does awesome research on auditory processing (Ga) abilities at Northwestern.

Content curation courtesy of IQ McGrew and the MindHub






Monday, February 24, 2014

Motivational constructs in reading: A recent lit review

Good stuff. See the Beyond IQ, and MACM (Motivation and Academic Competence Model) at the MindHub for related and additional information.

 

 

Monday, November 18, 2013

Taub and Benson (2009). Gc and Gv related to reading comprehension in college students

11-19-13 update.  I first posted this article FYI without digesting the results in detail.  A subsequent comment (see the comment section) brought my attention to a possible problem with a Heywood case.  Upon further review, I tend to agree with that comment.


A new study shows direct effects of comprehension-knowledge (Gc) and visual-spatial processing (Gv), and an indirect effect of general intelligence (g), on the reading comprehension of college students. Click on images to enlarge.



Thursday, November 01, 2012

Research bytes: Three interesting reading research articles

New articles on silent reading interventions, skills involved in silent reading, and definition and components of orthographic knowledge. Click images to enlarge

 

 

 

Monday, October 15, 2012

Research byte: Cognitive-neuro models of reading: A meta-analysis

Excellent research synthesis study that relates cognitive models of reading ability/disability to brain regions. Awesome use of colors and figures to demonstrate the research results. Click on images to enlarge.

 

Friday, October 12, 2012

Another study demonstrates positive impact of Interactive Metronome on reading achievement


I just learned that the following article is soon to be published (click here for journal info)
[Click on image to enlarge]



This is the second peer-reviewed article to demonstrate a significant positive impact of Interactive Metronome (IM) training on certain reading behaviors in a study with both experimental and control groups.  The other study was one I was involved with (Taub, McGrew, & Keith, 2007; the abstract is presented below).  You can access that complete 2007 manuscript at the Brain Clock blog.
[Click image to enlarge]


In the new Ritter et al. study, IM was combined with reading and language interventions in school-age children who had language and reading impairments.  This will be called the IM+language/reading intervention experimental group (IM+).  Subjects were randomly assigned either to this experimental group (n=21) or to the same language/reading intervention without IM (n=28).  So, this study is not a pure investigation of the isolated benefits of IM.  Instead, it should be viewed as a study that investigated whether IM training could be a good “add on” component to other interventions focused on language and reading.  The outcome domain assessed comprised various components of reading achievement.

Both groups demonstrated statistically significant gains in reading rate/fluency and comprehension.  However, the IM+ group demonstrated statistically significantly stronger gains than the language/reading-intervention-only (control) group.  This suggests that IM may be a useful adjunct intervention to be used with other, more traditional academic treatments directed at reading improvement.
Similar to the Taub et al. (2007) study, the IM+ students showed more improvement (over the control students) in reading fluency/rate.  This consistent finding across both studies has been hypothesized to be due to (a) improvements in speed of cognitive processing, which result in greater efficiency and automaticity in reading words, (b) greater controlled attention (focus), which improves working memory functioning, or (c) a combination of both.

The new study differed from the earlier study in that the IM+ group displayed greater reading comprehension gains than the academic-only intervention group.  Taub et al. (2007) found no improvement in reading comprehension.  Given that both groups received the same language and reading comprehension treatment, it is hypothesized that the addition of IM may be impacting cognitive processes that facilitate reading comprehension.  I agree with Ritter et al. (2012) that a viable hypothesis is that by increasing focus (attentional control), the students' working memories became more efficient.  Working memory is the mind's limited-capacity “mental workbench” (just think of trying to recall a new phone number you just looked up in the phone book).   Increased attentional control (focus) increases the ability to actively maintain information just read in working memory long enough for it to be associated with material retrieved from long-term memory, thus “hooking” newly read information into the person's store of acquired knowledge.  Click here for a recent brief video (I think…therefore IM) where I explain the role of focus and working memory and how it may facilitate higher-level cognitive processing, comprehension, etc.

Of course, the small total sample (n=49) suggests some degree of caution.  But when combined with the Taub et al. (2007) study with larger samples, this form of replication in a new sample provides more support for the academic benefits (especially ease and rate of reading words) of IM interventions in school-age children.  Independent replication is a cornerstone of scientific research.


Sunday, July 22, 2012

Research byte: Support for validity of carefully constructed Cloze reading comprehension test.




Click on image to enlarge. The results are consistent with a white paper I wrote for the National Reading Panel over a decade ago


Posted using BlogPress from Kevin McGrew's iPad
www.themindhub.com

Monday, July 09, 2012

Research byte: Rise time perception and reading disabilities

Another article implicating auditory temporal processing abilities in reading disabilities...rise time perception problems.

Click image to enlarge




Thursday, July 05, 2012

AP101 Brief # 13: CHC-consistent scholastic aptitude clusters: Back to the Future


This is a continuation of a set of analyses previously posted under the title  Visual-graphic tools for implementing intelligent intelligence testing in SLD contexts:  Formative concepts and tools.  It is recommended that you read the prior post to obtain the necessary background and context, which will not be repeated here.

The third method approach to SLD identification (POSW; pattern of strengths and weaknesses) has been advanced primarily by Flanagan and colleagues, as well as Hale and colleagues and Naglieri (see Flanagan & Fiorello, 2010 for an overview and discussion).  A central concept in these POSW third method SLD models is that an individual with a possible SLD must show cognitive deficits that have been empirically or theoretically demonstrated to be the most relevant cognitive abilities for the achievement domain where the person is deficient.  That is, the individual's cognitive deficits are consistent or concordant with the person's academic deficits, in the context of other cognitive/achievement strengths that suggest strengths in non-SLD areas.  I have often referred to this as a domain-specific constellation or complex of abilities and achievements.

Inherent in these models is the operationalization of the notion of aptitude-achievement consistency or concordance.  It is important to note that aptitude is not the same as general intelligence or IQ.  Aptitude in this context draws on the historical/traditional notion of aptitude that has been around for decades.  Richard Snow and colleagues have (IMHO) written the best material regarding this particular definition of aptitude.  Aptitude includes both cognitive and conative characteristics of a person (see the Beyond IQ Project).  But for this specific post, I am focusing only on the cognitive portion of aptitude--which would, in simple terms, represent the best combination of particular CHC narrow or broad cognitive abilities that are most highly correlated with success within a particular narrow or broad achievement domain.

What are the CHC narrow or broad abilities most relevant to different achievement domains?  This information has been provided in narrative research synthesis form by Flanagan and colleagues (in their various cross-battery books and chapters) and more recently in a structured empirical research synthesis by McGrew and Wendling (2010).  These CHC-based COG-->ACH relations summaries provide assessment professionals with information on the specific broad or narrow CHC abilities most associated with subdomains in reading and math, and to a lesser extent writing.  Additionally, McGrew and Wendling's (2010) synthesis provides information on developmental considerations--that is, the relative importance of CHC abilities for different achievement domains varies as a function of age.  McGrew and Wendling (2010) presented their results for three broad age groups (6-8, 9-13, and 14-18 years of age).

Given this context, I presented a series of analyses (see the first post mentioned above as recommended background reading) that took the findings of McGrew and Wendling (2010) as an initial starting point and used logical, empirical, and theoretical considerations to identify the best set of WJ III cognitive test predictors in the same three age groups for two illustrative achievement domains.  I have since winnowed down the best set of cognitive predictors in the two achievement domains (basic reading skills-BRS; math reasoning-MR).  I then took each set of carefully selected predictor tests and ran multiple regression models for each year of age from 6 through 18 in the WJ III NU norm data.  I saved the standardized regression coefficients for each predictor and plotted them by age.  The plotted raw standardized coefficients demonstrated clear systematic developmental trends, but with noticeable "bounce" due to sampling error.  I thus generated smoothed curves using a non-linear smoothing function, with the smoothed curve representing the best estimate of the population parameters.  This technique has been used previously in a variety of studies that explored the relations between WJ-R/WJ III clusters and achievement (see McGrew, 1993 and McGrew & Wrightston, 1997 for examples and a description of the methodology).  Below is a plot of the raw standardized coefficients and the smoothed curves for two of the significant predictors (Verbal Comprehension; Visual-Auditory Learning) in the prediction of the WJ III Basic Reading Skills cluster [click on images to enlarge].  It is clear that the relative importance of Verbal Comprehension and Visual-Auditory Learning increases/decreases (respectively) systematically with age.
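The age-by-age regression and smoothing procedure described above can be sketched roughly as follows. Everything here is simulated; the predictor trends merely mimic the Verbal Comprehension vs. Visual-Auditory Learning pattern, and a simple moving average stands in for the actual non-linear smoother:

```python
import numpy as np

rng = np.random.default_rng(42)

def standardized_betas(X, y):
    """Standardized regression coefficients via least squares on z-scores."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

# Simulate one regression per year of age (6-18): two cognitive predictors
# whose true importance shifts with age, mimicking the developmental trends
# described for Verbal Comprehension (grows) vs. Vis-Aud Learning (shrinks)
ages = np.arange(6, 19)
raw = []
for age in ages:
    n = 120
    X = rng.normal(size=(n, 2))
    w_vc = 0.2 + 0.04 * (age - 6)       # grows with age
    w_val = 0.5 - 0.04 * (age - 6)      # shrinks with age
    y = w_vc * X[:, 0] + w_val * X[:, 1] + rng.normal(scale=0.8, size=n)
    raw.append(standardized_betas(X, y))
raw = np.array(raw)

# 3-point moving average as a stand-in for the non-linear smoother
# (edge values average fewer points, so endpoints are damped)
kernel = np.ones(3) / 3
smooth = np.column_stack([
    np.convolve(raw[:, j], kernel, mode="same") for j in range(2)
])
```

The raw coefficients "bounce" from sampling error at each age, while the smoothed columns recover the underlying developmental trends, which is the rationale for smoothing the norm-sample curves before interpreting them.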

The next two figures present the final smoothed results for the CHC-based aptitude clusters for the prediction of the WJ III Basic Reading Skills and Math Reasoning clusters.

There is much that could be discussed after looking at the two figures.  Below are a few comments and thoughts.
  • The composition of what I am calling CHC-consistent scholastic aptitude clusters makes theoretical and empirical (CHC-->ACH research synthesis) sense. For example, in both BRS and MR, Gc-LD/VL ability (Verbal Comprehension) is salient at all ages and systematically increases in importance with age.  In BRS, visual-auditory associative memory (Glr-MA; Vis-Aud. Learning) is very important during the early school years (ages 6-9), but then disappears from the prediction model.  This ability (test) is not found in the MR model.  Gf abilities (quantitative reasoning-RQ, Number Matrices; general sequential reasoning-RG, Analysis-Synthesis) are important throughout all ages for predicting math reasoning achievement.  In fact, both increase in relative importance with age, particularly the measure of Gf-RQ (Number Matrices).  These two Gf tests are nowhere to be found in the BRS plot.  Instead, measures of Ga abilities (Sound Blending; Sound Awareness) are important in the BRS model.  Gs and Gsm-WM (domain-general cognitive efficiency variables) are present in both the BRS and MR models.
  • The amount of explained variance (multiple R squared; tables in figures) is higher for the CHC-consistent scholastic aptitude clusters when compared to the WJ III General Intellectual Ability (GIA-Std) cluster.  This is particularly true at the oldest ages for MR.  Of course, these values capitalize on chance factors due to the nature of multiple regression and would likely shrink somewhat in independent-sample cross-validation (yes...I could have split the sample in half to develop and then cross-validate the models...but I didn't).
  • These age-by-age plots provide a much more precise picture of the developmental nature of the relations between narrow CHC abilities and achievement than the McGrew & Wendling (2010) and Flanagan and colleagues reviews.  These findings suggest that when selecting tests for referral-focused selective assessment (see McGrew & Wendling, 2010), it is critical that examiners know the developmental nature of the CHC--ACH relations research.  The fact that some specific narrow CHC tests show such dramatic changes across the ages suggests that those who implement a CHC-based aptitude-achievement consistency SLD model must be cautious and not use a "one size fits all" approach when determining which CHC abilities should be examined for the aptitude portion of the consistency model.  An ability that may be very important at certain age levels may not be important at other age levels (e.g., Vis-Aud. Learning in the WJ III BRS aptitude cluster).
  • The above results further reinforce McGrew & Wendling's (2010) conclusion that the development of more "intelligent" referral-focused selective assessment strategies requires an understanding of the 3-way interaction of CHC abilities X ACH domains X Age (developmental status).
These results suggest that the field of intellectual assessment, particularly in the context of educational-related assessments, should go "Back to the Future."  The 1977 WJ and 1989 WJ-R batteries both included scholastic aptitude clusters (SAPTs; click here to read relevant select text from McGrew's two WJTCA books) as part of the WJ/WJ-R pragmatic decision-making discrepancy model.  In particular, see the Type I aptitude-achievement discrepancy feature in the second figure.  





The WJ and WJ-R SAPTs were differentially weighted combinations of the four best predictive tests across the norm sample.  See the two figures below, which show the weighting schemes used.  Because the computerized norm tables and scoring that are now possible did not yet exist, a single set of average test weights was used for all ages.

[WJ SAPT weights]




As I wrote in 1986, "because of their differential weighting system, the WJTCA Scholastic Aptitude clusters should provide some of the best curriculum-specific expectancy information available in the field of psychoeducational assessment" (p. 217).  Woodcock (1984), in a defense of the SAPTs in School Psychology Review, made it clear that the composition of these clusters was intended to make the best possible aptitude-achievement comparison.  He stated that "the mix of cognitive skills included in each of the four scholastic aptitude clusters represents the best match with those achievement skills that could be obtained from the WJ cognitive subtests" (p. 359).  However, the value of the WJ SAPTs was not fully appreciated at the time, largely due to the IQ-ACH discrepancy model that constrained assessment professionals from using these measures as intended (McGrew, 1994).  This, unfortunately, led to their elimination in the WJ III and their replacement with the Predicted Achievement (PA) option, which provided achievement-domain-specific predictions based on the age-based optimal weighting of the seven individual tests that comprised the WJ III GIA-Std cluster.  Although effective and a stronger predictor of achievement than the GIA-Std, the PA option never captured the attention of many assessment professionals...for a number of reasons (not covered here).

As I reiterated in 1994, when discussing the WJ-R SAPTs (same link as before), "The purpose of the WJTCA-R differential aptitude clusters is to provide predictions of current levels of achievement.  If a person obtains low scores on individual tests that measure cognitive abilities related to a specific achievement area and these tests are included in the aptitude cluster, then the person's current achievement expectancies should also be lowered.   This expectancy information will be more accurately communicated by the narrower WJTCA-R different aptitude clusters than by any broad-based score from the WJTCA-R or other tests" (p. 223).

The original WJ and WJ-R SAPTs were not presented as part of an explicitly defined comprehensive SLD identification model based on the concepts of consistency/concordance, as was eventually advanced by Flanagan et al., Hale et al., and Naglieri.  They were presented as part of a more general psychoeducational pragmatic decision-making model.  However, it is clear that the WJ and WJ-R SAPTs were ahead of their time, as they are philosophically in line with the aptitude portion of the aptitude-achievement consistency/concordance component of contemporary third method SLD models.  In a sense, the field has now caught up with the WJ/WJ-R operationalization of aptitude clusters, and they would now serve an important role in aptitude-consistency SLD models.  It is my opinion that they represented the best available measurement approach to operationalizing domain-specific aptitudes for different achievement domains, which is at the heart of the new SLD models.

It is time to bring the SAPTs back...Back to the Future...as the logic of their design is a nice fit with the aptitude component of the aptitude-achievement consistency/concordance SLD models.  The field is now ready for measures conceptualized and developed in this manner.


However, the original concept can now be improved upon via the methods and analyses presented in this (and the prior) post.  They can be improved upon via two methods:

1.  CHC-consistent aptitude clusters (aka CHC designer aptitudes).  The creation of 4-5 test clusters that are the best predictors of achievement subdomains should utilize the extant CHC COG-->ACH relations literature when selecting the initial pool of tests to include in the prediction models.  This extant research literature should also guide the selection of variables in the final models; the models should not be allowed to be driven by the raw empiricism of prediction.  This differs from the WJ and WJ-R SAPTs, which were designed primarily on empirical criteria (which combination predicted the most achievement variance), although their composition often made considerable theoretical sense when viewed through a post-hoc CHC lens.

2.  Provide age-based developmental weighting of the tests in the different CHC SAPTs.  The authors of the WJ III provided the necessary innovation to make this possible when they implemented an approach to constructing age-based differentially-weighted GIA g-scores via the WJ III computer scoring software.  The same technology can readily be applied to the development of CHC-designed SAPTs with developmentally shifting weights (as per the smoothed curves in the models above).  The technology is available.
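A rough sketch of what developmentally shifting weights might look like inside scoring software. The test names, anchor ages, and weight values below are entirely hypothetical illustrations, not the actual WJ/WJ-R weighting schemes:

```python
# Hypothetical smoothed weights for a basic-reading-skills aptitude cluster,
# keyed by anchor age; all names and values are illustrative only
WEIGHTS_BY_AGE = {
    6:  {"Verbal Comprehension": 0.20, "Vis-Aud Learning": 0.35,
         "Sound Blending": 0.25, "Processing Speed": 0.20},
    12: {"Verbal Comprehension": 0.40, "Vis-Aud Learning": 0.15,
         "Sound Blending": 0.20, "Processing Speed": 0.25},
    18: {"Verbal Comprehension": 0.55, "Vis-Aud Learning": 0.05,
         "Sound Blending": 0.15, "Processing Speed": 0.25},
}

def interpolated_weights(age):
    """Linearly interpolate the smoothed weight curves between anchor ages."""
    anchors = sorted(WEIGHTS_BY_AGE)
    age = max(anchors[0], min(anchors[-1], age))   # clamp to anchor range
    for lo, hi in zip(anchors, anchors[1:]):
        if lo <= age <= hi:
            t = (age - lo) / (hi - lo)
            return {k: WEIGHTS_BY_AGE[lo][k] * (1 - t) + WEIGHTS_BY_AGE[hi][k] * t
                    for k in WEIGHTS_BY_AGE[lo]}

def aptitude_composite(z_scores, age):
    """Weighted combination of test z-scores using age-specific weights."""
    w = interpolated_weights(age)
    return sum(w[test] * z for test, z in z_scores.items())

# Example: a 9-year-old's profile of test z-scores (hypothetical)
child = {"Verbal Comprehension": -1.0, "Vis-Aud Learning": -0.5,
         "Sound Blending": 0.2, "Processing Speed": 0.1}
```

The same child profile would yield a different aptitude composite at different ages, which is exactly the developmental sensitivity the smoothed-curve analyses argue for.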

Finally, I fully recognize that there are significant limitations in using an incremental variance-partitioning multiple regression approach to develop CHC-based SAPTs.  In other papers (g+specific abilities research using SEM causal models) I have been critical of this method.  The method was used here in an "intelligent" manner: the selection of the initial pool of predictors was guided by the extant CHC COG-ACH literature, and variables were not allowed to enter the final models blindly.  The purpose of this (and the prior) post is to demonstrate the feasibility of designing CHC-consistent scholastic aptitude clusters.  I am pursuing other analyses with different methods to expand and improve upon this set of formative analyses and results.

Build it and they shall come.