Showing posts with label SLD. Show all posts

Sunday, August 17, 2025

Thoughts on the definition of dyslexia: More on the ongoing debate re the definition of dyslexia - #dyslexia #SLD #schoolpsychologists #schoolpsychology #SPED #reading

Thoughts on the Definition of Dyslexia.  Annals of Dyslexia (click here to read or download - open access)

Linda S. Siegel,  David P. Hurford, Jamie L. Metsala, Michaela R. Ozier, & Alex C. Fender

Abstract 

The International Dyslexia Association's current definition of dyslexia was approved by its Board of Directors on November 12, 2002. After two decades of scientific inquiry into the nature of dyslexia, it is time to reconsider and potentially revise the definition in light of what has been learned. We propose a definition of dyslexia based on its essential nature. Dyslexia is a specific learning disability in reading at the word level. It involves difficulty with accurate and/or fluent word recognition and/or pseudoword reading. We also suggest that the definition should focus solely on dyslexia's core features and should not include risk factors, potential secondary consequences, or other characteristics. Until those factors can reliably differentiate between those with and without dyslexia at an individual level, they should not be included in the definition.

Tuesday, April 01, 2025

Research Byte: Science and Practice of Identifying #SpecificLearningDisabilities (#SLD): Kind Conversations About a Wicked Problem —#SPED #schoolpsychology #LD





Science and Practice of Identifying Specific Learning Disabilities: Kind Conversations About a Wicked Problem - Daniel B. Hajovsky, Kathrin E. Maki, Christopher R. Niileksela, Ryan J. McGill, 2025, Journal of Psychoeducational Assessment.  

Click here to visit journal page (unfortunately not an open access PDF).


Abstract
Although specific learning disabilities (SLD) represent the largest category for which school-age children receive special education services, the science and practice of SLD identification continues to evade consensus. Our goal is to bring together trainers and researchers with different perspectives on SLD identification to help spur a move toward a potential consensus, discuss agreements and disagreements on SLD identification in the field including amongst ourselves, and work toward productive discussion that may help move the field forward. We review essential conceptual questions that require greater scrutiny and thought to build a stronger understanding of SLD. We then discuss current assessment and identification practices, focusing on the not-so-controversial and the controversial issues in the field. Finally, we conclude with questions and considerations that challenge many of the established assumptions and systems currently in place. The aim of this article is to support constructive discussion on the topic of SLD that may have profound effects on the perennial issues the field continues to face.

Thursday, February 27, 2025

What is #dyslexia? An expert Delphi consensus on #dyslexia definition, #assessment and #identification - #SLD #dyslexia #SPED #schoolpsychology



An open access journal article that can be downloaded for reading.  Click here to access/download


ABSTRACT 

This paper discusses the findings of a Delphi study in which dyslexia experts, including academics, specialist teachers, educational psychologists, and individuals with dyslexia, were asked for their agreement with a set of key statements about defining and identifying dyslexia: why it should be assessed and how and when this assessment should be conducted. Two rounds of survey responses provided a vehicle for moving towards consensus on how to assess for dyslexia. Forty-two consensus statements were ultimately accepted. Findings suggested that assessment practice should take account of risks to the accurate identification of dyslexia. An assessment model, with guidelines for assessors, is presented, based on the Delphi's findings. This hypothesis-testing model requires assessors to investigate and weigh up the factors most likely to result in an accurate assessment before reaching conclusions, assigning terminology, and making recommendations for intervention and management.

Click on following images for larger more readable versions of figures




Tuesday, January 28, 2025

Research Byte: Diagnostic Criteria for Children with #Nonverbal #LearningDisability (#NVLD) based on #CHC theory - #schoolpsychology #schoolpsychologists #SLD #SPED #CHC


A Study on the Characteristics and Diagnostic Criteria of Children with Nonverbal Learning Disability (NVLD) based on CHC Theory


This was published in the Korean Learning Disability Association publication.  I have no direct access to this publication.  The link is here.  The generalizability to other populations (e.g., US) is unknown.

Abstract

Research has been conducted on the existence and characteristics of nonverbal learning disabilities (NVLD) over the past decades. However, consensus on whether they belong to learning disabilities has not been reached, and their characteristics and diagnostic criteria have not yet been clarified. To address these blind spots related to NVLD, cognitive characteristics were explored based on CHC intelligence theory, and the structure of the WISC test was analyzed to explore which subtests can be used to diagnose NVLD. The results of this study are as follows: First, it was confirmed that NVLD involves deficits in Gv (visual processing), Gf (fluid reasoning), and Gs (processing speed), in contrast to strengths in Gc (crystallized intelligence) in the CHC theory. Second, according to the structural analysis of the WISC test, it was confirmed that subtests in the areas of the Verbal Comprehension Index (VCI), Visual Spatial Index (VSI), Fluid Reasoning Index (FRI), and Processing Speed Index (PSI) can be used to diagnose NVLD. Based on these results, diagnosis and identification methods for NVLD, new terminology for the disability, and directions for subsequent studies are discussed.


Wednesday, November 20, 2024

Research Byte: A Systematic #Review of #WorkingMemory (#Gwm) Applications for #Children with #LearningDifficulties (#LD): Transfer Outcomes and Design Principles

A Systematic Review of Working Memory Applications for Children with Learning Difficulties: Transfer Outcomes and Design Principles

by Adel Shaban (Middlesbrough College, University Centre Middlesbrough, UK), Victor Chang (Aston Business School, Aston University, Birmingham, UK), Onikepo D. Amodu (Middlesbrough College, University Centre Middlesbrough, UK), Mohamed Ramadan Attia (Faculty of Specific Education, Fayoum University, Egypt), and Gomaa Said Mohamed Abdelhamid (Faculty of Education, Fayoum University, Egypt; College of Education, Sultan Qaboos University, Oman)

Educ. Sci. 2024, 14(11), 1260; https://doi.org/10.3390/educsci14111260

Visit article page where PDF of article can be downloaded

Abstract

Working memory (WM) is a crucial cognitive function, and a deficit in this function is a critical factor in learning difficulties (LDs). As a result, there is growing interest in exploring different approaches to training WM to support students with LDs. Following the PRISMA 2020 guidelines, this systematic review aims to identify current computer-based WM training applications and their theoretical foundations, explore their effects on improving WM capacity and other cognitive/academic abilities, and extract design principles for creating an effective WM application for children with LDs. The 22 studies selected for this review provide strong evidence that children with LDs have low WM capacity and that their WM functions can be trained. The findings revealed four commercial WM training applications—COGMED, Jungle, BrainWare Safari, and N-back—that were utilized in 16 studies. However, these studies focused on suggesting different types of WM tasks and examining their effects rather than making those tasks user-friendly or providing practical guidelines for the end-user. To address this gap, the principles of Human–Computer Interaction (with a focus on usability and user experience), relevant cognitive theories, and the design recommendations from the selected studies have been reviewed to extract a set of proposed guidelines. A total of 15 guidelines have been extracted that can be utilized to design WM training programs specifically for children with LDs.



Wednesday, November 13, 2024

Research Byte: #MetaAnalysis of Research on Children With #LearningDisabilities - #LD #SLD #metaanalysis

H. Lee Swanson

University of New Mexico/University of California-Riverside

DOI: https://doi.org/10.18666/LDMJ-2023-V28-I2-12307

Keywords: specific learning disabilities, meta-analysis, intelligence, reading disabilities, math disabilities

Abstract

The purpose of this paper is to review some of our meta-analysis findings that apply to controversial issues within the field of specific learning disabilities (SLD). Four issues are discussed: (1) What is the role of intelligence, (2) Does response to instruction (RtI) reduce the incidence of children with SLD, (3) What role do cognitive processes play in identification and treatment outcomes for children with SLD, and (4) What is the best instructional model for children with SLD? The presentation provided eight general conclusions based on a synthesis of the research literature. Although children with SLD are responsive to evidence-based instructional practices, such practices have not yet been able to eliminate the gap in achievement with average-performing peers.

Wednesday, February 08, 2017

WJ IV ASB 8 available: The WJ IV Core-Selective Evaluation Process Applied to the Identification of a Specific Learning Disability

Click on image to enlarge


WJ IV ASB # 8 (The WJ IV™ Core-Selective Evaluation Process Applied to Identification of a Specific Learning Disability) is now available (click here to download)

Fredrick A. Schrank, PhD, ABPP

Tammy L. Stephens-Pisecco, PhD

Edward K. Schultz, PhD

Abstract
Each of the WJ IV batteries contains a “core” set of tests that provides a representative survey of abilities measured by the battery. Examiners can selectively administer additional tests to provide greater breadth of measurement in an area of cognition or linguistic competency or in a domain of achievement. This Assessment Service Bulletin describes how to use the WJ IV in a core-selective evaluation process (C-SEP) for identification of a specific learning disability (SLD). The basic premise of the C-SEP model for SLD identification is that test selection and data analysis are proportional to problem complexity—based on the presenting problem or referral question and the evaluator's professional judgment in determining what tests to administer. Information provided in this bulletin can be used to support professional judgment in determining what tests, beyond the core tests, to administer in an evaluation. Test-to-cluster correlation tables support the validity of the C-SEP as a data-based model for diagnostic decision making.

I did contribute the appendix which reports correlations from the WJ IV norm sample between all the Cognitive and Oral Language tests and the WJ IV achievement clusters at different age groups. We ran out of time to get this information in the technical manual. It should help with the design of selective referral-focused assessments, as described in the ASB.

- Posted using BlogPress from my iPad

Tuesday, November 22, 2016

Research Bytes: Cognitive Clusters in Specific Learning Disorder

Cognitive Clusters in Specific Learning Disorder

Michele Poletti, PsyD (Child and Adolescent Neuropsychiatry Service, AUSL of Reggio Emilia, Italy)
Elisa Carretta, MS (Inter-Institutional Epidemiological Unit, AUSL of Reggio Emilia; Arcispedale S. Maria Nuova, IRCCS, Reggio Emilia, Italy)
Laura Bonvicini, MS (Inter-Institutional Epidemiological Unit, AUSL of Reggio Emilia; Arcispedale S. Maria Nuova, IRCCS, Reggio Emilia, Italy)
Paolo Giorgi-Rossi, PhD (Inter-Institutional Epidemiological Unit, AUSL of Reggio Emilia; Arcispedale S. Maria Nuova, IRCCS, Reggio Emilia, Italy)

Corresponding author: Michele Poletti, Department of Mental Health and Pathological Addiction, Child Neuropsychiatry Service, AUSL of Reggio Emilia, Via Amendola 2, 42100, Reggio Emilia, Italy. Email: michele.poletti2@ausl.re.it

Abstract

The heterogeneity among children with learning disabilities still represents a barrier and a challenge in their conceptualization. Although a dimensional approach has been gaining support, the categorical approach is still the most adopted, as in the recent fifth edition of the Diagnostic and Statistical Manual of Mental Disorders. The introduction of the single overarching diagnostic category of specific learning disorder (SLD) could underemphasize interindividual clinical differences regarding intracategory cognitive functioning and learning proficiency, according to current models of multiple cognitive deficits at the basis of neurodevelopmental disorders. The characterization of specific cognitive profiles associated with an already manifest SLD could help identify possible early cognitive markers of SLD risk and distinct trajectories of atypical cognitive development leading to SLD. In this perspective, we applied a cluster analysis to identify groups of children with a Diagnostic and Statistical Manual–based diagnosis of SLD with similar cognitive profiles and to describe the association between clusters and SLD subtypes. A sample of 205 children with a diagnosis of SLD were enrolled. Cluster analyses (agglomerative hierarchical and nonhierarchical iterative clustering technique) were used successively on 10 core subtests of the Wechsler Intelligence Scale for Children–Fourth Edition. The 4-cluster solution was adopted, and external validation found differences in terms of SLD subtype frequencies and learning proficiency among clusters. Clinical implications of these findings are discussed, tracing directions for further studies.

Wednesday, February 11, 2015

The WJ IV ASB # 3: The WJ IV Gf-Gc Composite and SLD identification

I am pleased to announce that the WJ IV Assessment Service Bulletin # 3 (The WJ IV Gf-Gc Composite and its use in the identification of specific learning disabilities) is now available here. It will be posted at the publisher's WJ IV web site within a week. Below is the abstract.

The authors of the Woodcock-Johnson IV (WJ IV; Schrank, McGrew, & Mather, 2014a) discuss the WJ IV Tests of Cognitive Abilities (WJ IV COG; Schrank, McGrew, & Mather, 2014b) Gf-Gc Composite, contrast its composition with that of the WJ IV COG General Intellectual Ability (GIA) score, and synthesize important information that supports its use as a reliable and valid measure of intellectual development or intellectual level. The authors also suggest that the associated WJ IV COG Gf-Gc Composite/Other Ability comparison procedure can yield information that is relevant to the identification of a specific learning disability (SLD) in any model that is allowed under the 2004 reauthorization of the federal Individuals with Disabilities Education Improvement Act (IDEA).




Click on image to enlarge

Posted using BlogPress from my iPad

Tuesday, December 25, 2012

What we've learned from 20 years of CHC COG-ACH relations research: Back to the future and Beyond CHC

A draft of the paper I presented at the 1st Richard Woodcock Institute on Advances in Cognitive Assessment (this past spring at Tufts) can now be read by clicking here. Three of the 12 figures are included below ... as a tease :). The final paper will be published by WMF Press.

 

Sunday, November 25, 2012

Implications of 20 Years of CHC Cognitive-Achievement Research: Back-to-the-Future and Beyond CHC

[Click image to enlarge]
 
The key slides from my presentation at the first Richard Woodcock Institute on Cognitive Assessment are now posted at SlideShare.  I thought I had posted these before, but I can't seem to find them.  So here they are for the first (or second) time.  Below is the abstract for the paper that I also submitted--to be published eventually by the WMF Press.


Much has been learned about CHC COG-->ACH relations during the past 20 years (McGrew & Wendling, 2010).  This paper built on this extant research by first clarifying the definitions of abilities, cognitive abilities, achievement abilities, and aptitudes.  Differences between domain-general and domain-specific CHC predictors of school achievement were defined.   The promise of Kaufman's “intelligent” intelligence testing approach was illustrated with two approaches to CHC-based selective referral-focused assessment (SRFA).  Next, a number of new intelligent test design (ITD) principles were described and demonstrated via a series of exploratory data analyses that employed a variety of data analytic tools (multiple regression, SEM causal modeling, multidimensional scaling).  The ITD principles and analyses resulted in the proposal to construct developmentally sensitive, CHC-consistent scholastic aptitude clusters, measures that can play an important role in contemporary third-method (pattern of strengths and weaknesses) approaches to SLD identification.
The need to move beyond simplistic conceptualizations of COG-->ACH relations and SLD identification models was argued and demonstrated via the presentation and discussion of CHC COG-->ACH causal SEM models.  Another example was the proposal to identify and quantify cognitive-aptitude-achievement trait complexes (CAATCs).  A revision of current PSW third-method SLD models was proposed that would integrate CAATCs.  Finally, the need to incorporate the degree of cognitive complexity of tests and composite scores within CHC domains into the design and organization of intelligence test batteries (to improve the prediction of school achievement) was proposed.  The various proposals presented in this paper represented a mixture of (a) a call to return to old ideas with new methods (Back-to-the-Future) and (b) the embracing of new ideas, concepts, and methods that require psychologists to move beyond the confines of the dominant CHC taxonomy of human cognitive abilities (i.e., Beyond CHC).




Monday, October 15, 2012

Research byte: Cognitive-neuro models of reading: A meta-analysis

Excellent research synthesis study that relates cognitive models of reading ability/disability to brain regions. Awesome use of colors and figures to demonstrate research results. Click on images to enlarge.

 

Friday, August 10, 2012

AP101 Brief # 15: Beyond CHC: Cognitive-Aptitude-Achievement Trait Complex Analysis: Implications for SLD Assessment and Dx




This is the final post in a series of posts clarifying the nature of cognitive, aptitude, and achievement ability constructs.  Readers should consult the preceding post (which contains links to all prior background posts) that defined cognitive abilities, aptitudes, achievement abilities, and CHC cognitive-aptitude-achievement trait complexes (CAATCs).  I apologize for not including the reference list.  These posts are snippets of a manuscript in preparation, and I like to post to IQ's Corner for feedback that I might incorporate in the final manuscript.  References are the last thing I do.

Beyond CHC:  CHC Cognitive-Aptitude-Achievement Trait Complex Analyses

I have previously argued that alternative non-factor-analytic methodological lenses (e.g., multidimensional scaling; MDS) and theoretical lenses need to be applied to validated CHC measures to better understand “both the content and processes underlying performance on diverse cognitive tasks” (McGrew, 2005, p. 172).  When MDS “faceted” methods have been applied to data sets previously analyzed by exploratory or confirmatory factor methods, “new insights into the characteristics of tests and constructs previously obscured by the strong statistical machinery of factor analysis emerge” (Schneider & McGrew, 2012, p. 110).[1]

Following methods similar to those explained and demonstrated by Beauducel, Brocke and Liepmann (2001), Beauducel and Kersting (2002), Süß and Beauducel (2005), Tucker-Drob and Salthouse (2009; this is an awesome example of MDS analyses side-by-side with factor analysis of the same set of variables) and Wilhelm (2005), I subjected all WJ-R standardization subjects (McGrew, Werder & Woodcock, 1991) who had complete sets of scores (i.e., listwise deletion of missing data) for the WJ-R Broad Cognitive Ability-Extended (BCA-EXT), Reading Aptitude (RAPT), Math Aptitude (MAPT), Written Language Aptitude (WLAPT), Gf-Gc cognitive factors (Gf, Gc, Glr, Gsm, Gv, Ga, Gs), and Broad Reading (BRDG), Broad Math (BMATH), and Broad Written Language (BWLANG) achievement clusters to a Guttman Radex MDS analysis (n = 4,328 subjects from early school years to late adulthood).[2]  MDS procedures have more relaxed assumptions than linear statistical models and allow for the simultaneous analysis of variables that share common tests—a situation that results in non-convergence problems due to excessive multicollinearity when using linear statistical models.  This feature made it possible to explore the degree of similarity of the WJ-R operationalized measures of the constructs of cognitive abilities, general intelligence (g), scholastic aptitudes, and academic achievement in a single analysis.  That is, it was possible to explore the relations between and among the core elements of CHC-based cognitive-aptitude-achievement trait complexes (CAATCs).  The results are presented in Figure 1. [Click on images to enlarge]
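The core of an analysis like this, converting a correlation matrix among cluster scores into dissimilarities and projecting them into a 2-D space, can be sketched with classical (Torgerson) MDS. This is only an illustrative sketch: the correlation matrix below is invented for demonstration (it is not the actual WJ-R norm-sample data), and the original analysis used a Guttman Radex model rather than this bare-bones projection.

```python
import numpy as np

# Hypothetical correlation matrix among five WJ-R-style measures
# (values invented for illustration; not the actual norm-sample data).
labels = ["BCA-EXT", "GRWAPT", "MAPT", "Gv", "BRDG"]
R = np.array([
    [1.00, 0.91, 0.91, 0.60, 0.75],
    [0.91, 1.00, 0.80, 0.50, 0.80],
    [0.91, 0.80, 1.00, 0.55, 0.60],
    [0.60, 0.50, 0.55, 1.00, 0.35],
    [0.75, 0.80, 0.60, 0.35, 1.00],
])

# Convert correlations to dissimilarities: d = sqrt(2 * (1 - r))
D = np.sqrt(2.0 * (1.0 - R))

# Classical (Torgerson) MDS: double-center the squared dissimilarities
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
B = -0.5 * J @ (D ** 2) @ J                # inner-product matrix

# Coordinates from the top-2 eigenvectors scaled by sqrt(eigenvalues)
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1][:2]
coords = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))

# Distance from the plot's center (origin) indexes cognitive complexity:
# a highly g-saturated composite should fall nearest the center.
dist_from_center = np.linalg.norm(coords, axis=1)
for lab, d in zip(labels, dist_from_center):
    print(f"{lab:8s} distance from center = {d:.2f}")
```

Because the double-centering places the configuration's centroid at the origin, a measure that correlates highly with all the others has small dissimilarities to every point and tends to land near the center of the plot, mirroring the "proximity to the center as cognitive complexity" interpretation used in the Radex readings below.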


Figure 1 (Click on image to enlarge)

WJ-R MDS Analysis:  Basic Interpretation

In Guttman Radex models, variables closest to the center of the 2-D plots are the most cognitively complex. Also, the variables are located along two continua or dimensions that often have substantive/theoretical interpretations.  The two dimensions in Figure 1 are labeled A<->B and C<->D.  The following is concluded from a review of Figure 1:

--The WJ-R g-measure (BCA-EXT) is almost directly at the center of the plot and is the most cognitively complex variable.  This makes theoretical sense given that it is a composite comprised of 14 tests from 7 of the CHC Gf-Gc cognitive domains.  Proximity to the center of MDS plots is sometimes considered evidence for g.

--Reading and Writing Aptitude (GRWAPT) and MAPT are also cognitively complex.  Both the GRWAPT[3] and MAPT clusters are comprised of four equally weighted tests of four different Gf-Gc abilities—and thus, the finding that they are also among the most cognitively complex WJ-R measures is not surprising.  The CHC Gf-Gc cognitive measures of Gf and Gc are much more cognitively complex than Gv, Glr, Ga and Gsm.[4]

--The A<->B dimension appears to reflect the ordering of variables as per stimulus content, a common finding in MDS analyses.  The cognitive variables on the left-hand side of the continuum midline (Gv, Glr, Gf, Gs, MAPT) are comprised of measures with predominant visual-figural or numeric/quantitative characteristics.  The majority of the variables on the right-hand side of the continuum midline (GRWAPT, Gc, Ga, Gsm, BRDG, BWLANG) are characterized as more auditory-linguistic, language, or verbal.  This visual-figural/numeric/quantitative-to-auditory-linguistic/language/verbal content dimension is very similar to the verbal, figural, and numeric content facets of the Berlin Model of Intelligence Structure (BIS; Süß and Beauducel, 2005).[5]

--The C<->D dimension appears to reflect the ordering of variables as per cognitive operations or processes, another common finding in MDS analyses.  The majority of the cognitive variables above the continua midline (Gv, Glr, Ga, Gc, Gsm, BCAEXT, GRWAPT) are comprised primarily of cognitive ability tasks that involve mental processes or operations.  Conversely, although not as consistent, three of the lowest variables below the continua midline are the achievement ability clusters (BRDG, BWLANG, BMATH).  Thus, the C<->D dimension is interpreted as representing a cognitive operations/process-to-acquired knowledge/product dimension.

--In contrast to factor analysis, interpretation of MDS is more qualitative and subjective.  Variables that may share a common dimension are typically identified as lying on relatively straight lines or planes, in separate quadrants or partitions, or in tight groupings (often represented by circles or ovals or connected as a shape via lines).  Inspecting the four quadrants created by the A<->B and C<->D dimensions (see Figure 1) suggests the following.  The AC quadrant is interpreted to represent (excluding BCAEXT, which is near the center) cognitive operations with visual-figural content (Gv; Glr).  The CB quadrant is interpreted as representing auditory-linguistic/language/verbal content-based cognitive operations.  The BD quadrant only includes the three broad achievement clusters, and is thus an achievement or acquired knowledge dimension.  Finally, the DA quadrant can be interpreted as cognitive operations that involve quantitative operations or numeric stimuli (e.g., Gf is highly correlated with math achievement; McGrew & Wendling, 2010; one-half of the Gs-P cluster is the Visual Matching test, which requires the efficient perceptual processing of numeric stimuli—Gs-N).[6]  The interpretation of these four quadrants is very consistent with the BIS faceted content-by-operations model research.

--The theoretical interpretation of the two continua and four quadrants provides potentially important insights into the abilities measured by the WJ-R measures.  More importantly, the conclusions provide potentially important theoretical insights into the nature of human intelligence, insights that typically fail to emerge when using factor analysis methods (see Schneider & McGrew, 2012, and Süß and Beauducel, 2005). In other MDS analyses I have completed, similar visual-figural/numeric/quantitative-to-auditory-linguistic/language/verbal and cognitive operations/process-to-acquired knowledge/product continua dimensions have emerged (McGrew, 2005; Schneider & McGrew, 2012).  When I have investigated a handful of 3-D MDS[7] models, the same two dimensions emerge along with a third automatic-to-deliberate/controlled cognitive processing dimension, which is consistent with the prominent dual-process models of cognition and neurocognitive functioning (Evans, 2008, 2011; Barrouillet, 2011; Reyna & Brainerd, 2011; Rico & Overton, 2011; Stanovich, West & Toplak, 2011) that are typically distinguished as Type I/II or System I/II (see Kahneman's (2011) highly acclaimed Thinking, Fast and Slow).[8]

--These higher-order cognitive processing dimensions, which are not present in the CHC taxonomy, suggest that intermediate strata (or dimensions that cut across broad CHC abilities) might be useful additions to the current three-stratum CHC model.  These higher-order dimensions may be capturing the essence of fundamental neurocognitive processes and argue for moving beyond CHC to integrate neurocognitive research to better understand intellectual performance.


WJ-R MDS Analysis:  Cognitive-Aptitude-Achievement Trait Complex (CAATC) Interpretation

Figure 2 is an extension of the results presented in Figure 1.  Two different CAATCs are suggested.  These were identified by starting first with the BMATH and BRDG/BWLANG achievement variables and then connecting these variables to their respective scholastic aptitude clusters (SAPTs; GRWAPT, MAPT).  Next, the closest cognitive Gf-Gc measures that were in the same general linear path were connected (the goal was to find the math- and reading-related variables that were closest to lying on a straight line).  Ovals encompassing the entire space comprising the two circle-line-circle traces were superimposed on the figure.  A dotted line representing the approximate bisection of each cognitive-aptitude-achievement trait complex vector was drawn.  Finally, an approximate correlation (r = .55; see Figure 2) between the two multidimensional CAATCs was estimated via measurement of the angle between the CAATC vector dotted lines.[9]
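The final step, estimating the correlation between the two trait-complex axes from the angle between their bisecting lines, rests on the vector-model identity r = cos(theta). A minimal sketch, using invented direction vectors (the actual Figure 2 coordinates are not reproduced here) chosen so the angle is roughly 57 degrees:

```python
import numpy as np

# Hypothetical bisector direction vectors for the two CAATC axes in the
# 2-D MDS plot (coordinates invented for illustration only).
math_axis = np.array([1.00, 0.00])   # math trait-complex bisector
rw_axis   = np.array([0.55, 0.84])   # reading-writing trait-complex bisector

# In a vector model, the cosine of the angle between two axes estimates
# their correlation: r = cos(theta).
cos_theta = (math_axis @ rw_axis) / (
    np.linalg.norm(math_axis) * np.linalg.norm(rw_axis)
)
theta_deg = np.degrees(np.arccos(cos_theta))
print(f"angle = {theta_deg:.1f} degrees, estimated r = {cos_theta:.2f}")
```

An angle of roughly 57 degrees corresponds to cos(theta) of about .55, the value reported for the math/reading-writing CAATC correlation in Figure 2.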

Figure 2 (Click on image to enlarge)

As presented in Figure 2, Math and Reading-Writing CAATCs are suggested as a viable perspective from which to view the relations between cognitive abilities, aptitudes, and achievement abilities.  The primary conclusions, insights, and questions drawn from Figures 1 and 2 are:

--It appears that the potential exists to empirically identify CAATCs via the use of CHC-grounded theory, the extant CHC COG->ACH relations research, and multidimensional scaling.  It also appears possible to estimate the correlation between different trait complexes (see math/reading-writing trait complex r = .55 in Figure 2).  I suggest these preliminary findings may help the field of cognitive-achievement assessment and research better approximate the multidimensional nature of human cognitive abilities, aptitudes, and achievement abilities.

--Although the WJ-R battery is not as comprehensive a measure of CHC abilities as the WJ III, the cognitive abilities within the respective math and reading/writing CAATCs are very consistent with the extant CHC COG->ACH relations research (McGrew & Wendling, 2010; click here for visual-graphic summary).  The reading-writing trait complex (see Figure 2) includes Ga-PC, Gc-LD/VL, and via the GRWAPT, Gs-P, and Gsm-MS, abilities that are listed as domain-general and domain-specific abilities in Figure 3.  In the case of math, the trait complex includes indicators of Gf-RG, Gv-MV, and via the MAPT, Gs-P (Visual Matching, which might also tap Gs-N) and Gc-LD/VL, abilities that are either domain-general or domain-specific for math in Figure 3.  Working memory (Gsm-WM) is not present (as suggested by Figure 3) as the WJ-R battery did not include a working memory cluster that could enter the analysis.


Figure 3 (Click image to enlarge)

--Also of interest are the three WJ-R cognitive factors (Gsm-MS, Glr-MA, Gs-P) that are excluded from the hyperspace representations of the proposed math and reading-writing CAATCs.  Although highly speculative, it is possible that their separation from the designated trait complexes suggests that, if they are known to be related to reading-writing or math achievement, their independence from the narrower trait complexes may indicate that they represent domain-general abilities.  Glr-MA and Gs-P are both listed as domain-general abilities in Figure 3.  Additional work is needed to determine if the independence (from identified CAATCs) of CHC measures known to be significantly related to achievement indicates domain-general abilities.  Alternatively, it is very possible, given the previously demonstrated developmental nuances of CHC COG->ACH relations, that the results presented in Figures 1 and 2, which used the entire age range of the WJ-R measures, may mask or distort findings in unknown ways.

--Those knowledgeable of the CHC COG->ACH relations research will note the prior inclusion of certain Gv abilities (Vz, SR, MV) in Figure 3, as well as the inclusion of the WJ-R Gv-MV/CS cluster as part of the proposed math CAATC (Figure 2), despite the lack of consistently reported significant CHC Gv->ACH relations.  McGrew and Wendling (2010) recognized that some Gv abilities have clearly been linked to reading and math achievement (especially the latter) in non-CHC-organized research.  They speculated that the “Gv Mystery” may be due to certain Gv abilities being threshold abilities, or to the cognitive batteries included in their review not including Gv measures that tap complex Gv-related Vz or MV processes.  Given this context, it may be an important finding (via the methods described above) that the WJ-R Gv measure is unexpectedly included in the math CAATC.  This may support the importance of Gv abilities in explaining math achievement and concurrently indicate a problem with the operational Gv measures.

--The long distance from the WJ-R Gv measure to the center of the diagram (see Figure 2) indicates that the WJ-R Gv measure, which included tests classified as indicators of CS and MV, is not cognitively complex.  This conclusion is consistent with Lohman’s seminal review of Gv abilities (Lohman, 1979), where he specifically mentions CS and MV as representing low-level Gv processes: “such tests and their factors consistently fall near the periphery of scaling representations, or at the bottom of a hierarchical model” (Lohman, 1979, pp. 126-127).  I advance the hypothesis that the math CAATC in Figure 2 suggests that Gv is a math-relevant domain, but that more complex Gv tests (e.g., 3-D mental “mind’s eye” rotation; complex visual working memory), which would be closer to the center of the MDS hyperspace, need to be developed and included in cognitive batteries.  This suggestion is consistent with Wittmann’s concept of Brunswik symmetry, which, in turn, is founded on the fundamental concept of symmetry that has been central to success in almost all branches of science (Wittmann & SÜß, 1999).  The Brunswik symmetry model argues that in order to maximize prediction or explanation between predictor and criterion variables, one should match the level of cognitive complexity of the variables in both the predictor and criterion space (Hunt, 2011; Wittmann & SÜß, 1999).  The WJ-R Gv to WJ-R BMATH relation may represent a low (WJ-R Gv)-to-high (WJ-R BMATH) predictor-criterion complexity mismatch, thus dooming any possible significant relation.

--Researchers and practitioners in the area of SLD should recognize that when third-method POSW “aptitude-achievement” discrepancies are evaluated to determine “consistency,” the combination of domain-general and domain-specific abilities that comprises an aptitude for a specific achievement domain can, in many ways, be considered a mini-proxy for general intelligence (g).  In Figures 1 and 2 the BCA-EXT, MAPT, and GRWAPT variables are in close proximity (which also represents high correlation) and are all near the center of the MDS Radex model.  The manifest correlations between the WJ-R BCA-EXT (in the WJ-R data used to generate the CAATCs in Figures 1 and 2) and the RAPT, WLAPT, and MAPT clusters are .91, .89, and .91, respectively.  This reflects the reality of the CHC COG->ACH research: in both reading and math achievement, cognitive tests or clusters with high g-loadings (viz., measures of Gc and Gf), as well as shared domain-general abilities, are always in the pool of CHC measures associated with the academic deficit.

--However, the placement of GRWAPT and MAPT in different content/operations quadrants in Figures 1 and 2 suggests that it may be possible to develop more differentiated CHC-designed achievement-domain SAPT measures.  The manifest correlations between MAPT and the two GRWAPT measures were .82 to .84, suggesting roughly 67% to 71% shared variance.  GRWAPT and MAPT are strongly related SAPTs, yet there is still unique variance in each.  Furthermore, the WJ-R SAPT measures used in this analysis were equally weighted clusters, not the differentially weighted clusters of the original WJ.  As presented previously, research suggests that optimal SAPT prediction requires developmentally shifting weights across age.  It is my opinion that the development of developmentally sensitive CHC-designed SAPTs will result in lower correlations between RAPT and MAPT measures.
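The shared-variance estimate above follows directly from squaring the manifest correlations. A minimal sketch, using only the correlations reported in the text:

```python
# Shared variance between two aptitude clusters is the squared correlation
# (coefficient of determination). The correlations (.82, .84) are those
# reported above for MAPT vs. the two GRWAPT clusters.
for label, r in [("MAPT-GRWAPT (reading)", 0.82), ("MAPT-GRWAPT (writing)", 0.84)]:
    shared = r ** 2          # proportion of variance shared by the two SAPTs
    unique = 1.0 - shared    # variance unique to each measure
    print(f"{label}: r = {r:.2f}, shared = {shared:.0%}, unique = {unique:.0%}")
```

The roughly 30% unique variance is what leaves room for more differentiated, domain-specific SAPT measures.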


Beyond CHC Theory:  Cognitive-Aptitude-Achievement Trait Complexes and SLD Identification Models

The possibility of measuring, mapping, and quantifying CAATCs raises intriguing possibilities for re-conceptualizing approaches to the identification of SLD.  Figure 4 presents a generic representation of the prevailing third-method SLD models as well as a formative proposal for a conceptual revision.  As noted previously, the prevailing POSW model (left half of Figure 4), although useful for communication and for enhancing understanding of the conceptual approach, is simplistic.  Implementation of the model requires successive calculations of simple (and often multiple) discrepancies, which fail to capture the multidimensional and multivariate nature of human cognitive, aptitude, and achievement abilities.  I believe that the CAATC representations in Figure 2, although still clearly imperfect and fallible representations of the non-linear nature of reality, are a better approximation of the complex nature of cognitive-aptitude trait complex relations.  The right side of Figure 4 is an initial attempt to conceptualize SLD within a CAATC framework.  In this formative model, the bottom two components of the current third-method models (i.e., academic and cognitive weakness) have been combined into a single multidimensional CAATC domain.



Figure 4 (Click on image to enlarge)

CAATCs better operationalize the notion of consistency among the multiple cognitive, aptitude, and achievement elements of an important academic learning domain or domain of SLD.  As noted in the operational definition of a CAATC presented earlier, the emphasis is on a constellation or combination of elements that are related and combine in a functional fashion.  These characteristics imply a form of centrally inward-directed force that pulls elements together, much like magnetism.  Cohesion appears to be the most appropriate term for this form of multiple-element bonding.  Cohesion is defined, per the Shorter Oxford English Dictionary (2002), as “the action or condition of sticking together or cohering; a tendency to remain united” (p. 444).  Element bonding and stickiness are also conveyed in the APA Dictionary of Psychology (VandenBos, 2007) definition of cohesion as “the unity or solidarity of a group, as indicated by the strength of the bonds that link group members to the group as a whole” (p. 192).  Thus, in the CAATC-based SLD proposal in Figure 4, evaluating the degree of cohesion within a CAATC (as designated by the circular icon shape) is considered an integral and critical step in ascertaining whether a strongly cohesive CAATC, one that represents a particular academic domain deficit, is present.

The stronger the within-CAATC cohesion, the more confidence one could place in the identification of a CAATC as possibly indicative of a SLD.  This focus on quantifying the CAATC cohesion is seen as a necessary, but not sufficient, first step in attempting to identify SLD based on a multivariate POSW.  If the CAATC demonstrates very weak cohesion, the hypothesis of a possible SLD should receive less consideration.  If there is significant (yet to be defined) moderate to strong CAATC cohesion, then the comparison of the CAATC to the cognitive/academic strengths portion of the conceptual model is appropriate for SLD consideration.  To simplify, POSW-based SLD identification would be based first on the identification of a weakness in a cohesive specific CAATC which is then determined to be significantly discrepant from relative strengths in other cognitive and achievement domains.  
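The two-step logic just described can be sketched as a simple decision function. This is a hypothetical illustration only: the function name, cutoffs, and metric scales below are placeholder assumptions, since operational cohesion and discrepancy metrics have yet to be developed.

```python
# Speculative sketch of the two-step CAATC-based POSW screen described above.
# Step 1: require a sufficiently cohesive CAATC weakness.
# Step 2: require that weakness to be discrepant from cognitive/academic strengths.
# All threshold values are arbitrary placeholders, not validated cut scores.

def caatc_sld_screen(cohesion: float, discrepancy: float,
                     cohesion_cutoff: float = 0.60,
                     discrepancy_cutoff: float = 1.0) -> str:
    if cohesion < cohesion_cutoff:
        return "weak cohesion: SLD hypothesis receives less consideration"
    if discrepancy < discrepancy_cutoff:
        return "cohesive CAATC, but not discrepant from strengths"
    return "cohesive CAATC weakness discrepant from strengths: consider SLD"

print(caatc_sld_screen(cohesion=0.75, discrepancy=1.4))
```

The point of the sketch is the ordering: cohesion is evaluated first, and the strength/weakness comparison is reached only when cohesion is adequate.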

Of course, additional variations of this model require further exploration.  For example, should discrepant/discordant comparisons be made between other empirically identified and quantified CAATCs?  Would CAATC-to-CAATC comparisons between highly empirically and theoretically correlated CAATCs (e.g., basic reading skills and basic writing skills), when contrasted with less empirically and theoretically correlated CAATC-to-CAATC domains (e.g., basic reading skills and math reasoning), be diagnostically important?  I have more questions than answers at this time.
      
Yes—this proposed framework is speculative and in the formative stages of conceptualization.  It is based on exploratory data analyses, theoretical considerations, and well-reasoned logic.  It is not yet ready for applied practice.  Appropriate statistical metrics and methods for operationalizing the degree of domain cohesion are required.  I do not see this as an insurmountable hurdle, as methods based on Euclidean distance measures (e.g., Mahalanobis and/or Minkowski distance) exist that can quantify the cohesion among CAATC measures as well as the distance of all the trait complex elements from the centroid of a CAATC.  Or, statisticians much smarter than I might apply centroid-based multivariate statistical measures to quantify and compare CAATC domain cohesion.  I urge those with such skills and interest to pursue the development of these metrics.  Also, the current limited exploratory results with the WJ-R data should be replicated and extended in more contemporary samples with a larger range of CHC cognitive, aptitude, and achievement tests and clusters.  I would encourage split-sample CAATC model development and cross-validation in the WJ III norm data.
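As a speculative illustration of the centroid-distance idea mentioned above, one candidate cohesion index is the mean Euclidean distance of a CAATC's elements from their centroid in the MDS coordinate space. The coordinates below are fabricated for illustration only; they are not values from the WJ-R analysis.

```python
import math

def cohesion_index(points):
    """Mean Euclidean distance of CAATC elements from their centroid.
    Smaller values indicate tighter within-complex cohesion."""
    dim = len(points[0])
    centroid = tuple(sum(p[i] for p in points) / len(points) for i in range(dim))
    return sum(math.dist(p, centroid) for p in points) / len(points)

# Fabricated 2-D MDS coordinates for two hypothetical trait complexes:
tight = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.25)]   # elements cluster together
loose = [(0.10, 0.90), (0.80, 0.10), (0.50, 0.50)]   # elements are dispersed
print(cohesion_index(tight) < cohesion_index(loose))  # → True
```

A Mahalanobis variant would simply replace the Euclidean distance with a covariance-weighted distance; the centroid logic is the same.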

The proposed CAATC framework, and its integration into SLD models, is, at this time, simply that—a proposal.  It is not ready for prime-time, in-the-field implementation.  It is presented here as a formative idea that will hopefully encourage others to explore.  Additional research and development, some of which I suggested above, will show it to be either a promising methodology, an idea with limited validity, or one with too many practical constraints to implement.  Nevertheless, the results presented here suggest promise.  They suggest possible incremental progress toward better defining SLD and learning complexes that are more consistent with nature—toward the identification of CAATC taxa[10] that better approximate “nature carved at the joints” (Meehl, 1973, as quoted and explained by Greenspan, 2006, in the context of MR/ID diagnosis).  Such a development would be consistent with Reynolds and Lakin’s (1987) plea, 25 years ago, for disability identification methods that better represent dispositional taxa rather than classes or categories based on specific cutting scores grounded in “administrative conveniences with boundaries created out of political and economic considerations” (p. 342).






[1] See SÜß and Beauducel (2005) and Tucker-Drob and Salthouse (2009) for excellent descriptions of these methods and illustrative results.

[2] The WJ-R battery was analyzed since it was the last version of the WJ series to include scholastic aptitude clusters.

[3] As noted in Figure 1, the Reading and Written Language Aptitude clusters, which were separate variables in the analysis, shared 3 of 4 common tests and nearly overlapped in the MDS plot.  Thus, for simplicity, they were combined into the single GRWAPT variable in Figure 1.  This is also consistent with factor analyses of reading and writing achievement variables, which typically produce a single Grw factor and not separate reading and writing factors.

[4] The primary narrow abilities measured by each of the cognitive Gf-Gc clusters are included in the label for each cluster.  Unlike the WJ III, the WJ-R Gf-Gc clusters were not all operationally constructed as broad Gf-Gc abilities (see McGrew, 1997; McGrew & Woodcock, 2001).  Only the WJ-R Gf and Gc clusters can be interpreted as measuring broad domains, per the requirement that broad measures include indicators of different narrow abilities (e.g., Concept Formation-I and Analysis-Synthesis-RG).  The other five WJ-R Gf-Gc clusters are now understood to be valid indicators of narrow CHC abilities (Gsm-MS; Ga-PC; Glr-MA; Gv-MV/CS; Gs-P).

[5]  The BIS model is a heuristic framework, derived from both factor analysis and MDS facet analysis, for the classification of performance on different tasks and is not to be considered a trait-like structural model of intelligence as exemplified by the factor-based CHC theory.  Nevertheless, Guttman Radex MDS models often show strong parallels to hierarchical factor based models based on the same set of variables (Kyllonen, 1996; SÜß & Beauducel, 2005; Tucker-Drob & Salthouse, 2009).

[6] The MAPT cluster also includes the two Gf tests and Visual Matching.

[7] WJ III 3-D MDS model for norms subjects aged 9-13 is available at http://www.iqscorner.com/2008/10/wj-iii-guttman-radex-mds-analysis.html

[8] A similar dimension emerged as a plausible higher-order cognitive processing dimension in the previously mentioned Carroll type analysis of 50 WJ III test variables.

[9] Using trigonometry, the cosine of the intersection of the two trait complex vectors was converted to a correlation.  I thank Dr. Joel Schneider for helping fill the gap in my long-lost expertise in basic trigonometry via an excel spreadsheet that converted the measured angle to a correlation.
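The angle-to-correlation conversion described in footnote 9 is a one-line trigonometric identity: the correlation between two vectors in an MDS plot equals the cosine of the angle between them. A minimal sketch (the 60-degree angle is an arbitrary example, not a value measured from the figures):

```python
import math

def angle_to_correlation(angle_degrees: float) -> float:
    """Convert the measured angle between two trait-complex vectors to r.
    r = cos(theta): 0 degrees -> r = 1.0; 90 degrees -> r = 0.0."""
    return math.cos(math.radians(angle_degrees))

print(round(angle_to_correlation(60), 2))   # → 0.5
print(round(angle_to_correlation(0), 2))    # → 1.0 (vectors coincide)
print(round(angle_to_correlation(90), 2))   # → 0.0 (orthogonal vectors)
```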

[10] The Shorter Oxford English Dictionary defines a taxon as “a taxonomic group of any rank, as species, family, class, etc.; an organism contained in such a group” (p. 3193) and taxonomy as “classification, esp. in relation to its general laws or principles; the branch of science, or of a particular science or subject, that deals with classification; esp. the systematic classification of living organisms” (p. 3193; italics in original).