Saturday, November 30, 2024

On making individual tests in #CHC #intelligence test batteries more #cognitivelycomplex: Two approaches



The following information is from a section of the WJ IV technical manual (McGrew, LaForte, & Schrank, 2014) and will again be included in the WJ V technical manual (LaForte, Dailey, & McGrew, Q1 2025). It was first discussed in McGrew (2012).

On making individual tests in intelligence test batteries more cognitively complex

In the applied intelligence test literature, there are typically two different approaches used to increase the cognitive complexity of individual tests (McGrew et al., 2014). The first approach is to deliberately design factorially complex CHC tests, or tests that include the influence of two or more narrow CHC abilities. This approach is exemplified by Kaufman and Kaufman (2004a) in the development of the Kaufman Assessment Battery for Children–Second Edition (KABC-II), where:

the authors did not strive to develop “pure” tasks for measuring the five CHC broad abilities. In theory, Gv tasks should exclude Gf or Gs, for example, and tests of other broad abilities, like Gc or Glr, should only measure that ability and no other abilities. In practice, however, the goal of comprehensive tests of cognitive abilities like the KABC-II is to measure problem solving in different contexts and under different conditions, with complexity being necessary to assess high-level functioning. (p. 16)

In this approach to test development, construct-irrelevant variance (Benson, 1998; Messick, 1995) is not deliberately minimized or eliminated. Although tests that measure more than one narrow CHC ability typically have lower validity as indicators of CHC abilities, they tend to lend support to other types of validity evidence (e.g., higher predictive validity). The WJ V has several new cognitive tests that use this approach to cognitive complexity. 

The second approach to enhancing the cognitive complexity of tests is to maintain the CHC factor purity of tests or clusters (as much as possible) while concurrently and deliberately increasing the complexity of information processing demands of the tests within the specific broad or narrow CHC domain (McGrew, 2012). As described by Lohman and Lakin (2011), the cognitive complexity of the abilities measured by tests can be increased by (a) increasing the number of cognitive component processes, (b) including differences in speed of component processing, (c) increasing the number of more important component processes (e.g., inference), (d) increasing the demands of attentional control and working memory, or (e) increasing the demands on adaptive functions (assembly, control, and monitoring). This second form of cognitive complexity, not to be confused with factorial complexity, is the inclusion of test tasks that place greater demands on cognitive information processing (i.e., cognitive load), that require greater allocation of key cognitive resources (viz., working memory or attentional control), and that invoke the involvement of more cognitive control or executive functions. Per this second form of cognitive complexity, the objective is to design a test that is more cognitively complex within a CHC domain, not to deliberately make it a mixed measure of two or more CHC abilities.
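
To make the contrast between the two approaches concrete, below is a minimal simulation sketch (mine, not the manual's; all loadings and ability names are illustrative assumptions). It shows why a factorially complex test is a mixed indicator of two narrow abilities, whereas a factorially pure test tracks only one:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Two uncorrelated latent narrow CHC abilities (illustrative only)
gf = rng.standard_normal(n)  # fluid reasoning
gv = rng.standard_normal(n)  # visual processing

# Approach 1: a factorially complex test deliberately blends two abilities
complex_test = 0.5 * gf + 0.5 * gv + 0.5 * rng.standard_normal(n)

# A factorially "pure" test loads on a single ability
pure_test = 0.8 * gv + 0.6 * rng.standard_normal(n)

# The complex test correlates moderately with BOTH latent abilities,
# so it is a less construct-pure indicator of either one
print(f"complex test: r(Gf) = {np.corrcoef(gf, complex_test)[0, 1]:.2f}, "
      f"r(Gv) = {np.corrcoef(gv, complex_test)[0, 1]:.2f}")
print(f"pure test:    r(Gf) = {np.corrcoef(gf, pure_test)[0, 1]:.2f}, "
      f"r(Gv) = {np.corrcoef(gv, pure_test)[0, 1]:.2f}")
```

The second approach, by contrast, would leave the pure test's single loading intact and instead make the task itself harder to process (e.g., adding a working memory demand), which this simple loading simulation does not capture.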

A large number of prior IQ's Corner posts regarding the topic of cognitive complexity in intelligence testing can be found here.

Benson, J. (1998). Developing a strong program of construct validation: A test anxiety example. Educational Measurement: Issues and Practice, 17(1), 10–22.

Lohman, D. F., & Lakin, J. (2011). Reasoning and intelligence. In R. J. Sternberg & S. B. Kaufman (Eds.), The Cambridge handbook of intelligence (2nd ed., pp. 419–441). New York, NY: Cambridge University Press.

McGrew, K. S. (2012, September). Implications of 20 years of CHC cognitive-achievement research: Back-to-the-future and beyond CHC. Paper presented at the Richard Woodcock Institute, Tufts University, Medford, MA. (click here to access)

Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749.


Tuesday, November 26, 2024

An investigation of the #madgenius archetype: A #MetaAnalysis of #Bipolar Disorder and #Creativity

A Meta-Analysis of Bipolar Disorder and Creativity

https://doi.org/10.21203/rs.3.rs-5509147/v1

This work is licensed under a CC BY 4.0 License

The relationship between bipolar disorder (BD) and creativity has long fascinated both the academic community and the public. However, empirical evidence and meta-analytic findings have remained ambiguous and complex. This meta-analysis systematically investigates the association between BD (including clinical and subclinical samples) and various dimensions of creativity, including divergent thinking, creative achievements, and artistic creativity, with a particular focus on the moderating effects of multiple influencing factors. A thorough literature search of 6,298 screened articles yielded 35 relevant studies, encompassing 114 effect sizes and 48,979 individuals. Using a multilevel random-effects model, our analysis found a small but statistically significant positive relationship between BD and creativity (g = 0.20, 95% CI: [0.08, 0.32]). Specifically, individuals with subclinical BD were associated with higher levels of creative output (g = 0.32, 95% CI: [0.22, 0.41]) than clinical samples (g = 0.06, 95% CI: [-0.17, 0.29]), which somewhat supports the inverted U-shaped relationship hypothesis. Studies using correlational methods or self-reported creativity exhibited a significantly positive link between BD and creativity. Additionally, this link was moderated by several key variables, such as the severity and type of BD, the creativity assessment method, and various demographic factors. By addressing methodological inconsistencies in previous research and offering a more comprehensive analysis of moderator variables, this meta-analysis deepens our understanding of creativity in BD.
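
For readers curious about the mechanics behind a pooled g and its confidence interval, here is a minimal random-effects pooling sketch in Python. It uses the simpler DerSimonian–Laird estimator rather than the authors' multilevel model, and the effect sizes below are invented for illustration, not the study's data:

```python
import numpy as np

def random_effects_pool(g, v):
    """DerSimonian-Laird random-effects pooled effect size.

    g : per-study Hedges' g values
    v : per-study sampling variances
    """
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    g_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fixed) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)       # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    g_re = np.sum(w_re * g) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return g_re, (g_re - 1.96 * se, g_re + 1.96 * se)

# Hypothetical per-study effect sizes and variances (illustration only)
g_pooled, ci = random_effects_pool([0.35, 0.10, 0.28, 0.05],
                                   [0.02, 0.04, 0.03, 0.05])
print(f"g = {g_pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

A multilevel model like the authors' additionally accounts for multiple effect sizes nested within the same study, which this simple estimator ignores.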

#Cognition and #Memory after #Covid19 in a Large Community Sample | New England Journal of Medicine

 Cognition and Memory after Covid-19 in a Large Community Sample | New England Journal of Medicine 
https://www.nejm.org/doi/full/10.1056/NEJMoa2311330

#Intellectual #disability (#ID) and adjudicative #competence evaluations: A detailed review of an often-overlooked population.

Intellectual disability and adjudicative competence evaluations: A detailed review of an often-overlooked population. 

https://psycnet.apa.org/record/2025-49587-001

Wood, M. E., Potts, H., & Wang, S. (2024). Intellectual disability and adjudicative competence evaluations: A detailed review of an often-overlooked population. Psychology, Public Policy, and Law. Advance online publication. https://doi.org/10.1037/law0000446

Abstract
Research has indicated that individuals with intellectual disability represent a relatively small but meaningful subset of defendants referred for adjudicative competence evaluations. While scholars have consistently argued that this population is unique and requires special consideration in terms of the competency assessment and treatment process, little is known about this population overall and/or the relative effectiveness of the uniquely tailored interventions recommended in the literature. The current study, an archival analysis of 117 court-ordered adjudicative competence evaluations, aimed to address this gap by focusing exclusively on a known group of defendants with intellectual disability. The results revealed a significantly lower base rate of opined competence (18.8%) relative to the larger population of defendants referred for competency evaluations (i.e., historically between 70% and 80%). Nearly one-quarter of the sample was opined unrestorable, which was associated with significantly lower measured intelligence (d = 0.61 and 0.91) and adaptive behavior scores (d = 1.04) than their counterparts. These results add to a very limited body of research on this subset of defendants. Implications are discussed in terms of systemic considerations, with a particular emphasis on the need for appropriate services for this subset of defendants, as well as a commitment to research on the efficacy of these interventions.
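
As a refresher on the d values reported above, Cohen's d scales a group mean difference by the pooled standard deviation. A minimal sketch (the group parameters below are invented for illustration, not the study's data):

```python
import numpy as np

def cohens_d(x1, x2):
    """Cohen's d: mean difference in pooled-standard-deviation units."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    pooled_var = ((n1 - 1) * x1.var(ddof=1) +
                  (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
    return (x1.mean() - x2.mean()) / np.sqrt(pooled_var)

# Invented IQ scores for two hypothetical groups (illustration only)
rng = np.random.default_rng(0)
group_restorable   = rng.normal(65, 8, 60)
group_unrestorable = rng.normal(60, 8, 30)
print(f"d = {cohens_d(group_restorable, group_unrestorable):.2f}")
```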

Monday, November 25, 2024

A massive #dataset of the #NeuroCognitive Performance Test, a web-based #cognitive assessment

A massive dataset of the NeuroCognitive Performance Test, a web-based cognitive assessment

Click here to download/read PDF


Paul I. Jaffe, Aaron Kaluszka, Nicole F. Ng & Robert J. Schafer

We present a dataset of approximately 5.5 million subtest scores from over 750,000 adults who completed the NeuroCognitive Performance Test (NCPT; Lumos Labs, Inc.), a validated, self-administered cognitive test accessed via web browser. The dataset includes assessment scores from eight test batteries consisting of 5–11 subtests that collectively span several cognitive domains including working memory, visual attention, and abstract reasoning. In addition to the raw scores and normative data from each subtest, the dataset includes basic demographic information from each participant (age, gender, and educational background). The scale and diversity of the dataset provide an unprecedented opportunity for researchers to investigate population-level variability in cognitive abilities and their relation to demographic factors. To facilitate reuse of this dataset by other researchers, we provide a Python module that supports several common preprocessing steps.
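
The authors' companion Python module is not reproduced here, but a hedged sketch of the kind of preprocessing such a dataset invites might look like the following (the file name and column names are my assumptions, not the actual schema):

```python
import pandas as pd

# Hypothetical file and column names -- the real schema is documented
# in the authors' companion Python module, not reproduced here.
scores = pd.read_csv("ncpt_subtest_scores.csv")

# Keep one battery and drop incomplete assessments
battery = scores[scores["battery_id"] == 1].dropna(subset=["raw_score"])

# Convert raw scores to z-scores within each subtest so that
# subtests on different scales are comparable
battery["z_score"] = battery.groupby("subtest")["raw_score"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=0)
)

# Summarize performance by demographic group (10-year age bands)
battery["age_band"] = (battery["age"] // 10) * 10
print(battery.groupby(["age_band", "subtest"])["z_score"].mean().head())
```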

Sunday, November 24, 2024

#AAIDD's #IQ Part-Score Position (with reference to diagnosing #intellectual #disability [#ID]) Is at Variance With Other Authoritative Sources—Important for #schoolpsychologists



In a 2021 commentary regarding the most recent official AAIDD intellectual disability definition and classification manual (2021), I raised a concern regarding AAIDD's position that only a full-scale or global IQ score can be used for a diagnosis (Dx) of ID. No room was left for clinical judgment in unique n = 1 cases. Click here to download and read the complete article. Below is some select text.

“AAIDD's [IQ] Part-Score Position Is at Variance With Other Authoritative Sources”

In AAIDD’s The Death Penalty and Intellectual Disability (Polloway, 2015), both McGrew (2015) and Watson (2015) suggest that [IQ] part scores can be used in special cases. (Note that these two chapters, although published in an AAIDD book, do not necessarily represent the official position of AAIDD.) The limited use of part scores is also described in the 2002 National Research Council book on ID and social security eligibility (see McGrew, 2015; Watson, 2015). The authoritative Diagnostic and Statistical Manual of Mental Disorders—Fifth Edition (DSM-5) implies that part scores may be necessary when it states that ‘‘highly discrepant subtest scores may make an overall IQ score invalid'' (American Psychiatric Association, 2013, p. 37). Finally, in the recent APA Handbook of Intellectual and Developmental Disabilities (Glidden, 2021), Floyd et al. (2021) state ‘‘in rare situations in which the repercussions of a false negative diagnostic decision would have undue or irreparable negative impact upon the client, a highly g-loaded part score (see McGrew, 2015a) might be selected to represent intellectual functioning'' (emphasis added; p. 412).

In a unique n = 1 high-stakes setting, a psychologist may be ethically obligated to proffer an expert opinion regarding whether the full-scale score is (or is not) the best indicator of general intelligence. There must be room for the judicious use of clinical judgment-based part scores. AAIDD's purple manual complicates rather than elucidates guidance for psychologists and the courts. In high-stakes settings, a psychologist may be hard-pressed to explain that their proffered expert opinions are grounded in the AAIDD purple manual, but then explain why they disagree with the ‘‘just say no to part scores'' AAIDD position.

Friday, November 22, 2024

The Evolution of #Intelligence (journals)—the two premier intelligence journals compared—shout out to two #schoolpsychologists

The Evolution of Intelligence: Analysis of the Journal of Intelligence and Intelligence 

Click here to read and download the paper.

by Fabio Andres Parra-Martinez 1,*, Ophélie Allyssa Desmet 2, and Jonathan Wai 1

1 Department of Education Reform, University of Arkansas, Fayetteville, AR 72701, USA
2 Department of Human Services, Valdosta State University, Valdosta, GA 31698, USA
* Author to whom correspondence should be addressed.

J. Intell. 2023, 11(2), 35; https://doi.org/10.3390/jintelligence11020035

Abstract

What are the current trends in intelligence research? This parallel bibliometric analysis covers the two premier journals in the field: Intelligence and the Journal of Intelligence (JOI) between 2013 and 2022. Using Scopus data, this paper extends prior bibliometric articles reporting the evolution of the journal Intelligence from 1977 up to 2018. It includes JOI from its inception, along with Intelligence to the present. Although the journal Intelligence’s growth has declined over time, it remains a stronghold for traditional influential research (average publications per year = 71.2, average citations per article = 17.07, average citations per year = 2.68). JOI shows a steady growth pattern in the number of publications and citations (average publications per year = 33.2, average citations per article = 6.48, total average citations per year = 1.48) since its inception in 2013. Common areas of study across both journals include cognitive ability, fluid intelligence, psychometrics–statistics, g-factor, and working memory. Intelligence includes core themes like the Flynn effect, individual differences, and geographic IQ variability. JOI addresses themes such as creativity, personality, and emotional intelligence. We discuss research trends, co-citation networks, thematic maps, and their implications for the future of the two journals and the evolution and future of the scientific study of intelligence.
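
For the curious, journal-level indicators like those reported above are straightforward to compute from a Scopus export. A minimal sketch (the file and column names are my assumptions; metric definitions can vary across bibliometric studies):

```python
import pandas as pd

# Hypothetical Scopus export with one row per article:
# columns "journal", "year" (publication year), "cited_by" (citation count)
recs = pd.read_csv("scopus_export.csv")

for journal, df in recs.groupby("journal"):
    pubs_per_year = df.groupby("year").size().mean()
    cites_per_article = df["cited_by"].mean()
    # Citations per article per year since publication
    # (window ends in 2022, matching the study period)
    cites_per_year = (df["cited_by"] / (2023 - df["year"])).mean()
    print(f"{journal}: pubs/yr = {pubs_per_year:.1f}, "
          f"cites/article = {cites_per_article:.2f}, "
          f"cites/yr = {cites_per_year:.2f}")
```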

Yes….a bit of a not-so-humble brag. In the co-citation JOI figure below, the Schneider, W. J. entry is the Schneider & McGrew (2012) chapter, which has now been replaced by Schneider & McGrew (2018; sorry, I don’t have a good PDF copy to link). In the second Intelligence co-citation network figure, the McGrew, K. S. (2009) paper, next to Carroll’s (1993) seminal work, is yours truly—my most cited journal article (see Google Scholar Profile). The frequent citations of the Schneider & McGrew (2012) chapter and the McGrew (2009) article are indicators of the “bridger” function Joel and I have provided—a bridge between intelligence research/theory and intelligence test development, use, and interpretation in school psychology.




Research Byte: Beyond Individual #Tests: Youths’ #Cognitive Abilities, Basic #Reading, and #Writing—relevant to #schoolpsychologists #CHC

An impressive multiple test-battery CHC theory cognitive and achievement (cross-battery) confirmatory factor analysis study, based on a research design first conceptualized by Jack McArdle (the planned missing data reference variable design), that finds that multiple broad CHC abilities are important in explaining reading and writing achievement above and beyond psychometric g. Of course, the results would likely differ if a bi-factor model were run (see McGrew et al., 2023, for a discussion of the three major classes of cognitive-achievement CFA/SEM research designs).
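
For readers who want to see the modeling distinction behind the bi-factor comment, here is a minimal sketch of higher-order versus bi-factor specifications using the Python semopy package (my tool choice, not necessarily the authors'; the subtest names are hypothetical and the actual models in the study are far larger):

```python
# A minimal sketch assuming the semopy package (lavaan-style syntax);
# all subtest and factor names below are hypothetical.
from semopy import Model

# Higher-order model: broad CHC factors load on a general factor (g),
# and g predicts the basic reading factor
higher_order = """
Gc =~ vocab + info + similarities
Gwm =~ digit_span + letter_number
Read =~ word_id + word_attack
g =~ Gc + Gwm
Read ~ g
"""

# Bi-factor model: g loads directly on every subtest and competes with
# the broad factors for the same variance; factors are kept orthogonal
# (the 0* fixed-covariance syntax follows lavaan-style conventions)
bifactor = """
g =~ vocab + info + similarities + digit_span + letter_number
Gc =~ vocab + info + similarities
Gwm =~ digit_span + letter_number
Read =~ word_id + word_attack
Read ~ g + Gc + Gwm
g ~~ 0*Gc
g ~~ 0*Gwm
Gc ~~ 0*Gwm
"""

# Fitting (df would be a DataFrame with one column per subtest):
# model = Model(bifactor)
# model.fit(df)
# print(model.inspect())
```

In the bi-factor parameterization the broad factors explain achievement only net of g, which is why the two classes of models can tell different stories about the same data.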

Click here to download/read this open access article.

This research group, IMHO, does some of the best CFA/SEM modeling in the assessment and school psychology literature.




Beyond Individual Tests: Youths’ Cognitive Abilities, Basic Reading, and Writing 

1 Department of Educational Psychology, University of Connecticut, Storrs, CT 06268, USA
2 Department of Educational Psychology, University of Texas at Austin, Austin, TX 78712, USA
3 Department of Educational Psychology, University of Kansas, Lawrence, KS 66045, USA
* Author to whom correspondence should be addressed.
J. Intell. 2024, 12(11), 120; https://doi.org/10.3390/jintelligence12110120
Abstract

Broadly, individuals’ cognitive abilities influence their academic skills, but the significance and strength of specific cognitive abilities vary across academic domains and may vary across age. Simultaneous analyses of data from many tests and cross-battery analyses can address inconsistent findings from prior studies by creating comprehensively defined constructs, which allow for greater generalizability of findings. The purpose of this study was to examine the cross-battery direct effects and developmental differences in youths’ cognitive abilities on their basic reading abilities, as well as the relations between their reading and writing achievement. Our sample included 3927 youth aged 6 to 18. Six intelligence tests (66 subtests) and three achievement tests (10 subtests) were analyzed. Youths’ general intelligence (g, large direct and indirect effects), verbal comprehension–knowledge (large direct effect), working memory (large direct effect), and learning efficiency (moderate direct effect) explained their basic reading skills. The influences of g and fluid reasoning were difficult to separate statistically. Most of the cognitive–basic reading relations were stable across age, except the influence of verbal comprehension–knowledge (Gc), which appeared to slightly increase with age. Youths’ basic reading had large influences on their written expression and spelling skills, and their spelling skills had a large influence on their written expression skills. The directionality of the effects most strongly supported the direct effects from the youths’ basic reading to their spelling skills, and not vice versa.

Wednesday, November 20, 2024

Research Byte: A Systematic #Review of #WorkingMemory (#Gwm) Applications for #Children with #LearningDifficulties (#LD): Transfer Outcomes and Design Principles

 A Systematic Review of Working Memory Applications for Children with Learning Difficulties: Transfer Outcomes and Design Principles 

by Adel Shaban 1,*, Victor Chang 2, Onikepo D. Amodu 1, Mohamed Ramadan Attia 3, and Gomaa Said Mohamed Abdelhamid 4,5

1 Middlesbrough College, University Centre Middlesbrough, Middlesbrough TS2 1AD, UK
2 Aston Business School, Aston University, Birmingham B4 7UP, UK
3 Department of Educational Technology, Faculty of Specific Education, Fayoum University, Fayoum 63514, Egypt
4 Department of Educational Psychology, Faculty of Education, Fayoum University, Fayoum 63514, Egypt
5 Department of Psychology, College of Education, Sultan Qaboos University, Muscat 123, Oman
* Author to whom correspondence should be addressed.

Educ. Sci. 2024, 14(11), 1260; https://doi.org/10.3390/educsci14111260

Visit article page where PDF of article can be downloaded

Abstract

Working memory (WM) is a crucial cognitive function, and a deficit in this function is a critical factor in learning difficulties (LDs). As a result, there is growing interest in exploring different approaches to training WM to support students with LDs. Following the PRISMA 2020 guidelines, this systematic review aims to identify current computer-based WM training applications and their theoretical foundations, explore their effects on improving WM capacity and other cognitive/academic abilities, and extract design principles for creating an effective WM application for children with LDs. The 22 studies selected for this review provide strong evidence that children with LDs have low WM capacity and that their WM functions can be trained. The findings revealed four commercial WM training applications—COGMED, Jungle, BrainWare Safari, and N-back—that were utilized in 16 studies. However, these studies focused on suggesting different types of WM tasks and examining their effects rather than making those tasks user-friendly or providing practical guidelines for the end-user. To address this gap, the principles of Human–Computer Interaction, with a focus on usability and user experience as well as relevant cognitive theories, and the design recommendations from the selected studies have been reviewed to extract a set of proposed guidelines. A total of 15 guidelines have been extracted that can be utilized to design WM training programs specifically for children with LDs.
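
Because the N-back paradigm anchors several of the reviewed applications, here is a toy console sketch of its core logic (a letter 2-back variant of my own invention; real training software adds adaptive difficulty and the usability features the extracted guidelines address):

```python
import random

def run_n_back(n=2, trials=20, alphabet="ABCDEFGH"):
    """Toy N-back: the user reports whether each stimulus matches
    the one presented n steps earlier."""
    stream, hits, false_alarms, targets = [], 0, 0, 0
    for t in range(trials):
        # Force roughly 30% targets so the task is trainable;
        # otherwise sample freely (which may still match by chance)
        if t >= n and random.random() < 0.3:
            stimulus = stream[t - n]
        else:
            stimulus = random.choice(alphabet)
        stream.append(stimulus)
        is_target = t >= n and stimulus == stream[t - n]
        targets += is_target
        answer = input(f"{stimulus}  match {n}-back? [y/n] ").strip().lower()
        responded_yes = answer == "y"
        hits += responded_yes and is_target
        false_alarms += responded_yes and not is_target
    print(f"hits: {hits}/{targets}, false alarms: {false_alarms}")

if __name__ == "__main__":
    run_n_back()
```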


https://www.mdpi.com/2227-7102/14/11/1260#