Showing posts with label intelligence testing.

Wednesday, August 27, 2025

IQs Corner: Practice effects persist over two decades of cognitive testing: Implications for longitudinal research - #practiceeffect #cognitive #neurocognit #IQ #intelligence #schoolpsychology #schoolpsychologists



MedRxiv preprint available at: https://doi.org/10.1101/2025.06.16.25329587

Elman et al. (2025)


ABSTRACT 

Background: Repeated cognitive testing can boost scores due to practice effects (PEs), yet it remains unclear whether PEs persist across multiple follow-ups and long durations. We examined PEs across  multiple assessments from midlife to old age in a nonclinical sample.   

Method: Men (N=1,608) in the Vietnam Era Twin Study of Aging (VETSA) underwent 
neuropsychological assessment comprising 30 measures across 4 waves (~6-year testing intervals) spanning up to 20 years. We leveraged age-matched replacement participants to estimate PEs at each wave. We compared cognitive trajectories and MCI prevalence using unadjusted versus PE-adjusted scores. 

Results: Across follow-ups, a range of 7-12 tests (out of 30) demonstrated significant PEs, especially in episodic memory and visuospatial domains. Adjusting for PEs resulted in improved detection of cognitive decline and MCI, with up to 20% higher MCI prevalence.  

Conclusion: PEs persist across multiple assessments and decades, underscoring the importance of accounting for PEs in longitudinal studies.
  
Keywords: practice effects; repeat testing; serial testing; longitudinal testing; mild cognitive impairment; cognitive change
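To make the replacement-participant logic in the Method concrete, here is a minimal sketch (my own illustration, not the authors' code) of the basic idea: returnees have seen the tests before, age-matched replacement participants have not, so the mean difference between the two groups at a given wave approximates the practice effect for that wave. The scores and function names are hypothetical, and the actual VETSA analysis likely includes additional adjustments (e.g., for attrition).

```python
import numpy as np

def practice_effect(returnee_scores, replacement_scores):
    """PE estimate at wave k = mean(returnees) - mean(age-matched naive replacements)."""
    return np.mean(returnee_scores) - np.mean(replacement_scores)

def adjust_for_pe(observed_followup_scores, pe):
    """Subtract the estimated PE from follow-up scores before computing change."""
    return np.asarray(observed_followup_scores) - pe

# Hypothetical wave-2 scores on a single memory test
returnees    = np.array([52.1, 49.8, 55.3, 50.7])   # tested at wave 1 and wave 2
replacements = np.array([48.9, 47.5, 51.2, 49.0])   # first tested at wave 2

pe = practice_effect(returnees, replacements)
print(f"Estimated practice effect: {pe:.2f} points")
print("PE-adjusted returnee scores:", adjust_for_pe(returnees, pe))
```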

Monday, August 25, 2025

IQs Corner: What is (and what is not) clinical judgment in intelligence test interpretation? - #IQ #intelligence #ID #intellectualdisability #schoolpsychologists #schoolpsychology #diagnosis

What is clinical judgment in intelligence testing?  

This term is frequently invoked when psychologists explain or defend their intelligence test interpretations.  Below is a brief explanation, based on several sources, that I’ve used to describe what it is…and what it is not.  Schalock and Luckasson’s AAIDD Clinical Judgment book (now in a 2014 revised version) is the best single source I have found that addresses this slippery concept in intelligence testing, particularly in the context of a potential diagnosis of intellectual disability (ID)—it is recommended reading.

—————

Clinical judgment is a process based on solid scientific knowledge and is characterized as being “systematic (i.e., organized, sequential, and logical), formal (i.e., explicit and reasoned), and transparent (i.e., apparent and communicated clearly)” (Schalock & Luckasson, 2005, p. 1). The application of clinical judgment in the evaluation of IQ scores in the diagnosis of intellectual disability includes consideration of multiple factors that might influence the accuracy of an assessment of general intellectual ability (APA: DSM-5, 2013).  There is “unanimous professional consensus that the diagnosis of intellectual disability requires comprehensive assessment and the application of clinical judgment” (Brief of Amici Curiae American Psychological Association, American Psychiatric Association, American Academy of Psychiatry and the Law, Florida Psychological Association, National Association of Social Workers, and National Association of Social Workers Florida Chapter, in Support of Petitioner; Hall v. Florida; S.Ct., No. 12-10882; 2014; p. 8).

Clinical judgment in the interpretation of scores from intelligence test batteries should not be used as a basis for “gut instinct” or “seat-of-the-pants” impressions and conclusions by the assessment professional (MacVaugh & Cunningham, 2009), nor as justification for shortened evaluations, a means to convey stereotypes or prejudices, a substitute for insufficiently explored questions, or an excuse for incomplete testing and missing data (Schalock & Luckasson, 2005). Idiosyncratic methods and intuitive conclusions are not scientifically based and have unknown reliability and validity. 

If clinical judgment interpretations and opinions regarding an individual’s level of general intelligence are based on novel or emerging research-based principles, the assessment professional must document the bases for these new interpretations as well as the limitations of these principles and methods. This requirement is consistent with Standard 9.4 of the Standards for Educational and Psychological Testing, which states:

When a test is to be used for a purpose for which little or no validity evidence is available, the user is responsible for documenting the rationale for the selection of the test and obtaining evidence of the reliability/precision of the test scores and the validity of the interpretations supporting the use of the scores for this purpose (p. 143).


American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014).  Standards for educational and psychological testing.  Washington, DC:  Author. 

American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author. 

Brief of Amici Curiae American Psychological Association, American Psychiatric Association, American Academy of Psychiatry and the Law, Florida Psychological Association, National Association of Social Workers, and National Association of Social Workers Florida Chapter, in Support of Petitioner; Hall v. Florida; S.Ct., No. 12-10882; 2014; p. 8.

MacVaugh, G. S. & Cunningham, M. D. (2009). Atkins v. Virginia: Implications and recommendations for forensic practice.  The Journal of Psychiatry and Law, 37, 131-187.

Schalock, R. L. & Luckasson, R. (2005). Clinical judgment. Washington, DC: American Association on Intellectual and Developmental Disabilities. 

—————

Kevin S. McGrew, PhD.

Educational Psychologist

Director 

Institute for Applied Psychometrics (IAP)

www.theMindHub.com


Wednesday, April 23, 2025

On #factoranalysis of #IQ tests—impact of software choice—plus comments about art+science of factor analysis in #intelligence test research—#schoolpsychology



Contributing to the reproducibility crisis in Psychology: The role of statistical software choice on factor analysis.  Journal of School Psychology.  Stefan C. Dombrowski.  Click here to view article source and abstract.

This is an important article for those who conduct factor analyses of intelligence or cognitive ability tests (and also for those who consume the results).  

Abstract (note - bold font in abstract has been added by me)

A potentially overlooked contributor to the reproducibility crisis in psychology is the choice of statistical application software used for factor analysis. Although the open science movement promotes transparency by advocating for open access to data and statistical methods, this approach alone is insufficient to address the reproducibility crisis. It is commonly assumed that different statistical software applications produce equivalent results when conducting the same statistical analysis. However, this is not necessarily the case. Statistical programs often yield disparate outcomes, even when using identical data and factor analytic procedures, which can lead to inconsistent interpretation of results. This study examines this phenomenon by conducting exploratory factor analyses on two tests of cognitive ability—the WISC-V and the MEZURE—using four different statistical programs/applications. Factor analysis plays a critical role in determining the underlying theory of cognitive ability instruments, and guides how those instruments should be scored and interpreted. However, psychology is grappling with a reproducibility crisis in this area, as independent researchers and test publishers frequently report divergent factor analytic results. The outcome of this study revealed significant variations in structural outcomes among the statistical software programs/applications. These findings highlight the importance of using multiple statistical programs, ensuring transparency with analysis code, and recognizing the potential for varied outcomes when interpreting results from factor analytic procedures. Addressing these issues is important for advancing scientific integrity and mitigating the reproducibility crisis in psychology particularly in relation to cognitive ability structural validity.

My additional comments

The recommendation that multiple factor analysis software programs be used when analyzing the structural validity of cognitive abilities tests makes sense.  Kudos to Dr. Dombrowski for demonstrating this need.
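Dombrowski's recommendation is easy to act on in practice. Below is a minimal, hypothetical sketch (not code from the article) that runs the same exploratory factor analysis with two different implementations, the Python factor_analyzer package and scikit-learn, and then compares the varimax-rotated loadings with Tucker's congruence coefficient. The data file name and the four-factor solution are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer          # implementation 1
from sklearn.decomposition import FactorAnalysis    # implementation 2

def congruence(a, b):
    """Tucker's congruence coefficient between two loading vectors."""
    return abs(np.dot(a, b)) / np.sqrt(np.dot(a, a) * np.dot(b, b))

subtests = pd.read_csv("subtest_scores.csv")   # hypothetical cases-by-subtests data
k = 4                                          # assumed number of factors to extract

fa1 = FactorAnalyzer(n_factors=k, rotation="varimax", method="minres")
fa1.fit(subtests)
load1 = fa1.loadings_                          # variables x factors

fa2 = FactorAnalysis(n_components=k, rotation="varimax")
fa2.fit(subtests)
load2 = fa2.components_.T                      # variables x factors

# For each factor in solution 1, find its best-matching factor in solution 2.
# Congruence well below ~.95 flags implementations that disagree on that factor.
for j in range(k):
    best = max(congruence(load1[:, j], load2[:, m]) for m in range(k))
    print(f"Factor {j + 1}: maximum congruence = {best:.2f}")
```

If the two solutions diverge noticeably, that divergence itself is worth reporting, which is essentially Dombrowski's point.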

Along these lines, it is also important to recognize that the use and interpretation of any factor analysis software is highly dependent on the statistical and substantive expertise and skills of the researcher.  I made these points (based on the writings of, and personal conversations with, Jack Carroll) in a recent article (McGrew, 2023; open access, so you can download and read it) in the Journal of Intelligence.  The salient material is reproduced below.  The article can be accessed either at the journal website or via the Research and Reports section of my MindHub web page (McGrew, 2023).


(Note - Bold font in text below, extracted from McGrew (2023), is not in the original published article)

“I was fortunate to learn important tacit EFA and CFA knowledge during my 17 years of interactions with Carroll, and particularly my private one-to-one tutelage with Carroll in May 2003. Anyone who reads Chapter 3 (Survey and Analysis of Correlational and Factor-Analytic Research on Cognitive Abilities: Methodology) of Carroll's 1993 book, as well as his self-critique of his seminal work (Carroll 1998) and other select method-focused post-1993 publications (Carroll 1995, 1997), should conclude what is obvious—to Carroll, factor analyses were a blend of art and science. As articulated by some of his peers (see footnote #2), his research reflected the work of an expert with broad and deep substantive knowledge of research and theories in intelligence, cognitive psychology, and factor analysis methods. 

In 2003, after Carroll had been using CFA to augment his initial EFA analyses for at least a decade, Carroll expressed (to me during our May 2003 work week) that he was often concerned with the quality of some reported factor analyses (both EFA and CFA) of popular clinical IQ tests or other collections of cognitive ability measures (Carroll 1978, 1991, 1995, 2003). Carroll's characteristic positive skepticism regarding certain reported factor analyses was first articulated (as far as I know) in the late 1970's, when he stated “despite its many virtues, factor analysis is a very tricky technique; in some ways it depends more on art than science, that is, more on intuition and judgment than on formal rules of procedure. People who do factor analysis by uncritical use of programs in computer packages run the risk of making fools of themselves” (Carroll 1978, p. 91; emphasis added). It is my opinion that Carroll would still be dismayed by some of the EFA and CFA studies of intelligence tests published during the past two decades that often used narrow or restricted forms of factor analysis methods and rigid formal statistical rules for decision-making, with little attempt to integrate contemporary substantive research or theory to guide the analysis and interpretation of the results (e.g., see Decker 2021; Decker et al. 2021; McGrew et al. 2023). 

Carroll's unease was prescient of recently articulated concerns regarding two aspects of the theory crises in structural psychological research—the conflation of statistical (primarily factor analysis) models with theoretical models and the use of narrow forms of factor analysis methods (Fried 2020; McGrew et al. 2023). First, many intelligence test batteries only report CFA studies in their technical manuals. EFA results, which often produce findings that vary from CFA findings, are frequently omitted. This often leads to debates between independent researchers and test authors (or test publishers) regarding the validity of the interpretation of composite or cluster scores, leaving test users confused regarding the psychometric integrity of composite score interpretations. McGrew et al. (2023) recently recommended that intelligence test manuals, as well as research reports by independent researchers, include both EFA and CFA (viz., bifactor g, hierarchical g, and Horn no-g models), as well as psychometric network analysis (PNA) and possibly multidimensional scaling analyses (MDSs; McGrew et al. 2014; Meyer and Reynolds 2022). As stated by McGrew et al. (2023), “such an ecumenical approach would require researchers to present results from the major classes of IQ test structural research methods (including PNA) and clearly articulate the theoretical basis for the model(s) the author's support. Such an approach would also gently nudge IQ test structural researchers to minimize the frequent conflation of theoretical and psychometric g constructs. Such multiple-methods research in test manuals and journal publications can better inform users of the strengths and limitations of IQ test interpretations based on whatever conceptualization of psychometric general intelligence (including models with no such construct) underlies each type of dimensional analysis” (p. 24).”
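To give a flavor of the “ecumenical” multi-model recommendation quoted above, here is a minimal sketch of fitting two of the competing structural models (a hierarchical g model and a Horn-style no-g correlated-factors model) to the same data and comparing their fit. It assumes the Python semopy package, hypothetical subtest names, and an arbitrary three-factor structure; it is an illustration of the general approach, not the models reported in any test manual, and a bifactor variant would additionally constrain the group factors to be orthogonal to g.

```python
import pandas as pd
import semopy

# Hypothetical subtest names; replace with the battery's actual indicators.
hierarchical_g = """
Gf =~ matrices + series + number_puzzles
Gc =~ vocabulary + general_info + similarities
Gv =~ block_design + visual_puzzles + spatial_relations
g  =~ Gf + Gc + Gv
"""

horn_no_g = """
Gf =~ matrices + series + number_puzzles
Gc =~ vocabulary + general_info + similarities
Gv =~ block_design + visual_puzzles + spatial_relations
Gf ~~ Gc
Gf ~~ Gv
Gc ~~ Gv
"""

data = pd.read_csv("subtest_scores.csv")       # hypothetical cases-by-subtests data

for name, desc in [("hierarchical g", hierarchical_g), ("Horn no-g", horn_no_g)]:
    model = semopy.Model(desc)
    model.fit(data)
    stats = semopy.calc_stats(model)           # chi-square, CFI, RMSEA, AIC, etc.
    print(name)
    print(stats[["CFI", "RMSEA", "AIC"]].round(3))
```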


Saturday, May 19, 2018

The Relation between Intelligence and Adaptive Behavior: A Meta-Analysis 

Very important meta-analysis of the AB/IQ relation. The primary finding is on target with a prior informal synthesis by McGrew (2015).

The Relation between Intelligence and Adaptive Behavior: A Meta-Analysis   
 
Ryan M. Alexander 
 
ABSTRACT 
 
Intelligence tests and adaptive behavior scales measure vital aspects of the multidimensional nature of human functioning. Assessment of each is a required component in the diagnosis or identification of intellectual disability, and both are frequently used conjointly in the assessment and identification of other developmental disabilities. The present study investigated the population correlation between intelligence and adaptive behavior using psychometric meta-analysis. The main analysis included 148 samples with 16,468 participants overall. Following correction for sampling error, measurement error, and range departure, analysis resulted in an estimated population correlation of ρ = .51. Moderator analyses indicated that the relation between intelligence and adaptive behavior tended to decrease as IQ increased, was strongest for very young children, and varied by disability type, adaptive measure respondent, and IQ measure used. Additionally, curvilinear regression analysis of adaptive behavior composite scores onto full scale IQ scores from datasets used to report the correlation between the Wechsler Intelligence Scales for Children- Fifth edition and Vineland-II scores in the WISC-V manuals indicated a curvilinear relation—adaptive behavior scores had little relation with IQ scores below 50 (WISC-V scores do not go below 45), from which there was positive relation up until an IQ of approximately 100, at which point and beyond the relation flattened out. Practical implications of varying correlation magnitudes between intelligence and adaptive behavior are discussed (viz., how the size of the correlation affects eligibility rates for intellectual disability).
 
Other Key Findings Reported
 
McGrew (2012) augmented Harrison's data-set and conducted an informal analysis including a total of 60 correlations, describing the distributional characteristics observed in the literature regarding the relation. He concluded that a reasonable estimate of the correlation is approximately .50, but made no attempt to explore factors potentially influencing the strength of the relation.
 
Results from the present study corroborate the conclusions of Harrison (1987) and McGrew (2012) that the IQ/adaptive behavior relation is moderate, indicating distinct yet related constructs. The results indeed showed that the correlation is likely to be stronger at lower IQ levels—a trend that spans the entire ID range, not just the severe range. The estimated true mean population correlation is .51, and study artifacts such as sampling error, measurement error, and range departure resulted in somewhat attenuated findings in individual studies (a difference of about .05 between observed and estimated true correlations overall).
 
 
The present study found the estimated true population mean correlation to be .51, meaning that adaptive behavior and intelligence share 26% common variance. In practical terms, this magnitude of relation suggests that an individual's IQ score and adaptive behavior composite score will not always be commensurate and will frequently diverge, and not by a trivial amount. Using the formula Ŷ = Ȳ + ρ(X − X̄), where Ŷ is the predicted adaptive behavior composite score, Ȳ is the mean adaptive behavior score in the population, ρ is the correlation between adaptive behavior and intelligence, X is the observed IQ score for an individual, and X̄ is the mean IQ score, and accounting for regression to the mean, the predicted adaptive behavior composite score corresponding to an IQ score of 70, given a correlation of .51, would be 85—a score that is a full standard deviation above an adaptive behavior composite score of 70, the cut score recommended by some entities to meet ID eligibility requirements. With a correlation of .51, and accounting for regression to the mean, an IQ score of 41 would be needed in order to have a predicted adaptive behavior composite score of 70. Considering that approximately 85% of individuals with ID have reported IQ scores between 55 and 70±5 (Heflinger et al., 1987; Reschly, 1981), the eligibility implications, especially for those with less severe intellectual impairment, are alarming. In fact, derived from calculations by Lohman and Korb (2006), only 17% of individuals obtaining an IQ score of 70 or below would be expected to also obtain an adaptive behavior composite score of 70 or below when the correlation between the two is .50. 
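The arithmetic in the passage above can be reproduced directly. Here is a small sketch (my own, not the dissertation's code) of the prediction formula, assuming population means of 100 for both IQ and adaptive behavior and ρ = .51:

```python
def predicted_ab(iq, rho=0.51, iq_mean=100.0, ab_mean=100.0):
    """Regression-based prediction: Y-hat = AB mean + rho * (IQ - IQ mean)."""
    return ab_mean + rho * (iq - iq_mean)

def iq_for_predicted_ab(target_ab, rho=0.51, iq_mean=100.0, ab_mean=100.0):
    """Invert the formula: the IQ whose predicted adaptive behavior score equals target_ab."""
    return iq_mean + (target_ab - ab_mean) / rho

print(predicted_ab(70))          # 84.7 -> about 85, one SD above an AB score of 70
print(iq_for_predicted_ab(70))   # 41.2 -> an IQ of about 41 predicts an AB score of 70
```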
 
 
The purpose of this study was to investigate the relation between IQ and adaptive behavior and variables moderating the relation using psychometric meta-analysis. The findings contributed in several ways to the current literature with regard to IQ and adaptive behavior. First, the estimated true mean population correlation between intelligence and adaptive behavior following correction for sampling error, measurement error, and range departure is moderate, indicating that intelligence and adaptive behavior are distinct, yet related, constructs. Second, IQ level has a moderating effect on the relation between IQ and adaptive behavior. The correlation is likely to be stronger at lower IQ levels, and weaker as IQ increases. Third, while not linear, age has an effect on the IQ/adaptive behavior relation. The population correlation is highest for very young children, and lowest for children between the ages of five and 12. Fourth, the magnitude of IQ/adaptive behavior correlations varies by disability type. The correlation is weakest for those without disability, and strongest for very young children with developmental delays. IQ/adaptive behavior correlations for those with ID are comparable to those with autism when not matched on IQ level. Fifth, the IQ/adaptive correlation when parents/caregivers serve as adaptive behavior respondents is comparable to when teachers act as respondents, but direct assessment of adaptive behavior results in a stronger correlation. Sixth, an individual's race does not significantly alter the correlation between IQ and adaptive behavior, but future research should evaluate the influence of race of the rater on adaptive behavior ratings. Seventh, the correlation between IQ and adaptive behavior varies depending on IQ measure used—the population correlation when Stanford-Binet scales are employed is significantly higher than when Wechsler scales are employed. And eighth, the correlation between IQ and adaptive behavior is not significantly different between adaptive behavior composite scores obtained from the Vineland, SIB, and ABAS families of adaptive behavior measures, which are among those that have been deemed appropriate for disability identification. Limitations of this study notwithstanding, it is the first to employ meta-analysis procedures and techniques to examine the correlation between intelligence and adaptive behavior and how moderators alter this relation. The results of this study provide information that can help guide practitioners, researchers, and policy makers with regard to the diagnosis or identification of intellectual and developmental disabilities.



Saturday, March 17, 2018

The importance of differential psychology for school learning: 90% of school achievement variance is due to student characteristics

This is why the study of individual differences/differential psychology is so important. If you don’t want to read the article, you can watch a video of Dr. Detterman in which he summarizes his thinking and this paper.

Education and Intelligence: Pity the Poor Teacher because Student Characteristics are more Significant than Teachers or Schools. Article link.

Douglas K. Detterman

Case Western Reserve University (USA)

Abstract

Education has not changed from the beginning of recorded history. The problem is that focus has been on schools and teachers and not students. Here is a simple thought experiment with two conditions: 1) 50 teachers are assigned by their teaching quality to randomly composed classes of 20 students, 2) 50 classes of 20 each are composed by selecting the most able students to fill each class in order and teachers are assigned randomly to classes. In condition 1, teaching ability of each teacher and in condition 2, mean ability level of students in each class is correlated with average gain over the course of instruction. Educational gain will be best predicted by student abilities (up to r = 0.95) and much less by teachers' skill (up to r = 0.32). I argue that seemingly immutable education will not change until we fully understand students and particularly human intelligence. Over the last 50 years in developed countries, evidence has accumulated that only about 10% of school achievement can be attributed to schools and teachers while the remaining 90% is due to characteristics associated with students. Teachers account for from 1% to 7% of total variance at every level of education. For students, intelligence accounts for much of the 90% of variance associated with learning gains. This evidence is reviewed.
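As a rough illustration of the variance-share argument in the abstract, the simulation sketch below (mine, not Detterman's) generates 50 classes of 20 students in which roughly 90% of gain variance comes from student ability and roughly 10% from teacher quality; the implied correlations land close to the 0.95 and 0.32 figures cited. The exact variance split is an assumption chosen to match the abstract's claim, and no separate residual term is modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers, class_size = 50, 20
n_students = n_teachers * class_size                 # 1,000 students

# Assumed variance shares of learning gains (student vs. teacher)
var_student, var_teacher = 0.90, 0.10

ability = rng.normal(size=n_students)                # one ability score per student
teacher_quality = np.repeat(rng.normal(size=n_teachers), class_size)  # shared within class

gain = np.sqrt(var_student) * ability + np.sqrt(var_teacher) * teacher_quality

print(round(np.corrcoef(ability, gain)[0, 1], 2))          # ~0.95
print(round(np.corrcoef(teacher_quality, gain)[0, 1], 2))  # ~0.32
```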



Monday, October 23, 2017

The Evolution of the Cattell-Horn-Carroll (CHC) Theory of Intelligence: Schneider & McGrew 2018 summary



This presentation includes a portion of key material to be published in a forthcoming CHC update/revision chapter--in D. P. Flanagan & Erin M. McDonough (Eds.), Contemporary intellectual assessment: Theories, tests and issues (4th ed.). New York: Guilford Press.

This is only a small amount of the chapter. Also, I have inserted some new material related to test interpretation that is not included in the to-be-published chapter. The tentative date for publication of the Flanagan book is spring 2018. The majority, but not all, of this SlideShare presentation was originally presented at the 2017 NYASP conference on October 19, 2017.

Monday, August 22, 2016

"Intelligent" intelligence testing with the WJ IV COG #7: Why do some individuals obtain markedly different scores on the various WJ IV Ga tests?

This is #7 in the "Intelligent" intelligence testing with the WJ IV COG series at IQs Corner.  Copies of the PPT module can be downloaded by clicking on the LinkedIn icon in the right-hand corner of the slide show below.  A PDF copy of all slides can be found here.

This module was developed in response to a thread on the IAPCHC listserv where an individual asked for help in understanding why the WJ IV Phonological Processing test score could be so different from (lower than) the WJ IV Sound Blending and Segmentation test scores.

Enjoy.



Thursday, May 12, 2016

"Intelligent" intelligence testing with the WJ IV Tests of Cognitive Ability #6: Within-Gc assessment tree


Here is the second WJ IV Within-CHC Assessment Tree--this time for Gc.  See the prior post, where I explain the basis of these groupings (the example is for the Gf tree) and what the various arrows and fonts designate.  I am now also including a tabular form of the information.  This is part of my "Intelligent intelligence testing with the WJ IV Tests of Cognitive Abilities" series.

A PDF copy, which is quite clean, can be downloaded here.

Relevant broad and narrow definitions are below

Comprehension-knowledge (Gc):  The depth and breadth of declarative and procedural knowledge and skills valued by one’s culture. Comprehension of language, words, and general knowledge developed through experience, learning and acculturation.

  • General (verbal) information (K0): The breadth and depth of knowledge that one’s culture deems essential, practical, or worthwhile for most everyone to know.
  • Language development (LD): The general understanding of spoken language at the level of words, idioms, and sentences.  An intermediate factor between broad Gc and other narrow Gc abilities.  It usually represents a number of narrow language abilities working together in concert—therefore it is not likely a unique ability. 
  • Lexical knowledge (VL): The knowledge of word definitions and the concepts that underlie them. Vocabulary knowledge.
  • Listening ability (LS): The ability to understand speech, starting with comprehending single words and increasing to long complex verbal statements. 
Domain-specific knowledge (Gkn): The depth, breadth, and mastery of specialized declarative and procedural knowledge typically acquired through one’s career, hobby, or other passionate interest. The Gkn domain is likely to contain more narrow abilities than are currently listed in the CHC model.  
  • Knowledge of culture (K2): The range of knowledge about the humanities (e.g., philosophy, religion, history, literature, music, and art).





I, Kevin McGrew, am solely responsible for this content.  The information presented here (and in this series) does not necessarily reflect the views of my WJ IV coauthors or that of the publisher of the WJ IV (HMH).


"Intelligent" intelligence testing with the WJ IV Tests of Cognitive Ability #2.5: What is Kaufman's "intelligent" intelligence testing?



This should have been one of the first posts in my "intelligent" testing series.  Better late than never.  Alan Kaufman's 1979 Intelligent Testing with the WISC-R had a profound impact on my intelligence testing practices when I was a practicing school psychologist and in many ways influenced my career move into applied psychometrics, scholarship, etc.  If you prefer a PDF copy, with one slide per page, it can be found here.


Thursday, January 28, 2016

"Intelligent" intelligence testing with the W IV Tests of Cognitive Ability #3: Within-CHC assessment trees - a Gf "tease"



I have decided to temporarily skip the planned third installment in this series and instead provide a "tease" of a small fraction of the "intelligent" testing material I will be posting in this series.  I will post an introduction to what "intelligent" intelligence testing is (as per Kaufman, and as applied to the WJ IV COG/OL) after this tease post.

One feature of Alan Kaufman's "intelligent" testing with the Wechsler series has been the provision of supplemental test groupings--groups of tests that may measure a shared common ability but that are not one of the test's published clusters or indexes.

I have developed what I call "Within-CHC domain assessment and interpretation trees" for all 7 CHC domains in the WJ IV COG.  I developed these assessment trees by reviewing and integrating the following sources of information.


• Close examination of the CFA results in the WJ IV Technical Manual (TM)
• Close examination of the EFA, cluster analysis, and MDS results in the WJ IV TM
• Additional unpublished EFA, CFA, cluster analysis, and MDS (2D & 3D) analyses completed post-WJ IV publication (across ages 6-19)
• Review of supplemental/clinical groupings for the WJ, WJ-R, and WJ III (e.g., McGrew, 1986; 1984--my two WJ COG books)
• Extensive unpublished “Beyond CHC” analysis of the WJ III data
• Theoretical and clinical considerations


Below is the within-Gf assessment tree.


(Note.  Since making this original post, I have now added a tabular version of the above information below.  Also, a clean PDF copy of both images can be found here.)



The dark arrows with bold font labels designate the Gf clusters provided by the WJ IV.  You will see Gf, Gf-Ext, and Quantitative Reasoning.  The dashed lines suggest other tests that might be important to inspect when evaluating a person's Gf abilities.  Note the line from Gf-Ext to the Visualization test.  It is labeled Gf-Ext 4/Gf+Gv hybrid.  This label is not in bold, indicating that it is not a cluster with score norms.  Close inspection of all analyses of the WJ IV norm data found the Visualization test tending to "hang out" with or near the primary Gf tests.  Also, as reported by Carroll (1993), Gf and Gv tests would sometimes form a Gf/Gv hybrid factor (it is well known that factor analysis sometimes has a hard time differentiating Gf and Gv indicators).  This grouping suggests that examiners should check whether the Visualization test score is consistent with the other Gf test scores, which may reflect more shared Gf variance than anything specific to the Visualization test.

Also notice the Quantitative Reasoning-Ext (RQ) supplemental grouping.  This suggests that if the Quantitative Reasoning score is either high or low, one should inspect the Number Matrices and Applied Problems tests from the ACH battery--they will, at times, "follow" the scores on the Quantitative Reasoning cluster.
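For readers who like to formalize this kind of "follow the cluster" check, here is a small, purely illustrative sketch. The decision rule (flagging a supplemental test whose standard score sits more than a chosen number of points from the mean of the cluster's core tests) is my own hypothetical heuristic for organizing the comparison, not a WJ IV scoring procedure, and the scores are made up.

```python
def supplemental_consistency(core_scores, supplemental_score, threshold=10):
    """Compare a supplemental test's standard score with the mean of the core
    cluster tests; flag it when the gap exceeds a (hypothetical) threshold."""
    core_mean = sum(core_scores) / len(core_scores)
    gap = supplemental_score - core_mean
    return core_mean, gap, abs(gap) <= threshold

# Hypothetical standard scores
gf_core = [96, 101, 99]        # primary Gf tests
visualization = 112            # supplemental Gf-Ext/Gf+Gv hybrid candidate

core_mean, gap, consistent = supplemental_consistency(gf_core, visualization)
print(f"Gf core mean = {core_mean:.1f}, gap = {gap:+.1f}, consistent = {consistent}")
```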

Finally, one set of CFA models in the WJ IV TM suggested a possible Gf-Verbal vs. Gf-Quantitative split.  The Verbal Reasoning supplemental grouping consists of the Concept Formation, Analysis-Synthesis, Oral Vocabulary, and Passage Comprehension tests.  Below is a section of the CFA results that supports the possible Gf-Verbal and Gf-Quantitative distinction.  This information is in the WJ IV Technical Manual.  This suggests that the TM can be your "friend."  It contains considerable valuable information regarding tests that are not part of a cluster but that showed evidence of some shared variance with a published cluster, or with the new clinical supplemental test groupings I will present.

Relevant Gf broad and narrow definitions are below:

Fluid reasoning (Gf): The use of deliberate and controlled focused attention to solve novel “on the spot” problems that cannot be solved solely by using prior knowledge (previously learned habits, schemas, or scripts).  Reasoning that depends minimally on learning and acculturation.
  • Induction (I): The ability to infer general implicit principles or rules that govern the observed behavior of a phenomenon or the solution to a problem.  Rule discovery.
  • General sequential reasoning (RG): The ability to reach logical conclusions from given premises and principles, often in a series of two or more sequential steps.  Deductive reasoning.
  • Quantitative reasoning (RQ): The ability to reason, either with induction or deduction, with numbers or mathematical relations, operations and algorithms.
Given that I know people tend not to devour technical manuals like I do, my assessment trees are aids that incorporate all of this information in visual-graphic form--saving you from having to extract this interpretation-relevant information from the TM.

Stay tuned.  Some of the within-CHC assessment trees suggest many more test groupings to consider for clinical interpretation than this Gf example.

I, Kevin McGrew, am solely responsible for this content.  The information presented here (and in this series) does not necessarily reflect the views of my WJ IV coauthors or that of the publisher of the WJ IV. 




Sunday, January 24, 2016

"Intelligent" testing with the WJ IV Tests of Cognitve Ability #2: Connecting the dots of relevant intelligence research


Research falling under the broad topic of human intelligence is extensive. 

For decades I have attempted to keep abreast of intelligence-related research, particularly research that would help with the development, analysis, and interpretation of applied intelligence tests.   I frequently struggled with integrating research that focused on brain-behavior relations or networks, neural efficiency, etc.  I then rediscovered a simple three-level categorization of intelligence research by Earl Hunt.  I modified it into a four-level model, which is represented in the figure above.

In this "intelligent" testing series, primary emphasis will be on harnessing information from the top "psychometric level" of research to aid in test interpretation.  However, given the increased impact of cognitive neuropsychological research on test development, often one must turn to level 2 (information processing) to understand how to interpret specific tests.

This series will draw primarily from the first two levels, although there may be times where I import knowledge from the two brain-related levels.

To better understand this framework, and put the forthcoming information in this series in proper perspective, I would urge you to view the "connecting the dots" video PPT that I previously posted at this blog. 

Here it is.  The next post will begin with the psychometric-level information that serves as the primary foundation of "intelligent" intelligence testing.