Showing posts with label intelligence. Show all posts

Tuesday, July 29, 2025

Journal of Intelligence “Best Paper Award” for McGrew, Schneider, Decker & Bulut (2023) Psychometric network analysis of CHC measures - #psychometric #networkanalysis #intelligence #CHC #WJIV #bestpaper #schoolpsychology #schoolpsychologist


Today I (Kevin McGrew) and colleagues Joel Schneider, Scott Decker, and Okan Bulut were pleased to learn that our 2023 Journal of Intelligence article listed above (open access—click link to read or download) was selected as one of the journal's two "Best Paper Awards" for 2023.

As stated at the journal award page, “The Journal of Intelligence Best Paper Award is granted annually to highlight publications of high quality, scientific significance, and extensive influence. The evaluation committee members choose two articles of exceptional quality that were published in the journal the previous year and announce them online by the end of June.”

Below is the abstract and two figures that may pique your interest. We thank the members of the JOI evaluation committee.

Abstract
For over a century, the structure of intelligence has been dominated by factor analytic methods that presume tests are indicators of latent entities (e.g., general intelligence or g). Recently, psychometric network methods and theories (e.g., process overlap theory; dynamic mutualism) have provided alternatives to g-centric factor models. However, few studies have investigated contemporary cognitive measures using network methods. We apply a Gaussian graphical network model to the age 9–19 standardization sample of the Woodcock–Johnson Tests of Cognitive Ability—Fourth Edition. Results support the primary broad abilities from the Cattell–Horn–Carroll (CHC) theory and suggest that the working memory–attentional control complex may be central to understanding a CHC network model of intelligence. Supplementary multidimensional scaling analyses indicate the existence of possible higher-order dimensions (PPIK; triadic theory; System I-II cognitive processing) as well as separate learning and retrieval aspects of long-term memory. Overall, the network approach offers a viable alternative to factor models with a g-centric bias (i.e., bifactor models) that have led to erroneous conclusions regarding the utility of broad CHC scores in test interpretation beyond the full-scale IQ, g.
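For readers unfamiliar with the method, the edges in a Gaussian graphical network model are partial correlations, which can be recovered by standardizing the precision (inverse covariance) matrix. Below is a minimal sketch with simulated toy data—not the WJ IV sample, and not the authors' exact estimation pipeline, which typically involves regularized estimation:

```python
# Toy sketch of a Gaussian graphical model: edge weights are partial
# correlations obtained from the precision (inverse covariance) matrix.
# Illustrative only -- simulated data, not the WJ IV standardization sample,
# and no regularization (published analyses typically use estimators
# such as EBICglasso).
import numpy as np

rng = np.random.default_rng(0)
scores = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0],
    cov=[[1.0, 0.6, 0.3],
         [0.6, 1.0, 0.4],
         [0.3, 0.4, 1.0]],
    size=5000)

precision = np.linalg.inv(np.cov(scores, rowvar=False))
d = np.sqrt(np.diag(precision))
partial = -precision / np.outer(d, d)   # standardize the precision matrix
np.fill_diagonal(partial, 1.0)          # diagonal is 1 by convention

print(np.round(partial, 2))  # off-diagonal entries are the network's edges
```

Unlike zero-order correlations, each edge here reflects the association between two measures after conditioning on all the others, which is what lets a network model speak to the centrality of constructs like the working memory–attentional control complex.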



Click on images to enlarge for easier viewing/reading






Sunday, July 06, 2025

CHC Theory (2009) article hits 2000+ citations. Thanks.

2,005 citations since 2009!

On occasion I check my Google Scholar profile.  Yesterday I was pleased to see that my most frequently cited peer-reviewed journal article (CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric research—Intelligence) had passed the 2,000-citation mark (n = 2,005).  This clearly has been my most important peer-reviewed journal contribution to the field of intelligence and human cognitive abilities.

Thanks to all who have found the article useful.  And a special thanks to Dr. Doug Detterman.  After making an ISIR presentation about this topic, Doug, who was then the editor of Intelligence, invited me to submit an article.


Click on images to enlarge for easy reading





Saturday, February 01, 2025

New journal in intelligence: #Intelligence and #Cognitive #Abilities


Passing this along to professionals interested in intelligence and cognitive abilities research.

Dear Colleagues and Friends,


Intelligence & Cognitive Abilities (ICA) is up and running and ready for submissions! This has happened in record time thanks to support, encouragement, and input from many people, especially those who helped fund us. We’re also thrilled to announce that Anna-Lena Schubert and Tim Bates have agreed to be Associate Editors.

Since Elsevier’s Intelligence has a “new” direction (and most of their EB resigned in protest), ICA was created to ensure intelligence researchers have a publication outlet edited by individuals with strong track records of intelligence research and committed to free inquiry. Moreover, to maximize the availability of every published paper, ICA is entirely online and Open Access. Because profit is not a primary motivating factor, the publication fee is substantially lower than all other journals that cover this area of psychology (at least by 50%). Since many intelligence researchers do not have funding for publication costs, ICA offers generous waivers and discounts. The Editorial Board represents expertise for the full, diverse range of intelligence topics, research designs, and data analysis methods.

But wait. There’s more. To encourage submissions, any manuscripts submitted before September 1st, 2025 and subsequently accepted will have all publication charges waived.

ICA is hosted on the Scholastica journal publishing platform. Here is a link for official ICA information including Aims and Scope, Editorial Board, and what authors need to know for preparing and submitting manuscripts: icajournal.scholasticahq.com (see tabs on upper left; this site will be evolving visually and adding functionality over the next few weeks, but it already supports all aspects of submission, review, and publication). Any email invitations you receive inviting reviews of submissions and other simple site registration will come from Scholastica.

Intelligence research has moved far beyond traditional psychometrics into cognitive psychology, genetics, neuroimaging, neuroscience, and many other domains. All perspectives are welcomed to publish in ICA. We pledge fair and constructive reviews by experts and speedy online publication. But to be successful and serve the intelligence research community, we need submissions so please consider clicking the link above with your best work as soon as possible. It’s now up to you.

With gratitude,

Sincerely,

Tom Coyle, Editor-in-Chief


Rich Haier, Consulting Editor

Friday, November 22, 2024

The Evolution of #Intelligence (journals)—the two premier intelligence journals compared—shout out to two #schoolpsychologists

The Evolution of Intelligence: Analysis of the Journal of Intelligence and Intelligence 

Click here to read and download the paper.

by Fabio Andres Parra-Martinez 1,*, Ophélie Allyssa Desmet 2, and Jonathan Wai 1

1 Department of Education Reform, University of Arkansas, Fayetteville, AR 72701, USA
2 Department of Human Services, Valdosta State University, Valdosta, GA 31698, USA
* Author to whom correspondence should be addressed.

J. Intell. 2023, 11(2), 35; https://doi.org/10.3390/jintelligence11020035

Abstract

What are the current trends in intelligence research? This parallel bibliometric analysis covers the two premier journals in the field: Intelligence and the Journal of Intelligence (JOI) between 2013 and 2022. Using Scopus data, this paper extends prior bibliometric articles reporting the evolution of the journal Intelligence from 1977 up to 2018. It includes JOI from its inception, along with Intelligence to the present. Although the journal Intelligence’s growth has declined over time, it remains a stronghold for traditional influential research (average publications per year = 71.2, average citations per article = 17.07, average citations per year = 2.68). JOI shows a steady growth pattern in the number of publications and citations (average publications per year = 33.2, average citations per article = 6.48, total average citations per year = 1.48) since its inception in 2013. Common areas of study across both journals include cognitive ability, fluid intelligence, psychometrics–statistics, g-factor, and working memory. Intelligence includes core themes like the Flynn effect, individual differences, and geographic IQ variability. JOI addresses themes such as creativity, personality, and emotional intelligence. We discuss research trends, co-citation networks, thematic maps, and their implications for the future of the two journals and the evolution and future of the scientific study of intelligence.

Yes….a bit of a not-so-humble brag.  In the co-citation JOI figure below, the Schneider, W. J. is the Schneider & McGrew (2012) chapter, which has now been replaced by Schneider & McGrew (2018; sorry, I don't have a good PDF copy to link).  In the second Intelligence co-citation network figure, the McGrew, K. S. (2009) paper, next to Carroll's (1993) seminal work, is yours truly—my most cited journal article (see Google Scholar Profile).  The frequent citations of the Schneider & McGrew (2012) and McGrew (2009) publications are indicators of the "bridger" function Joel and I have provided—a bridge between intelligence research/theory and intelligence test development, use, and interpretation in school psychology.

(Click on images to enlarge for better viewing)



Tuesday, November 19, 2024

Occam’s razor and human #intelligence (and #cognitive ability tests)….yes…but sometimes no…food for thought for #schoolpsychologists

 


"Occam's razor (also spelled Ockham's razor or Ocham's razor; Latin: novacula Occami) is the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements. It is also known as the principle of parsimony or the law of parsimony (Latin: lex parsimoniae)."

In the context of fitting structural CFA models to intelligence test data, it can be summarized as "given two models with similar fit to the data, the simpler model is preferred" (Kline, 2011, p. 102). The law of parsimony is frequently invoked in research articles when an investigator is faced with competing factor models regarding the underlying structure of a cognitive ability test battery. However, when complex human behavior is involved, especially something as complex as human intelligence and the brain, it is possible that Occam's razor might interfere with a thorough understanding of human intelligence and of the test batteries designed to measure it. The following quote has stuck with me as an important reminder that, when faced with alternative and more complex statistical CFA models, these models should not be summarily dismissed based only on the parsimony principle. As stated by Stankov, Boyle, and Cattell (1995):


"[W]hile we acknowledge the principle of parsimony and endorse it whenever applicable, the evidence points to relative complexity rather than simplicity…the insistence on parsimony at all costs can lead to bad science" (p. 16).


Stankov, L., Boyle, G. J., & Cattell, R. B. (1995). Models and paradigms in personality and intelligence research. In D. Saklofske & M. Zeidner (Eds.), International handbook of personality and intelligence (pp. 15–43). New York, NY: Plenum Press.
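The fit-versus-parsimony trade-off Kline summarizes is often operationalized with information criteria, which reward likelihood but penalize free parameters. A toy sketch with made-up log-likelihoods and parameter counts (illustrative only, not from any actual CFA):

```python
# Toy illustration of the parsimony principle via information criteria.
# AIC and BIC reward model fit (log-likelihood) but penalize the number of
# free parameters; BIC's penalty also grows with sample size.
# All numbers below are hypothetical.
import math

def aic(loglik, k):
    """Akaike information criterion: -2*logL + 2*k."""
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    """Bayesian information criterion: -2*logL + k*ln(n)."""
    return -2 * loglik + k * math.log(n)

n = 500  # hypothetical sample size
# (log-likelihood, free parameters) for a simpler vs. a more complex model
simple_model = (-10500.0, 20)
complex_model = (-10488.0, 35)

for name, (ll, k) in [("simple", simple_model), ("complex", complex_model)]:
    print(f"{name}: AIC = {aic(ll, k):.1f}  BIC = {bic(ll, k, n):.1f}")
```

In this made-up example the simpler model is preferred on both criteria even though the complex model fits slightly better in raw likelihood—exactly the kind of mechanical preference that Stankov, Boyle, and Cattell caution against applying "at all costs."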

Wednesday, January 06, 2021

The Model of Achievement Competence Motivation (MACM) Part B: An overview of the MACM model

The Model of Achievement Competence Motivation (MACM) is a series of slide modules.  By clicking on the link you can view the slides at SlideShare.  This is the second (Part B) in the series--An overview of the model.  There will be a total of five modules.  The modules will serve as supplemental materials to "The Model of Achievement Competence Motivation (MACM)--Standing on the shoulders of giants" (McGrew, in press, 2021 - in a forthcoming special issue on motivation in the Canadian Journal of School Psychology)

Click here for first of the series (Part A:  Introduction and Background)

Click here for prior "beyond IQ" labeled posts at this blog.

Monday, January 04, 2021

The Model of Achievement Competence Motivation (MACM): Part A - Introduction to module series

The Model of Achievement Competence Motivation (MACM) is a series of slide modules.  By clicking on the link you can view the slides at SlideShare.  This is the first (Part A) in the series. The modules will serve as supplemental materials to "The Model of Achievement Competence Motivation (MACM)--Standing on the shoulders of giants" (McGrew, in press, 2021 - in a forthcoming special issue on motivation in the Canadian Journal of School Psychology)



Click here for prior "beyond IQ" labeled posts at this blog.




Wednesday, December 16, 2020

The big picture ecological systems perspective of intelligence (and IQ tests): Is COVID disrupting and rearranging the hierarchy of ecological system influences on children's learning?

Understanding intelligence testing in the context of Bronfenbrenner's ecological systems model--is COVID seriously damaging, rearranging, decoupling, etc. the major proximal and distal sources of influence on a child's learning, resulting in a need to look closer at non-cognitive (conative) variables...beyond IQ?

This morning I revisited one of my favorite videos (of those I have posted), first posted in 2015, where I explained how intelligence testing needed to be understood in the context of distal and proximal influences in a child's environment.  I believe that a "big picture" understanding of the wide range of variables that influence school learning requires a "humbling" of the status of intelligence testing, a field where I have spent the majority of my professional career.  After one finishes the video, think about the "big picture" ecological systems model that is described.  IMHO, COVID may be seriously impacting the primary distal and proximal variables that influence (both positively and negatively) school learning (national educational policy; school systems and local community sources of formal and informal support; individual schools; the lack of in-class learning; parents working from home or being unemployed), as well as peer interactions in a child's neighborhood (due to social distancing).  Stare at the final big picture figure and reflect on how COVID is disrupting all the primary sets of variables that influence school learning.  The range of disrupted causal influences is staggering.

The end result, for many children, is learning via distance learning methods, often with the aid of parents who are not educators.  Although intelligence is very important, and may be even more important as children must use their abilities to learn more independently, it strikes me that at this point in our country's (and the world's) current crises, it may be the non-cognitive variables that need better understanding and enhancement.  That is, the conative (a.k.a. noncognitive) "beyond IQ" variables of motivation and self-regulated learning (a.k.a. a part of volition) may be more important today than ever.  To engage in independent, loosely (dis)organized instruction, students who have strong motivation and independent self-regulated learning strategies may have a distinct advantage—those who do not may be at a serious disadvantage.  Jack Carroll's seminal model of school learning, which spawned decades of research on models of school learning, reminds us, in elegant terms, that aside from key student individual difference variables, the quantity (opportunity for instruction) and quality of instruction are key variables in school learning.  Both are being seriously impacted by COVID.

COVID appears to be a high-level, all-encompassing distal variable (wielding impact at the global, national, community, and school system levels) that is rearranging the relative importance of variables in school learning.  Students now, and in the future, may need more assistance in acquiring critical non-cognitive motivational dispositions and independent self-regulated learning strategies in order to get the most from their repertoire of cognitive abilities and maintain academic growth.  It may be necessary to revise the degree of influence of distal and proximal school learning influence variables in Bronfenbrenner's ecological systems model.





Saturday, June 23, 2018

How to raise a society's average intelligence—education: A meta-analysis




How Much Does Education Improve Intelligence? A Meta-Analysis.
Psychological Science, 1–12. Article link.

Stuart J. Ritchie and Elliot M. Tucker-Drob

Abstract

Intelligence test scores and educational duration are positively correlated. This correlation could be interpreted in two ways: Students with greater propensity for intelligence go on to complete more education, or a longer education increases intelligence. We meta-analyzed three categories of quasiexperimental studies of educational effects on intelligence: those estimating education-intelligence associations after controlling for earlier intelligence, those using compulsory schooling policy changes as instrumental variables, and those using regression-discontinuity designs on school-entry age cutoffs. Across 142 effect sizes from 42 data sets involving over 600,000 participants, we found consistent evidence for beneficial effects of education on cognitive abilities of approximately 1 to 5 IQ points for an additional year of education. Moderator analyses indicated that the effects persisted across the life span and were present on all broad categories of cognitive ability studied. Education appears to be the most consistent, robust, and durable method yet to be identified for raising intelligence.

From summary

The results reported here indicate strong, consistent evidence for effects of education on intelligence. Although the effects—on the order of a few IQ points for a year of education—might be considered small, at the societal level they are potentially of great consequence. A crucial next step will be to uncover the mechanisms of these educational effects on intelligence in order to inform educational policy and practice.


- Posted using BlogPress from my iPad

Saturday, May 19, 2018

The Relation between Intelligence and Adaptive Behavior: A Meta-Analysis 

Very important meta-analysis of the adaptive behavior (AB)–IQ relation.  The primary finding is on target with a prior informal synthesis by McGrew (2015).

The Relation between Intelligence and Adaptive Behavior: A Meta-Analysis   
 
Ryan M. Alexander 
 
ABSTRACT 
 
Intelligence tests and adaptive behavior scales measure vital aspects of the multidimensional nature of human functioning. Assessment of each is a required component in the diagnosis or identification of intellectual disability, and both are frequently used conjointly in the assessment and identification of other developmental disabilities. The present study investigated the population correlation between intelligence and adaptive behavior using psychometric meta-analysis. The main analysis included 148 samples with 16,468 participants overall. Following correction for sampling error, measurement error, and range departure, analysis resulted in an estimated population correlation of ρ = .51. Moderator analyses indicated that the relation between intelligence and adaptive behavior tended to decrease as IQ increased, was strongest for very young children, and varied by disability type, adaptive measure respondent, and IQ measure used. Additionally, curvilinear regression analysis of adaptive behavior composite scores onto full scale IQ scores from datasets used to report the correlation between the Wechsler Intelligence Scale for Children—Fifth Edition and Vineland-II scores in the WISC-V manuals indicated a curvilinear relation—adaptive behavior scores had little relation with IQ scores below 50 (WISC-V scores do not go below 45), from which there was a positive relation up until an IQ of approximately 100, at which point and beyond the relation flattened out. Practical implications of varying correlation magnitudes between intelligence and adaptive behavior are discussed (viz., how the size of the correlation affects eligibility rates for intellectual disability).
 
Other Key Findings Reported
 
McGrew (2012) augmented Harrison's data-set and conducted an informal analysis including a total of 60 correlations, describing the distributional characteristics observed in the literature regarding the relation. He concluded that a reasonable estimate of the correlation is approximately .50, but made no attempt to explore factors potentially influencing the strength of the relation.
 
Results from the present study corroborate the conclusions of Harrison (1987) and McGrew (2012) that the IQ/adaptive behavior relation is moderate, indicating distinct yet related constructs. The results showed indeed that the correlation is likely to be stronger at lower IQ levels—a trend that spans the entire ID range, not just the severe range. The estimated true mean population is .51, and study artifacts such as sampling error, measurement error, and range departure resulted in somewhat attenuated findings in individual studies (a difference of about .05 between observed and estimated true correlations overall).
 
 
The present study found the estimated true population mean correlation to be .51, meaning that adaptive behavior and intelligence share 26% common variance. In practical terms, this magnitude of relation suggests that an individual's IQ score and adaptive behavior composite score will not always be commensurate and will frequently diverge, and not by a trivial amount. Using the formula Ŷ = Ȳ + ρ(X − X̄), where Ŷ is the predicted adaptive behavior composite score, Ȳ is the mean adaptive behavior score in the population, ρ is the correlation between adaptive behavior and intelligence, X is the observed IQ score for an individual, and X̄ is the mean IQ score, and accounting for regression to the mean, the predicted adaptive behavior composite score corresponding to an IQ score of 70, given a correlation of .51, would be 85—a score that is a full standard deviation above an adaptive behavior composite score of 70, the cut score recommended by some entities to meet ID eligibility requirements. With a correlation of .51, and accounting for regression to the mean, an IQ score of 41 would be needed in order to have a predicted adaptive behavior composite score of 70. Considering that approximately 85% of individuals with ID have reported IQ scores between 55 and 70±5 (Heflinger et al., 1987; Reschly, 1981), the eligibility implications, especially for those with less severe intellectual impairment, are alarming. In fact, derived from calculations by Lohman and Korb (2006), only 17% of individuals obtaining an IQ score of 70 or below would be expected to also obtain an adaptive behavior composite score of 70 or below when the correlation between the two is .50.
 
 
The purpose of this study was to investigate the relation between IQ and adaptive behavior and variables moderating the relation using psychometric meta-analysis. The findings contributed in several ways to the current literature with regard to IQ and adaptive behavior. First, the estimated true mean population correlation between intelligence and adaptive behavior following correction for sampling error, measurement error, and range departure is moderate, indicating that intelligence and adaptive behavior are distinct, yet related, constructs. Second, IQ level has a moderating effect on the relation between IQ and adaptive behavior. The correlation is likely to be stronger at lower IQ levels, and weaker as IQ increases. Third, while not linear, age has an effect on the IQ/adaptive behavior relation. The population correlation is highest for very young children, and lowest for children between the ages of five and 12. Fourth, the magnitude of IQ/adaptive behavior correlations varies by disability type. The correlation is weakest for those without disability, and strongest for very young children with developmental delays. IQ/adaptive behavior correlations for those with ID are comparable to those with autism when not matched on IQ level. Fifth, the IQ/adaptive correlation when parents/caregivers serve as adaptive behavior respondents is comparable to when teachers act as respondents, but direct assessment of adaptive behavior results in a stronger correlation. Sixth, an individual's race does not significantly alter the correlation between IQ and adaptive behavior, but future research should evaluate the influence of race of the rater on adaptive behavior ratings. Seventh, the correlation between IQ and adaptive behavior varies depending on IQ measure used—the population correlation when Stanford-Binet scales are employed is significantly higher than when Wechsler scales are employed. 
And eighth, the correlation between IQ and adaptive behavior is not significantly different between adaptive behavior composite scores obtained from the Vineland, SIB, and ABAS families of adaptive behavior measures, which are among those that have been deemed appropriate for disability identification. Limitations of this study notwithstanding, it is the first to employ meta-analysis procedures and techniques to examine the correlation between intelligence and adaptive behavior and how moderators alter this relation. The results of this study provide information that can help guide practitioners, researchers, and policy makers with regard to the diagnosis or identification of intellectual and developmental disabilities.
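The regression-to-the-mean arithmetic quoted above is easy to verify. A minimal sketch using the reported population correlation of .51 and population means of 100 (illustrative only; the function name is mine, not the study's):

```python
# Sketch of the regression-to-the-mean prediction described in the excerpt:
# Y-hat = Y-mean + rho * (X - X-mean), with both means at 100 and rho = .51.

def predict_adaptive_behavior(iq, rho=0.51, mean=100.0):
    """Predicted adaptive behavior composite from an observed IQ score."""
    return mean + rho * (iq - mean)

print(round(predict_adaptive_behavior(70)))  # 85, matching the excerpt
print(round(predict_adaptive_behavior(41)))  # 70, matching the excerpt
```

This makes the eligibility problem concrete: with ρ = .51, an examinee at the IQ = 70 cut score is expected to score a full standard deviation above an adaptive behavior cut score of 70.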


- Posted using BlogPress from my iPad

Wednesday, May 16, 2018

Higher intelligence related to more efficiently organized brains—bigger/larger/more is not always better




Click on image to enlarge

Diffusion markers of dendritic density and arborization in gray matter predict differences in intelligence. Article link.

Erhan Genç, Christoph Fraenz, Caroline Schlüter, Patrick Friedrich, Rüdiger Hossiep, Manuel C. Voelkle, Josef M. Ling, Onur Güntürkün, & Rex E. Jung

Abstract

Previous research has demonstrated that individuals with higher intelligence are more likely to have larger gray matter volume in brain areas predominantly located in parieto-frontal regions. These findings were usually interpreted to mean that individuals with more cortical brain volume possess more neurons and thus exhibit more computational capacity during reasoning. In addition, neuroimaging studies have shown that intelligent individuals, despite their larger brains, tend to exhibit lower rates of brain activity during reasoning. However, the microstructural architecture underlying both observations remains unclear. By combining advanced multi-shell diffusion tensor imaging with a culture-fair matrix-reasoning test, we found that higher intelligence in healthy individuals is related to lower values of dendritic density and arborization. These results suggest that the neuronal circuitry associated with higher intelligence is organized in a sparse and efficient manner, fostering more directed information processing and less cortical activity during reasoning.

From discussion

Taken together, the results of the present study contribute to our understanding of human intelligence differences in two ways. First, our findings confirm an important observation from previous research, namely, that bigger brains with a higher number of neurons are associated with higher intelligence. Second, we demonstrate that higher intelligence is associated with cortical mantles with sparsely and well-organized dendritic arbor, thereby increasing processing speed and network efficiency. Importantly, the findings obtained from our experimental sample were confirmed by the analysis of an independent validation sample from the Human Connectome Project.



- Posted using BlogPress from my iPad

Thursday, April 26, 2018

Meta-analytic SEM of literacy and language development relations

Using Meta-analytic Structural Equation Modeling to Study Developmental Change in Relations Between Language and Literacy. Article link.

Jamie M. Quinn and Richard K. Wagner

The purpose of this review was to introduce readers of Child Development to the meta-analytic structural equation modeling (MASEM) technique. Provided are a background to the MASEM approach, a discussion of its utility in the study of child development, and an application of this technique in the study of reading comprehension (RC) development. MASEM uses a two-stage approach: first, it provides a composite correlation matrix across included variables, and second, it fits hypothesized a priori models. The provided MASEM application used a large sample (N = 1,205,581) of students (ages 3.5–46.225) from 155 studies to investigate the factor structure and relations among components of RC. The practical implications of using this technique to study development are discussed.
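MASEM's two-stage logic can be illustrated in miniature. The sketch below implements a naive version of stage 1—pooling study-level correlation matrices weighted by sample size—with made-up data; real MASEM software (e.g., the R metaSEM package) uses multivariate pooling methods and handles incomplete matrices:

```python
# Naive sketch of MASEM stage 1: pool study correlation matrices into one
# composite matrix, weighting each study by its sample size.
# Hypothetical data for three studies measuring the same three variables.
import numpy as np

studies = [  # (correlation matrix, sample size)
    (np.array([[1.0, .50, .30], [.50, 1.0, .40], [.30, .40, 1.0]]), 120),
    (np.array([[1.0, .55, .25], [.55, 1.0, .35], [.25, .35, 1.0]]), 340),
    (np.array([[1.0, .45, .35], [.45, 1.0, .45], [.35, .45, 1.0]]),  90),
]

total_n = sum(n for _, n in studies)
pooled = sum(r * n for r, n in studies) / total_n  # stage 1 composite matrix
print(np.round(pooled, 3))
# Stage 2 (not shown) would fit the hypothesized SEM to this pooled matrix.
```

The payoff of the two-stage design is that no single study needs to measure every variable: the composite matrix aggregates evidence across studies before any structural model is fit.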

Click on images to enlarge.









- Posted using BlogPress from my iPad

Monday, March 05, 2018

Developments in the CHC domain of visual processing



Much new is occurring in the domain of Gv. Below is a new review of the Gv research and a proposed heuristic framework. This is then followed by select excerpts from our (Schneider & McGrew, 2018) upcoming CHC update chapter in the CIA book, where we add some caution regarding newly "proposed" Gv frameworks.

A Heuristic Framework of Spatial Ability: A Review and Synthesis of Spatial Factor Literature to Support its Translation into STEM Education. Article link.

Jeffrey Buckley & Niall Seery & Donal Canty


Abstract

An abundance of empirical evidence exists identifying a significant correlation between spatial ability and educational performance particularly in science, technology, engineering and mathematics (STEM). Despite this evidence, a causal explanation has yet to be identified. Pertinent research illustrates that spatial ability can be developed and that doing so has positive educational effects. However, contention exists within the relevant literature concerning the explicit definition for spatial ability. There is therefore a need to define spatial ability relative to empirical evidence which in this circumstance relates to its factor structure. Substantial empirical evidence supports the existence of unique spatial factors not represented in modern frameworks. Further understanding such factors can support the development of educational interventions to increase their efficacy and related effects in STEM education. It may also lead to the identification of why spatial ability has such a significant impact on STEM educational achievement as examining more factors in practice can help in deducing which are most important. In light of this, a synthesis of the spatial factors offered within existing frameworks with those suggested within contemporary studies is presented to guide further investigation and the translation of spatial ability research to further enhance learning in STEM education.

Keywords: spatial ability; spatial factors; STEM education; human intelligence

Click on image to enlarge.




The following are select sections of our Gv chapter in the forthcoming CIA book.




Visual processing (Gv) can be defined as the ability to make use of simulated mental imagery to solve problems—perceiving, discriminating, manipulating, and recalling nonlinguistic images in the “mind’s eye.” Humans do more than “act” in space; they “cognize” about space (Tommasi & Laeng, 2012). Once the eyes have transmitted visual information, the visual system of the brain automatically performs several low-level computations (e.g., edge detection, light–dark perception, color differentiation, motion detection). The results of these low-level computations are used by various higher-order processors to infer more complex aspects of the visual image (e.g., object recognition, constructing models of spatial configuration, motion prediction). Traditionally, tests measuring Gv are designed to measure individual differences in these higher-order processes as they work in tandem to perceive relevant information (e.g., a truck is approaching!) and solve problems of a visual–spatial nature (e.g., arranging suitcases in a car trunk).

Among the CHC domains, Gv has been one of the most studied (Carroll, 1993). Yet it has long been considered a second-class citizen in psychometric models of intelligence, due in large part to its relatively weak or inconsistent prediction of important outcomes in comparison to powerhouse abilities like Gf and Gc (Lohman, 1996). But “the times they are a-changing.” Carroll (1993), citing Eliot and Smith (1983), summarized three phases of research on spatial abilities, ending in large part in the late 1970s to early 1980s (Lohman, 1979). A reading of Carroll’s survey conveys the impression that his synthesis reflects nothing more than what was largely known already in the 1980s. We believe that the Gv domain is entering a fourth period and undergoing a new renaissance, which will result in its increased status in CHC theory and eventually in cognitive assessment. Carroll, the oracle, provided a few hints in his 1993 Gv chapter.

Carroll (1993) was prophetic regarding two of the targets of the resurgent interest in Gv and Gv-related constellations (often broadly referred to as spatial thinking, spatial cognition, spatial intelligence, or spatial expertise; Hegarty, 2010; National Research Council, 2006). In Carroll’s discussion of “other possible visual perception factors” (which he did not accord formal status in his model), he mentioned “ecological” abilities (e.g., abilities reflecting a person’s ability to orient the self in real-world space and maintain a sense of direction) and dynamic (vs. static) spatial reasoning factors (e.g., predicting where a moving object is heading and when it will arrive at a predicted location).

Carroll’s ecological abilities are reflected in a growing body of research regarding large-scale spatial navigation. Large-scale spatial navigation is concerned with finding one’s way—the ability to represent and maintain a sense of direction and location and to move through the environment (Allen, 2003; Hegarty, 2010; Newcombe, Uttal, & Sauter, 2013; Wolbers & Hegarty, 2010; Yilmaz, 2009). Using a map or smartphone GPS system to find one’s way to a restaurant, and then to return to one’s hotel room, in an unfamiliar large city requires large-scale spatial navigation. A primary distinction between small- and large-scale spatial abilities is the use of different perspectives or frames of reference. Small-scale spatial ability, as represented by traditional psychometric tests on available cognitive or neuropsychological batteries, involves allocentric or object-based transformation.

Large-scale spatial ability typically involves an egocentric spatial transformation, in which the viewer’s internal perspective or frame of reference changes regarding the environment, while the person’s relationship with the objects does not change (Hegarty & Waller, 2004; Newcombe et al., 2013; Wang, Cohen, & Carr, 2014). Recent meta-analyses indicate that large-scale spatial abilities are clearly distinct from small-scale spatial abilities, with an overall correlation of approximately .27. In practical terms, this means that the ability to easily solve the 3D Rubik’s cube may not predict the probability of getting lost in a large, unfamiliar city. Also supporting a clear distinction between the two types of spatial abilities is developmental evidence suggesting that large-scale spatial abilities show a much faster rate of age-related decline, and that the two types are most likely related to different brain networks (Newcombe et al., 2013; Wang et al., 2014).

The distinction between static and dynamic spatial abilities is typically traced to work by Pellegrino and colleagues (Hunt, Pellegrino, Frick, Farr, & Alderton, 1988; Pellegrino, Hunt, Abate, & Farr, 1987) and is now considered one of the two primary organizational facets of spatial thinking (Uttal, Meadow, et al., 2013). Static spatial abilities are well represented by standard tests of Gv (e.g., block design tests). Dynamic and static spatial tasks differ primarily by the presence or absence of movement. “Dynamic spatial ability is one's ability to estimate when a moving object will reach a destination, or one's skill in making time-to-contact (TTC) judgments” (Kyllonen & Chaiken, 2003, p. 233). The ability to catch a football, play a video game, or perform as an air traffic controller requires dynamic spatial abilities, as “one must note the position of the moving object, judge the velocity of the object, anticipate when the object will reach another point (e.g., one's hand, car, or ship), and take some motor action in response to that judgment. In the perception literature, the research surrounding this everyday human information-processing activity has been known as ‘time to collision’” (Kyllonen & Chaiken, 2003, p. 233). Although the dynamic–static distinction has gained considerable traction and support (Allen, 2003; Buckley, Seery, & Canty, 2017; Contreras, Colom, Hernandez, & Santacreu, 2003), some research has questioned whether the underlying difference reflects an actual spatial ability distinction. Kyllonen and Chaiken (2003) reported research suggesting that the underlying cognitive process involved in performing dynamic spatial tasks may be a nonspatial, counting-like clock mechanism—temporal processing, not spatial.

The driving forces behind the increased interest and new conceptual developments regarding spatial thinking are threefold. First, rapid technological changes in the past decade have made relatively cheap and accessible visual-graphic-based technology available to large portions of the population. Individuals can immerse themselves in 3D virtual-reality environments for pleasure or learning. Computer visualizations, often available on smartphones and computer tablets, can be used to teach medical students human anatomy and surgery. The complexities and nuances underlying “big data” can now be unearthed with complex visual network models that can be rotated at will. Anyone can learn geography by zooming over the world via Google Earth to explore locations and cities. Individuals rely on car- or phone-based GPS visual navigation systems to move from point A to point B. Clearly, developing Gv abilities (or spatial thinking) is becoming simultaneously easier via technology, but also more demanding, as humans must learn how to use and understand Gv graphic interface tools that present complex visual displays of multidimensional information.
Second, ever-increasing calls have been made to embed spatial thinking throughout the educational curriculum—“spatializing” the curriculum (Newcombe, 2013)—to raise the collective spatial intelligence of our children and youth (Hegarty, 2010; National Research Council, 2006). The extant research has demonstrated a significant link between spatial abilities and educational performance in the fields of science, technology, engineering, and mathematics (STEM; Buckley et al., 2017; Hegarty, 2010; Lubinski, 2010; Newcombe et al., 2013). Gv abilities and individuals with spatially oriented cognitive “tilts” (Lubinski, 2010) are becoming increasingly valued by technologically advanced societies. More important, research has demonstrated that spatial abilities or strategies are malleable (National Research Council, 2006; Tzuriel & Egozi, 2010; Uttal, Meadow, et al., 2013; Uttal, Miller, & Newcombe, 2013).

Although many psychologists are important drivers of the renewed interest in an expanded notion of the conceptualization and measurement of Gv (e.g., Allen, 2003; Hegarty, 2010; Kyllonen & Chaiken, 2009; Kyllonen & Gluck, 2003; Lubinski, 2010; Uttal, Miller, et al., 2013; Wang et al., 2014), some of the more active research and conceptualizing are being driven by researchers in education (e.g., National Research Council, 2006; Yilmaz, 2009), cognitive neuroscience (e.g., Thompson, Slotnick, Burrage, & Kosslyn, 2009; Wolbers & Hegarty, 2010), and the STEM disciplines (Harle & Towns, 2010; Seery, Buckley, & Delahunty, 2015). Clearly, the CHC model’s “mind’s eye” (Gv) is achieving greater prominence, which needs to be supported by renewed research on additional, well-supported narrow abilities yet to be identified, and on innovative measurement methods, particularly regarding large-scale and dynamic spatial abilities.



Do other Gv narrow abilities exist? Of course. As with all CHC domains, the validated narrow abilities in the current taxonomy are largely the result of bottom-up programs of research predicated on developing tests for practical purposes (e.g., prediction, diagnosis). Recent conceptualizations of Gv as a broader spatial thinking construct; the dynamic versus static and large-scale versus small-scale conceptualizations; and other functional family conceptualizations of Gv abilities are opening a potential Pandora’s box of hypothesized new Gv narrow abilities. For example, Buckley and colleagues (2017) have proposed a comprehensive Gv taxonomy that includes the current Gv abilities and posits 16 potential new narrow abilities based on either theory or research, some previously reviewed by Carroll (1993). These possible new narrow abilities are related to classic spatial tasks (spatial orientation); imagery (quality and speed); illusions (shape and direction, size contrast, overestimation and underestimation, frame of reference); judgments (direction, speed, movement); and dynamic versions of current Gv abilities (visual memory, serial perceptual integration, spatial scanning, perceptual alternations).

These new Gv conceptualizations are welcomed, but they must be studied with serious caution. All new candidates for Gv abilities will need to be validated with well-conceptualized structural validity research (see “Criteria for Updating CHC Theory,” above). Also, if new Gv abilities are identified, it is important to determine whether they have any practical use or validity. An instructive example is a recent CFA CHC-designed study that provided preliminary support for a narrow ability of face recognition (called face identification recognition by the researchers), distinct from other Gv and CHC abilities (Gignac, Shankaralingam, Walker, & Kilpatrick, 2016). The face recognition ability may have practical usefulness, as it could facilitate measurement and research regarding the phenomenon of prosopagnosia (in which a cognitively capable individual is completely unable to recognize familiar faces). Although it is important to guard against premature hardening of the CHC categories (McGrew, 2005; Schneider & McGrew, 2012), we believe that even greater due diligence is necessary to prevent premature proliferation of new entries in the Gv domain in the CHC model. We don’t want to be at a place soon where formal START negotiations (STrategic Ability Reduction Talks) are necessary to halt unsupported speculation about and proliferation of Gv abilities.



- Posted using BlogPress from my iPad

Friday, March 02, 2018

BB (blatant brag): McGrew CHC 2009 article in Intelligence #1 (2008-2015) and top #10 all time




This was a pleasant surprise. I knew my 2009 Intelligence article was cited frequently, but I never knew it was number one from 2008–2015 and that it made the top 10 all-time list for the journal Intelligence. I believe this is a reflection of the impact the CHC taxonomy has had. This should make my mom proud. Here is a link to the original article.

Bibliometric analysis across eight years 2008–2015 of Intelligence articles: An updating of Wicherts (2009). Article link.

Bryan J. Pesta

Abstract

I update and expand upon Wicherts' (2009) editorial in Intelligence. He reported citation counts of papers published in this journal from 1977 to 2007. All these papers are now at least a decade old, and many more new articles have been published since Wicherts' analysis. An updated study is needed to help (1) quantify the journal's more recent impact on the scientific study of intelligence, and (2) alert researchers and educators to highly cited articles, especially newer ones. Thus, I conducted a bibliometric analysis of all articles published here from 2008 to 2015. Data sources included both the Web of Science (WOS) and Google Scholar (GS). The eight-year set comprised 619 articles, published by 1897 authors. The average article had 17.0 (WOS) and 32.9 (GS) citations overall (2.75 and 5.33 citations per year, respectively). These metrics compare favorably with those from other psychology journals. In addition, a list of the most prolific authors is provided. Also reported is a list showing many articles in this set with counts greater than one hundred, and an updated top 25 list for the history of this journal.


“Also noteworthy is that nine of the articles in the old list (not shown here) dropped off the new list. Of their replacements, only three of the nine were published within the last decade: Deary, Strand, Smith, and Fernandes (2007); McGrew (2009); and Strenze (2007). The McGrew (2009) paper is again notable. It is the only article in my newer set (2008–2015) to make the all-time list. The paper ranks ninth on the all-time list with 281 citations, just eight years after being published.”

More recent Google Scholar citation info indicates that the article was still going strong from 2016–2017.



Click on images to enlarge.








- Posted using BlogPress from my iPad

Tuesday, December 20, 2016

Detterman on the importance of human intelligence

In a recent article by Robert Colom (2016) in the Spanish Journal of Psychology, I was reminded of an important quote by one of the leaders in intelligence research over the past 50 years: Dr. Doug Detterman.

In the farewell editorial note published by D. K. Detterman after four decades as editor of the journal ‘Intelligence’, he wrote: “from very early, I was convinced that intelligence was the most important thing of all to understand, more important than the origin of the universe, more important than climate change, more important than curing cancer, more important than anything else. That is because human intelligence is our major adaptive function and only by optimizing it will we be able to save ourselves and other living things from ultimate destruction. It is as simple as that”.


- Posted using BlogPress from my iPad

Monday, December 05, 2016

Human intelligence research four-levels of explanation: Connecting the dots - an Oldie-But-Goodie (OBG) post

Click on image to enlarge.

Research that falls under the breadth of the topic of human intelligence is extensive.

For decades I have attempted to keep abreast of intelligence-related research, particularly research that would help with the development, analysis, and interpretation of applied intelligence tests.  I frequently struggled with integrating research that focused on brain-behavior relations or networks, neural efficiency, etc.  I then rediscovered a simple three-level categorization of intelligence research by Earl Hunt.  I modified it into a four-level model, which is represented in the figure above.

In this "intelligent" testing series, primary emphasis will be on harnessing information from the top "psychometric level" of research to aid in test interpretation.  However, given the increased impact of cognitive neuropsychological research on test development, often one must turn to level 2 (information processing) to understand how to interpret specific tests.

This series will draw primarily from the first two levels, although there may be times when I import knowledge from the two brain-related levels.

To better understand this framework, and put the forthcoming information in this series in proper perspective, I would urge you to view the "connecting the dots" video PPT that I previously posted at this blog.

Here it is.  The next post will begin with the psychometric-level information that serves as the primary foundation of "intelligent" intelligence testing.