Showing posts with label g.

Friday, June 06, 2025

Research Byte: General Ability (#g) Level Moderates Cognitive–#Achievement Relations for #Mathematics (#WJIV)—#WJIV #WJV #schoolpsychology #mathematics #SPED #EDPSYCH

[Blogmaster comment:   First…COI info…I’m a coauthor of the WJ IV and WJ V.  Second, regular readers may have noticed that I’ve been MIA on my various social media outlets the past 2-3 months.  I needed a break after spending the last five years working on the WJ V.  I also needed to attend to some family issues.  I plan to restart my sharing of interesting new research and FYI opinion posts].

Click on image to enlarge


New pub in Journal of Intelligence.  Click here to view and download (open access).

General Ability Level Moderates Cognitive–Achievement Relations for Mathematics 

by Christopher R. Niileksela, Jacob Robbins, and Daniel B. Hajovsky
Abstract

Spearman’s Law of Diminishing Returns (SLODR) suggests general intelligence would be a stronger predictor of academic skills at lower general ability levels, and broad cognitive abilities would be stronger predictors of academic skills at higher general ability levels. Few studies have examined how cognitive–mathematics relations may vary for people with different levels of general cognitive ability. Multi-group structural equation modeling tested whether cognitive–mathematics relations differed by general ability levels for school-aged children (grades 1–5 and grades 6–12) using the Woodcock-Johnson Third Edition (n = 4470) and Fourth Edition (n = 3891) standardization samples. Results suggested that relationships between cognitive abilities and mathematics varied across general ability groups. General intelligence showed a stronger relative effect on mathematics for those with lower general ability compared to those with average or high general ability, and broad cognitive abilities showed a stronger relative effect on mathematics for those with average or high general ability compared to those with lower general ability. These findings provide a more nuanced understanding of cognitive–mathematics relations.
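
For readers curious about the multi-group logic described in the abstract, here is a minimal sketch of the core idea, under stated assumptions: the actual study used multi-group SEM with latent factors and invariance constraints, while this toy version just fits the same standardized regression within each general-ability band and compares the weights. All variable names (gf, gc, gwm, g_comp, math) and data are hypothetical placeholders, not the WJ III/IV composites.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in data; names are hypothetical, not the study's variables.
rng = np.random.default_rng(42)
n = 3000
gf, gc, gwm = (rng.normal(size=n) for _ in range(3))
df = pd.DataFrame({"gf": gf, "gc": gc, "gwm": gwm})
df["g_comp"] = df[["gf", "gc", "gwm"]].mean(axis=1)
df["math"] = 0.4 * gf + 0.3 * gc + 0.2 * gwm + rng.normal(scale=0.7, size=n)

def standardized_betas(d, y, xs):
    """OLS on z-scored variables: the betas are standardized effects."""
    z = (d[[y] + xs] - d[[y] + xs].mean()) / d[[y] + xs].std()
    X = np.column_stack([np.ones(len(z))] + [z[x].to_numpy() for x in xs])
    b, *_ = np.linalg.lstsq(X, z[y].to_numpy(), rcond=None)
    return dict(zip(["const"] + xs, np.round(b, 3)))

# Band examinees by overall ability, then ask whether the cognitive->math
# weights differ across bands -- the moderation question in the article.
bands = pd.qcut(df["g_comp"], 3, labels=["low", "average", "high"])
for band, grp in df.groupby(bands, observed=True):
    print(band, standardized_betas(grp, "math", ["gf", "gc", "gwm"]))
```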

Monday, December 16, 2024

“Be and see” the #WISC-V correlation matrix: Unpublished analyses of the WISC-V #intelligence test

 I often “play around” with data sets until I satisfy my curiosity…and never submit the results for publication.  These WISC-V analyses were completed 3+ years ago.  I stumbled upon the folder today and decided to simply post the information for assessment professionals interested in the WISC-V.  These results have not been peer-reviewed.  One must know the WISC-V subtest names to decipher the test abbreviations in some of the figures.  

This is a Gv (visual; 8 slides) summary of a set of exploratory structural analyses I completed with the WISC-V summary correlation matrix (Table 5.1 in the WISC-V manual). View and enjoy. 

You need to click on images to enlarge and read
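
If you want to tinker with a published correlation matrix yourself, a minimal sketch of one such exploratory step (extracting unrotated first-component loadings, often read as rough g loadings) might look like this. The matrix below is a small placeholder, NOT the actual WISC-V Table 5.1 values:

```python
import numpy as np

# Placeholder 4x4 intercorrelation matrix (NOT the actual WISC-V Table 5.1).
R = np.array([
    [1.00, 0.62, 0.55, 0.48],
    [0.62, 1.00, 0.58, 0.45],
    [0.55, 0.58, 1.00, 0.50],
    [0.48, 0.45, 0.50, 1.00],
])

# Eigendecomposition of the correlation matrix; np.linalg.eigh returns
# eigenvalues in ascending order, so the last column is the first component.
vals, vecs = np.linalg.eigh(R)
v1 = vecs[:, -1]
v1 = v1 * np.sign(v1.sum())          # fix the arbitrary sign so loadings are positive
loadings = v1 * np.sqrt(vals[-1])    # unrotated first-component loadings
print(np.round(loadings, 2))
```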

Thursday, November 14, 2024

Stay tuned!!!! #WJV g and non-g multiple #CHC theoretical models to be presented in the forthcoming (2025) technical manual: Senior author’s (McGrew) position re the #psychometric #g factor and bifactor #g models.

(c) Copyright, Dr. Kevin S. McGrew, Institute for Applied Psychometrics (11-14-24)

Warning: may be TLDR for many. :)  Also, I will be rereading this multiple times and may tweak minor (not substantive) errors and post updates….hey….blogging has an earthy quality to it. :)

        In a recent publication, Scott Decker, Joel Schneider, Okan Bulut and I (McGrew, 2023; click here to download and read) presented structural analyses of the WJ IV norm data using contemporary psychometric network analysis (PNA) methods.  As noted in a clip from the article below, we recommended that intelligence test researchers, and particularly authors and publishers of the respective technical manuals for cognitive test batteries, need to broaden the psychometric structural analysis of a test battery beyond the traditional (and almost exclusive) reliance on “common cause” factor analysis (EFA and CFA) methods to include PNA…to complement, not supplant, factor-based analyses.

(Click on image to enlarge for easier reading)


         Our (McGrew et al., 2023) recommendation is consistent with some critics of intelligence test structural research (e.g., see Dombrowski et al., 2018, 2019; Farmer et al., 2020) who have cogently argued that most intelligence test technical manuals typically present only one of the major classes of possible structural models of cognitive ability test batteries.  Interestingly, many school psychology scholars who conduct and report independent structural analyses of a test battery do something similar…they often present only one form of structural analysis, namely bifactor g analyses.  
        In McGrew et al. (2023) we recommended that future cognitive ability test technical manuals embrace a more ecumenical multiple-method approach and include, when possible, all major classes of factor analysis models, as well as PNA. A multiple-methods research approach in test manuals (and in journal publications by independent researchers) can better inform users of the strengths and limitations of IQ test interpretations based on whatever conceptualization of psychometric general intelligence (including models with no such construct) underlies each type of dimensional analysis. Leaving PNA methods aside for now, the figure below presents the four major families of traditional CHC theoretical structural models.  These figures are conceptual and are not intended to represent all nuances of the factor models. 



(Click on image to view a larger version)


         Briefly, the four major families of traditional “common cause” CHC CFA structural models (Carroll, 2003; McGrew et al., 2023) vary primarily in the specification (or lack thereof) of a psychometric g factor. The different families of CHC models are conceptually represented in the figure above. In these conceptual representations, the rectangles represent individual (sub)tests, the circles represent latent ability factors at different levels of breadth or generality (stratum levels as per Carroll, 1993), the path arrows represent the direction of influence (the effect) of the latent CHC ability factors on the tests or lower-order factors, and the single double-headed arrow represents all possible correlations among the broad CHC factors (in the Horn no-g model in panel D).  
        The classic hierarchical g model “places a psychometric g stratum III ability at the apex over multiple broad stratum II CHC abilities” (McGrew et al., 2023, p. 2). This model is most often associated with Carroll (1993, 2003) and is called (in panel A in the above figure) the Carroll hierarchical g broad CHC model. In this model the shared variance of subsets of moderately to highly correlated tests is first specified as 10 CHC broad ability factors (i.e., the measurement model; Gf, Gc, Gv, etc.). Next, the covariances (latent factor correlations) among the broad CHC factors are specified as being the direct result of a higher-order psychometric g factor (i.e., the structural model). 
        A sub-model under the Carroll hierarchical g broad CHC model includes three levels of factors—several first-order narrow (stratum I) factors, 10 second-order broad (stratum II) CHC factors, and the psychometric g factor (stratum III). This is called the Carroll hierarchical g broad+narrow CHC model in panel B in the figure above. In the above example, two first-order narrow CHC factors are specified: auditory short-term storage (Wa) and auditory working memory capacity (Wc). The latter, in simple terms, is a factor defined by auditory short-term memory tasks that also require heavy attentional control-based (AC as per Schneider & McGrew, 2018) active manipulation of stimuli—the essence of Gwm or working memory.  For illustrative purposes, a narrow naming facility (NA) first-order factor, which receives higher-order effects or influences from both broad Gs and Gr, is also specified for evaluation.  Wouldn’t you like to see the results of this hierarchical broad+narrow CHC model?  Well……..stay tuned for the forthcoming WJ V technical manual (Q1 2025; LaForte, Dailey, & McGrew, 2025, in preparation) and your dream will come true.
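
To make the hierarchical specification concrete, here is a minimal sketch in lavaan-style syntax using the Python semopy package (assuming semopy’s lavaan-like model language). The factor and test names (t1–t9; three broad factors) are illustrative placeholders, not the WJ V specification, and the data are synthetic:

```python
import numpy as np
import pandas as pd
import semopy

# Synthetic data with a higher-order g influencing three broad factors.
rng = np.random.default_rng(7)
n = 500
g = rng.normal(size=n)
data = {}
for i, f in enumerate(["Gf", "Gc", "Gwm"]):
    broad = 0.7 * g + 0.7 * rng.normal(size=n)      # broad factor, partly g-driven
    for j in range(3):
        data[f"t{3 * i + j + 1}"] = 0.8 * broad + 0.6 * rng.normal(size=n)
df = pd.DataFrame(data)

# Carroll hierarchical g model (panel A): broad CHC factors measured by tests,
# their covariances explained by a second-order psychometric g.
desc = """
Gf  =~ t1 + t2 + t3
Gc  =~ t4 + t5 + t6
Gwm =~ t7 + t8 + t9
g   =~ Gf + Gc + Gwm
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())   # first-order loadings and second-order g paths
```

Panel B (broad+narrow) would, in the same syntax, simply insert narrow stratum I factors (e.g., Wa, Wc, NA) between the tests and the broad factors.
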
        The third model is the Horn no-g model (McGrew et al., 2023).  John Horn long argued that psychometric g was nothing more than a statistical abstraction or artifact (Horn, 1998; Horn & Noll, 1997; McArdle, 2007; McArdle & Hofner, 2014; Ortiz, 2015) and did not represent a brain- or biologically-based real cognitive ability. This is represented by the Horn no-g broad CHC model in panel D. The Horn no-g broad CHC model is like the Carroll hierarchical g broad CHC model, but the 10 broad CHC factor intercorrelations are retained instead of specifying a higher- or second-order psychometric g factor. In other words, the measurement models are the same but the structural models are different. In some respects the Horn no-g broad CHC model is like contemporary no-g psychometric network analysis models (see McGrew, 2023) that eschew the notion of a higher-order latent psychometric g factor to explain the positive manifold of correlations between individual tests (or first-order latent factors in the case of the Horn no-g model) in an intelligence battery (Burgoyne et al., 2022; Conway & Kovacs, 2015; Euler et al., 2023; Fried, 2020; Kan et al., 2019; Kievit et al., 2016; Kovacs & Conway, 2016, 2019; McGrew, 2023; McGrew et al., 2023; Protzko & Colom, 2021a, 2021b; van der Maas et al., 2006, 2014, 2019).  Over the past decade I’ve become more aligned with no-g psychometric network CHC models (e.g., process overlap theory or POT) or Horn’s no-g CHC model, and have, tongue-in-cheek, referred to the elusive psychometric g ability (not the psychometric g factor) as the “Loch Ness Monster of Psychology” (McGrew, 2021, 2022).
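
Under the same assumptions as the sketch above, the Horn no-g variant (panel D) keeps the measurement model and estimates the broad-factor correlations directly instead of imposing a second-order g. One caveat for this toy example: with only three first-order factors the second-order structure is just-identified, so the two models fit identically; fit differences can only emerge with more broad factors, as in the full 10-factor CHC models.

```python
# Horn no-g broad CHC model (panel D): identical measurement model, but the
# broad CHC factor correlations are estimated directly; no second-order g.
# Reuses df and the semopy import from the previous sketch.
desc_horn = """
Gf  =~ t1 + t2 + t3
Gc  =~ t4 + t5 + t6
Gwm =~ t7 + t8 + t9
Gf ~~ Gc
Gf ~~ Gwm
Gc ~~ Gwm
"""
horn = semopy.Model(desc_horn)
horn.fit(df)
print(semopy.calc_stats(horn))   # fit indices for model comparison
```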



        Three of these common cause CHC structural models (viz., the Carroll hierarchical g broad CHC model, the Carroll hierarchical g broad+narrow CHC model, and the Horn no-g broad CHC model), as well as Dr. Hudson Golino and colleagues’ hierarchical exploratory graph analysis psychometric network analysis models (that topic is saved for another day), are to be presented in the structural analysis section of the forthcoming WJ V technical manual validity chapter.  Stay tuned for some interesting analyses and interpretations in the “must read” WJ V technical manual. Yes….assessment professionals, a well-written and thorough technical manual can be your BFF!
        Finally, the fourth family of models, which McGrew et al. (2023) called g-centric models, are commonly known as bifactor g models. In the bifactor g broad CHC model (panel C in the figure) the variance associated with a dominant psychometric g factor is first extracted from all individual tests. The residual (remaining) variance is then modeled as 10 uncorrelated (orthogonal) CHC broad factors. The bifactor model was excluded from the WJ V structural analysis. Why…..after I (McGrew et al., 2023) recommended that all four classes of traditional CHC structural analysis models should be presented in a test battery’s technical manual????
        Because…specifying and evaluating bifactor g models with 60 cognitive and achievement tests proved extremely complex and fraught with statistical convergence issues.  Trust me…I tried hard and long to run bifactor g models with the WJ V norm data.  It was possible to run bifactor g models separately on the cognitive and achievement sets of WJ V tests, but that does not allow for direct comparison to the other three structural models, which utilized all 60 cognitive and achievement tests in single CFA models.  Instead, as of the time the WJ V technical manual analyses were being completed and summarized, the Riverside Insights (RI) internal psychometric research team was tackling the complex issues involved in completing WJ V bifactor g models, first in the separate sets of cognitive and achievement tests.  Stay tuned for future professional conference paper presentations, white papers, or journal article submissions by the RI research team.
        Furthermore, the decision to not include bifactor g models does not suggest that the evaluation of WJ V bifactor g-centric CHC models is unimportant. As noted by Reynolds and Keith (2017), “bifactor models may serve as a useful mathematical convenience for partitioning variance in test scores” (p. 45; emphasis added). The bifactor g model pre-ordains “that the statistically significant lion’s share of IQ battery test variance must be of the form of a dominant psychometric g factor (Decker et al., 2021)” (McGrew et al., 2023, p. 3). Of the four families of CHC structural models, the bifactor g model is the conceptual and statistical model that supports the importance of general intelligence (psychometric g) and the preeminence of the full-scale or global IQ score over broad CHC test scores (e.g., see Dombrowski et al., 2021; Farmer et al., 2021a, 2021b; McGrew et al., 2023)—a theoretical position inconsistent with the position of the WJ V senior author (yours truly) and with Dr. Richard Woodcock’s legacy (see additional footnote comments at the end). It is important to note that a growing body of research has questioned the preference for bifactor g cognitive models based only on statistical fit indices, as structural model fit statistics frequently are biased in favor of bifactor solutions. Per Bonifay et al. (2017), “the superior performance of the bifactor model may be a symptom of ‘overfitting’—that is, modeling not only the important trends in data but also capturing unwanted noise” (pp. 184–185). For more on this, see Decker (2021), Dueber and Toland (2021), Eid et al. (2018), Greene et al. (2022), and Murray and Johnson (2013). See Dombrowski et al. (2020) for a defense against some of the bifactor g criticisms.
        Recognizing the wisdom of Box’s (1976) well-known axiom that “all models are wrong, but some are useful,” the WJ V technical manual authors (LaForte, Dailey, & McGrew, 2025, in preparation) encourage independent researchers to use the WJ V norm data to evaluate and compare bifactor g CHC models with the models presented in the forthcoming WJ V technical manual, as well as alternative models (e.g., PASS, process overlap theory, Cattell’s triadic Gf-Gc theory, etc.) suggested in the technical manual.
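
For independent researchers who take up that invitation, a conceptual bifactor g sketch (panel C) in the same lavaan-style syntax might look like the following. The 0* terms fix the factor covariances to zero to impose orthogonality (a lavaan convention assumed to carry over to semopy), and all names remain placeholders:

```python
# Bifactor g broad CHC model (panel C): every test loads directly on g; group
# factors absorb residual shared variance and are orthogonal to g and to each
# other. Reuses df and the semopy import from the earlier sketches.
desc_bifactor = """
g   =~ t1 + t2 + t3 + t4 + t5 + t6 + t7 + t8 + t9
Gf  =~ t1 + t2 + t3
Gc  =~ t4 + t5 + t6
Gwm =~ t7 + t8 + t9
g ~~ 0*Gf
g ~~ 0*Gc
g ~~ 0*Gwm
Gf ~~ 0*Gc
Gf ~~ 0*Gwm
Gc ~~ 0*Gwm
"""
bifactor = semopy.Model(desc_bifactor)
bifactor.fit(df)   # with many more tests, expect the convergence issues noted above
```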


Footnote:  Woodcock’s original (and enduring) position (Woodcock, 1978, 1997, 2002) regarding the validity and purpose of a composite IQ-type g score is at odds with the bifactor g CHC model. With the publication of the original WJ battery, Woodcock (1978) acknowledged the pragmatic predictive value of statistically partitioning cognitive ability test score variance into a single psychometric g factor, with the manifest total IQ score serving as a proxy for psychometric g. Woodcock stated “it is frequently convenient to use some single index of cognitive ability that will predict the quality of cognitive behavior, on the average, across a wide variety of real-life situations. This is the [pragmatic] rationale for using a single score from a broad-based test of intelligence” (p. 126). However, Woodcock further stated that “one of the most common misconceptions about the nature of cognitive ability (particularly in discussions characterized by such labels as ‘IQ’ and ‘intelligence’) is that it is a single quality or trait held in varying degrees by individuals, something like [mental] height” (p. 126). In several publications Woodcock’s position regarding the importance of an overall general intelligence or IQ score was clear—“The primary purpose for cognitive testing should be to find out more about the problem, not to obtain an IQ” (Woodcock, 2002, p. 6; also see Woodcock, 1997, p. 235). Two of the primary WJ III, WJ IV, and WJ V authors have conducted research or published articles (see Mather & Schneider, 2023; McGrew, 2023; McGrew et al., 2023) consistent with Woodcock’s position and have advocated for a Horn no-g or emergent property no-g CHC network model. Additionally, based on the failure to identify a brain-based biological g (i.e., neuro-g; Haier et al., 2024) in well over a century of research since Spearman first proposed g in the early 1900s, McGrew (2020, 2021) has suggested that g may be the “Loch Ness Monster of psychology.” This does not imply that psychometric g is unrelated to combinations of different neurocognitive mechanisms, such as brain-wide neural efficiency and the ability of the whole-brain network, which is composed of various brain subnetworks connected via white matter tracts, to efficiently and adaptively reconfigure the global network in response to changing cognitive demands (see Ng et al., 2024 for recent compelling research linking psychometric g to multiple brain network mechanisms and various contemporary neurocognitive theories of intelligence; NOTE…click the link to download a PDF of the article and read sufficiently to impress your psychologist friends!!!!).



Thursday, November 07, 2024

McGrew on #IQ scores: In what ways are a car engine, a starling bird #murmuration, and #g (general #intelligence) alike…how are they the same?

Kevin McGrew on IQ scores, borrowing from Detterman (2016) and McGrew et al. (2023)

“General intelligence (represented by a composite IQ score or the factor-analysis derived psychometric g factor) is a fallible summary statistical (numerical) index of the efficiency of a complex system of dynamically interacting multiple brain networks.  Like the emergent statistical index of horsepower of a car engine, which does not represent a “thing” (a mechanism) in the engine, it reflects the current estimated efficiency of the processing of multiple interacting cognitive abilities and brain networks. It should not be interpreted as being the result of a single brain-based entity or mystical mental energy, as fixed, or as reflecting biological/genetic destiny.  The manifest expression of this statistical emergent property index is also influenced by other non-cognitive (conative) (click for relevant article) traits and temporary states of the individual and current environmental variables” (K. McGrew, 11-07-24)


Question:  In what ways are a car engine, a starling bird murmuration, and general intelligence alike…how are they the same?  See slides and comments below for the answer.

(A starling bird murmuration)

Double click on images for larger, more readable versions 

Wednesday, November 06, 2024

More on the conflation of #psychometric #g (general #intelligence): Is g the Loch Ness Monster of psychology?



From the McGrew et al. (2023) article (click here for the prior post and access to the article in Journal of Intelligence).  Click here for a series of slides regarding the theoretical and psychometric conflation of g.

The Problem of Conflating Theoretical and Psychometric g

“Contributing to the conflicting g-centric and mixed-g positions (regarding the interpretive value of broad CHC scores) is the largely unrecognized common practice of conflating theoretical and psychometric g. Psychometric g is the statistical extraction of a latent factor (via factor analysis) that accounts for the largest single source of common variance in a collection of cognitive abilities tests. It is an emergent property statistical index. Theoretical g refers to the underlying biological brain-based mechanism(s) that produce psychometric g. The global composite score from IQ test batteries is considered the best manifest proxy for psychometric g. The conflation of psychometric and theoretical g in IQ battery structural research ignores a simple fact—“general intelligence is not the primary fact of mainstream intelligence research; the primary fact is the positive manifold….general intelligence is but one interpretation of that primary fact” (Protzko and Colom 2021a, p. 2; italic emphasis added). As described later, contemporary intelligence and cognitive psychology research has provided reasonable and respected theories (e.g., dynamic mutualism; process overlap theory; wired cognition; attentional control), robust methods (psychometric network analysis), and supporting research (Burgoyne et al. 2022; Conway and Kovacs 2015; Kan et al. 2019; Kievit et al. 2016; Kovacs and Conway 2016, 2019; van der Maas et al. 2006, 2014, 2019) that accounts for the positive manifold of IQ test correlations in the absence of an underlying latent causal theoretical or psychometric g construct.” (p. 3; bold font emphasis added).

Research Byte: Predicting #Achievement From #WISC-V #Composites: Do #Cognitive-Achievement Relations Vary Based on #GeneralIntelligence?

Predicting Achievement From WISC-V Composites: Do Cognitive-Achievement Relations Vary Based on General Intelligence?

Click here for open access PDF of article.

Abstract 

In order to make appropriate educational recommendations, psychologists must understand how cognitive test scores influence specific academic outcomes for students of different ability levels. We used data from the WISC-V and WIAT-III (N = 181) to examine which WISC-V Index scores predicted children’s specific and broad academic skills and if cognitive-achievement relations varied by general intelligence. Verbal abilities predicted most academic skills for children of all ability levels, whereas processing speed, working memory, visual processing, and fluid reasoning abilities differentially predicted specific academic skills. Processing speed and working memory demonstrated significant interaction effects with full-scale IQ when predicting youth’s essay writing. Findings suggest generalized intelligence may influence the predictive validity of certain cognitive tests, and replication studies in larger samples are encouraged.
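
The interaction test this abstract describes is ordinary moderated regression. A minimal sketch, with synthetic data, hypothetical variable names, and the statsmodels formula API (not the authors’ actual code or variables):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-ins for the WISC-V/WIAT-III variables (names hypothetical).
rng = np.random.default_rng(3)
n = 181
fsiq = rng.normal(100, 15, n)
psi = rng.normal(100, 15, n)
essay = 0.3 * psi + 0.2 * fsiq + 0.004 * (psi - 100) * (fsiq - 100) + rng.normal(0, 10, n)
df = pd.DataFrame({"fsiq": fsiq, "psi": psi, "essay": essay})

# Center predictors before forming the product term (reduces collinearity);
# 'psi_c * fsiq_c' expands to both main effects plus the interaction.
df["psi_c"] = df["psi"] - df["psi"].mean()
df["fsiq_c"] = df["fsiq"] - df["fsiq"].mean()
fit = smf.ols("essay ~ psi_c * fsiq_c", data=df).fit()
print(fit.params)   # a significant psi_c:fsiq_c term is the moderation effect
```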

Tuesday, July 14, 2020

Evidence for a unitary structure of spatial cognition (Gv) beyond general intelligence (g)

A very convincing study supporting a general unitary, but multidimensional, spatial Gv factor. Click here for Open Access copy.

Evidence for a unitary structure of spatial cognition beyond general intelligence

Margherita Malanchini, Kaili Rimfeld, Nicholas G. Shakeshaft, Andrew McMillan, Kerry L. Schofield, Maja Rodic, Valerio Rossi, Yulia Kovas, Philip S. Dale, Elliot M. Tucker-Drob, and Robert Plomin

Abstract 

Performance in everyday spatial orientation tasks (e.g., map reading and navigation) has been considered functionally separate from performance on more abstract object-based spatial abilities (e.g., mental rotation and visualization). However, few studies have examined the link between spatial orientation and object-based spatial skills, and even fewer have done so including a wide range of spatial tests. To examine this issue and more generally to test the structure of spatial ability, we used a novel gamified battery to assess six tests of spatial orientation in a virtual environment and examined their association with ten object-based spatial tests, as well as their links to general cognitive ability (g). We further estimated the role of genetic and environmental factors in underlying variation and covariation in these spatial tests. Participants (N = 2660; aged 19–22) were part of the Twins Early Development Study. The six tests of spatial orientation clustered into a single ‘Navigation’ factor that was 64% heritable. Examining the structure of spatial ability across all 16 tests, three substantially correlated factors emerged: Navigation, Object Manipulation, and Visualization. These, in turn, loaded strongly onto a general factor of Spatial Ability, which was highly heritable (84%). A large portion (45%) of this high heritability was independent of g. The results point towards the existence of a common genetic network that supports all spatial abilities. 
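
As a back-of-the-envelope illustration of where such heritability estimates come from (the study itself fit full multivariate twin models, not this shortcut), Falconer’s classic approximations from MZ/DZ twin correlations look like this, with hypothetical numbers:

```python
# Falconer's approximations from twin correlations (hypothetical values,
# not the study's model-fitted estimates).
r_mz, r_dz = 0.80, 0.44   # hypothetical MZ and DZ twin correlations
h2 = 2 * (r_mz - r_dz)    # additive genetic variance (A): 0.72
c2 = 2 * r_dz - r_mz      # shared environment (C): 0.08
e2 = 1 - r_mz             # nonshared environment + error (E): 0.20
print(h2, c2, e2)
```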

Click on image to enlarge.

Thursday, March 05, 2020

Book Nook: General and Specific Mental Abilities - McFarland (Ed.)


 

Book Description

The history of testing mental abilities has seen the dominance of two contrasting approaches, psychometrics and neuropsychology. These two traditions have different theories and methodologies, but overlap considerably in the tests they use. Historically, psychometrics has emphasized the primacy of a general factor, while neuropsychology has emphasized specific abilities that are dissociable. This issue about the nature of human mental abilities is important for many practical concerns. Questions such as gender, ethnic, and age-related differences in mental abilities are relatively easy to address if they are due to a single dominant trait. Presumably such a trait can be measured with any collection of complex cognitive tests. If there are many specific mental abilities, these would be much harder to measure and associated social issues would be more difficult to resolve. The relative importance of general and specific abilities also has implications for educational practices. This book includes the diverse opinions of experts from several fields including psychometrics, neuropsychology, speech language and hearing, and applied psychology.

Saturday, February 29, 2020

Spatial ability (Gv) and math (Gq; Gf-RQ): A meta-analysis

Fang Xie, Li Zhang, Xu Chen, & Ziqiang Xin


Abstract

The relationship between spatial and mathematical ability is controversial. Thus, the current study conducted a meta-analysis of 73 studies, with 263 effect sizes to explore the relationship between spatial and mathematical ability. Furthermore, we explored potential factors that moderate this relationship. Results showed that the relationship between mathematical and spatial ability was not simply linear. Specifically, logical reasoning had a stronger association with spatial ability than numerical or arithmetic ability with spatial ability. Intrinsic-dynamic, intrinsic-static, extrinsic-dynamic, extrinsic-static spatial ability, and visual–spatial memory showed comparable associations with mathematical ability. The association between spatial and mathematical ability showed no differences between children, adolescents, and adults and no differences between typically developing individuals and individuals with developmental disabilities. The implications of these findings for theory and practice are discussed.

Keywords: Spatial ability; Mathematical ability; Meta-analysis; robumeta package; Spatial training
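
For intuition about the pooling step in such a meta-analysis: the authors used the robumeta package’s robust variance estimation (which also handles dependent effect sizes), while the sketch below is only the simpler DerSimonian–Laird random-effects estimator applied to hypothetical correlations:

```python
import numpy as np

# Hypothetical study correlations and sample sizes (placeholders).
r = np.array([0.35, 0.28, 0.41, 0.22, 0.30])
n = np.array([120, 85, 200, 60, 150])

z = np.arctanh(r)           # Fisher z transform of each correlation
v = 1.0 / (n - 3)           # sampling variance of z
w = 1.0 / v
z_fe = (w * z).sum() / w.sum()              # fixed-effect pooled z
Q = (w * (z - z_fe) ** 2).sum()             # heterogeneity statistic
c = w.sum() - (w ** 2).sum() / w.sum()
tau2 = max(0.0, (Q - (len(z) - 1)) / c)     # between-study variance (DL estimator)
w_re = 1.0 / (v + tau2)
z_re = (w_re * z).sum() / w_re.sum()        # random-effects pooled z
print("pooled r:", round(float(np.tanh(z_re)), 3), "tau^2:", round(tau2, 4))
```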


Implications for Theory and Practice

“Our study can shed light on our understanding of the relationship between spatial and mathematical abilities. The relationship between spatial and mathematical abilities is not simply linear. Our moderation analyses suggested that logical reasoning was more strongly associated with spatial ability than numerical and arithmetical ability. As such, when examining the mechanism of the association between spatial and mathematical ability, each domain of mathematical ability should be separately examined. The current study has important educational implications. Although we did not prove the causal relationship between spatial and mathematical ability, our findings might provide some pedagogical suggestions about how to train spatial ability to improve children's mathematical abilities. Notably, a recent intervention study by Sorby et al. (2018) demonstrated the positive effect of spatial interventions on STEM-related skills, and several studies have shown that spatial training can improve mathematical achievement (Cheng and Mix 2014; Clements et al. 2011; Sorby and Baartmans 2000). Firstly, our findings shed light on what kind of spatial ability training should be chosen. The current study indicated that different domains of spatial ability are associated with mathematical ability to a similar degree. Therefore, training in other domains of spatial ability, not just intrinsic-dynamic spatial abilities (Cheng and Mix 2014; Clements et al. 2011; Taylor and Hutton 2013), should be encouraged in educational practice. Further, our findings shed light on when to begin spatial ability training. This study showed that the close association between spatial and mathematical abilities exists in childhood and adolescence. Therefore, spatial training can be beneficial for both children and adolescents. For children, spatial training can be rooted in the real world to develop direct experience by using regular activities such as paper folding, paper cutting (Burte et al. 2017), and Lego construction (Nath and Szücs 2014). For adolescents, it is better to carry out spatial training through comprehensive courses involving theory and practice in a series of spatial skills (Miller and Halpern 2013; Patkin and Dayan 2013; Sorby et al. 2013).”

Educational Psychology Review

Friday, December 06, 2019

Psychometric Network Analysis of the Hungarian WAIS


Christopher J. Schmank, Sara Anne Goring, Kristof Kovacs and Andrew R. A. Conway

Received: 1 June 2019; Accepted: 24 August 2019; Published: 9 September 2019

Abstract: The positive manifold—the finding that cognitive ability measures demonstrate positive correlations with one another—has led to models of intelligence that include a general cognitive ability or general intelligence (g). This view has been reinforced using factor analysis and reflective, higher-order latent variable models. However, a new theory of intelligence, Process Overlap Theory (POT), posits that g is not a psychological attribute but an index of cognitive abilities that results from an interconnected network of cognitive processes. These competing theories of intelligence are compared using two different statistical modeling techniques: (a) latent variable modeling and (b) psychometric network analysis. Network models display partial correlations between pairs of observed variables that demonstrate direct relationships among observations. Secondary data analysis was conducted using the Hungarian Wechsler Adult Intelligence Scale Fourth Edition (H-WAIS-IV). The underlying structure of the H-WAIS-IV was first assessed using confirmatory factor analysis assuming a reflective, higher-order model and then reanalyzed using psychometric network analysis. The compatibility (or lack thereof) of these theoretical accounts of intelligence with the data are discussed.

Keywords: intelligence; Process Overlap Theory; psychometric network analysis; latent variable modeling; statistical modeling
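
The “partial correlations” step of a psychometric network analysis is easy to sketch: invert the correlation matrix and standardize the off-diagonal precision entries. Real applications, including this paper’s, typically add regularization (e.g., the graphical lasso); the matrix here is a placeholder, not the H-WAIS-IV data:

```python
import numpy as np

# Placeholder subtest correlation matrix (NOT the H-WAIS-IV data).
R = np.array([
    [1.00, 0.60, 0.50, 0.40],
    [0.60, 1.00, 0.45, 0.35],
    [0.50, 0.45, 1.00, 0.55],
    [0.40, 0.35, 0.55, 1.00],
])

P = np.linalg.inv(R)                           # precision (concentration) matrix
D = np.sqrt(np.outer(np.diag(P), np.diag(P)))
pcor = -P / D                                  # partial correlation of i,j given the rest
np.fill_diagonal(pcor, 1.0)
print(np.round(pcor, 2))   # off-diagonals are the network's edge weights
```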

Click on image to enlarge.

Saturday, September 21, 2019

All you need is g? Predicting piano skill acquisition in beginners: The role of general intelligence, music aptitude, and mindset


Abstract
This study was designed to investigate sources of individual differences in musical skill acquisition. We had 171 undergraduates with little or no piano-playing experience attempt to learn a piece of piano music with the aid of a video-guide, and then, following practice with the guide, attempt to perform the piece from memory. A panel of musicians evaluated the performances based on their melodic and rhythmic accuracy. Participants also completed tests of working memory capacity, fluid intelligence, crystallized intelligence, processing speed, and two tests of music aptitude (the Swedish Music Discrimination Test and the Advanced Measures of Music Audiation). Measures of general intelligence and music aptitude correlated significantly with skill acquisition, but mindset did not. Structural equation modeling revealed that general intelligence, music aptitude, and mindset together accounted for 22.4% of the variance in skill acquisition. However, only general intelligence contributed significantly to the model (β = 0.44, p < .001). The contributions of music aptitude (β = 0.08, p = .39) and mindset (β = −0.06, p = .50) were non-significant after accounting for general intelligence. We also found that openness to experience did not significantly predict skill acquisition or music aptitude. Overall, the results suggest that after accounting for individual differences in general intelligence, music aptitude and mindset do not predict piano skill acquisition in beginners.

Wednesday, October 24, 2018

Problems with bi-factor intelligence research - theoretically agnostic and psychologically naive

Kevin McGrew (@iqmobile)
Problems with #bifactor #intelligence #IQ test research studies. #gfactor may not represent a real thing or ability but may be an #emergent factor...like #SES or #DJI. #g and primary abilities uncorrelated....seriously????? Bifactor models are theoretically #agnostic pic.twitter.com/Go77F32UTI
