Monday, August 15, 2022

Intelligence Correlates with the Temporal Variability of Brain Networks - ScienceDirect

https://www.sciencedirect.com/science/article/abs/pii/S0306452222004043?via%3Dihub

Abstract
Intelligence is the ability to recognize and understand objective things, and to use knowledge and experience to solve problems. Highly intelligent people show the ability to switch between different thought patterns and shift their mental focus. This suggests a link between intelligence and the dynamic interaction of brain networks. Thus, we investigated the relationships between resting-state dynamic brain network remodeling (temporal variability) and scores on the Wechsler Adult Intelligence Scale using a large dataset comprising 606 individuals. We found that performance intelligence was associated with greater temporal variability in the functional connectivity patterns of the dorsal attention network. High variability in these areas indicates flexible connectivity patterns, which may contribute to cognitive processes such as attention selection. In addition, performance intelligence was related to greater temporal variability in the functional connectivity patterns of the salience network. Thus, this study revealed a close relationship between performance intelligence and high variability in brain networks involved in attentional choice, spatial orientation, and cognitive control.
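The abstract does not spell out how temporal variability was computed, but in the dynamic functional-connectivity literature it is commonly operationalized with sliding-window correlations. Here is a minimal sketch of that general idea in Python — the toy data, window length, step size, and the "1 minus mean profile similarity" index are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for resting-state fMRI: 200 time points x 10 regions
ts = rng.standard_normal((200, 10))

def temporal_variability(ts, win=40, step=20):
    """Variability of each region's functional-connectivity (FC) profile
    across sliding windows: 1 minus the mean correlation between the
    region's FC patterns in different windows."""
    n_t, n_r = ts.shape
    profiles = []
    for s in range(0, n_t - win + 1, step):
        c = np.corrcoef(ts[s:s + win].T)   # window FC matrix
        np.fill_diagonal(c, 0.0)           # drop self-correlation
        profiles.append(c)
    profiles = np.array(profiles)          # (n_windows, n_regions, n_regions)
    tv = np.empty(n_r)
    for r in range(n_r):
        p = profiles[:, r, :]              # region r's profile per window
        sim = np.corrcoef(p)               # window-by-window similarity
        iu = np.triu_indices_from(sim, k=1)
        tv[r] = 1.0 - sim[iu].mean()       # high value = flexible connectivity
    return tv

tv = temporal_variability(ts)
print(tv.round(2))  # one variability score per region
```

A region whose connectivity pattern is reshuffled from window to window scores near 1; a region with a stable pattern scores near 0.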

******************************************
Kevin S. McGrew, PhD
Educational & School Psychologist
Director
Institute for Applied Psychometrics (IAP)
https://www.themindhub.com
******************************************

Saturday, August 06, 2022

J. Intell. | Free Full-Text | Intelligence IS Cognitive Flexibility: Why Multilevel Models of Within-Individual Processes Are Needed to Realise This

https://www.mdpi.com/2079-3200/10/3/49

Abstract
Despite substantial evidence for the link between an individual's intelligence and successful life outcomes, questions about what defines intelligence have remained the focus of heated dispute. The most common approach to understanding intelligence has been to investigate what performance on tests of intellect is and is not associated with. This psychometric approach, based on correlations and factor analysis, is deficient. In this review, we aim to substantiate why classic psychometrics, which focus on between-person accounts, will necessarily provide a limited account of intelligence until theoretical considerations of within-person accounts are incorporated. First, we consider the impact of entrenched psychometric presumptions that support the status quo and impede alternative views. Second, we review the importance of process theories, which are critical for any serious attempt to build a within-person account of intelligence. Third, features of dynamic tasks are reviewed, and we outline how static tasks can be modified to target within-person processes. Finally, we explain how multilevel models are conceptually and psychometrically well-suited to building and testing within-individual notions of intelligence, which, at its core, we argue is cognitive flexibility. We conclude by describing an application of these ideas in the context of microworlds as a case study.
Keywords: cognitive flexibility; ergodic assumption; formative models; multilevel models; complex problem-solving
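The ergodicity problem the authors raise — between-person correlations need not tell you anything about within-person processes — can be made concrete with a toy simulation. The effort/accuracy scenario and all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented scenario: across people, a stable trait raises both typical
# effort and typical accuracy (positive between-person association),
# but within any one person, trials with unusually high effort show
# LOWER accuracy (negative within-person association).
n_people, n_trials = 30, 50
trait = rng.normal(0, 3, n_people)
trial_noise = rng.normal(0, 1, (n_people, n_trials))
effort = trait[:, None] + trial_noise
accuracy = (2 * trait[:, None]       # trait helps overall accuracy
            - trial_noise            # trial-level extra effort hurts
            + rng.normal(0, 1, (n_people, n_trials)))

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Between-person: correlate each person's averages
between_r = corr(effort.mean(axis=1), accuracy.mean(axis=1))
# Within-person: correlate trial-to-trial fluctuations, person by person
within_r = np.mean([corr(effort[i], accuracy[i]) for i in range(n_people)])

print(f"between-person r = {between_r:.2f}")     # strongly positive
print(f"mean within-person r = {within_r:.2f}")  # negative
```

A classic cross-sectional (between-person) analysis of these data would get the sign of the within-person process exactly backwards — which is why multilevel models that separate the two levels are needed.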


Friday, August 05, 2022

Reassessment of innovative methods to determine the number of factors: A simulation-based comparison of exploratory graph analysis and next eigenvalue sufficiency test. - PsycNET

https://psycnet.apa.org/doiLanding?doi=10.1037%2Fmet0000527

Brandenburg, N., & Papenberg, M. (2022). Reassessment of innovative methods to determine the number of factors: A simulation-based comparison of exploratory graph analysis and next eigenvalue sufficiency test. Psychological Methods. Advance online publication. https://doi.org/10.1037/met0000527

Next Eigenvalue Sufficiency Test (NEST; Achim, 2017) is a recently proposed method to determine the number of factors in exploratory factor analysis (EFA). NEST sequentially tests the null hypothesis that k factors are sufficient to model correlations among observed variables. Another recent approach to detect factors is exploratory graph analysis (EGA; Golino & Epskamp, 2017), which sets the number of factors equal to the number of nonoverlapping communities in a graphical network model of observed correlations. We applied NEST and EGA to data sets under simulated factor models with known numbers of factors and scored their accuracy in retrieving this number. Specifically, we aimed to investigate the effects of cross-loadings on the performance of NEST and EGA. In the first study, we show that NEST and EGA performed less accurately in the presence of cross-loadings on two factors compared with factor models without cross-loadings: We observed that EGA was more sensitive to cross-loadings than NEST. In the second study, we compared NEST and EGA under simulated circumplex models in which variables showed cross-loadings on two factors. Study 2 magnified the differences between NEST and EGA in that NEST was generally able to detect factors in circumplex models while EGA preferred solutions that did not match the factors in circumplex models. In total, our studies indicate that the assumed correspondence between factors and nonoverlapping communities does not hold in the presence of substantial cross-loadings. We conclude that NEST is more in line with the concept of factors in factor models than EGA.

Impact Statement
Exploratory factor analysis (EFA) is a method to develop hypotheses concerning common factors governing correlations among variables. This makes EFA a valuable instrument in various fields of psychology (such as test development). A key problem in EFA is to determine the optimal number of factors that fits observed correlations and keeps resulting models parsimonious. Contemporary research on this problem does not provide consensus on the optimal solution. Next Eigenvalue Sufficiency Test (NEST; Achim, 2017) and exploratory graph analysis (EGA; Golino & Epskamp, 2017) are recently proposed methods to approach this problem. Both were shown to accurately determine the number of factors in simulated factor models in which variables indicated one factor each. In our report, we compare NEST and EGA with simulated factor models in which each variable indicated multiple factors to varying degrees. These conditions suit validation of methods to detect factors because the premise of an unknown number of factors implies that one may not assume how many factors link to individual variables. We conducted two simulation studies: In Study 1, we show that methods detect factors less accurately when variables indicated multiple factors each and highlight that EGA suffered more strongly than NEST. In Study 2, we simulated circumplex models—a particular class of factor models—and show that NEST achieved high accuracy while EGA was strikingly inaccurate. We discuss reasons for the methods' performance and argue that the signal that EGA detects is incongruent on a statistical level with the understanding of factors in factor analysis.
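NEST and EGA themselves require nontrivial implementations, but the basic task both papers evaluate — recovering a known number of factors from simulated data — can be illustrated with classic parallel analysis, a simpler member of the same eigenvalue-based family. This is not the authors' methods; the loadings, sample size, and 95th-percentile threshold below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate data from a known 3-factor model: 12 variables, each loading
# 0.7 on exactly one factor (no cross-loadings), unit total variance.
n_obs, n_factors, per_factor = 1000, 3, 4
loadings = np.zeros((n_factors * per_factor, n_factors))
for f in range(n_factors):
    loadings[f * per_factor:(f + 1) * per_factor, f] = 0.7
factor_scores = rng.standard_normal((n_obs, n_factors))
uniqueness = rng.standard_normal((n_obs, n_factors * per_factor)) * np.sqrt(1 - 0.7 ** 2)
data = factor_scores @ loadings.T + uniqueness

def parallel_analysis(data, n_sims=50, seed=0):
    """Count leading eigenvalues of the sample correlation matrix that
    exceed the 95th percentile of eigenvalues from uncorrelated data
    of the same dimensions."""
    r = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data.T)))[::-1]
    sim = np.empty((n_sims, p))
    for s in range(n_sims):
        noise = r.standard_normal((n, p))
        sim[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise.T)))[::-1]
    thresh = np.percentile(sim, 95, axis=0)
    k = 0
    for o, t in zip(obs, thresh):
        if o <= t:
            break
        k += 1
    return k

print("factors retained:", parallel_analysis(data))
```

In this clean no-cross-loading case the count comes out right; the article's point is precisely that once substantial cross-loadings enter, different methods (NEST, EGA) can diverge sharply.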


Tuesday, August 02, 2022

Wayfinding in Children: A Descriptive Literature Review of Research Methods: The Journal of Genetic Psychology: Vol 0, No 0

Large-scale spatial navigation is part of the Gv (visual-spatial processing) domain.

https://www.tandfonline.com/doi/abs/10.1080/00221325.2022.2103789

Abstract
Wayfinding refers to the process of locating unseen destinations in the spatial environment and is an important spatial skill for children. Despite a growing interest in wayfinding development in children, less attention has been focused on documenting the vast methodological heterogeneity of the existing research body, which impacts the ability to synthesize results across different studies. This review aims to systematically catalog and examine the research methods of the wayfinding development literature. We identified a total of 96 studies that examined 4- to 16-year-old children's wayfinding in unfamiliar, large-scale environments and were published between 1965 and 2020. Based on the environments, we grouped these studies into virtual reality (VR) vs. real-life and indoor vs. outdoor. The review revealed a vast diversity in research methods regarding participants, environments, independent variables (IVs), environmental exposure, dependent variables (DVs), and cognitive/behavioral correlates. The field has seen growing research interest in VR environments and atypical development. The most common IVs focused on the environmental features of landmarks and turn information. Relatively less research considered how different cognitive processes such as attention, memory, and learning contribute to wayfinding. Regarding DVs, various outcome measures have been used to investigate landmark, route, and survey knowledge. This review showed an imbalance of topic areas in the field, systematic differences between different types of studies, and the need for greater attention to a number of important topics. Finally, we provided targeted, detailed recommendations for future research.


Process-oriented intelligence research: A review from the cognitive perspective - ScienceDirect

https://www.sciencedirect.com/science/article/pii/S0160289622000629?via%3Dihub

Abstract
Despite over a century of research on intelligence, the cognitive processes underlying intelligent behavior are still unclear. In this review, we summarize empirical results investigating the contribution of cognitive processes associated with working memory capacity, processing speed, and executive processes to intelligence differences. Specifically, we (a) evaluate how cognitive processes associated with the three different cognitive domains have been measured, and (b) how these processes are related to individual differences in intelligence. Consistently, this review illustrates that isolating single cognitive processes using average performance in cognitive tasks is hardly possible. Instead, formal models that implement theories of cognitive processes underlying performance in different cognitive tasks may provide more adequate indicators of single cognitive processes. Therefore, we outlined which models for working memory capacity, processing speed, and executive processes may provide more specific insights into cognitive processes associated with individual differences in intelligence. Finally, we discuss implications of a process-oriented intelligence research using cognitive measurement models for psychometric theories of intelligence and argue that a model-based approach might overcome validity problems of traditional intelligence theories.
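One well-known example of the formal models the authors have in mind is the diffusion model of choice response times. A toy simulation (all parameters invented for illustration) shows why average task performance cannot isolate a single cognitive process: two simulated participants with identical processing speed (drift rate) but different response caution (boundary separation) produce quite different mean RTs, so mean RT alone confounds the two processes that the model's parameters separate:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ddm(drift, boundary, n_trials=400, dt=0.002, noise=1.0):
    """Simulate a simple drift-diffusion process: evidence starts at 0 and
    accumulates with Gaussian noise until it hits +boundary (correct) or
    -boundary (error). Returns mean RT (s) and accuracy."""
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x > 0)
    return np.mean(rts), np.mean(correct)

# Two simulated participants with IDENTICAL processing speed (drift),
# differing only in response caution (boundary separation).
rt_hasty, acc_hasty = simulate_ddm(drift=2.0, boundary=0.8)
rt_cautious, acc_cautious = simulate_ddm(drift=2.0, boundary=1.6)

print(f"cautious - hasty mean RT: {rt_cautious - rt_hasty:.2f} s")
print(f"accuracies: hasty {acc_hasty:.2f}, cautious {acc_cautious:.2f}")
```

Fitting such a model to real data would recover the same drift rate for both simulated participants despite their different mean RTs — the kind of process-pure indicator the review argues intelligence research needs.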
