Showing posts with label BIS.

Friday, November 15, 2024

#WJIV Geometric-Quantoid (#geoquant) #intelligence art: A geoquant interpretation of #cognitive tests is worth a 1000 words—some similar “art parts” will be in #WJV technical manual


[Figure: WJ IV geoquant MDS figure (click to enlarge in the original post to read)]

I frequently complete data analyses that never see the light of day in a journal article. The results are all I need (at the time) to answer intriguing questions, and I then move on…or tantalize psychologists during a workshop or conference presentation.  Thus, this is non-peer-reviewed information.  Below is one of my geoquant figures from a series of 2016 analyses (later updated in 2020) that I completed on a portion of the WJ IV norm data.  To interpret it you should have knowledge of the WJ IV tests, so you can understand the test variable abbreviation names.  This MDS figure includes numerous interesting cognitive psychology constructs and theoretical principles based on multiple methodological lenses and supporting theory/research.  It was completed before I was introduced to psychometric network analysis methods as yet another visual means to understand intelligence test data.  You can play “where’s Waldo” and look for the following:

  • CHC broad cognitive factors
  • Cognitive complexity information re WJ IV tests
  • Kahneman’s two systems of cognition (System I/II thinking)
  • Berlin BIS ability x content facet framework
  • Two of Ackerman’s intelligence dimensions as per PPIK theory (intelligence-as-process; intelligence-as-knowledge)
  • Cattell’s general fluid (gf) and general crystallized (gc) abilities, the two major domains in his five-domain triadic theory of intelligence…the lower-case gf/gc notation is deliberate and indicates more “general” capacities (akin, in breadth, to the g of Spearman, who was Cattell’s mentor) and not the Horn- and Carroll-like broad Gf and Gc
  • Newland’s process-dominant versus product-dominant distinction of cognitive abilities.
Enjoy.  MDS analyses and figures will also be in the forthcoming (Q1 2025) WJ V technical manual (LaForte, Dailey, & McGrew, 2025, in preparation), but not in the form of these multiple method/theory synthesis grand figures…stay tuned.  I may create such beautiful geoquant WJ V masterpieces once the WJ V is launched in Q1 2025.  We shall see.  I find these grand synthesis figures particularly useful when interpreting test results…all the critical information in one single figure…wouldn’t you?
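For readers curious about the mechanics behind such geoquant figures: the quantitative core is an ordinary MDS of a test intercorrelation matrix (one common recipe converts correlations to dissimilarities and scales them into two dimensions); the interpretive overlays (CHC factors, complexity gradients, System I/II regions, etc.) are drawn afterward. Below is a minimal Python sketch of that core step only. It is not the WJ IV norm-data analysis; the test names and correlations are invented placeholders.

```python
# Minimal sketch of the MDS step behind a geoquant-style figure.
# NOT the WJ IV analysis; the tests and correlations below are invented.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import MDS

tests = ["TestA", "TestB", "TestC", "TestD", "TestE"]  # placeholder test names
R = np.array([  # hypothetical intercorrelation matrix (symmetric, 1s on diagonal)
    [1.00, 0.62, 0.48, 0.35, 0.30],
    [0.62, 1.00, 0.55, 0.40, 0.33],
    [0.48, 0.55, 1.00, 0.52, 0.41],
    [0.35, 0.40, 0.52, 1.00, 0.58],
    [0.30, 0.33, 0.41, 0.58, 1.00],
])

# Convert correlations to dissimilarities; sqrt(2 * (1 - r)) is one common choice.
D = np.sqrt(2 * (1 - R))

# Two-dimensional metric MDS of the precomputed dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=1)
coords = mds.fit_transform(D)

fig, ax = plt.subplots()
ax.scatter(coords[:, 0], coords[:, 1])
for (x, y), name in zip(coords, tests):
    ax.annotate(name, (x, y))
ax.set_title("MDS of test intercorrelations (hypothetical data)")
plt.show()
```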

Friday, May 24, 2013

A useful taxonomy for classifying Gf tests: Oliver Wilhelm chapter

This is a post made early in the history of this blog.  Still relevant and important.

In a prior post I summarized a taxonomic lens for analyzing performance on figural/spatial matrix measures of fluid intelligence (Gf). Since then I have had the opportunity to read “Measuring Reasoning Ability” by Oliver Wilhelm (see early blog post on recommended books to read – this chapter is part of the Handbook of Understanding and Measuring Intelligence by Wilhelm and Engle). Below are a few select highlights.

The need for a more systematic framework for understanding Gf measures

As noted by Wilhelm, “there is certainly no lack of reasoning measures” (p. 379). Furthermore, as I learned when classifying tests as per CHC theory with Dr. Dawn Flanagan, the classification of Gf tests as measures of general sequential (deductive) reasoning (RG), inductive reasoning (I), and quantitative reasoning (RQ) is very difficult. Kyllonen and Christal’s 1990 observation (quoted in the Wilhelm chapter) still rings true: the “development of good tests of reasoning ability has been almost an art form, owing more to empirical trial-and-error than to systematic delineation of the requirements which such tests must satisfy” (p. 446 in Kyllonen and Christal; p. 379 in Wilhelm). It thus follows that the logical classification of Gf tests is often difficult…or, as we used to say when I was in high school…“no sh____, Batman!!!!”

As a result, “scientists and practitioners are left with little advice from test authors as to why a specific test has the form it has. It is easy to find two reasoning tests that are said to measure the same ability but that are vastly different in terms of their features, attributes, and requirements” (p. 379).

Wilhelm’s system for formally classifying reasoning measures

Wilhelm articulates four aspects to consider in the classification of reasoning measures. These are:
  • Formal operation task requirements – this is what most CHC assessment professionals have been encouraged to examine via the CHC lens. Is a test a measure of RG, I, RQ, or a mixture of more than one narrow ability?
  • Content of tasks – this is where Wilhelm’s research group has made one of its many significant contributions during the past decade. Wilhelm et al. have reminded us that, even though the Rubik’s cube model of intelligence (Guilford’s SOI model) was found seriously wanting, the analysis of intelligence tests by operation (see above) and content facets remains theoretically and empirically sound. I fear that many psychologists, having been burned by the unfulfilled promise of the SOI interpretative framework, have often thrown out the content facet with the SOI bath water. There is clear evidence (see my prior post that presents evidence for content facets based on a Carroll-style analysis of 50 CHC-designed measures) that most psychometric tests can be meaningfully classified as per stimulus content – figural, verbal, and quantitative.
  • The instantiation of the reasoning tasks/problems – what is the formal underlying structure of the reasoning tasks? Space does not allow a detailed treatment here, but Wilhelm provides a flavor of this feature when he suggests that one must go through a “decision tree” to ascertain if the problems are concrete vs. abstract. Following the abstract branch, further differentiation might occur vis-à-vis the distinction of “nonsense” vs. “variable” instantiation. Following the concrete branch of the decision tree, reasoning problem instantiation can be differentiated as to whether or not prior knowledge is required. And so on.
    • As noted by Wilhelm, “it is well established that the form of the instantiation has substantial effects on the difficulty of structurally identical reasoning tasks” (p. 380).
  • Vulnerability of the task to reasoning “strategies” – all good clinicians know, and have seen, that certain examinees often change the underlying nature of a psychometric task via the deployment of unique metacognitive/learning strategies. I often call this the “expansion of a test’s specificity by the examinee.” According to Wilhelm, “if a subgroup of participants chooses a different approach to work on a given test, the consequence is that the test is measuring different abilities for different subgroups…depending on which strategy is chosen, different items are easy and hard, respectively” (p. 381). Unfortunately, research-based protocols for ascertaining which strategies are used during reasoning task performance are more-or-less non-existent.

Ok…that’s enough for this blog post. Readers are encouraged to chew on this taxonomic framework. I do plan (but don’t hold me to the promise…it is a benefit of being the benevolent blog dictator) to summarize additional information from this excellent chapter. Wilhelm’s taxonomy has obvious implications for those who engage in test development. Wilhelm’s framework suggests a structure from which to systematically design/specify Gf tests as per the four dimensions.
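As a purely hypothetical illustration of what a systematic specification along Wilhelm’s four dimensions might look like in a test-development blueprint, here is a small Python sketch; the field names and example values are my own shorthand, not Wilhelm’s notation.

```python
# Hypothetical sketch of a Gf test specification along Wilhelm's four dimensions.
# Field names and example values are illustrative shorthand, not Wilhelm's notation.
from dataclasses import dataclass
from typing import Literal

@dataclass
class GfTestSpec:
    name: str
    operation: Literal["RG", "I", "RQ", "mixed"]            # formal operation requirement
    content: Literal["figural", "verbal", "quantitative"]   # stimulus content facet
    instantiation: Literal["concrete-knowledge", "concrete-no-knowledge",
                           "abstract-nonsense", "abstract-variable"]
    strategy_vulnerability: Literal["low", "moderate", "high"]

# Example: a matrix-type task specified along the four dimensions.
matrix_task = GfTestSpec(
    name="Hypothetical Matrix Task",
    operation="I",
    content="figural",
    instantiation="abstract-nonsense",
    strategy_vulnerability="moderate",
)
print(matrix_task)
```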

On the flip side (applied practice), Wilhelm’s work suggests that our understanding of the abilities measured by existing Gf tests might be facilitated via the classification of different Gf tests as per these dimensions. Work on the “operation” characteristic has been going strong since the mid-1990s as per the CHC narrow ability classification of tests.

Might not a better understanding of Gf measures emerge if those leading the pack on how to best interpret intelligence tests add (to the CHC operation classifications of Gf tests) the analysis of tests as per the content and instantiation dimensions, as well as identify the different types of cognitive strategies that might be elicited from different individuals by different Gf tests?

I smell a number of nicely focused and potentially important doctoral dissertations based on the administration of a large collection of available practical Gf measures (e.g., Gf tests from WJ III, KAIT, Wechslers, DAS, CAS, SB5, Ravens, and other prominent “nonverbal” Gf measures) to a decent sample, followed by exploratory and/or confirmatory factor analyses and multidimensional scaling (MDS). Heck….doesn’t someone out there have access to that ubiquitous pool of psychology experiment subjects --- viz., undergraduates in introductory psychology classes? This would be a good place to start.
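If someone does take this on, the analytic machinery is readily available. Here is a minimal sketch of the exploratory factor analysis step, assuming the scores sit in a pandas DataFrame with one column per Gf measure; the file name, the three-factor choice, and the factor_analyzer package are placeholders and just one option among several.

```python
# Rough sketch of the EFA step for such a study. Assumes a DataFrame `scores`
# with one column per Gf test; the data file and three-factor choice are placeholders.
import pandas as pd
from factor_analyzer import FactorAnalyzer

scores = pd.read_csv("gf_battery_scores.csv")  # hypothetical file of test scores

fa = FactorAnalyzer(n_factors=3, rotation="oblimin", method="minres")
fa.fit(scores)

loadings = pd.DataFrame(fa.loadings_, index=scores.columns)
print(loadings.round(2))           # pattern matrix
print(fa.get_factor_variance())    # variance explained per factor
```

The MDS step would then proceed much as in the sketch shown earlier in this collection, starting from the intercorrelations among the same measures.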


Tuesday, November 15, 2011

Thinking..fast and slow: Dual process models of cognition/intelligence--hot topic

Dual cognitive process models (sometimes called Type I/II processing) have increased in prominence over the past five years.  Within the past few weeks the long-anticipated book "Thinking, Fast and Slow" by Daniel Kahneman was released, and it is already near the top of most non-fiction best-seller lists.  I can't wait to get my copy, as it will put Malcolm Gladwell's "Blink" in its proper place.  This will give the layperson, and many professionals, a better understanding of these two general classes of cognitive processes.

My thinking about applied intelligence test development and interpretation has incorporated this general dichotomy in the form of a working (evolving) test development/interpretation framework (see summary figure below).
[Figure: working (evolving) dual-process test development/interpretation framework (double click to enlarge in the original post)]

The most recent journal to devote a special issue to dual process models is Developmental Review.  Below are the key articles and a few intriguing model figures.
[The key articles and model figures from the special issue appeared here as images.]

Thursday, October 07, 2010

Research bytes 10-7-10: PDA, notebook, and paper-and-pencil testing of Gf abilities similar

Schroeders, U., & Wilhelm, O. (2010). Testing Reasoning Ability with Handheld Computers, Notebooks, and Paper and Pencil. European Journal of Psychological Assessment, 26(4), 284-292.

Electronic devices can be used to enhance or improve cognitive ability testing. We compared three reasoning-ability measures delivered on handheld computers, notebooks, and paper-and-pencil to test whether or not the same underlying abilities were measured irrespective of the test medium. Rational item-generative principles were used to generate parallel item samples for a verbal, a numerical, and a figural reasoning test, respectively. All participants, 157 high school students, completed the three measures on each test medium. Competing measurement models were tested with confirmatory factor analyses. Results show that 2 test-medium factors for tests administered via notebooks and handheld computers, respectively, had small to negligible loadings, and that the correlation between these factors was not substantial. Overall, test medium was not a critical source of individual differences. Perceptual and motor skills are discussed as potential causes for test-medium factors.
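As a hedged sketch of how such a model comparison might be set up today (not the authors’ actual code), assuming the Python semopy package and hypothetical variable names (content domain crossed with test medium: pp = paper-and-pencil, nb = notebook, hh = handheld), a baseline measurement model could look like the following; the competing models in the study then add notebook and handheld test-medium factors on top of such a baseline.

```python
# Hedged sketch of a baseline measurement model for the three-media reasoning
# design, using the semopy SEM package. All variable and file names are hypothetical.
import pandas as pd
import semopy

model_desc = """
Verbal  =~ verbal_pp + verbal_nb + verbal_hh
Numeric =~ num_pp + num_nb + num_hh
Figural =~ fig_pp + fig_nb + fig_hh
"""

data = pd.read_csv("reasoning_by_medium.csv")  # hypothetical wide-format scores

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())            # parameter estimates
print(semopy.calc_stats(model))   # global fit indices (CFI, RMSEA, etc.)
```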



Wednesday, April 01, 2009

Divergent thinking (creative problem solving) is content-specific?

Results from a large-scale study (n = 1300+) of German subjects (Kuhn & Holling, 2009; European Journal of Psychological Assessment) suggest that the factor structure of divergent thinking (idea generation, creative problem solving, etc.) abilities may be domain-specific (numerical, verbal, figural), consistent with the BIS intelligence theory (which was the framework for the study).

My only criticism is that no attempt was made to relate (test a model?) or interpret the results as per the divergent abilities that are subsumed as the fluency/rate factors under Glr in the CHC theory of intelligence.  Without detailed descriptions of the tests in the manuscript, it is not possible to do a post-hoc BIS-CHC "cross-walk."  My hunch is that the content-classified divergent thinking tests used in this study may tap a content facet or intermediate stratum of the CHC taxonomy.  A number of the CHC Glr fluency factors would appear, at face value, to be readily classified (on a logical basis) as per these three content dimensions/facets.

I find these BIS-based results interesting and in need of integration within the CHC taxonomy (click here for recent CHC overview article in Intelligence)...and vice-versa.
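As a toy illustration of the kind of BIS-CHC “cross-walk” I have in mind (a logical hunch, not an empirical mapping, and certainly not Kuhn & Holling’s analysis), the content facets might line up with CHC Glr fluency factors roughly as sketched below; the numerical facet is left open because CHC has no obvious fluency counterpart.

```python
# Toy, logical (not empirical) BIS-to-CHC cross-walk for content-classified
# divergent thinking measures. The mapping is a hunch offered for illustration.
bis_to_chc_fluency = {
    "verbal":    ["FI (ideational fluency)", "FW (word fluency)"],
    "figural":   ["FF (figural fluency)", "FX (figural flexibility)"],
    "numerical": [],  # no obvious CHC Glr fluency factor; an open question
}

for facet, chc_factors in bis_to_chc_fluency.items():
    print(f"{facet:>9}: {', '.join(chc_factors) or 'unmapped'}")
```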


Saturday, August 20, 2005

Working memory - more on Gf connection

As indicated by a number of prior posts ("Berlin BIS model of intelligence--material to review"; "g, working memory, specific CHC abilities and achievement") regarding the relationship between working memory (Gsm-MW) and fluid reasoning (Gf) or g, there is no shortage of contemporary research that continues to investigate the interesting relationship between higher-level cognitive processing and working memory. Below is a brief summary of yet another research article that sheds light on the possible reasons for the working memory/Gf/g relation.

Buehner, M., Krumm, S., & Pick, M. (2005). Reasoning=working memory ≠ attention. Intelligence, 33(3), 251-272.


Abstract
  • The purpose of this study was to clarify the relationship between attention, components of working memory, and reasoning. Therefore, twenty working memory tests, two attention tests, and nine intelligence subtests were administered to 135 students. Using structural equation modeling, we were able to replicate a functional model of working memory proposed by Oberauer, Suess, Wilhelm, and Wittmann (2003) [Oberauer, K., Suess, H.-M., Wilhelm, O., & Wittmann, W. W. (2003). The multiple faces of working memory: Storage, processing, supervision, and coordination. Intelligence, 31, 167-193]. The study also revealed a weak to moderate relationship between the "selectivity aspect of attention" and working memory components as well as the finding that "supervision" was only moderately related to "storage in the context of processing" and to "coordination". No significant path was found from attention to reasoning. Reasoning could be significantly predicted by "storage in the context of processing" and "coordination". All in all, 95% of reasoning variance could be explained. Controlling for speed variance, the correlation between working memory components and intelligence did not decrease significantly.
Major findings
  • Oberauer, Suess, Wilhelm, and Wittmann’s (2000, 2003) model of working memory hypothesizes that working memory can be separated into two facets: a content facet (containing verbal/numerical material and figural/spatial material) and a functional facet (separated into the components of storage in the context of processing, coordination, and supervision).
    • A characteristic storage task is a dual task, where participants have to remember words, then perform another task and finally recall the remembered words. This factor is similar to the updating and working memory capacity of Miyake, Friedman, Emerson, Witzki, Howerter, and Wager (2000) and Engle, Tuholski, Laughlin, and Conway (1999).
    • Coordination is the ability to build new relations between elements and to integrate relations into structures (Oberauer et al., 2003, p. 169).
    • Supervision involves the monitoring of ongoing cognitive processes and actions, the selective activation of relevant representations and procedures, and the suppression of irrelevant, distracting ones.
  • At the latent-factor level, the working memory components (especially storage in the context of processing and coordination) explained 95% of the variance in Gf. Storage in the context of processing was the best predictor of Gf, and coordination was also a significant predictor. Supervision and the selectivity aspect of attention had little or no impact on Gf (a simplified illustration of this kind of prediction follows this list).
  • The excellent global fit confirmed the structure of working memory found by Oberauer et al. (2003). However, the content factors could not be confirmed. This might be due to the reduced standard deviations and (consequently) lower reliabilities of some working memory tasks.
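As a simplified, observed-score analogue of the key prediction finding (the study itself modeled latent factors with structural equation modeling), one could regress a reasoning composite on working memory component composites. The data file and column names below are hypothetical placeholders.

```python
# Simplified, observed-score analogue of predicting reasoning from working memory
# components. The actual study used latent-variable SEM; everything here
# (file name, column names) is a hypothetical placeholder.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("wm_reasoning_composites.csv")  # hypothetical composite scores
X = df[["storage_in_processing", "coordination", "supervision"]]
y = df["reasoning"]

model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_.round(2))))
print("R^2:", round(model.score(X, y), 2))  # the latent-level figure in the study was .95
```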