
Friday, March 21, 2025

Research Byte: Co-Occurrence and Causality Among #ADHD, #Dyslexia, and #Dyscalculia - #SLD #schoolpsychology #sped #genetics #EDPSY

Co-Occurrence and Causality Among ADHD, Dyslexia, and Dyscalculia

Published in Psychological Science. Click here to access a PDF copy of the article.

Abstract
ADHD, dyslexia, and dyscalculia often co-occur, and the underlying continuous traits are correlated (ADHD symptoms, reading, spelling, and math skills). This may be explained by trait-to-trait causal effects, shared genetic and environmental factors, or both. We studied a sample of ≤ 19,125 twin children and 2,150 siblings from the Netherlands Twin Register, assessed at ages 7 and 10. Children with a condition, compared to those without that condition, were 2.1 to 3.1 times more likely to have a second condition. Still, most children (77.3%) with ADHD, dyslexia, or dyscalculia had just one condition. Cross-lagged modeling suggested that reading causally influences spelling (β = 0.44). For all other trait combinations, cross-lagged modeling suggested that the trait correlations are attributable to genetic influences common to all traits, rather than causal influences. Thus, ADHD, dyslexia, and dyscalculia seem to co-occur because of correlated genetic risks, rather than causality.
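For readers unfamiliar with the method, here is a minimal sketch of the cross-lagged logic in Python. This is an illustration only, not the authors' actual model (the study used genetically informative twin models); all variable names and simulated values are hypothetical.

```python
# Illustrative cross-lagged regression on simulated data (NOT the study's model).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
reading_7 = rng.standard_normal(n)                     # reading skill at age 7
spelling_7 = 0.5 * reading_7 + rng.standard_normal(n)  # correlated spelling at age 7
# Build in a causal path from age-7 reading to age-10 spelling
spelling_10 = 0.44 * reading_7 + 0.3 * spelling_7 + rng.standard_normal(n)

# Cross-lagged question: does reading at 7 predict spelling at 10
# over and above spelling at 7?
X = sm.add_constant(np.column_stack([reading_7, spelling_7]))
fit = sm.OLS(spelling_10, X).fit()
print(fit.params)  # the reading_7 coefficient approximates the cross-lagged path
```

A nonzero cross-lagged coefficient is consistent with (though not proof of) a causal trait-to-trait effect; the twin design is what lets the authors separate such effects from shared genetic influences.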


Tuesday, October 04, 2011

Dr. Doug Detterman's bytes: Psychometric validity

I have been remiss (busy) in my posting of Dr. Doug Detterman's bytes. Here is a new one on validity.

Validity is the extent to which a test measures what it is supposed to measure and predicts what it is supposed to predict. When Binet developed his intelligence test, his goal was to identify children who would not do well in school so they could be given help. To the extent that Binet's test identified such children, it was valid. In Binet's case, proving the validity of the test amounted to showing that the test predicted or correlated with school performance. (Binet was handicapped, though, since the correlation coefficient was not widely known at the time of his first test.) Note that there is no requirement to provide an explanation of why the test predicts what it was designed to predict, only that it do so. Validity provides an empirical relationship that may be devoid of any theoretical meaning. Theoretical meaning is given to the relationship when people attempt to explain why the test works to produce this validity relationship.

Tests designed to predict one thing may be found to predict other things. This is certainly the case with intelligence tests. Relationships between intelligence and many other variables have been found. Such relationships help to build a theory about how and why the test works and ultimately about the relationship of the variables studied.
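In Binet's terms, the validity evidence is simply the correlation between test scores and the criterion the test was built to predict. A minimal sketch with entirely simulated data (the scores and criterion below are hypothetical):

```python
# Hypothetical illustration: predictive validity as a test-criterion correlation.
import numpy as np

rng = np.random.default_rng(1)
test_scores = rng.normal(100, 15, size=200)   # simulated IQ-metric scores
# Simulated school performance, built to correlate roughly .45 with the test
school_grades = 0.5 * (test_scores - 100) / 15 + rng.standard_normal(200)

validity = np.corrcoef(test_scores, school_grades)[0, 1]
print(f"Predictive validity coefficient: r = {validity:.2f}")
```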


- iPost using BlogPress from Kevin McGrew's iPad



Friday, July 15, 2011

Intelligent IQ testing: Joel Schneider on proper interpretation of composite/cluster scores

Dr. Joel Schneider has (again) posted an amazing and elegant video tutorial to help individuals who engage in intelligence test interpretation understand whether composite/cluster scores should be interpreted as valid when the individual subtests comprising the composite are significantly different or discrepant (according to Dr. Schneider--"short answer: not very often"). It is simply AWESOME...and makes me envious that I don't have the time or skills to develop similar media content.

His prior and related video can be found here.

Clearly the message is that the interpretation of test scores is not simple; it is a mixture of art and science. As Tim Keith once said in a journal article title (1997)...."Intelligence is important, intelligence is complex." This should be modified to read "intelligence is important, intelligence is complex, and intelligent intelligence test interpretation is also complex."
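For those curious about the psychometrics behind the discrepant-subtests question, one common flagging rule (a generic textbook rule, not necessarily Dr. Schneider's approach) compares the observed subtest difference to a critical value built from the subtests' reliabilities. A sketch with hypothetical values:

```python
# Hedged sketch: flag a subtest difference as "significant" when it exceeds
# a critical value based on each subtest's standard error of measurement.
# All reliabilities and scores below are hypothetical.
import math

SD = 15                    # standard-score SD (assumed metric)
r_xx, r_yy = 0.90, 0.85    # hypothetical subtest reliabilities
z = 1.96                   # two-tailed .05 criterion

se_diff = SD * math.sqrt(2 - r_xx - r_yy)   # SE of the difference score
critical_diff = z * se_diff

score_x, score_y = 112, 88                  # hypothetical subtest scores
print(f"Critical difference: {critical_diff:.1f} points")
print("Discrepant" if abs(score_x - score_y) > critical_diff else "Not discrepant")
```

Whether a flagged discrepancy should actually block interpretation of the composite is exactly the judgment question Dr. Schneider's tutorial addresses.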


- iPost using BlogPress from Kevin McGrew's iPad



Saturday, November 27, 2010

Visual-graphic of how to develop psychological measures of constructs

I found this figure, which I had developed a few years ago for a specific grant process (thus the scratched-out box that is not relevant to this post), and which summarizes in a single figure the accepted/recommended approach to developing and validating tests. In simple terms, one starts with the specification of the theoretical domain construct(s) of interest, then examines the measurement domain for possible types of tests to operationalize the constructs, and then one develops and scales the test items (optimally using IRT scaling methods). Very basic. Thought I would share---I love visual-graphic explanations.

[Figure: test-development flow from theoretical construct specification to measurement-domain selection to item development and IRT scaling. Double click on image to enlarge.]
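For the scaling step, the Rasch model is the simplest IRT case: the probability of a correct response depends only on the gap between person ability (theta) and item difficulty (b). A minimal sketch with illustrative values:

```python
# Minimal Rasch model sketch: P(correct) = exp(theta - b) / (1 + exp(theta - b)).
# The ability and difficulty values are illustrative only.
import math

def rasch_p_correct(theta: float, b: float) -> float:
    """Probability that a person of ability theta passes an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

for b in (-1.0, 0.0, 1.0):   # easy, medium, hard items
    print(f"item difficulty {b:+.1f}: P = {rasch_p_correct(0.5, b):.2f}")
```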
- iPost using BlogPress from Kevin McGrew's iPad


Thursday, May 27, 2010

iPost: Do neuropsych tests measure the same abilities when translated into Spanish?


Do neuropsychological tests have the same meaning in Spanish speakers as they do in English speakers?
By Siedlecki, Karen L.; Manly, Jennifer J.; Brickman, Adam M.; Schupf, Nicole; Tang, Ming-Xin; Stern, Yaakov
Neuropsychology, Vol 24(3), May 2010, 402-411.
Abstract

Objective: The purpose of this study was to examine whether neuropsychological tests translated into Spanish measure the same cognitive constructs as the original English versions. Method: Older adult participants (N = 2,664), who did not exhibit dementia from the Washington Heights Inwood Columbia Aging Project (WHICAP), a community-based cohort from northern Manhattan, were evaluated with a comprehensive neuropsychological battery. The study cohort includes both English (n = 1,800) and Spanish speakers (n = 864) evaluated in their language of preference. Invariance analyses were conducted across language groups on a structural equation model comprising four neuropsychological factors (memory, language, visual-spatial ability, and processing speed). Results: The results of the analyses indicated that the four-factor model exhibited partial measurement invariance, demonstrated by invariant factor structure and factor loadings but nonequivalent observed score intercepts. Conclusion: The finding of invariant factor structure and factor loadings provides empirical evidence to support the implicit assumption that scores on neuropsychological tests are measuring equivalent psychological traits across these two language groups. At the structural level, the model exhibited invariant factor variances and covariances.  
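The invariance conclusion rests on nested-model comparisons: constrain a parameter set (e.g., factor loadings) to be equal across groups and test whether model fit worsens significantly. A minimal sketch of the chi-square difference test, with made-up fit statistics (not the study's actual values):

```python
# Hedged sketch of an invariance test via chi-square difference.
# All chi-square and df values below are invented for illustration.
from scipy.stats import chi2

chisq_configural, df_configural = 850.0, 420   # no cross-group constraints
chisq_metric, df_metric = 872.0, 436           # loadings constrained equal

delta_chisq = chisq_metric - chisq_configural
delta_df = df_metric - df_configural
p = chi2.sf(delta_chisq, delta_df)
print(f"delta chi-square = {delta_chisq:.1f}, delta df = {delta_df}, p = {p:.3f}")
# A nonsignificant p supports the added equality constraints (invariance).
```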

Monday, July 13, 2009

John Horn's (1965) doctoral dissertation test of Cattell's Gf-Gc theory


John Horn's Gf-Gc dissertation available for viewing.

I'm working on a visual-graphic and text-based summary and extension of my previously published "CHC Theory: Past, Present and Future" book chapter...so it can be displayed on the web and, more importantly, can serve as a presentation for instructional/historical purposes. When done I will be giving this material away to those who are interested.

In the process I'm trying to embed hyperlinks to classic articles that will give readers the chance to view and read many of the seminal works that have led us to contemporary CHC theory and intellectual assessment.

Today I'm posting a real gem I found in the process of completing this project: a PDF copy of John Horn's original dissertation (1965). According to Carroll (1993), this was the first real empirical test of Cattell's Gf-Gc theory.

You are forewarned: the file is very large (17+ MB). I suggest you don't try to download or view it over a dial-up phone line or a slow wifi connection.


Wednesday, July 08, 2009

Applied Psych Test Design Part G: Psychometric/technical statistical analysis: External

The seventh in the series Art and Science of Applied Test Development is now available.

The seventh module (Part G:  Psychometric/technical statistical analysis:  External) is now posted and is accessible via SlideShare.

In addition, I've made some new edits and additions to the prior presentations (Parts A-F)...so if you've viewed the prior modules you may want to revisit them.

This is the seventh in a series of PPT modules explicating the development of psychological tests in the domain of cognitive ability using contemporary methods (e.g., theory-driven test specification; IRT-Rasch scaling; etc.). The presentations are intended to be conceptual and not statistical in nature. Feedback is appreciated.

This project can be tracked on the left-side pane of the blog under the heading Applied Test Development Series.

The first module (Part A: Planning, development frameworks & domain/test specification blueprints) was posted previously and is accessible via SlideShare.

The second module (Part B: Test and item development) was posted previously and is accessible via SlideShare.

The third module (Part C--Use of Rasch scaling technology) was posted previously and is accessible via SlideShare.

The fourth module (Part D--Develop norm [standardization] plan) was posted previously and is accessible via SlideShare.

The fifth module (Part E--Calculate norms and derived scores) was posted previously and is accessible via SlideShare.

The sixth module (Part F--Psychometric/technical statistical analysis: Internal) was posted previously and is accessible via SlideShare.

You are STRONGLY encouraged to view them in order, as the concepts, graphic representations, ideas, etc., build on each other from start to finish.

That's it for now. I will likely revise and add more material in the future---but this is the "basic" set of materials.




Tuesday, July 07, 2009

Applied Psych Test Development Series: Part F--Psychometric/technical statistical analysis: Internal

The sixth in the series Art and Science of Applied Test Development is now available.

The sixth module (Part F--Psychometric/technical statistical analysis:  Internal) is now available.

In addition, I've made some edits and additions (esp. the summary "Tools, Tips, and Troubles" and "Advanced Topics" slides) to the prior presentations (Parts A-E).

This is the sixth in a series of PPT modules explicating the development of psychological tests in the domain of cognitive ability using contemporary methods (e.g., theory-driven test specification; IRT-Rasch scaling; etc.). The presentations are intended to be conceptual and not statistical in nature. Feedback is appreciated.

This project can be tracked on the left-side pane of the blog under the heading Applied Test Development Series.

The first module (Part A: Planning, development frameworks & domain/test specification blueprints) was posted previously and is accessible via SlideShare.

The second module (Part B: Test and item development) was posted previously and is accessible via SlideShare.

The third module (Part C--Use of Rasch scaling technology) was posted previously and is accessible via SlideShare.

The fourth module (Part D--Develop norm [standardization] plan) was posted previously and is accessible via SlideShare.

The fifth module (Part E--Calculate norms and derived scores) was posted previously and is accessible via SlideShare.

You are STRONGLY encouraged to view them in order, as the concepts, graphic representations, ideas, etc., build on each other from start to finish.

Enjoy...more to come.
