Saturday, January 25, 2014

Sharing Controlling for increased guessing enhances the independence of the Flynn effect from g: The return of the Brand effect via BrowZine

Controlling for increased guessing enhances the independence of the Flynn effect from g: The return of the Brand effect
Woodley, Michael Anthony; te Nijenhuis, Jan; Must, Olev; Must, Aasa
Intelligence, Vol. 43 – 2014: 27 - 34

10.1016/j.intell.2013.12.004

University of Minnesota Users:
https://www.lib.umn.edu/log.phtml?url=http://www.sciencedirect.com/science/article/pii/S0160289613001761

Non-University of Minnesota Users: (Full text may not be available)
http://www.sciencedirect.com/science/article/pii/S0160289613001761

Tuesday, January 21, 2014

Intensive reading remediation in grade 2 or 3: Are there effects a decade later? [feedly]




Intensive reading remediation in grade 2 or 3: Are there effects a decade later?
// Journal of Educational Psychology - Vol 106, Iss 1
Despite data supporting the benefits of early reading interventions, there has been little evaluation of the long-term educational impact of these interventions, with most follow-up studies lasting less than 2 years (Suggate, 2010). This study evaluated reading outcomes more than a decade after the completion of an 8-month reading intervention using a randomized design with 2nd and 3rd graders selected on the basis of poor word-level skills (Blachman et al., 2004). Fifty-eight (84%) of the original 69 participants took part in the study. The treatment group demonstrated a moderate to small effect size advantage on reading and spelling measures over the comparison group. There were statistically significant differences with moderate effect sizes between treatment and comparison groups on standardized measures of word recognition (i.e., Woodcock Basic Skills Cluster, d = 0.53; Woodcock Word Identification, d = 0.62), the primary, but not exclusive, focus of the intervention. Statistical tests on other reading and spelling measures did not reach thresholds for statistical significance. Patterns in the data related to other educational outcomes, such as high school completion, favored the treatment participants, although differences were not significant. (PsycINFO Database Record (c) 2014 APA, all rights reserved)




National Geographic features the Human Connectome Project [feedly]




National Geographic features the Human Connectome Project
// Human Connectome Project

New research from members of our HCP team suggests that brain circuitry is organized more like Manhattan's street grid than London's chaotic tangle of random roadways. Read the full article in the February 2014 issue of National Geographic.


Article: Fluctuations in attention are related to fluid but not crystallized intelligence




Monday, January 20, 2014

Friday, January 17, 2014

Extended Gf-Gc Theory, Visualized


 
 
Published on Assessing Psyche, Engaging Gauss, Seeking Sophia
Extended Gf-Gc Theory, Visualized

Horn updated Gf-Gc Theory many times. I made a visual representation of the last iteration (Horn & Blankson, 2005). The three major groupings are conceptual categories, not abilities themselves.

.pdf (can be zoomed without losing image quality)

Prezi Version

Low-ish quality image:

Extended Gf-Gc Theory

Horn, J. L., & Blankson, N. (2005). Foundations for better understanding of cognitive abilities. In D. Flanagan & P. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 41–68). New York, NY: Guilford Press.





Wednesday, January 15, 2014

Article: Brain Facts: Download the Audio Recording - BrainFacts.org



Dr. Procrustes does not need to see you; he has your test scores. [feedly]


 
 
Published on Assessing Psyche, Engaging Gauss, Seeking Sophia
Dr. Procrustes does not need to see you; he has your test scores.

I rock at the Tower of Hanoi—you could give me a stack of as many discs as you like and I can move the whole stack from one peg to the other without any hesitation and without a single error. I don't mean to be immodest about it, but it's true. My performance is like 11.8 standard deviations above the mean, which by my calculations is so rare that if a million people were born every second ever since the Big Bang, there is still only a 2.7% chance that I would have been born by now—I feel very lucky (and honored) to be here.

You would be forgiven for thinking that I had excellent planning ability…but not if you voiced such an opinion out loud, within earshot of my wife, causing her to die of laughter—I would miss her very much. No, it is not by preternatural planning ability that I compete with only the gods in Tower of Hanoi tournaments-in-the-sky. In fact, the first time I tried it, my score was not particularly good. I am not going to say what it was, but the manual said that I ranked somewhere between the average Darwin Award winner and the person who invented English spelling rules. After giving the test some thought, however, I realized that each movement of the discs is mechanically determined by a simple rule. I will not say what the rule is for fear of compromising the validity of the test for more people. The rule is not so simple that you would figure it out while taking the test for the first time, but it is simple enough that once you learn it, you will be surprised how easy the test becomes.

All kidding aside, it is important for the clinician to be mindful of the process by which a child performs well or poorly on a test. For me, the Tower of Hanoi does not measure planning. For others, it might. Edith Kaplan (1988) was extremely creative in her methods of investigating how people performed on cognitive tests. Kaplan-inspired tools such as the WISC-IV Integrated provide more formal methods of assessing strategy use. However, careful observations and even simply asking children how they approached a task (after the tests have been administered according to standard procedures) are often enlightening and can save time during the follow-up testing phase. For example, I once read about an otherwise low-performing boy who scored very well on the WISC-IV Block Design subtest. When asked how he did so well on it, he said that he had the test at home and that he practiced it often. The clinician doubted this very much, but his story turned out to be true! His mother was an employee at a university and saw someone from the Psychology Department throwing outdated WISC-III test kits into the garbage. She was intrigued and took one home for her children to play with.

I once gave the WAIS-III to a woman who responded to the WAIS-III Vocabulary subtest as if it were a free association test. I tried to use standard procedures to encourage her to give definitions to words, but the standard prompts ("Tell me more") just made it worse. Finally, I broke with protocol and said, "These are fabulous answers and I like your creativity. However, I think I did not explain myself very well. If you were to look up this word in the dictionary, what might it say about what the word means?" In the report I noted the break with protocol, but I believe that the score she earned was much more reflective of her Lexical Knowledge than would have been the case had I followed procedures more strictly. I do not wish to be misunderstood, however; I never deviate from standard procedures except when I must. Even then, I conduct additional follow-up testing to make sure that the scores are correct.

This post is an excerpt from:

Schneider, W. J. (2013). Principles of assessment of aptitude and achievement. In D. Saklofske, C. Reynolds, & V. Schwean (Eds.), Oxford handbook of psychological assessment of children and adolescents (pp. 286–330). New York: Oxford University Press.





Monday, January 13, 2014

Why the resistance to statistical innovations? Bridging the communication gap. [feedly]


 
 
Published in Psychological Methods, Vol 18, Iss 4
Why the resistance to statistical innovations? Bridging the communication gap.
While quantitative methodologists advance statistical theory and refine statistical methods, substantive researchers resist adopting many of these statistical innovations. Traditional explanations for this resistance are reviewed, specifically a lack of awareness of statistical developments, the failure of journal editors to mandate change, publish or perish pressures, the unavailability of user friendly software, inadequate education in statistics, and psychological factors. Resistance is reconsidered in light of the complexity of modern statistical methods and a communication gap between substantive researchers and quantitative methodologists. The concept of a Maven is introduced as a means to bridge the communication gap. On the basis of this review and reconsideration, recommendations are made to improve communication of statistical innovations. (PsycINFO Database Record (c) 2014 APA, all rights reserved)




Type I error and statistical power of the Mantel-Haenszel procedure for detecting DIF: A meta-analysis. [feedly]


 
 
Published in Psychological Methods, Vol 18, Iss 4
Type I error and statistical power of the Mantel-Haenszel procedure for detecting DIF: A meta-analysis.
This article presents a meta-analysis of studies investigating the effectiveness of the Mantel-Haenszel (MH) procedure when used to detect differential item functioning (DIF). Studies were located electronically in the main databases, representing the codification of 3,774 different simulation conditions, 1,865 related to Type I error and 1,909 to statistical power. The homogeneity of effect-size distributions was assessed by the Q statistic. The extremely high heterogeneity in both error rates (I² = 94.70) and power (I² = 99.29), due to the fact that numerous studies test the procedure in extreme conditions, means that the main interest of the results lies in explaining the variability in detection rates. One-way analysis of variance was used to determine the effects of each variable on detection rates, showing that the MH test was more effective when purification procedures were used, when the data fitted the Rasch model, when test contamination was below 20%, and with sample sizes above 500. The results imply a series of recommendations for practitioners who wish to study DIF with the MH test. A limitation, one inherent to all meta-analyses, is that not all the possible moderator variables, or the levels of variables, have been explored. This serves to remind us of certain gaps in the scientific literature (i.e., regarding the direction of DIF or variances in ability distribution) and is an aspect that methodologists should consider in future simulation studies. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
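The MH procedure the meta-analysis evaluates aggregates 2×2 (group × correct/incorrect) tables across matched total-score strata. As a minimal sketch of the core quantity (a hypothetical helper with made-up counts, not code from the studies reviewed), the common odds ratio can be computed as:

```python
def mantel_haenszel_odds_ratio(tables):
    """Mantel-Haenszel common odds ratio across K matched score strata.

    Each table is (a, b, c, d): reference-group correct/incorrect,
    focal-group correct/incorrect. A value near 1 suggests no uniform DIF;
    values far from 1 flag the item for differential item functioning.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den


# Two score strata in which focal-group examinees answer correctly less
# often than reference examinees with the same total score:
tables = [(40, 10, 30, 20), (30, 20, 15, 35)]
print(round(mantel_haenszel_odds_ratio(tables), 2))  # 3.08
```

In practice this ratio is accompanied by the MH chi-square test and, as the meta-analysis highlights, by purification steps that iteratively remove flagged items from the matching score.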

A new look at Horn’s parallel analysis with ordinal variables. [feedly]


 
 
Published in Psychological Methods, Vol 18, Iss 4
A new look at Horn's parallel analysis with ordinal variables.
Previous research evaluating the performance of Horn's parallel analysis (PA) factor retention method with ordinal variables has produced unexpected findings. Specifically, PA with Pearson correlations has performed as well as or better than PA with the more theoretically appropriate polychoric correlations. Seeking to clarify these findings, the current study employed a more comprehensive simulation study that included the systematic manipulation of 7 factors related to the data (sample size, factor loading, number of variables per factor, number of factors, factor correlation, number of response categories, and skewness) as well as 3 factors related to the PA method (type of correlation matrix, extraction method, and eigenvalue percentile). The results from the simulation study show that PA with either Pearson or polychoric correlations is particularly sensitive to the sample size, factor loadings, number of variables per factor, and factor correlations. However, whereas PA with polychorics is relatively robust to the skewness of the ordinal variables, PA with Pearson correlations frequently retains difficulty factors and is generally inaccurate with large levels of skewness. In light of these findings, we recommend the use of PA with polychoric correlations for the dimensionality assessment of ordinal-level data. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
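For readers unfamiliar with the method: Horn's PA retains a factor only if its observed eigenvalue exceeds the eigenvalue expected from random data of the same size. A minimal sketch with Pearson correlations and simulated continuous data (the polychoric estimation the authors recommend for ordinal variables requires a specialized routine not shown here):

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis with Pearson correlations.

    Retains factors whose observed correlation-matrix eigenvalues exceed
    the chosen percentile of eigenvalues from random normal data of the
    same shape.
    """
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand_eig = np.empty((n_iter, p))
    for i in range(n_iter):
        rand = rng.standard_normal((n, p))
        rand_eig[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False)))[::-1]
    threshold = np.percentile(rand_eig, percentile, axis=0)
    return int(np.sum(obs_eig > threshold))


# Illustrative data: six variables loading on two latent factors
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
loadings = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
x = f @ loadings.T + 0.5 * rng.standard_normal((500, 6))
print(parallel_analysis(x))  # retains 2 factors with these strong loadings
```

The simulation factors the article manipulates (sample size, loadings, skewness, and so on) map directly onto the inputs of a routine like this one.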




Computing confidence intervals for standardized regression coefficients. [feedly]


 
 
Published in Psychological Methods, Vol 18, Iss 4
Computing confidence intervals for standardized regression coefficients.
With fixed predictors, the standard method (Cohen, Cohen, West, & Aiken, 2003, p. 86; Harris, 2001, p. 80; Hays, 1994, p. 709) for computing confidence intervals (CIs) for standardized regression coefficients fails to account for the sampling variability of the criterion standard deviation. With random predictors, this method also fails to account for the sampling variability of the predictor standard deviations. Nevertheless, under some conditions the standard method will produce CIs with accurate coverage rates. To delineate these conditions, we used a Monte Carlo simulation to compute empirical CI coverage rates in samples drawn from 36 populations with a wide range of data characteristics. We also computed the empirical CI coverage rates for 4 alternative methods that have been discussed in the literature: noncentrality interval estimation, the delta method, the percentile bootstrap, and the bias-corrected and accelerated bootstrap. Our results showed that for many data-parameter configurations—for example, sample size, predictor correlations, coefficient of determination (R²), orientation of β with respect to the eigenvectors of the predictor correlation matrix, RX—the standard method produced coverage rates that were close to their expected values. However, when population R² was large and when β approached the last eigenvector of RX, then the standard method coverage rates were frequently below the nominal rate (sometimes by a considerable amount). In these conditions, the delta method and the 2 bootstrap procedures were consistently accurate. Results using noncentrality interval estimation were inconsistent. In light of these findings, we recommend that researchers use the delta method to evaluate the sampling variability of standardized regression coefficients. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
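The percentile bootstrap evaluated in the article is simple to implement: resample cases with replacement and re-standardize within each resample, so the sampling variability of the predictor and criterion standard deviations (which the standard method ignores) is propagated into the interval. A minimal sketch with illustrative, made-up data:

```python
import numpy as np

def standardized_betas(X, y):
    """OLS coefficients after z-scoring predictors and criterion."""
    Xz = (X - X.mean(0)) / X.std(0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    return np.linalg.lstsq(Xz, yz, rcond=None)[0]

def percentile_bootstrap_ci(X, y, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap CIs for standardized regression coefficients.

    Cases are resampled with replacement and re-standardized within each
    resample; the interval is the empirical (alpha/2, 1 - alpha/2)
    quantiles of the bootstrap distribution.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    boots = np.array([standardized_betas(X[idx], y[idx])
                      for idx in rng.integers(0, n, (n_boot, n))])
    return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)


rng = np.random.default_rng(2)
X = rng.standard_normal((200, 2))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.standard_normal(200)
lo, hi = percentile_bootstrap_ci(X, y)
print(lo, hi)  # per-coefficient 95% interval endpoints
```

The delta method the authors ultimately recommend replaces the resampling loop with an analytic variance expression, but the bootstrap version above makes the source of the extra variability explicit.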




Sharing The Structure of Cognition: Attentional Episodes in Mind and Brain via BrowZine

The Structure of Cognition: Attentional Episodes in Mind and Brain
Duncan, John
Neuron, Vol. 80 Issue 1 – 2013: 35 - 50

10.1016/j.neuron.2013.09.015

University of Minnesota Users:
https://www.lib.umn.edu/log.phtml?url=http://www.sciencedirect.com/science/article/pii/S0896627313008465

Non-University of Minnesota Users: (Full text may not be available)
http://www.sciencedirect.com/science/article/pii/S0896627313008465

The Death Penalty and Intellectual Disability: AAIDD forthcoming book publication--TOC with authors and chapter titles

    

 The Death Penalty and Intellectual Disability: A Guide (1/3/14)*


* Note: the above title is as registered by AAIDD with the Library of Congress and as presented on their website. The working title of the task force had been: Determining Intellectual Disability in the Courts: Focus on Capital Cases
 

As described at the AAIDD publications page:
 In the 2002 landmark decision Atkins v. Virginia 536 U.S. 304, the Supreme Court of the United States ruled that executing a person with intellectual disability is a violation of the Eighth Amendment of the U.S. Constitution, which prohibits “cruel and unusual punishment,” but left states to determine their own criteria for intellectual disability. AAIDD has always advocated against the death penalty for people with intellectual disability and has long provided amicus curiae briefs in Supreme Court cases. Thus, in this comprehensive new book published by AAIDD, notable authors in the field of intellectual disability discuss all aspects of the issues, with a particular focus on foundational considerations, assessment factors and issues, and professional concerns in Atkins assessments. 



                     

Table of contents (chapter: title, authors):

Preface (Ed Polloway)
Foreword (Honorable Kevin Foley)

Part 1: Foundational Considerations
1. Guide for Persons with Intellectual Disability and Capital Cases: An Introduction (Edward A. Polloway, James R. Patton, & J. David Smith)
2. Intellectual Disability: A Review of its Definitions and Diagnostic Criteria (Marc J. Tassé)
3. Mild Intellectual Disability (Gary Siperstein & Melissa Collins)
4. Analysis of Atkins Cases (John Blume & Karen Salekin)

Part 2: Assessment Considerations

A. General Topics
5. Concepts of Measurement (Keith Widaman)
6. Age of Onset and the Developmental Period Criterion (Stephen Greenspan, George Woods, & Harvey Switzky)

B. Intellectual Functioning
7. Intellectual Functioning: Conceptual Issues (Kevin McGrew)
8. Consideration in the Selection and Analysis of IQ Tests (Dale Watson)
9. Variability of IQ Scores (Stephen Greenspan & J. Gregory Olley)
10. Norm Obsolescence: The Flynn Effect (Kevin McGrew)

C. Adaptive Behavior
11. Evolving Concepts of Adaptive Behavior (Stephen Greenspan)
12. Selection of Appropriate Adaptive Behavior Instruments (J. Gregory Olley)
13. Challenges in Assessment of Adaptive Behavior in Capital Cases (Caroline Everington, Gilbert S. Macvaugh III, Karen Salekin, & Timothy J. Derning)
14. Time at Which Disability Must Be Shown in Atkins Cases (J. Gregory Olley)
15. Briseño Factors (Stephen Greenspan)

Part 3: Related Topics
16. Cultural Factors in Assessment (Richard Ruth)
17. Assessment Issues: Competence to Waive Miranda Rights and Competence to Stand Trial (Karen Salekin & Caroline Everington)
18. Considerations of Retrospective Assessment and Malingering (Denis Keyes & David Freedman)
19. Intellectual Disability, Comorbid Disorders and Differential Diagnosis (George Woods, David Freedman, & Timothy J. Derning)
20. School and Other Key Records (James Patton)
21. Relevance of Other Assessments in Atkins Evaluations (Karen Salekin, Gilbert S. Macvaugh III, & Timothy J. Derning)
22. Professional Issues in Atkins Assessments (Gilbert S. Macvaugh III, Mark D. Cunningham, & Marc J. Tassé)