Wednesday, December 30, 2009

WMF Human Cognitive Abilities (HCA) project update: 12-30-09 -- FREE data for secondary analysis!

The free on-line WMF Human Cognitive Abilities (HCA) archive project was updated today. An overview of the project, with a direct link to the archive, can be found at the Woodcock-Muñoz Foundation web page (click on "Current Woodcock-Muñoz Foundation Human Cognitive Abilities Archive"). Also, an on-line PPT copy of a poster presentation I made at the December 2008 ISIR conference regarding this project can be found by clicking here.

After a period of inactivity (due to being swamped), I am pleased to announce the following additions and revisions.

Currently, 115 of Jack Carroll's original correlation matrices (in Excel file format) are available at the archive. These correlation files can be downloaded for free and used for secondary data analysis. Of these 115 datasets, 75 also include the original manuscript, which provides descriptive information regarding the variables in the correlation matrices. Finally, 59 of the 115 include the correlation matrix, the original publication, and Carroll's official EFA results.  We continue to work hard to locate copies of old articles, missing files, etc.
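For anyone who wants to tinker with one of the archived matrices right away, here is a minimal Python sketch of a secondary analysis. The file name and sheet layout (a square correlation matrix with test names in the first row and first column) are hypothetical, and the quick unrotated principal-components extraction shown is only a first look, not Carroll's EFA/Schmid-Leiman procedure.

```python
# Minimal sketch of a secondary analysis of one archived matrix.
# Assumptions: the file name and layout are hypothetical -- a square
# correlation matrix with test names in the first row and first column.
import numpy as np
import pandas as pd

R = pd.read_excel("DETTOO_corr.xlsx", index_col=0)    # hypothetical file name
R = R.to_numpy(dtype=float)

# Unrotated principal-components loadings: a quick first look,
# not Carroll's full exploratory factor analysis.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]                     # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = int(np.sum(eigvals > 1.0))                        # Kaiser criterion as a rough guide
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
print(f"{k} components retained")
print(np.round(loadings, 2))
```

From there it is straightforward to substitute a proper exploratory factor analysis (e.g., principal axis extraction with rotation) and compare the results with Carroll's published solutions.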

Below are the additions and revisions to the archive for the current update.  The abbreviations used below are those used by Jack Carroll  in his 1993 book. A master index of these abbreviations and associated references can be found at the following link.

The following are new correlation matrices that are available:
  • DETTOO
  • DEVRO1
  • DUNC11
  • SATT11
  • SAUN03
  • SAUN11
  • SCHE11
  • SCHI01
  • SCHI02
  • SCHI11
  • SCHI12
  • SCHU01
  • SCHU02
The original manuscript (dissertation) for FAUL11 has now been added to that dataset branch.

Errors in the variable names for REIN01, REIN02, REIN03, and REIN04 Excel correlation files have been fixed.

Happy holidays.



Neuroethics and Law round up

As usual, a plethora of interesting story links

http://kolber.typepad.com/ethics_law_blog/2009/12/pebs-neuroethics-roundup-from-jhu-guest-blogger-2.html


Tuesday, December 29, 2009

iPost: The Edison Brainmeter

Thanks to MIND HACKS for an interesting historical tidbit.

http://www.mindhacks.com/blog/2009/12/the_edison_brainmete.html


Monday, December 28, 2009

Dissertation Dish: Woodcock-Johnson and KABC profile research


Validation of neuropsychological subtypes of learning disabilities by Hiller, Todd R., Ph.D., Ball State University, 2009, 99 pages; AAT 3379243

Abstract
The present study used archival data from individuals given the Woodcock-Johnson Tests of Cognitive Abilities, 3rd Edition, and the Woodcock-Johnson Tests of Achievement, 3rd Edition, in an effort to define subtypes of LD. The sample included 526 subjects aged 6 to 18 years who had a diagnosis of some type of LD. Of these, 22.7% had an additional diagnosis other than LD. It was expected that subtypes similar to Rourke's classification of his nonverbal learning disorder and his basic phonological processing disorder would be found.

Portions of the battery were used in a latent class cluster analysis in order to determine group patterns of strengths and weaknesses. Using the Lo-Mendell-Rubin test, a three-class solution was selected. These three groups showed no evidence of patterns of strengths and weaknesses. The groups were best described as a high, middle, and low group, in that the high group had scores that were universally higher than those of the middle group, which in turn had scores universally higher than those of the low group. The rates of individuals with comorbid disorders varied greatly between the clusters. The high group had the lowest comorbidity rate in the study, at only 6.8%, compared to 26.4% in the middle group and 44.8% in the low group.

These results suggest that the clusters found differ more in severity than in type of LD. Individuals with LD and comorbid disorders are more likely to have more severe deficits.
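The dissertation's latent class cluster analysis and Lo-Mendell-Rubin test were run in specialized software; as a rough stand-in, the sketch below shows what class enumeration looks like with scikit-learn Gaussian mixtures and BIC. The data file and score columns are hypothetical, and BIC is used only as a simple substitute for the LMR test.

```python
# Rough sketch of class enumeration on cognitive/achievement scores.
# BIC is used as a stand-in for the Lo-Mendell-Rubin test reported in the
# dissertation; the data frame and its columns are hypothetical.
import pandas as pd
from sklearn.mixture import GaussianMixture

scores = pd.read_csv("wj3_scores.csv")                # hypothetical archival data
X = scores[["decoding", "math_calc", "comprehension"]].to_numpy()

models = {k: GaussianMixture(n_components=k, n_init=10, random_state=0).fit(X)
          for k in range(1, 6)}
bics = {k: m.bic(X) for k, m in models.items()}       # lower BIC = preferred model
best_k = min(bics, key=bics.get)
print("BIC by class count:", bics)

labels = models[best_k].predict(X)                    # most likely class membership
print(pd.Series(labels).value_counts())
```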


Profile analysis of the Kaufman Assessment Battery for Children, Second Edition with African American and Caucasian preschool children by Dale, Brittany Ann, Ph.D., Ball State University, 2009, 130 pages; AAT 3379238

Abstract
The purpose of the present study was to determine if African American and Caucasian preschool children displayed similar patterns of performance among the Cattell-Horn-Carroll (CHC) factors measured by the Kaufman Assessment Battery for Children, Second Edition (KABC-II). Specifically, a profile analysis was conducted to determine if African Americans and Caucasians displayed the same pattern of highs and lows and scored at the same level on the KABC-II composites and subtests. Forty-nine African American (mean age = 59.14 months) and 49 Caucasian (mean age = 59.39 months) preschool children from a Midwestern city were included in the study and were matched on age, sex, and level of parental education. Results of a profile analysis found that African American and Caucasian preschool children had a similar pattern of highs and lows and performed at the same level on the CHC broad abilities as measured by the KABC-II. Comparison of the overall mean IQ indicated no significant differences between the two groups. The overall mean difference between groups was 1.47 points, the smallest gap seen in the literature. This finding was inconsistent with previous research indicating a one standard deviation difference in IQ between African Americans and Caucasians. A profile analysis of the KABC-II subtests found that the African American and Caucasian groups performed at an overall similar level, but did not show the same pattern of highs and lows. Specifically, Caucasians scored significantly higher than African Americans on the Expressive Vocabulary subtest, which measures the CHC narrow ability of Lexical Knowledge.

Results of this study supported the KABC-II's authors' recommendation to make interpretations at the composite level. When developing hypotheses of an individual's strengths and weaknesses in narrow abilities, clinicians should be cautious when interpreting the Expressive Vocabulary subtest with African Americans. Overall, results of this study supported the use of the KABC-II with African American preschool children. When making assessment decisions, clinicians can be more confident in an unbiased assessment with the KABC-II.

Future research could further explore the CHC narrow abilities in ethnically diverse populations. Additionally, more research should be conducted with other measures of cognitive ability designed to adhere to the CHC theory, and the appropriateness of those tests with an African American population. Furthermore, future research with the KABC-II could determine if the results of the present study were replicated in other age groups.



Wednesday, December 23, 2009

IQs Corner Recent Literature of Interest 12-23-09

This week's "recent literature of interest" is now available. Click here to view or download.

Information regarding this feature, its basis, and the reasons for the types of references included in each weekly installment can be found in a prior post.



CHC theory of intelligence and its impact on contemporary intelligence test batteries

I frequently reference the CHC (Cattell-Horn-Carroll) theory of intelligence and the impact it has had on contemporary intelligence test development.  I realize that not everyone has the time to rummage through all the blog posts I've made regarding CHC theory.  Thus, today I'm posting a brief summary of CHC theory and its impact on applied intelligence test development.  The summary includes hyperlinks to key references, terms, and other readings (for more in-depth information).

The report can be viewed as a web page or can be downloaded or viewed as a PDF file.

Enjoy


Research Byte 12-23-09: Shared and unshared genetic factors in timed and untimed reading and math abilities


A factorial analysis of timed and untimed measures of mathematics and reading abilities in school aged twins (Sara A. Hart, Stephen A. Petrill, and Lee A. Thompson), Learning and Individual Differences, In Press, Corrected Proof, Available online 27 October 2009.

Abstract
The present study examined the phenotypic and genetic relationship between fluency and non-fluency-based measures of reading and mathematics performance. Participants were drawn from the Western Reserve Reading and Math Project, an ongoing longitudinal twin project of same-sex MZ and DZ twins from Ohio. The present analyses are based on tester-administered measures available from 228 twin pairs (age M = 9.86 years). Measurement models suggested that four factors represent the data, namely Decoding, Fluency, Comprehension, and Math. Subsequent quantitative genetic analyses of these latent factors suggested that a single genetic factor accounted for the covariance among these four latent factors. However, there were also unique genetic effects on Fluency and Math, independent from the common genetic factor. Thus, although there is a significant genetic overlap among different reading and math skills, there may be independent genetic sources of variation related to measures of decoding fluency and mathematics.




Comments extracted from article
Results suggested a four-factor model including reading Decoding, reading Fluency, reading Comprehension, and Math. Further quantitative genetic analysis suggested that a common genetic factor is important to the covariance among phenotypically distinct latent factors (e.g., Plomin & Kovas, 2005). However, Fluency and Math factors were also influenced by unique genetic influences, independent from the general genetic factor.
Interestingly, the two factors with unique genetic influences are the only ones to contain measures of timed performance, or fluency. Previous work has suggested that there are large and significant effects due to heritability on measures of reading fluency (h² = .65–.67; Harlaar, Spinath, Dale, & Plomin, 2005) and mathematics fluency (h² = .63; Hart et al., 2009). It is possible that the fluency components in each of these factors are important for explaining the unique genetic effects on both.
Notably, there is no genetic overlap between the factors which contain fluency-based measures, outside of the general genetic overlap among all the latent factors. That, as well as the comparison of phenotypic models 4 and 5 in Table 3, suggests that the genetic influences of reading fluency are not the same as the genetic influences of mathematics fluency, although both are strongly independently influenced by genes.
It is also interesting to note the shared environmental overlap among all the factors. Instruction in this age-group is typically for the skills represented by these factors (e.g., Chall, 1983). This would serve to influence these processes through the shared environment, especially given that for most students in the early elementary years, academic skill exposure and learning are a function of what is taught in school. Moreover, in the case of twins, they also share the same rearing environment. This overlap is of note as it is shared between all mathematics and reading factors, suggesting that whether it is school- and/or family-level influences, there is a common environmental etiology underlying academic difficulties. This can have ramifications in how academic skill-based interventions are conceptualized.
The math literature sometimes separates math into components of computation and problem solving. Our findings in the current study and others (Petrill & Hart, 2009) suggest that the data were best represented by one latent factor. However, all measures of math are based on the Woodcock–Johnson, which may be serving to make them more similar.
Related to this issue, although the shared environmental influences on math are higher than those on reading, this difference cannot be directly tested statistically.


Authors' conclusion
In sum, the results suggest that there are some common genetic and environmental factors that connect reading and mathematics performance. At the same time, there also appear to be independent genetic effects for reading fluency and for mathematics. Although requiring further study, these findings may suggest that the overlap in reading and mathematics performance may be due to both genes and the shared environment, whereas the discrepancy between math and reading may be genetically mediated. This has ramifications for our understanding of math and reading difficulties. Independent genetic effects may be serving to make math disability and reading disability distinct, and differentially prevalent. On the other hand, to the extent that they are comorbid in some children, common genes and environments may be affecting the outcomes.
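For readers less familiar with how twin data produce the h² estimates quoted in the comments above, Falconer's classic approximation gives the basic intuition. The MZ/DZ correlations below are made up for illustration and are not taken from the article, which used model-based estimation on latent factors.

```python
# Falconer's approximation: how MZ and DZ twin correlations translate into
# rough heritability (h2), shared-environment (c2), and nonshared (e2)
# estimates.  The correlations used here are illustrative only.
def falconer(r_mz, r_dz):
    h2 = 2 * (r_mz - r_dz)      # additive genetic variance
    c2 = 2 * r_dz - r_mz        # shared environment
    e2 = 1 - r_mz               # nonshared environment plus measurement error
    return h2, c2, e2

h2, c2, e2 = falconer(r_mz=0.80, r_dz=0.48)
print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")   # h2 = 0.64, c2 = 0.16, e2 = 0.20
```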



Tuesday, December 22, 2009

State special education definitions of MR/ID

Thanks to Randy Floyd for sending this article to me.  The article summarizes state eligibility guidelines for MR/ID as promulgated by state education agencies--thus covering the school-age population of students with potential MR/ID and special education services.  State law governing the definition and criteria for Atkins decisions does not directly correspond with state special education rules, laws, and regulations.

Bergeron, R., Floyd, R., & Shands, E.  (2008).  States’ Eligibility Guidelines for Mental Retardation: An Update and Consideration of Part Scores and Unreliability of IQs.  Education and Training in Developmental Disabilities, 43(1), 123–131. (click here to view)

Abstract
Mental retardation (MR) has traditionally been defined as a disorder in intellectual and adaptive functioning beginning in the developmental period. Guided by a federal definition of MR described in the Individuals with Disabilities Education Act, it is the responsibility of each of the United States to describe eligibility guidelines for special education services. The purpose of this study was to examine eligibility guidelines for MR for the 50 states and the District of Columbia. This study examined the terms used to describe MR, the use of classification levels, the cutoff scores, and the adaptive behavior considerations for each state. In addition, this study examined guidelines for consideration of intelligence test part scores and consideration of the unreliability of IQs through consideration of the standard error of measurement (SEM) or an IQ range. As found in previous studies, results revealed great variation in the specific eligibility guidelines for MR from state to state. The greatest variation appeared to be across the adaptive behavior considerations. Approximately 20% of states (10) recommend consideration of intelligence test part scores, and approximately 39% of states (20) recommend attention to unreliability of IQs through consideration of the SEM or an IQ range.



Sunday, December 20, 2009

iPost: What is metaethics and the law?

Interesting explanation of metaethics at link below.

http://lsolum.typepad.com/legaltheory/2009/12/legal-theory-lexicon-metaethics.html


iPost: Neuroethics round up

Great neuroscience round up at NEUROETHICS LAW blog http://bit.ly/7fY3QH



Saturday, December 19, 2009

Research does not support learning styles

From SCIENCE DAILY

http://www.sciencedaily.com/releases/2009/12/091216162356.htm



Friday, December 18, 2009

Research bytes 12-18-09: Gv research on spatial text processing, visual-spatial working memory, and understanding maps

Meneghetti, C., Gyselinck, V., Pazzaglia, F., & DeBeni, R. (2009). Individual differences in spatial text processing: High spatial ability can compensate for spatial working memory interference. Learning and Individual Differences, 19(4), 577-589.

The present study investigates the relation between spatial ability and visuo-spatial and verbal working memory in spatial text processing. In two experiments, participants listened to a spatial text (Experiments 1 and 2) and a non-spatial text (Experiment 1), at the same time performing a spatial or a verbal concurrent task, or no secondary task. To understand how individuals who differ in spatial ability process spatial text during dual task performance, spatial individual differences were analyzed. The tasks administered were the Vandenberg and Kuse [Vandenberg, S. G., & Kuse, A. R. (1978). Mental rotation, a group test of three-dimensional spatial visualization. Perceptual and Motor Skills, 47, 599-604.] mental rotation test (MRT) and a reading comprehension task (RCT). Individuals with high (HMR) and low (LMR) mental rotation ability differed in MRT scores but had similar RCT performance. Results showed that the HMR group, in contrast with LMR counterparts, preserved good spatial text recall even when a spatial concurrent task was performed; however, Experiment 2 revealed a modification of spatial concurrent task performance in the LMR as well as the HMR group. Overall, results suggest that HMR individuals have more spatial resources than LMR individuals, allowing them to compensate for spatial working memory interference, but only to a limited extent, given that the processing of spatial information is still mediated by VSWM.

Liben, L. S. (2009). The Road to Understanding Maps. Current Directions in Psychological Science, 18(6), 310-315.

Children and even some adults struggle to understand and use maps. In the symbolic realm, users must appreciate that the marks on a surface stand for environments and must understand how to interpret individual symbols. In the spatial realm, users must understand how representational space is used to depict environmental space. To do so, they must understand the consequences of cartographic decisions about the map's viewing distance, viewing angle, viewing azimuth, and geometric projection. Research identifies age-linked progressions in symbolic and spatial map understanding that are linked to normative representational and spatial development, and reveals striking individual differences. Current work focuses on identifying experiences associated with better map understanding. New technologies for acquiring, manipulating, analyzing, and displaying geo-referenced data challenge users and researchers alike.



iPost: FYI Five laws of human nature

From the NEW SCIENTIST.



Tuesday, December 15, 2009

Small vs. large-scale Gv abilities: Implications for CHC taxonomy and measurement?

Yet another study suggests that those of us who tend to worship at the altar of the CHC taxonomy (McGrew, 2005; McGrew, 2009) need to heed the warnings of the primary architects of the model (Horn, Carroll), who cautioned in their writings that the taxonomy is incomplete and will evolve over time.  Evidence for the correctness of this admonition comes from yet another study investigating small-scale Gv (e.g., SR, Vz) and large-scale Gv (e.g., environmental navigation).  Large-scale Gv is missing from the current consensus CHC taxonomy.  The study below, which found that training on small-scale Gv did not generalize to changes in large-scale Gv in children (while prior research has suggested that the two are linked in adults), is the third study I've posted that has made this small vs. large-scale Gv distinction.  See prior posts for my comments about the CHC taxonomy and the implications for potential test development (i.e., the development of measures of large-scale navigational abilities).

  • Jansen, P. (2009). The dissociation of small- and large-scale spatial abilities in school-age children. Perceptual and Motor Skills, 109(2), 357-361.

Abstract
This experiment with school-age children was designed to assess the extent to which training in a “small-scale space”—so-called manual rotation training—can improve performance in a “large-scale space.” In a preliminary test, 72 9- and 10-yr.-olds completed a direction estimation test. Half of the children then completed manual rotation training, while the other half played a nonspatial computer game. All of the children subsequently performed the direction estimation test again. Performance in direction estimation did not differ between the preliminary test and the posttest. Thus, in contrast to the parallel study with adults, the “small-scale spatial ability” was not associated with “large-scale ability.”


Status of creativity research: Annual Review of Psychology (2010) summary

Monday, December 14, 2009

AAIDD intellectual disability manual (11th edition): Intelligence component -1 standard deviation below average? Part 1 of a series of posts.




“For purposes of diagnosis, intellectual functioning is currently best conceptualized and captured by a general factor of intelligence.  Intelligence is a general mental ability.  It includes reasoning, planning, solving problems, thinking abstractly, comprehending simple ideas, learning quickly, and learning from experience.  The “significant limitations in intellectual functioning” criterion for a diagnosis of intellectual disability is an IQ score that is approximately two standard deviations below the mean, considering the standard error of measurement for the specific instruments used and the instruments strengths and limitations.”  (AAIDD, 2010, p. 31)
[Note - this is a cross-blog post originally posted to IQs Corner sister blog (ICDP) yesterday.]

It has been nearly 50 years since the first official American Association on Intellectual and Developmental Disabilities (AAIDD; previously AAMR and AAMD) manual (1961) for defining intellectual disability (ID; previously mental retardation, MR; Greenspan & Switzky, 2006a), and the latest version (11th edition; aka the “green book”) has now been published (AAIDD, 2010).  The ongoing refinement of the ID manual has taken many twists and turns, often producing internal debates within the ID community (see Greenspan & Switzky, 2006b; Greenspan, 1997, 2006).  Despite the debates, the ongoing evolution of each successive manual has been guided by the goal of describing “best practices” for defining, classifying, and providing services for individuals with ID.  The green book continues this tradition.  One wonders whether the latest iteration of the official AAIDD manual will resolve many of the ongoing questions and debates (see Greenspan & Switzky, 2006a) or whether it will generate new controversies and fissures in the profession regarding the definition and classification of ID.

With great anticipation, I recently received my copy of the AAIDD green book.  Although my research interests have spanned (at different times in my career) both theoretical and assessment issues in the domains of personal competence, adaptive behavior, and intelligence, my most recent research and writings have focused primarily on intelligence theory and testing.  Thus, I immediately turned to Chapter 4 of the manual (Intellectual Functioning and Its Assessment).  The questions in my mind were:  Is it up-to-date?  Did it incorporate state-of-the-art research on the evolving taxonomy of human cognitive abilities?  Did it provide guidance to practitioners regarding critical intelligence testing issues?

The existence of this blog post (to introduce a series of future blog posts) reflects my obvious answers to the above questions.  To be frank, Chapter 4 is a disappointment (of a magnitude of at least -1 standard deviation below expectations).  I’ve waited two months since first reading the chapter before drafting this introductory post.  I needed time to reflect on whether my initial knee-jerk reactions were accurate or possibly related to potential conflicts of interest (see “full disclosure” note at the end of this post).  I also needed to decide if I had the fortitude to take a controversial public stance regarding the AAIDD chapter on intellectual functioning.  With each passing week my decision was made easier as I read more and more psychometrically and professionally flawed Atkins MR/ID death penalty court decisions (many of these decisions, along with some of my comments, can be found at www.atkinsmrdeathpenalty.com).  I finally decided I had a professional responsibility to share my analysis and comments.

I believe, given the adversarial nature of Atkins court proceedings (see Greenspan & Switzky, 2006c), that certain lawyers and courts might use (or misinterpret) the contents of Chapter 4 (particularly the focus on "general intelligence" and thus a single full scale IQ score and resultant "bright line" cutoff criteria) to circumvent the rights of individuals with ID to fair, equitable treatment and equal protection under the law.  And as others have noted, these are literally life-or-death issues.  Thus, I’ve decided to publicly post my comments, criticisms, and questions in hopes of stimulating debate and dialogue.  I will personally invite members of the AAIDD Ad Hoc Committee on Terminology and Classification to provide guest response posts to my criticisms if they feel so compelled (which I will post “as is” as guest posts to the ICDP blog).

Before sharing my concerns regarding Chapter 4, I acknowledge and recognize the hard work of the dedicated AAIDD Ad Hoc committee members.  Reaching professional committee-based consensus on the definition and classification of ID has always been a challenge (“A committee is a cul-de-sac down which ideas are lured and then quietly strangled,” Sir Barnett Cocks, in New Scientist, 1973). The committee members obviously spent considerable time and effort grappling with complex and conflicting issues.  I recognize that, by their nature, multiple-member, multiple-viewpoint committees are constraint-driven consensus mechanisms.  Such constraints (political, economic, resources, ethical, possible conflicts of interest, etc.) will obviously not allow for the production of the “perfect” manual.  By definition, constraint-driven design typically results in “satisficing” (adequate and satisfactory) outcomes—not perfect outcomes (Simon, 2003).

Also, it would be professionally inappropriate if I mentioned only criticisms of select sections of Chapter 4.  On the positive side of the ledger, I am pleased that Chapter 4 addresses a number of important intelligence testing issues such as measurement error (SEM), test fairness, the Flynn Effect, comparability of scores from different IQ tests, practice effects, extreme scores, examiner credentials, and the ever complex and controversial use of cutoff scores.

Chapter 4 of the manual spans slightly more than 11 pages and covers the operational definition of intelligence (single general ability vs multiple intelligences), limitations in the operational definition, and challenges and guidelines regarding the use of IQ scores.  Obviously there will be some negative reactions given the breadth of topics covered in a mere 11+ pages (page length was probably one of the design constraints).  Despite this acknowledged constraint, my professional evaluation of the intellectual functioning component chapter finds it seriously wanting in four primary areas:
  1. A failure to reflect state-of-the-art intelligence theory and assessment research
  2. A misunderstanding and inaccurate description of the major intelligence theories
  3. An apparent lack of scientific rigor in the section on the nature and definition of intelligence, as evidenced by little substantive revision of the content (and minimal reference updating or “refreshing”) from the 2002 manual to the same section in the 2010 manual. The result is a failure to incorporate significant advances and the emerging consensus regarding the nature of psychometrically based intelligence theories, theories that have historically provided the foundation for the technically sound intelligence batteries used in ID diagnosis and classification
  4. The elimination of the 2002 section that reviewed commonly available intelligence test batteries. 

These four areas will be the foundation of my future posts in this series, which in turn may spin off additional specific or splinter issue-based posts and recommendations.

In conclusion, as written, I believe that the AAIDD operational definition of intelligence has the potential to misinform professionals working in the field of ID.  More importantly, given that the AAIDD manual is no longer only a guide for professionals and agencies working in clinical settings, and that each word, sentence, and paragraph of the manual is now parsed in adversarial Atkins ID death penalty deliberations (Greenspan & Switzky, 2006c), the deficiencies in the AAIDD operational definition of intelligence have potentially very serious ramifications.

I know that I am often a naïve idealist.  Ideally I hope that my forthcoming critical comments, combined with a spirited back-and-forth dialogue, will produce productive scholarly discourse, discourse that may result in AAIDD upgrading/revising their current written statement regarding the first prong of an ID diagnosis—intellectual functioning (Chapter 4) via new position papers or journal articles, web-based clarifications, and/or the publication of more specific professional guidelines.

Stay tuned.  Hopefully my first critique post will be completed within a week.

  • American Association on Intellectual and Developmental Disabilities (2010).  Intellectual disability: Definition, classification, and systems of supports (11th ed.).  Washington, DC: Author.
  • Greenspan, S. (1997).  Dead manual walking?  Why the 1992 AAMR definition needs redoing.  Education and Training in Mental Retardation and Developmental Disabilities, 32, 179-190.
  • Greenspan, S. (2006).  Mental retardation in the real world: Why the AAMR definition is not there yet.  In S. Greenspan & H. Switzky (Eds.), What is mental retardation? Ideas for an evolving disability in the 21st Century.  Washington, DC: American Association on Mental Retardation.
  • Greenspan, S., & Switzky, H. (2006a).  Forty-four years of AAMR manuals.  In S. Greenspan & H. Switzky (Eds.), What is mental retardation? Ideas for an evolving disability in the 21st Century.  Washington, DC: American Association on Mental Retardation.
  • Greenspan, S., & Switzky, H. (Eds.). (2006b).  What is mental retardation? Ideas for an evolving disability in the 21st Century.  Washington, DC: American Association on Mental Retardation.
  • Greenspan, S., & Switzky, H. (2006c).  Lessons from the Atkins decision for the next AAMR manual.  In S. Greenspan & H. Switzky (Eds.), What is mental retardation? Ideas for an evolving disability in the 21st Century.  Washington, DC: American Association on Mental Retardation.
  • Simon, H. A. (2003).  Nobel Prize in Economic Sciences, 1978.  American Psychologist, 58(9), 753-755.
Full disclosure statement:  I, Kevin McGrew, am a coauthor of the Woodcock-Johnson III Battery, a battery that includes an intelligence (IQ) component that is often used in the assessment and classification of individuals with ID.  Thus, I have a potential monetary conflict of interest regarding policies and guidelines related to the use of intelligence tests.  Furthermore, all  comments in this blog post, and future blog posts, reflect my individual professional opinion and do not necessarily reflect the opinions of the WJ III author team or the publisher of the WJ III (Riverside Publishing).




New IAP Applied Psychometrics 101 Report: IQ scores and SEM



A new IAP Applied Psychometrics 101 report (#5) is now available.  The title and abstract of the report are below.  The report can be downloaded by clicking here.

Applied Psychometrics 101 #4:  The Standard Error of Measurement (SEM):  An Explanation and Facts for "Fact Finders" in Atkins MR/ID death penalty proceedings.

Abstract

The standard error of measurement (SEM) is a professionally accepted and scientifically based measurement concept that allows users of psychological test scores to account for the known degree of imprecision in those scores.  Atkins MR/ID cases almost always involve standardized psychological testing in the domains of intelligence (IQ tests) and adaptive behavior (AB).  Scores from IQ and AB measures are fallible—not perfectly reliable.  This report provides an easy-to-understand explanation of the psychometric concept of SEM, augmented by an example based on real-world data.  The report concludes with 8 SEM facts that “fact finders” should understand and internalize when evaluating psychological test data during legal proceedings--Atkins MR/ID death penalty proceedings in particular.
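Below is a minimal sketch of the arithmetic the report walks through, assuming the typical IQ metric (SD = 15) and an illustrative reliability value; the actual reliabilities, of course, depend on the specific test and score.

```python
# The standard error of measurement (SEM) and the resulting 95% confidence
# band around an obtained IQ score.  The reliability value is illustrative.
import math

def sem(sd, reliability):
    return sd * math.sqrt(1 - reliability)

sd, rxx = 15.0, 0.95                  # IQ metric; rxx is an illustrative reliability
obtained = 70
s = sem(sd, rxx)                      # about 3.35 points
lo, hi = obtained - 1.96 * s, obtained + 1.96 * s
print(f"SEM = {s:.2f}; 95% band around an obtained IQ of {obtained}: {lo:.1f} to {hi:.1f}")
```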
Here is a visual treat/tease from the report:


All prior IAP AP101 reports can be accessed via the Applied Psychometrics 101 (AP101) Reports section of the blog--on the blog sidebar.



Friday, December 11, 2009

IQs Corner Recent Literature of Interest 12-07-09

This week's "recent literature of interest" is now available. Click here to view or download.

Information regarding this feature, its basis, and the reasons for the types of references included in each weekly installment can be found in a prior post.



Thursday, December 10, 2009

iPost: Mental Earworms

Thanks to MIND HACKS

http://www.mindhacks.com/blog/2009/12/cant_get_you_out_of.html



Tuesday, December 08, 2009

Quantoids corner: Current issue of Psychological Methods


Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and quantitative and qualitative analysis to the psychological community; its further purpose is to promote effective communication about related substantive and methodological issues.

A general approach for estimating scale score reliability for panel survey data.

by Biemer, Paul P.; Christ, Sharon L.; Wiesen, Christopher A.

Scale score measures are ubiquitous in the psychological literature and can be used as both dependent and independent variables in data analysis. Poor reliability of scale score measures leads to inflated standard errors and/or biased estimates, particularly in multivariate analysis. Reliability estimation is usually an integral step to assess data quality in the analysis of scale score data. Cronbach's α is a widely used indicator of reliability but, due to its rather strong assumptions, can be a poor estimator (L. J. Cronbach, 1951). For longitudinal data, an alternative approach is the simplex method; however, it too requires assumptions that may not hold in practice. One effective approach is an alternative estimator of reliability that relaxes the assumptions of both Cronbach's α and the simplex estimator and thus generalizes both estimators. Using data from a large-scale panel survey, the benefits of the statistical properties of this estimator are investigated, and its use is illustrated and compared with the more traditional estimators of reliability. (PsycINFO Database Record (c) 2009 APA, all rights reserved)
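For orientation, the sketch below shows what the Cronbach's α estimator discussed in the abstract actually computes, using simulated item-level data; it is the classic formula only, not the panel-survey estimator the authors propose.

```python
# Cronbach's alpha computed from a persons-by-items score matrix.
# The item data are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=1.0, size=(200, 6))   # 6 roughly parallel items

def cronbach_alpha(x):
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"alpha = {cronbach_alpha(items):.2f}")
```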


Determining the statistical significance of relative weights.

by Tonidandel, Scott; LeBreton, James M.; Johnson, Jeff W.

Relative weight analysis is a procedure for estimating the relative importance of correlated predictors in a regression equation. Because the sampling distribution of relative weights is unknown, researchers using relative weight analysis are unable to make judgments regarding the statistical significance of the relative weights. J. W. Johnson (2004) presented a bootstrapping methodology to compute standard errors for relative weights, but this procedure cannot be used to determine whether a relative weight is significantly different from zero. This article presents a bootstrapping procedure that allows one to determine the statistical significance of a relative weight. The authors conducted a Monte Carlo study to explore the Type I error, power, and bias associated with their proposed technique. They illustrate this approach here by applying the procedure to published data. (PsycINFO Database Record (c) 2009 APA, all rights reserved)
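As a rough sketch in the spirit of the approach described above (though not the published algorithm), one can compute Johnson's relative weights and bootstrap the difference between each predictor's weight and the weight of a randomly generated noise predictor. The data below are simulated, and the details are a simplification of the authors' procedure.

```python
# Relative weights (Johnson, 2000) plus a rough bootstrap comparison of each
# predictor's weight against that of a random "noise" predictor.  This is a
# simplified illustration, not the published significance-testing procedure.
import numpy as np

def relative_weights(X, y):
    Z = (X - X.mean(0)) / X.std(0, ddof=1)
    w = (y - y.mean()) / y.std(ddof=1)
    n = len(w)
    Rxx, Rxy = Z.T @ Z / (n - 1), Z.T @ w / (n - 1)
    vals, vecs = np.linalg.eigh(Rxx)
    lam = vecs @ np.diag(np.sqrt(vals)) @ vecs.T      # symmetric square root of Rxx
    beta = np.linalg.solve(lam, Rxy)                  # weights for orthogonalized predictors
    return (lam ** 2) @ (beta ** 2)                   # one weight per predictor; sums to R^2

rng = np.random.default_rng(1)
n, p = 300, 3
X = rng.normal(size=(n, p)) @ np.array([[1.0, 0.4, 0.3], [0.0, 1.0, 0.4], [0.0, 0.0, 1.0]])
y = X @ np.array([0.5, 0.3, 0.0]) + rng.normal(size=n)

diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    Xb = np.column_stack([X[idx], rng.normal(size=n)])   # append a noise predictor
    wts = relative_weights(Xb, y[idx])
    diffs.append(wts[:p] - wts[p])                       # real weights minus noise weight
lower = np.percentile(np.array(diffs), 2.5, axis=0)
print("weight clearly above that of a noise predictor:", lower > 0)
```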


Using derivative estimates to describe intraindividual variability at multiple time scales.

by Deboeck, Pascal R.; Montpetit, Mignon A.; Bergeman, C. S.; Boker, Steven M.

The study of intraindividual variability is central to the study of individuals in psychology. Previous research has related the variance observed in repeated measurements (time series) of individuals to traitlike measures that are logically related. Intraindividual measures, such as intraindividual standard deviation or the coefficient of variation, are likely to be incomplete representations of intraindividual variability. This article shows that the study of intraindividual variability can be made more productive by examining variability of interest at specific time scales, rather than considering the variability of entire time series. Furthermore, examination of variance in observed scores may not be sufficient, because these neglect the time scale dependent relationships between observations. The current article outlines a method of using estimated derivatives to examine intraindividual variability through estimates of the variance and other distributional properties at multiple time scales. In doing so, this article encourages more nuanced discussion about intraindividual variability and highlights that variability and variance are not equivalent. An example with simulated data and an example relating variability in daily measures of negative affect to neuroticism are provided. (PsycINFO Database Record (c) 2009 APA, all rights reserved)
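As a toy illustration of the multiple-time-scales idea, the sketch below computes simple central-difference derivative estimates of a simulated daily affect series at several lags and compares their spread with the overall standard deviation. It uses plain finite differences, not the derivative-estimation machinery described in the article.

```python
# Variability at multiple time scales: standard deviation of central-difference
# derivative estimates computed at different lags.  The daily series is simulated.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(56)                                     # 8 weeks of daily reports
series = np.sin(2 * np.pi * t / 7) + rng.normal(scale=0.5, size=t.size)

for lag in (1, 3, 7):                                 # roughly daily, half-week, weekly scales
    deriv = (series[2 * lag:] - series[:-2 * lag]) / (2 * lag)
    print(f"lag {lag}: SD of derivative estimates = {deriv.std(ddof=1):.3f}")
print(f"overall SD of the raw scores = {series.std(ddof=1):.3f}")
```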


A conceptual and empirical examination of justifications for dichotomization.

by DeCoster, Jamie; Iselin, Anne-Marie R.; Gallucci, Marcello

Despite many articles reporting the problems of dichotomizing continuous measures, researchers still commonly use this practice. The authors' purpose in this article was to understand the reasons that people still dichotomize and to determine whether any of these reasons are valid. They contacted 66 researchers who had published articles using dichotomized variables and obtained their justifications for dichotomization. They also contacted 53 authors of articles published in Psychological Methods and asked them to identify any situations in which they believed dichotomized indicators could perform better. Justifications provided by these two groups fell into three broad categories, which the authors explored both logically and with Monte Carlo simulations. Continuous indicators were superior in the majority of circumstances and never performed substantially worse than the dichotomized indicators, but the simulations did reveal specific situations in which dichotomized indicators performed as well as or better than the original continuous indicators. The authors also considered several justifications for dichotomization that did not lend themselves to simulation, but in each case they found compelling arguments to address these situations using techniques other than dichotomization. (PsycINFO Database Record (c) 2009 APA, all rights reserved)
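A tiny Monte Carlo in the same spirit as the simulations described above shows the typical cost of a median split; the sample size and population correlation are arbitrary illustration values.

```python
# Correlation recovered with a continuous predictor versus a median split.
# The population correlation and sample size are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(3)
n, true_r, reps = 200, 0.40, 2000
cont_r, dich_r = [], []
for _ in range(reps):
    x = rng.normal(size=n)
    y = true_r * x + np.sqrt(1 - true_r ** 2) * rng.normal(size=n)
    cont_r.append(np.corrcoef(x, y)[0, 1])
    x_split = (x > np.median(x)).astype(float)        # median-split dichotomization
    dich_r.append(np.corrcoef(x_split, y)[0, 1])
print(f"mean r, continuous predictor:   {np.mean(cont_r):.3f}")
print(f"mean r, dichotomized predictor: {np.mean(dich_r):.3f}")   # noticeably attenuated
```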


An introduction to recursive partitioning: Rationale, application, and characteristics of classification and regression trees, bagging, and random forests.

by Strobl, Carolin; Malley, James; Tutz, Gerhard

Recursive partitioning methods have become popular and widely used tools for nonparametric regression and classification in many scientific fields. Especially random forests, which can deal with large numbers of predictor variables even in the presence of complex interactions, have been applied successfully in genetics, clinical medicine, and bioinformatics within the past few years. High-dimensional problems are common not only in genetics, but also in some areas of psychological research, where only a few subjects can be measured because of time or cost constraints, yet a large amount of data is generated for each subject. Random forests have been shown to achieve a high prediction accuracy in such applications and to provide descriptive variable importance measures reflecting the impact of each variable in both main effects and interactions. The aim of this work is to introduce the principles of the standard recursive partitioning methods as well as recent methodological improvements, to illustrate their usage for low and high-dimensional data exploration, but also to point out limitations of the methods and potential pitfalls in their practical application. Application of the methods is illustrated with freely available implementations in the R system for statistical computing. (PsycINFO Database Record (c) 2009 APA, all rights reserved)
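The article illustrates these methods with implementations in R; for readers who work in Python, a minimal random forest sketch with impurity-based variable importances (scikit-learn, toy data) looks like the following.

```python
# Random forest on a toy "few subjects, many variables" problem, with
# impurity-based variable importances.  Data are simulated via scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=120, n_features=200, n_informative=5,
                           random_state=0)           # high-dimensional toy data
forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

top = np.argsort(forest.feature_importances_)[::-1][:5]
print("top predictors by importance:", top)
print("importances:", np.round(forest.feature_importances_[top], 3))
```

(A permutation-based importance measure is generally preferred when predictors are correlated; the impurity-based measure above is just the quickest illustration.)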


Bayesian mediation analysis.

by Yuan, Ying; MacKinnon, David P.

In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptually simpler for multilevel mediation analysis. Simulation studies and analysis of 2 data sets are used to illustrate the proposed methods. (PsycINFO Database Record (c) 2009 APA, all rights reserved)
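The Bayesian machinery itself is not reproduced here; as a point of reference, below is a minimal percentile-bootstrap sketch of the indirect effect a*b, the conventional frequentist alternative the authors compare against, run on simulated data.

```python
# Percentile bootstrap of the indirect (mediated) effect a*b on simulated data.
# This is the standard frequentist comparison point, not the authors' Bayesian method.
import numpy as np

rng = np.random.default_rng(4)
n = 150
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)                      # mediator
y = 0.4 * m + 0.1 * x + rng.normal(size=n)            # outcome

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # slope of m on x
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]  # slope of y on m, controlling for x
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```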


"Psychometric approaches for developing commensurate measures across independent studies: Traditional and new models": Clarification to Bauer and Hussong (2009).

by Bauer, Daniel J.; Hussong, Andrea M.

Reports a clarification to "Psychometric approaches for developing commensurate measures across independent studies: Traditional and new models" by Daniel J. Bauer and Andrea M. Hussong (Psychological Methods, 2009[Jun], Vol 14[2], 101-125). In this article, the authors wrote, "To our knowledge, the multisample framework is the only available option within these [latent variable] programs that allows for the moderation of all types of parameters, and this approach requires a single categorical moderator variable to define the samples." Bengt Muthén has clarified for the authors that some programs, including Mplus and Mx, can allow for continuous moderation through the implementation of nonlinear constraints involving observed variables, further enlarging the class of MNLFA models that can be fit with these programs. (The following abstract of the original article appeared in record 2009-08072-001.) When conducting an integrative analysis of data obtained from multiple independent studies, a fundamental problem is to establish commensurate measures for the constructs of interest. Fortunately, procedures for evaluating and establishing measurement equivalence across samples are well developed for the linear factor model and commonly used item response theory models. A newly proposed moderated nonlinear factor analysis model generalizes these models and procedures, allowing for items of different scale types (continuous or discrete) and differential item functioning across levels of categorical and/or continuous variables. The potential of this new model to resolve the problem of measurement in integrative data analysis is shown via an empirical example examining changes in alcohol involvement from ages 10 to 22 years across 2 longitudinal studies. (PsycINFO Database Record (c) 2009 APA, all rights reserved)



The beautiful mind interviewed

Thanks to Mind Hacks for this post about John Nash.

http://www.mindhacks.com/blog/2009/12/john_nash_a_beautif.html



Saturday, December 05, 2009

iPost: Poor working memory linked to poor parenting?


News Release

December 3, 2009
For Immediate Release

Contact: Barbara Isanski 
Association for Psychological Science 
202.293.9300 
bisanski@psychologicalscience.org

Parents Gone Wild? Study Suggests Link Between Working Memory and Reactive Parenting

We've all been in situations where we get so frustrated or angry about something that we lash out at someone without thinking. This lashing out — reactive negativity — happens when we can't control our emotions. Luckily, we are usually pretty good at self-regulating and controlling our emotions and behaviors. Working memory is crucial for cognitive control of emotions: It allows us to consider the information we have and reason quickly when deciding what to do, as opposed to reacting automatically, without thinking, to something.

For parents, it is particularly important to maintain a cool head around their misbehaving children. This can be challenging and sometimes parents can't help but react negatively towards their kids when they act badly. However, chronic parental reactive negativity is one of the most consistent factors leading to child abuse and may reinforce adverse behavior in children.

To avoid responding reactively to bad behavior, parents must be able to regulate their own negative emotions and thoughts. In the current study, psychologists Kirby Deater-Deckard and Michael D. Sewell from Virginia Polytechnic Institute and State University, Stephen A. Petrill from Ohio State University, and Lee A. Thompson from Case Western Reserve University examined if there is a link between working memory and parental reactive negativity.

Mothers of same-sex twins participated in this study. Researchers visited the participants' homes and videotaped each mother as she separately interacted with each twin as they participated in two frustrating tasks (drawing pictures with an Etch-A-Sketch and moving a marble through a tilting maze). In addition, the mothers completed a battery of tests measuring various cognitive abilities, including working memory.

The results, reported in Psychological Science, a journal of the Association for Psychological Science, reveal that the mothers whose negativity was most strongly linked with their child's challenging behaviors were those with the poorest working memory skills. The authors surmise that "for mothers with poorer working memory, their negativity is more reactive because they are less able to cognitively control their emotions and behaviors during their interactions with their children." They conclude that education and intervention efforts for improving parenting may be more effective if they incorporate strategies that enhance working memory skills in parents.

###

For more information about this study, please contact: Kirby Deater-Deckard (kirbydd@vt.edu)

Psychological Science is ranked among the top 10 general psychology journals for impact by the Institute for Scientific Information. For a copy of the article "Maternal Working Memory and Reactive Negativity in Parenting" and access to other Psychological Science research findings, please contact Barbara Isanski at 202-293-9300 or bisanski@psychologicalscience.org.


