Saturday, March 10, 2012

School Neuro News - March, 2012



You've subscribed to Kids, Inc communications. For best delivery, please add dcmiller@kidsinc.com to your address book or safe sender list.



What's new from www.schoolneuropsych.com? A message from Daniel C. Miller, Ph.D., ABPP, ABSNP, NCSP
       
 
2012-13 School Neuropsychology Post-Graduate Certification Program
Course Deposits are Now Being Accepted for the 2012-13 School Neuropsychology Post-Graduate Certification Program.
  • Month #1: September 14-16, 2012 (Online Webinar Lecture Month)
  • Month #2: October 12-14, 2012 (Online Webinar Lecture Month and Small Group Supervision Session)
  • Month #3: November 9-11, 2012 (Face-to-Face Meeting in Dallas, Texas - Lecture/Clinical Supervision Month)
  • Month #4: December 7-9, 2012 (Online Webinar Lecture Month and Small Group Supervision Session)
  • Month #5: January 11-13, 2013 (Online Webinar Lecture Month and Small Group Supervision Session)
  • Month #6: February 8-10, 2013 (Online Webinar First Integrated Case Study Presentations by Students)
  • Month #7: March 8-10, 2013 (Online Webinar Lecture Month and Small Group Supervision Session)
  • Month #8: April 12-14, 2013 (Online Webinar Second Integrated Case Study Presentations by Students)
  • Month #9: May 17-19, 2013 (Online Webinar Lecture Month and Small Group Supervision Session)
  • Month #10: June - No formal class meeting - Individual clinical supervision as needed to prepare students for the final case study presentation.
  • Month #11: July 9-10, 2013 – Final Written Exam and Final Oral Case Staffings held at the National School Neuropsychology Summer Institute.

Check out all of the details for the School Neuropsychology Post-Graduate Certification Program at http://www.schoolneuropsych.com/training/index.php?id=4

July 11-13, 2012 School Neuropsychology Summer Institute Speakers Finalized
Registration is now Open!

Wednesday, July 11, 2012

  • Full Day Preconference Workshop:  Dr. George McCloskey - Understanding Executive Functions: From Theories, Assessment, and Interventions
  • Keynote Presentation: Drs. Alan and Nadeen Kaufman and Dr. Elaine Fletcher-Janzen - The Role of Cognitive Assessment in Intervention Planning: Historical to Contemporary Practices

Thursday, July 12, 2012

  • Dr. Margaret Semrud-Clikeman - Nonverbal Learning Disabilities: In the Eye of the Beholder?
  • Jim Hanson and Karen Apgar - Using Neuropsychological Processing Deficits to Identify Specific Learning Disabilities: Oregon Models
  • Dr. David Schwartz - Making School Neuropsychological Reports Relevant to Curriculum and Instruction
  • Dr. Jack Naglieri - Using PASS Neurocognitive Theory and Cognitive Assessment System: From Evaluation to Instruction

Friday, July 13, 2012

  • Dr. Christopher G. Vaughan - Concussion Recognition and Response
  • Dr. Sam Goldstein - Executive Functions and Childhood Disorders: New Assessments and Interventions
  • Drs. Erin Avirett and Jordana Mortimer - Updates on Executive Functioning: How to Incorporate the Most Recent Literature on Executive Functioning into Practice
  • Dr. Daniel C. Miller - Update on the School Neuropsychology Conceptual Model and School Neuropsychology Report Shell

Check all of the Summer Institute details at http://www.schoolneuropsych.com/conferences/index.php?id=6.

 
     
New California Board of Behavioral Sciences CE Provider
KIDS, Inc. has been approved by the California Board of Behavioral Sciences to provide continuing education. This will be especially helpful to those professionals who are California Licensed Educational Psychologists (LEPs).
 
Share this Newsletter with your friends
schoolneuropsych.com is a Division of KIDS, Inc. that specializes in quality training in the emerging specialization of school neuropsychology. 

If you have received this newsletter from a friend and would like to be added to the mailing list to receive future issues, click here

Use buttons to share this information with your friends:

     


1156 Point Vista Road, Hickory Creek, TX 75065, USA
Review our Privacy Policy and Acceptable Use Policy.
Unsubscribe or manage your Subscription Preferences



Thursday, March 08, 2012

Happy birthday IQs Corner - 7 years later + something big coming




It was seven years ago today that I started IQs Corner blog. I want to thank all my loyal readers. It has been fun...and I am geared for seven more.

As a tease: sometime in the next 1-2 months IQs Corner, together with my IAP web page and other blogs, will be making a big announcement. Stay tuned :)



- Posted using BlogPress from Kevin McGrew's iPad

Article: STUDY ALERT: Making creative metaphors: The importance of fluid intelligence for creative thought


STUDY ALERT: Making creative metaphors: The importance of fluid intelligence for creative thought
http://scottbarrykaufman.com/article/study-alert-making-creative-metaphors-the-importance-of-fluid-intelligence-for-creative-thought/

(Sent from Flipboard)


Sent from Kevin McGrew's iPad
Kevin McGrew, PhD
Educational Psychologist

IQs Corner Recent Literature of Interest--to the 3rd power


I have been slammed with work and travel and am behind in my posts re: recent literature of interest. Today I make available not one, not two, but three batches of references for readers to review.

Enjoy




Research Bytes: What is cognitive efficiency--conceptual and methodological models




Cognitive efficiency has become a hot construct in the psychometric measurement of intelligence. However, little conceptual or methodological groundwork has addressed the questions "what is CE?" and "what methodological model should be used?"

Hoffman and associates have written some interesting conceptual articles that those of us in the area of intelligence testing may want to review to add some clarity to the measurement and interpretation of CE measures on IQ batteries. Below are two recent articles.











Article: Do You Hear What I Hear?




Wednesday, March 07, 2012

The Time Doc IM-Home blog posts re: brain clock based Interactive Metronome

Most of my readers are aware of my interest in brain-clock-based neurotechnologies, particularly as they relate to improving cognitive functioning. All posts related to this area of interest, as well as posts linking readers to other neuroscience developments, can be found at the Brain Clock blog.

I drill down deeper into Interactive Metronome as a guest blogger at the IM-Home blog. Now all my IM-related posts can be viewed via one URL. I hope readers check out these posts and become more aware of the exciting neurotechnologies that are emerging based on the concept of temporal processing and the human brain clock.




Tuesday, March 06, 2012

Journal of Experimental Psychology: Learning, Memory, and Cognition - Volume 38, Issue 2





Neural correlates of creativity in analogical reasoning.
Page 264-272
Green, Adam E.;Kraemer, David J. M.;Fugelsang, Jonathan A.;Gray, Jeremy R.;Dunbar, Kevin N.

Tracking cognitive phases in analogical reasoning with event-related potentials.
Page 273-281
Maguire, Mandy J.;McClelland, M. Michelle;Donovan, Colin M.;Tillman, Gail D.;Krawczyk, Daniel C.


Mixing metaphors in the cerebral hemispheres: What happens when careers collide?
Page 295-311
Chettih, Selmaan;Durgin, Frank H.;Grodner, Daniel J.

Assessing the effect of lexical variables in backward recall.
Page 312-324
Guérard, Katherine;Saint-Aubin, Jean

Recollection can be weak and familiarity can be strong.
Page 325-339
Ingram, Katherine M.;Mickes, Laura;Wixted, John T.

The Rumsfeld effect: The unknown unknown.
Page 340-355
Hampton, James A.;Aina, Bayo;Andersson, J. Mathias;Mirza, Humaira Z.;Parmar, Sejal


Influences of part-list cuing on different forms of episodic forgetting.
Page 366-375
Bäuml, Karl-Heinz T.;Samenieh, Anuscheh

Dissociating positive and negative influences of verbal processing on the recognition of pictures of faces and objects.
Page 376-390
Nakabayashi, Kazuyo;Burton, A. Mike;Brandimonte, Maria A.;Lloyd-Jones, Toby J.

Illusory expectations can affect retrieval-monitoring accuracy.
Page 391-404
McDonough, Ian M.;Gallo, David A.

The effect of study time distribution on learning and retention: A Goldilocks principle for presentation rate.
Page 405-412
de Jonge, Mario;Tabbers, Huib K.;Pecher, Diane;Zeelenberg, René

Overdistribution in source memory.
Page 413-439
Brainerd, C. J.;Reyna, V. F.;Holliday, R. E.;Nakamura, K.

Position–item associations play a role in the acquisition of order knowledge in an implicit serial reaction time task.
Page 440-456
Schuck, Nicolas W.;Gaschler, Robert;Keisler, Aysha;Frensch, Peter A.

How specific is source memory for faces of cheaters? Evidence for categorical emotional tagging.
Page 457-472
Bell, Raoul;Buchner, Axel;Erdfelder, Edgar;Giang, Trang;Schain, Cécile;Riether, Nina



Selective memory retrieval can impair and improve retrieval of other memories.
Page 488-494
Bäuml, Karl-Heinz T.;Samenieh, Anuscheh

Adaptive memory: Enhanced location memory after survival processing.
Page 495-501
Nairne, James S.;VanArsdall, Joshua E.;Pandeirada, Josefa N. S.;Blunt, Janell R.


Three tests and three corrections: Comment on Koen and Yonelinas (2010).
Page 513-523
Jang, Yoonhee;Mickes, Laura;Wixted, John T.



Research byte: Sex differences in reaction time

http://psycnet.apa.org/psycinfo/2012-05961-001/



Neuropsychology - Volume 26, Issue 2






Capturing the fragile X premutation phenotypes: A collaborative effort across multiple cohorts.
Page 156-164
Hunter, Jessica Ezzell;Sherman, Stephanie;Grigsby, Jim;Kogan, Cary;Cornish, Kim

Specificity of dyspraxia in children with autism.
Page 165-171
MacNeil, Lindsey K.;Mostofsky, Stewart H.

Decision-making impairment on the Iowa Gambling Task after endovascular coiling or neurosurgical clipping for ruptured anterior communicating artery aneurysm.
Page 172-180
Escartin, Gemma;Junqué, Carme;Juncadella, Montserrat;Gabarrós, Andreu;de Miquel, Maria Angels;Rubio, Francisco

Superior intellectual ability in schizophrenia: Neuropsychological characteristics.
Page 181-190
MacCabe, James H.;Brébion, Gildas;Reichenberg, Abraham;Ganguly, Taposhri;McKenna, Peter J.;Murray, Robin M.;David, Anthony S.

Altered implicit category learning in anorexia nervosa.
Page 191-201
Shott, Megan E.;Filoteo, J. Vincent;Jappe, Leah M.;Pryor, Tamara;Maddox, W. Todd;Rollin, Michael D. H.;Hagman, Jennifer O.;Frank, Guido K. W.

Gist-based conceptual processing of pictures remains intact in patients with amnestic mild cognitive impairment.
Page 202-208
Deason, Rebecca G.;Hussey, Erin P.;Budson, Andrew E.;Ally, Brandon A.

Mild cognitive impairment is associated with selected functional markers: Integrating concurrent, longitudinal, and stability effects.
Page 209-223
Dolcos, Sanda;MacDonald, Stuart W. S.;Braslavsky, Anna;Camicioli, Richard;Dixon, Roger A.

Effects of familiarity and cognitive function on naturalistic action performance.
Page 224-237
Park, Norman W.;Lombardi, Sabrina;Gold, David A.;Tarita-Nistor, Luminita;Gravely, Mark;Roy, Eric A.;Black, Sandra E.

Genetic architecture of the Delis-Kaplan executive function system Trail Making Test: Evidence for distinct genetic influences on executive function.
Page 238-250
Vasilopoulos, Terrie;Franz, Carol E.;Panizzon, Matthew S.;Xian, Hong;Grant, Michael D.;Lyons, Michael J.;Toomey, Rosemary;Jacobson, Kristen C.;Kremen, William S.

Age group and sex differences in performance on a computerized neurocognitive battery in children age 8−21.
Page 251-265
Gur, Ruben C.;Richard, Jan;Calkins, Monica E.;Chiavacci, Rosetta;Hansen, John A.;Bilker, Warren B.;Loughead, James;Connolly, John J.;Qiu, Haijun;Mentch, Frank D.;Abou-Sleiman, Patrick M.;Hakonarson, Hakon;Gur, Raquel E.



Monday, March 05, 2012

Examination of the structural, convergent, and incremental validity of the Reynolds Intellectual Assessment Scales (RIAS) with a clinical sample.

Psychological Assessment - Vol 22, Iss 2
Empirical examination of the Reynolds Intellectual Assessment Scales (RIAS; C. R. Reynolds & R. W. Kamphaus, 2003a) has produced mixed results regarding its internal structure and convergent validity. Various aspects of validity of RIAS scores with a sample (N = 521) of adolescents and adults seeking psychological evaluations at a university-based clinic were examined. Results from exploratory factor analysis indicated only 1 factor, and confirmatory factor analysis (CFA) indicated that the 1-factor model was a good fit and a better fit than the 2-factor model. Hierarchical factor analysis indicated the higher order, general intelligence factor accounted for the largest amount of variance. Correlations with other measures of verbal/crystallized and nonverbal/fluid intelligence were supportive of the convergent validity of the Verbal Intelligence Index but not the Nonverbal Intelligence Index. Joint CFA with these additional measures resulted in a superior fit of the 2-factor model compared with the 1-factor model, although the Odd-Item-Out subtest was found to be a poor measure of nonverbal/fluid intelligence. Incremental validity analyses indicated that the Composite Intelligence Index explained a medium to large portion of academic achievement variance; the NIX and VIX explained a small amount of remaining variance. Implications regarding interpretation of the RIAS when assessing similar individuals are discussed. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Sent with Reeder


Sent from my KMcGrew IPhone

The measurement of executive function at age 5: Psychometric properties and relationship to academic achievement.

Psychological Assessment - Vol 22, Iss 2
This study examined the psychometric properties and criterion validity of a newly developed battery of executive function (EF) tasks for use in early childhood. The battery was included in the Family Life Project (FLP), a prospective longitudinal study of families who were oversampled from low-income and African American families at the birth of a new child (N = 1,292). Ninety-nine percent (N = 1,036) of children who participated in the age 5 home visit completed 1 or more (M = 5.8, Mdn = 6) of the 6 EF tasks. Results indicated that tasks worked equally well for children residing in low-income and not low-income homes, that task scores were most informative about the ability level of children in the low-average range, that performance on EF tasks was best characterized by a single factor, and that individual differences on the EF battery were strongly related to a latent variable measuring overall academic achievement, as well as to individual standardized tests that measured phonological awareness, letter–word identification, and early math skills. (PsycINFO Database Record (c) 2012 APA, all rights reserved)



Age group and sex differences in performance on a computerized neurocognitive battery in children age 8−21.

Neuropsychology - Vol 26, Iss 2
Objective: Examine age group effects and sex differences by applying a comprehensive computerized battery of identical behavioral measures linked to brain systems in youths that were already genotyped. Such information is needed to incorporate behavioral data as neuropsychological "biomarkers" in large-scale genomic studies. Method: We developed and applied a brief computerized neurocognitive battery that provides measures of performance accuracy and response time for executive-control, episodic memory, complex cognition, social cognition, and sensorimotor speed domains. We tested a population-based sample of 3,500 genotyped youths ages 8–21 years. Results: Substantial improvement with age occurred for both accuracy and speed, but the rates varied by domain. The most pronounced improvement was noted in executive control functions, specifically attention, and in motor speed, with some effect sizes exceeding 1.8 standard deviation units. The least pronounced age group effect was in memory, where only face memory showed a large effect size on improved accuracy. Sex differences had much smaller effect sizes but were evident, with females outperforming males on attention, word and face memory, reasoning speed, and all social cognition tests and males outperforming females in spatial processing and sensorimotor and motor speed. These sex differences in most domains were seen already at the youngest age groups, and age group × sex interactions indicated divergence at the oldest groups with females becoming faster but less accurate than males. Conclusions: The results indicate that cognitive performance improves substantially in this age span, with large effect sizes that differ by domain. The more pronounced improvement for executive and reasoning domains than for memory suggests that memory capacities have reached their apex before age 8. Performance was sexually modulated and most sex differences were apparent by early adolescence. 
(PsycINFO Database Record (c) 2012 APA, all rights reserved)



Prefrontal cortex study

http://medicalxpress.com/news/2012-03-gain-insight-prefrontal-cortex.html



Advances in Data Analysis and Classification, Vol. 6, Issue 1 - New Issue Alert


For my quantoid readers 


Monday, March 5

Dear Valued Customer,
We are pleased to deliver your requested table of contents alert for Advances in Data Analysis and Classification. Volume 6 Number 1 is now available on SpringerLink

In this issue:

Editorial
Hans-Hermann Bock

Editorial
Maurizio Vichi

Regular Article: A latent variables approach for clustering mixed binary and continuous variables within a Gaussian mixture model
Isabella Morlini

Regular Article: Exploring incomplete data using visualization techniques
Matthias Templ, Andreas Alfons & Peter Filzmoser

Regular Article: Analyzing multiset data by the Power STATIS-ACT method
Jacques Bénasséni & Mohammed Bennani Dosse

Regular Article: Cohen's linearly weighted kappa is a weighted average
Matthijs J. Warrens


Friday, March 02, 2012

Article: Why It’s Important to Talk Math With Kids




Article: Special Issue: Dual-Process Theories of Cognitive Development


Special Issue: Dual-Process Theories of Cognitive Development
http://www.sciencedirect.com/science/journal/02732297/31/2-3




Article: The Atlantic: Kids Are Changing, Neuroplasticity is Real, and Education Needs a Revolution


The Atlantic: Kids Are Changing, Neuroplasticity is Real, and Education Needs a Revolution
http://www.brainpowerinitiative.com/2012/03/the-atlantic-kids-are-changing-neuroplasticity-is-real-and-education-will-need-to-change/




Article: Mapping out a new era in brain research




Thursday, March 01, 2012

IAP101 Brief #12: Use of IQ component part scores as indicators of general intelligence in SLD and MR/ID diagnosis

   
            Historically, the concept of general intelligence (g), as operationalized by global full scale IQ scores from intelligence test batteries, has been central to the definition and classification of individuals with a specific learning disability (SLD) as well as individuals with an intellectual disability (ID).  More recently, contemporary definitions and operational criteria have elevated intelligence test battery composite or part scores to a more prominent role in the diagnosis and classification of SLD and, increasingly, ID.
            In the case of SLD, third-method consistency definitions prominently feature component or part scores in (a) the identification of consistency between low achievement and relevant cognitive abilities or processing disorders and (b) the requirement that an individual demonstrate relative cognitive and achievement strengths (see Flanagan, Fiorello & Ortiz, 2010).  The global IQ score is de-emphasized in these third-method approaches.
            In contrast, the 11th edition of the AAIDD Intellectual Disability: Definition, Classification, and Systems of Supports manual (AAIDD, 2010) made general intelligence, and thus global composite IQ scores, central to the definition of intellectual functioning.  This has not gone unchallenged.  For example, the AAIDD ID definition has been criticized for an over-reliance on the construct of general intelligence and for ignoring contemporary psychometric theoretical and empirical research that has converged on a multidimensional hierarchical model of intelligence (viz., Cattell-Horn-Carroll or CHC theory).
The potential constraints of the "ID-as-a-general-intelligence-disability" definition were anticipated by the Committee on Disability Determination for Mental Retardation in its National Research Council report, Mental Retardation: Determining Eligibility for Social Security Benefits (Reschly, Myers & Hartel, 2002).  This national committee of experts concluded that "during the next decade, even greater alignment of intelligence tests and the IQ scores derived from them and the Horn-Cattell and Carroll models is likely.  As a result, the future will almost certainly see greater reliance on part scores, such as IQ scores for Gc and Gf, in addition to the traditional composite IQ.  That is, the traditional composite IQ may not be dropped, but greater emphasis will be placed on part scores than has been the case in the past" (Reschly et al., 2002, p. 94).  The committee stated that "whenever the validity of one or more part scores (subtests, scales) is questioned, examiners must also question whether the test's total score is appropriate for guiding diagnostic decision making.  The total test score is usually considered the best estimate of a client's overall intellectual functioning.  However, there are instances in which, and individuals for whom, the total test score may not be the best representation of overall cognitive functioning" (pp. 106-107).
            The increased emphasis on intelligence test battery composite part scores in SLD and ID diagnosis and classification raises a number of measurement and conceptual issues (Reschly et al., 2002).  For example, what constitutes a statistically significant difference?  What constitutes a meaningful difference?  Which cognitive abilities should serve as proxies of general intelligence when the global IQ is questioned?  What should be the magnitude of the total test score?
Only the issue of appropriate cognitive abilities is discussed here.  This issue concerns which component or part scores are most highly correlated with general intelligence (g)—that is, which component part scores are high g-loaders?  The traditional consensus has been that measures of Gc (crystallized intelligence; comprehension-knowledge) and Gf (fluid intelligence or reasoning) are the highest g-loading measures and constructs and thus are the most likely candidates for elevated status when diagnosing ID (Reschly et al., 2002).  Although not always stated explicitly, the third-method consistency SLD definitions specify that an individual must demonstrate "at least an average level of general cognitive ability or intelligence" (Flanagan et al., 2010, p. 745), a statement that implicitly calls for cognitive abilities and component scores with high g-ness.
Table 1 is intended to provide guidance when using component part scores in the diagnosis and classification of SLD and ID.  Table 1 presents a summary of the comprehensive, nationally normed, individually administered intelligence batteries that possess satisfactory psychometric characteristics (i.e., national norm samples, adequate reliability and validity for the composite g-score) for use in the diagnosis of ID and SLD.



The Composite g-score column lists the global general intelligence score provided by each intelligence battery.  This score is the best estimate of a person's general intellectual ability, which currently is most relevant to the diagnosis of ID as per AAIDD.  All composite g-scores listed in Table 1 meet Jensen's (1998) psychometric sampling error criteria as valid estimates of general intelligence.  As per Jensen's number-of-tests criterion, all intelligence batteries' g-composites are based on a minimum of nine tests that sample at least three primary cognitive ability domains.  As per Jensen's variety-of-tests criterion (i.e., information content, skills, and demands for a variety of mental operations), the batteries, when viewed from the perspective of CHC theory, vary in ability domain coverage: four (CAS, SB5), five (KABC-II, WISC-IV, WAIS-IV), six (DAS-II), and seven (WJ III) (Flanagan, Ortiz & Alfonso, 2007; Keith & Reynolds, 2010).   As recommended by Jensen (1998), "the particular collection of tests used to estimate g should come as close as possible, with some limited number of tests, to being a representative sample of all types of mental tests, and the various kinds of test should be represented as equally as possible" (p. 85).  Users should consult sources such as Flanagan et al. (2007) and Keith and Reynolds (2010) to determine how each intelligence battery approximates Jensen's optimal design criterion, the specific CHC domains measured, and the proportional representation of the CHC domains in each battery's composite g-score.
Also included in Table 1 are the component part scales provided by each battery (e.g., WAIS-IV Verbal Comprehension Index, Perceptual Reasoning Index, Working Memory Index, and Processing Speed Index), followed by their respective within-battery g-loadings.[1]  Examination of the g-ness of composite scores from existing batteries (see last three columns in Table 1) suggests the traditional assumption that measures of Gf and Gc are the best proxies of general intelligence may not hold across all intelligence batteries.[2] 
In the case of the SB5, all five composite part scores are very similar in g-loadings (h2 = .72 to .79).  No single SB5 composite part score appears better than the others for suggesting average general intelligence (when the global IQ score is not used for this purpose).  At the other extreme is the WJ III, where the Fluid Reasoning, Comprehension-Knowledge, and Long-term Storage and Retrieval cluster scores are the best g-proxies for part-score-based interpretation within the WJ III.  The WJ III Visual Processing and Processing Speed clusters are not composite part scores that should be emphasized as indicators of general intelligence.  Across all batteries that include a processing speed component part score (DAS-II, WAIS-IV, WISC-IV, WJ III), the respective processing speed scale is always the weakest proxy for general intelligence and thus would not be viewed as a good estimate of general intelligence. 
            It is also clear that one cannot assume that composites with similar-sounding names should have similar relative g-ness status within different batteries.  For example, the Gv (visual-spatial or visual processing) clusters in the DAS-II (Spatial Ability) and SB5 (Visual-Spatial Processing) are relatively strong g-measures within their respective batteries, but the same cannot be said for the WJ III Visual Processing cluster.  Even more interesting are the differences in the WAIS-IV and WISC-IV relative g-loadings for similarly named index scores. 
For example, the Working Memory Index is the highest g-loading component part score (tied with the Perceptual Reasoning Index) in the WAIS-IV but is only third (out of four) in the WISC-IV.   The Working Memory Index comprises the Digit Span and Arithmetic subtests in the WAIS-IV and the Digit Span and Letter-Number Sequencing subtests in the WISC-IV.  The Arithmetic subtest has been reported to be a factorially complex test that may tap fluid intelligence (Gf-RQ—quantitative reasoning), quantitative knowledge (Gq), working memory (Gsm), and possibly processing speed (Gs; Keith & Reynolds, 2010; Phelps, McGrew, Knopik & Ford, 2005).   The factorially complex characteristics of the Arithmetic subtest (which, in essence, make it function like a mini-g proxy) would explain why the Working Memory Index is a good proxy for g in the WAIS-IV but not in the WISC-IV.  The WAIS-IV and WISC-IV Working Memory Index scales, although named the same, are not measuring identical constructs.

A critical caveat is that g-loadings cannot be compared across different batteries.  g-loadings may change when the mixture of measures included in the analyses changes.  Different "flavors" of g can result (Carroll, 1993; Jensen, 1998).  The only way to compare g-ness across batteries is with an appropriately designed cross- or joint-battery analysis (e.g., the WAIS-IV, SB5, and WJ III analyzed in a common sample).
The above within- and across-battery examples illustrate that those who use component part scores as an estimate of a person's general intelligence must be aware of the composition and psychometric g-ness of the component scores within each intelligence battery.  Not all component part scores in different intelligence batteries are created equal with regard to g-ness.  Likewise, similarly named factor-based composite scores may not measure the identical construct and may vary in degree of within-battery g-ness.  This is not a new problem in the context of naming factors in factor analysis and, by extension, factor-based intelligence test composite scores.  Cliff (1983) described this nominalistic fallacy in simple language: "if we name something, this does not mean we understand it" (p. 120). 
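The within-battery g-loading procedure described in the footnotes (extracting the first unrotated principal component from a battery's published scale intercorrelation matrix) can be sketched as follows. This is a minimal illustration, not the author's actual computations: the correlation values below are hypothetical, and the WISC-IV-style index labels are used only to make the output readable.

```python
import numpy as np

# Hypothetical 4-scale intercorrelation matrix (illustrative values only,
# NOT taken from any published technical manual).
R = np.array([
    [1.00, 0.60, 0.55, 0.40],
    [0.60, 1.00, 0.50, 0.35],
    [0.55, 0.50, 1.00, 0.30],
    [0.40, 0.35, 0.30, 1.00],
])
labels = ["VCI", "PRI", "WMI", "PSI"]  # WISC-IV-style index names (illustrative)

# Principal component analysis of the correlation matrix: the first
# (unrotated) component is the conventional psychometric g.
eigvals, eigvecs = np.linalg.eigh(R)      # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]         # reorder components largest-first
first_val = eigvals[order][0]
first_vec = eigvecs[:, order][:, 0]

# Component loadings = eigenvector scaled by sqrt(eigenvalue); flip the sign
# if needed so loadings are positive (an eigenvector's sign is arbitrary).
g_loadings = first_vec * np.sqrt(first_val)
if g_loadings.sum() < 0:
    g_loadings = -g_loadings

# h2 (squared loading) = proportion of each scale's variance explained by g.
for name, loading in zip(labels, g_loadings):
    print(f"{name}: g-loading = {loading:.2f}, h2 = {loading**2:.2f}")
```

With correlations shaped like these (the speed-like scale correlating lowest with the others), the last scale emerges with the weakest g-loading, mirroring the pattern described above for processing speed composites. Averaging such loadings across a battery's age-differentiated matrices would follow the same logic, one matrix at a time.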




[1] As noted in the footnotes to Table 1, all composite score g-loadings were computed by Kevin McGrew by entering the smallest number of published correlation matrices (those covering the largest age ranges) from each intelligence battery's technical manual (note the exception for the WJ III) in order to obtain an average g-loading estimate.  It would have been possible to calculate and report these values for each age-differentiated correlation matrix for each intelligence battery.  However, the purpose of this table is to provide the best possible average value across the entire age range of each intelligence battery.  Floyd and colleagues have published age-differentiated g-loadings for the DAS-II and WJ III.  Those values were not used, as they are based on the principal common factor analysis method, which analyzes the reliable shared variance among tests.  Although principal factor and principal component loadings typically order measures in the same relative position, the principal factor loadings typically will be lower.  Given that the imperfect manifest composite scale scores are those utilized in practice, and to allow uniformity in the calculation of the g-loadings reported in Table 1, principal component analysis was used in this work.  The same rationale was used for not using the latent factor loadings on a higher-order g-factor from SEM/CFA analyses of each test battery.  Loadings from CFA analyses represent the relations between the underlying theoretical ability constructs and g purged of measurement error.  Also, the final CFA solutions reported in a battery's technical manual (or in independent journal articles) frequently allow tests to be factorially complex (load on more than one latent factor), a measurement model that does not resemble the real-world reality of the manifest/observed composite scores used in practice.  
Latent factor loadings on a higher-order g-factor will often differ significantly from principal component loadings based on the manifest measures, both in absolute magnitude and relative size (e.g., see high Ga loading on g in WJ III technical manual which is at variance with the manifest variable based Ga loading reported in Table 1) 
[2] The h² values are those that should be used to compare the relative amount of g variance present in the component part scores within each intelligence battery.
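The principal-component procedure described in the footnotes can be sketched in a few lines: the g-loading of each composite is its loading on the first (unrotated) principal component of the correlation matrix, and h² is that loading squared. The sketch below is illustrative only; the function name and the correlation values are hypothetical, not taken from any battery's technical manual.

```python
import numpy as np

def g_loadings(R, tests):
    """First-principal-component loadings from a correlation matrix R.

    Each loading is a proxy for a composite's g-ness; its square (h2)
    is the proportion of that composite's variance accounted for by
    the first principal component.
    """
    eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
    v, w = eigvals[-1], eigvecs[:, -1]        # largest eigenvalue and its vector
    load = w * np.sqrt(v)                     # component loadings
    if load.sum() < 0:                        # eigenvector sign is arbitrary; flip if needed
        load = -load
    return {t: (l, l ** 2) for t, l in zip(tests, load)}

# Hypothetical 3-composite correlation matrix (illustrative values only)
R = np.array([[1.00, 0.60, 0.55],
              [0.60, 1.00, 0.50],
              [0.55, 0.50, 1.00]])
for test, (g, h2) in g_loadings(R, ["Gc", "Gf", "Gv"]).items():
    print(f"{test}: g-loading = {g:.2f}, h2 = {h2:.2f}")
```

Averaging such loadings across the manuals' age-differentiated correlation matrices would yield the kind of battery-wide estimates reported in Table 1; principal factor loadings, which analyze only shared variance, would come out systematically lower.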

Article: STUDY ALERT: Trends in intelligence research



Kevin McGrew, PhD
Educational Psychologist

New Dictionary App Available From the APA




APA Concise Dictionary of Psychology Mobile App

This new app gives you all the power of a complete dictionary at your fingertips, along with great features that a print dictionary can't offer.

Download a free but limited version of the app today, or the full version for just $29.99.

The full version allows you to search more than 10,000 entries directly or browse an alphabetical list, add notes about definitions, mark terms as favorites, and link directly between cross-references. Entries cover concepts, processes, and therapies across 90 subareas of psychology. The full app also offers:

  • "Word of the Day" and "Historical Figures in Psychology" pop-up features
  • abbreviations and alternative spellings
  • search term suggestions
  • search history

Download the APA Concise Dictionary of Psychology app today for your iOS or Android mobile device:

FREE TRIAL VERSION:
At the iTunes Store.
At the Android Market.

FULL VERSION:
At the iTunes Store.
At the Android Market.

For more information about the print edition of this and other APA dictionaries and reference books,
follow this link.




25 Facts You Should Know About Your Gray Matter | Online Universities

http://www.onlineuniversities.com/25-facts-you-should-know-about-your-gray-matter

