Thursday, March 31, 2011

Research briefs: Social science and law; crime causality; jury research




Monahan, J., & Walker, L. (2011). Twenty-Five Years of Social Science in Law. Law and Human Behavior, 35(1), 72-82.


In this essay, we take the publication of the seventh edition of the casebook Social Science in Law (2010) as an opportunity to reflect on continuities and changes that have occurred in the application of social science research to American law over the past quarter-century. We structure these reflections by comparing and contrasting the original edition of the book with the current one. When the first edition appeared, courts’ reliance on social science was often confused and always contested. Now, courts’ reliance on social science is so common as to be unremarkable. What has changed—sometimes radically—are the substantive legal questions on which social science has been brought to bear.



Murray, J., Thomson, M. E., Cooke, D. J., & Charles, K. E. (2011). Influencing expert judgment: Attributions of crime causality. Legal and Criminological Psychology, 16(1), 126-143.

Courts occasionally permit psychologists to present expert evidence in an attempt to help jurors evaluate eyewitness identification evidence. This paper reviews research assessing the impact of this expert evidence, which we argue should aim to increase jurors' ability to discriminate accurate from inaccurate identifications. With this in mind we identify three different research designs, two indirectly measuring the expert's impact on juror discrimination accuracy and one which directly assesses its effect on this measure. Across a total of 24 experiments, three have used the superior direct methodology, only one of which provides evidence that expert testimony can improve jurors' ability to discriminate between accurate and inaccurate eyewitness identifications.


Wright, D. B., Strubler, K. A., & Vallano, J. R. (2011). Statistical techniques for juror and jury research. Legal and Criminological Psychology, 16(1), 90-125.

Juror and jury research is a thriving area of investigation in legal psychology. The basic ANOVA and regression, well-known by psychologists, are inappropriate for analysing many types of data from this area of research. This paper describes statistical techniques suitable for some of the main questions asked by jury researchers. First, we discuss how to examine manipulations that may affect levels of reasonable doubt and how to measure reasonable doubt using the coefficients estimated from a logistic regression. Second, we compare models designed for analysing the data like those which often arise in research where jurors first make categorical judgments (e.g., negligent or not, guilty or not) and then dependent on their response may make another judgment (e.g., award, punishment). We concentrate on zero-inflated and hurdle models. Third, we examine how to take into account that jurors are part of a jury using multilevel modelling. We illustrate each of the techniques using software that can be downloaded for free from the Internet (the package R) and provide a web page that gives further details for running these analyses.
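The paper's own examples use R; as a rough Python sketch of the first idea, here is a logistic regression fit to hypothetical juror data, with the fitted coefficients used to locate the evidence strength at which a guilty verdict becomes more likely than not (one way to operationalize a reasonable-doubt threshold). All data and numeric values below are invented for illustration.

```python
import numpy as np

# Hypothetical juror data: each juror rates evidence strength (0-10)
# and returns a guilty (1) / not-guilty (0) verdict.
rng = np.random.default_rng(1)
n = 2000
evidence = rng.uniform(0, 10, n)
true_b0, true_b1 = -6.0, 1.0          # assumed true threshold at strength 6
p_guilty = 1 / (1 + np.exp(-(true_b0 + true_b1 * evidence)))
verdict = rng.binomial(1, p_guilty)

# Fit a logistic regression by Newton-Raphson (IRLS).
X = np.column_stack([np.ones(n), evidence])
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-(X @ beta)))
    W = mu * (1 - mu)
    beta += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (verdict - mu))

# Evidence strength at which P(guilty) = .5, from the fitted coefficients.
threshold = -beta[0] / beta[1]
print(round(threshold, 2))
```

With a reasonable sample the recovered threshold lands near the value built into the simulation; the same coefficient manipulation is what the paper describes for measuring reasonable doubt.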


- iPost using BlogPress from Kevin McGrew's iPad

Why IQ composite scores often are higher or lower than the subtest scores: Awesome video explanation

This past week Dr. Joel Schneider and I released a paper called "'Just say no' to averaging IQ subtest scores." The report generated considerable discussion on a number of professional listservs.

One small portion of the paper explained why composite/cluster scores from IQ tests often are higher (or lower) than the arithmetic mean of the tests that comprise the composite. This observation often baffles test users.

I would urge those who have pondered this question to read that section of the report. And THEN, be prepared to be blown away by an instructional video Joel posted at his blog, where he leads you through a visual-graphic explanation of the phenomenon. Don't be scared by the geometry or some of the terms. Just sit back, relax, and recognize, even if all the technical stuff is not your cup of tea, that there is an explanation for this score phenomenon. And when colleagues ask, just refer them to Joel's blog.
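For the quantitatively curious, the core of the phenomenon can also be shown in a few lines. The sketch below assumes two subtests on the usual standard-score metric (mean 100, SD 15) and an illustrative intercorrelation of .60; the values are not taken from any specific battery.

```python
import math

# Two subtest standard scores, both one SD below the mean.
s1, s2 = 85, 85
r = 0.60  # assumed intercorrelation of the two subtests

z1, z2 = (s1 - 100) / 15, (s2 - 100) / 15

# The sum of two correlated z-scores has SD sqrt(2 + 2r), which is less
# than 2 whenever r < 1.  Standardizing the sum therefore pushes the
# composite farther from the mean than the simple average of the subtests.
z_composite = (z1 + z2) / math.sqrt(2 + 2 * r)
composite = 100 + 15 * z_composite

print(round((s1 + s2) / 2))  # 85 (simple average)
print(round(composite))      # 83 (norm-based composite)
```

The lower the intercorrelation, the larger the gap between the composite and the average of its parts.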

It is brilliant and worth a view, even if you are not a quantitatively oriented thinker.

Below is a screen capture of the start [double click on icon to enlarge]




Wednesday, March 30, 2011

IQ beliefs and learning @abmarkman, 3/30/11 9:11 AM

Art Markman (@abmarkman)
3/30/11 9:11 AM
Your beliefs about intelligence affect your beliefs about what you learn. tinyurl.com/4waf3k8


Sent from Kevin McGrew's iPad
Kevin McGrew, PhD
Educational Psychologist

Monday, March 28, 2011

Cognitive ability domain cohesion: why composite scores comprised of significantly different subtest scores are still valid

Some excellent discussion has been occurring on the NASP and CHC listservs in response to the "Just say no to averaging IQ subtest scores" blog post and report.

An issue/question that has surfaced (not for the first time) is why markedly discrepant subtest scores that form a composite can still be considered valid indicators of the construct domain. Often clinicians believe that if there is a significant and large discrepancy between tests within a composite, the total score should be considered invalid.

The issue is complex and was touched on briefly in our report and in the NASP and CHC threads by Joel Schneider. Here I mention just ONE concept for consideration.

Below is a 2-D MDS analysis of the WJ III Cog/Ach tests for subjects aged 6-18 in the norm sample. Like factor analysis, MDS finds structure in data; this 2-D model is based on an analysis of the tests' correlation matrix. What I think is a major value of MDS, and other spatial statistics, is that one can "see" the numerical relations between tests. Although the metrics are not identical, the visual-spatial map of the WJ III tests does, more or less, mirror the intercorrelations between the tests. [Double click on image to enlarge]




So....take a look at the Gc, Grw, or Gq tests in this MDS map. All of these tests cluster closely together. Inspection of their intercorrelations finds high correlations among all measures. Conversely, look at the large amount of spatial territory covered by the WJ III Gv tests. Also look at the Ga tests (note that a red line is not connecting Auditory Attention, AA, down in the right-hand quadrant with the other Ga tests). Furthermore, even though most of the Gsm tests are relatively cohesive or tight, Memory for Sentences is further away from the other Gsm tests.

IMHO, these visual-spatial maps, which mirror the intercorrelations, tell us that in humans, not all cognitive/achievement domains include narrow abilities that are highly intercorrelated. I call it "ability domain cohesion." Clearly the different Gv abilities measured by the WJ III Gv tests indicate that the Gv domain is less cohesive (less tight) than the Gc or Grw domains. This does not suggest the tests are flawed; instead, it tells us about the varying degrees of cohesiveness present in different ability domains.
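For readers who want to play with these ideas, here is a minimal Python sketch using an invented 4-test correlation matrix (not actual WJ III values). It quantifies "domain cohesion" as the within-domain intercorrelation and embeds the tests in 2-D with classical (Torgerson) MDS, so that more highly correlated tests land closer together on the map.

```python
import numpy as np

# Hypothetical correlation matrix for four tests: a "tight" domain pair
# (r = .80) and a "broad" domain pair (r = .40), cross-domain r = .30.
R = np.array([
    [1.0, 0.8, 0.3, 0.3],
    [0.8, 1.0, 0.3, 0.3],
    [0.3, 0.3, 1.0, 0.4],
    [0.3, 0.3, 0.4, 1.0],
])

# One way to quantify "ability domain cohesion": the intercorrelation(s)
# among the tests within a domain (here each domain has a single pair).
cohesion_tight, cohesion_broad = R[0, 1], R[2, 3]
print(cohesion_tight, cohesion_broad)

# Classical (Torgerson) MDS: high correlation -> small distance.
D2 = 2 * (1 - R)                       # squared distances from correlations
n = R.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J                  # double-centered Gram matrix
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1][:2]
coords = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# The cohesive pair plots closer together than the broad pair.
print(np.linalg.norm(coords[0] - coords[1]) <
      np.linalg.norm(coords[2] - coords[3]))  # True
```

The 2-D map covers more "territory" for the low-cohesion pair, which is exactly the visual pattern described for the Gv and Ga tests above.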

Thus, for ability domains that are very broad (in terms of domain cohesion--e.g., Gv and Ga in this MDS figure), wildly different test scores (e.g., between WJ III Spatial Relations, SR, and Picture Recognition, PR) may be valid and simply reflect the inherent lower cohesiveness (tightness) of these ability domains in human intelligence. Thus, if a person scores significantly differently on the Gv SR and PR tests, and these scores provide valid indications of his or her relative standing on the measured abilities, then combining them is appropriate and yields a valid estimate of the Gv domain....which by nature is broad...and people will often display significant within-domain variability.

Bottom line. Composite scores produced by subtests that are markedly different are likely valid estimates of domains...it is just the nature of human intelligence that some of these domains are more tight or cohesive than others.

Dissertation dish: WJ III TBI cognitive profiles by gender





Click on image to enlarge abstract





Sunday, March 27, 2011

IAP Applied Psychometrics 101 Report #10: "Just say no" to averaging IQ subtest scores

Should psychologists engage in the practice of calculating simple arithmetic averages of two or more scaled or standard scores from different subtests (pseudo-composites) within or across different IQ batteries? Dr. Joel Schneider and I say "no."

Do psychologists who include simple pseudo-composite scores in their reports, or make interpretations and recommendations based on such scores, have a professional responsibility to alert recipients of psychological reports (e.g., lawyers, the courts, parents, special education staff, other mental health practitioners, etc.) of the potential amount of error in their statements when simple pseudo-composite scores are the foundation of some of their statements? We believe "yes."

Simple pseudo-composite scores, in contrast to norm-based scores (i.e., composite scores with norms provided by test publishers/authors--e.g., Wechsler Verbal Comprehension Index), contain significant sources of error. Although they have intuitive appeal, this appeal cloaks hidden sources of error in the scores---with the amount of error being a function of a combination of psychometric variables.

IAP Applied Psychometrics 101 Report #10 addresses the psychometric issues involved in pseudo-composite scores.

In the report we offer recommendations and resources that allow users to calculate psychometrically sound pseudo-composites when they are deemed important and relevant to the interpretation of a person's assessment results.
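As a rough illustration of the kind of calculation involved (the report itself contains the recommended procedures and resources), the sketch below standardizes the sum of subtest z-scores using the subtests' mean intercorrelation, and adds a Mosier-style estimate of the composite's reliability. Function names and all numeric values here are hypothetical.

```python
import math

def pseudo_composite(scores, r_mean, mean=100, sd=15):
    """Rescale the sum of k standard scores onto the same metric,
    given the mean intercorrelation among the subtests (a sketch of
    the standard standardized-sum approach)."""
    z = [(s - mean) / sd for s in scores]
    k = len(z)
    sd_sum = math.sqrt(k + k * (k - 1) * r_mean)  # SD of the sum of k correlated z-scores
    return mean + sd * sum(z) / sd_sum

def composite_reliability(rels, r_mean):
    """Mosier-style reliability of an equally weighted composite of k
    standardized subtests with reliabilities `rels` and mean
    intercorrelation `r_mean`."""
    k = len(rels)
    var_sum = k + k * (k - 1) * r_mean
    err_var = sum(1 - r for r in rels)
    return 1 - err_var / var_sum

# A simple arithmetic average of 70 and 80 stays at 75, but the properly
# standardized composite is more extreme because the subtests are not
# perfectly correlated.
print(round(pseudo_composite([70, 80], r_mean=0.60)))       # 72
print(round(composite_reliability([0.85, 0.88], 0.60), 3))  # 0.916
```

Note that the composite is both more extreme and more reliable than its parts, which is part of why the simple average is not an acceptable substitute.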

Finally, understanding the sources of error in simple pseudo-composite scores gives practitioners a way to understand a paradoxical phenomenon frequently observed in practice: norm-based or psychometrically sound pseudo-composite scores are often higher (or lower) than the subtest scores that comprise the composite. The "total does not equal the average of the parts" phenomenon is explained conceptually, statistically, and via an interesting visual explanation based on trigonometry.



Abstract

The publishers and authors of intelligence test batteries provide norm-based composite scores based on two or more individual subtests. In practice, clinicians frequently form hypotheses based on combinations of tests for which norm-based composite scores are not available. In addition, with the emergence of Cattell-Horn-Carroll (CHC) theory as the consensus psychometric theory of intelligence, clinicians are now more frequently “crossing batteries” to form composites intended to represent broad or narrow CHC abilities. Beyond simple “eye-balling” of groups of subtests, clinicians at times compute the arithmetic average of subtest scaled or standard scores (pseudo-composites). This practice suffers from serious psychometric flaws and can lead to incorrect diagnoses and decisions. The problems with pseudo-composite scores are explained and recommendations made for the proper calculation of special composite scores.








Saturday, March 19, 2011

Research bytes: First "g"...now "GFP" (general factor of personality): Real or artifact?

Below is a very nice, concise overview of recent research and theoretical discussions of the possibility of a personality g-factor...akin to "g" (general intelligence), as well as findings of possible "plasticity" and "stability" higher-order factors above the Big 5 personality traits. Double click on image to enlarge.






Tuesday, March 15, 2011

Does the WAIS-III measure the same intellectual abilities in MR/ID individuals?

I have had a number of people send me copies of this article (see abstracts and journal info below), especially those who do work related to Dx of MR/ID in Atkins death penalty cases.

The abstract is self-explanatory--the authors conclude that the WAIS-III four-factor structure is not validated in an MR/ID population. I can hear a lawyer now--"so Dr. __________, according to MacLean et al. the WAIS-III doesn't measure the same abilities in individuals with MR/ID...so aren't your results questionable?"

A close read of the article suggests the results should be taken with a serious grain of salt. In fact, the discussion section is devoted primarily to the various methodological and statistical reasons why the published 4-factor model may not have fit.

As is often the case when dealing with samples of convenience (the authors' own words), especially samples of individuals at the lower end of the ability continuum, the variables often show significant problems with non-normality and skew. Both problems are present in this sample. Given that we are dealing with SEM-based statistics, the problem is one of not meeting the assumption of multivariate normality. The variables also showed restricted SDs---a restricted range of talent, a condition that dampens the correlations in a matrix.
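The dampening effect of a restricted range of talent is easy to demonstrate by simulation. The sketch below assumes a true correlation of about .70 between two measures and then retains only cases more than 1 SD below the mean on the selection variable (loosely analogous to an MR/ID sample); all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)                            # selection/ability variable
y = 0.7 * x + np.sqrt(1 - 0.7**2) * rng.standard_normal(n)  # true r ~ .70

r_full = np.corrcoef(x, y)[0, 1]

# Keep only the lower tail of the ability distribution (x < -1 SD).
mask = x < -1.0
r_restricted = np.corrcoef(x[mask], y[mask])[0, 1]

# The correlation in the restricted sample is substantially attenuated.
print(round(r_full, 2), round(r_restricted, 2))
```

With these settings the restricted-sample correlation drops to roughly .40, even though nothing about the relationship between the variables has changed; range restriction alone does the damage.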

While doing extensive modeling research at the Institute for Community Integration at the University of Minnesota, an institute devoted to individuals with MR/ID/DD, I was constantly faced with data sets with these problems. As a result, I routinely saw model fit statistics that were much lower than the standard rules of thumb for acceptable fit---a reflection of the limited statistical and distributional robustness of such sample data. The best way to overcome the resultant low model fits (after trying transformations of the variables to different scales) was to compare the fit of competing models. The best-fitting model, when compared to competing models, may still show a relatively poor absolute fit value (judged against the standard rules of thumb), but by demonstrating that it was the best among the alternatives, the case could be made that it was the best possible model given the constraints of the sample data.

This leads to the MAJOR flaw of this study. Although the authors discuss the sample problems above, they tested only one model...the WAIS-III four-factor model. They then looked at the absolute values of the fit statistics and concluded that the 4-factor model was not a good fit. I see this as a major flaw. Since the standard rules of thumb for the absolute magnitude of fit statistics may no longer hold in samples with statistical and distributional problems, they should have specified competing models (e.g., two-factor, CHC, single-factor) and then compared the relative model fit statistics before rendering a conclusion.

Finally, as the authors correctly point out, the current results, even with the flaws above, may simply reflect the well-established finding that the differentiation of cognitive abilities is less for lower-functioning individuals and greater for higher-functioning individuals. This is Spearman's Law of Diminishing Returns (SLODR). [Click here for an interesting recent discussion of SLODR]

Bottom line for the blogmaster--I judge the authors' conclusions to be overstated for the reasons noted above, particularly the failure to compare the 4-factor model to alternative models. It is very possible that the 4-factor model may be the best-fitting model given the statistical and distributional constraints of the underlying sample data.


Abstract

Intellectual assessment is central to the process of diagnosing an intellectual disability and the assessment process needs to be valid and reliable. One fundamental aspect of validity is that of measurement invariance, i.e. that the assessment measures the same thing in different populations. There are reasons to believe that measurement invariance of the Wechsler scales may not hold for people with an intellectual disability. Many of the issues which may influence factorial invariance are common to all versions of the scales. The present study, therefore, explored the factorial validity of the WAIS-III as used with people with an intellectual disability. Confirmatory factor analysis was used to assess goodness of fit of the proposed four factor model using 13 and 11 subtests. None of the indices used suggested a good fit for the model, indicating a lack of factorial validity and suggesting a lack of measurement invariance of the assessment with people with an intellectual disability. Several explanations for this and implications for other intellectual assessments were discussed.




Saturday, March 12, 2011

FYiPOST: SAGE Open - submit your manuscripts and become part of this groundbreaking publication




Dear Kevin McGrew,

SAGE Open

SAGE Open, our new open access publication, has received more than 200 submissions since launching on January 1, with new articles being submitted daily. Be a part of this groundbreaking publication and prepare your manuscript today.

SAGE Open publishes peer-reviewed, original research and review articles in an interactive, open-access format. Articles may span the full spectrum of the social and behavioral sciences and the humanities. Find out more, including manuscript submission guidelines, at www.sageopen.com.

 

Why publish in SAGE Open?

  • Quick review and decision times for authors
  • Speedy, continuous-publication online format
  • Global distribution of your research via SAGE Journals Online, including enhanced online features such as public usage metrics, comments features, subject categories, and article ranking and recommendations
  • Professional copyediting and typesetting of your article
  • $195 introductory author acceptance fee (discounted from the regular price of $695)

Consider publishing in SAGE Open if you want...

  • Quality reviews and efficient production, ensuring the quickest publication time
  • Free, broad, and global distribution on a powerful, highly discoverable publishing platform
  • Branding and marketing by a world-leading social science publisher, including promotion of your article via publicity and social media channels
  • Open access publication due to university or government mandates
 

Manuscript submissions are handled online through SAGE Track, SAGE's web-based peer review and submission system, powered by ScholarOne Manuscripts™. Submit your manuscripts today at http://mc.manuscriptcentral.com/sageopen.

Interested in serving as a reviewer?

  1. Visit the manuscript submission site and click the "Create Account: new users click here" button in the center of the screen.
  2. Be prepared to enter your e-mail address and to select at least five (5) keywords to describe your areas of expertise.

Please direct any inquiries to sageopen@sagepub.com.

Sincerely,

Bob Howard
Executive Director, Social Science Journals


Measurement invariance explained @psypress, 3/12/11 6:21 AM

Psychology Press (@psypress)
3/12/11 6:21 AM
Have a sneak peek at new title Statistical Approaches to Measurement Invariance. Sample chapter available here: goo.gl/ol8mM #stats




Friday, March 11, 2011

Research byte: "Feeling" numbers--what is "number sense"? (Gq in CHC model)


While at the NASP conference I ran across a number of excellent poster papers on number sense, a hot topic in mathematics and individual-differences research. Today I received a PDF of one of the papers I requested....a review of the literature on the various components that have been mentioned regarding number sense (34 different elements). The poster paper can be viewed by clicking here (the authors gave me permission). [double click on image to enlarge]






Of interest to me is the factor structure of number sense. Although 34 elements may be mentioned in definitions, how many latent dimensions really exist? My reading has suggested two. This is a nice paper and the references are awesome for anyone looking to get up to speed in this area.
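As a toy illustration of the latent-dimensions question, the sketch below builds a hypothetical correlation matrix for six number-sense measures generated by two underlying dimensions and counts eigenvalues greater than 1 (the Kaiser criterion, one rough rule for estimating the number of dimensions). The values are invented, not taken from the poster.

```python
import numpy as np

# Hypothetical correlation matrix: two blocks of three measures each,
# r = .60 within a block and r = .20 across blocks (illustrative only).
within, across = 0.6, 0.2
R = np.full((6, 6), across)
R[:3, :3] = within
R[3:, 3:] = within
np.fill_diagonal(R, 1.0)

# Kaiser criterion: eigenvalues of R greater than 1 suggest the number
# of latent dimensions.
eigvals = np.linalg.eigvalsh(R)
print((eigvals > 1).sum())  # 2
```

A matrix built from two clusters recovers two dimensions, consistent with the reading suggesting two latent dimensions of number sense.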





Thursday, March 10, 2011

FYiPOST: Orthogonal higher order structure and confirmatory factor analysis of the French Wechsler Adult Intelligence Scale (WAIS-III)

According to the most widely accepted Cattell–Horn–Carroll (CHC) model of intelligence measurement, each subtest score of the Wechsler Intelligence Scale for Adults (3rd ed.; WAIS–III) should reflect both 1st- and 2nd-order factors (i.e., 4 or 5 broad abilities and 1 general factor). To disentangle the contribution of each factor, we applied a Schmid–Leiman orthogonalization transformation (SLT) to the standardization data published in the French technical manual for the WAIS–III. Results showed that the general factor accounted for 63% of the common variance and that the specific contributions of the 1st-order factors were weak (4.7%–15.9%). We also addressed this issue by using confirmatory factor analysis. Results indicated that the bifactor model (with 1st-order group and general factors) better fit the data than did the traditional higher order structure. Models based on the CHC framework were also tested. Results indicated that a higher order CHC model showed a better fit than did the classical 4-factor model; however, the WAIS bifactor structure was the most adequate. We recommend that users do not discount the Full Scale IQ when interpreting the index scores of the WAIS–III because the general factor accounts for the bulk of the common variance in the French WAIS–III. The 4 index scores cannot be considered to reflect only broad ability because they include a strong contribution of the general factor. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
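For readers curious about the mechanics, the Schmid-Leiman transformation mentioned in the abstract amounts to a simple matrix calculation: general-factor loadings are the product of the first- and second-order loadings, and group-factor loadings are residualized by sqrt(1 - g^2). The loadings below are hypothetical, not the French WAIS-III estimates.

```python
import numpy as np

# Hypothetical first-order loadings of 6 subtests on 2 group factors.
L = np.array([
    [0.8, 0.0], [0.7, 0.0], [0.6, 0.0],
    [0.0, 0.7], [0.0, 0.6], [0.0, 0.5],
])
g = np.array([0.8, 0.7])  # second-order loadings of the group factors on g

# Schmid-Leiman orthogonalization.
general = L @ g                      # subtest loadings on the general factor
group = L * np.sqrt(1 - g**2)        # residualized group-factor loadings

# The orthogonalized solution reproduces the original common variance:
Phi = np.outer(g, g) + np.diag(1 - g**2)   # implied factor correlations
lhs = np.outer(general, general) + group @ group.T
rhs = L @ Phi @ L.T
print(np.allclose(lhs, rhs))  # True
```

Summing the squared general vs. group loadings is then how one obtains the proportion of common variance due to the general factor, the quantity the abstract reports as 63%.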








FYiPOST: Measurement invariance of neuropsychological tests in diverse older persons.

Objective: Comparability of meaning of neuropsychological test results across ethnic, linguistic, and cultural groups is important for clinicians challenged with assessing increasing numbers of older ethnic minorities. We examined the dimensional structure of a neuropsychological test battery in linguistically and demographically diverse older adults. Method: The Spanish and English Neuropsychological Assessment Scales (SENAS), developed to provide psychometrically sound measures of cognition for multiethnic and multilingual applications, was administered to a community dwelling sample of 760 Whites, 443 African Americans, 451 English-speaking Hispanics, and 882 Spanish-speaking Hispanics. Cognitive function spanned a broad range from normal to mildly impaired to demented. Multiple group confirmatory factor analysis was used to examine equivalence of the dimensional structure for the SENAS across the groups defined by language and ethnicity. Results: Covariance among 16 SENAS tests was best explained by five cognitive dimensions corresponding to episodic memory, semantic memory/language, spatial ability, attention/working memory, and verbal fluency. Multiple Group confirmatory factor analysis supported a common dimensional structure in the diverse groups. Measures of episodic memory showed the most compelling evidence of measurement equivalence across groups. Measurement equivalence was observed for most but not all measures of semantic memory/language and spatial ability. Measures of attention/working memory defined a common dimension in the different groups, but results suggest that scores are not strictly comparable across groups. Conclusions: These results support the applicability of the SENAS for use with multiethnic and bilingual older adults, and more broadly, provide evidence of similar dimensions of cognition in the groups represented in the study. (PsycINFO Database Record (c) 2011 APA, all rights reserved)








FYiPOST: Verbal ability and executive functioning development in preschoolers at Head Start.

Research suggests that executive functioning skills may enhance the school readiness of children from disadvantaged homes. Questions remain, however, concerning both the structure and the stability of executive functioning among preschoolers. In addition, there is a lack of research addressing potential predictors of longitudinal change in executive functioning during early childhood. This study examined the structure of executive functioning from fall to spring of the preschool year using a multimethod battery of measures. Confirmatory factor analyses revealed a unidimensional model fit the data well at both time points, and tests of measurement invariance across time points indicated that children's mean latent executive functioning scores significantly improved over time. Verbal ability was a significant predictor of longitudinal change in executive functioning. Theoretical implications and directions for future research are discussed. (PsycINFO Database Record (c) 2011 APA, all rights reserved)








Three layers of working memory? @PsyPost, 3/10/11 1:27 PM

PsyPost.org (@PsyPost)
3/10/11 1:27 PM
Study proves the brain has 3 layers of working memory bit.ly/gdxmZr



Wednesday, March 09, 2011

Intelligent IQ testing @sbkaufman, 3/9/11 5:14 PM

Scott Barry Kaufman (@sbkaufman)
3/9/11 5:14 PM
Intelligent Testing huff.to/ihrwF3 via @huffingtonpost



FYiPOST: Psychologists who Tweet - first major update

We've updated our list of psychologists (plus a few stray neuroscientists, therapists, students and psych-bloggers) who Tweet. Follower counts were correct as of Friday 4 March 2011. Compare with the previous list compiled in November 2010. The Digest editorial team are in purple highlight.

Laura Kauffman. Child psychologist. Followers: 86444
Richard Wiseman. Parapsychologist. Followers: 68001
George Huba. Psychologist. Followers: 20628
Aleks Krotoski. Psychologist, tech journalist. Followers: 16043
Marsha Lucas. Neuropsychologist. Followers: 14462
Jonah Lehrer. Writer, blogger. Followers: 11080
Dan Ariely. Behavioural Economist, author. Followers: 10314
Jo Hemmings. Celebrity psychologist. Followers: 9735
Steven Pinker. Psycholinguist, evolutionary psychologist, author. Followers: 8978
David Ballard. Psychologist, Head of APA marketing. Followers: 6737
Graham Jones. Internet (cyber) psychologist. Followers: 6603
Christian Jarrett. That's me, editor of BPS Research Digest! Followers: 5417
Melanie Greenberg. Clinical health psychologist. Followers: 4723
Petra Boynton. Psychologist, sex educator. Followers: 4686
Ciarán O'Keeffe. Parapsychologist. Followers: 4603
Vaughan Bell. Clinical neuropsychologist, blogger. Followers: 4109
Mo Costandi. Writer, blogger. Followers: 4072
Jeremy Dean. Blogger. Followers: 3335
John Grohol. Founder of Psychcentral. Followers: 3182
Bruce Hood. Cognitive scientist. Followers: 2602
Rita Handrich. Psychologist, editor. Followers: 2435
David Eagleman. Neuroscientist, author. Followers: 2422
Daniel Levitin. Psychologist, author. Followers: 2419
Brian MacDonald. Clinical psychologist. Followers: 2371
David Webb101. Psychology tutor, blogger. Followers: 2320
Sandeep Gautam. Blogger. Followers: 1952
Jay Watts. Clinical psychologist, Lacanian. Followers: 1567
Maria Panagiotidi. Grad student. Followers: 1562
Wendy Cousins. Skeptic. Followers: 1473
Anthony Risser. Neuropsychologist, blogger. Followers: 1416
Chris Atherton. Cognitive psychologist. Followers: 1315
G. Tendayi Viki. Social psychologist. Followers: 1267
Ana Loback. Psychologist. Followers: 1244
Alex Linley. Positive psychologist. Followers: 1237
Mark Changizi. Cognitive psychologist, author. Followers: 1221
Jesse Bering. Psychologist, blogger. Followers: 1214
Rolfe Lindgren. Psychologist, personality expert. Followers: 1187
Cary Cooper. Occupational psychologist. Followers: 1093
Jason Goldman85. Grad student, blogger. Followers: 1082
Joseph LeDoux. Neuroscientist, rocker. Followers: 1033
Sophie Scott. Neuroscientist. Followers: 982
Chris French. Anomalistic psychologist. Followers: 973
Dorothy Bishop. Developmental neuropsychologist. Followers: 882
The Neurocritic. Blogger. Followers: 880
Jon Sutton. Editor of The Psychologist. Followers: 796
Karen Pine. Psychologist, author. Followers: 783
Uta Frith. Developmental neuropsychologist, autism expert. Followers: 730
Claudia Hammond. Radio presenter. Followers: 715
John Cacioppo. Psychologist, social neuroscientist. Followers: 705
Sarah-Jayne Blakemore. Cognitive neuroscientist. Followers: 691
Mark Batey. Creativity expert. Followers: 682
Rob Archer. Organisational psychologist. Followers: 680
Ben Hawkes. Psychologist, comedian. Followers: 679
Monica Whitty. Cyberpsychologist. Followers: 663
Charles Fernyhough. Developmental psychologist, author. Followers: 662
Marco Iacoboni. Neuroscientist, mirror neuron expert. Followers: 615
James Neill. Psychology lecturer. Followers: 590
Eran Katz. Grad student (tweets in Hebrew). Followers: 549
Rory O'Connor. Health psychologist, suicide researcher. Followers: 526
Tom Stafford. Psychologist, author. Followers: 494
Christopher H. Ramey. Psychologist. Followers: 485
Bruce Hutchison. Clinical psychologist. Followers: 465
Rachel Robinson. Child psychologist. Followers: 447
Manon Eileen. Clinical psychologist and criminologist. Followers: 442
Rebecca Symes. Sports psychologist. Followers: 427
Wray Herbert. Writer for APS, author. Followers: 417
Hilary Bruffell. Social psychologist. Followers: 412
Atle Dyregrov. Psychologist, expert in crisis psychology. Followers: 405
Steven Brownlow. Clinical and forensic psychologist. Followers: 405
Mike Garth. Sports psychologist. Followers: 402
Victoria Galbraith. Counselling psychologist. Followers: 389
Daniel Simons. Cognitive psychologist, author. Followers: 355
Daryl O'Connor. Health psychologist. Followers: 352
David Matsumoto. Psychologist and judoka. Followers: 326
Karen Franklin. Forensic psychologist. Followers: 299
Patrick Macartney. Psychologist and sociologist. Followers: 297
Caroline Watt. Parapsychologist. Followers: 296
Ciarán Mc Mahon. Psychologist. Followers: 283
Tim Byron. Music psychologist. Followers: 275
Voula Grand. Psychologist and writer. Followers: 273
Lorna Quandt. Grad student. Followers: 267
Bex Hewett. PhD student in occupational psychology. Followers: 261
Kevin McGrew. Intelligence expert. Followers: 259
Daniela O'Neill. Developmental psychologist. Followers: 245
Sean Nethercott. Psychologist. Followers: 243
Romeo Vitelli. Psychologist in private practice. Followers: 233
Andy Fugard. Cognitive scientist. Followers: 229
Erika Salomon. Grad student. Followers: 217
Coert Visser. Psychologist. Followers: 217
Jenna Condie. Environmental psychologist. Followers: 216
Astrid Kitti. Grad student. Followers: 203
Margarita Holmes. Psychologist and sex therapist. Followers: 203
Alex Fradera. Editor of BPS Occupational Digest. Followers: 194
Sue Hartley. Psychologist. Followers: 194
Johnrev Guilaran. Clinical psychologist trainee. Followers: 185
Janet Civitelli. Counselling psychologist. Followers: 175
Jon Simons. Cognitive scientist. Followers: 174
Ken Gilhooly. Cognitive psychologist. Followers: 166
Adrian Wale. Cognitive scientist, writer. Followers: 162
Sanja Dutina. Psychologist. Followers: 161
Gareth Morris. Grad student. Followers: 155
Talya Grumberg. Mental health counsellor. Followers: 155
Lila Chrysikou. Psychologist. Followers: 151
Ruthanna Gordon. Psychologist, sustainability expert. Followers: 151
Alex Birch. Business psychologist. Followers: 136
Craig Bertram. Grad student. Followers: 135
Suzanne Conboy-Hill. Clinical psychologist. Followers: 135
Simon Dymond. Behavioural neuroscientist. Followers: 130
Marc Scully. Social psychologist. Followers: 127
Mark Hoelterhoff. Experimental existential psychologist. Followers: 127
Nancy Hoffman. Neuropsychologist. Followers: 117
Valeschka Guerra. Psychology lecturer. Followers: 116
Emma Dunlop. Grad student. Followers: 115
Deb Halasz. Research psychologist. Followers: 112
Matteo Cantamesse. Social psychologist. Followers: 112
Catriona Morrison. Experimental psychologist. Followers: 107
Dylan Lopich. Clinical psychologist. Followers: 106
John Houser. School psychologist. Followers: 106
Arvid Kappas. Emotion researcher. Followers: 89
Andrew and Sabrina. Psychological scientists. Followers: 84
Simon Knight. Psychologist. Followers: 84
Peter Kinderman. Clinical psychologist. Followers: 83
Paul Hanges. Organisational psychologist. Followers: 83
John Hyland. Experimental psychologist. Followers: 82
Chelsea Walsh. Family and marriage therapist. Followers: 81
Kevin Friery. Psychologist, psychotherapist. Followers: 80
Gerald Guild. Psychologist, autism specialist. Followers: 78
Gillian Smith. Alcohol and drug researcher. Followers: 75
Jen Lewis. Grad student. Followers: 74
Scott Kaufman. Cognitive psychologist. Followers: 69
Jui Bhagwat. Child psychologist. Followers: 63
Tom Walton. Grad student. Followers: 61
Chris Brand. Cognitive psychologist in training. Followers: 59
Odette Beris. Psychologist and coach. Followers: 59
David Hughes. Psychologist. Followers: 53
Barry McGuinness. Psychologist, writer. Followers: 47
Caitlin Allison. Trainee counselling psychologist. Followers: 47
Philip Collier. Sport and positive psychologist. Followers: 40
David Yates. Grad student. Followers: 36
Alison Price. Occupational psychologist. Followers: 35
Sian Jones. Grad student. Followers: 31
Helen Jones. Clinical psychologist. Followers: 29
John Taylor. Cognitive psychologist. Followers: 23
Kathryn Newns. Clinical psychologist. Followers: 21
Lorraine Hope. Cognitive psychologist. Followers: 10
Victoria Mason. Psychology lecturer. Followers: 9

Thanks to Ben Watson for updating the follower counts. If you'd like to be added to future iterations of the list please add your full name and Twitter tag to comments. Future additions to the list must be fully-qualified psychologists. Also, we're restricting the list to individuals, so no organisations please. 

Sent from Kevin McGrew's iPad
Kevin McGrew, PhD
Educational Psychologist

Sunday, March 06, 2011

Complex creative people

Scott Barry Kaufman (@sbkaufman), 3/6/11 12:21 PM:
Why Creative People Are So Complex huff.to/epY0TX via @huffingtonpost



FYiPOST: Psychometrika, Vol. 76, Issue 1 - New Issue Alert

For the quantoid readers of IQ's Corner


Volume 76 Number 1 is now available on SpringerLink

In this issue:
Acknowledgements

Simplicity and Typical Rank Results for Three-Way Arrays
Jos M. F. ten Berge

Kullback–Leibler Information and Its Applications in Multi-Dimensional Adaptive Testing
Chun Wang, Hua-Hua Chang & Keith A. Boughton

Structural Modeling of Measurement Error in Generalized Linear Models with Rasch Measures as Covariates
Michela Battauz & Ruggero Bellio

A Boundary Mixture Approach to Violations of Conditional Independence
Johan Braeken

A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
Guangjian Zhang, Sy-Miin Chow & Anthony D. Ong

Investigating the Impact of Uncertainty About Item Parameters on Ability Estimation
Jinming Zhang, Minge Xie, Xiaolan Song & Ting Lu

Positive Definiteness via Off-diagonal Scaling of a Symmetric Indefinite Matrix
Peter M. Bentler & Ke-Hai Yuan

A Network Approach for Evaluating Coherence in Multivariate Systems: An Application to Psychophysiological Emotion Data
Fushing Hsieh, Emilio Ferrer, Shuchun Chen, Iris B. Mauss, Oliver John & James J. Gross

Book Review: WILCOX, R. R. (2010) Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy, 2nd edition.
Tian Siva Tian

Book Review: MOSTELLER, F. (2010) The Pleasures of Statistics: The Autobiography of Frederick Mosteller.
Howard Wainer

Program of the 75th Annual Meeting of the Psychometric Society

Minutes of the Psychometric Society Business Meeting

Report of the Treasurer of the Psychometric Society

Index for Volume 75