I wear a number of hats within the broad field of educational psychology. One is that of an applied psychometrician. Whenever anyone asks what I do, I receive strange looks when that title rolls out of my mouth, and I then always need to provide a general explanation.
I've decided to take a little time and generate a brief explanation. I hope this helps.
The online American Psychological Association (APA) Dictionary of Psychology defines psychometrics as:
n. the branch of psychology concerned with the quantification and measurement of mental attributes, behavior, performance, and the like, as well as with the design, analysis, and improvement of the tests, questionnaires, and other instruments used in such measurement. Also called psychometric psychology; psychometry.
The definition can be understood from the two components of the word. Psycho refers to “psyche” or the human mind. Metrics refers to “measurement.” Thus, in simple terms, psychometrics means psychological measurement--it is the math and science behind psychological testing.
Applied psychometrics is concerned with applying psychological theory, statistical methods, and measurement techniques to the development, evaluation, and interpretation of psychological tests. This contrasts with pure or theoretical psychometrics, which focuses on developing new measurement theories, methods, statistical procedures, and the like. An applied psychometrician uses the theories, tools, and techniques developed by theoretical psychometricians in the actual development, evaluation, and interpretation of psychological tests. By way of analogy, applied psychometrics is to theoretical psychometrics as applied research is to pure research.
The principles of psychometric testing are very broad in their potential application, and have been applied to such areas as intelligence, personality, interests, attitudes, neuropsychological functioning, and diagnostic measures (Irwing & Hughes, 2018). As noted by Irwing and Hughes (2018), psychometrics is broad as "It applies to many more fields than psychology, indeed biomedical science, education, economics, communications theory, marketing, sociology, politics, business, and epidemiology amongst other disciplines, not only employ psychometric testing, but have also made important contributions to the subject" (p. 3).
Although there are many publications relevant to the topic of test development and psychometrics, the most useful and important single source is the Standards for Educational and Psychological Testing (a.k.a. the Joint Test Standards; American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 2014). The Joint Test Standards outline standards and guidelines for test developers, publishers, and users (psychologists) of tests.
Given that the principles and theories of psychometrics are generic (they cut across all subdisciplines of psychology that use psychological tests), and that there is a professionally accepted set of standards (the Joint Test Standards), an expert in applied psychometrics has the skills and expertise to evaluate the fundamental, universal, or core measurement integrity (i.e., quality of norms, reliability, validity, etc.) of various psychological tests and measures (e.g., surveys, IQ tests, neuropsychological tests, personality tests). Sub-disciplinary expertise and training would still be required for expert interpretation within a given sub-discipline. For example, expertise in brain development, brain functioning, and brain-behavior relations would be necessary to use neuropsychological tests to make clinical judgments regarding brain dysfunction, types of brain disorders, and so on. However, the basic psychometric characteristics of almost all psychological and educational tests (e.g., neuropsychological, IQ, achievement, personality, interest) can be evaluated by professionals with expertise in applied psychometrics.
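To make "measurement integrity" concrete: one of the core checks an applied psychometrician runs is internal-consistency reliability. Here is a minimal sketch using Cronbach's alpha; the examinee-by-item score matrix is entirely made up for illustration.

```python
import statistics

def cronbach_alpha(item_scores):
    """Internal-consistency reliability for an examinees-by-items score matrix."""
    k = len(item_scores[0])                      # number of items
    totals = [sum(person) for person in item_scores]
    item_vars = [statistics.pvariance([person[i] for person in item_scores])
                 for i in range(k)]
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical scores: 4 examinees x 3 items
scores = [[2, 3, 3], [4, 4, 5], [1, 2, 2], [5, 5, 4]]
alpha = cronbach_alpha(scores)   # high alpha -> items hang together well
```

Values above roughly .80 are conventionally taken as adequate for many applied uses, though the acceptable threshold depends on the stakes of the decision.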
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: Author.
Irwing, P., & Hughes, D. J. (2018). Test development. In P. Irwing, T. Booth, & D. J. Hughes (Eds.), The Wiley handbook of psychometric testing: A multidisciplinary reference on survey, scale and test development (pp. 3-49). Hoboken, NJ: John Wiley & Sons.
Monday, July 16, 2018
Thursday, July 12, 2018
Great psychometric resource: The Wiley Handbook of Psychometric Testing.
I just received my two volume set of this excellent resource on psychometric testing. There are not many good books that cover such a broad array of psychometric measurement issues. This is not what I would call "easy reading." This is more like a "must have" resource book to have "at the ready" when seeking to understand contemporary psychometric test development issues.
Saturday, March 17, 2018
The importance of differential psychology for school learning: 90% of school achievement variance is due to student characteristics
This is why the study of individual differences/differential psychology is so important. If you don't want to read the article, you can watch a video of Dr. Detterman in which he summarizes his thinking and this paper.
Education and Intelligence: Pity the Poor Teacher because Student Characteristics are more Significant than Teachers or Schools. Article link.
Douglas K. Detterman
Case Western Reserve University (USA)
Abstract
Education has not changed from the beginning of recorded history. The problem is that focus has been on schools and teachers and not students. Here is a simple thought experiment with two conditions: 1) 50 teachers are assigned by their teaching quality to randomly composed classes of 20 students, 2) 50 classes of 20 each are composed by selecting the most able students to fill each class in order and teachers are assigned randomly to classes. In condition 1, teaching ability of each teacher and in condition 2, mean ability level of students in each class is correlated with average gain over the course of instruction. Educational gain will be best predicted by student abilities (up to r = 0.95) and much less by teachers' skill (up to r = 0.32). I argue that seemingly immutable education will not change until we fully understand students and particularly human intelligence. Over the last 50 years in developed countries, evidence has accumulated that only about 10% of school achievement can be attributed to schools and teachers while the remaining 90% is due to characteristics associated with students. Teachers account for from 1% to 7% of total variance at every level of education. For students, intelligence accounts for much of the 90% of variance associated with learning gains. This evidence is reviewed.
Monday, December 05, 2016
Human intelligence research four-levels of explanation: Connecting the dots - an Oldie-But-Goodie (OBG) post
Click on image to enlarge.
For decades I have attempted to keep abreast of intelligence-related research, particularly research that would help with the development, analysis, and interpretation of applied intelligence tests. I frequently struggled with integrating research that focused on brain-behavior relations or networks, neural efficiency, etc. I then rediscovered a simple three-level categorization of intelligence research by Earl Hunt. I modified it into a four-level model, which is represented in the figure above.
In this "intelligent" testing series, primary emphasis will be on harnessing information from the top "psychometric level" of research to aid in test interpretation. However, given the increased impact of cognitive neuropsychological research on test development, often one must turn to level 2 (information processing) to understand how to interpret specific tests.
This series will draw primarily from the first two levels, although there may be times when I import knowledge from the two brain-related levels.
To better understand this framework, and put the forthcoming information in this series in proper perspective, I would urge you to view the "connecting the dots" video PPT that I previously posted at this blog.
Here it is. The next post will start into the psychometric level information that serves as the primary foundation of "intelligent" intelligence testing.
Tuesday, September 16, 2014
Good intro overview article on exploratory factor analysis
Wednesday, June 25, 2014
Tuesday, May 27, 2014
Victory for psychometrics in Hall v Florida "bright line" (ignoring SEM) SCOTUS decision re Atkins MR/ID death penalty cases
This morning SCOTUS rectified the long-standing "bright line" (ignoring SEM) problem with Atkins ID/MR cases in Florida. Click here for background information. Click here for today's decision.
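The psychometric issue at the heart of the decision is easy to show numerically. A minimal sketch, assuming the standard IQ score scale (SD = 15) and an illustrative reliability of .98 (real tests vary):

```python
import math

SD = 15          # standard deviation of the IQ score scale
rxx = 0.98       # assumed illustrative test reliability
sem = SD * math.sqrt(1 - rxx)          # standard error of measurement, ~2.12
score = 71                              # hypothetical obtained full-scale IQ

lo = score - 1.96 * sem                 # 95% confidence interval bounds
hi = score + 1.96 * sem
crosses_cutoff = lo < 70 < hi           # the band of error straddles the cutoff
```

An obtained score of 71 thus carries a 95% confidence band of roughly 67 to 75. Treating 70 as a bright line ignores that the true score could plausibly fall on either side of it, which is exactly the error the Court rejected.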
Monday, August 27, 2012
For the psychometrically inclined...and psychologists who should be
Very good food for thought regarding the need for progress in applied psychological measurement.
Friday, August 24, 2012
IQ Score Interpretations in Atkins MR/ID Death Penalty Cases: The Good, Bad and the Ugly
I just uploaded the following PPT presentation to my SlideShare account---IQ Score Interpretation in Atkins MR/ID Death Penalty Cases: The Good, Bad and the Ugly. It was presented this month (Sept, 2012) at the Habeas Assistance Training Seminar. Click here to view.
Saturday, May 12, 2012
Sampling error--the law of small numbers as per "Thinking fast and slow"
Nice explanation of the problem of sampling error in small samples in Kahneman's highly regarded book.
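Kahneman's "law of small numbers" is easy to demonstrate: sample means from small samples scatter far more widely than those from large samples, so small samples produce extreme results far more often. A quick simulation sketch:

```python
import random
import statistics

random.seed(1)

def spread_of_sample_means(sample_size, trials=2000):
    """SD of sample means drawn repeatedly from a standard normal population."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(sample_size))
             for _ in range(trials)]
    return statistics.pstdev(means)

small = spread_of_sample_means(4)     # near 1/sqrt(4)  = 0.50
large = spread_of_sample_means(100)   # near 1/sqrt(100) = 0.10
```

The spread shrinks with the square root of sample size, which is why a run of striking results from tiny samples should raise suspicion rather than confidence.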
Monday, March 12, 2012
Research Byte: Implications of select psychometric issues for neuropsych assessment
Friday, January 13, 2012
How to estimate best IQ score if someone has taken multiple IQ tests: The psychometric magic of Dr. Joel Schneider
Dr. Joel Schneider has posted an excellent explanation of how to estimate a person's "true IQ score" when that person has taken multiple IQ tests at different times. Probably the most important take-away message is that one should never calculate the simple arithmetic average. The median would be more appropriate, but Joel provides an even more psychometrically sound method and an Excel spreadsheet for implementing his excellent logic and methods.
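To see one reason the arithmetic average misleads, consider regression to the mean. The sketch below is NOT Dr. Schneider's method; it is a hypothetical illustration (invented scores and reliabilities) of a classical true-score adjustment that pulls each obtained score toward the population mean before combining.

```python
# Hypothetical scores and reliabilities for three IQ administrations.
scores = [112, 104, 118]
reliabilities = [0.95, 0.92, 0.96]

naive_mean = sum(scores) / len(scores)

# Regress each obtained score toward the population mean (100) by its
# reliability before averaging (a classical true-score estimate).
adjusted = [100 + r * (x - 100) for x, r in zip(scores, reliabilities)]
adjusted_mean = sum(adjusted) / len(adjusted)
```

For scores above the mean, the adjusted estimate comes out lower than the naive average, because part of each above-average obtained score is measurement error expected to regress toward 100.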
Wednesday, January 11, 2012
Saturday, October 08, 2011
Tuesday, October 04, 2011
Dr. Doug Detterman's bytes: Psychometric validity

I have been remiss (busy) in my posting of Dr. Doug Detterman's bytes. Here is a new one, on validity.
Validity is the extent to which a test measures what it is supposed to measure and predicts what it is supposed to predict. When Binet developed his intelligence test, his goal was to identify children who would not do well in school so they could be given help. To the extent that Binet's test identified such children, it was valid. In Binet's case, proving the validity of the test amounted to showing that the test predicted or correlated with school performance. (Binet was handicapped, though, since the correlation coefficient was not widely known at the time of his first test.) Note that there is no requirement to provide an explanation of why the test predicts what it was designed to predict, only that it do it. Validity provides an empirical relationship that may be absent of any theoretical meaning. Theoretical meaning is given to the relationship when people attempt to explain why the test works to produce this validity relationship.
Tests designed to predict one thing may be found to predict other things. This is certainly the case with intelligence tests. Relationships between intelligence and many other variables have been found. Such relationships help to build a theory about how and why the test works and ultimately about the relationship of the variables studied.
Tuesday, September 20, 2011
IRT-based clinical psychological assessment and test development

IRT-based test development has been one of the most important psychometric developments of the past few decades.
This is a follow-up to a prior brief FYI post about an excellent review article regarding the benefits of IRT methods for psychological test development and interpretation. I have now read the article in depth and have provided additional comments and links via the IQs Reading blog feature.
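For readers new to IRT, the simplest member of the family, the one-parameter logistic (Rasch) model, can be sketched in a few lines. The ability and difficulty values below are arbitrary illustrations.

```python
import math

def rasch_probability(theta, b):
    """1PL (Rasch) probability of a correct response, given person ability
    theta and item difficulty b, both on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

p_matched = rasch_probability(0.0, 0.0)   # ability equals difficulty -> 0.5
p_easy = rasch_probability(1.0, -1.0)     # able person, easy item -> high p
```

This separation of person ability from item difficulty on a common scale is what enables the clinical advantages discussed in the article, such as adaptive testing and linking scores across different item sets.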
Enjoy.
Thursday, September 15, 2011
Book Nook: An Introduction to Psychometrics

People always ask me for recommendations for a good introductory book on psychometrics. Until recently, there were few such books. There are older texts by Thorndike and Nunnally, and a boatload of highly topic-specific advanced books (IRT, factor analysis, etc.), but few books suitable for a first course in psychometrics.
I recently ordered the above book and have been skimming sections when I find time. I believe that this is probably one of the better contemporary introductory texts on psychometrics. I would recommend it to anyone wanting to learn more about the basics of psychometrics.
Saturday, September 10, 2011
Research Bytes: Bi-factor item factor analysis and CFA model fit/misfit
Sunday, August 28, 2011
Research byte: Another good overview article on IRT test development methods--relevance to clinical assessment
Sunday, August 21, 2011