Brian Habing's Research Interests
Psychometrics and Educational Measurement
and Item Response Theory in Particular
My current research is in the field of Educational and Psychological
Measurement. This is one of the major branches of Psychometrics, the
science of applying statistics to psychological and educational data.
Psychometricians are found in departments of Statistics, Educational Psychology,
and Quantitative Psychology, as well as at many national testing companies
and large school districts.
Several major journals feature this kind of research, and articles on
particular topics can be found by searching the ISI Citation Database.
Many of the papers found in these journals were originally presented at the
annual meetings of the National Council on Measurement in Education and of
Division D of the American Educational Research Association.
If one were looking for employment as a Psychometrician or Measurement
specialist, relevant openings appear on the general job search sites.
Additionally, positions are posted at the web-sites of the large testing
companies, and many of these corporations also have internship and post-doc
programs.
From the Statistical standpoint, much of the work in Psychometrics
focuses on the areas of Multivariate Analysis (e.g. factor analysis,
dual scaling, etc.), Categorical Data Analysis (e.g. loglinear
modeling), and Item Response Theory. This last topic, Item Response
Theory (IRT), is where most of my current research takes place.
Item response theory is the latent variable modeling approach commonly
applied to data from large scale standardized testing.
For example, after an administration of the ACT or SAT, the raw data are a huge
matrix of 0s and 1s. Each row of this matrix represents a different
examinee, and each column represents a different item. A 0 represents
an incorrect response, and a 1 a correct response.
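To make that data layout concrete, here is a small sketch with entirely hypothetical responses, showing the examinee-by-item matrix and two quantities that fall out of it directly (the number-correct score for each examinee and the proportion answering each item correctly):

```python
import numpy as np

# Hypothetical 0/1 response matrix: 4 examinees (rows) x 5 items (columns).
# 1 = correct response, 0 = incorrect response.
responses = np.array([
    [1, 1, 0, 1, 0],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [1, 0, 1, 1, 0],
])

raw_scores = responses.sum(axis=1)  # number-correct for each examinee
p_values = responses.mean(axis=0)   # proportion correct for each item

print(raw_scores.tolist())  # [3, 1, 5, 3]
print(p_values.tolist())    # [0.75, 0.75, 0.5, 0.75, 0.25]
```

IRT goes beyond these simple row and column summaries, but they are the starting point for describing items and examinees.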
The goal of IRT is both to describe the properties
of the items (are they difficult or easy? are they informative?) and
to estimate the abilities of the examinees. It is called a latent
variable model because the ability is not manifested directly; it can only be
measured indirectly through the various items that tap it.
That is, measuring math ability isn't like measuring free throw shooting
percentage, or speed running the 100 yard dash. While those latter
two can be directly measured on repeated occasions, math ability is
abstract and not entirely well defined. Math ability is defined only
through the set of all the possible items that could be asked on the exam.
For items that can be scored only as correct or incorrect, the basic units
of item response theory are the Item Response Functions (IRFs). Each item
has one of these curves, which function similarly to the curves in
logistic regression: they give
the probability of correctly answering the item, given the examinee's ability.
Of course in logistic regression the independent variable is generally
assumed to be
measured without error. Here the examinee ability isn't measured at all
except through the answers to the items!
This significantly complicates the estimation of the various parameters.
The figure to the right is an
example of an IRF. Here the examinee ability is on the standard normal
scale, and there is a 20% chance of guessing the item correctly. (The lower
asymptote of the curve is 0.20.)
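A curve of this shape is given by the three-parameter logistic (3PL) model, P(theta) = c + (1 - c) / (1 + exp(-a(theta - b))), where a is the item's discrimination, b its difficulty, and c the guessing (lower asymptote) parameter. A minimal sketch, using c = 0.20 as in the figure and purely illustrative values a = 1 and b = 0:

```python
import math

def irf_3pl(theta, a=1.0, b=0.0, c=0.20):
    """Three-parameter logistic item response function.

    Returns the probability of a correct response for an examinee
    of ability theta, where a is the discrimination, b the difficulty,
    and c the lower asymptote (guessing) parameter.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# The probability rises from the guessing floor c toward 1 as ability grows;
# at theta = b it is exactly halfway between c and 1.
for theta in (-3.0, 0.0, 3.0):
    print(theta, round(irf_3pl(theta), 3))
```

Note that an examinee far below the item's difficulty still answers correctly about 20% of the time, which is the role of the guessing parameter.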
Some of the current issues of study in IRT include:
- Computer Adaptive Testing - Estimating the examinee's ability when the
examinees take different items, which are selected based on their responses
to the previous items; and deciding how to select the item to be given next.
- Differential Item Functioning / Bias - Developing methods to determine
whether items differentiate between examinees of different demographic groups
who have the same ability (as opposed to differences in the average scores of
groups that are genuinely there because one of the groups is less proficient).
- Dimensionality Assessment - Determining how many different latent
dimensions make up the 'ability.' Is the exam just a math test? Or should
separate algebra, geometry, and trigonometry scores be reported?
- IRT Models - New models for polytomous data (not just 0 and 1, but with
partial credit) and for data that isn't unidimensional (where more than a
single score is needed) are still being developed and improved.
For further reading, the following are some excellent books on IRT.
- As an Introduction:
Hambleton & Swaminathan, Fundamentals of Item
Response Theory, SAGE, 1991.
- An overview of the most commonly used models:
van der Linden &
Hambleton, Handbook of Modern Item Response Theory, Springer, 1997.
- The classic text in the field:
Lord & Novick, Statistical Theories of Mental Test Scores,
Addison-Wesley, 1968.
- A guide to the commonly used estimation methods:
Baker, Item Response Theory: Parameter Estimation Techniques,
Marcel Dekker, 1992.
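As a toy illustration of the estimation problem mentioned earlier, suppose the item parameters were already known (in practice they too must be estimated, which is what makes the problem hard). An examinee's ability can then be estimated by maximizing the likelihood of the observed 0/1 response pattern over theta. The item parameter values and responses below are hypothetical, and the grid search stands in for the more refined numerical methods covered in texts like Baker's:

```python
import math

def irf_3pl(theta, a, b, c):
    """3PL item response function: probability of a correct response."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta, responses, items):
    """Log-likelihood of a 0/1 response pattern at ability theta."""
    ll = 0.0
    for u, (a, b, c) in zip(responses, items):
        p = irf_3pl(theta, a, b, c)
        ll += math.log(p) if u == 1 else math.log(1.0 - p)
    return ll

# Hypothetical (a, b, c) parameters for a five-item test, easiest first.
items = [(1.2, -1.0, 0.2), (0.8, -0.5, 0.2), (1.0, 0.0, 0.2),
         (1.5, 0.5, 0.2), (1.1, 1.5, 0.2)]
responses = [1, 1, 1, 0, 0]  # correct on the three easiest items only

# Crude grid search for the maximum likelihood estimate of theta.
grid = [i / 100.0 for i in range(-400, 401)]
theta_hat = max(grid, key=lambda t: log_likelihood(t, responses, items))
print(round(theta_hat, 2))
```

The estimate lands, sensibly, between the difficulties of the hardest item answered correctly and the easiest item missed; with guessing in the model, even the incorrect responses carry information about theta.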