Brian Habing's Research Interests
Psychometrics and Educational Measurement
and Item Response Theory in Particular
 
My current research is in the field of Educational and Psychological Measurement. This is one of the major branches of Psychometrics, the science of applying statistics to psychological and educational data. Psychometricians are found in departments of Statistics, Educational Psychology, and Quantitative Psychology, as well as at many national testing companies and large school districts.

Four of the main journals which feature this kind of research are:

and articles on particular topics can be found by searching the ERIC database or the ISI Citation Database. Many of the papers found in these journals were originally presented at the annual meetings of the Psychometric Society, the National Council on Measurement in Education, and Division D of the American Educational Research Association.

If one were looking for employment as a Psychometrician or Measurement specialist, some relevant job search sites include:

Additionally, position openings are posted at the websites for all of the large testing companies: ETS, ACT, LSAC, and CTB. Many of these corporations also have internship and postdoctoral programs.

From the statistical standpoint, much of the work in Psychometrics focuses on the areas of Multivariate Analysis (e.g., factor analysis, dual scaling), Categorical Data Analysis (e.g., loglinear modeling), and Item Response Theory. This last topic, Item Response Theory (IRT), is where most of my current research takes place. Item response theory is the latent variable modeling approach commonly applied to data from large-scale standardized testing. For example, after an administration of the ACT or SAT, the raw data is a huge matrix of 0s and 1s. Each row of this matrix represents a different examinee, and each column represents a different item. A 0 represents an incorrect response, and a 1 a correct response.
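The examinee-by-item matrix described above can be pictured with a small sketch; the responses here are made-up data purely for illustration.

```python
# A toy examinee-by-item response matrix (hypothetical data):
# rows = examinees, columns = items; 1 = correct, 0 = incorrect.
responses = [
    [1, 1, 0, 1, 0],  # examinee 1
    [0, 1, 0, 0, 0],  # examinee 2
    [1, 1, 1, 1, 1],  # examinee 3
]

# The classical "number-correct" score is just a row sum; IRT replaces
# this with a model-based ability estimate for each examinee.
raw_scores = [sum(row) for row in responses]
print(raw_scores)  # [3, 1, 5]
```

In a real testing program this matrix would have thousands of rows and perhaps a hundred columns, but the structure is the same.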

The goal of IRT is both to describe the properties of the items (are they difficult or easy? are they informative?) and to estimate the abilities of the examinees. It is called a latent variable model because ability is not manifested directly; it can only be measured indirectly through the various items that tap into it. That is, math ability isn't like free throw shooting percentage or speed in the 100-yard dash. While those latter two can be measured directly on repeated occasions, math ability is abstract and not entirely well defined. Math ability is defined only through the set of all possible items that could be asked on the exam.

For items that can be scored only as correct or incorrect, the basic units of item response theory are the Item Response Functions (IRFs). Each item has one of these curves, which function similarly to the curves in logistic regression: they give the probability of correctly answering the item, given the examinee's ability. Of course, in logistic regression the independent variable is generally assumed to be measured without error; here the examinee ability isn't measured at all except through the answers to the items! This significantly complicates the estimation of the various parameters. The figure to the right is an example of an IRF. Here the examinee ability is on the standard normal scale, and there is a 20% chance of guessing the item correctly (the lower asymptote).
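An IRF of the kind just described, a logistic curve with a nonzero lower asymptote, is the standard three-parameter logistic (3PL) model. A minimal sketch follows; the parameter values a = 1 and b = 0 are illustrative, and c = 0.2 matches the 20% guessing chance mentioned above.

```python
import math

def irf_3pl(theta, a=1.0, b=0.0, c=0.2):
    """Three-parameter logistic IRF: probability of a correct response
    given ability theta, with discrimination a, difficulty b, and
    lower asymptote (guessing parameter) c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# Very low abilities fall toward the guessing asymptote c = 0.2;
# very high abilities approach 1.
print(round(irf_3pl(-4.0), 3))  # near 0.2
print(round(irf_3pl(0.0), 3))   # at theta = b: c + (1 - c)/2 = 0.6
print(round(irf_3pl(4.0), 3))   # near 1
```

Abilities here are on the standard normal scale, as in the figure, so values between about -3 and 3 cover nearly all examinees.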

Some of the current issues of study in IRT include:

For further reading, the following are some excellent books on IRT.