Example data: 635 older adults (ages 80-100) self-reporting on 7 items assessing Instrumental Activities of Daily Living (IADL). The percentage listed for each item is the proportion responding 1 (in the binary format, "does not need help"):

  1. Housework (cleaning and laundry): 1=64%
  2. Bedmaking: 1=84%
  3. Cooking: 1=77%
  4. Everyday shopping: 1=66%
  5. Getting to places outside of walking distance: 1=65%
  6. Handling banking and other business: 1=73%
  7. Using the telephone: 1=94%

Two versions of a response format were available:

Binary -> 0 = "needs help", 1 = "does not need help"

Categorical -> 0 = "can't do it", 1 = "big problems", 2 = "some problems", 3 = "no problems"

Higher scores indicate greater function. We will look at each response format in turn.

Package Installation and Loading

#install each package if it is not already present, then load it
if (!require(lavaan)) {
  install.packages("lavaan")
}
library(lavaan)
if (!require(mirt)) {
  install.packages("mirt")
}
library(mirt)

Data Import into R

The data are in a text file named adl.dat originally used in Mplus (so no column names are included at the top of the file). The file contains more items than we will use, so we select only the seven IADL items listed above.

#read in the data file (Mplus format, so column names must be supplied)
adlData = read.table(file = "adl.dat", header = FALSE, na.strings = ".", col.names = c("case", paste0("dpa", 1:14), paste0("dia", 1:7), paste0("cpa", 1:14), paste0("cia", 1:7)))
#select the binary IADL items (dia1-dia7)
iadlDataInit = adlData[c(paste0("dia", 1:7))]
#remove cases missing all seven items
removeCases = which(apply(X = iadlDataInit, MARGIN = 1, FUN = function (x){ length(which(is.na(x)))}) == 7)
iadlData = iadlDataInit[-removeCases,]
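
As a quick sanity check against the endorsement percentages listed above, one could compute the proportion responding 1 on each item (a small sketch; output not shown):

#proportion responding 1 ("does not need help") per item, ignoring missing responses
round(colMeans(iadlData, na.rm = TRUE), 2)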

Estimation with Marginal Maximum Likelihood

We will introduce the mirt package as a method for estimating IRT models with marginal maximum likelihood. Overall, the package is very good, but it is typically used for scaling purposes (measurement alone, rather than using the latent variables in additional model equations). If you wish to use the latent trait estimates in secondary analyses (analyses you would otherwise conduct simultaneously in SEM), additional steps are needed to ensure the error associated with each score carries over into the subsequent analyses.

When all the items of the model are the same type, the mirt syntax is very short. The mirt() function provides the estimates, with model = 1 indicating all items measure the same trait and itemtype = "2PL" requesting the two-parameter logistic model (itemtype = "Rasch" is the shorthand for the 1PL). The "Rasch" designation estimates a model where the loadings are all fixed to one and the factor/latent trait variance is estimated, which is equivalent to the 1PL estimated below; here we instead keep the latent trait standardized (a sketch of the Rasch call follows the estimation output below). We will estimate both the 1PL and the 2PL here:

#1PL syntax: items 1-7 measure one factor (IADL); CONSTRAIN holds all slopes (a1) equal
mirt1PLsyntax = "
IADL = 1-7
CONSTRAIN = (1-7, a1)
COV = 1
"
model1PLmirt = mirt(data = iadlData, model = mirt1PLsyntax)

Iteration: 1, Log-Lik: -1878.487, Max-Change: 1.89501
Iteration: 2, Log-Lik: -1577.569, Max-Change: 1.18305
Iteration: 3, Log-Lik: -1505.605, Max-Change: 0.58836
Iteration: 4, Log-Lik: -1483.057, Max-Change: 0.34308
Iteration: 5, Log-Lik: -1474.066, Max-Change: 0.23870
Iteration: 6, Log-Lik: -1469.990, Max-Change: 0.16868
Iteration: 7, Log-Lik: -1466.436, Max-Change: 0.06185
Iteration: 8, Log-Lik: -1465.995, Max-Change: 0.04630
Iteration: 9, Log-Lik: -1465.692, Max-Change: 0.03439
Iteration: 10, Log-Lik: -1465.184, Max-Change: 0.01246
Iteration: 11, Log-Lik: -1465.079, Max-Change: 0.01223
Iteration: 12, Log-Lik: -1464.992, Max-Change: 0.01176
Iteration: 13, Log-Lik: -1464.662, Max-Change: 0.00863
Iteration: 14, Log-Lik: -1464.646, Max-Change: 0.00732
Iteration: 15, Log-Lik: -1464.632, Max-Change: 0.00642
Iteration: 16, Log-Lik: -1464.579, Max-Change: 0.00288
Iteration: 17, Log-Lik: -1464.576, Max-Change: 0.00252
Iteration: 18, Log-Lik: -1464.574, Max-Change: 0.00216
Iteration: 19, Log-Lik: -1464.569, Max-Change: 0.00417
Iteration: 20, Log-Lik: -1464.568, Max-Change: 0.00277
Iteration: 21, Log-Lik: -1464.567, Max-Change: 0.00138
Iteration: 22, Log-Lik: -1464.566, Max-Change: 0.00109
Iteration: 23, Log-Lik: -1464.566, Max-Change: 0.00161
Iteration: 24, Log-Lik: -1464.565, Max-Change: 0.00118
Iteration: 25, Log-Lik: -1464.565, Max-Change: 0.00168
Iteration: 26, Log-Lik: -1464.564, Max-Change: 0.00142
Iteration: 27, Log-Lik: -1464.564, Max-Change: 0.00099
Iteration: 28, Log-Lik: -1464.564, Max-Change: 0.00104
Iteration: 29, Log-Lik: -1464.563, Max-Change: 0.00102
Iteration: 30, Log-Lik: -1464.563, Max-Change: 0.00086
Iteration: 31, Log-Lik: -1464.563, Max-Change: 0.00082
Iteration: 32, Log-Lik: -1464.563, Max-Change: 0.00074
Iteration: 33, Log-Lik: -1464.563, Max-Change: 0.00070
Iteration: 34, Log-Lik: -1464.563, Max-Change: 0.00076
Iteration: 35, Log-Lik: -1464.563, Max-Change: 0.00065
Iteration: 36, Log-Lik: -1464.563, Max-Change: 0.00074
Iteration: 37, Log-Lik: -1464.563, Max-Change: 0.00051
Iteration: 38, Log-Lik: -1464.563, Max-Change: 0.00065
Iteration: 39, Log-Lik: -1464.563, Max-Change: 0.00048
Iteration: 40, Log-Lik: -1464.563, Max-Change: 0.00049
Iteration: 41, Log-Lik: -1464.563, Max-Change: 0.00039
Iteration: 42, Log-Lik: -1464.563, Max-Change: 0.00031
Iteration: 43, Log-Lik: -1464.563, Max-Change: 0.00010
Iteration: 44, Log-Lik: -1464.563, Max-Change: 0.00009
#2PL: one factor with a separate discrimination (slope) for each item
model2PLmirt = mirt(data = iadlData, model = 1, itemtype = "2PL")

Iteration: 1, Log-Lik: -1878.487, Max-Change: 1.36006
Iteration: 2, Log-Lik: -1572.925, Max-Change: 1.11015
Iteration: 3, Log-Lik: -1499.005, Max-Change: 0.91115
Iteration: 4, Log-Lik: -1474.055, Max-Change: 0.72471
Iteration: 5, Log-Lik: -1464.621, Max-Change: 0.54473
Iteration: 6, Log-Lik: -1460.408, Max-Change: 0.40623
Iteration: 7, Log-Lik: -1456.776, Max-Change: 0.16294
Iteration: 8, Log-Lik: -1456.384, Max-Change: 0.13300
Iteration: 9, Log-Lik: -1456.118, Max-Change: 0.10161
Iteration: 10, Log-Lik: -1455.651, Max-Change: 0.03754
Iteration: 11, Log-Lik: -1455.561, Max-Change: 0.02954
Iteration: 12, Log-Lik: -1455.486, Max-Change: 0.02638
Iteration: 13, Log-Lik: -1455.201, Max-Change: 0.01568
Iteration: 14, Log-Lik: -1455.180, Max-Change: 0.01315
Iteration: 15, Log-Lik: -1455.164, Max-Change: 0.01114
Iteration: 16, Log-Lik: -1455.117, Max-Change: 0.00739
Iteration: 17, Log-Lik: -1455.110, Max-Change: 0.00634
Iteration: 18, Log-Lik: -1455.105, Max-Change: 0.00517
Iteration: 19, Log-Lik: -1455.092, Max-Change: 0.00484
Iteration: 20, Log-Lik: -1455.089, Max-Change: 0.00409
Iteration: 21, Log-Lik: -1455.087, Max-Change: 0.00363
Iteration: 22, Log-Lik: -1455.084, Max-Change: 0.00367
Iteration: 23, Log-Lik: -1455.082, Max-Change: 0.00288
Iteration: 24, Log-Lik: -1455.081, Max-Change: 0.00267
Iteration: 25, Log-Lik: -1455.078, Max-Change: 0.00262
Iteration: 26, Log-Lik: -1455.078, Max-Change: 0.00195
Iteration: 27, Log-Lik: -1455.077, Max-Change: 0.00180
Iteration: 28, Log-Lik: -1455.075, Max-Change: 0.00088
Iteration: 29, Log-Lik: -1455.075, Max-Change: 0.00063
Iteration: 30, Log-Lik: -1455.075, Max-Change: 0.00065
Iteration: 31, Log-Lik: -1455.074, Max-Change: 0.00024
Iteration: 32, Log-Lik: -1455.074, Max-Change: 0.00014
Iteration: 33, Log-Lik: -1455.074, Max-Change: 0.00013
Iteration: 34, Log-Lik: -1455.074, Max-Change: 0.00073
Iteration: 35, Log-Lik: -1455.074, Max-Change: 0.00016
Iteration: 36, Log-Lik: -1455.074, Max-Change: 0.00048
Iteration: 37, Log-Lik: -1455.074, Max-Change: 0.00010
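
As an aside, the equivalent Rasch parameterization mentioned above could be requested directly (a sketch only; we do not estimate or use this model further here):

#Rasch shorthand: slopes fixed to 1, latent trait variance estimated
modelRaschMirt = mirt(data = iadlData, model = 1, itemtype = "Rasch")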

Unlike lavaan, mirt does not provide nicely formatted parameter estimates through a summary statement. Instead, we obtain the estimates in pieces through several functions.

The model log-likelihood and summary information is given by the show() function:

show(model1PLmirt)

Call:
mirt(data = iadlData, model = mirt1PLsyntax)

Full-information item factor analysis with 1 factor(s).
Converged within 1e-04 tolerance after 44 EM iterations.
mirt version: 1.25 
M-step optimizer: BFGS 
EM acceleration: Ramsay 
Number of rectangular quadrature: 61

Log-likelihood = -1464.563
Estimated parameters: 8 
AIC = 2945.125; AICc = 2945.355
BIC = 2980.754; SABIC = 2955.355
show(model2PLmirt)

Call:
mirt(data = iadlData, model = 1, itemtype = "2PL")

Full-information item factor analysis with 1 factor(s).
Converged within 1e-04 tolerance after 37 EM iterations.
mirt version: 1.25 
M-step optimizer: BFGS 
EM acceleration: Ramsay 
Number of rectangular quadrature: 61

Log-likelihood = -1455.074
Estimated parameters: 14 
AIC = 2938.149; AICc = 2938.826
BIC = 3000.499; SABIC = 2956.051

Also note that the model log-likelihood information does not include a test of the model against an alternative, as a typical CFA analysis does when comparing your model's fit to that of a saturated model in which all parameters are estimated. This is because the saturated model in IRT is different (for models where all items are binary, it is the multivariate Bernoulli distribution): the statistics of interest are the proportions of people with each possible response pattern.
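
To see what those response-pattern statistics look like, one could tabulate the observed patterns (a small sketch, using complete cases only):

#frequencies of the most common observed response patterns
patterns = apply(iadlData[complete.cases(iadlData), ], MARGIN = 1, FUN = paste, collapse = "")
head(sort(table(patterns), decreasing = TRUE))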

To see estimates, use the coef() function. Here are the estimates for the 1PL model:

coef1PL = coef(model1PLmirt)
coef1PL
$dia1
      a1     d g u
par 4.39 1.637 0 1

$dia2
      a1     d g u
par 4.39 4.651 0 1

$dia3
      a1     d g u
par 4.39 3.509 0 1

$dia4
      a1     d g u
par 4.39 1.908 0 1

$dia5
      a1     d g u
par 4.39 1.881 0 1

$dia6
      a1     d g u
par 4.39 2.988 0 1

$dia7
      a1     d g u
par 4.39 7.467 0 1

$GroupPars
    MEAN_1 COV_11
par      0      1

The coef() function returns an R list of the parameters for each item along with the structural model parameters (the $GroupPars element), which shows the mean and variance of the latent variable. For each item, there are at least four parameters listed:

  a1: the item discrimination (slope)
  d: the item intercept
  g: the lower asymptote (a "guessing" parameter; fixed at 0 here)
  u: the upper asymptote (fixed at 1 here)

Note how the item discrimination (the a1 term) is equal for all items; this equality defines the 1PL model.
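
For a more compact display, coef() also offers a simplify argument, which collects the item parameters into a single matrix (one option among several):

#compact matrix view of all item parameters
coef(model1PLmirt, simplify = TRUE)$items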

Putting the parameters into equation form, we have a slope/intercept form of the IRT model:

\[P(Y_{si} = 1 | \theta_s) = g_i + (u_i-g_i)\frac{\exp\left(d_i + a1_i \theta_s \right)}{1+\exp\left(d_i + a1_i \theta_s \right)}\]

Another commonly used parameterization of the IRT model is called discrimination/difficulty, given by:

\[P(Y_{si} = 1 | \theta_s) = g_i + (u_i-g_i)\frac{\exp\left(a1_i \left( \theta_s - b_i \right) \right)}{1+\exp\left(a1_i \left( \theta_s - b_i \right) \right)}\]

The two parameterizations are equivalent and one can be found by re-arranging terms of the other. To get the item difficulty from the slope/intercept parameterization:

\[b_i = -\frac{d_i}{a1_i}\]
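
As a quick check using the dia1 estimates from above (a1 = 4.390, d = 1.637), the two parameterizations produce the same probability at any theta:

#verify the two parameterizations agree for item dia1
a1 = 4.3896; d = 1.6373; b = -d/a1   #b = -0.373, matching the output below
theta = 0.5
plogis(d + a1*theta)       #slope/intercept form: exp(d + a1*theta)/(1 + exp(d + a1*theta))
plogis(a1*(theta - b))     #discrimination/difficulty form: identical probability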

For our results, we can use the lapply() function to add the item difficulties:

getDifficulty = function(itemPar){
  parnames = colnames(itemPar)
  #items have an a1 parameter; the $GroupPars element does not, so it is returned unchanged
  if ("a1" %in% parnames){
    #append the difficulty, b = -d/a1 (columns 1 and 2 are a1 and d)
    itemPar = c(itemPar, -1*itemPar[2]/itemPar[1])
    names(itemPar) = c(parnames, "b")
    return(itemPar)
  } else {
    return(itemPar)
  }
}
lapply(X = coef1PL, FUN = getDifficulty)
$dia1
        a1          d          g          u          b 
 4.3895702  1.6372853  0.0000000  1.0000000 -0.3729944 

$dia2
       a1         d         g         u         b 
 4.389570  4.651017  0.000000  1.000000 -1.059561 

$dia3
        a1          d          g          u          b 
 4.3895702  3.5086672  0.0000000  1.0000000 -0.7993191 

$dia4
        a1          d          g          u          b 
 4.3895702  1.9084551  0.0000000  1.0000000 -0.4347704 

$dia5
        a1          d          g          u          b 
 4.3895702  1.8810667  0.0000000  1.0000000 -0.4285309 

$dia6
        a1          d          g          u          b 
 4.3895702  2.9876548  0.0000000  1.0000000 -0.6806258 

$dia7
       a1         d         g         u         b 
 4.389570  7.466592  0.000000  1.000000 -1.700985 

$GroupPars
    MEAN_1 COV_11
par      0      1

For the 2PL, we can use the same method (here condensed to a single line) to display the item difficulties:

coef2PL = lapply(X = coef(model2PLmirt), FUN = getDifficulty)
coef2PL
$dia1
       a1         d         g         u         b 
 4.373935  1.606529  0.000000  1.000000 -0.367296 

$dia2
       a1         d         g         u         b 
 5.058002  5.227547  0.000000  1.000000 -1.033520 

$dia3
        a1          d          g          u          b 
 4.3647262  3.4551388  0.0000000  1.0000000 -0.7916049 

$dia4
       a1         d         g         u         b 
 7.197112  2.944432  0.000000  1.000000 -0.409113 

$dia5
       a1         d         g         u         b 
 4.273968  1.807076  0.000000  1.000000 -0.422810 

$dia6
        a1          d          g          u          b 
 3.4634209  2.4201714  0.0000000  1.0000000 -0.6987806 

$dia7
       a1         d         g         u         b 
 3.303801  5.952140  0.000000  1.000000 -1.801604 

$GroupPars
    MEAN_1 COV_11
par      0      1

As the 1PL is nested within the 2PL, we can use a likelihood ratio test to see which model is preferred. The LRT tests the null hypothesis that all item discriminations are equal against an alternative that not all are equal:

anova(model1PLmirt, model2PLmirt)

Model 1: mirt(data = iadlData, model = mirt1PLsyntax)
Model 2: mirt(data = iadlData, model = 1, itemtype = "2PL")

       AIC     AICc    SABIC      BIC    logLik     X2  df     p
1 2945.125 2945.355 2955.355 2980.754 -1464.563    NaN NaN   NaN
2 2938.149 2938.826 2956.051 3000.499 -1455.074 18.977   6 0.004

Here, the test statistic was \(\chi^2_6 = 18.977\) with a p-value of .004. Therefore, we reject the null hypothesis of equal slopes and conclude that the 2PL fits better than the 1PL model.
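
As a check, the statistic is twice the difference in log-likelihoods, with degrees of freedom equal to the difference in the numbers of estimated parameters reported by show():

#likelihood ratio test statistic and df computed by hand
2*(-1455.074 - (-1464.563))  #18.978 (matches the X2 above within rounding)
14 - 8                       #6 degrees of freedom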

The LRT, however, assumes both models have a sufficient level of absolute fit to the data. One way to assess absolute fit is the M2() function, which evaluates model fit to the 2-way tables (think item-pair covariances). Because our data have some missing responses, we have to use the impute = 10 option, imputing 10 values per missing response. Here are the statistics for the 1PL and the 2PL:

M2(obj = model1PLmirt, impute = 10)
M2(obj = model2PLmirt, impute=10)

The statistics given by the M2() function are similar to those used in CFA; they show approximate model fit indices such as the RMSEA, SRMR, TLI, and CFI. From these, it appears the model fits approximately (CFI and TLI near 1, but a relatively poor RMSEA). To find misfitting "residuals," we need complete data, as the combination of the M2() and imputeMissing() functions is not currently working. So, here is an example with complete data and the 2PL:

#refit the 2PL using only the complete cases
model2PLmirtB = mirt(data = iadlData[complete.cases(iadlData),], model = 1, itemtype = "2PL")

Iteration: 1, Log-Lik: -1817.248, Max-Change: 1.46767
Iteration: 2, Log-Lik: -1517.948, Max-Change: 1.03243
Iteration: 3, Log-Lik: -1447.274, Max-Change: 1.00582
Iteration: 4, Log-Lik: -1424.072, Max-Change: 0.74506
Iteration: 5, Log-Lik: -1415.121, Max-Change: 0.55113
Iteration: 6, Log-Lik: -1411.156, Max-Change: 0.39934
Iteration: 7, Log-Lik: -1407.784, Max-Change: 0.17155
Iteration: 8, Log-Lik: -1407.439, Max-Change: 0.13233
Iteration: 9, Log-Lik: -1407.206, Max-Change: 0.10162
Iteration: 10, Log-Lik: -1406.798, Max-Change: 0.03483
Iteration: 11, Log-Lik: -1406.725, Max-Change: 0.02867
Iteration: 12, Log-Lik: -1406.663, Max-Change: 0.02278
Iteration: 13, Log-Lik: -1406.425, Max-Change: 0.01418
Iteration: 14, Log-Lik: -1406.410, Max-Change: 0.01160
Iteration: 15, Log-Lik: -1406.397, Max-Change: 0.00953
Iteration: 16, Log-Lik: -1406.352, Max-Change: 0.00510
Iteration: 17, Log-Lik: -1406.348, Max-Change: 0.00443
Iteration: 18, Log-Lik: -1406.344, Max-Change: 0.00389
Iteration: 19, Log-Lik: -1406.331, Max-Change: 0.00247
Iteration: 20, Log-Lik: -1406.330, Max-Change: 0.00214
Iteration: 21, Log-Lik: -1406.330, Max-Change: 0.00209
Iteration: 22, Log-Lik: -1406.327, Max-Change: 0.00080
Iteration: 23, Log-Lik: -1406.327, Max-Change: 0.00070
Iteration: 24, Log-Lik: -1406.327, Max-Change: 0.00071
Iteration: 25, Log-Lik: -1406.326, Max-Change: 0.00055
Iteration: 26, Log-Lik: -1406.326, Max-Change: 0.00021
Iteration: 27, Log-Lik: -1406.326, Max-Change: 0.00013
Iteration: 28, Log-Lik: -1406.326, Max-Change: 0.00075
Iteration: 29, Log-Lik: -1406.326, Max-Change: 0.00013
Iteration: 30, Log-Lik: -1406.326, Max-Change: 0.00060
Iteration: 31, Log-Lik: -1406.326, Max-Change: 0.00053
Iteration: 32, Log-Lik: -1406.326, Max-Change: 0.00019
Iteration: 33, Log-Lik: -1406.326, Max-Change: 0.00053
Iteration: 34, Log-Lik: -1406.326, Max-Change: 0.00015
Iteration: 35, Log-Lik: -1406.326, Max-Change: 0.00037
Iteration: 36, Log-Lik: -1406.326, Max-Change: 0.00017
Iteration: 37, Log-Lik: -1406.326, Max-Change: 0.00015
Iteration: 38, Log-Lik: -1406.326, Max-Change: 0.00041
Iteration: 39, Log-Lik: -1406.326, Max-Change: 0.00038
Iteration: 40, Log-Lik: -1406.326, Max-Change: 0.00015
Iteration: 41, Log-Lik: -1406.326, Max-Change: 0.00038
Iteration: 42, Log-Lik: -1406.326, Max-Change: 0.00008
M2(obj = model2PLmirtB)
M2(obj = model2PLmirtB, residmat = TRUE)
            dia1        dia2         dia3       dia4        dia5       dia6 dia7
dia1          NA          NA           NA         NA          NA         NA   NA
dia2  0.02663977          NA           NA         NA          NA         NA   NA
dia3  0.06240477  0.06313870           NA         NA          NA         NA   NA
dia4 -0.02758525 -0.02668132 -0.038895907         NA          NA         NA   NA
dia5 -0.02215675 -0.01360095 -0.080263677 0.04226633          NA         NA   NA
dia6 -0.03183329 -0.04414976 -0.029552955 0.02706127  0.03628025         NA   NA
dia7 -0.02704504 -0.01200917  0.007826515 0.01542917 -0.00691414 0.02537108   NA

Here we see the biggest discrepancy among the residual covariances is that for dia5 with dia3, at -.08.
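
Rather than scanning the matrix by eye, the largest absolute residual can be located programmatically (a small sketch; resid2PL is a name introduced here to store the matrix printed above):

#store the residual matrix and find its largest absolute entry
resid2PL = as.matrix(M2(obj = model2PLmirtB, residmat = TRUE))
maxPair = which(abs(resid2PL) == max(abs(resid2PL), na.rm = TRUE), arr.ind = TRUE)
resid2PL[maxPair]  #the dia5-dia3 pair noted above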

Finally, we can see plots of our model (all shown for the 2PL model). First, the item characteristic curve for the first item:

plot(model2PLmirt, which.items = 1, type = "trace", theta_lim = c(-3,3))

Next we can see the test information plot:

plot(model2PLmirt, type = "info", theta_lim = c(-3,3))

We can see that our test information peaks around a theta of -.5, meaning scores near -.5 will be the most reliable.
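
Because the standard error of measurement is the inverse square root of information, the same conclusion can be viewed on the standard error metric (another built-in plot type):

#conditional standard error of measurement (lowest where information peaks)
plot(model2PLmirt, type = "SE", theta_lim = c(-3,3))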

Finally, we can use the fscores() function to get the estimated trait scores. Note that there are several types of scores available. The standard used for score reporting is method = "EAP", which takes each person's score to be the expected value (mean) of the posterior distribution of the trait. For secondary analyses, multiple "plausible" scores should be used instead, requested with the option plausible.draws = # where # is the number of scores to draw. After estimating the scores with fscores(), we plot their distribution:

#EAP (posterior mean) trait estimate for each person
theta = fscores(object = model2PLmirt, method = "EAP")
hist(theta)
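
As mentioned above, for secondary analyses plausible values can be drawn instead of relying on a single EAP estimate per person (a sketch; the choice of 5 draws here is arbitrary):

#draw 5 plausible values per person; returns a list of 5 score vectors
plausibleThetas = fscores(object = model2PLmirt, plausible.draws = 5)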