Higher-Order Models in lavaan (CFA with MLR and IFA with WLSMV)

if (!require(lavaan)) install.packages("lavaan")
library(lavaan)

Example data: 1336 college students self-reporting on 49 items (measuring five factors) assessing childhood maltreatment. Items are answered on a 1–5 scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree. The items are NOT normally distributed, so we’ll use both CFA with MLR and IFA with WLSMV to examine the fit of these models (as an example of how to do each, but NOT to compare between estimators).

1. Spurning: Verbal and nonverbal caregiver acts that reject and degrade a child

2. Terrorizing: Caregiver behaviors that threaten or are likely to physically hurt, kill, abandon, or place the child or the child’s loved ones or objects in recognizably dangerous situations.

3. Isolating: Caregiver acts that consistently deny the child opportunities to meet needs for interacting or communicating with peers or adults inside or outside the home.

4. Corrupting: Caregiver acts that encourage the child to develop inappropriate behaviors (self-destructive, antisocial, criminal, deviant, or other maladaptive behaviors).

5. Ignoring: Emotional unresponsiveness; caregiver acts that ignore the child’s attempts and needs to interact (failing to express affection, caring, and love for the child) and that show no emotion in interactions with the child.

abuseData = read.csv(file = "abuse.csv", col.names = c("ID", paste0("p0",1:9), paste0("p",10:57)))
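Before fitting, it can help to see how sparse the upper response categories are, since sparse cells are what trigger lavaan's empty-bivariate-cell warnings under WLSMV below. A minimal sketch (the helper name categoryCounts is my own, not part of lavaan):

```r
# Hypothetical helper: tabulate responses per item, keeping empty categories visible
categoryCounts = function(x, levels = 1:5) table(factor(x, levels = levels))

# Example usage on the first few item columns (assumes abuseData is loaded as above):
# sapply(abuseData[, paste0("p0", 1:9)], categoryCounts)
```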

First, we separately build each one-factor model:

spurningSyntax = "
spurn =~ p06 + p10 + p14 + p25 + p27 + p29 + p33 + p35 + p48 + p49 + p53 + p54
"
spurningEstimatesMLR = cfa(model = spurningSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
fitResultsMLR = data.frame(Model = "Spurning", rbind(inspect(object = spurningEstimatesMLR, what = "fit")), stringsAsFactors = FALSE)
spurningEstimatesWLSMV = cfa(model = spurningSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV", 
                             ordered = c("p06", "p10", "p14", "p25", "p27", "p29", "p33", "p35", "p48", "p49", "p53", "p54"),
                             parameterization = "theta")
lavaan WARNING: 22 bivariate tables have empty cells; to see them, use:
                  lavInspect(fit, "zero.cell.tables")
fitResultsWLSMV = data.frame(Model = "Spurning", rbind(inspect(object = spurningEstimatesWLSMV, what = "fit")), stringsAsFactors = FALSE)
spurningParams = cbind(inspect(object = spurningEstimatesMLR, what = "std")$lambda, inspect(object = spurningEstimatesWLSMV, what = "std")$lambda) 
colnames(spurningParams) = c("spurningMLR", "spurningWLSMV")
terrorizingSyntax = "
terror =~ p07 + p11 + p13 + p17 + p24 + p26 + p36 + p55 + p56
"
terrorizingEstimatesMLR = cfa(model = terrorizingSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
fitResultsMLR = rbind(fitResultsMLR, c("Terrorizing", inspect(object = terrorizingEstimatesMLR, what = "fit")))
terrorizingEstimatesWLSMV = cfa(model = terrorizingSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV", 
                             ordered = c("p07", "p11", "p13", "p17", "p24", "p26", "p36", "p55", "p56"), parameterization = "theta")
lavaan WARNING: 20 bivariate tables have empty cells; to see them, use:
                  lavInspect(fit, "zero.cell.tables")
fitResultsWLSMV = rbind(fitResultsWLSMV, c("Terrorizing", inspect(object = terrorizingEstimatesWLSMV, what = "fit")))
terrorizingParams = cbind(inspect(object = terrorizingEstimatesMLR, what = "std")$lambda, inspect(object = terrorizingEstimatesWLSMV, what = "std")$lambda) 
colnames(terrorizingParams) = c("terrorizingMLR", "terrorizingWLSMV")
isolatingSyntax = "
isolate =~ p01 + p18 + p19 + p23 + p39 + p43
"
isolatingEstimatesMLR = cfa(model = isolatingSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
fitResultsMLR = rbind(fitResultsMLR, c("Isolating", inspect(object = isolatingEstimatesMLR, what = "fit")))
isolatingEstimatesWLSMV = cfa(model = isolatingSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV", 
                             ordered = c("p01", "p18", "p19", "p23", "p39", "p43"), parameterization = "theta")
lavaan WARNING: 11 bivariate tables have empty cells; to see them, use:
                  lavInspect(fit, "zero.cell.tables")
fitResultsWLSMV = rbind(fitResultsWLSMV, c("Isolating", inspect(object = isolatingEstimatesWLSMV, what = "fit")))
isolatingParams = cbind(inspect(object = isolatingEstimatesMLR, what = "std")$lambda, inspect(object = isolatingEstimatesWLSMV, what = "std")$lambda) 
colnames(isolatingParams) = c("isolatingMLR", "isolatingWLSMV")
corruptingSyntax = "
corrupt =~ p09 + p12 + p16 + p20 + p28 + p47 + p50
"
corruptingEstimatesMLR = cfa(model = corruptingSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
fitResultsMLR = rbind(fitResultsMLR, c("Corrupting", inspect(object = corruptingEstimatesMLR, what = "fit")))
corruptingEstimatesWLSMV = cfa(model = corruptingSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV", 
                             ordered = c("p09", "p12", "p16", "p20", "p28", "p47", "p50"), parameterization = "theta")
lavaan WARNING: 20 bivariate tables have empty cells; to see them, use:
                  lavInspect(fit, "zero.cell.tables")
fitResultsWLSMV = rbind(fitResultsWLSMV, c("Corrupting", inspect(object = corruptingEstimatesWLSMV, what = "fit")))
corruptingParams = cbind(inspect(object = corruptingEstimatesMLR, what = "std")$lambda, inspect(object = corruptingEstimatesWLSMV, what = "std")$lambda) 
colnames(corruptingParams) = c("corruptingMLR", "corruptingWLSMV")
ignoringSyntax = "
ignore =~ p02 + p03 + p04 + p21 + p22 + p30 + p31 + p37 + p40 + p44 + p45 + p46 + p51 + p52 + p57
"
ignoringEstimatesMLR = cfa(model = ignoringSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
fitResultsMLR = rbind(fitResultsMLR, c("Ignoring", inspect(object = ignoringEstimatesMLR, what = "fit")))
ignoringEstimatesWLSMV = cfa(model = ignoringSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV", 
                             ordered = c("p02", "p03", "p04", "p21", "p22", "p30", "p31", "p37", "p40", "p44", "p45", "p46", "p51", "p52", "p57"),
                             parameterization = "theta")
lavaan WARNING: 66 bivariate tables have empty cells; to see them, use:
                  lavInspect(fit, "zero.cell.tables")
fitResultsWLSMV = rbind(fitResultsWLSMV, c("Ignoring", inspect(object = ignoringEstimatesWLSMV, what = "fit")))
ignoringParams = cbind(inspect(object = ignoringEstimatesMLR, what = "std")$lambda, inspect(object = ignoringEstimatesWLSMV, what = "std")$lambda) 
colnames(ignoringParams) = c("ignoringMLR", "ignoringWLSMV")
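The five blocks above repeat the same steps for each factor; they could be wrapped in a helper like the sketch below (the function and argument names are my own, not part of lavaan):

```r
# Hypothetical wrapper around the repeated per-factor steps:
# fit a one-factor model with MLR and with WLSMV (theta parameterization)
# and return both fitted objects.
fitBothEstimators = function(syntax, items, data) {
  mlr   = cfa(model = syntax, data = data, std.lv = FALSE, mimic = "mplus",
              estimator = "MLR")
  wlsmv = cfa(model = syntax, data = data, std.lv = FALSE, mimic = "mplus",
              estimator = "WLSMV", ordered = items, parameterization = "theta")
  list(MLR = mlr, WLSMV = wlsmv)
}

# Example usage:
# spurningFits = fitBothEstimators(spurningSyntax,
#                                  c("p06","p10","p14","p25","p27","p29",
#                                    "p33","p35","p48","p49","p53","p54"),
#                                  abuseData)
```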

MLR Model Fit Results

fitResultsMLR[,c("Model", "chisq.scaled", "chisq.scaling.factor", "df.scaled", "pvalue.scaled", "cfi.scaled", "tli.scaled","rmsea.scaled")]

WLSMV Model Fit Results

fitResultsWLSMV[,c("Model", "chisq.scaled", "chisq.scaling.factor", "df.scaled", "pvalue.scaled", "cfi.scaled", "tli.scaled","rmsea.scaled")]

Parameter Results

spurningParams
terrorizingParams
isolatingParams
corruptingParams
ignoringParams

CFA model with MLR including all 5 correlated factors (“biggest model” for comparison)

cfaNoHighSyntax = "
spurn =~ p06 + p10 + p14 + p25 + p27 + p29 + p33 + p35 + p48 + p49 + p53 + p54
terror =~ p07 + p11 + p13 + p17 + p24 + p26 + p36 + p55 + p56
isolate =~ p01 + p18 + p19 + p23 + p39 + p43
corrupt =~ p09 + p12 + p16 + p20 + p28 + p47 + p50
ignore =~ p02 + p03 + p04 + p21 + p22 + p30 + p31 + p37 + p40 + p44 + p45 + p46 + p51 + p52 + p57
"
cfaNoHighEstimates = cfa(model = cfaNoHighSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")

NOTE: With respect to fit of the structural model, letting the separate factors be correlated is as good as it gets. This saturated structural model will be our “larger model” baseline with which to compare the fit of a single higher-order factor model (as the “smaller model”).
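Because this saturated-structure model is the comparison baseline, it is worth inspecting its estimated factor correlations directly; lavInspect with what = "cor.lv" returns the latent correlation matrix. A small sketch (the wrapper name latentCors is my own):

```r
# Wrapper returning the estimated correlations among the latent factors
latentCors = function(fit) lavInspect(fit, what = "cor.lv")

# Example usage:
# latentCors(cfaNoHighEstimates)
```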

Syntax for CFA model with MLR and a higher-order factor instead of correlations among the 5 factors (“smaller model” for comparison)

cfaHigherSyntax = "
spurn =~ p06 + p10 + p14 + p25 + p27 + p29 + p33 + p35 + p48 + p49 + p53 + p54
terror =~ p07 + p11 + p13 + p17 + p24 + p26 + p36 + p55 + p56
isolate =~ p01 + p18 + p19 + p23 + p39 + p43
corrupt =~ p09 + p12 + p16 + p20 + p28 + p47 + p50
ignore =~ p02 + p03 + p04 + p21 + p22 + p30 + p31 + p37 + p40 + p44 + p45 + p46 + p51 + p52 + p57
abuse =~ spurn + terror + isolate + corrupt + ignore
"
cfaHigherEstimates = cfa(model = cfaHigherSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
summary(cfaHigherEstimates, fit.measures = TRUE, rsquare = TRUE, standardized = TRUE)
lavaan (0.5-23.1097) converged normally after  78 iterations

  Number of observations                          1335

  Number of missing patterns                         1

  Estimator                                         ML      Robust
  Minimum Function Test Statistic             6597.050    4489.494
  Degrees of freedom                              1122        1122
  P-value (Chi-square)                           0.000       0.000
  Scaling correction factor                                  1.469
    for the Yuan-Bentler correction (Mplus variant)

Model test baseline model:

  Minimum Function Test Statistic            35067.550   22808.622
  Degrees of freedom                              1176        1176
  P-value                                        0.000       0.000

User model versus baseline model:

  Comparative Fit Index (CFI)                    0.838       0.844
  Tucker-Lewis Index (TLI)                       0.831       0.837

  Robust Comparative Fit Index (CFI)                         0.851
  Robust Tucker-Lewis Index (TLI)                            0.844

Loglikelihood and Information Criteria:

  Loglikelihood user model (H0)             -69010.792  -69010.792
  Scaling correction factor                                  2.505
    for the MLR correction
  Loglikelihood unrestricted model (H1)     -65712.267  -65712.267
  Scaling correction factor                                  1.593
    for the MLR correction

  Number of free parameters                        152         152
  Akaike (AIC)                              138325.584  138325.584
  Bayesian (BIC)                            139115.480  139115.480
  Sample-size adjusted Bayesian (BIC)       138632.643  138632.643

Root Mean Square Error of Approximation:

  RMSEA                                          0.060       0.047
  90 Percent Confidence Interval          0.059  0.062       0.046  0.049
  P-value RMSEA <= 0.05                          0.000       1.000

  Robust RMSEA                                               0.057
  90 Percent Confidence Interval                             0.056  0.059

Standardized Root Mean Square Residual:

  SRMR                                           0.058       0.058

Parameter Estimates:

  Information                                 Observed
  Standard Errors                   Robust.huber.white

Latent Variables:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
  spurn =~                                                              
    p06               1.000                               0.697    0.579
    p10               0.792    0.062   12.710    0.000    0.552    0.445
    p14               1.065    0.063   17.019    0.000    0.742    0.763
    p25               0.897    0.061   14.749    0.000    0.624    0.524
    p27               1.015    0.059   17.274    0.000    0.707    0.594
    p29               1.319    0.069   19.075    0.000    0.919    0.796
    p33               1.142    0.064   17.880    0.000    0.795    0.824
    p35               0.747    0.063   11.848    0.000    0.520    0.512
    p48               0.545    0.060    9.101    0.000    0.380    0.568
    p49               0.927    0.061   15.296    0.000    0.646    0.664
    p53               1.041    0.063   16.503    0.000    0.725    0.681
    p54               1.098    0.069   15.908    0.000    0.765    0.628
  terror =~                                                             
    p07               1.000                               0.483    0.534
    p11               1.341    0.097   13.872    0.000    0.648    0.673
    p13               0.622    0.065    9.628    0.000    0.301    0.451
    p17               1.070    0.088   12.210    0.000    0.517    0.600
    p24               0.610    0.058   10.590    0.000    0.295    0.576
    p26               1.247    0.111   11.191    0.000    0.602    0.602
    p36               1.228    0.098   12.498    0.000    0.594    0.673
    p55               1.589    0.130   12.227    0.000    0.768    0.633
    p56               1.793    0.134   13.406    0.000    0.866    0.706
  isolate =~                                                            
    p01               1.000                               0.358    0.491
    p18               2.139    0.219    9.778    0.000    0.766    0.611
    p19               1.209    0.117   10.344    0.000    0.433    0.606
    p23               1.685    0.168   10.004    0.000    0.603    0.591
    p39               0.903    0.088   10.281    0.000    0.323    0.488
    p43               1.557    0.134   11.634    0.000    0.558    0.672
  corrupt =~                                                            
    p09               1.000                               0.360    0.602
    p12               0.961    0.103    9.367    0.000    0.346    0.541
    p16               1.014    0.116    8.772    0.000    0.365    0.368
    p20               0.645    0.080    8.086    0.000    0.232    0.497
    p28               1.177    0.097   12.150    0.000    0.424    0.624
    p47               1.347    0.112   12.030    0.000    0.485    0.614
    p50               1.041    0.074   14.039    0.000    0.375    0.649
  ignore =~                                                             
    p02               1.000                               0.461    0.681
    p03               1.318    0.082   16.102    0.000    0.607    0.653
    p04               1.139    0.072   15.741    0.000    0.525    0.651
    p21               1.317    0.093   14.221    0.000    0.607    0.717
    p22               1.046    0.081   12.922    0.000    0.482    0.474
    p30               1.504    0.090   16.642    0.000    0.693    0.743
    p31               1.437    0.082   17.502    0.000    0.662    0.841
    p37               1.161    0.078   14.957    0.000    0.535    0.708
    p40               1.431    0.081   17.589    0.000    0.659    0.807
    p44               1.302    0.079   16.483    0.000    0.600    0.764
    p45               0.915    0.053   17.137    0.000    0.422    0.670
    p46               1.439    0.083   17.405    0.000    0.663    0.822
    p51               1.484    0.099   14.975    0.000    0.684    0.700
    p52               1.673    0.106   15.727    0.000    0.771    0.753
    p57               1.302    0.072   18.105    0.000    0.600    0.823
  abuse =~                                                              
    spurn             1.000                               0.971    0.971
    terror            0.680    0.064   10.572    0.000    0.952    0.952
    isolate           0.494    0.056    8.762    0.000    0.934    0.934
    corrupt           0.397    0.049    8.189    0.000    0.745    0.745
    ignore            0.577    0.054   10.585    0.000    0.846    0.846

Intercepts:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
   .p06               2.520    0.033   76.549    0.000    2.520    2.095
   .p10               2.208    0.034   65.045    0.000    2.208    1.780
   .p14               1.600    0.027   60.165    0.000    1.600    1.647
   .p25               2.029    0.033   62.184    0.000    2.029    1.702
   .p27               2.229    0.033   68.385    0.000    2.229    1.872
   .p29               1.898    0.032   60.059    0.000    1.898    1.644
   .p33               1.601    0.026   60.633    0.000    1.601    1.659
   .p35               1.776    0.028   63.917    0.000    1.776    1.749
   .p48               1.236    0.018   67.548    0.000    1.236    1.849
   .p49               1.649    0.027   61.987    0.000    1.649    1.697
   .p53               1.844    0.029   63.324    0.000    1.844    1.733
   .p54               1.934    0.033   58.053    0.000    1.934    1.589
   .p07               1.622    0.025   65.517    0.000    1.622    1.793
   .p11               1.586    0.026   60.218    0.000    1.586    1.648
   .p13               1.213    0.018   66.573    0.000    1.213    1.822
   .p17               1.493    0.024   63.352    0.000    1.493    1.734
   .p24               1.196    0.014   85.313    0.000    1.196    2.335
   .p26               2.026    0.027   73.948    0.000    2.026    2.024
   .p36               1.459    0.024   60.477    0.000    1.459    1.655
   .p55               1.837    0.033   55.295    0.000    1.837    1.513
   .p56               1.923    0.034   57.270    0.000    1.923    1.567
   .p01               1.303    0.020   65.295    0.000    1.303    1.787
   .p18               2.318    0.034   67.527    0.000    2.318    1.848
   .p19               1.288    0.020   65.846    0.000    1.288    1.802
   .p23               2.022    0.028   72.385    0.000    2.022    1.981
   .p39               1.311    0.018   72.292    0.000    1.311    1.979
   .p43               1.656    0.023   72.927    0.000    1.656    1.996
   .p09               1.246    0.016   76.116    0.000    1.246    2.083
   .p12               1.338    0.018   76.389    0.000    1.338    2.091
   .p16               1.692    0.027   62.260    0.000    1.692    1.704
   .p20               1.109    0.013   86.722    0.000    1.109    2.373
   .p28               1.205    0.019   64.736    0.000    1.205    1.772
   .p47               1.370    0.022   63.288    0.000    1.370    1.732
   .p50               1.184    0.016   74.812    0.000    1.184    2.048
   .p02               1.298    0.019   70.090    0.000    1.298    1.918
   .p03               1.630    0.025   64.022    0.000    1.630    1.752
   .p04               1.573    0.022   71.253    0.000    1.573    1.950
   .p21               1.562    0.023   67.411    0.000    1.562    1.845
   .p22               1.831    0.028   65.796    0.000    1.831    1.801
   .p30               1.706    0.026   66.859    0.000    1.706    1.830
   .p31               1.514    0.022   70.256    0.000    1.514    1.923
   .p37               1.479    0.021   71.457    0.000    1.479    1.956
   .p40               1.467    0.022   65.622    0.000    1.467    1.796
   .p44               1.599    0.022   74.349    0.000    1.599    2.035
   .p45               1.282    0.017   74.467    0.000    1.282    2.038
   .p46               1.502    0.022   68.064    0.000    1.502    1.863
   .p51               1.619    0.027   60.522    0.000    1.619    1.656
   .p52               1.804    0.028   64.384    0.000    1.804    1.762
   .p57               1.378    0.020   69.055    0.000    1.378    1.890
    spurn             0.000                               0.000    0.000
    terror            0.000                               0.000    0.000
    isolate           0.000                               0.000    0.000
    corrupt           0.000                               0.000    0.000
    ignore            0.000                               0.000    0.000
    abuse             0.000                               0.000    0.000

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
   .p06               0.961    0.049   19.570    0.000    0.961    0.665
   .p10               1.234    0.048   25.501    0.000    1.234    0.802
   .p14               0.394    0.025   15.580    0.000    0.394    0.417
   .p25               1.032    0.042   24.741    0.000    1.032    0.726
   .p27               0.919    0.040   22.915    0.000    0.919    0.648
   .p29               0.489    0.027   17.888    0.000    0.489    0.367
   .p33               0.298    0.021   14.357    0.000    0.298    0.321
   .p35               0.760    0.045   17.037    0.000    0.760    0.738
   .p48               0.303    0.031    9.754    0.000    0.303    0.677
   .p49               0.528    0.032   16.668    0.000    0.528    0.559
   .p53               0.607    0.039   15.389    0.000    0.607    0.536
   .p54               0.897    0.043   21.030    0.000    0.897    0.605
   .p07               0.584    0.037   15.883    0.000    0.584    0.715
   .p11               0.506    0.034   15.099    0.000    0.506    0.547
   .p13               0.353    0.041    8.653    0.000    0.353    0.796
   .p17               0.474    0.032   14.885    0.000    0.474    0.640
   .p24               0.175    0.018    9.605    0.000    0.175    0.669
   .p26               0.639    0.043   14.816    0.000    0.639    0.638
   .p36               0.425    0.031   13.811    0.000    0.425    0.547
   .p55               0.883    0.049   18.120    0.000    0.883    0.600
   .p56               0.754    0.044   17.223    0.000    0.754    0.501
   .p01               0.404    0.043    9.317    0.000    0.404    0.759
   .p18               0.986    0.045   21.986    0.000    0.986    0.627
   .p19               0.323    0.029   11.130    0.000    0.323    0.633
   .p23               0.678    0.033   20.665    0.000    0.678    0.651
   .p39               0.334    0.035    9.588    0.000    0.334    0.762
   .p43               0.378    0.026   14.324    0.000    0.378    0.548
   .p09               0.228    0.027    8.392    0.000    0.228    0.637
   .p12               0.289    0.030    9.810    0.000    0.289    0.707
   .p16               0.853    0.047   18.246    0.000    0.853    0.865
   .p20               0.164    0.030    5.404    0.000    0.164    0.753
   .p28               0.283    0.036    7.880    0.000    0.283    0.611
   .p47               0.390    0.036   10.858    0.000    0.390    0.623
   .p50               0.193    0.030    6.448    0.000    0.193    0.579
   .p02               0.246    0.028    8.724    0.000    0.246    0.536
   .p03               0.497    0.036   13.665    0.000    0.497    0.574
   .p04               0.375    0.032   11.878    0.000    0.375    0.576
   .p21               0.348    0.025   14.074    0.000    0.348    0.486
   .p22               0.801    0.039   20.556    0.000    0.801    0.775
   .p30               0.389    0.036   10.728    0.000    0.389    0.448
   .p31               0.181    0.019    9.457    0.000    0.181    0.292
   .p37               0.285    0.027   10.566    0.000    0.285    0.499
   .p40               0.232    0.030    7.729    0.000    0.232    0.348
   .p44               0.258    0.021   12.304    0.000    0.258    0.417
   .p45               0.218    0.020   11.032    0.000    0.218    0.551
   .p46               0.210    0.026    8.198    0.000    0.210    0.324
   .p51               0.487    0.036   13.473    0.000    0.487    0.510
   .p52               0.454    0.028   16.305    0.000    0.454    0.433
   .p57               0.172    0.021    8.308    0.000    0.172    0.323
    spurn             0.028    0.009    2.984    0.003    0.058    0.058
    terror            0.022    0.005    4.189    0.000    0.093    0.093
    isolate           0.016    0.005    3.448    0.001    0.129    0.129
    corrupt           0.058    0.010    5.778    0.000    0.445    0.445
    ignore            0.060    0.008    7.512    0.000    0.284    0.284
    abuse             0.457    0.047    9.730    0.000    1.000    1.000

R-Square:
                   Estimate
    p06               0.335
    p10               0.198
    p14               0.583
    p25               0.274
    p27               0.352
    p29               0.633
    p33               0.679
    p35               0.262
    p48               0.323
    p49               0.441
    p53               0.464
    p54               0.395
    p07               0.285
    p11               0.453
    p13               0.204
    p17               0.360
    p24               0.331
    p26               0.362
    p36               0.453
    p55               0.400
    p56               0.499
    p01               0.241
    p18               0.373
    p19               0.367
    p23               0.349
    p39               0.238
    p43               0.452
    p09               0.363
    p12               0.293
    p16               0.135
    p20               0.247
    p28               0.389
    p47               0.377
    p50               0.421
    p02               0.464
    p03               0.426
    p04               0.424
    p21               0.514
    p22               0.225
    p30               0.552
    p31               0.708
    p37               0.501
    p40               0.652
    p44               0.583
    p45               0.449
    p46               0.676
    p51               0.490
    p52               0.567
    p57               0.677
    spurn             0.942
    terror            0.907
    isolate           0.871
    corrupt           0.555
    ignore            0.716

NOTE: With respect to fit of the structural model, we are now fitting a single higher-order factor INSTEAD OF covariances among the 5 factors.

To test the fit against the saturated structural model (all possible factor correlations), we can do a −2ΔLL scaled difference test.

anova(cfaNoHighEstimates, cfaHigherEstimates)
Scaled Chi Square Difference Test (method = "satorra.bentler.2001")

                     Df    AIC    BIC  Chisq Chisq diff Df diff Pr(>Chisq)    
cfaNoHighEstimates 1117 138229 139045 6490.3                                  
cfaHigherEstimates 1122 138326 139115 6597.1     47.083       5  5.465e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

This higher-order factor model uses 5 fewer parameters: the 4 free higher-order loadings plus the higher-order factor variance replace the 10 covariances among the 5 factors.

According to the −2ΔLL scaled difference relative to the previous model,

−2ΔLL (5) = 47.083, p < .0001

trying to reproduce the 10 covariances among the 5 factors with a single higher-order factor results in a significant decrease in fit. Based on the factor correlations we examined earlier and the standardized higher-order loadings, I’d guess the issue lies with the “corrupting” factor not being as related to the others.
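The anova() result above can also be reproduced by hand from each model's unscaled chi-square, degrees of freedom, and scaling correction factor, following the Satorra-Bentler (2001) method that lavaan reports. A sketch (the function name scaledDiffTest is my own):

```r
# Satorra-Bentler (2001) scaled chi-square difference test.
# T0/df0/c0: unscaled ML chi-square, df, and scaling factor of the nested
# (smaller, more restricted) model; T1/df1/c1: same for the comparison model.
scaledDiffTest = function(T0, df0, c0, T1, df1, c1) {
  cd  = (df0 * c0 - df1 * c1) / (df0 - df1)  # difference-test scaling factor
  TRd = (T0 - T1) / cd                       # scaled difference statistic
  dfd = df0 - df1
  c(chisqDiff = TRd, dfDiff = dfd,
    p = pchisq(TRd, df = dfd, lower.tail = FALSE))
}

# The inputs can be pulled from fitted lavaan models with fitMeasures(), e.g.:
# fitMeasures(cfaHigherEstimates, c("chisq", "df", "chisq.scaling.factor"))
```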

Comparison with One-Factor CFA model

For the sake of illustration, we can try one more alternative – what if the items were measuring a single factor (i.e., a single score)? Syntax for CFA model with MLR including a single factor instead of a higher-order factor (“smallest model” for comparison):

cfaSingleSyntax = "
abuse =~ p06 + p10 + p14 + p25 + p27 + p29 + p33 + p35 + p48 + p49 + p53 + p54 +
         p07 + p11 + p13 + p17 + p24 + p26 + p36 + p55 + p56 + p01 + p18 + p19 + 
         p23 + p39 + p43 + p09 + p12 + p16 + p20 + p28 + p47 + p50 + p02 + p03 + 
         p04 + p21 + p22 + p30 + p31 + p37 + p40 + p44 + p45 + p46 + p51 + p52 + p57
"
cfaSingleEstimates = cfa(model = cfaSingleSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
summary(cfaSingleEstimates, fit.measures = TRUE, rsquare = TRUE, standardized = TRUE)
lavaan (0.5-23.1097) converged normally after  47 iterations

  Number of observations                          1335

  Number of missing patterns                         1

  Estimator                                         ML      Robust
  Minimum Function Test Statistic             9209.963    6186.390
  Degrees of freedom                              1127        1127
  P-value (Chi-square)                           0.000       0.000
  Scaling correction factor                                  1.489
    for the Yuan-Bentler correction (Mplus variant)

Model test baseline model:

  Minimum Function Test Statistic            35067.550   22808.622
  Degrees of freedom                              1176        1176
  P-value                                        0.000       0.000

User model versus baseline model:

  Comparative Fit Index (CFI)                    0.762       0.766
  Tucker-Lewis Index (TLI)                       0.751       0.756

  Robust Comparative Fit Index (CFI)                         0.774
  Robust Tucker-Lewis Index (TLI)                            0.764

Loglikelihood and Information Criteria:

  Loglikelihood user model (H0)             -70317.248  -70317.248
  Scaling correction factor                                  2.392
    for the MLR correction
  Loglikelihood unrestricted model (H1)     -65712.267  -65712.267
  Scaling correction factor                                  1.593
    for the MLR correction

  Number of free parameters                        147         147
  Akaike (AIC)                              140928.496  140928.496
  Bayesian (BIC)                            141692.409  141692.409
  Sample-size adjusted Bayesian (BIC)       141225.455  141225.455

Root Mean Square Error of Approximation:

  RMSEA                                          0.073       0.058
  90 Percent Confidence Interval          0.072  0.075       0.057  0.059
  P-value RMSEA <= 0.05                          0.000       0.000

  Robust RMSEA                                               0.071
  90 Percent Confidence Interval                             0.069  0.072

Standardized Root Mean Square Residual:

  SRMR                                           0.062       0.062

Parameter Estimates:

  Information                                 Observed
  Standard Errors                   Robust.huber.white

Latent Variables:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
  abuse =~                                                              
    p06               1.000                               0.640    0.532
    p10               0.784    0.066   11.916    0.000    0.502    0.405
    p14               1.084    0.068   15.992    0.000    0.694    0.714
    p25               0.898    0.064   14.002    0.000    0.575    0.482
    p27               1.026    0.062   16.490    0.000    0.657    0.551
    p29               1.333    0.075   17.690    0.000    0.854    0.739
    p33               1.183    0.071   16.751    0.000    0.757    0.785
    p35               0.939    0.077   12.203    0.000    0.601    0.592
    p48               0.598    0.066    8.990    0.000    0.383    0.573
    p49               0.949    0.064   14.723    0.000    0.608    0.625
    p53               1.089    0.068   16.023    0.000    0.697    0.655
    p54               1.099    0.074   14.765    0.000    0.704    0.578
    p07               0.748    0.067   11.082    0.000    0.479    0.529
    p11               0.934    0.075   12.466    0.000    0.598    0.622
    p13               0.431    0.061    7.022    0.000    0.276    0.414
    p17               0.719    0.067   10.725    0.000    0.460    0.535
    p24               0.427    0.053    8.106    0.000    0.273    0.534
    p26               0.915    0.060   15.249    0.000    0.586    0.585
    p36               0.828    0.067   12.339    0.000    0.530    0.602
    p55               1.067    0.069   15.406    0.000    0.683    0.563
    p56               1.183    0.075   15.714    0.000    0.758    0.618
    p01               0.526    0.060    8.737    0.000    0.337    0.462
    p18               1.066    0.064   16.776    0.000    0.683    0.544
    p19               0.639    0.066    9.710    0.000    0.409    0.573
    p23               0.844    0.061   13.757    0.000    0.540    0.529
    p39               0.473    0.055    8.581    0.000    0.303    0.457
    p43               0.812    0.061   13.257    0.000    0.520    0.627
    p09               0.430    0.051    8.357    0.000    0.275    0.460
    p12               0.421    0.052    8.132    0.000    0.270    0.422
    p16               0.389    0.057    6.819    0.000    0.249    0.251
    p20               0.216    0.041    5.257    0.000    0.138    0.295
    p28               0.473    0.068    6.991    0.000    0.303    0.445
    p47               0.624    0.070    8.949    0.000    0.399    0.505
    p50               0.438    0.062    7.082    0.000    0.280    0.485
    p02               0.721    0.069   10.525    0.000    0.462    0.682
    p03               0.899    0.069   13.001    0.000    0.576    0.619
    p04               0.754    0.062   12.191    0.000    0.483    0.598
    p21               0.874    0.074   11.734    0.000    0.559    0.661
    p22               0.882    0.057   15.442    0.000    0.565    0.556
    p30               1.017    0.070   14.548    0.000    0.651    0.698
    p31               0.960    0.069   13.984    0.000    0.615    0.781
    p37               0.775    0.066   11.728    0.000    0.496    0.657
    p40               0.975    0.071   13.805    0.000    0.624    0.765
    p44               0.916    0.064   14.284    0.000    0.587    0.746
    p45               0.679    0.063   10.774    0.000    0.435    0.691
    p46               0.953    0.072   13.227    0.000    0.610    0.757
    p51               0.955    0.073   13.161    0.000    0.612    0.626
    p52               1.212    0.068   17.943    0.000    0.776    0.758
    p57               0.878    0.068   12.944    0.000    0.562    0.771

Intercepts:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
   .p06               2.520    0.033   76.549    0.000    2.520    2.095
   .p10               2.208    0.034   65.045    0.000    2.208    1.780
   .p14               1.600    0.027   60.165    0.000    1.600    1.647
   .p25               2.029    0.033   62.184    0.000    2.029    1.702
   .p27               2.229    0.033   68.385    0.000    2.229    1.872
   .p29               1.898    0.032   60.059    0.000    1.898    1.644
   .p33               1.601    0.026   60.633    0.000    1.601    1.659
   .p35               1.776    0.028   63.917    0.000    1.776    1.749
   .p48               1.236    0.018   67.548    0.000    1.236    1.849
   .p49               1.649    0.027   61.987    0.000    1.649    1.697
   .p53               1.844    0.029   63.324    0.000    1.844    1.733
   .p54               1.934    0.033   58.053    0.000    1.934    1.589
   .p07               1.622    0.025   65.517    0.000    1.622    1.793
   .p11               1.586    0.026   60.218    0.000    1.586    1.648
   .p13               1.213    0.018   66.573    0.000    1.213    1.822
   .p17               1.493    0.024   63.352    0.000    1.493    1.734
   .p24               1.196    0.014   85.313    0.000    1.196    2.335
   .p26               2.026    0.027   73.948    0.000    2.026    2.024
   .p36               1.459    0.024   60.477    0.000    1.459    1.655
   .p55               1.837    0.033   55.295    0.000    1.837    1.513
   .p56               1.923    0.034   57.270    0.000    1.923    1.567
   .p01               1.303    0.020   65.295    0.000    1.303    1.787
   .p18               2.318    0.034   67.527    0.000    2.318    1.848
   .p19               1.288    0.020   65.846    0.000    1.288    1.802
   .p23               2.022    0.028   72.385    0.000    2.022    1.981
   .p39               1.311    0.018   72.292    0.000    1.311    1.979
   .p43               1.656    0.023   72.927    0.000    1.656    1.996
   .p09               1.246    0.016   76.116    0.000    1.246    2.083
   .p12               1.338    0.018   76.389    0.000    1.338    2.091
   .p16               1.692    0.027   62.260    0.000    1.692    1.704
   .p20               1.109    0.013   86.722    0.000    1.109    2.373
   .p28               1.205    0.019   64.736    0.000    1.205    1.772
   .p47               1.370    0.022   63.288    0.000    1.370    1.732
   .p50               1.184    0.016   74.812    0.000    1.184    2.048
   .p02               1.298    0.019   70.090    0.000    1.298    1.918
   .p03               1.630    0.025   64.022    0.000    1.630    1.752
   .p04               1.573    0.022   71.253    0.000    1.573    1.950
   .p21               1.562    0.023   67.411    0.000    1.562    1.845
   .p22               1.831    0.028   65.796    0.000    1.831    1.801
   .p30               1.706    0.026   66.859    0.000    1.706    1.830
   .p31               1.514    0.022   70.256    0.000    1.514    1.923
   .p37               1.479    0.021   71.457    0.000    1.479    1.956
   .p40               1.467    0.022   65.622    0.000    1.467    1.796
   .p44               1.599    0.022   74.349    0.000    1.599    2.035
   .p45               1.282    0.017   74.467    0.000    1.282    2.038
   .p46               1.502    0.022   68.064    0.000    1.502    1.863
   .p51               1.619    0.027   60.522    0.000    1.619    1.656
   .p52               1.804    0.028   64.384    0.000    1.804    1.762
   .p57               1.378    0.020   69.055    0.000    1.378    1.890
    abuse             0.000                               0.000    0.000

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
   .p06               1.037    0.048   21.813    0.000    1.037    0.717
   .p10               1.287    0.048   26.819    0.000    1.287    0.836
   .p14               0.462    0.029   16.035    0.000    0.462    0.490
   .p25               1.091    0.042   25.785    0.000    1.091    0.767
   .p27               0.987    0.041   24.269    0.000    0.987    0.696
   .p29               0.605    0.033   18.229    0.000    0.605    0.454
   .p33               0.357    0.023   15.700    0.000    0.357    0.384
   .p35               0.669    0.041   16.194    0.000    0.669    0.649
   .p48               0.300    0.032    9.440    0.000    0.300    0.672
   .p49               0.576    0.033   17.198    0.000    0.576    0.609
   .p53               0.646    0.040   16.033    0.000    0.646    0.571
   .p54               0.986    0.045   21.967    0.000    0.986    0.666
   .p07               0.589    0.036   16.149    0.000    0.589    0.720
   .p11               0.568    0.036   15.739    0.000    0.568    0.614
   .p13               0.368    0.042    8.712    0.000    0.368    0.829
   .p17               0.529    0.034   15.407    0.000    0.529    0.714
   .p24               0.187    0.020    9.378    0.000    0.187    0.715
   .p26               0.659    0.041   16.117    0.000    0.659    0.658
   .p36               0.496    0.035   14.348    0.000    0.496    0.638
   .p55               1.007    0.051   19.864    0.000    1.007    0.683
   .p56               0.931    0.046   20.420    0.000    0.931    0.619
   .p01               0.419    0.044    9.581    0.000    0.419    0.787
   .p18               1.106    0.043   25.620    0.000    1.106    0.704
   .p19               0.343    0.030   11.613    0.000    0.343    0.672
   .p23               0.750    0.033   22.901    0.000    0.750    0.720
   .p39               0.347    0.035    9.948    0.000    0.347    0.791
   .p43               0.418    0.023   17.799    0.000    0.418    0.607
   .p09               0.282    0.028    9.966    0.000    0.282    0.788
   .p12               0.337    0.031   11.006    0.000    0.337    0.822
   .p16               0.924    0.046   19.893    0.000    0.924    0.937
   .p20               0.199    0.033    6.011    0.000    0.199    0.913
   .p28               0.371    0.039    9.444    0.000    0.371    0.802
   .p47               0.466    0.037   12.715    0.000    0.466    0.745
   .p50               0.255    0.033    7.825    0.000    0.255    0.765
   .p02               0.245    0.027    9.217    0.000    0.245    0.534
   .p03               0.534    0.038   14.053    0.000    0.534    0.617
   .p04               0.418    0.033   12.790    0.000    0.418    0.642
   .p21               0.404    0.028   14.351    0.000    0.404    0.563
   .p22               0.714    0.035   20.130    0.000    0.714    0.691
   .p30               0.446    0.039   11.520    0.000    0.446    0.512
   .p31               0.242    0.023   10.688    0.000    0.242    0.390
   .p37               0.325    0.030   10.910    0.000    0.325    0.569
   .p40               0.277    0.032    8.742    0.000    0.277    0.415
   .p44               0.274    0.021   12.945    0.000    0.274    0.443
   .p45               0.207    0.017   12.315    0.000    0.207    0.523
   .p46               0.277    0.029    9.723    0.000    0.277    0.427
   .p51               0.581    0.040   14.598    0.000    0.581    0.608
   .p52               0.447    0.027   16.499    0.000    0.447    0.426
   .p57               0.216    0.022    9.615    0.000    0.216    0.406
    abuse             0.410    0.045    9.048    0.000    1.000    1.000

R-Square:
                   Estimate
    p06               0.283
    p10               0.164
    p14               0.510
    p25               0.233
    p27               0.304
    p29               0.546
    p33               0.616
    p35               0.351
    p48               0.328
    p49               0.391
    p53               0.429
    p54               0.334
    p07               0.280
    p11               0.386
    p13               0.171
    p17               0.286
    p24               0.285
    p26               0.342
    p36               0.362
    p55               0.317
    p56               0.381
    p01               0.213
    p18               0.296
    p19               0.328
    p23               0.280
    p39               0.209
    p43               0.393
    p09               0.212
    p12               0.178
    p16               0.063
    p20               0.087
    p28               0.198
    p47               0.255
    p50               0.235
    p02               0.466
    p03               0.383
    p04               0.358
    p21               0.437
    p22               0.309
    p30               0.488
    p31               0.610
    p37               0.431
    p40               0.585
    p44               0.557
    p45               0.477
    p46               0.573
    p51               0.392
    p52               0.574
    p57               0.594

NOTE: With respect to fit of the structural model, we are now fitting a single factor INSTEAD OF 5 factors and a higher-order factor. This will tell us the extent to which a “total score” is appropriate.
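The single-factor syntax itself is not shown in this excerpt, but the output above implies it (the factor is named `abuse` and loads on all 49 items). A hedged sketch of how that syntax could be built programmatically rather than typing every item into the model string:

```r
# Hypothetical reconstruction of the single-factor syntax implied by the
# output above; the item names are taken from the loadings listed there.
allItems = c("p06","p10","p14","p25","p27","p29","p33","p35","p48","p49","p53","p54",
             "p07","p11","p13","p17","p24","p26","p36","p55","p56",
             "p01","p18","p19","p23","p39","p43",
             "p09","p12","p16","p20","p28","p47","p50",
             "p02","p03","p04","p21","p22","p30","p31","p37","p40","p44","p45",
             "p46","p51","p52","p57")
cfaSingleSyntax = paste0("abuse =~ ", paste(allItems, collapse = " + "))
```

The resulting string can then be passed to `cfa()` exactly like the hand-typed syntax used for the other models.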

anova(cfaSingleEstimates, cfaNoHighEstimates, cfaHigherEstimates)
Scaled Chi Square Difference Test (method = "satorra.bentler.2001")

                     Df    AIC    BIC  Chisq Chisq diff Df diff Pr(>Chisq)    
cfaNoHighEstimates 1117 138229 139045 6490.3                                  
cfaHigherEstimates 1122 138326 139115 6597.1      47.08       5  5.465e-09 ***
cfaSingleEstimates 1127 140928 141692 9210.0     448.91       5  < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

According to the scaled −2ΔLL difference test relative to the previous model (5 factors plus a higher-order factor), −2ΔLL(5) = 448.91, p < .0001.

Therefore, the single-factor model fits significantly worse than 5 factors plus a higher-order factor, and so one common factor does not adequately capture the covariances among these 49 items.
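The `satorra.bentler.2001` method that `anova()` reports above can be reproduced by hand: the unscaled chi-squares are differenced and then divided by a pooled scaling correction factor. A minimal sketch, using hypothetical chi-square values and scaling factors for illustration (the per-model scaling factors are not printed in this excerpt):

```r
# Satorra-Bentler (2001) scaled chi-square difference test.
# T0/df0/c0 belong to the more restricted (nested) model,
# T1/df1/c1 to the comparison model; T0 and T1 are UNSCALED ML chi-squares.
scaledChiSqDiff = function(T0, T1, df0, df1, c0, c1) {
  cd  = (df0 * c0 - df1 * c1) / (df0 - df1)  # difference-test scaling factor
  Td  = (T0 - T1) / cd                       # scaled difference statistic
  dfd = df0 - df1
  c(chisq.diff = Td, df.diff = dfd,
    p = pchisq(Td, dfd, lower.tail = FALSE))
}

# Hypothetical values only, to show the mechanics:
scaledChiSqDiff(T0 = 9500, T1 = 9000, df0 = 1127, df1 = 1122,
                c0 = 1.30, c1 = 1.29)
```

In practice, `anova()` (or equivalently `lavTestLRT()`) does this automatically for MLR-estimated lavaan models, so the function above is only a check on the arithmetic.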

Syntax for the IFA model with WLSMV including all 5 correlated factors (the “largest model”)

NOTE: With respect to fit of the structural model, letting the 5 separate factors be correlated is as good as it gets. This saturated structural model will be our “largest model” baseline with which to compare the fit of a single higher-order factor model (as the “smaller model”).

ifaNoHighSyntax = "
spurn =~ p06 + p10 + p14 + p25 + p27 + p29 + p33 + p35 + p48 + p49 + p53 + p54
terror =~ p07 + p11 + p13 + p17 + p24 + p26 + p36 + p55 + p56
isolate =~ p01 + p18 + p19 + p23 + p39 + p43
corrupt =~ p09 + p12 + p16 + p20 + p28 + p47 + p50
ignore =~ p02 + p03 + p04 + p21 + p22 + p30 + p31 + p37 + p40 + p44 + p45 + p46 + p51 + p52 + p57
"
ifaNoHighEstimates = cfa(model = ifaNoHighSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV",
                         ordered = c("p06", "p10", "p14", "p25", "p27", "p29", "p33", "p35", "p48", "p49", "p53", "p54", 
                                     "p07", "p11", "p13", "p17", "p24", "p26", "p36", "p55", "p56", "p01", "p18", "p19", 
                                     "p23", "p39", "p43", "p09", "p12", "p16", "p20", "p28", "p47", "p50", "p02", "p03", 
                                     "p04", "p21", "p22", "p30", "p31", "p37", "p40", "p44", "p45", "p46", "p51", "p52", "p57"))
lavaan WARNING: 704 bivariate tables have empty cells; to see them, use:
                  lavInspect(fit, "zero.cell.tables")
summary(ifaNoHighEstimates, fit.measures = TRUE, rsquare = TRUE, standardized = TRUE)
lavaan (0.5-23.1097) converged normally after  84 iterations

  Number of observations                          1335

  Estimator                                       DWLS      Robust
  Minimum Function Test Statistic             5673.876    5931.529
  Degrees of freedom                              1117        1117
  P-value (Chi-square)                           0.000       0.000
  Scaling correction factor                                  1.070
  Shift parameter                                          628.337
    for simple second-order correction (WLSMV)

Model test baseline model:

  Minimum Function Test Statistic           471778.208   67352.685
  Degrees of freedom                              1176        1176
  P-value                                        0.000       0.000

User model versus baseline model:

  Comparative Fit Index (CFI)                    0.990       0.927
  Tucker-Lewis Index (TLI)                       0.990       0.923

  Robust Comparative Fit Index (CFI)                            NA
  Robust Tucker-Lewis Index (TLI)                               NA

Root Mean Square Error of Approximation:

  RMSEA                                          0.055       0.057
  90 Percent Confidence Interval          0.054  0.057       0.055  0.058
  P-value RMSEA <= 0.05                          0.000       0.000

  Robust RMSEA                                                  NA
  90 Percent Confidence Interval                                NA     NA

Standardized Root Mean Square Residual:

  SRMR                                           0.059       0.059

Weighted Root Mean Square Residual:

  WRMR                                           2.034       2.034

Parameter Estimates:

  Information                                 Expected
  Standard Errors                           Robust.sem

Latent Variables:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
  spurn =~                                                              
    p06               1.000                               0.625    0.625
    p10               0.798    0.045   17.632    0.000    0.499    0.499
    p14               1.311    0.047   28.073    0.000    0.819    0.819
    p25               0.920    0.044   20.749    0.000    0.575    0.575
    p27               1.032    0.042   24.504    0.000    0.645    0.645
    p29               1.343    0.047   28.557    0.000    0.839    0.839
    p33               1.432    0.049   29.253    0.000    0.895    0.895
    p35               1.125    0.048   23.493    0.000    0.703    0.703
    p48               1.312    0.059   22.379    0.000    0.820    0.820
    p49               1.171    0.045   26.270    0.000    0.731    0.731
    p53               1.212    0.046   26.428    0.000    0.757    0.757
    p54               1.109    0.044   25.052    0.000    0.693    0.693
  terror =~                                                             
    p07               1.000                               0.672    0.672
    p11               1.159    0.041   28.558    0.000    0.778    0.778
    p13               1.062    0.048   22.148    0.000    0.713    0.713
    p17               1.023    0.041   25.087    0.000    0.687    0.687
    p24               1.185    0.047   25.431    0.000    0.796    0.796
    p26               1.032    0.043   24.070    0.000    0.693    0.693
    p36               1.184    0.043   27.302    0.000    0.795    0.795
    p55               1.075    0.043   25.134    0.000    0.722    0.722
    p56               1.134    0.040   28.208    0.000    0.762    0.762
  isolate =~                                                            
    p01               1.000                               0.687    0.687
    p18               0.965    0.044   21.821    0.000    0.663    0.663
    p19               1.173    0.048   24.300    0.000    0.806    0.806
    p23               0.932    0.043   21.924    0.000    0.641    0.641
    p39               0.993    0.044   22.697    0.000    0.682    0.682
    p43               1.095    0.041   26.563    0.000    0.753    0.753
  corrupt =~                                                            
    p09               1.000                               0.759    0.759
    p12               0.905    0.044   20.425    0.000    0.686    0.686
    p16               0.560    0.043   13.010    0.000    0.425    0.425
    p20               1.042    0.048   21.574    0.000    0.790    0.790
    p28               1.084    0.047   22.873    0.000    0.823    0.823
    p47               1.045    0.041   25.314    0.000    0.793    0.793
    p50               1.154    0.045   25.776    0.000    0.875    0.875
  ignore =~                                                             
    p02               1.000                               0.845    0.845
    p03               0.874    0.023   38.607    0.000    0.738    0.738
    p04               0.850    0.022   37.796    0.000    0.718    0.718
    p21               0.924    0.022   41.452    0.000    0.781    0.781
    p22               0.800    0.027   29.860    0.000    0.675    0.675
    p30               0.974    0.021   46.124    0.000    0.822    0.822
    p31               1.063    0.021   49.535    0.000    0.898    0.898
    p37               0.955    0.022   43.445    0.000    0.807    0.807
    p40               1.056    0.021   50.751    0.000    0.892    0.892
    p44               1.022    0.021   48.583    0.000    0.863    0.863
    p45               1.008    0.021   46.912    0.000    0.852    0.852
    p46               1.052    0.021   50.199    0.000    0.888    0.888
    p51               0.903    0.022   40.265    0.000    0.763    0.763
    p52               1.000    0.022   46.047    0.000    0.844    0.844
    p57               1.075    0.021   50.984    0.000    0.908    0.908

Covariances:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
  spurn ~~                                                              
    terror            0.397    0.018   22.004    0.000    0.947    0.947
    isolate           0.397    0.020   19.657    0.000    0.925    0.925
    corrupt           0.375    0.019   19.975    0.000    0.791    0.791
    ignore            0.465    0.019   24.269    0.000    0.882    0.882
  terror ~~                                                             
    isolate           0.408    0.023   18.050    0.000    0.885    0.885
    corrupt           0.441    0.025   17.544    0.000    0.866    0.866
    ignore            0.463    0.023   20.565    0.000    0.817    0.817
  isolate ~~                                                            
    corrupt           0.404    0.027   15.130    0.000    0.776    0.776
    ignore            0.501    0.024   20.577    0.000    0.863    0.863
  corrupt ~~                                                            
    ignore            0.467    0.024   19.265    0.000    0.728    0.728

Intercepts:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
   .p06               0.000                               0.000    0.000
   .p10               0.000                               0.000    0.000
   .p14               0.000                               0.000    0.000
   .p25               0.000                               0.000    0.000
   .p27               0.000                               0.000    0.000
   .p29               0.000                               0.000    0.000
   .p33               0.000                               0.000    0.000
   .p35               0.000                               0.000    0.000
   .p48               0.000                               0.000    0.000
   .p49               0.000                               0.000    0.000
   .p53               0.000                               0.000    0.000
   .p54               0.000                               0.000    0.000
   .p07               0.000                               0.000    0.000
   .p11               0.000                               0.000    0.000
   .p13               0.000                               0.000    0.000
   .p17               0.000                               0.000    0.000
   .p24               0.000                               0.000    0.000
   .p26               0.000                               0.000    0.000
   .p36               0.000                               0.000    0.000
   .p55               0.000                               0.000    0.000
   .p56               0.000                               0.000    0.000
   .p01               0.000                               0.000    0.000
   .p18               0.000                               0.000    0.000
   .p19               0.000                               0.000    0.000
   .p23               0.000                               0.000    0.000
   .p39               0.000                               0.000    0.000
   .p43               0.000                               0.000    0.000
   .p09               0.000                               0.000    0.000
   .p12               0.000                               0.000    0.000
   .p16               0.000                               0.000    0.000
   .p20               0.000                               0.000    0.000
   .p28               0.000                               0.000    0.000
   .p47               0.000                               0.000    0.000
   .p50               0.000                               0.000    0.000
   .p02               0.000                               0.000    0.000
   .p03               0.000                               0.000    0.000
   .p04               0.000                               0.000    0.000
   .p21               0.000                               0.000    0.000
   .p22               0.000                               0.000    0.000
   .p30               0.000                               0.000    0.000
   .p31               0.000                               0.000    0.000
   .p37               0.000                               0.000    0.000
   .p40               0.000                               0.000    0.000
   .p44               0.000                               0.000    0.000
   .p45               0.000                               0.000    0.000
   .p46               0.000                               0.000    0.000
   .p51               0.000                               0.000    0.000
   .p52               0.000                               0.000    0.000
   .p57               0.000                               0.000    0.000
    spurn             0.000                               0.000    0.000
    terror            0.000                               0.000    0.000
    isolate           0.000                               0.000    0.000
    corrupt           0.000                               0.000    0.000
    ignore            0.000                               0.000    0.000

Thresholds:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
    p06|t1           -0.751    0.038  -19.732    0.000   -0.751   -0.751
    p06|t2            0.154    0.034    4.458    0.000    0.154    0.154
    p06|t3            0.700    0.038   18.642    0.000    0.700    0.700
    p06|t4            1.513    0.053   28.441    0.000    1.513    1.513
    p10|t1           -0.312    0.035   -8.932    0.000   -0.312   -0.312
    p10|t2            0.427    0.035   12.026    0.000    0.427    0.427
    p10|t3            0.869    0.039   22.014    0.000    0.869    0.869
    p10|t4            1.568    0.055   28.489    0.000    1.568    1.568
    p14|t1            0.360    0.035   10.237    0.000    0.360    0.360
    p14|t2            1.047    0.042   24.864    0.000    1.047    1.047
    p14|t3            1.446    0.051   28.277    0.000    1.446    1.446
    p14|t4            2.081    0.081   25.669    0.000    2.081    2.081
    p25|t1           -0.082    0.034   -2.380    0.017   -0.082   -0.082
    p25|t2            0.547    0.036   15.095    0.000    0.547    0.547
    p25|t3            0.916    0.040   22.849    0.000    0.916    0.916
    p25|t4            1.965    0.073   26.758    0.000    1.965    1.965
    p27|t1           -0.394    0.035  -11.159    0.000   -0.394   -0.394
    p27|t2            0.398    0.035   11.268    0.000    0.398    0.398
    p27|t3            0.888    0.040   22.360    0.000    0.888    0.888
    p27|t4            1.712    0.061   28.262    0.000    1.712    1.712
    p29|t1            0.033    0.034    0.958    0.338    0.033    0.033
    p29|t2            0.715    0.038   18.955    0.000    0.715    0.715
    p29|t3            1.090    0.043   25.436    0.000    1.090    1.090
    p29|t4            1.800    0.065   27.890    0.000    1.800    1.800
    p33|t1            0.346    0.035    9.857    0.000    0.346    0.346
    p33|t2            1.063    0.042   25.087    0.000    1.063    1.063
    p33|t3            1.440    0.051   28.259    0.000    1.440    1.440
    p33|t4            2.115    0.084   25.312    0.000    2.115    2.115
    p35|t1            0.022    0.034    0.629    0.529    0.022    0.022
    p35|t2            0.960    0.041   23.570    0.000    0.960    0.960
    p35|t3            1.351    0.049   27.846    0.000    1.351    1.351
    p35|t4            1.915    0.071   27.151    0.000    1.915    1.915
    p48|t1            1.047    0.042   24.864    0.000    1.047    1.047
    p48|t2            1.636    0.058   28.444    0.000    1.636    1.636
    p48|t3            1.881    0.069   27.396    0.000    1.881    1.881
    p48|t4            2.433    0.114   21.318    0.000    2.433    2.433
    p49|t1            0.265    0.035    7.624    0.000    0.265    0.265
    p49|t2            0.975    0.041   23.807    0.000    0.975    0.975
    p49|t3            1.451    0.051   28.294    0.000    1.451    1.451
    p49|t4            2.151    0.086   24.909    0.000    2.151    2.151
    p53|t1            0.003    0.034    0.082    0.935    0.003    0.003
    p53|t2            0.782    0.038   20.349    0.000    0.782    0.782
    p53|t3            1.275    0.047   27.333    0.000    1.275    1.275
    p53|t4            1.927    0.071   27.060    0.000    1.927    1.927
    p54|t1            0.076    0.034    2.216    0.027    0.076    0.076
    p54|t2            0.637    0.037   17.223    0.000    0.637    0.637
    p54|t3            0.999    0.041   24.180    0.000    0.999    0.999
    p54|t4            1.712    0.061   28.262    0.000    1.712    1.712
    p07|t1            0.207    0.035    5.988    0.000    0.207    0.207
    p07|t2            1.128    0.044   25.901    0.000    1.128    1.128
    p07|t3            1.549    0.054   28.481    0.000    1.549    1.549
    p07|t4            2.212    0.091   24.201    0.000    2.212    2.212
    p11|t1            0.382    0.035   10.834    0.000    0.382    0.382
    p11|t2            1.060    0.042   25.042    0.000    1.060    1.060
    p11|t3            1.456    0.051   28.311    0.000    1.456    1.456
    p11|t4            2.115    0.084   25.312    0.000    2.115    2.115
    p13|t1            1.150    0.044   26.147    0.000    1.150    1.150
    p13|t2            1.658    0.058   28.406    0.000    1.658    1.658
    p13|t3            1.881    0.069   27.396    0.000    1.881    1.881
    p13|t4            2.336    0.103   22.629    0.000    2.336    2.336
    p17|t1            0.451    0.036   12.675    0.000    0.451    0.451
    p17|t2            1.275    0.047   27.333    0.000    1.275    1.275
    p17|t3            1.615    0.057   28.470    0.000    1.615    1.615
    p17|t4            2.234    0.093   23.931    0.000    2.234    2.234
    p24|t1            1.009    0.041   24.319    0.000    1.009    1.009
    p24|t2            1.904    0.070   27.237    0.000    1.904    1.904
    p24|t3            2.433    0.114   21.318    0.000    2.433    2.433
    p24|t4            2.748    0.164   16.783    0.000    2.748    2.748
    p26|t1           -0.468    0.036  -13.106    0.000   -0.468   -0.468
    p26|t2            0.813    0.039   20.960    0.000    0.813    0.813
    p26|t3            1.242    0.046   27.059    0.000    1.242    1.242
    p26|t4            1.870    0.068   27.470    0.000    1.870    1.870
    p36|t1            0.587    0.037   16.056    0.000    0.587    0.587
    p36|t2            1.242    0.046   27.059    0.000    1.242    1.242
    p36|t3            1.531    0.054   28.465    0.000    1.531    1.531
    p36|t4            2.308    0.100   22.993    0.000    2.308    2.308
    p55|t1            0.253    0.035    7.297    0.000    0.253    0.253
    p55|t2            0.700    0.038   18.642    0.000    0.700    0.700
    p55|t3            1.002    0.041   24.227    0.000    1.002    1.002
    p55|t4            1.790    0.064   27.938    0.000    1.790    1.790
    p56|t1            0.114    0.034    3.310    0.001    0.114    0.114
    p56|t2            0.651    0.037   17.540    0.000    0.651    0.651
    p56|t3            0.945    0.041   23.332    0.000    0.945    0.945
    p56|t4            1.772    0.063   28.026    0.000    1.772    1.772
    p01|t1            0.836    0.039   21.414    0.000    0.836    0.836
    p01|t2            1.575    0.055   28.489    0.000    1.575    1.575
    p01|t3            1.881    0.069   27.396    0.000    1.881    1.881
    p01|t4            2.191    0.090   24.453    0.000    2.191    2.191
    p18|t1           -0.416    0.035  -11.755    0.000   -0.416   -0.416
    p18|t2            0.294    0.035    8.442    0.000    0.294    0.294
    p18|t3            0.826    0.039   21.213    0.000    0.826    0.826
    p18|t4            1.495    0.053   28.409    0.000    1.495    1.495
    p19|t1            0.899    0.040   22.556    0.000    0.899    0.899
    p19|t2            1.525    0.054   28.457    0.000    1.525    1.525
    p19|t3            1.881    0.069   27.396    0.000    1.881    1.881
    p19|t4            2.336    0.103   22.629    0.000    2.336    2.336
    p23|t1           -0.334    0.035   -9.530    0.000   -0.334   -0.334
    p23|t2            0.616    0.037   16.747    0.000    0.616    0.616
    p23|t3            1.254    0.046   27.164    0.000    1.254    1.254
    p23|t4            2.097    0.082   25.496    0.000    2.097    2.097
    p39|t1            0.717    0.038   19.007    0.000    0.717    0.717
    p39|t2            1.696    0.060   28.312    0.000    1.696    1.696
    p39|t3            2.049    0.079   25.988    0.000    2.049    2.049
    p39|t4            2.366    0.106   22.231    0.000    2.366    2.366
    p43|t1            0.033    0.034    0.958    0.338    0.033    0.033
    p43|t2            1.202    0.045   26.694    0.000    1.202    1.202
    p43|t3            1.673    0.059   28.373    0.000    1.673    1.673
    p43|t4            2.433    0.114   21.318    0.000    2.433    2.433
    p09|t1            0.908    0.040   22.703    0.000    0.908    0.908
    p09|t2            1.688    0.060   28.334    0.000    1.688    1.688
    p09|t3            2.151    0.086   24.909    0.000    2.151    2.151
    p09|t4            2.748    0.164   16.783    0.000    2.748    2.748
    p12|t1            0.589    0.037   16.109    0.000    0.589    0.589
    p12|t2            1.809    0.065   27.839    0.000    1.809    1.809
    p12|t3            2.133    0.085   25.117    0.000    2.133    2.133
    p12|t4            2.398    0.110   21.797    0.000    2.398    2.398
    p16|t1            0.196    0.035    5.660    0.000    0.196    0.196
    p16|t2            0.963    0.041   23.618    0.000    0.963    0.963
    p16|t3            1.360    0.049   27.900    0.000    1.360    1.360
    p16|t4            2.171    0.088   24.688    0.000    2.171    2.171
    p20|t1            1.478    0.052   28.371    0.000    1.478    1.478
    p20|t2            2.034    0.078   26.134    0.000    2.034    2.034
    p20|t3            2.191    0.090   24.453    0.000    2.191    2.191
    p20|t4            2.674    0.150   17.859    0.000    2.674    2.674
    p28|t1            1.210    0.045   26.769    0.000    1.210    1.210
    p28|t2            1.688    0.060   28.334    0.000    1.688    1.688
    p28|t3            1.809    0.065   27.839    0.000    1.809    1.809
    p28|t4            2.282    0.098   23.330    0.000    2.282    2.282
    p47|t1            0.724    0.038   19.163    0.000    0.724    0.724
    p47|t2            1.370    0.049   27.951    0.000    1.370    1.370
    p47|t3            1.754    0.062   28.104    0.000    1.754    1.754
    p47|t4            2.308    0.100   22.993    0.000    2.308    2.308
    p50|t1            1.150    0.044   26.147    0.000    1.150    1.150
    p50|t2            1.849    0.067   27.606    0.000    1.849    1.849
    p50|t3            2.081    0.081   25.669    0.000    2.081    2.081
    p50|t4            2.433    0.114   21.318    0.000    2.433    2.433
    p02|t1            0.789    0.038   20.502    0.000    0.789    0.789
    p02|t2            1.650    0.058   28.420    0.000    1.650    1.650
    p02|t3            1.940    0.072   26.964    0.000    1.940    1.940
    p02|t4            2.433    0.114   21.318    0.000    2.433    2.433
    p03|t1            0.209    0.035    6.042    0.000    0.209    0.209
    p03|t2            1.111    0.043   25.692    0.000    1.111    1.111
 [ reached getOption("max.print") -- omitted 54 rows ]

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
   .p06               0.610                               0.610    0.610
   .p10               0.751                               0.751    0.751
   .p14               0.329                               0.329    0.329
   .p25               0.670                               0.670    0.670
   .p27               0.584                               0.584    0.584
   .p29               0.296                               0.296    0.296
   .p33               0.200                               0.200    0.200
   .p35               0.506                               0.506    0.506
   .p48               0.328                               0.328    0.328
   .p49               0.465                               0.465    0.465
   .p53               0.427                               0.427    0.427
   .p54               0.520                               0.520    0.520
   .p07               0.549                               0.549    0.549
   .p11               0.394                               0.394    0.394
   .p13               0.492                               0.492    0.492
   .p17               0.528                               0.528    0.528
   .p24               0.367                               0.367    0.367
   .p26               0.520                               0.520    0.520
   .p36               0.367                               0.367    0.367
   .p55               0.479                               0.479    0.479
   .p56               0.420                               0.420    0.420
   .p01               0.528                               0.528    0.528
   .p18               0.561                               0.561    0.561
   .p19               0.350                               0.350    0.350
   .p23               0.590                               0.590    0.590
   .p39               0.535                               0.535    0.535
   .p43               0.434                               0.434    0.434
   .p09               0.424                               0.424    0.424
   .p12               0.529                               0.529    0.529
   .p16               0.819                               0.819    0.819
   .p20               0.375                               0.375    0.375
   .p28               0.323                               0.323    0.323
   .p47               0.371                               0.371    0.371
   .p50               0.234                               0.234    0.234
   .p02               0.287                               0.287    0.287
   .p03               0.455                               0.455    0.455
   .p04               0.484                               0.484    0.484
   .p21               0.391                               0.391    0.391
   .p22               0.544                               0.544    0.544
   .p30               0.324                               0.324    0.324
   .p31               0.194                               0.194    0.194
   .p37               0.349                               0.349    0.349
   .p40               0.205                               0.205    0.205
   .p44               0.254                               0.254    0.254
   .p45               0.275                               0.275    0.275
   .p46               0.211                               0.211    0.211
   .p51               0.418                               0.418    0.418
   .p52               0.287                               0.287    0.287
   .p57               0.176                               0.176    0.176
    spurn             0.390    0.026   14.836    0.000    1.000    1.000
    terror            0.451    0.028   15.972    0.000    1.000    1.000
    isolate           0.472    0.034   14.048    0.000    1.000    1.000
    corrupt           0.576    0.039   14.874    0.000    1.000    1.000
    ignore            0.713    0.029   24.982    0.000    1.000    1.000

Scales y*:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
    p06               1.000                               1.000    1.000
    p10               1.000                               1.000    1.000
    p14               1.000                               1.000    1.000
    p25               1.000                               1.000    1.000
    p27               1.000                               1.000    1.000
    p29               1.000                               1.000    1.000
    p33               1.000                               1.000    1.000
    p35               1.000                               1.000    1.000
    p48               1.000                               1.000    1.000
    p49               1.000                               1.000    1.000
    p53               1.000                               1.000    1.000
    p54               1.000                               1.000    1.000
    p07               1.000                               1.000    1.000
    p11               1.000                               1.000    1.000
    p13               1.000                               1.000    1.000
    p17               1.000                               1.000    1.000
    p24               1.000                               1.000    1.000
    p26               1.000                               1.000    1.000
    p36               1.000                               1.000    1.000
    p55               1.000                               1.000    1.000
    p56               1.000                               1.000    1.000
    p01               1.000                               1.000    1.000
    p18               1.000                               1.000    1.000
    p19               1.000                               1.000    1.000
    p23               1.000                               1.000    1.000
    p39               1.000                               1.000    1.000
    p43               1.000                               1.000    1.000
    p09               1.000                               1.000    1.000
    p12               1.000                               1.000    1.000
    p16               1.000                               1.000    1.000
    p20               1.000                               1.000    1.000
    p28               1.000                               1.000    1.000
    p47               1.000                               1.000    1.000
    p50               1.000                               1.000    1.000
    p02               1.000                               1.000    1.000
    p03               1.000                               1.000    1.000
    p04               1.000                               1.000    1.000
    p21               1.000                               1.000    1.000
    p22               1.000                               1.000    1.000
    p30               1.000                               1.000    1.000
    p31               1.000                               1.000    1.000
    p37               1.000                               1.000    1.000
    p40               1.000                               1.000    1.000
    p44               1.000                               1.000    1.000
    p45               1.000                               1.000    1.000
    p46               1.000                               1.000    1.000
    p51               1.000                               1.000    1.000
    p52               1.000                               1.000    1.000
    p57               1.000                               1.000    1.000

R-Square:
                   Estimate
    p06               0.390
    p10               0.249
    p14               0.671
    p25               0.330
    p27               0.416
    p29               0.704
    p33               0.800
    p35               0.494
    p48               0.672
    p49               0.535
    p53               0.573
    p54               0.480
    p07               0.451
    p11               0.606
    p13               0.508
    p17               0.472
    p24               0.633
    p26               0.480
    p36               0.633
    p55               0.521
    p56               0.580
    p01               0.472
    p18               0.439
    p19               0.650
    p23               0.410
    p39               0.465
    p43               0.566
    p09               0.576
    p12               0.471
    p16               0.181
    p20               0.625
    p28               0.677
    p47               0.629
    p50               0.766
    p02               0.713
    p03               0.545
    p04               0.516
    p21               0.609
    p22               0.456
    p30               0.676
    p31               0.806
    p37               0.651
    p40               0.795
    p44               0.746
    p45               0.725
    p46               0.789
    p51               0.582
    p52               0.713
    p57               0.824

Note: #free parameters = 255: 44 loadings (one marker loading per factor is fixed to 1 for identification) + 49*4 = 196 thresholds + 5 factor variances + 10 factor covariances = 255 parameters estimated.

Possible statistics = 49*50/2 = 1225 polychoric correlations (plus fixed y* variances) + 49*4 = 196 thresholds = 1421. DF calculation: 1421 – 255 free parameters – 49 “residual” variances = 1117.
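The bookkeeping above can be verified with a few lines of arithmetic (a sketch; the variable names are illustrative, not from the original script):

```r
# Degrees-of-freedom accounting for the 5-correlated-factors WLSMV model
nItems <- 49
# 49*50/2 = 1225 polychoric correlations (incl. diagonal) + 49*4 = 196 thresholds
possible <- nItems * (nItems + 1) / 2 + nItems * 4    # 1421 possible statistics
# 44 loadings + 196 thresholds + 5 factor variances + 10 factor covariances
freeParams <- 44 + nItems * 4 + 5 + 10                # 255 free parameters
# Subtract the 49 "residual" variances (not free parameters under WLSMV)
df <- possible - freeParams - nItems                  # 1117 degrees of freedom
```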

Now we can test the fit of a constrained structural model that posits a single higher-order “General Abuse” factor to account for the correlations among these 5 latent factors.

Syntax for the IFA model with WLSMV including a higher-order factor instead of 5 correlated factors (the higher-order model is the more constrained “smaller” model, nested within the “bigger” correlated-factors model):

NOTE: With respect to fit of the structural model, we are now fitting a single higher-order factor INSTEAD OF covariances among the 5 factors: the 10 factor covariances and 5 factor variances (15 parameters) are replaced by 4 higher-order loadings, 1 higher-order factor variance, and 5 factor disturbance variances (10 parameters), so the model gains 5 degrees of freedom (1117 → 1122).

To test the fit against the saturated structural model (all possible factor covariances), Mplus would use the DIFFTEST option on the ANALYSIS command to read in saved results from the previous model; in lavaan, the analogous scaled chi-square difference test for nested WLSMV models is obtained by passing both fitted models to lavTestLRT().
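A minimal sketch of that comparison in lavaan, run after estimating the higher-order model below. The name `ifaEstimates` for the earlier five-correlated-factors WLSMV model is an assumption (that object is created earlier in the handout; substitute whatever name was actually used):

```r
# Scaled/shifted chi-square difference test for nested WLSMV models.
# lavTestLRT() applies the appropriate correction for DWLS/WLSMV
# estimation automatically, mirroring Mplus DIFFTEST.
# NOTE: "ifaEstimates" is an assumed name for the correlated-factors model.
lavTestLRT(ifaEstimates, ifaHigherEstimates)
```

A significant difference test would indicate that constraining the 5-factor covariance structure to a single higher-order factor worsens fit.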

ifaHigherSyntax = "
spurn =~ p06 + p10 + p14 + p25 + p27 + p29 + p33 + p35 + p48 + p49 + p53 + p54
terror =~ p07 + p11 + p13 + p17 + p24 + p26 + p36 + p55 + p56
isolate =~ p01 + p18 + p19 + p23 + p39 + p43
corrupt =~ p09 + p12 + p16 + p20 + p28 + p47 + p50
ignore =~ p02 + p03 + p04 + p21 + p22 + p30 + p31 + p37 + p40 + p44 + p45 + p46 + p51 + p52 + p57
abuse =~ spurn + terror + isolate + corrupt + ignore
"
ifaHigherEstimates = cfa(model = ifaHigherSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV",
                         ordered = c("p06", "p10", "p14", "p25", "p27", "p29", "p33", "p35", "p48", "p49", "p53", "p54", 
                                     "p07", "p11", "p13", "p17", "p24", "p26", "p36", "p55", "p56", "p01", "p18", "p19", 
                                     "p23", "p39", "p43", "p09", "p12", "p16", "p20", "p28", "p47", "p50", "p02", "p03", 
                                     "p04", "p21", "p22", "p30", "p31", "p37", "p40", "p44", "p45", "p46", "p51", "p52", "p57"))
lavaan WARNING: 704 bivariate tables have empty cells; to see them, use:
                  lavInspect(fit, "zero.cell.tables")
summary(ifaHigherEstimates, fit.measures = TRUE, rsquare = TRUE, standardized = TRUE)
lavaan (0.5-23.1097) converged normally after  77 iterations

  Number of observations                          1335

  Estimator                                       DWLS      Robust
  Minimum Function Test Statistic             5865.614    5939.652
  Degrees of freedom                              1122        1122
  P-value (Chi-square)                           0.000       0.000
  Scaling correction factor                                  1.107
  Shift parameter                                          642.166
    for simple second-order correction (WLSMV)

Model test baseline model:

  Minimum Function Test Statistic           471778.208   67352.685
  Degrees of freedom                              1176        1176
  P-value                                        0.000       0.000

User model versus baseline model:

  Comparative Fit Index (CFI)                    0.990       0.927
  Tucker-Lewis Index (TLI)                       0.989       0.924

  Robust Comparative Fit Index (CFI)                            NA
  Robust Tucker-Lewis Index (TLI)                               NA

Root Mean Square Error of Approximation:

  RMSEA                                          0.056       0.057
  90 Percent Confidence Interval          0.055  0.058       0.055  0.058
  P-value RMSEA <= 0.05                          0.000       0.000

  Robust RMSEA                                                  NA
  90 Percent Confidence Interval                                NA     NA

Standardized Root Mean Square Residual:

  SRMR                                           0.060       0.060

Weighted Root Mean Square Residual:

  WRMR                                           2.068       2.068

Parameter Estimates:

  Information                                 Expected
  Standard Errors                           Robust.sem

Latent Variables:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
  spurn =~                                                              
    p06               1.000                               0.624    0.624
    p10               0.798    0.045   17.622    0.000    0.498    0.498
    p14               1.312    0.047   28.027    0.000    0.819    0.819
    p25               0.920    0.044   20.732    0.000    0.575    0.575
    p27               1.033    0.042   24.474    0.000    0.645    0.645
    p29               1.344    0.047   28.503    0.000    0.839    0.839
    p33               1.433    0.049   29.181    0.000    0.895    0.895
    p35               1.125    0.048   23.458    0.000    0.703    0.703
    p48               1.311    0.059   22.385    0.000    0.819    0.819
    p49               1.171    0.045   26.240    0.000    0.731    0.731
    p53               1.213    0.046   26.375    0.000    0.757    0.757
    p54               1.109    0.044   25.042    0.000    0.692    0.692
  terror =~                                                             
    p07               1.000                               0.671    0.671
    p11               1.159    0.041   28.494    0.000    0.778    0.778
    p13               1.063    0.048   22.037    0.000    0.714    0.714
    p17               1.024    0.041   25.044    0.000    0.687    0.687
    p24               1.187    0.047   25.308    0.000    0.797    0.797
    p26               1.033    0.043   24.070    0.000    0.693    0.693
    p36               1.185    0.043   27.253    0.000    0.796    0.796
    p55               1.075    0.043   25.059    0.000    0.721    0.721
    p56               1.135    0.040   28.132    0.000    0.762    0.762
  isolate =~                                                            
    p01               1.000                               0.688    0.688
    p18               0.963    0.044   21.844    0.000    0.662    0.662
    p19               1.171    0.048   24.281    0.000    0.805    0.805
    p23               0.931    0.042   21.934    0.000    0.641    0.641
    p39               0.992    0.044   22.636    0.000    0.682    0.682
    p43               1.095    0.041   26.460    0.000    0.753    0.753
  corrupt =~                                                            
    p09               1.000                               0.759    0.759
    p12               0.904    0.044   20.342    0.000    0.686    0.686
    p16               0.559    0.043   12.954    0.000    0.424    0.424
    p20               1.041    0.048   21.482    0.000    0.790    0.790
    p28               1.085    0.048   22.806    0.000    0.823    0.823
    p47               1.045    0.041   25.239    0.000    0.793    0.793
    p50               1.153    0.045   25.652    0.000    0.875    0.875
  ignore =~                                                             
    p02               1.000                               0.845    0.845
    p03               0.874    0.023   38.605    0.000    0.738    0.738
    p04               0.850    0.022   37.812    0.000    0.718    0.718
    p21               0.924    0.022   41.447    0.000    0.781    0.781
    p22               0.800    0.027   29.857    0.000    0.675    0.675
    p30               0.974    0.021   46.147    0.000    0.822    0.822
    p31               1.063    0.021   49.546    0.000    0.898    0.898
    p37               0.955    0.022   43.425    0.000    0.807    0.807
    p40               1.056    0.021   50.758    0.000    0.892    0.892
    p44               1.022    0.021   48.596    0.000    0.863    0.863
    p45               1.008    0.021   46.918    0.000    0.852    0.852
    p46               1.052    0.021   50.202    0.000    0.888    0.888
    p51               0.903    0.022   40.269    0.000    0.763    0.763
    p52               1.000    0.022   46.082    0.000    0.844    0.844
    p57               1.075    0.021   50.995    0.000    0.908    0.908
  abuse =~                                                              
    spurn             1.000                               0.990    0.990
    terror            1.029    0.050   20.661    0.000    0.948    0.948
    isolate           1.059    0.055   19.162    0.000    0.951    0.951
    corrupt           1.025    0.059   17.501    0.000    0.834    0.834
    ignore            1.209    0.051   23.484    0.000    0.884    0.884

Intercepts:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
   .p06               0.000                               0.000    0.000
   .p10               0.000                               0.000    0.000
   .p14               0.000                               0.000    0.000
   .p25               0.000                               0.000    0.000
   .p27               0.000                               0.000    0.000
   .p29               0.000                               0.000    0.000
   .p33               0.000                               0.000    0.000
   .p35               0.000                               0.000    0.000
   .p48               0.000                               0.000    0.000
   .p49               0.000                               0.000    0.000
   .p53               0.000                               0.000    0.000
   .p54               0.000                               0.000    0.000
   .p07               0.000                               0.000    0.000
   .p11               0.000                               0.000    0.000
   .p13               0.000                               0.000    0.000
   .p17               0.000                               0.000    0.000
   .p24               0.000                               0.000    0.000
   .p26               0.000                               0.000    0.000
   .p36               0.000                               0.000    0.000
   .p55               0.000                               0.000    0.000
   .p56               0.000                               0.000    0.000
   .p01               0.000                               0.000    0.000
   .p18               0.000                               0.000    0.000
   .p19               0.000                               0.000    0.000
   .p23               0.000                               0.000    0.000
   .p39               0.000                               0.000    0.000
   .p43               0.000                               0.000    0.000
   .p09               0.000                               0.000    0.000
   .p12               0.000                               0.000    0.000
   .p16               0.000                               0.000    0.000
   .p20               0.000                               0.000    0.000
   .p28               0.000                               0.000    0.000
   .p47               0.000                               0.000    0.000
   .p50               0.000                               0.000    0.000
   .p02               0.000                               0.000    0.000
   .p03               0.000                               0.000    0.000
   .p04               0.000                               0.000    0.000
   .p21               0.000                               0.000    0.000
   .p22               0.000                               0.000    0.000
   .p30               0.000                               0.000    0.000
   .p31               0.000                               0.000    0.000
   .p37               0.000                               0.000    0.000
   .p40               0.000                               0.000    0.000
   .p44               0.000                               0.000    0.000
   .p45               0.000                               0.000    0.000
   .p46               0.000                               0.000    0.000
   .p51               0.000                               0.000    0.000
   .p52               0.000                               0.000    0.000
   .p57               0.000                               0.000    0.000
    spurn             0.000                               0.000    0.000
    terror            0.000                               0.000    0.000
    isolate           0.000                               0.000    0.000
    corrupt           0.000                               0.000    0.000
    ignore            0.000                               0.000    0.000
    abuse             0.000                               0.000    0.000

Thresholds:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
    p06|t1           -0.751    0.038  -19.732    0.000   -0.751   -0.751
    p06|t2            0.154    0.034    4.458    0.000    0.154    0.154
    p06|t3            0.700    0.038   18.642    0.000    0.700    0.700
    p06|t4            1.513    0.053   28.441    0.000    1.513    1.513
    p10|t1           -0.312    0.035   -8.932    0.000   -0.312   -0.312
    p10|t2            0.427    0.035   12.026    0.000    0.427    0.427
    p10|t3            0.869    0.039   22.014    0.000    0.869    0.869
    p10|t4            1.568    0.055   28.489    0.000    1.568    1.568
    p14|t1            0.360    0.035   10.237    0.000    0.360    0.360
    p14|t2            1.047    0.042   24.864    0.000    1.047    1.047
    p14|t3            1.446    0.051   28.277    0.000    1.446    1.446
    p14|t4            2.081    0.081   25.669    0.000    2.081    2.081
    p25|t1           -0.082    0.034   -2.380    0.017   -0.082   -0.082
    p25|t2            0.547    0.036   15.095    0.000    0.547    0.547
    p25|t3            0.916    0.040   22.849    0.000    0.916    0.916
    p25|t4            1.965    0.073   26.758    0.000    1.965    1.965
    p27|t1           -0.394    0.035  -11.159    0.000   -0.394   -0.394
    p27|t2            0.398    0.035   11.268    0.000    0.398    0.398
    p27|t3            0.888    0.040   22.360    0.000    0.888    0.888
    p27|t4            1.712    0.061   28.262    0.000    1.712    1.712
    p29|t1            0.033    0.034    0.958    0.338    0.033    0.033
    p29|t2            0.715    0.038   18.955    0.000    0.715    0.715
    p29|t3            1.090    0.043   25.436    0.000    1.090    1.090
    p29|t4            1.800    0.065   27.890    0.000    1.800    1.800
    p33|t1            0.346    0.035    9.857    0.000    0.346    0.346
    p33|t2            1.063    0.042   25.087    0.000    1.063    1.063
    p33|t3            1.440    0.051   28.259    0.000    1.440    1.440
    p33|t4            2.115    0.084   25.312    0.000    2.115    2.115
    p35|t1            0.022    0.034    0.629    0.529    0.022    0.022
    p35|t2            0.960    0.041   23.570    0.000    0.960    0.960
    p35|t3            1.351    0.049   27.846    0.000    1.351    1.351
    p35|t4            1.915    0.071   27.151    0.000    1.915    1.915
    p48|t1            1.047    0.042   24.864    0.000    1.047    1.047
    p48|t2            1.636    0.058   28.444    0.000    1.636    1.636
    p48|t3            1.881    0.069   27.396    0.000    1.881    1.881
    p48|t4            2.433    0.114   21.318    0.000    2.433    2.433
    p49|t1            0.265    0.035    7.624    0.000    0.265    0.265
    p49|t2            0.975    0.041   23.807    0.000    0.975    0.975
    p49|t3            1.451    0.051   28.294    0.000    1.451    1.451
    p49|t4            2.151    0.086   24.909    0.000    2.151    2.151
    p53|t1            0.003    0.034    0.082    0.935    0.003    0.003
    p53|t2            0.782    0.038   20.349    0.000    0.782    0.782
    p53|t3            1.275    0.047   27.333    0.000    1.275    1.275
    p53|t4            1.927    0.071   27.060    0.000    1.927    1.927
    p54|t1            0.076    0.034    2.216    0.027    0.076    0.076
    p54|t2            0.637    0.037   17.223    0.000    0.637    0.637
    p54|t3            0.999    0.041   24.180    0.000    0.999    0.999
    p54|t4            1.712    0.061   28.262    0.000    1.712    1.712
    p07|t1            0.207    0.035    5.988    0.000    0.207    0.207
    p07|t2            1.128    0.044   25.901    0.000    1.128    1.128
    p07|t3            1.549    0.054   28.481    0.000    1.549    1.549
    p07|t4            2.212    0.091   24.201    0.000    2.212    2.212
    p11|t1            0.382    0.035   10.834    0.000    0.382    0.382
    p11|t2            1.060    0.042   25.042    0.000    1.060    1.060
    p11|t3            1.456    0.051   28.311    0.000    1.456    1.456
    p11|t4            2.115    0.084   25.312    0.000    2.115    2.115
    p13|t1            1.150    0.044   26.147    0.000    1.150    1.150
    p13|t2            1.658    0.058   28.406    0.000    1.658    1.658
    p13|t3            1.881    0.069   27.396    0.000    1.881    1.881
    p13|t4            2.336    0.103   22.629    0.000    2.336    2.336
    p17|t1            0.451    0.036   12.675    0.000    0.451    0.451
    p17|t2            1.275    0.047   27.333    0.000    1.275    1.275
    p17|t3            1.615    0.057   28.470    0.000    1.615    1.615
    p17|t4            2.234    0.093   23.931    0.000    2.234    2.234
    p24|t1            1.009    0.041   24.319    0.000    1.009    1.009
    p24|t2            1.904    0.070   27.237    0.000    1.904    1.904
    p24|t3            2.433    0.114   21.318    0.000    2.433    2.433
    p24|t4            2.748    0.164   16.783    0.000    2.748    2.748
    p26|t1           -0.468    0.036  -13.106    0.000   -0.468   -0.468
    p26|t2            0.813    0.039   20.960    0.000    0.813    0.813
    p26|t3            1.242    0.046   27.059    0.000    1.242    1.242
    p26|t4            1.870    0.068   27.470    0.000    1.870    1.870
    p36|t1            0.587    0.037   16.056    0.000    0.587    0.587
    p36|t2            1.242    0.046   27.059    0.000    1.242    1.242
    p36|t3            1.531    0.054   28.465    0.000    1.531    1.531
    p36|t4            2.308    0.100   22.993    0.000    2.308    2.308
    p55|t1            0.253    0.035    7.297    0.000    0.253    0.253
    p55|t2            0.700    0.038   18.642    0.000    0.700    0.700
    p55|t3            1.002    0.041   24.227    0.000    1.002    1.002
    p55|t4            1.790    0.064   27.938    0.000    1.790    1.790
    p56|t1            0.114    0.034    3.310    0.001    0.114    0.114
    p56|t2            0.651    0.037   17.540    0.000    0.651    0.651
    p56|t3            0.945    0.041   23.332    0.000    0.945    0.945
    p56|t4            1.772    0.063   28.026    0.000    1.772    1.772
    p01|t1            0.836    0.039   21.414    0.000    0.836    0.836
    p01|t2            1.575    0.055   28.489    0.000    1.575    1.575
    p01|t3            1.881    0.069   27.396    0.000    1.881    1.881
    p01|t4            2.191    0.090   24.453    0.000    2.191    2.191
    p18|t1           -0.416    0.035  -11.755    0.000   -0.416   -0.416
    p18|t2            0.294    0.035    8.442    0.000    0.294    0.294
    p18|t3            0.826    0.039   21.213    0.000    0.826    0.826
    p18|t4            1.495    0.053   28.409    0.000    1.495    1.495
    p19|t1            0.899    0.040   22.556    0.000    0.899    0.899
    p19|t2            1.525    0.054   28.457    0.000    1.525    1.525
    p19|t3            1.881    0.069   27.396    0.000    1.881    1.881
    p19|t4            2.336    0.103   22.629    0.000    2.336    2.336
    p23|t1           -0.334    0.035   -9.530    0.000   -0.334   -0.334
    p23|t2            0.616    0.037   16.747    0.000    0.616    0.616
    p23|t3            1.254    0.046   27.164    0.000    1.254    1.254
    p23|t4            2.097    0.082   25.496    0.000    2.097    2.097
    p39|t1            0.717    0.038   19.007    0.000    0.717    0.717
    p39|t2            1.696    0.060   28.312    0.000    1.696    1.696
    p39|t3            2.049    0.079   25.988    0.000    2.049    2.049
    p39|t4            2.366    0.106   22.231    0.000    2.366    2.366
    p43|t1            0.033    0.034    0.958    0.338    0.033    0.033
    p43|t2            1.202    0.045   26.694    0.000    1.202    1.202
    p43|t3            1.673    0.059   28.373    0.000    1.673    1.673
    p43|t4            2.433    0.114   21.318    0.000    2.433    2.433
    p09|t1            0.908    0.040   22.703    0.000    0.908    0.908
    p09|t2            1.688    0.060   28.334    0.000    1.688    1.688
    p09|t3            2.151    0.086   24.909    0.000    2.151    2.151
    p09|t4            2.748    0.164   16.783    0.000    2.748    2.748
    p12|t1            0.589    0.037   16.109    0.000    0.589    0.589
    p12|t2            1.809    0.065   27.839    0.000    1.809    1.809
    p12|t3            2.133    0.085   25.117    0.000    2.133    2.133
    p12|t4            2.398    0.110   21.797    0.000    2.398    2.398
    p16|t1            0.196    0.035    5.660    0.000    0.196    0.196
    p16|t2            0.963    0.041   23.618    0.000    0.963    0.963
    p16|t3            1.360    0.049   27.900    0.000    1.360    1.360
    p16|t4            2.171    0.088   24.688    0.000    2.171    2.171
    p20|t1            1.478    0.052   28.371    0.000    1.478    1.478
    p20|t2            2.034    0.078   26.134    0.000    2.034    2.034
    p20|t3            2.191    0.090   24.453    0.000    2.191    2.191
    p20|t4            2.674    0.150   17.859    0.000    2.674    2.674
    p28|t1            1.210    0.045   26.769    0.000    1.210    1.210
    p28|t2            1.688    0.060   28.334    0.000    1.688    1.688
    p28|t3            1.809    0.065   27.839    0.000    1.809    1.809
    p28|t4            2.282    0.098   23.330    0.000    2.282    2.282
    p47|t1            0.724    0.038   19.163    0.000    0.724    0.724
    p47|t2            1.370    0.049   27.951    0.000    1.370    1.370
    p47|t3            1.754    0.062   28.104    0.000    1.754    1.754
    p47|t4            2.308    0.100   22.993    0.000    2.308    2.308
    p50|t1            1.150    0.044   26.147    0.000    1.150    1.150
    p50|t2            1.849    0.067   27.606    0.000    1.849    1.849
    p50|t3            2.081    0.081   25.669    0.000    2.081    2.081
    p50|t4            2.433    0.114   21.318    0.000    2.433    2.433
    p02|t1            0.789    0.038   20.502    0.000    0.789    0.789
    p02|t2            1.650    0.058   28.420    0.000    1.650    1.650
    p02|t3            1.940    0.072   26.964    0.000    1.940    1.940
    p02|t4            2.433    0.114   21.318    0.000    2.433    2.433
    p03|t1            0.209    0.035    6.042    0.000    0.209    0.209
    p03|t2            1.111    0.043   25.692    0.000    1.111    1.111
 [ reached getOption("max.print") -- omitted 54 rows ]

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
   .p06               0.610                               0.610    0.610
   .p10               0.752                               0.752    0.752
   .p14               0.329                               0.329    0.329
   .p25               0.670                               0.670    0.670
   .p27               0.584                               0.584    0.584
   .p29               0.295                               0.295    0.295
   .p33               0.199                               0.199    0.199
   .p35               0.506                               0.506    0.506
   .p48               0.330                               0.330    0.330
   .p49               0.465                               0.465    0.465
   .p53               0.426                               0.426    0.426
   .p54               0.520                               0.520    0.520
   .p07               0.550                               0.550    0.550
   .p11               0.395                               0.395    0.395
   .p13               0.491                               0.491    0.491
   .p17               0.528                               0.528    0.528
   .p24               0.366                               0.366    0.366
   .p26               0.520                               0.520    0.520
   .p36               0.367                               0.367    0.367
   .p55               0.480                               0.480    0.480
   .p56               0.420                               0.420    0.420
   .p01               0.527                               0.527    0.527
   .p18               0.561                               0.561    0.561
   .p19               0.352                               0.352    0.352
   .p23               0.590                               0.590    0.590
   .p39               0.535                               0.535    0.535
   .p43               0.433                               0.433    0.433
   .p09               0.424                               0.424    0.424
   .p12               0.529                               0.529    0.529
   .p16               0.820                               0.820    0.820
   .p20               0.376                               0.376    0.376
   .p28               0.322                               0.322    0.322
   .p47               0.371                               0.371    0.371
   .p50               0.234                               0.234    0.234
   .p02               0.287                               0.287    0.287
   .p03               0.455                               0.455    0.455
   .p04               0.484                               0.484    0.484
   .p21               0.391                               0.391    0.391
   .p22               0.544                               0.544    0.544
   .p30               0.324                               0.324    0.324
   .p31               0.194                               0.194    0.194
   .p37               0.349                               0.349    0.349
   .p40               0.205                               0.205    0.205
   .p44               0.255                               0.255    0.255
   .p45               0.275                               0.275    0.275
   .p46               0.211                               0.211    0.211
   .p51               0.418                               0.418    0.418
   .p52               0.287                               0.287    0.287
   .p57               0.176                               0.176    0.176
    spurn             0.008    0.004    2.119    0.034    0.021    0.021
    terror            0.046    0.006    7.120    0.000    0.101    0.101
    isolate           0.045    0.008    5.525    0.000    0.096    0.096
    corrupt           0.175    0.016   10.869    0.000    0.304    0.304
    ignore            0.155    0.012   12.652    0.000    0.218    0.218
    abuse             0.382    0.026   14.620    0.000    1.000    1.000

Scales y*:
                   Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
    p06               1.000                               1.000    1.000
    p10               1.000                               1.000    1.000
    p14               1.000                               1.000    1.000
    p25               1.000                               1.000    1.000
    p27               1.000                               1.000    1.000
    p29               1.000                               1.000    1.000
    p33               1.000                               1.000    1.000
    p35               1.000                               1.000    1.000
    p48               1.000                               1.000    1.000
    p49               1.000                               1.000    1.000
    p53               1.000                               1.000    1.000
    p54               1.000                               1.000    1.000
    p07               1.000                               1.000    1.000
    p11               1.000                               1.000    1.000
    p13               1.000                               1.000    1.000
    p17               1.000                               1.000    1.000
    p24               1.000                               1.000    1.000
    p26               1.000                               1.000    1.000
    p36               1.000                               1.000    1.000
    p55               1.000                               1.000    1.000
    p56               1.000                               1.000    1.000
    p01               1.000                               1.000    1.000
    p18               1.000                               1.000    1.000
    p19               1.000                               1.000    1.000
    p23               1.000                               1.000    1.000
    p39               1.000                               1.000    1.000
    p43               1.000                               1.000    1.000
    p09               1.000                               1.000    1.000
    p12               1.000                               1.000    1.000
    p16               1.000                               1.000    1.000
    p20               1.000                               1.000    1.000
    p28               1.000                               1.000    1.000
    p47               1.000                               1.000    1.000
    p50               1.000                               1.000    1.000
    p02               1.000                               1.000    1.000
    p03               1.000                               1.000    1.000
    p04               1.000                               1.000    1.000
    p21               1.000                               1.000    1.000
    p22               1.000                               1.000    1.000
    p30               1.000                               1.000    1.000
    p31               1.000                               1.000    1.000
    p37               1.000                               1.000    1.000
    p40               1.000                               1.000    1.000
    p44               1.000                               1.000    1.000
    p45               1.000                               1.000    1.000
    p46               1.000                               1.000    1.000
    p51               1.000                               1.000    1.000
    p52               1.000                               1.000    1.000
    p57               1.000                               1.000    1.000

R-Square:
                   Estimate
    p06               0.390
    p10               0.248
    p14               0.671
    p25               0.330
    p27               0.416
    p29               0.705
    p33               0.801
    p35               0.494
    p48               0.670
    p49               0.535
    p53               0.574
    p54               0.480
    p07               0.450
    p11               0.605
    p13               0.509
    p17               0.472
    p24               0.634
    p26               0.480
    p36               0.633
    p55               0.520
    p56               0.580
    p01               0.473
    p18               0.439
    p19               0.648
    p23               0.410
    p39               0.465
    p43               0.567
    p09               0.576
    p12               0.471
    p16               0.180
    p20               0.624
    p28               0.678
    p47               0.629
    p50               0.766
    p02               0.713
    p03               0.545
    p04               0.516
    p21               0.609
    p22               0.456
    p30               0.676
    p31               0.806
    p37               0.651
    p40               0.795
    p44               0.745
    p45               0.725
    p46               0.789
    p51               0.582
    p52               0.713
    p57               0.824
    spurn             0.979
    terror            0.899
    isolate           0.904
    corrupt           0.696
    ignore            0.782
anova(ifaNoHighEstimates, ifaHigherEstimates)
Error in lav_test_diff_af_h1(m1 = m1, m0 = m0) : 
  lavaan ERROR: unconstrained parameter set is not the same in m0 and m1

As it turns out, lavaan returns an error for this model comparison, so we cannot determine from this output which model fits better. The corresponding Mplus example showed that the higher-order factor model did not fit as well as the model with correlated factors.

Syntax and output for IFA model with WLSMV including only a single factor (“smallest model”)

We can try one more alternative – what if the items were measuring a single factor (i.e., a single score)?

NOTE: With respect to fit of the structural model, we are now fitting a single factor INSTEAD OF 5 factors and a higher-order factor. This will tell us the extent to which a single score is appropriate.

To test the fit against the higher-order factor model, we use anova(), lavaan's counterpart to the Mplus DIFFTEST option (which reuses the results from the previous model).

anova(ifaSingleEstimates, ifaNoHighEstimates)
Error in lav_test_diff_af_h1(m1 = m1, m0 = m0) : 
  lavaan ERROR: unconstrained parameter set is not the same in m0 and m1

Again, lavaan throws an error. We’ll use the Mplus result in our write-up below.

Example results section for CFA using MLR

After examining the fit of each of the five factors individually, as described previously, a combined model was estimated in which all five factors were fit simultaneously with covariances estimated freely among them. A total of 49 items were thus included. Each factor was identified by fixing the first item loading on each factor to 1, estimating the factor variance, and then fixing the factor mean to 0, while estimating all possible item intercepts, item residual variances, and remaining item loadings. Robust maximum likelihood (MLR) estimation was used to estimate all higher-order models using the lavaan package (Rosseel, 2012) in R (R Core Team, 2017), and differences in fit between nested models were evaluated using rescaled differences in −2 times the model log-likelihood values (−2ΔLL).

As shown in Table 1, the fit of the model with five correlated factors was acceptable by the RMSEA (.047), but not by the CFI (.844). Standardized model parameters (loadings, intercepts, and residual variances) are shown in Table 2. Correlations of .6 or higher were found among the five factors, suggesting that the five factors may indicate a single higher-order factor. This idea was tested by eliminating the covariances among the factors and instead estimating loadings for the five factors from a single higher-order factor (whose variance was fixed to 1). Although the fit of the higher-order factor model remained marginal (see Table 1), a nested model comparison revealed a significant decrease in fit, −2ΔLL(5) = 47.083, p < .0001, indicating that a single factor did not appear adequate to describe the pattern of correlation amongst the five factors. A further nested model comparison was conducted to examine the extent to which a single factor could describe the covariances among the items rather than five lower-order factors and a single higher-order factor. Fit of the single factor only model was poor, as shown in Table 1, and was significantly worse than the higher-order factor model, −2ΔLL(5) = 448.91, p < .0001, indicating that a single “total score” would not be recommended.

Example results section for IFA using WLSMV

After examining the fit of each of the five factors individually, as described previously, a combined model was estimated in which all five factors were fit simultaneously with covariances estimated freely among them. A total of 49 items were thus included. Each factor was identified by fixing the first item loading on each factor to 1, estimating the factor variance, and then fixing the factor mean to 0, while estimating all possible item thresholds (four for each item given five response options) and remaining item loadings. WLSMV estimation in the lavaan package (Rosseel, 2012) in R (R Core Team, 2017) including a probit link and the THETA parameterization (such that all item residual variances were constrained to 1) was used to estimate all higher-order models. Thus, model fit statistics describe the fit of the item factor model to the polychoric correlation matrix among the items. Nested model comparisons were conducted using the Mplus DIFFTEST procedure.

As shown in Table 1, the fit of the model with five correlated factors was acceptable. Item factor analysis parameters (loadings and thresholds) and their corresponding item response model parameters (discriminations and difficulties) are shown in Table 2. Correlations of .7 or higher were found amongst the five factors, suggesting that the five factors may indicate a single higher-order factor. This idea was tested by eliminating the covariances among the factors and instead estimating loadings for the five factors from a single higher-order factor (whose variance was fixed to 1). Although the fit of the higher-order factor model remained acceptable (see Table 1), a nested model comparison via the DIFFTEST procedure revealed a significant decrease in fit, DIFFTEST(5) = 92.05, p < .0001, indicating that a single factor did not appear adequate to describe the pattern of correlation amongst the five factors. A further nested model comparison was conducted to examine the extent to which a single factor could describe the polychoric correlations among the items rather than five lower-order factors and a single higher-order factor. Fit of the single factor only model was poor, as shown in Table 1, and was significantly worse than the higher-order factor model, DIFFTEST(5) = 611.95, p < .0001, indicating that a single score would not be recommended.
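The correspondence between the IFA parameters (loadings and thresholds) and their IRT counterparts (discriminations and difficulties) follows from the probit link and theta parameterization, under which each item residual variance is fixed to 1. A minimal sketch (shown in Python for illustration; the loading and threshold values below are hypothetical, not taken from Table 2):

```python
def ifa_to_irt(lam, tau, psi=1.0):
    """Convert a probit-link IFA loading (lam) and threshold (tau) to
    normal-ogive IRT parameters; psi is the item residual variance,
    which is fixed to 1 under the theta parameterization."""
    a = lam / psi ** 0.5   # discrimination
    b = tau / lam          # difficulty (threshold re-expressed on the trait scale)
    return a, b

# Hypothetical item: loading 1.25, first threshold 0.90
a, b = ifa_to_irt(lam=1.25, tau=0.90)
```

With psi = 1 the discrimination equals the loading, so steeper loadings translate directly into more discriminating items.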

Table 1 would contain the fit information for each model. Table 2 would contain the actual model parameters (unstandardized and standardized estimates and their SEs, so 4 columns).

---
title: "EPSY 906/CLDP 948 Example 8: Higher-Order Models (CFA with MLR and IFA with WLSMV)"
output:
  html_notebook:
    smart: false
---

## Higher-Order Models (CFA with MLR and IFA with WLSMV)  `lavaan`

```{r setup, include=TRUE}
if (!require(lavaan)) install.packages("lavaan")
library(lavaan)
```

Example data: 1336 college students self-reporting on 49 items (measuring five factors) assessing childhood maltreatment: Items are answered on a 1–5 scale: 1=Strongly Disagree, 2=Disagree, 3=Neutral, 4=Agree, 5=Strongly Agree. The items are NOT normally distributed, so we’ll use both CFA with MLR and IFA with WLSMV as two options to examine the fit of these models (as an example of how to do each, but NOT to compare between estimators).

*1. Spurning:* Verbal and nonverbal caregiver acts that reject and degrade a child.

*2. Terrorizing:* Caregiver behaviors that threaten or are likely to physically hurt, kill, abandon, or place the child or the child’s loved ones or objects in recognizably dangerous situations.

*3. Isolating:* Caregiver acts that consistently deny the child opportunities to meet needs for interacting or communicating with peers or adults inside or outside the home.

*4. Corrupting:* Caregiver acts that encourage the child to develop inappropriate behaviors (self-destructive, antisocial, criminal, deviant, or other maladaptive behaviors).

*5. Ignoring:* Emotional unresponsiveness includes caregiver acts that ignore the child’s attempts and needs to interact (failing to express affection, caring, and love for the child) and show no emotion in interactions with the child.

```{r import, include=TRUE}
abuseData = read.csv(file = "abuse.csv", col.names = c("ID", paste0("p0",1:9), paste0("p",10:57)))
```


First, we separately build each one-factor model:

```{r multiFactors, include=TRUE}
spurningSyntax = "
spurn =~ p06 + p10 + p14 + p25 + p27 + p29 + p33 + p35 + p48 + p49 + p53 + p54
"
spurningEstimatesMLR = cfa(model = spurningSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
fitResultsMLR = data.frame(Model = "Spurning", rbind(inspect(object = spurningEstimatesMLR, what = "fit")), stringsAsFactors = FALSE)

spurningEstimatesWLSMV = cfa(model = spurningSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV", 
                             ordered = c("p06", "p10", "p14", "p25", "p27", "p29", "p33", "p35", "p48", "p49", "p53", "p54"),
                             parameterization = "theta")
fitResultsWLSMV = data.frame(Model = "Spurning", rbind(inspect(object = spurningEstimatesWLSMV, what = "fit")), stringsAsFactors = FALSE)

spurningParams = cbind(inspect(object = spurningEstimatesMLR, what = "std")$lambda, inspect(object = spurningEstimatesWLSMV, what = "std")$lambda) 
colnames(spurningParams) = c("spurningMLR", "spurningWLSMV")


terrorizingSyntax = "
terror =~ p07 + p11 + p13 + p17 + p24 + p26 + p36 + p55 + p56
"
terrorizingEstimatesMLR = cfa(model = terrorizingSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
fitResultsMLR = rbind(fitResultsMLR, c("Terrorizing", inspect(object = terrorizingEstimatesMLR, what = "fit")))

terrorizingEstimatesWLSMV = cfa(model = terrorizingSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV", 
                             ordered = c("p07", "p11", "p13", "p17", "p24", "p26", "p36", "p55", "p56"), parameterization = "theta")
fitResultsWLSMV = rbind(fitResultsWLSMV, c("Terrorizing", inspect(object = terrorizingEstimatesWLSMV, what = "fit")))

terrorizingParams = cbind(inspect(object = terrorizingEstimatesMLR, what = "std")$lambda, inspect(object = terrorizingEstimatesWLSMV, what = "std")$lambda) 
colnames(terrorizingParams) = c("terrorizingMLR", "terrorizingWLSMV")


isolatingSyntax = "
isolate =~ p01 + p18 + p19 + p23 + p39 + p43
"

isolatingEstimatesMLR = cfa(model = isolatingSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
fitResultsMLR = rbind(fitResultsMLR, c("Isolating", inspect(object = isolatingEstimatesMLR, what = "fit")))

isolatingEstimatesWLSMV = cfa(model = isolatingSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV", 
                             ordered = c("p01", "p18", "p19", "p23", "p39", "p43"), parameterization = "theta")
fitResultsWLSMV = rbind(fitResultsWLSMV, c("Isolating", inspect(object = isolatingEstimatesWLSMV, what = "fit")))

isolatingParams = cbind(inspect(object = isolatingEstimatesMLR, what = "std")$lambda, inspect(object = isolatingEstimatesWLSMV, what = "std")$lambda) 
colnames(isolatingParams) = c("isolatingMLR", "isolatingWLSMV")

corruptingSyntax = "
corrupt =~ p09 + p12 + p16 + p20 + p28 + p47 + p50
"

corruptingEstimatesMLR = cfa(model = corruptingSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
fitResultsMLR = rbind(fitResultsMLR, c("Corrupting", inspect(object = corruptingEstimatesMLR, what = "fit")))

corruptingEstimatesWLSMV = cfa(model = corruptingSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV", 
                             ordered = c("p09", "p12", "p16", "p20", "p28", "p47", "p50"), parameterization = "theta")
fitResultsWLSMV = rbind(fitResultsWLSMV, c("Corrupting", inspect(object = corruptingEstimatesWLSMV, what = "fit")))

corruptingParams = cbind(inspect(object = corruptingEstimatesMLR, what = "std")$lambda, inspect(object = corruptingEstimatesWLSMV, what = "std")$lambda) 
colnames(corruptingParams) = c("corruptingMLR", "corruptingWLSMV")

ignoringSyntax = "
ignore =~ p02 + p03 + p04 + p21 + p22 + p30 + p31 + p37 + p40 + p44 + p45 + p46 + p51 + p52 + p57
"

ignoringEstimatesMLR = cfa(model = ignoringSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
fitResultsMLR = rbind(fitResultsMLR, c("Ignoring", inspect(object = ignoringEstimatesMLR, what = "fit")))

ignoringEstimatesWLSMV = cfa(model = ignoringSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV", 
                             ordered = c("p02", "p03", "p04", "p21", "p22", "p30", "p31", "p37", "p40", "p44", "p45", "p46", "p51", "p52", "p57"),
                             parameterization = "theta")
fitResultsWLSMV = rbind(fitResultsWLSMV, c("Ignoring", inspect(object = ignoringEstimatesWLSMV, what = "fit")))

ignoringParams = cbind(inspect(object = ignoringEstimatesMLR, what = "std")$lambda, inspect(object = ignoringEstimatesWLSMV, what = "std")$lambda) 
colnames(ignoringParams) = c("ignoringMLR", "ignoringWLSMV")

```

#### MLR Model Fit Results

```{r onefactorres, include=TRUE}
fitResultsMLR[,c("Model", "chisq.scaled", "chisq.scaling.factor", "df.scaled", "pvalue.scaled", "cfi.scaled", "tli.scaled","rmsea.scaled")]
```

#### WLSMV Model Fit Results


```{r onefactorres2, include=TRUE}
fitResultsWLSMV[,c("Model", "chisq.scaled", "chisq.scaling.factor", "df.scaled", "pvalue.scaled", "cfi.scaled", "tli.scaled","rmsea.scaled")]
```

#### Parameter Results

```{r, onefactorres3, include=TRUE}
spurningParams
terrorizingParams
isolatingParams
corruptingParams
ignoringParams
```

### CFA model with MLR including all 5 correlated factors (“biggest model” for comparison)

```{r cfabig, include=TRUE}
cfaNoHighSyntax = "
spurn =~ p06 + p10 + p14 + p25 + p27 + p29 + p33 + p35 + p48 + p49 + p53 + p54
terror =~ p07 + p11 + p13 + p17 + p24 + p26 + p36 + p55 + p56
isolate =~ p01 + p18 + p19 + p23 + p39 + p43
corrupt =~ p09 + p12 + p16 + p20 + p28 + p47 + p50
ignore =~ p02 + p03 + p04 + p21 + p22 + p30 + p31 + p37 + p40 + p44 + p45 + p46 + p51 + p52 + p57
"

cfaNoHighEstimates = cfa(model = cfaNoHighSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
summary(cfaNoHighEstimates, fit.measures = TRUE, rsquare = TRUE, standardized = TRUE)
```

NOTE: With respect to fit of the structural model, letting the separate factors be correlated is as good as it gets. This saturated structural model will be our "larger model" baseline with which to compare the fit of a single higher-order factor model (as the "smaller model").

### Syntax for CFA model with MLR and a higher-order factor instead of correlations among 5 factors ("smaller/bigger model" for comparison)

```{r cfahigher, include=TRUE}
cfaHigherSyntax = "
spurn =~ p06 + p10 + p14 + p25 + p27 + p29 + p33 + p35 + p48 + p49 + p53 + p54
terror =~ p07 + p11 + p13 + p17 + p24 + p26 + p36 + p55 + p56
isolate =~ p01 + p18 + p19 + p23 + p39 + p43
corrupt =~ p09 + p12 + p16 + p20 + p28 + p47 + p50
ignore =~ p02 + p03 + p04 + p21 + p22 + p30 + p31 + p37 + p40 + p44 + p45 + p46 + p51 + p52 + p57

abuse =~ spurn + terror + isolate + corrupt + ignore
"

cfaHigherEstimates = cfa(model = cfaHigherSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
summary(cfaHigherEstimates, fit.measures = TRUE, rsquare = TRUE, standardized = TRUE)
```

NOTE: With respect to fit of the structural model, we are now fitting a single higher-order factor INSTEAD OF covariances among the 5 factors.

To test the fit against the saturated model (all possible factor correlations), we can do a −2ΔLL scaled difference test.

```{r cfaHvsNH, include=TRUE}
anova(cfaNoHighEstimates, cfaHigherEstimates)
```

This higher-order factor model uses 5 fewer parameters (4 free higher-order loadings plus the higher-order factor variance replace the 10 covariances among the factors).

According to the −2ΔLL scaled difference relative to the previous model, 

−2ΔLL (5) = 47.083, p < .0001

trying to reproduce the 5 factor covariances with a single higher-order factor results in a significant decrease in fit. Based on the factor correlations we examined earlier and the standardized higher-order loadings, I’d guess the issue lies with the "corrupting" factor not being as related to the others.
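The rescaled difference statistic that anova() reports can also be reproduced by hand from each model's log-likelihood, scaling correction factor, and free-parameter count. A minimal sketch (shown in Python for illustration; all numeric values are hypothetical, not the values from this model):

```python
def scaled_diff_test(ll0, c0, p0, ll1, c1, p1):
    """Satorra-Bentler-style scaled chi-square difference test.
    Model 0 is the nested (more constrained) model; ll = log-likelihood,
    c = scaling correction factor, p = number of free parameters."""
    cd = (p0 * c0 - p1 * c1) / (p0 - p1)  # difference-test scaling correction
    stat = -2 * (ll0 - ll1) / cd          # rescaled -2*delta-LL statistic
    return stat, p1 - p0                  # statistic and its df

# Hypothetical values for two nested models (5 parameters apart)
stat, df = scaled_diff_test(ll0=-61825.0, c0=1.60, p0=157,
                            ll1=-61800.0, c1=1.58, p1=162)
```

The statistic is then referred to a chi-square distribution with df equal to the difference in free parameters.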

### Comparison with One-Factor CFA model

For the sake of illustration, we can try one more alternative – what if the items were measuring a single factor (i.e., a single score)? Syntax for CFA model with MLR including a single factor instead of a higher-order factor ("smallest model" for comparison):

```{r single, include=TRUE}
cfaSingleSyntax = "
abuse =~ p06 + p10 + p14 + p25 + p27 + p29 + p33 + p35 + p48 + p49 + p53 + p54 +
         p07 + p11 + p13 + p17 + p24 + p26 + p36 + p55 + p56 + p01 + p18 + p19 + 
         p23 + p39 + p43 + p09 + p12 + p16 + p20 + p28 + p47 + p50 + p02 + p03 + 
         p04 + p21 + p22 + p30 + p31 + p37 + p40 + p44 + p45 + p46 + p51 + p52 + p57
"
cfaSingleEstimates = cfa(model = cfaSingleSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "MLR")
summary(cfaSingleEstimates, fit.measures = TRUE, rsquare = TRUE, standardized = TRUE)

```


NOTE: With respect to fit of the structural model, we are now fitting a single factor INSTEAD OF 5 factors and a higher-order factor. This will tell us the extent to which a “total score” is appropriate.

```{r cfasinglecomp, include=TRUE}
anova(cfaSingleEstimates, cfaNoHighEstimates, cfaHigherEstimates)
```
According to the −2ΔLL scaled difference relative to the previous model,
−2ΔLL (5) = 448.91, p < .0001

Therefore, a single factor fits significantly worse than 5 factors + a higher-order factor, and so one factor does not capture the covariances for these 49 items.

### Syntax for IFA model with WLSMV including all 5 correlated factors ("biggest model")

NOTE: With respect to fit of the structural model, letting the 5 separate factors be correlated is as good as it gets. This saturated structural model will be our “largest model” baseline with which to compare the fit of a single higher-order factor model (as the "smaller model").

```{r ifabig, include=TRUE}
ifaNoHighSyntax = "
spurn =~ p06 + p10 + p14 + p25 + p27 + p29 + p33 + p35 + p48 + p49 + p53 + p54
terror =~ p07 + p11 + p13 + p17 + p24 + p26 + p36 + p55 + p56
isolate =~ p01 + p18 + p19 + p23 + p39 + p43
corrupt =~ p09 + p12 + p16 + p20 + p28 + p47 + p50
ignore =~ p02 + p03 + p04 + p21 + p22 + p30 + p31 + p37 + p40 + p44 + p45 + p46 + p51 + p52 + p57
"

ifaNoHighEstimates = cfa(model = ifaNoHighSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV",
                         ordered = c("p06", "p10", "p14", "p25", "p27", "p29", "p33", "p35", "p48", "p49", "p53", "p54", 
                                     "p07", "p11", "p13", "p17", "p24", "p26", "p36", "p55", "p56", "p01", "p18", "p19", 
                                     "p23", "p39", "p43", "p09", "p12", "p16", "p20", "p28", "p47", "p50", "p02", "p03", 
                                     "p04", "p21", "p22", "p30", "p31", "p37", "p40", "p44", "p45", "p46", "p51", "p52", "p57"))
summary(ifaNoHighEstimates, fit.measures = TRUE, rsquare = TRUE, standardized = TRUE)
```



Note: # free parameters = 255 = 44 loadings (49 items minus 5 loadings fixed to 1) + 196 thresholds (49*4) + 5 factor variances + 10 factor covariances.

Possible unique statistics = 49*50/2 = 1225 polychoric correlations + 196 thresholds = 1421.

DF calculation: 1421 - 255 - 49 "residuals" = 1117.
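This parameter accounting can be verified with a few lines of arithmetic (shown in Python for illustration; the counts mirror the note above):

```python
n_items, n_factors, n_cats = 49, 5, 5

loadings = n_items - n_factors                  # first loading per factor fixed to 1 -> 44
thresholds = n_items * (n_cats - 1)             # 4 thresholds per item -> 196
factor_vars = n_factors                         # 5 factor variances
factor_covs = n_factors * (n_factors - 1) // 2  # 10 unique factor covariances
free_params = loadings + thresholds + factor_vars + factor_covs

possible = n_items * (n_items + 1) // 2 + thresholds  # unique correlations + thresholds
df = possible - free_params - n_items           # minus 49 fixed residual variances

print(free_params, possible, df)  # 255 1421 1117
```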

Now we can test the fit of a constrained structural model that posits a single higher-order "General Abuse" factor to account for the correlations among these 5 latent factors.

### Syntax for IFA model with WLSMV including a higher-order factor instead of 5 correlated factors ("smaller/bigger model")

NOTE: With respect to fit of the structural model, we are now fitting a single higher-order factor INSTEAD OF covariances among the 5 factors.

To test the fit against the saturated model (all possible factor correlations), we use anova(), lavaan's counterpart to the Mplus DIFFTEST option (which reuses the results from the previous model).


```{r ifahigher, include=TRUE}
ifaHigherSyntax = "
spurn =~ p06 + p10 + p14 + p25 + p27 + p29 + p33 + p35 + p48 + p49 + p53 + p54
terror =~ p07 + p11 + p13 + p17 + p24 + p26 + p36 + p55 + p56
isolate =~ p01 + p18 + p19 + p23 + p39 + p43
corrupt =~ p09 + p12 + p16 + p20 + p28 + p47 + p50
ignore =~ p02 + p03 + p04 + p21 + p22 + p30 + p31 + p37 + p40 + p44 + p45 + p46 + p51 + p52 + p57

abuse =~ spurn + terror + isolate + corrupt + ignore
"

ifaHigherEstimates = cfa(model = ifaHigherSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV",
                         ordered = c("p06", "p10", "p14", "p25", "p27", "p29", "p33", "p35", "p48", "p49", "p53", "p54", 
                                     "p07", "p11", "p13", "p17", "p24", "p26", "p36", "p55", "p56", "p01", "p18", "p19", 
                                     "p23", "p39", "p43", "p09", "p12", "p16", "p20", "p28", "p47", "p50", "p02", "p03", 
                                     "p04", "p21", "p22", "p30", "p31", "p37", "p40", "p44", "p45", "p46", "p51", "p52", "p57"))
summary(ifaHigherEstimates, fit.measures = TRUE, rsquare = TRUE, standardized = TRUE)
```

```{r ifacomp, include=TRUE}
anova(ifaNoHighEstimates, ifaHigherEstimates)
```

As it turns out, `lavaan` returns an error for this model comparison, so we cannot be certain from `lavaan` alone which model fits better. The parallel Mplus example showed that the higher-order factor model did not fit as well as the model with five correlated factors.
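One possible workaround (not guaranteed to succeed, depending on the `lavaan` version and the reason `anova()` failed) is to call `lavTestLRT()` directly and request the Satorra (2000) scaled difference test, which is `lavaan`'s analogue of the Mplus DIFFTEST procedure for WLSMV:

```{r ifacomp2, include=TRUE}
# Scaled chi-square difference test, analogous to Mplus DIFFTEST
lavTestLRT(ifaNoHighEstimates, ifaHigherEstimates, method = "satorra.2000")
```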

### Syntax and output for IFA model with WLSMV including only a single factor ("smallest model")

We can try one more alternative – what if the items were measuring a single factor (i.e., a single score)?

```{r ifasingle, include=TRUE}
ifaSingleSyntax = "
abuse =~ p06 + p10 + p14 + p25 + p27 + p29 + p33 + p35 + p48 + p49 + p53 + p54 + 
         p07 + p11 + p13 + p17 + p24 + p26 + p36 + p55 + p56 + p01 + p18 + p19 + 
         p23 + p39 + p43 + p09 + p12 + p16 + p20 + p28 + p47 + p50 + p02 + p03 + 
         p04 + p21 + p22 + p30 + p31 + p37 + p40 + p44 + p45 + p46 + p51 + p52 + p57
"

ifaSingleEstimates = cfa(model = ifaSingleSyntax, data = abuseData, std.lv = FALSE, mimic = "mplus", estimator = "WLSMV",
                         ordered = c("p06", "p10", "p14", "p25", "p27", "p29", "p33", "p35", "p48", "p49", "p53", "p54", 
                                     "p07", "p11", "p13", "p17", "p24", "p26", "p36", "p55", "p56", "p01", "p18", "p19", 
                                     "p23", "p39", "p43", "p09", "p12", "p16", "p20", "p28", "p47", "p50", "p02", "p03", 
                                     "p04", "p21", "p22", "p30", "p31", "p37", "p40", "p44", "p45", "p46", "p51", "p52", "p57"))
summary(ifaSingleEstimates, fit.measures = TRUE, rsquare = TRUE, standardized = TRUE)
```

NOTE: With respect to fit of the structural model, we are now fitting a single factor INSTEAD OF 5 factors and a higher-order factor. This will tell us the extent to which a single score is appropriate.

To test the fit against the higher-order factor model, Mplus would again use DIFFTEST on the ANALYSIS command with the saved results of the previous model; in `lavaan`, we again use `anova()`.

```{r singlecheck, include=TRUE}
anova(ifaSingleEstimates, ifaNoHighEstimates)
```
Again, `lavaan` throws an error, so we will rely on the Mplus result in the write-up below.
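When the difference test cannot be computed, we can at least place the scaled fit statistics from the three models side by side. This sketch assumes the three fitted objects from the chunks above are all available:

```{r fitcompare, include=TRUE}
# Compare scaled fit statistics across the three structural models
fitStats <- c("chisq.scaled", "df.scaled", "cfi.scaled", "tli.scaled", "rmsea.scaled")
round(cbind(Correlated   = fitMeasures(ifaNoHighEstimates, fitStats),
            HigherOrder  = fitMeasures(ifaHigherEstimates, fitStats),
            SingleFactor = fitMeasures(ifaSingleEstimates, fitStats)), 3)
```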

### Example results section for CFA using MLR

After examining the fit of each of the five factors individually, as described previously, a combined model was estimated in which all five factors were fit simultaneously with covariances estimated freely among them. A total of 49 items were thus included. Each factor was identified by fixing the first item loading on each factor to 1, estimating the factor variance, and then fixing the factor mean to 0, while estimating all possible item intercepts, item residual variances, and remaining item loadings. Robust maximum likelihood (MLR) estimation was used to estimate all higher-order models using the `lavaan` package (Rosseel, 2012) in R (R Core Team, 2017), and differences in fit between nested models were evaluated using −2 times the rescaled difference in the model log-likelihood values.

As shown in Table 1, the fit of the model with five correlated factors was acceptable by the RMSEA (.047), but not by the CFI (.844). Standardized model parameters (loadings, intercepts, and residual variances) are shown in Table 2. Correlations of .6 or higher were found among the five factors, suggesting that the five factors may indicate a single higher-order factor. This idea was tested by eliminating the covariances among the factors and instead estimating loadings for the five factors from a single higher-order factor (whose variance was fixed to 1). Although the fit of the higher-order factor model remained marginal (see Table 1), a nested model comparison revealed a significant decrease in fit, −2ΔLL(5) = 47.083, p < .0001, indicating that a single factor did not appear adequate to describe the pattern of correlation among the five factors. A further nested model comparison was conducted to examine the extent to which a single factor could describe the covariances among the items rather than five lower-order factors and a single higher-order factor. Fit of the single-factor-only model was poor, as shown in Table 1, and was significantly worse than the higher-order factor model, −2ΔLL(5) = 448.91, p < .0001, indicating that a single "total score" would not be recommended.

### Example results section for IFA using WLSMV

After examining the fit of each of the five factors individually, as described previously, a combined model was estimated in which all five factors were fit simultaneously with covariances estimated freely among them. A total of 49 items were thus included. Each factor was identified by fixing the first item loading on each factor to 1, estimating the factor variance, and then fixing the factor mean to 0, while estimating all possible item thresholds (four for each item given five response options) and remaining item loadings. WLSMV estimation in the `lavaan` package (Rosseel, 2012) in R (R Core Team, 2017) including a probit link and the THETA parameterization (such that all item residual variances were constrained to 1) was used to estimate all higher-order models. Thus, model fit statistics describe the fit of the item factor model to the polychoric correlation matrix among the items. Nested model comparisons were conducted using the Mplus DIFFTEST procedure.

As shown in Table 1, the fit of the model with five correlated factors was acceptable. Item factor analysis parameters (loadings and thresholds) and their corresponding item response model parameters (discriminations and difficulties) are shown in Table 2. Correlations of .7 or higher were found among the five factors, suggesting that the five factors may indicate a single higher-order factor. This idea was tested by eliminating the covariances among the factors and instead estimating loadings for the five factors from a single higher-order factor (whose variance was fixed to 1). Although the fit of the higher-order factor model remained acceptable (see Table 1), a nested model comparison via the DIFFTEST procedure revealed a significant decrease in fit, DIFFTEST(5) = 92.05, p < .0001, indicating that a single factor did not appear adequate to describe the pattern of correlation among the five factors. A further nested model comparison was conducted to examine the extent to which a single factor could describe the polychoric correlations among the items rather than five lower-order factors and a single higher-order factor. Fit of the single-factor-only model was poor, as shown in Table 1, and was significantly worse than the higher-order factor model, DIFFTEST(5) = 611.95, p < .0001, indicating that a single score would not be recommended.

Table 1 would contain the fit information for each model.
Table 2 would contain the actual model parameters (unstandardized and standardized estimates and their SEs, so four columns).
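A starting point for Table 2 can be pulled directly from a fitted model object. This sketch (using the correlated-factors model as an example; column selection and labels are illustrative) combines the unstandardized and standardized loadings with their SEs:

```{r table2sketch, include=TRUE}
# Unstandardized and standardized estimates with SEs (rows are in the same
# parameter-table order in both functions' output)
unstd <- parameterEstimates(ifaNoHighEstimates)
std   <- standardizedSolution(ifaNoHighEstimates)
loadRows <- unstd$op == "=~"                    # keep the factor loadings only
table2 <- data.frame(unstd[loadRows, c("lhs", "rhs", "est", "se")],
                     std[loadRows, c("est.std", "se")])
names(table2) <- c("Factor", "Item", "Est", "SE", "StdEst", "StdSE")
head(table2)
```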
