# Confirmatory factor analysis for ordinal outcome: computing r-square?

I’m helping a colleague on a project where they used a Likert-scale questionnaire that is already well researched, with the consensus supporting a 4-factor latent structure. So I coded up the model below, which estimates the loading of each item on its associated factor, and it seems to be working fine. My colleague is now wondering whether the loading weights (`outBetas` in the model below) can be transformed to a 0–1 scale reflecting what percentage of the variability in each response is explained by its latent factor. This would be relatively straightforward with a continuous outcome that is modelled with a noise parameter, but I’m unsure whether it even has meaning in the ordinal scenario. Any thoughts?

```stan
data{

// nCat: number of ordered categories in outcomes
int nCat ;

// nSubj: number of subjects
int nSubj ;

// nOut: number of outcomes
int nOut ;

// outFac: list of which factor is associated with each outcome
int outFac[nOut] ;

// out: integer matrix of outcomes
int out[nSubj,nOut] ;

}

transformed data{
// nFac: number of latent factors
int nFac = max(outFac) ;

// zeroes: a vector of zeroes for latent factor means
vector[nFac] zeroes = rep_vector(0,nFac) ;

}

parameters{

// cutpoints: cutpoints for modelling outcomes as ordered logistic
ordered[nCat-1] cutpoints[nOut] ;

// outMeans: *latent* means for each outcome
vector[nOut] outMeans ;

// outBetas: how each factor loads on to each outcome.
//           Must be positive for identifiability.
vector<lower=0>[nOut] outBetas ;

// subjFacs: matrix of values for each factor associated with each subject
vector[nFac] subjFacs[nSubj] ;

// L_Omega: related to correlation amongst the latent factors
cholesky_factor_corr[nFac] L_Omega ;

}

model{

// standard normal prior on all the cut points
for(i in 1:nOut){
  cutpoints[i] ~ normal(0,1) ;
}

// standard normal prior on outMeans
outMeans ~ normal(0,1) ;

// implies flat prior on facCor
L_Omega ~ lkj_corr_cholesky(1) ;

// implies subjFacs distributed as multi_normal with zero means, unit SDs, and facCor correlation ;
subjFacs ~ multi_normal_cholesky(zeroes, L_Omega) ;

// outcomes as ordered logistic
for(i in 1:nOut){
  for(j in 1:nSubj){
    out[j,i] ~ ordered_logistic( outMeans[i] + subjFacs[j,outFac[i]] * outBetas[i], cutpoints[i] ) ;
  }
}

}

generated quantities{

// log_lik: storage variable for later use of loo in R
vector[nOut*nSubj] log_lik ;

// facCor: correlations between latent factors
corr_matrix[nFac] facCor ;

// out_rep: matrix of outcomes
matrix[nSubj,nOut] out_rep ;

// compute facCor from L_Omega
facCor = multiply_lower_tri_self_transpose(L_Omega);

// compute log_lik for loo-based model comparison
for(i in 1:nOut){
  for(j in 1:nSubj){
    log_lik[(i-1)*nSubj+j] = ordered_logistic_lpmf( out[j,i] | outMeans[i] + subjFacs[j,outFac[i]] * outBetas[i], cutpoints[i] ) ;
  }
}

// compute simulated outcomes
for(i in 1:nOut){
  for(j in 1:nSubj){
    out_rep[j,i] = ordered_logistic_rng( outMeans[i] + subjFacs[j,outFac[i]] * outBetas[i], cutpoints[i] ) ;
  }
}

}
```
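For reference, the one candidate I've been contemplating is a McKelvey–Zavoina-style pseudo-R² on the latent scale: the factors have unit variance by construction here, and the ordered-logistic link implies a standard-logistic error with variance π²/3, so the latent-scale variance explained for each outcome would be β² / (β² + π²/3). A quick numeric sketch (Python; `latent_r2` is just an illustrative helper, not part of the model, and this only has the latent-variable interpretation, not a share of variance in the observed ordinal responses):

```python
import math

def latent_r2(beta, factor_var=1.0):
    """McKelvey-Zavoina-style pseudo-R^2 on the latent logistic scale.

    beta: factor loading (one element of outBetas in the Stan model)
    factor_var: variance of the latent factor (1 in this model, since
                the factors are given unit SDs for identifiability)
    """
    explained = beta**2 * factor_var
    residual = math.pi**2 / 3  # variance of the standard logistic error
    return explained / (explained + residual)

# e.g. a loading of 2 on a unit-variance factor:
print(round(latent_r2(2.0), 3))  # → 0.549
```

Applied per posterior draw of `outBetas`, this would give a posterior distribution over each outcome's pseudo-R², but I'm not sure the latent-scale interpretation is what my colleague actually wants.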