I’m working on a model for examining the probability that an observed score on a cognitive assessment comes from a particular normative or reference sample (i.e., whether the score is plausibly an observation from the population of scores expected from people with condition X). In application, the model would be used to examine whether a score is more typical of the population of scores for condition X vs. Y vs. Z, etc. Toward this end, I want to generate a Bayes factor and a post-test probability for each group membership, as these are relatively easy and straightforward for clinicians to understand. The model I’ve tested so far is below:
data {
  real y;                        // obtained score
  real<lower=0> mn;              // reference sample mean
  real<lower=0> stdv;            // reference sample standard deviation
  real<lower=1> N;               // reference sample size
  real<lower=0, upper=1> rxx;    // reported reliability
  real<lower=1> n;               // reliability sample size
}
transformed data {
  real<lower=0> alpha = n / 2;               // inverse-gamma shape
  real<lower=0> beta = n / (2 * (1 - rxx));  // inverse-gamma rate; rxx < 1 by constraint, so no fabs() needed
}
parameters {
  real<lower=0> mu;              // population mean
  real<lower=0> sigma;           // population standard deviation
  real<lower=1> tau;             // tau = 1 / (1 - r); tau > 1 keeps r in (0, 1)
  real theta;                    // true score
}
transformed parameters {
  real<lower=0, upper=1> r = 1 - 1 / tau;    // reliability on the original scale
  real<lower=0> SEM = sigma * sqrt(1 - r);   // standard error of measurement
}
model {
  mu ~ normal(mn, stdv);                     // population mean
  tau ~ inv_gamma(alpha, beta);              // transformed reliability
  sigma ~ student_t(N - 1, sqrt(stdv^2 * N / (N - 1)), 0.71 * stdv / sqrt(N)); // population SD
  theta ~ normal(mu, sigma);                 // population model for the true score
  y ~ normal(theta, SEM);                    // measurement model (same density as theta ~ normal(y, SEM))
}
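For context, my rough plan for getting from this model to the Bayes factor and post-test probabilities is below. It relies on the fact that theta integrates out of a normal-normal model, so y | mu, sigma, r ~ normal(mu, sqrt(sigma^2 + SEM^2)), and p(y | condition) can be estimated by averaging that likelihood over draws from the reference-informed priors, i.e., a prior-only run of essentially the same program, once per candidate condition. A sketch (please correct me if the logic is off):
// Prior-only variant, run once per candidate condition's reference statistics
data {
  real y;
  real<lower=0> mn;
  real<lower=0> stdv;
  real<lower=1> N;
  real<lower=0, upper=1> rxx;
  real<lower=1> n;
}
transformed data {
  real<lower=0> alpha = n / 2;
  real<lower=0> beta = n / (2 * (1 - rxx));
}
parameters {
  real<lower=0> mu;
  real<lower=0> sigma;
  real<lower=1> tau;
}
transformed parameters {
  real<lower=0, upper=1> r = 1 - 1 / tau;
  real<lower=0> SEM = sigma * sqrt(1 - r);
}
model {
  // same priors as above, but nothing conditions on y, so the draws
  // come from the reference-informed priors alone
  mu ~ normal(mn, stdv);
  tau ~ inv_gamma(alpha, beta);
  sigma ~ student_t(N - 1, sqrt(stdv^2 * N / (N - 1)), 0.71 * stdv / sqrt(N));
}
generated quantities {
  // with theta integrated out, y | mu, sigma, r ~ normal(mu, sqrt(sigma^2 + SEM^2));
  // the average of exp(log_lik) across draws estimates p(y | condition)
  real log_lik = normal_lpdf(y | mu, sqrt(square(sigma) + square(SEM)));
}
The Bayes factor for condition X vs. Y would then be the ratio of the two p(y | condition) estimates, and post-test probabilities would come from normalizing p(y | condition) times the base rate across the candidate conditions.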
As the code probably betrays, I’m coming from a purely frequentist educational background. I had a pretty good idea of how I would tackle this problem with bootstrapping, so I adapted that approach to Stan based on the hierarchical normal example. The model generally works OK, but I have some issues with it.
First, does this model make sense for what I’m trying to do? I’ve looked for resources on single-case Bayesian methods, but they are generally about a single case measured over a period of time. I have just a single observation from a single individual to compare to known sample descriptive statistics. I haven’t been able to find examples of this kind of analysis in Stan.
Second, how can I get the priors to be just a little less informative? I’ve tried some code adjustments based on sample size (thinking that larger samples should produce more informative priors), but I just end up with more divergent-transition and low-E-BFMI (missing information) warnings.
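To make that concrete, here is the kind of adjustment I have in mind, as a sketch only: the widening factor k is something I invented (it is not in the model above), and reliability is fixed at its point estimate just to keep the sketch short. The idea is to set the prior scales to the reference-implied standard errors inflated by k, rather than tying them directly to N:
data {
  real y;                        // obtained score
  real<lower=0> mn;              // reference sample mean
  real<lower=0> stdv;            // reference sample standard deviation
  real<lower=1> N;               // reference sample size
  real<lower=0, upper=1> rxx;    // reliability, fixed at its point estimate here
  real<lower=1> k;               // hypothetical widening factor (e.g., 2 or 3)
}
parameters {
  real<lower=0> mu;
  real<lower=0> sigma;
  real theta;
}
model {
  mu ~ normal(mn, k * stdv / sqrt(N));            // widened SE of the mean
  sigma ~ normal(stdv, k * stdv / sqrt(2 * N));   // half-normal-ish around stdv (sigma > 0 by constraint)
  theta ~ normal(mu, sigma);                      // population model
  y ~ normal(theta, sigma * sqrt(1 - rxx));       // measurement model
}
Is something along those lines a reasonable direction, or does it defeat the purpose of using the reference sample statistics at all?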
Third, how can I reparameterize things so that the model block works on unit-scale parameters? While the current code works most of the time, I’ve run into divergent transitions (note: raising adapt_delta hasn’t fixed the issue), and I think that has to do with the scales of the parameters.
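My understanding is that this means standardizing the data and using a non-centered parameterization for theta. A sketch of how I think that would look for this model, back-transforming in generated quantities (I may well have details wrong):
data {
  real y;
  real<lower=0> mn;
  real<lower=0> stdv;
  real<lower=1> N;
  real<lower=0, upper=1> rxx;
  real<lower=1> n;
}
transformed data {
  real y_std = (y - mn) / stdv;              // observed score on the reference z-scale
  real<lower=0> alpha = n / 2;
  real<lower=0> beta = n / (2 * (1 - rxx));
}
parameters {
  real mu_std;                               // population mean, z-scale (positivity constraint dropped here)
  real<lower=0> sigma_std;                   // population SD as a multiple of stdv
  real<lower=1> tau;
  real theta_raw;                            // unit-scale innovation for the true score
}
transformed parameters {
  real<lower=0, upper=1> r = 1 - 1 / tau;
  real theta_std = mu_std + sigma_std * theta_raw;   // non-centered true score
}
model {
  mu_std ~ std_normal();                                           // was normal(mn, stdv)
  sigma_std ~ student_t(N - 1, sqrt(N / (N - 1)), 0.71 / sqrt(N)); // original prior divided by stdv
  tau ~ inv_gamma(alpha, beta);
  theta_raw ~ std_normal();                                        // unit scale
  y_std ~ normal(theta_std, sigma_std * sqrt(1 - r));              // measurement model, z-scale
}
generated quantities {
  real mu = mn + stdv * mu_std;              // back-transform to the raw score scale
  real sigma = stdv * sigma_std;
  real theta = mn + stdv * theta_std;
}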
Finally, are there any suggestions for generalizing the model to compare a single case against a reference sample across a set of assessments? In other words, if someone were administered 6 tests, how could we examine whether the scores obtained on those 6 tests are likely to have come from the same population of scores as those of a reference sample (assuming the reference sample provides sufficient data to compute the covariance matrix)? I guess it’s essentially a kind of pattern analysis comparing a single case measured on multiple variables to a sample.
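The only idea I have so far is the following prior-only sketch, which mirrors the univariate plan above but fixes the covariance at the reference sample’s S and ignores measurement error for a first pass:
data {
  int<lower=1> K;                 // number of tests (e.g., 6)
  vector[K] y;                    // the single case's observed scores
  vector[K] mn;                   // reference sample mean vector
  cov_matrix[K] S;                // reference sample covariance matrix
  real<lower=1> N;                // reference sample size
}
transformed data {
  matrix[K, K] L = cholesky_decompose(S);
}
parameters {
  vector[K] mu;                   // population mean vector
}
model {
  mu ~ multi_normal_cholesky(mn, L / sqrt(N));   // uncertainty in the reference means
  // no statement for y: prior-only, as in the univariate sketch
}
generated quantities {
  // averaging exp(log_lik) across draws estimates p(y | reference sample);
  // ratios across reference samples give the multivariate Bayes factor
  real log_lik = multi_normal_cholesky_lpdf(y | mu, L);
}
Presumably reliability and a proper prior on the covariance (e.g., LKJ on the correlations) would need to be added for this to be defensible, but I’m not sure how.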
Thank you in advance for any and all help/resources/advice!