Hello!
I am a psychologist working on data from a psychophysical experiment. I analyze the data with a hierarchical logistic regression with 7 predictors: three main predictors (manipulated within subject in the experiment) and their two- and three-way multiplicative interactions. One of the main predictors is “continuous” (it takes the values 1–3 and is standardized by subtracting the mean and dividing by two standard deviations); the rest are binary.
Up to now, I have used JAGS. For each of my subjects (N = 34) I estimate individual coefficients (beta0 and beta1–beta7), and all subjects are modeled as belonging to the same group, with group-level parameters beta0mu, beta0sigma, betamu[1:7], and betasigma[1:7], which I use to infer whether the psychological effect is present or not.
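To be explicit, with i indexing trials, j indexing subjects, and k = 1..7 indexing predictors, the model I have in mind is:

y[i] ~ Bernoulli( inv_logit( beta0[j[i]] + sum_k beta[j[i],k] * x[i,k] ) )
beta0[j] ~ Normal( beta0mu , beta0sigma )
beta[j,k] ~ Normal( betamu[k] , betasigma[k] )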
The problem is that my predictors are inherently correlated, and my JAGS model yields very small effective sample sizes (ESS); increasing the number of iterations, thinning, etc. did not help.
I built a Stan model, which ran on my computer but took hours to complete the first 5,000 iterations of the first of three chains (even when I simplified the model to 2 of the 7 beta parameters). I suspect I am doing something wrong, and my model is certainly completely non-vectorized, just as it was in JAGS, because I was unsure about vectorized operations in Stan (even after working through your manual); a sketch of the vectorized direction I was considering appears after the model below.
Do you have any suggestions on how to correct and optimize my model? And how much running time would count as pathologically slow?
This help is much appreciated, and it would allow me to reanalyze the data from 6 related experiments that I ran!
Data structure: rows are trials, with one column for the subject, 7 columns for the predictors, and one column for the binary response.
The model:
modelString = "
data {
int<lower=1> NSubj ;
int<lower=1> Ntotal ;
int<lower=1> Nx ;
real<lower=0> w[Ntotal] ;            // (not used in this model)
int<lower=1> s[Ntotal] ;             // subject index for each trial
matrix[Ntotal, Nx] x ;               // predictor values, one row per trial
int<lower=0,upper=1> y[Ntotal] ;     // binary response
}
parameters {
real beta0mu ;
real<lower=1.0E-3,upper=1.0E+3> beta0sigma ;   // bounds match the uniform prior below
real betamu [Nx] ;
real<lower=1.0E-3,upper=1.0E+3> betasigma[Nx] ;   // bounds match the uniform prior below
real beta0[NSubj] ;
matrix[NSubj, Nx] beta ;   // one row of coefficients per subject (matrix, so rows can be used in the likelihood)
}
model {
// priors on group-level params
// first, the intercept:
beta0mu ~ normal( 0, 10 );
beta0sigma ~ uniform( 1.0E-3 , 1.0E+3 );
//multiple predictors:
for ( predictor in 1:Nx ) {
betamu[predictor] ~ normal( 0 , 10 );
betasigma[predictor] ~ uniform( 1.0E-3 , 1.0E+3 );
} // predictors loop
// subject level params (drawn from group params)
for ( subject in 1:NSubj ) {
beta0[subject] ~ normal( beta0mu , beta0sigma ) ;
for ( predictor in 1:Nx ) {
beta[subject, predictor] ~ normal( betamu[predictor] , betasigma[predictor] );
} // predictors loop
} // subjects loop
// likelihood: in Stan, bernoulli_logit applies the logistic (inverse-logit) link
for ( i in 1:Ntotal ) {
y[i] ~ bernoulli_logit( beta0[s[i]] + sum( beta[s[i],1:Nx] .* x[i,1:Nx] ) );
} // all rows (trials) loop
}
}"