data{
int smk[626];
int metSyn_alpha[3];
int metSyn[626];
int etoh[626];
int cad[626];
vector[626] bmi;
int stroke_tia[626];
int prev[626];
int mRS_preop_alpha[6];
int mRS_preop[626];
int gcs_alpha[12];
int gcs[626];
int age2[626];
int age[626];
vector[626] time;
int sex[626];
int rupt[626];
int migraine[626];
int mRS_discharge_alpha[6];
int mRS_discharge[626];
int insXrupt[626];
int dist[626];
int comp[626];
int FU[626];
int ins[626];
vector[626] sd_lnInc;
vector[626] sem_lnInc;
real grandSEM_lnInc;
real grandMean_lnInc;
vector[626] mean_lnInc;
}
parameters{
vector[2] a_ins;
real lnInc;
real mu;
vector[2] b_inc;
real<lower=0> phi;
}
model{
vector[626] lambda;
phi ~ exponential( 3 );
b_inc ~ normal( 0 , 1 );
mean_lnInc ~ normal( grandMean_lnInc , grandSEM_lnInc );
mu ~ normal( mean_lnInc , sem_lnInc );
lnInc ~ normal( mu , sd_lnInc );
a_ins ~ normal( 0 , 1 );
for ( i in 1:626 ) {
lambda[i] = a_ins[ins[i]] + b_inc[ins[i]] * (lnInc);
lambda[i] = exp(lambda[i]);
}
FU ~ neg_binomial_2( lambda , phi );
}
generated quantities{
vector[626] log_lik;
vector[626] lambda;
for ( i in 1:626 ) {
lambda[i] = a_ins[ins[i]] + b_inc[ins[i]] * (lnInc);
lambda[i] = exp(lambda[i]);
}
for ( i in 1:626 ) log_lik[i] = neg_binomial_2_lpmf( FU[i] | lambda[i] , phi );
}

This samples okay, with Rhat ~ 1.0, but 100% of iterations exceed max treedepth. Is this a problem for inference? If so, how can I fix it?

Hitting max treedepth usually means the inference is slow, typically due to strong correlations in the posterior. That might not be a problem if the effective sample size is large.
In this case, though, I have concerns about your model, this part in particular:

data {
vector[626] sd_lnInc;
} parameters {
real lnInc;
real mu;
} model {
lnInc ~ normal( mu , sd_lnInc );
}

The sampling statement is vectorized and effectively puts 626 priors on the single parameter lnInc.
Was lnInc supposed to be a vector?
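To see why, note that in Stan a vectorized sampling statement with a vector scale unrolls into one log-density term per element, so this statement is equivalent to:

```stan
// what lnInc ~ normal(mu, sd_lnInc) does when lnInc and mu are scalars
// but sd_lnInc is a vector[626]:
for (i in 1:626)
  target += normal_lpdf(lnInc | mu, sd_lnInc[i]);
```

That is 626 independent density contributions, all on the same scalar lnInc, which makes its effective prior far tighter than any single one of those normals.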

Regarding treedepth: n_eff and Rhat both look good. Is it possible that the treedepth problem is still a concern despite this? Is there a way to determine whether the treedepth problem is causing other problems?

In response to your question: the intention is to generate log-income values for 626 ZIP codes from a dataset that provides only the log-normal mean and log-normal sd for each ZIP code. Hence, I think it is correct that lnInc has 626 priors. Are you suggesting that this may be the cause of the treedepth problems, or do you have separate concerns about this?

Wouldn't that mean 626 distinct lnInc values? Right now the model has the same lnInc for every ZIP code, but that lnInc has a prior that is 626 times more constraining than it naively appears.

Yes. The constraint requires mu and lnInc to be almost equal, and that's exactly the kind of strong posterior correlation that can cause treedepth problems.

Yes, I think you're right, since the dispersion in the simulated samples is so low compared to the data. So if I am understanding you, I have overconstrained lnInc by trying to get a single estimate of it when I actually want 626 ZIP-specific estimates. How do I fix it? Here is an attempt on a truncated model:
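(The attempted model was not included in the thread; the following is a sketch of what such a truncated, vectorized version might look like, keeping only the income sub-model and the outcome, and reusing the original data declarations. The key assumption is that lnInc and mu become vector[626] so each ZIP code gets its own parameter.)

```stan
data {
  int FU[626];
  int ins[626];
  vector[626] sd_lnInc;
  vector[626] sem_lnInc;
  real grandSEM_lnInc;
  real grandMean_lnInc;
  vector[626] mean_lnInc;
}
parameters {
  vector[2] a_ins;
  vector[626] lnInc;   // one log-income parameter per ZIP code
  vector[626] mu;      // one latent mean per ZIP code
  vector[2] b_inc;
  real<lower=0> phi;
}
model {
  vector[626] lambda;
  phi ~ exponential(3);
  b_inc ~ normal(0, 1);
  a_ins ~ normal(0, 1);
  mu ~ normal(mean_lnInc, sem_lnInc);   // element-wise: 626 means, 626 SEMs
  lnInc ~ normal(mu, sd_lnInc);         // element-wise: 626 priors on 626 parameters
  for (i in 1:626)
    lambda[i] = exp(a_ins[ins[i]] + b_inc[ins[i]] * lnInc[i]);
  FU ~ neg_binomial_2(lambda, phi);
}
```

With vector parameters, the same vectorized sampling statements now pair each element of lnInc with its own mu and sd_lnInc, rather than stacking 626 densities on one scalar.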

I think this probably works, because the dispersion of the simulated lnInc values is now actually, well, dispersed, as I would expect. What do you think about this?