Hi everyone,
I am running a hurdle model with a hierarchical structure and was having convergence problems. Based on a helpful suggestion in this forum, I applied a non-centered parameterization (NCP) to the individual-specific intercepts, which fixed the convergence issues. However, the model with NCP takes about three times as long to run. Is there a way to speed it up?
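For context, the change follows the standard non-centered pattern (a generic sketch, not my actual model; `alpha` and `alpha_raw` are placeholder names):

```stan
// centered (what I had): alpha is sampled on a scale that depends on
// sigma_alpha, which creates a funnel geometry the sampler struggles with
alpha ~ normal(0, sigma_alpha);

// non-centered (what I switched to): sample a standardized parameter and
// rescale it, so the sampler works on a fixed N(0, 1) scale
alpha_raw ~ std_normal();
alpha = sigma_alpha * alpha_raw;  // declared in transformed parameters
```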
The hurdle model is embedded in a hidden Markov model, which already makes it computationally demanding. Here are the relevant parts of my model without NCP:
data {
  int<lower = 1> N;               // number of observations
  int<lower = 0> S;               // number of states
  int<lower = 1> H;               // number of individuals
  int<lower = 1> id[N];           // identifier of individuals
  int<lower = 0> K1;              // number of covariates inside delta in Bernoulli part
  int<lower = 0> K2;              // number of covariates inside delta in Lognormal part
  matrix[N, K1] C1;               // matrix of covariates for Bernoulli part
  matrix[N, K2] C2;               // matrix of covariates for Lognormal part
  int<lower = 0, upper = 1> y[N]; // binary dependent variable
  real q[N];                      // continuous variable we are modeling conditional on y
}
parameters {
  ordered[S] mu;                  // state-dependent intercepts in Bernoulli part
  vector[S] nu;                   // state-dependent intercepts in Lognormal part
  real alphaj[H];                 // individual-specific intercept in Bernoulli part
  real alphai[H];                 // individual-specific intercept in Lognormal part
  real<lower = 0> sigma_alphaj;   // sd of individual-specific intercept in Bernoulli part
  real<lower = 0> sigma_alphai;   // sd of individual-specific intercept in Lognormal part
  real<lower = 0> sigma_q;
  vector[K1] delta1;
  vector[K2] delta2;
}
model {
  // priors
  mu[1] ~ normal(0, 1)T[, mu[2]];
  mu[2] ~ normal(0, 2);
  nu[1] ~ normal(0, 1);
  nu[2] ~ normal(0, 2);
  alphai ~ normal(0, sigma_alphai);
  alphaj ~ normal(0, sigma_alphaj);
  ...
  for (t in 1:N) {
    target += log_sum_exp(gamma);
    for (k in 1:S) {
      gamma_prev[k] = bernoulli_logit_lpmf(y[t] | alphaj[id[t]] + mu[k] + C1[t] * delta1);
      if (y[t] == 1) {
        gamma_prev[k] += lognormal_lpdf(q[t] | alphai[id[t]] + nu[k] + C2[t] * delta2, sigma_q);
  ...
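For completeness, the forward recursion looks roughly like this (a simplified sketch of the elided parts, not my exact code; `tpm` stands in for my transition probability matrix, whose declaration and priors are omitted above):

```stan
vector[S] gamma;
vector[S] gamma_prev;
// ... gamma_prev initialized at t = 1 from the initial state distribution ...
for (t in 2:N) {
  for (k in 1:S) {
    // state-dependent emission density of the hurdle model
    real emit = bernoulli_logit_lpmf(y[t] | alphaj[id[t]] + mu[k] + C1[t] * delta1);
    if (y[t] == 1)
      emit += lognormal_lpdf(q[t] | alphai[id[t]] + nu[k] + C2[t] * delta2, sigma_q);
    // marginalize over the previous state
    gamma[k] = log_sum_exp(gamma_prev + log(col(tpm, k))) + emit;
  }
  gamma_prev = gamma;
}
target += log_sum_exp(gamma_prev);
```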
I copied parts of the output summary below, showing the convergence problems indicated by high Rhat and low ESS. I ran the model with 1000 warmup and 2000 total iterations, max_treedepth = 11, and adapt_delta = 0.90.
The updated model with NCP looks like this:
...
parameters {
  ...
  real alpha0i;
  real alpha0j;
  vector[H] alphai_raw;
  vector[H] alphaj_raw;
  real<lower = 0> sigma_alphai;
  real<lower = 0> sigma_alphaj;
  real<lower = 0> sigma_q;
}
transformed parameters {
  vector[H] alphai;
  vector[H] alphaj;
  alphai = alpha0i + sigma_alphai * alphai_raw;
  alphaj = alpha0j + sigma_alphaj * alphaj_raw;
}
model {
  ...
  alphai_raw ~ std_normal();
  alphaj_raw ~ std_normal();
  // note: no additional sampling statements on alphai/alphaj here; the
  // std_normal() priors on the raw vectors already imply
  // alphai ~ normal(alpha0i, sigma_alphai) and
  // alphaj ~ normal(alpha0j, sigma_alphaj), so repeating them would
  // apply the prior twice
  alpha0i ~ normal(0, 1);
  alpha0j ~ normal(0, 1);
I ran this updated NCP model on the same data with the same number of iterations. Here is the output:
Is there anything I can do to speed up the model with NCP?
Thanks in advance!