Hello,
I am attempting to estimate a hierarchical model in Stan, with code based on the example in the manual (Section 9.13 in manual 2.16.0-2). I have ~564 units with ~160 observations each (~90,000 observations total), none missing. Within each unit the observations are sequential (a time series), and there are no time-invariant "group-level" predictors, i.e., all of my variables vary over time. I have therefore removed the portion of the code that refers to group-level predictors. I have 17 variables plus a constant, which I pass in as the matrix x (so K = 18).
I have tried the code below on my laptop and estimate that the model would take 7+ days to get through 1,000 iterations on each of three chains; I have never run the model to completion, and every time I try, the chains seem to stall at around 3-4% completion. I moved the model to a much more powerful machine (~20 cores, 200+ GB of RAM), but it seems just as slow, if not slower, even when I specify cores = 3 and chains = 3 (a rough sketch of how I'm calling the sampler is included after the model code below).
NonCenteredStanCode <- "
data {
  int<lower=1> N;                    // number of observations, in-sample, 90000
  int<lower=1> K;                    // number of variables in the model, 18
  int<lower=1> nID;                  // number of unique IDs (unique groups), 564
  int<lower=1, upper=nID> id[N];     // group ID for each observation
  matrix[N, K] x;                    // in-sample data matrix, 90000 x 18
  matrix[nID, 1] u;                  // vestige of the manual's code; there are no group-level predictors here
  vector[N] loutcome;                // logged outcome for each observation
}
parameters {
  matrix[K, nID] z;                  // standardized group-level effects
  cholesky_factor_corr[K] L_Omega;   // Cholesky factor of the correlation matrix
  vector<lower=0, upper=pi()/2>[K] tau_unif; // uniform draws mapped to half-Cauchy scales below
  matrix[1, K] gamma;                // population-level coefficients
  real<lower=0> sigma;               // observation noise scale
}
transformed parameters {
  matrix[nID, K] beta;               // group-specific coefficients
  vector<lower=0>[K] tau;            // prior scale
  for (k in 1:K)
    tau[k] = 2.5 * tan(tau_unif[k]); // implies tau[k] ~ cauchy(0, 2.5)
  beta = u * gamma + (diag_pre_multiply(tau, L_Omega) * z)';
}
model {
  to_vector(z) ~ normal(0, 1);
  L_Omega ~ lkj_corr_cholesky(1);
  to_vector(gamma) ~ normal(0, 2);
  target += normal_lpdf(loutcome | rows_dot_product(beta[id], x), sigma);
}
"
Also, following the advice here:
https://groups.google.com/forum/#!msg/stan-users/HWiUtQLRCfE/tPPN3bAeDQAJ
I have tried a centered parameterization as well (code below). This seems at best marginally faster, and I estimate it would still take roughly the same amount of time to reach completion.
CenteredStanCode <- "
data {
  int<lower=0> N;                    // num observations
  int<lower=1> K;                    // num individual-level predictors
  int<lower=1> nID;                  // num groups (called J in the manual)
  int<lower=1, upper=nID> id[N];     // group for each observation
  matrix[N, K] x;                    // individual-level predictors
  vector[N] loutcome;                // logged outcomes
}
parameters {
  corr_matrix[K] Omega;              // prior correlation
  vector<lower=0>[K] tau;            // prior scale
  row_vector[K] gamma;               // group-level (population) coefficients
  vector[K] beta[nID];               // individual coefficients by group
  real<lower=0> sigma;               // prediction error scale
}
model {
  tau ~ cauchy(0, 2.5);
  Omega ~ lkj_corr(2);
  to_vector(gamma) ~ normal(0, 5);
  {
    matrix[K, K] Sigma_beta;
    Sigma_beta = quad_form_diag(Omega, tau);
    for (j in 1:nID)
      beta[j] ~ multi_normal(gamma, Sigma_beta);
  }
  {
    vector[N] x_beta_id;
    for (n in 1:N)
      x_beta_id[n] = x[n] * beta[id[n]];
    loutcome ~ normal(x_beta_id, sigma);
  }
}
"
Any ideas about what's happening, or ways to get this moving faster? I can't tell whether this is a coding issue, whether I'm asking too much of the model with varying intercepts and slopes across 564 cross-sections (564 × 18 = 10,152 unit-level coefficients, plus the 18 × 18 correlation matrix, the scales, and the population-level coefficients), or something else entirely. I'd appreciate any insights. Thanks, everyone!