I’m new to Stan and I am trying to build my own joint model using rstan. When I have multilevel random effects, I noticed the running time is extremely long. I’ve learned that vectorizing parameters can improve efficiency. Could someone look at the code below and see whether it can be rewritten to reduce the running time? I’m currently using loops, and mask is the indicator variable for missingness. Thank you so much!

for (i in 1:N) {
  for (t in 1:T) {
    real backward_time = death_time[i] - time_points[t];
    if (mask[i, t] > 0) {
      y[i, t] ~ normal(alpha00 + x1[i] * alpha01 + x2[i] * alpha02
                       + backward_time * alpha03 + treatment[i] * alpha04
                       + b_i[i] + u_i[cluster[i]], sigma_e);
    }
  }
}

Welcome to the Discourse! Would you mind sharing your full model? That might help clarify some things. That said, I have some guesses as to how you can speed it up.

The first is to pre-compute everything you can. For example, I am guessing that death_time and time_points are both data and do not depend on parameters. If so, you can use the transformed data block to calculate all N \times T values of backward_time once before sampling begins.
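A minimal sketch of that block, assuming death_time and time_points keep the declarations from your model:

transformed data {
  // Precompute all N x T offsets once before sampling,
  // since they involve only data, not parameters
  matrix[N, T] backward_time;
  for (i in 1:N) {
    for (t in 1:T) {
      backward_time[i, t] = death_time[i] - time_points[t];
    }
  }
}

The model block can then use backward_time[i, t] directly instead of recomputing the subtraction at every leapfrog step.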

Second, it looks like much of the predicted mean depends only on i; that is, it is constant across t for each individual.

Note that if we drop backward_time * alpha03, what remains is a value we can calculate just once for each value of i.

So, rather than calculate that potentially T times, we can just calculate it once.
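Keeping your original loop structure, that hoisting might look like this (a sketch, not a complete model):

for (i in 1:N) {
  // Everything that depends only on i is computed once, outside the t loop
  real mu_i = alpha00 + x1[i] * alpha01 + x2[i] * alpha02
              + treatment[i] * alpha04 + b_i[i] + u_i[cluster[i]];
  for (t in 1:T) {
    if (mask[i, t] > 0) {
      y[i, t] ~ normal(mu_i + (death_time[i] - time_points[t]) * alpha03, sigma_e);
    }
  }
}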

You might also consider restructuring your data, since you are filtering by mask.

Rather than use a matrix, where the rows correspond to 1:N and the columns correspond to 1:T, you can store only the observed values in a vector and use arrays of integers to record the specific values of i and t for each element.

Putting it all together, you get something like this. Note that this isn’t meant to run on its own, but should illustrate the idea.

data {
  int N;
  int T;
  int<lower = N, upper = N*T> NT; // Number of observed elements
  int K; // Number of clusters
  array[N] int<lower = 1, upper = K> Z; // Cluster
  vector[N] X1;
  vector[N] X2;
  vector[N] death_time;
  vector[N] treatment;
  vector[NT] Y;
  array[NT] int<lower = 1, upper = N> I; // Individual for each observation
  array[NT] int<lower = 1, upper = T> J; // Time point for each observation
  vector[T] time_points;
}
parameters {
  real alpha00;
  real alpha01;
  real alpha02;
  real alpha03;
  real alpha04;
  vector[N] b_i;
  vector[K] u_i;
  real<lower = 0> sigma_e;
}
transformed parameters {
  // Per-individual part of the mean, computed once per individual
  vector[N] alpha_i = alpha00 + X1 * alpha01 + X2 * alpha02
                      + treatment * alpha04 + b_i + u_i[Z];
  // Time-varying part, already multiplied by its coefficient
  vector[NT] backward_time = (death_time[I] - time_points[J]) * alpha03;
}
model {
  Y ~ normal(alpha_i[I] + backward_time, sigma_e);
}

Thank you for your reply! If death_time depends on the parameters, e.g. it is generated using an inverse CDF, can I still move that computation to the transformed parameters block?