I’m developing a model, and it’s taking longer to fit than I would like. Using profiling tools, I have determined that one piece of the transformed parameters block accounts for about half the total runtime. It’s doing elementwise math on three sets of largish vectors. Here’s the code:
...
transformed parameters {
  ...
  for (v in 1:3) {
    int s = 1;
    vec_simp[v] = re_stim_simp[v][stim_id_simp]
                  + p0[v, s, sid_simp]
                  + delta[v, s, sid_simp]
                    .* (1 - exp(-alpha[v, s, sid_simp] .* t_simp));
    s = 2;
    vec_cplx[v] = re_stim_cplx[v][stim_id_cplx]
                  + p0[v, s, sid_cplx]
                  + delta[v, s, sid_cplx]
                    .* (1 - exp(-alpha[v, s, sid_cplx] .* t_cplx));
  }
  ...
}
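To make the computation pattern explicit for anyone skimming: each statement is a pure gather (indexing subject- and stimulus-level parameters out to observation length) followed by elementwise exp/multiply/add. A rough NumPy sketch of the simple-stimulus line, with made-up small sizes standing in for n_sub ~ 120, nstim_simp = 60, and n_simp ~ 7200 (0-based indices here, unlike Stan):

```python
import numpy as np

# Hypothetical small sizes standing in for the real dimensions
rng = np.random.default_rng(0)
n_sub, nstim, n_obs = 4, 3, 10

re_stim = rng.normal(size=nstim)              # plays the role of re_stim_simp[v]
p0 = rng.normal(size=n_sub)                   # p0[v, s]
delta = rng.normal(size=n_sub)                # delta[v, s]
alpha = rng.uniform(0.1, 1.0, size=n_sub)     # alpha[v, s], constrained positive
stim_id = rng.integers(0, nstim, size=n_obs)  # stim_id_simp (0-based here)
sid = rng.integers(0, n_sub, size=n_obs)      # sid_simp
t = rng.uniform(0.0, 5.0, size=n_obs)         # t_simp

# One gather per parameter, then a single elementwise expression
vec = re_stim[stim_id] + p0[sid] + delta[sid] * (1 - np.exp(-alpha[sid] * t))
assert vec.shape == (n_obs,)
```

This is only meant to show the structure of the work, not Stan's actual execution model; in Stan the dominant cost is presumably the autodiff tape built over those length-7200 expressions rather than the arithmetic itself.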
...
stim_id_simp, sid_simp, stim_id_cplx, and sid_cplx are data: int arrays of length about 7200 (eventually about twice that).
t_simp and t_cplx are vectors of the same length. All the other variables are transformed parameters.
Transformed parameter declarations
In case it is relevant, here are the declarations of all the transformed parameters from the snippet above. n_sub is ~120, nstim_simp and nstim_cplx are both 60, and n_simp and n_cplx are both 7200.
array[3] matrix[2, n_sub] re_sid_p0;
array[3] matrix[2, n_sub] re_sid_delta;
array[3] matrix[2, n_sub] re_sid_alpha;
array[3] vector[nstim_simp] re_stim_simp;
array[3] vector[nstim_cplx] re_stim_cplx;
// likelihood temp variables
array[3] vector[n_simp] vec_simp;
array[3] vector[n_cplx] vec_cplx;
// v: variable, 1:3
// s: simple/complex 1:2
// sid: subject id, 1:n_sub
array[3, 2] vector[n_sub] p0;
array[3, 2] vector[n_sub] delta;
array[3, 2] vector<lower=0>[n_sub] alpha;
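One thing I've considered, since all six statements have the same algebraic form: in principle the v and s branches could be fused into a single long gathered expression rather than six separate statements. A NumPy sketch of that fusion (hypothetical shapes, and omitting the re_stim term for brevity; whether this helps in Stan's autodiff is exactly what I don't know):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_obs = 4, 8

p0 = rng.normal(size=(3, 2, n_sub))
delta = rng.normal(size=(3, 2, n_sub))
alpha = rng.uniform(0.1, 1.0, size=(3, 2, n_sub))
sid = rng.integers(0, n_sub, size=n_obs)
t = rng.uniform(0.0, 5.0, size=n_obs)

# Looped version: one statement per (v, s), as in the Stan code
looped = np.empty((3, 2, n_obs))
for v in range(3):
    for s in range(2):
        looped[v, s] = p0[v, s, sid] + delta[v, s, sid] * (1 - np.exp(-alpha[v, s, sid] * t))

# Fused version: a single gather along the subject axis and one big
# elementwise expression over all (v, s) pairs at once
fused = p0[:, :, sid] + delta[:, :, sid] * (1 - np.exp(-alpha[:, :, sid] * t))

assert np.allclose(looped, fused)
```

Both versions do the same arithmetic; the question is whether collapsing the statements would shrink the expression-graph overhead in Stan, or whether the cost is irreducibly in the length-7200 exp and gathers.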
I’m hoping to figure out a way to speed the model up. I have plenty of memory and processors, but I would have to substantially restructure the code to use something like reduce_sum. If that’s the best answer, so be it, but I was hoping I was doing something really inefficient that would be obvious to the experts here.
Any suggestions?