Hi all,
I need some help understanding how to apply within-chain parallelization to the way I am currently calculating the log-likelihood. Specifically: I have N observation pairs (y, option), where y is the number of respondents who chose that option in an ordered multiple-choice question.
The data covers multiple questions, so let’s say the first six of those N observation-pairs could look like this: (15, 1); (25, 2); (65, 3); and then moving on to the next question (35, 1); (45, 2); (25, 3).
Without parallelization, my code for the likelihood looks like this:
for (n in 1:N) {
  target += y[n] * ordered_logistic_lpmf(option[n] | ...);
}
The logic is that each of the y[n] people who picked that option contributes the same log-probability, given by ordered_logistic_lpmf, so instead of adding that term to the target y[n] separate times we can simply multiply it by y[n].
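As a sanity check on that equivalence, here is a minimal Python reimplementation of the ordered-logistic log-pmf (the eta and cutpoint values are made up purely for illustration), showing that adding the same log-probability y times equals multiplying it by y:

```python
import math

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def ordered_logistic_lpmf(k, eta, cutpoints):
    # Stan's parameterization: P(k) = inv_logit(eta - c[k-1]) - inv_logit(eta - c[k]),
    # with c[0] = -inf and c[K] = +inf.
    K = len(cutpoints) + 1
    if k == 1:
        p = 1.0 - inv_logit(eta - cutpoints[0])
    elif k == K:
        p = inv_logit(eta - cutpoints[-1])
    else:
        p = inv_logit(eta - cutpoints[k - 2]) - inv_logit(eta - cutpoints[k - 1])
    return math.log(p)

# y = 15 people all chose option 1 (hypothetical eta and cutpoints)
y = 15
lp = ordered_logistic_lpmf(1, 0.3, [-0.5, 0.8])

# Adding the term y times and multiplying it by y give the same total
repeated = sum(lp for _ in range(y))
weighted = y * lp
assert abs(repeated - weighted) < 1e-9
```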
Looking at the reduce_sum tutorial, I can see that the parallelized version of this could look something like this in the function block:
real partial_sum_lpmf(int[] slice_y,
                      int start, int end,
                      int[] option, ...) {
  return ordered_logistic_lpmf(option[start:end] | ...);
}
and this in the model block:
int grainsize = 1;
target += reduce_sum(partial_sum_lupmf, y, grainsize, option, ...);
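One guess I had is to keep the weights by looping over the slice inside the partial-sum function and accumulating the weighted terms (indexing the shared option array with start, since the slice itself is 1-indexed), roughly like this, though I'm not sure it is the idiomatic way:

```stan
real partial_sum_lpmf(int[] slice_y,
                      int start, int end,
                      int[] option, ...) {
  real lp = 0;
  for (i in 1:size(slice_y)) {
    // slice_y is local (1-indexed); option is the full array, so shift by start
    lp += slice_y[i] * ordered_logistic_lpmf(option[start + i - 1] | ...);
  }
  return lp;
}
```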
But what do I do with the multiplication by y? Thanks!