Is it possible to use within-chain parallelization of the log likelihood and still save log_lik for loo computations?

Hi everyone,

We have a relatively complicated log likelihood in our models and would therefore like to use `reduce_sum` to further parallelize the computations within chains. However, we are also interested in comparing models and would therefore like to save the individual log_lik values, as needed for the loo computations.
Are both possible at the same time?
(I somehow had the impression that I would need to give up the individual log_lik values, which I currently calculate in the transformed parameters block, when switching to `reduce_sum`.)


Unfortunately not with `reduce_sum`, as that will implicitly (and unavoidably) aggregate the individual observations. You can try `map_rect` instead, as it allows you to return the vector of individual log likelihoods from each shard.
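To illustrate, here is a minimal sketch of the `map_rect` approach, assuming a simple normal likelihood with data pre-split into `K` equal shards (the names `shard_ll`, `n_per`, etc. are just placeholders, not anything from your model). Because each shard returns its pointwise log likelihoods rather than a sum, the concatenated `map_rect` result is the full log_lik vector:

```stan
functions {
  // One shard: returns the pointwise log likelihoods for its slice of
  // the data (packed into x_r). map_rect concatenates these vectors.
  vector shard_ll(vector phi, vector theta,
                  data array[] real x_r, data array[] int x_i) {
    int n = size(x_r);
    vector[n] ll;
    for (i in 1:n)
      ll[i] = normal_lpdf(x_r[i] | phi[1], phi[2]);
    return ll;
  }
}
data {
  int<lower=1> K;           // number of shards
  int<lower=1> n_per;       // observations per shard (N = K * n_per)
  array[K, n_per] real y;   // data pre-split into equal shards
}
transformed data {
  array[K, 0] int x_i;      // no integer data needed in this sketch
  array[K] vector[0] theta; // no shard-specific parameters either
}
parameters {
  real mu;
  real<lower=0> sigma;
}
transformed parameters {
  // Concatenation of all shard results: the full pointwise log_lik,
  // computed in parallel and saved for loo.
  vector[K * n_per] log_lik = map_rect(shard_ll, [mu, sigma]', theta, y, x_i);
}
model {
  mu ~ normal(0, 5);
  sigma ~ normal(0, 2);
  target += sum(log_lik);
}
```

The price of this pattern is the packing/unpacking of your real data into `x_r` and integer data into `x_i`, which gets tedious for complicated likelihoods.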

Alternatively, you can just use `reduce_sum` and then recreate the individual log likelihoods in the generated quantities block. Computations in the generated quantities block are markedly faster than those in the model and transformed parameters blocks (as they don't need gradients with respect to each parameter), so the additional computation time might be negligible.
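A minimal sketch of that second pattern, again assuming a simple normal likelihood (the function name `partial_sum` and the likelihood are illustrative, not from your model):

```stan
functions {
  // Partial sum over one slice of observations; reduce_sum splits y
  // into slices and adds the partial sums in parallel.
  real partial_sum(array[] real y_slice, int start, int end,
                   real mu, real sigma) {
    return normal_lpdf(y_slice | mu, sigma);
  }
}
data {
  int<lower=1> N;
  array[N] real y;
  int<lower=1> grainsize;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 5);
  sigma ~ normal(0, 2);
  // Parallel, aggregated log likelihood -- individual terms are lost here.
  target += reduce_sum(partial_sum, y, grainsize, mu, sigma);
}
generated quantities {
  // Recreate the pointwise log likelihood for loo; this runs once per
  // draw and needs no gradients, so it is comparatively cheap.
  vector[N] log_lik;
  for (n in 1:N)
    log_lik[n] = normal_lpdf(y[n] | mu, sigma);
}
```

The `log_lik` vector saved here can then be passed to loo in the usual way (e.g. via `extract_log_lik` in the R `loo` package).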


Thanks Andrew for the quick reply!
I guess I will keep the `map_rect` approach in mind for a later stage of development (we are still changing too many things right now).