Using reduce_sum for two custom linked likelihoods - specifying separate grainsizes?

I have a complex model fitted to a big(ish) dataset that takes a long time to run a single chain, so I'm looking for ways to speed the code up.

I've seen discussions here stating that reduce_sum can help parallelise within chains, and the approach seems applicable to how my model is specified.

The model I'm coding up is a complex multi-data-source joint model: it has a longitudinal sub-model and a time-to-event sub-model, and I cannot fit it using existing joint-modelling functionality (e.g. in rstanarm). I have written custom _lpdf functions for the longitudinal and time-to-event sub-models.

The longitudinal log-likelihood _lpdf function cycles through the longitudinal measurements, indexed 1,...,M (each individual has multiple measurements observed for them, giving M measurements in total across all individuals in the dataset).

The time-to-event log-likelihood _lpdf function cycles through the individuals, indexed 1,...,N, where N is the total number of individuals in the dataset (note N < M).

So, given the different numbers of elements the two likelihood functions cycle over, I am assuming that I will need to specify two grainsizes for the two reduce_sum calls. Is this a problem?

Does grainsize have to be supplied labelled grainsize, or can I, for instance, supply grainsize_long and grainsize_surv for the longitudinal and time-to-event (survival) components respectively? (The example I was reading here, Reduce Sum: A Minimal Example, suggested supplying grainsize as data so that it is easier to tune.)

Sure, you can have multiple grainsizes, one for each reduce_sum call. Or you can combine multiple likelihoods in the same reduction.
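To make that concrete, here is a minimal sketch of what two independent reduce_sum calls with separately tuned grainsizes could look like. The partial-sum functions, parameter names (mu, sigma, lambda), and the simple normal/exponential likelihoods are all placeholders standing in for your custom longitudinal and time-to-event _lpdf functions; only the overall pattern (slice over M measurements in one call, over N individuals in the other, each with its own grainsize passed as data) is the point:

```stan
functions {
  // Hypothetical partial sum over a slice of the M longitudinal measurements;
  // replace the body with your custom longitudinal _lpdf logic.
  real partial_long(array[] real y_slice, int start, int end,
                    vector mu, real sigma) {
    return normal_lpdf(y_slice | mu[start:end], sigma);
  }
  // Hypothetical partial sum over a slice of the N individuals;
  // replace the body with your custom time-to-event _lpdf logic.
  real partial_surv(array[] real t_slice, int start, int end,
                    vector lambda) {
    return exponential_lpdf(t_slice | lambda[start:end]);
  }
}
data {
  int<lower=1> M;
  int<lower=1> N;
  array[M] real y;               // longitudinal measurements
  array[N] real t;               // event/censoring times
  int<lower=1> grainsize_long;   // tuned for the longitudinal reduction
  int<lower=1> grainsize_surv;   // tuned for the survival reduction
}
parameters {
  vector[M] mu;
  real<lower=0> sigma;
  vector<lower=0>[N] lambda;
}
model {
  // Two independent reductions, each with its own grainsize.
  target += reduce_sum(partial_long, y, grainsize_long, mu, sigma);
  target += reduce_sum(partial_surv, t, grainsize_surv, lambda);
}
```

Supplying both grainsizes as data, as the linked example suggests, lets you tune each one at run time without recompiling the model.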

Does grainsize have to be supplied labelled grainsize or can I for instance supply grainsize_long and grainsize_surv

Call them whatever you want.