Sorry, I can’t really delve deeply into the problem. There seem to be at least two interpretations of your question:
- You are trying to use the “posterior as next prior” approach, i.e. avoid fully refitting the model as new data become available. This is theoretically appealing, but it is generally discouraged because there is no good way to accomplish it in practice (see e.g. Using posteriors as new priors - #4 by mike-lawrence). You are usually better served by refitting the model with the full data available (first sketch below).
- You are trying to find a more efficient way to compute some conditional distributions from your existing samples without recomputing a bunch of other stuff. I suspect that in this case your main bottleneck is the model fitting itself, so I would first check whether simply computing the distributions in a loop is already fast enough, and only attempt optimization once you are sure it is necessary (second sketch below).
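To make the first point a bit more concrete, here is a minimal sketch (plain Python, not Stan; the Beta-Bernoulli model and the batch sizes are made up purely for illustration) of where the practical difficulty comes from: with MCMC output you only have draws from the first posterior, so you have to approximate them with some parametric form before reusing them as a prior, and that approximation step is exactly where error can creep in. Refitting on the full data has no such step.

```python
# Illustration only: compare refitting on all data with a
# "posterior as next prior" two-step fit that goes through a
# normal approximation of the batch-1 posterior draws.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
theta_true = 0.3
batch1 = rng.binomial(1, theta_true, size=10)
batch2 = rng.binomial(1, theta_true, size=10)

# Exact posterior after refitting on all data (Beta(1, 1) prior).
a_full = 1 + batch1.sum() + batch2.sum()
b_full = 1 + (1 - batch1).sum() + (1 - batch2).sum()

# "Posterior as next prior" the way you could actually do it with MCMC
# output: draws from the batch-1 posterior, summarised by a fitted normal.
draws1 = rng.beta(1 + batch1.sum(), 1 + (1 - batch1).sum(), size=4000)
approx_prior = stats.norm(draws1.mean(), draws1.std())

# Combine the approximate prior with the batch-2 likelihood on a grid.
grid = np.linspace(1e-3, 1 - 1e-3, 2000)
log_post = approx_prior.logpdf(grid) + stats.binom.logpmf(batch2.sum(), len(batch2), grid)
w = np.exp(log_post - log_post.max())
w /= w.sum()
two_step_mean = (grid * w).sum()
two_step_sd = np.sqrt(((grid - two_step_mean) ** 2 * w).sum())

print(f"refit on all data:  mean {a_full / (a_full + b_full):.3f}, sd {stats.beta(a_full, b_full).std():.3f}")
print(f"posterior as prior: mean {two_step_mean:.3f}, sd {two_step_sd:.3f}")
```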
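And for the second point, a sketch of the “time the naive loop first” suggestion; the draws array and the conditional quantity below are made-up stand-ins for whatever your fit actually produces:

```python
# Post-processing existing draws is usually cheap relative to refitting,
# so measure the simple loop before trying to optimise it.
import time
import numpy as np

rng = np.random.default_rng(0)
draws = rng.normal(size=(4000, 3))   # stand-in for 4000 posterior draws of 3 parameters
x_new = np.linspace(-2, 2, 200)      # the values you want to condition on

start = time.perf_counter()
summaries = []
for x in x_new:
    # Toy conditional quantity per draw: alpha + beta * x + gamma * x**2
    # evaluated at every posterior draw; substitute whatever you actually need.
    cond = draws[:, 0] + draws[:, 1] * x + draws[:, 2] * x ** 2
    summaries.append((cond.mean(), *np.quantile(cond, [0.05, 0.95])))
elapsed = time.perf_counter() - start

print(f"{len(x_new)} conditional distributions in {elapsed:.3f} s")
```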
Best of luck with your model!