Reduce_sum question

Hi,

I have a time-series model that I’m hoping to speed up by implementing reduce_sum. I have two scripts that analyze exactly the same data and perform the same analysis; the only difference is that one uses reduce_sum and the other does not. I’m glad to see that the results of the two analyses are similar, but I’m worried because they aren’t exactly the same.

So my question is: is it normal for reduce_sum to influence the results one obtains, or is this indicative of a larger problem in my Stan program that uses reduce_sum?

Below are just two examples of how the results can differ:

comparison_1

comparison_2

Apologies if this has been asked and answered elsewhere.

Thanks!


You should expect some sampling variation simply because you’re drawing a finite number of samples. If it’s not prohibitively expensive, the easiest way to find out what a reasonable amount of sampling variability looks like is to run the model repeatedly with different initializations.

I have a model I run fairly often with a simple log-likelihood that’s amenable to multi-threading with reduce_sum(), and I don’t see any differences in inferences from the original code that I can attribute to its use.

I understand that it’s difficult, or maybe impossible, to get bit-level reproducibility from multi-threaded code, so you might see some differences even if you fix the parameter initialization and random-number seed.
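To see why bit-level reproducibility is hard to get from a threaded reduction: floating-point addition is not associative, so the order in which partial results are combined can change the low-order bits. A minimal Python sketch, with values contrived to make the effect visible:

```python
# Floating-point addition is not associative: grouping changes the result.
xs = [1e16, 1.0, 1.0]

left_to_right = (xs[0] + xs[1]) + xs[2]   # each 1.0 is absorbed into 1e16
right_to_left = xs[0] + (xs[1] + xs[2])   # 1.0 + 1.0 = 2.0 survives rounding

print(left_to_right == right_to_left)     # False: the totals differ in the last bits
```

In a multi-threaded reduction the combination order can depend on thread scheduling, which is exactly this effect at scale.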


Hi,

Sorry for the late follow-up. Thank you for sharing your experience with reduce_sum() with me. I should have mentioned in my first post that seeds are fixed, so the fake data I make ought to be the same, and both samplers should start with the same initial values. I think when I have the time, I will try to run some kind of simulation to demonstrate how quickly MAP estimates from both models converge to the same point, as a function of the number of MCMC samples. Following your suggestion, I’ll also take a look at how sensitive results are to differences in initial sampler conditions. If/when I get around to this, I’ll update this post with my results.

Have a look at reduce_sum_static, which always forms the partial sums in the same way when given the same data.
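One way to picture the difference: with plain reduce_sum the scheduler may choose the partial-sum boundaries at run time, while reduce_sum_static fixes the partitioning for a given grainsize and data, so the same partial sums are formed on every run. A hypothetical Python sketch of that idea (chunk_sum is an illustration, not Stan’s implementation):

```python
def chunk_sum(xs, grainsize):
    """Sum xs by first summing fixed-size chunks, then totalling the chunk sums."""
    partials = [sum(xs[i:i + grainsize]) for i in range(0, len(xs), grainsize)]
    return sum(partials)

data = [1e16] + [1.0] * 10

# The same fixed partition always yields the same bits ...
assert chunk_sum(data, 4) == chunk_sum(data, 4)

# ... but a different partition of the same data can yield a slightly different total.
print(chunk_sum(data, len(data)) == chunk_sum(data, 4))  # False
```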


Hi,

I should have also mentioned that I’m using reduce_sum_static, sorry.

Given that seeds are fixed and reduce_sum_static is being used, do you think this discrepancy could be explained by a mistake in the partial_sum function, i.e. that it’s not actually calculating and returning what I intend it to? Thank you for your time on this.

I debug these programs by running the model without reduce_sum, but using the partial log-likelihood function to compute the log likelihood in, say, 4 calls over slices of the dataset, adding each result to target. Get this working first, and only then switch to reduce_sum.
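That debugging workflow can be sketched in Python (partial_log_lik and the normal log-density are hypothetical stand-ins for whatever your partial_sum function actually computes): evaluate the log likelihood once over the whole dataset, then again as four manual calls over slices, and check that the two agree before bringing reduce_sum back in.

```python
import math

def normal_lpdf(y, mu, sigma):
    # Stand-in log density; replace with your model's likelihood terms.
    return -0.5 * ((y - mu) / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)

def partial_log_lik(y_slice, mu, sigma):
    # Mirrors the slice-wise partial_sum function you would pass to reduce_sum.
    return sum(normal_lpdf(y, mu, sigma) for y in y_slice)

y = [0.3, -1.2, 0.7, 2.1, -0.4, 1.5, 0.0, -2.2]
mu, sigma = 0.1, 1.3

# Full-data evaluation (the "no reduce_sum" version of target).
target_full = partial_log_lik(y, mu, sigma)

# The same quantity as 4 manual calls over slices of the data.
n = len(y) // 4
target_sliced = sum(partial_log_lik(y[i:i + n], mu, sigma)
                    for i in range(0, len(y), n))

# These should agree up to floating-point rounding; a larger gap suggests a bug
# in how the partial function slices or indexes the data.
assert math.isclose(target_full, target_sliced, rel_tol=1e-12)
```

If the sliced total disagrees with the full-data total by more than rounding error, the bug is in the partial function itself, not in reduce_sum.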


That makes sense. Thank you very much for the suggestion!