Strange Results from BridgeSampler

When I run bridgesampler to compare a set of models I find that

  1. The number of iterations used for estimating the log marginal likelihood varies drastically (5 iterations vs. 171 iterations for two similar models, where the model with the additional parameter takes only 5). Is that to be expected?
  2. When I compare two fairly similar models, the log of the Bayes factor comes out as an insanely large number, such as 650, which is not intuitively sensible since the models are almost the same.
    In fact, I ran bridgesampler on a model to which I had accidentally added an extra parameter, and it actually performed better.
    I know that one important factor is to run each model for the same number of iterations, but it seems to me that I’m implementing bridgesampler incorrectly. Are there any other suggestions or checks to make sure that bridgesampler is working properly?

Thanks so much,
Levi

Regarding your second point, how many posterior samples have you obtained for each model? In order to get stable Bayes factor results, you need many more posterior samples than you would typically need for parameter estimation. See, for example, this thread about calculating Bayes factors for brms models: increasing the number of samples from something like iter = 2000, warmup = 1000, chains = 4 to iter = 10000, warmup = 1000, chains = 4 apparently yielded more stable results.
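For concreteness, here is a minimal sketch of that workflow using the bridgesampling package, assuming the models are fit with brms. The formulas, the data name (mydata), the iteration counts, and the object names are placeholders, not your actual models. Running bridge_sampler with repetitions > 1 is also a useful check: if the repeated log marginal likelihood estimates for a single model are spread over more than a small fraction of a log unit, any Bayes factor built from them will not be stable either.

```r
library(brms)
library(bridgesampling)

# Bridge sampling needs the posterior draws of every parameter,
# so tell brms to keep them all.
fit_null <- brm(
  y ~ x, data = mydata,
  iter = 10000, warmup = 1000, chains = 4,
  save_pars = save_pars(all = TRUE)  # older brms versions: save_all_pars = TRUE
)
fit_alt <- brm(
  y ~ x + z, data = mydata,
  iter = 10000, warmup = 1000, chains = 4,
  save_pars = save_pars(all = TRUE)
)

# Repeating the bridge sampling run shows how much the marginal-likelihood
# estimate itself fluctuates for a fixed set of posterior draws.
ml_null <- bridge_sampler(fit_null, repetitions = 10, silent = TRUE)
ml_alt  <- bridge_sampler(fit_alt,  repetitions = 10, silent = TRUE)

# Inspect the spread of the repeated log marginal likelihood estimates;
# a wide spread means you need more posterior samples.
ml_null$logml
ml_alt$logml

# Bayes factor of the alternative over the null model, on the log scale.
bf(ml_alt, ml_null, log = TRUE)
```

If the log Bayes factor still comes out implausibly large after the estimates have stabilised, that points to a genuine difference between the models (for example, through the priors on the extra parameter) rather than to a bridge sampling problem.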


Yes, your suggestions seem to be the way to go. Someone helped me figure this out some time back.

Those strange results were generated when I was using 9,000 iterations, which I had expected to be enough.