When I run `bridge_sampler` to compare a set of models, I find that:
- The number of iterations used to estimate the log marginal likelihood varies drastically (5 vs. 171 iterations for two similar models, where the model with the additional parameter takes only 5). Is that to be expected?
- When I compare two fairly similar models, the log of the Bayes factor comes out as an insanely large number, such as 650, which intuitively makes no sense since the models are almost the same.
In fact, I ran `bridge_sampler` on a model where I had accidentally added an extra parameter, and it actually performed better.
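To put the size of that Bayes factor in perspective, here is a quick back-of-the-envelope check using the 650 figure above (the numbers are just for illustration):

```python
import math

# A log Bayes factor of 650 corresponds to a Bayes factor of e^650.
# Converting to base 10 shows how many orders of magnitude that is:
log_bf = 650
log10_bf = log_bf / math.log(10)  # log10(BF) = ln(BF) / ln(10)
print(round(log10_bf, 1))  # roughly 282, i.e. BF ~ 10^282
```

A Bayes factor on the order of 10^282 would mean essentially infinite evidence for one model over the other, which seems implausible when the two models differ by a single parameter.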
I know that one important factor is to run each model for the same number of posterior iterations, but it seems to me that I'm using `bridge_sampler` incorrectly. Are there any other suggestions or checks to make sure that it is working properly?
Thanks so much,