I am working on Adaptive Bayesian Estimation for model comparison. Currently, to monitor which model fits the data better, I calculate Akaike weights based on the AIC, as explained on page 193 of this article.
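For concreteness, this is the standard Akaike weight computation I mean (the AIC values below are made up for illustration):

```python
import math

def akaike_weights(aics):
    """Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = AIC_i - min(AIC)."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC values for three candidate models
weights = akaike_weights([100.0, 102.0, 110.0])
print(weights)  # weights sum to 1; the lowest-AIC model gets the largest weight
```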
Now I am thinking about using the WAIC instead of the AIC, but I am not sure whether the same procedure for calculating the weights carries over from the AIC.
Thanks in advance for your help
Thanks for your answer! Just for clarification: would it be possible to treat lp__ (if I use the target += syntax so that all constant terms are still there) as a “pseudo log-likelihood”? The reason I am asking is that for my research, I need a way to compare models via Akaike weights, since it is a simple and fast method. I am aware that better model selection criteria are available, but since I need to monitor the models being compared after every new data point, the computation time needed for bridgesampling is probably too high.
Thanks for your help
The computation time for bridgesampling is a fraction of the computation time needed for sampling.
Simple and fast are not good justifications if the method is also wrong. Ben already pointed to this paper https://arxiv.org/pdf/1704.02030.pdf, which describes Pseudo-BMA+, the Bayesian version of Akaike weights. Furthermore, that paper shows that you can do even better by using Bayesian stacking. Both of these methods use a fraction of the computation time needed for sampling. Whether to use Bayesian stacking or posterior probabilities (e.g. with bridgesampling) depends on whether you assume an M-open or M-closed (or close enough to M-closed) case (see more in the paper).
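To illustrate the connection to Akaike weights, here is a sketch of plain Pseudo-BMA weights computed from each model's LOO expected log predictive density (elpd_loo). Note this omits the Bayesian-bootstrap regularization of the pointwise elpd contributions that makes it Pseudo-BMA+; the elpd values below are made up:

```python
import math

def pseudo_bma_weights(elpd_loo):
    """Plain Pseudo-BMA weights: w_k proportional to exp(elpd_loo_k).
    (Pseudo-BMA+ additionally regularizes via the Bayesian bootstrap
    over pointwise elpd contributions; see the paper linked above.)"""
    best = max(elpd_loo)  # subtract the max for numerical stability
    rel = [math.exp(e - best) for e in elpd_loo]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical elpd_loo values for three models
print(pseudo_bma_weights([-250.3, -252.1, -260.8]))
```

In practice you would take the elpd_loo estimates from PSIS-LOO on the fitted models rather than hard-coding them.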
Can we have this engraved on a plaque?
Based on the comments in the bridgesampling package vignettes, it requires many more draws than the posterior fit or LOO (the vignette uses 150000 draws, while the Stan default is 4000).
Thanks for the hint. In my case, though, taking fewer draws is fine, since I am only interested in an approximate result.