While Bayesian inference is growing in popularity in my field of psychology, the Bayes-factor flavour seems to have gathered more buzz than the estimation flavour (at least as far as I casually observe on Twitter; hard data would be welcome). Personally I’m not a fan of BFs: they require more compute time, and I don’t understand what proponents think they offer inferentially over estimation. Does anyone have resources they could point me to that discuss these two flavours and their respective merits?
Hm, I’m having trouble parsing Figure 1. The dotted line is the prior, presumably? Is the point that the rank ordering of the Bayes factors seems to match the rank ordering of the absolute vertical distance between the respective points on the prior and posterior curves?
The ratio of the two curves’ heights at any point along the horizontal axis is a Bayes factor of sorts (the Savage-Dickey density ratio for a point null at that point).
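A minimal sketch of that idea, under an assumed conjugate-normal toy setup (the figure’s actual model isn’t specified here): for a point null nested in the alternative, the Savage-Dickey ratio BF01 is the posterior density at the null value divided by the prior density there.

```python
# Savage-Dickey density ratio sketch (assumed toy setup, not the figure's model):
# prior theta ~ Normal(0, 1), one observation y ~ Normal(theta, 1),
# point null H0: theta = 0.
from scipy import stats

y = 0.8
prior_mean, prior_sd = 0.0, 1.0
obs_sd = 1.0

# Conjugate normal update gives the posterior in closed form.
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / obs_sd**2)
post_mean = post_var * (prior_mean / prior_sd**2 + y / obs_sd**2)

theta0 = 0.0  # the point null
bf01 = (stats.norm.pdf(theta0, post_mean, post_var**0.5)
        / stats.norm.pdf(theta0, prior_mean, prior_sd))
print(bf01)  # > 1 favours the null, < 1 favours the alternative
```

So wherever the posterior curve sits above the prior curve, the data have made that parameter value more credible than it was a priori.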
Actually, they’re not even in the same rank order. So I don’t see what the relationship is.
The post Ben linked misses that in the continuous-parameter case we usually integrate over an infinite number of parameter values, whereas the term “Bayes factor” is usually used when only a single parameter value is selected.
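The distinction can be sketched numerically, again under an assumed toy setup: under a point hypothesis the marginal likelihood is just the likelihood at that one value, while under a continuous hypothesis it integrates the likelihood over the whole prior.

```python
# Point vs continuous hypothesis (assumed toy setup): under H0 theta is fixed
# at 0; under H1 theta ~ Normal(0, 1), so its marginal likelihood integrates
# the likelihood over the entire prior rather than evaluating a single value.
import numpy as np
from scipy import stats
from scipy.integrate import quad

y = 0.8  # hypothetical single observation, y ~ Normal(theta, 1)

def likelihood(theta):
    return stats.norm.pdf(y, loc=theta, scale=1.0)

m0 = likelihood(0.0)  # point null: likelihood at theta = 0, no integration

# Continuous alternative: integrate likelihood * prior over all of theta.
m1, _ = quad(lambda t: likelihood(t) * stats.norm.pdf(t, 0.0, 1.0),
             -np.inf, np.inf)

bf01 = m0 / m1
print(bf01)
```

The integral under H1 is exactly the step that disappears when both hypotheses are single points, which is presumably the case the linked post had in mind.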