Hello! I’m learning Bayesian data analysis with brms on my own.
I compared the result from lme4 with a Bayes factor, and found an inconsistency.
For example, with lme4, the effect of X was significant (p < .05) in a GLMM.
But with brms, when I compared a model with predictor X against a model without it, the BF01 was 4, suggesting the data favor the absence of an effect of X.
I don’t think this is due to an inappropriate flat prior (I used student_t(3, 0, 2.5) for the population-level coefficients). Also, the results for the other predictors matched those from lme4.
My guess is that a few extreme data points drove the significant result (p < .05) in the frequentist model, but not in the Bayesian one.
Does my guess sound plausible?
Could there be other possible reasons for the inconsistent result?
Morning! Can you write out both models and supply either a snippet of your data or some fake data? In general, you want to avoid flat priors.
You really can’t compare p-values to BFs directly. A closer analogue is checking whether your 95% CrI excludes zero (assuming your frequentist test sets alpha to .05). It’s not exactly the same, but with loose enough priors they wind up giving nearly the same answer, because the information matrix is the same.
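To see why the CrI check tends to agree with the frequentist test, here is a toy sketch (made-up numbers for a single Gaussian effect with known standard error, not your actual GLMM): with a wide prior, the posterior 95% CrI is nearly identical to the Wald 95% CI.

```python
import math

# Hypothetical summary statistics (NOT the poster's data):
# effect estimate 0.2 with standard error 0.1 -> z = 2, two-sided p ~ .046
est, se = 0.2, 0.1
tau = 2.5  # wide prior sd, loosely in the spirit of student_t(3, 0, 2.5)

# Conjugate normal update: prior N(0, tau^2), likelihood N(est, se^2)
post_var = 1.0 / (1.0 / se**2 + 1.0 / tau**2)
post_mean = post_var * (est / se**2)
post_sd = math.sqrt(post_var)

cri = (post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd)
ci = (est - 1.96 * se, est + 1.96 * se)
print(cri)  # almost the same interval as the frequentist CI below
print(ci)
```

Here both intervals exclude zero, so the CrI check and the p < .05 result agree; the prior barely moves the posterior.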
So, if your 95% CrI excludes zero, then it’s in agreement with the frequentist test. BFs are different, because the test ultimately comes from the parameters’ priors: they represent uncertain predictions, the likelihood is averaged over those predictions, and you then compare the averaged likelihoods. My guess is that student_t(3, 0, 2.5) is still too wide to be much of a prediction at all; a prior that spreads mass over implausibly large effects drags down the averaged likelihood of the model with X.
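A toy illustration of that point (again made-up numbers for a single Gaussian effect with known standard error, not your model): the same estimate that is "significant" at p < .05 can yield a BF01 favoring the null once the likelihood is averaged over a wide prior.

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Hypothetical summary statistics: effect estimate 0.2, standard error 0.1
est, se = 0.2, 0.1          # z = 2, two-sided p ~ .046
tau = 2.5                   # wide prior sd on the effect under M1

# Marginal likelihood of the estimate under each model:
# M0 fixes the effect at 0; M1 averages the Gaussian likelihood over the
# N(0, tau^2) prior, which gives N(0, se^2 + tau^2).
m0 = normal_pdf(est, 0.0, se)
m1 = normal_pdf(est, 0.0, math.sqrt(se**2 + tau**2))
bf01 = m0 / m1
print(round(bf01, 2))  # ~3.4: favors the null despite p < .05
```

In this toy example a tighter prior (sd of 0.5 instead of 2.5) brings BF01 below 1, i.e. back in favor of the effect, which is why the prior width matters so much for BFs but so little for the CrI.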
You could test your theory by removing the extreme data points and refitting both models. It does sound like your models point in opposite directions.
How close are the fitted parameter values in each case?