Hi Aki/everyone
A quick follow-up on this, if I may. I was getting some surprising results with loo (i.e., not finding differences between models where I expected them). So, as a sanity check, I ran essentially the same analysis - with a predictor I “know” to be significant - in two different ways (R syntax pasted below):
(a) As the only fixed effect in a standard Bayesian mixed-effects model
(b) Using model comparison with loo, i.e., comparing the model above to a random-effects-only model
Method (a) suggested a large and reliable effect of the predictor of interest (“entrenchment”): M = 0.62, SD = 0.04
Method (b) suggested no evidence for this effect: elpd_diff = -2.6, SE = 2.8
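For scale, dividing each result by its uncertainty (a rough yardstick only, since a posterior coefficient and an elpd difference are different quantities):

0.62 / 0.04   # method (a): posterior mean / SD, about 15 SDs from zero
-2.6 / 2.8    # method (b): elpd_diff / SE, less than one SE from zero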
I take Aki’s point that loo is not good for detecting small differences between models, but given the results of (a), this looks like a very large difference. Am I doing something wrong?
Syntax follows…
Thanks
Ben
Method (a) - Estimate directly from the model
library(rethinking)

# Note: iter and adapt_delta are sampler settings, so they go to map2stan(),
# which does the fitting; glimmer() only builds the formula and data list.
Sanity <- glimmer(Un ~ Entrenchment + (1 + Entrenchment | PID) + (1 | Verb),
                  data = BOTH, family = gaussian, prefix = c("b_", "v_"),
                  default_prior = "dnorm(0,1)")
Sanity_M <- map2stan(Sanity$f, data = Sanity$d, iter = 10000,
                     control = list(adapt_delta = 0.99))
precis(Sanity_M)
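To double-check the precis() output against the raw posterior (just a sketch - I'm assuming glimmer's "b_" prefix names the coefficient b_Entrenchment):

post <- extract.samples(Sanity_M)
mean(post$b_Entrenchment)  # reported M, ~0.62
sd(post$b_Entrenchment)    # reported SD, ~0.04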
Method (b) - Model comparison
Sanity_Baseline <- glimmer(Un ~ (1 + Entrenchment | PID) + (1 | Verb),
                           data = BOTH, family = gaussian, prefix = c("b_", "v_"),
                           default_prior = "dnorm(0,1)")
Sanity_Baseline_M <- map2stan(Sanity_Baseline$f, data = Sanity_Baseline$d,
                              iter = 10000, control = list(adapt_delta = 0.99))
# Pointwise log-likelihoods via WAIC(), then PSIS-LOO for each model
Sanity_LOO <- loo(WAIC(Sanity_M, pointwise = TRUE, loglik = TRUE))
Sanity_Baseline_LOO <- loo(WAIC(Sanity_Baseline_M, pointwise = TRUE, loglik = TRUE))
loo::compare(Sanity_LOO, Sanity_Baseline_LOO)
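In case the WAIC() route is the problem, would it be better to compute PSIS-LOO directly from the pointwise log-likelihood matrix? A sketch, assuming the models are refit with map2stan's log_lik = TRUE option (so the log-likelihood is saved in the generated quantities) and that the stanfit sits in the @stanfit slot:

library(loo)
# refit first, e.g.:
# Sanity_M <- map2stan(Sanity$f, data = Sanity$d, iter = 10000,
#                      control = list(adapt_delta = 0.99), log_lik = TRUE)
ll_full <- extract_log_lik(Sanity_M@stanfit)  # iterations x observations matrix
ll_base <- extract_log_lik(Sanity_Baseline_M@stanfit)
loo::compare(loo(ll_full), loo(ll_base))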