Results from monotonic effect versus dummy variables in brms

Hi @torkar,

I have been trying for a long time now to find some literature on this claim. People usually speak of ~2 SE being considered the “threshold”, but I haven’t found a paper actually making that claim.

Sorry for jumping in here.

Alex

Agreed, I’ve seen it being used in different circumstances, and the threshold varies between 2 and 6 SE. I should’ve written more explicitly:

[1] -23.576  -1.624

indicates that, relatively speaking, fit2 is better at the z_{95\%} level. When I work with these things myself I often use 4 SE.
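For concreteness, here is a minimal sketch of how such an interval is formed under the normal approximation. The values `elpd_diff = -12.6` and `se = 5.6` are my assumption, back-derived from the interval printed above at z ≈ 1.96; they are not taken from the actual models.

```python
# Hypothetical values, back-derived from the interval above:
# midpoint -12.6, half-width 10.976 ≈ 1.96 * 5.6.
elpd_diff = -12.6  # expected difference in elpd (fit2 vs. the other model)
se = 5.6           # standard error of that difference

z95 = 1.96  # normal quantile for a central 95% interval
lower = elpd_diff - z95 * se
upper = elpd_diff + z95 * se
print(round(lower, 3), round(upper, 3))  # → -23.576 -1.624
```

Since the whole interval lies below zero at this level, fit2 is favored at z_{95\%} under these assumed numbers.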


Same here.

So far all my knowledge is based on forum threads where people argue about where to draw the threshold!
I was just wondering whether you happen to know of any literature explicitly arguing for a particular threshold.

Cheers,
Alex

No, but if someone knows of a reference in this case (concerning model selection) it would be @avehtari :)

I did find some older comments of his on this matter (link below). But that was quite some time ago, so maybe there is a more solid idea in 2020?


So, using z_{99\%} to stay on the safe side (and I usually have very large n in my samples).
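To illustrate how much the choice of level matters, a quick sketch with the same assumed numbers as earlier in the thread (elpd_diff = -12.6, SE = 5.6, my back-derived values, not model output): at z_{95\%} the interval excludes zero, while at z_{99\%} it does not.

```python
# Hypothetical values carried over from the earlier post in this thread.
elpd_diff, se = -12.6, 5.6

for label, z in [("95%", 1.960), ("99%", 2.576)]:
    lower, upper = elpd_diff - z * se, elpd_diff + z * se
    crosses_zero = lower < 0 < upper
    print(f"{label}: [{lower:.3f}, {upper:.3f}]  crosses zero: {crosses_zero}")
```

So the same comparison that looks “clear” at 95% becomes inconclusive at 99%, which is exactly why the choice of threshold is contentious.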

The SE assumes that the normal approximation works well for the expected difference. If it does, then you can do whatever you would do with a normal distribution. There is no recommended threshold for making a binary decision, as there is no single value that would be good for all cases, and it is better to report the whole uncertainty instead of a thresholded binary value (we don’t want new p-value-style “almost 0.05, approaching significance” issues). You are free to make your decisions based on the uncertainty, but please report the whole uncertainty. Another issue is that this normal approximation is not perfect. Some of the issues were already known, and we’ll soon have a new paper out with more information about the failure modes, diagnostics, and recommendations, which will hopefully answer many of your questions.
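One way to report the whole uncertainty under the normal approximation, rather than a thresholded yes/no, is to give the full interval or the implied probability that the difference is negative. A sketch, again using the assumed elpd_diff = -12.6 and SE = 5.6 from earlier in the thread (my numbers, not actual model output), with the normal CDF built from `math.erf`:

```python
from math import erf, sqrt

elpd_diff, se = -12.6, 5.6  # assumed values from earlier in the thread

# Normal-approximation probability that the expected elpd difference
# is below zero: Phi((0 - elpd_diff) / se), with Phi via erf.
z = (0.0 - elpd_diff) / se                      # 2.25
p_negative = 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF at z
print(f"P(elpd_diff < 0) = {p_negative:.3f}")
```

As noted above, the normal approximation itself can fail, so any such probability should be reported together with the interval/SE, not in place of them.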


Thanks, @avehtari! This was very helpful!
