We’re (still — I’ve posted a few comments already) preparing a case-control study in which we’d like to use
stan_clogit to estimate the effect of an exposure on mortality. Many of our patients serve as controls for multiple cases, so my supervisor asked whether it would be feasible to use robust standard errors in the estimation (he’s not a Bayesian). Our data set is fairly large, with some 25k patients in total.
To the best of my knowledge, the way to make the model robust in a Bayesian framework would be either to use e.g.
student_t priors to regularise the parameter estimates (e.g. the Prior Choice Recommendations wiki) or to add an intermediate step in the likelihood (BDA3 p. 438, ch. “Models for robust inference”).
I was considering going for the
student_t priors on the coefficients, and I was curious whether others have done something similar or might be able to provide some guidance on alternative/better approaches, ideally sticking with
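For concreteness, here is a minimal sketch of what I have in mind — the variable names (death, exposure, age, matched_set) and the data frame mydata are placeholders for our actual data, and the df/scale values are just the usual weakly-informative defaults, not a recommendation:

```r
library(rstanarm)

# Conditional logistic regression with heavier-tailed priors on the
# coefficients; strata identifies the matched case-control sets.
fit <- stan_clogit(
  death ~ exposure + age,                           # hypothetical formula
  data   = mydata,                                  # hypothetical data frame
  strata = matched_set,                             # hypothetical stratum id
  prior  = student_t(df = 4, location = 0, scale = 2.5),
  chains = 4,
  cores  = 4
)

summary(fit)
```

My understanding is that the student_t prior here regularises the coefficient estimates (robustness to a few extreme strata) rather than giving sandwich-type robust standard errors, so I’m not sure it answers my supervisor’s concern directly.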