The `loo` R package (or its various methods available via `rstanarm`) can be used for model comparison in a number of ways, for instance comparing variable transformations (*e.g.*, untransformed *vs.* log-transformed) or GLM families (*e.g.*, Poisson *vs.* negative binomial).

Does it make sense to compare models fitted with different priors to the same data?

For context, I performed a sensitivity analysis in `rstanarm` for a logistic regression using flat, default, normal(0, 1), and Cauchy priors, which all gave very different results despite prior and posterior predictive checks looking OK.

Neither the `loo` preprint nor its vignettes seem to indicate that this is possible. Is it?

Yes. There is nothing special about priors here. LOO is trying to estimate expected log predictive density, which is just $p(y_n \mid y_{-n}) = \int p(y_n \mid \theta) \cdot p(\theta \mid y_{-n}) \, \textrm{d}\theta$, where $y_n$ is one data point and $y_{-n} = y_1, \ldots, y_{n-1}, y_{n+1}, \ldots, y_N$. Nothing requires the priors in the models being compared to be the same.
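To make this concrete, here is a minimal sketch with a hypothetical Beta-Bernoulli toy model (not the logistic regression from the question), chosen because p(y_n | y_{-n}) has a closed form there. We can then compute the exact sum of log leave-one-out predictive densities under two different priors on the same data, and nothing stops us from comparing the two numbers:

```r
# Exact LOO for a Beta-Bernoulli toy model (hypothetical example).
# With a Beta(a, b) prior, the posterior given y_{-n} is
# Beta(a + s_{-n}, b + (n - 1) - s_{-n}), so the LOO predictive
# probability P(y_n = 1 | y_{-n}) = (a + s_{-n}) / (a + b + n - 1).
elpd_loo_exact <- function(y, a, b) {
  n <- length(y)
  s_minus <- sum(y) - y                   # leave-one-out success counts
  p1 <- (a + s_minus) / (a + b + n - 1)   # P(y_n = 1 | y_{-n})
  sum(log(ifelse(y == 1, p1, 1 - p1)))    # sum_n log p(y_n | y_{-n})
}

y <- c(1, 1, 0, 1, 0, 1, 1, 0, 1, 1)

# Same data, two different priors -- both give a well-defined elpd_loo:
elpd_flat    <- elpd_loo_exact(y, a = 1, b = 1)  # uniform Beta(1, 1) prior
elpd_sceptic <- elpd_loo_exact(y, a = 2, b = 2)  # weakly informative Beta(2, 2)
print(c(flat = elpd_flat, sceptic = elpd_sceptic))
```

The same logic applies to the `rstanarm` fits in the question: call `loo()` on each fitted model (one per prior) and pass the results to `loo_compare()`; the differing priors simply produce different posteriors, and hence different LOO predictive densities.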

You can often shift things around so that something appears in the prior or in the likelihood depending on the formulation. For example, I might take a varying effect and add a prior saying the effect is the same for all items; that model is then equivalent to a likelihood without the varying effect.

I agree with @Bob_Carpenter; I just want to add that I think the log is missing in the ELPD formula, and that in principle a prior should be chosen *a priori* (strictly speaking, if any parts of the model, including the likelihood, are not specified *a priori*, this needs to be taken into account when performing inference; that is what Bayesian model averaging, post-selection inference, etc. are concerned with).