Then I would, however, also advise against Bayes factors (as far as I remember, might be wrong in the details): using BFs chooses, among the candidate models, the one that minimizes the KL-divergence between the model (including priors) and the hypothetical true model. And in my experience KL-divergence is way weirder than out-of-sample prediction (i.e., I think I understand out-of-sample prediction, whereas I've tried to really grasp KL-divergence a few times and mostly failed). I personally think Danielle's approach presented in the "Between the devil and the deep blue sea" paper is the most sensible, but IMHO it mostly applies to models more complex than linear regression. When you are working with regression you basically know for sure that your model is false, so focusing on prediction does not seem such a bad idea to me…
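To make the connection a bit more concrete, here is a toy Python sketch (nothing Stan-specific, and the "true" distribution and candidate models are completely made up for illustration): the expected out-of-sample log score of a model differs from the negative KL-divergence from the truth to the model only by a constant (the entropy of the true distribution), which doesn't depend on the model, so ranking models by one is the same as ranking by the other.

```python
import numpy as np
from scipy import stats

# Hypothetical "true" data-generating distribution and two candidate models.
# (Toy example only -- in practice the true distribution is unknown.)
true_dist = stats.norm(loc=0.0, scale=1.0)
model_a = stats.norm(loc=0.0, scale=1.5)   # wrong scale
model_b = stats.norm(loc=0.7, scale=1.0)   # wrong location

# Draws from the true distribution play the role of "future"
# (out-of-sample) observations.
y_new = true_dist.rvs(size=200_000, random_state=1234)

def kl_to_true(model):
    # Monte Carlo estimate of KL(true || model)
    # = E_true[log p_true(y) - log p_model(y)]
    return np.mean(true_dist.logpdf(y_new) - model.logpdf(y_new))

def expected_oos_log_score(model):
    # Expected out-of-sample log predictive density under the truth
    return np.mean(model.logpdf(y_new))

for name, m in [("model_a", model_a), ("model_b", model_b)]:
    print(f"{name}: KL to truth = {kl_to_true(m):.3f}, "
          f"expected oos log score = {expected_oos_log_score(m):.3f}")

# The model with the smaller KL to the truth is exactly the one with the
# larger expected out-of-sample log score; the two differ only by the
# (model-independent) entropy of the true distribution.
```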
Well, this is just my rambling - I actually have very limited real experience with model selection, so I am mostly repeating things I've read that made sense to me; don't put too much weight on it… :-)