I want to incorporate measurement error of the dependent variable into my model. Unfortunately, I don't know much about the size of the measurement error, and I don't expect to learn about it from the model itself. I do have lots of data, though. So I wondered whether it would be reasonable to cross-validate my prior choice: I would specify different priors for the measurement error, use each prior to predict the observed variable in a validation set, and compare the predictions to the observed values. Would the prior with the best predictions be the most appropriate one?
That's a typical maneuver in machine-learning settings, where there usually isn't the machinery to fit both the prior and the data at the same time. In something like Stan, the typical approach is instead to build a hierarchical model that fits both jointly. For example, if you have a regression with coefficients beta and prior (hyper)parameters tau and rho, we won't do cross-validation to estimate tau and rho; we'll just fit the joint model p(beta, tau, rho). The user's guide has a lot of discussion of hierarchical/multilevel models.
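To make the joint-fitting idea concrete for the original question, here is a minimal sketch of a regression with measurement error in the outcome, where the measurement-error scale gets its own prior and is estimated jointly rather than fixed by cross-validation. The variable names (`y_obs`, `sigma_me`, etc.) and the specific priors are illustrative assumptions, not a recommendation. One caveat worth noting: with normal errors and no replicate measurements, the residual scale and the measurement-error scale are not separately identified by the data alone, so the prior on `sigma_me` is doing real work here.

```stan
data {
  int<lower=0> N;
  vector[N] x;
  vector[N] y_obs;          // noisy measurements of the outcome
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;      // residual scale of the latent outcome
  real<lower=0> sigma_me;   // measurement-error scale, fit jointly
  vector[N] y_true;         // latent (error-free) outcome
}
model {
  // illustrative weakly informative priors, including one on sigma_me
  alpha ~ normal(0, 5);
  beta ~ normal(0, 5);
  sigma ~ normal(0, 2);
  sigma_me ~ normal(0, 1);

  // regression for the latent outcome
  y_true ~ normal(alpha + beta * x, sigma);

  // measurement model linking the latent outcome to the observations
  y_obs ~ normal(y_true, sigma_me);
}
```

Fitting this model gives a posterior over `sigma_me` along with everything else, which is the hierarchical alternative to picking one prior by validation-set performance.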
I might not be getting this right, but are you suggesting not specifying a prior for such parameters, then? Would that be an empirical Bayes approach? Should I specify priors only when I have knowledge about them?
To expand on the previous answers: I believe the recommended way to use domain knowledge to choose priors is prior predictive checks; see the visualization paper for some examples (https://arxiv.org/abs/1709.01449), but AFAIK there isn't a comprehensive tutorial yet.
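A prior predictive check here could look like the following sketch: simulate data using only the priors (no likelihood) and see whether the implied spread of the outcome is scientifically plausible under each candidate measurement-error prior. The names and prior scales are illustrative assumptions; `fabs` is used to fold a normal draw into a half-normal draw.

```stan
data {
  int<lower=0> N;
  vector[N] x;
}
generated quantities {
  // draw parameters from the candidate priors
  real alpha = normal_rng(0, 5);
  real beta = normal_rng(0, 5);
  real sigma_me = fabs(normal_rng(0, 1));  // half-normal prior on the error scale

  // simulate outcomes implied by the priors alone
  vector[N] y_sim;
  for (n in 1:N)
    y_sim[n] = normal_rng(alpha + beta * x[n], sigma_me);
}
```

Running this with `fixed_param` sampling and comparing the distribution of `y_sim` across different priors on `sigma_me` shows which priors generate data on a plausible scale, without touching a validation set.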
No, we suggest using priors for everything. What I'm saying is that rather than fixing a prior through cross-validation (a form of what's known as "empirical Bayes"), you can jointly fit the prior parameters and the likelihood parameters.
I do a lot of PPCs in my repeated binary trial case study, but they're not the focus there and are buried after a lot of other detail.