I have a general question about using WAIC or LOOIC to compare Stan models fit with and without measurement error (ME) in the brms package in R.
Every time I have fit a model with ME, it has had a substantially worse fit than the model with the same predictors fit without any ME specified. Is this always going to be the case? My understanding is that if I have a well-motivated reason to include ME, then I should fit it regardless so I can be more confident in any inferences made, but I'm hoping someone can either direct me to some literature discussing this or give me some additional insight into this specific model-comparison question.
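For concreteness, here is a minimal sketch of the kind of comparison I mean, with made-up data and variable names (y, x_obs, and a known measurement standard error x_se), using brms's me() term and loo():

```r
library(brms)

# Made-up example data: outcome y, noisy predictor x_obs,
# known measurement standard error x_se (names are illustrative)
set.seed(1)
n   <- 100
x   <- rnorm(n)                      # true (unobserved) predictor
dat <- data.frame(
  y     = 0.5 * x + rnorm(n),
  x_obs = x + rnorm(n, sd = 0.5),    # predictor observed with error
  x_se  = 0.5
)

# Same predictor, without and with measurement error
fit_no_me <- brm(y ~ x_obs, data = dat)
fit_me    <- brm(y ~ me(x_obs, x_se), data = dat)

# Compare estimated out-of-sample predictive performance
loo_compare(loo(fit_no_me), loo(fit_me))
```

In comparisons like this, the ME model consistently comes out worse.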
LOO and WAIC are biased if the data are bad. So essentially you get a better fit without measurement error, but you would get worse predictive validity on a future data set that also had measurement error. You would be way overconfident that your predictions were correct because you weren't taking the problems with the data into consideration.
This may be why people don't like measurement-error modelling/imputation. It slows things down, you get wider CIs, and no one ever replicates, so why bother?
Thanks for the reply. I'm glad to hear that I generally had the right idea about comparing models with and without measurement error. I'm working hard to understand how these information criteria work, but I'm not a statistician by training, so the intricacies often go over my head.
In the types of data we deal with in my field, there is known measurement error, as evidenced by many published papers. We just haven't quite gotten to the point of including it in our models (and for many reasons, some of which you pointed out, I don't think that will happen anytime soon).