Is the brms() measurement error model doing the 'right' thing?

Taking a second (hopefully more succinct) pass at this after reading Grace Y. Yi’s measurement-error textbook (2017): the question seems to be whether, after assuming nondifferential measurement error, to factorize f(x,z) further, or instead to leave this term unmodeled as a nuisance function (as brms and Stan do by default). Put differently: should the probability distribution of the true covariates be treated as ‘fixed’ (the so-called ‘functional’ approach), or further elaborated so that f(x,z) = f(x|z) f(z)? The latter (‘structural’) approach is by far the more common one in the Bayesian ME literature, for whatever reason; perhaps simply because it is so easily incorporated into the Bayesian framework?
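
To make the two options concrete, here is a schematic factorization (roughly in Yi’s notation, with y the outcome, x the true error-prone covariate, x* its observed surrogate, and z the correctly measured covariate). Nondifferential measurement error means x* is independent of y given (x, z), so the joint density factors as

$$
f(y, x^*, x, z) \;=\; \underbrace{f(y \mid x, z)}_{\text{outcome model}} \; \underbrace{f(x^* \mid x, z)}_{\text{measurement model}} \; f(x, z).
$$

The functional route stops here: the true x_i are treated as unknown constants (nuisance parameters) and f(x, z) is left unspecified. The structural route goes one step further and writes f(x, z) = f(x|z) f(z); since z is fully observed, and assuming its marginal distribution involves no parameters of interest, this amounts to adding an explicit exposure model for x given z.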

  • In our specific case, theoretically, x is the true (correctly measured) Tobin’s Q or book-to-market value of the firm, observed only through the error-prone measurement x^*, and z is the firm’s (correctly measured) cashflow rate; we know that these two variables are not independent. Moreover, we know that the direction of the dependency is more likely to be x = f(z). The dependency follows, I think, from the fact that x^* and z are not independent: the attenuation bias induced by the error in x^* is likely to spill over into the coefficient on z, precisely because the two covariates are correlated. If we ignore this dependence by not including a full exposure model of the form f(x,z) = f(x|z) f(z), are our results then going to be biased?

  • A second, practical issue for us is computational. Our model is already complex, with many group-level parameters (over 1,300) and 300,000 data points. Adding another regression, in the form of an ‘exposure model’ that is not strictly essential, seems like a bad idea (a sketch of what this would involve follows below). But we still then need to justify this choice theoretically.
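
For concreteness, a minimal sketch of what the two specifications might look like in brms, relevant to both points above. The variable names are hypothetical (q_obs for observed Tobin’s Q, sd_q for its assumed known measurement-error SD, cashflow for z, firm for the grouping factor, y for the outcome), and our actual model has far more terms; the syntax follows the me()/mi() machinery as I understand it from the brms documentation.

```r
library(brms)

## Default ("functional"-flavoured) specification: the latent true Q enters
## the outcome model via me(), but nothing conditions it on cashflow.
bform_default <- bf(y ~ me(q_obs, sd_q) + cashflow + (1 | firm))

## Structural specification: add an explicit exposure model for Q given
## cashflow, so that f(x, z) = f(x|z) f(z) is modelled rather than left
## as a nuisance.
bform_structural <-
  bf(y ~ mi(q_obs) + cashflow + (1 | firm)) +
  bf(q_obs | mi(sd_q) ~ cashflow) +
  set_rescor(FALSE)

## As far as I understand brms internals, either version estimates roughly
## one latent true-Q value per observation; the structural version
## additionally estimates the exposure-model intercept, slope and residual
## SD, i.e. one more linear predictor evaluated over ~300,000 rows.
# fit <- brm(bform_structural, data = firm_data, chains = 4, cores = 4)
```

If we do end up needing the structural version, the exposure model could presumably be kept deliberately simple (a single regression of latent Q on cashflow), which may limit the extra computational cost; but that is exactly the trade-off the theoretical justification has to address.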