The other caution around this approach is that you have to be really careful to understand your data. For example, you might think the outcome should essentially never be “large” (for some definition of large), but then you might look at your data and realize “oh shoot, I have some covariate combinations in here that I actually do think might yield large outcomes, or at least I’m not confident that they don’t.” Like if you’re modeling human heights, and you have “played as a center in the NBA” as a covariate, and it turns out that you actually have a fair number of 1’s for this covariate, then your prior on the outcome ought to reflect that. This is true regardless of whether your sampling process was somehow enriched for NBA centers or you just happened to draw a really weird sample.
What’s the point of including the normal_lccdf(0 | 6, 5) term at all?
That seems like a constant term that could be ignored, but I may be missing something. Any pointer in the right direction would be appreciated.
It is a constant, but it doesn’t enter the model as an additive constant term in the log-likelihood; instead it is multiplied by T. On the log scale, additive constants can be ignored but multiplicative constants cannot.
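For concreteness, here is a minimal Stan sketch of the kind of model this seems to be about; the names T and theta and the lower-truncated normal(6, 5) prior are assumptions for illustration, not the original code.

```stan
data {
  int<lower=1> T;             // number of elements; data, hence fixed
}
parameters {
  vector<lower=0>[T] theta;   // hypothetical positive quantities
}
model {
  // Hand-written lower truncation at 0: the normalizing term
  // normal_lccdf(0 | 6, 5) is subtracted once per element, i.e. T times.
  target += normal_lpdf(theta | 6, 5) - T * normal_lccdf(0 | 6, 5);
}
```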
I thought T was also a constant; I guess it’s not somehow? But conditional on the data, I can’t see how the term matters.
My fault; looks like it is!
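In that case, under the same assumed names, the term can simply be dropped (or the truncation written with Stan’s T[0, ] syntax) without changing the posterior draws; only the reported lp__ shifts by the fixed offset.

```stan
data {
  int<lower=1> T;
}
parameters {
  vector<lower=0>[T] theta;
}
model {
  // Constant dropped: same posterior, lp__ shifted by
  // T * normal_lccdf(0 | 6, 5) relative to the explicit version.
  target += normal_lpdf(theta | 6, 5);
  // Idiomatic alternative, equivalent up to an additive constant:
  // for (t in 1:T)
  //   theta[t] ~ normal(6, 5) T[0, ];
}
```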