I’ve been running some truncated normal models in brms, alongside standard normal and cumulative/ordinal models. When I use the posterior_predict function, or even just pp_check, it seems that every so often a couple of predicted values in some of the posterior draws come out as NA in the truncated model, but not in the typical normal or ordinal model. There are no NAs in the data being predicted from and no reason I can see that would produce an NA. Is there something about truncated models that causes this, e.g., does it somehow predict a value outside the truncated range and return NA for it?

Yes - here is an example. I’m using real data, so I’ve cut out a subsection of it for you to use. The real data has about 2k respondents, so a lack of data doesn’t seem to be the issue, if that is a consideration with this small data set:

reprex.RData (3.2 MB)
The uploaded file includes the data and model fit.

Running pp_check here reports that there are NAs, and you would also see them if you run posterior_predict.

(On a separate note, if you can point me to something that explains what exactly the parameters of the truncated model relate to, it would be much appreciated - sometimes the sds and beta values are dramatically different from those of a non-truncated model, yet their predictions are quite similar!)

I have checked your example, and the NAs appear because the truncation bounds yield numerically identical quantiles (both the 0% quantile or both the 100% quantile) within the untruncated distribution. Unfortunately, there is not much I can do about it at this point :-(
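To make the mechanism concrete: truncated-normal draws are typically generated by inverse-CDF sampling between the CDF values of the two bounds. When the predicted mean sits far from the truncation interval, both bounds map to the same floating-point quantile (e.g. both exactly 1.0), and no valid draw can be produced. This is a hypothetical stand-in sampler sketched with Python's stdlib, not brms's actual code; brms's internals may differ:

```python
import math
import random
from statistics import NormalDist

def rtrunc_norm(mu, sigma, lb, ub, rng=random.random):
    """Inverse-CDF sampler for a normal(mu, sigma) truncated to [lb, ub].
    Illustrative sketch of the general scheme, not brms's implementation."""
    d = NormalDist(mu, sigma)
    p_lb, p_ub = d.cdf(lb), d.cdf(ub)
    if p_lb == p_ub:
        # Bounds collapse to numerically identical quantiles of the
        # untruncated distribution -> no valid draw; mimic brms's NA.
        return float("nan")
    p = p_lb + rng() * (p_ub - p_lb)
    return d.inv_cdf(p)

# With mu = 0 and bounds [10, 12], cdf(10) == cdf(12) == 1.0 in double
# precision, so the sampler returns NaN; with bounds [1, 2] it works.
```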

The parameters of truncated distributions are hard to interpret because of the truncation, and there is no general rule for what they mean. To build intuition, I recommend creating plots of the density or samples of the truncated distribution and then varying the parameters to see how the implied truncated distribution changes.
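For intuition on why the latent parameters can look so different from a non-truncated fit while predictions stay similar, the closed-form mean of a truncated normal is instructive: a latent mu far outside the bounds can still imply an observed mean well inside them. A sketch using Python's stdlib (the formula is the standard one; the function name is just for illustration):

```python
import math
from statistics import NormalDist

def truncated_normal_mean(mu, sigma, lb, ub):
    """Mean of normal(mu, sigma) truncated to [lb, ub], via the standard
    closed form: mu + sigma * (phi(a) - phi(b)) / (Phi(b) - Phi(a))."""
    std = NormalDist()  # standard normal
    a = (lb - mu) / sigma
    b = (ub - mu) / sigma
    Z = std.cdf(b) - std.cdf(a)  # probability mass inside the bounds
    phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    return mu + sigma * (phi(a) - phi(b)) / Z

# A half-normal: mu = 0, sigma = 1, truncated to [0, inf) has mean
# sqrt(2/pi) ~= 0.798.  And a latent mu of -5 with the same truncation
# still implies an observed mean slightly above 0 - very different
# parameters, similar-looking predictions.
```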

Thanks for looking into it - much appreciated! Would ignoring these NAs and summarising the posterior without them introduce any meaningful bias or other problem that cannot be ignored? There are usually only a couple of them in some of the posterior draws, and the samples otherwise make sense, so they don’t seem to be doing anything too worrying.
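One simple sanity check before trusting such a summary is to track how many draws are actually being dropped. A minimal illustrative sketch (not a brms utility), assuming draws are passed in as a flat list:

```python
import math
from statistics import mean

def summarize_draws(draws):
    """Summarise posterior draws while dropping NA/NaN values,
    reporting how many were dropped so the NA fraction can be checked."""
    ok = [d for d in draws if not math.isnan(d)]
    return {"mean": mean(ok), "n_dropped": len(draws) - len(ok)}
```

If `n_dropped` is a tiny fraction of the total, the dropped draws are unlikely to change the summary much; a large fraction would be a red flag.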