- Yeah, that makes sense. Also figured that the constraints should be a bit looser, so I used +/- 3x the error instead. I'm still a little skeptical of the bounds I'm putting here though. It seems like cheating by just centering `pmra` on `pmra_true`, and then later saying that `pmra_true` is normally distributed around `pmra` (even more so with vectorized bounds). I guess it kind of makes sense but it's still a little fuzzy in my head. How does this end up being different from `pmra ~ normal(pmra_true, pmra_err)`? (A sketch of the setup is below.)
- I bumped the maximum tree depth up to 14 and it got rid of all the tree depth warnings. From what I've read it doesn't seem to matter too much either way? As in, tree depth problems don't affect the validity of the results.
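For concreteness, the setup under discussion presumably looks something like the minimal Stan sketch below. `N`, `pmra`, `pmra_err`, and `pmra_true` are the names used in the thread; the block structure, the +/- 3 error vectorized bounds, and the absence of any other priors are assumptions for illustration.

```stan
data {
  int<lower=1> N;               // number of stars, a couple of hundred here
  vector[N] pmra;               // measured proper motions
  vector<lower=0>[N] pmra_err;  // reported measurement errors
}
parameters {
  // hard bounds centered on the measurements, +/- 3 standard errors
  vector<lower=pmra - 3 * pmra_err,
         upper=pmra + 3 * pmra_err>[N] pmra_true;
}
model {
  // measurement model: each observation scatters around its true value
  pmra ~ normal(pmra_true, pmra_err);
}
```

A real model would presumably also put a population-level prior on `pmra_true`; without one, the bounds themselves act as a uniform prior.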
All the bounds do is tell the sampler to never explore outside the region. If the posterior probability outside is so low that the sampler would never go there anyway, the bounds shouldn't make any difference to the results. They do make a difference during early warmup, when the sampler has not yet found the posterior mode/typical set and explores all kinds of crazy possibilities.
Another way to think about it: the bounds change the model to a truncated distribution

```stan
pmra ~ normal(pmra_true, pmra_err) T[pmra_true - 3 * pmra_err, pmra_true + 3 * pmra_err];
```

and this distribution is almost indistinguishable from the untruncated normal, so it barely matters.
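Spelled out as runnable Stan, that one-liner would look something like the loop below. This is just a sketch of the point being made, not code from the thread: the loop avoids relying on vectorized truncation (only available in newer Stan releases), and because the interval is centered on the mean with a width fixed in units of `pmra_err`, the truncation normalization is a constant.

```stan
model {
  for (n in 1:N) {
    // truncated likelihood equivalent to the +/- 3*err bounds on pmra_true
    pmra[n] ~ normal(pmra_true[n], pmra_err[n])
                T[pmra_true[n] - 3 * pmra_err[n],
                  pmra_true[n] + 3 * pmra_err[n]];
  }
}
```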
Ok, it needs a bit wider bounds though to be "almost indistinguishable". `N` is a couple of hundred, right? So you'd expect at least one `pmra` drawn from the normal to land around `3*err` away from the center (with N = 300, say, the chance that at least one draw exceeds 3 sigma is about 1 - 0.9973^300, roughly 55%). A safer value for the bound is `5*err`: a 5-sigma draw is definitely not going to happen unless the errors have been underestimated (and if that's possible, the model needs to change to account for it anyway).
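In code, that would presumably just mean widening the declared bounds on `pmra_true`, e.g. (same assumed names as in the sketch above, the factor is the only change):

```stan
parameters {
  // +/- 5 standard errors: wide enough that the truncation never binds
  vector<lower=pmra - 5 * pmra_err,
         upper=pmra + 5 * pmra_err>[N] pmra_true;
}
```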
Sure, hitting the maximum tree depth just makes the sampling slow. The posterior inference is valid as long as the effective sample size is large enough.