SE is an estimate of the standard error of the posterior-mean estimate for a parameter theta[n], whereas SD is the intrinsic posterior standard deviation. Technically, SE = SD / sqrt(ESS), where ESS is the effective sample size. For a well-behaved sampler, ESS grows linearly with the length of the chains, so the SE shrinks as you run longer, while the SD does not.
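To make the SE = SD / sqrt(ESS) relationship concrete, here is a minimal sketch using simulated i.i.d. posterior draws (for independent draws the ESS is just the number of draws; a real MCMC chain would have ESS below N due to autocorrelation). The parameter values here are hypothetical, just for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated posterior draws for one parameter (i.i.d., so ESS ~ N).
draws = rng.normal(loc=2.0, scale=1.5, size=10_000)

sd = draws.std(ddof=1)      # intrinsic posterior standard deviation (SD)
ess = draws.size            # effective sample size; = N for independent draws
se = sd / np.sqrt(ess)      # Monte Carlo standard error of the posterior mean (SE)

print(f"SD  = {sd:.3f}")
print(f"ESS = {ess}")
print(f"SE  = {se:.4f}")
```

Quadrupling the chain length roughly quadruples ESS, which only halves the SE; the SD stays put because it is a property of the posterior itself, not of the sampler.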
Thank you for educating me on this, Bob. I’m still not well versed in Bayesian statistics and never took a formal course (mostly ad hoc approaches/learning for different projects), so this is very useful info on sqrt(ESS).
I’d be inclined to use softer bounds like this, chosen so that your intervals are roughly the 95% central intervals; you can make that more extreme (99% or whatever). It usually leads to smoother computation.
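One way to read "softer bounds" is: instead of hard constraints on (lower, upper), use a prior whose central interval matches those bounds. The sketch below (with made-up bounds of 0 and 10) picks a normal prior whose 95% central interval is exactly the old hard interval, using only the standard library:

```python
from statistics import NormalDist

# Hypothetical hard bounds we want to soften: theta in (0, 10).
lower, upper = 0.0, 10.0

# Normal prior whose central 95% interval matches the bounds.
z95 = NormalDist().inv_cdf(0.975)          # ~1.96
mu = (lower + upper) / 2.0
sigma = (upper - lower) / (2.0 * z95)

prior = NormalDist(mu, sigma)
mass_inside = prior.cdf(upper) - prior.cdf(lower)
print(f"mu = {mu}, sigma = {sigma:.3f}, "
      f"P(lower < theta < upper) = {mass_inside:.3f}")
```

To make the prior more extreme (a 99% central interval, as suggested above), replace 0.975 with 0.995, which shrinks sigma. The sampler then sees a smooth unbounded density instead of a hard wall at the boundary.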
This is great advice. I’ll try this and compare diagnostics/plots (and of course do reparameterizations; the first that comes to mind is converting the log-normals to exponentiated normals, \theta_{i} = \bar{\theta} \exp(\eta_{i}) with \eta_{i} \sim \text{N}(0, \omega^{2})).
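That reparameterization can be sanity-checked numerically: drawing \eta_i ~ N(0, \omega^2) and setting \theta_i = \bar{\theta} exp(\eta_i) is the same as saying log(\theta_i) ~ N(log \bar{\theta}, \omega^2). A small sketch (the values of \bar{\theta} and \omega here are arbitrary placeholders):

```python
import math
import random

random.seed(1)

theta_bar = 2.0   # hypothetical population median
omega = 0.5       # between-unit SD on the log scale

n = 100_000
# Non-centered draw: sample eta on the unconstrained scale, then transform.
etas = [random.gauss(0.0, omega) for _ in range(n)]
thetas = [theta_bar * math.exp(eta) for eta in etas]

# Equivalent direct statement: log(theta_i) ~ N(log(theta_bar), omega^2).
log_thetas = [math.log(t) for t in thetas]
mean_log = sum(log_thetas) / n
print(f"mean of log(theta) ~ {mean_log:.3f} "
      f"(log(theta_bar) = {math.log(theta_bar):.3f})")
```

A practical bonus of this non-centered form is that \theta_i > 0 is guaranteed by construction, so no explicit positivity constraint is needed on the sampled quantity.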