Thank you for your prompt response. It would be great news if no Jacobian adjustment is needed in my case. But then I wonder what caused what looks like a bias in the above simulation. I have just rerun the above model 100 times: the 75% CIs of the beta parameter included the true value (0.50) in only 43% of the cases, while in 50% of them the interval underestimated it (i.e., the upper limit fell below 0.50).
library("tidyverse")

lower <- upper <- numeric(100)
for (j in 1:100) {
  set.seed(j)
  # (above R code without "set.seed(100)")
  sm <- summary(fit)$summary %>%
    as.data.frame() %>%
    rownames_to_column()
  lower[j] <- sm[202, 6]  # lower limit of the 75% CI of beta
  upper[j] <- sm[202, 8]  # upper limit of the 75% CI of beta
}
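As an aside, extracting the row by parameter name instead of the hard-coded index 202 is a little more robust to changes in parameter ordering; just a sketch, assuming the coefficient is literally declared as beta in the Stan model:

beta_row <- sm %>% filter(rowname == "beta")  # "rowname" column comes from rownames_to_column()
lower[j] <- beta_row[, 6]
upper[j] <- beta_row[, 8]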
res <- tibble(lower, upper)
# proportion of the cases where the upper limit is smaller than the true value
mean(res$upper < 0.5) # 0.50
# proportion of the cases where the lower limit is larger than the true value
mean(res$lower > 0.5) # 0.07
# proportion of the cases where the true value falls within the 75% CIs
mean(res$lower < 0.5 & res$upper > 0.5) # 0.43
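For what it's worth, a quick binomial check (just a sketch, assuming the 100 replications are independent and that these really are nominal 75% intervals) suggests a 43/100 coverage rate is very unlikely to be sampling noise alone:

binom.test(x = 43, n = 100, p = 0.75)  # exact test of observed coverage against nominal 75%; the p-value is tiny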
Do you think this apparent systematic bias is due to chance, to a coding error of mine somewhere (though I have triple-checked my R code…), or to some other reason outside of Bayesian modeling with Stan?