Two specific questions about Bayesian modeling

Dear all,

I’m writing to this forum because I have two specific questions:

  1. Regarding the use of the Cauchy distribution as a prior for logistic regression (Gelman 2008), I have observed that the same prior also works well for models that assume a log link (such as Poisson regression). How can I justify this? Any references?

  2. I have a model that involves an exponential function, and I have tried 5 different priors for its rate parameter ν. In all cases the model converges, but the estimates of the rate parameter depend heavily on the prior, as shown by the credible intervals, so I do not know which estimate is correct. Here are the priors and the results obtained via JAGS with 2000 samples:

Posterior summary for ν under each prior:

Prior                     Prior range (approx.)   mean    sd      2.5%    97.5%   overlap0  f      Rhat   n.eff
a) ν ~ dunif(0, 10)       0 to 10                 4.338   2.836   0.241    9.621  FALSE     1.000  1.002    598
b) ν ~ dunif(0, 20)       0 to 20                 8.227   5.792   0.514   19.270  FALSE     1.000  1.001    935
c) ν ~ dexp(0.25)         mostly 0 to 12          3.061   3.139   0.194   12.132  FALSE     1.000  1.010   1706
d) ν ~ dexp(0.15)         mostly 0 to 20          4.827   5.466   0.291   20.177  FALSE     1.000  1.047    137
e) ν ~ dgamma(5, 0.55)    mostly 1 to 19          8.042   3.855   2.508   17.146  FALSE     1.000  1.000   2000

The results are puzzling: the estimates differ, and, stranger still, they seem to track the range of values each prior allows. What is happening here?

Thank you very much

You’re on the Stan forums, not the JAGS forums. Their forums are here:

https://sourceforge.net/p/mcmc-jags/discussion/

The best way to justify a prior is with posterior predictive checks: does it lead to well-calibrated posterior inferences? You can also do what Jakulin and Gelman did and just run a bunch of examples to see how the prior does predictively, the way a machine learning researcher would. It depends on who you’re trying to convince, I suppose.
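
For example, a posterior predictive check for a Poisson log-link model might look like the following Python sketch. Everything in it is hypothetical: the toy data, the placeholder "posterior draws" (which would really come from your fitted model), and the choice of test statistic.

    # Minimal posterior predictive check for a Poisson log-link model.
    import numpy as np

    rng = np.random.default_rng(1)
    n, n_draws = 100, 2000

    X = np.column_stack([np.ones(n), rng.normal(size=n)])        # toy design matrix
    y = rng.poisson(np.exp(0.5 + 0.3 * X[:, 1]))                 # toy observed counts
    beta_draws = rng.normal([0.5, 0.3], 0.1, size=(n_draws, 2))  # placeholder posterior draws

    # One replicated dataset per posterior draw: y_rep ~ Poisson(exp(X @ beta)).
    y_rep = rng.poisson(np.exp(beta_draws @ X.T))

    # Compare a test statistic on the replications to the same statistic on the
    # observed data; a posterior predictive p-value near 0 or 1 signals misfit.
    T = lambda d: d.max(axis=-1)   # e.g., the largest count
    print(f"P(T(y_rep) >= T(y)) = {np.mean(T(y_rep) >= T(y)):.3f}")

If the prior is doing something pathological, you’d expect it to show up in checks like this rather than in the parameter table.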

The worry about the Cauchy is that if the data don’t have a strong effect, the posteriors can have very wide tails, which can lead to numerical issues.
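
Here’s a toy Python illustration of that failure mode (nothing to do with your model): a one-coefficient logistic regression with completely separated data, so the likelihood alone can’t bound the coefficient and the posterior tail is inherited from the prior.

    # Grid-evaluate the posterior for logistic regression with separated data
    # under a Cauchy prior vs. a normal prior of the same scale.
    import numpy as np
    from scipy import stats
    from scipy.special import log_expit

    x = np.ones(10)                       # 10 observations, all with y = 1
    beta = np.linspace(0.0, 200.0, 4001)  # grid over the coefficient

    # The log likelihood sum_i log sigma(beta * x_i) is monotone increasing
    # in beta, so nothing in the data pulls beta back toward zero.
    log_lik = log_expit(np.outer(beta, x)).sum(axis=1)

    for name, prior in [("Cauchy(0, 2.5)", stats.cauchy(0, 2.5)),
                        ("Normal(0, 2.5)", stats.norm(0, 2.5))]:
        log_post = log_lik + prior.logpdf(beta)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()                # normalize on the (uniform) grid
        print(f"{name}: P(beta > 20 | y) on the grid = {post[beta > 20].sum():.3f}")

The normal prior puts essentially no posterior mass past 20; the Cauchy leaves noticeable mass out there, which is exactly where samplers start to struggle.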

Probably none of them is “the correct estimate.” Again, what you want to do is look at the posterior predictive inferences. The tails may differ, but what happens to the quantities of interest?
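
Concretely, something like the Python sketch below: push each posterior through to a predictive quantity and see whether the answers you actually care about move. The draws are placeholders (not your JAGS output), and the Exponential(ν) observation model is my assumption, since you didn’t post the model.

    # Compare a downstream quantity of interest across priors, rather than nu itself.
    import numpy as np

    rng = np.random.default_rng(7)
    draws = {"prior (a)": rng.gamma(2.3, 1.9, size=2000),   # placeholder posterior draws
             "prior (e)": rng.gamma(4.0, 2.0, size=2000)}   # placeholder posterior draws

    for name, nu in draws.items():
        y_rep = rng.exponential(scale=1.0 / nu)   # one predictive draw per posterior draw
        lo, med, hi = np.quantile(y_rep, [0.05, 0.5, 0.95])
        print(f"{name}: predictive median {med:.3f}, 90% interval ({lo:.3f}, {hi:.3f})")

If those predictive summaries agree across your five priors, the prior sensitivity in ν may not matter for anything you report; if they disagree, the data just don’t pin ν down and you have to choose the prior on substantive grounds.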