I am writing code for Bayesian analysis with transformed parameters of the Weibull distribution. I have to assign a Jeffreys prior to one of the transformed variables (\rho), and the transformations are non-linear. I have tried to work out the Jacobian adjustment for the transformed parameters. Could anyone please help me identify any mistakes in the model? I intend to use this code for a simulation study with censored data.
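For reference, here is the generic Jacobian bookkeeping (a sketch of the standard change-of-variables rule, not specific to your mu/rho transformation): if the sampled parameter is $\theta$ and the prior is stated for $\rho = g(\theta)$, then

$$
p_\theta(\theta) = p_\rho\big(g(\theta)\big)\,\left|\frac{dg}{d\theta}\right|,
\qquad
\texttt{target += } \log p_\rho\big(g(\theta)\big) + \log\left|g'(\theta)\right|.
$$

For example, with $\rho = \exp(\theta)$ and a Jeffreys-type prior $p(\rho) \propto 1/\rho$, the two terms cancel: $-\log\exp(\theta) + \theta = 0$, i.e. a flat prior on $\theta$. Note that no adjustment is needed when the prior is placed directly on a declared parameter, as you do with rho in the code.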
```stan
data {
  int<lower=0> J;              // number of observations
  int<lower=0> N_cen;          // number of censored observations
  vector[J] sampledata;        // failure times in hours
  vector[N_cen] y_cen;         // censoring times
}
parameters {
  real<lower=0, upper=20> mu;  // scale; bounds must match the uniform prior,
                               // otherwise the log density is -inf outside (0, 20)
  real<lower=0> rho;           // shape
}
model {
  target += uniform_lpdf(mu | 0, 20);
  target += -log(rho);         // Jeffreys-type prior on rho
  target += weibull_lpdf(sampledata | 1 / (rho * mu), exp(mu));
  // log CDF is the left-censoring contribution; for right-censored
  // failure times you would use weibull_lccdf instead
  target += weibull_lcdf(y_cen | 1 / (rho * mu), exp(mu));
}
```
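One thing worth double-checking (an observation on the standard censored-data setup, not a claim about your study design): `weibull_lcdf(y_cen | ...)` adds log F(t), the contribution of a *left*-censored observation; a *right*-censored failure time contributes the log survival log S(t) = log(1 - F(t)), i.e. `weibull_lccdf`. A stdlib-only Python sketch of the two quantities, with made-up shape/scale values:

```python
import math

def weibull_logsf(t, shape, scale):
    # log S(t) = -(t / scale)^shape: the right-censoring contribution
    # (what Stan's weibull_lccdf computes)
    return -((t / scale) ** shape)

def weibull_logcdf(t, shape, scale):
    # log F(t) = log(1 - S(t)): the left-censoring contribution
    # (what Stan's weibull_lcdf computes)
    return math.log1p(-math.exp(weibull_logsf(t, shape, scale)))

# Hypothetical shape/scale and censoring times, just for illustration
shape, scale = 1.5, 10.0
for t in [8.0, 12.0, 20.0]:
    print(t, weibull_logsf(t, shape, scale), weibull_logcdf(t, shape, scale))
```

Picking the wrong one silently biases the posterior, since F(t) and S(t) pull the likelihood in opposite directions.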
I haven’t run it myself, but I worked with a Jeffreys prior a few days ago and solved it similarly. If you still have trouble, I can debug the Stan code myself tomorrow morning (I have homework now) :)
Stan has some recommendations for prior selection, partly because Jeffreys priors can work really badly. If you haven’t read them, check here; it could be helpful.
What Stan version are you using? I am using rstan 2.19.3 and it works OK. This is what sf1 gives me:
```
Inference for Stan model: stantry.
1 chains, each with iter=2000; warmup=1000; thin=1;
post-warmup draws per chain=1000, total post-warmup draws=1000.

       mean se_mean   sd   2.5%    25%    50%   75% 97.5% n_eff Rhat
mu     0.61    0.00 0.04   0.51   0.58   0.61  0.63  0.67   365    1
rho    0.26    0.00 0.08   0.15   0.20   0.25  0.30  0.45   332    1
lp__ -10.37    0.06 1.10 -13.05 -10.73 -10.05 -9.58 -9.32   385    1

Samples were drawn using NUTS(diag_e) at Thu Apr 02 09:26:03 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at
convergence, Rhat=1).
```
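For completeness, the split-chain Rhat mentioned in that footer can be sketched in a few lines. This is a rough, stdlib-only illustration of the Gelman et al. diagnostic applied to a single chain split in two, not rstan's exact implementation (newer versions also rank-normalize):

```python
import statistics

def split_rhat(draws, n_split=2):
    """Rough split-chain potential scale reduction factor for one chain
    of draws: split the chain, compare between- and within-split variance."""
    n = len(draws) // n_split
    chains = [draws[i * n:(i + 1) * n] for i in range(n_split)]
    means = [statistics.fmean(c) for c in chains]
    W = statistics.fmean(statistics.variance(c) for c in chains)  # within
    B = n * statistics.variance(means)                            # between
    var_plus = (n - 1) / n * W + B / n                            # pooled estimate
    return (var_plus / W) ** 0.5

# A drifting (non-stationary) sequence gives Rhat well above 1,
# while a well-mixed one sits near 1.
print(split_rhat(list(range(100))))
print(split_rhat([0, 1] * 50))
```

The n_eff of ~330–385 out of 1000 draws in your output is unremarkable for NUTS, but running a single chain makes Rhat much less informative; several chains from dispersed starting points are the usual recommendation.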