model <- 'data {
  int<lower=0> N;          // number of observations
  vector[N] time;          // predictor
  vector[N] concentration; // response
  real beta1_pop;
  real beta2_pop;
  real beta3_pop;
  real<lower=0> omega_beta1;
  real<lower=0> omega_beta2;
  real<lower=0> omega_beta3;
  real<lower=0> pres;
}
parameters {
  vector<lower=0>[3] beta;
}
model {
  // Priors
  beta[1] ~ lognormal(beta1_pop, omega_beta1);
  beta[2] ~ lognormal(beta2_pop, omega_beta2);
  beta[3] ~ lognormal(beta3_pop, omega_beta3);
  // Likelihood (the response declared in the data block is concentration)
  concentration ~ normal(123 * beta[1] / (beta[2] * (beta[1] - beta[3]))
                         * (exp(-beta[3] * time) - exp(-beta[1] * time)),
                         pres);
}'
I would like to extract unbiased samples from the posterior distribution P(beta | y).
Using simply
stan.model <- stan_model(model_code = model)
stan_data <- list(N = length(obs), concentration = obs, time = time,
                  beta1_pop = 1, beta2_pop = 3, beta3_pop = 2,
                  omega_beta1 = 1, omega_beta2 = 1, omega_beta3 = 1,
                  pres = 1)
fit <- sampling(stan.model, data = stan_data, iter = 20000, chains = 1,
                algorithm = "HMC")
fit_samples <- extract(fit)
returns biased HMC samples.
I would need to add a Metropolis–Hastings (MH) acceptance test after each draw to unbias them, but computing this ratio is not an easy task. Is there an embedded MH test in rstan?
I would suggest not worrying about frequentist concepts, such as the bias of a point estimator over repeated random samples from a well-defined population.
This is not correct. MH steps do not de-bias MCMC (and Stan already implements a specific Metropolis–Hastings method, so an extra step would be particularly unnecessary).
This paper talks a little about the issue of debiasing MCMC. In practice, if Stan doesn’t give any warnings (in particular, no maximum tree depth, low E-BFMI, or divergence warnings) and the effective sample size is large enough, then the MCMC bias should be smaller than the Monte Carlo error you get from computing estimates from the samples.
What I meant by “adding an MH test” is adding a rejection step so that only samples drawn from the target distribution are kept. Don’t you think accepting/rejecting samples debiases the MCMC (for instance, to obtain an unbiased estimator of the expectation)?
Many applications require drawing samples from the true posterior distribution; I was wondering whether Stan offers that.
Adding a rejection step does not ensure that you’re only drawing samples from the target distribution (unless you began by sampling exactly from the target distribution, which is never the case). Adding an accept/reject step does not debias MCMC. MCMC is asymptotically unbiased, which can be shown using the Ergodic Theorem for Markov Chains.
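To make the asymptotic-unbiasedness point concrete, here is a toy random-walk Metropolis sampler (a hypothetical illustration, not Stan’s algorithm) targeting a standard normal. Even though the chain is started far from the mode, and every individual draw depends on that initialization, the ergodic average still converges to the true expectation:

```python
import math
import random

def rw_metropolis(logpdf, x0, n_iter, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + step*eps, accept with
    probability min(1, p(x')/p(x)). Returns the chain as a list."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_iter):
        prop = x + step * rng.gauss(0.0, 1.0)
        # The accept/reject step keeps the target invariant, but each
        # draw is still only *asymptotically* from the target.
        if math.log(rng.random()) < logpdf(prop) - logpdf(x):
            x = prop
        chain.append(x)
    return chain

# Target: standard normal, deliberately initialized far away (x0 = 10).
log_normal = lambda x: -0.5 * x * x
chain = rw_metropolis(log_normal, x0=10.0, n_iter=50_000, step=2.0, seed=42)
# Despite the biased start, the ergodic average approaches E[X] = 0.
est_mean = sum(chain[5_000:]) / len(chain[5_000:])
```

The accept/reject step here is exactly the MH test being discussed: it is what makes the target distribution invariant, not what removes the transient (initialization) bias, which vanishes only asymptotically.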
Yes, Stan satisfies the detailed balance condition so that marginal draws are distributed according to the posterior.
It would be totally broken if it didn’t!
Stan’s HMC algorithm does a Metropolis accept step. You can read Radford Neal’s paper in the MCMC Handbook (it’s a free sample chapter online) for background.
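The structure Neal describes can be sketched in a few lines (a toy Python illustration, not Stan’s implementation): simulate Hamiltonian dynamics with a leapfrog integrator, then accept or reject the endpoint with a Metropolis test on the change in total energy:

```python
import math
import random

def hmc_step(logpdf, grad, x, eps=0.1, n_leapfrog=20, rng=random):
    """One HMC transition: leapfrog integration followed by a
    Metropolis accept/reject on the Hamiltonian (energy) error."""
    p = rng.gauss(0.0, 1.0)                  # resample momentum
    x_new, p_new = x, p
    p_new += 0.5 * eps * grad(x_new)         # half step for momentum
    for i in range(n_leapfrog):
        x_new += eps * p_new                 # full step for position
        if i < n_leapfrog - 1:
            p_new += eps * grad(x_new)       # full step for momentum
    p_new += 0.5 * eps * grad(x_new)         # final half step
    # Metropolis test on H = -logpdf + kinetic energy
    h_old = -logpdf(x) + 0.5 * p * p
    h_new = -logpdf(x_new) + 0.5 * p_new * p_new
    if math.log(rng.random()) < h_old - h_new:
        return x_new                         # accept the endpoint
    return x                                 # reject: keep current state

# Toy target: standard normal (log density up to a constant, and its gradient).
logpdf = lambda x: -0.5 * x * x
grad = lambda x: -x
rng = random.Random(7)
x, draws = 0.0, []
for _ in range(5_000):
    x = hmc_step(logpdf, grad, x, rng=rng)
    draws.append(x)
est_mean = sum(draws) / len(draws)
```

Because the leapfrog integrator nearly conserves energy, `h_old - h_new` is small and the acceptance rate is high; the Metropolis test corrects exactly for the integrator’s discretization error.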
The NUTS algorithm (as revised by Michael Betancourt), which is Stan’s default and actually works for real problems, replaces the Metropolis step with a multinomial draw along the simulated trajectory. The proof of correctness for a version using slice sampling is in the original Matt Hoffman and Andrew Gelman paper on NUTS; Michael Betancourt revised it in the way described in his exhaustive HMC paper (along with a lot of analysis of why NUTS works).
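The difference from a plain accept/reject can be sketched roughly (a hypothetical toy, not the actual NUTS implementation): instead of a binary accept/reject of the trajectory endpoint, one state is drawn from the whole trajectory with probability proportional to exp(-H):

```python
import math
import random

def multinomial_select(states, energies, rng=random):
    """Draw one state from a simulated trajectory with probability
    proportional to exp(-H), instead of accept/reject on the endpoint."""
    m = min(energies)                         # stabilize the exponentials
    weights = [math.exp(m - h) for h in energies]
    u = rng.random() * sum(weights)
    acc = 0.0
    for state, w in zip(states, weights):
        acc += w
        if u <= acc:
            return state
    return states[-1]

# States with equal energy are chosen uniformly often;
# a much higher-energy (less probable) state is rarely chosen.
rng = random.Random(0)
states = [0.0, 1.0, 2.0]
energies = [1.0, 1.0, 5.0]    # third state is far less probable
picks = [multinomial_select(states, energies, rng) for _ in range(10_000)]
frac_third = picks.count(2.0) / len(picks)
```

This is why every trajectory “moves” the chain: there is no rejected proposal to discard, only a weighted choice among the states the integrator visited.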