HBM: Stan vs. bootstrap filter method -- different results

I am working on a hierarchical Bayesian model (HBM) project and I am very confused by the posterior Stan gives me. So I wrote an R script using a simple bootstrap filter method on the same problem, and I got a different posterior. I have checked both scripts again and again and couldn't find what is causing the difference. I have tested both methods on simpler cases and the results match, but on this particular model they just don't. I am posting both scripts below. Can someone help me?

Another thing I found: in the Stan script, when I set t_obs_log_prob <- 0, which means there is no data and I should simply get the prior back, the resulting kappa doesn't follow uniform(-2, 0).

If there is anything unclear, please let me know and I will explain more about the question.

Below is my Stan script. I run it with the Python package PyStan.

///////////////////////////// Python part in brief
t_obs = 0.1
fit = pystan.stan(model_code = code, data = data, iter = int(2e6), chains = 1)
///////////////////////////// STAN part
data{
	real t_obs;
}
parameters{
	real <lower=-2, upper=0> kappa;
	real <lower=-22, upper=-3> log_lambda_min;
	real <lower=-3, upper=3> log_lambda_max;
	
	real <lower=-2, upper=0> epsilon;
	real <lower=-0.3, upper=0.2> log_tau_min;
	real <lower=0.2, upper=0.65> log_tau_max;	

	real <lower = log_lambda_min, upper = log_lambda_max> log_lambda;
	real <lower = log_tau_min, upper = log_tau_max> log_tau;
}
transformed parameters{
	real lambda;
	real lambda_min;
	real lambda_max;
	real tau;
	real tau_min;
	real tau_max;

	lambda <- 10.0^(log_lambda);
	tau <- 10.0^(log_tau);

	lambda_min <- 10.0^(log_lambda_min);
	lambda_max <- 10.0^(log_lambda_max);

	tau_min <- 10.0^(log_tau_min);
	tau_max <- 10.0^(log_tau_max);

}
model{
	real t_obs_log_prob;
	real lambda_log_prob;
	real tau_log_prob;

	kappa ~ uniform(-2,0);
	log_lambda_min ~ uniform(-22,-3);
	log_lambda_max ~ uniform(-3,3);

	epsilon ~ uniform(-2,0);
	log_tau_min ~ uniform(-0.3,0.2);
	log_tau_max ~ uniform(0.2, 0.65);	

	lambda_log_prob <- kappa*log(lambda) + log((kappa+1)/(lambda_max^(kappa+1)-lambda_min^(kappa+1)));
	tau_log_prob <- epsilon*log(tau) + log((epsilon+1)/(tau_max^(epsilon+1)-tau_min^(epsilon+1)));
	t_obs_log_prob <- log(1 - exp( 0 - lambda * t_obs)) - log(1 - exp( 0 - lambda * tau));
        
	increment_log_prob(lambda_log_prob + tau_log_prob + t_obs_log_prob);
}

===================================
Below is my R script for the bootstrap filter method.

# bootstrap filter approach

n <- 1000000

# setting

kappa.range <- c(-2, 0)
log.lambda.min.range <- c(-22, -3)
log.lambda.max.range <- c(-3, 3)

epsilon.range <- c(-2, 0)
log.tau.min.range <- c(-0.3, 0.2)
log.tau.max.range <- c(0.2, 0.65)

# hyper parameter
MyUnif <- function(n, r)
{
  x <- runif(n, r[1], r[2])
  return(x)
}

kappa <- MyUnif(n, kappa.range)
lambda.min <- 10^MyUnif(n, log.lambda.min.range)
lambda.max <- 10^MyUnif(n, log.lambda.max.range)

epsilon <- MyUnif(n, epsilon.range)
tau.min <- 10^MyUnif(n, log.tau.min.range)
tau.max <- 10^MyUnif(n, log.tau.max.range)

# inverse transform sampling to generate lambda, tau
MyInvCDF <- function(u, k, a, b)
{
  s <- k + 1
  x <- (u * b^s + (1 - u) * a^s)^(1/s)
  return(x)
}

lambda <- MyInvCDF(runif(n), kappa, lambda.min, lambda.max)
tau <- MyInvCDF(runif(n), epsilon, tau.min, tau.max)
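MyInvCDF inverts the CDF of the truncated power law p(x) ∝ x^k on [a, b] (k ≠ -1). A quick Python cross-check of the same formula (test values k = -1.5, a = 0.1, b = 10 are assumed here for illustration): applying the forward CDF to the samples should give back a uniform on [0, 1].

```python
import numpy as np

# Truncated power law p(x) ~ x^k on [a, b], k != -1 (same formula as MyInvCDF).
def inv_cdf(u, k, a, b):
    s = k + 1.0
    return (u * b**s + (1.0 - u) * a**s) ** (1.0 / s)

def cdf(x, k, a, b):
    s = k + 1.0
    return (x**s - a**s) / (b**s - a**s)

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
x = inv_cdf(u, -1.5, 0.1, 10.0)

# If the inversion is right, cdf(x) recovers u, so it is uniform on [0, 1]:
print(abs(cdf(x, -1.5, 0.1, 10.0).mean() - 0.5) < 0.01)  # True
```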

# likelihood
Lik <- function(x, lambda, tau)
{
  p <- expm1(- lambda * x) / expm1(- lambda * tau)
  return(p)
}

# weight
tobs <- 0.1
w <- Lik(tobs, lambda, tau)
ind <- sample.int(n, replace = TRUE, prob = w)

# prior vs posterior
par(mfrow = c(1, 2))
# x <- kappa
x <- log10(lambda)
hist(x, breaks = 100, main = "Prior")
hist(x[ind], breaks = 100, main = "Posterior")
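The resampling step above is sampling-importance-resampling (SIR): draw from the prior, weight by the likelihood, resample with those weights. A minimal Python sketch of the same three steps on a toy model (a stand-in assumed for illustration, not the model above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Toy stand-in: theta ~ Uniform(0, 1), one observation y = 0.7
# with y | theta ~ Normal(theta, 0.1).
theta = rng.uniform(0.0, 1.0, size=n)           # 1. draw from the prior
w = np.exp(-0.5 * ((0.7 - theta) / 0.1) ** 2)   # 2. weight by the likelihood
w /= w.sum()
idx = rng.choice(n, size=n, replace=True, p=w)  # 3. resample with weights
posterior = theta[idx]

print(posterior.mean())  # should be close to 0.7
```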

[edit: formatted code]

I’m not familiar with these models, but I think there’s an issue here.

I agree that if you set t_obs_log_prob equal to zero (or any other constant), the data dependence is removed from the model, but your prior is more than just kappa ~ uniform(-2,0). It also includes:

lambda_log_prob <- kappa*log(lambda) + log((kappa+1)/(lambda_max^(kappa+1)-lambda_min^(kappa+1)));
tau_log_prob <- epsilon*log(tau) + log((epsilon+1)/(tau_max^(epsilon+1)-tau_min^(epsilon+1)));

increment_log_prob(lambda_log_prob + tau_log_prob);

I'm not familiar with what distribution this is, but it's more than a uniform (and since it is a function only of your parameters, it's part of your prior).
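One way to see this numerically: exp(lambda_log_prob) is a normalized density in lambda, not in log_lambda, so its integral over log_lambda depends on kappa; if that integral varies with kappa, kappa's marginal can't stay uniform. A Python sketch with one fixed choice of bounds (lambda_min = 1e-10, lambda_max = 1 are assumed values inside your stated ranges):

```python
import numpy as np

def lam_mass(kappa, log_lmin=-10.0, log_lmax=0.0, m=200_000):
    """Integrate exp(lambda_log_prob) over log_lambda (trapezoid rule)."""
    x = np.linspace(log_lmin, log_lmax, m)
    lam = 10.0 ** x
    s = kappa + 1.0
    lmin, lmax = 10.0 ** log_lmin, 10.0 ** log_lmax
    log_p = kappa * np.log(lam) + np.log(s / (lmax ** s - lmin ** s))
    f = np.exp(log_p)
    return (0.5 * (f[1:] + f[:-1]) * np.diff(x)).sum()

# The mass differs by orders of magnitude across kappa values:
print(lam_mass(-0.5))  # ~ 4.3e4
print(lam_mass(-1.5))  # ~ 1.4e9
```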

What version of Stan are you using? The <- assignment syntax and the increment_log_prob function have been deprecated for a while. It'd probably be worth upgrading; there have been some important changes.

Hope that helps!

You will just get your prior back if you don't have any data. But in your case, kappa is involved in more than just the uniform distribution, so its marginal posterior isn't necessarily going to be uniform.

I didn't understand what this model is trying to do. It seems like an awful lot of model for a single data point!

I’m also not sure what a bootstrap filter is.