Problem with beta distribution

I’m trying to simulate the posterior distributions for almost 13,000 parameters. I’ve run the model with as many as 15,000 iterations per chain (4 chains), but I keep getting warnings about low bulk and tail ESS as well as low BFMI. To diagnose the problem, I assigned arbitrary values to the hyperparameters of the beta priors on the parameters p[K,N] in the model below and ran it on a small subset of my original sample. When I set the alpha parameter of the beta prior close to 0, say 0.01, I got the same warnings; when I raised alpha to a larger value still below 1, say 0.3, the simulation ran smoothly with no warnings. I suspect this is the source of the problem, since the hyperparameters alpha[K] below end up quite close to 0. Do you have any suggestions for a reparameterization? Thank you.

functions {
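  // used when x[k,i] == 0 below: log((1 - p)^d) = d * log(1 - p)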
  real ll_a_lpdf(real p, real d){
    return d*log1m(p);
  }
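  // used when x[k,i] != 0 below: log(1 - (1 - p)^d); the equivalent
  // log1m_exp(d * log1m(p)) avoids cancellation when (1 - p)^d is close to 1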
  real ll_b_lpdf(real p, real d){
    return log1m(pow((1-p),d));
  }
}
data {
  int K; 
  int N;
  int x[K,N];
  real m[K];
}
parameters {
  real <lower=0> d[N];
  real <lower=3,upper=8> mu;
  real <lower=1./4,upper=2> sigma;
  real <lower=0,upper=1> p[K,N];
  real <lower=0,upper=1> rho[K];
}
transformed parameters {
  real <lower=0> alpha[K];
  real <lower=0> beta[K];
  for (k in 1:K){
    // mean-precision parameterization: alpha[k]/(alpha[k]+beta[k]) = m[k]
    // and alpha[k]+beta[k] = (1/rho[k]) - 1
    alpha[k] = m[k]*((1/rho[k])-1);
    beta[k] = (1-m[k])*((1/rho[k])-1);
  }
}
model {
  // Prior
  d ~ lognormal(mu,sigma);
  mu ~ uniform(3,8);
  sigma ~ uniform(1./4,2);
  rho ~ uniform(0,1);
  for (i in 1:N){
    for (k in 1:K){
      p[k,i] ~ beta(alpha[k],beta[k]);
    }
  }
  // Likelihood
  for (i in 1:N){
    for (k in 1:K) {
      if (x[k,i]==0) {
        target += ll_a_lpdf(p[k,i]|d[i]);
      } else {
        target += ll_b_lpdf(p[k,i]|d[i]);
      }
    }
  }
}

@betanalpha can most likely provide a good answer for you :)

The problem is that the gradients of log and log1m, and the internal Jacobian for the <lower=0, upper=1> transformation, start to underflow/overflow near 0 and 1. There’s not much one can do but use the unconstrained logit probabilities directly (i.e. use logit_p in your parameters block), work out the Jacobian and the transformed log density analytically, and then try to cancel as many of the offending terms as possible (or branch and dispatch to a more stable implementation near the boundaries).
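
For concreteness, here’s a rough sketch of how that might look for the model above; it’s untested, so treat it as a starting point rather than a drop-in replacement. The idea is to replace p[K,N] with an unconstrained logit_p[K,N], fold the log Jacobian log(p) + log(1 - p) of the inverse-logit transform into the beta prior analytically, and express the likelihood through log1m_inv_logit and log1m_exp so that p is never formed on the constrained scale:

data {
  int K;
  int N;
  int x[K,N];
  real m[K];
}
parameters {
  real <lower=0> d[N];
  // the uniform priors on mu and sigma are implied by their declared bounds
  real <lower=3,upper=8> mu;
  real <lower=0.25,upper=2> sigma;
  real logit_p[K,N];  // unconstrained logit of p[K,N]
  real <lower=0,upper=1> rho[K];
}
transformed parameters {
  real <lower=0> alpha[K];
  real <lower=0> beta[K];
  for (k in 1:K){
    alpha[k] = m[k]*((1/rho[k])-1);
    beta[k] = (1-m[k])*((1/rho[k])-1);
  }
}
model {
  d ~ lognormal(mu,sigma);
  rho ~ uniform(0,1);
  for (i in 1:N){
    for (k in 1:K){
      // log(1 - p) on the unconstrained scale; p itself is never formed
      real lp1m = log1m_inv_logit(logit_p[k,i]);
      // beta(alpha, beta) prior on p = inv_logit(logit_p) with the log
      // Jacobian log(p) + log(1 - p) absorbed: the (alpha - 1) and
      // (beta - 1) exponents become alpha and beta; lbeta() cannot be
      // dropped because alpha and beta depend on the parameter rho
      target += alpha[k] * log_inv_logit(logit_p[k,i])
                + beta[k] * lp1m
                - lbeta(alpha[k], beta[k]);
      // likelihood: (1 - p)^d when x == 0, 1 - (1 - p)^d otherwise;
      // log1m_exp avoids the cancellation in log1m(pow(1 - p, d))
      if (x[k,i] == 0)
        target += d[i] * lp1m;
      else
        target += log1m_exp(d[i] * lp1m);
    }
  }
}
generated quantities {
  // recover p on the probability scale if it’s needed downstream
  real <lower=0,upper=1> p[K,N];
  for (i in 1:N)
    for (k in 1:K)
      p[k,i] = inv_logit(logit_p[k,i]);
}

Whether this cancels enough of the offending terms for your data is something you’d have to check, but it at least removes the constrained-scale log/log1m evaluations that blow up as alpha[k] approaches 0.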

Thank you for the suggestion, @torkar!

Thanks for your response, @betanalpha. That was really helpful.