I am trying to translate this model from INLA to Stan and extend it to fit my dataset (a hurdle model with a spatial Gaussian process).
I will tackle the model step by step.
Since my data are censored at known bounds, I am currently trying to fit the parameters of a sample Y drawn from a Gamma distribution, with shape \alpha and the rate parameter \beta written in terms of the mean \mu = E[Y]:
\beta = \alpha / \mu
because \mu is what I will later model with the GP. For now, \mu is a simple linear model: \mu = \beta_0 + \beta_1 X.
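(To spell out the parameterization: in the shape-rate form the Gamma mean is \alpha/\beta, so fixing the mean determines the rate, and \alpha then controls the dispersion around it:

E[Y] = \alpha / \beta = \mu  =>  \beta = \alpha / \mu,  and  Var[Y] = \alpha / \beta^2 = \mu^2 / \alpha.)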
I start with this toy dataset:
set.seed(999)
N = 100
X = runif(N, 0, 2)
alpha = 10                # shape
beta0 = 0.2
beta1 = 0.5
mu = beta0 + beta1 * X    # linear model for the mean
Y = rgamma(N, rate = alpha / mu, shape = alpha)
U = 1                     # upper censoring bound
L = 0.25                  # lower censoring bound
Y[Y < L] = L              # clamp (censor) observations below L
Y[Y > U] = U              # clamp (censor) observations above U
I coded the Stan model as:
data {
  int<lower = 1> N;
  real<lower = 0> L;
  real<lower = L> U;
  vector<lower = L, upper = U>[N] Y; // dependent variable
  vector[N] X;                       // independent variable
}
parameters {
  real beta0;
  real beta1;
  real<lower = 0> alpha; // shape parameter
}
transformed parameters {
  vector[N] rate = alpha * inv(beta0 + beta1 * X);
}
model {
  beta0 ~ std_normal();
  beta1 ~ std_normal();
  alpha ~ gamma(0.01, 0.01);
  for (n in 1:N) {
    Y[n] ~ gamma(alpha, rate[n]) T[L, U];
  }
}
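(My understanding of the T[L, U] statement is that Stan renormalizes the Gamma density to the interval, i.e.

p(y | \alpha, \beta) = Gamma(y | \alpha, \beta) / (F(U | \alpha, \beta) - F(L | \alpha, \beta)),  for L <= y <= U,

where F is the Gamma CDF. Please correct me if truncation is not the right treatment here, given that my toy data are clamped to the bounds rather than discarded.)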
Right at the start of sampling I get errors like the following (this happens in every fit):
Chain 2: Rejecting initial value:
Chain 2: Error evaluating the log probability at the initial value.
Chain 2: Exception: gamma_lpdf: Inverse scale parameter is -2.10891, but must be > 0! (in 'model77e750880e39_Gamma_regression_truncated' at line 26)
and, at the end:
Warning messages:
1: There were 61 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
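I suspect the rejections happen because the linear predictor beta0 + beta1 * X can be negative at initialization, which makes the rate negative. One reformulation I have been considering uses a log link so that \mu is positive by construction. This is only a sketch; the log link and the exponential(1) prior on alpha are my own guesses, not part of the original INLA model:

data {
  int<lower = 1> N;
  real<lower = 0> L;
  real<lower = L> U;
  vector<lower = L, upper = U>[N] Y;
  vector[N] X;
}
parameters {
  real beta0;
  real beta1;
  real<lower = 0> alpha; // shape
}
transformed parameters {
  // log link keeps mu > 0, so the rate is valid for any initial values
  vector[N] mu = exp(beta0 + beta1 * X);
  vector[N] rate = alpha * inv(mu);
}
model {
  beta0 ~ std_normal();
  beta1 ~ std_normal();
  alpha ~ exponential(1); // my guess; gamma(0.01, 0.01) seemed very diffuse
  for (n in 1:N) {
    Y[n] ~ gamma(alpha, rate[n]) T[L, U];
  }
}

One caveat I am aware of: with the log link the coefficients act on log(\mu), so beta0 = 0.2 and beta1 = 0.5 from the simulation would no longer be recovered on the original scale.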
So, here are my questions:
- Is there a better way to formulate the model (e.g. the log-link sketch above) and get rid of the divergent transitions?
- I use a Gamma distribution because my data (hourly rainfall) must be positive. Do you think a truncated normal distribution would be more stable to sample from? (A sketch of what I mean follows this list.)
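For reference, this is roughly the truncated-normal variant I have in mind. Again only a sketch: the residual scale sigma and its half-normal prior are my own additions, with no direct analogue in the Gamma model.

data {
  int<lower = 1> N;
  real<lower = 0> L;
  real<lower = L> U;
  vector<lower = L, upper = U>[N] Y;
  vector[N] X;
}
parameters {
  real beta0;
  real beta1;
  real<lower = 0> sigma; // residual scale (my addition)
}
model {
  beta0 ~ std_normal();
  beta1 ~ std_normal();
  sigma ~ std_normal(); // half-normal via the <lower = 0> constraint
  for (n in 1:N) {
    // the normal mean can go negative without triggering rejections
    Y[n] ~ normal(beta0 + beta1 * X[n], sigma) T[L, U];
  }
}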
Finally, here’s the pairs plot: [pairs plot image]
Thanks in advance