Dear stanimals,
I am doing inference following the Principled Bayesian Workflow: the data seem to follow an exponentially modified Gaussian (EMG) distribution, and this is confirmed by the domain experts.
However, to adhere to the data-generating process, the correct model should be:
model {
  ... // some priors here
  for (i in 1:N) {
    y[i] ~ exp_mod_normal(mu, sigma, lambda) T[0, ];
  }
}
So now I need to generate random samples that respect the truncation, and for this I would like to write a function like:
functions {
  real exp_mod_normal_lb_rng(real mu, real sigma, real lambda, real lb) {
    real p = exp_mod_normal_cdf(lb, mu, sigma, lambda);
    real u = uniform_rng(p, 1);
    real x = ... // inverse cdf (here is where I need help)
    return x;
  }
}
Where can I find the inverse CDF for the EMG distribution? Are there other approaches to generate random samples?
How much mass does lb remove? If it's not too much, then simple rejection sampling should be okay:
functions {
  real exp_mod_normal_lb_rng(real mu, real sigma, real lambda, real lb) {
    int flag = 1;
    real val;
    while (flag) {
      real <lower = 0> l = exponential_rng(lambda);
      real n = normal_rng(mu, sigma);
      val = l + n;
      if (val > lb) {
        flag = 0;
      }
    }
    return val;
  }
}
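To gauge how much mass the lower bound removes (and hence the acceptance rate of the rejection loop), something along these lines should work, e.g. in generated quantities (just a sketch, assuming your parameters are named mu, sigma, lambda and the lower bound is 0 as in your model):

generated quantities {
  // probability mass of the untruncated EMG below the bound;
  // the rejection sampler accepts a proposal with probability 1 - mass_below_lb
  real mass_below_lb = exp_mod_normal_cdf(0, mu, sigma, lambda);
}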
Thanks @hhau
It works! I just needed to change one line from:
real <lower = 0> l = exponential_rng(lambda);
to
real l = exponential_rng(lambda);
otherwise R with cmdstanr throws the following error:
Identifier expected after sized type in local (or model block) variable declaration. (No transformations/constraints allowed.)
So, for anyone interested, this is the working function (l is still in \mathbb{R}^+, since exponential_rng only returns positive values):
real exp_mod_normal_lb_rng(real mu, real sigma, real lambda, real lb) {
  int flag = 1;
  real val;
  while (flag) {
    real l = exponential_rng(lambda);
    real n = normal_rng(mu, sigma);
    val = l + n;
    if (val > lb) {
      flag = 0;
    }
  }
  return val;
}
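For example, to draw lower-truncated posterior predictive samples with it (a sketch, assuming mu, sigma, lambda, and N as in the model above, and a lower bound of 0 to match T[0,]):

generated quantities {
  vector[N] y_rep;
  for (i in 1:N) {
    // rejection-sample from the EMG truncated below at 0
    y_rep[i] = exp_mod_normal_lb_rng(mu, sigma, lambda, 0);
  }
}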