So I’m considering a simple linear regression model with p = 1 predictors, y = \beta x + \epsilon, where \epsilon \sim N(0, \sigma^2). I want to use a generalised beta distribution of the second kind (GB2) as a prior for \beta and an inverse gamma distribution for \sigma^2, and then sample from the posterior distribution. Would this be doable using Stan? I noticed GB2 is not a built-in distribution.

Please keep in mind I am completely new to using Stan.

A straightforward way to do this is to define the log of the GB2 pdf in the functions block as a function named GB2_lpdf, then put a sampling statement in the model block for \beta, for example

\beta ~ GB2(parm1, parm2, parm3, parm4);

(note that with a user-defined GB2_lpdf, the ~ notation drops the _lpdf suffix).

To see how to construct the lpdf function, see the “User-Defined Functions” chapter of the Stan User’s Guide.

You could also increment “target” directly in the model block. Either way, pay attention to whether you need the normalizing additive constants of the density. In newer Stan versions the function suffix controls this: _lpdf includes the constants, while _lupdf drops them.
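As a concrete sketch of that pattern (parameter names are placeholders and this is not a complete program, so treat it as a starting point rather than a finished implementation):

```stan
functions {
  // Log of the GB2(a, b, p, q) density,
  //   f(y) = a * y^(a*p - 1) / (b^(a*p) * Beta(p, q) * (1 + (y/b)^a)^(p + q)),
  // computed directly on the log scale for numerical stability.
  real GB2_lpdf(real y, real a, real b, real p, real q) {
    if (y <= 0)
      return negative_infinity();  // GB2 support is y > 0
    return log(a) + (a * p - 1) * log(y) - a * p * log(b)
           - lbeta(p, q) - (p + q) * log1p(pow(y / b, a));
  }
}
```

In the model block you could then write either beta ~ GB2(a, b, p, q); or, equivalently, target += GB2_lpdf(beta | a, b, p, q);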

Just an aside: I have entertained using the GB2 distribution for scale parameters many times, but never for main/mean parameters. I encourage you to think carefully before using this distribution for a mean parameter.

I’m having some trouble implementing the GB2.lpdf function. I’ve looked at the user-defined functions section on the Stan website, although the syntax is taking some getting used to. It seems like the beta function is a defined function in Stan according to this 30.1 Beta | Stan Functions Reference, but typing B(p,q) in the code gives an error saying that the function B(p,q) is not recognised. This is the code I have thus far. Note that the first shape parameter “a” is always 2 in my case, which is what I have used here.

functions {
  real GB2.lpdf(real y, real b, real p, real q) {
    real ret;
    ret = 2 * y^(2*p - 1) / (1 + (y/b)^2)^(p + q);
    ret = ret / B(p, q);
    ret = log(ret);
    if (y < 0) {
      return 0;
    } else {
      return ret;
    }
  }
}
data {
  int<lower=1> n;  // number of data points
  vector[n] x;     // n x 1 vector of x_i
  vector[n] y;     // n-dimensional response vector
}
parameters {
  real beta;            // regression coefficient
  real<lower=0> sigma;  // dispersion parameter
}
transformed parameters {
  vector[n] theta;
  theta = x * beta;
  real<lower=0> k;
  k = n/(x%*%x)
}
model {
  sigma ~ inv_gamma(0.001, 0.001);
  beta ~ GB2.lpdf(k, 1, 0.5)
  y ~ normal(theta, sigma);
}

Periods are not allowed in Stan identifiers, which is why GB2.lpdf fails to parse; declare the function with an underscore instead:

functions {
  real GB2_lpdf(real y, real b, real p, real q) {...}
}

As stated by @JohnnyZoom4H, you can then use the function either in a sampling statement (using ~) or to increment target. I believe you would need to modify the function to take a vector for y to truly vectorize it and thereby take full advantage of the sampling-statement form. But I could be mistaken.
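For what it’s worth, a vectorized version might look like the following sketch (with the first shape parameter fixed at a = 2, as in your code; untested, so verify against your parameterization):

```stan
functions {
  // Vectorized GB2 log density with a = 2: returns the sum of the
  // log density over all elements of y; b, p, q are scalars.
  real GB2_lpdf(vector y, real b, real p, real q) {
    int n = num_elements(y);
    return n * (log(2) - 2 * p * log(b) - lbeta(p, q))
           + (2 * p - 1) * sum(log(y))
           - (p + q) * sum(log1p(square(y / b)));
  }
}
```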

I’m a bit confused regarding the sampling statement. I want the prior to be on beta, so shouldn’t I write something like

beta~GB2_lpdf(k,1,0.5)

instead of having y on the LHS?

Also, is there a way to simulate a sign inside the Stan code? I want my prior to be the negative of a GB2 with probability 0.5 and the positive of a GB2 with probability 0.5. I tried using the binomial_rng() function to simulate the sign, but that does not seem to work for this particular issue.

Do you have any tips for making the sign of the prior negative 50% of the time and positive the other 50% of the time? Perhaps it’s possible to simulate a Bernoulli random variable which decides the sign of the prior?

Do you mean that you’d like to reflect the distribution across 0? If so, the simplest solution would be to take the absolute value of y in the distribution function and halve the density (subtract log(2) on the log scale).
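A sketch of that reflected density (again with the first shape parameter fixed at a = 2, as in the code posted earlier; double-check against your own parameterization):

```stan
functions {
  // Log density of a GB2 (a = 2) reflected across 0: take |y| and halve
  // the density. The leading log(2) from a = 2 and the -log(2) from
  // halving cancel, so neither appears below.
  real GB2_reflected_lpdf(real y, real b, real p, real q) {
    real z = abs(y);
    return (2 * p - 1) * log(z) - 2 * p * log(b)
           - lbeta(p, q) - (p + q) * log1p(square(z / b));
  }
}
```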

Stan does not allow discrete parameters, so simulating the sign with an _rng call inside the model will not work. Instead, you have to marginalize over the discrete parameter. You can find plenty of information on this in the Stan documentation and on Discourse.
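For this particular case, marginalizing over the sign s in {-1, +1} with equal probabilities gives p(beta) = 0.5 f(beta) + 0.5 f(-beta), which can be written with log_mix. A sketch, assuming a GB2_lpdf that returns negative_infinity outside its support and with b, p, q standing in for your chosen GB2 parameters:

```stan
model {
  // Equal-weight mixture over the latent sign: only one component has
  // positive density for any given beta, so this reduces to the
  // reflected density.
  target += log_mix(0.5,
                    GB2_lpdf(beta | b, p, q),
                    GB2_lpdf(-beta | b, p, q));
}
```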