y(x) = k I_{cx}(a, b), where k, c, a, and b are parameters used for the fit.
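(For reference, outside Stan this curve can be evaluated with scipy, whose betainc is the regularized incomplete beta I_x(a, b); a minimal sketch, with k, c, a, b as the fit parameters from the formula above:)

```python
from scipy.special import betainc  # regularized incomplete beta I_x(a, b)

def curve(x, k, c, a, b):
    # y(x) = k * I_{c*x}(a, b); note c*x must lie in [0, 1] for betainc.
    return k * betainc(a, b, c * x)

# With k = c = 1 the curve is the regularized incomplete beta itself,
# so it rises from 0 at x = 0 to 1 at x = 1.
```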

Is there an existing implementation of this function in Stan? I know there is an inc_beta function, but it doesn't appear to accept vector input. At the moment I'm running my own Metropolis samplers to do this, but I'm trying to generate model weights, and WAIC/LOO seem to be pretty unstable with my homemade samplers.

I have a solution that I think should work: pass each element of my data vector to inc_beta and divide by the complete beta function, which should be inc_beta(a, b, 1). Unfortunately, when I do so I get "initialization failed", even when I initialize at the 'real' parameter values for a test dataset. Any idea why this would be the case? My model is below.

data {
  int<lower=0> N;
  real y[N];
  real x[N];
  real sigma;
}
parameters {
  real<lower=0, upper=1> U;
  real<lower=0> V;
}
transformed parameters {
  real<lower=0> a = U * V;
  real<lower=0> b = V - a;
  real y_pred[N];
  for (n in 1:N) {
    y_pred[n] = inc_beta(a, b, x[n]) / inc_beta(a, b, 1);
  }
}
model {
  U ~ uniform(0, 1);
  V ~ gamma(1, 20);
  y ~ normal(y_pred, sigma);
}
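One thing worth noting about the normalization: if inc_beta is the regularized incomplete beta (as scipy's betainc is), then inc_beta(a, b, 1) is identically 1 for any a, b > 0, so the division is a mathematical no-op, even though Stan still has to differentiate through it at the edge of the function's domain. A quick check with scipy as a stand-in, assuming the two functions agree:

```python
from scipy.special import betainc  # regularized incomplete beta, like Stan's inc_beta

# Shapes as the model builds them: a = U*V, b = V - a, for some made-up U, V.
U, V = 0.3, 0.05
a, b = U * V, V - U * V

# The regularized incomplete beta at x = 1 is identically 1 for a, b > 0,
# so dividing by it cannot change y_pred mathematically.
denominator = betainc(a, b, 1.0)
```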

Lol oops I did a poor job reading your question and the docs.

I guess make sure all the constraints are satisfied.

Is x greater than zero and less than one? If there are values exactly equal to 0 or 1, maybe nudge them strictly inside (0, 1) and see if the error goes away.
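If the data might contain exact 0s or 1s, a cheap way to test this is to clamp them before passing the data in (eps here is an arbitrary margin, just for the experiment):

```python
import numpy as np

eps = 1e-9  # arbitrary small margin, only to test whether the error goes away
x_raw = np.array([0.0, 0.3, 0.7, 1.0])
x = np.clip(x_raw, eps, 1.0 - eps)  # now strictly inside (0, 1)
```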

The data I'm currently testing this on are simply xs generated from a uniform distribution on the range 0 to 1, and ys generated from exactly the model here with known a and b parameters.
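In case it helps to reproduce, a sketch of how such a test set can be simulated (the "known" parameter values and sigma here are made up):

```python
import numpy as np
from scipy.special import betainc  # regularized incomplete beta, like Stan's inc_beta

rng = np.random.default_rng(0)
N = 100
a_true, b_true, sigma = 2.0, 5.0, 0.02  # made-up "known" values for a test run

x = rng.uniform(0.0, 1.0, size=N)                       # xs uniform on [0, 1)
y = betainc(a_true, b_true, x) + rng.normal(0.0, sigma, size=N)  # model plus noise
```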

It seems that dividing by the complete beta function causes the problem: the model runs if I take the division out (although the model is then wrong). I'm not sure why this would be the case, but perhaps it's because the denominator, inc_beta(a, b, 1), is evaluated at the edge of the acceptable range for this function. Interestingly, if I set the third argument to a value close to, but not quite at, 1, I get a different error:
“k (internal counter) exceeded 100000 iterations, hypergeometric function gradient did not converge.”