Hi all,
I’m using 2-dimensional simplexes to model monotonic predictors (as described in this paper by @paul.buerkner). I described the whole model in a previous question. It’s pretty big and efficiency is a concern. My idea is that since a 2-d simplex can be re-expressed as \delta = [\alpha, 1 - \alpha] for some \alpha \in [0, 1], it might be simpler to declare only \alpha as the parameter of interest and treat \delta as an intermediate quantity.
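Just to spell out why I think the two parameterizations are equivalent (my own sketch, identifying \delta_1 with \alpha): the uniform Dirichlet density on a 2-simplex is

\text{Dirichlet}(\delta \mid 1, 1) = \frac{\Gamma(2)}{\Gamma(1)\,\Gamma(1)} \, \delta_1^{0} \, \delta_2^{0} = 1, \qquad \text{Beta}(\alpha \mid 1, 1) = 1.

Since \delta = [\alpha, 1 - \alpha] is just a relabeling of the simplex’s single free coordinate, the two uniform priors induce the same distribution on \delta and no Jacobian adjustment should be needed.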
Basically, a simplified version of my code looks like this:
parameters {
vector[K] beta;
array[K] simplex[2] delta;
}
model {
  // independent uniform priors for all 2-d simplexes
  for (k in 1:K) {
    delta[k] ~ dirichlet(rep_vector(1, 2));
  }
  // Linear predictor
  vector[N] phi = mo(delta, X) * beta;
\\ Rest of the model...
}
Would something like this be more efficient:
parameters {
vector[K] beta;
vector<lower = 0, upper = 1>[K] alpha;
}
model {
  // independent uniform priors, now placed on alpha instead of delta
  array[K] vector[2] delta;
  for (k in 1:K) {
    alpha[k] ~ beta(1, 1);
    delta[k] = [alpha[k], 1 - alpha[k]]';
  }
  // Linear predictor
  vector[N] phi = mo(delta, X) * beta;
\\ Rest of the model...
}
Note: the mo() function applies the monotonic transform to each column of X. It could probably also be optimized, but I want to keep this post short.
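For concreteness, here is roughly the kind of thing I mean by mo() (a hypothetical sketch, not my actual implementation — it assumes X holds integer category codes in 0:2 and that the transform is the cumulative sum of delta[k] up to the observed category, as in the paper):

functions {
  // Cumulative-sum monotonic transform applied columnwise:
  // mo(delta, 0) = 0, mo(delta, 1) = delta[1], mo(delta, 2) = delta[1] + delta[2] = 1
  matrix mo(array[] vector delta, array[,] int X) {
    int N = dims(X)[1];
    int K = dims(X)[2];
    matrix[N, K] out;
    for (k in 1:K) {
      for (n in 1:N) {
        out[n, k] = X[n, k] == 0 ? 0 : sum(delta[k][1:X[n, k]]);
      }
    }
    return out;
  }
}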