In general, priors for ordered parameters are tricky and have not been researched thoroughly, even in the unconstrained case.
The implicit prior for a model like
parameters {
  ordered[10] x;
}
model {
}
is uniform on the smallest value and then uniform on the differences between the remaining values. Placing a prior on the ordered parameters themselves, rather than on their differences, is non-generative and typically introduces some weird behavior. For example, while one might expect
parameters {
  ordered[10] x;
}
model {
  x ~ normal(0, 1);
}
to simply contain the parameters in the interval [-3, 3] or thereabouts, it actually interacts with the ordering constraint to enforce a sort of uniform repulsion between the interior points, resulting in very rigid differences.
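One way to see this rigidity empirically: a normal density applied to an ordered vector is equivalent to the distribution of the order statistics of independent normal draws, so we can probe the implied behavior with a quick simulation. This is a hypothetical Python sketch, not part of the original model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sorting iid normal(0, 1) draws reproduces the distribution implied by
# a normal prior on an ordered[10] vector.
draws = np.sort(rng.normal(0.0, 1.0, size=(100000, 10)), axis=1)

# The gaps between adjacent components concentrate around fixed values
# rather than spreading freely, which is the "uniform repulsion" at work.
gaps = np.diff(draws, axis=1)
print(gaps.mean(axis=0))
print(gaps.std(axis=0))
```

The average gaps come out nearly constant across the interior of the vector, quantifying how tightly the ordering constraint and the normal density together pin down the differences.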
Really, the challenge here is figuring out the context in which one's domain expertise manifests, and this is usually a generative context. For example, let's consider ordered parameters that are additionally constrained to the unit interval, as in the first post.
Now we need to maintain ordering and the interval constraint in addition to incorporating principled domain expertise, which is awkward unless we can find some generative model that naturally incorporates these constraints. Fortunately we have one for this case – the stick-breaking process.
parameters {
  real<lower=0, upper=1> cond_probs[10];
}
transformed parameters {
  real<lower=0, upper=1> rev_ordered_probs[10];
  rev_ordered_probs[1] = cond_probs[1];
  for (n in 2:10)
    rev_ordered_probs[n] = cond_probs[n] * rev_ordered_probs[n - 1];
}
model {
  cond_probs ~ beta(1, 1);
}
Here we allocate some of the unit interval to the first ordered probability, then we allocate some of the remaining unit interval to the second ordered probability, and then recurse. We can also think of this as writing each ordered probability as a product of conditional probabilities,

p[n] = cond_probs[n] * p[n - 1] = cond_probs[1] * ... * cond_probs[n],

hence the variable names. Because the conditional probabilities are less than one, the ordered probabilities monotonically decrease, creating a reverse ordering which can readily be reversed to give the monotonically increasing ordering. What really makes this approach so useful, however, is that we can often reason about conditional probabilities much more easily, and hence build principled priors for them without too much trouble.
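As a sanity check, here is a small Python sketch (hypothetical, mirroring the Stan program above) that draws the conditional probabilities and builds the reverse-ordered probabilities as a cumulative product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Conditional probabilities c_n ~ beta(1, 1), i.e. uniform on the unit interval.
cond_probs = rng.beta(1.0, 1.0, size=10)

# rev_ordered_probs[n] = cond_probs[n] * rev_ordered_probs[n - 1],
# which is just a cumulative product; each factor is below one, so the
# sequence is monotonically decreasing and stays in the unit interval.
rev_ordered_probs = np.cumprod(cond_probs)

# Reverse to recover the monotonically increasing ordering.
ordered_probs = rev_ordered_probs[::-1]
```

Both the unit-interval constraint and the ordering fall out of the construction automatically, which is exactly why a prior placed on cond_probs remains generative.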