Thanks for the reply. That \mathbb{R}^K \to \mathbb{R}^K mapping makes sense, and it’s invertible because one element of the simplex \pi is the reciprocal of the sum \Gamma := \sum_{i=1}^K \gamma_{i}. I’m still not sure I understand the Jacobian, though; maybe I just need remedial calculus?
I’ve been using a different approach based on simplex amalgamation. If K_{0} is the length of the largest simplex you’ll need, you can declare an array of length-K_{0} simplexes and form a smaller one, say of length K, by amalgamating the last (or first) K_{0} - K + 1 elements. An appropriate Dirichlet prior on the full-length simplex ensures that the amalgamated simplex is uniform, which is what I want. Here’s a toy example with a single simplex (there’s a sketch of the array version at the end of this post):
data {
  int<lower=2> K0;              // length of the largest (original) simplex
  int<lower=1, upper=K0> K;     // length of the amalgamated simplex
  array[K] int<lower=0> x;      // multinomial counts
}
parameters {
  simplex[K0] gamma;            // full-length simplex
}
transformed parameters {
  // keep the first K-1 elements of gamma; amalgamate the last K0-K+1 into a single element
  vector<lower=0>[K] pi = append_row(head(gamma, K - 1),
                                     sum(tail(gamma, K0 - K + 1)));
}
model {
  // prior on gamma that implies pi ~ Dir(rep_vector(1, K))
  gamma ~ dirichlet(append_row(rep_vector(1, K - 1),
                               rep_vector(1.0 / (K0 - K + 1), K0 - K + 1)));
  // likelihood
  x ~ multinomial(pi);
}
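Why that prior works: by the aggregation property of the Dirichlet, if \gamma \sim \text{Dirichlet}(\alpha_1, \ldots, \alpha_{K_0}) then

(\gamma_1, \ldots, \gamma_{K-1}, \sum_{j=K}^{K_0} \gamma_j) \sim \text{Dirichlet}(\alpha_1, \ldots, \alpha_{K-1}, \sum_{j=K}^{K_0} \alpha_j),

so taking \alpha_j = 1 for j < K and \alpha_j = 1/(K_0 - K + 1) for the amalgamated block makes the last concentration parameter equal 1, i.e. \pi \sim \text{Dirichlet}(1, \ldots, 1), uniform on the K-simplex.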
The K_{0} - K + 1 amalgamated elements of \gamma are not individually identified (the likelihood only sees their sum), but they stay bounded because the Dirichlet prior keeps them on the simplex. Seems to work fine in practice, and I find it easier to wrap my head around (YMMV).
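For the array-of-simplexes case mentioned above, here’s a rough, untested sketch of how I’d lay it out. The data layout is just an assumption for illustration: N rows, each needing a simplex of length K[n], with counts and the transformed simplexes zero-padded out to K0 so everything stays rectangular; only the first K[n] entries of pi[n] are meaningful.

data {
  int<lower=1> N;                      // number of rows, each with its own simplex
  int<lower=2> K0;                     // length of the largest simplex needed
  array[N] int<lower=1, upper=K0> K;   // per-row length of the amalgamated simplex
  array[N, K0] int<lower=0> x;         // counts, zero-padded beyond K[n]
}
parameters {
  array[N] simplex[K0] gamma;          // one full-length simplex per row
}
transformed parameters {
  array[N] vector[K0] pi;              // amalgamated simplexes, zero-padded beyond K[n]
  for (n in 1:N)
    pi[n] = append_row(append_row(head(gamma[n], K[n] - 1),
                                  sum(tail(gamma[n], K0 - K[n] + 1))),
                       rep_vector(0, K0 - K[n]));
}
model {
  for (n in 1:N) {
    // same aggregation trick: implies head(pi[n], K[n]) ~ Dir(rep_vector(1, K[n]))
    gamma[n] ~ dirichlet(append_row(rep_vector(1, K[n] - 1),
                                    rep_vector(1.0 / (K0 - K[n] + 1), K0 - K[n] + 1)));
    x[n, 1:K[n]] ~ multinomial(head(pi[n], K[n]));
  }
}

The zero-padding is just the usual rectangular workaround for ragged structures; the padded entries never touch the likelihood.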