If I’m not mistaken, you are referring to the fact that `prop` must have `K-1` degrees of freedom. I have implemented the non-overparametrised model, but we still get non-convergence.

Is it possible we are making some trivial mistake in the implementation of the Jacobian?

```
functions {
  vector sum_to_zero(vector v) {
    int K = rows(v) + 1;
    vector[K] v_0;
    v_0[1:(K - 1)] = v;
    v_0[K] = -sum(v_0[1:(K - 1)]);
    return v_0;
  }
}
transformed data {
  int K = 4;
  vector[K] alpha = [1, 2, 3, 4]';
  matrix[K, K] ident = diag_matrix(rep_vector(1, K));
}
parameters {
  vector[K - 1] prop;
}
transformed parameters {
  simplex[K] prop_soft = softmax(sum_to_zero(prop));
}
model {
  prop_soft ~ dirichlet(alpha);
  target += log_determinant(diag_matrix(prop_soft) - prop_soft * prop_soft');
}
```
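One thing I noticed in a quick sanity check outside Stan (a NumPy sketch, not part of the model): the full `K x K` matrix `diag(prop_soft) - prop_soft * prop_soft'` is singular, because each of its rows sums to `p_i - p_i * sum(p) = 0`, so its log-determinant should be `-inf`:

```python
import numpy as np

# Any point on the simplex; mirrors prop_soft in the Stan model
p = np.array([0.1, 0.2, 0.3, 0.4])

# The matrix inside log_determinant: diag(p) - p p'
M = np.diag(p) - np.outer(p, p)

print(M.sum(axis=1))     # each row sums to 0 (up to rounding)
print(np.linalg.det(M))  # ~0: the matrix is singular
```

If that is right, the `log_determinant` line would be adding `-inf` (or numerical garbage) to `target` at every leapfrog step, which would explain the non-convergence.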

After all, if I ignore the Jacobian the model “kind of” works (obviously with some divergences):

```
functions {
  vector sum_to_zero(vector v) {
    int K = rows(v) + 1;
    vector[K] v_0;
    v_0[1:(K - 1)] = v;
    v_0[K] = -sum(v_0[1:(K - 1)]);
    return v_0;
  }
}
transformed data {
  int K = 4;
  vector[K] alpha = [1, 2, 3, 4]';
}
parameters {
  vector[K - 1] prop;
}
transformed parameters {
  simplex[K] prop_soft = softmax(sum_to_zero(prop));
}
model {
  prop_soft ~ dirichlet(alpha);
}
```
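For what it’s worth, I also tried to check by finite differences what the Jacobian of `v -> softmax(sum_to_zero(v))[1:(K-1)]` should be. The analytic formula `J[i, j] = p_i * (delta_ij - p_j + p_K)` below is my own derivation, so please treat it as an assumption to be double-checked:

```python
import numpy as np

def softmax_sum_to_zero(v):
    # v has K-1 free entries; the K-th is fixed so the argument sums to zero
    v0 = np.append(v, -np.sum(v))
    e = np.exp(v0 - v0.max())
    return e / e.sum()

K = 4
rng = np.random.default_rng(0)
v = rng.normal(size=K - 1)
p = softmax_sum_to_zero(v)

# Finite-difference Jacobian of the first K-1 simplex coordinates w.r.t. v
eps = 1e-6
J_fd = np.zeros((K - 1, K - 1))
for j in range(K - 1):
    dv = np.zeros(K - 1)
    dv[j] = eps
    J_fd[:, j] = (softmax_sum_to_zero(v + dv)[:K - 1]
                  - softmax_sum_to_zero(v - dv)[:K - 1]) / (2 * eps)

# My derivation (an assumption): J[i, j] = p_i * (delta_ij - p_j + p_K)
J_an = (np.diag(p[:K - 1]) - np.outer(p[:K - 1], p[:K - 1])
        + p[K - 1] * np.outer(p[:K - 1], np.ones(K - 1)))

print(np.allclose(J_fd, J_an, atol=1e-6))  # True if the formula is right

# Candidate target += adjustment; if my algebra holds, the determinant
# simplifies to K * prod(p), i.e. log|J| = log(K) + sum(log(p))
sign, logdet = np.linalg.slogdet(J_an)
print(np.isclose(logdet, np.log(K) + np.log(p).sum()))
```

If this checks out, the adjustment would be `target += log(K) + sum(log(prop_soft));` rather than the log-determinant of the full `K x K` matrix, but I may well be missing something.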

P.S. One of the goals of the unconstrained + Jacobian implementation is to allow this model

```
transformed data {
  vector[4] alpha = [1, 2, 3, 4]';
}
parameters {
  simplex[4] prop;
}
model {
  prop ~ dirichlet(alpha);
}
```

to be more amenable to variational Bayes, with the `prop` parameter being more symmetric/normal-like.