Induced Dirichlet Priors: making them weaker

I am using PyStan (for no good reason except that Python was already dominant in my life).
I have copied mindlessly from @betanalpha’s lovely case study Ordinal Regression the

  real induced_dirichlet_lpdf(vector c, vector alpha, real phi) 

and am using it in a mixture model with multiple ordinal logits.

cH ~ induced_dirichlet(rep_vector(1, K), 0);

I have little understanding of anything that’s going on, but I’ve figured out how to do a prior predictive check and I notice that the priors on the cut points look rather tight. In fact, the posteriors for the cut points tend to be further out from 0 (especially on the high side). How do I weaken the priors? I.e., I think the cut point prior distributions should be wider. Is that legitimate logic? Am I feeding back posteriors into priors by wanting to weaken them? The model converges okay; I’m really just hoping to speed it up.

I don’t understand what rep_vector(1, K) and 0 are, really, but I have found that if I subtract 0.5 from rep_vector(1, K) the priors get a bit wider.

Lost deep in the woods,
Chris


Rather tight compared to what?

One of the utilities of the induced Dirichlet prior is that it converts probabilities, which are more straightforward to compare to our domain expertise, into cut points, which are usually much less straightforward to interpret. Because the induced Dirichlet starts from the more interpretable setting, it’s usually sufficient to build a Dirichlet prior model on the category probabilities and accept whatever cut point behavior is needed to achieve it.
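To make that conversion concrete, here is a minimal sketch (not code from the case study; K = 5, phi = 0, and a logistic link are assumed purely for illustration) of how one set of category probabilities determines the cut points:

    import numpy as np
    from scipy.special import logit

    # With a logistic link, the category probabilities p determine the cut
    # points through c_k = phi + logit(p_1 + ... + p_k).
    phi = 0.0
    p = np.array([0.2, 0.2, 0.2, 0.2, 0.2])   # five equal category probabilities
    c = phi + logit(np.cumsum(p)[:-1])        # the K - 1 induced cut points
    print(c)                                  # roughly [-1.39, -0.41, 0.41, 1.39]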

If the range of prior distributions over the category probabilities given by the Dirichlet looks reasonable then the corresponding cut point behavior is probably okay.
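A rough sketch of that check (K = 5 and alpha = 1 assumed here for illustration) is to look at the Dirichlet prior over the category probabilities directly:

    import numpy as np

    # Per-category probability ranges under Dirichlet(alpha); compare these
    # to your domain expertise about how observations should spread across
    # the ordinal categories.
    rng = np.random.default_rng(1)
    p = rng.dirichlet(np.full(5, 1.0), size=100_000)
    print(np.percentile(p, [5, 50, 95], axis=0))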

If you want a weaker prior for the cut points then you’ll need to weaken the latent Dirichlet prior. This can be achieved by reducing the value of the elements in alpha. It may help to consult some references on the Dirichlet family to build some intuition about alpha, for example Dirichlet distribution - Wikipedia.
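As a rough illustration of the effect (again a sketch with K = 5 and phi = 0 assumed, not code from the case study), drawing category probabilities from the Dirichlet and mapping them to the cut points they induce shows how smaller alpha spreads the cut points out:

    import numpy as np
    from scipy.special import logit

    rng = np.random.default_rng(1)
    K, phi, n = 5, 0.0, 100_000

    for a in (1.0, 0.5):
        p = rng.dirichlet(np.full(K, a), size=n)         # category probabilities
        c = phi + logit(np.cumsum(p, axis=1)[:, :-1])    # induced cut points
        print(a, np.percentile(c[:, -1], [5, 95]))       # spread of the top cut point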

Weakening the prior is unlikely to speed up the model – it might make the model more compatible with your actual domain expertise and that might improve inferences, but in most cases the resulting posterior distribution shouldn’t be any easier to fit.