Just to be clear, you shouldn't think of Stan as drawing numbers from random number generators anywhere. You write down a log density proportional to the log density you want to sample from, and Stan generates draws from that. It does work on an unconstrained space and transform to the constrained space, though.

For the parameters:

```
parameters {
  ordered[K - 1] tau;
}
model {
  tau ~ student_t(3, 0, 5);
}
```

the way to think about this is: first the constraint, then the distribution.

So `tau` is a random variable constrained to live in `ordered[K - 1]` space, and Stan will generate draws via MCMC from:

p(\tau) \propto \text{student\_t}(\tau \mid 3, 0, 5), \quad \tau \text{ ordered}

The student_t distribution itself is defined on an unconstrained space, so the fact that you're constraining `tau` makes this a bit weird to talk about (because your distribution here is constrained, so it's not really a student_t).

The example to think about is if you have a constrained variable and put a normal on it:

```
parameters {
  real<lower = 0.0> x;
}
model {
  x ~ normal(1, 1);
}
```

So `x` doesn't have a normal distribution here: you have a variable `x` constrained to be positive, and the actual distribution of `x` in this case is the density of `normal(1, 1)` above zero, re-normalized to integrate to 1. Does that make sense?
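A quick numeric sketch of that renormalization (using scipy, not anything Stan itself does): the density of `x` is the `normal(1, 1)` density divided by the probability mass above zero, which is exactly what a truncated normal is.

```python
import numpy as np
from scipy import stats

# normal(1, 1) with a lower bound at 0:
mu, sigma, lower = 1.0, 1.0, 0.0

# Probability mass of normal(1, 1) above zero -- the renormalizing constant.
norm_const = 1.0 - stats.norm.cdf(lower, loc=mu, scale=sigma)  # about 0.84

# Renormalized density at a few points:
xs = np.array([0.5, 1.0, 2.0])
truncated_pdf = stats.norm.pdf(xs, loc=mu, scale=sigma) / norm_const

# scipy's truncnorm computes the same thing directly:
a = (lower - mu) / sigma  # standardized lower bound
reference = stats.truncnorm.pdf(xs, a, np.inf, loc=mu, scale=sigma)
print(np.allclose(truncated_pdf, reference))  # True
```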

In this case you could think about drawing from a `normal(1, 1)` and rejection sampling to get `x`. That doesn't work in your model above, because more than just the prior affects what `tau` is (there is the likelihood as well), so the rejection-sampling picture breaks.
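For the prior-only case, though, the rejection-sampling picture is easy to check numerically, again just as an illustration outside of Stan:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw from normal(1, 1) and throw away everything at or below zero:
draws = rng.normal(1.0, 1.0, size=200_000)
accepted = draws[draws > 0.0]

# About 84% of draws survive (P(X > 0) for normal(1, 1)), and what is
# left is distributed as the renormalized, truncated prior on x.
print(accepted.size / draws.size)  # roughly 0.84
print(accepted.mean())             # roughly 1.29, the truncated-normal mean
```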

The way Stan generates these draws is MCMC on an unconstrained space, so it isn't working with `x` or `tau` directly but with something that transforms to `x` and `tau` in a way that gives the expected distributions.
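To make "something that transforms to `x` and `tau`" concrete, here is a sketch of the two constraining transforms involved, per the Stan reference manual: exp for a lower bound, and a cumulative-exp map for ordered vectors. (This is only the deterministic transform; Stan additionally adds the log Jacobian of each transform to the target density so the constrained variables get the distributions you wrote down.)

```python
import numpy as np

def lower_bound_constrain(y, lower=0.0):
    """Map an unconstrained real to (lower, inf), as for real<lower = ...>."""
    return lower + np.exp(y)

def ordered_constrain(y):
    """Map an unconstrained vector in R^K to a strictly ordered vector.
    The first element passes through; each later element adds exp() of
    its unconstrained value, so consecutive differences stay positive."""
    steps = np.empty_like(y, dtype=float)
    steps[0] = y[0]
    steps[1:] = np.exp(y[1:])
    return np.cumsum(steps)

y = np.array([-0.3, 1.2, -2.0])  # arbitrary unconstrained point
tau = ordered_constrain(y)
print(tau)                        # strictly increasing
print(lower_bound_constrain(-5))  # small but positive
```

The sampler moves around in `y`-space freely; the constraints can never be violated because the transform itself enforces them.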

It’s a bit confusing to put it all together, and a prior justified this way is also somewhat unconvincing. You might instead try to justify your priors by generating prior predictive draws and talking about why those are sane.
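For this particular prior there's a handy trick for eyeballing what it implies without running Stan: with an i.i.d. symmetric prior like `student_t(3, 0, 5)` on an ordered vector, the constrained density is proportional to the product of the marginal densities restricted to the ordered region, which is exactly the density of the order statistics. So sorting independent student_t draws gives draws from the constrained prior. (`K` below is a hypothetical number of categories, just for illustration.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
K = 5  # hypothetical: K - 1 = 4 cutpoints

# Sorting i.i.d. student_t(3, 0, 5) draws row-wise yields draws from
# the ordered-constrained prior on tau (the order-statistics trick).
raw = stats.t.rvs(df=3, loc=0, scale=5, size=(10_000, K - 1), random_state=rng)
tau_prior = np.sort(raw, axis=1)

# Implied prior on each cutpoint -- 5%, 50%, 95% quantiles per position:
print(np.percentile(tau_prior, [5, 50, 95], axis=0))
```

If those implied cutpoint locations look reasonable for your outcome scale, that's a much more direct argument for the prior than reasoning about the constrained density.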