# Non-centered Reparameterization for Truncated Normal Distribution?

I'm having trouble understanding the non-centered reparameterization of a model that assumes truncated normal errors
(i.e., `Y ~ normal(mu, sigma) T[0, 1]`).
On the Internet and in the Stan manual I found many examples of non-centered reparameterization for the normal distribution, but as far as I know none of them covers the truncated normal case.

Can anyone tell me about coding non-centered reparameterization for the truncated normal distribution?
Or is there a good example or something like a template?
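To make the setup concrete, here is how I simulate data from this truncated normal in Python (a sketch using scipy's `truncnorm`; the values of `mu` and `sigma` are arbitrary test inputs, and note that scipy takes the truncation bounds on the standardized scale):

```python
import numpy as np
from scipy.stats import truncnorm

mu, sigma = 0.3, 0.2  # arbitrary example values

# scipy parameterizes the truncation bounds on the standardized scale:
# a = (lower - mu) / sigma, b = (upper - mu) / sigma
a, b = (0 - mu) / sigma, (1 - mu) / sigma

rng = np.random.default_rng(0)
y = truncnorm.rvs(a, b, loc=mu, scale=sigma, size=1000, random_state=rng)
# every draw falls inside the truncation interval [0, 1]
```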

Hi Taoka,

It looks like in this case the truncated distribution is the distribution for the data `y` and not the distribution for parameters. The non-centered parameterization is used if you have a hierarchical prior distribution on parameters. So, for example, if `mu` in your model was a vector and it had a prior with unknown prior standard deviation, then you could use non-centered parameterization for `mu`. But it’s not something that you need to do in the model for the outcome `y`. Does that make sense? Let me know if that was unclear and I should add more details.

Sorry. I’m a complete beginner at reparameterization, and I probably confused the terms.
I would like to know the reparameterization for a model like the one below:

``````
data {
  int N;
  vector<lower=0, upper=1>[N] Y;
  vector[N] X;
}
parameters {
  real beta;
  real<lower=0> sigma;
}
transformed parameters {
  vector[N] mu = beta * X;
}
model {
  // likelihood: truncated normal on [0, 1]
  // (the normalization is applied per observation, since mu varies with n)
  for (n in 1:N) {
    target += normal_lpdf(Y[n] | mu[n], sigma)
              - log_diff_exp(normal_lcdf(1 | mu[n], sigma),
                             normal_lcdf(0 | mu[n], sigma));
  }
}
``````
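As a sanity check on the likelihood line, the `normal_lpdf` term minus the `log_diff_exp` normalization should match a reference truncated-normal log density. A quick sketch in Python against scipy's `truncnorm` (the values of `mu`, `sigma`, and `y` are arbitrary test inputs):

```python
import numpy as np
from scipy.stats import norm, truncnorm

mu, sigma, y = 0.4, 0.3, 0.25  # arbitrary example values

# manual truncated-normal log density, mirroring the Stan target += line:
# normal_lpdf(y | mu, sigma) - log(Phi((1 - mu)/sigma) - Phi((0 - mu)/sigma))
lp_manual = (norm.logpdf(y, loc=mu, scale=sigma)
             - np.log(norm.cdf(1, loc=mu, scale=sigma)
                      - norm.cdf(0, loc=mu, scale=sigma)))

# scipy's reference implementation (truncation bounds on the standardized scale)
a, b = (0 - mu) / sigma, (1 - mu) / sigma
lp_ref = truncnorm.logpdf(y, a, b, loc=mu, scale=sigma)
```

The two quantities agree up to floating-point error.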

No problem at all!

In the model you wrote I don’t really see an opportunity for anything like the non-centered parameterization. There’s no hierarchical prior (i.e., `beta` and `sigma` don’t have distributions that depend on other parameters), which is fine. If, for example, there were multiple `beta`s whose prior distribution depended on a shared hierarchical standard deviation parameter, then you could do a non-centered reparameterization, but that structure isn’t present here.

Just to give you an example that might help clarify, here’s a canonical example where a non-centered reparameterization is often useful. The key is that `theta` has a hierarchical prior that depends on `tau`:

``````
...
parameters {
  vector[J] theta;
  real<lower=0> tau;
  real<lower=0> sigma;
}
model {
  y ~ normal(theta, sigma);
  theta ~ normal(0, tau);
  ...
}
``````

In this model the prior distribution for `theta` depends on a hierarchical standard deviation parameter `tau`, which is itself an unknown parameter, so we can use a non-centered reparameterization:

``````
...
parameters {
  vector[J] theta_raw;
  real<lower=0> tau;
  real<lower=0> sigma;
}
transformed parameters {
  vector[J] theta = theta_raw * tau; // could also shift by some mean, but I'm assuming mean 0
}
model {
  y ~ normal(theta, sigma);
  theta_raw ~ normal(0, 1); // implies theta ~ normal(0, tau)
  ...
}
``````
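The comment in the last line above is easy to check by simulation: scaling standard-normal draws by `tau` yields draws distributed as `normal(0, tau)`. A quick sketch in Python, outside Stan (the value of `tau`, sample size, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 2.5  # arbitrary example value

# non-centered: sample theta_raw ~ normal(0, 1), then scale by tau
theta_raw = rng.standard_normal(100_000)
theta_noncentered = tau * theta_raw

# centered: sample theta ~ normal(0, tau) directly
theta_centered = rng.normal(0.0, tau, size=100_000)

# both samples have (approximately) standard deviation tau
```

The point of the non-centered version is that the sampler explores `theta_raw` and `tau` on independent unit scales, which avoids the funnel geometry that arises when `theta` and `tau` are sampled jointly.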

Thanks, jonah.
I appreciate your help, and I now have some sense of when to use the non-centered reparameterization trick.
As I understand it, it is effective when we assume the parameters of the model's error distribution are themselves generated under the influence of other parameters (a hierarchical structure on the mean, the variance, etc.).

However, given your advice, it seems that I need a different approach to the problem I am facing. Or it could be a problem with Stan’s MCMC sampling.

When I fit a regression model in Stan with truncated Gaussian errors to data with very small variance, many divergent transitions occur and parameter recovery is noticeably biased.
At first I thought this was a problem specific to my own model, but the same problem occurred when I tried a very simple regression model with simulated data.

This is probably off-topic for this thread, so I will post it as a new topic.