Hi
I wonder what the correct way is to reparametrize a posterior with a Cauchy prior (and a Gaussian likelihood) in order to help Stan achieve better mixing of the chains. Currently I have a model with the following code (for simplicity, we can assume some_function(w) = Mw, where M is a constant matrix):
data {
  int N;
  vector[N] y;
}
parameters {
  vector[N] w;
}
transformed parameters {
  vector[N] f;
  f = some_function(w);
}
model {
  for (n in 1:N) {
    w[n] ~ cauchy(0, 1);
  }
  for (n in 1:N) {
    target += normal_lpdf(f[n] - y[n] | 0, 0.5);
  }
}
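As a side note on the likelihood lines above: because the normal density is symmetric and depends only on the difference between its argument and its location, the increment normal_lpdf(f[n] - y[n] | 0, 0.5) is the same as the more conventional normal_lpdf(y[n] | f[n], 0.5). A quick numeric check of this identity (in Python, with made-up values for f and y):

```python
# Check that N(f - y | 0, sigma) == N(y | f, sigma) for the normal log-density.
# The values of f, y, sigma here are arbitrary illustrations.
from scipy.stats import norm

f, y, sigma = 1.3, 0.7, 0.5
lp_residual = norm.logpdf(f - y, loc=0.0, scale=sigma)  # as written in the model
lp_direct = norm.logpdf(y, loc=f, scale=sigma)          # conventional form
print(abs(lp_residual - lp_direct))  # ~0
```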
The question is: how do I accomplish the reparametrization correctly? The Stan documentation suggests using a uniform distribution and applying a tan transformation to each Cauchy variable if one seeks to generate Cauchy random variables efficiently. But in my case, do I need to add the log absolute Jacobians of the tan transformations to target or not, given that I originally placed independent Cauchy priors on the components of the vector w? I think I have to add them, but which posterior does the following modified model then correspond to? In any case, it is a proper distribution:
data {
  int N;
  vector[N] y;
}
parameters {
  vector<lower=-pi()/2, upper=pi()/2>[N] u;
}
transformed parameters {
  vector[N] w;
  vector[N] f;
  for (n in 1:N) {
    w[n] = tan(u[n]);
  }
  f = some_function(w);
}
model {
  for (n in 1:N) {
    u[n] ~ uniform(-pi()/2, pi()/2);
  }
  for (n in 1:N) {
    target += normal_lpdf(f[n] - y[n] | 0, 0.5);
  }
}
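For reference, here is my own numeric check (in Python, not Stan) of the change-of-variables identity behind this reparametrization: if u ~ Uniform(-pi/2, pi/2) and w = tan(u), then log p(w) = log p(u) - log|dw/du| = -log(pi) - log(1 + tan(u)^2), which is exactly the standard Cauchy log-density evaluated at w = tan(u):

```python
# Verify that pushing a Uniform(-pi/2, pi/2) density through w = tan(u)
# yields the standard Cauchy density, via the change-of-variables formula.
import numpy as np
from scipy.stats import cauchy, uniform

u = np.linspace(-1.5, 1.5, 7)   # test points inside (-pi/2, pi/2)
w = np.tan(u)

log_p_u = uniform.logpdf(u, loc=-np.pi / 2, scale=np.pi)  # constant -log(pi)
log_jac = np.log1p(np.tan(u) ** 2)                        # log|dw/du| = log(1 + tan^2 u)
lhs = log_p_u - log_jac                                   # transformed log-density at w
rhs = cauchy.logpdf(w)                                    # standard Cauchy log-density
print(np.max(np.abs(lhs - rhs)))  # ~0
```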