# Asymmetric Gaussian Hierarchical Model

#21

Yes, that helps. The problem you may be having is that your function isn’t going to be differentiable across the threshold of the conditional. That kink can cause problems similar to high curvature for our Hamiltonian simulation, which approximates the trajectory with the gradient-based leapfrog integrator. When the sampler tries to cross the boundary and the scales on the two sides are very different, you can get rejections or divergences.
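To make the threshold concrete, here is a minimal sketch of what such a conditional likelihood might look like. All the names (`x0` for the peak location, `A` for the amplitude, `w_left`/`w_right` for the two widths) are assumptions for illustration, not your actual model:

```stan
model {
  for (n in 1:N) {
    real f;
    // asymmetric Gaussian curve: two halves with different widths
    // joined at the peak x0; the branch condition on x0 is the
    // threshold that can make the target non-smooth in the parameters
    if (x[n] < x0)
      f = A * exp(-square(x[n] - x0) / (2 * square(w_left)));
    else
      f = A * exp(-square(x[n] - x0) / (2 * square(w_right)));
    y[n] ~ normal(f, sigma);
  }
}
```

The two branches agree in value at the boundary, so the curve itself is continuous; the trouble comes from how the branch switch interacts with the gradients the leapfrog integrator relies on.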

It’d be a bit more efficient to have

```stan
y ~ normal(f, sigma[pixel]);
```

outside of the loop.
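For contrast, here is a sketch of the loop form this replaces, assuming `y` and `f` are vectors of length `N` and `pixel` is an integer array indexing into `sigma` (names taken from the snippet above):

```stan
model {
  // loop form: N separate sampling statements
  // for (n in 1:N)
  //   y[n] ~ normal(f[n], sigma[pixel[n]]);

  // vectorized form: one statement; sigma[pixel] uses multi-indexing
  // to gather one scale per observation, and shared terms in the
  // log density are computed once rather than N times
  y ~ normal(f, sigma[pixel]);
}
```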

P.S. Life’s easier with vectors (or arrays):

```stan
parameters {
  vector[6] log_mu;
  vector[4] eta;
  ...
model {
  log_mu ~ normal({-1, -1, -1, 4, 3, 3},
                  { 1,  1,  1, 2, 5, 5});
  eta ~ normal(0, 1);
  ...
```

Or you can write out `log_mu[1] ~ normal(-1, 1);` — not every element of an array/vector needs to get the same prior.
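For instance, you can mix slice-wise and element-wise priors; the values here just echo the ones in the snippet above:

```stan
model {
  // same prior for the first three elements via range indexing ...
  log_mu[1:3] ~ normal(-1, 1);
  // ... and individual priors for the rest
  log_mu[4] ~ normal(4, 2);
  log_mu[5] ~ normal(3, 5);
  log_mu[6] ~ normal(3, 5);
}
```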

#22

Thanks for the reply, Bob. I will give that a shot.

Yes, I was wondering whether the conditional statement in the likelihood would cause problems. The scales should not be too different: the break is at the shared mean of two Gaussian curves that join there to form a single asymmetric curve. I am hoping to use Stan, though, because there is high correlation between some of the parameters, which I think HMC could exploit to sample more efficiently than a series of Gibbs steps. I will also try increasing `adapt_delta` and the initial step size to reduce the number of divergences. Perhaps because of the nonlinearity of the model, I have also noticed that the sampler is sensitive to starting values; the default inits can sometimes lead to nonsensical posteriors. Tightening up the priors should help too, I think.

~Colin