Yes, that helps. The problem you may be having is that your function isn’t differentiable across the threshold of the conditional. That kink can cause problems similar to high curvature for our Hamiltonian simulation, which uses the leapfrog integrator, a simple gradient-based numerical approximation. When the sampler tries to cross the boundary and the scales on either side are very different, you can get rejections or divergences.
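To make the issue concrete, here’s a minimal sketch of the kind of construct that causes it; `tau`, `x`, `mu1`/`mu2`, and `sigma1`/`sigma2` are hypothetical names, with `tau` a parameter, so the branch makes the target non-differentiable (in fact discontinuous) in `tau`:

```stan
model {
  for (n in 1:N) {
    if (x[n] < tau)   // hard cut on a parameter: density has a jump at tau
      y[n] ~ normal(mu1, sigma1);
    else
      y[n] ~ normal(mu2, sigma2);
  }
}
```

One possible workaround is to smooth the boundary with a logistic mixture weight (assuming a smoothing scale `delta > 0`):

```stan
    real w = inv_logit((x[n] - tau) / delta);  // smooth step instead of hard cut
    target += log_mix(w, normal_lpdf(y[n] | mu2, sigma2),
                         normal_lpdf(y[n] | mu1, sigma1));
```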
It’d be a bit more efficient to have
y ~ normal(f, sigma[pixel]);
outside of the loop.
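Concretely, something like this, assuming `f` is (or is collected into) a vector of length N and `pixel` is an int array, so that `sigma[pixel]` picks out the per-observation scales by multiple indexing:

```stan
model {
  vector[N] f;
  for (n in 1:N)
    f[n] = ...;                 // build the mean as before
  y ~ normal(f, sigma[pixel]);  // one vectorized statement, outside the loop
}
```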
P.S. Life’s easier with vectors (or arrays):
parameters {
  vector[6] log_mu;
  vector[4] eta;
  ...
model {
  log_mu ~ normal({-1, -1, -1, 4, 3, 3},
                  { 1,  1,  1, 2, 5, 5});
  eta ~ normal(0, 1);
  ...
Or you can write out log_mu[1] ~ normal(-1, 1); and so on, element by element; not every element of an array/vector needs to get the same prior.
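You can also slice, which sits between the two styles; for example, with the same means and scales as above:

```stan
log_mu[1:3] ~ normal(-1, 1);
log_mu[4]   ~ normal(4, 2);
log_mu[5:6] ~ normal(3, 5);
```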