Why does the scale of the outcome affect model results?

I wrote a model in Stan and it works well if I scale my outcome variables to have mean zero and standard deviation one (all Pareto k estimates are good, < 0.5); however, if I do not standardize my outcome, the model performs poorly (all Pareto k diagnostic values are very bad). I do not understand why the scale of the outcome matters so much. Is this related to my prior distributions, which were normal(0, 1) or cauchy(0, 1)?

There’s a discussion on a previous thread:

It would help clarify the problem if you could include your model (if it’s simple and you can share it), or a simplified version that shows the essential behavior.

I read that post earlier, but I’m still unclear about what to do in my case.
Below is a simplified version of my model. I am wondering how I can fit it well without pre-standardizing my response Y.

model {
 // prior distributions
  sigma_eps ~ cauchy(0, 1); 
  theta_mu ~ normal(0, 1);

  for(k in 1:K){
    Theta[, k] ~ normal(0, 1);
  }

  // likelihood
  for(n in 1:N){
    alpha[n] ~ multi_normal(zero_k, diag_matrix(rep_vector(1,K)));
    Y[n] ~ multi_normal(B[n]*theta_mu + B[n] * Theta * alpha[n], diag_matrix(rep_vector(sigma_eps^2, V[n])));
  }
} 

Forgot to mention that V, B, and Y come from real data.
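A common way to avoid pre-standardizing the response is to express the priors on the scale of the data, so that normal(0, 1) and cauchy(0, 1), which are sensible for a standardized outcome, effectively become normal(0, sd(y)) and cauchy(0, sd(y)). Here is a minimal sketch of that idea on a plain linear regression (not your factor model; the data names and the 2.5 multiplier are illustrative, not taken from your code):

```stan
data {
  int<lower=1> N;
  vector[N] x;
  vector[N] y;
}
transformed data {
  // empirical location and scale of the outcome, used to put
  // the priors on the data's scale instead of assuming sd(y) = 1
  real mu_y = mean(y);
  real sd_y = sd(y);
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;
}
model {
  // same weakly informative shapes as before, rescaled by sd(y);
  // roughly equivalent to standardizing y and using normal(0, 1) / cauchy(0, 1)
  alpha ~ normal(mu_y, 2.5 * sd_y);
  beta ~ normal(0, 2.5 * sd_y);
  sigma ~ cauchy(0, sd_y);
  y ~ normal(alpha + beta * x, sigma);
}
```

The same idea should carry over to your model: scale the priors on theta_mu and sigma_eps by the empirical standard deviation of Y rather than fixing them at unit scale. As an aside, a multi_normal with covariance diag_matrix(rep_vector(sigma_eps^2, V[n])) is equivalent to independent univariate normals, so writing the likelihood as `Y[n] ~ normal(B[n] * theta_mu + B[n] * Theta * alpha[n], sigma_eps);` should sample noticeably faster.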