How to avoid rescaling predictors and outcomes

I wrote a Stan program for a time-series model, and it works very nicely, except that the time and response variables need to be rescaled/standardized before being passed to the model. I use normal(0, 1) and cauchy(0, 1) priors.

I would prefer to obtain the estimated parameters on the original scale, but I found it algebraically challenging to recover them from the standardized fit. So I am wondering whether I should choose wider prior distributions, for example normal(0, 10) and cauchy(0, 5), to avoid standardizing the response variable. I tried this on a small example and it seems to work, but my other question is: if I encounter a new dataset with more extreme response values, do I have to change my priors again in order to avoid standardization? In that case, is standardization a better option than changing the priors from time to time?
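To show what I mean by recovering parameters on the original scale: for a plain linear regression I can do the back-transformation by hand in generated quantities; it is my time-series model where the algebra defeats me. Here is a minimal sketch of that simple case (the regression, the variable names, and the priors are illustrative, not my actual model):

data {
  int<lower=1> n;
  vector[n] x;  // predictor on its original scale
  vector[n] y;  // response on its original scale
}
transformed data {
  // standardize inside the program so the raw data can be passed in
  real mx = mean(x);
  real sx = sd(x);
  real my = mean(y);
  real sy = sd(y);
  vector[n] x_std = (x - mx) / sx;
  vector[n] y_std = (y - my) / sy;
}
parameters {
  real alpha_std;           // intercept on the standardized scale
  real beta_std;            // slope on the standardized scale
  real<lower=0> sigma_std;  // residual sd on the standardized scale
}
model {
  alpha_std ~ normal(0, 1);
  beta_std ~ normal(0, 1);
  sigma_std ~ cauchy(0, 1);  // half-Cauchy via the lower=0 constraint
  y_std ~ normal(alpha_std + beta_std * x_std, sigma_std);
}
generated quantities {
  // undo the standardization so reported parameters are on the original scale
  real beta = beta_std * sy / sx;
  real alpha = my + sy * alpha_std - beta * mx;
  real sigma = sigma_std * sy;
}

For the time-series case I cannot work out the equivalent of the generated quantities block, which is why I am asking about widening the priors instead.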

I am really new to Stan and Bayesian inference, so what I am asking might be naive or mistaken, but I would really appreciate it if anyone could give me some suggestions. Thanks!

I am working away on some time-series models today, using a modified version of the state-space models from State space models in Stan. I am using the model below, with narrow-ish priors, on unscaled data.

data {
  int<lower=1> n;
  vector[n] y;
  vector[n] gwmean;
}
parameters {
  // latent level, constrained to within 3 sd of the data mean
  vector<lower=mean(y) - 3 * sd(y), upper=mean(y) + 3 * sd(y)>[n] mu;
  real<lower=-0.5, upper=0.5> beta;
  positive_ordered[2] sigma;  // sigma[1] (observation) <= sigma[2] (state)
}
transformed parameters {
  vector[n] yhat;
  yhat = mu + beta * gwmean;
}
model {
  mu[2:n] ~ normal(mu[1:(n - 1)], sigma[2]);  // random walk on the latent level
  y ~ normal(yhat, sigma[1]);
  sigma ~ student_t(3, 0, 1);
  beta ~ normal(-0.5, 0.5);
}
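The reason this tolerates unscaled data is that the bounds on mu are built from mean(y) and sd(y), so the implied prior on the level adapts automatically to whatever scale the data arrive on.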

Not sure if this is any help.