Is this parameter transformation a good idea?

I built a model with parameters that have very different orders of magnitude.
Stan has a lot of trouble sampling the original parameterization, so to remedy this I let Stan sample the percentage deviation from my initial guess instead. It seems to work well, but it feels somewhat awkward. I was just wondering whether this approach might be a bad idea for some reason?

My actual model is a bit more complex than this one and has many sampling problems without the transformed parameters, but this serves as an example of what I'm doing:

data {
  int<lower=1> N;
  vector[N] windspeed;
  vector[N] time;
  vector[N] ar;
  vector[3] inits;
}
parameters{
  real p_wind;
  real p_time;
  real<lower=-100> p_sigma;  // keeps th_sigma positive (assuming inits[3] > 0)
}
transformed parameters{
  real th_wind;
  real th_time;
  real th_sigma;
  // parameters are percentage deviations from the initial guesses in inits
  th_wind = (p_wind/100 + 1) * inits[1];
  th_time = (p_time/100 + 1) * inits[2];
  th_sigma = (p_sigma/100 + 1) * inits[3];
}
model {
  vector[N] ar_model;
  for (i in 1:N)
    ar_model[i] = th_wind * windspeed[i]^2 + th_time * time[i];
    
  ar ~ normal(ar_model,th_sigma);
  
  p_wind ~ normal(0,35);
  p_time ~ normal(0,35);
  p_sigma ~ normal(0,35);
}

Alternatively, I guess I could try to choose the units of my data carefully and nondimensionalize the parameters so that they are all of order one.
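
To make that concrete, here is a rough sketch of what I mean (standardizing by sample standard deviations is just one choice, and the names like ws2_std, b_wind, and sd_ar are only illustrative, not part of my actual model):

data {
  int<lower=1> N;
  vector[N] windspeed;
  vector[N] time;
  vector[N] ar;
}
transformed data {
  // rescale the data so the regression coefficients are roughly order one
  real sd_ws2 = sd(square(windspeed));
  real sd_time = sd(time);
  real sd_ar = sd(ar);
  vector[N] ws2_std = square(windspeed) / sd_ws2;
  vector[N] time_std = time / sd_time;
  vector[N] ar_std = ar / sd_ar;
}
parameters {
  real b_wind;
  real b_time;
  real<lower=0> sigma;
}
model {
  b_wind ~ normal(0, 1);
  b_time ~ normal(0, 1);
  sigma ~ normal(0, 1);
  ar_std ~ normal(b_wind * ws2_std + b_time * time_std, sigma);
}
generated quantities {
  // back-transform to the original units
  real th_wind = b_wind * sd_ar / sd_ws2;
  real th_time = b_time * sd_ar / sd_time;
  real th_sigma = sigma * sd_ar;
}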