Imposing restrictions on the norm of a parameter

Hi,

I want to include a vector b in my model that is restricted to satisfy ||b|| <= 1. My initial thought was to parameterize using polar coordinates and then restrict the support of the vector length. However, this approach doesn’t work very well because the posterior of the angles is often not well-behaved. Is there a better way to proceed?
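For concreteness, in the K = 2 case the parameterization I tried looks roughly like this (names are just for illustration):

parameters {
  real<lower=0, upper=1> r;             // length of b, restricted to [0, 1]
  real<lower=-pi(), upper=pi()> theta;  // angle
}
transformed parameters {
  vector[2] b = r * [cos(theta), sin(theta)]';  // ||b|| <= 1 by construction
}

The hard boundary at theta = ±pi is what causes trouble when the posterior mass sits near the wraparound point.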

Thanks!

Do

parameters {
  unit_vector[K] b_raw;            // direction of b, on the unit sphere
  real<lower=0, upper=1> b_scale;  // length of b, in [0, 1]
}
transformed parameters {
  vector[K] b = b_scale * b_raw;   // satisfies ||b|| <= 1 by construction
}

Thanks! Although I think I need

vector[K] b = b_scale * pow(b_scale, 1/K);

to get a uniform distribution over the unit ball.
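To see why the 1/K power is needed: if s ~ uniform(0, 1) and r = s^(1/K), then

Pr(r <= t) = Pr(s <= t^K) = t^K  for 0 <= t <= 1,

which is exactly the fraction of the unit ball's volume lying within radius t, so the length of b has the marginal distribution required for a uniform draw.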

I think the first term needs to be b_raw, because that's what determines the direction. And you're right that you need to adjust for volume if you want a uniform draw in the unit ball.

Warning: 1 / K would use integer arithmetic, which rounds down, so it evaluates to 0 for any K > 1. It should be inv(K), and you should precompute it in transformed data to avoid recalculating it on every log density evaluation.
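Putting the corrections together, a complete sketch (assuming K is declared in the data block; unit_vector requires K >= 2):

data {
  int<lower=2> K;
}
transformed data {
  real inv_K = inv(K);  // compute 1/K once, up front
}
parameters {
  unit_vector[K] b_raw;            // direction, uniform on the unit sphere
  real<lower=0, upper=1> b_scale;  // implicitly uniform(0, 1)
}
transformed parameters {
  // pow(b_scale, inv_K) gives the radius the CDF t^K, which matches
  // a uniform draw from the unit ball
  vector[K] b = b_raw * pow(b_scale, inv_K);
}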