I wondered about exactly the same thing a while ago. It turns out that with a normal prior on the intercept, you can parametrize the model with a sum-to-zero constraint, resolve the identifiability issue and remove one parameter, and then recover exactly the same inferences as you would have gotten with the original model: Correlated posterior - Sum to zero constraint for varying intercepts?! - #24 by martinmodrak
(there are also a lot of interesting ideas from other contributors in that thread)
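To make the idea a bit more concrete, here's a minimal sketch of the kind of reparametrization I have in mind, for a simple one-way varying-intercept model `y_n ~ normal(b0 + r[group[n]], sigma_y)`. It's not the exact code from the linked post, and the `normal(0, 5)` intercept prior, the `normal(0, 2)` scale priors, and all names are just placeholders:

```stan
data {
  int<lower=1> N;                          // number of observations
  int<lower=2> J;                          // number of groups
  array[N] int<lower=1, upper=J> group;    // group index per observation
  vector[N] y;
}
transformed data {
  real prior_sd_b0 = 5;                    // assumed normal(0, 5) prior sd for the original intercept
}
parameters {
  real b0_star;                            // intercept that has absorbed mean(r)
  vector[J - 1] r_raw;                     // J - 1 free parameters instead of J
  real<lower=0> sigma_r;
  real<lower=0> sigma_y;
}
transformed parameters {
  // hard sum-to-zero constraint: last effect is minus the sum of the others
  vector[J] r_c = append_row(r_raw, -sum(r_raw));
}
model {
  // the shifted intercept absorbs mean(r), so its prior sd widens accordingly
  b0_star ~ normal(0, sqrt(square(prior_sd_b0) + square(sigma_r) / J));
  // prior implied on the constrained effects: iid normal(0, sigma_r) projected
  // onto the sum-to-zero hyperplane (the sigma_r-dependent normalisation matters)
  target += -0.5 * dot_self(r_c) / square(sigma_r) - (J - 1) * log(sigma_r);
  sigma_r ~ normal(0, 2);
  sigma_y ~ normal(0, 2);
  y ~ normal(b0_star + r_c[group], sigma_y);
}
generated quantities {
  // recover the original parametrization by re-splitting b0_star into
  // the original intercept and the absorbed mean of the group effects
  real v_b0 = square(prior_sd_b0);
  real v_m = square(sigma_r) / J;
  real m = normal_rng(b0_star * v_m / (v_b0 + v_m),
                      sqrt(v_b0 * v_m / (v_b0 + v_m)));
  real b0 = b0_star - m;                   // original intercept
  vector[J] r = r_c + m;                   // original (unconstrained) group effects
}
```

The generated quantities block is where the "recover exactly the same inferences" part happens: the likelihood never sees the absorbed mean `m`, so conditional on `b0_star` and `sigma_r` it's just a normal draw determined by the priors, and you get the original `b0` and `r` back per iteration.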
The specific way I did it, however, didn't result in improved performance in the cases I cared about, but I hope there's still something to learn from the attempt.
I'll also tag @paul.buerkner, who seemed interested in the previous discussion on this topic.