@JimBob is right about the technical aspects - there is no built-in function, and one of the reasons is that which scaling is reasonable depends heavily on your data and application. There are multiple goals you might want to achieve with scaling, including:
1. Being able to easily set priors
2. Making the model coefficients easier to interpret
3. Making the model coefficients comparable across studies
I would generally advise against scaling by the SD of your data, as that doesn’t really help with goals 1 or 3, and it is questionable whether it helps with 2. In many cases there is some relatively reasonable way to scale the predictors without considering the actual data you collected. E.g. for age it might make sense to subtract 50 and divide by 10, so that your intercept corresponds to the response for a 50-year-old and your coefficient for age corresponds to the change per decade. Presumably a clinician will find it easier to think about how lesions change between people 10 years apart than 1 year apart, and it will prevent your coefficient from being super small.
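To make the age example concrete, here is a minimal sketch in R. The data frame `d` and the column names `lesion_size` and `age` are hypothetical, and the prior scales are placeholders - you would pick values that make sense for your outcome:

```r
library(brms)

# Hypothetical data frame `d` with columns `lesion_size` and `age`.
# Center age at 50 and scale to decades:
d$age_c <- (d$age - 50) / 10

fit <- brm(
  lesion_size ~ age_c,
  data = d,
  prior = c(
    # Intercept: expected response for a 50-year-old
    prior(normal(0, 10), class = "Intercept"),
    # Slope: expected change per decade of age
    prior(normal(0, 5), class = "b", coef = "age_c")
  )
)
```

With this scaling, the priors above are statements about quantities a domain expert can reason about directly (response at age 50, change per decade), rather than about a coefficient whose scale depends on how age happened to be recorded.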
If your data is informative, your inferences should not be affected very much by shifting and scaling the predictors. One way to make this easier to think about is to not inspect model coefficients directly, but rather make predictions, e.g. “what is the difference in average lesion size of type A between 50-year-old patients and 70-year-old patients” - this quantity will not change however you scale your predictors (as long as you also scale your priors accordingly). You can use `posterior_predict` to make such predictions for any comparison you want.
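A sketch of such a comparison, assuming the hypothetical `fit` with the centered-and-scaled `age_c` predictor from above; note that since the question is about *average* lesion size, `posterior_epred` (which predicts the expected value rather than new observations) may be the better choice:

```r
# Predictor values for 50- and 70-year-old patients on the
# transformed scale; the resulting contrast is invariant to
# how age was shifted and scaled.
newdata <- data.frame(age_c = c((50 - 50) / 10, (70 - 50) / 10))

# Posterior draws of the expected response for each age:
epred <- posterior_epred(fit, newdata = newdata)

# Posterior draws of the difference (70-year-olds minus 50-year-olds):
diff_draws <- epred[, 2] - epred[, 1]
quantile(diff_draws, c(0.025, 0.5, 0.975))
```

Swapping in `posterior_predict` gives the analogous comparison for new individual observations, which includes the observation noise.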
As noted on the GitHub thread, I explain brms’s default centering (which affects only how the prior on the intercept is handled) in Brms: input scaling clarification - #3 by martinmodrak
Best of luck with your model!