# How to set autoscaling = FALSE for one parameter, while autoscaling = TRUE for all other parameters

I have been trying to run a Bayesian conditional logistic regression in my PhD project using the rstanarm package. Based on the literature, we have some prior knowledge about the exposure variable but not about the confounders. So we decided to specify an informative prior for the exposure and to use rstanarm's defaults (weakly informative priors, as opposed to uniform priors) for the confounders. In the model, we therefore want `autoscale = FALSE` for the exposure coefficient while leaving `autoscale = TRUE` for all other coefficients. The code I used to try to implement this is as follows (a linear regression on the mtcars data as an example):

```r
# MES and SD are our informative prior mean and SD for the exposure (wt)
prior <- normal(location = c(MES, 0, 0), scale = c(SD, 2.5, 2.5),
                autoscale = c(FALSE, TRUE, TRUE))
mtcars$mpg10 <- mtcars$mpg / 10
fit <- stan_glm(mpg10 ~ wt + cyl + am,
                prior = prior,
                data = mtcars,
                algorithm = "sampling")
```

Sadly, `prior_summary(fit)` shows that this way rstanarm did not autoscale the priors for any of the coefficients (it looks like the `autoscale` argument of `normal()` expects a single logical value rather than a vector).

Does anyone have any experience with this? Any comments/suggestions would be highly appreciated!

After posting the question, I tried another way:

1. Use `stan_glm()` to fit a preliminary model with the default priors, letting rstanarm autoscale them to the data.
2. Extract the autoscaled prior scales with `prior_summary()` and pass them explicitly to a new `stan_glm()` model with `autoscale = FALSE`.

The code is as below:

```r
fit_ini <- stan_glm(mpg10 ~ wt + cyl + am,
                    data = mtcars,
                    algorithm = "sampling")

prior_summary(fit_ini)

# scale = 2 is our informative SD for the exposure; 0.84 and 3.02 are the
# autoscaled scales for the last two parameters, extracted from fit_ini
prior <- normal(location = c(1, 0, 0), scale = c(2, 0.84, 3.02),
                autoscale = FALSE)
post <- stan_glm(mpg10 ~ wt + cyl + am,
                 data = mtcars,
                 prior = prior,
                 algorithm = "sampling")

prior_summary(post)
```
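For what it's worth, the rstanarm prior vignette describes the autoscaling rule for Gaussian models as multiplying each prior scale by `sd(y) / sd(x_k)`. If that holds, the autoscaled values can be reproduced by hand, without fitting a preliminary model just to read them off. A minimal sketch (assuming that rule, with `mpg10` and the 2.5 default scale from the code above):

```r
# Sketch: reproduce rstanarm's autoscaled prior scales by hand,
# assuming the Gaussian-model rule: default scale * sd(y) / sd(x_k)
mtcars$mpg10 <- mtcars$mpg / 10
y  <- mtcars$mpg10
xs <- mtcars[, c("wt", "cyl", "am")]

default_scale <- 2.5
autoscaled <- default_scale * sd(y) / sapply(xs, sd)
round(autoscaled, 2)
#    wt  cyl   am
#  1.54 0.84 3.02
```

The values for `cyl` and `am` match the 0.84 and 3.02 reported by `prior_summary(fit_ini)`, so plugging hand-computed scales into `normal(..., autoscale = FALSE)` should give the same priors as the two-step approach.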

I am not sure if this makes sense! :(

I don't know the answer for rstanarm, but @bgoodri or @Jonah should be able to sort you out.

Also, I changed the topic category to interfaces:rstanarm in the hope that the right people will see this question.

Thanks very much for your help!

Hi @bgoodri or @jonah, may I ask for help with this question? Many thanks in advance!