TL;DR: Why are my random slopes so small when modelled as shown below?

Long version: For a study, I am modelling how far away human participants place an object from its correct location (i.e. my outcome is the Euclidean distance) and my predictor is the number of times my participants have seen/placed that object (scaled to SD = 0.5 and mean of 0), with the obvious expectation that people will get better over time.

Model: `euclideanDistance ~ s_times + (s_times | subject), family = Gamma(link = "log")`

Priors:

```
OLM_7T_m2_prior <- c(prior(normal(4.26123, 0.5), class = "Intercept", lb = 0, ub = log(153)),
                     prior(skew_normal(0.15, 0.5, -5), class = "b"),
                     prior(gamma(1, 1), class = "shape"),
                     prior(student_t(5, 0, 0.5), class = "sd"),
                     prior(student_t(5, 0, 0.5), class = "sd", coef = "s_times", group = "subject"))
```

When looking at the raw data, it is obvious that on average the placement error definitely decreases, but between participants there seems to be quite a lot of variation. In my `brms` model, however, there are only tiny differences between participants with regard to the slope. I want to use the estimated slopes as predictors for further analyses, but the magnitude of the slopes feels misleading (even though statistically they might just be scaled down; see scatter plot).

I have tried various things, and I feel there is something missing in my understanding of how this model works that might be obvious to some forum user here, so I am trying my luck. The priors were chosen based on prior predictive checks, but even if I, for instance, change the width of the random-effects priors (`sd`), the result doesn't really change.
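To illustrate what I mean by widening the `sd` prior, here is one of the variations I tried (a sketch; the scale of 1 instead of 0.5 is just an example), together with a prior-only fit to sanity-check the implied outcomes:

```r
library(brms)

# Same priors as above, but with a wider scale on the group-level SDs
wider_prior <- c(prior(normal(4.26123, 0.5), class = "Intercept", lb = 0, ub = log(153)),
                 prior(skew_normal(0.15, 0.5, -5), class = "b"),
                 prior(gamma(1, 1), class = "shape"),
                 prior(student_t(5, 0, 1), class = "sd"))  # wider than 0.5

# Prior predictive check: sample from the prior only, ignoring the likelihood
prior_fit <- brm(euclideanDistance ~ s_times + (s_times | subject),
                 family = Gamma(link = "log"),
                 data = OLM_7T_retrieval,
                 prior = wider_prior,
                 sample_prior = "only",
                 backend = "cmdstanr")

pp_check(prior_fit)  # compare prior-implied outcomes to the observed distances
```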

Here is a direct comparison between similar models estimated via `glmer` and `brm`. Obviously, they can give different answers, but I wasn't expecting differences of this magnitude, especially since the differences in the fixed effects are tiny.

You can see that for `glmer` (Min = -0.360 & Max = -0.062) some participants nearly have a slope of 0, while with `brm` they're essentially uniform (Min = -0.263 & Max = -0.168). Most of this is simply scaling, as can be seen when I compare both random slope estimates directly.
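For completeness, this is how the subject-specific slopes (fixed effect plus random deviation) can be extracted from both fits for such a comparison (a sketch, using the model objects named above):

```r
library(brms)
library(lme4)

# brms: coef() returns a 3-d array per grouping factor (levels x stats x coefs);
# pick the posterior mean of each subject's total slope
brm_slopes <- coef(OLM_7T_m2)$subject[, "Estimate", "s_times"]

# glmer: coef() returns conditional modes added to the fixed effects
glmer_slopes <- coef(OLM_7T_m2_freq)$subject$s_times

range(brm_slopes)
range(glmer_slopes)
plot(glmer_slopes, brm_slopes)  # visualise the extra shrinkage in the brms fit
```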

So my question is: what in my model or prior structure is causing the slopes to be so small?

Summary for brm model:

```
summary(OLM_7T_m2)
# Family: gamma
# Links: mu = log; shape = identity
# Formula: euclideanDistance ~ s_times + (s_times | subject)
# Data: OLM_7T_retrieval (Number of observations: 2016)
# Draws: 10 chains, each with iter = 30000; warmup = 5000; thin = 1;
# total post-warmup draws = 250000
#
# Group-Level Effects:
# ~subject (Number of levels: 56)
# Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
# sd(Intercept) 0.39 0.04 0.32 0.48 1.00 46992 79373
# sd(s_times) 0.05 0.04 0.00 0.14 1.00 76693 110916
# cor(Intercept,s_times) 0.21 0.46 -0.82 0.94 1.00 241604 138982
#
# Population-Level Effects:
# Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
# Intercept 3.43 0.05 3.32 3.53 1.00 32040 61974
# s_times -0.22 0.03 -0.28 -0.17 1.00 293887 186542
#
# Family Specific Parameters:
# Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
# shape 2.58 0.08 2.43 2.74 1.00 321954 185870
#
# Draws were sampled using sample(hmc). For each parameter, Bulk_ESS
# and Tail_ESS are effective sample size measures, and Rhat is the potential
# scale reduction factor on split chains (at convergence, Rhat = 1).
```

Summary for glmer model:

```
summary(OLM_7T_m2_freq)
# Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
# Family: Gamma ( log )
# Formula: euclideanDistance ~ s_times + (s_times | subject)
# Data: OLM_7T_retrieval
#
# AIC BIC logLik deviance df.resid
# 17126.0 17159.6 -8557.0 17114.0 2010
#
# Scaled residuals:
# Min 1Q Median 3Q Max
# -1.6217 -0.7049 -0.1389 0.4873 5.9048
#
# Random effects:
# Groups Name Variance Std.Dev. Corr
# subject (Intercept) 0.07093 0.2663
# s_times 0.01454 0.1206 0.12
# Residual 0.37196 0.6099
# Number of obs: 2016, groups: subject, 56
#
# Fixed effects:
# Estimate Std. Error t value Pr(>|z|)
# (Intercept) 3.40418 0.05678 59.949 < 2e-16 ***
# s_times -0.22684 0.03473 -6.531 6.53e-11 ***
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Correlation of Fixed Effects:
# (Intr)
# s_times 0.086
```

# Code for the models

```
# Brms priors
OLM_7T_m2_prior <- c(prior(normal(4.26123, 0.5), class = "Intercept", lb = 0, ub = log(153)),
                     prior(skew_normal(0.15, 0.5, -5), class = "b"),
                     prior(gamma(1, 1), class = "shape"),
                     prior(student_t(5, 0, 0.5), class = "sd"),
                     prior(student_t(5, 0, 0.5), class = "sd", coef = "s_times", group = "subject"))

# Brms model
OLM_7T_m2 <- brm(euclideanDistance ~ s_times + (s_times | subject),
                 family = Gamma(link = "log"),
                 data = OLM_7T_retrieval,
                 prior = OLM_7T_m2_prior,
                 iter = iter,
                 warmup = warmup,
                 chains = chains,
                 cores = cores,
                 seed = 1124,
                 control = list(adapt_delta = 0.9),
                 backend = "cmdstanr")

# glmer model
OLM_7T_m2_freq <- glmer(euclideanDistance ~ s_times + (s_times | subject),
                        family = Gamma(link = "log"),
                        data = OLM_7T_retrieval)
```