brms: Modeling slope effects to measure individual consistency

Hello all,

I am trying to fit a brm model with some pretty sparse data; I cannot collect more data, so please refrain from that suggestion. Because the sample size is small, I have convergence issues with complex models. What I have is 25 subjects (ID), each with two Trials (Trial). Trials are of varying duration (Duration), which means that the Behaviors (Grim) vary with both Trial and Duration. I need to control for Duration and Trial, but what I am really interested in is the ID variable and whether the effect of ID remains consistent across trials. That is, are subject scores repeatable while controlling for a duration effect and the mean trial effect? What I think is that I should be testing a slope: does the Behavior remain consistent across Trials 1 and 2 within each individual (the slope of each individual’s behavior across trials), while acknowledging that trial Duration influences the likelihood of a behavior occurring and that Trial 2 likely produces a more muted response across all individuals?

I am having trouble parameterizing this in R, though. Essentially, I have n=1 per ID/Trial class with 25 IDs and 2 Trials per ID, since I was only able to run two Trials per ID. I have been working with variations of this model:

Model <- brm(Grim ~ 1 + Trial + Duration + (1 | ID), data = Rep, iter = 10000, cores = 2, chains = 2, control = list(max_treedepth = 15, adapt_delta = .9999999999999999))

This barely converges (hence the lengthy adapt_delta). If this is right, though, I am unsure which output means what I want. I thought, perhaps, sd_ID_intercept? And if its 95% CI overlaps with 0, then individual IDs’ Behavior does not markedly change across Trials? But what throws me is the intercept suffix on sd_ID. Currently, I do not really care whether IDs differ from each other (between-individual variation), only whether individuals show little within-individual variation (95% CI overlapping 0).

I also wondered whether I could build two models, one with 1|ID and one without, and test whether they differ significantly, but this is hard to evaluate while I am still confused about the parameterization issue above.

Essentially, I am trying to parameterize a Bayesian version of rptR (https://cran.r-project.org/web/packages/rptR/vignettes/rptR.html), but rptR gives me singularity errors. I also wanted to try stan_surv (since Duration is built into that model), but I cannot get the package to compile due to low memory.

Any help would be greatly appreciated - I think it is probably obvious from my writing, but stats are not my major disciplinary focus.

Hi @PritchardAJP. I have a few thoughts. In general, it’s always good to present a reproducible example if you can.

First, the overall model structure you describe (a single-level hierarchical model) is not overly complicated, and should be quite easy for Stan/brms to fit without convergence issues when specified correctly. The fact that you are having so many convergence issues is a sign that the model is misspecified for your data. What is your dependent variable Grim?

Secondly, there are different types of repeatability calculations. Often, intra-class correlations (ICCs) are calculated from hierarchical models using their variance components. For the simplest case of a group-varying intercept, the ICC is the between-group intercept variance divided by the total model variance, which in your brms notation would be something like:

sd_ID_intercept^2/(sd_ID_intercept^2 + residual_sd^2)

From what you’ve written, you could use this to answer your questions about repeatability: this ICC tells you the proportion of total variance attributable to between-subject variation. As a side note, you can’t make inferences on whether sd_ID_intercept includes 0 because, as a standard deviation parameter, it is constrained to be > 0.
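Something like the following would compute it from the posterior draws (an untested sketch, assuming your intercept-only model above is fit as Model with a gaussian outcome, so sigma is the residual SD):

```r
library(brms)

# One row per posterior draw, with the variance components as columns
draws <- as_draws_df(Model)

# ICC: between-ID intercept variance over total variance, per draw
icc <- draws$sd_ID__Intercept^2 /
  (draws$sd_ID__Intercept^2 + draws$sigma^2)

# Posterior median and 95% interval of the repeatability estimate
quantile(icc, probs = c(0.025, 0.5, 0.975))
```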

Thirdly, sd_ID_intercept only tells you the standard deviation of subjects’ behaviours holding Trial and Duration constant. In other words, it is not the slope of each subject’s behaviour over the Trial time points, which is what you might also be interested in. To get these slope parameters, one for each individual, you need to include ‘random slopes’ for Trial using the syntax (1 + Trial | ID).
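In brms, that model would look something like this (a sketch only; the variable names follow your post):

```r
# Random intercepts and random Trial slopes per ID; Trial stays in the
# population-level part so the slopes are deviations from its average effect
Model_slopes <- brm(
  Grim ~ 1 + Trial + Duration + (1 + Trial | ID),
  data = Rep
)
```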

Importantly, adding a random slope makes the estimation and interpretation of repeatability a bit more difficult, because the variance components become Trial-specific: we need to factor in the random slope variance (and the correlation between random intercepts and slopes), and the behaviours now depend on Trial. You therefore get per-trial repeatability estimates, which are harder to work with.
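To make that concrete, with random slopes the between-ID variance at a given Trial value t becomes sd_int^2 + t^2 * sd_slope^2 + 2 * t * cor * sd_int * sd_slope, so a trial-specific ICC could be computed like this (again untested, assuming the Model_slopes fit sketched above):

```r
draws <- as_draws_df(Model_slopes)

# Between-ID variance at Trial = t, including the intercept-slope covariance
icc_at <- function(t) {
  v_id <- draws$sd_ID__Intercept^2 +
    t^2 * draws$sd_ID__Trial^2 +
    2 * t * draws$cor_ID__Intercept__Trial *
      draws$sd_ID__Intercept * draws$sd_ID__Trial
  v_id / (v_id + draws$sigma^2)
}

quantile(icc_at(1), probs = c(0.025, 0.5, 0.975))  # repeatability at Trial 1
quantile(icc_at(2), probs = c(0.025, 0.5, 0.975))  # repeatability at Trial 2
```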

Instead of repeatability, you might look at fitting a model with random slopes and then making predictions from the posterior distribution for each subject at each trial point. You could then subtract these predictions for each subject to quantify how different they are: if the distribution of a subject’s differences sits clearly away from 0 (or some other meaningful value), that subject shows a bigger change between time points. Because you only have 25 individuals, this approach lets you look closely at each individual’s behaviours across trials. However, keep in mind that random effects exhibit partial pooling (essentially, regression to the mean), so the predictions and raw behaviours for each subject may differ.
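A rough sketch of that approach (untested; it assumes the Model_slopes fit from above and holds Duration at its mean):

```r
# Expected behaviour for every ID at each Trial, Duration held at its mean
newdata <- expand.grid(
  ID = unique(Rep$ID),
  Trial = c(1, 2),
  Duration = mean(Rep$Duration)
)
epred <- posterior_epred(Model_slopes, newdata = newdata)

# expand.grid varies ID fastest, so the Trial 1 and Trial 2 columns
# line up in matching ID order
diffs <- epred[, newdata$Trial == 2] - epred[, newdata$Trial == 1]

# Per-ID posterior summaries of the Trial 2 minus Trial 1 change
t(apply(diffs, 2, quantile, probs = c(0.025, 0.5, 0.975)))
```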

I’m sure brms experts on here can help you with the code to make predictions if you can provide a reproducible example.

Thanks very much, cgoold! I think you articulated some of the nuances I was struggling with in my model. I am still getting a slew of errors, though: divergent transitions, low Bayesian Fraction of Missing Information, and low bulk & tail ESS. I apologize for the long post.

I took your advice and added the random slope, even though it’s harder to interpret, and ran:

Model <- brm(Grim ~ 1 + Dur + (1 + Trial | ID), data = Rep2, iter = 10000, cores = 4, chains = 4, control = list(max_treedepth = 15, adapt_delta = .9999999999999999))

I am now getting a strong correlation between the random intercepts and Trial slopes within ID. I tried rescaling, as I read that can help, but it is still an issue (output below).
```
 Family: gaussian
  Links: mu = identity; sigma = identity
Formula: Grim ~ 1 + Dur + (1 + Trial | ID)
   Data: Rep2 (Number of observations: 50)
Samples: 4 chains, each with iter = 10000; warmup = 5000; thin = 1;
         total post-warmup samples = 20000

Group-Level Effects:
~ID (Number of levels: 25)
                     Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)            2.69      0.40     2.02     3.57 1.00     2414     6153
sd(Trial)                1.31      0.20     0.97     1.75 1.00     2115     5965
cor(Intercept,Trial)    -0.99      0.01    -1.00    -0.98 1.00     1137     3899

Population-Level Effects:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept    -0.30      0.09    -0.49    -0.12 1.00     4479     5092
Dur           0.09      0.06    -0.04     0.22 1.00     5492     2821

Family Specific Parameters:
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     0.22      0.11     0.04     0.45 1.01      240      143

Warning messages:
1: There were 251 divergent transitions after warmup. Increasing adapt_delta above 1 may help. See
   Runtime warnings and convergence problems
2: There were 4 chains where the estimated Bayesian Fraction of Missing Information was low. See
   Runtime warnings and convergence problems
3: Examine the pairs() plot to diagnose sampling problems
4: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
   Running the chains for more iterations may help. See
   Runtime warnings and convergence problems
5: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
   Running the chains for more iterations may help. See
   Runtime warnings and convergence problems
```


Hi @PritchardAJP!

First of all, your model formula should also include Trial in the ‘fixed effect’ part, i.e.

brm(Grim ~ 1 + Dur + Trial + (1 + Trial | ID), data = Rep2)

However, I don’t think that will solve your convergence issues. There’s likely something else going on with the model that needs to be addressed.

How much time do you have to spend on this analysis? I would strongly recommend you pick up a good multi-level modelling book (e.g. Gelman and Hill’s textbook http://www.stat.columbia.edu/~gelman/arm/), or Richard McElreath’s Statistical Rethinking (and YouTube lectures), or even have a look at webpages like University of Bristol’s multi-level modelling project (e.g. http://www.bristol.ac.uk/cmm/learning/videos/random-intercepts.html). These resources will help improve your understanding of what random intercepts and slopes mean.

For people to help you here, I’d recommend attaching your data file as a csv/txt file so people don’t have to copy and paste data manually, along with the exact code you’re using to work with it. Alternatively, simulating some data is helpful. I appreciate that data simulation is not part of everyone’s skill set, but it really is a valuable tool: you can check that an analysis works, and understand it, before touching your own data. There are quite a few data simulation examples online for your type of model; a sketch follows below.
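For instance, a simulation for your design might look something like this (all parameter values here are made up for illustration):

```r
set.seed(1)
n_id <- 25

# 25 IDs x 2 trials, matching the design in the original post
sim <- expand.grid(ID = factor(1:n_id), Trial = c(1, 2))
sim$Dur <- runif(nrow(sim), 5, 15)       # illustrative trial durations

u_int   <- rnorm(n_id, 0, 1.0)           # per-ID intercept deviations
u_slope <- rnorm(n_id, 0, 0.5)           # per-ID Trial slope deviations

sim$Grim <- (0.5 + u_int[sim$ID]) +
  (-0.3 + u_slope[sim$ID]) * sim$Trial +
  0.1 * sim$Dur +
  rnorm(nrow(sim), 0, 0.5)               # residual noise

# Fit the proposed model to the simulated data first, to check that it
# converges and recovers the known parameters
fit_sim <- brm(Grim ~ 1 + Dur + Trial + (1 + Trial | ID), data = sim)
```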

Repeatability is not that easy to understand when you get into random slope-type models. The approach using posterior predictions is quite intuitive, but I’d try to read a bit on multi-level modelling first and get your understanding up before diving into it.


Thanks for the advice! I will do that, since it’s clear I am out of my depth with brms.