Hello. I have a question about autocorrelation in a time series. To give you some background, I am using logistic regression to investigate what predicts the occurrence of a behavior. I am using the packages brms and rstan in R and have fit the model as follows:
m1 <- brm(
  bf(behavior ~ s(x1) + s(x2) + s(x3) + (1 + x3 | community)),
  data = data, family = "bernoulli", prior = prior, iter = 6000,
  control = list(adapt_delta = 0.99999999999999, max_treedepth = 10)
)
where x1, x2 and x3 are all time-series data (z-transformed for the model); x3 is also fit as a random slope by community. I had to increase adapt_delta because I was getting divergent transitions.
After running the model, I get the following output when I plot the ACF of the MCMC draws for my predictor variables (population-level effects):
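(To clarify what I am plotting: I mean the autocorrelation of the posterior draws for those coefficients, i.e. something along the lines of the bayesplot call below, not autocorrelation in the raw data.)

library(bayesplot)
# ACF of the posterior draws for the population-level ("b_") coefficients;
# the exact parameter names can be checked with variables(m1) or summary(m1)
mcmc_acf(as.array(m1), regex_pars = "^b_", lags = 20)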
Should I be concerned about the pattern for the intercept? If so, how would I address this? I am unsure whether adding an ARIMA term would be appropriate. Any advice (or pointers to good sources for reading) would be really appreciated. Thanks!
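P.S. In case it clarifies what I mean by the ARIMA part: my understanding is that brms can include an autoregressive term directly in the model formula. A rough sketch of what I have in mind (I have not run this; the time column is hypothetical, and I believe non-gaussian families need cov = TRUE for AR terms):

m1_ar <- brm(
  bf(behavior ~ s(x1) + s(x2) + s(x3) + (1 + x3 | community) +
       ar(time = time, gr = community, p = 1, cov = TRUE)),
  data = data, family = "bernoulli", prior = prior, iter = 6000
)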
Yikes. Do you have a principled reason for setting adapt_delta so high? If it’s because you were getting divergences otherwise: we now generally recommend that increasing adapt_delta should be a last resort. Higher-priority troubleshooting steps include looking at pairs plots to discern where the model is going awry, which can motivate changes to the model structure that are far more likely to yield accurate inference than merely pumping up adapt_delta.
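Something like the following is what I have in mind for the pairs plots, with divergent transitions highlighted (the parameter selection below is just a placeholder; pick whichever parameters look suspicious in your summary, e.g. the group-level standard deviations):

library(bayesplot)
np <- nuts_params(m1)  # extract sampler diagnostics, including divergences
mcmc_pairs(
  as.array(m1),
  regex_pars = c("^b_Intercept$", "^sd_"),  # placeholder parameter selection
  np = np                                   # marks divergent iterations
)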
That aside, I presume the Rhat for that parameter is high? If it is still below 1.01, then I have seen slight autocorrelation in the intercept when it is not well constrained by either the data or the priors.
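If that does turn out to be the issue, one inexpensive thing to try is an explicit weakly-informative prior on the intercept; the scale below is purely illustrative (on the logit scale, normal(0, 1.5) is already fairly permissive):

# inspect which priors the model expects / is currently using
get_prior(bf(behavior ~ s(x1) + s(x2) + s(x3) + (1 + x3 | community)),
          data = data, family = bernoulli())

# add an explicit intercept prior to the existing prior object
prior <- c(prior, set_prior("normal(0, 1.5)", class = "Intercept"))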
Hi Mike. Thanks for replying! Yes, I was getting some divergent transitions, and someone with much more experience than me recommended increasing adapt_delta to deal with them. However, I will look into your recommendation of examining the pairs plots to see how I can better address the problem.
All of the Rhat values = 1.00 (for the intercept and all predictor variables).
To make sure I am understanding correctly: you are suggesting that the intercept is not well-constrained by either the data or the priors and this is why I am getting the autocorrelation? Do you have any recommendations on how to address this issue?
I’d say your primary problem is the divergences you were getting with the default adapt_delta; it’s not worth looking at anything else until you can work out why those are occurring. I wouldn’t be surprised if solving them also solves your autocorrelation issue.
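As a concrete first step, I’d refit with the default control settings and simply tally the divergent transitions before changing anything else; a sketch:

# refit with default sampler settings (adapt_delta = 0.8 is the default)
m1_default <- update(m1, control = list(adapt_delta = 0.8, max_treedepth = 10))

# quick check / count of divergent transitions
rstan::check_divergences(m1_default$fit)

# or tally them per chain via bayesplot's nuts_params()
np <- nuts_params(m1_default)
aggregate(Value ~ Chain, data = subset(np, Parameter == "divergent__"), FUN = sum)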