An explicit prior on R2 counteracts the tendency of the implied prior on R2 to concentrate near 1 as more predictors are added, which can occur when using other priors (e.g. independent Gaussians, Minnesota, or the regularised horseshoe). This inflation of R2 can lead to overfitting. The figure below shows the stark contrast between the prior densities (lines) for the ARR2 and the other priors, and the resulting posterior distributions (histograms).
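To give a rough intuition in my own notation (not taken verbatim from the paper): with standardised predictors and independent coefficient priors $\beta_j \sim \mathrm{N}(0, \tau^2)$, the implied prior on the proportion of variance explained behaves approximately as

$$
R^2 \;=\; \frac{\operatorname{Var}\!\big(\textstyle\sum_{j=1}^{p} \beta_j x_j\big)}{\operatorname{Var}\!\big(\textstyle\sum_{j=1}^{p} \beta_j x_j\big) + \sigma^2}
\;\approx\; \frac{p\,\tau^2}{p\,\tau^2 + \sigma^2} \;\longrightarrow\; 1 \quad \text{as } p \to \infty,
$$

so the prior mass on R2 drifts towards 1 as predictors are added. An R2-based construction instead places the prior directly on $R^2$ (e.g. a Beta prior) and splits the implied total variance across the coefficients, so the prior on R2 stays put regardless of the number of predictors.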
We derive the prior for AR, ARX, and state space models and evaluate it on simulated and real data examples. Performance is favourable compared to the commonly used independent Gaussian, Minnesota, and regularised horseshoe priors.
Stan code is included in the appendix, and the full study code is available on GitHub.
Thanks for sharing! One suggestion: in the repo with the Stan code, write the ARR2 prior as a function in the functions block. When it is spread across the transformed parameters and model blocks, it is more difficult to see the prior as a self-contained piece of code, and one needs to read the paper and extract the relevant bits from the current code.
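Something along these lines is what I have in mind; the function name, signature, and exact scaling below are my own illustrative sketch of an R2D2-style construction for an AR(p) model, not the paper's code:

```stan
functions {
  // Illustrative ARR2-style log prior for the AR coefficients.
  // The scaling follows the generic R2D2 idea (a Beta prior on R2,
  // split over lags via a simplex); the paper's exact ARR2 scaling
  // may differ, so treat this as a sketch only.
  real arr2_lpdf(vector phi, real R2, vector psi, real sigma, real var_y) {
    real tau2 = R2 / (1 - R2);  // signal-to-noise ratio implied by R2
    return normal_lpdf(phi | 0, sqrt(square(sigma) * tau2 * psi / var_y));
  }
}
data {
  int<lower=1> T;                  // number of observations
  int<lower=1> p;                  // AR order
  vector[T] y;                     // observed series
  real<lower=0, upper=1> mean_R2;  // prior mean of R2
  real<lower=0> prec_R2;           // prior precision of R2
  vector<lower=0>[p] alpha;        // Dirichlet concentration over lags
}
transformed data {
  real var_y = variance(y);        // sample variance used to scale the prior
}
parameters {
  vector[p] phi;                   // AR coefficients, phi[k] for lag k
  real<lower=0> sigma;             // innovation standard deviation
  real<lower=0, upper=1> R2;       // proportion of variance explained
  simplex[p] psi;                  // split of explained variance over lags
}
model {
  // The whole ARR2-style prior is now visible in one place.
  R2 ~ beta(mean_R2 * prec_R2, (1 - mean_R2) * prec_R2);
  psi ~ dirichlet(alpha);
  phi ~ arr2(R2, psi, sigma, var_y);
  sigma ~ normal(0, 1);            // illustrative weakly informative prior

  // AR(p) likelihood, conditioning on the first p observations.
  for (t in (p + 1):T) {
    y[t] ~ normal(dot_product(phi, reverse(segment(y, t - p, p))), sigma);
  }
}
```

With the prior wrapped in a function like arr2_lpdf, the model block only needs a single sampling statement for the coefficients, and the same function could be reused across the AR, ARX, and state space variants.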
The revised version of the paper adds 1) results on the implied priors on characteristic roots and partial autocorrelations, 2) a case study with quasi-cyclical behaviour, and 3) extensions to MA, ARMA, ARDL, and VAR models. The updated version appeared on arXiv today and has also been accepted for publication in the Bayesian Analysis journal.