Repeated K-fold Cross Validation

Hi everyone,

First of all, thank you to the developers of brms for creating such an accessible package for Bayesian analyses!

I am currently attempting to compare 9 models using K-fold cross-validation (K=10), because LOO yields a high number of Pareto k estimates > 0.7.

Presumably because of the random assignment of observations to folds, the ranked order of the models changes from one run of the K-fold validation to another (examples attached, where g.m1 is the simplest model). I was therefore wondering whether there is a way to “average” across multiple K-fold model comparisons in brms, which I believe is a process known as “repeated K-fold cross-validation” in other statistical software. I should note that, in most cases, the simplest model is not significantly different from the “best”-performing model. Ideally, my goal is to present an averaged ELPD_diff and SE_diff model comparison to readers. If there is no way to average across K-fold comparisons, are there any other recommended approaches for dealing with the randomness in the ELPD ordering of the models?

Thanks again and please let me know if more information is required!

Example 1 [image]
Example 2 [image]


You can use the kfold helper functions to make the data division once and then use the same division for all comparisons. If you think the results are too sensitive to one particular split of the data, you can generate several random splits with the helpers and run kfold with each split, but you then need to do the averaging of the pointwise elpd values yourself (they are stored in kfoldobject$pointwise). By creating a new kfold object with the averaged pointwise results, you can then use loo_compare() to compute elpd_diff and se_diff (or you can write your own script for that).
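
For example, a minimal sketch of the first approach (untested; folds_fixed, kf_m1, and kf_m2 are just illustrative names, and kfold_split_random() is the helper from the loo package):

library(brms)
library(loo)

# Fix one random assignment of observations to K = 10 folds and reuse it for
# every model, so that elpd differences are not driven by different splits.
N <- nrow(model_1$data)
folds_fixed <- kfold_split_random(K = 10, N = N)

kf_m1 <- kfold(model_1, K = 10, folds = folds_fixed)
kf_m2 <- kfold(model_2, K = 10, folds = folds_fixed)

loo_compare(kf_m1, kf_m2)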

Thanks for your quick response Aki, and sorry for my delay in getting back to you!

I am now in the process of re-running the K-fold cross-validation using the same data division for each model, as per your first recommendation. I am hoping that by using the same division from the kfold helper (which I was not doing before) for every model, it will not be necessary to conduct repeated K-fold.

However, I would just like to clarify what you mean by averaging the pointwise elpd values. If I understand you correctly, in order to obtain an averaged ELPD for an example model (here, model 1) using just two repeats (in reality this would be 5-10), I would:

Create two random data splits using the k-fold helper function: split_1 and split_2

Perform 10-fold cross-validation with each split:

model_1_split_1 <- kfold(model_1, K=10, folds=split_1)
model_1_split_2 <- kfold(model_1, K=10, folds=split_2)

To obtain the averaged pointwise ELPD for model_1, I would simply average the pointwise estimates across the repeats. This is the part I just wanted to make sure about.

model_1_average_pointwise <- apply(cbind(model_1_split_1$pointwise[,1], model_1_split_2$pointwise[,1]), 1, mean)
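
And then, if I have understood correctly, I would put these averaged pointwise values back into a kfold object (and recompute the summary estimates) before passing it to loo_compare(), something like the untested sketch below (model_1_kfold_avg is just an illustrative name, and I am assuming the first pointwise column and first estimates row are the elpd, as above):

# Reuse an existing kfold object as a template, replace its pointwise elpd
# values with the repeat-averaged ones, and recompute the overall estimate
# and its standard error (SE of the sum = sqrt(N) * sd of pointwise values).
model_1_kfold_avg <- model_1_split_1
model_1_kfold_avg$pointwise[, 1] <- model_1_average_pointwise
model_1_kfold_avg$estimates[1, "Estimate"] <- sum(model_1_average_pointwise)
model_1_kfold_avg$estimates[1, "SE"] <-
  sqrt(length(model_1_average_pointwise)) * sd(model_1_average_pointwise)

# After doing the same for the other models:
# loo_compare(model_1_kfold_avg, model_2_kfold_avg, ...)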

Thanks in advance!


Yes, looks correct.

Thanks again, this has been extremely helpful!
