Sounds good. I am planning to look at the rate of selecting the correct model at different SE thresholds, for example 1 SE, 2 SE, 3 SE, 4 SE, and 5 SE. I was originally planning to stop at 3 SE, but given this information I might go up to 5 SE. This would be for both the LOO and WAIC comparisons.
The idea is that if the ratio for the model comparison (the elpd difference over its SE) passes the threshold, we say the difference is meaningful enough to call one model "better".
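A minimal sketch of that decision rule, assuming the pointwise elpd contributions for both models are available (the function names here are mine, not from any particular library):

```python
import numpy as np

def elpd_diff_and_se(elpd_a, elpd_b):
    """Total elpd difference between two models and its standard error.

    elpd_a, elpd_b: pointwise elpd contributions, one value per observation.
    The SE of the difference uses the usual sqrt(n * var(pointwise diff)) formula.
    """
    diff = np.asarray(elpd_a) - np.asarray(elpd_b)
    n = diff.size
    return diff.sum(), np.sqrt(n * np.var(diff, ddof=1))

def is_meaningful(total_diff, se, k):
    """Declare model A 'better' if its elpd gain exceeds k standard errors."""
    return total_diff > k * se
```

The same rule applies to WAIC; only the pointwise contributions change.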
I also plan to look at the approximate log-Bayes factor as a comparison method.
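The source does not say which approximation is intended; one common choice is the Schwarz (BIC) approximation, sketched here under that assumption:

```python
import math

def approx_log_bayes_factor(loglik_1, k1, loglik_2, k2, n):
    """Approximate log Bayes factor of model 1 over model 2 via BIC:
    log BF_12 ~ (BIC_2 - BIC_1) / 2, with BIC = -2*loglik + k*log(n),
    where k is the parameter count and n the sample size.
    """
    bic1 = -2.0 * loglik_1 + k1 * math.log(n)
    bic2 = -2.0 * loglik_2 + k2 * math.log(n)
    return (bic2 - bic1) / 2.0
```

Positive values favour model 1; a threshold on this quantity can be evaluated the same way as the SE-ratio thresholds above.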
Another idea was to run an ROC analysis to find the ratio that gives the best trade-off between sensitivity and specificity of model selection.
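That ROC scan could look something like the sketch below, assuming you have simulated SE ratios under two scenarios: cases where one model is truly better and cases where the models are equivalent. It picks the threshold maximizing Youden's J (sensitivity + specificity - 1); the inputs are illustrative:

```python
import numpy as np

def best_threshold(ratios_better, ratios_null, thresholds):
    """Scan candidate SE-ratio thresholds.

    sensitivity: fraction of truly-better cases whose ratio passes the threshold.
    specificity: fraction of equivalent-model cases correctly not selected.
    Returns the threshold maximizing Youden's J and that J value.
    """
    ratios_better = np.asarray(ratios_better)
    ratios_null = np.asarray(ratios_null)
    best_k, best_j = None, -np.inf
    for k in thresholds:
        sens = np.mean(ratios_better > k)
        spec = np.mean(ratios_null <= k)
        j = sens + spec - 1.0
        if j > best_j:
            best_k, best_j = k, j
    return best_k, best_j
```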
I am planning to compare all of this to the maximum-likelihood standard of practice, the likelihood ratio test.
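For reference, the classical LRT compares 2*(ll_full - ll_reduced) to a chi-squared distribution with df equal to the difference in parameter counts. A stdlib-only sketch for the common one-extra-parameter case (for df = 1, the chi-squared survival function reduces to erfc(sqrt(x/2))):

```python
import math

def lrt_df1(loglik_reduced, loglik_full):
    """Likelihood ratio test for nested models differing by one parameter.

    Returns the test statistic 2*(ll_full - ll_reduced) and its p-value
    under a chi-squared(1) reference distribution.
    """
    stat = 2.0 * (loglik_full - loglik_reduced)
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value
```

For general df you would use a full chi-squared survival function (e.g. `scipy.stats.chi2.sf`) instead.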
You said it will take you some time to build and test a better SE; do you have some kind of timeline for this? Just so I can take it into consideration for this type of project.