Hi there,
I’ve developed a survival model from aggregated clinical trial data to estimate vaccine efficacy over time. The vaccine may actually enhance disease as antibody levels wane, which is what I want to test in my model.
To do this, I am comparing two nested models: one where we estimate the magnitude of enhancement (through a single parameter L), and one where we assume no enhancement (i.e., set L = 0). In both cases the prior on the enhancement magnitude is effectively a half-normal(0, 1): I either declare L with a lower bound of 0 and a normal(0, 1) prior, or leave L unconstrained with a normal(0, 1) prior and define eff_L = abs(L).
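To make that concrete, here is a toy R sketch (illustrative only, not my actual model code) checking that the two parameterisations induce the same half-normal(0, 1) prior on the enhancement magnitude:

```r
# Toy check (not the trial model): both parameterisations imply the same
# half-normal(0, 1) prior on the enhancement magnitude.
set.seed(1)

# Option 1: L declared with <lower = 0> and a normal(0, 1) prior in Stan,
# i.e. a half-normal(0, 1). Simulated here by |N(0, 1)| draws.
L_bounded <- abs(rnorm(1e5))

# Option 2: unconstrained L ~ normal(0, 1), with eff_L = abs(L).
eff_L <- abs(rnorm(1e5))

# The implied priors on the magnitude agree up to Monte Carlo error.
rbind(
  bounded = quantile(L_bounded, c(0.5, 0.9, 0.99)),
  abs_L   = quantile(eff_L,     c(0.5, 0.9, 0.99))
)
```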
I’m currently using the Savage-Dickey density ratio to calculate the Bayes factor, comparing the prior density of L at 0 to the posterior density at 0. I’m not sure this is valid here, because the nested model fixes the parameter at the lower bound of its range in the more complex model rather than at an interior point.
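For concreteness, here is a minimal sketch of the kind of calculation I mean (illustrative R code, not my exact script; `fit` stands for a cmdstanr fit, and the logspline-based estimate of the posterior density at the boundary is just one possible choice):

```r
# Sketch of the Savage-Dickey ratio described above.
# Assumes `fit` is a cmdstanr fit containing the parameter L (declared
# with <lower = 0>); the logspline package is used here only to estimate
# the posterior density at the boundary value L = 0.
library(logspline)

draws_L <- as.vector(fit$draws("L", format = "matrix"))

# Prior density at L = 0 for a half-normal(0, 1): 2 * dnorm(0).
prior_at_0 <- 2 * dnorm(0)

# Posterior density at L = 0, from a log-spline fit that respects the
# lower bound at 0.
post_at_0 <- dlogspline(0, logspline(draws_L, lbound = 0))

# Savage-Dickey ratio: Bayes factor in favour of the restricted (L = 0) model.
bf_01 <- post_at_0 / prior_at_0
```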
Is there a recommended alternative method for calculating the Bayes factor? N.B., I’m using cmdstanr, so I can’t implement bridge sampling via the bridgesampling package.
Thanks a lot for any thoughts or suggestions!