I could really use your thoughts here. I have a within-subject design where participants made a choice under two conditions (really - as simple as it gets). I’m using brms to model the response, but I keep getting high R-hat values and a correlation of ~1 between the slope and the intercept…
We have ~140 participants with ~800 observations per subject. The independent variable has two levels and is coded as a factor. The dependent variable is binary (0/1).
This is the model we used:
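(The formula block got lost in the paste - something along these lines, guessing the variable names `response`, `reveal`, and `subject` from the `(reveal || subject)` note below:)

```r
library(brms)

# Hypothetical sketch of the model described above: binary response,
# two-level condition factor, and a full by-subject random intercept
# + slope whose correlation is estimated.
fit <- brm(
  response ~ reveal + (reveal | subject),
  data   = d,
  family = bernoulli()
)
```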
and this is the bad output with the high slope-intercept corr:
This is the raw data (yellow points are means calculated outside the model), alongside posterior predictions (posterior_predict) from the badly-sampled model:
This is the correlation in the raw data (no model involved) between level 1 of the independent variable and the difference between level 2 and level 1 - which is as close as I could get to an “intercept-slope” correlation check in the empirical data without any modeling. If anything, it is actually negative:
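Roughly how I computed that empirical check (column names here are my guesses, not the real ones):

```r
library(dplyr)
library(tidyr)

# Per-subject mean response in each condition, then correlate the
# level-1 mean ("intercept") with the level2 - level1 difference
# ("slope"), all on the raw probability scale.
subj_means <- d %>%
  group_by(subject, reveal) %>%
  summarise(p = mean(response), .groups = "drop") %>%
  pivot_wider(names_from = reveal, values_from = p, names_prefix = "lvl")

cor(subj_means$lvl1, subj_means$lvl2 - subj_means$lvl1)
```

Note this is on the probability scale, while the model's intercept-slope correlation lives on the logit scale, so the two aren't directly comparable - but a strong positive correlation should still leave some trace here.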
I have tried playing with the priors and ran prior predictive checks - all seems fine.
Any clues?? What can I look into to try and figure out why this is happening?
BTW - dropping the correlation estimate (i.e., using (reveal || subject)) actually helps in terms of R-hat and ESS, but this feels wrong - I really want to understand why this is happening…
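For reference, the decorrelated version that samples fine (again assuming my variable names):

```r
# Same model, but || drops the intercept-slope correlation from the
# group-level covariance matrix (independent random effects).
fit_nocorr <- brm(
  response ~ reveal + (reveal || subject),
  data   = d,
  family = bernoulli()
)
```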