Hello,
I have a model in which I am trying to estimate the effect of two supposedly independent predictors (one dichotomous and one continuous), e.g. Y ~ A + B. Unfortunately I don't have much data (20 observations), and by chance these two features came out strongly correlated in the sample.
This correlation shows up in the posterior samples: the distribution of the two effects has a correlation of about 0.5–0.6, the credible intervals are very wide, and the Rhat values are quite high.
What would be a good way to manage this? Is it possible to add an estimated correlation between the parameters and force it close to zero with an LKJ prior? Or should I add a second formula, e.g. A ~ B, forcing the coefficient for B close to zero?
Thanks
There is no great way to manage this. If you know a priori that the predictors are not strongly correlated in the population, then collecting a larger sample (if feasible) should enable you to disentangle their effects. Your model seems to be telling you that at the moment you don’t have enough data to separate out the effects of A and B. If you can incorporate strong prior information about either A or B, you might still be able to recover good inference on the other. Otherwise, there’s little to be done here.
You appear to have a couple of understandable misconceptions about modeled correlations and their priors. While tangential to your main question, perhaps you’ll find some of the points below clarifying.
Modeled correlations between parameters have nothing to do with whether parameters are correlated with one another across MCMC iterations; rather, they are models for whether vectors of parameters are correlated with one another within iterations.
Even if you were working with vectors of parameters, setting the modeled correlation to zero does not force the estimates to be uncorrelated. After all, it is possible to draw (by chance) a set of strongly correlated samples from a multivariate normal with zero correlation. Indeed, this is what happened when you obtained your sample of 20 observations: you say that the population-level correlation is zero, but by chance you got a strongly correlated sample.
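To see how easily this happens with n = 20, here is a quick simulation (a sketch in Python/numpy rather than Stan, just to illustrate the point): we repeatedly draw 20 observations from a bivariate normal whose true correlation is exactly zero and look at the sample correlations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_sims = 20, 10_000

# Many samples of 20 observations from two INDEPENDENT standard normals,
# i.e. the population correlation is exactly zero.
samples = rng.standard_normal((n_sims, n_obs, 2))
corrs = np.array([np.corrcoef(s[:, 0], s[:, 1])[0, 1] for s in samples])

# Fraction of samples whose absolute sample correlation exceeds 0.4
frac = np.mean(np.abs(corrs) > 0.4)
print(frac)
```

Even though the population correlation is zero, a non-negligible fraction of 20-observation samples show an absolute correlation above 0.4, which is exactly the "bad luck" scenario described above.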
@jsocolar is right about the conceptual issues, but I'll add that brms supports a QR decomposition of the design matrix for population-level effects via decomp = "QR". This will not let you make stronger inferences, but it should make the model better behaved computationally and thus hopefully get rid of the high Rhat values.
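For intuition about what the QR trick does (a minimal numpy sketch of the general idea, not the actual brms/Stan internals): the design matrix X is factored as X = QR, the sampler works on the orthogonal columns of Q instead of the correlated original predictors, and the coefficients are transformed back afterwards via R.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20

# Two strongly correlated predictors, as in the question
a = rng.standard_normal(n)
b = 0.9 * a + 0.3 * rng.standard_normal(n)
X = np.column_stack([a, b])

# Thin QR decomposition: X = Q @ R, where Q has orthonormal columns
Q, R = np.linalg.qr(X)

print(np.corrcoef(a, b)[0, 1])   # original predictors: strongly correlated
print(abs(Q[:, 0] @ Q[:, 1]))    # Q columns: orthogonal
```

The sampler then sees uncorrelated "predictors", which removes the posterior geometry it was struggling with, without changing the information content of the model.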
Best of luck with your research!
Sounds to me like a conceptual issue, first and foremost. Why are your predictors correlated? Do you need both of them conceptually?

Predictors might also become less correlated when you change the scaling or centering of the variables (this mainly helps when interaction or polynomial terms are involved).

Maybe try using a prior that pulls one of them to zero (e.g. a lasso-type shrinkage prior), leaving you with the better one.

Oddly enough, maybe a more complex model could reduce some of the correlation? For example, if there is confounding going on, adding an additional predictor could help better isolate their unique effects. Though that would require tighter priors, so maybe it is not relevant here.
Well, according to theory and the specialist physicians I'm working with, there should not be any association or selection bias, and we have just 20 patients, so bad luck cannot be excluded. But even if the physicians say this, I cannot exclude that the interaction of the two factors is associated with a higher risk of disease, thereby creating a correlation in the data. Of note, all the patients are affected, and they serve as their own controls (it's a condition of the patient's hand, so the control is the unaffected hand).
At the moment I'm just running a separate regression for each predictor, hoping that my causal model is true and there's no confounding.
But there is confounding in your sample. Even if there is no population-level correlation in the predictors (i.e. your causal model is true), by bad luck you have obtained a sample where the influences of the two predictors are not distinguishable. If you run a regression model including only predictor_1, and you find that it has some effect, you won't know whether this is because predictor_1 has a causal effect or because predictor_1 is a good instrument for predictor_2 in your particular sample. Since your prior model sees both predictor_1 and predictor_2 as plausible predictors of the effect, you are not going to be able to disentangle them in this sample.
I understand what you mean. Just one question: does an instrument just need to be correlated with the predictor of interest, even without a causal relationship between the two?
That said, since I don't have the information to solve the conundrum, is the separate-models solution appropriate, or should I go for finer models like Ilan suggested?
Sorry, I dropped the ball on this a bit. I think the separate model approach is not a good solution. I think it is more honest (and hopefully useful) to use the full model (if you can make it work computationally, e.g. with the QR decomposition) and report that you can’t put tight constraints on the individual effects, but that you learn something about the sum (or other similar combination) of the effects.
Would that make sense?
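To illustrate the "you learn something about a combination of the effects" point, here is a rough numpy sketch using the classical OLS coefficient covariance, sigma^2 (X'X)^{-1}, as a stand-in for the posterior covariance (an assumption for illustration only). With strongly positively correlated predictors, the two coefficient estimates are negatively correlated, so their sum is pinned down much more precisely than either coefficient alone; with negatively correlated predictors, the difference plays this role instead.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 0.9  # small sample, strongly correlated predictors
cov = np.array([[1.0, r], [r, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Classical OLS sampling covariance of the coefficients,
# sigma^2 * (X'X)^{-1}, with sigma^2 = 1 for simplicity.
XtX_inv = np.linalg.inv(X.T @ X)
var_b1 = XtX_inv[0, 0]
var_b2 = XtX_inv[1, 1]

# Variance of the estimate of the SUM b1 + b2: c' (X'X)^{-1} c
c = np.array([1.0, 1.0])
var_sum = c @ XtX_inv @ c

# The off-diagonal term is negative, so var_sum is much smaller
# than the individual coefficient variances.
print(var_b1, var_b2, var_sum)
```

The same logic applies to the full Bayesian model: report wide intervals on the individual effects honestly, alongside the much tighter interval on the well-identified combination.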