Common Cause of Two Variables in brm? (Same IV Effect; Different DVs)

Hi there,

while I found that brms multivariate models can estimate two outcomes from the same data/model (with separate parameter estimates for each outcome, and correlated parameter estimates), my search of the web and the vignettes has not yet revealed whether brm also allows one to directly estimate one and the same parameter (instead of merely correlated ones) for two (multivariate) outcomes. For example:

brm(
  bf(DV1 ~ 1 + b1/2,
     DV2 ~ b1,
     b1 ~ 1,
     nl = TRUE),
  priors etc)

such that I get only 1 estimate for b1?
This is usually referred to as a ‘common cause’ mechanism and is brilliant for ‘fixing’ the model to additional data (reducing its flexibility in finding solutions). It should be part of brms! :)
Is it?
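
For reference, a minimal sketch of what the current multivariate syntax gives (assuming a data frame dat with columns DV1, DV2 and a shared predictor x, all names hypothetical): each response gets its own coefficient, and only the residuals are tied together via set_rescor().

library(brms)

# separate coefficients per response, optionally with correlated residuals
fit_mv <- brm(
  bf(DV1 ~ 1 + x) +
    bf(DV2 ~ 1 + x) +
    set_rescor(TRUE),
  data = dat
)
# summary(fit_mv) reports two separate slopes for x, one per response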

If not, I guess it might work to do it in a multivariate fashion, with some tricky parameter definitions in the two models, and then somehow fix their correlation at 1?
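
One untested workaround sketch, just as a thought (all names below — dat, y, dv, is_dv1, w — are hypothetical): instead of fixing a correlation at 1, stack both outcomes into long format so that a single non-linear parameter is literally shared between the two equations, with the 0.5 weight applied to the DV1 rows and separate residual SDs per outcome.

library(brms)
library(tidyr)

# stack DV1 and DV2 into one response column with an outcome indicator
dat_long <- pivot_longer(dat, cols = c(DV1, DV2),
                         names_to = "dv", values_to = "y")
dat_long$is_dv1 <- as.numeric(dat_long$dv == "DV1")
dat_long$w <- ifelse(dat_long$dv == "DV1", 0.5, 1)  # implements the b1/2 term

fit_shared <- brm(
  bf(y ~ a * is_dv1 + b1 * w,  # b1 is one parameter shared by both outcomes
     a + b1 ~ 1,
     sigma ~ 0 + dv,           # separate residual SD for each outcome
     nl = TRUE),
  data = dat_long,
  prior = c(prior(normal(0, 5), nlpar = "a"),
            prior(normal(0, 5), nlpar = "b1"))
)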

Best, René

It is not yet supported because the non-linear syntax does not span multiple response variables. However, I will make this possible as part of brms 3.0.

Hi Paul.

This sounds great! This will make the package even better than it already is.
I guess this will make direct Stan coding almost obsolete (apart from some special cases).

I am curious (as a non-developer I am not sure whether I am asking for too much; I know all this development comes with a ton of effort, but…)
There is still this debate about how to define priors for mean differences, and I have the feeling that this is not really possible in brms, or at least not convenient (if I could see how to do it, maybe this view would change).
I mean, I know some tricks, like getting ANOVA-style variance estimates:

X = a factor with 3 levels; DV = continuous (replicated responses on each level of X); then:

fit <- brm(DV ~ 1 + (1|subject) + (1|X) ...)
ps <- posterior_samples(fit)
# ratio of the X variance to the subject variance as a rough F-like quantity
# (posterior_samples() names the group-level SDs sd_X__Intercept and sd_subject__Intercept)
approximatedFvalueDistribution <- (ps$sd_X__Intercept^2) / (ps$sd_subject__Intercept^2)
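
The posterior of that ratio can then be summarised directly, e.g.:

quantile(approximatedFvalueDistribution, c(0.025, 0.5, 0.975))
mean(approximatedFvalueDistribution > 1)  # posterior probability that the X variance exceeds the subject variance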

But what about the reverse, something like:

priors…
F_var ~ (some prior distribution which makes sense, and which is actually analysable this way :))
subject_var ~ (some prior distribution)

which is then given to:

fit <- brm(
  bf(DV ~ 1 + (1 | subject_var) + (1 | X_var),
     X_var ~ sqrt(F_var * subject_var^2),
     nl = TRUE))

and then testing the F prior (null hypothesis) against its posterior. One can come up with a similar case for simple mean differences, in which one would start with a prior around 0 to model deviations from a grand mean.
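
(The mean-differences part already seems doable with standard brms syntax; a sketch, assuming X is a factor in a data frame d: with sum-to-zero contrasts the intercept is the grand mean and the remaining coefficients are deviations from it, so a prior centred on 0 expresses “no mean differences”.)

contrasts(d$X) <- contr.sum(nlevels(d$X))  # intercept = grand mean, b's = deviations

fit_dev <- brm(
  DV ~ 1 + X + (1 | subject),
  data = d,
  prior = prior(normal(0, 1), class = "b")  # prior around 0 on the deviations
)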

Is this even possible, or desired, or do you think it would be better to just learn Stan in the near future? :)

Best, René

I would say it is not desired, at least not by me. It is possible to code that in Stan, I would say, but whether it is sensible I cannot tell. Generally, I try to stay away from NHST as much as possible.