Joint Hypothesis test help

Hi, I want to do a joint test like H0: a=b=c=d=0. I have already obtained the posterior distributions of a, b, c, and d from my model. With maximum likelihood this test can be done easily, but I have no idea how to do it in a Bayesian approach. Any ideas on how to do the test? I'd appreciate your help.


Do you really want to know whether \{a, b, c, d\} are exactly 0? If so, draw samples from your posterior distribution and compare them (posterior_predict() in rstanarm or brms). Chances are you will not find them to be exactly zero, but that of course depends on the analysis you're doing.

Of course there is also the Bayes factor, but I'm not the right person to help with that since I avoid it altogether.

Could you provide more context, i.e., what do you want to achieve? What software do you use, e.g., Stan, brms, rstanarm?


Hi @torkar, thanks for responding. Yep, I want to test whether they are exactly 0. a, b, c, and d are dummy variables, like region (south, west, northeast, and northwest) or education (before high school, high school, college, graduate college).
I use RStan to estimate my model. So is posterior_predict() a reliable method? Forgive my ignorance, I am not familiar with it. Is there any difference between a comparison with posterior_predict() and a comparison with WAIC or LOO?

I also thought about the Bayes factor, but I have only seen examples of single tests rather than a joint test.


The posterior_predict() function in rstantools (the generic function for drawing from the posterior predictive distribution) can be used to make predictions about the outcome. But are you perhaps interested in whether your estimates of \{a,b,c,d\} (i.e., they are predictors) are \neq 0? I'm a bit worried since you also mention WAIC and LOO above :)

Well, firstly, I want to test whether they are 0 or not. If they are 0 (fail to reject H0), I am not interested in the estimated coefficients of a, b, c, and d. If they are significantly different from 0, I am interested in the estimates.

If you do a summary() you will see a summary of your estimates, including whether, e.g., the 95% interval covers zero. If it does not, i.e., if the estimate is clearly negative or positive, then you probably have a reason to look further. However, it might still be worthwhile to look at the estimates even if they cross zero, but that all depends on the question you are trying to answer :)
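To make that check concrete, here is a minimal sketch. It assumes you have a matrix of posterior draws (in rstan you can get one with as.matrix(fit)); the draws below are simulated with hypothetical means and spreads purely so the example is self-contained.

```r
# Sketch only: `draws` stands in for a matrix of posterior draws,
# one column per coefficient. Real draws would come from as.matrix(fit).
set.seed(1)
draws <- cbind(
  a = rnorm(4000, 0.10, 0.3),  # hypothetical posterior draws
  b = rnorm(4000, 1.20, 0.3),
  c = rnorm(4000, -0.05, 0.3),
  d = rnorm(4000, 0.80, 0.3)
)

# 95% credible interval per coefficient
ci <- apply(draws, 2, quantile, probs = c(0.025, 0.975))

# TRUE where the interval excludes zero (clearly positive or negative)
excludes_zero <- ci[1, ] > 0 | ci[2, ] < 0
print(ci)
print(excludes_zero)
```

This is the per-coefficient version of the check; it tells you about each parameter marginally, not about the joint hypothesis.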

I understand. But my H0 is a=b=c=d=0. The 95% credible interval only works for a single parameter, correct? Like a: [0.05, 2.13], from which I can say a is significantly different from 0.

Can you paste the output from the summary here? Use three backticks before and after the text you paste, i.e., ```

I do not have the summary in the R console now, but I have pasted a figure of my output, exported from R to Excel. beta8 to beta14 are all dummy variables like a…g.

I replied to a similar question at Model comparisons and point hypotheses - #2 by martinmodrak - does that answer make sense to you in your context? Feel free to ask for clarifications here: this stuff can be quite counter-intuitive…

Best of luck with your research!
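One way to phrase a joint version of such a check (a sketch of the general idea, not a quote from the linked answer) is to compute the posterior probability that all coefficients lie inside a small region of practical equivalence (ROPE) around zero simultaneously. The draws and the eps = 0.1 half-width below are hypothetical placeholders.

```r
set.seed(1)
# Hypothetical posterior draws for four coefficients (one column each);
# in practice these would come from as.matrix(fit) in rstan.
draws <- cbind(
  a = rnorm(4000, 0.05, 0.1),
  b = rnorm(4000, 0.40, 0.1),
  c = rnorm(4000, 0.02, 0.1),
  d = rnorm(4000, 0.30, 0.1)
)

eps <- 0.1  # half-width of the region treated as "practically zero"

# Joint check: in each draw, are ALL four coefficients inside (-eps, eps)?
all_near_zero <- apply(abs(draws) < eps, 1, all)
p_joint <- mean(all_near_zero)
cat("P(all coefficients practically zero) =", p_joint, "\n")
```

Note that this answers "what is the posterior probability that all effects are negligible?" rather than giving a frequentist reject/fail-to-reject decision, and the result depends on the eps you choose.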

Hi @martinmodrak, thanks for responding.
For the loo or waic comparison, I do not think my context fits. The restricted model does not have parameters a to d while the unrestricted model does, so the two models have different numbers of parameters. The number of parameters would affect loo and waic, correct?

For the second method, your example is a single test, like H0: A = B, where A = a - c and B = b - d. Could you please give a joint test example?


B and D seem to be significantly away from zero.

So can I conclude that a, b, …, g are not all equal to 0?

Yes, based on the 95% intervals. However, I would also look at how often one model predicts better than the other; for that you can use posterior predictions.
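As a rough illustration of "how often one is better than the other", here is a self-contained sketch that compares pointwise predictive log densities between a full and a restricted model. Everything is simulated, and plug-in lm() fits stand in for proper posterior predictive draws, just to show the comparison idea.

```r
set.seed(1)
# Hypothetical setup: y depends on x and a dummy d; the "restricted"
# model drops d. All data are simulated for illustration only.
n <- 200
x <- rnorm(n)
d <- rbinom(n, 1, 0.5)
y <- 1 + 0.5 * x + 0.8 * d + rnorm(n)

full       <- lm(y ~ x + d)
restricted <- lm(y ~ x)

# Pointwise log predictive density under each model (plug-in sketch,
# not a full posterior predictive distribution)
lpd <- function(m) dnorm(y, fitted(m), summary(m)$sigma, log = TRUE)

# Fraction of observations where the full model predicts better
better <- mean(lpd(full) > lpd(restricted))
cat("Fraction where the full model predicts better:", better, "\n")
```

With real Stan fits you would instead compute the pointwise log-likelihood from posterior draws and compare models with LOO, but the "how often is one better" logic is the same.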

Cool, I will run the function you suggested. Thanks for your help.

This might be of help: