Please also provide the following information in addition to your question:

Operating System: macOS High Sierra 10.13.6

brms Version: 2.4.4

Hello everyone,

I am looking for a package that allows me to do a Bayesian analysis of a binomial dependent variable full of zeros, with random effects for items and subjects. I looked at some posts in a forum and found two answers suggesting the brms package. These were two of the responses I found:

“You could try ‘brms’ – you may not get a test but you can estimate and interpret the parameter values”
“Yes, brms is the ticket. However, the package places less emphasis on testing, and setting the default prior well (for testing) can take some tinkering”.

I would like to know if anyone here knows what not getting a test means. Also, I have no idea how I will set the priors. In all the literature on studies like mine, they use ANOVAs and treat the dependent variable as continuous, ranging from 0 to the total number of tests/trials.

I believe they mean a test of a null hypothesis that some coefficient is equal to zero, which is not something that most Stan users would think is worth worrying about. But yes, you can fit that model in brms or rstanarm.
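For a binomial outcome with excess zeros and crossed random effects for subjects and items, a brms call along these lines could work. This is a sketch, not the poster's actual analysis: the data frame and the variable names (`correct`, `trials`, `condition`, `subject`, `item`) are placeholders I am assuming, and the model would still need priors chosen for the application.

```r
library(brms)

# Hypothetical data: 'correct' successes out of 'trials' attempts per row,
# with crossed random intercepts for subject and item.
fit <- brm(
  correct | trials(trials) ~ condition + (1 | subject) + (1 | item),
  data   = mydata,
  family = zero_inflated_binomial(),  # binomial likelihood with an extra zero-inflation component
  cores  = 4
)
summary(fit)
```

The `zero_inflated_binomial()` family adds a mixture component for structural zeros on top of the ordinary binomial likelihood, which is usually what "full of zeros" calls for.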

You can do that afterwards using the bridgesampling package, but most people around here would advise you not to. The question behind a Bayes Factor is roughly which prior predictive distribution was most consistent with the observed data (assuming that the prior predictive distribution for one of the models estimated is the true data generating process). The question behind the main functions in the loo package is roughly which posterior predictive distribution is expected to be most consistent with future data (without assuming anything about a true data generating process). The brms package also has a hypothesis function that is not quite a Bayes Factor but analyzes the ratio of the posterior density to the prior density.
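To make the options above concrete, here is roughly how they look in code, assuming a fitted brms model object `fit` with a population-level effect named `conditionB` (both names are illustrative). Note that for `hypothesis()` to compute the evidence ratio for a point hypothesis, the model must have been fit with `sample_prior = "yes"`.

```r
# Posterior/prior density ratio (Savage-Dickey style) for a point hypothesis;
# requires the model to have been fit with sample_prior = "yes".
hypothesis(fit, "conditionB = 0")

# Comparison based on expected out-of-sample predictive accuracy instead:
loo(fit)
```

The two answer different questions, as described above: `hypothesis()` is about evidence for a point value under the chosen prior, while `loo()` is about which model is expected to predict future data better.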

I wanted to know if this package would be useful for me before learning how to use it, because it will take a lot of time. From what you say, I think it will be worth the time because it will help me to answer my research question. Maybe I just need to read a lot more and then I will understand why most Stan users don’t worry about coefficients being equal to zero. Thank you very much for your time and answer.

Neither brms nor rstanarm take very much time to learn how to use, particularly for a relatively simple model like you described. The controversy over Bayes Factors has been going on for close to a century.

I guess that the amount of time needed is proportional to your prior knowledge. Mine is very basic. But thanks for mentioning the controversy, I had no idea. Again, I don’t know much yet.

I was wondering about hypothesis testing in cross-classified multilevel modelling using brms.
It is helpful to know hypothesis testing is not something Stan users worry about … But would you mind providing more “arguments” about this difference? None of my dissertation committee members use Bayesian methods. This is a necessary explanation I should prepare …

It sort of depends on what you mean by “hypothesis” and how you are going about “testing” it.

Frequentists test point-null hypotheses that some parameter is (usually) zero based on some function of the conditional distribution of a statistic given the true parameters (so it is well-defined to condition on a parameter being exactly zero). For many Bayesians, the idea of testing a point-null hypothesis either does not make sense or is not interesting. The posterior density of the parameters given the data has no mass at any particular point (such as zero) so one could say that we know a point-null hypothesis has probability zero of being true irrespective of what the data were. Others would say that it doesn’t matter whether a parameter is “not zero” because it can have a high probability of being “near zero”, which is the same thing in substantive terms.

People like John Kruschke have favored calculating the posterior probability that a parameter is within the Region Of Practical Equivalence (usually containing zero). That is a well-defined thing, but you have to define what that region is on substantive grounds. Others like to calculate the posterior probability that a parameter is positive or negative, which they use to argue that it is unlikely that a parameter has the wrong sign. This seems to be what frequentists want to achieve when they say some estimate is statistically significant with the right sign, but that is not really justified in the frequentist framework.
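Both of those quantities are just proportions of posterior draws, which makes them easy to compute by hand. A minimal sketch, using simulated draws in place of a real posterior sample and a ROPE of (-0.1, 0.1) picked purely for illustration:

```r
# Simulated draws standing in for a posterior sample of one coefficient.
set.seed(1)
draws <- rnorm(4000, mean = 0.15, sd = 0.10)

rope <- c(-0.1, 0.1)  # practical-equivalence region; must be justified substantively

p_in_rope  <- mean(draws > rope[1] & draws < rope[2])  # Kruschke-style ROPE probability
p_positive <- mean(draws > 0)                          # probability of the "right" sign
```

With a real brms fit you would extract the draws for the coefficient of interest from the fitted model object instead of simulating them; the proportions are computed the same way.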

Then there is the question of comparing models or putting posterior probabilities on models via marginal likelihoods and Bayes Factors. It seems obvious to me that if one were going to do something like that, the whole analysis would have to be pre-registered. Otherwise, it is pretty easy to change the priors and data and stuff so that the model you favor has the highest posterior probability. These posterior probabilities on models are all conditional on one of the known models being the correct data-generating process, which is a really strong assumption to make. A lot of people complain that Bayes Factors are sensitive to the priors and in particular to “irrelevant” aspects of the priors (i.e. to aspects that do not have much of an impact on the posterior), but that sensitivity is intended and relevant if your objective is to say which prior predictive distribution the data were most consistent with.