Setting up priors - the practical side - and some clarifications

Hi everyone,

I started this Bayesian trip over a month ago, and it has been a real journey. I started out as an NHST-trained organizational psychologist, so everything was very new to me. I/O psychology has seen virtually no Bayesian empirical papers, or only very few.

I have read pretty much every introductory book and semi-advanced manual, from JASP to brms, and I am pretty close to conducting my first real-world-data analysis, write-up, and so on. I am close, but I still have a few remaining questions that are technical in nature.

Issue at hand: I am predicting the performance of work groups as a function of the personality traits of their members, controlling for the potential effect of group size, and I need parameter estimates.

My model is then:

library(brms)

# Gaussian regression of team performance on team-level predictors
fit1 <-
  brm(data = team.data,
      family = gaussian,
      performance ~ perceived_group_size + machiavellianism,
      warmup = 10000,
      iter = 20000,
      chains = 4,
      cores = 4,
      seed = 1)

The default prior is a Student's t: it has wider tails than the normal distribution, which makes it very useful for modeling small samples or data with outliers (i.e., "robust" analyses; Jebb, 2015, DOI: 10.1177/1094428114553060). Perfect for me, since I have only about 100 teams.
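For reference, get_prior() lists the defaults brms would use; here is how I queried it for my model (same formula, data, and family as above):

# List the default priors brms would assign to each parameter class
get_prior(performance ~ perceived_group_size + machiavellianism,
          data = team.data,
          family = gaussian)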

This, however, won't do 100%. I will need a prior predictive check. This is where I am stumped, as I cannot get my head around how to do this in practice. What hyperparameters should I enter? Do they depend on the scaling of the predictor (Machiavellianism is a Likert scale from 1 to 4), or on the outcome variable (performance is Likert 1-5)? And should I center the predictor?
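From what I could piece together, the mechanics would be something like the sketch below (the specific prior values are placeholders I made up, not recommendations): sample from the priors only and then inspect the implied outcomes with pp_check().

# Sample from the priors only, ignoring the likelihood
fit_prior <-
  brm(data = team.data,
      family = gaussian,
      performance ~ perceived_group_size + machiavellianism,
      prior = c(prior(normal(2, 1), class = Intercept),
                prior(normal(0, 1), class = b),
                prior(cauchy(0, 0.707), class = sigma)),
      sample_prior = "only",
      seed = 1)

# Compare the prior-implied outcomes to the observed data
pp_check(fit_prior, nsamples = 100)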

Also, is normal(2, 1), as used above, a normal distribution with mean 2 and SD 1? And should I center the predictor so that cauchy(0, 0.707) is actually a distribution with location 0 and scale 0.707, or is that done automatically, so that I just enter cauchy(0, 0.707)?
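For the centering part, my understanding is that brms does not transform my predictor columns for me, so I would do it by hand before fitting (a sketch using my variable names):

# Center the predictors by hand so that the intercept refers to an
# average team rather than to impossible zero scores
team.data$machiavellianism_c <-
  team.data$machiavellianism - mean(team.data$machiavellianism)
team.data$perceived_group_size_c <-
  team.data$perceived_group_size - mean(team.data$perceived_group_size)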

# Refit the model with an explicit prior on the intercept
fit2 <-
  update(fit1,
         prior = prior(normal(2, 1), class = Intercept),
         seed = 1)
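And to double-check what actually ended up in the model, I gather prior_summary() prints the priors of a fitted object:

# Show the priors used by the fitted model
prior_summary(fit2)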

And finally, just a minor question for my peace of mind: the JASP vs. brms approach to linear regression. Why does JASP decide by comparing models, while brms is more parameter oriented? As a novice, I find this pretty confusing.
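From my reading, brms can also compare models, e.g. via approximate leave-one-out cross-validation, so maybe it is just a difference in emphasis. A sketch of what I mean (probably naive):

# Parameter-oriented output: posterior summaries
summary(fit1)

# Model-comparison-oriented output: approximate leave-one-out
# cross-validation for two candidate models
loo_compare(loo(fit1), loo(fit2))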

Greetings from quarantine,
George

  • Operating System: WIN10
  • brms Version: 2.12.0

A couple of comments:

  • Indeed, the Student’s t distribution has wider tails, but in order to conduct “robust regression” you’d want it specified as the response distribution, not necessarily as the distribution of the prior of a parameter; i.e., you’d use family = student inside brm() (see the first sketch after this list). It’s important to distinguish these two.

  • If your responses are given on a Likert scale, you may want to use an ordinal model, for example with family = cumulative (second sketch below). This is a good paper on how to fit such models with brms: Bürkner & Vuorre (2018)

  • This preprint may be helpful regarding prior predictive checks, as well as other aspects of the workflow of Bayesian analysis in psychology: Schad, Betancourt, & Vasishth (in press)
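Minimal sketches of both points, reusing the model from the first post (likert.data and response are hypothetical placeholders, not your actual columns):

# (1) Robust regression: Student-t as the response distribution,
#     not as a prior on a parameter
fit_robust <-
  brm(performance ~ perceived_group_size + machiavellianism,
      data = team.data,
      family = student)

# (2) Ordinal regression: the cumulative family models integer
#     Likert responses as ordered categories
fit_ordinal <-
  brm(response ~ perceived_group_size + machiavellianism,
      data = likert.data,   # hypothetical item-level data
      family = cumulative("probit"))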


Thank you for the answer. I’ll check those references out.

family = cumulative will not work here, since it requires integer responses: aggregated means over Likert scales include decimal values, so it throws an error.

Do you still have the original data available? By taking aggregated means you lose information that could be important for your model.

Treating Likert scale data as continuous assumes equal distances between the response categories, which is usually not the case.
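If the individual responses are still available, one option (a sketch with made-up column names) is to model them directly as ordinal data, with a team-level predictor and a group-level intercept to absorb the clustering:

# Individual-level Likert responses, team-level predictor,
# varying intercept per team
fit_ml <-
  brm(item_response ~ machiavellianism_team + (1 | team),
      data = long.data,   # hypothetical individual-level data
      family = cumulative("probit"))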


I have no other options, frankly. The data is clustered, obviously, since we are talking about group-level constructs. However, I am not predicting individual-level (level 1) variables; I am always at level 2: a team-level construct predicted by team characteristics. Therefore, mixed models are out of the question.

I’ll try and give a sample from the data.