Priors for shifted lognormal model

Hello,

I’m trying to set some weakly informative priors on the mu, ndt, and sigma parameters (which use the identity, log, and log link scales, respectively, by default) of the shifted lognormal distribution, but am struggling to understand how to do this. As an example, I have a dependent variable (reaction time, in milliseconds) predicted by 3 categorical independent variables (age group, bias type, trial type), with the model formula below (and a sketch of the corresponding brms call after it):

Formula: rt_ms ~ age_group * bias_type * trial_type + (1 | subject)
ndt ~ age_group * bias_type * trial_type + (1 | subject)
sigma ~ age_group + bias_type + trial_type + (1 | subject)
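
In brms terms, the full call would look something like this (a sketch; dat stands in for my data frame):

library(brms)

# Distributional formula for mu, ndt, and sigma
f <- bf(rt_ms ~ age_group * bias_type * trial_type + (1 | subject),
        ndt ~ age_group * bias_type * trial_type + (1 | subject),
        sigma ~ age_group + bias_type + trial_type + (1 | subject))

fit <- brm(f, data = dat, family = shifted_lognormal())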

The prior summary tells me that there are flat priors on all population-level effects for mu, ndt, and sigma (i.e., class ‘b’), but I would like to set some weakly informative priors.
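
(That summary comes from inspecting the defaults along these lines, reusing the hypothetical f and dat from the sketch above:)

# List the default priors, including the flat class "b" priors
get_prior(f, data = dat, family = shifted_lognormal())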

I’ve been advised that the priors below should work, but I’m trying to gain more insight and understanding regardless of whether they are right or wrong (or somewhere in between).

priors <- c(prior(student_t(3, 0, 2.5), class = "b"),
            prior(student_t(3, 0, 2.5), class = "b", dpar = "ndt"),
            prior(student_t(3, 0, 2.5), class = "b", dpar = "sigma"))
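
In case it’s useful context, my understanding is that priors like these can be sanity-checked with a prior predictive check before fitting (a sketch, again reusing the hypothetical f and dat):

# Sample from the priors alone, ignoring the likelihood
prior_fit <- brm(f, data = dat, family = shifted_lognormal(),
                 prior = priors, sample_prior = "only")

# Do the implied reaction-time distributions look plausible?
pp_check(prior_fit)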

  • Operating System: CentOS Linux release 7 (Core), 64-bit
  • brms Version: 2.16.3

Thank you for reading! Also, pinging @martinmodrak

The priors you’ve been advised to use look like a fine place to start.

Thanks Solomon! :) I’ve given them a try, and while the model now runs more efficiently (i.e., in less time), I’m still having issues with the model not converging. For reference, I’m running 8 chains with 6,000 total iterations and 1,000 warm-up iterations per chain, for a total of 40,000 post-warmup draws.
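
(In brm() terms, that’s roughly the following sketch, with f, dat, and priors as in my first post:)

fit <- brm(f, data = dat, family = shifted_lognormal(), prior = priors,
           chains = 8, iter = 6000, warmup = 1000, cores = 8)
# 8 chains x (6000 - 1000) = 40,000 post-warmup draws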

I’m going to try simplifying my model first (I probably don’t have enough data to estimate all the population-level parameters) and see how I go.

It’s always a good idea to start with a simple model, particularly when working with new data or a new model type.

Heads up: in case you’re new to brms, that’s an unusually large number of post-warmup iterations. Stan-based packages like brms are usually fine with just 2,000 to 10,000 post-warmup iterations.

Thanks Solomon for that advice. I simplified my model to not estimate the ndt parameter, and that may have given me some insight into why I’m having trouble with the maximal model: even with no predictors on the ndt parameter, its posterior distribution is quite wide. I’ve successfully fit the ndt parameter in a model with a 3-way interaction before, but I have a lot less data for this model than I had for that one. I wonder if that is the reason…
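
(I’m judging the width from summaries and plots of the simplified fit, along these lines; fit_simple is a placeholder name, and the pars argument matches brms 2.16:)

# ndt shows up under "Family Specific Parameters" when it has no predictors
summary(fit_simple)

# Plot its marginal posterior and trace
plot(fit_simple, pars = "ndt")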

Regarding all those post-warmup iterations: I was hoping to have a ‘plentiful posterior’ in case I wished to compute Bayes factors. But I have reduced it to 16,000 post-warmup iterations for now, while I make sure the model works in the first place.
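
(For the Bayes factor plan, my understanding is the fit also needs all parameters saved for bridge sampling, so the current sketch is something like:)

fit <- brm(f, data = dat, family = shifted_lognormal(), prior = priors,
           chains = 8, iter = 3000, warmup = 1000, cores = 8,
           save_pars = save_pars(all = TRUE))  # needed later by bayes_factor()
# 8 chains x (3000 - 1000) = 16,000 post-warmup draws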

That might quite plausibly be a source of convergence issues - see e.g. Underdetermined Linear Regression for some ways this type of problem can manifest in posterior visualisations.
