Why does the shifted_lognormal() family produce Stan code such as

target += lognormal_lpdf(Y - ndt | mu, sigma);

How does subtracting the shift parameter accomplish shifting the distribution to the right horizontally?

Further, I am confused about the syntax of response ~ predictors here. Do the predictors represent inputs to the mean function, or inputs to the function for the first parameter?

For example, in the following code:

fit3 <- brm(reaction_time ~ 1 + bigram + trial_num + (1 + bigram | participant_id),
            data = experiment_df,
            family = shifted_lognormal(),
            prior = c(set_prior("normal(6, 0.2)", class = "Intercept"),
                      set_prior("normal(-0.6, 0.2)", class = "b", coef = "trial_num"),
                      set_prior("normal(0, 1)", class = "b", coef = "bigram"),
                      set_prior("gamma(1, 1)", class = "sigma")),
            control = list(adapt_delta = 0.95, max_treedepth = 20))

Does this mean that the mean of this shifted lognormal distribution is a function of bigram and trial_num (with population-level and group-level effects), or instead that the parameter mu is a function of these predictors? And for the shifted lognormal, mu is not the mean; exp(mu) is the median of the unshifted part, so the median of the response is ndt + exp(mu).

ndt here is the smallest Y value that has non-zero density, because for Y <= ndt you have Y - ndt <= 0, which is outside the support of the unshifted lognormal.
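The shift is just a change of variables: if X follows a lognormal and Y = X + ndt, the Jacobian is 1, so the density of Y at y is the unshifted density evaluated at y - ndt, which moves the whole curve right by ndt. A minimal Python sketch (not brms/Stan code; parameter values are arbitrary):

```python
import math

def lognormal_pdf(x, mu, sigma):
    # density of an unshifted lognormal; zero outside the support x > 0
    if x <= 0:
        return 0.0
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi))

def shifted_lognormal_pdf(y, mu, sigma, ndt):
    # Y = X + ndt has density f_X(y - ndt): the curve is translated right by ndt
    return lognormal_pdf(y - ndt, mu, sigma)

mu, sigma, ndt = 0.0, 1.0, 0.2
x = 1.3
# the shifted density at ndt + x matches the unshifted density at x
print(shifted_lognormal_pdf(ndt + x, mu, sigma, ndt), lognormal_pdf(x, mu, sigma))
# and anything at or below the shift has zero density
print(shifted_lognormal_pdf(0.1, mu, sigma, ndt))
```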

The predictors always correspond to one of the parameters of the distribution, after applying the link function. Here, the main parameter (i.e. the one assumed unless you do "distributional regression") is mu, and the default link (as shown at http://paul-buerkner.github.io/brms/reference/brmsfamily.html) is identity. That is, the predictors are for the mean of the logarithm. Generally, not all families have their mean predicted, but it is always some measure of central tendency.
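To see why exp(mu) is a median (not a mean) statement, a quick simulation sketch in Python (values are arbitrary, not taken from the model above): the median of Y = ndt + exp(Normal(mu, sigma)) is ndt + exp(mu), while the mean is ndt + exp(mu + sigma^2/2).

```python
import math
import random
import statistics

random.seed(1)
mu, sigma, ndt = 0.4, 0.3, 0.2

# draw Y = ndt + exp(normal(mu, sigma)) and compare empirical summaries
samples = [ndt + random.lognormvariate(mu, sigma) for _ in range(200_000)]

print(statistics.median(samples), ndt + math.exp(mu))                  # medians agree
print(statistics.fmean(samples), ndt + math.exp(mu + sigma ** 2 / 2))  # means agree
```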

So what happens if (Y - ndt) is <= 0? I'm going for a deeper understanding of the underlying mechanisms here.

Further, does the shift parameter ndt inform us about the change in the mean of the distribution on the scale of the response? I'm thinking about using a distributional model where the shift parameter is a function of various predictors, and I am wondering how to interpret the coefficients on the predictors that contribute to ndt.

Then the density is 0, the log density is negative infinity, and you get an error in sampling. I am a bit surprised brms doesn't enforce this constraint, but inspecting the generated Stan code (via make_stancode - a good way to check what the model is actually doing) shows that it clearly doesn't, so I would expect errors in sampling if the data do not inform ndt well enough to avoid bumping into this boundary.
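A toy illustration of that failure mode, with plain Python standing in for Stan's lognormal_lpdf (in actual Stan, a value outside the support makes the sampler reject the draw rather than silently returning -inf; parameter values here are arbitrary):

```python
import math

def lognormal_lpdf(x, mu, sigma):
    # log density of an unshifted lognormal; -inf outside the support,
    # standing in for Stan rejecting the proposal
    if x <= 0:
        return float("-inf")
    return (-math.log(x * sigma) - 0.5 * math.log(2 * math.pi)
            - (math.log(x) - mu) ** 2 / (2 * sigma ** 2))

ndt = 0.15
for y in (0.10, 0.15, 0.40):
    # only y > ndt contributes a finite log density to target
    print(y, lognormal_lpdf(y - ndt, 0.0, 0.5))
```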

I am not sure I can answer this very productively: by default the predictors are for the log of the ndt parameter, so the coefficients represent changes in log(ndt).
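Concretely, under the log link a coefficient b on a predictor multiplies ndt by exp(b) per unit increase in that predictor. A small sketch with hypothetical numbers (not estimates from any real model):

```python
import math

# hypothetical log-link coefficients: ndt = exp(intercept + b * x)
intercept, b = math.log(0.2), 0.1

for x in (0, 1, 2):
    ndt = math.exp(intercept + b * x)
    # each unit increase in x scales ndt by the same factor exp(b)
    print(x, round(ndt, 4))
```

So the response-scale effect of such a coefficient is multiplicative on the shift, not additive.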

Brms does not enforce the upper boundary because then every ndt prediction would have to be smaller than the smallest Y, which may not be a reasonable assumption. Hence the current approach, in the absence of a better one. I do the same for the wiener() family, which may unfortunately lead to the same problems there.