Non-linear power relationship model under a lognormal distribution - relationship to parameters

The best general resources I can recommend are at Understanding basics of Bayesian statistics and modelling. In fact, I think brms is actually one of the hardest ways to get a good understanding of Bayesian modelling, because it hides so much from you (while at the same time letting you very quickly build a huge range of models), so for a beginner trying to understand, I would recommend trying to implement at least a couple of models directly in Stan. This is IMHO doubly true for the non-linear syntax in brms, which is even less intuitive than the linear syntax.

Yes, that looks correct to me.

In this particular case, the linear predictor only contains the intercept, so estimates of b1_Intercept are actually directly estimates of b1 (that would not be true if the formulas for b1 and b2 were more complex). There is however a minor catch: since the Estimate column (the posterior mean) is computed separately for each parameter, it can happen that, although the estimate is somewhat representative of the posterior for each single parameter, the pair of estimates is not a good representation of the joint posterior of the two parameters (because the posterior of the parameters can have some correlation structure).

So the safe way is to always work with samples: pick a posterior sample, take the values of b1_Intercept and b2_Intercept, plug them into the equation, repeat. This will give you samples of the fitted curves that are representative of the posterior (a small sketch of this is below).
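To make this concrete, here is a minimal sketch, assuming the fitted model object is called `fit` and that the draws contain columns named `b_b1_Intercept` and `b_b2_Intercept` (you can check the actual names with `variables(fit)`):

```r
library(brms)

# All posterior draws as a data frame (one row per draw)
draws <- as_draws_df(fit)

# Grid of x values at which to evaluate the fitted curve
x_new <- seq(1, 10, length.out = 50)

# One value of the (log-scale) linear predictor mu = b1 * x ^ b2
# per draw and per x value -> a draws-by-x matrix
mu <- sapply(x_new, function(x) {
  draws$b_b1_Intercept * x ^ draws$b_b2_Intercept
})

# Summarise the curves, e.g. posterior median and 90% interval per x
apply(mu, 2, quantile, probs = c(0.05, 0.5, 0.95))
```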

The simplest way is to use posterior_predict, posterior_epred or posterior_linpred to let brms calculate the predictions for you (these apply exactly this logic and automatically handle even complex linear predictors).
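For completeness, a sketch of this convenience route, assuming your model has a single predictor named `x` (use whatever predictor names appear in your original data):

```r
newdata <- data.frame(x = seq(1, 10, length.out = 50))

# Draws of the linear predictor b1 * x ^ b2 (still on the log scale
# because of the lognormal family)
linpred <- posterior_linpred(fit, newdata = newdata)

# Draws of the expected response on the original scale
# (includes the lognormal mean correction involving sigma)
epred <- posterior_epred(fit, newdata = newdata)

# Full posterior predictive draws, including observation noise
ypred <- posterior_predict(fit, newdata = newdata)
```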

Using the lognormal family means that your predictor (b1 * x ^ b2) is a predictor for the logarithm of the observed values. So to make the assumed "noise-free" predictions you would need to compute exp(b1 * x ^ b2) for each sample. If you are interested in the posterior mean of the predictions, you would also need to take the sigma parameter into account: the mean of a log-normal is not the "log-mean", but rather \exp(\mu + \sigma^2/2) - see Log-normal distribution - Wikipedia. This is what posterior_epred would do for you.
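Continuing the manual sketch from above (same assumed `draws` and `mu` objects), the two quantities would look roughly like this:

```r
# "Noise-free" predictions on the original scale: exp(mu) per draw
pred_noise_free <- exp(mu)

# Posterior mean of the predictions: exp(mu + sigma^2 / 2) per draw
# (draws$sigma has one value per draw, matching the rows of mu)
pred_mean <- exp(mu + draws$sigma^2 / 2)
```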

Hope that clarifies more than confuses!

(this stuff can be hard, so feel free to ask for additional clarifications)
