Best practice building non-linear brms models?

Please also provide the following information in addition to your question:

  • Operating System: Linux
  • brms Version: 2.8.0

Is there good advice out there on how best to approach a fairly complex non-linear regression modelling task in brms, in simple steps that ensure convergence and good fits? For example:

  1. Iterative approaches that build up complexity from simple to complex models and priors, perhaps even using only subsets of the data to set direction before fitting the entire data set?
  2. The use of simulation studies to inform model and prior formulations and to ensure convergence and quality of fit?
  3. The number of samples needed for convergence and good fits (as a function of model complexity)?

Let me explain:

Recently, I have built a model for predicting ‘non-linear’ curves as a function of physical and chemical factors:

y ~ (t > tg) * Gm * (1 - exp(-k * (t - tg)))

where t is time and the parameters tg, Gm, and k depend on CaCl2, pH, temperature, protein level, and enzyme dose.

I had curves from a fractional factorial design of 46 experiments.

My naive approach has been to:

  1. first fit each curve with brms non-linear regression, estimating the parameters of every curve and their associated standard errors;
  2. then model each parameter as a function of the experimental conditions with a linear model, propagating the standard errors via the se() term, e.g. y | se(std.error, sigma = TRUE) ~ …

This yielded good results when comparing model predictions to hold-out samples. I could stop here, but I feel I am not getting optimal results, as fitting everything in one go would, for example, capture the covariance structure between the parameters…
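In code, the two steps looked roughly like this (a sketch; the column names tg_est, tg_se, and experiment, and the data frame parameter_estimates holding the step-1 results, are hypothetical):

library(brms)

# Step 1 (sketch): fit one curve per experiment to extract tg, Gm, k
bform_curve <- bf(
  firmness ~ (minutes > tg) * Gm * (1 - exp(-k * (minutes - tg))),
  tg + Gm + k ~ 1,
  nl = TRUE
)
bprior_curve <- prior(normal(11, 3), nlpar = tg) +     # illustrative priors
  prior(normal(260, 25), nlpar = Gm) +
  prior(normal(0.22, 0.05), nlpar = k)
fit_curve <- brm(bform_curve, prior = bprior_curve,
                 data = subset(model_data, experiment == 1))  # hypothetical id column

# Step 2 (sketch): regress one extracted parameter on the design factors,
# propagating its standard error via se()
fit_tg <- brm(
  tg_est | se(tg_se, sigma = TRUE) ~ (pH + temperature + dose + protein)^2,
  data = parameter_estimates
)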

However, when I try to do all in one go:

bform <- bf(
  firmness ~ (minutes > tg) * Gm * (1 - exp(-k * (minutes - tg))),
  tg ~ (pH + temperature + dose + protein)^2,
  Gm ~ (pH + temperature + dose + protein)^2,
  k ~ (pH + temperature + dose + protein)^2,
  nl = TRUE
)

using priors informed by the first approach:

bprior <- prior(normal(11, 3), nlpar = tg, coef = Intercept) +
  prior(normal(0, 5), nlpar = tg) +
  prior(normal(260, 25), nlpar = Gm, coef = Intercept) +
  prior(normal(0, 5), nlpar = Gm) +
  prior(normal(0.22, 0.05), nlpar = k, coef = Intercept) +
  prior(normal(0, 0.1), nlpar = k)

Stan fails to converge to a solution of similar quality as the piece-wise approach.

I then ran simulation studies, building up data sets from the findings of the piece-wise approach, to understand what was going on (a stripped-down sketch follows this list):

  1. For a given design in the physical and chemical parameters
  2. I calculate the value of each parameter for each experiment
  3. I use the calculated parameters to simulate a curve for each experiment
  4. I add noise to the curves
  5. I fit all parameters in one go
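For a single experiment, the simulation core looked something like this (parameter values and noise level are illustrative):

set.seed(1)
minutes <- seq(0, 60, by = 0.5)
tg <- 11; Gm <- 260; k <- 0.22  # assumed "true" parameter values
firmness_true <- (minutes > tg) * Gm * (1 - exp(-k * (minutes - tg)))
sim_data <- data.frame(
  minutes = minutes,
  firmness = firmness_true + rnorm(length(minutes), sd = 5)  # added noise
)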

From these few studies it appears that the two-step approach is much more efficient than the all-in-one approach: it takes less time, and I need fewer experiments to recover the underlying model. Is this a well-known result? Are there well-known tricks, other than tight priors, to guide the sampler? Or am I approaching it all wrong?

Any advice and heuristics on how to efficiently build complex non-linear models in brms will be highly appreciated!

Thanks

/Jannik

I strongly favor the one-step approach if feasible.

I think what you are missing is that in reality your non-linear parameters tg, Gm, and k all have to be positive, but this is nowhere ensured by your model. One approach would be to model them all on the log scale, i.e. write

bform <- bf(
  firmness ~ (minutes > exp(tg)) * exp(Gm) * (1 - exp(-exp(k) * (minutes - exp(tg)))),
  tg ~ (pH + temperature + dose + protein)^2,
  Gm ~ (pH + temperature + dose + protein)^2,
  k ~ (pH + temperature + dose + protein)^2,
  nl = TRUE
)

but then you need to adjust the priors accordingly; the current ones will likely be far too wide for coefficients on the log scale (a normal(0, 5) slope on the log scale, for instance, allows multiplicative effects of up to exp(5) ≈ 150).

Thanks Paul,

Reformulating the formula, adjusting the priors to the log scale, and using Gaussian noise has given me very good results, better than the two-step approach!

bform <- bf(
  firmness ~ (minutes > exp(lntg)) * exp(lnGm) * (1 - exp(-exp(lnk) * (minutes - exp(lntg)))),
  lntg ~ (cacl + dose + pH + temperature + protein)^2 + I(cacl^2) + I(dose^2) + I(pH^2) + I(temperature^2) + I(protein^2),
  lnGm ~ (cacl + dose + pH + temperature + protein)^2 + I(cacl^2) + I(dose^2) + I(pH^2) + I(temperature^2) + I(protein^2),
  lnk ~ (cacl + dose + pH + temperature + protein)^2 + I(cacl^2) + I(dose^2) + I(pH^2) + I(temperature^2) + I(protein^2),
  nl = TRUE
)

bprior <- prior(normal(log(11), 0.1), nlpar = lntg, coef = Intercept) +
  prior(normal(0, 0.1), nlpar = lntg) +
  prior(normal(log(260), 0.1), nlpar = lnGm, coef = Intercept) +
  prior(normal(0, 0.1), nlpar = lnGm) +
  prior(normal(log(0.22), 0.1), nlpar = lnk, coef = Intercept) +
  prior(normal(0, 0.1), nlpar = lnk)

fit <- brm(
  bform,
  prior = bprior,
  data = model_data,
  family = gaussian(),
  chains = 2,
  iter = 2000,
  thin = 1,
  future = TRUE,
  seed = 42
)

However, the firmness curves I am modelling have a natural lower bound of zero, so it does not make physical sense to me that, given the noise, I can get negative values, especially in the part of the equation that is forced to zero. On the other hand, I see no evidence of increasing noise when comparing the low- and high-firmness parts of the curves; the curves are in fact very smooth, and the noise is so low that sigma is really an estimate of lack of fit rather than of random measurement noise…

In your opinion, what would be the correct way to tackle this issue?

  1. Should I reformulate the problem to use an error family that has a natural lower boundary of zero?
  2. Should I stick with Gaussian noise but provide an equation for sigma that forces it towards zero as firmness approaches zero?
  3. … or something entirely different?

Again, your insights, as always, are invaluable!

/Jannik

I would go for something along the lines of (1).
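One way to read (1), sketched here as an illustration rather than the exact family intended: keep the Gaussian likelihood but truncate it at zero with brms's trunc() addition term, so no probability mass falls below the physical bound.

# Sketch: same non-linear model as above, with the likelihood
# truncated at the physical lower bound of zero
bform_trunc <- bf(
  firmness | trunc(lb = 0) ~ (minutes > exp(lntg)) * exp(lnGm) * (1 - exp(-exp(lnk) * (minutes - exp(lntg)))),
  lntg ~ (cacl + dose + pH + temperature + protein)^2 + I(cacl^2) + I(dose^2) + I(pH^2) + I(temperature^2) + I(protein^2),
  lnGm ~ (cacl + dose + pH + temperature + protein)^2 + I(cacl^2) + I(dose^2) + I(pH^2) + I(temperature^2) + I(protein^2),
  lnk ~ (cacl + dose + pH + temperature + protein)^2 + I(cacl^2) + I(dose^2) + I(pH^2) + I(temperature^2) + I(protein^2),
  nl = TRUE
)

Families with strictly positive support, such as lognormal() or Gamma(), are another reading of (1), but both need care in the region before tg where the predicted mean is exactly zero.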

Finally,

I would like to predict and show how my parameters tg, k, and Gm vary with factor values:

lntg ~ (cacl + dose + pH + temperature + protein)^2 + I(cacl^2) + I(dose^2) + I(pH^2) + I(temperature^2) + I(protein^2)
lnGm ~ (cacl + dose + pH + temperature + protein)^2 + I(cacl^2) + I(dose^2) + I(pH^2) + I(temperature^2) + I(protein^2)
lnk ~ (cacl + dose + pH + temperature + protein)^2 + I(cacl^2) + I(dose^2) + I(pH^2) + I(temperature^2) + I(protein^2)

I know how to do this by extracting the draws and combining them by hand, but I was wondering whether there is an easier way?

e.g. something like:

parameter <- predict(fit, newdata = newdata, nl_parameter_to_predict = "lnk") …

Thanks

/Jannik

Try fitted(..., nlpar = "nlpar name")
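For example (a sketch: the grid values are illustrative, and minutes is included only in case newdata validation requires all model variables; it does not enter the parameter formulas):

newdata <- expand.grid(
  cacl = c(0.1, 0.2),   # illustrative factor levels
  dose = c(1, 2),
  pH = 6.5,
  temperature = c(30, 40),
  protein = 3,
  minutes = 0
)

# posterior summary of lntg at the new conditions
lntg_pred <- fitted(fit, newdata = newdata, nlpar = "lntg")

# for k on the natural scale, exponentiate the draws before summarizing
lnk_draws <- fitted(fit, newdata = newdata, nlpar = "lnk", summary = FALSE)
k_pred <- apply(exp(lnk_draws), 2, quantile, probs = c(0.025, 0.5, 0.975))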

This is exactly what I was looking for!

Thank you for all your help and thank you for making this package so awesome :-)

/Jannik