# Is forcing correlation between parameter values allowed in Bayesian regression?

I am using Bayesian nonlinear regression in brms to fit some model parameters, and I'm running into issues getting the MCMC chains to converge. I know this is caused by model specification problems, and I'm looking for a way to incorporate some assumptions into my priors or model specification that will avoid the issue.

The data: the response variable C_{r[i]} is the fraction of a chemical i remaining after an experiment, and conc is the initial concentration of the chemical. There are about 500 different chemicals measured in the dataset, with 9 observations per chemical. C_{r[i]} is a positive number, where values less than 1 indicate a net decrease and values greater than 1 indicate a net increase in the chemical during the experiment, and conc values range from 0.1 to 1.

Generally, I’m fitting functions of the form

C_{r[i]} ~ 1 + exp(b_i) - f(conc)

where b_i is a random effect for each chemical and f(conc) is some function of the concentration. Parameters in f(conc) are generally transformed so that they are positive, so exp(b_i) represents the estimated gains and f(conc) the estimated losses. Of course, there are issues fitting this model: if b_i and f(conc) can take any value, there are infinitely many plausible parameter combinations. My way around this has generally been to structure f(conc) so that its maximum possible value is 1, which constrains the parameter values enough for the MCMC chains to converge.

Now, I'm trying to fit a version of this where f(conc) is a piecewise function. Below some threshold t_i its value is 0, and above it, it takes the form

f(conc) = \begin{cases} 0 & conc < t_i \\ \frac{a \, conc}{1 + b_1 conc^2} & conc \ge t_i \end{cases}
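For concreteness, the piecewise loss above can be sketched as a small R function (the name `f_loss` and the parameter values in the example call are illustrative, not from the actual model fit):

```r
# Piecewise loss: zero below the threshold t, a saturating function of
# concentration above it. a, b1, t are scalar parameters here.
f_loss <- function(conc, a, b1, t) {
  ifelse(conc < t, 0, (a * conc) / (1 + b1 * conc^2))
}

# Example with made-up values: conc = 0.2 is below t = 0.4, so the loss
# is 0 there; conc = 0.8 is above it, giving (0.5 * 0.8) / (1 + 0.64).
f_loss(c(0.2, 0.8), a = 0.5, b1 = 1, t = 0.4)
```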

where a is a function of the chemical property NOSC. The threshold t_i would be a random effect of chemical identity.

The issue is that, particularly for chemicals where C_{r[i]} > 1, it is equally plausible that both t_i and a are low, or that t_i is high and f(conc) = 0 everywhere. Because the model has no reason to favor one explanation over the other, it doesn't converge and I get poor estimates.
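The non-identifiability can be demonstrated numerically: with made-up parameter values, a threshold above the largest observed concentration and a loss coefficient of zero produce exactly the same predicted C_r, so the likelihood cannot distinguish them.

```r
# Two parameter settings that the data cannot tell apart (all numbers
# below are hypothetical, chosen only to illustrate the flat likelihood).
f_loss <- function(conc, a, b1, t) ifelse(conc < t, 0, (a * conc) / (1 + b1 * conc^2))

conc <- seq(0.1, 1, length.out = 9)  # the 9 observed concentrations
b    <- 0.1                          # chemical random effect -> gains of exp(b)

# Setting 1: threshold above every observed conc, so f(conc) = 0 throughout.
pred_high_t <- 1 + exp(b) - f_loss(conc, a = 0.5, b1 = 1, t = 1.5)
# Setting 2: tiny threshold but a = 0, so f(conc) = 0 throughout as well.
pred_low_a  <- 1 + exp(b) - f_loss(conc, a = 0,   b1 = 1, t = 0.05)

all.equal(pred_high_t, pred_low_a)   # TRUE: identical predictions
```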

Now, I think a way around this might be to specify some relationship between t_i and b_i. Essentially, when C_{r[i]} is high, I'm skeptical of low values for t_i; I think it's more likely that conc < t_i and f(conc) = 0. So essentially, I think t_i and b_i are somewhat positively correlated.

I've found a way to code this that yields a convergent MCMC chain, but I'm concerned about whether it's valid, or whether there's a better way. I've been following the example here for specifying piecewise functions in brms. My idea is to fit a single random effect that both b_i and t_i are functions of: b_i is the exponential of the effect (to keep it positive), and t_i is transformed to a value between 0 and 2 by an inverse-logit function (we don't really have the power to detect t_i over 1, so answers over 1 are all pretty much equivalent). The random effect then influences both b_i and t_i positively, with the slope of the relationship controlled by an additional parameter z, which in one version is a fixed effect and in another is an additional random effect. Either way there is always a positive relationship, but in the second version the slope varies by chemical. The MCMC chains converge more easily when I do it this way, but I'm not sure if it's really good practice.
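The shared-effect construction above can be sketched in a few lines of base R (the variable names `u`, `gains`, and `threshold` are illustrative; `u` plays the role of the random effect called `b` in the brms code below, and z = 0.3 is an arbitrary value for the fixed-effect version):

```r
inv_logit <- function(x) 1 / (1 + exp(-x))

u <- seq(-2, 2, length.out = 5)  # hypothetical chemical-level effects
z <- 0.3                         # slope parameter (fixed-effect version)

gains     <- exp(u)                    # b_i: always positive
threshold <- 2 * inv_logit(exp(z) * u) # t_i: squeezed into (0, 2)

# Both transforms are monotone increasing in u, so chemicals with large
# estimated gains are also pushed toward large thresholds, i.e. toward
# f(conc) = 0 over the observed concentration range.
cbind(u, gains, threshold)
```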

My code then looks like this:

```r
chem_priors =
  prior(normal(-1, 0.25), nlpar = "b") +
  prior(normal(1.5, 0.25), nlpar = "k0") +
  prior(normal(1, 0.25), nlpar = "k1") +
  prior(exponential(1), nlpar = "b1", lb = 0) +
  prior(normal(0, 1), nlpar = "z") +
  prior(exponential(1), class = "sigma") +
  prior(exponential(10), class = "sd", nlpar = "b")
```

```r
chem_sb_ce = brm(
  data = md,
  family = lognormal(),
  # step(conc - t) switches the loss term on only above the threshold t,
  # matching the piecewise definition of f(conc) above
  bf(frac_remain ~ (1 + exp(b)) - step(conc - t) * (a * conc) / (1 + b1 * conc^2),
     nlf(a ~ inv_logit(k0 + k1 * nosc_adj)),
     k0 ~ 1,
     k1 ~ 1,
     b1 ~ 1,
     nlf(t ~ 2 * inv_logit(exp(z) * b)),
     z ~ 1,
     # alternative: z ~ (1 | chemical),
     b ~ (1 | chemical),
     nl = TRUE),
  prior = chem_priors,
  iter = 1000, warmup = 500, chains = 4, cores = 4,
  control = list(adapt_delta = 0.99))
```

So is it valid to specify the model this way, with the same random effect working in two parts of the model? Is there a better way to just specify some kind of covariance prior for two parameters? Thanks

Yes, you can reuse parameters in multiple contexts in Bayesian models.

In fact, my StanCon talk next Tuesday reuses parameters for two different sampling distributions (one involving rating and one involving ranking, in my case).

I'm afraid I don't know brms well enough to say whether your model specification is correct. But what I'd generally recommend is simulating data and then trying to recover the parameter values with appropriate posterior coverage; the general and rigorous form of this is simulation-based calibration (SBC) checking.
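A minimal sketch of the data-simulation step in base R, assuming the questioner's model structure (shared random effect `u`, lognormal observation noise); every parameter value here is made up, and the resulting `sim` data frame would then be fed to the brms call to check whether the posteriors recover the known truths:

```r
set.seed(1)
inv_logit <- function(x) 1 / (1 + exp(-x))

n_chem <- 50
conc   <- seq(0.1, 1, length.out = 9)   # 9 observations per chemical

# "True" parameter values to recover (hypothetical):
u     <- rnorm(n_chem, 0, 0.3)          # shared chemical random effect
z     <- 0.3; a <- 0.6; b1 <- 1; sigma <- 0.05
gains <- exp(u)                         # b_i
t_    <- 2 * inv_logit(exp(z) * u)      # t_i

sim <- do.call(rbind, lapply(seq_len(n_chem), function(i) {
  loss <- ifelse(conc < t_[i], 0, (a * conc) / (1 + b1 * conc^2))
  mu   <- 1 + gains[i] - loss           # mean fraction remaining
  data.frame(chemical = i, conc = conc,
             frac_remain = rlnorm(length(conc), log(mu), sigma))
}))

# Fit the brms model to `sim`, then check whether the posteriors cover
# u, z, a, and b1; repeating this over many simulated datasets
# approximates a simulation-based calibration check.
head(sim)
```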