Non-linear models using family = Beta()

Hi all,
I’ve been having trouble fitting some data using a non-linear function with the Beta family. In this case the data are proportional loss in area, so this is a true “proportion” and cannot be validly modeled with a binomial (there are no trials). It works fine a lot of the time, but I have quite a lot of examples where the predicted values from the fitted model do not fit the upper and lower extremes of the observed data at all.

To generate some simulated data that can reproduce this problem I used:

fct <- function(b_top, b_bot, b_ec50, b_beta, x) {
  # four-parameter logistic: upper/lower asymptotes, EC50, and log-scale slope
  b_top + (b_bot - b_top) /
    (1 + exp((b_ec50 - x) * exp(b_beta)))
}
linear_rescale <- function(x, r_out) {
  # rescale x linearly onto the range r_out = c(lower, upper)
  p <- (x - min(x)) / (max(x) - min(x))
  r_out[[1]] + p * (r_out[[2]] - r_out[[1]])
}

set.seed(10)
x <- sort(runif(100, 1, 3))
a <- 0.2
y_mean <- fct(b_beta = 3, b_bot = 0.001, b_ec50 = 2, b_top = 0.999, x)
y <- linear_rescale(rbeta(100, a, (a-a*y_mean)/y_mean),  c(0.001, 0.999))

dat <- data.frame(x, y)
plot(dat$x, dat$y)

To fit the model in brms I’ve been using the following code, which uses essentially the same model formula that generated the simulated data. Note that the priors for the “top” and “bot” parameters are on the logit scale, as I am using a logit link.

library(brms)

bfmod <- brms::bf(y ~ top + (bot - top) / (1 + exp((ec50 - x) * exp(beta))),
                  bot + ec50 + top + beta ~ 1,
                  nl = TRUE)
my_prior <- c(prior_string("normal(5, 5)", nlpar = "top"),
              prior_string("normal(-5, 5)", nlpar = "bot"),
              prior_string("gamma(5, 0.04)", nlpar = "ec50"),
              prior_string("normal(0, 3)", nlpar = "beta"))

tt <- brm(formula = bfmod, data = dat, prior = my_prior, 
          family = Beta(link="logit"))#, sample_prior='only')
tt_ce <- conditional_effects(tt)
plot(tt_ce, points = TRUE)

So you can see that the estimated upper asymptote is well below where it should be, and the lower one is way too high. I did play around with the prior for “beta”. Making this really vague causes it to take a really long time to run, and while it did generate a more realistically sharp decline, it had no impact on the “top” and “bot” estimates. Making the priors on top and bot more informative didn’t help either.

Running directly in Stan produces the same lack-of-fit outcome, with estimates for “bot” and “top” (transformed using the inverse logit) nowhere near the simulated values of 0.001 and 0.999.

sink("new1.stan")
cat(stancode(tt, version = FALSE))
sink()

fit <- stan(file = 'new1.stan', data = standata(tt))

summary(fit)$summary

I tried modelling this using a binomial, with the same link function and priors, using an arbitrary number of trials. I get a much better fit (albeit this is probably over-dispersed). So this suggests that the priors on the non-linear model parameters are ok in the context of the logit link function for the binomial. But I really can’t use a binomial here because I can’t justify any given selection of “trials” - and I’m not aware of any other bounded distributions that I can use here.

dat$trials <- 10
dat$success <- as.integer(round(dat$y*dat$trials))

bfmod_binom <- brms::bf(success | trials(trials) ~ top + (bot - top) / (1 + exp((ec50 - x) * exp(beta))),
                        bot + ec50 + top + beta ~ 1,
                        nl = TRUE)

ttbinom <- brm(formula = bfmod_binom, data = dat, prior = my_prior, 
               family = binomial(link="logit"))#, sample_prior='only')
library(ggplot2)  # needed for geom_point() and aes()
plot(conditional_effects(ttbinom))[[1]] + 
  # use dat here: ttbinom$data contains success and trials but not y
  geom_point(data = dat, mapping = aes(x = x, y = y), inherit.aes = FALSE)

I tried an extremely vague prior for phi, but still with no improvement (see code below). Does anyone have any other suggestions, or know what I am doing wrong here?

a <- 0.2
y_mean <- fct(b_beta = 3, b_bot = 0.001, b_ec50 = 2, b_top = 0.999, x)
y <- linear_rescale(rbeta(100, a, (a-a*y_mean)/y_mean),  c(0.001, 0.999))

dat <- data.frame(x, y)
plot(dat$x, dat$y)

library(brms)

bfmod <- brms::bf(y ~ top + (bot - top) / (1 + exp((ec50 - x) * exp(beta))),
                  bot + ec50 + top + beta ~ 1,
                  nl = TRUE)
my_prior <- c(prior_string("normal(8, 1)", nlpar = "top"),
              prior_string("normal(-8, 1)", nlpar = "bot"),
              prior_string("gamma(5, 0.04)", nlpar = "ec50"),
              prior_string("uniform(0.0000001, 100000)", nlpar = "beta"),
              prior(uniform(0.000001, 10000), class = "phi"))

tt3 <- brm(formula = bfmod, data = dat, prior = my_prior, 
          family = Beta(link="logit"))#, sample_prior='only')
tt3_ce <- conditional_effects(tt3)
plot(tt3_ce, points = TRUE)

@beckyfisher looks like a dose-response model.

So this is a sigmoid-shaped model for some outcome based on concentration, right? If the outcome variable is constrained to be between 0 and 1, then the beta response distribution makes sense. But I don’t think you need the link function. Your model structure already captures the shape.

What about if you set the link to ‘identity’ for the beta family in your first model? You might need to respecify the priors for b_bot and b_top.


Thanks for replying @AWoodward - yes, this is a dose-response model. I tried the same example with an identity link as suggested, but I get the same outcome (the upper predicted values too low, and the lower predicted values too high). This is the code:

# using identity
my_prior <- c(prior_string("beta(5, 1)", nlpar = "top"),
              prior_string("beta(1, 5)", nlpar = "bot"),
              prior_string("gamma(5, 0.04)", nlpar = "ec50"),
              prior_string("normal(0, 3)", nlpar = "beta"))

tt <- brm(formula = bfmod, data = dat, prior = my_prior, 
          family = Beta(link="identity"))
tt_ce <- conditional_effects(tt)
plot(tt_ce, points = TRUE)

Just a note: I can reproduce this same thing in R2jags, so this is not a brms/Stan-specific issue…


A very vague guess: could the problem lie with the phi parameter, specifically with phi being the same across the whole range? It appears that the “tails” support large phi while the middle section requires low phi, and what you see could be the model making a weird compromise. If I am right, then treating phi as fixed, known, and large (by using a constant prior) should give you results similar to the binomial approach. But I might easily be wrong… sigmoid-like curves can be tough to fit well for many reasons.
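
If you want to try the fixed-phi version, here is a sketch of what I mean (the value 100 is an arbitrary “large” choice, and constant() priors need a reasonably recent brms):

# as before, but with phi fixed at a large constant rather than estimated
my_prior_fixphi <- c(prior_string("normal(5, 5)", nlpar = "top"),
                     prior_string("normal(-5, 5)", nlpar = "bot"),
                     prior_string("gamma(5, 0.04)", nlpar = "ec50"),
                     prior_string("normal(0, 3)", nlpar = "beta"),
                     prior(constant(100), class = "phi"))  # phi fixed, not sampled

tt_fixphi <- brm(formula = bfmod, data = dat, prior = my_prior_fixphi,
                 family = Beta(link = "logit"))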

Hope you’ll be able to resolve this!


@beckyfisher damn, I had a similar issue with a dose-response analysis which was resolved by selecting the identity link, but that was for a beta-binomial model.

The other thing I noticed about your plot is that it’s essentially an ‘on-off’ function; the central slope is so steep that you transition directly from the maximum possible response to the minimum possible. As @martinmodrak suggests, this might cause issues with the constant-variance assumption in this model. The priors might also not be informative enough.

Thanks for the suggestions @AWoodward - I tried a slightly different model with a step function instead, and no slope - i.e. just an upper and a lower value, and an estimated transition (here called nec). I’m still getting the same thing, with and without the link… This is with a very vague prior on phi, but the priors don’t seem to be making much difference. I also tried more informative priors on top and bot, but they don’t really help.

bfmod <- brms::bf(y ~ bot + (top - bot) * step(x - nec),
                  bot + top + nec ~ 1,
                  nl = TRUE)

my_prior <- c(prior_string("normal(5, 5)", nlpar = "top"),
              prior_string("normal(-5, 5)", nlpar = "bot"),
              prior_string("gamma(5, 0.04)", nlpar = "nec"),
              prior(uniform(0.00001, 1000), class = "phi"))

tt <- brm(formula = bfmod, data = dat, prior = my_prior, 
          family = Beta(link="logit"))#, sample_prior='only')
tt_ce <- conditional_effects(tt)
plot(tt_ce, points = TRUE)

Hi all, to explore the issue around phi in more detail I reformulated the problem in terms of phi, based on the parameterisation of the beta in Stan. In this way I can actually generate data of the type I am having trouble with, using a given phi value (i.e. there is no variation in phi across x here at all). The code is:

fct <- function(b_top, b_bot, b_ec50, b_beta, x) {
  b_top + (b_bot - b_top) /
    (1 + exp((b_ec50 - x) * exp(b_beta)))
}
linear_rescale <- function(x, r_out) {
  p <- (x - min(x)) / (max(x) - min(x))
  r_out[[1]] + p * (r_out[[2]] - r_out[[1]])
}
set.seed(30)
phi=1.5
x <- sort(runif(100, 1, 3))
y_mean <- fct(b_beta = 3, b_bot = 0.01, b_ec50 = 2, b_top = 0.99, x)
shape1 <- y_mean * phi
shape2 <- (1-y_mean) * phi
y <- linear_rescale(rbeta(100, shape1, shape2),  c(0.001, 0.999))
dat <- data.frame(x, y)
plot(dat$x, dat$y)
require(brms)
bfmod <- brms::bf(y ~ top + (bot - top) / (1 + exp((ec50 - x) * exp(beta))),
                  bot + ec50 + top + beta ~ 1,
                  nl = TRUE)

my_prior <- c(prior_string("uniform(0.8, 0.99)", nlpar = "top"),
              prior_string("uniform(0.001, 0.2)", nlpar = "bot"),
              prior_string("uniform(1, 3)", nlpar = "ec50"),
              prior_string("normal(3, 1)", nlpar = "beta"),
              prior(uniform(0.1, 30), class = "phi"))
my_inits <- list(list(b_beta = array(3), b_bot = array(0.1),
                      b_ec50 = array(2), b_top = array(0.9), phi = 0.5))
tt <- brm(formula = bfmod, data = dat, prior = my_prior,
          family = Beta(link="identity"), init = my_inits, chains = 1)

tt_ce <- conditional_effects(tt)
plot(tt_ce, points = TRUE)
summary(tt)

Fitting this simulated data fails to return the original simulated phi, even when the priors are set quite tightly around the values used for the other parameters. Phi always seems to come out bigger than the simulated value.

I think that’s entirely a quirk of the way you simulate the data, which does not match what the model expects. If I remove the linear_rescale call and generate y almost exactly as the model expects, i.e.

y <- rbeta(100, shape1, shape2)
y[y == 1] <- 1 - 1e-10 # Needed as rbeta can produce exact 1

I get the estimate as:

    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
phi     1.29      0.37     0.70     2.09 1.00     1195     1917

which looks reasonable. The problem with your code is IMHO that having a large number of values that are close to the boundary (e.g. your 0.001 and 0.999) and low precision implies you should also get some values even closer to the boundary, but that does not happen.
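
A quick numeric check of what I mean (with your y_mean ≈ 0.99 and phi = 1.5; numbers approximate):

# how much mass does the implied beta put beyond the rescaled upper bound?
draws <- rbeta(1e5, 0.99 * 1.5, 0.01 * 1.5)
mean(draws > 0.999)  # roughly 0.9, yet linear_rescale caps your data at 0.999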

Thanks @martinmodrak - yes of course that makes sense, and thanks for updating the example so it actually reproduces the problem accurately. Do you have any advice regarding what I can do in this situation to model these data?

So it seems to me that the problem here is that I am trying to model data that really does contain 0s and 1s using the beta distribution. I’ve been trying to find alternatives to this, so if anyone has some suggestions that would be awesome, as I’m really struggling to find anything directly useful.
First I thought I might try the zero_one_inflated_beta() family in brms, as the description here Zero One Inflated Beta Models for Proportion Data - The Analysis Factor suggested this might be the right family in this situation. However, using that I get a plot that looks like:

[plot of the zero_one_inflated_beta fit omitted]

The code for this is:

fct <- function(b_top, b_bot, b_ec50, b_beta, x) {
  b_top + (b_bot - b_top) /
    (1 + exp((b_ec50 - x) * exp(b_beta)))
}
linear_rescale <- function(x, r_out) {
  p <- (x - min(x)) / (max(x) - min(x))
  r_out[[1]] + p * (r_out[[2]] - r_out[[1]])
}
set.seed(30)
phi=1.5
x <- sort(runif(100, 1, 3))
y_mean <- fct(b_beta = 3, b_bot = 0, b_ec50 = 2, b_top = 1, x)
shape1 <- y_mean * phi
shape2 <- (1-y_mean) * phi
y <- rbeta(100, shape1, shape2)
dat <- data.frame(x, y)
plot(dat$x, dat$y)
require(brms)
bfmod <- brms::bf(y ~ top + (bot - top) / (1 + exp((ec50 - x) * exp(beta))),
                  bot + ec50 + top + beta ~ 1,
                  nl = TRUE)

my_prior <- c(prior_string("normal(5, 8)", nlpar = "top"),
              prior_string("normal(-5, 8)", nlpar = "bot"),
              prior_string("uniform(1, 3)", nlpar = "ec50"),
              prior_string("normal(0, 2)", nlpar = "beta"))
my_inits <- list(list(b_beta = array(3), b_bot = array(0.1),
                      b_ec50 = array(2), b_top = array(0.9), phi = 0.5))
tt <- brm(formula = bfmod, data = dat, prior = my_prior,
          family = zero_one_inflated_beta(), init = my_inits, chains = 1)


tt_ce <- conditional_effects(tt)
plot(tt_ce, points = TRUE)

Reading the description, it seems that the model is for the situation where the 0s and 1s are an inflation - i.e. caused by a different process from the one influencing the proportions in between - which is not the case in my example (the response is the ratio of two surface areas). As the model is a dose-response, the process that causes the 1s is simply that the concentration x isn’t yet sufficient to cause any loss of area, and the 0s are caused by complete loss. I wasn’t able to figure out from the brms help file whether I could modify this so that the zero and one inflation is also a function of x.
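
The closest I could come up with is the untested sketch below (I’m assuming lf() can be combined with a non-linear bf() like this), which gives the inflation parameters their own linear formulas - though even then the inflation would be a separate logistic-in-x process rather than being driven by the dose-response curve itself:

bfmod_zoib <- brms::bf(y ~ top + (bot - top) / (1 + exp((ec50 - x) * exp(beta))),
                       bot + ec50 + top + beta ~ 1,
                       nl = TRUE) +
  lf(zoi ~ x) +  # probability that an observation is a 0 or a 1
  lf(coi ~ x)    # conditional probability that an inflated observation is a 1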

An alternative I tried is the beta-binomial, following the instructions here: Define Custom Response Distributions with brms
The problem here is that I have no “trials”, so technically this is not appropriate. However, given the high precision in the area values (and hence their ratio) I tried with an arbitrarily large number of trials, and the plot looks great.

[plot of the beta-binomial fit omitted]

My code for this is:

require(brms)
beta_binomial2 <- custom_family(
  "beta_binomial2", dpars = c("mu", "phi"),
  links = c("logit", "log"), lb = c(NA, 0),
  type = "int", vars = "vint1[n]"
)
stan_funs <- "
  real beta_binomial2_lpmf(int y, real mu, real phi, int T) {
    return beta_binomial_lpmf(y | T, mu * phi, (1 - mu) * phi);
  }
  int beta_binomial2_rng(real mu, real phi, int T) {
    return beta_binomial_rng(T, mu * phi, (1 - mu) * phi);
  }
"
stanvars <- stanvar(scode = stan_funs, block = "functions")


fct <- function(b_top, b_bot, b_ec50, b_beta, x) {
  b_top + (b_bot - b_top) /
    (1 + exp((b_ec50 - x) * exp(b_beta)))
}
linear_rescale <- function(x, r_out) {
  p <- (x - min(x)) / (max(x) - min(x))
  r_out[[1]] + p * (r_out[[2]] - r_out[[1]])
}

set.seed(30)
phi=1.5
x <- sort(runif(100, 1, 3))
y_mean <- fct(b_beta = 3, b_bot = 0, b_ec50 = 2, b_top = 1, x)
shape1 <- y_mean * phi
shape2 <- (1-y_mean) * phi
y <- as.integer(round(rbeta(100, shape1, shape2)*10000))
dat <- data.frame(x, y)
dat$trials <- 10000

bfmod <- brms::bf(y|vint(trials) ~ top + (bot - top) / (1 + exp((ec50 - x) * exp(beta))),
                  bot + ec50 + top + beta ~ 1,
                  nl = TRUE)

my_prior <- c(prior_string("normal(5, 8)", nlpar = "top"),
              prior_string("normal(-5, 8)", nlpar = "bot"),
              prior_string("uniform(1, 3)", nlpar = "ec50"),
              prior_string("normal(0, 2)", nlpar = "beta"))
my_inits <- list(list(b_beta = array(3), b_bot = array(0.1),
                      b_ec50 = array(2), b_top = array(0.9)))
tt <- brm(formula = bfmod, data = dat, prior = my_prior, stanvars = stanvars, 
          family = beta_binomial2, init = my_inits, chains = 1)


expose_functions(tt, vectorize = TRUE)

log_lik_beta_binomial2 <- function(i, prep) {
  mu <- prep$dpars$mu[, i]
  phi <- prep$dpars$phi
  trials <- prep$data$vint1[i]
  y <- prep$data$Y[i]
  beta_binomial2_lpmf(y, mu, phi, trials)
}

posterior_predict_beta_binomial2 <- function(i, prep, ...) {
  mu <- prep$dpars$mu[, i]
  phi <- prep$dpars$phi
  trials <- prep$data$vint1[i]
  beta_binomial2_rng(mu, phi, trials)
}
posterior_epred_beta_binomial2 <- function(prep) {
  mu <- prep$dpars$mu
  trials <- prep$data$vint1
  trials <- matrix(trials, nrow = nrow(mu), ncol = ncol(mu), byrow = TRUE)
  mu * trials
}

tt_ce <- conditional_effects(tt)
# points = TRUE already overlays the observed counts; a base-graphics points()
# call cannot draw on top of a ggplot
plot(tt_ce, points = TRUE)

I tried with a range of different arbitrary numbers of trials and it doesn’t seem to affect the confidence intervals much. So my question is: is this a valid solution to this problem? I.e., setting a “trials” based on the apparent precision of the measurements and using the beta-binomial so that the 0s and 1s can be modeled explicitly.


Hi @beckyfisher,

You might want to have a look at this post and associated paper. The model was developed with survey responses in mind, where participants have to indicate, for instance, agreement on a 0 to 100 scale. We kind of know that if the “underlying” real response is 1.3 or 2.7, people tend to just respond 0 (and similarly for answers close to 100). The idea of this model is that below a certain underlying threshold people answer 0, and above another threshold people answer 100.

The quote below suggests to me that this threshold approach might be useful to you. The explicit idea of the model is that the same underlying model is driving the 0s, the 1s, and the proportions.

The paper has some comparisons with zero-one inflated beta models. I played around a bit with the model and in the thread I present what I think is a more robust parameterisation than in the original paper. I am obviously biased in my assessment!

I might have some more notes about this model and I can dig them up if they would be helpful to you.


Hi @beckyfisher and @stijn -

Thanks for the shout-out, @stijn! I agree that the ordered beta model mentioned in the above links could be a good fit for your problem, @beckyfisher. Your point about inflation being a separate process is what led me to work on this model, and it combines essentially a beta distribution with ordered cutpoints so that it can fairly naturally handle 0/1s without requiring separate processes/parameters.

I’d just add that the current implementation of this is also in brms. There is some extra code to run but then you can use all the brms functions. Vignette here:

https://htmlpreview.github.io/?https://github.com/saudiwin/ordbetareg/blob/master/estimate_with_brms.html

@stijn has a version that uses a different parameterization of the beta distribution but is otherwise similar. Though it’s not yet implemented in brms… c’mon @stijn… ;)

I’m currently revising the paper to add in some other models to the simulation and hope to have some interesting new results out before long.


Thanks @stijn and @saudiwin for chiming in!

As for next steps, I was thinking about this a bit more and I realized I am not completely sure how the real-world measurements you did in the lab actually give rise to the data you feed into the model. Could you describe the experiment in a bit more detail? Also, what do b_top and b_bot represent biologically? (I am a bit confused by those being exactly 0 and 1 in your simulations, which coincides with the range of the beta distribution, but you probably don’t assume this holds in general.)

Maybe you actually do? If e.g. you are measuring area that lights up on an image from a microscope by thresholding the pixel values and counting the proportion of pixels above the threshold, then it would be IMHO quite reasonable to assume each pixel is a trial (the trials are not independent, but I don’t think that would be an issue - the extra variability would likely be accounted for by the “beta” part of the beta-binomial).

In another view, even if the process of computing the area is more involved, the 0s and 1s are likely not really 0 and 1 but just “out of detection range”. In the case of image analysis, you know you would not detect less than a single pixel, or less than a “single pixel * lowest detectable light intensity”. Similarly, you can’t detect decreases smaller than this range. So you could probably say that your beta response is censored on both sides by values that are not quite 0/1, and treat your 0/1s as censored observations (see cens in brms). Similar arguments could IMHO be constructed for almost any measurement technique.
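
In brms code that could look something like the sketch below (the 0.01/0.99 limits are just placeholders for your actual detection limits, and I haven’t double-checked that cens is supported for the Beta family in every version):

lwr <- 0.01  # placeholder lower detection limit
upr <- 0.99  # placeholder upper detection limit
dat$cens_ind <- ifelse(dat$y <= lwr, "left", ifelse(dat$y >= upr, "right", "none"))
dat$y_cens <- pmin(pmax(dat$y, lwr), upr)  # move boundary values onto the limits

bfmod_cens <- brms::bf(y_cens | cens(cens_ind) ~ top + (bot - top) / (1 + exp((ec50 - x) * exp(beta))),
                       bot + ec50 + top + beta ~ 1,
                       nl = TRUE)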

Would that make sense?

@beckyfisher sorry for the slow response on this one, I’ve been buried in some writing. Observations of 1 and 0 would absolutely be a headache for the beta model.

I’d agree with @martinmodrak that a censored model could be very reasonable, especially where the observations result from some laboratory technique with finite sensitivity, and if there’s no special interpretation attached to responses of 0 or 1. In that case you could either look to your method validation for the smallest measurable deviation from 0 or 1 and set the censoring limit accordingly, or select a smallest biologically interesting effect and set the limit there.

@saudiwin would it be correct to say that your model would be functionally equivalent to this approach, except with the advantage that the ‘censoring’ threshold is estimated, rather than being required to be specified? I imagine there would be some dose-response/pharmacodynamics applications for this. Interesting stuff!

It is interesting to compare it to censoring. I think one analogy could be “soft censoring”, i.e. the response is a weighted combination of censored (0/1) and non-censored outcomes. You don’t really have true censoring, because the probability of a degenerate response increases as you move past the cutpoint but never reaches unity. So, for example, for an observed value of 0.81 the model might describe the response as roughly a 0.8 * Beta(x) + 0.2 * Pr(x = 1 | logit(x)) mixture of continuous and discrete outcomes. So a mixture model might be a more apt comparison.

I’d be happy to see it get used in other fields. My original application was social-scientific but obviously it can work anywhere you have a response with these distinctive features. The important criterion is that you believe the same process generated the entire response, unlike the ZOIB, which has separate processes for different parts of the response.


Thanks for the response @martinmodrak! I had a bit of a chat with my colleague who collected these data, and while it would be difficult to define trials based on “pixels”, it may be reasonable to infer a “trials” from the precision of the measurements. Ironically, the precision of a 1 (all tissue completely alive) and a 0 (all tissue completely dead) is well defined, with no measurement error - it is the measurements in between that have error, of about 1% in this case. So I’m not sure the censored model would work well here. I had a go at coding up @saudiwin’s model but didn’t succeed in getting it to run in my non-linear setting, but if I get some time I will keep trying to get it to work.
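
If I do go down the “trials from precision” route, the mapping would be something like this (a rough sketch, treating the ~1% measurement error as if each observation came from 100 Bernoulli trials):

dat$trials <- 100  # ~1% precision treated as 100 pseudo-trials
dat$success <- as.integer(round(dat$y * dat$trials))
# and then fit via the beta_binomial2 custom family from my earlier post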

Hi @beckyfisher -

I’d be interested to hear why you couldn’t get it to run. You should be able to get estimates with any distribution in [0,1]. Did you use the brms version?

All best,

Bob

@saudiwin I spent a bit more time today getting it to work for this non-linear model, and actually it looks great!

[plot of the fitted ordered beta model omitted]

The code is quite long for anyone interested in trying it (see below), and taken straight from the blog you linked. For some reason the conditional_effects plot looks very strange - the posterior_epred function was missing, so maybe the version I made isn’t correct - but using predict and plotting works fine.

This is the code I used:

require(brms)
fct <- function(b_top, b_bot, b_ec50, b_beta, x) {
  b_top + (b_bot - b_top) /
    (1 + exp((b_ec50 - x) * exp(b_beta)))
}
linear_rescale <- function(x, r_out) {
  p <- (x - min(x)) / (max(x) - min(x))
  r_out[[1]] + p * (r_out[[2]] - r_out[[1]])
}

set.seed(30)
phi=1.5
x <- sort(runif(100, 1, 3))
y_mean <- fct(b_beta = 3, b_bot = 0, b_ec50 = 2, b_top = 1, x)
shape1 <- y_mean * phi
shape2 <- (1-y_mean) * phi
y <- rbeta(100, shape1, shape2)
dat <- data.frame(x, y)

# custom family
ord_beta_reg <- custom_family("ord_beta_reg",
                              dpars=c("mu","phi","cutzero","cutone"),
                              links=c("logit","log",NA,NA),
                              lb=c(NA,0,NA,NA),
                              type="real")

# stan code for density of the model
stan_funs <- "real ord_beta_reg_lpdf(real y, real mu, real phi, real cutzero, real cutone) {

    //auxiliary variables
    real mu_logit = logit(mu);
    vector[2] thresh;
    thresh[1] = cutzero;
    thresh[2] = cutzero + exp(cutone);
    
  if(y==0) {
      return log1m_inv_logit(mu_logit - thresh[1]);
    } else if(y==1) {
      return log_inv_logit(mu_logit  - thresh[2]);
    } else {
      return log(inv_logit(mu_logit  - thresh[1]) - inv_logit(mu_logit - thresh[2])) +
                beta_proportion_lpdf(y|mu,phi);
    }
  }"

stanvars <- stanvar(scode=stan_funs,block="functions")

# For pulling posterior predictions
posterior_predict_ord_beta_reg <- function(i, draws, ...) {
  mu <- draws$dpars$mu[, i]
  phi <- draws$dpars$phi
  cutzero <- draws$dpars$cutzero
  cutone <- draws$dpars$cutone
  N <- length(phi)
  
  thresh1 <- cutzero
  thresh2 <- cutzero + exp(cutone)
  
  pr_y0 <- 1 - plogis(qlogis(mu) - thresh1)
  pr_y1 <- plogis(qlogis(mu) - thresh2)
  pr_cont <- plogis(qlogis(mu)-thresh1) - plogis(qlogis(mu) - thresh2)
  out_beta <- rbeta(n=N,mu*phi,(1-mu)*phi)
  
  # now determine which one we get for each observation
  outcomes <- sapply(1:N, function(i) {
    sample(1:3,size=1,prob=c(pr_y0[i],pr_cont[i],pr_y1[i]))
  })
  
  final_out <- sapply(1:length(outcomes),function(i) {
    if(outcomes[i]==1) {
      return(0)
    } else if(outcomes[i]==2) {
      return(out_beta[i])
    } else {
      return(1)
    }
  })
  
  final_out
  
}

# For posterior expectations - note this duplicates the sampling in
# posterior_predict rather than computing the analytic mean (likely why
# conditional_effects looks odd); pp_expect below computes the mean directly
posterior_epred_ord_beta_reg <-  function(draws) {
  mu <- draws$dpars$mu
  phi <- draws$dpars$phi
  cutzero <- draws$dpars$cutzero
  cutone <- draws$dpars$cutone
  N <- length(phi)
  
  thresh1 <- cutzero
  thresh2 <- cutzero + exp(cutone)
  
  pr_y0 <- 1 - plogis(qlogis(mu) - thresh1)
  pr_y1 <- plogis(qlogis(mu) - thresh2)
  pr_cont <- plogis(qlogis(mu)-thresh1) - plogis(qlogis(mu) - thresh2)
  out_beta <- rbeta(n=N,mu*phi,(1-mu)*phi)
  
  # now determine which one we get for each observation
  outcomes <- sapply(1:N, function(i) {
    sample(1:3,size=1,prob=c(pr_y0[i],pr_cont[i],pr_y1[i]))
  })
  
  final_out <- sapply(1:length(outcomes),function(i) {
    if(outcomes[i]==1) {
      return(0)
    } else if(outcomes[i]==2) {
      return(out_beta[i])
    } else {
      return(1)
    }
  })
  
  final_out
  
}


# for calculating marginal effects/conditional expectations
pp_expect_ord_beta_reg <- function(draws) {
  
  cutzero <- draws$dpars$cutzero
  cutone <- draws$dpars$cutone
  
  mu <- draws$dpars$mu
  
  thresh1 <- cutzero
  thresh2 <- cutzero + exp(cutone)
  
  low <- 1 - plogis(qlogis(mu) - thresh1)
  middle <- plogis(qlogis(mu)-thresh1) - plogis(qlogis(mu) - thresh2)
  high <- plogis(qlogis(mu) - thresh2)
  
  low*0 + middle*mu + high
}

# for calculating LOO and Bayes Factors
log_lik_ord_beta_reg <- function(i, draws) {
  
  mu <- draws$dpars$mu[,i]
  phi <- draws$dpars$phi
  y <- draws$data$Y[i]
  cutzero <- draws$dpars$cutzero
  cutone <- draws$dpars$cutone
  
  thresh1 <- cutzero
  thresh2 <- cutzero + exp(cutone)
  
  if(y==0) {
    out <- log(1 - plogis(qlogis(mu) - thresh1))
  } else if(y==1) {
    out <- log(plogis(qlogis(mu) - thresh2))
  } else {
    out <- log(plogis(qlogis(mu)-thresh1) - plogis(qlogis(mu) - thresh2)) + dbeta(y,mu*phi,(1-mu)*phi,log=T)
  }
  
  out
  
}

priors <- set_prior("normal(0,5)",class="b") + 
  prior(constant(0),class="b", coef="Intercept") +
  prior_string("target += normal_lpdf((cutzero + exp(cutone)) - cutzero|0,3) + cutone",check=F) +
  set_prior("exponential(.1)",class="phi")

priors_phireg <- set_prior("normal(0,5)",class="b") + 
  prior(constant(0),class="b",coef="Intercept") +
  prior_string("target += normal_lpdf((cutzero + exp(cutone)) - cutzero|0,3) + cutone",check=F)

bfmod <- brms::bf(y ~ 0 + top + (bot - top) / (1 + exp((ec50 - x) * exp(beta))),
                  bot + ec50 + top + beta ~ 1,
                  nl = TRUE)

my_prior <- 
            c(prior_string("normal(5, 8)", nlpar = "top"),
              prior_string("normal(-5, 8)", nlpar = "bot"),
              prior_string("uniform(1, 3)", nlpar = "ec50"),
              prior_string("normal(0, 2)", nlpar = "beta"))  +
prior_string("target += normal_lpdf((cutzero + exp(cutone)) - cutzero|0,3) + cutone",check=F) +
  set_prior("exponential(.1)",class="phi")

my_inits <- list(list(b_beta = array(3), b_bot = array(0.1),
                      b_ec50 = array(2), b_top = array(0.9)),
                 list(b_beta = array(3), b_bot = array(0.1),
                      b_ec50 = array(2), b_top = array(0.9)),
                 list(b_beta = array(3), b_bot = array(0.1),
                      b_ec50 = array(2), b_top = array(0.9)),
                 list(b_beta = array(3), b_bot = array(0.1),
                      b_ec50 = array(2), b_top = array(0.9)))
tt <- brm(formula = bfmod, data = dat, prior = my_prior, stanvars = stanvars, 
          family = ord_beta_reg, init = my_inits, chains = 4, iter = 5e3)
expose_functions(tt, vectorize = TRUE)
tt_ce <- conditional_effects(tt)
plot(tt_ce, points = TRUE)
# looks weird, not sure why - make plot manually using the predict method
pred_vals <- predict(tt, newdata = data.frame(x=seq(1,3,length=100)))
plot(dat$x, dat$y)
lines(seq(1,3,length=100), pred_vals[,"Estimate"])
lines(seq(1,3,length=100), pred_vals[,"Q2.5"], lty=3)
lines(seq(1,3,length=100), pred_vals[,"Q97.5"], lty=3)

OK, I think I need to update the code, as brms doesn’t use pp_expect anymore. Also, the specification of the model formula should be 0 + Intercept + covariates instead of just 0 + covariates, unless there is something I missed in terms of your model specification.
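
For reference, something like this should work as a replacement for the sampling-based posterior_epred_ord_beta_reg above (a sketch: it is just the pp_expect function renamed to the newer posterior_epred_* convention, computing the analytic mean rather than sampling outcomes):

posterior_epred_ord_beta_reg <- function(prep) {
  cutzero <- prep$dpars$cutzero
  cutone <- prep$dpars$cutone
  mu <- prep$dpars$mu

  thresh1 <- cutzero
  thresh2 <- cutzero + exp(cutone)

  middle <- plogis(qlogis(mu) - thresh1) - plogis(qlogis(mu) - thresh2)
  high <- plogis(qlogis(mu) - thresh2)

  # E[y] = 0 * Pr(y = 0) + mu * Pr(continuous) + 1 * Pr(y = 1)
  middle * mu + high
}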