Over on this lengthy thread, I suggested an implementation of a custom family in brms for a zero-one-midpoint-inflated beta model for slider-scale data. The reasons for wanting such a thing are: 1) slider-scale data really does look this way: people hate it, are neutral about it, or love it; 2) slider-scale data generally has multiple responses per person, so it would be nice to model the different inflation components with correlated varying effects for person; and 3) this would all be super handy and convenient in brms.

I have started a new topic because the original went slightly off topic and got rather lengthy. I have had more time to think about it, and now I have some questions.

Defining a custom response distribution wouldn't seem so difficult, as there is already a `zero_one_inflated_beta` family in brms. I would think that you could simply add a branch for the midpoint inflation to the currently implemented `zero_one_inflated_beta`, like this:

```
stan_funs <- "
  real zero_one_mid_inflated_beta_lpdf(real y, real mu, real phi,
                                       real zoi, real coi, real mdi) {
    row_vector[2] shape = [mu * phi, (1 - mu) * phi];
    if (y == 0) {
      return bernoulli_lpmf(1 | zoi) + bernoulli_lpmf(0 | coi);
    } else if (y == 1) {
      return bernoulli_lpmf(1 | zoi) + bernoulli_lpmf(1 | coi);
    } else if (y == 0.5) {
      return bernoulli_lpmf(1 | mdi);
    } else {
      return bernoulli_lpmf(0 | zoi) + bernoulli_lpmf(0 | mdi)
             + beta_lpdf(y | shape[1], shape[2]);
    }
  }
"
```

Now in the previous topic that I linked to, I implemented it like this:

```
stan_funs <- "
  real zero_one_mid_inflated_beta_lpdf(real y, real mu, real phi,
                                       real zoi, real coi, real mdi) {
    row_vector[2] shape = [mu * phi, (1 - mu) * phi];
    if (y == 0) {
      return bernoulli_lpmf(1 | zoi) + bernoulli_lpmf(0 | coi);
    } else if (y == 1) {
      return bernoulli_lpmf(1 | zoi) + bernoulli_lpmf(1 | coi);
    } else if (y == 0.5) {
      return bernoulli_lpmf(1 | mdi) + bernoulli_lpmf(0 | mdi);
    } else {
      return bernoulli_lpmf(0 | zoi) + bernoulli_lpmf(0 | mdi)
             + beta_lpdf(y | shape[1], shape[2]);
    }
  }
"
```

Including the `+ bernoulli_lpmf(0 | mdi)` term in the midpoint branch seems like a mistake.

However, the latter does a much better job of estimating the midpoint inflation than the former! Why?

Here is fully reproducible code to implement the first one as shown above, with results and posterior predictive checks:

```
library(brms)
#proportion of zero-one inflation (zoi) = 200/800 = 0.25
#proportion of zoi that are ones (coi) = 100/200 = 0.5
#mid-point inflation (mdi) = 100/800 = 0.125
#mean of the beta distribution is 0.5
set.seed(1) # seed the simulation so the data are actually reproducible
y <- c(rep(0, 100), rbeta(500, 1, 1), rep(0.5, 100), rep(1, 100))
hist(y, breaks = 50)
d1 <- data.frame(y = y)
str(d1)
## TRY 1
zero_one_mid_inflated_beta <- custom_family(
  "zero_one_mid_inflated_beta",
  dpars = c("mu", "phi", "zoi", "coi", "mdi"),
  links = c("logit", "identity", "identity", "identity", "identity"),
  lb = c(0, 0, 0, 0, 0), ub = c(1, NA, 1, 1, 1),
  type = "real"
)
stan_funs <- "
  real zero_one_mid_inflated_beta_lpdf(real y, real mu, real phi,
                                       real zoi, real coi, real mdi) {
    row_vector[2] shape = [mu * phi, (1 - mu) * phi];
    if (y == 0) {
      return bernoulli_lpmf(1 | zoi) + bernoulli_lpmf(0 | coi);
    } else if (y == 1) {
      return bernoulli_lpmf(1 | zoi) + bernoulli_lpmf(1 | coi);
    } else if (y == 0.5) {
      return bernoulli_lpmf(1 | mdi);
    } else {
      return bernoulli_lpmf(0 | zoi) + bernoulli_lpmf(0 | mdi)
             + beta_lpdf(y | shape[1], shape[2]);
    }
  }
"
stanvars <- stanvar(scode = stan_funs, block = "functions")
m1 <- brm(y ~ 1, data=d1, family = zero_one_mid_inflated_beta, stanvars = stanvars, cores = 4)
m1
posterior_predict_zero_one_mid_inflated_beta <- function(i, prep, ...) {
  zoi <- get_dpar(prep, "zoi", i = i)
  coi <- get_dpar(prep, "coi", i = i)
  mdi <- get_dpar(prep, "mdi", i = i)
  mu  <- get_dpar(prep, "mu", i = i)
  phi <- get_dpar(prep, "phi", i = i)
  # one uniform draw per component decision
  hu          <- runif(prep$ndraws, 0, 1)
  one_or_zero <- runif(prep$ndraws, 0, 1)
  mid         <- runif(prep$ndraws, 0, 1)
  # midpoint draws take precedence; otherwise zero/one inflation, then beta
  ifelse(hu < zoi & mid > mdi,
         ifelse(one_or_zero < coi, 1, 0),
         ifelse(mid < mdi, 0.5,
                rbeta(prep$ndraws, shape1 = mu * phi, shape2 = (1 - mu) * phi)))
}
pp_check(m1, type='hist')
```

And here is the second implementation, which seems incorrect but does a better job:

```
## TRY 2
zero_one_mid_inflated_beta <- custom_family(
  "zero_one_mid_inflated_beta",
  dpars = c("mu", "phi", "zoi", "coi", "mdi"),
  links = c("logit", "identity", "identity", "identity", "identity"),
  lb = c(0, 0, 0, 0, 0), ub = c(1, NA, 1, 1, 1),
  type = "real"
)
stan_funs <- "
  real zero_one_mid_inflated_beta_lpdf(real y, real mu, real phi,
                                       real zoi, real coi, real mdi) {
    row_vector[2] shape = [mu * phi, (1 - mu) * phi];
    if (y == 0) {
      return bernoulli_lpmf(1 | zoi) + bernoulli_lpmf(0 | coi);
    } else if (y == 1) {
      return bernoulli_lpmf(1 | zoi) + bernoulli_lpmf(1 | coi);
    } else if (y == 0.5) {
      return bernoulli_lpmf(1 | mdi) + bernoulli_lpmf(0 | mdi);
    } else {
      return bernoulli_lpmf(0 | zoi) + bernoulli_lpmf(0 | mdi)
             + beta_lpdf(y | shape[1], shape[2]);
    }
  }
"
stanvars <- stanvar(scode = stan_funs, block = "functions")
m2 <- brm(y ~ 1, data=d1, family = zero_one_mid_inflated_beta, stanvars = stanvars, cores = 4)
m2
posterior_predict_zero_one_mid_inflated_beta <- function(i, prep, ...) {
  zoi <- get_dpar(prep, "zoi", i = i)
  coi <- get_dpar(prep, "coi", i = i)
  mdi <- get_dpar(prep, "mdi", i = i)
  mu  <- get_dpar(prep, "mu", i = i)
  phi <- get_dpar(prep, "phi", i = i)
  # one uniform draw per component decision
  hu          <- runif(prep$ndraws, 0, 1)
  one_or_zero <- runif(prep$ndraws, 0, 1)
  mid         <- runif(prep$ndraws, 0, 1)
  # midpoint draws take precedence; otherwise zero/one inflation, then beta
  ifelse(hu < zoi & mid > mdi,
         ifelse(one_or_zero < coi, 1, 0),
         ifelse(mid < mdi, 0.5,
                rbeta(prep$ndraws, shape1 = mu * phi, shape2 = (1 - mu) * phi)))
}
pp_check(m2, type='hist')
```

In both cases the midpoint inflation is over-estimated, but the over-estimation is much greater in the first implementation.

Why is that? It seems like the first one should be the correct implementation.
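As a sanity check on my own reasoning, here is a quick sketch in plain R that adds up the total probability mass each lpdf implies, plugging in the true parameter values from the simulation above. The two implementations differ only in the mass they put on `y == 0.5` (`mdi` in TRY 1, `mdi * (1 - mdi)` in TRY 2), so the helper takes that mass as an argument; the other terms follow directly from exponentiating the branches of the Stan functions:

```r
# True parameter values used to simulate the data
zoi <- 0.25   # P(response is 0 or 1)
coi <- 0.5    # P(response is 1, given it is 0 or 1)
mdi <- 0.125  # intended P(response is 0.5)

# Total probability mass implied by the lpdf: the three discrete masses
# plus the integral of the continuous part, (1 - zoi) * (1 - mdi),
# since the beta density itself integrates to 1
total_mass <- function(p_mid) {
  zoi * (1 - coi) +        # y == 0
    zoi * coi +            # y == 1
    p_mid +                # y == 0.5
    (1 - zoi) * (1 - mdi)  # 0 < y < 1, y != 0.5
}

total_mass(mdi)              # TRY 1: equals 1 + zoi * mdi = 1.03125
total_mass(mdi * (1 - mdi))  # TRY 2: equals 1 + zoi * mdi - mdi^2 = 1.015625
```

If I have this arithmetic right, neither version sums to one at these parameter values, and TRY 2 is the less badly normalized of the two, but I may be missing something, so corrections are welcome.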