Suppose you sow a known number of seeds—0 to 100—onto patches of ground and observe how many seedlings germinate. Simultaneously, additional seeds blow onto your study patches, causing the number of germinants to exceed the number of seeds sown in some cases. If we assume that the rate of background seed addition and germination rates are constant across patches, I think it ought to be possible to recover the germination rate, especially if background seed addition is fairly low compared to the range of seeds added. Is it possible to fit such a model with brms? If so, what is the correct use of aterms needed to specify it?

# set parameters
pr_germ   <- 0.3  # germination probability
mu_bkgd   <- 5    # mean number of background seeds per patch
disp_bkgd <- 3    # negative-binomial dispersion of background seeds

# simulate data
set.seed(356456)
dat <- data.frame(
  seeds_added = rep(c(0L, 5L, 10L, 50L, 100L), each = 5),       # sown seeds
  seeds_bkgd  = rnbinom(n = 25, mu = mu_bkgd, size = disp_bkgd) # unknown seeds
)
dat$seeds_total <- dat$seeds_added + dat$seeds_bkgd
dat$n_germ <- rbinom(n = 25, size = dat$seeds_total, prob = pr_germ) # number of germinants
dat$seeds_est <- dat$seeds_added                    # lower bound of seeds_total
obs_dat <- dat[, c("seeds_added", "n_germ", "seeds_est")] # we observe only this
# fit model
# fit model
library(brms)
mod <- brm(
  bf(n_germ | trials(seeds_est) ~ 1, family = binomial()) +
    bf(seeds_est | cens("right") ~ 1 + seeds_added, family = negbinomial()) +
    set_rescor(FALSE),
  # b_seedsest_seeds_added is fixed to 1
  # (i.e., each seed added increments the total seed count by 1)
  prior = prior(constant(1), class = "b", coef = "seeds_added", resp = "seedsest"),
  data = obs_dat
)

This produces an error because in some replicates the number of successes (germinants) exceeds the number of trials:

Error: Number of trials is smaller than the number of events.

I might have thought it reasonable to assume that the number of background seeds coming in is Poisson distributed, but your negative-binomial approach should be similar in implementation. Your example also assumes that the germination rate is constant; I think you could eventually relax this assumption if desired, especially if you’re willing to assume that variation in the germination of the “background” seeds mirrors variation in the germination rate of the added seeds.

In general, both a higher germination rate and a higher rate of background seed deposition will increase the observed counts. Fortunately, two features of the data should allow you to distinguish the two. First, the binomial sampling variation might be distinguishable from the Poisson (or negative-binomial) variation. (But caution: as the number of trials gets large, the binomial distribution converges towards the Poisson.) Second, and more powerfully, you have known variation in the number of binomial trials, which we expect to be independent of any variation in the background seed deposition rate. Thus the model is y = a + b, where a \sim \text{binomial}(k, p) and b \sim \text{Poisson}(\lambda). (I’ll use the Poisson in this example, but it can be replaced with the negative binomial.)

Note that on the Poisson side of the model the germination rate is completely non-identified from the background deposition rate: background germinants arise from Poisson-distributed seeds thinned by germination, so only the product of the deposition rate and the germination rate enters the likelihood.
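This non-identifiability is easy to verify numerically: Poisson-distributed seeds with mean \lambda, each germinating independently with probability p, yield Poisson(\lambda p) germinants (binomial thinning), so any (\lambda, p) pair with the same product gives an identical likelihood. A quick base-R check (the particular values are arbitrary):

```r
# Poisson thinning: background germinants ~ Poisson(lambda * p),
# so only the product lambda * p is identified from these counts.
lik1 <- dpois(0:5, lambda = 10 * 0.3) # lambda = 10, p = 0.3
lik2 <- dpois(0:5, lambda = 5 * 0.6)  # lambda = 5,  p = 0.6
all.equal(lik1, lik2)                 # identical likelihoods
```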

To fit this model exactly, we need to marginalize over all of the possible combinations of a and b. Since a + b = y, there are y + 1 such combinations, ranging from (0, y) to (y, 0), so the marginalization shouldn’t be prohibitively expensive. I don’t think you can achieve this in brms except via a custom family, so it’ll require a bit of Stan code to get this running. You might manage some kind of approximate solution in pure brms, but to fit this model exactly I think you’ll need to resort to explicit hand-coded marginalization in Stan.
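As a sanity check outside of brms/Stan, the marginalized likelihood is easy to write down directly in R. This is only a sketch under the assumptions above (constant germination probability p, Poisson background germinants with mean lambda); the function and parameter names are my own, and the data are re-simulated inline so the snippet is self-contained:

```r
# Marginal probability of observing y germinants from k sown seeds:
# sum over b = number of background germinants (Poisson), with the
# remaining y - b germinants coming from the sown seeds (binomial).
loglik_one <- function(y, k, p, lambda) {
  b <- max(0, y - k):y  # sown seeds contribute at most k germinants
  log(sum(dbinom(y - b, size = k, prob = p) * dpois(b, lambda)))
}

# quick check that maximum likelihood recovers the parameters
set.seed(1)
k <- rep(c(0L, 5L, 10L, 50L, 100L), each = 5)     # sown seeds
y <- rbinom(25, size = k + rpois(25, 5), prob = 0.3) # observed germinants
nll <- function(par) {
  p <- plogis(par[1]); lambda <- exp(par[2])      # unconstrained scale
  -sum(mapply(loglik_one, y, k, MoreArgs = list(p = p, lambda = lambda)))
}
fit <- optim(c(0, 0), nll)
plogis(fit$par[1]) # should be near the true p = 0.3
exp(fit$par[2])    # should be near 5 * 0.3 = 1.5 (mean background *germinants*)
```

Note that this parameterization only estimates the mean number of background germinants, not background seeds; per the non-identifiability above, that is all the data can tell you.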

Caution: a Poisson-distributed count parameter in a binomial distribution is not the same thing as the sum of a Poisson and a binomial! The former marginalizes to a Poisson (this is binomial thinning); the latter does not. To see intuitively that this is the case, suppose that \lambda is very small: the sum is then approximately equal to the binomial term alone, which is not Poisson.
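A short simulation makes the distinction concrete (a sketch; the particular values of k, p, and lambda are arbitrary). A Poisson number of trials thinned by a binomial is again Poisson, with mean equal to variance, while the binomial-plus-Poisson sum is underdispersed relative to its mean:

```r
set.seed(42)
n <- 1e5; k <- 50; p <- 0.3; lambda <- 10

# (1) Poisson trials, binomial thinning: marginally Poisson(lambda * p)
y_thinned <- rbinom(n, size = rpois(n, lambda), prob = p)

# (2) binomial plus independent Poisson: NOT Poisson
y_sum <- rbinom(n, size = k, prob = p) + rpois(n, lambda)

c(mean(y_thinned), var(y_thinned)) # both near lambda * p = 3
c(mean(y_sum), var(y_sum))         # mean near k*p + lambda = 25,
                                   # variance near k*p*(1-p) + lambda = 20.5
```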