Logit transformation of parameters in non-linear binomial model

Again, big thanks for sharing your knowledge. And thanks for pointing out the typos in the code. I renamed some of the variables while writing the post to add clarity but only added more confusion! In my actual script all the names are correct.

I’ve never extracted draws from a non-linear model, but I followed this post. Let’s consider only one species for simplicity, so that Tp ~ 1 + (1|site).

I assume getting my Tp results would look like this:

# per-draw, per-observation values of the non-linear parameter Tprev (logit scale)
Tp.fit <- fitted(mod, nlpar = 'Tprev', summary = FALSE)
# inverse logit to get back to the probability scale (same as plogis())
inv_logit <- function(x) exp(x) / (1 + exp(x))
Tp.fit.inv <- inv_logit(Tp.fit)
# posterior mean of the true prevalence
mean(Tp.fit.inv)
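
I guess the equivalent check on the raw draws would be something like this (a sketch that only uses the population-level Tprev intercept and ignores the site-level deviations):

# as_draws_df() comes with recent brms / the posterior package
draws <- as_draws_df(mod)
# intercept of the non-linear parameter Tprev, still on the logit scale
Tp_int <- draws$b_Tprev_Intercept
# back-transform each draw once, then summarise
mean(plogis(Tp_int))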

I also tried prepare_predictions(), with the same result: Tprev = 0.53, which is way too high given that my apparent prevalence is ~0.28.

The estimate for Se was also very low, ~0.6 (much lower than the prior specifies). I compared the results with truePrev() from the prevalence package. It uses basically the same model but without a random intercept (and it runs in JAGS). It gives me Tprev = 0.37, using the same Se and Sp priors, which sounds more reasonable.
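
The comparison was roughly along these lines (a sketch with placeholder counts; Se and Sp are shown as fixed values here for brevity, but I actually passed the same beta priors as in the brms model):

library(prevalence)
# x = number of test-positive samples, n = total number tested (placeholders)
truePrev(x = 28, n = 100, SE = 0.90, SP = 0.95)
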
Moreover, when I omit the inv_logit function in my model, the returned Tprev estimate gives me 0.37 prevalence, and the Se estimate is also much closer to what I specified in the priors. From that I conclude that inv_logit is either messing up my results or changing them in a way I just cannot understand. Unfortunately, without this transformation I get a lot of divergent transitions because of the random intercept, so I circle back to the very beginning. I’ve learned a lot, but still not enough to really comprehend what I am actually doing. I tried tweaking the priors on Se and Sp and tightening the priors on Tprev as you suggested, but it does not change the outcome. The only model that works well and gives me straightforward estimates is the no-logit-transformation, no-random-intercept model, and I am afraid I am stuck with it.
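
To be concrete, the logit-scale version I keep referring to is roughly this (one species only, Count being the 0/1 test result per sample; the priors are placeholders rather than my exact values):

mod_logit <- brm(
  bf(Count ~ inv_logit(Tprev) * inv_logit(Se) +
       (1 - inv_logit(Tprev)) * (1 - inv_logit(Sp)),
     Tprev ~ 1 + (1 | site),
     Se + Sp ~ 1,
     nl = TRUE),
  family = bernoulli(link = 'identity'),
  prior = c(
    prior(normal(0, 1.5), nlpar = 'Tprev'),
    # logit-scale priors roughly centred on Se ~ 0.9 and Sp ~ 0.95 (placeholders)
    prior(normal(2.2, 1), nlpar = 'Se'),
    prior(normal(3, 1), nlpar = 'Sp')),
  data = data, chains = 4)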

If I understand you correctly, you mean something like:

prev <- data %>%
  group_by(site, species) %>%
  summarise(Aprev = mean(Count), n = n())

mod <- brm(
  bf(Aprev | weights(n) ~ Tprev * Se + (1 - Tprev) * (1 - Sp),
     Tprev ~ species + (1 | site),
     Se + Sp ~ 1,
     nl = TRUE),
  family = gaussian(link = 'identity'), data = prev, chains = 4)

Population-Level Effects: 
                  Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Tprev_Intercept       0.64      0.15     0.34     0.93 1.00      937     1132
Tprev_speciesB        0.44      0.03     0.37     0.51 1.00     2578     2248
Tprev_speciesC        0.96      0.03     0.88     1.00 1.00     1854     1431
Se_Intercept          0.26      0.04     0.18     0.34 1.00     1338     1799
Sp_Intercept          0.40      0.04     0.31     0.48 1.00     1440     1862

I used various priors with the same effect. The Se and Sp (and the prevalences) don’t look right.
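
For example, one set of priors I tried on this version looked something like this (the values are illustrative; they are on the natural scale because the parameters enter the formula directly):

priors <- c(
  prior(normal(0.30, 0.10), nlpar = 'Tprev'),
  # the test should have high sensitivity and specificity (illustrative values)
  prior(normal(0.90, 0.05), nlpar = 'Se'),
  prior(normal(0.95, 0.05), nlpar = 'Sp'))
# passed to the model above via brm(..., prior = priors)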

I’m following advice from ‘Statistical Rethinking’: run the model with one chain first and, if everything seems alright, rerun it with four chains. This also speeds up debugging, because nothing is alright on the first try! The adapt_delta setting is a leftover from the previous version of the model without the logistic transformation; I reran it with default controls and it behaves the same.
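
In practice that just means something like this (a sketch reusing the aggregated model above; update() refits the same model with new sampler arguments):

# quick single-chain run while debugging the specification
mod1 <- brm(
  bf(Aprev | weights(n) ~ Tprev * Se + (1 - Tprev) * (1 - Sp),
     Tprev ~ species + (1 | site),
     Se + Sp ~ 1,
     nl = TRUE),
  family = gaussian(link = 'identity'), data = prev, chains = 1)
# once it looks sane, rerun with four chains and default controls
mod4 <- update(mod1, chains = 4)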

I will keep banging my head against a wall. But I definitely learned a lot!