Hi All, first post. I’m fitting what is known as a functional response to a dataset of Peregrine Falcon prey consumption. Here is my model definition:
Del ~ Poisson(mu) # number of prey deliveries to chicks in a day
mu = a * N ^ (m + 1) / (1 + a * t * N ^ (m + 1)) * offset * chicks
This is the functional response formula. The offset is the number of hours the nest was observed (by camera), and chicks is the number of chicks in the nest, which converts the functional response to a per-chick rate. a, t, and m are parameters to be estimated during fitting that control the shape of the response, and N is prey density in the surrounding landscape.
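To make the mean function concrete, here is a minimal numerical sketch (Python/NumPy purely for illustration; the parameter values are made up, and the model itself is fit in ulam):

```python
import numpy as np

def expected_deliveries(N, a, t, m, hours_observed, chicks):
    """Expected daily prey deliveries: generalized functional response
    scaled by observation time and brood size (illustrative only)."""
    response = a * N ** (m + 1) / (1 + a * t * N ** (m + 1))
    return response * hours_observed * chicks

# Made-up values: moderate prey density, 10 h of camera time, 3 chicks
mu = expected_deliveries(N=2.0, a=0.5, t=0.1, m=0.5,
                         hours_observed=10, chicks=3)
# mu is about 37 deliveries over the watch; as N grows the response
# saturates at 1/t, so mu approaches hours_observed * chicks / t
```

This also shows why t is usually interpreted as a handling time: it sets the saturation ceiling of the response.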
I have taken the further step of allowing a and t to vary with the number of chicks in the nest and their age:
a = exp(alpha_a + beta_chick_a * chicks + beta_age_a * age)
t = exp(alpha_t + beta_chick_t * chicks + beta_age_t * age)
Both a and t must be > 0 for the model to make biological sense, hence the exp().
Priors for the submodel terms:
alpha_a, alpha_t ~ N(0, 5)
beta_chick_a, beta_chick_t, beta_age_a, beta_age_t ~ N(0, 1)
m ~ halfnorm(0, 1)
This prior constrains m to be >= 0, so the exponent m + 1 in the model formula is >= 1, which is also necessary for biological plausibility.
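As a quick sanity check on these priors, a prior-predictive draw shows how wide the implied values of a can get when covariates are on their raw scales (Python/NumPy for illustration only; the chick and age values are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Draws from the stated priors for the submodel of a
alpha_a = rng.normal(0, 5, n)
beta_chick_a = rng.normal(0, 1, n)
beta_age_a = rng.normal(0, 1, n)

# Implied attack rate for a hypothetical nest: 3 chicks, age 20 days
a = np.exp(alpha_a + beta_chick_a * 3 + beta_age_a * 20)

# With age unstandardized, the log-scale prior SD is roughly
# sqrt(5^2 + 3^2 + 20^2) ~ 20, so the upper tail of a is astronomical
print(np.quantile(a, [0.5, 0.99]))
```

If the tail looks absurd like this, standardizing chicks and age (and tightening the intercept priors) often helps both the prior and the sampler.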
This model runs cleanly in ulam from the rethinking package, although it does exhibit very high posterior correlations between the terms in the linear submodels for a and t.
My difficulty arises when I introduce a random effect for nest ID into the submodel for a, i.e.,
a = exp(alpha_a + beta_chick_a * chicks + beta_age_a * age + zz[nestID])
zz[nestID] ~ N(0, sigma_zz)
sigma_zz ~ halfnorm(0, 1)
then I get divergent iterations, R-hat warnings, and poorly mixed chains.
I have tried the non-centered parameterization, i.e.,
a = exp(alpha_a + beta_chick_a * chicks + beta_age_a * age + zz[nestID] * sigma_zz)
zz[nestID] ~ N(0, 1)
sigma_zz ~ halfnorm(0, 1)
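For reference, the centered and non-centered versions imply the same marginal prior for the nest effect; they differ only in the posterior geometry the sampler has to explore. A quick NumPy check of that equivalence (sigma_zz value made up):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_zz = 0.7  # arbitrary illustrative value
n = 100_000

centered = rng.normal(0, sigma_zz, n)          # zz ~ N(0, sigma_zz)
non_centered = rng.normal(0, 1, n) * sigma_zz  # zz ~ N(0, 1), then scaled

# Both samples have standard deviation ~ 0.7
print(centered.std(), non_centered.std())
```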
but this does not solve the problem. I can actually fit a much more complicated joint model for multiple prey species, one that also accounts for unidentified prey and for uncertainty in the prey density estimates. That model likewise runs cleanly when no random effects are present (though with similarly high posterior correlations), but fails in the same way (divergent iterations and poor mixing) once random effects are included.
I’m still fairly new to Bayesian modelling and would be really grateful for any advice. I’m at a loss as to a good approach.
Thanks!