I am trying to interpret my results and am running into some issues.
My model
Multi-membership random effect: male_id and female_id
Dependent Variable:
Association Rate: number of instances / total instances in a day
Independent Variables:
Status: Territorial Males or Satellite Males
Date: day of the breeding season
Male degree: number of male conspecifics encountered on the previous day
Non-focal: promiscuity of the female on the previous day
Description:
There are a lot of 0's for male degree and non-focal, so I assumed that male_degree and non_focal would be influencing the dispersion in the model (phi).
I fit two models: one with date as a linear term and one where I modeled date as a second-degree polynomial.
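Roughly, the two formulas look like this in brms (a sketch rather than my exact code: the response columns instances and total and the data frame dat are assumed names, and it uses the built-in beta_binomial() family):

```r
library(brms)

# Linear date term; phi modeled as a function of male_degree and non_focal
f_linear <- bf(
  instances | trials(total) ~ status + date + male_degree + non_focal +
    (1 | mm(male_id, female_id)),
  phi ~ male_degree + non_focal
)

# Same model, but with date as a second-degree polynomial
f_poly <- bf(
  instances | trials(total) ~ status + poly(date, 2) + male_degree + non_focal +
    (1 | mm(male_id, female_id)),
  phi ~ male_degree + non_focal
)

fit_linear <- brm(f_linear, data = dat, family = beta_binomial())
fit_poly   <- brm(f_poly,   data = dat, family = beta_binomial())
```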
This is how I calculated the predicted probabilities:
Predicted Probabilities:
exp(b_Intercept + b_male_degree * male_degree) / (1 + exp(b_Intercept + b_male_degree * male_degree))
exp(b_Intercept + b_non_focal * non_focal) / (1 + exp(b_Intercept + b_non_focal * non_focal))
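In R that calculation looks something like this (a sketch using the fit_linear object from the sketch above; the range of male_degree values is assumed, and non_focal works the same way):

```r
# Inverse-logit of the linear predictor, averaged over posterior draws
draws <- as.data.frame(fit_linear)   # posterior draws of b_Intercept, b_male_degree, ...
male_degree_seq <- 0:10              # assumed range of male_degree values

pred_male_degree <- sapply(male_degree_seq, function(x) {
  mean(plogis(draws$b_Intercept + draws$b_male_degree * x))
})
```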
I believe the issue is the extreme intercept in the regular (linear) date model.
When I fit this model in a frequentist framework to check for multicollinearity (I put dyadic ID in as a single random effect rather than a multi-membership term), the model wouldn't converge. However, when I put date in as a second-degree polynomial, it did converge.
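The check was something along these lines (a sketch using glmmTMB and the performance package, not necessarily the exact code I ran; dyad_id is an assumed name for the dyadic ID column):

```r
library(glmmTMB)

# Frequentist beta-binomial with dyadic ID as a single random effect
m_check <- glmmTMB(
  cbind(instances, total - instances) ~ status + date + male_degree + non_focal +
    (1 | dyad_id),
  family = betabinomial(),
  data = dat
)

# Variance inflation factors for the fixed effects
performance::check_collinearity(m_check)
```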
So my question is why this is happening, and how to accurately represent the effects for the linear-date model, because that was my best model based on leave-one-out cross-validation.
I am not sure why the model with a linear term for date has such a large error in the Intercept vs the model with the polynomial. Perhaps you need to use better priors. I’m not sure anyone could answer this based on just the model output…
Also, in general I would recommend using a spline instead of a polynomial. Splines are easy to fit in brms, using the s() syntax from mgcv.
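For example, something like this (reusing the assumed column names from the sketches above):

```r
# Replace the polynomial in date with a smooth term via mgcv's s() syntax
f_spline <- bf(
  instances | trials(total) ~ status + s(date) + male_degree + non_focal +
    (1 | mm(male_id, female_id)),
  phi ~ male_degree + non_focal
)

fit_spline <- brm(f_spline, data = dat, family = beta_binomial())
```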
When you used LOO, did you get a warning about high Pareto k values?
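Something like the following will print the Pareto k diagnostics and compare the two models (fit object names assumed from the sketches above):

```r
loo_linear <- loo(fit_linear)
loo_poly   <- loo(fit_poly)

print(loo_linear)                  # flags observations with high Pareto k (conventionally > 0.7)
loo_compare(loo_linear, loo_poly)  # ELPD difference between the two models
```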
I would be pretty suspicious of a model where the intercept had an error that was so high.
For the plots, I would use conditional_effects() as I mentioned in your other post.
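For instance (fit object name assumed):

```r
# Estimated effects on the response scale (expected values by default)
conditional_effects(fit_linear, effects = "male_degree")
conditional_effects(fit_linear, effects = "non_focal")
conditional_effects(fit_linear, effects = "date")
```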
phi_Intercept gives the link-scale value of phi when all predictors are held at their means. That it is non-zero ("significant") means only that the link-scale value of phi when all predictors are held at their means is something other than zero. Any value, zero or otherwise, influences the dispersion in the model.
phi is fitted with a log link. Since the intercept is positive, the resulting beta-binomial has higher precision in the beta component (and thus lower dispersion) than a beta-binomial distribution with phi equal to exp(0) = 1. Of course it still has higher dispersion than a plain old binomial distribution.
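To put a number on that, you can back-transform the link-scale estimate yourself (fit object name assumed from the sketches above):

```r
# exp() undoes the log link; values > 1 mean higher precision and lower dispersion
exp(fixef(fit_linear)["phi_Intercept", "Estimate"])

# For comparison, exp(0) = 1 is the reference point mentioned above
exp(0)
```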