I am new to Bayesian statistics and the brms package. I ran an ordinal logistic regression, adjacent-category family, with category-specific effects to predict GPA (an ordinal variable with five levels: less than 5.9, 6–6.9, 7–7.9, 8–8.9, and 9–10) using two predictors: Family Income (an ordinal variable with five levels) and Cognitive Reflection (a continuous variable: IRT scores).
I specified the model as follows:
model <- brm(formula = GPA ~ cs(Gender) + cs(Family_Income), data = datos1, family = acat())
Nevertheless, I have had trouble interpreting the results, given that the 95% CIs of the following coefficients exclude zero:
Coefficient                Estimate   l-95% CI   u-95% CI
Family Income[1,2]         -0.78      -1.40      -0.12
Cognitive Reflection[4]     0.43       0.20       0.68
First of all, I wouldn’t engage in dichotomous thinking too much. In other words, don’t over-interpret the arbitrary threshold of 5%.
Then, you should check that you are not overfitting by using category-specific effects. I would recommend fitting one model with only standard effects and then comparing the two models using loo.
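A sketch of that comparison, assuming your fitted model above is `model` and your data frame is `datos1` (the name `model_std` is just illustrative):

```r
library(brms)

# Same model, but with standard (not category-specific) effects
model_std <- brm(GPA ~ Gender + Family_Income,
                 data = datos1, family = acat())

# Approximate leave-one-out comparison of the two models
loo(model, model_std)
```

`loo()` reports the ELPD difference and its standard error; when the difference is small relative to its SE, the models are hard to distinguish on predictive grounds.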
Thank you for your reply. I did in fact run a model comparison between a standard-effects model and a category-specific-effects model, and the former had the lower LOOIC.
Model                         LOOIC     SE
Model 1 (Standard effects)    2955.92   37.52
Model 2 (Specific effects)    2976.33   42.02
Model 1 - Model 2             -20.41    15.33
Nevertheless, I am still unsure which marginal effects plots are the correct ones to report.
marginal_effects(model): running this produced a warning: "Predictions are treated as continuous variables in 'marginal_effects' by default, which is likely invalid for ordinal families. Please set 'categorical' to TRUE."
marginal_effects(model, categorical = TRUE): [plot omitted]
marginal_effects(model, ordinal = TRUE): [plot omitted]
(The last two show the change in the probability of choosing each response category, and the change in probability is clearly more pronounced for certain categories, e.g. 3 and 4.) Does this finding suggest that a category-specific model could be more plausible?
The results may indicate that you are (slightly) overfitting when using category-specific effects for all of your predictors. That doesn't mean they don't capture something relevant between categories 3 and 4.
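One possible middle ground, sketched here as an assumption rather than something you have run, is to keep cs() only for the predictor whose effect seems to vary across response categories:

```r
library(brms)

# Category-specific effect for Family_Income only;
# Gender gets a standard (constant) effect
model_mixed <- brm(GPA ~ Gender + cs(Family_Income),
                   data = datos1, family = acat())
```

You could then include this model in the loo comparison as well.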
What does marginal_effects(model, categorical = TRUE) look like for the standard model?
Actually, they are very similar. The plot from marginal_effects(specificeffects_model, categorical = TRUE) is shown below. [plot omitted]
In conclusion, should I select the standard-effects model, even though the plot shows an effect in certain response categories?
If so, how should I report this finding?
Thanks
I think reporting the results of the model comparison via loo, and then the coefficients of the standard model, would be a reasonable choice. You may report the results of the category-specific model as well, but state that this model is probably overfitting the data.
So sorry for this follow-up after such a loooong time. I'm wondering whether it is possible to calculate the marginal effect of a given predictor on the dependent variable. For instance, for the data described here, is it possible to calculate how Gender (male vs. female) makes GPA higher or lower? I tried avg_comparisons() for my adjacent-category model, but the output was grouped by rating category.
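For reference, the call I tried looked roughly like this (a sketch assuming the marginaleffects package and the fitted brms model object `model`):

```r
library(marginaleffects)

# Average difference in predicted outcome between Gender levels;
# for an ordinal family this returns one row per response category,
# which is the grouped output described above
avg_comparisons(model, variables = "Gender")
```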