Creating polynomial contrast for parameters

Hi everyone,

I fitted a model using brms using the following formula:

first1.1 <- brm(data    = first,
                family  = multinomial(refcat = NA, link = logit),
                formula = y | trials(total) ~ 1 + (1 | strategy/subject))

I extracted samples from the posterior predictive distribution (I used add_fitted_draws from the tidybayes package, which is equivalent to posterior_linpred). Now I would like to test whether the different levels of y follow a linear pattern. Under NHST I would follow up with a polynomial contrast, and I was wondering whether something similar is possible here.

My approach so far has been to transform the log odds into a standardized difference (by multiplying them by sqrt(3)/pi), and then apply orthogonal coding to these values (weights -3, -1, 1, 3, since I have 4 levels here). Is this the right approach? I have seen that the hypothesis function is used for follow-up comparisons between parameters, but I am not sure how to use it for orthogonal contrasts (if that is possible at all).
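One way to feed an orthogonal contrast to hypothesis is to build the contrast as a hypothesis string. This is only a sketch: the parameter names (muA_Intercept etc.) are assumptions and would need to be checked against the actual fit (e.g. with variables(first1.1)):

```r
# Build a linear-contrast hypothesis string for brms::hypothesis().
# Parameter names below are hypothetical -- check them against the real fit.
w    <- c(-3, -1, 1, 3)
pars <- c("muA_Intercept", "muB_Intercept", "muC_Intercept", "muD_Intercept")
h    <- paste(paste(paste0(w, "*", pars), collapse = " + "), "= 0")
h
# then, on the fitted model: hypothesis(first1.1, h)
```

The string ends up as a weighted sum of the four parameters set against 0, which hypothesis() can evaluate directly.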

My guess would be to weight the samples from each level by the orthogonal coding, add them up, and then check whether the result lies within a ROPE around 0 (given that a linear contrast assumes equal differences between levels)?
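The mechanics of that weighting idea can be sketched with made-up draws; in practice the draws matrix would hold the posterior samples for the four levels (one column each), and the level names, values, and ROPE width below are purely illustrative:

```r
# Made-up posterior draws: 4000 samples for each of four hypothetical levels.
set.seed(1)
draws <- cbind(A = rnorm(4000, 0.2, 0.1), B = rnorm(4000, 0.4, 0.1),
               C = rnorm(4000, 0.6, 0.1), D = rnorm(4000, 0.8, 0.1))

w <- c(-3, -1, 1, 3)                 # orthogonal linear-contrast weights
contrast <- as.vector(draws %*% w)   # one weighted sum per posterior draw

rope <- c(-0.1, 0.1)                 # illustrative width, a domain choice
mean(contrast > rope[1] & contrast < rope[2])  # share of posterior in ROPE
```

This gives a full posterior distribution for the contrast, so any summary (ROPE proportion, interval, etc.) is then straightforward.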

Is this approach reasonable? Is there a more straightforward way of doing this?

Thanks for your help

Sorry, your question fell through. I can't comment much on hypothesis as I don't really use Bayes factors. Overall, I find it much easier to evaluate models via their predictions; I've written about this in a different context here: Multivariate model interpreting residual correlation and group correlation - feel free to ask for clarifications. (This lets you, for example, test directly whether your predictions are within some ROPE.)

Does that help at least a bit?

No worries, Martin. Actually, that is the approach I have used so far (I found that reply of yours a while ago and it was pretty helpful, thanks). The question is more about what to do once I have the posterior predictions: I would like to check whether they follow a linear trend, i.e. whether the increments across levels are equally spaced (e.g. A-B = B-C = C-D). What I have done so far is compare these differences against each other; if they don't differ, that would imply the levels are increasing at the same rate. To say that the levels follow a linear pattern, those comparisons should fall within a ROPE around 0, but I don't know whether I should use the same ROPE I used for the simple comparisons here, or whether there is a less convoluted way of testing this.

That sounds roughly reasonable to me (the devil could be in the details of how this quantity connects to your actual research question, but that's your call as the domain expert).

The ROPE should certainly be informed by the domain, and there is IMHO no general reason why the same ROPE should be used for multiple different questions. You might also want to do something like actually fitting (for each sample separately) a line through the four values and taking the residual sum of squares, giving you a posterior distribution for this residual, which might be easier to interpret.
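The per-draw residual-sum-of-squares idea might look roughly like this (a sketch with made-up draws; in practice each row would be one posterior draw of the four level values):

```r
# Made-up posterior draws: each row is one draw of the four level values.
set.seed(1)
draws <- cbind(rnorm(4000, 0.2, 0.1), rnorm(4000, 0.4, 0.1),
               rnorm(4000, 0.6, 0.1), rnorm(4000, 0.8, 0.1))

x <- 1:4  # equally spaced codes for the four levels

# For each draw, fit a straight line through the four values and keep the
# residual sum of squares -- a per-draw measure of deviation from linearity.
rss <- apply(draws, 1, function(y) sum(resid(lm(y ~ x))^2))

quantile(rss, c(0.5, 0.95))  # summaries of the posterior RSS
```

A perfectly linear set of four values would give an RSS of exactly 0, so the posterior of rss directly answers "how far from a straight line are these levels?".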

Also, there is IMHO nothing wrong with just reporting an appropriate posterior interval. For a positive quantity, it might make sense to report not the central interval but a 95% interval from zero up, i.e. the a such that P(RSS < a) = 95%.
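That one-sided interval is just a quantile of the posterior RSS draws; a minimal sketch (using an exponential stand-in for the real posterior draws):

```r
# Stand-in for posterior RSS draws; in practice use the rss vector computed
# from the model's posterior samples.
set.seed(1)
rss <- rexp(4000, rate = 10)

a <- quantile(rss, 0.95)  # the a with P(RSS < a) = 95%
# report the interval [0, a]
```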

Alternatively, you can build a model that forces a linear trend (by coding the levels as real numbers and treating the result as a continuous predictor) and then use loo to compare this against the more expressive model where each category can differ freely. (This would answer a slightly different question: whether not treating this as a linear trend improves prediction.)
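A rough sketch of that recoding, assuming the data can be laid out with the category as a predictor (e.g. in long format, one row per category); the level names and model formulas below are hypothetical:

```r
# Hypothetical level names; recode as equally spaced numbers.
lev     <- factor(c("A", "B", "C", "D"))
lev_num <- as.numeric(lev)  # 1, 2, 3, 4

# Then (not run here) fit both models and compare predictively, roughly:
#   m_linear <- brm(outcome ~ lev_num + ..., data = d)  # forced linear trend
#   m_free   <- brm(outcome ~ lev     + ..., data = d)  # each level free
#   loo(m_linear, m_free)
```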


That makes a lot of sense, thanks for your input!