Too bad! Thanks for being so kind to respond nonetheless :)
I tried calling stan_clogit but it has been running for over a day now…
library(rstanarm)
model_brms2b <- stan_clogit(
formula = choice ~ return + risk + social * SVOangle + age + gender + (1 | id),
data = df_social_fct,
strata = choiceid,
prior = normal(),
algorithm = "sampling",
adapt_delta = 0.95,
chains = 4,
iter = 2000,
cores = 4
)
I wonder if there even is a difference in using binomial instead of multinomial/categorical.
What is the motivation for doing sequential binary choices rather than multinomial? Is it to make prior specification easier? Or is this just trying to get around a limitation of brms?
I’ve always been confused by this, because you can always take the sequence of binary choices and convert it to equivalent categorical parameters. For example, if there is a phi1 probability of outcome 1 and a (1 - phi1) probability of outcomes 2 or 3, and, given that the outcome is not 1, the probability of outcome 2 is phi2, then you get the equivalent simplex (phi1, (1 - phi1) * phi2, (1 - phi1) * (1 - phi2)).
Because of the marginalization properties of multinomials, you can go back the other way, too, and calculate the probability that the result is among any subset of the outcomes.
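The stick-breaking conversion above, and its inverse via marginalization, is easy to check numerically. A minimal sketch (in Python, purely for illustration; the helper names are made up for this example):

```python
# Stick-breaking: convert conditional binary probabilities (phi1, phi2)
# into a 3-outcome simplex, and recover them via marginalization.
def to_simplex(phi1, phi2):
    # phi1 = P(outcome 1); phi2 = P(outcome 2 | not outcome 1)
    return (phi1, (1 - phi1) * phi2, (1 - phi1) * (1 - phi2))

def to_conditionals(p1, p2, p3):
    # Invert: phi1 is the marginal of outcome 1,
    # phi2 is outcome 2's probability renormalized within {2, 3}.
    return (p1, p2 / (p2 + p3))

p = to_simplex(0.5, 0.4)
assert abs(sum(p) - 1) < 1e-12          # valid simplex
phi = to_conditionals(*p)
assert abs(phi[0] - 0.5) < 1e-12        # round-trips back
assert abs(phi[1] - 0.4) < 1e-12
```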
Thanks for your input, Bob!
The experiment is supposed to be performed in a brain scanner given a successful pre-study. Identifying neural correlates of stimuli is easier with fewer stimuli, hence only two alternatives per choice.
I’m not that familiar with the lingo. Did you suggest I combine all binary choices? I’m afraid that doesn’t work in my case. Subjects made 40 binary decisions between unlabeled alternatives and alternatives are unrelated to each other (i.e., there are 80 different alternative profiles).
Yes, I was suggesting that. But in the situation you just described, there’s not a clear larger categorical structure, so now I’m confused by the link to Kruschke and splitting trees. That’s using a sequence of binary distributions to model a more general categorical distribution.
Thanks for indulging me, Bob! Yes, I understood the tree to be a sequence of binary choices as well. Importantly, the probabilities for each alternative sum to 1. That seems equivalent to each choice in my experiment. Given that, I figured perhaps someone here can show me how to do just the conditional logistic part without the rest of the tree structure. Although, looking at the very first equation of Solomon’s, have I been wrong using family=bernoulli and should I have used categorical instead? Basically, with my data in long format, I don’t know how to tell brms that probabilities need to sum to one for each choice set (two rows per choice set).
I guess part of my confusion is that I often read that multinomial (implemented with categorical in brms) is for MORE THAN 2 alternatives and thus I should use bernoulli. It seems to me that a multinomial with 2 alternatives is the same thing, but I don’t want to claim that without knowing.
I’m afraid I have no idea how to do this in brms, or if it’s even possible.
If you use a bernoulli, it will make sure that pairs of alternatives sum to 1. If you use multinomial, that’ll make all the alternatives sum to 1. I’m pretty sure that even with linear predictors, chained Bernoullis are going to be equivalent to a multinomial, but you’d need to work out the math to construct the appropriate coefficient matrix.
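For the two-alternative case specifically, the equivalence can be verified with a few lines of arithmetic: a two-category softmax over linear predictors eta1 and eta2 is exactly an inverse-logit of their difference, i.e. a Bernoulli on eta1 - eta2. A numeric sketch (in Python, for illustration only):

```python
import math

def softmax2(eta1, eta2):
    # Categorical (multinomial) probabilities for two alternatives.
    z1, z2 = math.exp(eta1), math.exp(eta2)
    return z1 / (z1 + z2), z2 / (z1 + z2)

def inv_logit(x):
    # Bernoulli success probability from a logit-scale predictor.
    return 1 / (1 + math.exp(-x))

eta1, eta2 = 0.7, -0.3
p1, p2 = softmax2(eta1, eta2)
assert abs((p1 + p2) - 1) < 1e-12
# Two-category softmax == Bernoulli on the difference of predictors:
assert abs(p1 - inv_logit(eta1 - eta2)) < 1e-12
```

So with two alternatives per choice set, a bernoulli on the difference of the alternatives’ predictors encodes the same likelihood as the categorical model.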
Conditional logistic models in brms are kind of “possible” but a bit awkward to implement. There is a closed GitHub issue discussing this: Implement Conditional logistic regression models · Issue #560 · paul-buerkner/brms · GitHub
Edit: I realize the OP already mentioned this issue. Sorry for providing redundant information.