Hi,
sorry, your question seems to have fallen through. I honestly don't understand those models very well. I fear, however, that you would be pushing brms to its limits and might get better results by moving to pure Stan. It looks like a borderline case.
It all seems roughly correct up to:
I don’t think that would work, as it would make the posterior non-smooth. I think what you want could be achieved by explicitly introducing a b_conservative parameter, i.e. something like:
bf(binary ~ a + (1 - conservative) * exp(b) * (o - x)^2 +
     conservative * exp(b_conservative) * (o - x)^2,
   a ~ 1 + (1 | ideology),  # ideology tag specific constant (prevalence)
   b ~ 1 + (1 | ideology),  # ideology tag discrimination parameter
   o ~ (1 | ideology),      # ideology tag location
   b_conservative ~ 1,
   x ~ (1 | party),         # party location
   nl = TRUE)
And then use a prior to give the Intercept for b_conservative a lower bound of 0.
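In brms that could look something like the following (the normal(0, 1) is just an illustrative choice, not a recommendation; the important part is lb = 0):

```
library(brms)

# Lower-bounded prior on the intercept of the non-linear
# parameter b_conservative, so it is constrained to be >= 0.
my_prior <- set_prior("normal(0, 1)", nlpar = "b_conservative", lb = 0)
```

You would then pass this via the `prior` argument of `brm()` together with the formula above.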
Does that make sense?
There are a couple of results on this forum for “ideal point”, and “latent factor models” also seem to lie in similar territory (e.g. Constraining Latent Factor model / bayesian probabilistic matrix factorization to remove multimodality). My impression is that those models are tough to crack, but I am not well read on this.
P.S.: It is usually good practice to repost the model formula you are after directly in the post - it took me some time to find it in the reference.