Check this discussion about weighting.
Also, AFAIK weighting is used (in frequentist statistics!) to make the sample you have resemble the population you want to generalize to. This is why y_rep in the wt.mod model shows you that both classes are equally likely in the posterior: you basically told the model that the 2:3 ratio in the data is really a 1:1 ratio in the population.
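To see what the weighting does mechanically, here is a minimal sketch (in Python, with made-up counts of 200 ones and 300 zeroes) of inverse-frequency weighting in an intercept-only logistic regression, where the weighted MLE of the success probability is just the weighted mean of y:

```python
import numpy as np

# Hypothetical sample with a 2:3 ratio of ones to zeroes.
y = np.array([1] * 200 + [0] * 300)

# Inverse-frequency weights: the frequentist reweighting toward a 1:1 population.
w = np.where(y == 1, 1 / 200, 1 / 300)

# For an intercept-only logistic regression, the weighted MLE of p is the
# weighted mean of y, and the fitted intercept is its logit.
p_hat = np.sum(w * y) / np.sum(w)
intercept = np.log(p_hat / (1 - p_hat))
print(p_hat, intercept)  # p_hat ≈ 0.5, intercept ≈ 0: the model "sees" a 1:1 ratio
```

The weighted fit lands on p ≈ 0.5 even though the raw sample proportion is 0.4, which is exactly why y_rep looks 1:1.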
In a Bayesian model, the more natural way to give the model this information is the prior! If you have a strong prior belief that the population ratio of ones and zeroes is 1:1, then you'd put a very strong prior on the intercept term, centered at zero (assuming that all covariates are centered).
The same logic applies if your data is 2:3 but you'd expect a 1:14 ratio in the population (sorry that I didn't read that before, I should have read the thread more carefully!). A frequentist would weight the data so it matches the population ratio. As a Bayesian, you'd just put your prior information into the model.
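Concretely, the prior center for the intercept is just the logit of the believed population proportion of ones. A small sketch (the helper name is mine, purely for illustration):

```python
import numpy as np

def intercept_center(ones, zeroes):
    """Logit of the believed population proportion of ones,
    i.e. where to center the intercept prior (with centered covariates)."""
    p = ones / (ones + zeroes)
    return np.log(p / (1 - p))

print(intercept_center(1, 1))   # 1:1 belief  -> logit(1/2) = 0
print(intercept_center(1, 14))  # 1:14 belief -> logit(1/15) = log(1/14) ≈ -2.64
```

You'd then use something like a normal prior centered at that value with a small standard deviation on the intercept, depending on how strong your belief is.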
Now, all of this really doesn't matter much if a) the sample ratio of ones and zeroes matches that of the population, or b) you are only interested in the slopes (coefficients) of the covariates (which should still be consistent estimators).
And, as an aside (I think you are probably aware of this), working with something like this:
`logit(ifelse(y == 1, y - 0.01, y + 0.01))`
is really bad if y \in \{0, 1\} as in your generated data (i.e. y can only be 0 or 1). In the bit that you quote, it seems Pinheiro et al. have y \in [0, 1], i.e. y can be 0 or 1 or anything in between.
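To spell out why it's bad for strictly binary y: the shift-then-logit trick just maps every observation to one of two constants, so you end up doing ordinary linear regression on a recoded binary outcome. A quick Python demonstration:

```python
import numpy as np

# Strictly binary outcome, as in the generated data.
y = np.array([0, 1, 1, 0, 1])

# The quoted transformation: nudge away from 0/1, then take the logit.
shifted = np.where(y == 1, y - 0.01, y + 0.01)
transformed = np.log(shifted / (1 - shifted))

# Every value collapses to ±logit(0.99) = ±log(99) ≈ ±4.595.
print(np.unique(transformed))
```

With only two possible transformed values, none of the continuous-proportion machinery that motivates the transformation in Pinheiro et al. actually comes into play.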