Hi all, I want to ask another question about the use of sampling weights, particularly with rstanarm. I'm running a very simple random intercept model, and without weights it runs like a charm. When I add the sampling weights supplied by the survey, the program slows down dramatically, which is not surprising, but the problem is that I can't get convergence of either the random intercept or its standard deviation: Rhat is large and n_eff is much smaller than it should be. As a result, the program is throwing the bulk and tail effective sample size warnings and suggesting many more iterations (I'm using 15,000 iterations with 4 chains and a thinning interval of 10).
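For concreteness, the setup is roughly the following (a minimal sketch; `score`, `school`, `samp_wt`, and `dat` are placeholders, not the survey's real variable names):

```r
library(rstanarm)

# The unweighted version converges fine; adding `weights` is what
# triggers the slowdown and the convergence problems described above.
fit <- stan_lmer(
  score ~ 1 + (1 | school),   # random intercept by cluster
  data    = dat,
  weights = samp_wt,          # survey-supplied sampling weights
  chains  = 4,
  iter    = 15000,
  thin    = 10
)

# Rhat and n_eff for the intercept and the group-level SD are the
# quantities that fail to settle down
summary(fit)
```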
To be clear, I fully concur with the issues raised in previous posts about sampling weights and the violation of the likelihood principle. The reason I'm trying to add the weights is a long story, but suffice it to say that the purveyors of this large (and very policy-relevant) survey, the Organization for Economic Cooperation and Development (OECD), are very interested in what a Bayesian approach could provide but would find it hard to swallow a Bayesian approach that did not include the sampling weights. I'm trying to convince them otherwise. So, my question is whether there are any tricks of the trade for stabilizing the weights in order to get convergence of the random intercept and its standard deviation.
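In case it helps to see what I mean by stabilizing, the kind of thing I've been considering is rescaling the weights to average 1 and trimming the extremes before refitting. This is just a sketch of one common approach, not something from the survey's documentation, and the trimming quantile is an arbitrary choice:

```r
# Trim very large weights and rescale so they sum to the sample size.
# Capping limits the leverage of a few extreme observations; rescaling
# keeps the implied amount of "data" comparable to the unweighted fit.
stabilize_weights <- function(w, trim_quantile = 0.99) {
  cap <- quantile(w, trim_quantile)  # cap at a high quantile
  w   <- pmin(w, cap)
  w * length(w) / sum(w)             # mean weight = 1
}

dat$samp_wt_std <- stabilize_weights(dat$samp_wt)
```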
Thanks,
David