# Multiple outcomes as hierarchical - impossible values

Hello,

I am working on an analysis in which parents and clinical staff were asked to rate how increasing different variables would be expected to affect their ability to be present at the bedside. The original purpose of this survey was to develop informative priors for a different set of parents for whom we have presence data. In the interim, we were hoping to look at how estimates from parents and different staff might differ. Responses range from -2 (decrease presence a lot) to +2 (increase a lot).

I have run the analysis with each question as a separate model, but I also wanted to revisit an idea I raised here earlier: melt the questions together and fit slopes for each outcome as varying by role (e.g. mother, father, nurse, physician, social worker). I used the following code:

`stan1b = stan_lmer(value ~ role + (role|outcome) + (1|role), adapt_delta = 0.999, data = quick_pos)`

This seems to work at first glance, but it gives me really unexpected results for one group on one outcome. For the acuity outcome, I have the following averages from the data:

- Mean pooled over all roles: 1.37
- Mean for social workers on the acuity question: 1
- Mean for social workers across all questions: 1.6

Based on my understanding of partial pooling, I expected the fitted value to be some compromise between 1, 1.37, and 1.6, but instead `posterior_predict` tells me the model fits the social worker response for acuity as 1.8.
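To make my expectation concrete, here is the textbook one-way shrinkage arithmetic I had in mind (a minimal sketch in Python rather than R, with made-up values for n, sigma², and tau² since I'm not pulling them from the fitted model): the partially pooled cell estimate should be a precision-weighted compromise that lands between the cell mean and the pooled mean, never outside them.

```python
# Hypothetical numbers: illustrates why I expected the partially pooled
# estimate to land between the cell mean (1.0) and the pooled mean (1.37).
def shrink(cell_mean, pooled_mean, n, sigma2, tau2):
    """Classic one-way random-effects shrinkage:
    precision-weighted average of the cell mean and the pooled mean."""
    w = (n / sigma2) / (n / sigma2 + 1 / tau2)  # weight on the cell mean
    return w * cell_mean + (1 - w) * pooled_mean

# n, sigma2, tau2 are invented for illustration only
est = shrink(cell_mean=1.0, pooled_mean=1.37, n=8, sigma2=1.0, tau2=0.25)
print(round(est, 3))  # → 1.123, i.e. between 1.0 and 1.37
```

Whatever the weights, this construction can only interpolate between the two means, which is why a fitted value of 1.8, outside the range of all three averages, surprised me.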

Any ideas what I’m missing? I have attached the paper that made me revisit this question.

Also, I DO get values that make sense if I use either of these instead:

```r
stan1c = stan_lmer(value ~ role + (-1 + role|outcome) + (1|role), adapt_delta = 0.999, data = quick_pos)
stan1d = stan_lmer(value ~ role + (1|role:outcome) + (1|role), adapt_delta = 0.999, data = quick_pos)
```