I am trying to reconstruct the predicted values from a linear mixed-effects model with a random intercept. I use the GPA data from Hox (see here) and build a simple model with one predictor variable (originally named occas; I renamed it semester). My goal is to understand how the model arrives at a final prediction from the estimated parameters (fixed and random).
This is the output I get after fitting the model `gpa ~ 1 + semester + (1 | student)` with brms.
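For reference, a minimal sketch of the call I used (the chain settings are read off the summary below; `df` is my data frame and `fit` is my name for the fitted object):

```r
library(brms)

fit <- brm(
  gpa ~ 1 + semester + (1 | student),
  data   = df,
  family = gaussian(),
  chains = 2,
  iter   = 3000,
  warmup = 1000
)
summary(fit)
```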
```
 Family: gaussian
  Links: mu = identity; sigma = identity
Formula: gpa ~ 1 + semester + (1 | student)
   Data: df (Number of observations: 1200)
Samples: 2 chains, each with iter = 3000; warmup = 1000; thin = 1;
         total post-warmup samples = 4000

Group-Level Effects:
~student (Number of levels: 200)
              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept)     0.25      0.01     0.23     0.28       1159 1.00

Population-Level Effects:
          Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
Intercept     2.60      0.02     2.55     2.64        969 1.01
semester      0.11      0.00     0.10     0.11       4000 1.00

Family Specific Parameters:
      Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sigma     0.24      0.01     0.23     0.25       4000 1.00

Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample
is a crude measure of effective sample size, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
```
To understand the output better, I appended the predictions and random effects to the original dataset: preds1 is the prediction (calculated with predict()), u1 is the group-level effect estimated for that student (calculated with ranef()), sigma is the residual error (from residuals()), and grand_gpa is the grand mean. I would like to know how to arrive at the prediction for, say, student 1, which is 2.5 at semester = 0 and 2.6 at semester = 1.
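My current understanding (a sketch, assuming `fit` is the fitted object and that `ranef()` returns each student's deviation from the population intercept) is that the prediction is the population intercept plus the semester effect plus the student's deviation:

```r
b <- fixef(fit)           # population-level (fixed) estimates
u <- ranef(fit)$student   # per-student intercept deviations
# (in recent brms versions this is a 3-d array: level x statistic x term)
u1 <- u["1", "Estimate", "Intercept"]

# mu = Intercept + semester * b_semester + u_student
mu_sem0 <- b["Intercept", "Estimate"] + 0 * b["semester", "Estimate"] + u1
mu_sem1 <- b["Intercept", "Estimate"] + 1 * b["semester", "Estimate"] + u1
```

If u1 for student 1 is roughly -0.10, this would give 2.60 - 0.10 = 2.50 at semester = 0 and 2.60 + 0.11 - 0.10 = 2.61 at semester = 1, which is close to what I see, but I am not sure this is actually how the prediction is assembled.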
```
R version 3.4.2 (2017-09-28)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS High Sierra 10.13.6
```
Thanks in advance for your help.