Understanding predicted values of a random-intercept model

Hi,

I am trying to reconstruct the predicted values from a linear mixed-effects model with a random intercept. I use the GPA data from Hox (see here) and build a simple model with one predictor variable (originally called occas; I renamed it semester). My goal is to understand how the model arrives at a final prediction from the estimated fixed and random parameters.

This is the output I get after fitting the following model: gpa ~ 1 + semester + (1 | student)
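Roughly, the call looked like this (a sketch in R; the formula is as above and the sampler settings are taken from the output below, but the rest is reconstructed):

library(brms)

# random-intercept model: one intercept deviation per student
fit <- brm(gpa ~ 1 + semester + (1 | student),
           data = df, chains = 2, iter = 3000, warmup = 1000)
summary(fit)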

Family: gaussian 
  Links: mu = identity; sigma = identity 
Formula: gpa ~ 1 + semester + (1 | student) 
   Data: df (Number of observations: 1200) 
Samples: 2 chains, each with iter = 3000; warmup = 1000; thin = 1;
         total post-warmup samples = 4000

Group-Level Effects: 
~student (Number of levels: 200) 
              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept)     0.25      0.01     0.23     0.28       1159 1.00

Population-Level Effects: 
          Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
Intercept     2.60      0.02     2.55     2.64        969 1.01
semester      0.11      0.00     0.10     0.11       4000 1.00

Family Specific Parameters: 
      Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sigma     0.24      0.01     0.23     0.25       4000 1.00

Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
is a crude measure of effective sample size, and Rhat is the potential 
scale reduction factor on split chains (at convergence, Rhat = 1).

In order to understand it better, I appended the predictions and random effects to the original dataset (see the code sketch after the table): preds1 is the prediction (calculated using predict), u1 is the group-level effect estimated for that student (calculated using ranef), sigma is the residual error (from residuals), and grand_gpa is the grand mean. I wonder how the model comes up with the prediction for, say, student 1, which is 2.5 at semester = 0 and 2.6 at semester = 1.

 student semester gpa preds1     u1   sigma grand_gpa
       1        0 2.3    2.5 -0.069 -0.2284       2.9
       1        1 2.1    2.6 -0.069 -0.5347       2.9
       1        2 3.0    2.7 -0.069  0.2590       2.9
       1        3 3.0    2.8 -0.069  0.1527       2.9
       1        4 3.0    3.0 -0.069  0.0463       2.9
       1        5 3.3    3.1 -0.069  0.2400       2.9
       2        0 2.2    2.4 -0.215 -0.1827       2.9
       2        1 2.5    2.5 -0.215  0.0110       2.9
       2        2 2.6    2.6 -0.215  0.0047       2.9
       2        3 2.6    2.7 -0.215 -0.1016       2.9
       2        4 3.0    2.8 -0.215  0.1920       2.9
       2        5 2.8    2.9 -0.215 -0.1143       2.9
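This is roughly how I built those columns (a sketch; the exact return shape of ranef() depends on the brms version, and indexing the student levels by name is my assumption):

# point prediction per observation (posterior mean of the predictive distribution)
df$preds1 <- predict(fit)[, "Estimate"]

# posterior-mean intercept deviation per student, matched to each row
re <- ranef(fit)$student  # levels x statistics x coefficients array in recent brms
df$u1 <- re[as.character(df$student), "Estimate", "Intercept"]

# observation-level residuals and the grand mean
df$sigma     <- residuals(fit)[, "Estimate"]
df$grand_gpa <- mean(df$gpa)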

brms version: 2.4.2
R version 3.4.2 (2017-09-28)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS High Sierra 10.13.6

Thanks in advance for your help.

Update: sorry, I made a mistake. I had set options(digits = 2) earlier in the script and forgot it applied to everything that followed, so all values above are printed with two significant digits. That is why the predictions confused me; they make sense again now.
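For completeness, the predictions do reconstruct as the population-level intercept plus the student's intercept deviation plus slope * semester. A quick check for student 1, using the (rounded) values from the summary and table above, so it only matches up to rounding and simulation error:

# prediction for student 1 at semester = 0 and 1
2.60 + (-0.069) + 0.11 * c(0, 1)
# 2.531 2.641  -- printed as 2.5 and 2.6 under options(digits = 2)

(fitted() returns the mean of the linear predictor directly; predict() adds residual noise, whose posterior mean is the same up to simulation error.)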