Is it recommended to have fixed effects if you already have random effects?

I’m trying to understand the case for including fixed effects in a model that already has random effects. In particular, if you’re using a non-centred parameterisation for your random effects, it strikes me that having a fixed effect as well is unnecessary.

For example, if you already have this:

model {
  vector[NumGroups] alpha_grp;
  vector[NumGroups] beta_grp;
  for (g in 1:NumGroups) {
    alpha_grp[g] = alpha_bar + alpha_raw[g] * alpha_sigma;
    beta_grp[g] = beta_bar + beta_raw[g] * beta_sigma;
  }
  vector[N] mu = alpha_grp[grpID] + beta_grp[grpID] * x;
  y ~ normal(mu, sigma);
}

is there any point in adding the fixed effects as well?

model {
  vector[NumGroups] alpha_grp;
  vector[NumGroups] beta_grp;
  for (g in 1:NumGroups) {
    alpha_grp[g] = alpha_bar + alpha_raw[g] * alpha_sigma;
    beta_grp[g] = beta_bar + beta_raw[g] * beta_sigma;
  }
  vector[N] mu = alpha_grp[grpID] + beta_grp[grpID] * x + alpha + beta * x;
  y ~ normal(mu, sigma);
}

It appears to me that the fixed effects will make no difference, as alpha_bar and beta_bar are already ‘doing the job’ that alpha and beta would do.

Any guidance would be much appreciated.


What you are referring to as “fixed effects” is what makes the non-centered parameterization of the “random effects” possible, i.e. the latter can have a (usually normal) prior with a mean (vector) of zero. With a centered parameterization, the mean vector would be the “fixed effects”, and the “fixed effects” would not appear in the line for vector[N] mu. Either way, you need to include them, because excluding them is equivalent to setting them all to zero.
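For concreteness, here is a minimal sketch of the centered version being described, reusing the variable names from the question (the parameters block shown is an assumption for illustration, not taken from the original post):

```stan
parameters {
  // centered parameterization: the group effects themselves are
  // parameters, so no *_raw variables are needed
  vector[NumGroups] alpha_grp;
  vector[NumGroups] beta_grp;
}
model {
  // the "fixed effects" alpha_bar and beta_bar appear only as the
  // means of the group effects, not in the line for mu
  alpha_grp ~ normal(alpha_bar, alpha_sigma);
  beta_grp ~ normal(beta_bar, beta_sigma);
  vector[N] mu = alpha_grp[grpID] + beta_grp[grpID] * x;
  y ~ normal(mu, sigma);
}
```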

Thanks for your response Ben, unfortunately I still don’t seem to get it.

Couldn’t you re-write the second model in my question (the one with I think both “fixed” and “random” effects) as:

model {
  vector[NumGroups] alpha_grp;
  vector[NumGroups] beta_grp;
  for (g in 1:NumGroups) {
    alpha_grp[g] = alpha_raw[g] * alpha_sigma;
    beta_grp[g] = beta_raw[g] * beta_sigma;
  }
  vector[N] mu = alpha_bar + alpha_grp[grpID] + (beta_bar + beta_grp[grpID]) * x + alpha + beta * x;
  y ~ normal(mu, sigma);
}

Now the _bars just appear on the vector[N] mu line instead. Isn’t this exactly the same model, only now more explicitly illustrating that alpha_bar and beta_bar are doing the same thing as alpha and beta?

I realise I haven’t included priors in this example, but I can’t currently see how they would change or resolve the fact that I seem to have parameters with duplicated “responsibilities”.
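The duplication can be checked numerically: in the rewritten mu, only the sums alpha_bar + alpha and beta_bar + beta are identified, so two different parameter settings with the same sums produce identical predictions. A small sketch (the function and variable names here are chosen for illustration):

```python
import numpy as np

def mu(alpha_bar, alpha, beta_bar, beta, grp_offsets, grp_slopes, grp_id, x):
    """mu as in the rewritten model: both the _bar terms and the
    'fixed effects' alpha and beta enter additively."""
    return (alpha_bar + grp_offsets[grp_id] + alpha
            + (beta_bar + grp_slopes[grp_id] + beta) * x)

rng = np.random.default_rng(0)
grp_id = rng.integers(0, 3, size=10)   # group index per observation
x = rng.normal(size=10)
grp_offsets = rng.normal(size=3)       # alpha_grp (zero-mean version)
grp_slopes = rng.normal(size=3)        # beta_grp (zero-mean version)

# (alpha_bar, alpha) = (1.0, 0.5) vs (1.5, 0.0): same sum,
# and likewise (2.0, -0.25) vs (1.75, 0.0) for the slopes
mu1 = mu(1.0, 0.5, 2.0, -0.25, grp_offsets, grp_slopes, grp_id, x)
mu2 = mu(1.5, 0.0, 1.75, 0.0, grp_offsets, grp_slopes, grp_id, x)

print(np.allclose(mu1, mu2))  # True: the likelihood cannot tell these apart
```

This is why the posterior is improper (or pathologically correlated) unless one of the two intercept/slope pairs is dropped or pinned down by a prior.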

I’m trying to keep my question concise, but please let me know if it would be helpful to expand on any of my likely flawed thinking.

If you write it that way, then you can drop the alpha + beta * x from mu. And you could fold the _bar terms back into the _grp terms, like this:

model {
  vector[NumGroups] alpha_grp;
  vector[NumGroups] beta_grp;
  for (g in 1:NumGroups) {
    alpha_grp[g] = alpha_bar + alpha_sigma * alpha_raw[g];
    beta_grp[g] = beta_bar + beta_sigma * beta_raw[g];
  }
  y ~ normal(alpha_grp[grpID] + beta_grp[grpID] * x, sigma);
}

So that’s the same model as the first one in my original question?

In that model, is it unnecessary to include “fixed” effects? Or rather, the _bars seem to be acting as the “fixed” effects?

The _bars are the “fixed” effects.
