Prior setting for NCP: is there a difference between attributing more variance to 'raw' vs 'sigma' parameter?

#1

As far as I can tell there's very little difference between the following two prior structures on the non-centred parametrisation (NCP) of a hierarchical effect:

parameters {
  vector[NumGrp] grp_raw;
  real<lower=0> grp_sigma;
}
model {
  // implied group effects, indexed per observation as grp_eff[grpID]
  vector[NumGrp] grp_eff = grp_raw * grp_sigma;

  // Possible prior structure 1
  grp_raw ~ normal(0, 1);
  grp_sigma ~ normal(0, 0.5);

  // Possible prior structure 2
  grp_raw ~ normal(0, 0.5);
  grp_sigma ~ normal(0, 1);
}

I appreciate that the NCP can help with convergence issues, but I'm not sure of the value of, or the difference between, attributing more prior variance to the sigma component of the NCP versus the raw component.

That said, sometimes when I run my model the posterior for the raw component suggests my prior variance was too small, while at the same time the posterior for sigma suggests my prior variance was too big.
This seems to contradict my original interpretation that it doesn't really matter in which parameter the group variance is expressed.

Is my original interpretation correct? If not, any guidance on this would be much appreciated.


#2

I guess the difference is that structure 1 is the standard way of doing it, so it'll be clearer what's happening to someone else reading the model.

I guess you have this extra degree of freedom to scale the parameters, but I don’t see any reason to do it.

I wouldn't read too much into the variances of grp_raw. grp_eff and grp_sigma should tell you what you need to know.

Can you explain this more? Do you have numbers? I don't think I've addressed it.


#3

Thanks Ben, yeah, your answer confirms what I thought. It doesn't really matter which parameter you assign a 'wider' prior to, but for consistency purposes it may be better to keep raw at normal(0, 1) and then express higher/lower variance among groups in the sigma prior.
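To double-check the equivalence claim numerically, here's a quick Python sketch (not from the thread; purely illustrative) that draws from both prior structures and compares the implied marginal prior on grp_eff = grp_raw * grp_sigma:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Structure 1: raw ~ normal(0, 1), sigma ~ half-normal(0.5)
eff1 = rng.normal(0.0, 1.0, n) * np.abs(rng.normal(0.0, 0.5, n))

# Structure 2: raw ~ normal(0, 0.5), sigma ~ half-normal(1)
eff2 = rng.normal(0.0, 0.5, n) * np.abs(rng.normal(0.0, 1.0, n))

# Both sample standard deviations should be close to 0.5
print(np.std(eff1), np.std(eff2))
# ... and the tail quantiles should match too
print(np.quantile(np.abs(eff1), 0.9), np.quantile(np.abs(eff2), 0.9))
```

The product of a zero-mean normal and an independent half-normal scale depends only on the product of the two scale parameters, so 1 × 0.5 and 0.5 × 1 give the same marginal prior on grp_eff, even though the joint posterior geometry (and hence sampling behaviour) can differ.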

The raw and sigma results that I mentioned in my original post (and that triggered my question) don't, on further review, exhibit the opposing movements against their priors that I had originally suspected.


#4

You can also use the new std_normal() on the raw parameters to make things a bit more efficient.


#5

Thanks @Max_Mantei, so just something like this?

model {
   // Prior structure 1
   grp_raw ~ std_normal();
   grp_sigma ~ normal(0, 0.5);
}

#6

Yes, like that.
