Big difference between PyMC and Stan results

I’m not sure if this is on-topic for Stan’s Discourse, but I’m helping someone who built a model in PyMC, and I tried to reproduce the results in Stan. The Stan results look much more like what we would expect given the domain knowledge. Can anyone help identify what might cause the difference?

Note: I replaced the variable names with temporary placeholders to obfuscate sensitive information.

data {
    int<lower = 0> sample_size;
    int<lower = 0> B; 
    int<lower = 0> P;
    int<lower = 0> T;
    int<lower = 0> E; 
    int<lower = 0> R;
    real y[sample_size];
    int t[sample_size];
    int b[sample_size];
    int p[sample_size];
    int e[sample_size];
    int r[sample_size];
}
parameters {
    vector[T] varT;
    vector[B] varB;
    vector[P] varP;
    vector[E] varE;
    vector[R] varR;
    real<lower = 0> c;
    real<lower = 0> sigma;
}
model {
    real mu[sample_size];
    varT ~ normal(0, 1);
    varP ~ normal(0, 1);
    varE ~ normal(0, 1);
    varR ~ normal(0, 1);
    c ~ normal(30, 7.2);
    sigma ~ exponential(2);
    varB ~ normal(0, 1);

    for (i in 1:sample_size) {
        mu[i] = c + varP[p[i]] + varT[t[i]] + varE[e[i]] + varR[r[i]] + varB[b[i]];
        y[i] ~ normal(mu[i], sigma);
    }
}

This gives the output:

[posterior summary table]

In PyMC the code is:

with pm.Model() as basic_model:

    c = pm.Normal('c', mu=30, sd=7)
    varT = pm.Normal('varT', 0, sd=1, shape=(data['varT'].nunique(),1))
    varP = pm.Normal('varP', 0, sd=1, shape=(data['varP'].nunique(),1))
    varE = pm.Normal('varE', 0, sd=1, shape=(data['varE'].nunique(),1))
    varB = pm.Normal('varB', 0, sd=1, shape=(data['varB'].nunique(),1))
    varR = pm.Normal('varR', 0, sd=1, shape=(data['varR'].nunique(),1))

    sigma = pm.Exponential('sigma', 2)
    growth_mu = pm.Deterministic('Growth_mu', c + varT[data['varT_cat'].values] + 
                             varP[data['varP_cat'].values] +
                             varE[data['varE_cat'].values] +
                             varB[data['varB_cat'].values] +
                             varR[data['varR_cat'].values])
    out = pm.Normal('Y', mu=growth_mu, sd=sigma, observed=data[['Y', 'varT_cat', 'varP_cat', 'varE_cat', 'varB_cat', 'varR_cat']])

This gives the output, with the variables in the same order:

[posterior summary table]

The intercept at the bottom is definitely not what one would expect (given the domain knowledge, at least). There is also far more variation than expected, and many of the effects have means and medians below 0, which is also not expected.

I don’t know PyMC. I suggest asking over in their forums.

I can comment on a few things for your Stan model, though. Stan has vectorized indexing, which lets you get rid of the loop. Also, std_normal() is more efficient than normal(0, 1) because it doesn’t need to handle the location and scale under the hood.

data {
  int<lower=0> sample_size;
  int<lower=0> B, P, T, E, R;
  array[sample_size] real y;
  array[sample_size] int t, b, p, e, r;
}
parameters {
  vector[T] varT;
  vector[B] varB;
  vector[P] varP;
  vector[E] varE;
  vector[R] varR;
  real<lower=0> c, sigma;
}
model {
  vector[sample_size] mu = c + varP[p] + varT[t] + varE[e] + varR[r]
                           + varB[b];
  varT ~ std_normal();
  varP ~ std_normal();
  varE ~ std_normal();
  varR ~ std_normal();
  c ~ normal(30, 7.2);
  sigma ~ exponential(2);
  varB ~ std_normal();
  
  y ~ normal(mu, sigma);
}
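As an aside, here is a minimal sketch of how the data block above could be filled from the same pandas DataFrame the PyMC model uses. This assumes cmdstanpy and that the *_cat columns hold the 0-based integer codes implied by the PyMC indexing; Stan indexing is 1-based, hence the +1:

from cmdstanpy import CmdStanModel

# Assemble a data dict matching the data block above; `data` is assumed to be
# the same pandas DataFrame used in the PyMC model.
stan_data = {
    "sample_size": len(data),
    "B": data["varB_cat"].nunique(),
    "P": data["varP_cat"].nunique(),
    "T": data["varT_cat"].nunique(),
    "E": data["varE_cat"].nunique(),
    "R": data["varR_cat"].nunique(),
    "y": data["Y"].to_list(),
    # PyMC category codes are 0-based, Stan indices are 1-based, hence the +1.
    "t": (data["varT_cat"] + 1).to_list(),
    "b": (data["varB_cat"] + 1).to_list(),
    "p": (data["varP_cat"] + 1).to_list(),
    "e": (data["varE_cat"] + 1).to_list(),
    "r": (data["varR_cat"] + 1).to_list(),
}

model = CmdStanModel(stan_file="model.stan")  # the model above, saved to a file
fit = model.sample(data=stan_data)
print(fit.summary())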

Thanks for the tips to simplify and optimise, @spinkney; I will surely make use of them. This was just a quick exercise to reproduce the results of the other model, though. I will indeed ask the question on the PyMC Discourse, but I thought I would take my chances here first.

I headed over to the PyMC Discourse, and it turns out to be something simple in the way the data was specified; see the discussion here.

You just need to give the likelihood only the output variable, not the whole data frame. When the full data frame is passed as observed, every column (including the integer category codes) gets treated as observations of Y, which would explain the low intercept and the inflated variation.

This

observed=data[['Y', 'varT_cat', 'varP_cat', 'varE_cat', 'varB_cat', 'varR_cat']]

should be

observed=data[['Y']]

only.
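Putting that together, and keeping everything else as posted, the corrected likelihood line would be:

out = pm.Normal('Y', mu=growth_mu, sd=sigma, observed=data[['Y']])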
