A slightly different model specification changes the fit

I just want to fit a 2nd-order polynomial. In the first example there is no problem:

import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import pystan

A0=0.5; A1=1.5; A2=-0.2; A3=-0.008; A4=0.00025
def func(xx):
    return A0+A1*xx+A2*xx*xx  #+A3*(xx**3)+A4*(xx**4)

x=10*np.random.rand(100); x=np.sort(x)
fx=func(x);
yerror=np.random.rand(len(x))
y=np.random.normal(fx,scale=yerror)

np.random.seed(101)

model = """
data {
    int<lower=0> N;
    vector[N] x;
    vector[N] y;
    vector[N] yerror;
    int <lower=0> NP;
}
parameters {
    real a[NP];
}
model {
    for (i in 1:N) {
        target += normal_lpdf(y[i] | a[3] + a[1] * x[i] + a[2] * x[i] * x[i], yerror[i]);  // a[3] acts as the intercept
    }
}
"""


NP=3;
data = {'N': len(x), 'x': x, 'y': y,'yerror':yerror,'NP':NP}
sm = pystan.StanModel(model_code=model)
fit = sm.sampling(data=data, iter=2000, chains=4, warmup=400, thin=3, seed=101)
print(fit)

The output is normal:

INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_9e171b5a8aec523e35681175cfb02420 NOW.
Inference for Stan model: anon_model_9e171b5a8aec523e35681175cfb02420.
4 chains, each with iter=2000; warmup=400; thin=3; 
post-warmup draws per chain=534, total post-warmup draws=2136.

       mean se_mean     sd   2.5%    25%    50%    75%  97.5%  n_eff   Rhat
a[1]   1.52  8.4e-4   0.02   1.48   1.51   1.52   1.54   1.57    877    1.0
a[2]   -0.2  6.6e-5 2.1e-3  -0.21   -0.2   -0.2   -0.2   -0.2    979    1.0
a[3]   0.44  2.3e-3   0.07   0.31   0.39   0.44   0.49   0.57    844    1.0
lp__ -50.18    0.03   1.21 -53.35  -50.7 -49.84 -49.31 -48.79   1385    1.0

Samples were drawn using NUTS at Wed Nov 20 17:18:26 2019.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).
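
For reference, the individual posterior draws can also be pulled out with fit.extract() (which in PyStan 2.x returns a dict of post-warmup draws) and compared against the true coefficients, e.g.:

# Sketch: the posterior means should land near the simulation values,
# here roughly [A1, A2, A0] = [1.5, -0.2, 0.5], since a[3] plays the role
# of the intercept in this parameterization.
draws = fit.extract()
print(draws['a'].mean(axis=0))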

But when I try to do the same thing with the intercept declared as a separate parameter shift (instead of a[3]), I get surprising warnings and Rhat does not come out as 1.0. See the following code:

import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import pystan

A0=0.5; A1=1.5; A2=-0.2; A3=-0.008; A4=0.00025
def func(xx):
    return A0+A1*xx+A2*xx*xx  #+A3*(xx**3)+A4*(xx**4)

x=10*np.random.rand(100); x=np.sort(x)
fx=func(x);
yerror=np.random.rand(len(x))
y=np.random.normal(fx,scale=yerror)

np.random.seed(101)

model = """
data {
    int<lower=0> N;
    vector[N] x;
    vector[N] y;
    vector[N] yerror;
    int <lower=0> NP;
}
parameters {
    real shift;
    real a[NP];
}
model {
    for (i in 1:N) {
        target += normal_lpdf(y[i] | shift + a[1] * x[i] + a[2] * x[i] * x[i], yerror[i]);  // shift is now the intercept
    }
}
"""


NP=2;
data = {'N': len(x), 'x': x, 'y': y,'yerror':yerror,'NP':NP}
sm = pystan.StanModel(model_code=model)
fit = sm.sampling(data=data, iter=2000, chains=4, warmup=400, thin=3, seed=101)
print(fit)

The output is:

INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_1cfdf39512ef7dc674a1650663817e89 NOW.
WARNING:pystan:Rhat above 1.1 or below 0.9 indicates that the chains very likely have not mixed
WARNING:pystan:640 of 2136 iterations saturated the maximum tree depth of 10 (30 %)
WARNING:pystan:Run again with max_treedepth larger than 10 to avoid saturation
WARNING:pystan:Chain 1: E-BFMI = 0.0067
WARNING:pystan:Chain 3: E-BFMI = 0.00587
WARNING:pystan:Chain 4: E-BFMI = 0.0581
WARNING:pystan:E-BFMI below 0.2 indicates you may need to reparameterize your model
Inference for Stan model: anon_model_1cfdf39512ef7dc674a1650663817e89.
4 chains, each with iter=2000; warmup=400; thin=3; 
post-warmup draws per chain=534, total post-warmup draws=2136.

        mean se_mean     sd   2.5%    25%    50%    75%  97.5%  n_eff   Rhat
shift   0.49    0.13   0.41  -0.13   0.38   0.42   0.46   2.05     10   1.49
a[1]    1.47    0.07   0.24   0.21   1.51   1.52   1.53   1.65     13   1.29
a[2]    -0.2  6.0e-3   0.02  -0.21   -0.2   -0.2   -0.2  -0.07     15   1.23
lp__   -1490  1745.7 8441.7 -4.4e4  -55.9 -54.38 -53.61 -53.04     23   1.13

Samples were drawn using NUTS at Wed Nov 20 17:19:09 2019.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

I am confused because I no longer know when to trust PyStan. I don't understand why such a small modelling change makes such a big difference in the fit. To be clear, I am less concerned about the different results than about Rhat not converging to 1.0 in the latter case.
Please help.

I am sorry, but I don't see any problem. If the algorithm does not converge and honestly tells you so, that does not mean you should not trust it. It is a good thing! It would be much worse if the algorithm did the opposite: failed to converge while reporting that it had converged. That would be really confusing.

Regarding your code above, I don't see where you put priors on your parameters. That may be the place to start.
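
For example, here is a minimal sketch of your second model with weakly informative priors added (the normal(0, 10) scales below are placeholders, not recommendations tuned to your problem):

model = """
data {
    int<lower=0> N;
    vector[N] x;
    vector[N] y;
    vector[N] yerror;
    int<lower=0> NP;
}
parameters {
    real shift;
    real a[NP];
}
model {
    // weakly informative priors (placeholder scales)
    shift ~ normal(0, 10);
    a ~ normal(0, 10);
    for (i in 1:N) {
        target += normal_lpdf(y[i] | shift + a[1] * x[i] + a[2] * x[i] * x[i], yerror[i]);
    }
}
"""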


Hm. Using rstan 2.19.2 I can’t replicate what you observe. But @aakhmetz is also correct that you should expect some wonky behaviour when you use flat (absent) priors.
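
And if you want to follow up on the tree-depth warnings specifically, PyStan 2.x lets you pass sampler settings through the control argument of sampling() (a sketch; raising these limits only treats the symptom, the priors/parameterization are the real fix):

fit = sm.sampling(data=data, iter=2000, chains=4, warmup=400, thin=3, seed=101,
                  control={'max_treedepth': 15, 'adapt_delta': 0.95})
print(fit)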


I agree with both of you.
Thank you.