PyStan - Unexpected exception and slower sampling time compared to RStan

I am trying to translate my model from RStan to PyStan. However, I observe significantly slower sampling, along with a warning like this:


Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
Exception: student_t_lpdf: Scale parameter is 0, but must be > 0! (in ‘unknown file name’ at line 195)

If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.

From R I run:

library("rjson")
library("rstan")

stan_data <- fromJSON(file = "debug_stan_data.json")
stan_data$PR_MAT <- matrix(0, nrow=stan_data$NUM_OF_OBS, ncol=0)
stan_data$RR_MAT <- matrix(0, nrow=stan_data$NUM_OF_OBS, ncol=0)

stan_data$PR_BETA_PRIOR <- numeric(0)
stan_data$PR_SIGMA_PRIOR<- numeric(0)
stan_data$RR_BETA_PRIOR  <- numeric(0)
stan_data$RR_SIGMA_PRIOR <- numeric(0)

mod <- stan_model(file='lgt.stan', verbose = TRUE)
fit <- sampling(mod, chains=4, cores=8, iter=1125, warmup=1000, data=stan_data)

From Python I run:

import json
import numpy as np
import pystan

with open('debug_stan_data.json') as f:
    stan_data = json.load(f)

sm = pystan.StanModel(file='lgt.stan')
stan_data['PR_MAT'] = np.zeros((stan_data['NUM_OF_OBS'], 0))
stan_data['RR_MAT'] = np.zeros((stan_data['NUM_OF_OBS'], 0))

stan_data['PR_BETA_PRIOR']  = [] 
stan_data['RR_BETA_PRIOR']  = [] 
stan_data['PR_SIGMA_PRIOR'] = []
stan_data['RR_SIGMA_PRIOR'] = []
fit = sm.sampling(data=stan_data, iter=1125, warmup=1000, chains=4)
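As a side note, the zero-column placeholders behave the same on both sides. A quick sanity check (NUM_OF_OBS = 10 here is just a stand-in for the value in the data file):

```python
import numpy as np

# Stand-in; the real value comes from debug_stan_data.json
NUM_OF_OBS = 10

# Zero-column matrix, mirroring R's matrix(0, nrow=NUM_OF_OBS, ncol=0)
pr_mat = np.zeros((NUM_OF_OBS, 0))
print(pr_mat.shape)  # (10, 0)
print(pr_mat.size)   # 0
```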

Well… RStan finishes in:

Chain 3:  Elapsed Time: 233.19 seconds (Warm-up)
Chain 3:                97.3875 seconds (Sampling)
Chain 3:                330.578 seconds (Total)
Chain 3: 
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 218.21 seconds (Warm-up)
Chain 2:                120.221 seconds (Sampling)
Chain 2:                338.43 seconds (Total)
Chain 2: 
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 246.667 seconds (Warm-up)
Chain 1:                132.301 seconds (Sampling)
Chain 1:                378.968 seconds (Total)
Chain 1: 
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 263.176 seconds (Warm-up)
Chain 4:                122.579 seconds (Sampling)
Chain 4:                385.755 seconds (Total)

and PyStan in:

 Elapsed Time: 253.338 seconds (Warm-up)
               141.927 seconds (Sampling)
               395.265 seconds (Total)

Iteration: 1600 / 2000 [ 80%]  (Sampling)
Iteration: 1800 / 2000 [ 90%]  (Sampling)
Iteration: 2000 / 2000 [100%]  (Sampling)

 Elapsed Time: 265.096 seconds (Warm-up)
               155.504 seconds (Sampling)
               420.6 seconds (Total)

Iteration: 1800 / 2000 [ 90%]  (Sampling)
Iteration: 2000 / 2000 [100%]  (Sampling)

 Elapsed Time: 252.003 seconds (Warm-up)
               175.612 seconds (Sampling)
               427.615 seconds (Total)

Iteration: 2000 / 2000 [100%]  (Sampling)

 Elapsed Time: 317.535 seconds (Warm-up)
               119.673 seconds (Sampling)
               437.208 seconds (Total)

RStan: 2.19.3
PyStan: 2.19.1.1
OS: Ubuntu 16.04.6 LTS.

Yes, it takes more time in Python. Is that something to worry about? I don't know.
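For what it's worth, averaging the posted totals (assuming the "Chain n:"-prefixed log is RStan and the unprefixed one is PyStan) puts PyStan at roughly 17% slower:

```python
# Total times (seconds) copied from the two logs above
rstan_totals  = [330.578, 338.43, 378.968, 385.755]
pystan_totals = [395.265, 420.6, 427.615, 437.208]

rstan_mean  = sum(rstan_totals) / len(rstan_totals)
pystan_mean = sum(pystan_totals) / len(pystan_totals)
print(f"RStan mean total:  {rstan_mean:.1f} s")             # ~358.4 s
print(f"PyStan mean total: {pystan_mean:.1f} s")            # ~420.2 s
print(f"PyStan / RStan:    {pystan_mean / rstan_mean:.2f}x")  # ~1.17x
```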

In pystan, try

pystan.StanModel(..., extra_compile_args=['-O3'])

PyStan compiles the model with your system's default C++ flags (often -O2 via distutils), while RStan typically builds with -O3, so passing the flag explicitly may close the speed gap.