"Bad message length" error


#1

Operating System: OS X 10.10.5
Python Version: 2.7.10 (Anaconda 2.0.1)
Interface Version: 2.15.0.1
Compiler/Toolkit:

Hi,

Just pip-upgraded to PyStan 2.15 from 2.9, and I'm now getting the following error:

Traceback (most recent call last):
File "sample_softmax_choice_learning.py", line 58, in <module>
fit = model_code_obj.sampling(data=model_data, iter=2000, chains=4)
File "//anaconda/lib/python2.7/site-packages/pystan/model.py", line 725, in sampling
ret_and_samples = _map_parallel(call_sampler_star, call_sampler_args, n_jobs)
File "//anaconda/lib/python2.7/site-packages/pystan/model.py", line 81, in _map_parallel
map_result = pool.map(function, args)
File "//anaconda/lib/python2.7/multiprocessing/pool.py", line 251, in map
return self.map_async(func, iterable, chunksize).get()
File "//anaconda/lib/python2.7/multiprocessing/pool.py", line 567, in get
raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result: '[(0, <stanfit4softmax_choice_learning_2d589e109e324d5ba96b6f064cf3aa6b_787095412542364527.PyStanHolder object at 0x10be3f170>)]'. Reason: 'IOError('bad message length',)'

The code throwing the error is:

model_code_obj = pystan.StanModel(file='softmax_choice_learning.stan.cpp', model_name='softmax_choice_learning', verbose=True)  # Specific to model
fit = model_code_obj.sampling(data=model_data, iter=2000, chains=4)

Any help much appreciated!

Cheers,
Tor


#2

Are you using a pickled model or anything like that?
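
By "pickled" I mean compiling once and reloading the compiled model with pickle on later runs, roughly like this (just a sketch; file names are made up):

import pickle
import pystan

# compile once and save the compiled model
model = pystan.StanModel(file='softmax_choice_learning.stan.cpp')
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)

# later: load it back instead of recompiling
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)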


#3

Nope, it compiles from a script file.


#4

Haven't seen this error before. Can you try running it one chain at a time using n_jobs=1? The error might be a bit more informative.
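
Something like this, reusing the names from your snippet (just a sketch):

fit = model_code_obj.sampling(data=model_data, iter=2000, chains=4, n_jobs=1)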

If the problem persists, it's likely a bug (in Python). If you have a GitHub account, you might open an issue on the PyStan repository.


#5

How is your memory usage? How large is your dataset? This could be related to a low amount of free RAM.


#6

Running the chains sequentially worked. It also worked when sampling in parallel but with 50 iterations instead of 2,000. I thought it might be related to the upgrade, since I hadn't seen this before, but I reverted to 2.9 and got the same error. The dataset is only 9 matrices of 48 x 360, but I was saving a lot of big generated quantities matrices (the fit pickle is 39 GB), so I've trimmed those out and now it's working fine. Sounds like a memory issue then. Thanks for your help!
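
For anyone else who runs into this: another way to keep the fit object small is the pars argument to sampling(), which limits which parameters are stored. A minimal sketch, with placeholder parameter names:

fit = model_code_obj.sampling(data=model_data, iter=2000, chains=4, pars=['alpha', 'beta'])  # only the listed parameters are kept in the fit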