Can you try to run outside pycharm?
Working on it now – in the end it is supposed to run on a CentOS 7 cluster, so I am setting it up there right now.
A more general question – is setting grainsize=1 the right approach to parallelize?
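For reference, this is the kind of setup I mean – a minimal reduce_sum sketch (the partial_sum function, data, and parameter names are just placeholders, not my actual model):

```stan
functions {
  // log likelihood over one slice of y; start/end are 1-based indices
  real partial_sum(real[] y_slice, int start, int end, real mu, real sigma) {
    return normal_lpdf(y_slice | mu, sigma);
  }
}
data {
  int<lower=1> N;
  real y[N];
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  // grainsize = 1 lets the scheduler pick the slice sizes automatically
  target += reduce_sum(partial_sum, y, 1, mu, sigma);
}
```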
On the cluster I get the following error while building the model:

```
CompileError(DistutilsExecError("command 'gcc' failed with exit status 1"))

Traceback:
  File "/pystan3/lib/python3.7/site-packages/httpstan/views.py", line 93, in handle_models
    await httpstan.models.build_services_extension_module(program_code)
  File "/pystan3/lib/python3.7/site-packages/httpstan/models.py", line 207, in build_services_extension_module
    await asyncio.get_event_loop().run_in_executor(None, httpstan.build_ext.run_build_ext, extensions, build_lib)
  File "/apps/centos7/Core/Anaconda3/5.1.0/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/pystan3/lib/python3.7/site-packages/httpstan/build_ext.py", line 97, in run_build_ext
    build_extension.run()
  File "/apps/centos7/Core/Anaconda3/5.1.0/lib/python3.7/distutils/command/build_ext.py", line 339, in run
    self.build_extensions()
  File "/apps/centos7/Core/Anaconda3/5.1.0/lib/python3.7/distutils/command/build_ext.py", line 448, in build_extensions
    self._build_extensions_serial()
  File "/apps/centos7/Core/Anaconda3/5.1.0/lib/python3.7/distutils/command/build_ext.py", line 473, in _build_extensions_serial
    self.build_extension(ext)
  File "/apps/centos7/Core/Anaconda3/5.1.0/lib/python3.7/distutils/command/build_ext.py", line 533, in build_extension
    depends=ext.depends)
  File "/apps/centos7/Core/Anaconda3/5.1.0/lib/python3.7/distutils/ccompiler.py", line 574, in compile
    self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
  File "/apps/centos7/Core/Anaconda3/5.1.0/lib/python3.7/distutils/unixccompiler.py", line 120, in _compile
    raise CompileError(msg)
```
That is a compilation error.
I’m not sure what the current situation is with the verbosity settings.
Can you try to compile the model with CmdStan (CmdStanPy) and see if there is an error message?
I get an error while installing CmdStan on the cluster. These are the last lines of the output:

```
INFO:cmdstanpy:stan/lib/stan_math/stan/math/prim/prob/neg_binomial_2_log_glm_lpmf.hpp:139:17: error: expected ';' before 'theta_tmp'
WARNING:cmdstanpy:CmdStan installation failed
```
Is there a way to install not the latest version (2.24.0) but 2.23.0 with
install_cmdstan()? Maybe the new version causes the problem.
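For anyone reading later: depending on your CmdStanPy release, install_cmdstan accepts a version argument – check the docs of the version you have installed, this is just a sketch:

```python
import cmdstanpy

# Request a specific CmdStan release instead of the latest.
# The 'version' keyword may not exist in very old CmdStanPy releases.
cmdstanpy.install_cmdstan(version="2.23.0")
```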
I also tried uploading cmdstan-2.23.0.tar.gz directly to the cluster, but cmdstanpy didn’t recognize the binaries.
But when I compile it on my local machine using CmdStanPy, it compiles without errors.
I don’t know if this helps, but PyStan3 fails to compile models that compiled fine with PyStan2.19.
Check the version of the C++ compiler on the cluster in that case.
Checked that on the cluster:
```
$ g++ --version
g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
```
On my local machine it’s
```
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 9.0.0 (clang-900.0.39.2)
Thread model: posix
```
I know the CmdStan documentation suggests using 4.9.3 or later, but it works fine on my local machine.
Your local compiler is clang 9.0, which is why it works locally.
I can confirm it will definitely not work with 4.8.5.
ok, that’s good to know about clang! Thanks!
I asked my system admins if C++ compiler can be updated, so let’s hope for the best.
Is this also why PyStan3 models don’t compile?
On the cluster you will not be able to compile with g++ 4.8.5 with any Stan interface, so I would imagine so. All interfaces have the same backend that requires g++ 4.9.3 or clang 5.0+ (we officially say 6 because that is what we test).
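A quick way to sanity-check this requirement (a hypothetical helper, not part of any Stan interface):

```python
import re
import subprocess

MIN_GXX = (4, 9, 3)  # minimum g++ version Stan's Math backend supports

def compiler_version(cmd="g++"):
    """Parse a version tuple from `<cmd> -dumpversion`, e.g. '4.8.5' -> (4, 8, 5)."""
    out = subprocess.run([cmd, "-dumpversion"], capture_output=True, text=True).stdout
    parts = [int(p) for p in re.findall(r"\d+", out)]
    return tuple(parts + [0] * (3 - len(parts)))[:3]

def supported(version, minimum=MIN_GXX):
    # Plain tuple comparison: (4, 8, 5) < (4, 9, 3)
    return version >= minimum

print(supported((4, 8, 5)))  # the cluster's g++ -> False
print(supported((7, 5, 0)))  # a modern g++ -> True
```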
Hmm are you sure about that? I run models just fine with pystan2.19 on this cluster…
Also, is there a way to easily switch from g++ to clang? Maybe the cluster’s clang version is OK.
On the cluster run: clang++ --version to see if it exists.
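For convenience, a small snippet that checks both compilers at once (assuming a POSIX shell):

```shell
# Print the first version line of each candidate compiler, or "not found".
for c in g++ clang++; do
  if command -v "$c" >/dev/null 2>&1; then
    "$c" --version | head -n1
  else
    echo "$c: not found"
  fi
done
```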
pystan2.19 uses an older version of the Math backend that could still be used with compilers older than 4.9.3.
clang++: command not found… so now it’s all in the hands of the admins to update g++. So many setbacks!
And it makes sense about pystan2.19, thanks!
conda also ships its own compilers (a conda install is possible), but I recommend following the official route on clusters.
Thanks! Btw, I compiled my PyStan3 model in the command line of my local machine and it still fails with
Segmentation fault: 11. CmdStanPy still compiles fine, but multithreading doesn’t work there…
I can reproduce the segfault on Ubuntu with gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0.
I will create an issue for this. Even if something were wrong with the model, it should never segfault.