It looks like compiling some Stan models fails on Windows because the compilation unit is too big: https://jenkins.mc-stan.org/job/Stan/view/change-requests/job/PR-2761/10/execution/node/138/log/?consoleFull
```
g++ -std=c++1y -m64 -Wall -Wno-unused-function -Wno-uninitialized -Wno-unused-but-set-variable -Wno-unused-variable -Wno-sign-compare -Wno-unused-local-typedefs -O0 -I src -I . -I lib/stan_math/ -I lib/stan_math/lib/eigen_3.3.3 -I lib/stan_math/lib/boost_1.69.0 -I lib/stan_math/lib/sundials_4.1.0/include -I lib/stan_math/lib/gtest_1.8.1/include -I lib/stan_math/lib/gtest_1.8.1 -D_USE_MATH_DEFINES -DBOOST_RESULT_OF_USE_TR1 -DBOOST_NO_DECLTYPE -DBOOST_DISABLE_ASSERTS -DBOOST_PHOENIX_NO_VARIADIC_EXPRESSION -DGTEST_USE_OWN_TR1_TUPLE -c -o nul -include test/test-models/good/function-signatures/distributions/univariate/continuous/exp_mod_normal/exp_mod_normal_log_4.hpp test/test-model-main.cpp
C:/Rtools/mingw_64/bin/../lib/gcc/x86_64-w64-mingw32/4.9.3/../../../../x86_64-w64-mingw32/bin/as.exe: nul: too many sections (48645)
C:\Users\jenkins\AppData\Local\Temp\ccfzjidc.s: Assembler messages:
C:\Users\jenkins\AppData\Local\Temp\ccfzjidc.s: Fatal error: can't write nul: File too big
```
I seem to remember this being a problem before, and it is probably why the Stan tests were not previously being run on Windows. Is that right? /cc @syclik @Bob_Carpenter
The new compile system should help with that, right? Should we rewrite the Makefiles for these tests to use the two translation units? /cc @mitzimorris
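For context, here is a minimal sketch of what a two-translation-unit build for these tests might look like. All file and target names below are hypothetical, not the actual Stan makefile rules: the idea is just that the heavy generated model header gets compiled once into its own object file, and the small test driver is compiled separately and linked against it.

```make
# Hypothetical sketch only; names and flags are illustrative.
# model.cpp does nothing but include the generated model header, so all of
# the template instantiation lands in this one (smaller) translation unit.
model.o: model.cpp
	g++ -std=c++1y -m64 -Os -c model.cpp -o model.o

# The test driver no longer includes the model header, so this unit stays small.
test-main.o: test/test-model-main.cpp
	g++ -std=c++1y -m64 -Os -c test/test-model-main.cpp -o test-main.o

# Link two modest object files instead of assembling one enormous unit,
# which is what trips the MinGW "too many sections" limit.
test-model: model.o test-main.o
	g++ -m64 model.o test-main.o -o test-model
```

The point of the split is that neither object file alone hits the assembler's section/file-size limits, even though the total amount of compiled code is the same.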
This is on a PR that @serban-nicusor is creating to get the Stan repo to be testing the compiler we claim to support on each OS.
Another option: when we switch to CMake, it could use the new dual compilation units. /cc @alashworth
Is that using a 32-bit compiler? Also, shouldn't we be using mingw-w64, not mingw32?
I believe it's using RTools, which looks like it comes with mingw_64 (that string appears in the logs, at least). I think we should be using RTools for the tests, since that's what we have folks install, but I'm not sure if there's a way to make sure we're getting a 64-bit version of everything. These tests are just using the g++ on the path after the RTools installation.
I think this is right, right?
You may want to try with more optimization than `-O0`. Over the years, we have gone back and forth on that: `-O0` was faster, and sometimes we would run out of RAM if we tried `-O3`, but `-O3` (or, perhaps better, `-Os`) tends to avoid the "File too big" errors.
By the way, I have had some errors with `-O0` on Windows (the Python process crashes; no idea why).
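If the test build picks up user-supplied flags from a `make/local` file (as Stan's build generally does), switching the optimization level could be as simple as the following sketch. Treat the exact variable as an assumption to verify against the actual test makefiles:

```make
# make/local -- hedged sketch; assumes the test build honors CXXFLAGS set here.
# -Os optimizes for size, which tends to keep the assembler output under the
# MinGW section-count and file-size limits that -O0 runs into.
CXXFLAGS += -Os
```

The trade-off mentioned above still applies: higher optimization costs more compile time and RAM, so `-Os` is a middle ground rather than a free fix.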
I don’t know about that.
The model translation units will be much smaller.
Yes, faster model compilation would help speed up the integration tests. This wasn't done for the 2.20 release because we had a deadline to meet.
Is there a wiki link for what needs to be done for that?
Sweet. Are you planning to work on that? If so, we can punt on updating the integration tests until after that's done.
I’m pretty sure we were running Windows tests for Stan, but not Math. Was there a change that caused this to fail?