I’m all for seeing the testing pipeline fail sooner when there are errors (I think that if you develop on Windows you are slightly luckier, as there are very early Linux stages that may catch compilation issues you might not have seen).
The idea of changing the order of tests is interesting, but I think it applies mainly to distribution tests (which is fine, as that’s one of the bottlenecks of the whole pipeline), right? I’m not sure how to convince make of this, but it’s probably doable. As for starting to run the tests for a given distribution before all tests have compiled, I think that would already be an improvement over what we have now.
Much of the pipeline already uses multiple cores (distribution tests are run with `-j25`, for example), so I’m not sure there’s much to do on this front.
I can offer a couple of hints that saved me a bunch of times by running selected tests locally:
- There’s the `-f` option of `runTests.py`, which allows you to only run the tests that match a certain pattern, such as `runTests.py test/unit -f filename_pattern`
- How to generate (or just recompile) only one distribution test
As for other ideas for speeding up testing in general, `ccache` has come up a few times, but as far as I know nobody has actually tried whether it brings any benefits in our setup (I don’t have any experience with it). Other discussions: Speeding up testing.
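For anyone who wants to experiment with this locally, a minimal sketch of how `ccache` is usually wired in, assuming a Make- or CMake-based build that honors the `CC`/`CXX` environment variables (I haven’t verified this against our setup):

```shell
# Route compiler invocations through ccache, so translation units
# that haven't changed are served from the cache instead of being
# recompiled. Assumes ccache is installed and gcc is the compiler.
export CC="ccache gcc"
export CXX="ccache g++"

# For CMake-based builds, the compiler-launcher variables achieve
# the same without touching CC/CXX:
#   cmake -DCMAKE_C_COMPILER_LAUNCHER=ccache \
#         -DCMAKE_CXX_COMPILER_LAUNCHER=ccache ..

# After a couple of builds, `ccache -s` shows the hit rate, which is
# the quickest way to tell whether it actually helps.
```

The benefit only materializes on rebuilds with a warm cache, so in CI it would additionally require persisting the cache directory between runs.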
Partially tangential: there have been recent mentions of skipping the tests when changes touch only the `doxygen` directory, which should be easily achievable. Potentially that could also be applied to the `.github` and `licenses` directories, but overall this would affect a minority of PRs. Reintroducing something like `ci-skip` could help with this.
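If the workflows run on GitHub Actions, a sketch of what the doc-only skip could look like (a hypothetical workflow fragment, using the standard `paths-ignore` trigger filter):

```yaml
# Hypothetical workflow fragment: don't trigger the test workflow
# when a change only touches documentation or licensing files.
on:
  push:
    paths-ignore:
      - 'doxygen/**'
      - 'licenses/**'
```

One caveat with extending this to `.github/**`: ignoring that directory means edits to the workflow files themselves would no longer trigger CI, which may not be what we want.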