In Jenkins Updates, Issues & Requests @syclik mentioned that maybe a separate thread on bringing the testing time down should be started. This is that thread (motivated by waiting a lot for tests, both locally and on Jenkins). Also tagging @serban-nicusor as he seems to be doing stuff with Jenkins right now.
I think there is some low-hanging fruit, but maybe I’m wrong. My starting point (which is my experience, but might not be everybody else’s) is this:
Most builds fail at least one test.
So shortening the time to first failure might be a good optimization target. The current approach - build all tests first, then run all tests - is suboptimal: building takes ages, running is fast. And in the end I don't even see the output of all tests. E.g. when I tried to resolve failures that were unique to Linux (I develop on Windows), I had to fix one failure, wait for all tests to build, and only then did I see a second failure in a different test.
So there could be a benefit from running tests as they are built, or at least from splitting the test build into more pieces (e.g. by subdir). The latter would also mean the build could run on multiple executors if resources are available.
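To make the "split by subdir" idea concrete, here is a minimal sketch of grouping test sources by their top-level subdirectories so each group could be built (and run) separately. The paths and the `depth` parameter are illustrative, not the project's actual layout:

```python
# Hypothetical sketch: shard test files by subdirectory so each shard
# can be built and run independently (e.g. on separate executors).
from collections import defaultdict
from pathlib import PurePosixPath

def shard_by_subdir(test_files, depth=3):
    """Group test files by their first `depth` path components."""
    shards = defaultdict(list)
    for f in test_files:
        key = "/".join(PurePosixPath(f).parts[:depth])
        shards[key].append(f)
    return dict(shards)

tests = [
    "test/unit/math/rev/fun/exp_test.cpp",
    "test/unit/math/prim/fun/log_test.cpp",
    "test/unit/lang/parser_test.cpp",
]
shards = shard_by_subdir(tests)
# Two shards here: "test/unit/math" (2 files) and "test/unit/lang" (1 file),
# each of which could be handed to a different build job.
```

Each shard finishing independently would also mean failures in one subdir show up without waiting for the whole build.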
To get fancier, run_tests.py could run the tests related to the changes in the PR first. No code analysis needed - a simple way would be to reorder tests by some fuzzy string match against the list of files changed, so that the tests most likely to fail execute first. This is also low risk: all tests are eventually run, we are just changing the order.
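A minimal sketch of that reordering (file paths are illustrative, and `difflib.SequenceMatcher` from the standard library is just one cheap way to get a fuzzy score):

```python
# Hypothetical sketch: sort tests by their best fuzzy-match score against
# the changed files, so the most "related" tests run first.
import difflib

def prioritize_tests(tests, changed_files):
    """Return tests sorted by best path-similarity to any changed file."""
    def score(test):
        return max(
            (difflib.SequenceMatcher(None, test, f).ratio()
             for f in changed_files),
            default=0.0,
        )
    return sorted(tests, key=score, reverse=True)

changed = ["src/stan/math/rev/fun/exp.hpp"]
tests = [
    "test/unit/lang/parser_test.cpp",
    "test/unit/math/prim/fun/log_test.cpp",
    "test/unit/math/rev/fun/exp_test.cpp",
]
ordered = prioritize_tests(tests, changed)
# The rev/fun/exp test should sort to the front, since its path shares
# the longest substrings with the changed file.
```

Since the full list is still executed, a bad match only costs ordering, never coverage.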
Does any of that make sense?