Jenkins / Travis / CI Issues


#1

If anyone is having Jenkins and Travis issues (in the past couple of days or from now on) please leave a message here and I can look into them.

Thanks!


#2

Travis has not worked reliably for rstan and rstanarm for years now. I am planning to just switch them to use the Jenkins in my office.


#3

Yeah, among other things I’ve seen Travis sometimes take an extra 15 or 20 minutes over “normal” to run tests (thus causing them to time out). I keep breaking them up into smaller and smaller chunks every time I see this…


#4

Just saw the linux node timeout again, which looks like this in the logs:

FATAL: command execution failed
Command close created at
	at hudson.remoting.Command.<init>(Command.java:60)
	at hudson.remoting.Channel$CloseCommand.<init>(Channel.java:1132)
	at hudson.remoting.Channel$CloseCommand.<init>(Channel.java:1130)
	at hudson.remoting.Channel.close(Channel.java:1290)
	at hudson.remoting.Channel.close(Channel.java:1272)
	at hudson.remoting.Channel$CloseCommand.execute(Channel.java:1137)
Caused: hudson.remoting.Channel$OrderlyShutdown
	at hudson.remoting.Channel$CloseCommand.execute(Channel.java:1138)
	at hudson.remoting.Channel$1.handle(Channel.java:535)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:83)
Caused: java.io.IOException: Backing channel 'gelman-group-linux' is disconnected.
	at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:192)
	at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:257)
	at com.sun.proxy.$Proxy99.isAlive(Unknown Source)
	at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1138)
	at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1130)
	at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:155)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:109)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:736)
	at hudson.model.Build$BuildExecution.build(Build.java:206)
	at hudson.model.Build$BuildExecution.doRun(Build.java:163)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:496)
	at hudson.model.Run.execute(Run.java:1737)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
	at hudson.model.ResourceController.execute(ResourceController.java:97)
	at hudson.model.Executor.run(Executor.java:419)
Build step 'Execute shell' marked build as failure
ERROR: Step 'Scan for compiler warnings' failed: no workspace for Stan - Tests - Integration #513
ERROR: Step 'Publish JUnit test result report' failed: no workspace for Stan - Tests - Integration #513
ERROR: gelman-group-linux is offline; cannot locate JDK8u66
Finished: FAILURE

#5

I don’t know. I guess we just have to live with it until someone figures out a fix.


#6

I still can’t even find the agent log files… I’m updating the SSH slaves plugin on Jenkins. Mind if I try Oracle’s JRE just to try things that shouldn’t work but might anyway?


#7

OK (this is a complete sentence).


#8

Baha. Okay, just updated it to the Oracle JRE 8 (a warning message told me 9 was not advisable yet) and updated the SSH slave plugin. Let’s hope that helps…


#9

@mitzimorris wrote to me:

this PR hung: https://github.com/stan-dev/stan/pull/2391
requests to retest didn’t work.

She’s right; as posted above, the linux node went down again yesterday. I’m not sure why her request to retest didn’t work…


#10

@mitzimorris experienced another Jenkins weirdness - running Stan Pull Request - Upstream - CmdStan on the same machine as Stan Pull Request - Tests - Unit seems to cause the former test to fail with errors like this:

[ RUN      ] CmdStan.optimize_newton
unknown file: Failure
C++ exception with description "bad lexical cast: source type value could not be interpreted as target" thrown in the test body.
[  FAILED  ] CmdStan.variational_meanfield (174 ms)
[ RUN      ] CmdStan.variational_fullrank
unknown file: Failure
C++ exception with description "bad lexical cast: source type value could not be interpreted as target" thrown in the test body.
[  FAILED  ] CmdStan.optimize_newton (24 ms)
[----------] 4 tests from CmdStan (43 ms total)

[----------] Global test environment tear-down
[==========] 4 tests from 1 test case ran. (43 ms total)
[  PASSED  ] 3 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] CmdStan.optimize_newton

 1 FAILED TEST
make: *** [test/interface/optimization_output_test] Error 1

(from http://d1m1s1b1.stat.columbia.edu:8080/job/Stan%20Pull%20Request%20-%20Upstream%20-%20CmdStan/597/console)

Anyone have any ideas about why this might be? @syclik or @Bob_Carpenter might know Jenkins better, or whether the CmdStan tests use some global files or something? I’m not even sure how the jobs ran at the same time, given that the upstream tests run in a different, non-parallelized phase of Stan Pull Request.


#11

CmdStan shouldn’t have a problem.

There are a few places where we could have trouble:

  • maybe the way we run it from within the tests. If you look at src/test/utility.hpp, we’re running things using popen. Maybe that’s failing under the new linux boxes? I don’t think there’s a difficulty with multiple popens in parallel.
  • CmdStan uses pointers (in the argument parsing) and there’s a possibility that it’s not safe somewhere.
  • I googled that exception. It looks like it’s a boost::lexical_cast exception. We can try to trace where that’s happening. There are a few places where we use lexical_cast.

#12

To summarize the current status, here are things that I think have been causing flakiness:

  1. My original change to the old jobs to allow testing against pull requests on forks has encountered a couple of corner cases so far that have caused spurious failures.
  2. Adding the linux box back in and trying to figure out how to use it without it imploding / its network connection dropping. I think it’s in a pretty good state for the past week or so, finally.
  3. Github going down (semi-rare but I’ve seen it a few times).
  4. Two jobs running simultaneously that conflict in ways I don’t totally understand

There might be more I’m missing - anyone have others?

I think the vast majority have been due to #2, and pipelines are supposed to give us better job isolation (dealing with #4), better robustness in the face of node failure (#2), and tools to add retries etc. to help deal with things like #3. I think the mechanism behind #1 is a little better in pipeline land as well, since it gets its parameters from a plugin with commercial support that seems a little more robust than the old “GitHub Pull Request Builder” plugin.
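To make that concrete, here’s a rough sketch of the pipeline shape I have in mind; the stage names, labels, and test commands are made up for illustration, not our actual Jenkinsfile:

```groovy
// Hypothetical declarative-pipeline sketch of the isolation/retry ideas above.
pipeline {
    agent none
    stages {
        stage('Checkout') {
            agent any
            steps {
                // Ride out transient GitHub outages (issue 3) with a retry.
                retry(3) {
                    checkout scm
                }
            }
        }
        stage('Tests') {
            // Each parallel branch runs in its own workspace, which is the
            // job-isolation win over the old freestyle jobs (issue 4).
            parallel {
                stage('Unit') {
                    agent { label 'linux' }
                    steps { sh './runTests.py src/test/unit' }
                }
                stage('Integration') {
                    agent { label 'linux' }
                    steps { sh './runTests.py src/test/integration' }
                }
            }
        }
    }
}
```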

Daniel, what kind of stuff could we simplify or coalesce to add robustness or save time, respectively?

I don’t think my experimentation with pipelines has been affecting the other jobs, other than that they are also pull requests being tested and thus add testing load.


#13

Quotes from @syclik in an email thread:

Bob, regarding more robust alternatives. One thing we can do is simplify and just have things take more time.

Simple is good. It depends how much more time we’re talking about; it’s already very time-consuming.

I broke things up into smaller chunks so that we could get finer-grained information out about the failure, but if we’re willing to give that up, it makes life easier.

We need to get as much information as the pull requester is
going to need to debug.

There are things we can’t always control like GitHub going down.

Understood.


#14

If we wanted to simplify, we could have just one project to test Math, one project to test Stan. I think we’d still want to test that project over a number of different configurations, but it’d be one project. That would allow us to easily run multiple pull requests in parallel. Right now, Math is tested across something like 6 different projects.

Pros of multiple projects:

  • Post-processing of each of those projects is done separately. We can check for things like gcc warnings.
  • We can run multiple pieces of testing a single pull request in parallel. This makes it quicker for us to determine if a pull request has failed.
  • The project is sort of descriptive and just seeing which project failed indicates what needs to be fixed.

Cons of multiple projects:

  • For Math, this means maintaining 6 different projects.
  • Jenkins has to maintain 6 different workspaces, merging 6 times against the same branch. (Space)
  • Post-processing the log isn’t feasible when it’s one project. Scanning the log for gcc warnings won’t actually work using the built-in plugins. (I’ve tried, but a long time ago.)
  • We can’t easily run multiple pull requests in parallel. I think we can, but we’d need even more storage for copies of multiple projects.
  • It’s hard to tell what’s going on. I believe @seantalts’s work with pipelines should clean a lot of this up, but it’s still easier to see what’s happening when you see that the one project for the repo failed.
  • Triggering other jobs properly is harder than it seems. Hopefully pipelines fixes that a bit too. So, having one project is a bit easier.

It also hits GitHub less often, so we might have fewer issues with their downtime.


#15

I think the pipelines will get the best of both worlds here, except that right now they’re set up to be super parallel and I’ve done it in a lazy way which involves checking out the git repo on each machine. I can look into changing that to use a new stashing and unstashing feature to spread the git repo across parallelized nodes, which might legit give us the best of both. Parallelization progress bar visualization is a little messed up right now but I suspect that will get better in future versions, and failures and output are still clearly visible on a per-stage basis (See this and this general stage view for some examples).

I’ll look into the stashing thing!


#16

Stashing seems like it might be a decent solution! Though it’s breaking something weird right now, I think I will be able to eventually get that settled and then we can talk to github just once at the beginning of the build (and wrap that in a retry block).
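In pipeline terms, the “talk to github once, wrapped in a retry” idea might look roughly like this (scripted-pipeline sketch; the node label and stash name are made up):

```groovy
// Sketch only: do the single GitHub checkout up front, retrying on
// transient failures, then stash the result for later stages to reuse.
node('linux') {
    retry(3) {
        checkout scm          // the one and only talk-to-GitHub step
    }
    stash name: 'sources'     // tar up the workspace for other nodes
}
```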

Another issue I just encountered is that the Stan src/test/performance tests don’t work on the linux node (due to linux or g++, I’m not sure). I knew this already but didn’t realize until today that Math’s Upstream - Stan tests also did the performance tests. This resulted in error messages that look like this:

src/test/performance/logistic_test.cpp:111
Value of: first_run[0]
  Actual: -66.1493
Expected: -65.7658
lp__: index 0

(from here, but this link will eventually stop working).


#17

What do you mean by “stashing”? (It sounds great no matter what it is.)

Let’s split this off into its own thread and fix the problem for good!

Right now it’s hardware + compiler dependent, which makes it not a good test at all. I’ve mentioned before: the purpose of the test has really expanded from just timing to a crude integration test (due to real bugs that were introduced and this was the easiest thing to adapt to prevent future bugs).


#18

Stashing is basically just asking Jenkins to tar up the working directory (or some subset of it) and then unstash it on new nodes on demand, and making that pretty easy and hiding the inter-machine communication aspects. You can see some light doc here.
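For anyone curious, the stash/unstash pair looks roughly like this in a scripted pipeline (node labels and the stash name are hypothetical):

```groovy
// On the node that did the checkout: tar up the working directory.
node('linux-1') {
    checkout scm
    stash name: 'repo', includes: '**'   // narrow via includes if desired
}

// On any other node: unpack the same files without touching GitHub again.
node('linux-2') {
    unstash 'repo'
    sh './runTests.py src/test/unit'
}
```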

It seems to be working now! It takes ~3 minutes to unstash the first time on a new machine (full Stan + Math repos) but it only copies over the network once, so the 2nd time is only a few seconds. I think this is worth it since it means we talk to github way less, and that whole checkout process for both repos could take a minute or two on its own anyway.

Here’s the build for the PR with all the bells and whistles hooked up in the last two builds: http://d1m1s1b1.stat.columbia.edu:8080/job/Stan%20Pipeline/view/change-requests/job/PR-2414/


#19

Regarding email notification, we had set up a Google Group called stan-buildbot so the notification went to a list that people could subscribe to. I don’t think it should just go to the one email address. (I’m guessing you haven’t even seen these emails?)

---------- Forwarded message ----------
From: ...@gmail.com
Date: Fri, Oct 13, 2017 at 7:07 AM
Subject: [StanJenkins] SUCCESSFUL: Job 'Stan Pipeline/PR-2414 [29]'
To: ...@gmail.com

SUCCESSFUL: Job ‘Stan Pipeline/PR-2414 [29]’: Check console output at http://d1m1s1b1.stat.columbia.edu:8080/job/Stan%20Pipeline/job/PR-2414/29/


#20
  1. We can add another email address to always send to. What address should I put in? I can’t actually figure out from that page how I would send a message to that group, haha.
  2. Sending mail to the buildbot’s gmail address is an edge case I didn’t consider. Right now the job is set up to email the developers who have commits that were newly tested by the job (more or less; the logic is somewhat fuzzy but automatic from the plugin). The buildbot is the one who automatically creates the commit that updates Stan’s develop branch to point to the latest Math develop that passes the tests, and so it gets an email when that job finishes :P