[off-topic] Future of Theano

Looks like there are no (current) plans for the Theano project to continue. The announcement, posted to the Theano list on Google Groups, is quoted below.

Dear users and developers,

After almost ten years of development, we have the regret to announce
that we will put an end to our Theano development after the 1.0 release,
which is due in the next few weeks. We will continue minimal maintenance
to keep it working for one year, but we will stop actively implementing
new features. Theano will continue to be available afterwards, as per
our engagement towards open source software, but MILA does not commit to
spend time on maintenance or support after that time frame.

The software ecosystem supporting deep learning research has been
evolving quickly, and has now reached a healthy state: open-source
software is the norm; a variety of frameworks are available, satisfying
needs spanning from exploring novel ideas to deploying them into
production; and strong industrial players are backing different software
stacks in a stimulating competition.

We are proud that most of the innovations Theano introduced across the
years have now been adopted and perfected by other frameworks. Being
able to express models as mathematical expressions, rewriting
computation graphs for better performance and memory usage, transparent
execution on GPU, higher-order automatic differentiation, for instance,
have all become mainstream ideas.

In that context, we came to the conclusion that supporting Theano is no
longer the best way we can enable the emergence and application of novel
research ideas. Even with the increasing support of external
contributions from industry and academia, maintaining an older code base
and keeping up with competitors has come in the way of innovation.

MILA is still committed to supporting researchers and enabling the
implementation and exploration of innovative (and sometimes wild)
research ideas, and we will keep working towards this goal through other
means, and making significant open source contributions to other projects.

Thanks to all of you for helping develop Theano, and for making it
better by contributing bug reports, profiles, use cases, documentation,
and support.

– Yoshua Bengio,
Head of MILA
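
For anyone who never used it, the innovations the announcement lists (expressing models as symbolic expressions, graph rewriting, transparent GPU execution, automatic differentiation) show up in even the smallest Theano program. A minimal sketch against the Theano 1.x API; the variable names are just for illustration:

```python
import theano
import theano.tensor as T

# Symbolic variables: this builds a computation graph, nothing is computed yet.
x = T.dvector('x')
w = T.dvector('w')
loss = T.sum((w * x) ** 2)

# Automatic differentiation on the graph (it can be applied repeatedly
# for higher-order derivatives).
grad = T.grad(loss, w)

# Compiling rewrites the graph for speed/memory and, depending on
# configuration, can run it transparently on the GPU.
f = theano.function([x, w], [loss, grad])

print(f([1.0, 2.0], [0.5, -0.5]))
```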

That’s an odd announcement from my perspective. Theano’s open source, so I’d expect someone else to take up supporting it.

Yeah, I found the announcement to be a little out of place. Organizations can’t really kill open source projects by pulling support. Hopefully someone picks up the development.

Looks like, given the threads on MXNet, PyMC3 is looking for a new autodiff backend. I’m surprised nobody wants to support Theano; I’d be curious to know what the reason is.


Maybe it’s because TensorFlow is too popular and MXNet got the support from major companies? I wish MXNet would get more popular, by the way; it seems to have a nice API.


FYI, there are now discussions going on about the backend on the PyMC Discourse, e.g.:

And a GitHub repo for testing:


I guess there is a lot of work in keeping it working with different hardware, even when using lower-level libraries between Theano and the hardware; those libraries seem to be changing quite fast, too. See, e.g., the NNVM compiler announcement mentioned by Smola in that thread. From that page:

Second, framework developers need to maintain multiple backends to guarantee performance on hardware ranging from smartphone chips to data center GPUs. Take MXNet as an example. It has a portable C++ implementation built from scratch. It also ships with target dependent backend support like cuDNN for Nvidia GPU and MKLML for Intel CPUs. Guaranteeing that these different backends deliver consistent numerical results to users is challenging.

It sounds like a challenging task to maintain these.
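
To make the multiple-backends point concrete: in MXNet the same array operation is dispatched to a different backend just by changing the context, and the hard part is keeping the answers consistent. A rough sketch, assuming an MXNet 1.x install with CUDA support (the shapes are arbitrary):

```python
import mxnet as mx

# Same computation, two backends.
a_cpu = mx.nd.random.normal(shape=(512, 512), ctx=mx.cpu())
b_cpu = mx.nd.dot(a_cpu, a_cpu)

# Requires a CUDA-enabled build and an NVIDIA GPU.
a_gpu = a_cpu.copyto(mx.gpu(0))
b_gpu = mx.nd.dot(a_gpu, a_gpu)

# The results should agree, but only up to floating-point tolerance.
diff = mx.nd.abs(b_cpu - b_gpu.copyto(mx.cpu()))
print(mx.nd.max(diff).asscalar())
```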


Yes, it’s really a challenge to deal with multiple hardware / software configurations. You can see that in our challenges in supporting {Linux, Windows, OS X} x {g++, clang++, intel} x {command line, R, Python, …}. And that’s not including the different versions of g++ that have different behavior.
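
Just counting the combinations listed above already makes the point (a throwaway enumeration; the real interface list is longer than this):

```python
from itertools import product

oses = ["Linux", "Windows", "OS X"]
compilers = ["g++", "clang++", "Intel"]
interfaces = ["command line", "R", "Python"]

# 3 x 3 x 3 = 27 configurations, before counting compiler versions,
# optimization levels, or the other interfaces.
configs = list(product(oses, compilers, interfaces))
print(len(configs))
```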

Although I’m really excited about GPU and MPI support, they don’t come for free, and we’re going to have to do a lot of work to support users.

That’s an understatement. It gets worse with low-level libraries, as they’re more tied to the hardware. Our Stan users, for example, don’t need to worry about this (other than during installation).

I think MXNet is restricted to NVIDIA for GPUs, which worries me on the portability front.

All that plug-and-play stuff is usually more marketing than truth. It’s very hard and almost always needs custom work by skilled back-end programmers.

Not to mention the different optimization levels having different behavior.

You can’t really guarantee consistent numerical results like that. You’d have to turn off the Intel compiler’s optimizations; it didn’t pass our unit tests with optimizations turned on. We had to lower the arithmetic precision in our tests to get Intel to pass.
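
The root cause is that floating-point arithmetic isn’t associative, so any optimization that reorders operations (vectorized reductions, fast-math style reassociation) can legitimately change the last bits of a result. A tiny illustration in plain Python:

```python
# The same three numbers, summed in a different order.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)

print(a == b)  # False
print(a, b)    # 0.6000000000000001 0.6
```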

Java tried byte-level reproducible math. Really bad idea. They abandoned it.

Forgot to mention CPUs before—they’re where the real floating-point action is.
