FFT design considerations


Having an FFT in Stan would be really cool!

One very useful application of the FFT (and its inverse) is the ability to compute a convolution as a simple pointwise product of two functions in the transformed space. This is handy, for example, if one wants the distribution of the difference of two quantities: one simply FFT-transforms the two functions, multiplies the transforms pointwise, and then takes the inverse FFT of that product.
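For the record, this is the convolution theorem. A minimal sketch in NumPy (not Stan code, just to illustrate the idea): zero-pad to the full output length so the circular convolution of the DFT matches the linear one.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 2.0])

# Direct linear convolution, length 3 + 3 - 1 = 5.
direct = np.convolve(x, y)

# Same result via FFT: zero-pad to the full output length,
# multiply pointwise in the frequency domain, then invert.
n = len(x) + len(y) - 1
fft_conv = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(y, n)).real

assert np.allclose(direct, fft_conv)
```

Without the zero-padding, the product of same-length DFTs gives the *circular* convolution, which is one of the subtleties alluded to later in this thread.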

Long story short: If this is possible with what is planned, then by all means it should be taken into consideration for the design. That would be quite useful, I think.



It’s very inconvenient to do this without complex numbers. I think the options here are

  1. Implement a complex type and vectorized complex arithmetic.
  2. Establish a convention for treating 2-by-n matrices as complex vectors, providing complex arithmetic.
  3. Not expose any complex arithmetic, but give users a convolution function.

I think 1 has the character of tilting at windmills for pretty thin benefit; it’s a truly huge project. 2 seems like a design disaster (picturing nightmares if our handling of matrices evolves, or Eigen’s does). We can’t prevent users from doing this in an ad hoc way, but I don’t want to bless it. 3 is a bit unsatisfying but has the benefit of letting us add the pieces one at a time. If after a few months we have a zoo of signal-processing functions, we may want to re-evaluate this.


A complex type isn’t enough of a priority right now. And I
think the derivatives are pretty tricky to do in general.

Is the second option what we were talking about before?
If so, I don’t see a problem with that.

I don’t understand the third option. Is convolution
why people want these operations? Or are there more
applications, which is why you think a library of functions may grow?

  • Bob

Forget convolution for now – there are more subtle issues involved due to the discretization assumed in the DFT. Right now let’s just start with

real[] fft_amps(real[] x);
real[] fft_phases(real[] x);
real[,] fft_amps_phases(real[] x);
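To pin down the semantics these signatures suggest, here is a NumPy sketch (the function names mirror the proposal; nothing here is existing Stan code, and "amplitude" and "phase" are assumed to mean DFT magnitude and angle):

```python
import numpy as np

def fft_amps(x):
    """Magnitudes |X_k| of the DFT of a real sequence x."""
    return np.abs(np.fft.fft(x))

def fft_phases(x):
    """Phase angles arg(X_k), in radians, of the DFT of x."""
    return np.angle(np.fft.fft(x))

def fft_amps_phases(x):
    """Both at once, as a 2-row array."""
    X = np.fft.fft(x)
    return np.stack([np.abs(X), np.angle(X)])

# fft([1, 0, -1, 0]) = [0, 2, 0, 2], so the amplitudes are [0, 2, 0, 2].
x = np.array([1.0, 0.0, -1.0, 0.0])
assert np.allclose(fft_amps(x), [0.0, 2.0, 0.0, 2.0])
```

Note that for a real input of length n, the top half of the spectrum is redundant (conjugate symmetry), which the eventual Stan design may or may not want to exploit.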

With regards to 2 I’m just saying that I don’t want to give people a bunch of functions like:

real[2,] pointwise_multiply_as_if_complex_vector(real[2,] z, real[2,] w);

as a brittle workaround for the lack of a complex type. People may slice the thing we give them and do all kinds of manipulation; I’m just kind of wary of creating a pseudo-type. So I’m saying that we shouldn’t really worry about convolution for now, but later if there’s a need we can add specific functions for convolution.
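To make concrete what such a hypothetical pseudo-type function would have to do (this illustrates the workaround being argued *against*, not proposed code), here is a NumPy sketch with row 0 holding the real parts and row 1 the imaginary parts:

```python
import numpy as np

def pointwise_multiply_as_if_complex_vector(z, w):
    """Treat 2-by-n real arrays as complex vectors (row 0 = real part,
    row 1 = imaginary part) and multiply them elementwise:
        (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    """
    re = z[0] * w[0] - z[1] * w[1]
    im = z[0] * w[1] + z[1] * w[0]
    return np.stack([re, im])

z = np.array([[1.0, 2.0], [3.0, 4.0]])   # represents [1+3i, 2+4i]
w = np.array([[5.0, 6.0], [7.0, 8.0]])   # represents [5+7i, 6+8i]
# (1+3i)(5+7i) = -16+22i and (2+4i)(6+8i) = -20+40i
assert np.allclose(pointwise_multiply_as_if_complex_vector(z, w),
                   [[-16.0, -20.0], [22.0, 40.0]])
```

The brittleness is visible even in this toy: nothing stops a user from slicing rows apart or passing an array whose first dimension isn’t 2, and the type system can’t help.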

Michael’s signatures are exactly right, though I’m tempted to add one more: real[,] fft_real_imag(real[] x);

I’d suggest starting with just one function, getting
feedback and merging, then going on to the rest.

  • Bob

That’s absolutely the plan. Just figured Sebastian deserved a response on convolutions.


OK, let’s keep things simple. The Fourier transform is super handy for convolution, but I am not an expert on the discrete FFT, nor is this an easy request, since it entails transforming and then back-transforming. Still, if we think convolutions are feasible at some point, then we should probably design the internal code with this application in mind. But as Bob says, getting started with the simple functions is a great start.


The actual FFT code is sitting inside of Eigen, so it’s
out of our hands unless we want to rewrite it.

  • Bob

Oh, I did not know that. This makes it much easier… just exposing what they did sounds like the way to go, then.

That’s the issue—how to do that. Their API is based
on complex numbers, which we don’t have in Stan.

  • Bob

Hi Bob,

I am working on using convolution operators to model spatial population dynamics. Right now my go-to solution is to implement the convolution with nested loops in the function block. Ideally, the simulations could be made much faster by computing the convolution via FFT.
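For reference, the two approaches can be contrasted in a NumPy sketch (a full 2D convolution written with nested loops, the way one might in a Stan function block, against the zero-padded FFT route; the helper names are made up for illustration):

```python
import numpy as np

def conv2d_loops(x, k):
    """Full 2D convolution with nested loops."""
    nx, mx = x.shape
    nk, mk = k.shape
    out = np.zeros((nx + nk - 1, mx + mk - 1))
    for i in range(nx):
        for j in range(mx):
            for p in range(nk):
                for q in range(mk):
                    out[i + p, j + q] += x[i, j] * k[p, q]
    return out

def conv2d_fft(x, k):
    """Same convolution via 2D FFT with zero-padding."""
    shape = (x.shape[0] + k.shape[0] - 1, x.shape[1] + k.shape[1] - 1)
    return np.fft.ifft2(np.fft.fft2(x, shape) * np.fft.fft2(k, shape)).real

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k = rng.standard_normal((3, 3))
assert np.allclose(conv2d_loops(x, k), conv2d_fft(x, k))
```

The loop version is O(n²m²) while the FFT version is O(n² log n) for an n-by-n grid, which is where the speedup for spatial models comes from.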

I saw that the Stan team just released support for complex numbers, but I could not find an FFT (or inverse FFT) function. Is the Stan team working on providing FFT-related functions to solve convolutions? Thank you in advance.


Yes, and if we’re fast, they should be in the next release (in about 3 months or so). I already have them implemented in the C++ tests for complex numbers and now @WardBrian is finishing off the vector and matrix wrappers for complex numbers.

I think at this point it wouldn’t be premature to start the work on adding the FFT functions to the math library. It would provide a nice stress test for my work on the compiler, if nothing else.

I’ve got both 1D and 2D FFT and inverse FFT working on the math branch feature/20-fft (this has been a feature request since before we moved the repos). All the reverse-mode autodiff seems to be working, but some of the forward mode is failing. And I’m only getting results matching NumPy to two decimal places on the 2D case. I don’t know if this is my poor implementation or NumPy’s!
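One sanity check when a 2D implementation disagrees with NumPy: the 2D FFT is separable, i.e. it equals the 1D FFT applied along each axis in turn, which isolates whether a discrepancy lives in the 1D transform or in the axis iteration. A quick NumPy check of that identity:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 6))

# The 2D FFT factors into 1D FFTs: transform every row, then every
# column (the order of the two axes does not matter).
by_axes = np.fft.fft(np.fft.fft(x, axis=1), axis=0)
assert np.allclose(by_axes, np.fft.fft2(x))
```

If a 2D implementation built from a working 1D FFT only matches to two decimal places, comparing against this axis-by-axis factorization should narrow down where the error accumulates.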


Sounds great, I added some links to the NumPy source.

From the NumPy docs:

fft2 is just fftn with a different default for axes.

[Links to the NumPy source: the main function that iterates over the axes, the fft function and its raw implementation, and a .c file for the real and complex cases.]


Was this a request to go beyond fft2? The rest of the fftn are easy, but I thought I’d start simple with the main ones we need.

If anyone has reverse-mode or forward-mode autodiff code for this, that’d be great. The hint is that

\partial\, \textrm{fft}(x) = \textrm{fft}(\partial x)

Presumably something similar holds for the inverse FFT.
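Since the DFT is a linear map, its directional derivative is just the transform of the tangent, and the same argument applies to the inverse transform. A quick numerical check of both identities in NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(8)
dx = rng.standard_normal(8)   # tangent (perturbation direction)
eps = 1e-6

# Because the FFT is linear, the finite difference is exact up to
# floating-point roundoff: fft(x + eps*dx) - fft(x) = eps * fft(dx).
fd = (np.fft.fft(x + eps * dx) - np.fft.fft(x)) / eps
assert np.allclose(fd, np.fft.fft(dx))

# The same identity holds for the inverse transform.
fd_inv = (np.fft.ifft(x + eps * dx) - np.fft.ifft(x)) / eps
assert np.allclose(fd_inv, np.fft.ifft(dx))
```

Linearity also means no higher-order terms appear, so the finite difference agrees with the analytic tangent for any step size, not just small ones.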

No, I just wanted to add a note (in case there were some numerical differences).

Not sure how relevant it would be, but I know a few of the Julia autodiff libraries have custom adjoints for FFT code, e.g.:

This is all defined in terms of the AbstractFFTs.jl library API.
