One of the very cool applications of the FFT (and its inverse) is that a convolution becomes a simple pointwise product in the transformed space. This is handy, for example, if one wants the difference distribution of two quantities: one simply FFT-transforms both functions, multiplies the transforms, and takes the inverse FFT of the product.
Long story short: if this is possible with what is planned, then by all means it should be taken into consideration for the design. That would be quite useful, I think.
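To make the idea concrete, here is a small NumPy sketch of the convolution theorem. Note that the FFT gives a *circular* convolution; the arrays below are zero-padded to a common length of at least `len(f) + len(g) - 1` so the result matches the ordinary (linear) convolution.

```python
import numpy as np

# Two sequences, zero-padded so circular convolution agrees
# with linear convolution.
f = np.array([1.0, 2.0, 3.0, 0.0])
g = np.array([0.5, 0.25, 0.0, 0.0])

# Convolution theorem: conv(f, g) = iFFT(FFT(f) * FFT(g))
conv = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
# conv agrees with np.convolve([1, 2, 3], [0.5, 0.25])
```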
It’s very inconvenient to do this without complex numbers. I think the options here are:

1. Implement a complex type and vectorized complex arithmetic.
2. Establish a convention for treating 2-by-n matrices as complex vectors, and provide complex arithmetic for them.
3. Not expose any complex arithmetic, but give users a convolution function.
I think option 1 has the character of tilting at windmills for pretty thin benefit; it’s a truly huge project. Option 2 seems like a design disaster (I picture nightmares if our handling of matrices evolves, or Eigen’s does). We can’t prevent users from doing this in an ad hoc way, but I don’t want to bless it. Option 3 is a bit unsatisfying, but it has the benefit of letting us add the pieces one at a time. If after a few months we have a zoo of signal-processing functions, we may want to re-evaluate this.
A complex type isn’t enough of a priority right now. And I think the derivatives are pretty tricky to do in general.
Is the second option what we were talking about before? If so, I don’t see a problem with that. I don’t understand the third option. Is convolution the reason people want these operations? Or are there more things people want, which is why you think a library of such functions may grow?
The second option would just be a brittle workaround for the lack of a complex type. People may slice the thing we give them and do all kinds of manipulation; I’m just wary of creating a pseudo-type. So I’m saying that we shouldn’t really worry about convolution for now, but later, if there’s a need, we can add specific functions for convolution.
Michael’s signatures are exactly right, though I’m tempted to add one more: `real[,] fft_real_imag(real[] x);`
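To illustrate what that proposed signature might return, here is a NumPy sketch. The 2-by-n layout (row 0 = real parts, row 1 = imaginary parts) is my assumption about the intended packing, not a settled design:

```python
import numpy as np

def fft_real_imag(x):
    """Hypothetical sketch of the proposed Stan signature
        real[,] fft_real_imag(real[] x);
    Assumed layout: a 2-by-n array, row 0 holding the real parts
    and row 1 the imaginary parts of the DFT of x."""
    z = np.fft.fft(np.asarray(x, dtype=float))
    return np.vstack([z.real, z.imag])

out = fft_real_imag([1.0, 0.0, 0.0, 0.0])  # DFT of a unit impulse is all ones
```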
OK, let’s keep things simple. The Fourier transform is super handy for a convolution, but I am not an expert on the discrete FFT, nor is this an easy request, as it entails transforming and back-transforming. Still, if we think convolutions are feasible at some point, then we should probably design the internal code with this application in mind. But as Bob says, the simple functions are a great place to start.
I am working on using convolution operators to model spatial population dynamics. Right now my go-to solution is nested loops in the function block to implement the convolution. Ideally, the simulations could be made much faster by computing the convolution via FFT.
I saw that the Stan team just released support for complex numbers, but I could not find an FFT (iFFT) function. Is the Stan team working on providing FFT-related functions for solving convolutions? Thank you in advance.
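For reference, here is what that replacement looks like in NumPy: a nested-loop 2D circular convolution (the analogue of what a hand-written function-block loop computes, assuming periodic boundaries) versus the FFT route, which produces the same result in O(nm log nm) instead of O(n²m²):

```python
import numpy as np

def conv2d_loops(x, k):
    """Circular 2D convolution with nested loops, assuming
    periodic boundary conditions."""
    n, m = x.shape
    out = np.zeros_like(x)
    for i in range(n):
        for j in range(m):
            for p in range(n):
                for q in range(m):
                    out[i, j] += x[p, q] * k[(i - p) % n, (j - q) % m]
    return out

rng = np.random.default_rng(0)
x = rng.random((8, 8))   # e.g. a population density grid
k = rng.random((8, 8))   # e.g. a dispersal kernel

direct = conv2d_loops(x, k)                                        # slow
via_fft = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k)))   # fast
```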
Yes, and if we’re fast, they should be in the next release (in about 3 months or so). I already have them implemented in the C++ tests for complex numbers, and now @WardBrian is finishing off the vector and matrix wrappers for complex numbers.
I think at this point it wouldn’t be premature to start work on adding the FFT functions to the math library. It would provide a nice stress test for my work on the compiler, if nothing else.
I’ve got both 1D and 2D FFT and inverse FFT working on the math branch feature/20-fft (this has been a feature request since before we moved the repos). All the reverse-mode autodiff seems to be working, but some of the forward mode is failing. And I’m only getting results matching NumPy to two decimal places in the 2D case. I don’t know whether that’s due to my poor implementation or NumPy’s!
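When chasing a mismatch like that, a round-trip check can help localize whether the discrepancy is in the forward or the inverse transform. This is a NumPy sketch of the idea (not the math-branch tests themselves); the same check against the branch's own fft2/inv_fft2 would isolate which side is losing precision:

```python
import numpy as np

# Round trip: inverse(forward(x)) should recover x to near machine precision.
# If the round trip is clean but results still differ from NumPy's fft2,
# the discrepancy is in the forward transform's convention or precision.
x = np.random.default_rng(1).random((16, 16))
roundtrip = np.real(np.fft.ifft2(np.fft.fft2(x)))
```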