I am currently looking at a set of time series that are clearly correlated, but not all to the same degree.
Call them A, B, C, D, … [up to 12 of these, each of 50-60 points, but I can make a principled retreat to 6].
I am trying to build a GP to model this set of time series.
I can define the basic input vector as (T, V), where V is an indicator vector for the time series, but this does not work. It seems obvious that this is because the various time series are differently correlated: A[t+1] is obviously not the same distance away from A[t] as B[t] is, which in turn is at a different distance than C[t]. But, alas, cov_exp_quad assumes an equally weighted metric.
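To make the problem concrete, here is a quick numpy sketch of the equivalent squared-exponential kernel (not actual Stan code; the series count, labels, and hyperparameter values are made up for illustration):

```python
import numpy as np

def sq_exp(x, y, alpha=1.0, rho=1.0):
    # Squared-exponential kernel on the full input vector,
    # i.e. what cov_exp_quad computes.
    d2 = np.sum((x - y) ** 2)
    return alpha**2 * np.exp(-d2 / (2 * rho**2))

def inp(t, series, n_series=3):
    # Input = (t, one-hot series indicator).
    v = np.zeros(1 + n_series)
    v[0] = t
    v[1 + series] = 1.0
    return v

t = 5.0
k_AB = sq_exp(inp(t, 0), inp(t, 1))
k_AC = sq_exp(inp(t, 0), inp(t, 2))
# Every pair of distinct series sits at the identical indicator
# distance sqrt(2), so this kernel cannot express unequal
# cross-series correlations:
assert np.isclose(k_AB, k_AC)
```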
My idea (the obvious one) is to put a covariance matrix Q over the series into the model, and replace each indicator vector with the corresponding row of the Cholesky factor of Q. Then cov_exp_quad should work as before, but with a weighted distance function, and I can fit Q along with everything else.
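Concretely, the idea looks like this (numpy sketch with a made-up fixed Q, just to check the distance algebra; in the real model Q would be a parameter, e.g. built from an LKJ-distributed correlation matrix):

```python
import numpy as np

# A target covariance Q over 3 series (values invented for illustration).
Q = np.array([[1.0, 0.8, 0.2],
              [0.8, 1.0, 0.5],
              [0.2, 0.5, 1.0]])
L = np.linalg.cholesky(Q)  # L @ L.T == Q

def d2(i, j):
    # Squared distance between the Cholesky rows that replace the
    # one-hot indicators for series i and j.
    return np.sum((L[i] - L[j]) ** 2)

# Since L[i] . L[j] = Q[i, j], the squared distance reduces to
# Q_ii + Q_jj - 2 Q_ij, so highly correlated series land close together.
for i in range(3):
    for j in range(3):
        assert np.isclose(d2(i, j), Q[i, i] + Q[j, j] - 2 * Q[i, j])

# e.g. series 0 and 1 (corr 0.8) are closer than 0 and 2 (corr 0.2):
assert d2(0, 1) < d2(0, 2)
```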
Now I assume that I’m not the first person to encounter this problem, so here are my questions:
Is this the best way to do this? If not, what is the best way? (Is there another way? Am I missing something obvious?)
What is the best way to implement this? I can see at least two ways, and this is a big enough model that computational cost is a material issue, so I thought I would mine the experience of people who have more of it than I do.
Thanks for any thoughts,