Scalar-to-function regression in Stan

Hi all,

I am thinking about adding scalar-to-function regression to brms at some point and wanted to ask how such a model could be specified in Stan. For a scalar response y and a functional predictor X(t), a simple model would look as follows (i indexes observations, T is the interval on which X(t) is defined):

y_i = a + \int_T X_i(t) b(t) dt + e_i

where a is the intercept, b(t) is the coefficient function corresponding to X(t), and e_i is the error term. Both X_i(t) and b(t) should be rather easy to estimate using splines or GPs, but I wonder how we could efficiently implement the integral in Stan.
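
(The most direct option I can think of would be to approximate the integral by a quadrature sum over a grid t_1, ..., t_K with weights w_k, i.e. \int_T X_i(t) b(t) dt \approx \sum_{k=1}^K w_k X_i(t_k) b(t_k), but perhaps there is something smarter.)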

Many thanks,
Paul

If X_i(t) and b(t) are GPs, then their product is not a GP (I first stated otherwise, but as Rasmussen & Williams (2006) put it: “If f_1 and f_2 are Gaussian processes then the product f will not in general be a Gaussian process, but there exists a GP with this covariance function”).
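
(To spell out the covariance in that quote: assuming f_1 ~ GP(0, k_1) and f_2 ~ GP(0, k_2) are independent with zero means, the product f(t) = f_1(t) f_2(t) has E[f(t)] = 0 and Cov(f(t), f(t')) = k_1(t, t') k_2(t, t'), so the product kernel is a valid covariance function even though f itself is not Gaussian.)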

Thank you Aki. Do Rasmussen & Williams (2006) give a constructive proof of what such a GP looks like?

It would definitely be nice to get GPs working, but I would also like to make this possible via splines since they currently scale far better with data than GPs.
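
With a spline basis B_1(t), ..., B_J(t) for b(t), the integral reduces to an ordinary linear term, since \int_T X_i(t) b(t) dt = \sum_j beta_j \int_T X_i(t) B_j(t) dt, and the inner integrals can be precomputed by numerical integration. A rough, untested sketch of how that could look in Stan (the grid, the quadrature weights w, and the basis matrix B computed outside Stan, e.g. in R, are all assumptions on my side):

data {
  int<lower=1> N;            // observations
  int<lower=1> K;            // grid points t_1, ..., t_K on T
  int<lower=1> J;            // spline basis functions for b(t)
  matrix[N, K] X;            // X_i(t_k): functional predictor on the grid
  matrix[K, J] B;            // spline basis evaluated on the grid
  vector[K] w;               // quadrature weights (e.g. trapezoidal rule)
  vector[N] y;               // scalar response
}
transformed data {
  // Z[i, j] approximates \int_T X_i(t) B_j(t) dt; computed once, not per iteration
  matrix[N, J] Z = X * diag_matrix(w) * B;
}
parameters {
  real a;                    // intercept
  vector[J] beta;            // spline coefficients of b(t)
  real<lower=0> sigma;       // residual sd
}
model {
  a ~ normal(0, 5);          // placeholder priors
  beta ~ normal(0, 1);       // a smoothness (e.g. random-walk) prior would be more natural
  sigma ~ normal(0, 1);
  y ~ normal(a + Z * beta, sigma);
}

The sampler then only ever sees a plain linear regression on Z, which is why I would expect this to scale much better than a latent GP approach.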

There are several examples of products of kernels and what the corresponding GPs look like, but that doesn’t help if you model X_i(t) and b(t) with separate GPs, because the product is not a GP (a GP would have made the integral easy).

Ok, that makes sense. The thing is that X_i(t) varies over observations i while b(t) is constant over i. I am not an expert with GPs, so apologies if my questions are stupid, but I am unsure whether we can estimate X_i and b with a single (family of) GP(s) in this case. We would need something like GP_i(t, (theta_i, psi)), where theta_i are observation-specific parameters and psi are observation-independent parameters coming from b(t).

Yes. You can also add a hierarchical prior for the GP_i's.
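
For concreteness, a rough and untested sketch of that kind of setup in Stan (everything here is my own assumption: X_i is observed with noise as x_obs on a shared grid t, the integral is replaced by a quadrature sum with weights w, and the GP_i simply share their hyperparameters instead of getting a full hierarchical prior):

data {
  int<lower=1> N;
  int<lower=1> K;
  array[K] real t;           // shared grid on T
  vector[K] w;               // quadrature weights
  matrix[N, K] x_obs;        // noisy observations of X_i at the grid points
  vector[N] y;
}
parameters {
  real a;
  real<lower=0> sigma;       // residual sd for y
  real<lower=0> tau;         // measurement sd for x_obs
  real<lower=0> alpha_x;     // GP hyperparameters shared across the X_i
  real<lower=0> rho_x;
  real<lower=0> alpha_b;     // GP hyperparameters for b(t)
  real<lower=0> rho_b;
  matrix[N, K] x_std;        // non-centered latent values of the X_i
  vector[K] b_std;           // non-centered latent values of b
}
transformed parameters {
  matrix[K, K] L_x = cholesky_decompose(
    gp_exp_quad_cov(t, alpha_x, rho_x) + diag_matrix(rep_vector(1e-9, K)));
  matrix[K, K] L_b = cholesky_decompose(
    gp_exp_quad_cov(t, alpha_b, rho_b) + diag_matrix(rep_vector(1e-9, K)));
  matrix[N, K] x = x_std * L_x';   // row i is the latent X_i on the grid
  vector[K] b = L_b * b_std;       // latent b(t) on the grid
}
model {
  alpha_x ~ normal(0, 1);  rho_x ~ inv_gamma(5, 5);
  alpha_b ~ normal(0, 1);  rho_b ~ inv_gamma(5, 5);
  tau ~ normal(0, 1);      sigma ~ normal(0, 1);
  a ~ normal(0, 5);
  to_vector(x_std) ~ std_normal();
  b_std ~ std_normal();
  to_vector(x_obs) ~ normal(to_vector(x), tau);   // measurement model for the X_i
  // y_i = a + sum_k w_k X_i(t_k) b(t_k) + e_i  (quadrature version of the integral)
  y ~ normal(a + (x .* rep_matrix(b', N)) * w, sigma);
}

(Sharing alpha_x and rho_x across i is the simplest version; a hierarchical prior would instead give each GP_i its own hyperparameters drawn from a common population distribution. With K grid points per observation this needs N + 1 latent vectors of length K, so it will be much heavier than the spline version above.)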