I am interested in implementing latent Gaussian processes for repeated measures on multiple subjects. That is, each subject has their own little 1-dimensional time series, not necessarily aligned with, or spaced the same as, the other subjects’. The subjects fall into 2 or more (mutually exclusive) groups, and I’d like each group to have its own Gaussian process parameters, so we can investigate whether the length-scale, amplitude, etc. differ between groups.
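In symbols, the kind of model I have in mind might look like this (notation mine; the squared-exponential kernel is just an assumption for concreteness): for subject $i$ in group $g(i)$, with observations $j$,

```latex
y_{ij} = f_i(t_{ij}) + \varepsilon_{ij}, \qquad
\varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)

f_i \sim \mathrm{GP}\!\left(0,\; k_{g(i)}\right), \qquad
k_g(t, t') = \alpha_g^2 \exp\!\left(-\frac{(t - t')^2}{2\rho_g^2}\right)
```

so each subject gets an independent latent function $f_i$, but the amplitude $\alpha_g$ and length-scale $\rho_g$ are shared within a group.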

The idea is based on a model presented in https://doi.org/10.1111/bmsp.12180 - though there they fit the model using a kind of EM algorithm.

I know I can fit (latent) Gaussian processes in brms, and I understand how this basic approach can be sped up with the basis-function approximation. But a model of the form `y ~ gp(t, by = group) + (1 | id)` doesn’t seem appropriate, because it would treat responses from different subjects with similar timestamps as ‘close together’ whenever those subjects are in the same group. At the same time, `y ~ gp(t, by = id)` would fit a completely separate GP for every subject, which seems inefficient and would not pool the parameters quite how I imagine it.

In other words: if I have 5 subjects in group A and 5 subjects in group B, then I want a ‘separate’ GP for every subject, but only two different sets of parameters: one for group A and one for group B. And maybe a measurement-error parameter `sigma` that’s common to all subjects.
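To make the target concrete, here is how I would simulate data from that structure in base R (the kernel, parameter values, and grid sizes are all illustrative assumptions, not part of any existing fitting code):

```r
# Simulated example: 10 subjects in 2 groups, each with an irregular,
# unaligned time grid; GP hyperparameters shared within group only,
# plus one measurement-error sd common to all subjects.
set.seed(1)
n_subj <- 10
group  <- rep(c("A", "B"), each = 5)
alpha  <- c(A = 1.0, B = 2.0)  # amplitude (marginal sd) per group
rho    <- c(A = 0.5, B = 2.0)  # length-scale per group
sigma  <- 0.3                  # shared measurement-error sd

# Squared-exponential covariance matrix for one subject's time points
sq_exp <- function(t, alpha, rho) {
  d <- outer(t, t, "-")
  alpha^2 * exp(-d^2 / (2 * rho^2))
}

dat <- do.call(rbind, lapply(seq_len(n_subj), function(i) {
  t <- sort(runif(sample(8:15, 1), 0, 5))          # uneven, unaligned spacing
  g <- group[i]
  K <- sq_exp(t, alpha[g], rho[g]) + diag(1e-9, length(t))  # jitter
  f <- drop(crossprod(chol(K), rnorm(length(t))))  # one latent GP draw
  data.frame(id = i, group = g, t = t,
             y = f + rnorm(length(t), 0, sigma))
}))
```

The point of the sketch is the parameter layout: ten independent latent draws, but only `alpha["A"], alpha["B"], rho["A"], rho["B"]` and a single `sigma` to estimate.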

Does this make sense? Has it been done before? What would be a good way to go about doing it?