It’s difficult to know exactly how the Gaussian process connects to the meta-analysis you mention, but in general terms here are a few comments:

If you don’t have uncertainty estimates for the parameters but do have sample sizes for each study, the simplest way to factor uncertainty into the prior may be to approximate each parameter by a normal whose variance shrinks with the study’s sample size (presumably the parameter is not measured directly, so its variance cannot be calculated directly, but it should be, to some extent, inversely proportional to the sample size). Several assumptions go into that, of course, but your prior is just your starting point, so if you don’t know much it’s still better than nothing.
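As a minimal sketch of that idea (with made-up estimates and sample sizes, and an assumed proportionality constant `c` for the per-study variances), you can pool the studies with inverse-variance weighting to get a normal prior:

```python
import numpy as np

# Hypothetical per-study parameter estimates and sample sizes.
estimates = np.array([0.8, 1.1, 0.95])
n = np.array([30, 120, 60])

# Assume each study's variance is inversely proportional to its
# sample size, with an assumed proportionality constant c.
c = 1.0
variances = c / n

# Inverse-variance (precision) weighted pooling gives a normal
# prior N(mu, sigma2) for the parameter.
weights = 1.0 / variances
mu = np.sum(weights * estimates) / np.sum(weights)
sigma2 = 1.0 / np.sum(weights)
print(mu, sigma2)
```

Larger studies pull the prior mean toward their estimates and tighten the prior variance, which is exactly the behavior you want from "uncertainty via sample size".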

Not that I know of, at least not as a general procedure (though I may just not know about it). If you have parameters for one kernel (1) but want to use a different kernel (2), perhaps what you want to preserve is the general shape of 1 (although in that case you may just be mimicking the original kernel rather than really using 2). It could make sense to use the general shape of one kernel as a prior for another (if possible); how to do it may be a matter of trial and error, of inferring the parameters of 2 that best reproduce 1, or there could be a clever way of finding them without that.
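The "infer the parameters of 2 from 1" route can be sketched as a least-squares fit of kernel 2's shape to kernel 1 over a range of distances. Here kernel 1 is a squared-exponential with a hypothetical known lengthscale, and kernel 2 is an exponential (Matérn 1/2) whose lengthscale we fit:

```python
import numpy as np
from scipy.optimize import minimize

# Kernel 1: squared-exponential with a known (hypothetical) lengthscale.
ell1 = 2.0
def k1(r):
    return np.exp(-0.5 * (r / ell1) ** 2)

# Kernel 2: exponential (Matern 1/2); we infer its lengthscale so that
# its shape matches kernel 1 over a range of distances.
def k2(r, ell2):
    return np.exp(-r / ell2)

r = np.linspace(0.0, 10.0, 200)
loss = lambda p: np.mean((k2(r, p[0]) - k1(r)) ** 2)
res = minimize(loss, x0=[1.0], bounds=[(1e-3, None)])
print(res.x[0])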
All in all, if you have a model but you don’t have good prior information, just go ahead and try it with “bad” (weakly informative) priors and see how it does. That’s kind of the point. If you still can’t get reasonable results, then maybe it’s time to invest in finding better information to set up better priors.
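One cheap way to "see how it does" before fitting anything is a prior predictive check: draw lengthscales from your vague prior, sample GP paths under each, and eyeball whether they are even plausible for your problem (all values here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)

def rbf(x, ell):
    # Squared-exponential covariance matrix over the inputs x.
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

for _ in range(3):
    ell = rng.uniform(0.1, 10.0)  # a deliberately vague ("bad") prior
    K = rbf(x, ell) + 1e-8 * np.eye(len(x))  # jitter for stability
    f = rng.multivariate_normal(np.zeros(len(x)), K)
    print(f"lengthscale={ell:.2f}, range of f: [{f.min():.2f}, {f.max():.2f}]")
```

If the sampled functions are wildly unlike anything your data could produce, that itself is the signal that it's worth investing in better prior information.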