Hm, I’m not expert enough to know what this is, but I will say that every time I’ve seen folks try to achieve inference iteratively (where samples from one posterior are supplied as data to a subsequent model), it has turned out that there’s a more appropriate/sensible (in the sense of probability theory) single model that could be run with all the data in one pass (see for example @betanalpha’s comment here). (An exception might be the PBPK models, where something akin to this is called “model cutting”, but I’m [possibly naively] still leery of it making sense even there.)
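
To illustrate what I mean by a single joint model, here’s a deliberately trivial sketch (my own toy example with made-up variable names, not anyone’s actual model). Instead of fitting `theta` to one dataset and then feeding those posterior draws in as “data” for a second fit, you express both likelihoods in one model so all the data inform `theta` simultaneously:

```stan
// Toy joint model: both "stages" inform theta in a single fit.
data {
  int<lower=0> N1;
  vector[N1] y1;  // data that would have been "stage 1"
  int<lower=0> N2;
  vector[N2] y2;  // data that would have been "stage 2"
}
parameters {
  real theta;             // shared parameter of interest
  real<lower=0> sigma1;
  real<lower=0> sigma2;
}
model {
  theta ~ normal(0, 5);
  sigma1 ~ normal(0, 1);
  sigma2 ~ normal(0, 1);
  y1 ~ normal(theta, sigma1);  // would have been the first fit
  y2 ~ normal(theta, sigma2);  // would have consumed the first posterior
}
```

The point is just that `theta` gets updated by `y1` and `y2` jointly, rather than `y2` being conditioned on draws of `theta` carried over from a separate first fit.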