Well, I think there are two issues. One is identifiability: what's the difference between the "global trend" and any one of the individual trends? If there's no real difference in specification, then they're potentially interchangeable, and you can't identify the global trend. Things will tend to wander off in opposite directions. Think of y = a + b: if a goes to infinity and b goes to negative infinity, y stays constant…
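A tiny numerical sketch of that trade-off (the data and the split values here are made up for illustration): any way of dividing a constant between a "global" piece a and a "local" piece b gives exactly the same fit, so the data cannot tell the pieces apart.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=3.0, scale=0.1, size=100)  # data scattered around the constant 3

# Any split of the constant 3 between "global" a and "local" b fits equally well:
for a in [0.0, 3.0, 1e6]:
    b = 3.0 - a              # b compensates exactly for whatever a does
    resid = y - (a + b)      # the prediction a + b is identical in every case
    print(a, b, np.mean(resid**2))  # same mean squared error for every split
```

The likelihood surface is flat along the line a + b = 3, which is exactly what "can't identify" means here.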
To solve this, you might, for example, impose greater smoothness on the global trend; then, if the individual trends are necessarily wigglier, this gives identifiability, because the two can't exactly interchange their roles in the calculation. It helps identifiability to make the global trend much smoother, not just a little smoother (for instance, use a Fourier series, a polynomial, or radial basis functions, and keep the number of coefficients small relative to the number of observations in the series).
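One way to express "much smoother" concretely, sketched here with a low-order Fourier basis on made-up data (the series, the number of harmonics K, and the noise level are all illustrative assumptions): the global trend gets only 2K + 1 coefficients, far fewer than the number of observations, so it physically cannot mimic a wiggly local trend.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=t.size)  # one illustrative series

# Smooth global trend: a low-order Fourier basis, few coefficients vs. 200 points.
K = 3  # number of harmonics, kept small -- this is the smoothness constraint
X = np.column_stack(
    [np.ones_like(t)]
    + [f(2 * np.pi * k * t) for k in range(1, K + 1) for f in (np.sin, np.cos)]
)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
global_trend = X @ coef  # smooth by construction: only 2K + 1 = 7 degrees of freedom
print(X.shape)  # (200, 7)
```

The same idea works with a low-degree polynomial or a handful of radial basis functions; what matters is that the global basis is too stiff to absorb the local wiggles.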
The second issue, though, is the dependency structure. The fit at every individual time point changes with an epsilon change in the global trend. Think of the global trend as a mass with a LOT of small springs holding it in place, one for each anchored data point. It won't want to move very far from its equilibrium position with all those springs pulling it back, so if it's identifiable, it will ultimately be tightly identified around some particular value.
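The springs analogy has a standard quantitative counterpart, sketched here for the simplest case (a single global level mu with known noise sigma and a flat prior, all assumptions of this sketch): each data point is one spring, and the posterior standard deviation of mu shrinks like sigma / sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0  # known observation noise, assumed for this sketch

# Each data point is one "spring": with a flat prior, the posterior sd of a
# global level mu given n observations is sigma / sqrt(n) -- it tightens fast.
for n in [10, 1_000, 100_000]:
    y = rng.normal(loc=5.0, scale=sigma, size=n)
    post_mean = y.mean()          # equilibrium position of the mass
    post_sd = sigma / np.sqrt(n)  # how far the springs let it wander
    print(n, post_mean, post_sd)
```

With a hundred thousand springs the global level barely moves at all, which is what makes the simplification in the next paragraph reasonable.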
Replacing a tightly identified set of parameters with their modal value is a principled simplification: y ~ normal(1, 0.000002) could be replaced with y = 1 with only an error on the order of 0.000002.
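A quick check of that error claim, using made-up draws at exactly those numbers: plugging in the mode instead of sampling introduces deviations on the order of the posterior standard deviation, here 2e-6.

```python
import numpy as np

rng = np.random.default_rng(3)
# If y ~ normal(1, 2e-6), replacing y by its modal value 1.0 downstream
# introduces errors on the order of a few times 2e-6, and no larger:
y_draws = rng.normal(loc=1.0, scale=2e-6, size=10_000)
plug_in_error = np.abs(y_draws - 1.0)
print(plug_in_error.max())  # roughly a small multiple of 2e-6
```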
Look at your model to see whether anything about the way the global trend is specified distinguishes it from the local trends (smoothness, degrees of freedom, etc.). If not, perhaps impose a simpler structure on the global trend so it becomes identified. Then do the optimization procedure I mentioned and replace the global trend with a fixed value, to get started at least, and see if you can get the local trends to fit.
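The two-stage recipe above might look something like this sketch; everything here (the simulated series, the linear global trend, the moving-average stand-in for "fit the local trends") is an illustrative assumption, not your actual model:

```python
import numpy as np

rng = np.random.default_rng(4)
n_series, n_time = 5, 100
t = np.linspace(0.0, 1.0, n_time)
true_global = 2.0 * t  # a shared drift, assumed for the sketch
Y = true_global + rng.normal(scale=0.5, size=(n_series, n_time))

# Stage 1: optimize a deliberately simple global trend -- here a straight
# line (2 degrees of freedom), fit to the cross-series average.
X = np.column_stack([np.ones_like(t), t])
coef, *_ = np.linalg.lstsq(X, Y.mean(axis=0), rcond=None)
global_hat = X @ coef

# Stage 2: freeze the global trend at its fitted value and fit each
# series' local trend to the residuals (a moving average stands in here
# for whatever local-trend model you actually use).
residuals = Y - global_hat
kernel = np.ones(9) / 9.0
local_trends = np.array([np.convolve(r, kernel, mode="same") for r in residuals])
print(local_trends.shape)  # (5, 100)
```

If the local trends fit well against the frozen global trend, you can then decide whether it's worth putting the global trend back into the joint model as a free (but now identified) quantity.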