Many thanks, @robertgrant! You’ve given me quite a bit to think about; very much appreciated! A few initial thoughts…
First, I’ve not worked with this beta distribution parameterisation. Are phi and lambda posteriors correlated?
In theory, I believe, the phi and lambda posteriors should be (nearly) uncorrelated, whereas the alpha, beta parameterization is strongly correlated. That's actually where I got started on all this: writing an alpha, beta model that captured the alpha, beta correlation. (A good learning experience, as I'm new to Stan!) But then I couldn't figure out how to pass that posterior correlation along to the new prior. My post about all this is here.
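For anyone following along, this is the mapping I mean between the two parameterisations (a minimal Python/SciPy sketch, not my actual Stan code):

```python
from scipy import stats

# Mean-concentration parameterisation of the beta distribution:
#   phi    = alpha / (alpha + beta)   -> mean, in (0, 1)
#   lambda = alpha + beta             -> concentration ("prior sample size")
def to_alpha_beta(phi, lam):
    return phi * lam, (1.0 - phi) * lam

phi, lam = 0.3, 20.0
alpha, beta = to_alpha_beta(phi, lam)   # alpha = 6, beta = 14

# Same distribution either way: the mean comes back as phi.
print(stats.beta(alpha, beta).mean())   # 0.3
```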
From a visual inspection, it looks like my phi and lambda posteriors are only slightly correlated, so I'm losing a little information by carrying forward the marginal distributions alone. This slight correlation may be due to the lingering effects of my initial prior (which I happen to specify, marginally, in alpha, beta space). I'll do some testing to see if I can sort this out.
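The kind of check I have in mind is just something like this (a sketch; `draws_phi` and `draws_lam` stand in for posterior draws extracted from the fit, faked here with placeholder random numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
# Placeholders standing in for posterior draws of phi and lambda from the fit.
draws_phi = rng.beta(6, 14, size=4000)
draws_lam = rng.gamma(20.0, 1.0, size=4000)

# How much dependence do I throw away by keeping only the marginals?
corr = np.corrcoef(draws_phi, draws_lam)[0, 1]
print(f"posterior correlation(phi, lambda) = {corr:.3f}")
```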
Second, are they really normal? How quickly do they go to being passably normal?
In general, probably not. And definitely not when the mean is close to either 0 or 1. This is something I certainly need to explore. Thanks! (Also, if both alpha and beta are < 1, the beta distribution is U-shaped, i.e. bimodal with modes at 0 and 1. But my “prior” for the data I’m considering is that this won’t happen.)
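To make that exploration concrete, the check I'm planning is roughly this (a sketch with placeholder draws; the second case shows how badly a matched normal does when the mean is near 0):

```python
import numpy as np
from scipy import stats

def normality_report(draws, name):
    # Compare a few quantiles of the draws with a normal matched on mean and SD.
    m, s = draws.mean(), draws.std(ddof=1)
    qs = np.array([0.05, 0.25, 0.5, 0.75, 0.95])
    emp = np.quantile(draws, qs)
    ref = stats.norm(m, s).ppf(qs)
    print(f"{name}: skew = {stats.skew(draws):.3f}")
    for q, e, r in zip(qs, emp, ref):
        print(f"  q{q:.2f}: draws {e:.4f}  vs matched normal {r:.4f}")

rng = np.random.default_rng(0)
normality_report(rng.beta(50, 50, size=4000), "phi near 0.5")   # close to normal
normality_report(rng.beta(2, 98, size=4000), "phi near 0.02")   # visibly skewed
```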
What happens if variance inflation pushes them into impossible values?
For now I just truncate. I need to figure out how and when this might cause my approach to misbehave.
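Concretely, the truncation step looks something like this (a sketch using SciPy's truncnorm; the names and the 2x inflation factor are just illustrative):

```python
from scipy import stats

def inflated_truncated_prior(mean, sd, inflate, lower, upper):
    # Inflate the posterior SD, then truncate the resulting normal to the valid range.
    sd_new = sd * inflate
    a, b = (lower - mean) / sd_new, (upper - mean) / sd_new   # standardised bounds
    return stats.truncnorm(a, b, loc=mean, scale=sd_new)

# e.g. a posterior summary for phi (mean 0.97, SD 0.02), inflated 2x, kept inside (0, 1):
phi_prior = inflated_truncated_prior(0.97, 0.02, 2.0, 0.0, 1.0)
print(phi_prior.mean(), phi_prior.std())   # truncation itself shifts the mean and SD a bit
```

One thing this makes obvious is that the truncation itself pulls the mean and SD around, which is probably one of the places misbehaviour could creep in.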
Third, my experience was that scaling up the n of observations can cause trouble, as the likelihood gets very narrow but can be a bad fit to the (already high-n) prior. Worth checking out with big n.
I’ve done some very initial testing with large(r) n, but not much, and not with this in mind. I will!
Fourth, what happens when someone tries to do this with many correlated time series or lots of covariates, so there is a higher p number of parameters?
I haven’t thought about that at all! I will.
The idea of manually inflating a posterior SD to represent “forgetting” rings a bell, but I’m not sure where I’ve seen it. I’ll sleep on it and let you know if I remember. Maybe it’s from AI.
Thanks; appreciated!