I have output from a capture-recapture model: specifically, estimates of survival over 30 time intervals (for 2 populations) with an associated variance-covariance matrix. I'd like to do some post-hoc analysis of these estimates to assess change over time while doing some partial pooling and accounting for the known variance-covariance among the estimates. What's the best way to do this?

I can see a couple of paths to achieving this result. One might be to include ... + fcor(survival_vcv) in the model formula specification. Another might be to include the variance-covariance matrix in the grouping variable, e.g. ... + (1|gr(time, cov = survival_vcv)). Both of these approaches are borrowed from different phylogenetic multilevel models, which seem similar to what I’m looking to achieve (although are perhaps different in important ways – I haven’t used phylogenetic modeling previously so am open to being enlightened!). The first approach is demonstrated here, while the second comes from the phylogenetic model vignette.
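For concreteness, here is a rough sketch of what I imagine the two specifications would look like in brms. The data frame `d`, its column names, and the dimnames handling are assumptions on my part, and I haven't verified that either form matches the measurement-error structure of the estimates exactly:

```r
library(brms)

# d: one row per survival estimate, with columns estimate, time, population
# survival_vcv: 60 x 60 known variance-covariance matrix of the estimates,
# with dimnames matching the rows of d

# Approach 1: supply the known covariance as a fixed residual
# covariance structure via fcor(), passed in through data2
fit_fcor <- brm(
  estimate ~ time * population + fcor(survival_vcv),
  data   = d,
  data2  = list(survival_vcv = survival_vcv),
  family = gaussian()
)

# Approach 2: known covariance among levels of a grouping factor, as in
# the phylogenetic vignette. Each estimate gets its own level (obs), and
# the group-level effects are correlated according to survival_vcv
d$obs <- seq_len(nrow(d))
fit_gr <- brm(
  estimate ~ time * population + (1 | gr(obs, cov = survival_vcv)),
  data   = d,
  data2  = list(survival_vcv = survival_vcv),
  family = gaussian()
)
```

My understanding is that fcor() fixes the residual correlation structure while gr(cov = ...) puts the structure on the group-level effects, but I'm not sure which is the right analogue for known sampling covariance here.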

I guess I would also appreciate some discussion of how the two approaches differ, since the documentation, especially for fcor(), is sparse.

I have output from a capture-recapture model, to be more specific, estimates of survival over 30 time intervals (for 2 populations), that have an associated variance-covariance matrix. I’d like to be able to do some post-hoc analysis of these estimates to assess change over time while doing some partial pooling and accounting for the known variance-covariance among the estimates. What’s the best way to do this?

This is more in response to your asking for discussion than directly answering your question. My opinion here is that the best way to answer the questions you want to explore (changes over time and correlation among populations) is not to do what I think you're proposing: take the output from one model and put it in another model.

Is there a reason why you can’t model the effects you’re hypothesizing within the capture-recapture model? Why not parameterize survival with correlation between the 2 populations and whatever time-trend related covariates you think might be affecting survival?

If you do go that route, then it's a matter of how you structure serial and population correlation among the survival parameters.
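As an illustration only (not the capture-recapture likelihood itself), the kind of structure I mean could be sketched in a regression layer with an AR(1) term for serial correlation within each population; the data frame and column names are placeholders:

```r
library(brms)

# d: one row per interval, with columns estimate, time (integer interval
# index), and population (factor with 2 levels)
fit <- brm(
  estimate ~ time * population +       # population-specific time trends
    ar(time = time, gr = population),  # AR(1) serial correlation within population
  data   = d,
  family = gaussian()
)
```

In a full Stan implementation you would build the analogous structure directly into the survival parameters rather than into a second-stage regression.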

This is a fair criticism of the two-step approach I proposed initially and certainly worth exploring in more detail, but I have my reasons for pursuing it. The short explanation is that the data I have are idiosyncratic and most model-fitting routines seem to struggle to converge – survival is generally high but variable, and there are boundary-condition issues coupled with irregular sampling, etc. etc. My lazy-man's solution was to take a fully time-dependent model, get it to converge (which can take many attempts, tens to hundreds) and then explore patterns in the parameter estimates. I can then iterate through potential/hypothesized changes over time relatively quickly. It's not elegant, but maybe it's defensible?

I understand the problems of getting these types of models to fit, and I'd rather be helpful than just critical. What you propose is defensible in the sense that it's done in ecology quite often. That doesn't mean it's ideal. I'd encourage you, if possible, to try to attack this in one model.

But that said, the approach you're talking about could be helpful as part of the iterative process of model exploration and building in complexity over time. Doing some post-hoc analysis to figure out what kind of dependence structure makes sense can help you with that.

I tend to work much more in raw Stan than brms, so I can't comment on that code. I do suggest, though, that if you're having trouble getting a capture-recapture model to fit in Stan, you should post that. In theory, Stan should be able to deal with estimates near parameter boundaries and irregular sampling. I've used it in those circumstances.

At the moment, a lot of my workflow is set-up in RMark, which is to some extent a relic of the demands of my PhD, where doing the intellectually satisfying and elegant thing was not always a top priority (and Stan resources were more limited). I’ve been slowly moving towards coding things in base Stan, but it’s not the thing that’s paying the bills at the moment, so I don’t always have the time to commit to the learning curve and iterative model development that Stan requires.
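If it helps, the handoff from RMark to the second-stage model might look roughly like this. I believe get.real() with vcv = TRUE returns both the real estimates and their variance-covariance matrix, though the exact element names below are from memory and worth checking against the RMark documentation:

```r
library(RMark)

# fit: a fully time-dependent model already fitted with mark()
phi <- get.real(fit, "Phi", se = TRUE, vcv = TRUE)

# Real survival estimates and their variance-covariance matrix,
# ready to pass to the post-hoc model (e.g. via data2 in brms)
survival_est <- phi$estimates
survival_vcv <- phi$vcv.real
```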

That being said, I may move CJS models in Stan up my priority list and see if I can get something built for the current project sooner rather than later. I know it's probably the right way to go for the questions and data I have, but it might take a bit more time to get running.