I’m preparing for a meta-analytic project that I’d like to use brms for, but I haven’t found much documentation on the kind of context I’m looking at. I’m always surprised by how much brms can handle, and I want to make sure that I don’t miss an opportunity to incorporate all the available information in the analysis.
Context of Problem
In brief, neuropsychology relies on normative samples (sometimes these are lookup tables, other times regression-based predictions) to convert a sum of correct answers on a test into a demographically adjusted (e.g., age, educational attainment, sex) standardized score. Since many published studies include summary data for these tests in various populations (usually in Table 1 somewhere), the potential for pooling these publications into a meta-analytic normative regression has been discussed, but attempts at it have been variable in their quality, sophistication, and utility.
Expected Knowns
I’m looking at pooling means for a brief cognitive screening battery that includes 10 standardized tests with a variety of known properties that I’d like to account for:

There are several factor analyses and published data sources that describe the correlation matrices of these 10 tests with one another.

The scales of these variables are fairly disparate: the shortest ranges from 0–10 while the longest goes from 0–90.

The test scores are also not normally distributed in the population (most have a prominent negative skew), with a (beta-)binomial distribution usually approximating them better than a traditional Gaussian likelihood.

I expect most studies to report just the means and standard deviations of the test scores (some may include or report instead the median and IQR, but that’s not the norm).

I also expect that covariates of interest will be reported as means and standard deviations (e.g., mean age + SD).

The tests also have some measurement error, with published reliability coefficients available (mostly just Cronbach’s alpha or split-half reliability, and only from a couple of sources).
Preliminary Plan
Looking through the materials I have found on using brms for meta-analysis, it seems like the simplest approach here is to do the following:
```r
bf(mean | se(se_mean) ~ 0 + Intercept + mean_age + mean_edu + test_id +
     (0 + test_id + mean_age + mean_edu | Study/Sample))
```
As I understand this model, it is predicting a mean value that is weighted by the standard error of the mean for each sample (allowing larger samples to carry more weight). To make that prediction, the regression is using the sample’s average age and education (both fixed and random effects for each study and each sample within each published study). Then, test_id is just a factor variable indicating which subtest the mean is for.
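In full, the call I have in mind would look something like this (the data frame and column names are placeholders for my eventual dataset, with one row per subtest mean per sample):

```r
library(brms)

# One row per subtest mean per sample; dat, mean, se_mean, mean_age,
# mean_edu, test_id, Study, and Sample are all placeholder names.
fit <- brm(
  bf(mean | se(se_mean, sigma = TRUE) ~ 0 + Intercept + mean_age + mean_edu + test_id +
       (0 + test_id + mean_age + mean_edu | Study/Sample)),
  data = dat,
  chains = 4, cores = 4
)
```

I gather that adding `sigma = TRUE` inside `se()` estimates residual heterogeneity on top of the known sampling error, which seems appropriate for a meta-regression, though I may be wrong about that.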
Causes for Pause
In thinking about the goals and generalization of the intended metaregression, I’m not sure that this general method captures all the information that is known about the test. Specifically, I’m wondering about how to include or whether to incorporate some of the following points:

I believe that the covariance matrix of the subtests can be passed to brms with the `fcor()` function, but I don’t entirely follow how that works since I’ve never used it before. I’d like to treat the covariance matrix as an informative prior rather than a constrained known. Is that what the `fcor()` term would permit?
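From the docs, my best guess at the syntax is something like the following, with `V` being the known covariance matrix passed through `data2` (I may well be misreading how this works):

```r
library(brms)

# My (possibly wrong) understanding: fcor() fixes the residual
# covariance to the supplied matrix rather than placing a prior on it.
# V would need to be an N x N matrix aligned with the rows of dat
# (both are placeholder objects here).
fit <- brm(
  mean | se(se_mean) ~ 0 + Intercept + mean_age + mean_edu + test_id + fcor(V),
  data = dat,
  data2 = list(V = V),
  family = gaussian()
)
```

If that reading is right, then `fcor()` makes the matrix a constrained known, which is not quite what I want, and the informative-prior route would instead be something like a multivariate model with an LKJ prior on the residual correlations. Is that correct?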
The skew of the tests means that I expect some unreasonable predictions when the resulting model is used. For example, one of the shortest tests has a mean of around 9 (out of a possible 10) with standard deviations of around 1.5–3, because some people do poorly and cause the variability measure to spike. I know that brms supports truncated priors/likelihoods, but I’m not sure how that would work in this case since all the tests share the same lower bound but have varying upper bounds.
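If truncation is the way to go, my impression is that `trunc()` accepts variables in the data as bounds, so each test could carry its own upper limit (here `max_score` would be a new column I’d add, with values like 10 or 90 per test):

```r
# trunc() bounds can reportedly be data columns, so a shared lower
# bound of 0 can pair with a per-test upper bound (max_score is an
# assumed column in dat).
bf(mean | se(se_mean) + trunc(lb = 0, ub = max_score) ~
     0 + Intercept + mean_age + mean_edu + test_id +
     (0 + test_id + mean_age + mean_edu | Study/Sample))
```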

Similarly, I don’t really care to predict an impossible value like 8.342, since the raw score will always be an integer. I’m wondering whether it would be possible to specify a beta-binomial likelihood somehow, using the means and standard deviations together with the score range to derive a binomial distribution that approximates the underlying skew and then allows the binomial probabilities to vary by study. I’m not sure how that would work exactly, though.
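As a starting point, I could at least convert a reported mean and SD into the `mu`/`phi` parameterization that I believe brms uses for its `beta_binomial()` family via the method of moments (a sketch of my thinking, not a vetted approach):

```r
# Method-of-moments sketch: map a reported mean and SD (on a test with
# n_items items) to beta-binomial parameters. rho is the implied
# overdispersion; brms's beta_binomial() uses mu and phi, where
# alpha = mu * phi and beta = (1 - mu) * phi.
mean_sd_to_betabinom <- function(m, s, n_items) {
  mu  <- m / n_items
  rho <- (s^2 / (n_items * mu * (1 - mu)) - 1) / (n_items - 1)
  stopifnot(rho > 0, rho < 1)  # SD must exceed the plain binomial SD
  list(mu = mu, phi = (1 - rho) / rho)
}

# The short test mentioned above: mean 9, SD 1.5, out of 10 items.
mean_sd_to_betabinom(9, 1.5, 10)  # mu = 0.9, phi = 5
```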

I’m wondering whether variability in the sample statistics used as covariates can also be accounted for. I don’t think I’ve seen this done in my field, but for the intended purpose of this meta-analysis, it makes sense to account for the fact that there is a range of values represented in the covariates for each study. My first thought was that brms might handle this with the `me()`/`mi()` syntax, but I’m not sure whether the extra effort of including that variability in the model is worth it. Additionally, I’m not sure whether those terms should be inverse-variance weighted in the context of a meta-analysis.
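If I went that route, my guess is that it would look something like this, with the covariate SEs computed from the reported SDs and sample sizes (column names again placeholders for my dataset):

```r
# Each reported covariate mean gets a standard error of sd / sqrt(n),
# and me() then treats it as a noisy measurement of the true
# sample-level mean (sd_age, sd_edu, and n are assumed columns).
dat$se_age <- dat$sd_age / sqrt(dat$n)
dat$se_edu <- dat$sd_edu / sqrt(dat$n)

bf(mean | se(se_mean) ~ 0 + Intercept + me(mean_age, se_age) +
     me(mean_edu, se_edu) + test_id + (0 + test_id | Study/Sample))
```

(I’ve simplified the random-effects part here since I don’t know whether random slopes on `me()` terms are straightforward.)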
I’d love to include some information about measurement unreliability for some tests, but I don’t know how reasonable that will be, as there are varying measures of test reliability and very few studies (maybe 2 or 3 at most) report them with their data. The test publisher does have reliability data from their standardization sample, so there’s at least some information. This is more a wish-list item than a need: I think it would yield better credible intervals by accounting for this extra error, but I don’t know of any normative method in the field that accounts for measurement error beyond the standard error of prediction from the regression equation or the standard deviation of scores within a cell of demographic characteristics.
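The only crude idea I have so far comes from classical test theory: the error variance of an individual score is roughly sd² × (1 − reliability), so normative predictive intervals could perhaps be widened after the fact. A back-of-the-envelope sketch, not an established method:

```r
# Sketch under classical test theory (not a vetted normative method):
# widen a model-based predictive SD by the score-level error variance,
# sd_score^2 * (1 - reliability).
widen_pred_sd <- function(pred_sd, sd_score, reliability) {
  sqrt(pred_sd^2 + sd_score^2 * (1 - reliability))
}

widen_pred_sd(pred_sd = 2, sd_score = 3, reliability = 0.8)  # ~2.41
```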

I’m also anticipating some potential estimation issues, since there’s a lot of scale variability between the outcomes and covariates. I’m sure these variables could all be scaled to a z-metric or at least mean-centered in standard ways, but I don’t know whether the meta-analysis introduces additional caveats (e.g., should they be centered on a weighted mean rather than the simple grand mean?). I just want to make sure that the results are coherent and that estimation is relatively efficient. I suppose the worst-case scenario is to pass custom prior distributions that reflect the appropriate scales for each effect, though this gets complicated since there are 10 outcomes on differing scales being predicted by the same predictors, each with its own unique scale.
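For the centering question specifically, the sample-size-weighted grand mean is at least cheap to compute and try (toy data for illustration):

```r
# Toy example: center a covariate on the n-weighted grand mean so that
# large samples anchor the intercept (made-up numbers).
dat <- data.frame(mean_age = c(70, 75, 80), n = c(50, 200, 30))
w <- dat$n / sum(dat$n)
dat$mean_age_c <- dat$mean_age - sum(w * dat$mean_age)
```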
Summary
Hopefully the above is sufficient for contextualizing the problem and articulating my initial thoughts and questions. I’m happy to hear any recommendations or thoughts about how best to approach these issues and accommodate some (or all) of the contextual information that I’m expecting to have. I’d be especially grateful for references or resources I could turn to in order to learn more about this kind of meta-analytic question, since I’ve struggled to find anything on it. My inkling is that there is an obvious connection between generalized multilevel regression and generalized multilevel meta-regression that makes what I’m looking at trivial to those who do that kind of analysis, but it is probably irrelevant for most meta-analytic needs and thus not covered in typical meta-analysis resources.