Savage-Dickey method to test null (or other point hypotheses)

I just read

Heck, D. W. (2019). A caveat on the Savage–Dickey density ratio: The case of computing Bayes factors for regression parameters. British Journal of Mathematical and Statistical Psychology, 72(2), 316–333.

which points out problems with a common use of the "naive" Savage–Dickey method in tests of point hypotheses for regression models. I'm wondering whether these problems apply to the way that brms::hypothesis approximates the Savage–Dickey density ratio (I couldn't find details about the specific method used in the help files or on Discourse).
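For reference, the Savage–Dickey identity as I understand it from the paper: for a point hypothesis H0: theta = theta0 nested in H1, the Bayes factor equals the ratio of the posterior to the prior density at the test value, *provided* the conditional prior of the remaining (nuisance) parameters given theta = theta0 under H1 matches their prior under H0 (the assumption in Heck's equation 3):

```latex
\mathrm{BF}_{01}
  = \frac{p(\theta = \theta_0 \mid y, \mathcal{M}_1)}
         {p(\theta = \theta_0 \mid \mathcal{M}_1)}
```

The "naive" method is applying this ratio without checking that condition.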

On p. 5, Heck writes:

To test the effect of a predictor in a regression model that also includes other covariates, it might be tempting to use the naive Savage–Dickey density ratio for computing the corresponding Bayes factor. However, when choosing default (JZS) priors for the regression parameters (Jeffreys, 1961; Rouder & Morey, 2012; Zellner & Siow, 1980), the necessary assumption in equation (3) does not hold and thus the naive Savage–Dickey method will result in an incorrect approximation of the Bayes factor.

On p. 13, he elaborates:

For instance, Boehm et al. (2018) proposed the naive Savage–Dickey ratio as a general method for testing the effect of one or more predictors in a multiple regression. In explaining the method, they correctly remarked that ‘the exact expression for the alternative hypothesis depends on the marginal prior for [the] standardized effect size under consideration, which in our case is a univariate Cauchy distribution’ (p. 9, emphasis added). However, Boehm et al. (2018) did not check whether the conditional prior distribution of the nuisance parameters under the full model (i.e., those parameters that are shared by the nested model) is again a multivariate Cauchy of lower dimensionality (i.e., whether the right-hand side of equation 3 is the JZS prior). As shown in Section 4.2, this is not the case. Instead, the remaining, non-constrained regression parameters follow a multivariate t-distribution with degrees of freedom depending on the number of equality constraints that are tested. Hence, in this common scenario, the naive Savage–Dickey density ratio is not equal to the Bayes factor.

My question is whether brms::hypothesis uses the “naive” approximation discussed by Heck.

When specifying independent priors, which is the usual case in brms, the Savage–Dickey ratio is fine. The problem arises with certain kinds of dependent priors (such as the JZS priors Heck discusses), which require quite a lot of hacking to get into brms, so we should rarely encounter a problematic case (though I am not saying there are none).
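To make the independent-prior case concrete, here is a minimal numerical sketch (plain Python with a made-up conjugate normal example rather than brms, so everything is analytic): with an independent normal prior on the tested parameter and no dependence on other parameters, the Savage–Dickey density ratio reproduces the exact Bayes factor.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mean, var):
    """Density of a Normal(mean, var) distribution at x."""
    return exp(-(x - mean) ** 2 / (2 * var)) / sqrt(2 * pi * var)

# Toy data: sample mean ybar of n observations with known variance sigma2.
n, ybar, sigma2 = 20, 0.35, 1.0
tau2 = 1.0  # prior variance of the effect delta under H1: delta ~ N(0, tau2)

# Analytic posterior of delta under H1 (standard conjugate normal update).
post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
post_mean = post_var * n * ybar / sigma2

# Savage-Dickey density ratio: posterior over prior density at delta = 0.
bf01_sd = normal_pdf(0.0, post_mean, post_var) / normal_pdf(0.0, 0.0, tau2)

# Exact Bayes factor from the analytic marginal likelihoods:
# ybar ~ N(0, sigma2/n) under H0, ybar ~ N(0, tau2 + sigma2/n) under H1.
bf01_exact = (normal_pdf(ybar, 0.0, sigma2 / n)
              / normal_pdf(ybar, 0.0, tau2 + sigma2 / n))

print(bf01_sd, bf01_exact)  # both values agree (≈ 1.43 for these numbers)
```

With a dependent (e.g. JZS) prior, the second computation would need the true conditional prior of the nuisance parameters at delta = 0, which is exactly where the naive ratio goes wrong.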