Jacobian adjustment needed?

Hi all,

I am trying to run a very complicated Beta Fay-Herriot style model and would love some assistance with the necessary Jacobian adjustment. Below I present a highly simplified version of my model.

The general setup is as follows: y_i \in (0,1), with \text{Var}(y_i) = \psi_i and E(y_i) = \mu_i. Using the mean-precision parameterization of the Beta distribution, the model is:

y_i \sim \text{Beta}(a_i = \mu_i \phi_i,\; b_i = \phi_i - a_i) \\
\eta_i = X_i \beta + v_i \\
\mu_i = \text{logit}^{-1}(\eta_i) \\
v_i \sim N(0, \sigma_v) \\
\phi_i = \frac{\mu_i (1 - \mu_i)}{\psi_i} - 1 \text{, where } \psi_i \in (0, 0.25)
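
The \phi_i line comes from the variance of the mean-precision Beta, \text{Var}(y_i) = \frac{\mu_i (1 - \mu_i)}{1 + \phi_i}, set equal to the known \psi_i:

\frac{\mu_i (1 - \mu_i)}{1 + \phi_i} = \psi_i \implies \phi_i = \frac{\mu_i (1 - \mu_i)}{\psi_i} - 1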

To ensure a_i > 0 and b_i > 0, the formula for \phi_i implies the following bounds on \mu_i:

\mu_i \in \left( \frac{1-\sqrt{1-4 \psi_i}}{2}, \frac{1+\sqrt{1-4 \psi_i}}{2} \right)
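
To see this, note that a_i = \mu_i \phi_i and b_i = (1 - \mu_i) \phi_i are both positive exactly when \phi_i > 0, i.e.

\phi_i > 0 \iff \mu_i (1 - \mu_i) > \psi_i \iff \mu_i^2 - \mu_i + \psi_i < 0,

and the roots of this quadratic give the interval above (real whenever \psi_i < 0.25, hence the constraint on \psi_i).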

Now, given that \psi_i is a random variable in our model, the bounds on \mu_i are also random. Using a combination of the non-centered parameterization and the required transform, we can sample a standard normal parameter Z \sim N(0,1) and then transform it to lie within the bounds on \eta_i, which are obtained by applying the logit transform to the bounds on \mu_i. At the bottom of the Stan User's Guide page on vectors with varying bounds (linked in the code below), the transformation for variables \in (L, U) is given, along with the note that a Jacobian adjustment must be added if L and U are themselves random, which in this case they are.
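
Concretely, writing L_i and U_i for the logit of the lower and upper bounds on \mu_i (etaLower and etaUpper in the code below), the transform and its derivative, whose log gives the adjustment term, are

\text{logit}(\mu_i) = L_i + (U_i - L_i)\,\text{logit}^{-1}(\eta_i), \qquad \frac{d\,\text{logit}(\mu_i)}{d\eta_i} = (U_i - L_i)\,\text{logit}^{-1}(\eta_i)\left(1 - \text{logit}^{-1}(\eta_i)\right)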

The current Stan model is below.

data {
	int<lower=0> n;  			
	vector<lower=0,upper=1>[n] y;
	int<lower=0> q;
	matrix[n, q] x; 
	vector[2] boundpsi;
}
parameters {
	vector[q] B;
	real<lower=0> sigma_v;
	vector<lower=boundpsi[1],upper=boundpsi[2]>[n] psi;
	vector[n] Z;
}
transformed parameters{
	// DECLARATIONS
		vector[n] mu; 									
		vector[n] eta;				
		vector[n] alpha;							
		vector[n] etaLower; 
		vector[n] etaUpper;
		vector[n] phi;

	// FULL VECTORS
		etaLower = logit(0.5 * (1 - sqrt(1 - 4 * psi)));
		etaUpper = logit(0.5 * (1 + sqrt(1 - 4 * psi)));
		// Non-centered parameterization: https://mc-stan.org/docs/2_28/stan-users-guide/reparameterization.html
		eta = Z * sigma_v + x*B;
		// adjust the bounds on eta and transform to mu: https://mc-stan.org/docs/2_28/stan-users-guide/vectors-with-varying-bounds.html
		mu = inv_logit(etaLower + (etaUpper - etaLower) .* inv_logit(eta));
		phi = ((mu .* (1 - mu)) ./ (psi)) - 1;
		alpha = mu .* phi;

}
model{
	// draw from a standard normal
	target += std_normal_lpdf(Z);

	// priors for the model parameters
	for(j in 1:q){
		target += std_normal_lpdf(B[j]);
	}
	target += cauchy_lpdf(sigma_v | 0, 2);

	// increment log-probability for Jacobian
	for(f in 1:n){
		// https://mc-stan.org/docs/2_27/reference-manual/logit-transform-jacobian-section.html
		target += log( (etaUpper[f] - etaLower[f]) * inv_logit(eta[f]) * (1 - inv_logit(eta[f])) );
	}

	// Beta model
	target += beta_lpdf(y | alpha, phi - alpha);
}

Note a few caveats that cannot be changed if this simple example is to reflect my more complex version. Firstly, I wish to use the non-centered parameterization for the linear predictor, as I get significant pathological behaviour if I don’t. Secondly, psi is a parameter in this example, whilst in my complex model psi is a transformed parameter; this means that the bounds on eta cannot be defined in the parameters block using the <lower=., upper=.> syntax.

The simple model converges very well, and I find close to no difference whether I include or exclude the Jacobian adjustment. However, my complex model is having significant convergence problems, so I want to be sure that the fault does not lie with this bounded parameter.

Thank you in advance for any assistance.

You do not need any Jacobian adjustment term here, beyond the Jacobian adjustments that Stan automatically applies for the bounded parameters declared in the parameters block. None of your transformed parameters ever shows up on the left-hand side of a sampling statement. Including the adjustment should yield incorrect inference.

The point of Jacobian adjustments is to make sure that prior statements over transformed parameters accomplish what they seem to be saying. If you don’t have prior statements over transformed parameters, then you don’t need to worry.
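
For instance (a minimal sketch with hypothetical names sigma and log_sigma, not part of the model above), an adjustment would be needed if a prior were stated on a transformed parameter such as log(sigma):

parameters {
	real<lower=0> sigma;
}
transformed parameters {
	real log_sigma = log(sigma);
}
model {
	// prior stated on the transformed parameter log_sigma ...
	target += normal_lpdf(log_sigma | 0, 1);
	// ... so the adjustment log|d(log_sigma)/d(sigma)| = -log(sigma) is required
	target += -log(sigma);
}

In the model posted above, mu, phi, and alpha only ever appear on the right-hand side of beta_lpdf (the data y is on the left), so no such term is required.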


Great! Thank you for taking a look, jsocolar.

Note also that the beta_proportion_lpdf function does most of the heavy lifting for you, allowing you to write

y ~ beta_proportion(inv_logit(v + X * beta), psi);

without having to transform to the standard beta density parameterization. The variance in the beta proportion parameterization will in general depend on \mu, but you can work out \psi(\mu) to fix a given variance if you really want.
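
For the known-variance case, a minimal sketch of that route (reusing mu and psi from the model posted above) would be:

model {
	// beta_proportion takes a mean and a precision; the precision giving
	// Var(y_i) = psi_i is mu_i * (1 - mu_i) / psi_i - 1 (the phi above)
	y ~ beta_proportion(mu, mu .* (1 - mu) ./ psi - 1);
}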

Thank you, Michael. Yes, I had initially used the beta_proportion_lpdf() version, but found that using beta_lpdf() allowed for greater transparency while getting the model working. Given that \psi is effectively considered known in the FH setup, we must instead constrain \mu, which I found easier to specify using beta_lpdf().