# Hessian matrix for transformed parameters in optimizing function

I’m using the `optimizing` function in RStan to obtain the MLE. The MLE itself comes out correctly, and with `optimizing(..., hessian = TRUE)` I also get a Hessian matrix, but it is the Hessian for the parameters, not for the transformed parameters. Is there a way to also get the Hessian matrix for the transformed parameters? The quantities I’m interested in are declared in the `transformed parameters` block.

In the code below, `beta` and `pi` are the parameters. I would like to get the Hessian matrix of `beta` and `alpha` (which is defined in the `transformed parameters` block).

Thanks!

```r
library(rstan)

mod.stan <- "
functions {
  vector pw_log_lik(vector alpha, vector beta, row_vector[] X, int[] y) {
    int N = size(X);
    vector[N] out;
    int k = max(y);
    for (n in 1:N) {
      real eta = X[n] * beta;
      int y_n = y[n];
      if (y_n == 1) out[n] = log1m_exp(-log1p_exp(-alpha[1] - eta));
      else if (y_n == k) out[n] = -log1p_exp(-alpha[k - 1] - eta);
      else out[n] = log_diff_exp(-log1p_exp(-alpha[y_n - 1] - eta),
                                 -log1p_exp(-alpha[y_n]     - eta));
    }
    return out;
  }
}

data {
  int<lower = 1> N;               // number of observations
  int<lower = 1> p;               // number of predictors
  row_vector[p] X[N];             // columnwise CENTERED predictors
  int<lower = 2> k;               // number of outcome categories
  int<lower = 1, upper = k> y[N]; // outcome on 1 ... k

  // prior standard deviations
  vector<lower = 0>[p] sds;
}

parameters {
  vector[p] beta; // coefficients
  simplex[k] pi;  // category probabilities for a person w/ average predictors
}

transformed parameters {
  vector[k - 1] alpha;                               // intercepts
  vector[N] log_lik;                                 // log-likelihood pieces
  for (j in 2:k) alpha[j - 1] = logit(sum(pi[j:k])); // predictors are CENTERED
  log_lik = pw_log_lik(alpha, beta, X, y);
}

model {
  target += log_lik;
  target += normal_lpdf(beta | 0, sds);
  // implicit: pi ~ dirichlet(ones)
}
"

# data
set.seed(305)
N <- 20
p <- 1 # dimension of X
X <- as.matrix(rbinom(N, 1, 0.5))
e <- rlogis(N, 0, sqrt(3 / pi^2))
cont_y <- exp(X + e)                     # continuous y
y <- match(cont_y, sort(unique(cont_y))) # convert to ranks 1 ... k
k <- length(unique(y))
sds <- as.array(100)

# model
model <- stan_model(model_code = mod.stan)
res.stan <- optimizing(model,
                       data = c("N", "p", "X", "k", "y", "sds"),
                       hessian = TRUE)
```

I would set `draws` to a large number in order to draw many times from the multivariate normal distribution at the mode, and then call `cov` on the columns of the resulting matrix.
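A minimal sketch of that approach, reusing `model` and the data objects from the question. It assumes the `theta_tilde` draws matrix that `optimizing` returns when `draws > 0` (its columns are named, e.g. `beta[1]`, `alpha[1]`, ...), and that the resulting covariance matrix is nonsingular:

```r
# Re-run the optimizer, asking for draws from the normal approximation
# at the mode; by default (constrained = TRUE) the returned theta_tilde
# matrix also contains the transformed parameters.
res.draws <- optimizing(model,
                        data = c("N", "p", "X", "k", "y", "sds"),
                        draws = 10000)

# Keep only the beta and alpha columns (dropping log_lik, pi, etc.)
keep  <- grep("^(beta|alpha)\\[", colnames(res.draws$theta_tilde))
Sigma <- cov(res.draws$theta_tilde[, keep]) # approximate covariance

# Inverting the covariance gives an approximate Hessian of the negative
# log posterior with respect to beta and alpha jointly:
H <- solve(Sigma)
```

Since `alpha` is a deterministic function of `pi`, this gives you the curvature on the scale you actually care about, without deriving the change-of-variables Jacobian by hand.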