I’ll give it a try. Someone should check my math below. However, on 1, since you have 1 function but 4 parameters, the Jacobian is 1 x 4, which has no determinant. You can still put a prior on that quantity, which then acts as a weighting function in the high-density region of the prior. For more, see Once more about Stan and Jacobian warnings | Models Of Reality.
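To spell that out (my notation: call the derived scalar f and the parameters \theta_1, \dots, \theta_4), the Jacobian of that map is just the row vector
\begin{aligned}
J &= \begin{pmatrix} \dfrac{\partial f}{\partial \theta_1} & \dfrac{\partial f}{\partial \theta_2} & \dfrac{\partial f}{\partial \theta_3} & \dfrac{\partial f}{\partial \theta_4} \end{pmatrix}
\end{aligned}
so there is no square matrix to take \log \lvert \det J \rvert of.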
For 2,
\begin{aligned}
m &= \frac{\sqrt{n} \sum_{i=1}^{n} \frac{y_i - \mu}{\sigma}}{n \sigma} \\
&= \frac{\sum_{i=1}^{n} (y_i - \mu) }{\sqrt{n} \sigma^2} \\
\frac{\partial m}{\partial y} &= \frac{1}{\sqrt{n}\sigma^2}
\end{aligned}
the log absolute value of which is - (0.5 \log(n) + 2 \log(\sigma)). You can drop the first term since it’s constant, so you just need
target += -2 * log(sigma);
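If you want to sanity-check that derivative numerically, a quick finite difference in R (with arbitrary values of n, \mu, \sigma that I made up) should land on 1/(\sqrt{n}\,\sigma^2):
# finite-difference check of dm/dy_i against 1 / (sqrt(n) * sigma^2)
n <- 50; mu <- 3; sigma <- 2
y <- rnorm(n, mu, sigma)
m_fun <- function(y) mean((y - mu) / sigma) / (sigma / sqrt(n))
eps <- 1e-6
y_bump <- y
y_bump[1] <- y_bump[1] + eps
(m_fun(y_bump) - m_fun(y)) / eps   # numerical dm/dy_1
1 / (sqrt(n) * sigma^2)            # analytic value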
For the second part, check my math.
\begin{aligned}
r_i &= \frac{y_i - \mu}{\sigma} \\
v &= \frac{1}{n} \sum_{i=1}^{n} (r_i - \mu_r)^2 \\
\frac{\partial v}{\partial y} &= \frac{2 \sum_{i=1}^{n} (r_i - \mu_r) }{n \sigma} \\
&= 0
\end{aligned}
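A quick numerical way to see that this really is zero (again with arbitrary n, \mu, \sigma of my choosing): v only depends on the residuals through their deviations from \mu_r, so adding the same constant to every y_i doesn’t move it at all, which is exactly what the summed derivative above says.
# v is unchanged when every y_i is shifted by the same amount,
# so the summed derivative above is 0
n <- 50; mu <- 3; sigma <- 2
y <- rnorm(n, mu, sigma)
v_fun <- function(y) {
  r <- (y - mu) / sigma
  mean((r - mean(r))^2)     # the 1/n version from the formula above
}
v_fun(y + 0.37) - v_fun(y)  # 0 up to floating point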
Update: thinking about this more, I think that the second part doesn’t need an adjustment because \mu_r is the mean of the r_i, so \sum_{i=1}^{n} (r_i - \mu_r) / n is 0 and the derivative above is 0. So it just boils down to the target += -2 * log(sigma) adjustment. As a quick check, simulate some data and fit the model:
library(cmdstanr)

# simulate data: 1000 draws from Normal(mean = 3, sd = 2)
y <- rnorm(1000, mean = 3, sd = 2)

mike_mod <- cmdstan_model("mike_jacobian.stan")

fit <- mike_mod$sample(
  data = list(n = length(y),
              y = y),
  parallel_chains = 2,
  chains = 2,
  seed = 12312,
  adapt_delta = 0.8,
  max_treedepth = 10,
  iter_sampling = 500,
  iter_warmup = 500
)

fit$summary()
where the Stan code is
data {
  int<lower=1> n;
  vector[n] y;
}
transformed data {
  real sqrt_n = sqrt(n);
  real var_gamma_par = (n - 1) / 2.0;  // 2.0 so this isn't integer division
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  vector[n] resid = (y - mu) / sigma;  // computation involves data and multiple parameters
  real m = mean(resid) / (sigma / sqrt_n);
  real v = variance(resid);
  m ~ std_normal();  // needs jacobian?
  v ~ gamma(var_gamma_par, var_gamma_par);
  target += -2 * log(sigma);  // Jacobian adjustment for m derived above
}
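For what it’s worth, the gamma(var_gamma_par, var_gamma_par) presumably comes from the fact that the sample variance of n standardized residuals follows a Gamma((n-1)/2, (n-1)/2) distribution, since (n-1) times it is \chi^2_{n-1}. A quick Monte Carlo check of that, with an n I picked arbitrarily:
# sample variance of n iid standard normals vs Gamma((n-1)/2, rate = (n-1)/2)
n <- 30
v_sim <- replicate(20000, var(rnorm(n)))  # var() uses the n - 1 denominator
quantile(v_sim, c(0.05, 0.5, 0.95))
qgamma(c(0.05, 0.5, 0.95), shape = (n - 1) / 2, rate = (n - 1) / 2)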
The output is
Both chains finished successfully.
Mean chain execution time: 0.3 seconds.
Total execution time: 0.5 seconds.
> fit$summary()
# A tibble: 3 x 10
variable mean median sd mad q5 q95 rhat ess_bulk ess_tail
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 lp__ -501. -500. 0.983 0.661 -503. -500. 1.00 468. 667.
2 mu 2.97 2.97 0.128 0.127 2.77 3.18 1.00 935. 712.
3 sigma 2.01 2.01 0.0445 0.0457 1.94 2.09 1.00 692. 681.
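which is in the right neighborhood of the simulation truth (mean 3, sd 2). If you want to compare against the plain sample moments of the simulated y (my own quick sanity check):
# sample moments of the simulated data, for comparison with the posterior means above
c(mean_y = mean(y), sd_y = sd(y))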