Clarification regarding Jacobians - BYM2 model

Firstly, as far as I understand, the only difference between defining a transformed quantity in the transformed parameters block versus the model block is that, in the latter, the values are not stored in the output.

In this model (which is not mine), there is no interest in the parameter inv_logit.

I had no issues with this model before reading Section 24 of the Stan User's Guide (and some other posts, like this one and this one).

I also apologize for any confusion in my question. To clarify, let me set up a toy example:

Consider a GLMM defined as follows:

Y_{ij} \mid \cdot \sim {\rm Poisson} (E_{ij} \lambda_{ij}),

where i = 1, \ldots, N_j, j = 1, \ldots, M, and

\log(\lambda_{ij}) = \mathbf{x}_{ij}^{\top} \boldsymbol{\beta} + s_{j},
where s_j is a random effect for group j.

Are the following two models equivalent?

Model 1

data {
  int<lower=0> N;
  int<lower=0> y[N];         // count outcomes
  int<lower=0> E[N];         // exposure
  int<lower=1> K;            // num covariates
  matrix[N, K] X;            // design matrix
  int<lower = 0> M;          // num of groups
  int<lower=1, upper=M> group_id[N]; // group identifier
}
parameters {
  vector[K] beta;   // reg coef
  vector[M] s;      // random effects
  real log_sigma;   // log standard deviation (for random effect)
} 
transformed parameters {
  real<lower=0> sigma = exp(log_sigma);
}
model {
  vector[N] log_E = log(to_vector(E));
  // vectorized likelihood; s[group_id] picks out each observation's group effect
  y ~ poisson_log(log_E + X * beta + s[group_id]);
  s ~ normal(0, sigma);
  log_sigma ~ std_normal();
  beta ~ normal(0, 1);
}

Model 2

data {
  int<lower=0> N;
  int<lower=0> y[N];         // count outcomes
  int<lower=0> E[N];         // exposure
  int<lower=1> K;            // num covariates
  matrix[N, K] X;            // design matrix
  int<lower = 0> M;          // num of groups
  int<lower=1, upper=M> group_id[N]; // group identifier
}
parameters {
  vector[K] beta;       // reg coef
  vector[M] s;          // random effects
  real<lower=0> sigma;  // standard deviation (for random effect)
} 
model {
  vector[N] log_E = log(to_vector(E));
  real log_sigma = log(sigma);
  y ~ poisson_log(log_E + X * beta + s[group_id]);
  s ~ normal(0, sigma);
  log_sigma ~ std_normal();
  beta ~ normal(0, 1);
}
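If I am reading Section 24 correctly, making Model 2 match Model 1 would mean adding the log absolute Jacobian of the transform log_sigma = log(sigma) to the target. A sketch of my reading (possibly wrong), with the rest of the model block unchanged:

```stan
model {
  real log_sigma = log(sigma);
  log_sigma ~ std_normal();
  // change of variables: |d log(sigma) / d sigma| = 1 / sigma,
  // so the log absolute Jacobian term is -log(sigma)
  target += -log(sigma);
  // ... remaining statements as in Model 2 ...
}
```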

In my understanding, these two models are equivalent. However, based on your responses, it appears that a Jacobian adjustment should be incorporated only in the second case.
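To convince myself of what the adjustment does, I also checked the change of variables numerically outside of Stan (plain Python, standard library only). If log_sigma ~ N(0, 1), as in Model 1, the implied density of sigma = exp(log_sigma) is the standard normal density at log(sigma) times the Jacobian 1/sigma; dropping the 1/sigma factor, as Model 2 does without an adjustment, leaves a function that does not even integrate to 1:

```python
import math

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def integrate(f, lo=1e-6, hi=200.0, n=200_000):
    """Trapezoid rule on a uniform grid over (lo, hi)."""
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        total += f(lo + i * h)
    return total * h

# "Prior" on sigma obtained by writing log(sigma) ~ N(0, 1) with NO Jacobian:
no_jacobian = integrate(lambda s: norm_pdf(math.log(s)))

# The same statement WITH the Jacobian |d log(sigma)/d sigma| = 1/sigma,
# i.e. target += -log(sigma) in Stan:
with_jacobian = integrate(lambda s: norm_pdf(math.log(s)) / s)

print(round(no_jacobian, 3))    # ~1.649 = exp(1/2): not a proper density
print(round(with_jacobian, 3))  # ~1.0: a proper (lognormal) density
```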

Could you please point out where my rationale is flawed?


Thank you for your assistance and the insightful discussion.