Do I need a Jacobian correction?

I’m working on a non-linear model and have a parameter (B) that is a combination of other parameters (a, b, c). I move B from the LHS to the RHS, which I think means I need a Jacobian correction. Is that correct? If so, what would that look like?

The X1 and X2 variables are time series values, which I enter as matrices.

My model is:

data {
  int<lower=1> N;  // total number of observations
  vector[N] y;  // response variable
  // covariate vectors for non-linear functions
  int<lower=1> N_Times; // Number of times in time series
  matrix[N, N_Times] X1;
  matrix[N, N_Times] X2;
  vector[N] X3;
}
parameters {
  real<lower=0> a; 
  matrix[N, N_Times] b;  
  real<lower=0> c;  
  real<lower=0> kappa;  // precision parameter
}
model {
  // likelihood including constants
  // B matrix (a derived quantity, computed from a, b, c)
  matrix[N, N_Times] B;
  // mu for the beta likelihood
  row_vector[N] mu;
  // compute B
  B = (a * exp(b .* X2)) + c;
  for (n in 1:N) {
    mu[n] = sum(row(X1, n) .* row(B, n)) / X3[n];
  }
  y ~ beta(inv_logit(mu), kappa);
  // priors including constants
  a ~ normal(5,5);
  to_vector(b) ~ normal(5,5);
  c ~ normal(5,5);
  kappa ~ gamma(0.01, 0.01);
}
generated quantities {
  vector[N] log_lik;
  matrix[N, N_Times] B_pred;
  vector[N] y_pred;
  row_vector[N] mu_pred;
  B_pred = (a * exp(b .* X2)) + c;
  for (n in 1:N) {
    mu_pred[n] = sum(row(X1, n) .* row(B_pred, n)) / X3[n];
  }
  for (n in 1:N) {
    y_pred[n] = beta_rng(inv_logit(mu_pred[n]), kappa);
    // pointwise log-likelihood of the observed data, for LOO/WAIC
    log_lik[n] = beta_lpdf(y[n] | inv_logit(mu_pred[n]), kappa);
  }
}

Do I need to address the movement of B from the LHS to the RHS within the model block? Or is a Jacobian correction only necessary if I move a transformed parameter?

Nope! Your LHS/RHS heuristic isn’t accurate. What triggers the need for a Jacobian adjustment is putting a quantity derived from one or more parameters on the left-hand side of a tilde, and even then only under certain circumstances (not when it’s a linear transformation, for sure, and if I recall correctly not when it’s a many-to-one transform).
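As a minimal sketch (unrelated to your model, just the standard textbook case): if you put a non-linear transform of a parameter on the left of a tilde, you have to add the log absolute Jacobian determinant of the transform yourself:

```stan
parameters {
  real<lower=0> sigma;
}
model {
  // sampling statement on log(sigma), a non-linear transform of a parameter
  log(sigma) ~ normal(0, 1);
  // Jacobian adjustment: log |d log(sigma) / d sigma| = log(1 / sigma)
  target += -log(sigma);
}
```

In your model, B only ever appears on the right-hand side (inside mu), so nothing like this is needed.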

Whew! Thank you @mike-lawrence !