Why do my constraints not seem to have an effect?

I have been trying to estimate this model, and I believe that for the model to be identifiable I need some kind of constraint on beta and gamma. I have seen somewhat similar models that used sum-to-zero constraints, which I have implemented in the code below. I also experimented with fixing the first element of beta and gamma to 0, which is commented out. However, these constraints do not seem to help me recover the parameters: with or without them, the model just seems to require larger amounts of data to recover the parameters. Am I not implementing the constraints correctly? And how can the model be identifiable when I have no constraints on beta or gamma at all, for example:
theta ~ normal(0,1)
beta ~ normal(0,3)
gamma ~ normal(0,3)

If I include enough data, this seems to work, but I am not sure how the model can distinguish between the parameters.
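(To make my concern concrete: the log rate theta[jj[n]] + beta[ii[n]] + gamma[tt[n]] is unchanged if I add a constant to every theta and subtract the same constant from every beta, so without a constraint it seems like only the priors pin down the overall levels.)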

data {
  int<lower=1> I;
  int<lower=1> J;
  int<lower=1> N;
  int<lower=1> T;
  int<lower=1,upper=I> ii[N];  // index into beta for observation n
  int<lower=1,upper=J> jj[N];  // index into theta for observation n
  int<lower=1,upper=T> tt[N];  // index into gamma for observation n
  int<lower=0> y[N];           // observed counts

}
parameters {
  vector[I-1] beta_free;
  vector[T-1] gamma_free;
  
  vector[J] theta;
  real<lower=0> sigma;
  
  //vector[I] beta;
  //vector[T] gamma;
}
transformed parameters {
  vector[I] beta;
  vector[T] gamma;

  // sum-to-zero constraint: the last element is minus the sum of the free elements
  beta[1:(I-1)] = beta_free;
  beta[I] = -sum(beta_free);

  gamma[1:(T-1)] = gamma_free;
  gamma[T] = -sum(gamma_free);

  // alternative corner constraint: fix the first element to zero instead
  //beta[1] = 0;
  //beta[2:I] = beta_free;

  //gamma[1] = 0;
  //gamma[2:T] = gamma_free;
}
model {

  target += normal_lpdf(beta | 0, 1);
  target += normal_lpdf(gamma | 0, 1);

  theta ~ normal(0,sigma);
  sigma ~ exponential(.1);
  
  y ~ poisson(exp(theta[jj] + beta[ii] + gamma[tt]));
    
}

For sum-to-zero, an easier way to code it is:

vector[I] beta = append_row(beta_free, -sum(beta_free));
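With that one-liner for both vectors, the whole transformed parameters block reduces to something like this sketch (using the same beta_free and gamma_free declarations as in your parameters block):

transformed parameters {
  vector[I] beta = append_row(beta_free, -sum(beta_free));
  vector[T] gamma = append_row(gamma_free, -sum(gamma_free));
}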

I’m not sure what you mean by the constraints not having an effect. Coding this way will ensure that sum(beta) = 0, so it perfectly enforces the sum-to-zero constraint.

It’s much more efficient to use

target += normal_lupdf(beta | 0, 1);

or equivalently,

beta ~ normal(0, 1);

because the unnormalized forms drop the constant normalizing terms of the log density rather than recomputing them on every evaluation.

For the Poisson, it’s better to write

y ~ poisson_log(theta[jj] + beta[ii] + gamma[tt]);
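poisson_log takes the log rate directly, so it skips the explicit exp() and the overflow that can come with it. Putting both suggestions together, the model block could look like this sketch:

model {
  beta ~ normal(0, 1);
  gamma ~ normal(0, 1);
  theta ~ normal(0, sigma);
  sigma ~ exponential(0.1);

  y ~ poisson_log(theta[jj] + beta[ii] + gamma[tt]);
}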

I’m not sure how you’re evaluating the fit, but what you want to do is simulate data under this exact data-generating process (whichever parameterization you choose) and then see whether you can recover the parameters used to generate the data.
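As a minimal sketch (assuming you want to simulate under the sum-to-zero parameterization), a standalone Stan program like the one below, run with the fixed_param algorithm, would generate y from the same Poisson log-linear predictor; you can then feed the simulated y back into your model and compare the posterior to the drawn parameter values:

data {
  int<lower=1> I;
  int<lower=1> J;
  int<lower=1> N;
  int<lower=1> T;
  int<lower=1,upper=I> ii[N];
  int<lower=1,upper=J> jj[N];
  int<lower=1,upper=T> tt[N];
}
generated quantities {
  vector[I] beta;
  vector[T] gamma;
  vector[J] theta;
  real sigma;
  int y[N];

  sigma = exponential_rng(0.1);
  {
    // draw the free elements, then complete the sum-to-zero vectors
    vector[I-1] beta_free;
    vector[T-1] gamma_free;
    for (i in 1:(I-1)) beta_free[i] = normal_rng(0, 1);
    for (t in 1:(T-1)) gamma_free[t] = normal_rng(0, 1);
    beta = append_row(beta_free, -sum(beta_free));
    gamma = append_row(gamma_free, -sum(gamma_free));
  }
  for (j in 1:J) theta[j] = normal_rng(0, sigma);
  // same log-linear predictor as the fitted model
  for (n in 1:N) y[n] = poisson_log_rng(theta[jj[n]] + beta[ii[n]] + gamma[tt[n]]);
}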

Thank you for the reply! What I meant by the constraints not having an effect is that whether I include the constraint or not, the estimates seem to be about the same. Say, for example, that theta, beta, and gamma were all given standard normal priors with no sum-to-zero constraints. I assume there is some soft constraint going on underneath, but I’m not sure.

For the target +=, I assumed that I needed to use lpdf because I am transforming the parameters, but am I right that this transformation doesn’t actually require it?