Bounds on a specific column of a 2D array

Yes, you’re providing two priors for xn that way. The log posterior (up to an additive constant) is defined by adding together all of the sampling statements, direct target increments, and Jacobian adjustments for constraining transforms.
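As a quick numeric sketch (in Python rather than Stan, with made-up values), two sampling statements on the same quantity just contribute two additive terms to the log posterior:

```python
import math

def normal_lpdf(x, mu, sigma):
    # log of the normal density, written out explicitly
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)

# Hypothetical parameter value and two hypothetical priors:
# each sampling statement adds its log density term.
xn = 1.3
log_posterior = normal_lpdf(xn, 0.0, 1.0) + normal_lpdf(xn, 0.5, 2.0)
print(log_posterior)
```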

OLS is just an ad hoc way of generalizing maximum likelihood. For linear regression with normal noise, they’re equivalent. We don’t implement least squares algorithms, just direct maximum likelihood estimation and posterior sampling (and variational inference).
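To see the equivalence numerically, here's a sketch (in Python, with made-up data) of a no-intercept regression y = b * x: the closed-form least-squares solution also minimizes the normal negative log likelihood, so perturbing it in either direction can only increase the negative log likelihood.

```python
import math

# Tiny made-up dataset, roughly y = 2x plus noise.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]

# OLS for y = b * x (no intercept): minimizes the sum of squared residuals.
b_ols = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def neg_log_lik(b, sigma=1.0):
    # Negative log likelihood under y[i] ~ normal(b * x[i], sigma);
    # for fixed sigma this is the residual sum of squares plus constants.
    return sum(
        0.5 * ((yi - b * xi) / sigma) ** 2 + math.log(sigma) + 0.5 * math.log(2 * math.pi)
        for xi, yi in zip(x, y)
    )

print(b_ols, neg_log_lik(b_ols))
```

Since the negative log likelihood is the residual sum of squares up to a positive scale and an additive constant, the OLS estimate and the maximum likelihood estimate coincide.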

The statement

for (n in 1:N)
  xn[n] ~ normal(x_hat[n], sigma);

or more efficiently,

xn ~ normal(x_hat, sigma);

adds the following to the log likelihood:

SUM_{n in 1:N}  log normal(xn[n] | x_hat[n], sigma)

If you work out the math, there’s a quadratic term ((xn[n] - x_hat[n]) / sigma)^2.
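Expanding the log density makes the quadratic term explicit. Here's a sketch (in Python, with hypothetical values standing in for xn[n], x_hat[n], and sigma) checking the expansion against the normal density itself:

```python
import math

def normal_lpdf(x, mu, sigma):
    # Expanded log normal density: the quadratic term plus normalizing constants.
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)

# Direct evaluation of log of the normal density for comparison.
def normal_lpdf_direct(x, mu, sigma):
    return math.log(
        1.0 / (sigma * math.sqrt(2 * math.pi))
        * math.exp(-0.5 * ((x - mu) / sigma) ** 2)
    )

# Hypothetical values standing in for xn[n], x_hat[n], sigma.
val = normal_lpdf(1.2, 0.7, 0.5)
ref = normal_lpdf_direct(1.2, 0.7, 0.5)
print(val, ref)
```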

P.S. We also don’t recommend tabs in code.

P.P.S. Also wouldn’t recommend using x_hat[n] for a parameter—we usually use “hat” for estimators.