I am currently working on a state-space model that takes spatial structure into account. For the vector `r[k]`, I have built into the model the assumption that adjacent `r`'s are similar to each other.
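
Concretely (using my own edge notation, where edge $i$ connects regions `From[i]` and `To[i]`), I believe the adjacency assumption amounts to a pairwise-difference prior of the form

$$\log p(r \mid s_r) \;=\; \sum_{i=1}^{I} \log \mathrm{Normal}\!\left(r_{\mathrm{To}[i]} \,\middle|\, r_{\mathrm{From}[i]},\, s_r\right),$$

though please correct me if this is not the right way to state it.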

I would like to compute the log posterior probabilities for the response variable `Y` and the spatial effect `r`, respectively. Based on my colleague's advice, I have written the following code:

```
data {
  int<lower=0> N_data;
  int<lower=0> N_region;
  int<lower=0> N_subject;
  vector[N_data] Y;
  vector[N_data] Age;
  vector[N_data] Ses;
  vector[N_data] Sex;
  vector[N_data] Smoke;
  array[N_data] int RID;
  array[N_data] int SID;
  int I;
  array[I] int<lower=1, upper=N_region> From;
  array[I] int<lower=1, upper=N_region> To;
}
parameters {
  vector[N_region] r;
  vector[N_region] beta1;
  vector[N_region] beta2;
  vector[N_region] beta3;
  vector[N_region] beta4;
  vector[N_subject] beta_ind;
  real<lower=0> s_r;
  real<lower=0> s_Y;
  real<lower=0> s_beta1;
  real<lower=0> s_beta2;
  real<lower=0> s_beta3;
  real<lower=0> s_beta4;
  real<lower=0> s_beta_ind;
}
transformed parameters {
  // one term per edge, so size I rather than N_region
  vector[I] lp_r;
  vector[N_data] lp_Y;
  for (i in 1:I) {
    lp_r[i] = log(1.0 / N_region) + normal_lpdf(r[To[i]] | r[From[i]], s_r);
  }
  for (i in 1:N_data) {
    lp_Y[i] = log(1.0 / N_data)
              + student_t_lpdf(Y[i] | 3,
                  r[RID[i]] + beta1[RID[i]] * Age[i] + beta2[RID[i]] * Ses[i]
                  + beta3[RID[i]] * Sex[i] + beta4[RID[i]] * Smoke[i]
                  + beta_ind[SID[i]],
                  s_Y);
  }
}
model {
  target += normal_lpdf(r[To] | r[From], s_r);
  for (n in 1:N_data) {
    Y[n] ~ student_t(3,
        r[RID[n]] + beta1[RID[n]] * Age[n] + beta2[RID[n]] * Ses[n]
        + beta3[RID[n]] * Sex[n] + beta4[RID[n]] * Smoke[n]
        + beta_ind[SID[n]],
        s_Y);
  }
  beta1 ~ student_t(3, 0, s_beta1);
  beta2 ~ student_t(3, 0, s_beta2);
  beta3 ~ student_t(3, 0, s_beta3);
  beta4 ~ student_t(3, 0, s_beta4);
  beta_ind ~ normal(0, s_beta_ind);
  s_Y ~ normal(0, 0.01);
}
```

(Note: I have used the current `array[...]` declaration syntax; in the original loop over `lp_r` I had mistakenly iterated over `1:N_region` even though `From` and `To` have `I` elements.)

However, this code does not seem correct. First, the term `log(1.0/N_region)` would only be necessary if `r[k]` were a categorical variable taking values in $\{1, 2, \dots, N_\text{region}\}$, but `r` is actually continuous. Second, if no prior distribution were set for any parameter, the log posterior would equal the log-likelihood, which I think can be expressed with the `_lpdf` functions; but we do have prior distributions for `beta1`, `beta2`, `beta3`, `beta4`, and `beta_ind`. How should I account for these prior distributions?
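
For instance, I wondered whether the prior terms should simply be added to the log-density alongside the likelihood, perhaps in a `generated quantities` block. Below is a sketch of my current understanding (the names `lp_prior` and `lp_lik` are my own, and I am not sure this is the right decomposition):

```
generated quantities {
  real lp_prior;  // joint log prior over the regression coefficients
  real lp_lik;    // log-likelihood of Y given all parameters
  // vectorized *_lpdf calls return the sum over elements
  lp_prior = student_t_lpdf(beta1 | 3, 0, s_beta1)
           + student_t_lpdf(beta2 | 3, 0, s_beta2)
           + student_t_lpdf(beta3 | 3, 0, s_beta3)
           + student_t_lpdf(beta4 | 3, 0, s_beta4)
           + normal_lpdf(beta_ind | 0, s_beta_ind);
  lp_lik = 0;
  for (n in 1:N_data) {
    lp_lik += student_t_lpdf(Y[n] | 3,
        r[RID[n]] + beta1[RID[n]] * Age[n] + beta2[RID[n]] * Ses[n]
        + beta3[RID[n]] * Sex[n] + beta4[RID[n]] * Smoke[n]
        + beta_ind[SID[n]],
        s_Y);
  }
}
```

Is something along these lines the correct way to combine the prior and likelihood contributions, or am I misunderstanding how the posterior decomposes?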

Any minor comments are welcome. Thank you in advance!