Hi Stan Users,

I’m trying to write a fairly simple Bayesian linear regression for a dataset where both the independent and dependent variables are analytically measured quantities with known Gaussian uncertainties. This differs a bit from the example in the manual, where only the dependent variable has uncertainty.

The model below seems to return decent results, but I was wondering whether any of y’all could help confirm that my approach is statistically valid.

Thanks!

```
data {
  int<lower=1> N;             // Number of observations
  vector[N] x0;               // Observations
  vector<lower=0>[N] x_sd;    // S.D. of observations (Gaussian uncertainty)
  vector[N] y0;               // Outcomes
  vector<lower=0>[N] y_sd;    // S.D. of outcomes
  // Bounds for the uniform priors on the regression coefficients
  real slope_min;
  real<lower=slope_min> slope_max;
  real intercept_min;
  real<lower=intercept_min> intercept_max;
}
parameters {
  // Uniform priors need matching declared bounds; otherwise the sampler
  // can propose values outside the prior's support and reject them.
  real<lower=intercept_min, upper=intercept_max> intercept;
  real<lower=slope_min, upper=slope_max> slope;
  vector[N] X;                // latent "true" x values
}
model {
  // Measurement model for x: observations x0 scatter around latent X
  X ~ normal(x0, x_sd);
  // The bounded declarations above imply uniform priors on the
  // coefficients, so explicit ~ uniform(...) statements are redundant.
  // Likelihood: observed outcomes scatter around the regression line
  y0 ~ normal(intercept + slope * X, y_sd);
}
```
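As a quick sanity check on why the latent-X formulation is worth the trouble: with classical measurement error in x, naive least squares is attenuated toward zero by roughly the factor var(X) / (var(X) + x_sd²), which is exactly the bias the model above avoids. A minimal stdlib-only Python simulation (all names and the simulated parameter values here are my own, not from the post) illustrates the effect:

```python
import random
import statistics

random.seed(1)

true_intercept, true_slope = 1.0, 2.0
x_noise_sd = 1.0   # measurement S.D. of x
n = 20000

# Latent "true" x values with variance 4, then noisy measurements of them
x_true = [random.gauss(0.0, 2.0) for _ in range(n)]
x_obs = [xt + random.gauss(0.0, x_noise_sd) for xt in x_true]
# Outcomes generated from the latent x (plus outcome noise)
y = [true_intercept + true_slope * xt + random.gauss(0.0, 0.5)
     for xt in x_true]

def ols_slope(x, y):
    """Ordinary least-squares slope: cov(x, y) / var(x)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

slope_with_true_x = ols_slope(x_true, y)   # close to true_slope
slope_with_noisy_x = ols_slope(x_obs, y)   # attenuated: ~ 2.0 * 4 / (4 + 1)
```

Regressing on the noisy x recovers roughly 1.6 rather than 2.0 here, matching the attenuation factor 4 / (4 + 1); treating X as a latent parameter, as the Stan model does, is the standard fix.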