Hello,
I need help with the following problem. To put it simply, I have a sample of observed pairs (m_1, ..., m_n; s_1, ..., s_n), where each s_i acts as a constant, and the observation model depends on unobserved quantities y_1, ..., y_n. Theoretically, their true but unobserved model is y \sim F(y;\mu_y,\phi_y), and I am interested in estimating its parameters using m and s. Although the y_i are unobserved in practice, I know they can be drawn from a Beta distribution with parameters \lambda_i and \sigma_i (Res in the code below), which are computed from the observed data m_i and s_i.
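Written out, the structure I have in mind is roughly the following (g(\cdot) is just my shorthand for what dpi_y_approx_case1 computes in the code below):

m_i \mid y_i, s_i \sim Beta\big(y_i s_i,\ (1 - y_i) s_i\big), \qquad y_i \sim Beta\big(\lambda_i \sigma_i,\ (1 - \lambda_i)\sigma_i\big), \qquad (\lambda_i, \sigma_i) = g(m_i, s_i; \mu_y, \phi_y).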
In practice, what I'm trying to write down is a Gibbs-style sampler for the model parameters. Now, my question is simple: where should I incorporate the information y \sim Beta(y; g(\lambda,\sigma)) in the Stan workflow? My naive solution (see the code below) isn't working, probably because I'm placing y in the wrong block, since I have no data for y at all.
Note: I could get samples for y_i at each iteration, given \mu_y and \phi_y, using the following user-defined _rng function:
real conditional_y_rng(real m, real s, vector pars, vector sigma0) {
  // res[1] = lambda (mean), res[2] = sigma (precision) for this observation
  vector[2] res = dpi_y_approx_case1(m, s, m, sigma0, pars);
  return beta_rng(res[1] * res[2], res[2] - res[1] * res[2]);
}
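For example, I imagine calling it once per observation at each iteration, something like the sketch below (y_draw is a made-up name, and I'm assuming generated quantities would be the place for it):

generated quantities {
  vector[n] y_draw;
  for (i in 1:n) {
    // one draw of y_i given the current muy and phiy
    y_draw[i] = conditional_y_rng(m[i], s[i], [muy, phiy]', sigma0);
  }
}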
Thank you in advance!
functions {
#include eqsModel.stan
#include innerFunctions.stan
}
data {
  int n;
  int maxIter;
  vector[n] m;
  vector[n] s;
  // vector<lower=0,upper=1>[n] y;
  real<lower=0> alphas;
  real mus;
  vector[maxIter] sigma0;
}
parameters {
  real muy;
  real<lower=0> phiy;
}
transformed parameters {
  matrix[n, 2] Res; // lambda and sigma for each observation
  for (i in 1:n) {
    Res[i, ] = dpi_y_approx_case1(m[i], s[i], m[i], sigma0, [muy, phiy]', maxIter)';
  }
  print(Res);
}
model {
  vector[n] y; // local variable only: y is never observed or assigned anywhere
  for (i in 1:n) {
    y[i] ~ beta(Res[i, 1] * Res[i, 2], Res[i, 2] - Res[i, 2] * Res[i, 1]);
    s[i] ~ gamma(alphas, mus / alphas);
    m[i] ~ beta(y[i] * s[i], s[i] - s[i] * y[i]);
  }
}
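In case it clarifies what I mean by "the wrong block": the only alternative I can think of is to declare y as a latent parameter instead of a local variable, roughly like the sketch below, but I'm not sure this is the right way to incorporate y \sim Beta(y; g(\lambda,\sigma)).

parameters {
  real muy;
  real<lower=0> phiy;
  vector<lower=0, upper=1>[n] y; // latent y instead of a local variable
}
model {
  for (i in 1:n) {
    y[i] ~ beta(Res[i, 1] * Res[i, 2], Res[i, 2] - Res[i, 1] * Res[i, 2]);
    s[i] ~ gamma(alphas, mus / alphas);
    m[i] ~ beta(y[i] * s[i], s[i] - s[i] * y[i]);
  }
}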