Sample from a custom distribution in the generated quantities block

Hello, I’m fairly new to Stan so apologies if I’ve missed something obvious.

I have the following model that I would like to generate samples from in the generated quantities block:

model {
target += alpha*multi_normal_lpdf(y | mu, Sigma);

// priors
mu[1] ~ normal(-4,10);
mu[2] ~ normal(25,10);
Sigma ~ wishart(nu,v);
}

It doesn’t seem possible to use the multi_normal_rng function, as that would not account for the influence of alpha. I can generate samples from this model easily enough in R using rejection sampling with the estimated parameters, but I would like to be able to do it directly in Stan if possible. Here’s my full code:

// The input data is a vector 'y' of length 'N'.
data {
  int<lower=0> N;
  vector[2] y[N];

}

// The parameters accepted by the model. 
parameters {
  vector[2] mu; //means
  vector<lower=0>[2] sigma; //sds
  real<lower = -1, upper = 1> rho; // correlation
}

transformed parameters{
  cov_matrix[2] Sigma; // covariance matrix built from sigma and rho
  real alpha;
  real nu; // wishart prior degrees of freedom
  cov_matrix[2] v; // wishart prior covariance matrix
  Sigma[1,1] = square(sigma[1]); // variance
  Sigma[2,2] = square(sigma[2]); // variance
  Sigma[1,2] = rho*sigma[1]*sigma[2]; // covariance
  Sigma[2,1] = Sigma[1,2]; // covariance
  alpha = .05; // helpfulness -- will be estimated later but set to .05 for testing
  nu = 5; 
  v[1,1] = 5; // variance 1
  v[2,2] = 5; // variance 2
  v[1,2] = 0; // covariance
  v[2,1] = 0;
}


// The model to be estimated. We model the output 'y' as
// bivariate normal with mean 'mu' and covariance 'Sigma',
// with the log-likelihood multiplied by 'alpha'.
model {
target += alpha*multi_normal_lpdf(y | mu, Sigma);
// priors
mu[1] ~ normal(-4,10);
mu[2] ~ normal(25,10);
Sigma ~ wishart(nu,v);
}

Any help would be greatly appreciated!

I think a tempered normal distribution (meaning alpha*multi_normal_lpdf) is equivalent to rescaling the covariance, so alpha seems redundant in your model.
In addition, you could pass v in as data rather than defining it in transformed parameters.
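
For the second point, a minimal sketch of the same program (not tested) with alpha, nu, and v supplied through the data block instead of being hard-coded in transformed parameters:

data {
  int<lower=0> N;
  vector[2] y[N];
  real<lower=0> alpha;   // tempering weight ("helpfulness"), e.g. 0.05
  real<lower=0> nu;      // wishart prior degrees of freedom
  cov_matrix[2] v;       // wishart prior scale matrix
}
parameters {
  vector[2] mu;                // means
  vector<lower=0>[2] sigma;    // sds
  real<lower=-1, upper=1> rho; // correlation
}
transformed parameters {
  cov_matrix[2] Sigma;
  Sigma[1,1] = square(sigma[1]);
  Sigma[2,2] = square(sigma[2]);
  Sigma[1,2] = rho * sigma[1] * sigma[2];
  Sigma[2,1] = Sigma[1,2];
}
model {
  target += alpha * multi_normal_lpdf(y | mu, Sigma);
  mu[1] ~ normal(-4, 10);
  mu[2] ~ normal(25, 10);
  Sigma ~ wishart(nu, v);
}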

Hi Yuling, thanks for your response. It’s true that alpha is redundant in the sense that it is just a scaling parameter for the covariance matrix in this context, but having that scaling parameter is theoretically important for the model (it’s part of a broader computational framework of social inference, and alpha describes how informative a participant thinks the data is). Showing that alpha and the covariance matrix are practically equivalent (and when they might not be) is something I also want to investigate through recovery simulations, which is why I need to be able to generate samples under different alpha values.

If y is one-dimensional, then clearly
target += alpha * normal_lpdf(y | mu, sigma);
is the same as
y ~ normal(mu, sigma / sqrt(alpha));

Hence, if you have draws y from normal(mu, sigma), just set y = mu + (y - mu) / sqrt(alpha).
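
Spelling out the algebra (for fixed mu and sigma, viewing this as a density in y, with terms that do not involve y absorbed into the constant):

$$
\alpha \log N(y \mid \mu, \sigma)
= -\frac{\alpha (y - \mu)^2}{2\sigma^2} + \text{const}
= -\frac{(y - \mu)^2}{2\,(\sigma/\sqrt{\alpha})^2} + \text{const}
= \log N(y \mid \mu, \sigma/\sqrt{\alpha}) + \text{const},
$$

so a draw $z \sim N(\mu, \sigma)$ turns into a draw from the tempered density via $\mu + (z - \mu)/\sqrt{\alpha}$.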

For multidimensional y you can work out a similar rescaling formula, for example as sketched below.
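
For the bivariate model in the question, the same argument gives, for fixed mu and Sigma, a density over y proportional to multi_normal(mu, Sigma / alpha), so the tempered draws can be produced directly in generated quantities. A sketch against the program above (y_rep is just an illustrative name, not tested):

generated quantities {
  // for fixed mu and Sigma, exp(alpha * multi_normal_lpdf(y | mu, Sigma))
  // is proportional to a multi_normal density with covariance Sigma / alpha
  vector[2] y_rep = multi_normal_rng(mu, Sigma / alpha);
  // equivalently: draw z = multi_normal_rng(mu, Sigma)
  // and set y_rep = mu + (z - mu) / sqrt(alpha)
}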

Such rescaling tricks will only work for Gaussian distributions, if that is what you mean by “alpha and the covariance matrix are practically equivalent (and when they might not be)”.