@mike-lawrence Thank you for the response!
I include the full model code below. At the most basic level, I am modelling cross-sectional human growth, where x is a continuous real corresponding to chronological age (from ~birth to 20 years of age in my data) and Y is a large response vector of length k. In other words, I have measured k continuous growth variables (e.g., size, breadth, etc.). I have parameterized the mean \mu as an offset power law and the noise \sigma as a linear function of age, capturing the heteroskedasticity that grows with chronological age. In the code below, I assume Y is distributed as multivariate normal with one mean per response k, and I decompose \Sigma to disentangle the noise from the correlation structure.
A few notes: I have also fit this model with a multivariate Student's t to compare against the MVN model. Further, I am in the process of trying to fit the model when the data are mixed; that is, the response vector contains both continuous and ordinal response variables. I am trying to do this using the GHK algorithm. I have been able to fit the continuous and ordinal models separately, but have yet to get them to work together… I have posted other times through the discourse here and here for further information about the model.
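Written out explicitly (using the same symbols as the code below), the parameterization for response k of individual i at age x_i is:

```
\mu_k(x_i) = a_k \, x_i^{r_k} + b_k
\sigma_k(x_i) = s_k \, (1 + \kappa_k x_i)
\Sigma(x_i) = \mathrm{diag}(\sigma(x_i)) \; \Omega \; \mathrm{diag}(\sigma(x_i))
```

where \Omega is the (age-invariant) correlation matrix and s_k is the constant noise scale (`s_scale` in the code).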
In terms of the covariance structure and/or the correlation structure, my ultimate goal is less about the parameters associated with the mean and noise. Instead, I am most concerned with the nature of the covariance structure between growth responses and how it changes with age (x). As the model is currently written, I let the noise vary as a function of x, but the correlations between growth traits are assumed to remain constant: there is a single correlation term for each pair of response variables regardless of age. My domain expertise suggests \rho, like \sigma, may also vary as an individual ages.
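To make that idea concrete, one way the correlations could be allowed to vary with age (just a sketch, not something I have fit; the names L_young, L_old, alpha0, and alpha1 are hypothetical) is to mix two correlation matrices with an age-dependent weight. A convex combination of correlation matrices is itself a valid correlation matrix, so the interpolated \Omega can safely be Cholesky-decomposed per individual:

```stan
parameters{
  cholesky_factor_corr[K] L_young; // hypothetical corr structure near birth
  cholesky_factor_corr[K] L_old;   // hypothetical corr structure at maturity
  real alpha0; // intercept of the mixing weight
  real alpha1; // age slope of the mixing weight
}
transformed parameters{
  matrix[K,K] Sigma[N]; // Cholesky factor of each covariance matrix
  matrix[K,K] Omega_young = multiply_lower_tri_self_transpose(L_young);
  matrix[K,K] Omega_old = multiply_lower_tri_self_transpose(L_old);
  for(i in 1:N){
    real w = inv_logit(alpha0 + alpha1*X[i]); // age-dependent weight in (0,1)
    matrix[K,K] Omega_i = w*Omega_young + (1 - w)*Omega_old; // still a valid corr matrix
    Sigma[i] = diag_pre_multiply(s[i], cholesky_decompose(Omega_i));
  }
}
```

This keeps the rest of the model (the `multi_normal_cholesky` likelihood) unchanged, at the cost of one Cholesky decomposition per individual.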
I hope all of this makes sense!
data{
  int<lower=1> N; // # of individuals
  int<lower=1> K; // # of responses
  vector[K] y[N]; // vector of growth responses per individual
  real X[N]; // age
}
parameters{
  vector<lower=0>[K] a; // multiplicative constant
  vector<lower=0>[K] r; // scaling parameter
  vector[K] b; // offset
  vector<lower=0>[K] s_scale; //constant noise
  vector<lower=0>[K] kappa; // slope/gradient of linear noise function
  cholesky_factor_corr[K] L_Omega; // Cholesky factor of the corr matrix for efficiency
}
transformed parameters{
  vector[K] mu[N]; //mean array of size N containing vectors of length K
  vector[K] s[N]; //sd array of size N containing vectors of length K
  matrix[K,K] Sigma[N]; // array of size N with the Cholesky factor of each K x K cov matrix
  for(i in 1:N){
    for(k in 1:K){
      mu[i,k] = a[k]*X[i]^(r[k]) + b[k]; // offset power-law mean function
      s[i,k] = s_scale[k]*(1 + kappa[k]*X[i]); // linear sd function of age
    }
    Sigma[i] = diag_pre_multiply(s[i], L_Omega); // scale the Cholesky corr matrix by the noise
  }
}
model{
  //priors
  a ~ normal(0,10); // half-normal (a is constrained positive)
  r ~ normal(0,1); // half-normal
  b ~ normal(0,10);
  kappa ~ normal(0,1); // half-normal
  s_scale ~ cauchy(0,5); // half-cauchy
  L_Omega ~ lkj_corr_cholesky(0.5);
  //likelihood
  for(i in 1:N){
    y[i] ~ multi_normal_cholesky(mu[i], Sigma[i]);
  }
}
generated quantities{
  vector[K] Y_mean[N]; // posterior mean
  vector[K] Y_pred[N]; // posterior predictive check
  corr_matrix[K] Omega; // re-assembled corr matrix
  cov_matrix[K] Sigma_new[N]; // re-assembled cov matrix
  vector[N] log_lik; // log-likelihood for model comparison
  // fabs() gives genuine half-normal/half-Cauchy draws; a <lower=0> constraint
  // on a raw normal_rng draw would reject whenever the draw is negative
  real a_prior = fabs(normal_rng(0,10)); // a prior
  real r_prior = fabs(normal_rng(0,1)); // r prior
  real b_prior = normal_rng(0,10); // b prior
  real s_const_prior = fabs(cauchy_rng(0,5)); // constant s prior
  real kappa_prior = fabs(normal_rng(0,1)); // kappa prior
  real s_prior_lin[N]; // heteroskedastic sd built from prior draws
  real y_sim[N]; // prior predictive distribution
  Omega = multiply_lower_tri_self_transpose(L_Omega); // corr matrix
  for(i in 1:N){
    Sigma_new[i] = quad_form_diag(Omega, s[i]); // posterior cov matrix
  }
  for(i in 1:N){
    for(k in 1:K){
      Y_mean[i,k] = a[k]*X[i]^(r[k]) + b[k]; // posterior mean
    }
    Y_pred[i] = multi_normal_rng(Y_mean[i], Sigma_new[i]); // posterior predictive
    s_prior_lin[i] = s_const_prior*(1 + kappa_prior*X[i]); // per-individual prior sd
    y_sim[i] = normal_rng(a_prior*X[i]^(r_prior) + b_prior, s_prior_lin[i]); // prior predictive draw
    log_lik[i] = multi_normal_cholesky_lpdf(y[i] | mu[i], Sigma[i]); // log lik of each obs
  }
}