Confusion about LKJ prior in mixture modeling example

I am trying to follow the example posted in this blog post.

It seems to fit a multivariate Gaussian mixture model using an LKJ prior, as shown here:

data {
  int D; // number of dimensions
  int K; // number of Gaussians
  int N; // number of data points
  vector[D] y[N]; // data
}

parameters {
  simplex[K] theta; // mixing proportions
  ordered[D] mu[K]; // mixture component means
  cholesky_factor_corr[D] L[K]; // Cholesky factor of the correlation matrix
}

model {
  real ps[K];

  for (k in 1:K) {
    mu[k] ~ normal(0, 3);
    L[k] ~ lkj_corr_cholesky(4);
  }

  for (n in 1:N) {
    for (k in 1:K) {
      ps[k] = log(theta[k]) + multi_normal_cholesky_lpdf(y[n] | mu[k], L[k]); // log probability of y[n] under component k
    }
    target += log_sum_exp(ps);
  }
}

I am confused by the fact that she samples

L[k] ~ lkj_corr_cholesky(4);

and then puts it into

multi_normal_cholesky_lpdf

I would have thought you would also need a prior for the variance part of the covariance matrix. Is that not correct?

This example assumes the variances are known to be 1, so the covariance matrix coincides with the correlation matrix. You could easily introduce variances or standard deviations in the parameters block, in which case you would need to incorporate them into the likelihood function and put a prior on them.
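For concreteness, here is a minimal sketch of what that could look like for the mixture model above. The sigma parameter, its half-normal prior, and the diag_pre_multiply call are my additions, not part of the blog post:

parameters {
  simplex[K] theta; // mixing proportions
  ordered[D] mu[K]; // mixture component means
  cholesky_factor_corr[D] L[K]; // Cholesky factor of the correlation matrix
  vector<lower=0>[D] sigma[K]; // per-component standard deviations (added)
}

model {
  real ps[K];

  for (k in 1:K) {
    mu[k] ~ normal(0, 3);
    L[k] ~ lkj_corr_cholesky(4);
    sigma[k] ~ normal(0, 2); // half-normal prior on the scales; one choice among many
  }

  for (n in 1:N) {
    for (k in 1:K) {
      // scale the correlation Cholesky factor by the standard deviations
      // to get the Cholesky factor of the covariance matrix
      ps[k] = log(theta[k]) + multi_normal_cholesky_lpdf(y[n] | mu[k], diag_pre_multiply(sigma[k], L[k]));
    }
    target += log_sum_exp(ps);
  }
}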

So something analogous to this?

data {
  int<lower=1> N; // number of observations
  int<lower=1> J; // dimension of observations
  vector[J] y[N]; // observations
  vector[J] Zero; // a vector of zeros (fixed means of the observations)
}
parameters {
  cholesky_factor_corr[J] Lcorr; // Cholesky factor of the correlation matrix
  vector<lower=0>[J] sigma; // standard deviations
}
model {
  y ~ multi_normal_cholesky(Zero, diag_pre_multiply(sigma, Lcorr));
  sigma ~ cauchy(0, 5); // half-Cauchy prior on the standard deviations
  Lcorr ~ lkj_corr_cholesky(1);
}
generated quantities {
  matrix[J, J] Omega; // correlation matrix
  matrix[J, J] Sigma; // covariance matrix
  Omega = multiply_lower_tri_self_transpose(Lcorr);
  Sigma = quad_form_diag(Omega, sigma);
}

Something like that. I am not a fan of the half-Cauchy prior, but whatever.
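For example, a thinner-tailed alternative to cauchy(0, 5) would be an exponential or half-normal prior on the scales (my suggestion, not something from the blog post):

  sigma ~ exponential(1); // thinner tails than cauchy(0, 5)
  // or, given the <lower=0> constraint, a half-normal:
  // sigma ~ normal(0, 2);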