Operating System: CentOS x86_64 GNU/Linux

Interface Version: pystan 2.17.0.0

Compiler/Toolkit: stan 2.17.0.0

I’ve been tinkering with Stan fairly recently – so far I have found it to be an incredibly expressive language. It is truly great that these resources are under active development.

Right now, I’m trying to implement a hierarchical multinomial-normal regression model. It seems to be working on really small examples. The code is below:

```
data {
  int<lower=0> N;        // number of samples
  int<lower=0> D;        // number of dimensions
  int<lower=0> r;        // rank of partition matrix
  matrix[D-1, D] psi;    // partition matrix for ilr transform
  real g[N];             // covariate values
  int x[N, D];           // observed counts
}
parameters {
  vector[D-1] beta0;
  vector[D-1] beta1;
  matrix[N, D-1] x_;     // ilr-transformed abundances
  vector<lower=0>[D-1] sigma;
  matrix[D-1, r] W;      // factor for covariance matrix
}
transformed parameters {
  cov_matrix[D-1] Sigma;
  matrix[D-1, D-1] I;
  I = diag_matrix(rep_vector(1.0, D-1));
  Sigma = W * W' + diag_matrix(sigma) + 0.001 * I;
}
model {
  // priors (vectorized)
  beta0 ~ normal(0, 15);
  beta1 ~ normal(0, 15);
  sigma ~ lognormal(0, 1);
  to_vector(W) ~ normal(0, 5);

  for (j in 1:N) {
    x_[j] ~ multi_normal(g[j] * beta1 + beta0, Sigma);
    // perform ilr inverse and multinomial sampling
    x[j] ~ multinomial(softmax(to_vector(x_[j] * psi)));
  }
}
```
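To make the likelihood concrete, here is a small NumPy sketch of the last two model statements – the latent row `x_[j]` is mapped through the ilr inverse and then sampled as multinomial counts. The Helmert-style `psi` built here is just a placeholder contrast matrix for illustration, not my actual partition matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 5

# hypothetical ilr contrast matrix: rows sum to zero and are orthonormal
# (Helmert-style construction, standing in for the real partition matrix)
psi = np.zeros((D - 1, D))
for i in range(D - 1):
    psi[i, : i + 1] = 1.0
    psi[i, i + 1] = -(i + 1)
    psi[i] /= np.linalg.norm(psi[i])

x_latent = rng.normal(size=D - 1)        # one row of x_ in the Stan model
logits = x_latent @ psi                  # ilr inverse, up to the softmax
p = np.exp(logits) / np.exp(logits).sum()
counts = rng.multinomial(100, p)         # multinomial draw, as in the model
```

The point is just that each row of `x_` lives in the (D-1)-dimensional ilr space, and `psi` maps it back to D multinomial categories.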

The problem that I am currently running into is scaling this to larger datasets. I am able to get it to run on an `N=40, D=256` dataset using ADVI. However, with `N=40, D=512`, the program crashes after its memory footprint grows past 9GB.

The largest data structure in this program is the covariance matrix, which would be on the order of 511 x 511. I’d think that would be a few MB in size, not GB, right?
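As a quick sanity check on that intuition, the raw storage for one dense 511 x 511 matrix of doubles is:

```python
# raw storage for a dense (D-1) x (D-1) covariance matrix of doubles
D = 512
n_entries = (D - 1) * (D - 1)   # 511 * 511 = 261,121 entries
bytes_total = n_entries * 8     # 8 bytes per double
mb = bytes_total / 1e6          # roughly 2.1 MB
```

So the matrix itself is about 2 MB – nowhere near 9 GB.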

I’m not sure what is going on with the memory footprint, or how best to diagnose this. Any insights would be greatly appreciated!