# :: Parallelization :: Pystan :: Covariance Matrix

Dear STAN community,

I need to parallelize a model run. Reading the docs (including the threading docs), I learned that pystan (I use 2.19) on Windows 10 is not the best combination for parallel runs with Stan.

From the docs, I gather I have two options:

1. install cmdstan and use `reduce_sum`
2. stay with pystan and use `map_rect`

Q1: If I use `reduce_sum` with cmdstan, is the code below likely to work?
Q2: Can you point me to a source that describes how covariance matrices can be used with `map_rect`? I did not find anything that helped me.
Q3: Which is the better long-term bet: cmdstan or pystan (system: Windows & Python)?

``````stan
// in functions:
real partial_sum1(real[,] W_slice, int start, int end,
                  vector W_mean, matrix[] L_COVMAT) {
  // partial sum for reduce_sum (not yet available in pystan 2.19)
  // W_slice .. the slice of W that reduce_sum hands to this call
  //            (already cut to rows start..end, so index it from 1)
  // W_mean  .. mean vector of length 4
  // L_COVMAT.. array of [4,4] Cholesky factors, one per observation
  real lp = 0;
  for (i in start:end) {
    // multi_normal_cholesky_lpdf takes a single Cholesky factor,
    // so loop over the slice instead of vectorizing over L
    lp += multi_normal_cholesky_lpdf(to_vector(W_slice[i - start + 1]) | W_mean, L_COVMAT[i]);
  }
  return lp;
}

// in model:
target += reduce_sum(partial_sum1, W, grain_size, W_mean, L_COVMAT);

// the above parallelizes this loop:
// for (i in 1:data_n)
//   W[i] ~ multi_normal_cholesky(W_mean, L_COVMAT[i]);
``````

If you’re committed to Windows and can’t use the Linux subsystem, then I’d suggest using `cmdstanpy`. It plays more nicely with Windows because it doesn’t need to run Stan’s C++ in the same environment as Python.

It looks OK, but anything that evaluates a bunch of multivariate normals with different covariance matrices is going to be slow.
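To see where the cost comes from, here is a minimal NumPy sketch (names like `mvn_cholesky_logpdf` are illustrative, not Stan internals) of the work the likelihood does per observation: with a different Cholesky factor for every data point, a triangular solve and a log-determinant must be redone for each `i`, and nothing can be shared across the loop.

```python
import numpy as np

def mvn_cholesky_logpdf(w, mu, L):
    """Log density of MVN(mu, L @ L.T) at w, given the lower Cholesky factor L."""
    k = len(mu)
    z = np.linalg.solve(L, w - mu)               # one triangular solve per observation
    log_det = 2.0 * np.sum(np.log(np.diag(L)))   # log|Sigma| from the factor's diagonal
    return -0.5 * (k * np.log(2.0 * np.pi) + log_det + z @ z)

# With a different L per observation, the solve and log-determinant
# are repeated for every single data point:
rng = np.random.default_rng(0)
N, k = 1000, 4
W = rng.normal(size=(N, k))
A = rng.normal(size=(N, k, k))
Ls = np.linalg.cholesky(A @ np.swapaxes(A, 1, 2) + 4 * np.eye(k))  # N valid factors
total = sum(mvn_cholesky_logpdf(W[i], np.zeros(k), Ls[i]) for i in range(N))
```

With a single shared covariance, by contrast, the factorization and log-determinant are computed once and reused, which is why the shared-covariance case is so much faster.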

It’s just a matter of packing matrices into arrays and then unpacking them. It’s a lot of index fiddling, I’m afraid. But you should probably stick with `reduce_sum` if you can.
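The "index fiddling" on the data-prep side can be sketched in Python. This is a hypothetical illustration (the helper names `pack_shards`/`unpack_shard` are mine, not from `map_rect` itself): each observation's 4-vector is packed together with its flattened 4×4 Cholesky factor into one row of the real-data array `x_r`, one shard per row, and the shard function unpacks by position.

```python
def pack_shards(W, L_list, K=4):
    """Pack W (N x K) and a list of N KxK matrices into x_r (N x (K + K*K))."""
    x_r = []
    for w_row, L in zip(W, L_list):
        flat_L = [L[i][j] for i in range(K) for j in range(K)]  # row-major flatten
        x_r.append(list(w_row) + flat_L)
    return x_r

def unpack_shard(row, K=4):
    """Mirror of the Stan-side unpacking: first K entries are W, the rest is L."""
    w = row[:K]
    L = [row[K + i * K : K + (i + 1) * K] for i in range(K)]
    return w, L
```

On the Stan side, the shard function would rebuild the matrix from the same slice of `x_r` with `to_matrix`, minding row- versus column-major order so the two sides agree.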


Thank you, Bob.
Whether I like it or not, I am bound to Windows, and since I decided to continue with cmdstanpy, things have changed for the better.
Kind regards
