Arrays with varying subset dimensionality

Hi!

I was looking to tinker with implementing Bayesian Neural Networks in Stan, just to compare NUTS performance relative to other software (quite aware of the threads on this). I wanted the user to easily define and vary the number of hidden units per layer of the neural network.

As a simple reference, an NN layer with m units connected to a layer with n units has a matrix[m, n] of weight parameters. If there are L layers and every layer has m hidden units, then I can define an array to hold the weights as follows (which works fine):

matrix[m, m] W[L];
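
For context, here is a minimal sketch of that fixed-size version with everything declared (assuming m and L come in through the data block):

```stan
data {
  int<lower=1> m;      // hidden units per layer (same for every layer)
  int<lower=1> L;      // number of weight matrices
}
parameters {
  matrix[m, m] W[L];   // one m x m weight matrix per layer
}
```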

However, if the number of hidden units per layer changes, then W cannot have a fixed shape for each of the L slices. Can Stan handle this within its array implementation? Would really appreciate any pointers!

The usual answer here is

  1. Declare a larger matrix than you need, along with a separate integer array that says how big each slice’s matrix actually is (better when you won’t hit memory limits and you really need matrix ops); see the first sketch after this list.

  2. Flatten further and use a single vector holding all the weights (length equal to the sum of the per-layer matrix sizes), plus additional integer arrays that tell you which slice/row/column each entry belongs to. This is a sparse representation, and it’s the better option if the padded version would hit memory limits or slow things down because the matrix sizes vary a lot; see the second sketch after this list.
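
Here is a minimal sketch of option 1. Names like n_units, max_units, and W_l are just placeholders I made up, the priors are placeholders too, and the forward pass is elided:

```stan
data {
  int<lower=1> L;                  // number of weight matrices
  int<lower=1> n_units[L + 1];     // units per layer: input, hidden ..., output
  int<lower=1> max_units;          // max(n_units), passed in as data
}
parameters {
  // every slice is padded out to the largest size that occurs anywhere
  matrix[max_units, max_units] W[L];
}
model {
  for (l in 1:L) {
    // only the top-left n_units[l] x n_units[l + 1] block is "real";
    // pull it out with block() whenever this layer's weights are needed
    matrix[n_units[l], n_units[l + 1]] W_l
        = block(W[l], 1, 1, n_units[l], n_units[l + 1]);
    // ... use W_l in the forward pass ...
  }
  // placeholder prior on every entry, padded ones included
  for (l in 1:L)
    to_vector(W[l]) ~ normal(0, 1);
}
```

The padded entries never enter the likelihood, so giving them the same prior as the real weights just keeps them from drifting and shouldn’t affect the posterior over the weights you actually use.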
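
And a sketch of option 2, the flattened version. Here pos and n_weights are hypothetical data inputs giving each layer’s starting index and the total weight count; you could also compute them in transformed data from n_units:

```stan
data {
  int<lower=1> L;
  int<lower=1> n_units[L + 1];
  int<lower=1> n_weights;          // sum of n_units[l] * n_units[l + 1] over l
  int<lower=1> pos[L];             // start index of each layer's weights in the flat vector
}
parameters {
  vector[n_weights] w_flat;        // all weights in one long vector
}
model {
  for (l in 1:L) {
    // pull out this layer's segment and reshape it into a matrix
    matrix[n_units[l], n_units[l + 1]] W_l
        = to_matrix(segment(w_flat, pos[l], n_units[l] * n_units[l + 1]),
                    n_units[l], n_units[l + 1]);
    // ... use W_l in the forward pass ...
  }
  w_flat ~ normal(0, 1);           // placeholder prior
}
```

Note that to_matrix fills column-major, so whatever code writes the flat vector (or reads it back out) just needs to follow the same convention.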

There are some language features coming up (or in already?) that might help with this but I haven’t used them yet.


Thank you @sakrejda! The large matrix idea is quite clever and I’ll try that out.