I was looking to tinker with implementing Bayesian Neural Networks in Stan, just to compare NUTS performance relative to other software (quite aware of the threads on this). I wanted the user to easily define and vary the number of hidden units per layer of the neural network.
As a simple reference, an NN layer with m units connected to a layer with n units has matrix[m, n] weight parameters. If there are L layers, each with m hidden units, then I can define an array to contain the weights as follows (which works fine):
matrix[m, m] W[L];
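For context, here is a minimal sketch of how that declaration sits in a full program; the data-block variable names m and L are just my choices for illustration:

```stan
data {
  int<lower=1> m;  // hidden units per layer (same for every layer)
  int<lower=1> L;  // number of layers
}
parameters {
  matrix[m, m] W[L];  // one m x m weight matrix per layer
}
```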
However, if the number of hidden units varies across layers, then W cannot have a fixed shape for each of its L slices. Can Stan handle this within its array implementation? I would really appreciate any pointers!