Reduced rank regression

Hello Stan Community,

I intend to run a reduced rank regression for the very first time, and I was wondering whether there are any requirements for what the outcome variable should be. My outcome variables are all categorical, but most of the examples I’ve seen use continuous response variables.

Thanks.

I don’t have any experience with reduced rank regression, but close alternatives to it (i.e. Bayesian versions of penalized regression methods: ridge, lasso, and elastic net) are implementable in Stan. For example, you can fit an analogue of ridge regression in Stan by setting a hierarchical prior on the standard deviation of the regression slopes (http://haines-lab.com/post/on-the-equivalency-between-the-lasso-ridge-regression-and-specific-bayesian-priors/), or the more-powerful-but-more-fussy horseshoe prior (https://betanalpha.github.io/assets/case_studies/bayes_sparse_regression.html#2_constructing_prior_distributions_that_induce_sparsity).
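For concreteness, here’s a rough sketch of what that ridge-like setup can look like in Stan. It’s untested and uses a continuous outcome just to keep it minimal; the data block names are made up:

```stan
data {
  int<lower=1> N;            // observations
  int<lower=1> K;            // predictors
  matrix[N, K] X;            // design matrix
  vector[N] y;               // continuous outcome, just for illustration
}
parameters {
  real alpha;                // intercept
  vector[K] beta;            // slopes
  real<lower=0> tau;         // shared scale of the slopes (the "ridge" part)
  real<lower=0> sigma;       // residual sd
}
model {
  alpha ~ normal(0, 5);
  tau ~ normal(0, 1);        // half-normal via the lower bound
  beta ~ normal(0, tau);     // hierarchical normal prior = ridge-like shrinkage
  sigma ~ normal(0, 1);
  y ~ normal(alpha + X * beta, sigma);
}
```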

Afaik, technically, you should be able to fit these models with whatever kind of likelihood you like, meaning that your outcome variable can also look however you want. Big emphasis on the “technically”, though: a specific choice of likelihood may not play well with the shrinkage prior & you might end up with slow sampling & divergent transitions.
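Since your outcomes are categorical, the same shrinkage prior can sit under a categorical_logit likelihood instead. A rough sketch (again untested; pinning the last category’s coefficients to zero is just one way to identify the model):

```stan
data {
  int<lower=1> N;                      // observations
  int<lower=1> K;                      // predictors
  int<lower=2> C;                      // outcome categories
  matrix[N, K] X;
  array[N] int<lower=1, upper=C> y;    // categorical outcome
}
parameters {
  vector[C - 1] alpha_raw;             // intercepts (last category = reference)
  matrix[K, C - 1] beta_raw;           // slopes for the non-reference categories
  real<lower=0> tau;                   // shared shrinkage scale, as above
}
transformed parameters {
  // pin the reference category's coefficients to zero for identification
  row_vector[C] alpha = append_col(to_row_vector(alpha_raw), 0);
  matrix[K, C] beta = append_col(beta_raw, rep_vector(0, K));
}
model {
  alpha_raw ~ normal(0, 5);
  tau ~ normal(0, 1);                   // half-normal via the lower bound
  to_vector(beta_raw) ~ normal(0, tau); // ridge-like shrinkage on all slopes
  for (n in 1:N)
    y[n] ~ categorical_logit((alpha + X[n] * beta)');
}
```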

From my experience, the hierarchical normal (ridge-like) prior plays pretty well with most likelihoods/model structures I’ve used, is fast to fit, and performs well in terms of prediction. The horseshoe can give you nicer, sparser solutions, but that sometimes comes at the cost of slower sampling & more problems with the model fit, so you might end up having to really dig into your model and set informative priors.
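If you do want to try the horseshoe, this is roughly what the plain (non-regularized) version looks like as a drop-in replacement for the ridge-like prior above; it’s a sketch only, and the case study I linked walks through the regularized variant, which usually samples better:

```stan
data {
  int<lower=1> N;
  int<lower=1> K;
  matrix[N, K] X;
  vector[N] y;
}
parameters {
  real alpha;
  vector[K] z;                    // non-centered "raw" slopes
  vector<lower=0>[K] lambda;      // local shrinkage, one per slope
  real<lower=0> tau;              // global shrinkage
  real<lower=0> sigma;
}
transformed parameters {
  vector[K] beta = z .* lambda * tau;   // horseshoe-distributed slopes
}
model {
  alpha ~ normal(0, 5);
  z ~ std_normal();
  lambda ~ cauchy(0, 1);          // half-Cauchy via the lower bound
  tau ~ cauchy(0, 1);             // worth tightening; see the linked case study
  sigma ~ normal(0, 1);
  y ~ normal(alpha + X * beta, sigma);
}
```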