Akash Dhaka will present "Robust, Accurate Stochastic Optimization for Variational Inference", which is joint work with @AlejandroCatalina, Michael R. Andersen, @mans_magnusson, Jonathan Huggins, and me. The poster session is Thu Dec 10, 09:00 AM – 11:00 AM (PST) @ Poster Session 5 #1477.
The article describes two simple ways to improve ADVI in Stan, and illustrates a limitation of the currently used stochastic optimization: even when the approximating family includes the true posterior, for moderate- and high-dimensional posteriors the optimization is likely to stop far from the target. We can improve this by 1) using a convergence diagnostic as the stopping rule (instead of an ELBO threshold) and 2) averaging the iterates after convergence. These changes make the optimization more robust and accurate, with less variation across re-runs. Once we know how to make the stochastic optimization robust and accurate, we can start considering alternative approximating distributions, divergences, etc. The presented stochastic optimization improvements are not yet implemented in Stan C++.
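Very roughly, the two ideas look like the sketch below (plain NumPy, not the Stan C++ implementation; the `grad_step` callable, the window size, the check interval, and the R-hat threshold are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def split_rhat(draws):
    """Split-R-hat over a 1-D trace of iterate values (rough sketch):
    split the trace into two halves and compare between- and
    within-half variance, as in the usual MCMC diagnostic."""
    n = len(draws) // 2
    chains = np.stack([draws[:n], draws[n:2 * n]])        # 2 x n
    within = chains.var(axis=1, ddof=1).mean()
    between = n * chains.mean(axis=1).var(ddof=1)
    var_hat = (n - 1) / n * within + between / n
    return np.sqrt(var_hat / within)

def optimize_with_diagnostic(init, grad_step, n_iter=10_000,
                             window=200, rhat_threshold=1.1):
    """Run stochastic updates until the trailing window of iterates
    looks stationary (split-R-hat below threshold for every
    coordinate), then continue for `window` more iterations and
    return their average (Polyak-Ruppert style iterate averaging)."""
    theta = np.asarray(init, dtype=float)
    trace = [theta.copy()]
    converged_at = None
    for t in range(1, n_iter + 1):
        theta = grad_step(theta, t)            # one stochastic update (assumed callable)
        trace.append(theta.copy())
        if converged_at is None and t >= window and t % 50 == 0:
            recent = np.array(trace[-window:])
            rhats = [split_rhat(recent[:, d]) for d in range(recent.shape[1])]
            if max(rhats) < rhat_threshold:
                converged_at = t               # stop adapting, start averaging
        if converged_at is not None and t >= converged_at + window:
            break
    # average the post-convergence iterates instead of returning the last one
    tail = trace[-window:] if converged_at is not None else trace
    return np.mean(tail, axis=0)
```

The point of the sketch is the control flow: the stopping decision is driven by a stationarity diagnostic on the iterate trace rather than a small change in the noisy ELBO estimate, and the returned solution is an average over post-convergence iterates rather than the final (noisy) iterate.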