Optimizing functions of Stan random variables

Hi. I am wondering (hoping) that there is a way of doing stochastic function optimization using PyStan.

In particular, assume you have data for a Poisson count process Q ~ poisson(exp(alpha + beta * x)) where x is given data. You fit alpha and beta using Stan. Now you seek to find the minimum of a function F(Q) over x. There are known ways of doing this in model predictive control, but I was wondering if there is any functionality already in Stan to do this?

From a Bayesian perspective, the usual way of doing something like this is to try to choose x to minimize the expectation of F(Q(x)) with respect to alpha and beta. In other words, for any x, evaluate F(Q(x)) for each posterior draw of alpha and beta and calculate the mean of F(Q(x)) over the posterior draws. Then choose a better x until you find a minimum.
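
In Python, that might look roughly like the sketch below (the draws, bounds, and F are placeholders, not anything from your model):

import numpy as np
from scipy.optimize import minimize_scalar

# placeholder posterior draws of alpha and beta; in practice these come from the Stan fit
rng = np.random.default_rng(0)
alpha_draws = rng.normal(1.0, 0.1, size=4000)
beta_draws = rng.normal(-0.5, 0.05, size=4000)

def F(q):
    # placeholder objective; replace with your actual function of the count Q
    return (q - 10.0) ** 2

def expected_F(x):
    # evaluate F(Q(x)) for each posterior draw (here Q(x) is taken to be the
    # Poisson mean exp(alpha + beta * x)) and average over the draws
    q = np.exp(alpha_draws + beta_draws * x)
    return np.mean(F(q))

best = minimize_scalar(expected_F, bounds=(0.0, 20.0), method="bounded")
print(best.x, best.fun)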

I think what you might be saying is that you are treating Q(x) as a random variable, in which case for each posterior draw of alpha and beta, draw from the posterior predictive distribution of Q — which can be accomplished in Stan by calling poisson_log_rng(alpha + beta * x) — and calculate the mean of F(Q(x)) over the posterior predictive draws. For each x, the objective function is not deterministic. There is nothing built into Stan to specifically handle this, but you should be able to do stochastic gradient descent or something. Actually, the ADVI implementation in Stan does something similar, but its optimization routine is not exported in a general way.
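
And a sketch of the posterior predictive version, where Q(x) is drawn rather than plugged in (again with placeholder draws and F), which makes the objective noisy:

import numpy as np

# placeholder posterior draws, as above
rng = np.random.default_rng(1)
alpha_draws = rng.normal(1.0, 0.1, size=4000)
beta_draws = rng.normal(-0.5, 0.05, size=4000)

def F(q):
    return (q - 10.0) ** 2  # placeholder objective

def expected_F_predictive(x):
    # for each posterior draw, also draw Q(x) ~ Poisson(exp(alpha + beta * x)),
    # the NumPy analogue of poisson_log_rng(alpha + beta * x), then average F
    q = rng.poisson(np.exp(alpha_draws + beta_draws * x))
    return np.mean(F(q))

# two calls at the same x give different values, which is why an off-the-shelf
# deterministic optimizer struggles and more draws or a stochastic method helps
print(expected_F_predictive(2.0), expected_F_predictive(2.0))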


Thanks, that is where I am at currently (using poisson_log_rng in my generated quantities block). For a given x, everything is fine. My issue is that I need to find a sequence of x that minimizes the function F(Q(x)) over a finite time horizon (i.e. the x are really x_t).

There should be an icon that looks like a pencil that allows you to edit, but anyway, if you are really optimizing over a vector[T] x, then you need

generated quantities {
  real mean_y_tilde;
  {
    vector[T] y_tilde;
    for (t in 1:T) y_tilde[t] = poisson_log_rng(alpha + beta * x[t]);
    mean_y_tilde = mean(y_tilde);
  }
}

It’s just not obvious where it is. Click on the three dots to the left of Reply, then on the thing that looks like a pencil.

Thanks @bgoodri. That is exactly what I have. What I wasn’t conveying properly is that, once you set a given x[i], the posterior of Q(x) gets updated with the new information, and hence so does F(Q(x)). So there is an explore/exploit tradeoff in pursuing the optimal x given the uncertainty in Q. My question is how best to take advantage of that over time using a Stan model.

Basically, I would like to replace the Gaussian process in the paper https://arxiv.org/pdf/1706.06491.pdf with a fitted Stan model and optimize my function (in a more generic sense than just dynamical systems).
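
For concreteness, one simple way I could imagine getting that explore/exploit behaviour is something Thompson-sampling-like: at each round, act greedily under a single posterior draw. A schematic sketch (the draws and F are placeholders, and the observation/refitting steps are only stand-ins, not real functions):

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
alpha_draws = rng.normal(1.0, 0.1, size=4000)   # placeholder posterior draws
beta_draws = rng.normal(-0.5, 0.05, size=4000)

def F(q):
    return (q - 10.0) ** 2                       # placeholder objective

def choose_next_x(alpha_draws, beta_draws):
    # Thompson-sampling-style step: pick ONE posterior draw at random and minimize
    # F under it; posterior uncertainty in (alpha, beta) then drives exploration
    i = rng.integers(len(alpha_draws))
    a, b = alpha_draws[i], beta_draws[i]
    res = minimize_scalar(lambda x: F(np.exp(a + b * x)), bounds=(0.0, 20.0), method="bounded")
    return res.x

# schematic outer loop; observe() would run the experiment at x_next, and
# refitting means re-running the Stan model with the augmented data
# for step in range(n_steps):
#     x_next = choose_next_x(alpha_draws, beta_draws)
#     y_new = observe(x_next)
#     alpha_draws, beta_draws = refit_with(x_next, y_new)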

Does this make sense?

I ran into a similar problem. Did you ever solve it?

I actually solved the optimization problem in Python. After inference, you extract the posterior draws for your parameters and then wrap those inside a Python function. Revisiting @bgoodri's answer though, that is the way I would do it now. (I guess I didn’t understand what he was talking about 3 years ago…)
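
For the finite-horizon version, that Python wrapper ends up being an expected cost over the whole sequence x_1..x_T, something like this sketch (placeholder draws and cost; in practice the draws come out of the fitted Stan model, e.g. fit.extract() with PyStan 2):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
alpha_draws = rng.normal(1.0, 0.1, size=4000)   # placeholder posterior draws
beta_draws = rng.normal(-0.5, 0.05, size=4000)

def F(q_seq):
    return np.sum((q_seq - 10.0) ** 2)           # placeholder cost over the horizon

def objective(x_seq):
    # expected cost of the whole sequence under the posterior:
    # one row of Poisson means per posterior draw, one column per time step
    q = np.exp(alpha_draws[:, None] + beta_draws[:, None] * x_seq[None, :])
    return np.mean([F(row) for row in q])

T = 5
result = minimize(objective, x0=np.zeros(T), method="Nelder-Mead")
print(result.x)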