How to reuse a trained model to make predictions and get the predicted values

Hi,

I have built a model with a few different inputs, let's say they are vectors A and B.
I have trained the model through fit = sm.optimizing(data=data, init=init, algorithm='BFGS'), with some values passed in for A and B.

Now I want to use the trained model to make predictions for new values of A and B. How can I pass the new values into the trained model and get the predicted results?

I am using PyStan, by the way.

Thank you very much for your kind help.

Regards,
PS


Depends on the shape of the model and what you want to predict.

Given that you're fitting a point estimate, you probably want to do prediction by taking the point estimate and plugging it into the likelihood. In general, you can do that within the generated quantities block of a Stan program and make predictions at the same time as you fit.
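For concreteness, here is a minimal sketch of what that can look like in PyStan. It is a toy linear regression, not your actual model, and x, y, x_new, y_new, alpha, beta, and sigma are all placeholder names. The new inputs go in as data, and the predictions come out of generated quantities:

```python
# Minimal sketch only -- a toy linear regression, not the poster's model.
# All names (x, y, x_new, y_new, alpha, beta, sigma) are placeholders.
import pystan

stan_code = """
data {
  int<lower=0> N;          // number of training points
  vector[N] x;             // training inputs (your A and B would go here)
  vector[N] y;             // training outputs
  int<lower=0> N_new;      // number of new points to predict
  vector[N_new] x_new;     // new inputs
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;
}
model {
  y ~ normal(alpha + beta * x, sigma);
}
generated quantities {
  vector[N_new] y_new;     // predictions for the new inputs
  for (n in 1:N_new)
    y_new[n] = normal_rng(alpha + beta * x_new[n], sigma);
}
"""

sm = pystan.StanModel(model_code=stan_code)
```

The catch is that the new inputs have to be part of the data when you fit, so the predictions are produced at the same time as the fit.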

Alternatively, you can extract the parameters and implement the predictive model on the outside in Python. I’m not sure if PyStan lets you extract Stan functions (RStan does)—if it does, that’d be one way to write a prediction function.
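A rough sketch of that second route, reusing the point estimate from optimizing on the toy model above (the data values here are made up purely for illustration):

```python
import numpy as np

# Made-up data, purely for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])
x_new = np.array([5.0, 6.0])
data = {'N': len(x), 'x': x, 'y': y, 'N_new': len(x_new), 'x_new': x_new}

mle = sm.optimizing(data=data, algorithm='BFGS')

# Plug the point estimates into the mean of the likelihood by hand:
y_new_hat = mle['alpha'] + mle['beta'] * x_new
```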

If you do it within the generated quantities block in Stan, you can always run full Bayes over the model and get uncertainties in the predictions.
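With the same toy model, that looks roughly like this: every posterior draw gets its own y_new, and the spread of those draws is your predictive uncertainty.

```python
import numpy as np

# Full Bayes over the same toy model; y_new is drawn once per posterior draw.
fit = sm.sampling(data=data, iter=2000, chains=4)
draws = fit.extract()                 # dict of posterior draws
y_new_draws = draws['y_new']          # shape: (num_draws, N_new)

y_new_mean = y_new_draws.mean(axis=0)
y_new_interval = np.percentile(y_new_draws, [2.5, 97.5], axis=0)
```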


I was about to ask a similar question but with a slight complication. My model fits a Gaussian process in which the variance-covariance matrix depends on the hyperparameters. That means the matrix is recalculated in the model block at every iteration, so if I want predictions with uncertainties it looks as if I have to recalculate the matrix in the generated quantities block. I notice that is what Rob Trangucci does in one of his GP example programs.
Doing it twice seems inefficient, particularly if the computations for the matrix are complicated, as mine are. Is there a better way?

I think I have found the answer to my own question: declaring the variance-covariance matrix in the transformed parameters block means it is saved with each draw, so it does not have to be recomputed in the generated quantities block.

If the covariance matrix is big, it can be less efficient to save the covariance matrix for each iteration than to save just the covariance function parameters and recompute the covariance in generated quantities from those saved parameters. If the covariance matrix is small, the computation time should be negligible anyway.
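To make the trade-off concrete, here is a rough sketch of a generic exponentiated-quadratic GP (a placeholder program, not the one being discussed):

```python
# Sketch of the two options for a GP; placeholder model, not the poster's.
gp_code = """
data {
  int<lower=1> N;
  real x[N];
  vector[N] y;
}
parameters {
  real<lower=0> rho;
  real<lower=0> alpha;
  real<lower=0> sigma;
}
transformed parameters {
  // Option (a): build K once per iteration; it is then saved with every draw,
  // which costs the storage of an N x N matrix per iteration.
  matrix[N, N] K = cov_exp_quad(x, alpha, rho)
                   + diag_matrix(rep_vector(square(sigma), N));
}
model {
  rho ~ inv_gamma(5, 5);
  alpha ~ normal(0, 1);
  sigma ~ normal(0, 1);
  y ~ multi_normal(rep_vector(0, N), K);
}
// Option (b): compute K as a local variable inside the model block instead,
// save only rho, alpha, and sigma, and rebuild the covariance in generated
// quantities when it is needed for prediction, trading storage for one
// extra covariance evaluation per draw.
"""
```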

How do I do full Bayes over the model and get uncertainties in the predictions? I tried the .sampling method and got many samples, so I can quantify the uncertainty, and the results look fine. But when I use .optimizing to do a point estimate, the result is really bad. I do not understand why these two methods differ that much, so I was wondering whether there are other methods to do full Bayes.
Thanks a lot!