Priors on derived quantities of a model

I have a general question about priors:

Imagine that I derived a posterior p(f | y) from a previous experiment for a quantity f(\theta) that is a function of the parameters of my new modeling problem. Since \theta has a higher dimension than f, the inverse f^{-1} does not exist, so I cannot reformulate this posterior as a prior p(\theta) analytically.

I guess a statement in Stan such as

target += log p(f | y)

would implicitly give my model for the new data the correct prior probabilities on the \theta manifold?
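
To make this concrete, here is a minimal sketch of what I mean, with everything hypothetical: the old posterior on f is summarized as a normal with mean `f_hat` and sd `f_sd`, f(\theta) is a made-up scalar function, and the likelihood is a toy normal regression:

```stan
data {
  int<lower=0> N;
  vector[N] x;             // covariate in the new experiment
  vector[N] y_new;         // outcomes from the new experiment
  real f_hat;              // posterior mean of f from the old experiment
  real<lower=0> f_sd;      // posterior sd of f from the old experiment
  real<lower=0> sigma;     // noise scale, assumed known for simplicity
}
parameters {
  vector[2] theta;
}
transformed parameters {
  // hypothetical derived quantity f(theta); many theta give the same f,
  // so f is not invertible
  real f = theta[1] + theta[2];
}
model {
  // old posterior on f, added to the target as a prior on the theta manifold
  target += normal_lpdf(f | f_hat, f_sd);
  // likelihood for the new data
  y_new ~ normal(theta[1] + theta[2] * x, sigma);
}
```

One caveat I can already see: this statement only constrains \theta in the directions that change f; along the level sets of f the prior stays flat (and possibly improper).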

Does anyone have counter-arguments to this approach?

Thanks, Jan


Hi,

Sorry we weren’t able to get to your question earlier.

@andrewgelman wrote about this here, which I think is worth a read.


For reference, this idea is also used in Markov melding and Bayesian benchmarking.


Thank you, Prof. Gelman's blog post cleared up my thoughts on this. And I'll have a look at the papers now.

Jan

I have a question similar to the one in the Gelman post.

I have estimated parameters from a linear regression, which I want to use as priors in a log-level regression. For example:

y_hat      = b0_hat + b1_hat * X1 + b2_hat * X2
log(y)_hat = g0_hat + g1_hat * X1 + g2_hat * X2

Is it possible to use the b’s to set an informative prior on g1? My guess is that the nonlinear log transformation creates big problems.
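
If I try to make my guess concrete (a rough first-order sketch, not something I'm sure is legitimate), a delta-method argument ties the two slopes together through the level of y, e.g. around the sample mean \bar{y}:

$$
\frac{\partial \log y}{\partial X_1} \;=\; \frac{1}{y}\,\frac{\partial y}{\partial X_1}
\qquad\Longrightarrow\qquad
g_1 \;\approx\; \frac{\hat{b}_1}{\bar{y}},
$$

so one could imagine centering a prior for g1 at b1_hat / y_bar. But this only holds locally, and it degrades as y varies over a wide range, which is presumably where the big problems come from.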