Using joint probability distribution for the prior distribution


#1

Hi, I’m fairly new to Stan and need some help constructing my prior.

I’ve created a model for an enzymatic reaction E + S <-> ES -> P + E, which has reaction rates k_f, k_r, k_{cat} (my parameters, which I have declared in the parameters block).
Most of the Stan code is similar to the Lotka-Volterra example (http://mc-stan.org/users/documentation/case-studies/lotka-volterra-predator-prey.html).

For the prior I want to code the joint distribution p_{kf,kr,kcat}(x, y, z) = p_{kf}(x) * p_{km}((y+z)/x) * p_{kcat}(z).

(N.B. K_M = (k_r + k_{cat})/k_f is the Michaelis constant.) x, y, z are the random variables.

Each of p_{kf}, p_{km}, p_{kcat} follows a known distribution (available in Stan), e.g. lognormal, so p_{kf}(x) ~ lognormal(mu, sigma).

How do I code this? Is this possible? I assume you need to use a user-defined function, but I couldn’t find any mention of functions of multiple variables.

I would appreciate any help; let me know if anything isn’t clear. I haven’t included any code because I don’t see how it would help, but let me know if it would.


#2

Just make k_m a parameter, and let k_r be a transformed parameter. That way you can put whatever prior you want on k_m, and k_r will just be whatever it works out to be (put dollars around stuff to use \LaTeX math).

It looks like, the way you wrote your joint prior, k_f, k_m, and k_{cat} are independent.

So would this work?

parameters {
  real kf;
  real km;
  real kcat;
}

transformed parameters {
  real kr = km * kf - kcat;
}

model {
  kf ~ lognormal(...);
  km ~ lognormal(...);
  kcat ~ lognormal(...);

  //...
}

Presumably there are constraints on the k values to keep them positive since these are reaction networks or whatever? I dunno.

To your original question: it’s hard to have the model parameterized in terms of k_r and put a prior on the transformed variable. The section in the manual on this is “Change of Variables vs. Transformations.” I think you should be able to figure out a multivariate transform here, since your function from k_f, k_r, k_{cat} to k_f, k_m, k_{cat} is invertible n’ all, but I just don’t think you need to worry about it.
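If you did want to keep k_r as a parameter and still state the prior on k_m, a minimal sketch of that change of variables might look like the following (the lognormal hyperparameters are placeholders). The map (k_f, k_r, k_{cat}) -> (k_f, k_m, k_{cat}) has absolute Jacobian determinant 1/k_f, so the log density needs a -log(kf) correction:

```
parameters {
  real<lower=0> kf;
  real<lower=0> kr;
  real<lower=0> kcat;
}

transformed parameters {
  real<lower=0> km = (kr + kcat) / kf;
}

model {
  kf ~ lognormal(0, 1);    // placeholder hyperparameters
  km ~ lognormal(0, 1);    // prior stated on the transformed variable
  kcat ~ lognormal(0, 1);

  // Jacobian of (kf, kr, kcat) -> (kf, km, kcat): |det J| = 1/kf,
  // so add the log absolute Jacobian determinant
  target += -log(kf);
}
```

Without the target += -log(kf) line, the prior you’d actually be sampling from would be distorted by the transform.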


#3

@bbbales2
Thanks for your swift response. k_f, k_m and k_{cat} are independent. (I haven’t shown it yet, so the original question may still be of interest, and I’ll have a look at the chapter you mention.)

Yes, that should work.

Yes, the k values need to be positive. I just did this using real<lower=0> kf; etc. in the parameters and transformed parameters blocks.

So if, for example, K_M and k_{cat} are not independent, and the prior is given by
p_{kf,kr,kcat}(x, y, z) = p_{kf}(x) * p_{km,kcat}(\frac{y+z}{x}, z), how would a transformation help with coding the p_{km,kcat}(\frac{y+z}{x}, z) part?

Do parameters in Stan have to be independent? How would you code p_{kf,kr,kcat}(x, y, z) = p_{kf}(x) * p_{kr,kcat}(y, z) (ignore k_m for now; k_r, k_{cat} dependent), where p_{kr,kcat} may not be a built-in Stan function?


#4

To actually have the model parameterized in terms of k_f, k_r, k_{cat}, look at “Multivariate Changes of Variables” in the manual. It can be a bit confusing at first, but a good way to understand what’s going on is to look at the examples for how Stan handles constrained parameters (“Transformations of Constrained Variables” in the manual).
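For the dependent case you asked about, a minimal sketch might look like the following. Here a correlated lognormal (multivariate normal on the log scale) is just a stand-in for whatever joint density p_{km,kcat} you actually have, and mu and Sigma are hypothetical hyperparameters passed in as data:

```
data {
  vector[2] mu;         // hypothetical hyperparameters of the joint prior
  cov_matrix[2] Sigma;
}

parameters {
  real<lower=0> kf;
  real<lower=0> kr;
  real<lower=0> kcat;
}

transformed parameters {
  real<lower=0> km = (kr + kcat) / kf;
}

model {
  kf ~ lognormal(0, 1);  // placeholder hyperparameters

  // joint prior p_{km,kcat}, written here as a correlated lognormal:
  // multivariate normal on the log scale plus the log-transform Jacobian
  target += multi_normal_lpdf([log(km), log(kcat)]' | mu, Sigma)
            - log(km) - log(kcat);

  // Jacobian of (kf, kr, kcat) -> (kf, km, kcat): |det J| = 1/kf
  target += -log(kf);
}
```

The -log(kf) term is the same multivariate change-of-variables correction as in the independent case; the only difference is that the density on (km, kcat) is now joint rather than a product of marginals.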

Nope. Independence makes models easy to write out, but the sampler only cares that it can evaluate the log density (edit: and its gradients) at some point in parameter space. The thing to keep in mind is that the statements with the ~s are just incrementing the log density internally. Check “Log Probability Increment vs. Sampling Statement” in the manual (edit: sorry for the billion and a half manual references, not trying to be sarcastic – it’ll just do a better job of explaining things than me :P).
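For example, a non-independent prior on (k_r, k_{cat}) can be written as a direct log-density increment. Here a correlated lognormal stands in for whatever joint density you have in mind (mu and Sigma are hypothetical hyperparameters), but any expression that evaluates a valid log density would work in its place:

```
data {
  vector[2] mu;         // hypothetical hyperparameters of the joint prior
  cov_matrix[2] Sigma;
}

parameters {
  real<lower=0> kf;
  real<lower=0> kr;
  real<lower=0> kcat;
}

model {
  kf ~ lognormal(0, 1);  // placeholder hyperparameters

  // joint prior p_{kr,kcat}: multivariate normal on the log scale,
  // with the Jacobian of the log transform; this is just target += of
  // a log density, exactly what a ~ statement does internally
  target += multi_normal_lpdf([log(kr), log(kcat)]' | mu, Sigma)
            - log(kr) - log(kcat);
}
```

If the density isn’t expressible with built-in functions, you can wrap the same increment in a user-defined function in the functions block and call it from the model block.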

With this in mind, unless there’s a compelling reason to do otherwise, I’d just roll with independent priors or something easy to code up (like a multivariate normal). The likelihood is going to mix everything together in the end anyway. K_M and K_{cat} could easily end up super coupled in the posterior even if they are independent in the prior.