Target *=

Is there any way to achieve the effect of target *= a for some real a?
This will be useful in simulated tempering and some general post-processing, where we want to reshape the posterior by some power transformation.

Would this work?

target += target() * a - target()

Edit: typo in the docs, target()() --> target()
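Putting this together, a minimal sketch of tempering a whole posterior (the normal prior and likelihood and the data y here are hypothetical placeholders; any log density works the same way):

```stan
data {
  real<lower=0> a;   // tempering exponent
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
}
model {
  mu ~ normal(0, 1);
  y ~ normal(mu, 1);
  // target() currently holds the accumulated log density;
  // adding (a - 1) * target() leaves a * target(), i.e. target *= a
  target += (a - 1) * target();
}
```

Note that target() must be read after all other contributions have been accumulated; any ~ statement placed after this line would enter at full, unscaled weight.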


Thanks. But target()() does not seem to be recognized.

I think that is a typo in the docs and it's just target().


Great, thanks! This is exactly what I want.


OK, it turns out I have not fully solved my problem. I am now using a geometric bridge between two models: for two densities p_1(\theta) and p_2(\theta), I want to sample from a density proportional to p_1^\lambda(\theta) p_2^{(1-\lambda)}(\theta). Using the saved target(), I can do this by:

data {
  real<lower=0, upper=1> lambda;
}
parameters {
  real theta;
}

model {
  real log_q; // lp of the first model
  real log_p; // lp of the alternative model
  theta ~ foo;  // model 1
  log_q = target();
  theta ~ foo2; // model 2
  log_p = target() - log_q;
  // target() now holds log_q + log_p; adjust it to lambda * log_q + (1 - lambda) * log_p
  target += (lambda - 1) * log_q - lambda * log_p;
}

The sampling is fine. But for some post-processing, I want to access the local variables log_q and log_p defined in the model block. For one-time use, I could put them into transformed parameters, and compute and store, e.g., log_q = foo_lpdf(theta).
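When both log densities are tractable, that one-time workaround looks like the following sketch (the two normal_lpdf calls are hypothetical stand-ins for the actual models' log densities):

```stan
data {
  real<lower=0, upper=1> lambda;
}
parameters {
  real theta;
}
transformed parameters {
  // computed here so they are saved in the output
  real log_q = normal_lpdf(theta | 0, 1); // stand-in for model 1's log density
  real log_p = normal_lpdf(theta | 1, 2); // stand-in for model 2's log density
}
model {
  target += lambda * log_q + (1 - lambda) * log_p;
}
```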

But I want an automated procedure that applies to a general model, where the density is hard to compute in transformed parameters.

In other words, in most cases local variables in the model block can easily be moved back to transformed parameters, except when these variables depend on target().

With this structure (before hacking into the Stan language), is there any easy way I can rewrite my code to

  1. save local variables from the model block; or
  2. compute transformed parameters in the model block; or
  3. save intermediate states of target()?

which are three equivalent descriptions of my question.


That is going to be less trivial, I believe, as local variables are not saved in the results and you can’t call the local variables from model {} in generated quantities.

A “trick” you can do is to use print. Example:

parameters {
    real y;
}
model {
    y ~ normal(0, 1);
    real b = target() * 5.0;
    print("line = ", y, b);
}
generated quantities {
    print("---"); // marks the end of each iteration on stdout
}

The print in GQ is just so we mark iterations. On the standard output you would get:

Iteration: 1001 / 2000 [ 50%]  (Sampling)
line = -0.243293-0.147978
line = 1.30349-4.24769
line = 1.33544-4.45849
line = -0.184566-0.0851618
---
line = 1.33544-4.45849
line = 0.316915-0.251088
line = -1.06991-2.86175
line = -1.21335-3.68057
---
line = 0.316915-0.251088
line = 1.0784-2.90739
line = 0.586643-0.860376
line = -0.586876-0.861059
---

In order to know which of the printed lines was the one selected by the sampler you can compare y with the output, which is:

# Adaptation terminated
# Step size = 1.02103
# Diagonal elements of inverse mass matrix:
# 1.11476

So for the first sample it's the third printed line, for the second one it's the second line, and for the third sample it was the fourth one. With some scripting you could automate this pretty easily.

But not ideal. Still beats hacking in the C++ backend I guess.


Thanks, Rok. It works. It seems the saved draw is always the first printed leapfrog line after each “---”; hence the last iteration will not be saved, but that is fine. That is:

--- // printed from generated quantities in the i-th iteration
line = 0.316915-0.251088 // first line in the (i+1)-th iteration: corresponds to the saved draw of the i-th iteration

It is also interesting that the printed value is not numerically identical to the saved parameter; they differ in the last one or two digits (~1e-6).


By the way, I used the solution here to facilitate a model/alternative-model syntax.
It can (1) fit two models at the same time, (2) ease metastability/multimodality, and (3) estimate the normalization constant.
Thanks for your help!


Nice! Glad it worked!

The precision difference is because of different precision settings for stdout (print) and the output stream (CSV file).