Sampling from a transformed parameter with target +=



I am relatively new to Stan so my apologies if this seems a trivial problem, but I didn’t manage to get anywhere from reading the documentation and some previous threads.

I have recently been using a hierarchical model which, on the whole, works well with good convergence on the parameters. However, one of the parameters (A) occasionally wanders below zero, which is unphysical (for context, A represents the amplitude of an oscillation in my overall model). A is a transformed parameter in my model and is defined as:

A = A_{std} * A_{sig} + a\left<\delta\nu\right>^{-b}

where a and b are the hyperparameters.

Below is an excerpt of my Stan code. I have only included sections involving A at this point, however can provide the whole code if required.

parameters {
// Normal Parameters
    real dnu[N];
// Hierarchical Parameters
    real A_std[N];
    real<lower=0> A_sig;
    real AA;
    real AB;
}

transformed parameters {
    real A[N];

    for (i in 1:N){
        A[i] = A_std[i] * A_sig + (AA * (dnu[i])^(-AB));
    }
}

model {
// Hierarchical Parameters
    A_std ~ normal(0, 1);
    A_sig ~ normal(0, 0.5);
    AA ~ normal(0.06, 0.02);
    AB ~ normal(0.88, 0.05);
}

It was when I came to impose limits that I ran into issues. My first attempt was to introduce

 A ~ uniform(0, 0.1);

into the model section, which brought up the compilation warning about a missing Jacobian adjustment that I have seen in other threads. The code does run, however convergence goes ‘out the window’ so to speak, even after a large number of sampling iterations. This makes me think that an adjustment involving target += does need to be introduced, however I am at a loss as to how to do this, given the expression for my transformed parameter A.

As a sanity check, I tried the alternative of declaring real<lower=0> A[N]; in the transformed parameters block (without the ‘uniform’ sampling statement attempted previously). When running this version of the code, initialisation fails after 100 attempts, with example terminal output:

Rejecting initial value:
  Error evaluating the log probability at the initial value.
Exception: anon_model_25b76e0c44ce504cd1daa90904e96370_namespace::write_array: A[k0__] is -3.88599, but must be greater than or equal to 0  (in 'unknown file name' at line 52)

To me, it seems clear that it is sampling from a different distribution than the one I expect, since there are values below zero. This seems to fit with the Jacobian warning message:

If you fail to heed this warning, the posterior distribution Stan will sample from is not necessarily the posterior distribution that you have in mind.

My question, hence, is: have I understood what is happening correctly, and how do I go about constructing the target += line to accommodate this issue (presuming that is what is needed)? I have had a go at reading the documentation (for example here), however I can’t quite see how to apply it to my situation, since there is a combination of factors to consider.

In summary, I would like to stop A going below 0!

Any help regarding this issue would be greatly appreciated, as I have now simply reached the point of confusion.



More specifically, you need a construction such that A is positive for all values of the parameters. It seems as if you need more constraints. If any \delta \nu wanders below zero and b>0, then \left(\delta \nu\right)^{-b} is not real. It seems from your priors that you believe b > 0, so should \delta \nu be constrained to be positive?

If so, then for sufficiently large a, we could make A positive, although it requires a function to compute the lower bound on a.
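To make the bound concrete: since \delta\nu^{-b} > 0 when \delta\nu > 0, requiring A_i > 0 is equivalent to requiring a > -A_{std,i} \, A_{sig} \, \delta\nu_i^{\,b} for every i, so the lower bound on a is the maximum of that expression over the observations. A quick numerical check in plain Python, with made-up values for illustration:

```python
# Hypothetical values, purely for illustration of the bound.
A_std = [-1.2, 0.4, -0.7]
A_sig = 0.3
dnu = [1.5, 2.0, 0.8]
AB = 0.88

# A[i] = A_std[i] * A_sig + AA * dnu[i] ** (-AB) > 0
#   <=>  AA > -A_std[i] * A_sig * dnu[i] ** AB   (since dnu[i] ** (-AB) > 0)
lb = max(-s * A_sig * d ** AB for s, d in zip(A_std, dnu))

# Any AA strictly above lb gives a positive A for every observation.
AA = lb + 1e-6
A = [s * A_sig + AA * d ** (-AB) for s, d in zip(A_std, dnu)]
assert all(x > 0 for x in A)
```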


Thanks for the reply. I do believe that b is positive, as you suggest. In fact, I know that \delta\nu is always positive-valued, being the spacing between successive spatial frequencies, and based on previous tests, a should be too.

I shall try testing with <lower=0> applied to these parameters.



OK. You are going to need something like

parameters {
  real<lower = 0> dnu[N];
  real A_std[N];
  real<lower = 0> A_sig;
  real<lower = 0> AB;
  real<lower = lb(A_std, A_sig, dnu, AB)> AA;
}

where lb(...) is some function that you defined in the functions block that returns the value of AA such that the transformed parameter A is zero for some observation. The body of this function is a bit redundant with the transformed parameters block but is necessary in order for NUTS to work well.
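A minimal sketch of what such a function could look like (assuming, as above, that AB > 0 and all dnu are positive, so each dnu[i]^(-AB) term is positive and the algebra below holds; not tested against your full model):

```stan
functions {
  // Smallest AA for which every A[i] = A_std[i] * A_sig + AA * dnu[i]^(-AB)
  // is non-negative: AA must exceed -A_std[i] * A_sig * dnu[i]^AB for all i,
  // so return the maximum of that expression over the observations.
  real lb(real[] A_std, real A_sig, real[] dnu, real AB) {
    real m = negative_infinity();
    for (i in 1:num_elements(A_std))
      m = fmax(m, -A_std[i] * A_sig * dnu[i]^AB);
    return m;
  }
}
```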