Speeding up integrate_1d in RStan

Dear all,
I’m new to RStan, and I’m writing my spatial survival model with the code below. I found a method similar to mine for vectorizing integrate_1d in the "Vectorizing integrate_1d" thread on the Stan Forums. Now I would like to speed this process up, because it takes about 14 days for N = 35000 cases, but I get the following error:

```
Chain 1: Rejecting initial value:
Chain 1:   Log probability evaluates to log(0), i.e. negative infinity.
Chain 1:   Stan can't start sampling from this initial value.
Chain 1: Rejecting initial value:
Chain 1:   Log probability evaluates to log(0), i.e. negative infinity.
Chain 1:   Stan can't start sampling from this initial value.
Chain 1: Unrecoverable error evaluating the log probability at the initial value.
Chain 1: Exception: Error in function tanh_sinh::integrate: The tanh_sinh quadrature evaluated your function at a singular point and got nan. Please narrow the bounds of integration or check your function for singularities. (in 'model38147f0761c4_Onicesco_interval_independentmodel_Log_logistic34744' at line 129)

[1] "Error in sampler$call_sampler(args_list[[i]]) : "
[2] "  Exception: Error in function tanh_sinh::integrate: The tanh_sinh quadrature evaluated your function at a singular point and got nan. Please narrow the bounds of integration or check your function for singularities. (in 'model38147f0761c4_Onicesco_interval_independentmodel_Log_logistic34744' at line 129)"
[1] "error occurred during calling the sampler; sampling not done"
```
Thank you in advance!

The code I used is below:

```stan
functions {
  real integrand(real u, real xc, real[] theta, real[] x_r, int[] x_i) {
    // theta[1] = alpha, theta[2] = lambda
    real df_dx = (theta[2] * theta[1] * u^(theta[1] - 1)) / ((1 + theta[2] * u^theta[1])^2);
    return df_dx;
  }
}
data {
  int <lower=0> province;
  int <lower=0> n_grid;
  int <lower=0> N;
  int <lower=0> num;
  int <lower=0> number;
  int <lower =0 , upper = province> county[N];
  vector <lower=0> [province] p;
  row_vector[n_grid] kernel[province];
  matrix [N,number] respon;
 vector <lower=0>[N] ub;
}
transformed data {
 real x_r[0];
  int x_i[0];
}

parameters{
  vector [num] beta;
  real <lower=0> sigma;
  vector [n_grid] x;
}
transformed parameters{
  vector [num] time_ratio;
  vector [N] lambdaa;
  real alpha = 1 / sigma;
  vector [n_grid] exp_x;
  vector[province] a; // `a` saved to output because it's a top level transformed parameter
     time_ratio  = exp(beta);
  exp_x = exp(x);
    lambdaa = exp ((-1 * respon[,7:18] * beta) / sigma);
{ // local block: `landa` is not part of the output
    vector[province] landa;
    for (k in 1 : province) {
     landa[k] = kernel[k] * exp_x * p[k];
      }
 a = landa / sum(landa); // assign `a`
   }
      }
model{
  vector [N] f;
  vector [N] s;
  vector[N] method_two;
   int x_idx [N]= sort_indices_asc(ub);
   vector[N] sx = ub[x_idx];
  vector[N] sorted_m2;
  target += normal_lpdf(x|0,5);
 target += normal_lpdf(beta | 0, 1000);
  target += gamma_lpdf(alpha | 0.001, 0.001);
  for (i in 1:N) {
    real lb;
    real ub1 = sx[i];
    real increment;
  real prev_val;
  f[i]= a[county[i]]*((lambdaa [i]*alpha*respon[i,4]^(alpha-1))/((1+(lambdaa [i]*respon[i,4]^alpha))^2));
   s[i]= a[county[i]]  * (1-method_two[i]);
     if (i==1) {
       lb = 0;
     prev_val = 0;
      } else {
        lb = sx[i-1];
      prev_val = sorted_m2[i-1];
     }
     increment = integrate_1d(integrand, lb,ub1 ,{ alpha, lambdaa[i]},  {0.0} , {0}, 1e-8);
       sorted_m2[i] = prev_val + increment;
        }
        method_two[x_idx] = sorted_m2;  
     target += respon[,6] .*  (log(f)-log(s)) +  log(s);
 }
generated quantities{
  vector[N] log_lik;//log-likelihood
   vector[N] method_two;
  {for(n in 1:N){
     log_lik[n] =respon[n,6] *  (log(((exp ((-1 * respon[n,7:18] * beta) * alpha)*alpha*respon[n,4]^(alpha-1))/((1+(exp ((-1 * respon[n,7:18] * beta) *alpha)*respon[n,4]^alpha))^2)))-log(1-method_two[n])) +  log(1- method_two[n]);
  } 
  }
}
```

Dear Stan community,

Can anyone help me to solve this problem?

Warm regards,
Eisa

@hhau and @andrjohns can probably help you here.

One thing I think you should consider is implementing your integrand function in a more numerically stable manner: do all calculations in log-space first, then convert to ‘real’ space at the end.
This would be something like (untested):

```stan
real lnum = log(theta[2]) + log(theta[1]) + (theta[1] - 1) * log(u);
real ldenom = 2 * log1p_exp(log(theta[2]) + theta[1] * log(u));
return exp(lnum - ldenom);
```
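Spelling out the algebra behind that suggestion (with $\alpha = \texttt{theta[1]}$ and $\lambda = \texttt{theta[2]}$; note the factor of 2 sits outside the `log1p_exp` because the denominator is squared):

$$
\log f(u) = \log\lambda + \log\alpha + (\alpha - 1)\log u - 2\log\left(1 + \lambda u^{\alpha}\right),
\qquad
\log\left(1 + \lambda u^{\alpha}\right) = \texttt{log1p\_exp}\left(\log\lambda + \alpha \log u\right).
$$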

Thank you so much for your help. Unfortunately, this doesn’t work.

Here is my new Stan code:

```stan
functions {
  real integrand(real u, real xc, real[] theta, real[] x_r, int[] x_i) {
    // log-space version: theta[1] = alpha, theta[2] = lambda
    real lnum = log(theta[2]) + log(theta[1]) + (theta[1] - 1) * log(u);
    real ldenom = 2 * log1p_exp(log(theta[2]) + theta[1] * log(u)); // log((1 + lambda * u^alpha)^2)
    return exp(lnum - ldenom);
  }
}
data {
  int <lower=0> province;
  int <lower=0> n_grid;
  int <lower=0> N;
  int <lower=0> num;
  int <lower=0> number;
  int <lower =0 , upper = province> county[N];
  vector <lower=0> [province] p;
  row_vector[n_grid] kernel[province];
  matrix [N,number] respon;
  vector <lower=0>[N] ub;
  }
transformed data {
  real x_r[0];
  int x_i[0];
  }
parameters{
  vector [num] beta;
  real <lower=0> sigma;
  vector [n_grid] x;
  }
transformed parameters{
  vector [num] time_ratio;
  vector [N] lambdaa;
  real alpha = 1 / sigma;
  vector [n_grid] exp_x;
  vector[province] a; // `a` saved to output because it's a toplevel transformed parameter
     time_ratio  = exp(beta);
  exp_x = exp(x);
    lambdaa = exp ((-1 * respon[,7:18] * beta) / sigma);
{ // local block: `landa` is not part of the output
    vector[province] landa;
    for (k in 1 : province) {
    landa[k] = kernel[k] * exp_x * p[k];
         }
   a = landa / sum(landa); // assign `a`
           }
           }
model{
  vector [N] log_f;
  vector [N] log_s;
  vector [N]x_idx= sort_asc(ub);
  vector[N] sorted_m2;
  target += normal_lpdf(x|0,5);
  target += normal_lpdf(beta | 0, 1000);
  target += gamma_lpdf(alpha | 0.001, 0.001);
for (i in 1:N) {
         real lb;
         real ub1 = x_idx[i];
         real increment;
         real prev_val;
         if (i==1) {
         lb = 0;
         prev_val = 0;
            } else {
                lb = x_idx[i-1];
                prev_val = sorted_m2[i-1];
            }
      increment = integrate_1d(integrand, lb,ub1 ,{ alpha, lambdaa[i]},  {0.0} , {0}, 0.05);
      sorted_m2[i] = prev_val + increment;
      //f[i]= a[county[i]]*((lambdaa [i]*alpha*respon[i,4]^(alpha-1))/((1+(lambdaa [i]*respon[i,4]^alpha))^2));
      log_f[i]= log(a[county[i]])+log(lambdaa [i])+log(alpha)+(alpha-1)*log(respon[i,4])- 2*log(1+(lambdaa [i]*respon[i,4]^alpha));
      //s[i]= a[county[i]]  * (1- sorted_m2[i]);
      log_s[i]= log(a[county[i]])  + log(1- sorted_m2[i]);
        }
     target += respon[,6] .*  (log_f -log_s) +  log_s;
    }
generated quantities{
 // vector[N] y_rep;//posterior predictive value
  vector[N] log_lik;//log-likelihood
  vector [N]x_idx= sort_asc(ub);
  vector[N] sorted_m2;
  {for(n in 1:N){
      real lb;
      real ub1 = x_idx[n];
      real increment;
      real prev_val;
      if (n==1) {
      lb = 0;
      prev_val = 0;
      } else {
      lb = x_idx[n-1];
      prev_val = sorted_m2[n-1];
            }
      increment = integrate_1d(integrand, lb,ub1 ,{ alpha, exp ((-1 * respon[n,7:18] * beta) / sigma)},  {0.0} , {0}, 0.05);
      sorted_m2[n] = prev_val + increment;
      log_lik[n] =respon[n,6] *  (log(((exp ((-1 * respon[n,7:18] * beta) * alpha)*alpha*respon[n,4]^(alpha-1))/((1+(exp ((-1 * respon[n,7:18] * beta) *alpha)*respon[n,4]^alpha))^2)))-log(1-sorted_m2[n])) +  log(1- sorted_m2[n]);}
            }
            }
```

You’re going to need to be a bit more specific: what does not work? Do you mean the code does not run? Which error message do you get? Which version of RStan are you using? On which operating system?

I don’t know why this would fail to integrate, as it doesn’t look numerically unstable. The integral even has an analytical solution:

$$
\int_{x}^{y}\left(\frac{\lambda\alpha u^{\alpha-1}}{\left(1+\lambda u^{\alpha}\right)^{2}}\right)\mathrm{d}u=\frac{1}{1+\lambda x^{\alpha}}-\frac{1}{1+\lambda y^{\alpha}}=\frac{\lambda y^{\alpha}-\lambda x^{\alpha}}{\left(1+\lambda x^{\alpha}\right)\left(1+\lambda y^{\alpha}\right)}
$$

That should be better than integrate_1d.
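For instance, something along these lines could replace the integrate_1d call (a sketch only, untested; the helper name `loglogistic_interval_prob` is mine, and `alpha`/`lambda` correspond to `theta[1]`/`theta[2]` in your integrand):

```stan
functions {
  // P(lb < T <= ub) for a log-logistic event time with shape alpha and
  // scale-like parameter lambda, i.e. the closed-form value of the integral above
  real loglogistic_interval_prob(real lb, real ub, real alpha, real lambda) {
    real e_lb = lambda * lb^alpha;
    real e_ub = lambda * ub^alpha;
    return (e_ub - e_lb) / ((1 + e_lb) * (1 + e_ub));
  }
}
```

Inside your loop the call would then be something like `increment = loglogistic_interval_prob(lb, ub1, alpha, lambdaa[i]);`.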


Thanks a ton for your assistance and help, @nhuurre.

Does this analytical solution speed up the execution of my model?

Thank you in advance!

Here is my new Stan code based on your analytical-solution suggestion:

```stan
//functions{
 // real integrand(real u, real xc, real[] theta, real[] x_r, int[] x_i)
  //{     // data (integer)
    //real df_dx= (theta[2]*theta[1]*u^(theta[1]-1))/((1+(theta[2] *u^theta[1]))^2);
    //return(df_dx);
    //real lnum= log(theta[2]) + log(theta[1]) + (theta[1]-1)*log(u);
    
    //real ldenom = (log1p_exp(log(theta[2]) + theta[1]*log(u)))^2;
    //real ldenom = 2*log(1+(theta[2] *u^theta[1]));
  //  return(exp(lnum-ldenom));
 // }
//}
data {
  int <lower=0> province;
  int <lower=0> n_grid;
  int <lower=0> N;
  int <lower=0> num;
  int <lower=0> number;
  int <lower =0 , upper = province> county[N];
  vector <lower=0> [province] p;
  row_vector[n_grid] kernel[province];
  matrix [N,number] respon;
  vector <lower=0>[N] ub;
  }
transformed data {
  real x_r[0];
  int x_i[0];
  }
parameters{
  vector [num] beta;
  real <lower=0> sigma;
  vector [n_grid] x;
  }
transformed parameters{
  vector [num] time_ratio;
  vector [N] lambdaa;
  real alpha = 1 / sigma;
  vector [n_grid] exp_x;
  vector[province] a; // `a` saved to output because it's a toplevel transformed parameter
     time_ratio  = exp(beta);
  exp_x = exp(x);
    lambdaa = exp ((-1 * respon[,7:18] * beta) / sigma);
{ // local block: `landa` is not part of the output
    vector[province] landa;
    for (k in 1 : province) {
    landa[k] = kernel[k] * exp_x * p[k];
         }
   a = landa / sum(landa); // assign `a`
           }
           }
model{
  vector [N] log_f;
  vector [N] log_s;
  vector [N]x_idx= sort_asc(ub);
  vector [N] sorted_m2;
  target += normal_lpdf(x|0,5);
  target += normal_lpdf(beta | 0, 1000);
  target += gamma_lpdf(alpha | 0.001, 0.001);
for (i in 1:N) {
         real lb;
         real ub1 = x_idx[i];
         real increment;
         real prev_val;
         if (i==1) {
         lb = 0;
         prev_val = 0;
            } else {
                lb = x_idx[i-1];
                prev_val = sorted_m2[i-1];
            }
      //increment = integrate_1d(integrand, lb,ub1 ,{ alpha, lambdaa[i]},  {0.0} , {0}, 0.05);
      increment = ((lambdaa [i]*(ub1^alpha)) - (lambdaa [i]*(lb^alpha))) / ((1+(lambdaa [i]*(lb^alpha)))*(1+(lambdaa [i]*(ub1^alpha))));
      sorted_m2[i] = prev_val + increment;
      //f[i]= a[county[i]]*((lambdaa [i]*alpha*respon[i,4]^(alpha-1))/((1+(lambdaa [i]*respon[i,4]^alpha))^2));
      log_f[i]= log(a[county[i]])+log(lambdaa [i])+log(alpha)+(alpha-1)*log(respon[i,4])- 2*log(1+(lambdaa [i]*respon[i,4]^alpha));
      //s[i]= a[county[i]]  * (1- sorted_m2[i]);
      log_s[i]= log(a[county[i]])  + log(1- sorted_m2[i]);
        }
     target += respon[,6] .*  (log_f -log_s) +  log_s;
    }
generated quantities{
 // vector[N] y_rep;//posterior predictive value
  vector[N] log_lik;//log-likelihood
  vector [N]x_idx= sort_asc(ub);
  vector[N] sorted_m2;
  {for(n in 1:N){
      real lb;
      real ub1 = x_idx[n];
      real increment;
      real prev_val;
      if (n==1) {
      lb = 0;
      prev_val = 0;
      } else {
      lb = x_idx[n-1];
      prev_val = sorted_m2[n-1];
            }
      //increment = integrate_1d(integrand, lb,ub1 ,{ alpha, exp ((-1 * respon[n,7:18] * beta) / sigma)},  {0.0} , {0}, 0.05);
            increment = ((exp ((-1 * respon[n,7:18] * beta) / sigma)*(ub1^alpha)) - (exp ((-1 * respon[n,7:18] * beta) / sigma)*(lb^alpha))) / ((1+(exp ((-1 * respon[n,7:18] * beta) / sigma)*(lb^alpha)))*(1+(exp ((-1 * respon[n,7:18] * beta) / sigma)*(ub1^alpha))));
      sorted_m2[n] = prev_val + increment;
      log_lik[n] =respon[n,6] *  (log(((exp ((-1 * respon[n,7:18] * beta) * alpha)*alpha*respon[n,4]^(alpha-1))/((1+(exp ((-1 * respon[n,7:18] * beta) *alpha)*respon[n,4]^alpha))^2)))-log(1-sorted_m2[n])) +  log(1- sorted_m2[n]);}
            }
            }
```

I would expect so, because integrate_1d requires evaluating the integrand many times, while the closed-form expression is computed only once (and is of similar complexity to the integrand).

The code looks correct except for the prior. You have

```stan
parameters{
  real <lower=0> sigma;
}
transformed parameters{
  real alpha = 1 / sigma;
}
model {
  target += gamma_lpdf(alpha | 0.001, 0.001);
}
```

That doesn’t work (without a Jacobian adjustment). You must put the prior on the parameter, not on a transformed parameter.

```stan
model {
  target += inv_gamma_lpdf(sigma | 0.001, 0.001);
}
```
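For completeness, here is why the inverse-gamma form gives the prior you intended: if $\alpha = 1/\sigma$ and $\alpha \sim \mathrm{Gamma}(a, b)$, then changing variables gives

$$
p(\sigma) = p_{\alpha}(1/\sigma)\left|\frac{\mathrm{d}\alpha}{\mathrm{d}\sigma}\right|
= \frac{b^{a}}{\Gamma(a)}\,\sigma^{-(a-1)}e^{-b/\sigma}\,\sigma^{-2}
= \frac{b^{a}}{\Gamma(a)}\,\sigma^{-(a+1)}e^{-b/\sigma},
$$

which is exactly the $\text{Inv-Gamma}(a, b)$ density, so `inv_gamma_lpdf(sigma | 0.001, 0.001)` places the intended $\mathrm{Gamma}(0.001, 0.001)$ prior on $\alpha$ with no extra Jacobian term needed.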

Thank you so much, @nhuurre, for your help. This new version of my code runs faster than the previous version with integrate_1d, but the model is still very slow.

```
SAMPLING FOR MODEL 'independent' NOW (CHAIN 1).
Chain 1:
Chain 1: Gradient evaluation took 0.215 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 2150 seconds.
```

Regards,
Eisa

You said N=35000 and the loop can’t be parallelized, so I’m not sure how much faster you expect it to be.
Here are some thoughts.
`ub` is data, so the sorting can be done in the transformed data block (where it does not need to be repeated for every draw):

```stan
transformed data {
  vector[N] x_idx = sort_asc(ub);
}
```

You can squeeze out a little more performance by computing subexpressions once and reusing the result:

```stan
real e1 = lambdaa[i]*(lb^alpha);
real e2 = lambdaa[i]*(ub1^alpha);
increment = (e2 - e1) / ((1+e1)*(1+e2));
```

The same goes for `log(a[county[i]])` in the loop versus `log_a = log(a)` outside the loop and then `log_a[county[i]]` inside.
Actually, since both `log_f` and `log_s` contain `log(a[county[i]])` and you only care about `log_f - log_s`, you can move that term outside the loop entirely:

```stan
for (...) {
  ...
  // multiplication is faster than log()+log() but less numerically stable
  real e3 = lambdaa[i]*respon[i,4]^alpha;
  log_f[i] = log(alpha*e3/(respon[i,4]*(1+e3)^2));
  log_s[i] = log1m(sorted_m2[i]);
}
target += respon[,6] .* (log_f - log_s) + (log_s + log(a[county]));
```

That last line can be split up (a `target +=` with a single vector is more optimized than adding vectors together):

```stan
target += respon[,6] .* (log_f - log_s);
target += log_s;
target += log(a[county]);
```

And `log(a[county])` is just the vector `log(a)` with each entry repeated as many times as its index occurs in the `county` array. It’s faster to multiply by the number of repeats than to add the entries individually:

```stan
transformed data {
  vector[province] multiplicity = rep_vector(0, province);
  for (n in 1:N) {
    multiplicity[county[n]] += 1;
  }
}
...
model {
  ...
  target += log(a) .* multiplicity; // target += log(a[county]);
}
```

Many thanks for your very valuable suggestions and help, @nhuurre. Fortunately, with these changes the speed of my model has improved.

Is my code correct?

```
SAMPLING FOR MODEL 'independent.log_alpha.normal.prior.' NOW (CHAIN 1).
Chain 1:
Chain 1: Gradient evaluation took 0.141 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1410 seconds.
```

Warm regards,
Eisa

Here is my new code:

```stan
//functions{
 // real integrand(real u, real xc, real[] theta, real[] x_r, int[] x_i)
  //{     // data (integer)
    //real df_dx= (theta[2]*theta[1]*u^(theta[1]-1))/((1+(theta[2] *u^theta[1]))^2);
    //return(df_dx);
    //real lnum= log(theta[2]) + log(theta[1]) + (theta[1]-1)*log(u);
    //real ldenom = (log1p_exp(log(theta[2]) + theta[1]*log(u)))^2;
    //real ldenom = 2*log(1+(theta[2] *u^theta[1]));
  //  return(exp(lnum-ldenom));
 // }
//}
data {
  int <lower=0> province;
  int <lower=0> n_grid;
  int <lower=0> N;
  int <lower=0> num;
  int <lower=0> number;
  int <lower =0 , upper = province> county[N];
  vector <lower=0> [province] p;
  row_vector[n_grid] kernel[province];
  matrix [N,number] respon;
  vector <lower=0>[N] ub;
  }
transformed data {
 // real x_r[0];
  //int x_i[0];
  vector[N] x_idx = sort_asc(ub);
   vector[province] multiplicity = rep_vector(0, province);
  for (n in 1:N) {
    multiplicity[county[n]] += 1;
  }
  }
parameters{
  vector [num] beta;
  real <lower=0> sigma;
  vector [n_grid] x;
  }
transformed parameters{
  vector [num] time_ratio;
  vector [N] lambdaa;
  real alpha = 1 / sigma;
  vector [n_grid] exp_x;
  vector[province] a; // `a` saved to output because it's a toplevel transformed parameter
     time_ratio  = exp(beta);
  exp_x = exp(x);
    lambdaa = exp ((-1 * respon[,7:18] * beta) / sigma);
{ // local block: `landa` is not part of the output
    vector[province] landa;
    for (k in 1 : province) {
    landa[k] = kernel[k] * exp_x * p[k];
         }
   a = landa / sum(landa); // assign `a`
           }
           }
model{
  vector [N] log_f;
  vector [N] log_s;
  //vector [N]x_idx= sort_asc(ub);
  vector [N] sorted_m2;
  vector[province] log_a;
  real log_alpha;
  log_alpha = log (alpha);
   log_a=log(a);
  target += normal_lpdf(x|0,5);
  //target += normal_lpdf(beta | 0, 1000);
  target += normal_lpdf(log_alpha | 0, 5);
 // target += gamma_lpdf(alpha | 0.001, 0.001);
 target += inv_gamma_lpdf(sigma | 0.001, 0.001);
for (i in 1:N) {
         real lb;
         real ub1 = x_idx[i];
         real e1;
         real e2;
         real e3;
         real increment;
         real prev_val;
         if (i==1) {
         lb = 0;
         prev_val = 0;
            } else {
                lb = x_idx[i-1];
                prev_val = sorted_m2[i-1];
            }
      //increment = integrate_1d(integrand, lb,ub1 ,{ alpha, lambdaa[i]},  {0.0} , {0}, 0.05);
      e1 = lambdaa[i]*(lb^alpha);
      e2 = lambdaa[i]*(ub1^alpha);
      increment = (e2 - e1) / ((1+e1)*(1+e2));
      //increment = ((lambdaa [i]*(ub1^alpha)) - (lambdaa [i]*(lb^alpha))) / ((1+(lambdaa [i]*(lb^alpha)))*(1+(lambdaa [i]*(ub1^alpha))));
      sorted_m2[i] = prev_val + increment;
      //f[i]= a[county[i]]*((lambdaa [i]*alpha*respon[i,4]^(alpha-1))/((1+(lambdaa [i]*respon[i,4]^alpha))^2));
     e3 = lambdaa[i]*respon[i,4]^alpha;
      log_f[i] = log(alpha*e3/(respon[i,4]*(1+e3)^2));
      //s[i]= a[county[i]]  * (1- sorted_m2[i]);
      log_s[i] = log1m(sorted_m2[i]);
        }
     //target += respon[,6] .*  (log_f -log_s) +  log_s-log_alpha;
    target += respon[,6] .* (log_f - log_s);
   target += log_s;
   //target += log_a[county];
   target += log_a .* multiplicity; // target += log(a[county]);
   target += -1 * log_alpha; // This is Jacobian adjustment
    }
generated quantities{
 // vector[N] y_rep;//posterior predictive value
  vector[N] log_lik;//log-likelihood
  vector[N] sorted_m2;
  real log_alpha;
  log_alpha = log (alpha);
  {for(n in 1:N){
      real lb;
         real ub1 = x_idx[n];
         real e1;
         real e2;
         real increment;
         real prev_val;
         if (n==1) {
         lb = 0;
         prev_val = 0;
            } else {
                lb = x_idx[n-1];
                prev_val = sorted_m2[n-1];
            }
      //increment = integrate_1d(integrand, lb,ub1 ,{ alpha, lambdaa[i]},  {0.0} , {0}, 0.05);
      e1 =  exp ((-1 * respon[n,7:18] * beta) * alpha)*(lb^alpha);
      e2 =  exp ((-1 * respon[n,7:18] * beta) * alpha)*(ub1^alpha);
      increment = (e2 - e1) / ((1+e1)*(1+e2));
      //increment = ((lambdaa [i]*(ub1^alpha)) - (lambdaa [i]*(lb^alpha))) / ((1+(lambdaa [i]*(lb^alpha)))*(1+(lambdaa [i]*(ub1^alpha))));
      sorted_m2[n] = prev_val + increment;
      //f[i]= a[county[i]]*((lambdaa [i]*alpha*respon[i,4]^(alpha-1))/((1+(lambdaa [i]*respon[i,4]^alpha))^2));
      log_lik[n] =respon[n,6] *  (log(((exp ((-1 * respon[n,7:18] * beta) * alpha)*alpha*respon[n,4]^(alpha-1))/((1+(exp ((-1 * respon[n,7:18] * beta) *alpha)*respon[n,4]^alpha))^2)))-log(1-sorted_m2[n])) +  log(1- sorted_m2[n])-log_alpha;}
            }
            }
```

You need to make up your mind on whether you want to put the prior on alpha or sigma. The Jacobian is correct if alpha is a parameter (which it is not in the current formulation).

Other than that, looks fine to me.

Thanks for your reply, @nhuurre. Yes, you are right; I was not careful. So I’ll put the prior on sigma only.

If I put the prior on alpha, is the following formulation for the Jacobian correct?

```stan
target += normal_lpdf(log_alpha | 0, 5);

target += -1 * log_alpha; // This is the Jacobian adjustment.
```

Thank you in advance!

Yes, assuming you have

```stan
parameters {
  real<lower=0> alpha;
}
transformed parameters {
  real sigma = 1/alpha;
}
```
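To spell out why the adjustment is $-\log\alpha$ in that parameterisation: the prior is placed on $\log\alpha$ while $\alpha$ is the actual parameter, so

$$
\log p(\alpha) = \texttt{normal\_lpdf}(\log\alpha \mid 0, 5) + \log\left|\frac{\mathrm{d}\log\alpha}{\mathrm{d}\alpha}\right|
= \texttt{normal\_lpdf}(\log\alpha \mid 0, 5) - \log\alpha,
$$

which is the same as giving $\alpha$ a lognormal(0, 5) prior.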

OK. Many thanks for your help, @nhuurre.