Random walk in log-space

I think I have a similar issue to this post from a few years ago: a random walk on the positive real line for small numbers.
There it was suggested to do the walk in log space. I must confess it is not obvious to me how to do the walk in log space (even though I thought it would be fairly straightforward).
I have the following model that I am trying to move into log space (because of the large number of divergent transitions, large Rhat, and low ESS):

data {
  int<lower=1> N; // Number of steps
}

transformed data {
  vector<lower=0>[N] sqrtn; // Scaling factors 1 / sqrt(i)
  for (i in 1:N) {
    sqrtn[i] = 1 / sqrt(i);
  }
}

parameters {
  vector[N] eta; // Random steps
  real<lower=0> sigma; // Random step size
}

transformed parameters {
  vector<lower=0>[N] Rw = 1e-4 * sigma * sqrtn .* cumulative_sum(eta); // Wiener process
}

model {
  eta   ~ normal(0, 1);
  sigma ~ gamma(10, 10);
}

I thought the transformation would be something like:
\log\left(\frac{R_T}{10^{-4}\,\sigma\, T^{-1/2}}\right) = \sum_{i=1}^T \eta_i
Giving:
R_T = 10^{-4}\,\sigma\, T^{-1/2} \exp\left(\sum_{i=1}^T \eta_i\right)
But this gives nothing even remotely close to the original dynamics.
Any hints or help would be super appreciated (I’m guessing I’m missing a Jacobian correction?).

Sorry this didn’t get answered before.

The short answer is that to do a random walk in log space, you just do a random walk in unconstrained space and apply exp() to the result.

It looks like you’re just trying to generate a positive-constrained random walk as a transformed parameter. Here’s how to do that using a first-order random walk.

data {
  int<lower=1> N;
}
parameters {
  real<lower=0> sigma;
  vector[N] log_alpha;  // unconstrained random walk
}
transformed parameters {
  vector<lower=0>[N] alpha = exp(log_alpha);  // positive-constrained walk
}
model {
  sigma ~ gamma(10, 10);
  log_alpha[1] ~ normal(0, 1);  // need to ground the walk's starting point
  for (n in 2:N) {
    log_alpha[n] ~ normal(log_alpha[n - 1], sigma);  // first-order walk steps
  }
}

Because we’re putting the distribution directly on the parameters (log_alpha) and only then transforming, we don’t need a Jacobian adjustment. An adjustment is only needed when the distribution is placed on a nonlinear transform of the parameters.
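
For contrast, here is a minimal sketch (hypothetical, not part of the model above) of the parameterization that would need the adjustment: declaring alpha itself as the constrained parameter and putting the walk on log(alpha). The change of variables then requires adding the log absolute Jacobian determinant of the transform by hand:

data {
  int<lower=1> N;
}
parameters {
  real<lower=0> sigma;
  vector<lower=0>[N] alpha;  // positive values declared directly as parameters
}
model {
  sigma ~ gamma(10, 10);
  // the walk is placed on a nonlinear transform of the parameters ...
  log(alpha[1]) ~ normal(0, 1);
  log(alpha[2:N]) ~ normal(log(alpha[1:N-1]), sigma);
  // ... so we must add the log Jacobian of log():
  // d/d(alpha) log(alpha) = 1/alpha, hence log|J| = -log(alpha) per element
  target += -sum(log(alpha));
}

The version above with log_alpha as the parameter is generally preferable: it attaches the density to the parameters directly and avoids this bookkeeping.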

The loop at the end of the model block can be vectorized into a simpler and more efficient statement:

  log_alpha[2:N] ~ normal(log_alpha[1:N-1], sigma);
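
Putting those pieces together, the model block then reads:

model {
  sigma ~ gamma(10, 10);
  log_alpha[1] ~ normal(0, 1);  // ground the walk's starting point
  log_alpha[2:N] ~ normal(log_alpha[1:N-1], sigma);  // vectorized walk
}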

Right, I figured that out after some time. Thanks for taking the time to answer my question! I think I ended up doing cumulative_sum(eta) * sigma with eta ~ std_normal(), which was slightly more efficient.
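
For anyone landing here later, a minimal sketch of that non-centered variant (variable names are illustrative; note it gives the first step scale sigma rather than the fixed scale of 1 used in the answer above):

data {
  int<lower=1> N;
}
parameters {
  real<lower=0> sigma;
  vector[N] eta;  // standardized steps
}
transformed parameters {
  // non-centered log-space walk: log(alpha[n]) = sigma * (eta[1] + ... + eta[n])
  vector<lower=0>[N] alpha = exp(sigma * cumulative_sum(eta));
}
model {
  sigma ~ gamma(10, 10);
  eta ~ std_normal();  // implies log(alpha) is a random walk with step sd sigma
}

Because the density is on eta and the walk is a deterministic transform of it, no Jacobian is needed here either, consistent with the report that this version sampled slightly more efficiently.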