It looks like RStan doesn’t: rstan/rstan/src/chains.cpp. I didn’t realize it wasn’t using Stan’s structure. I think we should consolidate code if we can.

Thanks for your work on chains.hpp, the changes look great.

Getting the interfaces to use the same code is or should be a top priority.

I tried at one point to have PyStan use chains.hpp, but it turned out to be really slow.

The main difficulty is exposing some sort of function in the services namespace which is easy and computationally convenient for the interfaces to call. The function signature should only involve std data structures, no Eigen data structures. You can, however, assume that any interface interested in calling the chains functions will be able to arrange the samples into contiguous memory. (The RStan people can correct me if I’m wrong about this.)
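To make the last constraint concrete, here is a small sketch of how an interface might arrange per-chain draws into one contiguous buffer before handing them to such a std-only entry point. This is illustrative only: `flatten_chains` and the chain-major layout are made up for the example, not Stan’s actual convention.

```python
# Hypothetical sketch: pack per-chain draws into one contiguous buffer
# (chain-major: all of chain 0's draws, then chain 1's, ...).
from array import array

def flatten_chains(chains):
    """Pack a list of equal-length chains into one contiguous array('d')."""
    n = len(chains[0])
    assert all(len(c) == n for c in chains), "chains must have equal length"
    buf = array('d')  # contiguous C doubles, suitable for passing across an FFI
    for c in chains:
        buf.extend(c)
    return buf, len(chains), n

buf, n_chains, n_draws = flatten_chains([[0.1, 0.2], [0.3, 0.4]])
```

The point is that the callee only ever needs a flat buffer plus the two sizes, which any interface can produce regardless of how it stores samples internally.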

Should I make pull requests for `rstan/rstan/src/chains.cpp` and `pystan/pystan/_chains.pyx`?

Yeah, although it may require more hacking to get it to work.

Why are there both effective_sample_size and effective_sample_size2? I can read the comment, but I don’t understand it. Is effective_sample_size2 really used?

Hey @avehtari,

I am trying to implement this computation in PyMC3.

I have a question for you. In your tests, what exactly is `dimensions`? Is it the length of the chains, the number of chains, or something else?

Thanks in advance!

Sharan

I don’t know what `dimensions` you are referring to. There is no such variable in the parts of the code I modified.

The “more hacking” seems to refer to the fact that it’s really difficult to build RStan. There aren’t many instructions, and even @bgoodri keeps guessing what I should do to get it built…

The PyStan build was easier (only a couple of hours to figure out how to install and run it in place), but I haven’t been able to figure out how to run the tests, or how to compare results with CmdStan (the same random seeds don’t seem to produce the same answers). I’m a novice in Python, so I’m now stuck, and would appreciate help with loading the values of the chains from csv files (one csv file per chain) and running the ess function on those chains. @ariddell can you help?
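For the csv-loading part, here is a minimal sketch. It is not PyStan code; `load_param` is a made-up helper, and it assumes the usual CmdStan output layout: `#`-prefixed comment lines, then a header row of parameter names, then one draw per row.

```python
# Sketch (not PyStan's API): read one parameter's draws from a
# CmdStan-style csv file, skipping '#' comment lines.
import csv

def load_param(path, name):
    """Return the draws for column `name` as a list of floats."""
    draws = []
    with open(path) as f:
        # Drop empty rows and '#'-prefixed comment rows (CmdStan writes
        # comments both before the header and after adaptation).
        rows = (r for r in csv.reader(f) if r and not r[0].startswith('#'))
        header = next(rows)
        col = header.index(name)
        for row in rows:
            draws.append(float(row[col]))
    return draws

# Hypothetical usage, one file per chain:
# chains = [load_param('blocker.%d.csv' % i, 'theta') for i in (1, 2)]
```

With the draws loaded as plain lists, each chain can then be passed to whatever ess function is under test.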

If everything you need upstream has been merged into develop, try

```
source("https://raw.githubusercontent.com/stan-dev/rstan/develop/install_StanHeaders.R", echo = TRUE)
```

and then installing the modified rstan into `.libPaths()[1]`.

Now the CmdStan, RStan, and PyStan N_eff computations all use Geyer’s initial monotone sequence, and the results match to at least 6-digit accuracy when using exactly the same chains (`blocker.1.csv` and `blocker.2.csv`).

Based on this experience, it would be great if this kind of computational code were shared…
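For anyone wanting to replicate this in another environment, here is a pure-Python sketch of the multi-chain estimator with Geyer’s initial positive and initial monotone sequence truncation. It mirrors the structure of Stan’s C++ implementation but is simplified (naive O(n²) autocovariances instead of FFT, and a slightly condensed truncated sum), so treat it as an illustration rather than the production code.

```python
def _autocov(x):
    # Biased (1/n) autocovariance estimates at all lags; O(n^2).
    n = len(x)
    m = sum(x) / n
    return [sum((x[i] - m) * (x[i + t] - m) for i in range(n - t)) / n
            for t in range(n)]

def ess_geyer(chains):
    """Effective sample size for M equal-length chains (simplified sketch)."""
    m, n = len(chains), len(chains[0])
    acov = [_autocov(c) for c in chains]
    chain_means = [sum(c) / n for c in chains]
    # Within-chain variance (unbiased) and the pooled "var_plus".
    mean_var = sum(a[0] for a in acov) / m * n / (n - 1)
    var_plus = mean_var * (n - 1) / n
    if m > 1:
        gm = sum(chain_means) / m
        var_plus += sum((cm - gm) ** 2 for cm in chain_means) / (m - 1)

    rho_hat_s = [0.0] * n
    rho_hat_even = 1.0
    rho_hat_s[0] = rho_hat_even
    rho_hat_odd = 1 - (mean_var - sum(a[1] for a in acov) / m) / var_plus
    rho_hat_s[1] = rho_hat_odd
    # Initial positive sequence: accumulate lag pairs while their sum is positive.
    t = 1
    while t < n - 4 and rho_hat_even + rho_hat_odd > 0:
        rho_hat_even = 1 - (mean_var - sum(a[t + 1] for a in acov) / m) / var_plus
        rho_hat_odd = 1 - (mean_var - sum(a[t + 2] for a in acov) / m) / var_plus
        if rho_hat_even + rho_hat_odd >= 0:
            rho_hat_s[t + 1] = rho_hat_even
            rho_hat_s[t + 2] = rho_hat_odd
        t += 2
    max_t = t
    # Initial monotone sequence: force the pair sums to be non-increasing.
    for t in range(1, max_t - 2, 2):
        if rho_hat_s[t + 1] + rho_hat_s[t + 2] > rho_hat_s[t - 1] + rho_hat_s[t]:
            rho_hat_s[t + 1] = (rho_hat_s[t - 1] + rho_hat_s[t]) / 2
            rho_hat_s[t + 2] = rho_hat_s[t + 1]
    tau_hat = -1.0 + 2.0 * sum(rho_hat_s[: max_t + 1])
    return m * n / tau_hat
```

For near-independent draws this returns roughly the total number of draws, while strongly autocorrelated chains give a much smaller value, which is the qualitative behavior to check against stansummary.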

Hi, CmdStan user on a Matlab platform here. Sorry to get back to this after all this time. I’m trying to replicate the N_eff and R_hat calculations of stansummary with my own code. There are several practical reasons for wanting to do this, e.g. to easily display these stats on figures. R_hat is pretty straightforward. For N_eff, it is difficult to exactly match the value from stansummary without knowing exactly how the truncation works. I know that an exact match is not necessarily all that important for inference, but for publication purposes this kind of thing matters. Is the code, or are the formulas, to calculate these stats available somewhere? Perhaps I’m missing something, but I did not have much luck with the user guide. I read through the above discussion, but it is not clear to me what you all settled on. Thanks in advance.

Equations are in the paper Rank-Normalization, Folding, and Localization: An Improved R̂ for Assessing Convergence of MCMC (with Discussion) (stansummary doesn’t use the rank normalization, but otherwise the details are the same).
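Condensed from that paper (same notation, as a quick summary rather than a substitute for it): for $M$ chains of $N$ draws each, with within-chain variance $W$ and between-chain variance $B$ ($N$ times the variance of the chain means),

$$\widehat{\operatorname{var}}^{+} = \frac{N-1}{N}\,W + \frac{1}{N}\,B, \qquad \hat{R} = \sqrt{\widehat{\operatorname{var}}^{+}/W},$$

$$\hat{\rho}_t = 1 - \frac{W - \tfrac{1}{M}\sum_{m=1}^{M} s_{t,m}}{\widehat{\operatorname{var}}^{+}}, \qquad \hat{N}_{\mathrm{eff}} = \frac{MN}{1 + 2\sum_{t=1}^{T}\hat{\rho}_t},$$

where $s_{t,m}$ is the lag-$t$ autocovariance of chain $m$. The truncation point $T$ is the part stansummary gets from Geyer’s rule: the autocorrelations are summed in pairs $(\hat{\rho}_{2t'} + \hat{\rho}_{2t'+1})$, the sum stops at the first pair whose sum is negative, and the running pair sums are forced to be non-increasing.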

You can also look at the code for the exact implementation, or see the R version, the `ess_basic()` function in posterior/convergence.R at master · stan-dev/posterior · GitHub.

Thank you.

Also, you can look at the following Matlab implementation by Beth Baribault: matstanlib/ess.m at master · baribault/matstanlib · GitHub

I looked into this code and ran into problems:

You have to manually alter the method from within the code. The version on GitHub is set to ‘vehtari’, and it throws an error when run that way. I tried the ‘BDA3’ and ‘BDA2’ methods and got back wildly different results: BDA3 returned a very small number, 7.2e-4, while BDA2 returned an unexpectedly high one, 8.4e3.

The data has 2000 iterations on 4 chains. When I use stansummary, it calculates N_eff to be 4.1e3. With my own code, in which I use the first four positive lags (after that, the autocorrelations look monotonic and alternate between positive and negative), I also get 4.1e3, although not an exact match to stansummary.

In short, I don’t understand the results from this Matlab code.

Maybe run it against the R/Python implementations side by side and compare the results at each step.

I want to do that, but I don’t have them installed on my computer right now, and I’m not fully literate in either platform. For now, I’m sticking to comparisons with CmdStan’s stansummary. Thanks.