I am curious about the estimation of the asymptotic variance \sigma^2, which appears in the definition of the MCSE of the posterior mean of a parameter (see the MCMC central limit theorem).

My question is:
Why can this asymptotic variance be written as:

\lim_{N \rightarrow\infty} N \; \mathrm{Var}[\hat{f}^\mathrm{MCMC}_N] = \mathrm{Var}_\pi[f] \; \tau_l[f],

where \mathrm{Var}_\pi[f] can be approximated by the sample variance \hat{\sigma}^2_n? I see how that approximation works, but I do not understand where the factor \mathrm{Var}_\pi[f] in the above formula comes from in the first place.
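To make the factorization concrete, here is a minimal numerical sketch (my own illustration, not taken from any particular package) using an AR(1) chain as a stand-in for MCMC output, for which the integrated autocorrelation time \tau is known analytically, so that N \, \mathrm{Var}[\hat{f}_N] \approx \mathrm{Var}_\pi[f] \, \hat{\tau} can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) chain x_t = phi * x_{t-1} + eps_t as a stand-in
# for MCMC output; its integrated autocorrelation time is
# tau = (1 + phi) / (1 - phi) analytically.
phi, n = 0.7, 200_000
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

# Sample variance approximates Var_pi[f] (here f is the identity).
var_pi = x.var()

# Estimate tau by summing empirical autocorrelations up to a fixed
# cutoff (a crude truncation; real estimators choose it adaptively).
max_lag = 200
acorr = [np.corrcoef(x[:-k], x[k:])[0, 1] for k in range(1, max_lag)]
tau_hat = 1 + 2 * sum(acorr)

# Asymptotic variance sigma^2 = Var_pi[f] * tau, i.e. the limit of
# N * Var[chain mean] in the MCMC CLT.
sigma2_hat = var_pi * tau_hat
print(var_pi, tau_hat, sigma2_hat)
```

For \phi = 0.7 the analytic values are \mathrm{Var}_\pi[f] = 1/(1-\phi^2) \approx 1.96 and \tau \approx 5.67, and the estimates land close to both, so the \mathrm{Var}_\pi[f] term is exactly the stationary variance of f that gets inflated by the correlation factor.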

You appear to be overlooking the complexity of the computation of the effective sample size in the denominator of the expression from the Stan reference manual, as well as some deep similarities between the two versions. Perhaps @avehtari might chime in with a clear answer to some of the specifics of your question.

Paralleling the ordinary central limit theorem, which assumes independent draws, in the MCMC central limit theorem the standard error for a parameter \theta can be defined to be:

\mathrm{se} = \frac{\mathrm{sd}}{\sqrt{N_{\textrm{eff}}}},

where \textrm{se} is the standard error of \theta and \textrm{sd} is the standard deviation (in the posterior) of \theta. The autocovariance you’re looking at is used in the definition of N_{\textrm{eff}}. See, for example, page 287 of Bayesian Data Analysis (free download) for the derivation. For a comparison of different estimators of effective sample size and more discussion, see @avehtari’s Comparison of MCMC effective sample size estimators, which points to our paper (mostly Aki’s!) on revised estimators for it.
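Here's a minimal sketch of how the pieces fit together (a crude single-chain, fixed-cutoff version; Stan's actual estimator uses split chains and Geyer's adaptive truncation, so treat this only as an illustration of se = sd / sqrt(N_eff)):

```python
import numpy as np

def neff_and_mcse(draws, max_lag=100):
    """Simplified effective sample size and MCSE for one chain.

    Illustrative only: a truncated-autocovariance estimator, not
    Stan's split-chain / adaptively-truncated version.
    """
    x = np.asarray(draws, dtype=float)
    n = x.size
    x = x - x.mean()
    # Autocovariances gamma_k for lags k = 0..max_lag-1.
    gamma = np.array([x[: n - k] @ x[k:] / n for k in range(max_lag)])
    rho = gamma / gamma[0]           # autocorrelations
    tau = 1 + 2 * rho[1:].sum()      # integrated autocorrelation time
    n_eff = n / tau                  # effective sample size
    sd = np.sqrt(gamma[0] * n / (n - 1))  # posterior sd estimate
    return n_eff, sd / np.sqrt(n_eff)     # se = sd / sqrt(N_eff)

rng = np.random.default_rng(1)
# A correlated chain: se should exceed the i.i.d. value sd / sqrt(n).
x = np.empty(50_000)
x[0] = rng.normal()
for t in range(1, x.size):
    x[t] = 0.5 * x[t - 1] + rng.normal()
n_eff, se = neff_and_mcse(x)
print(n_eff, se)
```

For this AR(1) chain with coefficient 0.5 the true \tau is 3, so N_{\textrm{eff}} comes out near N/3 and the standard error is roughly \sqrt{3} times the naive i.i.d. value.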

The point of my question is how you arrive at the formula you quoted:

\mathrm{se} = \frac{\mathrm{sd}}{\sqrt{N_{eff}}}

For example, Geyer (2005) suggests using batch means to estimate what you called '\mathrm{se}', and I do not see how that approach and the above formula are connected.
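For what it's worth, here is a small sketch (my own, following the generic batch-means idea rather than Geyer's exact recipe) suggesting the two routes target the same quantity: batch means estimates \mathrm{Var}[\hat{f}_N] = \sigma^2/N directly, while \mathrm{sd}/\sqrt{N_{\textrm{eff}}} reaches the same number via autocovariances:

```python
import numpy as np

def batch_means_se(x, n_batches=50):
    """Batch-means estimate of the MCSE of the chain mean.

    Split the chain into contiguous batches; if batches are long
    enough to be roughly independent, the variance of the batch
    means divided by the number of batches estimates
    Var[mean] = sigma^2 / N without any autocovariance or N_eff
    computation.
    """
    x = np.asarray(x, dtype=float)
    b = x.size // n_batches
    means = x[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    return np.sqrt(means.var(ddof=1) / n_batches)

rng = np.random.default_rng(2)
# AR(1) chain with coefficient 0.5 as a stand-in for MCMC output.
x = np.empty(100_000)
x[0] = rng.normal()
for t in range(1, x.size):
    x[t] = 0.5 * x[t - 1] + rng.normal()

se_bm = batch_means_se(x)
# Analytic target: sqrt(Var_pi * tau / N), with Var_pi = 1/(1 - 0.25)
# and tau = (1 + 0.5)/(1 - 0.5) = 3 for this chain -- the same value
# sd / sqrt(N_eff) would give.
se_true = np.sqrt((1 / 0.75) * 3 / x.size)
print(se_bm, se_true)
```

The batch-means estimate lands close to \sqrt{\mathrm{Var}_\pi[f] \, \tau / N} = \mathrm{sd}/\sqrt{N_{\textrm{eff}}}, which is the sense in which the two approaches are connected.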