 # Any way to do causal inference in PGMs without writing out the do-Calculus in Generated Quantities?

Often I would like to fit a model and then do some do-calculus-based inference in a PGM. Here is a simple example: just a graph with edges W->X, X->Y, and W->Y. Even this small case gets quite verbose, and in larger graphs it becomes unnecessarily so.

Or is this the only way, even for larger graphs: cumbersomely "plugging things in" within generated quantities to represent the do() operator?

``````stan
model {
  mu_W ~ normal(0, 10);
  W ~ normal(mu_W, sigma_W);

  vector[N] mu_X = alpha_x + betaxw * W;
  X ~ normal(mu_X, sigma_X);

  vector[N] mu_Y = alpha + beta1 * X + beta2 * W + beta3 * X .* W;
  y ~ normal(mu_Y, sigma_Y);
}
generated quantities {
  real ATE_dox1 = 0;
  real ATE_dox0 = 0;

  for (i in 1:N) {
    real wnew = normal_rng(mu_W, sigma_W);
    // do(X=1): clamp X to 1, keep W drawn from its marginal
    real Ydo1 = normal_rng(alpha + beta1 * 1 + beta2 * wnew + beta3 * 1 * wnew, sigma_Y);
    // do(X=0)
    real Ydo0 = normal_rng(alpha + beta1 * 0 + beta2 * wnew + beta3 * 0 * wnew, sigma_Y);

    ATE_dox1 += Ydo1;
    ATE_dox0 += Ydo0;
  }

  ATE_dox1 /= N;
  ATE_dox0 /= N;
}
``````

(I am a beginner when it comes to causal inference, so what follows might be quite wrong.)

I am not sure if there is any existing software to make this easy.

My understanding is that performing do-calculus operations requires working at a higher abstraction level than what the Stan modelling language supports. Each do-calculus operation might involve a different kind of surgery to the graph and produce a different set of equations, i.e. a different probabilistic model that Stan could then perform inference on.

In principle, there could be a nice workflow something like the following:

1. express the causal model in some higher-level modelling language than Stan that supports do-calculus operations. This would need the DAG as well as the various distributions and relationships between variables to be specified.
2. apply desired do calculus operations to the model, producing 1 or more variations of the model
3. export each variant model as a pure stan probabilistic model
4. sample each probabilistic model with stan
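For step 2, the surgery for do(X=x) is simple to state: delete the edges into X and treat X as externally fixed. Here is a minimal sketch in Python, using a hypothetical dict-of-parents representation of the DAG (a real tool would also have to carry the distributions and formulas attached to each node, not just the edges):

```python
# Sketch: do()-surgery on a DAG stored as {node: set_of_parents}.
# The `do` helper and the dict representation are illustrative, not from a real library.

def do(dag, interventions):
    """Return a mutilated copy of `dag` in which each intervened node
    loses all its parents (its value is set externally, not generated)."""
    mutilated = {node: set(parents) for node, parents in dag.items()}
    for node in interventions:
        mutilated[node] = set()  # cut all incoming edges
    return mutilated

# The W -> X, X -> Y, W -> Y graph from the question:
dag = {'W': set(), 'X': {'W'}, 'Y': {'X', 'W'}}

surgery = do(dag, {'X': 1})
print(surgery)  # X no longer depends on W; Y's parent set is unchanged
```

The point is that the surgery is purely structural, so it could be applied before emitting any Stan code in step 3.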

It looks like the R package CausalQueries (see also Causal Models: Guide to CausalQueries) does something like what I describe, but it only supports binary variables. CausalQueries depends on Stan and dagitty.

If you have to do a large number of these causal inferences, or a small number on a very complex graph, it might be an opportunity to build a library that offers a nice workflow for automating this.

@rfc

Thanks, yeah, it would be an interesting project. From my understanding, the model would not have to be refit or anything. Once the model is fit as usual, the do-calculus/graph-surgery step can be applied, and only Monte Carlo sampling of new data (of the same sample size as used to fit) needs to be done, hence I was using generated quantities.
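For what it's worth, that post-fit step does not even have to live in generated quantities: given posterior draws, you can compute the interventional quantities in any host language. Here is a sketch in plain Python for one draw of the W->X, X->Y, W->Y model from the question (the parameter values are made up, not from a real fit; the noise term cancels in the ATE expectation, so it is omitted):

```python
import random

def ate_for_draw(draw, n_sims=10_000, seed=0):
    """Monte Carlo ATE of do(X=1) vs do(X=0) for one posterior draw
    of the W -> X, X -> Y, W -> Y model, sampling W from its marginal."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        wnew = rng.gauss(draw['mu_W'], draw['sigma_W'])
        ydo1 = draw['alpha'] + draw['beta1'] * 1 + draw['beta2'] * wnew + draw['beta3'] * 1 * wnew
        ydo0 = draw['alpha'] + draw['beta1'] * 0 + draw['beta2'] * wnew + draw['beta3'] * 0 * wnew
        total += ydo1 - ydo0
    return total / n_sims

# one made-up posterior draw; with beta3 = 0 the ATE is exactly beta1
draw = {'alpha': 0.5, 'beta1': 2.0, 'beta2': 1.0, 'beta3': 0.0,
        'mu_W': 0.0, 'sigma_W': 1.0}
print(ate_for_draw(draw))  # -> 2.0
```

Looping this over all posterior draws would give the posterior distribution of the ATE, which is exactly what the generated quantities block computes per draw.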

Maybe there is another way, but I am guessing such an automated framework would require "metaprogramming": essentially generating the "do" code in generated quantities based on the model block.

I myself am new to both Stan and causal inference. I recently learned generative graphical models and do-calculus through a workshop series on generative models that covered the theory and some pseudocode, and now I am trying to implement things in Stan. I am able to replicate results for things like G-computation and mediation analysis, but especially for the latter, even in a simple mediation graph, it gets very cumbersome to write everything out in generated quantities.

Exactly! There could be some well-engineered ways to do this, but perhaps one could also start with a simple approach, just using a script to generate Stan model code by replacing some templated formula expressions with different values.

E.g. here’s a demo as a python3 script: `demo_formula_template.py`:

``````python
# formula for mu_Y. expressions X and W are template parameters
MU_Y_TEMPLATE = 'mu_Y%(suffix)s = alpha + beta1 * %(X)s + beta2*%(W)s + beta3*%(X)s*%(W)s;'

# default template instantiations for X and W
DEFAULT_SUBSTITUTIONS = {'X': 'X', 'W': 'W'}

# variations of template instantiations
SUBSTITUTIONS = [
    {'X': '0'},
    {'X': '1'},
    {'X': '-W', 'W': 'sqrt(X)'},
]

def describe_substitution(subs):
    return ' '.join(['%s=%s' % (k, subs[k]) for k in sorted(subs)])

def make_suffix(subs):
    descr = describe_substitution(subs)
    return ''.join(c if c.isalnum() else '_' for c in descr)

HEADER = r"""model {
/* TODO: insert model here */
}

generated quantities{"""

FOOTER = r"""
}
"""

# print generated stan model code
print(HEADER)
for subs in SUBSTITUTIONS:
    bindings = dict(DEFAULT_SUBSTITUTIONS.items())
    bindings.update(subs)
    descr = 'do(' + describe_substitution(subs) + ')'  # annotation for comment only
    bindings['suffix'] = make_suffix(subs)  # suffix for mu_Y, to ensure unique var name
    statement = MU_Y_TEMPLATE % bindings
    print('\t%s // %s' % (statement, descr))
print(FOOTER)
``````

run it as `python3 demo_formula_template.py`

example output:

``````model {
/* TODO: insert model here */
}

generated quantities{
mu_YX_0 = alpha + beta1 * 0 + beta2*W + beta3*0*W; // do(X=0)
mu_YX_1 = alpha + beta1 * 1 + beta2*W + beta3*1*W; // do(X=1)
mu_YW_sqrt_X__X__W = alpha + beta1 * -W + beta2*sqrt(X) + beta3*-W*sqrt(X); // do(W=sqrt(X) X=-W)

}
``````