I have a weighted Bayesian logistic regression model:
weighted_stan_representation = """
data {
  int<lower=0> n;                   // number of observations
  int<lower=0> d;                   // number of predictors
  array[n] int<lower=0,upper=1> y;  // outputs
  matrix[n,d] x;                    // inputs
  vector<lower=0>[n] w;             // coreset weights
}
parameters {
  vector[d] theta;                  // auxiliary parameter
}
model {
  theta ~ normal(0, 1);
  // scale each observation's log-likelihood by its coreset weight
  for (i in 1:n)
    target += w[i] * bernoulli_logit_lpmf(y[i] | x[i] * theta);
}
"""
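The idea is that each observation's log-likelihood contribution is scaled by its coreset weight, so the model targets

$$\log p(\theta \mid y, w) \;\propto\; \log p(\theta) + \sum_{i=1}^{n} w_i \,\log p(y_i \mid x_i, \theta),$$

and a weight vector of all ones recovers the ordinary full-data posterior.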
The data looks like this:
{'x': array([[-1.92220908, -0.86248914],
[-0.64517094, 0.40222473],
[-0.71675321, -1.2782317 ],
...,
[-2.0448459 , -0.11735602],
[-0.9622542 , -2.27172399],
[-1.09545494, -0.83435958]]),
'y': array([0, 0, 0, ..., 0, 0, 0]),
'w': array([1., 1., 1., ..., 1., 1., 1.]),
'd': 2,
'n': 10000}
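For reference, I build the two data dicts along these lines (the x and y shown here are just simulated placeholders for my real data, and coreset_idx / coreset_w stand in for whatever the coreset construction returns):

import numpy as np

rng = np.random.default_rng(0)
n, d = 10000, 2

# Placeholder covariates and binary labels; my real x and y come from elsewhere.
x = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)

# Full posterior: every observation gets weight 1.
full_data = {"x": x, "y": y, "w": np.ones(n), "d": d, "n": n}

# Coreset posterior: only a small subset of observations carries non-zero weight.
# coreset_idx / coreset_w are placeholders for the coreset algorithm's output.
w_sparse = np.zeros(n)
coreset_idx = rng.choice(n, size=100, replace=False)
coreset_w = rng.uniform(1.0, 100.0, size=coreset_idx.size)
w_sparse[coreset_idx] = coreset_w
coreset_data = {"x": x, "y": y, "w": w_sparse, "d": d, "n": n}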
I can get samples from the full posterior, i.e. with all weights equal to 1, by running
posterior = stan.build(model.weighted_stan_representation, data = full_data, random_seed = 100000)
fit = posterior.sample(num_chains = num_chains, num_samples = num_samples, num_warmup = num_warmup)
I then want to use a sparse weight vector and sample from the approximate (coreset) posterior using
coreset_posterior = stan.build(model.weighted_stan_representation, data = coreset_data)
coreset_samples = coreset_posterior.sample(num_chains = num_chains, num_samples = num_samples, num_warmup = num_warmup)
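For context, this is roughly how I pull the draws out of each fit and compare them (a sketch assuming pystan 3's dict-style indexing on the fit object, which returns the draws with the flattened chain/sample dimension last):

import numpy as np

full_theta = fit["theta"]                  # shape (d, num_chains * num_samples)
coreset_theta = coreset_samples["theta"]

# This prints True for me, even though the two weight vectors differ.
print(np.array_equal(full_theta, coreset_theta))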
However, when I access the samples this way, they are identical in the two cases. I’m fairly confident this has something to do with the model being cached when stan.build is first called, so that no new samples are ever actually drawn, because I get the output
Building: found in cache, done.
when I build the second model. This is the first time I’ve used PyStan and I don’t know how to get around this; there doesn’t seem to be an option to force PyStan to recompile, as far as I can tell.
Any help would be appreciated!
I’ve got the latest versions of Python and PyStan installed.
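In case it’s relevant, this is how I’m checking the installed versions (importlib.metadata reports the installed distribution version; PyStan 3 is distributed as "pystan" but imported as stan):

import sys
from importlib.metadata import version

print(sys.version)        # Python version
print(version("pystan"))  # PyStan version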