Vectorized vs. unvectorized model

The Stan User’s Guide (section 1.1.1) states that the vectorized version of a model is more concise and faster, yet equivalent to the unvectorized version. However, when I run the two models below, one vectorized and one unvectorized, they produce different estimates and different n_effective. What causes the different results?

Unvectorized Version (loop over observations):

data{
  int<lower=0> N; 
  real<lower=0> obs[N];
}
parameters {
  real<lower = 0> a; 
  real<lower = 0> s; 
}
model {
  a ~ inv_gamma(3,6); 
  s ~ normal(1,3);    
  for(i in 1:N){
    obs[i] ~ weibull(a,s);
  }
}

Vectorized Version:

data{
  int<lower=0> N; 
  real<lower=0> obs[N];
}
parameters {
  real<lower = 0> a; 
  real<lower = 0> s; 
}
model {
  a ~ inv_gamma(3,6); 
  s ~ normal(1,3);    
  obs ~ weibull(a, s);
}
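To see why the two forms can differ at the bit level even though they encode the same model, here is a minimal Python sketch (not Stan's actual implementation; the parameter values and data are made up for illustration). It compares a term-by-term loop with a version that factors out shared subexpressions and sums once, which is roughly what a vectorized lpdf does:

```python
import math

# Hypothetical parameters and data, chosen only for illustration.
a, s = 1.5, 2.0
obs = [0.5, 1.2, 2.3, 0.9]

def weibull_lpdf(x, a, s):
    """Log density of Weibull(shape a, scale s) at x."""
    return (math.log(a) - math.log(s)
            + (a - 1) * (math.log(x) - math.log(s))
            - (x / s) ** a)

# "Loop" version: add each observation's term to the target one at a time.
loop_target = 0.0
for x in obs:
    loop_target += weibull_lpdf(x, a, s)

# "Vectorized" version: shared subexpressions (log(a), log(s)) computed
# once, then single sums over the data.
n = len(obs)
vec_target = (n * (math.log(a) - a * math.log(s))
              + (a - 1) * sum(math.log(x) for x in obs)
              - sum((x / s) ** a for x in obs))

# Mathematically identical targets, but the operations are grouped
# differently, so the floating-point results can differ in the last bits.
print(loop_target, vec_target)
```

The two targets agree to many digits, but they need not be bit-identical, and in gradient-based samplers a last-bit difference in the log density (or its gradient) is enough to make trajectories diverge over many iterations.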

Since you haven’t mentioned that: did you set the same random seed when running the two models?

Sorry, yes I used the same seed, iterations, warmup, chains, etc.

Even with the same seed and init values, the estimates need not be exactly the same, because some calculation steps may be done in a different order (floating-point arithmetic is lossy and not associative).
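A quick illustration of the non-associativity point (a plain Python sketch, nothing Stan-specific): summing the very same terms in a different order can produce a different floating-point result.

```python
# Floating-point addition is not associative: summing the same
# terms forward and backward can give different results.
vals = [0.1, 0.2, 0.3, 1e16, -1e16]

forward = 0.0
for v in vals:
    forward += v

backward = 0.0
for v in reversed(vals):
    backward += v

print(forward == backward)  # the two orderings disagree here
print(forward, backward)
```

With these values the forward sum loses the small terms when they are absorbed into 1e16, while the backward sum cancels 1e16 first and keeps them, so the two results differ.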

How large is the difference?

The estimates are only marginally different (the difference in the mean / 2.5% tail is .01/.05, which I get is insignificant with this data and model); it is more the n_effective that is the concern. The s parameter alone has an n_effective about 10% lower in the vectorized form. Given that the models are provided all the same inputs, and the only change between the two is the vectorization, why would the calculation steps change and affect the n_effective that much?

I have seen something almost as puzzling: in my case the two models were mathematically equivalent, and still the results in terms of n_eff varied quite a lot. I can see that expressing the calculations differently can lead to a different order of operations and/or different automatic differentiation nodes, but it would be great to get a better grounded explanation, if possible. :)
