# The effect of weights on the resulting estimates

I am hoping to understand how weights affect the fit that Stan produces. I know that weighting isn't fully Bayesian, but I'm willing to sacrifice that.

Suppose I have two Bernoulli observations that are a function of a latent variable, and I weight them differently in the model. Does this mean the model will optimize to 'care' more about the higher-weighted observation? And by multiplying the log-likelihood by a weight, am I implicitly changing the value of the probability (in the log-likelihood)?
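To pin down the algebra being asked about, here is a small Python sketch (mine, not Stan code): multiplying the Bernoulli log-pmf by a weight `w` is algebraically the same as using the pmf raised to the power `w` as the unnormalized likelihood term.

```python
import math

def bernoulli_lpmf(x, p):
    # log density of a Bernoulli: x * log(p) + (1 - x) * log(1 - p)
    return x * math.log(p) + (1 - x) * math.log(1 - p)

p, w = 0.3, 0.5
for x in (0, 1):
    weighted = w * bernoulli_lpmf(x, p)
    # exponentiating the weighted log term recovers the pmf raised to the power w
    pmf_to_the_w = (p ** x * (1 - p) ** (1 - x)) ** w
    assert math.isclose(math.exp(weighted), pmf_to_the_w)
```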

For example, in the following model:

```stan
data {
  int result[2];
  vector[2] weights;
  vector[2] breaks;
}

parameters {
  real mu;
}

model {
  real p;
  for (i in 1:2) {
    p = normal_cdf(breaks[i], mu, 3);
    target += bernoulli_lpmf(result[i] | p) * weights[i];
  }
  // the log density of a Bernoulli is x * log(p) + (1 - x) * log(1 - p), where x is the result
  // if I multiply this by a weight, am I implicitly telling Stan that the probability is actually p ^ weights[i]?
}
```

Additionally, suppose weights[1] = 0.5 and weights[2] = 1; will this Stan model then care more about the 2nd observation than about the 1st?

Thanks, appreciate any help!

> if I multiply this by a weight, am I implicitly telling Stan that the probability is actually p ^ weights[i]?

Stan samples from a distribution defined by the unnormalized log density `target`.

So if your model was previously:

```stan
target += log(q(theta));
```

and is now:

```stan
target += log(q(theta)) * w;
```

then before you were generating samples from a distribution proportional to `q(theta)`, and now you are generating samples from a distribution proportional to `q(theta)^w` (assuming these are proper distributions that can be normalized, Stan is mixing, and so on).
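To make the `q(theta)^w` point concrete, a numeric sketch on a toy density of my own choosing (a Gaussian kernel on a grid, not the model above): raising a density to a power w < 1 flattens it, so the tempered distribution has a lower peak and heavier tails.

```python
import numpy as np

# toy unnormalized density q(theta) evaluated on a grid (a standard Gaussian kernel)
theta = np.linspace(-6, 6, 1201)
q = np.exp(-0.5 * theta ** 2)

def normalize(d):
    # turn grid values into probabilities that sum to 1
    return d / d.sum()

w = 0.5
base = normalize(q)           # distribution proportional to q(theta)
tempered = normalize(q ** w)  # distribution proportional to q(theta)^w

# w < 1 flattens the density: lower peak, heavier tails
assert tempered.max() < base.max()
assert tempered[0] > base[0]
```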
Well, Stan just samples from whatever `target` defines. In this case, yes: the model weights the first likelihood contribution half as much as the second.
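A hedged illustration of that "half as much" intuition, using the same likelihood as the model in the question but with a grid search in Python instead of Stan (the particular `result`, `breaks`, and the implicit flat prior are my assumptions): with equal weights, two disagreeing observations balance out at mu = 0; down-weighting the first pulls the maximizer toward what the second observation prefers.

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

breaks = [0.0, 0.0]
result = [0, 1]  # the two observations disagree

def weighted_loglik(mu, weights):
    total = 0.0
    for i in range(2):
        p = normal_cdf(breaks[i], mu, 3)
        lp = result[i] * np.log(p) + (1 - result[i]) * np.log(1 - p)
        total += weights[i] * lp
    return total

grid = np.linspace(-5, 5, 2001)
mle_equal = grid[np.argmax([weighted_loglik(m, [1.0, 1.0]) for m in grid])]
mle_weighted = grid[np.argmax([weighted_loglik(m, [0.5, 1.0]) for m in grid])]

# equal weights: the disagreement balances out at mu = 0
# weights (0.5, 1): the fit moves toward obs 2's preference (p > 1/2, so mu < 0)
```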