# Jacobian adjustment for a model

Dear all,

I was wondering whether I could get some help on a model where I’m trying to include a Jacobian adjustment (which I’ve never done before). The model converges easily, but the results are biased, and always in the same direction, which makes me think that something is wrong in the model.

The data generation is like this:

``````a ~ normal(muA,sdA)
b ~ normal(muB,sdA)
control is from list c(0.2,0.5,0.8)
feedback = control*a + (1-control)*b
``````
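For concreteness, the generative process above can be simulated like this. This is only a sketch in Python/NumPy: the true values for muA, muB, sdA are illustrative assumptions (the post does not state them), and the 20 trials per control level come from a later message in the thread.

```python
import numpy as np

# Illustrative true values; the post does not state the values actually used.
muA, muB, sdA = 1.0, 2.0, 0.5
controls = np.array([0.2, 0.5, 0.8])
ntr = 20  # 20 measurements per control level, as stated later in the thread

rng = np.random.default_rng(0)
control = np.repeat(controls, ntr)            # control level for each trial
a = rng.normal(muA, sdA, size=control.size)   # player A's own performance
b = rng.normal(muB, sdA, size=control.size)   # note: same sd as 'a' (sdB = sdA)
feedback = control * a + (1 - control) * b    # the observed mixture
```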

I give to the model ‘a’ and ‘feedback’ and I want it to estimate muA, sdA and control:

``````data {
  int nrow;             // size of my data
  int nenv;             // number of levels of control
  int envID[nrow];      // which control level each row has (1, 2 or 3)
  real feedback[nrow];
  real a[nrow];
}

parameters {
  real muA;
  real<lower=0> sdA;
  real muB;
  real<lower=0,upper=1> control[nenv];
}

transformed parameters {
  real b[nrow];
  for (irow in 1:nrow) {
    b[irow] = (feedback[irow] - control[envID[irow]] * a[irow]) / (1 - control[envID[irow]]);
  }
}

model {
  // priors
  muA ~ normal(0, 5);
  muB ~ normal(0, 5);
  sdA ~ normal(0, 1);
  // sdB ~ normal(0, 1);

  for (irow in 1:nrow) {
    a[irow] ~ normal(muA, sdA);
    b[irow] ~ normal(muB, sdA);
    // Jacobian adjustment (from the quotient rule)
    target += log(fabs((feedback[irow] - a[irow]) / (1 - control[envID[irow]])^2));
  }
}
``````

I am not sure about defining ‘b’ in the transformed parameters and then sampling it in the model block, nor about the target += statement I’ve added, where I tried to follow what the manual says about adding the log of the absolute derivative.
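For what it’s worth, the derivative inside that target += line does check out numerically. Here is a small Python check (the values of feedback, a, and control are arbitrary) that d/dc [(feedback − c·a)/(1 − c)] = (feedback − a)/(1 − c)²:

```python
# Numerically check the quotient-rule derivative used in the target += line:
# b(c) = (feedback - c*a) / (1 - c)  =>  db/dc = (feedback - a) / (1 - c)^2
f_val, a_val, c = 1.3, 0.7, 0.4  # arbitrary illustrative values
b_of = lambda c: (f_val - c * a_val) / (1 - c)
h = 1e-6
numeric = (b_of(c + h) - b_of(c - h)) / (2 * h)  # central difference
analytic = (f_val - a_val) / (1 - c) ** 2
```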

Just from the maths of this problem, I’m quite sure it should be solvable, given that the variance of a weighted sum of two independent normally distributed variables is:

var_sum = control^2 * var_A + (1 - control)^2 * var_B

In the present version of the model I have set sdB = sdA, and the variance of the sum (i.e. of the feedback) is also known, so this should definitely be solvable, which makes me think that there is a mistake in my model.
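That variance identity is easy to confirm by simulation; a quick Python/NumPy check (with arbitrary illustrative values, and sdB set equal to sdA as in the model):

```python
import numpy as np

# Monte Carlo check of var_sum = control^2*var_A + (1-control)^2*var_B
rng = np.random.default_rng(1)
control, sdA, sdB = 0.8, 0.5, 0.5   # sdB = sdA, as in the current model
n = 1_000_000
A = rng.normal(0.0, sdA, n)
B = rng.normal(0.0, sdB, n)
emp = np.var(control * A + (1 - control) * B)           # empirical variance
theory = control**2 * sdA**2 + (1 - control)**2 * sdB**2  # the identity
```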

(It’s not really relevant but the reason I’ve built this model is that I’m making a model of cognition where I want to estimate how different people can learn these distributions and I wanted to first see whether an ‘optimal observer’ can solve this problem.)

I would be very grateful for any advice.
Best wishes
Jacquie

The simple Jacobian adjustment thing is for when you compute N things from N other things.

In this case it looks like you have more b variables than control variables.

I honestly don’t know the answer. It sure seems like there’d be one, but that could be where things are breaking here.

Maybe there is a way around this. Could you describe your model more? What are ‘a’ and ‘b’, etc?

Oh, I see; I did not realise that having a different number of parameters and transformed parameters might cause the problem.
The data generation is:

``````a ~ normal(muA,sdA)
b ~ normal(muB,sdA)
control is from list c(0.2,0.5,0.8)
feedback = control*a + (1-control)*b
``````

What we tell participants in our task is that they play together with another player. The feedback that we’ll give them about how well they do will be a mixture of what they did and what the other person did, weighted by how much each contributed (i.e. how much ‘control’ they have over the outcome). They have introspection about what they do themselves (‘a’) and they get ‘feedback’ about the summed behaviour.

I have put in data now with size nrow (here I have 20 measurements per control level, of which there are 3).

The reason that in the modelling I’m sampling the transformed parameter ‘b’ is that I could not think of a better way to do it. I have tried having ‘b’ as a parameter instead, but that model does not compile (Stan says the line for feedback is not a legal statement).

``````parameters {
  real muA;
  real<lower=0> sdA;
  real muB;
  real<lower=0,upper=1> control[nenv];
  real b[nrow];
}

model {
  for (irow in 1:nrow) {
    a[irow] ~ normal(muA, sdA);
    b[irow] ~ normal(muB, sdA);
    // this line is rejected: 'feedback' is data and cannot be assigned to
    feedback = a[irow] * control[envID[irow]] + b[irow] * (1 - control[envID[irow]]);
  }
}
``````

PS: The reason that there are so many ‘b’ variables is that they represent the performance of the ‘other player’ on each single trial, whereas ‘control’ also applies on each trial but takes only one of 3 possible levels.

I think the data generation mechanism needs work and that might clarify how the model works.

It needs to answer the question of how we get from the parameters we want to know about to the things we measure.

This sounds like a sequence of games between two people. Is the outcome of the game measured? Are muA/muB player skills? Like, any chance this could look like one of the soccer models: World Cup model ?

Sorry that was not clearer, I have recoded this with better names now.
Data generation:

``````controls = c(0.2, 0.5, 0.8)
ncontrol = length(controls)
irow = 0
for (icontr in 1:ncontrol) {
  for (itr in 1:ntr) {
    irow = irow + 1
    controlID[irow] = icontr
    current_control = controls[icontr]
    performance_playerA[irow] = rnorm(1, mu_playerA, sd_playerA)
    performance_playerB[irow] = rnorm(1, mu_playerB, sd_playerA)
    sumPerformance[irow] = current_control * performance_playerA[irow] +
                           (1 - current_control) * performance_playerB[irow]
    # I am storing for the model to be fitted in Stan:
    # [1] performance_playerA
    # [2] controlID - i.e. not the amount of control, just which trials share the same level of control
    # [3] sumPerformance
  }
}
``````

In words: two players play a series of games together (total number: ntr*length(controls)). Each time, playerA knows how well he has done (‘performance_playerA’) and how well they have done together (‘sumPerformance’). He wants to figure out: how good his average performance is (‘mu_playerA’), how noisy his performance is (‘sd_playerA’), how good the other player is on average (‘mu_playerB’), and how much control he has over the outcome in the three different levels of control (‘controls’).

I am building a model of his cognition in Stan:

``````data {
  int nrow;             // total number of measurements
  int ncontrol;         // number of levels of control
  int controlID[nrow];  // identity of the control condition for each measurement (not the actual amount of control)
  real sumPerformance[nrow];
  real performance_playerA[nrow];
}

parameters {
  real mu_playerA;
  real<lower=0> sd_playerA;
  real mu_playerB;
  real<lower=0,upper=1> controls[ncontrol];
}

transformed parameters {
  real performance_playerB[nrow];
  for (irow in 1:nrow) {
    // rearranged from: sumPerformance = performance_playerA * controls[controlID[irow]]
    //                                   + (1 - controls[controlID[irow]]) * performance_playerB
    performance_playerB[irow] = (sumPerformance[irow] - controls[controlID[irow]] * performance_playerA[irow])
                                / (1 - controls[controlID[irow]]);
  }
}

model {
  for (irow in 1:nrow) {
    performance_playerA[irow] ~ normal(mu_playerA, sd_playerA);
    performance_playerB[irow] ~ normal(mu_playerB, sd_playerA);
    // Jacobian adjustment (from the quotient rule)
    target += log(fabs((sumPerformance[irow] - performance_playerA[irow]) / (1 - controls[controlID[irow]])^2));
  }
}
``````

Since performance_playerA and sumPerformance are getting measured, how about:

``````performance_playerA ~ normal(mu_playerA, sd_playerA);
sumPerformance ~ normal(current_control * performance_playerA + (1 - current_control) * mu_playerB, (1 - current_control) * sd_playerB);
``````

I’m not sure how the indexing works so I left that off. mu_playerA, sd_playerA, mu_playerB, sd_playerB, and current_control are the parameters there.
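The marginalization behind this suggestion can be checked by simulation: with performance_playerA held fixed at a, sumPerformance = c*a + (1-c)*B with B ~ normal(mu_B, sd_B) is itself normal, with mean c*a + (1-c)*mu_B and sd (1-c)*sd_B. A Python sketch (all values are illustrative, not from the post):

```python
import numpy as np

# With performance_playerA fixed at a, sumPerformance = c*a + (1-c)*B,
# where B ~ normal(mu_B, sd_B). So sumPerformance should be normal with
# mean c*a + (1-c)*mu_B and sd (1-c)*sd_B.
rng = np.random.default_rng(2)
c, a, mu_B, sd_B = 0.3, 1.1, 2.0, 0.5  # illustrative values
B = rng.normal(mu_B, sd_B, 1_000_000)
s = c * a + (1 - c) * B
mean_theory = c * a + (1 - c) * mu_B
sd_theory = (1 - c) * sd_B
```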

Thank you so much!!! This is amazing!!! I am so grateful for your help!

I did not think about the fact that if I know the actual performance of player A, then the noise in sumPerformance is (1 - control) * sdB. Excellent!
