Simple Prospect Theory model - inaccurate parameter value output

Hello everyone,

I am new to modelling and Stan, and I am interested in fitting a simple Prospect Theory model to my data (initially to data from a single participant as a test, but I would like to make it hierarchical further down the line). Within the model, the components of an option (value * probability) are transformed via the following functions:
value_corrected = value^alpha
probability_corrected = probability^gamma / (probability^gamma + (1 - probability)^gamma)^(1/gamma)

And finally, the probability of accepting the risky option is given by the logistic function p(accept) = 1 / (1 + exp(-mu * VD)), where VD is the difference in subjective value between the risky and the safe option.
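For reference, the two transformation functions and the logistic choice rule can be sketched in Python (this is only an illustration of the formulas above, not the Stan code; the function names are mine):

```python
import math

def subjective_value(v, alpha):
    # value_corrected = value^alpha
    return v ** alpha

def weighted_prob(p, gamma):
    # probability_corrected = p^gamma / (p^gamma + (1 - p)^gamma)^(1/gamma)
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def p_accept_risky(rm, sm, rp, alpha, gamma, mu):
    # value difference between the risky and the safe option
    vd = subjective_value(rm, alpha) * weighted_prob(rp, gamma) - subjective_value(sm, alpha)
    # logistic choice rule
    return 1.0 / (1.0 + math.exp(-mu * vd))
```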

So, I implemented the following model in Stan:

data {
  int<lower=0> N; //the number of trials for a single participant
  int accept[N];  //a vector holding the sequence of responses
  vector[N] RM;   //a vector holding the risky magnitude for each trial
  vector[N] SM;   //a vector holding the safe magnitude for each trial
  vector[N] RP;   //a vector holding the risky probability for each trial
}

parameters {
  real<lower=0.1,upper=5> alpha;
  real<lower=0.1,upper=5> gamma;
  real<lower=0,upper=2> mu;
}

model {
  //mu~uniform(0.5,1.5); //reminder: no need to specify uniform prior here if the parameter is defined as bounded

  //vectors for holding the transformed values/probs
  vector[N] risky_value;
  vector[N] safe_value;
  vector[N] risky_prob;
  vector[N] VD;
  vector[N] VD_cor;

  //transforming the raw values/probs
  for (i in 1:N){
    risky_value[i] <- pow(RM[i], alpha);
    safe_value[i] <- pow(SM[i], alpha);
    risky_prob[i] <- pow(RP[i], gamma) / pow(pow(RP[i], gamma) + pow(1 - RP[i], gamma), inv(gamma));
  }

  //computing the value difference and the probability of accepting the risky option
  for (i in 1:N){
    VD[i] <- risky_value[i] * risky_prob[i] - safe_value[i];
    VD_cor[i] <- 1 / (1 + exp(-mu * VD[i]));
  }

  for (i in 1:N){
    accept[i] ~ bernoulli_logit(VD_cor[i]);
  }
}

The model seems to converge, as indicated by low Rhat values; occasionally there are a couple of divergent transitions, but seemingly nothing too severe. My biggest issue, however, is that the estimated parameter values are not what I would expect them to be. For instance, I had a condition where participants were asked to be risk-averse and another where they were supposed to be more risk-taking; in this model, this should translate into the alpha parameter being lower in the former condition and higher in the latter. This was also confirmed by a maximum likelihood estimate of these parameters, which I am using as a sort of ‘ground truth’. Yet when I fit this model to a single participant’s data from these two conditions (separately), this is very often not the case.

So, because I am new to all this, I was wondering if you have any ideas why that might be? I have tried playing around with the priors and the parameter limits in the Stan model, but they alone were not responsible for the estimated parameters being ‘off’. Is there something obvious that I’m missing? For instance, do the parameters get transformed ‘under the hood’ by Stan, meaning that the model is not sampling in the parameter space I think it’s sampling in? Or have I simply specified something the wrong way, which is causing the model to misbehave?

Many thanks for any insights!

I think you are doing the logit transformation twice.

for (i in 1:N){
  VD[i] <- risky_value[i]*risky_prob[i] - safe_value[i];
  accept[i] ~ bernoulli_logit(mu * VD[i]);
}

is probably sufficient if I am not misunderstanding your goals.
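To see numerically why the double transformation matters: even if VD_cor had been computed as the intended inverse logit, passing it on to bernoulli_logit applies the inverse logit a second time, which squashes every choice probability into roughly (0.5, 0.731). A small Python check (names are mine, not from the model):

```python
import math

def inv_logit(x):
    # inverse logit: maps a real number to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

vd, mu = 2.0, 1.0
p_intended = inv_logit(mu * vd)           # the choice rule the model describes
p_double = inv_logit(inv_logit(mu * vd))  # what bernoulli_logit(VD_cor) effectively computes
```

Because the inner inverse logit already lands in (0, 1), the outer one can only produce values between 0.5 and about 0.731, flattening the likelihood and pulling the parameter estimates around.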


Thank you very much, you are correct indeed; good catch! Unfortunately, fixing that has not made my model more sensible. I generated some fake choice data using known parameter values (alpha = 0.8, gamma = 0.8, mu = 1), but the model is still not estimating the parameters correctly: the alpha parameter always seems to be under-estimated (i.e., closer to zero than it should be), and so does mu (0.47 and 0.02, respectively). The posterior for gamma has no identifiable peak, but that is an artifact of the response data not being diverse enough; importantly, fixing gamma to a known value has not improved the alpha and mu estimates, which still seem to be pulled towards zero. Again, I tried fitting a simple maximum likelihood model with the same starting points (trying to approximate the priors used in the Bayesian model, centred around a value of 1 for each parameter), and that one successfully recovers the correct parameter values, whereas the Stan model does not. Any more ideas on what might be wrong here?
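For reference, the fake-data generation step can be sketched in Python; the trial structure and function names here are my own assumptions, not the code actually used:

```python
import math
import random

def weighted_prob(p, gamma):
    # probability weighting: p^gamma / (p^gamma + (1 - p)^gamma)^(1/gamma)
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def simulate_choices(trials, alpha=0.8, gamma=0.8, mu=1.0, seed=1):
    # trials: list of (risky_magnitude, safe_magnitude, risky_probability)
    rng = random.Random(seed)
    accept = []
    for rm, sm, rp in trials:
        vd = rm ** alpha * weighted_prob(rp, gamma) - sm ** alpha
        p = 1.0 / (1.0 + math.exp(-mu * vd))
        accept.append(1 if rng.random() < p else 0)
    return accept
```

Feeding the simulated accept vector (plus the RM/SM/RP columns) into the Stan model and comparing posterior means against the known generating values is the standard parameter-recovery check.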

I feared that would not be the only thing. I can’t directly see any other mistakes. Did you run the maximum likelihood estimate with Stan? If not, you could try that. If Stan’s maximum likelihood recovers the true parameters, then you can probably be more confident in the code. My guess is that if the code is correct but HMC still returns wrong values, the model might be poorly identified; that is, different values of alpha, mu, and gamma give almost identical predictions.
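Another optimizer-independent sanity check is a brute-force grid search over the likelihood in plain Python. This is only a sketch under assumed trial structure (gamma fixed at 1 for simplicity, grid bounds arbitrary):

```python
import math

def neg_log_lik(alpha, mu, trials, accept, gamma=1.0):
    # negative log-likelihood of the choices under the prospect theory model
    nll = 0.0
    for (rm, sm, rp), a in zip(trials, accept):
        w = rp ** gamma / (rp ** gamma + (1 - rp) ** gamma) ** (1 / gamma)
        vd = rm ** alpha * w - sm ** alpha
        p = 1.0 / (1.0 + math.exp(-mu * vd))
        p = min(max(p, 1e-12), 1 - 1e-12)  # guard against log(0)
        nll -= a * math.log(p) + (1 - a) * math.log(1 - p)
    return nll

def grid_mle(trials, accept):
    # coarse grid over alpha and mu; returns the (alpha, mu) pair minimising the NLL
    grid = [i / 10 for i in range(1, 31)]
    return min(((a, m) for a in grid for m in grid),
               key=lambda am: neg_log_lik(am[0], am[1], trials, accept))
```

If even this crude search recovers the generating parameters but Stan does not, that points at the Stan code; if it also fails, that points at identifiability of the model given the stimuli.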

Thank you once again for your insight, Stijn - it wasn’t immediately obvious to me that I can run an MLE in Stan, so this is good to know! I have done as you suggested, but unfortunately I am encountering the same type of issue - my alpha parameter is estimated to be at values close to 0.1, which is the lower bound for that particular parameter (this does not surprise me very much - in the HMC plot of the parameter posteriors, the mean of the distribution may have been around 0.4, but the peak was between 0.1-0.2).
Another potentially noteworthy fact is that when I set a non-uniform prior over the parameters, the estimate tends to be heavily influenced by this prior, to the extent that the recovered parameter value is almost exclusively determined by the mean of the prior distribution. This would not surprise me for the gamma parameter, because, due to the design of my task, it is difficult to recover the true parameter value accurately. To my understanding, though, there is no reason why the same thing should happen to the alpha parameter, particularly when I keep the gamma value fixed. I was hoping I could develop my model by starting out with the basics and slowly building on top of it, but I think I will now try to look for other people’s Stan code implementing Prospect Theory and see if I can glean some information from there - I will update this thread if I manage to find a clue!

Hi Erik,

This is very generic advice. So it might not be immediately helpful.

  • These types of theoretical models have the difficulty that if the data are uninformative (the decision maker just makes random decisions), this can be modelled as mu → 0 or alpha → 0 (and probably something with gamma as well). This causes two problems: (1) identification: if mu = 0, alpha and gamma can be whatever and it doesn’t matter; (2) the priors dominate the data.

  • This problem could be exacerbated if you run the model for one participant; the data might not be very informative, especially if you run it for each condition separately. This should be less of a problem with the simulated data unless the data was small. Maybe you could generate 1000 observations just to be sure.

  • If you are digging further, you could start even simpler; fixing mu = 1, gamma = 1. If that recovers alpha, I would then only fix mu = 1. If that works, it might then be worthwhile thinking more about the priors especially once you move to the hierarchical model. If you are not used to thinking about that, you could have a look at Appendix C in one of my papers. This is just one way to do it and we probably could have done it better but at least it might give a starting point.
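The identification problem in the first bullet is easy to demonstrate: as mu approaches 0, every choice probability collapses to 0.5 regardless of the value difference, so alpha (which only enters through VD) drops out of the likelihood entirely. A tiny Python check (illustrative only):

```python
import math

def p_accept(vd, mu):
    # logistic choice rule: probability of accepting the risky option
    return 1.0 / (1.0 + math.exp(-mu * vd))

# With mu near 0, wildly different value differences give the same prediction,
# so the data cannot pin down the parameters that produced vd.
vds = [-3.0, -0.5, 0.0, 2.0, 10.0]
probs = [p_accept(vd, mu=1e-6) for vd in vds]
```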