Uniform circular distribution

Hi all,

I am trying to code a mixture model in which the mixture components are circular distributions: specifically, three von Mises distributions plus one circular uniform distribution. I've attempted to code the circular uniform component as a von Mises with kappa = 0; however, I'm getting an error from the sampler saying that kappa needs to be > 0.

Is anyone able to help with a workaround for this? Would a standard uniform distribution (limited to -pi to pi) be appropriate? Or should I stick with the von Mises distribution but use a value of kappa fractionally greater than zero?

Many thanks in advance for your help!

data {
  int<lower=1> K;                   // number of von mises mixture components
  int<lower=1> N;                   // number of trials
  int<lower=1> nsubj;               // The total number of subjects
  int<lower=1> id[N];               // for each trial, what is the subject id
  real<lower=-pi(), upper=pi()> x[N];           // response error on each trial
  real<lower=-pi(), upper=pi()> mu[N,K];        // values of mixture components
}

parameters {
  real<lower=0> PmA;
  real<lower=0> PmB;
  real<lower=0> PtA;
  real<lower=0> PtB;
  real<lower=0> Mprecision;
  real<lower=0> SDprecision;
  real<lower=0> precision[nsubj];      // each participant has a different precision (which is constant for all components)
  real<lower=0, upper=1> Pm[nsubj];             // each participant has a probability of responding from memory (versus guessing)
  real<lower=0, upper=1> Pt[nsubj];             // each participant has a probability of responding to the target P(target), given response comes from memory
}

transformed parameters{
  real Pshape;
  real Prate;
  simplex[K] Pitem[nsubj];        //indexing is equivalent to Pitem[nsubj,K]

  Pshape = (Mprecision^2)/(SDprecision^2); 
  Prate = Mprecision/(SDprecision^2);

  for (j in 1:nsubj){
      Pitem[j,1] = Pt[j];          //conditional P(target), given response comes from memory
      for (k in 2:K){
          Pitem[j,k] = (1-Pt[j])/(K-1); //conditional P(recall other item k), given response from memory
      }
  }
}

model {
  // Priors
  PmA ~ exponential(0.1);
  PmB ~ exponential(0.1);
  PtA ~ exponential(0.1);
  PtB ~ exponential(0.1);
  Mprecision ~ exponential(0.1);
  SDprecision ~ exponential(0.1);
	// Subject level parameters
  Pm ~ beta(PmA, PmB);     //probability that response comes from memory (any item)
  Pt ~ beta(PtA, PtB);     //probability that response comes from target, given it comes from memory
  precision ~ gamma(Pshape, Prate);
  //Trial by trial model
  for (i in 1:N) {  
    vector[K] lpi = log(Pitem[id[i]]);       //lpi is a temporary variable containing the log mixture proportions for that trial                                    
    vector[K+1] lpu;                    //ditto lpu, but with an extra row to incorporate the uniform component                                    

    // this loop: extract mixing proportions for each trial, weight likelihoods of each component
    // by their mixing proportion, then further marginalise over Pm (probability respond from memory)
    for (k in 1:K){
      lpi[k] += von_mises_lpdf(x[i] | mu[i,k], precision[id[i]]) + log(Pm[id[i]]);
    }
    // now: create new variable with extra row for uniform component, calculate its likelihood and, as before, 
    // weight by its mixing proportion 1-Pm
    lpu[1:K] = lpi;                                                           
    lpu[K+1] = log(1-Pm[id[i]]);                                                   
    lpu[K+1] += von_mises_lpdf(x[i] | mu[i,1], 0);                            
    // finally, compute the entire mixture as a sum of the component log likelihoods
    target += log_sum_exp(lpu);
  }
}

Not an expert here, but I think the circular uniform is just a constant, i.e. circular_uniform_lpdf(x) = log(1/(2*pi)) = uniform_lpdf(x | -pi, pi), which in Stan you can write as -log(2*pi()). Hope that helps.
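A quick numerical sanity check of that constant (Python, with scipy's vonmises standing in for Stan's von_mises_lpdf; the tiny kappa is just for illustration):

```python
import math
from scipy.stats import vonmises

# Circular uniform log density: a constant, log(1/(2*pi)).
log_unif = math.log(1.0 / (2.0 * math.pi))

# A von Mises with kappa -> 0 flattens to that same constant,
# for any angle x and any mean mu.
for x in (-3.0, 0.0, 2.5):
    lp = vonmises.logpdf(x, 1e-9, loc=1.0)
    assert abs(lp - log_unif) < 1e-6

print(round(log_unif, 4))  # prints -1.8379
```

So the density of any angle under the circular uniform is just 1/(2*pi), regardless of where the von Mises components sit.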


Great, yes, I thought that was the most likely answer - funny how it all becomes clearer once you've asked the question! Thanks for your help

Also, check out the paper here where we fit a hierarchical uniform + von Mises circular mixture model. It should be relatively straightforward to add multiple von Mises components if your context is also hierarchical, though be prepared for label switching.
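For anyone reading along, the per-trial marginalization in the model block above can be sketched outside Stan like this (Python; the parameter values are made up for illustration, and scipy's vonmises stands in for von_mises_lpdf):

```python
import math
import numpy as np
from scipy.special import logsumexp
from scipy.stats import vonmises

def trial_log_lik(x, mu, kappa, Pm, Pitem):
    """Log likelihood of one response x under K von Mises components
    (total weight Pm, split across items by Pitem) plus a circular
    uniform guessing component (weight 1 - Pm)."""
    K = len(mu)
    lpu = np.empty(K + 1)
    for k in range(K):
        lpu[k] = (math.log(Pm) + math.log(Pitem[k])
                  + vonmises.logpdf(x, kappa, loc=mu[k]))
    # uniform component: constant density 1/(2*pi) on the circle
    lpu[K] = math.log(1.0 - Pm) + math.log(1.0 / (2.0 * math.pi))
    return logsumexp(lpu)

# made-up values: 3 components, target gets most of the memory weight
print(trial_log_lik(0.3, [0.0, 1.5, -2.0], 4.0, 0.8, [0.7, 0.15, 0.15]))
```

This is exactly the lpu / log_sum_exp construction in the model block, with the von_mises_lpdf(... | ..., 0) line replaced by the constant log(1/(2*pi)).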