Dynamic vector size within model


#21

right, sorry for making you repeat yourself then :) Re collapsing: I mean that instead of sampling a ‘quantity’ variable and n magnitude variables, you just sample a single ‘total effect’ (quantity * magnitude) variable that modifies some autoregressive process with a decay of tau. Looking again, it seems you don’t even necessarily want a different magnitude for every impulse, in which case it seems straightforward to me; but then I’m probably missing something and can’t parse through all the code.
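A minimal numeric sketch of that collapsing idea (plain Python; the function name, the arbitrary values, and the exponential kernel are illustrative assumptions standing in for the model's quantity * magnitude and decay):

```python
import math

def impulse_response(effect, tau, T):
    # one impulse at t = 0, decaying as exp(-t / tau)
    return [effect * math.exp(-t / tau) for t in range(T)]

# n impulses of magnitude m landing at the same time are indistinguishable
# from a single impulse with "total effect" n * m
n, m, tau, T = 4, 2.5, 10.0, 20
separate = [sum(vals) for vals in zip(*[impulse_response(m, tau, T) for _ in range(n)])]
combined = impulse_response(n * m, tau, T)
```

So the likelihood only sees the product, which is why sampling quantity and magnitudes separately buys nothing here.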


#22

@Charles_Driver this is what the Stan code looks like:

data{
  int<lower=1> T; //n obs
  real y[T]; //data
}

transformed data{
  //calculate the median of my data for my initial value of my shocks
  real<lower=0> median;
  int middle;
  real y_sorted[T];
  middle = T / 2;
  y_sorted= sort_asc(y);
  if (T % 2) { // T odd: take the middle element
    median = y_sorted[middle + 1];
  } else {     // T even: average the two middle elements
    median = (y_sorted[middle] + y_sorted[middle + 1]) / 2.0;
  }
}

parameters {
  real<lower=1e-7, upper=T> tau; // decay time constant
  real<lower=0> sigma; // residual sd
  real<lower=0> delta; // amplitude of the shock
  real<lower=0, upper=1> mix; // mixture weight for the shock amplitude
}

transformed parameters {
  vector[T] decay; // accumulated, exponentially decaying shock effect
  for (i in 1:T) {
    decay[i] = 0;
    for (j in 0:(i - 1)) { // O(T^2): one decayed impulse per earlier step
      decay[i] = decay[i] + median * delta * exp(-j / tau);
    }
  }
}

model {
  sigma ~ cauchy(0, 5);
  y ~ normal(decay, sigma);
  target += log_mix(mix, gamma_lpdf(delta | 5, 1),
                         exponential_lpdf(delta | 1)); // using an exponential instead of a point mass at {0} because I'm not sure how to code that
}

generated quantities {
  real ps_rep[T];
  real y_rep[T];
  for (i in 1:T) {
    ps_rep[i] = 0;
    for (j in 0:(i - 1)) {
      ps_rep[i] = ps_rep[i] + median * delta * exp(-j / tau);
    }
  }
  for (nn in 1:T) {
    y_rep[nn] = normal_rng(ps_rep[nn], sigma); // posterior predictive draw, not just the mean
  }
}

However, it does not fit my data well :(.
Any suggestions are welcome.
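Incidentally, the nested loop in the transformed parameters block above is O(T^2), and because each extra time step just multiplies every existing term by exp(-1/tau), it collapses to a one-line recursion, which is essentially the autoregression being proposed in this thread. A quick check in plain Python (base stands in for median * delta; the numbers are arbitrary):

```python
import math

T, tau, base = 12, 4.0, 3.0  # base stands in for median * delta

# nested-loop version, as in the Stan transformed parameters block
decay_loop = [sum(base * math.exp(-j / tau) for j in range(i)) for i in range(1, T + 1)]

# equivalent AR(1) recursion: shrink the running total by exp(-1/tau), add a fresh impulse
rho = math.exp(-1.0 / tau)
decay_rec = [base]
for _ in range(1, T):
    decay_rec.append(decay_rec[-1] * rho + base)
```

The recursive form is both cheaper and closer to the univariate-autoregression framing.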


#23

I still might be missing something, but that seems complicated. Here’s a skeleton of what I was proposing: just a univariate autoregression, and you need to give some thought to the error / shock distribution. Code untested.

parameters {
  vector[T] mix; // per-step shock / innovation
  real init; // probably not well identified so constrain it somehow.
  real<lower=0> sigma;
  real<lower=0, upper=1> tau;
}

transformed parameters {
  vector[T] process;
  process[1] = init;
  for (t in 2:T) {
    process[t] = process[t - 1] * tau + mix[t];
  }
}

model {
  mix ~ normal(0, 1); // come up with a better model for this!
  sigma ~ cauchy(0, 5);
  init ~ normal(0, 10);
  y ~ normal(process, sigma); // likelihood; y and T declared in a data block as in your code
}
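Simulated outside Stan, that skeleton behaves as intended (plain Python; the simulate_ar1 helper and the single-impulse shock vector are illustrative assumptions): a lone shock decays geometrically at rate tau afterwards, which is the same impulse-decay behaviour the earlier model builds by explicit summation.

```python
def simulate_ar1(shocks, tau, init=0.0):
    # the skeleton's transformed parameters block: keep a fraction tau
    # of the previous state, then add this step's shock
    process = [init]
    for t in range(1, len(shocks)):
        process.append(process[t - 1] * tau + shocks[t])
    return process

tau = 0.8
shocks = [0.0, 1.0] + [0.0] * 6  # one unit impulse at t = 1
path = simulate_ar1(shocks, tau)
```

With several nonzero entries in `shocks`, overlapping impulses accumulate automatically, with no nested loop required.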
