Constraining sum of coefficients

I’m building a simple random walk model where y_t \sim N(\beta_1 y_{t-1} + \beta_2 y_{t-2} + \ldots + \beta_K y_{t-K}, \sigma). I would like to constrain \sum_{k=1}^K \beta_k = 1, but I don’t want to require that each \beta_k \in (0, 1), so I don’t want to declare the coefficients as a simplex.

A hack that works for me is to just declare the last \beta as a separate variable (see the following code block). Is there a better way to write this? It feels like there should be one.

data {
    int<lower=1> N; // number of observations
    int<lower=2> K; // number of coefficients (lags)
    vector[N] y; // the data
}

parameters {
    vector[K-1] beta;
    real<lower=0> sigma;
}

transformed parameters {
    real beta_K = 1 - sum(beta); // constrain: sum of all beta = 1
}
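
For completeness, a model block using this parameterization might look something like the sketch below (assuming the likelihood starts at t = K + 1 so that every lag exists):

model {
    for (t in (K + 1):N) {
        // mean is a weighted sum of the previous K observations
        real mu = beta_K * y[t - K];
        for (k in 1:(K - 1)) {
            mu += beta[k] * y[t - k];
        }
        y[t] ~ normal(mu, sigma);
    }
}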

The following seems like a step in the right direction:

data {
    int<lower=1> N; // number of observations
    int<lower=2> K; // number of coefficients (lags)
    vector[N] y; // the data
}

parameters {
    vector[K-1] beta_tilde;
    real<lower=0> sigma;
}

transformed parameters {
    vector[K] beta; // constrain: sum of all beta = 1
    for (k in 1:(K-1)) {
        beta[k] = beta_tilde[k];
    }
    beta[K] = 1 - sum(beta_tilde);
}
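
If that is the right idea, the transformed parameters block can also collapse to a single line using append_row (a sketch of the same construction):

transformed parameters {
    // sum-to-one constraint: last coefficient is 1 minus the rest
    vector[K] beta = append_row(beta_tilde, 1 - sum(beta_tilde));
}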

@mitzimorris discusses this in her case study. The short version is that enforcing these constraints softly, via a tight prior on the sum rather than a hard reparameterization, is much nicer. https://mc-stan.org/users/documentation/case-studies/icar_stan.html
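
For example, adapting the case study’s soft sum-to-zero trick to the sum-to-one constraint here might look like this sketch (the 0.001 scale is an arbitrary choice for how tightly to enforce the sum):

parameters {
    vector[K] beta; // all K coefficients are free
    real<lower=0> sigma;
}

model {
    // soft constraint: a tight prior pulls sum(beta) toward 1
    sum(beta) ~ normal(1, 0.001);
    // ... likelihood as before ...
}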


Thanks, that’s a good suggestion.