# Timeseries inverse modelling - à la Soil Carbon Modeling

Hi Stan people

I’ve spent the last few days trying to implement an ocean carbon model inspired by Bob’s soil carbon modelling and some estuarine inverse modelling work done with JAGS (Grace et al., 2015). Unlike Bob’s model, I can solve mine analytically.

At the moment I’m testing the model as I build up each set of parameters. See the Stan model below; most of the parameters have been commented out for testing…

What I’m not understanding at the moment is why there is so much variance in my C_hat estimates when the model is given a data set in which C_hat[n-1] is exactly equal to C[n], i.e. P = 0.
See the attached plot; the red line is the data.
I would expect sigma and P to be correctly estimated at near zero.

So either my model is assembled wrong or I’m missing something fundamental.

Does anyone fancy giving me some much appreciated help? Some example data is also attached.

test_data1.csv (3.8 KB)

Reference:

Grace, M. R., Giling, D. P., Hladyz, S., Caron, V., Thompson, R. M., & Mac Nally, R. (2015). Fast processing of diel oxygen curves: Estimating stream metabolism with BASE (BAyesian Single-station Estimation). Limnology and Oceanography: Methods, 13(3), 103–114. http://doi.org/10.1002/lom3.10011

Model:

```stan
data {
  int<lower=0> N;       // number of samples
  real<lower=0> C[N];   // observed O2
  real<lower=0> PAR[N]; // light
  real<lower=0> dt[N];  // change in time between obs
  real<lower=0> Csat;   // observed saturation coef
  real<lower=0> kw;     // observed kw
}

parameters {
  // real<lower=0> R;      // respiration
  real<lower=0> sigma;     // model fit error
  // real<lower=0> A;      // PP per PAR ("photosynthetic efficiency")
  // real<lower=0> p;      // PP coef2 ("saturation")
  // real<lower=0> kw_hat;
  real P[N];
}

transformed parameters {
  real C_hat[N]; // predicted C at t
  for (t in 1:N) {
    // P[t] = A * PAR[t]^p; // PP estimate
    C_hat[t] = ((C[t] - Csat) * exp(-kw * dt[t]) + Csat) + P[t]; // R removed and P allowed to be negative
  }
}

model {
  sigma ~ cauchy(0, 0.1);
  // A ~ normal(0, 0.25);
  // p ~ normal(1.1, 0.25);
  // R ~ cauchy(0, 1);
  P ~ normal(0, 10);
  // kw_hat ~ normal(kw, 0.5);
  for (n in 2:N) {
    C[n] ~ normal(C_hat[n - 1], sigma);
  }
}
```
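For what it's worth, here is a minimal sketch of how a zero-production test set (P = 0, no observation noise) could be simulated from the analytic gas-exchange update in the transformed parameters block. The rough magnitudes (Csat ≈ 200, kw ≈ 2.3e-05, dt ≈ 1800) come from this thread; the starting value and series length are arbitrary assumptions:

```python
import numpy as np

# Rough magnitudes from the thread; N and C[0] are arbitrary.
Csat = 200.0   # saturation concentration
kw = 2.3e-5    # gas-exchange coefficient
dt = 1800.0    # time step between observations (s)
N = 100

# Pure gas-exchange dynamics with P = 0:
#   C[t] = (C[t-1] - Csat) * exp(-kw * dt) + Csat
C = np.empty(N)
C[0] = 180.0  # arbitrary undersaturated start
for t in range(1, N):
    C[t] = (C[t - 1] - Csat) * np.exp(-kw * dt) + Csat

# With no noise, C_hat[n-1] reproduces C[n] exactly, so both sigma
# and the P[n] should be estimable at (near) zero on this data set.
```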

Your C_hat is a deterministic function plus P[t], and P[t] is a vector of normal(0, 10) random variables… so the prior variability in C_hat is fixed at sd = 10.


P and sigma might be doing similar things here.

Have you looked at pairplots of P and sigma?

What is the text output of this model fit?

Hi guys,

Looks like I was a bit too quick to jump in asking for help: I was forcing my model with data to which I’d applied noise (to simulate measurement error); the columns had similar names!

The model above appears to work just fine.

Hi All again,

If I can reopen this conversation on a slightly different tack.

The above model now appears to be working happily; however, I have read that it is preferred to make variables scale-free and unit scale.
For instance, C_hat, C and Csat are typically around 200, while kw is very small (~2.3e-05)
and dt is around 1800.

What issues am I likely to encounter if I don’t do this?
Conceptually, I’m not sure how I’d correctly interpret the variables if I do.

Thanks


The problem with varying scales is that we need to estimate the scales during warmup, but to do warmup effectively, we need to already have the scales. So convergence can be slow.

The other problem with varying scales is correlation: by default we use a diagonal mass matrix, which can only scale each coordinate, not jointly scale and rotate. (We can estimate a dense mass matrix instead, but that’s also problematic: it can be large, it’s much harder to estimate during warmup than scales alone, and getting it wrong can make warmup very slow to converge.)
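To see why the spread of scales in this model matters, here is a small illustrative sketch (assumption: the scales below are just the rough magnitudes quoted in this thread). With a single step size and an identity metric, the smallest scale limits the step, so traversing the largest scale takes a number of steps that grows with the ratio between them:

```python
import numpy as np

# Rough parameter scales from this thread (illustrative only):
# C-like quantities ~200, kw ~2.3e-05, dt ~1800.
scales = np.array([200.0, 2.3e-5, 1800.0])

# With an identity (unit) mass matrix, one step size must suit every
# coordinate; it is limited by the smallest scale, so the work needed
# to cross the largest scale grows with this ratio.
ratio = scales.max() / scales.min()
print(f"scale ratio: {ratio:.1e}")

# Rescaling each parameter to unit scale by hand removes the problem.
rescaled = scales / scales
print(f"after rescaling: {rescaled.max() / rescaled.min():.1f}")
```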

If you have a variable that’s around 200, then you can do this:

```stan
parameters {
  real C_hat_raw;
  ...
transformed parameters {
  real C_hat = 200 * C_hat_raw;
  ...
```

Now you have your `C_hat` on the scale you expect, but Stan will be sampling with `C_hat_raw`.
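For interpretation, nothing changes: the transform is just a deterministic change of units, so the implied distribution on the natural scale is the same either way. A quick numerical sketch (assuming a standard normal on the raw scale, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sampling on the raw (unit) scale and multiplying by 200 is
# distributionally identical to sampling on the natural scale;
# the sampler just explores the better-conditioned raw space.
C_hat_raw = rng.normal(0.0, 1.0, size=100_000)
C_hat = 200.0 * C_hat_raw

print(round(float(C_hat.std())))  # close to 200
```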