I have fitted a simple AR(1) model with a Poisson outcome in brms.

As far as I know, an ARMA model assumes that the “shocks” (the “error term” in the ARMA process) are i.i.d. Gaussian.

I don’t see a way to extract these from my brms model for assumption checking. Am I missing something? How can I check my model?

As a side note/question:
I also didn’t find a way to get fitted or predicted values that utilize the fitted ARMA process. It seems the choices are either to set the ARMA process to zero or to simulate new draws of the ARMA process. The same also applies to GP models.

I am not an expert on brms, but brms models are Stan models, so you can use extr <- extract(fit).
Then use extr$foo; it’s not difficult to find out what “foo” is. You could then apply one of the usual normality tests.
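For a brmsfit this would look roughly like the following (a sketch: `fit` is assumed to be a fitted brmsfit object, which stores the underlying stanfit in `fit$fit`; the parameter names depend on your model):

```r
library(brms)

# 'fit' is assumed to be a fitted brmsfit object.
extr <- rstan::extract(fit$fit)   # extract() applies to the underlying stanfit
names(extr)                       # find the "foo" you are after

# brms also provides direct accessors for posterior draws:
variables(fit)                    # list all parameter names
draws <- as_draws_df(fit)         # e.g. draws$`ar[1]` for the AR(1) coefficient
```

`shapiro.test()` could then be applied to whatever residual vector you manage to extract.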

Very true. Brandt et al. defined filters using the gamma distribution as a conjugate prior; they are not easy to understand and implement.

# Brandt et al. 2000. "Dynamic Modelling for Persistent Event
# Count Time Series." American Journal of Political Science. 44(4):
# 823-843.
#
# Brandt and Williams. 2001. "A Linear Poisson Autoregressive
# Model: The PAR(p)" Political Analysis 9(2).

The fitted ARMA “shocks” don’t seem to be part of the returned object. If they can be recovered at all, then only by deriving them from the other variables that are returned. One of the things that is returned is the value of the ARMA process at each data point, so I guess the shocks could be derived from that in combination with the fitted ARMA parameters.
Edit: But brms doesn’t seem designed to require its users to manually hack into the returned object, so I hope it has an accessor function for that (I didn’t find any). Or maybe there is some other clever way to assess the assumptions that doesn’t require extracting the shocks.
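For an AR(1), that derivation can be sketched in plain R by inverting the recursion (simulated here, since the brms internals aren’t exposed; `err` stands in for the returned ARMA process values and `ar1` for the fitted AR coefficient):

```r
# Simulate an AR(1) process so the recovery can be demonstrated end to end.
set.seed(1)
n     <- 200
ar1   <- 0.6
shock <- rnorm(n, sd = 0.5)        # the true i.i.d. Gaussian shocks
err   <- numeric(n)
err[1] <- shock[1]
for (t in 2:n) err[t] <- ar1 * err[t - 1] + shock[t]

# Given the process values err and the AR coefficient, the shocks follow
# from inverting the recursion: shock_t = err_t - ar1 * err_{t-1}.
recovered <- c(err[1], err[-1] - ar1 * err[-n])

all.equal(recovered, shock)   # TRUE
```

With posterior draws of the AR parameters, the same inversion would be applied per draw.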

Are you referring to the values of the ARMA process or to the shocks?

brms doesn’t fit the ARMA process directly to the data in this case, but to a latent variable that is part of the linear predictor. In other words, it models the error on the scale of the linear predictor as an ARMA process. This makes it possible to have discrete response variables and to use predictors to account for non-stationarity.
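A generative sketch of that structure in plain R (all names and values are my own, for illustration only):

```r
# Latent AR(1) on the scale of the linear predictor, Poisson observations:
#   err_t = ar1 * err_{t-1} + shock_t,  shock_t ~ Normal(0, sigma)
#   y_t   ~ Poisson(exp(eta_t + err_t))
set.seed(2)
n     <- 300
ar1   <- 0.5
sigma <- 0.3
eta   <- 1.0 + 0.002 * (1:n)   # linear predictor; may drift (non-stationarity)

err    <- numeric(n)
err[1] <- rnorm(1, sd = sigma / sqrt(1 - ar1^2))   # stationary initial value
for (t in 2:n) err[t] <- ar1 * err[t - 1] + rnorm(1, sd = sigma)

y <- rpois(n, lambda = exp(eta + err))   # discrete response
```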

If you look in the Stan forum (here) you will find lots of implementations. It is not that hard, but the model is slow and has some identifiability problems.

I implemented an INGARCH model before; it may be useful: intgarch.stan (727 Bytes)

A good thing is that brms has a function to recover the Stan code, so you can take that and modify it to get what you want (it is not that hard).
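For example (a sketch with placeholder data; `make_stancode()` returns the Stan program without fitting it):

```r
library(brms)

# Placeholder data; replace with your own series.
mydata <- data.frame(y = rpois(50, 2), x = rnorm(50), time = 1:50)

# Stan code for a Poisson AR(1) model, generated but not compiled:
code <- make_stancode(
  y ~ x + ar(time = time, p = 1),
  data   = mydata,
  family = poisson()
)
cat(code)  # edit this to expose the shocks, then fit with rstan or cmdstanr
```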

If I understand your notation correctly, yes, to the shocks: the errors obtained after filtering out the ARMA process.

Well, I would be wary of fitting a non-stationary process with an ARMA model, even for latent variables, but that is just me (a bit old-fashioned). But even after fitting an ARMA model to the latent variable, the model residuals (shocks) have to behave stationary. Otherwise, the ARMA process didn’t do anything (not even tickle).

The variance and expected value of the Poisson distribution are the same. To me, the common way is to use the gamma distribution to realize the ARMA process. The entropy of a count distribution is lower than that of a normal distribution, and since ARMA already requires a large amount of data to fit, this requirement becomes even larger.
If you mess up the error distribution, say by assuming a normal distribution, I doubt the model fits will provide anything useful.

Not sure what you mean by “filtering” here.

This latent variable is stationary if the surrounding regression model fits the data. So it’s not just the shocks that are stationary, but the entire ARMA process.
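One way to check whether shocks behave like stationary white noise, using only base R (`shock` below is simulated, standing in for shocks recovered from a fitted model):

```r
# Simulated i.i.d. shocks stand in for ones recovered from a fitted model.
set.seed(3)
shock <- rnorm(200)

# Ljung-Box test for remaining autocorrelation: a small p-value suggests
# the ARMA part did not absorb the serial dependence.
lb <- Box.test(shock, lag = 10, type = "Ljung-Box")
print(lb$p.value)

# Normality check of the shocks:
sw <- shapiro.test(shock)
print(sw$p.value)
```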

No, the ARMA process doesn’t need that much data to fit. And that is what I am talking about: the processes in that paper have already been implemented by someone else, and their sketch is here in the Stan forum.

Thanks for the hint, but it doesn’t make sense in my situation to start over learning and implementing a new approach at this point. My models and model-checking code are almost finished.