More than a one-step lag between observation and prediction

I want to write a simple DLM (dynamic linear model) in Stan, but with an extra restriction: the prediction at time t may only use observations up to time t-k. By default, I think Stan assumes that observations are available up to t-1. How should I set up the model for k > 1?

Stan is a probabilistic programming language in which you can express any set of assumptions you want; there are no defaults beyond those you write into your model. Could you post the code you have so far? That would help us advise on how to incorporate the structure you want.
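In the meantime, here is a minimal sketch of one way the k-step structure could look, assuming a local-level (random-walk) DLM — all variable names here are hypothetical, and your actual model may differ. The key point is that the lag does not change the likelihood at all: you still condition the latent states on all observed data. The k-step restriction only enters the *predictions*, in generated quantities, where the forecast for time t is built from the state at time t-k by propagating the transition k times (for a random walk, this just inflates the forecast variance by k times the state innovation variance):

```stan
data {
  int<lower=1> T;          // number of observations
  int<lower=1> k;          // forecast lag: predict time t from data up to t-k
  vector[T] y;             // observed series
}
parameters {
  vector[T] mu;            // latent level (state)
  real<lower=0> sigma_obs;   // observation noise sd
  real<lower=0> sigma_state; // state innovation sd
}
model {
  // State evolution: random-walk level
  mu[2:T] ~ normal(mu[1:(T - 1)], sigma_state);
  // Observation equation
  y ~ normal(mu, sigma_obs);
}
generated quantities {
  // k-step-ahead predictions: for a random-walk level, the point forecast
  // for time t given data up to t-k is mu[t - k], and the k intermediate
  // transition steps add k * sigma_state^2 to the predictive variance.
  vector[T] y_pred;
  for (t in 1:T) {
    if (t <= k) {
      // No data available k steps back; this choice is arbitrary —
      // you might instead leave these entries out entirely.
      y_pred[t] = normal_rng(mu[1], sigma_obs);
    } else {
      real sd_k = sqrt(k * square(sigma_state) + square(sigma_obs));
      y_pred[t] = normal_rng(mu[t - k], sd_k);
    }
  }
}
```

This is only one reading of your question; if by "prediction at time t" you mean something done inside the likelihood (e.g., the observation at t depends on earlier observations directly rather than through the state), the modeling would look quite different, which is why seeing your code would help.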