General question: what is the Bayesian analog of the frequentist notion of a ‘clustered standard error’?

I know that a standard error is the standard deviation of the sampling distribution of some estimator. In the Bayesian setting, we model parameter distributions directly, so we’d just query the posterior: “what’s the standard deviation of the posterior draws for this parameter?”

But clustered standard errors adjust this idea for settings where some samples are dependent on one another, for example panel data where y_t depends on y_{t-1} within a unit. The effective number of independent observations is N (units), not NT (unit-time points).
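To make that concrete, here is a small NumPy sanity check. It uses made-up sizes (N=200, T=50) and a random-intercept form of within-unit dependence rather than the AR(1) example above, but the effect is the same: treating all NT observations as independent badly understates the uncertainty of the grand mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: N units, T observations each. A shared unit-level
# effect alpha_i makes observations within a unit dependent.
N, T = 200, 50
tau, sigma = 1.0, 1.0  # assumed between-unit and within-unit sds
alpha = rng.normal(0, tau, size=N)
y = alpha[:, None] + rng.normal(0, sigma, size=(N, T))

# Naive SE of the grand mean: pretends all N*T draws are independent.
naive_se = y.std(ddof=1) / np.sqrt(N * T)

# Cluster-aware SE: the grand mean is really an average of N
# independent unit means, so uncertainty scales with N, not NT.
clustered_se = y.mean(axis=1).std(ddof=1) / np.sqrt(N)

# Under this model the true SE is sqrt((tau^2 + sigma^2/T) / N),
# which clustered_se recovers and naive_se misses by a wide margin.
theory_se = np.sqrt((tau**2 + sigma**2 / T) / N)
```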

My best guess at the Bayesian analog of clustered standard errors is multilevel (hierarchical) models. Perhaps we assume each unit is independent and receives its own parameters, for example \mu_i, \sigma_i, and each unit’s panel data is then sampled conditional on those unit-level parameters.
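Here is a minimal sketch of that idea as a Gibbs sampler, assuming a random-intercept model y_{it} = \mu + \alpha_i + \epsilon_{it} with the variance components \tau and \sigma treated as known, a flat prior on \mu, and made-up sizes. The point of the sketch is that the posterior sd of \mu comes out near sqrt((\tau^2 + \sigma^2/T)/N), i.e. uncertainty driven by the N clusters rather than the NT observations, which is exactly the clustered-SE behavior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes and (assumed known) variance components.
N, T = 100, 5
mu_true, tau, sigma = 2.0, 1.0, 1.0

# Simulate panel data: y_it = mu + alpha_i + eps_it.
alpha_true = rng.normal(0, tau, size=N)
y = mu_true + alpha_true[:, None] + rng.normal(0, sigma, size=(N, T))
ybar = y.mean(axis=1)  # unit means

# Gibbs sampler over (mu, alpha); tau and sigma fixed, flat prior on mu.
mu, alpha, draws = 0.0, np.zeros(N), []
for it in range(6000):
    # alpha_i | mu, y: conjugate normal update combining the N(0, tau^2)
    # prior with T observations of y_it - mu.
    prec = T / sigma**2 + 1 / tau**2
    mean = (T * (ybar - mu) / sigma**2) / prec
    alpha = rng.normal(mean, 1 / np.sqrt(prec))
    # mu | alpha, y: flat prior, so a normal centered on the mean residual.
    mu = rng.normal((y - alpha[:, None]).mean(), sigma / np.sqrt(N * T))
    if it >= 1000:  # discard burn-in
        draws.append(mu)
draws = np.array(draws)

post_sd = draws.std()
# Analytic posterior sd of mu: scales with N clusters...
theory_sd = np.sqrt((tau**2 + sigma**2 / T) / N)
# ...versus the wrong answer from pretending NT draws are independent.
naive_sd = np.sqrt((tau**2 + sigma**2) / (N * T))
```

The partial pooling across \alpha_i is what inflates the posterior sd of \mu relative to the naive NT-independent calculation, so the "clustering correction" falls out of the model structure rather than being a post-hoc adjustment.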

I’m curious what the established Bayesian approach is, if one exists.