How to account for the degrees of freedom (d.f.) in a chi-square test in a Bayesian context?

In Gelman's book (Bayesian Data Analysis), the posterior predictive p-value is defined as

\text{p-value} = \int d\theta \int dy \, \mathbb{I}_{\{T(y,\theta) > T(y_{\text{obs}}, \theta)\}} \, f(y|\theta) \, \pi(\theta | y_{\text{obs}}),

where f(y|\theta) is the likelihood, \pi(\theta | y_{\text{obs}}) is the posterior distribution, and T(y,\theta) is a test statistic.

Taking T(y,\theta) := \sum_i \frac{(y_i - \mathbb{E}[y_i|\theta])^2}{\mathbb{V}[y_i|\theta]}, where y=(y_1,y_2,\dots,y_n), we can check our model.
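For concreteness, the p-value above is usually approximated by Monte Carlo: for each posterior draw of \theta, simulate a replicated data set and compare the discrepancies. Below is a minimal sketch, assuming a hypothetical conjugate Gamma-Poisson model (chosen only so that posterior draws are trivial to obtain; the data and prior values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed Poisson counts (illustration only).
y_obs = np.array([3, 7, 5, 4, 6, 2, 8, 5])
n = len(y_obs)

# Gamma(a0, b0) prior on the Poisson rate theta gives a
# Gamma(a0 + sum(y), b0 + n) posterior (shape/rate parameterization).
a0, b0 = 1.0, 1.0
S = 5000
theta = rng.gamma(a0 + y_obs.sum(), 1.0 / (b0 + n), size=S)  # posterior draws

def T(y, theta):
    # Chi-square discrepancy: for Poisson data,
    # E[y_i | theta] = V[y_i | theta] = theta.
    return np.sum((y - theta) ** 2 / theta, axis=-1)

# Replicated data: one simulated data set per posterior draw.
y_rep = rng.poisson(theta[:, None], size=(S, n))

# Fraction of draws where the replicated discrepancy exceeds the observed one.
p_value = np.mean(T(y_rep, theta[:, None]) > T(y_obs, theta[:, None]))
```

Note that nowhere in this computation does a degrees-of-freedom count appear: the reference distribution of T(y,\theta) is generated by simulation rather than looked up in a chi-square table.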

In the non-Bayesian (frequentist) context, the p-value for the chi-square test is computed using the degrees of freedom. In the Bayesian context above, however, it seems we no longer need the degrees of freedom to compute the posterior predictive p-value. Is that correct?

How is the d.f. reflected in this procedure, or do we not need it at all?