Not necessarily a negligible difference, but there is certainly non-negligible uncertainty about the difference. In addition to looking at elpd, it would be good to use an application-specific utility or cost function, so that it is easier to assess whether there is a non-negligible probability of a practically relevant difference.
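For example, a minimal sketch of checking the uncertainty in the elpd difference, assuming fit1 and fit2 are hypothetical models fitted with rstanarm or brms:

```r
library(loo)
loo1 <- loo(fit1)
loo2 <- loo(fit2)
# loo_compare reports elpd_diff together with se_diff, which together
# indicate whether the difference is clearly non-zero
loo_compare(loo1, loo2)
```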
We will also soon have a paper discussing in more detail the issues in elpd-based model comparison.
Can you tell me more about your model and modeling task? Then I may have recommendations for easier-to-interpret utility or cost functions.
The data need to be the same, but the likelihoods can be different. See Can WAIC/LOOIC be used to compare models with different likelihoods? - #2 by avehtari
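For example, a sketch comparing two different observation models on the same data, assuming hypothetical count data y and predictor x in a data frame d:

```r
library(rstanarm)
library(loo)
# Same outcome y, two different likelihoods
fit_pois <- stan_glm(y ~ x, family = poisson(), data = d)
fit_nb   <- stan_glm(y ~ x, family = neg_binomial_2(), data = d)
# Valid comparison because both models predict the same data
loo_compare(loo(fit_pois), loo(fit_nb))
```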
There is intrinsic meaning, but not many people are well calibrated for it. For discrete observation models elpd is a sum of log probabilities, and if you also know the number of observations and the range of the data, you can compute average probabilities and compare them to probabilities from, e.g., a uniform distribution, which is an easy way to check whether the model has learned anything from the data. See an example of using this as a diagnostic in the thread Loo: High Pareto k diagnostic values for beta binomial regression - #2 by avehtari. For continuous distributions we have log densities instead, and people are usually even less calibrated to think about what those mean, and the meaning also depends on the scaling of the data, but it is still possible to infer things from elpd without reference to another model.
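A sketch of that diagnostic for a discrete outcome, assuming loo1 is a loo object and K is the number of possible outcome values (both names hypothetical):

```r
elpd <- loo1$estimates["elpd_loo", "Estimate"]
n    <- nrow(loo1$pointwise)
# Geometric mean of the leave-one-out predictive probabilities
exp(elpd / n)
# Baseline probability from a uniform distribution over K values; if
# exp(elpd / n) is not clearly larger than this, the model has learned
# little from the data
1 / K
```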
As I said, people are not used to thinking much in terms of probabilities and densities, so I also recommend using application-specific utility and cost functions, and we are adding some convenience functions to make common, easier-to-interpret cost functions simpler to use. The reason why we still favor elpd in model comparison is that it measures the goodness of the whole predictive distribution, while many commonly used measures such as RMSE or R^2 measure only the goodness of a point estimate.
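For example, a sketch of computing LOO-RMSE alongside elpd, assuming a continuous outcome y, a matrix of posterior predictive draws yrep, and a pointwise log-likelihood matrix log_lik (all hypothetical names):

```r
library(loo)
psis_obj <- psis(-log_lik)
# Leave-one-out predictive means via Pareto smoothed importance sampling
mu_loo <- E_loo(yrep, psis_obj, type = "mean", log_ratios = -log_lik)$value
# RMSE assesses only the point prediction, unlike elpd, which assesses
# the whole predictive distribution
sqrt(mean((y - mu_loo)^2))
```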
@Longshot408, from the loo output I can see that you are using 75000 posterior draws, which is probably about 71000 more than you need, taking into account that you seem to have a quite simple model (p_loo around 4-6) and plenty of observations, so the posterior is likely to be very easy.
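For example, with rstan something like this would give 4000 post-warmup draws (mod and standata are hypothetical names for your compiled model and data):

```r
library(rstan)
# 4 chains x 1000 post-warmup iterations = 4000 draws, which is plenty
# for elpd and most posterior summaries
fit <- sampling(mod, data = standata, chains = 4, iter = 2000, warmup = 1000)
```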