Historically “deconvolution” and “inverse problems” (and related terms like “uncertainty quantification”) have referred to particular applications and the models typically used for those applications. More formally, consider a model where data scatter around the output of some complex, many-to-one function,

y \sim \text{normal}(f(x), \sigma).

Here x is the unobserved behavior of interest and f is the “forward model” that quantifies which features of x manifest in the output. For example x might be a latent image, with f quantifying the various ways in which the image is warped and obscured by the camera, or x might be an initial state, with f a dynamical system that chaotically evolves that initial state to a final state.

Because f is many-to-one, naively inverting the forward model at an observed data point is ill-posed: f^{-1}(\tilde{y}) does not yield a single value of x but rather an entire range of values. The language isn’t consistent, but I would say that “deconvolution” is used more to refer to regularizing f^{-1}(\tilde{y}) down to a single point estimate of x, while “inverse problems” refers to quantifying the full geometry of f^{-1}(\tilde{y}).
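To make the ill-posedness concrete, here is a minimal toy sketch. The quadratic forward model f(x) = x², the noise scale, and the true value are all my own hypothetical choices, not from the discussion above; the point is only that a many-to-one f maps the observation back to a set of compatible inputs, not a point.

```python
import numpy as np

rng = np.random.default_rng(0)

# A deliberately many-to-one forward model: x and -x map to the same output.
def f(x):
    return x ** 2

sigma = 0.1
x_true = 1.5                              # hypothetical true behavior
y_obs = f(x_true) + sigma * rng.normal()  # y ~ normal(f(x), sigma)

# Naive inversion: every x with f(x) close to y_obs is equally compatible
# with the data, so f^{-1}(y_obs) is a set, not a single value.
xs = np.linspace(-3, 3, 2001)
preimage = xs[np.abs(f(xs) - y_obs) < 3 * sigma]

# The compatible values cluster around both +sqrt(y_obs) and -sqrt(y_obs).
print(preimage.min(), preimage.max())
```

The data alone cannot distinguish the two branches of the preimage; that is exactly the degeneracy that deconvolution and inverse-problem methods have to confront.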

Bayesian deconvolution and inverse problems then typically refer to Bayesian inference over these ill-posed systems, sometimes focusing on quantifying how f^{-1}(\tilde{y}) manifests in the posterior distribution and sometimes focusing on regularizing f^{-1}(\tilde{y}) with informative prior models so that the posterior distribution is less degenerate than f^{-1}(\tilde{y}). When that informative prior model is motivated by f^{-1}(\tilde{y}) itself, and not by actual domain expertise, this becomes a form of empirical Bayes.
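Continuing the same hypothetical quadratic toy model, this sketch contrasts a flat prior, whose posterior inherits the full degeneracy of f^{-1}(\tilde{y}), with an informative prior that encodes (assumed) domain expertise that x is positive and of order one. The grid evaluation is just for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same hypothetical many-to-one forward model as before.
def f(x):
    return x ** 2

sigma = 0.1
y_obs = f(1.5) + sigma * rng.normal()

xs = np.linspace(-3, 3, 2001)
log_lik = -0.5 * ((y_obs - f(xs)) / sigma) ** 2

# Flat prior: the posterior is bimodal, with symmetric modes near +/- sqrt(y_obs).
post_flat = np.exp(log_lik - log_lik.max())
post_flat /= post_flat.sum()

# Informative normal(1, 0.5) prior, motivated by (assumed) domain expertise
# that x is positive and of order one -- not by f^{-1}(y_obs) itself.
log_prior = -0.5 * ((xs - 1.0) / 0.5) ** 2
post_inf = np.exp(log_lik + log_prior - (log_lik + log_prior).max())
post_inf /= post_inf.sum()

# The informative posterior concentrates on the positive mode.
print(xs[np.argmax(post_inf)])
```

Under the flat prior roughly half the posterior mass sits on each branch; the informative prior suppresses the negative branch almost entirely, which is exactly the regularizing role described above.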

I looked only briefly, but Wagner et al., 2021 and Wahal and Biros, 2019 both seem to refer to inverse problems of this sort. Wahal and Biros complicate things by also referring to their particular computational method (which focuses on quantifying posterior tail behavior) as “inverse Monte Carlo”.

Historically “prior elicitation” is a bit more precise. It refers to the various ways that one can extract information from oneself, collaborators, colleagues, the literature, and the like in order to inform a prior model.

In the particular context of inverse problems one could focus prior elicitation on information that would help regularize the ill-posed inverse f^{-1}(\tilde{y}). In other words, focus elicitation on informing those parameters that are poorly informed by f^{-1}(\tilde{y}) while not worrying about those that are well informed. That said, this kind of conditional prior elicitation has to be performed *very* carefully. The structure of f^{-1}(\tilde{y}) from any particular observation can help motivate what kind of information we should elicit (i.e. about which parameters we need more information). If it is instead used to make that information up, then it becomes a form of empirical Bayes which uses the data twice to construct a posterior distribution.
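The double-use-of-data pitfall can be seen in the same hypothetical toy model. Here an honest prior is fixed before seeing the data, while an empirical-Bayes-style “prior” is centered on a point estimate extracted from f^{-1}(\tilde{y}) itself; the latter produces an artificially narrow posterior. All numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Same hypothetical many-to-one forward model; restrict the grid to the
# positive branch so the two posteriors are directly comparable.
def f(x):
    return x ** 2

sigma = 0.1
y_obs = f(1.5) + sigma * rng.normal()

xs = np.linspace(0.0, 3.0, 1501)
log_lik = -0.5 * ((y_obs - f(xs)) / sigma) ** 2

def posterior(log_prior):
    lp = log_lik + log_prior
    p = np.exp(lp - lp.max())
    return p / p.sum()

def post_sd(p):
    m = np.sum(xs * p)
    return np.sqrt(np.sum((xs - m) ** 2 * p))

# Honest prior chosen from domain expertise before seeing y_obs.
honest = posterior(-0.5 * ((xs - 1.0) / 1.0) ** 2)

# Empirical-Bayes pitfall: the "prior" is centered on a point estimate
# pulled from f^{-1}(y_obs), so y_obs informs both prior and likelihood.
x_hat = np.sqrt(y_obs)
double_dip = posterior(-0.5 * ((xs - x_hat) / 0.05) ** 2)

print(post_sd(honest), post_sd(double_dip))
```

The data-derived prior tightens the posterior beyond what the single observation actually warrants, which is the overconfidence that careful conditional elicitation is meant to avoid.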

Again the language is all over the place, but I would say that:

Deconvolution, inverse problem, and uncertainty quantification are most universally used to refer to a many-to-one forward model that results in ill-posed inferences. They are sometimes also used to refer to particular methods for regularizing those inferences, but these interpretations vary wildly.

Bayesian deconvolution, inverse problem, and uncertainty quantification most universally refer to using the prior model to regularize the ill-posed inferences from these models. Sometimes these terms refer to empirical Bayesian methods and sometimes to principled prior elicitation.