Ill-posed linear regression

I want to solve the ill-posed problem y = A x, where

“y” is a known vector (1-d) of data,

“A” is a 2-d matrix that depends on parameters theta.

The functional form of A is known, so A can be computed as a function of the parameters. The vector (1-d) “x” contains the unknown values that, together with the parameters theta, I want to estimate with Bayesian linear regression.

Thus, the question is: how can you write this model in Stan?

Extra question: because this is an ill-posed problem, do you have to add an extra term such as the one in Tikhonov regularisation?

Is this problem statistical in the sense that y is sampled from Ax with uncertainty, or is this just about finding the set of valid solutions to the strict equality?

Dear Jacob:
This is certainly not just about finding the set of valid solutions to the strict equality.
We obtain y in an independent procedure with uncertainty.
We want to model y ~ Ax + eps, where eps (uncertainty) is modelled by N(0,sigma).

Many thanks in advance for your support.
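A minimal sketch of what such a model could look like in Stan, based on the description above. Since the actual functional form of A is not given in the thread, the `build_A` function below uses a placeholder kernel with a single parameter theta, purely for illustration; the priors are likewise assumptions.

```stan
functions {
  // Hypothetical functional form of A; replace with the real one.
  matrix build_A(int N, int M, real theta) {
    matrix[N, M] A;
    for (i in 1:N)
      for (j in 1:M)
        A[i, j] = exp(-theta * abs(i - j));  // placeholder kernel
    return A;
  }
}
data {
  int<lower=1> N;      // length of y
  int<lower=1> M;      // length of x
  vector[N] y;         // observed data
}
parameters {
  real<lower=0> theta; // parameter of A
  vector[M] x;         // unknown solution vector
  real<lower=0> sigma; // observation noise scale
}
model {
  // Priors (assumed): the normal prior on x plays the role of
  // Tikhonov regularisation for the ill-posed inverse problem.
  theta ~ normal(0, 1);
  x ~ normal(0, 1);
  sigma ~ normal(0, 1);
  // Likelihood: y = A(theta) * x + eps, eps ~ N(0, sigma)
  y ~ normal(build_A(N, M, theta) * x, sigma);
}
```

Note that identifiability may still be poor if several (theta, x) pairs produce nearly the same A(theta) * x; tightening the priors is the Bayesian analogue of increasing the regularisation strength.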

Hi!

I don’t know if I completely understood your question, but if you just want to solve a linear system Ax = y then you only need a classical direct method (e.g. Gaussian elimination). Alternatively, if you want to use a probabilistic method, this is a very interesting paper to read (https://www.jmlr.org/papers/volume22/21-0031/21-0031.pdf), and this is another paper where the authors solve a linear system using the conjugate gradient method from a Bayesian perspective (https://arxiv.org/pdf/2008.03225.pdf).

I have never seen an example in \texttt{Stan} using this framework, but I would be very interested to know of one too.

If you only want to solve y = Ax + \epsilon assuming \epsilon \sim N(0, \sigma), then it reduces to a least-squares problem, and I’m sure there is an example in \texttt{Stan} for this.
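For reference, the least-squares case with a fixed, known design matrix A is a standard Bayesian linear regression; a minimal sketch (the improper flat priors on x are an assumption for illustration):

```stan
data {
  int<lower=1> N;
  int<lower=1> M;
  matrix[N, M] A;    // known, fixed design matrix
  vector[N] y;       // observations
}
parameters {
  vector[M] x;
  real<lower=0> sigma;
}
model {
  // With flat priors on x, the posterior mode of x is the
  // ordinary least-squares solution.
  y ~ normal(A * x, sigma);
}
```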

Best wishes

My very hazy understanding is that in a Stan model, instead of directly encoding the equations, you define a probabilistic model. It might produce a very similar result, but you arrive at it differently, by stating your priors and assumptions. The Wikipedia page on Tikhonov regularisation notes that it can be derived from a particular choice of error model and a particular choice of prior on x. So I believe that in a Stan model you would tell Stan what your error model for eps is, what your prior on x is, what your prior on the parameters theta is, and how x and y are linked through A as a function of theta.
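That correspondence can be made explicit. Assuming eps \sim N(0, \sigma^2 I) and an independent Gaussian prior x \sim N(0, \tau^2 I) (for fixed theta, so A is fixed), the negative log posterior is, up to constants,

\[
-\log p(x \mid y) \;=\; \frac{1}{2\sigma^2}\,\lVert y - A x \rVert_2^2
\;+\; \frac{1}{2\tau^2}\,\lVert x \rVert_2^2 \;+\; \text{const},
\]

so the MAP estimate solves the Tikhonov (ridge) problem \(\min_x \lVert y - A x \rVert_2^2 + \lambda \lVert x \rVert_2^2\) with \(\lambda = \sigma^2 / \tau^2\). In other words, in Stan you do not add an explicit regularisation term; the prior on x supplies it.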