Bayesian priors that combine data and parameters

Hi all - I’m a newcomer to Stan and Bayesian statistics, so bear with me…

I’m looking for a way to code a prior for linear regression with the following setup.

Suppose that

p \sim N(0, 1)\\ q = \beta_0 + \beta_1 p + \beta_2 p^2\\ \beta = [\beta_0, \beta_1, \beta_2], \quad P = [1, p, p^2]

I want to find a \beta such that \frac{\partial q}{\partial p} = \beta_1 + 2\beta_2 p_i < 0 for as many p_i as possible, in addition to the standard Bayesian linear regression constraints.
I know that for the latter, my prior for \beta is q \sim N(\beta'P, \sigma). However, how would I encode the former (\frac{\partial q}{\partial p} = \beta_1 + 2\beta_2 p_i < 0) as a prior?

Usually, a regression involves predictors x_n \in \mathbb{R}^P and outcomes y_n \in \mathbb{R} and a parameter vector \beta \in \mathbb{R}^P and scale parameter \sigma > 0, with likelihood y_n \sim \textrm{normal}(x_n \beta, \sigma). I don’t see where the observations (equivalent of y) come in for your model.
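For reference, that standard regression can be written in Stan like this (a minimal sketch; the variable names here are placeholders matching the notation above):

```stan
data {
  int<lower=0> N;          // number of observations
  int<lower=0> P;          // number of predictors
  matrix[N, P] x;          // predictor matrix
  vector[N] y;             // outcomes
}
parameters {
  vector[P] beta;          // regression coefficients
  real<lower=0> sigma;     // residual scale
}
model {
  y ~ normal(x * beta, sigma);   // y_n ~ normal(x_n * beta, sigma)
}
```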

A prior for \beta needs to be a distribution over \beta, but q \sim N(\beta'P, \sigma) is a distribution over q.

To answer your question: if you have a bunch of observed p_1, \ldots, p_N and a coefficient vector \beta, and you want to ensure 2 \beta_2 p_n < \beta_1 for all n, then you can do this:

data {
  int<lower=0> N;
  vector[N] p;
}
parameters {
  real beta_2;
  real<lower = max(2 * beta_2 * p)> beta_1;
}
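Putting that constrained declaration together with the regression likelihood might look like this (a sketch, not from the thread; the quadratic design matrix is built in `transformed data`, and `q` and `sigma` follow the notation above):

```stan
data {
  int<lower=0> N;
  vector[N] p;
  vector[N] q;                    // observed outcomes
}
transformed data {
  // design matrix with columns [1, p, p^2]
  matrix[N, 3] P = append_col(rep_vector(1, N), append_col(p, square(p)));
}
parameters {
  real beta_0;
  real beta_2;
  real<lower = max(2 * beta_2 * p)> beta_1;   // 2 * beta_2 * p_n < beta_1 for all n
  real<lower=0> sigma;
}
model {
  vector[3] beta = [beta_0, beta_1, beta_2]';
  q ~ normal(P * beta, sigma);    // likelihood q ~ N(beta' P, sigma)
}
```

Because the bound on `beta_1` depends on `beta_2` and the data `p`, Stan recomputes the constraint at every evaluation; the sampler never proposes values violating it.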
Ah, I used the incorrect terminology. My observations q have likelihood q \sim N(\beta' P, \sigma) - that is how my observations come into the model. Thanks for the correction!

As stated earlier, in my example I believe dq/dp = \beta_1 + 2\beta_2 p < 0. However, is there another way to do this that doesn’t force the inequality for all n, but ensures it will be true for a high percentage of n? I’ve read that perhaps imposing this absolute constraint would not be the best solution to this problem.
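One way to soften the hard bound (a sketch, not from the thread: `tau` is an assumed softness scale, and the likelihood from the earlier posts would be added alongside) is to leave `beta_1` unconstrained and instead add a smooth penalty that rewards negative derivatives:

```stan
data {
  int<lower=0> N;
  vector[N] p;
  real<lower=0> tau;   // assumed softness scale; smaller tau = stricter penalty
}
parameters {
  real beta_1;
  real beta_2;
}
model {
  vector[N] dq = beta_1 + 2 * beta_2 * p;   // derivative dq/dp at each p_n
  // soft constraint: each positive dq_n is penalized smoothly rather than
  // forbidden outright, so the constraint can hold for most but not all n
  target += sum(log_inv_logit(-dq / tau));
}
```

Here `log_inv_logit(-dq / tau)` is near 0 when dq is very negative and increasingly negative as dq grows positive, so the posterior concentrates on coefficients where most derivatives are negative without ruling out exceptions.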

Thank you for the response! This is really helpful.