Generative models for DoE size - makes sense?

Hi,

I want to put together a fake (simulated) example to show how

  1. the experimental error and
  2. the measurement error

can affect models built from lab experiments using DoE. However, I am not sure that my approach is correct. Since this is the first time I'm working with simulated results, I don't feel confident that what I'm doing makes sense.

  1. For this reason, I build a fake ("true") model.
  2. Then a fake DoE.
  3. From these I generate fake responses.
  4. Then I randomly take a small sample and fit a model to see whether I can recover the true parameters.
  5. Finally, I repeat step 4 multiple times (a rough sketch of this is below).

Now, the DoE is not always the same, because I test different DoE sizes, e.g. 22 vs. 16 runs (but all of them have the same number of factors and the same min and max levels per factor).
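
In case it helps make the steps concrete, here is a minimal Python sketch of the loop I have in mind. The 3-factor linear model, the coefficient values, the two error standard deviations, and the candidate-pool size are all made-up placeholders, not part of my real setup:

```python
import numpy as np

rng = np.random.default_rng(42)

# 1. Fake "true" model: y = b0 + b1*x1 + b2*x2 + b3*x3 + b12*x1*x2
true_beta = np.array([10.0, 2.0, -1.5, 0.5, 1.0])

def model_matrix(X):
    """Regression matrix: intercept, main effects, and the x1*x2 interaction."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), x1, x2, x3, x1 * x2])

# 2. Fake DoE: a pool of candidate runs in coded levels [-1, 1] per factor
#    (a proper factorial or optimal design would go here instead)
candidates = rng.uniform(-1, 1, size=(200, 3))

# 3. Fake responses: true model + experimental error + measurement error
#    (both treated here as simple additive Gaussian noise)
exp_sd, meas_sd = 1.0, 0.5
responses = (model_matrix(candidates) @ true_beta
             + rng.normal(0, exp_sd, len(candidates))
             + rng.normal(0, meas_sd, len(candidates)))

# 4.-5. Repeatedly draw a small sample, refit, and compare DoE sizes
for n_runs in (16, 22):
    estimates = []
    for _ in range(500):
        idx = rng.choice(len(candidates), size=n_runs, replace=False)
        beta_hat, *_ = np.linalg.lstsq(model_matrix(candidates[idx]),
                                       responses[idx], rcond=None)
        estimates.append(beta_hat)
    estimates = np.array(estimates)
    print(f"{n_runs} runs: mean estimates {estimates.mean(axis=0).round(2)}, "
          f"sd {estimates.std(axis=0).round(2)}")
```

The spread of the repeated estimates around `true_beta` is then what I compare across the two DoE sizes.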

Do you think that this approach makes sense?
If you have any comments or related literature, please let me know.

Thanks

Hi, sorry it took us quite a while to respond.

In fact, there is a recent preprint on Bayesian workflow that deals with exactly these questions. If I understand you correctly, what you are doing roughly matches the approach we advocate for in the preprint and is at least a very good start.

Best of luck with your model!
