Number of observations for every parameter

Hi all, I am fitting my model to data from 30 participants, each with 300 trials. I have 2 parameters in my model. One of them takes one value per subject, so I have 300 trials informing it for each participant. The other parameter is different for every participant and every trial of each participant, so it would have 30*300 fitted quantities, and I have only one observation for every quantity. The important point is that the second parameter is not important to me; I want to know the quantities of the first parameter. My questions are: does including the second parameter make the quantities for the first parameter unreliable? And if I want to compare two models that both have the second parameter, are the BICs reliable for model comparison?

My problem so far is that I can see the effect of my data in the generated quantities produced by fitting, but I cannot see the effect in simulations when I use the fitted parameters to simulate.

Is it the inclusion of the second parameter that you are worried about affecting the quantities for the first parameter?

Given you have as many data points as parameters, is this a hierarchical model? There are really a lot of parameters here. Without some structure to your priors, it seems unlikely things will work well.

What is the model comparison you are doing, more specifically? Is it a comparison of a model with every-trial effects and individual effects (30 * 300 parameters) to one with only individual effects (30 parameters)?
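Just to make sure I'm picturing the right thing, here is a made-up sketch of the bigger of those two models (hypothetical names, priors, and likelihood; not your actual model):

```stan
// Hypothetical sketch: individual effects AND every-trial effects
// (30 + 30*300 parameters). Everything here is made up for illustration.
data {
  int<lower=1> S;          // participants, e.g. 30
  int<lower=1> T;          // trials per participant, e.g. 300
  matrix[S, T] y;          // one observation per participant-trial
}
parameters {
  vector[S] b;             // individual effects (30 parameters)
  matrix[S, T] a;          // every-trial effects (30 * 300 parameters)
}
model {
  b ~ normal(0, 1);
  to_vector(a) ~ normal(0, 1);
  for (s in 1:S)
    y[s] ~ normal(b[s] + a[s], 1);   // row s = that participant's trials
}
```

The smaller model would just drop `a`. Is that roughly the comparison you have in mind?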

Thanks a lot for your answer.

I am worried that the big parameter (the 30*320 one, whose fitted quantities aren’t important for me) affects the reliability of the other parameters.

I have two learning models; the learning rule differs between them. Model 1 has parameters a, b, and c, and Model 2 has parameters a and b. The big parameter (30*320) is “a”, which is common to both models. The b and c parameters are the important ones for me: I want their fitted quantities for each of my participants (I have 30 participants).

No, it is not hierarchical. My collaborators think it is possible to use this kind of parameter in Stan without it influencing the other parameters or the model comparison. Can you point me to a reference so I can be certain whether this is possible in Stan?

Oh okay, so ‘a’ is the parameter in question, right? I don’t think the goal of including ‘a’ and not influencing ‘b’/‘c’ makes sense.

If there is a model out there that doesn’t require ‘a’ and it produces results equivalent to a model that includes ‘a’, then why not roll with the simpler model? If the more complicated model is only judged to be working when it matches the simpler model, then it seems like you’d have to prefer the simpler model.

If each parameter ‘a’ only has one observation and there isn’t some sort of aggressive hierarchical model or tight prior, then the estimates of ‘a’ are almost definitely going to be uninformative. This will probably make the other parameters’ estimates hard to interpret as well.
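Here's a toy illustration of what I mean, stripped down to just the one-observation-per-parameter situation (definitely not your model):

```stan
// Toy model: one observation per parameter 'a' and a flat prior.
// The posterior for each a[n] is just normal(y[n], sigma), i.e. exactly
// as uncertain as the single noisy data point it was fit to.
data {
  int<lower=1> N;          // e.g. 30 * 300 participant-trials
  vector[N] y;             // one observation per a[n]
  real<lower=0> sigma;     // known observation noise
}
parameters {
  vector[N] a;             // implicit flat prior, no pooling
}
model {
  y ~ normal(a, sigma);
}
```

A tight prior or a hierarchical prior on `a` is what would pull those estimates away from just reproducing the raw noise.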

Reliability of the inference might not be your biggest enemy here. The sampling might work fine and be totally reliable, but a model without much data is going to give you an uninformative posterior (you do not learn much about the parameters in your model by conditioning on the data you observed).

Although models without enough data or without tight enough priors often sample badly, and then the inferences themselves are unreliable :P. So reliability could be the issue, just saying there are other things haha.

Thanks for your answer. I was thinking about the important point you made, that tight priors and a hierarchical model help to avoid an uninformative posterior for my main parameters (as opposed to the big parameter). Could you please suggest some references for this?

So far I haven’t found any way to omit my big parameter, so I want to reduce its influence on the other parameters if possible.

Could you please suggest some references for this?

This is the argument behind hierarchical modeling, so the Bayesian Data Analysis 3 section on hierarchical modeling (8-schools n’ such) is the place.

The short story is that if you have one noisy data point per parameter (at least I’m assuming it’s noisy), then without any priors or hierarchical modeling, any estimates of those parameters are going to be just as noisy as your data.
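For reference, a minimal non-centered version of the eight-schools model from that chapter looks roughly like this; the point is that `mu` and `tau` are learned from all the groups together and shrink the noisy per-group estimates:

```stan
// Eight schools (non-centered): one noisy estimate y[j] per school, but the
// per-school effects theta[j] are partially pooled through mu and tau.
data {
  int<lower=0> J;              // number of schools
  vector[J] y;                 // estimated treatment effects
  vector<lower=0>[J] sigma;    // standard errors of those estimates
}
parameters {
  real mu;                     // population mean effect
  real<lower=0> tau;           // between-school scale
  vector[J] theta_raw;         // standardized school effects
}
transformed parameters {
  vector[J] theta = mu + tau * theta_raw;
}
model {
  theta_raw ~ std_normal();
  y ~ normal(theta, sigma);
}
```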

This is the part that I’m not sure about. What does it mean for these parameters to not influence the others much?

Is there a model without them that you can compare against? If there is a model without these parameters and the goal is for them to have no effect, why include them?

Thanks a lot Ben for your time.

Let me explain the problem that obliged me to use “a” (the big parameter) as a parameter; maybe you know a solution for it. I don’t need “a” as a parameter in my model. The problem is that “a” is produced from a normal distribution (whose mean and sd come from my data), and as I understood it, we cannot have a random distribution produce “a” from the data unless “a” is a parameter. That is the reason we made it a parameter, even though its fitted quantities weren’t important for us.
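To make it concrete, what we ended up doing is something like this (a simplified sketch with made-up names, not my full model):

```stan
// Simplified sketch (hypothetical names): "a" has to be declared as a
// parameter so that it can be given the normal distribution whose mean
// and sd come from my data, because it also feeds into the likelihood.
data {
  int<lower=1> N;                 // participants * trials
  vector[N] mu_a;                 // per-trial means computed from the data
  vector<lower=0>[N] sd_a;        // per-trial sds computed from the data
  vector[N] y;                    // observations
}
parameters {
  vector[N] a;                    // the big parameter we don't actually care about
  real b;                         // the parameter we do care about
}
model {
  a ~ normal(mu_a, sd_a);         // "a" drawn from the data-derived normal
  y ~ normal(b * a, 1);           // "a" enters the likelihood
}
```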

Do you think we could draw “a” from a random distribution in our model without it being a parameter?

So part of your data is the means and standard deviations of some random variables “a”? Instead of observations of “a”?

Can you post a simple version of the model that you’re working with by any chance?

Thanks a lot, Ben, for your suggestion. I tried modelling it in another way that didn’t need that strange parameter. Thanks for thinking through this problem with me.