# Using Stan for MLE-like estimation?

Hi, I’m looking to write a Stan script that simulates sampling from (for example) a normal distribution with unknown mean and variance, estimates those parameters, and also simulates draws from the posterior. I’ve checked the official Stan reference materials and can’t find anything.

I’d like to do something like the script here: https://magesblog.com/post/2016-09-27-fitting-distribution-in-stan-from/
except without explicitly writing out the density for the normal.

Thanks

You can do this in Stan using generated quantities, but honestly it’s probably easier to just do it in R or Python or whatever. Make sure your random number generator parameterizations are consistent between Stan and whatever you use to generate your data; the defaults can differ (Stan parameterizes its normals by the standard deviation).
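For example, R’s rnorm() and Stan’s normal() both take a standard deviation, while some libraries expect a variance or precision, which is an easy way to get burned. A quick check in Python (numpy’s `scale` argument is also the standard deviation):

```python
import numpy as np

rng = np.random.default_rng(0)
# scale is the standard deviation, not the variance
draws = rng.normal(loc=0.0, scale=2.0, size=100_000)

print(draws.std())  # close to 2 (the sd), not 4 (the variance)
```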

If you’re generating the data though, at some point you’ll need to know the parameters.

Parameter estimates are constructed in Stan from posterior draws, so these basically happen together (once you have your posterior draws you can summarize them however you like).
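Concretely, once you have the draws (from rstan::extract() or any sampler), point estimates and intervals are just summaries of them. A minimal sketch in Python/numpy, with fabricated draws standing in for real sampler output:

```python
import numpy as np

rng = np.random.default_rng(1)
# stand-in for posterior draws of mu; in practice these come from the sampler
mu_draws = rng.normal(1.5, 0.1, size=4000)

mu_hat = mu_draws.mean()                        # posterior-mean point estimate
lo, hi = np.quantile(mu_draws, [0.025, 0.975])  # 95% credible interval
```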

If you’re using R, the Rstan introduction is probably the place to go: https://cran.r-project.org/web/packages/rstan/vignettes/rstan.html

Regarding the MLE thing, you can run an optimizer on a Stan model. There’s a section on that in the Rstan introduction.

Let’s say the data is given, rather than generated using rnorm. How would I then calculate my estimands using Stan? Aren’t there built-in functions in Stan for this?

It’s not at all clear what you want to achieve. There’s a plethora of material on how to estimate distribution parameters using Stan. What part of the script you linked does not work for you? If you don’t want to write your own lpdf for the normal, just use normal_lpdf, or even simply y ~ normal(mu, sigma).

Hi, I wrote a script below to show what I’m after. It’s a simplified version of the code in the link listed previously. The idea is to estimate the mean and variance of draws from a normal distribution, using priors to narrow down my estimates. It seems to work well and gives similar results to the script in the link. Does the code achieve this goal? Also, how would I simulate results from this distribution?

```stan
data {
  int<lower=0> N;
  real y[N];
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  // Priors
  mu ~ normal(0, 3);
  sigma ~ cauchy(0, 10);
  // Likelihood
  y ~ normal(mu, sigma);
}
```

This certainly looks like a program one can use* to sample from the posterior distribution

$$p(\mu, \sigma \mid y) = \frac{l(y \mid \mu, \sigma)\,\pi(\mu, \sigma)}{m(y)}.$$

From this you can compute, say, $E_p[\mu]$ and $E_p[\sigma]$ as estimators of $\mu$ and $\sigma$. This can be achieved by calling sampling() in Rstan.
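On the earlier question about simulating from the fitted model: in Stan itself this is what a generated quantities block with normal_rng does, but the same computation is easy outside Stan. Given posterior draws of (mu, sigma) (e.g. from rstan::extract()), you draw one replicated observation per posterior draw. A sketch in Python/numpy, with fabricated draws standing in for real sampler output:

```python
import numpy as np

rng = np.random.default_rng(2)
n_draws = 4000
# stand-ins for posterior draws; in practice these come from the sampler
mu_draws = rng.normal(1.5, 0.1, size=n_draws)
sigma_draws = np.abs(rng.normal(2.0, 0.1, size=n_draws))

# one posterior-predictive draw per posterior draw of (mu, sigma)
y_rep = rng.normal(mu_draws, sigma_draws)
```

Drawing conditional on a different (mu, sigma) each time is what makes this the posterior predictive rather than a plug-in simulation at point estimates.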
Now, the issue is what you mean by “MLE-like”. You can find the parameter values that maximise the expression above to obtain so-called maximum a posteriori (MAP) estimates. These can be obtained by calling optimizing().

*This Cauchy prior on the scale parameter sigma is probably not the best choice, but for such a simple model it might not hurt you (@bbbales2, @betanalpha?).