"Stan model XX does not contain samples" error when using pars argument

I am trying to run a model in rstan saving only a few parameters, as otherwise the output becomes really big. However, when I use the pars and include arguments I get the following error:

some chains had errors; consider specifying chains = 1 to debug
here are whatever error messages were returned

Stan model ‘83da1bf7b504e96df52c6e00997ecc57’ does not contain samples.

(the same message was printed once for each of the 4 chains)

Below is my model:

model_code <- '
functions {
  real fplBPosRange(real logX, real logBase, real A, real B, real logC, real D) {
    return exp(log(A)) / (1 + logBase^(-B * (logX - logC))) + D;
  }
}
data {
  int<lower=1> N;                              // size of dataset
  int<lower=1> nSamples;                       // number of samples
  int<lower=1, upper=nSamples> sample[N];      // sample index
  vector[N] logX;                              // dose value on the log scale
  real y[N];                                   // response values
  int<lower=1> nBlocks;                        // number of blocks
  int<lower=1, upper=nBlocks> cultureBlock[N]; // block index
  real targetPotencies[nSamples];
  real<lower=0> sigma;                         // residual standard deviation
  int<lower=0, upper=1> run_estimation;        // switch to evaluate the likelihood
}
parameters {
  vector[nBlocks] eta[nSamples];               // standardized log(EC50) offsets
  real meanLogC[nSamples];
  real<lower=0> sigmaLogC;                     // standard deviation of logC
}
transformed parameters {
  vector[nBlocks] logC[nSamples];
  for (i in 1:nSamples) {                      // loop over samples, not observations
    logC[i] = meanLogC[i] + sigmaLogC * eta[i];  // non-centered parameterization
  }
}
model {
  vector[N] mu_y;
  sigmaLogC ~ normal(0, 1);
  for (i in 1:nSamples) {
    eta[i] ~ normal(0, 1);
    meanLogC[i] ~ normal(targetPotencies[i], 0.1); // mean for each sample
  }
  if (run_estimation == 1) {
    for (i in 1:N) {
      mu_y[i] = fplBPosRange(logX[i], 2, 1, 1, logC[sample[i], cultureBlock[i]], 0);
    }
    y ~ normal(mu_y, sigma);
  }
}
'

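As a quick sanity check, the four-parameter logistic from the functions block can be mirrored in plain R (a sketch of the same formula, outside Stan):

```r
# R mirror of the Stan function fplBPosRange
fplBPosRange <- function(logX, logBase, A, B, logC, D) {
  exp(log(A)) / (1 + logBase^(-B * (logX - logC))) + D
}

# With the constants used in the model block (logBase = 2, A = 1, B = 1, D = 0),
# the curve passes through A / 2 + D at logX = logC:
fplBPosRange(0, 2, 1, 1, 0, 0)  # 0.5
```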
And here is my call:

  stanFit <- rstan::stan(
      model_code = model_code,
      data = list(N = nrow(data),       # named list of data
                  run_estimation = 1),  # (remaining entries omitted here)
      chains = 4,             # number of Markov chains
      warmup = 1000,          # number of warmup iterations per chain
      iter = 10000,           # total number of iterations per chain
      cores = 4,              # number of cores (could use one per chain)
      refresh = 0,            # no progress shown
      seed = 1
  )
Any ideas what might be happening? The output fit is getting to be so big that it slows everything down.

Please provide (fake) data so we can run it :) Also, have you tried running with chains = 1?


Thank you @torkar. Indeed, once I ran with chains = 1 I figured out that the simple issue was that I was specifying the parameters by their indexed names (meanLogC[1], etc.), as I was used to doing, when what I should have put was

pars = c("meanLogC", "sigmaLogC")

This became very clear once I set chains=1. Duh. Thank you very much.
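For completeness, the corrected call would look roughly like this (a sketch: stanData here is a placeholder for the full named data list, which is not reproduced in the thread):

```r
# rstan's pars argument takes whole-parameter names without indices,
# e.g. "meanLogC", not "meanLogC[1]".
stanFit <- rstan::stan(
  model_code = model_code,
  data = stanData,                    # full named data list (placeholder name)
  pars = c("meanLogC", "sigmaLogC"),  # parameters to keep in the output
  include = TRUE,                     # keep (rather than drop) those parameters
  chains = 4,
  warmup = 1000,
  iter = 10000,
  cores = 4,
  seed = 1
)
```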
