Normal_id_glm slower now than with 2.19?

This model using normal_id_glm runs about twice as slowly on current develop as it does with rstan 2.19:

data {
  int<lower=1> N;
  vector[N] y;
  matrix[N, 1] x;
}
parameters {
  real alpha;
  vector[1] beta;
  real<lower=0> sigma;
}
model {
  alpha ~ normal(0, 10);
  beta ~ normal(0, 10);
  sigma ~ normal(0, 5);
  y ~ normal_id_glm(x, alpha, beta, sigma);
}

and data generated as follows:

N <- 100000
alpha <- 10 * rnorm(1)
beta <- 10 * rnorm(1)
eps <- 0.1 * rnorm(N)
X <- rnorm(N)
y <- alpha + beta * X + eps
dat <- list(x=matrix(X, ncol=1), y=y, N=N)
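For reference, `normal_id_glm(x, alpha, beta, sigma)` is the vectorized equivalent of `y ~ normal(alpha + x * beta, sigma)`. A quick numerical check of that identity (Python with NumPy/SciPy, purely illustrative and not part of the original post):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 10
x = rng.normal(size=(N, 1))            # N x 1 design matrix, as in the model
y = rng.normal(size=N)
alpha, beta, sigma = 1.0, np.array([2.0]), 0.5

# normal_id_glm(y | x, alpha, beta, sigma) == normal(y | alpha + x @ beta, sigma)
mu = alpha + x @ beta
lpdf_glm = np.sum(-0.5 * np.log(2 * np.pi) - np.log(sigma)
                  - 0.5 * ((y - mu) / sigma) ** 2)
lpdf_ref = stats.norm.logpdf(y, loc=mu, scale=sigma).sum()
assert np.isclose(lpdf_glm, lpdf_ref)
```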

Running it with cmdstanr on current develop, I also get these messages:

Chain 3 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
Chain 3 Exception: normal_id_glm_lpdf: Scale vector is inf, but must be finite!
Chain 3 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
Chain 3 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.

Is this known or expected?

That is definitely something to investigate. How does it compare to cmdstan 2.19?

Also, how long are the execution times here, and how many iterations are you running?

I’m running with 2000 iterations (1000 of them warmup). It takes ~70 seconds on develop and ~36 seconds with rstan 2.19. I’m going to start a bisect and see if I can narrow it down.
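A bisect like this can be automated with `git bisect run` once you have a pass/fail script (e.g. one that builds the model, times it, and fails above a threshold). A toy, self-contained demonstration on a throwaway repo (not the actual Stan repos; the `speed.txt` file stands in for a timing script):

```python
import os
import subprocess
import tempfile

def git(*args, cwd):
    # Run a git command in the given repo and return its stdout.
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("config", "user.email", "a@example.com", cwd=repo)
git("config", "user.name", "tester", cwd=repo)

# Five commits; pretend the "runtime" jumps from 3 to 4 at commit 4.
for i in range(1, 6):
    with open(os.path.join(repo, "speed.txt"), "w") as f:
        f.write(str(i))
    git("add", "speed.txt", cwd=repo)
    git("commit", "-qm", f"commit {i}", cwd=repo)

# Mark the newest commit bad and the oldest good, then let
# `git bisect run` test each candidate: exit 0 ("good") iff speed <= 3.
git("bisect", "start", "HEAD", "HEAD~4", cwd=repo)
out = git("bisect", "run", "sh", "-c", "test $(cat speed.txt) -le 3", cwd=repo)
print(out)  # includes "<sha> is the first bad commit"
git("bisect", "reset", "-q", cwd=repo)
```

This only bisects one repository at a time, so for the cross-repo case the run script would also have to pin the other two repos to fixed commits.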

Thanks, if you need any help, let me know.

If you are using cmdstanr, please first revert 9ddf771ae3f16887c5a7c840a4f6c2ebb43da8c6 (https://github.com/stan-dev/cmdstanr/commit/9ddf771ae3f16887c5a7c840a4f6c2ebb43da8c6) to use JSON for input. We will revert that on master once 2.22 is released.

It turns out that bisecting over 3 repositories can drive you crazy… Anyway, my exploration seems to indicate that there’s nothing wrong with math develop, and that the problem starts with the switch to stanc3 in cmdstan (https://github.com/stan-dev/cmdstan/pull/764). But perhaps I’m barking up the wrong tree, so take this with a grain of salt.

What I can reliably see is that cmdstan at e6c8ea41e3ab7a5a4c1806f5e321f0d094126bf9 (the switch to stanc3) takes ~70 seconds, while at bd23efe20be2e25bffaec6a39def8850408a6f4e (the commit immediately preceding it) it takes ~30 seconds.

This is with:

  • math at current develop
  • stan at 4ed134fd25036dc578742394bc41a48d5af48b20 (just before the merge of https://github.com/stan-dev/stan/pull/2869, as with that included I couldn’t get cmdstan to compile)
  • plus a few header renames due to flatten

This sounds quite surprising to me: if anyone wants to try to make sense of it, or has suggestions on how I could proceed, I’d be very happy.


Can you post what the model .hpp files look like under the old and new compilers? That might tell us more. Also, thanks for digging into this. Working over three repos is no fun, I know.

Good .hpp file:

// Code generated by Stan version 2.21.0

#include <stan/model/model_header.hpp>

namespace slowdown_model_namespace {

using std::istream;
using std::string;
using std::stringstream;
using std::vector;
using stan::io::dump;
using stan::math::lgamma;
using stan::model::prob_grad;
using namespace stan::math;

static int current_statement_begin__;

stan::io::program_reader prog_reader__() {
    stan::io::program_reader reader;
    reader.add_event(0, 0, "start", "/cmdstanr-git/slowdown.stan");
    reader.add_event(21, 19, "end", "/cmdstanr-git/slowdown.stan");
    return reader;
}

class slowdown_model
  : public stan::model::model_base_crtp<slowdown_model> {
private:
        int N;
        vector_d y;
        matrix_d x;
public:
    slowdown_model(stan::io::var_context& context__,
        std::ostream* pstream__ = 0)
        : model_base_crtp(0) {
        ctor_body(context__, 0, pstream__);
    }

    slowdown_model(stan::io::var_context& context__,
        unsigned int random_seed__,
        std::ostream* pstream__ = 0)
        : model_base_crtp(0) {
        ctor_body(context__, random_seed__, pstream__);
    }

    void ctor_body(stan::io::var_context& context__,
                   unsigned int random_seed__,
                   std::ostream* pstream__) {
        typedef double local_scalar_t__;

        boost::ecuyer1988 base_rng__ =
          stan::services::util::create_rng(random_seed__, 0);
        (void) base_rng__;  // suppress unused var warning

        current_statement_begin__ = -1;

        static const char* function__ = "slowdown_model_namespace::slowdown_model";
        (void) function__;  // dummy to suppress unused var warning
        size_t pos__;
        (void) pos__;  // dummy to suppress unused var warning
        std::vector<int> vals_i__;
        std::vector<double> vals_r__;
        local_scalar_t__ DUMMY_VAR__(std::numeric_limits<double>::quiet_NaN());
        (void) DUMMY_VAR__;  // suppress unused var warning

        try {
            // initialize data block variables from context__
            current_statement_begin__ = 2;
            context__.validate_dims("data initialization", "N", "int", context__.to_vec());
            N = int(0);
            vals_i__ = context__.vals_i("N");
            pos__ = 0;
            N = vals_i__[pos__++];
            check_greater_or_equal(function__, "N", N, 1);

            current_statement_begin__ = 3;
            validate_non_negative_index("y", "N", N);
            context__.validate_dims("data initialization", "y", "vector_d", context__.to_vec(N));
            y = Eigen::Matrix<double, Eigen::Dynamic, 1>(N);
            vals_r__ = context__.vals_r("y");
            pos__ = 0;
            size_t y_j_1_max__ = N;
            for (size_t j_1__ = 0; j_1__ < y_j_1_max__; ++j_1__) {
                y(j_1__) = vals_r__[pos__++];
            }

            current_statement_begin__ = 4;
            validate_non_negative_index("x", "N", N);
            validate_non_negative_index("x", "1", 1);
            context__.validate_dims("data initialization", "x", "matrix_d", context__.to_vec(N,1));
            x = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic>(N, 1);
            vals_r__ = context__.vals_r("x");
            pos__ = 0;
            size_t x_j_2_max__ = 1;
            size_t x_j_1_max__ = N;
            for (size_t j_2__ = 0; j_2__ < x_j_2_max__; ++j_2__) {
                for (size_t j_1__ = 0; j_1__ < x_j_1_max__; ++j_1__) {
                    x(j_1__, j_2__) = vals_r__[pos__++];
                }
            }

            // initialize transformed data variables
            // execute transformed data statements

            // validate transformed data

            // validate, set parameter ranges
            num_params_r__ = 0U;
            param_ranges_i__.clear();
            current_statement_begin__ = 7;
            num_params_r__ += 1;
            current_statement_begin__ = 8;
            validate_non_negative_index("beta", "1", 1);
            num_params_r__ += 1;
            current_statement_begin__ = 9;
            num_params_r__ += 1;
        } catch (const std::exception& e) {
            stan::lang::rethrow_located(e, current_statement_begin__, prog_reader__());
            // Next line prevents compiler griping about no return
            throw std::runtime_error("*** IF YOU SEE THIS, PLEASE REPORT A BUG ***");
        }
    }

    ~slowdown_model() { }

    void transform_inits(const stan::io::var_context& context__,
                         std::vector<int>& params_i__,
                         std::vector<double>& params_r__,
                         std::ostream* pstream__) const {
        typedef double local_scalar_t__;
        stan::io::writer<double> writer__(params_r__, params_i__);
        size_t pos__;
        (void) pos__; // dummy call to supress warning
        std::vector<double> vals_r__;
        std::vector<int> vals_i__;

        current_statement_begin__ = 7;
        if (!(context__.contains_r("alpha")))
            stan::lang::rethrow_located(std::runtime_error(std::string("Variable alpha missing")), current_statement_begin__, prog_reader__());
        vals_r__ = context__.vals_r("alpha");
        pos__ = 0U;
        context__.validate_dims("parameter initialization", "alpha", "double", context__.to_vec());
        double alpha(0);
        alpha = vals_r__[pos__++];
        try {
            writer__.scalar_unconstrain(alpha);
        } catch (const std::exception& e) {
            stan::lang::rethrow_located(std::runtime_error(std::string("Error transforming variable alpha: ") + e.what()), current_statement_begin__, prog_reader__());
        }

        current_statement_begin__ = 8;
        if (!(context__.contains_r("beta")))
            stan::lang::rethrow_located(std::runtime_error(std::string("Variable beta missing")), current_statement_begin__, prog_reader__());
        vals_r__ = context__.vals_r("beta");
        pos__ = 0U;
        validate_non_negative_index("beta", "1", 1);
        context__.validate_dims("parameter initialization", "beta", "vector_d", context__.to_vec(1));
        Eigen::Matrix<double, Eigen::Dynamic, 1> beta(1);
        size_t beta_j_1_max__ = 1;
        for (size_t j_1__ = 0; j_1__ < beta_j_1_max__; ++j_1__) {
            beta(j_1__) = vals_r__[pos__++];
        }
        try {
            writer__.vector_unconstrain(beta);
        } catch (const std::exception& e) {
            stan::lang::rethrow_located(std::runtime_error(std::string("Error transforming variable beta: ") + e.what()), current_statement_begin__, prog_reader__());
        }

        current_statement_begin__ = 9;
        if (!(context__.contains_r("sigma")))
            stan::lang::rethrow_located(std::runtime_error(std::string("Variable sigma missing")), current_statement_begin__, prog_reader__());
        vals_r__ = context__.vals_r("sigma");
        pos__ = 0U;
        context__.validate_dims("parameter initialization", "sigma", "double", context__.to_vec());
        double sigma(0);
        sigma = vals_r__[pos__++];
        try {
            writer__.scalar_lb_unconstrain(0, sigma);
        } catch (const std::exception& e) {
            stan::lang::rethrow_located(std::runtime_error(std::string("Error transforming variable sigma: ") + e.what()), current_statement_begin__, prog_reader__());
        }

        params_r__ = writer__.data_r();
        params_i__ = writer__.data_i();
    }

    void transform_inits(const stan::io::var_context& context,
                         Eigen::Matrix<double, Eigen::Dynamic, 1>& params_r,
                         std::ostream* pstream__) const {
      std::vector<double> params_r_vec;
      std::vector<int> params_i_vec;
      transform_inits(context, params_i_vec, params_r_vec, pstream__);
      params_r.resize(params_r_vec.size());
      for (int i = 0; i < params_r.size(); ++i)
        params_r(i) = params_r_vec[i];
    }


    template <bool propto__, bool jacobian__, typename T__>
    T__ log_prob(std::vector<T__>& params_r__,
                 std::vector<int>& params_i__,
                 std::ostream* pstream__ = 0) const {

        typedef T__ local_scalar_t__;

        local_scalar_t__ DUMMY_VAR__(std::numeric_limits<double>::quiet_NaN());
        (void) DUMMY_VAR__;  // dummy to suppress unused var warning

        T__ lp__(0.0);
        stan::math::accumulator<T__> lp_accum__;
        try {
            stan::io::reader<local_scalar_t__> in__(params_r__, params_i__);

            // model parameters
            current_statement_begin__ = 7;
            local_scalar_t__ alpha;
            (void) alpha;  // dummy to suppress unused var warning
            if (jacobian__)
                alpha = in__.scalar_constrain(lp__);
            else
                alpha = in__.scalar_constrain();

            current_statement_begin__ = 8;
            Eigen::Matrix<local_scalar_t__, Eigen::Dynamic, 1> beta;
            (void) beta;  // dummy to suppress unused var warning
            if (jacobian__)
                beta = in__.vector_constrain(1, lp__);
            else
                beta = in__.vector_constrain(1);

            current_statement_begin__ = 9;
            local_scalar_t__ sigma;
            (void) sigma;  // dummy to suppress unused var warning
            if (jacobian__)
                sigma = in__.scalar_lb_constrain(0, lp__);
            else
                sigma = in__.scalar_lb_constrain(0);

            // model body

            current_statement_begin__ = 13;
            lp_accum__.add(normal_log<propto__>(alpha, 0, 10));
            current_statement_begin__ = 14;
            lp_accum__.add(normal_log<propto__>(beta, 0, 10));
            current_statement_begin__ = 15;
            lp_accum__.add(normal_log<propto__>(sigma, 0, 5));
            current_statement_begin__ = 18;
            lp_accum__.add(normal_id_glm_lpdf<propto__>(y, x, alpha, beta, sigma));

        } catch (const std::exception& e) {
            stan::lang::rethrow_located(e, current_statement_begin__, prog_reader__());
            // Next line prevents compiler griping about no return
            throw std::runtime_error("*** IF YOU SEE THIS, PLEASE REPORT A BUG ***");
        }

        lp_accum__.add(lp__);
        return lp_accum__.sum();

    } // log_prob()

    template <bool propto, bool jacobian, typename T_>
    T_ log_prob(Eigen::Matrix<T_,Eigen::Dynamic,1>& params_r,
               std::ostream* pstream = 0) const {
      std::vector<T_> vec_params_r;
      vec_params_r.reserve(params_r.size());
      for (int i = 0; i < params_r.size(); ++i)
        vec_params_r.push_back(params_r(i));
      std::vector<int> vec_params_i;
      return log_prob<propto,jacobian,T_>(vec_params_r, vec_params_i, pstream);
    }


    void get_param_names(std::vector<std::string>& names__) const {
        names__.resize(0);
        names__.push_back("alpha");
        names__.push_back("beta");
        names__.push_back("sigma");
    }


    void get_dims(std::vector<std::vector<size_t> >& dimss__) const {
        dimss__.resize(0);
        std::vector<size_t> dims__;
        dims__.resize(0);
        dimss__.push_back(dims__);
        dims__.resize(0);
        dims__.push_back(1);
        dimss__.push_back(dims__);
        dims__.resize(0);
        dimss__.push_back(dims__);
    }

    template <typename RNG>
    void write_array(RNG& base_rng__,
                     std::vector<double>& params_r__,
                     std::vector<int>& params_i__,
                     std::vector<double>& vars__,
                     bool include_tparams__ = true,
                     bool include_gqs__ = true,
                     std::ostream* pstream__ = 0) const {
        typedef double local_scalar_t__;

        vars__.resize(0);
        stan::io::reader<local_scalar_t__> in__(params_r__, params_i__);
        static const char* function__ = "slowdown_model_namespace::write_array";
        (void) function__;  // dummy to suppress unused var warning

        // read-transform, write parameters
        double alpha = in__.scalar_constrain();
        vars__.push_back(alpha);

        Eigen::Matrix<double, Eigen::Dynamic, 1> beta = in__.vector_constrain(1);
        size_t beta_j_1_max__ = 1;
        for (size_t j_1__ = 0; j_1__ < beta_j_1_max__; ++j_1__) {
            vars__.push_back(beta(j_1__));
        }

        double sigma = in__.scalar_lb_constrain(0);
        vars__.push_back(sigma);

        double lp__ = 0.0;
        (void) lp__;  // dummy to suppress unused var warning
        stan::math::accumulator<double> lp_accum__;

        local_scalar_t__ DUMMY_VAR__(std::numeric_limits<double>::quiet_NaN());
        (void) DUMMY_VAR__;  // suppress unused var warning

        if (!include_tparams__ && !include_gqs__) return;

        try {
            if (!include_gqs__ && !include_tparams__) return;
            if (!include_gqs__) return;
        } catch (const std::exception& e) {
            stan::lang::rethrow_located(e, current_statement_begin__, prog_reader__());
            // Next line prevents compiler griping about no return
            throw std::runtime_error("*** IF YOU SEE THIS, PLEASE REPORT A BUG ***");
        }
    }

    template <typename RNG>
    void write_array(RNG& base_rng,
                     Eigen::Matrix<double,Eigen::Dynamic,1>& params_r,
                     Eigen::Matrix<double,Eigen::Dynamic,1>& vars,
                     bool include_tparams = true,
                     bool include_gqs = true,
                     std::ostream* pstream = 0) const {
      std::vector<double> params_r_vec(params_r.size());
      for (int i = 0; i < params_r.size(); ++i)
        params_r_vec[i] = params_r(i);
      std::vector<double> vars_vec;
      std::vector<int> params_i_vec;
      write_array(base_rng, params_r_vec, params_i_vec, vars_vec, include_tparams, include_gqs, pstream);
      vars.resize(vars_vec.size());
      for (int i = 0; i < vars.size(); ++i)
        vars(i) = vars_vec[i];
    }

    std::string model_name() const {
        return "slowdown_model";
    }


    void constrained_param_names(std::vector<std::string>& param_names__,
                                 bool include_tparams__ = true,
                                 bool include_gqs__ = true) const {
        std::stringstream param_name_stream__;
        param_name_stream__.str(std::string());
        param_name_stream__ << "alpha";
        param_names__.push_back(param_name_stream__.str());
        size_t beta_j_1_max__ = 1;
        for (size_t j_1__ = 0; j_1__ < beta_j_1_max__; ++j_1__) {
            param_name_stream__.str(std::string());
            param_name_stream__ << "beta" << '.' << j_1__ + 1;
            param_names__.push_back(param_name_stream__.str());
        }
        param_name_stream__.str(std::string());
        param_name_stream__ << "sigma";
        param_names__.push_back(param_name_stream__.str());

        if (!include_gqs__ && !include_tparams__) return;

        if (include_tparams__) {
        }

        if (!include_gqs__) return;
    }


    void unconstrained_param_names(std::vector<std::string>& param_names__,
                                   bool include_tparams__ = true,
                                   bool include_gqs__ = true) const {
        std::stringstream param_name_stream__;
        param_name_stream__.str(std::string());
        param_name_stream__ << "alpha";
        param_names__.push_back(param_name_stream__.str());
        size_t beta_j_1_max__ = 1;
        for (size_t j_1__ = 0; j_1__ < beta_j_1_max__; ++j_1__) {
            param_name_stream__.str(std::string());
            param_name_stream__ << "beta" << '.' << j_1__ + 1;
            param_names__.push_back(param_name_stream__.str());
        }
        param_name_stream__.str(std::string());
        param_name_stream__ << "sigma";
        param_names__.push_back(param_name_stream__.str());

        if (!include_gqs__ && !include_tparams__) return;

        if (include_tparams__) {
        }

        if (!include_gqs__) return;
    }

}; // model

}  // namespace

typedef slowdown_model_namespace::slowdown_model stan_model;

#ifndef USING_R

stan::model::model_base& new_model(
        stan::io::var_context& data_context,
        unsigned int seed,
        std::ostream* msg_stream) {
  stan_model* m = new stan_model(data_context, seed, msg_stream);
  return *m;
}

#endif

Slower .hpp file (stanc3):

// Code generated by stanc 159a90f0
#include <stan/model/model_header.hpp>
namespace slowdown_model_namespace {

template <typename T, typename S>
std::vector<T> resize_to_match__(std::vector<T>& dst, const std::vector<S>& src) {
  dst.resize(src.size());
  return dst;
}

template <typename T>
Eigen::Matrix<T, -1, -1>
resize_to_match__(Eigen::Matrix<T, -1, -1>& dst, const Eigen::Matrix<T, -1, -1>& src) {
  dst.resize(src.rows(), src.cols());
  return dst;
}

template <typename T>
Eigen::Matrix<T, 1, -1>
resize_to_match__(Eigen::Matrix<T, 1, -1>& dst, const Eigen::Matrix<T, 1, -1>& src) {
  dst.resize(src.size());
  return dst;
}

template <typename T>
Eigen::Matrix<T, -1, 1>
resize_to_match__(Eigen::Matrix<T, -1, 1>& dst, const Eigen::Matrix<T, -1, 1>& src) {
  dst.resize(src.size());
  return dst;
}
std::vector<double> to_doubles__(std::initializer_list<double> x) {
  return x;
}

std::vector<stan::math::var> to_vars__(std::initializer_list<stan::math::var> x) {
  return x;
}

inline void validate_positive_index(const char* var_name, const char* expr,
                                    int val) {
  if (val < 1) {
    std::stringstream msg;
    msg << "Found dimension size less than one in simplex declaration"
        << "; variable=" << var_name << "; dimension size expression=" << expr
        << "; expression value=" << val;
    std::string msg_str(msg.str());
    throw std::invalid_argument(msg_str.c_str());
  }
}

inline void validate_unit_vector_index(const char* var_name, const char* expr,
                                       int val) {
  if (val <= 1) {
    std::stringstream msg;
    if (val == 1) {
      msg << "Found dimension size one in unit vector declaration."
          << " One-dimensional unit vector is discrete"
          << " but the target distribution must be continuous."
          << " variable=" << var_name << "; dimension size expression=" << expr;
    } else {
      msg << "Found dimension size less than one in unit vector declaration"
          << "; variable=" << var_name << "; dimension size expression=" << expr
          << "; expression value=" << val;
    }
    std::string msg_str(msg.str());
    throw std::invalid_argument(msg_str.c_str());
  }
}


using std::istream;
using std::string;
using std::stringstream;
using std::vector;
using stan::io::dump;
using stan::math::lgamma;
using stan::model::model_base_crtp;
using stan::model::rvalue;
using stan::model::cons_list;
using stan::model::index_uni;
using stan::model::index_max;
using stan::model::index_min;
using stan::model::index_min_max;
using stan::model::index_multi;
using stan::model::index_omni;
using stan::model::nil_index_list;
using namespace stan::math;

static int current_statement__ = 0;
static const std::vector<string> locations_array__ = {" (found before start of program)",
                                                      " (in '/cmdstanr-git/slowdown.stan', line 7, column 2 to column 13)",
                                                      " (in '/cmdstanr-git/slowdown.stan', line 8, column 2 to column 17)",
                                                      " (in '/cmdstanr-git/slowdown.stan', line 9, column 2 to column 22)",
                                                      " (in '/cmdstanr-git/slowdown.stan', line 13, column 2 to column 24)",
                                                      " (in '/cmdstanr-git/slowdown.stan', line 14, column 2 to column 23)",
                                                      " (in '/cmdstanr-git/slowdown.stan', line 15, column 2 to column 23)",
                                                      " (in '/cmdstanr-git/slowdown.stan', line 18, column 2 to column 43)",
                                                      " (in '/cmdstanr-git/slowdown.stan', line 2, column 2 to column 17)",
                                                      " (in '/cmdstanr-git/slowdown.stan', line 3, column 2 to column 14)",
                                                      " (in '/cmdstanr-git/slowdown.stan', line 4, column 2 to column 17)"};


class slowdown_model : public model_base_crtp<slowdown_model> {

 private:
  int pos__;
  int N;
  Eigen::Matrix<double, -1, 1> y;
  Eigen::Matrix<double, -1, -1> x;

 public:
  ~slowdown_model() { }

  std::string model_name() const { return "slowdown_model"; }

  slowdown_model(stan::io::var_context& context__,
             unsigned int random_seed__ = 0,
             std::ostream* pstream__ = nullptr) : model_base_crtp(0) {
    typedef double local_scalar_t__;
    boost::ecuyer1988 base_rng__ =
        stan::services::util::create_rng(random_seed__, 0);
    (void) base_rng__;  // suppress unused var warning
    static const char* function__ = "slowdown_model_namespace::slowdown_model";
    (void) function__;  // suppress unused var warning

    try {

      pos__ = 1;
      context__.validate_dims("data initialization","N","int",
          context__.to_vec());

      current_statement__ = 8;
      pos__ = 1;
      current_statement__ = 8;
      N = context__.vals_i("N")[(pos__ - 1)];
      current_statement__ = 9;
      validate_non_negative_index("y", "N", N);
      context__.validate_dims("data initialization","y","double",
          context__.to_vec(N));
      y = Eigen::Matrix<double, -1, 1>(N);

      current_statement__ = 9;
      pos__ = 1;
      current_statement__ = 9;
      for (size_t sym1__ = 1; sym1__ <= N; ++sym1__) {
        current_statement__ = 9;
        assign(y, cons_list(index_uni(sym1__), nil_index_list()),
          context__.vals_r("y")[(pos__ - 1)], "assigning variable y");
        current_statement__ = 9;
        pos__ = (pos__ + 1);}
      current_statement__ = 10;
      validate_non_negative_index("x", "N", N);
      current_statement__ = 10;
      validate_non_negative_index("x", "1", 1);
      context__.validate_dims("data initialization","x","double",
          context__.to_vec(N, 1));
      x = Eigen::Matrix<double, -1, -1>(N, 1);

      current_statement__ = 10;
      pos__ = 1;
      current_statement__ = 10;
      for (size_t sym1__ = 1; sym1__ <= 1; ++sym1__) {
        current_statement__ = 10;
        for (size_t sym2__ = 1; sym2__ <= N; ++sym2__) {
          current_statement__ = 10;
          assign(x,
            cons_list(index_uni(sym2__),
              cons_list(index_uni(sym1__), nil_index_list())),
            context__.vals_r("x")[(pos__ - 1)], "assigning variable x");
          current_statement__ = 10;
          pos__ = (pos__ + 1);}}
      current_statement__ = 8;
      current_statement__ = 8;
      check_greater_or_equal(function__, "N", N, 1);
    } catch (const std::exception& e) {
      stan::lang::rethrow_located(e, locations_array__[current_statement__]);
      // Next line prevents compiler griping about no return
      throw std::runtime_error("*** IF YOU SEE THIS, PLEASE REPORT A BUG ***");
    }
    num_params_r__ = 0U;

    try {
      num_params_r__ += 1;
      current_statement__ = 2;
      validate_non_negative_index("beta", "1", 1);
      num_params_r__ += 1;
      num_params_r__ += 1;
    } catch (const std::exception& e) {
      stan::lang::rethrow_located(e, locations_array__[current_statement__]);
      // Next line prevents compiler griping about no return
      throw std::runtime_error("*** IF YOU SEE THIS, PLEASE REPORT A BUG ***");
    }
  }
  template <bool propto__, bool jacobian__, typename T__>
  T__ log_prob(std::vector<T__>& params_r__, std::vector<int>& params_i__,
               std::ostream* pstream__ = 0) const {
    typedef T__ local_scalar_t__;
    T__ lp__(0.0);
    stan::math::accumulator<T__> lp_accum__;
    static const char* function__ = "slowdown_model_namespace::log_prob";
(void) function__;  // suppress unused var warning

    stan::io::reader<local_scalar_t__> in__(params_r__, params_i__);

    try {
      local_scalar_t__ alpha;

      current_statement__ = 1;
      alpha = in__.scalar();
      current_statement__ = 2;
      validate_non_negative_index("beta", "1", 1);
      Eigen::Matrix<local_scalar_t__, -1, 1> beta;
      beta = Eigen::Matrix<local_scalar_t__, -1, 1>(1);

      current_statement__ = 2;
      beta = in__.vector(1);
      local_scalar_t__ sigma;

      current_statement__ = 3;
      sigma = in__.scalar();
      current_statement__ = 3;
      if (jacobian__) {
        current_statement__ = 3;
        sigma = stan::math::lb_constrain(sigma, 0, lp__);
      } else {
        current_statement__ = 3;
        sigma = stan::math::lb_constrain(sigma, 0);
      }
      {
        current_statement__ = 4;
        lp_accum__.add(normal_log<propto__>(alpha, 0, 10));
        current_statement__ = 5;
        lp_accum__.add(normal_log<propto__>(beta, 0, 10));
        current_statement__ = 6;
        lp_accum__.add(normal_log<propto__>(sigma, 0, 5));
        current_statement__ = 7;
        lp_accum__.add(normal_id_glm_lpdf<propto__>(y, x, alpha, beta, sigma));
      }
    } catch (const std::exception& e) {
      stan::lang::rethrow_located(e, locations_array__[current_statement__]);
      // Next line prevents compiler griping about no return
      throw std::runtime_error("*** IF YOU SEE THIS, PLEASE REPORT A BUG ***");
    }
    lp_accum__.add(lp__);
    return lp_accum__.sum();
    } // log_prob()

  template <typename RNG>
  void write_array(RNG& base_rng__, std::vector<double>& params_r__,
                   std::vector<int>& params_i__, std::vector<double>& vars__,
                   bool emit_transformed_parameters__ = true,
                   bool emit_generated_quantities__ = true,
                   std::ostream* pstream__ = 0) const {
    typedef double local_scalar_t__;
    vars__.resize(0);
    stan::io::reader<local_scalar_t__> in__(params_r__, params_i__);
    static const char* function__ = "slowdown_model_namespace::write_array";
(void) function__;  // suppress unused var warning

    (void) function__;  // suppress unused var warning

    double lp__ = 0.0;
    (void) lp__;  // dummy to suppress unused var warning
    stan::math::accumulator<double> lp_accum__;

    try {
      double alpha;

      current_statement__ = 1;
      alpha = in__.scalar();
      current_statement__ = 2;
      validate_non_negative_index("beta", "1", 1);
      Eigen::Matrix<double, -1, 1> beta;
      beta = Eigen::Matrix<double, -1, 1>(1);

      current_statement__ = 2;
      beta = in__.vector(1);
      double sigma;

      current_statement__ = 3;
      sigma = in__.scalar();
      current_statement__ = 3;
      sigma = stan::math::lb_constrain(sigma, 0);
      vars__.push_back(alpha);
      for (size_t sym1__ = 1; sym1__ <= 1; ++sym1__) {
        vars__.push_back(beta[(sym1__ - 1)]);}
      vars__.push_back(sigma);
      if (logical_negation((primitive_value(emit_transformed_parameters__) ||
            primitive_value(emit_generated_quantities__)))) {
        return ;
      }
      if (logical_negation(emit_generated_quantities__)) {
        return ;
      }
    } catch (const std::exception& e) {
      stan::lang::rethrow_located(e, locations_array__[current_statement__]);
      // Next line prevents compiler griping about no return
      throw std::runtime_error("*** IF YOU SEE THIS, PLEASE REPORT A BUG ***");
    }
    } // write_array()

  void transform_inits(const stan::io::var_context& context__,
                       std::vector<int>& params_i__,
                       std::vector<double>& vars__, std::ostream* pstream__) const {
    typedef double local_scalar_t__;
    vars__.resize(0);
    vars__.reserve(num_params_r__);

    try {
      int pos__;

      pos__ = 1;
      double alpha;

      current_statement__ = 1;
      pos__ = 1;
      current_statement__ = 1;
      alpha = context__.vals_r("alpha")[(pos__ - 1)];
      current_statement__ = 2;
      validate_non_negative_index("beta", "1", 1);
      Eigen::Matrix<double, -1, 1> beta;
      beta = Eigen::Matrix<double, -1, 1>(1);

      current_statement__ = 2;
      pos__ = 1;
      current_statement__ = 2;
      for (size_t sym1__ = 1; sym1__ <= 1; ++sym1__) {
        current_statement__ = 2;
        assign(beta, cons_list(index_uni(sym1__), nil_index_list()),
          context__.vals_r("beta")[(pos__ - 1)], "assigning variable beta");
        current_statement__ = 2;
        pos__ = (pos__ + 1);}
      double sigma;

      current_statement__ = 3;
      pos__ = 1;
      current_statement__ = 3;
      sigma = context__.vals_r("sigma")[(pos__ - 1)];
      current_statement__ = 3;
      sigma = stan::math::lb_free(sigma, 0);
      vars__.push_back(alpha);
      for (size_t sym1__ = 1; sym1__ <= 1; ++sym1__) {
        vars__.push_back(beta[(sym1__ - 1)]);}
      vars__.push_back(sigma);
    } catch (const std::exception& e) {
      stan::lang::rethrow_located(e, locations_array__[current_statement__]);
      // Next line prevents compiler griping about no return
      throw std::runtime_error("*** IF YOU SEE THIS, PLEASE REPORT A BUG ***");
    }
    } // transform_inits()

  void get_param_names(std::vector<std::string>& names__) const {

    names__.resize(0);
    names__.push_back("alpha");
    names__.push_back("beta");
    names__.push_back("sigma");
    } // get_param_names()

  void get_dims(std::vector<std::vector<size_t>>& dimss__) const {
    dimss__.resize(0);
    std::vector<size_t> dims__;
    dimss__.push_back(dims__);
    dims__.resize(0);
    dims__.push_back(1);
    dimss__.push_back(dims__);
    dims__.resize(0);
    dimss__.push_back(dims__);
    dims__.resize(0);

    } // get_dims()

  void constrained_param_names(std::vector<std::string>& param_names__,
                               bool emit_transformed_parameters__ = true,
                               bool emit_generated_quantities__ = true) const {

    param_names__.push_back(std::string() + "alpha");
    for (size_t sym1__ = 1; sym1__ <= 1; ++sym1__) {
      {
        param_names__.push_back(std::string() + "beta" + '.' + std::to_string(sym1__));
      }}
    param_names__.push_back(std::string() + "sigma");
    if (emit_transformed_parameters__) {

    }

    if (emit_generated_quantities__) {

    }

    } // constrained_param_names()

  void unconstrained_param_names(std::vector<std::string>& param_names__,
                                 bool emit_transformed_parameters__ = true,
                                 bool emit_generated_quantities__ = true) const {

    param_names__.push_back(std::string() + "alpha");
    for (size_t sym1__ = 1; sym1__ <= 1; ++sym1__) {
      {
        param_names__.push_back(std::string() + "beta" + '.' + std::to_string(sym1__));
      }}
    param_names__.push_back(std::string() + "sigma");
    if (emit_transformed_parameters__) {

    }

    if (emit_generated_quantities__) {

    }

    } // unconstrained_param_names()

  std::string get_constrained_sizedtypes() const {
    stringstream s__;
    s__ << "[{\"name\":\"alpha\",\"type\":{\"name\":\"real\"},\"block\":\"parameters\"},{\"name\":\"beta\",\"type\":{\"name\":\"vector\",\"length\":" << 1 << "},\"block\":\"parameters\"},{\"name\":\"sigma\",\"type\":{\"name\":\"real\"},\"block\":\"parameters\"}]";
    return s__.str();
    } // get_constrained_sizedtypes()

  std::string get_unconstrained_sizedtypes() const {
    stringstream s__;
    s__ << "[{\"name\":\"alpha\",\"type\":{\"name\":\"real\"},\"block\":\"parameters\"},{\"name\":\"beta\",\"type\":{\"name\":\"vector\",\"length\":" << 1 << "},\"block\":\"parameters\"},{\"name\":\"sigma\",\"type\":{\"name\":\"real\"},\"block\":\"parameters\"}]";
    return s__.str();
    } // get_unconstrained_sizedtypes()


    // Begin method overload boilerplate
    template <typename RNG>
    void write_array(RNG& base_rng__,
                     Eigen::Matrix<double,Eigen::Dynamic,1>& params_r,
                     Eigen::Matrix<double,Eigen::Dynamic,1>& vars,
                     bool emit_transformed_parameters__ = true,
                     bool emit_generated_quantities__ = true,
                     std::ostream* pstream = 0) const {
      std::vector<double> params_r_vec(params_r.size());
      for (int i = 0; i < params_r.size(); ++i)
        params_r_vec[i] = params_r(i);
      std::vector<double> vars_vec;
      std::vector<int> params_i_vec;
      write_array(base_rng__, params_r_vec, params_i_vec, vars_vec,
          emit_transformed_parameters__, emit_generated_quantities__, pstream);
      vars.resize(vars_vec.size());
      for (int i = 0; i < vars.size(); ++i)
        vars(i) = vars_vec[i];
    }

    template <bool propto__, bool jacobian__, typename T_>
    T_ log_prob(Eigen::Matrix<T_,Eigen::Dynamic,1>& params_r,
               std::ostream* pstream = 0) const {
      std::vector<T_> vec_params_r;
      vec_params_r.reserve(params_r.size());
      for (int i = 0; i < params_r.size(); ++i)
        vec_params_r.push_back(params_r(i));
      std::vector<int> vec_params_i;
      return log_prob<propto__,jacobian__,T_>(vec_params_r, vec_params_i, pstream);
    }

    void transform_inits(const stan::io::var_context& context,
                         Eigen::Matrix<double, Eigen::Dynamic, 1>& params_r,
                         std::ostream* pstream__) const {
      std::vector<double> params_r_vec;
      std::vector<int> params_i_vec;
      transform_inits(context, params_i_vec, params_r_vec, pstream__);
      params_r.resize(params_r_vec.size());
      for (int i = 0; i < params_r.size(); ++i)
        params_r(i) = params_r_vec[i];
    }

};
}
typedef slowdown_model_namespace::slowdown_model stan_model;

#ifndef USING_R

// Boilerplate
stan::model::model_base& new_model(
        stan::io::var_context& data_context,
        unsigned int seed,
        std::ostream* msg_stream) {
  stan_model* m = new stan_model(data_context, seed, msg_stream);
  return *m;
}

#endif
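For anyone reproducing the timings, here is a rough Python equivalent of the R data-generation snippet quoted earlier in the thread, writing the data as JSON (relevant since cmdstanr's JSON input was mentioned above). The file name and JSON layout are my assumptions, not from the thread:

```python
# Rough Python port of the thread's R data generation (seed is arbitrary;
# the original R code did not set one).
import json
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
alpha = 10 * rng.standard_normal()
beta = 10 * rng.standard_normal()
eps = 0.1 * rng.standard_normal(N)
X = rng.standard_normal(N)
y = alpha + beta * X + eps

# x must be an N-by-1 matrix to match the Stan program's `matrix[N, 1] x`.
data = {"N": N, "y": y.tolist(), "x": [[v] for v in X.tolist()]}
with open("slowdown.data.json", "w") as f:
    json.dump(data, f)
```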

How long does it take if you run 4000 iterations? This smells like


That issue was closed because of an unexpected miracle but I’m somewhat skeptical.


Average time over 4 chains in seconds

warmup/samples   old   stanc3
  1000/1000       30       70
  2000/2000       59       94
  1000/4000       78      112
  2000/4000       90      120
  4000/4000      107      146
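A quick arithmetic pass over the table (this observation is mine, not from the thread): the absolute gap stays roughly constant at 30-40 seconds while the ratio shrinks as the iteration count grows.

```python
# Differences and ratios for the timings reported above
# (seconds, average over 4 chains; numbers copied from the table).
rows = [
    ("1000/1000", 30, 70),
    ("2000/2000", 59, 94),
    ("1000/4000", 78, 112),
    ("2000/4000", 90, 120),
    ("4000/4000", 107, 146),
]
for label, old, stanc3 in rows:
    print(f"{label}: +{stanc3 - old}s ({stanc3 / old:.2f}x)")
```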

Yeah, based on these numbers it really does seem to be the issue @nhuurre linked. I reopened it with the additional information you have provided here.

What compiler flags are you using?

Whatever defaults cmdstan ships with; I’ve only changed clang++ to g++: g++ -std=c++1y -pthread -D_REENTRANT -O3.

@nhuurre has the fix https://github.com/stan-dev/stanc3/pull/433

Hooray!
