Ill-typed arguments supplied to infix operator * - error with new array type

I’m running into an issue when conditioning a parameter (the rate in a Bernoulli) on a simple linear model.

Stan code:

data {
  int<lower=1> n;
  array[n] int h;
  //int h[n];
  array[n] real<lower=0, upper=1> memory;
  //real<lower=0, upper=1> memory[n];
}

// The parameters accepted by the model. 
parameters {
  real alpha;
  real beta;
}

// The model to be estimated. 
model {
  // Priors on the intercept and slope
  target += normal_lpdf(alpha | 0, 1);
  target += normal_lpdf(beta | 0, 0.3);

  // Likelihood: Bernoulli with a logit link on alpha + beta * memory
  target += bernoulli_logit_lpmf(h | alpha + beta * memory);
}

R/cmdstanr code to reproduce the error. Note that the code is deliberately simplified, so the memory variable is not particularly meaningful; it’s just there to set up the example.

pacman::p_load(tidyverse,
               posterior,
               cmdstanr)

trials <- 120
d <- tibble(trial = seq(trials), choice = rbinom(trials, 1, 0.7)) %>%
  mutate(memory = cumsum(choice) / seq_along(choice))

## Create the data
data <- list(
  n = trials,
  h = d$choice,
  memory = d$memory
)

## Specify where the model is
file <- file.path("model.stan")
mod <- cmdstan_model(file, cpp_options = list(stan_threads = TRUE))

The error is:

Semantic error in ‘/var/folders/3m/f039n0x549vfxhdtj55yykzhfjr0d6/T/Rtmpy3kqj3/model-183d225a00414.stan’, line 22, column 19 to column 32:

20:  transformed parameters{
21:    real theta;
22:    theta = (alpha + beta * memory);
                        ^
23:  }
24:  

Ill-typed arguments supplied to infix operator *. Available signatures:

Update:
replacing

 target += bernoulli_logit_lpmf(h | alpha + beta * memory);

with

for (i in 1:n)
    target += bernoulli_logit_lpmf(h[i] | alpha + beta * memory[i]);

fixes the problem, which is frustrating given that I’d like to vectorize (mostly for teaching purposes here).

This is because the multiply function (the * operator) is only defined for the following signatures:

multiply(int, int) => int
multiply(real, real) => real
multiply(row_vector, vector) => real
multiply(real, vector) => vector
multiply(vector, real) => vector
multiply(matrix, vector) => vector
multiply(complex, complex) => complex
multiply(real, row_vector) => row_vector
multiply(row_vector, real) => row_vector
multiply(row_vector, matrix) => row_vector
multiply(real, matrix) => matrix
multiply(vector, row_vector) => matrix
multiply(matrix, real) => matrix
multiply(matrix, matrix) => matrix

so multiplying an array is not supported; arrays do not appear in any of these signatures.

The only thing you can do is convert it to a vector with to_vector().

In general, we recommend using vectors instead of arrays of reals unless there is a specific reason you need arrays (some functions only accept arrays, for example).
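
To make that concrete, here is a minimal sketch (reusing the variable names from this thread, not code posted in it) of the recommended version, with the to_vector() alternative noted in a comment:

data {
  int<lower=1> n;
  array[n] int h;
  vector<lower=0, upper=1>[n] memory;  // a vector rather than array[n] real
}
parameters {
  real alpha;
  real beta;
}
model {
  target += normal_lpdf(alpha | 0, 1);
  target += normal_lpdf(beta | 0, 0.3);
  // multiply(real, vector) => vector, so the statement vectorizes;
  // if memory stays an array, write beta * to_vector(memory) instead
  target += bernoulli_logit_lpmf(h | alpha + beta * memory);
}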


thanks.

target += bernoulli_logit_lpmf(h | alpha + beta * to_vector(memory));

works.

I got into trouble because I started getting warnings that real<lower=0, upper=1> memory[n]; was deprecated, and the warning suggested using array syntax instead. So a few others might run into this issue too.
Not sure whether changing the warning would be a good idea?

Hm,

well

real<lower=0, upper=1> memory[n];

and

array[n] real<lower=0, upper=1> memory;

are the exact same type (an array of reals), so neither of them can be multiplied by a scalar.
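
A hedged sketch of the same point: because both declarations produce the same array type, the same conversion works for either of them, and it can be done once in transformed data (memory_v is a name introduced here just for illustration):

transformed data {
  // valid regardless of whether memory was declared with the old or the
  // new array syntax, since both declare the same array of reals
  vector[n] memory_v = to_vector(memory);
}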


Right, thanks.
I’m actually wondering how best to explain the logic in

data {
  int<lower=1> n;
  array[n] int h;
  vector<lower=0, upper=1>[n] memory;
}

where both the outcome of the Bernoulli (h) and the predictor (memory) are per-trial sequences of n values, yet the outcome is best specified as an array and the predictor as a vector.
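
One way to frame it (a sketch with explanatory comments, not taken from the thread): Stan’s vector, row_vector, and matrix types hold only reals, so integer outcomes have to live in an int array, while the real-valued predictor enters the linear predictor alpha + beta * memory, where vector arithmetic is defined:

data {
  int<lower=1> n;                      // number of trials
  array[n] int h;                      // Bernoulli outcomes are integers; vectors hold
                                       // only reals, so integers must go in an array
  vector<lower=0, upper=1>[n] memory;  // real-valued predictor; as a vector it supports
                                       // vectorized arithmetic like alpha + beta * memory
}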