Selecting a link function

Dear Community,

I am trying to fit a model to my ordered categorical data. I have been using the post "Bayesian ordinal regression with random effects using brms" to help, as well as https://journals.sagepub.com/doi/full/10.1177/2515245918823199.

There is a useful function Kevin Stadler describes for selecting a link; it looks like this:

cumulativemodelfit <- function(formula, data, links=c("logit", "probit", "cloglog", "cauchit"),
    thresholds=c("flexible", "equidistant"), verbose=TRUE) {
  names(links) <- links
  names(thresholds) <- thresholds
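  # fit an ordinal::clm() for each link/threshold combination and collect the log-likelihoods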
  llks <- outer(links, thresholds,
    Vectorize(function(link, threshold)
      # catch error for responses with 2 levels
      tryCatch(ordinal::clm(formula, data=data, link=link, threshold=threshold)$logLik,
        error = function(e) NA)))
  print(llks)
  if (verbose) {
    bestfit <- which.max(llks)
    cat("\nThe best link function is ", links[bestfit %% length(links)], " with a ",
    thresholds[1 + bestfit %/% length(thresholds)], " threshold (logLik ", llks[bestfit],
    ")\n", sep="")
  }
  invisible(llks)
}

I applied it to my data, which is a sentiment score (negative, neutral, positive):

linkfunction <- cumulativemodelfit(sentiment ~ 1, data = iddf)

The output is below:

> linkfunction
>          flexible equidistant
> logit   -2066.537   -2066.537
> probit  -2066.537   -2066.537
> cloglog -2066.537   -2066.537
> cauchit -2066.537   -2066.537
> The best link function is cloglog with a equidistant threshold (logLik -2066.537)

I'm wondering if anyone can help me understand what is happening. I am new to brms, and I'm not sure why all the links give the same result, yet cloglog is selected?

Thank you so much

From what I've seen in the link description, the procedure is basically to fit the model with each link and choose the one with the highest likelihood (technically the highest MAP, but the priors should be the same). Why it gives exactly the same value for every link is not clear to me. I am assuming the model is fitted with the four different link functions, so it looks like there may be a bug somewhere, e.g. the same model being loaded into the link-selecting function.
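
For illustration, that procedure done by hand looks roughly like this (an untested sketch, reusing the sentiment and iddf names from your post):

library(ordinal)

links <- c("logit", "probit", "cloglog", "cauchit")
fits <- lapply(links, function(l) clm(sentiment ~ 1, data = iddf, link = l))
names(fits) <- links

sapply(fits, logLik)  # maximised log-likelihoods, one per link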

A better approach (in the sense that it would use the whole posterior) would be to use an AIC-like criterion to choose the best-performing model (probably WAIC; DIC isn't often recommended around here, but it'd probably still be better than just choosing the highest MAP).
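
With brms that could look something like this (a rough sketch; the fit names are just placeholders and the priors are left at their defaults):

library(brms)

fit_logit  <- brm(sentiment ~ 1, data = iddf, family = cumulative("logit"))
fit_probit <- brm(sentiment ~ 1, data = iddf, family = cumulative("probit"))

waic(fit_logit)                                  # WAIC for a single model
loo_compare(waic(fit_logit), waic(fit_probit))   # difference in expected log predictive density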


+1 to what @caesoma said, but I'll note that the currently recommended tool for an AIC-like criterion is the loo package with its LOO-IC, which estimates the same quantity as WAIC/AIC but tends to be more robust and has better diagnostics to signal when the computation is suspicious.
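
In code that is just something like this (a sketch, assuming brms fits fit_logit and fit_probit for two candidate links, as in the placeholder names above):

loo_logit  <- loo(fit_logit)     # PSIS-LOO; the printout includes Pareto-k diagnostics
loo_probit <- loo(fit_probit)

loo_compare(loo_logit, loo_probit)  # elpd differences with standard errors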

One possible reason why you would get almost the same likelihood with different links (since you are using the frequentist ordinal package) is that with an intercept-only model, all of the flexible-threshold models boil down to just defining a probability for each category; the intercept-only cumulative model is very close to a categorical intercept-only model. If I remember correctly from another post, you have 3 categories, so I can see how even the equidistant-threshold variant has enough flexibility to match any distribution of the categories.
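
You can check this explanation with simulated 3-category data (a quick sketch with made-up data): with an intercept only, every link should reach essentially the same maximum, namely the saturated multinomial log-likelihood.

set.seed(1)
y <- factor(sample(c("negative", "neutral", "positive"), 500, replace = TRUE,
                   prob = c(0.2, 0.3, 0.5)),
            levels = c("negative", "neutral", "positive"), ordered = TRUE)
d <- data.frame(sentiment = y)

# intercept-only cumulative model, one fit per link
sapply(c("logit", "probit", "cloglog", "cauchit"),
       function(l) ordinal::clm(sentiment ~ 1, data = d, link = l)$logLik)

# saturated multinomial log-likelihood for comparison
sum(table(y) * log(prop.table(table(y))))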

For modelling choices like this I would usually recommend using posterior predictive checks and other tools to critique full models (see e.g. the Bayesian workflow preprint); the best link in a simple model might be problematic in a larger model. In the case of ordinal models, however, I would expect the differences across links to be very minor, so the choice is likely quite arbitrary and I wouldn't spend too much energy on it.
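
For the posterior predictive checks, something as simple as the following can already be informative (a sketch; fit stands for whichever brms model you end up with):

pp_check(fit, type = "bars", ndraws = 200)  # predicted vs. observed category counts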

Best of luck with your model!
