Decision from Suboptimal Classifiers: Excess Risk Pre- and Post-Calibration

Alexandre Perez-Lebel, Gael Varoquaux, Sanmi Koyejo, Matthieu Doutreligne, Marine Le Morvan

Research output: Contribution to journal › Conference article › peer-review

Abstract

Probabilistic classifiers are central for making informed decisions under uncertainty. Based on the maximum expected utility principle, optimal decision rules can be derived using the posterior class probabilities and misclassification costs. Yet, in practice, only learned approximations of the oracle posterior probabilities are available. In this work, we quantify the excess risk (a.k.a. regret) incurred when using approximate posterior probabilities in batch binary decision-making. We provide analytical expressions for the miscalibration-induced regret (RCL), as well as tight and informative upper and lower bounds on the regret of calibrated classifiers (RGL). These expressions allow us to identify regimes where recalibration alone addresses most of the regret, and regimes where the regret is dominated by the grouping loss, which calls for post-training beyond recalibration. Crucially, both RCL and RGL can be estimated in practice using a calibration curve and a recent grouping loss estimator. In NLP experiments, we show that these quantities identify when the expected gain from more advanced post-training is worth the operational cost. Finally, we highlight the potential of multicalibration approaches as efficient alternatives to costlier fine-tuning approaches.
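As an illustration of the decision-making setting the abstract describes (not the paper's own code), the maximum expected utility principle for binary classification with misclassification costs reduces to thresholding the posterior probability: with cost c_fp for a false positive and c_fn for a false negative, the Bayes-optimal rule predicts the positive class whenever the posterior exceeds t = c_fp / (c_fp + c_fn). A minimal sketch:

```python
import numpy as np

def optimal_decisions(posterior_pos, c_fp, c_fn):
    """Cost-sensitive thresholding of (possibly approximate) posteriors.

    Predicts the positive class when the posterior probability exceeds
    the Bayes-optimal threshold t = c_fp / (c_fp + c_fn). When the
    posteriors are only learned approximations of the oracle posterior,
    this rule incurs the excess risk (regret) studied in the paper.
    """
    threshold = c_fp / (c_fp + c_fn)
    return (np.asarray(posterior_pos) > threshold).astype(int)

# Example: false negatives 4x costlier than false positives
# lowers the decision threshold from 0.5 to 0.2.
probs = np.array([0.1, 0.25, 0.6, 0.9])
decisions = optimal_decisions(probs, c_fp=1.0, c_fn=4.0)
# decisions -> [0, 1, 1, 1]
```

The function names and variables here are illustrative; the paper's contribution is to quantify how much worse this rule performs when `posterior_pos` is miscalibrated (RCL) or suffers grouping loss (RGL), not the rule itself.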

Original language: English
Pages (from-to): 2395-2403
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 258
Publication status: Published - 1 Jan 2025
Externally published: Yes
Event: 28th International Conference on Artificial Intelligence and Statistics, AISTATS 2025 - Mai Khao, Thailand
Duration: 3 May 2025 - 5 May 2025
