Calibration Error for Decision Making (Yifan Wu)

Abstract

A sequence of predictions is calibrated if and only if it induces no swap regret in any downstream decision task. We propose a new decision-theoretic calibration error, the Calibration Decision Loss (CDL): the swap regret maximized over all downstream tasks with bounded payoffs. Previously, the best online prediction algorithm for minimizing CDL was obtained by minimizing the ECE calibration error, which upper bounds CDL up to a constant factor. However, recent work (Qiao and Valiant, 2021) gives an Ω(T^0.528) lower bound on the worst-case expected ECE calibration error incurred by any randomized algorithm over T rounds, presenting a barrier to achieving better rates for CDL. Several relaxations of CDL have been considered to overcome this barrier, via external regret (Kleinberg et al., 2023) and regret bounds depending polynomially on the number of actions in downstream tasks (Roth and Shi, 2024). We show that the barrier can be surpassed without any relaxations: we give an efficient randomized prediction algorithm that guarantees O(√T log T) expected CDL. We also discuss the economic utility of calibration and study the relationship of CDL to existing metrics.
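To make the ECE quantity referenced above concrete, here is a minimal sketch of the (unbinned) ECE of a finite prediction sequence with binary outcomes: the sum, over each distinct predicted value v, of the absolute gap between v and the empirical frequency of the outcome on the rounds where v was predicted, weighted by how often v was predicted. The function name and interface are illustrative, not from the talk.

```python
from collections import defaultdict

def ece(predictions, outcomes):
    """Unbinned ECE: (1/T) * sum over distinct predicted values v of
    |sum_{t: p_t = v} (y_t - v)|, for predictions p_t in [0, 1] and
    binary outcomes y_t in {0, 1}. Illustrative helper, not the
    algorithm from the talk."""
    gaps = defaultdict(float)
    for p, y in zip(predictions, outcomes):
        gaps[p] += y - p  # accumulate signed miscalibration at value p
    return sum(abs(g) for g in gaps.values()) / len(predictions)
```

For example, predicting 0.5 twice with outcomes 1 and 0 is perfectly calibrated (ECE 0), while predicting 1.0 twice with outcomes 0 and 0 gives ECE 1. The abstract's point is that bounding this quantity suffices to bound CDL up to a constant, but the Ω(T^0.528) lower bound on ECE blocks √T-type rates by that route.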

This is joint work with Lunjia Hu, to appear in FOCS 2024.

Time

2024-08-30  15:00 - 16:00   

Speaker

Yifan Wu, Northwestern University

Room

Room 308