
Margin-Based Thresholds for Learnability in Online Strategic Classification

The Mathematical Data Science Centre seminar series

Date & time

14 May 2026, 1:00pm – 2:00pm
Speaker

Nam Ho-Nguyen (The University of Sydney)
Description

Abstract: We study online strategic classification, in which the learner’s deployed classifier influences the data it subsequently observes because agents can strategically alter their reported features. Learnability is well understood when the agents’ underlying true features satisfy a large-margin separability condition. Our goal is to characterise when meaningful learning remains possible under strategic behaviour. We show that learnability is governed by margin-based thresholds. In particular, there are regimes in which no online method can guarantee finitely many prediction errors, and stricter regimes are required to simultaneously control both prediction errors and strategic manipulation. To complement these limits, we develop a general reduction framework that converts standard online large-margin algorithms into algorithms for strategic classification. Using reported features alone yields guarantees in favourable regimes, while a proxy-data construction extends finite-mistake guarantees to the full learnable range. We also study additive noise in reported features and show that the associated learnability thresholds depend sharply on the feedback model and on what information about the perturbation is available to the learner. Counterintuitively, learnability thresholds do not vary continuously with the noise level: the introduction of arbitrarily small additive noise can create a strictly harder learning problem than in the noiseless case, producing a discontinuous jump in the threshold for learnability.
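To make the setting concrete, here is a minimal simulation sketch of online strategic classification, assuming a standard margin/manipulation model: true features are linearly separable with margin `gamma`, and each agent may shift its reported features by at most `alpha` (in Euclidean norm) toward the deployed classifier's positive side. The perceptron-on-reported-features learner below is one simple instance of the "use reported features alone" strategy mentioned in the abstract; the names `gamma`, `alpha`, and `best_response` are illustrative assumptions, not the speaker's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the talk):
# d = dimension, gamma = true-feature margin, alpha = manipulation budget.
d, gamma, alpha, T = 5, 1.0, 0.3, 500
w_star = np.ones(d) / np.sqrt(d)  # hidden separator with unit norm

def sample_point():
    """Draw a true feature vector with margin at least gamma w.r.t. w_star."""
    x = rng.normal(size=d)
    y = 1 if x @ w_star >= 0 else -1
    x += y * gamma * w_star  # push the point gamma away from the boundary
    return x, y

def best_response(x, w, alpha):
    """Agent shifts up to alpha toward the classifier's positive side,
    but only if that shift actually flips the prediction to positive."""
    norm = np.linalg.norm(w)
    if norm == 0 or w @ x >= 0:
        return x                      # already classified positive
    if w @ x + alpha * norm >= 0:
        return x + alpha * w / norm   # a shift of alpha suffices
    return x                          # manipulation cannot help

# Online perceptron run on reported (possibly manipulated) features.
w, mistakes = np.zeros(d), 0
for _ in range(T):
    x, y = sample_point()
    x_rep = best_response(x, w, alpha)
    if np.sign(w @ x_rep) != y:       # sign(0) counts as a mistake
        mistakes += 1
        w += y * x_rep

print("mistakes over", T, "rounds:", mistakes)
```

In this favourable regime (`alpha` well below `gamma`) the mistake count stays small; the abstract's point is that as the manipulation budget crosses margin-based thresholds, no such finite-mistake guarantee is possible without further machinery such as the proxy-data construction.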

Location

Room 1.33 Hanna Neumann Building #145
