Privacy Amplification via Shuffling for Linear Contextual Bandits

Research output: Contribution to journal › Conference article › peer-review

Abstract

Contextual bandit algorithms are widely used in domains where it is desirable to provide a personalized service by leveraging contextual information that may contain sensitive data requiring protection. Inspired by this scenario, we study the contextual linear bandit problem with differential privacy (DP) constraints. While the literature has focused on either centralized (joint DP, or JDP) or local (LDP) privacy, we consider the shuffle model of privacy and show that it is possible to achieve a privacy/utility trade-off between JDP and LDP. By leveraging shuffling from privacy and batching from bandits, we present an algorithm with regret bound Õ(T^{2/3}/ε^{1/3}), while guaranteeing both central (joint) and local privacy. Our result shows that it is possible to obtain a trade-off between JDP and LDP by leveraging the shuffle model while preserving local privacy.
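To make the protocol in the abstract concrete, the following is a minimal illustrative sketch (not the authors' algorithm): each user perturbs the sufficient statistics of a linear bandit locally (the local-DP step), a shuffler permutes the batch so the server cannot link messages to users (the amplification step), and the server aggregates the shuffled batch to update a ridge-regression estimate. All names, the noise scale, and the batch size are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_randomize(stat, sigma):
    """Local-DP step (illustrative): each user adds noise before sending."""
    return stat + rng.normal(0.0, sigma, size=stat.shape)

def shuffle(messages):
    """Trusted shuffler: a uniform random permutation hides which user
    sent which message; this is what amplifies the local guarantee."""
    perm = rng.permutation(len(messages))
    return [messages[i] for i in perm]

# --- one batch of a batched linear bandit (hypothetical parameters) ---
d, batch_size, sigma = 3, 100, 0.5
theta_true = np.array([1.0, -0.5, 0.2])

# Each user observes a context x and reward r = <x, theta> + noise,
# then privatizes the sufficient statistics (x x^T, r x) locally.
messages = []
for _ in range(batch_size):
    x = rng.normal(size=d)
    r = x @ theta_true + rng.normal(0.0, 0.1)
    msg = np.concatenate([np.outer(x, x).ravel(), r * x])
    messages.append(local_randomize(msg, sigma))

# The server only ever sees the shuffled batch and aggregates it.
agg = np.sum(shuffle(messages), axis=0)
A = agg[: d * d].reshape(d, d) + np.eye(d)  # ridge regularizer
b = agg[d * d :]
theta_hat = np.linalg.solve(A, b)  # noisy estimate of theta_true
print(theta_hat)
```

Batching matters here because noise is added once per user per batch: aggregating many perturbed statistics before updating the estimate keeps the per-coordinate noise small relative to the signal, which is one intuition behind the T^{2/3}-type regret in the shuffle model.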

Original language: English
Pages (from-to): 381-407
Number of pages: 27
Journal: Proceedings of Machine Learning Research
Volume: 167
Publication status: Published - 1 Jan 2022
Externally published: Yes
Event: 33rd International Conference on Algorithmic Learning Theory, ALT 2022 - Virtual, Online, France
Duration: 29 Mar 2022 – 1 Apr 2022

Keywords

  • Differential Privacy
  • Joint Differential Privacy
  • Linear Contextual Bandits
  • Local Differential Privacy
  • Shuffling
