Abstract
Contextual bandit algorithms are widely used in domains where it is desirable to provide a personalized service by leveraging contextual information, which may contain sensitive data that needs to be protected. Inspired by this scenario, we study the contextual linear bandit problem under differential privacy (DP) constraints. While the literature has focused on either centralized (joint DP, JDP) or local (LDP) privacy, we consider the shuffle model of privacy and show that it is possible to achieve a privacy/utility trade-off between JDP and LDP. By leveraging shuffling from the privacy literature and batching from the bandit literature, we present an algorithm with regret bound Õ(T^{2/3}/ε^{1/3}) that guarantees both central (joint) and local privacy. Our result shows that the shuffle model makes it possible to interpolate between JDP and LDP while preserving local privacy.
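The shuffle model mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's bandit algorithm; it is a generic example, assuming binary randomized response as the local randomizer, showing the pipeline the shuffle model relies on: each user privatizes their own data (local privacy), then a shuffler permutes the reports so the analyzer cannot link a report to a user, which amplifies the central privacy guarantee.

```python
import math
import random

def local_randomizer(bit: int, eps: float) -> int:
    """eps-LDP randomized response on a single bit:
    report the true bit with probability e^eps / (e^eps + 1)."""
    p = math.exp(eps) / (math.exp(eps) + 1)
    return bit if random.random() < p else 1 - bit

def shuffle_and_aggregate(bits, eps: float) -> float:
    """Shuffle-model pipeline: locally privatize, shuffle to break
    the link between users and reports, then debias the sum."""
    reports = [local_randomizer(b, eps) for b in bits]
    random.shuffle(reports)  # the shuffler's only job: a random permutation
    p = math.exp(eps) / (math.exp(eps) + 1)
    n = len(reports)
    # Unbiased estimate of the true sum from the randomized reports.
    return (sum(reports) - n * (1 - p)) / (2 * p - 1)
```

Each report individually satisfies eps-LDP, while the shuffled, anonymized collection enjoys a stronger central guarantee; the paper exploits this amplification, combined with batching, to sit between the JDP and LDP regret regimes.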
| Original language | English |
|---|---|
| Pages (from-to) | 381-407 |
| Number of pages | 27 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 167 |
| Publication status | Published - 1 Jan 2022 |
| Externally published | Yes |
| Event | 33rd International Conference on Algorithmic Learning Theory (ALT 2022), Virtual/Online, France, 29 Mar 2022 – 1 Apr 2022 |
Keywords
- Differential Privacy
- Joint Differential Privacy
- Linear Contextual Bandits
- Local Differential Privacy
- Shuffling