Utility/privacy trade-off as regularized optimal transport

Research output: Contribution to journal › Article › peer-review

Abstract

Strategic information is valuable either by remaining private (for instance, if it is sensitive) or, conversely, by being used publicly to increase some utility. These two objectives are antagonistic, and leaking this information by taking full advantage of it may be more rewarding than concealing it. Unlike classical solutions that focus solely on the first objective, we consider agents that optimize a natural trade-off between the two. We formalize this as an optimization problem in which the objective mapping is regularized by the amount of information revealed to the adversary (measured as a divergence between the prior and posterior on the private knowledge). Quite surprisingly, when combined with entropic regularization, the Sinkhorn loss naturally emerges in the optimization objective, making the problem efficiently solvable via better-adapted optimization schemes. We empirically compare these techniques on a toy example and apply them to preserve privacy in online repeated auctions.
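The abstract refers to the Sinkhorn loss arising from entropic regularization of optimal transport. As background, a minimal sketch of the standard Sinkhorn fixed-point iteration for the entropic-regularized OT cost between two discrete distributions is given below; the regularization strength `eps`, iteration count `n_iters`, and the distributions themselves are illustrative assumptions, not taken from the paper, whose exact objective differs.

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=0.1, n_iters=200):
    """Entropic-regularized OT cost between discrete distributions a and b
    with cost matrix C, computed via the classical Sinkhorn iteration.
    Illustrative sketch only; eps and n_iters are arbitrary choices."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # rescale columns to match marginal b
        u = a / (K @ v)                  # rescale rows to match marginal a
    P = u[:, None] * K * v[None, :]      # approximate optimal transport plan
    return np.sum(P * C)                 # transport cost under plan P

# Example: two uniform distributions on two points with swap cost 1.
a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
cost = sinkhorn_cost(a, b, C)
```

Since `a` and `b` coincide here, the plan concentrates near the diagonal and the cost is close to zero for small `eps`; the (debiased) Sinkhorn loss used in the literature subtracts self-transport terms so that identical distributions yield exactly zero.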

Original language: English
Pages (from-to): 703-726
Number of pages: 24
Journal: Mathematical Programming
Volume: 203
Issue number: 1-2
DOIs
Publication status: Published - 1 Jan 2024

Keywords

  • 49Q22
  • 68P27
  • 90C90
  • 91A27
  • Non-convex optimization
  • Optimal transport
  • Privacy learning
  • Repeated auctions

