On Estimating the Strength of Differentially Private Mechanisms in a Black-Box Setting

Research output: Contribution to journal › Article › peer-review

Abstract

We analyze to what extent end users can infer information about the level of protection of their data when the data obfuscation mechanism is a priori unknown to them (the so-called 'black-box' scenario). In particular, we explore four notions of differential privacy, namely local/central ϵ-DP/Rényi-DP. On the one hand, we prove that, without any assumption on the underlying distributions, no algorithm can infer the level of data protection with provable guarantees. On the other hand, we demonstrate that, under reasonable assumptions (namely Lipschitzness of the involved densities on a closed interval), such guarantees exist for the local versions and can be achieved by a simple histogram-based estimator. We validate our results experimentally and note that, for two particularly well-behaved noise distributions (namely Laplace and Gaussian), our method performs better than expected, in the sense that in practice the number of samples needed to achieve the desired confidence is smaller than the theoretical bound, and the estimate of ϵ is more precise than predicted.
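The abstract's histogram-based approach can be illustrated with a minimal sketch (not the authors' implementation; function names, bin counts, and sample sizes are illustrative assumptions): given black-box samples of the mechanism's output on two adjacent inputs, bin both sample sets and take the maximum absolute log-ratio of the estimated densities as an estimate of the local ϵ.

```python
import numpy as np

def estimate_epsilon(samples_x, samples_y, bins=30, support=(-4.0, 5.0)):
    """Histogram-based estimate of local epsilon-DP: the max over bins of
    |log(p_hat / q_hat)|, where p_hat and q_hat are the empirical output
    densities of the mechanism on two adjacent inputs."""
    p, _ = np.histogram(samples_x, bins=bins, range=support, density=True)
    q, _ = np.histogram(samples_y, bins=bins, range=support, density=True)
    mask = (p > 0) & (q > 0)  # compare only bins observed under both inputs
    return float(np.max(np.abs(np.log(p[mask] / q[mask]))))

# Sanity check: the Laplace mechanism with scale b on inputs 0 and 1
# (sensitivity 1) satisfies epsilon-DP with epsilon = 1/b.
rng = np.random.default_rng(0)
b = 1.0
x = rng.laplace(0.0, b, 200_000)
y = rng.laplace(1.0, b, 200_000)
print(estimate_epsilon(x, y))  # should be close to 1/b = 1.0
```

Restricting the histogram to a closed interval where both densities are bounded away from zero mirrors the paper's Lipschitzness-on-a-closed-interval assumption; far tail bins would otherwise contribute very noisy log-ratios.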

Original language: English
Pages (from-to): 5494-5507
Number of pages: 14
Journal: IEEE Transactions on Dependable and Secure Computing
Volume: 22
Issue number: 5
DOIs
Publication status: Published - 1 Jan 2025

Keywords

  • Differential privacy
  • histogram-based sampling
  • local differential privacy
  • Rényi differential privacy
  • the impossibility of provable guarantees
