Differential Privacy Guarantees of Markov Chain Monte Carlo Algorithms

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper aims to provide differential privacy (DP) guarantees for Markov chain Monte Carlo (MCMC) algorithms. In the first part, we establish DP guarantees on samples output by MCMC algorithms, as well as on Monte Carlo estimators associated with these methods, under assumptions on the convergence properties of the underlying Markov chain. In particular, our results highlight the critical condition that the target distribution itself be differentially private. In the second part, we specialise our analysis to the unadjusted Langevin algorithm and stochastic gradient Langevin dynamics and establish guarantees on their (Rényi) DP. To this end, we develop a novel methodology based on Girsanov’s theorem combined with a perturbation trick to obtain bounds on an unbounded domain and in a non-convex setting. We establish: (i) uniform-in-n privacy guarantees when the state of the chain after n iterations is released, and (ii) bounds on the privacy of the entire chain trajectory. These findings provide concrete guidelines for privacy-preserving MCMC.
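To make the object of study concrete, the following is a minimal sketch of a stochastic gradient Langevin dynamics (SGLD) update, one of the two algorithms the paper analyses. This is a generic textbook form of the iteration, not the paper's construction: the Gaussian target, step size, and iteration count are illustrative assumptions, and no privacy accounting is performed here.

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    # One Langevin step: drift along the (stochastic) gradient of the
    # log-posterior, plus Gaussian noise scaled as sqrt(step_size).
    noise = rng.normal(size=theta.shape)
    return theta + 0.5 * step_size * grad_log_post(theta) + np.sqrt(step_size) * noise

# Illustrative target (assumption): standard 2-D Gaussian,
# whose gradient of the log-density is simply -theta.
rng = np.random.default_rng(0)
theta = np.zeros(2)
samples = []
for _ in range(5000):
    theta = sgld_step(theta, lambda t: -t, 0.1, rng)
    samples.append(theta.copy())
samples = np.array(samples)
```

Releasing `theta` after n iterations corresponds to setting (i) in the abstract, while releasing the whole `samples` array corresponds to the trajectory setting (ii); in both cases the privacy loss depends on how the gradient term touches the data.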

Original language: English
Pages (from-to): 3931-3951
Number of pages: 21
Journal: Proceedings of Machine Learning Research
Volume: 267
Publication status: Published - 1 Jan 2025
Event: 42nd International Conference on Machine Learning, ICML 2025 - Vancouver, Canada
Duration: 13 Jul 2025 - 19 Jul 2025
