
Leveraging Adversarial Examples to Quantify Membership Information Leakage

Research output: Conference contribution (Chapter in Book/Report/Conference proceeding), peer-reviewed

Abstract

The use of personal data for training machine learning systems poses a privacy threat, and measuring a model's level of privacy is one of the major challenges in machine learning today. Identifying training data from a trained model is a standard way of measuring the privacy risks the model induces. We develop a novel approach to membership inference in pattern recognition models that relies on information provided by adversarial examples. The strategy we propose measures the magnitude of the perturbation necessary to build an adversarial example; we argue that this quantity reflects the likelihood of belonging to the training data. Extensive numerical experiments on multivariate data and an array of state-of-the-art target models show that our method performs comparably to, or even outperforms, state-of-the-art strategies, without requiring any additional training samples.
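The core idea of the abstract can be illustrated with a toy sketch. The code below is not the authors' method or code: it uses a simple linear classifier as a stand-in target model, where the minimal L2 perturbation needed to flip the prediction has the closed form |w·x + b| / ‖w‖ (the distance to the decision boundary). The thresholded membership rule and all names here are illustrative assumptions.

```python
import numpy as np

def min_adv_perturbation(w, b, x):
    """Closed-form minimal L2 perturbation that flips a linear
    classifier's prediction: the distance from x to the boundary."""
    return abs(w @ x + b) / np.linalg.norm(w)

def membership_score(w, b, x, threshold):
    """Illustrative membership test: training points tend to sit
    farther from the decision boundary, so a large required
    perturbation is taken as evidence of membership."""
    return min_adv_perturbation(w, b, x) > threshold

# Toy demo with a hand-picked linear model (hypothetical values).
w, b = np.array([1.0, -2.0]), 0.5
x_far = np.array([4.0, -3.0])   # far from the boundary
x_near = np.array([0.1, 0.3])   # sits on the boundary
```

For a deep network the minimal perturbation has no closed form, so in practice it would be estimated with an adversarial attack (e.g. an iterative gradient-based one); the thresholding logic stays the same.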

Original language: English
Title of host publication: Proceedings - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022
Publisher: IEEE Computer Society
Pages: 10389-10399
Number of pages: 11
ISBN (Electronic): 9781665469463
DOIs
Publication status: Published - 1 Jan 2022
Event: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022 - New Orleans, United States
Duration: 19 Jun 2022 - 24 Jun 2022

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume: 2022-June
ISSN (Print): 1063-6919

Conference

Conference: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022
Country/Territory: United States
City: New Orleans
Period: 19/06/22 - 24/06/22

Keywords

  • Adversarial attack and defense
  • Machine learning
  • Transparency, accountability, fairness, privacy and ethics in vision

