Information Removal at the bottleneck in Deep Neural Networks

Research output: Contribution to conference › Paper › peer-review

Abstract

Deep learning models are nowadays broadly deployed to solve an incredibly large variety of tasks. Commonly, leveraging the availability of “big data”, deep neural networks are trained as black boxes by minimizing an objective function at the output. This, however, offers no control over the propagation through the model of specific features, such as gender or race, that are unrelated to the task being solved. This raises issues of privacy (considering the propagation of unwanted information) or bias (considering that these features are potentially used to solve the given task). In these contexts, a strategy specifically designed to remove part of the information from these models is critical. In this work, we propose IRENE, a method that achieves information removal at the bottleneck of deep neural networks by explicitly minimizing the estimated mutual information between the features to be kept “private” and the target. Experiments on a synthetic dataset and on CelebA validate the effectiveness of the proposed approach and open the road toward approaches guaranteeing information removal in deep neural networks.
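The abstract does not specify which mutual-information estimator IRENE uses, but the core idea — penalizing the estimated mutual information between a bottleneck feature and a "private" attribute, alongside the task loss — can be illustrated with a simple histogram (plug-in) MI estimate on toy data. The sketch below is a generic illustration under that assumption, not the paper's actual method; the function and variable names are hypothetical:

```python
import numpy as np

def mutual_information(z, s, bins=10):
    """Plug-in (histogram) estimate of I(Z; S) in nats, for a 1-D
    bottleneck feature z and a discrete private attribute s."""
    joint, _, _ = np.histogram2d(z, s, bins=[bins, len(np.unique(s))])
    p = joint / joint.sum()                    # joint distribution p(z, s)
    pz = p.sum(axis=1, keepdims=True)          # marginal p(z)
    ps = p.sum(axis=0, keepdims=True)          # marginal p(s)
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / (pz @ ps)[mask])).sum())

rng = np.random.default_rng(0)
s = rng.integers(0, 2, 5000)                   # "private" binary attribute
z_leaky = s + 0.5 * rng.normal(size=5000)      # bottleneck feature that leaks s
z_clean = rng.normal(size=5000)                # feature independent of s

mi_leaky = mutual_information(z_leaky, s)      # large: z_leaky encodes s
mi_clean = mutual_information(z_clean, s)      # near zero

# A combined training objective in the spirit of the abstract would be
# total_loss = task_loss + lam * mi_estimate, pushing features toward z_clean.
```

In a real training loop the MI term would be estimated differentiably (e.g. with a neural estimator or an adversarial predictor) so it can be backpropagated through the encoder; the histogram estimate above only serves to show what the penalty measures.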

Original language: English
Publication status: Published - 1 Jan 2022
Event: 33rd British Machine Vision Conference Proceedings, BMVC 2022 - London, United Kingdom
Duration: 21 Nov 2022 – 24 Nov 2022

Conference

Conference: 33rd British Machine Vision Conference Proceedings, BMVC 2022
Country/Territory: United Kingdom
City: London
Period: 21/11/22 – 24/11/22
