Fairness, Debiasing and Privacy in Computer Vision and Medical Imaging

  • Carlo Alberto Barbano
  • Edouard Duchesnay
  • Benoit Dufumier
  • Pietro Gori
  • Marco Grangetto

Research output: Contribution to journal › Conference article › peer-review

Abstract

Deep Learning (DL) has become one of the predominant tools for solving a wide variety of problems, often with superior performance compared to previous state-of-the-art methods. DL models are often able to learn meaningful and abstract representations of the underlying data; however, they have also been shown to learn additional features that are not necessarily relevant or required for the desired task. This can pose a number of issues, as these additional features may encode biased, sensitive, or private information (e.g. gender, race, or age) that the model should not take into account. We refer to this information as collateral. The presence of collateral information translates into practical issues when deploying DL models, especially when they involve users' data. Learning robust representations that are free of biased, private, and collateral information is relevant for a variety of fields and applications, such as medical applications and decision support systems. In this work we present our group's activities aimed at devising methods to ensure that the representations learned by DL models are robust to collateral features and biases, and privacy-preserving with respect to sensitive information.
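To make the notion of collateral information concrete, group fairness metrics are commonly used to check whether a model's predictions depend on a sensitive attribute. As an illustrative sketch (not a method from this paper), the demographic parity difference below compares the positive-prediction rate across the two groups defined by a binary sensitive attribute; the predictions and labels are hypothetical:

```python
def demographic_parity_difference(preds, sensitive):
    """Absolute difference in positive-prediction rate between the two
    groups defined by a binary sensitive attribute (0/1).
    A value near 0 suggests predictions do not depend on the attribute."""
    rate = {}
    for g in (0, 1):
        group = [p for p, s in zip(preds, sensitive) if s == g]
        rate[g] = sum(group) / len(group)
    return abs(rate[0] - rate[1])

# Hypothetical binary predictions and sensitive-attribute labels
preds     = [1, 0, 1, 1, 0, 1, 0, 0]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, sensitive))  # 0.5
```

Here group 0 receives a positive prediction 75% of the time versus 25% for group 1, so the gap of 0.5 signals that the predictions correlate strongly with the sensitive attribute, exactly the kind of dependence the abstract's debiasing methods aim to remove from the learned representation.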

Original language: English
Pages (from-to): 318-323
Number of pages: 6
Journal: CEUR Workshop Proceedings
Volume: 3486
Publication status: Published - 1 Jan 2023
Event: 2023 Italia Intelligenza Artificiale - Thematic Workshops, Ital-IA 2023 - Pisa, Italy
Duration: 29 May 2023 - 30 May 2023

Keywords

  • Debiasing
  • Deep Learning
  • Fairness
  • Privacy
  • Representation Learning

