TY - JOUR
T1 - Fairness, Debiasing and Privacy in Computer Vision and Medical Imaging
AU - Barbano, Carlo Alberto
AU - Duchesnay, Edouard
AU - Dufumier, Benoit
AU - Gori, Pietro
AU - Grangetto, Marco
N1 - Publisher Copyright:
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
PY - 2023/1/1
Y1 - 2023/1/1
N2 - Deep Learning (DL) has become one of the predominant tools for solving a wide variety of problems, often with superior performance compared to previous state-of-the-art methods. DL models are often able to learn meaningful and abstract representations of the underlying data; however, they have also been shown to learn additional features that are not necessarily relevant or required for the desired task. This can pose a number of issues, as these additional features may encode biased, sensitive, or private information (e.g., gender, race, or age) that the model should not take into account. We refer to this information as collateral. The presence of collateral information translates into practical issues when deploying DL models, especially when they involve users' data. Learning robust representations that are free of biased, private, and collateral information is highly relevant for a variety of fields and applications, such as medical applications and decision support systems. In this work, we present our group's activities aimed at devising methods to ensure that the representations learned by DL models are robust to collateral features and biases, and privacy-preserving with respect to sensitive information.
AB - Deep Learning (DL) has become one of the predominant tools for solving a wide variety of problems, often with superior performance compared to previous state-of-the-art methods. DL models are often able to learn meaningful and abstract representations of the underlying data; however, they have also been shown to learn additional features that are not necessarily relevant or required for the desired task. This can pose a number of issues, as these additional features may encode biased, sensitive, or private information (e.g., gender, race, or age) that the model should not take into account. We refer to this information as collateral. The presence of collateral information translates into practical issues when deploying DL models, especially when they involve users' data. Learning robust representations that are free of biased, private, and collateral information is highly relevant for a variety of fields and applications, such as medical applications and decision support systems. In this work, we present our group's activities aimed at devising methods to ensure that the representations learned by DL models are robust to collateral features and biases, and privacy-preserving with respect to sensitive information.
KW - Debiasing
KW - Deep Learning
KW - Fairness
KW - Privacy
KW - Representation Learning
M3 - Conference article
AN - SCOPUS:85173889488
SN - 1613-0073
VL - 3486
SP - 318
EP - 323
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 2023 Italia Intelligenza Artificiale - Thematic Workshops, Ital-IA 2023
Y2 - 29 May 2023 through 30 May 2023
ER -