TY - GEN
T1 - Mitigating Bias in Facial Recognition Systems
T2 - 27th International Conference on Pattern Recognition Workshops, ICPRW 2024
AU - Conti, Jean Rémy
AU - Clémençon, Stéphan
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025/1/1
Y1 - 2025/1/1
N2 - The urgent societal demand for fair AI systems has put pressure on the research community to develop predictive models that are not only globally accurate but also satisfy new fairness criteria, reflecting the absence of disparate mistreatment with respect to sensitive attributes (e.g. gender, ethnicity, age). In particular, the variability of the errors made by certain Facial Recognition (FR) systems across specific segments of the population compromises their deployment and has been judged unacceptable by regulatory authorities. Designing fair FR systems is a very challenging problem, mainly because of the complex and functional nature of the performance measure used in this domain (i.e. ROC curves) and the large heterogeneity of the face image datasets usually available for training. In this paper, we propose a novel post-processing approach that improves the fairness of pre-trained FR models by optimizing a regression loss acting on centroid-based scores. Beyond the computational advantages of the method, we present numerical experiments providing strong empirical evidence of the gain in fairness and of the ability to preserve global accuracy.
KW - Bias
KW - Face Recognition
KW - Fairness
UR - https://www.scopus.com/pages/publications/105005571278
U2 - 10.1007/978-3-031-87657-8_26
DO - 10.1007/978-3-031-87657-8_26
M3 - Conference contribution
AN - SCOPUS:105005571278
SN - 9783031876561
T3 - Lecture Notes in Computer Science
SP - 371
EP - 385
BT - Pattern Recognition. ICPR 2024 International Workshops and Challenges, 2024, Proceedings
A2 - Palaiahnakote, Shivakumara
A2 - Schuckers, Stephanie
A2 - Ogier, Jean-Marc
A2 - Bhattacharya, Prabir
A2 - Pal, Umapada
A2 - Bhattacharya, Saumik
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 1 December 2024 through 1 December 2024
ER -