
Fair Play for Individuals, Foul Play for Groups? Auditing Anonymization's Impact on ML Fairness

Research output: Chapter in a book, report, anthology or collection › Conference contribution › Peer-reviewed

Abstract

Machine learning (ML) algorithms rely heavily on the availability of training data, which, depending on the domain, often includes sensitive information about data providers. This raises critical privacy concerns. Anonymization techniques have emerged as a practical solution to address these issues by generalizing features or suppressing data to make it more difficult to accurately identify individuals. Although recent studies have shown that privacy-enhancing technologies can influence ML predictions across different subgroups, thus affecting fair decision-making, the specific effects of anonymization techniques, such as k-anonymity, l-diversity, and t-closeness, on ML fairness remain largely unexplored. In this work, we systematically audit the impact of anonymization techniques on ML fairness, evaluating both individual and group fairness. Our quantitative study reveals that anonymization can degrade group fairness metrics by up to fourfold. Conversely, similarity-based individual fairness metrics tend to improve under stronger anonymization, largely as a result of increased input homogeneity. By analyzing varying levels of anonymization across diverse privacy settings and data distributions, this study provides critical insights into the trade-offs between privacy, fairness, and utility, offering actionable guidelines for responsible AI development. Our code is publicly available at: https://github.com/hharcolezi/anonymity-impact-fairness.
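The two fairness notions contrasted in the abstract can be made concrete with a minimal sketch (illustrative only; the metric choices and variable names are assumptions, not the paper's implementation): statistical parity difference as a group fairness metric, and a nearest-neighbour consistency score as a similarity-based individual fairness metric.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Group fairness: gap in positive-prediction rates between two subgroups
    (0 = parity; larger values = more group-level disparity)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def consistency(X, y_pred, k=3):
    """Similarity-based individual fairness: average agreement between each
    point's prediction and those of its k nearest neighbours (1 = every
    similar individual is treated identically)."""
    n = len(X)
    agree = 0.0
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]  # skip the point itself
        agree += np.mean(y_pred[neighbours] == y_pred[i])
    return agree / n

# Toy example: predictions that favour subgroup 0 over subgroup 1.
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X = np.random.default_rng(0).normal(size=(8, 2))

print(statistical_parity_difference(y_pred, group))  # 0.5
print(consistency(X, y_pred))
```

Under this framing, the paper's finding reads as: anonymization widens the first quantity (group disparity) while raising the second (individual consistency), because generalized records become more homogeneous inputs.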

Original language: English
Title: ECAI 2025 - 28th European Conference on Artificial Intelligence, including 14th Conference on Prestigious Applications of Intelligent Systems, PAIS 2025 - Proceedings
Editors: Ines Lynce, Nello Murano, Mauro Vallati, Serena Villata, Federico Chesani, Michela Milano, Andrea Omicini, Mehdi Dastani
Publisher: IOS Press BV
Pages: 1009-1018
Number of pages: 10
ISBN (electronic): 9781643686318
DOIs
Status: Published - 21 Oct 2025
Event: 28th European Conference on Artificial Intelligence, ECAI 2025, including 14th Conference on Prestigious Applications of Intelligent Systems, PAIS 2025 - Bologna, Italy
Duration: 25 Oct 2025 → 30 Oct 2025

Publication series

Name: Frontiers in Artificial Intelligence and Applications
Volume: 413
ISSN (print): 0922-6389
ISSN (electronic): 1879-8314

Conference

Conference: 28th European Conference on Artificial Intelligence, ECAI 2025, including 14th Conference on Prestigious Applications of Intelligent Systems, PAIS 2025
Country/Territory: Italy
City: Bologna
Period: 25/10/25 → 30/10/25
