
Enhancing Concept Localization in CLIP-based Concept Bottleneck Models

Research output: Contribution to journal › Article › Peer-reviewed

Abstract

This paper addresses explainable AI (XAI) through the lens of Concept Bottleneck Models (CBMs) that do not require explicit concept annotations, relying instead on concepts extracted with CLIP in a zero-shot manner. We show that CLIP, which is central to these techniques, is prone to concept hallucination—incorrectly predicting the presence or absence of concepts within an image in scenarios used by numerous CBMs—thereby undermining the faithfulness of explanations. To mitigate this issue, we introduce Concept Hallucination Inhibition via Localized Interpretability (CHILI), a technique that disentangles image embeddings. Furthermore, our approach supports the generation of saliency-based explanations that are more interpretable.
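The zero-shot concept extraction the abstract refers to is commonly implemented by scoring an image embedding against text embeddings of concept prompts via cosine similarity. The sketch below illustrates that scoring step only; it is not the paper's CHILI method, and the function name, dimensions, and random stand-in embeddings are illustrative assumptions (in practice the vectors come from CLIP's image and text encoders).

```python
import numpy as np

def concept_scores(image_emb: np.ndarray, concept_embs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one image embedding and a bank of
    concept text embeddings (one concept per row)."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = concept_embs / np.linalg.norm(concept_embs, axis=1, keepdims=True)
    return txt @ img  # one score per concept, in [-1, 1]

# Toy stand-ins for CLIP embeddings; real ones would be produced by
# encoding the image and concept prompts such as "a photo of stripes".
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
concept_embs = rng.normal(size=(4, 512))  # e.g. 4 candidate concepts

scores = concept_scores(image_emb, concept_embs)
print(scores.shape)  # → (4,)
```

A CBM then feeds such per-concept scores into an interpretable predictor; the paper's point is that these raw CLIP scores can hallucinate concepts, which CHILI aims to inhibit.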

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2026-January
Status: Published - 1 Jan 2026

