Abstract
This paper addresses explainable AI (XAI) through the lens of Concept Bottleneck Models (CBMs) that do not require explicit concept annotations, relying instead on concepts extracted with CLIP in a zero-shot manner. We show that CLIP, which is central to these techniques, is prone to concept hallucination—incorrectly predicting the presence or absence of concepts within an image under the conditions in which numerous CBMs employ it, thereby undermining the faithfulness of their explanations. To mitigate this issue, we introduce Concept Hallucination Inhibition via Localized Interpretability (CHILI), a technique that disentangles image embeddings. Furthermore, our approach enables saliency-based explanations that are more interpretable.
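To make the zero-shot concept extraction concrete, the sketch below shows how annotation-free CBMs typically obtain concept scores from CLIP: each concept is rendered as a text prompt, and the cosine similarity between the image embedding and each prompt embedding serves as the bottleneck activation. This is a minimal illustration of the general recipe, not the paper's exact pipeline; the concept list, prompt template, and image path are placeholders.

```python
# Minimal sketch of zero-shot concept scoring with CLIP, as typically used by
# annotation-free CBMs. Concept vocabulary and image path are illustrative only.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical concept vocabulary; real CBMs mine hundreds of such concepts.
concepts = ["striped fur", "long beak", "metal wheels", "green leaves"]
prompts = clip.tokenize([f"a photo of {c}" for c in concepts]).to(device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(prompts)
    # Cosine similarity between the image embedding and each concept prompt.
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    concept_scores = (img_emb @ txt_emb.T).squeeze(0)  # shape: (num_concepts,)

# These per-concept scores feed the interpretable predictor of the CBM.
# Concept hallucination corresponds to a high score for a concept that is
# not actually present in the image.
for c, s in zip(concepts, concept_scores.tolist()):
    print(f"{c}: {s:.3f}")
```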
| Original language | English |
|---|---|
| Journal | Transactions on Machine Learning Research |
| Volume | 2026-January |
| Publication status | Published - 1 Jan 2026 |