Explainability and vision foundation models: A survey

Research output: Contribution to journal › Article › peer-review

Abstract

As artificial intelligence systems become increasingly integrated into daily life, the field of explainability has gained significant attention. This trend is particularly driven by the complexity of modern AI models and their decision-making processes. The advent of foundation models, characterized by their extensive generalization capabilities and emergent uses, has further complicated this landscape. Foundation models occupy an ambiguous position in the explainability domain: their complexity makes them inherently challenging to interpret, yet they are increasingly leveraged as tools to construct explainable models. In this survey, we explore the intersection of foundation models and eXplainable AI (XAI) in the vision domain. We begin by compiling a comprehensive corpus of papers that bridge these fields. Next, we categorize these works based on their architectural characteristics. We then discuss the challenges faced by current research in integrating XAI within foundation models. Furthermore, we review common evaluation methodologies for these combined approaches. Finally, we present key observations and insights from our survey, offering directions for future research in this rapidly evolving field.

Original language: English
Article number: 103184
Journal: Information Fusion
Volume: 122
Publication status: Published - 1 Oct 2025

Keywords

  • Explainability
  • Foundation models
  • Interpretability
  • Survey
  • Vision
  • XAI

