Foundation models and Transformers for anomaly detection: A survey

Mouin Ben Ammar, Arturo Mendoza, Nacim Belkhir, Antoine Manzanera, Gianni Franchi

Research output: Contribution to journal › Article › peer-review

Abstract

Building on recent advances in deep learning, this survey examines the transformative role of Transformers and foundation models in advancing visual anomaly detection (VAD). We explore how these architectures, with their global receptive fields and adaptability, address challenges such as long-range dependency modeling, contextual understanding, and data scarcity. The survey categorizes VAD methods into reconstruction-based, feature-based, and zero-/few-shot approaches, highlighting the paradigm shift brought about by foundation models. By integrating attention mechanisms and leveraging large-scale pre-training, Transformers and foundation models enable more robust, interpretable, and scalable anomaly detection solutions. This work provides a comprehensive review of state-of-the-art techniques, their strengths and limitations, and emerging trends in leveraging these architectures for VAD.

Original language: English
Article number: 103517
Journal: Information Fusion
Volume: 126
DOIs
Publication status: Published - 1 Feb 2026
Externally published: Yes

Keywords

  • Anomaly detection
  • Computer vision
  • Deep learning
  • Foundation models
  • Self-supervised learning
  • Survey
  • Transformers
  • Unsupervised learning

