Abstract
Foundation Models (FMs) have been successful in various computer vision tasks like image classification, object detection, and image segmentation. However, these tasks remain challenging when such models are tested on datasets with distributions different from the training data, a problem known as domain shift. This is especially problematic for recognizing animal species in camera-trap images, which exhibit high variability in factors like lighting, camouflage, and occlusion. In this paper, we propose the Camera Trap Language-guided Contrastive Learning (CATALOG) model to address these issues. Our approach combines multiple FMs to extract visual and textual features from camera-trap data and uses a contrastive loss function to train the model. We evaluate CATALOG on two benchmark datasets and show that it outperforms previous state-of-the-art methods in camera-trap image recognition, especially when the training and testing data contain different animal species or come from different geographical areas. Our approach demonstrates the potential of combining FMs with multi-modal fusion and contrastive learning to address domain shift in camera-trap image recognition. The code of CATALOG is publicly available at https://github.com/Julian075/CATALOG.
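The abstract does not specify CATALOG's exact objective, but the described training scheme, pulling together visual and textual features of matched camera-trap examples with a contrastive loss, can be sketched with a standard CLIP-style symmetric InfoNCE loss. This is an illustrative sketch under that assumption (cosine-similarity logits with temperature scaling), not the paper's actual implementation:

```python
import numpy as np

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss over a batch.

    Row i of `image_emb` and row i of `text_emb` are treated as a
    matched (positive) pair; all other pairings are negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise similarity matrix, scaled by the temperature.
    logits = image_emb @ text_emb.T / temperature
    n = logits.shape[0]

    def cross_entropy_diag(l):
        # Cross-entropy with the diagonal (matched pairs) as targets.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

A well-aligned batch (each image embedding close to its own text embedding) should yield a lower loss than a batch whose pairings are shuffled, which is the signal that drives the embeddings of the two modalities together during training.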
| Original language | English |
|---|---|
| Title of host publication | Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 1197-1206 |
| Number of pages | 10 |
| ISBN (Electronic) | 9798331510831 |
| DOIs | |
| Publication status | Published - 1 Jan 2025 |
| Event | 2025 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025 - Tucson, United States Duration: 28 Feb 2025 → 4 Mar 2025 |
Publication series
| Name | Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025 |
|---|
Conference
| Conference | 2025 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025 |
|---|---|
| Country/Territory | United States |
| City | Tucson |
| Period | 28/02/25 → 4/03/25 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 3 Good Health and Well-being
Keywords
- camera trap
- computer vision
- contrastive learning
- foundation models