Improving Interpretability for Computer-Aided Diagnosis Tools on Whole Slide Imaging with Multiple Instance Learning and Gradient-Based Explanations

  • Antoine Pirovano
  • Hippolyte Heuberger
  • Sylvain Berlemont
  • Saïd Ladjal
  • Isabelle Bloch

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Deep learning methods are widely used in medical applications to assist medical doctors in their daily routines. While their performance reaches expert level, interpretability (highlighting how and what a trained model has learned, and why it makes a specific decision) is the next important challenge that deep learning methods must address to be fully integrated into the medical field. In this paper, we address the question of interpretability in the context of whole slide image (WSI) classification. We formalize the design of WSI classification architectures and propose a piece-wise interpretability approach relying on gradient-based methods, feature visualization, and the multiple instance learning context. We aim to explain how the decision is made from tile-level scores, how these tile scores are computed, and which features are used and relevant for the task. After training two WSI classification architectures on the Camelyon-16 WSI dataset, highlighting the discriminative features learned, and validating our approach with pathologists, we propose a novel way of computing interpretability slide-level heat-maps, based on the extracted features, that improves tile-level classification performance by more than 29% in tile-level AUC.
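The pipeline sketched in the abstract (tile-level scoring aggregated into a slide-level decision, then explained with gradients) can be illustrated with a minimal toy example. This is a hypothetical sketch, not the authors' architecture: a linear tile scorer with max-pooling MIL aggregation, where tile relevance is taken as the gradient of the slide score with respect to each tile's feature vector (computed analytically here, since the scorer is linear).

```python
import numpy as np

# Hypothetical toy setup (not the paper's model): each WSI is a "bag" of
# tile feature vectors; a linear scorer gives one score per tile, and the
# slide score is the max tile score (max-pooling MIL aggregation).
rng = np.random.default_rng(0)
n_tiles, feat_dim = 8, 16
tiles = rng.normal(size=(n_tiles, feat_dim))   # toy tile features
w = rng.normal(size=feat_dim)                  # toy tile-scoring weights

tile_scores = tiles @ w                        # one score per tile
slide_score = tile_scores.max()                # slide-level decision
top = tile_scores.argmax()

# Gradient-based tile relevance: d(slide_score)/d(tile features).
# With max-pooling, the gradient is zero for every tile except the max
# tile, where slide_score = tiles[top] @ w, so the gradient is w.
grads = np.zeros_like(tiles)
grads[top] = w
relevance = np.abs(grads).sum(axis=1)          # one relevance value per tile
print(relevance.shape)                         # (8,)
```

Note how max-pooling concentrates all gradient on the single highest-scoring tile; richer aggregations and feature-based heat-maps, such as those the paper proposes, spread the explanation over more of the slide.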

Original language: English
Title of host publication: Interpretable and Annotation-Efficient Learning for Medical Image Computing - 3rd International Workshop, iMIMIC 2020, 2nd International Workshop, MIL3iD 2020, and 5th International Workshop, LABELS 2020, Held in Conjunction with MICCAI 2020, Proceedings
Editors: Jaime Cardoso, Wilson Silva, Ricardo Cruz, Hien Van Nguyen, Badri Roysam, Nicholas Heller, Pedro Henriques Abreu, Jose Pereira Amorim, Ivana Isgum, Vishal Patel, Kevin Zhou, Steve Jiang, Ngan Le, Khoa Luu, Raphael Sznitman, Veronika Cheplygina, Samaneh Abbasi, Diana Mateus, Emanuele Trucco
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 43-53
Number of pages: 11
ISBN (Print): 9783030611651
DOIs
Publication status: Published - 1 Jan 2020
Event: 3rd International Workshop on Interpretability of Machine Intelligence in Medical Image Computing, iMIMIC 2020, the 2nd International Workshop on Medical Image Learning with Less Labels and Imperfect Data, MIL3ID 2020, and the 5th International Workshop on Large-scale Annotation of Biomedical data and Expert Label Synthesis, LABELS 2020, held in conjunction with the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2020 - Lima, Peru
Duration: 4 Oct 2020 - 8 Oct 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12446 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 3rd International Workshop on Interpretability of Machine Intelligence in Medical Image Computing, iMIMIC 2020, the 2nd International Workshop on Medical Image Learning with Less Labels and Imperfect Data, MIL3ID 2020, and the 5th International Workshop on Large-scale Annotation of Biomedical data and Expert Label Synthesis, LABELS 2020, held in conjunction with the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2020
Country/Territory: Peru
City: Lima
Period: 4/10/20 - 8/10/20

Keywords

  • Explainability
  • Heat-maps
  • Histopathology
  • Interpretability
  • WSI classification
