TY - GEN
T1 - Knowledge distillation from multi-modal to mono-modal segmentation networks
AU - Hu, Minhao
AU - Maillard, Matthis
AU - Zhang, Ya
AU - Ciceri, Tommaso
AU - La Barbera, Giammarco
AU - Bloch, Isabelle
AU - Gori, Pietro
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020/1/1
Y1 - 2020/1/1
N2 - The joint use of multiple imaging modalities for medical image segmentation has been widely studied in recent years. Fusing information from different modalities has been shown to improve segmentation accuracy, with respect to mono-modal segmentation, in several applications. However, acquiring multiple modalities is usually not possible in a clinical setting, owing to the limited availability of physicians and scanners and to constraints on cost and scan time. Most of the time, only one modality is acquired. In this paper, we propose KD-Net, a framework to transfer knowledge from a trained multi-modal network (teacher) to a mono-modal one (student). The proposed method is an adaptation of the generalized distillation framework, in which the student network is trained on a subset (one modality) of the teacher’s inputs (n modalities). We illustrate the effectiveness of the proposed framework in brain tumor segmentation with the BraTS 2018 dataset. Using different architectures, we show that the student network effectively learns from the teacher and always outperforms the baseline mono-modal network in terms of segmentation accuracy.
AB - The joint use of multiple imaging modalities for medical image segmentation has been widely studied in recent years. Fusing information from different modalities has been shown to improve segmentation accuracy, with respect to mono-modal segmentation, in several applications. However, acquiring multiple modalities is usually not possible in a clinical setting, owing to the limited availability of physicians and scanners and to constraints on cost and scan time. Most of the time, only one modality is acquired. In this paper, we propose KD-Net, a framework to transfer knowledge from a trained multi-modal network (teacher) to a mono-modal one (student). The proposed method is an adaptation of the generalized distillation framework, in which the student network is trained on a subset (one modality) of the teacher’s inputs (n modalities). We illustrate the effectiveness of the proposed framework in brain tumor segmentation with the BraTS 2018 dataset. Using different architectures, we show that the student network effectively learns from the teacher and always outperforms the baseline mono-modal network in terms of segmentation accuracy.
U2 - 10.1007/978-3-030-59710-8_75
DO - 10.1007/978-3-030-59710-8_75
M3 - Conference contribution
AN - SCOPUS:85093101983
SN - 9783030597092
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 772
EP - 781
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 - 23rd International Conference, Proceedings
A2 - Martel, Anne L.
A2 - Abolmaesumi, Purang
A2 - Stoyanov, Danail
A2 - Mateus, Diana
A2 - Zuluaga, Maria A.
A2 - Zhou, S. Kevin
A2 - Racoceanu, Daniel
A2 - Joskowicz, Leo
PB - Springer Science and Business Media Deutschland GmbH
T2 - 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020
Y2 - 4 October 2020 through 8 October 2020
ER -