MSKD: Structured knowledge distillation for efficient medical image segmentation
Libo Zhao, Xiaolong Qian, Yinghui Guo, Jiaqi Song, Jinbao Hou, Jun Gong
Comput Biol Med. 2023 Aug 2;164:107284. doi: 10.1016/j.compbiomed.2023.107284. Online ahead of print.
In recent years, deep learning has revolutionized medical image segmentation through increasingly powerful deep neural networks. However, these models tend to be complex and computationally demanding, which hinders practical deployment in clinical settings. To address this issue, we propose an efficient structured knowledge distillation framework in which a powerful teacher network assists the training of a lightweight student network. Specifically, we propose the Feature Filtering Distillation method, which transfers region-level semantic information while minimizing the transmission of redundant information from the teacher to the student network. This approach mitigates the inaccurate segmentation caused by the similar appearance of internal organs. Additionally, we propose the Region Graph Distillation method, which exploits the higher-order representational capacity of graphs so that the student network can better imitate structured semantic information from the teacher. To validate the proposed methods, we conducted experiments on the Synapse multi-organ segmentation and KiTS kidney tumor segmentation datasets with various network models. The results show that our method significantly improves the segmentation performance of lightweight neural networks, raising the Dice coefficient by up to 18.56%, and it achieves these gains without introducing additional model parameters. Overall, the proposed knowledge distillation methods offer a promising route to efficient medical image segmentation, helping medical experts make more accurate diagnoses and improve patient treatment.
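To make the idea of "transferring region-level semantic information" concrete, the sketch below shows a generic region-level distillation loss: feature maps from teacher and student are pooled into one prototype vector per labeled region (e.g. per organ class), and the student is penalized for the gap between its prototypes and the teacher's. This is an illustrative toy in NumPy under assumed shapes, not the paper's exact Feature Filtering or Region Graph formulation; the function name and interface are hypothetical.

```python
import numpy as np

def region_level_distillation_loss(student_feat, teacher_feat, label_map, num_classes):
    """Toy region-level distillation loss (illustrative, not the paper's method).

    student_feat, teacher_feat: float arrays of shape (C, H, W), aligned feature maps.
    label_map: int array of shape (H, W) assigning each pixel a region/class id.
    Returns the mean squared gap between per-region prototype vectors.
    """
    loss, regions = 0.0, 0
    for k in range(num_classes):
        mask = label_map == k
        if not mask.any():
            continue  # skip classes absent from this image to avoid empty-region averages
        s_proto = student_feat[:, mask].mean(axis=1)  # (C,) student region prototype
        t_proto = teacher_feat[:, mask].mean(axis=1)  # (C,) teacher region prototype
        loss += float(((s_proto - t_proto) ** 2).mean())
        regions += 1
    return loss / max(regions, 1)
```

Pooling over regions before comparing features is what makes the transfer "structured": the student matches the teacher's per-organ semantics rather than every pixel, which filters out much of the redundant pixel-level signal.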