Auto-segmentation of pancreatic tumors in multi-modal images using a transferred DSMask R-CNN network
Yao Yao, Yang Chen, Shuiping Gou, Shuzhe Chen, Xiangrong Zhang, Nuo Tong
Pancreatic tumor segmentation for adaptive radiation therapy planning is a difficult task due to the tumor's highly variable shape, small size, and hidden anatomical position. To address the problems of limited labeled data, intra-class inconsistency, and inter-class indistinction in pancreatic tumor segmentation, a transferred DenseSE-Mask R-CNN (TDSMask R-CNN) segmentation model with embedded Dense and Squeeze-and-Excitation (SE) blocks is proposed in this paper. A multi-scale feature strategy is adopted to handle the high variability of the pancreas and its tumors. Through an attention mechanism, the proposed network learns complementary information from the different modality (PET/MR) images to locate pancreatic tumor regions in each domain; as a result, information irrelevant to tumor segmentation is suppressed and false positives are reduced. Furthermore, accurate tumor locations from the PET images are transferred to the MRI training model to guide the learning of the Dense-SE network, which alleviates the shortage of labeled samples and reduces overfitting. Experimental results show that the proposed method achieves an average Dice Similarity Coefficient (DSC) of 78.33%, sensitivity (SEN) of 78.56%, and specificity (SPE) of 99.72% on the collected PET/MR data set, outperforming existing methods reported in the literature. The algorithm can thus improve the accuracy of pancreatic tumor segmentation.
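The reported metrics (DSC, SEN, SPE) follow their standard definitions over binary segmentation masks. As a minimal illustrative sketch (the function name and NumPy-based implementation are assumptions, not the authors' code), they can be computed from the confusion-matrix counts as follows:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute Dice Similarity Coefficient (DSC), sensitivity (SEN),
    and specificity (SPE) for a pair of binary segmentation masks.
    `pred` is the predicted mask, `gt` the ground-truth mask
    (illustrative helper, not from the paper)."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    # Confusion-matrix counts over all voxels/pixels
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    dsc = 2 * tp / (2 * tp + fp + fn)  # overlap between pred and gt
    sen = tp / (tp + fn)               # true positive rate
    spe = tn / (tn + fp)               # true negative rate
    return dsc, sen, spe
```

The very high specificity reported in the abstract is typical for small lesions, since the tumor occupies only a tiny fraction of the image and true negatives dominate.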