Strongly representative semantic-guided segmentation network for the pancreas and pancreatic tumors
Luyang Cao, Jianwei Li
Accurate and reliable segmentation of the pancreas and its lesions on computed tomography (CT) images is crucial for preoperative diagnosis, surgical planning, and postoperative monitoring. However, few studies address simultaneous segmentation of the pancreas and pancreatic tumors, and existing methods neither fully exploit the feature potential of the original images nor explore strongly representative semantic information. To overcome these limitations, we propose the Strongly Representative Semantic-guided Segmentation Network (SRSNet). Specifically, we employ intermediate semantic information to generate strongly representative high-resolution pre-segmented images, effectively reducing channel redundancy across different resolutions. We use several mechanisms to extract distinct representative features; guided by these features, SRSNet supplements high-resolution detail for features at different resolutions, provides auxiliary features for the network's pixel-decision phase, and detects large-scale variations in the pancreas and pancreatic tumors. Additionally, we design a loss function that increases SRSNet's sensitivity to boundary pixels and attenuates the effect of class imbalance. We evaluate our method on the MSD Task07 Pancreas and NIH Pancreas datasets. For combined pancreas and tumor segmentation on the MSD dataset, we achieve Dice, Recall, Precision, and MIoU scores of 78.60%, 79.64%, 81.72%, and 71.47%, respectively. Extensive experiments demonstrate that our algorithm not only outperforms state-of-the-art algorithms for pancreas segmentation but also performs excellently on joint pancreas and pancreatic tumor segmentation.
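The abstract does not give the exact form of the proposed loss; as a minimal illustrative sketch, a boundary-sensitive, imbalance-attenuating objective is often built by combining a soft Dice term (which normalizes by foreground mass and so resists class imbalance) with a cross-entropy term reweighted near label boundaries. All function names and the boundary weight of 5x below are hypothetical choices for illustration, not the paper's actual loss:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: overlap is normalized by the total foreground
    # mass of both masks, which attenuates class imbalance.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def boundary_weights(target):
    # Weight map that up-weights pixels adjacent to a label change,
    # making the loss more sensitive to boundary pixels.
    gy = np.abs(np.diff(target, axis=0, prepend=target[:1]))
    gx = np.abs(np.diff(target, axis=1, prepend=target[:, :1]))
    edge = np.clip(gy + gx, 0.0, 1.0)
    return 1.0 + 4.0 * edge  # boundary pixels weighted 5x (hypothetical factor)

def combined_loss(pred, target, alpha=0.5):
    # Convex combination of soft Dice and boundary-weighted
    # binary cross-entropy; alpha trades the two terms off.
    w = boundary_weights(target)
    bce = -(target * np.log(pred + 1e-6)
            + (1.0 - target) * np.log(1.0 - pred + 1e-6))
    weighted_bce = np.sum(w * bce) / np.sum(w)
    return alpha * dice_loss(pred, target) + (1.0 - alpha) * weighted_bce
```

In practice such a loss is applied per class (pancreas and tumor) and averaged; the boundary map would typically come from a distance transform of the ground-truth mask rather than the simple finite-difference edge detector used here for brevity.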