Xianpeng Cao, Junfeng Yao, Qingqi Hong & Rongzhou Zhou
Recently, pioneering work has improved segmentation performance by combining the self-attention (SA) mechanism with UNet. However, SA only models dependencies within a single sample, overlooking potential correlations across the whole dataset. Moreover, medical image datasets are typically small, so extracting as many informative features as possible from limited data is crucial. To address these problems, we propose the Multiple External Attention (MEA) module, which characterizes the dataset as a whole by mining correlations between different samples via external attention. Furthermore, our method applies the Squeeze-and-Excitation (SE) module, for the first time, to low-level feature extraction in medical images. Combining MEA and SE, we construct MEA-TransUNet for accurate medical image segmentation. We evaluate our method on two datasets, and the experimental results demonstrate its superior performance over existing methods. Code and pre-trained models will be released soon.
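To make the contrast with self-attention concrete, the sketch below shows a single-head external attention step in NumPy, following the standard formulation (learnable external key/value memories `Mk`, `Mv` shared across all samples, with double normalization). The shapes, memory size `S`, and the plain single-head form are illustrative assumptions; the paper's actual MEA module is a multi-head variant whose exact design is not specified in this abstract.

```python
import numpy as np

def softmax(x, axis):
    # numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def external_attention(x, Mk, Mv):
    """Single-head external attention (illustrative sketch).
    Unlike self-attention, the key/value memories Mk and Mv are
    learned parameters shared by every sample, so they can capture
    dataset-level correlations rather than per-sample ones."""
    attn = x @ Mk.T                  # (N, S): similarity of N tokens to S external keys
    attn = softmax(attn, axis=0)     # double-normalization step 1: softmax over tokens
    attn = attn / attn.sum(axis=1, keepdims=True)  # step 2: l1-normalize over memory slots
    return attn @ Mv                 # (N, d): aggregate the external values

rng = np.random.default_rng(0)
N, d, S = 16, 32, 8                  # tokens, feature dim, memory size (all assumed)
x = rng.standard_normal((N, d))
Mk = rng.standard_normal((S, d))
Mv = rng.standard_normal((S, d))
out = external_attention(x, Mk, Mv)
print(out.shape)  # (16, 32)
```

Because `Mk` and `Mv` are fixed-size parameters independent of the input, the cost is linear in the number of tokens, and the memories are updated by gradients from the whole training set, which is how external attention implicitly links different samples.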