Towards AI-Driven Radiology Education: A Self-supervised Segmentation-Based Framework for High-Precision Medical Image Editing
Kazuma Kobayashi, Lin Gu, Ryuichiro Hataya, Mototaka Miyake, Yasuyuki Takamizawa, Sono Ito, Hirokazu Watanabe, Yukihiro Yoshida, Hiroki Yoshimura, Tatsuya Harada & Ryuji Hamamoto
Medical education is essential for providing the best patient care, but creating educational materials from real-world data poses many challenges. For example, the diagnosis and treatment of a disease can hinge on small but significant differences in medical images; however, collecting images that highlight such differences is often costly. Therefore, medical image editing, which allows users to generate images with the intended disease characteristics, can be useful for education. However, existing image-editing methods typically require manually annotated labels, which are labor-intensive to create and often fail to represent fine-grained anatomical elements precisely. Herein, we present a novel algorithm for editing anatomical elements using segmentation labels acquired through self-supervised learning. Our self-supervised segmentation achieves pixel-wise clustering under the constraint of invariance to photometric and geometric transformations, which are assumed not to change the clinical interpretation of anatomical elements. The user then edits the segmentation map to produce a medical image with the intended detailed findings. Evaluation by five expert physicians demonstrated that the edited images appeared natural as medical images and that the disease characteristics were accurately reproduced.
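The invariance constraint described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the per-pixel "network" is a toy pointwise map, and all names (`predict`, `consistency`, the brightness shift, the horizontal flip) are hypothetical stand-ins for the photometric and geometric transformations. The idea shown is only that per-pixel cluster predictions for a transformed image, once geometrically realigned, are pushed to agree with the predictions for the original image.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy "segmentation network": a pointwise linear map from pixel
# intensity to K cluster logits (a stand-in for a real CNN).
K = 4
W = rng.normal(size=(1, K))
b = rng.normal(size=(K,))

def predict(img):
    # img: (H, W) grayscale -> per-pixel cluster probabilities (H, W, K)
    logits = img[..., None] * W + b
    return softmax(logits)

img = rng.random((8, 8))

# Photometric transformation (brightness shift): assumed not to change
# the clinical interpretation, so predictions should be unchanged.
photo = np.clip(img + 0.1, 0.0, 1.0)

# Geometric transformation (horizontal flip): predictions for the
# flipped image must be flipped back before comparing.
geo = img[:, ::-1]

p = predict(img)
p_photo = predict(photo)
p_geo_aligned = predict(geo)[:, ::-1]  # undo the flip on the prediction

def consistency(p, q, eps=1e-8):
    # Mean per-pixel cross-entropy between paired cluster distributions;
    # minimizing it enforces transformation invariance.
    return float(-(p * np.log(q + eps)).sum(axis=-1).mean())

loss = consistency(p, p_photo) + consistency(p, p_geo_aligned)
```

In training, `loss` would be minimized over the network parameters so that pixels receive the same cluster assignment regardless of such transformations, yielding segmentation labels without manual annotation.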