Implementation of Efficient Prototype Consistency in Semi-Supervised Medical Image Segmentation via Joint Uncertainty and Data Augmentation.
Recently, prototype learning has emerged in semi-supervised medical image segmentation and achieved remarkable performance. However, the scarcity of labeled data limits the expressiveness of prototypes in previous methods, potentially preventing prototypes from fully representing class embeddings. To overcome this issue, we propose efficient prototype consistency learning via joint uncertainty quantification and data augmentation (EPCL-JUDA), which enhances the semantic expressiveness of prototypes within the Mean-Teacher framework. The concatenation of original and augmented labeled data is fed into the teacher network to generate expressive prototypes. Then, a joint uncertainty quantification method is devised to optimize pseudo-labels and generate reliable prototypes for the original and augmented unlabeled data separately. High-quality global prototypes for each class are formed by fusing labeled and unlabeled prototypes, optimizing the distribution of features used in consistency learning. Notably, a prototype network is proposed to reduce the high memory requirement introduced by the augmented data. Extensive experiments on the Left Atrium, Pancreas-CT, and Type B Aortic Dissection datasets demonstrate EPCL-JUDA's superiority over previous state-of-the-art approaches, confirming the effectiveness of our framework.
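To make the core idea above concrete, here is a minimal, framework-free sketch (not the authors' code) of how class prototypes are typically formed by masked average pooling of feature vectors and then fused across labeled and unlabeled data. The fusion weight `w_unlabeled` and the toy features are illustrative stand-ins for the paper's uncertainty-based reliability weighting.

```python
# Illustrative sketch: class prototypes via masked average pooling, then a
# convex-combination fusion of labeled and unlabeled prototypes per class.
# Feature maps are flattened to parallel lists of feature vectors and labels.

def class_prototype(features, labels, cls):
    """Mean feature vector over all positions assigned to class `cls`."""
    vecs = [f for f, y in zip(features, labels) if y == cls]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]

def fuse_prototypes(p_labeled, p_unlabeled, w_unlabeled=0.5):
    """Fuse labeled and unlabeled prototypes for one class."""
    return [(1 - w_unlabeled) * a + w_unlabeled * b
            for a, b in zip(p_labeled, p_unlabeled)]

# Toy example: 2-D features at four positions, two classes (0 = background).
feats = [[1.0, 0.0], [3.0, 0.0], [0.0, 2.0], [0.0, 4.0]]
labs = [1, 1, 0, 0]
proto_fg = class_prototype(feats, labs, 1)   # [2.0, 0.0]
proto_bg = class_prototype(feats, labs, 0)   # [0.0, 3.0]
fused = fuse_prototypes(proto_fg, [4.0, 0.0], w_unlabeled=0.25)  # [2.5, 0.0]
```

In the actual method, the unlabeled-side prototypes are additionally filtered by the joint uncertainty quantification step before fusion, so unreliable pseudo-labeled voxels contribute less to the global prototype.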
🚩 News (2024.06.12) We have uploaded the EPCL_JUDA code 🥳.
This is the official code implementation for the paper "Efficient Prototype Consistency in Semi-Supervised Medical Image Segmentation via Joint Uncertainty and Data Augmentation". The code structure is based on UPCoL (https://github.com/VivienLu/UPCoL/tree/main); many thanks to its authors for their contribution.
Here we list some important requirements and dependencies:
- Linux: Ubuntu 22.04 LTS
- GPU: RTX 4090
- CUDA: 12.3
- Python: 3.10
- PyTorch: 2.1.2
- Pancreas dataset: https://wiki.cancerimagingarchive.net/display/Public/Pancreas-CT
- Left atrium dataset: http://atriaseg2018.cardiacatlas.org
- Type B Aorta Dissection dataset: https://github.com/XiaoweiXu/Dataset_Type-B-Aortic-Dissection
Preprocess: refer to the image pre-processing methods in SASSNet, CoraNet, and FUSSNet for the Pancreas and Left Atrium datasets. The `preprocess` folder contains the code needed to preprocess the Pancreas and TBAD datasets. When starting from the raw data, run `pancreas_preprocess.py` and `TBAD_preprocess.py` first.
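Pipelines in the SASSNet/CoraNet style typically clip CT intensities to a Hounsfield-unit window and standardize them before cropping around the region of interest. Below is a minimal sketch of that normalization step; the window `[-125, 275]` is a common choice for abdominal CT and is an assumption here, not necessarily what `pancreas_preprocess.py` uses.

```python
# Illustrative CT intensity normalization: clip to an HU window, then
# standardize to zero mean / unit variance. Operates on a flat list of
# voxel values for simplicity; real pipelines use 3-D arrays.

def clip_and_normalize(volume, lo=-125.0, hi=275.0):
    """Clip HU values to [lo, hi], then subtract mean and divide by std."""
    clipped = [min(max(v, lo), hi) for v in volume]
    mean = sum(clipped) / len(clipped)
    var = sum((v - mean) ** 2 for v in clipped) / len(clipped)
    std = var ** 0.5 or 1.0  # guard against a constant volume
    return [(v - mean) / std for v in clipped]

# Toy volume: air, water, soft tissue, and bright bone get clipped/scaled.
vol = [-1000.0, 0.0, 100.0, 3000.0]
norm = clip_and_normalize(vol)
```

After this step, the normalized volume has zero mean and unit variance, which keeps input statistics comparable across scans.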
Dataset split: the `data_lists` folder contains the train-test split information for all three datasets.
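If the split files are plain text with one case identifier per line (an assumption — check the actual files in `data_lists` for the exact format), loading a split is straightforward:

```python
# Hypothetical split-file loader; assumes each file in data_lists holds one
# case identifier per line. Verify against the repository's actual files.
import os
import tempfile

def read_split(path):
    """Return the non-empty, stripped lines of a split list file."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# Demo with a throwaway file standing in for a real split list.
with tempfile.NamedTemporaryFile("w", suffix=".list", delete=False) as f:
    f.write("case_001\ncase_002\n\n")
    demo_path = f.name
cases = read_split(demo_path)   # ["case_001", "case_002"]
os.unlink(demo_path)
```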
```shell
# LA
exp='LA'
data_dir='../../../Datasets/LA_dataset'
list_dir='../datalist/LA'
python train_JUDA.py --exp $exp --data_dir $data_dir --list_dir $list_dir
```
If you find EPCL-JUDA useful in your research, please cite our work:
@misc{EPCL-JUDA,
  title={Efficient Prototype Consistency in Semi-Supervised Medical Image Segmentation via Joint Uncertainty and Data Augmentation},
  author={Lijian Li and Yuanpeng He and Chi-Man Pun},
  year={2024},
  journal={arXiv}
}
If you have any questions or suggestions, feel free to contact:
- Lijian Li ([email protected])
- Yuanpeng He ([email protected])