TY - GEN
T1 - Head and Neck Cancer Primary Tumor Auto Segmentation Using Model Ensembling of Deep Learning in PET/CT Images
AU - Naser, Mohamed A.
AU - Wahid, Kareem A.
AU - van Dijk, Lisanne V.
AU - He, Renjie
AU - Abdelaal, Moamen Abobakr
AU - Dede, Cem
AU - Mohamed, Abdallah S.R.
AU - Fuller, Clifton D.
N1 - Publisher Copyright:
© 2022, Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Auto-segmentation of primary tumors in oropharyngeal cancer using PET/CT images is an unmet need that has the potential to improve radiation oncology workflows. In this study, we develop a series of deep learning models based on a 3D Residual UNet (ResUNet) architecture that can segment oropharyngeal tumors with high performance as demonstrated through internal and external validation of large-scale datasets (training size = 224 patients, testing size = 101 patients) as part of the 2021 HECKTOR Challenge. Specifically, we leverage ResUNet models with either 256 or 512 bottleneck layer channels that demonstrate internal validation (10-fold cross-validation) mean Dice similarity coefficient (DSC) up to 0.771 and median 95% Hausdorff distance (95% HD) as low as 2.919 mm. We employ label fusion ensemble approaches, including Simultaneous Truth and Performance Level Estimation (STAPLE) and a voxel-level threshold approach based on majority voting (AVERAGE), to generate consensus segmentations on the test data by combining the segmentations produced through different trained cross-validation models. We demonstrate that our best-performing ensembling approach (256 channels AVERAGE) achieves a mean DSC of 0.770 and median 95% HD of 3.143 mm through independent external validation on the test set. Our DSC and 95% HD test results are within 0.01 and 0.06 mm of the top-ranked model in the competition, respectively. Concordance of internal and external validation results suggests our models are robust and can generalize well to unseen PET/CT data. We advocate that ResUNet models coupled with label fusion ensembling approaches are promising candidates for auto-segmentation of oropharyngeal primary tumors in PET/CT. Future investigations should target the optimal combination of channel sizes and label fusion strategies to maximize segmentation performance.
AB - Auto-segmentation of primary tumors in oropharyngeal cancer using PET/CT images is an unmet need that has the potential to improve radiation oncology workflows. In this study, we develop a series of deep learning models based on a 3D Residual UNet (ResUNet) architecture that can segment oropharyngeal tumors with high performance as demonstrated through internal and external validation of large-scale datasets (training size = 224 patients, testing size = 101 patients) as part of the 2021 HECKTOR Challenge. Specifically, we leverage ResUNet models with either 256 or 512 bottleneck layer channels that demonstrate internal validation (10-fold cross-validation) mean Dice similarity coefficient (DSC) up to 0.771 and median 95% Hausdorff distance (95% HD) as low as 2.919 mm. We employ label fusion ensemble approaches, including Simultaneous Truth and Performance Level Estimation (STAPLE) and a voxel-level threshold approach based on majority voting (AVERAGE), to generate consensus segmentations on the test data by combining the segmentations produced through different trained cross-validation models. We demonstrate that our best-performing ensembling approach (256 channels AVERAGE) achieves a mean DSC of 0.770 and median 95% HD of 3.143 mm through independent external validation on the test set. Our DSC and 95% HD test results are within 0.01 and 0.06 mm of the top-ranked model in the competition, respectively. Concordance of internal and external validation results suggests our models are robust and can generalize well to unseen PET/CT data. We advocate that ResUNet models coupled with label fusion ensembling approaches are promising candidates for auto-segmentation of oropharyngeal primary tumors in PET/CT. Future investigations should target the optimal combination of channel sizes and label fusion strategies to maximize segmentation performance.
KW - Auto-contouring
KW - CT
KW - Deep learning
KW - Head and neck cancer
KW - Oropharyngeal cancer
KW - PET
KW - Tumor segmentation
UR - http://www.scopus.com/inward/record.url?scp=85126654353&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85126654353&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-98253-9_11
DO - 10.1007/978-3-030-98253-9_11
M3 - Conference contribution
C2 - 35399869
AN - SCOPUS:85126654353
SN - 9783030982522
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 121
EP - 133
BT - Head and Neck Tumor Segmentation and Outcome Prediction - 2nd Challenge, HECKTOR 2021, Held in Conjunction with MICCAI 2021, Proceedings
A2 - Andrearczyk, Vincent
A2 - Oreiller, Valentin
A2 - Hatt, Mathieu
A2 - Depeursinge, Adrien
PB - Springer Science and Business Media Deutschland GmbH
T2 - 2nd 3D Head and Neck Tumor Segmentation in PET/CT Challenge, HECKTOR 2021, held in conjunction with 24th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2021
Y2 - 27 September 2021 through 27 September 2021
ER -