Evaluation of a multiview architecture for automatic vertebral labeling of palliative radiotherapy simulation CT images

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

Purpose: The purpose of this work was to evaluate the performance of X-Net, a multiview deep learning architecture, for automatically labeling vertebral levels (S2-C1) in palliative radiotherapy simulation CT scans.

Methods: For each patient CT scan, our automated approach 1) segmented the spinal canal using a convolutional neural network (CNN), 2) formed sagittal and coronal intensity projection pairs, 3) labeled vertebral levels with X-Net, and 4) detected irregular intervertebral spacing using an analytic method. The spinal canal CNN was trained via fivefold cross-validation on 1,966 simulation CT scans and evaluated on 330 CT scans. After vertebral levels (S2-C1) were labeled in 897 palliative radiotherapy simulation CT scans, a volume of interest surrounding the spinal canal in each patient's CT scan was converted into a sagittal and coronal intensity projection image pair. These image pairs were then augmented and used to train X-Net to automatically label vertebral levels via fivefold cross-validation (n = 803). Prior to testing on the final test set (n = 94), CT scans of patients with anatomical abnormalities, surgical implants, or other atypical features were placed in an outlier group (n = 20), whereas those without these features were placed in a normative group (n = 74). The performance of X-Net, the X-Net Ensemble, and another leading vertebral labeling architecture (Btrfly Net) was evaluated on both groups using identification rate, localization error, and other metrics. Our approach was also evaluated on the MICCAI 2014 test dataset (n = 60). Finally, a method to detect irregular intervertebral spacing, based on the rate of change in spacing between predicted vertebral body locations, was created and evaluated on the final test set; receiver operating characteristic (ROC) analysis was used to investigate its performance.
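The sagittal and coronal intensity projection pair described in the Methods can be sketched as follows. This is a minimal illustration only: the axis ordering (z, y, x), the use of maximum-intensity projection, and the function name are assumptions, and the paper's cropping of the volume of interest around the spinal canal is omitted.

```python
import numpy as np

def intensity_projection_pair(ct_volume):
    """Collapse a CT volume into a sagittal/coronal image pair.

    Assumes ct_volume is a 3D array ordered (z, y, x); the choice of
    maximum-intensity projection is illustrative, not the paper's
    documented implementation.
    """
    sagittal = ct_volume.max(axis=2)  # project along x -> (z, y) image
    coronal = ct_volume.max(axis=1)   # project along y -> (z, x) image
    return sagittal, coronal
```

Each pair of 2D views would then be augmented and fed to the multiview network in place of the full 3D volume.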
Results: The spinal canal architecture yielded centroid coordinates spanning S2-C1 with submillimeter accuracy (mean ± standard deviation, 0.399 ± 0.299 mm; n = 330 patients), and localization of the spinal canal centroid was robust to surgical implants and widespread metastases. Cross-validation testing of X-Net for vertebral labeling revealed that model performance (F1 score, precision, and sensitivity) improved with CT scan length. The mean identification rates and localization errors of X-Net, the X-Net Ensemble, and Btrfly Net were 92.4% and 2.3 mm, 94.2% and 2.2 mm, and 90.5% and 3.4 mm, respectively, on the final test set, and 96.7% and 2.2 mm, 96.9% and 2.0 mm, and 94.8% and 3.3 mm, respectively, within its normative group. The X-Net Ensemble yielded the highest percentage of patients (94%) with all vertebral bodies identified correctly in the final test set when the three most inferior and superior vertebral bodies were excluded from the CT scan. The labeling-failure detection method had 67% sensitivity and 95% specificity when combined with the X-Net Ensemble and flagged five of six patients with atypical vertebral counts (an additional thoracic (T13) or lumbar (L6) vertebra, or only four lumbar vertebrae). With transfer learning, the mean identification rate of the X-Net Ensemble on the MICCAI 2014 dataset increased from 86.8% to 91.3%, achieving state-of-the-art results for various regions of the spine.

Conclusions: We trained X-Net, our unique convolutional neural network, to automatically label vertebral levels from S2 to C1 on palliative radiotherapy CT images and found that an ensemble of X-Net models had a high vertebral body identification rate (94.2%) and small localization errors (2.2 ± 1.8 mm). In addition, our transfer learning approach achieved state-of-the-art results on a well-known benchmark dataset, with a high identification rate (91.3%) and low localization error (3.3 ± 2.7 mm). When radiotherapy CT images were prescreened for hardware, surgical implants, or other anatomic abnormalities prior to the use of X-Net, it labeled the spine correctly in more than 97% of patients; without prescreening, it did so in 94% of patients. The automatically generated labels are robust to widespread vertebral metastases and surgical implants, and our method for detecting labeling failures based on neighborhood intervertebral spacing can reliably identify patients with an additional lumbar or thoracic vertebral body.
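The failure detector described above keys on the rate of change in spacing between neighboring predicted vertebral body locations. A minimal sketch of that idea follows; the function name, the relative-change formulation, and the threshold value are assumptions (the paper tuned its detector via ROC analysis rather than using a fixed cutoff):

```python
import numpy as np

def flag_irregular_spacing(centroids_z, rel_tol=0.5):
    """Flag abrupt changes in intervertebral spacing.

    centroids_z: superior-to-inferior coordinates (mm) of predicted
    vertebral body centroids along the craniocaudal axis. Each gap is
    compared with the preceding gap, and indices where the relative
    rate of change exceeds rel_tol are returned. The rel_tol=0.5
    default is an illustrative assumption, not the published setting.
    """
    gaps = np.diff(np.asarray(centroids_z, dtype=float))
    # relative change between consecutive intervertebral gaps
    rate = np.abs(np.diff(gaps)) / np.maximum(np.abs(gaps[:-1]), 1e-6)
    return np.flatnonzero(rate > rel_tol)
```

For evenly spaced centroids the detector stays silent; a gap that suddenly doubles (e.g., a skipped or extra vertebral level) produces a flagged index that can be surfaced for manual review.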

Original language: English (US)
Pages (from-to): 5592-5608
Number of pages: 17
Journal: Medical Physics
Volume: 47
Issue number: 11
State: Published - Nov 2020

Keywords

  • automatic vertebral labeling
  • deep learning

ASJC Scopus subject areas

  • Biophysics
  • Radiology, Nuclear Medicine and Imaging

MD Anderson CCSG core facilities

  • Biostatistics Resource Group
