Cross-modality deep learning: Contouring of MRI data from annotated CT data only

Jennifer P. Kieselmann, Clifton D. Fuller, Oliver J. Gurney-Champion, Uwe Oelfke

Research output: Contribution to journal › Article › peer-review

Abstract

Purpose: Online adaptive radiotherapy would greatly benefit from reliable auto-segmentation algorithms for organs at risk and radiation targets. The current practice of manual segmentation is subjective and time-consuming. While deep learning-based algorithms offer ample opportunities to solve this problem, they typically require large datasets. However, medical imaging data are generally sparse, in particular annotated MR images for radiotherapy. In this study, we developed a method that exploits the wealth of publicly available, annotated CT images to generate synthetic MR images, which can then be used to train a convolutional neural network (CNN) to segment the parotid glands on MR images of head and neck cancer patients.

Methods: The imaging data comprised 202 annotated CT and 27 annotated MR images. The unpaired CT and MR images were fed into a 2D CycleGAN network to generate synthetic MR images from the CT images. Annotations of axial slices of the synthetic images were generated by propagating the CT contours. These were then used to train a 2D CNN. We assessed the segmentation accuracy using the real MR images as the test dataset, quantifying accuracy with the 3D Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean surface distance (MSD) between manual and auto-generated contours. We benchmarked the approach against the interobserver variation determined for the real MR images, as well as against the accuracy obtained when training the 2D CNN to segment the CT images.

Results: The achieved accuracy (DSC: 0.77 ± 0.07, HD: 18.04 ± 12.59 mm, MSD: 2.51 ± 1.47 mm) was close to the interobserver variation (DSC: 0.84 ± 0.06, HD: 10.85 ± 5.74 mm, MSD: 1.50 ± 0.77 mm), as well as to the accuracy obtained when training the 2D CNN to segment the CT images (DSC: 0.81 ± 0.07, HD: 13.00 ± 7.61 mm, MSD: 1.87 ± 0.84 mm).

Conclusions: The introduced cross-modality learning technique can be of great value for segmentation problems with sparse training data. We anticipate applying this method to any non-annotated MRI dataset to generate annotated synthetic MR images of the same type via image style transfer from annotated CT images. Furthermore, as the technique allows for fast adaptation of annotated datasets from one imaging modality to another, it could prove useful for translating between the large variety of MRI contrasts that arises from differences in imaging protocols within and between institutions.
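The abstract describes two concrete technical steps: unpaired CT-to-MR translation with a 2D CycleGAN, and evaluation of auto-generated contours with DSC, HD, and MSD. The two Python sketches below illustrate these steps under stated assumptions; they are not the authors' implementation. In the first, the generator and discriminator networks (G_ct2mr, G_mr2ct, D_mr, D_ct) are assumed to be defined elsewhere, and the cycle weight lam = 10.0 follows the common CycleGAN default rather than anything stated in the paper.

```python
import torch
import torch.nn.functional as F

def generator_loss(G_ct2mr, G_mr2ct, D_mr, D_ct, real_ct, real_mr, lam=10.0):
    """Illustrative generator-side CycleGAN loss for unpaired CT<->MR
    translation (least-squares GAN + cycle consistency). The network
    definitions and lam are assumptions, not the paper's settings."""
    fake_mr = G_ct2mr(real_ct)   # synthetic MR generated from a CT batch
    fake_ct = G_mr2ct(real_mr)   # synthetic CT generated from an MR batch

    # Adversarial terms: each generator tries to make the corresponding
    # discriminator score its synthetic images as real (target = 1).
    pred_mr, pred_ct = D_mr(fake_mr), D_ct(fake_ct)
    adv = F.mse_loss(pred_mr, torch.ones_like(pred_mr)) \
        + F.mse_loss(pred_ct, torch.ones_like(pred_ct))

    # Cycle-consistency terms: translating to the other modality and back
    # should reproduce the original image.
    cyc = F.l1_loss(G_mr2ct(fake_mr), real_ct) \
        + F.l1_loss(G_ct2mr(fake_ct), real_mr)

    return adv + lam * cyc
```

The second sketch computes the three reported contour metrics from binary 3D masks; the symmetric, surface-based definitions of HD and MSD used here are a common convention and an assumption, as the paper's exact evaluation code is not reproduced on this page.

```python
import numpy as np
from scipy import ndimage

def surface_voxels(mask):
    """Surface of a binary mask: voxels removed by one erosion step."""
    return mask & ~ndimage.binary_erosion(mask)

def contour_metrics(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """3D Dice similarity coefficient (DSC), Hausdorff distance (HD), and
    mean surface distance (MSD) between two binary masks.
    `spacing` is the voxel size in mm along each axis."""
    pred, ref = pred.astype(bool), ref.astype(bool)

    dsc = 2.0 * np.count_nonzero(pred & ref) / (
        np.count_nonzero(pred) + np.count_nonzero(ref))

    surf_pred, surf_ref = surface_voxels(pred), surface_voxels(ref)

    # Euclidean distance from every voxel to the nearest surface voxel
    # of each contour, in physical units (mm).
    dist_to_ref = ndimage.distance_transform_edt(~surf_ref, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~surf_pred, sampling=spacing)

    d_pr = dist_to_ref[surf_pred]    # pred surface -> ref surface
    d_rp = dist_to_pred[surf_ref]    # ref surface -> pred surface

    hd = max(d_pr.max(), d_rp.max())            # symmetric Hausdorff distance
    msd = np.concatenate([d_pr, d_rp]).mean()   # symmetric mean surface distance
    return dsc, hd, msd
```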

Original language: English (US)
Pages (from-to): 1673-1684
Number of pages: 12
Journal: Medical Physics
Volume: 48
Issue number: 4
State: Published - Apr 2021

Keywords

  • automated segmentation
  • deep learning
  • head and neck cancer
  • image style transfer
  • magnetic resonance imaging
  • synthetic image generation

ASJC Scopus subject areas

  • Biophysics
  • Radiology, Nuclear Medicine and Imaging
