DL-Recon: Combining 3D Deep Learning Image Synthesis and Model Uncertainty with Physics-Based Image Reconstruction

Xiaoxuan Zhang, Pengwei Wu, Wojciech B. Zbijewski, Alejandro Sisniega, Runze Han, Craig K. Jones, Prasad Vagdargi, Ali Uneri, Patrick A. Helm, William S. Anderson, Jeffrey H. Siewerdsen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

High-precision image-guided neurosurgery - especially in the presence of brain shift - would benefit from intraoperative image quality beyond the conventional contrast-resolution limits of cone-beam CT (CBCT) for visualization of the brain parenchyma, ventricles, and intracranial hemorrhage. Deep neural networks for 3D image reconstruction offer a promising basis for noise and artifact reduction, but generalizability can be challenged in scenarios involving features previously unseen in training data. We propose a 3D deep learning reconstruction framework (termed “DL-Recon”) that integrates learning-based image synthesis with physics-based reconstruction to leverage strengths of each. A 3D conditional GAN was developed to generate synthesized CT from CBCT images. Uncertainty in the synthesis image was estimated in a spatially varying, voxel-wise manner via Monte-Carlo dropout and was shown to correlate with abnormalities or pathology not present in training data. The DL-Recon approach improves the fidelity of the resulting image by combining the synthesized image (“DL-Synthesis”) with physics-based reconstruction (filtered back-projection (FBP) or other approaches) in a manner weighted by uncertainty - i.e., drawing more from the physics-based method in regions where model uncertainty is high. The performance of image synthesis, uncertainty estimation, and DL-Recon was investigated for the first time in real CBCT images of the brain. Variable input to the synthesis network was tested - including uncorrected FBP and precorrection with a simple (constant) scatter estimate - hypothesizing the latter to improve synthesis performance. The resulting uncertainty estimation was evaluated for the first time in real anatomical features not included in training (abnormalities and brain shift). 
The performance of DL-Recon was evaluated in terms of image uniformity, noise, and soft-tissue contrast-to-noise ratio in comparison to DL-Synthesis and FBP with a comprehensive artifact correction framework. DL-Recon was found to leverage the strengths of the learning-based and physics-based reconstruction approaches, providing a high degree of image uniformity similar to DL-Synthesis while accurately preserving soft-tissue contrast as in artifact-corrected FBP.
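The uncertainty-weighted combination described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the Monte-Carlo dropout loop, and the max-normalization of the variance map are illustrative assumptions, standing in for whatever network, sampling count, and weighting scheme the paper actually uses.

```python
import numpy as np

def mc_dropout_uncertainty(predict_fn, volume, n_samples=20):
    """Estimate voxel-wise model uncertainty via Monte-Carlo dropout.

    predict_fn: a stochastic forward pass (dropout kept active at inference);
    hypothetical stand-in for the 3D conditional GAN in the paper.
    Returns the sample mean (DL-Synthesis estimate) and variance (uncertainty).
    """
    samples = np.stack([predict_fn(volume) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)

def dl_recon(synthesis, fbp, uncertainty):
    """Fuse DL-Synthesis with a physics-based reconstruction (e.g. FBP),
    drawing more from the physics-based image where uncertainty is high."""
    # Normalize uncertainty to [0, 1]; an assumed weighting, one of many choices.
    w = uncertainty / (uncertainty.max() + 1e-12)
    return (1.0 - w) * synthesis + w * fbp
```

With this weighting, a voxel with zero uncertainty reproduces the synthesized value exactly, while the voxel of maximum uncertainty reverts fully to the physics-based reconstruction.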

Original language: English (US)
Title of host publication: 7th International Conference on Image Formation in X-Ray Computed Tomography
Editors: Joseph Webster Stayman
Publisher: SPIE
ISBN (Electronic): 9781510656697
DOIs
State: Published - 2022
Externally published: Yes
Event: 7th International Conference on Image Formation in X-Ray Computed Tomography - Virtual, Online
Duration: Jun 12 2022 - Jun 16 2022

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 12304
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Conference

Conference: 7th International Conference on Image Formation in X-Ray Computed Tomography
City: Virtual, Online
Period: 6/12/22 - 6/16/22

Keywords

  • artifact correction
  • Cone-beam CT
  • deep learning
  • image synthesis
  • image-guided intervention

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering
