Context-Aware, Reference-Free Local Motion Metric for CBCT Deformable Motion Compensation

Heyuan Huang, Jeffrey H. Siewerdsen, Wojciech Zbijewski, Clifford R. Weiss, Mathias Unberath, Alejandro Sisniega

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Deformable motion is one of the main challenges to image quality in interventional cone-beam CT (CBCT). Autofocus methods have been successfully applied to deformable motion compensation in CBCT, using multi-region joint optimization approaches that leverage the moderately smooth spatial variation of the deformable motion field within a local neighborhood. However, conventional autofocus metrics enforce sharp image appearance but do not guarantee the preservation of anatomical structures. Our previous work (DL-VIF) showed that deep convolutional neural networks (CNNs) can reproduce metrics of structural similarity (visual information fidelity, VIF), removing the need for a matched motion-free reference and providing quantification of motion degradation and structural integrity. Applying DL-VIF within local neighborhoods is challenged by the large variability of local image content across a CBCT volume and requires global context information for successful evaluation of motion effects. In this work, we propose a novel deep autofocus metric based on a context-aware, multi-resolution, deep CNN design. In addition to incorporating contextual information, the resulting metric generates a voxel-wise distribution of reference-free VIF values. The new metric, denoted CADL-VIF, was trained on simulated CBCT abdomen scans with deformable motion at random locations and with amplitude up to 30 mm. CADL-VIF achieved good correlation with the ground-truth VIF map across all test cases, with R2 = 0.843 and slope = 0.941. When integrated into a multi-ROI deformable motion compensation method, CADL-VIF consistently reduced motion artifacts, yielding an average SSIM increase of 0.129 in regions with severe motion and 0.113 in regions with mild motion.
This work demonstrates the capability of CADL-VIF to recognize anatomical structures and penalize unrealistic images, a key step toward reliable autofocus for complex deformable motion compensation in CBCT.
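To illustrate the multi-ROI autofocus idea described above, the following is a minimal, hypothetical sketch (not the authors' code): it aggregates a voxel-wise, reference-free metric map (e.g., CADL-VIF values in [0, 1]) into per-ROI scores and a joint cost that an optimizer over the deformable motion field would minimize. The ROI layout, value ranges, and function names are illustrative assumptions.

```python
# Hypothetical sketch: joint multi-ROI autofocus cost from a voxel-wise
# metric map. Higher metric values indicate better structural fidelity,
# so the joint cost is the negated mean metric, averaged over ROIs.

def roi_mean(metric_map, roi):
    """Mean metric value inside a rectangular ROI.

    metric_map: 2D list of per-voxel metric values (one slice of the volume).
    roi: (row0, row1, col0, col1), half-open bounds.
    """
    r0, r1, c0, c1 = roi
    vals = [metric_map[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

def autofocus_cost(metric_map, rois):
    """Joint cost over all ROIs (lower is better).

    Minimizing this cost drives every local neighborhood toward a
    sharper, structurally plausible reconstruction simultaneously.
    """
    return -sum(roi_mean(metric_map, roi) for roi in rois) / len(rois)

# Toy 4x4 metric map: top half degraded by motion (low values),
# bottom half largely motion-free (high values).
metric_map = [
    [0.2, 0.2, 0.3, 0.3],
    [0.2, 0.3, 0.3, 0.2],
    [0.9, 0.9, 0.8, 0.9],
    [0.9, 0.8, 0.9, 0.9],
]
rois = [(0, 2, 0, 4), (2, 4, 0, 4)]  # one ROI per half
print(round(autofocus_cost(metric_map, rois), 4))  # -0.5625
```

In a full compensation pipeline, this cost would be evaluated repeatedly as the motion-field parameters are updated, with the metric map regenerated from each candidate reconstruction.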

Original language: English (US)
Title of host publication: 7th International Conference on Image Formation in X-Ray Computed Tomography
Editors: Joseph Webster Stayman
Publisher: SPIE
ISBN (Electronic): 9781510656697
DOIs
State: Published - 2022
Externally published: Yes
Event: 7th International Conference on Image Formation in X-Ray Computed Tomography - Virtual, Online
Duration: Jun 12 2022 - Jun 16 2022

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 12304
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Conference

Conference: 7th International Conference on Image Formation in X-Ray Computed Tomography
City: Virtual, Online
Period: 6/12/22 - 6/16/22

Keywords

  • Convolutional Neural Network
  • Deformable Motion
  • Interventional CBCT
  • Motion Compensation

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering

MD Anderson CCSG core facilities

  • Clinical and Translational Research Center
