TY - GEN
T1 - Context-Aware, Reference-Free Local Motion Metric for CBCT Deformable Motion Compensation
AU - Huang, Heyuan
AU - Siewerdsen, Jeffrey H.
AU - Zbijewski, Wojciech
AU - Weiss, Clifford R.
AU - Unberath, Mathias
AU - Sisniega, Alejandro
N1 - Funding Information:
This work was supported by the National Institutes of Health, Grant R01-EB-030547.
Publisher Copyright:
© 2022 SPIE.
PY - 2022
Y1 - 2022
N2 - Deformable motion is one of the main challenges to image quality in interventional cone-beam CT (CBCT). Autofocus methods have been successfully applied for deformable motion compensation in CBCT, using multi-region joint optimization approaches that leverage the moderately smooth spatial variation of the deformable motion field within a local neighborhood. However, conventional autofocus metrics enforce sharp image appearance but do not guarantee the preservation of anatomical structures. Our previous work (DL-VIF) showed that deep convolutional neural networks (CNNs) can reproduce metrics of structural similarity (visual information fidelity, VIF), removing the need for a matched motion-free reference and providing quantification of motion degradation and structural integrity. Application of DL-VIF within local neighborhoods is challenged by the large variability of local image content across a CBCT volume and requires global context information for successful evaluation of motion effects. In this work, we propose a novel deep autofocus metric based on a context-aware, multi-resolution, deep CNN design. In addition to incorporating contextual information, the resulting metric generates a voxel-wise distribution of reference-free VIF values. The new metric, denoted CADL-VIF, was trained on simulated CBCT abdomen scans with deformable motion at random locations and with amplitude up to 30 mm. CADL-VIF achieved good correlation with the ground-truth VIF map across all test cases, with R2 = 0.843 and slope = 0.941. When integrated into a multi-ROI deformable motion compensation method, CADL-VIF consistently reduced motion artifacts, yielding an average increase in SSIM of 0.129 in regions with severe motion and 0.113 in regions with mild motion.
This work demonstrated the capability of CADL-VIF to recognize anatomical structures and penalize unrealistic images, which is a key step in developing reliable autofocus for complex deformable motion compensation in CBCT.
AB - Deformable motion is one of the main challenges to image quality in interventional cone-beam CT (CBCT). Autofocus methods have been successfully applied for deformable motion compensation in CBCT, using multi-region joint optimization approaches that leverage the moderately smooth spatial variation of the deformable motion field within a local neighborhood. However, conventional autofocus metrics enforce sharp image appearance but do not guarantee the preservation of anatomical structures. Our previous work (DL-VIF) showed that deep convolutional neural networks (CNNs) can reproduce metrics of structural similarity (visual information fidelity, VIF), removing the need for a matched motion-free reference and providing quantification of motion degradation and structural integrity. Application of DL-VIF within local neighborhoods is challenged by the large variability of local image content across a CBCT volume and requires global context information for successful evaluation of motion effects. In this work, we propose a novel deep autofocus metric based on a context-aware, multi-resolution, deep CNN design. In addition to incorporating contextual information, the resulting metric generates a voxel-wise distribution of reference-free VIF values. The new metric, denoted CADL-VIF, was trained on simulated CBCT abdomen scans with deformable motion at random locations and with amplitude up to 30 mm. CADL-VIF achieved good correlation with the ground-truth VIF map across all test cases, with R2 = 0.843 and slope = 0.941. When integrated into a multi-ROI deformable motion compensation method, CADL-VIF consistently reduced motion artifacts, yielding an average increase in SSIM of 0.129 in regions with severe motion and 0.113 in regions with mild motion.
This work demonstrated the capability of CADL-VIF to recognize anatomical structures and penalize unrealistic images, which is a key step in developing reliable autofocus for complex deformable motion compensation in CBCT.
KW - Convolutional Neural Network
KW - Deformable Motion
KW - Interventional CBCT
KW - Motion Compensation
UR - http://www.scopus.com/inward/record.url?scp=85141749899&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85141749899&partnerID=8YFLogxK
U2 - 10.1117/12.2646857
DO - 10.1117/12.2646857
M3 - Conference contribution
C2 - 36381250
AN - SCOPUS:85141749899
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - 7th International Conference on Image Formation in X-Ray Computed Tomography
A2 - Stayman, Joseph Webster
PB - SPIE
T2 - 7th International Conference on Image Formation in X-Ray Computed Tomography
Y2 - 12 June 2022 through 16 June 2022
ER -