TY - GEN
T1 - Deformable Motion Compensation for Intraprocedural Vascular Cone-beam CT with Sequential Projection Domain Targeting and Vessel-Enhancing Autofocus
AU - Lu, Alexander
AU - Huang, Heyuan
AU - Hu, Yicheng
AU - Zbijewski, Wojtek
AU - Unberath, Mathias
AU - Siewerdsen, Jeffrey H.
AU - Weiss, Clifford R.
AU - Sisniega, Alejandro
N1 - Publisher Copyright:
© 2023 SPIE.
PY - 2023
Y1 - 2023
N2 - Purpose: Cone-beam CT (CBCT) is used in interventional radiology (IR) for identification of complex vascular anatomy that is difficult to visualize in 2D fluoroscopy. However, its long acquisition time makes CBCT susceptible to soft-tissue deformable motion that degrades visibility of fine vessels. We propose a targeted framework to compensate for deformable intra-scan motion via learned full-sequence models for identification of vascular anatomy, coupled with an autofocus function specifically tailored to vascular imaging. Methods: The vessel-targeted autofocus acts in two stages: (i) identification of vascular and catheter targets in the projection domain; and (ii) autofocus optimization of a 4D vector field through an objective function that quantifies vascular visibility. Target identification is based on a deep learning model that operates on the complete sequence of projections via a transformer encoder-decoder architecture that uses spatial-temporal self-attention modules to infer long-range feature correlations, enabling identification of vascular anatomy with highly variable conspicuity. The vascular autofocus function is derived from the eigenvalues of the local image Hessian, which quantify local image structure for identification of bright tubular structures. Motion compensation was achieved via spatial transformer operators that impart time-dependent deformations to NPAR = 90 partial angle reconstructions, allowing for efficient minimization via gradient backpropagation. The framework was trained and evaluated on synthetic abdominal CBCTs obtained from liver MDCT volumes, including realistic models of contrast-enhanced vascularity with 15-30 end branches, 1-3.5 mm vessel diameter, and 1400 HU contrast. Results: The targeted autofocus yielded qualitative and quantitative improvement in vascular visibility in both simulated and clinical intra-procedural CBCT. The transformer-based target identification module achieved superior detection of target vascularity and fewer false positives compared to a baseline U-Net model acting on individual projection views, reflected in a 1.97x improvement in intersection-over-union values. Motion compensation in simulated data yielded improved conspicuity of vascular anatomy, reduced streak artifacts and blurring around vessels, and recovery of distorted vessel shape. These improvements amounted to an average 147% improvement in cross correlation, computed against the motion-free ground truth, relative to the uncompensated reconstruction. Conclusion: Targeted autofocus improved visibility of vascular anatomy in abdominal CBCT, offering better potential for intra-procedural tracking of fine vascular anatomy in 3D images. The proposed method provides an efficient solution to motion compensation in task-specific imaging, with future application to a wider range of imaging scenarios.
AB - Purpose: Cone-beam CT (CBCT) is used in interventional radiology (IR) for identification of complex vascular anatomy that is difficult to visualize in 2D fluoroscopy. However, its long acquisition time makes CBCT susceptible to soft-tissue deformable motion that degrades visibility of fine vessels. We propose a targeted framework to compensate for deformable intra-scan motion via learned full-sequence models for identification of vascular anatomy, coupled with an autofocus function specifically tailored to vascular imaging. Methods: The vessel-targeted autofocus acts in two stages: (i) identification of vascular and catheter targets in the projection domain; and (ii) autofocus optimization of a 4D vector field through an objective function that quantifies vascular visibility. Target identification is based on a deep learning model that operates on the complete sequence of projections via a transformer encoder-decoder architecture that uses spatial-temporal self-attention modules to infer long-range feature correlations, enabling identification of vascular anatomy with highly variable conspicuity. The vascular autofocus function is derived from the eigenvalues of the local image Hessian, which quantify local image structure for identification of bright tubular structures. Motion compensation was achieved via spatial transformer operators that impart time-dependent deformations to NPAR = 90 partial angle reconstructions, allowing for efficient minimization via gradient backpropagation. The framework was trained and evaluated on synthetic abdominal CBCTs obtained from liver MDCT volumes, including realistic models of contrast-enhanced vascularity with 15-30 end branches, 1-3.5 mm vessel diameter, and 1400 HU contrast. Results: The targeted autofocus yielded qualitative and quantitative improvement in vascular visibility in both simulated and clinical intra-procedural CBCT. The transformer-based target identification module achieved superior detection of target vascularity and fewer false positives compared to a baseline U-Net model acting on individual projection views, reflected in a 1.97x improvement in intersection-over-union values. Motion compensation in simulated data yielded improved conspicuity of vascular anatomy, reduced streak artifacts and blurring around vessels, and recovery of distorted vessel shape. These improvements amounted to an average 147% improvement in cross correlation, computed against the motion-free ground truth, relative to the uncompensated reconstruction. Conclusion: Targeted autofocus improved visibility of vascular anatomy in abdominal CBCT, offering better potential for intra-procedural tracking of fine vascular anatomy in 3D images. The proposed method provides an efficient solution to motion compensation in task-specific imaging, with future application to a wider range of imaging scenarios.
UR - http://www.scopus.com/inward/record.url?scp=85160825175&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85160825175&partnerID=8YFLogxK
U2 - 10.1117/12.2652137
DO - 10.1117/12.2652137
M3 - Conference contribution
C2 - 37937266
AN - SCOPUS:85160825175
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Medical Imaging 2023
A2 - Linte, Cristian A.
A2 - Siewerdsen, Jeffrey H.
PB - SPIE
T2 - Medical Imaging 2023: Image-Guided Procedures, Robotic Interventions, and Modeling
Y2 - 19 February 2023 through 23 February 2023
ER -