TY - GEN
T1 - DL-Recon
T2 - 7th International Conference on Image Formation in X-Ray Computed Tomography
AU - Zhang, Xiaoxuan
AU - Wu, Pengwei
AU - Zbijewski, Wojciech B.
AU - Sisniega, Alejandro
AU - Han, Runze
AU - Jones, Craig K.
AU - Vagdargi, Prasad
AU - Uneri, Ali
AU - Helm, Patrick A.
AU - Anderson, William S.
AU - Siewerdsen, Jeffrey H.
N1 - Publisher Copyright:
© 2022 SPIE.
PY - 2022
Y1 - 2022
N2 - High-precision image-guided neurosurgery - especially in the presence of brain shift - would benefit from intraoperative image quality beyond the conventional contrast-resolution limits of cone-beam CT (CBCT) for visualization of the brain parenchyma, ventricles, and intracranial hemorrhage. Deep neural networks for 3D image reconstruction offer a promising basis for noise and artifact reduction, but generalizability can be challenged in scenarios involving features previously unseen in training data. We propose a 3D deep learning reconstruction framework (termed “DL-Recon”) that integrates learning-based image synthesis with physics-based reconstruction to leverage strengths of each. A 3D conditional GAN was developed to generate synthesized CT from CBCT images. Uncertainty in the synthesis image was estimated in a spatially varying, voxel-wise manner via Monte Carlo dropout and was shown to correlate with abnormalities or pathology not present in training data. The DL-Recon approach improves the fidelity of the resulting image by combining the synthesized image (“DL-Synthesis”) with physics-based reconstruction (filtered back-projection (FBP) or other approaches) in a manner weighted by uncertainty - i.e., drawing more from the physics-based method in regions where model uncertainty is high. The performance of image synthesis, uncertainty estimation, and DL-Recon was investigated for the first time in real CBCT images of the brain. Variable input to the synthesis network was tested - including uncorrected FBP and precorrection with a simple (constant) scatter estimate - hypothesizing the latter to improve synthesis performance. The resulting uncertainty estimation was evaluated for the first time in real anatomical features not included in training (abnormalities and brain shift). The performance of DL-Recon was evaluated in terms of image uniformity, noise, and soft-tissue contrast-to-noise ratio in comparison to DL-Synthesis and FBP with a comprehensive artifact correction framework. DL-Recon was found to leverage the strengths of the learning-based and physics-based reconstruction approaches, providing a high degree of image uniformity similar to DL-Synthesis while accurately preserving soft-tissue contrast as in artifact-corrected FBP.
AB - High-precision image-guided neurosurgery - especially in the presence of brain shift - would benefit from intraoperative image quality beyond the conventional contrast-resolution limits of cone-beam CT (CBCT) for visualization of the brain parenchyma, ventricles, and intracranial hemorrhage. Deep neural networks for 3D image reconstruction offer a promising basis for noise and artifact reduction, but generalizability can be challenged in scenarios involving features previously unseen in training data. We propose a 3D deep learning reconstruction framework (termed “DL-Recon”) that integrates learning-based image synthesis with physics-based reconstruction to leverage strengths of each. A 3D conditional GAN was developed to generate synthesized CT from CBCT images. Uncertainty in the synthesis image was estimated in a spatially varying, voxel-wise manner via Monte Carlo dropout and was shown to correlate with abnormalities or pathology not present in training data. The DL-Recon approach improves the fidelity of the resulting image by combining the synthesized image (“DL-Synthesis”) with physics-based reconstruction (filtered back-projection (FBP) or other approaches) in a manner weighted by uncertainty - i.e., drawing more from the physics-based method in regions where model uncertainty is high. The performance of image synthesis, uncertainty estimation, and DL-Recon was investigated for the first time in real CBCT images of the brain. Variable input to the synthesis network was tested - including uncorrected FBP and precorrection with a simple (constant) scatter estimate - hypothesizing the latter to improve synthesis performance. The resulting uncertainty estimation was evaluated for the first time in real anatomical features not included in training (abnormalities and brain shift). The performance of DL-Recon was evaluated in terms of image uniformity, noise, and soft-tissue contrast-to-noise ratio in comparison to DL-Synthesis and FBP with a comprehensive artifact correction framework. DL-Recon was found to leverage the strengths of the learning-based and physics-based reconstruction approaches, providing a high degree of image uniformity similar to DL-Synthesis while accurately preserving soft-tissue contrast as in artifact-corrected FBP.
KW - artifact correction
KW - Cone-beam CT
KW - deep learning
KW - image synthesis
KW - image-guided intervention
UR - http://www.scopus.com/inward/record.url?scp=85141798575&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85141798575&partnerID=8YFLogxK
U2 - 10.1117/12.2646383
DO - 10.1117/12.2646383
M3 - Conference contribution
AN - SCOPUS:85141798575
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - 7th International Conference on Image Formation in X-Ray Computed Tomography
A2 - Stayman, Joseph Webster
PB - SPIE
Y2 - 12 June 2022 through 16 June 2022
ER -