Automated quality control assessment of clinical chest images

Charles E. Willis, Thomas K. Nishino, Jered R. Wells, H. Asher Ai, Joshua M. Wilson, Ehsan Samei

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

Purpose: The purpose of this study was to determine whether a proposed suite of objective image quality metrics for digital chest radiographs is useful for monitoring image quality in a clinical setting distinct from the one where the metrics were developed.

Methods: Seventeen gridless AP chest radiographs from a GE Optima portable digital radiography (DR) unit ("sub-standard" images; Group 2), 17 digital PA chest radiographs ("standard-of-care" images; Group 1), and 15 gridless (non-routine) PA chest radiographs (images with a gross technical error; Group 3) from a Discovery DR unit were chosen for analysis. Group 2 images were acquired with a lower kVp (100 vs 125) and a shorter source-to-image distance (127 cm vs 183 cm) and were expected to have lower quality than Group 1 images. Group 3 images were expected to have degraded contrast relative to Group 1 images. Images were anonymized and securely transferred to the Duke University Clinical Imaging Physics Group for analysis using software described and validated previously. Individual image quality was reported in terms of lung gray level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. Metrics were compared across groups. To improve the precision of means and confidence intervals for routine exams, an additional 66 PA images were acquired, processed, and pooled with Group 1. Three observer studies were conducted to assess whether human observers could identify the images classified by the algorithm as abnormal.

Results: Metrics agreed with published Quality Consistency Ranges with three exceptions: higher lung gray level, lower rib-lung contrast, and lower subdiaphragm-lung contrast. A higher stored bit depth (14 vs 12 bits) accounted for the higher lung gray level values in our images. Values were most internally consistent for Group 1. The most sensitive metric for distinguishing between groups was mediastinum noise, followed closely by lung noise. The least sensitive metrics were mediastinum detail and rib-lung contrast. The algorithm was more sensitive than human observers at detecting suboptimal diagnostic quality images.

Conclusions: The software appears promising for objectively and automatically identifying suboptimal images in a clinical imaging operation. The results can be used to establish local quality consistency ranges and action limits per facility preferences.
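The abstract describes deriving local quality consistency ranges from routine (Group 1) exams and flagging images whose metrics fall outside those ranges. The following is a minimal Python sketch of that idea, assuming each image's metrics are available as name-value pairs; the metric names are taken from the abstract, but the function names, the mean ± k·SD rule, and the data format are illustrative assumptions, not the published software's method.

```python
import statistics

# Metric names as listed in the abstract; values per image are assumed to be
# provided by upstream image-analysis software (not reproduced here).
METRICS = [
    "lung_gray_level", "lung_detail", "lung_noise", "rib_lung_contrast",
    "rib_sharpness", "mediastinum_detail", "mediastinum_noise",
    "mediastinum_alignment", "subdiaphragm_lung_contrast", "subdiaphragm_area",
]


def local_consistency_ranges(routine_images, k=2.0):
    """Derive a (low, high) range per metric from routine (Group 1) exams.

    `routine_images` is a list of dicts mapping metric name -> value.
    The mean +/- k standard deviations rule is an assumption for illustration;
    a facility might instead prefer percentile-based action limits.
    """
    ranges = {}
    for m in METRICS:
        values = [img[m] for img in routine_images]
        mu = statistics.mean(values)
        sd = statistics.stdev(values)
        ranges[m] = (mu - k * sd, mu + k * sd)
    return ranges


def flag_image(image_metrics, ranges):
    """Return the metrics of one image that fall outside the local ranges."""
    return [m for m, (lo, hi) in ranges.items()
            if not lo <= image_metrics[m] <= hi]
```

In practice, metrics flagged this way would be reviewed against facility-defined action limits by a physicist or technologist rather than acted on automatically.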

Original language: English (US)
Pages (from-to): 4377-4391
Number of pages: 15
Journal: Medical Physics
Volume: 45
Issue number: 10
DOIs
State: Published - Oct 2018

Keywords

  • data analytics
  • detector performance
  • digital radiography
  • quality assurance
  • quality control

ASJC Scopus subject areas

  • Biophysics
  • Radiology, Nuclear Medicine and Imaging
