TY - JOUR
T1 - Composite grading algorithm for the National Cancer Institute’s Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE)
AU - Basch, Ethan
AU - Becker, Claus
AU - Rogak, Lauren J.
AU - Schrag, Deborah
AU - Reeve, Bryce B.
AU - Spears, Patricia
AU - Smith, Mary Lou
AU - Gounder, Mrinal M.
AU - Mahoney, Michelle R.
AU - Schwartz, Gary K.
AU - Bennett, Antonia V.
AU - Mendoza, Tito R.
AU - Cleeland, Charles S.
AU - Sloan, Jeff A.
AU - Bruner, Deborah Watkins
AU - Schwab, Gisela
AU - Atkinson, Thomas M.
AU - Thanarajasingam, Gita
AU - Bertagnolli, Monica M.
AU - Dueck, Amylou C.
N1 - Publisher Copyright: © The Author(s) 2020.
PY - 2021/2
Y1 - 2021/2
AB - Background: The Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events is an item library designed for eliciting patient-reported adverse events in oncology. For each adverse event, up to three individual items are scored for frequency, severity, and interference with daily activities. To align the Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events with other standardized tools for adverse event assessment, including the Common Terminology Criteria for Adverse Events, an algorithm for mapping individual items for any given adverse event to a single composite numerical grade was developed and tested. Methods: A five-step process was used: (1) All 179 possible Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events score combinations were presented to 20 clinical investigators to subjectively map combinations to single numerical grades ranging from 0 to 3. (2) Combinations with <75% agreement were presented to investigator committees at a National Clinical Trials Network cooperative group meeting to gain majority consensus via anonymous voting. (3) The resulting algorithm was refined via graphical and tabular approaches to assure directional consistency. (4) Validity, reliability, and sensitivity were assessed in a national study dataset. (5) Accuracy for delineating adverse events between study arms was measured in two Phase III clinical trials (NCT02066181 and NCT01522443). Results: In Step 1, 12/179 score combinations had <75% initial agreement. In Step 2, majority consensus was reached for all combinations. In Step 3, five grades were adjusted to assure directional consistency. In Steps 4 and 5, composite grades performed well and comparably to individual item scores on validity, reliability, sensitivity, and between-arm delineation. Conclusion: A composite grading algorithm has been developed and yields single numerical grades for adverse events assessed via the Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events, and can be useful in analyses and reporting.
KW - Adverse event
KW - Common Terminology Criteria for Adverse Events
KW - Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events
KW - health-related quality of life
KW - oncology
KW - patient-reported outcome
KW - symptom
KW - toxicity
UR - http://www.scopus.com/inward/record.url?scp=85097054374&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85097054374&partnerID=8YFLogxK
U2 - 10.1177/1740774520975120
DO - 10.1177/1740774520975120
M3 - Article
C2 - 33258687
AN - SCOPUS:85097054374
SN - 1740-7745
VL - 18
SP - 104
EP - 114
JO - Clinical Trials
JF - Clinical Trials
IS - 1
ER -