Deputy Division Director, Division of Epidemiology I, Office of Surveillance and Epidemiology, CDER, FDA, United States
Background: Quantitative bias analysis (QBA) methods evaluate the impact of systematic errors (biases) on study results. Although previous studies have summarized the most prominent QBA methods, many of which require individual patient level data, less is known about the full range of QBA methods that can be conducted using only summary level data.
Objectives: To identify and summarize all QBA methods proposed in the literature that require only summary level data, across different epidemiologic study designs.
Methods: We conducted a systematic review to identify QBA methods for summary level epidemiologic data across the biomedical literature. We searched MEDLINE, Embase, Scopus, and Web of Science for English-language articles that described, evaluated, or compared QBA methods for summary level data from observational and nonrandomized experimental studies. For all eligible articles, we conducted reference chaining and cross-checked whether the methods were also cited in a prominent QBA textbook. For each eligible QBA method, we recorded key characteristics, including applicable study designs; bias(es) addressed; bias parameters; formulas; output of the method; and software available to implement the method. The eligible QBA methods and their characteristics were then used to generate a decision tree that can be used to identify appropriate QBA methods for different study design scenarios. All characteristics of the QBA methods were reported using descriptive statistics.
Results: Our search identified 10,249 records, of which 51 were eligible articles describing 54 unique QBA methods for summary level data. Overall, 21 (39%) QBA methods were referenced in the textbook and 11 (20%) were explicitly explained. Of the 54 QBA methods, 50 (93%) were designed for observational studies, 2 (4%) for nonrandomized trials, and 2 (4%) for meta-analyses. Twenty-eight (52%) QBA methods addressed unmeasured confounding, 21 (39%) addressed misclassification bias, and 3 (6%) addressed selection bias; an additional 2 (4%) addressed multiple biases at a time. Thirty-four (63%) QBA methods were classified as simple sensitivity analysis methods. While 36 (67%) QBA methods generated bias-adjusted effect estimates, 16 (30%) described how bias could explain away observed findings. Twenty (37%) articles provided publicly available code or tools to implement the QBA methods.
Conclusions: Our systematic review identified 54 unique QBA methods for summary level epidemiologic data that have been published in the peer-reviewed literature, the vast majority of which address confounding and misclassification bias. Our comprehensive evaluation of QBA methods and the accompanying decision tree can serve as a complement to the QBA textbook and can help guide the selection of appropriate QBA methods based on different study design characteristics.
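Illustrative note: as a minimal sketch (not drawn from the review itself) of the "explain away" style of summary level sensitivity analysis described in the Results, the E-value of VanderWeele and Ding (2017) uses only a reported risk ratio to quantify how strong an unmeasured confounder would have to be to fully account for an observed association. A Python version is shown below; the function name e_value is our own illustrative choice.

import math

def e_value(rr: float) -> float:
    # E-value for an observed risk ratio (VanderWeele & Ding, 2017): the
    # minimum strength of association, on the risk ratio scale, that an
    # unmeasured confounder would need with both exposure and outcome to
    # fully explain away the observed estimate. Only summary level input
    # (the reported risk ratio) is required.
    if rr <= 0:
        raise ValueError("risk ratio must be positive")
    if rr < 1:
        rr = 1 / rr  # protective estimates: use the reciprocal
    return rr + math.sqrt(rr * (rr - 1))

print(round(e_value(1.8), 2))  # an observed RR of 1.8 yields an E-value of 3.0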