10-Year Update on Study Results Submitted to ClinicalTrials.gov

Deborah A. Zarin, M.D., Kevin M. Fain, J.D., M.P.H., Dr.P.H., Heather D. Dobbins, Ph.D., Tony Tse, Ph.D., and Rebecca J. Williams, Pharm.D., M.P.H.

November 14, 2019

In September 2008, the National Library of Medicine (NLM) of the National Institutes of Health (NIH) expanded the database at ClinicalTrials.gov to include the results of registered clinical trials in response to the Food and Drug Administration Amendments Act (FDAAA).1 This database consists of structured tables of summary data on trial results, without discussion or conclusions. The FDAAA, its implementing regulations (42 CFR Part 11),2,3 and several policies require the reporting of results to ClinicalTrials.gov to address issues related to the nonpublication of results of clinical trials and incomplete reporting of outcomes and adverse events.4 These issues have necessitated process changes for sponsors and investigators in both industry and academic medical centers.5

We previously estimated that the regulations and the trial-reporting policy of the NIH6 would affect more than half the registered trials conducted at academic medical centers in the United States.7 The scope and importance of these requirements demand that we monitor and evaluate the effect of this evolving results-reporting mechanism on the clinical trials enterprise. In 2011, we characterized early experiences with nearly 2200 posted results.4 A decade after launch, the results database contained more than 36,000 results as of May 2019. In this article, we describe the current requirements, the state of results reporting at ClinicalTrials.gov, and challenges and opportunities for further advancement.

Requirements for Reporting Results of U.S. Clinical Trials
LAWS, REGULATIONS, AND POLICIES
Table 1. Summary of Key U.S. Laws, Regulations, and Policies Related to the Submission of Study Results to ClinicalTrials.gov.
The FDAAA, its regulations, and several U.S. policies require or encourage the reporting of study results on ClinicalTrials.gov (Table 1). These regulations, which were an important milestone in implementing the FDAAA, clarified key definitions and information to be reported, including the additional requirement to submit full protocol documents with results information for trials completed on or after January 18, 2017.3 Under the regulations, the party responsible for reporting is generally the “sponsor,” which is defined as either the holder of the FDA investigational-product application or, if no holder has been designated, the initiator of the trial, such as a grantee institution. Sponsors may designate qualified principal investigators to be responsible for meeting the requirements. We will refer to this entity or individual as the “sponsor or investigator.”

Both the regulations and the trial-reporting policy of the NIH, which follows the regulatory reporting framework, require sponsors or investigators to submit results data within 1 year after the primary completion date of the trial, which is generally defined as the date of final collection of data for the primary outcome measure; the delayed submission of results is permitted in certain situations. Registration and results information may also be submitted to ClinicalTrials.gov on an optional basis for clinical studies to which the law, regulations, or policies do not apply, but such submissions must follow the established procedures for content and quality-control review. Although this article focuses on the U.S. landscape, we note the international scope of requirements for reporting results, including the Clinical Trial Regulations of the European Union.13
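
As a minimal illustration of the timing rule only (it ignores the provisions for delayed submission), the following sketch computes the default 1-year deadline from a primary completion date; the function name and structure are ours, not part of any official tooling.

```python
from datetime import date

def default_results_due_date(primary_completion: date) -> date:
    """Default deadline: 1 year after the primary completion date.

    Simplified sketch; it does not model certifications for delayed submission
    or any other exceptions permitted under the regulations.
    """
    try:
        return primary_completion.replace(year=primary_completion.year + 1)
    except ValueError:
        # Primary completion fell on February 29; fall back to February 28.
        return primary_completion.replace(year=primary_completion.year + 1, day=28)

# Example: final collection of primary-outcome data on March 15, 2017
print(default_results_due_date(date(2017, 3, 15)))  # 2018-03-15
```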

CONTENT OF REQUIRED RESULTS DATA
Table 2. Requirements for Submitting Results to ClinicalTrials.gov, According to Module, for Studies Completed since January 18, 2017.
Each record of a study that is posted on ClinicalTrials.gov represents one trial with information submitted by the sponsor or investigator. The registration section, which is generally provided at the time of trial initiation, summarizes key protocol details and other information to support enrollment and tracking of the progress of the trial. After the completion of the trial, results data can be added to the record with the use of required and optional data elements organized into the following scientific modules: Participant Flow, Baseline Characteristics, Outcome Measures and Statistical Analyses, Adverse Events Information, and Study Documents (protocol and statistical analysis plan)3 (Table 2).
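
To make the modular organization concrete, the following hypothetical data structure mirrors a registration section plus the five results modules; the field names are ours and are far simpler than the actual ClinicalTrials.gov data elements summarized in Table 2.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ResultsModules:
    # Hypothetical, simplified stand-ins for the five results modules.
    participant_flow: dict = field(default_factory=dict)          # progress of participants through each arm
    baseline_characteristics: dict = field(default_factory=dict)  # demographic and baseline measures
    outcome_measures: list = field(default_factory=list)          # outcome data and statistical analyses
    adverse_events: dict = field(default_factory=dict)            # adverse events, including all-cause mortality
    study_documents: list = field(default_factory=list)           # protocol, statistical analysis plan

@dataclass
class StudyRecord:
    nct_number: str                           # e.g., "NCT01234567"
    registration: dict                        # protocol summary provided at trial initiation
    results: Optional[ResultsModules] = None  # added after completion of the trial
```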

CRITERIA AND PROCESS FOR QUALITY-CONTROL REVIEW
Information that is submitted to ClinicalTrials.gov undergoes quality-control review, which consists of automated validation followed by manual review by NLM staff members. The goal of quality-control review is to ensure that all required information is complete and meaningful by identifying apparent errors, deficiencies, or inconsistencies.14 At the NLM, we developed review criteria that were based on established scientific-reporting principles15 and informed by our experience. Requirements for each data element are explained and reinforced by the tabular structure of the system — for example, a measure reported as a mean must be accompanied by a measure of dispersion (e.g., a standard deviation).16 The criteria for quality-control review are described in review-criteria documents,14 and when possible, automated messages are provided before submission within the system.

As part of the process of quality-control review, NLM staff members apply the review criteria and provide data submitters with “major” comments noting issues that must be corrected or addressed and “advisory” comments that are provided as suggestions for improving clarity. This process ends when all noted major comments have been addressed in a subsequent submission. Common types of issues include invalid or inconsistent units of measure (e.g., “time to myocardial infarction” as a measure, with “number of participants” as the unit of measure), listing of a scale without the required information about the domain or the directionality (e.g., the minimum and maximum scores), and inconsistencies within the record (e.g., a number of patients who were included in an analysis of an outcome measure that is greater than the number who were enrolled in the study) (Table S1 in the Supplementary Appendix, available with the full text of this article at NEJM.org).17
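
To illustrate the kinds of checks described above, here is a minimal, hypothetical sketch of three automated validation rules (a dispersion measure required for a mean, a declared range for a scale, and consistency between the number analyzed and the number enrolled); the field names and logic are ours and are far less extensive than the published NLM review criteria.

```python
def review_outcome_measure(measure: dict, enrolled: int) -> list:
    """Return a list of 'major'-style issues for one reported outcome measure.

    Illustrative only: the keys and rules below are simplified stand-ins for
    the ClinicalTrials.gov review criteria, not the actual NLM implementation.
    """
    issues = []

    # A measure reported as a mean needs an accompanying measure of dispersion.
    if measure.get("type") == "mean" and not measure.get("dispersion"):
        issues.append("Mean reported without a measure of dispersion (e.g., standard deviation).")

    # A scale must declare its range (and, implicitly, its directionality).
    if measure.get("is_scale") and ("min_score" not in measure or "max_score" not in measure):
        issues.append("Scale reported without minimum and maximum scores.")

    # Internal consistency: participants analyzed cannot exceed participants enrolled.
    if measure.get("num_analyzed", 0) > enrolled:
        issues.append("Number of participants analyzed exceeds the number enrolled.")

    return issues

# Example submission that would draw major comments during quality-control review
print(review_outcome_measure({"type": "mean", "is_scale": True, "num_analyzed": 120}, enrolled=100))
```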

Description of Results in the Database
In this review, we sought to characterize results that had been posted on ClinicalTrials.gov as of January 1, 2019, at which time more than 3300 sponsors or investigators had posted more than 34,000 records with results. As of May 2019, approximately 120 new results were being posted to the site each week, and results in an additional 128 previously posted records were being updated each week. Most of the posted results were for clinical trials, whereas 1973 of the postings (6%) were for observational studies.

Table S2 shows the characteristics of the posted trials with results. More than 1500 of the posted results were accompanied by documents that included a protocol and statistical analysis plan. The median interval between the primary completion date and the date of posting by the NLM was 2.0 years (interquartile range, 1.3 to 3.8), which includes the time from the final collection of data until submission to the NLM, the time for the NLM quality-control review, and the time for sponsors or investigators to address quality-control issues.

Figure 1. Registered U.S. Clinical Trials and Trials with Posted Results on ClinicalTrials.gov.
After the regulations became effective in January 2017, there was an increase in the rate of posting of results for completed U.S. clinical trials, from an average of 50 new reports of results posted per week in 2016 to 86 new reports posted per week in 2017 (Figure 1). Various research groups have estimated adherence to components of the FDAAA results-reporting requirements using public data available on ClinicalTrials.gov. Such analyses are limited because accurate evaluation sometimes requires study-specific considerations and nonpublic data (e.g., information not required or collected after the Final Rule effective date). Others have used various metrics to assess the public dissemination of results generally (e.g., any results reported in published articles or on ClinicalTrials.gov within 2 years).18 According to these heterogeneous analyses, the percentage of completed trials with results reported on ClinicalTrials.gov ranges from 22% of relevant trials completed in 200919 to 66% of those completed as of May 2019.20 Efforts by various reviewers to highlight rates of reporting, including the naming of specific sponsors, have corresponded with improvements in reporting both overall and by the named sponsors. For example, an updated 2018 analysis by the health-oriented news website STAT documented the most improvement in overall rates of results reporting among sponsors that had been previously named in a 2016 analysis by the publication.21,22

Key Issues in Meeting Requirements for Reporting Results
To explore the degree to which sponsors and investigators are meeting the criteria for quality-control review, on October 31, 2018, we identified all trial records (including both required and optional results) that had first been submitted on or after May 1, 2017, and that had undergone quality-control review at least once by September 30, 2018. All the submissions (whether required or optional) were subject to the same review criteria and were considered to have met these criteria (“success”) for a review cycle if no major comments had been provided (see the Supplementary Appendix). The success rate during the first review cycle was 31% (862 of 2780 submissions) for industry records and 17% (582 of 3486 submissions) for nonindustry records. Cumulative success rates after the second review cycle increased to 77% (1653 of 2140 submissions) for industry records and 63% (1492 of 2359 submissions) for nonindustry records.
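
The success rates above can be thought of as simple proportions over review cycles. The sketch below shows one hypothetical way to compute them from per-record counts of major comments; the data layout is ours, and, unlike our actual analysis, it uses the same denominator for both rates.

```python
def success_rates(records: list) -> tuple:
    """Proportion of submissions with no major comments in cycle 1, and
    cumulatively by the end of cycle 2 (simplified: same denominator for both)."""
    cycle1_ok = sum(1 for r in records if r["major_comments_by_cycle"][0] == 0)
    cycle2_ok = sum(
        1
        for r in records
        if r["major_comments_by_cycle"][0] == 0
        or (len(r["major_comments_by_cycle"]) > 1 and r["major_comments_by_cycle"][1] == 0)
    )
    n = len(records)
    return cycle1_ok / n, cycle2_ok / n

records = [
    {"major_comments_by_cycle": [0]},     # met the criteria on the first review
    {"major_comments_by_cycle": [3, 0]},  # met the criteria on the second review
    {"major_comments_by_cycle": [2, 1]},  # major comments still outstanding after two cycles
]
print(success_rates(records))  # (0.333..., 0.666...)
```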

In our analysis of high-volume sponsors (i.e., those who submitted ≥20 results during the sample period), the success rates during cycle 1 were heterogeneous. For example, the cycle 1 success rate for high-volume industry sponsors ranged from 16.4 to 77.1%, whereas the rate for corresponding nonindustry sponsors ranged from 5.0 to 44.4% (Fig. S1). The fact that some high-volume industry sponsors had a success rate of more than 70% during cycle 1 indicated that the reporting requirements could be understood and followed appropriately. Although the median success rate during cycle 1 was relatively low, 62% had success after two review cycles. We believe that a goal of achieving success within two cycles is reasonable and is analogous to the need to make changes in response to editorial comments before journal publication (Fig. S2).

We have observed that industry sponsors tend to be well staffed and have a centralized process for supporting the submission of results, whereas nonindustry sponsors tend to rely on individual investigators with minimal centralized support. A 2017 survey showed that academic medical centers had assigned the task of supervising registration and results submission to a median of 0.08 full-time-equivalent staff members (working 3.2 hours per week) with varying levels of education.5 In addition to limited support, some sponsors have described challenges with providing structured information in a system that is unfamiliar in format and terminology.12 The NLM recognizes these challenges and has made improvements to the system over time; we continue to invest in evaluating and improving the system, including providing more just-in-time automated user support before submission. We also continue to conduct training workshops, add and improve resource materials (e.g., templates, checklists, and tutorials), and provide one-on-one assistance when needed.

Effect of Results Reporting on the Evidence Base
Table 3. Potential Benefits and Uses of Results Data Submitted to ClinicalTrials.gov.
We reviewed the effect of the ClinicalTrials.gov registry in a previous article,23 and sample evidence for the effect of the results database is provided in Table 3. In addition, we conducted two analyses to evaluate the relationship between the results database and published literature (see the Supplementary Appendix).4

RELATIONSHIP TO PUBLISHED LITERATURE
To investigate the broad effect of ClinicalTrials.gov on the public availability of trial results, we compared the timing of the availability of initial results between the results database and corresponding journal publications (when available). On March 1, 2018, we identified 1902 registered trials with required or optional results that had first been posted on ClinicalTrials.gov between April 1, 2017, and June 30, 2017. We then extracted a 20% random sample of 380 records with results, used previously described methods to manually identify corresponding publications,38 and compared the date on which results were first posted on ClinicalTrials.gov with the publication date. We categorized posting and publication as “simultaneous” if the two dates fell within 1 month of each other; other records were designated as having been published before or after posting.
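
A hypothetical sketch of this classification rule follows; the 31-day window is our approximation of “within 1 month,” and the field names are illustrative.

```python
from datetime import date
from typing import Optional

def classify_timing(posted: date, published: Optional[date],
                    follow_up_end: date = date(2018, 7, 15)) -> str:
    """Classify when results were published relative to posting on ClinicalTrials.gov."""
    if published is None or published > follow_up_end:
        return "not published by end of follow-up"
    if abs((published - posted).days) <= 31:  # our approximation of a 1-month window
        return "simultaneous"
    return "published before posting" if published < posted else "published after posting"

print(classify_timing(posted=date(2017, 5, 1), published=date(2017, 5, 20)))  # simultaneous
```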

Relative to the date of posting on ClinicalTrials.gov, 31% (117 of 380) of the records had an earlier publication date, 2% (7 of 380) were published simultaneously, and 9% (36 of 380) were published after posting; 58% (220 of 380) did not have a publication date by the end of the follow-up period on July 15, 2018. Twenty-four months after the primary completion date of the trial, 41% (156 of 380) had posted results on ClinicalTrials.gov, and 27% (101 of 380) had been published (Table S2). These findings are consistent with those in previous analyses in which we found that the results of a substantial number of trials had not been published 2 to 4 years after trial completion.38 In the case of such trials, ClinicalTrials.gov provided the only public reporting of results.4,29

COMPLETENESS OF RESULTS REPORTING
Researchers have previously shown inadequacies in the reporting of data on ClinicalTrials.gov and in the corresponding published articles. Included in these shortcomings is the lack of reporting of all-cause mortality, which is critical, unambiguous information.24,39 To improve reporting on ClinicalTrials.gov, a table that includes data regarding all-cause mortality is now required for trials that were completed on or after January 18, 2017. Of the 160 trials in our sample for which the results had been published, we identified 47 trials that included a table showing all-cause mortality on ClinicalTrials.gov. Of these trials, 26 reported the occurrence of no deaths, and 21 reported at least one death, for an overall total of 995 reported deaths. The associated published articles reported 964 deaths (Table S3). Among the trials for which no deaths were reported on ClinicalTrials.gov, 4% (1 of 26) of published articles stated that there were no deaths, and 96% (25 of 26) did not specifically mention deaths. Among the trials for which at least one death had been reported on ClinicalTrials.gov, 62% (13 of 21) were concordant with the published data, 14% (3 of 21) reported fewer deaths in the published article, and 10% (2 of 21) reported the same overall number of deaths but in groups that were discordantly described; in 14% of the trials (3 of 21), the total number of deaths was ambiguous in the published article. In our sample, no published article reported more deaths than were reported on ClinicalTrials.gov.
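
The concordance categories above can be expressed as a simple comparison of death counts by group. The following sketch is illustrative only; in practice, matching groups between a registry record and a published article required manual review, and `None` stands in for an article whose total was ambiguous.

```python
from typing import Optional

def classify_mortality(registry_by_group: dict,
                       article_by_group: Optional[dict]) -> str:
    """Compare all-cause mortality on ClinicalTrials.gov with the published article."""
    if article_by_group is None:
        return "total ambiguous in published article"
    if article_by_group == registry_by_group:
        return "concordant"
    registry_total = sum(registry_by_group.values())
    article_total = sum(article_by_group.values())
    if article_total < registry_total:
        return "fewer deaths in published article"
    if article_total == registry_total:
        return "same total, but groups discordantly described"
    return "more deaths in published article"  # not observed in our sample

print(classify_mortality({"drug": 3, "placebo": 1}, {"drug": 2, "placebo": 1}))
# fewer deaths in published article
```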

Although discrepancies between two or more sources generally raise questions about which is accurate, it is unlikely that sponsors or investigators would report more deaths than actually occurred, especially because the focus on “all-cause mortality” should remove any subjectivity. Differences in the timing of the disclosure of trial results may lead to some discrepancies, although we did not specifically evaluate that issue in our sample. For example, the publication of the results of a trial before its completion would include only deaths that had occurred to date, whereas the results reported on ClinicalTrials.gov would include all the additional deaths that had been observed until trial completion and thereby serve as a key source of final results for such trials.

Discussion
We have previously described the mandates to report results to ClinicalTrials.gov as an experiment for addressing the nonpublication and incomplete reporting of clinical trial results.4 A decade after launch, the results database is the only publicly accessible source of results information for thousands of trials. As such, the database supports the goal of complete reporting and serves as a tool for the timely dissemination of trial results that complement existing published reports. The study records that have been posted on ClinicalTrials.gov provide an informational scaffold on which information about a trial can be discovered, including access to statements regarding the sharing of data for individual trial participants and, in some cases, links to sites where such data have been deposited.40,41 This scaffolding function is facilitated when documents about a clinical trial (e.g., publications, data repositories, press releases, and news articles) reference the ClinicalTrials.gov unique identifier (NCT number) assigned to each registered study. The recent addition of protocol documents, statistical analysis plans, and informed-consent forms further informs users about a study’s design, a use that we encourage in meta-research and quality-improvement efforts.42
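
As a small illustration of this scaffolding function, the sketch below extracts NCT numbers from free text (e.g., a press release) so that documents can be linked back to their registry records; it assumes only the standard format of “NCT” followed by eight digits.

```python
import re

# NCT numbers follow the pattern "NCT" plus 8 digits (e.g., NCT01234567).
NCT_PATTERN = re.compile(r"\bNCT\d{8}\b")

def find_nct_numbers(text: str) -> list:
    """Return the unique ClinicalTrials.gov identifiers mentioned in a document."""
    return sorted(set(NCT_PATTERN.findall(text)))

press_release = "Results of the trial (ClinicalTrials.gov number, NCT01234567) were announced today."
print(find_nct_numbers(press_release))  # ['NCT01234567']
```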

Efforts to improve the quality of reporting need to consider the full life cycle of a clinical trial. For example, the presumption underlying both trial registration and the reporting of summary results has been that the required information would flow directly from the trial protocol, the statistical analysis plan, and the data analysis itself. In our experience operating ClinicalTrials.gov, however, we have seen heterogeneity in the degree to which the necessary information is specified or available. Thus, we support recent efforts aimed at strengthening this early stage of the clinical-research life cycle with structured, electronic protocol-development tools,43-45 as well as the use of standardized, well-specified outcome measures46 that are consistent with scientific principles and harmonized with ClinicalTrials.gov reporting. For such broad efforts, as well as more targeted quality-improvement efforts, to take root, leadership in the clinical-research community is needed to champion their value and to provide resources and incentives. In parallel, as the database operators, we continue to evaluate users’ needs in order to ensure that reporting requirements are known and understood by those involved throughout the clinical-research life cycle and to improve the submission process and the quality of reporting.

We also think that the full value of the trial-reporting system will emerge when various parties recognize and leverage the substantial effort that has been invested in this curated, structured system for reporting summary results. For example, providing appropriate academic credit for results that are posted on ClinicalTrials.gov (as a complement to credit for the publishing of articles) would incentivize more timely and careful entries by investigators. In addition, the tables that are posted on the database can be reused in manuscripts and during the editorial or peer-review process to ensure consistency across sources. Publications can also refer to the full set of results on ClinicalTrials.gov while focusing on a subset of interest (e.g., publishing data for 19 of 27 prespecified secondary outcome measures and providing a link to access results for the remaining outcome measures on ClinicalTrials.gov31,32). Just as the results database supports systematic reviews, we see opportunities for those who oversee research, including funders, ethics committees, and sponsoring organizations, to conduct landscape analyses before approving the initiation of new clinical trials and to monitor a field of research over time. In support of such activities, we aim to develop tools that further optimize search strategies and enhance the viewing and visualization of search results. For instance, the NLM recently updated the way third-party software accesses data on ClinicalTrials.gov, supporting better-targeted queries and more expansive content availability, and made changes to the main search features on the website for other users.

Although the results database has evolved considerably in the past decade, efforts to strengthen the culture and practice of systematic reporting must continue. We have previously outlined steps that various stakeholders can take to enhance the trial-reporting system.23 These actions fall into two broad themes: facilitating high-quality submissions while reducing the reporting burden for data submitters, and modifying incentives to encourage reporting and to embrace its value as part of the scientific process. As such, we endeavor to support researchers and institutions in maximizing the value of their efforts and those of the research participants, as well as the overall value of the ClinicalTrials.gov results database to the scientific enterprise.

Supported by the Intramural Research Program of the National Library of Medicine, National Institutes of Health.

Disclosure forms provided by the authors are available with the full text of this article at NEJM.org.

The views expressed in this article are those of the authors and do not necessarily reflect the views or policies of the National Institutes of Health.

Author Affiliations
From the National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD.
