The FDA’s new guidance provides for remote and risk-based monitoring, offering a more thoughtful oversight process.
Clinical trial monitoring started as a way to make sure data were accurate and reflected the original source, but it has become time-consuming, expensive, and inefficient.
Over the past two decades, the number and complexity of clinical trials have grown dramatically. This has created new challenges for clinical trial oversight, particularly increased variability in clinical investigator experience, site infrastructure, treatment choices, and standards of care.
At the same time, monitoring of trials has not really evolved or taken advantage of technologies that could improve the quality and efficiency of sponsor oversight of clinical investigations.
Until recently, the standard paradigm for clinical site monitoring was to conduct routine site visits every four to 10 weeks with 100% source data verification (SDV) in most cases.
According to Kyle Given, principal at Medidata Solutions, the clinical research industry’s conservative interpretation of the FDA’s guidance has led sponsors historically to perform 100% source data verification and review. Clinical monitors review every detail listed in subjects’ medical charts to confirm that investigators properly report safety and efficacy data.
“This practice assumes that all sites have an inherent quality issue and that every data point must be verified,” Mr. Given says. “Basically, the practice subscribes to the one-bad-apple-spoils-the-bunch philosophy.”
The reality, however, is that many sites are well managed and able to collect accurate and reliable clinical trial data. And, Mr. Given says, with the updated FDA guidance, it’s clear that 100% source data verification is not a requirement.
“Rather than assuming all sites have quality issues, risk-based monitoring takes a targeted approach using data analytics to identify specific areas of risk and deploy monitors with a specific remit to eliminate or reduce the impact of such areas of risk,” he says. “In addition to enhancing the site monitoring process, the approach focuses on building quality into the protocol from the start and redefining what quality data means — data that are not 100% error-free, but are ‘fit for purpose.’”
“Today, there’s mounting evidence that 100% SDV minimally impacts data quality,” Mr. Given says. “Thus, many organizations have started actively redefining their approach to monitoring with a more risk-based focus. We have analytics that represent thousands of recent, global clinical trials in all phases of development across all therapeutic areas, which show that less than 3% of all electronic case report form data are actually changed due to post-data capture monitoring and data cleaning. Data analytics from our platform also reveal that the percentage of SDV coverage measured industrywide is trending downward at a modest rate, from 92% in 2008 to 84% in 2012.”
As many sponsors have learned, conducting 100% source data verification at every investigative site is not the answer to ensuring data quality, nor is it an efficient use of resources or time.
“This process typically accounts for 30% of a sponsor’s overall costs, but results in less than 3% of any data changes post-monitoring visit,” says Marc Buyse, Sc.D., founder of CluePoints. “This is a highly time-consuming activity that cannot be shown to adequately ensure the integrity of the research process. Traditional on-site monitoring demands hundreds of man hours that in the current climate sponsors simply cannot afford.”
Sponsors have been using a cookbook approach to monitoring, says Brian Bollwage, VP, strategic regulatory affairs at Theorem Clinical Research.
“In the late 1980s, the FDA, in an attempt to be helpful, issued a guidance document regarding monitoring of trials in which there was an example that suggested that adequate monitoring of sites was every four to six weeks,” he says. “The industry seized on that statement in the guidance and that became the industry norm. But the FDA never meant to do that.”
The current process was developed by the industry to meet the perceived requirements of the FDA and other regulatory bodies to make sure the process is statistically correct, says Rick Morrison, CEO of Comprehend Systems.
“What has become apparent is that this process is extremely expensive and inefficient,” he says.
But now both the Food and Drug Administration and the European Medicines Agency have issued guidances to address clinical trial monitoring. Regulators have suggested an alternative way of monitoring the quality of trial data. The FDA’s guidance, for example, encourages sponsors to use electronic systems and focus oversight activity on preventing risks to data quality, patient safety, and trial integrity.
Industry experts say electronic data capture technology can be leveraged for use with a risk-based monitoring method to ensure quality.
“Some of the individual concepts that are being defined and talked about in risk-based monitoring have been evolving over the past few years,” says Craig Wozniak, head of Americas clinical operations at GlaxoSmithKline. “Risk-based monitoring brings a holistic approach that emphasizes risk assessment, key risk indicators, leveraging central data monitoring, and sampling approaches for onsite monitoring activities.”
Experts say risk-based monitoring represents a smarter way to monitor clinical trials, with a holistic, dynamic approach focusing on risk factors with the intent to increase patient safety and data integrity while maintaining compliance with industry regulations. In fact, PwC predicts the use of risk-based monitoring can yield potential trial cost savings of 15% to 20%.
Risk-based monitoring is a framework to assess risk in clinical trials and come up with strategies for addressing those risks.
“Risk-based monitoring is a mind shift away from how we have traditionally done monitoring, which is by overseeing compliance by sending a CRA out to a site and doing one-to-one source data verification,” says Dan White, VP of global operations at Quintiles.
Mr. White says regulators are trying to communicate that the industry has overengineered clinical monitoring.
“Regulators are recommending that companies start thinking outside the box,” he says. “In the draft guidance, there were references around using technology and focusing on the key data points and not assuming that 100% source data verification is going to produce the best quality data at the end of the day.”
This approach involves adjusting the monitoring strategy based on a level of risk, reflecting the reality that 20% of clinical trial sites contribute 80% of quality issues, says Chitra Lele, Ph.D., chief scientific officer at Sciformix.
“Both the FDA and European Medicines Agency are urging greater reliance on centralized monitoring practices to identify when on-site monitoring is truly required and this can be based on assessment of key risk indicators,” she says.
Mr. Wozniak says GlaxoSmithKline has aligned with these approaches and techniques.
“We are developing, refining, and piloting enhanced procedures and processes within that methodology and will continue to expand the use across our programs,” he says.
Sanofi is another company that is working to implement risk-based monitoring. The company is in the process of developing a plan across the entire portfolio, says Lori Convy, assistant director, clinical research monitoring, at Sanofi.
“We’re taking small steps as we look at our ongoing studies, learning more about the risk assessment process, and how we can implement this approach within our studies,” she says. “We are going through a convergence process, bringing all of the Sanofi entities under one umbrella and we’re attempting to unify all of our processes so that we are all working in the same systems and under the same SOPs, creating an opportunity for an organized transition to risk-based monitoring.”
Several years ago Sanofi changed its monitoring to a targeted random source data verification program, but that is still a manual process that CRAs find difficult and complicated. As the emphasis on source data verification decreases, the switch to risk-based monitoring will provide opportunities for remote monitoring, Ms. Convy says.
“CRAs will be able to have a more global perspective of a trial and can look more holistically at the data to determine when there is elevated risk at their sites,” she says. “We hope to be able to provide them with better tools to look at how their sites compare with other sites and to have a better understanding of the risk process. If a site appears to be an outlier in one capacity or another, CRAs can determine what is causing the discrepancy and how they can intervene as the monitor for the site.”
The FDA Guidance
The FDA’s guidance, issued in August 2013, stresses that a risk-based approach to monitoring does not suggest any less vigilance in oversight of clinical investigations. Rather, it focuses sponsor oversight activities on preventing or mitigating important and likely risks to data quality.
The guidance describes strategies that focus on critical study parameters and rely on a combination of monitoring activities to oversee a study effectively. For example, the guidance specifically encourages greater use of centralized monitoring methods when they are appropriate.
Mukhtar Ahmed, global VP, life-sciences product strategy, at Oracle Health Sciences, says there are several core principles to risk-based monitoring: early and ongoing risk assessment; building quality by design into the study; identifying and tracking critical processes and critical data; the use of risk indicators and thresholds; partial source data verification; and the use of centralized, off-site, and adaptive monitoring while the study is under way.
Mr. Bollwage says the agency is now suggesting that sponsors have a more thoughtful process of monitoring, with analysis of the risks associated with the particular clinical trial and using that analysis to determine the type of monitoring practices that are appropriate.
“There may be some studies where onsite visits are still a good idea but the FDA is recognizing that with the advent of electronic data entry technology, remote monitoring can be much more effective and more cost-effective than visiting sites,” he says.
Dr. Lele says an optimized monitoring strategy can be determined based on risk ratings and KRIs corresponding to other factors such as patient safety (rates of adverse events and serious adverse events), treatment compliance (percentage of patients with delayed or reduced dose or with treatment discontinuation), data management (delays in completing and sending case report forms, query rates, query resolution times), and other aspects of study conduct (actual vs. target recruitment rate, percentage of patients with protocol violations, percentage of dropouts).
“This approach requires a priori and ongoing evaluation and analysis of data on KRIs to define the monitoring strategy and plan and to adapt it in real time,” she says.
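The kind of KRI-driven logic Dr. Lele describes can be sketched in code. The indicators below mirror the categories she lists (safety, data management, recruitment, protocol compliance), but the threshold values, field names, and site data are hypothetical illustrations, not figures from any guidance or vendor:

```python
# Illustrative KRI-based site flagging. Thresholds are hypothetical; a real
# monitoring plan defines its own KRIs and limits per study.

def evaluate_site(site):
    """Return the list of key risk indicators a site has tripped."""
    flags = []
    if site["sae_rate"] > 0.10:              # safety: SAEs per patient
        flags.append("elevated SAE rate")
    if site["query_rate"] > 0.05:            # data management: queries per data point
        flags.append("high query rate")
    if site["enrollment"] < 0.5 * site["enrollment_target"]:
        flags.append("recruitment below target")
    if site["protocol_deviation_rate"] > 0.08:
        flags.append("frequent protocol deviations")
    return flags

sites = [
    {"name": "Site 101", "sae_rate": 0.04, "query_rate": 0.02,
     "enrollment": 18, "enrollment_target": 20, "protocol_deviation_rate": 0.01},
    {"name": "Site 102", "sae_rate": 0.15, "query_rate": 0.09,
     "enrollment": 5, "enrollment_target": 20, "protocol_deviation_rate": 0.12},
]

for site in sites:
    flags = evaluate_site(site)
    action = "schedule on-site visit" if flags else "continue remote monitoring"
    print(f'{site["name"]}: {action} ({", ".join(flags) or "no KRIs tripped"})')
```

Run against fresh data on a schedule, a check like this is what lets the monitoring plan adapt "in real time": a site that trips no indicators stays on remote monitoring, while one that trips several is prioritized for an on-site visit.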
Mr. Bollwage says the guidance allows for flexibility for companies to make determinations on a case-by-case basis about what makes sense and what protects patients and protects the integrity of the data in the monitoring choices being selected.
“The important thing to emphasize here is that the FDA is asking the industry to think,” Mr. Bollwage says. “Regulators are challenging the industry to think about risk and the safety monitoring plans. They want companies to consider reviewing data remotely on a real-time basis to have the opportunity to identify certain safety issues and problems even earlier than could be determined with an onsite visit.”
Andy Grieve, senior VP of clinical trial methodologies at Aptiv Solutions, says the FDA’s guidance was a response to the realization that sponsors are using a huge amount of resources to check data that are, in fact, accurate. He notes that comparisons of databases with source documents find error rates of only about 2.5%.
“Many sponsors have spent resources on 100% source data verification only to find that the data are accurate and reflect what is in the source,” he says. “When monitors spend their time doing source data verification, there is little time left for other strategies that can improve site data quality. For example, it might be a more valuable use of the monitor’s time to talk to the research staff.”
Mr. Bollwage says by using a risk-based monitoring approach, monitors are better able to address other issues that might be impacting data quality.
“This type of monitoring would be done based on the protocol,” he says. “It would depend on the drug itself, the risk associated with the drug, any risks that have been identified in a preclinical research setting, the protocol, how many patients are exposed, how aggressive the dosing is going to be, etc. A risk assessment can be generated on a protocol basis and that can determine the best use of onsite monitoring and remote monitoring.”
Assessing the Risk
According to Mr. Given, risk factors should be assessed at the portfolio level (e.g., therapeutic area or IP category) when designing clinical trials using quality-by-design principles and at the study level, where risk factors manifest themselves in various ways.
“There are three different categories of risk that may impact the outcome of any clinical trial: poor performing sites, unexpected changes to key safety and efficacy data, and protocol design compliance issues,” he says. “An issue that falls into each of these categories may develop in any clinical trial. It is, therefore, critical to have data analytics that look at all three areas of risk.
“Poor performing sites may generate inaccurate or incomplete clinical data,” he adds. “Additionally, the clinical data collected may reveal a safety issue associated with a compound, indicating the need for additional investigation. An example of the third area of risk could be that a trial’s protocol has been designed in such a way that its primary objectives may not be met due to a dosing compliance issue that may diminish the efficacy of the investigational product, for example.”
Jill Collins, senior director, integrated clinical processes, at INC Research, says when assessing the factors that lead to risk it’s best to look at studies holistically and assess the risks along the life cycle of the programs, specifically evaluating the medical and scientific risks, the regulatory risks, and the operational risks.
“By assessing these areas it’s possible to identify potential impacts to patient safety, delivery of the study, and barriers to approval,” she says. “In addition, there are opportunities to eliminate risk early in the life cycle of the study, and where it cannot be eliminated, plan mitigation strategies.”
Dr. Buyse says a wide variety of factors can lead to risk in clinical studies including, but not limited to, human error, sloppiness, faulty processes, fraud, and the fabrication of data.
“Certain data anomalies and other non-random data distribution may be more readily detected using risk-based strategies, and as a result overall quality of data can be improved.”
Role of Technology
As with many areas of clinical research, the field of risk-based monitoring has been optimized through the introduction of automated technologies that are making processes more efficient.
The availability of data and its analysis in real time is a major prerequisite of effective risk-based monitoring, Dr. Lele says.
“This can be achieved by implementing tools and technologies such as EDC,” she says. “Other tools provide seamless integration of data from various sources and access to the data on a common platform. In addition, data visualization tools are important for the successful execution of a risk-based monitoring plan. Several technology-based solutions have been developed, which integrate all data, analysis, and visualization needs into a single tool/product.”
Mr. Morrison says combining different data collection systems and putting in real-time analytics and alerting tools allow companies to automatically monitor data for statistical anomalies and common key risk indicators.
“They can set up alerts and incorporate these into a monitoring plan to proactively make sure that big problems don’t happen,” he says. “Technology can improve monitoring more than ever before.”
Any problems with overall data can be picked up by other systems, Mr. Grieve says. “Data management has checks that will pick up gross errors,” he says. “Statisticians have approaches to identify outliers in data. There are other systems backing up source data verification and there are certain logical checks that can be made.”
The shift to a more holistic and ongoing risk assessment model and the use of critical data and processes actually places greater demand on clinical systems, Mr. Ahmed says.
“The move to targeted source data verification and centralized, off-site, and adaptive monitoring, results in a new e-clinical ‘footprint’ to enable risk-based monitoring,” he says. “The core clinical systems required for effective and proactive risk-based monitoring include activity-based study modeling, clinical development scenario simulations and impact analysis, and true end-to-end clinical data management including planning, study startup and design, EDC, and data warehousing, CTMS, and safety.”
“In the FDA’s current thinking on clinical trial oversight and the development of RBM strategies, several techniques are listed that may be considered by sponsors,” Dr. Buyse says. “One such method is the use of statistical techniques in the form of central statistical monitoring, or CSM. CSM determines the expected values of each variable by assessing the data from all investigative sites involved in a study in order to identify statistical outliers.”
Through considered allocation of resources for monitoring clinical studies, sponsors have the opportunity to realize considerable cost and time savings, while improving overall data quality, Dr. Buyse says.
“However, to date, risk-based monitoring approaches have largely relied on key risk indicators, summary statistics that are predefined by the sponsor that potentially reveal deviations in the study conduct,” he says. “Although KRIs are effective and should be part of an RBM strategy, they identify centers at risk based on predefined variables and known risk factors. As such, this methodology may overlook hard-to-detect data issues.
“By comparison, the use of central statistical monitoring technologies is helping to overcome these limitations by not solely focusing on predefined criteria,” he continues. “Instead, the technique used is agnostic and analyzes all data to detect outlying investigative sites. As a result, the approach is able to detect issues, such as a lack of variability or implausible values, that are unlikely to be detected by other methods.”
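The core idea Dr. Buyse describes, comparing each site against the cross-site distribution rather than against predefined thresholds, can be illustrated with a minimal sketch. The data, variable, and z-score cutoff are hypothetical, and this is not the method of any named CSM product:

```python
# Minimal central statistical monitoring sketch: flag sites whose mean for a
# variable deviates from the pool of site means by more than a z-score cutoff.
from statistics import mean, stdev

def flag_outlier_sites(site_values, z_threshold=2.0):
    """site_values: dict mapping site -> measurements for one variable.
    Returns sites whose mean is more than z_threshold SDs from the pooled
    mean of site means. (A real CSM engine would run many such tests per
    variable, including checks for implausibly low within-site variability.)"""
    site_means = {s: mean(v) for s, v in site_values.items()}
    pool = list(site_means.values())
    mu, sd = mean(pool), stdev(pool)
    return [s for s, m in site_means.items() if abs(m - mu) > z_threshold * sd]

# Hypothetical systolic blood pressure readings by site.
readings = {
    "Site A": [122, 124, 126], "Site B": [123, 125, 127],
    "Site C": [121, 123, 125], "Site D": [124, 126, 128],
    "Site E": [122, 124, 126], "Site F": [123, 125, 127],
    "Site G": [121, 123, 125],
    "Site H": [149, 150, 151],   # implausibly high and uniform
}
print(flag_outlier_sites(readings))   # → ['Site H']
```

Because the test is agnostic to what the variable means, the same check catches sites that are out of line for reasons no predefined KRI anticipated, which is the advantage over a purely KRI-driven approach.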
The CRA Advantage
The role of monitors is more important than ever in a risk-based approach, Ms. Collins says.
“Both on-site and central monitors have an essential role in this type of monitoring model,” she adds. “They will be engaged in high-value tasks with a greater emphasis on overseeing critical site processes and gaining early insight into when a site has any risk indicators. They will have a great opportunity to course correct at the earliest time point, mitigating significant issues.”
Mr. Ahmed says since source data verification is a time-consuming activity for monitors, a risk-based approach allows CRAs to focus on more critical compliance activities on site such as patient informed consent, completion of essential documents, and site compliance with the study protocol and GCP.
“Mobile apps are emerging that provide CRAs with visibility into real-time trial and site operational data that help them determine, based on critical data and process indicators such as screen failure rates, if they can skip a planned visit or make an unplanned visit,” he says. “CRAs can leverage this increased mobility to help them manage the on-site visits they do make so they are more efficient and productive.”
Mr. Wozniak says CRAs still have a key role in monitoring.
“It is really important for us to streamline communications with the sites and be clear in our accountabilities internally,” he says. “I see the CRA as having a critical point of accountability for engaging the site staff, managing that relationship, communicating and discussing issues, retraining, and working with the sites to effectively execute the protocol and use systems.”
He says the monitors of tomorrow will need to be able to navigate new systems and interpret information from key risk indicators and monitoring to help determine where and when to mitigate.
Dr. Lele says to get most from the time spent on site, a monitor should review data for trends and identify potential issues.
“The monitor continues to have accountability to verify and confirm any issues that may have been remotely identified as potential issues,” she says.
“There’s also an important in-house component to the monitor’s role in the risk-based monitoring world. In addition to having the required insights and understanding of the protocol, the sites, and the data, monitors are now working closely with the data management and clinical programming teams to provide data quality support.”
“Monitoring can take up to 30% of the cost of a clinical study, and it can be very inefficient.” Rick Morrison / Comprehend Systems
“Risk-based monitoring is a smart approach that attempts to make monitoring more effective by focusing on known problem areas or risks.” Dr. Chitra Lele / Sciformix
“When monitors spend their time doing source data verification, there is little time left for those other strategies that can improve site data quality.” Andy Grieve / Aptiv Solutions
“Risk-based monitoring is a mind shift away from how we have traditionally done monitoring in the past.” Dan White / Quintiles
“Today, there’s mounting evidence that 100% source data verification minimally impacts data quality. Thus, many organizations have started actively redefining their approach to monitoring with a more risk-based focus.” Kyle Given / Medidata Solutions
“With risk-based monitoring, there are true opportunities to drive and improve the quality of development and get people thinking holistically about the approaches.” Craig Wozniak / GlaxoSmithKline
“With the advent of electronic data entry technology, it’s possible to monitor remotely much more effectively and cost-effectively than by visiting sites.” Brian Bollwage / Theorem Clinical Research
“Traditional on-site monitoring demands hundreds of man hours, which in the current climate sponsors simply cannot afford.” Dr. Marc Buyse / CluePoints
Realizing Value from Risk-based Monitoring
- Assess technology: Companies interested in pursuing risk-based monitoring should first map the route to integration. They should evaluate current technologies and identify gaps.
- Frame risk: Companies should map a risk-evaluation framework, from the initial risk assessment to how the algorithm will interpret the data, evaluating fixed and dynamic risks as well as the parameters that contribute to risk.
- Define roles, governance, and process: An end-to-end process that defines the roles, responsibilities, and activities associated with risk-based monitoring is critical to an effective rollout.
- Pilot and roll out the plan: The final piece of the integration is pilot and rollout. As new trials begin, a company can gradually transform, for instance, all therapeutic areas to risk-based monitoring. Prior to the pilot, companies should begin a change management plan.
Best Practices for Risk-Based Monitoring
Industry experts discuss how to make the most of risk-based monitoring.
Instead of treating risk-based monitoring as a bolt-on addition to current monitoring practices, companies should fully integrate it into their R&D organizations. This requires financial, operational, and time commitments, according to experts at PwC.
Mukhtar Ahmed, global VP, life-sciences product strategy at Oracle Health Sciences, says successful risk-based monitoring comes down to three factors: first, the up-front risk assessment identifying the critical data and processes for that study; second, the consistent and quality design and application of that risk assessment across both the organization — for example, people, partners, processes, and investigator sites — and across the trial workflow — for example, clinical systems, protocol design, and trial design — and finally, the ability to quickly and easily track the critical data and processes identified in the risk assessment.
“One interesting finding from the TransCelerate consortium was that source data verification may be less critical to lowering risk than thought,” Mr. Ahmed says. “TransCelerate member companies conducted a retrospective analysis to assess queries identified via source data verification to find the percentage of queries found in critical data. The total was only 2.4%, suggesting that source data verification has little impact on the quality of the data. In addition, other studies suggest that data anomalies and fraud, such as non-random data distributions and fabrication of data, may be more easily detected by centralized monitoring techniques than by on-site monitoring.”
Risk-based monitoring is a methodology and not a standard, with common principles generally shared across regulatory authorities and industry, Mr. Ahmed says.
“That said, the FDA guidance and the TransCelerate position paper, both issued in 2013, offer guidelines on how to approach risk-based monitoring,” he says.
In June 2013, TransCelerate released a methodology that shifts monitoring processes from excessive concentration on source data verification to comprehensive risk-driven monitoring. Instead of relying heavily on on-site monitoring, which severely limits the ability to identify and prevent issues, TransCelerate’s recommendations are driven by centralized and off-site monitoring techniques, as well as adaptive on-site monitoring. This approach makes it possible to oversee study parameters holistically and maximize on-site monitoring findings, bringing into balance effort and value gained, while mitigating risks and detecting any issues early, or preventing them entirely.
It’s important to establish a risk assessment paradigm, says Brian Bollwage, VP of global regulatory affairs at Theorem Clinical Research.
“This should be put in place early in the process so that as the protocol is being developed and the sample size and primary endpoints are being determined, a risk analysis can be performed and be put in place so that when the protocol is being finalized there is a clear view as to what the monitoring plan should be,” he says. “The monitoring plan should not be designed after the fact; it is something that should be designed along with the development of the protocol.”
Rick Morrison, CEO of Comprehend Systems, says a best practice is to ensure technology is in place to allow data to be auditable.
Andy Grieve, senior VP of clinical trial methodologies at Aptiv Solutions, says it’s important not to lose sight of the big picture and the objective of the clinical trial.
“The objective of the clinical trial is to make decisions about the effect of a treatment,” he says. “The question I would pose to sponsors is: how can we be confident we are coming to the right conclusion? We need to be confident that the data allow us to make the appropriate decisions about the effectiveness of a treatment.”
Mr. Grieve suggests companies consider a hybrid approach to monitoring, where data that sponsors deem to be critical, including primary endpoint and safety data, will be checked using 100% verification, and noncritical data, however that is defined, will be checked using a statistical approach.
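The hybrid approach Mr. Grieve suggests can be sketched as a simple selection rule: critical fields always get SDV, noncritical fields are sampled at a fixed rate. The field names, the set of critical fields, and the sampling rate below are all hypothetical, chosen only to illustrate the split:

```python
# Sketch of hybrid verification: 100% SDV for critical data, statistical
# sampling for everything else. All field names and rates are illustrative.
import random

CRITICAL_FIELDS = {"primary_endpoint", "serious_adverse_event"}

def fields_to_verify(record, sample_rate=0.2, rng=None):
    """Pick which fields of one subject record a monitor should source-verify."""
    rng = rng or random.Random()
    selected = []
    for field in record:
        if field in CRITICAL_FIELDS:
            selected.append(field)            # critical data: always verified
        elif rng.random() < sample_rate:      # noncritical data: random sample
            selected.append(field)
    return selected

record = {"primary_endpoint": 4.2, "serious_adverse_event": None,
          "visit_date": "2014-03-02", "concomitant_meds": "none",
          "height_cm": 172}
print(fields_to_verify(record, sample_rate=0.2, rng=random.Random(1)))
```

In practice the sampling scheme would be defined statistically in the monitoring plan (and could tighten or relax per site based on risk), but the principle is the same: verification effort concentrates on the data that drive the trial’s conclusions.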
Craig Wozniak, head of Americas clinical operations at GlaxoSmithKline, says it’s important for teams to have a well-designed plan, be well trained, motivated, and have strong leadership and communication.
“Successful implementation requires good leadership, as well as well-trained staff,” he says. “It’s important for a company to put in the right resources and to think about all of the stakeholders involved. Risk-based monitoring is not just about CRAs. It is an approach that will have an impact on all functional areas that make up a study team within the sponsor company and the CRO.”
Mr. Wozniak says sponsors also need to think about sites.
“Sponsor and CRO staff need to clearly articulate what risk-based monitoring is and how it will affect site staff,” he says.
Mr. Wozniak says internally, pharmaceutical companies have to have leadership that will champion a risk-based approach within the organization.
“Risk-based monitoring involves cross-functional areas,” he says. “It’s really important for all of the groups — data management, statistics, clinical monitoring — to be on the same team designing an approach to implement risk-based monitoring.”
Lori Convy, assistant director, clinical research monitoring at Sanofi, stresses the need for understanding change management issues.
“Companies have to understand the level of impact on the organization beyond just the systems,” she says. “It’s about effectively getting the message out about what risk-based monitoring is and how the company plans to implement this model in the future.”
Kyle Given, principal at Medidata Solutions, says when it comes to risk-based monitoring, companies should consider several factors.
“Careful selection of risk factors is important,” he says. “Too many risk factors may end up creating signal noise, limiting the ability to detect quality signals effectively. Also, data analytics related to risk factors must be programmed with high sensitivity to quickly and reliably alert study teams to developing quality issues as well as quality improvements at the study or site level.”
Jill Collins, senior director, integrated clinical processes, at INC Research, says the most important best practice is to remember risk-based monitoring, or strategic data monitoring, is not a one-size-fits-all solution when it comes to implementation.
“Each study has unique features and requirements that when assessed for risks require an individualized approach,” she adds. “The approach to evaluating risk and planning a monitoring strategy should follow a standard format but retain the flexibility to provide a workable solution. The importance of change management also cannot be overstated. To make strategic data monitoring work efficiently requires a shift in processes and mindset, so it is critical that these changes in approach are carefully managed to produce the best results.”
“As the protocol is being developed and the sample size and primary endpoints are being determined, a risk analysis needs to be performed and put in place so that when the protocol is being finalized there is a clear view as to what the monitoring plan should be.” Brian Bollwage / Theorem Clinical Research
“Risk-based monitoring is an approach that will have an impact on all functional areas that make up a study team within the sponsor company and the CRO.” Craig Wozniak / GlaxoSmithKline
Risk-Based Monitoring Key Best Practices
- Identify critical data and processes to be monitored (e.g., verification that informed consent was obtained appropriately)
- Conduct a risk assessment to identify potential causes of risk that could affect the collection of critical data or the performance of critical processes
- Consider key factors, such as complexity of the study design, types of study end-points, and clinical complexity of the study population
- Create a well-designed and articulated protocol as well as a case report form (CRF) that captures data accurately and facilitates consistent data collection across investigator sites
Source: Mukhtar Ahmed, Oracle Health Sciences