How AI is reshaping clinical trials
When a drug fails, it’s not always because it doesn’t work. In many cases, the problem lies in trial design. This is especially true in mental health research, where studies of conditions like depression and post-traumatic stress disorder rely on subjective symptom ratings rather than objective measures such as tumor size or blood markers, a gap that can introduce bias.
Symptom ratings are sometimes inflated by investigators eager to help a patient meet study criteria, according to research published in the Journal for Clinical Studies. Tweaking these numbers may boost enrollment, but it can lessen the chances a drug will succeed. When baseline scores are artificially high, it becomes harder to see the actual difference between the drug and control groups, said Gary Zammit, founder of Clinilabs, a clinical research organization focused on the central nervous system.
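To make that mechanism concrete, here is a toy simulation with invented numbers, not data from any real trial. It assumes one plausible pathway: raters nudge baseline scores upward so milder patients clear the entry threshold, and because the drug’s benefit scales with true severity, the observed drug-placebo separation shrinks.

```python
import random

random.seed(42)

# Illustrative only: entry requires a recorded baseline score of at
# least 20, and the drug's true benefit is assumed to scale with how
# severe the patient actually is.
THRESHOLD = 20

def observed_separation(inflation: float, n: int = 20_000) -> float:
    """Mean drug-vs-placebo difference in change scores among enrollees."""
    diffs = []
    while len(diffs) < n:
        true_baseline = random.gauss(18, 4)
        recorded = true_baseline + inflation  # rater nudges the score up
        if recorded < THRESHOLD:
            continue  # patient not enrolled
        drug_benefit = 0.3 * true_baseline    # milder patients benefit less
        noise = random.gauss(0, 2)
        diffs.append(drug_benefit + noise)
    return sum(diffs) / n

print(observed_separation(0.0))  # severe patients only: larger separation
print(observed_separation(4.0))  # milder patients slip in: smaller separation
```

Running this shows the separation dropping as inflation rises, which is the dilution effect Zammit describes.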
This long-held hurdle to improving outcomes is one of the many issues in clinical trials AI is being tapped to fix.
“AI models not only seem to be more accurate in their ratings, but they help to eliminate the biases that investigators might bring to assessment,” Zammit said.
AI is making its mark in drug development in other ways as well.
Promoting safety and expanding diversity
AI also holds substantial potential in safety monitoring, where it is being used to mine trial data and detect safety signals before patients experience an adverse event, Zammit said.
Companies are also exploring whether AI can shrink trial size using so-called digital twin studies, which use computational models to predict a patient’s trajectory if they were taking a placebo.
“In essence, you get an AI-generated or artificial twin that can be used by pharmaceutical companies to derive estimates on sample sizes and on statistical approaches that might be taken in the study,” Zammit said.
This digital twin technology can help fill diversity gaps in drug trials by helping to predict outcomes in different populations such as pregnant women, children or groups that are difficult to enroll due to ethical or regulatory challenges, according to an article in Nature.
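One simple way to picture the digital twin idea is a nearest-neighbor sketch: predict a new patient’s placebo trajectory from the outcomes of the most similar patients in historical placebo-arm data. This is not any company’s actual system, and every number below is invented.

```python
import random

random.seed(1)

# Invented historical placebo-arm records:
# (baseline_score, age) -> week-8 symptom score
history = []
for _ in range(1000):
    baseline = random.gauss(22, 3)
    age = random.gauss(45, 12)
    week8 = 0.7 * baseline + 0.02 * age + random.gauss(0, 2)
    history.append(((baseline, age), week8))

def twin_prediction(baseline: float, age: float, k: int = 25) -> float:
    """Predict a placebo outcome as the mean outcome of the k most
    similar historical placebo patients (a nearest-neighbor 'twin')."""
    ranked = sorted(history,
                    key=lambda rec: (rec[0][0] - baseline) ** 2
                                    + ((rec[0][1] - age) / 10) ** 2)
    return sum(outcome for _, outcome in ranked[:k]) / k

# A new participant's twin supplies the expected placebo outcome,
# so their observed response on the drug can be compared against it.
print(twin_prediction(baseline=24, age=50))
```

Real digital-twin systems use far richer models and data, but the core move is the same: substitute a model-predicted counterfactual for some of the patients a control arm would otherwise require.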
AI is also delivering measurable gains in efficiency. A McKinsey & Company analysis found that companies adding AI and machine learning can increase trial enrollment by 10% to 20%, which can lead to faster trials.
One study at Mass General Brigham Health System in Boston found that an AI-assisted patient screening tool significantly reduced the time it took to determine trial eligibility and enrollment compared with manual screening, according to a research letter published in JAMA. The rise of agentic AI, which can carry out some tasks independently, could further speed processes.
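The repetitive work such screening tools automate can be sketched as structured criteria checked against a patient record. The fields and thresholds below are invented for illustration; production tools parse free-text clinical notes rather than clean structured data.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    baseline_score: float
    on_excluded_med: bool

# Hypothetical eligibility criteria, each paired with a check.
CRITERIA = [
    ("18 to 65 years old", lambda p: 18 <= p.age <= 65),
    ("baseline score >= 20", lambda p: p.baseline_score >= 20),
    ("not on an excluded medication", lambda p: not p.on_excluded_med),
]

def screen(patient: Patient) -> tuple[bool, list[str]]:
    """Return eligibility plus the list of failed criteria."""
    failed = [name for name, check in CRITERIA if not check(patient)]
    return (not failed, failed)

eligible, failed = screen(Patient(age=72, baseline_score=24,
                                  on_excluded_med=False))
print(eligible, failed)  # ineligible: fails the age criterion
```

Checking dozens of such criteria per candidate, across thousands of candidates, is exactly the bottleneck where automated screening saves time.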
Together, small gains could add up to substantial time savings.
“If you can shave off 5% of the time required for a clinical trial by improving subject recruitment and another 5% by improving subject screening, and reduce the sample size because now you’ve got a better edge on rating accuracy,” Zammit said. “Taken in the aggregate, that can have a huge impact on what we’re able to deliver to patients.”
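As a back-of-envelope reading of that quote, the two 5% savings compound rather than simply add. The 30-month trial length here is invented for illustration; only the 5% figures come from the quote.

```python
# Compounding the savings Zammit describes (illustrative numbers).
months = 30
after_recruitment = months * (1 - 0.05)           # 5% saved on recruitment
after_screening = after_recruitment * (1 - 0.05)  # another 5% on screening
print(round(after_screening, 2))  # months remaining, before any
                                  # sample-size reduction is counted
```

Nearly three months saved on a single trial, before counting the smaller sample size, is the kind of aggregate impact he points to.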
Oversight is critical
Despite growing enthusiasm for AI, its full impact is still being realized, Zammit said.
“People see the promise, but adoption is going to take some time, and it’s only after that adoption that we're going to be able to see the real value,” he said.
As the technology evolves, it’s critical to ensure AI won’t introduce errors into the system and that people know how the technology is arriving at its conclusions.
“For us as scientists and people who have to produce datasets with integrity, we need to have a good understanding of that underlying process,” Zammit said.
One of the largest concerns about AI is that its models could be systematically biased, depending on the logic used to develop them. AI that’s asked to assess the efficacy of a particular therapeutic, for instance, might generate its response based on historical data.
“But the data it used could be based on narrowly defined populations. Maybe there wasn't enough diversity in that clinical trial data set. So, the caution is in understanding what's behind AI’s answer,” he said.
Human oversight will remain critical as the technology expands its footprint.
“AI is not going to replace the ethical oversight of studies,” Zammit said. “We still need people.”
For example, as companies integrate AI systems to monitor for potential safety signals, the role of a local qualified person for pharmacovigilance will evolve, not disappear, according to IQVIA. The position will simply gain a new focus on ensuring that safety monitoring is accurate, compliant with regulations and ethical.