In an era when the industry is being pressed to do more with less, interest in AI solutions is intense.
About 70% of pharma executives surveyed by Define Ventures said they consider AI an immediate priority. AI systems are already being used to identify the most promising compounds to speed drug development, assist with medical writing, and find trial sites and patients. An analysis from McKinsey & Company found that companies adding AI and machine learning can increase enrollment by as much as 20%, accelerating trial timelines.
Now agentic AI, a type of system that marks a significant shift from traditional AI models, is poised to make an impact in clinical research.
Clinical research associates have long been advisors for research sites. But as clinical trials grow increasingly complex, many now feel more like harried data cops, chasing down missing entries.
“Their role has changed and become more of a data aggregator, data chaser, nag to the sites,” said Andrew Mackinnon, executive general manager for customer value at Medable.
Agentic AI, which is capable of carrying out limited tasks independently, can take over many burdensome, repetitive duties. These systems can perform certain functions entirely, such as analyzing datasets, monitoring enrollment rates or identifying compliance risks. Rather than just reporting that information and making suggestions, they can decide on and take corrective actions to get things back on course, according to IQVIA.
"An imperative of the use of agentic AI is to make [R&D] more efficient so that we can do more."

Andrew Mackinnon
Executive general manager for customer value, Medable
Gartner predicts that by 2028, at least 15% of day-to-day work decisions will be made autonomously through agentic AI.
Medable recently rolled out Agent Studio, which it says is the industry’s first agentic AI platform, along with a clinical trials monitoring assistant, CRA Agent. It allows users to custom-configure AI agents to perform specified jobs, such as monitoring data submissions, flagging missing information and asking sites to make the needed additions, without continuous human prompts.
But allowing AI to act independently requires careful planning and knowing when to allow the technology to run free and when to call in human support, Mackinnon said.
Mitigating potential risks
The goal with agentic AI is to assign tasks to humans or AI based on their strengths.
“Humans are really great at being creative and understanding problems. What we’re not quite so adept at, largely because it’s a fairly dull dredging task, is crawling different databases and systems, trying to find the data, extract it and put it into some kind of a workable format and then being able to use that data to tell you something,” Mackinnon said.
AI can now do a lot of that work and provide a human with a draft to review.
In addition to assigning the right tasks to agentic AI, companies need to carefully select the problems they want it to solve so that it doesn’t create noise and frustration.
“A lot of people throw as much AI at stuff as possible and hope something sticks,” Mackinnon said.
Risk evaluations are also critical to determine when systems should and should not work alone. AI often makes mistakes. It’s also known to create data or facts out of whole cloth when it struggles to find a reality-based answer, Mackinnon said.
To keep AI from glitching or going rogue, protections that control the agent’s actions must be programmed into the system, Mackinnon said. For example, if AI scans a system and finds no data, it should be told to identify the gap, not make something up.
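That kind of guardrail can be sketched in code. The example below is purely illustrative, not Medable’s implementation: all names (`review_site`, `AgentFinding`) are hypothetical, and the point is simply that an empty scan must produce a flagged gap rather than generated content.

```python
# Illustrative guardrail sketch: when a scan returns no data, the agent must
# report the gap for human follow-up instead of fabricating an answer.
# All names here are hypothetical, not from any vendor's API.
from dataclasses import dataclass

@dataclass
class AgentFinding:
    site_id: str
    status: str   # "ok" or "data_gap"
    detail: str

def review_site(site_id: str, records: list[dict]) -> AgentFinding:
    # Guardrail: an empty scan surfaces the gap, never a made-up summary.
    if not records:
        return AgentFinding(site_id, "data_gap",
                            "No records found; flagging for human follow-up.")
    missing = [r["field"] for r in records if r.get("value") is None]
    if missing:
        return AgentFinding(site_id, "data_gap",
                            "Missing entries: " + ", ".join(missing))
    return AgentFinding(site_id, "ok", "All expected entries present.")
```

In this sketch the agent has no code path that invents data: every branch either reports what exists or names what is missing.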
For higher-stakes decisions, systems need to keep a human in the loop. If AI is reviewing a database to identify the worst-performing sites, that’s a very low-risk task.
“There's very little that could go wrong in that situation,” Mackinnon said.
But for safety monitoring or regulatory reporting, the AI agent would bring in the CRA to review evidence instead of acting independently, he said. Another safeguard is ensuring the agent’s work is transparent to allow for oversight and auditing.
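The escalation logic described above can be illustrated with a simple routing sketch. This is an assumption about how such a rule might look, not an actual product’s risk taxonomy; the task names and tiers are invented for the example.

```python
# Hypothetical sketch of risk-tiered routing: low-risk tasks run autonomously,
# while safety or regulatory tasks are escalated to a human CRA for review.
# Task names and tiers are illustrative assumptions only.
LOW_RISK = {"rank_site_performance", "flag_missing_entries"}
HIGH_RISK = {"safety_monitoring", "regulatory_reporting"}

def route_task(task: str) -> str:
    """Return who acts on the task: the agent alone, or a human reviewer."""
    if task in HIGH_RISK:
        return "escalate_to_cra"   # human in the loop reviews the evidence
    if task in LOW_RISK:
        return "agent_autonomous"  # little can go wrong; act and log for audit
    return "escalate_to_cra"       # default unknown tasks to human review
```

Defaulting unknown tasks to human review keeps oversight intact even as new duties are handed to the agent, and logging autonomous actions preserves the audit trail the article mentions.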
The future of AI
While AI systems continue their creep into everyday life, some areas of adoption in healthcare will be slower, Mackinnon said.
“Anything that interacts directly with patients is far more fraught with ethical responsibilities and guidelines,” he said.
However, in areas such as data analysis and review, the industry will likely see rapid innovation and progress.
Adding AI to clinical trials could also allow companies to move drugs through the pipeline faster. While some fear that AI will put humans out of work, Mackinnon said it might just reshuffle responsibilities and help clear the current backlog.
The FDA approved only 50 novel drugs last year, and the annual tally is often lower.
“That’s not enough. We need to be doing a much better job of putting drugs through that process,” Mackinnon said, noting that promising drugs are often deprioritized because there’s not enough time to move them forward.
“I think that an imperative of the use of agentic AI is to make it more efficient so that we can do more,” he said.