The use of AI for evidence generation from healthcare literature
By Matt Michelson of Genesis Research, creator of EVID AI
Within life science companies, different functions ask different questions, but often utilize the same data source – the healthcare literature. For example, value and access departments perform systematic literature reviews (SLRs) to craft unbiased assessments; clinical development leverages animal-model studies to inform hypotheses; and real-world evidence (RWE) researchers validate findings from claims-based studies with literature support on the topic.
And yet the medical and healthcare literature has only become more unwieldy over time. Vast volumes of papers are published each year, at an accelerating rate. While this creates more opportunity to find evidence, it also makes that evidence harder to find.
Fortunately, artificial intelligence (AI) has begun to alleviate the burden – allowing scientists to cover the breadth of the literature while minimizing the effort required to find the important results. We are no longer limited by human reading capacity: well-designed AI tools such as our own EVID AI literature review platform make it possible to surface all relevant reports and data, not just those that could be read in a specific time period.
However, it’s important to note that this shift was not brought about by technological advances alone; it also reflects important cultural and ethical changes, three of which I highlight below.
The evolving technology aside, perhaps the most significant change has been the growing acceptance of, and comfort with, AI tools. Until recently, there was much skepticism about the use of artificial intelligence in healthcare after some businesses over-promised and under-delivered on its potential. Acceptance is growing again, however. One example is a recent ISPOR workshop on AI for health technology assessment (HTA) submissions, where the panelists included ourselves alongside scientists from Novartis and the UK’s National Institute for Health and Care Excellence (NICE). This isn’t unique to life sciences: people are becoming more comfortable with self-parking cars, smartphone assistants, and even doorbells that alert you when a person approaches.
Transparency and other ethical considerations
As AI algorithms have become increasingly sophisticated, the challenge for end users is to understand exactly how they work – for example, why an algorithm selects a particular phrase or makes a particular prediction. Without understanding the ‘why’, concerns over trust and transparency arise (an issue that was raised at the ISPOR workshop). Those seeking insights from the literature need to understand the context and relevance of a finding; seeing only the results an AI provides, without that context, is confusing.
To address this, Genesis Research intentionally takes a transparent approach, providing ‘provenance’ for data generated by EVID AI. A user can trace a piece of evidence from the extracted result to the supporting sentence to the original source article, ensuring there is a path to reproducibility and transparency. In fact, we recently participated in a MAPS workshop focused on how medical ethics should be taken into consideration when developing AI systems. Part of the reason for adoption is the comfort users feel when they can understand what’s happening, or at least trace an AI data point back to its source.
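To illustrate the idea of a provenance chain – result, to supporting sentence, to source article – here is a minimal sketch in Python. This is purely hypothetical and not EVID AI’s actual implementation; all class and field names below are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class SourceArticle:
    """The original publication an extracted result came from."""
    title: str
    doi: str


@dataclass
class EvidenceRecord:
    """One extracted data point with full provenance back to its source."""
    extracted_result: str     # the structured finding shown to the user
    supporting_sentence: str  # the sentence the result was extracted from
    source: SourceArticle     # the article containing that sentence

    def provenance(self) -> str:
        """Human-readable trace: result <- sentence <- article."""
        return (f"{self.extracted_result!r} <- {self.supporting_sentence!r} "
                f"<- {self.source.title} (doi:{self.source.doi})")


record = EvidenceRecord(
    extracted_result="HR 0.72 (95% CI 0.61-0.85)",
    supporting_sentence=("Treatment reduced risk of progression "
                         "(HR 0.72, 95% CI 0.61-0.85)."),
    source=SourceArticle(title="Example Trial Report", doi="10.1000/example"),
)
print(record.provenance())
```

The point of a structure like this is that every number a reviewer sees carries its own audit trail, so a claim can always be checked against the paper it came from.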
Finally, in addition to growing adoption and the focus on ethical considerations, another important non-technical driver of improved AI capabilities within life sciences is alignment of purpose. AI has evolved from a ‘whiz-bang’ capability – technologically advanced but not always practical – into systems purpose-built to help users with specific, meaningful tasks.
For instance, our EVID AI platform focuses specifically on easing the burden of surfacing results from papers for evidence generation. Other AI systems focus on scoping out molecules as drug targets for specific indications, or on unearthing cohorts of patients from claims data that lack easily identifiable codes. While diverse in nature, all these systems are built for specific tasks – and do them well.
To the future!
We are now on the cusp of a time when onerous tasks in life sciences will be eased by machines, freeing workers to focus on more creative and analytical work. By combining technical progress with cultural adoption, the future of AI and human intelligence (HI) is as vast as it is exciting!
To find out more about EVID AI and our range of technology-assisted reviews and solutions, please email us at [email protected], view our introductory video here, or watch Matt’s presentation on ‘The use of AI to support evidence development’.