The Post-Covid Future of AI for Drug Development

Since the global Covid-19 outbreak began in early 2020, pharma companies and biotechs have increasingly turned to artificial intelligence to improve precision and speed in drug development.

Artificial intelligence (AI), the ability of machines to learn from new input, is a broad term for a range of computing methods. Recommendation engines used by online shopping or streaming services use forms of AI to learn consumer preferences and tailor recommendations accordingly. This same technology can be used to predict which drugs are more likely to be effective against a specific target without causing severe side effects.

AI offers a high level of precision to the complicated and time-consuming discovery phase in drug development. That precision leads to faster development timelines and a lower failure risk down the road. 

“We use AI technology to design best-in-class molecules that are perfectly teed up for clinical development,” explains Andrew Hopkins, CEO of the UK company Exscientia, which recently raised a €435M ($525M) Series D round to fund its efforts applying AI to drug discovery.

“Drug discovery is precision engineering at the molecular scale, and our focus is always on the quality of our molecules. Making well-balanced molecules that deliver across competing properties means they are less likely to fail during the drug development phases.”

AI also gives researchers the power to analyze disparate datasets. For example, it can combine vast libraries of chemical compounds, biomedical data from the literature, and patient health data into knowledge graphs. This data model creates new connections and insights into previously unrelated information, which researchers can use to make predictions, model novel pathways and disease states, and test their findings. 
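As a rough illustration of the knowledge-graph idea, the sketch below stores entities and relations as an adjacency map and searches for a chain of relations linking a drug to a disease mechanism. The entities loosely echo the published baricitinib hypothesis, but the graph is drastically simplified, and the code is a minimal sketch of the concept, not any company's actual platform.

```python
from collections import deque

# Toy knowledge graph: edges link drugs, targets, pathways, and mechanisms.
# Entities and relations are heavily simplified for illustration.
edges = {
    "baricitinib": ["JAK1", "AAK1"],
    "JAK1": ["cytokine signalling"],
    "AAK1": ["clathrin-mediated endocytosis"],
    "cytokine signalling": ["inflammation"],
    "clathrin-mediated endocytosis": ["viral entry"],
}

def find_path(graph, start, goal):
    """Breadth-first search for a chain of relations linking two entities."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in graph.get(path[-1], []):
            if nxt == goal:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of relations found

print(find_path(edges, "baricitinib", "viral entry"))
# → ['baricitinib', 'AAK1', 'clathrin-mediated endocytosis', 'viral entry']
```

In a real platform the graph holds millions of entities mined from literature and experimental data, and the queries are correspondingly richer, but the core operation — surfacing an unexpected chain between previously unrelated facts — is the same.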

The push of the pandemic

Precision and speed made AI an ideal tool to help find treatments and vaccines for Covid in a way that legacy drug development could not. 

“The Covid-19 pandemic shone a light on the pitfalls of the traditional NCE [new chemical entity] process that takes many years, costs a lot of money, and has a massive failure rate,” said Andrew Watson, VP of Artificial Intelligence at Cambridge-based drug discovery firm Healx. “Pressed with the need to find treatments and vaccines fast, many were open to exploring how AI could speed up, and de-risk, the process.”

Last year, London-based firm BenevolentAI successfully applied AI to quickly identify a potential Covid treatment by repurposing the rheumatology drug baricitinib. BenevolentAI’s Scientific Advisor, Jackie Hunter, remarks that the team progressed from an idea to an actionable research finding in as little as 48 hours.

“It was our CEO, Joanna Shields, who said ‘we’ve got to do something, we’ve got the ability with our platform to be able to see if there’s any drug we could reposition to have an effect [on Covid].’ Of course, first you need a scientist who’s going to ask the right question. In our case, our VP of Pharmacology, Peter Richardson, realized we needed something anti-inflammatory [that] actually affects viral entry or viral replication.”

“Using our knowledge graph, we could ask those sorts of questions, pull out compounds, and interrogate the pathways that could lead to identifying those molecules that would have anti-inflammatory action, but would potentially also inhibit viral entry into the cell.”

Improved access to data 

AI is purpose-built to process vast data libraries – big data – that traditional data processing systems cannot handle. This allows researchers to work with large datasets such as population-wide Covid infection rates or decades of accumulated research papers.

However, big data isn’t always the best option. “Designing a novel molecule against a new target is inherently a small data problem – there is simply less accumulated data available for newly identified targets,” says Hopkins.

Getting access to meaningful data may be a challenge regardless of its size. It may be restricted by commercial interests or patient confidentiality. According to Hunter, the Covid health emergency helped to overcome some of these limitations and may have paved the way for broader data sharing in the future.

“What Covid has done is show that you can share patient data without compromising patient data integrity, or any of the issues around patient anonymity. And because this has demonstrated benefits, in terms of outcomes for patients, it’s a good example [showing how] not doing that would have really slowed everything down and people wouldn’t have access to the treatments they’ve got.”

AI and new ways of working

Sharing data may also contribute to new collaborations between previously distinct research or business entities. While this isn’t a new concept in drug development, the pandemic accelerated this as new partnerships combined clinical trial, manufacturing, and distribution expertise. Pfizer and BioNTech’s record time to market with an efficacious Covid vaccine demonstrates the synergy possible with such collaborations. 

“Our opinion is that the interface between AI, machine learning, and pharma coming together is potentially revolutionary and similar to what we saw when molecular biology and pharma came together to feed biotech 40 years ago,” says Hopkins.

Such collaborations need cross-functional teams that work effectively across organizations and within them, where different skill sets and working cultures must mesh. One application where this becomes evident is in silico clinical trials, which use AI for computational modeling and simulation of the biology of healthy humans, disease states, and treatments. This helps reduce cost, complexity, and risk in the drug development process. While human subjects remain the critical test measure for new drugs, the output from machine-only in silico trials can more deeply inform research and decision-making, especially when moving from the preclinical phase to the more expensive and complex human trials.
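The simulation side of an in silico trial can be sketched in a few lines. The toy model below generates a cohort of virtual patients under an invented disease-severity scale, applies an assumed weekly improvement rate with and without treatment, and compares outcomes; every number and the model itself are made up for illustration and bear no relation to any real trial design.

```python
import random

random.seed(0)  # deterministic virtual cohort for reproducibility

def simulate_patient(treated):
    """Disease severity after 12 weeks under a crude exponential-decline model."""
    baseline = random.uniform(40, 60)    # virtual patient's starting severity
    decline = 0.10 if treated else 0.03  # assumed weekly improvement rate
    return baseline * (1 - decline) ** 12

cohort = 500
treated = [simulate_patient(True) for _ in range(cohort)]
control = [simulate_patient(False) for _ in range(cohort)]

# Average benefit of treatment across the simulated cohort
effect = sum(control) / cohort - sum(treated) / cohort
print(f"mean severity reduction attributable to treatment: {effect:.1f}")
```

Real disease models chain together many such mechanisms — which is exactly why, as Boissel notes below, they demand input from immunologists, modelers, and software engineers working side by side.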

According to François-Henri Boissel, co-founder and CEO of Novadiscovery, a firm that uses AI to run clinical trial simulations, traditionally siloed areas of expertise must combine with the nimble and collaborative approach of modern tech organizations.

“With the adoption of in silico [simulations], we’re trying to build fairly complex disease models, and we need everyone on board. We need input from a range of experts in areas such as immunology, fibrosis, experiment planning, clinical trial design, mathematical modeling, software development, and data science expertise. It is naturally collaborative and, irrespective of in silico trials, I think it’s a trend that pharma companies have started to adopt too.”

Pandemic lockdowns pushed pharma toward in silico clinical trials to keep drug development moving when recruiting human subjects was harder. Boissel believes this trend is likely to continue, leading to more clinical trials built around smaller, more focused targets and patient populations.

Overcoming the limitations of AI

Despite the promises of the technology, AI is unlikely to replace human test subjects any time soon, and harnessing its power relies on human input to answer the right question.

“The challenge is to avoid over-complicating your model. You really want to have the right level of knowledge and information and the inclusion of the right biological subsystems to properly address set questions. Because human biology is very complex, you could spend an entire lifetime building a mathematical model of the human body,” explains Boissel.

Yet while AI offers the power to collect, assimilate, and analyze vast and disparate datasets, it cannot assess how valuable the data is to researchers. Boissel suggests there is more benefit in curating smaller volumes of high-quality data to create mathematical representations of what we already know, combining that with years of scientific and clinical knowledge and expertise in a more meaningful way. More focused inputs contribute to more focused outputs, which also gives researchers confidence in those outputs.

Another way to create more reliable output is to build AI systems that can offer an explanation for their answers. George Paliouras, Senior Researcher at the Institute of Informatics and Telecommunications at NCSR Demokritos in Greece, studies these sorts of AI issues. Paliouras led the research for iASiS, a Europe-wide project investigating the use of AI and big data in precision medicine, which led to the creation of LangAware, a startup that uses language recognition to detect Alzheimer’s.

“One aspect of AI that is still problematic is neural networks and their lack of transparency for health applications. To trust a decision, we need to know why that decision was made,” Paliouras explains.

“For example, AI might reveal a patient prone to Alzheimer’s, but why is this? What data has the system used to base this decision on? This is very, very important for all health applications of AI. So right now, a hot field of AI is known as ‘explainable AI’, or AI with decisions understandable by humans.” 
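A toy version of the explainability Paliouras describes: a linear risk model whose score decomposes exactly into per-feature contributions, so the "why" of a prediction can be read off directly. The features, weights, and patient values below are invented for illustration — real Alzheimer's models are far more complex, and making opaque neural networks this interpretable is precisely the open problem.

```python
# A linear model is the simplest "explainable AI": its prediction is an
# exact sum of per-feature contributions. All values here are invented.
weights = {"age": 0.04, "speech_pauses": 0.9, "word_recall": -0.6}
patient = {"age": 72, "speech_pauses": 1.4, "word_recall": 3.0}

# Each feature's contribution is weight * value; their sum is the score.
contributions = {f: weights[f] * patient[f] for f in weights}
risk_score = sum(contributions.values())

# The explanation: rank features by how strongly they pushed the score.
ranked = sorted(contributions, key=lambda f: -abs(contributions[f]))
print(f"risk score: {risk_score:.2f}")
print("driven mainly by:", ranked)
```

A clinician reading this output sees not just a score but which inputs drove it — the property that "explainable AI" research tries to recover for deep networks.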

In 2020, his team started the BioASQ Synergy project to push AI systems further to find answers to difficult Covid questions. The project involves researchers and scientists asking questions to competing AI systems, such as “is there a relationship between exposure to air pollution and Covid-19 mortality?” While the AI-derived output does not always provide a precise answer to the question, the goal is that it will be understandable and useful to humans.  

Changing perceptions of AI and data

Other challenges to the use of AI exist outside the technology. For example, sharing personal healthcare data remains a controversial topic. Many people fear their personal data could end up in the wrong hands. For healthcare providers and other collectors of patient data, one argument for limiting sharing is the lack of a sure way to completely (and permanently) anonymize the data. 

Paliouras believes these barriers are largely artificial ones and says research indicates patients are more willing to share their data than is assumed. Overcoming barriers from healthcare providers, commercial organizations protecting data assets, or researchers and publishers controlling the release of findings ultimately comes down to one simple idea.

“I think we should interpret the [data protection laws] in a way that puts the patient in control of all their data. Then we need to give them that control, and the knowledge to decide that their data can and should be shared for a particular purpose.”

He suggests a possible workaround is federated learning, which could involve applying AI techniques directly at the data collection point, such as hospitals. This approach allows researchers to generate aggregate models based on that data, rather than harvesting the data and moving it elsewhere, effectively circumventing the ownership issue.
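Federated learning can be sketched in a few lines. In the toy example below, each hospital fits a one-parameter model on its own records and shares only the fitted parameter and a sample count; a central aggregator combines them with a sample-weighted average (the FedAvg scheme). The site names and data are invented, and real deployments add secure aggregation and many training rounds.

```python
# Minimal sketch of federated averaging (FedAvg): model parameters leave
# each site, patient records never do. Sites and data are illustrative.

def local_fit(xs, ys):
    """Least-squares slope for y = w * x, computed entirely on-site."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

sites = {
    "hospital_a": ([1.0, 2.0, 3.0], [2.1, 3.9, 6.2]),
    "hospital_b": ([1.0, 4.0], [1.8, 8.1]),
}

# Each site reports only its local parameter and sample count.
updates = [(local_fit(xs, ys), len(xs)) for xs, ys in sites.values()]

# The aggregator averages parameters, weighted by sample size.
total = sum(n for _, n in updates)
w_global = sum(w * n for w, n in updates) / total
print(f"global model parameter: {w_global:.3f}")
```

The aggregator never sees a single patient record — only the two numbers each site chooses to share — which is what makes the approach attractive when data cannot leave the hospital.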

There is also a perception problem with the idea of machines designing medicines destined for human bodies and the risks some people think that poses. While AI has the potential to be used for nefarious purposes such as intelligent weapons, its use in drug development remains within the boundaries of the highly regulated medical field. As the adoption of in silico trials increases, the FDA and EMA have been establishing regulations around it. 

Although the pandemic eased access to data, it will take time to address all implementation issues for AI technology. Meanwhile, AI’s benefits for drug development and human health will continue to grow as knowledge and experience increase and technology continues to advance. AI has already enabled many things not possible just a few years ago. As George Paliouras comments, it’s difficult to imagine treatments such as gene therapy without it.

“The space of possibilities and choices in genetic research is huge. Imagine one gene that has a few million bases. If they’re looking for alternative sequences within this bigger sequence, the possibilities are 10 to the 16th, and researchers might have to choose ten molecules to try out. AI makes going through this process efficiently and systematically a reality.”
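Paliouras's "10 to the 16th" is plausible as a raw combinatorial count: with DNA's four-letter alphabet, the number of possible sequences of length k is 4^k, which exceeds 10^16 once k reaches 27. The snippet below just checks that arithmetic; the sequence lengths are illustrative, not drawn from any specific gene.

```python
# Size of the DNA sequence space: 4**k distinct sequences of length k
# (alphabet A, C, G, T). Lengths chosen for illustration only.

def sequence_space(k):
    """Number of distinct DNA sequences of length k."""
    return 4 ** k

print(sequence_space(26))  # ~4.5e15, still below 10**16
print(sequence_space(27))  # ~1.8e16, past the quoted "10 to the 16th"
```

Picking a handful of candidates out of a space that size by exhaustive testing is hopeless, which is the point of the quote: AI makes the search tractable.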

Watson stresses the significant potential AI will continue to have for their research focus in rare diseases, where 95% of conditions are still without an approved treatment. 

“We must all think deeply about how we balance the need to safeguard sensitive information against the increasing opportunities offered by advanced tools like artificial intelligence. There needs to be an open dialogue that builds on the lessons learned from this pandemic and allows everyone – patients, public, researchers, industry, and government – to share their vision for the healthcare of the future.”

“Biotech companies need to talk about their achievements, but doing it in a responsible way, not over-hyping it,” says Hunter. “Helping people to understand how successful research has used AI… in a simple way, to do things that could not have been done before.”

Labiotech.eu
