Mismanagement of R&D projects in biotech and pharma is more common than one might think and can result in high development costs and eventual clinical failure. Can we do something about it?
Biotechnology and pharmaceutical businesses are synonymous with high risk and even higher investment. Bringing a new drug to market can take around 10 years of R&D and a great deal of cash, averaging $1.3B per drug.
While we often hear about clinical trial failures, a less-discussed aspect is failure in the initial steps of drug discovery and R&D process management. In fact, the success rate of the drug discovery process is notoriously low, with an overall failure rate of over 96%.
Anything and everything, from selecting a target to outsourcing basic research, can be a major contributing factor to the later success or failure of a drug candidate in the pipeline. Moreover, the common setup in which many different departments must interact cohesively throughout the drug discovery journey, often across multiple simultaneous projects, naturally leads to organizational roadblocks.
Ideas about pharmacological intervention most often come from academic partners or basic research initiatives within the company. This is the first point where signs of trouble can manifest. Parsing through large volumes of data requires both very deep expertise and the ability to quickly discard research that is not likely to lead to any desirable outcome.
Not all biotech companies can boast a well-rounded core of disease experts. I have seen firsthand how researchers can be overconfident in their understanding, dismissive of existing research and obvious red flags, or simply compliant with less knowledgeable R&D managers pushing a project forward at any cost.
Many interesting studies of disease mechanisms are first performed in animals; however, these findings rarely translate perfectly to what we see in humans. As a result, bridging this gap ultimately rests on internal competencies.
What companies often do is either engage various CROs or iterate internally through a series of exploratory studies hoping for an ‘aha moment.’ Success therefore relies heavily on the quality of the data a CRO generates and the trust built between the companies.
This R&D mentality has been aptly dubbed the ‘basic research–brute force’ bias: attempting everything in the hope that something will finally work out. In my experience, departments that follow this strategy without doing due diligence or employing a guided research approach often fail. Any researcher working in R&D could come up with their own list of wasted, expensive RNAseq studies, metabolomics analyses, or poorly planned animal studies. Brute-forcing research inevitably leads to uncertain outcomes and financial losses in both the short and long term.
As a result, more and more companies are shifting toward bioinformatics and artificial intelligence approaches, taking advantage of historical data to identify the most promising targets for therapeutic intervention. However, these approaches have yet to make significant inroads into medical R&D, where apparent productivity and process efficiency are lower than they were 50 years ago.
A lack of expertise in managing these new processes, and in navigating the multitude of smaller services geared toward bioinformatics, can leave a company with many expensive decisions that in retrospect might not have been necessary.
A good example is a company (let’s keep it anonymous) that decided to invest in a very expensive knowledge-base solution for its target research, including known disease-related pathways, curated article lists, and so on. In the end, few employees used it, and it became obvious that open-source solutions exist that rival the commercial one.
The next steps of the development process are usually split across internal departments that specialize in specific therapeutic areas. In bigger companies, the allocation of resources and the prioritization of diseases are usually decided by a handful of people or project-steering committees. This trickle-down decision-making process does not always capture what the ‘boots on the ground’ know about potential projects. One can only speculate how many failed projects are due to the managerial chain of command.
For instance, if a company decides to investigate a particular pathology, it might require complex experimental setups to understand the mode of action of a particular group of lead compounds. Miscommunication between R&D departments, or a lack of competency in designing complex experiments, can easily disrupt the order in which preparatory work should be done and generate losses.
I like to call this a ‘dilution of responsibility’: when a project is made unnecessarily complex and the people doing the actual work get less recognition or decision-making power than they should, accountability is lost. Don’t get me wrong: things fail in R&D, and we should not set out with pitchforks to find a guilty project leader each time something goes wrong. What we need instead is the ability to recognize what failed and what to improve the next time around. Otherwise, the company continues to invest in projects without really benefiting from them.
This usually manifests as the ‘sunk cost fallacy’: because of the already large investment, department managers continue to push projects even when they are realistically unlikely to succeed. Another typical scenario is when there is no good target or clear scientific direction, so R&D teams end up running expensive cycles of experiments that are out of proportion to the actual value they bring to the company.
Overmanagement is another common issue in R&D (and in any industry): the input of those working most closely with a project, in this case scientists, technicians, and researchers, can be diluted by multiple layers of bureaucracy. This is nicely described as the ‘throw money at it’ tendency, where senior management keeps adding human resources, project intermediaries, and other non-scientific roles to R&D in the hope of remedying the situation.
Admittedly, reporting and evaluation systems are necessary, especially when you have tens of thousands of employees. But some of these efforts are very artificial and simply add to the bureaucracy, prolonging decision times and reducing the agility of R&D processes.
Some pharmaceutical companies, such as Novartis and GlaxoSmithKline, have made changes to their project management and research ecosystems to create a more productive R&D culture. However, the more complex an organization becomes, the more difficult it is to evolve a system that maintains a successful research culture.
In contrast, small and early-stage companies aspiring to grow their pipelines have little choice, as decision-making, at least initially, tends to be decentralized. This can help them quickly see which projects are worth pursuing and which need to be stopped immediately.
There are also now companies that support the R&D process itself. For example, Repositive speeds up projects by helping preclinical oncology researchers find and source the most suitable cancer models for their needs, bridging the translational gap between preclinical and clinical development. Additionally, it can level the playing field between small CROs and the large ones that currently dominate the preclinical-studies market.
To conclude, R&D strategies and frameworks differ between companies and therapeutic goals. However, it is no secret that issues persist, and they contribute in part to the widely seen failures and rising costs. Only by acknowledging shortcomings, welcoming new strategies, and refusing to perpetuate old cycles can we enter a new age of therapeutics development.