Guy Levy-Yurista, CEO of the experiment platform developer Synthace, gives his take on what’s missing in increasingly software-dependent life sciences labs: a holistic mindset for improving experimental methods.
There’s a major problem holding back progress in biotech, and more widely in the life sciences. It’s not an obvious one, but based on the conversations I have—with friends, colleagues, customers, and at every conference I attend—it’s there. These conversations range in topic and tone, but all of them reveal the outline of something different: hidden from sight and yet, somehow, tantalizingly within reach.
It’s a software-as-a-service (SaaS) problem. Bioscience, and the life sciences more widely, need SaaS to accelerate their potential. Specialized SaaS solutions are the norm in every other industry, and the abstraction and delegation of difficult tasks—tasks that computers handle better than humans—allow us to do much more.
And while the current SaaS landscape is filled with exemplary companies doing incredible things, it’s deeply flawed. Fix that flaw, and we could see a revolution in biotech computing before the end of this decade, along with a new crop of SaaS companies enabling remarkable new capabilities for the companies that know a good opportunity when they see one.
Today’s biotech SaaS landscape is powerful but badly fragmented
To date, software in the lab environment has been biased towards individual and discrete tasks, especially for things like record-keeping or standalone operational execution.
There are the design tools we use before entering the lab, the automation tools we use when we are in the lab, and still more tools—the ubiquitous electronic lab notebooks (ELNs)—that we use after the fact to manually record what took place. On top of all of this, we have an entire world of lab hardware, each instrument with its own interfaces and modes of operation.
All of these different systems and tools cover different parts of the experimental process: the loop we cycle through in designing experiments, running them, analyzing experimental data, and starting all over again. But there’s a problem: no throughline, no thread that links all of these disparate tools together into a unified whole.
So while we have an ecosystem, it’s a fragmented one; when we step back and look at the bigger picture, we see islands of capability, islands of understanding, islands of data and metadata—all of them difficult to connect, integrate, and make interoperable.
This is the heart of the problem, and it’s a problem with two distinct causes. The first cause is the difficulty of making these tools. But the second is the limitation created by how we think about the problem in the first place.
Our current model limits the progress we can make
How we think about this fragmented ecosystem problem—the “mental model” we adopt—determines the answer we find. But not all answers are equal. And getting it wrong has a massive impact on our long-term prospects. Let’s take ELNs as an example.
Right now, ELNs are seen as something worth having, maintaining, and improving. This makes sense: better ELNs mean a better chance of recording and understanding what goes on, compared with the current alternatives (which are some form of “we didn’t record it” or “it’s in this Word doc”).
But does an improvement to an ELN, a “point solution” requiring manual intervention and subject to potential human error, directly correlate with better science? With greater scientific value? The answer is: “Maybe.” Sadly, it’s not as direct a correlation as many of us would hope.
We can run the same exercise with any other discrete element of lab software, whether that’s a laboratory information management system or an automation tool, and find the same big question. This forces us to ask an even bigger one: if improving an individual tool doesn’t directly correlate with better science, what do we improve to make our science better, and how do we do it?
We need a new model to move forward
Instead of the much-vaunted “lab of the future,” we should be framing our thinking around the “experiment of the future.” This subtle but profound shift asks us to re-examine our many assumptions about how we should work in the first place and to think about the broader system.
When we think in terms of the experimental record, we need an ELN. When we think in terms of sample management, we need a laboratory information management system. But when we think of the experiment itself, we stop thinking about the processes, equipment, data and methodologies as separate problems to be solved in isolation.
One group embracing this mindset is the Discovery Biology department at AstraZeneca. One team in the department found a way to run an assay with 50% less reagent at the same assay quality. Another explored the entire design space of one assay to definitively confirm that buffer choice had no effect. A negative result, but a clear one that allowed that project to move on rapidly.
They did all of this with an emerging methodology that combines design of experiments with automated dispensing of reagents into 1536-well plates, something we’re calling high dimensional experimentation (HDE).
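To make the idea concrete, here is a minimal sketch of what the design-of-experiments side of HDE can look like in code: generating every combination of a few assay factors and mapping each run to a well on a 1536-well plate (32 rows by 48 columns). The factor names and levels are invented for illustration; this is not Synthace’s software or the AstraZeneca team’s actual design.

```python
# Illustrative sketch only: a full-factorial design of experiments (DoE)
# laid out onto a 1536-well plate. Factor names and levels are hypothetical.
from itertools import product

factors = {
    "buffer_pH": [6.5, 7.0, 7.5],
    "salt_mM": [50, 100, 150, 200],
    "reagent_uL": [0.5, 1.0, 2.0],
    "temperature_C": [25, 30, 37],
}

# Full factorial: every combination of every factor level (3*4*3*3 = 108 runs).
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

def well_name(index: int, columns: int = 48) -> str:
    """Map a run index to a 1536-well plate coordinate ('A1' ... 'AF48')."""
    row, col = divmod(index, columns)
    # Rows A..Z, then AA..AF, cover the 32 rows of a 1536-well plate.
    letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    prefix = "" if row < 26 else "A"
    return f"{prefix}{letters[row % 26]}{col + 1}"

# Assign each run to a well, ready to hand off to an automated dispenser.
plate_layout = {well_name(i): run for i, run in enumerate(runs)}

print(len(runs))        # 108 runs, a small fraction of the plate's 1536 wells
print(well_name(0))     # A1
```

Even this toy example shows why automation matters: 108 conditions is trivial for a liquid handler and a 1536-well plate, but far beyond what anyone would pipette, track, and record by hand.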
When we think about improving the experiment as a whole, instead of fixating on individual point solutions, we get a clearer idea of how and why we should connect and integrate the many different elements that contribute to progress. We also understand what tools, platforms or capabilities might be missing from the wider landscape.
If we can continue to build that understanding and act on it, the next few years will allow us to explore even more untold possibilities.
Guy Levy-Yurista has been CEO of Synthace since May 2021. He has over 20 years of experience in strategy, marketing, and product leadership positions in startups and Fortune 500 companies.