There are a number of ways that tooling and public-goods-oriented projects could make the hardest part of biomedicine, safe translation from animal models to humans, more reliable.
One of the big problems in drug discovery is the low “predictive validity” of disease models, like mice. Most mouse results don’t translate to humans. Most drugs fail in human trials. Yet humans generally only enter the picture at the stage of Phase 1 clinical trials, which often fail. As Sam Rodriques writes, “If you are still skeptical of the value of testing directly on humans, consider that natural experiments in single humans (e.g. brain lesions, genetic disorders) can often tell us more than arbitrarily large numbers of experiments in mice.”
Are there ways to get more data from human beings earlier in the research process, or to get richer, more useful human data in early clinical trials?
Here’s a fun idea. Instead of testing one cancer drug in a given human, test dozens using tiny micro-needles to inject different drugs into different parts of a tumor. That’s the kind of idea we want more of.
What kind of rich data do we want from humans?
One key type of data would be “whatever it takes to be able to predict drug pharmacokinetics and toxicity”, the variables that often cause drugs to fail in humans. As Trevor Klee writes, “Pharmacokinetics is the study of everything that happens to a drug when you put it in your body. So, if you’ve ever asked questions like ‘Why does my Advil take a few hours to work?’ or ‘Why do I have to take a Claritin every 12 hours?’ or even ‘Why does asparagus make my pee smell funny?’, well, those are all pharmacokinetic questions.” What Trevor points out is that pharmacokinetics today is primarily descriptive, not predictive. What he proposes to create as a FRO is the underpinnings for “physiologically based pharmacokinetic predictive modeling”: “It would just require the raw data from a variety of pharmacokinetic trials, some in-depth experiments on human liver and gastric membranes, and some simulation of the physics of how different drugs diffuse into the bloodstream and across membranes. This would be difficult, but not impossible, and would not require any huge scientific advances. If it were done, it would likely save hundreds of millions, if not billions of pharma dollars each year, improve or even save the lives of the thousands of people who depend on therapeutic dose monitoring (e.g. every organ transplant recipient), and get us way closer to obviating healthy human trials altogether.”
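To make this concrete, here is a minimal sketch of the kind of mechanistic model pharmacokinetics builds on: a classic one-compartment model with first-order absorption and elimination. The setup and parameter values are purely illustrative assumptions, not Trevor's proposal or measured data; the point of physiologically based modeling is to predict quantities like ka, CL, and Vd from membrane and liver experiments rather than fit them after a human trial.

```python
# Minimal one-compartment pharmacokinetic sketch: a single oral dose with
# first-order absorption (ka) and first-order elimination (CL / Vd).
# All parameter values are illustrative placeholders, not measured data.
import numpy as np
from scipy.integrate import solve_ivp

def one_compartment(t, y, ka, CL, Vd):
    gut, central = y                           # drug amount (mg) in gut and in plasma
    dgut = -ka * gut                           # absorption out of the gut
    dcentral = ka * gut - (CL / Vd) * central  # absorption in, elimination out
    return [dgut, dcentral]

dose_mg = 400.0              # a single ibuprofen-sized oral dose
ka, CL, Vd = 1.2, 3.5, 10.0  # 1/h, L/h, L (hypothetical values)

sol = solve_ivp(one_compartment, (0, 24), [dose_mg, 0.0],
                args=(ka, CL, Vd), dense_output=True)

t = np.linspace(0, 24, 97)
conc = sol.sol(t)[1] / Vd    # plasma concentration in mg/L
print(f"Cmax ~ {conc.max():.1f} mg/L at t ~ {t[conc.argmax()]:.1f} h")
```

A toy model like this answers the “why does my Advil take a few hours to work” question only descriptively; the PBPK program Trevor sketches would replace these few fitted parameters with many physiologically grounded compartments whose parameters could, in principle, be predicted before a drug ever enters a human.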
Making a predictive model of toxicity is a hard problem, as explored in a recent blog post. Sam writes, “Existing datasets for toxicity are generally low quality, and are limited in their coverage of chemical space, so it is unlikely that a high quality predictive model for toxicity can be trained directly from existing data. Gathering better datasets in animals and in vitro models will be important, but gathering large toxicology datasets for humans is unlikely to be possible. Instead, we may need to leverage inductive biases, for example by making predictions based on molecule-protein interactions.”
Predicting the immunogenicity of biologic drugs in humans would be a big unlock for pharma: many drugs sit in the freezer because they triggered an unexpected immune reaction in early human trials.
Missing datasets are indeed part of the problem. This idea of mapping molecule-protein interactions more comprehensively is at the core of EvE Bio’s approach to “mapping the pharmome”. Mostly, drug developers start with targets and then screen many drugs against them. Here, EvE Bio is pushing open data for many drugs against many targets, with both positive and negative results released. This could underpin better toxicity predictors. There are also some other, more limited stabs in this direction.
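As a cartoon of how such a matrix could feed toxicity prediction, here is a minimal sketch assuming a hypothetical drugs-by-targets table with explicit negatives. The drugs, labels, and numbers are invented, and the target panel is just a plausible set of liability-associated proteins, not EvE Bio’s actual data or format.

```python
# Sketch: an open drugs-by-targets interaction matrix (with released negative
# results) used as features for a toxicity predictor. All data here are
# invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

drugs   = ["drug_A", "drug_B", "drug_C", "drug_D"]   # rows of the matrix
targets = ["hERG", "CYP3A4", "5-HT2B", "OATP1B1"]    # columns of the matrix

# 1.0 = measured binder, 0.0 = measured non-binder (a released negative result),
# np.nan = never tested, the gap that open many-by-many screening would close.
pharmome = np.array([
    [1.0, 0.0,    np.nan, 0.0],
    [0.0, 1.0,    0.0,    np.nan],
    [1.0, np.nan, 1.0,    0.0],
    [0.0, 0.0,    0.0,    0.0],
])

toxic = np.array([1, 0, 1, 0])   # hypothetical clinical toxicity labels

# Crude handling of untested pairs: impute 0.5; a real model would treat
# missingness far more carefully.
X = np.nan_to_num(pharmome, nan=0.5)
clf = LogisticRegression().fit(X, toxic)

candidate = np.array([[1.0, 0.0, 0.5, 0.0]])  # interaction profile of a new molecule
print("Predicted toxicity risk:", clf.predict_proba(candidate)[0, 1])
```

Even in this toy, the value of releasing negatives is visible: without the measured non-binders, the feature matrix is mostly unknowns and there is little signal to learn from.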
Going further in this direction, in a recent essay, legendary drug developer Mark Murcko argues for a project to find the “anti-targets” that drive toxicity, the so-called “avoidome”. He writes: “A particular challenge results from the interaction of drugs with the enzymes, transporters, channels, and receptors that are largely responsible for controlling the metabolism and pharmacokinetic properties (DMPK) of those drugs— their absorption, distribution, metabolism, and elimination…in general, the goal of a drug discovery team is to avoid interacting with the avoidome class of proteins… Unfortunately, the structures of the vast majority of avoidome targets have not yet been determined… multiple structures spanning a range of bound ligands and protein conformational states will be required to fully understand how best to prevent drugs from engaging these problematic anti-targets. We believe the structural biology community should ‘embrace the avoidome’ with the same enthusiasm that structure based design has been applied to intended targets…Crucially, a detailed understanding of the ways that drugs engage with avoidome targets would significantly expedite drug discovery.”
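Separately from the structural biology Murcko calls for, here is a toy sketch of what “avoiding the avoidome” can look like in a discovery team’s day-to-day triage: compare a candidate’s potency on its intended target with its potency against a small anti-target panel and flag weak selectivity. The panel members, thresholds, and numbers are illustrative assumptions, not taken from the essay.

```python
# Toy avoidome triage: flag a candidate molecule if it hits known anti-targets
# (e.g. hERG, major CYPs, efflux transporters) within a selectivity margin of
# its on-target potency. Thresholds and numbers are illustrative only.
def flag_liabilities(target_ic50_nM, antitarget_ic50_nM, min_selectivity=100.0):
    """Return anti-targets hit within min_selectivity-fold of the on-target IC50."""
    flags = []
    for name, ic50 in antitarget_ic50_nM.items():
        fold = ic50 / target_ic50_nM
        if fold < min_selectivity:
            flags.append(f"{name}: only {fold:.0f}x selective")
    return flags

# Hypothetical candidate: 5 nM on its intended target, but 300 nM on hERG.
print(flag_liabilities(5.0, {"hERG": 300.0, "CYP3A4": 20000.0, "P-gp": 50000.0}))
# -> ['hERG: only 60x selective']
```

Murcko’s point is that this kind of triage today relies on empirical panel screening late in the process; solved structures for avoidome proteins would let teams design around these liabilities from the start.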
What other sorts of technologies and data could support greater predictive understanding of human biology? Mapping how hormones and metabolites change in the body over time would be a powerful approach. Today, we have continuous glucose monitors and emerging monitors for selected other hormones. Occasionally, people come up with clever approaches, like measuring cortisol over time from human hair. But Anand Muthusamy and David Garrett propose a FRO to go from continuous glucose monitors to “continuous everything monitors”, i.e., monitors that multiplex a large number of targets.
In addition, there is the question of how much information one can get from a given blood sample. Deep immune system profiling is arguably one of the most powerful ways to extract diverse information from a blood draw. The ultimate version of this would be the Immune Computer Interface; see this fantastic thread by Hannu.
The broader question here is how to get “higher-dimensional” measurements from human subjects early, cheaply, and in natural contexts. Measuring the breath could be powerful, and there are proposals for a human breath atlas. Companies like Owlstone are making progress in this breath-omics or volatile-omics area. DNA sequencing is also getting closer to true point-of-care formats.
Using real, intact human organs to test drugs is another key approach under development. One company is doing this for the liver. Meanwhile, researchers are getting much better at keeping entire organs alive and functioning in a vat; this latter approach is being pioneered, especially for the brain, by an innovative startup called Bexorg.
The brain is another key source of variables that are hard to access in humans, in part because humans have thick skulls (literally). Noninvasive brain imaging is improving (Kernel Flow, OpenWater), making brain activity measurement meaningfully useful for drug trials.
Meanwhile, a SpecTech Brains Fellow named Manjari Narayan is developing a program to improve predictive validity at the level of systems and data integration.
And BioState AI is developing a holistic, omics-based approach to understanding cross-organism differences in drug responses.
Overall, there is a lot to do in this area. Much of the problem is under-investment: pharma and biotech VCs currently seem to see this type of improved data and technology as a public good, and they focus their main investments on specific drugs because of how value capture and risk are structured in our current system. This leads to a tragedy of the commons. But the technologies and approaches needed to make progress here are coming along. They just need a push.