80% of all autoimmune diseases occur in women, and no one can explain why. Cancer cells are always present in your body, but it's only when your T cells go into energy deficit that cancer starts overtaking the system. And here's what almost no one is talking about: the mitochondria in your immune cells are the reason MS, chronic fatigue, neurodegeneration, and even cancer progression happen when they happen.

In this episode, I sit down with Dr. Anurag Singh, an MD-PhD immunologist who spent 20 years studying mitochondria and screened 4,000 compounds from pomegranates to discover one molecule that changes cellular aging. We break down immunometabolism, the emerging field linking immune health and metabolism; why your T regulatory cells are the CEOs of your immune system; how mitochondrial dysfunction in immune cells triggers autoimmune conditions; and why rejuvenating mitochondria can get your immune system in check to defeat cancer. We also cover NAD+ (and why NMN and NR supplements don't work the way people think), the creatine sweet spot for muscle quality (500mg-1g, not the 5g everyone's taking), why Parkinson's is linked to paraquat (a mitochondrial toxin used in fertilizers and dry cleaning), and how AI is fast-tracking the discovery of next-generation molecules for neurodegeneration. This conversation completely shifted how I think about immune health, brain protection, and what's actually driving the diseases we fear most.

Reduce your risk of Alzheimer's with my science-backed protocol for women 30+: https://go.neuroathletics.com.au/youtube-sales-page

Subscribe to The Neuro Experience for evidence-based conversations at the intersection of brain science, longevity, and performance.

_____

TOPICS DISCUSSED
00:00 Intro: Why 80% of Autoimmune Diseases Occur in Women
01:24 Why Dr. Anurag Became an Immunologist
03:19 Immunometabolism: The Link Between Immune Health and Metabolism
04:20 T Cells, B Cells, and the Thymus Gland
05:51 MS and Autoimmune Disease: The T Regulatory Cell Problem
11:32 Mitochondrial Dysfunction and Immune Exhaustion
18:45 Cancer Cells and T Cell Energy Deficit
24:10 Urolithin A: Screening 4000 Pomegranate Compounds
31:20 Mitophagy and Autophagy: Cellular Housekeeping
38:50 NAD+ vs NMN and NR Supplements: What Actually Works
43:15 Creatine Dosing: The 500mg-1g Sweet Spot for Muscle Quality
48:30 Gut-Brain Connection and Neurodegeneration
50:54 Parkinson's Disease and Paraquat: The Mitochondrial Toxin
53:25 AI in Drug Discovery and Next-Generation Molecules
55:38 Skincare and Mitochondrial Health: Collagen Synthesis

_______

Thank you to our sponsors:
KetoneIQ: https://ketone.com/NEURO for 30% OFF
Caraway: Carawayhome.com/neuro10
Jones Road Beauty: https://www.jonesroadbeauty.com - Use code NEURO

_______

I'm Louisa Nicola - clinical neurophysiologist, Alzheimer's prevention specialist, and founder of Neuro Athletics. My mission is to translate cutting-edge neuroscience into actionable strategies for cognitive longevity, peak performance, and brain disease prevention. If you're committed to optimizing your brain, reducing Alzheimer's risk, and staying mentally sharp for life, you're in the right place.

Stay sharp. Stay informed. Join thousands who subscribe to the Neuro Athletics Newsletter → https://bit.ly/3ewI5P0
Instagram: https://www.instagram.com/louisanicola_/
Twitter: https://twitter.com/louisanicola_

Learn more about your ad choices. Visit megaphone.fm/adchoices
On today's episode of Dr. M's Women and Children First Podcast, we welcome a scientist whose work has quietly shaped the cardiovascular health of millions around the world. Dr. Sundeep Dugar is a pharmaceutical innovator, inventor, and industry leader with more than three decades at the forefront of drug discovery. He is best known as a co-inventor of ezetimibe — marketed as Zetia® — a landmark cholesterol-lowering medication that transformed lipid management by targeting intestinal cholesterol absorption. He was also a co-inventor of the combination therapy Vytorin® (ezetimibe plus simvastatin), which expanded treatment options for patients at high cardiovascular risk. For this groundbreaking work, Dr. Dugar and his colleagues received the prestigious 2005 National Inventor of the Year Award from the Intellectual Property Owners Association and the Heroes of Chemistry award from the American Chemical Society. Across his career, Dr. Dugar has contributed to more than 140 patents and has authored over 70 scientific publications, reflecting a lifetime devoted to translating chemistry into real-world therapies. He is currently the founder of Aayam Therapeutics, where he leads efforts to develop innovative, accessible medicines through collaborative global research. He also serves as Co-Chief Executive Officer of Blue Oak Nutraceuticals, advancing a novel mitochondrial-targeted compound known as Mitokatlyst™, designed to stimulate mitochondrial biogenesis and cellular energy — with potential implications for muscle strength, metabolic health, cardiovascular function, and inflammation. He was the first to decipher the mechanism by which exercise increases mitochondrial levels; Mitokatlyst's mechanism of action mimics this process. Dr. Dugar's scientific journey spans continents and some of the world's premier institutions.
He earned both his Bachelor's and Master's degrees in Organic Chemistry from the University of Delhi, completed his PhD in Chemistry at the University of California, Davis, and pursued postdoctoral research at ETH Zürich in Switzerland and at Cornell University. Today, we'll explore the story behind major pharmaceutical breakthroughs, the science of mitochondrial health, and what the future of therapeutics may look like when innovation meets global accessibility. Please join me in welcoming Dr. Sundeep Dugar.
Dr. Karsten Eastman, Ph.D. is the CEO and Co-Founder of Sethera Therapeutics ( https://setheratx.com/ ), a company focused on revolutionizing peptide-based drug development with a cutting-edge enzymatic cross-linking technology. Their platform enables the synthesis of highly stable, polymacrocyclic peptides designed to engage multiple targets simultaneously, offering unparalleled precision in therapeutic design.

Trained as a chemist at the University of Utah, where he earned his PhD in 2023, Dr. Eastman has built his career at the intersection of peptide synthesis, protein engineering, and radical enzymology. His work focuses on understanding how enzymes choreograph complex molecular transformations — and then harnessing those principles to build programmable, drug-like molecules.

In collaboration with researchers at the University of Utah, Dr. Eastman and his team recently published in Proceedings of the National Academy of Sciences a breakthrough discovery involving PapB, a radical S-adenosyl-L-methionine (SAM) enzyme capable of installing precise, durable thioether “staples” into peptides in a single enzymatic step. This work opens vast new chemical space for macrocyclic peptide therapeutics and offers a powerful new approach to targeting diseases long considered “undruggable.”

At Sethera Therapeutics, Dr. Eastman is translating this enzymatic platform into next-generation peptide medicines that aim to combine the selectivity of biologics with the drug-like properties of small molecules.

#PeptideTherapeutics #DrugDiscovery #Biotechnology #Enzymology #RadicalSAM #PapB #SetheraTherapeutics #UndruggableDiseases #MacrocyclicPeptides #Bioengineering #Therapeutics #Innovation #ScienceBreakthrough #PeptideDrugs #EnzymeTechnology

Support the show
Ep194: Ansu Satpathy on Cancer and Autoimmune Drug Discovery by Timmerman Report
When Terry Pirovolakis learned his son had an ultra-rare neurodegenerative disease, SPG50, he refused to accept “no options.” What started as a desperate search for hope became Elpida Therapeutics, a nonprofit driving gene therapy innovation for multiple rare diseases. In this episode, Terry shares the remarkable journey from diagnosis to clinical trials, the power of partnerships, and why urgency matters when every day counts.

Show Notes:
From Mystery to Medicine: The Science Behind a Mother's Search | Podcast
Taking a Customized and Collaborative Approach to Therapeutic Development | Podcast
Rare Disease Research for Drug Development | Charles River
Rare Disease | Charles River
Discovery | Charles River
Beyond The Diagnosis
a16z general partner Jorge Conde talks with Vasant Narasimhan, CEO of Novartis International, about transforming a 250-year-old conglomerate into a pure-play medicines company and unlocking $180 billion of value in the process. They cover Novartis's platform technologies: cell and gene therapies, RNA medicines, and radioligand therapies. They also discuss AI in drug discovery, the rise of China as a biotech competitor, and what Vasant looks for when evaluating startup partnerships, including his advice on the killer experiments and CMC work that can make or break a deal.

Resources:
Follow Vasant Narasimhan on X: https://twitter.com/VasNarasimhan
Follow Jorge Conde on X: https://x.com/JorgeCondeBio

Stay Updated:
Find a16z on YouTube
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely “solved” through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod: On YouTube!

Timestamps
* 00:00 Introduction to Benchmarking and the “Solved” Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The “Bitter Lesson” and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the “Virtual Cell”
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right “valley” in the energy landscape, they likely possess a “light understanding” of physics to refine the local minimum.

The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static set of coordinates) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the “averaging” effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the “bitter lesson” favoring general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level “spec” (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction—quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just “regurgitating” known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly
launched Boltz Lab platform provides “agents” for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.TranscriptRJ [00:05:35]: But the goal remains to, like, you know, really challenge the models, like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.Gabriel [00:06:26]: Yeah, it's interesting you say that, like, in some sense, CASP, you know, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that was, you know, that a lot of progress was made on was the ability to predict the structure of single chain proteins. So proteins can, like, be composed of many chains. And single chain proteins are, you know, just a single sequence of amino acids. And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints. That comes from evolutionary landscapes. 
So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together, and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimension. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved. From the perspective of structure prediction, when it isn't, it's much more challenging. And I think it's also worth also differentiating the, sometimes we confound a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured, like, state. And that I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain in whatever form it was originally to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like, the different, you know, proteins are not static. They move, they take different shapes based on their energy states. 
And I think we are also not that good at understanding the different states that the protein can be in, and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints we were able to, you know, make such dramatic progress.Brandon [00:09:45]: So I want to ask, why do the intermediate states matter? But first, I kind of want to understand: why do we care what proteins are shaped like?Gabriel [00:09:54]: Yeah, I mean, proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells, you know, work is typically through proteins, sometimes other molecules, sort of intermediate interactions. And through those interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology (how our body works, how diseases work), we often try to boil it down to, okay, what is going right in the case of, you know, our normal biological function and what is going wrong in the case of the disease state. And we boil it down to kind of, you know, proteins and kind of other molecules and their interactions. And so when we try predicting the structure of proteins, it's critical to, you know, have an understanding of kind of those interactions. It's a bit like the difference between having kind of a list of parts that you would put in a car and seeing kind of the car in its final form: you know, seeing the car really helps you understand what it does. On the other hand, kind of going to your question of, you know, why do we care about how the protein folds, or, you know, how the car is made: to some extent it's that, you know, sometimes when something goes wrong, you know, there are, you know, cases of, you know, proteins misfolding.
In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.RJ [00:11:30]: There's this nice line in, I think, the AlphaFold 2 manuscript, where they sort of discuss why we're even hopeful that we can tackle the problem in the first place. And there's this notion that, like, well, for proteins that fold, the folding process is almost instantaneous, which is a strong, like, you know, signal that, yeah, we might be able to predict this very, like, constrained thing that the protein does so quickly. And of course that's not the case for, you know, for all proteins. And there's a lot of, like, really interesting mechanisms in the cells, but yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem is that it used to be studied (and this is part of the reason why people thought it was impossible) as kind of a classical example of an NP problem. Like, there are so many different, you know, types of, you know, shapes that, you know, this amino acid chain could take. And so this grows combinatorially with the size of the sequence. And so there used to be kind of a lot of more theoretical computer science thinking about and studying protein folding as an NP problem. And so it was very surprising also, from that perspective, kind of seeing machine learning crack it. Clearly there is some, you know, signal in those sequences, through evolution, but also through kind of other things that, you know, us as humans, we're probably not really able to, uh, to understand, but that these models have learned.Brandon [00:13:07]: And so Andrew White, we were talking to him a few weeks ago, and he said that he was following the development of this and that there were actually ASICs that were developed just to solve this problem. So, again, there were many, many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations and that you see this again and again in different species. So explain: why does that give us a good hint that they're close by to each other?RJ [00:13:41]: Um, like, think of it this way: if I have, you know, some amino acid that mutates, it's going to impact everything around it, right? In three dimensions. And so it's almost like the protein, through several, probably random, mutations and evolution, you know, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. Uh, so the whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of like different positions compensating for each other.Brandon [00:14:17]: I see. Those hints in aggregate give us a lot. Yeah. So you can start to look at what is close to each other, and then you can start to look at what kinds of folds are possible given the structure and then what is the end state. And therefore you can make a lot of inferences about what the actual total shape is.RJ [00:14:30]: Yeah, that's right.
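RJ's correlated-mutation intuition can be sketched numerically. A toy Python example (not how AlphaFold or Boltz actually ingest alignments — production models use learned attention over the MSA, with corrections such as APC — and the alignment below is invented for illustration) that scores co-evolution between alignment columns with mutual information:

```python
import numpy as np
from collections import Counter
from itertools import combinations

def column_mutual_information(msa):
    """Score co-evolution between every pair of MSA columns.

    msa: list of equal-length aligned sequences (strings).
    Returns an (L, L) matrix of mutual information; a high score hints
    that two positions mutate together and so may be close in 3D.
    """
    n = len(msa)
    L = len(msa[0])
    cols = [[seq[i] for seq in msa] for i in range(L)]
    mi = np.zeros((L, L))
    for i, j in combinations(range(L), 2):
        joint = Counter(zip(cols[i], cols[j]))   # joint residue frequencies
        pi, pj = Counter(cols[i]), Counter(cols[j])  # marginal frequencies
        score = sum(
            (c / n) * np.log((c / n) / ((pi[a] / n) * (pj[b] / n)))
            for (a, b), c in joint.items()
        )
        mi[i, j] = mi[j, i] = score
    return mi

# Toy alignment: positions 1 and 3 always mutate together (A <-> T),
# mimicking a compensating pair; every other column is conserved.
msa = ["GAKAT", "GTKTT", "GAKAT", "GTKTT", "GAKAT", "GTKTT"]
mi = column_mutual_information(msa)
# The (1, 3) pair gets the highest score, i.e. the "contact hint".
```

In aggregate, a matrix like this is the kind of pairwise signal the models decode far more effectively than the raw statistic shown here.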
It's almost like, you know, you have this big, like, three-dimensional valley, you know, where you're sort of trying to find these low-energy states, and there's so much to search through that it's almost overwhelming. But these hints, they sort of maybe put you in an area of the space that's already, like, kind of close to the solution, maybe not quite there yet. And there's always this question of, like, how much physics are these models learning, you know, versus just pure, like, statistics. And, like, I think one of the things, at least that I believe, is that once you're in that sort of approximate area of the solution space, then the models have, like, some understanding, you know, of how to get you to, like, you know, the lower-energy, uh, low-energy state. And so maybe you have some light understanding of physics, but maybe not quite enough, you know, to know how to, like, navigate the whole space.Brandon [00:15:25]: Right. Okay. So we need to give it these hints to kind of get into the right valley, and then it finds the minimum or something. Yeah.Gabriel [00:15:31]: One interesting explanation of how AlphaFold works that I think is quite insightful (of course, it doesn't cover kind of the entirety of what AlphaFold does) is one I'm going to borrow from, uh, Sergio Chinico from MIT. So the way he sees it, the interesting thing about AlphaFold is it's got this very peculiar architecture that we have seen, you know, used, and this architecture operates on this, you know, pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other. MSA is multiple sequence alignment? Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, then it's almost as if the model is sort of running some kind of, you know, Dijkstra-like algorithm, where it's sort of decoding: okay, these have to be close. Okay, then if these are close and this is connected to this, then this has to be somewhat close. And so you decode this, and that becomes basically a pairwise distance matrix. And then from this rough pairwise distance matrix, you decode kind of the actual potential structure.Brandon [00:16:42]: Interesting. So there's kind of two different things going on, the kind of coarse-grained and then the fine-grained optimizations. Interesting. Yeah. Very cool.Gabriel [00:16:53]: Yeah. You mentioned AlphaFold 3. So maybe it's a good time to move on to that. So yeah, AlphaFold 2 came out and it was, like, I think fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold 3 came out, and maybe for some more history, like, what were the advancements in AlphaFold 3? And then I think after that we'll talk a bit about how it connects to Boltz. But anyway. Yeah. So after AlphaFold 2 came out, you know, Jeremy and I got into the field, and with many others, you know, the clear problem that, you know, was, you know, obvious after that was: okay, now we can do individual chains. Can we do interactions? Interactions between different proteins, proteins with small molecules, proteins with other molecules. And so, why are interactions important? Interactions are important because to some extent that's kind of the way that, you know, these machines, you know, these proteins have a function, you know, the function comes by the way that they interact with other proteins and other molecules. Actually, in the first place, you know, the individual machines are often, as Jeremy was mentioning, not made of a single chain, but they're made of multiple chains. And then these multiple chains interact with other molecules to give the function to those.
And on the other hand, you know, when we try to intervene on these interactions, think about, like, a disease, think about, like, a biosensor, or many other ways we are trying to design molecules or proteins that interact in a particular way with what we would call a target protein, or target. You know, this problem, after AlphaFold 2, became clear as kind of one of the biggest problems in the field to solve. Many groups, including kind of ours and others, you know, started making contributions to this problem of trying to model these interactions. And AlphaFold 3 was, you know, a significant advancement on the problem of modeling interactions. And one of the interesting things that they were able to do, while, you know, some of the rest of the field really tried to model different interactions separately (you know, how a protein interacts with small molecules, how a protein interacts with other proteins, how RNA or DNA have their structure), is they put everything together and, you know, trained very large models, with a lot of advances, including kind of changing some of the key architectural choices, and managed to get a single model that was able to set this new state-of-the-art performance across all of these different kind of modalities, whether that was protein-small molecule, which is critical to developing kind of new drugs, protein-protein, or understanding, you know, interactions of, you know, proteins with RNA and DNA, and so on.Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data changes that made that possible?Gabriel [00:19:48]: Yeah, so one critical one, that was not necessarily just unique to AlphaFold 3 (there were actually a few other teams, including ours, in the field that proposed this), was moving from, you know, modeling structure prediction as a regression problem.
So, where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem, where you have a posterior distribution of possible structures and you're trying to sample from this distribution. And this achieves two things. One is it starts to allow us to model more dynamic systems. As we said, you know, some of these proteins can actually take multiple structures. And so, you know, you can now model that, you know, through kind of modeling the entire distribution. But on the second hand, from more kind of core modeling questions, when you move from a regression problem to a generative modeling problem, you are really tackling the way that you think about uncertainty in the model in a different way. So if you think, you know, "I'm undecided between different answers," what's going to happen in a regression model is that, you know, I'm going to try to make an average of those different answers that I had in mind. When you have a generative model, what you're going to do is, you know, sample all these different answers and then maybe use separate models to analyze those different answers and pick out the best. So that was kind of one of the critical improvements. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes those pairwise representations and turns them into an actual structure. And that now looks a lot more like a more traditional transformer than, you know, the very specialized equivariant architecture that there was in AlphaFold 2.Brandon [00:21:41]: So this is the bitter lesson, a little bit.Gabriel [00:21:45]: There is some aspect of the bitter lesson, but the interesting thing is that it's very far from, you know, being like a simple transformer. This field is one of the, I argue, very few fields in applied machine learning where we still have architectures that are very specialized.
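The "averaging" failure Gabriel describes is easy to show in one dimension. A deliberately simplified NumPy sketch (illustrative only, not an actual diffusion model): if a coordinate genuinely occupies two states at ±1, the MSE-optimal regression output is their mean, a position the molecule never occupies, while a generative model samples the modes themselves:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth is bimodal: a coordinate that sits at -1.0 or +1.0,
# two equally likely conformational states.
observations = rng.choice([-1.0, 1.0], size=10_000)

# A model trained with squared error converges to the conditional mean:
# roughly 0.0, an averaged "structure" that never actually occurs.
regression_prediction = observations.mean()

# A generative model instead draws samples from the learned posterior,
# recovering the two real states. Here we sample the true distribution
# directly, standing in for a trained sampler.
generative_predictions = rng.choice([-1.0, 1.0], size=5)
```

A downstream ranking or confidence model can then score each sample and keep the best, which is roughly the role confidence heads play after diffusion sampling.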
And, you know, there are many people that have tried to replace these architectures with, you know, simple transformers. And, you know, there is a lot of debate in the field, but I think kind of the consensus is that, you know, the performance that we get from the specialized architectures is vastly superior to what we get through a single transformer. Another interesting thing, staying on the modeling and machine learning side, which I think is somewhat counterintuitive compared to some of the other kind of fields and applications, is that scaling hasn't really worked kind of the same in this field. Now, you know, models like AlphaFold 2 and AlphaFold 3 are, you know, still very large models.RJ [00:29:14]: ...in a place, I think, where we had, you know, some experience working, you know, with the data and working with this type of models. And I think that put us already in, like, a good place to, you know, produce it quickly. And, you know, I would even say, like, I think we could have done it quicker. The problem was, like, for a while, we didn't really have the compute. And so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had. We could only train it once. And so, like, while the model was training, we were, like, finding bugs left and right. A lot of them that I wrote. And, like, I remember, like, I was, like, sort of, like, you know, doing, like, surgery in the middle, like, stopping the run, making the fix, like, relaunching. And yeah, we never actually went back to the start. We just, like, kept training it with, like, the bug fixes along the way, which would be impossible to reproduce now. Yeah, yeah, no, that model, like, has gone through such a curriculum that, you know, it learned some weird stuff.
But yeah, somehow, by miracle, it worked out.
Gabriel [00:30:13]: The other funny thing is that we were training most of that model on a cluster from the Department of Energy. But that's a shared cluster that many groups use, so we were basically training the model for two days, and then it would go back to the queue and stay a week in the queue.
Brandon: Oh, yeah.
Gabriel: And so it was pretty painful. Towards the end, I was talking with Evan, the CEO of Genesis, telling him a bit about the project and about this frustration with the compute. Luckily, he offered to help, and we got help from Genesis to finish up the model. Otherwise, it probably would have taken a couple of extra weeks.
Brandon [00:30:57]: Yeah, yeah.
Brandon [00:31:02]: And then there's some progression from there.
Gabriel [00:31:06]: Yeah. I would say that Boltz-1, but also the other set of models that came out around the same time, were a big leap from the previous open-source models, really approaching the level of AlphaFold3. But I would still say that, even to this day, there are some specific instances where AlphaFold3 works better. One common example is antibody-antigen prediction, where AlphaFold3 still seems to have an edge in many situations. Obviously, these are somewhat different models: you run them, you obtain different results. So it's not always the case that one model is better than the other, but in aggregate we still saw it, especially at the time.
Brandon [00:32:00]: So AlphaFold3 still has a bit of an edge.
We should talk about this more when we talk about BoltzGen, but how do you know one model is better than the other? I make a prediction, you make a prediction; how do you know?
Gabriel [00:32:11]: Yeah. The great thing about structure prediction (once we go into the design space of designing new small molecules and new proteins, this becomes a lot more complex) is that, a bit like CASP was doing, you can evaluate by training on structures released across the field up until a certain time. One of the things we didn't talk about that was really critical in all this development is the PDB, the Protein Data Bank. It's this common resource, basically a common database where every structural biologist publishes their structures. So we can train on all the structures that were deposited in the PDB until a certain date, and then we look for recent structures: which structures look pretty different from anything that was published before? Because we really want to understand generalization. And then, on these new structures, we evaluate all these different models.
Brandon [00:33:13]: So you just know when AlphaFold3 was trained, and you intentionally train to the same date, or something like that.
Gabriel: Exactly. Right.
Gabriel [00:33:24]: And so this is the way you can somewhat easily compare these models. Obviously, that makes assumptions about the training.
Brandon: You've always been very passionate about validation. I remember DiffDock, and then there was DiffDock-L and DockGen. You've thought very carefully about this in the past.
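The temporal-split evaluation described above can be sketched in a few lines. The entry fields, the cutoff date, and the similarity scores here are illustrative stand-ins, not a real PDB schema: the point is simply to train on everything released before a cutoff and to test only on later structures that are dissimilar to the training set.

```python
# Hedged sketch of a temporal-split benchmark: train on PDB entries released
# before a cutoff, test only on later entries unlike anything seen in
# training. All field names and values below are made up for illustration.
from datetime import date

entries = [
    {"id": "7ABC", "released": date(2020, 5, 1), "max_train_similarity": 0.95},
    {"id": "8DEF", "released": date(2023, 2, 1), "max_train_similarity": 0.30},
    {"id": "8GHI", "released": date(2023, 6, 1), "max_train_similarity": 0.85},
]

CUTOFF = date(2021, 9, 30)   # train/test split date (illustrative)
NOVELTY = 0.40               # keep only structures dissimilar to training data

train = [e for e in entries if e["released"] <= CUTOFF]
test = [e for e in entries if e["released"] > CUTOFF
        and e["max_train_similarity"] < NOVELTY]

print([e["id"] for e in train])  # pre-cutoff structures
print([e["id"] for e in test])   # novel post-cutoff structures only
```

The novelty filter is what makes this a generalization test rather than a memorization test: a recent structure that closely resembles a training structure (like "8GHI" here) is excluded.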
Like, actually, I think DockGen is a really funny story. I don't know if you want to talk about that.
Gabriel: Yeah. One of the amazing things about putting things open source is that we get a ton of feedback from the field. Sometimes we get great feedback from people who really like it. But honestly, most of the time, and to be honest that's maybe the most useful feedback, it's people sharing where it doesn't work. At the end of the day, to make progress in machine learning, it's always critical to set clear benchmarks. And as you start making progress on certain benchmarks, you need to improve the benchmarks and make them harder and harder. This is how the field operates. So the example of DockGen: we published this initial model called DiffDock in my first year of PhD, which was one of the early models to predict interactions between proteins and small molecules, and which we brought out a year after AlphaFold2 was published. On the one hand, on the benchmarks we were using at the time, DiffDock was doing really well, outperforming some of the traditional physics-based methods. But on the other hand, when we started giving these tools to many biologists, one example being the group we collaborated with, Nick Polizzi's at Harvard, we started noticing a clear pattern: for proteins that were very different from the ones the model was trained on, the model was struggling. And so it seemed clear that this was probably where we should put our focus.
So we first developed, with Nick and his group, a new benchmark, and then went after it and said, okay, what can we change about the current architecture to improve this pattern of generalization? And it's the same thing we're still doing today: where does the model not work? And once we have that benchmark, let's throw everything we have, any ideas we have about the problem, at it.
RJ [00:36:15]: And there's a lot of healthy skepticism in the field, which I think is great. It's very clear that there's a ton of things the models don't really work well on, but one thing that's probably undeniable is just the pace of progress, how much better we're getting every year. So if you assume any constant rate of progress moving forward, I think things are going to look pretty cool at some point in the future.
Gabriel [00:36:42]: ChatGPT was only three years ago.
RJ [00:36:45]: Yeah, I mean, it's wild, right? It's one of those things: even being in the field, you don't see it coming. Hopefully we'll continue to have as much progress as we've had the past few years.
Brandon [00:36:55]: So this is maybe an aside, but I'm really curious. You get this great feedback from the community by being open source, right? My question is partly, okay, if you open source, everyone can copy what you did. But it's also about balancing priorities, right? The community is saying, I want this, there are all these problems with the model. But maybe my customers don't care, right? So how do you think about that?
Gabriel [00:37:26]: Yeah. So I would say a couple of things. One is that part of our goal with Boltz, and this is also established as the mission of the public benefit company that we started, is to democratize access to these tools. But one of the reasons we realized Boltz needed to be a company, and couldn't just be an academic project, is that putting a model on GitHub is definitely not enough to get chemists and biologists, across academia, biotech, and pharma, to use your model in their therapeutic programs. So a lot of what we think about at Boltz, beyond just the models, is all the layers that come on top of the models: how to get from those models to something that can really enable scientists in the industry. That goes into building the right workflows, workflows that take in, for example, the data and try to directly answer the questions that the chemists and the biologists are asking, and then also building the infrastructure. All this to say that even with models fully open, we see a ton of potential for products in the space. And the critical part about a product is that even with an open-source model, running the model is not free. As we were saying, these are pretty expensive models, and, maybe we'll get into this, these days we're seeing pretty dramatic inference-time scaling of these models, where the more you run them, the better the results are. But there you start getting to a point where compute, and compute costs, become a critical factor.
So putting a lot of work into building the right infrastructure, building the optimizations and so on, really allows us to provide a much better service than just the open-source models. That said, even though with a product we can provide a much better service, I do still think, and we will continue to put a lot of our models open source, that the critical role of open-source models is helping the community make progress on the research, from which we all benefit. So we'll continue, on the one hand, to put some of our base models open source so that the field can build on top of them, and as we discussed earlier, we learn a ton from the way the field uses and builds on top of our models. But then we try to build a product that gives the best experience possible to scientists, so that a chemist or a biologist doesn't need to spin up a GPU and set up our open-source model in a particular way. Even though I am a computer scientist, a machine learning scientist, I don't necessarily take an open-source LLM and spin it up myself; I just open the ChatGPT app or Claude Code and use it as an amazing product. We want to give the same experience on this front.
Brandon [00:40:40]: I heard a good analogy yesterday: a surgeon doesn't want the hospital to design a scalpel, right?
Brandon [00:40:48]: So just buy the scalpel.
RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold3 coming out and the end of the PhD, that would reach out just for us to run AlphaFold3 for them, or things like that.
Just because, and it's the same with Boltz in our case, it's not that easy to do that if you're not a computational person. And I think part of the goal here is also that we obviously continue to build the interface for computational folks, but that the models are also accessible to a larger, broader audience. And that comes from good interfaces and things like that.
Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release of it, you didn't just release a model, you created a community. And that community grew very quickly. Did that surprise you? And what has the evolution of that community been, and how has it fed into Boltz?
RJ [00:41:43]: If you look at its growth, it's very much that when we release a new model, there's a big jump. But yeah, it's been great. We have a Slack community that has thousands of people on it, and it's actually self-sustaining now, which is the really nice part, because it's almost overwhelming to answer everyone's questions and help. It's really difficult for the few people that we were. But it ended up that people would answer each other's questions and help one another. So the Slack has been kind of self-sustaining, and that's been really cool to see.
RJ [00:42:21]: And that's the Slack part, but then also on GitHub we've had a nice community. I think we also aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us. But.
Yeah, the community has been really great, and there are a lot of papers that have come out with new evolutions on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and people converging on ours was really cool. I think it also speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, something we thought a lot about when we released the code base. It's far from perfect, but, you know.
Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, just the focus on making it easy to use, making it accessible?
RJ [00:43:14]: I think so, yeah. We've heard it from a few people over the years now. And some people still think it should be a lot nicer, and they're right. But I think it was, at the time, maybe a little bit easier than other things.
Gabriel [00:43:29]: The other part that I think led to the community, and to some extent the trust in what we put out, is the fact that it's not really been just one model. Maybe we'll talk about it: after Boltz-1, there were another couple of models released, or open-sourced, soon after. We continued that open-source journey with Boltz-2, where we are not only improving structure prediction but also starting to do affinity prediction: understanding the strength of the interactions between these different molecules, which is this critical property that you often want to optimize in discovery programs.
And then, more recently, also a protein design model. So we've been building this suite of models that come together and interact with one another, where there is almost an expectation, which we take very much to heart, of always having, across the entire suite of different tasks, the best or close to the best model out there, so that our open-source tools can be the go-to models for everybody in the industry.
Brandon: I really want to talk about Boltz-2, but before that, one last question in this direction: was there anything about the community that surprised you? Was someone doing something where you thought, why would you do that? That's crazy. Or: that's actually genius, and I never would have thought of it.
RJ [00:45:01]: I mean, we've had many contributions. I think some of the interesting ones: we had this one individual who wrote a complex GPU kernel for part of the architecture. The funny thing is that piece of the architecture had been there since AlphaFold2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, people figuring out ways to hack the model to do things like cyclic peptides. I don't know if any other interesting ones come to mind.
Gabriel [00:45:41]: One cool one, and this was something initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer.
What he noticed is that the models were somewhat stuck, predicting the antibody binding in the same place. And in this model you can condition, basically give hints. So he gave the model a scan of hints: you should bind to the first residue, or you should bind to the 11th residue, or the 21st residue, basically every 10 residues, scanning the entire antigen.
Brandon [00:46:33]: Residues are the...
Gabriel [00:46:34]: The amino acids, yeah. So the first amino acid, the 11th amino acid, and so on. So it's doing a scan, conditioning the model to predict all of them, then looking at the confidence of the model in each of those cases and taking the top. It's a somewhat crude way of doing inference-time search. But surprisingly, for antibody-antigen prediction, it actually helped quite a bit. So there are some interesting ideas where, as the developer of the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking: okay, can I do this not with brute force, but in a smarter way?
RJ [00:47:22]: And so we've also done a lot of work in that direction. And that speaks to the power of scoring. We're seeing that a lot, and I'm sure we'll talk about it more when we talk about BoltzGen. But our ability to take a structure and determine that that structure is good, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models.
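The epitope-scan trick described above is a simple instance of inference-time search via scoring, and it can be sketched in a few lines. `predict_with_hint` here is a hypothetical stand-in for running a structure model conditioned on a contact hint; it is not a real Boltz API.

```python
# Sketch of the "hint every 10th residue, keep the most confident prediction"
# trick. predict_with_hint is a mock: a real pipeline would run the structure
# model conditioned on a pocket/contact constraint and read off its
# self-reported confidence.
import random

def predict_with_hint(hint_residue: int) -> dict:
    # Stand-in model call; returns a mock confidence that is
    # deterministic per hint, purely for illustration.
    rng = random.Random(hint_residue)
    return {"hint": hint_residue, "confidence": rng.random()}

antigen_length = 120
# Hint at residues 1, 11, 21, ..., scanning the entire antigen.
candidates = [predict_with_hint(r) for r in range(1, antigen_length + 1, 10)]

# Crude inference-time search: keep the hint the model is most confident in.
best = max(candidates, key=lambda c: c["confidence"])
print(best["hint"], round(best["confidence"], 3))
```

The same sample-many-then-rank skeleton underlies the scoring-based inference-time scaling discussed in the surrounding exchange; only the proposal mechanism and the scorer change.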
If you can sample a ton, and you assume that if you sample enough you're likely to have the good structure in there, then it really just becomes a ranking problem. And part of the inference-time scaling that Gabby was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. So I think our ability to get better at ranking is also what's going to enable the next big breakthroughs.
Brandon [00:48:17]: Interesting. But I guess, my understanding is, there's a diffusion model, you generate some stuff, and then, I guess it's just what you said, you rank it using a score, and then you finally... So can you talk about those different parts?
Gabriel [00:48:34]: Yeah. First of all, one of the critical beliefs we had when we started working on Boltz-1 was that structure prediction models are somewhat our field's version of foundation models: they learn how proteins and other molecules interact, and then we can leverage that learning to do all sorts of other things. With Boltz-2, we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight is that interaction? For BoltzGen, what we did was take that foundation model and fine-tune it to predict entire new proteins. The way that works, basically, is that for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein.
And also what the different amino acids of that protein are. So the way BoltzGen operates is that you feed in a target protein that you may want to bind to, or DNA, RNA, and then you feed in the high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework; it could be a peptide; it could be many other things.
Brandon: And that's with natural language, or?
Gabriel: That's basically prompting. We have this sort of spec that you specify, and you feed the spec to the model. The model translates it into a set of conditioning tokens and a set of blank tokens, and then, as part of the diffusion model, it decodes a new structure and a new sequence for your protein. Then we take that, and as Jeremy was saying, we try to score it: how good a binder is it to the original target?
Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule. And then that kind of gives you a score?
Gabriel [00:51:03]: Exactly. You use the model to predict the folding, and then you do two things. One is that you re-predict the structure with something like Boltz-2, and then you compare that structure with what the design model predicted. In the field this is called consistency: you want to make sure that the structure you're predicting is actually what you're trying to design, and that gives you much better confidence that it's a good design. So that's the first filter.
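The consistency filter just described, refolding the designed sequence and checking that it matches the design, can be sketched as a coordinate comparison. The coordinates, the cutoff, and the pre-aligned frames here are illustrative assumptions; real pipelines superimpose the structures first and use metrics like RMSD or TM-score.

```python
# Hedged sketch of a self-consistency check: compare the designed structure
# against the refolded one via RMSD and keep the design only if they agree.
# Coordinates are toy values assumed to be already aligned.
import math

def rmsd(a: list, b: list) -> float:
    # Root-mean-square deviation between two equal-length coordinate lists.
    assert len(a) == len(b)
    sq = sum((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
             for (xa, ya, za), (xb, yb, zb) in zip(a, b))
    return math.sqrt(sq / len(a))

designed = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.2, 0.0)]  # design model output
refolded = [(0.1, 0.0, 0.0), (1.4, 0.1, 0.0), (3.1, 0.2, 0.1)]  # structure predictor output

CONSISTENCY_CUTOFF = 2.0  # angstroms; an illustrative threshold
passes = rmsd(designed, refolded) < CONSISTENCY_CUTOFF
print(round(rmsd(designed, refolded), 3), passes)
```

A design whose refolded structure drifts far from the intended one fails the filter, which is the "make sure what you're predicting is what you're designing" idea in the conversation.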
And the second filter that we did as part of the BoltzGen pipeline that was released is that we look at the confidence the model has in the structure. Now, unfortunately, going to your question about predicting affinity, confidence is not a very good predictor of affinity. And one of the things we've actually made a ton of progress on since we released Boltz-2, and we have some new results that we're going to announce soon, is the ability to get much better hit rates when, instead of relying on the confidence of the model, we directly try to predict the affinity of the interaction.
Brandon [00:52:03]: Okay. Just backing up a minute. So your diffusion model actually predicts not only the protein sequence, but also the folding of it?
Gabriel [00:52:32]: Exactly. And actually, one of the big things we did differently compared to other models in the space, and there were some papers that had done this before, but we really scaled it up, was basically merging structure prediction and sequence prediction into almost the same task. The way it works is that the only thing the model is doing is predicting the structure. The only supervision we give is supervision on the structure. But because the structure is atomic, and the different amino acids have different atomic compositions, from the way the model places the atoms we recover not only the structure it wanted, but also the identity of the amino acid the model believed was there. So instead of having these two supervision signals, one discrete and one continuous, that somewhat don't interact well together.
We built an encoding of sequences in structures that allows us to use exactly the same supervision signal we were using for Boltz-2, which is largely similar to what AlphaFold3 proposed and which is very scalable, and use that to design new proteins.
Brandon: Oh, interesting.
RJ [00:53:58]: Maybe a quick shout-out to Hannes Stark on our team, who did all this work.
Gabriel [00:54:04]: Yeah, that was a really cool idea.
Brandon: Looking at the paper, there's this encoding where you just add a bunch of atoms, which can be anything, and then they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. There's sort of a unique way of doing this. It was such a cool, fun idea.
RJ [00:54:29]: I think that idea had existed before.
Gabriel [00:54:33]: Yeah, there were a couple of papers that had proposed this, and Hannes really took it to large scale.
Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the validation of the model. In my opinion, and basically all the people we talk to feel this, wet-lab validation, or whatever the appropriate real-world validation is, is the whole problem, or not the whole problem, but a big, giant part of the problem. So can you talk a little bit about the highlights from there? Because to me the results are impressive, both from the perspective of the model and just the effort that went into the validation by a large team.
Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola and Regina Barzilay's labs, and at Boltz, we are not a bio lab, and we are not a therapeutics company.
So to some extent we were forced to look outside our group, our team, to do the experimental validation. One of the things Hannes and the team pioneered was the idea: can we test this model not just with one specific group, on one specific system, where you maybe overfit a bit to that system, but across a very wide variety of different settings? Protein design is such a wide task, with all sorts of different applications, from therapeutics to biosensors and many others. So can we get a validation that goes across many different tasks? He basically put together something like 25 different academic and industry labs that committed to testing some of the designs from the model, and some of this testing is still ongoing, giving results back to us in exchange for hopefully getting some great new sequences for their task. He was able to coordinate this very wide set of scientists, and already in the paper I think we shared results from eight to ten different labs: results designing peptides targeting ordered proteins, peptides targeting disordered proteins, results designing proteins that bind to small molecules, results designing nanobodies, and across a wide variety of different targets. So that gave the paper a lot of validation of the model, validation that was wide.
Brandon [00:57:39]: And would those be therapeutics for those animals, or are they relevant to humans as well?
They're relevant to humans as well.
Gabriel [00:57:45]: Obviously, you need to do some work on, quote unquote, humanizing them: making sure they have the right characteristics so they're not toxic to humans and so on.
RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern, I think, of trying to design things that are smaller: it's easier to manufacture. At the same time, that comes with other potential challenges, maybe a little less selectivity than something that has more hands. But yeah, there's this big desire to design mini-proteins, nanobodies, small peptides, modalities that are just great drug modalities.
Brandon [00:58:27]: Okay. I think we left off talking about validation, validation in the lab, and I was very excited about seeing all the diverse validations that you've done. Can you go into more detail about some specific ones?
RJ [00:58:43]: The nanobody one, I think. We did, what was it, 15 targets? 14. 14 targets. The way this typically works is that we make a lot of designs, on the order of tens of thousands, and then we rank them and pick the top N, in this case N was 15, for each target. Then we measure the success rates: both how many targets we were able to get a binder for, and, more generally, out of all the binders we designed, how many actually proved to be good binders. Some of the other ones: we had a cool one where there was a small molecule and we designed a protein that binds to it. That has a lot of interesting applications, for example, as Gabri mentioned, biosensing and things like that, which is pretty cool.
We had a disordered protein, I think you mentioned, also. And yeah, I think some of those were the highlights.
Gabriel [00:59:44]: I would say the way we structured those validations was, on one end, validations across a whole set of different problems that the biologists we were working with came to us with. For example, in some of the experiments we designed peptides that would target the RACC, which is a target involved in metabolism. And we had a number of other applications where we were trying to design peptides or other modalities against other therapeutically relevant targets, and we designed some proteins to bind small molecules. Then some of the other testing we did was really trying to get a broader sense: how does the model work, especially when tested on generalization? One of the things we found with the field was that a lot of the validation, especially outside of validation on specific problems, was done on targets that have a lot of known interactions in the training data. And so it's always a bit hard to understand how much these models are really just regurgitating, or imitating, what they've seen in the training data, versus really being able to design new proteins. So one of the experiments we did was to take nine targets from the PDB, filtering for proteins with no known interaction in the PDB: the model has never seen this particular protein, or a similar protein, bound to another protein. So there is no way the model can, from its training set, just tweak something and imitate a particular known interaction. And so we took those nine proteins.
We worked with Adaptyv, a CRO, and basically tested 15 mini proteins and 15 nanobodies against each one of them. And the very cool thing we saw was that on two thirds of those targets, from those 15 designs, we got nanomolar binders. Nanomolar is, roughly speaking, a measure of how strong the interaction is; a nanomolar binder is approximately the binding strength you need for a therapeutic.

So maybe switching directions a bit. Boltz Lab was just announced this week, or was it last week? This is, I guess, your first product, if you want to call it that. Can you talk about what Boltz Lab is and what you hope people take away from it?

RJ [01:02:44]: As we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own. And there are largely two categories there; actually, I'll split it into three. The first: it's one thing to predict a single interaction, for example a single structure. It's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and there's certainly a need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure the model has a good enough understanding of the target so we can design against it? There are all sorts of tricks you can use to improve a particular structure prediction. So that's the first stage. And then there's the stage of designing and searching the space efficiently.
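The two success metrics mentioned in the campaign above, the fraction of targets with at least one good binder and the overall fraction of designs that bind, amount to simple counting. A toy sketch, with invented Kd values and "nanomolar" read loosely as a Kd under one micromolar:

```python
# Toy hit-rate calculation: per-target success (did any of the tested designs
# bind?) and overall design-level success. Kd values are invented, and the
# 1e-6 M threshold is an assumption for the demo, not a platform definition.

NANOMOLAR = 1e-6  # molar units; anything below this counts as a "binder" here

def hit_rates(kds_by_target, threshold=NANOMOLAR):
    """kds_by_target: {target_id: [measured Kd for each design, in M]}."""
    n_targets = len(kds_by_target)
    targets_hit = sum(
        1 for kds in kds_by_target.values() if any(kd < threshold for kd in kds)
    )
    all_kds = [kd for kds in kds_by_target.values() for kd in kds]
    designs_hit = sum(1 for kd in all_kds if kd < threshold)
    return targets_hit / n_targets, designs_hit / len(all_kds)

# Three invented targets, three designs each.
campaign = {
    "T1": [5e-9, 2e-7, 1e-3],   # two binders
    "T2": [1e-3, 1e-3, 1e-3],   # no binders
    "T3": [8e-8, 1e-3, 1e-3],   # one binder
}
target_rate, design_rate = hit_rates(campaign)
print(target_rate, design_rate)  # 2/3 of targets hit, 3/9 of designs bind
```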
For something like BoltzGen, for example, you design many things and then you rank them. For small molecules the process is a little more complicated: we also need to make sure the molecules are synthesizable. The way we do that is we have a generative model that learns to use appropriate building blocks, such that it designs within a space we know is synthesizable. So there's a whole pipeline of different models involved in being able to design a molecule. That's been the first thing; we call them agents. We have a protein design agent and a small molecule design agent, and that's really at the core of what powers the Boltz Lab platform.

Brandon [01:04:22]: So these agents, are they a language model wrapper, or are they just your models and you're calling them agents? Because they sort of perform a function on behalf of the user.

RJ [01:04:33]: They're more of a recipe, if you wish. I think we use that term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's doing a design campaign. Let's say you're designing a hundred thousand possible candidates to find the good one. That is a very large amount of compute: for small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. So ideally you want to do that in parallel, otherwise it's going to take you weeks.
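The generate, filter for synthesizability, rank, and pick-top-k loop described above can be sketched in a few lines. This is a minimal illustration, not the actual pipeline: the generator, synthesizability check, and scorer below are random or trivial stand-ins for what in practice are learned models.

```python
# Minimal sketch of a design campaign loop: generate many candidates, keep
# only those judged synthesizable, score them, and send the top k to the lab.
import random

random.seed(0)  # fixed seed so the demo is reproducible

def design_campaign(generate, synthesizable, score, n_candidates, top_k):
    """Generate n_candidates, drop infeasible ones, return the top_k by score."""
    candidates = [generate() for _ in range(n_candidates)]
    feasible = [c for c in candidates if synthesizable(c)]
    ranked = sorted(feasible, key=score, reverse=True)
    return ranked[:top_k]

picks = design_campaign(
    generate=lambda: random.random(),   # stand-in for a generative model
    synthesizable=lambda c: c > 0.1,    # stand-in for a synthesizability check
    score=lambda c: c,                  # stand-in for a learned ranking model
    n_candidates=10_000,
    top_k=15,
)
print(len(picks))  # 15 designs go to the lab
```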
And so we've put a lot of effort into our ability to have a GPU fleet that allows any one user to do this kind of large parallel search.

Brandon [01:05:23]: So you're amortizing the cost over your users.

RJ [01:05:27]: Exactly. And to some degree, whether you use 10,000 GPUs for a minute or one GPU for God knows how long, it's the same cost, so you might as well parallelize if you can. A lot of work has gone into that, making it very robust, so we can have a lot of people on the platform doing that at the same time. And the third part is the interface, which comes in two shapes. One is an API, and that's really suited for companies that want to integrate these pipelines, these agents.

RJ [01:06:01]: We're already partnering with a few distributors that are going to integrate our API. The second shape is the user interface, and we've put a lot of thought into that too. This is what I mentioned earlier, this idea of broadening the audience; that's what the user interface is about. We've built a lot of interesting features into it, for example for collaboration: when you have multiple medicinal chemists going through the results and trying to pick out which molecules to go and test in the lab, it's powerful for them to each provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, that we try to solve with that part of the platform.
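The consensus building among chemists described here is, at its core, rank aggregation. A minimal sketch using a Borda count, which is an arbitrary choice of aggregation rule for illustration, not necessarily what the platform uses:

```python
# Sketch of consensus ranking: several chemists each rank the same candidate
# molecules best-to-worst, and a Borda count merges them into one shared
# ordering. Molecule names and rankings are invented for the demo.
from collections import defaultdict

def borda_consensus(rankings):
    """rankings: list of lists, each one chemist's best-to-worst ordering."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, mol in enumerate(ranking):
            scores[mol] += n - position  # best-placed item gets the most points
    # Sort by score (descending), breaking ties alphabetically.
    return sorted(scores, key=lambda m: (-scores[m], m))

chemists = [
    ["mol_a", "mol_b", "mol_c"],
    ["mol_b", "mol_a", "mol_c"],
    ["mol_a", "mol_c", "mol_b"],
]
print(borda_consensus(chemists))  # mol_a wins with 3 + 2 + 3 = 8 points
```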
So Boltz Lab is a combination of these three objectives into one cohesive platform.

Who is this accessible to?

Everyone. You do need to request access today; we're still ramping up usage, but anyone can request access. If you're an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you're a startup or biotech, you can also reach out, and we'll typically hop on a call to understand what you're trying to do, and also provide a lot of free credit to get started. And with larger companies, we can deploy the platform in a more secure environment; those are more customized deals that we make with partners. That's the ethos of Boltz, this idea of serving everyone and not just going after the really large enterprises. It starts from the open source, but it's also a key design principle of the product itself.

Gabriel [01:07:48]: One thing I was thinking about with regards to infrastructure: in the LLM space, the cost of a token has gone down by a factor of a thousand or so over the last three years, right? Is it possible to exploit economies of scale in infrastructure, so that it's cheaper to run these things on your platform than for anyone to roll their own system?

RJ [01:08:08]: A hundred percent. We're already there: running Boltz on our platform, especially on a large screen, is considerably cheaper than it would probably cost anyone to take the open source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models.
Our small molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source. That's also part of building a product: something that scales really well. We really wanted to get to a point where we could keep prices low enough that it's a no-brainer to use Boltz through our platform.

Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But the whole point of this is to design something that doesn't have co-evolution data, something really novel. So you're basically leaving the domain that you know you're good at. How do you validate that?

RJ [01:09:22]: There are obviously a ton of computational metrics we rely on, but those only take you so far. You really have to go to the lab and test: with method A versus method B, how much better is my hit rate? How much stronger are my binders? It's not just about hit rate; it's also about how good the binders are. There's really no way around that. We've ramped up the amount of experimental validation we do so that we track progress as scientifically soundly as possible.

Gabriel [01:10:00]: Yeah. One thing that is unique about us, and maybe companies like us, is that we're not working on just a couple of therapeutic pipelines, where our validation would be focused on those.
When we do an experimental validation, we try to test it across tens of targets, so that on one hand we get a much more statistically significant result, and it really allows us to make progress on the methodological side without being steered by overfitting on any one particular system. And of course we choose, you know, w
In this episode of Outside the OR, Steven C. Katz, MD, FACS shares his journey at the intersection of surgical oncology and industry. From clinical insight to drug development, Dr. Katz discusses how surgeons can play a pivotal role in advancing innovation, translating research into therapeutics, and collaborating with industry partners to bring new treatments to patients. Tune in for an engaging conversation about leadership, entrepreneurship, and how surgical oncologists can help shape the future of drug discovery.
In this episode of Data in Biotech, host Ross Katz sits down with James Yoder, Founder and CEO of OpenBench, to unpack a radical new approach to early-stage drug discovery. James shares how OpenBench's "success-driven" model shifts risk away from biotech partners by only charging for validated hits. They dive deep into computational screening, molecular modeling, and the company's evolving tech stack that's making hit discovery smarter and more accessible. Discover how data, AI, and strategic collaboration are redefining biotech R&D.

What you'll learn in this episode:
>> Why OpenBench moved away from SaaS to a success-based service model
>> How their computational platform predicts binding affinity and screens trillions of compounds
>> The role of data flywheels and ML in improving drug discovery success rates
>> Real-world case studies from biotech collaborations
>> How OpenBench evaluates druggable targets in one week

Meet our guest: James Yoder is the Founder and CEO of OpenBench. With a background in statistics, data science, and applied machine learning, he leads OpenBench's mission to deliver validated drug discovery hits through computational innovation and a success-driven business model.

About the host: Ross Katz is Principal and Data Science Lead at CorrDyn. Ross specializes in building intelligent data systems that empower biotech and healthcare organizations to extract insights and drive innovation.

Connect with our guest: Connect with James Yoder on LinkedIn.

Connect with us: Follow the podcast for more insightful discussions on the latest in biotech and data science. Subscribe and leave a review if you enjoyed this episode! Connect with Ross Katz on LinkedIn.

Sponsored by CorrDyn, a data consultancy: this episode is brought to you by CorrDyn, the leader in data-driven solutions for biotech and healthcare. Discover how CorrDyn is helping organizations turn data into breakthroughs at CorrDyn.
Over the past few years, artificial intelligence has rapidly entered drug discovery, but one of the true "holy grail" challenges inside pharma is no longer just predicting what proteins look like. It is understanding how molecules actually interact: how proteins bind drugs, antibodies, RNA, and each other, and how those insights can guide better decisions long before anything reaches the lab.

Early breakthroughs in structure prediction made protein models widely accessible, but real biology happens at interfaces, in motion, and often in fleeting conformations that determine whether a therapy ultimately succeeds or fails. Today's conversation explores what it means to move into this next chapter, where structural predictions are translated into actionable insight for real-world drug development.

Joining us are two scientists from Merck KGaA, Darmstadt, Germany ( https://www.emdgroup.com/en ), working at the intersection of protein structure prediction, molecular dynamics, and generative design, helping to build internal platforms that turn computational models into practical decision tools for therapeutic discovery.

Dr. Stephanie Linker, Ph.D. is a Senior Computational Biochemist in Merck's Group Digital Innovation unit, where she leads initiatives in generative antibody design, de novo protein binder development, and advanced structure prediction platforms. Her work focuses on how molecular shape, flexibility, and dynamics influence whether a designed molecule actually performs in biological systems.

Dr. Philipp Schnee, Ph.D. is a Computational Protein Design expert at Merck KGaA, currently part of the GoGlobal Data & AI rotation program.
His research bridges high-resolution molecular dynamics simulations with experimental biochemistry to understand protein function, mutation effects, and mechanisms that can be leveraged for enzyme engineering and inhibitor design.

Together, their work reflects a broader shift happening across the pharmaceutical industry: away from static structures and standalone models, and toward integrated platforms that combine folding, binding, ranking, and experimental validation to guide smarter, faster therapeutic decisions.

In this episode, we explore what these next-generation tools can do today, where their limitations remain, and why the ability to move from structure prediction to decision-ready insight may become one of the most important frontiers in modern drug discovery.

Keywords: AI drug discovery, protein structure prediction, computational biology, biologics design, pharmaceutical R&D
AI is everywhere in biotech, but where does it genuinely create impact? In this episode of the ThinkData Podcast, I sat down with Josh Haimson, Co-Founder and CEO of Inductive Bio, to explore how AI-driven virtual chemistry labs are transforming drug discovery.

We cover:
- Where AI truly outperforms traditional approaches in drug development
- Why computational models are finally earning real trust from scientists
- What Inductive Bio's Series A unlocked, and how they're balancing speed vs depth
- The role of industry partnerships and early adopters in shaping the platform
- Where the biggest AI opportunities in drug discovery lie over the next 3–5 years

A grounded, technical, and refreshingly honest conversation about the future of AI-powered therapeutics.
Episode Summary: Potent in vitro hits often fail in vivo. Martin Marro details how robust assay choice and pathway deconvolution can revive GPCR drug discovery programs. Listeners will learn practical approaches to assay development for GPCR drug discovery, the pitfalls of calcium readouts, and how identifying pathway bias impacts translational success. Dr. Marro shares his experience bridging in vitro-in vivo gaps, refining selection flowcharts, and leveraging pharmacology research to drive clinical candidates. His strategic perspective is rooted in years of leading multimodal discovery teams in pharma and biotech.

Key Takeaways:
- Assay selection critically shapes the trajectory from hit to clinic.
- Calcium and IP1 assays may not predict in vivo efficacy for all Gq-coupled receptor targets.
- Alternative pathway analysis may be essential for mechanism elucidation.
- Persistence in probing beyond standard readouts can rescue high-profile discovery programs.
- Team structure and collaborative problem-solving are pivotal in resolving translational bottlenecks.

Explore Dr. GPCR Resources:
- Dr. GPCR Ecosystem
- Membership & Pricing
- Weekly News
Explore the full depth of GPCR resources, events, and member-exclusive tools with Dr. GPCR Premium.

About the Guest: Dr. Martin Marro leads the Cell Pharmacology group in the DOCTA division at Lilly's Seaport Innovation Center in Boston, MA. Trained as a pharmacologist, Dr. Marro has accumulated over 20 years of experience spanning large pharmaceutical firms, including GSK, Novartis, and Lilly, and innovative biotechs such as Tectonic Therapeutic. He holds deep expertise in early drug discovery across small molecules, peptides, and antibody therapeutics for metabolic, cardiovascular, and gastrointestinal diseases. Dr. Marro's research has been central to the discovery and characterization of multiple clinical candidates, with a focus on GPCR target validation, receptor pharmacology, and translational assay strategies.
He played a key role in patenting and developing novel fatty acid-conjugated GLP-1 receptor agonists. Driven by the challenge of translating robust in vitro science into clinical proof-of-concept, Dr. Marro's leadership continues to impact the field of GPCR drug discovery.Keywords: gpcr podcast, assay development, pharmacology research.
Designing proteins that have never existed in nature is no longer sci-fi: it's becoming a real drug discovery strategy. In this episode, Kashif Sadiq, Founder & CEO of DenovAI Biotech, explains how AI is powering a shift from searching for biologic binders to intentionally designing new proteins from scratch.

Kashif shares his journey from studying physics at the University of Cambridge into computational biophysics, and how breakthroughs like AlphaFold from DeepMind helped unlock the next frontier: de novo protein design. Instead of hoping evolution has already produced a usable molecule, Kashif describes how modern AI can engineer bespoke proteins for specific functions, including challenging targets where traditional approaches come up short.

The conversation dives into the sheer scale of "protein space" and why evolution has explored only a tiny fraction of what's possible. Kashif outlines how this opens the door to targeting diseases and biological mechanisms that have historically been considered undruggable, especially where flat protein interfaces or complex signalling pathways have made small molecules ineffective.

Finally, Kashif explains why combining generative AI with physics-based methods is essential to reduce false positives, improve real-world binding performance, and enable "one-shot design", where discovery and optimisation become a single integrated process.
He also shares what keeps him up at night: clinical trial attrition, and why designing better earlier may be the key to improving success later.

Topics Covered:
- De novo protein design vs traditional biologics discovery
- Why evolution explored only a tiny fraction of protein space
- "Programmable biologics" and intentional molecular design
- Alpha Design and designing proteins from the inverse problem
- Antibodies, nanobodies, and therapeutic protein engineering
- Combining generative AI with physics-based validation
- Reducing false positives in protein binding predictions
- "One-shot design" and compressing discovery timelines
- Undruggable targets, flat interfaces, and intracellular signalling
- Clinical trial attrition and what's missing at the preclinical stage
- When the first de novo-designed therapeutic could enter trials

About the Podcast: AI for Pharma Growth is the podcast from pioneering Pharma Artificial Intelligence entrepreneur Dr Andree Bates, created to help pharma, biotech and healthcare organisations understand how AI-based technologies can save time, grow brands, and improve company results. This show blends deep sector experience with practical, no-fluff conversations that demystify AI for biopharma execs, from start-up biotech right through to Big Pharma. Each episode features experts building AI-powered tools that are driving real-world results across discovery, R&D, clinical trials, market access, medical affairs, regulatory, insights, sales, marketing, and more.

Dr. Andree Bates: LinkedIn | Facebook | X
Brant Peterson, Vice President & Fellow at Valo Health, joins Data in Biotech to explore how his team leverages real-world data, genetic insights, and machine learning to de-risk drug discovery. From building causal DAGs to identifying patient subtypes in neurodegenerative diseases like Parkinson's, this episode dives deep into a patient-first, data-driven approach to biomedical innovation.

What you'll learn in this episode:
>> How Valo Health uses real-world evidence and EHR data to prioritize drug targets earlier in the development pipeline
>> Why integrating wet lab experiments and causal DAGs accelerates therapeutic validation
>> The importance of genetic pleiotropy and Mendelian randomization in refining disease hypotheses
>> How Valo Health identifies high-impact patient subgroups in neurodegenerative diseases like Parkinson's and Alzheimer's
>> Where machine learning models succeed and fall short in uncovering mechanisms of disease from sparse longitudinal data

Meet our guest: Brant Peterson is Vice President & Fellow in Data Science at Valo Health. He brings deep expertise in genetics, computational biology, and biomedical innovation. Formerly a Distinguished Data Scientist at Valo and Computational Biologist at Novartis, Brant focuses on leveraging patient-centric data to drive causal discovery in drug development.

About the host: Ross Katz is Principal and Data Science Lead at CorrDyn. Ross specializes in building intelligent data systems that empower biotech and healthcare organizations to extract insights and drive innovation.

Connect with our guest: Connect with Brant Peterson on LinkedIn.

Connect with us: Follow the podcast for more insightful discussions on the latest in biotech and data science. Subscribe and leave a review if you enjoyed this episode! Connect with Ross Katz on LinkedIn.

Sponsored by CorrDyn, a data consultancy: this episode is brought to you by CorrDyn, the leader in data-driven solutions for biotech and healthcare.
Discover how CorrDyn is helping organizations turn data into breakthroughs at CorrDyn.
Jesse Mendelsohn and Michael Grosberg from Model N discuss why U.S. pricing complexity is spreading globally, the collapse of the PBM rebate model, what's really driving the pharmaceutical manufacturing boom, and why direct-to-consumer discount programs won't solve America's drug access problem.

00:00 Introduction to Life Science Success Podcast
00:34 Pressing Issues in Pharmaceutical Manufacturing
00:51 Introducing the Experts from Model N
03:08 Understanding International Reference Pricing
04:14 Impact of US Pricing on Global Drug Launches
09:26 Challenges with Pharmacy Benefit Managers
16:29 Domestic Manufacturing Boom in Pharmaceuticals
23:08 AI in Drug Discovery and Personalized Medicine
29:03 Access and Policy Discussion
30:01 Direct to Consumer Pricing
31:32 TrumpRx Overview
33:11 Compliance Challenges
38:30 Pharmaceutical Revenue Management
41:38 AI in Life Sciences
48:35 Future of Life Sciences
50:45 Concerns and Challenges
53:52 Excitement in Current Work
55:54 Conclusion and Final Thoughts
Synopsis: Fresh from JPM 2026 in San Francisco, Alok Tayi welcomes Johan Luthman, Executive Vice President of R&D at Lundbeck, for a sweeping, deeply personal conversation on the future of neuroscience drug development. From his early days as a Swedish clinician-scientist to leading breakthrough Alzheimer's programs and rebuilding Lundbeck's pipeline from the ground up, Johan shares the pivotal moments, and phone calls, that shaped a 30-year career across AstraZeneca, Merck, Serono, and now Denmark's neuroscience powerhouse. The discussion dives into Lundbeck's bold strategic reset: letting biology lead, de-risking early in patients, embracing rare disease and sleep medicine, and making disciplined bets on monoclonal antibodies, migraine prevention, epilepsy, and neuroendocrine disorders. Johan explains how the company shifted capital toward innovation, rebuilt its portfolio through targeted acquisitions, and built one of the most advanced neuroscience pipelines in pharma today. In one of the episode's most powerful moments, Johan opens up about his personal motivation: caring for family members with Alzheimer's and dedicating his career to diseases of the brain. From AI-driven R&D productivity and adaptive trials to Denmark's unique foundation-owned pharma model, this conversation is a masterclass in scientific rigor, decision-making under uncertainty, and keeping patients at the center of everything. Biography: In 1991, Johan Luthman began his career in the pharmaceutical industry at Astra, later AstraZeneca. In 2005, Johan joined Serono as Head of Neuroscience & Immunology Research, and subsequently, at MerckSerono, as Therapy Area Head, Neurology & Immunology. In 2009, he became CEO of biotech start-up GeNeuro. In late 2009, Johan joined Merck as VP & Franchise Integrator for Neuroscience and Ophthalmology. In 2014, he came to Eisai, where he was Senior Vice President and Head of Clinical Development.
Johan joined Lundbeck as Executive Vice President, R&D in March 2019. Johan is a Swedish national and is trained as a Doctor of Dental Sciences from the Karolinska Institute, Sweden. He also holds a PhD in Neurobiology and Histology as well as an Associate Professor title from the Karolinska Institute, Sweden. Johan is a Member of the Board of Directors of Brain+.
At the JP Morgan Healthcare conference this year, a lot of the discourse around AI drug discovery focused on making the leap from purely in silico drug discovery operations to real-world operations that are able to incorporate wet lab data in an iterative way. This is easily said, but to do it requires innovating new processes and infrastructures. On the sidelines of the show, pharmaphorum's Jonah Comstock caught up with Yann Gaston-Mathe, founder and CEO of Iktos, an AI drug discovery company that just signed a billion euro deal with Servier to put this technology into action. In this quick dispatch from JPM (and we apologise for the shaky audio), Gaston-Mathe describes this shift in AI drug discovery, why it needs to happen, and what it takes, as well as giving some insights on why Servier and Iktos are a good fit as partners. “You need to think about how effective you are in the transition between the in vitro world and the in silico world,” Gaston-Mathe says. “Building on the data which is available is not enough.” You can listen to episode 242 of the pharmaphorum podcast in the player below, download the episode to your computer, or find it - and subscribe to the rest of the series – on Apple Podcasts, Spotify, Overcast, Pocket Casts, Podbean, and pretty much wherever else you download your other podcasts from.
In this week's episode of the Xtalks Life Science Podcast, host Ayesha Rashid, Senior Life Science Journalist at Xtalks, spoke with Michelle Hoffmann, PhD, Executive Director of the Chicago Biomedical Consortium (CBC). CBC is a consortium of biomedical researchers across Northwestern University, the University of Illinois at Chicago and the University of Chicago supported by the Searle Funds at The Chicago Community Trust. The CBC's mission is to stimulate collaboration among scientists to accelerate discovery that will transform biomedical research and improve human health. At the CBC, Dr. Hoffmann is reshaping the organization to train local PhD talent in early-stage commercialization and fund high-potential biopharma innovations from Chicago's universities. Previously, she served as Senior Vice President of Deep Tech at P33 and spent 15 years helping life sciences companies grow as a senior vice president at Back Bay Life Science Advisors, where she worked on major transactions including platform and asset deals with Gilead and AbbVie. Dr. Hoffmann has a PhD in molecular and cellular biology from the University of California Berkeley and completed a postdoctoral fellowship at Brandeis University. For more life science and medical device content, visit the Xtalks Vitals homepage. https://xtalks.com/vitals/ Follow Us on Social Media Twitter: https://twitter.com/Xtalks Instagram: https://www.instagram.com/xtalks/ Facebook: https://www.facebook.com/Xtalks.Webinars/ LinkedIn: https://www.linkedin.com/company/xtalks-webconferences YouTube: https://www.youtube.com/c/XtalksWebinars/featured
What if the future of drug discovery isn't on Earth? In this episode of ScaleUp Radio, Kevin Brent is joined by Aqeel Shamsul, the visionary CEO and co-founder of Frontier Space, a UK-based space biotech company developing shoebox-sized biolabs to unlock the power of microgravity for pharmaceutical R&D. Spun out of Aqeel's PhD at Cranfield University, Frontier Space is on a mission to make in-space research and biomanufacturing accessible, affordable, and impactful, and they've already achieved what few startups can: bootstrapping two space missions and securing over £1.3M in non-dilutive grant funding.
Artificial intelligence is rapidly reshaping the pharmaceutical industry, and nowhere is that more evident than in small-molecule drug discovery. In this episode, we sit down with Tom Shani, CEO and co-founder of ProPhet, an AI-driven biotech company focused on discovering drugs for hard-to-target proteins.

Tom explains how machine learning models, transformers, and AI-driven molecular representations are overcoming the biggest limitations of traditional drug discovery: slow timelines, high failure rates, missing data, and billion-dollar R&D costs. Rather than relying solely on physics-based simulations and trial-and-error lab work, AI systems learn patterns directly from noisy biological data, making them uniquely suited for real-world biology.

The conversation explores how AI can compress drug discovery timelines from decades to years, reduce failed trials, and dramatically lower costs by improving early-stage target and molecule selection. Tom also breaks down why small molecules remain the backbone of modern medicine, how AI enables scalable exploration of vast chemical space, and why trust, regulation, and validation remain the biggest hurdles to adoption.

This episode is essential listening for anyone working in pharma R&D, biotech, AI-driven drug discovery, computational biology, or life sciences innovation.

Topics Covered:
- AI-powered small-molecule drug discovery
- Machine learning vs traditional pharmaceutical R&D
- Hard-to-drug proteins and undruggable targets
- Transformers, AlphaFold, and molecular representations
- Reducing drug discovery timelines and costs
- AI robustness to missing and noisy biological data
- Off-target effects, toxicity, and safety prediction
- The future of AI in pharma and biotech startups

About the Podcast: AI for Pharma Growth is a podcast focused on exploring how artificial intelligence can revolutionise healthcare by addressing disparities and creating equitable systems.
Join us as we unpack groundbreaking technologies, real-world applications, and expert insights to inspire a healthier, more equitable future. This show brings together leading experts and changemakers to demystify AI and show how it's being used to transform healthcare. Whether you're in the medical field, technology sector, or just curious about AI's role in social good, this podcast offers valuable insights. AI For Pharma Growth is the podcast from pioneering Pharma Artificial Intelligence entrepreneur Dr. Andree Bates, created to help organisations understand how AI-based technologies can easily save them time and grow their brands and business. The show blends deep experience in the sector with demystifying AI for all pharma people, from start-up biotech right through to Big Pharma. In this podcast, Dr Andree will teach you the tried-and-true secrets to building a pharma company using AI that anyone can use, at any budget. As the author of many peer-reviewed journal articles and having addressed over 500 industry conferences across the globe, Dr Andree Bates uses her obsession with all things AI and futuretech to help you navigate the sometimes confusing but magical world of AI-powered tools to grow pharma businesses. This podcast features many experts who have developed powerful AI-powered tools that are the secret behind some time-saving and supercharged revenue-generating business results. Those who share their stories and expertise show how AI can be applied to sales, marketing, production, social media, psychology, customer insights, and so much more. Dr. Andree Bates: LinkedIn | Facebook | Twitter
When Patricia Weltin's daughters were diagnosed with Ehlers-Danlos Syndrome after years of uncertainty, she turned her frustration into a global movement. In this episode of Sounds of Science, Patricia shares the story behind Beyond the Diagnosis, a powerful art and advocacy initiative that uses portraiture to humanize rare diseases and inspire empathy in medical professionals, students, and communities around the world. From medical schools to courthouses and even Parisian galleries, the traveling exhibit is reshaping how we see children with rare diseases—not as diagnoses, but as vibrant individuals with stories worth telling. Tune in to hear how Patricia's mission is bridging the gap between science and compassion, and how you can help carry it forward.
Show Notes:
- From Mystery to Medicine: The Science Behind a Mother's Search | Podcast
- Taking a Customized and Collaborative Approach to Therapeutic Development | Podcast
- Rare Disease Research for Drug Development | Charles River
- Rare Disease | Charles River
- Discovery | Charles River
- Beyond The Diagnosis
This is the latest episode of the free DDW Narrated Podcast. The episode covers two articles written for DDW Volume 25, Issue 2, Spring 2024. The first article is called 'Machine learning: developing next generation antibody therapeutics'. Ben Holland, CTO and Co-Founder of Antiverse, discusses how artificial intelligence and machine learning are benefitting antibody discovery and design. The second article is called 'The next generation of AI'. In the piece, DDW Editor Reece Armstrong speaks to Sean McClain, Founder & CEO of Absci, about the rise of generative AI in life sciences. You can listen below, or find The Drug Discovery World Podcast on Spotify, Google Play and Apple Podcasts.
The SLAS2026 Short Course Program consists of 20+ courses that will provide in-depth instruction on topics, issues, and techniques related to the laboratory science and technology community. Short Courses run from February 7-8, 2026. This episode features several instructors who preview their courses, including lesson highlights, takeaways, and why you should attend. Attending SLAS2026? Browse the full course program and sign up for a course here: SLAS2026 Short Course Program
See the timestamps for each course discussion below:
00:00 — NexusXp: An Introduction to the DMTA Cycle in Drug Discovery
08:43 — NexusXp: Advanced DMTA Strategies for the Modern Lab
17:00 — Get the Word Out: Strategic Communication and Marketing in Life Sciences
27:30 — Introduction to Biologics
Stay connected with SLAS: www.slas.org | Facebook | X (@SLAS_Org) | LinkedIn | Instagram (@slas_org) | YouTube
About SLAS: SLAS (Society for Laboratory Automation and Screening) is an international professional society of academic, industry, and government life sciences researchers and the developers and providers of laboratory automation technology. The SLAS mission is to bring together researchers in academia, industry, and government to advance life sciences discovery and technology via education, knowledge exchange, and global community building.
SLAS2026 International Conference & Exhibition, February 7-11, 2026, Boston, MA
SLAS Europe 2026 Conference and Exhibition, 19-21 May 2026, Vienna, Austria
View the full events calendar
Artificial intelligence is transforming healthcare and research—but how much of it is real progress, and how much is hype? In this episode of The I'm Pharmacy Podcast, host Mina Tadrous explores the practical impact of AI across clinical care, population health, and drug discovery. Featuring insights from Dr. Devin Singh (SickKids), Professor Laura Rosella (University of Toronto), and Assistant Professor Rachel Harding (Leslie Dan Faculty of Pharmacy), this episode examines how AI is already improving workflows and research, where limitations and risks remain, and why transparency, validation, and open science are critical to building trust.
Cofounders Jeremy Wohlwend and Gabriele Corso join the a16z podcast to discuss the launch of Boltz, a public benefit company building AI infrastructure for molecular biology. The conversation explains how breakthroughs following AlphaFold moved the field beyond protein structure prediction into modeling biomolecular interactions and binding strength, why open-source Boltz models saw rapid adoption across pharma and biotech, and how that work is now being productized. They outline the launch of Boltz Lab, a platform that brings protein and small-molecule design agents into scientist workflows, Boltz's decision to operate as an infrastructure company rather than a therapeutics company, and how AI could reduce early drug discovery bottlenecks by improving molecular design and speeding iteration between computation and the lab.
Resources:
Follow Gabriele on X: https://twitter.com/GabriCorso
Follow Jeremy on X: https://twitter.com/jeremyWohlwend
Follow Jorge on X: https://twitter.com/jorgecondebio
Follow Zak on X: https://twitter.com/zakdoric
Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. 
See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode of Disruption/Interruption, host KJ sits down with Jurek Kozyra, founder and CEO of Nanovery, to explore how DNA nanotechnology and AI are revolutionizing molecular medicine. Discover how tiny nanorobots made from DNA could dramatically accelerate drug development, make diagnostics faster and more affordable, and potentially cure diseases that were previously untreatable. From detecting diseases in hours instead of days to cutting years off the drug development process, this conversation reveals the cutting-edge science that's transforming healthcare.
Four Key Takeaways:
The Promise of Oligonucleotide Therapeutics (9:06): Traditional medicine targets defective proteins, but many diseases can't be cured because we can't find the right molecule. Oligonucleotide therapeutics target mRNA—the underlying mechanism of disease—meaning you could potentially cure all diseases, since all proteins come from mRNA.
DNA Nanorobots for Rapid Detection (14:12): Nanovery's DNA nanorobots can detect diseases in blood samples within 2-4 hours, compared to traditional lab tests that take two days. These self-assembling machines produce fluorescent signals when they find specific DNA or RNA molecules, enabling point-of-care diagnostics.
Accelerating Drug Development (17:13): Pharmaceutical companies race against 20-year patents while drugs take 10+ years to develop. Nanovery's technology provides more accurate data at lower cost and time, potentially shaving years off the development process and helping more drugs successfully reach the market.
Real-World Clinical Validation (20:26): In a hospital study with 170 patient samples, Nanovery's technology delivered the same or better results as traditional tests in just two hours instead of two days—a game-changer for emergency situations like drug overdoses where immediate answers are critical.
Quote of the Show (9:05): "If you can target mRNA very specifically, that means that in theory you could potentially cure all diseases. 
That's why this area is so exciting right now." – Jurek Kozyra
Join our Anti-PR newsletter where we're keeping a watchful and clever eye on PR trends, PR fails, and interesting news in tech so you don't have to. You're welcome. Want PR that actually matters? Get 30 minutes of expert advice in a fast-paced, zero-nonsense session from Karla Jo Helms, a veteran Crisis PR and Anti-PR Strategist who knows how to tell your story in the best possible light and get the exposure you need to disrupt your industry. Click here to book your call: https://info.jotopr.com/free-anti-pr-eval
Ways to connect with Jurek Kozyra:
LinkedIn: https://www.linkedin.com/in/j3ny/
Company Website: https://nanovery.co.uk
How to get more Disruption/Interruption:
Amazon Music - https://music.amazon.com/podcasts/eccda84d-4d5b-4c52-ba54-7fd8af3cbe87/disruption-interruption
Apple Podcast - https://podcasts.apple.com/us/podcast/disruption-interruption/id1581985755
Spotify - https://open.spotify.com/show/6yGSwcSp8J354awJkCmJlD
See omnystudio.com/listener for privacy information.
This roundtable on the role of AI in the biotech sector features Frank Yocca, Senior VP and Chief Scientific Officer at BioXcel Therapeutics, Joanne Taylor, Senior VP for Research at Gain Therapeutics, and Martin Brenner, CEO and Chief Scientific Officer at iBio. The conversation covers the historical adoption of AI in biotech, its current use in drug discovery, and future possibilities. AI is not a new phenomenon in biotech and has evolved from data processing to sophisticated models that can screen vast amounts of data. There is a critical need for high-quality, structured data to train effective AI models, and these experts caution about the hype surrounding AI-generated discoveries and emphasize the need for real-world biological and human testing. Frank explains, "We are all about AI right from the get-go. We sort of inherited that from the parent company, BioXcel, which is now BioXcel, LLC. The company started by deploying data science on big biomedical and other datasets. Much of the data was unstructured and required significant curation, which at first was largely manual. Later, we began deploying more natural language processing and knowledge graphs to predict whether drugs that initially failed but were safe could be repurposed for other indications. More recently, the latest evolution has really been to use large language models and more agentic workflows to generate hypotheses and insights." Joanne explains, "So Gain has had for many years, I think 10 years also, a virtual drug discovery platform where we've been able to screen millions of compounds virtually to discover allosteric binding molecules. But about three or so years ago, we made the change from screening millions of compounds to screening, now we're up to the capability of screening trillions of compounds." "We can screen in days, whereas it would take you months and maybe a year to do high-throughput screening. 
But in terms of having introduced AI into this system, it means that we can do things better because obviously, if you can screen trillions of compounds, you're screening more of the possibilities, you are going to be making better drugs. At least that's the hypothesis than if you are screening fewer compounds. So it's the fact that this is a fast tool set that makes you able to do things that you wouldn't have been otherwise able to do, but it doesn't necessarily make the process itself that much faster because you are doing much more." Martin elaborates, "So we had the good fortune to start from scratch. We're a very small company. We have made from the get-go the decision that our scientists would be bilingual. They're not only data and AI scientists, but they're also biologists. That makes it a lot easier to translate between the two disciplines. We literally started, or Rubrik Therapeutics started, on the hypothesis that would be a model of structure prediction for proteins. So the company was clearly ahead of its time, and we started by making molecules that set up better than existing ones. And that's, I think, a very low hurdle that a lot of people are doing right now. And you hear sometimes this overreaching argument, we make AI drugs. First of all, tomorrow medicines take 10,000 steps, and enabling three of them is not making an AI drug, but making better molecules. This was the first important step." #BioXcel #GainTherapeutics #iBio #AI #ClinicalAI #ArtificialIntelligence #Biotechnology #DrugDiscovery #PersonalizedMedicine #HealthcareInnovation #BiopharmaAI #ClinicalTrials #RareDisease #Neuroscience #PrecisionMedicine #HealthTech #BiotechLeadership #AIinHealthcare #DrugDevelopment #MedicalInnovation bioxceltherapeutics.com gaintherapeutics.com ibioinc.com Download the transcript here
Good morning from Pharma Daily: the podcast that brings you the most important developments in the pharmaceutical and biotech world. Today, we delve into the significant events of 2025, a year marked by pivotal scientific breakthroughs, regulatory changes, and industry trends that have reshaped drug development and patient care.
One of the standout advancements was Novo Nordisk gaining FDA approval for an oral version of Wegovy, a glucagon-like peptide-1 (GLP-1) receptor agonist for obesity management. This marks a notable shift in treatment accessibility, as it provides an easier alternative to injectables for those managing weight and cardiovascular risks. This development could significantly enhance patient adherence and broaden access to this critical therapy.
However, not all news was positive. Pfizer faced a challenging situation when a patient death occurred in the extension of their Hympavzi hemophilia study. Such incidents highlight the intrinsic risks of clinical trials, especially within gene therapy realms where safety monitoring is paramount. These events remind us of the delicate balance between innovation and patient safety in advanced biologic therapies.
In legal news, Johnson & Johnson was ordered by a Baltimore jury to pay $1.56 billion in a talc-related cancer case. This ruling underscores heightened scrutiny on product safety and consumer protection within the pharmaceutical industry, potentially influencing future litigation and regulatory measures.
Clinical trial outcomes also presented mixed results. Neurocrine Biosciences' Ingrezza did not meet efficacy endpoints in its phase 3 trial for cerebral palsy-related dyskinesia. Although it is approved for other movement disorders, this setback reflects the complexities involved in expanding drug indications. Such challenges highlight ongoing hurdles in translating preclinical successes into clinical realities.
Despite geopolitical tensions, particularly between China and the U.S., Chinese biotech firms thrived, maintaining robust deal activity. China's continued growth as an innovation hub is driven by strategic investments and collaborations that bolster global drug development efforts, underscoring its increasing influence in life sciences.
Regulatory landscapes also shifted with proposals from the Center for Medicare & Medicaid Innovation to align U.S. drug prices with international rates under Medicare Parts B and D. These proposed models could significantly impact pricing strategies and market dynamics within the U.S., requiring pharmaceutical companies to adapt while ensuring equitable access to medications.
Ethical challenges surfaced as six individuals were charged with insider trading involving biotech stocks. Such incidents highlight the necessity for stringent ethical standards and regulatory oversight to maintain investor confidence and market integrity.
Meanwhile, AstraZeneca's extended partnership with Niowave for actinium-225 supply reflects an interest in radiopharmaceuticals as targeted cancer therapies. This collaboration highlights the potential of radiopharmaceuticals in oncology, opening promising avenues for precision medicine approaches.
As 2025 closes, it's clear that this year has been one of both triumphs and trials for the pharmaceutical and biotech industries. Scientific innovations like Novo Nordisk's oral GLP-1 receptor agonist offer new hope for patients, yet challenges such as clinical trial setbacks and legal battles indicate ongoing hurdles in drug development and commercialization. These developments will likely influence industry strategies and regulatory policies as we advance into 2026.
The sustained momentum of China's biotech industry amid global trade tensions remains notable. This trend reflects China's strategic investments in biotech capabilities and its growing role in global markets despite geopolitical frictions.
In clinical research, Hope Bioscience
"There are hundreds, maybe thousands, of drug repurposing opportunities just waiting to be uncovered," explains David Fajgenbaum, M.D. Fajgenbaum, a physician-scientist, bestselling author of Chasing My Cure, co-founder of Every Cure, and a leader in the global push for drug repurposing, joins us today to explain why the cures of tomorrow may already be on pharmacy shelves today—and how his team is racing to uncover them.
- From college athlete to ICU (~3:15)
- Finding a cure (~7:20)
- Hope needs to drive action (~9:45)
- Repurposing drugs (~11:10)
- Use cases of generic drugs (~13:30)
- Lithium for bipolar & Alzheimer's (~16:00)
- Lidocaine & breast cancer (~17:25)
- GLP-1 for longevity benefits (~19:20)
- Increasing awareness in the healthcare system (~20:10)
- The 3 main hurdles for repurposing drugs (~22:00)
- Opportunities in the space (~23:10)
- 14 advanced repurposed treatments (~28:00)
- The power of AI (~32:50)
- Using AI for personalized medicine (~34:30)
- AI for treatment options (~37:45)
- Common drugs with big potential (~41:00)
- The future of healthcare & drug discovery (~44:50)
- How you can help (~49:30)
Referenced in the episode:
- Follow Fajgenbaum on Instagram (@dfajgenbaum)
- Check out his website (https://davidfajgenbaum.com/)
- Pick up his book, Chasing My Cure (https://www.amazon.com/Chasing-My-Cure-Doctors-Action/dp/1524799637/)
- Listen to his TED Talk (https://www.youtube.com/watch?v=sb34MfJjurc)
- Learn more about Every Cure (https://everycure.org/)
We hope you enjoy this episode, and feel free to watch the full video on YouTube! Whether it's an article or podcast, we want to know what we can do to help here at mindbodygreen. Let us know at: podcast@mindbodygreen.com. Learn more about your ad choices. Visit megaphone.fm/adchoices
Dr. Christina Smolke runs a brewery, except the yeast isn't making alcohol. It's making medicine. At Antheia, Smolke has turned a long-shot Stanford research project into a new way to manufacture critical pharmaceutical ingredients, using biology instead of traditional chemistry. The approach is already being used to produce opioid precursors for Narcan, with more drugs in the pipeline aimed at chronic shortages and supply-chain failures. Smolke talks about regulation, security, and why some of the hardest problems in science are worth chasing—especially when everyone says they won't work.
Richard Bonneau, Vice President of Machine Learning for Drug Discovery at Genentech and Roche, provides Pitt's HexAI podcast host, Jordan Gass-Pooré, with an insider view on how his team is fundamentally changing and accelerating how new drug candidate molecules are designed, predicted, and optimized. Geared for students in computational sciences and hybrid STEM fields, the episode introduces listeners to uses of AI and ML in molecular design, the biomolecular structure and structure-function relationships that underpin drug discovery, and how distinct teams at Genentech work together through an integrated computational system. Richard and Jordan use the opportunity to touch on how advances in the molecule design domain can inspire and inform advances in computational pathology and laboratory medicine. Richard also delves into the critical role of Explainable AI (XAI), interpretability, and error estimation in the drug design-prototype-test cycle, and provides advice on domain knowledge and skills needed today by students interested in joining teams like his at Genentech and Roche.
Rick Pierce, Co-Founder and CEO of Decoy Therapeutics, is using AI and machine learning to accelerate drug discovery and is developing broad-acting antivirals using peptide conjugates that target a shared invasion mechanism of hundreds of viruses. The company is using small language models and a high-speed peptide synthesizer to dramatically reduce drug creation time. Rick predicts that the future of drug discovery will combine AI-driven design with advanced biological models, such as organoids, to better predict drug toxicity and efficacy. Rick explains, "Decoy Therapeutics was founded years ago, during the COVID era. And what we've learned during that was that in order to develop drugs rapidly and scale up their manufacturing, we needed to use machine learning and AI. And the drugs that we're looking at developing today as a result of that are broad-acting antivirals that can be used against multiple viruses. So one drug can be used against multiple viruses like Flu, COVID, and RSV." "So we chose antivirals as a space because viruses have what is called polypharmacology, and in plain layman's terms, what that means is that about 250 of these viruses share the same invasion machinery, meaning the way the virus enters the healthy cells is shared across all those viruses. It's slightly different in each of those viruses, but effectively for drug development, very similar." "That allows us to use peptides, which are also alpha helices, to be able to design drugs with AI and machine learning that physically block the invasion machinery and thus basically the virus from binding to a healthy cell. Peptides are uniquely positioned as drugs for this set of viral targets. Again, it's a rich set of targets among 250 viruses across multiple viral families." #DecoyTherapeutics #PeptideConjugates #BroadSpectrumAntiviral #AIinBiotech #NextGenMedicine decoytx.com Download the transcript here
In this episode, we sit down with Caitlyn Krebs, Co-founder and CEO of Nalu Bio, to discuss how her company is leveraging generative AI to revolutionize drug discovery. Caitlyn shares how they are creating novel chemical entities five times faster than traditional methods to tackle massive unmet needs like endometriosis and post-surgical pain. We also dive deep into the business of biotech: the looming $250 billion "Patent Cliff" facing big pharma, the reality of the fundraising "rollercoaster," and why bringing innovation back to the US is critical for the industry's future. If you are interested in the intersection of AI and biology, the future of pain management, or the grit required to build a life sciences startup, you won't want to miss this conversation.
⭐ Sponsored by Podcast10x - Podcasting agency for VCs - https://podcast10x.com
Key Topics Covered:
- The Next GLP-1? Why the endocannabinoid system is the largest regulator in the human body.
- AI in Biotech: How Nalu Bio uses "digital twins" and virtual patients to de-risk drug development.
- The $250B Opportunity: Understanding the massive patent cliff approaching the pharma industry.
- Women's Health: Solving endometriosis with non-hormonal, non-opioid therapeutics.
- Founder Resilience: Caitlyn's story of a lead investor walking away at the final document stage and how she bounced back.
- Building Moats: How to protect IP and technology in a competitive market.
Connect with Caitlyn & Nalu Bio:
* Website: https://nalubio.com
* LinkedIn: https://www.linkedin.com/in/caitlynkrebs
* Email: caitlyn@nalubio.com
VC10X website - https://VC10X.com
Don't forget to LIKE, SUBSCRIBE, and turn on notifications for more deep dives into the future of technology and healthcare!
#Biotech #AI #DrugDiscovery #Endometriosis #Startup #NaluBio #HealthTech #Entrepreneurship #GLP1 #Pharma
Biotech Bytes: Conversations with Biotechnology / Pharmaceutical IT Leaders
AI in Drug Discovery | #aidrugdiscovery #biotechinnovation #medicalinnovation
Amid a rapidly changing biotech landscape, AI is transforming how we discover and develop new medicines. Please visit our website to get more information: https://swangroup.net/
In this episode, I sit down with Smbat Rafayelyan, founder and CEO of Bioneex, a platform that connects early-stage biotech innovators with investors and pharma companies using AI-driven insights. He shares his journey from big pharma to entrepreneurship and how his team is reshaping drug discovery. We explore how personalized large language models are being applied in biotech, the role of data integration in connecting biotech, pharma, and investors, and why China's biotech ecosystem is fueling a surge of innovation.
✅ How personalized AI models improve drug discovery and evaluation
✅ The role of data integration in connecting biotech, pharma, and investors
✅ Global opportunities, including China's emerging biotech sector
If you've ever wondered how AI is making sense of scientific data chaos, this episode is a must-watch.
Links from this episode:
✅ Get to know more about Smbat Rafayelyan: https://www.linkedin.com/in/dr-smbat-rafayelyan/?originalSubdomain=de
✅ Learn more about Bioneex: https://bioneex.com
Marc Tessier-Lavigne, CEO of South San Francisco-based Xaira Therapeutics, on reinventing drug discovery with AI.
November 11, 2025 | What is the next modality to focus on in the next 10 years? For Bahija Jallal, CEO of Immunocore, it would be T-cell engagers. In this episode of The Chain, host Rakesh Dixit speaks with Jallal on the potential advantages of bispecific T-cell engager therapy versus T-cell receptor therapy, biggest anticipated changes in drug discovery and development in the next 10 years, and how AI is going to impact the next generation of scientists. Plus, Jallal shares her experiences as the previous president of MedImmune and at AstraZeneca, what her most rewarding project was, and the transformations and achievements that occurred under her leadership. Links from this episode: Immunocore
AI and digital twins are redrawing the boundaries of drug discovery. Once defined by lab benches, animal studies, and years of trial and error, the field is now embracing virtual methodologies that promise faster, safer, and more precise innovation. But could these technologies ever make animal testing obsolete?

In this episode of Tech Tomorrow, David Elliman speaks with Professor Julie Frearson, SVP and Chief Scientific Officer at Charles River Laboratories, about how artificial intelligence is transforming early-stage drug discovery. Julie explains how AI is already accelerating small-molecule design and enabling the use of virtual control animals, reducing the need for live testing without compromising scientific integrity.

They also unpack the growing challenges of explainability, bias, and regulation in AI-driven science. From ensuring transparency and accountability in complex models to understanding how regulators like the FDA are beginning to accept hybrid data sets that combine in vivo results with AI predictions, the discussion balances optimism with realism in a rapidly evolving field.

Ultimately, Professor Frearson and David agree that while AI is reshaping discovery, humans must remain firmly in the loop.
For now, that is the only way to ensure that innovation remains ethical, trustworthy, and safe.

Episode Highlights:
01:31 – Areas of drug discovery already transformed by AI and digital twins.
03:25 – Digital twins in animal testing and the creation of “virtual animals.”
05:50 – David's thoughts: What executives often get wrong about digital twins.
07:30 – How digital twins accurately recreate parts of animals.
10:11 – How regulation currently views AI models in drug discovery.
13:30 – The timeline for regulators to become more comfortable with hybrid data sets.
14:37 – David's thoughts: How ‘black box' AI processes create challenges, and how to address them.
16:31 – The role of humans in the drug discovery loop.
17:37 – Will technology outpace regulation?
20:34 – Could AI and digital twins make animal testing in drug discovery obsolete?

About Zühlke:
Zühlke is a global transformation partner, with engineering and innovation at its core. We help clients envision and build their businesses for the future – running smarter today while adapting for tomorrow's markets, customers, and communities.

Our multidisciplinary teams specialise in technology strategy and business innovation, digital solutions and applications, and device and systems engineering. We thrive in complex, regulated sectors such as healthcare and finance, connecting strategy, implementation, and operations to help clients build more effective and resilient businesses.

Links:
Zühlke Website
Zühlke on LinkedIn
David Elliman on LinkedIn
Professor Julie Frearson on LinkedIn
Charles River Laboratories Website
Editor's Summary by Linda Brubaker, MD, and Preeti Malani, MD, MSJ, Deputy Editors of JAMA, the Journal of the American Medical Association, for articles published from November 1-7, 2025.
Tessara Therapeutics, a pioneering biotech start-up based in Melbourne, has developed a platform that creates 3D human brain models using stem cells.

Its RealBrain technology generates reproducible, scalable micro-tissues that mimic the complexity of the human brain, ready to accelerate neural drug discovery without using animal models.

From working with CSIRO's Kickstart program, to receiving a CRC-P grant with Xylo Bio and the University of Sydney to develop neuroplastogens for research into the treatment of addiction disorders, to inking a new agreement with Swiss-based InSphero, Tessara Therapeutics is helping to unlock human neuroscience.

Joining us on the MTPConnect podcast is Tessara Therapeutics CEO and Managing Director, Dr Christos Papadimitriou, to tell us more about their innovation to accelerate neural drug discovery and their plans to take this technology global.
On this episode of #TheShot of #DigitalHealth Therapy, Jim Joyce and I had the pleasure of chatting with the globally minded and endlessly curious Alette Ramos Hunt, PhD, Global Director, Digital Innovation & AI for Drug Discovery at Novartis. From being a third culture kid (Danish dad, Filipino mom, born in Japan, raised in Hong Kong) to becoming one of the sharpest voices connecting biotech, digital health, and AI, Alette brings a perspective that's as international as it is insightful. We explored her fascinating path from studying proteins in Glasgow to driving AI innovation in pharma, and how she's bridging the gap between molecules, humans, and machines. She reminded us that practical AI and game-changing AI both have a place - one makes us efficient, the other makes us dream bigger. It's an episode filled with humility, humor, and yes… human intelligence - proving that even in a world of algorithms, empathy still leads the way.

Fun mentions as always: Chandana Fitzgerald, Jeff Weness, Milind Kamkolkar

[00:00-02:00] Bloopers, sunshine, and background banter.
[00:03-05:00] Alette's third-culture upbringing — Japan, Hong Kong, Denmark.
[00:05-07:00] Boarding school, biochemistry, and falling in love with proteins.
[00:10-12:00] From academia to Pfizer — bringing science to life.
[00:13-15:00] Leap to HealthXL — discovering digital health beyond the lab.
[00:18-21:00] Entering Novartis — pre-ChatGPT AI strategy and innovation cycles.
[00:22-25:00] Practical AI vs. game-changing AI — redefining productivity.
[00:24-28:00] AI and drug discovery — startups, partnerships, and collaboration.
[00:29-34:00] Lessons on open-minded leadership and partnering with purpose.
[00:36-39:00] Jim's classic closing story and Alette's advice: Value your strengths, cherish your partners.
In this episode of the Shift AI Podcast, Vik Bajaj, CEO and Co-founder of Foresite Labs and Interim President at Xaira Therapeutics, joins host Boaz Ashkenazy to explore how AI is revolutionizing life sciences and healthcare. With a distinguished background spanning physics, structural biology, radiology at Stanford, and pioneering work at Google, Vik brings a unique perspective on the intersection of AI and medicine.

The conversation delves into the stark realities of drug discovery—where 2 million researchers globally struggle against success rates so low that most will be lucky to contribute to one or two successful drugs in their entire careers. Vik explains how AI, particularly David Baker's groundbreaking protein design work, is poised to transform this landscape by enabling drugs for previously "undruggable" targets and moving healthcare from reactive treatment to predictive, personalized medicine.

From genetic tests that could provide actionable insights for 75% of people to AI models that can predict 5-year mortality from a simple chest X-ray, this episode reveals how we're approaching a future where disease prevention replaces disease treatment.
If you're interested in understanding how AI will fundamentally reshape healthcare economics and why Vik compares this transformation to the industrial revolution, this episode is essential listening.

Chapters:
[01:48] From Physics to Life Sciences: Vik's Interdisciplinary Journey
[04:13] The Google Years: Early AI Revolution in Science
[06:56] Xaira Therapeutics and David Baker's Protein Design Breakthrough
[09:28] The Optimistic Future of AI in Drug Discovery
[15:17] The Four-Stage Evolution of AI in Medicine
[19:53] Healthcare Cost Crisis and AI Solutions
[23:27] The Promise of Personalized Medicine
[28:39] Foresite Labs: Specializing in Science and Engineering AI
[32:00] Why Healthcare AI is Harder Than Software Engineering
[36:27] Two Words for the Future: Industrial Revolution

Contact Info:
Connect with Vik Bajaj
● LinkedIn: https://www.linkedin.com/in/drvikbajaj/
Connect with Boaz Ashkenazy
● LinkedIn: https://www.linkedin.com/in/boazashkenazy
● X: boazashkenazy
● Email: info@shiftai.fm
Subscribe to UnitedHealthcare's Community & State newsletter.

Health Affairs' Rob Lott interviews Tris Dyson, Founder of Challenge Works, on his efforts in cultivating challenge prizes as an opportunity to nurture innovation in science and health care, the newly launched Longitude Prize on ALS, the transformation of drug discovery, and more.

Currently, more than 70 percent of our content is freely available - and we'd like to keep it that way. With your support, we can continue to keep our digital publication Forefront and podcast freely available.
When Dalila Sabaredzovic's sons were diagnosed with an ultra-rare genetic condition, she faced more questions than answers. But through resilience, advocacy, and the power of collaboration, her family's story has become a beacon of hope in rare disease research. In this deeply moving episode of Sounds of Science, Dalila shares her journey from despair to discovery—and how a global village of scientists came together to pursue a personalized treatment that could change everything.

Show Notes:
Taking a Customized and Collaborative Approach to Therapeutic Development
Drug Discovery Services | Charles River
ASO Screening Services | Charles River
Rare Disease Research for Drug Development | Charles River
Two in Eight Billion | Eureka blog
Serial entrepreneur Michael Heltzen, CEO of Exozymes, reveals how his NASDAQ-listed company is "liberating enzymes from cells" to create a new generation of chemical manufacturing. Instead of using living cells as factories, Exozymes isolates enzymatic pathways to work as pure chemistry—achieving engineering-level control previously deemed impossible in conventional synthetic biology. Michael discusses Exozymes' AI-powered enzyme evolution, six-week development timelines, bold IPO strategy during biotech's funding winter, and applications in pharmaceuticals like NCT for liver disease. This is synthetic biology's next chapter: sustainable, scalable enzyme-based manufacturing that could replace both petrochemicals and natural harvesting.

Make sure to check out eXoZymes' website: https://exozymes.com/
Follow our Instagram @insidebiotech for updates about episodes and upcoming guests!
To learn more about BCLA's events and consulting visit our website.
Follow BCLA on LinkedIn
This episode captures Walid Mehanna's perspective on how Merck KGaA has approached enterprise AI adoption through a federated strategy that prioritizes people over technology. The core message is that successful AI implementation requires building organizational capability across three dimensions - people, processes, and technology - rather than seeking a single transformative solution. Walid argues that companies must establish a broad foundation of AI literacy (exemplified by their internal MyGPT tool reaching 25,000 users) before pursuing specialized applications, while maintaining human accountability to prevent complacency. He emphasizes that AI works best when it's treated as an experimental, iterative capability distributed across the organization rather than controlled centrally, with success depending on persistence through the inevitable J-curve of initial productivity drops. The conversation reveals how a large multinational navigates the practical realities of AI deployment - from managing regulatory complexity across different geographies to making pragmatic build-versus-buy decisions - while maintaining focus on the fundamental principle that AI should augment human expertise rather than replace human judgment and responsibility.

(0:00) Intro
(0:29) How AI is Used at Merck
(2:18) AI Applications Across the Value Chain
(4:31) Challenges and Risks of AI Implementation
(5:35) Federated Approach to AI Prioritization
(6:44) Future AI Use Cases and Data Challenges
(10:38) Building and Partnering for AI Solutions
(15:11) AI in Drug Discovery and R&D
(26:47) Quickfire

Out-Of-Pocket: https://www.outofpocket.health/
In this episode, Jacob sits down with Joshua Meier, co-founder of Chai Discovery and former Chief AI Officer at Absci, to explore the breakthrough moment happening in AI drug discovery. They discuss how the field has evolved through three distinct waves, with the current generation of companies finally achieving success rates that seemed impossible just years ago. The conversation covers everything from moving drug discovery out of the lab and into computers, to why AI models think differently than human chemists, to the strategic decisions around open sourcing foundational models while keeping design capabilities proprietary. It's an in-depth look at how AI is fundamentally changing pharmaceutical innovation and what it means for the future of medicine.

Check out the full Chai-2 Zero-Shot Antibody report linked here: https://www.biorxiv.org/content/10.1101/2025.07.05.663018v1.full.pdf

(0:00) Intro
(1:25) The Evolution of AI in Drug Discovery
(5:14) Current State and Future of AI in Biotech
(10:08) Challenges and Modalities in Therapeutics
(14:44) Data Generation and Model Training
(22:52) Open Source and Model Development at Chai
(29:52) Open Source Models and Their Impact
(34:36) How Should Chai-2 Be Used?
(38:53) The Future of AI in Pharma and Biotech
(42:46) Key Milestones and Metrics in AI-Driven Drug Discovery
(47:20) Critiques and Hesitation
(54:01) Quickfire

Out-Of-Pocket: https://www.outofpocket.health/
Rahul Gupta, MD, MPH, MBA, FACP, is a physician, President of GATC Health Corp, and the former Director of the U.S. Office of National Drug Control Policy (ONDCP). He was the first medical doctor to lead ONDCP, and he served as the Director from November 2021 - January 2025. Through his work, particularly at ONDCP, Dr. Gupta has made important contributions to protecting public health, which is an important component of the PCC's mission. In this interview, he discussed drug control policy in the U.S., positive outcomes from the initiatives he has led, his role in safeguarding clean sport, his experience as keynote speaker at our recent PCC Conference, and his current innovative endeavors in drug discovery and healthcare.
We've long marveled at how efficiently plants convert sunlight into energy—but no one guessed they were using quantum mechanics to do it.

In this episode, we speak with Greg Engel, a pioneering University of Chicago biophysicist who helped launch the field of quantum biology. Engel explains how plants and bacteria evolved to exploit quantum effects for photosynthesis—and how understanding these systems could spark a revolution in quantum sensing, medicine, and neuroscience.

Engel's team has already built quantum sensors inspired by nature's designs, with the potential to transform how we detect disease, develop drugs, and even read neural signals. The ultimate goal? A new era of quantum medicine, powered by the weird and wonderful physics found in leaves.