Podcasts about Physics

Study of the fundamental properties of matter and energy

  • 8,912 PODCASTS
  • 24,901 EPISODES
  • 46m AVG DURATION
  • 4 DAILY NEW EPISODES
  • Feb 16, 2026 LATEST

[Popularity chart: "Physics", 2019–2026]

    Latest podcast episodes about Physics

    Machine Learning Street Talk
    Evolution "Doesn't Need" Mutation - Blaise Agüera y Arcas

    Machine Learning Street Talk

    Play Episode Listen Later Feb 16, 2026 55:48


    What if life itself is just a really sophisticated computer program that wrote itself into existence? In this mind-bending talk, *Blaise Agüera y Arcas* takes us on a journey from random noise to the emergence of life, using nothing but simple code and a whole lot of patience. His artificial life experiment, cheekily named "BFF" (the first two letters stand for "Brainf***"), demonstrates something remarkable: when you let random strings of code interact millions of times, complex self-replicating programs spontaneously emerge from pure chaos.

    *Key Insights from this Talk:*
    *The "Artificial Kidney" Test for Life* — What makes something alive isn't what it's made of, but what it *does*. A rock broken in half gives you two rocks. A kidney broken in half gives you a broken kidney. Function is what separates the living from the non-living.
    *Von Neumann Called It* — Before we even knew what DNA looked like, mathematician John von Neumann figured out exactly what life needed to copy itself: instructions, a constructor to follow them, and a way to copy those instructions. He basically predicted molecular biology from pure logic.
    *The Magic Moment* — Watch as Blaise shows the exact instant when his simulation transitions from random noise to organized, self-replicating code. It's a genuine phase transition, like water freezing into ice, except instead of ice, you get *life*.
    *Evolution Without Mutation* — Here's the twist that challenges everything you learned in biology class: this complexity emerges even when mutation is set to zero. The secret? Symbiogenesis. Things don't just mutate to get better; they *merge*. Two simple replicators that work well together fuse into something more complex.
    *We're All Made of Viruses* — This isn't just simulation theory. In the real world, the mammalian placenta came from an ancient virus. A gene essential for forming memories? Also a virus. Life has been merging and absorbing other life forms all the way down.

    The implications are profound: life isn't just computational, it was computational from the very beginning. And intelligence? That's just what happens when these biological computers start modeling each other. Whether you're into artificial life, evolutionary biology, or just want to understand what makes you *you*, this talk will fundamentally change how you think about the boundary between living and non-living matter.

    ---
    TIMESTAMPS:
    00:00:00 Introduction: From Noise to Programs & ALife History
    00:03:15 Defining Life: Function as the "Spirit"
    00:05:45 Von Neumann's Insight: Life is Embodied Computation
    00:09:15 Physics of Computation: Irreversibility & Fallacies
    00:15:00 The BFF Experiment: Spontaneous Generation of Code
    00:23:45 The Mystery: Complexity Growth Without Mutation
    00:27:00 Symbiogenesis: The Engine of Novelty
    00:33:15 Mathematical Proof: Blocking Symbiosis Stops Life
    00:40:15 Evolutionary Implications: It's Symbiogenesis All The Way Down
    00:44:30 Intelligence as Modeling Others
    00:46:49 Q&A: Levels of Abstraction & Definitions

    ---
    REFERENCES:
    Paper:
    [00:01:16] Open Problems in Artificial Life - https://direct.mit.edu/artl/article/6/4/363/2354/Open-Problems-in-Artificial-Life
    [00:09:30] When does a physical system compute? - https://arxiv.org/abs/1309.7979
    [00:15:00] Computational Life - https://arxiv.org/abs/2406.19108
    [00:27:30] On the Origin of Mitosing Cells - https://pubmed.ncbi.nlm.nih.gov/11541392/
    [00:42:00] The Major Evolutionary Transitions - https://www.nature.com/articles/374227a0
    [00:44:00] The ARC gene - https://www.nih.gov/news-events/news-releases/memory-gene-goes-viral
    Person:
    [00:05:45] Alan Turing - https://plato.stanford.edu/entries/turing/
    [00:07:30] John von Neumann - https://en.wikipedia.org/wiki/John_von_Neumann
    [00:11:15] Hector Zenil - https://hectorzenil.net/
    [00:12:00] Robert Sapolsky - https://profiles.stanford.edu/robert-sapolsky

    ---
    LINKS:
    RESCRIPT: https://app.rescript.info/public/share/ff7gb6HpezOR3DF-gr9-rCoMFzzEgUjLQK6voV5XVWY
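    The recipe described above is concrete enough to sketch in a few dozen lines. The toy below illustrates only the general shape of the experiment (a soup of random byte strings, pairwise concatenation and execution, and no mutation operator anywhere); the instruction set, tape length, soup size, and the crude "dominant tape" printout are invented here and are not the dialect or parameters of BFF or the Computational Life paper, and whether self-replicators actually emerge depends heavily on those choices.

```python
import random

# Toy "primordial soup" loosely inspired by the BFF experiment described above.
# Illustrative dialect only; NOT the instruction set used in BFF itself.

TAPE_LEN = 64       # bytes per program in the soup
SOUP_SIZE = 256     # number of programs
MAX_STEPS = 2048    # execution budget per pairwise interaction


def find_match(tape, ip, step):
    """Scan for the matching bracket at runtime (the code may rewrite itself)."""
    open_b, close_b = (ord('['), ord(']')) if step > 0 else (ord(']'), ord('['))
    depth, j = 0, ip
    while 0 <= j < len(tape):
        if tape[j] == open_b:
            depth += 1
        elif tape[j] == close_b:
            depth -= 1
            if depth == 0:
                return j
        j += step
    return len(tape)  # unmatched bracket: fall off the end


def run(tape):
    """Execute a byte string as a self-modifying program.

    Two data heads (h0, h1) walk over the same array that holds the code, so a
    program can read and overwrite its own instructions -- the ingredient that
    makes copying, and therefore self-replication, expressible at all.
    """
    n = len(tape)
    ip = h0 = h1 = 0
    for _ in range(MAX_STEPS):
        if not 0 <= ip < n:
            break
        op = chr(tape[ip])
        if op == '<':   h0 = (h0 - 1) % n
        elif op == '>': h0 = (h0 + 1) % n
        elif op == '{': h1 = (h1 - 1) % n
        elif op == '}': h1 = (h1 + 1) % n
        elif op == '+': tape[h0] = (tape[h0] + 1) % 256
        elif op == '-': tape[h0] = (tape[h0] - 1) % 256
        elif op == '.': tape[h1] = tape[h0]            # copy byte head0 -> head1
        elif op == ',': tape[h0] = tape[h1]            # copy byte head1 -> head0
        elif op == '[' and tape[h0] == 0:
            ip = find_match(tape, ip, +1)              # jump past matching ']'
        elif op == ']' and tape[h0] != 0:
            ip = find_match(tape, ip, -1)              # jump back to matching '['
        ip += 1


def epoch(soup):
    """One epoch: random pairs are concatenated, executed, and split back.

    Note there is no mutation operator anywhere: any structure that appears
    does so only because programs read and rewrite one another.
    """
    for _ in range(len(soup)):
        i, j = random.sample(range(len(soup)), 2)
        combined = bytearray(soup[i] + soup[j])
        run(combined)
        soup[i] = bytes(combined[:TAPE_LEN])
        soup[j] = bytes(combined[TAPE_LEN:])


def dominant_fraction(soup):
    """Crude order parameter: how dominant is the single most common tape?"""
    counts = {}
    for t in soup:
        counts[t] = counts.get(t, 0) + 1
    return max(counts.values()) / len(soup)


if __name__ == "__main__":
    random.seed(0)
    soup = [bytes(random.randrange(256) for _ in range(TAPE_LEN))
            for _ in range(SOUP_SIZE)]
    for e in range(1, 501):
        epoch(soup)
        if e % 50 == 0:
            print(f"epoch {e:4d}  dominant-tape fraction: {dominant_fraction(soup):.3f}")
```

    Running this mainly demonstrates the mechanics; reproducing the phase transition shown in the talk would require the real dialect and far more interactions than this sketch performs.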

    Innovation Now
    Olympic Physics

    Innovation Now

    Play Episode Listen Later Feb 16, 2026 1:30


    Living With Cystic Fibrosis
    When Insurance Gets Between Doctors and Patients

    Living With Cystic Fibrosis

    Play Episode Listen Later Feb 16, 2026 44:35


    When Insurance Gets Between Doctors and Patients

    Dr. Elizabeth Ames and Dr. Caleb Bupp are deeply committed to their patients. But like so many clinicians today, they're spending an extraordinary amount of time battling insurance companies instead of practicing medicine. Between prior authorizations, step therapy requirements, and outright coverage denials, physicians and their teams are buried in paperwork, often at the direct expense of patient care. Time that should be spent listening, diagnosing, and treating is instead consumed by forms, phone calls, and appeals.

    Boston Globe reporter Jonathan Saltzman raised the concern, and Dr. Ames brought it to my attention. The reporter describes a new program rolled out by Blue Cross Blue Shield of Massachusetts. The insurer says the initiative is designed to control rising healthcare costs for its 3 million members, noting that costs have increased by 30 percent since 2021. But the program specifically targets physicians who bill for the most expensive visits. The reason for the increased expense, which is discussed in our podcast, is that doctors are choosing to spend more time with rare disease patients who have complicated health issues. They need to spend more time with patients who have complex medical needs than with, say, someone with a sore throat.

    Drs. Ames and Bupp warn that this approach fundamentally misunderstands patient care, particularly for those with complex or rare conditions. "These patients don't need less time; they need more," says Dr. Ames. Physicians argue that policies like this risk rushed appointments, strained doctor/patient relationships, and poorer outcomes. Nowhere is this more concerning than in the rare disease community, where delays and denials can be devastating.

    Dr. Elizabeth Ames and Dr. Caleb Bupp talk about what this looks like in real life. As pediatric geneticists, they see firsthand how insurance barriers impact families already navigating diagnostic odysseys, uncertainty, and fear. Their work sits at the intersection of cutting-edge science and deeply human stories, and insurance interference often disrupts both. Dr. Ames: "Usually we get faxes saying this has been denied, and we start working on it. But the family gets a letter that the drug they need, the process is delayed by a 'no'. We try and have good communication and say, 'hey, we got this denial, we're working on it.' But I think it's death by a thousand cuts for the family. Families take the denial as, 'I'm not worthy of coverage,' and that's really hard." Dr. Bupp says they have had to hire genetic counselors for a job that didn't exist even five years ago: "We have a job description in our organization for it now because of the complexities that come with trying to unravel these insurance situations."

    We should also note that Dr. Ames, Dr. Bupp, and I all serve on the Rare Disease Advisory Council (RDAC) in Michigan. "I think in rare disease advocacy there is power in numbers. One person can be a huge difference maker, but it's not one plus one equals two. It really exponentially grows, and I think with things like rare disease advisory councils, that gives you a better connection within your state, for state government and for advocacy. And I also think, or I hope, that it gives a place for an individual to plug in, and that can then magnify and amplify their voice so that they're not alone." Many states have RDACs; you can check whether your state has one.

    For more on the Michigan RDAC: in this article and in the podcast we are not speaking on behalf of the council, but it's important to understand why bodies like the RDAC exist in the first place. Michigan is home to approximately one million people living with rare diseases, and the RDAC was created to ensure their voices and experiences help shape policy. RDAC meetings are open to the public, and anyone in Michigan can participate and offer public comment. We hope you join our meetings via Zoom (sometimes hybrid).

    This conversation isn't just about insurance policies. It's about time, trust, and whether our healthcare system truly serves patients, especially those with the most complex needs. Speak up, share your story, advocate. Make a difference and mold the future for the generations to come.

    The EveryLife diagnostic odyssey study discussed in the podcast: https://everylifefoundation.org/delayed-diagnosis-study/
    The EveryLife study on the impact of diagnosis: https://everylifefoundation.org/burden-study/

    Please like, subscribe, and comment on our podcasts!
    Please consider making a donation: https://thebonnellfoundation.org/donate/
    The Bonnell Foundation website: https://thebonnellfoundation.org
    Email us at: thebonnellfoundation@gmail.com
    Watch our podcasts on YouTube: https://www.youtube.com/@laurabonnell1136/featured
    Thanks to our sponsors:
    Vertex: https://www.vrtx.com
    Viatris: https://www.viatris.com/en
    Read us on Substack: https://substack.com/@lstb?utm_campaign=profile&utm_medium=profile-page
    Watch our trailer of Embracing Egypt: https://youtu.be/RYjlB25Cr9Y

    The Brand Called You
    Exploring the Cosmos: Prof. Neil Comins, Professor of Physics & Astronomy, University of Maine, USA

    The Brand Called You

    Play Episode Listen Later Feb 16, 2026 28:56


    Dive into the mysteries of the cosmos with Prof. Neil Comins, Professor of Physics & Astronomy at the University of Maine, USA, in this illuminating episode of The Brand Called You, hosted by Ashutosh Garg. In this thought-provoking conversation, Prof. Comins shares how personal adversity shaped his journey into astrophysics and why he chose to specialize in general relativity—one of the most challenging fields in science.

    The episode explores:
    - Common misconceptions about galaxies and the Big Bang
    - The search for life beyond Earth using the James Webb Space Telescope
    - Ancient India's remarkable contributions to astronomy
    - India's growing global role in space research
    - The transformative impact of artificial intelligence on astronomy
    - The scientific foundation behind his bestselling "What If" books

    @HPCpodcast with Shahin Khan and Doug Black
    HPC News Bytes – 20260216

    - "Ride the Wave, Build the Future: Scientific Computing in an AI World", by Dongarra, Reed, Gannon
    - Call for National Moonshot Program for future HPC systems
    - DOE Genesis Mission, 26 Challenges for National Science and Technology
    - NSF $100M National Quantum and Nanotechnology Infrastructure, NQNI
    - State of The Quantum Computing Industry
    - Los Alamos National Laboratory Center for Quantum Computing
    Audio: https://orionx.net/wp-content/uploads/2026/02/HPCNB_20260216.mp3
    The post HPC News Bytes – 20260216 appeared first on OrionX.net.

    AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
    Teaser AI Business and Development Daily News Rundown February 16 2026: GPT-5.2's Physics Breakthrough, The Pentagon vs. Anthropic, & ByteDance's "Seed" Surge

    AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

    Play Episode Listen Later Feb 16, 2026 2:09


    Full Audio at https://podcasts.apple.com/ca/podcast/ai-bisiness-and-development-daily-news-rundown-gpt-5/id1684415169?i=1000750032166

    Science & Futurism with Isaac Arthur
    Alien Mathematics

    Is math truly universal—or just human? Explore how alien minds might think, count, and reason in ways we don't recognize as mathematics at all.
    Get Nebula using my link for 50% off an annual subscription: https://go.nebula.tv/isaacarthur
    Watch my exclusive video The Future of Interstellar Communication: https://nebula.tv/videos/isaacarthur-chronoengineering-manipulating-time-as-technology
    Check out Joe Scott's Oldest & Newest: https://nebula.tv/videos/joescott-oldest-and-newest-places-on-earth?ref=isaacarthur

    Science & Futurism with Isaac Arthur
    Alien Mathematics (Narration Only)

    Science & Futurism with Isaac Arthur

    Play Episode Listen Later Feb 15, 2026 27:07


    Is math truly universal—or just human? Explore how alien minds might think, count, and reason in ways we don't recognize as mathematics at all.
    Get Nebula using my link for 50% off an annual subscription: https://go.nebula.tv/isaacarthur
    Watch my exclusive video The Future of Interstellar Communication: https://nebula.tv/videos/isaacarthur-chronoengineering-manipulating-time-as-technology
    Check out Joe Scott's Oldest & Newest: https://nebula.tv/videos/joescott-oldest-and-newest-places-on-earth?ref=isaacarthur

    Robinson's Podcast
    270 - Tim Maudlin & Jacob Barandes: The Indivisible Approach to Quantum Theory

    Robinson's Podcast

    Play Episode Listen Later Feb 15, 2026 189:30


    Tim Maudlin is Professor of Philosophy at NYU and Founder and Director of the John Bell Institute for the Foundations of Physics. Jacob Barandes is Senior Preceptor in Physics at Harvard University, where he works widely across the philosophy of physics, with focuses on the foundations of quantum mechanics, the philosophy of spacetime, and the metaphysics of laws. In this episode, Robinson, Tim, and Jacob discuss Jacob's novel approach to quantum mechanics, which he calls the "Indivisible Approach". More particularly, they discuss the problems at the core of quantum mechanics, the ontology of the theory, causality and quantum phenomena, probability, and more. If you're interested in the foundations of physics, then please check out the JBI, which is devoted to providing a home for research and education in this important area. Any donations are immensely helpful at this early stage in the institute's life.

    Tim's Website: www.tim-maudlin.site
    The John Bell Institute: https://www.johnbellinstitute.org
    Jacob's Website: https://www.jacobbarandes.com
    The Stochastic-Quantum Correspondence: https://philosophyofphysics.lse.ac.uk/articles/10.31389/pop.186
    Historical Debates over the Physical Reality of the Wave Function: https://arxiv.org/abs/2602.09397
    Pilot-Wave Theories as Hidden Markov Models: https://arxiv.org/abs/2602.10569

    OUTLINE
    00:21 The Problems at the Foundations of Quantum Mechanics
    13:00 More on the Problems
    26:09 Is the Wave Function a Real Thing?
    32:48 Causation, Correlation, and Quantum Mechanics
    42:03 Terminological Issues
    44:34 Causal Models and the Markov Condition
    01:00:57 Can Time Exist Without Change?
    01:15:00 On Time and Change
    01:30:38 Newtonian Mechanics and the Markov Condition
    01:45:00 More on Newtonian Mechanics
    02:00:00 More on the Markov Condition
    02:17:49 Tim's Response
    02:28:18 Philosophy and Physics
    02:32:38 More on Probability
    02:42:13 Probability and the Double Slit Experiment
    02:59:42 Why Tim Remains Puzzled

    Robinson's Website: http://robinsonerhardt.com
    Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University, where he is also a JD candidate in the Law School.
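    Since the outline keeps returning to "the Markov condition," it may help to have the standard textbook statement in view. This is the generic definition for a discrete-time stochastic process, not anything specific to the papers linked above; causal-modeling discussions use a closely related condition stated in terms of a variable's parents in a causal graph.

```latex
% Markov condition for a discrete-time stochastic process X_0, X_1, X_2, \ldots
% ("the future is conditionally independent of the past, given the present")
P(X_{t+1} = x_{t+1} \mid X_t = x_t, X_{t-1} = x_{t-1}, \ldots, X_0 = x_0)
  = P(X_{t+1} = x_{t+1} \mid X_t = x_t)
```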

    Demystifying Science
    What Would a Serious Aether Theory Look Like? - Dmitrii Osenilo, DemystifySci #401

    Demystifying Science

    Play Episode Listen Later Feb 15, 2026 61:42


    This episode asks a simple question: if forces act, what is actually moving? Physicist Dmitrii Osenilo revives a gaseous aether model built from hydrodynamics, vortex motion, and mechanical principles rather than abstract fields. Light, charge, and spin are reframed as structured flows in a compressible medium, with Maxwell's equations emerging from fluid dynamics instead of postulated forces. It is an argument that physics should not stop at mathematics, but push toward a coherent, visualizable account of what the vacuum is made of and how it moves.

    Part 2: https://youtu.be/7F9bwPm_iQM
    PATREON: https://www.patreon.com/c/demystifysci
    PARADOX LOST PRE-SALE: https://buy.stripe.com/7sY7sKdoN5d29eUdYddEs0b
    HOMEBREW MUSIC - Check out our new album!
    Hard Copies (Vinyl): FREE SHIPPING https://demystifysci-shop.fourthwall.com/products/vinyl-lp-secretary-of-nature-everything-is-so-good-here
    Streaming: https://secretaryofnature.bandcamp.com/album/everything-is-so-good-here
    PARADIGM DRIFT: https://demystifysci.com/paradigm-drift-show

    00:00 Go! Aether Returns: A Hydrodynamic Alternative
    00:02:07 Building a Gaseous Aether Model
    00:06:04 Vortices and the Michelson-Morley Revisit
    00:09:46 Why Mechanical Models Were Abandoned
    00:17:09 What Do We Actually Measure in Physics?
    00:20:08 Aether Drift, Relativity, and Interpretation
    00:23:20 The Metaphysics Behind Modern Physics
    00:26:24 Should Nature Make Mechanical Sense?
    00:28:40 Mathematics vs Physical Reality
    00:31:14 What Makes a Good Scientific Theory?
    00:34:08 Computational Limits and Lost Hydrodynamics
    00:39:04 Vortices as the Basis of Electrodynamics
    00:43:51 Why Aether Must Be a Gas
    00:48:48 The Problem with Elastic Solid Aether
    00:53:19 Why Transverse Waves Matter
    00:55:14 Electromagnetism as Vortex Flow
    01:00:04 Particles as Toroidal Vortices

    #physics #aether #fluiddynamics #vortex #hydrodynamics #quantumphysics #gravity #Electrodynamics #toroid #astrophysics #physicspodcast #philosophypodcast

    MERCH: Rock some DemystifySci gear: https://demystifysci-shop.fourthwall.com/
    AMAZON: Do your shopping through this link: https://amzn.to/3YyoT98
    DONATE: https://bit.ly/3wkPqaD
    SUBSTACK: https://substack.com/@UCqV4_7i9h1_V7hY48eZZSLw@demystifysci
    RSS: https://anchor.fm/s/2be66934/podcast/rss
    MAILING LIST: https://bit.ly/3v3kz2S
    SOCIAL:
    - Discord: https://discord.gg/MJzKT8CQub
    - Facebook: https://www.facebook.com/groups/DemystifySci
    - Instagram: https://www.instagram.com/DemystifySci/
    - Twitter: https://twitter.com/DemystifySci
    MUSIC: Shilo Delay: https://g.co/kgs/oty671

    Faith Hill Church
    Heart Physics - Tafara Butayi (15 February 2026)

    Faith Hill Church

    Play Episode Listen Later Feb 15, 2026 50:09


    Heart Physics - Tafara Butayi (15 February 2026)

    The Skeptics' Guide to the Universe
    The Skeptics Guide #1075 - Feb 14 2026

    The Skeptics' Guide to the Universe

    Play Episode Listen Later Feb 14, 2026


    Quickie with Evan: Erich von Däniken dies at 90; News Items: Review of ADHD Treatment, Religious Nones, EPA Ends Endangerment Ruling, The Physics of the Quintuple Jump, Crotchgate; Who's That Noisy; Your Questions and E-mails: More on the yellow sun, The Turkey Illusion; Science or Fiction

    The 365 Days of Astronomy, the daily podcast of the International Year of Astronomy 2009
    NOIRLab - Mysterious Metallic Cloud Discovered Orbiting Mystery Object

    The 365 Days of Astronomy, the daily podcast of the International Year of Astronomy 2009

    Play Episode Listen Later Feb 14, 2026 15:40


    Sweeping winds of vaporized metals have been found in a massive cloud that dimmed the light of a star for nearly nine months. This discovery, made with the Gemini South telescope in Chile, offers a rare glimpse into the chaotic and dynamic processes still shaping planetary systems long after their formation. In this podcast, Dr. Nadia Zakamska describes the discovery of this object, from the mysterious dimming of a star to the analysis of the gas cloud.

    Bios:
    - Rob Sparks is in the Communications, Education and Engagement group at NSF's NOIRLab in Tucson, Arizona.
    - Dr. Nadia Zakamska was born and raised in Russia and received a master's degree from the Moscow Institute of Physics and Technology. She came to the United States in 2001 to pursue graduate education in Astrophysics at Princeton University. After her Ph.D., she was a postdoctoral researcher at the Institute for Advanced Study in Princeton and at Stanford University before moving to the Johns Hopkins University for a faculty position in 2011. She is now a professor in the Department of Physics and Astronomy, with a wide range of research interests across many areas of astrophysics. She lives in Baltimore with her husband and four children.

    NOIRLab social media channels can be found at:
    https://www.facebook.com/NOIRLabAstro
    https://twitter.com/NOIRLabAstro
    https://www.instagram.com/noirlabastro/
    https://www.youtube.com/noirlabastro

    We've added a new way to donate to 365 Days of Astronomy to support editing, hosting, and production costs. Just visit: https://www.patreon.com/365DaysOfAstronomy and donate as much as you can! Share the podcast with your friends and send the Patreon link to them too! Every bit helps! Thank you!
    ------------------------------------
    Do go visit http://www.redbubble.com/people/CosmoQuestX/shop for cool Astronomy Cast and CosmoQuest t-shirts, coffee mugs and other awesomeness! http://cosmoquest.org/Donate This show is made possible through your donations. Thank you! (Haven't donated? It's not too late! Just click!)
    ------------------------------------
    The 365 Days of Astronomy Podcast is produced by the Planetary Science Institute. http://www.psi.edu Visit us on the web at 365DaysOfAstronomy.org or email us at info@365DaysOfAstronomy.org.

    Demystifying Science
    60 Second Theories - Paradigm Drift #8, DemystifySci #400

    Demystifying Science

    Play Episode Listen Later Feb 14, 2026 140:50


    DemystifySci is the place where we bring people together in search of the theories that will change the world, and Paradigm Drift is your chance to have a hand in that future. Theorists, randomly chosen, get 60 seconds to present their revolutionary idea... and then we see if it's gonna stand the test of time or if it's back into the tank for a little while longer.

    PATREON: https://www.patreon.com/c/demystifysci
    PARADOX LOST PRE-SALE: https://buy.stripe.com/7sY7sKdoN5d29eUdYddEs0b
    HOMEBREW MUSIC - Check out our new album!
    Hard Copies (Vinyl): FREE SHIPPING https://demystifysci-shop.fourthwall.com/products/vinyl-lp-secretary-of-nature-everything-is-so-good-here
    Streaming: https://secretaryofnature.bandcamp.com/album/everything-is-so-good-here
    PARADIGM DRIFT: https://demystifysci.com/paradigm-drift-show

    00:00 Go! Paradigm Drift: how the show works
    00:02:18 Returning guest and research updates
    00:03:34 Death, meaning, and the limits of mortality
    00:07:32 Consciousness, fear, and letting go of death
    00:13:53 Morality, judgment, and life review myths
    00:19:25 Quantum choice, discreteness, and randomness
    00:25:06 Least action as a discrete network
    00:29:39 Dimensional analysis, Higgs energy, and constants
    00:36:13 Cosmology, observation limits, and evolving universes
    00:40:22 Time as the fundamental unit of measurement
    00:43:21 Free will, attention, and inhibition
    00:51:12 Willpower as a trainable physical resource
    00:55:02 Edge theory and topological particle models
    01:03:00 Human experience behind abstract theories
    01:09:05 Kundalini, electromagnetism, and physiology
    01:18:50 Physics metaphors for spiritual experience
    01:24:45 Dark matter as compressed wavelength states
    01:31:10 Self-taught physics and interdisciplinary thinking
    01:39:36 Consciousness as matter and intelligent evolution
    01:47:36 Thought, embodiment, and physical reality
    01:54:41 Substrate physics and unified mechanics
    02:05:36 Vortices, pressure, and force mediation
    02:14:21 Closing reflections on first-principles inquiry

    #ParadigmDrift #FirstPrinciples #FoundationalPhysics #QuantumFoundations #FreeWill #Consciousness #AlternativeTheories #PhysicsBeyondMath #SubstratePhysics #Vortices #TimeIsFundamental #DarkMatter #Cosmology #IndependentThinkers #RethinkingReality #physicspodcast #openmic #philosophypodcast

    MERCH: Rock some DemystifySci gear: https://demystifysci-shop.fourthwall.com/
    AMAZON: Do your shopping through this link: https://amzn.to/3YyoT98
    DONATE: https://bit.ly/3wkPqaD
    SUBSTACK: https://substack.com/@UCqV4_7i9h1_V7hY48eZZSLw@demystifysci
    RSS: https://anchor.fm/s/2be66934/podcast/rss
    MAILING LIST: https://bit.ly/3v3kz2S
    SOCIAL:
    - Discord: https://discord.gg/MJzKT8CQub
    - Facebook: https://www.facebook.com/groups/DemystifySci
    - Instagram: https://www.instagram.com/DemystifySci/
    - Twitter: https://twitter.com/DemystifySci
    MUSIC: Shilo Delay: https://g.co/kgs/oty671

    House of R
    Our 10 Favorite Ships, Sex Scenes, and Seminal Love Stories of the Century (So Far)

    House of R

    Play Episode Listen Later Feb 13, 2026 117:17


    Mal and Jo celebrate another Valentine's Day together by pairing their annual V-Day Quickie with the next installment of the Best of the Century (So Far) series. Talk about a hookup! They share their favorite descriptions of love, sex scenes, surprise pairings, and more!

    (00:00) Intro
    (05:12) The Rules
    (10:38) Favorite Couple That We Actually Got
    (21:21) That's What the FanFic Is For
    (36:05) Surprise Pairing You Wound up Caring About the Most
    (43:00) No. 1 Champion Yearner
    (50:05) Passionate First Kiss
    (56:27) Hottest Sex Scene
    (01:05:57) The Sex Scene That Made You a Romantasy Fan
    (01:20:37) I Have Questions About the Physics
    (01:38:26) Most Gorgeous Description of Love
    (01:46:32) Poignant Parting

    Hosts: Mallory Rubin and Joanna Robinson
    Producer: Carlos Chiriboga
    Social: Jomi Adeniran
    Additional Production Support: Arjuna Ramgopowell
    Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Growth Mindset Podcast
    Hidden Potential: The Psychology of self-sabotage and changing what's possible - Heather Moyse (2x Olympic Gold)

    Growth Mindset Podcast

    Play Episode Listen Later Feb 13, 2026 53:14


    We all love the idea of "potential" until it quietly turns into pressure, guilt, and a weird fear of actually trying. Heather Moyse didn't start professional sport until 27 and still became a double Olympic gold medallist. Breaking out of her social safety and truly exploring her potential wasn't easy. We dig into behavioral inertia, self‑sabotage, and the invisible "settings" your environment installs in your brain. Think of this as a mindset update: fewer motivational quotes, more useful psychology. If you've ever felt stuck in the "nearly" version of your life—nearly starting, nearly committing, nearly backing yourself—this conversation gives you language, mental models, and a few friendly kicks to move. Spot the subtle ways your environment is capping your potential. Replace "be realistic" thinking with experiments that actually feel safe. Build a high‑potential identity without burning out or becoming a robot. Hit play and give your potential something better than another inspirational reel. SPONSORS

    Roxy's Ride & Inspire RAWcast - Mountain Bike & Mindset Podcast
    Why MTB Is Objectively Harder for Women (Physics, Not “Excuses”) #51

    Roxy's Ride & Inspire RAWcast - Mountain Bike & Mindset Podcast

    Play Episode Listen Later Feb 13, 2026 14:41


    If you're fed up with men telling women that they're just making excuses, or you think “women just need to try harder”… you're exactly why this episode exists.

    Science & Futurism with Isaac Arthur
    Space Hotels - How Close Are We to Vacationing in Orbit?

    Science & Futurism with Isaac Arthur

    Play Episode Listen Later Feb 12, 2026 19:45


    How close are we to vacationing in orbit? Space hotels, real costs, and the tipping point where space tourism becomes normal.
    Get Nebula using my link for 50% off an annual subscription: https://go.nebula.tv/isaacarthur
    Watch my exclusive video The Future of Interstellar Communication: https://nebula.tv/videos/isaacarthur-chronoengineering-manipulating-time-as-technology
    The Overview Effekt: https://nebula.tv/overvieweffekt?ref=isaacarthur
    Visit our Website: http://www.isaacarthur.net
    Join Nebula: https://go.nebula.tv/isaacarthur
    Support us on Patreon: https://www.patreon.com/IsaacArthur
    Support us on Subscribestar: https://www.subscribestar.com/isaac-arthur
    Facebook Group: https://www.facebook.com/groups/1583992725237264/
    Reddit: https://www.reddit.com/r/IsaacArthur/
    Twitter: https://twitter.com/Isaac_A_Arthur on Twitter and RT our future content.
    SFIA Discord Server: https://discord.gg/53GAShE

    Science & Futurism with Isaac Arthur
    Space Hotels - How Close Are We to Vacationing in Orbit? (Narration Only)

    Science & Futurism with Isaac Arthur

    Play Episode Listen Later Feb 12, 2026 19:20


    How close are we to vacationing in orbit? Space hotels, real costs, and the tipping point where space tourism becomes normal.
    Get Nebula using my link for 50% off an annual subscription: https://go.nebula.tv/isaacarthur
    Watch my exclusive video The Future of Interstellar Communication: https://nebula.tv/videos/isaacarthur-chronoengineering-manipulating-time-as-technology
    The Overview Effekt: https://nebula.tv/overvieweffekt?ref=isaacarthur
    Visit our Website: http://www.isaacarthur.net
    Join Nebula: https://go.nebula.tv/isaacarthur
    Support us on Patreon: https://www.patreon.com/IsaacArthur
    Support us on Subscribestar: https://www.subscribestar.com/isaac-arthur
    Facebook Group: https://www.facebook.com/groups/1583992725237264/
    Reddit: https://www.reddit.com/r/IsaacArthur/
    Twitter: https://twitter.com/Isaac_A_Arthur on Twitter and RT our future content.
    SFIA Discord Server: https://discord.gg/53GAShE

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

    This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely "solved" through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

    Full Video Pod: On YouTube!

    Timestamps
    * 00:00 Introduction to Benchmarking and the "Solved" Protein Problem
    * 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
    * 10:00 The Importance of Protein Function and Disease States
    * 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
    * 19:48 Generative Modeling vs. Regression in Structural Biology
    * 25:00 The "Bitter Lesson" and Specialized AI Architectures
    * 29:14 Development Anecdotes: Training Boltz-1 on a Budget
    * 32:00 Validation Strategies and the Protein Data Bank (PDB)
    * 37:26 The Mission of Boltz: Democratizing Access and Open Source
    * 41:43 Building a Self-Sustaining Research Community
    * 44:40 Boltz-2 Advancements: Affinity Prediction and Design
    * 51:03 BoltzGen: Merging Structure and Sequence Prediction
    * 55:18 Large-Scale Wet Lab Validation Results
    * 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
    * 01:13:06 Future Directions: Developability and the "Virtual Cell"
    * 01:17:35 Interacting with Skeptical Medicinal Chemists

    Key Summary

    Evolution of Structure Prediction & Evolutionary Hints
    * Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
    * Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
    * Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right "valley" in the energy landscape, they likely possess a "light understanding" of physics to refine the local minimum.

    The Shift to Generative Architectures
    * Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
    * Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the "averaging" effect seen in regression models when the ground truth is ambiguous.
    * Specialized Architectures: Despite the "bitter lesson" of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

    Boltz-2 and Generative Protein Design
    * Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
    * Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level "spec" (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
    * Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction—quantifying exactly how tightly a designed binder will stick to its target.

    Real-World Validation and Productization
    * Generalized Validation: To prove the model isn't just "regurgitating" known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
    * Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides "agents" for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
    * Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

    Transcript

    RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models, like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.
    Gabriel [00:06:26]: Yeah, it's interesting you say that, like, in some sense, CASP, you know, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.
    RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that was, you know, that a lot of progress was made on was the ability to predict the structure of single chain proteins. So proteins can, like, be composed of many chains. And single chain proteins are, you know, just a single sequence of amino acids.
And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints. That comes from evolutionary landscapes. So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together, and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimension. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved. From the perspective of structure prediction, when it isn't, it's much more challenging. And I think it's also worth also differentiating the, sometimes we confound a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured, like, state. And that I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain in whatever form it was originally to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like, the different, you know, proteins are not static. They move, they take different shapes based on their energy states. And I think we are, also not that good at understanding the different states that the protein can be in and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints that we were able to, you know, to make such dramatic progress.Brandon [00:09:45]: So I want to ask, why does the intermediate states matter? But first, I kind of want to understand, why do we care? What proteins are shaped like?Gabriel [00:09:54]: Yeah, I mean, the proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells, you know, work is typically through proteins, sometimes other molecules, sort of intermediate interactions. And through that interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology, how our body works, how disease work. So we often try to boil it down to, okay, what is going right in case of, you know, our normal biological function and what is going wrong in case of the disease state. And we boil it down to kind of, you know, proteins and kind of other molecules and their interaction. 
And so when we try predicting the structure of proteins, it's critical to, you know, have an understanding of kind of those interactions. It's a bit like seeing the difference between... Having kind of a list of parts that you would put it in a car and seeing kind of the car in its final form, you know, seeing the car really helps you understand what it does. On the other hand, kind of going to your question of, you know, why do we care about, you know, how the protein falls or, you know, how the car is made to some extent is that, you know, sometimes when something goes wrong, you know, there are, you know, cases of, you know, proteins misfolding. In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.RJ [00:11:30]: There's this nice line in the, I think it's in the Alpha Fold 2 manuscript, where they sort of discuss also like why we even hopeful that we can target the problem in the first place. And then there's this notion that like, well, four proteins that fold. The folding process is almost instantaneous, which is a strong, like, you know, signal that like, yeah, like we should, we might be... able to predict that this very like constrained thing that, that the protein does so quickly. And of course that's not the case for, you know, for, for all proteins. And there's a lot of like really interesting mechanisms in the cells, but yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem is that it used to be actually studied. And part of the reason why people thought it was impossible, it used to be studied as kind of like a classical example. Of like an MP problem. Uh, like there are so many different, you know, type of, you know, shapes that, you know, this amino acid could take. And so, this grows combinatorially with the size of the sequence. And so there used to be kind of a lot of actually kind of more theoretical computer science thinking about and studying protein folding as an MP problem. And so it was very surprising also from that perspective, kind of seeing. Machine learning so clear, there is some, you know, signal in those sequences, through evolution, but also through kind of other things that, you know, us as humans, we're probably not really able to, uh, to understand, but that is, models I've, I've learned.Brandon [00:13:07]: And so Andrew White, we were talking to him a few weeks ago and he said that he was following the development of this and that there were actually ASICs that were developed just to solve this problem. So, again, that there were. There were many, many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations and that you see this again and again in different species. So explain why does that give us a good hint that they're close by to each other? Yeah.RJ [00:13:41]: Um, like think of it this way that, you know, if I have, you know, some amino acid that mutates, it's going to impact everything around it. Right. In three dimensions. And so it's almost like the protein through several, probably random mutations and evolution, like, you know, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. 
Uh, so this whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of like different positions compensating for, for each other. I see.Brandon [00:14:17]: Those hints in aggregate give us a lot. Yeah. So you can start to look at what kinds of information about what is close to each other, and then you can start to look at what kinds of folds are possible given the structure and then what is the end state.RJ [00:14:30]: And therefore you can make a lot of inferences about what the actual total shape is. Yeah, that's right. It's almost like, you know, you have this big, like three dimensional Valley, you know, where you're sort of trying to find like these like low energy states and there's so much to search through. That's almost overwhelming. But these hints, they sort of maybe put you in. An area of the space that's already like, kind of close to the solution, maybe not quite there yet. And, and there's always this question of like, how much physics are these models learning, you know, versus like, just pure like statistics. And like, I think one of the thing, at least I believe is that once you're in that sort of approximate area of the solution space, then the models have like some understanding, you know, of how to get you to like, you know, the lower energy, uh, low energy state. And so maybe you have some, some light understanding. Of physics, but maybe not quite enough, you know, to know how to like navigate the whole space. Right. Okay.Brandon [00:15:25]: So we need to give it these hints to kind of get into the right Valley and then it finds the, the minimum or something. Yeah.Gabriel [00:15:31]: One interesting explanation about our awful free works that I think it's quite insightful, of course, doesn't cover kind of the entirety of, of what awful does that is, um, they're going to borrow from, uh, Sergio Chinico for MIT. So he sees kind of awful. Then the interesting thing about awful is God. This very peculiar architecture that we have seen, you know, used, and this architecture operates on this, you know, pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other. MSA is most multiple sequence alignment. Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, then is almost as if the model is. of running some kind of, you know, diastro algorithm where it's sort of decoding, okay, these have to be closed. Okay. Then if these are closed and this is connected to this, then this has to be somewhat closed. And so you decode this, that becomes basically a pairwise kind of distance matrix. And then from this rough pairwise distance matrix, you decode kind of theBrandon [00:16:42]: actual potential structure. Interesting. So there's kind of two different things going on in the kind of coarse grain and then the fine grain optimizations. Interesting. Yeah. Very cool.Gabriel [00:16:53]: Yeah. You mentioned AlphaFold3. So maybe we have a good time to move on to that. So yeah, AlphaFold2 came out and it was like, I think fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold3 came out and maybe for some more history, like what were the advancements in AlphaFold3? And then I think maybe we'll, after that, we'll talk a bit about the sort of how it connects to Bolt. But anyway. 
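    The co-evolution idea discussed above (columns of a multiple sequence alignment that mutate together tend to sit close in 3D) is often quantified by scoring co-variation between column pairs. The sketch below uses plain mutual information on an invented toy alignment; real contact-prediction pipelines work on alignments with thousands of sequences and use stronger statistics (for example average-product correction or Potts/DCA models), none of which is shown here, and nothing in it reflects the internals of AlphaFold or Boltz.

```python
from collections import Counter
from itertools import combinations
from math import log2

# Toy multiple sequence alignment (MSA): one aligned sequence per homolog.
# These sequences are invented for illustration; a real MSA has thousands of rows.
msa = [
    "ACDQK",
    "ACEQK",
    "GCDQR",
    "GCEQR",
    "ACDQK",
    "GCEQR",
]

def column(alignment, i):
    """All residues observed at alignment position i."""
    return [seq[i] for seq in alignment]

def mutual_information(alignment, i, j):
    """Mutual information (bits) between alignment columns i and j.

    High MI means the amino acids at the two positions co-vary across homologs,
    which is the statistical hint that the positions may be in 3D contact.
    """
    n = len(alignment)
    pi = Counter(column(alignment, i))
    pj = Counter(column(alignment, j))
    pij = Counter(zip(column(alignment, i), column(alignment, j)))
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

if __name__ == "__main__":
    length = len(msa[0])
    scores = {(i, j): mutual_information(msa, i, j)
              for i, j in combinations(range(length), 2)}
    # Highest-scoring pairs are the candidate "contacts" a structure model could
    # use as hints (columns 0 and 4 co-vary perfectly in this toy alignment).
    for (i, j), s in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"columns {i}-{j}: MI = {s:.3f} bits")
```

    In the toy alignment, columns 0 and 4 co-vary perfectly and come out on top; that pairwise signal is the kind of hint a structure predictor can consume.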
Yeah. So after AlphaFold2 came out, you know, Jeremy and I got into the field and with many others, you know, the clear problem that, you know, was, you know, obvious after that was, okay, now we can do individual chains. Can we do interactions, interaction, different proteins, proteins with small molecules, proteins with other molecules. And so. So why are interactions important? Interactions are important because to some extent that's kind of the way that, you know, these machines, you know, these proteins have a function, you know, the function comes by the way that they interact with other proteins and other molecules. Actually, in the first place, you know, the individual machines are often, as Jeremy was mentioning, not made of a single chain, but they're made of the multiple chains. And then these multiple chains interact with other molecules to give the function to those. And on the other hand, you know, when we try to intervene of these interactions, think about like a disease, think about like a, a biosensor or many other ways we are trying to design the molecules or proteins that interact in a particular way with what we would call a target protein or target. You know, this problem after AlphaVol2, you know, became clear, kind of one of the biggest problems in the field to, to solve many groups, including kind of ours and others, you know, started making some kind of contributions to this problem of trying to model these interactions. And AlphaVol3 was, you know, was a significant advancement on the problem of modeling interactions. And one of the interesting thing that they were able to do while, you know, some of the rest of the field that really tried to try to model different interactions separately, you know, how protein interacts with small molecules, how protein interacts with other proteins, how RNA or DNA have their structure, they put everything together and, you know, train very large models with a lot of advances, including kind of changing kind of systems. Some of the key architectural choices and managed to get a single model that was able to set this new state-of-the-art performance across all of these different kind of modalities, whether that was protein, small molecules is critical to developing kind of new drugs, protein, protein, understanding, you know, interactions of, you know, proteins with RNA and DNAs and so on.Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data, data changes that made that possible?Gabriel [00:19:48]: Yeah, so one critical one that was not necessarily just unique to AlphaFold3, but there were actually a few other teams, including ours in the field that proposed this, was moving from, you know, modeling structure prediction as a regression problem. So where there is a single answer and you're trying to shoot for that answer to a generative modeling problem where you have a posterior distribution of possible structures and you're trying to sample this distribution. And this achieves two things. One is it starts to allow us to try to model more dynamic systems. As we said, you know, some of these structures can actually take multiple structures. And so, you know, you can now model that, you know, through kind of modeling the entire distribution. 
But on the second hand, from more kind of core modeling questions, when you move from a regression problem to a generative modeling problem, you are really tackling the way that you think about uncertainty in the model in a different way. So if you think about, you know, I'm undecided between different answers, what's going to happen in a regression model is that, you know, I'm going to try to make an average of those different kind of answers that I had in mind. When you have a generative model, what you're going to do is, you know, sample all these different answers and then maybe use separate models to analyze those different answers and pick out the best. So that was kind of one of the critical improvement. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes kind of those pairwise representations and turns them into an actual structure. And that now looks a lot more like a more traditional transformer than, you know, like a very specialized equivariant architecture that it was in AlphaFold3.Brandon [00:21:41]: So this is a bitter lesson, a little bit.Gabriel [00:21:45]: There is some aspect of a bitter lesson, but the interesting thing is that it's very far from, you know, being like a simple transformer. This field is one of the, I argue, very few fields in applied machine learning where we still have kind of architecture that are very specialized. And, you know, there are many people that have tried to replace these architectures with, you know, simple transformers. And, you know, there is a lot of debate in the field, but I think kind of that most of the consensus is that, you know, the performance... that we get from the specialized architecture is vastly superior than what we get through a single transformer. Another interesting thing that I think on the staying on the modeling machine learning side, which I think it's somewhat counterintuitive seeing some of the other kind of fields and applications is that scaling hasn't really worked kind of the same in this field. Now, you know, models like AlphaFold2 and AlphaFold3 are, you know, still very large models.RJ [00:29:14]: in a place, I think, where we had, you know, some experience working in, you know, with the data and working with this type of models. And I think that put us already in like a good place to, you know, to produce it quickly. And, you know, and I would even say, like, I think we could have done it quicker. The problem was like, for a while, we didn't really have the compute. And so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had. We could only train it once. And so like, while the model was training, we were like, finding bugs left and right. A lot of them that I wrote. And like, I remember like, I was like, sort of like, you know, doing like, surgery in the middle, like stopping the run, making the fix, like relaunching. And yeah, we never actually went back to the start. We just like kept training it with like the bug fixes along the way, which was impossible to reproduce now. Yeah, yeah, no, that model is like, has gone through such a curriculum that, you know, learned some weird stuff. But yeah, somehow by miracle, it worked out.Gabriel [00:30:13]: The other funny thing is that the way that we were training, most of that model was through a cluster from the Department of Energy. But that's sort of like a shared cluster that many groups use. 
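    Gabriel's averaging point is easy to see with a toy: if the "right answer" for an input is genuinely two-valued, a mean-squared-error regressor converges to the mean of the two answers, which is itself neither answer, while a generative model that is sampled from returns one mode or the other. The snippet below is just that arithmetic and has nothing to do with the internals of AlphaFold or Boltz.

```python
import random

# Suppose the ground truth for some input is ambiguous: the answer is -1.0
# half the time and +1.0 half the time (think: two conformations).
random.seed(0)
targets = [random.choice([-1.0, 1.0]) for _ in range(10_000)]

# A regression model trained with mean-squared error converges to the mean of
# the targets, which here is ~0.0 -- a prediction that is neither real mode
# (the "averaging" artifact described above).
regression_prediction = sum(targets) / len(targets)

# A generative model instead learns the distribution and is sampled from it, so
# each draw lands on one of the actual modes; ambiguity is kept, not averaged away.
def generative_sample():
    return random.choice([-1.0, 1.0])  # stand-in for drawing from a learned posterior

print(f"regression-style prediction (mean): {regression_prediction:+.3f}")
print("generative-style samples:", [generative_sample() for _ in range(5)])
```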
And so we were basically training the model for two days, and then it would go back to the queue and stay a week in the queue. Oh, yeah. And so it was pretty painful. And so we actually kind of towards the end with Evan, the CEO of Genesis, and basically, you know, I was telling him a bit about the project and, you know, kind of telling him about this frustration with the compute. And so luckily, you know, he offered to kind of help. And so we, we got the help from Genesis to, you know, finish up the model. Otherwise, it probably would have taken a couple of extra weeks.Brandon [00:30:57]: Yeah, yeah.Brandon [00:31:02]: And then, and then there's some progression from there.Gabriel [00:31:06]: Yeah, so I would say kind of that, both one, but also kind of these other kind of set of models that came around the same time, were kind of approaching were a big leap from, you know, kind of the previous kind of open source models, and, you know, kind of really kind of approaching the level of AlphaVault 3. But I would still say that, you know, even to this day, there are, you know, some... specific instances where AlphaVault 3 works better. I think one common example is antibody antigen prediction, where, you know, AlphaVault 3 still seems to have an edge in many situations. Obviously, these are somewhat different models. They are, you know, you run them, you obtain different results. So it's, it's not always the case that one model is better than the other, but kind of in aggregate, we still, especially at the time.Brandon [00:32:00]: So AlphaVault 3 is, you know, still having a bit of an edge. We should talk about this more when we talk about Boltzgen, but like, how do you know one is, one model is better than the other? Like you, so you, I make a prediction, you make a prediction, like, how do you know?Gabriel [00:32:11]: Yeah, so easily, you know, the, the great thing about kind of structural prediction and, you know, once we're going to go into the design space of designing new small molecule, new proteins, this becomes a lot more complex. But a great thing about structural prediction is that a bit like, you know, CASP was doing, basically the way that you can evaluate them is that, you know, you train... You know, you train a model on a structure that was, you know, released across the field up until a certain time. And, you know, one of the things that we didn't talk about that was really critical in all this development is the PDB, which is the Protein Data Bank. It's this common resources, basically common database where every biologist publishes their structures. And so we can, you know, train on, you know, all the structures that were put in the PDB until a certain date. And then... And then we basically look for recent structures, okay, which structures look pretty different from anything that was published before, because we really want to try to understand generalization.Brandon [00:33:13]: And then on this new structure, we evaluate all these different models. And so you just know when AlphaFold3 was trained, you know, when you're, you intentionally trained to the same date or something like that. Exactly. Right. Yeah.Gabriel [00:33:24]: And so this is kind of the way that you can somewhat easily kind of compare these models, obviously, that assumes that, you know, the training. You've always been very passionate about validation. I remember like DiffDoc, and then there was like DiffDocL and DocGen. You've thought very carefully about this in the past. 
Like, actually, I think DocGen is like a really funny story that I think, I don't know if you want to talk about that. It's an interesting like... Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. And, you know, sometimes we get kind of great feedback of people. Really like... But honestly, most of the times, you know, to be honest, that's also maybe the most useful feedback is, you know, people sharing about where it doesn't work. And so, you know, at the end of the day, it's critical. And this is also something, you know, across other fields of machine learning. It's always critical to set, to do progress in machine learning, set clear benchmarks. And as, you know, you start doing progress of certain benchmarks, then, you know, you need to improve the benchmarks and make them harder and harder. And this is kind of the progression of, you know, how the field operates. And so, you know, the example of DocGen was, you know, we published this initial model called DiffDoc in my first year of PhD, which was sort of like, you know, one of the early models to try to predict kind of interactions between proteins, small molecules, that we bought a year after AlphaFold2 was published. And now, on the one hand, you know, on these benchmarks that we were using at the time, DiffDoc was doing really well, kind of, you know, outperforming kind of some of the traditional physics-based methods. But on the other hand, you know, when we started, you know, kind of giving these tools to kind of many biologists, and one example was that we collaborated with was the group of Nick Polizzi at Harvard. We noticed, started noticing that there was this clear, pattern where four proteins that were very different from the ones that we're trained on, the models was, was struggling. And so, you know, that seemed clear that, you know, this is probably kind of where we should, you know, put our focus on. And so we first developed, you know, with Nick and his group, a new benchmark, and then, you know, went after and said, okay, what can we change? And kind of about the current architecture to improve this pattern and generalization. And this is the same that, you know, we're still doing today, you know, kind of, where does the model not work, you know, and then, you know, once we have that benchmark, you know, let's try to, through everything we, any ideas that we have of the problem.RJ [00:36:15]: And there's a lot of like healthy skepticism in the field, which I think, you know, is, is, is great. And I think, you know, it's very clear that there's a ton of things, the models don't really work well on, but I think one thing that's probably, you know, undeniable is just like the pace of, pace of progress, you know, and how, how much better we're getting, you know, every year. And so I think if you, you know, if you assume, you know, any constant, you know, rate of progress moving forward, I think things are going to look pretty cool at some point in the future.Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right?RJ [00:36:45]: Like, yeah, yeah, yeah, it's one of those things. Like, you've been doing this. Being in the field, you don't see it coming, you know? And like, I think, yeah, hopefully we'll, you know, we'll, we'll continue to have as much progress we've had the past few years.Brandon [00:36:55]: So this is maybe an aside, but I'm really curious, you get this great feedback from the, from the community, right? 
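    The evaluation recipe described above (train on everything released before a cutoff date, then test only on newer entries that do not resemble anything in training) reduces to a small filtering step once the metadata is in hand. The records, the cutoff date, and the similarity check in this sketch are invented for illustration; real pipelines pull release dates from the PDB and run actual sequence/structure similarity searches.

```python
from datetime import date

CUTOFF = date(2021, 9, 30)  # e.g., a model's advertised training cutoff (made up here)

toy_pdb = [
    # (entry id, release date, id of the most similar pre-cutoff entry, or None)
    ("1ABC", date(2019, 5, 1), None),
    ("2DEF", date(2021, 1, 12), None),
    ("7XYZ", date(2022, 3, 8), "1ABC"),   # newer, but close to a training entry
    ("8QRS", date(2023, 7, 19), None),    # newer and dissimilar: a fair test case
]

def similar_to_training(nearest_pre_cutoff_id):
    """Toy novelty check: here we only ask whether any pre-cutoff neighbor exists."""
    return nearest_pre_cutoff_id is not None

# Train on everything deposited before the cutoff.
train_set = [e for e in toy_pdb if e[1] <= CUTOFF]
# Test only on post-cutoff entries that look unlike anything in training,
# so the comparison probes generalization rather than memorization.
test_set = [e for e in toy_pdb if e[1] > CUTOFF and not similar_to_training(e[2])]

print("train:", [e[0] for e in train_set])   # ['1ABC', '2DEF']
print("test: ", [e[0] for e in test_set])    # ['8QRS'] -- evaluate all models here
```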
By being open source. My question is partly, okay, if you open source, everyone can copy what you did. But it's also maybe about balancing priorities, right? Like, all my customers are saying, I want this; and there are all these problems with the model, but my customers don't care. So how do you think about that?
Gabriel [00:37:26]: I would say a couple of things. One is that part of our goal with Boltz, and this is also established as the mission of the public benefit company that we started, is to democratize access to these tools. But one of the reasons we realized Boltz needed to be a company, and couldn't just be an academic project, is that putting a model on GitHub is definitely not enough to get chemists and biologists across academia, biotech, and pharma to use your model in their therapeutic programs. So a lot of what we think about at Boltz, beyond just the models, is all the layers that come on top of the models to get from those models to something that can really enable scientists in the industry. That goes into building the right workflows, ones that take in the data and try to answer directly the questions that chemists and biologists are asking, and also into building the infrastructure. All this to say that even with the models fully open, we see a ton of potential for products in the space. And the critical part about a product is that, even with an open-source model, running the model is not free. As we were saying, these are pretty expensive models, and, maybe we'll get into this, these days we're seeing pretty dramatic inference-time scaling, where the more you run the models, the better the results are. But then compute, and compute cost, becomes a critical factor. So putting a lot of work into building the right infrastructure and the right optimizations really allows us to provide a much better service than just the open-source models. That said, I do still think, and we will continue, to put a lot of our models out as open source, because the critical role of open-source models is helping the community make progress on the research, from which we all benefit. So on the one hand we'll continue to release some of our base models as open source so the field can build on top of them, and, as we discussed earlier, we learn a ton from the way the field uses and builds on our models; on the other hand we'll build a product that gives the best experience possible to scientists, so that a chemist or a biologist doesn't need to spin up a GPU and set up our open-source model in a particular way. A bit like how, even though I am a computer scientist, a machine learning scientist, I don't necessarily take an open-source LLM and try to spin it up myself.
But I'll just open a GPT app or Claude Code and use it as an amazing product. We want to give the same kind of experience here.
Brandon [00:40:40]: I heard a good analogy yesterday: a surgeon doesn't want the hospital to design a scalpel, right?
Brandon [00:40:48]: You just buy the scalpel.
RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold 3 coming out and the end of the PhD, who would reach out just for us to run AlphaFold 3, or Boltz in our case, for them, just because it's not that easy to do if you're not a computational person. And part of the goal here is that, while we obviously continue to build the interface for computational folks, the models are also accessible to a larger, broader audience. That comes from good interfaces and things like that.
Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release you didn't just put out a model, you created a community, and that community grew very quickly. Did that surprise you? What has the evolution of that community been, and how has it fed back into Boltz?
RJ [00:41:43]: If you look at its growth, it's very much that when we release a new model there's a big jump. But it's been great. We have a Slack community with thousands of people on it, and it's actually self-sustaining now, which is the really nice part, because it's almost overwhelming to try to answer everyone's questions and help with the few people we were. It ended up that people would answer each other's questions and help one another, so the Slack has been kind of self-sustaining, and that's been really cool to see.
RJ [00:42:21]: That's the Slack part, but then also on GitHub we've had a nice community. I think we aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us. But the community has been really great, and there are a lot of papers that have come out with new evolutions on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and people converging on ours was really cool. I think it also speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, which is something we thought a lot about when we released the code base. It's far from perfect, but, you know.
Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, the focus on ease of use, on making it accessible? I think so.
RJ [00:43:14]: Yeah. We've heard it from a few people over the years now. And some people still think it should be a lot nicer, and they're right.
But yeah, I think at the time it was maybe a little bit easier to use than other things.
Gabriel [00:43:29]: The other part that I think led to the community, and to some extent to the trust in what we put out, is the fact that it hasn't really been just one model. Maybe we'll talk about it, but after Boltz-1 there were another couple of models released or open-sourced soon after. We continued that open-source journey with Boltz-2, where we are not only improving structure prediction but also starting to do affinity prediction, understanding the strength of the interaction between these molecules, which is a critical property that you often want to optimize in discovery programs. And then, more recently, also a protein design model. So we've been building this suite of models that come together and interact with one another, where there is almost an expectation, something we take very much to heart, that across the entire suite of tasks we always have the best, or close to the best, model out there, so that our open-source tools can be the go-to models for everybody in the industry. I really want to talk about Boltz-2, but before that, one last question in this direction: was there anything about the community that surprised you? Someone doing something where you thought, why would you do that, that's crazy, or, that's actually genius, I never would have thought of that?
RJ [00:45:01]: I mean, we've had many contributions. Some of the interesting ones: we had one individual who wrote a complex GPU kernel for part of the architecture, and the funny thing is that piece of the architecture had been there since AlphaFold 2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, people figuring out ways to hack the model to do something, like cyclic peptides. I don't know if any other interesting ones come to mind.
Gabriel [00:45:41]: One cool one, which was initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer, and what he noticed is that the models would get somewhat stuck in their antibody predictions. In this model you can condition, you can give hints. So he gave hints to the model: you should bind to this residue, the first residue, or the 11th residue, or the 21st residue, basically every 10 residues, scanning the entire antigen.
Brandon [00:46:33]: Residues are the...?
Gabriel [00:46:34]: The amino acids, yeah. So the first amino acid, the 11th amino acid, and so on. So it's doing a scan, conditioning the model to predict a structure for each hint, then looking at the confidence the model has in each of those cases and taking the top one. It's a somewhat crude way of doing inference-time search, but surprisingly, for antibody-antigen prediction, it actually helped quite a bit.
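As a rough illustration of that epitope-scanning trick, here is a minimal sketch. The `predict_with_hint` callable and its `hint_residue` argument are hypothetical stand-ins for however a given structure predictor exposes contact or pocket conditioning; this is not the actual Boltz interface.

```python
def scan_epitope_hints(predict_with_hint, antigen_seq: str, antibody_seq: str, stride: int = 10):
    """Run one conditioned prediction per hint residue (every `stride` positions along
    the antigen) and keep the most confident result - a crude inference-time search."""
    best = None
    for pos in range(0, len(antigen_seq), stride):
        # Hypothetical call: condition the model to place the antibody near residue `pos`.
        result = predict_with_hint(antigen_seq, antibody_seq, hint_residue=pos)
        if best is None or result["confidence"] > best["confidence"]:
            best = result
    return best  # highest-confidence complex over all scanned hints
```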
And so there are some interesting ideas there. Obviously, as the people developing the model, you think, wow, why would the model be so dumb here? But it's very interesting, and it leads you to start thinking: okay, can I do this not with brute force, but in a smarter way?
RJ [00:47:22]: We've also done a lot of work in that direction, and it speaks to the power of scoring. We're seeing that a lot, and I'm sure we'll talk about it more when we get to BoltzGen. Our ability to take a structure and determine that the structure is good, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models. If you can sample a ton, and you assume that if you sample enough you're likely to have a good structure in there, then it really just becomes a ranking problem. Part of the inference-time scaling Gabri was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. So our ability to get better at ranking is, I think, also what's going to enable the next big breakthroughs. Interesting.
Brandon [00:48:17]: But I guess, to my understanding, there's a diffusion model, you generate some stuff, and then, as you just said, you rank it using a score, and then you finally... Can you talk about those different parts?
Gabriel [00:48:34]: So, first of all, one of the critical beliefs we had when we started working on Boltz-1 was that structure prediction models are somewhat our field's version of foundation models: they learn how proteins and other molecules interact, and we can leverage that learning to do all sorts of other things. With Boltz-2 we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight that interaction is. For BoltzGen, what we did was take that foundation model and fine-tune it to generate entirely new proteins. The way that works is that, for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein and what its different amino acids are. So the way BoltzGen operates is that you feed in a target you may want to bind, a protein, or DNA, or RNA.
And then you feed in the high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things. And that's with natural language? It's basically prompting: we have a spec that you fill in, you feed that spec to the model, and the model translates it into a set of conditioning tokens and a set of blank tokens. Then, as part of the diffusion model, it decodes a new structure and a new sequence for your protein. And then, as Jeremy was saying, we take that and try to score it: how good a binder is it to the original target?
Brandon [00:50:51]: You're basically using Boltz to predict the folding and the affinity to that molecule, and then that gives you a score? Exactly.
Gabriel [00:51:03]: So you use the model to predict the folding, and then you do two things. One is that you re-predict the structure with something like Boltz-2 and compare it with the structure the design model predicted. In the field this is called consistency: you want to make sure the structure you're predicting is actually what you're trying to design, and that gives you much better confidence that it's a good design. That's the first filter. The second filter, which we used as part of the BoltzGen pipeline that was released, is to look at the confidence the model has in the structure. Now, unfortunately, going to your question about predicting affinity, confidence is not a very good predictor of affinity. And so one of the things where we've actually made a ton of progress since we released Boltz-2,
Brandon [00:52:03]: and where we have some new results that we're going to announce soon, is the ability to get much better hit rates when, instead of relying on the confidence of the model, we directly try to predict the affinity of that interaction. Okay, just backing up a minute: so your diffusion model actually predicts not only the protein sequence, but also the folding of it? Exactly.
Gabriel [00:52:32]: One of the big things we did differently compared to other models in the space (there were some papers that had already done this before, but we really scaled it up) was merging structure prediction and sequence prediction into almost the same task. The way BoltzGen works is that the only thing you're doing is predicting structure. The only supervision we give is supervision on the structure; but because the structure is atomic, and different amino acids have different atomic compositions, from the way the atoms are placed we recover not only the structure but also the identity of the amino acid the model believed was there. So instead of having two supervision signals, one discrete and one continuous,
which somewhat don't interact well together, we built an encoding of sequences in structures that lets us use exactly the same supervision signal we were using for Boltz-2, largely similar to what AlphaFold 3 proposed, which is very scalable, and use it to design new proteins. Oh, interesting.
RJ [00:53:58]: Maybe a quick shout-out to Hannes Stark on our team, who did all this work. Yeah.
Gabriel [00:54:04]: Yeah, that was a really cool idea. Looking at the paper, the encoding is that you add a bunch of atoms, which can be anything, and they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. There's a unique way of doing this; it was such a cool, fun idea.
RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.
Gabriel [00:54:33]: Yeah, it had been proposed, and Hannes really took it to large scale.
Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the actual validation of the model. In my opinion, everyone we talk to feels that wet-lab, real-world validation is the whole problem, or not the whole problem but a big giant part of the problem. So can you talk a little bit about the highlights from there? Because to me the results are impressive, both from the perspective of the model and from the sheer effort that went into the validation by a large team.
Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola and Regina Barzilay's labs, and at Boltz, we are not a bio lab and we are not a therapeutics company. So to some extent we were forced to look outside our group, our team, for the experimental validation. One of the things Hannes really pioneered on the team was the idea: can we go not just to one specific group with one specific system, where you maybe overfit a bit to that system to validate, but test this model across a very wide variety of settings? Protein design is such a wide task, with all sorts of applications from therapeutics to biosensors and many others, so can we get a validation that spans many different tasks? He put together something like 25 different academic and industry labs that committed to testing some of the designs from the model (some of that testing is still ongoing) and to giving results back to us, in exchange for hopefully getting some great new sequences for their task. He was able to coordinate this very wide set of scientists, and already in the paper, I think, we
shared results from, I think, eight to ten different labs: designing peptides that target ordered proteins, peptides that target disordered proteins, proteins that bind small molecules, and nanobodies, across a wide variety of different targets. That gave the paper a lot of validation for the model, validation that was quite broad.
Brandon [00:57:39]: And would those be therapeutics for those animals, or are they relevant to humans as well? They're relevant to humans as well.
Gabriel [00:57:45]: Obviously, you need to do some work to, quote unquote, humanize them, making sure they have the right characteristics so they're not toxic to humans and so on.
RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern of trying to design things that are smaller: they're easier to manufacture, though that comes with other challenges, maybe a little less selectivity than something with more hands. But yeah, there's this big desire to design mini-proteins, nanobodies, small peptides, modalities that just make great drugs.
Brandon [00:58:27]: Okay, I think we left off talking about validation, validation in the lab, and I was very excited to see all the diverse validations you've done. Can you go into more detail about specific ones?
RJ [00:58:43]: The nanobody one, I think we did, what was it, 15 targets? Is that correct? 14. 14 targets. The way this typically works is that we make a lot of designs, on the order of tens of thousands, then we rank them and pick the top; in this case it was 15 per target. Then we measure the success rates: both how many targets we were able to get a binder for, and, more generally, out of all the binders we designed, how many actually proved to be good binders. Some of the other ones involved, for example, a cool one where there was a small molecule and we designed a protein that binds to it. That has a lot of interesting applications, like the biosensing Gabri mentioned, which is pretty cool. We had a disordered protein, I think you mentioned, as well. Those were some of the highlights.
Gabriel [00:59:44]: I would say the way we structured those validations was, on the one end, validations across a whole set of problems that the biologists we were working with came to us with. For example, in some of the experiments we designed peptides targeting the RACC, a target involved in metabolism, and we had a number of other applications where we designed peptides or other modalities against other therapeutically relevant targets, plus some proteins designed to bind small molecules. And then some of the other testing was really trying to get a broader sense: how does the model work, especially when tested on generalization?
So one of the things we found with the field was that a lot of the validation, especially outside of validation on specific problems, was done on targets that have a lot of known interactions in the training data. It's always a bit hard to understand how much these models are really just regurgitating or imitating what they've seen in the training data, versus really being able to design new proteins. So one of the experiments we did was to take nine targets from the PDB, filtered to targets with no known interaction in the PDB: the model has never seen this particular protein, or a similar protein, bound to another protein, so there is no way it can just tweak something from its training set and imitate a known interaction. We took those nine proteins, worked with a CRO, and tested 15 mini-proteins and 15 nanobodies against each one of them. The very cool thing we saw was that on two thirds of those targets we were able, from those 15 designs, to get nanomolar binders. Nanomolar, roughly speaking, is just a measure of how strong the interaction is; a nanomolar binder is approximately the binding strength you need for a therapeutic. Yeah. So maybe switching directions a bit: Boltz Lab was just announced this week, or was it last week? Yeah. This is, I guess, your first product, if you want to call it that. Can you talk about what Boltz Lab is, and what you hope people take away from it?
RJ [01:02:44]: As we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own, and there are largely two categories there; actually, I'll split it into three. The first: it's one thing to predict a single interaction, a single structure; it's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and you need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure the model has a good enough understanding of the target so we can design against it? There are all sorts of tricks you can use to improve a particular structure prediction. So that's the first stage. Then there's the stage of designing and searching the space efficiently. For something like BoltzGen, you design many things and then rank them; for small molecules the process is a bit more complicated, because we also need to make sure the molecules are synthesizable. The way we do that is with a generative model that learns to use appropriate building blocks, so that it designs within a space we know is synthesizable. So there's a whole pipeline of different models involved in being able to design a molecule.
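Pulling those stages together, here is a minimal sketch of a generate, filter, and rank design campaign. Every callable here (the candidate generator, the independent refolding model, the self-consistency RMSD, the synthesizability check, and the scorer) is a hypothetical placeholder rather than the actual Boltz pipeline, and the 2.0 Å threshold and candidate counts are illustrative values.

```python
def design_campaign(target, design_candidates, refold, self_consistency_rmsd, score,
                    is_synthesizable=None, n_designs=10_000, top_k=15):
    """Generate many candidates, keep those that refold onto their own designed
    structure (self-consistency) and pass an optional synthesizability check,
    then rank by a scoring model and return the top_k to send to the wet lab."""
    candidates = design_candidates(target, n=n_designs)            # diffusion-based generator
    kept = []
    for cand in candidates:
        refolded = refold(target, cand.sequence)                   # independent structure prediction
        if self_consistency_rmsd(cand.structure, refolded) > 2.0:  # designed vs. re-predicted structure
            continue
        if is_synthesizable is not None and not is_synthesizable(cand):
            continue                                               # small-molecule campaigns only
        kept.append((score(target, cand), cand))                   # confidence or affinity score
    kept.sort(key=lambda pair: pair[0], reverse=True)
    return [cand for _, cand in kept[:top_k]]
```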
And so that's the first part: what we call agents. We have a protein design agent and a small-molecule design agent, and that's really at the core of what powers the Boltz Lab platform.
Brandon [01:04:22]: So these agents, are they a language model wrapper, or are they just your models and you're calling them agents? Because they do sort of perform a function on your behalf.
RJ [01:04:33]: They're more of a recipe, if you wish, and I think we use the term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's running a design campaign. Say you're designing a hundred thousand possible candidates to find the good one: that's a very large amount of compute. For small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. Ideally you want to do that in parallel, otherwise it's going to take you weeks. So we've put a lot of effort into having a GPU fleet that allows any one user to do this kind of large parallel search.
Brandon [01:05:23]: So you're amortizing the cost over your users. Exactly.
RJ [01:05:27]: And to some degree, using 10,000 GPUs for a minute is the same cost as using one GPU for God knows how long, so you might as well parallelize if you can. A lot of work has gone into that, and into making it very robust, so we can have a lot of people on the platform doing that at the same time. The third part is the interface, which comes in two shapes. One is an API, which is really suited for companies that want to integrate these pipelines, these agents.
RJ [01:06:01]: We're already partnering with a few distributors that are going to integrate our API. The second is the user interface, and we've put a lot of thought into that too. This is what I meant earlier about broadening the audience; that's what the user interface is about. We've built a lot of interesting features into it, for example around collaboration: when you have multiple medicinal chemists going through the results and trying to pick out which molecules to go and test in the lab, it's powerful for each of them to provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, that we try to solve with that part of the platform. Boltz Lab is a combination of these three objectives into one cohesive platform. Who is it accessible to? Everyone. You do need to request access today; we're still ramping up usage, but anyone can request access.
If you are an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you are a startup or biotech, you can also reach out; we'll typically hop on a call to understand what you're trying to do, and also provide a lot of free credit to get started. And with larger companies we can deploy the platform in a more secure environment, so those are more customized deals that we make with partners. That's the ethos of Boltz: this idea of serving everyone, not just going after the really large enterprises. It starts from the open source, but it's also a key design principle of the product itself.
Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, the cost of a token has gone down by, I think, a factor of a thousand or so over the last three years, right? Is it possible that you can exploit economies of scale in infrastructure, so that it's cheaper to run these things through you than for any one person to roll their own system? A hundred percent. Yeah.
RJ [01:08:08]: I mean, we're already there. Running Boltz on our platform, especially for a large screen, is considerably cheaper than it would cost anyone to take the open-source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models: our small-molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source. That's also part of building a product that scales really well. We really wanted to get to a point where we could keep prices low enough that it would be a no-brainer to use Boltz through our platform.
Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now the whole point is to design something that doesn't have co-evolution data, something really novel. You're basically leaving the domain you know you're good at. So how do you validate that?
RJ [01:09:22]: There are obviously a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: okay, with method A versus method B, how much better are we? How much better is my hit rate? How much stronger are my binders? And it's not just about hit rate; it's also about how good the binders are. There's really no way around that. We've really ramped up the amount of experimental validation that we do, so that we track progress in a way that is as scientifically sound
as possible, I think.
Gabriel [01:10:00]: Yeah, and one thing that is unique about us, and maybe about companies like us, is that because we're not working on just a couple of therapeutic pipelines, where our validation would be focused on those, when we do an experimental validation we try to test across tens of targets. On the one hand that gives us a much more statistically significant result, and it really allows us to make progress on the methodological side without being steered by overfitting on any one particular system. And of course we choose, you know, w
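On the point about testing across tens of targets to get statistically meaningful comparisons, here is a minimal sketch of comparing two methods' hit rates with a simple confidence interval. The counts are invented for illustration, not numbers from the Boltz validation, and a real analysis would also model per-target variation.

```python
def wilson_interval(hits: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for a hit rate; more reliable than the naive
    interval when the number of designs per campaign is small."""
    p = hits / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * ((p * (1 - p) / trials + z**2 / (4 * trials**2)) ** 0.5) / denom
    return center - half, center + half

# Made-up counts: binders found out of designs tested, pooled across many targets.
method_a = (22, 210)   # e.g. an older pipeline
method_b = (41, 210)   # e.g. a newer pipeline
for name, (hits, trials) in [("A", method_a), ("B", method_b)]:
    lo, hi = wilson_interval(hits, trials)
    print(f"method {name}: {hits}/{trials} = {hits/trials:.2%}, 95% CI [{lo:.2%}, {hi:.2%}]")
```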

    Cogwheel Gaming
    One-Shot: Cartoon Physics Ep 2 (Lasers & Feelings)

    Cogwheel Gaming

    Play Episode Listen Later Feb 12, 2026 60:12


    Crash GMs for Beth Ellie, & Io as they play a game based loosely on Lasers & Feelings, but with cartoons and physics. Download the rules for Cartoon Physics here. Lasers & Feelings 1.4 was created by John Harper and released under a CC BY 4.0 license. (Earlier versions were CC BY NC) You can find the rules here: https://johnharper.itch.io/lasers-feelings Follow this series on… One-Shots RSS Feed: https://aaronbsmith.com/cogwheel/tag/one-shot/podcast/ Patreon: https://www.patreon.com/cogwheelgaming Mastodon: https://is.aaronbsmith.com/@cogwheel Not on Mastodon? Consider these instances: gamepad.club dice.camp mastodon.art chirp.enworld.org MP3 Download: One-Shot: Cartoon Physics Ep 2 (Lasers & Feelings) Music Used: Bubble Machine by Drozerix Keep us ad free by supporting us on Patreon!  Thanks to our current Patreon Patrons (as of this upload…): Cindy (Patron Emeritus), Ellie, Liv Dromen, Paul, ShanShen, and Walter!

    Arizona Science
    Studying plasma and why it matters in physics

    Arizona Science

    Play Episode Listen Later Feb 12, 2026 7:30


Plasma scientists investigate ionized gases and how they interact with various materials. University of Arizona mathematics professor Lise-Marie Imbert-Gerard is studying how waves of energy travel through plasma. The findings could help scientists improve nuclear fusion technology. Lise-Marie Imbert-Gerard spoke with Leslie Tolbert, Ph.D., Regents Professor Emerita in Neuroscience at the University of Arizona.

    The 'X' Zone Radio Show
    Rob McConnell Interviews - THOMAS FUSCO - Trying To Prove The Paranormal Using His Own Hypothesis

    The 'X' Zone Radio Show

    Play Episode Listen Later Feb 12, 2026 47:00 Transcription Available


Thomas Fusco is a researcher and author who approaches paranormal phenomena through a self-developed hypothesis aimed at explaining unexplained experiences within a structured, testable framework. Rather than relying solely on anecdote, Fusco attempts to correlate reports of hauntings, apparitions, and anomalous events with underlying principles involving consciousness, environment, and physics. His work emphasizes experimentation, pattern analysis, and critical evaluation—seeking to move paranormal inquiry closer to a repeatable, hypothesis-driven model while acknowledging the limits and challenges of proving extraordinary claims. Become a supporter of this podcast: https://www.spreaker.com/podcast/the-x-zone-radio-tv-show--1078348/support. Please note that all XZBN radio and/or television shows are Copyright © REL-MAR McConnell Media Company, Niagara, Ontario, Canada – www.rel-mar.com. For more episodes of this show and all shows produced, broadcast, and syndicated by REL-MAR McConnell Media Company, The 'X' Zone Broadcast Network, and the 'X' Zone TV Channel, visit www.xzbn.net. For programming, distribution, and syndication inquiries, email programming@xzbn.net. We are proud to announce that we launched TWATNews.com in August 2025. TWATNews.com is an independent online news platform dedicated to uncovering the truth about Donald Trump and his ongoing influence in politics, business, and society. Unlike mainstream outlets that often sanitize, soften, or ignore stories that challenge Trump and his allies, TWATNews digs deeper to deliver hard-hitting articles, investigative features, and sharp commentary that mainstream media won't touch. These are stories and articles that you will not read anywhere else. Our mission is simple: to expose corruption, lies, and authoritarian tendencies while giving voice to the perspectives and evidence that are often marginalized or buried by corporate-controlled media.

    The Audio Long Read
    From the archive: Do we need a new theory of evolution?

    The Audio Long Read

    Play Episode Listen Later Feb 11, 2026 40:36


We are raiding the Guardian long read archives to bring you some classic pieces from years past, with new introductions from the authors. This week, from 2022: A new wave of scientists argues that mainstream evolutionary theory needs an urgent overhaul. Their opponents have dismissed them as misguided careerists – and the conflict may determine the future of biology. By Stephen Buranyi. Read by Andrew McGregor. Help support our independent journalism at theguardian.com/longreadpod

    Ceramic Tech Chat
    Research experiences support next-gen scientists: Mario Affatigato

    Ceramic Tech Chat

    Play Episode Listen Later Feb 11, 2026 28:37


Undergraduate research experiences have many well-known benefits for those just starting on their potential career path. Mario Affatigato, the Fran Allison and Francis Halpin Professor of Physics at Coe College, shares how his initial experiences with glass research as a student at Coe came full circle when he returned to Coe as a professor, describes the fundamental and applied glass science that his research group conducts, and discusses his plans and goals as president of ACerS this year. View the transcript for this episode here.
About the guest: Mario Affatigato is the Fran Allison and Francis Halpin Professor of Physics at Coe College in Cedar Rapids, Iowa. His group studies various glass-related questions from both a fundamental and applied perspective, including electrical conductivity of vanadate glasses and laser-based manufacturing. He is serving as this year's president of The American Ceramic Society, and he is also editor-in-chief of the International Journal of Applied Glass Science.
About ACerS: Founded in 1898, The American Ceramic Society is the leading professional membership organization for scientists, engineers, researchers, manufacturers, plant personnel, educators, and students working with ceramics and related materials.

    The 'X' Zone Radio Show
    Rob McConnell Interviews - RODNEY CLUFF - Breaking News… The World is Hollow.......NOT

    The 'X' Zone Radio Show

    Play Episode Listen Later Feb 11, 2026 46:48 Transcription Available


Rodney Cluff is a researcher and commentator known for critically examining the Hollow Earth hypothesis. In Breaking News… The World Is Hollow… NOT, Cluff revisits claims about a hollow planet by scrutinizing geology, physics, seismology, gravity, and historical sources. His work challenges sensational narratives, arguing that observational science and Earth's measured properties contradict Hollow Earth ideas. By separating myth, misinterpretation, and speculation from testable evidence, Cluff encourages skeptical inquiry and scientific literacy in discussions about fringe theories. Become a supporter of this podcast: https://www.spreaker.com/podcast/the-x-zone-radio-tv-show--1078348/support. Please note that all XZBN radio and/or television shows are Copyright © REL-MAR McConnell Media Company, Niagara, Ontario, Canada – www.rel-mar.com. For more episodes of this show and all shows produced, broadcast, and syndicated by REL-MAR McConnell Media Company, The 'X' Zone Broadcast Network, and the 'X' Zone TV Channel, visit www.xzbn.net. For programming, distribution, and syndication inquiries, email programming@xzbn.net.

    Short Wave
    The physics of the Winter Olympics

    Short Wave

    Play Episode Listen Later Feb 10, 2026 13:00


Watching a ski jumper fly through the air might get you wondering, “How do they do that?” The answer is – physics! That's why, this episode, we have two physicists – Amy Pope of Clemson University and host Regina G. Barber – break down the science at play across some of the sports at the 2026 Winter Olympics. Because what's a sport without a little friction, lift and conservation of energy? They also get into the new sport this year, ski mountaineering - or “skimo” as many call it - and the recent scandal involving the men's ski jump suits. Interested in more science behind Olympic sports? Check out our episodes on how extreme G-forces affect Olympic bobsledders, the physics of figure skating and the science behind Simone Biles' Olympic gold. Also, we'd love to know what science questions have you stumped. Email us your questions at shortwave@npr.org – we may solve one for you on a future episode! Listen to every episode of Short Wave sponsor-free and support our work at NPR by signing up for Short Wave+ at plus.npr.org/shortwave. Learn more about sponsor message choices: podcastchoices.com/adchoices NPR Privacy Policy

    Discover Your Talent–Do What You Love
    1195. Finding a Deep Sense of Purpose: Successful Venture Capitalist to Passionate Education Change Advocate

    Discover Your Talent–Do What You Love

    Play Episode Listen Later Feb 10, 2026 37:38


Guest: Ted Dintersmith. "I was recognized by one of the trade publications as one of the top-ranked venture capitalists in the country for 1995 to 1999 – which were good years to be good at it. I loved every day. But as I got further into it, I realized that a lot of the companies we backed were developing products and solutions to make customers far more productive. And that seems to be a really good thing. "But at a certain point, I realized that if you make a few people really productive, you may be laying off a bunch of others, which gets me to AI and why I am so focused on things today. "As I looked back on my business career, every day was really fun, but I didn't feel a sense of purpose. Now, every day, I feel a deep sense of purpose by fighting for different priorities in schools and fighting for helping kids find their strengths – instead of putting students on the narrow conveyor belt that leads right into the jaws of AI." Recommendation to listeners: "Find the things you love to do. Be resourceful in terms of connecting your passions with ways to support yourself financially. Take chances and be bold. And leverage technology. You will never look back and you are going to be in great shape." Ted Dintersmith is a best-selling author, education advocate, and former venture capitalist who believes math has been weaponized—and it's time to set things right. His professional career has been immersed in the world of technology-driven education, giving him a ringside seat to the advances of integrated circuits, robotics, and Artificial Intelligence. For the past fifteen years, he has focused on the world of education, forming an education non-profit, authoring best-selling books, and setting a mission to help catalyze and accelerate progress in our schools and equip our children with skills and mindsets that are essential in a world defined by rapidly-advancing innovation. Ted graduated from the College of William and Mary with High Honors in English and Physics and then got a PhD in Engineering from Stanford. In 2012, he was appointed by President Obama to represent the U.S. at the United Nations General Assembly, where he focused on education and youth entrepreneurship.

    Regenerative Health with Max Gulhane, MD
    100. Mitochondria, Origins of Life and the Physics of Aging | Prof. Alistair Nunn

    Regenerative Health with Max Gulhane, MD

    Play Episode Listen Later Feb 10, 2026 130:49


    Modern humans are living in an environment radically different from the one our biology evolved to operate within. In this conversation, we explore how light, mitochondria, evolution, and physics shape aging, disease, mental health, and longevity. This episode reframes health not as a matter of isolated interventions, but as the consequence of whether we remain inside—or drift outside—the human biological envelope.Dr Alistair Nunn is the Director of Science of The Guy Foundation Family Trust and Visiting Professor in Theoretical Quantum Biology & Bioenergetics Research Centre for Optimal Health at the University of Westminster, London, UK. SUPPORT MY WORK

    Where We Live
    CT goes quantum: A look at the littlest things out there

    Where We Live

    Play Episode Listen Later Feb 10, 2026 49:00


This show either exists or doesn't exist. It's possible you won't know until you listen to it. Today, we're getting quarky, exploring the weird — and mind-bogglingly small — world of quantum mechanics. What is it? Should we be excited? Scared? Some superposition of both? We’ll also hear about new state and federal investments into quantum technology, and learn how Connecticut colleges are making quantum more accessible. Guests: Chad Orzel: chair of the Department of Physics and Astronomy at Union College and author of the book “How to Teach Quantum Physics to Your Dog.” Christine Broadbridge: founding director of CSCU’s Center for Quantum and Nanotechnology and the executive director of research and innovation at SCSU. Emily Edwards: associate research professor at Duke University and co-leader of the National Q-12 Education Partnership. Where We Live is available on Apple Podcasts, Spotify, Amazon Music, TuneIn, Listen Notes, or wherever you get your podcasts. Subscribe and never miss an episode. Support the show: http://wnpr.org/donate See omnystudio.com/listener for privacy information.

    Theories of Everything with Curt Jaimungal
    Vitaly Vanchurin: This Cosmologist Discovered Something Strange...

    Theories of Everything with Curt Jaimungal

    Play Episode Listen Later Feb 9, 2026 118:32


    What if physics is just the universe learning? Most Theories of Everything episodes are mind‑bending for their math, physics, philosophy, or consciousness implications. This one hits all four simultaneously. Professor Vitaly Vanchurin joins me to argue the cosmos isn't just modeled by neural networks—it literally is one. Learning dynamics aren't a metaphor for physics; they are the physics. Vanchurin shows why we need a three‑way unification: quantum mechanics, general relativity, and observers. As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe TIMESTAMPS: - 00:00:00 - The Neural Network Universe - 00:05:48 - Learning Dynamics as Physics - 00:11:52 - Optimization and Variational Principles - 00:21:17 - Deriving Fundamental Field Equations - 00:28:47 - Fermions and Particle Emergence - 00:37:17 - Geometry of Learning Algorithms - 00:44:53 - Emergent Quantum Mechanics - 00:50:01 - Renormalization and Interpretability - 00:57:00 - Second Law of Learning - 01:05:10 - Subatomic Natural Selection - 01:15:40 - Consciousness and Learning Efficiency - 01:24:09 - Unifying Physics and Observers - 01:31:01 - Qualia and Hidden Variables - 01:40:24 - Free Energy Principle Integration - 01:46:04 - Epistemological Doubt and Advice LINKS MENTIONED: - Vitaly's Papers: https://inspirebeta.net/literature?sort=mostrecent&size=25&page=1&q=find%20author%20vanchurin - Vitaly's Lecture: https://youtu.be/TagDLiLb2VQ - Vitaly's Website: https://cosmos.phy.tufts.edu/~vitaly/ - Towards A Theory Of Machine Learning [Paper]: https://arxiv.org/pdf/2004.09280 - Autonomous Particles [Paper]: https://arxiv.org/pdf/2301.10077 - Emergent Field Theories From Neural Networks [Paper]: https://arxiv.org/pdf/2411.08138 - Covariant Gradient Descent [Paper]: https://arxiv.org/pdf/2504.05279 - A Quantum-Classical Duality And Emergent Spacetime [Paper]: https://arxiv.org/abs/1903.06083 - Emergent Quantumness In Neural Networks [Paper]: https://arxiv.org/abs/2012.05082 - Predictability Crisis In Inflationary Cosmology And Its Resolution [Paper]: https://arxiv.org/abs/gr-qc/9905097 - Stationary Measure In The Multiverse [Paper]: https://arxiv.org/abs/0812.0005 - The World As A Neural Network [Paper]: https://arxiv.org/pdf/2008.01540 - Self-Organized Criticality In Neural Networks [Paper]: https://arxiv.org/pdf/2107.03402v1 - One Hundred Authors Against Einstein [Book]: https://amazon.com/dp/B09PHH7KC8?tag=toe08-20 - Geocentric Cosmology: A New Look At The Measure Problem [Paper]: https://arxiv.org/abs/1006.4148 - Jacob Barandes [TOE]: https://youtu.be/gEK4-XtMwro - Yang-Hui He [TOE]: https://youtu.be/spIquD_mBFk - Eva Miranda [TOE]: https://youtu.be/6XyMepn-AZo - Felix Finster [TOE]: https://youtu.be/fXzO_KAqrh0 - Stephen Wolfram [TOE]: https://youtu.be/FkYer0xP37E - Stephen Wolfram 2 [TOE]: https://youtu.be/0YRlQQw0d-4 - Avshalom Elitzur [TOE]: https://youtu.be/pWRAaimQT1E - Ted Jacobson [TOE]: https://youtu.be/3mhctWlXyV8 - Geoffrey Hinton [TOE]: https://youtu.be/b_DUft-BdIE - Wayne Myrvold [TOE]: https://youtu.be/HIoviZe14pY - Cumrun Vafa [TOE]: https://youtu.be/kUHOoMX4Bqw - Claudia De Rham [TOE]: https://youtu.be/Ve_Mpd6dGv8 - Lee Smolin [TOE]: https://youtu.be/uOKOodQXjhc - Consciousness Iceberg [TOE]: https://youtu.be/65yjqIDghEk - Matthew Segall [TOE]: https://youtu.be/DeTm4fSXpbM - Andres Emilsson [TOE]: https://youtu.be/BBP8WZpYp0Y - Will Hahn [TOE]: https://youtu.be/3fkg0uTA3qU - David Wallace [TOE]: https://youtu.be/4MjNuJK5RzM - Karl 
Friston [TOE]: https://youtu.be/uk4NZorRjCo Learn more about your ad choices. Visit megaphone.fm/adchoices

    Moser, Lombardi and Kane
    2-09-26 Hour 2 - Take joy in Patriots pain/The Physics of Luge/New segment: Oh, By the Way...

    Moser, Lombardi and Kane

    Play Episode Listen Later Feb 9, 2026 40:34 Transcription Available


0:00 - Call it sour grapes or whatever, but how great was it to see the Patriots lose yesterday? Boston fans are miserable and we're here for it. Their MVP-candidate Golden Boy QB put together a lackluster performance in the biggest game of his life. How da ya like them apples, Dave Portnoy?
12:07 - Why would it be shameful if Vic dyed his hair? If he went full Vicky Martin, why is that embarrassing? Also, luge is awesome but it makes no sense to us. How does it work? Does being fat give you an advantage because you slide faster? Or is it better to be skinny and aerodynamic?
28:36 - We're debuting a new daily segment called "Oh, By The Way..."

    Global Medical Device Podcast powered by Greenlight Guru
    #446: The Hidden Physics of the MedTech Life Cycle with Dr. Kristy Katzenmeyer-Pleuss

    Global Medical Device Podcast powered by Greenlight Guru

    Play Episode Listen Later Feb 9, 2026 45:42


In this episode, Etienne Nichols sits down with Dr. Kristy Katzenmeyer-Pleuss, President and Founder of KP Medical Device Consulting, to unpack the complexities of the medical device life cycle. The conversation centers on how manufacturers often overlook critical phases of a product's journey, such as transportation, shelf life, and the decommissioning phase, focusing instead solely on the point of patient use. Dr. Katzenmeyer-Pleuss highlights the significance of the upcoming ISO 10993-1:2025 standard and its renewed emphasis on life cycle-based risk assessments. She explains how the transition between global markets—particularly between the EU and the US—can lead to unexpected FDA deficiencies when manufacturers rely on justifications that worked for notified bodies but do not meet more stringent FDA testing expectations for reusable or in situ curing devices. The discussion concludes with actionable advice on early design decisions, such as narrowing down material suppliers and reprocessing options to reduce testing burdens. They also explore the critical need for cross-functional communication and quality system integration to ensure that learnings from one project or regulatory interaction are captured and applied across a company's entire portfolio.
Key Timestamps
01:45 – Introduction of Dr. Kristy Katzenmeyer-Pleuss and the mission of KP Medical Device Consulting.
04:12 – Defining the Medical Device Life Cycle: Concept to decommissioning and the "hidden" phases in between.
05:30 – ISO 10993-1:2025: The impact of the new biological evaluation standard on risk-based approaches.
09:15 – Global Regulatory Discrepancies: Why a device approved in Europe might face hurdles at the FDA regarding "worst-case" testing.
13:40 – Reusable Devices & Reprocessing: Managing the "permutation explosion" of cleaning agents and sterilization cycles.
17:22 – Early Design Decisions: How limiting options in the IFU can significantly decrease your regulatory testing burden.
21:05 – In Situ Curing Devices: The unique testing challenges of materials that change states during use.
25:10 – Quality System Integration: Strategies for linking regulatory deficiencies and materials across multiple projects.
Quotes
"The life cycle is really the concept of the medical device from when it's a concept all the way through to the end where you are disposing or decommissioning... shelf life and transport are steps that usually don't get a lot of focus, but they are very important." - Dr. Katzenmeyer-Pleuss
"You might have one device where literally they don't ask these questions at all, and then other times they're very, very picky... the longer you go in that process, the harder it is to pivot without spending a lot of time and money." - Dr. Katzenmeyer-Pleuss
Takeaways
Front-load Risk Assessments: Don't wait for FDA deficiencies to consider how shelf life or reprocessing affects device safety; integrate these into the biological evaluation plan from day one.

    Living With Cystic Fibrosis
    Impacting CF with science: Dr. Jeffry Weers

    Living With Cystic Fibrosis

    Play Episode Listen Later Feb 9, 2026 34:48


    Innovating Medicine: How Science, Collaboration, and Curiosity Transform Patient Care
    It is always inspiring to speak with true innovators on this podcast, the people who don't just follow the science, but actively push it forward, turning ideas into real-world solutions that change lives. We are honored to welcome Dr. Jeffry Weers, whose work has profoundly impacted the cystic fibrosis (CF) community and beyond.
    Dr. Weers is a distinguished pharmaceutical scientist with more than 35 years of experience designing and developing novel drug-delivery systems. Throughout his career, he has focused on innovative treatments for CF, working across formulations, biologics, small molecules, and combination products. His achievements include an extensive patent portfolio and a remarkable publication record, but what truly sets him apart is his ability to translate ideas into treatments that improve patient lives. I found that many scientists like Dr. Weers are soft-spoken. They don't want to brag about their scientific successes; they just want their work to speak for itself. Dr. Weers is so darn smart! He won't toot his own horn, so I must! He's a great person who is filled with so much hope for the future.
    One of Dr. Weers' most notable contributions is the invention of the Tobi Podhaler, a device that transformed how inhaled antibiotics reach the lungs. For people living with CF, this innovation has meant more effective, easier-to-administer treatment, significantly improving daily quality of life. His work exemplifies the power of scientific innovation to directly impact patient care.
    Dr. Weers delves into both the breakthroughs and the challenges of drug development. He shares insights into the ongoing hurdles of developing inhaled medications, including inhaled insulin, and emphasizes the regulatory obstacles that can slow the introduction of new anti-infectives. Yet, he remains optimistic about the future, highlighting the role of collaboration among scientists and the potential of AI to enhance medical imaging, diagnosis, and patient outcomes.
    Dr. Weers also stresses the critical importance of addressing infectious diseases in CF patients and the responsibility of the scientific community to advocate for better treatments. Beyond his professional achievements, he reflects on the personal side of being a lifelong scientist, sharing how interests like farming provide balance and perspective in a demanding career.
    I particularly loved recording this episode because Dr. Weers has a rare ability to make complex science accessible, explaining the "why" behind innovations in a way anyone can understand. For anyone curious about the intersection of science, medicine, and human impact, this conversation is both enlightening and inspiring.
    To watch a fabulous video that explains what it takes to get medicine into the lungs, view here (YouTube link): https://www.youtube.com/watch?v=fwglM8Zo4m0
    Inhaled drug delivery in CF, another YouTube link: https://youtu.be/iV27VdieQbo
    Please like, subscribe, and comment on our podcasts!
    Please consider making a donation: https://thebonnellfoundation.org/donate/
    The Bonnell Foundation website: https://thebonnellfoundation.org
    Email us at: thebonnellfoundation@gmail.com
    Watch our podcasts on YouTube: https://www.youtube.com/@laurabonnell1136/featured
    Thanks to our sponsors:
    Vertex: https://www.vrtx.com
    Viatris: https://www.viatris.com/en

    @HPCpodcast with Shahin Khan and Doug Black

    - Sovereign AI: what is it, and does anyone have it?
    - Bullish on Eviden: Europe's top system company restores old name
    - Intel to build server GPUs of its own
    - MIT Technology Review AI Predictions
    Audio: https://orionx.net/wp-content/uploads/2026/02/HPCNB_20260209.mp3

    Science & Futurism with Isaac Arthur
    Wormhole Stableways – Constructing and Navigating Artificial Shortcuts Through Space

    Science & Futurism with Isaac Arthur

    Play Episode Listen Later Feb 8, 2026 36:09


    Could we build wormholes and travel the galaxy? Exploring stable wormholes, spacetime shortcuts, and the future of interstellar civilization.
    Get Nebula using my link for 50% off an annual subscription: https://go.nebula.tv/isaacarthur
    Watch my exclusive video The Future of Interstellar Communication: https://nebula.tv/videos/isaacarthur-chronoengineering-manipulating-time-as-technology
    Check out Joe Scott's Oldest & Newest: https://nebula.tv/videos/joescott-oldest-and-newest-places-on-earth?ref=isaacarthur

    Science & Futurism with Isaac Arthur
    Wormhole Stableways – Constructing and Navigating Artificial Shortcuts Through Space (Narration Only)

    Science & Futurism with Isaac Arthur

    Play Episode Listen Later Feb 8, 2026 35:48


    Could we build wormholes and travel the galaxy? Exploring stable wormholes, spacetime shortcuts, and the future of interstellar civilization.
    Get Nebula using my link for 50% off an annual subscription: https://go.nebula.tv/isaacarthur
    Watch my exclusive video The Future of Interstellar Communication: https://nebula.tv/videos/isaacarthur-chronoengineering-manipulating-time-as-technology
    Check out Joe Scott's Oldest & Newest: https://nebula.tv/videos/joescott-oldest-and-newest-places-on-earth?ref=isaacarthur

    Hillside Fellowship Podcast
    The physics of following Jesus

    Hillside Fellowship Podcast

    Play Episode Listen Later Feb 8, 2026 42:18


    Two masters. One heart. One winner. What you desire most will decide your direction, every time.
    OUTLINE:
    Management 16:1-13
    Motives 16:14-15
    Morals 16:16-18
    QUESTIONS:
    16:1-13 Jesus says no one can serve two masters, even though we try. Where do you sense the strongest pull between Jesus and something else right now? In physics, net force determines direction; over time the strongest force always wins. What desire currently has the greatest 'pull' in your life?
    16:14-15 The Pharisees looked righteous on the outside but were driven by hidden motives. What motives are easiest for you to hide from others? When God exposes a backstage motive, what is your typical response: defensiveness, ridicule, denial or repentance?
    16:16-18 "Holy hopscotch" describes skipping over uncomfortable truths. Where are you most tempted to minimise, redefine, or avoid what Jesus clearly says? When obedience feels costly, what competing desire usually steps in to take control? Where do you need to remember and be reminded that Jesus is the perfect manager who paid your debt in full?
    SCRIPTURE REFERENCE: Luke 16:1-18 https://www.bible.com/bible/100/LUK.16.NASB1995
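    The physics analogy in the questions above, that net force determines direction, can be made concrete: acceleration points along the vector sum of all forces, so whichever sustained pull dominates sets where you end up heading. A minimal sketch with arbitrary, made-up force values:

```python
# Illustrative only: the direction of acceleration follows the net (vector sum) force,
# so the strongest sustained pull wins over time. All numbers are arbitrary.
def net_force(forces):
    """Sum 2D force vectors (in newtons) component-wise."""
    fx = sum(f[0] for f in forces)
    fy = sum(f[1] for f in forces)
    return fx, fy

pulls = [(5.0, 0.0),   # pull toward one "master"
         (-3.0, 0.0),  # opposing pull
         (0.0, 1.0)]   # a weaker sideways tug

fx, fy = net_force(pulls)
mass = 2.0  # kg, arbitrary
print(f"net force = ({fx}, {fy}) N -> acceleration = ({fx/mass}, {fy/mass}) m/s^2")
```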

    ANGELA'S SYMPOSIUM 📖 Academic Study on Witchcraft, Paganism, esotericism, magick and the Occult

    Do Gods exist? Why is magic more effective when Gods and spirits are involved? What makes magic effective? How to influence people and political events?
    All of this and more in this discussion with Chaos Magician Peter J. Carroll.
    Check out Peter Carroll's website: https://specularium.org/

    Pippin church of Christ
    Evidence for God - Physics

    Pippin church of Christ

    Play Episode Listen Later Feb 8, 2026 36:53


    Evidence for God - Physics - Ecc. 1:6

    To The Best Of Our Knowledge
    Carlo Rovelli: Cosmic Mysteries and the Politics of Wonder

    To The Best Of Our Knowledge

    Play Episode Listen Later Feb 7, 2026 37:41 Transcription Available


    Carlo Rovelli's quest to understand the nature of reality began not in a physics lab, but in youthful experiments with consciousness, political protest and a restless hunger for meaning, years before he “fell madly in love with physics.” Today, Rovelli is famous for his bestselling books, including "Seven Brief Lessons on Physics" and "Reality Is Not What It Seems," and his pioneering work on some of the biggest mysteries in physics, including black holes and quantum gravity. In a wide-ranging conversation, Steve Paulson talks with Rovelli about his early, profound experiences with LSD; his discovery of the "spectacular" beauty of general relativity and quantum mechanics; his lifelong search for purpose in both the cosmos and his own life; and why scientists need to be politically engaged. Carlo also tells us about the big idea that he'd put in our own wonder cabinet.
    This interview was recorded at the Island of Knowledge think tank in Tuscany, a project supported by Dartmouth College and the John Templeton Foundation. We also play a short excerpt from Anne Strainchamps' earlier interview with Rovelli that originally aired on Wisconsin Public Radio's To The Best Of Our Knowledge. This Wonder Cabinet episode was not funded, endorsed or affiliated with Wisconsin Public Media or the University of Wisconsin - Madison.
    Deep Time: Carlo Rovelli's white holes, where time dissolves: https://www.ttbook.org/interview/carlo-rovellis-white-holes-where-time-dissolves
    More from Carlo Rovelli: https://www.cpt.univ-mrs.fr/~rovelli/
    00:00:00 Introduction & The Chirp of Black Holes
    00:04:10 Early Years in Verona
    00:10:00 Falling in Love with Physics
    00:17:30 Search for Truth
    00:25:05 Politics of Wonder
    Wonder Cabinet is hosted by Anne Strainchamps and Steve Paulson. Find out more about the show at https://wondercabinetproductions.com, where you can subscribe to the podcast and our newsletter.

    Sleep Space from Astrum
    The Impossible Discoveries That Made Us Rewrite Physics

    Sleep Space from Astrum

    Play Episode Listen Later Feb 7, 2026 89:39


    This Astrum compilation explores the top mind-blowing discoveries that broke our physics models wide open. From the expansion of the universe, down to the tiniest quantum scale, these discoveries just don't make sense.
    Astrum's newsletter has launched! Want to know what's happening in space? Sign up here: https://astrumspace.kit.com
    A huge thanks to our Patreons who help make these videos possible. Sign-up here: https://bit.ly/4aiJZNF

    StarTalk Radio
    Cosmic Queries – Understanding Infinity with Stephon Alexander

    StarTalk Radio

    Play Episode Listen Later Feb 6, 2026 45:47


    What is infinity? Neil deGrasse Tyson and comedian Negin Farsad explore whether we are in a finite universe, the issues with infinity, string theory, and more with theoretical physicist Stephon Alexander.
    Originally aired April 11, 2023.
    NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/cosmic-queries-understanding-infinity-with-stephon-alexander/
    Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus.
    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    Ridgefield Tiger Talk
    Ridgefield Tiger Talk 139: RHS Robotics Competition 2026

    Ridgefield Tiger Talk

    Play Episode Listen Later Feb 6, 2026 22:07


    In this week's episode of Ridgefield Tiger Talk we welcome back Michael Murphy, Physics and Robotics teacher at Ridgefield High School. Joining him are three RHS students: Connor Graves, Robotics Club Vice President; Brian Murphy, Co-Student Event Manager for this year's Ridgefield VEX Tournament; and Ayan Bhowmik, Robotics Club Co-President. We discussed the annual robotics tournament held at Ridgefield High School this Saturday, February 7. Click here to see the event flyer. Also, if you can't make the event, it'll be streamed on YouTube here. Thanks for listening.

    Science & Futurism with Isaac Arthur
    Deep Space Habitats – Designing Self-Sustaining Biomes for Interstellar Journeys (Narration Only)

    Science & Futurism with Isaac Arthur

    Play Episode Listen Later Feb 5, 2026 26:08


    Todd N Tyler Radio Empire
    2/2 4-3 Why Planes Fly

    Todd N Tyler Radio Empire

    Play Episode Listen Later Feb 2, 2026 14:25


    Physics. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.