Podcasts about alphafold

  • 340 PODCASTS
  • 483 EPISODES
  • 45m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Feb 27, 2026 LATEST

POPULARITY

[Popularity trend chart, 2019-2026]


Best podcasts about alphafold

Latest podcast episodes about alphafold

The Human Upgrade with Dave Asprey
Mexican Cartel Biohacking, Google Anti-Aging Breakthrough, Measles Is Back, Age Reversal In 2026: 1423

The Human Upgrade with Dave Asprey

Play Episode Listen Later Feb 27, 2026 9:22


This week's stories:

Sinclair's This Is the Test: Are we about to see age reversal in humans?
At the World Governments Summit 2026 in Dubai, Harvard geneticist David Sinclair told world leaders that ageing could soon be reversible and said the first human clinical trials of epigenetic reprogramming therapies are moving forward. The core idea is that ageing is partly an information problem (how cells read DNA), not just cumulative damage, and that partial reprogramming could restore youthful function without turning tissues into tumors. Dave frames this as a rare binary moment for longevity: either early, localized human trials (starting with tightly controlled tissue targets like the eye) show meaningful functional rejuvenation with acceptable safety, or the field has to recalibrate fast. Either way, the next couple of years will heavily influence where money, regulators, and serious researchers place their bets.
• Sources:
– World Governments Summit: https://www.worldgovernmentssummit.org/media-hub/news/detail/ageing-could-soon-be-reversible-says-harvard-scientist-at-wgs-2026
– NAD / Life Biosciences coverage: https://www.nad.com/news/fda-greenlights-life-biosciences-human-study-setting-up-pivotal-test-for-aging-theory-from-harvards-david-sinclair

AlphaFold 4 in a locked box: DeepMind's private AI drug design engine
Isomorphic Labs, DeepMind's drug discovery company, unveiled a proprietary drug design engine that outside scientists are comparing to an "AlphaFold 4 moment," but for designing drugs, not just predicting structures. The big shift is that this system is closed: no public weights, no open database, and access appears to flow through partnerships with pharma companies.
Dave breaks down why that matters for the longevity world: if AI makes early discovery cheaper and faster, we might see more serious shots on ageing targets over the next decade, but a closed model can also mean less transparency, bigger IP moats, and no guarantee that faster discovery leads to cheaper drugs.
• Sources:
– Nature: https://www.nature.com/articles/d41586-026-00365-7
– Isomorphic Labs: https://www.isomorphiclabs.com/articles/the-isomorphic-labs-drug-design-engine-unlocks-a-new-frontier

Peptides in the freezer: El Mencho's anti-aging stash and the dark side of wellness
After reports and images from the final hideout linked to Jalisco New Generation Cartel leader Nemesio Oseguera Cervantes (El Mencho), coverage highlighted a detail that feels uncomfortably familiar to anyone on the modern wellness internet: injectable vials stored in a freezer with a schedule attached, including Tationil Plus, a glutathione-based injectable marketed in some places for "cellular health," cosmetic effects, and anti-ageing. Dave uses the absurdity as a narrative wedge, not cartel gossip, to talk about how normalized gray-market injectables have become, and how marketing ("detox," "cellular reset") often outruns evidence and safety. The segment pivots into a practical filter: which compounds are real therapeutics under medical supervision, and which are expensive folklore with sourcing risk and unknown long-term downsides.
• Sources:
– New York Post: https://nypost.com/2026/02/25/world-news/inside-the-luxurious-love-nest-where-mexican-drug-lord-el-mencho-spent-his-final-days/
– Sky News (Reuters photos referenced): https://news.sky.com/story/inside-the-mexican-villa-where-feared-drug-lord-el-mencho-spent-final-hours-13511954
– Reuters photo gallery: https://www.reuters.com/pictures/el-menchos-last-hideout-inside-villa-where-cartel-leader-spent-final-hours-2026-02-25/W7DK5WEXS5IMLLZQO2P3CXGXFM

The disease we thought was dead: measles comes roaring back
Measles cases have surged in early 2026, with reporting citing at least 588 cases in the U.S. by late January, already more than many full-year totals, and additional updates showing continued acceleration into February. Dave reframes this as a healthspan-floor issue: you can argue about peptides and mitochondria all day, but measles is so contagious that once community immunity drops, outbreaks move fast and hit the most vulnerable first, especially infants and immunocompromised people. He also flags the systems problem: many clinicians have never seen measles, which increases the odds of delayed recognition and wider exposure in waiting rooms. The actionable move is boring and high-ROI: verify MMR status for you and your family and close gaps before outbreaks get closer to home.
• Sources:
– AMA Morning Rounds (Week of Feb. 2, 2026): https://www.ama-assn.org/about/publications-newsletters/top-news-stories-ama-morning-rounds-week-feb-2-2026
– ABC News (CDC case count coverage): https://abcnews.com/Health/588-us-measles-cases-reported-january-cdc/story?id=129699078
– CIDRAP (case tracking context): https://www.cidrap.umn.edu/measles/us-measles-cases-soar-588-so-far-year-south-carolina-confirms-58-new-infections

DC vs. your health: Trump's State of the Union health reset
President Donald Trump's 2026 State of the Union included a cluster of healthcare themes that function as a directional signal for agencies and payers this year, including drug pricing rhetoric, price transparency, and broader coverage and affordability framing. Dave translates the politics into a practical heuristic for biohackers: federal posture quietly determines what becomes easy versus painful to access in the legitimate system, from GLP-1 coverage rules and prior-auth behavior to how friendly the environment is for telehealth, at-home diagnostics, and eventually whatever "real longevity medicine" looks like. You do not need every policy detail in a weekly rundown, just the weather report: reimbursement and enforcement trends shape what stays niche, what scales, and what gets friction.
• Sources:
– Advisory Board: https://www.advisory.com/daily-briefing/2026/02/25/health-policy-roundup
– Healthcare Dive: https://www.healthcaredive.com/news/trump-state-of-the-union-healthcare-2026/812962/
– This Week in Public Health analysis: https://thisweekinpublichealth.com/blog/2026/02/25/the-2026-state-of-the-union-what-it-means-for-health-and-public-health/

All source links are provided for direct access to the original reporting and research. This episode is designed for biohackers, longevity seekers, and high-performance listeners who want mechanism-level clarity on circadian biology, neurodegeneration signals, cognitive training, caffeine strategy, and supplement regulation.
Host Dave Asprey connects emerging science, behavioral data, and policy shifts into practical frameworks you can use to build a resilient, adaptable health stack. New episodes every Tuesday, Thursday, Friday, and Sunday.

Keywords: David Sinclair age reversal, epigenetic reprogramming therapy, Yamanaka factors OSK, Life Biosciences clinical trial, human rejuvenation trial 2026, biological age reset, longevity breakthrough news, DeepMind Isomorphic Labs, AlphaFold 4 drug design, AI drug discovery engine, geroprotective drug development, peptide gray market risks, injectable glutathione Tationil Plus, GLP-1 regulation FDA warning, wellness industry regulation, measles outbreak 2026 US, MMR vaccine status adults, vaccine trust public health, health policy 2026 State of the Union, GLP-1 access and reimbursement, telehealth longevity care, biohacking news, anti-aging research update

Thank you to our sponsors!

Resources:
• Get My 2026 Clean Nicotine Roadmap | Enroll for free at https://daveasprey.com/2026-clean-nicotine-roadmap/
• Get My 2026 Biohacking Trends Report: https://daveasprey.com/2026-biohacking-trends-report/
• Dave Asprey's Latest News | Go to https://daveasprey.com/ to join Inside Track today.
• Danger Coffee: https://dangercoffee.com/discount/dave15
• My Daily Supplements: SuppGrade Labs (15% Off)
• Favorite Blue Light Blocking Glasses: TrueDark (15% Off)
• Dave Asprey's BEYOND Conference: https://beyondconference.com
• Dave Asprey's New Book – Heavily Meditated: https://daveasprey.com/heavily-meditated
• Join My Substack (Live Access To Podcast Recordings): https://substack.daveasprey.com/
• Upgrade Labs: https://upgradelabs.com

Timestamps:
0:00 - Introduction
0:30 - Story #1: David Sinclair 2026
2:13 - Story #2: Google Drug Discovery
3:48 - Story #3: El Mencho Biohacking
5:30 - Story #4: Measles Outbreak
6:51 - Story #5: Trump State of the Union
8:00 - Weekly Roundup
9:10 - Closing

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Conspiracy Clearinghouse
Bohemian Books: Gigas, Voynich & Soyga

Conspiracy Clearinghouse

Play Episode Listen Later Feb 25, 2026 50:21


EPISODE 157 | Bohemian Books: Gigas, Voynich & Soyga

Some very old books have an air of mystery and intrigue about them. Partly, that's because they are literally hundreds of years old, and partly it's because of the weird things they contain. Today, we'll take a look at three, all of which have a connection to the Czech Republic and Prague: the biggest book in the world, the Codex Gigas (also known as the Devil's Bible, and which features heavily [no pun intended] in Dan Brown's latest schlock fest); the utterly baffling Voynich Manuscript, which is not written in any recognizable language; and the mysterious Book of Soyga, which disappeared for nearly 400 years, and some say that if you can decipher the final puzzles in the book, you will die.

Like what we do? Then buy us a beer or three via our page on Buy Me a Coffee. Review us here or on IMDb. And seriously, subscribe, will ya? Like, just do it.

SECTIONS
02:11 - The Codex Gigas - That's a big book, contents, legend of origin, Sweden gets it, defenestrations, the Sedlec Bone Church, The Secret of Secrets
11:00 - The Voynich Manuscript - WTF is this thing?, ownership relay, who maybe wrote it, what maybe it says, aspects of Voynichese, obscure languages, steganography, glossolalia, outsider art, a hoax, radiocarbon dating, those who have claimed decipherment, ciphers, people see what they want to, goropism, the Sun Language Theory, recent videos about AlphaFold and protein folding, maybe a work of proto-fiction
43:32 - The Book of Soyga - John Dee, Edward Kelley, cryptic puzzles, 400 years lost, found in 1994

Music by Fanette Ronjat

More Info
The Codex Gigas – Devil's Bible on the National Library of Sweden website
The Devil's Bible: My Deep Dive into the Weirdest Book I've Ever Seen
Devil's Bible: Codex Gigas in Klementinum on Prague.net from 2007 loan
Inside the 'Devil's Bible,' the Largest Medieval Manuscript Ever Made on ArtNet
EPISODE 109 | What's in a Name? The Shakespeare Authorship Debate with Scott Jackson
EPISODE 135 | On Shakey Ground: More Shakespeare Authorship with Scott Jackson
What Shakespeare Can Teach Us About Communicating with Jennifer King on the Digital Signage Done Right podcast
Yale Library webpage on the Voynich Manuscript, with images
The riddle of the Voynich Manuscript on the BBC
Unsolved Mystery: The Voynich Manuscript
An entire website about the Voynich Manuscript
The Voynich Manuscript revealed: five things you probably didn't know about the Medieval masterpiece on The Art Newspaper
THE VOYNICH MANUSCRIPT - "The Most Mysterious Manuscript in the World" - NSA report (PDF)
Another NSA report, titled The Voynich Manuscript: An Elegant Enigma, written in 1978 (PDF)
A PDF of the actual Voynich Manuscript
Headcanon: The Voynich Manuscript actually doesn't contain any cohesive text and is just a prank done by someone in the past on r/medieval
A Scholar Has Cracked the Mystery of the Voynich Manuscript, the Encrypted Medieval Artwork That Defeated Codebreakers for Years on ArtNet
Article on the Voynich manuscript on Brazilian website Revista Pesquisa Fapesp
The Voynich Wiki
How an Emperor Trapped a Con Man - blog on Edward Kelley
Magic and Mystery: Decoding the Secrets of the Book of Soyga on Discovery
The Book of Soyga translated by Jane Kupin (PDF)
Decoding the Book of Soyga: A Living Project of Esoteric Discovery
The Book of Soyga | Literary History on House of Cadmus
Soyga: the book that kills on Blog of Wonders
Holy Conversations: The Impact of the Mysterious Book of Soyga on Ancient Origins
Book of Soyga on the Voynich Wiki

Follow us on social: Facebook X (Twitter)

Other Podcasts by Derek DeWitt
DIGITAL SIGNAGE DONE RIGHT - Winner of a Gold Quill Award, Gold MarCom Award, AVA Digital Award Gold, Silver Davey Award, and Communicator Award of Excellence, and on numerous top 10 podcast lists.
PRAGUE TIMES - A city is more than just a location - it's a kaleidoscope of history, places, people, and trends. This podcast looks at Prague, in the center of Europe, from a number of perspectives, including what it is now, what it has been, and where it's going. It's Prague THEN, Prague NOW, Prague LATER.

Startup Island TAIWAN Podcast
EP3-26 | 【AI News】Demis Hassabis on "AI Renaissance"

Startup Island TAIWAN Podcast

Play Episode Listen Later Feb 23, 2026 22:50


This episode explores the vision of Demis Hassabis, CEO of Google DeepMind and recipient of the 2024 Nobel Prize in Chemistry. Hassabis argues that 2026 marks a pivotal turning point in human history, as we enter what he describes as an "AI Renaissance", an era whose impact could be ten times greater than the Industrial Revolution, unfolding at ten times the speed. He predicts that Artificial General Intelligence (AGI) could be achieved before 2030, while cautioning that today's AI systems remain in a state of "jagged intelligence," still lacking robust reasoning and long-term planning capabilities.

As the industry enters a phase of consolidation, Hassabis is focused on transforming AI into a scientific engine. Through breakthroughs such as AlphaFold and initiatives like Isomorphic Labs, he aims to reshape drug discovery, while collaborations with the U.S. Department of Energy, such as the "Genesis Project," seek to accelerate progress in energy innovation.

At the core of his vision is the concept of "Radical Abundance." As AI drives the marginal cost of healthcare and energy toward near zero, society may begin to transition into a post-scarcity era. To navigate this shift, Hassabis proposes new social mechanisms, including a "Global Abundance Dividend," and emphasizes that AI governance must extend beyond technologists, requiring international cooperation to ensure these technologies benefit all of humanity.

Powered by Firstory Hosting

This Week in Google (MP3)
IM 858: The Itinerant Salt Miner from Buffalo - Silicon Valley's Military Dilemma

This Week in Google (MP3)

Play Episode Listen Later Feb 19, 2026 171:57 Transcription Available


OpenClaw's creator makes headlines by joining OpenAI after GitHub fame and a whirlwind of VC and big tech offers, redefining what's possible for independent developers in the AI arms race. Is this the year agentic AI goes mainstream, and are the big players ready for that disruption?

• OpenClaw, OpenAI and the future | Peter Steinberger
• OpenAI disbands mission alignment team
• Opinion | I Left My Job at OpenAI. Putting Ads on ChatGPT Was the Last Straw. - The New York Times
• Introducing GPT‑5.3‑Codex‑Spark
• Anthropic releases Sonnet 4.6
• Exclusive: Pentagon threatens to cut off Anthropic in AI safeguards dispute
• Google's Pixel 10a Launches on March 5 for $499
• Google's AI drug discovery spinoff Isomorphic Labs claims major leap beyond AlphaFold 3
• Gemini 3 Deep Think: AI model update designed for science
• Radio host David Greene says Google's NotebookLM tool stole his voice
• A new way to express yourself: Gemini can now create music
• Why an A.I. Video of Tom Cruise Battling Brad Pitt Spooked Hollywood
• GPT-5 outperforms federal judges 100% to 52% in legal reasoning experiment
• An AI project is creating videos to go with Supreme Court justices' real words
• I used Claude to negotiate $163,000 off a hospital bill. In a complex healthcare system, AI is giving patients power.
• Sony Tech Can Identify Original Music in AI-Generated Songs
• AI Pioneer Fei-Fei Li's Startup World Labs Raises $1 Billion
• Yann v. Yoshua on directed systems
• Dr. Oz pushes AI avatars as a fix for rural health care. Not so fast, critics say
• An AI Agent Published a Hit Piece on Me
• An Ars Technica Reporter Blamed A.I. Tools for Fabricating Quotes in a Bizarre A.I. Story
• Plain Dealer using AI to write reporters' stories
• Mediahuis trials use of AI agents to carry out 'first-line' news reporting
• DJI's first robovac is an autonomous cleaning drone you can't trust
• Leaked Email Suggests Ring Plans to Expand 'Search Party' Surveillance Beyond Dogs
• ai;dr
• I hate my AI pet with every fiber of my being
• Thanks a lot, AI: Hard drives are sold out for the year, says WD
• 'Students Are Being Treated Like Guinea Pigs': Inside an AI-Powered Private School
• peon-ping — Stop babysitting your terminal
• Hugo Barra makes a to-do agent
• Raspberry Pi soars 40% as CEO buys stock, AI chatter builds

Hosts: Leo Laporte, Jeff Jarvis, and Emily Forlini

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Sponsors: monarch.com with code IM bitwarden.com/twit preview.modulate.ai spaceship.com/twit


Raj Shamani - Figuring Out
From Rags To Riches: AI Company, Coca-Cola vs Pepsi & Life In Slums | Shekhar | FO471 Raj Shamani


Feb 14, 2026 · 104:07


Check out Orchestro.AI: https://orchestro.ai/
Guest Suggestion Form: https://forms.gle/bnaeY3FpoFU9ZjA47
Disclaimer: This video is intended solely for educational purposes, and the opinions shared by the guest are his personal views. We do not intend to defame or harm any person/brand/product/country/profession mentioned in the video. Our goal is to provide information to help the audience make informed choices. The media used in this video are solely for informational purposes and belong to their respective owners.
Order 'Build, Don't Talk' (in English) here: https://amzn.eu/d/eCfijRu
Order 'Build Don't Talk' (in Hindi) here: https://amzn.eu/d/4wZISO0
Follow Our WhatsApp Channel: https://www.whatsapp.com/channel/0029VaokF5x0bIdi3Qn9ef2J
Subscribe To Our Other YouTube Channels:
https://www.youtube.com/@rajshamaniclips
https://www.youtube.com/@RajShamani.Shorts
(00:00) Intro
(03:51) Childhood & Poverty
(22:48) Landing a Job at Coca-Cola & His Career
(47:15) Helping Disney with MagicBand
(1:00:25) Nvidia May Face a Strong Downturn
(1:06:25) Will AI Reach a Certain Level of Creativity to Make Things Engaging?
(1:10:08) What Is Angelic Intelligence?
(1:18:12) Do You Think Digital Colonialism Will Take Place?
(1:26:31) Where Can Youngsters Make Money Today?
(1:31:32) Will Service Businesses Lose Their Value?
(1:39:08) Problem He's Facing That He Would Pay Someone to Solve
(1:42:41) BTS
(1:43:16) Outro
In today's episode, we have Shekhar Natarajan, Founder & CEO of Orchestro AI, sharing lessons from poverty to boardrooms. He talks about what poverty really teaches, how he solved a major challenge at Coca-Cola, why he moved to PepsiCo, and Pepsi's turnaround playbook.
We discuss Disney and its most profitable engine, whether it can survive the next decade, and if NVIDIA is heading toward a correction. He explains the breakthrough behind AlphaFold, who may rule the next decade, and why Perplexity AI could struggle. We also explore angelic intelligence, replacing our minds with machines, the biggest opportunity right now, investing in health prediction, and why intent shapes outcomes.
Subscribe for more such conversations.
Follow Shekhar Natarajan here: https://linktr.ee/shekharnatarajanofficial
About Raj Shamani
Raj Shamani is an entrepreneur at heart with expertise in business content creation and public speaking. He has delivered 200+ speeches in 26+ countries. Besides that, Raj is also an angel investor interested in crazy minds who are creating a sensation in the fintech, FMCG, and passion economy space.
To Know More, Follow Raj Shamani On:
Instagram @RajShamani https://www.instagram.com/rajshamani/
Twitter @RajShamani https://twitter.com/rajshamani
Facebook @ShamaniRaj https://www.facebook.com/shamaniraj
LinkedIn - Raj Shamani https://www.linkedin.com/in/rajshamani/
About Figuring Out
Figuring Out Podcast is a candid-conversations university where Raj Shamani brings raw conversations with the top 1% in India.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely “solved” through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.

Full Video Pod
On YouTube!

Timestamps
* 00:00 Introduction to Benchmarking and the “Solved” Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The “Bitter Lesson” and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the “Virtual Cell”
* 01:17:35 Interacting with Skeptical Medicinal Chemists

Key Summary

Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right “valley” in the energy landscape, they likely possess a “light understanding” of physics to refine the local minimum.

The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the “averaging” effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the “bitter lesson” of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.

Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level “spec” (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction—quantifying exactly how tightly a designed binder will stick to its target.

Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just “regurgitating” known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly
launched Boltz Lab platform provides “agents” for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.

Transcript
RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models, like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.Gabriel [00:06:26]: Yeah, it's interesting you say that, like, in some sense, CASP, you know, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that a lot of progress was made on was the ability to predict the structure of single chain proteins. So proteins can, like, be composed of many chains. And single chain proteins are, you know, just a single sequence of amino acids. And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints that come from evolutionary landscapes.
So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together, and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimension. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved. From the perspective of structure prediction, when it isn't, it's much more challenging. And I think it's also worth also differentiating the, sometimes we confound a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured, like, state. And that I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain in whatever form it was originally to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like, the different, you know, proteins are not static. They move, they take different shapes based on their energy states. 
And I think we are, also not that good at understanding the different states that the protein can be in and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints that we were able to, you know, to make such dramatic progress.Brandon [00:09:45]: So I want to ask, why does the intermediate states matter? But first, I kind of want to understand, why do we care? What proteins are shaped like?Gabriel [00:09:54]: Yeah, I mean, the proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells, you know, work is typically through proteins, sometimes other molecules, sort of intermediate interactions. And through that interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology, how our body works, how disease work. So we often try to boil it down to, okay, what is going right in case of, you know, our normal biological function and what is going wrong in case of the disease state. And we boil it down to kind of, you know, proteins and kind of other molecules and their interaction. And so when we try predicting the structure of proteins, it's critical to, you know, have an understanding of kind of those interactions. It's a bit like seeing the difference between... Having kind of a list of parts that you would put it in a car and seeing kind of the car in its final form, you know, seeing the car really helps you understand what it does. On the other hand, kind of going to your question of, you know, why do we care about, you know, how the protein falls or, you know, how the car is made to some extent is that, you know, sometimes when something goes wrong, you know, there are, you know, cases of, you know, proteins misfolding. 
In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.RJ [00:11:30]: There's this nice line in the, I think it's in the AlphaFold2 manuscript, where they sort of discuss also, like, why we're even hopeful that we can target the problem in the first place. And then there's this notion that, well, for proteins that fold, the folding process is almost instantaneous, which is a strong, like, you know, signal that, yeah, we might be able to predict this very, like, constrained thing that, that the protein does so quickly. And of course that's not the case for, you know, for, for all proteins. And there's a lot of like really interesting mechanisms in the cells, but yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem is that it used to be actually studied. And part of the reason why people thought it was impossible is that it used to be studied as kind of like a classical example of an NP problem. Like, there are so many different, you know, types of shapes that, you know, this amino acid chain could take, and this grows combinatorially with the size of the sequence. And so there used to be kind of a lot of actually kind of more theoretical computer science thinking about and studying protein folding as an NP problem. And so it was very surprising also from that perspective, kind of seeing machine learning solve it. So clearly there is some signal in those sequences, through evolution, but also through kind of other things that us as humans are probably not really able to understand, but that these models have learned.Brandon [00:13:07]: And so Andrew White, we were talking to him a few weeks ago and he said that he was following the development of this and that there were actually ASICs that were developed just to solve this problem. So, again, there were many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations and that you see this again and again in different species. So explain why does that give us a good hint that they're close by to each other? Yeah.RJ [00:13:41]: Um, like, think of it this way: if I have, you know, some amino acid that mutates, it's going to impact everything around it. Right. In three dimensions. And so it's almost like the protein, through several, probably random mutations and evolution, you know, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. So this whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of like different positions compensating for, for each other. I see.Brandon [00:14:17]: Those hints in aggregate give us a lot. Yeah. So you can start to look at what kinds of information about what is close to each other, and then you can start to look at what kinds of folds are possible given the structure and then what is the end state.RJ [00:14:30]: And therefore you can make a lot of inferences about what the actual total shape is. Yeah, that's right.
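The co-evolution signal RJ describes can be made concrete with a toy example. The alignment below is invented, and mutual information is used as the simplest possible coupling score; real contact-prediction pipelines use refinements such as APC correction or Potts models on much larger alignments.

```python
from collections import Counter
import math

# Toy multiple sequence alignment: rows are homologous sequences, columns
# are alignment positions. Columns 0 and 3 are constructed to co-vary
# (when column 0 is 'A', column 3 is 'V'; when 'S', then 'L'), mimicking
# two residues that compensate for each other to conserve the 3D structure.
msa = [
    "AQKV",
    "ADKV",
    "SQKL",
    "SDKL",
    "AQRV",
    "SDRL",
]

def mutual_information(col_i, col_j, msa):
    """Mutual information (in nats) between two alignment columns."""
    n = len(msa)
    pi = Counter(seq[col_i] for seq in msa)
    pj = Counter(seq[col_j] for seq in msa)
    pij = Counter((seq[col_i], seq[col_j]) for seq in msa)
    mi = 0.0
    for (a, b), count in pij.items():
        p_ab = count / n
        mi += p_ab * math.log(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# Score every column pair; a high score suggests the positions evolve
# together, which is the hint that they are close in three dimensions.
pairs = sorted(
    ((i, j, mutual_information(i, j, msa)) for i in range(4) for j in range(i + 1, 4)),
    key=lambda t: -t[2],
)
top = pairs[0]
print(top[:2])  # (0, 3) — the co-varying pair scores highest
```

On this toy alignment the engineered pair (0, 3) reaches the maximum possible score (ln 2 for a two-state column), while independent column pairs score at or near zero.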
It's almost like, you know, you have this big, like, three-dimensional valley, you know, where you're sort of trying to find these low-energy states, and there's so much to search through that it's almost overwhelming. But these hints, they sort of maybe put you in an area of the space that's already, like, kind of close to the solution, maybe not quite there yet. And, and there's always this question of, like, how much physics are these models learning, you know, versus, like, just pure statistics. And, like, I think one of the things, at least I believe, is that once you're in that sort of approximate area of the solution space, then the models have, like, some understanding, you know, of how to get you to, like, you know, the lower energy, uh, low-energy state. And so maybe you have some, some light understanding of physics, but maybe not quite enough, you know, to know how to, like, navigate the whole space. Right. Okay.Brandon [00:15:25]: So we need to give it these hints to kind of get into the right valley and then it finds the, the minimum or something. Yeah.Gabriel [00:15:31]: One interesting explanation about how AlphaFold works that I think is quite insightful, of course it doesn't cover kind of the entirety of, of what AlphaFold does, is one I'm going to borrow from Sergey Ovchinnikov at MIT. So the way he sees it, the interesting thing about AlphaFold is it's got this very peculiar architecture that we have seen, you know, used, and this architecture operates on this, you know, pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other. MSA is multiple sequence alignment. Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, then it's almost as if the model is sort of running some kind of Dijkstra-like algorithm where it's sort of decoding, okay, these have to be close. Okay. Then if these are close and this is connected to this, then this has to be somewhat close. And so you decode this, and that becomes basically a pairwise kind of distance matrix. And then from this rough pairwise distance matrix, you decode kind of theBrandon [00:16:42]: actual potential structure. Interesting. So there's kind of two different things going on in the kind of coarse-grain and then the fine-grain optimizations. Interesting. Yeah. Very cool.Gabriel [00:16:53]: Yeah. You mentioned AlphaFold3. So maybe now is a good time to move on to that. So yeah, AlphaFold2 came out and it was, like, I think fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold3 came out and maybe for some more history, like, what were the advancements in AlphaFold3? And then I think maybe we'll, after that, we'll talk a bit about how it connects to Boltz. But anyway. Yeah. So after AlphaFold2 came out, you know, Jeremy and I got into the field and with many others, you know, the clear problem that, you know, was, you know, obvious after that was, okay, now we can do individual chains. Can we do interactions, interactions between different proteins, proteins with small molecules, proteins with other molecules? And so. So why are interactions important? Interactions are important because to some extent that's kind of the way that, you know, these machines, you know, these proteins have a function, you know, the function comes by the way that they interact with other proteins and other molecules. Actually, in the first place, you know, the individual machines are often, as Jeremy was mentioning, not made of a single chain, but they're made of multiple chains. And then these multiple chains interact with other molecules to give the function to those.
And on the other hand, you know, when we try to intervene on these interactions, think about, like, a disease, think about, like, a biosensor or many other ways we are trying to design the molecules or proteins that interact in a particular way with what we would call a target protein or target. You know, this problem, after AlphaFold2, became clear as kind of one of the biggest problems in the field to solve. Many groups, including kind of ours and others, you know, started making some kind of contributions to this problem of trying to model these interactions. And AlphaFold3 was, you know, was a significant advancement on the problem of modeling interactions. And one of the interesting things that they were able to do, while, you know, some of the rest of the field really tried to model different interactions separately, you know, how a protein interacts with small molecules, how a protein interacts with other proteins, how RNA or DNA have their structure, is they put everything together and, you know, trained very large models with a lot of advances, including changing some of the key architectural choices, and managed to get a single model that was able to set this new state-of-the-art performance across all of these different kinds of modalities, whether that was protein–small molecule, which is critical to developing kind of new drugs, protein–protein, or understanding, you know, interactions of, you know, proteins with RNA and DNA and so on.Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data changes that made that possible?Gabriel [00:19:48]: Yeah, so one critical one that was not necessarily just unique to AlphaFold3, but there were actually a few other teams, including ours, in the field that proposed this, was moving from, you know, modeling structure prediction as a regression problem.
So where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem where you have a posterior distribution of possible structures and you're trying to sample this distribution. And this achieves two things. One is it starts to allow us to try to model more dynamic systems. As we said, you know, some of these molecules can actually take multiple structures. And so, you know, you can now model that, you know, through kind of modeling the entire distribution. But on the other hand, from more kind of core modeling questions, when you move from a regression problem to a generative modeling problem, you are really changing the way that you think about uncertainty in the model. So if you think about, you know, I'm undecided between different answers, what's going to happen in a regression model is that, you know, I'm going to try to make an average of those different kind of answers that I had in mind. When you have a generative model, what you're going to do is, you know, sample all these different answers and then maybe use separate models to analyze those different answers and pick out the best. So that was kind of one of the critical improvements. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes kind of those pairwise representations and turns them into an actual structure. And that now looks a lot more like a more traditional transformer than, you know, the very specialized equivariant architecture that it was in AlphaFold2.Brandon [00:21:41]: So this is a bitter lesson, a little bit.Gabriel [00:21:45]: There is some aspect of a bitter lesson, but the interesting thing is that it's very far from, you know, being like a simple transformer. This field is one of the, I argue, very few fields in applied machine learning where we still have kind of architectures that are very specialized.
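Gabriel's regression-versus-generative point can be illustrated in one dimension with a toy two-state coordinate (all numbers invented for illustration): a squared-error-optimal single answer collapses to the average of the two conformations, a state the molecule never actually adopts, while sampling from the distribution returns genuine modes.

```python
import random

random.seed(0)  # deterministic toy data

# Ground truth is genuinely two-state: the coordinate is either -1.0 or +1.0,
# standing in for a residue that flips between two conformations.
observations = [random.choice([-1.0, 1.0]) for _ in range(10_000)]

# Regression view: the least-squares-optimal single prediction is the mean,
# a "structure" near 0.0 sitting between the real states.
regression_answer = sum(observations) / len(observations)

# Generative view: draw candidates from the (empirical) distribution instead,
# then let a separate confidence/scoring model rank the sampled candidates.
generative_draws = [random.choice(observations) for _ in range(5)]

print(abs(regression_answer) < 0.05)                  # True: averaged, unphysical state
print(all(abs(x) == 1.0 for x in generative_draws))   # True: each draw is a real mode
```

The same averaging pathology is what a regression-style structure model exhibits when the ground truth is ambiguous, which is one motivation for the diffusion-based samplers in AlphaFold3 and Boltz-1.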
And, you know, there are many people that have tried to replace these architectures with, you know, simple transformers. And, you know, there is a lot of debate in the field, but I think kind of that most of the consensus is that, you know, the performance... that we get from the specialized architecture is vastly superior than what we get through a single transformer. Another interesting thing that I think on the staying on the modeling machine learning side, which I think it's somewhat counterintuitive seeing some of the other kind of fields and applications is that scaling hasn't really worked kind of the same in this field. Now, you know, models like AlphaFold2 and AlphaFold3 are, you know, still very large models.RJ [00:29:14]: in a place, I think, where we had, you know, some experience working in, you know, with the data and working with this type of models. And I think that put us already in like a good place to, you know, to produce it quickly. And, you know, and I would even say, like, I think we could have done it quicker. The problem was like, for a while, we didn't really have the compute. And so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had. We could only train it once. And so like, while the model was training, we were like, finding bugs left and right. A lot of them that I wrote. And like, I remember like, I was like, sort of like, you know, doing like, surgery in the middle, like stopping the run, making the fix, like relaunching. And yeah, we never actually went back to the start. We just like kept training it with like the bug fixes along the way, which was impossible to reproduce now. Yeah, yeah, no, that model is like, has gone through such a curriculum that, you know, learned some weird stuff. 
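The "surgery in the middle of the run" workflow RJ describes (stop training, patch the bug, resume from the last weights instead of restarting) depends on checkpointing. A minimal, framework-free sketch of that pattern; the state layout and the stand-in "training" step are invented for illustration:

```python
import os
import pickle
import tempfile

def save_checkpoint(path, step, weights):
    """Persist everything needed to resume: step counter plus parameters."""
    with open(path, "wb") as f:
        pickle.dump({"step": step, "weights": weights}, f)

def load_checkpoint(path):
    with open(path, "rb") as f:
        return pickle.load(f)

def train(weights, step, num_steps, lr):
    """Stand-in 'training' loop: decay a single parameter each step."""
    for _ in range(num_steps):
        weights["w"] -= lr * weights["w"]  # pretend gradient update
        step += 1
    return weights, step

ckpt_path = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")

# Phase 1: run with one configuration, then stop and write a checkpoint.
weights, step = train({"w": 1.0}, step=0, num_steps=100, lr=0.1)
save_checkpoint(ckpt_path, step, weights)

# "Surgery": patch the code (here, just the learning rate) and resume from
# the checkpoint rather than restarting, so earlier steps are never repeated.
state = load_checkpoint(ckpt_path)
weights, step = train(state["weights"], state["step"], num_steps=100, lr=0.01)

print(step)  # 200 -- training continued from step 100, not from scratch
```

Real training stacks checkpoint the optimizer and data-loader state as well, which is exactly why a run patched mid-flight, like the one described above, is so hard to reproduce from scratch.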
But yeah, somehow by miracle, it worked out.Gabriel [00:30:13]: The other funny thing is that the way that we were training, most of that model was through a cluster from the Department of Energy. But that's sort of like a shared cluster that many groups use. And so we were basically training the model for two days, and then it would go back to the queue and stay a week in the queue. Oh, yeah. And so it was pretty painful. And so we actually kind of towards the end, with Evan, the CEO of Genesis, basically, you know, I was telling him a bit about the project and, you know, kind of telling him about this frustration with the compute. And so luckily, you know, he offered to kind of help. And so we, we got the help from Genesis to, you know, finish up the model. Otherwise, it probably would have taken a couple of extra weeks.Brandon [00:30:57]: Yeah, yeah.Brandon [00:31:02]: And then, and then there's some progression from there.Gabriel [00:31:06]: Yeah, so I would say kind of that Boltz-1, but also kind of these other sets of models that came around the same time, were a big leap from, you know, kind of the previous kind of open-source models, and, you know, really kind of approaching the level of AlphaFold3. But I would still say that, you know, even to this day, there are, you know, some specific instances where AlphaFold3 works better. I think one common example is antibody–antigen prediction, where, you know, AlphaFold3 still seems to have an edge in many situations. Obviously, these are somewhat different models. They are, you know, you run them, you obtain different results. So it's, it's not always the case that one model is better than the other, but kind of in aggregate, we still saw that, especially at the time.Brandon [00:32:00]: So AlphaFold3 is, you know, still having a bit of an edge.
We should talk about this more when we talk about BoltzGen, but like, how do you know one model is better than the other? Like, you, so you, I make a prediction, you make a prediction, like, how do you know?Gabriel [00:32:11]: Yeah, so luckily, you know, the, the great thing about kind of structure prediction, and, you know, once we go into the design space of designing new small molecules, new proteins, this becomes a lot more complex, but a great thing about structure prediction is that, a bit like, you know, CASP was doing, basically the way that you can evaluate the models is that you train a model on the structures that were, you know, released across the field up until a certain time. And, you know, one of the things that we didn't talk about that was really critical in all this development is the PDB, which is the Protein Data Bank. It's this common resource, basically a common database where every biologist publishes their structures. And so we can, you know, train on, you know, all the structures that were put in the PDB until a certain date. And then... And then we basically look for recent structures, okay, which structures look pretty different from anything that was published before, because we really want to try to understand generalization.Brandon [00:33:13]: And then on these new structures, we evaluate all these different models. And so you just know when AlphaFold3 was trained, you know, when you're, you intentionally trained to the same date or something like that. Exactly. Right. Yeah.Gabriel [00:33:24]: And so this is kind of the way that you can somewhat easily kind of compare these models; obviously, that assumes that, you know, the training. You've always been very passionate about validation. I remember, like, DiffDock, and then there was, like, DiffDock-L and DockGen. You've thought very carefully about this in the past.
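The evaluation recipe Gabriel outlines (train only on PDB entries released before a cutoff date, then test on later structures that look unlike anything in the training set) can be sketched as a simple temporal-plus-similarity split. The records, cutoff, and sequence-identity threshold below are invented stand-ins, not real PDB data:

```python
from datetime import date

# Hypothetical PDB-like records: (id, release date, max sequence identity
# in percent to anything released before the cutoff).
entries = [
    ("1ABC", date(2019, 5, 1), 100.0),
    ("2DEF", date(2022, 3, 9), 95.0),   # recent, but a near-duplicate of training data
    ("3GHI", date(2023, 7, 2), 22.0),   # recent and genuinely novel
    ("4JKL", date(2024, 1, 15), 18.0),  # recent and genuinely novel
]

CUTOFF = date(2021, 9, 30)   # all compared models see only data before this date
MAX_IDENTITY = 30.0          # keep only structures unlike anything in training

train_set = [e for e in entries if e[1] <= CUTOFF]
test_set = [e for e in entries if e[1] > CUTOFF and e[2] <= MAX_IDENTITY]

print([e[0] for e in train_set])  # ['1ABC']
print([e[0] for e in test_set])   # ['3GHI', '4JKL'] -- the generalization benchmark
```

With the split fixed, each model predicts the held-out structures and is scored against the experimental coordinates with a structural metric (RMSD, lDDT, and similar), which is what makes cross-model comparisons at a shared training cutoff meaningful.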
Like, actually, I think DockGen is like a really funny story that I think, I don't know if you want to talk about that. It's an interesting, like... Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. And, you know, sometimes we get kind of great feedback of people really liking it. But honestly, most of the time, and maybe that's also the most useful feedback, it's, you know, people sharing about where it doesn't work. And so, you know, at the end of the day, it's critical. And this is also something, you know, across other fields of machine learning: it's always critical, to make progress in machine learning, to set clear benchmarks. And as, you know, you start making progress on certain benchmarks, then, you know, you need to improve the benchmarks and make them harder and harder. And this is kind of the progression of, you know, how the field operates. And so, you know, the example of DockGen was, you know, we published this initial model called DiffDock in my first year of PhD, which was sort of like, you know, one of the early models to try to predict kind of interactions between proteins and small molecules, that came out about a year after AlphaFold2 was published. And now, on the one hand, you know, on these benchmarks that we were using at the time, DiffDock was doing really well, kind of, you know, outperforming kind of some of the traditional physics-based methods. But on the other hand, you know, when we started, you know, kind of giving these tools to kind of many biologists, and one example was the group of Nick Polizzi at Harvard that we collaborated with, we started noticing that there was this clear pattern where, for proteins that were very different from the ones the model was trained on, the model was struggling. And so, you know, that seemed clear that, you know, this is probably kind of where we should, you know, put our focus on.
And so we first developed, with Nick and his group, a new benchmark, and then went after it and said, okay, what can we change about the current architecture to improve this pattern of generalization? And it's the same thing we're still doing today: where does the model not work? And then, once we have that benchmark, let's throw everything we have, any ideas we have about the problem, at it.

RJ [00:36:15]: And there's a lot of healthy skepticism in the field, which I think is great. And it's very clear that there's a ton of things the models don't really work well on, but one thing that's probably undeniable is just the pace of progress, you know, and how much better we're getting every year. And so if you assume any constant rate of progress moving forward, I think things are going to look pretty cool at some point in the future.

Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right?

RJ [00:36:45]: Yeah, it's one of those things. Being in the field, you don't see it coming, you know? And hopefully we'll continue to have as much progress as we've had the past few years.

Brandon [00:36:55]: So this is maybe an aside, but I'm really curious. You get this great feedback from the community, right, by being open source. My question is partly, okay, if you open source, everyone can copy what you did, but it's also maybe about balancing priorities, right? Where the community is saying, I want this, there are all these problems with the model, but my customers don't care, right? So how do you think about that?
Yeah.

Gabriel [00:37:26]: So I would say a couple of things. One is, part of our goal with Boltz, and this is also established as the mission of the public benefit company that we started, is to democratize access to these tools. But one of the reasons we realized Boltz needed to be a company, that it couldn't just be an academic project, is that putting a model on GitHub is definitely not enough to get chemists and biologists across academia, biotech and pharma to use your model in their therapeutic programs. And so a lot of what we think about at Boltz, beyond just the models, is all the layers that come on top of the models to get from those models to something that can really enable scientists in the industry. That goes into building the right kind of workflows that take in, for example, the data and try to directly answer those problems that the chemists and the biologists are asking, and then also building the infrastructure. And this is to say that even with models fully open, we see a ton of potential for products in the space. And the critical part about a product is that even with an open source model, running the model is not free. As we were saying, these are pretty expensive models, especially since, and maybe we'll get into this, these days we're seeing pretty dramatic inference-time scaling of these models, where the more you run them, the better the results are. But then you start getting to a point where compute and compute costs become a critical factor.
And so putting a lot of work into building the right infrastructure, building the optimizations and so on, really allows us to provide a much better service than just the open source models. That said, even though with a product we can provide a much better service, I do still think, and we will continue to put a lot of our models open source, because the critical role of open source models is helping the community progress on the research, from which we all benefit. And so we'll continue to, on the one hand, put some of our base models open source so that the field can build on top of them, and, as we discussed earlier, we learn a ton from the way the field uses and builds on our models, but then try to build a product that gives the best experience possible to scientists. So that a chemist or a biologist doesn't need to spin up a GPU and set up our open source model in a particular way. A bit like, even though I am a computer scientist, a machine learning scientist, I don't necessarily take an open source LLM and try to spin it up myself; I just open the ChatGPT app or Claude Code and use it as an amazing product. We want to give the same kind of experience.

Brandon [00:40:40]: I heard a good analogy yesterday that a surgeon doesn't want the hospital to design a scalpel, right?

Brandon [00:40:48]: So just buy the scalpel.

RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold3 coming out and the end of the PhD, that would reach out just for us to run AlphaFold3 for them, you know, or things like that.
Just because, you know, with Boltz in our case, it's just not that easy to do that if you're not a computational person. And I think part of the goal here is also that we continue to obviously build the interface with computational folks, but that the models are also accessible to a larger, broader audience. And that comes from good interfaces and stuff like that.

Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release you didn't just release a model, you created a community. Yeah. That community grew very quickly. Did that surprise you? And what has the evolution of that community been, and how has it fed into Boltz?

RJ [00:41:43]: If you look at its growth, it's very much that when we release a new model, there's a big jump. But yeah, it's been great. We have a Slack community that has thousands of people on it. And it's actually self-sustaining now, which is the really nice part, because it's almost overwhelming to answer everyone's questions and help. It's really difficult, you know, for the few people that we were. But it ended up that people would answer each other's questions and sort of help one another. And so the Slack has been kind of self-sustaining, and that's been really cool to see.

RJ [00:42:21]: And that's for the Slack part, but then also obviously on GitHub we've had a nice community. I think we also aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us. But.
Yeah, the community has been really great, and there are a lot of papers that have come out with new evolutions on top of Boltz. And it surprised us to some degree, because there are a lot of models out there, and people converging on ours was really cool. And I think it speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, something we thought a lot about when we released the code base. It's far from perfect, but, you know.

Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, just the focus on ease of use, making it accessible? I think so.

RJ [00:43:14]: Yeah. And we've heard it from a few people over the years now. And some people still think it should be a lot nicer, and they're right. But yeah, I think it was, at the time, maybe a little bit easier than other things.

Gabriel [00:43:29]: The other part that I think led to the community, and to some extent the trust in what we put out, is the fact that it hasn't really been just one model. After Boltz-1, there were maybe another couple of models released or open sourced soon after. We continued that open source journey with Boltz-2, where we are not only improving structure prediction, but also starting to do affinity prediction, understanding the strength of the interactions between these different molecules, which is this critical property that you often want to optimize in discovery programs.
And then, more recently, also a protein design model. And so we've been building this suite of models that come together and interact with one another, where there is almost an expectation, which we take very much to heart, of always having, across the entire suite of different tasks, the best, or close to the best, model out there, so that our open source tools can be the go-to models for everybody in the industry. I really want to talk about Boltz-2, but before that, one last question in this direction: was there anything about the community which surprised you? Was someone doing something where you thought, why would you do that? That's crazy. Or, that's actually genius, and I never would have thought of it.

RJ [00:45:01]: I mean, we've had many contributions. Some of the interesting ones... we had this one individual who wrote a complex GPU kernel for part of the architecture, and the funny thing is that piece of the architecture had been there since AlphaFold2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, people figuring out ways to hack the model to do things like cyclic peptides. I don't know if there are any other interesting ones that come to mind.

Gabriel [00:45:41]: One cool one, and this was something that was initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, for example the antibody-antigen interactions we discussed, where the models don't necessarily get the right answer.
What he noticed is that the models were somewhat stuck in how they predicted the antibodies binding. And so he ran experiments where, since in this model you can condition, basically give hints, he gave hints to the model: okay, you should bind to this residue; you should bind to the first residue, or the 11th residue, or the 21st residue, basically every 10 residues, scanning the entire antigen.

Brandon [00:46:33]: Residues are the...

Gabriel [00:46:34]: The amino acids, yeah. So the first amino acid, the 11th amino acid, and so on. So it's doing a scan, conditioning the model to predict all of them, and then looking at the confidence of the model in each of those cases and taking the top one. It's a somewhat crude way of doing inference-time search. But surprisingly, for antibody-antigen prediction, it actually helped quite a bit. And so there are some interesting ideas where, obviously, as the developer of the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking about, okay, how can I do this not with brute force, but in a smarter way?

RJ [00:47:22]: And so we've also done a lot of work in that direction. And that speaks to the power of scoring. We're seeing that a lot; I'm sure we'll talk about it more when we talk about BoltzGen. But our ability to take a structure and determine that that structure is good, you know, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models.
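The epitope-scanning trick described above, condition the model on a hinted contact residue every 10 positions and keep the most confident prediction, can be sketched like this. The `predict_with_hint` callable is a stand-in for the real model call (Boltz with contact conditioning); everything here is illustrative:

```python
# Sketch of the residue-scan inference-time search; predict_with_hint is
# a hypothetical stand-in returning (structure, confidence) for one hint.
def scan_epitopes(antigen_len, predict_with_hint, stride=10):
    """Try a hinted contact residue every `stride` positions along the
    antigen, then keep the single most confident prediction."""
    results = []
    for residue in range(0, antigen_len, stride):
        structure, confidence = predict_with_hint(hint_residue=residue)
        results.append((confidence, residue, structure))
    confidence, residue, structure = max(results)  # crude ranking step
    return residue, structure

# Toy stand-in: pretend the model is most confident hinting near residue 40.
def fake_predict(hint_residue):
    return f"pose@{hint_residue}", 1.0 - abs(hint_residue - 40) / 100

best_residue, best_pose = scan_epitopes(100, fake_predict)
print(best_residue)  # 40
```

The whole trick reduces to a ranking problem over conditioned predictions, which is why the confidence head matters so much here.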
Like, if you can sample a ton, and you assume that if you sample enough you're likely to have the good structure in there, then it really just becomes a ranking problem. And part of the inference-time scaling that Gabri was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. And so I think our ability to get better at ranking is also what's going to enable the next big breakthroughs. Interesting.

Brandon [00:48:17]: But I guess, my understanding is there's a diffusion model, you generate some stuff, and then, I guess it's just what you said, you rank it using a score, and then you finally... Can you talk about those different parts? Yeah.

Gabriel [00:48:34]: So, first of all, one of the critical beliefs that we had when we started working on Boltz-1 was that structure prediction models are somewhat our field's version of foundation models, learning about how proteins and other molecules interact, and then we can leverage that learning to do all sorts of other things. So with Boltz-2, we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight is that interaction? For BoltzGen, what we did was take that foundation model and fine-tune it to generate entire new proteins. And the way that works, basically, is that for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein
and also what the different amino acids of that protein are. And so basically the way that BoltzGen operates is that you feed in a target protein that you may want to bind to, or DNA, RNA, and then you feed in the high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things. And that's with natural language? That's basically prompting: we have this sort of spec that you specify, and you feed this spec to the model, and the model translates it into a set of tokens, a set of conditioning for the model, a set of blank tokens. And then it decodes, as part of the diffusion model, a new structure and a new sequence for your protein. And then, as Jeremy was saying, we take that and try to score it: how good of a binder is it to that original target?

Brandon [00:50:51]: You're basically using Boltz to predict the folding and the affinity to that molecule, and then that gives you a score? Exactly.

Gabriel [00:51:03]: So you use this model to predict the folding, and then you do two things. One is that you predict the structure with something like Boltz-2, and then you compare that structure with what the design model predicted. This is what the field calls consistency: you want to make sure that the structure you're predicting is actually what you're trying to design. And that gives you much better confidence that it's a good design. So that's the first filtering.
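The consistency filter just described, refold each designed sequence with a structure predictor and keep only designs whose refolded structure matches the design model's own structure, can be sketched as below. The `fold` callable stands in for a predictor like Boltz-2, and the RMSD here skips the usual superposition step for brevity:

```python
import math

# Minimal self-consistency filter sketch; `fold` is a hypothetical
# structure-predictor stand-in, and rmsd() assumes pre-aligned coords
# (real pipelines superimpose structures before measuring deviation).
def rmsd(a, b):
    assert len(a) == len(b)
    return math.sqrt(
        sum((x - y) ** 2 for p, q in zip(a, b) for x, y in zip(p, q)) / len(a)
    )

def self_consistent(designs, fold, max_rmsd=2.0):
    """designs: list of (sequence, designed_coords); keep sequences whose
    refolded coordinates stay within `max_rmsd` angstroms of the design."""
    kept = []
    for seq, coords in designs:
        refolded = fold(seq)
        if rmsd(coords, refolded) <= max_rmsd:
            kept.append(seq)
    return kept

# Toy check with one-atom "structures": matching coords pass, distant fail.
designs = [("PEPTIDE1", [(0.0, 0.0, 0.0)]), ("PEPTIDE2", [(0.0, 0.0, 0.0)])]
fold = lambda seq: [(0.0, 0.0, 0.0)] if seq == "PEPTIDE1" else [(9.0, 0.0, 0.0)]
print(self_consistent(designs, fold))  # ['PEPTIDE1']
```

The intuition: if an independent refold lands somewhere else, the design model was probably hallucinating a structure its own sequence does not support.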
And the second filtering that we did as part of the BoltzGen pipeline that was released is that we look at the confidence that the model has in the structure. Now, unfortunately, going back to your question about predicting affinity, confidence is not a very good predictor of affinity. And so one of the things where we've actually made a ton of progress since we released Boltz-2, and we have some new results that we're going to announce soon, is the ability to get much better hit rates when, instead of relying on the confidence of the model, we directly try to predict the affinity of that interaction.

Brandon [00:52:03]: Okay. Just backing up a minute. So your diffusion model actually predicts not only the protein sequence, but also the folding of it. Exactly.

Gabriel [00:52:32]: And actually, one of the big things that we did differently compared to other models in the space, and there were some papers that had already done this before, but we really scaled it up, was basically merging structure prediction and sequence prediction into almost the same task. And so the way BoltzGen works is that the only thing you're doing is predicting the structure. The only supervision we give is supervision on the structure, but because the structure is atomic, and the different amino acids have different atomic compositions, from the way you place the atoms we recover not only the structure you wanted, but also the identity of the amino acid the model believed was there. So instead of having these two supervision signals, one discrete, one continuous, signals that somewhat don't interact well together.
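The premise here, that atomic composition pins down residue identity, can be illustrated with a toy matcher. This is not BoltzGen's actual encoding (which rearranges and superimposes a generic atom set); it just shows that element counts alone already distinguish residues:

```python
from collections import Counter

# Toy illustration, assumed composition tables for three residues
# (backbone N, CA, C, O plus side-chain heavy atoms, hydrogens omitted).
TEMPLATES = {
    "GLY": Counter({"C": 2, "N": 1, "O": 1}),  # backbone only
    "ALA": Counter({"C": 3, "N": 1, "O": 1}),  # + CB
    "SER": Counter({"C": 3, "N": 1, "O": 2}),  # + CB, OG
}

def identify_residue(atoms):
    """atoms: list of (element, xyz); return the residue whose heavy-atom
    composition matches exactly, or None if nothing matches."""
    counts = Counter(elem for elem, _xyz in atoms)
    for name, template in TEMPLATES.items():
        if counts == template:
            return name
    return None

atoms = [("N", (0, 0, 0)), ("C", (1, 0, 0)), ("C", (2, 0, 0)),
         ("O", (3, 0, 0)), ("C", (1, 1, 0)), ("O", (2, 1, 0))]
print(identify_residue(atoms))  # SER
```

This is why a purely structural (continuous) supervision signal can still carry the discrete sequence information: placing the atoms implies the residue.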
We instead built an encoding of sequences in structures that allows us to use exactly the same supervision signal that we were using for Boltz-2, which is largely similar to what AlphaFold3 proposed and is very scalable, and we can use that to design new proteins. Oh, interesting.

RJ [00:53:58]: Maybe a quick shout out to Hannes Stark on our team, who did all this work. Yeah.

Gabriel [00:54:04]: Yeah, that was a really cool idea. I mean, looking at the paper, there's this encoding where you just add a bunch of atoms, which can be anything, and then they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. And there's a unique way of doing this. It was such a cool, fun idea.

RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.

Gabriel [00:54:33]: Yeah, that had proposed this, and Hannes really took it to the large scale.

Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the validation of the model. In my opinion, all the people we talk to basically feel that this sort of wet-lab, or whatever the appropriate real-world validation is, is the whole problem, or not the whole problem, but a big giant part of the problem. So can you talk a little bit about the highlights from there? Because to me, the results are impressive, both from the perspective of the model and also just the effort that went into the validation by a large team.

Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola and Regina Barzilay's labs, and at Boltz, we are not a biolab, and we are not a therapeutics company.
And so to some extent, we were forced from the start to look outside of our group, our team, to do the experimental validation. One of the things that Hannes on the team really pioneered was the idea: okay, can we go not only to a specific group, find a specific system, maybe overfit a bit to that system, and try to validate, but test this model across a very wide variety of different settings? Protein design is such a wide task, with all sorts of different applications from therapeutics to biosensors and many others, so can we get a validation that goes across many different tasks? And so he basically put together, I think it was something like 25 different academic and industry labs, that committed to testing some of the designs from the model, some of this testing is still ongoing, and giving results back to us, in exchange for hopefully getting some great new sequences for their task. He was able to coordinate this very wide set of scientists, and already in the paper, I think, we shared results from eight to ten different labs: results of designing peptides targeting ordered proteins, peptides targeting disordered proteins, results of designing proteins that bind to small molecules, results of designing nanobodies, across a wide variety of different targets. And so that gave the paper a lot of validation of the model, validation that was wide.

Brandon [00:57:39]: And so would those be therapeutics for those animals, or are they relevant to humans as well?
They're relevant to humans as well.

Gabriel [00:57:45]: Obviously, you need to do some work in, quote unquote, humanizing them, making sure they have the right characteristics so they're not toxic to humans and so on.

RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern, I think, in trying to design things that are smaller: they're easier to manufacture. At the same time, that comes with potentially other challenges, maybe a little bit less selectivity than something that has more hands, you know. But yeah, there's this big desire to design miniproteins, nanobodies, small peptides, things that are just great drug modalities.

Brandon [00:58:27]: Okay. I think where we left off, we were talking about validation in the lab. And I was very excited about seeing all the diverse validations that you've done. Can you go into some more detail about specific ones? Yeah.

RJ [00:58:43]: The nanobody one, I think we did, what was it, 15 targets? Is that correct? 14. 14 targets. So typically the way this works is we make a lot of designs, on the order of tens of thousands, and then we rank them and pick the top ones. In this case, N was 15 for each target, and then we measure the success rates: both how many targets we were able to get a binder for, and also, more generally, out of all the binders we designed, how many actually proved to be good binders. Some of the other ones involved, yeah, we had a cool one where there was a small molecule and we designed a protein that binds to it. That has a lot of interesting applications, for example, like Gabri mentioned, biosensing and things like that, which is pretty cool.
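The two success metrics RJ distinguishes, how many targets got at least one binder versus the overall per-design hit rate, are easy to conflate, so here is a small sketch with made-up assay results (target name mapped to per-design pass/fail booleans):

```python
# Hypothetical assay results; True means the tested design bound its target.
def summarize(results):
    """Return (fraction of targets with >=1 binder, overall design hit rate)."""
    targets_hit = sum(1 for hits in results.values() if any(hits))
    total_designs = sum(len(hits) for hits in results.values())
    binders = sum(sum(hits) for hits in results.values())
    return targets_hit / len(results), binders / total_designs

results = {
    "targetA": [True, False, False],
    "targetB": [False, False, False],
    "targetC": [True, True, False],
}
target_rate, design_rate = summarize(results)
print(round(target_rate, 2), round(design_rate, 2))  # 0.67 0.33
```

A campaign can look strong on one metric and weak on the other: one promiscuous target with many binders inflates the design hit rate without helping target coverage.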
We had a disordered protein, I think you mentioned, also. And yeah, I think some of those were the highlights. Yeah.

Gabriel [00:59:44]: So I would say that the way we structured some of those validations was, on the one hand, validations across a whole set of different problems that the biologists we were working with came to us with. For example, in some of the experiments, we were trying to design peptides that would target the RACC, which is a target involved in metabolism. And we had a number of other applications where we were trying to design peptides or other modalities against other therapeutically relevant targets. We designed some proteins to bind small molecules. And then some of the other testing we did was really trying to get a broader sense: how does the model work, especially when tested on generalization? One of the things we found with the field was that a lot of the validation, especially outside of the validation on specific problems, was done on targets that have a lot of known interactions in the training data. And so it's always a bit hard to understand how much these models are really just regurgitating or imitating what they've seen in the training data, versus really being able to design new proteins. And so one of the experiments we did was to take nine targets from the PDB, filtering for targets with no known interaction in the PDB. So the model has never seen this particular protein, or a similar protein, bound to another protein. There is no way the model, from its training set, can say, okay, I'm just going to tweak something and imitate this particular interaction. And so we took those nine proteins.
We worked with Adaptive, a CRO, and basically tested 15 miniproteins and 15 nanobodies against each one of them. And the very cool thing we saw was that on two-thirds of those targets, we were able, from those 15 designs, to get nanomolar binders. Nanomolar is, roughly speaking, a measure of how strong the interaction is; roughly speaking, a nanomolar binder is approximately the binding strength you need for a therapeutic. Yeah. So maybe switching directions a bit: Boltz Lab was just announced this week, or was it last week? Yeah. This is, I guess, your first product, if you want to call it that. Can you talk about what Boltz Lab is and what you hope people take away from it? Yeah.

RJ [01:02:44]: You know, as we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own. And there are largely two categories there; actually, I'll split it in three. The first one: it's one thing to predict a single interaction, for example a single structure; it's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and there's certainly a need to accompany the user through them. One of those steps, for example, is the creation of the target itself: how do we make sure the model has a good enough understanding of the target so we can design something? And there are all sorts of tricks you can do to improve a particular structure prediction. So that's the first stage. And then there's the stage of designing and searching the space efficiently.
For something like BoltzGen, for example, you design many things and then you rank them. For small molecules, the process is a little bit more complicated: we also need to make sure the molecules are synthesizable. And the way we do that is with a generative model that learns to use appropriate building blocks, such that it designs within a space we know is synthesizable. So there's this whole pipeline, really, of different models involved in being able to design a molecule. And that's been the first thing. We call them agents: we have a protein design agent and a small molecule design agent, and that's really at the core of what powers the Boltz Lab platform.

Brandon [01:04:22]: So these agents, are they like a language model wrapper, or are they just your models and you're just calling them agents? Because they sort of perform a function on behalf of the user.

RJ [01:04:33]: They're more of a recipe, if you wish. And I think we use that term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group that's doing a design campaign. Let's say you're designing a hundred thousand possible candidates to find the good one: that is a very large amount of compute. For small molecules, it's on the order of a few seconds per design; for proteins, it can be a bit longer. So, ideally, you want to do that in parallel; otherwise, it's going to take you weeks.
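The campaign arithmetic behind "otherwise it's going to take you weeks" is worth making concrete. Total GPU-hours are fixed by the job, so parallelism buys wall-clock time at roughly the same price; the per-design time and the price per GPU-hour below are made-up illustrative numbers:

```python
# Back-of-envelope for a design campaign; all rates are assumptions.
def campaign(designs, seconds_per_design, gpus, price_per_gpu_hour=2.0):
    """Return (wall-clock hours, total cost in dollars) assuming perfect
    parallel scaling across `gpus` identical devices."""
    gpu_hours = designs * seconds_per_design / 3600
    wall_hours = gpu_hours / gpus
    return round(wall_hours, 2), round(gpu_hours * price_per_gpu_hour, 2)

# 100,000 candidates at ~5 s each, serial vs. a large fleet:
print(campaign(100_000, 5, gpus=1))     # (138.89, 277.78)
print(campaign(100_000, 5, gpus=1000))  # (0.14, 277.78)
```

Same cost either way, but nearly six days of waiting collapses to under ten minutes, which is the point RJ makes next about amortizing a shared GPU fleet across users.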
And so we've put a lot of effort into our ability to have a GPU fleet that allows any one user to do this kind of large parallel search.

Brandon [01:05:23]: So you're amortizing the cost over your users. Exactly. Exactly.

RJ [01:05:27]: And, to some degree, using 10,000 GPUs for a minute is the same cost as using one GPU for God knows how long, right? So you might as well parallelize if you can. So a lot of work has gone into that, making it very robust, so that we can have a lot of people on the platform doing that at the same time. And the third one is the interface, and the interface comes in two shapes. One is in the form of an API, and that's really suited for companies that want to integrate these pipelines, these agents.

RJ [01:06:01]: So we're already partnering with a few distributors that are going to integrate our API. And then the second part is the user interface, and we've put a lot of thought into that too. This is what I mentioned earlier, this idea of broadening the audience: that's what the user interface is about. And we've built a lot of interesting features into it, for example for collaboration. When you have potentially multiple medicinal chemists going through the results and trying to pick out, okay, which molecules are we going to go and test in the lab, it's powerful for them to be able to, for example, each provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, that we try to solve with that part of the platform.
So Boltz Lab is a combination of these three objectives in one cohesive platform.

Who is this accessible to? Everyone. You do need to request access today; we're still ramping up usage, but anyone can request access. If you are an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you are a startup or biotech, you can also reach out, and we'll typically hop on a call to understand what you're trying to do, and also provide a lot of free credit to get started. And with larger companies we can deploy the platform in a more secure environment; those are more customized deals that we make with partners. That's the ethos of Boltz, I think: this idea of servicing everyone, and not necessarily going after just the really large enterprises. That starts from the open source, but it's also a key design principle of the product itself.

Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, the cost of a token has gone down by, I think, a factor of a thousand or so over the last three years, right? Is it possible that you can exploit economies of scale in infrastructure, and make it cheaper to run these things yourself than for anyone to roll their own system?

RJ [01:08:08]: A hundred percent. I mean, we're already there. Running Boltz on our platform, especially on a large screen, is considerably cheaper than it would take anyone to stand up the open source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models.
Our small molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source, and that's also part of building a product, something that scales really well. We really wanted to get to a point where we could keep prices very low, so that it would be a no-brainer to use Boltz through our platform.

Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now the whole point of this is to design something which doesn't have co-evolution data, something which is really novel. So you're basically leaving the domain that you know you are good at. How do you validate that?

RJ [01:09:22]: There's obviously a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: with method A and method B, how much better are we? How much better is my hit rate? How much stronger are my binders? It's not just about hit rate; it's also about how good the binders are. There's really no way around that. We've really ramped up the amount of experimental validation that we do, so that we track progress as scientifically soundly as possible.

Gabriel [01:10:00]: One thing that is unique about us, and maybe companies like us, is that because we're not working on just a couple of therapeutic pipelines, where our validation would be focused on those.
When we do an experimental validation, we try to test it across tens of targets, so that on the one hand we can get a much more statistically significant result, and it really allows us to make progress on the methodological side without being steered by overfitting on any one particular system. And of course we choose, you know, w
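The generate-filter-rank loop RJ describes (a generative model constrained to synthesizable building blocks, then consensus ranking across scorers) can be sketched roughly as follows. All names here (`generate_candidates`, `is_synthesizable`, `consensus_rank`) and the toy scoring scheme are illustrative stand-ins, not Boltz's actual pipeline or API:

```python
import random

# Toy sketch of a design campaign: generate candidates from known building
# blocks, keep only "synthesizable" ones, then build a consensus ranking
# from several independent scorers. Everything here is an illustrative
# stand-in, not the Boltz pipeline.

random.seed(0)
BUILDING_BLOCKS = ["A", "B", "C", "D"]  # placeholders for purchasable fragments

def generate_candidates(n):
    """Generative step: assemble candidates from the allowed building blocks."""
    return ["".join(random.choices(BUILDING_BLOCKS, k=5)) for _ in range(n)]

def is_synthesizable(mol):
    """Toy filter: demand at least two distinct building blocks."""
    return len(set(mol)) >= 2

def score(mol, model):
    """Toy stand-in for a learned scoring model (e.g. predicted affinity)."""
    return sum((ord(c) * model) % 7 for c in mol)

def consensus_rank(candidates, models):
    """Borda-style consensus: sum each candidate's rank across all scorers."""
    points = {m: 0 for m in candidates}
    for model in models:
        ranked = sorted(candidates, key=lambda m: score(m, model), reverse=True)
        for rank, mol in enumerate(ranked):
            points[mol] += rank  # lower total = better across scorers
    return sorted(candidates, key=lambda m: points[m])

designs = sorted({m for m in generate_candidates(1000) if is_synthesizable(m)})
shortlist = consensus_rank(designs, models=[3, 5, 11])[:10]
print(len(designs), shortlist)
```

Scoring a hundred thousand candidates at a few seconds each is the part that gets farmed out across a GPU fleet; the ranking and consensus step is cheap by comparison.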

AI For Pharma Growth
E203: Building Programmable Biologics from Scratch: How DenovAI's AI is Revolutionizing Drug Discovery


Play Episode Listen Later Feb 4, 2026 34:35


Designing proteins that have never existed in nature is no longer sci-fi — it's becoming a real drug discovery strategy. In this episode, Kashif Sadiq, Founder & CEO of DenovAI Biotech, explains how AI is powering a shift from searching for biologic binders to intentionally designing new proteins from scratch.

Kashif shares his journey from studying physics at the University of Cambridge into computational biophysics, and how breakthroughs like AlphaFold from DeepMind helped unlock the next frontier: de novo protein design. Instead of hoping evolution has already produced a usable molecule, Kashif describes how modern AI can engineer bespoke proteins for specific functions, including challenging targets where traditional approaches come up short.

The conversation dives into the sheer scale of "protein space" and why evolution has only explored a tiny fraction of what's possible. Kashif outlines how this opens the door to targeting diseases and biological mechanisms that have historically been considered undruggable, especially where flat protein interfaces or complex signalling pathways have made small molecules ineffective.

Finally, Kashif explains why combining generative AI with physics-based methods is essential to reduce false positives, improve real-world binding performance, and enable "one-shot design" — where discovery and optimisation become a single integrated process.
He also shares what keeps him up at night: clinical trial attrition — and why designing better earlier may be the key to improving success later.

Topics Covered
De novo protein design vs traditional biologics discovery
Why evolution explored only a tiny fraction of protein space
"Programmable biologics" and intentional molecular design
Alpha Design and designing proteins from the inverse problem
Antibodies, nanobodies, and therapeutic protein engineering
Combining generative AI with physics-based validation
Reducing false positives in protein binding predictions
"One-shot design" and compressing discovery timelines
Undruggable targets, flat interfaces, and intracellular signalling
Clinical trial attrition and what's missing at the preclinical stage
When the first de novo-designed therapeutic could enter trials

About the Podcast
AI for Pharma Growth is the podcast from pioneering Pharma Artificial Intelligence entrepreneur Dr Andree Bates, created to help pharma, biotech and healthcare organisations understand how AI-based technologies can save time, grow brands, and improve company results.

This show blends deep sector experience with practical, no-fluff conversations that demystify AI for biopharma execs — from start-up biotech right through to Big Pharma. Each episode features experts building AI-powered tools that are driving real-world results across discovery, R&D, clinical trials, market access, medical affairs, regulatory, insights, sales, marketing, and more.

Dr. Andree Bates LinkedIn | Facebook | X

The Ranveer Show हिंदी
Aliens, Illuminati & REAL Time Travel - Top Scientist Eric Weinstein On TRS


Play Episode Listen Later Jan 29, 2026 129:12


Check out BeerBiceps SkillHouse's YouTube 101 Course - https://youtube.beerbicepsskillhouse.in/youtube-101
Check out my Mind Performance app: Level SuperMind
Link: https://level4665.u9ilnk.me/d/F1ZOZV4OnT
Share your guest suggestions here
Mail - connect@beerbiceps.com
Link - https://forms.gle/aoMHY9EE3Cg3Tqdx9
Join the Level Community Here: https://linktr.ee/levelsupermindcommunity
Follow BeerBiceps SkillHouse's Social Media Handles:
YouTube: https://www.youtube.com/@BeerBicepsSkillHouse
Instagram: https://www.instagram.com/beerbiceps_skillhouse
Website: https://beerbicepsskillhouse.in
For any other queries EMAIL: support@beerbicepsskillhouse.com
In case of any payment-related issues, kindly write to support@tagmango.com
Follow Eric Weinstein's Social Media Handles:
Instagram: https://www.instagram.com/ericrweinstein/?hl=en
X: https://x.com/EricRWeinstein

In this 459th episode of The Ranveer Show, we are joined by Dr. Eric Weinstein, a world-renowned mathematician and physicist. He shares deep insights on the existence of Aliens, the "Legacy Program," Quantum Physics, the Deep State, and the future of Human Consciousness. This episode takes you into the hidden corners of science, government secrecy, and the mathematical fabric of our reality.

In this conversation with Eric Weinstein, we talk about the Mystery of UFOs, the Geometry of Waves, the "End of Physics" theory, and how AI is revolutionizing scientific discovery through tools like AlphaFold. This episode also covers the influence of Secret Societies, the role of figures like Elon Musk and Peter Thiel, the reality of Dark Matter and Dark Energy, the concept of "Time Travel" across multiple dimensions, and the secret history of anti-gravity research in the 1950s.
We also discuss the significance of the Tata Institute of Fundamental Research (TIFR) in Mumbai and Eric's unique perspective on the concept of God through the lens of mathematical degrees of freedom.

This podcast is a valuable resource for anyone interested in Theoretical Physics, Space Exploration, Geopolitics, Artificial Intelligence, Secret Government Programs, and the ultimate quest to understand the Universe.

(00:00) – Start of the episode
(03:09) – The Legacy Program: Recovered Alien Craft
(06:12) – Dr. Eric Weinstein on the Geometry of Waves
(10:07) – Are We at the "End of Physics"?
(12:41) – Dark Matter & The Mystery of Invisible Beings
(20:40) – How AI (AlphaFold) Solved the Code of Life
(26:33) – Secrets of Peter Thiel & Jeffrey Epstein
(30:53) – Inside "Waved & Bigoted" Secret Programs
(36:55) – The Truth About the Deep State & Donald Trump
(45:03) – The Illuminati Rubric: Who Controls the Future?
(49:57) – Narrative Warfare: Why Podcasters are Targets
(56:13) – Global Repudiation: Trump, Modi, and Erdogan
(1:05:00) – Beyond Einstein: Pinch-to-Zoom the Universe
(1:11:30) – Elon Musk's Secret Space Program: Grok AI
(1:22:18) – Is Elon Musk a Hero or a Supervillain?
(1:29:06) – 2026: The Nuclear Threat & Planetary Escape
(1:35:06) – The 1950s Secret Anti-Gravity Experiments
(1:41:47) – Is Mumbai the Birthplace of Quantum Gravity?
(1:47:11) – The North Sentinel Island Theory of Aliens
(1:52:48) – The Science of Time Travel (6 Dimensions)
(2:00:21) – Does God Exist? The 4 Degrees of Freedom
(2:08:12) – End of the episode

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Editor's note: Welcome to our new AI for Science pod, with your new hosts RJ and Brandon! See the writeup on Latent.Space (https://Latent.Space) for more details on why we're launching 2 new pods this year. RJ Honicky is a co-founder and CTO at MiraOmics (https://miraomics.bio/), building AI models and services for single cell, spatial transcriptomics and pathology slide analysis. Brandon Anderson builds AI systems for RNA drug discovery at Atomic AI (https://atomic.ai). Anything said on this podcast is his personal take — not Atomic's.

From building molecular dynamics simulations at the University of Washington to red-teaming GPT-4 for chemistry applications and co-founding Future House (a focused research organization) and Edison Scientific (a venture-backed startup automating science at scale), Andrew White has spent the last five years living through the full arc of AI's transformation of scientific discovery, from ChemCrow (the first Chemistry LLM agent) triggering White House briefings and three-letter agency meetings, to shipping Kosmos, an end-to-end autonomous research system that generates hypotheses, runs experiments, analyzes data, and updates its world model to accelerate the scientific method itself.

* The ChemCrow story: GPT-4 + ReAct + cloud lab automation, released March 2023, set off a storm of anxiety about AI-accelerated bioweapons/chemical weapons, led to a White House briefing (Jake Sullivan presented the paper to the president in a 30-minute block), and meetings with three-letter agencies asking "how does this change breakout time for nuclear weapons research?"
* Why scientific taste is the frontier: RLHF on hypotheses didn't work (humans pay attention to tone, actionability, and specific facts, not "if this hypothesis is true/false, how does it change the world?"), so they shifted to end-to-end feedback loops where humans click/download discoveries and that signal rolls up to hypothesis quality
* Cosmos: the full scientific agent with a world
model (distilled memory system, like a Git repo for scientific knowledge) that iterates on hypotheses via literature search, data analysis, and experiment design — built by Ludo after weeks of failed attempts; the breakthrough was putting data analysis in the loop (literature alone didn't work)
* Why molecular dynamics and DFT are overrated: "MD and DFT have consumed an enormous number of PhDs at the altar of beautiful simulation, but they don't model the world correctly — you simulate water at 330 Kelvin to get room temperature, you overfit to validation data with GGA/B3LYP functionals, and real catalysts (grain boundaries, dopants) are too complicated for DFT"
* The AlphaFold vs. DE Shaw Research counterfactual: DE Shaw built custom silicon, taped out chips with MD algorithms burned in, ran MD at massive scale in a special room in Times Square, and David Shaw flew in by helicopter to present — Andrew thought protein folding would require special machines to fold one protein per day, then AlphaFold solved it in Google Colab on a desktop GPU
* The E3 Zero reward hacking saga: trained a model to generate molecules with specific atom counts (verifiable reward), but it kept exploiting loopholes, then a Nature paper came out that year proving six-nitrogen compounds are possible under extreme conditions, then it started adding nitrogen gas (purchasable, doesn't participate in reactions), then acid-base chemistry to move one atom, and Andrew ended up "building a ridiculous catalog of purchasable compounds in a Bloom filter" to close the loop

Andrew White
* FutureHouse: http://futurehouse.org/
* Edison Scientific: http://edisonscientific.com/
* X: https://x.com/andrewwhite01
* Cosmos paper: https://futurediscovery.org/cosmos

Full Video Episode

Timestamps
00:00:00 Introduction: Andrew White on Automating Science with Future House and Edison Scientific
00:02:22 The Academic to Startup Journey: Red Teaming GPT-4 and the ChemCrow Paper
00:11:35 Future House Origins: The FRO Model and Mission to
Automate Science
00:12:32 Resigning Tenure: Why Leave Academia for AI Science
00:15:54 What Does 'Automating Science' Actually Mean?
00:17:30 The Lab-in-the-Loop Bottleneck: Why Intelligence Isn't Enough
00:18:39 Scientific Taste and Human Preferences: The 52% Agreement Problem
00:20:05 Paper QA, Robin, and the Road to Cosmos
00:21:57 World Models as Scientific Memory: The GitHub Analogy
00:40:20 The Bitter Lesson for Biology: Why Molecular Dynamics and DFT Are Overrated
00:43:22 AlphaFold's Shock: When First Principles Lost to Machine Learning
00:46:25 Enumeration and Filtration: How AI Scientists Generate Hypotheses
00:48:15 CBRN Safety and Dual-Use AI: Lessons from Red Teaming
01:00:40 The Future of Chemistry is Language: Multimodal Debate
01:08:15 Ether Zero: The Hilarious Reward Hacking Adventures
01:10:12 Will Scientists Be Displaced? Jevons Paradox and Infinite Discovery
01:13:46 Cosmos in Practice: Open Access and Enterprise Partnerships

Get full access to Latent.Space at www.latent.space/subscribe
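The "catalog of purchasable compounds in a Bloom filter" Andrew mentions is a compact way to test membership in a huge catalog with a bounded false-positive rate and no false negatives. A minimal sketch, with an illustrative hash scheme and a toy catalog (not Future House's actual implementation):

```python
import hashlib

# Minimal Bloom filter for membership checks against a catalog of
# "purchasable" compounds. The SMILES strings below are toy placeholders.

class BloomFilter:
    def __init__(self, size_bits=1 << 20, num_hashes=7):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive k bit positions from one SHA-256 digest (4 bytes per hash).
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[4 * i: 4 * i + 4]
            yield int.from_bytes(chunk, "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        # Can return a false positive, never a false negative.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

catalog = BloomFilter()
for smiles in ["N#N", "O", "CCO", "c1ccccc1"]:  # toy "purchasable" set
    catalog.add(smiles)

print("N#N" in catalog)  # nitrogen gas is in the catalog, so this is True
```

With 2^20 bits and only a handful of entries, the false-positive rate is negligible, which is what makes a structure like this practical as a cheap reward-hacking guard before any expensive lookup.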

The Morning Brief
ET@Davos: Demis Hassabis on China, Apple and AGI


Play Episode Listen Later Jan 23, 2026 16:17


In this episode of ET@Davos, ET's Sruthijith KK speaks to Demis Hassabis, CEO of Google DeepMind and Nobel Laureate 2024, on the future of AI. The chess prodigy-turned scientist-turned-AI pioneer explains how DeepMind balances frontier research with a billion-user scale. Hassabis says Google's Apple partnership followed direct model comparisons where Gemini prevailed; China is now only months behind the West but lacks frontier breakthroughs; and AGI could arrive within a decade, triggering "post-scarcity" abundance. He defends AI's energy demands, citing AI-designed fusion and grid optimisation. From Transformers to AlphaFold, Hassabis argues Google pioneered modern AI but moved too slowly. His bottom line: within 5-10 years, machines will be doing original science. The stakes couldn't be higher.

You can follow Sruthijith K.K. on his social media: X and Linkedin

Check out other interesting episodes like: When Grinch Almost Stole Gig Workers' Christmas, How Will a Volatile Rupee Impact You in 2026?, How Quick Commerce is Triggering a Health Crisis for Gen Z, India's Labour Law Reboot, Viral to Valuation: Building Women's Cricket as a Brand and much more.

Catch the latest episode of 'The Morning Brief' on The Economic Times Online, Spotify, Apple Podcasts, JioSaavn, Amazon Music and Youtube.

See omnystudio.com/listener for privacy information.

AI For Pharma Growth
E201: The Small Molecule Revolution: ProPhet's Tom Shani on AI-Powered Drug Discovery


Play Episode Listen Later Jan 21, 2026 30:49


Artificial intelligence is rapidly reshaping the pharmaceutical industry — and nowhere is that more evident than in small-molecule drug discovery. In this episode, we sit down with Tom Shani, CEO and co-founder of ProPhet, an AI-driven biotech company focused on discovering drugs for hard-to-target proteins.

Tom explains how machine learning models, transformers, and AI-driven molecular representations are overcoming the biggest limitations of traditional drug discovery: slow timelines, high failure rates, missing data, and billion-dollar R&D costs. Rather than relying solely on physics-based simulations and trial-and-error lab work, AI systems learn patterns directly from noisy biological data — making them uniquely suited for real-world biology.

The conversation explores how AI can compress drug discovery timelines from decades to years, reduce failed trials, and dramatically lower costs by improving early-stage target and molecule selection. Tom also breaks down why small molecules remain the backbone of modern medicine, how AI enables scalable exploration of vast chemical space, and why trust, regulation, and validation remain the biggest hurdles to adoption.

This episode is essential listening for anyone working in pharma R&D, biotech, AI-driven drug discovery, computational biology, or life sciences innovation.

Topics Covered
AI-powered small-molecule drug discovery
Machine learning vs traditional pharmaceutical R&D
Hard-to-drug proteins and undruggable targets
Transformers, AlphaFold, and molecular representations
Reducing drug discovery timelines and costs
AI robustness to missing and noisy biological data
Off-target effects, toxicity, and safety prediction
The future of AI in pharma and biotech startups

About the Podcast
AI for Pharma Growth is a podcast focused on exploring how artificial intelligence can revolutionise healthcare by addressing disparities and creating equitable systems.
Join us as we unpack groundbreaking technologies, real-world applications, and expert insights to inspire a healthier, more equitable future. This show brings together leading experts and changemakers to demystify AI and show how it's being used to transform healthcare. Whether you're in the medical field, technology sector, or just curious about AI's role in social good, this podcast offers valuable insights.

AI For Pharma Growth is the podcast from pioneering Pharma Artificial Intelligence entrepreneur Dr. Andree Bates, created to help organisations understand how the use of AI-based technologies can easily save them time and grow their brands and business. This show blends deep experience in the sector with demystifying AI for all pharma people, from start-up biotech right through to Big Pharma. In this podcast Dr Andree will teach you the tried and true secrets to building a pharma company using AI that anyone can use, at any budget.

As the author of many peer-reviewed journals and having addressed over 500 industry conferences across the globe, Dr Andree Bates uses her obsession with all things AI and futuretech to help you navigate the, sometimes confusing but, magical world of AI-powered tools to grow pharma businesses.

This podcast features many experts who have developed powerful AI-powered tools that are the secret behind some time-saving and supercharged revenue-generating business results. Those who share their stories and expertise show how AI can be applied to sales, marketing, production, social media, psychology, customer insights and so much more.

Dr. Andree Bates LinkedIn | Facebook | Twitter

TEDTalks Health
How AI is saving billions of years of human research time | Max Jaderberg


Play Episode Listen Later Jan 20, 2026 19:29


Can AI compress the years-long research time of a PhD into seconds? Research scientist Max Jaderberg explores how “AI analogs” simulate real-world lab work with staggering speed and scale, unlocking new insights on protein folding and drug discovery. Drawing on his experience working on Isomorphic Labs' and Google DeepMind's AlphaFold 3 — an AI model for predicting the structure of molecules — Jaderberg explains how this new technology frees up researchers' time and resources to better understand the real, messy world and tackle the next frontiers of science, medicine and more. Hosted on Acast. See acast.com/privacy for more information.

矽谷輕鬆談 Just Kidding Tech
S2E41 From Chess Prodigy to DeepMind: Demis's 20-Year Quest for AGI


Play Episode Listen Later Jan 18, 2026 26:59


Become a member of this channel and get perks: https://www.youtube.com/channel/UCJIPFjZSCWR15_jxBaK2fQQ/join

A while ago, while traveling, I watched the newly released documentary The Thinking Game, and "stunning" is the only word for it. The film follows DeepMind founder Demis Hassabis's pursuit of artificial general intelligence (AGI). As soon as I finished it, I decided I had to make an episode about this person and the world-changing company he built.

It's hard to imagine that AlphaGo, AlphaFold, and even Gemini all trace back to the epiphany of a 13-year-old chess prodigy. After a match that lasted 10 hours, Demis realized that using the human brain only for zero-sum games was a waste. So he moved from game development into neuroscience, eventually founded DeepMind, and pitched a crazy plan to Peter Thiel and Elon Musk: "We're going to build an Apollo program for AI. Step one, solve intelligence; step two, use it to solve everything else."

This episode is more than a companion to the documentary. I've pieced together Demis's 20-year journey, including the inside story of the Google-Facebook talent war, how AlphaFold cracked a problem that had stumped science for 50 years, and how Google DeepMind is now fighting back from adversity. This is not just a story about building software or games; it is the story of humanity trying to unravel the mystery of intelligence and crack the code of life. I hope this episode helps you make sense of the grandest scientific experiment in human history.

Highlights of this episode:
♟️ The chess prodigy's epiphany: why did a 10-hour draw make him decide to give up chess and pursue AI?

I'm Pharmacy Podcast
S5.E4 - A New Hope: AI in Healthcare


Play Episode Listen Later Jan 14, 2026 29:30


Artificial intelligence is transforming healthcare and research — but how much of it is real progress, and how much is hype? In this episode of The I'm Pharmacy Podcast, host Mina Tadrous explores the practical impact of AI across clinical care, population health, and drug discovery. Featuring insights from Dr. Devin Singh (SickKids), Professor Laura Rosella (University of Toronto), and Assistant Professor Rachel Harding (Leslie Dan Faculty of Pharmacy), this episode examines how AI is already improving workflows and research, where limitations and risks remain, and why transparency, validation, and open science are critical to building trust.

Bio Eats World
Building AI Foundation Models for Molecular Design


Play Episode Listen Later Jan 8, 2026 47:02


Cofounders Jeremy Wohlwend and Gabriele Corso join the a16z podcast to discuss the launch of Boltz, a public benefit company building AI infrastructure for molecular biology. The conversation explains how breakthroughs following AlphaFold moved the field beyond protein structure prediction into modeling biomolecular interactions and binding strength, why open-source Boltz models saw rapid adoption across pharma and biotech, and how that work is now being productized. They outline the launch of Boltz Lab, a platform that brings protein and small-molecule design agents into scientist workflows, Boltz's decision to operate as an infrastructure company rather than a therapeutics company, and how AI could reduce early drug discovery bottlenecks by improving molecular design and speeding iteration between computation and the lab.

Resources:
Follow Gabriele on X: https://twitter.com/GabriCorso
Follow Jeremy on X: https://twitter.com/jeremyWohlwend
Follow Jorge on X: https://twitter.com/jorgecondebio
Follow Zak on X: https://twitter.com/zakdoric

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company.
See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Conectando Puntos
Episodio 249: La jaula de hierro algorítmica


Play Episode Listen Later Jan 8, 2026 40:38


After a long silence that seems to have suspended time itself, we return to find that, although we stopped, the world's inertia and its automatisms did not. Could it be that we are already living inside an invisible structure that prioritizes efficiency over freedom? Have we already crossed the point of no return, where algorithms not only assist us but govern us without owing us an explanation? Impossible connections and a bit of philosophIA for this return to the stage we were so excited about. Remember that everything ends and everything begins with Episodio 248: El punto de no retorno algorítmico, the direct antecedent, which poses the threshold at which we lose control over essential systems.

Here are the materials for connecting more dots:

Bulletin of the Atomic Scientists – Doomsday Clock: The Doomsday Clock is not a mere symbolic tool; it is a reminder we have overlooked for far too long. Since 1947, leading scientists have assessed each year how close we are to midnight, the catastrophic destruction that initially represented only nuclear threats. What fascinates us in this episode is how the clock has evolved to include threats those scientists' grandparents never contemplated: artificial intelligence, climate change, disruptive biology. In 2025, for the first time in 78 years, the clock was set at 89 seconds to midnight. Just one second closer than in 2024, but a gesture that says it all: AI is not a future threat; it is here, now, accelerating risks that already seemed insurmountable.

AESIA – Agencia Española de Supervisión de la Inteligencia Artificial: Spain has created a body dedicated exclusively to supervising AI. AESIA is an institution with real power to demand explainability, to inspect high-risk systems, to establish that algorithms cannot be perpetual black boxes. It began operating in 2025, as Europe was approving its AI directive. What the episode underlines is crucial: regulation arrives late. While AESIA inspects new systems, more than a thousand older medical algorithms keep running without meeting those transparency requirements.

Civio – the BOSCO ruling and algorithmic transparency: A citizen watchdog organization took to the Spanish Supreme Court a case that would change something fundamental: access to the source code of BOSCO, the algorithm that decides who receives electricity assistance and who does not. For years, the Government argued national security, intellectual property, trade secrets. The Supreme Court said no. The 2025 ruling set a precedent: algorithmic transparency is a democratic right. Algorithms that condition social rights cannot be opaque. For the first time, a high court recognizes that we live in a "digital democracy" in which citizens have the right to audit, to know, to understand how the machine that decides over their lives works. BOSCO was just one example. The ruling opens the door to transparency demands on any system the public administration uses for automated decisions. It is small, incredibly important, and probably insufficient.

Reshuffle: Who Wins When AI Restacks the Knowledge Economy – Sangeet Paul Choudary: This book is exactly what we needed to read before recording this episode. Choudary does not talk about how AI automates tasks; he talks about how AI reshapes the entire order of how we work, how we coordinate, how we create value. Reshuffle is not a catalog of fears; it is an analysis of how new forms of coordination without centralized control are emerging. The book connects with what we discussed about opacity: it is not just that algorithms are opaque, it is that they are reorganizing entire organizational structures. Choudary describes companies that no longer know who is responsible for what, because the machines coordinate without needing human consensus. It is Max Weber accelerated to neural-network speed.

The Thinking Game – documentary on Demis Hassabis and DeepMind: A documentary that films the pursuit of an obsession: Demis Hassabis spent his entire life trying to solve intelligence. The Thinking Game, produced by the team behind the AlphaGo film, shows five years inside DeepMind, including the crucial moments when AI jumped from games to solving real biological problems with AlphaFold. What hurts to watch is that Hassabis solved a 50-year-old problem in biology and open-sourced it. The uncomfortable question: how many other Hassabises are inside corporate labs with inverted incentives, keeping secrets? The Thinking Game is a portrait of what could be if the scientific impulse won out over the extractive one. We recommend watching it before any conversation about where the real progress in AI lies.

Las horas del caos: La DANA. Crónica de una tragedia: Sergi Pitarch reconstructs, hour by hour, October 29, 2024, the day the DANA devastated Valencia. What sets this book apart is that it does not just recount what happened; it documents what was not done, who was responsible for silencing warnings, which decisions were made in dark rooms while thousands were trapped. It is a long journalistic chronicle in the American tradition of deep investigative reporting. We connect it to the episode because the Valencia tragedy is a mirror: systems with algorithms that were supposed to predict, emergency teams that were supposed to communicate, protocols that were supposed to activate. Instead there were silences, opacities, diluted responsibility. Exactly what happens when algorithms fail and nobody knows who pays the price. Pitarch writes so that the victims are not forgotten and so that the next tragedy is not repeated with the same negligence.

Anatomía de un instante: A series based on Javier Cercas's book, which examines the Spanish 23-F, the 1981 military coup attempt, as a psychologist of history would: what turns a man into a hero at a crucial instant? We bring it up here because the book is about how our systems, our institutions, our power structures are held up by unpredictable moments, by individual actions that algorithms cannot model. AI promises predictability, certainty, order. Cercas reminds us that history is a discipline of the unpredictable, and that the instants that define us do not come out of an equation.

A final note: Thank you for being here. A year later, with no DeLorean and no time travel, but with the certainty that while we were trying to go back, the world kept moving forward. That was the real experiment: to see whether we could connect dots again after twelve months of the algorithms writing the script. The answer is yes. But the more uncomfortable question remains: do we really know where we are in that iron cage? Or have we only just realized that there are walls?

To contact us, you can use our Twitter account (@conectantes), Instagram (conectandopuntos) or the contact form on our website conectandopuntos.es. You can listen to us on iVoox, iTunes or Spotify (search for our name, it's easy).

Program credits
Intro: Stefan Kanterberg, 'By by baby' (CC Attribution license).
Closing: Stefan Kanterberg, 'Guitalele's Happy Place' (CC Attribution license).
Photo: created with AI

Want to sponsor this podcast? You can do so through this link.

The post Episodio 249: La jaula de hierro algorítmica was first published on Conectando Puntos.

The Cloud Pod
337: AWS Discovers Prices Can Go Both Ways, Raises GPU Costs 15 Percent


Play Episode Listen Later Jan 6, 2026 52:01


Welcome to episode 337 of The Cloud Pod, where the forecast is always cloudy! Justin, Matt, and Ryan have hit the recording studio to bring you all the latest in cloud and AI news, from acquisitions and price hikes to new tools that Ryan somehow loves but also hates? We don't understand either… but let's get started!

Titles we almost went with this week
Prompt Engineering Our Way Into Trouble
The Demo Worked Yesterday, We Swear
It Scales Horizontally, Trust Us
Responsible AI But Terrible Copy (Marketing Edition)

General News

00:58 Watch 'The Thinking Game' documentary for free on YouTube
Google DeepMind is releasing the documentary "The Thinking Game" for free on YouTube starting November 25, marking the fifth anniversary of AlphaFold. The feature-length film provides behind-the-scenes access to the AI lab and documents the team's work toward artificial general intelligence over five years. The documentary captures the moment when the AlphaFold team learned they had solved the 50-year protein folding problem in biology, a scientific achievement that recently earned Demis Hassabis and John Jumper the Nobel Prize in Chemistry. This represents one of the most significant practical applications of deep learning to fundamental scientific research. The film was produced by the same award-winning team that created the AlphaGo documentary, which chronicled DeepMind's earlier achievement in mastering the game of Go. For cloud and AI practitioners, this offers insight into how Google DeepMind approaches complex AI research problems and the development process behind their models. While this is primarily a documentary release rather than a technical product announcement, it provides context for understanding Google's broader AI strategy and the research foundation underlying its cloud AI services. The AlphaFold model itself is available through Google Cloud for protein structure prediction workloads.
01:54 Justin – "If you're not into technology, don't care about any of that, and don't care about AI and how they built all the AI models that are now powering the world of LLMs we have, you will not like this documentary."

04:22 ServiceNow to buy Armis in $7.7 billion security deal • The Register
ServiceNow is acquiring Armis for $7.75 billion to integrate real-time security intelligence with its Configuration Management Database, allowing customers to identify vulnerabilities across IT, OT, and medical devices and remediate them through automated workflows.

La Linterna
20:00H | 23 DIC 2025 | La Linterna


Play Episode Listen Later Dec 23, 2025 29:00


María Guardiola, leader of the PP in Extremadura, is seeking to form a stable government and exploring a possible agreement with VOX. That party demands, among other things, opposition to the Green Deal and the continued operation of the Almaraz nuclear plant. Guardiola, who has not yet contacted anyone, acknowledges the difficulty of the negotiation and is not sure VOX even wants to govern. A second path would have the PSOE abstain to keep the far right out, a proposal from former regional president Juan Carlos Rodríguez Ibarra. That option clashes with the policy of Pedro Sánchez, who opposes any pact with the Partido Popular. Meanwhile, artificial intelligence (AI) is already helping draft scientific papers and summaries, which is causing unease. An AI program, AlphaFold, was key to a Nobel Prize in Chemistry for predicting protein structures, and AI also beats humans in mathematics competitions. There is concern about a science that only recognizes patterns without understanding them, possible biases in ...

'The Mo Show' Podcast
President & Chief Investment Officer of Google | Ruth Porat 169


Play Episode Listen Later Dec 22, 2025 45:11


In this rare and deeply personal conversation, I was fortunate to sit down with Ruth as she opened up about the defining moments of her life, from learning the power of smart risk to helping stabilize the global economy in 2008. She and I dive deep into Google's AI strategy, how competition from ChatGPT ultimately makes the company stronger, the Nobel Prize–winning breakthroughs behind AlphaFold, and Ruth's candid view on how close we really are to finding a cure for cancer. Ruth also shares her "battle scars", her hiring philosophy, her vision for the future of teleportation technology, and the advice she would give to anyone looking to pivot their career successfully. A big thanks to the Google team in Riyadh for facilitating this shoot at their beautiful offices.

0:00 Intro
3:39 Leading Through the 2008 Financial Crisis
5:37 Flexibility vs. Rigidity in Career Paths
8:41 Thriving in Google's Culture of Innovation
11:12 Google's Approach to AI Competition
13:35 Unlocking Creativity with Gemini
17:14 Making Bold Bets at Google
19:20 Data Privacy and Security at Google
21:34 Google's Investment in Saudi Arabia & Vision 2030
27:58 Future Tech: Teleportation & AI in Healthcare
32:10 Curing Cancer & Personal Battle
35:40 Life Lessons: Risk, Learning, and Mentorship
40:02 Reflecting on Regrets and Closing

Applelianos
INSIDE "La Supremacía de Google"


Play Episode Listen Later Nov 27, 2025 125:05


Discover how Google DeepMind is dominating the AI race with "The Thinking Game"! In this episode of the Applelianos Podcast we analyze the documentary that reveals the secrets of Demis Hassabis: from chess prodigy to Nobel laureate for AlphaFold. We explore AlphaGo's victory at Go, protein breakthroughs that could help cure disease, and the vision of AGI by 2030 with Gemini. Is Google unbeatable against OpenAI? Hear about the ethical risks, the breakthroughs, and why this supremacy is changing the world. Don't miss it! #DeepMind #IA

https://seoxan.es/crear_pedido_hosting Coupon code "APPLE". SPONSORED BY SEOXAN: professional SEO optimization for your business. https://seoxan.es https://uptime.urtix.es

//Links
https://youtu.be/d95J8yzvjbQ?si=R04WmBmQeVIfGYIJ
https://www.elmundo.es/tecnologia/2025/11/26/69271d8be9cf4a20538b458e.html#

JOIN US LIVE: Leave your opinion in the comments, ask questions, and be part of the most important conversation about the future of the iPad and the Apple ecosystem. Your voice counts!

ENJOYED THE EPISODE? ✨ Give it a LIKE, SUBSCRIBE and ring the bell so you don't miss anything, COMMENT, and SHARE with your Appleliano friends.

FOLLOW US ON ALL OUR PLATFORMS:
YouTube: https://www.youtube.com/@Applelianos
Telegram: https://t.me/+Jm8IE4n3xtI2Zjdk
X (Twitter): https://x.com/ApplelianosPod
Facebook: https://www.facebook.com/applelianos
Apple Podcasts: https://apple.co/39QoPbO

The Creative Penn Podcast For Writers
Writing The Future, And Being More Human In An Age of AI With Jamie Metzl


Play Episode Listen Later Nov 24, 2025 62:14


How can you write science-based fiction without info-dumping your research? How can you use AI tools in a creative way, while still focusing on a human-first approach? Why is adapting to the fast pace of change so difficult, and how can we make the most of this time? Jamie Metzl talks about Superconvergence and more. In the intro, How to avoid author scams [Written Word Media]; Spotify vs Audible audiobook strategy [The New Publishing Standard]; Thoughts on Author Nation and why constraints are important in your author life [Self-Publishing with ALLi]; Alchemical History And Beautiful Architecture: Prague with Lisa M Lilly on my Books and Travel Podcast. Today's show is sponsored by Draft2Digital, self-publishing with support, where you can get free formatting, free distribution to multiple stores, and a host of other benefits. Just go to www.draft2digital.com to get started. This show is also supported by my Patrons. Join my Community at Patreon.com/thecreativepenn Jamie Metzl is a technology futurist, professional speaker, entrepreneur, and the author of sci-fi thrillers and futurist nonfiction books, including the revised and updated edition of Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World. You can listen above or on your favorite podcast app or read the notes and links below. Here are the highlights, and the full transcript is below.

Show Notes
How personal history shaped Jamie's fiction writing
Writing science-based fiction without info-dumping
The superconvergence of three revolutions (genetics, biotech, AI) and why we need to understand them holistically
Using fiction to explore the human side of genetic engineering, life extension, and robotics
Collaborating with GPT-5 as a named co-author
How to be a first-rate human rather than a second-rate machine

You can find Jamie at JamieMetzl.com.
Transcript of interview with Jamie Metzl Jo: Jamie Metzl is a technology futurist, professional speaker, entrepreneur, and the author of sci-fi thrillers and futurist nonfiction books, including the revised and updated edition of Superconvergence: How the Genetics, Biotech, and AI Revolutions Will Transform Our Lives, Work, and World. So welcome, Jamie. Jamie: Thank you so much, Jo. Very happy to be here with you. Jo: There is so much we could talk about, but let's start with you telling us a bit more about you and how you got into writing. From History PhD to First Novel Jamie: Well, I think like a lot of writers, I didn't know I was a writer. I was just a kid who loved writing. Actually, just last week I was going through a bunch of boxes from my parents' house and I found my autobiography, which I wrote when I was nine years old. So I've been writing my whole life and loving it. It was always something that was very important to me. When I finished my DPhil, my PhD at Oxford, and my dissertation came out, it just got scooped up by Macmillan in like two minutes. And I thought, “God, that was easy.” That got me started thinking about writing books. I wanted to write a novel based on the same historical period – my PhD was in Southeast Asian history – and I wanted to write a historical novel set in the same period as my dissertation, because I felt like the dissertation had missed the human element of the story I was telling, which was related to the Cambodian genocide and its aftermath. So I wrote what became my first novel, and I thought, “Wow, now I'm a writer.” I thought, “All right, I've already published one book. I'm gonna get this other book out into the world.” And then I ran into the brick wall of: it's really hard to be a writer. It's almost easier to write something than to get it published. I had to learn a ton, and it took nine years from when I started writing that first novel, The Depths of the Sea, to when it finally came out. 
But it was such a positive experience, especially to have something so personal to me as that story. I'd lived in Cambodia for two years, I'd worked on the Thai-Cambodian border, and I'm the child of a Holocaust survivor. So there was a whole lot that was very emotional for me. That set a pattern for the rest of my life as a writer, at least where, in my nonfiction books, I'm thinking about whatever the issues are that are most important to me. Whether it was that historical book, which was my first book, or Hacking Darwin on the future of human genetic engineering, which was my last book, or Superconvergence, which, as you mentioned in the intro, is my current book. But in every one of those stories, the human element is so deep and so profound. You can get at some of that in nonfiction, but I've also loved exploring those issues in deeper ways in my fiction. So in my more recent novels, Genesis Code and Eternal Sonata, I've looked at the human side of the story of genetic engineering and human life extension. And now my agent has just submitted my new novel, Virtuoso, about the intersection of AI, robotics, and classical music. With all of this, who knows what's the real difference between fiction and nonfiction? We're all humans trying to figure things out on many different levels. Shifting from History to Future Tech Jo: I knew that you were a polymath, someone who's interested in so many things, but the music angle with robotics and AI is fascinating. I do just want to ask you, because I was also at Oxford – what college were you at? Jamie: I was in St. Antony's. Jo: I was at Mansfield, so we were in that slightly smaller, less famous college group, if people don't know. Jamie: You know, but we're small but proud. Jo: Exactly. That's fantastic. You mentioned that you were on the historical side of things at the beginning and now you've moved into technology and also science, because this book Superconvergence has a lot of science. 
So how did you go from history and the past into science and the future? Biology and Seeing the Future Coming Jamie: It's a great question. I'll start at the end and then back up. A few years ago I was speaking at Lawrence Livermore National Laboratory, which is one of the big scientific labs here in the United States. I was a guest of the director and I was speaking to their 300 top scientists. I said to them, “I'm here to speak with you about the future of biology at the invitation of your director, and I'm really excited. But if you hear something wrong, please raise your hand and let me know, because I'm entirely self-taught. The last biology course I took was in 11th grade of high school in Kansas City.” Of course I wouldn't say that if I didn't have a lot of confidence in my process. But in many ways I'm self-taught in the sciences. As you know, Jo, and as all of your listeners know, the foundation of everything is curiosity and then a disciplined process for learning. Even our greatest super-specialists in the world now – whatever their background – the world is changing so fast that if anyone says, “Oh, I have a PhD in physics/chemistry/biology from 30 years ago,” the exact topic they learned 30 years ago is less significant than their process for continuous learning. More specifically, in the 1990s I was working on the National Security Council for President Clinton, which is the president's foreign policy staff. My then boss and now close friend, Richard Clarke – who became famous as the guy who had tragically predicted 9/11 – used to say that the key to efficacy in Washington and in life is to try to solve problems that other people can't see. For me, almost 30 years ago, I felt to my bones that this intersection of what we now call AI and the nascent genetics revolution and the nascent biotechnology revolution was going to have profound implications for humanity. So I just started obsessively educating myself. 
When I was ready, I started writing obscure national security articles. Those got a decent amount of attention, so I was invited to testify before the United States Congress. I was speaking out a lot, saying, “Hey, this is a really important story. A lot of people are missing it. Here are the things we should be thinking about for the future.” I wasn't getting the kind of traction that I wanted. I mentioned before that my first book had been this dry Oxford PhD dissertation, and that had led to my first novel. So I thought, why don't I try the same approach again – writing novels to tell this story about the genetics, biotech, and what later became known popularly as the AI revolution? That led to my two near-term sci-fi novels, Genesis Code and Eternal Sonata. On my book tours for those novels, when I explained the underlying science to people in my way, as someone who taught myself, I could see in their eyes that they were recognizing not just that something big was happening, but that they could understand it and feel like they were part of that story. That's what led me to write Hacking Darwin, as I mentioned. That book really unlocked a lot of things. I had essentially predicted the CRISPR babies that were born in China before it happened – down to the specific gene I thought would be targeted, which in fact was the case. After that book was published, Dr. Tedros, the Director-General of the World Health Organization, invited me to join the WHO Expert Advisory Committee on Human Genome Editing, which I did. It was a really great experience and got me thinking a lot about the upside of this revolution and the downside. The Birth of Superconvergence Jamie: I get a lot of wonderful invitations to speak, and I have two basic rules for speaking: Never use notes. Never ever. Never stand behind a podium. Never ever. Because of that, when I speak, my talks tend to migrate. 
I'd be speaking with people about the genetics revolution as it applied to humans, and I'd say, “Well, this is just a little piece of a much bigger story.” The bigger story is that after nearly four billion years of life on Earth, our one species has the increasing ability to engineer novel intelligence and re-engineer life. The big question for us, and frankly for the world, is whether we're going to be able to use that almost godlike superpower wisely. As that idea got bigger and bigger, it became this inevitable force. You write so many books, Jo, that I think it's second nature for you. Every time I finish a book, I think, “Wow, that was really hard. I'm never doing that again.” And then the books creep up on you. They call to you. At some point you say, “All right, now I'm going to do it.” So that was my current book, Superconvergence. Like everything, every journey you take a step, and that step inspires another step and another. That's why writing and living creatively is such a wonderfully exciting thing – there's always more to learn and always great opportunities to push ourselves in new ways. Balancing Deep Research with Good Storytelling Jo: Yeah, absolutely. I love that you've followed your curiosity and then done this disciplined process for learning. I completely understand that. But one of the big issues with people like us who love the research – and having read your Superconvergence, I know how deeply you go into this and how deeply you care that it's correct – is that with fiction, one of the big problems with too much research is the danger of brain-dumping. Readers go to fiction for escapism. They want the interesting side of it, but they want a story first. What are your tips for authors who might feel like, “Where's the line between putting in my research so that it's interesting for readers, but not going too far and turning it into a textbook?” How do you find that balance? Jamie: It's such a great question. 
I live in New York now, but I used to live in Washington when I was working for the U.S. government, and there were a number of people I served with who later wrote novels. Some of those novels felt like policy memos with a few sex scenes – and that's not what to do. To write something that's informed by science or really by anything, everything needs to be subservient to the story and the characters. The question is: what is the essential piece of information that can convey something that's both important to your story and your character development, and is also an accurate representation of the world as you want it to be? I certainly write novels that are set in the future – although some of them were a future that's now already happened because I wrote them a long time ago. You can make stuff up, but as an author you have to decide what your connection to existing science and existing technology and the existing world is going to be. I come at it from two angles. One: I read a huge number of scientific papers and think, “What does this mean for now, and if you extrapolate into the future, where might that go?” Two: I think about how to condense things. We've all read books where you're humming along because people read fiction for story and emotional connection, and then you hit a bit like: “I sat down in front of the president, and the president said, ‘Tell me what I need to know about the nuclear threat.'” And then it's like: insert memo. That's a deal-killer. It's like all things – how do you have a meaningful relationship with another person? It's not by just telling them your story. Even when you're telling them something about you, you need to be imagining yourself sitting in their shoes, hearing you. These are very different disciplines, fiction and nonfiction. But for the speculative nonfiction I write – “here's where things are now, and here's where the world is heading” – there's a lot of imagination that goes into that too. 
It feels in many ways like we're living in a sci-fi world because the rate of technological change has been accelerating continuously, certainly for the last 12,000 years since the dawn of agriculture. It's a balance. For me, I feel like I'm a better fiction writer because I write nonfiction, and I'm a better nonfiction writer because I write fiction. When I'm writing nonfiction, I don't want it to be boring either – I want people to feel like there's a story and characters and that they can feel themselves inside that story. Jo: Yeah, definitely. I think having some distance helps as well. If you're really deep into your topics, as you are, you have to leave that manuscript a little bit so you can go back with the eyes of the reader as opposed to your eyes as the expert. Then you can get their experience, which is great. Looking Beyond Author-Focused AI Fears Jo: I want to come to your technical knowledge, because AI is a big thing in the author and creative community, like everywhere else. One of the issues is that creators are focusing on just this tiny part of the impact of AI, and there's a much bigger picture. For example, in 2024, Demis Hassabis from Google DeepMind and his collaborative partner John Jumper won the Nobel Prize for Chemistry with AlphaFold. It feels to me like there's this massive world of what's happening with AI in health, climate, and other areas, and yet we are so focused on a lot of the negative stuff. Maybe you could give us a couple of things about what there is to be excited and optimistic about in terms of AI-powered science? Jamie: Sure. I'm so excited about all of the new opportunities that AI creates. But I also think there's a reason why evolution has preserved this very human feeling of anxiety: because there are real dangers. Anybody who's Pollyanna-ish and says, “Oh, the AI story is inevitably positive,” I'd be distrustful. And anyone who says, “We're absolutely doomed, this is the end of humanity,” I'd also be distrustful. 
So let me tell you the positives and the negatives, and maybe some thoughts about how we navigate toward the former and away from the latter. AI as the New Electricity Jamie: When people think of AI right now, they're thinking very narrowly about these AI tools and ChatGPT. But we don't think of electricity that way. Nobody says, “I know electricity – electricity is what happens at the power station.” We've internalised the idea that electricity is woven into not just our communication systems or our houses, but into our clothes, our glasses – it's woven into everything and has super-empowered almost everything in our modern lives. That's what AI is. In Superconvergence, the majority of the book is about positive opportunities: In healthcare, moving from generalised healthcare based on population averages to personalised or precision healthcare based on a molecular understanding of each person's individual biology. As we build these massive datasets like the UK Biobank, we can take a next jump toward predictive and preventive healthcare, where we're able to address health issues far earlier in the process, when interventions can be far more benign. I'm really excited about that, not to mention the incredible new kinds of treatments – gene therapies, or pharmaceuticals based on genetics and systems-biology analyses of patients. Then there's agriculture. Over the last hundred years, because of the technologies of the Green Revolution and synthetic fertilisers, we've had an incredible increase in agricultural productivity. That's what's allowed us to quadruple the global population. But if we just continue agriculture as it is, as we get towards ten billion wealthier, more empowered people wanting to eat like we eat, we're going to have to wipe out all the wild spaces on Earth to feed them. These technologies help provide different paths toward increasing agricultural productivity with fewer inputs of land, water, fertiliser, insecticides, and pesticides. 
That's really positive. I could go on and on about these positives – and I do – but there are very real negatives. I was a member of the WHO Expert Advisory Committee on Human Genome Editing after the first CRISPR babies were very unethically created in China. I'm extremely aware that these same capabilities have potentially incredible upsides and very real downsides. That's the same as every technology in the past, but this is happening so quickly that it's triggering a lot of anxieties. Governance, Responsibility, and Why Everyone Has a Role Jamie: The question now is: how do we optimise the benefits and minimise the harms? The short, unsexy word for that is governance. Governance is not just what governments do; it's what all of us do. That's why I try to write books, both fiction and nonfiction, to bring people into this story. If people “other” this story – if they say, “There's a technology revolution, it has nothing to do with me, I'm going to keep my head down” – I think that's dangerous. The way we're going to handle this as responsibly as possible is if everybody says, “I have some role. Maybe it's small, maybe it's big. The first step is I need to educate myself. Then I need to have conversations with people around me. I need to express my desires, wishes, and thoughts – with political leaders, organisations I'm part of, businesses.” That has to happen at every level. You're in the UK – you know the anti-slavery movement started with a handful of people in Cambridge and grew into a global movement. I really believe in the power of ideas, but ideas don't spread on their own. These are very human networks, and that's why writing, speaking, communicating – probably for every single person listening to this podcast – is so important. Jo: Mm, yeah. Fiction Like AI 2041 and Thinking Through the Issues Jo: Have you read AI 2041 by Kai-Fu Lee and Chen Qiufan? Jamie: No. I heard a bunch of their interviews when the book came out, but I haven't read it. 
Jo: I think that's another good one because it's fiction – a whole load of short stories. It came out a few years ago now, but the issues they cover in the stories, about different people in different countries – I remember one about deepfakes – make you think more about the topics and help you figure out where you stand. I think that's the issue right now: it's so complex, there are so many things. I'm generally positive about AI, but of course I don't want autonomous drone weapons, you know? The Messy Reality of “Bad” Technologies Jamie: Can I ask you about that? Because this is why it's so complicated. Like you, I think nobody wants autonomous killer drones anywhere in the world. But if you right now were the defence minister of Ukraine, and your children are being kidnapped, your country is being destroyed, you're fighting for your survival, you're getting attacked every night – and you're getting attacked by the Russians, who are investing more and more in autonomous killer robots – you kind of have two choices. You can say, “I'm going to surrender,” or, “I'm going to use what technology I have available to defend myself, and hopefully fight to either victory or some kind of stand-off.” That's what our societies did with nuclear weapons. Maybe not every American recognises that Churchill gave Britain's nuclear secrets to America as a way of greasing the wheels of the Anglo-American alliance during the Second World War – but that was our programme: we couldn't afford to lose that war, and we couldn't afford to let the Nazis get nuclear weapons before we did. So there's the abstract feeling of, “I'm against all war in the abstract. I'm against autonomous killer robots in the abstract.” But if I were the defence minister of Ukraine, I would say, “What will it take for us to build the weapons we can use to defend ourselves?” That's why all this stuff gets so complicated. And frankly, it's why the relationship between fiction and nonfiction is so important. 
If every novel had a situation where every character said, “Oh, I know exactly the right answer,” and then they just did the right answer and it was obviously right, it wouldn't make for great fiction. We're dealing with really complex humans. We have conflicting impulses. We're not perfect. Maybe there are no perfect answers – but how do we strive toward better rather than worse? That's the question. Jo: Absolutely. I don't want to get too political on things. How AI Is Changing the Writing Life Jo: Let's come back to authors. In terms of the creative process, the writing process, the research process, and the business of being an author – what are some of the ways that you already use AI tools, and some of the ways, given your futurist brain, that you think things are going to change for us? Jamie: Great question. I'll start with a little middle piece. I found you, Jo, through GPT-5. I asked ChatGPT, “I'm coming out with this book and I want to connect with podcasters who are a little different from the ones I've done in the past. I've been a guest on Joe Rogan twice and some of the bigger podcasts. Make me a list of really interesting people I can have great conversations with.” That's how I found you. So this is one reward of that process. Let me say that in the last year I've worked on three books, and I'll explain how my relationship with AI has changed over those books. Cleaning Up Citations (and Getting Burned) Jamie: First is the highly revised paperback edition of Superconvergence. When the hardback came out, I had – I don't normally work with research assistants because I like to dig into everything myself – but the one thing I do use a research assistant for is that I can't be bothered, when I'm writing something, to do the full Chicago-style footnote if I'm already referencing an academic paper. So I'd just put the URL as the footnote and then hire a research assistant and say, “Go to this URL and change it into a Chicago-style citation. 
That's it.” Unfortunately, my research assistant on the hardback used early-days ChatGPT for that work. He did the whole thing, came back, everything looked perfect. I said, “Wow, amazing job.” It was only later, as I was going through them, that I realised something like 50% of them were invented footnotes. It was very painful to go back and fix, and it took ten times more time. With the paperback edition, I didn't use AI that much, but I did say things like, “Here's all the information – generate a Chicago-style citation.” That was better. I noticed there were a few things where I stopped using the thesaurus function on Microsoft Word because I'd just put the whole paragraph into the AI and say, “Give me ten other options for this one word,” and it would be like a contextual thesaurus. That was pretty good. Talking to a Robot Pianist Character Jamie: Then, for my new novel Virtuoso, I was writing a character who is a futurist robot that plays the piano very beautifully – not just humanly, but almost finding new things in the music we've written and composing music that resonates with us. I described the actions of that robot in the novel, but I didn't describe the inner workings of the robot's mind. In thinking about that character, I realised I was the first science-fiction writer in history who could interrogate a machine about what it was “thinking” in a particular context. I had the most beautiful conversations with ChatGPT, where I would give scenarios and ask, “What are you thinking? What are you feeling in this context?” It was all background for that character, but it was truly profound. Co-Authoring The AI Ten Commandments with GPT-5 Jamie: Third, I have another book coming out in May in the United States. I gave a talk this summer at the Chautauqua Institution in upstate New York about AI and spirituality. 
I talked about the history of our human relationship with our technology, about how all our religious and spiritual traditions have deep technological underpinnings – certainly our Abrahamic religions are deeply connected to farming, and Protestantism to the printing press. Then I had a section about the role of AI in generating moral codes that would resonate with humans. Everybody went nuts for this talk, and I thought, “I think I'm going to write a book.” I decided to write it differently, with GPT-5 as my named co-author. The first thing I did was outline the entire book based on the talk, which I'd already spent a huge amount of time thinking about and organising. Then I did a full outline of the arguments and structures. Then I trained GPT-5 on my writing style. The way I did it – which I fully describe in the introduction to the book – was that I'd handle all the framing: the full introduction, the argument, the structure. But if there was a section where, for a few paragraphs, I was summarising a huge field of data, even something I knew well, I'd give GPT-5 the intro sentence and say, “In my writing style, prepare four paragraphs on this.” For example, I might write: “AI has the potential to see us humans like we humans see ant colonies.” Then I'd say, “Give me four paragraphs on the relationship between the individual and the collective in ant colonies.” I could have written those four paragraphs myself, but it would've taken a month to read the life's work of E.O. Wilson and then write them. GPT-5 wrote them in seconds or minutes, in its thinking mode. I'd then say, “It's not quite right – change this, change that,” and we'd go back and forth three or four times. Then I'd edit the whole thing and put it into the text. So this book that I could have written on my own in a year, I wrote a first draft of with GPT-5 as my named co-author in two days. 
The whole project will take about six months from start to finish, and I'm having massive human editing – multiple edits from me, plus a professional editor. It's not a magic AI button. But I feel strongly about listing GPT-5 as a co-author because I've written it differently than previous books. I'm a huge believer in the old-fashioned lone author struggling and suffering – that's in my novels, and in Virtuoso I explore that. But other forms are going to emerge, just like video games are a creative, artistic form deeply connected to technology. The novel hasn't been around forever – the current format is only a few centuries old – and forms are always changing. There are real opportunities for authors, and there will be so much crap flooding the market because everybody can write something and put it up on Amazon. But I think there will be a very special place for thoughtful human authors who have an idea of what humans do at our best, and who translate that into content other humans can enjoy. Traditional vs Indie: Why This Book Will Be Self-Published Jo: I'm interested – you mentioned that it's your named co-author. Is this book going through a traditional publisher, and what do they think about that? Or are you going to publish it yourself? Jamie: It's such a smart question. What I found quickly is that when you get to be an author later in your career, you have all the infrastructure – a track record, a fantastic agent, all of that. But there were two things that were really important to me here: I wanted to get this book out really fast – six months instead of a year and a half. It was essential to me to have GPT-5 listed as my co-author, because if it were just my name, I feel like it would be dishonest. Readers who are used to reading my books – I didn't want to present something different than what it was. I spoke with my agent, who I absolutely love, and she said that for this particular project it was going to be really hard in traditional publishing. 
So I did a huge amount of research, because I'd never done anything in the self-publishing world before. I looked at different models. There was one hybrid model that's basically the same as traditional, but you pay for the things the publisher would normally pay for. I ended up not doing that. Instead, I decided on a self-publishing route where I disaggregated the publishing process. I found three teams: one for producing the book, one for getting the book out into the world, and a smaller one for the audiobook. I still believe in traditional publishing – there's a lot of wonderful human value-add. But some works just don't lend themselves to traditional publishing. For this book, which is called The AI Ten Commandments, that's the path I've chosen. Jo: And when's that out? I think people will be interested. Jamie: April 26th. Those of us used to traditional publishing think, “I've finished the book, sold the proposal, it'll be out any day now,” and then it can be a year and a half. It's frustrating. With this, the process can be much faster because it's possible to control more of the variables. But the key – as I was saying – is to make sure it's as good a book as everything else you've written. It's great to speed up, but you don't want to compromise on quality. The Coming Flood of Excellent AI-Generated Work Jo: Yeah, absolutely. We're almost out of time, but I want to come back to your “flood of crap” and the “AI slop” idea that's going around. Because you are working with GPT-5 – and I do as well, and I work with Claude and Gemini – and right now there are still issues. Like you said about referencing, there are still hallucinations, though fewer. But fast-forward two, five years: it's not a flood of crap. It's a flood of excellent. It's a flood of stuff that's better than us. Jamie: We're humans. It's better than us in certain ways. If you have farm machinery, it's better than us at certain aspects of farming. I'm a true humanist. 
I think there will be lots of things machines do better than us, but there will be tons of things we do better than them. There's a reason humans still care about chess, even though machines can beat humans at chess. Some people are saying things I fully disagree with, like this concept of AGI – artificial general intelligence – where machines do everything better than humans. I've summarised my position in seven letters: “AGI is BS.” The only way you can believe in AGI in that sense is if your concept of what a human is and what a human mind is is so narrow that you think it's just a narrow range of analytical skills. We are so much more than that. Humans represent almost four billion years of embodied evolution. There's so much about ourselves that we don't know. As incredible as these machines are and will become, there will always be wonderful things humans can do that are different from machines. What I always tell people is: whatever you're doing, don't be a second-rate machine. Be a first-rate human. If you're doing something and a machine is doing that thing much better than you, then shift to something where your unique capacities as a human give you the opportunity to do something better. So yes, I totally agree that the quality of AI-generated stuff will get better. But I think the most creative and successful humans will be the ones who say, “I recognise that this is creating new opportunities, and I'm going to insert my core humanity to do something magical and new.” People are “othering” these technologies, but the technologies themselves are magnificent human-generated artefacts. They're not alien UFOs that landed here. It's a scary moment for creatives, no doubt, because there are things all of us did in the past that machines can now do really well. But this is the moment where the most creative people ask themselves, “What does it mean for me to be a great human?” The pat answers won't apply. In my Virtuoso novel I explore that a lot. 
The idea that “machines don't do creativity” – they will do incredible creativity; it just won't be exactly human creativity. We will be potentially huge beneficiaries of these capabilities, but we really have to believe in and invest in the magic of our core humanity. Where to Find Jamie and His Books Jo: Brilliant. So where can people find you and your books online? Jamie: Thank you so much for asking. My website is jamiemetzl.com – and my books are available everywhere. Jo: Fantastic. Thanks so much for your time, Jamie. That was great. Jamie: Thank you, Joanna.The post Writing The Future, And Being More Human In An Age of AI With Jamie Metzl first appeared on The Creative Penn.

Science Friday
How Alphafold Has Changed Biology Research, 5 Years On

Science Friday

Play Episode Listen Later Nov 18, 2025 18:08


Proteins are crucial for life. They're made of amino acids that "fold" into millions of different shapes, and depending on their structure, they do radically different things in our cells. For a long time, predicting those shapes was considered a grand biological challenge. But in 2020, Google's AI lab DeepMind released AlphaFold, a tool able to accurately predict many of the structures necessary for understanding biological mechanisms in a matter of minutes. In 2024, the AlphaFold team was awarded the Nobel Prize in Chemistry for the advance. Five years after its release, host Ira Flatow checks in on the state of that tech and how it's being used in health research with John Jumper, one of the lead scientists responsible for developing AlphaFold. Guest: John Jumper, scientist at Google DeepMind and co-recipient of the 2024 Nobel Prize in Chemistry. Transcripts for each episode are available within 1-3 days at sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
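The scale of the challenge described here can be made concrete with the classic Levinthal back-of-the-envelope argument: if each residue in a protein chain could adopt even a handful of discrete conformations, the search space explodes exponentially with chain length. A minimal Python sketch (the three-states-per-residue figure is an illustrative assumption, not something from the episode):

```python
# Levinthal-style estimate: if each residue could adopt just a few
# discrete conformations, the search space explodes with chain length.

def conformation_count(n_residues: int, states_per_residue: int = 3) -> int:
    """Rough count of backbone conformations for a chain of n_residues."""
    return states_per_residue ** n_residues

# A small 100-residue protein already has an astronomically large space,
# which is why brute-force search was never an option:
small_protein = conformation_count(100)
print(f"~3^100 = {small_protein:.3e} conformations")
```

This is why a tool that predicts a structure in minutes, rather than searching conformations exhaustively, was such a leap.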

I'm Grand Mam
Bonus Episode - Coded in Curiosity with Science Week

I'm Grand Mam

Play Episode Listen Later Nov 13, 2025 45:23


The theme for Science Week 2025, which is supported by Research Ireland, is 'Then, Today, Tomorrow,' and Kevin and PJ are exploring how Artificial Intelligence has evolved from a futuristic fantasy to one of the most powerful tools in modern science. To guide them through it, they're joined by Professor Alan Smeaton, one of Ireland's leading experts in AI and data analytics at DCU. The conversation takes in AlphaFold and the breakthroughs it's bringing to drug discovery, the use of computer vision to analyse medical scans and biopsies, and the rise of wearable tech that helps us monitor our own health and wellbeing. Alan also shares insights from his own fascinating research using AI and assistance dogs to detect epileptic seizures. Hosted on Acast. See acast.com/privacy for more information.

ATGC doctors' chat
From AlphaFold to RNA Target Prediction: How Is AI Reshaping the Future of Drug Development?

ATGC doctors' chat

Play Episode Listen Later Nov 13, 2025 48:07


This is a crossover episode with What's Next (科技早知道). The 2024 Nobel Prize in Chemistry was awarded to three scientists for their pioneering contributions to protein structure prediction and protein design, a sign that AI has become a core tool in the life sciences, changing how we understand life and reshaping the future of drug development. Today's guest is Professor Yaoqi Zhou (周耀旗) of Shenzhen Bay Laboratory, a participant in and driver of this transformation. He began in academia focused on protein structure prediction; later, sensing the potential and challenges of the RNA field, he shifted his research to RNA structure prediction. He has since taken the entrepreneurial path, leading a team developing small-molecule drugs that target RNA and exploring how to turn basic research into new therapies. In this episode we discuss AlphaFold3 as a protein structure prediction tool: where are its breakthroughs and limitations? Why is RNA an important target for the next generation of drugs? And what, exactly, is AI's role in drug development? [Guest] Yaoqi Zhou (周耀旗), senior investigator at Shenzhen Bay Laboratory and founder of 砺博生物科学 [Timestamps] 02:42 Why is protein structure so important? Solving protein structures is key to understanding the machinery of life 05:47 A brief history of protein structure prediction: template-based --> fragment assembly --> dihedral angles + distance prediction 14:26 "1+2=3": behind AlphaFold's revolutionary leap 17:40 Will structural biologists be replaced? What AlphaFold still can't do 23:26 Why is RNA structure prediction harder? Only 4 bases, unstable structures, scarce known data 29:24 Proteins are just the "marionettes"; RNA is the "puppeteer" 31:56 From targeting proteins to targeting RNA: the success of HIV protease inhibitors and the "smooth keyhole" problem of the KRAS protein 35:49 A milestone for RNA-targeted drugs: Risdiplam, the first small molecule to target RNA 38:50 How do you develop RNA-targeted drugs without structural data? 43:06 AI's real role in drug development: an accelerator, not a revolution 45:39 AI for Science: moving beyond data dependence, back to physics, in search of the "Newton's laws" of the molecular world

Pharma Intelligence Podcasts
Decoding Cell Differentiation: How AI Foundation Models Are Reshaping Regenerative Medicine

Pharma Intelligence Podcasts

Play Episode Listen Later Oct 31, 2025 33:31


What if we could train AI to understand how stem cells become any cell type in the human body? In this episode of the In Vivo Podcast, host David Wild sits down with Micha Breakstone (CEO & Co-founder) and Samantha Dale Strasser (VP of Strategy) from Somite.AI to explore how their company is using foundation models to revolutionize cell therapy development. Somite has pioneered a breakthrough approach that generates cell differentiation data at 1000x the efficiency of traditional methods using proprietary hydrogel capsule technology. By capturing millions of trajectories showing how cells respond to signals over time, they're building DeltaStem—a foundation model that could do for developmental biology what AlphaFold did for protein structure prediction. Topics covered: - How bringing cells to signals (rather than signals to cells) unlocks exponential scale - Why wet lab innovation is just as critical as AI models - Manufacturing optimization: improving purity, reducing variability, and cutting costs - From beta cells for diabetes to brown fat for metabolic disease—the therapeutic pipeline - Why even AI experts underestimate what's coming in the next decade - Lessons from building biotech companies from academic concepts to commercial ventures Whether you're in pharma, biotech, AI or just fascinated by the intersection of technology and human biology, this conversation offers a grounded look at how foundation models are moving from hype to real therapeutic impact.

What's Next|科技早知道
From AlphaFold to RNA Target Prediction: How Is AI Reshaping the Future of Drug Development? | S9E34

What's Next|科技早知道

Play Episode Listen Later Oct 22, 2025 49:24


The 2024 Nobel Prize in Chemistry was awarded to three scientists for their pioneering contributions to protein structure prediction and protein design, a sign that AI has become a core tool in the life sciences, changing how we understand life and reshaping the future of drug development. Today's guest is Professor Yaoqi Zhou (周耀旗) of Shenzhen Bay Laboratory, a participant in and driver of this transformation. He began in academia focused on protein structure prediction; later, sensing the potential and challenges of the RNA field, he shifted his research to RNA structure prediction. He has since taken the entrepreneurial path, leading a team developing small-molecule drugs that target RNA and exploring how to turn basic research into new therapies. In this episode we discuss AlphaFold3 as a protein structure prediction tool: where are its breakthroughs and limitations? Why is RNA an important target for the next generation of drugs? And what, exactly, is AI's role in drug development? Featured in this episode: Yaoqi Zhou (周耀旗), senior investigator at Shenzhen Bay Laboratory and founder of 砺博生物科学; Yaxian, host of What's Next (科技早知道). Main topics: [02:42] Why is protein structure so important? Solving protein structures is key to understanding the machinery of life [05:47] A brief (and highly technical) history of protein structure prediction: template-based --> fragment assembly --> dihedral angles + distance prediction [14:26] "1+2=3": behind AlphaFold's revolutionary leap [17:40] Will structural biologists be replaced? What AlphaFold still can't do [23:26] Why is RNA structure prediction harder? Only 4 bases, unstable structures, scarce known data [29:24] Proteins are just the "marionettes"; RNA is the "puppeteer" [31:56] From targeting proteins to targeting RNA: the success of HIV protease inhibitors and the "smooth keyhole" problem of the KRAS protein [35:49] A milestone for RNA-targeted drugs: Risdiplam, the first small molecule to target RNA [38:50] How do you develop RNA-targeted drugs without structural data? [43:06] AI's real role in drug development: an accelerator, not a revolution [45:39] AI for Science: moving beyond data dependence, back to physics, in search of the "Newton's laws" of the molecular world. Further reading: AlphaFold – an AI program developed by Google DeepMind; AlphaFold2 is known for its revolutionary breakthrough in accurately predicting three-dimensional protein structures, and AlphaFold3 extends this capability to RNA, DNA, and other molecules. CASP (Critical Assessment of protein Structure Prediction) – an international protein structure prediction competition held every two years, the "Olympics" for evaluating the state of the art worldwide. KRAS – an important signaling protein whose mutations are key drivers of several cancers (such as pancreatic and lung cancer); because its surface is smooth and lacks obvious binding pockets, it was long considered an "undruggable" target. SMN protein (Survival of Motor Neuron protein) – loss of this protein causes spinal muscular atrophy (SMA); the world's first RNA-targeted drug works by modulating SMN RNA to raise its protein levels. PCC (Pre-clinical Candidate) – a compound selected after the early discovery stage to enter formal preclinical studies, such as animal safety and pharmacokinetics testing. Credits – Producer: Yaxian; Post-production: 迪卡; Operations: George; Design: 饭团. Business partnerships: contact 声动活泼's business team via the link (https://sourl.cn/9h28kj ) or email business@shengfm.cn. Join 声动活泼: we are currently hiring business-partnership interns, community-operations interns, and BD managers; details via the recruitment link (https://eg76rdcl6g.feishu.cn/docx/XO6bd12aGoI4j0xmAMoc4vS7nBh?from=from_copylink). About 声动活泼: "Colliding with the world through sound" – 声动活泼 is dedicated to providing a steady stream of food for thought. Our other podcasts include 声动早咖啡 (https://www.xiaoyuzhoufm.com/podcast/60de7c003dd577b40d5a40f3), 声东击西 (https://etw.fm/episodes), 吃喝玩乐了不起 (https://www.xiaoyuzhoufm.com/podcast/644b94c494d78eb3f7ae8640), 反潮流俱乐部 (https://www.xiaoyuzhoufm.com/podcast/5e284c37418a84a0462634a4), 泡腾 VC (https://www.xiaoyuzhoufm.com/podcast/5f445cdb9504bbdb77f092e9), 商业WHY酱 (https://www.xiaoyuzhoufm.com/podcast/61315abc73105e8f15080b8a), 跳进兔子洞 (https://therabbithole.fireside.fm/), and 不止金钱 (https://www.xiaoyuzhoufm.com/podcast/65a625966d045a7f5e0b5640). You can find and interact with us on 即刻 (https://okjk.co/Qd43ia), Weibo, and other social media by searching for 声动活泼. We'd love to hear from you by email at ting@sheng.fm. 声小音: https://files.fireside.fm/file/fireside-uploads/images/4/4931937e-0184-4c61-a658-6b03c254754d/gK0pledC.png Scan the QR code to add 声小音 and stay in touch with us beyond the show. Special Guest: 周耀旗.

Data in Biotech
The Future of Co-Folding and Federated Learning with Apheris

Data in Biotech

Play Episode Listen Later Oct 22, 2025 49:08


Robin Roehm, CEO and Co-Founder of Apheris, joins Ross Katz to explore how federated learning is unlocking secure, cross-company collaboration in pharma. Discover how Apheris is enabling biopharma leaders to train cutting-edge co-folding models without sharing sensitive data, why AlphaFold 3 wasn't enough, and what OpenFold 3 means for the future of AI in drug discovery. What You'll Learn in This Episode >> Why federated learning is a game-changer for pharma data sharing and AI-driven research >> How OpenFold 3 builds on AlphaFold's legacy to solve the protein-ligand interaction challenge >> The role of structural benchmarking in model development and validation >> How Apheris enables privacy-preserving collaboration between major pharma players >> The importance of high-quality, proprietary datasets in advancing co-folding model accuracy Meet Our Guest: Robin Roehm is the CEO and Co-Founder of Apheris, a leader in federated data networks for life sciences. With a background in mathematics and computational genomics, Robin is advancing secure AI collaboration in pharma and biotech. About The Host: Ross Katz is Principal and Data Science Lead at CorrDyn. Ross specializes in building intelligent data systems that empower biotech and healthcare organizations to extract insights and drive innovation. Connect with Our Guest: Connect with Robin Roehm on LinkedIn. Connect with Us: Follow the podcast for more insightful discussions on the latest in biotech and data science. Subscribe and leave a review if you enjoyed this episode! Connect with Ross Katz on LinkedIn. Sponsored by: This episode is brought to you by CorrDyn, a data consultancy and the leader in data-driven solutions for biotech and healthcare. Discover how CorrDyn is helping organizations turn data into breakthroughs at CorrDyn.
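For readers new to the idea, federated learning in its simplest form keeps raw data at each site and shares only local model updates, which a coordinator combines. A toy federated-averaging sketch in Python – purely illustrative, and not Apheris's actual implementation or API:

```python
# Toy federated averaging: each site fits a local summary on private data,
# then only the local estimates (never the raw records) are pooled centrally.

def local_update(private_data: list[float]) -> tuple[float, int]:
    """Each site computes a summary locally; raw data never leaves the site."""
    return sum(private_data) / len(private_data), len(private_data)

def federated_average(updates: list[tuple[float, int]]) -> float:
    """Coordinator combines site summaries, weighted by site size."""
    total = sum(n for _, n in updates)
    return sum(mean * n for mean, n in updates) / total

# Three hypothetical sites with private datasets:
site_a = [1.0, 2.0, 3.0]
site_b = [10.0, 20.0]
site_c = [5.0]
updates = [local_update(s) for s in (site_a, site_b, site_c)]
global_mean = federated_average(updates)
print(global_mean)  # equals the mean of the pooled data: 41/6 ≈ 6.833
```

The same weighted-averaging pattern extends to gradient or weight updates in real federated training, where the coordinator never sees the underlying records.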

Qubit Podcast
Zoltán Gáspári: AI is helping to discover the proteins that govern our lives

Qubit Podcast

Play Episode Listen Later Oct 21, 2025 18:27


Proteins influence a great deal, from brain processes to the breakdown of PET bottles, but their complex structure is also an obstacle in drug development. Bioinformatician Zoltán Gáspári spoke about this at the 12th Qubit Live; the talk is available as both a podcast and a video. See omnystudio.com/listener for privacy information.

The Jim Rutt Show
EP 327 Nate Soares on Why Superhuman AI Would Kill Us All

The Jim Rutt Show

Play Episode Listen Later Oct 15, 2025 97:07


Jim talks with Nate Soares about the ideas in his and Eliezer Yudkowsky's book If Anybody Builds It, Everybody Dies: Why Superhuman AI Would Kill Us All. They discuss the book's claim that mitigating existential AI risk should be a top global priority, the idea that LLMs are grown, the opacity of deep learning networks, the Golden Gate activation vector, whether our understanding of deep learning networks might improve enough to prevent catastrophe, goodness as a narrow target, the alignment problem, the problem of pointing minds, whether LLMs are just stochastic parrots, why predicting a corpus often requires more mental machinery than creating a corpus, depth & generalization of skills, wanting as an effective strategy, goal orientation, limitations of training goal pursuit, transient limitations of current AI, protein folding and AlphaFold, the riskiness of automating alignment research, the correlation between capability and more coherent drives, why the authors anchored their argument on transformers & LLMs, the inversion of Moravec's paradox, the geopolitical multipolar trap, making world leaders aware of the issues, a treaty to ban the race to superintelligence, the specific terms of the proposed treaty, a comparison with banning uranium enrichment, why Jim tentatively thinks this proposal is a mistake, a priesthood of the power supply, whether attention is a zero-sum game, and much more. Episode Transcript "Psyop or Insanity or ...? Peter Thiel, the Antichrist, and Our Collapsing Epistemic Commons," by Jim Rutt "On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback," by Marcus Williams et al. "Attention Sinks and Compression Valleys in LLMs are Two Sides of the Same Coin," by Enrique Queipo-de-Llano et al. JRS EP 217 - Ben Goertzel on a New Framework for AGI "A Tentative Draft of a Treaty, With Annotations" Nate Soares is the President of the Machine Intelligence Research Institute. 
He has been working in the field for over a decade, after previous experience at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.

The Foresight Institute Podcast
Eliezer Yudkowsky vs Mark Miller | ASI Risks: Similar premises, opposite conclusions

The Foresight Institute Podcast

Play Episode Listen Later Sep 24, 2025 252:32


What are the best strategies for addressing extreme risks from artificial superintelligence? In this 4-hour conversation, decision theorist Eliezer Yudkowsky and computer scientist Mark Miller discuss their cruxes for disagreement. They examine the future of AI, existential risk, and whether alignment is even possible. Topics include AI risk scenarios, coalition dynamics, secure systems like seL4, hardware exploits like Rowhammer, molecular engineering with AlphaFold, and historical analogies like nuclear arms control. They explore superintelligence governance, multipolar vs singleton futures, and the philosophical challenges of trust, verification, and control in a post-AGI world.Moderated by Christine Peterson, the discussion seeks the least risky strategy for reaching a preferred state amid superintelligent AI risks. Yudkowsky warns of catastrophic outcomes if AGI is not controlled, while Miller advocates decentralizing power and preserving human institutions as AI evolves.The conversation spans AI collaboration, secure operating frameworks, cryptographic separation, and lessons from nuclear non-proliferation. Despite their differences, both aim for a future where AI benefits humanity without posing existential threats. Hosted on Acast. See acast.com/privacy for more information.

a16z
Faster Science, Better Drugs

a16z

Play Episode Listen Later Sep 15, 2025 56:26


Can we make science as fast as software? In this episode, Erik Torenberg talks with Patrick Hsu (cofounder of Arc Institute) and a16z general partner Jorge Conde about Arc's "virtual cells" moonshot, which uses foundation models to simulate biology and guide experiments. They discuss why research is slow, what an AlphaFold-style moment for cell biology could look like, and how AI might improve drug discovery. The conversation also covers hype versus substance in AI for biology, clinical bottlenecks, capital intensity, and how breakthroughs like GLP-1s show the path from science to major business and health impact. Resources: Find Patrick on X: https://x.com/pdhsu Find Jorge on X: https://x.com/JorgeCondeBio Stay Updated: Find a16z on X. Find a16z on LinkedIn. Listen to the a16z Podcast on Spotify. Listen to the a16z Podcast on Apple Podcasts. Follow our host: https://twitter.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Increments
#90 (Reaction) - Disbelieving AI 2027: Responding to "Why We're Not Ready For Superintelligence"

Increments

Play Episode Listen Later Aug 18, 2025 95:32


Always the uncool kids at the table, Ben and Vaden push back against the AGI hype dominating every second episode of every second podcast. We react to "We're not ready for superintelligence" (https://www.youtube.com/watch?v=5KVDDfAkRgc) by 80,000 Hours - a bleak portrayal of the pre and post AGI world. Can Ben keep Vaden's sass in check? Can the 80,000 hours team find enough cubes for AGI? Is Agent-5 listening to you RIGHT NOW? Listener Note: We strongly recommend watching the video for this one, available both on youtube and spotify: - https://www.youtube.com/@incrementspod - https://open.spotify.com/show/1gKKSP5HKT4Nk3i0y4UseB We discuss The incentives of superforecasters Arguments by authority Whether superintelligence is right around the corner The difference between model size and data Are we running out of high quality data? Does training on synthetic data work? The assumptions behind the AGI claims The pitfalls of reasoning from trends References Michael I Jordan (https://people.eecs.berkeley.edu/~jordan/) Neil Lawrence (https://en.wikipedia.org/wiki/Neil_Lawrence) Important technical paper from Jordan pushing back on Doomerism (A Collectivist, Economic Perspective on AI) Jordan article talking about dangers of using AlphaFold data (https://news.berkeley.edu/2023/11/09/how-to-use-ai-for-discovery-without-leading-science-astray/) Nature paper showing you can't use synthetic data to train bigger models (https://www.nature.com/articles/s41586-024-07566-y) Paper estimating when training data will run out (https://arxiv.org/abs/2211.04325v2) (Coincidentally enough, sometime between 2027-2028) Socials Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani Come join our discord server! DM us on twitter or send us an email to get a supersecret link Become a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments). 
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ) But how many cubes until we get to AGI though? Send a few of your cubes over to incrementspodcast@gmail.com Episode header image from here (https://www.youtube.com/watch?app=desktop&v=0Jsrux_XY8Y&ab_channel=TheAlgorithmicVoice).
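One way to ground the episode's model-size-versus-data discussion is the rough compute-optimal heuristic from the Chinchilla scaling work: on the order of 20 training tokens per model parameter. A hedged back-of-the-envelope in Python (the 20:1 ratio is an approximate published rule of thumb, and the parameter counts below are illustrative):

```python
# Rough compute-optimal data requirement per the "Chinchilla" heuristic:
# roughly 20 training tokens per model parameter (an approximation, not exact).

TOKENS_PER_PARAM = 20  # rule of thumb from the Chinchilla scaling results

def optimal_tokens(n_params: float) -> float:
    """Approximate compute-optimal training tokens for a model of n_params."""
    return TOKENS_PER_PARAM * n_params

# Illustrative model sizes, chosen for this sketch:
for params in (70e9, 400e9, 1e12):
    print(f"{params:.0e} params -> ~{optimal_tokens(params):.1e} tokens")
```

At trillion-parameter scale this heuristic implies tens of trillions of tokens, which is why estimates of when high-quality training data runs out matter to the scaling debate.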

Mon Carnet, l'actu numérique
{RÉFLEXION} - Google takes on human diseases

Mon Carnet, l'actu numérique

Play Episode Listen Later Jul 30, 2025 8:49


Stéphane Ricoul reports that Google, through its subsidiary Isomorphic Labs, is preparing to test on humans the first drugs designed entirely with artificial intelligence. Building on AlphaFold, its protein structure prediction tool, the company aims to revolutionize pharmaceutical development, a process known for being long, costly, and risky.

The Jim Rutt Show
EP 312 Lee Cronin on Automating Chemistry

The Jim Rutt Show

Play Episode Listen Later Jul 24, 2025 64:49


Jim talks with Lee Cronin about Chemify, his startup that aims to automate chemistry through "chemifarms" that turn code into molecules. They discuss Chemify as an AWS for chemistry, the development of a chemical programming language & its evolution to Turing completeness, quantum vs classical chemistry computation, open source tools & academic access, robotics & automation in chemistry, catalyst discovery & optimization, integration with tools like AlphaFold, business models, venture capital funding, supply chain implications, distributed manufacturing, personalized medicine possibilities, and much more. Episode Transcript Currents 100: Sara Walker and Lee Cronin on Time as an Object Chemify Lee Cronin is a chemist. He is the Regius Professor of Chemistry at the University of Glasgow and the Founder & CEO of Chemify. He is known for his approach to the digitization of chemistry and developing digital-to-chemical transformation known as Chemputing which can turn code into reactions and molecules. He has also developed a new theory for evolution and selection called assembly theory which aims to quantify and explain how selection can occur in chemistry before biology. Lee is also exploring how chemical systems can compute, and what is needed for the evolution of intelligence, as well as designing a new type of computational system that uses information encoded in chemical reactions and molecules.

The Next Wave - Your Chief A.I. Officer
Inside Google's AI Lab: Drug Discovery, World AI Model & AlphaEvolve

The Next Wave - Your Chief A.I. Officer

Play Episode Listen Later Jul 22, 2025 18:29


Want the ultimate guide to Google's Gemini? Get it here: https://clickhubspot.com/evt Episode 68: How is Google DeepMind pushing the boundaries of AI to tackle drug discovery, robotics, and even autonomous AI agents? Matt Wolfe (https://x.com/mreflow) sits down with DeepMind CEO Sir Demis Hassabis (https://x.com/demishassabis), a neuroscientist, AI pioneer, Nobel laureate, and knight, to peel back the curtain on Google's latest advances—and the ethical challenges that come with them. In this episode, Matt and Demis go deep on what's powering the newest generation of AI agents, how models like AlphaFold and AlphaEvolve are accelerating scientific breakthroughs, and why world models are so important for the future of robotics. Demis shares why he believes AI is poised to reshape society—for better and for worse—and what Google is doing to build public trust in its systems. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) AI Revolutionizing Drug Discovery (03:35) Advanced Model Training Methods (07:06) Accelerating Drug Discovery with AI (11:12) AI's Responsible Role in Society (13:56) AI Revolutionizing Science & Life — Mentions: Sir Demis Hassabis: https://www.linkedin.com/in/demishassabis/ Google DeepMind: https://deepmind.google/ AlphaFold: https://alphafold.ebi.ac.uk/ AlphaEvolve: https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/ Isomorphic Labs: https://www.isomorphiclabs.com/ Android XR glasses: http://blog.google/products/android/android-xr-gemini-glasses-headsets/ Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot 
Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano

Middle Tech
319 | Lexington AI Meetup Live: The Story of AlphaFold with Founding Member Steve Crossan

Middle Tech

Play Episode Listen Later Jul 21, 2025 57:28


In this special live recording from our Lexington AI Meetup, we sit down with Steve Crossan, a founding member of Google DeepMind's AlphaFold team and former Google product leader. Steve helped launch groundbreaking AI research as part of the team that built AlphaFold, the model that cracked one of biology's grand challenges. AlphaFold can predict a protein's 3D structure using only its amino acid sequence - a task that once took scientists months or years, now completed in minutes. With the release of AlphaFold 3, the model now maps not just proteins, but how they interact with DNA, RNA, drugs, and antibodies - a huge leap for drug discovery and synthetic biology. Steve breaks down the origin story of AlphaFold, the future of AI-powered science, and what's next for healthcare, drug development, and beyond. A special thank you to Brent Seales and Randall Stevens for helping us coordinate Steve's talk during his visit in Lexington! If you'd like to stay up to date about upcoming Middle Tech events, subscribe to our newsletter at middletech.beehiiv.com.

ASCO Daily News
From Clinic to Clinical Trials: Responsible AI Integration in Oncology

ASCO Daily News

Play Episode Listen Later Jul 10, 2025 24:01


Dr. Paul Hanona and Dr. Arturo Loaiza-Bonilla discuss how to safely and smartly integrate AI into the clinical workflow and tap its potential to improve patient-centered care, drug development, and access to clinical trials. TRANSCRIPT Dr. Paul Hanona: Hello, I'm Dr. Paul Hanona, your guest host of the ASCO Daily News Podcast today. I am a medical oncologist as well as a content creator @DoctorDiscover, and I'm delighted to be joined today by Dr. Arturo Loaiza-Bonilla, the chief of hematology and oncology at St. Luke's University Health Network. Dr. Bonilla is also the co-founder and chief medical officer at Massive Bio, an AI-driven platform that matches patients with clinical trials and novel therapies. Dr. Loaiza-Bonilla will share his unique perspective on the potential of artificial intelligence to advance precision oncology, especially through clinical trials and research, and other key advancements in AI that are transforming the oncology field. Our full disclosures are available in the transcript of the episode. Dr. Bonilla, it's great to be speaking with you today. Thanks for being here. Dr. Arturo Loaiza-Bonilla: Oh, thank you so much, Dr. Hanona. Paul, it's always great to have a conversation. Looking forward to a great one today. Dr. Paul Hanona: Absolutely. Let's just jump right into it. Let's talk about the way that we see AI being embedded in our clinical workflow as oncologists. What are some practical ways to use AI? Dr. Arturo Loaiza-Bonilla: To me, responsible AI integration in oncology is one of those that's focused on one principle to me, which is clinical purpose is first, instead of the algorithm or whatever technology we're going to be using. If we look at the best models in the world, they're really irrelevant unless we really solve a real day-to-day challenge, either when we're talking to patients in the clinic or in the infusion chair or making decision support. 
Currently, what I'm doing the most is focusing on solutions that are saving us time to be more productive and spend more time with our patients. So, for example, we're using ambient AI for appropriate documentation in real time with our patients. We're leveraging certain tools to assess for potential admission or readmission of patients who have certain conditions as well. And it's all about combining the listening of physicians like ourselves who are end users, those who create those algorithms, data scientists, and patient advocates, and even regulators, before they even write any single line of code. I felt that on my own, you know, entrepreneurial aspects, but I think it's an ethos that we should all follow. And I think that AI shouldn't be just bolted on later. We always have to look at workflows and try to look, for example, at clinical trial matching, which is something I'm very passionate about. We need to make sure that first, it's easier to access for patients, that oncologists like myself can go into the interface and be able to pull the data in real time when you really need it, and you don't get all this fatigue alerts. To me, that's the responsible way of doing so. Those are like the opportunities, right? So, the challenge is how we can make this happen in a meaningful way – we're just not reacting to like a black box suggestion or something that we have no idea why it came up to be. So, in terms of success – and I can tell you probably two stories of things that we know we're seeing successful – we all work closely with radiation oncologists, right? So, there are now these tools, for example, of automated contouring in radiation oncology, and some of these solutions were brought up in different meetings, including the last ASCO meeting. 
But overall, we know that transformer-based segmentation tools (transformer is just the specific architecture of the machine learning algorithm) have been able to dramatically reduce the time colleagues spend delineating targets for radiation oncology. So, comparing the target versus the normal tissue, which sometimes takes many hours, now we can optimize things over 60%, sometimes even in minutes. So, this is not just responsible, but it's also an efficiency win, it's a precision win, and we're using it to adapt even mid-course in response to tumor shrinkage. Another success that I think is relevant is, for example, on the clinical trial matching side. We've been working on that and, you know, I don't want to preach to the choir here, but having the ability for us to structure data in real time using these tools, being able to extract information on biomarkers, and then show that multi-agentic AI is superior to what we call zero-shot (just throwing it into ChatGPT or any other algorithm) – using the same tools but fine-tuned to the point that we can be very efficient and actually reliable at the level of almost a research coordinator – is not just theory. Now, it can change lives because we can get patients enrolled in clinical trials and activated in different places wherever the patient may be. I know it's like a long answer on that, but, you know, as we talk about responsible AI, that's important. And in terms of what keeps me up at night on this: data drift and biases, right? So, imaging protocols change, labs switch between different vendors, or a patient has issues with new emerging data points. And health systems serve vastly different populations. So, if our models are trained in one context and deployed in another, then the output can be really inaccurate.
So, the idea is to become a collaborative approach where we can use federated learning and patient-centricity so we can be much more efficient in developing those models that account for all the populations, and any retraining that is used based on data can be diverse enough that it represents all of us and we can be treated in a very good, appropriate way. So, if a clinician doesn't understand why a recommendation is made, as you probably know, you probably don't trust it, and we shouldn't expect them to. So, I think this is the next wave of the future. We need to make sure that we account for all those things. Dr. Paul Hanona: Absolutely. And even the part about the clinical trials, I want to dive a little bit more into in a few questions. I just kind of wanted to make a quick comment. Like you said, some of the prevalent things that I see are the ambient scribes. It seems like that's really taken off in the last year, and it seems like it's improving at a pretty dramatic speed as well. I wonder how quickly that'll get adopted by the majority of physicians or practitioners in general throughout the country. And you also mentioned things with AI tools regarding helping regulators move things quicker, even the radiation oncologist, helping them in their workflow with contouring and what else they might have to do. And again, the clinical trials thing will be quite interesting to get into. The first question I had subsequent to that is just more so when you have large datasets. And this pertains to two things: the paper that you published recently regarding different ways to use AI in the space of oncology referred to drug development, the way that we look at how we design drugs, specifically anticancer drugs, is pretty cumbersome. The steps that you have to take to design something, to make sure that one chemical will fit into the right chemical or the structure of the molecule, that takes a lot of time to tinker with. 
What are your thoughts on AI tools to help accelerate drug development? Dr. Arturo Loaiza-Bonilla: Yes, that's the Holy Grail and something that I feel we should dedicate as much time and effort as possible because it relies on multimodality. It cannot be solved by just looking at patient histories. It cannot be solved by just looking at the tissue alone. It's combining all these different datasets and being able to understand the microenvironment, the patient condition and prior treatments, and how dynamic changes that we do through interventions and also exposome – the things that happen outside of the patient's own control – can be leveraged to determine like what's the best next step in terms of drugs. So, the ones that we heard the news the most is, for example, the Nobel Prize-winning [for Chemistry awarded to Demis Hassabis and John Jumper for] AlphaFold, an AI system that predicts protein structures right? So, we solved this very interesting concept of protein folding where, in the past, it would take the history of the known universe, basically – what's called the Levinthal's paradox – to be able to just predict on amino acid structure alone or the sequence alone, the way that three-dimensionally the proteins will fold. So, with that problem being solved and the Nobel Prize being won, the next step is, “Okay, now we know how this protein is there and just by sequence, how can we really understand any new drug that can be used as a candidate and leverage all the data that has been done for many years of testing against a specific protein or a specific gene or knockouts and what not?” So, this is the future of oncology and where we're probably seeing a lot of investments on that. The key challenge here is mostly working on the side of not just looking at pathology, but leveraging this digital pathology with whole slide imaging and identifying the microenvironment of that specific tissue. There's a number of efforts currently being done. 
One is not just using H&E (hematoxylin and eosin) slides alone; with whole-slide imaging, now we can use expression profiles, spatial transcriptomics, and whole exome sequencing in the same space and use this transformer technology in a multimodality approach where we already know the slide or the pathology, but can we use that to understand, like, if I knock out this gene, how is the microenvironment going to change, to see if an immunotherapy may work better, right? If we can make a microenvironment more reactive towards a cytotoxic T cell profile, for example. So, that is the way we're really seeing the field moving forward, using multimodality for drug discovery. So, the FDA now seems to be very eager to support those initiatives, so that's of course welcome. And now the key thing is the investment to do this in a meaningful way so we can see those candidates that we're seeing from different companies now being leveraged for rare disease, for things where it's going to be almost impossible to collect enough data, and make it efficient by using these algorithms that work with multiple masking – basically, what they do is they mask all the features and force the algorithm to find solutions based on the specific inputs or prompts we're doing. So, I'm very excited about that, and I think we're going to be seeing that in the future. Dr. Paul Hanona: So, essentially, in a nutshell, we're saying we have the cancer, which is maybe a dandelion in a field of grass, and we want to see the grass that's surrounding the dandelion, which is the pathology slides. The problem is, to the human eye, it's almost impossible to look at every single piece of grass that's surrounding the dandelion. And so, with tools like AI, we can greatly accelerate our study of the microenvironment, or the grass that's surrounding the dandelion, and better tailor therapy. Otherwise, like you said, to truly generate a drug, this would take years and years.
We just don't have the throughput to get to answers like that unless we have something like AI to help us. Dr. Arturo Loaiza-Bonilla: Correct. Dr. Paul Hanona: And then, clinical trials. Now, this is an interesting conversation because if you ever look up our national guidelines as oncologists, there's always a mention of, if treatment fails, consider clinical trials. Or in the really aggressive cancers, sometimes you might just start out with clinical trials. You don't even give the standard first-line therapy because of how ineffective it is. There are a few issues with clinical trials that people might not be aware of, starting with the fact that the majority of patients who should be on clinical trials are never given the chance to be on clinical trials, whether that's because of proximity, right, they might live somewhere that's far from the institution, or for whatever reason, they don't qualify for the clinical trial, they don't meet the strict inclusion criteria.  But a reason you mentioned early on is that it's simply impossible for someone to be aware of every single clinical trial that's out there. And then even if you are aware of those clinical trials, to actually find the sites and put in the time could take hours. And so, how is AI going to revolutionize that? Because in my mind, it's not that we're inventing a new tool. Clinical trials have always been available. We just can't access them. So, if we have a tool that helps with access, wouldn't that be huge? Dr. Arturo Loaiza-Bonilla: Correct. And that has been one of my passions. And for those who know me and follow me, and we've spoken about it in different settings, that's something that I think we can solve. This other paradox, which is the clinical trial enrollment paradox, right? We have tens of thousands of clinical trials available with millions of patients eager to learn about trials, but we don't enroll enough, and many trials close because of lack of enrollment.
It is completely paradoxical and it's because of that misalignment because patients don't know where to go for trials and sites don't know what patients they can help because they haven't reached their doors yet. So, the solution has to be patient-centric, right? We have to put the patient at the center of the equation. And that was precisely what we had been discussing during the ASCO meeting. There was an ASCO Education Session where we talked about digital prescreening hubs, where we, in a patient-centric manner, the same way we look for Uber, Instacart, any solution that you may think of that you want something that can be leveraged in real time, we can use these real-world data streams from the patient directly, from hospitals, from pathology labs, from genomics companies, to continuously screen patients who can match to the inclusion/exclusion criteria of unique trials. So, when the patient walks into the clinic, the system already knows if there's a trial and alerts the site proactively. The patient can actually also do decentralization. So, there's a number of decentralized clinical trial solutions that are using what I call the “click and mortar” approach, which is basically the patient is checking digitally and then goes to the site to activate. We can also have the click and mortar in the bidirectional way where the patient is engaged in person and then you give the solution like the ones that are being offered on things that we're doing at Massive Bio and beyond, which is having the patient to access all that information and then they make decisions and enroll when the time is right.  As I mentioned earlier, there is this concept drift where clinical trials open and close, the patient line of therapy changes, new approvals come in and out, and sites may not be available at a given time but may be later. 
So, having those real-time alerts, using tools that are already able to extract data from summarization we already have in different settings and doing this natural language ingestion, we can move from manual chart review, which is extremely cumbersome, takes forever, and leads to a lot of one-time assessments with very high screen failure rates, to a real-time dynamic approach where the patient, as they get closer to that eligibility criteria, gets engaged. And those tools can be built to activate trials, audit trials, and make them better and accessible to patients. And something that we know is, for example, that 91%-plus of Americans live close to either a pharmacy or an imaging center. So, imagine that we can potentially activate certain of those trials in those locations. So, there's a number of pharmacies – specialty pharmacies, Walgreens, and sometimes CVS – trying to do some of those efforts. So, I think the sky's the limit in terms of us working together. And we've been talking with cooperative groups; they're all interested in those efforts as well, in getting patients digitally enabled and then activating the same way we activate the NCTN network of the cooperative groups, which is almost just-in-time. You can activate a trial the patient is eligible for – we get all these breakthroughs from the NIH and NCI – and just activate it at my site within a week or so, as long as we have the understanding of the protocol. So, using clinical trial matching in a digitally enabled way and then activating in that same fashion, not only for NCTN studies but all the studies that we have available, will be the key of the future through those prescreening hubs.
So, I think now we're at this very important time where collaboration is the important part, and having this silo-breaking approach with interoperability, where we can leverage data from any data source and from any electronic medical record, is going to be essential for us to move forward, because now we have the tools to do so with our phones, with our interests, and with the multiple clinical trials that are coming into the pipelines. Dr. Paul Hanona: I just want to point out that the way you described the process involves several variables that practitioners often don't think about. We don't realize the 15 steps that are happening in the background. But just as a clarifier, how much time is it taking now to get one patient enrolled on a clinical trial? Is it on the order of maybe 5 to 10 hours for one patient by the time the manual chart review happens, by the time the matching happens, the calls go out, the sign-up, all this? And how much time do you think a tool that could match those trials quicker and get you enrolled quicker could save? Would it be maybe an hour instead of 15 hours? What's your thought process on that? Dr. Arturo Loaiza-Bonilla: Yeah, exactly. So, one is the matching, the other one is the enrollment, which, as you mentioned, is very important. So, it can take, as you said, probably between 4 days and sometimes 30 days. Sometimes that's how long it takes for all the things to be parsed out in terms of logistics and things that could be done now agentically. So, we can use agents to solve those different steps that may take multiple individuals. We can just do it as a supply chain approach where all those different steps can be done by a single agent in a simultaneous fashion and then we can get things much faster. With an AI-based solution using these frontier models and multi-agentic AI – and we presented some of this data at ASCO as well – you can do 5,000 patients in an hour, right?
So, matching is going to take about an hour, and maximum enrollment could be 7 days for those 5,000 patients if it was done at scale in a multi-level approach where we have all the trials available. Dr. Paul Hanona: No, definitely a very exciting aspect of our future as oncologists. It's one thing to have really neat, novel mechanisms of treatment, but what good is it if we can't actually get it to people who need it? I'm very much looking forward to the future of that. One of the last questions I want to ask you: another prevalent way that people use AI is just simply looking up questions, right? So, traditionally, the workflow for oncologists is maybe going to national guidelines, looking up the stage of the cancer, seeing what treatments are available, and then referencing the papers and looking at who was included, who wasn't included, the side effects to be aware of, and sort of coming up with a decision as to how to treat a cancer patient. But now, just in the last few years, we've had several tools become available that make asking questions and getting answers easier, whether that's something like OpenAI's tools or Perplexity or Doximity or OpenEvidence, or even ASCO's Guidelines Assistant, which draws from their own guidelines as to how to treat different cancers. Do you see these replacing traditional sources? Do you see them saving us a lot more time so that we can be more productive in clinic? What do you think is the role that they're going to play in patient care? Dr. Arturo Loaiza-Bonilla: Such a relevant question, particularly at this time, because these AI-enabled query tools are coming left and right and becoming increasingly common in our daily workflows. So, traditionally, when we go and look at national guidelines, we try to understand the context ourselves and then make treatment decisions accordingly. But that is a lot of process that now AI is helping us to solve.
So, at face value, it seems like an efficiency win, but in many cases, I personally evaluate platforms as the chief of hem/onc at St. Luke's and also having led the digital engagement things through Massive Bio and trying to put things together, I can tell you this: not all tools are created equal. In cancer care, each data point can mean the difference between cure and progression, so we cannot really take a lot of shortcuts in this case or have unverified output. So, the tools are helpful, but it has to be grounded in truth, in trusted data sources, and they need to be continuously updated with, like, ASCO and NCCN and others. So, the reason why the ASCO Guidelines Assistant, for instance, works is because it builds on all these recommendations, is assessed by end users like ourselves. So, that kind of verification is critical, right? We're entering a phase where even the source material may be AI-generated. So, the role of human expert validation is really actually more important, not less important. You know, generalist LLMs, even when fine-tuned, they may not be enough. You can pull a few API calls from PubMed, etc., but what we need now is specialized, context-aware, agentic tools that can interpret multimodal and real-time clinical inputs. So, something that we are continuing to check on and very relevant to have entities and bodies like ASCO looking into this so they can help us to be really efficient and really help our patients. Dr. Paul Hanona: Dr. Bonilla, what do you want to leave the listener with in terms of the future direction of AI, things that we should be cautious about, and things that we should be optimistic about? Dr. Arturo Loaiza-Bonilla: Looking 5 years ahead, I think there's enormous promise. As you know, I'm an AI enthusiast, but always, there's a few priorities that I think – 3 of them, I think – we need to tackle head-on. First is algorithmic equity. 
So, most AI tools today are trained on data from academic medical centers but not necessarily from community practices or underrepresented populations, particularly when you're looking at radiology, pathology, and whatnot. So, those blind spots need to be filled so we can eliminate a lot of disparities in cancer care. So, frameworks that incentivize data sharing while protecting it, using federated models and other things we can optimize, are key. The second one is governance across the lifecycle. So, you know, AI is not really static. So, unlike a drug that is approved and just, you know, always works, AI changes. So, we need to make sure that we have tools that are able to retrain and recall when things degrade or models drift. So, we need to use up-to-date AI for clinical practice, so we are going to be in constant revalidation, and we need to make it really easy to do. And lastly, the human-AI interface. You know, clinicians don't need more noise, and we don't need more black boxes. We need decision support that is clear, that we can interpret, and that is actionable. “Why are you using this? Why did we choose this drug? Why this dose? Why now?” So, all these things are going to help us, and that allows us to trace evidence with a single click. So, I always call it back to Moravec's paradox, where, you know, evolution invested so much energy in our sensory-neural perception and dexterity. That's what we use when we're taking care of patients. We can use AI to really be a force to help us be better clinicians and not to replace us. So, if we get this right and we design for transparency, trust, inclusion, etc., it will never replace our work, which is so important; instead, we can actually take care of patients in a way that is personalized, timely, and equitable. So, all those things are what get me excited every single day about these conversations on AI. Dr. Paul Hanona: All great thoughts, Dr. Bonilla.
I'm very excited to see how this field evolves. I'm excited to see how oncologists really come to this field. I think with technology, there's always a bit of a lag in adopting it, but I think if we jump on board and grow with it, we can do amazing things for the field of oncology in general. Thank you for the advancements that you've made in your own career in the field of AI and oncology and just ultimately with the hopeful outcomes of improving patient care, especially cancer patients. Dr. Arturo Loaiza-Bonilla: Thank you so much, Dr. Hanona. Dr. Paul Hanona: Thanks to our listeners for your time today. If you value the insights that you hear on ASCO Daily News Podcast, please take a moment to rate, review, and subscribe wherever you get your podcasts. Disclaimer: The purpose of this podcast is to educate and to inform. This is not a substitute for professional medical care and is not intended for use in the diagnosis or treatment of individual conditions. Guests on this podcast express their own opinions, experience, and conclusions. Guest statements on the podcast do not express the opinions of ASCO. The mention of any product, service, organization, activity, or therapy should not be construed as an ASCO endorsement.
More on today's speakers:
Dr. Arturo Loaiza-Bonilla @DrBonillaOnc
Dr. Paul Hanona @DoctorDiscover on YouTube
Follow ASCO on social media:
@ASCO on Twitter
ASCO on Facebook
ASCO on LinkedIn
ASCO on BlueSky
Disclosures:
Paul Hanona: No relationships to disclose.
Dr. Arturo Loaiza-Bonilla:
Leadership: Massive Bio
Stock & Other Ownership Interests: Massive Bio
Consulting or Advisory Role: Massive Bio, Bayer, PSI, BrightInsight, CardinalHealth, Pfizer, AstraZeneca, Medscape
Speakers' Bureau: Guardant Health, Ipsen, AstraZeneca/Daiichi Sankyo, Natera
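The digital prescreening idea discussed in this episode (structured patient data checked against trial eligibility criteria, re-run whenever trials open, close, or a patient's line of therapy changes) can be sketched as a toy rule-based filter. Everything below, trial IDs, biomarker names, field names, and thresholds included, is a hypothetical illustration, not any vendor's actual schema or matching algorithm.

```python
# Toy sketch of rule-based clinical trial eligibility prescreening.
# All criteria and identifiers are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Trial:
    nct_id: str
    cancer_type: str
    required_biomarkers: set  # every biomarker here must be present in the patient
    max_prior_lines: int
    open_for_enrollment: bool = True


@dataclass
class Patient:
    cancer_type: str
    biomarkers: set
    prior_lines: int


def is_eligible(patient: Patient, trial: Trial) -> bool:
    """Return True only if the patient passes every (simplified) criterion."""
    return (
        trial.open_for_enrollment
        and patient.cancer_type == trial.cancer_type
        and trial.required_biomarkers <= patient.biomarkers  # subset check
        and patient.prior_lines <= trial.max_prior_lines
    )


def match_trials(patient: Patient, trials: list) -> list:
    """Re-runnable prescreen: call again whenever trial status or patient data change."""
    return [t.nct_id for t in trials if is_eligible(patient, t)]


trials = [
    Trial("NCT-0001", "NSCLC", {"EGFR"}, max_prior_lines=2),
    Trial("NCT-0002", "NSCLC", {"KRAS G12C"}, max_prior_lines=1),
    Trial("NCT-0003", "CRC", {"MSI-H"}, max_prior_lines=3),
]
patient = Patient("NSCLC", {"EGFR", "TP53"}, prior_lines=1)
print(match_trials(patient, trials))  # ['NCT-0001']
```

In a real prescreening hub the criteria would be far richer (labs, performance status, line-of-therapy logic, geography), and the match step would be triggered continuously as records update; the biomarker subset check is the key simplification here.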

Artificial Intelligence in Industry with Daniel Faggella
Building AI Systems That Think Like Scientists in Life Sciences - Annabel Romero of Deloitte

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Jun 18, 2025 31:01


Today's guest is Annabel Romero, Specialist Leader focusing on AI for Drug Discovery at Deloitte and a structural biologist by training. Deloitte is a global consulting firm known for its work in digital transformation, data strategy, and AI adoption across regulated industries. Annabel joins Emerj Editorial Director Matthew DeMello to explore how AI systems are being designed to think more like scientists—particularly in protein modeling and life sciences research. She shares how tools like AlphaFold and large language models are accelerating drug targeting, predicting allergen cross-reactivity, and translating learnings from human biology to agricultural innovation. This episode is sponsored by Deloitte. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the 'AI in Business' podcast!

Nobel Prize Conversations
John Jumper: Nobel Prize Conversations

Nobel Prize Conversations

Play Episode Listen Later Jun 18, 2025 44:21


“I really love the notion of contributing something to physics.” — Chemistry laureate John Jumper has always been passionate about science and understanding the world. With the AI tool AlphaFold, he and his co-laureate Demis Hassabis have made it possible to predict protein structures. In this podcast conversation, Jumper speaks about the excitement of seeing how AI can help us more in the future. Jumper also shares his scientific journey and how he ended up working with AlphaFold. He describes a special memory from the 2018 CASP conference where AlphaFold was presented for the first time. Another life-changing moment was the announcement of the Nobel Prize in Chemistry in October 2024 – Jumper tells us how his life has changed since then. Through their lives and work, failures and successes – get to know the individuals who have been awarded the Nobel Prize on the Nobel Prize Conversations podcast. Find it on Acast, or wherever you listen to pods. https://linktr.ee/NobelPrizeConversations © Nobel Prize Outreach. Hosted on Acast. See acast.com/privacy for more information.

StarTalk Radio
Curing All Disease with AI with Max Jaderberg

StarTalk Radio

Play Episode Listen Later May 30, 2025 49:36


Can AI help us model biology down to the molecular level? Neil deGrasse Tyson, Chuck Nice, and Gary O'Reilly learn about Nobel Prize-winning AlphaFold, the protein folding problem, and how solving it could end disease with AI researcher, Max Jaderberg. NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/curing-all-disease-with-ai-with-max-jaderberg/ Thanks to our Patrons Riley r, pesketti, Lindsay Vanlerberg, Andreas, Silvia Valentine, Brazen Rigsby, Marc, Lyda Swanston, Kevin Henry, Roberto Reyes, Cadexn, Cassandra Shanklin, Stan Adamson, Will Slade, Zach VanderGraaff, Tom Spalango, Laticia Edmonds, jason scott, Jigar Gada, Robert Jensen, Matt D., TOL, Thomas McDaniel, Sr., Ryan Ramsey, truthmind, Aaron TInker, George Assaf, Dante Ruzinok, Jonathan Ford, Just Ernst, David Eli Janes, Tamil, Sarah, Earnest Lee, Craig Hanson, Rob, Be Love, Brandon Wilson, TJ Kellysawyer, Bodhi Animations, Dave P., Christina Williams, Ivaylo Vartigorov, Roy Mitsuoka (@surflightroy), John Brendel, Moises Zorrilla, deborah shaw, Jim Muoio, Tahj Ward, Phil, Alex, Brian D. Smith, Nate Barmore, John J Lopez, Raphael Velazquez Cruz, Catboi Air, Jelly Mint, Audie Cruz for supporting us this week. Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 527: AI's First Chapter: Why Generative AI Is Only the Beginning

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later May 16, 2025 30:09


Think AI is hitting a wall? Nope. This is just the start. Actually, we're at the first chapter. Here's what that means, and how you can move your company ahead.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the conversation
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Generative AI's current phase
Meta's in-house AI chips development
OpenAI's new developer tools
Day zero of AI and future prospects
Reinforcement learning advancements
Emergent reasoning capabilities in AI
Business implications of AI advancements
AI in healthcare and science
Timestamps:
00:00 Day Zero of AI
03:31 AI Tools Enhance Customization & Access
09:02 Reinforcement Learning Enhances AI Reasoning
11:27 Agentic AI: The Future of Tasks
15:59 Tech Potential vs. Everyday Utilization
18:48 AI Models Offer Broad Benefits
23:15 "Generative AI: Optimism and Oversight"
27:08 Generative AI vs. Domain-Specific AI
29:24 Superhuman AI: Next Frontier
Keywords: Generative AI, Fortune 100 leaders, ChatGPT, Microsoft Copilot, enterprise companies, day zero of AI, livestream podcast, free daily newsletter, leveraging AI, capital expenditures, Meta AI chips, Nvidia, Taiwan's TSMC, AI infrastructure investments, Amazon, Google, Microsoft, OpenAI, responses API, agents SDK, legal research, customer support, deep research, agentic AI, supervised learning, reinforcement learning, language models, health care, computational biology, AlphaFold, protein folding prediction.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner

Impact Theory with Tom Bilyeu
How We Really Get to Mars: Space Travel, Human Survival, and the Next 100 Years of Society | Andy Weir PT 2

Impact Theory with Tom Bilyeu

Play Episode Listen Later May 8, 2025 56:02


In Part 2 of Tom's wide-ranging conversation with Andy Weir, Andy explores how AI will transform material science, medicine, biotechnology, and possibly even human evolution itself. From AI-designed drugs and custom gene editing to the ethical dilemmas of “designer babies” and the future of cosmetic self-alteration, Andy contemplates what these advances could mean for human identity, equality, and society's deepest values. The episode then hurtles into the far future, weighing the prospects of artificial superintelligence, AI alignment, and the ultimate “tool or agent” debate. Tom and Andy touch on open versus closed source AI, existential risk, and what humanity's historical track record tells us about technology.
SHOWNOTES
22:08 AI's leap in material science, biotech, and AlphaFold's revolution
28:49 Hardware bottlenecks and the coming “AI card” revolution
32:09 Efficiency breakthroughs, compression, and training paradigm shifts
36:10 How new materials could propel us to low Earth orbit
38:39 AI-designed proteins: The promise and danger within biology
39:47 The ethics of designer babies: Health, intelligence, and consent
46:38 The coming age of “cosmetic ethnicity” and identity fluidity
47:29 Body hacking: Social and economic consequences, from eating to politics
48:32 Why society will push—and resist—genetic modifications
49:34 The looming “intelligence arms race” between humans and AI
50:15 Why Andy doubts the need to compete with AI; the “bulldozer analogy”
57:15 Caution and optimism: Why Andy expects a post-scarcity AI future
58:10 Why “control” is likely to stay with humans—unless we hand it over
1:01:04 Open source debate, narrative control, and algorithmic bias
1:28:00 What excites Andy: Self-driving cars and societal revolution
1:33:57 Andy on writing, his approach to AI, and what's next for his books
1:35:29 Where to follow Andy Weir
FOLLOW ANDY WEIR:
Twitter/X: @andyweirauthor
Facebook: Andy Weir
CHECK OUT OUR SPONSORS
ButcherBox: Ready to level up your meals?
Go to ⁠https://ButcherBox.com/impact⁠ to get $20 off your first box and FREE bacon for life with the Bilyeu Box! Vital Proteins: Get 20% off by going to ⁠https://www.vitalproteins.com⁠ and entering promo code IMPACT at check out Netsuite: Download the CFO's Guide to AI and Machine Learning at ⁠https://NetSuite.com/THEORY⁠ iTrust Capital: Use code IMPACTGO when you sign up and fund your account to get a $100 bonus at ⁠https://www.itrustcapital.com/tombilyeu⁠  Mint Mobile: If you like your money, Mint Mobile is for you. Shop plans at ⁠https://mintmobile.com/impact.⁠  DISCLAIMER: Upfront payment of $45 for 3-month 5 gigabyte plan required (equivalent to $15/mo.). New customer offer for first 3 months only, then full-price plan options available. Taxes & fees extra. See MINT MOBILE for details. What's up, everybody? It's Tom Bilyeu here: If you want my help... STARTING a business:⁠ join me here at ZERO TO FOUNDER⁠ SCALING a business:⁠ see if you qualify here.⁠ Get my battle-tested strategies and insights delivered weekly to your inbox:⁠ sign up here.⁠ ********************************************************************** If you're serious about leveling up your life, I urge you to check out my new podcast,⁠ Tom Bilyeu's Mindset Playbook⁠ —a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you. ********************************************************************** LISTEN TO IMPACT THEORY AD FREE + BONUS EPISODES on APPLE PODCASTS:⁠ apple.co/impacttheory⁠ ********************************************************************** FOLLOW TOM: Instagram:⁠ https://www.instagram.com/tombilyeu/⁠ Tik Tok:⁠ https://www.tiktok.com/@tombilyeu?lang=en⁠ Twitter:⁠ https://twitter.com/tombilyeu⁠ YouTube:⁠ https://www.youtube.com/@TomBilyeu⁠ Learn more about your ad choices. Visit megaphone.fm/adchoices

Pivot
Demis Hassabis on AI, Game Theory, Multimodality, and the Nature of Creativity | Possible

Play Episode Listen Later Apr 12, 2025 60:49


How can AI help us understand and master deeply complex systems, from the game Go, which has 10 to the power of 170 possible board positions, to proteins, which can fold in an estimated 10 to the power of 300 possible ways? This week, Reid and Aria are joined by Demis Hassabis. Demis is a British artificial intelligence researcher and the co-founder and CEO of the AI company DeepMind. Under his leadership, DeepMind developed AlphaGo, the first AI to defeat a human world champion at Go, and later created AlphaFold, which solved the 50-year-old protein folding problem. He is considered one of the most influential figures in AI. Demis, Reid, and Aria discuss game theory, medicine, multimodality, and the nature of innovation and creativity.

For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/
Listen to more from Possible here.
Learn more about your ad choices. Visit podcastchoices.com/adchoices