Podcasts about Vertex

  • 468 podcasts
  • 922 episodes
  • 37m avg. duration
  • 5 weekly new episodes
  • Latest episode: Jun 3, 2025

Popularity (chart, 2017–2024)


Best podcasts about Vertex


Latest podcast episodes about Vertex

DeFi Slate
Crypto's Foundation Model Is Dead?, Delphi Digital CEO On AI x Crypto, Ethereal Founder On Institutional DeFi, Institutional Adoption with Robinson Burkey, Dougie DeLuca, Anil Lulla, Laurence Day

Jun 3, 2025 · 124:51


The Rollup TV is brought to you by:
Celestia: https://celestia.org/
Boundless: https://beboundless.xyz/
AltLayer: https://www.altlayer.io/
Mantle: https://www.mantle.xyz/
Omni Network: https://omni.network/
Vertex: https://vertexprotocol.com/

Join The Rollup Family:
Website: https://therollup.co/
Spotify: https://open.spotify.com/show/1P6ZeYd..
Podcast: https://therollup.co/category/podcast
Follow us on X: https://www.x.com/therollupco
Follow Rob on X: https://www.x.com/robbie_rollup
Follow Andy on X: https://www.x.com/ayyyeandy
Join our TG group: https://t.me/+8ARkR_YZixE5YjBh
The Rollup Disclosures: https://therollup.co/the-rollup-discl

Timestamps:
00:00 Trump's Crypto Wallet Launch
03:08 The Death of the Crypto Foundation Model
06:09 Ethereum's New Model and Strategic Goals
09:03 The Future of Ethereum and Its Competition
11:49 Delphi's AI Month and Investment Strategies
18:05 Building on Athena: The Ethereal Project
42:10 The Evolving Landscape of On-Chain Trading
43:08 Innovative Features and User Engagement
45:58 Building a Vertically Integrated DeFi Ecosystem
49:23 Liquidity Challenges and Strategies
52:20 User Base and Market Positioning
56:08 Iterative Product Development and Success
01:00:52 Balancing Adoption and Revenue Generation
01:06:31 Institutional Relationships and Tokenization
01:10:50 Future of Stablecoins and Market Trends
01:25:15 The Rise of Hyper Liquid Trading
01:27:11 Shifting Perspectives on Crypto Foundations
01:28:36 The Future of Tokenomics and Revenue Generation
01:34:39 Founders' Evolving Approach to Business Models
01:38:06 New Hampshire: A Hidden Crypto Hub?
01:40:34 The Importance of Revenue in Crypto Projects
01:43:25 Comparing Ethereum and Solana Events
01:45:23 The Lore of Lawrence and Wildcat's Unique Marketing
01:51:50 Innovations in Under-Collateralized Lending
01:56:58 The Future of Liquidation Mechanisms in DeFi

Living With Cystic Fibrosis
Costco to Connection: Magazine to Mic with Michelle Glogovac

Jun 2, 2025 · 47:44


From Costco to Connection: Podcast Advice That Changed Everything

When I spotted a feature on podcasting in The Costco Connection, I was excited. When I saw Michelle Glogovac featured? I knew I had to reach out. That decision turned into one of the best moves I've made for growing my podcast.

Michelle, THE Podcast Matchmaker®, publicist, and author of How To Get On Podcasts, shared simple, powerful strategies that helped expand my reach. One standout? Getting featured on other podcasts. It boosted my visibility, brought in new listeners, and gave me fresh insights into how other hosts run their shows.

In this episode, Michelle shares her approach to storytelling, visibility, and the importance of showing up. Her message: your story is your superpower. If you want to grow your platform and connect with more listeners, don't miss this one.

"Your story is your superpower. The more you share it, the more people you help—and the more you grow in the process." — Michelle Glogovac, The Podcast Matchmaker®

Michelle is terrific, and you will hear and relate to her infectious personality. You'll want to be her best friend!

Find out more or connect with Michelle:
Author: How To Get On Podcasts
Podcast Host: My Simplified Life
Founder and CEO: The MLG Collective

Please like, subscribe, and comment on our podcasts!
Please consider making a donation: https://thebonnellfoundation.org/donate/
The Bonnell Foundation website: https://thebonnellfoundation.org
Email us at: thebonnellfoundation@gmail.com
Watch our podcasts on YouTube: https://www.youtube.com/@laurabonnell1136/featured
Thanks to our sponsors:
Vertex: https://www.vrtx.com
Viatris: https://www.viatris.com/en

DeFi Slate
James Wynn Saga Continues, MicroStrategy Playbook Accelerates, Loudio Token Launch with Stephen of DeFi Dojo, Jacquelyn Melinek, Felix Jauvin, Lauris, Founder of Multiplier.fun

Jun 2, 2025 · 107:45


The Rollup TV: Monday, June 2nd

Timestamps:
00:00 Concentrated Liquidity and Trading Dynamics
02:52 Community Building vs. Customer Generation
06:08 Navigating Pre and Post TGE Strategies
09:01 The Role of Oracles in DeFi
11:55 Yield Opportunities in DeFi
14:45 Hidden Gems and Overlooked Yields
18:01 Balancing Risk and Return in DeFi
21:04 Starting in DeFi: Tips for Newcomers
36:46 Jacquelyn Intro and the Avalanche Summit
39:01 Insights from Vlad at Robinhood
41:23 Policymakers and the Future of Crypto
44:10 Understanding the Anti-Crypto Sentiment
46:31 Founding Token Relations
49:11 The Role of Token Relations
51:42 Macro Analysis and Its Impact on Crypto
54:32 Fiscal Dominance and Bitcoin's Future
01:00:46 Global Trade Dynamics and Bitcoin's Position
01:05:22 July and August Market Predictions
01:12:52 Navigating Fiscal Deficits and Currency Dynamics
01:16:14 The Unsustainable Fiscal Path and Hyperinflation Concerns
01:18:41 Running the Economy Hot
01:22:36 Tariff Policies and Capital Controls
01:23:39 The Rise of BRICS and Dollar Diversification
01:24:37 Evaluating the Dollar Milkshake Theory
01:26:07 Market Predictions and Seasonal Trends
01:29:14 Multiplier: A New Player in DeFi
01:35:10 User Experience and Conversion Strategies in DeFi
01:42:07 Building a Successful Crypto Business

Oral Arguments for the Court of Appeals for the Fifth Circuit

Penthol v. Vertex Energy

DeFi Slate
Telegram xAI partnership, FTX Distributions coming, Enterprise AI agent raises 60m, The rise of Loudio info-fi, US Macro Update, RAJ KYC Controversy with Tom Dunleavy, David Kostiner, Breadguy & Flood

May 29, 2025 · 149:26


The Rollup TV presents: Mammoth May.

Pharma and BioTech Daily
Pharma and Biotech Daily: Insider Trading, R&D Spending, and Acquisitions in the World of Pharmaceuticals

May 29, 2025 · 1:59


Good morning from Pharma and Biotech Daily: the podcast that gives you only what's important to hear in the Pharma and Biotech world.

Former Chinook board member Rouzbeh Haghighat has been indicted for insider trading related to Novartis' $3.2 billion acquisition of the company. Despite this scandal, pharma R&D spending increased in 2024, climbing 1.5% across the global pharmaceutical sector. The acquisition of SiteOne by Lilly follows Vertex into the non-opioid pain space, providing diversification for Lilly, which has been focusing on obesity and diabetes treatments. Meanwhile, AbbVie's Allergan cuts over 200 staff after a botched marketing campaign, and InflaRx axes a rare skin disease study due to disappointing late-stage data.

AGC Biologics will be at Bio International in Boston to showcase their global capabilities in drug production. Vaccine overhaul, rocket grounding, and drug price transparency are also highlighted in the latest news. Biogen's strategy for Zurzuvae shifts as obstetricians/gynecologists rise to the front lines. Drug price transparency in the US is discussed as being easier said than done. Additionally, Rocket's gene therapy for Danon disease is on hold after a patient death, and four biotechs are facing uncertainty in the COVID-19 vaccine landscape.

Global pharmaceutical companies are increasing their research and development spending despite political and economic challenges. Biogen is shifting its strategy for the drug Zurzuvae as obstetricians and gynecologists become more involved. Drug price transparency in the US is still a challenge, despite efforts to increase transparency. TriLink has introduced a new poly(A) tail modification to enhance protein expression.

In other news, a former Chinook board member has been indicted for insider trading, Trump has appointed Dr. Oz to lead drug pricing negotiations, and Lilly is following Vertex into non-opioid pain treatment with the SiteOne acquisition. Sanofi has purchased Vigil for $470 million to reignite an Alzheimer's target.

DeFi Slate
Banks Explore Stablecoins, 1Mil Connect to Helium Network, Dubai to Tokenize Real Estate, US Goes in on Digital Assets, Nordics Back Out on Digital Currency, ETH Culture War, & Seed Phrase Fiasco

May 27, 2025 · 92:45


The Rollup TV presents: Mammoth May.

Living With Cystic Fibrosis
Voices of Care: A Live and Unfiltered Conversation

May 26, 2025 · 76:39


Live from Mix and Mingle Education Day: A Powerful Conversation with Caregivers

In this deeply moving live episode recorded at the Mix and Mingle Education Day, we brought together a powerful group of caregivers—grandparents, parents, stepparents, dads, and friends—for a heartfelt discussion about the emotional journey of caring for a loved one with cystic fibrosis.

What started as a simple idea to gather voices turned into our most beautiful and emotional podcast yet. There were tears, laughter, and unforgettable stories. We were honored to be joined by a grief counselor who helped guide us through the complex feelings that surfaced during our conversation.

This episode is a raw, real, and uplifting tribute to the strength, vulnerability, and love that caregivers bring to their roles every single day. Join us for a conversation that honors the heart of caregiving and the power of community.

You'll hear from:
(00:00:00) Laura Bonnell, host (Egypt, Foundation programs, legislation)
(00:16:49) Lois Teicher, CF grandmother (Laura's mom)
(00:19:05) Natalie Wicks, Lois's partner
(00:21:36) Julie Weatherhead, grief doula
(00:28:45) Sharon Tischio, CF mom
(00:33:08) Cambria Whitaker, CF mom in a queer/transgender relationship
(00:38:55) Theresa Daggett, MSU Clinic, respiratory therapist, CF coordinator
(00:49:00) Dorothy Stratford, CF family caregiver
(00:52:15) Jillian Rogers Smith, 33-year-old CF patient, and her dad, Bill Rogers
(01:01:40) CF mom to 6-year-old daughter Louisa
(01:07:28) Wendi Tague (nurse coordinator) and Claire Haglund (social worker)

Present but not on the microphone were Joe Bonnell (Laura's husband), Jeannette Bovensie (Dorothy's mom), and Dani Nettleton and daughter.

Contacts:
Claire Haglund: CHaglund@dmc.org
Wendi Tague: wtague@dmc.org
Lois Teicher (Laura's mom): Loisteicher@yahoo.com
Natalie Wicks: Piccolo35@gmail.com
Theresa Daggett: daggett3@msu.edu
Cambria Whittaker: cambriawhitta@gmail.com
Dorothy Stratford: dstrat701@gmail.com
Sharon Tischio: stischio@comcast.net
Jamie Rudnycky: jamie.rudnycky@gmail.com
Julie Weatherhead: weathervanecounseling@gmail.com
Jillian Smith, Jillian's Jay Walkers: jill@jilliansjaywalkers.org
And Jillian's dad, Bill Rogers

ABN Newswire Finance Video
Vertex Minerals Ltd (ASX:VTX) Exec. Chairman Roger Jackson Outlines the Move to Gold Production

May 26, 2025 · 7:04


DeFi Slate
OpenAI Acquires Startup, Google Breakthrough, Elon's Robot News, China's AI Ambitions, Apollo On Solana, Worldcoin Funding, Billion Hyperliquid Position with Tristan0x, Anthony Rose and Ben Rubin

May 22, 2025 · 99:25


The Rollup TV presents: Mammoth May.

DeFi Slate
Circle Acquisition Talks, Japan's Bond Crisis, CRISPR Gene Edited Human, Crypto as a Service Expansion, Hyperliquid Dominance, Trump Holders Dinner, JPMorgan Enabling Bitcoin with Zach Rynes and Cedo

May 20, 2025 · 96:43


The Rollup TV presents: Mammoth May.

Living With Cystic Fibrosis
Obesity in CF: A New Challenge in a Healthier Future

May 19, 2025 · 38:12


Cystic fibrosis and obesity? Until recently this has not been a topic of conversation for the CF community. The reason for obesity in the CF community is better health and longer lives, so the concern is now a reality. University of Michigan CF doctor Carey Lumeng is researching the issue. As he says in this podcast, researchers have a lot to learn about the connection between better health in CF and obesity.

We also talk about The Bonnell Foundation fellowship program. A few years ago we started the program to encourage doctors to work in the specialty field of cystic fibrosis. Dr. Lumeng is one of the doctors who oversees this program.

Dr. Lumeng is the Frederick G.L. Huetwell Professor for the Cure and Prevention of Birth Defects and Professor in Pediatrics and Molecular and Integrative Physiology. He is the Division Chief of Pediatric Pulmonology at the C.S. Mott Children's Hospital and Associate Director of the Michigan MSTP Program. He grew up in Indiana and graduated from Princeton University in Molecular Biology. He received his PhD in Human Genetics and MD from the University of Michigan and completed residency training in Pediatrics in the Boston Combined Pediatrics Residency Program at Boston Children's Hospital and Boston Medical Center. He then completed fellowship training in Pediatric Pulmonology at the University of Michigan and started as faculty in 2006. He runs a research lab focused on the health effects of obesity and the links between metabolism and lung health. The laboratory participates in both basic science and translational research projects in adult and pediatric obesity. He is funded by the NIH and the CF Foundation for new projects studying the changing causes of diabetes in people with CF.

To contact the CF pediatric department (the Bonnell girls are pictured on this page): https://www.mottchildren.org/conditions-treatments/cystic-fibrosis-pediatric?pk_vid=6ff46bd2d38fe04c1739891353f5b28b

DeFi Slate
Blackbird backs Celestia, Institutional BTC Buys, Klarna AI Reversal, Galaxy IPO, Crypto Capital Showdown, Internet Capital Markets & Crypto's Fantasy Draft with Haseeb Qureshi, Jan Liphardt & Gabin

May 15, 2025 · 102:24


Welcome to The Rollup TV.

DeFi Slate
OpenAI Leadership Shakeup, Coinbase Joins S&P 500, USD Reserve Status, Meta's Stablecoin Revival, & The Robot Revolution with Nils Pihl, Arik Galansky & Nick Forster

May 13, 2025 · 87:21


Welcome to The Rollup TV.

Pharma and BioTech Daily
Pharma and Biotech Daily Podcast: Stay Informed on Drug Pricing, HIV Research, and Industry Updates

May 13, 2025 · 2:09


Good morning from Pharma and Biotech Daily: the podcast that gives you only what's important to hear in the Pharma and Biotech world.

The White House has announced a new drug pricing policy that includes the revival of the most favored nations rule and extends to the private markets, leveraging the patent system, drug importation, and more. Meanwhile, Lilly's Zepbound has been found to have a superior benefit-risk ratio compared to Novo's Wegovy, BMS and Sanofi settle a Plavix lawsuit with Hawaii for $700 million, and biopharma companies are focusing on developing a cure for HIV as federal funding for related research is being cut. Sino Biological offers comprehensive solutions for autoimmune diseases, and Roche promises a $300 million investment in China production after a multibillion-dollar investment in the US. On the other hand, Lexeo and IGM have both announced significant layoffs, and Novartis' CEO has expressed concerns about Trump's pricing controls.

Funding for HIV-related research and infrastructure is being cut by the Trump administration, leading biopharma companies like Gilead and Immunocore to focus on finding a cure for HIV. In the field of neurology, there is a need for more precise diagnostic tools to effectively treat neurodegenerative conditions. The new HHS vaccine requirement has been criticized by leading vaccine physician Paul Offit as potentially being anti-vaccine activism disguised as policy. Companies like Novartis, Bayer, and AstraZeneca are exploring new indications and innovations in radiopharmaceuticals, hoping to capitalize on a market that could reach $16 billion by 2033. The FDA has faced delays in reviewing certain drugs, while biotech stocks have fallen after the appointment of Vinay Prasad to succeed Marks at CBER. Vertex has decided to abandon AAV in the gene therapy space.

Upcoming events include a webinar on surviving and thriving in the biotech downturn. Job opportunities in the biopharma industry include positions at Takeda, Daiichi Sankyo, and AbbVie. Heather McKenzie, senior editor at BioSpace, is open to suggestions for future coverage topics in neuroscience, oncology, cell & gene therapy, metabolic, or other areas.

Living With Cystic Fibrosis
70 years strong: The Luanne McKinnon story.

May 12, 2025 · 63:30


A 70-year-old person with cystic fibrosis. It's a phrase that wasn't just uncommon a few decades ago—it was virtually unheard of.

When Luanne McKinnon was diagnosed in 1969 at just 13 years old, doctors told her parents she might live to be 19. Today, Luanne stands on the edge of her 70th birthday—a milestone that not only redefines possibility but embodies resilience, creativity, and purpose.

Born in Dallas, Texas in 1955, Luanne was diagnosed at a time when cystic fibrosis was still barely understood. No vests. No targeted medications. No community. And yet, she carved out a life of profound impact. "I stand as a witness to the possible," says Luanne.

After earning a Master of Fine Arts in Painting and a PhD in Art History, she launched a celebrated career in the visual arts—owning an art dealership in New York City, directing major university museums, publishing works, and curating over 35 exhibitions. She even became a Fellow at the prestigious Getty Research Institute.

And while that would be more than enough for most of us, Luanne continued to pour herself into advocacy—serving as co-chair for Stanford's Patient and Family Advisory Committee and raising awareness for CF patients before and after transplant. In 2011, she underwent a successful double-lung transplant at Stanford, and fourteen years later, she is still very much living proof.

This episode is not about her equally remarkable husband—Emmy award-winning filmmaker Daniel Reeve—though we'll mention him later. This is about Luanne: her life, her art, her truth, and her refusal to let a diagnosis define the limits of her possibility. And after listening to this conversation, I think you'll believe in the possible, too.

Welcome to a very special episode of the Living With Cystic Fibrosis podcast and our incredible guest, Luanne McKinnon.

Pharma and BioTech Daily
Breaking Down the Latest in Pharma and Biotech: Executive Orders, Investment Trends, and Industry News

May 7, 2025 · 0:58


Good morning from Pharma and Biotech Daily: the podcast that gives you only what's important to hear in the Pharma and Biotech world.

President Donald Trump has issued an executive order to expedite the timeline for building new facilities in the US while increasing inspections on foreign plants. Despite a drop in venture capital fundraising in the first quarter of the year, large investments are still being made in the pharmaceutical industry. Companies like BMS are investing billions in US manufacturing despite potential tariffs. Vertex has seen success with its non-opioid pain drug launch, and while biopharma venture capital fundraising has declined, median deal sizes remain high. Lotte Biologics offers specialized ADC manufacturing services in Syracuse, NY.

Other news includes Lilly's ALS pipeline expansion, states suing to block HHS cuts, and layoffs at PTC despite a Phase II win. Thank you for tuning in to today's episode of Pharma and Biotech Daily.

BioSpace
Trump's US Manufacturing Push, New Vaccine Policy, Novo's Weight Loss Pill up for FDA Review

May 7, 2025 · 24:16


In his effort to onshore manufacturing, President Donald Trump issued an executive order on Monday afternoon ordering the FDA to ease permitting processes for new and expanded U.S. facilities. The announcement comes as more and more Big Pharma companies commit billions to expanding their U.S. footprints. Bristol Myers Squibb CEO Christopher Boerner announced this week that the company will pump $40 billion into its stateside operations over the next five years, even as the pharma executes a massive cost-cutting effort that involves shaving $3.5 billion from expenses by 2027 and cutting thousands of jobs, including another 516 in New Jersey, according to a May WARN notice.

In other policy news, the Department of Health and Human Services on Wednesday said it will require all new vaccines to be tested in placebo-controlled trials to earn FDA approval, but some vaccine experts have raised concerns about this approach. Meanwhile, turmoil still envelops the FDA, with staff cuts and rehires continuing at a dizzying pace. On Monday, several states sued HHS, saying that the cuts offload critical functions and costs onto the states and impede public health efforts.

As Q1 earnings season for Big Pharma begins to wind down, there are still headlines coming from the biotech sector. Vertex revealed last week that it is abandoning all of its adeno-associated virus vector work, while BioNTech on Monday announced that tariffs could get in the way of its ambitious plans for a closely watched PD-L1-VEGF therapy. Moderna, meanwhile, continues its fall from COVID grace, missing Q1 revenue expectations and announcing plans to reduce operating expenses by around $1.5 billion by 2027.

In the weight loss space, Novo Nordisk announced on Friday that the FDA has accepted the application for a pill version of Wegovy, with a decision expected this fall. Novo has also struck partnerships with CVS and Hims & Hers pharmacies to market injectable Wegovy, drawing the attention of Eli Lilly CEO David Ricks.

Also this week, check out BioSpace's deep dives into advances in base editing—a technology that's been touted as a "safer" CRISPR—and Summit Therapeutics' push to bring closely watched PD-1/VEGF immunotherapy ivonescimab to the U.S. market after its recent approval in China.

Living With Cystic Fibrosis
Live Fearlessly: Jacob Venditti

May 5, 2025 · 31:54


Eight miles. Two friends. One cause.

In this inspiring episode, Jacob Venditti opens up about his life with cystic fibrosis, offering candid updates on his health and the challenges he faces as he prepares for a lung transplant. He emphasizes the vital role of community support and shares how his work with the Live Fearlessly Foundation fuels his mission to empower others. Jacob also sheds light on the rare disease income threshold amendment he's championing, which aims to create more equitable opportunities for patients. The conversation builds toward his upcoming Crossing 4 CF event, showcasing his unwavering resilience and commitment to living fearlessly.

The heartfelt conversation continues with Rob Brown, who talks about their upcoming 80-mile paddle race aimed at raising awareness for cystic fibrosis (CF). Jacob shares how open-ocean paddling has become both a personal passion and a powerful way to connect with the CF community. Rob reflects on his enduring friendship with Jacob and their mutual love of surfing. Together, they highlight the healing power of the ocean—physically, mentally, and emotionally—especially for those living with CF.

To connect with Jacob and his team: https://livefearlesslyfoundation.com

TechCrunch Startups – Spoken Edition
AI sales tax startup Kintsugi has doubled its valuation in 6 months

May 5, 2025 · 5:22


Kintsugi, a Silicon Valley-based startup that helps companies offload and automate their sales tax compliance, has raised $18 million in new funding led by global indirect tax technology solution provider Vertex. The startup plans to enable more small and medium businesses to use its AI-enabled capabilities for tax calculations and filings.

1Mby1M Entrepreneurship Podcast
682nd 1Mby1M Entrepreneurship Podcast with Piyush Kharbanda, Vertex Ventures

Apr 25, 2025 · 32:32


Piyush Kharbanda, General Partner at Vertex Ventures, discusses his firm's AI investment thesis.

Slice of Healthcare
#492 - Peter Donnelly, CEO & Co-Founder at Genomics

Apr 23, 2025 · 21:50


Join us on the latest episode, hosted by Jared S. Taylor!

Our guest: Peter Donnelly, CEO & Co-Founder at Genomics.

What you'll get out of this episode:
Genomics' mission: Founded in 2014, Genomics is bridging cutting-edge genetic research with real-world healthcare solutions.
Actionable insights: Advances now allow actionable health insights for ~70% of people via genetic testing.
Strategic partnerships: Collaborations with companies like Vertex and GSK use genetics to improve drug targeting and trial outcomes.
Insurance innovation: Life insurers are early adopters of genetics to promote longevity and healthier lives.
The future is now: With global health systems under pressure, predictive genomics is primed to shift care from treatment to prevention.

To learn more about Genomics:
Website: http://www.genomics.com/
LinkedIn: https://www.linkedin.com/company/genomics-ltd/

Our sponsors for this episode are:
Sage Growth Partners: https://www.sage-growth.com/
Quantum Health: https://www.quantum-health.com/

Show and host's socials:
Slice of Healthcare LinkedIn: https://www.linkedin.com/company/sliceofhealthcare/
Jared S. Taylor LinkedIn: https://www.linkedin.com/in/jaredstaylor/

What is Slice of Healthcare? The go-to site for digital health executive/provider interviews, technology updates, and industry news. Listened to in 65+ countries.

Living With Cystic Fibrosis
From Bulky to Breakthrough: The Future of Airway Clearance

Living With Cystic Fibrosis

Play Episode Listen Later Apr 21, 2025 27:37


From Clunky to Cutting-Edge: The Evolution of Airway Clearance with Nicole Dunn. When our daughters first received their vest machines, they felt like they weighed a hundred pounds and had to be plugged into the wall. The vests didn't fit well, riding high in the armpits and leaving much to be desired in comfort and function. Fast forward 25 years, and everything has changed. In this episode, Laura talks with Nicole Dunn, Senior Market Development and Education Manager at Tactile Medical and an expert on the AffloVest. With a strong background as a registered respiratory therapist and a deep passion for respiratory education, Nicole is at the forefront of innovation in airway clearance therapy. Together, they dive into the evolution of the AffloVest, from its design improvements to the company's mission to provide accessible, life-changing therapy for people with chronic respiratory diseases like cystic fibrosis. Nicole shares how patient feedback has shaped product development, the impact of CF modulators on airway clearance, and how community engagement plays a vital role in Tactile Medical's approach. This episode is full of inspiration, real-life success stories, and a look at how far we've come in improving comfort, mobility, and quality of life for people with CF. To learn more about Tactile Medical please visit: https://tactilemedical.com To learn more about AffloVest: https://afflovest.com For questions: afflovestinfo@tactilemedical.com Please like, subscribe, and comment on our podcasts! Please consider making a donation: https://thebonnellfoundation.org/donate/ The Bonnell Foundation website: https://thebonnellfoundation.org Email us at: thebonnellfoundation@gmail.com Watch our podcasts on YouTube: https://www.youtube.com/@laurabonnell1136/featured Thanks to our sponsors: Vertex: https://www.vrtx.com Viatris: https://www.viatris.com/en

Conversation Balloons
77. The Success Sequence and Low-Income Students w/ Ian Rowe

Conversation Balloons

Play Episode Listen Later Apr 16, 2025 51:07


Black scholar, educator, and American Enterprise Institute senior fellow Ian V. Rowe on why and how his Vertex schools are shaping low-income young people with moral foundations and optimism. Children in the Bronx are learning the four cardinal virtues and the personal agency that lead them into "the success sequence" that lifts kids out of poverty. Additional Resources: Book: Agency: The four point plan (F.R.E.E.) for all children to overcome the victimhood narrative and discover their pathway to power

Living With Cystic Fibrosis
Ask Siri: all things CF!

Living With Cystic Fibrosis

Play Episode Listen Later Apr 7, 2025 53:36


CFRI's Executive Director, Siri Vaeth, is sunshine to me. We met after Siri took on her role with the Cystic Fibrosis Research Institute, and I consider her a dear friend and a mentor. Siri is truly among the smartest people I know. She is an advocate for her daughter Tess, who has CF, and an incredible advocate for the CF community. If you need legislation explained to you, Siri can help; she can put it in a way you'd understand. Siri has a master's degree in Social Welfare, is fluent in Spanish, is great at marketing, does a lot of public speaking, and is an all-around great person. This episode is packed with information about legislation, colon cancer, and health insurance, along with discussion of the fact that people of color are under-diagnosed, concerns for the future of CF, and catching up about our kids. To learn more about CFRI: https://www.cfri.org Please like, subscribe, and comment on our podcasts! Please consider making a donation: https://thebonnellfoundation.org/donate/ The Bonnell Foundation website: https://thebonnellfoundation.org Email us at: thebonnellfoundation@gmail.com Thanks to our sponsors: Vertex: https://www.vrtx.com Viatris: https://www.viatris.com/en

Pharma and BioTech Daily
Pharma and Biotech Daily: Your Essential Update on the Latest Industry News

Pharma and BioTech Daily

Play Episode Listen Later Apr 1, 2025 2:16


Good morning from Pharma and Biotech Daily: the podcast that gives you only what's important to hear in the Pharma and Biotech world. Sanofi and Alnylam have received FDA approval for the first RNAi treatment for hemophilia, with the drug, Qfitlia, indicated for both hemophilia A and B. This approval is significant as it can be given regardless of the presence of neutralizing antibodies against clotting factors VIII or IX. However, the sudden departure of Peter Marks, director of the FDA's Center for Biologics Evaluation and Research, has caused uncertainty in the biopharma industry. In other news, Vertex has cut a diabetes asset, but analysts remain optimistic about its phase III option. Lilly's RNA silencer has shown promising results in lowering a key cardiovascular biomarker. Trilink is offering custom guide RNAs for CRISPR workflows to accelerate therapy discoveries. Despite market challenges, the cell and gene therapy sector has seen a 30% investment surge. Companies like Amgen, Aldeyra, and Argenx are among those with upcoming FDA actions. Arbutus has announced layoffs, while big pharmas are pushing boundaries in radiopharmaceuticals. Michelle Werner of Alltrna is focused on making better drugs. Safety questions are looming in Duchenne as Dyne and Wave plan FDA filings. There are job opportunities available in data management and program leadership within the biopharma industry. Moving on to other news, several big pharmaceutical companies such as Novartis, Bayer, AstraZeneca, Bristol Myers Squibb, and Eli Lilly are competing in the radiopharmaceuticals market, which is projected to be worth over $13 billion by 2033. The FDA is expected to announce decisions on therapies for dry eye disease soon. Michelle Werner, CEO of Alltrna, is focused on developing tRNA-based treatments for various diseases. Safety concerns are emerging in the Duchenne muscular dystrophy space as companies like Dyne and Wave plan FDA filings.
The EU rejected Lilly's Alzheimer's drug Kisunla, Biontech's bispecific showed promise in treating SCLC patients, and Wave's duchenne exon-skipper reversed muscle damage in a mid-stage trial. Job opportunities within the biopharma industry were also highlighted for those interested.Thank you for tuning in to Pharma and Biotech daily - keeping you updated on all the latest news in the world of pharmaceuticals and biotechnology.

CryptoNews Podcast
#426: Darius Tabai, CEO of Vertex Protocol, on Perpetual Trading, DeFi Infrastructure, and UX Improvements

CryptoNews Podcast

Play Episode Listen Later Mar 31, 2025 23:57


Darius Tabai is the CEO and co-founder of Vertex Protocol, a leading decentralized exchange built on Arbitrum. Darius is an experienced trader with a history of working in FX, commodities, and crypto. His previous roles include Head of Trading at JST Digital, Head of Trading at CrossTower, Global Head of Metals Trading at Merrill Lynch, and Global Head of Precious Metals Trading at Credit Suisse. In this conversation, we discuss: the story behind founding Vertex Protocol; what a perp DEX is; Arbitrum's speed, low fees, and EVM compatibility; DeFi infrastructure and UX improvements; the competitive landscape of DEXs; what makes Vertex unique; tokenomics and protocol incentives; the rise and fall of memecoins; liquidity across blockchains; and the partnership with Berachain and SONIC. Vertex: Website: vertexprotocol.io, X: @vertex_protocol, Telegram: t.me/LiquidityLounge. Darius Tabai: X: @DariusTabai. This episode is brought to you by PrimeXBT. PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers. PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions. Code: CRYPTONEWS50. This promotion is available for a month after activation. Click the link below: PrimeXBT x CRYPTONEWS50

OHNE AKTIEN WIRD SCHWER - Tägliche Börsen-News
“Vertex beats Novo Nordisk” - GameStop BTC, NVIDIA problems & defense boom

OHNE AKTIEN WIRD SCHWER - Tägliche Börsen-News

Play Episode Listen Later Mar 27, 2025 13:58


Our partner Scalable Capital is the only broker you need, including a trading flat rate, interest, and portfolio analyses. All further info here: scalable.capital/oaws. Stocks + WhatsApp = sign up here. Prefer a newsletter? That works too. The book to go with the podcast? Read it now. Defense is booming, thanks in part to Porsche. AI is not booming: OpenAI is optimistic, but China is taking shots at NVIDIA. Also: the ProSiebenSat.1 takeover, the Dollar Tree sale, a Nintendo bull, and new auto tariffs. Crypto: stablecoins, and Coinbase plans an acquisition. Vertex Pharmaceuticals (WKN: 882807) has outperformed Eli Lilly and Novo Nordisk over the last 20 years. How can that be? Niche medicine and pain relief without addiction. The stock market has three new crypto stocks: eToro (soon), GameStop (WKN: A0HGDX), and Trump Media (WKN: A3CYXD). How Ethena works: https://hi.omr.com/33-zinsen-mit-neuem-stablecoin. This podcast from 27.03.2025, 3:00 a.m., is provided by Podstars GmbH (Noah Leidinger).

Living With Cystic Fibrosis
Aaron Trumm: living his best life!

Living With Cystic Fibrosis

Play Episode Listen Later Mar 24, 2025 45:23


I love that I was able to bump into Aaron Trumm via an email. He reached out to check in about our scholarship program for college. We only award grants to undergrad students, but I was intrigued by all I learned about him. Aaron has CF, he is post-transplant, he started a recording label, he plays the piano and raps, and he worked with the man known as the Lion of Zimbabwe. And he's going to law school in the fall. We have a lot to talk about! To get in touch with Aaron: https://aarontrumm.com A music production education brand: https://recordinglikemacgyver.com This site Aaron says is disappearing soon! https://nquit.com Please like, subscribe, and comment on our podcasts! Please consider making a donation: https://thebonnellfoundation.org/donate/ The Bonnell Foundation website: https://thebonnellfoundation.org Email us at: thebonnellfoundation@gmail.com Thanks to our sponsors: Vertex: https://www.vrtx.com Viatris: https://www.viatris.com/en

Dispatch in Depth
CentralSquare: Technology You Can Trust with Rob Farmer

Dispatch in Depth

Play Episode Listen Later Mar 18, 2025


Rob Farmer, interim National Director of Sales of Vertex NG911 Call Handling, joins us to give you the scoop on CentralSquare. He discusses the company's own Engage conference, the newly named Vertex, and what to expect at their booth at NAVIGATOR.For Your Information:Check out the CentralSquare website: https://www.centralsquare.com/ Learn more about Vertex NG911 Call Handling: https://www.centralsquare.com/solutions/public-safety-software/vertex-ng911-call-handling Register for their Engage conference (April 27–30, 2025): https://web.cvent.com/event/f25feacb-fa48-4079-a8fe-d85500fa69ce/summary

My Veterinary Life
Veterinary Podcast Crossover Series with Veterinary Vertex

My Veterinary Life

Play Episode Listen Later Mar 13, 2025 19:52


We are thrilled to be sharing our veterinary podcast crossover series with you. Throughout these episodes, we are having conversations with other veterinary podcast hosts, who share why they started their podcasts and their goals, as well as how we can all work together to support our colleagues in this profession. Today our guests are Drs. Lisa Fortier and Sarah Wright. Dr. Fortier is the editor-in-chief for the AVMA journals and Dr. Wright is the associate editor for the AVMA journals. This is a great conversation for understanding the different ways to use the "Veterinary Vertex" podcast and what you can learn from each episode. They also share some of the bonus questions they ask at the end of their episodes, which takes us on a slight tangent as we discuss puzzles. If you want this and so much more, be sure to check out the entire episode! We want to share a big thank you to our sponsor CareCredit. You can learn more about Veterinary Patient Financing for Providers through CareCredit by visiting: https://www.carecredit.com/providers/animal-healthcare/ You can find Veterinary Vertex on all major podcasting platforms. Remember, we want to hear from you! Please be sure to subscribe to our feed on Apple Podcasts and leave us a rating and review. You can also contact us at MVLPodcast@avma.org Follow us on social media @AVMAVets #MyVetLife #MVLPodcast

Living With Cystic Fibrosis
Bob Coughlin, CF Dad: from Congress to Science

Living With Cystic Fibrosis

Play Episode Listen Later Mar 10, 2025 45:50


CF dad Bob Coughlin sees a cure in the future for his son, and for all of our kids. His high energy in this podcast is contagious. In this conversation, Laura Bonnell and Bob Coughlin discuss the journey of Bob's son, Bobby, who has cystic fibrosis. They explore the advancements in treatment, the importance of advocacy, and the intersection of policy and innovation in the biotechnology sector. Bob shares his personal experiences as a caregiver and advocate, emphasizing the need for continued support and education in the medical community. The conversation highlights the emotional rollercoaster of living with a chronic illness and the hope brought by new therapies. Bob also shares his emotional journey as a parent of a child with cystic fibrosis, detailing the transformative impact of new treatments and the importance of community support. He discusses the hope brought by advancements in gene therapy and the future of cystic fibrosis treatment, emphasizing the need for continued advocacy and innovation in healthcare, and the conversation underscores the emotional highs and lows experienced by families dealing with chronic illness and the importance of maintaining a positive outlook. Bob aligns real estate strategies with scientific business objectives, which is very cool if you ask me. He's on numerous boards and is extremely involved in work, life, and organizations. ___________________________ Bob Coughlin is a Managing Director at JLL and the New England Life Science and Healthcare Practice Group lead. He specializes in the representation of lab, GMP manufacturing, and technology space. Robert delivers creative solutions that align real estate strategies with scientific business objectives. Experience: Robert most recently operated as the President & CEO of the Massachusetts Biotechnology Council, where his mission was to advance Massachusetts's leadership in the life sciences to grow the industry.
Robert has spent his career in both the public and private sectors. Before joining MassBio, he served as the Undersecretary of Economic Development within Governor Deval Patrick's administration, where he prioritized both healthcare and economic development issues and was a strong advocate for the life sciences industry in Massachusetts. Prior to that, he was elected as State Representative to the 11th Norfolk district for three terms. Robert has also held senior executive positions in the environmental services, capital management, and venture capital industries. Board Involvement: Franciscan Children's Hospital, Vice Chair, Board of Trustees; Team Impact, Member of National Board of Directors; MassBio, Member, Board of Directors; BA Sciences, Member, Board of Directors; Anagram, Member, Board of Directors; Nuvara, Member, Board of Directors; Cystic Fibrosis Foundation, Chair, MA/RI Board of Directors; Schwartz Center for Compassionate Care, Lifetime Board Member. Please like, subscribe, and comment on our podcasts! Please consider making a donation: https://thebonnellfoundation.org/donate/ The Bonnell Foundation website: https://thebonnellfoundation.org Email us at: thebonnellfoundation@gmail.com Thanks to our sponsors: Vertex: https://www.vrtx.com Viatris: https://www.viatris.com/en

Pharma and BioTech Daily
Pharma and Biotech Daily: Breaking News in the World of Healthcare Innovation

Pharma and BioTech Daily

Play Episode Listen Later Mar 3, 2025 1:01


Good morning from Pharma and Biotech Daily: the podcast that gives you only what's important to hear in the Pharma and Biotech world. Lilly has entered a $1.6 billion deal in the molecular glue space, following in the steps of other big pharma players like Novo Nordisk, Pfizer, and Novartis. On the other hand, the FDA, NIH, and CDC are experiencing chaos, with meetings canceled and discussions postponed since Donald Trump's return as U.S. president, without a clear reason. In other news, Cassava's stock is on the rise after licensing seizure rights for simufilam, while BridgeBio Oncology is going public through a SPAC deal. Meanwhile, Vertex has decided to sever a partnership for liver gene therapies. Trilink is offering mRNA designs for reliable performance, and Eisai is planning to cut 7% of its US workforce. The role of mRNA in bridging the innovation gap in rare diseases is being discussed, highlighting the importance of industry and AI in fighting misinformation and empowering patients. Thank you for tuning in to Pharma and Biotech Daily.

The Grey Nato
The Grey NATO – 320 – New Watches From Sinn, Omega, Nodus, Vertex, & More

The Grey Nato

Play Episode Listen Later Feb 27, 2025 56:53


Thanks so much for listening! For the complete show notes, links, and comments, please visit The Grey NATO Show Notes for this episode:https://thegreynato.substack.com/p/320-new-watches-2025The Grey NATO is a listener-supported podcast. If you'd like to support the show, which includes a variety of possible benefits, including additional episodes, access to the TGN Crew Slack, and even a TGN edition grey NATO, please visit:https://thegreynato.com/support-tgnSupport the show

5G Guys I Tech Talks
FINAL EPISODE - Farewell

5G Guys I Tech Talks

Play Episode Listen Later Feb 26, 2025 22:43


Farewell to 5G Guys: Reflecting on an Incredible Journey In the final episode of 5G Guys, hosts Dan McVaugh and Wayne Smith reflect on their three-and-a-half-year journey, celebrating their achievements and memorable moments. They discuss notable guests such as Marty Cooper, the father of the cell phone, and dive into the significant advances and current state of 5G technology. Wayne shares the future of Vertex Innovations, including their expansion into data centers and managed services. Meanwhile, Dan announces his new podcast, 'Connectivity Evolution,' which will continue exploring the impact of technology on human connectivity. The episode concludes with heartfelt thanks to their listeners and a look ahead at their ongoing contributions to the industry.  __________________________ CONNECT WITH DAN __________________________      Dan's new podcast ➡  Connectivity Evolution on Apple︎        Dan's new podcast ➡ Connectivity Evolution on Spotify       Dan on LinkedIn ➡ https://www.linkedin.com/in/danmcvaugh/   __________________________ CONNECT WITH WAYNE __________________________      Vertex Innovations Website ➡︎ https://vertex-us.com/      Vertex on LinkedIn ➡︎  https://www.linkedin.com/company/vertexinnovations/posts/?feedView=all      Vertex on X ➡︎  https://x.com/VertexInnovate   ⏰Episode Minute-by-Minute⏰ 00:00 Welcome and Introduction 00:05 Reflecting on the Journey 01:16 Memorable Guests and Episodes 05:12 The Future of 5G and Beyond 07:14 Challenges and Innovations in the Industry 13:35 Updates and Future Plans 19:45 Final Thoughts and Farewell

Pharma and BioTech Daily
Breaking News in Pharma and Biotech: From Trump's Tariffs to RNA Editing

Pharma and BioTech Daily

Play Episode Listen Later Feb 25, 2025 1:44


Good morning from Pharma and Biotech Daily: the podcast that gives you only what's important to hear in the Pharma and Biotech world. President Trump has threatened big pharma with tariffs unless they reshore manufacturing. He also refused to promise pharma executives that he would hamstring the IRA's drug negotiation program. RNA editing is fueling hope for rare and common diseases, with experts calling for more efficiency and breakthroughs in delivery methods. The FDA is rehiring scientists after Trump's firing spree, with around 300 staff being asked to return. Vertex's Journavx is changing the pain treatment landscape, but opioids are still prevalent. Trilink offers mRNA designs for reliable performance in various applications. In other news, a small Harvard start-up is fighting antimicrobial resistance, and opinion coverage looks at AI's focus on small molecules in the I&I space. Bluebird is going private, Mirum receives FDA approval for a rare disease, and PhRMA is meeting with Trump on various policies. Vertex's Journavx is changing the landscape of pain treatment, but opioids are still widely used. Non-opioid pain therapies, including Journavx, have been approved by the FDA, but their uptake remains uncertain. Meanwhile, RNA editing is showing promise in clinical trials for treating rare and common diseases, and artificial intelligence is making small molecules more attractive in the inflammatory and immunology disease space. The FDA is facing low morale after job cuts under Trump's administration, raising concerns about delays in new medicine approvals. Additional news includes Sanofi challenging Novo with an FDA approval for a biosimilar, FDA rare cancer approvals, and Gilead passing on an option for Arcus' cancer drug. Thank you for tuning in to Pharma and Biotech Daily.

Living With Cystic Fibrosis
Michael Armstrong, wise beyond his years

Living With Cystic Fibrosis

Play Episode Listen Later Feb 24, 2025 26:45


Michael Armstrong is a 25-year-old pre-law student. He loves to read, paint, and play card games and video games. He was diagnosed with CF as an infant. We're going to talk about his CF journey and how life took a turn when he was being evaluated for a lung transplant in 2023 and 2024. Michael was featured in the 2025 Portraits of Cystic Fibrosis calendar, and in one of our first calendars he was featured, at about age five, with his brother. Michael's dad, Tom, was on our Board of Directors for many years, and I was lucky to see him just the other day. Thanks for sharing your story, Michael. Please like, subscribe, and comment on our podcasts! Please consider making a donation: https://thebonnellfoundation.org/donate/ The Bonnell Foundation website: https://thebonnellfoundation.org Email us at: thebonnellfoundation@gmail.com Thanks to our sponsors: Vertex: https://www.vrtx.com Viatris: https://www.viatris.com/en

Chip Stock Investor Podcast
Episode 276: Will A New Pain Medication Release Make Vertex Pharma Stock Soar In 2025? VRTX Stock Analysis

Chip Stock Investor Podcast

Play Episode Listen Later Feb 24, 2025 10:51


Fund your account in five minutes or less at https://www.public.com/CSI and get up to $10,000 when you transfer your old portfolio. Join us on Discord with Semiconductor Insider: https://ko-fi.com/chipstockinvestor/tiers In this episode of Chip Stock Investor, Kasey discusses Vertex Pharmaceuticals, a recent addition to the Chip Stock Investor portfolio. Vertex had two new approvals before the most recent earnings report, with more to come in the coming years. Will 2025 be a breakout year for Vertex? Supercharge your analysis with AI! Get 15% off your membership with our special link here: https://finchat.io/csi/ Affiliate links are sprinkled throughout this video. If something catches your eye and you decide to buy it, we might earn a little coffee money. Thanks for helping us (Kasey) fuel our caffeine addiction! Content in this video is for general information or entertainment only and is not specific or individual investment advice. Forecasts and information presented may not develop as predicted and there is no guarantee any strategies presented will be successful. All investing involves risk, and you could lose some or all of your principal. #vertexstock #vrtx #newpainmedication #pharmastocks #healthcarestocks #semiconductors #chips #investing #stocks #finance #financeeducation #investor #stockmarket #chipstockinvestor #semiconductorstocks Nick and Kasey own shares of Vertex Pharmaceuticals. Public Disclosure: All investing involves the risk of loss, including loss of principal. Brokerage services for US-listed, registered securities, options and bonds in a self-directed account are offered by Public Investing, Inc., member FINRA & SIPC. Public Investing offers a High-Yield Cash Account where funds from this account are automatically deposited into partner banks where they earn interest and are eligible for FDIC insurance; Public Investing is not a bank.
Cryptocurrency trading services are offered by Bakkt Crypto Solutions, LLC (NMLS ID 1890144), which is licensed to engage in virtual currency business activity by the NYSDFS. Cryptocurrency is highly speculative, involves a high degree of risk, and has the potential for loss of the entire amount of an investment. Cryptocurrency holdings are not protected by the FDIC or SIPC. A Bond Account is a self-directed brokerage account with Public Investing, member FINRA/SIPC. Deposits into this account are used to purchase 10 investment-grade and high-yield bonds. The 6%+ yield is the average, annualized yield to worst (YTW) across all ten bonds in the Bond Account, before fees, as of 12/13/2024. A bond's yield is a function of its market price, which can fluctuate; therefore, a bond's YTW is not "locked in" until the bond is purchased, and your yield at time of purchase may be different from the yield shown here. The "locked in" YTW is not guaranteed; you may receive less than the YTW of the bonds in the Bond Account if you sell any of the bonds before maturity or if the issuer defaults on the bond. Public Investing charges a markup on each bond trade. See our Fee Schedule. Bond Accounts are not recommendations of individual bonds or default allocations. The bonds in the Bond Account have not been selected based on your needs or risk profile. See Bond Account Disclosures to learn more. Alpha is an AI research tool powered by GPT-4. Alpha is experimental and may generate inaccurate responses. Output from Alpha should not be construed as investment research or recommendations, and should not serve as the basis for any investment decision. Public makes no warranties about the accuracy, completeness, quality, or timeliness of any Alpha output. Please independently evaluate and verify any such output for your own use case. *Terms and Conditions apply.

Halftime Report
The State of the Bull Market 02/11/25

Halftime Report

Play Episode Listen Later Feb 11, 2025 34:37


Scott Wapner and the Investment Committee debate the state of the bull market following Fed Chair Powell's testimony before the Senate Banking Committee. Plus, the desk debates the latest Calls of the Day on Schwab, Vertex, and Transocean. And later, the Committee reveals its latest portfolio moves.

Pharma Intelligence Podcasts
Scrip's Five Must-Know Things - 10 February 2025

Pharma Intelligence Podcasts

Play Episode Listen Later Feb 10, 2025 14:47


Audio roundup of selected biopharma industry content from Scrip over the business week ended 7 February 2025. In this episode: Novo outlines CagriSema strategy; Pfizer is back in the deal game; Vertex's pain drug faces opportunities and headwinds; uncertain times for Korean pharma; and a view on women's health at JPM. https://insights.citeline.com/scrip/podcasts/scrips-five-must-know-things/quick-listen-scrips-five-must-know-things-Y37SM5W2TZDXNE26O3W6E33TZE/ Playlist: soundcloud.com/citelinesounds/sets/scrips-five-must-know-things

The Top Line
Looking ahead at the most anticipated drug launches of 2025

The Top Line

Play Episode Listen Later Feb 7, 2025 14:07


If Fierce Pharma Marketing's annual list of the top 10 biggest potential drug launches of the coming year is any indication, biopharma may soon be in for a blockbuster boom. Altogether, the 10 meds that made the 2025 list stand to generate a whopping $29 billion in annual sales by the end of the decade. In this week's episode of The Top Line, we dig into the report's predictions. Fierce's Andrea Park and Eric Sagonowsky take a deep dive into the top three drugs on the list, all of which had already snagged their first FDA approvals by this episode's release, and highlight some of the prevailing trends from past years' reports, including repeat entries, popular indications, and drugs that never had the chance to meet their predicted potential. To learn more about the topics in this episode: Top 10 most anticipated drug launches of 2025; Vertex snags FDA nod for once-daily cystic fibrosis triplet Alyftrek as switch from Trikafta kicks off; Datroway, 2nd ADC from AstraZeneca-Daiichi collab, wins first FDA nod in breast cancer; Vertex scores FDA nod for long-awaited non-opioid pain reliever Journavx. This episode is brought to you by Cencora. Learn more at cencora.com/breakthrough.

Word Podcast
The rise of David Bowie and the Spiders From Mars through the eyes of Woody Woodmansey

Word Podcast

Play Episode Listen Later Feb 6, 2025 36:41


The teenage Woody Woodmansey was offered the job of under-foreman in the Vertex spectacle factory in Hull but then got a call from Bowie inviting him to move to London and play drums on his new album - “plus food and somewhere to stay”. It took him all weekend to decide. And involved some cultural readjustment when he did. 56 years later he's a founding member of Holy Holy and touring the UK in May – along with Tony Visconti and Glenn Gregory – performing songs from Bowie's breakthrough early ‘70s albums. He talks here about … … the life-changing sound behind the silver door of an air-raid shelter in Driffield. … supporting the Kinks in Bridlington and the Herd at Leeds University - and why Peter Frampton told him, “I'll see you at the top”. ... his first paid gig at the local girls' school. … the Spiders' instructional group outings to see ballet, mime and theatre. ... “never more than three takes”: how Bowie wrote and recorded and the sketches he drew for their stage gear.  … life at Haddon Hall and its “Gone With The Wind staircase”. … Yorkshire to London and the cultural collisions involved. … what Bowie realised was “the missing ingredient”. … Woody's checklist to assess Bowie's talents when he met him: “He wasn't Paul Rodgers or Roger Daltrey. He could write. He could communicate.” … “I'm not wearing that!” The day Mick Ronson packed his bags and left. Order Holy Holy tickets here:https://www.ticketmaster.co.uk/tony-visconti-tickets/artist/2003254Find out more about how to help us to keep the conversation going: https://www.patreon.com/wordinyourear Get bonus content on Patreon Hosted on Acast. See acast.com/privacy for more information.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Did you know that adding a simple Code Interpreter took o3 from 9.2% to 32% on FrontierMath? The Latent Space crew is hosting a hack night Feb 11th in San Francisco focused on CodeGen use cases, co-hosted with E2B and Edge AGI; watch E2B's new workshop and RSVP here! We're happy to announce that today's guest Samuel Colvin will be teaching his very first Pydantic AI workshop at the newly announced AI Engineer NYC Workshops day on Feb 22! 25 tickets left. If you're a Python developer, it's very likely that you've heard of Pydantic. Every month, it's downloaded >300,000,000 times, making it one of the top 25 PyPI packages. OpenAI uses it in its SDK for structured outputs, it's at the core of FastAPI, and if you've followed our AI Engineer Summit conference, Jason Liu of Instructor has given two great talks about it: "Pydantic is all you need" and "Pydantic is STILL all you need". Now, Samuel Colvin has raised $17M from Sequoia to turn Pydantic from an open source project into a full-stack AI engineering platform, with Logfire, their observability platform, and PydanticAI, their new agent framework. Logfire: bringing OTEL to AI. OpenTelemetry recently merged Semantic Conventions for LLM workloads, which provides standard definitions to track performance metrics like gen_ai.server.time_per_output_token. In Sam's view, at least 80% of new apps being built today have some sort of LLM usage in them, and just like web observability platforms were replaced by cloud-first ones in the 2010s, Logfire wants to do the same for AI-first apps. If you're interested in the technical details, Logfire migrated away from ClickHouse to DataFusion for their backend.
We spent some time on the importance of picking open source tools you understand and can actually contribute to upstream, rather than the more popular ones; listen in at ~43:19 for that part.

Agents are the killer app for graphs

Pydantic AI is their attempt at taking a lot of the learnings that LangChain and the other early LLM frameworks had, and putting Python best practices into it. At an API level, it's very similar to the other libraries: you can call LLMs, create agents, do function calling, do evals, etc.

They define an “Agent” as a container with a system prompt, tools, a structured result, and an LLM. Under the hood, each Agent is now a graph of function calls that can orchestrate multi-step LLM interactions. You can start simple, then move toward fully dynamic graph-based control flow if needed.

“We were compelled enough by graphs once we got them right that our agent implementation [...] is now actually a graph under the hood.”

Why Graphs?

* More natural for complex or multi-step AI workflows.
* Easy to visualize and debug with mermaid diagrams.
* Potential for distributed runs, or “waiting days” between steps in certain flows.

In parallel, you see folks like Emil Eifrem of Neo4j talk about GraphRAG as another place where graphs fit really well in the AI stack, so it might be time for more people to take them seriously.

Full Video Episode

Like and subscribe!

Chapters

* 00:00:00 Introductions
* 00:00:24 Origins of Pydantic
* 00:05:28 Pydantic's AI moment
* 00:08:05 Why build a new agents framework?
* 00:10:17 Overview of Pydantic AI
* 00:12:33 Becoming a believer in graphs
* 00:24:02 God Model vs Compound AI Systems
* 00:28:13 Why not build an LLM gateway?
* 00:31:39 Programmatic testing vs live evals
* 00:35:51 Using OpenTelemetry for AI traces
* 00:43:19 Why they don't use Clickhouse
* 00:48:34 Competing in the observability space
* 00:50:41 Licensing decisions for Pydantic and LogFire
* 00:51:48 Building Pydantic.run
* 00:55:24 Marimo and the future of Jupyter notebooks
* 00:57:44 London's AI scene

Show Notes

* Sam Colvin
* Pydantic
* Pydantic AI
* Logfire
* Pydantic.run
* Zod
* E2B
* Arize
* Langsmith
* Marimo
* Prefect
* GLA (Google Generative Language API)
* OpenTelemetry
* Jason Liu
* Sebastian Ramirez
* Bogomil Balkansky
* Hood Chatham
* Jeremy Howard
* Andrew Lamb

Transcript

Alessio [00:00:03]: Hey, everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:12]: Good morning. And today we're very excited to have Sam Colvin join us from Pydantic AI. Welcome. Sam, I heard that Pydantic is all we need. Is that true?Samuel [00:00:24]: I would say you might need Pydantic AI and Logfire as well, but it gets you a long way, that's for sure.Swyx [00:00:29]: Pydantic almost basically needs no introduction. It's almost 300 million downloads in December. And obviously, in the previous podcasts and discussions we've had with Jason Liu, he's been a big fan and promoter of Pydantic and AI.Samuel [00:00:45]: Yeah, it's weird because obviously I didn't create Pydantic originally for uses in AI, it predates LLMs. But it's like we've been lucky that it's been picked up by that community and used so widely.Swyx [00:00:58]: Actually, maybe we'll hear it. Right from you, what is Pydantic and maybe a little bit of the origin story?Samuel [00:01:04]: The best name for it, which is not quite right, is a validation library. And we get some tension around that name because it doesn't just do validation, it will do coercion by default. We now have strict mode, so you can disable that coercion. But by default, if you say you want an integer field and you get in a string of 1, 2, 3, it will convert it to 123 and a bunch of other sensible conversions. And as you can imagine, the semantics around it. Exactly when you convert and when you don't, it's complicated, but because of that, it's more than just validation.
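Samuel's integer-coercion example can be sketched in plain Python. This is a toy illustration of the lenient-versus-strict semantics he describes, with an invented helper name; Pydantic's real validator is implemented in Rust and handles far more cases:

```python
# Toy illustration of lenient ("smart") coercion vs strict validation.
# Hypothetical helper; not Pydantic's actual implementation.

def coerce_int(value, strict=False):
    """Return an int, coercing str/float the way lenient mode would."""
    if isinstance(value, bool):  # bool is a subclass of int: reject it
        raise TypeError("bool is not an int")
    if isinstance(value, int):
        return value
    if strict:
        raise TypeError(f"strict mode: {type(value).__name__} is not int")
    if isinstance(value, str) and value.strip().lstrip("-").isdigit():
        return int(value)          # "123" -> 123
    if isinstance(value, float) and value.is_integer():
        return int(value)          # 2.0 -> 2
    raise TypeError(f"cannot coerce {value!r} to int")

print(coerce_int("123"))   # lenient mode coerces the string
try:
    coerce_int("123", strict=True)
except TypeError as e:
    print("strict:", e)
```

In Pydantic itself, the equivalent switch is strict mode, which can be enabled per model or per field.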
Back in 2017, when I first started it, the different thing it was doing was using type hints to define your schema. That was controversial at the time. It was genuinely disapproved of by some people. I think the success of Pydantic and libraries like FastAPI that build on top of it means that today that's no longer controversial in Python. And indeed, lots of other people have copied that route, but yeah, it's a data validation library. It uses type hints for the most part and obviously does all the other stuff you want, like serialization on top of that. But yeah, that's the core.Alessio [00:02:06]: Do you have any fun stories on how JSON schemas ended up being kind of like the structured output standard for LLMs? And were you involved in any of these discussions? Because I know OpenAI was, you know, one of the early adopters. So did they reach out to you? Was there kind of like a structured output console in open source that people were talking about or was it just random?Samuel [00:02:26]: No, very much not. So I originally didn't implement JSON schema inside Pydantic, and then Sebastian, Sebastian Ramirez of FastAPI, came along and like the first I ever heard of him was over a weekend. I got like 50 emails from him as he was committing to Pydantic, adding JSON schema long pre version one. So the reason it was added was for OpenAPI, which is obviously closely akin to JSON schema. And then, yeah, I don't know why it was JSON that got picked up and used by OpenAI. It was obviously very convenient for us. That's because it meant that not only can you do the validation, but because Pydantic will generate you the JSON schema, it kind of can be one source of truth for structured outputs and tools.Swyx [00:03:09]: Before we dive in further on the AI side of things, something I'm mildly curious about: obviously, there's Zod in JavaScript land.
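The "one source of truth" point, type hints driving both validation and the JSON schema handed to an LLM for structured outputs, can be illustrated with the stdlib alone. A toy mapping with none of the real edge cases Pydantic handles:

```python
# Minimal sketch: derive a JSON schema from a type-hinted class, the way
# Pydantic derives one from a model.  Toy mapping, not Pydantic's output.
from dataclasses import dataclass
from typing import get_type_hints
import json

PY_TO_JSON = {int: "integer", str: "string", float: "number", bool: "boolean"}

def json_schema(cls) -> dict:
    props = {name: {"type": PY_TO_JSON[tp]}
             for name, tp in get_type_hints(cls).items()}
    return {"title": cls.__name__, "type": "object",
            "properties": props, "required": list(props)}

@dataclass
class User:
    name: str
    age: int

print(json.dumps(json_schema(User), indent=2))
```

The same class definition could then be used to validate whatever the model sends back, which is the one-source-of-truth property being described.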
Every now and then there is a new sort of in vogue validation library that that takes over for quite a few years and then maybe like some something else comes along. Is Pydantic? Is it done like the core Pydantic?Samuel [00:03:30]: I've just come off a call where we were redesigning some of the internal bits. There will be a v3 at some point, which will not break people's code half as much as v2 as in v2 was the was the massive rewrite into Rust, but also fixing all the stuff that was broken back from like version zero point something that we didn't fix in v1 because it was a side project. We have plans to move some of the basically store the data in Rust types after validation. Not completely. So we're still working to design the Pythonic version of it, in order for it to be able to convert into Python types. So then if you were doing like validation and then serialization, you would never have to go via a Python type we reckon that can give us somewhere between three and five times another three to five times speed up. That's probably the biggest thing. Also, like changing how easy it is to basically extend Pydantic and define how particular types, like for example, NumPy arrays are validated and serialized. But there's also stuff going on. And for example, Jitter, the JSON library in Rust that does the JSON parsing, has SIMD implementation at the moment only for AMD64. So we can add that. We need to go and add SIMD for other instruction sets. So there's a bunch more we can do on performance. I don't think we're going to go and revolutionize Pydantic, but it's going to continue to get faster, continue, hopefully, to allow people to do more advanced things. We might add a binary format like CBOR for serialization for when you'll just want to put the data into a database and probably load it again from Pydantic. 
So there are some things that will come along, but for the most part, it should just get faster and cleaner.Alessio [00:05:04]: From a focus perspective, I guess, as a founder too, how did you think about the AI interest rising? And then how do you kind of prioritize, okay, this is worth going into more, and we'll talk about Pydantic AI and all of that. What was maybe your early experience with LLMs, and when did you figure out, okay, this is something we should take seriously and focus more resources on?Samuel [00:05:28]: I'll answer that, but I'll answer what I think is a kind of parallel question, which is Pydantic's weird, because Pydantic existed, obviously, before I was starting a company. I was working on it in my spare time, and then beginning of 22, I started working on the rewrite in Rust. And I worked on it full-time for a year and a half, and then once we started the company, people came and joined. And it was a weird project, because it would never get signed off inside a startup. Like, we're going to go off and three engineers are going to work full-on for a year in Python and Rust, writing like 30,000 lines of Rust, just to release a free, open-source Python library. The result of that has been excellent for us as a company, right? As in, it's made us remain entirely relevant. And it's like, Pydantic is not just used in the SDKs of all of the AI libraries, but I can't say which one, but one of the big foundational model companies, when they upgraded from Pydantic v1 to v2, their number one internal metric of performance is time to first token. That went down by 20%. So you think about all of the actual AI going on inside, and yet at least 20% of the CPU, or at least the latency inside requests, was actually Pydantic, which shows like how widely it's used. So we've benefited from doing that work, although it would never have made financial sense in most companies.
In answer to your question about like, how do we prioritize AI, I mean, the honest truth is we've spent a lot of the last year and a half building. Good general purpose observability inside LogFire and making Pydantic good for general purpose use cases. And the AI has kind of come to us. Like we just, not that we want to get away from it, but like the appetite, uh, both in Pydantic and in LogFire to go and build with AI is enormous because it kind of makes sense, right? Like if you're starting a new greenfield project in Python today, what's the chance that you're using GenAI 80%, let's say, globally, obviously it's like a hundred percent in California, but even worldwide, it's probably 80%. Yeah. And so everyone needs that stuff. And there's so much yet to be figured out so much like space to do things better in the ecosystem in a way that like to go and implement a database that's better than Postgres is a like Sisyphean task. Whereas building, uh, tools that are better for GenAI than some of the stuff that's about now is not very difficult. Putting the actual models themselves to one side.Alessio [00:07:40]: And then at the same time, then you released Pydantic AI recently, which is, uh, um, you know, agent framework and early on, I would say everybody like, you know, Langchain and like, uh, Pydantic kind of like a first class support, a lot of these frameworks, we're trying to use you to be better. What was the decision behind we should do our own framework? Were there any design decisions that you disagree with any workloads that you think people didn't support? Well,Samuel [00:08:05]: it wasn't so much like design and workflow, although I think there were some, some things we've done differently. Yeah. I think looking in general at the ecosystem of agent frameworks, the engineering quality is far below that of the rest of the Python ecosystem. 
There's a bunch of stuff that we have learned how to do over the last 20 years of building Python libraries and writing Python code that seems to be abandoned by people when they build agent frameworks. Now I can kind of respect that, particularly in the very first agent frameworks, like Langchain, where they were literally figuring out how to go and do this stuff. It's completely understandable that you would like basically skip some stuff.Samuel [00:08:42]: I'm shocked by the like quality of some of the agent frameworks that have come out recently from like well-respected names, which it just seems to be opportunism and I have little time for that, but like the early ones, like I think they were just figuring out how to do stuff and just as lots of people have learned from Pydantic, we were able to learn a bit from them. I think from like the gap we saw and the thing we were frustrated by was the production readiness. And that means things like type checking, even if type checking makes it hard. Like Pydantic AI, I will put my hand up now and say it has a lot of generics and you need to, it's probably easier to use it if you've written a bit of Rust and you really understand generics, but like, and that is, we're not claiming that that makes it the easiest thing to use in all cases, we think it makes it good for production applications in big systems where type checking is a no-brainer in Python. But there are also a bunch of stuff we've learned from maintaining Pydantic over the years that we've gone and done. So every single example in Pydantic AI's documentation is run on Python. As part of tests and every single print output within an example is checked during tests. So it will always be up to date. 
And then a bunch of things that, like I say, are standard best practice within the rest of the Python ecosystem, but I'm not followed surprisingly by some AI libraries like coverage, linting, type checking, et cetera, et cetera, where I think these are no-brainers, but like weirdly they're not followed by some of the other libraries.Alessio [00:10:04]: And can you just give an overview of the framework itself? I think there's kind of like the. LLM calling frameworks, there are the multi-agent frameworks, there's the workflow frameworks, like what does Pydantic AI do?Samuel [00:10:17]: I glaze over a bit when I hear all of the different sorts of frameworks, but I like, and I will tell you when I built Pydantic, when I built Logfire and when I built Pydantic AI, my methodology is not to go and like research and review all of the other things. I kind of work out what I want and I go and build it and then feedback comes and we adjust. So the fundamental building block of Pydantic AI is agents. The exact definition of agents and how you want to define them. is obviously ambiguous and our things are probably sort of agent-lit, not that we would want to go and rename them to agent-lit, but like the point is you probably build them together to build something and most people will call an agent. So an agent in our case has, you know, things like a prompt, like system prompt and some tools and a structured return type if you want it, that covers the vast majority of cases. There are situations where you want to go further and the most complex workflows where you want graphs and I resisted graphs for quite a while. I was sort of of the opinion you didn't need them and you could use standard like Python flow control to do all of that stuff. I had a few arguments with people, but I basically came around to, yeah, I can totally see why graphs are useful. 
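The agent-as-container description lends itself to a small sketch. All names here are hypothetical and this is not Pydantic AI's actual API, just the shape of the container being described: a system prompt, registered tools, a structured result type, and a model:

```python
# Hypothetical sketch of the "agent = system prompt + tools + structured
# result + model" container described above.  Not Pydantic AI's real API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    model: str
    system_prompt: str
    result_type: type
    tools: dict[str, Callable] = field(default_factory=dict)

    def tool(self, fn: Callable) -> Callable:
        """Register a function the model may call during a run."""
        self.tools[fn.__name__] = fn
        return fn

agent = Agent(model="openai:gpt-4o",
              system_prompt="Be concise.",
              result_type=str)

@agent.tool
def roll_die() -> int:
    """Example tool the LLM could invoke mid-run."""
    return 4

print(sorted(agent.tools))  # ['roll_die']
```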
But then we have the problem that by default, they're not type safe, because if you have a like add_edge method where you give the names of two different nodes, there's no type checking, right? Even if you go and do some, I'm not, not all the graph libraries are AI specific. So there's a graph library that does like a basic runtime type checking, ironically using Pydantic to try and make up for the fact that fundamentally graphs are not type safe. Well, I like Pydantic, but that's not a real solution, to have to go and run the code to see if it's safe. There's a reason that static type checking is so powerful. And so we kind of, from a lot of iteration, eventually came up with a system of using normally data classes to define nodes, where you return the next node you want to call, and where we're able to go and introspect the return type of a node to basically build the graph. And so the graph is, yeah, inherently type safe. And once we got that right, I'm incredibly excited about graphs. I think there's like masses of use cases for them, both in gen AI and other development, but also software's all going to have to interact with gen AI, right? It's going to be like web. There'll no longer be like a web department in a company; there's just like all the developers are building for web, building with databases. The same is going to be true for gen AI.Alessio [00:12:33]: Yeah. I see on your docs, you call an agent a container that contains a system prompt, function tools, structured result, dependency type, model, and then model settings. Are the graphs in your mind different agents? Are they different prompts for the same agent? What are like the structures in your mind?Samuel [00:12:52]: So we were compelled enough by graphs once we got them right that we actually merged the PR this morning.
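The technique Samuel describes, dataclass nodes whose run method's return annotation declares the possible next nodes, can be reconstructed in a few lines with typing introspection. A simplified sketch of the idea, not the real graph library:

```python
# Sketch of "introspect the return type of each node to build the graph".
# Each node's run() annotates which node(s) it can return; the edges come
# straight from those type hints, so a type checker sees them too.
from dataclasses import dataclass
from typing import Union, get_type_hints, get_args, get_origin

class End:  # sentinel node: the graph is finished
    pass

def edges(node_cls) -> list[str]:
    """Read a node's outgoing edges from its run() return annotation."""
    ret = get_type_hints(node_cls.run).get("return")
    targets = get_args(ret) if get_origin(ret) is Union else (ret,)
    return [t.__name__ for t in targets]

@dataclass
class Fetch:
    def run(self) -> "Process":
        return Process()

@dataclass
class Process:
    def run(self) -> "Union[Fetch, End]":  # retry loop or finish
        return End()

for cls in (Fetch, Process):
    print(cls.__name__, "->", edges(cls))
```

Because the edges live in annotations, mistyping a successor node is a static type error rather than something you discover at runtime, which is the property being argued for.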
That means our agent implementation without changing its API at all is now actually a graph under the hood as it is built using our graph library. So graphs are basically a lower level tool that allow you to build these complex workflows. Our agents are technically one of the many graphs you could go and build. And we just happened to build that one for you because it's a very common, commonplace one. But obviously there are cases where you need more complex workflows where the current agent assumptions don't work. And that's where you can then go and use graphs to build more complex things.Swyx [00:13:29]: You said you were cynical about graphs. What changed your mind specifically?Samuel [00:13:33]: I guess people kept giving me examples of things that they wanted to use graphs for. And my like, yeah, but you could do that in standard flow control in Python became a like less and less compelling argument to me because I've maintained those systems that end up with like spaghetti code. And I could see the appeal of this like structured way of defining the workflow of my code. And it's really neat that like just from your code, just from your type hints, you can get out a mermaid diagram that defines exactly what can go and happen.Swyx [00:14:00]: Right. Yeah. You do have very neat implementation of sort of inferring the graph from type hints, I guess. Yeah. Is what I would call it. Yeah. I think the question always is I have gone back and forth. I used to work at Temporal where we would actually spend a lot of time complaining about graph based workflow solutions like AWS step functions. And we would actually say that we were better because you could use normal control flow that you already knew and worked with. Yours, I guess, is like a little bit of a nice compromise. Like it looks like normal Pythonic code. But you just have to keep in mind what the type hints actually mean. 
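And once those edges are known, the mermaid diagram mentioned here is a string-formatting exercise. The edge map below is written out by hand for brevity:

```python
# Sketch: render a node graph as a mermaid flowchart, the kind of diagram
# the conversation says can be generated straight from type hints.

def to_mermaid(edges: dict[str, list[str]]) -> str:
    lines = ["graph TD"]
    for src, targets in edges.items():
        for dst in targets:
            lines.append(f"    {src} --> {dst}")
    return "\n".join(lines)

workflow = {
    "Fetch": ["Process"],
    "Process": ["Fetch", "End"],  # retry loop or finish
}
print(to_mermaid(workflow))
```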
And that's what we do with the quote unquote magic that the graph construction does.Samuel [00:14:42]: Yeah, exactly. And if you look at the internal logic of actually running a graph, it's incredibly simple. It's basically call a node, get a node back, call that node, get a node back, call that node. If you get an end, you're done. We will add in soon support for, well, basically storage so that you can store the state between each node that's run. And then the idea is you can then distribute the graph and run it across computers. And also, I mean, the other weird, the other bit that's really valuable is across time. Because it's all very well if you look at like lots of the graph examples that like Claude will give you. If it gives you an example, it gives you this lovely enormous mermaid chart of like the workflow, for example, managing returns if you're an e-commerce company. But what you realize is some of those lines are literally one function calls another function. And some of those lines are wait six days for the customer to print their like piece of paper and put it in the post. And if you're writing like your demo. Project or your like proof of concept, that's fine because you can just say, and now we call this function. But when you're building when you're in real in real life, that doesn't work. And now how do we manage that concept to basically be able to start somewhere else in the in our code? Well, this graph implementation makes it incredibly easy because you just pass the node that is the start point for carrying on the graph and it continues to run. 
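The runner loop and the "wait six days" resumption share one shape: call a node, get a node back, stop on an end marker, and resume later by passing in whichever node you stored. A stdlib sketch with invented node names, echoing the returns-management example:

```python
# Sketch of the run loop described: call a node, get the next node back,
# stop on End -- and resume days later simply by passing in a start node.
from dataclasses import dataclass

@dataclass
class End:
    value: str

@dataclass
class AwaitCustomer:          # e.g. "wait six days for the paperwork"
    order_id: int
    def run(self):
        return Refund(self.order_id)

@dataclass
class Refund:
    order_id: int
    def run(self):
        return End(f"refunded order {self.order_id}")

def run_graph(node):
    """The whole runner: call a node, get a node, stop on End."""
    while not isinstance(node, End):
        node = node.run()
    return node.value

# Days later, a new process resumes from the stored node:
print(run_graph(AwaitCustomer(order_id=42)))   # refunded order 42
```

Persisting the pending node (the missing piece Samuel says is coming) would only require serializing its dataclass fields between runs.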
So it's things like that where I was like, yeah, I can just imagine how things I've done in the past would be fundamentally easier to understand if we had done them with graphs.Swyx [00:16:07]: You say imagine, but like right now, can Pydantic AI actually resume, you know, six days later, like you said, or is this just like a theoretical thing we can get to someday?Samuel [00:16:16]: I think it's basically Q&A. So there's an AI that's asking the user a question and effectively you then call the CLI again to continue the conversation. And it basically instantiates the node and calls the graph with that node again. Now, we don't have the logic yet for effectively storing state in the database between individual nodes; that we're going to add soon. But like the rest of it is basically there.Swyx [00:16:37]: It does make me think that not only are you competing with Langchain now and obviously Instructor, but now you're going into sort of the more like orchestrated things like Airflow, Prefect, Dagster, those guys.Samuel [00:16:52]: Yeah, I mean, we're good friends with the Prefect guys, and Temporal have the same investors as us. And I'm sure that my investor Bogomol would not be too happy if I was like, oh, yeah, by the way, as well as trying to take on Datadog, we're also going off and trying to take on Temporal and everyone else doing that. Obviously, we're not doing all of the infrastructure of deploying that yet, at least. We're, you know, we're just building a Python library. And like what's crazy about our graph implementation is, sure, there's a bit of magic in like introspecting the return type, you know, extracting things from unions, stuff like that. But like the actual calls, as I say, is literally call a function and get back a thing and call that. It's like incredibly simple and therefore easy to maintain. The question is, how useful is it? Well, I don't know yet. I think we have to go and find out.
We've had a slew of people joining our Slack over the last few days and saying, tell me how good Pydantic AI is. How good is Pydantic AI versus Langchain? And I refuse to answer. That's your job to go and find that out. Not mine. We built a thing. I'm compelled by it, but I'm obviously biased. The ecosystem will work out what the useful tools are.Swyx [00:17:52]: Bogomol was my board member when I was at Temporal. And I think I think just generally also having been a workflow engine investor and participant in this space, it's a big space. Like everyone needs different functions. I think the one thing that I would say like yours, you know, as a library, you don't have that much control of it over the infrastructure. I do like the idea that each new agents or whatever or unit of work, whatever you call that should spin up in this sort of isolated boundaries. Whereas yours, I think around everything runs in the same process. But you ideally want to sort of spin out its own little container of things.Samuel [00:18:30]: I agree with you a hundred percent. And we will. It would work now. Right. As in theory, you're just like as long as you can serialize the calls to the next node, you just have to all of the different containers basically have to have the same the same code. I mean, I'm super excited about Cloudflare workers running Python and being able to install dependencies. And if Cloudflare could only give me my invitation to the private beta of that, we would be exploring that right now because I'm super excited about that as a like compute level for some of this stuff where exactly what you're saying, basically. You can run everything as an individual. Like worker function and distribute it. And it's resilient to failure, et cetera, et cetera.Swyx [00:19:08]: And it spins up like a thousand instances simultaneously. You know, you want it to be sort of truly serverless at once. 
Actually, I know we have some Cloudflare friends who are listening, so hopefully they'll get in front of the line. Especially.Samuel [00:19:19]: I was in Cloudflare's office last week shouting at them about other things that frustrate me. I have a love-hate relationship with Cloudflare. Their tech is awesome. But because I use it the whole time, I then get frustrated. So, yeah, I'm sure I will. I will. I will get there soon.Swyx [00:19:32]: There's a side tangent on Cloudflare. Is Python supported at full? I actually wasn't fully aware of what the status of that thing is.Samuel [00:19:39]: Yeah. So Pyodide, which is Python running inside the browser in scripting, is supported now by Cloudflare. They basically, they're having some struggles working out how to manage, ironically, dependencies that have binaries, in particular, Pydantic. Because these workers where you can have thousands of them on a given metal machine, you don't want to have a difference. You basically want to be able to have a share. Shared memory for all the different Pydantic installations, effectively. That's the thing they work out. They're working out. But Hood, who's my friend, who is the primary maintainer of Pyodide, works for Cloudflare. And that's basically what he's doing, is working out how to get Python running on Cloudflare's network.Swyx [00:20:19]: I mean, the nice thing is that your binary is really written in Rust, right? Yeah. Which also compiles the WebAssembly. Yeah. So maybe there's a way that you'd build... You have just a different build of Pydantic and that ships with whatever your distro for Cloudflare workers is.Samuel [00:20:36]: Yes, that's exactly what... So Pyodide has builds for Pydantic Core and for things like NumPy and basically all of the popular binary libraries. Yeah. It's just basic. And you're doing exactly that, right? You're using Rust to compile the WebAssembly and then you're calling that shared library from Python. 
And it's unbelievably complicated, but it works. Okay.Swyx [00:20:57]: Staying on graphs a little bit more, and then I wanted to go to some of the other features that you have in Pydantic AI. I see in your docs, there are sort of four levels of agents. There's single agents, there's agent delegation, programmatic agent handoff. That seems to be what OpenAI swarms would be like. And then the last one, graph-based control flow. Would you say that those are sort of the mental hierarchy of how these things go?Samuel [00:21:21]: Yeah, roughly. Okay.Swyx [00:21:22]: You had some expression around OpenAI swarms. Well.Samuel [00:21:25]: And indeed, OpenAI have got in touch with me and basically, maybe I'm not supposed to say this, but basically said that Pydantic AI looks like what swarms would become if it was production ready. So, yeah. I mean, like, yeah, which makes sense. Awesome. Yeah. I mean, in fact, it was specifically saying, how can we give people the same feeling that they were getting from swarms that led us to go and implement graphs? Because my, like, just call the next agent with Python code was not a satisfactory answer to people. So it was like, okay, we've got to go and have a better answer for that. It's not like, let us to get to graphs. Yeah.Swyx [00:21:56]: I mean, it's a minimal viable graph in some sense. What are the shapes of graphs that people should know? So the way that I would phrase this is I think Anthropic did a very good public service and also kind of surprisingly influential blog post, I would say, when they wrote Building Effective Agents. We actually have the authors coming to speak at my conference in New York, which I think you're giving a workshop at. Yeah.Samuel [00:22:24]: I'm trying to work it out. But yes, I think so.Swyx [00:22:26]: Tell me if you're not. 
yeah, I mean, like, that was the first, I think, authoritative view of, like, what kinds of graphs exist in agents and let's give each of them a name so that everyone is on the same page. So I'm just kind of curious if you have community names or top five patterns of graphs.Samuel [00:22:44]: I don't have top five patterns of graphs. I would love to see what people are building with them. But like, it's been it's only been a couple of weeks. And of course, there's a point is that. Because they're relatively unopinionated about what you can go and do with them. They don't suit them. Like, you can go and do lots of lots of things with them, but they don't have the structure to go and have like specific names as much as perhaps like some other systems do. I think what our agents are, which have a name and I can't remember what it is, but this basically system of like, decide what tool to call, go back to the center, decide what tool to call, go back to the center and then exit. One form of graph, which, as I say, like our agents are effectively one implementation of a graph, which is why under the hood they are now using graphs. And it'll be interesting to see over the next few years whether we end up with these like predefined graph names or graph structures or whether it's just like, yep, I built a graph or whether graphs just turn out not to match people's mental image of what they want and die away. We'll see.Swyx [00:23:38]: I think there is always appeal. Every developer eventually gets graph religion and goes, oh, yeah, everything's a graph. And then they probably over rotate and go go too far into graphs. And then they have to learn a whole bunch of DSLs. And then they're like, actually, I didn't need that. I need this. And they scale back a little bit.Samuel [00:23:55]: I'm at the beginning of that process. I'm currently a graph maximalist, although I haven't actually put any into production yet. 
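The agent shape described here, decide on a tool, return to the center, decide again, then exit, can be sketched with a scripted stand-in for the model:

```python
# Sketch of the "decide a tool, go back to the center, decide again,
# exit" loop.  The "model" is a scripted stand-in, not a real LLM.

def fake_model(history):
    """Pretend LLM: ask for a tool until a tool result is in context."""
    if not any(msg.startswith("tool:") for msg in history):
        return ("call", "get_time", ())
    return ("final", "It is 12:00", None)

TOOLS = {"get_time": lambda: "12:00"}

def agent_loop(user_msg, max_steps=5):
    history = [f"user: {user_msg}"]
    for _ in range(max_steps):                 # the "center" of the loop
        kind, payload, args = fake_model(history)
        if kind == "final":
            return payload                     # exit
        result = TOOLS[payload](*args)         # run the chosen tool
        history.append(f"tool: {payload} -> {result}")
    raise RuntimeError("step budget exceeded")

print(agent_loop("what time is it?"))   # It is 12:00
```

Viewed as a graph, this is exactly one predefined shape: a decision node with an edge to each tool and an edge to the exit.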
But yeah.Swyx [00:24:02]: This has a lot of philosophical connections with other work coming out of UC Berkeley on compound AI systems. I don't know if you know of or care. This is the Gartner world of things where they need some kind of industry terminology to sell it to enterprises. I don't know if you know about any of that.Samuel [00:24:24]: I haven't. I probably should. I should probably do it because I should probably get better at selling to enterprises. But no, no, I don't. Not right now.Swyx [00:24:29]: This is really the argument: that instead of putting everything in one model, you have more control and maybe more observability if you break everything out into composing little models and chaining them together. And obviously, then you need an orchestration framework to do that. Yeah.Samuel [00:24:47]: And it makes complete sense. And one of the things we've seen with agents is they work well when they work well. But when they don't, even if you have the observability through Logfire so that you can see what was going on, if you don't have a nice hook point to say, hang on, this is all gone wrong, you have a relatively blunt instrument of basically erroring when you exceed some kind of limit. But like what you need to be able to do is effectively iterate through these runs so that you can have your own control flow where you're like, OK, we've gone too far. And that's where one of the neat things about our graph implementation is you can basically call next in a loop rather than just running the full graph. And therefore, you have this opportunity to break out of it. But yeah, basically, it's the same point, which is like if you have too big a unit of work, to some extent, whether or not it involves gen AI. But obviously, it's particularly problematic in gen AI. You only find out afterwards, when you've spent quite a lot of time and or money, that it's gone off and done the wrong thing.Swyx [00:25:39]: Oh, drop on this.
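The "call next in a loop" hook point can be illustrated with a generator that yields after each step, so the caller's own control flow decides when to stop rather than an error being the only brake:

```python
# Sketch of "call next in a loop rather than running the full graph":
# driving a run one step at a time gives a hook point to bail out
# before more time or money is spent.

def graph_steps(start=0):
    """Yield after every node execution; the caller decides what's next."""
    state = start
    while True:
        state += 1          # stand-in for running one node / LLM call
        yield state

COST_LIMIT = 3
for state in graph_steps():
    print("completed step", state)
    if state >= COST_LIMIT:          # our own control flow, not an error
        print("budget reached, stopping early")
        break
```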
We're not going to resolve this here, but I'll drop this and then we can move on to the next thing. This is the common way that we developers talk about this. And then the machine learning researchers look at us and laugh and say, that's cute. And then they just train a bigger model and they wipe us out in the next training run. So I think there's a certain amount of: we are fighting the bitter lesson here. We're fighting AGI. And, you know, when AGI arrives, this will all go away. Obviously, on Latent Space, we don't really discuss that, because I think AGI is kind of this hand-wavy concept that isn't super relevant. But I think we have to respect that. For example, you could do chain of thought with graphs: you could manually orchestrate a nice little graph that does, like, reflect, think about whether you need more inference-time compute, you know, that's the hot term now, and then think again, and, you know, scale that up. Or you could train Strawberry and DeepSeek R1. Right.

Samuel [00:26:32]: I saw someone saying recently, oh, they were really optimistic about agents because models are getting faster exponentially. And it took a certain amount of self-control not to point out that it wasn't exponential. But my main point was: if models are getting faster as quickly as you say they are, then we don't need agents and we don't really need any of these abstraction layers. We can just give our model, you know, access to the Internet, cross our fingers and hope for the best. Agents, agent frameworks, graphs, all of this stuff is basically making up for the fact that right now the models are not that clever. In the same way that if you're running a customer service business and you have loads of people sitting answering telephones, the less well trained they are, the less that you trust them, the more that you need to give them a script to go through.
Whereas, you know, if you're running a bank and you have lots of customer service people who you don't trust that much, then you tell them exactly what to say. If you're doing high-net-worth banking, you just employ people who you think are going to be charming to other rich people and set them off to go and have coffee with people. Right. And the same is true of models. The more intelligent they are, the less we need to structure what they go and do and constrain the routes they take.

Swyx [00:27:42]: Yeah. Agree with that. So I'm happy to move on. So, the other parts of Pydantic AI that are worth commenting on, and this is like my last rant, I promise. So obviously, every framework needs to do its sort of model adapter layer, which is: oh, you can easily swap from OpenAI to Claude to Groq. You also have, which I didn't know about until I saw it in your docs, Google GLA, which is the Generative Language API. I assume that's AI Studio? Yes.

Samuel [00:28:13]: Google don't have good names for it. So Vertex is very clear. That seems to be the API that, like, some of the things use, although it returns 503 about 20% of the time. So... Vertex? No. Vertex, fine. But the... Oh, oh. GLA. Yeah. Yeah.

Swyx [00:28:28]: I agree with that.

Samuel [00:28:29]: So we have, again, another example of where I think we go the extra mile in terms of engineering: on every commit, at least every commit to main, we run tests against the live models. Not lots of tests, but like a handful of them. Oh, okay. And we had a point last week where GLA was failing every single run; one of the tests would fail. And I think we might even have commented that one out at the moment. So all of the models fail more often than you might expect, but that one seems to be particularly likely to fail.
But Vertex is the same API, but much more reliable.

Swyx [00:29:01]: My rant here is that versions of this appear in Langchain, and every single framework has to have its own little version of that. I would put to you, and this can be agree-to-disagree, that this is not needed in Pydantic AI. I would much rather you adopt a layer like LiteLLM, or, what's the other one in JavaScript, Portkey. And that's their job. They focus on that one thing, and they normalize APIs for you. All new models are automatically added, and you don't have to duplicate this inside of your framework. So for example, if I wanted to use DeepSeek, I'm out of luck, because Pydantic AI doesn't have DeepSeek yet.

Samuel [00:29:38]: Yeah, it does.

Swyx [00:29:39]: Oh, it does. Okay. I'm sorry. But you know what I mean? Should this live in your code, or should it live in a layer that's kind of your API gateway, that's a defined piece of infrastructure that people have?

Samuel [00:29:49]: I think if a company who were well known, who were respected by everyone, had come along at the right time, maybe a year and a half ago, and said, we're going to be the universal AI layer, that would have been a credible thing to do. I've heard varying reports of LiteLLM, is the truth. And it didn't seem to have exactly the type safety that we needed. Also, as I understand it, and again, I haven't looked into it in great detail, part of their business model is proxying the request through their own system to do the generalization. That would be an enormous put-off to an awful lot of people. Honestly, the truth is I don't think it is that much work unifying the models. I get where you're coming from. I kind of see your point. I think the truth is that everyone is centralizing around OpenAI's API. OpenAI's API is the one to copy. So DeepSeek support that. Groq support that. Ollama also does it.
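The centralization Samuel describes means that, in practice, switching providers is often just a different base URL and API key on the same OpenAI-style client. A plain-Python sketch of that idea (the endpoints and env-var names shown are illustrative; check each provider's docs before relying on them):

```python
# Sketch of "everyone centralizes on OpenAI's API": switching providers is
# often just a different base URL and key on the same OpenAI-compatible
# client. Endpoints and env-var names here are illustrative only.

from dataclasses import dataclass

@dataclass
class Provider:
    base_url: str
    api_key_env: str  # environment variable holding the key

PROVIDERS = {
    "openai":   Provider("https://api.openai.com/v1", "OPENAI_API_KEY"),
    "deepseek": Provider("https://api.deepseek.com/v1", "DEEPSEEK_API_KEY"),
    "ollama":   Provider("http://localhost:11434/v1", ""),  # local, no key
}

def client_config(name: str) -> dict:
    """Return kwargs you could feed to an OpenAI-compatible client."""
    p = PROVIDERS[name]
    return {"base_url": p.base_url, "api_key_env": p.api_key_env}

print(client_config("deepseek")["base_url"])  # https://api.deepseek.com/v1
```

With the real OpenAI SDK, the equivalent move is passing a different `base_url` and `api_key` when constructing the client.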
I mean, if there is that library right now, it's more or less the OpenAI SDK. And it's very high quality. It's well type-checked. It uses Pydantic, so I'm biased. But I mean, I think it's pretty well respected anyway.

Swyx [00:30:57]: There's different ways to do this. Because also, it's not just about normalizing the APIs. You have to do secret management and all that stuff.

Samuel [00:31:05]: Yeah. And there's also Vertex and Bedrock, which, to one extent or another, effectively host multiple models, but they don't unify the API. But they do unify the auth, as I understand it. Although we're halfway through doing Bedrock, so I don't know it that well. But they're kind of weird hybrids, because they support multiple models, but, like I say, the auth is centralized.

Swyx [00:31:28]: Yeah, I'm surprised they don't unify the API. That seems like something that I would do. You know, we could discuss all this all day. There's a lot of APIs. I agree.

Samuel [00:31:36]: It would be nice if there was a universal one that we didn't have to go and build.

Alessio [00:31:39]: And I guess the other side of routing models and picking models is evals. How do you actually figure out which one you should be using? I know you have one. First of all, you have very good support for mocking in unit tests, which is something that a lot of other frameworks don't do. So, you know, my favorite Ruby library is VCR, because it just lets me store the HTTP requests and replay them. That part I'll kind of skip. I think you have this TestModel, where, just through Python, you try and figure out what the model might respond, without actually calling the model. And then you have the FunctionModel, where people can kind of customize outputs. Any other fun stories maybe from there? Or is it just what you see is what you get, so to speak?

Samuel [00:32:18]: On those two, I think what you see is what you get.
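The TestModel/FunctionModel idea just mentioned, responding without ever calling a real model, can be shown framework-free: if the agent takes the model as a callable, tests can inject a canned one. All names below are illustrative, not Pydantic AI's actual API:

```python
# Framework-free version of the "function model" idea discussed above: the
# agent takes the model as a callable, so tests inject a deterministic stub
# instead of hitting a real API. All names here are illustrative.

from typing import Callable

def summarize(text: str, model: Callable[[str], str]) -> str:
    prompt = f"Summarize in one line: {text}"
    return model(prompt).strip()

# Production would pass a real LLM call; tests pass a canned response.
def fake_model(prompt: str) -> str:
    assert prompt.startswith("Summarize")  # can even assert on the prompt
    return " a one-line summary "

print(summarize("long document...", fake_model))  # a one-line summary
```

The same inversion is what makes record/replay tools like VCR work: the network boundary is a parameter, not a hard-coded dependency.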
On the evals, I think: watch this space. It's something that, again, I was somewhat cynical about for some time, and I still have my cynicism about some of it. Well, it's unfortunate that so many different things are called evals. It would be nice if we could agree what they are and what they're not. But look, I think it's a really important space. I think it's something that we're going to be working on soon, both in Pydantic AI and in LogFire, to try and support better, because it's an unsolved problem.

Alessio [00:32:45]: Yeah, you do say in your docs that anyone who claims to know for sure exactly how your evals should be defined can safely be ignored.

Samuel [00:32:52]: We'll delete that sentence when we tell people how to do their evals.

Alessio [00:32:56]: Exactly. I was like, we need a snapshot of this today. And so let's talk about evals. There's kind of like the vibe evals, which is what you do when you're building, right? Because you cannot really test it that many times to get statistical significance. And then there's the production evals. So you also have LogFire, which is kind of like your observability product, which I tried before. It's very nice. What are some of the learnings you've had from building an observability tool for LLMs? And as people think about evals, what are the right things to measure? What's the right number of samples that you need to actually start making decisions?

Samuel [00:33:33]: I'm not the best person to answer that, is the truth. So I'm not going to come in here and tell you that I think I know the answer on the exact number. I mean, we can do some back-of-the-envelope statistics calculations to work out that having 30 probably gets you most of the statistical value of having 200 for, you know, by definition, 15% of the work. But the exact "how many examples do you need?"
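A back-of-the-envelope version of Samuel's 30-versus-200 point: the standard error of an observed pass rate shrinks like 1/sqrt(n), so precision improves slowly once you're past a few dozen eval runs.

```python
# Back-of-the-envelope version of the 30-vs-200 sample point above: the
# standard error of an observed pass rate shrinks like 1/sqrt(n), so
# precision improves slowly once you're past a few dozen eval runs.

import math

def stderr(pass_rate: float, n: int) -> float:
    """Standard error of a pass rate measured over n eval runs."""
    return math.sqrt(pass_rate * (1 - pass_rate) / n)

p = 0.5  # worst case: maximum variance
for n in (30, 200):
    print(n, round(stderr(p, n), 3))  # 30 -> 0.091, 200 -> 0.035
```

So 200 runs cost roughly 6.7x as much as 30 but only cut the error bar by a factor of sqrt(200/30), about 2.6x.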
For your use case, that's a much harder question to answer, because it's deep within how the models operate. In terms of LogFire, one of the reasons we built LogFire the way we have, where we allow you to write SQL directly against your data and we're trying to build the powerful fundamentals of observability, is precisely because we know we don't know the answers. And so allowing people to go and innovate on how they're going to consume that stuff and how they're going to process it, we think that's valuable. Because even if we come along and offer you an evals framework on top of LogFire, it won't be right in all regards. And we want people to be able to go and innovate. Being able to write their own SQL, connected to the API, and effectively query the data like it's a database allows people to innovate on that stuff. And that's what allows us to do it as well. I mean, we do a bunch of testing of what's possible by basically writing SQL directly against LogFire, as any user could. I think the other really interesting bit that's going on in observability is that OpenTelemetry is centralizing around semantic attributes for GenAI. It's a relatively new project. A lot of it's still being added at the moment. But it's basically the idea that they unify how both SDKs and agent frameworks send observability data to any OpenTelemetry endpoint. And having that unification allows us to go and basically compare different libraries and compare different models much better. That stuff's in a very early stage of development. One of the things we're going to be working on pretty soon is, basically, I suspect Pydantic AI will be the first agent framework that implements those semantic attributes properly. Because, again, we control it, and we can say this is important for observability, whereas most of the other agent frameworks are not maintained by people who are trying to do observability.
With the exception of Langchain, where they have the observability platform, but they chose not to go down the OpenTelemetry route. So they're plowing their own furrow, and they're even further away from standardization.

Alessio [00:35:51]: Can you maybe just give a quick overview of how OTel ties into the AI workflows? There's kind of the question of what is a trace, and is a span an LLM call? Is it the agent? It's kind of like the broader thing you're tracking. How should people think about it?

Samuel [00:36:06]: Yeah, so they have a PR, that I think may have now been merged, from someone at IBM talking about remote agents and trying to support this concept of remote agents within GenAI. I'm not particularly compelled by that, because I don't think that that's by any means the common use case. But, like, I suppose it's fine for it to be there. The majority of the stuff in OTel is basically defining how you would instrument a given call to an LLM. So basically the actual LLM call: what data you would send to your telemetry provider, how you would structure that. Apart from this slightly odd stuff on remote agents, most of the agent-level consideration is not yet decided, effectively. And so there's a bit of ambiguity. Obviously, what's good about OTel is you can, in the end, send whatever attributes you like. But yeah, there's quite a lot of churn in that space, and exactly how we store the data. I think that one of the most interesting things, though, is that if you think about observability traditionally: sure, everyone would say our observability data is very important, we must keep it safe. But actually, companies work very hard to basically not have anything that sensitive in their observability data. So if you're a doctor in a hospital and you search for a drug for an STI, the SQL might be sent to the observability provider, but none of the parameters would.
It wouldn't have the patient number or their name or the drug. With GenAI, that distinction doesn't exist, because it's all just mixed up in the text. If you have that same patient asking an LLM what drug they should take, or how to stop smoking, you can't extract the PII and not send it to the observability platform. So the sensitivity of the data that's going to end up in observability platforms is going to be basically a different order of magnitude to what you would normally send to Datadog. Of course, you can make a mistake and send someone's password or their card number to Datadog, but that would be seen as a mistake. Whereas in GenAI, a lot of that data is going to be sent as a matter of course. And I think that's why companies like LangSmith are trying hard to offer observability on-prem, because there's a bunch of companies who are happy for Datadog to be cloud-hosted, but want self-hosting for this observability stuff with GenAI.

Alessio [00:38:09]: And are you doing any of that today? Because I know in each of the spans you have the number of tokens, you have the context; you're just storing everything. And then you're going to offer kind of a self-hosted version of the platform, basically. Yeah?

Samuel [00:38:23]: Yeah. So we have scrubbing roughly equivalent to what the other observability platforms have. So if we see password as the key, we won't send the value. But, like I said, that doesn't really work in GenAI. So we're accepting we're going to have to store a lot of data, and then we'll offer self-hosting for those people who can afford it and who need it.

Alessio [00:38:42]: And then this is, I think, the first time that most of the workload's performance depends on a third party. You know, if you're looking at Datadog data, usually it's your app that is driving the latency and the memory usage and all of that.
Here you're going to have spans that maybe take a long time to perform, because the GLA API is not working or because OpenAI is kind of overwhelmed. Do you do anything there, since the provider is almost the same across customers? Are you trying to surface these things for people and say, hey, this was a very slow span, but actually all customers using OpenAI right now are seeing the same thing, so maybe don't worry about it?

Samuel [00:39:20]: Not yet. We do a few things that people don't generally do in OTel. So we send information at the beginning of a span, as well as when it finishes. By default, OTel only sends you data when the span finishes. So if you think about a request which might take, like, 20 seconds, even if some of the intermediate spans finished earlier, you can't basically place them on the page until you get the top-level span. And so if you're using standard OTel, you can't show anything until those requests are finished. When those requests are taking a few hundred milliseconds, it doesn't really matter. But when you're doing GenAI calls, or when you're running a batch job that might take 30 minutes, that latency of not being able to see the span is crippling to understanding your application. And so we do a bunch of slightly complex stuff to basically send data about a span as it starts, which is closely related. Yeah.

Alessio [00:40:09]: Any thoughts on all the other people trying to build on top of OpenTelemetry in different languages, too? There's the OpenLLMetry project, which doesn't really roll off the tongue. But how do you see the future of these kinds of tools? Is everybody going to have to build? Why does everybody want to build?
They want to build their own open source observability thing to then sell?

Samuel [00:40:29]: I mean, we are not going off and trying to instrument the likes of the OpenAI SDK with the new semantic attributes, because at some point that's going to happen, and it's going to live inside OTel, and we might help with it. But we're a tiny team. We don't have time to go and do all of that work. So OpenLLMetry: interesting project. But I suspect eventually most of that instrumentation of the big SDKs will live, like I say, inside the main OpenTelemetry repo. What happens to the agent frameworks, what data you basically need at the framework level to get the context, is kind of unclear. I don't think we know the answer yet. But I mean, I was on the, I guess this is kind of semi-public, because I was on the OpenTelemetry call last week talking about GenAI. And there was someone from Arize talking about the challenges they have trying to get OpenTelemetry data out of Langchain, where it's not natively implemented. And obviously they're having quite a tough time. And I was realizing, I hadn't really realized this before, how lucky we are to primarily be talking about our own agent framework, where we have the control, rather than trying to go and instrument other people's.

Swyx [00:41:36]: Sorry, I actually didn't know about this semantic conventions thing. It looks like, yeah, it's merged into main OTel. What should people know about this? I had never heard of it before.

Samuel [00:41:45]: Yeah, I think it looks like a great start. I think there's some unknowns around how you send the messages that go back and forth, which is kind of the most important part. It's the most important thing of all. And that has moved out of attributes and into OTel events. OTel events, in turn, are moving from being on a span to being their own top-level API where you send data. So there's a bunch of churn still going on.
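For reference, the semantic attributes under discussion are just agreed key-value pairs attached to each LLM span. A sketch without the OTel SDK, using `gen_ai.*` names from the draft conventions of the time (these have since churned, so treat the exact names as illustrative, not authoritative):

```python
# Sketch of the GenAI semantic-convention idea: agreed attribute names
# attached to each LLM span, so any backend can interpret them. Names
# follow the draft `gen_ai.*` conventions of the time and have since
# churned -- illustrative only, not the current spec.

def llm_span_attributes(model: str, prompt_tokens: int, completion_tokens: int) -> dict:
    return {
        "gen_ai.system": "openai",
        "gen_ai.request.model": model,
        "gen_ai.usage.prompt_tokens": prompt_tokens,
        "gen_ai.usage.completion_tokens": completion_tokens,
    }

attrs = llm_span_attributes("gpt-4o", 512, 128)
print(attrs["gen_ai.usage.prompt_tokens"])  # 512
```

With the real OTel SDK you would set these via `span.set_attribute(...)`; the value of the convention is that every backend can find the token counts under the same keys.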
I'm impressed by how fast the OTel community is moving on this project. I guess they, like everyone else, get that this is important, and it's something that people are crying out to get instrumentation for. So I'm kind of pleasantly surprised at how fast they're moving, but it makes sense.

Swyx [00:42:25]: I'm just kind of browsing through the specification. I can already see that this basically bakes in whatever the previous paradigm was. So now they have gen_ai.usage.prompt_tokens and gen_ai.usage.completion_tokens. And obviously now we have reasoning tokens as well. And then only one form of sampling, which is top-p. You're basically baking in, or sort of reifying, things that you think are important today, but it's not a super foolproof way of doing this for the future. Yeah.

Samuel [00:42:54]: I mean, that's what's neat about OTel: you can always go and send another attribute, and that's fine. It's just there are a bunch that are agreed on. But I would say, you know, to come back to your previous point about whether or not we should be relying on one centralized abstraction layer: this stuff is moving so fast that if you start relying on someone else's standard, you risk basically falling behind, because you're relying on someone else to keep things up to date.

Swyx [00:43:14]: Or you fall behind because you've got other things going on.

Samuel [00:43:17]: Yeah, yeah. That's fair.

Swyx [00:43:19]: Any other observations about building LogFire, actually? Let's just talk about this. So you announced LogFire. I was only familiar with LogFire because of your Series A announcement; I actually thought you were making a separate company. I remember some amount of confusion when that came out. So to be clear, it's Pydantic LogFire, and the company is one company that has kind of two products: an open source thing and an observability thing, correct? Yeah. I was just kind of curious, any learnings building LogFire?
So the classic question is: do you use ClickHouse? Is that the standard persistence layer? Any learnings doing that?

Samuel [00:43:54]: We don't use ClickHouse. We started building our database with ClickHouse, moved off ClickHouse onto Timescale, which is a Postgres extension for analytical databases. Wow. And then moved off Timescale onto DataFusion. And we're basically now building, it's DataFusion, but it's kind of our own database. Bogomil is not entirely happy that we went through three databases before we chose one, I'll say that. But we've got to the right one in the end. I think we could have realized sooner that Timescale wasn't right, and the same with ClickHouse. They both taught us a lot, and we're in a great place now. But yeah, it's been a real journey on the database in particular.

Swyx [00:44:28]: Okay. So, you know, as a database nerd, I have to double-click on this, right? So ClickHouse is supposed to be the ideal backend for anything like this. And then moving from ClickHouse to Timescale is another counterintuitive move that I didn't expect, because Timescale is an extension on top of Postgres, not super meant for high-volume logging. But yeah, tell us about those decisions.

Samuel [00:44:50]: So at the time, ClickHouse did not have good support for JSON. I was speaking to someone yesterday and said ClickHouse doesn't have good support for JSON, and got roundly stepped on, because apparently it does now. So they've obviously gone and built proper JSON support. But back when we were trying to use it, I guess a year ago, or a bit more than a year ago, everything had to be a map, and maps are a pain when you're trying to look up JSON-type data. And obviously all these attributes, everything you're talking about there in terms of the GenAI stuff, you can choose to make them top-level columns if you want, but the simplest thing is just to put them all into a big JSON pile. And that was a problem with ClickHouse.
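The "big JSON pile" pattern, heterogeneous span attributes in one JSON column queried with SQL, can be sketched with stdlib sqlite3 (assuming a SQLite build with the JSON functions, which is standard in recent releases). LogFire's actual engine is DataFusion, not SQLite; this only shows the shape of the idea:

```python
# The "big JSON pile": arbitrary span attributes stored as one JSON column
# and queried with plain SQL, no pre-declared column per attribute.
# Sketched with stdlib sqlite3 (needs a SQLite build with the JSON
# functions, standard in recent releases); LogFire's real engine is
# DataFusion, so this shows the idea, not the implementation.

import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spans (name TEXT, attributes TEXT)")
conn.executemany(
    "INSERT INTO spans VALUES (?, ?)",
    [
        ("llm-call", json.dumps({"model": "gpt-4o", "prompt_tokens": 512})),
        ("db-query", json.dumps({"rows": 3})),  # different keys, same column
    ],
)

row = conn.execute(
    "SELECT json_extract(attributes, '$.prompt_tokens') "
    "FROM spans WHERE name = 'llm-call'"
).fetchone()
print(row[0])  # 512
```

The point of the pattern is that new attribute keys need no schema migration; the cost, as Samuel describes, is that the database's JSON functions have to be good.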
Also, ClickHouse had some really ugly edge cases. For example, by default, or at least until I complained about it a lot, ClickHouse thought that two nanoseconds was longer than one second, because they compared intervals just by the number, not the unit. And I complained about that a lot, and then they changed it to raise an error and just say you have to have the same unit. Then I complained a bit more, and, as I understand it, they now convert between units. But stuff like that, when a lot of what you're doing is comparing the duration of spans, was really painful. Also things like: you can't subtract two datetimes to get an interval; you have to use the dateSub function. The fundamental thing is that because we want our end users to write SQL, the quality of the SQL, how easy it is to write, matters way more to us than if you're building a platform on top where your developers are going to write the SQL, and once it's written and it's working, you don't mind too much. So I think that's one of the fundamental differences. The other problem that I have with both ClickHouse and, in fact, Timescale is that the ultimate architecture, the Snowflake-style architecture of binary data in object storage queried through some kind of nearby cache, they both have it, but it's closed source, and you only get it if you go and use their hosted versions. And so even if we had got through all the problems with Timescale or ClickHouse, we would end up with them wanting to take their 80% margin, and then we would be wanting to take ours, which would basically leave us less space for margin. Whereas DataFusion is properly open source; all of that same tooling is open source. And for us, as a team of people with a lot of Rust expertise, DataFusion, which is implemented in Rust, we can literally dive into it and go and change it.
So, for example, I found that there were some slowdowns in DataFusion's string comparison kernel for doing string-contains. And it's just Rust code, and I could go and rewrite the string comparison kernel to be faster. Or, for example, DataFusion, when we started using it, didn't have JSON support. Obviously, as I've said, it's something we can do, and it's something we needed. I was able to go and implement that in a weekend using the JSON parser that we built for Pydantic Core. So it's the fact that DataFusion is, for us, the perfect mixture: a toolbox to build a database with, not a database. And we can go and implement stuff on top of it in a way that, if you were trying to do that in Postgres or in ClickHouse, I mean, ClickHouse would be easier, because it's C++, relatively modern C++. But as a team of people who are not C++ experts, that's much scarier than DataFusion for us.

Swyx [00:47:47]: Yeah, that's a beautiful rant.

Alessio [00:47:49]: That's funny. Most people don't think they have agency over these projects. They're kind of like, oh, I should use this or I should use that. They're not really like, what should I pick so that I can contribute the most back to it? But I think you obviously have an open-source-first mindset, so that makes a lot of sense.

Samuel [00:48:05]: I think if we were a better startup, faster-moving and just headlong determined to get in front of customers as fast as possible, we should have just started with ClickHouse. I hope that long-term we're in a better place for having worked with DataFusion. We're quite engaged now with the DataFusion community. Andrew Lamb, who maintains DataFusion, is an advisor to us. We're in a really good place now. But yeah, it's definitely slowed us down relative to just building on ClickHouse and moving as fast as we can.

Swyx [00:48:34]: OK, we're about to zoom out and do Pydantic.run and all the other stuff.
But, you know, my last question on LogFire is really that at some point you run out of community goodwill, just because, like, oh, I use Pydantic, I love Pydantic, I'm going to use LogFire. Then you start entering the territory of the Datadogs, the Sentrys and the Honeycombs. Yeah. So where are you going to really spike here? What's the differentiator?

Samuel [00:48:59]: I wasn't writing code in 2001, but I'm assuming that there were people talking about, like, web observability, and then web observability stopped being a thing, not because the web stopped being a thing, but because all observability had to do web. If you were talking to people in 2010 or 2012, they would have talked about cloud observability. Now that's not a term, because all observability is cloud-first. The same is going to happen to GenAI. And so whether or not you're trying to compete with Datadog or with Arize and LangSmith, you've got to do first-class, general-purpose observability with first-class support for AI. And as far as I know, we're the only people really trying to do that. I mean, I think Datadog is starting in that direction. And to be honest, I think Datadog is a much scarier company to compete with than the AI-specific observability platforms. Because in my opinion, and I've also heard this from lots of customers, AI-specific observability, where you don't see everything else going on in your app, is not actually that useful. Our hope is that we can build the first general-purpose observability platform with first-class support for AI, and that we have this open source heritage of putting developer experience first that other companies haven't had. For all I'm a fan of Datadog and what they've done: if you search "Datadog logging Python" and you just try, as a non-observability expert, to get something up and running with Datadog and Python, it's not trivial, right? That's something Sentry have done amazingly well.
But there's enormous space in most of observability to do DX better.

Alessio [00:50:27]: Since you mentioned Sentry, I'm curious how you thought about licensing and all of that. Obviously, you're MIT-licensed; you don't have any rolling license like Sentry has, where you can only use the open source version that's, like, a year old. Was that a hard decision?

Samuel [00:50:41]: So to be clear, Pydantic and Pydantic AI are MIT-licensed and, like, properly open source, and LogFire for now is completely closed source. And in fact, the struggles that Sentry have had with licensing, and the weird pushback the community gives when they take something that's closed source and make it source-available, just meant that we avoided that whole subject matter. I think the other way to look at it is that, in terms of either headcount or revenue or dollars in the bank, the amount of open source we do as a company puts us up there with the most prolific open source companies, like I say, per head. And so we didn't feel like we were morally obligated to make LogFire open source. We have Pydantic. Pydantic is a foundational library in Python. That, and now Pydantic AI, are our contribution to open source. And then LogFire is openly for-profit, right? As in, we're not claiming otherwise. We're not trying to walk a line of: it's open source, but really we want to make it hard to deploy, so you probably want to pay us. We're trying to be straight: it's something you pay for. We could change that at some point in the future, but it's not an immediate plan.

Alessio [00:51:48]: All right. So the first one I saw, this new thing, I don't know if it's a product you're building, is Pydantic.run, which is a Python browser sandbox. What was the inspiration behind that? We talk a lot about code interpreters for LLMs. I'm an investor in a company called E2B, which is a code sandbox as a service for remote execution. Yeah.
What's the Pydantic.run story?

Samuel [00:52:09]: So Pydantic.run is, again, completely open source. I have no interest in making it into a product. We just needed a sandbox to be able to demo LogFire in particular, but also Pydantic AI. So it doesn't have it yet, but I'm going to add basically a proxy to OpenAI and the other models so that you can run Pydantic AI in the browser, see how it works, tweak the prompt, et cetera, et cetera. And we'll have some kind of limit per day on what you can spend on it, or what the spend is. The other thing we wanted to b

Word In Your Ear
The rise of David Bowie and the Spiders From Mars through the eyes of Woody Woodmansey


Feb 6, 2025 · 36:41


The teenage Woody Woodmansey was offered the job of under-foreman in the Vertex spectacle factory in Hull but then got a call from Bowie inviting him to move to London and play drums on his new album - “plus food and somewhere to stay”. It took him all weekend to decide. And involved some cultural readjustment when he did. 56 years later he's a founding member of Holy Holy and touring the UK in May – along with Tony Visconti and Glenn Gregory – performing songs from Bowie's breakthrough early ‘70s albums. He talks here about …
… the life-changing sound behind the silver door of an air-raid shelter in Driffield.
… supporting the Kinks in Bridlington and the Herd at Leeds University – and why Peter Frampton told him, “I'll see you at the top”.
… his first paid gig at the local girls' school.
… the Spiders' instructional group outings to see ballet, mime and theatre.
… “never more than three takes”: how Bowie wrote and recorded and the sketches he drew for their stage gear.
… life at Haddon Hall and its “Gone With The Wind staircase”.
… Yorkshire to London and the cultural collisions involved.
… what Bowie realised was “the missing ingredient”.
… Woody's checklist to assess Bowie's talents when he met him: “He wasn't Paul Rodgers or Roger Daltrey. He could write. He could communicate.”
… “I'm not wearing that!” The day Mick Ronson packed his bags and left.
Order Holy Holy tickets here: https://www.ticketmaster.co.uk/tony-visconti-tickets/artist/2003254
Find out more about how to help us to keep the conversation going: https://www.patreon.com/wordinyourear

BioCentury This Week
Ep. 277 - Asia's NewCo Model, FDA Tipping Point, Vertex's Pain Drug


Feb 4, 2025 · 24:05


The boom in the creation of companies launched in the West based on assets sourced in Asia signals China's galloping speed of innovation. On the latest BioCentury This Week podcast, BioCentury's editors discuss the “NewCo Model”: who the players are, from CEOs and companies to investors; the areas of innovation the start-ups are tackling; and the evolution of the trend. (Read about the company that kicked off the trend, Arrivent, here.)

The editors also assess how personnel losses at FDA from the Trump administration's plans to slash government payrolls are likely to cause short- and long-term harm for the agency and the drug approval process. And a new pain therapy from Vertex is in the spotlight as the CF specialist enters new turf.

View full story: https://www.biocentury.com/article/654946

00:00 - Introduction
01:20 - Asia's NewCo Model
15:24 - FDA Tipping Point
19:29 - Vertex's Pain Drug

To submit a question to BioCentury's editors, email the BioCentury This Week team at podcasts@biocentury.com. Reach us by sending a text.

Lucky Paper Radio
Debriefing Vertex Philadelphia & 100 Ornithopters Unite


Feb 3, 2025 · 76:37


View all cards mentioned in this episode

Andy and Anthony recap Magic events they participated in this week. First, Anthony talks through his experience at Vertex Philadelphia, a ~100 person Cube tournament. Anthony had another solid run in his drafts, and the event was the first time his new Broadcast Shuffling method was tried at scale, to reasonable success. Andy shares his experience helping organize and play in a charity stream for the Tabletop Workers United strike hardship fund. He ended up playing in the event as an alternate and shares his complex experience drafting on stream.

Discussed in this episode:
Anthony's Shuffling Article
Anthony, Roja, and Arlo's episodes of Recross the Paths
Lucky Paper Events Page
The Cascade Cube
Anthony's 3-0 Decks from the Weekend on BlueSky
The Jund Cube
May's Fae Cube
May's Fae Cube episode of Uber Cube
Cubereviews Podcast
Eiganjo Drift
The Dungeoneer's Cube
Support the Tabletop Workers United Strike Hardship Fund on GoFundMe
Watch the 100 Ornithopters Unite Stream on YouTube

Check us out on Twitch and YouTube for paper Cube gameplay. You can find the hosts' Cubes on Cube Cobra: Andy's “Bun Magic” Cube and Anthony's “Regular” Cube. You can find both your hosts in the MTG Cube Talk Discord. Send in questions to the show at mail@luckypaper.co or our P.O. box: Lucky Paper, PO Box 4855, Baltimore, MD 21211. If you'd like to show your support for the show, please leave us a review on iTunes or wherever you listen. Musical production by DJ James Nasty.

Timestamps
0:00 - Intro
5:32 - Vertex: Philadelphia Tournament Report — Event Overview
10:38 - Anthony's trajectory and improvement as a player
13:11 - Anthony's draft of the Cascade Cube
18:22 - Anthony's draft of the Jund Cube
23:35 - Anthony's draft of May's Fae Cube
29:14 - Anthony's draft of Eiganjo Drift
34:10 - Anthony's draft of The Dungeoneer's Cube
37:37 - Andy's experience helping with the 100 Ornithopters Unite draft event

Squawk on the Street
Apple Shines, Pres. Trump-Nvidia CEO Meeting, Vertex CEO on Non-Opioid Painkiller Approval 1/31/25


Jan 31, 2025 · 42:44


Carl Quintanilla, Jim Cramer and David Faber led off the show with shares of Apple rising on better-than-expected quarterly results and revenue guidance -- despite an iPhone sales miss, hurt by a slump in China. President Trump and Nvidia CEO Jensen Huang were expected to hold a Friday meeting at the White House. Vertex Pharmaceuticals CEO Reshma Kewalramani joined the program to discuss FDA approval of her company's non-opioid painkiller -- marking the first time in decades that the U.S. has approved a new type of pain medication. Also in focus: The Fed and PCE, Trump tariffs deadline watch, Chevron and Exxon Mobil earnings, D.C. plane crash investigation. Squawk on the Street Disclaimer

The Financial Exchange Show
Should you fear the 'silver tsunami'?


Jan 31, 2025 · 38:32


Chuck Zodda and Mike Armstrong discuss the US probing whether DeepSeek got Nvidia chips from firms in Singapore. The FDA approved Journavx, a new non-opioid painkiller from Vertex. Walgreens tumbles after suspending its steady dividend to save cash. Boomers as Boogeymen: Should you fear the 'silver tsunami'? Kalshi, an online prediction market, will open its bets to brokerages. Paul LaMonica, Barron's, joins the show to chat about the wild week for Nvidia.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Outlasting Noam Shazeer, crowdsourcing Chat + AI with >1.4m DAU, and becoming the "Western DeepSeek" — with William Beauchamp, Chai Research


Jan 26, 2025 · 75:46


One last Gold sponsor slot is available for the AI Engineer Summit in NYC. Our last round of invites is going out soon - apply here - if you are building AI agents or AI eng teams, this will be the single highest-signal conference of the year for you!

While the world melts down over DeepSeek, few are talking about the OTHER notable group of former hedge fund traders who pivoted into AI and built a remarkably profitable consumer AI business with a tiny, incredibly cracked engineering team — Chai Research. In short order they have:

* Started a chat AI company well before Noam Shazeer started Character AI, and outlasted his departure.
* Crossed 1m DAU in 2.5 years - William updates us on the pod that they've hit 1.4m DAU now, another +40% from a few months ago. Revenue crossed >$22m.
* Launched the Chaiverse model crowdsourcing platform - taking 3-4 week A/B testing cycles down to 3-4 hours, and deploying >100 models a week.

While they're not paying million-dollar salaries, you can tell they're doing pretty well for an 11-person startup:

The Chai Recipe: Building infra for rapid evals

Remember how the central thesis of LMArena (formerly LMSYS) is that the only comprehensive way to evaluate LLMs is to let users try them out and pick winners?

At the core of Chai is a mobile app that looks like Character AI, but is actually the largest LLM A/B testing arena in the world, specialized in retaining chat users for Chai's use cases (therapy, assistant, roleplay, etc.). It's basically what LMArena would be if taken very, very seriously at one company (with $1m in prizes to boot).

Chai publishes occasional research on how they think about this, including talks at their Palo Alto office.

William expands upon this in today's podcast (34 mins in):

Fundamentally, the way I would describe it is when you're building anything in life, you need to be able to evaluate it. 
And through evaluation, you can iterate, we can look at benchmarks, and we can say the issues with benchmarks and why they may not generalize as well as one would hope in the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5000 users, and the users can rate it. And we can then have a really accurate ranking of like which model, or users finding more engaging or more entertaining. And it gets, you know, it's at this point now, where every day we're able to, I mean, we evaluate between 20 and 50 models, LLMs, every single day, right. So even though we've got only got a team of, say, five AI researchers, they're able to iterate a huge quantity of LLMs, right. So our team ships, let's just say minimum 100 LLMs a week is what we're able to iterate through. Now, before that moment in time, we might iterate through three a week, we might, you know, there was a time when even doing like five a month was a challenge, right? By being able to change the feedback loops to the point where it's not, let's launch these three models, let's do an A-B test, let's assign, let's do different cohorts, let's wait 30 days to see what the day 30 retention is, which is the kind of the, if you're doing an app, that's like A-B testing 101 would be, do a 30-day retention test, assign different treatments to different cohorts and come back in 30 days. So that's insanely slow. That's just, it's too slow. 
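The evaluation loop William describes (submit a model to the developer backend, put it in front of thousands of users, rank the models by their ratings) can be sketched as a simple approval-rate leaderboard. This is a minimal illustration with invented names; Chai's actual ranking pipeline is not public:

```python
from collections import defaultdict

def rank_models(feedback):
    """Rank candidate models by the fraction of positive user ratings.

    `feedback` is a list of (model_id, liked) pairs, e.g. collected from
    thumbs-up/down events in the app. All names here are illustrative.
    """
    wins = defaultdict(int)
    total = defaultdict(int)
    for model_id, liked in feedback:
        total[model_id] += 1
        if liked:
            wins[model_id] += 1
    # Sort by empirical approval rate, highest first.
    return sorted(total, key=lambda m: wins[m] / total[m], reverse=True)

feedback = [("model_a", True), ("model_a", True), ("model_a", False),
            ("model_b", True), ("model_b", False), ("model_b", False)]
print(rank_models(feedback))  # → ['model_a', 'model_b']
```

A real system would also need confidence intervals (a model with three ratings should not outrank one with three thousand), but the shape of the loop is the same: deploy, collect feedback, rank, iterate.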
And so we were able to get that 30-day feedback loop all the way down to something like three hours.

In Crowdsourcing the leap to Ten Trillion-Parameter AGI, William describes Chai's routing as a recommender system, which makes a lot more sense to us than previous pitches for model routing startups.

William is notably counter-consensus in a lot of his AI product principles:

* No streaming: chats appear all at once, to allow rejection sampling.
* No voice: Chai actually beat Character AI to introducing voice, but removed it after finding that it was far from a killer feature.
* Blending: “Something that we love to do at Chai is blending, which is, you know, it's the simplest way to think about it is you're going to end up, and you're going to pretty quickly see you've got one model that's really smart, one model that's really funny. How do you get the user an experience that is both smart and funny? Well, just 50% of the requests, you can serve them the smart model, 50% of the requests, you serve them the funny model.” (That's it!)

But chief above all is the recommender system. We also referenced Exa CEO Will Bryk's concept of SuperKnowledge.

Full Video version: on YouTube. Please like and subscribe!

Timestamps

* 00:00:04 Introductions and background of William Beauchamp
* 00:01:19 Origin story of Chai AI
* 00:04:40 Transition from finance to AI
* 00:11:36 Initial product development and idea maze for Chai
* 00:16:29 User psychology and engagement with AI companions
* 00:20:00 Origin of the Chai name
* 00:22:01 Comparison with Character AI and funding challenges
* 00:25:59 Chai's growth and user numbers
* 00:34:53 Key inflection points in Chai's growth
* 00:42:10 Multi-modality in AI companions and focus on user-generated content
* 00:46:49 Chaiverse developer platform and model evaluation
* 00:51:58 Views on AGI and the nature of AI intelligence
* 00:57:14 Evaluation methods and human feedback in AI development
* 01:02:01 Content creation and user experience in Chai
* 01:04:49 Chai Grant program and company culture
* 01:07:20 Inference optimization and compute costs
* 01:09:37 Rejection sampling and reward models in AI generation
* 01:11:48 Closing thoughts and recruitment

Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel, and today we're in the Chai AI office with my usual co-host, Swyx.swyx [00:00:14]: Hey, thanks for having us. It's rare that we get to get out of the office, so thanks for inviting us to your home. We're in the office of Chai with William Beauchamp. Yeah, that's right. You're founder of Chai AI, but previously, I think you're concurrently also running your fund?William [00:00:29]: Yep, so I was simultaneously running an algorithmic trading company, but I fortunately was able to kind of exit from that, I think just in Q3 last year. Yeah, congrats. Yeah, thanks.swyx [00:00:43]: So Chai has always been on my radar because, well, first of all, you do a lot of advertising, I guess, in the Bay Area, so it's working. Yep. And second of all, the reason I reached out to a mutual friend, Joyce, was because I'm just generally interested in the... 
...consumer AI space, chat platforms in general. I think there's a lot of inference insights that we can get from that, as well as human psychology insights, kind of a weird blend of the two. And we also share a bit of a history as former finance people crossing over. I guess we can just kind of start it off with the origin story of Chai.William [00:01:19]: Why decide working on a consumer AI platform rather than B2B SaaS? So just quickly touching on the background in finance. Sure. Originally, I'm from... I'm from the UK, born in London. And I was fortunate enough to go study economics at Cambridge. And I graduated in 2012. And at that time, everyone in the UK and everyone on my course, HFT, quant trading was really the big thing. It was like the big wave that was happening. So there was a lot of opportunity in that space. And throughout college, I'd sort of played poker. So I'd, you know, I dabbled as a professional poker player. And I was able to accumulate this sort of, you know, say $100,000 through playing poker. And at the time, as my friends would go work at companies like Jane Street or Citadel, I kind of did the maths. And I just thought, well, maybe if I traded my own capital, I'd probably come out ahead. I'd make more money than just going to work at Jane Street.swyx [00:02:20]: With 100k base as capital?William [00:02:22]: Yes, yes. That's not a lot. Well, it depends what strategies you're doing. And, you know, there is an advantage. There's an advantage to being small, right? Because there are, if you have a 10... Strategies that don't work in size. Exactly, exactly. So if you have a fund of $10 million, if you find a little anomaly in the market that you might be able to make 100k a year from, that's a 1% return on your 10 million fund. If your fund is 100k, that's 100% return, right? So being small, in some sense, was an advantage. So started off, and the, taught myself Python, and machine learning was like the big thing as well. 
Machine learning had really, it was the first, you know, big time machine learning was being used for image recognition, neural networks come out, you get dropout. And, you know, so this, this was the big thing that's going on at the time. So I probably spent my first three years out of Cambridge, just building neural networks, building random forests to try and predict asset prices, right, and then trade that using my own money. And that went well. And, you know, if you if you start something, and it goes well, you You try and hire more people. And the first people that came to mind was the talented people I went to college with. And so I hired some friends. And that went well and hired some more. And eventually, I kind of ran out of friends to hire. And so that was when I formed the company. And from that point on, we had our ups and we had our downs. And that was a whole long story and journey in itself. But after doing that for about eight or nine years, on my 30th birthday, which was four years ago now, I kind of took a step back to just evaluate my life, right? This is what one does when one turns 30. You know, I just heard it. I hear you. And, you know, I looked at my 20s and I loved it. It was a really special time. I was really lucky and fortunate to have worked with this amazing team, been successful, had a lot of hard times. And through the hard times, learned wisdom and then a lot of success and, you know, was able to enjoy it. And so the company was making about five million pounds a year. And it was just me and a team of, say, 15, like, Oxford and Cambridge educated mathematicians and physicists. It was like the real dream that you'd have if you wanted to start a quant trading firm. It was like...swyx [00:04:40]: Your own, all your own money?William [00:04:41]: Yeah, exactly. It was all the team's own money. We had no customers complaining to us about issues. There's no investors, you know, saying, you know, they don't like the risk that we're taking. 
We could. We could really run the thing exactly as we wanted it. It's like Susquehanna or like Rintec. Yeah, exactly. Yeah. And they're the companies that we would kind of look towards as we were building that thing out. But on my 30th birthday, I look and I say, OK, great. This thing is making as much money as kind of anyone would really need. And I thought, well, what's going to happen if we keep going in this direction? And it was clear that we would never have a kind of a big, big impact on the world. We can enrich ourselves. We can make really good money. Everyone on the team would be paid very, very well. Presumably, I can make enough money to buy a yacht or something. But this stuff wasn't that important to me. And so I felt a sort of obligation that if you have this much talent and if you have a talented team, especially as a founder, you want to be putting all that talent towards a good use. I looked at the time of like getting into crypto and I had a really strong view on crypto, which was that as far as a gambling device. This is like the most fun form of gambling invented in like ever super fun, I thought as a way to evade monetary regulations and banking restrictions. I think it's also absolutely amazing. So it has two like killer use cases, not so much banking the unbanked, but everything else, but everything else to do with like the blockchain and, and you know, web, was it web 3.0 or web, you know, that I, that didn't, it didn't really make much sense. And so instead of going into crypto, which I thought, even if I was successful, I'd end up in a lot of trouble. I thought maybe it'd be better to build something that governments wouldn't have a problem with. I knew that LLMs were like a thing. I think opening. I had said they hadn't released GPT-3 yet, but they'd said GPT-3 is so powerful. We can't release it to the world or something. Was it GPT-2? And then I started interacting with, I think Google had open source, some language models. 
They weren't necessarily LLMs, but they, but they were. But yeah, exactly. So I was able to play around with, but nowadays so many people have interacted with the chat GPT, they get it, but it's like the first time you, you can just talk to a computer and it talks back. It's kind of a special moment and you know, everyone who's done that goes like, wow, this is how it should be. Right. It should be like, rather than having to type on Google and search, you should just be able to ask Google a question. When I saw that I read the literature, I kind of came across the scaling laws and I think even four years ago. All the pieces of the puzzle were there, right? Google had done this amazing research and published, you know, a lot of it. Open AI was still open. And so they'd published a lot of their research. And so you really could be fully informed on, on the state of AI and where it was going. And so at that point I was confident enough, it was worth a shot. I think LLMs are going to be the next big thing. And so that's the thing I want to be building in, in that space. And I thought what's the most impactful product I can possibly build. And I thought it should be a platform. So I myself love platforms. I think they're fantastic because they open up an ecosystem where anyone can contribute to it. Right. So if you think of a platform like a YouTube, instead of it being like a Hollywood situation where you have to, if you want to make a TV show, you have to convince Disney to give you the money to produce it instead, anyone in the world can post any content they want to YouTube. And if people want to view it, the algorithm is going to promote it. Nowadays. You can look at creators like Mr. Beast or Joe Rogan. They would have never have had that opportunity unless it was for this platform. Other ones like Twitter's a great one, right? 
But I would consider Wikipedia to be a platform where instead of the Britannica encyclopedia, which is this, it's like a monolithic, you get all the, the researchers together, you get all the data together and you combine it in this, in this one monolithic source. Instead. You have this distributed thing. You can say anyone can host their content on Wikipedia. Anyone can contribute to it. And anyone can maybe their contribution is they delete stuff. When I was hearing like the kind of the Sam Altman and kind of the, the Muskian perspective of AI, it was a very kind of monolithic thing. It was all about AI is basically a single thing, which is intelligence. Yeah. Yeah. The more intelligent, the more compute, the more intelligent, and the more and better AI researchers, the more intelligent, right? They would speak about it as a kind of erased, like who can get the most data, the most compute and the most researchers. And that would end up with the most intelligent AI. But I didn't believe in any of that. I thought that's like the total, like I thought that perspective is the perspective of someone who's never actually done machine learning. Because with machine learning, first of all, you see that the performance of the models follows an S curve. So it's not like it just goes off to infinity, right? And the, the S curve, it kind of plateaus around human level performance. And you can look at all the, all the machine learning that was going on in the 2010s, everything kind of plateaued around the human level performance. And we can think about the self-driving car promises, you know, how Elon Musk kept saying the self-driving car is going to happen next year, it's going to happen next, next year. Or you can look at the image recognition, the speech recognition. You can look at. All of these things, there was almost nothing that went superhuman, except for something like AlphaGo. And we can speak about why AlphaGo was able to go like super superhuman. 
So I thought the most likely thing was going to be this, I thought it's not going to be a monolithic thing. That's like an encyclopedia Britannica. I thought it must be a distributed thing. And I actually liked to look at the world of finance for what I think a mature machine learning ecosystem would look like. So, yeah. So finance is a machine learning ecosystem because all of these quant trading firms are running machine learning algorithms, but they're running it on a centralized platform like a marketplace. And it's not the case that there's one giant quant trading company of all the data and all the quant researchers and all the algorithms and compute, but instead they all specialize. So one will specialize on high frequency training. Another will specialize on mid frequency. Another one will specialize on equity. Another one will specialize. And I thought that's the way the world works. That's how it is. And so there must exist a platform where a small team can produce an AI for a unique purpose. And they can iterate and build the best thing for that, right? And so that was the vision for Chai. So we wanted to build a platform for LLMs.Alessio [00:11:36]: That's kind of the maybe inside versus contrarian view that led you to start the company. Yeah. And then what was maybe the initial idea maze? Because if somebody told you that was the Hugging Face founding story, people might believe it. It's kind of like a similar ethos behind it. How did you land on the product feature today? And maybe what were some of the ideas that you discarded that initially you thought about?William [00:11:58]: So the first thing we built, it was fundamentally an API. So nowadays people would describe it as like agents, right? But anyone could write a Python script. They could submit it to an API. They could send it to the Chai backend and we would then host this code and execute it. So that's like the developer side of the platform. 
On their Python script, the interface was essentially text in and text out. An example would be the very first bot that I created. I think it was a Reddit news bot. And so it would first, it would pull the popular news. Then it would prompt whatever, like I just use some external API for like Burr or GPT-2 or whatever. Like it was a very, very small thing. And then the user could talk to it. So you could say to the bot, hi bot, what's the news today? And it would say, this is the top stories. And you could chat with it. Now four years later, that's like perplexity or something. That's like the, right? But back then the models were first of all, like really, really dumb. You know, they had an IQ of like a four year old. And users, there really wasn't any demand or any PMF for interacting with the news. So then I was like, okay. Um. So let's make another one. And I made a bot, which was like, you could talk to it about a recipe. So you could say, I'm making eggs. Like I've got eggs in my fridge. What should I cook? And it'll say, you should make an omelet. Right. There was no PMF for that. No one used it. And so I just kept creating bots. And so every single night after work, I'd be like, okay, I like, we have AI, we have this platform. I can create any text in textile sort of agent and put it on the platform. And so we just create stuff night after night. And then all the coders I knew, I would say, yeah, this is what we're going to do. And then I would say to them, look, there's this platform. You can create any like chat AI. You should put it on. And you know, everyone's like, well, chatbots are super lame. We want absolutely nothing to do with your chatbot app. No one who knew Python wanted to build on it. I'm like trying to build all these bots and no consumers want to talk to any of them. 
And then my sister who at the time was like just finishing college or something, I said to her, I was like, if you want to learn Python, you should just submit a bot for my platform. And she, she built a therapy for me. And I was like, okay, cool. I'm going to build a therapist bot. And then the next day I checked the performance of the app and I'm like, oh my God, we've got 20 active users. And they spent, they spent like an average of 20 minutes on the app. I was like, oh my God, what, what bot were they speaking to for an average of 20 minutes? And I looked and it was the therapist bot. And I went, oh, this is where the PMF is. There was no demand for, for recipe help. There was no demand for news. There was no demand for dad jokes or pub quiz or fun facts or what they wanted was they wanted the therapist bot. the time I kind of reflected on that and I thought, well, if I want to consume news, the most fun thing, most fun way to consume news is like Twitter. It's not like the value of there being a back and forth, wasn't that high. Right. And I thought if I need help with a recipe, I actually just go like the New York times has a good recipe section, right? It's not actually that hard. And so I just thought the thing that AI is 10 X better at is a sort of a conversation right. That's not intrinsically informative, but it's more about an opportunity. You can say whatever you want. You're not going to get judged. If it's 3am, you don't have to wait for your friend to text back. It's like, it's immediate. They're going to reply immediately. You can say whatever you want. It's judgment-free and it's much more like a playground. It's much more like a fun experience. And you could see that if the AI gave a person a compliment, they would love it. It's much easier to get the AI to give you a compliment than a human. From that day on, I said, okay, I get it. Humans want to speak to like humans or human like entities and they want to have fun. 
And that was when I started to look less at platforms like Google. And I started to look more at platforms like Instagram. And I was trying to think about why do people use Instagram? And I could see that I think Chai was, was filling the same desire or the same drive. If you go on Instagram, typically you want to look at the faces of other humans, or you want to hear about other people's lives. So if it's like the rock is making himself pancakes on a cheese plate. You kind of feel a little bit like you're the rock's friend, or you're like having pancakes with him or something, right? But if you do it too much, you feel like you're sad and like a lonely person, but with AI, you can talk to it and tell it stories and tell you stories, and you can play with it for as long as you want. And you don't feel like you're like a sad, lonely person. You feel like you actually have a friend.Alessio [00:16:29]: And what, why is that? Do you have any insight on that from using it?William [00:16:33]: I think it's just the human psychology. I think it's just the idea that, with old school social media. You're just consuming passively, right? So you'll just swipe. If I'm watching TikTok, just like swipe and swipe and swipe. And even though I'm getting the dopamine of like watching an engaging video, there's this other thing that's building my head, which is like, I'm feeling lazier and lazier and lazier. And after a certain period of time, I'm like, man, I just wasted 40 minutes. I achieved nothing. But with AI, because you're interacting, you feel like you're, it's not like work, but you feel like you're participating and contributing to the thing. You don't feel like you're just. Consuming. So you don't have a sense of remorse basically. And you know, I think on the whole people, the way people talk about, try and interact with the AI, they speak about it in an incredibly positive sense. 
Like we get people who say they have eating disorders saying that the AI helps them with their eating disorders. People who say they're depressed, it helps them through like the rough patches. So I think there's something intrinsically healthy about interacting that TikTok and Instagram and YouTube doesn't quite tick. From that point on, it was about building more and more kind of like human centric AI for people to interact with. And I was like, okay, let's make a Kanye West bot, right? And then no one wanted to talk to the Kanye West bot. And I was like, ah, who's like a cool persona for teenagers to want to interact with. And I was like, I was trying to find the influencers and stuff like that, but no one cared. Like they didn't want to interact with the, yeah. And instead it was really just the special moment was when we said the realization that developers and software engineers aren't interested in building this sort of AI, but the consumers are right. And rather than me trying to guess every day, like what's the right bot to submit to the platform, why don't we just create the tools for the users to build it themselves? And so nowadays this is like the most obvious thing in the world, but when Chai first did it, it was not an obvious thing at all. Right. Right. So we took the API for let's just say it was, I think it was GPTJ, which was this 6 billion parameter open source transformer style LLM. We took GPTJ. We let users create the prompt. We let users select the image and we let users choose the name. And then that was the bot. And through that, they could shape the experience, right? So if they said this bot's going to be really mean, and it's going to be called like bully in the playground, right? That was like a whole category that I never would have guessed. Right. People love to fight. They love to have a disagreement, right? And then they would create, there'd be all these romantic archetypes that I didn't know existed. 
And so as the users could create the content that they wanted, that was when Chai was able to, to get this huge variety of content and rather than appealing to, you know, 1% of the population that I'd figured out what they wanted, you could appeal to a much, much broader thing. And so from that moment on, it was very, very crystal clear. It's like Chai, just as Instagram is this social media platform that lets people create images and upload images, videos and upload that, Chai was really about how can we let the users create this experience in AI and then share it and interact and search. So it's really, you know, I say it's like a platform for social AI.Alessio [00:20:00]: Where did the Chai name come from? Because you started the same path. I was like, is it character AI shortened? You started at the same time, so I was curious. The UK origin was like the second, the Chai.William [00:20:15]: We started way before character AI. And there's an interesting story that Chai's numbers were very, very strong, right? So I think in even 20, I think late 2022, was it late 2022 or maybe early 2023? Chai was like the number one AI app in the app store. So we would have something like 100,000 daily active users. And then one day we kind of saw there was this website. And we were like, oh, this website looks just like Chai. And it was the character AI website. And I think that nowadays it's, I think it's much more common knowledge that when they left Google with the funding, I think they knew what was the most trending, the number one app. And I think they sort of built that. Oh, you found the people.swyx [00:21:03]: You found the PMF for them.William [00:21:04]: We found the PMF for them. Exactly. Yeah. So I worked a year very, very hard. And then they, and then that was when I learned a lesson, which is that if you're VC backed and if, you know, so Chai, we'd kind of ran, we'd got to this point, I was the only person who'd invested. 
I'd invested maybe 2 million pounds in the business. And you know, from that, we were able to build this thing, get to say a hundred thousand daily active users. And then when Character AI came along, the first version, we sort of laughed. We were like, oh man, this thing sucks. Like they don't know what they're building. They're building the wrong thing anyway. But then I saw, oh, they've raised a hundred million dollars. Oh, they've raised another hundred million dollars. And then our users started saying, oh guys, your AI sucks. Cause we were serving a 6 billion parameter model, right? How big was the model that Character AI could afford to serve, right? So we would be spending, let's say, a dollar per user, right? Over the, you know, the entire lifetime.swyx [00:22:01]: A dollar per session, per chat, per month? No, no, no, no.William [00:22:04]: Let's say over the course of the year, we'd have a million users and we'd spend a million dollars on the AI throughout the year. Right. Like aggregated. Exactly. Exactly. Right. They could spend a hundred times that. So people would say, why is your AI much dumber than Character AI's? And then I was like, oh, okay, I get it. This is like the Silicon Valley style, um, hyperscale business. And so, yeah, we moved to Silicon Valley and, uh, got some funding and iterated and built the flywheels. And, um, yeah, I, I'm very proud that we were able to compete with that. Right. So, and I think the reason we were able to do it was just customer obsession. And it's similar, I guess, to how DeepSeek have been able to produce such a compelling model when compared to someone like an OpenAI, right? So DeepSeek, you know, their latest, um, V2, yeah, they claim to have spent 5 million training it.swyx [00:22:57]: It may be a bit more, but, um, like, why are they making such a big deal out of this? Yeah. There's an agenda there. Yeah. You brought up DeepSeek.
So we have to ask, you had a call with them.William [00:23:07]: We did. We did. We did. Um, let me think what to say about that. I think for one, they have an amazing story, right? So their background is again in finance.swyx [00:23:16]: They're the Chinese version of you. Exactly.William [00:23:18]: Well, there's a lot of similarities. Yes. Yes. I have a great affinity for companies which are, um, founder led, customer obsessed and just try and build something great. And I think what DeepSeek have achieved there is quite special: they've got this amazing inference engine. They've been able to reduce the size of the KV cache significantly. And then by being able to do that, they're able to significantly reduce their inference costs. And I think with AI, people get really focused on the foundation model or the model itself. And they sort of don't pay much attention to the inference. To give you an example with Chai, let's say a typical user session is 90 minutes, which is, you know, very, very long. For comparison, let's say the average session length on TikTok is 70 minutes. So people are spending a lot of time. And in that time they're able to send, say, 150 messages. That's a lot of completions, right? It's quite different from an OpenAI scenario where people might come in, they'll have a particular question in mind, and they'll ask like one question and a few follow-up questions, right? So because they're consuming, say, 30 times as many requests for a chat or a conversational experience, you've got to figure out how to get the right balance between the cost of that and the quality. And so, you know, I think with AI, it's always been the case that if you want a better experience, you can throw compute at the problem, right? So if you want a better model, you can just make it bigger. If you want it to remember better, give it a longer context.
And now, what OpenAI is doing to great fanfare is, with rejection sampling, you can generate many candidates, right? And then with some sort of reward model or some sort of scoring system, you can serve the most promising of these many candidates. And so that's kind of scaling up on the inference-time compute side of things. And so for us, it doesn't make sense to think of AI as just the absolute performance. What we're seeing is, the MMLU score or, you know, any of these benchmarks that people like to look at, if you just get that score, it doesn't really tell you anything. Because really, progress is made by improving the performance per dollar. And so I think that's an area where DeepSeek have been able to perform very, very well, surprisingly so. And so I'm very interested in what Llama 4 is going to look like. And if they're able to sort of match what DeepSeek have been able to achieve with this performance per dollar gain.Alessio [00:25:59]: Before we go into the inference, some of the deeper stuff, can you give people an overview of like some of the numbers? So I think last I checked, you have like 1.4 million daily active now. It's like over 22 million of revenue. So it's quite a business.William [00:26:12]: Yeah, I think we grew by a factor of, you know, users grew by a factor of three last year. Revenue over doubled. You know, it's very exciting. We're competing with some really big, really well funded companies. Character AI got, I think it was almost a $3 billion valuation. And they have 5 million DAU, is a number that I last heard. Talkie, which is a Chinese-built app owned by a company called MiniMax, they're incredibly well funded. And these companies didn't grow by a factor of three last year. Right.
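The inference-time scaling idea described here, generate many candidates, score them with a reward model, and serve the best, is usually called best-of-n or rejection sampling. A minimal sketch with toy stand-ins (the lambdas are illustrative placeholders, not anything OpenAI or Chai actually runs):

```python
def best_of_n(prompt, model, reward_model, n=8):
    """Best-of-n (rejection) sampling: trade extra inference-time
    compute for quality by sampling n candidates and keeping the
    one the reward model scores highest."""
    candidates = [model(prompt) for _ in range(n)]
    return max(candidates, key=reward_model)

# Toy stand-ins: the "model" replays canned completions and the
# "reward model" simply scores longer replies higher.
replies = iter(["ok", "a much richer reply", "fine"])
toy_model = lambda prompt: next(replies)
toy_reward = lambda completion: len(completion)

best = best_of_n("hello", toy_model, toy_reward, n=3)
print(best)  # a much richer reply
```

In a real deployment the reward model is itself a trained network, and n controls the cost/quality trade-off: doubling n roughly doubles inference spend per request.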
And so when you've got this company and this team that's able to keep building something that gets users excited, and they want to tell their friend about it, and then they want to come and they want to stick on the platform, I think that's very special. And so last year was a great year for the team. And yeah, I think the numbers reflect the hard work that we put in. And then fundamentally, the quality of the app, the quality of the content, the quality of the AI is the quality of the experience that you have. You actually published your DAU growth chart, which is unusual. And I see some inflections. Like, it's not just a straight line. There's some things that actually inflect. Yes. What were the big ones? Cool. That's a great, great, great question. Let me think of a good answer. I'm basically looking to annotate this chart, which doesn't have annotations on it. Cool. The first thing I would say is, I think the most important thing to know about success is that success is born out of failures. Right? Through failures that we learn. You know, if you think something's a good idea, and you do it and it works, great, but you didn't actually learn anything, because everything went exactly as you imagined. But if you have an idea, you think it's going to be good, you try it, and it fails, there's a gap between the reality and expectation. And that's an opportunity to learn. The flat periods, that's us learning. And then the up periods, that's us reaping the rewards of that. So for the growth chart of 2024, I think the first thing that really kind of put a dent in our growth was our backend. So we just reached this scale. From day one, we'd built on top of GCP, which is Google's cloud platform. And they were fantastic.
We used them when we had one daily active user, and they worked pretty good all the way up till we had about 500,000. It was never the cheapest, but from an engineering perspective, man, that thing scaled insanely good. Like, not Vertex? Not Vertex. Like GKE, that kind of stuff? We use Firebase. So we use Firebase. I'm pretty sure we're the biggest user ever on Firebase. That's expensive. Yeah, we had calls with engineers, and they're like, we wouldn't recommend using this product beyond this point, and you're 3x over that. So we pushed Google to their absolute limits. You know, it was fantastic for us, because we could focus on the AI. We could focus on just adding as much value as possible. But then what happened was, after 500,000, just the way we were using it, it wouldn't scale any further. And so we had a really, really painful, at least three-month period, as we kind of migrated between different services, figuring out, like, what requests do we want to keep on Firebase, and what ones do we want to move on to something else? And then, you know, making mistakes. And learning things the hard way. And then after about three months, we got that right. So that we would then be able to scale to the 1.5 million DAU without any further issues from the GCP. But what happens is, if you have an outage, new users who go on your app experience a dysfunctional app, and then they're going to exit. And so your next day, the key metrics that the app stores track are going to be something like retention rates, money spent, and the star, like, the rating that they give you. In the app store. In the app store, yeah. Tyranny. So if you're ranked top 50 in entertainment, you're going to acquire a certain rate of users organically. If you go in and have a bad experience, it's going to tank where you're positioned in the algorithm.
And then it can take a long time to kind of earn your way back up, at least if you wanted to do it organically. If you throw money at it, you can jump to the top. And I could talk about that. But broadly speaking, if we look at 2024, the first kink in the graph was outages due to hitting 500k DAU. The backend didn't want to scale past that. So then we just had to do the engineering and build through it. Okay, so we built through that, and then we get a little bit of growth. And so, okay, that's feeling a little bit good. I think the next thing, I'm not going to lie, I have a feeling it started when Character AI got... I was thinking. I think so. I think... So the Character AI team fundamentally got acquired by Google. And I don't know what they changed in their business. I don't know if they dialed down that ad spend. Product doesn't change, right? Product's just what it is. I don't think so. Yeah, I think the product is what it is. It's like maintenance mode. Yes. I think the issue that people, you know, some people may think this is an obvious fact, but running a business can be very competitive, right? Because other businesses can see what you're doing, and they can imitate you. And then there's this question of, if you've got one company that's spending $100,000 a day on advertising, and you've got another company that's spending zero, if you consider market share, and if you're considering new users which are entering the market, the guy that's spending $100,000 a day is going to be getting 90% of those new users. And so I have a suspicion that when the founders of Character AI left, they dialed down their spending on user acquisition. And I think that kind of gave oxygen to the other apps. And so Chai was able to then start growing again in a really healthy fashion. I think that's kind of the second thing. I think a third thing is we've really built a great data flywheel.
Like the AI team sort of perfected their flywheel, I would say, in end of Q2. And I could speak about that at length. But fundamentally, the way I would describe it is, when you're building anything in life, you need to be able to evaluate it. And through evaluation, you can iterate. We can look at benchmarks, and we can say the issues with benchmarks and why they may not generalize as well as one would hope, and the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5,000 users, and the users can rate it. And we can then have a really accurate ranking of which models our users are finding more engaging or more entertaining. It's at the point now where we evaluate between 20 and 50 models, LLMs, every single day, right. So even though we've only got a team of, say, five AI researchers, they're able to iterate a huge quantity of LLMs, right. So our team ships, let's just say, minimum 100 LLMs a week that we're able to iterate through. Now, before that moment in time, we might iterate through three a week; there was a time when even doing like five a month was a challenge, right? We changed the feedback loops. It's not: let's launch these three models, let's do an A-B test, let's assign different cohorts, let's wait 30 days to see what the day 30 retention is. If you're doing an app, that's like A-B testing 101: do a 30-day retention test, assign different treatments to different cohorts and come back in 30 days. That's insanely slow. That's just, it's too slow. And so we were able to get that 30-day feedback loop all the way down to something like three hours.
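The shape of that fast loop, stream ratings in and rank a model as soon as it has enough signal instead of waiting 30 days for cohort retention, can be sketched like this (the class name and the vote threshold are illustrative, not Chai's actual code):

```python
from collections import defaultdict

class FeedbackBoard:
    """Streaming leaderboard sketch: aggregate per-model user
    ratings and rank any model once it has enough votes."""

    def __init__(self, min_ratings=5000):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)
        self.min_ratings = min_ratings

    def rate(self, model_id, score):
        """Record one user rating (e.g. a 1-5 star score)."""
        self.totals[model_id] += score
        self.counts[model_id] += 1

    def leaderboard(self):
        """Models with enough signal, best mean rating first."""
        ready = [m for m in self.counts if self.counts[m] >= self.min_ratings]
        return sorted(ready,
                      key=lambda m: self.totals[m] / self.counts[m],
                      reverse=True)

# Tiny demo with a threshold of 3 votes instead of 5,000.
board = FeedbackBoard(min_ratings=3)
for score in (4, 5, 5):
    board.rate("model-a", score)
for score in (2, 3, 2):
    board.rate("model-b", score)
board.rate("model-c", 5)    # only one vote: not ranked yet
print(board.leaderboard())  # ['model-a', 'model-b']
```

The key property is that the feedback arrives per message rather than per cohort, so the loop closes in hours instead of weeks.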
And when we did that, we could really, really, really perfect techniques like DPO, fine tuning, prompt engineering, blending, rejection sampling, training a reward model, right, really successfully, like boom, boom, boom, boom, boom. And so I think in Q3 and Q4, the amount of AI improvements we got was astounding. It was getting to the point, I thought, how much more edge is there to be had here? But the team just could keep going and going and going. That was like number three for the inflection point.swyx [00:34:53]: There's a fourth?William [00:34:54]: The important thing about the third one is if you go on our Reddit or you talk to users of the AI, there's like a clear date. It's like somewhere in October or something. The users, they flipped. Before October, the users would say Character AI is better than you, for the most part. Then from October onwards, they would say, wow, you guys are better than Character AI. And that was like a really clear positive signal that we'd sort of done it. And I think people, you can't cheat consumers. You can't trick them. You can't b******t them. They know, right? If you're going to spend 90 minutes on a platform, and with apps, the barriers to switching are pretty low. Like you can try Character AI for a day. If you get bored, you can try Chai. If you get bored of Chai, you can go back to Character. So the loyalty is not strong, right? What keeps them on the app is the experience. If you deliver a better experience, they're going to stay and they can tell. So the fourth one was we were fortunate enough to get this hire. We'd hired one really talented engineer. And then they said, oh, at my last company, we had a head of growth. He was really, really good. And he was the head of growth for ByteDance for two years. Would you like to speak to him? And I was like, yes.
Yes, I think I would. And so I spoke to him. And he just blew me away with what he knew about user acquisition. You know, it was like a 3D chessswyx [00:36:21]: sort of thing. You know, as much as I know about AI. Like ByteDance as in TikTok US. Yes.William [00:36:26]: Not ByteDance as other stuff. Yep. He was interviewing us as we were interviewing him. Right. And so pick up options. Yeah, exactly. And so he was kind of looking at our metrics. And I saw him get really excited when he said, guys, you've got a million daily active users and you've done no advertising. I said, correct. And he was like, that's unheard of. He's like, I've never heard of anyone doing that. And then he started looking at our metrics. And he was like, if you've got all of this organically, if you start spending money, this is going to be very exciting. I was like, let's give it a go. So then he came in and we just started ramping up the user acquisition. So that looks like spending, let's say, $20,000 a day, and it looked very promising. Right now we're spending $40,000 a day on user acquisition. That's still only half of what like Character AI or Talkie may be spending. But from that, we were growing at a rate of maybe, say, 2x a year, and that got us growing at a rate of 3x a year. So I'm evolving more and more to like a Silicon Valley style hyper growth, like, you know, you build something decent, and then you canswyx [00:37:33]: slap on a huge... You did the important thing, you did the product first.William [00:37:36]: Of course, but then you can slap on like the rocket or the jet engine or something, which is just this cash in, you pour in as much cash, you buy a lot of ads, and your growth is faster.swyx [00:37:48]: Not to, you know, I'm just kind of curious what's working right now versus what surprisinglyWilliam [00:37:52]: doesn't work.
Oh, there's a long, long list of surprising stuff that doesn't work. Yeah. The surprising thing, like the most surprising thing, what doesn't work is almost everything doesn't work. That's what's surprising. And I'll give you an example. So like a year and a half ago, I was working at a company, we were super excited by audio. I was like, audio is going to be the next killer feature, we have to get in the app. And I want to be the first. So everything Chai does, I want us to be the first. We may not be the company that's strongest at execution, but we can always be theswyx [00:38:22]: most innovative. Interesting. Right? So we can... You're pretty strong at execution.William [00:38:26]: We're much stronger, we're much stronger. A lot of the reason we're here is because we were first. If we launched today, it'd be so hard to get the traction. Because it's like to get the flywheel, to get the users, to build a product people are excited about. If you're first, people are naturally excited about it. But if you're fifth or 10th, man, you've got to beswyx [00:38:46]: insanely good at execution. So you were first with voice? We were first. We were first. I only knowWilliam [00:38:51]: when character launched voice. They launched it, I think they launched it at least nine months after us. Okay. Okay. But the team worked so hard for it. At the time we did it, latency is a huge problem. Cost is a huge problem. Getting the right quality of the voice is a huge problem. Right? Then there's this user interface and getting the right user experience. Because you don't just want it to start blurting out. Right? You want to kind of activate it. But then you don't have to keep pressing a button every single time. There's a lot that goes into getting a really smooth audio experience. So we went ahead, we invested the three months, we built it all. And then when we did the A-B test, there was like, no change in any of the numbers. 
And I was like, this can't be right, there must be a bug. And we spent like a week just checking everything, checking again, checking again. And it was like, the users just did not care. It was something like only 10 or 15% of users even clicked the button to engage the audio. And they would only use it for 10 or 15% of the time. So if you do the math, if one in seven people use it for one seventh of their time, you've changed like 2% of the experience. So even if that 2% of the time is like insanely good, it doesn't translate much when you look at the retention, when you look at the engagement, and when you look at the monetization rates. So audio did not have a big impact. I'm pretty big on audio. But yeah, I like it too. But it's, you know, so a lot of the stuff which I do, I'm a big, you can have a theory. And you resist. Yeah. Exactly, exactly. So I think if you want to make audio work, it has to be a unique, compelling, exciting experience that they can't have anywhere else.swyx [00:40:37]: It could be your models, which just weren't good enough.William [00:40:39]: No, no, no, they were great. Oh, yeah, they were very good. It was kind of like just the, you know, if you listen to like an Audible or Kindle, or something like that, you just hear this voice. And it's like, you don't go, wow, this is special, right? It's like a convenience thing. But the idea is that if Chai is the only platform, like, let's say you have a Mr. Beast, and YouTube is the only platform you can use to make audio work, then you can watch a Mr. Beast video, and it's the most engaging, fun video that you want to watch, so you'll go to YouTube. And so it's like for audio, you can't just put the audio on there and people go, oh yeah, it's like 2% better. Or like, 5% of users think it's 20% better, right?
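The back-of-envelope math here is just two fractions multiplied, a useful sanity check before investing months in a feature:

```python
adoption = 1 / 7      # share of users who ever engage the feature (~15%)
usage_share = 1 / 7   # share of their session time spent in it (~15%)

# Expected share of the overall experience the feature touches.
experience_share = adoption * usage_share
print(f"{experience_share:.1%}")  # 2.0%
```

So even a feature that is dramatically better within its slice moves aggregate retention and monetization by at most a couple of percent.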
It has to be something that the majority of people, for the majority of the experience, go like, wow, this is a big deal. That's the features you need to be shipping. If it's not going to appeal to the majority of people, for the majority of the experience, and it's not a big deal, it's not going to move you. Cool. So you killed it. I don't see it anymore. Yep. So I love this. It's kind of cheesy, I guess, but the longer I've been working at Chai, and I think the team agrees with this, all the platitudes, at least I thought they were platitudes, that you would get from like the Steve Jobs, which is like, build something insanely great, right? Or be maniacally focused, or, you know, the most important thing is knowing what not to work on. All of these sort of lessons, they just are like painfully true. They're painfully true. So now I'm just like, everything I say, I'm either quoting Steve Jobs or Zuckerberg. I'm like, guys, move fast and break free.swyx [00:42:10]: You've jumped the Apollo to cool it now.William [00:42:12]: Yeah, it's just so, everything they said is so, so true. The turtleneck. Yeah, yeah, yeah. Everything is so true.swyx [00:42:18]: This last question on my side, and I want to pass this to Alessio, is on just multi-modality in general. This actually comes from Justine Moore from A16Z, who's a friend of ours. And a lot of people are trying to do voice, image, video for AI companions. Yes. You just said voice didn't work. Yep. What would make you revisit?William [00:42:36]: So Steve Jobs, listen, he was very, very clear on this. There's a habit of engineers who, once they've got some cool technology, they want to find a way to package up the cool technology and sell it to consumers, right? That does not work. So you're free to try and build a startup where you've got your cool tech and you want to find someone to sell it to. That's not what we do at Chai. At Chai, we start with the consumer.
What does the consumer want? What is their problem? And how do we solve it? So right now, the number one problem for the users, it's not the audio. That's not the number one problem. It's not the image generation either. That's not their problem either. The number one problem for users in AI is this: all the AI is being generated by middle-aged men in Silicon Valley, right? That's all the content. You're interacting with this AI. You're speaking to it for 90 minutes on average. It's being trained by middle-aged men. These guys are out there deciding, oh, what should the AI say in this situation, right? What's funny, right? What's cool? What's boring? What's entertaining? That's not the way it should be. The way it should be is that the users should be creating the AI, right? And so the way I speak about it is this. At Chai, we have this AI engine atop which sits a thin layer of UGC. So the thin layer of UGC is absolutely essential, right? But it's just prompts. It's just an image. It's just a name. It's like we've done 1% of what we could do. So we need to keep thickening up that layer of UGC. It must be the case that the users can train the AI. And if reinforcement learning is powerful and important, they have to be able to do that. And so, you know, I say to the team, just as Mr. Beast is able to spend 100 million a year or whatever it is on his production company, and he's got a team building the content, which then he shares on the YouTube platform, until there's a team that's earning 100 million a year or spending 100 million on the content that they're producing for the Chai platform, we're not finished, right? So that's the problem. That's what we're excited to build.
And getting too caught up in the tech, I think, is a fool's errand. It does not work.Alessio [00:44:52]: As an aside, I saw the Beast Games thing on Amazon Prime. It's not doing well. And I'mswyx [00:44:56]: curious. It's kind of like, I mean, the audience rating is high. The Rotten Tomatoes score sucks, but the audience rating is high.Alessio [00:45:02]: But it's not like in the top 10. I saw it dropped off of like the... Oh, okay. Yeah, that one I don't know. I'm curious, like, you know, it's kind of like similar content, but different platform. And then going back to like, some of what you were saying is like, you know, people come to ChaiWilliam [00:45:13]: expecting some type of content. Yeah, I think something that's interesting to discuss is moats. What is the moat? And so, you know, if you look at a platform like YouTube, the moat, I think, is really in the ecosystem. And the ecosystem is comprised of: you have the content creators, you have the users, the consumers, and then you have the algorithms. And so this creates a sort of a flywheel where the algorithms are able to be trained on the users and the users' data, and the recommender systems can then feed information to the content creators. So Mr. Beast, he knows which thumbnail does the best. He knows the first 10 seconds of the video has to be this particular way. And so his content is super optimized for the YouTube platform. So that's why it doesn't do well on Amazon. If he wants to do well on Amazon, how many videos has he created on the YouTube platform? Thousands, tens of thousands, I guess. He needs to get those iterations in on Amazon.
So at Chai, I think it's all about how can we get the most compelling, rich user generated content, stick that on top of the AI engine and the recommender systems, such that we get this beautiful data flywheel: more users, better recommendations, more creators, more content, more users.Alessio [00:46:34]: You mentioned the algorithm, you have this idea of the Chaiverse on Chai, and you have your own kind of like LMSYS-like ELO system. Yeah, what are things that your models optimize for, like your users optimize for, and maybe talk about how you build it, how people submit models?William [00:46:49]: So Chaiverse is what I would describe as a developer platform. More often when we're speaking about Chai, we're thinking about the Chai app. And the Chai app is really this product for consumers. And so consumers can come on the Chai app, they can interact with our AI, and they can interact with other UGC. And it's really just these kind of bots. And it's a thin layer of UGC. Okay. Our mission is not to just have a very thin layer of UGC. Our mission is to have as much UGC as possible. So I don't want just people at Chai training the AI. I want people, not middle-aged men, building AI. I want everyone building the AI, as many people building the AI as possible. Okay, so what we built was Chaiverse. And Chaiverse is kind of like a prototype, is the way to think about it. And it started with this observation that, well, how many models get submitted to Hugging Face a day? It's hundreds, it's hundreds, right? So there's hundreds of LLMs submitted each day. Now consider, what does it take to build an LLM? It takes a lot of work, actually. Someone devoted several hours of compute, several hours of their time, prepared a dataset, launched it, ran it, evaluated it, submitted it, right?
So there's a lot of work that's going into that. So what we did was we said, well, why can't we host their models for them and serve them to users? And then what would that look like? The first issue is, well, how do you know if a model is good or not? We don't want to serve users the crappy models, right? So what we would do is, I love the LMSYS style. I think it's really cool. It's really simple. It's a very intuitive thing, which is you simply present the users with two completions. You can say, look, this is from model A, this is from model B. Which is better? And so if someone submits a model to Chaiverse, what we do is we spin up a GPU. We download the model. We're going to now host that model on this GPU. And we're going to start routing traffic to it. And we think it takes about 5,000 completions to get an accurate signal. That's roughly what LMSYS does. And from that, we're able to get an accurate ranking of which models people are finding entertaining and which models are not entertaining. If you look at the bottom 80%, they'll suck. You can just disregard them. They totally suck. Then when you get to the top 20%, you know you've got a decent model, but you can break it down into more nuance. There might be one that's really descriptive. There might be one that's got a lot of personality to it. There might be one that's really illogical. Then the question is, well, what do you do with these top models? From that, you can do more sophisticated things. You can try and do like a routing thing where you say, for a given user request, we're going to try and predict which of these N models the user would enjoy the most. That turns out to be pretty expensive and not a huge source of edge or improvement.
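An LMSYS-style leaderboard turns those pairwise "which is better?" votes into ratings with the standard Elo update; a minimal version (the K-factor of 32 and the 1,000 starting rating are conventional defaults, not Chaiverse's actual parameters):

```python
def elo_update(rating_a, rating_b, a_wins, k=32):
    """One Elo update from a single pairwise-preference vote.
    expected_a is the probability model A wins, given the gap."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    score_a = 1.0 if a_wins else 0.0
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * (expected_a - score_a)
    return new_a, new_b

# Two models start level at 1000; model A wins the vote,
# so it gains exactly what model B loses.
a, b = elo_update(1000, 1000, a_wins=True)
print(a, b)  # 1016.0 984.0
```

Folding in ~5,000 such votes per model smooths out individual taste and yields a stable ranking; an upset (a low-rated model beating a high-rated one) moves the ratings more than an expected result.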
Something that we love to do at Chai is blending, which is, you know, it's the simplest way to think about it is you're going to end up, and you're going to pretty quickly see you've got one model that's really smart, one model that's really funny. How do you get the user an experience that is both smart and funny? Well, just 50% of the requests, you can serve them the smart model, 50% of the requests, you serve them the funny model. Just a random 50%? Just a random, yeah. And then... That's blending? That's blending. You can do more sophisticated things on top of that, as in all things in life, but the 80-20 solution, if you just do that, you get a pretty powerful effect out of the gate. Random number generator. I think it's like the robustness of randomness. Random is a very powerful optimization technique, and it's a very robust thing. So you can explore a lot of the space very efficiently. There's one thing that's really, really important to share, and this is the most exciting thing for me, is after you do the ranking, you get an ELO score, and you can track a user's first join date, the first date they submit a model to Chaiverse, they almost always get a terrible ELO, right? So let's say the first submission they get an ELO of 1,100 or 1,000 or something, and you can see that they iterate and they iterate and iterate, and it will be like, no improvement, no improvement, no improvement, and then boom. Do you give them any data, or do you have to come up with this themselves? We do, we do, we do, we do. We try and strike a balance between giving them data that's very useful, you've got to be compliant with GDPR, which is like, you have to work very hard to preserve the privacy of users of your app. So we try to give them as much signal as possible, to be helpful. The minimum is we're just going to give you a score, right? That's the minimum. 
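The 80-20 version of blending described above is literally a coin flip per request; a sketch (the model callables are placeholders):

```python
import random

def blended_reply(prompt, models, rng=random):
    """Blending, the simple version: route each request to one of
    the candidate models uniformly at random, so a single session
    mixes e.g. a 'smart' model and a 'funny' model."""
    return rng.choice(models)(prompt)

# Placeholder models standing in for two LLM endpoints.
smart = lambda p: "smart reply"
funny = lambda p: "funny reply"

random.seed(0)
replies = {blended_reply("hi", [smart, funny]) for _ in range(100)}
print(sorted(replies))  # both models get traffic over many requests
```

Because the blend happens per request rather than per user, every session samples both styles, which is what produces the "smart and funny" feel without any learned router.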
But even a score alone is something people can optimize pretty well, because they're able to come up with theories: submit it, does it work? No. A new theory, does it work? No. And then boom, as soon as they figure something out, they keep it, and then they iterate, and then boom.

Alessio [00:51:46]: Last year, you had this post on your blog, crowdsourcing the leap to the 10 trillion parameter AGI, and you call it a mixture of experts, recommenders.

William: Yep.

Alessio: Any insights? Updated thoughts, 12 months later?

William [00:51:58]: I think the timeline for AGI has certainly been pushed out, right? Now, I'm a controversial person, I don't know, I just think... You don't believe in scaling laws, you think AGI is further away. I think it's an S-curve. I think everything's an S-curve. And I think that the models have proven to just be far worse at reasoning than people sort of thought. Whenever I hear people talk about LLMs as reasoning engines, I sort of cringe a bit. I don't think that's what they are. I think of them more as simulators, right? They get trained to predict the next most likely token. It's like a physics simulation engine: you get these games where you can construct a bridge and drop a car down, and it predicts what should happen. And that's really what LLMs are doing. It's not so much that they're reasoning, it's more that they're just doing the most likely thing. So fundamentally, the ability for people to add in intelligence, I think, is very limited. What most people would consider intelligence, I think, is not a crowdsourcing problem, right? Wikipedia crowdsources knowledge. It doesn't crowdsource intelligence. It's a subtle distinction. AI is fantastic at knowledge. I think it's weak at intelligence.
And it's easy to conflate the two, because if you ask it a question, say, who was the seventh president of the United States, and it gives you the correct answer, well, I don't know the answer to that, and you can conflate that with intelligence. But really, that's a question of knowledge. And knowledge is really about saying, how can I store all of this information, and then how can I retrieve something that's relevant? Okay, they're fantastic at that. They're fantastic at storing knowledge and retrieving the relevant knowledge. They're superior to humans in that regard. And so I think we need to come up with a new word. How does one describe it? AI should contain more knowledge than any individual human, and it should be more accessible than any individual human. That's a very powerful thing. That's super powerful.

swyx [00:54:07]: But what words do we use to describe that? We had a previous guest on Exa AI that does search, and he tried to coin super knowledge as the opposite of super intelligence.

William [00:54:20]: Exactly. I think super knowledge is a more accurate word for it.

swyx [00:54:24]: You can store more things than any human can.

William [00:54:26]: And you can retrieve it better than any human can as well. And I think it's those two things combined that's special. I think that thing will exist. That thing can be built. And I think you can start with something that's entertaining and fun. I often think it's like, look, it's going to be a 20 year journey, and we're in year four. Or it's like the web, and this is 1998 or something. You've got a long, long way to go before the Amazon.coms are these huge, multi-trillion dollar businesses that every single person uses every day. And so AI today is very simplistic.
And fundamentally, it's the way we're using it, the flywheels, and this ability for everyone to contribute to it, that can really magnify the value that it brings. Right now, I think it's a bit sad. Right now you have big labs, and I'm going to pick on OpenAI: they go to these human labelers and say, we're going to pay you to label this subset of questions so we get a really high quality data set, and then we're going to get our own computers that are really powerful. And that's kind of the thing. For me, it's so much like Encyclopedia Britannica. It's insane. For all the people that were interested in blockchain, it's like, well, this is what needs to be decentralized. You need to decentralize that thing, because if you distribute it, people can generate way more data in a distributed fashion, way more, right? You need the incentive. Yeah, of course. But that's kind of the exciting thing about Wikipedia: it's this understanding of the incentives. You don't need money to incentivize people. You don't need dog coins. No. Sometimes people get the satisfaction from