Podcasts about PMF

  • 316 PODCASTS
  • 514 EPISODES
  • 47m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • May 2, 2025 LATEST

POPULARITY (2017–2024)


Best podcasts about PMF

Latest podcast episodes about PMF

The Bootstrapped Founder
388: The Job To Be Done: Understanding Customer Value Communication

The Bootstrapped Founder

Play Episode Listen Later May 2, 2025 11:44 Transcription Available


PMF is MIA without JTBD. Make sense? :D

The "job to be done" sits at the core of my customer's use of my product. I need to understand it to understand them. To fathom their needs. This week, I'll share how I approach that — and why it's taken me years to get here.

The blog post: https://thebootstrappedfounder.com/the-job-to-be-done-understanding-customer-value-communication/
The podcast episode: https://tbf.fm/episodes/388-the-job-to-be-done-understanding-customer-value-communication
Check out Podscan, the Podcast database that transcribes every podcast episode out there minutes after it gets released: https://podscan.fm
Send me a voicemail on Podline: https://podline.fm/arvid
You'll find my weekly article on my blog: https://thebootstrappedfounder.com
Podcast: https://thebootstrappedfounder.com/podcast
Newsletter: https://thebootstrappedfounder.com/newsletter
My book Zero to Sold: https://zerotosold.com/
My book The Embedded Entrepreneur: https://embeddedentrepreneur.com/
My course Find Your Following: https://findyourfollowing.com

Here are a few tools I use. Using my affiliate links will support my work at no additional cost to you.
- Notion (which I use to organize, write, coordinate, and archive my podcast + newsletter): https://affiliate.notion.so/465mv1536drx
- Riverside.fm (that's what I recorded this episode with): https://riverside.fm/?via=arvid
- TweetHunter (for speedy scheduling and writing Tweets): http://tweethunter.io/?via=arvid
- HypeFury (for massive Twitter analytics and scheduling): https://hypefury.com/?via=arvid60
- AudioPen (for taking voice notes and getting amazing summaries): https://audiopen.ai/?aff=PXErZ
- Descript (for word-based video editing, subtitles, and clips): https://www.descript.com/?lmref=3cf39Q
- ConvertKit (for email lists, newsletters, even finding sponsors): https://convertkit.com?lmref=bN9CZw

Tam Tam : Le recrutement par celles et ceux qui le font au quotidien
#56 - Managing a recruitment team is a sport! - Salima Koné

Tam Tam : Le recrutement par celles et ceux qui le font au quotidien

Play Episode Listen Later Apr 29, 2025 70:54


Recruiting is like playing a football match: you don't score a goal without an assist from the rest of your team! ⚽ A former Head of PMF and a trainer in management, recruitment, and emotional intelligence, Salima Koné is adamant about this: it's a TEAM game! Her role today, in fact, is managing recruiters, or, as she likes to put it, being a revealer of people's potential. In short, what she loves is taking her teams to the next level so that recruitment holds no more secrets for them! So in this new episode of Tam Tam, she explains:

枫言枫语
Vol. 139 Owen: The Young Builder Behind "Immersive Translate"

枫言枫语

Play Episode Listen Later Apr 20, 2025 81:51


In this episode we invite Owen, the creator of the well-known "Immersive Translate" extension, to share its story. Owen worked at a big tech company, then left to become an independent developer and tried many different projects. By a stroke of luck, "Immersive Translate" found PMF after launch, so he went all in. As the project grew, its features became more and more powerful, earning wide praise from users and recognition as a 2024 Chrome Web Store favorite of the year. Immersive Translate was later acquired by a company whose values align with Owen's, and the project is still being actively updated. Both of our hosts are Immersive Translate users; the extension works great and is a real productivity helper. So what else has Owen built besides Immersive Translate? And how did he come to start developing it? Join us for this episode!

Run The Numbers
Is Product-Market Fit Getting Harder to Keep?

Run The Numbers

Play Episode Listen Later Apr 19, 2025 7:07


Everyone loves talking about finding product-market fit. But what if the real challenge in 2025 is keeping it? In this solo episode, I riff on why PMF has become more fleeting than ever. I unpack what Harry Stebbings, Jason Lemkin and Rory O'Driscoll observed — companies that once had a five-year runway now fall out of PMF in five weeks. We explore AI-fueled growth curves, the myth of "escape velocity," and why today's go-to-market edge can vanish overnight. I also share a brutal hill from a recent road race (and the startup metaphor it unlocked), plus a text exchange with a growth investor on the hunt for vertical SaaS alpha. If you're an operator, investor, or CFO trying to understand the new physics of scaling, this one's for you.

Run the Numbers is sponsored by Gelt. You already know tax season isn't just a deadline — it's a lever. At Gelt, they work with CFOs who aren't just looking to stay compliant — they're looking to unlock strategic value from their tax position. Think: optimized entity structures, real-time visibility across complex ownership, and proactive tax planning that actually moves the needle on cash flow. Gelt gives you quarterly strategy check-ins and proactive estimates, all designed to lower your effective tax rate, protect equity value, and drive long-term efficiency. You get direct access to your Gelt CPA and a clean, modern platform to track it all. Learn if you qualify at go.joingelt.com/mostlymetrics and schedule a call to learn more.

Get full access to Mostly metrics at www.mostlymetrics.com/subscribe

The Product Market Fit Show
He grew to $25M in ARR and $14M in annual profits—with no funding & no dilution. | Adam Robinson, Founder of Retention.com

The Product Market Fit Show

Play Episode Listen Later Apr 10, 2025 52:10 Transcription Available


Adam Robinson once struggled with a stagnant email SaaS stuck at $3M ARR, but he kept experimenting until he found how to solve a problem no one else was tackling—and everything changed. Suddenly, buyers were begging for his identity-based marketing tool—so he spun out Retention.com and grew it to $14M+ in annual profit with no outside funding. In this episode, Adam reveals why he ignored "scalable hacks" until his product proved undeniable, the two keys that finally unleashed product-market fit, and how he uses no-friction brand marketing on LinkedIn to sign up thousands of new leads.

Why You Should Listen
1. He chose profit over fundraising – Adam shows how ignoring "growth-hack hype" and focusing on real word-of-mouth built a wildly profitable SaaS.
2. Shocking pivot to product-market fit – A failed email tool spun out a game-changing identity product that users demanded.
3. The #1 trap killing early-stage founders – Why "growth hacking" tactics fail without genuine pull, and what to do instead.
4. Bootstrapping to $14M profit – His surprising path from a stalled $3M ARR to unstoppable momentum (with a team of only six).
5. LinkedIn brand building done right – How to attract thousands of perfect-fit leads—no spammy sequences required.

Keywords
Bootstrapped SaaS, Product Market Fit, Email Marketing Growth, Founder Lessons, B2B LinkedIn Strategy, High Profit Margins, Startup Pivot, Word-of-Mouth Marketing, Early-Stage Experimentation

Timestamps
(00:00:00) Intro
(00:01:57) A Bootstrap Story
(00:06:33) Why Bootstrapping Often Means You Can't Lose
(00:10:36) The downside of raising VC
(00:19:53) A Case Study: Constant Contact
(00:22:45) Find an Unsolved Problem
(00:32:06) PMF and Word of Mouth
(00:46:45) Piece of Advice

Send me a message to let me know what you think!

Wellness Force Radio
Brian Richards | The Science Behind Infrared Saunas: Can Near-Infrared Light Repair Your DNA?

Wellness Force Radio

Play Episode Listen Later Apr 8, 2025 64:53


Wellness + Wisdom | Episode 730 Could your DNA hold a hidden superpower that's just waiting to be activated by light? Brian Richards, Founder of SaunaSpace, joins Josh Trent on the Wellness + Wisdom Podcast, episode 730, to uncover how near-infrared saunas can help cleanse toxins, support brain function, and even raise your vibrational frequency, and why embracing your innate power to heal is the ultimate path to true freedom. "One of the effects of photobiomodulation is to correct gene transcription = how the DNA is read. The DNA and the epigenetics are being fixed in a much more powerful way with the near-infrared light than they're being damaged by the ultraviolet component of the sunlight. But indoors, we only use damaging light. There's none of the healing light anymore. So to bring ourselves back into balance, we need to get a lot more near-infrared light. And the incandescent bulb uniquely presents itself as a way to do that." - Brian Richards 10% Off SaunaSpace ThermaLight® technology combines the finest tungsten filaments, mouth-blown red-stained glass, and years of engineering to produce a full-spectrum infrared light that mimics the best of nature. Benefits: Compact and portable, hypoallergenic, tool-free assembly, wheelchair accessible, machine-washable cover, minimalist design, ZERO EMFs. All saunas have the same goal: to raise your core body temperature enough to jump-start its natural healing processes. Traditional saunas use steam, fire, or electric heaters to make the air in the sauna hot, which eventually heats up your body from the outside in. Infrared saunas tap into the science of light to help you sweat faster and more comfortably. Save 10% With code "JOSH10" In This Episode, Brian Richards Uncovers: [00:55] Your DNA Creates Light Brian Richards SaunaSpace - 10% off Sauna Space with code "JOSH10" 525 Red Light Therapy For Seasonal Affective Disorder (SAD) | Brian Richards How our DNA creates light. The difference between far-infrared and near-infrared light. How we need to have compassion for the people who are stuck in the Matrix. What makes us vibrate at a higher frequency. [04:05] Choose Compassion Over Judgment The Four Agreements by Don Miguel Ruiz Why there is no benefit to judgment. How compassion is a vehicle for new choices. Be Here Now by Ram Dass We're not truly healed until our family stops triggering us. [06:30] Redefining Biohacking Why we attach a negative meaning to the word "work." How Brian's work is an extension of him and his consciousness. Why SaunaSpace is about being, not about doing. How he's making biohacking and science more heart-focused. Why red light therapy heals us but also enhances the indoor environment. How red light therapy serves as a replacement for fire light. [14:15] The Benefits of Red Light Therapy How SaunaSpace helped Josh's daughter heal jaundice when she was born. Why technology prevents us from connecting with our families. How people gravitate toward the screen because they're constantly stressed out. Why subtle frequencies calm our nervous system, ease stress, and remind us to be ourselves. [18:00] Infrared Sauna VS Regular Sauna How SaunaSpace works holistically to support our well-being. The science of near-infrared light. How man-made EMFs are toxic to us. The difference between an infrared sauna and a regular sauna. Why 15 minutes inside the infrared sauna is enough to get all the benefits. How physical health is not the biggest benefit of the sauna. 
Why the infrared sauna provides us with more capacity to make lifestyle changes. [25:10] Can Light Change The DNA? How melatonin is produced in the body. Why taking vitamin D supplements is not as efficient as taking light in through the skin. How photobiomodulation corrects gene transcription. Why we only use damaging light indoors. [27:45] How to Get Energy from Light How we can satisfy 70% of our caloric needs through sunlight. Dr. Steven Young Samuel B. Lee The benefits of sun gazing. How we can produce our own energy when we live in alignment with who we truly are. Why we get the benefits of near-infrared light from the sun even on a cloudy day. [33:15] Light + Heat for Optimal Health + Recovery How photobiomodulation supports the healing of traumatic brain injuries. Why light therapy repairs cells and how cells work collectively. How red light therapy helps people heal from stroke. Why heat corrects protein folding. How frequent use of sauna reduces the risk of dementia. [38:10] How to Be Your Authentic Self Why attaching to our programming harms us. How light removes toxins and reactivates cellular rejuvenation. What it takes to become our best selves. How spiritual and emotional work is an important component of healing. Why our being naturally wants to be authentic. How our guides only help us remember what we already know. Why doing what's aligned with our highest purpose gives us more purpose and happiness. [43:40] The Wounded Healer Why a wounded healer should also focus on their own healing. How the best healers continue to work on themselves. Why curiosity and new experiences reprogram our brain. [46:45] Are You Aware of Your Limitations? How ayahuasca showed Josh that he's not been his true self. Why Brian has been learning about himself through building his business. How experiencing polarity helps us level up in the game. Why we need to take care of the body in order to do the emotional and spiritual work. How physical movement helped Brian realize he was avoiding his emotions. Why taking care of the body has become more accessible to everyone. [54:20] Detoxify Your Body How modern stressors require us to take measures that our ancestors didn't need. Why sauna helps detoxify from toxic chemicals. How we are contaminated with genetically engineered E. coli. Maria Crisler How SaunaSpace and PMF help break up hydrogels in the body. Why the CV-19 vaccine acts like snake venom. Finding Joe (2011) How being true to himself is important for Brian's personal wellness. Why we always have a choice and we can find empowerment instead of victimhood. Leave Wellness + Wisdom a Review on Apple Podcasts Power Quotes From The Show Near-Infrared Light Can Heal Traumatic Brain Injuries "When you have a TBI, you have damage to your nerve cells and your nervous system. The near-infrared light goes into the mitochondria in the brain cells and it promotes regeneration, healing, inflammation reduction, and also anti-aging. It corrects the function of the cell itself by helping to re-establish connections and repair the interworking of all the cells together collectively." - Brian Richards We Lack Sunlight "The average American gets five to seven minutes of sunlight a day, just going out to the car on the way to work. We're really deprived of the things that keep us resilient and also allow us to be high frequency and multi-dimensional." - Brian Richards Light Brings You Back Into Yourself "Light opens us up and brings us back into ourselves.
When you go into the SaunaSpace, your thoughts just kind of melts, and you you get grounded back into yourself. It becomes a very transcendental experience. It's not just about healing, it's very meditative." - Brian Richards Links From Today's Show  Brian Richards SaunaSpace - 10% off Sauna Space with code "JOSH10" 525 Red Light Therapy For Seasonal Affective Disorder (SAD) | Brian Richards The Four Agreements by Don Miguel Ruiz Be Here Now by Ram Dass Dr. Steven Young Samuel B. Lee Maria Crisler Finding Joe (2011)   Josh's Trusted Products | Up To 40% Off Shop All Products Biohacking⁠ MANNA Vitality - Save 20% with code JOSH20 HigherDOSE - 15% off with the code JOSH15 PLUNGE - $150 off with discount code WELLNESSFORCE Pulsetto - Save 20% with code "JOSH" SaunaSpace - 10% off with discount code JOSH10 Ultrahuman Ring Air - 10% off with code JOSH Wellness Test Kits Choose Joi - Save 50% on all Lab Tests with JOSH Blokes - Save 50% on all Lab Tests with JOSH FertilityWize Test by Clockwize - Save 10% with code JOSH Tiny Health Gut Tests - $20 off with discount code JOSH20 VIVOO Health Tests - Save 30% off with code JOSH SiPhox Health Blood Test - Save 15% off with code JOSH Nutrition + Gut Health Organifi - 20% off with discount code WELLNESSFORCE SEED Synbiotic - 25% off with the code 25JOSHTRENT Paleovalley -  Save 15% off here! EQUIP Foods - 20% off with the code WELLNESS20 DRY FARM WINES - Get an extra bottle of Pure Natural Wine with your order for just 1¢ Just Thrive - 20% off with the code JOSH Legacy Cacao - Save 10% with JOSH when you order by the pound! Kreatures of Habit - Save 20% with WISDOM20 Force of Nature Meats - Save 10% with JOSH Supplements MANNA GOLD - $20 off with code JOSHGOLD Adapt Naturals - 20% off with discount code WELLNESSFORCE MitoZen - 10% off with code WELLNESSFORCE Activation Products - 20% off with code JOSH20 BiOptimizers - 10% off with discount code JOSH10 Fatty15 Essential Fatty Acids Supplement - Get 15% off with code JOSH15 Sleep BiOptimizers Sleep Breakthrough - 10% off with JOSH10 Zyppah Anti-Snoring Mouthpiece - 20% off with the code JOSH MitoZen Super SandMan Ultra™ (Melatonin Liposomal)+ 10% off with WELLNESSFORCE Luminette Light Therapy Glasses - 15% off with JOSH Cured Nutrition CBN Night Oil - 20% off with JOSH Natural Energy MTE - Save 20% with JOSH TruKava - Save 20% with code JOSH20 Drink Update - Save 25% with discount code JOSH25 Lifeboost Coffee - Save 10% with JOSH10 EONS Mushroom Coffee - 20% off with the discount code JOSH20 EnergyBITS - 20% off with the code WELLNESSFORCE BUBS Naturals - Save 20% with JOSH20 Fitness + Physical Health Detox Dudes Online Courses - Up to $500 off with discount code JOSH Kineon - 10% off with discount code JOSH10 Create Wellness Creatine Gummies - 20% off with discount code JOSH20 BioPro+ by BioProtein Technology - Save $30 OFF WITH CODE JOSH Drink LMNT - Zero Sugar Hydration: Get your free LMNT Sample Pack, with any purchase ⁠Myoxcience - 20% off with the code JOSH20 Healthy Home SunHome Saunas - Save $200 with JOSH200 JASPR Air Purifier - Save 10% with code WELLNESS QI-Shield EMF Device by NOA AON - 20% off with the code JOSH Holy Hydrogen - $100 off with discount code JOSH SimplyO3 - 10% off with discount code JOSH10 LEELA Quantum Upgrade + Frequency Bundles - Get 15 days free with code JOSH15 TrulyFree Toxic- Free Cleaning Products - Get 40% off + Freebies with code WELLNESSFORCE Mental Health + Stress Release Mendi.io - 20% off with the code JOSH20 Cured Nutrition CBD - 20% off with the discount code 
JOSH NOOTOPIA - 10% off with the discount code JOSH10 CalmiGo - $30 off the device with discount code JOSH30 QUALIA - 15% off with WELLNESSFORCE LiftMode - 10% off with JOSH10 Personal Care⁠ The Wellness Company's Emergency Health Kits + More - Save 10% with code JOSH Canopy Filtered Showerhead + Essential Oils - Save 15% with JOSH15 Farrow Life - Save 20% with JOSH Timeline Nutrition - 10% off with JOSH ⁠⁠Intelligence of Nature - 15% off Skin Support with the code JOSH15⁠⁠ Young Goose - Save 10% with code JOSH10 Mindfulness + Meditation BREATHE - 33% off with the code PODCAST33 Neuvana - 15% off with the code WELLNESSFORCE Essential Oil Wizardry - 10% off with the code WELLNESSFORCE Four Visions - Save 15% with code JOSH15 Lotuswei - 10% off with JOSH Clothing NativeToWear - Save 20% with code JOSH20 Rhizal Grounded Barefoot Shoes - Save 10% with code WELLNESS Earth Runners Shoes - 10% off with the code JOSHT10 MYNDOVR - 20% off with JOSH Free Resources M21 Wellness Guide - Free 3-Week Breathwork Program with Josh Trent Join Wellness + Wisdom Community About Brian Richards Brian Richards is a nationally known expert in sauna therapy, light science, and EMF science. In 2013, he founded SaunaSpace®, a company that combines cutting-edge incandescent infrared technology with the age-old practice of sauna. In 2008, Brian faced an important health decision: start taking pharmaceuticals for insomnia, adrenal fatigue, and other health issues or try full-spectrum sauna therapy coupled with ancestral practices like a clean diet, proper sleep, and yoga. By opting for the natural approach, Brian rapidly transformed his health. This life-changing experience inspired him to create SaunaSpace®. In his journey to develop the perfect product, Brian has immersed himself in the science and research behind light, heat, electromagnetism, and the human body. 14 years later, he brings a refreshing approach to natural living, biohacking, and natural health transformation. Website Instagram Facebook YouTube X

The Engineering Leadership Podcast
From early days to IPO: Scaling leadership, enterprise growth, product ownership, & outgrowing your failure modes w/ Jon Hyman #215

The Engineering Leadership Podcast

Play Episode Listen Later Apr 8, 2025 57:14


ABOUT JON HYMAN
Jon Hyman is the co-founder and chief technology officer of Braze, the customer engagement platform that delivers messaging experiences across push, email, in-app, and more. He leads the charge for building the platform's technical systems and infrastructure as well as overseeing the company's technical operations and engineering team. Prior to Braze, Jon served as lead engineer for the Core Technology group at Bridgewater Associates, the world's largest hedge fund. There, he managed a team that maintained 80+ software assets and was responsible for the security and stability of critical trading systems. Jon met cofounder Bill Magnuson during his time at Bridgewater, and together they won the 2011 TechCrunch Disrupt Hackathon. Jon is a recipient of the SmartCEO Executive Management Award in the CIO/CTO Category for New York. Jon holds a B.A. from Harvard University in Computer Science.

ABOUT BRAZE
Braze is the leading customer engagement platform that empowers brands to Be Absolutely Engaging.™ Braze allows any marketer to collect and take action on any amount of data from any source, so they can creatively engage with customers in real time, across channels from one platform. From cross-channel messaging and journey orchestration to AI-powered experimentation and optimization, Braze enables companies to build and maintain absolutely engaging relationships with their customers that foster growth and loyalty. The company has been recognized as a 2024 U.S. News & World Report Best Companies to Work For, 2024 Best Small & Medium Workplaces in Europe by Great Place to Work®, 2024 Fortune Best Workplaces for Women™ by Great Place to Work® and was named a Leader by Gartner® in the 2024 Magic Quadrant™ for Multichannel Marketing Hubs and a Strong Performer in The Forrester Wave™: Email Marketing Service Providers, Q3 2024. Braze is headquartered in New York with 15 offices across North America, Europe, and APAC. Learn more at braze.com.

SHOW NOTES:
What Jon learned from being the only person on call for his company's first four years (2:56)
Knowing when it's time to get help managing your servers, ops, scaling, etc. (5:42)
Establishing areas of product ownership & other scaling lessons from the early days (9:25)
Frameworks for conversations on splitting of products across teams (12:00)
The challenges, complexities & strategies behind assigning ownership in the early days (14:40)
Founding Braze (18:01)
Why Braze? The story & insights behind the original vision for Braze (20:08)
Identifying Braze's product market fit (22:34)
Early-stage PMF challenges faced by Jon & his co-founders (25:40)
Pivoting to focus on enterprise customers (27:48)
"Let's integrate the SDK right now" - founder-led sales ideas to validate your product (29:22)
Behind the decision to hire a chief revenue officer for the first time (34:02)
The evolution of enterprise & its impact on Braze's product offering (36:42)
Growing out of your early-stage failure modes (39:00)
Why it's important to make personnel decisions quickly (41:22)
Setting & maintaining a vision pre IPO vs. post IPO (44:21)
Jon's next leadership evolution & growth areas he is focusing on (49:50)
Rapid fire questions (52:53)

LINKS AND RESOURCES
When We Cease to Understand the World - Benjamín Labatut's fictional examination of the lives of real-life scientists and thinkers whose discoveries resulted in moral consequences beyond their imagining. At a breakneck pace and with a wealth of disturbing detail, Labatut uses the imaginative resources of fiction to tell the stories of Fritz Haber, Alexander Grothendieck, Werner Heisenberg, and Erwin Schrödinger, the scientists and mathematicians who expanded our notions of the possible.

This episode wouldn't have been possible without the help of our incredible production team:
Patrick Gallagher - Producer & Co-Host
Jerry Li - Co-Host
Noah Olberding - Associate Producer, Audio & Video Editor https://www.linkedin.com/in/noah-olberding/
Dan Overheim - Audio Engineer, Dan's also an avid 3D printer - https://www.bnd3d.com/
Ellie Coggins Angus - Copywriter, Check out her other work at https://elliecoggins.com/about/

SaaS Sessions
S9E3 - Cracking Early-Stage SaaS Growth ft. Jacob Bank, CEO of Relay.app

SaaS Sessions

Play Episode Listen Later Apr 2, 2025 43:22


In this episode of the SaaS Sessions podcast, Jacob Bank, founder of Relay.app, shares his journey from academia to startup founder, discussing the challenges of building a product in the AI space. He emphasizes the importance of validating ideas, finding early customers, and experimenting with various marketing channels. Jacob also highlights the significance of cohort retention as a measure of product-market fit and the need to balance innovation with competition in a rapidly evolving market.

Key Takeaways –

1. You're not failing - your distribution is
Building is easy; getting customers is the battlefield.
Early channels (Reddit, network, cold email) only take you so far.
Most startup advice is outdated or irrelevant to your context - test everything yourself.

2. Validate with precision, not ego
The Mom Test changed how Jacob gathered honest feedback.
10 well-run interviews can kill or greenlight an idea.
Getting "likes" is not validation - retention and willingness to pay are.

3. Every growth stage needs a new motion
0 → 10: Scrappy hustle (Reddit, LinkedIn DMs, direct outreach).
10 → 100: Partner marketing, SEO, and high-intent blog content.
100 → 1000: Viral LinkedIn content + YouTube for education + community-led growth.

4. PMF isn't hype - it's cohort retention
Retention is the only true sign of product-market fit.
Competitive pressure is a forcing function to build better products.
Don't be afraid to pick a fight in a crowded space - just know your edge.

Connect with Jacob Bank:
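Jacob's point that cohort retention is the real test of PMF can be made concrete with a small calculation. The sketch below is a minimal, hypothetical example, not anything from the episode: the data shape and function names are assumptions, and it simply groups users by signup month and reports what share of each cohort is still active N months later, the curve founders watch to see whether retention flattens.

```typescript
// Minimal cohort-retention sketch (hypothetical data shape, not from the episode).
// For each signup-month cohort, compute the share of users still active k months later.

interface UserActivity {
  userId: string;
  signupMonth: string;        // e.g. "2025-01"
  activeMonths: Set<string>;  // months in which the user did something meaningful
}

function monthDiff(from: string, to: string): number {
  const [fy, fm] = from.split("-").map(Number);
  const [ty, tm] = to.split("-").map(Number);
  return (ty - fy) * 12 + (tm - fm);
}

function cohortRetention(users: UserActivity[], horizonMonths = 6): Map<string, number[]> {
  // Bucket users by the month they signed up.
  const cohorts = new Map<string, UserActivity[]>();
  for (const u of users) {
    const bucket = cohorts.get(u.signupMonth) ?? [];
    bucket.push(u);
    cohorts.set(u.signupMonth, bucket);
  }

  // For each cohort, compute the fraction active in month 0, 1, 2, ... horizonMonths.
  const result = new Map<string, number[]>();
  for (const [month, cohort] of cohorts) {
    const curve: number[] = [];
    for (let k = 0; k <= horizonMonths; k++) {
      const active = cohort.filter(u =>
        [...u.activeMonths].some(m => monthDiff(month, m) === k)
      ).length;
      curve.push(active / cohort.length);
    }
    result.set(month, curve);
  }
  return result;
}

// A curve like 1.0, 0.45, 0.33, 0.31, 0.30 (flattening to a plateau) is the PMF signal;
// a curve that keeps sliding toward zero is not.
```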

Pretty Much Fine
44: Fully Aware

Pretty Much Fine

Play Episode Listen Later Mar 30, 2025 44:57


The gals are back with another episode of PMF, and this week, we're serving up giggles, unsolicited opinions, and at least one existential crisis. We kick things off by dissecting iPad kids—are they the future, or overstimulated monsters raised by YouTube autoplay? Then, we dive into how we are complete opposites in every way (yet somehow still best friends), and Carol drops a shocking bombshell about her personality and Floridian lore that may or may not change everything. But wait, there's more! Are all NYC influencers truly the same person in a different font? We break it all down, from the algorithm's eerie mind control to the dystopian content pipeline we can't stop consuming. So grab your snacks, settle in, and join the spiral. It's gonna be a ride.

Epicenter - Learn about Blockchain, Ethereum, Bitcoin and Distributed Technologies
Ronin: How Axie Infinity Kickstarted the Blockchain Gaming Revolution - Jeff Zirlin

Epicenter - Learn about Blockchain, Ethereum, Bitcoin and Distributed Technologies

Play Episode Listen Later Mar 27, 2025 54:34


While DeFi's ultimate goal is to provide an alternative to TradFi, blockchain gaming caters to the retail masses, onboarding millions of users to crypto through incentives and fun gameplay. The greatest success story in Web3 gaming thus far has been, without a doubt, Axie Infinity. Apart from creating an engaged community whose early adopters also experienced rags-to-riches stories, it kickstarted an entire movement around play-to-earn gaming. Many have tried copying it, yet most of them failed. Axie Infinity managed to design a sustainable economy that combined NFTs and fungible tokens. The result was the first gamified crypto mining event, at scale. In order to accommodate such massive demand, at a time when L2 rollups were still in R&D, Sky Mavis founded Ronin, a gaming-oriented Ethereum sidechain. The community that formed around Axie Infinity created a tremendous network effect, solidifying Ronin's PMF as a gaming powerhouse. Before long, many up-and-coming titles migrated to Ronin, and the growth effects in all their key metrics (i.e. DAUs, transactions, revenue, etc.) further fuelled the flywheel.

Topics covered in this episode:
Jeff's background
Cryptokitties
Joining Axie Infinity and how it evolved
Monetizing Axie's economy
The recipe for successful blockchain games
Building Ronin, the gaming L1
On-chain vs. off-chain game elements
Community building
Ronin's economy & composability
Most suitable gaming genre for Web3
AI x gaming
Crypto gaming investments
Ronin's current state and future roadmap
Axie's upcoming MMO

Episode links:
Jeff Zirlin on X
Axie Infinity on X
Ronin on X
Sky Mavis on X

Sponsors:
Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay — the world's first Decentralized Payment Network. Get started today at gnosis.io
Chorus One: one of the largest node operators worldwide, trusted by 175,000+ accounts across more than 60 networks, Chorus One combines institutional-grade security with the highest yields at chorus.one

This episode is hosted by Brian Fabian Crain.

Fund/Build/Scale
Building a Market from Scratch: Generative Music for Enterprise Customers

Fund/Build/Scale

Play Episode Listen Later Mar 27, 2025 30:47


Recorded in July 2024, Aimi founder Edward Balassanian joins Fund/Build/Scale to share how his AI-powered platform creates generative, copyright-safe music for enterprise clients. He explains how customer discovery with DJs shaped Aimi's tech, why compliance is core to their strategy, and why the company downsized after hitting product-market fit — all while inventing a market where AI music solves problems humans can't. RUNTIME 30:47 EPISODE BREAKDOWN (4:17) “ I consider myself a platform person. I build operating systems.” (7:39) “ It's incumbent on a founder in a space like this to be well-versed in not only the art of the music, but the science of the music as well.” (9:14) “ We see a song as a medium between fans and artists. We're not in the song business.” (11:03) How Aimi is building a library of licensed content: “We've been pretty methodical.” (14:59) “ We see ourselves as kind of uniquely in the business of music AI for creation, not for imitation.” (16:46) “ We de-risk the use of music. That's one of the biggest selling points for enterprise customers.” (18:44) “ Like most tech people, I would say we're always going to be in beta.” (21:58) Why Aimi raised its $20M Series B in 2021. (24:01) Downsizing after reaching PMF “ was the best decision that we that we could have made.” (28:05) “ I think what's really interesting is building platforms, and any platform today is going to have to incorporate AI into it.” LINKS Edward Balassanian (Crunchbase) Aimi AI-Powered Music Platform Aimi has raised $20 million in a Series B round of funding SUBSCRIBE

Go To Market Grit
#235 Effective, Transparent Government with OpenGov's Zac Bookman

Go To Market Grit

Play Episode Listen Later Mar 24, 2025 76:13


Guest: Zac Bookman, CEO and Co-Founder of OpenGov

Thirteen years after co-founding the government transparency startup OpenGov, Zac Bookman is still finding ways to surprise people. In 2023, Cox Enterprises bought the company for $1.8 billion — but as far as Zac is concerned, "we're just getting started."

"I left the vast majority of my net worth in the company," he says. "So I'm a believer. I'm all in."

The mission of powering "more effective and accountable government" has been stable since OpenGov's earliest days, and that mission has informed everything from hiring to M&A to the decision to sell. "These people buy and don't sell," Zac said of Cox. "They're all in on the mission. And they're all in on taking care of employees. So I see a triple win: A win for employees, win for the investors, win for the customers, maybe a quadruple win for me and the management."

Chapters:
(01:46) - OpenGov's mission
(04:34) - Shrinking the product-market fit
(07:34) - Super mission driven
(08:59) - Why OpenGov almost shut down
(13:08) - Zac's early career
(16:16) - Picking (and losing) a CTO
(22:50) - Growing upside-down
(25:29) - The SPAC backstabber
(31:26) - Why Zac didn't get fired
(33:24) - Selling in 2024
(37:04) - Growth by acquisition
(42:31) - John Chambers and PMF
(49:32) - Zac's cross-country bike ride
(56:25) - Expectations vs. reality
(58:57) - The coup attempt
(01:01:59) - Tiring work
(01:05:47) - Going to the White House
(01:09:40) - DOGE & disrespect
(01:12:54) - "We're just getting started"
(01:14:18) - Who OpenGov is hiring (and where)
(01:15:13) - What "grit" means to Zac

Mentioned in this episode: Joe Lonsdale, Cox Enterprises, OpenAI, the Department of Government Efficiency, Workday, H.R. McMaster, Stanford University, Formation 8, 8VC, the National Academy of Sciences, the Stanford Review, Kamala Harris, Marc Andreessen, Balaji Srinivasan, Coinbase, Earn, Ben Horowitz, Facebook, Steve Laughlin, Cisco, Laurene Powell Jobs, Glynn Capital, Acme, Allen & Company, Harry You, Joe Tucci, EMC, Bill Green, Accenture, Tyler Technologies, HP, Josh Kushner, GTY Technology Holdings, John Keker, Palantir, CKAN, Oracle, Kevin McCarthy, The American Technology Council Summit, Jeff Bezos, Tim Cook, Satya Nadella, Pat Gelsinger, Donald Trump, Jared Kushner, Elon Musk, Bill Clinton, and Al Gore.

Links:
Connect with Zac: LinkedIn
Connect with Joubin: Twitter, LinkedIn, Email: grit@kleinerperkins.com
Learn more about Kleiner Perkins

This episode was edited by Eric Johnson from LightningPod.fm

The Product Market Fit Show
How to tell if you have true product-market fit—& what to do if you don't. | Matt Watson, Host of Product Driven

The Product Market Fit Show

Play Episode Listen Later Mar 20, 2025 27:31 Transcription Available


One of the most common questions I get is "How do I know if I have product-market fit?" Especially when you're in that gray zone where things are kind of working but not really taking off yet, how do you know whether you have product-market fit or not? That's exactly what we dive into here.

Why you should listen:
Why demo-to-close is an excellent leading indicator of PMF.
Why NPS is not as good as the Sean Ellis test for measuring product-market fit.
Why retention is the best long-term metric, but takes a long time.
What qualitative signals you'll feel when you have true PMF.
What to do if you realize you don't have real product-market fit.

This conversation originally aired on Matt's podcast, Product Driven; because the topics were so relevant, I figured I'd post it here too.

Timestamps
(00:00:00) Intro
(00:00:55) How do you know if you have PMF
(00:06:33) Why some problems are good
(00:11:24) Solve a True Top of Mind Pain
(00:15:54) Why timing matters
(00:21:30) How to Know When to Pivot
(00:25:00) Asking the Right Questions to Customers

Send me a message to let me know what you think!
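Since the episode contrasts NPS with the Sean Ellis test, here is a small, hedged sketch of how that test is commonly scored: survey users with "How would you feel if you could no longer use the product?" and measure the share who answer "very disappointed," with roughly 40% often cited as the PMF benchmark. The data shape and function names below are illustrative assumptions, not anything from the episode.

```typescript
// Sean Ellis PMF survey scoring (illustrative sketch; names and data shape are assumed).
type SurveyAnswer = "very disappointed" | "somewhat disappointed" | "not disappointed";

interface SurveyResult {
  veryDisappointedShare: number;   // 0..1
  sampleSize: number;
  passesCommonBenchmark: boolean;  // ~40% is the threshold usually cited
}

function scoreSeanEllisTest(answers: SurveyAnswer[], benchmark = 0.4): SurveyResult {
  if (answers.length === 0) {
    throw new Error("No survey responses to score");
  }
  const very = answers.filter(a => a === "very disappointed").length;
  const share = very / answers.length;
  return {
    veryDisappointedShare: share,
    sampleSize: answers.length,
    passesCommonBenchmark: share >= benchmark,
  };
}

// Usage: scoreSeanEllisTest(["very disappointed", "not disappointed", "very disappointed"])
// -> { veryDisappointedShare: ~0.67, sampleSize: 3, passesCommonBenchmark: true }
```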

SaaS Club
[REDIFF] - Creating your own category right from launch. And winning 3K+ users in 1 year, with Pierre Touzeau

SaaS Club

Play Episode Listen Later Mar 17, 2025 87:54


Epicenter - Learn about Blockchain, Ethereum, Bitcoin and Distributed Technologies
Solana: From On-Chain Nasdaq to the Pump Fun Craze - Anatoly Yakovenko

Epicenter - Learn about Blockchain, Ethereum, Bitcoin and Distributed Technologies

Play Episode Listen Later Mar 15, 2025 78:28


Solana needs no introduction. Ever since its inception, it has pushed throughput scaling on a single chain, without the need for sharding or rollups. Despite its ups and downs, which culminated at the bottom of the bear market after the FTX crash, it managed to not only survive, but build a vibrant community around crypto's (arguably) most prominent PMF (thus far). Join us for a fascinating discussion and learn about Anatoly's take on controversial topics such as MEV, concurrent block leaders (the equivalent of Ethereum's PBS proposal), L2 rollups, Solana economics, how to tackle potential exploits and more.

Topics covered in this episode:
How the original Solana vision turned out
What makes blockchains valuable
MEV & program writable accounts
Concurrent block proposers
Current bottlenecks for scaling Solana
Mainnet vs. L2 rollups
Firedancer upgrade
Halting the network vs. rollbacks
Solana's scaling roadmap
DoubleZero
Worst hacks on Solana
UI exploits, Bybit hack and smart contract security
Solana economics and the SIMD-0228 proposal
Future improvements
Use cases for blockchains
Solana Mobile

Episode links:
Anatoly Yakovenko on X
Solana on X
Solana Mobile on X

Sponsors:
Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay — the world's first Decentralized Payment Network. Get started today at gnosis.io
Chorus One: one of the largest node operators worldwide, trusted by 175,000+ accounts across more than 60 networks, Chorus One combines institutional-grade security with the highest yields at chorus.one

This episode is hosted by Brian Fabian Crain & Martin Köppelmann.

The Peel
Alloy's Unconventional Path to $1.5B with Tommy Nicholas, Co-founder and CEO

The Peel

Play Episode Listen Later Mar 14, 2025 89:54


Tommy Nicholas is the Co-founder and CEO of Alloy, the identity and fraud prevention platform trusted by over 700 financial service companies.

Our conversation explores the early days of fintech, why more consumer financial protections actually lead to more fraud, and gets into the weeds of various tactics he's learned building a technical platform company like Alloy.

We talk about embracing that the hard parts of company building are actually the best parts, why you're most likely to give up when things first start getting better, using hands-on sales implementations in the early days to give Alloy product-market fit on steroids, how hiring changes as you scale, getting 100's of no's over 20 months raising their Seed round, and why TAM doesn't matter.

Thanks to Charley Ma for his help brainstorming topics for Tommy!

Thanks to Numeral for supporting this episode, the end-to-end platform for sales tax and compliance. Try them here: https://bit.ly/NumeralThePeel

Timestamps:
(3:56) The platform to manage fraud
(5:48) What fintech risk was like in the early 2010's
(14:34) Why company building never gets easier
(19:30) Reasons the hard stuff is actually the good stuff
(24:00) You're most likely to give up when things start getting better
(33:47) Doing hands-on sales implementation to get PMF on steroids
(42:26) Deciding when PLG or hands-on sales will work best
(52:33) Why more consumer financial protections lead to more fraud
(58:14) 20 months to raise $2m vs $200m from a spreadsheet
(1:06:32) "Make yourself look like a good investment"
(1:10:14) Why TAM doesn't matter
(1:14:35) How to hire collaborative problem solvers
(1:24:38) Why Alloy didn't do much marketing early on

Referenced
Try Alloy: https://www.alloy.com/
Charley Ma's Pod Episode: https://youtu.be/5cxgB1_q2lw
Try Artie: https://www.artie.com/

Follow Tommy
Twitter: https://x.com/tommyrva
LinkedIn: https://www.linkedin.com/in/tommynicholas

Follow Turner
Twitter: https://twitter.com/TurnerNovak
LinkedIn: https://www.linkedin.com/in/turnernovak

Subscribe to my newsletter to get every episode + the transcript in your inbox every week: https://www.thespl.it/

The Product Market Fit Show
From mushroom-picking in Belarus to $200M/year. How he built Flo Health into a $1B health app. | Dmitry Gurski, Founder of Flo Health

The Product Market Fit Show

Play Episode Listen Later Mar 10, 2025 73:34 Transcription Available


This is one of the wildest founder journeys you'll ever hear. Dmitry Gurski went from growing potatoes and picking mushrooms on a farm in Belarus to building Flo—a billion-dollar company with 75M monthly users that dominates the health and fitness category worldwide. He started Flo in a market already controlled by PayPal co-founder Max Levchin's startup, which had $30M in funding from a16z. Today, Flo is 100x bigger than its once-dominant rival.

Dmitry shares raw, unfiltered startup truths—like why he got rejected by 200+ VCs, why 90% of startup failures are team-related, and why most founders are delusional about product-market fit. He breaks down how simplicity beats complexity in product, why retention is everything, and how deleting features can actually boost revenue.

If you're a founder, this episode will fundamentally change how you think about perseverance, pivots, and building something that lasts. Listen now—you'll be referencing this one for years.

Why you should listen:
Why big market beats niche – How Flo won because it targeted all women's health while competitors focused only on fertility.
How Retention is the real test – A product with natural recurring use cases (like periods) has built-in retention, unlike fitness or productivity apps.
Why simple wins – The first version of Flo was less complex than competitors but had far better predictions—accuracy mattered more than features.
Fundraising is brutal – Flo got 300+ investor rejections before raising $300M. Many VCs just didn't "get" the space.

Keywords
startup, entrepreneurship, product design, user retention, Flow app, health and fitness, early stage founders, product market fit, simplicity, user engagement, retention, user case, app development, entrepreneurship, product market fit, mobile apps, business strategy, team dynamics, failure, success, risk, uncertainty, decision making, market demand, competition, product-market fit, fundraising, entrepreneurship, startup success, female health

Timestamps
(00:00:00) Intro
(00:09:10) Why you Need to Keep it Simple
(00:13:10) Why B2C is All About Retention
(00:19:05) Why you Need to Delete Features
(00:24:14) PMF is about the Shape of the Curve
(00:39:17) When to Persevere, When to Pivot, and When to Quit
(00:42:22) More attempts = more success
(00:51:34) The Idea for Flo
(00:59:05) Finding Product Market Fit
(01:02:07) Advice for An Early Stage Founder
(01:10:22) A Potato Story

Send me a message to let me know what you think!

No Sharding - The Solana Podcast
Is Social Trading Crypto's Next Big Thing? w/ Ilja Moisejevs and Richard Wu

No Sharding - The Solana Podcast

Play Episode Listen Later Mar 4, 2025 40:38


In this episode, Austin chats with Ilmoi and Richard Wu about Vector, a social trading app that combines elements of Robinhood and Twitter. They delve into the core functionality of Vector, its community-building strategies, and the sophisticated technical infrastructure underpinning its seamless experience. The discussion also addresses broader topics, including the relationship between timing and PMF and the nuances of moderation policies in web3.

DISCLAIMER The content herein is provided for educational, informational, and entertainment purposes only, and does not constitute an offer to sell or a solicitation of an offer to buy any securities, options, futures, or other derivatives related to securities in any jurisdiction, nor should it be relied upon as advice to buy, sell or hold any of the foregoing. This content is intended to be general in nature and is not specific to you, the user or anyone else. You should not make any decision, financial, investment, trading or otherwise, based on any of the information presented without undertaking independent due diligence and consultation with a professional advisor. The Solana Foundation and its agents, advisors, council members, officers and employees (the "Foundation Parties") make no representation or warranties, expressed or implied, as to the accuracy of the information herein and expressly disclaim any and all liability that may be based on such information or any errors or omissions therein. The Foundation Parties shall have no liability whatsoever, under contract, tort, trust or otherwise, to any person arising from or related to the content or any use of the information contained herein by you or any of your representatives. All opinions expressed herein are the speakers' own personal opinions and do not reflect the opinions of any entities.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Today's episode is with Paul Klein, founder of Browserbase. We talked about building browser infrastructure for AI agents, the future of agent authentication, and their open source framework Stagehand.* [00:00:00] Introductions* [00:04:46] AI-specific challenges in browser infrastructure* [00:07:05] Multimodality in AI-Powered Browsing* [00:12:26] Running headless browsers at scale* [00:18:46] Geolocation when proxying* [00:21:25] CAPTCHAs and Agent Auth* [00:28:21] Building “User take over” functionality* [00:33:43] Stagehand: AI web browsing framework* [00:38:58] OpenAI's Operator and computer use agents* [00:44:44] Surprising use cases of Browserbase* [00:47:18] Future of browser automation and market competition* [00:53:11] Being a solo founderTranscriptAlessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.swyx [00:00:12]: Hey, and today we are very blessed to have our friends, Paul Klein, for the fourth, the fourth, CEO of Browserbase. Welcome.Paul [00:00:21]: Thanks guys. Yeah, I'm happy to be here. I've been lucky to know both of you for like a couple of years now, I think. So it's just like we're hanging out, you know, with three ginormous microphones in front of our face. It's totally normal hangout.swyx [00:00:34]: Yeah. We've actually mentioned you on the podcast, I think, more often than any other Solaris tenant. Just because like you're one of the, you know, best performing, I think, LLM tool companies that have started up in the last couple of years.Paul [00:00:50]: Yeah, I mean, it's been a whirlwind of a year, like Browserbase is actually pretty close to our first birthday. So we are one years old. And going from, you know, starting a company as a solo founder to... To, you know, having a team of 20 people, you know, a series A, but also being able to support hundreds of AI companies that are building AI applications that go out and automate the web. It's just been like, really cool. It's been happening a little too fast. I think like collectively as an AI industry, let's just take a week off together. I took my first vacation actually two weeks ago, and Operator came out on the first day, and then a week later, DeepSeat came out. And I'm like on vacation trying to chill. I'm like, we got to build with this stuff, right? So it's been a breakneck year. But I'm super happy to be here and like talk more about all the stuff we're seeing. And I'd love to hear kind of what you guys are excited about too, and share with it, you know?swyx [00:01:39]: Where to start? So people, you've done a bunch of podcasts. I think I strongly recommend Jack Bridger's Scaling DevTools, as well as Turner Novak's The Peel. And, you know, I'm sure there's others. So you covered your Twilio story in the past, talked about StreamClub, you got acquired to Mux, and then you left to start Browserbase. So maybe we just start with what is Browserbase? Yeah.Paul [00:02:02]: Browserbase is the web browser for your AI. We're building headless browser infrastructure, which are browsers that run in a server environment that's accessible to developers via APIs and SDKs. It's really hard to run a web browser in the cloud. You guys are probably running Chrome on your computers, and that's using a lot of resources, right? So if you want to run a web browser or thousands of web browsers, you can't just spin up a bunch of lambdas. You actually need to use a secure containerized environment. 
You have to scale it up and down. It's a stateful system. And that infrastructure is, like, super painful. And I know that firsthand, because at my last company, StreamClub, I was CTO, and I was building our own internal headless browser infrastructure. That's actually why we sold the company, is because Mux really wanted to buy our headless browser infrastructure that we'd built. And it's just a super hard problem. And I actually told my co-founders, I would never start another company unless it was a browser infrastructure company. And it turns out that's really necessary in the age of AI, when AI can actually go out and interact with websites, click on buttons, fill in forms. You need AI to do all of that work in an actual browser running somewhere on a server. And BrowserBase powers that.swyx [00:03:08]: While you're talking about it, it occurred to me, not that you're going to be acquired or anything, but it occurred to me that it would be really funny if you became the Nikita Beer of headless browser companies. You just have one trick, and you make browser companies that get acquired.Paul [00:03:23]: I truly do only have one trick. I'm screwed if it's not for headless browsers. I'm not a Go programmer. You know, I'm in AI grant. You know, browsers is an AI grant. But we were the only company in that AI grant batch that used zero dollars on AI spend. You know, we're purely an infrastructure company. So as much as people want to ask me about reinforcement learning, I might not be the best guy to talk about that. But if you want to ask about headless browser infrastructure at scale, I can talk your ear off. So that's really my area of expertise. And it's a pretty niche thing. Like, nobody has done what we're doing at scale before. So we're happy to be the experts.swyx [00:03:59]: You do have an AI thing, stagehand. We can talk about the sort of core of browser-based first, and then maybe stagehand. Yeah, stagehand is kind of the web browsing framework. Yeah.What is Browserbase? Headless Browser Infrastructure ExplainedAlessio [00:04:10]: Yeah. Yeah. And maybe how you got to browser-based and what problems you saw. So one of the first things I worked on as a software engineer was integration testing. Sauce Labs was kind of like the main thing at the time. And then we had Selenium, we had Playbrite, we had all these different browser things. But it's always been super hard to do. So obviously you've worked on this before. When you started browser-based, what were the challenges? What were the AI-specific challenges that you saw versus, there's kind of like all the usual running browser at scale in the cloud, which has been a problem for years. What are like the AI unique things that you saw that like traditional purchase just didn't cover? Yeah.AI-specific challenges in browser infrastructurePaul [00:04:46]: First and foremost, I think back to like the first thing I did as a developer, like as a kid when I was writing code, I wanted to write code that did stuff for me. You know, I wanted to write code to automate my life. And I do that probably by using curl or beautiful soup to fetch data from a web browser. And I think I still do that now that I'm in the cloud. And the other thing that I think is a huge challenge for me is that you can't just create a web site and parse that data. And we all know that now like, you know, taking HTML and plugging that into an LLM, you can extract insights, you can summarize. 
So it was very clear that now like dynamic web scraping became very possible with the rise of large language models or a lot easier. And that was like a clear reason why there's been more usage of headless browsers, which are necessary because a lot of modern websites don't expose all of their page content via a simple HTTP request. You know, they actually do require you to run this type of code for a specific time. JavaScript on the page to hydrate this. Airbnb is a great example. You go to airbnb.com. A lot of that content on the page isn't there until after they run the initial hydration. So you can't just scrape it with a curl. You need to have some JavaScript run. And a browser is that JavaScript engine that's going to actually run all those requests on the page. So web data retrieval was definitely one driver of starting BrowserBase and the rise of being able to summarize that within LLM. Also, I was familiar with if I wanted to automate a website, I could write one script and that would work for one website. It was very static and deterministic. But the web is non-deterministic. The web is always changing. And until we had LLMs, there was no way to write scripts that you could write once that would run on any website. That would change with the structure of the website. Click the login button. It could mean something different on many different websites. And LLMs allow us to generate code on the fly to actually control that. So I think that rise of writing the generic automation scripts that can work on many different websites, to me, made it clear that browsers are going to be a lot more useful because now you can automate a lot more things without writing. If you wanted to write a script to book a demo call on 100 websites, previously, you had to write 100 scripts. Now you write one script that uses LLMs to generate that script. That's why we built our web browsing framework, StageHand, which does a lot of that work for you. But those two things, web data collection and then enhanced automation of many different websites, it just felt like big drivers for more browser infrastructure that would be required to power these kinds of features.Alessio [00:07:05]: And was multimodality also a big thing?Paul [00:07:08]: Now you can use the LLMs to look, even though the text in the dome might not be as friendly. Maybe my hot take is I was always kind of like, I didn't think vision would be as big of a driver. For UI automation, I felt like, you know, HTML is structured text and large language models are good with structured text. But it's clear that these computer use models are often vision driven, and they've been really pushing things forward. So definitely being multimodal, like rendering the page is required to take a screenshot to give that to a computer use model to take actions on a website. And it's just another win for browser. But I'll be honest, that wasn't what I was thinking early on. I didn't even think that we'd get here so fast with multimodality. I think we're going to have to get back to multimodal and vision models.swyx [00:07:50]: This is one of those things where I forgot to mention in my intro that I'm an investor in Browserbase. And I remember that when you pitched to me, like a lot of the stuff that we have today, we like wasn't on the original conversation. 
But I did have my original thesis was something that we've talked about on the podcast before, which is take the GPT store, the custom GPT store, all the every single checkbox and plugin is effectively a startup. And this was the browser one. I think the main hesitation, I think I actually took a while to get back to you. The main hesitation was that there were others. Like you're not the first hit list browser startup. It's not even your first hit list browser startup. There's always a question of like, will you be the category winner in a place where there's a bunch of incumbents, to be honest, that are bigger than you? They're just not targeted at the AI space. They don't have the backing of Nat Friedman. And there's a bunch of like, you're here in Silicon Valley. They're not. I don't know.Paul [00:08:47]: I don't know if that's, that was it, but like, there was a, yeah, I mean, like, I think I tried all the other ones and I was like, really disappointed. Like my background is from working at great developer tools, companies, and nothing had like the Vercel like experience. Um, like our biggest competitor actually is partly owned by private equity and they just jacked up their prices quite a bit. And the dashboard hasn't changed in five years. And I actually used them at my last company and tried them and I was like, oh man, like there really just needs to be something that's like the experience of these great infrastructure companies, like Stripe, like clerk, like Vercel that I use in love, but oriented towards this kind of like more specific category, which is browser infrastructure, which is really technically complex. Like a lot of stuff can go wrong on the internet when you're running a browser. The internet is very vast. There's a lot of different configurations. Like there's still websites that only work with internet explorer out there. How do you handle that when you're running your own browser infrastructure? These are the problems that we have to think about and solve at BrowserBase. And it's, it's certainly a labor of love, but I built this for me, first and foremost, I know it's super cheesy and everyone says that for like their startups, but it really, truly was for me. If you look at like the talks I've done even before BrowserBase, and I'm just like really excited to try and build a category defining infrastructure company. And it's, it's rare to have a new category of infrastructure exists. We're here in the Chroma offices and like, you know, vector databases is a new category of infrastructure. Is it, is it, I mean, we can, we're in their office, so, you know, we can, we can debate that one later. That is one.Multimodality in AI-Powered Browsingswyx [00:10:16]: That's one of the industry debates.Paul [00:10:17]: I guess we go back to the LLMOS talk that Karpathy gave way long ago. And like the browser box was very clearly there and it seemed like the people who were building in this space also agreed that browsers are a core primitive of infrastructure for the LLMOS that's going to exist in the future. And nobody was building something there that I wanted to use. So I had to go build it myself.swyx [00:10:38]: Yeah. I mean, exactly that talk that, that honestly, that diagram, every box is a startup and there's the code box and then there's the. The browser box. I think at some point they will start clashing there. There's always the question of the, are you a point solution or are you the sort of all in one? 
And I think the point solutions tend to win quickly, but then the all-in-ones have a very tight, cohesive experience. Yeah. Let's talk about just the hard problems of browser base that you have on your website, which is beautiful. Thank you. Was there an agency that you used for that? Yeah. Herve.paris.Paul [00:11:11]: They're amazing. Herve.paris. Yeah. It's H-E-R-V-E. I highly recommend for developer tools founders to work with consumer agencies because they end up building beautiful things and the Parisians know how to build beautiful interfaces. So I got to give props.swyx [00:11:24]: And chat apps, apparently are, they are very fast. Oh yeah. The Mistral chat. Yeah. Mistral. Yeah.Paul [00:11:31]: Le Chat.swyx [00:11:31]: Le Chat. And then your videos as well, it was professionally shot, right? The series A video. Yeah.Alessio [00:11:36]: Nico did the videos. He's amazing. Not the initial video that you shot, the new one. First one was Austin.Paul [00:11:41]: Another, another video pretty surprised. But yeah, I mean, like, I think when you think about how you talk about your company. You have to think about the way you present yourself. It's, you know, as a developer, you think you evaluate a company based on like the API reliability and the P95, but a lot of developers say, is the website good? Is the message clear? Do I, like, trust this founder that I'm building my whole feature on? So I've tried to nail that as well as like the reliability of the infrastructure. You're right. It's very hard. And there's a lot of kind of foot guns that you run into when running headless browsers at scale. Right.Competing with Existing Headless Browser Solutionsswyx [00:12:10]: So let's pick one. You have eight features here. Seamless integration. Scalability. Fast or speed. Secure. Observable. Stealth. That's interesting. Extensible and developer first. What comes to your mind as like the top two, three hardest ones? Yeah.Running headless browsers at scalePaul [00:12:26]: I think just running headless browsers at scale is like the hardest one. And maybe can I nerd out for a second? Is that okay? I heard this is a technical audience, so I'll talk to the other nerds. Whoa. They were listening. Yeah. They're upset. They're ready. The AGI is angry. Okay. So. So how do you run a browser in the cloud? Let's start with that, right? So let's say you're using a popular browser automation framework like Puppeteer, Playwright, or Selenium. Maybe you've written some code locally on your computer that opens up Google. It finds the search bar and then types in, you know, search for Latent Space and hits the search button. That script works great locally. You can see the little browser open up. You want to take that to production. You want to run the script in a cloud environment. So when your laptop is closed, your browser is still doing something. Well, we use Amazon, so the first thing I'd reach for is probably like some sort of serverless infrastructure. I would probably try and deploy on a Lambda. But Chrome itself is too big to run on a Lambda. It's over 250 megabytes. So you can't easily start it on a Lambda. So you maybe have to use something like Lambda layers to squeeze it in there. Maybe use a different Chromium build that's lighter. And you get it on the Lambda. Great. It works. But it runs super slowly. It's because Lambdas are very like resource limited. They only run like with one vCPU.
You can run one process at a time. Remember, Chromium is super beefy. It's barely running on my MacBook Air. I'm still downloading it from a pre-run. Yeah, from the test earlier, right? I'm joking. But it's big, you know? So like Lambda, it just won't work really well. Maybe it'll work, but you need something faster. Your users want something faster. Okay. Well, let's put it on a beefier instance. Let's get an EC2 server running. Let's throw Chromium on there. Great. Okay, that works well with one user. But what if I want to run like 10 Chromium instances, one for each of my users? Okay. Well, I might need two EC2 instances. Maybe 10. All of a sudden, you have multiple EC2 instances. This sounds like a problem for Kubernetes and Docker, right? Now, all of a sudden, you're using ECS or EKS, the Kubernetes or container solutions by Amazon. You're spinning up and down containers, and you're spending a whole engineer's time on kind of maintaining this stateful distributed system. Those are some of the worst systems to run because when it's a stateful distributed system, it means that you are bound by the connections to that thing. You have to keep the browser open while someone is working with it, right? That's just a painful architecture to run. And there's all these other little gotchas with Chromium, like Chromium, which is the open source version of Chrome, by the way. You have to install all these fonts. You want emojis working in your browsers because your vision model is looking for the emoji. You need to make sure you have the emoji fonts. You need to make sure you have all the right extensions configured, like, oh, do you want ad blocking? How do you configure that? How do you actually record all these browser sessions? Like it's a headless browser. You can't look at it. So you need to have some sort of observability. Maybe you're recording videos and storing those somewhere. It all kind of adds up to be this just giant monster piece of your project when all you wanted to do was run a lot of browsers in production for this little script to go to google.com and search. And when I see a complex distributed system, I see an opportunity to build a great infrastructure company. And we really abstract that away with Browserbase where our customers can use these existing frameworks, Playwright, Puppeteer, Selenium, or our own Stagehand, and connect to our browsers in a serverless-like way. And control them, and then just disconnect when they're done. And they don't have to think about the complex distributed system behind all of that. They just get a browser running anywhere, anytime. Really easy to connect to.swyx [00:15:55]: I'm sure you have questions. My standard question with anything, so essentially you're a serverless browser company, and there's been other serverless things that I'm familiar with in the past, serverless GPUs, serverless website hosting. That's where I come from with Netlify. One question is just like, you promised to spin up thousands of servers. You promised to spin up thousands of browsers in milliseconds. I feel like there's no real solution that does that yet. And I'm just kind of curious how. The only solution I know, which is to kind of keep a kind of warm pool of servers around, which is expensive, but maybe not so expensive because it's just CPUs. So I'm just like, you know. Yeah.Browsers as a Core Primitive in AI InfrastructurePaul [00:16:36]: You nailed it, right?
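To make the contrast concrete, here is a rough sketch of the connect, drive, disconnect pattern described above, using Playwright's connectOverCDP. The WebSocket endpoint is a placeholder for whatever a hosted-browser provider hands you, not a documented Browserbase URL, and the Google selectors are just the example from the conversation.

```ts
// Sketch of driving a browser that already runs somewhere else, instead of
// managing Lambdas, EC2, or Kubernetes yourself. The endpoint below is a placeholder.
import { chromium } from "playwright";

const wsEndpoint = `wss://browsers.example-provider.com/connect?apiKey=${process.env.BROWSER_API_KEY}`;

// Attach over the Chrome DevTools Protocol; nothing stateful runs on your side.
const browser = await chromium.connectOverCDP(wsEndpoint);
const context = browser.contexts()[0] ?? (await browser.newContext());
const page = await context.newPage();

// The same little script from the example: open Google, search for Latent Space.
await page.goto("https://www.google.com");
await page.fill('textarea[name="q"]', "Latent Space");
await page.keyboard.press("Enter");
console.log(await page.title());

// Disconnect when done; the remote browser is released for you.
await browser.close();
```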
I mean, how do you offer a serverless-like experience with something that is clearly not serverless, right? And the answer is, you need to be able to run... We run many browsers on single nodes. We use Kubernetes at browser base. So we have many pods that are being scheduled. We have to predictably schedule them up or down. Yes, thousands of browsers in milliseconds is the best case scenario. If you hit us with 10,000 requests, you may hit a slower cold start, right? So we've done a lot of work on predictive scaling and being able to kind of route stuff to different regions where we have multiple regions of browser base where we have different pools available. You can also pick the region you want to go to based on like lower latency, round trip, time latency. It's very important with these types of things. There's a lot of requests going over the wire. So for us, like having a VM like Firecracker powering everything under the hood allows us to be super nimble and spin things up or down really quickly with strong multi-tenancy. But in the end, this is like the complex infrastructural challenges that we have to kind of deal with at browser base. And we have a lot more stuff on our roadmap to allow customers to have more levers to pull to exchange, do you want really fast browser startup times or do you want really low costs? And if you're willing to be more flexible on that, we may be able to kind of like work better for your use cases.swyx [00:17:44]: Since you used Firecracker, shouldn't Fargate do that for you or did you have to go lower level than that? We had to go lower level than that.Paul [00:17:51]: I find this a lot with Fargate customers, which is alarming for Fargate. We used to be a giant Fargate customer. Actually, the first version of browser base was ECS and Fargate. And unfortunately, it's a great product. I think we were actually the largest Fargate customer in our region for a little while. No, what? Yeah, seriously. And unfortunately, it's a great product, but I think if you're an infrastructure company, you actually have to have a deeper level of control over these primitives. I think it's the same thing is true with databases. We've used other database providers and I think-swyx [00:18:21]: Yeah, serverless Postgres.Paul [00:18:23]: Shocker. When you're an infrastructure company, you're on the hook if any provider has an outage. And I can't tell my customers like, hey, we went down because so-and-so went down. That's not acceptable. So for us, we've really moved to bringing things internally. It's kind of opposite of what we preach. We tell our customers, don't build this in-house, but then we're like, we build a lot of stuff in-house. But I think it just really depends on what is in the critical path. We try and have deep ownership of that.Alessio [00:18:46]: On the distributed location side, how does that work for the web where you might get sort of different content in different locations, but the customer is expecting, you know, if you're in the US, I'm expecting the US version. But if you're spinning up my browser in France, I might get the French version. Yeah.Paul [00:19:02]: Yeah. That's a good question. Well, generally, like on the localization, there is a thing called locale in the browser. You can set like what your locale is. If you're like in the ENUS browser or not, but some things do IP, IP based routing. And in that case, you may want to have a proxy. 
Like let's say you're running something in Europe, but you want to make sure you're showing up from the US. You may want to use one of our proxy features so you can turn on proxies to say like, make sure these connections always come from the United States, which is necessary too, because when you're browsing the web, you're coming from like a, you know, data center IP, and that can make things a lot harder to browse the web. So we do have kind of like this proxy super network. Yeah. We have a proxy for you based on where you're going, so you can reliably automate the web. But if you get scheduled in Europe, that doesn't happen as much. We try and schedule you as close to, you know, your origin that you're trying to go to. But generally you have control over the regions you can put your browsers in. So you can specify West one or East one or Europe. We only have one region of Europe right now, actually. Yeah.Alessio [00:19:55]: What's harder, the browser or the proxy? I feel like to me, it feels like actually proxying reliably at scale. It's much harder than spinning up browsers at scale. I'm curious. It's all hard.Paul [00:20:06]: It's layers of hard, right? Yeah. I think it's different levels of hard. I think the thing with the proxy infrastructure is that we work with many different web proxy providers and some are better than others. Some have good days, some have bad days. And our customers who've built browser infrastructure on their own, they have to go and deal with sketchy actors. Like first they figure out their own browser infrastructure and then they got to go buy a proxy. And it's like you can pay in Bitcoin and it just kind of feels a little sus, right? It's like you're buying drugs when you're trying to get a proxy online. We have like deep relationships with these counterparties. We're able to audit them and say, is this proxy being sourced ethically? Like it's not running on someone's TV somewhere. Is it free range? Yeah. Free range organic proxies, right? Right. We do a level of diligence. We're SOC 2. So we have to understand what is going on here. But then we're able to make sure that like we route around proxy providers not working. There's proxy providers who will just, the proxy will stop working all of a sudden. And then if you don't have redundant proxying on your own browsers, that's a hard down for you or you may get some serious impacts there. With us, like we intelligently know, hey, this proxy is not working. Let's go to this one. And you can kind of build a network of multiple providers to really guarantee the best uptime for our customers. Yeah. So you don't own any proxies? We don't own any proxies. You're right. The team has been saying who wants to like take home a little proxy server, but not yet. We're not there yet. You know?swyx [00:21:25]: It's a very mature market. I don't think you should build that yourself. Like you should just be a super customer of them. Yeah. Scraping, I think, is the main use case for that. I guess. Well, that leads us into CAPTCHAs and also auth, but let's talk about CAPTCHAs. You had a little spiel that you wanted to talk about CAPTCHA stuff.Challenges of Scaling Browser InfrastructurePaul [00:21:43]: Oh, yeah. I was just, I think a lot of people ask, if you're thinking about proxies, you're thinking about CAPTCHAs too. I think it's the same thing. You can go buy CAPTCHA solvers online, but it's the same buying experience. It's some sketchy website, you have to integrate it.
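For flavor, here is a minimal sketch of the two knobs discussed above, locale and egress location, using plain Playwright. The proxy address and credentials are placeholders; a managed service wires this up for you, but the underlying idea is the same.

```ts
// Sketch: pin the locale a site sees and route traffic through a US egress proxy.
// The proxy server and credentials below are placeholders, not real endpoints.
import { chromium } from "playwright";

const browser = await chromium.launch();

const context = await browser.newContext({
  locale: "en-US",                 // surfaces via Accept-Language / navigator.language
  timezoneId: "America/New_York",  // keep the clock consistent with the claimed region
  proxy: {
    server: "http://us-proxy.example.com:8080", // placeholder US proxy
    username: process.env.PROXY_USER,
    password: process.env.PROXY_PASS,
  },
});

const page = await context.newPage();
await page.goto("https://example.com");
console.log(await page.title());

await browser.close();
```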
It's not fun to buy these things, you can't really trust them, and the docs are bad. What Browserbase does is we integrate a bunch of different CAPTCHAs. We do some stuff in-house, but generally we just integrate with a bunch of known vendors and continually monitor and maintain these things and say, is this working or not? Can we route around it or not? These are CAPTCHA solvers. CAPTCHA solvers, yeah. Not CAPTCHA providers, CAPTCHA solvers. Yeah, sorry. CAPTCHA solvers. We really try and make sure all of that works for you. I think as a dev, if I'm buying infrastructure, I want it all to work all the time and it's important for us to provide that experience by making sure everything does work and monitoring it on our own. Yeah. Right now, the world of CAPTCHAs is tricky. I think AI agents in particular are very much ahead of the internet infrastructure. CAPTCHAs are designed to block all types of bots, but there are now good bots and bad bots. I think in the future, CAPTCHAs will be able to identify who a good bot is, hopefully via some sort of KYC. For us, we've been very lucky. We have very little to no known abuse of Browserbase because we really look into who we work with. And for certain types of CAPTCHA solving, we only allow them on certain types of plans because we want to make sure that we can know what people are doing, what their use cases are. And that's really allowed us to try and be an arbiter of good bots, which is our long-term goal. I want to build great relationships with people like Cloudflare so we can agree, hey, here are these acceptable bots. We'll identify them for you and make sure we flag when they come to your website. This is a good bot, you know?Alessio [00:23:23]: I see. And Cloudflare said they want to do more of this. So by default, if they think you're an AI bot, they're going to reject you. I'm curious if you think this is something that is going to be at the browser level or I mean, the DNS level with Cloudflare seems more where it should belong. But I'm curious how you think about it.Paul [00:23:40]: I think the web's going to change. You know, I think that the Internet as we have it right now is going to change. And we all need to just accept that the cat is out of the bag. And instead of kind of like wishing the Internet was like it was in the 2000s, where we could have free content online that wouldn't be scraped, it's just not going to happen. And instead, we should think about like, one, how can we change the models of, you know, information being published online so people can adequately commercialize it? But two, how do we rebuild applications that expect that AI agents are going to log in on their behalf? Those are the things that are going to allow us to kind of like identify good and bad bots. And I think the team at Clerk has been doing a really good job with this on the authentication side. I actually think that auth is the biggest thing that will prevent agents from accessing stuff, not captchas. And I think there will be agent auth in the future. I don't know if it's going to happen from an individual company, but actually authentication providers that have a, you know, "log in as agent" feature, where you'll then put in your email, you'll get a push notification, say like, hey, your browser-based agent wants to log into your Airbnb. You can approve that and then the agent can proceed. That really circumvents the need for captchas or logging in as you and sharing your password.
I think agent auth is going to be one way we identify good bots going forward. And I think a lot of this captcha solving stuff is really a short-term problem as the internet kind of reorients itself around how it's going to work with agents browsing the web, just like people do. Yeah.Managing Distributed Browser Locations and Proxiesswyx [00:24:59]: Stitch recently was on Hacker News for talking about agent experience, AX, which is a thing that Netlify is also trying to clone and coin and talk about. And we've talked about this on our previous episodes before in a sense that I actually think that's like maybe the only part of the tech stack that needs to be kind of reinvented for agents. Everything else can stay the same, CLIs, APIs, whatever. But auth, yeah, we need agent auth. And it's mostly like short-lived, like it should be a distinct identity from the human, but paired. I almost think like in the same way that every social network should have your main profile and then your alt accounts or your Finsta, it's almost like, you know, every, every human token should be paired with the agent token and the agent token can go and do stuff on behalf of the human token, but not be presumed to be the human. Yeah.Paul [00:25:48]: It's like, it's, it's actually very similar to OAuth is what I'm thinking. And, you know, Thread from Stitch is an investor, Colin from Clerk, Octaventures, all investors in browser-based because like, I hope they solve this because they'll make Browserbase's mission more possible. So we don't have to overcome all these hurdles, but I think it will be an OAuth-like flow where an agent will ask to log in as you, you'll approve the scopes. Like it can book an apartment on Airbnb, but it can't like message anybody. And then, you know, the agent will have some sort of like role-based access control within an application. Yeah. I'm excited for that.swyx [00:26:16]: The tricky part is just, there's one, one layer of delegation here, which is like, you're authorizing my user's user or something like that. I don't know if that's tricky or not. Does that make sense? Yeah.Paul [00:26:25]: You know, actually at Twilio, I worked on the login, identity, and access management teams, right? So like I built Twilio's login page.swyx [00:26:31]: You were an intern on that team and then you became the lead in two years? Yeah.Paul [00:26:34]: Yeah. I started as an intern in 2016 and then I was the tech lead of that team. How? That's not normal. I didn't have a life. He's not normal. Look at this guy. I didn't have a girlfriend. I just loved my job. I don't know. I applied to 500 internships for my first job and I got rejected from every single one of them except for Twilio and then eventually Amazon. And they took a shot on me and like, I was getting paid money to write code, which was my dream. Yeah. Yeah. I'm very lucky that like this coding thing worked out because I was going to be doing it regardless. And yeah, I was able to kind of spend a lot of time on a team that was growing at a company that was growing. So it informed a lot of this stuff here. I think these are problems that have been solved with like the SAML protocol with SSO. I think there's really interesting stuff with like WebAuthn, like these different types of authentication, like schemes that you can use to authenticate people. The tooling is all there. It just needs to be tweaked a little bit to work for agents. And I think the fact that there are companies that are already.
Providing authentication as a service really sets it up well. The thing that's hard is like reinventing the internet for agents. We don't want to rebuild the internet. That's an impossible task. And I think people often say like, well, we'll have this second layer of APIs built for agents. I'm like, we will for the top use cases, but otherwise we can just tweak the internet as is, starting on the authentication side. I think we're going to be the dumb ones going forward, unfortunately. I think AI is going to be able to do a lot of the tasks that we do online, which means that it will be able to go to websites, click buttons on our behalf and log in on our behalf too. So with this kind of like web agent future happening, I think with some small structural changes, like you said, it feels like it could all slot in really nicely with the existing internet.Handling CAPTCHAs and Agent Authenticationswyx [00:28:08]: There's one more thing, which is the, your live view iframe, which lets you take, take control. Yeah. Obviously very key for Operator now, but like, is there anything interesting technically there? Or do people, well, people always want this.Paul [00:28:21]: It was really hard to build, you know, like, so, okay. Headless browsers, you don't see them, right. They're running. They're running in a cloud somewhere. You can't like look at them. And I just want to really make that clear: it's a weird name. I wish we came up with a better name for this thing, but you can't see them. Right. But customers don't trust AI agents, right. At least the first pass. So what we do with our live view is that, you know, when you use browser base, you can actually embed a live view of the browser running in the cloud for your customer to see it working. And that's the first reason: to build trust. Like, okay, so I have this script. That's going to go automate a website. I can embed it into my web application via an iframe and my customer can watch. I think. And then we added two way communication. So now not only can you watch the browser kind of being operated by AI, if you want to pause and actually click around and type within this iframe that's controlling a browser, that's also possible. And this is all thanks to some of the lower level protocol, which is called the Chrome DevTools protocol. It has an API called startScreencast, and you can also send mouse clicks and button clicks to a remote browser. And this is all embeddable within iframes. You have a browser within a browser, yo. And then you simulate the screen, the click on the other side. Exactly. And this is really nice often for, like, let's say, a CAPTCHA that can't be solved. You saw this with Operator, you know, Operator actually uses a different approach. They use VNC. So, you know, you're able to see, like, you're seeing the whole window here. What we're doing is something a little lower level with the Chrome DevTools protocol. It's just PNGs being streamed over the wire. But the same thing is true, right? Like, hey, I'm running a window. Pause. Can you do something in this window? Human. Okay, great. Resume. Like sometimes 2FA tokens. Like if you get that text message, you might need a person to type that in. Web agents need human-in-the-loop type workflows still. You still need a person to interact with the browser. And building a UI to proxy that is kind of hard. You may as well just show them the whole browser and say, hey, can you finish this up for me? And then let the AI proceed on afterwards.
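For the curious, the two Chrome DevTools Protocol primitives named above, startScreencast for frames going out and dispatchMouseEvent for input coming back in, look roughly like this through Playwright's CDP session API. This is just a sketch of the protocol calls, not Browserbase's live view implementation.

```ts
// Sketch of a bare-bones "live view": stream PNG frames out of a headless
// browser and replay a viewer's click back into it, via the Chrome DevTools Protocol.
import { chromium } from "playwright";

const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto("https://example.com");

const cdp = await page.context().newCDPSession(page);

// Outbound: PNG frames streamed over the wire (forward them to your iframe/UI).
cdp.on("Page.screencastFrame", async ({ data, sessionId }) => {
  const frame = Buffer.from(data, "base64"); // one rendered frame of the remote browser
  // ...send `frame` to the viewer over a WebSocket here (omitted)...
  await cdp.send("Page.screencastFrameAck", { sessionId }); // must ack or the stream stalls
});
await cdp.send("Page.startScreencast", { format: "png", everyNthFrame: 2 });

// Inbound: replay a click from the viewer at viewport coordinates (x, y).
async function clickAt(x: number, y: number) {
  await cdp.send("Input.dispatchMouseEvent", { type: "mousePressed", x, y, button: "left", clickCount: 1 });
  await cdp.send("Input.dispatchMouseEvent", { type: "mouseReleased", x, y, button: "left", clickCount: 1 });
}
await clickAt(200, 150);
```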
Is there a future where I stream my current desktop to browser base? I don't think so. I think we're very much cloud infrastructure. Yeah. You know, but I think a lot of the stuff we're doing, we do want to, like, build tools. Like, you know, we'll talk about the Stagehand, you know, web agent framework in a second. But, like, there's a case where a lot of people are going desktop first for, you know, consumer use. And I think Claude is doing a lot of this, where I expect to see, you know, MCPs really oriented around the Claude desktop app for a reason, right? Like, I think a lot of these tools are going to run on your computer because it makes... I think it's breaking out. People are putting it on a server. Oh, really? Okay. Well, sweet. We'll see. We'll see that. I was surprised, though, wasn't I? I think that the browser company, too, with Dia Browser, it runs on your machine. You know, it's going to be...swyx [00:30:50]: What is it?Paul [00:30:51]: So, Dia Browser, as far as I understand... I used to use Arc. Yeah. I haven't used Arc. But I'm a big fan of the browser company. I think they're doing a lot of cool stuff in consumer. As far as I understand, it's a browser where you have a sidebar where you can, like, chat with it and it can control the local browser on your machine. So, if you imagine, like, what a consumer web agent is, which lives alongside your browser, I think Google Chrome has Project Mariner, I think. I almost call it Project Marinara for some reason. I don't know why. It's...swyx [00:31:17]: No, I think it's, someone really likes Waterworld. Oh, I see. The classic Kevin Costner. Yeah.Paul [00:31:22]: Okay. Project Marinara is a similar thing to the Dia Browser, in my mind, as far as I understand it. You have a browser that has an AI interface that will take over your mouse and keyboard and control the browser for you. Great for consumer use cases. But if you're building applications that rely on a browser and it's more part of a greater, like, AI app experience, you probably need something that's more like infrastructure, not a consumer app.swyx [00:31:44]: Just because I have explored a little bit in this area, do people want branching? So, I have the state of whatever my browser's in. And then I want, like, 100 clones of this state. Do people do that? Or...Paul [00:31:56]: People don't do it currently. Yeah. But it's definitely something we're thinking about. I think the idea of forking a browser is really cool. Technically, kind of hard. We're starting to see this in code execution, where people are, like, forking some, like, code execution, like, processes or forking some tool calls or branching tool calls. Haven't seen it at the browser level yet. But it makes sense. Like, if an AI agent is, like, using a website and it's not sure what path it wants to take to crawl this website to find the information it's looking for. It would make sense for it to explore both paths in parallel. And that'd be a very, like... A road not taken. Yeah. And hopefully find the right answer. And then say, okay, this was actually the right one. And memorize that. And go there in the future. On the roadmap. For sure. Don't make my roadmap, please. You know?Alessio [00:32:37]: How do you actually do that? Yeah. How do you fork? I feel like the browser is so stateful for so many things.swyx [00:32:42]: Serialize the state. Restore the state. I don't know.Paul [00:32:44]: So, it's one of the reasons why we haven't done it yet. It's hard. You know?
Like, to truly fork, it's actually quite difficult. The naive way is to open the same page in a new tab and then, like, hope that it's in the same state. But if you have a form halfway filled, you may have to, like, take the whole, you know, container. Pause it. All the memory. Duplicate it. Restart it from there. It could be very slow. So, we haven't found a good approach yet. Like, the easy thing to fork is just, like, copy the page object. You know? But I think there needs to be something a little bit more robust there. Yeah.swyx [00:33:12]: So, MorphLabs has this infinite branch thing. Like, they wrote a custom fork of Linux or something that let them save the system state and clone it. MorphLabs, hit me up. I'll be a customer. Yeah. That's the only. I think that's the only way to do it. Yeah. Like, unless Chrome has some special API for you. Yeah.Paul [00:33:29]: There's probably something we'll reverse engineer one day. I don't know. Yeah.Alessio [00:33:32]: Let's talk about StageHand, the AI web browsing framework. You have three core components, Observe, Extract, and Act. Pretty clean landing page. What was the idea behind making a framework? Yeah.Stagehand: AI web browsing frameworkPaul [00:33:43]: So, there's three frameworks that are very popular or already exist, right? Puppeteer, Playwright, Selenium. Those are for building hard-coded scripts to control websites. And as soon as I started to play with LLMs plus browsing, I caught myself, you know, code-genning Playwright code to control a website. I would, like, take the DOM. I'd pass it to an LLM. I'd say, can you generate the Playwright code to click the appropriate button here? And it would do that. And I was like, this really should be part of the frameworks themselves. And I became really obsessed with SDKs that take natural language as part of, like, the API input. And that's what StageHand is. StageHand exposes three APIs, and it's a superset of Playwright. So, if you go to a page, you may want to take an action, click on the button, fill in the form, etc. That's what the act command is for. You may want to extract some data. This one takes natural language, like, extract the winner of the Super Bowl from this page. You can give it a Zod schema, so it returns a structured output. And then maybe you're building an agent. You can do an agent loop, and you want to kind of see what actions are possible on this page before taking one. You can do observe. So, you can observe the actions on the page, and it will generate a list of actions. You can guide it, like, give me actions on this page related to buying an item. And it'll return, like, buy it now, add to cart, view shipping options, and you can pass that to an LLM, an agent loop, to say, what's the appropriate action given this high-level goal? So, StageHand isn't a web agent. It's a framework for building web agents. And we think that agent loops are actually pretty close to the application layer because every application probably has different goals or different ways it wants to take steps. I don't think I've seen a generic one. Maybe you guys are the experts here. I haven't seen, like, a really good AI agent framework here. Everyone kind of has their own special sauce, right? I see a lot of developers building their own agent loops, and they're using tools. And I view StageHand as the browser tool. So, we expose act, extract, observe. Your agent can call these tools. And from that, you don't have to worry about it. You don't have to worry about generating Playwright code performantly.
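Here is a minimal sketch of those three primitives in use. The exact constructor options and method signatures vary across Stagehand versions, so treat the shapes below, including the package name and the env option, as assumptions rather than a definitive API reference.

```ts
// Sketch of Stagehand's act / extract / observe, per the description above.
// Signatures are assumed from the conversation and may differ by version.
import { Stagehand } from "@browserbasehq/stagehand";
import { z } from "zod";

const stagehand = new Stagehand({ env: "LOCAL" }); // assumed option: local vs. hosted browser
await stagehand.init();
const page = stagehand.page;

await page.goto("https://www.espn.com/nfl/");

// act: a natural-language action on the page
await page.act("click the link to the most recent Super Bowl story");

// extract: natural language plus a Zod schema, returning structured output
const { winner } = await page.extract({
  instruction: "extract the winner of the most recent Super Bowl",
  schema: z.object({ winner: z.string() }),
});
console.log(winner);

// observe: list candidate actions, optionally guided, to feed your own agent loop
const actions = await page.observe("actions on this page related to reading game results");
console.log(actions);

await stagehand.close();
```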
You don't have to worry about running it. You can kind of just integrate these three tool calls into your agent loop and reliably automate the web.swyx [00:35:48]: A special shout-out to Anirudh, who I met at your dinner, who I think listens to the pod. Yeah. Hey, Anirudh.Paul [00:35:54]: Anirudh's the man. He's a StageHand guy.swyx [00:35:56]: I mean, the interesting thing about each of these APIs is they're each kind of a startup. Like, specifically extract, you know, Firecrawl is extract. There's, like, Expand AI. There's a whole bunch of, like, extract companies. They just focus on extract. I'm curious. Like, I feel like you guys are going to collide at some point. Like, right now, it's friendly. Everyone's in a blue ocean. At some point, it's going to be valuable enough that there's some turf battle here. I don't think you have a dog in this fight. I think you can mock extract to use an external service if they're better at it than you. But it's just an observation that, like, in the same way that I see each option, each checkbox in the sidebar of custom GPTs becoming a startup or each box in the Karpathy chart being a startup. Like, this is also becoming a thing. Yeah.Paul [00:36:41]: I mean, like, so the way StageHand works is that it's MIT-licensed, completely open source. You bring your own API key to your LLM of choice. You could choose your LLM. We don't make any money off of the extract, really. We only really make money if you choose to run it with our browser. You don't have to. You can actually use your own browser, a local browser. You know, StageHand is completely open source for that reason. And, yeah, like, I think if you're building really complex web scraping workflows, I don't know if StageHand is the tool for you. I think it's really more if you're building an AI agent that needs a few general tools or if it's doing a lot of, like, web automation-intensive work. But if you're building a scraping company, StageHand is not your thing. You probably want something that's going to, like, get HTML content, you know, convert that to Markdown, query it. That's not what StageHand does. StageHand is more about reliability. I think we focus a lot on reliability and less so on cost optimization and speed at this point.swyx [00:37:33]: I actually feel like StageHand, so the way that StageHand works, it's like, you know, page.act, click on the quick start. Yeah. It's kind of the integration test for the code that you would have to write anyway, like the Puppeteer code that you have to write anyway. And when the page structure changes, because it always does, then this is still the test. This is still the test that I would have to write. Yeah. So it's kind of like a testing framework that doesn't need implementation detail.Paul [00:37:56]: Well, yeah. I mean, Puppeteer, Playwright, and Selenium were all designed as testing frameworks, right? Yeah. And now people are, like, hacking them together to automate the web. I would say, and, like, maybe this is, like, me being too specific. But, like, when I write tests, if the page structure changes without me knowing, I want that test to fail. So I don't know if, like, AI, like, regenerating that. Like, people are using StageHand for testing. But it's more for, like, usability testing, not, like, testing of, like, does the front end, like, has it changed or not. Okay. But generally where we've seen people, like, really, like, take off is, like, if they're using, you know, something.
If they want to build a feature in their application that's kind of like Operator or Deep Research, they're using StageHand to kind of power that tool calling in their own agent loop. Okay. Cool.swyx [00:38:37]: So let's go into Operator, the first big agent launch of the year from OpenAI. Seems like they have a whole bunch scheduled. You were on break and your phone blew up. What's your just general view of computer use agents, which is what they're calling it? The overall category before we go into Open Operator, just the overall promise of Operator. I will observe that I tried it once. It was okay. And I never tried it again.OpenAI's Operator and computer use agentsPaul [00:38:58]: That tracks with my experience, too. Like, I'm a huge fan of the OpenAI team. Like, I think that I do not view Operator as a company killer for browser base at all. I think it actually shows people what's possible. I think, like, computer use models make a lot of sense. And what I'm actually most excited about with computer use models is, like, their ability to, like, really take screenshots, reason, and output steps. I think that using mouse click or mouse coordinates, I've seen that prove to be less reliable than I would like. And I just wonder if that's the right form factor. What we've done with our framework is anchor it to the DOM itself, anchor it to the actual item. So, like, if it's clicking on something, it's clicking on that thing, you know? Like, it's more accurate. No matter where it is. Yeah, exactly. Because it really ties in nicely. And it can handle, like, the whole viewport in one go, whereas, like, Operator can only handle what it sees. Can you hover? Is hovering a thing that you can do? I don't know if we expose it as a tool directly, but I'm sure there's, like, an API for hovering. Like, move mouse to this position. Yeah, yeah, yeah. I think you can trigger hover, like, via, like, the JavaScript on the DOM itself. But, no, I think, like, when we saw computer use, everyone's eyes lit up because they realized, like, wow, like, AI is going to actually automate work for people. And I think seeing that kind of happen from both of the labs, and I'm sure we're going to see more labs launch computer use models, I'm excited to see all the stuff that people build with it. I think that I'd love to see computer use power, like, controlling a browser on browser base. And I think, like, Open Operator, which was, like, our open source version of OpenAI's Operator, was our first take on, like, how can we integrate these models into browser base? And we handle the infrastructure and let the labs do the models. I don't have a sense that Operator will be released as an API. I don't know. Maybe it will. I'm curious to see how well that works because I think it's going to be really hard for a company like OpenAI to do things like support CAPTCHA solving or, like, have proxies. Like, I think it's hard for them structurally. Imagine this New York Times headline: OpenAI CAPTCHA solving. Like, that would be a pretty bad headline. This New York Times headline: Browser base solves CAPTCHAs? No one cares. No one cares. And, like, our investors are bored. Like, we're all okay with this, you know? We're building this company knowing that the CAPTCHA solving is short-lived until we figure out how to authenticate good bots. I think it's really hard for a company like OpenAI, who has this brand that's so, so good, to balance with, like, the icky parts of web automation, which can be kind of complex to solve.
I'm sure OpenAI knows who to call whenever they need you. Yeah, right. I'm sure they'll have a great partnership.Alessio [00:41:23]: And is Open Operator just, like, a marketing thing for you? Like, how do you think about resource allocation? So, you can spin this up very quickly. And now there's all this, like, open deep research, just open all these things that people are building. We started it, you know. You're the original Open. We're the original Open operator, you know? Is it just, hey, look, this is a demo, but, like, we'll help you build out an actual product for yourself? Like, are you interested in going more of a product route? That's kind of the OpenAI way, right? They started as a model provider and then…Paul [00:41:53]: Yeah, we're not interested in going the product route yet. I view Open Operator as a reference project, you know? Let's show people how to build these things using the infrastructure and models that are out there. And that's what it is. It's, like, Open Operator is very simple. It's an agent loop. It says, like, take a high-level goal, break it down into steps, use tool calling to accomplish those steps. It takes screenshots and feeds those screenshots into an LLM with the step to generate the right action. It uses stagehand under the hood to actually execute this action. It doesn't use a computer use model. And it, like, has a nice interface using the live view that we talked about, the iframe, to embed that into an application. So I felt like people on launch day wanted to figure out how to build their own version of this. And we turned that around really quickly to show them. And I hope we do that with other things like deep research. We don't have a deep research launch yet. I think David from AOMNI actually has an amazing open deep research that he launched. It has, like, 10K GitHub stars now. So he's crushing that. But I think if people want to build these features natively into their application, they need good reference projects. And I think Open Operator is a good example of that.swyx [00:42:52]: I don't know. Actually, I'm actually pretty bullish on API-driven operator. Because that's the only way that you can sort of, like, once it's reliable enough, obviously. And now we're nowhere near. But, like, give it five years. It'll happen, you know. And then you can sort of spin this up and browsers are working in the background and you don't necessarily have to know. And it just is booking restaurants for you, whatever. I can definitely see that future happening. I had this on the landing page here. This might be slightly out of order. But, you know, you have, like, sort of three use cases for browser base. Open Operator. Or this is the operator sort of use case. It's kind of like the workflow automation use case. And it competes with UiPath in the sort of RPA category. Would you agree with that? Yeah, I would agree with that. And then there's Agents we talked about already. And web scraping, which I imagine would be the bulk of your workload right now, right?Paul [00:43:40]: No, not at all. I'd say actually, like, the majority is browser automation. We're kind of expensive for web scraping. Like, I think that if you're building a web scraping product, if you need to do occasional web scraping or you have to do web scraping that works every single time, you want to use browser automation. Yeah. You want to use browser-based. But if you're building web scraping workflows, what you should do is have a waterfall.
You should have the first request be a curl to the website. See if you can get it without even using a browser. And then the second request may be, like, a scraping-specific API. There's, like, a thousand scraping APIs out there that you can use to try and get data. ScrapingBee. ScrapingBee is a great example, right? Yeah. And then, like, if those two don't work, bring out the heavy hitter. Like, browser-based will 100% work, right? It will load the page in a real browser, hydrate it. I see.swyx [00:44:21]: Because a lot of pages don't render without JS.swyx [00:44:25]: Yeah, exactly.Paul [00:44:26]: So, I mean, the three big use cases, right? Like, you know, automation, web data collection, and then, you know, if you're building anything agentic that needs, like, a browser tool, you want to use browser-based.Alessio [00:44:35]: Is there any use case that, like, you were super surprised by that people might not even think about? Oh, yeah. Or is it, yeah, anything that you can share? The long tail is crazy. Yeah.Surprising use cases of BrowserbasePaul [00:44:44]: One of the case studies on our website that I think is the most interesting is this company called Benny. So, the way that it works is if you're on food stamps in the United States, you can actually get rebates if you buy certain things. Yeah. You buy some vegetables. You submit your receipt to the government. They'll give you a little rebate back. Say, hey, thanks for buying vegetables. It's good for you. That process of submitting that receipt is very painful. And the way Benny works is you use their app to take a photo of your receipt, and then Benny will go submit that receipt for you and then deposit the money into your account. That's actually using no AI at all. It's all, like, hard-coded scripts. They maintain the scripts. They've been doing a great job. And they've built this amazing consumer app. But it's an example of, like, all these, like, tedious workflows that people have to do to kind of go about their business. And they're doing it for the sake of their day-to-day lives. And I had never known about, like, food stamp rebates or the complex forms you have to fill out for them. But the world is powered by millions and millions of tedious forms, visas. You know, Emirate Lighthouse is a customer, right? You know, they do the O-1 visa. Millions and millions of forms are taking away humans' time. And I hope that Browserbase can help power software that automates away the web forms that we don't need anymore. Yeah.swyx [00:45:49]: I mean, I'm very supportive of that. I mean, forms. I do think, like, government itself is a big part of it. I think the government itself should embrace AI more to do more sort of human-friendly form filling. Mm-hmm. But I'm not optimistic. I'm not holding my breath. Yeah. We'll see. Okay. I think I'm about to zoom out. I have a little brief thing on computer use, and then we can talk about founder stuff, which is, I tend to think of developer tooling markets in impossible triangles, where everyone starts in a niche, and then they start to branch out. So I already hinted at a little bit of this, right? We mentioned Morph. We mentioned E2B. We mentioned Firecrawl. And then there's Browserbase. So there's, like, all this stuff of, like, having a serverless virtual computer that you give to an agent and let them do stuff with it. And there's various ways of connecting it to the internet. You can just connect to a search API, like SERP API or whatever; like, EXA is another one. That's what you're searching with.
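The waterfall Paul lays out a moment earlier maps cleanly to code. Here is a minimal sketch: cheapest method first, real browser last. The scraping-API step uses a placeholder endpoint rather than any particular provider, and the "did we get real content" check is deliberately crude. Assumes Node 18+ and `npm install playwright`.

```ts
// Sketch of the scraping waterfall: plain HTTP -> scraping API -> real browser.
import { chromium } from "playwright";

async function fetchPage(url: string): Promise<string> {
  // 1) Plain HTTP request: works for static or server-rendered pages.
  try {
    const res = await fetch(url);
    const html = await res.text();
    if (res.ok && html.length > 1000) return html; // crude "real content" heuristic
  } catch { /* fall through to the next tier */ }

  // 2) Scraping-specific API: placeholder endpoint standing in for any provider.
  try {
    const res = await fetch(`https://scraping-api.example.com/render?url=${encodeURIComponent(url)}`);
    if (res.ok) return await res.text();
  } catch { /* fall through to the next tier */ }

  // 3) Heavy hitter: a real browser that runs the page's JavaScript and hydrates it.
  const browser = await chromium.launch();
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "networkidle" });
    return await page.content();
  } finally {
    await browser.close();
  }
}

console.log((await fetchPage("https://example.com")).length);
```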
You can also have a JSON markdown extractor, which is Firecrawl. Or you can have a virtual browser like Browserbase, or you can have a virtual machine like Morph. And then there's also maybe, like, a virtual sort of code environment, like Code Interpreter. So, like, there's just, like, a bunch of different ways to tackle the problem of give a computer to an agent. And I'm just kind of wondering if you see, like, everyone's just, like, happily coexisting in their respective niches. And as a developer, I just go and pick, like, a shopping basket of one of each. Or do you think that you eventually, people will collide?Future of browser automation and market competitionPaul [00:47:18]: I think that currently it's not a zero-sum market. Like, I think we're talking about... I think we're talking about all of knowledge work that people do that can be automated online. All of these, like, trillions of hours that happen online where people are working. And I think that there's so much software to be built that, like, I tend not to think about how these companies will collide. I just try to solve the problem as best as I can and make this specific piece of infrastructure, which I think is an important primitive, the best I possibly can. And yeah. I think there's players that are actually going to like it. I think there's players that are going to launch, like, over-the-top, you know, platforms, like agent platforms that have all these tools built in, right? Like, who's building the rippling for agent tools that has the search tool, the browser tool, the operating system tool, right? There are some. There are some. There are some, right? And I think in the end, what I have seen as my time as a developer, and I look at all the favorite tools that I have, is that, like, for tools and primitives with sufficient levels of complexity, you need to have a solution that's really bespoke to that primitive, you know? And I am sufficiently convinced that the browser is complex enough to deserve a primitive. Obviously, I have to. I'm the founder of BrowserBase, right? I'm talking my book. But, like, I think maybe I can give you one spicy take against, like, maybe just whole OS running. I think that when I look at computer use when it first came out, I saw that the majority of use cases for computer use were controlling a browser. And do we really need to run an entire operating system just to control a browser? I don't think so. I don't think that's necessary. You know, BrowserBase can run browsers for way cheaper than you can if you're running a full-fledged OS with a GUI, you know, operating system. And I think that's just an advantage of the browser. It is, like, browsers are little OSs, and you can run them very efficiently if you orchestrate it well. And I think that allows us to offer 90% of the, you know, functionality in the platform needed at 10% of the cost of running a full OS. Yeah.Open Operator: Browserbase's Open-Source Alternativeswyx [00:49:16]: I definitely see the logic in that. There's a Mark Andreessen quote. I don't know if you know this one. Where he basically observed that the browser is turning the operating system into a poorly debugged set of device drivers, because most of the apps are moved from the OS to the browser. So you can just run browsers.Paul [00:49:31]: There's a place for OSs, too. Like, I think that there are some applications that only run on Windows operating systems. 
And Eric from pig.dev in this upcoming YC batch, or last YC batch, like, he's building infrastructure to run tons of Windows operating systems for you to control with your agent. And like, there's some legacy EHR systems that only run on Internet Explorer. Yeah.Paul [00:49:54]: I think that's it. I think, like, there are use cases for specific operating systems for specific legacy software. And like, I'm excited to see what he does with that. I just wanted to give a shout out to the pig.dev website.swyx [00:50:06]: The pigs jump when you click on them. Yeah. That's great.Paul [00:50:08]: Eric, he's the former co-founder of banana.dev, too.swyx [00:50:11]: Oh, that Eric. Yeah. That Eric. Okay. Well, he abandoned bananas for pigs. I hope he doesn't start going around with pigs now.Alessio [00:50:18]: Like he was going around with bananas. A little toy pig. Yeah. Yeah. I love that. What else are we missing? I think we covered a lot of, like, the browser-based product history. But what do you wish people asked you? Yeah.Paul [00:50:29]: I wish people asked me more about, like, what will the future of software look like? Because I think that's really where I've spent a lot of time thinking about why to do browser-based. Like, for me, starting a company is like a means of last resort. Like, you shouldn't start a company unless you absolutely have to. And I remain convinced that the future of software is software that you're going to click a button and it's going to do stuff on your behalf. Right now, with software, you click a button and it maybe, like, calls back an API and, like, computes some numbers. It, like, modifies some text, whatever. But the future of software is software using software. So, I may log into my accounting website for my business, click a button, and it's going to go load up my Gmail, search my emails, find the thing, upload the receipt, and then comment it for me. Right? And it may do that using APIs, maybe a browser. I don't know. I think it's a little bit of both. But that's completely different from how we've built software so far. And I think that future of software has different infrastructure requirements. It's going to require different UIs. It's going to require different pieces of infrastructure. I think the browser infrastructure is one piece that fits into that, along with all the other categories you mentioned. So, I think that it's going to require developers to think differently about how they've built software for, you know

Fund/Build/Scale
Why PR Comes Before PMF

Fund/Build/Scale

Play Episode Listen Later Feb 26, 2025 31:28


Do you want to know the difference between marketing and PR? Marketing is when you say something nice about yourself; PR is when other people say nice things about you. Jenna Guarneri is the founder of JMG Public Relations and author of the bestseller "You Need PR." In this episode, she shares DIY PR tactics that help founders establish themselves as experts, attract customers, and raise their profile with investors — without spending a fortune on an agency. If you've ever wondered why reporters never get back to you, we cover that, too. Key takeaways from this episode: ✅ Why PR comes before PMF in the startup playbook ✅ The biggest PR misconceptions founders have (and how to avoid them) ✅ How to craft a media pitch that actually gets responses ✅ DIY PR strategies for building credibility before you hire a firm ✅ The right way to engage with journalists without being ignored ✅ How PR can help secure funding and drive startup growth If you're trying to take control of your PR strategy and attract positive attention, listen in. RUNTIME 31:28 EPISODE BREAKDOWN (1:52) How Jenna sets client expectations on what PR can and cannot accomplish. (5:14) Key signals that indicate an early-stage startup is ready to hire outside PR. (6:41) “The founders usually are amenable to PR and doing media interviews. It kind of comes with the territory of being a founder.” (7:59) How to get started with DIY PR by sharing thought leadership that creates value. (11:35) “PR should be done at the very beginning, right from the very start.” (12:20) The right way for stealth startups to approach PR. (13:02) The top reasons why reporters ignore pitches — and how to avoid them. (15:21) Crafting a news hook that genuinely engages journalists. (17:33) How your world changes when PR starts working. (18:50) “Effective public relations will drive the business in a number of ways.” (20:50) How to interview and vet a PR firm before making a commitment. (22:46) PR is a long game: “We can't work miracles in three months.” (25:22) Why using ChatGPT to pitch reporters is a terrible idea. (27:42) “Content creation does take a lot of time, a lot of energy, but it goes a long way really quickly for brand awareness.” (30:14) The one question Jenna would have to ask before hiring a PR firm. LINKS Jenna Guarneri JMG Public Relations You Need PR SUBSCRIBE

Swisspreneur Show
EP #474 - The Successful Contovista Exit & Building a Climate Tech Startup

Swisspreneur Show

Play Episode Listen Later Feb 12, 2025 53:46


Timestamps: 7:35 - The successful Contovista exit 12:30 - Contovista investors getting a 33x return 17:17 - How do you reach product market fit? 31:05 - Why Gian started NORM 42:10 - Energy efficiency and climate change Resources Mentioned: The 4 Levels of PMF, First Round Capital About Gian Reto à Porta: Gian Reto à Porta is a serial entrepreneur and business angel. He co-founded and successfully exited Contovista, a 2013 fintech pioneer, and nowadays is active as a co-founder and CEO at Norm, a startup which wants to help drive decarbonization in the building sector. Gian holds an MSc in Computer Science from UZH. Gian started Contovista in 2013, and the company quickly garnered so much success and media attention that he had to quit his corporate job. Despite its very small number of employees, Contovista began working with most of the banks in Switzerland. They were acquired by Aduno Group in 2017 for an undisclosed amount, and Contovista's early investors received a 33x return on their investment. After working a few years solely as a business angel, Gian descended once more into the startup trenches by founding NORM, a company which assesses the energy efficiency, CO₂ emissions and renovation costs of buildings. They saw a need for this product not only because Swiss banks will soon have to mandatorily discuss energy efficiency with clients when issuing mortgages, but also because there was no "centralized" way to improve the energy efficiency of your home outside of finding an energy consultant, who may or may not be acting in their clients' best interest. The NORM co-founders, of which there are 6, started their business because they wanted to work with tech which had a positive impact on society. They've certainly hit the nail on the head with NORM, considering that in Europe 30-40% of all CO₂ emissions come from the building sector. The cover portrait was edited by www.smartportrait.io. Don't forget to give us a follow on Twitter, Instagram, Facebook and Linkedin, so you can always stay up to date with our latest initiatives. That way, there's no excuse for missing out on live shows, weekly giveaways or founders' dinners.

In Depth
Inside Braze's blitz to $500M in CARR | Building broad, going global, and outfoxing the competition | Bill Magnuson (Co-founder & CEO) and Kevin Wang (CPO)

In Depth

Play Episode Listen Later Feb 6, 2025 83:07


Bill Magnuson is the co-founder and CEO at Braze, along with Kevin Wang, who joined as employee #8 and serves as the CPO. The two MIT graduates have built Braze into a publicly listed customer engagement platform with a $4.4B market cap. In 2023, Braze surpassed $500M in CARR, and serves over 2,200 customers worldwide. Before Braze, Bill spent time at Bridgewater Associates. Kevin's academic background is in brain & cognitive sciences, and prior to joining Braze he worked at Accenture and Brewgene. – In today's episode, we discuss: The Braze founders' early insights into the mobile revolution How a TechCrunch Hackathon sparked Braze's creation The journey from 1,000 beta signups to 2,200+ paying customers Breaking traditional lean startup rules Navigating early fundraising challenges Finding product market fit by “fishing in every pond” Approaching competition strategically like a boxer Much more – Referenced: Accenture: https://www.accenture.com/ Appboy: https://www.braze.com/resources/articles/appboy-social-network-for-mobile-apps Bipul Sinha: https://www.linkedin.com/in/bipulsinha/ Braze: https://www.braze.com/ Bridgewater Associates: https://www.bridgewater.com/ Jon Hyman: https://www.linkedin.com/in/jon-hyman/ Mark Ghermezian: https://x.com/markgher MIT: https://www.mit.edu/ Rubrik: https://www.rubrik.com/ WeWork: https://www.wework.com/ – Where to find Bill: LinkedIn: https://www.linkedin.com/in/billmagnuson/ Twitter/X: https://x.com/billmag – Where to find Kevin: LinkedIn: https://www.linkedin.com/in/kevin-wang-96131916/ – Where to find Brett: LinkedIn: https://www.linkedin.com/in/brett-berson-9986094/ Twitter/X: https://twitter.com/brettberson – Where to find First Round Capital: Website: https://firstround.com/ First Round Review: https://review.firstround.com/ Twitter/X: https://twitter.com/firstround YouTube: https://www.youtube.com/@FirstRoundCapital This podcast on all platforms: https://review.firstround.com/podcast – Timestamps: (00:00) Teaser: Finding “terminal value” product market fit (00:24) Introduction (02:34) Bill's insights into the mobile revolution (04:43) Lessons from Bridgewater Associates (09:12) First principles thinking in action at Braze (14:14) Meeting co-founders at an NYC Hackathon (24:35) Braze's scrappy scaling (33:37) Early product development (39:37) From 1,000 beta signups to 2,200+ paying customers (43:51) Braze's fundraising struggles (47:01) Breaking the rules of a lean startup (53:02) Riding the mobile wave to success (60:02) Building a global customer base (64:04) The never-ending quest for PMF (70:29) 3 things every founder needs to know (73:56) Navigating competition like a boxer (79:03) When scale helps or hurts (80:32) 1 thing they've learned from each other

Midrats
Episode 713: Seth Folsom's Nothing Here Worth Dying For

Midrats

Play Episode Listen Later Feb 3, 2025 64:12


Returning to Midrats this week to discuss his latest non-fiction book is Seth W.B. Folsom, Colonel, USMC (Ret.). From the Amazon page: Nothing Here Worth Dying For tells the story of his command of Task Force Lion—a "purpose-built" combat advisor team—and his frenetic 2017 deployment to Iraq's Al Anbar Province. Charged with the daunting task of advising, assisting, and enabling the Iraqi Security Forces in their fight against the Islamic State of Iraq and Syria, Folsom and his team of Marines and sailors struggled to support their Iraqi partners in the Jazeera Operations Command while simultaneously grappling with their own leadership for their relevance on the battlefield. … As with the author's previous books, Nothing Here Worth Dying For focuses on individual Marine actions at the tactical and operational levels while also addressing regional events that contributed to the overall narrative of the U.S. war in Iraq. Folsom describes his unpopular decision to prioritize his team members and their mission to support the Iraqi army above the desires of his own military service branch. As the final operation against ISIS in western Al Anbar gained steam, he questioned the wisdom of the military leadership to which he had dedicated his entire adult life.
Showlinks: Nothing Here Worth Dying For; The Highway War: A Marine Company Commander in Iraq; In the Gray Area: A Marine Advisor Team at War; Where Youth and Laughter Go: With 'the Cutting Edge' in Afghanistan
Summary: This conversation delves into the complexities of military operations in Iraq, focusing on the formation and challenges faced by Task Force Lion during the fight against ISIS. Colonel Seth Folsom shares insights on the cultural dynamics, logistical feats, and the intricate relationships between various military and coalition forces. The discussion highlights the sacrifices made by service members and the ongoing questions about the purpose and impact of their missions.
Takeaways: The rise of ISIS in 2014 prompted a swift military response. Task Force Lion was formed from diverse units, creating unique challenges. Cultural differences between U.S. and Iraqi forces impacted operations. Logistical coordination was crucial for mission success. The PMF played a significant role in the fight against ISIS. Command structures were complex and often convoluted. The importance of building a cohesive team was emphasized. Leadership involved navigating various military and political dynamics. Sacrifices made by service members were a central theme. Reflections on the purpose of military engagement remain relevant.
Chapters: 00:00 Introduction and Context of the Long War; 02:56 The Rise of ISIS and Initial Responses; 05:39 Building Task Force Lion; 08:12 Challenges of Individual Augments; 10:54 Mission Overview and Arrival in Iraq; 13:49 The Complex Landscape of Iraqi Forces; 16:12 The Role of PMF and Tribal Forces; 19:09 Navigating Command Structures and Relationships; 36:42 Challenges of Coalition Operations; 39:59 Authority and Responsibility in Combat; 40:54 Logistical Feats in a War Zone; 45:19 The Complexity of Joint Operations; 47:50 Cultural Differences in Military Operations; 55:17 Reflections on Purpose and Sacrifice
Seth W. B. Folsom is a retired Marine Corps colonel who served more than twenty-eight years in uniform. Throughout the Global War on Terror, he deployed multiple times to Iraq and Afghanistan, where he commanded in combat at the company, battalion, and task force levels. A graduate of the University of Virginia, Naval Postgraduate School, and the Marine Corps War College, he is the author of "The Highway War: A Marine Company Commander in Iraq," "In the Gray Area: A Marine Advisor Team at War," "Where Youth and Laughter Go: With 'the Cutting Edge' in Afghanistan," and "Nothing Here Worth Dying For: Task Force Lion in Iraq." He, his family, and their needy, spoiled cat live in Southern California.

The Product Market Fit Show
He exited for hundreds of millions—then invested in 20+ founders. Here's what he looks for. | Jason Van Gaal, Founder of ROOT

The Product Market Fit Show

Play Episode Listen Later Feb 3, 2025 48:02 Transcription Available


Jason built a data center company in 2013. When he exited in 2019, it was the third-largest exit in Canada that year. He'd sold his previous startup and invested 100% of his capital into ROOT. He grew it to tens of millions and exited for hundreds of millions. Now he's invested in over 20 angel-stage startups. He shares the story of ROOT and what he looks for in the startups and founders he backs. Why you should listen: Why seeing inefficiencies can lead to huge advantages vs competitors. How customer concentration can actually lead to huge success. Why the 'Why Now' slide is so important. Why Jason values startups that can get to free cash flow within 1-2 years. How to use the lead-to-conversation ratio as a leading indicator of PMF (a short sketch follows below). Keywords: data centers, investment, entrepreneurship, product market fit, angel investing, business growth, technology, risk management, funding strategies, customer relationships, investment, startup, venture capital, product-market fit, founder advice, business model, cash flow, total addressable market, team dynamics, entrepreneurial hunger. Send me a message to let me know what you think!
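One bullet above is easy to make concrete: the lead-to-conversation ratio. Below is a minimal, hedged sketch of how a founder might track it week over week as a leading indicator of PMF; the data shape and the weekly grouping are assumptions for illustration, not Jason's exact method.

```python
# A toy tracker for the lead-to-conversation ratio: of the leads that arrived
# in a period, what share turned into real sales conversations? The boolean
# list per week is an illustrative assumption, not Jason's exact definition.

def lead_to_conversation_ratio(leads_converted: list[bool]) -> float:
    """Share of leads in the period that became conversations (0.0 if no leads)."""
    if not leads_converted:
        return 0.0
    return sum(leads_converted) / len(leads_converted)

# A rising trend here is the leading signal the episode describes, visible
# well before revenue moves.
weekly_leads = {
    "week 1": [True, False, False, False],  # 1 of 4 leads -> 25%
    "week 2": [True, True, False, False],   # 2 of 4 leads -> 50%
    "week 3": [True, True, True, False],    # 3 of 4 leads -> 75%
}
for week, outcomes in weekly_leads.items():
    print(week, f"{lead_to_conversation_ratio(outcomes):.0%}")
```

The point is less the arithmetic than the habit: define the ratio once, log it every week, and treat a sustained upward trend as the early signal, before revenue or retention can confirm it.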

Doctor Warrick
EP371: Magnets and Health with Mark Fox (Part 1)

Doctor Warrick

Play Episode Listen Later Feb 1, 2025 21:32


Welcome to my podcast. I am Doctor Warrick Bishop, and I want to help you to live as well as possible for as long as possible. I'm a practising cardiologist, best-selling author, keynote speaker, and the creator of The Healthy Heart Network. I have over 20 years as a specialist cardiologist and a private practice of over 10,000 patients. In this podcast, Doctor Warrick Bishop, a cardiologist and CEO of the Healthy Heart Network, discusses heart disease in Australia and introduces Mark Fox, CEO and founder of Resona Health, who specializes in pulsed electromagnetic field (PMF) therapy. Mark shares his journey from chemical engineering to working on the Space Shuttle program and ultimately exploring energy therapy after a personal experience with his dog's arthritis. Initially skeptical, he became a proponent of PMF therapy after witnessing its effects on various conditions, including PTSD and anxiety.

The Hacked Life
Molecular Hydrogen: The Game-Changer in Anti-Aging & Recovery - Dr. Mike Van Thielen : 350

The Hacked Life

Play Episode Listen Later Jan 30, 2025 36:18


In this conversation, Dr. Mike Van Thielen discusses the benefits and mechanisms of molecular hydrogen as a powerful biohacking tool for health optimization. He explains how hydrogen acts as a selective antioxidant, promotes cellular health, and supports mitochondrial function. The discussion also covers practical applications, including different methods of hydrogen delivery and the importance of integrating hydrogen therapy with other health practices for optimal results. Takeaways ✅ Molecular hydrogen has positive effects on over 170 diseases. ✅ It acts as a selective antioxidant, neutralizing harmful free radicals. ✅ Hydrogen promotes autophagy, recycling damaged cellular components. ✅ Mitochondrial health is crucial for overall energy production and longevity. ✅ Hydrogen can easily cross the blood-brain barrier, benefiting neurological health. ✅ There are no shortcuts to health; a holistic approach is necessary. ✅ Hydrogen therapy is 100% safe and can be used alongside other treatments. ✅ Avoiding electromagnetic radiation is important for cellular health. ✅ Not all PMF devices are effective; analog devices are preferred. Chapters 00:00 Introduction to Molecular Hydrogen and Biohacking 11:21 Understanding the Mechanisms of Molecular Hydrogen 22:50 Practical Applications and Devices for Hydrogen Therapy 33:08 Integrating Hydrogen Therapy with Other Health Practices

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Outlasting Noam Shazeer, crowdsourcing Chat + AI with >1.4m DAU, and becoming the "Western DeepSeek" — with William Beauchamp, Chai Research

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Jan 26, 2025 75:46


One last Gold sponsor slot is available for the AI Engineer Summit in NYC. Our last round of invites is going out soon - apply here - If you are building AI agents or AI eng teams, this will be the single highest-signal conference of the year for you!
While the world melts down over DeepSeek, few are talking about the OTHER notable group of former hedge fund traders who pivoted into AI and built a remarkably profitable consumer AI business with a tiny, incredibly cracked engineering team — Chai Research. In short order they have: * Started a Chat AI company well before Noam Shazeer started Character AI, and outlasted his departure. * Crossed 1m DAU in 2.5 years - William updates us on the pod that they've hit 1.4m DAU now, another +40% from a few months ago. Revenue crossed >$22m. * Launched the Chaiverse model crowdsourcing platform - taking 3-4 week A/B testing cycles down to 3-4 hours, and deploying >100 models a week. While they're not paying million dollar salaries, you can tell they're doing pretty well for an 11-person startup.
The Chai Recipe: Building infra for rapid evals. Remember how the central thesis of LMarena (formerly LMsys) is that the only comprehensive way to evaluate LLMs is to let users try them out and pick winners? At the core of Chai is a mobile app that looks like Character AI, but is actually the largest LLM A/B testing arena in the world, specialized in retaining chat users for Chai's use cases (therapy, assistant, roleplay, etc). It's basically what LMArena would be if taken very, very seriously at one company (with $1m in prizes to boot). Chai publishes occasional research on how they think about this, including talks at their Palo Alto office. William expands upon this in today's podcast (34 mins in): "Fundamentally, the way I would describe it is when you're building anything in life, you need to be able to evaluate it. And through evaluation, you can iterate, we can look at benchmarks, and we can say the issues with benchmarks and why they may not generalize as well as one would hope in the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5000 users, and the users can rate it. And we can then have a really accurate ranking of like which model, or users finding more engaging or more entertaining. And it gets, you know, it's at this point now, where every day we're able to, I mean, we evaluate between 20 and 50 models, LLMs, every single day, right. So even though we've got only got a team of, say, five AI researchers, they're able to iterate a huge quantity of LLMs, right. So our team ships, let's just say minimum 100 LLMs a week is what we're able to iterate through. Now, before that moment in time, we might iterate through three a week, we might, you know, there was a time when even doing like five a month was a challenge, right? By being able to change the feedback loops to the point where it's not, let's launch these three models, let's do an A-B test, let's assign, let's do different cohorts, let's wait 30 days to see what the day 30 retention is, which is the kind of the, if you're doing an app, that's like A-B testing 101 would be, do a 30-day retention test, assign different treatments to different cohorts and come back in 30 days. So that's insanely slow. That's just, it's too slow. And so we were able to get that 30-day feedback loop all the way down to something like three hours."
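To make that quote concrete, here is a minimal sketch of the crowdsourced eval loop William describes: each chat session is routed to a random candidate model, users rate the experience, and candidates are ranked by average score. This is an illustration only: the model names, the 1-5 rating scale, and the uniform random routing are assumptions, and Chai's production system (described below as closer to a recommender) is certainly more involved.

```python
import random
from collections import defaultdict

# A toy version of the loop: route each session to a random candidate model,
# record the user's rating, and rank candidates by mean score. Model names
# and the 1-5 rating scale are illustrative assumptions.

CANDIDATE_MODELS = ["model-a", "model-b", "model-c"]  # hypothetical submissions
ratings: dict[str, list[int]] = defaultdict(list)     # model -> user ratings

def assign_model(session_id: str) -> str:
    """Pick which candidate model serves this chat session (uniform random)."""
    return random.choice(CANDIDATE_MODELS)

def record_rating(model: str, rating: int) -> None:
    """Store the rating a user gave the model that served their session."""
    ratings[model].append(rating)

def leaderboard() -> list[tuple[str, float, int]]:
    """Rank models by mean rating, reporting sample size alongside the score."""
    rows = [
        (model, sum(scores) / len(scores), len(scores))
        for model, scores in ratings.items()
    ]
    return sorted(rows, key=lambda row: row[1], reverse=True)

if __name__ == "__main__":
    # Simulated feedback from 5,000 sessions stands in for real users.
    for i in range(5000):
        model = assign_model(f"session-{i}")
        record_rating(model, random.randint(1, 5))

    for model, mean_score, n in leaderboard():
        print(f"{model}: {mean_score:.2f} average over {n} ratings")
```

Swapping the uniform choice for learned, recommender-style routing and gating decisions on a minimum sample size per model is roughly what turns this toy into the hours-long feedback loop described above.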
In Crowdsourcing the leap to Ten Trillion-Parameter AGI, William describes Chai's routing as a recommender system, which makes a lot more sense to us than previous pitches for model routing startups. William is notably counter-consensus in a lot of his AI product principles: * No streaming: Chats appear all at once to allow rejection sampling. * No voice: Chai actually beat Character AI to introducing voice - but removed it after finding that it was far from a killer feature. * Blending: "Something that we love to do at Chai is blending, which is, you know, it's the simplest way to think about it is you're going to end up, and you're going to pretty quickly see you've got one model that's really smart, one model that's really funny. How do you get the user an experience that is both smart and funny? Well, just 50% of the requests, you can serve them the smart model, 50% of the requests, you serve them the funny model." (that's it!) But chief above all is the recommender system. We also referenced Exa CEO Will Bryk's concept of SuperKnowlege. Full Video version: on YouTube. Please like and subscribe!
Timestamps: * 00:00:04 Introductions and background of William Beauchamp * 00:01:19 Origin story of Chai AI * 00:04:40 Transition from finance to AI * 00:11:36 Initial product development and idea maze for Chai * 00:16:29 User psychology and engagement with AI companions * 00:20:00 Origin of the Chai name * 00:22:01 Comparison with Character AI and funding challenges * 00:25:59 Chai's growth and user numbers * 00:34:53 Key inflection points in Chai's growth * 00:42:10 Multi-modality in AI companions and focus on user-generated content * 00:46:49 Chaiverse developer platform and model evaluation * 00:51:58 Views on AGI and the nature of AI intelligence * 00:57:14 Evaluation methods and human feedback in AI development * 01:02:01 Content creation and user experience in Chai * 01:04:49 Chai Grant program and company culture * 01:07:20 Inference optimization and compute costs * 01:09:37 Rejection sampling and reward models in AI generation * 01:11:48 Closing thoughts and recruitment
Transcript: Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel, and today we're in the Chai AI office with my usual co-host, Swyx.swyx [00:00:14]: Hey, thanks for having us. It's rare that we get to get out of the office, so thanks for inviting us to your home. We're in the office of Chai with William Beauchamp. Yeah, that's right. You're founder of Chai AI, but previously, I think you're concurrently also running your fund?William [00:00:29]: Yep, so I was simultaneously running an algorithmic trading company, but I fortunately was able to kind of exit from that, I think just in Q3 last year. Yeah, congrats. Yeah, thanks.swyx [00:00:43]: So Chai has always been on my radar because, well, first of all, you do a lot of advertising, I guess, in the Bay Area, so it's working. Yep. And second of all, the reason I reached out to a mutual friend, Joyce, was because I'm just generally interested in the... ...consumer AI space, chat platforms in general. I think there's a lot of inference insights that we can get from that, as well as human psychology insights, kind of a weird blend of the two. And we also share a bit of a history as former finance people crossing over.
I guess we can just kind of start it off with the origin story of Chai.William [00:01:19]: Why decide working on a consumer AI platform rather than B2B SaaS? So just quickly touching on the background in finance. Sure. Originally, I'm from... I'm from the UK, born in London. And I was fortunate enough to go study economics at Cambridge. And I graduated in 2012. And at that time, everyone in the UK and everyone on my course, HFT, quant trading was really the big thing. It was like the big wave that was happening. So there was a lot of opportunity in that space. And throughout college, I'd sort of played poker. So I'd, you know, I dabbled as a professional poker player. And I was able to accumulate this sort of, you know, say $100,000 through playing poker. And at the time, as my friends would go work at companies like ChangeStreet or Citadel, I kind of did the maths. And I just thought, well, maybe if I traded my own capital, I'd probably come out ahead. I'd make more money than just going to work at ChangeStreet.swyx [00:02:20]: With 100k base as capital?William [00:02:22]: Yes, yes. That's not a lot. Well, it depends what strategies you're doing. And, you know, there is an advantage. There's an advantage to being small, right? Because there are, if you have a 10... Strategies that don't work in size. Exactly, exactly. So if you have a fund of $10 million, if you find a little anomaly in the market that you might be able to make 100k a year from, that's a 1% return on your 10 million fund. If your fund is 100k, that's 100% return, right? So being small, in some sense, was an advantage. So started off, and the, taught myself Python, and machine learning was like the big thing as well. Machine learning had really, it was the first, you know, big time machine learning was being used for image recognition, neural networks come out, you get dropout. And, you know, so this, this was the big thing that's going on at the time. So I probably spent my first three years out of Cambridge, just building neural networks, building random forests to try and predict asset prices, right, and then trade that using my own money. And that went well. And, you know, if you if you start something, and it goes well, you You try and hire more people. And the first people that came to mind was the talented people I went to college with. And so I hired some friends. And that went well and hired some more. And eventually, I kind of ran out of friends to hire. And so that was when I formed the company. And from that point on, we had our ups and we had our downs. And that was a whole long story and journey in itself. But after doing that for about eight or nine years, on my 30th birthday, which was four years ago now, I kind of took a step back to just evaluate my life, right? This is what one does when one turns 30. You know, I just heard it. I hear you. And, you know, I looked at my 20s and I loved it. It was a really special time. I was really lucky and fortunate to have worked with this amazing team, been successful, had a lot of hard times. And through the hard times, learned wisdom and then a lot of success and, you know, was able to enjoy it. And so the company was making about five million pounds a year. And it was just me and a team of, say, 15, like, Oxford and Cambridge educated mathematicians and physicists. It was like the real dream that you'd have if you wanted to start a quant trading firm. It was like...swyx [00:04:40]: Your own, all your own money?William [00:04:41]: Yeah, exactly. 
It was all the team's own money. We had no customers complaining to us about issues. There's no investors, you know, saying, you know, they don't like the risk that we're taking. We could. We could really run the thing exactly as we wanted it. It's like Susquehanna or like Rintec. Yeah, exactly. Yeah. And they're the companies that we would kind of look towards as we were building that thing out. But on my 30th birthday, I look and I say, OK, great. This thing is making as much money as kind of anyone would really need. And I thought, well, what's going to happen if we keep going in this direction? And it was clear that we would never have a kind of a big, big impact on the world. We can enrich ourselves. We can make really good money. Everyone on the team would be paid very, very well. Presumably, I can make enough money to buy a yacht or something. But this stuff wasn't that important to me. And so I felt a sort of obligation that if you have this much talent and if you have a talented team, especially as a founder, you want to be putting all that talent towards a good use. I looked at the time of like getting into crypto and I had a really strong view on crypto, which was that as far as a gambling device. This is like the most fun form of gambling invented in like ever super fun, I thought as a way to evade monetary regulations and banking restrictions. I think it's also absolutely amazing. So it has two like killer use cases, not so much banking the unbanked, but everything else, but everything else to do with like the blockchain and, and you know, web, was it web 3.0 or web, you know, that I, that didn't, it didn't really make much sense. And so instead of going into crypto, which I thought, even if I was successful, I'd end up in a lot of trouble. I thought maybe it'd be better to build something that governments wouldn't have a problem with. I knew that LLMs were like a thing. I think opening. I had said they hadn't released GPT-3 yet, but they'd said GPT-3 is so powerful. We can't release it to the world or something. Was it GPT-2? And then I started interacting with, I think Google had open source, some language models. They weren't necessarily LLMs, but they, but they were. But yeah, exactly. So I was able to play around with, but nowadays so many people have interacted with the chat GPT, they get it, but it's like the first time you, you can just talk to a computer and it talks back. It's kind of a special moment and you know, everyone who's done that goes like, wow, this is how it should be. Right. It should be like, rather than having to type on Google and search, you should just be able to ask Google a question. When I saw that I read the literature, I kind of came across the scaling laws and I think even four years ago. All the pieces of the puzzle were there, right? Google had done this amazing research and published, you know, a lot of it. Open AI was still open. And so they'd published a lot of their research. And so you really could be fully informed on, on the state of AI and where it was going. And so at that point I was confident enough, it was worth a shot. I think LLMs are going to be the next big thing. And so that's the thing I want to be building in, in that space. And I thought what's the most impactful product I can possibly build. And I thought it should be a platform. So I myself love platforms. I think they're fantastic because they open up an ecosystem where anyone can contribute to it. Right. 
So if you think of a platform like a YouTube, instead of it being like a Hollywood situation where you have to, if you want to make a TV show, you have to convince Disney to give you the money to produce it instead, anyone in the world can post any content they want to YouTube. And if people want to view it, the algorithm is going to promote it. Nowadays. You can look at creators like Mr. Beast or Joe Rogan. They would have never have had that opportunity unless it was for this platform. Other ones like Twitter's a great one, right? But I would consider Wikipedia to be a platform where instead of the Britannica encyclopedia, which is this, it's like a monolithic, you get all the, the researchers together, you get all the data together and you combine it in this, in this one monolithic source. Instead. You have this distributed thing. You can say anyone can host their content on Wikipedia. Anyone can contribute to it. And anyone can maybe their contribution is they delete stuff. When I was hearing like the kind of the Sam Altman and kind of the, the Muskian perspective of AI, it was a very kind of monolithic thing. It was all about AI is basically a single thing, which is intelligence. Yeah. Yeah. The more intelligent, the more compute, the more intelligent, and the more and better AI researchers, the more intelligent, right? They would speak about it as a kind of erased, like who can get the most data, the most compute and the most researchers. And that would end up with the most intelligent AI. But I didn't believe in any of that. I thought that's like the total, like I thought that perspective is the perspective of someone who's never actually done machine learning. Because with machine learning, first of all, you see that the performance of the models follows an S curve. So it's not like it just goes off to infinity, right? And the, the S curve, it kind of plateaus around human level performance. And you can look at all the, all the machine learning that was going on in the 2010s, everything kind of plateaued around the human level performance. And we can think about the self-driving car promises, you know, how Elon Musk kept saying the self-driving car is going to happen next year, it's going to happen next, next year. Or you can look at the image recognition, the speech recognition. You can look at. All of these things, there was almost nothing that went superhuman, except for something like AlphaGo. And we can speak about why AlphaGo was able to go like super superhuman. So I thought the most likely thing was going to be this, I thought it's not going to be a monolithic thing. That's like an encyclopedia Britannica. I thought it must be a distributed thing. And I actually liked to look at the world of finance for what I think a mature machine learning ecosystem would look like. So, yeah. So finance is a machine learning ecosystem because all of these quant trading firms are running machine learning algorithms, but they're running it on a centralized platform like a marketplace. And it's not the case that there's one giant quant trading company of all the data and all the quant researchers and all the algorithms and compute, but instead they all specialize. So one will specialize on high frequency training. Another will specialize on mid frequency. Another one will specialize on equity. Another one will specialize. And I thought that's the way the world works. That's how it is. And so there must exist a platform where a small team can produce an AI for a unique purpose. 
And they can iterate and build the best thing for that, right? And so that was the vision for Chai. So we wanted to build a platform for LLMs.Alessio [00:11:36]: That's kind of the maybe inside versus contrarian view that led you to start the company. Yeah. And then what was maybe the initial idea maze? Because if somebody told you that was the Hugging Face founding story, people might believe it. It's kind of like a similar ethos behind it. How did you land on the product feature today? And maybe what were some of the ideas that you discarded that initially you thought about?William [00:11:58]: So the first thing we built, it was fundamentally an API. So nowadays people would describe it as like agents, right? But anyone could write a Python script. They could submit it to an API. They could send it to the Chai backend and we would then host this code and execute it. So that's like the developer side of the platform. On their Python script, the interface was essentially text in and text out. An example would be the very first bot that I created. I think it was a Reddit news bot. And so it would first, it would pull the popular news. Then it would prompt whatever, like I just use some external API for like Burr or GPT-2 or whatever. Like it was a very, very small thing. And then the user could talk to it. So you could say to the bot, hi bot, what's the news today? And it would say, this is the top stories. And you could chat with it. Now four years later, that's like perplexity or something. That's like the, right? But back then the models were first of all, like really, really dumb. You know, they had an IQ of like a four year old. And users, there really wasn't any demand or any PMF for interacting with the news. So then I was like, okay. Um. So let's make another one. And I made a bot, which was like, you could talk to it about a recipe. So you could say, I'm making eggs. Like I've got eggs in my fridge. What should I cook? And it'll say, you should make an omelet. Right. There was no PMF for that. No one used it. And so I just kept creating bots. And so every single night after work, I'd be like, okay, I like, we have AI, we have this platform. I can create any text in textile sort of agent and put it on the platform. And so we just create stuff night after night. And then all the coders I knew, I would say, yeah, this is what we're going to do. And then I would say to them, look, there's this platform. You can create any like chat AI. You should put it on. And you know, everyone's like, well, chatbots are super lame. We want absolutely nothing to do with your chatbot app. No one who knew Python wanted to build on it. I'm like trying to build all these bots and no consumers want to talk to any of them. And then my sister who at the time was like just finishing college or something, I said to her, I was like, if you want to learn Python, you should just submit a bot for my platform. And she, she built a therapy for me. And I was like, okay, cool. I'm going to build a therapist bot. And then the next day I checked the performance of the app and I'm like, oh my God, we've got 20 active users. And they spent, they spent like an average of 20 minutes on the app. I was like, oh my God, what, what bot were they speaking to for an average of 20 minutes? And I looked and it was the therapist bot. And I went, oh, this is where the PMF is. There was no demand for, for recipe help. There was no demand for news. 
There was no demand for dad jokes or pub quiz or fun facts or what they wanted was they wanted the therapist bot. the time I kind of reflected on that and I thought, well, if I want to consume news, the most fun thing, most fun way to consume news is like Twitter. It's not like the value of there being a back and forth, wasn't that high. Right. And I thought if I need help with a recipe, I actually just go like the New York times has a good recipe section, right? It's not actually that hard. And so I just thought the thing that AI is 10 X better at is a sort of a conversation right. That's not intrinsically informative, but it's more about an opportunity. You can say whatever you want. You're not going to get judged. If it's 3am, you don't have to wait for your friend to text back. It's like, it's immediate. They're going to reply immediately. You can say whatever you want. It's judgment-free and it's much more like a playground. It's much more like a fun experience. And you could see that if the AI gave a person a compliment, they would love it. It's much easier to get the AI to give you a compliment than a human. From that day on, I said, okay, I get it. Humans want to speak to like humans or human like entities and they want to have fun. And that was when I started to look less at platforms like Google. And I started to look more at platforms like Instagram. And I was trying to think about why do people use Instagram? And I could see that I think Chai was, was filling the same desire or the same drive. If you go on Instagram, typically you want to look at the faces of other humans, or you want to hear about other people's lives. So if it's like the rock is making himself pancakes on a cheese plate. You kind of feel a little bit like you're the rock's friend, or you're like having pancakes with him or something, right? But if you do it too much, you feel like you're sad and like a lonely person, but with AI, you can talk to it and tell it stories and tell you stories, and you can play with it for as long as you want. And you don't feel like you're like a sad, lonely person. You feel like you actually have a friend.Alessio [00:16:29]: And what, why is that? Do you have any insight on that from using it?William [00:16:33]: I think it's just the human psychology. I think it's just the idea that, with old school social media. You're just consuming passively, right? So you'll just swipe. If I'm watching TikTok, just like swipe and swipe and swipe. And even though I'm getting the dopamine of like watching an engaging video, there's this other thing that's building my head, which is like, I'm feeling lazier and lazier and lazier. And after a certain period of time, I'm like, man, I just wasted 40 minutes. I achieved nothing. But with AI, because you're interacting, you feel like you're, it's not like work, but you feel like you're participating and contributing to the thing. You don't feel like you're just. Consuming. So you don't have a sense of remorse basically. And you know, I think on the whole people, the way people talk about, try and interact with the AI, they speak about it in an incredibly positive sense. Like we get people who say they have eating disorders saying that the AI helps them with their eating disorders. People who say they're depressed, it helps them through like the rough patches. So I think there's something intrinsically healthy about interacting that TikTok and Instagram and YouTube doesn't quite tick. 
From that point on, it was about building more and more kind of like human centric AI for people to interact with. And I was like, okay, let's make a Kanye West bot, right? And then no one wanted to talk to the Kanye West bot. And I was like, ah, who's like a cool persona for teenagers to want to interact with. And I was like, I was trying to find the influencers and stuff like that, but no one cared. Like they didn't want to interact with the, yeah. And instead it was really just the special moment was when we said the realization that developers and software engineers aren't interested in building this sort of AI, but the consumers are right. And rather than me trying to guess every day, like what's the right bot to submit to the platform, why don't we just create the tools for the users to build it themselves? And so nowadays this is like the most obvious thing in the world, but when Chai first did it, it was not an obvious thing at all. Right. Right. So we took the API for let's just say it was, I think it was GPTJ, which was this 6 billion parameter open source transformer style LLM. We took GPTJ. We let users create the prompt. We let users select the image and we let users choose the name. And then that was the bot. And through that, they could shape the experience, right? So if they said this bot's going to be really mean, and it's going to be called like bully in the playground, right? That was like a whole category that I never would have guessed. Right. People love to fight. They love to have a disagreement, right? And then they would create, there'd be all these romantic archetypes that I didn't know existed. And so as the users could create the content that they wanted, that was when Chai was able to, to get this huge variety of content and rather than appealing to, you know, 1% of the population that I'd figured out what they wanted, you could appeal to a much, much broader thing. And so from that moment on, it was very, very crystal clear. It's like Chai, just as Instagram is this social media platform that lets people create images and upload images, videos and upload that, Chai was really about how can we let the users create this experience in AI and then share it and interact and search. So it's really, you know, I say it's like a platform for social AI.Alessio [00:20:00]: Where did the Chai name come from? Because you started the same path. I was like, is it character AI shortened? You started at the same time, so I was curious. The UK origin was like the second, the Chai.William [00:20:15]: We started way before character AI. And there's an interesting story that Chai's numbers were very, very strong, right? So I think in even 20, I think late 2022, was it late 2022 or maybe early 2023? Chai was like the number one AI app in the app store. So we would have something like 100,000 daily active users. And then one day we kind of saw there was this website. And we were like, oh, this website looks just like Chai. And it was the character AI website. And I think that nowadays it's, I think it's much more common knowledge that when they left Google with the funding, I think they knew what was the most trending, the number one app. And I think they sort of built that. Oh, you found the people.swyx [00:21:03]: You found the PMF for them.William [00:21:04]: We found the PMF for them. Exactly. Yeah. So I worked a year very, very hard. 
And then they, and then that was when I learned a lesson, which is that if you're VC backed and if, you know, so Chai, we'd kind of ran, we'd got to this point, I was the only person who'd invested. I'd invested maybe 2 million pounds in the business. And you know, from that, we were able to build this thing, get to say a hundred thousand daily active users. And then when character AI came along, the first version, we sort of laughed. We were like, oh man, this thing sucks. Like they don't know what they're building. They're building the wrong thing anyway, but then I saw, oh, they've raised a hundred million dollars. Oh, they've raised another hundred million dollars. And then our users started saying, oh guys, your AI sucks. Cause we were serving a 6 billion parameter model, right? How big was the model that character AI could afford to serve, right? So we would be spending, let's say we would spend a dollar per per user, right? Over the, the, you know, the entire lifetime.swyx [00:22:01]: A dollar per session, per chat, per month? No, no, no, no.William [00:22:04]: Let's say we'd get over the course of the year, we'd have a million users and we'd spend a million dollars on the AI throughout the year. Right. Like aggregated. Exactly. Exactly. Right. They could spend a hundred times that. So people would say, why is your AI much dumber than character AIs? And then I was like, oh, okay, I get it. This is like the Silicon Valley style, um, hyper scale business. And so, yeah, we moved to Silicon Valley and, uh, got some funding and iterated and built the flywheels. And, um, yeah, I, I'm very proud that we were able to compete with that. Right. So, and I think the reason we were able to do it was just customer obsession. And it's similar, I guess, to how deep seek have been able to produce such a compelling model when compared to someone like an open AI, right? So deep seek, you know, their latest, um, V2, yeah, they claim to have spent 5 million training it.swyx [00:22:57]: It may be a bit more, but, um, like, why are you making it? Why are you making such a big deal out of this? Yeah. There's an agenda there. Yeah. You brought up deep seek. So we have to ask you had a call with them.William [00:23:07]: We did. We did. We did. Um, let me think what to say about that. I think for one, they have an amazing story, right? So their background is again in finance.swyx [00:23:16]: They're the Chinese version of you. Exactly.William [00:23:18]: Well, there's a lot of similarities. Yes. Yes. I have a great affinity for companies which are like, um, founder led, customer obsessed and just try and build something great. And I think what deep seek have achieved. There's quite special is they've got this amazing inference engine. They've been able to reduce the size of the KV cash significantly. And then by being able to do that, they're able to significantly reduce their inference costs. And I think with kind of with AI, people get really focused on like the kind of the foundation model or like the model itself. And they sort of don't pay much attention to the inference. To give you an example with Chai, let's say a typical user session is 90 minutes, which is like, you know, is very, very long for comparison. Let's say the average session length on TikTok is 70 minutes. So people are spending a lot of time. And in that time they're able to send say 150 messages. That's a lot of completions, right? 
It's quite different from an open AI scenario where people might come in, they'll have a particular question in mind. And they'll ask like one question. And a few follow up questions, right? So because they're consuming, say 30 times as many requests for a chat, or a conversational experience, you've got to figure out how to how to get the right balance between the cost of that and the quality. And so, you know, I think with AI, it's always been the case that if you want a better experience, you can throw compute at the problem, right? So if you want a better model, you can just make it bigger. If you want it to remember better, give it a longer context. And now, what open AI is doing to great fanfare is with projection sampling, you can generate many candidates, right? And then with some sort of reward model or some sort of scoring system, you can serve the most promising of these many candidates. And so that's kind of scaling up on the inference time compute side of things. And so for us, it doesn't make sense to think of AI is just the absolute performance. So. But what we're seeing, it's like the MML you score or the, you know, any of these benchmarks that people like to look at, if you just get that score, it doesn't really tell tell you anything. Because it's really like progress is made by improving the performance per dollar. And so I think that's an area where deep seek have been able to form very, very well, surprisingly so. And so I'm very interested in what Lama four is going to look like. And if they're able to sort of match what deep seek have been able to achieve with this performance per dollar gain.Alessio [00:25:59]: Before we go into the inference, some of the deeper stuff, can you give people an overview of like some of the numbers? So I think last I checked, you have like 1.4 million daily active now. It's like over 22 million of revenue. So it's quite a business.William [00:26:12]: Yeah, I think we grew by a factor of, you know, users grew by a factor of three last year. Revenue over doubled. You know, it's very exciting. We're competing with some really big, really well funded companies. Character AI got this, I think it was almost a $3 billion valuation. And they have 5 million DAU is a number that I last heard. Torquay, which is a Chinese built app owned by a company called Minimax. They're incredibly well funded. And these companies didn't grow by a factor of three last year. Right. And so when you've got this company and this team that's able to keep building something that gets users excited, and they want to tell their friend about it, and then they want to come and they want to stick on the platform. I think that's very special. And so last year was a great year for the team. And yeah, I think the numbers reflect the hard work that we put in. And then fundamentally, the quality of the app, the quality of the content, the quality of the content, the quality of the content, the quality of the content, the quality of the content. AI is the quality of the experience that you have. You actually published your DAU growth chart, which is unusual. And I see some inflections. Like, it's not just a straight line. There's some things that actually inflect. Yes. What were the big ones? Cool. That's a great, great, great question. Let me think of a good answer. I'm basically looking to annotate this chart, which doesn't have annotations on it. Cool. 
The first thing I would say is this is, I think the most important thing to know about success is that success is born out of failures. Right? Through failures that we learn. You know, if you think something's a good idea, and you do and it works, great, but you didn't actually learn anything, because everything went exactly as you imagined. But if you have an idea, you think it's going to be good, you try it, and it fails. There's a gap between the reality and expectation. And that's an opportunity to learn. The flat periods, that's us learning. And then the up periods is that's us reaping the rewards of that. So I think the big, of the growth shot of just 2024, I think the first thing that really kind of put a dent in our growth was our backend. So we just reached this scale. So we'd, from day one, we'd built on top of Google's GCP, which is Google's cloud platform. And they were fantastic. We used them when we had one daily active user, and they worked pretty good all the way up till we had about 500,000. It was never the cheapest, but from an engineering perspective, man, that thing scaled insanely good. Like, not Vertex? Not Vertex. Like GKE, that kind of stuff? We use Firebase. So we use Firebase. I'm pretty sure we're the biggest user ever on Firebase. That's expensive. Yeah, we had calls with engineers, and they're like, we wouldn't recommend using this product beyond this point, and you're 3x over that. So we pushed Google to their absolute limits. You know, it was fantastic for us, because we could focus on the AI. We could focus on just adding as much value as possible. But then what happened was, after 500,000, just the thing, the way we were using it, and it would just, it wouldn't scale any further. And so we had a really, really painful, at least three-month period, as we kind of migrated between different services, figuring out, like, what requests do we want to keep on Firebase, and what ones do we want to move on to something else? And then, you know, making mistakes. And learning things the hard way. And then after about three months, we got that right. So that, we would then be able to scale to the 1.5 million DAE without any further issues from the GCP. But what happens is, if you have an outage, new users who go on your app experience a dysfunctional app, and then they're going to exit. And so your next day, the key metrics that the app stores track are going to be something like retention rates. And so your next day, the key metrics that the app stores track are going to be something like retention rates. Money spent, and the star, like, the rating that they give you. In the app store. In the app store, yeah. Tyranny. So if you're ranked top 50 in entertainment, you're going to acquire a certain rate of users organically. If you go in and have a bad experience, it's going to tank where you're positioned in the algorithm. And then it can take a long time to kind of earn your way back up, at least if you wanted to do it organically. If you throw money at it, you can jump to the top. And I could talk about that. But broadly speaking, if we look at 2024, the first kink in the graph was outages due to hitting 500k DAU. The backend didn't want to scale past that. So then we just had to do the engineering and build through it. Okay, so we built through that, and then we get a little bit of growth. And so, okay, that's feeling a little bit good. I think the next thing, I think it's, I'm not going to lie, I have a feeling that when Character AI got... I was thinking. 
I think so. I think... So the Character AI team fundamentally got acquired by Google. And I don't know what they changed in their business. I don't know if they dialed down that ad spend. Products don't change, right? Products just what it is. I don't think so. Yeah, I think the product is what it is. It's like maintenance mode. Yes. I think the issue that people, you know, some people may think this is an obvious fact, but running a business can be very competitive, right? Because other businesses can see what you're doing, and they can imitate you. And then there's this... There's this question of, if you've got one company that's spending $100,000 a day on advertising, and you've got another company that's spending zero, if you consider market share, and if you're considering new users which are entering the market, the guy that's spending $100,000 a day is going to be getting 90% of those new users. And so I have a suspicion that when the founders of Character AI left, they dialed down their spending on user acquisition. And I think that kind of gave oxygen to like the other apps. And so Chai was able to then start growing again in a really healthy fashion. I think that's kind of like the second thing. I think a third thing is we've really built a great data flywheel. Like the AI team sort of perfected their flywheel, I would say, in end of Q2. And I could speak about that at length. But fundamentally, the way I would describe it is when you're building anything in life, you need to be able to evaluate it. And through evaluation, you can iterate, we can look at benchmarks, and we can say the issues with benchmarks and why they may not generalize as well as one would hope in the challenges of working with them. But something that works incredibly well is getting feedback from humans. And so we built this thing where anyone can submit a model to our developer backend, and it gets put in front of 5000 users, and the users can rate it. And we can then have a really accurate ranking of like which model, or users finding more engaging or more entertaining. And it gets, you know, it's at this point now, where every day we're able to, I mean, we evaluate between 20 and 50 models, LLMs, every single day, right. So even though we've got only got a team of, say, five AI researchers, they're able to iterate a huge quantity of LLMs, right. So our team ships, let's just say minimum 100 LLMs a week is what we're able to iterate through. Now, before that moment in time, we might iterate through three a week, we might, you know, there was a time when even doing like five a month was a challenge, right? By being able to change the feedback loops to the point where it's not, let's launch these three models, let's do an A-B test, let's assign, let's do different cohorts, let's wait 30 days to see what the day 30 retention is, which is the kind of the, if you're doing an app, that's like A-B testing 101 would be, do a 30-day retention test, assign different treatments to different cohorts and come back in 30 days. So that's insanely slow. That's just, it's too slow. And so we were able to get that 30-day feedback loop all the way down to something like three hours. And when we did that, we could really, really, really perfect techniques like DPO, fine tuning, prompt engineering, blending, rejection sampling, training a reward model, right, really successfully, like boom, boom, boom, boom, boom. And so I think in Q3 and Q4, we got, the amount of AI improvements we got was like astounding. 
It was getting to the point, I thought like how much more, how much more edge is there to be had here? But the team just could keep going and going and going. That was like number three for the inflection point.swyx [00:34:53]: There's a fourth?William [00:34:54]: The important thing about the third one is if you go on our Reddit or you talk to users of AI, there's like a clear date. It's like somewhere in October or something. The users, they flipped. Before October, the users... The users would say character AI is better than you, for the most part. Then from October onwards, they would say, wow, you guys are better than character AI. And that was like a really clear positive signal that we'd sort of done it. And I think people, you can't cheat consumers. You can't trick them. You can't b******t them. They know, right? If you're going to spend 90 minutes on a platform, and with apps, there's the barriers to switching is pretty low. Like you can try character AI, you can't cheat consumers. You can't cheat them. You can't cheat them. You can't cheat AI for a day. If you get bored, you can try Chai. If you get bored of Chai, you can go back to character. So the users, the loyalty is not strong, right? What keeps them on the app is the experience. If you deliver a better experience, they're going to stay and they can tell. So that was the fourth one was we were fortunate enough to get this hire. He was hired one really talented engineer. And then they said, oh, at my last company, we had a head of growth. He was really, really good. And he was the head of growth for ByteDance for two years. Would you like to speak to him? And I was like, yes. Yes, I think I would. And so I spoke to him. And he just blew me away with what he knew about user acquisition. You know, it was like a 3D chessswyx [00:36:21]: sort of thing. You know, as much as, as I know about AI. Like ByteDance as in TikTok US. Yes.William [00:36:26]: Not ByteDance as other stuff. Yep. He was interviewing us as we were interviewing him. Right. And so pick up options. Yeah, exactly. And so he was kind of looking at our metrics. And he was like, I saw him get really excited when he said, guys, you've got a million daily active users and you've done no advertising. I said, correct. And he was like, that's unheard of. He's like, I've never heard of anyone doing that. And then he started looking at our metrics. And he was like, if you've got all of this organically, if you start spending money, this is going to be very exciting. I was like, let's give it a go. So then he came in, we've just started ramping up the user acquisition. So that looks like spending, you know, let's say we're spending, we started spending $20,000 a day, it looked very promising than 20,000. Right now we're spending $40,000 a day on user acquisition. That's still only half of what like character AI or talkie may be spending. But from that, it's sort of, we were growing at a rate of maybe say, 2x a year. And that got us growing at a rate of 3x a year. So I'm growing, I'm evolving more and more to like a Silicon Valley style hyper growth, like, you know, you build something decent, and then you canswyx [00:37:33]: slap on a huge... 
You did the important thing, you did the product first.William [00:37:36]: Of course, but then you can slap on like, like the rocket or the jet engine or something, which is just this cash in, you pour in as much cash, you buy a lot of ads, and your growth is faster.swyx [00:37:48]: Not to, you know, I'm just kind of curious what's working right now versus what surprisinglyWilliam [00:37:52]: doesn't work. Oh, there's a long, long list of surprising stuff that doesn't work. Yeah. The surprising thing, like the most surprising thing, what doesn't work is almost everything doesn't work. That's what's surprising. And I'll give you an example. So like a year and a half ago, I was working at a company, we were super excited by audio. I was like, audio is going to be the next killer feature, we have to get in the app. And I want to be the first. So everything Chai does, I want us to be the first. We may not be the company that's strongest at execution, but we can always be theswyx [00:38:22]: most innovative. Interesting. Right? So we can... You're pretty strong at execution.William [00:38:26]: We're much stronger, we're much stronger. A lot of the reason we're here is because we were first. If we launched today, it'd be so hard to get the traction. Because it's like to get the flywheel, to get the users, to build a product people are excited about. If you're first, people are naturally excited about it. But if you're fifth or 10th, man, you've got to beswyx [00:38:46]: insanely good at execution. So you were first with voice? We were first. We were first. I only knowWilliam [00:38:51]: when character launched voice. They launched it, I think they launched it at least nine months after us. Okay. Okay. But the team worked so hard for it. At the time we did it, latency is a huge problem. Cost is a huge problem. Getting the right quality of the voice is a huge problem. Right? Then there's this user interface and getting the right user experience. Because you don't just want it to start blurting out. Right? You want to kind of activate it. But then you don't have to keep pressing a button every single time. There's a lot that goes into getting a really smooth audio experience. So we went ahead, we invested the three months, we built it all. And then when we did the A-B test, there was like, no change in any of the numbers. And I was like, this can't be right, there must be a bug. And we spent like a week just checking everything, checking again, checking again. And it was like, the users just did not care. And it was something like only 10 or 15% of users even click the button to like, they wanted to engage the audio. And they would only use it for 10 or 15% of the time. So if you do the math, if it's just like something that one in seven people use it for one seventh of their time. You've changed like 2% of the experience. So even if that that 2% of the time is like insanely good, it doesn't translate much when you look at the retention, when you look at the engagement, and when you look at the monetization rates. So audio did not have a big impact. I'm pretty big on audio. But yeah, I like it too. But it's, you know, so a lot of the stuff which I do, I'm a big, you can have a theory. And you resist. Yeah. Exactly, exactly. So I think if you want to make audio work, it has to be a unique, compelling, exciting experience that they can't have anywhere else.swyx [00:40:37]: It could be your models, which just weren't good enough.William [00:40:39]: No, no, no, they were great. 
Oh, yeah, they were very good. it was like, it was kind of like just the, you know, if you listen to like an audible or Kindle, or something like, you just hear this voice. And it's like, you don't go like, wow, this is this is special, right? It's like a convenience thing. But the idea is that if you can, if Chai is the only platform, like, let's say you have a Mr. Beast, and YouTube is the only platform you can use to make audio work, then you can watch a Mr. Beast video. And it's the most engaging, fun video that you want to watch, you'll go to a YouTube. And so it's like for audio, you can't just put the audio on there. And people go, oh, yeah, it's like 2% better. Or like, 5% of users think it's 20% better, right? It has to be something that the majority of people, for the majority of the experience, go like, wow, this is a big deal. That's the features you need to be shipping. If it's not going to appeal to the majority of people, for the majority of the experience, and it's not a big deal, it's not going to move you. Cool. So you killed it. I don't see it anymore. Yep. So I love this. The longer, it's kind of cheesy, I guess, but the longer I've been working at Chai, and I think the team agrees with this, all the platitudes, at least I thought they were platitudes, that you would get from like the Steve Jobs, which is like, build something insanely great, right? Or be maniacally focused, or, you know, the most important thing is saying no to, not to work on. All of these sort of lessons, they just are like painfully true. They're painfully true. So now I'm just like, everything I say, I'm either quoting Steve Jobs or Zuckerberg. I'm like, guys, move fast and break free.swyx [00:42:10]: You've jumped the Apollo to cool it now.William [00:42:12]: Yeah, it's just so, everything they said is so, so true. The turtle neck. Yeah, yeah, yeah. Everything is so true.swyx [00:42:18]: This last question on my side, and I want to pass this to Alessio, is on just, just multi-modality in general. This actually comes from Justine Moore from A16Z, who's a friend of ours. And a lot of people are trying to do voice image video for AI companions. Yes. You just said voice didn't work. Yep. What would make you revisit?William [00:42:36]: So Steve Jobs, he was very, listen, he was very, very clear on this. There's a habit of engineers who, once they've got some cool technology, they want to find a way to package up the cool technology and sell it to consumers, right? That does not work. So you're free to try and build a startup where you've got your cool tech and you want to find someone to sell it to. That's not what we do at Chai. At Chai, we start with the consumer. What does the consumer want? What is their problem? And how do we solve it? So right now, the number one problems for the users, it's not the audio. That's not the number one problem. It's not the image generation either. That's not their problem either. The number one problem for users in AI is this. All the AI is being generated by middle-aged men in Silicon Valley, right? That's all the content. You're interacting with this AI. You're speaking to it for 90 minutes on average. It's being trained by middle-aged men. The guys out there, they're out there. They're talking to you. They're talking to you. They're like, oh, what should the AI say in this situation, right? What's funny, right? What's cool? What's boring? What's entertaining? That's not the way it should be. The way it should be is that the users should be creating the AI, right? 
And so the way I speak about it is this. Chai, we have this AI engine in which sits atop a thin layer of UGC. So the thin layer of UGC is absolutely essential, right? It's just prompts. But it's just prompts. It's just an image. It's just a name. It's like we've done 1% of what we could do. So we need to keep thickening up that layer of UGC. It must be the case that the users can train the AI. And if reinforcement learning is powerful and important, they have to be able to do that. And so it's got to be the case that there exists, you know, I say to the team, just as Mr. Beast is able to spend 100 million a year or whatever it is on his production company, and he's got a team building the content, the Mr. Beast company is able to spend 100 million a year on his production company. And he's got a team building the content, which then he shares on the YouTube platform. Until there's a team that's earning 100 million a year or spending 100 million on the content that they're producing for the Chai platform, we're not finished, right? So that's the problem. That's what we're excited to build. And getting too caught up in the tech, I think is a fool's errand. It does not work.Alessio [00:44:52]: As an aside, I saw the Beast Games thing on Amazon Prime. It's not doing well. And I'mswyx [00:44:56]: curious. It's kind of like, I mean, the audience reading is high. The run-to-meet-all sucks, but the audience reading is high.Alessio [00:45:02]: But it's not like in the top 10. I saw it dropped off of like the... Oh, okay. Yeah, that one I don't know. I'm curious, like, you know, it's kind of like similar content, but different platform. And then going back to like, some of what you were saying is like, you know, people come to ChaiWilliam [00:45:13]: expecting some type of content. Yeah, I think it's something that's interesting to discuss is like, is moats. And what is the moat? And so, you know, if you look at a platform like YouTube, the moat, I think is in first is really is in the ecosystem. And the ecosystem, is comprised of you have the content creators, you have the users, the consumers, and then you have the algorithms. And so this, this creates a sort of a flywheel where the algorithms are able to be trained on the users, and the users data, the recommend systems can then feed information to the content creators. So Mr. Beast, he knows which thumbnail does the best. He knows the first 10 seconds of the video has to be this particular way. And so his content is super optimized for the YouTube platform. So that's why it doesn't do well on Amazon. If he wants to do well on Amazon, how many videos has he created on the YouTube platform? By thousands, 10s of 1000s, I guess, he needs to get those iterations in on the Amazon. So at Chai, I think it's all about how can we get the most compelling, rich user generated content, stick that on top of the AI engine, the recommender systems, in such that we get this beautiful data flywheel, more users, better recommendations, more creative, more content, more users.Alessio [00:46:34]: You mentioned the algorithm, you have this idea of the Chaiverse on Chai, and you have your own kind of like LMSYS-like ELO system. Yeah, what are things that your models optimize for, like your users optimize for, and maybe talk about how you build it, how people submit models?William [00:46:49]: So Chaiverse is what I would describe as a developer platform. More often when we're speaking about Chai, we're thinking about the Chai app. 
And the Chai app is really this product for consumers. And so consumers can come on the Chai app, they can come on the Chai app, they can come on the Chai app, they can interact with our AI, and they can interact with other UGC. And it's really just these kind of bots. And it's a thin layer of UGC. Okay. Our mission is not to just have a very thin layer of UGC. Our mission is to have as much UGC as possible. So we must have, I don't want people at Chai training the AI. I want people, not middle aged men, building AI. I want everyone building the AI, as many people building the AI as possible. Okay, so what we built was we built Chaiverse. And Chaiverse is kind of, it's kind of like a prototype, is the way to think about it. And it started with this, this observation that, well, how many models get submitted into Hugging Face a day? It's hundreds, it's hundreds, right? So there's hundreds of LLMs submitted each day. Now consider that, what does it take to build an LLM? It takes a lot of work, actually. It's like someone devoted several hours of compute, several hours of their time, prepared a data set, launched it, ran it, evaluated it, submitted it, right? So there's a lot of, there's a lot of, there's a lot of work that's going into that. So what we did was we said, well, why can't we host their models for them and serve them to users? And then what would that look like? The first issue is, well, how do you know if a model is good or not? Like, we don't want to serve users the crappy models, right? So what we would do is we would, I love the LMSYS style. I think it's really cool. It's really simple. It's a very intuitive thing, which is you simply present the users with two completions. You can say, look, this is from model one. This is from model two. This is from model three. This is from model A. This is from model B, which is better. And so if someone submits a model to Chaiverse, what we do is we spin up a GPU. We download the model. We're going to now host that model on this GPU. And we're going to start routing traffic to it. And we're going to send, we think it takes about 5,000 completions to get an accurate signal. That's roughly what LMSYS does. And from that, we're able to get an accurate ranking. And we're able to get an accurate ranking. And we're able to get an accurate ranking of which models are people finding entertaining and which models are not entertaining. If you look at the bottom 80%, they'll suck. You can just disregard them. They totally suck. Then when you get the top 20%, you know you've got a decent model, but you can break it down into more nuance. There might be one that's really descriptive. There might be one that's got a lot of personality to it. There might be one that's really illogical. Then the question is, well, what do you do with these top models? From that, you can do more sophisticated things. You can try and do like a routing thing where you say for a given user request, we're going to try and predict which of these end models that users enjoy the most. That turns out to be pretty expensive and not a huge source of like edge or improvement. Something that we love to do at Chai is blending, which is, you know, it's the simplest way to think about it is you're going to end up, and you're going to pretty quickly see you've got one model that's really smart, one model that's really funny. How do you get the user an experience that is both smart and funny? 
Well, just 50% of the requests, you can serve them the smart model, 50% of the requests, you serve them the funny model. Just a random 50%? Just a random, yeah. And then... That's blending? That's blending. You can do more sophisticated things on top of that, as in all things in life, but the 80-20 solution, if you just do that, you get a pretty powerful effect out of the gate. Random number generator. I think it's like the robustness of randomness. Random is a very powerful optimization technique, and it's a very robust thing. So you can explore a lot of the space very efficiently. There's one thing that's really, really important to share, and this is the most exciting thing for me, is after you do the ranking, you get an ELO score, and you can track a user's first join date, the first date they submit a model to Chaiverse, they almost always get a terrible ELO, right? So let's say the first submission they get an ELO of 1,100 or 1,000 or something, and you can see that they iterate and they iterate and iterate, and it will be like, no improvement, no improvement, no improvement, and then boom. Do you give them any data, or do you have to come up with this themselves? We do, we do, we do, we do. We try and strike a balance between giving them data that's very useful, you've got to be compliant with GDPR, which is like, you have to work very hard to preserve the privacy of users of your app. So we try to give them as much signal as possible, to be helpful. The minimum is we're just going to give you a score, right? That's the minimum. But that alone is people can optimize a score pretty well, because they're able to come up with theories, submit it, does it work? No. A new theory, does it work? No. And then boom, as soon as they figure something out, they keep it, and then they iterate, and then boom,Alessio [00:51:46]: they figure something out, and they keep it. Last year, you had this post on your blog, cross-sourcing the lead to the 10 trillion parameter, AGI, and you call it a mixture of experts, recommenders. Yep. Any insights?William [00:51:58]: Updated thoughts, 12 months later? I think the odds, the timeline for AGI has certainly been pushed out, right? Now, this is in, I'm a controversial person, I don't know, like, I just think... You don't believe in scaling laws, you think AGI is further away. I think it's an S-curve. I think everything's an S-curve. And I think that the models have proven to just be far worse at reasoning than people sort of thought. And I think whenever I hear people talk about LLMs as reasoning engines, I sort of cringe a bit. I don't think that's what they are. I think of them more as like a simulator. I think of them as like a, right? So they get trained to predict the next most likely token. It's like a physics simulation engine. So you get these like games where you can like construct a bridge, and you drop a car down, and then it predicts what should happen. And that's really what LLMs are doing. It's not so much that they're reasoning, it's more that they're just doing the most likely thing. So fundamentally, the ability for people to add in intelligence, I think is very limited. What most people would consider intelligence, I think the AI is not a crowdsourcing problem, right? Now with Wikipedia, Wikipedia crowdsources knowledge. It doesn't crowdsource intelligence. So it's a subtle distinction. AI is fantastic at knowledge. I think it's weak at intelligence. 
And a lot, it's easy to conflate the two because if you ask it a question and it gives you, you know, if you said, who was the seventh president of the United States, and it gives you the correct answer, I'd say, well, I don't know the answer to that. And you can conflate that with intelligence. But really, that's a question of knowledge. And knowledge is really this thing about saying, how can I store all of this information? And then how can I retrieve something that's relevant? Okay, they're fantastic at that. They're fantastic at storing knowledge and retrieving the relevant knowledge. They're superior to humans in that regard. And so I think we need to come up for a new word. How does one describe AI should contain more knowledge than any individual human? It should be more accessible than any individual human. That's a very powerful thing. That's superswyx [00:54:07]: powerful. But what words do we use to describe that? We had a previous guest on Exa AI that does search. And he tried to coin super knowledge as the opposite of super intelligence.William [00:54:20]: Exactly. I think super knowledge is a more accurate word for it.swyx [00:54:24]: You can store more things than any human can.William [00:54:26]: And you can retrieve it better than any human can as well. And I think it's those two things combined that's special. I think that thing will exist. That thing can be built. And I think you can start with something that's entertaining and fun. And I think, I often think it's like, look, it's going to be a 20 year journey. And we're in like, year four, or it's like the web. And this is like 1998 or something. You know, you've got a long, long way to go before the Amazon.coms are like these huge, multi trillion dollar businesses that every single person uses every day. And so AI today is very simplistic. And it's fundamentally the way we're using it, the flywheels, and this ability for how can everyone contribute to it to really magnify the value that it brings. Right now, like, I think it's a bit sad. It's like, right now you have big labs, I'm going to pick on open AI. And they kind of go to like these human labelers. And they say, we're going to pay you to just label this like subset of questions that we want to get a really high quality data set, then we're going to get like our own computers that are really powerful. And that's kind of like the thing. For me, it's so much like Encyclopedia Britannica. It's like insane. All the people that were interested in blockchain, it's like, well, this is this is what needs to be decentralized, you need to decentralize that thing. Because if you distribute it, people can generate way more data in a distributed fashion, way more, right? You need the incentive. Yeah, of course. Yeah. But I mean, the, the, that's kind of the exciting thing about Wikipedia was it's this understanding, like the incentives, you don't need money to incentivize people. You don't need dog coins. No. Sometimes, sometimes people get the satisfaction fro
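As a rough, illustrative sketch of the blending approach William describes above (a uniform random choice between specialist models on each request), the snippet below shows the idea in a few lines of Python. The model functions and names are hypothetical placeholders, not Chai's actual models or API.

```python
# Minimal sketch of "blending": serve each request from one of several
# specialist models chosen at random. All names here are hypothetical
# placeholders for illustration, not Chai's production stack.
import random

def smart_model(prompt: str) -> str:
    # Placeholder for a model tuned for coherent, logical replies.
    return f"[smart] thoughtful reply to: {prompt}"

def funny_model(prompt: str) -> str:
    # Placeholder for a model tuned for humor and personality.
    return f"[funny] playful reply to: {prompt}"

def blended_reply(prompt: str) -> str:
    # The 80/20 version of blending: a uniform random choice per request.
    # Smarter per-user routing can be layered on later if it earns its cost.
    model = random.choice([smart_model, funny_model])
    return model(prompt)

if __name__ == "__main__":
    for _ in range(3):
        print(blended_reply("Tell me about your day"))
```

The same pattern extends naturally to routing across whichever top-ranked Chaiverse models survive the ELO-style pairwise comparisons described above.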

The Peel

Paul Klein is the Founder and CEO of Browserbase, building infrastructure for AI browsers.Our conversation gets into the future of software and AI agents, why authentication is a huge problem in AI, how the best infrastructure companies become product companies, and the memo he wrote that convinced him to start Browserbase despite not wanting to build another company.A year ago, Paul was a relatively unknown commodity, and definitely did not want to raise venture capital again. He shares the playbook he used to go from zero to raising $27 million in nine months “as a non-famous person” (his words).He shares all his lessons learned in the arena as he's processing them, like what he thinks will unlock better AI agents, why you should like your own tweets, and how Browserbase competes with incumbents.Timestamps:(00:00) Intro(02:39) How LLMs unlock automation online(08:34) The future of software (AI agents)(11:21) Why AI agents need better authentication(12:59) Lessons from Twilio on building an infrastructure company(17:27) Learnings from his first startup(19:56) Bubbles, and how they drive innovation(20:37) Reasons this moment in AI is special(29:58) Why technical founders love post-PMF(31:55) The memo that started Browserbase(34:09) Why a startup should be a means of last resort(36:53) Being a solo founder(42:24) Importance of in-person culture(45:56) The best place to find engineers(48:34) How Paul hired a contractor army to build Browserbase(50:16) Why you can't hire mercenaries(54:28) The power of emojis in marketing(57:39) Browserbase's early growth playbook (3 videos)(01:04:00) Benefits of sharing an office with other startups(01:06:00) Sales lessons from his parents(01:08:07) Why startups are like video games(01:13:43) Successful founders work the hardest and are shameless(01:18:44) Customer support is a startups greatest differentiator(01:22:06) Paul's playbook that raised $27m in nine months as a non-famous person(01:29:03) How investors make decisions(01:33:10) Risks help startups avoid competition(01:36:37) Great infrastructure needs its own frameworks(01:39:05) Long-term thinking in LLMs will enable mass AI agents(01:42:21) Avoiding tech debt with AI moving so fast(01:43:48) Infrastructure companies need to become product companies(01:46:54) The Sine Wave philosophy to startupsReferenced:Browserbase: https://www.browserbase.com/ An Internet Browser for AI: https://memos.hawkhill.ventures/p/an-internet-browser-for-ai Rise of the Product Engineer: https://memos.hawkhill.ventures/p/rise-of-the-product-engineer Death to the Backend: https://memos.hawkhill.ventures/p/death-to-the-backend The three Browserbase marketing videosPre-Seed: https://x.com/pk_iv/status/1775183751800377344 Seed: https://x.com/pk_iv/status/1798731220005883935 Series A: https://x.com/pk_iv/status/1851270308701106383Follow Paul:Twitter: https://x.com/pk_iv LinkedIn: https://www.linkedin.com/in/paulkleiniv/Follow Turner:Twitter: https://x.com/TurnerNovak LinkedIn: https://www.linkedin.com/in/turnernovakSubscribe to my newsletter to get every episode + the transcript in your inbox every week: https://www.thespl.it/ 

Matrix Moments by Matrix Partners India
201: Founder Archetypes - The First Time Founder Edge Part 2

Matrix Moments by Matrix Partners India

Play Episode Listen Later Jan 10, 2025 37:12


Going deeper into their discussion on first-time and experienced founders, Avnish Bajaj and Chandrasekhar Venugopal dissect everything from failure zones founders should watch out for, to increasing one's chances of success and finding purpose. But the question remains: How can first-time founders truly leverage this evolved ecosystem to outpace their seasoned counterparts? Can there be just one winner? Watch the second and final part of the #Z47Podcast on Founder Archetypes and find out! And, if you haven't already, catch episode 1 here:   • Founder Archetypes: The First Time Fo...  ▶️ Some more of our podcasts that came up in the conversation: Burn Rates: https://www.z47.com/podcast/whats-the... PMF: https://www.z47.com/podcast/finding-s... Org 3.0: https://www.z47.com/podcast/finding-s... Single VC Multiple VCs: https://www.z47.com/podcast/single-ve...

The Product Market Fit Show
The 5 Steps to Product-Market Fit w/ Chris Saad, ex-Head of Product at Uber, Host of The Startup Podcast

The Product Market Fit Show

Play Episode Listen Later Jan 9, 2025 51:03 Transcription Available


We took examples from the last 100 episodes and built a clear, 5 step path to finding product market fit:1.Before Startup Mode, There's Research Mode —> Become an expert to find problems worth solving. 2.Only the Insanely Focused Survive —> Focus all your resources to do more with less. 3.You have to be in the market to win the market —> Use niche markets to discover unique insights. 4.Forget Growth. Find Value. —> Optimize for value delivery and growth will follow. 5.Pivot Harder, Faster —> As soon as you realize you're not solving a #1 problem, pivot.Why you should listen:Why creating value by solving problems is the core of startups.Startups must avoid perfectionism and embrace learning.Research mode involves deeply understanding customer needs.Insane focus and hustle are essential for early-stage success.Validation comes from engaging with the market directly.Why growth should be a byproduct of delivering value.Keywordsstartups, product market fit, entrepreneurship, research mode, focus, hustle, validation, customer experience, growth, business strategy, Wattpad, user-generated content, agility, iteration, product market fit, startup growth, pivots, entrepreneurship, value creation, founder storiesTimestamps:(00:00:00) Intro(00:01:47) Who is Chris Saad?(00:02:30) Pablo's Story(00:05:53) The Core to Early Stage is PMF(00:09:07) Step 1: Research Mode(00:16:44) Step 2: Only the Insanely Focused Survive(00:23:46) Step 3: Be in the Market to Win the Market(00:32:57) Step 4: Forget Growth, Find Value(00:37:59) Step 5: Pivot Harder and Faster(00:45:45) RecapSend me a message to let me know what you think!

The Cloudcast
Career Advice in 2025

The Cloudcast

Play Episode Listen Later Jan 8, 2025 40:23


As we start in 2025, we wanted to offer advice on job searching and evaluating roles. The market and the interviewing process have shifted; what do you need to know in 2025 if you seek a new opportunity?SHOW: 887SHOW TRANSCRIPT: The Cloudcast #887 TranscriptSHOW VIDEO: https://youtube.com/@TheCloudcastNETCLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotwNEW TO CLOUD? CHECK OUT OUR OTHER PODCAST: "CLOUDCAST BASICS"SHOW NOTES:Considerations when evaluating a startup vs. a more traditional roleRed FlagsConsidering BurnoutHow has the macro environment changed funding and employmentNetworkingKeeping a routineFEEDBACK?Email: show at the cloudcast dot netBluesky: @cloudcastpod.bsky.socialTwitter/X: @cloudcastpodInstagram: @cloudcastpodTikTok: @cloudcastpod

Partner Path
E45: Breaking Down the Agent Economy with Robert Chandler (Wordware)

Partner Path

Play Episode Listen Later Jan 8, 2025 36:16


This week, we sat down with Robert Chandler, CTO and Co-Founder at Wordware. Wordware is a collaborative platform to build, iterate, and deploy AI agents.We explore his initial interest in self-driving cars, finding PMF (knowing when to pivot), turning weak AGI into a narrow AI expert, and why prompting is the new programming with first principles in mind. Episode Chapters:Entrepreneurship curiosity- 1:54Building HeyDaily - 4:38Pivot to agents - 9:42End to end agent orchestration - 13:55Long tail of use cases - 18:55Future of Wordware - 20:56 Feature prioritization - 24:24Building with high switching costs - 27:48Ending questions - 33:55As always, feel free to contact us at partnerpathpodcast@gmail.com. We would love to hear ideas for content, guests, and overall feedback.This episode is brought to you by Grata, the world's leading deal sourcing platform. Our AI-powered search, investment-grade data, and intuitive workflows give you the edge needed to find and win deals in your industry. Visit grata.com to schedule a demo today.Fresh out of Y Combinator's Summer batch, Overlap is an AI-driven app that uses LLMs to curate the best moments from podcast episodes. Imagine having a smart assistant who reads through every podcast transcript, finds the best parts or parts most relevant to your search, and strings them together to form a new curated stream of content - that is what Overlap does. Podcasts are an exponentially growing source of unique information. Make use of it! Check out Overlap 2.0 on the App Store today.

Swisspreneur Show
EP # 468 - Laurent Decrue & Boris Manhart: Have You Done Your Homework on Product-Market Fit?

Swisspreneur Show

Play Episode Listen Later Jan 8, 2025 46:57


Timestamps: 10:22 - Defining and testing the Ideal Customer Profile 13:11 - Does ICP vary from B2B to B2C? 20:40 - Metrics that prove you've hit PMF 30:29 - When is it time to start scaling? About Laurent Decrue & Boris Manhart: Laurent Decrue is the co-founder of the moving company MOVU and the software company Holycode, and the former CEO at Bexio. Currently he is active as CFO and co-CEO at Holycode. He holds an MBA from the University of Basel and previously worked at DeinDeal. Boris Manhart is a serial entrepreneur and the founder of Growth Unltd, a company that helps scale your business. He was previously involved with CodeCheck and Numbrs, and studied for a couple of years at the Universität Zürich. During their chat with Silvan, they discussed how to achieve the all-coveted product-market fit. Naturally, this journey starts with defining and testing your ICP, i.e. your Ideal Customer Profile: You'll create an ICP in your head based on assumptions, at the beginning. That's normal. You might even be more or less right. But you can't rely on your intuition alone;  First of all, niche down. Go for a smaller market than the one you had in mind; Then conduct interviews to find out if it's the right market. This is how you can actually test all your customer development assumptions. Once you've more or less figured out your ICP based on real customer data, you can start thinking about metrics. Which ones can tell you if you're really selling your product/service to the right market? Boris and Laurent don't like the common overreliance on figures like Customer Lifetime Value (CLV) or the 1M ARR benchmark. They both prefer to use the Sean Ellis test, which consists of asking people how disappointed they'd be if they could never use your product again. But don't get too hung up on the “40% were very disappointed” metric one sometimes sees floating around: each company is its own unique case. It's often considered that once PMF is reached (or, at least, once you reach it for the first of many times), that's when you should start scaling your company. But you might want to think carefully about how to go about it: during his chat with us, Laurent shared his story of how Holycode made the mistake of hiring 2 new sales people in Switzerland and Germany to move from founder-led sales to employee-led sales, when in fact they should have hired a growth marketing team to help generate more leads — because without generating more leads, what use is having the sales people to convert them? Check out the conversation for more valuable learnings on how to (continuously) reach PMF.  
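For readers who want to try the Sean Ellis test Boris and Laurent mention above, a minimal tally is sketched below. The survey responses are made-up sample data, and the roughly 40% benchmark is only the commonly cited rule of thumb, not a hard threshold.

```python
# Minimal sketch of scoring a Sean Ellis survey: what share of respondents
# would be "very disappointed" if they could no longer use the product?
# The responses below are illustrative sample data only.
from collections import Counter

responses = [
    "very disappointed", "somewhat disappointed", "very disappointed",
    "not disappointed", "very disappointed", "somewhat disappointed",
]

def sean_ellis_score(answers: list[str]) -> float:
    counts = Counter(a.strip().lower() for a in answers)
    return counts["very disappointed"] / len(answers)

score = sean_ellis_score(responses)
print(f"{score:.0%} very disappointed")  # 50% in this toy sample
# The oft-quoted benchmark is ~40%, but treat it as a signal, not a verdict.
```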
The cover portrait was edited by www.smartportrait.io. Don't forget to give us a follow on Twitter, Instagram, Facebook and Linkedin, so you can always stay up to date with our latest initiatives. That way, there's no excuse for missing out on live shows, weekly giveaways or founders' dinners.

Coaster Cuzzies
Cuzzies Corner - PMF Coasters (Tyler)

Coaster Cuzzies

Play Episode Listen Later Jan 3, 2025 68:04


On this week's episode Tyler (PMF Coasters) joins the show as the second Canadian guest. We chat all things Alpen Fury, Wonderland through the years, the origin of PMF, riding Mindbender the week the pandemic shut the world down, and Toronto sports. Join the conversation on Discord: ⁠⁠⁠https://discord.gg/abTDb3eVav⁠⁠⁠ Support the show on Patreon: ⁠⁠⁠https://www.patreon.com/c/user?u=38631549⁠⁠⁠ Links to all things Cuzzies: ⁠⁠⁠https://www.solo.to/coastercuzzies%E2%81%A0⁠⁠⁠ Till next Friday!

Podcast Notes Playlist: Latest Episodes
PN Deep Dive: Happy New Year! The Magic of Stem Cells, The Emperors of Rome, Huberman on Happiness, Micky Malka, David Senra and Danny Hillis

Podcast Notes Playlist: Latest Episodes

Play Episode Listen Later Dec 30, 2024 29:34


Get more notes at https://podcastnotes.org NEW Premium Notes​​5 Ways Stem Cells Will TRANSFORM What It Means To Be Healthy | The Gabby Reece Show​​ ​​Christian Drapeau (@stemcellchristian) is a pioneer in stem cell research. And the first to propose stem cells as the body's innate repair system. Christian Drapeau discusses the transformative potential of stem cells in health and healing. They explore the basics of stem cells, their role in the body, the evolution of stem cell research, and the impact of aging on stem cell efficacy.​The Roman Caesars' Guide to Ruling | Barry Strauss on The Art of Manliness with Brett McKay (#1047)​​ ​​How often do you think about the Roman Empire? Lately, this question has gone viral, leaving many women wondering why their husband or boyfriend is reflecting on Roman history so often. The truth lies in the Empire's enduring allure: its epic tales of power, war, ambition, innovation, and dramatic leadership. In these Premium Podcast Notes, we turn to historian Barry Strauss who gives a detailed overview of his book, Ten Caesars: Roman Emperors from Augustus to Constantine, unpacking the themes that make Rome's legacy a timeless source of fascination.​Podcast Notes Book Collection: 2024 Edition (UPDATED AND COMPLETE with 200+ Books)​After reviewing all 400+ Podcast Notes we wrote in 2024, we selected a comprehensive reading list of over 200 books. Whether you're a new parent, hopeful entrepreneur, tech geek, history buff, or just a general bookworm, there's truly something for everyone!Top Takeaways Of The Week​5 Ways Stem Cells Will TRANSFORM What It Means To Be Healthy | The Gabby Reece Show ​​​Stem Cells With Age: ​By age 30, you've lost 90% of red marrow stem cells; as stem cells disappear from the body, the loss becomes increasingly apparent over time​Stem cell sources:​ Bone marrow, fat tissue, blood, umbilical cord​Yamanaka factor:​ Take old cells and turn them into cells with characteristics of young cells​Why We Lose Them:​ “Over the past 150 years, we've gained an extra 50 years of lifespan. I don't think we have the biology to have an extra 50 years naturally with full blossoming health. We need to kind of hack and tap and leverage things in our body.” – Christian Drapeau​Stem cells from blood​ (like PRP) are very potent; they're dormant so need to be ‘awakened'. 
This is likely the future of stem cells and relatively new to use; the task now is to learn how to isolate and grow them for use. Stem cells from the umbilical cord: The most potent – but not as mainstream because of three notable concerns:* (1) Risk of contaminated stem cells is high which can lead to sepsis;* (2) They're difficult to isolate cleanly;* (3) You can obtain genetic material from a donor who may have been a carrier for a disease that you may develop in the future. Tips to Support Stem Cells:* Red lights might help with bodily repair, possibly by aiding blood circulation* Pulsed magnetic frequency (PMF) has been well-documented to boost stem cell frequency* Exercise releases stem cells – but this might be because if you trigger tissue damage, your body will trigger a response* Quality sleep supports the proliferation of brain stem cells* Meditation has a healing component – 20 minutes of meditation has been shown to encourage stem cell release* Plants for health: Shilajit, black seed (periodically), natto, kava (hard to source). The Roman Caesars' Guide to Ruling | Barry Strauss on The Art of Manliness with Brett McKay (#1047). The Rise of Octavian/Augustus Caesar: Octavian, born Gaius Octavius, was Julius Caesar's grand-nephew and was adopted posthumously as part of Caesar's will — which was technically illegal, but law was pretty loose during civil war. How to Win the Game of Thrones: Augustus simply killed off a lot of his enemies, between several civil wars and the execution of about 100 Roman senators. Augustus' Expansion: Octavian added Egypt, northwestern Spain, and Switzerland to the Roman Empire. The Infamy of Nero: Nero was less interested in the empire itself and more interested in the celebrity of being Emperor* He was egotistical and also very vain about his singing and chariot racing ability* He competed in the Panhellenic Games and "won" every event. The Truth About Nero's Fiddling While Rome Burned: There's a rumor that Nero fiddled while Rome burned, but the fiddle technically hadn't been invented yet. However, he may have played a lyre, which is a harp-like instrument.* There are even rumors that claim Nero started the fire, with the intent to engage in a massive urban renewal project. Marcus Aurelius, the Ivory Tower Emperor: Marcus Aurelius was a great philosopher but not a great emperor* Marcus Aurelius had no military experience or experience outside Italy; he was completely unprepared to become emperor — a common theme in the Roman Empire system* He let his son Commodus follow him as emperor. Commodus, The Villain: The first man born to be emperor* Commodus was arrogant, entitled, and irresponsible* Competed as a gladiator* He offered bread and circuses to the people but killed many senators* Eventually, the senate carries out a plot to kill Commodus. Dr.
Laurie Santos: How To Achieve True Happiness Using Science-Based Protocols | Huberman Lab. Happiness Operates on 3 Timescales: (1) The immediate timescale of happiness; (2) The intermediate timescale of happiness where we introduce a story; (3) Meaning which is the full picture. Tips to Approach Happiness on All 3 Levels:* Engage in activities that fit with values and strengths;* Infuse strengths into work & leisure;* Do for others* Ask others for help. Extrinsic rewards: A tangible, visible reward for achieving something. Intrinsic rewards: Reward comes from within, feeling progress, purpose, etc. On the lower end of the income spectrum, money affects happiness: If you struggle to put food on the table, keep a roof over your head – more money almost has a linear relationship with happiness. Stop being a Sad Loner: One of the biggest levers you can pull to improve happiness is to get more social connection. "The two things that predict whether you're happy or not so happy are how much time you spend with friends and family members and how much you are physical around other people. The more of that you do, the happier you're going to be." – Laurie Santos. Dopamine Must Be Earned, Not Given: "Be wary of any dopamine hit not preceded by effort to achieve it." – Andrew Huberman. Prediction error: Introverts predict social connection will be negative, when in actuality they report a great experience; extroverts predict social connection will be ok, and anticipation is high. Negativity bias: We're built to notice the risky, potentially scary stuff; our brain goes there automatically which makes sense evolutionarily. Toxic Positivity: The notion that something is wrong if you're feeling anything but happy or ‘good vibes'; negative emotions are a cue from our body to take action in some direction to fix. Dog vs. Cat People: If you crave unconditional love, you are probably more of a dog person; cats are more independent. Hedonic adaptation: We get used to things; you can't be happy all the time because you become sensitized* "Every good thing in life becomes boring after some time." – Laurie Santos* Tip: Imagine the negative things and obstacles; use worse things as a comparison to realize how good things are – don't ruminate on it but use your imagination to appreciate what you have* Tip #2: Space out positive experiences to come back to them over time. Bannister effect: Believe something is positive; be optimistic enough to think something is doable then ask yourself the question ‘what will get in the way?' so you problem-solve. Arrival fallacy: The idea, "I'll be happy when…" – the thing we arrive at doesn't feel as good once we get there and we chase the next thing. Micky Malka – Building Ribbit | Invest Like the Best with Patrick O'Shaughnessy Ep. 400. Life and business principles from Micky Malka:* 1. Never forget where you came from* 2. Fewer decisions is best* 3.
Be genuine to yourself and those around youLessons learned from the Robinhood saga* Being the CEO is a very lonely existence* Develop and craft a flexible mindset so that you can be comfortable taking risk* When you truly understand the variables at play, you will be more comfortable with taking ‘risk'* Mistakes made in a single day can damage a company's brand for years to comeTechnologies have their own grid complexes* The electrical grid consists of towers, power meters, transformers, and more* The internet is a grid of data and knowledge* The monetary system has its own grid as well, consisting of the different financial layers and rails which include Visa, Mastercard, SWIFT, etcStablecoins: US dollar stablecoins are travel checks in today's world; they are instant dollars that the individual can control and move, anytime, and for zero* Stablecoins provided people with access to US dollars who previously did not have access to dollars and who were forced to use their local (and weaker) fiat currency* Today, US-dollar stablecoins do a similar monthly volume to that of Visa and MastercardWhat Got You Here, Won't Get You There: Burn the bridge that got you here; whatever got you here will not get you to the next phaseGet the full notes at Podcastnotes.org Thank you for subscribing. Leave a comment or share this episode.

The Change Life Destiny Show
#47 - Unlock Your Body's Full Potential: Dr. Sean Drake's Holistic Healing Secrets

The Change Life Destiny Show

Play Episode Listen Later Dec 18, 2024 34:24


Welcome to the Change Life Destiny podcast series! In this episode, host Stephanie engages in a compelling conversation with Dr. Sean Drake, a sports chiropractor and holistic health advocate. Dr. Drake shares his personal journey from a devastating car accident to discovering chiropractic and naturopathic medicine, and how he now uses a variety of modalities, such as PMF, infrared sauna, and cold exposure, for high-performance and healing. Tune in to learn more about his innovative approaches, touching success stories, and aspirations for the future of healthcare, all aimed at helping individuals unlock their fullest potential.00:00 Welcome to the Change Life Destiny Podcast00:39 Introducing Sean Drake: A Journey of Holistic Healing01:25 Sean's Personal Health Journey01:49 Discovering Chiropractic and Naturopathic Medicine02:39 From Politics to Chiropractic: A Life-Changing Accident04:32 The Path to Becoming a Sports Chiropractor08:01 Exploring Advanced Modalities in Sports Performance10:31 The Importance of Nervous System and Energy14:27 Cold Exposure: Techniques and Benefits17:20 Transforming Stress Responses18:07 The Power of Cold Therapy19:02 Using Modalities for High Performance20:04 Inspiring Stories of Healing25:57 Future Aspirations for Healthcare32:14 Living the Practice33:18 Conclusion and Final ThoughtsGet in touch with Dr. Sean Drake:WebsiteInstagramLinkedInChange Life & Destiny is a movement to excite, engage, and educate communities about the importance of taking control of our health and wellness. We highlight the latest and greatest technologies that can restore health, prevent disease, and promote wellness, as well as practitioners who are using cutting-edge technology to help patients take control of their health.Learn more about us here:Website: https://www.changelifedestiny.com/Instagram: https://www.instagram.com/changinglifedestiny/LinkedIn: https://www.linkedin.com/company/changelifedestiny/YouTube: https://www.youtube.com/@changelifedestinyFacebook: https://www.facebook.com/changelifedestinyWant to learn more? Visit our website or follow us on Instagram, Facebook Youtube, and LinkedIn.

Decoding Story-Based Marketing Podcast
Decoding Product Market Fit - What Does it Actually Mean and Why It's Important to all Go-To-Market Leaders

Decoding Story-Based Marketing Podcast

Play Episode Listen Later Dec 17, 2024 23:39


In today's episode, we explore the essential concept of Product Market Fit (PMF) and its vital role in driving business growth. We will clarify what PMF is, what it isn't, and discuss some signs that indicate whether or not you have it in your business. We analyze how PMF impacts marketing strategies during various stages of a business, from testing hypotheses in the early phases to expanding into new markets during periods of success. This episode is perfect for go-to-market leaders and anyone eager to grasp the intricacies of product-market alignment! decodedstrategies.com  https://www.setpiece.co/ 

B2B SaaS Marketing Snacks
74- How to get traction with an MVP product

B2B SaaS Marketing Snacks

Play Episode Listen Later Dec 14, 2024 18:41


Why do some minimum viable SaaS products catch fire immediately, while others struggle to gain traction, even with solid execution? This is a challenge that keeps many early stage founders up at night.Building a great product is just the beginning. You've probably heard people say… “just turn on marketing and revenue will flow.” For some, it can be that simple, but often it isn't. In Episode 74 of the B2B SaaS Marketing Snacks Podcast, our hosts Brian and Stijn dive deep into the hows and whys of finding your true customers and nailing your ideal customer profile. This is absolutely crucial before you can hit the gas on growth. Topics discussed include:Why founders can't cut corners on going from MVP straight into growth without the right customer understandingThe big difference between early adopters and true customers that will pay and help you growThe most important questions you need to answer in the market to gain traction: who's it for and what's it forHow to confirm when you've stepped from MVP to PMFHow to conduct Jobs to be Done (JTBD) research to get the best answersB2B SaaS Marketing Snacks is one of the most respected voices in the SaaS industry. It is hosted by two leading marketing and revenue growth experts for software:Stijn Hendrikse: Author of T2D3 CMO Masterclass & Book, Founder of KalungiBrian Graf: CEO of KalungiB2B SaaS companies move through predictable stages of marketing focus, cost and size (as described in the popular T2D3 book). With people cost being a majority of the cost involved, every hire needs to be well worth the investment!The best founders, CFOs and COOs in B2B SaaS rely on a balance of marketing leadership, strategy and execution to produce the customer and revenue growth they require. Staying flexible and nimble is a key marketing asset in a hard-charging B2B world.Resources shared in this episode:How to use feedback to better your SaaS product and GTM strategy10 milestones to reach product-market fit (PMF) for B2B SaaSBSMS 31 - T2D3: How some software startups scale, where many failBSMS 26 - How to define and create your Ideal Customer Profile (ICP)BSMS 5: When's the time to invest in marketing? Hint: at PMFT2D3 CMO MasterclassSubmit and vote on our podcast topicsABOUT B2B SAAS MARKETING SNACKSSince 2020, The B2B SaaS Marketing Snacks Podcast has offered software company founders, investors and leadership a fresh source of insights into building a complete and efficient engine for growth.Meet our Marketing Snacks Podcast Hosts: Stijn Hendrikse: Author of T2D3 Masterclass & Book, Founder of KalungiAs a serial entrepreneur and marketing leader, Stijn has contributed to the success of 20+ startups as a C-level executive, including Chief Revenue Officer of Acumatica, CEO of MightyCall, a SaaS contact center solution, and leading the initial global Go-to-Market for Atera, a B2B SaaS Unicorn. Before focusing on startups, Stijn led global SMB Marketing and B2B Product Marketing for Microsoft's Office platform.Brian Graf: CEO of KalungiAs CEO of Kalungi, Brian provides high-level strategy, tactical execution, and business leadership expertise to drive long-term growth for B2B SaaS. Brian has successfully led clients in all aspects of marketing growth, from positioning and messaging to event support, product announcements, and channel-spend optimizations, generating qualified leads and brand awareness for clients while prioritizing ROI. 
Before Kalungi, Brian worked in television advertising, specializing in business intelligence and campaign optimization, and earned his MBA at the University of Washington's Foster School of Business with a focus in finance and marketing.Visit Kalungi.com to learn more about growing your B2B SaaS company.

The Product Market Fit Show
How to build great products—lessons from Amazon, Facebook, Twitter & Deel. | Aaron Goldsmid, Head of Product at Deel

The Product Market Fit Show

Play Episode Listen Later Dec 5, 2024 37:54 Transcription Available


Aaron was Director of Product at Amazon, VP Product at Twilio, PM at Facebook and Twitter. Now he's head of product at Deel where he reports directly to the CEO. Deel was founded in 2019— now, just 5 years later, it's worth $12B and raised over $650M. We go deep on what it takes to build world-class products, how early-stage founders can balance customer feedback, vision and data, and how the best product leaders are leveraging AI today. Keywords: productivity, AI, product development, user experience, product management, customer feedback, product market fit, technology, innovation, startups. Why you should listen: Why you're better off trying to enhance than replace with AI— at least today. Why you need to deeply understand your users to build products they truly love. How to avoid the customer feedback trap and only build features that move the needle. How and when to hire your first product manager. What makes a product truly great. Timestamps:(00:00:00) Intro(00:10:25) AI Solves the Informational Retrieval Problem(00:14:54) AI Agents Aren't Real Yet(00:18:12) Signals that Could Lead Towards PMF(00:23:04) Finding the Customers Need(00:25:17) Hiring Product Managers(00:28:00) Difference Between an Excellent Product vs an Okay One(00:34:17) Prioritizing Bug Fixing vs Feature Requests from Existing Users. Send me a message to let me know what you think!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Bolt.new, Flow Engineering for Code Agents, and >$8m ARR in 2 months as a Claude Wrapper

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Dec 2, 2024 98:39


The full schedule for Latent Space LIVE! at NeurIPS has been announced, featuring Best of 2024 overview talks for the AI Startup Landscape, Computer Vision, Open Models, Transformers Killers, Synthetic Data, Agents, and Scaling, and speakers from Sarah Guo of Conviction, Roboflow, AI2/Meta, Recursal/Together, HuggingFace, OpenHands and SemiAnalysis. Join us for the IRL event/Livestream! Alessio will also be holding a meetup at AWS Re:Invent in Las Vegas this Wednesday. See our new Events page for dates of AI Engineer Summit, Singapore, and World's Fair in 2025. LAST CALL for questions for our big 2024 recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!When we first observed that GPT Wrappers are Good, Actually, we did not even have Bolt on our radar. Since we recorded our Anthropic episode discussing building Agents with the new Claude 3.5 Sonnet, Bolt.new (by Stackblitz) has easily cleared the $8m ARR bar, repeating and accelerating its initial $4m feat.There are very many AI code generators and VS Code forks out there, but Bolt probably broke through initially because of its incredible zero shot low effort app generation:But as we explain in the pod, Bolt also emphasized deploy (Netlify)/ backend (Supabase)/ fullstack capabilities on top of Stackblitz's existing WebContainer full-WASM-powered-developer-environment-in-the-browser tech. Since then, the team has been shipping like mad (with weekly office hours), with bugfixing, full screen, multi-device, long context, diff based edits (using speculative decoding like we covered in Inference, Fast and Slow).All of this has captured the imagination of low/no code builders like Greg Isenberg and many others on YouTube/TikTok/Reddit/X/Linkedin etc:Just as with Fireworks, our relationship with Bolt/Stackblitz goes a bit deeper than normal - swyx advised the launch and got a front row seat to this epic journey, as well as demoed it with Realtime Voice at the recent OpenAI Dev Day. So we are very proud to be the first/closest to tell the full open story of Bolt/Stackblitz!Flow Engineering + Qodo/AlphaCodium UpdateIn year 2 of the pod we have been on a roll getting former guests to return as guest cohosts (Harrison Chase, Aman Sanger, Jon Frankle), and it was a pleasure to catch Itamar Friedman back on the pod, giving us an update on all things Qodo and Testing Agents from our last catchup a year and a half ago:Qodo (they renamed in September) went viral in early January this year with AlphaCodium (paper here, code here) beating DeepMind's AlphaCode with high efficiency:With a simple problem solving code agent:* The first step is to have the model reason about the problem. They describe it using bullet points and focus on the goal, inputs, outputs, rules, constraints, and any other relevant details.* Then, they make the model reason about the public tests and come up with an explanation of why the input leads to that particular output. * The model generates two to three potential solutions in text and ranks them in terms of correctness, simplicity, and robustness. * Then, it generates more diverse tests for the problem, covering cases not part of the original public tests. * Iteratively, pick a solution, generate the code, and run it on a few test cases. 
* If the tests fail, improve the code and repeat the process until the code passes every test.swyx has previously written similar thoughts on types vs tests for putting bounds on program behavior, but AlphaCodium extends this to AI generated tests and code.More recently, Itamar has also shown that AlphaCodium's techniques also extend well to the o1 models:Making Flow Engineering a useful technique to improve code model performance on every model. This is something we see AI Engineers uniquely well positioned to do compared to ML Engineers/Researchers.Full Video PodcastLike and subscribe!Show Notes* Itamar* Qodo* First episode* Eric* Bolt* StackBlitz* Thinkster* AlphaCodium* WebContainersChapters* 00:00:00 Introductions & Updates* 00:06:01 Generic vs. Specific AI Agents* 00:07:40 Maintaining vs Creating with AI* 00:17:46 Human vs Agent Computer Interfaces* 00:20:15 Why Docker doesn't work for Bolt* 00:24:23 Creating Testing and Code Review Loops* 00:28:07 Bolt's Task Breakdown Flow* 00:31:04 AI in Complex Enterprise Environments* 00:41:43 AlphaCodium* 00:44:39 Strategies for Breaking Down Complex Tasks* 00:45:22 Building in Open Source* 00:50:35 Choosing a product as a founder* 00:59:03 Reflections on Bolt Success* 01:06:07 Building a B2C GTM* 01:18:11 AI Capabilities and Pricing Tiers* 01:20:28 What makes Bolt unique* 01:23:07 Future Growth and Product Development* 01:29:06 Competitive Landscape in AI Engineering* 01:30:01 Advice to Founders and Embracing AI* 01:32:20 Having a baby and completing an Iron ManTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:12]: Hey, and today we're still in our sort of makeshift in-between studio, but we're very delighted to have a former returning guest host, Itamar. Welcome back.Itamar [00:00:21]: Great to be here after a year or more. Yeah, a year and a half.Swyx [00:00:24]: You're one of our earliest guests on Agents. Now you're CEO co-founder of Kodo. Right. Which has just been renamed. You also raised a $40 million Series A, and we can get caught up on everything, but we're also delighted to have our new guest, Eric. Welcome.Eric [00:00:42]: Thank you. Excited to be here. Should I say Bolt or StackBlitz?Swyx [00:00:45]: Like, is it like its own company now or?Eric [00:00:47]: Yeah. Bolt's definitely bolt.new. That's the thing that we're probably the most known for, I imagine, at this point.Swyx [00:00:54]: Which is ridiculous to say because you were working at StackBlitz for so long.Eric [00:00:57]: Yeah. I mean, within a week, we were doing like double the amount of traffic. And StackBlitz had been online for seven years, and we were like, what? But anyways, yeah. So we're StackBlitz, the company behind bolt.new. If you've heard of bolt.new, that's our stuff. Yeah.Swyx [00:01:12]: Yeah.Itamar [00:01:13]: Excellent. I see, by the way, that the founder mode, you need to know to capture opportunities. So kudos on doing that, right? You're working on some technology, and then suddenly you can exploit that to a new world. Yeah.Eric [00:01:24]: Totally. And I think, well, not to jump, but 100%, I mean, a couple of months ago, we had the idea for Bolt earlier this year, but we haven't really shared this too much publicly. 
But we actually had tried to build it with some of those state-of-the-art models back in January, February, you can kind of imagine which, and they just weren't good enough to actually do the code generation where the code was accurate and it was fast and whatever have you without a ton of like rag, but then there was like issues with that. So we put it on the shelf and then we got kind of a sneak peek of some of the new models that have come out in the past couple of months now. And so once we saw that, once we actually saw the code gen from it, we were like, oh my God, like, okay, we can build a product around this. And so that was really the impetus of us building the thing. But with that, it was StackBlitz, the core StackBlitz product the past seven years has been an IDE for developers. So the entire user experience flow we've built up just didn't make sense. And so when we kind of went out to build Bolt, we just thought, you know, if we were inventing our product today, what would the interface look like given what is now possible with the AI code gen? And so there's definitely a lot of conversations we had internally, but you know, just kind of when we logically laid it out, we were like, yeah, I think it makes sense to just greenfield a new thing and let's see what happens. If it works great, then we'll figure it out. If it doesn't work great, then it'll get deleted at some point. So that's kind of how it actually came to be.Swyx [00:02:49]: I'll mention your background a little bit. You were also founder of Thinkster before you started StackBlitz. So both of you are second time founders. Both of you have sort of re-founded your company recently. Yours was more of a rename. I think a slightly different direction as well. And then we can talk about both. Maybe just chronologically, should we get caught up on where Kodo is first and then you know, just like what people should know since the last pod? Sure.Itamar [00:03:12]: The last pod was two months after we launched and we basically had the vision that we talked about. The idea that software development is about specification, test and code, etc. We are more on the testing part as in essence, we think that if you solve testing, you solve software development. The beautiful chart that we'll put up on screen. And testing is a really big field, like there are many dimensions, unit testing, the level of the component, how big it is, how large it is. And then there is like different type of testing, is it regression or smoke or whatever. So back then we only had like one ID extension with unit tests as in focus. One and a half year later, first ID extension supports more type of testing as context aware. We index local, local repos, but also 10,000s of repos for Fortune 500 companies. We have another agent, another tool that is called, the pure agent is the open source and the commercial one is CodoMerge. And then we have another open source called CoverAgent, which is not yet a commercial product coming very soon. It's very impressive. It could be that already people are approving automated pull requests that they don't even aware in really big open sources. So once we have enough of these, we will also launch another agent. So for the first one and a half year, what we did is grew in our offering and mostly on the side of, does this code actually works, testing, code review, et cetera. And we believe that's the critical milestone that needs to be achieved to actually have the AI engineer for enterprise software. 
And then like for the first year was everything bottom up, getting to 1 million installation. 2024, that was 2023, 2024 was starting to monetize, to feel like how it is to make the first buck. So we did the teams offering, it went well with a thousand of teams, et cetera. And then we started like just a few months ago to do enterprise with everything you need, which is a lot of things that discussed in the last post that was just released by Codelm. So that's how we call it at Codelm. Just opening the brackets, our company name was Codelm AI, and we renamed to Codo and we call our models Codelm. So back to my point, so we started Enterprise Motion and already have multiple Fortune 100 companies. And then with that, we raised a series of $40 million. And what's exciting about it is that enables us to develop more agents. That's our focus. I think it's very different. We're not coming very soon with an ID or something like that.Swyx [00:06:01]: You don't want to fork this code?Itamar [00:06:03]: Maybe we'll fork JetBrains or something just to be different.Swyx [00:06:08]: I noticed that, you know, I think the promise of general purpose agents has kind of died. Like everyone is doing kind of what you're doing. There's Codogen, Codomerge, and then there's a third one. What's the name of it?Itamar [00:06:17]: Yeah. Codocover. Cover. Which is like a commercial version of a cover agent. It's coming soon.Swyx [00:06:23]: Yeah. It's very similar with factory AI, also doing like droids. They all have special purpose doing things, but people don't really want general purpose agents. Right. The last time you were here, we talked about AutoGBT, the biggest thing of 2023. This year, not really relevant anymore. And I think it's mostly just because when you give me a general purpose agent, I don't know what to do with it.Eric [00:06:42]: Yeah.Itamar [00:06:43]: I totally agree with that. We're seeing it for a while and I think it will stay like that despite the computer use, et cetera, that supposedly can just replace us. You can just like prompt it to be, hey, now be a QA or be a QA person or a developer. I still think that there's a few reasons why you see like a dedicated agent. Again, I'm a bit more focused, like my head is more on complex software for big teams and enterprise, et cetera. And even think about permissions and what are the data sources and just the same way you manage permissions for users. Developers, you probably want to have dedicated guardrails and dedicated approvals for agents. I intentionally like touched a point on not many people think about. And of course, then what you can think of, like maybe there's different tools, tool use, et cetera. But just the first point by itself is a good reason why you want to have different agents.Alessio [00:07:40]: Just to compare that with Bot.new, you're almost focused on like the application is very complex and now you need better tools to kind of manage it and build on top of it. On Bot.new, it's almost like I was using it the other day. There's basically like, hey, look, I'm just trying to get started. You know, I'm not very opinionated on like how you're going to implement this. Like this is what I want to do. And you build a beautiful app with it. 
What people ask as the next step, you know, going back to like the general versus like specific, have you had people say, hey, you know, this is great to start, but then I want a specific Bot.new dot whatever else to do a more vertical integration and kind of like development or what's the, what do people say?Eric [00:08:18]: Yeah. I think, I think you kind of hit the, hit it head on, which is, you know, kind of the way that we've, we've kind of talked about internally is it's like people are using Bolt to go from like 0.0 to 1.0, like that's like kind of the biggest unlock that Bolt has versus most other things out there. I mean, I think that's kind of what's, what's very unique about Bolt. I think the, you know, the working on like existing enterprise applications is, I mean, it's crazy important because, you know, there's a, you look, when you look at the fortune 500, I mean, these code bases, some of these have been around for 20, 30 plus years. And so it's important to be going from, you know, 101.3 to 101.4, et cetera. I think for us, so what's been actually pretty interesting is we see there's kind of two different users for us that are coming in and it's very distinct. It's like people that are developers already. And then there's people that have never really written software and more if they have, it's been very, very minimal. And so in the first camp, what these developers are doing, like to go from zero to one, they're coming to Bolt and then they're ejecting the thing to get up or just downloading it and, you know, opening cursor, like whatever to, to, you know, keep iterating on the thing. And sometimes they'll bring it back to Bolt to like add in a huge piece of functionality or something. Right. But for the people that don't know how to code, they're actually just, they, they live in this thing. And that was one of the weird things when we launched is, you know, within a day of us being online, one of the most popular YouTube videos, and there's been a ton since, which was, you know, there's like, oh, Bolt is the cursor killer. And I originally saw the headlines and I was like, thanks for the views. I mean, I don't know. This doesn't make sense to me. That's not, that's not what we kind of thought.Swyx [00:09:44]: It's how YouTubers talk to each other. Well, everything kills everything else.Eric [00:09:47]: Totally. But what blew my mind was that there was any comparison because it's like cursor is a, is a local IDE product. But when, when we actually kind of dug into it and we, and we have people that are using our product saying this, I'm not using cursor. And I was like, what? And it turns out there are hundreds of thousands of people that we have seen that we're using cursor and we're trying to build apps with that where they're not traditional software does, but we're heavily leaning on the AI. And as you can imagine, it is very complicated, right? To do that with cursor. So when Bolt came out, they're like, wow, this thing's amazing because it kind of inverts the complexity where it's like, you know, it's not an IDE, it's, it's a, it's a chat-based sort of interface that we have. So that's kind of the split, which is rather interesting. We've had like the first startups now launch off of Bolt entirely where this, you know, tomorrow I'm doing a live stream with this guy named Paul, who he's built an entire CRM using this thing and you know, with backend, et cetera. 
And people have made their first money on the internet period, you know, launching this with Stripe or whatever have you. So that's, that's kind of the two main, the two main categories of folks that we see using Bolt though.Itamar [00:10:51]: I agree that I don't understand the comparison. It doesn't make sense to me. I think like we have like two type of families of tools. One is like we re-imagine the software development. I think Bolt is there and I think like a cursor is more like a evolution of what we already have. It's like taking the IDE and it's, it's amazing and it's okay, let's, let's adapt the IDE to an era where LLMs can do a lot for us. And Bolt is more like, okay, let's rethink everything totally. And I think we see a few tools there, like maybe Vercel, Veo and maybe Repl.it in that area. And then in the area of let's expedite, let's change, let's, let's progress with what we already have. You can see Cursor and Kodo, but we're different between ourselves, Cursor and Kodo, but definitely I think that comparison doesn't make sense.Alessio [00:11:42]: And just to set the context, this is not a Twitter demo. You've made 4 million of revenue in four weeks. So this is, this is actually working, you know, it's not a, what, what do you think that is? Like, there's been so many people demoing coding agents on Twitter and then it doesn't really work. And then you guys were just like, here you go, it's live, go use it, pay us for it. You know, is there anything in the development that was like interesting and maybe how that compares to building your own agents?Eric [00:12:08]: We had no idea, honestly, like we, we, we've been pretty blown away and, and things have just kind of continued to grow faster since then. We're like, oh, today is week six. So I, I kind of came back to the point you just made, right, where it's, you, you kind of outlined, it's like, there's kind of this new market of like kind of rethinking the software development and then there's heavily augmenting existing developers. I think that, you know, both of which are, you know, AI code gen being extremely good, it's allowed existing developers, it's allowing existing developers to camera out software far faster than they could have ever before, right? It's like the ultimate power tool for an existing developer. But this code gen stuff is now so good. And then, and we saw this over the past, you know, from the beginning of the year when we tried to first build, it's actually lowered the barrier to people that, that aren't traditionally software engineers. But the kind of the key thing is if you kind of think about it from, imagine you've never written software before, right? My co-founder and I, he and I grew up down the street from each other in Chicago. We learned how to code when we were 13 together and we've been building stuff ever since. And this is back in like the mid 2000s or whatever, you know, there was nothing for free to learn from online on the internet and how to code. For our 13th birthdays, we asked our parents for, you know, O'Reilly books cause you couldn't get this at the library, right? And so instead of like an Xbox, we got, you know, programming books. But the hardest part for everyone learning to code is getting an environment set up locally, you know? 
And so when we built StackBlitz, like kind of the key thesis, like seven years ago, the insight we had was that, Hey, it seems like the browser has a lot of new APIs like WebAssembly and service workers, et cetera, where you could actually write an operating system that ran inside the browser that could boot in milliseconds. And you, you know, basically there's this missing capability of the web. Like the web should be able to build apps for the web, right? You should be able to build the web on the web. Every other platform has that, Visual Studio for Windows, Xcode for Mac. The web has no built in primitive for this. And so just like our built in kind of like nerd instinct on this was like, that seems like a huge hole and it's, you know, it will be very valuable or like, you know, very valuable problem to solve. So if you want to set up that environments, you know, this is what we spent the past seven years doing. And the reality is existing developers have running locally. They already know how to set up that environment. So the problem isn't as acute for them. When we put Bolt online, we took that technology called WebContainer and married it with these, you know, state of the art frontier models. And the people that have the most pain with getting stuff set up locally is people that don't code. I think that's been, you know, really the big explosive reason is no one else has been trying to make dev environments work inside of a browser tab, you know, for the past if since ever, other than basically our company, largely because there wasn't an immediate demand or need. So I think we kind of find ourselves at the right place at the right time. And again, for this market of people that don't know how to write software, you would kind of expect that you should be able to do this without downloading something to your computer in the same way that, hey, I don't have to download Photoshop now to make designs because there's Figma. I don't have to download Word because there's, you know, Google Docs. They're kind of looking at this as that sort of thing, right? Which was kind of the, you know, our impetus and kind of vision from the get-go. But you know, the code gen, the AI code gen stuff that's come out has just been, you know, an order of magnitude multiplier on how magic that is, right? So that's kind of my best distillation of like, what is going on here, you know?Alessio [00:15:21]: And you can deploy too, right?Eric [00:15:22]: Yeah.Alessio [00:15:23]: Yeah.Eric [00:15:24]: And so that's, what's really cool is it's, you know, we have deployment built in with Netlify and this is actually, I think, Sean, you actually built this at Netlify when you were there. Yeah. It's one of the most brilliant integrations actually, because, you know, effectively the API that Sean built, maybe you can speak to it, but like as a provider, we can just effectively give files to Netlify without the user even logging in and they have a live website. And if they want to keep, hold onto it, they can click a link and claim it to their Netlify account. But it basically is just this really magic experience because when you come to Bolt, you say, I want a website. Like my mom, 70, 71 years old, made her first website, you know, on the internet two weeks ago, right? It was about her nursing days.Swyx [00:16:03]: Oh, that's fantastic though. It wouldn't have been made.Eric [00:16:06]: A hundred percent. 
Cause even in, you know, when we've had a lot of people building personal, like deeply personal stuff, like in the first week we launched this, the sales guy from the East Coast, you know, replied to a tweet of mine and he said, thank you so much for building this to your team. His daughter has a medical condition and so for her to travel, she has to like line up donors or something, you know, so ahead of time. And so he actually used Bolt to make a website to do that, to actually go and send it to folks in the region she was going to travel to ahead of time. I was really touched by it, but I also thought like, why, you know, why didn't he use like Wix or Squarespace? Right? I mean, this is, this is a solved problem, quote unquote, right? And then when I thought, I actually use Squarespace for my, for my, uh, the wedding website for my wife and I, like back in 2021, so I'm familiar, you know, it was, it was faster. I know how to code. I was like, this is faster. Right. And I thought back and I was like, there's a whole interface you have to learn how to use. And it's actually not that simple. There's like a million things you can configure in that thing. When you come to Bolt, there's a, there's a text box. You just say, I need a, I need a wedding website. Here's the date. Here's where it is. And here's a photo of me and my wife, put it somewhere relevant. It's actually the simplest way. And that's what my, when my mom came, she said, uh, I'm Pat Simons. I was a nurse in the seventies, you know, and like, here's the things I did and a website came out. So coming back to why is this such a, I think, why are we seeing this sort of growth? It's, this is the simplest interface I think maybe ever created to actually build it, a deploy a website. And then that website, my mom made, she's like, okay, this looks great. And there's, there's one button, you just click it, deploy, and it's live and you can buy a domain name, attach it to it. And you know, it's as simple as it gets, it's getting even simpler with some of the stuff we're working on. But anyways, so that's, it's, it's, uh, it's been really interesting to see some of the usage like that.Swyx [00:17:46]: I can offer my perspective. So I, you know, I probably should have disclosed a little bit that, uh, I'm a, uh, stack list investor.Alessio [00:17:53]: Canceled the episode. I know, I know. Don't play it now. Pause.Eric actually reached out to ShowMeBolt before the launch. And we, you know, we talked a lot about, like, the framing of, of what we're going to talk about how we marketed the thing, but also, like, what we're So that's what Bolt was going to need, like a whole sort of infrastructure.swyx: Netlify, I was a maintainer but I won't take claim for the anonymous upload. That's actually the origin story of Netlify. We can have Matt Billman talk about it, but that was [00:18:00] how Netlify started. You could drag and drop your zip file or folder from your desktop onto a website, it would have a live URL with no sign in.swyx: And so that was the origin story of Netlify. And it just persists to today. And it's just like it's really nice, interesting that both Bolt and CognitionDevIn and a bunch of other sort of agent type startups, they all use Netlify to deploy because of this one feature. 
They don't really care about the other features.swyx: But, but just because it's easy for computers to use and talk to it, like if you build an interface for computers specifically, that it's easy for them to Navigate, then they will be used in agents. And I think that's a learning that a lot of developer tools companies are having. That's my bolt launch story and now if I say all that stuff.swyx: And I just wanted to come back to, like, the Webcontainers things, right? Like, I think you put a lot of weight on the technical modes. I think you also are just like, very good at product. So you've, you've like, built a better agent than a lot of people, the rest of us, including myself, who have tried to build these things, and we didn't get as far as you did.swyx: Don't shortchange yourself on products. But I think specifically [00:19:00] on, on infra, on like the sandboxing, like this is a thing that people really want. Alessio has Bax E2B, which we'll have on at some point, talking about like the sort of the server full side. But yours is, you know, inside of the browser, serverless.swyx: It doesn't cost you anything to serve one person versus a million people. It doesn't, doesn't cost you anything. I think that's interesting. I think in theory, we should be able to like run tests because you can run the full backend. Like, you can run Git, you can run Node, you can run maybe Python someday.swyx: We talked about this. But ideally, you should be able to have a fully gentic loop, running code, seeing the errors, correcting code, and just kind of self healing, right? Like, I mean, isn't that the dream?Eric: Totally.swyx: Yeah,Eric: totally. At least in bold, we've got, we've got a good amount of that today. I mean, there's a lot more for us to do, but one of the nice things, because like in web container, you know, there's a lot of kind of stuff you go Google like, you know, turn docker container into wasm.Eric: You'll find a lot of stuff out there that will do that. The problem is it's very big, it's slow, and that ruins the experience. And so what we ended up doing is just writing an operating system from [00:20:00] scratch that was just purpose built to, you know, run in a browser tab. And the reason being is, you know, Docker 2 awesome things will give you an image that's like out 60 to 100 megabits, you know, maybe more, you know, and our, our OS, you know, kind of clocks in, I think, I think we're in like a, maybe, maybe a megabyte or less or something like that.Eric: I mean, it's, it's, you know, really, really, you know, stripped down.swyx: This is basically the task involved is I understand that it's. Mapping every single, single Linux call to some kind of web, web assembly implementation,Eric: but more or less, and, and then there's a lot of things actually, like when you're looking at a dev environment, there's a lot of things that you don't need that a traditional OS is gonna have, right?Eric: Like, you know audio drivers or you like, there's just like, there's just tons of things. Oh, yeah. Right. Yeah. That goes . Yeah. You can just kind, you can, you can kind of tos them. Or alternatively, what you can do is you can actually be the nice thing. 
And this is, this kind of comes back to the origins of browsers, which is, you know, they're, they're at the beginning of the web and, you know, the late nineties, there was two very different kind of visions for the web where Alan Kay vehemently [00:21:00] disagree with the idea that should be document based, which is, you know, Tim Berners Lee, you know, that, and that's kind of what ended up winning, winning was this document based kind of browsing documents on the web thing.Eric: Alan Kay, he's got this like very famous quote where he said, you know, you want web browsers to be mini operating systems. They should download little mini binaries and execute with like a little mini virtualized operating system in there. And what's kind of interesting about the history, not to geek out on this aspect, what's kind of interesting about the history is both of those folks ended up being right.Eric: Documents were actually the pragmatic way that the web worked. Was, you know, became the most ubiquitous platform in the world to the degree now that this is why WebAssembly has been invented is that we're doing, we need to do more low level things in a browser, same thing with WebGPU, et cetera. And so all these APIs, you know, to build an operating system came to the browser.Eric: And that was actually the realization we had in 2017 was, holy heck, like you can actually, you know, service workers, which were designed for allowing your app to work offline. That was the kind of the key one where it was like, wait a second, you can actually now run. Web servers within a [00:22:00] browser, like you can run a server that you open up.Eric: That's wild. Like full Node. js. Full Node. js. Like that capability. Like, I can have a URL that's programmatically controlled. By a web application itself, boom. Like the web can build the web. The primitive is there. Everyone at the time, like we talked to people that like worked on, you know Chrome and V8 and they were like, uhhhh.Eric: You know, like I don't know. But it's one of those things you just kind of have to go do it to find out. So we spent a couple of years, you know, working on it and yeah. And, and, and got to work in back in 2021 is when we kind of put the first like data of web container online. Butswyx: in partnership with Google, right?swyx: Like Google actually had to help you get over the finish line with stuff.Eric: A hundred percent, because well, you know, over the years of when we were doing the R and D on the thing. Kind of the biggest challenge, the two ways that you can kind of test how powerful and capable a platform are, the two types of applications are one, video games, right, because they're just very compute intensive, a lot of calculations that have to happen, right?Eric: The second one are IDEs, because you're talking about actually virtualizing the actual [00:23:00] runtime environment you are in to actually build apps on top of it, which requires sophisticated capabilities, a lot of access to data. You know, a good amount of compute power, right, to effectively, you know, building app in app sort of thing.Eric: So those, those are the stress tests. So if your platform is missing stuff, those are the things where you find out. Those are, those are the people building games and IDEs. They're the ones filing bugs on operating system level stuff. And for us, browser level stuff.Eric [00:23:47]: yeah, what ended up happening is we were just hammering, you know, the Chromium bug tracker, and they're like, who are these guys? Yeah. 
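For readers who want to see the browser primitive Eric is pointing at, here is a minimal sketch of a service worker answering requests from inside the tab. It illustrates only the standard Service Worker API; WebContainer's actual implementation is far more involved and is not shown here.

```ts
/// <reference lib="webworker" />
// sw.ts: a minimal sketch of the "web server inside the browser tab" idea.
// This is the plain Service Worker API, not WebContainer itself.

const sw = self as unknown as ServiceWorkerGlobalScope;

sw.addEventListener('install', () => sw.skipWaiting());
sw.addEventListener('activate', (event) => event.waitUntil(sw.clients.claim()));

sw.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);

  // Answer this route entirely inside the tab; the request never reaches a real server.
  if (url.pathname === '/api/hello') {
    event.respondWith(
      new Response(JSON.stringify({ message: 'served from inside the browser' }), {
        headers: { 'Content-Type': 'application/json' },
      })
    );
  }
  // Anything else falls through to the network as usual.
});
```

A page would register this with navigator.serviceWorker.register('/sw.js'); after that, fetches to /api/hello resolve without any network server involved, which is the basic capability an in-browser runtime can build on much more extensively.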
And, and they were amazing because I mean, just making Chrome DevTools be able to debug, I mean, it's, it's not, it wasn't originally built right for debugging an operating system, right? They've been phenomenal working with us and just kind of really pushing the limits, but that it's a rising tide that's kind of lifted all boats because now there's a lot of different types of applications that you can debug with Chrome Dev Tools that are running a browser that runs more reliably because just the stress testing that, that we and, you know, games that are coming to the web are kind of pushing as well, but.Itamar [00:24:23]: That's awesome. About the testing, I think like most, let's say coding assistant from different kinds will need this loop of testing. And even I would add code review to some, to some extent that you mentioned. How is testing different from code review? Code review could be, for example, PR review, like a code review that is done at the point of when you want to merge branches. But I would say that code review, for example, checks best practices, maintainability, and so on. It's not just like CI, but more than CI. And testing is like a more like checking functionality, et cetera. So it's different. We call, by the way, all of these together code integrity, but that's a different story. Just to go back to the, to the testing and specifically. Yeah. It's, it's, it's since the first slide. Yeah. We're consistent. So if we go back to the testing, I think like, it's not surprising that for us testing is important and for Bolt it's testing important, but I want to shed some light on a different perspective of it. Like let's think about autonomous driving. Those startups that are doing autonomous driving for highway and autonomous driving for the city. And I think like we saw the autonomous of the highway much faster and reaching to a level, I don't know, four or so much faster than those in the city. Now, in both cases, you need testing and quote unquote testing, you know, verifying validation that you're doing the right thing on the road and you're reading and et cetera. But it's probably like so different in the city that it could be like actually different technology. And I claim that we're seeing something similar here. So when you're building the next Wix, and if I was them, I was like looking at you and being a bit scared. That's what you're disrupting, what you just said. Then basically, I would say that, for example, the UX UI is freaking important. And because you're you're more aiming for the end user. In this case, maybe it's an end user that doesn't know how to develop for developers. It's also important. But let alone those that do not know to develop, they need a slick UI UX. And I think like that's one reason, for example, I think Cursor have like really good technology. I don't know the underlying what's under the hood, but at least what they're saying. But I think also their UX UI is great. It's a lot because they did their own ID. While if you're aiming for the city AI, suddenly like there's a lot of testing and code review technology that it's not necessarily like that important. For example, let's talk about integration tests. Probably like a lot of what you're building involved at the moment is isolated applications. Maybe the vision or the end game is maybe like having one solution for everything. It could be that eventually the highway companies will go into the city and the other way around. But at the beginning, there is a difference. 
And integration tests are a good example. I guess they're a bit less important. And when you think about enterprise software, they're really important. So to recap, like I think like the idea of looping and verifying your test and verifying your code in different ways, testing or code review, et cetera, seems to be important in the highway AI and the city AI, but in different ways and different like critical for the city, even more and more variety. Actually, I was looking to ask you like what kind of loops you guys are doing. For example, when I'm using Bolt and I'm enjoying it a lot, then I do see like sometimes you're trying to catch the errors and fix them. And also, I noticed that you're breaking down tasks into smaller ones and then et cetera, which is already a common notion for a year ago. But it seems like you're doing it really well. So if you're willing to share anything about it.Eric [00:28:07]: Yeah, yeah. I realized I never actually hit the punchline of what I was saying before. I mentioned the point about us kind of writing an operating system from scratch because what ended up being important about that is that to your point, it's actually a very, like compared to like a, you know, if you're like running cursor on anyone's machine, you kind of don't know what you're dealing with, with the OS you're running on. There could be an error happens. It could be like a million different things, right? There could be some config. There could be, it could be God knows what, right? The thing with WebConnect is because we wrote the entire thing from scratch. It's actually a unified image basically. And we can instrument it at any level that we think is going to be useful, which is exactly what we did when we started building Bolt is we instrumented stuff at like the process level, at the runtime level, you know, et cetera, et cetera, et cetera. Stuff that would just be not impossible to do on local, but to do that in a way that works across any operating system, whatever is, I mean, would just be insanely, you know, insanely difficult to do right and reliably. And that's what you saw when you've used Bolt is that when an error actually will occur, whether it's in the build process or the actual web application itself is failing or anything kind of in between, you can actually capture those errors. And today it's a very primitive way of how we've implemented it largely because the product just didn't exist 90 days ago. So we're like, we got some work ahead of us and we got to hire some more a little bit, but basically we present and we say, Hey, this is, here's kind of the things that went wrong. There's a fix it button and then a ignore button, and then you can just hit fix it. And then we take all that telemetry through our agent, you run it through our agent and say, kind of, here's the state of the application. Here's kind of the errors that we got from Node.js or the browser or whatever, and like dah, dah, dah, dah. And it can take a crack at actually solving it. And it's actually pretty darn good at being able to do that. That's kind of been a, you know, closing the loop and having it be a reliable kind of base has seemed to be a pretty big upgrade over doing stuff locally, just because I think that's a pretty key ingredient of it. And yeah, I think breaking things down into smaller tasks, like that's, that's kind of a key part of our agent. I think like Claude did a really good job with artifacts. 
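To make the shape of that loop concrete, here is a rough sketch of an error-capture-and-fix cycle in the spirit of what Eric describes. Every name in it (CapturedError, askModelForFix, and so on) is an illustrative placeholder, not Bolt's actual internals.

```ts
// Hypothetical capture-and-fix loop: gather build/runtime errors, hand them to a
// model together with the project files, apply the suggested patches, and repeat.
// All identifiers below are placeholders for illustration only.

interface CapturedError {
  source: 'build' | 'runtime' | 'server';
  message: string;
  stack?: string;
}

interface FixSuggestion {
  file: string;
  updatedContents: string;
  explanation: string;
}

async function fixItLoop(
  projectFiles: Map<string, string>,
  collectErrors: () => Promise<CapturedError[]>,
  askModelForFix: (
    files: Map<string, string>,
    errors: CapturedError[],
  ) => Promise<FixSuggestion[]>,
  maxAttempts = 3,
): Promise<boolean> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const errors = await collectErrors();
    if (errors.length === 0) return true; // nothing left to fix

    const suggestions = await askModelForFix(projectFiles, errors);
    for (const fix of suggestions) {
      projectFiles.set(fix.file, fix.updatedContents); // apply the proposed patch
    }
  }
  return (await collectErrors()).length === 0;
}
```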
I think, you know, us and kind of everyone else has, has kind of taken their approach of like actually breaking out certain tasks in a certain order into, you know, kind of a concrete way. And, and so actually the core of Bolt, I know we actually made open source. So you can actually go and check out like the system prompts and et cetera, and you can run it locally and whatever have you. So anyone that's interested in this stuff, I'd highly recommend taking a look at. There's not a lot of like stuff that's like open source in this realm. It's, that was one of the fun things that we've we thought would be cool to do. And people, people seem to like it. I mean, there's a lot of forks and people adding different models and stuff. So it's been cool to see.Swyx [00:30:41]: Yeah. I'm happy to add, I added real-time voice for my opening day demo and it was really fun to hack with. So thank you for doing that. Yeah. Thank you. I'm going to steal your code.Eric [00:30:52]: Because I want that.Swyx [00:30:52]: It's funny because I built on top of the fork of Bolt.new that already has the multi LLM thing. And so you just told me you're going to merge that in. So then you're going to merge two layers of forks down into this thing. So it'll be fun.Eric [00:31:03]: Heck yeah.Alessio [00:31:04]: Just to touch on like the environment, Itamar, you maybe go into the most complicated environments that even the people that work there don't know how to run. How much of an impact does that have on your performance? Like, you know, it's most of the work you're doing actually figuring out environment and like the libraries, because I'm sure they're using outdated version of languages, they're using outdated libraries, they're using forks that have not been on the public internet before. How much of the work that you're doing is like there versus like at the LLM level?Itamar [00:31:32]: One of the reasons I was asking about, you know, what are the steps to break things down, because it really matters. Like, what's the tech stack? How complicated the software is? It's hard to figure it out when you're dealing with the real world, any environment of enterprise as a city, when I'm like, while maybe sometimes like, I think you do enable like in Bolt, like to install stuff, but it's quite a like controlled environment. And that's a good thing to do, because then you narrow down and it's easier to make things work. So definitely, there are two dimensions, I think, actually spaces. One is the fact just like installing our software without yet like doing anything, making it work, just installing it because we work with enterprise and Fortune 500, etc. Many of them want on prem solution.Swyx [00:32:22]: So you have how many deployment options?Itamar [00:32:24]: Basically, we had, we did a metric metrics, say 96 options, because, you know, they're different dimensions. Like, for example, one dimension, we connect to your code management system to your Git. So are you having like GitHub, GitLab? Subversion? Is it like on cloud or deployed on prem? Just an example. Which model agree to use its APIs or ours? Like we have our Is it TestGPT? Yeah, when we started with TestGPT, it was a huge mistake name. It was cool back then, but I don't think it's a good idea to name a model after someone else's model. Anyway, that's my opinion. 
So we gotSwyx [00:33:02]: I'm interested in these learnings, like things that you change your mind on.Itamar [00:33:06]: Eventually, when you're building a company, you're building a brand and you want to create your own brand. By the way, when I thought about Bolt.new, I also thought about if it's not a problem, because when I think about Bolt, I do think about like a couple of companies that are already called this way.Swyx [00:33:19]: Curse companies. You could call it Codium just to...Itamar [00:33:24]: Okay, thank you. Touche. Touche.Eric [00:33:27]: Yeah, you got to imagine the board meeting before we launched Bolt, one of our investors, you can imagine they're like, are you sure? Because from the investment side, it's kind of a famous, very notorious Bolt. And they're like, are you sure you want to go with that name? Oh, yeah. Yeah, absolutely.Itamar [00:33:43]: At this point, we have actually four models. There is a model for autocomplete. There's a model for the chat. There is a model dedicated for more for code review. And there is a model that is for code embedding. Actually, you might notice that there isn't a good code embedding model out there. Can you name one? Like dedicated for code?Swyx [00:34:04]: There's code indexing, and then you can do sort of like the hide for code. And then you can embed the descriptions of the code.Itamar [00:34:12]: Yeah, but you do see a lot of type of models that are dedicated for embedding and for different spaces, different fields, etc. And I'm not aware. And I know that if you go to the bedrock, try to find like there's a few code embedding models, but none of them are specialized for code.Swyx [00:34:31]: Is there a benchmark that you would tell us to pay attention to?Itamar [00:34:34]: Yeah, so it's coming. Wait for that. Anyway, we have our models. And just to go back to the 96 option of deployment. So I'm closing the brackets for us. So one is like dimensional, like what Git deployment you have, like what models do you agree to use? Dotter could be like if it's air-gapped completely, or you want VPC, and then you have Azure, GCP, and AWS, which is different. Do you use Kubernetes or do not? Because we want to exploit that. There are companies that do not do that, etc. I guess you know what I mean. So that's one thing. And considering that we are dealing with one of all four enterprises, we needed to deal with that. So you asked me about how complicated it is to solve that complex code. I said, it's just a deployment part. And then now to the software, we see a lot of different challenges. For example, some companies, they did actually a good job to build a lot of microservices. Let's not get to if it's good or not, but let's first assume that it is a good thing. A lot of microservices, each one of them has their own repo. And now you have tens of thousands of repos. And you as a developer want to develop something. And I remember me coming to a corporate for the first time. I don't know where to look at, like where to find things. So just doing a good indexing for that is like a challenge. And moreover, the regular indexing, the one that you can find, we wrote a few blogs on that. By the way, we also have some open source, different than yours, but actually three and growing. Then it doesn't work. You need to let the tech leads and the companies influence your indexing. For example, Mark with different repos with different colors. This is a high quality repo. This is a lower quality repo. This is a repo that we want to deprecate. 
This is a repo we want to grow, etc. And let that be part of your indexing. And only then things actually work for enterprise and they don't get to a fatigue of, oh, this is awesome. Oh, but I'm starting, it's annoying me. I think Copilot is an amazing tool, but I'm quoting others, meaning GitHub Copilot, that they see not so good retention of GitHub Copilot and enterprise. Ooh, spicy. Yeah. I saw snapshots of people and we have customers that are Copilot users as well. And also I saw research, some of them is public by the way, between 38 to 50% retention for users using Copilot and enterprise. So it's not so good. By the way, I don't think it's that bad, but it's not so good. So I think that's a reason because, yeah, it helps you auto-complete, but then, and especially if you're working on your repo alone, but if it's need that context of remote repos that you're code-based, that's hard. So to make things work, there's a lot of work on that, like giving the controllability for the tech leads, for the developer platform or developer experience department in the organization to influence how things are working. A short example, because if you have like really old legacy code, probably some of it is not so good anymore. If you just fine tune on these code base, then there is a bias to repeat those mistakes or old practices, etc. So you need, for example, as I mentioned, to influence that. For example, in Coda, you can have a markdown of best practices by the tech leads and Coda will include that and relate to that and will not offer suggestions that are not according to the best practices, just as an example. So that's just a short list of things that you need to do in order to deal with, like you mentioned, the 100.1 to 100.2 version of software. I just want to say what you're doing is extremelyEric [00:38:32]: impressive because it's very difficult. I mean, the business of Stackplus, kind of before bulk came online, we sold a version of our IDE that went on-prem. So I understand what you're saying about the difficulty of getting stuff just working on-prem. Holy heck. I mean, that is extremely hard. I guess the question I have for you is, I mean, we were just doing that with kind of Kubernetes-based stuff, but the spread of Fortune 500 companies that you're working with, how are they doing the inference for this? Are you kind of plugging into Azure's OpenAI stuff and AWS's Bedrock, you know, Cloud stuff? Or are they just like running stuff on GPUs? Like, what is that? How are these folks approaching that? Because, man, what we saw on the enterprise side, I mean, I got to imagine that that's a huge challenge. Everything you said and more, like,Itamar [00:39:15]: for example, like someone could be, and I don't think any of these is bad. Like, they made their decision. Like, for example, some people, they're, I want only AWS and VPC on AWS, no matter what. And then they, some of them, like there is a subset, I will say, I'm willing to take models only for from Bedrock and not ours. And we have a problem because there is no good code embedding model on Bedrock. And that's part of what we're doing now with AWS to solve that. We solve it in a different way. But if you are willing to run on AWS VPC, but run your run models on GPUs or inferentia, like the new version of the more coming out, then our models can run on that. But everything you said is right. Like, we see like on-prem deployment where they have their own GPUs. We see Azure where you're using OpenAI Azure. 
We see cases where you're running on GCP and they want OpenAI. Like this cross, like a case, although there is Gemini or even Sonnet, I think is available on GCP, just an example. So all the options, that's part of the challenge. I admit that we thought about it, but it was even more complicated. And it took us a few months to actually, that metrics that I mentioned, to start clicking each one of the blocks there. A few months is impressive. I mean,Eric [00:40:35]: honestly, just that's okay. Every one of these enterprises is, their networking is different. Just everything's different. Every single one is different. I see you understand. Yeah. So that just cannot be understated. That it is, that's extremely impressive. Hats off.Itamar [00:40:50]: It could be, by the way, like, for example, oh, we're only AWS, but our GitHub enterprise is on-prem. Oh, we forgot. So we need like a private link or whatever, like every time like that. It's not, and you do need to think about it if you want to work with an enterprise. And it's important. Like I understand like their, I respect their point of view.Swyx [00:41:10]: And this primarily impacts your architecture, your tech choices. Like you have to, you can't choose some vendors because...Itamar [00:41:15]: Yeah, definitely. To be frank, it makes us hard for a startup because it means that we want, we want everyone to enjoy all the variety of models. By the way, it was hard for us with our technology. I want to open a bracket, like a window. I guess you're familiar with our Alpha Codium, which is an open source.Eric [00:41:33]: We got to go over that. Yeah. So I'll do that quickly.Itamar [00:41:36]: Yeah. A pin in that. Yeah. Actually, we didn't have it in the last episode. So, so, okay.Swyx [00:41:41]: Okay. We'll come back to that later, but let's talk about...Itamar [00:41:43]: Yeah. So, so just like shortly, and then we can double click on Alpha Codium. But Alpha Codium is a open source tool. You can go and try it and lets you compete on CodeForce. This is a website and a competition and actually reach a master level level, like 95% with a click of a button. You don't need to do anything. And part of what we did there is taking a problem and breaking it to different, like smaller blocks. And then the models are doing a much better job. Like we all know it by now that taking small tasks and solving them, by the way, even O1, which is supposed to be able to do system two thinking like Greg from OpenAI like hinted, is doing better on these kinds of problems. But still, it's very useful to break it down for O1, despite O1 being able to think by itself. And that's what we presented like just a month ago, OpenAI released that now they are doing 93 percentile with O1 IOI left and International Olympiad of Formation. Sorry, I forgot. Exactly. I told you I forgot. And we took their O1 preview with Alpha Codium and did better. Like it just shows like, and there is a big difference between the preview and the IOI. It shows like that these models are not still system two thinkers, and there is a big difference. So maybe they're not complete system two. Yeah, they need some guidance. I call them system 1.5. We can, we can have it. I thought about it. Like, you know, I care about this philosophy stuff. And I think like we didn't see it even close to a system two thinking. I can elaborate later. But closing the brackets, like we take Alpha Codium and as our principle of thinking, we take tasks and break them down to smaller tasks. 
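As a rough illustration of that flow-engineering idea (generate tests first, draft a solution, then iterate on failures), here is a minimal sketch with injected placeholder steps. It is not AlphaCodium's real code, which is open source and worth reading directly.

```ts
// Illustrative flow-engineering loop: break the problem into small, validated steps.
// The injected functions stand in for model calls and a sandboxed test runner;
// they are placeholders, not AlphaCodium's real interfaces.

interface TestResult { passed: boolean; failures: string[] }

interface FlowSteps {
  generateTests(problem: string): Promise<string[]>;
  generateSolution(problem: string, tests: string[]): Promise<string>;
  runTests(code: string, tests: string[]): Promise<TestResult>;
  reviseSolution(problem: string, code: string, failures: string[]): Promise<string>;
}

async function solveWithFlow(problem: string, steps: FlowSteps, maxIterations = 5): Promise<string> {
  const tests = await steps.generateTests(problem);        // small task 1: propose tests
  let code = await steps.generateSolution(problem, tests); // small task 2: first draft

  for (let i = 0; i < maxIterations; i++) {                // small task 3: iterate on failures
    const result = await steps.runTests(code, tests);
    if (result.passed) return code;
    code = await steps.reviseSolution(problem, code, result.failures);
  }
  throw new Error('No passing solution within the iteration budget');
}
```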
And then we want to exploit the best model to solve them. So I want to enable anyone to enjoy O1 and SONET and Gemini 1.5, etc. But at the same time, I need to develop my own models as well, because some of the Fortune 500 want to have all air gapped or whatever. So that's a challenge. Now you need to support so many models. And to some extent, I would say that the flow engineering, the breaking down to two different blocks is a necessity for us. Why? Because when you take a big block, a big problem, you need a very different prompt for each one of the models to actually work. But when you take a big problem and break it into small tasks, we can talk how we do that, then the prompt matters less. What I want to say, like all this, like as a startup trying to do different deployment, getting all the juice that you can get from models, etc. is a big problem. And one need to think about it. And one of our mitigation is that process of taking tasks and breaking them down. That's why I'm really interested to know how you guys are doing it. And part of what we do is also open source. So you can see.Swyx [00:44:39]: There's a lot in there. But yeah, flow over prompt. I do believe that that does make sense. I feel like there's a lot that both of you can sort of exchange notes on breaking down problems. And I just want you guys to just go for it. This is fun to watch.Eric [00:44:55]: Yeah. I mean, what's super interesting is the context you're working in is, because for us too with Bolt, we've started thinking because our kind of existing business line was going behind the firewall, right? We were like, how do we do this? Adding the inference aspect on, we're like, okay, how does... Because I mean, there's not a lot of prior art, right? I mean, this is all new. This is all new. So I definitely am going to have a lot of questions for you.Itamar [00:45:17]: I'm here. We're very open, by the way. We have a paper on a blog or like whatever.Swyx [00:45:22]: The Alphacodeum, GitHub, and we'll put all this in the show notes.Itamar [00:45:25]: Yeah. And even the new results of O1, we published it.Eric [00:45:29]: I love that. And I also just, I think spiritually, I like your approach of being transparent. Because I think there's a lot of hype-ium around AI stuff. And a lot of it is, it's just like, you have these companies that are just kind of keep their stuff closed source and then just max hype it, but then it's kind of nothing. And I think it kind of gives a bad rep to the incredible stuff that's actually happening here. And so I think it's stuff like what you're doing where, I mean, true merit and you're cracking open actual code for others to learn from and use. That strikes me as the right approach. And it's great to hear that you're making such incredible progress.Itamar [00:46:02]: I have something to share about the open source. Most of our tools are, we have an open source version and then a premium pro version. But it's not an easy decision to do that. I actually wanted to ask you about your strategy, but I think in your case, there is, in my opinion, relatively a good strategy where a lot of parts of open source, but then you have the deployment and the environment, which is not right if I get it correctly. And then there's a clear, almost hugging face model. Yeah, you can do that, but why should you try to deploy it yourself, deploy it with us? But in our case, and I'm not sure you're not going to hit also some competitors, and I guess you are. 
I wanted to ask you, for example, on some of them. In our case, one day we looked on one of our competitors that is doing code review. We're a platform. We have the code review, the testing, et cetera, spread over the ID to get. And in each agent, we have a few startups or a big incumbents that are doing only that. So we noticed one of our competitors having not only a very similar UI of our open source, but actually even our typo. And you sit there and you're kind of like, yeah, we're not that good. We don't use enough Grammarly or whatever. And we had a couple of these and we saw it there. And then it's a challenge. And I want to ask you, Bald is doing so well, and then you open source it. So I think I know what my answer was. I gave it before, but still interestingEric [00:47:29]: to hear what you think. GeoHot said back, I don't know who he was up to at this exact moment, but I think on comma AI, all that stuff's open source. And someone had asked him, why is this open source? And he's like, if you're not actually confident that you can go and crush it and build the best thing, then yeah, you should probably keep your stuff closed source. He said something akin to that. I'm probably kind of butchering it, but I thought it was kind of a really good point. And that's not to say that you should just open source everything, because for obvious reasons, there's kind of strategic things you have to kind of take in mind. But I actually think a pretty liberal approach, as liberal as you kind of can be, it can really make a lot of sense. Because that is so validating that one of your competitors is taking your stuff and they're like, yeah, let's just kind of tweak the styles. I mean, clearly, right? I think it's kind of healthy because it keeps, I'm sure back at HQ that day when you saw that, you're like, oh, all right, well, we have to grind even harder to make sure we stay ahead. And so I think it's actually a very useful, motivating thing for the teams. Because you might feel this period of comfort. I think a lot of companies will have this period of comfort where they're not feeling the competition and one day they get disrupted. So kind of putting stuff out there and letting people push it forces you to face reality soon, right? And actually feel that incrementally so you can kind of adjust course. And that's for us, the open source version of Bolt has had a lot of features people have been begging us for, like persisting chat messages and checkpoints and stuff. Within the first week, that stuff was landed in the open source versions. And they're like, why can't you ship this? It's in the open, so people have forked it. And we're like, we're trying to keep our servers and GPUs online. But it's been great because the folks in the community did a great job, kept us on our toes. And we've got to know most of these folks too at this point that have been building these things. And so it actually was very instructive. Like, okay, well, if we're going to go kind of land this, there's some UX patterns we can kind of look at and the code is open source to this stuff. What's great about these, what's not. So anyways, NetNet, I think it's awesome. I think from a competitive point of view for us, I think in particular, what's interesting is the core technology of WebContainer going. And I think that right now, there's really nothing that's kind of on par with that. And we also, we have a business of, because WebContainer runs in your browser, but to make it work, you have to install stuff from NPM. 
You have to make CORS bypass requests, like connected databases, which all require server-side proxying or acceleration. And so we actually sell WebContainer as a service. One of the core reasons we open-sourced kind of the core components of Bolt when we launched was that we think that there's going to be a lot more of these in-your-browser AI codegen experiences, kind of like what Anthropic did with Artifacts and Claude. By the way, Artifacts uses WebContainers. Not yet. No, yeah. Should I strike that? I think that they've got their own thing at the moment, but there's been a lot of interest in WebContainers from folks doing things in that sort of realm and in the AI labs and startups and everything in between. So I think there'll be, I imagine, over the coming months, there'll be lots of things being announced with folks kind of adopting it. But yeah, I think effectively...

Swyx [00:50:35]: Okay, I'll say this. If you're a large model lab and you want to build sandbox environments inside of your chat app, you should call Eric.

Itamar [00:50:43]: But wait, wait, wait, wait, wait, wait. I have a question about that. I think OpenAI, they felt that people are not using their model as they would want to. So they built ChatGPT. But I would say that ChatGPT now defines OpenAI. I know they're doing a lot of business from their APIs, but still, is this how you think? Isn't Bolt.new your business now? Why don't you focus on that instead of the...

Swyx [00:51:16]: What's your advice as a founder?

Eric [00:51:18]: You're right. And so going into it, we, candidly, we were like, Bolt.new, this thing is super cool. We think people are stoked. We think people will be stoked. But we were like, maybe that's allowed. Best case scenario, after month one, we'd be mind blown if we added a couple hundred K of ARR or something. And we were like, but we think there's probably going to be an immediate huge business. Because there was some early pull from folks wanting to put WebContainer into their product offerings, kind of similar to what Bolt is doing or whatever. We were actually prepared for the inverse outcome here. But I mean, well, I guess we've seen pull on both. But I mean, what's happened with Bolt, and you're right, it's actually the same strategy as like OpenAI or Anthropic, where what ChatGPT is to OpenAI's APIs, Bolt is to WebContainer. And so we've kind of taken that same approach. And we're seeing, I guess, some of the similar results, except right now, the revenue side is extremely lopsided to Bolt.
Most things, you have this kind of the TechCrunch launch of initiation and then the trough of sorrow. And if there's going to be a downtrend, it's just not coming yet. Now that we're kind of looking ahead, we're six weeks in. So now we're getting enough confidence in our convictions to go, okay, this se

Training Data
Dust's Gabriel Hubert and Stanislas Polu: Getting the Most From AI With Multiple Custom Agents

Training Data

Play Episode Listen Later Nov 26, 2024 63:07


Founded in early 2023 after spending years at Stripe and OpenAI, Gabriel Hubert and Stanislas Polu started Dust with the view that one model will not rule them all, and that multi-model integration will be key to getting the most value out of AI assistants. In this episode we'll hear why they believe the proprietary data you have in silos will be key to unlocking the full power of AI, get their perspective on the evolving model landscape, and how AI can augment rather than replace human capabilities.

Hosted by: Konstantine Buhler and Pat Grady, Sequoia Capital

00:00 - Introduction
02:16 - One model will not rule them all
07:15 - Reasoning breakthroughs
11:15 - Trends in AI models
13:32 - The future of the open source ecosystem
16:16 - Model quality and performance
21:44 - “No GPUs before PMF”
27:24 - Dust in action
37:40 - How do you find “the makers”
42:36 - The beliefs Dust lives by
50:03 - Keeping the human in the loop
52:33 - Second time founders
56:15 - Lightning round

Rehash: A Web3 Podcast
S10 E5 | Understanding Crypto Protocol Security w/Laurence Day (Wildcat Finance)

Rehash: A Web3 Podcast

Play Episode Listen Later Nov 21, 2024 53:30


In this episode of Rehash, Dr. Laurence E. Day, founder of Wildcat Finance and creator of the infamous West Ham group chat, joins us to clear up rumors about his alleged crypto hacks and share more about what he's working on at Wildcat Finance, which aims to facilitate under-collateralized borrowing on the Ethereum blockchain. Laurence also shares his thoughts on protocol security, the impact of social engineering attacks, and the future of credit in DeFi. Last but certainly not least, we chat about the role and unique culture of the West Ham group chat in the crypto community.

COLLECT THIS EPISODE

SUBSCRIBE TO REHASH PODCAST CLUB (RPC)

FOLLOW REHASH: Twitter | Warpcast (Farcaster) | TikTok | Instagram | Newsletter | Rehash Podcast Club (RPC) | Diana Chen (Host)

IMPORTANT LINKS: Laurence Day (Twitter) | Wildcat Finance

TIMESTAMPS
0:00 Intro
1:05 Laurence's background
10:04 The infamous crypto protocol hacks
16:13 Protocol security insights
25:17 Challenges of onboarding the masses to DeFi
26:54 The UX and PMF debate
29:30 What is Wildcat Finance?
37:52 The importance of credit in DeFi
41:24 What is West Ham?
51:20 Final thoughts and predictions

DISCLAIMER: The information in this video is the opinion of the speaker(s) only and is for informational purposes only. You should not construe it as investment advice, tax advice, or legal advice, and it does not represent any entity's opinion but those of the speaker(s). For investment or legal advice, please seek a duly licensed professional.

Partner Path
E42: Empowering the Next Generation of Founders with Kofi Ampadu (a16z)

Partner Path

Play Episode Listen Later Nov 20, 2024 44:48


This week, we sat down with Kofi Ampadu, a Partner at Andreessen Horowitz who leads the Talent x Opportunity Initiative (TxO). TxO focuses on discovering and supporting high-potential entrepreneurs who have the vision and determination to build transformative businesses but lack traditional access to resources and networks. We explore his experiences working as an engineer at Kraft, the importance of having hard conversations early on, and what drives innovation in CPG companies. Kofi also shares how to optimize a brand for staying power, investing at the intersection of culture and lifestyle, and how TxO defines PMF.
Episode Chapters:
What makes Kofi, Kofi - 2:08
Transition from enterprise to startups - 5:38
SKU'd Ventures - 12:35
Innovation in CPG - 17:25
AI's impact on CPG - 21:00
TxO - 26:00
Thesis at TxO - 30:00
Donor advised fund - 33:30
Quick fire round - 41:53
As always, feel free to contact us at partnerpathpodcast@gmail.com. We would love to hear ideas for content, guests, and overall feedback.
This episode is brought to you by Grata, the world's leading deal sourcing platform. Our AI-powered search, investment-grade data, and intuitive workflows give you the edge needed to find and win deals in your industry. Visit grata.com to schedule a demo today.
Fresh out of Y Combinator's Summer batch, Overlap is an AI-driven app that uses LLMs to curate the best moments from podcast episodes. Imagine having a smart assistant who reads through every podcast transcript, finds the best parts or parts most relevant to your search, and strings them together to form a new curated stream of content - that is what Overlap does. Podcasts are an exponentially growing source of unique information. Make use of it! Check out Overlap 2.0 on the App Store today.

Lenny's Podcast: Product | Growth | Career
Building Wiz: the fastest-growing startup in history | Raaz Herzberg (CMO and VP Product Strategy)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Nov 17, 2024 65:19


Raaz Herzberg is the chief marketing officer and VP of product strategy at Wiz. Wiz hit $100 million ARR within 18 months (the fastest growth in startup history) and, five years in, is generating over $500 million ARR. It also serves over 45% of the Fortune 100. Raaz was one of the first five employees at Wiz, joining as the first product manager, and helped the team pivot to what may be the most intense product-market fit in history. Before Wiz, Raaz led security products at Microsoft, including Azure Sentinel. In our conversation, we discuss:
• How Wiz pivoted from their initial idea and found deep product-market fit
• What Raaz learned about listening to customers
• Why she moved from product to marketing, despite no prior experience
• How she thinks differently as a marketer with a product background
• Lessons learned from scaling a hypergrowth startup like Wiz
• Much more
—
Brought to you by:
• WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs
• Rippling—Automate HR, IT, and finance so you can scale faster
• Cloudinary—The foundational technology for all images and video on the internet
—
Find the transcript at: https://www.lennysnewsletter.com/p/building-wiz-raaz-herzberg
—
Where to find Raaz Herzberg:
• X: https://x.com/raazherzberg
• LinkedIn: https://www.linkedin.com/in/raazh
—
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
—
In this episode, we cover:
(00:00) Raaz's background
(02:54) Early challenges and Wiz's essential pivot
(06:41) Finding product-market fit
(11:31) Lessons from early customer interactions
(14:54) The power in speaking up when you don't understand something
(17:46) How Wiz pivoted from their initial idea
(23:52) Marketing and leadership insights
(34:12) The challenges of being a marketing leader
(28:05) Following the “heat” in your organization
(30:22) How Raaz found success as CMO
(34:01) Common CMO mistakes
(36:23) Creating noise and standing out
(40:28) Embracing failure and taking risks
(44:53) The importance of clear communication
(48:32) The “dummy” explanation
(51:00) Building trust and company culture
(53:45) Contrarian corner
(56:34) Lightning round
—
Referenced:
• Wiz: https://www.wiz.io/
• An inside look at Deel's unprecedented growth | Meltem Kuran Berkowitz (Head of Growth): https://www.lennysnewsletter.com/p/an-inside-look-at-deels-unprecedented
• Velocity over everything: How Ramp became the fastest-growing SaaS startup of all time | Geoff Charles (VP of Product): https://www.lennysnewsletter.com/p/velocity-over-everything-how-ramp
• Assaf Rappaport on LinkedIn: https://www.linkedin.com/in/assafrappaport/
• How LinkedIn became interesting: The inside story | Tomer Cohen (CPO at LinkedIn): https://www.lennysnewsletter.com/p/how-linkedin-became-interesting-tomer-cohen
• Shardul Shah on LinkedIn: https://www.linkedin.com/in/shardul-shah-3589062/
• Doug Leone on LinkedIn: https://www.linkedin.com/in/douglas-leone-a2714/
• Jeff Horing on LinkedIn: https://www.linkedin.com/in/jeffhoring2009/
• RSA Conference: https://www.rsaconference.com/
• Microsoft acquires Adallom to advance identity and security in the cloud: https://blogs.microsoft.com/blog/2015/09/08/microsoft-acquires-adallom-to-advance-identity-and-security-in-the-cloud/
• Imposter syndrome: https://www.psychologytoday.com/us/basics/imposter-syndrome
• Careers at Wiz: https://www.wiz.io/careers
• Setting the Table: The Transforming Power of Hospitality in Business: https://www.amazon.com/Setting-Table-Transforming-Hospitality-Business/dp/0060742763
• No Rules Rules: Netflix and the Culture of Reinvention: https://www.amazon.com/No-Rules-Netflix-Culture-Reinvention/dp/1984877860/
• Gong: https://www.gong.io/
• The Wire on HBO: https://www.hbo.com/the-wire
• Raaz's pen holder recommendation: https://www.paper-republic.com/products/le-loop-penholder
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
—
Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe

Made IT
Strategie per Raggiungere il Product Market Fit con Luca Mastella, Founder Learnn [VIDEO]

Made IT

Play Episode Listen Later Nov 14, 2024 39:08


Il "product market fit" è un obiettivo cruciale per il successo di qualsiasi startup. Luca Mastella ha condiviso la sua esperienza e le strategie che Learnn ha adottato per raggiungere il PMF, sottolineando l'importanza di comprendere le esigenze degli utenti e di validare il prodotto prima di investire risorse significative nello sviluppo. Link per avere uno sconto del 30% sull'abbonamento annuale di Learnn

The Peel
Recruiting From Zero to One with Nakul Mandan, Co-founder of Audacious Ventures

The Peel

Play Episode Listen Later Nov 14, 2024 106:21


Nakul Mandan is the founder of Audacious Ventures. Prior to Audacious, he was a partner at Lightspeed, joining from Battery, which he joined in '09 in the middle of the financial crisis while living in India. This conversation explores his journey immigrating to Silicon Valley and building an early stage venture firm from the ground up. We get into why most VCs aren't helpful with recruiting at the zero to one stage, his thesis on starting an early stage venture firm to help founders hire A+ teams, a crash course on early stage recruiting and building a sales team, and how COVID hit right after he left Lightspeed to raise Audacious Fund 1.
Timestamps:
(00:00) Intro
(03:43) Evolution of VC platform teams
(09:53) How Audacious runs in-house recruiting processes
(15:16) The reason large firms can't help with Seed stage recruiting
(17:06) Immigrating from India to the US mid-financial crisis
(21:59) Silicon Valley's secret weapon
(25:59) The opportunity to start a recruiting-focused Seed firm
(30:14) Raising Audacious $90m Fund 1 in April of 2020
(36:58) The new guard of Seed firms
(39:23) Why $50-75m is the minimum viable institutional fund size
(41:48) How to work with the best founders
(45:30) Navigating deal dynamics, term sheets, and valuations
(52:24) The two hardest parts about starting your own fund
(54:32) Lessons applied raising Audacious $125m Fund 2 in 2023
(58:46) Evolving from a PMF-first to Founder-first investor
(01:02:09) Five traits of force of nature founders
(01:07:05) How to build an A+ team
(01:11:46) The importance of backchanneling
(01:13:54) Why everyone thinks they're a good people reader
(01:14:35) Two most common mistakes in recruiting
(01:20:59) Determining urgency of a customer's problem
(01:22:55) Hiring and scaling your first sales team
(01:25:55) Why marketing is the hardest role to hire for
(01:31:59) What good sales people look like
(01:35:43) How to move up market + how to do pilots
(01:43:40) Why Nakul admires Rafael Nadal
Referenced:
Audacious: https://www.audacious.co/
Nakul's immigration journey: https://www.nakulmandan.com/blog/2024/an-immigrant-living-the-american-dream
Force of nature founders: https://www.nakulmandan.com/blog/2024/traits-i-look-for-in-founders
Early GTM hiring: https://www.nakulmandan.com/blog/2023/initial-gtm-hiring-for-saas-startups
Follow Nakul:
Twitter: https://x.com/nakul
LinkedIn: https://www.linkedin.com/in/nakulmandan
Follow Turner:
Twitter: https://twitter.com/TurnerNovak
LinkedIn: https://www.linkedin.com/in/turnernovak/
Subscribe to my newsletter to get every episode + the transcript in your inbox every week: https://www.thespl.it

25 O'Clock
Dirty Dollhouse

25 O'Clock

Play Episode Listen Later Nov 12, 2024 76:17


Dan traveled up to Newtown, PA to sit with Dirty Dollhouse front-woman Chelsea Mitchell in her shop, Newtown Book & Record Exchange, along with her bassist Josh Machiz. Chelsea talks about growing up in Newtown, being an anxious, book-obsessed kid, and how she found her way into music by playing in the church worship bands and choir (where, as a bonus, she received classical voice training). She also talks about the trajectory of Dirty Dollhouse, beginning as a trio of her, Amber Twait and Vanessa Winters singing country harmonies, and how it morphed into the full band that it is today with members like Josh, Eric Lawry and Pete Hall. Dan, Chelsea, and Josh reminisce about an era in the Philly folk and rock scene that they all inhabited around the same time, and give their kudos to people who have helped along the way. Dirty Dollhouse's new LP, 'The End', is out now and available wherever you get digital music. The band will play a release show at Johnny Brenda's on November 22nd with Dominy and Bren. Also: Dan is getting ready to send the $$ from 'Want To Play A Song?: Live On 25 O'Clock Vol. 1' to Philly Music Fest (they're getting ready to announce their donation numbers very soon), so get yourself the comp exclusively on Bandcamp if you haven't already. All proceeds go to PMF's charitable giving to music education charities in Philadelphia.

Partner Path
E41: Setting Sail at Optimist Ventures with Teddy Himler

Partner Path

Play Episode Listen Later Oct 30, 2024 35:43


This week, we sat down with Teddy Himler, General Partner at Optimist Ventures. Optimist is an early-stage fund backing founders focused on critical technologies at the inflection stage. We explore his experiences investing in SE Asia, taking less PMF risk in emerging markets, and signals that led him to start his own fund. Teddy also shares his perspectives on technology's secular and cyclical trends, the importance of owning your domain, and AI breakthroughs in healthcare, insurance, and robotics. During the episode, we reference Thomas Laffont's All-In Summit presentation.
Episode Chapters:
Investing in internet 1.0 - 1:55
Historian investing style - 5:35
Valuations in emerging markets - 7:50
Launching a fund - 12:00
Betting on the next 30 years - 16:14
Differentiation as an emerging manager - 19:10
Market adoption of AI - 22:39
LP learnings - 25:35
Quick fire round - 31:20
As always, feel free to contact us at partnerpathpodcast@gmail.com. We would love to hear ideas for content, guests, and overall feedback.
This episode is brought to you by Grata, the world's leading deal sourcing platform. Our AI-powered search, investment-grade data, and intuitive workflows give you the edge needed to find and win deals in your industry. Visit grata.com to schedule a demo today.
Fresh out of Y Combinator's Summer batch, Overlap is an AI-driven app that uses LLMs to curate the best moments from podcast episodes. Imagine having a smart assistant who reads through every podcast transcript, finds the best parts or parts most relevant to your search, and strings them together to form a new curated stream of content - that is what Overlap does. Podcasts are an exponentially growing source of unique information. Make use of it! Check out Overlap 2.0 on the App Store today.

Lenny's Podcast: Product | Growth | Career
The original growth hacker reveals his secrets | Sean Ellis (author of “Hacking Growth”)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Sep 5, 2024 104:25


Sean Ellis is one of the earliest and most influential thinkers and operators in growth. He coined the term “growth hacking,” invented the ICE prioritization framework, was one of the earliest people to use freemium as a growth lever, and, most famously, developed the Sean Ellis Test for product-market fit (which a large percentage of founders use today to track if they've found PMF). Over the course of his career, Sean was head of growth at Dropbox and Eventbrite; helped companies like Microsoft and Nubank refine their growth strategy; was on the founding team of LogMeIn, which sold for over $4 billion; and is the author of one of the most popular growth books of all time, Hacking Growth, which has sold over 750,000 copies. In our conversation, he shares:
• The proper use of the Sean Ellis Test for measuring product-market fit
• How to increase your activation and retention rates
• How to select the right North Star metric for your business
• Case studies from his work growing Dropbox and other products
• How growth strategy has changed over the past decade
• How AI is impacting growth efforts
• Much more
—
Brought to you by:
• Gamma—A new way to present, powered by AI
• CommandBar—AI-powered user assistance for modern products and impatient users
• Merge—A single API to add hundreds of integrations into your app
—
Find the transcript and show notes at: https://www.lennysnewsletter.com/p/the-original-growth-hacker-sean-ellis
—
Where to find Sean Ellis:
• X: https://x.com/seanellis
• LinkedIn: https://www.linkedin.com/in/seanellis/
• Website: https://www.seanellis.me/
• Substack: https://substack.com/@seanellis
—
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
—
In this episode, we cover:
(00:00) Sean's background
(02:18) The Sean Ellis test explained
(06:28) The 40% rule
(08:06) Case study: improving product-market fit
(12:34) Understanding and leveraging customer feedback
(16:50) Challenges and nuances of product-market fit
(22:22) When to use the Sean Ellis Test
(23:46) When not to use the Sean Ellis Test and other caveats
(27:13) Defining your own threshold and how the Sean Ellis Test came about
(36:13) Tools for implementing the survey
(37:30) Transitioning from surveys to retention cohorts
(39:13) Nubank's approach
(40:18) Case study: Superhuman's strategy for increasing product-market fit
(45:18) Coining the term “growth hacking”
(48:24) How to approach growth
(57:25) Improving activation and onboarding
(01:05:17) Identifying effective growth channels
(01:10:28) The power of customer conversations
(01:12:43) Developing the Dropbox referral program
(01:14:47) The importance of word of mouth
(01:15:23) Freemium models and engagement
(01:19:21) Picking a North Star metric
(01:24:30) The evolution of growth strategies
(01:27:12) The ICE and RICE frameworks
(01:30:11) AI's role in growth and experimentation
(01:32:52) Final thoughts and lightning round
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
—
Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
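Since the episode leans on the Sean Ellis Test's 40% rule and the ICE framework, here is a minimal, hypothetical TypeScript sketch of how those two scores are commonly computed. The field names, the 1-10 scales, and the choice to average the ICE components are assumptions for illustration (some teams multiply or weight them instead); the 40% threshold is the benchmark Sean popularized.

```ts
// Sketch of the Sean Ellis PMF survey score and ICE prioritization (illustrative only).

type SurveyAnswer = "very disappointed" | "somewhat disappointed" | "not disappointed";

// Sean Ellis Test: what share of surveyed users would be "very disappointed"
// if they could no longer use the product? 40%+ is the commonly cited bar.
function pmfScore(answers: SurveyAnswer[]): number {
  if (answers.length === 0) return 0;
  const veryDisappointed = answers.filter((a) => a === "very disappointed").length;
  return veryDisappointed / answers.length;
}

interface GrowthIdea {
  name: string;
  impact: number;     // 1-10: how much it could move the target metric
  confidence: number; // 1-10: how sure we are it will work
  ease: number;       // 1-10: how cheap and fast it is to test
}

// ICE: rank experiment ideas by the average of Impact, Confidence, and Ease.
function rankByIce(ideas: GrowthIdea[]): GrowthIdea[] {
  const ice = (i: GrowthIdea) => (i.impact + i.confidence + i.ease) / 3;
  return [...ideas].sort((a, b) => ice(b) - ice(a));
}

// Example usage with made-up data.
const answers: SurveyAnswer[] = [
  "very disappointed", "very disappointed", "somewhat disappointed",
  "not disappointed", "very disappointed",
];
console.log(`PMF score: ${(pmfScore(answers) * 100).toFixed(0)}%`); // 60%, above the 40% bar

const backlog = rankByIce([
  { name: "Referral program", impact: 8, confidence: 6, ease: 4 },
  { name: "Onboarding checklist", impact: 6, confidence: 8, ease: 9 },
]);
console.log(backlog.map((i) => i.name)); // ["Onboarding checklist", "Referral program"]
```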

Balancing Chaos Podcast
Biohacking Your Way to Longevity with Kayla Barnes

Balancing Chaos Podcast

Play Episode Listen Later Sep 1, 2024 68:59


On this week's episode of the Balancing Chaos Podcast, Kelley sits down with certified brain-health coach Kayla Barnes to discuss all things biohacking, brain health, and how we can live as long and as well as possible. Kayla is an entrepreneur and biohacker on a mission to help her clients and community achieve optimal health through science- and research-backed approaches. She has been named one of the top longevity leaders globally and has been featured in Forbes, Thrive Global, Byrdie, and more. Barnes has a background in nutrition, trained under the renowned brain doctor Dr. Daniel Amen, and owns the wellness space LYV. Through their conversation, Kelley and Kayla dive into the latest science-backed wellness tools to elevate cognitive function and eliminate brain fog, and explain how understanding the role hormones and gut health play in brain health lets us optimize the way we think and feel. From the testing you should be doing on your gut, hormones, and brain, to the best diet for brain health, how to lower your toxic burden, and which biohacking tools are actually worth the money and time, we go over everything you've ever wanted to know about brain health, mood, and mental health.
To connect with Kelley click HERE
To book a lab review click HERE
To connect with Kayla click HERE