Podcasts about CTO

  • 9,316 PODCASTS
  • 28,354 EPISODES
  • 39m AVG DURATION
  • 5 DAILY NEW EPISODES
  • Mar 16, 2026 LATEST

POPULARITY

(Popularity trend chart, 2019–2026)

    Best podcasts about CTO


    Latest podcast episodes about CTO

    Gig Gab - The Working Musicians' Podcast
    From Wall Street Hacker to Music Mogul: Mike Grande's Journey

    Mar 16, 2026 • 64:07 • Transcription Available


    You get a front-row seat to how Michael Grande turned hard-won tech chops and late-night studio hacks into real music-business wins. From escaping NAMM chaos and leveraging smart PR and management, to transforming a throwaway “stupid idea” into Card Chords—an Amazon-topping guitar tool born from a Cricut, Guitar Center testing, and sheer persistence—you see how necessity, experimentation, and saying yes the first time landed him in Jimi Hendrix's old bedroom at Electric Lady Studios, shredding in the lineage of Vai and Satriani, and inventing Tone Picks on the fly. Along the way, you're reminded that when you know you're right, you embrace it, protect your IP, and keep swinging big—whether that's launching music schools, eyeing Shark Tank with a bold offer, or pivoting your career from Wall Street CTO and Certified Ethical Hacker to full-on guitar innovator.

    Then you're pushed to rethink how you teach, lead, and build your own music brand. You learn why great schools and studios run on clear mission statements, strong unique selling propositions, and a coaching mindset that focuses on the student, not the curriculum—getting them hooked on the songs they actually want to play, then turning them toward what they need. You see how asking potential customers for their own answers, treating every audience like they matter, and showing up like a coach instead of a teacher all point to one core operating principle: you're never off-duty, because you Always Be Performing—ALWAYS.

    00:00:00 Gig Gab 525 – Monday, March 16th, 2026. March 16th: Freedom of Information Day. Guest co-host: Michael Grande from Card Chords and more
    00:02:14 Getting out of NAMM
    00:03:10 Have a good PR guy! Christopher Buttner
    00:04:15 Hey, NAMM: How high can I go?
    00:06:09 Can you afford NOT to hire a manager? Or a PR person? Our Mistakes are Our Tuition – Business Brain
    00:08:04 COVID vaccines lead to Card Chords. Mike was a (very successful) Certified Ethical Hacker & CTO on Wall Street
    00:11:09 “Dad – come up with an idea to teach people how to play guitar.” “That's a stupid idea” – ignore, and move on. Bought a Cricut machine, built the prototype, and tested it on hundreds of guitars at Guitar Center. Came out on December 21st and became Amazon's #1 Musical Accessories item within 30 days. Also includes an eBook to teach you Beatles, Bon Jovi, and Guns N' Roses songs WITH Card Chords
    00:16:35 Born of necessity!
    00:18:39 The birth of Tone Picks. Story time: I didn't bring a 12-string to Electric Lady Studios at 3am, so I taped two picks together to simulate a 12-string sound
    00:21:41 How did you get on the list of Electric Lady Studios session players? Mike was a shredder after Steve Vai, Joe Satriani, etc.
    00:22:27 Recording in Jimi Hendrix's old bedroom at Electric Lady Studios! Say yes the first time!

    Sponsors:
    00:25:39 SPONSOR: Factor, America's #1 Ready-To-Eat Meal Kit, can help you fuel up fast with flavorful and nutritious ready-to-eat meals delivered straight to your door. Visit FactorMeals.com/giggab50off and use code giggab50off for 50% off!
    00:27:22 SPONSOR: Gusto. Get three months free when you run your first payroll when you start at https://gusto.com/giggab
    00:28:51 Mike uses Gusto for his music schools!

    00:30:33 Running music schools. Mike's book: From Teacher to Coach (And Why You Would NEVER Want to Be a Teacher). Taught private lessons, then students wanted more, so Mike started The Staten Island School of Rock
    00:33:37 Mike's coaching methods are different: learning hands-on, getting students hooked on the songs they want to play, THEN turning them around
    00:34:42 You gotta be juiced about playing the songs. Gig Gab 500 with Skylar and the drum coaching story
    00:37:16 You need to have a mission statement. Mike's: “We build the confidence and self-esteem through music lessons.” You need a Unique Selling Proposition!
    00:39:30 Mike's Unique Selling Proposition: never answer the question… ask the potential customer for the answer!
    00:41:48 A teacher focuses on the curriculum, a coach focuses on the student
    00:42:44 Mary Fanaro's Rwanda Rocks. Rwanda's Minister of Education: The children of Rwanda don't need teachers, they need coaches
    00:48:08 When you know you're right, embrace it
    00:49:45 Always Be Performing…ALWAYS!
    00:53:18 An audience wants to be treated
    00:55:23 We're always wearing
    00:57:54 The Chinese stole Mike's IP for Card Chords. Mike's got a new product in the running for Shark Tank; his offer to Shark Tank will be 20% of his company for $1
    01:03:23 Gig Gab 525 Outro

    Follow Michael Grande: CardChords.com. Contact Gig Gab: @GigGabPodcast on Instagram, feedback@giggabpodcast.com. Sign up for the Gig Gab Mailing List. The post From Wall Street Hacker to Music Mogul: Michael Grande's Journey – Gig Gab 525 appeared first on Gig Gab.

    The CyberWire
    Christian Lees: It's not always textbook. [CTO] [Career Notes]

    Mar 15, 2026 • 9:53


    Please enjoy this encore of Career Notes. Christian Lees, CTO at Resecurity, shares his story and insight on coming into the cybersecurity world. He considers himself a late bloomer because he did not go to college until he was 23. He wasn't sure of what he wanted to do, and a family friend gave him a computer and the rest was history, he says. He fell in love with computers and started working at different companies trying to get ahead. He says it's not always textbook, and sometimes you just need to cut your teeth on something to get where you're going. Throughout his journey, he was constantly questioning whether he made the right decision, and in the end he says you have to be willing to "define friction points in it, you may join security field, not knowing what you're gonna do, but by being that curious person and breaking things and putting it back together, you'll find the right way and just never stop being curious." We thank Christian for sharing his story. Learn more about your ad choices. Visit megaphone.fm/adchoices


    Follow The Brand Podcast
    The Agent Has an Identity with Mark Lynd and Grant McGaugh

    Mar 14, 2026 • 43:14 • Transcription Available


    Agentic AI stops being “just software” the moment it can take actions across your systems, and that's where leadership, cybersecurity, and trust collide. We sit down with Mark Lynd, a globally recognized cybersecurity and AI thought leader and former CIO, CTO, and CISO, to get specific about what enterprise teams misunderstand when they talk about autonomous AI agents. The promise is speed and cost savings; the reality is permissions, accountability, and a threat landscape that changes when agents have identities and privileges.

    We dig into why “identity is the new perimeter” in an AI-driven world and how attackers target the keys to the kingdom: access, escalated privileges, and the ability to work around security controls. Mark shares how common IAM problems like permission sprawl and forgotten access can become even more dangerous with agents, especially as organizations scale from a few pilots to hundreds or thousands of AI agents. We also talk governance frameworks like NIST and ISO, why frameworks alone don't equal evaluation criteria, and how boards push for innovation while regulators demand control.

    If you're a CIO, CISO, security leader, or board advisor trying to adopt agentic AI responsibly, this conversation offers a grounded approach: start with small, auditable use cases, keep a real human-in-the-loop model, align every agent to business goals, and build trust through repeatable wins. Listen, share this with a teammate, and subscribe, plus leave a review with your answer: what's the first workflow you would trust an AI agent to run?

    Thanks for tuning in to this episode of Follow The Brand! We hope you enjoyed learning about the latest trends and strategies in Personal Branding, Business and Career Development, Financial Empowerment, Technology Innovation, and Executive Presence. To keep up with the latest insights and updates, visit 5starbdm.com. And don't miss Grant McGaugh's new book, First Light — a powerful guide to igniting your purpose and building a BRAVE brand that stands out in a changing world: https://5starbdm.com/brave-masterclass/ See you next time on Follow The Brand!

    Grumpy Old Geeks
    737: Monetizable Content

    Mar 13, 2026 • 64:27


    In this week's show we start with FOLLOW UP: The world keeps trying to protect kids online — Indonesia just joined Australia, Spain, and Malaysia in banning social media for under-16s, while COPPA 2.0 sailed through the US Senate unanimously. Meanwhile, Roblox is using AI to clean up its chat, because apparently "Hurry TF up" is the hill they've chosen to die on — even as they're still dealing with the whole "pedophile problem" thing from January. On the AI copyright front, Gracenote is the latest company to sue OpenAI for helping itself to proprietary data, joining a growing queue of plaintiffs who apparently didn't get the memo that everything is training data now.

    IN THE NEWS: Anthropic is suing the Pentagon after being labeled a "supply chain risk" — apparently because the CEO said AI shouldn't be used for mass surveillance or autonomous weapons, which the Trump administration heard as fighting words. The delicious irony: the Pentagon is still running Claude in active operations while trying to phase it out. Speaking of active operations, investigators now think a missile strike on an Iranian girls' school may have been triggered by bad AI-generated intelligence from that same Claude-based system. So yes, the autocomplete that hallucinates your grocery list is also maybe accidentally bombing schools. Meta's Oversight Board is begging the company to get serious about AI-generated content after a fake war video from a Filipino fake news account racked up 700K views — while separately, Zuckerberg dropped cash on Moltbook, a "social network for AI agents" that turned out to be mostly humans larping as bots and had a security flaw that exposed everyone's API keys. The guy who built it basically vibe-coded the whole thing. Meta's own CTO said he didn't "find it particularly interesting." And yet.

    Oracle is hemorrhaging jobs and drowning in debt chasing AI dreams, its stock down 50% from peak — a timely reminder that "AI will replace workers" is currently manifesting as "companies set money on fire and lay people off to pay the electric bill." Researchers confirmed AI is homogenizing human thought and creativity — a thing some of us have been screaming since day one. A DOGE engineer allegedly walked out of the Social Security Administration with databases containing personal info on 500 million Americans on a thumb drive. The Ig Nobel Prize is relocating to Switzerland because it's no longer safe to invite international guests to America. Nintendo is suing the US government to get its tariff money back. SETI thinks it may have been accidentally filtering out alien signals due to space weather. And Pokémon Go players unknowingly spent a decade building a centimeter-accurate surveillance map of Earth's cities that's now guiding pizza delivery robots — which, honestly, tracks.

    In APPS & DOODADS: The GOG clan in Clash Royale just hit eight years old — respect. OpenAudible is the cross-platform audiobook manager your Audible library deserves, especially if you've got over a thousand books sitting there judging you.

    And finally in MEDIA CANDY: Monarch: Legacy of Monsters Season 2 is here, and pretty beige. Live Nation settled its DOJ antitrust case for $200 million, kept Ticketmaster, and avoided a breakup — meanwhile court documents revealed employees joking about "robbing fans blind" and gouging "stupid" customers, which explains basically every concert ticket you've bought in the last decade. YouTube is now officially the world's largest media company at $62 billion in revenue. Bluesky's CEO is stepping down, which is either a bad sign or just the natural order of "person who built the cool thing hands it to the person who scales the cool thing." Dead Set — Charlie Brooker's 2008 zombie-in-the-Big-Brother-house miniseries — is worth a watch if you haven't. And trailers dropped for Daredevil: Born Again Season 2 (March 24th), The Boys final season (April 8th), and The Super Mario Galaxy Movie (April 1st — yes, really).

    Sponsors:
    DeleteMe - Get 20% off your DeleteMe plan when you go to JoinDeleteMe.com/GOG and use promo code GOG at checkout.
    CleanMyMac - Get Tidy Today! Try 7 days free and use code OLDGEEKS for 20% off at clnmy.com/OLDGEEKS
    Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
    SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
    1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password

    Show notes at https://gog.show/737
    Watch on YouTube: https://youtu.be/DgSYnFF6twE

    FOLLOW UP
    Indonesia announces a social media ban for anyone under 16
    Anthropic Sues Pentagon
    Metadata company Gracenote is the latest to sue OpenAI for copyright infringement
    Roblox introduces real-time AI-powered chat rephraser for inappropriate language

    IN THE NEWS
    COPPA 2.0 passes the Senate again, unanimously this time
    AI Error Likely Led to Iran Girl's School Bombing
    The Oversight Board says Meta needs new rules for AI-generated content
    Mark Zuckerberg Decides Meta Needs More Slop, Buys the Social Network for AI Agents
    Oracle Axing Huge Number of Jobs as AI Crisis Intensifies
    You can (sort of) block Grok from editing your uploaded photos
    Researchers Say AI Is Homogenizing Human Expression and Thought
    Social Security watchdog investigating claims that DOGE engineer copied its databases
    Nintendo is suing the US government over Trump's tariffs
    SETI Thinks It Might Have Missed a Few Alien Calls. Here's Why
    Ig Nobel Ceremony Relocates to Europe Amid Safety Concerns in Trump's America

    APPS & DOODADS
    Clash Royale
    OpenAudible
    Bluesky's CEO is stepping down after nearly 5 years
    How Pokémon Go is giving delivery robots an inch-perfect view of the world
    Robot Escorted Away By Cops After Terrorizing Old Woman

    MEDIA CANDY
    Monarch: Legacy of Monsters Season 2
    Live Nation settlement avoids breakup with Ticketmaster
    Court documents reveal Live Nation employees joking about robbing, gouging "stupid" fans
    YouTube Is the World's Largest Media Company, MoffettNathanson Says
    Paradise Season 2
    DAREDEVIL: Born Again Season 2 Official Teaser Trailer 2 (2026)
    The Boys Final Season Trailer
    The Super Mario Galaxy Movie | Final Trailer
    Dead Set

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    The Daily Scoop Podcast
    ChatGPT, Gemini, Copilot approved for use with Senate data

    Mar 13, 2026 • 5:07


    Staff in the upper chamber of Congress now have the go-ahead to use Senate data with three popular generative AI chatbots thanks to approval from an office that oversees the legislative body's administrative operations. A recent notice from the Senate Sergeant at Arms' chief information officer announced the approvals for Microsoft's Copilot, Google's Gemini, and OpenAI's ChatGPT, expanding on previous policies. That memo was previously reported by the New York Times and independently obtained by FedScoop. According to the document, Copilot is integrated into the Senate's Microsoft 365 environment already, and more information about licenses for Gemini Chat and ChatGPT Enterprise will be coming within the next 30 days. Each Senate employee will be able to get one license for either Gemini or ChatGPT at no cost. Approval of the tools comes as entities across the federal government — including Congress, executive agencies, and the federal judiciary — have been navigating their own use of the growing technology to reduce administrative toil and assist staff. The Senate, for its part, previously allowed ChatGPT, Google Bard, and Microsoft's Bing AI chat in 2023 at “moderate” risk levels, but they were only for research and evaluation or use with non-sensitive data. The new approvals are less restrictive on the type of data that can be ingested, opening the door to more widespread use.

    The architect of the Department of Veterans Affairs' artificial intelligence program and digital modernization strategy is leaving the agency after nearly nine years. Charles Worthington, the VA's chief AI officer and CTO, said in a LinkedIn post Thursday that “the time is right” for him to step down from his posts. A Harvard grad, Worthington joined the federal government in 2013 as a Presidential Innovation Fellow. He parlayed that experience into a role as senior advisor to the federal CTO, where he co-created the U.S. Digital Service following the disastrous rollout of HealthCare.gov. After nearly three years with USDS, including as the White House tech office's acting deputy administrator, Worthington moved on to the VA in 2017. In addition to leading the agency's digital modernization work, he also supported its adoption of commercial cloud infrastructure, oversaw the creation of vets.gov, rebuilt va.gov and launched VA Notify, per a congressional bio and his LinkedIn profile. In addition to boosting digital services for veterans, Worthington worked in recent years to spur AI adoption across the agency. Under his watch, the VA emerged as one of the most prolific AI users in the federal government, with an inventory that's now 367 use cases strong. Included in that tally is the agency's VA GPT chatbot. Worthington, who also served on the Technology Modernization Fund board for four years, didn't reveal in his LinkedIn post where he's headed next. But he said his time with the VA “has been the most important work” of his career.

    The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Soundcloud, Spotify and YouTube.

    Sustain
    Episode 286: Jack Skinner of PyCon AU and Regional Confs

    Mar 13, 2026 • 40:05


    Guest: Jack Skinner | Panelist: Richard Littauer

    Show Notes: In this episode of Sustain, host Richard Littauer talks with Jack Skinner, PyCon AU organizer and freelance consultant/fractional CTO, to explore why regional conferences matter so much to the long-term health of open source communities. Their conversation looks at how events like PyCon AU do far more than host talks: they create local connections, nurture future leaders, support first-time speakers, and help sustain the broader Python ecosystem in ways that global conferences alone cannot. Drawing on Jack's experience as a conference organizer and community builder, the episode offers a behind-the-scenes look at the challenges of running volunteer-led events, from sponsorships and logistics to burnout, accessibility, and building a stronger pipeline of future organizers. Press download now to hear more!

    [00:01:49] Jack shares his background and how he got involved in Python and event organizing.
    [00:02:48] We hear about Jack's first PyCon AU experience.
    [00:04:14] Jack describes PyCon AU, who it serves, and how it's changed after COVID.
    [00:07:01] Why do regional conferences exist alongside PyCon US?
    [00:09:24] Jack talks about what makes Australia and New Zealand different as conference communities.
    [00:10:55] PyCon AU's attendance goals are discussed as Jack mentions his big goal is to bring attendance back to roughly 500-600 people, restoring pre-pandemic strength.
    [00:12:04] The discussion turns to conference structure: tracks, workshops, and sponsor interest, with Jack emphasizing sponsorship is not just about money.
    [00:14:54] Richard asks how organizers know whether conferences help people learn, connect, or build community. Jack explains how they're measuring community impact beyond “good vibes” and rebuilding local Python communities.
    [00:17:34] Jack explains PyCon AU is trying to build a future organizer pipeline by letting people observe how conference planning works and introduces his proposed program/project, “shadow team.”
    [00:19:09] Another project Jack is working on is documenting the behind-the-scenes work of organizing the conference through long-form writing.
    [00:20:38] Jack admits he feels imposter syndrome because he's not paid to write Python; his contribution is centered on the sociotechnical side.
    [00:23:20] PyCon AU's independence from government and institutions is discussed, and how the conference community is globally aware, even if locally focused.
    [00:27:05] Call for proposals details, deadline is March 29, and the in-person focus for this year's event are mentioned. Richard discusses the return of the academic track and Jack details more info on poster sessions and workshop submissions.
    [00:32:08] Volunteering and buying tickets are explained and why you should buy tickets early if you can.

    Quotes
    [00:32:20] “Volunteering is an awesome way to be involved in PyCon.”

    Spotlight
    [00:35:16] Richard's spotlight is two of his lecturers at the University of Edinburgh, Simon Kirby and Andrew Smith, who introduced him to Python.
    [00:35:55] Jack's spotlight is two companion projects: pretalx and pretix.

    Links
    SustainOSS: podcast@sustainoss.org, richard@sustainoss.org
    SustainOSS Discourse
    SustainOSS Mastodon
    SustainOSS Bluesky
    SustainOSS LinkedIn
    Open Collective - SustainOSS (Contribute)
    Richard Littauer Socials
    Jack Skinner LinkedIn
    Jack Skinner Website
    PyCon AU, August 26-30, 2026, Brisbane
    PyCon AU News & Updates
    Sustain Podcast - Episode 75: Deb Nicholson on the OSI, the future of open source, and SeaGL
    Sustain Podcast - Episode 137: A How-to Guide for Contributing to Open Source as an Employee, for Corporations (featuring Deb Nicholson as Host)
    Guido van Rossum
    Whale song shows language-like statistical structure
    Simon Kirby (co-lead author)
    pretalx (GitHub)
    pretix (GitHub)

    Sponsor: CURIOSS

    Credits: Produced by Richard Littauer. Edited by Paul M. Bahr at Peachtree Sound. Show notes by DeAnn Bahr, Peachtree Sound. Special Guest: Jack Skinner.

    Second in Command: The Chief Behind the Chief
    Ep. 561 - FAN FAVORITE | Mindbody Former President & CTO Sunil Rajasekar - How To Build a Legendary Culture Now

    Mar 12, 2026 • 44:45


    What if the biggest threat to your company's growth is how you show up every day—burned out, distracted, or just going through the motions? Most COOs know the cost of chaos, but few stop to ask what's driving it inside themselves.

    Enter Sunil Rajasekar, former President and CTO of Mindbody, who sits down with Cameron Herold for a no-holds-barred conversation about burnout, resilience, and building a global wellness empire with gratitude at its core. From the backstage mechanics of a platform used by millions to the secret link between world-changing tech and personal wellbeing, this episode delivers the eye-opening truths every leader needs. Listen now before your stress becomes your biggest blind spot. Actionable, exclusive, and radically honest. These insights aren't just a luxury, they're your lifeline.

    Timestamped Highlights
    [00:00] – What nobody tells you about burnout (until it's too late)
    [00:03:31] – Why Sunil reversed the script: an origin story you didn't expect
    [00:06:15] – The real reasons high-powered execs flame out (and how Sunil rebuilt himself)
    [00:14:23] – Two CEOs, one mission: Navigating seismic leadership transitions
    [00:16:50] – Under the hood of Mindbody: Why perfection on the surface means wrestling chaos behind the scenes
    [00:24:28] – The make-or-break moment for small businesses—and why most lenders get it wrong
    [00:29:32] – The war for tech talent and how to keep your team's soul intact
    [00:33:35] – What COVID proved about wellness, grit, and the “missionary vs. mercenary” divide
    [00:40:50] – The gratitude ritual that saved Sunil—and could save you

    About the Guest
    Sunil Rajasekar is the former President and Chief Technology Officer at Mindbody, the global platform powering the wellness industry in over 100 countries. With more than two decades leading technology and product transformation at eBay, Intuit, Lithium Technologies, and then Mindbody, Sunil is renowned for scaling businesses that shape industries without sacrificing the humanity at their core. His mission: Connect the world to wellness, one breakthrough at a time.

    Late Confirmation by CoinDesk
    The Blockspace Pod: An Update on TeraWulf's AI Expansion w/ Nazar Khan

    Mar 12, 2026 • 53:05


    TeraWulf is now building 5 sites for AI workloads after recent acquisitions in Kentucky and Maryland. Get your tickets to OPNEXT 2026 before prices increase! Join us on April 16 in NYC for technical discussions, investor talks, and intimate conversation with the brightest minds in Bitcoin. Welcome back to The Blockspace Podcast! Today, Nazar Khan, CTO of TeraWulf, joins us to talk about the company's expansion into AI and HPC. We dive deep into their new sites in Kentucky and Maryland, the strategy behind repurposing brownfield industrial infrastructure, and why battery storage is the key to grid reliability. Nazar explains the transition from bitcoin mining to AI loads and how TeraWulf plans for its bitcoin mining fleet amid the AI expansion. Subscribe to the newsletter! https://newsletter.blockspacemedia.com

    Notes:
    * Targeting 480MW capacity in Kentucky by H2 2027.
    * Maryland site to feature 1GW load and generation.
    * 500MW battery storage planned for Maryland campus.
    * Targeting energy availability in MD by 2028-2029.

    Timestamps:
    00:00 Start
    04:07 Kentucky site
    06:33 Maryland site
    08:25 Redundant power
    10:13 Battery storage
    13:40 New generation
    14:49 Tenants for sites
    17:08 Brownfield sites
    19:28 Lake Mariner & Abernathy sites
    24:28 Geopolitical concerns
    27:49 Managing a power plant
    33:55 168 megawatts
    40:55 Local pushback on new data centers
    48:21 Bitcoin mining future

    Caveat
    The SBOM where it happens.

    Mar 12, 2026 • 42:54


    This week, Dave talks with Jean-Paul Bergeaux, CTO for Federal for GuidePoint Security, about OMB rescinding two Biden-era orders, which had mandated that agencies require a software bill of materials (SBOM) from software vendors. Ben shares a follow-up story on the Anthropic/Pentagon dustup. Dave has the latest on the new National Cyber Strategy from the White House. While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney.

    Links to today's stories:
    Anthropic sues the Trump administration after it was designated a supply chain risk
    Trump Admin Cyber Strategy Centers Private Sector in Offensive Cyber Operations

    Get the weekly Caveat Briefing delivered to your inbox. Like what you heard? Be sure to check out and subscribe to our Caveat Briefing, a weekly newsletter available exclusively to N2K Pro members on N2K CyberWire's website. N2K Pro members receive our Thursday wrap-up covering the latest in privacy, policy, and research news, including incidents, techniques, compliance, trends, and more. This week's Caveat Briefing covers kids' online safety proposals and Anthropic's suit against the Pentagon. Curious about the details? Head over to the Caveat Briefing for the full scoop and additional compelling stories. Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com. Hope to hear from you. Learn more about your ad choices. Visit megaphone.fm/adchoices

    Sales Secrets From The Top 1%
    Google Tried to Sell for $750K... Nobody Bought It | #1366

    Mar 12, 2026 • 3:10


    Entrepreneurship is often romanticized after success, but the hardest moments happen long before the breakthrough. In this episode, Brandon tells the story of building Seamless through multiple failed CTO hires, losing his last $100,000, and shutting down the entire platform to rebuild it from scratch. He shares how the story of Google being rejected by every major tech company became a reminder that great ideas are often misunderstood early. You'll hear the behind-the-scenes moments most founders never talk about — sleepless nights, impossible technical challenges, and the decision to bet everything on one final rebuild. The lesson isn't genius, luck, or perfect timing. It's persistence. Because sometimes the only thing separating failure from a billion-dollar company is the decision not to quit.

    The New CISO
    Architect and Firefighter: How a Modern CISO Leads in Crisis

    Mar 12, 2026 • 48:43


    Alan Lucas always wanted to be an architect or a firefighter — as CISO of Worldstream and Greenhouse Datacenters, he has become both. In this episode, he joins host Steve Moore to explore leading cybersecurity at the intersection of design and crisis response.Alan traces his path from Fox-IT through a Dutch cryptocurrency exchange where he arrived post-breach to an organization under near-constant attack from nation-state threat actors. Leading a technically sophisticated but security-anxious leadership team, he learned the lasting power of transparency and directness — and his most memorable measure of success was not a technical control, but a CTO who finally slept through the night.The conversation goes deep into crisis communication. Alan and Steve discuss how the industry has matured from reflexive silence around breaches to embracing transparency as a trust-building tool, the danger of well-meaning legal edits that send customers chasing the wrong narrative, and why the CISO should hold final review over all public incident communications. He also shares his Security Champions Program, tabletop exercise design, and why knowing who to call in a crisis must be mapped out before that crisis arrives.Alan also covers his volunteer work with the DIVD, coaching ethical hackers and supporting responsible disclosure worldwide — an extension of his belief that security, done well, creates trust and enables growth for everyone.The episode closes on "bouncing forward" — the idea that true resilience means using every incident as a forcing function for improvement, not just a return to baseline. Alan frames lessons learned as the most important resilience KPI a security team can own. A masterclass in leading through both calm and chaos. 
Key Topics
• The architect-and-firefighter mindset: building security programs while fighting live fires
• Alan's career path from Fox-IT (MSSP) to post-breach CISO at a cryptocurrency exchange
• Leading security post-breach — and what "sleeping well again" actually means
• The unique threat landscape facing cryptocurrency companies, including nation-state adversaries
• The Dutch Institute for Vulnerability Disclosure (DIVD): coordinated, ethical vulnerability disclosure worldwide
• Mentoring young ethical hackers: communication, confidence, and responsible disclosure process
• Crisis communication: balancing transparency with operational security during active incidents
• Why legal edits to breach notifications can mislead customers and create dangerous distractions
• The CISO's role as final reviewer of all incident communications
• Security Champions Programs: bridging the gap between security and non-technical departments
• Tabletop exercise design: running effective simulations in under an hour with non-technical staff
• Writing the breach notification letter before the breach happens
• Bouncing forward, not bouncing back: using lessons learned as a resilience KPI
• Security as a business enabler: positioning the CISO role for organizational growth and confidence

Guest Bio
Alan Lucas is CISO at Worldstream and Greenhouse Datacenters, two of the Netherlands' leading cloud and data center infrastructure providers. With over a decade of cybersecurity experience, he leads security strategy for mission-critical IT and cloud environments. Prior roles include Fox-IT (MSSP) and LiteBit, a Dutch cryptocurrency exchange where he served as CISO post-breach. Alan also volunteers as a coach at the Dutch Institute for Vulnerability Disclosure (DIVD), mentoring ethical hackers and supporting responsible disclosure globally. He is passionate about security as a catalyst for innovation — and about building a safer digital society, one step at a time.

LEARN MORE:

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
    Retrieval After RAG: Hybrid Search, Agents, and Database Design — Simon Hørup Eskildsen of Turbopuffer

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

    Play Episode Listen Later Mar 12, 2026 60:32


    Turbopuffer came out of a reading app. In 2022, Simon was helping his friends at Readwise scale their infra for a highly requested feature: article recommendations and semantic search. Readwise was paying ~$5k/month for their relational database, and vector search would cost ~$20k/month, making the feature too expensive to ship. In 2023, after mulling over the problem from Readwise, Simon decided he wanted to "build a search engine," which became Turbopuffer.

We discuss:
• Simon's path: Denmark → Shopify infra for nearly a decade → "angel engineering" across startups like Readwise, Replicate, and Causal → turbopuffer almost accidentally becoming a company
• The Readwise origin story: building an early recommendation engine right after the ChatGPT moment, seeing it work, then realizing it would cost ~$30k/month for a company spending ~$5k/month total on infra, and getting obsessed with fixing that cost structure
• Why turbopuffer is "a search engine for unstructured data": Simon's belief that models can learn to reason, but can't compress the world's knowledge into a few terabytes of weights, so they need to connect to systems that hold truth in full fidelity
• The three ingredients for building a great database company: a new workload, a new storage architecture, and the ability to eventually support every query plan customers will want on their data
• The architecture bet behind turbopuffer: going all in on object storage and NVMe, avoiding a traditional consensus layer, and building around the cloud primitives that only became possible in the last few years
• Why Simon hated operating Elasticsearch at Shopify: years of painful on-call experience shaped his obsession with simplicity, performance, and eliminating state spread across multiple systems
• The Cursor story: launching turbopuffer as a scrappy side project, getting an email from Cursor the next day, flying out after a 4am call, and helping cut Cursor's costs by 95% while fixing their per-user economics
• The Notion story: buying dark fiber, tuning TCP windows, and eating cross-cloud costs because Simon refused to compromise on architecture just to close a deal faster
• Why AI changes the build-vs-buy equation: it's less about whether a company can build search infra internally, and more about whether they have time, especially if an external team can feel like an extension of their own
• Why RAG isn't dead: coding companies still rely heavily on search, and Simon sees hybrid retrieval (semantic, text, regex, SQL-style patterns) becoming more important, not less
• How agentic workloads are changing search: the old pattern was one retrieval call up front; the new pattern is one agent firing many parallel queries at once, turning search into a highly concurrent tool call
• Why turbopuffer is reducing query pricing: agentic systems are dramatically increasing query volume, and Simon expects retrieval infra to adapt to huge bursts of concurrent search rather than a small number of carefully chosen calls
• The philosophy of "playing with open cards": Simon's habit of being radically honest with investors, including telling Lachy Groom he'd return the money if turbopuffer didn't hit PMF by year-end
• The "P99 engineer": Simon's framework for building a talent-dense company, rejecting by default unless someone on the team feels strongly enough to fight for the candidate

Simon Hørup Eskildsen
• LinkedIn: https://www.linkedin.com/in/sirupsen
• X: https://x.com/Sirupsen
• https://sirupsen.com/about

turbopuffer
• https://turbopuffer.com/

Full Video Pod

Timestamps
00:00:00 The PMF promise to Lachy Groom
00:00:25 Intro and Simon's background
00:02:19 What turbopuffer actually is
00:06:26 Shopify, Elasticsearch, and the pain behind the company
00:10:07 The Readwise experiment that sparked turbopuffer
00:12:00 The insight Simon couldn't stop thinking about
00:17:00 S3 consistency, NVMe, and the architecture bet
00:20:12 The Notion story: latency, dark fiber, and conviction
00:25:03 Build vs. buy in the age of AI
00:26:00 The Cursor story: early launch to breakout customer
00:29:00 Why code search still matters
00:32:00 Search in the age of agents
00:34:22 Pricing turbopuffer in the AI era
00:38:17 Why Simon chose Lachy Groom
00:41:28 Becoming a founder on purpose
00:44:00 The "P99 engineer" philosophy
00:49:30 Bending software to your will
00:51:13 The future of turbopuffer
00:57:05 Simon's tea obsession
00:59:03 Tea kits, X Live, and P99 Live

Transcript

Simon Hørup Eskildsen: I don't think I've said this publicly before, but I just called Lachy and was like, look, Lachy, if this doesn't have PMF by the end of the year, we'll just return all the money to you. I don't really... Justine and I don't wanna work on this unless it's really working. So we want to give it the best shot this year, and we're really gonna go for it. We're gonna hire a bunch of people. We're just gonna be honest with everyone. When I don't know how to play a game, I just play with open cards. Lachy was the only person that didn't freak out. He was like, I've never heard anyone say that before.

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel, and I'm joined by Swyx, editor of Latent Space.

swyx: Hello, hello. Uh, we're still, uh, recording in the Ker studio for the first time. Very excited. And today we are joined by Simon Eskildsen of Turbopuffer. Welcome.

Simon Hørup Eskildsen: Thank you so much for having me.

swyx: Turbopuffer has, like, really gone on a huge tear, and I do have to mention that you're now my newest member of the Danish Aarhus mafia, where, like, there's a lot of legendary programmers that have come out of it, like, uh, Bjarne Stroustrup, Rasmus Lerdorf, the V8 team, and the Google Maps team. Uh, you're mostly a Canadian now, but isn't that interesting?
There's so many, so much, like, strong Danish presence.

Simon Hørup Eskildsen: Yeah, I was writing a post, um, not that long ago about sort of the influences. So I grew up in Denmark, right? I left when I was 18 to go to Canada to work at Shopify. Um, and so I would still say that I feel more Danish than Canadian. Hence also the weird accent; I can't say "th." You know, my wife is also Canadian, um, and I think one of the things in Denmark is just, there's just such a ruthless pragmatism, and there's also a big focus on aesthetics. People really care about what things look like. Um, and, like, Canada has a lot of attributes, the US has a lot of attributes, but I think there's been lots of great things to carry over. I don't know what's in the water in Aarhus though. Um, and I don't know that I could be considered part of the mafia quite yet, uh, compared to the phenomenal individuals we just mentioned. Rasmus Lerdorf is also, uh, Danish Canadian. I don't know where he lives now, but he's the PHP guy.

swyx: Yeah. And obviously Tobi's German, but moved to Canada as well. Yes. That, uh, that is an interesting, um, talent move.

Alessio: I think I would love to get from you a definition of Turbopuffer, because I think you could be a vector DB, which is maybe a bad word now in some circles, or you could be a search engine. Let's just start there, and then we'll maybe run through the history of how you got to this point.

Simon Hørup Eskildsen: For sure. Yeah. So Turbopuffer is, at this point in time, a search engine, right? We do full-text search and we do vector search, and that's really what we're specialized in.
If you're trying to do much more than that, then this might not be the right place yet, but Turbopuffer is all about search. The other way that I think about it is that we can take all of the world's knowledge, all of the exabytes and exabytes of data that there is, and we can use those tokens to train a model, but we can't compress all of that into a few terabytes of weights, right? We can compress into a few terabytes of weights how to reason with the world, how to make sense of the knowledge. But we have to somehow connect it to something external that actually holds that knowledge in full fidelity and truth. Um, and that's the thing that we intend to become, right? That's a very holier-than-thou kind of phrasing, but being the search engine for unstructured data is the focus of Turbopuffer at this point in time.

Alessio: And let's break it down. So people might say, well, didn't Elasticsearch already do this? And then some other people might say, is this search on my data? Is this closer to RAG than to, like, a public search thing? How do you segment the different types of search?

Simon Hørup Eskildsen: The way that I generally think about this is, there's a lot of database companies, and I think if you wanna build a really big database company, you need a couple of ingredients to be in the air, which only happens roughly every 15 years. You need a new workload. You basically need the ambition that every single company on earth is gonna have data in your database, multiple times over. You look at a company like Oracle, right? I don't think you can find a company on earth with a digital presence that doesn't somehow have some data in an Oracle database. And I think at this point that's also true for Snowflake and Databricks, 15 years later, or even more than that: there's not a company on earth that doesn't, indirectly or directly, consume Snowflake or Databricks or any of the big analytics databases. Um, and I think we're in that kind of moment now, right? I don't think you're gonna find a company over the next few years that doesn't, directly or indirectly, have all their data available for search and connected to AI. So you need that new workload, and that new workload is connecting very large amounts of data to AI.

The second condition to build a big database company is that you need some new underlying change in the storage architecture that was not possible for the databases that came before you. If you look at Snowflake and Databricks, right: commoditized, massive fleets of HDDs. That just wasn't in the air in the nineties; we just didn't build these systems, S3 and so on was not around. And I think the architecture that is now possible that wasn't possible 15 years ago is to go all in on NVMe SSDs. It requires a particular type of architecture for the database that is difficult to retrofit onto the databases that are already there, including the ones you just mentioned. The other part is to go all in on object storage, more so than we could have done 15 years ago. Like, we don't have a consensus layer; we don't really have anything. In fact, you could turn off all the servers that Turbopuffer has, and we would not lose any data, because we have gone completely all in on object storage. And this means that our architecture is just so simple. So that's the second condition: first being a new workload that means every company on earth, either indirectly or directly, is using your database; second being some new storage architecture that means the companies that came before you can't do what you're doing.

I think the third thing you need to do to build a big database company is that over time you have to implement more or less every query plan on the data. What that means is that you can't just get stuck in "this is the one thing that a database does." It has to be ever-evolving, because when someone has data in the database, they over time expect to be able to ask it more or less every question. So you have to do that to push the storage architecture to the limit of what it's capable of. Those are the three conditions.

swyx: I just wanted to get a little bit of the motivation, right? Like, so you left Shopify; you were, like, principal engineer, infra guy. Um, you were also head of kernel labs inside of Shopify, right? And then you consulted for Readwise, and that kind of gave you the idea. I just wanted you to tell that story. Um, maybe you've told it before, but introduce people to, like, the new workload, the sort of aha moment for Turbopuffer.

Simon Hørup Eskildsen: For sure. So yeah, I spent almost a decade at Shopify. I was on the infrastructure team, um, from the fairly early days, around 2013. Um, at the time it felt like it was growing so quickly, and all the metrics were, you know, doubling year on year. Compared to what companies are contending with today, it's very cute growth; I feel like some companies are seeing that month over month. Um, of course, Shopify has been compounding for a very long time now. But I spent a decade doing that, and the majority of that was just: make sure the site is up today, and make sure it's up a year from now.
And a lot of that was really just, um, you know, uh, the Kardashians would drive very, very large amounts of data to Shopify as they were rotating through all the merch and building out their businesses, and we just needed to make sure we could handle that. Right. And sometimes these were events at a million requests per second. And so, you know, we had our own data centers back in the day, and we were moving to the cloud, and there was so much sharding work and all of that that we were doing. So I spent a decade just scaling databases, 'cause that's fundamentally the most difficult thing to scale about these sites. The database that was the most difficult for me to scale during that time, and the most aggravating to be on call for, was Elasticsearch. It was very, very difficult to deal with, and I saw a lot of projects that were just being held back in their ambition by using it.

swyx: And I mean, self-hosted?

Simon Hørup Eskildsen: Yeah, self-hosted. This is like 2015, right? So it's a very particular vintage. It's probably better at a lot of these things now. Um, it was difficult to contend with, and I'm just like, I just think about it: it's an inverted index, it should be good at these kinds of queries. And we often couldn't get it to do exactly what we needed to do, or basically get Lucene to do it, like expose Lucene raw to what we needed to do. Um, so that was something that we did on the side and just panic-scaled when we needed to, but not a particular focus of mine. So I left, and when I left I wasn't sure exactly what I wanted to do. I mean, I'd spent like a decade inside of the same company. I'd, like, grown up there. I started working there when I was 18.

swyx: You only do Rails?

Simon Hørup Eskildsen: Yeah. I mean, yeah, Rails. Uh, love Rails. So good.
Um...

Alessio: We all wish we could still work in Rails.

swyx: I know, I know. I tried learning Ruby. It's just too much, like, too many options to do the same thing. I know there's a way to do it.

Simon Hørup Eskildsen: I love it. I don't know that I would use it now, like, given Claude Code and Cursor and everything, but still, if I'm just sitting down and writing code, that's how I think. But anyway, I left, and I talked to a couple companies and was like, I need to see a little bit more of the world here to know what I'm gonna focus on next. Um, and so what I decided is I was gonna do what I called "angel engineering," where I just hopped around my friends' companies in three-month increments and just helped them out with something, right? And just vested a bit of equity and solved some interesting infrastructure problem. So I worked with a bunch of companies at the time. Um, Readwise was one of them. Replicate was one of them. Um, Causal, I dunno if you've tried it; it's a spreadsheet engine where you can do distributions. They sold recently. Yeah. Um, we used that in FP&A at, um, Turbopuffer. Um, so a bunch of companies like this, and it was super fun.

And so when the ChatGPT moment happened, I was with Readwise for a stint. We were preparing for the Reader launch, right, which is where you queue articles and read them later. And I was just getting their Postgres up to snuff, which basically boils down to tuning autovacuum. So I was doing that, and then this happened, and we were like, oh, maybe we should build a little recommendation engine and some features to try to hook in the LLMs. They were not that good yet, but it was clear there was something there. And so I built a small recommendation engine: okay, let's take the articles that you've recently read, right? Embed all the articles, and then do recommendations. It was good enough that when I ran it on one of the co-founders of Readwise, I found out... I got articles about having a child. I'm like, oh my God, I didn't know that they were having a child. I wasn't sure what to do with that information, but the recommendation engine was good enough that it was suggesting articles, um, about that. So there were recommendations, and it actually worked really well.

But this was a company that was spending maybe five grand a month in total on all their infrastructure, and when I did the napkin math on running the embeddings of all the articles, putting them into a vector index, putting it in prod, it was gonna be like 30 grand a month. That just wasn't tenable, right? Readwise is a proudly bootstrapped company, and paying 30 grand for infrastructure for one feature versus five just wasn't tenable. So it went in the bucket of: this is useful, it's pretty good, but let's return to it when the costs come down.

swyx: Did you say it grows by feature? So five to 30 is by the number of... like, what's the scaling factor? It scales by the number of articles that you embed?

Simon Hørup Eskildsen: It does, but what I meant by that is: five grand for all of the other stuff, like the Heroku dynos, Postgres, storage; and then 30 grand for one feature, right? Which is "what other articles are related to this one." Um, so it was just too much to power everything. Their budget would've been maybe a few thousand dollars, which still would've been a lot. And so we put it in a bucket of: okay, we're gonna do that later, we'll wait for the cost to come down. And that haunted me. I couldn't stop thinking about it. I was like, okay, there's clearly some latent demand here. If the cost had been a tenth, we would've shipped it. And this was really the only data point that I had, right? I didn't go out and talk to anyone else. So I started reading. I couldn't help myself. Like, I didn't know what a vector index was; I barely knew how to generate the vectors. There was a lot of hype, this is early 2023, there was a lot of hype about vector databases. They were raising a lot of money, and I really didn't know anything about it. So, you know, trying these little models, fine-tuning them; I was just trying to get a lay of the land.

So I just sat down. I have this GitHub repository called napkin-math. And in napkin-math there's just, um, rows of numbers: this is how much bandwidth... like, you can do 25 gigabytes per second on average to DRAM, you can do five gigabytes per second of writes to an SSD, how much bandwidth you can drive per S3 connection, all of these numbers, right? And I was just sitting there like: why hasn't anyone built a database where you just put everything on object storage, and then you puff it into NVMe when you use the data, and you puff it into DRAM if you're querying it live? This seems fairly obvious, and the only real downside is that if you go all in on object storage, every write will take a couple hundred milliseconds of latency. But from there it's really all upside, right? You do the first query, it takes half a second. And it sort of occurred to me: well, the architecture is really good for object storage, it's really good for NVMe SSDs, and you just couldn't have done that 10 years ago. Back to what we were talking about before: you really have to build a database where you have as few round trips as possible, right? This is how CPUs work today. It's how NVMe SSDs work. It's how S3 works: you want to have a very large amount of outstanding requests. Basically go to S3, do like a thousand requests asking for data in one round trip, wait for that, get it, make a new decision, do it again, and try to do that a maximum of maybe three times. But no databases were designed that way. With NVMe SSDs you can drive within a very low multiple of DRAM bandwidth if you use them that way. And same with S3: you can fully max out the network card, which generally is not maxed out; you get very, very good bandwidth. But no one had built a database like that. So I was like, okay, well, can't you just take all the vectors, right, and plot them in the proverbial coordinate system, get the clusters, put a file on S3 called clusters.json, and then put another file for every cluster: cluster1.json, cluster2.json. It's two round trips, right? You get the clusters, you find the closest clusters, and then you download the N closest cluster files. You could do this in two round trips.

swyx: You run nearest neighbors locally.

Simon Hørup Eskildsen: Yes, yes. And then you would build this file, right? It's just ultra-simplistic, but it's not a far shot from what the first version of Turbopuffer was. Why hasn't anyone done that?

Alessio: In that moment, from a workload perspective, you're thinking this is gonna be a read-heavy thing, because they're doing recommendations. Like, is the fact that writes are so expensive now...?
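The two-round-trip layout Simon describes can be sketched roughly like this. A minimal toy, with an in-memory dict standing in for the object store; the `clusters.json` / `cluster{i}.json` names follow his description, but the shapes, the `put`/`get` helpers, and the brute-force distance code are illustrative, not Turbopuffer's actual format:

```python
import json
import math

# Toy stand-in for an object store: key -> bytes. In the real system these
# would be GET/PUT calls against S3/GCS; here it is just a dict.
bucket = {}

def put(key, obj):
    bucket[key] = json.dumps(obj).encode()

def get(key):
    return json.loads(bucket[key])

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_index(centroids, vectors):
    """Group vectors under precomputed centroids, one file per cluster."""
    clusters = {i: [] for i in range(len(centroids))}
    for vec_id, v in vectors.items():
        nearest = min(range(len(centroids)), key=lambda i: dist(v, centroids[i]))
        clusters[nearest].append([vec_id, v])
    put("clusters.json", centroids)           # fetched in round trip 1
    for i, members in clusters.items():
        put(f"cluster{i}.json", members)      # fetched in round trip 2

def search(query, k=1, nprobe=2):
    """Two round trips to the 'object store', then exact search locally."""
    centroids = get("clusters.json")                          # round trip 1
    closest = sorted(range(len(centroids)),
                     key=lambda i: dist(query, centroids[i]))[:nprobe]
    candidates = []
    for i in closest:                          # round trip 2 (parallel in practice)
        candidates.extend(get(f"cluster{i}.json"))
    candidates.sort(key=lambda item: dist(query, item[1]))
    return [vec_id for vec_id, _ in candidates[:k]]

centroids = [[0.0, 0.0], [10.0, 10.0]]
vectors = {"a": [0.1, 0.2], "b": [9.8, 10.1], "c": [0.4, 0.1]}
build_index(centroids, vectors)
print(search([0.0, 0.3], k=2))  # -> ['a', 'c']
```

This is just the classic inverted-file (IVF) idea with the posting lists serialized as whole objects, which is why the query cost collapses to two batched round trips regardless of dataset size.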
Oh, with AI, you're actually not writing that much.

Simon Hørup Eskildsen: At that point I hadn't really thought too much about... well, no, actually, it was always clear to me that there was gonna be a lot of writes, because at Shopify the search clusters were doing, you know, tens or hundreds of queries per second, 'cause you just have to have a human sit and type. But I don't know how many updates there were per second into the cluster; I'm sure it was in the millions. So I always knew there was like a 10-to-100 ratio on reads to writes. Even in the Readwise use case, there'd probably be a lot fewer reads than writes, right? There's just a lot of churn on the amount of stuff going through versus the amount of queries. Um, I wasn't thinking too much about that. I was mostly just thinking about what's the fundamentally cheapest way to build a database in the cloud today using the primitives that you have available. And this is it, right? Now you have one machine, and, you know, let's say you have a terabyte of data in S3; you pay the $200 a month for that, and then maybe five to 10% of that data needs to be on NVMe SSDs, and less than that in DRAM. Well, you're paying very, very little to inflate the data.

swyx: By the way, when you say no one else has done that, would you consider Neon, uh, to be on a similar path, in terms of being sort of S3-first and separating the compute and storage?
And they're trying, they're retrofitted onto Postgres, right? And then they built this whole architecture where you have, you have it in memory and then you sort of.You know, m map back to S3. And I think that was very novel at the time to do it for, for all LTP, but I hadn't seen a database that was truly all in, right. Not retrofitting it. The database felt built purely for this no consensus layer. Even using compare and swap on optic storage to do consensus. I hadn't seen anyone go that all in.And I, I mean, there, there, I'm sure there was someone that did that before us. I don't know. I was just looking at the napkin mathswyx: and, and when you say consensus layer, uh, are you strongly relying on S3 Strong consistency? You are. Okay.SoSimon Hørup Eskildsen: that is your consensus layer. It, it is the consistency layer. And I think also, like, this is something that most people don't realize, but S3 only became consistent in December of 2020.swyx: I remember this coming out during COVID and like people were like, oh, like, it was like, uh, it was just like a free upgrade.Simon Hørup Eskildsen: Yeah.swyx: They were just, they just announced it. We saw consistency guys and like, okay, cool.Simon Hørup Eskildsen: And I'm sure that they just, they probably had it in prod for a while and they're just like, it's done right.And people were like, okay, cool. But. That's a big moment, right? Like nv, ME SSDs, were also not in the cloud until around 2017, right? So you just sort of had like 2017 nv, ME SSDs, and people were like, okay, cool. There's like one skew that does this, whatever, right? Takes a few years. And then the second thing is like S3 becomes consistent in 2020.So now it means you don't have to have this like big foundation DB or like zookeeper or whatever sitting there contending with the keys, which is how. You know, that's what Snowflake and others have do so muchswyx: for goneSimon Hørup Eskildsen: Exactly. Just gone. Right? 
And so just push to the, you know, whatever, how many hundreds of people they have working on S3 solved and then compare and swap was not in S3 at this point in time,swyx: by the way.Uh, I don't know what that is, so maybe you wanna explain. Yes. Yeah.Simon Hørup Eskildsen: Yes. So, um, what Compare and swap is, is basically, you can imagine that if you have a database, it might be really nice to have a file called metadata json. And metadata JSON could say things like, Hey, these keys are here and this file means that, and there's lots of metadata that you have to operate in the database, right?But that's the simplest way to do it. So now you have might, you might have a lot of servers that wanna change the metadata. They might have written a file and want the metadata to contain that file. But you have a hundred nodes that are trying to contend with this metadata that JSON well, what compare and Swap allows you to do is basically just you download the file, you make the modifications, and then you write it only if it hasn't changed.While you did the modification and if not you retry. Right? Should just have this retry loops. Now you can imagine if you have a hundred nodes doing that, it's gonna be really slow, but it will converge over time. That primitive was not available in S3. It wasn't available in S3 until late 2024, but it was available in GCP.The real story of this is certainly not that I sat down and like bake brained it. I was like, okay, we're gonna start on GCS S3 is gonna get it later. Like it was really not that we started, we got really lucky, like we started on GCP and we started on GCP because tur um, Shopify ran on GCP. And so that was the platform I was most available with.Right. Um, and I knew the Canadian team there ‘cause I'd worked with them at Shopify and so it was natural for us to start there. 
And so when we started building the database, we're like, oh yeah, we have to build a, we really thought we had to build a consensus layer, like have a zookeeper or something to do this.But then we discovered the compare and swap. It's like, oh, we can kick the can. Like we'll just do metadata r json and just, it's fine. It's probably fine. Um, and we just kept kicking the can until we had very, very strong conviction in the idea. Um, and then we kind of just hinged the company on the fact that S3 probably was gonna get this, it started getting really painful in like mid 2024.‘cause we were closing deals with, um, um, notion actually that was running in AWS and we're like, trust us. You, you really want us to run this in GCP? And they're like, no, I don't know about that. Like, we're running everything in AWS and the latency across the cloud were so big and we had so much conviction that we bought like, you know, dark fiber between the AWS regions in, in Oregon, like in the InterExchange and GCP is like, we've never seen a startup like do like, what's going on here?And we're just like, no, we don't wanna do this. We were tuning like TCP windows, like everything to get the latency down ‘cause we had so high conviction in not doing like a, a metadata layer on S3. So those were the three conditions, right? Compare and swap. To do metadata, which wasn't in S3 until late 2024 S3 being consistent, which didn't happen until December, 2020.Uh, 2020. And then NVMe ssd, which didn't end in the cloud until 2017.swyx: I mean, in some ways, like a very big like cloud success story that like you were able to like, uh, put this all together, but also doing things like doing, uh, bind our favor. That that actually is something I've never heard.Simon Hørup Eskildsen: I mean, it's very common when you're a big company, right?You're like connecting your own like data center or whatever. 
But it was uniquely a pain with Notion, because, like, if you're in Ashburn, Virginia, right, US East, the GCP and AWS data centers are within a millisecond of each other on the public exchanges. But in Oregon, uniquely, the GCP data center sits a couple hundred kilometers east of Portland, and the AWS region sits in Portland, but the network exchange they go through is in Seattle. So it's a full 14 milliseconds or something like that. And so, anyway, we were like, okay, we have to go through an exchange in Portland instead.

swyx: And you'd rather do this than, like, run your ZooKeeper?

Simon Hørup Eskildsen: Yes. Way rather. It doesn't have state. I don't want state in two systems. And I think all of that is just informed by Justine, my co-founder, and I having been on call for so long. The worst outages are the ones where you have state in multiple places that isn't syncing up. So it really came from a very pure source of pain: just imagining what we would be okay being woken up at 3:00 AM about, and having something in ZooKeeper was not one of them.

swyx: When you're talking to, like, a Notion or something, do they care? Or do they just...

Simon Hørup Eskildsen: They just care about latency.

swyx: The latency cost. That's it.

Simon Hørup Eskildsen: They just cared about latency. Right. And we just absorbed the cost. We're like, we have high conviction in this. At some point we can move them to AWS. So we'll buy the fiber, it doesn't matter. And it's like $5,000. Usually when you buy fiber, you buy multiple lines, and we're like, we can only afford one, but we tested that when it fails over to the public internet, it's still super smooth.
And so we did a lot of... anyway, yeah, that's the story.

Alessio: You can imagine talking to the GCP rep: no, we're going to buy, even though we know we're going to churn from you guys and go to AWS in, like, six months. But in the meantime we'll do this.

Simon Hørup Eskildsen: I mean, this workload still runs on GCP, for what it's worth, right? Because it was just so reliable. So it was never about moving off GCP, it was just about honesty, about giving Notion the latency that they deserved. And we didn't want them to have to care about any of this. They were also like, oh, egress is going to be bad. We were like, okay, screw it, we'll just VPC-peer with you in AWS and eat the cost. Whatever needs to be done.

Alessio: And what were the actual workloads? Because when you think about AI, 14 milliseconds really doesn't matter in the scheme of a model generation.

Simon Hørup Eskildsen: Yeah, we were told the latency that we had to beat, right? So we were just looking at the traces, sort of hand-drawing them out and thinking, what are the other extensions of the trace? And there's a lot more to it, because if you have 14 versus seven milliseconds, you can fit in another round trip. So we had to tune TCP to send as much data as possible in every round trip, pre-warm all the connections. There are a lot of things that compound from these kinds of round trips. But in the grand scheme, it was just: we have to beat the latency of whatever we're up against.

swyx: Which is... I mean, Notion is a database company. They could have done this themselves; they do lots of database engineering themselves. How do you even get in the door?
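The "fit in another round trip" point can be made concrete with a back-of-the-envelope model. This is a hedged sketch assuming idealized TCP slow start (the congestion window doubles every round trip) and a made-up payload size; real transfers also depend on loss, buffering, and window limits.

```python
def rtts_to_deliver(total_bytes, initcwnd_segments=10, mss=1460):
    """Round trips needed under idealized slow start: the sender may ship
    one congestion window per RTT, and the window doubles each RTT."""
    cwnd = initcwnd_segments * mss
    sent, rtts = 0, 0
    while sent < total_bytes:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

# ~1 MB of query results: the RTT multiplies directly into wall-clock time.
for rtt_ms in (7, 14):
    n = rtts_to_deliver(1_000_000)
    print(f"{rtt_ms} ms RTT: {n} round trips, ~{n * rtt_ms} ms on the wire")
```

Halving the RTT halves the wall-clock transfer time, and tuning initial windows or pre-warming connections shaves whole round trips, which is why the Portland-versus-Seattle path mattered so much.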
Yeah, just talk through that.

Simon Hørup Eskildsen: Last time I was in San Francisco, I was talking to one of the engineers, actually, who was one of our champions at Notion. And they were just trying to make sure that the per-user cost matched the economics that they needed. The way I think about it: I have to earn a return on whatever the clouds charge me, and then my customers have to earn a return on that. It's very simple, right? There has to be gross margin all the way up, and that's how you build the product. So our customers have to be happy with the set of trade-offs that turbopuffer makes, and if they are, that's great.

swyx: Do you feel like you're competing with build-internally versus buy, or buy versus buy?

Simon Hørup Eskildsen: Yeah, and sorry, this was all to build up to your question. One of the Notion engineers told me that they'd sat, probably with a napkin, and drawn out: why hasn't anyone built this? And then they saw turbopuffer, and it was, well, literally that. And I think AI has also changed the buy-versus-build equation: it's not really about "can we build it," it's about "do we have time to build it." I think they felt, okay, if this is a team that can do that, and they feel enough like an extension of our team, then we can go a lot faster, which would be very, very good for them. And I mean, they put us through the test, right? We had some very, very long nights to do that POC. They were really our second big customer after Cursor, which also was a lot of late nights.

swyx: Yeah. I mean, should we go into that story? The Cursor story. They credit you a lot for working very closely with them.
So I just want to hear... I've heard this story from Sualeh's point of view, but I'm curious what it looks like from your side.

Simon Hørup Eskildsen: I actually haven't heard it from Sualeh's point of view, so maybe you can cross-reference it now. The way that I remember it: the day after we launched... I'd worked the whole summer on the first version. Justine wasn't part of it yet, because I didn't tell anyone that summer that I was working on this. I was just locked in on building it, because it's very easy otherwise to confuse talking about something with actually doing it. So I was like, I'm not going to do that, I'm just going to do the thing. I launched it, and at this point turbopuffer is a Rust binary running on a single eight-core machine in a tmux instance. And me deploying it was looking at the request log and then Ctrl-C'ing it: okay, there are no requests, let's upgrade the binary. It was literally the scrappiest thing you could imagine, and on purpose, because at Shopify we did that all the time. We ran things in tmux all the time to begin with, before something had at least an inkling of PMF. It was like, okay, is anyone even going to hear about this? And one of the Cursor co-founders, Arvid, reached out. And, you know, the Cursor team are all, like, IOI and IMO contenders, right? So they just speak in bullet points and facts. It was this amazing email exchange: this is how many QPS we have, this is what we're paying, this is where we're going, blah, blah, blah. So we're just conversing in bullet points. I tried to get a call with them a few times, but they were really riding the PMF wave here, late 2023. And one time Sualeh emails me at, like, five...
What was it, like 4:00 AM Pacific time? Saying, hey, are you open for a call now? And I'm on the East Coast, so it was like 7:00 AM. I was like, yeah, great, sure, whatever. And we just started talking, and something... I didn't know anything about sales then, but something just compelled me: I have to go see this team. There's something here. So I went to San Francisco, and I went to their office, and the way that I remember it is that Postgres was down when I showed up. Did Sualeh tell you this? No? Okay. So Postgres was down, and they were distracted with that, and I was trying my best to see if I could help in any way. I knew a little bit about databases, back to tuning autovacuum. It was like, I think you have to tune autovacuum. So we talked about that, and then that evening we talked about what it would look like to work with us. And I just said, look, we're all in. We will just do whatever you tell us, right? They migrated everything over the next week or two, and we reduced their cost by 95%, which I think kind of fixed their per-user economics. And it solved a lot of other things. This is also when I asked Justine to come on as my co-founder. She was the best engineer that I ever worked with at Shopify. She lived two blocks away, and we were just, okay, we're going to get this done. And we did. We helped them migrate, and we worked like hell over the next month or two to make sure that we were never an issue. And that was the Cursor story. Yeah.

swyx: And is code a different workload than normal text? I don't know, is it just text? Is it the same thing?

Simon Hørup Eskildsen: Yeah, so Cursor's workload is basically: they will embed the entire code base, right?
So they chunk it up however they do, and they have their own embedding model, which they've been public about, and which they measure on their own evals. There's one of their evals where it's like a 25% improvement on a very particular workload; they have a bunch of blog posts about it. I think it works best on larger code bases, but they've trained their own embedding model to do this. So you'll see, if you use the Cursor agent, it will do searches. They've also been public about how they've, I think, post-trained their model to be very good at semantic search as well. And that's how they use it. So it's very good at, can you find me code that's similar to this, or code that does this? And for these queries they also use grep to supplement it.

swyx: Yeah. It's been a big topic of discussion: is RAG dead, because grep, you know...

Simon Hørup Eskildsen: I mean, we see lots of demand from the coding companies for semantic search.

swyx: Search in every part. Yes.

Simon Hørup Eskildsen: We see demand. And, I mean, I like case studies. I don't like doing thought pieces on "this is where it's going," trying to be all macroeconomic about AI. That has turned out to be a giant waste of time, because no one can really predict any of this. So I just collect case studies. Cursor has done a great job talking about what they're doing, and I hope some of the other coding labs that use turbopuffer will do the same. But it does seem to make a difference for particular queries. We can also do text, we can also do regex. But I should also say that Cursor's security posture in turbopuffer is exceptional, right? They have their own embedding model, which makes the vectors very difficult to reverse-engineer. They obfuscate the file paths.
It's very difficult to learn anything about a code base by looking at it. And the other thing they do is that, for their customers, they encrypt it with the customers' encryption keys in turbopuffer's bucket. So it's really, really well designed.

swyx: And this is extra stuff they did to work with you, because you are not part of Cursor. Exactly. And this is just best practice when working with any database, not just you guys. Okay, yeah, that makes sense. I think for me, the learning is that all workloads are hybrid. You want the semantic, you want the text, you want the regex, you want SQL, I don't know. But it's silly to be all-in on one particular query pattern.

Simon Hørup Eskildsen: I really like the way that Sualeh at Cursor talks about it, and I'm going to butcher it here (you know, I'm a database scalability person, I don't know anything about training models other than what the internet tells me). The way he describes it is that this is just cached compute, right? You have a point in time where you're looking at some particular context, focused on some chunk, and you say: this is the layer of the neural net at this point in time. That seems fundamentally really useful, to cache compute like that. How the value of that will change over time, I'm not sure, but there seems to be a lot of value in it.

Alessio: Maybe talk a bit about the evolution of the workload. Because even search, maybe two years ago, was one search at the start of an LLM query to build the context. Now you have agentic search, however you want to call it, where the model is both writing and changing the code, and it's searching it again later. Yeah.
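The hybrid idea discussed above, semantic similarity supplemented by exact text or regex matching, can be sketched as below. The corpus, the three-dimensional "embeddings," and the query vector are all invented for illustration; this is not Cursor's or turbopuffer's actual pipeline.

```python
import math
import re

# Toy corpus of (path, code, embedding). Real systems use a learned code
# embedding model; these 3-d vectors are made up.
docs = [
    ("auth.py",  "def verify_token(token): ...", [0.9, 0.1, 0.0]),
    ("db.py",    "def connect(dsn): ...",        [0.1, 0.8, 0.2]),
    ("cache.py", "def get_or_set(key): ...",     [0.0, 0.1, 0.95]),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def hybrid_search(query_vec, pattern, k=2):
    """Rank by vector similarity, but keep only chunks whose code also
    matches an exact regex: two different ways to slice the same search."""
    hits = [(cosine(query_vec, vec), path)
            for path, code, vec in docs if re.search(pattern, code)]
    return [path for _, path in sorted(hits, reverse=True)[:k]]

print(hybrid_search([0.85, 0.2, 0.1], r"def \w+\("))  # ['auth.py', 'db.py']
```

The regex acts as a hard filter and the embedding as a soft ranking, which is one way the two modes complement each other rather than compete.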
What are maybe some of the new types of workloads, or changes you've had to make to your architecture for it?

Simon Hørup Eskildsen: I think you're right. When I think of RAG, I think of: hey, there's an 8,000-token context window, and you'd better make it count. Search was a way to do that. Now everything is moving towards just letting the agent do its thing, right? And so, back to the point before: the LLM is very good at reasoning with the data, and so we're just the tool call. That's increasingly what we see our customers doing. What we're seeing more demand for from customers now is a lot of concurrency. Notion does a ridiculous number of queries in every round trip, just because they can. And now when I use the Cursor agent, I also see it doing more concurrency than I've ever seen before. So, a bit similar to how we designed the database to drive as much concurrency in every round trip as possible, that's also what the agents are doing. That's what's new: an enormous number of queries, all at once, against the dataset while it's warm, in as few turns as possible.

swyx: Can I clarify one thing on that?

Simon Hørup Eskildsen: Yes.

swyx: Are they batching multiple users, or is one user driving multiple?

Simon Hørup Eskildsen: One user driving multiple. One agent driving it.

swyx: It's parallel-searching a bunch of things.

Simon Hørup Eskildsen: Exactly.

swyx: Yeah, exactly. Cognition also did this for the fast-context thing, like eight parallel at once.

Simon Hørup Eskildsen: Yes.

swyx: And an interesting problem is: how do you make sure you have enough diversity so you're not making the same request eight times?

Simon Hørup Eskildsen: And I think that's probably also where the hybrid comes in. That's another way to diversify; it's a completely different way to do the search. That's a big change, right?
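The fan-out pattern described here, one agent issuing many diversified searches concurrently within a single turn, might look like the following outline. The `search` function is a stub standing in for a real query API, not an actual client library.

```python
import asyncio

async def search(namespace, query):
    # Stub for a network search call; a real client would issue an HTTP
    # query against the search service here.
    await asyncio.sleep(0.01)  # simulated round trip
    return f"results for {query!r} in {namespace}"

async def agent_turn(queries):
    # Fan out all searches at once: total latency is roughly one round
    # trip rather than one round trip per query.
    return await asyncio.gather(*(search("codebase", q) for q in queries))

results = asyncio.run(agent_turn([
    "token verification",
    "session middleware",
    "error handling in auth",
    "regex: def verify_\\w+",
]))
print(len(results))  # 4 result sets, fetched concurrently
```

Mixing query styles in the fan-out (semantic phrasings plus an exact pattern) is one simple way to get the diversity the conversation mentions, so the eight parallel requests are not eight copies of the same question.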
So before, it was really just one call, and then the LLM took however many seconds to return. But now we see an enormous number of queries. So we've tried to reduce query pricing. This is probably the first time I'm actually saying this, but the query pricing is being reduced, like, 5x. And we'll probably try to reduce it even more to accommodate some of these workloads that issue very large numbers of queries. That's one thing that's changed. The write ratio is still very high, right? There's still an enormous number of writes per read. But we're probably starting to see that change as people really lean into this pattern.

Alessio: Can we talk a little bit about the pricing? I'm curious, because traditionally a database would charge on storage, but now you have the token generation that is so expensive, where the actual value of a good search query is much higher, because it's saving inference time down the line. How do you structure that, and what are people receptive to on the other side?

Simon Hørup Eskildsen: Yeah. The turbopuffer pricing in the beginning was very simple. The pricing for search engines before turbopuffer was very serverful, right? Here's the VM, here's the per-hour cost. Great. And I just sat down with a piece of paper and said, if turbopuffer were really good, this is probably what it would cost, with a little bit of margin. That was the first pricing of turbopuffer. It was vibe pricing. It was very vibe-priced, and I got it wrong. Well, I didn't get it wrong, but turbopuffer wasn't performing at that first-principles pricing yet. So when Cursor came on turbopuffer, it was like...
Like, I didn't know any VCs. I didn't know anything about raising money or anything like that. I just saw that my GCP bill was a lot higher than the Cursor bill. So Justine and I were like, well, we have to optimize it. And, to the chagrin of the VCs now, it means that we're profitable, because we had so much pricing pressure in the beginning. It was running on my credit card, and Justine and I had spent tens of thousands of dollars on compute bills, and on spinning up the company, and on very bad Canadian lawyers, and things like that, because we just didn't know. If you're steeped in San Francisco, you just know: okay, you go out and raise a pre-seed round. I had never heard the word pre-seed at this point in time.

swyx: When you had Cursor, you had Notion, you had no funding?

Simon Hørup Eskildsen: With Cursor we had no funding, yeah. By the time we had Notion, Lachy was here. So we vibe-priced it 100% from first principles, but it was not performing at first principles, so we did everything we could to optimize it in the beginning, so that at least we could have, like, a 5% margin or something and I wasn't freaking out. Because Cursor's bill was also going up like this as they were growing, and so was my liability, and my credit limit... I was actively calling my bank, like, I need a bigger credit limit. Anyway, that was the beginning. But the pricing was, yeah: storage, writes, and queries. And the pricing we have today is basically that pricing, with duct tape and spit, trying to approximate a margin on the physical underlying hardware. This year you're going to see more and more pricing changes from us.
swyx: And how much does stuff like VPC peering matter? Because you're working in AWS land, where egress is charged and all that, you know.

Simon Hørup Eskildsen: We have an enterprise plan that just has a base fee, because we haven't had time to figure out SKU pricing for all of this. But, I mean, you can run turbopuffer in SaaS, right? That's what Cursor does. You can run it in a single-tenant cluster, so it's just you; that's what Notion does. And then you can run it in BYOC, where everything is inside the customer's VPC; that's what, for example, Anthropic does.

swyx: What I'm hearing is that this is probably the best CRO job for somebody who can come in and...

Simon Hørup Eskildsen: I mean...

swyx: ...help you with this.

Simon Hørup Eskildsen: turbopuffer hired, I don't know what number this was, but we had a full-time CFO as, like, the 12th hire or something. I hear about a lot of companies, and I don't know how they do it: they have a hundred employees and no CFO. Having a CFO is like...

swyx: Running a business, man. Like, you know.

Simon Hørup Eskildsen: It's so good. Yeah, Money Mike, he just handles the money and a lot of the business stuff. So he came in and just helped with a lot of the operational side of the business. Somewhere in between COO and CFO.

swyx: Just a quick mention of Lachy, 'cause I'm curious. I've met Lachy, and he's obviously a very good investor, now at Physical Intelligence. I call him a generalist super angel, right? He invests in everything. And I always wonder, is there something appealing about focusing on developer tooling, focusing on databases, someone going "I've invested for 10 years in databases," versus being like a Lachy, who can maybe connect you to all the customers that you need?

Simon Hørup Eskildsen: This is an excellent question.
No one's asked me this. Why Lachy? Because there were a couple of people we were talking to at the time, and when we were raising, we were a bit distressed, because one of our peers had just launched something that was very similar to turbopuffer. And someone gave me the advice at the time: choose the person where you feel like you can just pick up the phone without preparing anything and be completely honest. And I don't think I've said this publicly before, but I just called Lachy and was like, Lachy, if this doesn't have PMF by the end of the year, we'll just return all the money to you. Justine and I don't want to work on this unless it's really working. So we want to give it the best shot this year, and we're really going to go for it. We're going to hire a bunch of people, and we're just going to be honest with everyone. When I don't know how to play a game, I just play with open cards. And Lachy was the only person who didn't freak out. He was like, I've never heard anyone say that before. As I said, I didn't even know what a seed or pre-seed round was, probably even at this time. So I was just very honest with him. And I asked him, Lachy, have you ever invested in a database company? He was like, no. And at the time I was like, am I dumb? But there was something that really drew me to Lachy. He is so authentic, so honest. I just felt like I could say everything openly. And that was a perfect match at the time, and honestly still is. He was just like, okay, that's great. This is the most honest, ridiculous thing I've ever heard anyone say to me.

swyx: Why is this ridiculous?
Saying "a competitor launched, this may not work out"?

Simon Hørup Eskildsen: It was more just: if this doesn't work out, I'm going to close up shop by the end of the year, right? I don't know, maybe it's common; he told me it was uncommon. That's why we chose him, and he's been phenomenal. The other people we were talking to at the time were database experts: they knew a lot about databases, and Lachy didn't. This turned out to be a phenomenal asset. Justine and I know a lot about databases. The people that we hire know a lot about databases. What we needed was someone who didn't know a lot about databases, didn't pretend to know a lot about databases, and just wanted to help us with candidates and customers. And he did. I have a list of the investors that I have a relationship with, and Lachy has just performed excellently in the number of sub-bullets we can attribute back to him. Just absolutely incredible. And when people talk about no ego and just doing the best thing for the founder... even my lawyer is like, yeah, Lachy is the most friendly person you will find.

swyx: Okay. This is the most glowing recommendation I've ever heard.

Alessio: He deserves it. He's very special.

swyx: Yeah. Okay. Amazing.

Alessio: Since you mentioned candidates, maybe we can talk about team building. Especially in SF, it feels like it's just easier to start a company than to join a company. I'm curious about your experience, especially not being in SF full-time and doing something that is, you know, very low-level and deep in technical detail.

Simon Hørup Eskildsen: Yeah. So, joining versus starting: I never thought that I would be a founder. turbopuffer started as a blog post, and then it became a project, and then it sort of almost accidentally became a company.
And now it feels like it's becoming a bigger company. That was never the intention. The intentions were very pure: why hasn't anyone done this? And I want to be the first person to do it. I think some founders have this "I could never work for anyone else" feeling. I really don't feel that way. I just wanted to see this happen, and to see it happen with people I really enjoy working with, and to have fun doing it. This has all felt very natural in that sense. So it was never join versus found. Founding just found me at the right moment.

Alessio: Well, I think there's an argument that you should have joined Cursor, right? So I'm curious how you evaluated it: okay, I should actually go raise money and make this a company, versus, this is a company that's growing like crazy, it's an interesting technical problem, I should just build it within Cursor, and then they don't have to encrypt all this stuff, they don't have to obfuscate things. Was that on your mind at all?

Simon Hørup Eskildsen: Before taking the small check from Lachy, I did have a hard look at myself in the mirror: okay, do I really want to do this? Because if I take the money, I really have to do it, right? The way I almost think about it is, you kind of need to be fucked up enough to want to go all the way. And that was the conversation where I decided, okay, this is going to be part of my life's journey: to build this company and do it in the best way that I possibly can. Because if I ask people to join me, and ask people to get on the cap table, then I have an ultimate responsibility to give it everything. And I don't think everyone takes it that seriously. Maybe I take it too seriously, I don't know.
But that was a very intentional moment. And then it was very clear: okay, I'm going to do this, and I'm going to give it everything.

Alessio: A lot of people don't take it this seriously.

swyx: Let's talk about... you have this concept of the P99 engineer. People are saying 10x, everyone's saying, you know, maybe engineers are out of a job, I don't know. But you definitely see a P99 engineer, and I just want you to talk about it.

Simon Hørup Eskildsen: Yeah, so the P99 engineer was just a term we started using internally to talk about candidates and about how we wanted to build the company. And, you know, like everyone else, we want a talent-dense company, and I think that's almost become trite at this point. What I credit the Cursor founders a lot for is that they arrived there from first principles: we just need a talent-dense team. I've seen some teams that weren't talent-dense, so I've seen the counterfactual run, and if you've been in a large company, you'll see it just logically happens there. So that was super important to me and Justine, and it's very difficult to maintain, so we needed wording for it. I have a document called "Traits of the P99 Engineer," and it's a bullet-point list. I look at that list after every single interview that I do, and every recap we do ends with some version of: I'm going to reject this candidate, completely regardless of what the discourse was, because I want to see people fight for this person. The default should not be "we're going to hire this person." The default should be "we're definitely not hiring this person."
And, you know, if everyone is like, "ah, maybe," and no one throws a punch, then this is not the right person.

swyx: Do you operate like... there must be at least one champion who's like, yes, I will put my career on the line for this person?

Simon Hørup Eskildsen: Career on the line, I think...

swyx: Maybe a chair, but...

Simon Hørup Eskildsen: Yeah. You know, someone needs to have both fists up and be like, I'd fight, right?

swyx: Yeah.

Simon Hørup Eskildsen: And if one person says that, then okay, let's do it. It doesn't have to be absolutely everyone. The interviews are always designed so that you're checking for different attributes, and if someone is knocking it out of the park in every single attribute, that's fairly rare. But that's really important. So, the traits of the P99 engineer: there are lots of them. There's also the traits of the triple-nine engineer and the quadruple-nine engineer. It's a long list.

swyx: Okay.

Simon Hørup Eskildsen: I'll give you some samples of what we look for. I think the P99 engineer has some history of having bent their trajectory, or something, to their will. Some moment where they just, you know, made the computer do what it needed to do. Something like that will have occurred at some point in their career, and hopefully multiple times.

swyx: Give me an example from one of your engineers.

Simon Hørup Eskildsen: I'll give one. So we launched this thing called ANN v3. We're also working on v4 and v5 right now, but ANN v3 can search a hundred billion vectors with a P50 of around 40 milliseconds and a P99 of 200 milliseconds.
Maybe other people have done this, I'm sure Google and others have, but we haven't seen anyone do it, at least not in a publicly consumable SaaS. And that was an engineer, the chief architect of turbopuffer, Nathan. The software was not capable of this, and he more or less just made it capable, for a very particular workload, in a six-to-eight-week period, with the help of a lot of the team. There are numerous examples of that at turbopuffer, but that's really bending the software and x86 to your will. It was incredible to watch. You want to see some moments like that.

swyx: Isn't that triple-nine?

Simon Hørup Eskildsen: Um, I think Nathan...

Alessio: Quadruple-nine? That was only one nine. I feel like this is too high a bar.

Simon Hørup Eskildsen: Nathan is... yeah, Nathan has a lot of nines. Okay. So I think that's one trait. I think another trait is that the P99 engineer spends a lot of time looking at maps. Generally it's their preferred UX. They just love looking at maps. Have you ever seen someone who just sits on their phone and scrolls around on a map? Or do you not look at maps a lot? You guys don't look at maps?

swyx: I guess I'm not feeling it. I don't know.

Simon Hørup Eskildsen: What about trains? Do you like trains?

swyx: Uh, I mean, not enough. Okay. This is just, like, weaponized autism is what I call it.

Simon Hørup Eskildsen: I love looking at maps. It's my preferred UX, and I like lots of random places, so, you know.

Alessio: Yes. Okay. There you go. So with all these random places, how do you explore the maps?

Simon Hørup Eskildsen: No, it's just a joke.
It's like you are just obsessed by something and you like studying a thing.Simon Hørup Eskildsen: The origin of this was that at some point I read an interview with some IOI gold medalistswyx: Uhhuh,Simon Hørup Eskildsen: and it's like, what do you do in your spare time? I was just like, I like looking at maps.I was like, I feel so seen. Like, I just like love, like swirling out. I was like, oh, Canada is so big. Where's Baffin Island? I don't know. I love it. Yeah. Um, anyway, so the traits of P 99, P 99 is obsessive, right? Like, there's just like, you'll, you'll find traits of that we do an interview at, at, at, at turbo puffer or like multiple interviews that just try to screen for some of these things.Um, so. There's lots of others, but these are the kinds of traits that we look for.swyx: I'll tell you, uh, some people listen for like some of my dere stuff. Uh, I do think about derel as maps. Um, you draw a map for people, uh, maps show you the, uh, what is commonly agreed to be the geographical features of what a boundary is.And it shows also shows you what is not doing. And I, I think a lot of like developer tools, companies try to tell you they can do everything, but like, let's, let's be real. Like you, your, your three landmarks are here, everyone comes here, then here, then here, and you draw a map and, and then you draw a journey through the map.And like that. To me, that's what developer relations looks like. So I do think about things that way.Simon Hørup Eskildsen: I think the P 99 thinks in offs, right? The P 99 is very clear about, you know, hey, turbo puffer, you can't run a high transaction workload on turbo puffer, right? It's like the right latency is a hundred milliseconds.That's a clear trade off. I think the P 99 is very good at articulating the trade offs in every decision. Um. Which is exactly what the map is in your case, right?swyx: Uh, yeah, yeah. My, my, my world. 
My world.Alessio: How, how do you reconcile some of these things when you're saying you bend the will the computer versus like the trade
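For readers unfamiliar with the P50/P99 notation used throughout the conversation: a Pn latency is the n-th percentile of observed request times, i.e. half of requests finish within the P50, and 99% finish within the P99. A minimal sketch of how such figures are computed (the sample data below is simulated for illustration, not Turbopuffer's):

```python
import random

def percentile(samples, p):
    """Return the p-th percentile (0-100) of a list of latency samples,
    by mapping p to the nearest index in the sorted list."""
    ordered = sorted(samples)
    # Map the percentile onto the 0..n-1 index range and round.
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

# Simulated request latencies in milliseconds (illustrative only):
# log-normal, which is a common rough shape for service latency.
random.seed(0)
latencies = [random.lognormvariate(3.7, 0.5) for _ in range(10_000)]

p50 = percentile(latencies, 50)
p99 = percentile(latencies, 99)
print(f"P50 = {p50:.1f} ms, P99 = {p99:.1f} ms")
```

The gap between P50 and P99 is the point of quoting both: the median tells you the typical request, the P99 tells you how bad the tail gets.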

    The First Customer
    The First Customer - The Quiet Revolution in Industrial Automation with Co-Founder Carl Gould

    The First Customer

    Play Episode Listen Later Mar 11, 2026 34:48 Transcription Available


In this episode, I was lucky enough to interview Carl Gould, co-founder and CTO of Inductive Automation. Growing up in California's Bay Area during the rise of the modern internet, Carl developed an early fascination with computers that eventually led him to study computer science at UC Davis. What began as a summer project connecting industrial machine data to SQL databases soon evolved into a full software platform used by engineers around the world to build applications that monitor and control factories, water systems, and other industrial operations. Carl shares the story behind Inductive Automation's earliest days, including how mentorship from industry veteran Steve Heckman helped shape their understanding of the market and how their first independent customer, a project at Sierra Nevada Brewing Company, validated the idea that their solution solved a widespread industry gap. Along the way, Carl reflects on building a company from the ground up, the value of staying close to users, and why solving a real problem matters far more than chasing technology trends. More than two decades later, he remains energized by seeing what engineers create with Ignition and by staying connected to the people whose work the software powers every day. Explore how Carl Gould helped modernize industrial software by focusing on real problems engineers face in this episode of The First Customer!

Guest Info:
Inductive Automation: http://www.inductiveautomation.com
Carl Gould's LinkedIn: https://www.linkedin.com/in/carl-gould/
Connect with Jay on LinkedIn: https://www.linkedin.com/in/jayaigner/
The First Customer YouTube Channel: https://www.youtube.com/@thefirstcustomerpodcast
The First Customer podcast website: https://www.firstcustomerpodcast.com
Follow The First Customer on LinkedIn: http://www.linkedin.com/company/the-first-customer-podcast/

    Invest Like the Best with Patrick O'Shaughnessy
    Shyam Sankar - Celebrating Heretics - [Invest Like the Best, EP.462]

    Invest Like the Best with Patrick O'Shaughnessy

    Play Episode Listen Later Mar 10, 2026 81:38


My guest today is Shyam Sankar, the CTO of Palantir Technologies. In this conversation, we explore the ideas that shape how Shyam thinks about technology, talent, and national power. We discuss the origins of Palantir's forward-deployed engineering model and the lessons he learned from Alex Karp about identifying people's "superpowers". We also talk about Shyam's fascination with the "heretics" of American history, the unconventional builders who challenged bureaucracy and created many of the systems that powered America's military and industrial success. Shyam argues that the United States must reindustrialize after decades of moving production overseas, and explains what we can learn from America's industrial past. In a new Colossus profile, our Editor in Chief Jeremy Stern tells the story of how Shyam became one of the most important but largely unseen figures behind Palantir, tracing his journey from immigrant roots to employee #13 and the architect of the company's success and distinctive culture. For the full show notes, transcript, and links to mentioned content, check out the episode page here.

-----

Become a Colossus member to get our quarterly print magazine and private audio experience, including exclusive profiles and early access to select episodes. Subscribe at colossus.com/subscribe.

-----

Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Go to ramp.com/invest to sign up for free and get a $250 welcome bonus.

-----

Trusted by thousands of businesses, Vanta continuously monitors your security posture and streamlines audits so you can win enterprise deals and build customer trust without the traditional overhead. Visit vanta.com/invest.

-----

WorkOS is a developer platform that enables SaaS companies to quickly add enterprise features to their applications. Visit WorkOS.com to transform your application into an enterprise-ready solution in minutes, not months.

-----

Rogo is an AI-powered platform that automates accounts payable workflows, enabling finance teams to process invoices faster and with greater accuracy. Learn more at Rogo.ai/invest.

-----

Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. Visit ridgeline.ai.

-----

Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com).

Timestamps:
(00:00:00) Welcome to Invest Like the Best
(00:02:43) Intro: Shyam Sankar
(00:03:24) Defining Heretics in US Military History
(00:05:01) The Story of Hyman Rickover
(00:09:55) Formative Experiences & Worldview
(00:14:50) Components of American Greatness
(00:17:56) How to Unlock Talent
(00:25:56) Palantir's Distinct Culture
(00:28:15) Origin of Forward Deployed Engineering
(00:34:24) What Does Palantir Actually Do?
(00:36:19) Example: Airbus
(00:40:20) State of the US Military Today
(00:47:33) The U.S. Needs to Reindustrialize
(00:52:19) Perspective of China
(00:55:56) Our Key Asymmetric Advantages
(01:00:57) Executive Orders for a Day
(01:02:37) Negative Aspects of US Culture
(01:04:47) Managing Rapid Pivots
(01:09:17) Where Will AI Value Accrue?
(01:12:37) Undeclared State of Emergency
(01:15:45) Surprising Aspects of Palantir
(01:17:50) To Do or To Be
(01:18:50) Reflecting on Fatherhood
(01:19:46) The Kindest Thing

    This Week in Machine Learning & Artificial Intelligence (AI) Podcast
    Agent Swarms and Knowledge Graphs for Autonomous Software Development with Siddhant Pardeshi - #763

    This Week in Machine Learning & Artificial Intelligence (AI) Podcast

    Play Episode Listen Later Mar 10, 2026 76:14


    In this episode, Sid Pardeshi, co-founder and CTO of Blitzy, joins us to discuss building autonomous development systems able to deliver production-ready software at enterprise scale. Sid contrasts AI-assisted coding with end-to-end autonomy, arguing that “code is a commodity” and acceptance is the real metric—security, standards, tests, and maintainability included. We explore Blitzy's hybrid graph-plus-vector approach, which grounds agents and combines semantic signals with keyword search to navigate large repositories efficiently. Sid breaks down context and agent engineering, how effective context windows have plateaued, and why dynamic agent personas, tool selection, and model-specific prompting matter at scale. He details their orchestration of large swarms of AI agents to collaboratively analyze codebases, plan tasks, and execute complex tasks in parallel. We also dig into why Agents.md and flat memories break down, storing feedback in the knowledge graph, and building real-world evals beyond leaderboards to choose the right model for each task. The complete show notes for this episode can be found at https://twimlai.com/go/763.
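The show notes describe Blitzy combining semantic (vector) signals with keyword search over large repositories. The episode does not specify how the two ranked result lists are merged; one common technique for this kind of hybrid retrieval is reciprocal rank fusion (RRF), sketched below with hypothetical chunk ids (this is a generic illustration, not Blitzy's implementation):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids into one ranking.

    rankings: list of ranked lists, each ordered best-first.
    k: smoothing constant; 60 is the value commonly used in practice.
    Each document scores sum(1 / (k + rank)) across the lists it appears in.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical top results from each retriever over a codebase:
semantic = ["chunk_a", "chunk_b", "chunk_c"]   # vector-search order
keyword  = ["chunk_c", "chunk_a", "chunk_d"]   # keyword-search order
print(reciprocal_rank_fusion([semantic, keyword]))
```

Because RRF works on ranks rather than raw scores, it avoids having to calibrate vector-similarity scores against keyword (e.g. BM25) scores, which live on incomparable scales.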

    Packet Pushers - Full Podcast Feed
    HS126: AI Everything, AI Everywhere, AI All At Once

    Packet Pushers - Full Podcast Feed

    Play Episode Listen Later Mar 10, 2026 39:37


    At CES in January, NVIDIA, AMD, Siemens and others spun elaborate tales of a world suffused with AI: AI in the cloud, AI at the desktop, AI in the factory, AI underneath enterprise software and as the UI for enterprise software and agentically accomplishing anything and everything in a world of embodied, physical AI. Johna... Read more »

    Tacos and Tech Podcast
    Logistics Wins Wars

    Tacos and Tech Podcast

    Play Episode Listen Later Mar 10, 2026 33:56


In this episode of Tacos & Tech, Neal Bloom sits down with Peter Goldsborough, co-founder and CTO of Rune, to unpack one of the most overlooked but decisive factors in modern warfare: logistics. Peter shares how his background in software and defense tech led him to a simple realization: while billions have been spent on weapons and command systems, military logistics still runs on spreadsheets, whiteboards, and paper. The conversation explores why future conflicts will be won or lost on decision speed, not firepower, and how Rune is turning logistics into a real-time, data-driven decision system used by the Army and Marines today. This episode dives into defense innovation, software in degraded environments, and why fixing logistics isn't just a military problem; it's a cognitive one.

Key Topics:
* Why logistics decides wars
* The problem with spreadsheets and whiteboards in the DoD
* Turning logistics into a real-time decision system
* Defense tech speed vs legacy procurement
* Software for disconnected and high-stress environments
* Lessons from Ukraine and the Pacific theater
* When logistics becomes the bottleneck
* Building resilient software for the physical world

Links:
* Rune Connect on LinkedIn
* Peter Goldsborough
* Neal Bloom

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit risingtidepartners.substack.com/subscribe

    Develpreneur: Become a Better Developer and Entrepreneur
    Building Forward Momentum as a Developer Entrepreneur

    Develpreneur: Become a Better Developer and Entrepreneur

    Play Episode Listen Later Mar 10, 2026 29:35


Building forward momentum isn't about moving fast. Rather, it's about moving intentionally, especially when transitioning from developer to entrepreneur. In Season 27 of the Building Better Developers podcast, we explore what it truly means to keep progressing when challenges, distractions, and new responsibilities threaten to slow you down. In this episode, Andrew Stevens, a software engineer, multi-time founder, CTO, and board member, shares how building forward momentum has shaped his multi-decade journey through technology and startups. Instead of focusing on overnight success, his story emphasizes sustained curiosity, disciplined execution, and constant recalibration. Over time, momentum is built layer by layer, not in dramatic bursts.

Building Forward Momentum Through Collaboration

At first, Andrew's entrepreneurial journey didn't begin alone. It started with collaboration. During the early dial-up internet era, local ISPs were emerging everywhere. At that point, Andrew joined forces with two complementary partners. While he focused on writing software, one partner handled infrastructure, and another concentrated on sales and commercialization. Because each person owned a specific strength, the venture gained traction quickly. This alignment created confidence. No single individual carried the entire burden, which reduced risk and accelerated learning. Building forward momentum often begins with the right partnerships, not total independence. In other words, developers don't need to master every business function before launching something new. Clarity about strengths, and awareness of gaps, is far more powerful.

Building Forward Momentum During the Engineer-to-Founder Shift

Eventually, Andrew transitioned into more solo ventures. At that stage, the dynamic shifted dramatically. Coding was no longer the only priority. Sales conversations, tax planning, customer communication, and financial oversight became daily responsibilities. As complexity increased, the temptation to retreat into technical work grew stronger. Many developers stall at this point. Technical tasks feel comfortable, whereas business responsibilities feel ambiguous. Meanwhile, operational issues quietly accumulate. Andrew openly discusses early financial mistakes and process failures. Nevertheless, those moments didn't stop progress. Instead, they forced adjustments that strengthened the foundation. Building forward momentum requires correction, not perfection. Entrepreneurship rarely follows a straight line. Each misstep generates feedback, and each adjustment reinforces resilience.

Building Forward Momentum with AI as Leverage

Alongside structured execution, Andrew emphasizes the strategic use of AI. One approach treats AI as a tool. He leverages it for rapid prototyping, static analysis, architecture critiques, and test case generation. In addition, AI significantly shortens debugging cycles, particularly when configuration issues arise. That said, production code still demands human judgment. AI accelerates iteration, but discernment remains essential. A second perspective positions AI as a channel. Increasingly, users ask AI systems for recommendations before making purchasing decisions. Consequently, products must be structured for discoverability within AI-driven ecosystems. Unlike traditional SEO, this requires thinking about how AI systems reference and surface information. AI doesn't replace disciplined builders; it amplifies their capacity. By reducing research time and accelerating experimentation, AI expands a founder's ability to test ideas. More testing leads to stronger building forward momentum.

Building Forward Momentum Through Structured Execution

Rather than relying on vague annual goals, Andrew breaks execution into focused horizons:

* Today
* This week
* This month

This framework creates clarity without overwhelm. At the same time, he rejects the illusion of 100% productivity. Just as engineering teams cannot operate at full capacity indefinitely, founders cannot either. Space must be preserved for:

* Personal development
* Industry research
* Technical skill refinement
* Creative exploration

Even while serving in executive roles, Andrew continues writing code. Staying close to the craft keeps strategic decisions grounded in technical reality. When skill development stops, momentum quietly declines. Protecting growth time is just as important as meeting deadlines.

Building Forward Momentum Sustainably

Entrepreneurship can feel isolating. Responsibility compounds, and decisions stack up quickly. For that reason, Andrew values trusted collaboration, including working alongside his spouse for nearly two decades. A reliable sounding board provides both stability and accountability. Unfinished edits will always exist. Features will occasionally slip. Competing ideas will demand attention. However, building forward momentum is not about tackling everything at once. Progress comes from choosing the next meaningful step and executing it consistently.

The Real Lesson

Ultimately, building forward momentum isn't defined by dramatic breakthroughs. It grows from sustained curiosity, strategic collaboration, structured execution, intelligent leverage of tools, and continuous personal development. Developers stepping into entrepreneurship often expect transformation to feel explosive. In reality, momentum compounds through disciplined repetition. Keep building. Keep learning. Keep adjusting. Over time, consistent forward motion turns into lasting impact.

Stay Connected: Join the Developreneur Community

    Heavy Strategy
    HS126: AI Everything, AI Everywhere, AI All At Once

    Heavy Strategy

    Play Episode Listen Later Mar 10, 2026 39:37


    At CES in January, NVIDIA, AMD, Siemens and others spun elaborate tales of a world suffused with AI: AI in the cloud, AI at the desktop, AI in the factory, AI underneath enterprise software and as the UI for enterprise software and agentically accomplishing anything and everything in a world of embodied, physical AI. Johna... Read more »

    The Matthews Mentality Podcast
    E101 - Marcus Whitney | From Waiting Tables to Venture Capital

    The Matthews Mentality Podcast

    Play Episode Listen Later Mar 10, 2026 76:17


Entrepreneur and healthcare investor Marcus Whitney joins the Matthews Mentality podcast to discuss his journey from growing up in Brooklyn and dropping out of college to teaching himself to code, becoming a CTO, and eventually co-founding and leading Jumpstart Health Investors, an early-stage healthcare venture firm. He explains why building companies in healthcare is uniquely difficult due to regulation and structural barriers like certificates of need, and why healthcare is now being "flanked" by multiple forces of disruption. Whitney also shares how he helped build Nashville SC from a fourth-division nonprofit into an MLS franchise, why he wrote Create and Orchestrate after experiences with prison entrepreneurship programs and a near-jail moment in his youth, and what jiu-jitsu taught him about humility, resilience, and pain tolerance.

00:00 Entrepreneurship Fine Line
00:58 Meet Marcus Whitney
02:09 Jiu Jitsu Origins
02:39 Healthcare VC Focus
05:39 Why Healthcare Is Hard
07:40 Brooklyn Eighties
10:13 Self Belief And Wrestling
13:15 College Dropout Lessons
15:18 First Kid Turning Point
16:08 Learning To Code
17:51 First Startup Reality Check
21:12 Entrepreneurship Misconceptions
22:38 Velocity Hits Healthcare
24:04 Nashville Healthcare Edge
25:01 Pitching And Portfolio Math
27:56 Knowing You Hit It
31:40 Nashville SC Ownership
34:33 Why He Wrote The Book
38:37 Risk And Learning Styles
38:53 Writing The Book
39:30 Kids And Being Heard
39:56 Finding Jiu Jitsu
42:12 Competing And Winning Worlds
44:21 Training Pressure And Balance
46:12 Jiu Jitsu Lessons For Life
49:07 Humility And Team Culture
52:59 Why Mexico City
55:46 Decisive Moves And Safety
58:49 Advice To My Younger Self
01:00:55 Integrity For Entrepreneurs
01:03:00 Message To Brooklyn Kids
01:07:47 A Co Founder Who Had My Back
01:10:37 Co Founding Nashville SC
01:12:32 Hardest Parts And Politics
01:15:28 Closing Thoughts And Farewell

    Bricks & Bytes
    Global CTO: “AI Met Construction Where It Already Was - And Everything Changed” | Alain Waha, Buro Happold

    Bricks & Bytes

    Play Episode Listen Later Mar 10, 2026 64:30


    "We built entire cities using PDFs and drawings. That's not a failure — that's a miracle. Now imagine what we build with the right tools."In today's episode of Bricks and Bytes, we had Alain Waha, CTO at BuroHappold Engineering, discussing AI transformation, the future of physical AI, and why 2026 already feels like three years have passed in nine weeks.Tune in to find out about:✅ Why construction being the least digitized industry is actually its biggest opportunity right now ✅ How AI is finally solving the Tower of Babel problem that's plagued AEC for decades ✅ Why firms need to choose — compete on cost or build a value brand — before it's too late ✅ Why foundational AI models for the physical world don't exist yet, and what it'll take to get thereCatch the full episode on Spotify and YouTube

    Soft Skills Engineering
    Episode 503: Hardware is hard and my PMs are pushing AI slop code

    Soft Skills Engineering

    Play Episode Listen Later Mar 9, 2026 36:30


In this episode, Dave and Jamison answer these questions: I'm a software developer with about 15 years in the industry, and I am soon starting as the CTO of a robotics company with about 50 employees. Though I have years of experience and an academic background within the field of robotics, I have always been focused on the software side of things. In my new role, I am ultimately responsible for the hardware team as well. How do I go about earning the respect, and becoming an effective leader, of my new colleagues working in a field in which I am not an expert myself? Hi, I'm meowmeow, and I've enjoyed your podcast for a long time. I'm working at a small engineering company which doesn't have much profit. Recently, the PMs at my company (including the CEO) have started "vibe coding" directly on our product. They've even added PMs to the project planning list as contributors. Whenever they open a PR, the code is AI-generated and reflects their personal working style. The code quality is fairly low, and engineers end up spending a lot of time reviewing and fixing it, even though we're already under a heavy workload. Our CEO comes from a product management background. He believes PMs should write code and deploy their own implementations, and that engineers are not fast enough and should simply move faster. I've already been feeling stressed due to the workload, and this situation seems to be making it worse. Engineering leadership doesn't seem able to push back effectively. What should I do?

    Go To Market Grit
    How Sierra Outpaced Every AI Startup | Co-founder Bret Taylor

    Go To Market Grit

    Play Episode Listen Later Mar 9, 2026 72:38


Few founders have seen Silicon Valley from every seat at the table. After co-creating Google Maps at Google, serving as CTO at Facebook, and later as co-CEO of Salesforce, Bret Taylor is now building AI agents at Sierra to redefine customer experience. On Grit, he explains why "competitive intensity" is a core value at their fast-growing company and why he believes AI won't lead to a world where people stop working.

Guest: Bret Taylor, co-founder of Sierra
Connect with Bret: X, LinkedIn
Connect with Joubin: X, LinkedIn
Email: grit@kleinerperkins.com
Follow Grit: LinkedIn, X
Learn more about Kleiner Perkins: https://www.kleinerperkins.com/

    Town Hall Seattle Science Series
    255. Blaise Agüera y Arcas: What Is Intelligence?

    Town Hall Seattle Science Series

    Play Episode Listen Later Mar 9, 2026 75:03


    What intelligence really is, and how AI's emergence is a natural consequence of evolution. It has come as a shock to some AI researchers that a large neural net that predicts next words seems to produce a system with general intelligence. Yet this is consistent with a long-held view among some neuroscientists that the brain evolved precisely to predict the future—the "predictive brain" hypothesis. In What Is Intelligence?, Blaise Agüera y Arcas takes up this idea—that prediction is fundamental not only to intelligence and the brain but to life itself—and explores the wide-ranging implications. These include radical new perspectives on the computational properties of living systems, the evolutionary and social origins of intelligence, the relationship between models and reality, entropy and the nature of time, the meaning of free will, the problem of consciousness, and the ethics of machine intelligence. The book offers a unified picture of intelligence from molecules to organisms, societies, and AI, drawing from a wide array of literature in many fields, including computer science and machine learning, biology, physics, and neuroscience. It also adds recent and novel findings from the author, his research team, and colleagues. Combining technical rigor and deep up-to-the-minute knowledge about AI development, the natural sciences (especially neuroscience), and philosophical literacy, What Is Intelligence? argues—quite against the grain—that certain modern AI systems do indeed have a claim to intelligence, consciousness, and free will.Blaise Agüera y Arcas is a researcher and author focused on artificial intelligence, sociality, evolution, and software development. He is a VP and Fellow at Google, where he is the CTO of Technology & Society and founder of Paradigms of Intelligence (Pi). He is a frequent speaker at TED and has been featured in the Economist and Noēma, and has previously published the books Who Are We Now? and Ubi Sunt. 
Buy the book: What Is Intelligence?: Lessons from AI About Evolution, Computing, and Minds (Elliott Bay Book Company)

    Impact Quantum: A Podcast for Engineers
    Quantum's Role in Pushing AI Beyond Its Current Boundaries

    Impact Quantum: A Podcast for Engineers

    Play Episode Listen Later Mar 9, 2026 51:18 Transcription Available


In this episode, host Frank La Vigne and co-host Candice Gillhoolley sit down with Danny Wall, the founder, CEO, and CTO of OA Quantum Labs, for an in-depth conversation about the real-world intersection of quantum computing and artificial intelligence. You'll hear Danny Wall pull the curtain back on how OA Quantum Labs is pushing quantum solutions beyond the research phase and into commercially viable applications. From accelerating AI training and inference to spinning out novel materials at lightning speed, Danny shares firsthand stories about quantum-enhanced breakthroughs in material science, finance, and more. This episode dives into common misconceptions, like the idea that AI is actually running on quantum computers, and Danny explains the nuanced current reality: quantum as an incredible mathematical accelerator and enhancement for AI, rather than a full replacement. You'll also get practical advice for developers, researchers, and investors eager to get started with quantum, and insights on what it really takes to stay ahead in a field moving as fast as quantum. If you're curious about how quantum technologies are escaping the confines of the lab and making real commercial impact, this is the episode you've been waiting for!

Time Stamps:
00:00 "Quantum Labs Driving AI Innovation"
03:31 "Quantum Computing Enhances AI Efficiency"
09:32 Advanced Materials Breakthroughs Revolutionizing Industries
12:58 Quantum Investing: Beyond PhD Pedigrees
15:47 "Quantum, Solutions, and Strategic Investment"
18:05 "Jump Into Quantum Development"
20:27 "Quantum Enhancement for AI Solutions"
25:38 AI Limits and Misconceptions
27:02 "AI Creativity Hack with Roles"
33:16 "Challenges in Quantum Error Correction"
36:37 Quantum Computing's Material Challenges
38:02 "AI Progress Hitting Limits"
42:49 "Quantum Encryption and Neural Networks"
47:19 "Schrödinger's Cat Explained Simply"
48:16 "Quantum Physics Misconceptions Explained"

    Scrum Master Toolbox Podcast
    BONUS: Leadership Is Contextual With Daniel Harcek

    Scrum Master Toolbox Podcast

    Play Episode Listen Later Mar 8, 2026 41:44


In this CTO Series episode, Daniel Harcek shares how leading engineering teams across radically different scales, from a 7-person fintech startup to a 2,000-person cybersecurity company, taught him that leadership isn't one-size-fits-all. We explore how he builds AI-first organizations, drives agile transformations, and why he believes every person in a company should think like a tech person.

What Works at 10 People Breaks at 100

"Leadership is contextual, not absolute. What works with 10 people breaks at 50, at 100."

Daniel's career spans from building a 30-person team for a German startup out of Žilina, Slovakia, to leading 70 engineers at Avast's mobile division within a 2,000-person organization, and now running a 7-person team at WageNow. Each scale demanded a fundamentally different approach. At smaller scales, you strip away operational overhead and push ownership directly to the people. At larger scales, you need guardrails, dedicated roles, and structured processes that the smaller team would find suffocating. The lesson: don't carry your playbook from one context to another; rebuild it for the reality you're in.

End-to-End Ownership Replaces Specialized Roles

"Each engineer owns quality for the task he delivers. And he owns the fact that it comes to production."

At WageNow, Daniel runs without dedicated QA people, in a fintech company where quality can't be compromised. Instead, each developer owns quality end-to-end, from code to production. This isn't recklessness; it's intentional design. When teams are small, you set up the system so that it's safe to break things, then trust people with hard tasks. The result: people grow faster, move faster, and care more about what they ship. In larger organizations, you might need specialized DevOps, QA, and platform roles, but the principle of ownership stays the same.

The Buddy System and Scaling Without Losing Alignment

"The buddy system is one of the easiest things you can do. One buddy for a newcomer for the first 1, 3, 6 months — they often become friends."

When scaling fast, Daniel focuses on three things: strong onboarding guides, well-maintained documentation (now much easier with AI), and a buddy system that pairs every newcomer with a dedicated colleague. The buddy system works because it scales the human side of onboarding: a tech lead or manager can do one-on-ones, but that's formal, and new people might be scared to speak up. The buddy creates a safe channel for questions, concerns, and cultural integration. Beyond people, scaling also means investing in automation and observability so that as you grow with customers, you grow with failures too, and your incident reporting doesn't burn out the team.

Building an AI-First Organization

"Every person uses AI. Every person has the capability to use AI. The company builds a second brain so AI can build on top of that."

At WageNow, Daniel has implemented what he calls an AI-first organization, inspired by Spotify and other companies pioneering this approach. The concept is simple: before doing any task, ask whether AI can help you deliver the output faster or better. This applies across the entire company, not just engineering. Daniel looks for people in HR, accounting, and UX who understand automation tools like n8n or Make.com alongside AI. The key ingredients:

* Curate the data: build a company "second brain" with clean, structured context for AI tools to work with
* Train the muscle: AI ability is like a muscle; people must use it daily, because these skills didn't exist 2-3 years ago
* Share what works: exponential AI adoption happened at WageNow once people started sharing their successes and failures with AI tools
* Respect the guardrails: data privacy and regulation compliance remain non-negotiable

The hidden productivity gains, Daniel argues, lie not in engineering (which gets all the attention) but in operations, accounting, HR, and every other area of the business.

Selling Transformation: Financial Arguments for Leaders, Ownership for Teams

"For the leaders, it's the financial thing and the cultural thing. For the people doing the work, it's personal development — having more control, having more ownership."

At Ringier Axel Springer, Daniel proposed and led a company-wide agile transformation, a 1-2 year effort that required convincing the CEO, product teams, marketing, and sales to change how they operate. His approach: build a dual argument. For leadership, frame the change in financial and cultural terms: more revenue with the same people, and better visibility into how work translates to business outcomes. For the people doing the work, emphasize personal growth, increased ownership, and transparency. The transformation breaks silos between engineering and product, creating a shared backlog agreed with all stakeholders. Daniel looks for people with high agency, those who can reinvent and change themselves from the inside, not just wait for a change agent from the outside.

Balancing Experimentation with Operational Excellence

"The SRE books helped me understand quality as a feature — because quality is basically how reliable you are for your customers."

When asked about the books that most influenced his approach as a CTO, Daniel points to the Site Reliability Engineering series from Google: three books that frame quality as reliability, a feature your customers experience directly. Alongside those, he recommends The Lean Startup by Eric Ries, because he believes all tech people should have a sense of business and customer understanding. Together, these books guide how to balance rapid experimentation with operational excellence as the organization scales.

About Daniel Harcek

Daniel is a technology executive with a proven record of scaling engineering organizations across fintech, cybersecurity, and digital media. He builds AI-first teams, operating models, and delivery cultures aligned with product strategy. He has led platforms serving 30M MAU, deployed fintech capital pilots, transformed agile delivery at internet scale, and actively mentors global tech communities and ecosystems worldwide.

You can link with Daniel Harcek on LinkedIn.

    Risky Business News
    Sponsored: What it means to be a learning organisation

    Risky Business News

    Play Episode Listen Later Mar 8, 2026 14:40


    In this Risky Business sponsor interview, Marco Slaviero, CTO of Thinkst, talks to Tom Uren about how the company ensures that it is a learning organisation. The pair discuss the company's investment in its Thinkst Labs, how it differs from other security research labs, and how it helps grow products and people. Show notes

    GeekWire
    On location at OpenAI in Bellevue, with CTO of Applications Vijaye Raji

    GeekWire

    Play Episode Listen Later Mar 7, 2026 37:01


    OpenAI just opened its largest office outside San Francisco, in downtown Bellevue, Wash. GeekWire was there on day one to tour the space. Chatting inside the OpenAI game room, we share our observations about the Mad Men-meets-Pacific Northwest aesthetic, which features open floor plans and lots of common areas, and try to figure out what it all says about OpenAI's culture. Plus, we talk with Vijaye Raji, the former Statsig CEO who is now OpenAI's CTO of applications, about Codex, infrastructure, hiring, and the evolution and growth of Silicon Valley tech giants in the region. In our final segment, it's the return of the GeekWire trivia challenge, with a question focusing on one of the earliest tech giants to establish an outpost in the Seattle area. Related Story: Inside OpenAI's new Bellevue office: A swanky statement about AI's impact on the Seattle region. Upcoming Event: Agents of Transformation, March 24, with GeekWire co-founders Todd Bishop and John Cook. Edited by Curt Milton. See omnystudio.com/listener for privacy information.

    The JV Show Podcast
    I Can't Eat Your Loaf

    The JV Show Podcast

    Play Episode Listen Later Mar 6, 2026 81:35 Transcription Available


    On today's 3.6.26 show Chidi joins us for Chidi's Tweets, going to a play by yourself, boy kibble, Eric Dane is posthumously releasing a book, another teacher strike in the Bay Area, more details on Britney Spears's arrest, Wendy's is looking for a CTO, we played our Chug Wheel game and more! See omnystudio.com/listener for privacy information.

    Molecule to Market: Inside the outsourcing space
    M2M Pulse: 7 CDMO Trends You Can't Ignore

    Molecule to Market: Inside the outsourcing space

    Play Episode Listen Later Mar 6, 2026 26:11


    In this episode of Molecule to Market, you'll go inside the outsourcing space of the global drug development sector with 20+ CEO and C-suite leaders from the CDMO ecosystem, exploring the overlooked trends for 2026.

Nick Fortin, CEO, Codis
Ankit Gupta, CEO, InstaPill
Eric Edwards, MD, PhD, CEO, Phlow USA
Dirk T. Lange, CEO, Pyramid Pharma Services
Kaan-Fabian Kekec, Partner, Simon-Kucher Healthcare and Life Sciences
J.D. Mowery, President, CDMO Division, Bora Pharmaceuticals
Matthew Bio, CSO, Cambrex and President, Snapdragon Chemistry
Bill Vincent, Biotech Entrepreneur, CEO and Board Member
Philip Macnabb, CEO, Curia
Christiane Bardroff, COO Leader
Jason Anderson, CEO, Ensera
Mark B, Anonymous CEO (not to be quoted by name)
Adam Siebert, Managing Director, L.E.K. Consulting
Stephen Dilly, CEO, Sonoma Biotherapeutics
Elisabeth Stampa, CEO, Medichem
Jon Alberdi, CEO, Vivebiotech
Derek Hennecke, Founder, Investor and Board Member
Bruce Thompson, CTO, Kincell Bio

Molecule to Market is also sponsored by Bora Pharmaceuticals, and supported by Lead Candidate. Please subscribe, tell your industry colleagues and join us in celebrating and promoting the value and importance of the global life science outsourcing space. We'd also appreciate a positive rating!

    SemiWiki.com
    Podcast EP334: The Unique Benefits of LightSolver's Laser Processing Unit Technology with Dr. Chene Tradonsky

    SemiWiki.com

    Play Episode Listen Later Mar 6, 2026 18:10


    Daniel is joined by Dr. Chene Tradonsky, a physicist and the CTO and co-founder of LightSolver, where he leads the development of a proprietary physics-based computing system built on coupled laser dynamics to accelerate compute-heavy simulations and other computationally demanding workloads. Before moving into physics,…

    Ciena Network Insights
    Episode 94: Submarine Cables in the AI and Cloud Era

    Ciena Network Insights

    Play Episode Listen Later Mar 6, 2026 24:27


    Want to learn more about how some of the oldest submarine cables are still playing a key role in the AI era? In this episode, FLAG's Chief Network Officer, Brad Kneller, and VP of Product & Marketing, Nadya Melic, join Ciena's Gautam Billa, VP of Sales Engineering & CTO, APJI, to share insights on some of the factors contributing to the continued viability and sustainability of a submarine cable. The three discuss how FLAG is complementing its new cable buildouts by extending the lifespan of its existing cables, using advanced technologies, proactive monitoring and intelligent data analysis. As part of FLAG's Vision 2030 strategy, the private submarine cable operator is taking bold steps to optimize its network to support growing capacity demands driven by AI, cloud and other bandwidth-intensive applications.

    The Confident Commit
    AI at Superhuman (before it was cool) feat. Loïc Houssier

    The Confident Commit

    Play Episode Listen Later Mar 6, 2026 38:37


    What does it actually look like to build an AI-native product and lead an engineering team through the AI era when you've been doing it longer than most? Rob Zuber sits down with Loïc Houssier, CTO at Superhuman, to talk about what it meant to be an AI company before AI was everywhere, and how that early foundation shapes the way they build, ship, and think today. The conversation covers how Loïc drove AI tool adoption across his engineering org without mandates (and which senior engineer's change of heart became a cultural turning point), why great UX is still the real moat in an age where anyone can ship an average product fast, and how email, despite everything, remains the connective tissue of professional life. Plus: what it's like to rethink your entire SDLC when the economics of building software change overnight. Have someone you'd like to hear on the show? Reach out to us on X at @CircleCI!

    The Data Exchange with Ben Lorica
    Adaptation: The Missing Layer Between Apps and Foundation Models

    The Data Exchange with Ben Lorica

    Play Episode Listen Later Mar 5, 2026 33:12


    Ben Lorica talks with Sudip Roy (Co-founder & CTO, Adaption Labs) about why enterprise AI adoption stalls in the “last 5%” of reliability — and why waiting for the next frontier model release is usually the wrong bet. They unpack “adaptation” as something broader than post-training, including gradient-free, inference-time techniques that can sit above models to route, combine, and continuously improve behavior.Subscribe to the Gradient Flow Newsletter

    The Tech Blog Writer Podcast
    Hiring AI Talent Across Borders With Alcor

    The Tech Blog Writer Podcast

    Play Episode Listen Later Mar 5, 2026 42:49


    Have you ever looked at a global hiring plan and wondered whether you are building a team, or accidentally buying a bundle of hidden fees, legal risk, and avoidable stress? In this episode, I'm joined by Oksana Petrus from Alcor, where she leads customer success and operations, helping tech companies build and scale engineering teams across Eastern Europe and Latin America. If you have ever tried to expand beyond your home market, you know the promise is real: access to great talent, broader coverage across time zones, and the chance to build faster. But the reality can get messy quickly once contracts, compliance, culture, and cost assumptions collide. Oksana brings a sharp perspective because she has seen both sides. Earlier in her career she worked as a lawyer with outsourcing providers, so she understands how pricing structures and contracts can create surprises once a team is already in motion. We talk about why so many leaders start out thinking outsourcing will be simple, then discover they cannot clearly see what they are paying for, who is actually doing the work, or how much of the spend is going to overhead. We also discuss the growing challenge of trust in recruiting, especially as AI tools make it easier to fake profiles, inflate experience, and even perform better in interviews than the person behind the screen can deliver on the job. Oksana shares how teams are responding with stronger verification, background checks, and a more transparent operating model so hiring managers can feel confident about who they are bringing in. We also dig into the real cost of global scaling, and why "salary charts" are only the starting point. Oksana explains how benefits, taxes, local customs like a 13th salary, currency controls, and even language realities can derail budgets and slow hiring if teams do not have local insight. The result is often frustration on both sides: candidates lose momentum, managers lose time, and projects drift. 
Culture comes through as a theme too, and not in a vague, feel-good way. We talk about how different regions communicate, how expectations need to be set early, and why "challenge culture" can be a strength when leaders welcome it. Oksana shares an example of a CTO who came to value Eastern European teams precisely because they questioned decisions and offered alternatives that improved outcomes. If you are a founder, CTO, or business leader thinking about scaling an engineering team this year, this episode is a practical look at what tends to go wrong, why it gets expensive, and how to build a smarter path forward without overcommitting too early. Where do you think the line is between smart global expansion and taking on complexity before your business is ready for it, and what has your own experience taught you?

    Modern CTO with Joel Beasley
    Tech Titans: Why Accidental Managers are the Best Leaders with Rajeev Rajan, CTO at Atlassian

    Modern CTO with Joel Beasley

    Play Episode Listen Later Mar 5, 2026 21:19


    The best managers are the ones who never wanted the job. Today, we're talking to Rajeev Rajan, CTO at Atlassian. We discuss why developer joy outperforms productivity as an engineering goal, how the best managers are the ones who never wanted the job, and why every leadership playbook you've built stops working the moment your team grows. All of this right here, right now, on the Modern CTO Podcast!  To learn more about Atlassian, check out their website here.

    Screaming in the Cloud
    Everything Is a Graph (Even Your Dad Jokes) with Roi Lipman

    Screaming in the Cloud

    Play Episode Listen Later Mar 5, 2026 38:53


    In this episode of Screaming in the Cloud, host Corey Quinn sits down with Roi Lipman, CTO and co-founder of FalkorDB, to unpack the evolving role of graph databases in a world overflowing with data stores. Roi shares his journey from building RedisGraph at Redis to spinning it out into FalkorDB, along with his enduring love of the C programming language (dad jokes included). The conversation explores why graph databases remain niche but powerful, especially for pathfinding problems like supply chains and access management, how vector search became a feature rather than a standalone database, and what AI-assisted development means for modern engineering. Along the way, they tackle open source sustainability, Rust rewrites, AI-generated pull request chaos, and the looming question of where the next generation of senior engineers will come from.

Highlights:
(00:00) C Language
(00:27) Welcome
(01:18) Database Landscape Overview
(03:17) Why Graph Databases Matter
(07:25) AI-Built Apps and Data Choices
(10:29) How FalkorDB Fits In
(12:20) Vector Search as a Feature
(16:48) FalkorDB Origin Story
(19:54) Open Source Business and Rust Rewrite
(25:23) Toy Graph Problems and Closing Thoughts

Sponsored by: duckbillhq.com

    Sub Club
    Why Web Onboarding Should Sell The Problem, Instead Of The Solution – Leon Sasson, Rise Science

    Sub Club

    Play Episode Listen Later Mar 5, 2026 21:21


    On the podcast: why web onboarding should sell the problem instead of the solution, how discounted paid trials are beating free trials, and why creative that flopped for app ads might crush it for web funnels. This conversation is shorter than usual and will be featured in RevenueCat's State of Subscription Apps report. Each episode in this series will explore one crucial topic and share actionable insights from top subscription app operators.

Top Takeaways:

    NACE International Podcasts
    Innovation Awards: Portable Plating Shop for On-Aircraft Coating Repair

    NACE International Podcasts

    Play Episode Listen Later Mar 5, 2026 38:00


    Dr. Alan Rose, CEO at Corrdesa, and Dr. Siva Palani, CTO, are the latest guests in our ongoing series profiling MP's 2025 Corrosion Innovation of the Year Awards. In this episode, they discuss the company's award-winning innovation, “Portable Plating Shop for On-Aircraft Coating Repair.” Used in over 100 systems globally, the technology addresses aircraft corrosion issues by providing rapid, non-drip plating solutions.

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

    The reception to our recent post on Code Reviews has been strong. Catch up!

Amid a maelstrom of discussion on whether or not AI is killing SaaS, one of the top publicly listed SaaS companies in the world has just reported record revenues, clearing well over $1.1B in ARR for the first time with a 28% margin. As we comment on the pod, Aaron Levie is the rare public-company CEO equally at home in both worlds of Silicon Valley and Wall Street/Main Street: by day helping 70% of the Fortune 500 with their Enterprise Advanced Suite, yet by night often found in the basements of early startups and tweeting viral insights about the future of agents.

Now that Cursor, Cloudflare, Perplexity, Anthropic, and more have made Filesystems and Sandboxes and various forms of "Just Give the Agent a Box" cool (not just cool; it is now one of the single hottest areas in AI infrastructure, growing 100% MoM), we find it a delightfully appropriate time to do the episode with the OG CEO who has been giving humans and computers Boxes since he was a college dropout pitching VCs at a Michael Arrington house party.

Enjoy our special pod, with fan-favorite returning guest/guest cohost Jeff Huber!

Note: We didn't directly discuss the AI vs SaaS debate - Aaron has done many, many, many other podcasts on that, and you should read his definitive essay on it.
Most commentators do not understand SaaS businesses because they have never scaled one themselves and deeply reflected on what the true value proposition of SaaS is.

We also discuss Your Company is a Filesystem:

We also shout out CTO Ben Kus and the AI team, who talked about the technical architecture and will return for AIE WF 2026.

Full Video Episode

Timestamps

* 00:00 Adapting Work for Agents
* 01:29 Why Every Agent Needs a Box
* 04:38 Agent Governance and Identity
* 11:28 Why Coding Agents Took Off First
* 21:42 Context Engineering and Search Limits
* 31:29 Inside Agent Evals
* 33:23 Industries and Datasets
* 35:22 Building the Agent Team
* 38:50 Read Write Agent Workflows
* 41:54 Docs Graphs and Founder Mode
* 55:38 Token FOMO Culture
* 56:31 Production Function Secrets
* 01:01:08 Film Roots to Box
* 01:03:38 AI Future of Movies
* 01:06:47 Media DevRel and Engineering

Transcript

Adapting Work for Agents

Aaron Levie: Like you don't write code, you talk to an agent and it goes and does it for you, and you maybe at best review it. That's even probably like, like largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works. All of the economy has to go through that exact same evolution. Right now, it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this 'cause you'll see compounding returns. But that's just gonna take a while for most companies to actually go and get this deployed.

swyx: Welcome to the Latent Space pod. We're back in the Chroma studio with, uh, Chroma CEO Jeff Huber. Welcome, returning guest, now guest host.

Aaron Levie: It's a pleasure. Wow. How'd you get upgraded to, uh, to that?

swyx: Because he's like the perfect guy to be guest host for you.

Aaron Levie: That makes sense actually. We love context.
We, we both really love context. We really do. We really do.

swyx: Uh, and we're here with, uh, Aaron Levie. Welcome.

Aaron Levie: Thank you. Good to, uh, good to be [00:01:00] here.

swyx: Uh, yeah. So we've all met offline and like chatted a little bit, but like, it's always nice to get these things in person and conversation. Yeah. You just started off with so much energy. You're, you're super excited about agents. I love...

Aaron Levie: Agents.

swyx: Yeah. Open Claw. Just got by, got bought by OpenAI. No, not bought, but you know, you know what I mean?

Aaron Levie: Some, some, you know, acquihire. Executive...

swyx: Hire.

Aaron Levie: Executive hire. Okay. Executive hire. Say,

swyx: Hey, that's my term. Okay. Um, what are you pounding the table on on agents? You have so many insightful tweets.

Why Every Agent Needs a Box

Aaron Levie: Well, the thing that, that we get super excited by that I think is probably, you know, should be relatively obvious is we've, we've built a platform to help enterprises manage their files and their, their corporate files and the permissions of who has access to those files and the sharing and collaboration of those files. All of those files contain really, really important information for the enterprise. It might have your contracts, it might have your research materials, it might have marketing information, it might have your memos. All that data obviously has, you know, predominantly been used by humans. [00:02:00] But there's been one really interesting problem, which is that, you know, humans only really work with their files during an active engagement with them, and they kind of go away and you don't really see them for a long time. And all of a sudden, uh, with the power of AI and AI agents, all of that data becomes extremely relevant as this ongoing source of, of answers to new questions, of data that will transform into, into something else that, that produces value in your organization.
It, it contains the answer to the new employee that's onboarding, that needs to ramp up on a project.Um, it contains the answer to the right thing to sell a customer when you're having a conversation to them, with them contains the roadmap information that's gonna produce the next feature. So all that data. That previously we've been just sort of storing and, and you know, occasionally forgetting about, ‘cause we're only working on the new active stuff.All of that information becomes valuable to the enterprise and it's gonna become extremely valuable to end users because now they can have agents go find what they're looking for and produce new, new [00:03:00] value and new data on that information. And it's gonna become incredibly valuable to agents because agents can roam around and do a bunch of work and they're gonna need access to that data as well.And um, and you know, sometimes that will be an agent that is sort of working on behalf of, of, of you and, and effectively as you as and, and they are kind of accessing all of the same information that you have access to and, and operating as you in the system. And then sometimes there's gonna be agents that are just.Effectively autonomous and kind of run on their own and, and you're gonna collaborate and work with them kind of like you did another person. Open Claw being the most recent and maybe first real sort of, you know, kind of, you know, up updating everybody's, you know, views of this landscape version of, of what that could look like, which is, okay, I have an agent.It's on its own system, it's on its own computer, it has access to its own tools. I probably don't give it access to my entire life. I probably communicate with it like I would an assistant or a colleague and then it, it sort of has this sandbox environment. 
So all of that has massive implications for a platform that manage that [00:04:00] enterprise data.We think it's gonna just transform how we work with all of the enterprise content that we work with, and we just have to make sure we're building the right platform to support that.swyx: The sort of shorthand I put it is as people build agents, everybody's just realizing that every agent needs a box. Yes.And it's nice to be called box and just give everyone a box.Aaron Levie: Hey, I if I, you know, if we can make that go viral, uh, like I, I think that that terminology, I, that's theswyx: tagline. Every agentAaron Levie: needs a box. Every agent needs a box. If we can make that the headline of this, I'm fine with this. And that's the billboard I wanna like Yeah, exactly.Every agent needs a box. Um, I like it. Can we ship this? Like,swyx: okay, let's do it. Yeah.Aaron Levie: Uh, my work here is done and I got the value I needed outta this podcast Drinks.swyx: Yeah.Agent Governance and IdentityAaron Levie: But, but, um, but, but, you know, so the thing that we, we kind of think about is, um, is, you know, whether you think the number 10 x or a hundred x or whatever the number is, we're gonna have some order of magnitude more agents than people.That's inevitable. It has to happen. So then the question is, what is the infrastructure that's needed to make all those agents effective in the enterprise? Make sure that they are well governed. Make sure they're only doing [00:05:00] safe things on your information. Make sure that they're not getting exposed. The data that they shouldn't have access to.There's gonna be just incredibly spectacularly crazy security incidents that will happen with agents because you'll prompt, inject an agent and sort of find your way through the CRM system and pull out data that you shouldn't have access to. Oh, weJeff Huber: have God,Aaron Levie: right? 
I mean, that's just gonna happen all over the place, right?So, so then the thing is, is how do you make sure you have the right security, the permissions, the access controls, the data governance. Um, we actually don't yet exactly know in many cases how we're gonna regulate some of these agents, right? If you think about an agent in financial services, does it have the exact same financial sort of, uh, requirements that a human did?Or is it, is the risk fully on the human that was interacting or created the agent? All open questions, but no matter what, there's gonna need to be a layer that manages the, the data they have access to, the workflows that they're involved in, pulling up data from multiple systems. This is the new infrastructure opportunity in the era of agents.swyx: You have a piece on agent identities, [00:06:00] which I think was today, um, which I think a lot of breaking news, the security, security people are talking about, right? Like you basically, I, I always think of this as like, well you need the human you and then there you need the agent. YouAaron Levie: Yes.swyx: And uh, well, I don't know if it's that simple, but is box going to have an opinion on that or you're just gonna be like, well we're just the sort of the, the source layer.Yeah. Let's Okta of zero handle that.Aaron Levie: I think we're gonna have an opinion and we will work with generally wherever the contours of the market end up. Um, and the reason that we're gonna have an opinion more than other topics probably is because one of the biggest use cases for why your agent might need it, an identity is for file system access.So thus we have to kind of think about this pretty deeply. And I think, uh, unless you're like in our world thinking about this particular problem all day long, it might be, you know, like, why is this such a big deal? 
And the reason why it's a really big deal is because sometimes sort of say, well just give the agent an, an account on the system and it just treats, treat it like every other type of user on the system.The [00:07:00] problem is, is that I as Aaron don't really have any responsibility over anybody else's box account in our organization. I can't see the box account of any other employee that I work with. I am not liable for anything that they do. And they have, I have, I have, you know, strict privacy requirements on everything that they're able to, you know, that, that, that they work on.Agents don't have that, you know, don't have those properties. The person who creates the agent probably is gonna, for the foreseeable future, take on a lot of the liability of what that agent does. That agent doesn't deserve any privacy because, because it's, you know, it can't fully be autonomously operated and it doesn't have any legal, you know, kind of, you know, responsibility.So thus you can't just be like, oh, well I'll just create a bunch of accounts and then I'll, I'll kind of work with that agent and I'll talk to it occasionally. Like you need oversight of that. And so then the question is, how do you have a world where the agent, sometimes you have oversight of, but what if that agent goes and works with other people?That person over there is collaborating with the agent on something you shouldn't have [00:08:00] access to what they're doing. So we have all of these new boundaries that we're gonna have to figure out of, of, you know, it's really, really easy. So far we've been in, in easy mode. We've hit the easy button with ai, which is the agent just is you.And when you're in quad code and you're in cursor, and you're in Codex, you're just, the agent is you. You're offing into your services. It can do everything you can do. That's the easy mode. The hard mode is agents are kind of running on their own. 
People check in with them occasionally, they're doing things autonomously.How do you give them access to resources in the enterprise and not dramatically increased the security risk and the risk that you might expose the wrong thing to somebody. These are all the new problems that we have to get solved. I like the identity layer and, and identity vendors as being a solution to that, but we'll, we'll need some opinions as well because so many of the use cases are these collaborative file system use cases, which is how do I give it an agent, a subset of my data?Give it its own workspace as well. ‘cause it's gonna need to store off its own information that would be relevant for it. And how do I have the right oversight into that? [00:09:00]Jeff Huber: One thing, which, um, I think is kind interesting, think about is that you know, how humans work, right? Like I may not also just like give you access to the whole file.I might like sit next to you and like scroll to this like one part of the file and just show you that like one part and like, you know,swyx: partial file access.Jeff Huber: I'm just saying I think like our, like RA does seem to be dead, right? Like you wanna say something is dead uhhuh probably RA is dead. And uh, like the auth story to me seems like incredibly unsolved and unaddressed by like the existing state of like AI vendors.ButAaron Levie: yeah, I think, um, we're, I mean you're taking obviously really to level limit that we probably need to solve for. Yeah. And we built an access control system that was, was kind of like, you know, its own little world for, for a long time. And um, and the idea was this, it's a many to many collaboration system where I can give you any part of the file system.And it's a waterfall model. So if I give you higher up in the, in the, in the system, you get everything below. 
And that, that kind of created immense flexibility because I can kind of point you to any layer in the, in the tree, but then you're gonna get access to everything kind of below it. And that [00:10:00] mostly is, is working in this, in this world.But you do have to manage this issue, which is how do I create an agent that has access to some of my stuff and somebody else's stuff as well. Mm-hmm. And which parts do I get to look at as the creator of the agent? And, and these are just brand new problems? Yeah. Crazy. And humans, when there was a human there that was really easy to do.Like, like if the three of us were all sharing, there'd be a Venn diagram where we'd have an overlapping set of things we've shared, but then we'd have our own ways that we shared with each other. In an agent world, somebody needs to take responsibility for what that agent has access to and what they're working on.These are like the, some of the most probably, you know, boring problems for 98% of people on, on the internet, but they will be the problems that are the difference between can you actually have autonomous agents in an enterprise contextswyx: Yeah.Aaron Levie: That are not leaking your data constantly.swyx: No. Like, I mean, you know, I run a very, very small company for my conference and like we already have data sensitivity issues.Yes. And some of my team members cannot see Yes. Uh, the others and like, I can't imagine what it's like to run a Fortune 500 and like, you have to [00:11:00] worry about this. I'm just kinda curious, like you, you talked to a lot like, like 70, 80% of your cus uh, of the Fortune 500, your customers.Aaron Levie: Yep. 67%. Just so we're being verySEswyx: precise.So Yeah. I'm notAaron Levie: Okay. Okay.swyx: Something I'm rounding up. Yes. Round up. 
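The "waterfall" model Aaron describes — grant access at a layer of the tree and everything beneath it inherits the grant — can be sketched as a simple parent-pointer walk. This is an illustrative sketch of the idea only, not Box's actual access-control implementation; all names here are hypothetical:

```python
# Hedged sketch of waterfall (inherited) folder permissions:
# a grant placed on a folder applies to that folder and everything below it.

class Node:
    """A folder or file in the tree; grants are stored per node."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.grants = set()  # principals (users or agents) granted here

def grant(node, principal):
    node.grants.add(principal)

def can_access(node, principal):
    # Walk up the ancestor chain: a grant anywhere above waterfalls down.
    while node is not None:
        if principal in node.grants:
            return True
        node = node.parent
    return False

root = Node("AllFiles")
deals = Node("Deals", parent=root)
deal_a = Node("DealA", parent=deals)

grant(deals, "agent-1")  # agent is pointed at the Deals subtree only
print(can_access(deal_a, "agent-1"))  # True: inherited from Deals
print(can_access(root, "agent-1"))    # False: nothing above the grant point
```

The same walk is why pointing someone high in the tree gives them everything below, and why scoping an agent means choosing the lowest node that still covers what it needs.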
I'm projecting to, forAaron Levie: the government.swyx: I'm projecting to the end of the year.Aaron Levie: Okay.swyx: There you go.Aaron Levie: You do make it sound like, like we, we, well we've gotta be on this. Like we're, we're taking way too long to get to 80%. Well,swyx: no, I mean, so like. How are they approaching it?Right? Because you're, you don't have a, you don't have a final answer yet.Why Coding Agents Took Off FirstAaron Levie: Well, okay, so, so this is actually, this is the stark reality that like, unfortunately is the kinda like pouring the water on the party a little bit.swyx: Yes.Aaron Levie: We all in Silicon Valley are like, have the absolute best conditions possible for AI ever.And I think we all saw the dke, you know, kind of Dario podcast and this idea of AI coding. Why is that taken off? And, and we're not yet fully seeing it everywhere else. Well, look, if you just like enumerated the list of properties that AI coding has and then compared it to other [00:12:00] knowledge work, let's just, let's just go through a few of them.Generally speaking, you bring on a new engineer, they have access to a large swath of the code base. Like, there's like very, like you, just, like new engineer comes on, they can just go and find the, the, the stuff that they, they need to work with. It's a fully text in text out. Medium. It's only, it's just gonna be text at the end of the day.So it's like really great from a, from just a, uh, you know, kinda what the agent can work with. Obviously the models are super trained on that dataset. The labs themselves have a really strong, kind of self-reinforcing positive flywheel of why they need to do, you know, agent coding deeply. 
So then you get just better tooling, better services. The actual developers of the AI are daily users of the, of the thing that we're working on. Versus, like... you know, probably there's only like seven Claude Cowork legal plugin users at Anthropic any given day, but there's like a couple thousand Claude Code users every single day. So just, like, think about which one are they getting more feedback on. All day long. So you just go through this list. You have a, you know... everybody who's a [00:13:00] developer by definition is technical, so they can go install the latest thing. We're all generally online, or at least, you know, kinda the weird ones are, and we're all talking to each other, sharing best practices. Like, that's like already eight differences versus the rest of the economy. Every other part of the economy has like, like six to seven headwinds relative to that list. You go into a company, you're a banker in financial services, you have access to like a, a tiny little subset of the total data that's gonna be relevant to do your job. And you have to start to go and talk to a bunch of people to get the right data to do your job, because Sally didn't add you to that deal room, you know, folder. And, you know, the information is actually in a completely different organization that you now have to go and, and sort of run into. And it's like you have this endless list of access controls and security. As, as you talked about, you have a medium which is not... it's not just text, right? You have, you have a Zoom call that, that you're getting all of the requirements from the customer. You have a lot of in-person conversations and you're doing in-person sales, and like, how do you ever [00:14:00] digitize all of that information?
Um, you know, I think a lot of people got upset with this idea that the code base has all the context. Um, I don't know if you followed some of that conversation that went viral? It's, you know, not that simple; the code base doesn't have all the knowledge, but you're a lot better off than you are with other areas of knowledge work. Like, we, we have documentation practices, you write specifications. Those things don't exist for like 80% of work that happens in the enterprise. That's the divide that we have, which is, which is: AI coding has just fully, you know... we've reached escape velocity of how powerful this stuff is, and then we're gonna have to find a way to bring that same energy and momentum to all these other areas of knowledge work. Where the tools aren't there, the data's not set up to be there, the access controls don't make it that easy. The context engineering is an incredibly hard problem, because again, you have access control challenges, you have different data formats. You have end users that are gonna need to kind of be trained through this, as opposed to adopting [00:15:00] these tools in their free time. That's where the Fortune 500 is. And so we, I think, you know, have to be prepared as an industry that we are gonna be on a multi-year march to, to be able to bring agents to the enterprise for these workflows. And I think probably the, the thing that we've learned most in coding that the rest of the world is not yet ready for (I mean, they'll have to be ready for it, because it's just gonna inevitably happen) is... I think in coding, what's interesting is if you think about the practice of coding today versus two years ago, it's probably the most changed workflow in maybe the history of time, in terms of the amount it's changed, right? Yeah.
Like, like, has any, has any workflow in the entire economy changed that quickly, in terms of the amount of change? I just, you know... at least in any knowledge worker workflow, there's like very rarely been an event where one piece of technology and work practice has so fundamentally, you know, changed, changed what you do. Like, you don't write code, you talk to an agent and it goes and [00:16:00] does it for you, and you maybe, at best, review it. And even that's probably, like, largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works. Mm-hmm. All of the economy has to go through that exact same evolution. The rest of the economy is gonna have to update its workflows to make agents effective. And to give agents the context that they need, and to actually figure out what kind of prompting works, and to figure out, how do you ensure that the agent has the right access to information to be able to execute on its work. I... you know, this is not the panacea that people were hoping for, of: the agent drops in, just automates your life. Like, you have to basically re-engineer your workflow to get the most out of agents, and, uh, that, that's just gonna take, you know, multiple years across the economy. Right now it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this, ‘cause [00:17:00] you'll see compounding returns, but that's just gonna take a while for most companies to actually go and get this deployed.
swyx: I love, I love pushing back. I think that that is what a lot of technology consultants love to hear, this sort of thing, right? Yeah, yeah, yeah. You must be first to, to embrace the AI. Yes. To get to the promised land, you must pay me so much money, a hundred percent, to adopt the prescribed way of, uh, conforming to the agents. Yes.
swyx: And I worry that you will be eclipsed by someone else who says, no, come as you are.
Aaron Levie: Yeah.
swyx: And we'll meet you where you are.
Aaron Levie: And, and, and what was the thing that went viral a week ago? OpenAI, probably, uh, is hiring FDEs. Yeah. Uh, to go into the enterprise. Yeah. Yeah. And then Anthropic is embedded at Goldman Sachs. Yeah. So if the labs are having to do this, if, if the labs have decided that they need to hire FDEs and professional services, then I think that's a pretty clear indication that there's no easy mode of workflow transformation. Yeah. Yeah. So, so to your point, I think actually this is a market opportunity for, you know, new professional services and consulting [00:18:00] firms that are, like, agent-build firms, and they kind of, you know, go into organizations and they figure out how to re-engineer your workflows to make them more agent-ready, and get your data into the right format, and, you know, reconstruct your business process. So you're, you're not doing most of the work; you're telling agents how to do the work, and then you're reviewing it. But I haven't seen the thing that can just drop in and, and kinda let you not go through those changes.
swyx: I don't know how that kind of sales pitch goes over. Yeah. You know, you're, you're saying things like, well, in my sort of nice, beautiful walled garden... here's this beautiful Box account that has everything.
Aaron Levie: Yes.
swyx: And I'm like, well, most, most real life is extremely messy. Sure. And like, poorly named, and there's duplicate and outdated s**t.
Aaron Levie: A hundred percent. And so, no, a hundred percent. So, so this is... I mean, we agree that, that getting to the beautiful garden is gonna be tough.
swyx: Yeah.
Aaron Levie: There's also the other end of the spectrum, where it's just a technical impossibility to solve.
The agent truly cannot get enough context to make the right decision in, in the, in the incredibly messy land. Like, there's [00:19:00] no AGI that will solve that. So, so we're gonna have to kind of land somewhere in between, which is, like, we all collectively get better at documentation practices and, and having authoritative, relatively up-to-date information and putting it in the right place. Like, agents will, will certainly cause us to be much better organized around how we work with our information, simply because the severity of the agent pulling the wrong data will be too high, and the productivity gain that you'll miss out on by not doing this will be too high as well; your competition will just do it and they'll just have higher velocity. So, uh, and, and we, we see this a lot firsthand. So we, we built a series of agents internally that can kind of have access to your full Box account and go off, and you give it a task and it can go find whatever information you're looking for and work with it. And, you know, thank God for the model progress, but like, if, if you gave that task to an agent nine months ago, you're just gonna get lots of bogus answers. Because it's gonna, it's gonna say, hey, here are [00:20:00] five, you know, documents that all kind of smell like the right thing, and I'm gonna... but you're, you're putting me on the clock, ‘cause my system prompt says, like, you know, be pretty smart, but also try and respond to the user. And it's gonna respond. And it's like, ah, it got the wrong document. And then you do that once or twice as a knowledge worker and you're just never...
swyx: Again.
Aaron Levie: Never again. You're just, like, done with the system.
swyx: Yeah. It doesn't work.
Aaron Levie: It doesn't work.
And so, you know, Opus 4.6 and Gemini 3.1 Pro and, you know, whatever the latest GPT-5.3 will be... like, those things are getting better and better, and it's using better judgment. And all of these updates to the agentic tool and search systems... we're seeing, we're seeing very real progress, where the agent kind of can, can almost smell when something's a little bit fishy. You know, we, we have this process where we, we have it go fan out, do a bunch of searches, pull up a bunch of data, and then it has to sort of do its own ranking of, you know, what are the right documents that, that it should be working with. And again, like, you know, the intelligence level of a model six months ago, [00:21:00] it'd be just throwing a dart at, like, I'm just, I'm gonna grab these seven files and I, I pray, I hope that that's the right answer. And something like an Opus, first 4.5 and now 4.6, is like, oh... it's like, no, that one doesn't seem right relative to this question, because I'm seeing some signal that's contradicting the document, where it would normally be in the tree and who should have access. Like, it's doing all of that kind of work for you. But like, it still doesn't work if you just have a total wasteland of data. Like, it's just not, it's just not possible. Partly ‘cause a human wouldn't even be able to do it. So basically, if a, if a really, really smart human could not do that task in five or 10 minutes, for a search-retrieval-type task, look, you know, your agent's not gonna be able to do it any better. You see this all day long. So...

Context Engineering and Search Limits

swyx: This touches on a thing that I'm just passionate about, which is context engineering. I, I'm just gonna let you ramble or riff on, on context engineering.
If, if, if there's anything... like, he, he did really good work on context rot, which has really taken over as, like, the term that people use and the reference.
Aaron Levie: A hundred percent. We, all we think about is, is the context rot problem. [00:22:00]
Jeff Huber: Yeah, there's certainly a lot of, like, ranking considerations. Agentic search, I think, is incredibly promising. Um, yeah, I was trying to generate a question, though. I think I have a question right now, swyx.
Aaron Levie: Yeah, no, but like, like, I think there was this moment, um, you know, like, I don't know, two years ago, before, before we knew, like, where the gotchas were gonna be in AI, and I think someone was like, well, infinite context windows will just solve all of these problems, ‘cause you'll just, you'll just give the context window, like, all the data. And it's just like, okay, I mean, maybe in 2035, like, this is a viable solution. First of all, it, it would just, it would just simply cost too much. Like, we just can't give the model, like, the 5,000 documents that might be relevant, and it's gonna read them all. And I've seen enough to, to start believing in crazy stuff. So, like, I'm willing to just say, sure, like, in, in 10 years from now...
swyx: Never say never.
Aaron Levie: In, in 10 years from now, we'll have infinite context windows at, at a thousandth of the price of today. Like, let's just, like, believe that that's possible. But right, we're in reality today. So today we have a context engineering [00:23:00] problem, which is: I got, I got, you know, 200,000 tokens that I can work with, or... I don't even know what the latest number is before, like, massive degradation. Okay, I have 60,000 tokens that I get to work with where I'm gonna get accurate information. That's not a lot of tokens for a corpus of 10 million documents that a knowledge worker might have, across all of the teams and all the projects and all the people they work with.
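A rough back-of-envelope of the numbers Aaron walks through here; the tokens-per-page figure is an assumption of mine, not something quoted in the conversation:

```python
# "10 million documents ... maybe times five pages per document" vs. the
# ~60,000 tokens he says he can actually trust in the window.
docs = 10_000_000
pages_per_doc = 5
tokens_per_page = 500            # assumption: ~500 tokens per page
corpus_tokens = docs * pages_per_doc * tokens_per_page

usable_window = 60_000
coverage = usable_window / corpus_tokens
print(f"{corpus_tokens:,} corpus tokens; the window covers {coverage:.6%} of them")
```

Whatever the exact per-page figure, the window sees only a few millionths of the corpus, which is why the search and ranking layer, not the context window, carries the weight.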
I have, I have 10 million documents, which, you know, maybe is times five pages per document, or something like that. I'm at 50 million pages of information, and I have 60,000 tokens. Like, holy s**t. Yeah. This is like: how do I bridge the 50 million pages of information with, you know, the couple hundred that I get to work with in that, in that token window? Yeah. This is like, this is like such an interesting problem, and that's why actually so much work is, like, just the search systems; the databases and that layer have to just get so locked in. But models are getting better, and, importantly, [00:24:00] knowing when they've done a search and found the wrong thing; they go back, they check their work, they, they find a way to balance sort of appeasing the user versus double-checking. We have this one, we have this one test case where we ask the agent to go find 10 pieces of information.
swyx: Is this the complex work eval?
Aaron Levie: Uh, this is actually not in the eval. This is, this is sort of just... we have a bunch of internal benchmark kind of scenarios, every time we, we update our agent. We have one which is, I ask it to find all of our office addresses, and I give it the list of 10 offices that we have. And there's not one document that has this. Maybe there should be; that would be a great example of the kind of thing that, like, maybe over time companies start to, you know, have: these sort of, like, what are the canonical, you know, kind of key areas of knowledge that we need to have. We don't seem to have this one document that says, here are all of our offices. We have a bunch of documents that have, like, here's the New York office, and whatever. So you task this agent, and you, you say, I need the addresses for these 10 offices. Okay. And by the way, if you do this on any, you know, [00:25:00] public chat model, the same outcome is gonna happen.
But for a different kind of query, you give it, you say, I need these 10 addresses. How many times should the agent go and do its search before it decides whether or not there's just no answer to this question? Often, and especially the, the, let's say, lower-tier models, it'll come back and it'll give you six of the 10 addresses. And it'll just say, I couldn't find the other four.
swyx: It, it doesn't know what it doesn't know.
Aaron Levie: It doesn't know what it doesn't know. Yeah. So the model is just, like... when should it stop? Should it, should it do that task for literally an hour and just keep cranking through? Maybe I actually made up an office location, and it doesn't know that I made it up, and I didn't even know that I made it up. Like, should it just keep... should it read every single file in your entire Box account until it, until it exhausts every single piece of information?
swyx: Expensive.
Aaron Levie: These are the new problems that we have. So, you know, something like, let's say, a new Opus model is sort of like: okay, I'm gonna try these types of queries. I didn't get exactly what I wanted. I'm gonna try again. At [00:26:00] some point, I'm gonna stop searching, ‘cause I've determined that no amount of searching is gonna solve this problem; I'm just not able to do it. And that judgment is, like, a really new thing that the model needs to be able to have. It's like, when should it give up on a task? ‘Cause, ‘cause you just... it can't find the thing. That's the real world of knowledge work problems. And this is the stuff that the coding agents don't have to deal with.
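One hedged way to picture the "when should it give up" judgment from the ten-offices example: a retry loop that stops either when a round of searching adds nothing new or when a retry budget runs out, and then reports what is still missing rather than silently returning a partial answer. The retriever and office names here are invented stand-ins, not Box's harness:

```python
def find_addresses(offices, search, max_rounds=3):
    """Fan out searches; stop on no progress or exhausted budget."""
    found, missing = {}, set(offices)
    for round_ in range(max_rounds):
        progress = False
        for office in list(missing):
            addr = search(office, attempt=round_)   # stand-in retriever call
            if addr is not None:
                found[office] = addr
                missing.discard(office)
                progress = True
        if not missing or not progress:
            break  # either done, or more searching clearly isn't helping
    return found, sorted(missing)

# Toy retriever: knows two offices, never finds the (made-up) third.
db = {"NYC": "123 5th Ave", "SF": "900 Market St"}
found, missing = find_addresses(["NYC", "SF", "Atlantis"],
                                lambda office, attempt: db.get(office))
print(found)    # the six-of-ten case: report hits...
print(missing)  # ...and name what couldn't be found, e.g. ['Atlantis']
```

The interesting part Aaron highlights is exactly what this sketch hard-codes: the stopping rule. Choosing `max_rounds`, or replacing it with model judgment about whether the target might not exist at all, is the open problem.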
Because they... it just doesn't... like, you're, you're always creating net-new information coming right out of the model, for the most part. Obviously it has to know about your code base and your specs and your documentation, but, but when you deploy an agent on all of your data, now you have all of these new problems that you're dealing with.
Jeff Huber: Our, uh, follow-up research to context rot is actually on agentic search. Ah. Um, and we've, like, sort of stress-tested frontier models and their ability to search. Um, and they're not actually that good at searching. Right. Uh, so you're sort of highlighting this, like, explore-exploit...
swyx: You're just a Debbie Downer... say everything doesn't work. Like...
Aaron Levie: Well...
Jeff Huber: Somebody has to be.
Aaron Levie: Um, can I just throw out one more thing? Yeah. That is different from coding and, and the rest [00:27:00] of knowledge work, that I, I failed to mention. So one other kind of key point is, is that, you know, at the end of the day, whether you believe we're in a slop apocalypse or, or whatever... at the end of the day, if you, if you've built a working solution, that is ultimately what the customer is paying for. Like, whether I have a lot of slop, a little slop, or whatever. I'm sure there's lots of code bases we could go into in enterprise software companies where it's, like, just crazy slop that humans did over a 20-year period, but the end customer just gets this little interface. They can, they can type into it, it does its thing. Knowledge work, uh, doesn't have that property.
If I have an AI model go generate a contract, and I generate a contract 20 times, and, you know, all 20 times it's just 3% different... that, that kind of slop introduces all new kinds of risk for my organization that the code version of that slop didn't, didn't introduce. And so, like, how do you constrain these models to just the part that you want [00:28:00] them to work on, and just do the thing that you want them to do? And, and, you know, in engineering... you can't be disbarred as an engineer, but you could be disbarred as a lawyer. Like, you can do the wrong medical thing in healthcare. There's no, there's no equivalent to that in engineering.
swyx: Do you want there to be? Because I've considered... software...
Jeff Huber: What's that? Civil engineering, there is, right?
Aaron Levie: Not software. Civil engineering, sure. Oh yeah, for sure. But, like, in any of our companies, you know, you'll be forgiven if you took down the site, and, and we, we will do a rollback, and you'll, you'll be in a meeting, but you have not been disbarred as an engineer. We don't, we don't change your, you know, your computer science, uh, degree...
Jeff Huber: Blameless postmortem.
Aaron Levie: Yeah, exactly. Exactly. So, so, uh, now maybe we collectively as an industry need to figure out, like, what are you liable for? Not legally, but, like, in a, in a management sense, uh, of these agents. All sorts of interesting problems that, that, uh, have to come out. But in knowledge work, that's the real hostile environment that we're operating in. Hmm.
swyx: I do think, like, uh, a lot of last year's, 2025's story was the rise of coding agents, and I think [00:29:00] 2026's story is definitely knowledge work agents. Yes. A hundred...
Aaron Levie: Percent.
swyx: Right. And I think OpenClaw and Cowork are just the beginning. Yes. Like, the next one's just gonna be absolute craziness.
Aaron Levie: It, it is.
And, and, uh, and it's gonna be... I mean, again, this is gonna be this, this wave where we, we are gonna try and bring over as many of the practices from coding, because that, that will clearly be the forefront. Which is: tell an agent to go do something, it has access to a set of resources, and you need to be responsible for reviewing it at the end of the process. That, to me, is the kind of template that I just think goes across knowledge work. And Cowork is a great example, OpenClaw is a great example. You can kind of, sort of see what Codex could become over time. These are some, some really interesting kind of platforms that are emerging.
swyx: Okay. Um, I wanted to... we touched on evals a little bit. You had, you had the report that you were gonna bring up, and then I was gonna go into, like, uh, Box's evals, but, uh, go ahead. Talk about your agentic search thing.
Jeff Huber: Yeah. Mostly, I think, kinda a few of the insights. It's like, number one: frontier models are not good at search. Humans have this [00:30:00] natural explore-exploit trade-off, where we kinda understand, like, when to stop doing something. Also, humans are pretty good at, like, forgetting, actually, and, like, pruning their own context, whereas agents are not. And actually, an agent, in their kind of context history... if they knew something was bad, and you could even see in the reasoning trace, "hey, that probably wasn't a good idea"... if it's still in the trace, still in the context, they'll still do it again. Uh-huh. Uh, and so, like, I think pruning is also gonna be, like... it's already becoming a thing, right? But, like, letting agents self-prune the context window is...
swyx: ...gonna be a big deal. Yeah. So, so don't leave the mistake... don't leave the mistake in there. Cut out the mistake, but tell it that you made a mistake in the past, so it doesn't repeat it.
Jeff Huber: Yeah. But, like, cut it out so it doesn't get, like, distracted by it again.
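The pruning idea Jeff is describing (cut the failed step out of the context so it stops acting as a few-shot example, but keep a short note of the lesson so the agent doesn't retry it) might look something like this sketch. The trace structure is invented for illustration:

```python
def prune_context(trace):
    """Drop failed steps from the context; keep a one-line lesson for each."""
    pruned, lessons = [], []
    for step in trace:
        if step.get("outcome") == "error":
            # Don't replay the failure verbatim; summarize it instead.
            lessons.append(f"Avoid: {step['action']} (failed: {step['detail']})")
        else:
            pruned.append(step)
    if lessons:
        pruned.append({"role": "system", "content": "\n".join(lessons)})
    return pruned

trace = [
    {"action": "search('offices')", "outcome": "ok", "detail": "3 hits"},
    {"action": "read('/tmp/wrong.doc')", "outcome": "error", "detail": "irrelevant doc"},
]
print(prune_context(trace))
```

This captures the distinction in the exchange: the raw failed call left in context reads like an example to imitate, while a compressed "avoid this" note reads like an instruction.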
‘Cause really, you know... so, so it will repeat its mistake just because it's been, it's in the...
swyx: ...the...
Jeff Huber: ...context. It's...
Aaron Levie: It's in the context so much. That's a few-shot example. Even if it... yeah.
Jeff Huber: It's like, oh, this...
Aaron Levie: ...is a great thing to go try, even if...
Jeff Huber: ...it didn't work.
Aaron Levie: Yeah.
Jeff Huber: Exactly.
Aaron Levie: So...
Jeff Huber: There's, like, a bunch of stuff there. Just...
Aaron Levie: Groundhog Day inside these models. Yeah. I'm gonna go keep doing the same wrong thing.
Jeff Huber: In a hand-wavy sense, I feel like, you know, to make a creator analogy, you're trying to, like, fit a manifold in latent space, which is kind of doing, like, program synthesis, which is kinda one way we think about what we're doing, right? Like, you know, certain facts might be, like, sort of overly pinning it to certain, you know, sectors of latent space, and so, like... latent space. Yeah. And, uh, and...
swyx: So we have a bell... our editor has a bell every time you say that. So...
Jeff Huber: You have, you have to, like, remove those, like...
swyx: You should have a gong, like TBPN or something.
Jeff Huber: If we gong... you remove those links to, like, kinda give it the freedom to do what it needs to do. So... but yeah, we'll, we'll release more soon.
Aaron Levie: That's awesome.
Jeff Huber: That'll, that'll be cool.
swyx: We're a cerebral podcast; people listen to us and, and sort of think really deep. So yeah, we try to keep it subtle. Okay. We try to keep it...
Aaron Levie: Okay, fine.

Inside Agent Evals

swyx: Um, you, you guys do, you guys do have evals. You talked about your, your office thing, but, uh, you've been also promoting the APEX agent eval and your complex work eval. Uh, yeah, wherever you wanna take this. Just, yeah... how you...
Aaron Levie: APEX is, is obviously Mercor's, uh, uh, kind of, um, agent eval. We, we supported that by sort of opening up some data for them around how we kind of see these, um, data workspaces in, in the, you know, kind of regular economy.
So how do lawyers have a workspace? How do investment bankers have a workspace? What kind of data goes into those? And so we, [00:32:00] we partnered with them on their, their APEX eval. Our own, um, eval is actually relatively straightforward. We have a, a set of, of documents in a, in a range of industries. We give the agent... previously we did this as a one-shot test of just purely the model, and then we just realized, based on where everything's going, it's just gotta be more agentic. So now it's a bit more of a test of both our harness and the model. And we have a rubric of a set of things it has to get right, and we score it. Um, and you're just seeing, you know, these incredible jumps in almost every single model within its own family: you know, Opus 4, um, you know, Sonnet 4.6 versus Sonnet 4.5.
swyx: Yeah. We have this up on screen.
Aaron Levie: Okay, cool. So you're seeing it somewhere. Like, I, I forget... it was, like, a 15-point jump, I think, on the main, on the overall...
swyx: Yes.
Aaron Levie: And it's just, like, you know, these incredible leaps that, that are starting to happen. Um...
swyx: And Opus doesn't know any of it, like... it's completely held out.
Aaron Levie: This is not in any... there's no public data, which has, you know, benefits, and this is just a private eval that we [00:33:00] do, and then we just happen to show it to, to the world. Hmm. So you can't, you can't train against it. And I think it's just as representative of... obviously reasoning capabilities, what it's doing at, at, you know, kind of test-time compute capabilities, thinking levels, all, like, the context rot issues. So many interesting, you know, kind of, uh, uh, capabilities that are, that are now improving.
swyx: One sector that you have that's interesting...

Industries and Datasets

swyx: Uh, people are roughly familiar with healthcare and legal, but you have public sector in there.
Aaron Levie: Yeah.
swyx: Uh, what's that?
Like, what, what, what is that?
Aaron Levie: Yeah. And, and we actually test against, I dunno, maybe 10 industries. We, we end up usually just cutting a few that we think have interesting gains. That one has a lot of, like, government-type documents. Um...
swyx: What is that? What is it, government-type documents?
Aaron Levie: Government filings.
swyx: Like a tax return? Like a...
Aaron Levie: Probably not tax returns. It would be more of: what would the government be using, uh, as data? So, okay. Um, so think about research, that, that type of, of, of data set. And then we have financial services, for things like data rooms and what would be in an investment prospectus. Uh-huh.
swyx: That one you can dogfood.
Aaron Levie: Yeah, exactly. Exactly. Yes. Yes. [00:34:00] So, uh, so we, we run the models, um, now, you know, more in an agent mode, but, but still with, with kinda limited capacity, and just try and see, like, on a like-for-like basis, what are the improvements? And, and again, we just continue to be blown away by how, how good these models are getting.
swyx: Yeah. I mean, I think every serious AI company needs something like that, where, like: well, this is the work we do, here's our company eval. Yeah. And if you don't have it, well, you're not a serious AI company.
Aaron Levie: There's two dimensions, right? So there's, there's, like: how are the models improving? And so which models should you either recommend a customer use, which one should you adopt? But then, every single day, we're making changes to our agents. And you need to know...
swyx: If you regressed.
Aaron Levie: If you... yeah. You know, I've been fully convinced that the whole agent observability and eval space is gonna be a massive space. Um, super excited for what Braintrust is doing, excited for, you know, LangSmith, all the things. And I think what you're going to... I mean, this is, like... literally every enterprise right now, it's like the AI companies are the customers of these tools.
Every enterprise will have this. Yeah, you'll just [00:35:00] have to have an eval of all of your work. And, like, you'll have an eval of your RFP generation, you'll have an eval of your sales material creation, you'll have an eval of your, uh, invoice processing. And, and as you, you know, buy or use new agentic systems, you are gonna need to know, like, what's the quality of your, of your pipeline.
swyx: Yeah.
Aaron Levie: Um, so huge, huge market with agent evals.
swyx: Yeah.

Building the Agent Team

swyx: And, and, you know, I'm gonna shout out your, your team a bit. Uh, your CTO, Ben, uh, did a great talk with us last year. Awesome. And he's gonna come back again... oh, cool... for World's Fair.
Aaron Levie: Yep.
swyx: Just talk about your team. Like, brag a little bit. I think I, I think people take these eval numbers and pretty charts for granted, but no, I mean, there's, there's lots of really smart people at work behind all this.
Aaron Levie: Biggest shout-out, uh, is we have a, we have a couple folks, uh, Dya, uh, Sidarth, that, that kind of run this. They're, like, a, you know, kind of tag-team duo on our evals. Ben, our CTO, heavily involved. Yasha, head of AI. Uh, you know, a bunch of folks. And, um, evals is one part of the story, and then just, like, the full, you know, kind of AI and agent team [00:36:00] is, uh, is a, is a pretty... you know, is core to this whole effort. So there's probably, I don't know, like, maybe a few dozen people that are, like, the epicenter. And then you just have, like, layers and layers of, of kind of concentric circles of: okay, then there's a search team that supports them, and an infrastructure team that supports them. And it's starting to ripple through the entire company.
But there's that kind of core agent team, um, that's a pretty, pretty close, uh, close-knit group.
swyx: The search team is separate from the infra team?
Aaron Levie: I mean, we have, like, every, every layer of the stack we have to kind of do, except for just pure public cloud. Um, but, um, you know, we, we store... I don't even know what our public numbers are, you know, but, like, you can just think about it as, like: a lot of data is, is stored in Box. And so we have... you have every layer of the, of the stack of, you know, how do you manage the data, the file system, the metadata system, the search system... just all of those components. And then they all are having to understand that now you've got this new customer, which is the agent. And they've been building for two types of customers in the past: they've been building for users, and they've been building for, like, applications. [00:37:00] And now you've got this new agent user, and it comes in with a different set of properties. Sometimes, like: hey, maybe sometimes we should do embeddings, an embedding-based, you know, kind of search, versus, you know, your, your typical semantic search. Like, you just have to build the, the capabilities to support all of this. And we're testing stuff, throwing things away... something doesn't work and, and it's not relevant. It's, like, just, you know, total chaos. But all of those teams are supporting the agent team that is kind of coming up with its requirements of: what do we need?
swyx: Yeah. No, uh, we just came from, uh, a fireside chat that you did, and you, you talked about how you're doing this. It's, it's kind of like an internal startup.
Aaron Levie: Yeah.
swyx: Within the broader company. The broader company's like 3,000 people. Yeah. But, you know, there's, there's a... this is a core team of, like, well, here's the innovation center.
Aaron Levie: Yeah.
swyx: And like, every company kind of is run this way.
Aaron Levie: Yeah. I wanna be sensitive... I don't call it the innovation center.
Yeah. Only because I think everybody has to do innovation. There's a part of the company that is sort of do or die for the agent wave.

swyx: Yeah.

Aaron Levie: And it only happens to be more of my focus simply because it's existential that [00:38:00] we get it right.

swyx: Yeah.

Aaron Levie: All of the supporting systems are necessary. All of the surrounding adjacent capabilities are necessary. The only reason we get to be a platform where you'd run an agent is because we have a security feature, or a compliance feature, or a governance feature that some team is working on. But that's not going to be the make or break of whether we get agents right. That already exists, and we need to keep innovating there. I don't know what the exact precise number is, but it's not a thousand people and it's not ten people. There's a group of people that are the kind of startup within the company, the make or break on everything related to AI agents leveraging our platform and letting you work with your data. And that's where I spend a lot of my time. Ben and Yosh and Diego and Teri, these are people across the team who are working on it.

swyx: Yeah. Amazing.

Read Write Agent Workflows

Jeff Huber: How do you think about, I mean, you talked a lot about read workflows over your Box data.

Aaron Levie: Yep.

Jeff Huber: You know, gen search questions, queries, et cetera. But what about write, or authoring, workflows?

Aaron Levie: Yes. I've [00:39:00] already probably revealed too much, actually, now that I think about it.

Jeff Huber: Whatever you can.

Aaron Levie: Okay. It's just us. Yeah. Okay.
Of course, of course. So I'll make it a little bit conceptual, because I've already said things that are not even GA. But we've kind of danced around it publicly, so, yeah. Okay. Just, hopefully nobody watches this episode.

swyx: It's tidbits for the highly engaged to go figure out exactly what your line of thinking is. Sure. They can connect the dots.

Aaron Levie: Yeah. So I would say that, as a place where you have your enterprise content, there's a use case where I want to have an agent read that data and answer questions for me. And then there's a use case where I want the agent to create something, and use the file system to create something, or store off data that it's working on, or have various files that it's writing to about the work it's doing. So we do see it as a total read-write. The harder problem has so far been the read, because again, you have that kind of ten-million-to-one ratio problem. [00:40:00] Whereas with writes, a lot of that is just going to come from the model, and we'll just put it in the file system and use it. So it's a technically easier problem. The one part that's not necessarily technically hard, just not yet perfected in the state of the ecosystem, is building a beautiful PowerPoint presentation. It's still a hard problem for these models. These formats just weren't built for them.

swyx: They're working on it.

Aaron Levie: They're working on it. Everybody's working on it.

swyx: Every launch is like, well, we do PowerPoint now.

Aaron Levie: We're getting a lot better each time.
But then you'll do this thing where you ask it to update one slide, and all of a sudden the fonts will be just a little bit different on two of the slides, or it moved some shape over to the left a little bit. And these are the kinds of things that, in code, you could really care about if you care how beautiful the code is, but the end user doesn't notice those problems. In file creation, the end user instantly sees it. You're [00:41:00] like, ah, paragraph three, you literally just changed the font on me. It's a totally different font midway through the document.

swyx: Mm-hmm.

Aaron Levie: Those are the kinds of things you run into a lot on the content creation side. So we are going to have native agents that do all of those things, and they'll be powered by the leading models and labs. But the thing that I think is probably going to be a much bigger idea over time is any agent on any system, again, using Box as a file system for its work. And in that scenario, we don't necessarily care what it's putting in the file system. It could put its memory files, its specification documents, whatever its markdown files are, or it could generate PDFs. It's a workspace that is sandboxed off for its work. People can collaborate in it, and it can share with other people. So we were thinking a lot about the right way to deliver that at scale.

Docs Graphs and Founder Mode

swyx: I wanted to come to the AI transformation, AI operations side of things. [00:42:00] One of the tweets that you wanted to talk about, and this is just me going through your tweets, by the way.

Aaron Levie: Oh, okay.
swyx: I mean, this is, you read...

Aaron Levie: One by one?

swyx: You're the easiest guest to prep for, because you already have, like, here's what I'm interested in.

Aaron Levie: Are we going to get to, like, February, January or something? Where are we in the timelines? How far back are we going?

swyx: Can you describe Box as a set of skills? Right? That's one of the extremes: if you just turn everything into a markdown file, then your agent can run your company. You just have to find the right sequence of words to do it.

Aaron Levie: Yes. Sorry, is that the question?

swyx: So I think the question is, what if we documented everything, the way that you exactly said?

Aaron Levie: Yes.

swyx: Let's get all the Fortune 500s prepared for agents, and, you know, everything's golden and nicely filed away. What's missing? What's left?

Aaron Levie: Yeah.

swyx: You've run your company for a decade.

Aaron Levie: Yeah. I think the challenge is that that information changes a week later, because something happened in the market for that [00:43:00] customer, or for us as a company, and now it has to get updated. These systems are living and breathing, and they have to experience reality and updates to reality, which right now is probably going to be humans giving them the updates. And there's this piece about context graphs that went very viral. I thought it was super provocative. I agreed with many parts of it. I disagreed with a few parts.
It's not going to be as easy as "if we just had the agent traces, then we can finally do that work," because there's so much other stuff happening that we haven't been able to capture and digitize. And I think they actually represented that in the piece, to be clear. But there's just a lot of work that has to happen. You can't have only skills files for your company, because there's going to be a lot of other stuff that happens and changes over time.

swyx: Most companies are practically apprenticeships.

Jeff Huber: Every new employee who joins the team, [00:44:00] you spend one to three months ramping them up. All that tacit knowledge is not written down.

Aaron Levie: Yes.

Jeff Huber: But it would have to be if you wanted to give it to an agent. Right? And so that seems to me to be...

Aaron Levie: One thing is, I think you're going to see a premium on companies that can document this. There'll be a huge premium on that, because can you shorten that three-month ramp cycle to a two-week ramp cycle? That's an instant productivity gain. Can you dramatically reduce rework in the organization because you've documented where all the stuff is and where the answers are? Can you make your average employee as good as your 90th percentile employee because you've captured the knowledge that's in the heads of those top employees and made it available? You can see some very clear productivity benefits.

swyx: Mm-hmm.
Aaron Levie: If you had a company culture of making sure your information was captured, digitized, put in a format that was agent-ready, and then made available to agents to work with. And then you have this reality that at a 10,000-person [00:45:00] company, mapping that to the access structure of the company is a hard problem. Not every piece of information that's digitized can be shared with everybody. So now you have to organize it in a way that actually works. There was a pretty good piece called "Your Company Is a File System." Did you see that one?

swyx: Nope.

Aaron Levie: Uh, yes, you saw it. Yeah. I'd actually be curious about your thoughts on it. We agree with it, because that's how we see the world.

swyx: Okay, we have it up on screen.

Aaron Levie: Okay. Yeah. It's all about how we already organize in this kind of permission-structure way, and these are the natural ways that agents can now work with data. So it's an interesting metaphor. But I do think companies will have to start thinking about how they digitize more of that data. What was your take?

Jeff Huber: Yeah, I mean, the company is probably like an ACID-compliant file system.

Aaron Levie: Uh...

Jeff Huber: Yeah. Which I'm guessing Box is, right? So, yeah.

swyx: Yeah. [00:46:00]

Jeff Huber: Which you have a great piece on.

swyx: Uh, yeah. Well, my direction is a little different. I want to rewind to the graph word you used. That's a magic trigger word for us. I always ask, what's your take on knowledge graphs? Especially with every database person, I just want to see what they think.
There have been knowledge graph hype cycles, and you've seen them all.

Aaron Levie: Hmm. I'm actually not the expert in knowledge graphs, so you might need to...

swyx: You don't need to be an expert. I think it's just, how seriously do people take it? Is there a lot of potential in it?

Aaron Levie: Well, can I first understand, is this a loaded question? In the sense of, are you super pro, super con, super anti, medium?

swyx: I see pros and cons. But I think your opinion should be independent of mine.

Aaron Levie: Yeah, no, totally. I just want to see what I'm stepping into.

swyx: No, I know. It's a huge trigger word for a lot of people in our audience, and they're trying to figure out, why is that?

Aaron Levie: Why is this such a hot item for them?

swyx: Because a lot of people get graph religion. They're like, everything's a graph. Of course you have to represent it as a graph. Well, [00:47:00] how do you solve your knowledge changing over time? Well, it's a graph.

Aaron Levie: Yeah.

swyx: And there's that line of work, and then there are a lot of people who are like, well, you don't need it. And both are right.

Aaron Levie: Yeah. And the people who say you don't need it, what are they arguing for?

swyx: Markdown files.

Aaron Levie: Oh, sure, sure. Simplicity.

swyx: Versus structure. It's structure versus less structure. That's all it is.

Aaron Levie: I think the tricky thing is, again, when this gets met with real humans, they're just going to their computer. They're just working with some people on Slack or Teams. They're just sharing some data through a collaborative file system in Google Docs or Box or whatever.
I certainly like the vision of most knowledge-graph, futuristic ways of thinking about it. It's just, you know, it's 2026, and we haven't seen it play out yet. I mean, I remember, and actually I don't even know how old you guys are, but to show my age: I remember 17 years ago, everybody thought enterprises would just run on [00:48:00] wikis.

swyx: Yeah.

Aaron Levie: And Confluence, I mean, Confluence actually took off for engineering, for sure. Unquestionably. But the idea was that everything would be in the wiki. And I think, based on our general style of what we were building, we were just like, I don't know, people just want a workspace. They're going to collaborate with other people.

swyx: Exactly. Yeah. So you were anti-knowledge-graph.

Aaron Levie: Not anti, not anti.

swyx: Not anti?

Aaron Levie: I'm not anti, because I think your search system... I just think these are two systems, probably. But I'm not in any religious war. I don't want to be in anybody's YouTube comments on this. There's no fight for me.

swyx: We love YouTube comments. We get into comments.

Aaron Levie: Okay. But it's mostly just a virtue of what we built.

swyx: Yeah.

Aaron Levie: We just continued down that path, and that was what we pursued. But this is not a...

swyx: It's not existential for you. Great.

Aaron Levie: We're happy to plug into somebody else's graph. We're happy to feed data into it. We're happy for [00:49:00] agents to talk to multiple systems. Not our fight.

swyx: Yeah.

Aaron Levie: But I need your answer. Yeah.
Graphs are a very effective nerd snipe.

swyx: See, this is one opinion, and then I've...

Jeff Huber: I think the actual graph structure is emergent in the mind of the agent, in the same way it is in the mind of the human. And that's a more powerful graph, because it actually evolved over time.

swyx: So, don't tell me how to graph, I'll figure it out myself. Exactly. Okay. All right.

Jeff Huber: And what's yours?

swyx: I like the wiki approach. Uh, my, I'm actually

    Transcending Stuttering with Uri Schneider
    #90 How Microsoft's AI Innovation Officer Actually Uses AI | Dr. Michael J. Jabbour on Thinking, Not Just Tools

    Transcending Stuttering with Uri Schneider

    Play Episode Listen Later Mar 5, 2026 57:32


    Episode #90: How Microsoft's AI Innovation Officer Actually Uses AI | Dr. Michael J. Jabbour on Thinking, Not Just Tools AI is changing our brains. How we work. How we think. And even how we feel.  The question isn't "Should I use AI?" It's "How do I direct the change?" Michael J. Jabbour says he uses AI for 70% of his work. "Not because it's faster. Because it would be irresponsible not to."

    The Product Market Fit Show
    He built heads down for a year. Then landed a $1M contract. | Sam Jones, Co-Founder of Method Security

    The Product Market Fit Show

    Play Episode Listen Later Mar 5, 2026 45:13 Transcription Available


    Sam spent years at the Air Force and Palantir before deciding to build Method Security. Instead of launching an MVP and iterating with customers, he did the opposite: he shut out the world and built in the dark for a year based on his own conviction. In this episode, Sam breaks down his contrarian approach to building a platform for the enterprise and government. He reveals how he raised millions from Andreessen Horowitz with just a prototype, why he refuses to hire a sales team, and how he landed a seven-figure contract right out of the gate. Why You Should Listen: Why he ignored the "talk to users" advice and built in the dark for a year. How to raise a $5.5M seed round from a16z in just 3 days. The "2-Hour Bootcamp" strategy that shortens enterprise sales cycles. Why keeping your engineering team dangerously small creates speed. How to turn a design partnership into a $1M+ contract. Keywords: startup podcast, startup podcast for founders, product market fit, cybersecurity, a16z, Palantir, enterprise sales, design partners, government contracting, founder led sales. 00:00:00 Intro 00:02:00 From Air Force to Palantir 00:06:28 The "Shared Notion Space" of Ideas 00:10:04 Raising Seed from a16z in 3 Days 00:17:23 The "Dark Period": Building Without Users 00:22:23 Structuring Enterprise Design Partnerships 00:28:48 The "2-Hour Bootcamp" Sales Strategy 00:31:03 Why the Org Chart is Flat (15 Reports to CTO) 00:34:02 Converting Pilots to Commercial Contracts 00:41:07 The Moment of True Product Market Fit. Send me a message to let me know what you think!

    Using the Whole Whale Podcast
    Using AI to Turn Human Stories Into Insight (and Build Trust)

    Using the Whole Whale Podcast

    Play Episode Listen Later Mar 4, 2026 35:36


    On the Whole Whale podcast, George interviews Andy Citizen, CTO of Share More Stories, about collecting 150–300+ word experience-based stories (typed or voice) from targeted constituencies via a web app and analyzing them with AI. Share More Stories uses sequential classification across ~70 cloud models to score stories for evidence of emotions like anxiety, joy, and self-transcendence, combining these scores with light demographics and survey data, then using a generative AI agent constrained to the dataset to explore themes, anomalies, and demographic differences iteratively. They discuss nonprofit uses such as before-and-after journaling and program impact, the importance of prompts, and how AI should augment rather than automate, with emphasis on user competency, intent, validation, and avoiding hallucinations. Citizen critiques “AI everywhere” features and AI-written social content as trust-eroding, argues trust is a major opportunity, shares concerns about DevOps at scale, and reflects on community involvement and moving faster by reducing process gaps. 00:00 Meet Andy Citizen 00:50 What Share More Stories Does 01:57 Collecting Real Experience Stories 02:41 AI Scoring Emotional Signals 04:14 How Stories Are Gathered 05:24 Prompts Versus Surveys 06:26 Dashboards Reports And Agents 09:20 Nonprofit Program Evaluation Uses 12:06 Prompt Craft And Hidden Insights 14:05 AI Adoption And Training Gaps 17:46 Use Cases Lanes And Hallucinations 20:27 AI Side Of Fries And Trust 25:44 Rapid Fire Tech Questions 29:39 Personal Advice And Community 31:58 Magic Wand For Human Connection 34:12 How To Connect And Wrap Up  
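    The scoring pipeline described above, where classifiers produce per-emotion scores that are then joined with light demographics, can be sketched loosely in code. This is an invented stand-in, not Share More Stories' actual system: the emotion labels, cue words, and keyword matching below are illustrative placeholders for the ~70 cloud models mentioned in the episode.

```python
# Toy pipeline: score a story for evidence of emotions, then join the
# scores with light demographics. Keyword stubs stand in for the real
# cloud classification models; all labels and cues are invented.
EMOTION_CUES = {
    "anxiety": {"worried", "afraid", "stress"},
    "joy": {"happy", "delighted", "grateful"},
}

def score_story(text: str) -> dict:
    words = set(text.lower().split())
    # One score per emotion: fraction of that emotion's cue words present.
    return {e: len(words & cues) / len(cues) for e, cues in EMOTION_CUES.items()}

def analyze(story: str, demographics: dict) -> dict:
    # Combine emotion scores with demographic fields for later analysis.
    return {"scores": score_story(story), **demographics}
```

A downstream agent constrained to the dataset would then query these combined records for themes and demographic differences.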

    The VentureFuel Visionaries
    Building Enterprise AI with Startup Velocity with Microsoft's Director of AI & Venture Ecosystems Taylor Black

    The VentureFuel Visionaries

    Play Episode Listen Later Mar 4, 2026 29:55


    AI is no longer a future bet — it's a board-level mandate. But for corporate innovation leaders, the real question isn't whether to invest in AI… it's how to turn AI from experimentation theater into measurable enterprise value. Taylor Black, Director of AI & Venture Ecosystems in Microsoft's Office of the CTO, works at the intersection of AI strategy, venture ecosystems, and internal venture building. Taylor brings a rare dual perspective: enterprise AI leadership inside one of the world's largest technology companies — combined with firsthand startup-building experience. We unpack how AI takes impossible problems and makes them merely difficult, how this growth mindset of hyper abundance is paired with the enterprise rigor and the internal velocity needed to scale.

    New Books Network
    Jeremy Sosabowski: Community Leader and Entrepreneur

    New Books Network

    Play Episode Listen Later Mar 4, 2026 52:25


    In this episode, Jeremy Sosabowski, CEO and co‑founder of AlgoDynamix, reveals how his company is reinventing market forecasting through behavioral analytics rather than traditional fundamentals or news. By decoding real‑time transactional order flow, AlgoDynamix predicts price movements (hours or days in advance) based on what traders are actually doing: a fresh, practical edge for smaller hedge funds, family offices, and HNWI (High Net Worth Individuals) seeking actionable trading insights. Jeremy shares how the company continues to expand and refine its business model, and how they have built a scalable platform capable of handling complex, multi‑asset portfolios. He also dives into Cambridge's vibrant entrepreneurial ecosystem, highlighting how networking, community engagement, and thematic WhatsApp groups have created unexpected opportunities and collaborations. The episode is packed with insights for innovators, investors, and curious listeners. If you want to hear how behavioral science meets financial returns, and how an entrepreneur builds momentum through community, this conversation is worth your time. Links: CUE Cambridge University Entrepreneurs AlgoDynamix Jeremy Sosabowski LinkedIn Richard Lucas TEDxTarnow on "Opportunity Readiness" Jeremy Sosabowski at CAMentrepreneurs Open Coffee Cambridge OptiSynx clock project About Jeremy Sosabowski CEO, AlgoDynamix: Dr. Jeremy Sosabowski is Co-founder & CEO at AlgoDynamix, an AI-based financial price forecasting analytics company. Their products are used by asset managers, including CTAs, hedge funds, and family offices. Jeremy has over a decade of business and technology commercialisation experience. His previous roles include CTO at an instrumentation company (technology acquired) and data analyst within the online transaction space. His 'IP portfolio' includes several granted patents and more than 10 peer-reviewed publications. 
Jeremy has undergraduate and postgraduate degrees in engineering and signal processing including an Engineering Ph.D. from the University of Cambridge. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
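    As a purely illustrative gloss on "decoding real-time transactional order flow", here is a naive buy/sell imbalance signal. This is not AlgoDynamix's proprietary method; the threshold, labels, and trade format are invented for the sketch.

```python
# Naive order-flow imbalance: a directional signal computed from what
# traders are actually doing, rather than from fundamentals or news.
def flow_imbalance(trades: list) -> float:
    # trades: list of (side, size) tuples, side is "buy" or "sell"
    buys = sum(size for side, size in trades if side == "buy")
    sells = sum(size for side, size in trades if side == "sell")
    total = buys + sells
    return (buys - sells) / total if total else 0.0

def signal(trades: list, threshold: float = 0.2) -> str:
    # Map the imbalance to a coarse directional call (invented threshold).
    imb = flow_imbalance(trades)
    if imb > threshold:
        return "up"
    if imb < -threshold:
        return "down"
    return "flat"
```

A real forecasting system would work over rolling windows of full order-book data and learned thresholds rather than a single flat list of trades.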

    Between Two COO's with Michael Koenig
    AI Agents Need Logins Too: Identity, Security, and the Future of AI | Greg Keller, CTO, JumpCloud

    Between Two COO's with Michael Koenig

    Play Episode Listen Later Mar 4, 2026 32:01


    Get 90 days of Fellow free at Fellow.ai/coo In this episode, Michael Koenig speaks with Greg Keller, co-founder and CTO of JumpCloud, about identity access management and why it's becoming one of the most important operational systems in the age of AI. Greg explains how traditional identity systems were designed for office-based companies running Microsoft infrastructure and why that model broke as companies moved to SaaS, cloud infrastructure, and remote work. The discussion then turns to the next big shift: the rise of AI agents and synthetic identities inside organizations. As companies deploy more AI tools, the number of machine identities may soon outnumber human employees. Managing what those systems can access will become a critical security and operational challenge.   Topics Covered What a CTO actually does Greg explains the different types of CTO roles and how technology leaders help companies anticipate where the market is headed. Identity Access Management explained simply IAM answers three core questions inside every company: Who are you? What can you access? How is that access managed?   Why the old IT model broke Traditional identity systems were built for on-premise offices and Microsoft infrastructure. Modern companies now operate across: SaaS applications cloud infrastructure remote work environments multiple operating systems How JumpCloud approaches identity JumpCloud was built to manage identity across devices, applications, and infrastructure regardless of platform. Where Okta fits in the ecosystem Okta helped modernize browser-based authentication through Single Sign-On, while JumpCloud focuses on broader identity infrastructure.   AI, Security, and Synthetic Identities Why COOs should push AI adoption Greg argues AI adoption is no longer optional. Companies must encourage teams to improve productivity and efficiency using AI.   
The rise of synthetic identities AI agents, bots, APIs, and service accounts are becoming new actors inside companies that require identity governance.   Bots may soon outnumber employees Organizations will soon manage more machine identities than human ones.   AI as a potential insider threat AI systems can become security risks if they are granted excessive permissions or misinterpret policies.   The API key governance problem Many AI integrations rely on API keys, which are often poorly managed and can create hidden security risks.   Key Takeaway As companies adopt AI, identity access management becomes the control layer that determines what both humans and machines are allowed to do inside the organization. The companies that manage identity well will move faster and operate more securely.   Links: Michael on LinkedIn: https://linkedin.com/in/michael-koenig514 Greg on LinkedIn: https://www.linkedin.com/in/gregorykeller/ JumpCloud: https://jumpcloud.com/ Between Two COO's: https://betweentwocoos.com Episode Link: https://betweentwocoos.com/ai-agents-identity-access-greg-keller
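    The three IAM questions from the episode (who are you, what can you access, how is that access managed) can be modeled minimally, treating AI agents and bots as first-class identities alongside humans. This is a toy sketch, not JumpCloud's actual API; the names and policy table are invented.

```python
# Minimal identity-access model: authentication is a named identity,
# authorization is a default-deny policy lookup, and management is the
# single POLICY table. Machine identities (agents, bots, service
# accounts) are governed exactly like human ones.
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    name: str
    kind: str  # "human" or "machine" (AI agent, bot, service account)

POLICY = {
    ("alice", "crm"): True,
    ("report-bot", "crm"): True,
    ("report-bot", "payroll"): False,  # least privilege for agents
}

def can_access(identity: Identity, resource: str) -> bool:
    # Default-deny: anything not explicitly granted is refused.
    return POLICY.get((identity.name, resource), False)
```

The episode's "insider threat" worry maps to over-broad rows in that table: an agent granted excessive permissions is just a machine identity whose policy was managed badly.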

    Thirty Minute Mentors
    Episode 321: Zillow Co-Founder David Beitel

    Thirty Minute Mentors

    Play Episode Listen Later Mar 3, 2026 40:06


    David Beitel is the co-founder and Chief Technology Officer of Zillow. David was previously the CTO and one of the earliest team members of Expedia. David joins Adam to share his journey and his best lessons learned along the way. David and Adam discuss a wide range of topics: leadership, leading remotely, how leaders can leverage AI, how technical contributors can develop soft skills and grow as leaders, career success, product development, and more.

    Chat With Traders
    318 · Dave Mabe - The Shift to Systematic Trading — Building Backtested Confidence

    Chat With Traders

    Play Episode Listen Later Feb 27, 2026 56:22


    When Dave Mabe backtested his strategy, it outperformed his own discretionary trading and changed how he approached everything. In this episode, we discuss gapping breakouts, expectancy, systematic trading, drawdowns, and the reality gap between backtests and live execution. A practical conversation for traders serious about building durable edge. In this episode, we explore: · How Dave got introduced to markets: From early exposure to investing through his family to actively seeking more control over his capital and moving from swing trading into day trading. · Why rules matter: The transition from discretionary decisions to systematic frameworks, and why trading without a process is a fast path to inconsistency. · Backtesting as a "superpower": What backtesting really does for strategy development and confidence in your edge. · Reconciling backtests with real life: Practical realities of execution, slippage, and market structure, and how to build a feedback loop so your live results get closer to your simulations. · Drawdowns and mindset: How to handle periods where a strategy doesn't behave as expected, and why many traders quit in drawdowns rather than at all-time highs. · Scaling a trading business: The difference between scaling size versus scaling breadth, and why uncorrelated strategies matter. · Practical first step for systematic traders: How to start adding structure to your trading with backtesting, even if you're not a programmer. About the guest: Dave has been a professional trader and technologist for over two decades. As a former CTO of Trade-Ideas, he has unique experience at the intersection of algorithm design, real-time market data, and automated execution. Outside trading, he writes a popular daily newsletter on backtesting and systematic strategy development, and hosts the Line Your Own Pockets podcast focused on systematic approaches to markets. 
Links + Resources: · Link to Better Backtesting —Dave's free multi-day email course on building strategies and improving them over time. · Trade-Ideas, Amibroker, RealTest — examples of backtesting and strategy development platforms discussed in context.   Sponsor of Chat With Traders Podcast:  Trade The Pool:  http://www.tradethepool.com Time Stamps: Please note: Exact times will vary depending on current ads. 00:00 Intro and Background 08:29 Stock Selection and Systematic Trading Rules 11:32   Position Sizing, Expectancy and Risk Management 16:50   Discovering Backtesting and First Backtests 18:40   Backtesting Principles, Sample Size and Common Pitfalls 20:34   Gradual Automation and Live Trading Implementation 22:17   Trading Journal and Reconciling Backtest vs Live 27:27   Scaling through Automation: More Trades, Better Results 29:26   Drawdowns, Psychology and Handling Setbacks 34:14   Tools, AI and Software for Backtesting and Coding 39:56   Common Trading Myths Debunked (Partials, Stops) 48:01   Getting Started: Practical Steps, Resources and Closing   Trading Disclaimer:   Trading in the financial markets involves a risk of loss. Podcast episodes and other content produced by Chat With Traders are for informational or educational purposes only and do not constitute trading or investment recommendations or advice. Learn more about your ad choices. Visit megaphone.fm/adchoices
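    The expectancy concept the episode keeps returning to, the average profit a backtested strategy expects per trade, reduces to a one-line formula. The numbers below are invented for illustration, not taken from Dave's strategies.

```python
# Expectancy: the average amount a strategy expects to make per trade,
# computed from a backtest's win rate and average win/loss sizes.
def expectancy(win_rate: float, avg_win: float, avg_loss: float) -> float:
    # E = P(win) * avg_win - P(loss) * avg_loss
    return win_rate * avg_win - (1.0 - win_rate) * avg_loss

# A 40%-win-rate breakout system can still be positive-expectancy when
# winners run much larger than losers (hypothetical numbers):
per_trade = expectancy(0.40, 300.0, 100.0)  # roughly 60.0 per trade
```

This is why the episode stresses sample size: the win rate and average sizes only mean something when estimated over enough backtested trades.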