Podcasts about API

An API is a set of subroutine definitions, protocols, and tools for building software and applications.

  • 5,882 PODCASTS
  • 18,237 EPISODES
  • 42m AVG DURATION
  • 3 DAILY NEW EPISODES
  • Mar 13, 2026 LATEST


    Latest podcast episodes about API

    Grumpy Old Geeks
    737: Monetizable Content

    Mar 13, 2026 · 64:27


    In this week's show we start with FOLLOW UP: The world keeps trying to protect kids online — Indonesia just joined Australia, Spain, and Malaysia in banning social media for under-16s, while COPPA 2.0 sailed through the US Senate unanimously. Meanwhile, Roblox is using AI to clean up its chat, because apparently "Hurry TF up" is the hill they've chosen to die on — even as they're still dealing with the whole "pedophile problem" thing from January. On the AI copyright front, Gracenote is the latest company to sue OpenAI for helping itself to proprietary data, joining a growing queue of plaintiffs who apparently didn't get the memo that everything is training data now.

    IN THE NEWS: Anthropic is suing the Pentagon after being labeled a "supply chain risk" — apparently because the CEO said AI shouldn't be used for mass surveillance or autonomous weapons, which the Trump administration heard as fighting words. The delicious irony: the Pentagon is still running Claude in active operations while trying to phase it out. Speaking of active operations, investigators now think a missile strike on an Iranian girls' school may have been triggered by bad AI-generated intelligence from that same Claude-based system. So yes, the autocomplete that hallucinates your grocery list is also maybe accidentally bombing schools. Meta's Oversight Board is begging the company to get serious about AI-generated content after a fake war video from a Filipino fake news account racked up 700K views — while separately, Zuckerberg dropped cash on Moltbook, a "social network for AI agents" that turned out to be mostly humans larping as bots and had a security flaw that exposed everyone's API keys. The guy who built it basically vibe-coded the whole thing. Meta's own CTO said he didn't "find it particularly interesting." And yet.
    Oracle is hemorrhaging jobs and drowning in debt chasing AI dreams, its stock down 50% from peak — a timely reminder that "AI will replace workers" is currently manifesting as "companies set money on fire and lay people off to pay the electric bill." Researchers confirmed AI is homogenizing human thought and creativity — a thing some of us have been screaming since day one. A DOGE engineer allegedly walked out of the Social Security Administration with databases containing personal info on 500 million Americans on a thumb drive. The Ig Nobel Prize is relocating to Switzerland because it's no longer safe to invite international guests to America. Nintendo is suing the US government to get its tariff money back. SETI thinks it may have been accidentally filtering out alien signals due to space weather. And Pokémon Go players unknowingly spent a decade building a centimeter-accurate surveillance map of Earth's cities that's now guiding pizza delivery robots — which, honestly, tracks.

    In APPS & DOODADS: The GOG clan in Clash Royale just hit eight years old — respect. OpenAudible is the cross-platform audiobook manager your Audible library deserves, especially if you've got over a thousand books sitting there judging you.

    And finally in MEDIA CANDY: Monarch: Legacy of Monsters Season 2 is here, and pretty beige. Live Nation settled its DOJ antitrust case for $200 million, kept Ticketmaster, and avoided a breakup — meanwhile court documents revealed employees joking about "robbing fans blind" and gouging "stupid" customers, which explains basically every concert ticket you've bought in the last decade. YouTube is now officially the world's largest media company at $62 billion in revenue. Bluesky's CEO is stepping down, which is either a bad sign or just the natural order of "person who built the cool thing hands it to the person who scales the cool thing." Dead Set — Charlie Brooker's 2008 zombie-in-the-Big-Brother-house miniseries — is worth a watch if you haven't.
    And trailers dropped for Daredevil: Born Again Season 2 (March 24th), The Boys final season (April 8th), and The Super Mario Galaxy Movie (April 1st — yes, really).

    Sponsors:
    DeleteMe - Get 20% off your DeleteMe plan when you go to JoinDeleteMe.com/GOG and use promo code GOG at checkout.
    CleanMyMac - Get Tidy Today! Try 7 days free and use code OLDGEEKS for 20% off at clnmy.com/OLDGEEKS
    Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
    SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
    1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password

    Show notes at https://gog.show/737
    Watch on YouTube: https://youtu.be/DgSYnFF6twE

    FOLLOW UP
    Indonesia announces a social media ban for anyone under 16
    Anthropic Sues Pentagon
    Metadata company Gracenote is the latest to sue OpenAI for copyright infringement
    Roblox introduces real-time AI-powered chat rephraser for inappropriate language

    IN THE NEWS
    COPPA 2.0 passes the Senate again, unanimously this time
    AI Error Likely Led to Iran Girl's School Bombing
    The Oversight Board says Meta needs new rules for AI-generated content
    Mark Zuckerberg Decides Meta Needs More Slop, Buys the Social Network for AI Agents
    Oracle Axing Huge Number of Jobs as AI Crisis Intensifies
    You can (sort of) block Grok from editing your uploaded photos
    Researchers Say AI Is Homogenizing Human Expression and Thought
    Social Security watchdog investigating claims that DOGE engineer copied its databases
    Nintendo is suing the US government over Trump's tariffs
    SETI Thinks It Might Have Missed a Few Alien Calls. Here's Why
    Ig Nobel Ceremony Relocates to Europe Amid Safety Concerns in Trump's America

    APPS & DOODADS
    Clash Royale
    OpenAudible
    Bluesky's CEO is stepping down after nearly 5 years
    How Pokémon Go is giving delivery robots an inch-perfect view of the world
    Robot Escorted Away By Cops After Terrorizing Old Woman

    MEDIA CANDY
    Monarch: Legacy of Monsters Season 2
    Live Nation settlement avoids breakup with Ticketmaster
    Court documents reveal Live Nation employees joking about robbing, gouging "stupid" fans
    YouTube Is the World's Largest Media Company, MoffettNathanson Says
    Paradise Season 2
    DAREDEVIL: Born Again Season 2 Official Teaser Trailer 2 (2026)
    The Boys Final Season Trailer
    The Super Mario Galaxy Movie | Final Trailer
    Dead Set

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    php[podcast] episodes from php[architect]
    The PHP Podcast 2026.03.12

    Mar 12, 2026 · 48:59


    The PHP Podcast streams live, typically every Thursday at 3 PM PT. Come join us and subscribe to our YouTube channel. Another fun episode of the PHP Podcast! Here's what we covered:

    Internet Woes & Technical Difficulties
    Eric continued his saga with connectivity issues, dropping multiple times on Zoom calls and even during the podcast. After trying everything from coax cable converters to different network setups, he's considering just running a new network cable to his office. The Wi-Fi experiment during the show… didn't go great.

    First Waymo Experience
    John shared his first ride in a Waymo self-driving car! While his wife wasn't thrilled about having to walk to a specific pickup spot, the experience was pretty impressive. One weird moment: the car got confused by a bus at a 45-degree angle and started creeping into the left lane. Overall verdict: comfortable, cheaper than Uber, and no awkward small talk required.

    Eric's Coding Adventure
    In a rare "Eric writes code" moment, he debugged a POC project by littering the codebase with 15+ write-to-log statements (because who needs Xdebug?). The culprit? A renamed variable he forgot to update elsewhere. Classic. John was horrified to learn there's no static analysis running. The demo went well… until someone asked to see the customer interface.

    MySQL 8.0 → 8.4 Upgrade Planning
    John's been preparing for the MySQL 8.0 to 8.4 upgrade (8.0 is end of life). The previous team left amazing documentation, but there's one major issue: the DBA rejected converting from the utf8mb3 to the utf8mb4 character set because the tables are so massive that the conversion would lock them for far too long. That's a problem for future John.

    AWS S3 Cleanup – 75 Million Files!
    John tackled a years-old problem: phone call recordings stored as both WAV and MP3 files in S3. The cleanup script identified 75 million WAV files to delete, which took a day and a half to process. Potential savings: $100/day. Joe asked about intelligent tiering, which… yeah, he probably should look into that.

    PHP Tek 2026 – 68 Days Away!
    The conference schedule is live! Four tracks (three PHP Tek + one JS Tek), hotel rooms at the discounted rate are going fast, and Eric admitted he skipped Scale this year because he was just too exhausted. Focus is on PHP Tek now!

    Laravel 13 Dropping March 17
    Laravel 13 drops on Tuesday with a focus on moving from protected properties to attributes. According to the article, there are no breaking changes (we'll see about that). Overall, it's a light upgrade with some new features but nothing earth-shattering.

    March Friday the 13th Anniversary
    Eric and Beck's dating anniversary! They started dating on Friday, March 13th, 1987, when Eric picked her up at 5 PM for a midnight showing of a terrible Burt Reynolds movie called "Heat" (which apparently doesn't exist according to IMDb). The whole show tried to help figure out what movie it actually was. Spoiler: it was called Heat.

    PHPUnit 13 Released
    Sebastian Bergmann appeared on PHP Alive & Kicking to talk about PHPUnit 13. The big change: arrays of assertions. The release also features a hard deprecation of some older methods. Check out the release notes for all the details.

    OpenClaw/Archie AI Success
    Eric's thrilled with how the team is using the OpenClaw AI agent for daily standups. Team members are not only doing their morning standups but updating it throughout the day and even asking it to check for security alerts. The engagement has been way beyond expectations. Now Eric's fighting the temptation to buy a Mac Mini to run it properly and get it back on Ollama, saving on API costs.
Links from the show: PHP Tek 2026 – The Premier PHP Conference WiFi Mapping User Guide – Turn your router into a see-through-walls device WiFi Mapping Demo on X Laravel 13 drops March 17 — here’s every new feature with code examples X: https://x.com/phparch Mastodon: https://phparch.social/@phparch Bluesky: https://bsky.app/profile/phparch.com Discord: https://discord.phparch.com Subscribe to our magazine: https://www.phparch.com/subscribe/ Host: Eric Van Johnson X: @shocm Mastodon: @eric@phparch.social Bluesky: @ericvanjohnson.bsky.social PHPArch.me: @eric John Congdon X: @johncongdon Mastodon: @john@phparch.social Bluesky: @johncongdon.bsky.social PHPArch.me: @john Streams: Youtube Channel Twitch Partner This podcast is made a little better thanks to our partners Displace Infrastructure Management, Simplified Automate Kubernetes deployments across any cloud provider or bare metal with a single command. Deploy, manage, and scale your infrastructure with ease. https://displace.tech/ PHPScore Put Your Technical Debt on Autopay with PHPScore CodeRabbit Cut code review time & bugs in half instantly with CodeRabbit. Music Provided by Epidemic Sound https://www.epidemicsound.com/ The post The PHP Podcast 2026.03.12 appeared first on PHP Architect.
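The S3 cleanup described in this episode boils down to listing every recording key and deleting each WAV that has an MP3 twin under the same name. Here is a minimal sketch of that matching step (the function name and example keys are ours, not from the show; in practice the keys would come from boto3's `list_objects_v2` paginator, and deletions would go out in batches of up to 1,000 keys per `delete_objects` call):

```python
from pathlib import PurePosixPath

def find_redundant_wavs(keys):
    """Return the .wav keys that have an .mp3 twin with the same stem.

    Works on any iterable of S3 object keys; assumes lowercase extensions
    on the keys it returns (fine for a sketch, not a production detail).
    """
    stems = {}  # stem -> set of extensions seen under that stem
    for key in keys:
        p = PurePosixPath(key)
        stems.setdefault(str(p.with_suffix("")), set()).add(p.suffix.lower())
    return [
        stem + ".wav"
        for stem, exts in stems.items()
        if ".wav" in exts and ".mp3" in exts
    ]

# Hypothetical key layout for illustration:
keys = ["calls/2019/abc.wav", "calls/2019/abc.mp3", "calls/2020/xyz.wav"]
print(find_redundant_wavs(keys))  # only abc.wav has an mp3 twin
```

The single pass over the key list is what makes a 75-million-object job tractable: memory holds only one small set per stem, and the actual deletes can stream out as each page of results is processed.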

    The Pitch
    #181 ROOK: The VC Vibe Check

    Mar 11, 2026 · 52:35


    Marco Benitez moved his family to the US to build Rook, an API for wearables. But when a familiar face shows up in The Pitch Room, he learns just how much vibes matter in venture. This is The Pitch for Rook. Featuring investors Elizabeth Yin, Jesse Middleton, Laura Lucas, and Mike Ma. Watch Marco's pitch uncut on Patreon (@ThePitch) Join us for the Season 16 taping and founder happy hour in Tampa Subscribe to our email newsletter: insider.pitch.show Learn more about The Pitch Fund: thepitch.fund *Disclaimer: No offer to invest in Rook is being made to or solicited from the listening audience on today's show. The information provided on this show is not intended to be investment advice and should not be relied upon as such. The investors on today's episode are providing their opinions based on their own assessment of the business presented. Those opinions should not be considered professional investment advice. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    The Changelog
    From Tailnet to platform (Interview)

    Mar 11, 2026 · 102:15


    Adam talks with Tailscale co-founder and Chief Strategy Officer David Carney about where Tailscale is headed next: TSIDP, TSNet, multiple tailnets, and Aperture. They get into clickless auth (via TSIDP), TSNet apps, multiple tailnets for isolation and control, and Aperture, Tailscale's private AI gateway for API key management, observability, and agent security.

    Birdies & Bourbon
    THE PLAYERS Preview & Picks | Daniel Berger vs. Akshay Bhatia | Scheffler Panic | McIlroy WD & More

    Mar 11, 2026 · 71:42


    On the show we recap the API and talk through best bets for Akshay in 2026. Is it the Max Homa caddy change that is making the difference? The Scottie Scheffler panic alarm continues to be hit. Could he be on a Matt Wolff-style mental decline? Rory McIlroy WD's after a back injury and is ready for The Players. Does Rickie 'Bucky' Fowler look like Leonardo DiCaprio? If so, does Howard Stern's Beetlejuice look like Denzel? We get into The Players 2026 best bets. We like Min Woo, Si Woo and more. Looking forward to a great week. We also chat some movies with the best villains over the years. Is Jack better than Heath, Keaton over Bale?

    03:00 Travel Stories and Upcoming Guests
    06:01 Golf Insights and Tournament Discussions
    08:55 Analyzing Player Performances and Strategies
    11:59 Controversies in Golf: Anchoring and Equipment
    18:00 Press Conferences and Player Personalities
    24:29 Tournament Insights and Caddy Changes
    27:24 Upcoming Tournaments and Player Performances
    30:51 Tiger Woods and Tournament Eligibility
    32:14 Panic Alarm: Player Performance Concerns
    35:03 Recent Winners and Betting Strategies
    38:43 Heroes and Villains in Golf Betting
    41:23 Analyzing Player Stats and Future Bets
    48:10 Betting Insights and Predictions
    50:58 One and Done Picks
    53:59 Film and Character Discussions
    57:03 Recency Bias in Film Characters
    01:00:10 Batman and Villains Analysis
    01:02:55 Future Movie Ideas and Remakes

    Apparel for the show provided by turtleson. Be sure to check them out online for the new season lineup at https://turtleson.com/ Thanks to Fantasy National Golf Club for providing the stat engine for the show. They can be found at https://www.fantasynational.com Be sure to check out The Neat Glass online at theneatglass.com or on Instagram @theneatglass for an improved experience, and use discount code bb10 to receive your Birdies & Bourbon discount. Thank you for taking the time to listen to the Birdies & Bourbon Show for all things PGA Tour, golf, gear, bourbon and mixology. Dan & Cal aim to bring you entertaining and informative episodes weekly. Please help spread the word on the podcast and tell a friend about the show. You can also help by leaving a 5-Star iTunes review. We love to hear the feedback and support! Cheers. Follow on Twitter & Instagram (@birdies_bourbon)

    Her Faith At Work
    113: How To Grow Your Business with SEO (Without Social Media) — Faith Hanan on Her Faith at Work

    Mar 11, 2026 · 29:51


    Episode Overview
    Are you tired of posting on Instagram and TikTok every day — only to watch your content die in 24 hours? In this episode of Her Faith at Work, host Jan Touchberry sits down with Faith Hanan, accidental SEO expert, copywriter, barrel racer, and mom of three, to talk about one of the most powerful (and misunderstood) tools for growing a Christian woman's business online: search engine optimization.

    Faith has not posted on social media for her business in over three years — and her business is still growing. In this conversation, she breaks down why SEO is the "crockpot of marketing," how to find the right keywords for your business, and what tools you can use today even if your budget is $0.

    About Faith Hanan
    Faith Hanan is an accidental SEO expert, copywriter, podcaster, homeschool mom, worship leader, and competitive barrel racer. She runs a copywriting and SEO agency and teaches faith-filled entrepreneurs how to scale their online businesses using SEO, keywords, and blogging — all in less than 20 hours a week. She is the host of the Simple SEO & Marketing podcast and creator of the Simple SEO Framework (SSF) group coaching program.

    What You'll Learn in This Episode
    What SEO actually is — and why it shortens your sales cycle by putting you in front of "hot leads" instead of cold audiences
    Why social media content has a half-life of less than 24 hours (and what to do instead)
    How Faith hasn't posted on social media for her business in 3+ years — and still grows
    The #1 mistake entrepreneurs make with keywords (hint: you're speaking a different language than your customers)
    Why targeting 120 highly specific monthly searches beats chasing 10,000 broad ones
    The free tools you can start using today: Ubersuggest and Google Search Console
    How to know if your SEO is actually working
    Faith's simple framework: get your core pages optimized first, then add SEO-rich blog content
    How to write an SEO-optimized blog post in 90 minutes and maintain your traffic machine in 2 hours a week

    Key Takeaways
    What is SEO and why does it matter for small business owners?
    SEO (Search Engine Optimization) helps search engines like Google understand what your business does and surface your website to the right people at the right time. Unlike social media content, which dies within 24 hours, SEO-optimized content can drive traffic for years. Faith describes it as "the crockpot of marketing" — you do the work upfront and results compound over time.

    How do I find the right keywords for my business?
    Faith recommends a two-step process:
    1. Start on paper — brainstorm words and phrases your ideal client would type into Google when searching for the problem you solve.
    2. Then validate with a tool — Ubersuggest (free version) connects directly to Google's API and gives you real search data, not estimates.
    Key insight: A keyword with only 120 monthly searches from your exact ideal client is far more valuable than a keyword with 10,000 monthly searches dominated by Nike and Under Armour.

    How long does SEO take to work?
    Most businesses start seeing results within 1.5 to 3 months, with more significant momentum around 6 months. The speed depends on your website's current health, how long you've had a web presence, and how competitive your keywords are.

    Can I do SEO without being on social media?
    Yes. Faith is living proof. She stopped posting on social media for her business over three years ago and relies entirely on SEO and long-form content (blogs, podcasts). Because SEO-optimized content continues to be indexed and discovered for years, it works for you 24/7 — even while you sleep.

    Episode Timestamps
    00:00 Introduction — Meet Faith Hanan
    02:25 Faith's background: SEO agency, homeschooling, and barrel racing
    03:55 Jan's personal SEO journey — 6 months in, seeing momentum
    04:32 SEO is the "crockpot of marketing" — why people quit too soon
    05:00 What SEO actually does (and why it shortcuts the sales process)
    06:33 The social media rat race and why SEO is a better long-term strategy
    07:13 Faith hasn't posted on social media for 3+ years — and it started as a church fast
    08:10 The half-life of social media vs. the longevity of blog content
    09:17 How to get started with SEO without getting overwhelmed
    09:45 The keyword language gap: how you talk vs. how clients search
    10:57 Why small, specific keywords beat high-volume broad terms
    13:40 Free and affordable keyword tools — Ubersuggest deep dive
    15:00 Start keyword research on paper first, then use tools
    16:36 How to know if SEO is working — Google Analytics & Search Console
    17:00 Optimize your core website pages before your blog
    19:16 Broken links, analytics, and getting coaching (barrel racing analogy)
    22:05 About the Simple SEO Framework (SSF) group coaching program
    24:42 Special discount code for Her Faith at Work listeners: JAN
    26:11 Where to find Faith — FaithHanan.com/framework
    26:25 The Workflow Exchange — SEO for blogging workflow from Faith

    Resources & Links Mentioned
    Faith Hanan's website
    Simple SEO Framework (SSF) Group Coaching
    Keyword Research Kickstart (affordable beginner course)
    Simple SEO & Marketing Podcast
    The Workflow Exchange (free SEO for Blogging workflow)
    Ubersuggest (free keyword research tool)
    Google Search Console (free)
    Google Analytics (free)

    Developer Tea
    From Software Engineer to Agent Manager - How Work is Changing in A New Software Development Paradigm

    Mar 10, 2026 · 21:20


    If you're a software engineer right now, you likely feel like your world is changing overnight. We are writing half or less the amount of code that we wrote even a year ago, which represents a seismic, groundbreaking shift in our industry. However, the rapid introduction of new tools can slide quickly from exciting to purely chaotic, leaving you feeling like you are falling behind. In today's episode, I explore how this changes the nature of our day-to-day work, and why the key to surviving this transition is shifting your mindset from a traditional "Software Engineer" to an "Agent Manager".

    The Illusion of Velocity vs. Actual Chaos: While the big-picture promise of AI is that the software development pipeline will move exponentially faster, the reality on the ground often feels like unadulterated chaos. Trying to adopt every new tool while spinning up multiple agents to work on parallel tickets introduces a massive new cognitive burden.

    The Context-Switching Trap: Understand why parallelizing agent workflows fundamentally changes your context-switching overhead. You are no longer just reloading context to build something yourself; you are reloading it to manage, review, and validate a building agent, which rapidly drains your cognitive ability and leads to burnout.

    The "Agent Manager" Mindset: Treating AI as just a "smart autocomplete" while you try to do the same old job will not work. You need to start viewing your role more like assembly-line or process management, focusing on facilitating the system rather than typing every line of syntax.

    Adopt Old-School Quality Control Tactics: Discover how traditional management methods are becoming essential for individual contributors. Just like a factory manager doesn't inspect every single item off the line, you must develop methods for spot checks, anomaly detection, and standardizing outputs to evaluate the quality and quantity of your agents' work.

    Shift Your Work Upfront: Recognize that your core effort must move to the specification and planning phases. Your job is increasingly about setting the context, defining the prompt, and establishing strict guardrails before the agent begins its work.

    Redefining Your Work in Progress (WIP): Proven principles like limiting WIP and focusing on finishing rather than starting are more important than ever to reduce cognitive burden. However, you must adapt these principles to fit a workflow where you are managing processes rather than manually coding.

    Episode Homework: Take a step back and ask yourself: "What is my true work in progress? Am I actually manually doing these tickets, or am I managing the processes that produce quality ticket work?"
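The WIP-limiting idea the episode closes on can be made concrete: put a hard cap on how many agent tasks run at once, so finished work comes back for review no faster than you can validate it. A minimal sketch under stated assumptions — `MAX_WIP`, `run_agent`, and the ticket names are hypothetical placeholders, not anything from the episode:

```python
import concurrent.futures

MAX_WIP = 2  # hypothetical cap: never run more agents than you can review

def run_agent(ticket):
    """Placeholder for dispatching one ticket to a coding agent."""
    return f"draft PR for {ticket}"

def process(tickets):
    # A bounded pool enforces the WIP limit: a new ticket is picked up
    # only when one of the MAX_WIP in-flight tickets has finished and
    # its output has been handed back for human review.
    with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WIP) as pool:
        return list(pool.map(run_agent, tickets))

print(process(["T-1", "T-2", "T-3"]))
```

The design choice mirrors the episode's point: the queue, not the engineer, absorbs the backlog, so context switches stay bounded by `MAX_WIP` regardless of how many tickets are waiting.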

    BlockHash: Exploring the Blockchain
    Ep. 688 Utexo | Bringing Tether back to Bitcoin (feat. Viktor Ihnatiuk)

    Mar 10, 2026 · 40:53


    For episode 688 of the BlockHash Podcast, host Brandon Zemp is joined by Viktor Ihnatiuk, Co-Founder and CEO of Utexo, a Bitcoin-native stablecoin settlement network backed by Tether. Utexo enables private, compliant USDT payments with fixed costs, powered by the Lightning Network and RGB.

    Viktor is a Bitcoin and Web3 engineer with over 12 years of experience building core infrastructure, protocol tooling, and privacy-preserving distributed systems. A serial entrepreneur, he has founded and scaled multiple successful ventures across the crypto industry. Previously, Viktor scaled Boosty Labs into the leading European Web3 development house, growing the team to 150+ engineers and partnering with major industry players including Coinbase, Ledger, Consensys, MoonPay, and Blockchain.com. Earlier in his career, he joined Storj Labs to help build decentralized cloud infrastructure, where he led the Growth team. He was responsible for expanding the distributed node network and shipping operator-facing tools that improved usability and long-term sustainability. Following this period of growth and infrastructure maturation, Storj achieved a successful exit after its acquisition. In parallel with these ventures, Viktor co-founded Astroid to support early BTCFi teams, helped launch the RGB Association, and contributed to Thunderstack — the primary infrastructure provider for RGB — built in collaboration with Tether and Fulgur. Across his work, Viktor focuses on expanding Bitcoin's utility and driving real-world adoption through scalable, privacy-first financial applications.

    Go Get That
    EP 181 — RECAPPING THE ARNOLD PALMER INVITATIONAL

    Mar 10, 2026 · 38:20


    Our congrats to Akshay Bhatia for notching his third PGA Tour win at Bay Hill. Dan and Jordan are back to discuss his playoff win and how an entertaining Sunday at the API unfolded when it did not look likely. We discuss Jordan Spieth and his continual progression, along with what we thought he did unexpectedly well. Enjoy!

    Fairway Rollin'
    Akshay Bhatia's Win, Concern for Rory McIlroy, and the Players Preview

    Mar 9, 2026 · 69:59


    House and Nathan are joined by Justin Ray to recap Akshay Bhatia's win at the Arnold Palmer Invitational. Then, they preview The Players and share the golfers they think can make some noise, including their sleeper picks. Finally, they discuss what Brian Rolapp may say at his press conference and share their winner picks.

    (0:00) Welcome to Fairway Rollin' with Justin Ray!
    (1:45) Akshay Bhatia's short game was phenomenal
    (6:40) Who else impressed at API?
    (29:20) Let's preview The Players
    (49:15) What sleepers do we like?
    (57:45) On Brian Rolapp's upcoming press conference
    (1:02:20) Players Championship winner picks

    The Ringer is committed to responsible gaming. Please visit www.rg-help.com to learn more about the resources and helplines available.

    Hosts: Joe House and Nathan Hubbard
    Guest: Justin Ray
    Producers: Tucker Tashjian and Mike Wargon

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Circles Off - Sports Betting Podcast
    This UFC Betting Line Looked Very Suspicious...

    Mar 9, 2026 · 85:10


    The crew reacts to one of the strangest betting markets in recent UFC memory, breaking down the suspicious line movement in the Michael Johnson vs Drew Dober fight and why so many bettors immediately questioned what was happening. Was it sharp action, bad numbers from sportsbooks, or something even stranger? The panel dives into how the market moved, why it raised eyebrows across gambling Twitter, and what bettors should actually take away from situations like this. They also discuss whether sports bettors truly need projections or models to succeed, when gambling is actually the most fun for bettors, and other stories from the week in betting. Circle Back is hosted by Jacob Gramegna and features professional sports bettor and CEO of The Hammer, Rob Pizzola, basketball originator Kirk Evans, and sophisticated square Geoff Fienberg. The group reacts to the biggest conversations across the betting world each week — from market drama and industry controversies to betting strategy debates and everything in between.

    Where It Happens
    I let OpenClaw run my organic marketing (while I sleep)

    Mar 9, 2026 · 43:19


    I sit down with Oliver Henry, a full-time employee who is generating hundreds of dollars in monthly recurring revenue from mobile apps he barely touches, thanks to an AI marketing agent he built on OpenClaw called Larry. We walk through how Larry autonomously creates TikTok slideshow content, reads analytics, iterates on hooks and CTAs, and feeds performance data back into the content loop. Oliver also shares how he packaged the entire system as a free, downloadable skill on Larry Brain so anyone can replicate it. By the end of the episode, you will understand the full "Larry Loop" — from content creation to conversion optimization — and why skills are poised to reshape how we think about SaaS altogether.

    I'm hosting a free workshop so you can build your business in the age of AI. Sign up here: https://startup-ideas-pod.link/build-with-ai-2026

    Links Mentioned:
    Larry Brain: https://startup-ideas-pod.link/Larry-brain
    QMD Skill: https://startup-ideas-pod.link/qmd-skill

    Timestamps
    00:00 – Intro
    01:25 – Background on marketing an iOS app with OpenClaw
    06:43 – Larry's first posts and iterating
    03:55 – Posting strategy and first viral hit: 137K views
    12:01 – Communicating with Larry via WhatsApp
    12:53 – Mission control vs. single-agent workflow
    14:36 – The CTA problem: views without conversions
    17:07 – The Larry Loop explained: analytics → content → metrics → iterate
    18:15 – Boomers, engagement bait, and the algorithm boost
    20:33 – The importance of iteration
    23:36 – How Larry brainstorms and validates new hooks
    27:57 – The power of OpenClaw
    30:04 – The vision for Larry
    31:49 – Model choices: Claude vs. OpenAI and over-optimization
    34:38 – OpenClaw vs. cloud alternatives (Manus, Cowork)
    37:39 – Getting started: Larry Brain onboarding and 80+ skills
    40:13 – Ernesto Lopez: $70K MRR using the Larry Loop
    41:27 – Doing all of this with a full-time job
    42:28 – QMD Skill for cutting token usage and closing thoughts

    Key Points
    An AI agent (Larry) built on OpenClaw autonomously creates TikTok slideshows, reads analytics, and iterates on content — driving hundreds of dollars in MRR with almost zero manual effort.
    The "Larry Loop" is a full-funnel feedback cycle: TikTok analytics feed into content creation, and app metrics feed back into the top of the funnel so the agent continuously improves.
    Posting TikTok content as a draft (rather than directly via API) lets you add trending sounds and avoids the algorithm penalty for bot-posted content.
    Hooks drive views; CTAs drive conversions. Diagnosing which is underperforming is the key to scaling.
    OpenClaw skills are locally owned, fully editable, and free from hosting or subscription costs — Oliver argues they will change how we think about SaaS.
    Picking a model (Claude or OpenAI) matters far less than learning how to work with it; 98% of users will see little difference between incremental model upgrades.

    The #1 tool to find startup ideas/trends - https://www.ideabrowser.com
    LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. We turn 'what if' into reality with AI, apps, and next-gen products https://latecheckout.agency/
    The Vibe Marketer - Resources for people into vibe marketing/marketing with AI: https://www.thevibemarketer.com/

    FIND ME ON SOCIAL
    X/Twitter: https://twitter.com/gregisenberg
    Instagram: https://instagram.com/gregisenberg/
    LinkedIn: https://www.linkedin.com/in/gisenberg/

    FIND OLIVER ON SOCIAL
    X: https://x.com/oliverhenry
    Larry Brain: https://www.larrybrain.com

    The Jim Colbert Show
    No Blue Cheese on my Drummies

    The Jim Colbert Show

    Play Episode Listen Later Mar 9, 2026 159:04 Transcription Available


Monday – Alejandro joins us today. We review the inaugural Bourbon Bus and talk Cuba, insurance tracking, wings and drummies, the recent time change, and the teacher prank tragedy. Brandon Kravitz on the dramatic finish at the API, Mike Evans leaving the Bucs, other NFL moves, and the Magic are hot. Attorney Ray Traendly on the death of a Georgia teacher after a prank gone wrong, and the DOJ vs. Live Nation/Ticketmaster. Plus, JCS News, JCS Trivia & You Heard it Here First. See omnystudio.com/listener for privacy information.

    Marketing O'Clock
    No Click, No Credit. Meta Updates Its Attribution Framework

    Marketing O'Clock

    Play Episode Listen Later Mar 9, 2026 52:43


    This week on Marketing O'Clock: Google is disabling the Ads API Customer Match feature for certain users starting in April. Also, Meta is adding click and engage-through attribution to its ad measurement framework. Visit us at - https://marketingoclock.com/
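The episode's title, "No Click, No Credit," is about how a conversion gets bucketed depending on whether the last touch was an ad click or a lighter engagement, each within its own lookback window. The sketch below illustrates that window-based bucketing in Python; the window lengths and rule names are assumptions for illustration, not Meta's published spec.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of click-through vs. engage-through attribution.
# Window lengths here are illustrative placeholders, not Meta's actual values.

CLICK_WINDOW = timedelta(days=7)    # assumed click-through lookback window
ENGAGE_WINDOW = timedelta(days=1)   # assumed engage-through lookback window

def attribute(touch_type: str, touch_time: datetime,
              conversion_time: datetime) -> str:
    """Return which attribution bucket (if any) a conversion falls into."""
    elapsed = conversion_time - touch_time
    if touch_type == "click" and elapsed <= CLICK_WINDOW:
        return "click-through"
    if touch_type == "engage" and elapsed <= ENGAGE_WINDOW:
        return "engage-through"
    return "unattributed"

now = datetime(2026, 3, 9, 12, 0)
print(attribute("click", now - timedelta(days=3), now))    # click-through
print(attribute("engage", now - timedelta(hours=6), now))  # engage-through
print(attribute("engage", now - timedelta(days=3), now))   # unattributed
```

The same engagement that earns credit six hours before a conversion earns nothing three days out, which is why adding engage-through buckets changes reported performance without changing actual performance.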

    The Cybersecurity Defenders Podcast
    Learning how to trust that AI is secure with Saurabh Shintre from Realm Labs / Defender Fridays [#299]

    The Cybersecurity Defenders Podcast

    Play Episode Listen Later Mar 9, 2026 30:33


Saurabh Shintre, Founder and CEO of Realm Labs, is on Defender Fridays today to discuss securing AI from within. Saurabh previously led AI security research at Splunk and Symantec. He has been at the forefront of AI security research for nearly a decade, with multiple publications and patents, and regularly features on public forums on issues regarding security and AI. Saurabh holds a PhD from Carnegie Mellon. Learn more at https://www.realmlabs.ai/

Register for Live Sessions
Join us every Friday at 10:30am PT for live, interactive discussions with industry experts. Whether you're a seasoned professional or just curious about the field, these sessions offer an engaging dialogue between our guests, hosts, and you – our audience.
Register here: https://limacharlie.io/defender-fridays
Subscribe to our YouTube channel and hit the notification bell to never miss a live session or catch up on past episodes!

Sponsored by LimaCharlie
This episode is brought to you by LimaCharlie, a cloud-native SecOps platform where AI agents operate security infrastructure directly. Founded in 2018, LimaCharlie provides complete API coverage across detection, response, automation, and telemetry, with multi-tenant architecture designed for MSSPs and MDR providers managing thousands of unique client environments.

Why LimaCharlie?
Transparency: Complete visibility into every action and decision. No black boxes, no vendor lock-in.
Scalability: Security operations that scale like infrastructure, not like procurement cycles. Move at cloud speed.
Unopinionated Design: Integrate the tools you need, not just those contracts allow. Build security on your terms.
Agentic SecOps Workspace (ASW): AI agents that operate alongside your team with observable, auditable actions through the same APIs human analysts use.
Security Primitives: Composable building blocks that endure as tools come and go. Build once, evolve continuously.

Try the Agentic SecOps Workspace free: https://limacharlie.io
Learn more: https://docs.limacharlie.io/

Follow LimaCharlie
Sign up for free: https://limacharlie.io/
LinkedIn: /limacharlieio
X: https://x.com/limacharlieio
Community Discourse: https://community.limacharlie.com/

Host: Maxime Lamothe-Brassard - CEO / Co-founder at LimaCharlie

    The Cybersecurity Defenders Podcast
    Drones damage data centers, Iranian cyber retaliation, Sloppy Lemming & Honeywell vulnerability / Intel Chat [#300]

    The Cybersecurity Defenders Podcast

    Play Episode Listen Later Mar 9, 2026 35:43


In this episode of The Cybersecurity Defenders Podcast, we discuss some intel being shared in the LimaCharlie community.

Iranian drone strikes damaged three Amazon Web Services data center facilities in the Middle East, highlighting the physical risks associated with large-scale cloud infrastructure.

Cyber activity linked to Iran and pro-Iranian actors has intensified following a joint US–Israeli military strike on Iran that killed Supreme Leader Ayatollah Ali Khamenei and several other government officials.

The India-linked advanced persistent threat group known as “Sloppy Lemming” has significantly increased its cyber operations over the past year, targeting organizations in Pakistan, Bangladesh, and other parts of South and Southeast Asia.

A cybersecurity researcher has reported a potentially serious vulnerability in Honeywell's IQ4 building management controller, though the vendor disputes both the severity and practical impact of the issue.

Support our show by sharing your favorite episodes with a friend, subscribe, give us a rating or leave a comment on your podcast platform.

This podcast is brought to you by LimaCharlie, maker of the SecOps Cloud Platform, infrastructure for SecOps where everything is built API first. Scale with confidence as your business grows. Start today for free at limacharlie.io.

    The Bogosity Podcast
    Bogosity Podcast for 8 March 2026

    The Bogosity Podcast

    Play Episode Listen Later Mar 8, 2026 32:12 Transcription Available


News of the Bogus:

0:43 – On the unfortunate need for an “age verification” API for legal compliance reasons in some U.S. states
https://lists.ubuntu.com/archives/ubuntu-devel/2026-March/043510.html
https://lists.ubuntu.com/archives/ubuntu-devel/2026-March/043525.html
MidnightBSD Excludes California from Desktop Use Due to Digital Age Assurance Act
https://ostechnix.com/midnightbsd-excludes-california-digital-age-assurance-act/

10:46 – Anthropic's AI tool Claude central to U.S. campaign in Iran, amid a bitter feud
https://www.yahoo.com/news/articles/anthropic-ai-tool-claude-central-113547604.html
Statement from Dario Amodei on our discussions with the Department of War
https://www.anthropic.com/news/statement-department-of-war
Sam Altman on X: https://x.com/sama/status/2028640354912923739
Altman said no to military AI – then signed Pentagon deal
https://www.theregister.com/2026/03/06/openai_dod_deal/

17:30 – Jireh Lim pokes fun at copyright notice he received for singing own song ‘Buko’
https://www.msn.com/en-ph/news/other/jireh-lim-pokes-fun-at-copyright-notice-he-received-for-singing-own-song-buko/ar-AA1WXbs6

21:25 – Biggest Bogon Emitter: Anti-AI activists
ToonHive on X: https://x.com/ToonHive/status/2028902702332203243

26:28 – Idiot Extraordinaire: US Postal Service
The Postal Service Is Running Out of Money Again. Imagine That.
https://pjmedia.com/david-manney/2026/03/05/the-postal-service-is-running-out-of-money-again-imagine-that-n4950307

This Week's Quote: “The present expensive, dilatory and exclusive system of mails is a great national nuisance—commercially, morally, and socially. Its immense patronage and power, used, as they always will be, corruptly, make it also a very great political evil.” —Lysander Spooner

🔊Pᴏᴅᴄᴀꜱᴛ: https://podcast.bogosity.tv/
💬Dɪꜱᴄᴏʀᴅ: https://discord.bogosity.tv/
▶️YᴏᴜTᴜʙᴇ: https://www.youtube.com/shanedk
▶️Oᴅʏsᴇᴇ: https://odysee.com/%24/invite/@shanedk:4
▶️Rᴜᴍʙʟᴇ: https://rumble.com/c/shanedk
💰Dᴏɴᴀᴛᴇ ᴏʀ ꜱᴜʙꜱᴄʀɪʙᴇ: https://donate.bogosity.tv

    Keen On Democracy
    No AI Good Guys? Andrew & Keith Ask If Altman Amodei, & Hegseth Have All Failed the Leadership Test

    Keen On Democracy

    Play Episode Listen Later Mar 7, 2026 43:37


“They're both naughty boys in the playground, leveraging the absence of clarity to their own advantage. Neither one of them is an authoritative leader of opinion with the interests of everyone at heart.” — Keith Teare

What a difference a week makes. Last Saturday, Keith Teare was arguing that Anthropic was wrong to push back against the US government's use of AI in warfare. This week his editorial is entitled “No Good Guys.” He's used AI to put images of Sam Altman, Dario Amodei, and Pete Hegseth around the same table—and found all three guilty of poor leadership. According to Keith, Amodei is “ideologically” (whatever that means) driven, Altman is commercially driven, and Hegseth is just following orders. None of them is asking the all-important questions about AI policy. And the man who should be—Trump's AI czar David Sacks—is absent without leave. All four should be court martialed.

Yes, a lot has happened in seven days. Altman publicly supported Amodei's position on surveillance and autonomous weapons—then pulled a classic Sam u-turn and signed a contract with the Department of War. Amodei's internal memo was leaked to The Information, revealing that he'd interpreted the government's “no unlawful use” language as meaning there is no law. And the US military used Claude in the Iran war anyway. As Keith puts it: they're all naughty boys in the playground, leveraging the gaps to their own self-advantage.

The only problem, of course, is that this isn't a playground game. And that these men are all shaping the lives (and deaths) of countless people around the world.

Meanwhile, Om Malik's “Post of the Week” offers a devastating contrast between Xi's China and Trump's America. China, Om argues, has published a five-year AI plan built on open-source software and bottom-up adoption. America, in contrast, has AI theater. No strategy, no policy, no leadership—just contracts, leaks, and perpetual spin.

Then there's the Startup of the Week, Jobright, which hit $5 million in annual revenue with nine people, suggesting that the companies of the future may not need humans at all. Keith's own SignalRank has four people and claims to be going public. We seem to be heading for post-human companies before we've figured out who's managing the humans.

Maybe we should court martial everyone. What a difference a week makes.

Five Takeaways
• No Good Guys: Keith Teare's editorial puts Sam Altman, Dario Amodei, and Pete Hegseth in the same room—and finds all three guilty of bad leadership. Amodei is ideologically driven, Altman is commercially driven, and Hegseth is just doing his job. None of them is asking the big questions about AI policy. The real culprit may be the invisible AI czar, David Sacks.
• Altman Said One Thing, Then Did Another: Last week Altman publicly supported Amodei's position on surveillance and autonomous weapons. This week he signed a contract with the Department of War. The contract uses “no unlawful use” language—which, as Amodei's leaked memo points out, effectively means there is no law.
• The US Used Claude in Iran Anyway: Despite the very public dispute between Anthropic and the government, the US military used Claude in the Iran operation. The government doesn't need your permission to use your product. It just needs an API key and a credit card.
• China Has a Plan. America Has Theater: Om Malik's “Post of the Week” contrasts China's published five-year AI strategy—built on open-source software and bottom-up adoption—with America's complete absence of AI policy. The Chinese approach is more inclusive and practical than anything coming out of Washington or Silicon Valley.
• The Future Company Has Nine Employees: Startup of the Week Jobright hit $5 million in annual recurring revenue with just nine people. Keith's own company, SignalRank, has four people and is going public. The implication: the companies of the future will be run mostly by software agents, not humans. We're heading for post-human companies.

About the Guest
Keith Teare is the publisher of That Was The Week, founder and CEO of SignalRank, and a recurring sparring partner on Keen On America. A serial entrepreneur and investor, he is the co-founder of TechCrunch and RealNames. He joins the show every Saturday for the weekly tech roundup.

References
Essays, posts, and interviews referenced:
• Keith Teare, “No Good Guys” — That Was The Week editorial
• Om Malik, “The Great AI Game versus AI Theater” — Post of the Week
• Ross Douthat, “If AI Is a Weapon, Who Should Control It?” — New York Times
• Ben Thompson, Stratechery — on “no unlawful use” and the absence of international law
• Paul Krugman on the economics of technological change — technology, jobs, wages, and monopolies
• Tim O'Reilly, “How We Bet Against the Bitter Lesson” — skills and the future knowledge economy
• Yascha Mounk and Danielle Allen on participatory democracy and AI governance
• Previous Keen On episodes: Tom Wells on the Kissinger tapes; Michael Ellsberg on Daniel Ellsberg and the Pentagon Papers
• Startup of the Week: Jobright — $5M ARR with nine employees

About Keen On America
Nobody asks more awkward questions than the Anglo-American writer and filmmaker Andrew Keen. In Keen On America, Andrew brings his pointed Transatlantic wit to making sense of the United States—hosting daily interviews about the history and future of this now venerable Republic. With nearly 2,800 episodes since the show launched on TechCrunch in 2010, Keen On America is the most prolific intellectual interview show in the history of podcasting.

Website | Substack | YouTube | Apple Podcasts | Spotify

Chapters:
(00:00) - Introduction: What a difference a week makes
(01:14) - “No Good Guys”: Keith's editorial and Om Malik's wake-up call
(02:30) - Amodei, Altman, Hegseth: three self-interested players
(04:02) - How the Iran invasion changed the AI debate
(05:28) - “No unlawful use”: a meaningless phrase in a lawless context
(06:50) - The US used Claude in Iran despite the Anthropic dispute
(08:15) - Naughty boys in the playground: spinning vs. leadership
(09:31) - Bobby Kenn...

    Hashtag Trending
    Project Synapse: From Anthropic to Robotics

    Hashtag Trending

    Play Episode Listen Later Mar 7, 2026 74:05


The hosts of Project Synapse discuss how people and companies often claim to value privacy, security, and human-made content while behaving otherwise, then cover major AI news including the US Department of Defense labeling Anthropic a supply chain risk tied to its positions on autonomous weapons and surveillance, and the fallout including the QuitGPT boycott claims and criticism of Sam Altman's response. They examine Claude 4.6 with Cowork and ChatGPT 5.4, emphasizing deeper Office/Gmail integration, larger context windows, and data analytics that could transform corporate data work and accelerate job replacement, while token costs rise and stolen API keys create urgent financial risk. They also warn about the "death of privacy" via profiling and potential anti-anonymity laws, and explore robotics trends, costs, factory adoption, healthcare use cases, and growing investment in humanoid robots from firms like Figure, Tesla, Boston Dynamics, and Unitree.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Sponsor Message
00:18 People Say They Care
01:23 Cybersecurity Reality Check
02:46 Show Intro and Robots
03:35 US Targets Anthropic
09:20 Altman Optics and Boycott
16:52 Anthropic vs OpenAI Safety
21:27 Office Agents Replace Jobs
26:06 Cowork Hands On Debate
35:02 Token Costs and API Keys
38:37 AI Wallet Safety Limits
39:55 Hardware Shortages From AI
42:25 Cloud Control Conspiracy
44:00 Data Brokers Kill Privacy
46:09 AI Builds A Copy Of You
48:26 Embodied AI And Robots
51:17 Humanoids In Factories
01:00:07 Why Humanoids Aren't Everywhere
01:02:06 Robots In Healthcare And Homes
01:06:28 Cheap Humanoids And Companions
01:11:52 Robotics Boom And Wrap Up
01:13:21 Sponsor Message And Sign Off
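The "Token Costs and API Keys" worry in the episode is ultimately arithmetic: per-token prices look tiny until an always-on agent, or a thief with a stolen key, multiplies them. A back-of-the-envelope sketch, using hypothetical placeholder prices rather than any vendor's actual rates:

```python
# Rough token-cost math. Prices below are hypothetical placeholders,
# not any vendor's real pricing.

PRICE_PER_M_INPUT = 3.00    # assumed $ per 1M input tokens
PRICE_PER_M_OUTPUT = 15.00  # assumed $ per 1M output tokens

def monthly_cost(requests_per_day: int, in_tokens: int,
                 out_tokens: int, days: int = 30) -> float:
    """Estimate a month of API spend for a steady workload."""
    total_in = requests_per_day * in_tokens * days
    total_out = requests_per_day * out_tokens * days
    return (total_in / 1e6) * PRICE_PER_M_INPUT + \
           (total_out / 1e6) * PRICE_PER_M_OUTPUT

# An always-on agent with a large context adds up fast:
print(f"${monthly_cost(2_000, 8_000, 1_000):,.2f}/month")
# A stolen key running 50x that volume becomes a six-figure monthly bill:
print(f"${monthly_cost(100_000, 8_000, 1_000):,.2f}/month")
```

The same math explains why spend alerts and per-key rate limits are a financial control, not just a security nicety.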

    Circles Off - Sports Betting Podcast
    Exposing The Worst Betting Advice We Found On Twitter | Presented by Kalshi

    Circles Off - Sports Betting Podcast

    Play Episode Listen Later Mar 6, 2026 59:48


    This week on Circle Back, we break down some of the worst sports betting advice currently circulating on Gambling Twitter. The crew reacts to a viral strategy suggesting bettors should blindly fade steam moves in low-liquidity mid-major college basketball games, and explains why sportsbooks adjusting to sharp action does NOT suddenly create value on the other side. We also discuss bankroll management advice encouraging bettors to constantly withdraw profits instead of allowing their bankroll and unit size to scale properly, and debate a controversial take dismissing the importance of closing line value (CLV) as a core indicator of long-term betting success. Plus, we round up some of the latest viral moments, arguments, and drama from Gambling Twitter this week. Circle Back is hosted by Jacob Gramegna and is part of Circles Off on The Hammer Betting Network. This episode features Joey Knish alongside Porter from BA Analytics and Chinamaniac as the panel breaks down the latest conversations, debates, and controversies happening across Gambling Twitter.

    The MongoDB Podcast
    Don't Build Your Own AI (Unless You Have To)

    The MongoDB Podcast

    Play Episode Listen Later Mar 6, 2026 52:41


Are you trying to figure out if your team should build an AI model from scratch or integrate an off-the-shelf solution? You aren't alone.

In this episode of the MongoDB Podcast, Shane McAlister sits down with Akshaya Murthy, Director of AI Transformation at Zendesk, to decode the maze of building enterprise AI products. They dive into why integrating is often the winning move for speed-to-market, the hidden costs of custom models, and why bad data will break even the most perfect transformer model.

What you'll learn in this episode:
The Build vs. Buy Calculus: Why lower Total Cost of Ownership (TCO) and rapid deployment favor integration for most enterprises.
Spotting "AI Washing": How to avoid vendor buzzword salads and focus on actual problem-solving and ROI.
Architectural Must-Haves: Why your AI stack needs modular API layers, model hot-swapping, and CI/CD pipelines just like your standard code.
The "Garbage In, Hype Out" Rule: Why a solid data strategy and a centralized single source of truth are non-negotiable.

Ready to stop experimenting and start delivering real AI value? Tune in now.
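The "modular API layers" and "model hot-swapping" points can be illustrated with a thin provider interface: application code depends on the layer, never on a vendor SDK, so changing models requires no caller changes. The class names below are hypothetical illustrations, not anything from the episode:

```python
from abc import ABC, abstractmethod

# Sketch of a modular API layer that allows model hot-swapping.
# Vendor classes are stubs; a real one would wrap an actual SDK call.

class ModelProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] answer to: {prompt}"  # real SDK call goes here

class VendorBProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] answer to: {prompt}"

class AILayer:
    """Application code depends on this layer, never on a vendor SDK."""
    def __init__(self, provider: ModelProvider):
        self.provider = provider

    def swap(self, provider: ModelProvider) -> None:
        self.provider = provider  # hot-swap: no changes to calling code

    def answer(self, prompt: str) -> str:
        return self.provider.complete(prompt)

layer = AILayer(VendorAProvider())
print(layer.answer("summarize this ticket"))
layer.swap(VendorBProvider())   # e.g. pricing or quality changed overnight
print(layer.answer("summarize this ticket"))
```

The same seam is where you would hang the CI/CD checks the episode mentions: a pipeline can run evals against every provider behind the interface before a swap goes live.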

    Hacker Valley Studio
    Can AI Do Your Cyber Job? Post Your Job Req and Find Out with Marcus J. Carey

    Hacker Valley Studio

    Play Episode Listen Later Mar 6, 2026 38:49


Last episode, Ron and Marcus made predictions. This episode, they brought the receipts.

A journalist built an app with vibe coding and got hacked on live television. A social network built entirely by AI (not a single line of human code!) exposed 1.5 million authentication tokens and private messages between agents. And 88% of organizations have already had an AI security incident, while barely 14% of deployed agents ever saw a security review. The warnings from last episode aged fast.

Marcus J. Carey is back to talk about what that actually means for the people building right now, not the people theorizing about it. Ron and Marcus are in the code themselves, and this conversation is what that experience actually looks like: OpenClaw running loose on your machine, agents racking up API bills, and why guidance, not prompts, not tools, is the real skill that separates builders who thrive from builders who ship disasters.

Impactful Moments
00:00 - Introduction
02:00 - Vibe coding hack on live TV
03:30 - Moltbook leaks 1.5M auth tokens
06:00 - Marcus' origin story: War Games, 1983
08:00 - OpenClaw escapes the lab
13:30 - AT&T cuts help desk spend 90%
17:00 - Context is king, guidance is everything
19:00 - Can AI do your job req right now?
24:00 - The first cybersecurity jobs agents will replace
27:00 - Expertise + AI = 1000x yourself
30:00 - Focus on outcomes, not new tools

Links
Connect with our guest, Marcus J. Carey, on LinkedIn: https://www.linkedin.com/in/marcuscarey/

Read the articles we referenced in this episode:
The vibe coding hack that aired on live TV; ICAEW breaks down exactly how it happened and what it means for anyone building with AI: https://www.icaew.com/insights/viewpoints-on-the-news/2026/feb-2026/cyber-dangers-of-agents-and-vibe-coding
88% of organizations have already had an AI security incident. See the full data from the Cisco State of AI Security 2026 report: https://www.helpnetsecurity.com/2026/02/23/ai-agent-security-risks-enterprise/

Check out our upcoming events: https://www.hackervalley.com/livestreams
Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com
Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/

    DataTalks.Club
    The Future of AI Agents - Aditya Gautam

    DataTalks.Club

    Play Episode Listen Later Mar 6, 2026 68:39


In this talk, Aditya, an experienced AI researcher and engineer, shares his technical evolution—from his roots in embedded systems to building complex, large-scale AI agent architectures. We explore the practical challenges of enterprise AI adoption, the shifting economics of LLMs, and the infrastructure required to deploy reliable multi-agent systems.

You'll learn about:
- The ROI of Fine-Tuning: How to decide between specialized small models and general-purpose APIs based on cost and latency.
- Agent MLOps Stack: The essential roles of guardrails, data lineage, and auditability in AI workflows.
- Reliability in High-Stakes Verticals: Navigating the unique AI deployment challenges in the legal and healthcare sectors.
- Evaluation Frameworks: How to design robust evals for multi-tenancy systems at scale.
- Human-in-the-Loop: Strategies for aligning "LLM as a judge" with human-labeled ground truth to eliminate bias.
- The Future of AGI: What to expect from the next wave of multimodal agents and autonomous systems.

TIMECODES:
00:00 Aditya: from embedded systems to AI
08:52 Enterprise AI research and adoption gaps
13:13 AI reliability in legal and healthcare
19:16 Specialized models and agent governance
24:58 LLM economics: Fine-tuning vs. API ROI
30:26 Agent MLOps: Guardrails and data lineage
36:55 Iterating on agents with user feedback
43:30 AI evals for multi-tenancy and scale
50:18 Aligning LLM judges with human labels
56:40 Agent infrastructure and deployment risks
1:02:35 Future of AGI and multimodal agents

This talk is designed for Machine Learning Engineers, Data Scientists, and Technical Product Managers who are moving beyond AI prototypes and into production-grade agentic workflows. It is especially relevant for those working in regulated industries or managing high-volume API budgets.

Connect with Aditya:
- LinkedIn - https://www.linkedin.com/in/aditya-gautam-68233a30/

Connect with DataTalks.Club:
- Join the community - https://datatalks.club/slack.html
- Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
- Check other upcoming events - https://lu.ma/dtc-events
- GitHub: https://github.com/DataTalksClub
- LinkedIn - https://www.linkedin.com/company/datatalks-club/
- Twitter - https://twitter.com/DataTalksClub
- Website - https://datatalks.club/
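One concrete way to do the "aligning LLM judges with human labels" step the talk describes is to measure agreement against human-labeled ground truth before trusting the judge in an eval pipeline, for example with raw agreement plus Cohen's kappa to correct for chance. The labels below are made-up illustration data:

```python
# Sketch: validate an "LLM as a judge" against human ground truth.
# Labels here are invented for illustration.

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Chance-corrected agreement between two categorical raters."""
    assert len(a) == len(b) and a
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    # Expected agreement if both raters labeled independently at random
    # according to their own label frequencies.
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

human = ["pass", "pass", "fail", "pass", "fail", "fail", "pass", "fail"]
judge = ["pass", "pass", "fail", "fail", "fail", "fail", "pass", "pass"]

agreement = sum(h == j for h, j in zip(human, judge)) / len(human)
print(f"raw agreement: {agreement:.2f}")           # 0.75
print(f"cohen's kappa: {cohens_kappa(human, judge):.2f}")  # 0.50
```

Raw agreement alone can look flattering when one label dominates; kappa is the usual guard against that, which is why it is a reasonable first check before wiring a judge into a multi-tenant eval harness.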

    Podland News
    Triton Digital's Sharon Taylor, on what Apple Podcasts HLS Video Really Changes

    Podland News

    Play Episode Listen Later Mar 6, 2026 99:27 Transcription Available


Apple's HLS video support, Triton's roadmap, and the real cost of video collide with questions about measurement, privacy, and control. We chat with Sharon Taylor from Triton Digital.

And, with Kattie Laur, we also explore Canada's podcast identity, the CBC effect, and why discovery and funding—not mandates—unlock local growth.

• Why Triton added video without giving up control
• Apple's HLS model, dynamic ads, and hosting costs
• Spotify's API path vs open RSS monetization
• Limits on first-party data and privacy choices
• How premium feeds and secure distribution fit the mix
• Canada's discovery gap and funding bottleneck
• CBC's high bar and the impact on independents
• Podcasting overtakes spoken-word radio in the US
• Ad spend trends pointing to podcast growth
• New tools, AI summaries, and workflow upgrades

Start podcasting, keep podcasting with Buzzsprout.com
Send James & Sam a message
Support the show

Connect With Us:
Email: weekly@podnews.net
Fediverse: @james@bne.social and @samsethi@podcastindex.social
Support us: www.buzzsprout.com/1538779/support
Get Podnews: podnews.net

    Everyday AI Podcast – An AI and ChatGPT Podcast
    Ep 727: 7 Huge AI Feature Updates You Likely Missed: From AI Video and Gmail to Agents

    Everyday AI Podcast – An AI and ChatGPT Podcast

    Play Episode Listen Later Mar 5, 2026 32:35


    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

    The reception to our recent post on Code Reviews has been strong. Catch up!Amid a maelstrom of discussion on whether or not AI is killing SaaS, one of the top publicly listed SaaS companies in the world has just reported record revenues, clearing well over $1.1B in ARR for the first time with a 28% margin. As we comment on the pod, Aaron Levie is the rare public company CEO equally at home in both worlds of Silicon Valley and Wall Street/Main Street, by day helping 70% of the Fortune 500 with their Enterprise Advanced Suite, and yet by night is often found in the basements of early startups and tweeting viral insights about the future of agents.Now that both Cursor, Cloudflare, Perplexity, Anthropic and more have made Filesystems and Sandboxes and various forms of “Just Give the Agent a Box” cool (not just cool; it is now one of the single hottest areas in AI infrastructure growing 100% MoM), we find it a delightfully appropriate time to do the episode with the OG CEO who has been giving humans and computers Boxes since he was a college dropout pitching VCs at a Michael Arrington house party.Enjoy our special pod, with fan favorite returning guest/guest cohost Jeff Huber!Note: We didn't directly discuss the AI vs SaaS debate - Aaron has done many, many, many other podcasts on that, and you should read his definitive essay on it. 
Most commentators do not understand SaaS businesses because they have never scaled one themselves, and deeply reflected on what the true value proposition of SaaS is.We also discuss Your Company is a Filesystem:We also shoutout CTO Ben Kus' and the AI team, who talked about the technical architecture and will return for AIE WF 2026.Full Video EpisodeTimestamps* 00:00 Adapting Work for Agents* 01:29 Why Every Agent Needs a Box* 04:38 Agent Governance and Identity* 11:28 Why Coding Agents Took Off First* 21:42 Context Engineering and Search Limits* 31:29 Inside Agent Evals* 33:23 Industries and Datasets* 35:22 Building the Agent Team* 38:50 Read Write Agent Workflows* 41:54 Docs Graphs and Founder Mode* 55:38 Token FOMO Culture* 56:31 Production Function Secrets* 01:01:08 Film Roots to Box* 01:03:38 AI Future of Movies* 01:06:47 Media DevRel and EngineeringTranscriptAdapting Work for AgentsAaron Levie: Like you don't write code, you talk to an agent and it goes and does it for you, and you may be at best review it. That's even probably like, like largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work.We basically adapted to how the agent works. All of the economy has to go through that exact same evolution. Right now, it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this ‘cause you'll see compounding returns. But that's just gonna take a while for most companies to actually go and get this deployed.swyx: Welcome to the Lane Space Pod. We're back in the chroma studio with uh, chroma, CEO, Jeff Hoover. Welcome returning guest now guest host.Aaron Levie: It's a pleasure. Wow. How'd you get upgraded to, uh, to that?swyx: Because he's like the perfect guy to be guest those for you.Aaron Levie: That makes sense actually, for We love context. 
We, we both really love context le we really do.We really do.swyx: Uh, and we're here with, uh, Aaron Levy. Welcome.Aaron Levie: Thank you. Good to, uh, good to be [00:01:00] here.swyx: Uh, yeah. So we've all met offline and like chatted a little bit, but like, it's always nice to get these things in person and conversation. Yeah. You just started off with so much energy. You're, you're super excited about agents.I loveAaron Levie: agents.swyx: Yeah. Open claw. Just got by, got bought by OpenAI. No, not bought, but you know, you know what I mean?Aaron Levie: Some, some, you know, acquihire. Executiveswyx: hire.Aaron Levie: Executive hire. Okay. Executive hire. Say,swyx: hey, that's my term. Okay. Um, what are you pounding the table on on agents? You have so many insightful tweets.Why Every Agent Needs a BoxAaron Levie: Well, the thing that, that we get super excited by that I think is probably, you know, should be relatively obvious is we've, we've built a platform to help enterprises manage their files and their, their corporate files and the permissions of who has access to those files and the sharing collaboration of those files.All of those files contain really, really important information for the enterprise. It might have your contracts, it might have your research materials, it might have marketing information, it might have your memos. All that data obviously has, you know, predominantly been used by humans. [00:02:00] But there's been one really interesting problem, which is that, you know, humans only really work with their files during an active engagement with them, and they kind of go away and you don't really see them for a long time.And all of a sudden, uh, with the power of AI and AI agents, all of that data becomes extremely relevant as this ongoing source of, of answers to new questions of data that will transform into, into something else that, that produces value in your organization. 
It, it contains the answer to the new employee that's onboarding, that needs to ramp up on a project.Um, it contains the answer to the right thing to sell a customer when you're having a conversation to them, with them contains the roadmap information that's gonna produce the next feature. So all that data. That previously we've been just sort of storing and, and you know, occasionally forgetting about, ‘cause we're only working on the new active stuff.All of that information becomes valuable to the enterprise and it's gonna become extremely valuable to end users because now they can have agents go find what they're looking for and produce new, new [00:03:00] value and new data on that information. And it's gonna become incredibly valuable to agents because agents can roam around and do a bunch of work and they're gonna need access to that data as well.And um, and you know, sometimes that will be an agent that is sort of working on behalf of, of, of you and, and effectively as you as and, and they are kind of accessing all of the same information that you have access to and, and operating as you in the system. And then sometimes there's gonna be agents that are just.Effectively autonomous and kind of run on their own and, and you're gonna collaborate and work with them kind of like you did another person. Open Claw being the most recent and maybe first real sort of, you know, kind of, you know, up updating everybody's, you know, views of this landscape version of, of what that could look like, which is, okay, I have an agent.It's on its own system, it's on its own computer, it has access to its own tools. I probably don't give it access to my entire life. I probably communicate with it like I would an assistant or a colleague and then it, it sort of has this sandbox environment. 
So all of that has massive implications for a platform that manages that [00:04:00] enterprise data. We think it's going to transform how we work with all of the enterprise content we work with, and we just have to make sure we're building the right platform to support that.

swyx: The shorthand I put on it is: as people build agents, everybody's realizing that every agent needs a box. And it's nice to be called Box and just give everyone a box.

Aaron Levie: Hey, if we can make that go viral, I think that terminology...

swyx: That's the tagline. Every agent...

Aaron Levie: ...needs a box. Every agent needs a box. If we can make that the headline of this, I'm fine with it. And that's the billboard I want. Yeah, exactly. Every agent needs a box. Um, I like it. Can we ship this?

swyx: Okay, let's do it. Yeah.

Aaron Levie: Uh, my work here is done, and I got the value I needed out of this podcast. Drinks.

swyx: Yeah.

Agent Governance and Identity

Aaron Levie: But, um, the thing that we think about is: whether you think the number is 10x or 100x or whatever it is, we're going to have some order of magnitude more agents than people. That's inevitable. It has to happen. So then the question is, what is the infrastructure needed to make all those agents effective in the enterprise? Make sure that they are well governed. Make sure they're only doing [00:05:00] safe things with your information. Make sure they're not exposed to data they shouldn't have access to. There are going to be spectacularly crazy security incidents with agents, because you'll prompt-inject an agent, sort of find your way through the CRM system, and pull out data that you shouldn't have access to.

Jeff Huber: Oh, God.

Aaron Levie: Right?
I mean, that's just going to happen all over the place, right? So then the thing is, how do you make sure you have the right security, the permissions, the access controls, the data governance? We actually don't yet know exactly how we're going to regulate some of these agents in many cases. If you think about an agent in financial services, does it have the exact same requirements that a human did? Or is the risk fully on the human that interacted with or created the agent? All open questions. But no matter what, there's going to need to be a layer that manages the data they have access to, the workflows they're involved in, pulling up data from multiple systems. This is the new infrastructure opportunity in the era of agents.

swyx: You have a piece on agent identities, [00:06:00] which I think came out today, which a lot of the security people are talking about. I always think of this as: you need the human you, and then you need the agent you.

Aaron Levie: Yes.

swyx: Well, I don't know if it's that simple, but is Box going to have an opinion on that? Or are you just going to be the source layer and let the Oktas and Auth0s handle it?

Aaron Levie: I think we're going to have an opinion, and we will work with wherever the contours of the market end up. And the reason we're going to have an opinion, more than on other topics, is that one of the biggest use cases for why your agent might need an identity is file system access. So we have to think about this pretty deeply. And unless you're in our world, thinking about this particular problem all day long, it might be, you know, why is this such a big deal?
And the reason it's a really big deal is that sometimes people say, well, just give the agent an account on the system and treat it like every other type of user. The [00:07:00] problem is that I, as Aaron, don't have any responsibility over anybody else's Box account in our organization. I can't see the Box account of any other employee I work with. I am not liable for anything that they do. And they have strict privacy protections on everything they work on. Agents don't have those properties. The person who creates the agent is probably, for the foreseeable future, going to take on a lot of the liability for what that agent does. And that agent doesn't deserve any privacy, because it can't fully operate autonomously and it doesn't have any legal responsibility.

So you can't just say, oh, well, I'll create a bunch of accounts and then I'll work with that agent and talk to it occasionally. You need oversight of that. And then the question is, how do you have a world where sometimes you have oversight of the agent, but what if that agent goes and works with other people? That person over there is collaborating with the agent on something you shouldn't have [00:08:00] access to. So we have all of these new boundaries we're going to have to figure out. So far we've been in easy mode. We've hit the easy button with AI, which is: the agent just is you. When you're in Claude Code, and you're in Cursor, and you're in Codex, the agent is you. You're authing into your services. It can do everything you can do. That's the easy mode. The hard mode is agents running on their own.
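A toy sketch of the "hard mode" account Levie describes, one that, unlike a human account, gets no privacy and keeps its creator accountable, might look like this. All names and the shape of the grant are invented for illustration; this is not Box's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentAccount:
    """Unlike a human account, an agent account is fully auditable by its
    creator: every access attempt is logged, because the creator carries
    the liability for what the agent does."""
    creator: str
    allowed: set                       # resources the creator explicitly delegated
    log: list = field(default_factory=list)

    def read(self, resource: str) -> bool:
        ok = resource in self.allowed
        # No privacy for the agent: grants AND denials are recorded for the creator.
        self.log.append((resource, "granted" if ok else "denied"))
        return ok

agent = AgentAccount(creator="aaron", allowed={"roadmap.md"})
agent.read("roadmap.md")       # within the delegated subset
agent.read("salaries.xlsx")    # denied, but the attempt stays visible to the creator
```

The point of the sketch is the audit trail: the creator can replay everything the agent tried, which the privacy rules around a human colleague's account would forbid.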
People check in with them occasionally; they're doing things autonomously. How do you give them access to resources in the enterprise without dramatically increasing the security risk, and the risk that you might expose the wrong thing to somebody? These are the new problems we have to get solved. I like the identity layer, and identity vendors, as a solution to that, but we'll need some opinions as well, because so many of the use cases are these collaborative file system use cases: how do I give an agent a subset of my data, and give it its own workspace too, because it's going to need to store its own information, and how do I have the right oversight into that? [00:09:00]

Jeff Huber: One thing which I think is kind of interesting to think about is how humans work, right? I may not just give you access to the whole file. I might sit next to you, scroll to one part of the file, and just show you that one part.

swyx: Partial file access.

Jeff Huber: I'm just saying, I think RAG does seem to be dead, right? If you want to say something is dead, probably RAG is dead. And the auth story to me seems incredibly unsolved and unaddressed by the existing state of AI vendors.

Aaron Levie: Yeah. I mean, you're taking it to a limit that we probably do need to solve for. We built an access control system that was kind of its own little world for a long time. And the idea was this: it's a many-to-many collaboration system where I can give you any part of the file system. And it's a waterfall model, so if I give you something higher up in the system, you get everything below it.
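The waterfall model Levie sketches, where a grant high in the tree flows down to the whole subtree beneath it, fits in a few lines. The folder paths here are hypothetical:

```python
def can_access(grants: set, path: str) -> bool:
    """Waterfall model: a grant on any ancestor folder covers everything below it."""
    parts = path.split("/")
    # Every prefix of the path, e.g. "sales", "sales/emea", "sales/emea/q3", ...
    ancestors = {"/".join(parts[:i]) for i in range(1, len(parts) + 1)}
    return bool(grants & ancestors)

grants = {"sales/emea"}  # one folder shared with the agent
print(can_access(grants, "sales/emea/q3/pipeline.xlsx"))  # True: inherited from above
print(can_access(grants, "sales/na/q3/pipeline.xlsx"))    # False: sibling tree stays hidden
```

The flexibility Levie mentions falls out of the prefix check: pointing someone at any layer of the tree implicitly grants the whole subtree, which is exactly what makes the "agent with slices of two people's trees" case tricky.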
And that created immense flexibility, because I can point you to any layer in the tree, but then you're going to get access to everything below it. And that [00:10:00] mostly works in this world. But you do have to manage this issue, which is: how do I create an agent that has access to some of my stuff and somebody else's stuff as well?

swyx: Mm-hmm.

Aaron Levie: And which parts do I get to look at as the creator of the agent? These are just brand new problems. When there was a human there, this was really easy. If the three of us were all sharing, there'd be a Venn diagram where we'd have an overlapping set of things we've shared, but then we'd have our own things we shared with each other. In an agent world, somebody needs to take responsibility for what that agent has access to and what it's working on. These are probably some of the most boring problems for 98% of people on the internet, but they will be the difference between whether you can actually have autonomous agents in an enterprise context...

swyx: Yeah.

Aaron Levie: ...that are not leaking your data constantly.

swyx: I mean, I run a very, very small company for my conference, and we already have data sensitivity issues.

Aaron Levie: Yes.

swyx: Some of my team members cannot see what the others see, and I can't imagine what it's like to run a Fortune 500, where you have to [00:11:00] worry about this. I'm just kind of curious: you talk to a lot of them, like 70, 80% of the Fortune 500 are your customers.

Aaron Levie: Yep. 67%. Just so we're being very...

swyx: ...precise. So, yeah. I'm not...

Aaron Levie: Okay. Okay.

swyx: Something I'm rounding up. Yes. Round up.
swyx: I'm projecting to, for...

Aaron Levie: ...the government.

swyx: I'm projecting to the end of the year.

Aaron Levie: Okay.

swyx: There you go.

Aaron Levie: You do make it sound like we've got to be on this, like we're taking way too long to get to 80%.

swyx: Well, no. I mean, how are they approaching it? Because you don't have a final answer yet.

Why Coding Agents Took Off First

Aaron Levie: Well, okay, so this is the stark reality that, unfortunately, pours a little water on the party.

swyx: Yes.

Aaron Levie: We in Silicon Valley have the absolute best conditions possible for AI, ever. And I think we all saw the Dwarkesh podcast with Dario, and this idea of AI coding: why has that taken off, and why are we not yet fully seeing it everywhere else? Well, look, if you just enumerated the list of properties that AI coding has and compared it to other [00:12:00] knowledge work, let's go through a few of them. Generally speaking, you bring on a new engineer, and they have access to a large swath of the code base. A new engineer comes on and can just go find the stuff they need to work with. It's a fully text-in, text-out medium; it's just going to be text at the end of the day, which is really great in terms of what the agent can work with. Obviously the models are super trained on that dataset. And the labs themselves have a really strong, self-reinforcing, positive flywheel for why they need to do agentic coding deeply.
So then you get better tooling, better services. The actual developers of the AI are daily users of the thing they're working on. Versus, there are probably only seven Claude Cowork legal plugin users at Anthropic on any given day, but there are a couple thousand Claude Code users every single day. So think about which one they're getting more feedback on, [00:13:00] all day long. You just go through this list. Everybody who's a developer is by definition technical, so they can go install the latest thing. We're all generally online, or at least the weird ones are, and we're all talking to each other, sharing best practices. That's already eight differences versus the rest of the economy.

Every other part of the economy has six to seven headwinds relative to that list. You go into a company, you're a banker in financial services: you have access to a tiny subset of the total data that's relevant to doing your job. You have to go talk to a bunch of people to get the right data, because Sally didn't add you to that deal room folder, and the information is actually in a completely different organization that you now have to go run into. You have this endless list of access controls and security, as you talked about. You have a medium which is not just text: you have a Zoom call where you're getting all of the requirements from the customer, you have a lot of in-person conversations, you're doing in-person sales, and how do you ever [00:14:00] digitize all of that information?
Um, you know, I think a lot of people got upset with this idea that the code base has all the context. I don't know if you followed some of that conversation that went viral. It's not that simple; the code base doesn't have all the knowledge. But you're a lot better off than you are in other areas of knowledge work. We have documentation practices, you write specifications. Those things don't exist for 80% of the work that happens in the enterprise. That's the divide we have: AI coding has fully reached escape velocity in how powerful this stuff is, and now we have to find a way to bring that same energy and momentum to all these other areas of knowledge work, where the tools aren't there, the data's not set up, the access controls don't make it easy. The context engineering is an incredibly hard problem, because again you have access control challenges, you have different data formats, you have end users that are going to need to be trained through this, as opposed to adopting [00:15:00] these tools in their free time. That's where the Fortune 500 is. So I think we have to be prepared as an industry for a multi-year march to bring agents to the enterprise for these workflows.

And I think the thing we've learned most in coding that the rest of the world is not yet ready for, though it'll have to be, because it's just going to inevitably happen, is this: think about the practice of coding today versus two years ago. It's probably the most changed workflow in maybe the history of time, in terms of how much it's changed, right? Yeah.
Has any workflow in the entire economy changed that quickly? At least in knowledge work, there has very rarely been an event where one piece of technology and work practice has so fundamentally changed what you do. You don't write code; you talk to an agent and it goes and [00:16:00] does it for you, and at best you review it. And even that is probably largely not what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works.

swyx: Mm-hmm.

Aaron Levie: All of the economy has to go through that exact same evolution. The rest of the economy is going to have to update its workflows to make agents effective: give agents the context they need, figure out what kind of prompting works, figure out how to ensure the agent has the right access to information to execute its work. This is not the panacea people were hoping for, where the agent drops in and just automates your life. You have to basically re-engineer your workflow to get the most out of agents, and that's going to take multiple years across the economy. Right now it's a huge asset and advantage for the teams that do it early and are wired into doing this, [00:17:00] because you'll see compounding returns, but it's going to take a while for most companies to actually get this deployed.

swyx: I love pushing back. That is what a lot of technology consultants love to hear, right?

Aaron Levie: Yeah, yeah, yeah.

swyx: To be first to embrace the AI...

Aaron Levie: Yes.

swyx: ...to get to the promised land, you must pay me so much money...

Aaron Levie: A hundred percent.

swyx: ...to adopt the prescribed way of conforming to the agents.

Aaron Levie: Yes.
swyx: And I worry that you will be eclipsed by someone else who says, no, come as you are.

Aaron Levie: Yeah.

swyx: And we'll meet you where you are.

Aaron Levie: And what was the thing that went viral a week ago? OpenAI probably is hiring FDEs...

swyx: Yeah.

Aaron Levie: ...to go into the enterprise.

swyx: Yeah. Yeah.

Aaron Levie: And Anthropic is embedded at Goldman Sachs.

swyx: Yeah.

Aaron Levie: So if the labs are having to do this, if the labs have decided they need to hire forward-deployed engineers and professional services, then that's a pretty clear indication that there's no easy mode of workflow transformation. So, to your point, I think this is actually a market opportunity for new professional services and consulting [00:18:00] firms that go into organizations and figure out how to re-engineer your workflows to make them more agent-ready, get your data into the right format, and reconstruct your business processes, so that you're not doing most of the work: you're telling agents how to do the work, and then you're reviewing it. But I haven't seen the thing that can just drop in and let you skip those changes.

swyx: I don't know how that kind of sales pitch goes over. You're saying things like, well, in my nice beautiful walled garden, here's this beautiful Box account that has everything.

Aaron Levie: Yes.

swyx: And I'm like, well, most of real life is extremely messy.

Aaron Levie: Sure.

swyx: And poorly named, and there's duplicate, outdated s**t.

Aaron Levie: A hundred percent. No, a hundred percent. So we agree that getting to the beautiful garden is going to be tough.

swyx: Yeah.

Aaron Levie: There's also the other end of the spectrum, where it's just a technical impossibility to solve.
The agent truly cannot get enough context to make the right decision in the incredibly messy land. There's [00:19:00] no AGI that will solve that. So we're going to have to land somewhere in between, which is that we all collectively get better at documentation practices, at having authoritative, relatively up-to-date information and putting it in the right place. Agents will certainly cause us to be much better organized in how we work with our information, simply because the severity of the agent pulling the wrong data will be too high, and the productivity gain you'd miss out on by not doing this will be too high as well; your competition will just do it, and they'll have higher velocity.

And we see this a lot firsthand. We've built a series of agents internally that can have access to your full Box account; you give one a task and it can go find whatever information you're looking for and work with it. And thank God for the model progress, because if you gave that task to an agent [00:20:00] nine months ago, you'd just get lots of bogus answers. It would say, hey, here are five documents that all kind of smell like the right thing, but you're putting me on the clock, because my system prompt says be pretty smart but also try to respond to the user, and so it responds. And, ah, it got the wrong document. You do that once or twice as a knowledge worker and you're just never...

swyx: ...again.

Aaron Levie: Never again. You're just done with the system.

swyx: Yeah. It doesn't work.

Aaron Levie: It doesn't work.
And so, you know, Opus 4.6 and Gemini 3.1 Pro and whatever the latest GPT 5.3 will be, those things are getting better and better, and they're using better judgment. With all of these updates to the agentic tool and search systems, we're seeing very real progress, where the agent can almost smell when something's a little bit fishy. We have this process where we have it fan out, do a bunch of searches, pull up a bunch of data, and then it has to do its own ranking of which documents it should be working with. And the intelligence level of a model six months ago [00:21:00] was just throwing a dart: I'm going to grab these seven files, and I pray that that's the right answer. And something like Opus, first 4.5 and now 4.6, says: no, that one doesn't seem right relative to this question, because I'm seeing some signal contradicting where the document would normally be in the tree and who should have access. It's doing all of that kind of work for you. But it still doesn't work if you just have a total wasteland of data. It's just not possible, partly because a human wouldn't even be able to do it. So basically, if a really, really smart human could not do that search and retrieval task in five or ten minutes, your agent's not going to be able to do it any better. You see this all day long.

Context Engineering and Search Limits

swyx: So this touches on a thing Jeff is passionate about: context engineering. I'm just going to let you ramble or riff on context engineering.
If there's anything there... he did really good work on context rot, which has really taken over as the term that people use and reference.

Aaron Levie: A hundred percent. All we think about is the context rot problem. [00:22:00]

Jeff Huber: Yeah, there are certainly a lot of ranking considerations. Agentic search I think is incredibly promising. Um, yeah, I was trying to generate a question, though. I think I have a question right now, swyx.

Aaron Levie: Yeah, no, but I think there was this moment, like two years ago, before we knew where the gotchas were going to be in AI, when someone said, well, infinite context windows will just solve all of these problems, because you'll just give the context window all the data. And it's like, okay, maybe in 2035 this is a viable solution. First of all, it would simply cost too much. We just can't give the model the 5,000 documents that might be relevant and have it read them all. But I've seen enough to start believing in crazy stuff, so I'm willing to say, sure, ten years from now...

swyx: Never say never.

Aaron Levie: Ten years from now, we'll have infinite context windows at a thousandth of the price of today. Let's just believe that that's possible. But we're in reality today. So today we have a context engineering [00:23:00] problem, which is: I've got 200,000 tokens that I can work with, or, I don't even know what the latest graph shows before massive degradation...

swyx: 60.

Aaron Levie: Okay. I have 60,000 tokens that I get to work with where I'm going to get accurate information. That's not a lot of tokens for a corpus of 10 million documents that a knowledge worker might have across all of the teams and all the projects and all the people they work with.
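Levie's 60,000-token budget against a 10-million-document corpus works out roughly as follows. The five pages per document matches his own estimate; the ~500 tokens per page is an added assumption for the sketch:

```python
docs = 10_000_000
pages = docs * 5                    # ~5 pages per document, per Levie's estimate
tokens_per_page = 500               # rough assumption for business prose
corpus_tokens = pages * tokens_per_page
window = 60_000                     # usable tokens before quality degrades

print(f"{pages:,} pages, ~{corpus_tokens:,} tokens")        # 50,000,000 pages, ~25,000,000,000 tokens
print(f"window holds {window / corpus_tokens:.2e} of it")   # 2.40e-06: about two millionths
```

In other words, the working window can hold roughly a couple hundred pages out of 50 million, which is the bridging problem he describes next.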
I have 10 million documents, which is maybe times five pages per document or something like that. I'm at 50 million pages of information, and I have 60,000 tokens. Holy s**t.

swyx: Yeah.

Aaron Levie: How do I bridge the 50 million pages of information with the couple hundred that I get to work with in that token window? This is such an interesting problem, and that's why so much of the work is actually in the search systems and the databases; that layer has to get so locked in. But the models are getting better, and, importantly, they're getting better at knowing when they've done a search and found the wrong thing: they go back, they check their work, they find a way to balance appeasing the user versus double-checking. We have this one test case where we ask the agent to go find [00:24:00] ten pieces of information.

swyx: Is this the complex work eval?

Aaron Levie: This is actually not in the eval. We have a bunch of internal benchmark scenarios. Every time we update our agent, we run one where I ask it to find all of our office addresses, and I give it the list of the ten offices we have. And there's not one document that has this. Maybe there should be; that would be a great example of the kind of thing that maybe, over time, companies start to have: what are the canonical key areas of knowledge that we need? We don't seem to have one document that says, here are all of our offices. We have a bunch of documents that say, here's the New York office, and whatever. So you task this agent and you say, I need the addresses for these ten offices. Okay. And by the way, if you do this on any [00:25:00] public chat model, the same outcome is going to happen.
But for a different kind of query. You give it the list and say, I need these ten addresses. How many times should the agent do its search before deciding whether there's just no answer to this question? Often, especially with the lower-tier models, it'll come back and give you six of the ten addresses, and it'll just say, I couldn't find the other four.

swyx: It doesn't know what it doesn't know.

Aaron Levie: It doesn't know what it doesn't know. Yeah. So when should the model stop? Should it do that task for literally an hour and just keep cranking through? Maybe I actually made up an office location, and it doesn't know that I made it up; I didn't even know that I made it up. Should it just keep reading every single file in your entire Box account until it has exhausted every single piece of information?

swyx: Expensive.

Aaron Levie: These are the new problems we have. So something like a new Opus model is like: okay, I'm going to try these types of queries. I didn't get exactly what I wanted; I'm going to try again. At [00:26:00] some point I'm going to stop searching, because I've determined that no amount of searching is going to solve this problem; I'm just not able to do it. And that judgment is a really new thing the model needs to have: when should it give up on a task because it just can't find the thing? That's the real world of knowledge work problems. And this is the stuff the coding agents don't have to deal with.
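The give-up judgment Levie is describing, retry within a bounded budget and then report what is still missing rather than silently returning six of ten, can be sketched like this. The search function and the office list are toy stand-ins, not how Box's agent actually works:

```python
def find_all(targets, search, max_rounds=3):
    """Retry up to a fixed budget, then stop and report the gaps explicitly
    instead of pretending the task is done."""
    found, missing = {}, list(targets)
    for _ in range(max_rounds):
        still_missing = []
        for t in missing:
            hit = search(t)
            if hit is not None:
                found[t] = hit
            else:
                still_missing.append(t)
        missing = still_missing
        if not missing:
            break  # everything located; no point burning more budget
    return found, missing

# Toy corpus: one "office" has no document anywhere, like a made-up location.
corpus = {"New York": "123 Main St", "London": "1 Bridge Rd"}
found, missing = find_all(["New York", "London", "Atlantis"], corpus.get)
print(found)    # the two real offices
print(missing)  # ['Atlantis'] is surfaced as unfound, not dropped
```

The design point is the explicit `missing` list: "it doesn't know what it doesn't know" only bites when the gaps are swallowed rather than returned.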
Because you're not usually asking a coding agent about that; you're mostly creating net new information coming right out of the model. Obviously it has to know about your code base and your specs and your documentation, but when you deploy an agent on all of your data, you now have all of these new problems to deal with.

Jeff Huber: Our follow-up research to Context Rot is actually on agentic search.

swyx: Ah.

Jeff Huber: We've stress-tested frontier models and their ability to search, and they're not actually that good at searching. So you're sort of highlighting this explore/exploit...

swyx: You're just a Debbie Downer, saying everything doesn't work.

Aaron Levie: Well...

Jeff Huber: Somebody has to be.

Aaron Levie: Um, can I just throw out one more thing? Yeah. Something that is different between coding and the rest [00:27:00] of the knowledge work that I failed to mention. One other key point is that, at the end of the day, whether you believe we're in a slop apocalypse or whatever, if you've built a working solution, that is ultimately what the customer is paying for. Whether I have a lot of slop, a little slop, or whatever, I'm sure there are lots of code bases we could go into at enterprise software companies that are just crazy slop that humans did over a 20-year period, but the end customer just gets this little interface. They can type into it, it does its thing. Knowledge work doesn't have that property.
If I have an AI model go generate a contract, and I generate that contract 20 times, and all 20 times it's just 3% different, that kind of slop introduces all new kinds of risk for my organization that the code version of that slop didn't introduce. So how do you constrain these models to just the part you want them to work on, to just do the thing you want them to do? And, you know, in engineering, you can't be disbarred as an engineer, but you can be disbarred as a lawyer. You can do the wrong medical thing in healthcare. There's no equivalent to that in engineering.

swyx: Do you want there to be? Because I've considered... software...

Jeff Huber: What's that? Civil engineering, there is, right?

swyx: Not software.

Aaron Levie: Civil engineering, sure. Oh yeah, for sure. But in any of our companies, you'll be forgiven if you took down the site. We'll do a rollback, and you'll be in a meeting, but you have not been disbarred as an engineer. We don't revoke your computer science degree.

Jeff Huber: Blameless postmortem.

Aaron Levie: Yeah, exactly. Exactly. So maybe we collectively as an industry need to figure out what you're liable for, not legally, but in a management sense, with these agents. All sorts of interesting problems that have to come out. But in knowledge work, those are the real hostile environments we're operating in.

swyx: Hmm. I do think a lot of last year's, 2025's, story was the rise of coding agents, and I think the [00:29:00] 2026 story is definitely knowledge work agents.

Aaron Levie: Yes. A hundred percent.

swyx: Right. And I think Open Claw and Cowork are just the beginning. The next ones are going to be absolute craziness.

Aaron Levie: It is.
And it's going to be, I mean, again, this is going to be this wave where we bring over as many of the practices from coding as we can, because that will clearly be the forefront: tell an agent to go do something, give it access to a set of resources, and be responsible for reviewing the result at the end of the process. That, to me, is the template that goes across knowledge work. And Cowork is a great example. Open Claw is a great example. You can sort of see what Codex could become over time. These are some really interesting platforms that are emerging.

swyx: Okay. Um, we touched on evals a little bit. You had the report that you were going to bring up, and then I was going to go into Box's evals, but go ahead, talk about your agentic search thing.

Jeff Huber: Yeah. Mostly a few of the insights. Number one: frontier models are not good at search. Humans have this [00:30:00] natural explore/exploit trade-off where we kind of understand when to stop doing something. Also, humans are actually pretty good at forgetting, at pruning their own context, whereas agents are not. If an agent knew something was bad, and you can even see the reasoning in the trace, "hey, that probably wasn't a good idea," if it's still in the trace, still in the context, it'll still do it again. So I think pruning is going to be, it's already becoming a thing, right? Letting agents self-prune the context window...

swyx: ...is going to be a big deal. Yeah. So don't leave the mistake in there. Cut out the mistake, but tell it that you made a mistake in the past so it doesn't repeat it.
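swyx's recipe, cut the failed attempt out of the context but leave a note that it failed, is easy to sketch. The trace format here is invented for illustration:

```python
def prune_failures(trace):
    """Drop failed steps from the working context, but keep a one-line
    warning for each, so the model isn't re-primed by the full bad example
    sitting in its history like a few-shot demo."""
    kept = [step for step in trace if step["ok"]]
    notes = [f"avoid repeating: {step['action']}" for step in trace if not step["ok"]]
    return kept, notes

trace = [
    {"action": "search 'office addresses'", "ok": True},
    {"action": "open deleted_folder/", "ok": False},
]
kept, notes = prune_failures(trace)
print(kept)   # only the successful step survives in context
print(notes)  # ["avoid repeating: open deleted_folder/"]
```

This captures Jeff's point directly below: a failed action left verbatim in the window reads like an example worth imitating, while a short negative note does not.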
Jeff Huber: 'Cause it will repeat its mistake just because it's in the context.

Aaron Levie: It's in the context, so much that it's a few-shot example. It's like, "oh, this is a great thing to go try," even if it didn't work.

Jeff Huber: Yeah, exactly. So there's a bunch of stuff there.

Aaron Levie: Just Groundhog Day inside these models: I'm gonna keep doing the same wrong thing.

Jeff Huber: In a sense, I feel like you're trying to fit a manifold in latent space, which is kind of doing program synthesis, which is kind of how we think about what we're doing. Certain facts might be overly weighting certain sectors of latent space.

swyx: We have a bell; our editor rings a bell every time you say that.

Jeff Huber: You have to remove those, like...

swyx: You should have a gong like TBPN or something.

Jeff Huber: If we gong... you remove those links to kind of give it the freedom to do what it needs to do. But yeah, we'll release more soon.

Aaron Levie: That's awesome.

Jeff Huber: That'll be cool.

swyx: We're a cerebral podcast; people listen to us and think really deep. So yeah, we try to keep it subtle.

Aaron Levie: Okay, fine.

Inside Agent Evals

swyx: You guys do have evals, you talked about your office thing, but you've also been promoting APEX, agents on complex work. Wherever you wanna take this.

Aaron Levie: APEX is obviously Mercor's agent eval. We supported that by opening up some data for them around how we see these data workspaces in the regular economy.
So how do lawyers have a workspace? How do investment bankers have a workspace? What kind of data goes into those? So we [00:32:00] partnered with them on their APEX eval. Our own eval is actually relatively straightforward. We have a set of documents in a range of industries that we give the agent. We previously did this as a one-shot test of purely the model, and then we realized, based on where everything's going, it's just gotta be more agentic. So now it's a bit more of a test of both our harness and the model. We have a rubric of a set of things it has to get right, and we score it. And you're just seeing these incredible jumps in almost every single model within its own family, you know, Opus 4, or Sonnet 4.6 versus Sonnet 4.5.

swyx: Yeah, we have this up on screen.

Aaron Levie: Okay, cool. I forget the total; it was like a 15-point jump, I think, on the overall.

swyx: Yes.

Aaron Levie: Just these incredible leaps that are starting to happen.

swyx: And it's completely held out? The models don't know any of it?

Aaron Levie: There's no public data, which has benefits; this is just a private eval that we [00:33:00] do, and then we happen to show it to the world. So you can't train against it. And I think it's representative of reasoning capabilities, what it's doing with test-time compute, thinking levels, all the context-rot issues. So many interesting capabilities that are now improving.

swyx: One sector that you have, that's interesting.

Industries and Datasets

swyx: People are roughly familiar with healthcare and legal, but you have public sector in there. What's that?
Aaron Levie: Yeah, we actually test against, I dunno, maybe 10 industries. We usually end up just cutting a few that we think have interesting gains. That one has a lot of government-type documents.

swyx: What is that, government-type documents?

Aaron Levie: Government filings.

swyx: Like a tax return?

Aaron Levie: Probably not tax returns. It would be more of what the government would be using as data, so think about research, those types of datasets. And then we have financial services, for things like data rooms and what would be in an investment prospectus.

swyx: That one you can dogfood.

Aaron Levie: Yeah, exactly. [00:34:00] So we run the models, now more in an agent mode but still with kinda limited capacity, and just try to see, on a like-for-like basis, what the improvements are. And again, we just continue to be blown away by how good these models are getting.

swyx: I think every serious AI company needs something like that: this is the work we do, here's our company eval. And if you don't have it, well, you're not a serious AI company.

Aaron Levie: There's two dimensions, right? There's how the models are improving, and so which models you should recommend a customer use, or which one you should adopt. But then every single day we're making changes to our agents, and you need to know...

swyx: If you regressed.

Aaron Levie: Yeah. I've been fully convinced that the whole agent observability and eval space is gonna be a massive space. Super excited for what Braintrust is doing, excited for LangSmith, all the things. And literally every enterprise right now, it's like the AI companies are the customers of these tools.
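A rubric-scored eval like the one Aaron describes, a held-out document task plus a checklist of things the agent must get right, can be sketched as follows. The rubric items, task, and stub agent are invented for illustration; Box's actual harness is private:

```python
# Minimal sketch of a rubric-based agent eval (illustrative only).
# Each task pairs an input with rubric checks; the score is the fraction
# of checks the output satisfies, so like-for-like runs across model
# versions surface regressions or gains.

def score_output(output: str, rubric: list) -> float:
    """Score one output as the fraction of rubric checks that pass."""
    passed = sum(1 for check in rubric if check(output))
    return passed / len(rubric)

def run_eval(agent, tasks) -> float:
    """Average rubric score across all tasks, on a 0-100 scale."""
    scores = [score_output(agent(t["input"]), t["rubric"]) for t in tasks]
    return 100 * sum(scores) / len(scores)

# A hypothetical task: extract terms from an investment prospectus.
tasks = [
    {
        "input": "Summarize the key terms of the attached prospectus.",
        "rubric": [
            lambda out: "management fee" in out.lower(),
            lambda out: "lock-up" in out.lower(),
            lambda out: len(out) < 2000,  # penalize rambling answers
        ],
    },
]

def stub_agent(prompt: str) -> str:
    # Stand-in for a real model/harness call.
    return "Key terms: 2% management fee, 12-month lock-up period."

print(run_eval(stub_agent, tasks))
```

Because the tasks are private and held out, a score change between two runs reflects the harness and model rather than memorized answers, which is the property Aaron emphasizes.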
Aaron Levie: Every enterprise will have this. You'll just [00:35:00] have to have an eval of all of your work: you'll have an eval of your RFP generation, an eval of your sales material creation, an eval of your invoice processing. And as you buy or use new agentic systems, you're gonna need to know the quality of your pipeline. So huge, huge market with agent evals.

Building the Agent Team

swyx: I'm gonna shout out your team a bit. Your CTO, Ben, did a great talk with us last year, and he's gonna come back again for World's Fair. Just talk about your team, brag a little bit. I think people take these eval numbers and pretty charts for granted, but there are lots of really smart people at work behind all this.

Aaron Levie: Biggest shout-out is we have a couple folks, Dya and Sidarth, that kind of run this; they're a tag-team duo on our evals. Ben, our CTO, is heavily involved; Yasha, our head of AI; a bunch of folks. And evals is one part of the story. The full AI and agent team [00:36:00] is core to this whole effort. There's probably a few dozen people that are the epicenter, and then you have layers and layers of concentric circles: there's a search team that supports them, an infrastructure team that supports them. And it's starting to ripple through the entire company.
But there's that kind of core agent team that's a pretty close-knit group.

swyx: The search team is separate from the infra team?

Aaron Levie: We have to do every layer of the stack ourselves, except for pure public cloud. I don't even know what our public numbers are, but you can just think about it as: a lot of data is stored in Box. And you have every layer of the stack: how do you manage the data, the file system, the metadata system, the search system, all of those components. And they all have to understand that now you've got this new customer, which is the agent. They've been building for two types of customers in the past: users and applications. [00:37:00] Now you've got this new agent user, and it sometimes comes in with different properties, like, hey, maybe sometimes we should do an embedding-based search versus your typical semantic search. You have to build the capabilities to support all of this. And we're testing stuff, throwing things away when something doesn't work or isn't relevant. It's total chaos. But all of those teams are supporting the agent team, which is coming up with its requirements of what we need.

swyx: We just came from a fireside chat where you talked about how you're doing this. It's kind of like an internal startup within the broader company; the broader company is like 3,000 people, but there's a core team, like, here's the innovation center. And every company is kind of run this way.

Aaron Levie: Yeah. I wanna be sensitive; I don't call it the innovation center.
Only because I think everybody has to do innovation. There's a part of the company that is sort of do-or-die for the agent wave, and it only happens to be more of my focus simply because it's existential that [00:38:00] we get it right. All of the supporting systems are necessary. All of the surrounding adjacent capabilities are necessary. The only reason we get to be a platform where you'd run an agent is because we have a security feature or a compliance feature or a governance feature that some team is working on. But that's not gonna be the make-or-break of whether we get agents right; that already exists, and we need to keep innovating there. I don't know what the exact precise number is, but it's not a thousand people and it's not 10 people. There's a number of people that are like the startup within the company, that are the make-or-break on everything related to AI agents leveraging our platform and letting you work with your data. That's where I spend a lot of my time, and Ben and Yosh and Diego and Teri, people across the team, are working on it.

swyx: Amazing.

Read Write Agent Workflows

Jeff Huber: How do you think about... you talked a lot about read workflows over your Box data, you know, gen search, questions, queries, et cetera. But what about write, or authoring, workflows?

Aaron Levie: I've [00:39:00] probably already revealed too much, actually, now that I think about it.

Jeff Huber: Whatever you can.

Aaron Levie: Okay. It's just us.
Of course, of course.

Aaron Levie: I guess I'll make it a little bit conceptual, because I've already said things that are not even GA, but we've kinda danced around it publicly. Hopefully nobody watches this episode.

swyx: It's tidbits for the highly engaged to figure out what exactly your line of thinking is. They can connect the dots.

Aaron Levie: Yeah. So I would say that, as a place where you have your enterprise content, there's a use case where I want an agent to read that data and answer questions for me. And then there's a use case where I want the agent to create something: use the file system to create something, or store off data that it's working on, or have various files that it's writing to about the work it's doing. So we do see it as total read-write. The harder problem has so far been the read, because again you have that 10 [00:40:00] million-to-one ratio problem, whereas writes are mostly just gonna come from the model, and we'll just put them in the file system and use them. So it's a little bit of a technically easier problem. But the one part that's not necessarily technically hard, just not yet perfected in the state of the ecosystem, is building a beautiful PowerPoint presentation. It's still a hard problem for these models; these formats just weren't built for them.

swyx: They're working on it.

Aaron Levie: Everybody's working on it.

swyx: Every launch is like, well, we do PowerPoint now.

Aaron Levie: We're getting a lot better each time.
But then you'll do this thing where you ask it to update one slide, and all of a sudden the fonts will be a little bit different on two of the slides, or it moved some shape over to the left a little. In code, you could really care about those kinds of things if you care about how beautiful the code is, but the end user doesn't notice those problems. In file creation, the end user instantly sees it. You're [00:41:00] like, ah, paragraph three, you literally just changed the font on me; it's a totally different font midway through the document. Those are the kinds of things you run into a lot on the content creation side. So we are gonna have native agents that do all of those things, powered by the leading models and labs. But the thing that's probably gonna be a much bigger idea over time is any agent on any system using Box as a file system for its work. In that scenario, we don't necessarily care what it's putting in the file system. It could put its memory files, its specification documents, whatever its markdown files are, or it could generate PDFs. It's a workspace that's sandboxed off for its work; people can collaborate in it, and it can share with other people. So we were thinking a lot about the right way to deliver that at scale.

Docs Graphs and Founder Mode

swyx: I wanted to come to the AI transformation, AI operations things. [00:42:00] One of the tweets that you wanted to talk about... this is just me going through your tweets, by the way.
Aaron Levie: You read them one by one?

swyx: You're the easiest guest to prep for, because you already have, like, this is what I'm interested in.

Aaron Levie: Are we gonna get to, like, February, January or something? Where are we in the timelines? How far back are we going?

swyx: Can you describe Box as a set of skills? That's one of the extremes: if you just turn everything into a markdown file, then your agent can run your company. You just have to find the right sequence of words to do it.

Aaron Levie: Sorry, is that the question?

swyx: I think the question is: what if we documented everything, exactly the way you said? Let's get all the Fortune 500s prepared for agents, everything golden and nicely filed away. What's missing? What's left? You've run your company for a decade.

Aaron Levie: I think the challenge is that that information changes a week later, because something happened in the market for that [00:43:00] customer, or for us as a company, that now has to get updated. These systems are living and breathing, and they have to experience reality and updates to reality, which right now is probably gonna be humans giving them the updates. And there is this piece about context graphs that went very viral. I thought it was super provocative. I agreed with many parts of it; I disagree with a few parts.
It's not gonna be as easy as "if we just had the agent traces, then we can finally do that work," because there's so much other stuff happening that we haven't been able to capture and digitize. And I think they actually represented that in the piece, to be clear. But there's just a lot of work that has to happen; you can't have only skills files for your company, because there's gonna be a lot of other stuff that happens and changes over time.

swyx: Most companies are practically apprenticeships.

Jeff Huber: Every new employee who joins the team, [00:44:00] you spend one to three months ramping them up. All that tacit knowledge is not written down. But it would have to be if you wanted to give it to an agent. And so that seems to me to be...

Aaron Levie: One is, I think you're gonna see a premium on companies that can document this. There'll be a huge premium on that, because can you shorten that three-month ramp cycle to a two-week ramp cycle? That's an instant productivity gain. Can you dramatically reduce rework in the organization because you've documented where all the stuff is and where the answers are? Can you make your average employee as good as your 90th-percentile employee because you've captured the knowledge that's in the heads of those top employees and made it available? So you can see some very clear productivity benefits.
If you had a company culture of making sure your information was captured, digitized, put in a format that was agent-ready, and then made available to agents to work with... then you still have this reality that, at a 10,000-person [00:45:00] company, mapping that to the access structure of the company is just a hard problem. Not every piece of information that's digitized can be shared with everybody, so now you have to organize it in a way that actually works. There was a pretty good piece called "Your Company Is a File System." Did you see that one?

swyx: Nope.

Aaron Levie: Uh, yes, you saw it. I'd actually be curious about your thoughts on it. We agree with it, because that's how we see the world.

swyx: Okay, we have it up on screen.

Aaron Levie: It's basically about how we've already organized in this kind of permission-structure way, and these are the natural ways that agents can now work with data. It's an interesting metaphor, but I do think companies will have to start thinking about how they digitize more of that data. What was your take?

Jeff Huber: Yeah, I mean, the company's probably like an ACID-compliant file system, which I'm guessing Box is, right? Which you have a great piece on.

swyx: My direction is a little bit... I wanna rewind to the graph word you said. That's a magic trigger word for us. I always ask, what's your take on knowledge graphs? 'Cause with every database person especially, I just wanna see what they think.
There have been knowledge-graph hype cycles, and you've seen it all.

Aaron Levie: Hmm. I actually am not the expert in knowledge graphs, so you might need to...

swyx: You don't need to be an expert. I think it's just, how seriously do people take it? Is there a lot of potential there?

Aaron Levie: Well, can I first understand: is this a loaded question, in the sense of, are you super pro, super con, super anti, medium?

swyx: I see pros and cons. But I think your opinion should be independent of mine.

Aaron Levie: Totally. I just want to see what I'm stepping into.

swyx: It's a huge trigger word for a lot of people in our audience, and they're trying to figure out...

Aaron Levie: Why is this such a hot item for them?

swyx: Because a lot of people get graph religion. They're like, everything's a graph; of course you have to represent it as a graph. Well, [00:47:00] how do you solve your knowledge changing over time? Well, it's a graph. And I think there's that line of work, and then there are a lot of people who are like, well, you don't need it. And both are right.

Aaron Levie: And the people who say you don't need it, what are they arguing for?

swyx: Markdown files. Simplicity. It's structure versus less structure. That's all it is.

Aaron Levie: I think the tricky thing is, again, when this gets met with real humans: they're just going to their computer, working with some people on Slack or Teams, sharing some data through a collaborative file system, Google Docs or Box or whatever.
I certainly like the vision of most knowledge-graph, futuristic ways of thinking about it. It's just, it's 2026 and we haven't seen it play out yet. I don't even know how old you guys are, but to show my age: I remember 17 years ago, everybody thought enterprises would just run on [00:48:00] wikis, on Confluence. And Confluence actually took off for engineering, unquestionably, but the idea was that everything would be in the wiki. And based on the general style of what we were building, we were just like, I don't know, people just want a workspace; they're gonna collaborate with other people.

swyx: So you were anti-knowledge graph.

Aaron Levie: Not anti. I just think the search system and the graph are two systems that probably... but I'm not in any religious war. I don't want to be in anybody's YouTube comments on this. There's not a fight for me.

swyx: We love YouTube comments. We get into the comments.

Aaron Levie: It's mostly just a virtue of what we built, and we continued down that path. That was what we pursued. But this is not...

swyx: Not existential for you. Great.
Graphs or nerd Snipes is very effective nerd.swyx: See this is, this is one, one opinion and then I've,Jeff Huber: and I think that the actual graph structure is emergent in the mind of the agent.Ah, in the same way it is in the mind of the human. And that's a more powerful graph ‘cause it actually involved over time.swyx: So don't tell me how to graph. I'll, I'll figure it out myself. Exactly. Okay. All right. AndJeff Huber: what's yours?swyx: I like the, the Wiki approach. Uh, my, I'm actually

    The Fintech Blueprint
    How Alpaca built the API brokerage for 300+ global fintechs across 45 Countries, with CEO Yoshi Yokokawa

    The Fintech Blueprint

    Play Episode Listen Later Mar 5, 2026 46:38


In this episode, Lex chats with Yoshi Yokokawa, CEO of Alpaca, a brokerage infrastructure company that provides API-based trading and custody services to fintechs and developers globally. The conversation begins with their shared experience at Lehman Brothers during the 2008 financial crisis, where Yoshi worked in fixed income securitization and learned that even when market participants sense a bubble, they keep dancing because timing the exit is impossible. After Lehman's collapse, Yoshi pursued entrepreneurship, building a computer vision AI company acquired by Kyocera before founding Alpaca in 2017. Initially inspired by Robinhood, Yoshi pivoted after experiencing firsthand the friction of accessing brokerage infrastructure, realizing the deeper opportunity was building API-first brokerage rails for developers. Today Alpaca powers 9 million accounts through 300+ partners across 45 countries, recently raising $150 million at a unicorn valuation. The discussion explores how Alpaca follows Robinhood's product roadmap to anticipate partner demand, the challenges of adding crypto, and Yoshi's thesis that finance is undergoing a generational shift from digital to on-chain operations. Lex shares examples of legacy infrastructure dysfunction, from faxing PDFs to TD Ameritrade in 2012 to the Synapse collapse caused by manual CSV uploads, illustrating why Alpaca built its own custody and ledger systems as a path to competing in the $350 trillion global securities custody market.

NOTABLE DISCUSSION POINTS:

Alpaca's biggest breakthrough was not a better investing app idea, but recognizing that the real bottleneck was brokerage infrastructure. Yokokawa and team initially explored B2C product concepts, but pivoted once they experienced firsthand how painful broker-dealer setup, custody, and clearing integrations were. For readers building fintech, this is a huge lesson: the highest-value opportunity is often the "invisible" infrastructure pain, not the user-facing feature set.

They found product-market fit by starting with a narrow wedge (an API for automated traders) and only then expanding into a broader platform (Broker API for fintech apps). Alpaca did not begin by serving large fintechs; it first attracted power users who urgently needed programmable execution, then used inbound demand ("can I build my own Robinhood?") as proof to build account opening, reporting, and full brokerage APIs. This is a valuable go-to-market pattern for infrastructure startups: win with a sharp use case, then expand into the system of record.

Yokokawa's core strategic edge is full-stack control of licenses, memberships, and ledger technology rather than relying on legacy vendors. He explicitly ties this to lessons from historical fintech fragility (manual workflows, broken reconciliations, middleware failures) and argues that owning the custody/clearing layer is what makes Alpaca defensible long term. For readers, this is the key takeaway on moat-building in financial services: if you don't control the ledger and operational core, your product may scale faster at first but remains structurally fragile.

TOPICS: Alpaca, Lehman Brothers, Barclays, Nomura, Neuberger Berman, BlackRock, Robinhood, Interactive Brokers, TD Ameritrade, BNY Mellon, brokerage infrastructure, API, trading, tokenization, embedded finance, fintech, crypto, web3

ABOUT THE FINTECH BLUEPRINT

    Manufacturing Hub
    Ep. 251 - Ignition 8.3 ProveIt How Inductive Automation Scales Multi Site Factories w/ MQTT and UNS

    Manufacturing Hub

    Play Episode Listen Later Mar 5, 2026 63:12


    In this episode of Manufacturing Hub, Vlad and Dave sit down with Travis Cox and Kevin McCluskey from Inductive Automation to unpack what was actually proven at ProveIt and why it matters for teams trying to modernize plants without building a fragile mess of point-to-point integrations. If you have ever looked at a shiny demo and wondered what the real architecture looks like, how it scales beyond a single line, and what it takes to roll out across multiple sites without turning every change into a high-risk event, this conversation is for you.

Travis and Kevin walk through their ProveIt Enterprise B build and the thinking behind it. The core idea is simple but powerful: treat the factory like a system that needs a shared digital infrastructure, built on open standards, where data is contextualized and reusable. They break down how they used Ignition Edge close to PLCs for resiliency, local HMIs, and disciplined data modeling, then moved data through MQTT into a Unified Namespace so multiple applications can consume the same trusted signals and context. This is the difference between “we can connect to anything” and “we can scale without rewriting everything every time the business changes.” Open standards show up repeatedly in the conversation because ProveIt is specifically designed to force interoperability and practical implementation tradeoffs. Inductive Automation has also written about ProveIt as a place where MQTT, OPC UA, and SQL show up as real foundations rather than slogans.

From there, the episode gets into the part that should make both OT and IT teams pay attention: modern deployment practices applied to industrial applications. Kevin outlines a clear maturity path from a single-designer workflow to version control, then to containerized deployments, and finally to full GitOps-style promotion across dev, staging, and production using tools like Argo CD, Helm, Kubernetes, and release promotion concepts that look like what the software world has used for years. Argo CD is explicitly built around Git repositories as the source of truth for desired state, which is exactly why it fits this style of deployment. The live portion of the conversation demonstrates how fast this can get when the infrastructure is treated as code: they spin up a brand new “site four” by submitting a form, generating a pull request, merging it, and letting the pipeline do the rest.

Timestamps
00:00 Welcome back and why this ProveIt recap matters
01:35 Meet Travis Cox and Kevin McCluskey from Inductive Automation
03:10 What ProveIt is and the key vendor questions it forces
05:20 Enterprise B architecture overview from PLC to Edge to site to enterprise
07:30 HMI walkthrough across liquid processing, filling, packaging, palletizing
09:05 Why deploy Ignition Edge instead of only a centralized site gateway
12:05 Design once, reuse everywhere and what that means for scaling quickly
14:35 On-prem realities versus cloud infrastructure in the ProveIt environment
17:10 MCP, n8n workflows, and bringing live operational context into AI
20:40 i3X-style API access to models, history, and alarms for interoperability
23:15 GitHub, Docker Compose, Helm, Kubernetes, Argo CD, Cargo and GitOps promotion
36:55 Spinning up a new site live and what it changes for multi-site rollouts

About the hosts
Vlad Romanov is an electrical engineer and MBA who has spent over a decade building and modernizing manufacturing systems across industrial automation, controls, and plant operations. Through Joltek, Vlad works with manufacturers to assess current-state OT foundations, reduce modernization risk, improve reliability, and build internal capability through practical training and standards that stick.
Dave Griffith co-hosts Manufacturing Hub and brings a practitioner lens focused on what works on the plant floor, how architectures survive real constraints, and how industrial teams can modernize without breaking production.

About the guests
Travis Cox is Chief Technology Evangelist at Inductive Automation and has spent over two decades helping customers and partners design scalable architectures, apply best practices, and deliver real solutions with Ignition.
Kevin McCluskey is Chief Technology Architect at Inductive Automation and works with organizations on architecture decisions, platform direction, and enabling the next generation of industrial applications.

Learn more about Joltek
https://www.joltek.com/services
https://www.joltek.com/book-a-modernization-consultation
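The Unified Namespace idea Travis and Kevin describe can be sketched in a few lines. This is a hedged illustration assuming a generic ISA-95-style hierarchy and made-up payload fields; it is not Ignition's or ProveIt's actual topic schema.

```python
import json

# Every producer publishes to a shared, hierarchical topic path with a
# contextualized payload, so all consumers see the same trusted signal.
# The enterprise/site/area/line hierarchy below is an illustrative assumption.
def uns_topic(enterprise: str, site: str, area: str, line: str, metric: str) -> str:
    """Build an ISA-95-style topic path, e.g. for publishing to an MQTT broker."""
    return "/".join([enterprise, site, area, line, metric])

def contextualize(value: float, units: str, source: str) -> str:
    """Wrap a raw signal with the context that makes it reusable downstream."""
    return json.dumps({"value": value, "units": units, "source": source})

topic = uns_topic("acme", "site4", "filling", "line2", "flow_rate")
payload = contextualize(12.7, "L/min", "plc-edge-01")
# topic is "acme/site4/filling/line2/flow_rate"; any application subscribed
# to the namespace receives the same value with the same context attached.
```

In a real deployment the payload would be published over MQTT with a client library rather than returned as a string; the point here is only that topic structure plus context is what makes the data reusable without point-to-point rewiring.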

    EmpreendaCast Brasil
    Chatbot morreu. Bem-vindo, WorkAI Com Eduardo Barros

    EmpreendaCast Brasil

    Play Episode Listen Later Mar 5, 2026 111:48


    Chatbot morreu. Bem-vindo, WorkAI | #podcast #empreendedorismo #podcastbrasil In this episode of Empreenda Cast, Gustavo welcomes Eduardo Barros, CEO and founder of Work AI, for a frank conversation about the evolution of AI in the healthcare market, and why, in practice, the traditional chatbot is falling behind. The conversation opens with a great provocation: instead of a "customer-service robot," we are talking about digital employees that understand context, execute tasks, and integrate with the operation's systems.

    The Insider Travel Report Podcast
    How Tickitto Allows Travel Advisors to Get Into the Event Booking Business

    The Insider Travel Report Podcast

    Play Episode Listen Later Mar 5, 2026 11:14 Transcription Available


    Dana Lattouf, founder and CEO of Tickitto, talks with Alan Fine of Insider Travel Report at the InteleTravel national conference in Punta Cana about how her InteleTravel-owned global ticketing infrastructure platform connects travel advisors to more than 90,000 concerts, sports and entertainment events worldwide. She also discusses how Tickitto integrates event inventory directly into travel booking systems and how travel advisors can add event tickets to client itineraries through white-label tools and API connections. For more information, visit www.tickitto.com or www.inteletravel.com. All our Insider Travel Report video interviews are archived and available on our YouTube channel (youtube.com/insidertravelreport), and as podcasts with the same title on: Spotify, Pandora, Stitcher, PlayerFM, Listen Notes, Podchaser, TuneIn + Alexa, Podbean, iHeartRadio, Google, Amazon Music/Audible, Deezer, Podcast Addict, and Apple Podcasts, which supports Overcast, Pocket Cast, Castro and Castbox.

    Hashtag Trending
    Stolen Gemini API Key Triggers $82K Bill

    Hashtag Trending

    Play Episode Listen Later Mar 5, 2026 15:49


    Stolen Gemini API Key Triggers $82K Bill, Accenture Buys Ookla, OpenAI vs GitHub, and Meta Smart Glasses Privacy

Jim Love covers multiple tech stories: a three-developer startup in Mexico saw its Google Gemini bill jump from about $180/month to $82,314 in two days after attackers used a stolen API key, highlighting the financial and security risks of usage-based AI APIs, limits, and autonomous agents. Accenture is buying Ookla (Speedtest and Downdetector) for about $1.2B, aiming to monetize its large real-world internet performance dataset for consulting and infrastructure work. Reports say OpenAI may be developing a developer platform that could compete with Microsoft's GitHub, complicating their partnership. China's Minimax launches Max Claw, a cloud "always-on" AI agent deployable in 10 seconds, raising broader access and data-security concerns. Apple's MacBook Neo looks inexpensive but has fixed 8GB memory and paid storage upgrades. Meta's Ray-Ban smart glasses raise privacy questions around stored AI interactions and human review.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Sponsor Message Meter
01:04 Gemini Key Bill Shock
04:46 Accenture Buys Ookla
06:26 OpenAI vs GitHub Rumors
08:07 Minimax Max Claw Agents
11:07 MacBook Neo Value Trap
12:51 Meta Smart Glasses Privacy
14:56 Wrap Up and Thanks
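The bill-shock story is a good excuse to sketch a client-side spend guard for usage-based APIs. The class below is a hypothetical illustration with made-up pricing; real protection also requires provider-side budgets, alerts, and prompt key rotation, since a stolen key bypasses anything that lives only in your own client.

```python
# Minimal sketch of a client-side spend cap for a usage-based AI API.
# Pricing and the deny-on-overrun policy are illustrative assumptions.
class BudgetExceeded(RuntimeError):
    pass

class SpendGuard:
    def __init__(self, monthly_cap_usd: float, price_per_1k_tokens: float):
        self.cap = monthly_cap_usd
        self.price = price_per_1k_tokens
        self.spent = 0.0

    def charge(self, tokens: int) -> float:
        """Record a request's cost; refuse once the cap would be exceeded."""
        cost = tokens / 1000 * self.price
        if self.spent + cost > self.cap:
            raise BudgetExceeded(f"cap ${self.cap} reached at ${self.spent:.2f}")
        self.spent += cost
        return cost

# A startup expecting ~$180/month would set the cap there; any runaway or
# stolen-key usage then fails fast instead of compounding for two days.
guard = SpendGuard(monthly_cap_usd=180.0, price_per_1k_tokens=0.01)
guard.charge(500_000)  # normal traffic: $5.00 recorded against the cap
```

The design choice worth noting: the guard fails closed. Once the cap is hit, every further call raises instead of silently accruing cost, which is the behavior you want when the alternative is an $82K invoice.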

    The David Knight Show
    Wed Episode #2214: OpenAI vs. Anthropic: The Military AI Split

    The David Knight Show

    Play Episode Listen Later Mar 4, 2026 121:41 Transcription Available


    00:01:06:19 — AI Firms Accused of Enabling Mass Surveillance and Autonomous Weapons
OpenAI and other technology companies face backlash for allegedly cooperating with Pentagon projects involving mass surveillance systems and autonomous lethal weapons.

00:02:09:26 — Claims of “Unlimited” U.S. Weapons Stockpiles Challenged
Statements that the United States possesses virtually unlimited weapons stockpiles are disputed using reported production figures showing interceptor missile shortages.

00:03:41:29 — Missile Production Gap Exposes Strategic Vulnerability
Iranian missile production is estimated at about 100 per month while U.S. interceptor production may be only six to seven per month, highlighting a severe imbalance in defensive capability.

00:07:34:20 — Reports of Cluster Missile Technology Increasing Defense Challenges
Claims circulate that certain Iranian missiles contain dozens of sub-munitions, multiplying the difficulty for missile defense systems already facing interceptor shortages.

00:09:14:14 — U.S. Proposal to Insure Oil Tankers Through Strait of Hormuz
The U.S. government reportedly considers guaranteeing or insuring oil tankers traveling through the Strait of Hormuz to keep global energy shipments moving despite military risks.

00:11:16:19 — Debate Over Israeli Influence on U.S. War Decisions
Arguments emerge that U.S. policy may be influenced by Israeli strategic priorities, while critics insist American leaders remain responsible for their own decisions.

00:16:24:03 — 1953 Iran Coup Framed as Origin of Modern Conflict
Current tensions are linked to the CIA-backed overthrow of Iran's government in 1953 and the installation of the Shah, described as a foundational moment for long-term hostility.

00:38:46:16 — U.S. Troops Killed in Missile Strike on Kuwait Base
Six U.S. service members are reported killed and multiple others injured when a missile strike hits a makeshift operations center described as a “fortified” trailer.

00:43:05:25 — Christian Prophecy Narratives Used to Justify War
Reports emerge of military leadership invoking biblical prophecy and Armageddon narratives to frame the conflict with Iran as part of a divine plan.

00:58:33:09 — California Law Requires Age-Tracking Internet Infrastructure
California unanimously passes AB-1043 requiring operating systems to collect age data at account setup and transmit it to app developers via a real-time API beginning January 2027.

01:10:36:14 — Trump Targeting Law Firms Sparks Constitutional Concerns
Executive orders reportedly removed security clearances and federal building access from law firms associated with political opponents.

01:27:56:15 — AI Industry Conflict Over Military Surveillance Contracts
Anthropic's Claude AI reportedly refuses Pentagon uses tied to mass surveillance or autonomous weapons while OpenAI moves forward with defense contracts.

Money should have intrinsic value AND transactional privacy: go to https://davidknight.gold/ for great deals on physical gold/silver.
For 10% off Gerald Celente's prescient Trends Journal, go to https://trendsjournal.com/ and enter the code KNIGHT.
Find out more about the show and where you can watch it at TheDavidKnightShow.com
If you would like to support the show and our family, please consider subscribing monthly here: SubscribeStar https://www.subscribestar.com/the-david-knight-show
Or you can send a donation through:
Mail: David Knight, POB 994, Kodak, TN 37764
Zelle: @DavidKnightShow@protonmail.com
Cash App: $davidknightshow
BTC: bc1qkuec29hkuye4xse9unh7nptvu3y9qmv24vanh7
Become a supporter of this podcast: https://www.spreaker.com/podcast/the-david-knight-show--2653468/support.

    The REAL David Knight Show
    Wed Episode #2214: OpenAI vs. Anthropic: The Military AI Split

    The REAL David Knight Show

    Play Episode Listen Later Mar 4, 2026 121:41 Transcription Available


    00:01:06:19 — AI Firms Accused of Enabling Mass Surveillance and Autonomous Weapons
OpenAI and other technology companies face backlash for allegedly cooperating with Pentagon projects involving mass surveillance systems and autonomous lethal weapons.

00:02:09:26 — Claims of “Unlimited” U.S. Weapons Stockpiles Challenged
Statements that the United States possesses virtually unlimited weapons stockpiles are disputed using reported production figures showing interceptor missile shortages.

00:03:41:29 — Missile Production Gap Exposes Strategic Vulnerability
Iranian missile production is estimated at about 100 per month while U.S. interceptor production may be only six to seven per month, highlighting a severe imbalance in defensive capability.

00:07:34:20 — Reports of Cluster Missile Technology Increasing Defense Challenges
Claims circulate that certain Iranian missiles contain dozens of sub-munitions, multiplying the difficulty for missile defense systems already facing interceptor shortages.

00:09:14:14 — U.S. Proposal to Insure Oil Tankers Through Strait of Hormuz
The U.S. government reportedly considers guaranteeing or insuring oil tankers traveling through the Strait of Hormuz to keep global energy shipments moving despite military risks.

00:11:16:19 — Debate Over Israeli Influence on U.S. War Decisions
Arguments emerge that U.S. policy may be influenced by Israeli strategic priorities, while critics insist American leaders remain responsible for their own decisions.

00:16:24:03 — 1953 Iran Coup Framed as Origin of Modern Conflict
Current tensions are linked to the CIA-backed overthrow of Iran's government in 1953 and the installation of the Shah, described as a foundational moment for long-term hostility.

00:38:46:16 — U.S. Troops Killed in Missile Strike on Kuwait Base
Six U.S. service members are reported killed and multiple others injured when a missile strike hits a makeshift operations center described as a “fortified” trailer.

00:43:05:25 — Christian Prophecy Narratives Used to Justify War
Reports emerge of military leadership invoking biblical prophecy and Armageddon narratives to frame the conflict with Iran as part of a divine plan.

00:58:33:09 — California Law Requires Age-Tracking Internet Infrastructure
California unanimously passes AB-1043 requiring operating systems to collect age data at account setup and transmit it to app developers via a real-time API beginning January 2027.

01:10:36:14 — Trump Targeting Law Firms Sparks Constitutional Concerns
Executive orders reportedly removed security clearances and federal building access from law firms associated with political opponents.

01:27:56:15 — AI Industry Conflict Over Military Surveillance Contracts
Anthropic's Claude AI reportedly refuses Pentagon uses tied to mass surveillance or autonomous weapons while OpenAI moves forward with defense contracts.

Money should have intrinsic value AND transactional privacy: go to https://davidknight.gold/ for great deals on physical gold/silver.
For 10% off Gerald Celente's prescient Trends Journal, go to https://trendsjournal.com/ and enter the code KNIGHT.
Find out more about the show and where you can watch it at TheDavidKnightShow.com
If you would like to support the show and our family, please consider subscribing monthly here: SubscribeStar https://www.subscribestar.com/the-david-knight-show
Or you can send a donation through:
Mail: David Knight, POB 994, Kodak, TN 37764
Zelle: @DavidKnightShow@protonmail.com
Cash App: $davidknightshow
BTC: bc1qkuec29hkuye4xse9unh7nptvu3y9qmv24vanh7
Become a supporter of this podcast: https://www.spreaker.com/podcast/the-real-david-knight-show--5282736/support.

    Code Story
    Developer Chats - Oleksandr Piekhota

    Code Story

    Play Episode Listen Later Mar 4, 2026 27:33 Transcription Available


    Today, we are continuing our series, entitled Developer Chats, hearing from the large-scale system builders themselves. In this episode, we are talking with Oleksandr Piekhota, Principal Software Engineer at Teaching Strategies. Oleksandr helps to show us at what point of scale platform approaches are required, when to run experiments and when to stop, and perhaps more importantly, engineering ownership beyond the code.

Questions
You've moved from hands-on engineering into principal and technical leadership roles, working on architecture and platforms. At what point did you realize your work was no longer about individual features, but about the system as a whole?
Across several projects, growth didn't break functionality — it exposed architectural limits. Can you recall a moment when it became clear that shipping more features wouldn't solve the problem, and a platform approach was required?
You've designed and supported APIs end-to-end, from architecture to real customers. How do you distinguish between an API that simply works and one that can truly support business scale?
Internal systems like invoicing and HR workflows began as automation, but evolved into real products. What tells you that an internal tool is worth developing seriously rather than treating as a temporary workaround?
In R&D, you explored CI/CD automation, serverless, and infrastructure experiments — not all reached production. How do you decide when an experiment should continue, and when it's no longer worth the engineering cost?
You've hired teams, set standards, and shaped long-term technical direction. At what point does an engineer stop being a contributor and start owning business-level outcomes?
You contributed to open-source tools that later became part of your company's infrastructure. Why do you see open-source contributions as part of serious engineering work rather than a side activity?
Looking across your projects, how do you now recognize a truly mature engineering system? Is it code quality, process, or how teams respond when things go wrong?
If we look five to seven years into the future, which architectural assumptions we treat as “standard” today are most likely to turn out to be naive or limiting?

Sponsors
Incogni

Links
https://www.linkedin.com/in/oleksandr-piekhota-b675ba53/
https://teachingstrategies.com/
Support this podcast at — https://redcircle.com/codestory/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

    Seller Sessions
    Claude Sessions Week 3: AI Implementation for E-Commerce with Subash - Seller Sessions Podcast

    Seller Sessions

    Play Episode Listen Later Mar 4, 2026 40:27


    In this third installment of Claude Sessions, Danny is joined by Subash from Not A Square, who helps e-commerce brands scaling past seven figures implement AI without scaling headcount. Subash walks through real client case studies, including a TikTok brand that boosted its customer satisfaction score from 4.2 to 4.5 in four weeks using a customer support agent built in Claude. Danny then breaks down OpenClaw, the open-source personal AI agent that exploded in popularity, explains why he chose not to use it despite the temptation, and reveals Claude Flow, his custom operating system built inside Claude Code with 11 engines, 300+ features, and a persistent memory layer powered by ChromaDB. The episode drives home one core message: document your operations first, pick one platform, go deep, and stop chasing every new tool.

Key Topics
Documenting operations before automation: why you cannot automate what is not documented
TikTok customer support case study: building an AI agent that raised satisfaction scores in four weeks
OpenClaw overview and security risks: what it does, why it blew up, and why Danny built his own alternative
Claude Flow: Danny's custom operating system inside Claude Code with persistent memory
The amnesia loop: how context loss between sessions kills productivity and how ChromaDB solves it
Pixel-less environment: the shift from structured prompts to contextual AI interaction
Go deep on one platform: why chasing multiple AI tools guarantees you build nothing

Timestamps
[00:00] Introduction: Claude Sessions Week 3, delayed from the road
[01:03] Subash introduces himself and Not A Square
[02:01] Overview of three client projects and the problem founders face
[04:30] Why operational truth is the moat in AI commerce
[06:48] Three pillars: reduce costs, better governance, scale without headcount
[07:30] TikTok case study: customer support agent boosting store score from 4.2 to 4.5
[09:04] OpenClaw: history, capabilities, and the security nightmare
[15:30] Six core capabilities of OpenClaw (local-first, universal messaging, persistent memory, browser automation, system access, self-extending skills)
[18:00] Why OpenClaw matters: moving from dumb LLMs to personal AI agents
[20:00] Security trade-offs: 1.5M API keys exposed, malware in skills, Cisco tests
[22:00] Claude Flow: Danny's 11-engine operating system built inside Claude Code
[24:26] The amnesia loop: how sessions lose context and how ChromaDB fixes it
[28:19] Why Claude MD, agents, and skills are not enough without hooks and triggers
[32:40] Go deep on one platform: stop chasing every new tool
[35:35] Subash on helping sellers adopt Claude Code fundamentals (Claude MD, skills)
[39:51] Wrap-up and contact info

Key Takeaways
Document before you automate: if your business operations live in the founder's head and not on paper, any AI tool will amplify the chaos rather than fix it.
Operational truth is the moat: clean inventory, accurate catalogs, honest cashflow reporting. Get these right before touching AI.
One AI agent moved the needle: a single customer support agent on TikTok raised a brand's satisfaction score from 4.2 to 4.5 in four weeks, directly improving store visibility.
Persistent memory changes everything: ChromaDB captures decisions, patterns, and project context across sessions so Claude compounds in usefulness over time (zero entries in session one, 1,700+ by session 25).
Scaffolding beats raw building: Danny's Claude Flow system means a project that took five days six months ago now takes 40 minutes. The investment in infrastructure pays exponential returns.
OpenClaw is proof of concept, not production-ready: broad permissions, prompt injection vulnerabilities, exposed API keys. Wait for the open-source community to patch the holes before diving in.
Pick one platform and go all the way in: chasing multiple AI tools means you learn none of them deeply and build nothing of value.
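The "amnesia loop" fix Danny describes can be illustrated with a toy persistence layer. His real system uses ChromaDB, a vector database with embeddings and semantic search; this plain JSON-file sketch is a simplified stand-in that only shows why memory surviving a session changes the next session's starting point.

```python
import json
import os
import tempfile

# Illustrative sketch: decisions and patterns are written to durable storage,
# so a "new session" reloads them instead of starting from zero entries.
# The file format and method names here are assumptions, not ChromaDB's API.
class SessionMemory:
    def __init__(self, path: str):
        self.path = path
        self.entries = []
        if os.path.exists(path):
            with open(path) as f:
                self.entries = json.load(f)

    def remember(self, kind: str, text: str) -> None:
        """Persist a decision or pattern so future sessions can recall it."""
        self.entries.append({"kind": kind, "text": text})
        with open(self.path, "w") as f:
            json.dump(self.entries, f)

    def recall(self, kind: str) -> list:
        return [e["text"] for e in self.entries if e["kind"] == kind]

path = os.path.join(tempfile.mkdtemp(), "memory.json")
session_one = SessionMemory(path)          # starts empty: the amnesia state
session_one.remember("decision", "ship the TikTok support agent first")
session_two = SessionMemory(path)          # a later session reloads the log
```

A vector store adds semantic retrieval on top of this idea (finding *relevant* memories, not just all of them), but the compounding effect described in the episode comes from the persistence itself.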

    Between Two COO's with Michael Koenig
    AI Agents Need Logins Too: Identity, Security, and the Future of AI | Greg Keller, CTO, JumpCloud

    Between Two COO's with Michael Koenig

    Play Episode Listen Later Mar 4, 2026 32:01


    Get 90 days of Fellow free at Fellow.ai/coo

In this episode, Michael Koenig speaks with Greg Keller, co-founder and CTO of JumpCloud, about identity access management and why it's becoming one of the most important operational systems in the age of AI. Greg explains how traditional identity systems were designed for office-based companies running Microsoft infrastructure and why that model broke as companies moved to SaaS, cloud infrastructure, and remote work. The discussion then turns to the next big shift: the rise of AI agents and synthetic identities inside organizations. As companies deploy more AI tools, the number of machine identities may soon outnumber human employees. Managing what those systems can access will become a critical security and operational challenge.

Topics Covered
What a CTO actually does: Greg explains the different types of CTO roles and how technology leaders help companies anticipate where the market is headed.
Identity Access Management explained simply: IAM answers three core questions inside every company. Who are you? What can you access? How is that access managed?
Why the old IT model broke: traditional identity systems were built for on-premise offices and Microsoft infrastructure. Modern companies now operate across SaaS applications, cloud infrastructure, remote work environments, and multiple operating systems.
How JumpCloud approaches identity: JumpCloud was built to manage identity across devices, applications, and infrastructure regardless of platform.
Where Okta fits in the ecosystem: Okta helped modernize browser-based authentication through Single Sign-On, while JumpCloud focuses on broader identity infrastructure.

AI, Security, and Synthetic Identities
Why COOs should push AI adoption: Greg argues AI adoption is no longer optional. Companies must encourage teams to improve productivity and efficiency using AI.
The rise of synthetic identities: AI agents, bots, APIs, and service accounts are becoming new actors inside companies that require identity governance.
Bots may soon outnumber employees: organizations will soon manage more machine identities than human ones.
AI as a potential insider threat: AI systems can become security risks if they are granted excessive permissions or misinterpret policies.
The API key governance problem: many AI integrations rely on API keys, which are often poorly managed and can create hidden security risks.

Key Takeaway
As companies adopt AI, identity access management becomes the control layer that determines what both humans and machines are allowed to do inside the organization. The companies that manage identity well will move faster and operate more securely.

Links:
Michael on LinkedIn: https://linkedin.com/in/michael-koenig514
Greg on LinkedIn: https://www.linkedin.com/in/gregorykeller/
JumpCloud: https://jumpcloud.com/
Between Two COO's: https://betweentwocoos.com
Episode Link: https://betweentwocoos.com/ai-agents-identity-access-greg-keller
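The machine-identity governance Greg describes can be sketched as scoped, expiring credentials: every agent or service account gets an owner, a least-privilege scope set, and a time-to-live. The class and scope names below are hypothetical illustrations, not JumpCloud's actual data model.

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch: govern machine identities (AI agents, bots, service
# accounts) the way IAM governs humans. Deny by default: an identity that is
# expired or out of scope gets nothing.
class MachineIdentity:
    def __init__(self, name: str, scopes: set, ttl_days: int):
        self.name = name
        self.scopes = scopes
        self.expires = datetime.now(timezone.utc) + timedelta(days=ttl_days)

    def allowed(self, scope: str) -> bool:
        """Grant access only while unexpired and only within declared scopes."""
        return datetime.now(timezone.utc) < self.expires and scope in self.scopes

bot = MachineIdentity("support-agent", {"tickets:read", "tickets:reply"}, ttl_days=30)
bot.allowed("tickets:read")   # in scope and unexpired: permitted
bot.allowed("billing:write")  # least privilege denies anything undeclared
```

The expiry is what addresses the API-key governance problem mentioned above: a key that must be renewed cannot be forgotten indefinitely, which is how most orphaned machine credentials accumulate.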

    The Insurtech Leadership Podcast
    API-First Insurance: When Brands Become Insurers

    The Insurtech Leadership Podcast

    Play Episode Listen Later Mar 4, 2026 30:35 Transcription Available


    Episode Overview
What does it actually take to run a digital insurance operation at the system level—not at the chatbot layer, but at the transaction layer? Joshua R. Hollander speaks with Wayne Slavin, CEO and Co-Founder of Sure, about the infrastructure required to deliver true digital insurance in an AI-agent world. Wayne describes Sure's role as "what Visa and Mastercard were in the early days of credit cards": building the rails for digital insurance distribution.

Key Topics
1. What "Digital Insurance" Really Means
Digital insurance is not about moving forms online or replacing phone calls with web interfaces. True digital insurance is straight-through processing from quote to policy issuance to payment, mirroring the speed and frictionlessness of e-commerce transactions. Wayne explains: "If that transaction requires some asynchronous process, some process that is interrupted, that we are actually not doing digital insurance." The benchmark: the entire process happens within minutes, not days or weeks.

2. API-First Infrastructure vs. Legacy Core Systems
Sure's platform differs fundamentally from monolithic core policy administration systems (like Guidewire or Duck Creek) because it was built API-first with data normalization at its foundation. Legacy cores encourage over-customization, which locks insurers into inflexible, non-compliant systems. Sure's approach standardizes policy data across product types (homeowners, renters, fine art, landlord), enabling rapid changes and integrations. Unlike legacy systems, Sure doesn't force carriers to choose between their existing tech and innovation; it coexists alongside legacy infrastructure.

3. Model Context Protocol (MCP) and AI Agent Integration
In February 2026, Sure announced the industry's first MCP server integration, enabling Claude AI agents to interact directly with Sure's infrastructure. MCP is a standardized protocol that allows AI agents to connect to business systems without custom integrations for each use case. This means insurers and brands no longer need 6-12 months of engineering to embed insurance; AI agents can quote, bind, manage, and renew policies conversationally.

4. Why Non-Endemic Brands Will Build Insurance
The next major insurance distributors won't be insurance companies. They'll be brands, e-commerce platforms, fintechs, and technology companies with massive customer bases. Wayne's economic thesis: if a brand can convert customers to insurance at 20-30x the typical rate (vs. giving customer data to a third party), the unit economics change entirely. Large brands now have a path to retain customers and data while building insurance revenue.

5. The Transaction Layer as Moat
Insurance isn't like retail or travel—regulatory consequences are real, policy admin systems are complex, and compliance layers must operate end-to-end. Sure's competitive advantage lies in building the foundational transaction layer that carriers either cannot replicate internally or would take years to engineer. This infrastructure layer is what enables AI agents to work reliably within compliance and regulatory constraints.

6. Insurance as an Ecosystem
The future isn't a single insurer offering multiple products—it's an ecosystem where brands, platforms, and technology companies collaborate on insurance delivery. AI agents, powered by Sure's infrastructure, enable this distributed, composable insurance ecosystem.

Key Quotes
- "What digital insurance really means is truly a straight-through process where you're starting to get a quote that quote will be a real quote. It's not an estimate. It will become a real policy. You will pay real money. You will get a real coverage document. And the timing of all of that is pretty close to what you expect from regular old e-commerce."
- "The next big insurance distributors won't be insurance companies. They will be brands. They'll be technology companies. They'll be fintechs. They'll be AI companies. They'll be companies that are currently sitting on large customer bases that don't have insurance products today."
- "Before MCP, if an AI agent wanted to interact with an insurance system, you'd have to build a custom integration for each system, each use case. MCP standardizes that."

Resources
• Sure: https://sure.com
• Wayne Slavin LinkedIn: https://www.linkedin.com/in/wayneslavin
• Horton International: https://www.horton-usa.com/

Subscribe & Connect
Tune in to the Insurtech Leadership Podcast for deep-dive conversations with insurance executives, founders, and innovators shaping the future of insurance technology.
• LinkedIn: https://www.linkedin.com/in/joshuarhollander/
• Podcast Showcase: https://www.linkedin.com/showcase/insurtech-leadership-show
#InsurTech #Insurance #InsuranceInnovation #Innovation #FutureOfInsurance #Leadership #ExecutiveLeadership
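MCP's role as a standard wire format can be made concrete by sketching the JSON-RPC 2.0 message an agent sends to invoke a tool on an MCP server. The `tools/call` method is part of the MCP specification; the tool name `quote_policy` and its arguments are hypothetical, not Sure's actual API.

```python
import json

# Sketch of an MCP client constructing a tool invocation. Because every MCP
# server speaks this same JSON-RPC shape, the agent needs no per-system custom
# integration: only the tool name and arguments change.
def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical insurance tool call; the field names are illustrative.
msg = mcp_tool_call(1, "quote_policy", {"product": "renters", "zip": "94103"})
```

This is the standardization the episode highlights: swapping `quote_policy` for any other advertised tool reuses the identical envelope, which is why agents can quote, bind, and renew conversationally without bespoke engineering per system.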

    Category Visionaries
    How Podero avoids "pilot purgatory" | Chris Bernkopf

    Category Visionaries

    Play Episode Listen Later Mar 4, 2026 16:55


    Podero builds software that enables European utilities to trade device flexibility—EVs, heat pumps, and batteries—on energy markets, generating trading revenues while reducing consumer bills by 20-30%. The company navigates a uniquely complex B2B motion: they must sell to utilities, secure API access from device OEMs, and ensure utilities successfully roll out consumer-facing products, all simultaneously. In this episode of BUILDERS, Chris Bernkopf, Co-Founder and CEO of Podero, breaks down how they escaped pilot purgatory with innovation departments, built a "10x better than doing nothing" business case that reaches commercial stakeholders, and why their 2026 strategy centers on radical simplification through deletion.

Topics Discussed
Origin story: from Raspberry Pi heat pump experiment to YC-backed utility infrastructure software
The "three miracle problem" go-to-market challenge and how they de-risked all three dimensions in parallel
Sales cycle mechanics: 6-12 month closes, avoiding innovation department traps, and multi-stakeholder orchestration
Market structure: 2,000 addressable utilities in Europe, 120 customers required for unicorn trajectory
Channel strategy evolution: cold outreach to re-engagement focus in a contained prospect universe
2026 GTM thesis: simplifying value propositions by deleting products and messaging
How YC learnings posted on bathroom doors maintain organizational discipline
The grid capacity fork in the road: expensive scarcity vs. cheap abundant renewable energy

    In-Ear Insights from Trust Insights
    In-Ear Insights: Switching AI Providers, Backup AI Capabilities

    In-Ear Insights from Trust Insights

    Play Episode Listen Later Mar 4, 2026


    In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the AI wars, switching AI, and why relying on a single AI vendor can jeopardize your business continuity. You’ll discover how to build an abstraction layer that lets you swap models without rebuilding your workflows and see practical no‑code tools and open‑weight models you can use as a safety net. You’ll understand the essential documentation and backup practices that keep your AI agents running. Watch the full episode to protect your AI strategy.

Watch the video here: Can’t see anything? Watch it on YouTube here.
Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-switching-ai-providers-backup-ai-capabilities.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!

Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.

Christopher S. Penn: In this week’s In Ear Insights, it is the AI Wars. Katie, you had some thoughts and some observations about the most recent things going on with Anthropic, with OpenAI, with Google XAI and stuff like that. So at the table, what’s going on?

Katie Robbert: I don’t want to get too deep into the weeds about why people are jumping ship on OpenAI and moving toward Claude. That’s in the news, it’s political, you can catch up on that. The short version is that decisions from the top at each of these companies have been made that people either agree with or don’t based on their own values and the values of their companies. When publicly traded companies make unpopular decisions that don’t align with the majority of their user base, people jump ship. They were like, okay, I don’t want to use you.
We’ve seen it with Target and many other companies that made decisions people didn’t feel aligned with their personal values. Now we are seeing people abandoning OpenAI and signing on to Anthropic’s Claude. That’s what I wanted to chat about today because we talk a lot about business continuity and risk management. What happens when you get too closely tied to one piece of software and something goes wrong? We’ve talked about this on past episodes in theory because, up until now, software outages have generally been temporary. You don’t often see a mass exodus of a very popular piece of software that people have built their entire businesses around. Before we get into what this means for the end user and possible solutions, Chris, I would like to get your thoughts, maybe your cat’s thoughts on what’s going on. Christopher S. Penn: One of the things we’ve said from very early on in the AI space, because it changes so rapidly, is that brand loyalty to any vendor is generally a bad idea. If you were a hater of Google Bard—for good reason—Bard was a terrible model. If you said, I’m never going to touch another Google product again, you would have missed out on Gemini and Gemini 3 and 3.1, which is currently the top state‑of‑the‑art model. If you were all in on Claude, when Claude 2.1 and 2.5 came out and were terrible, you would have missed out on the current generation of Opus 4.6 and so on. Two things come to mind. One, brand loyalty in this space is very dangerous. It is dangerous in tech in general. Not to get too political, but the tech companies do not care about you, so there’s no reason to give them your loyalty. Second, as people start building agentic AI, you should think about abstraction layers. This concept dates back to the earliest days of computing: we never want to code directly against a model or an operating system. Instead we want an abstraction layer that separates our code from the machinery. 
It’s like an engine compartment in a car—you should be able to put in a new engine without ripping apart the entire car. If you do that well when building AI agents, when a new model comes along—regardless of political circumstances or news headlines—you can pull the old engine out, install the new one, and keep delivering the highest‑quality product. Katie Robbert: I don’t disagree with that, but that is not accessible to everybody, especially smaller businesses that view software like OpenAI or Google’s Gemini as desperately needed solutions. We’ve relied on Claude and Co‑Work, its desktop application, heavily. Over the weekend I realized how reliant I’ve become on it in the past two weeks. If it stopped working, what does that mean for the work I’m trying to move forward? That’s a huge concern because I don’t have the coding skills or resources to replicate it right now. What I’ve been doing in Co‑Work started because we’re limited on resources, but Co‑Work has advanced to the point where I can replicate what I would need if I hired a team of designers, developers, and marketers. It shook me to my core that this could go away. So what does that mean for me, the business owner, in the middle of multiple projects if I can’t access them? This morning Claude had an outage—unsurprisingly, the servers were overloaded because people are stepping away from OpenAI and moving into Claude. Claude released an ad: “Switch to Claude without starting over. Bring your preferences and context from other AI providers to Claude. With one copy‑paste, Claude updates its memory and picks up right where you left off. Memory is available on all paid plans.” For many people the ability to switch from one large language model to another felt like a barrier because everything built inside OpenAI couldn’t be transferred. Claude removed that barrier, opening the floodgates, and their servers were overloaded. Users who had been using the system regularly were like, what do you mean? 
I can’t get the work done I planned for this morning. Christopher S. Penn: There are two different answers depending on who you are. For you, Katie, as the CEO and my business partner, I would come over, say we’re going to learn Claude Code, install the terminal application, and install Claude Code Router, which allows you to switch to any model from any provider so you can continue getting work done. Unfortunately, that isn’t a scalable option for everyone in our community. My suggestion for others is that it’s slightly harder but almost every major company has an environment where you can install a no‑code solution that provides at least some of those capabilities. Google’s is called Antigravity. OpenAI’s is called Codex. Alibaba’s can be used within tools like Cline or Kilo Code. If you have backed up your prompts and workflows, you can move them into other systems relatively painlessly. For example, Google’s Antigravity supports the skills format, so if you’ve built skills like the Co‑CEO, you can bring them into Antigravity. It’s not obvious, but you can port from one system to another relatively quickly. Katie Robbert: That brings us to the point that software fails—it’s just code. What is your backup plan if the system you’re heavily reliant on goes away? We’ve always said hypothetically, “if it goes away…,” and now we’re at that point. Not only are people leaving a major software provider, they are also struggling with switching costs. They’re struggling to bring their stuff over because everything lives within the system. A lot of people are building and not documenting, and that’s a problem. Christopher S. Penn: It is a problem. If you’ve been in the space for a while and understand the technology, backups and fallback systems have gotten incredibly good. About a month ago Alibaba released Qwen 3.5 in various sizes. The version that runs on a nice MacBook is really good—scary good. 
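The abstraction layer Chris describes—code that talks to a router instead of any one provider—can be sketched in a few lines of Python. This is an illustrative sketch, not any vendor's actual SDK: the backend functions are hypothetical stand-ins for real provider calls, and the names are made up.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class ModelRouter:
    """Routes prompts to whichever 'engine' is registered, with a fallback."""
    backends: Dict[str, Callable[[str], str]]
    default: str

    def chat(self, prompt: str, backend: Optional[str] = None) -> str:
        name = backend or self.default
        if name not in self.backends:
            # Unknown or retired engine: fall back instead of failing the workflow.
            name = self.default
        return self.backends[name](prompt)

# Stub backends standing in for real provider SDK calls.
def fake_claude(prompt: str) -> str:
    return f"[claude] {prompt}"

def fake_local(prompt: str) -> str:
    return f"[local] {prompt}"

router = ModelRouter(
    backends={"claude": fake_claude, "local": fake_local},
    default="local",
)

print(router.chat("Summarize Q3 results"))            # default engine
print(router.chat("Summarize Q3 results", "claude"))  # explicit engine choice
```

Swapping engines then means registering a new function; none of the workflow code that calls `router.chat` has to change.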
It’s about the equivalent of Gemini 3 Flash, the day‑to‑day model many folks use without realizing it. Having an open‑weights model you can install on a laptop that rivals state‑of‑the‑art as of three months ago is nuts. The challenge is that it’s not well documented, but it’s something we’ve been saying for two or three years: if you’re going all in on AI, you need a backup system that is capable. The good news is that providers like Alibaba (Qwen), Moonshot AI (Kimi), and Zhipu AI—many Chinese companies—ensure the technology isn’t going away. So even if Anthropic or OpenAI went out of business tomorrow, you have access to the technologies themselves. You can keep going while everyone else is stuck. Katie Robbert: If it’s not a concern for executives mandating AI integration, it should open eyes to the possibility of failure. Let’s be realistic—it’s not going to happen tomorrow, but it makes me think of the panic when Google Analytics switched from Universal Analytics to GA4. The systems aren’t compatible, data definitions changed, and companies lost historic data. Fortunately we had a backup plan. Chris, you always ran Matomo in the background as a secondary system in case something happened with Google Analytics, so we still had historic data. We’re at a pivotal point again: if you don’t have a backup system for your agentic AI workflows, you’re in trouble. Guess what? It’s going to fail, it will come crashing down, and you won’t know what to do. So let’s figure that out. Christopher S. Penn: If you’re building with agentic autonomous systems like OpenClaw and its variants and you’re not building on an open‑weights model first, you’re taking unnecessary risks. Today’s open‑weights models like Qwen 3.5 and MiniMax M2.5 are smart, capable, and about one‑tenth the cost of Western providers. If you have a box on your desk, you can run your life on it. 
You’d better use a model or have an abstraction layer that allows you to switch models so you can continue to run your life from this box. I would not rely on a pure API play from one major provider because if they go away, the transition will be rough. Now is the best time to build that level of abstraction. If you’re using tools like Claude Code or other coding tools, you can have them make these changes for you. You have to be able to articulate it, and you should articulate with the 5P framework by Trust Insights. Once you do that, you can be proactive about preventing disasters. Katie Robbert: Is that unique to coding tools or does it also apply to chats and custom LLMs people have built? Obviously we have background information for Co‑CEO well documented, but let’s say we didn’t. Let’s say we built it and it lived as a skill somewhere. That’s a concern because we’ve grown to heavily rely on that custom agent. What if Claude shuts down tomorrow? We can’t access it. What do we do? Christopher S. Penn: The Co‑CEO—those fancy words like agents and skills—they’re just prompts. You can take that skill, which is a prompt file, fire up AnythingLLM, turn on Qwen 3.5, and it will read that skill and get to work. You can do that in consumer applications like AnythingLLM, which is just a chat box like Claude. The only thing uniquely missing right now is an equivalent for Claude Co‑Work, but it won’t be long before other tools have that. Even today you can use a tool like Cline or Kilo Code inside Visual Studio Code, install those skills, and have access to them. So even with Co‑CEO, you can drop that skill because it’s just a prompt and resume where you left off, as long as you have all data backed up and not living in someone else’s system, and you have good data governance. The tools are almost agnostic. All models are incredibly smart these days, even open‑weights models. 
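The "skills are just prompts" point is easy to demonstrate: a skill file is plain text on disk, and "installing" it in another tool amounts to loading it as the system message in the chat-message shape most providers accept. A minimal sketch—the file name and skill text here are made up for illustration:

```python
from pathlib import Path
from typing import Dict, List

def load_skill(path: Path) -> str:
    """A 'skill' is just a prompt file; loading it is all installation means."""
    return path.read_text(encoding="utf-8")

def build_messages(skill_text: str, user_prompt: str) -> List[Dict[str, str]]:
    # The common role/content message shape, portable across chat APIs.
    return [
        {"role": "system", "content": skill_text},
        {"role": "user", "content": user_prompt},
    ]

# Hypothetical skill file, written here so the sketch is self-contained.
skill_file = Path("co_ceo_skill.md")
skill_file.write_text("You are the Co-CEO. Review plans against the 5P framework.")

messages = build_messages(load_skill(skill_file), "Review our Q3 launch plan.")
print(messages[0]["role"], "->", messages[1]["content"])
```

Because nothing above is tied to a vendor, the same skill file can be handed to any model that accepts a system prompt.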
I saw an open‑weights model over the weekend with 13 billion parameters that runs in about 12 GB of VRAM, so a mid‑range gaming laptop can run it. Co‑CEO Katie could live in perpetuity on a decent laptop. Katie Robbert: But you have to have good data governance. You need backups and documentation, then you can move them to any other system to make it more tool‑agnostic. If you don’t have good data governance or the basic prompts you’re reusing, we’ve been talking about this since day one. What’s in your prompt library? What frameworks are you using? What knowledge blocks have you created? If you don’t have those, you need to stop, put everything down, and start creating them, because you’ll be in a world of hurt without the basics. If you have a custom GPT you use daily, is it well documented—how it works, how it’s updated, how it’s maintained—so that if you can no longer subscribe to OpenAI, you can move to a different system. 
It makes me incredibly happy because look how much more you’ve accomplished with these systems and how little panic you have about the AI wars—you can use whatever system you feel like that day. Christopher S. Penn: Exactly. For folks listening, you can catch this on YouTube. This is my folder of all stuff—my Claude environment. It lives outside of Claude, on my hard drive, backed up to Trust Insights’ Google Cloud every Monday and Friday. It includes agents, document reviewers, the CFO, Co‑CEO, Katie, documentation, rules files for code standards, reference and research knowledge blocks, individual skills, and a separate folder of knowledge blocks. All of this lives outside any AI system—just files on disk backed up to our cloud twice a week. So no matter what, if my laptop melts down or gets hit by a meteor, I won’t lose mission‑critical data. This is basic good data governance. No matter what happens in the industry, if all the Western tech providers shut down tomorrow, I can spin up LM Studio, turn on the quantized model, and run it on my computer with my tools and rules. Our business stays in business when the rest of the world grinds to a halt. That will be a differentiating factor for AI‑forward companies: have a backup ready, flip the switch, and we’re switched over. Katie Robbert: If we look at it in a different context, it’s like the panic when a human decides to leave a company. You have that two‑week window to download everything they’ve ever done—wrong approach. It’s the same if you don’t have documentation for a human and no redundancy plan. If Chris wants to go on vacation, everything can’t come to a screeching halt. We’ve put controls in place so he can step away. We want that for any employee. Many companies don’t have even that basic level of documentation. If each analyst does a unique job and no one else can do it, you have no redundancy, no backup plan. If that analyst leaves for a better job, clients get mad while you scramble. 
It’s the same scenario with software. Christopher S. Penn: Now that’s a topic for another time, but one thing I’ve seen is that the more you as an individual hoard knowledge, the more irreplaceable you theoretically are. That’s not true. Many protect job security by not documenting, but if everything is well documented, someone less competent could replace you. We saw Jack Dorsey’s company Block cut its workforce by 5,000, saying they’re AI‑forward. There’s a constant push‑pull: if you have SOPs and documentation, what’s to stop you from being replaced by a machine? Katie Robbert: I say bring it. I would love that, but I’m also professionally not an insecure human. You can’t replace a human’s critical thinking. If the majority of what you do is repetitive, that’s replaceable. What you bring to the table—creativity, critical thinking, connecting the dots before AI, documentation, owning business requirements, facilitating stakeholder conversations—is not easily replaceable. If Chris comes to me and says I’ve documented everything you do, and we give it all to a machine, I would say good luck. Christopher S. Penn: Yeah, it’s worth a shot. Christopher S. Penn: All right. To wrap up, you absolutely should have everything valuable you do with AI living outside any one AI system. If it’s still trapped in your ChatGPT history, today is the day to copy and paste it into a non‑AI system, ideally one that’s shared and backed up. Also, today is the day to explore backup options—look for inference providers that can give you other options for mission‑critical stuff. No matter what happens to the big‑name brands, you have backup options. If you have thoughts or want to share how you’re backing up your generative and agentic AI infrastructure, join our free Slack group at Trust Insights AI Analytics for Marketers, where over 4,500 marketers—human as far as we know—ask and answer each other’s questions daily. 
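The backup habit Chris describes—keeping the whole AI workspace as plain files and archiving it on a schedule—needs nothing beyond the standard library. A sketch with made-up folder names; in practice you would point `workspace` at your real skills/knowledge-blocks folder and run it from cron or Task Scheduler:

```python
import shutil
import time
from pathlib import Path

def backup_workspace(workspace: Path, dest_dir: Path) -> Path:
    """Zip the AI workspace (skills, rules, knowledge blocks) into a timestamped archive."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(
        str(dest_dir / f"ai-workspace-{stamp}"),  # base name; .zip is appended
        "zip",
        root_dir=workspace,
    )
    return Path(archive)

# Demo with a throwaway workspace so the sketch is self-contained.
ws = Path("demo_workspace")
(ws / "skills").mkdir(parents=True, exist_ok=True)
(ws / "skills" / "co_ceo.md").write_text("prompt text")

out = backup_workspace(ws, Path("backups"))
print(out.name)
```

Because the archive is just files, restoring on a new machine or pointing a different tool at the same folder is a copy, not a migration.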
Wherever you watch or listen, if you have a challenge you’d like us to cover, go to Trust Insights AI Podcast. You can find us wherever podcasts are served. Thanks for tuning in. We’ll talk to you on the next one. Katie Robbert: Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data‑driven approach. Trust Insights specializes in helping businesses leverage data, AI, and machine learning to drive measurable marketing ROI. Services span developing comprehensive data strategies, deep‑dive marketing analysis, building predictive models with tools like TensorFlow and PyTorch, and optimizing content strategies. Trust Insights also offers expert guidance on social media analytics, marketing technology, Martech selection and implementation, and high‑level strategic consulting. Encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic, Claude, DALL‑E, Midjourney, Stable Diffusion, and Meta Llama, Trust Insights provides fractional team members such as CMO or data scientist to augment existing teams. Beyond client work, Trust Insights contributes to the marketing community through the Trust Insights blog, the In‑Ear Insights podcast, the Inbox Insights newsletter, the So What livestream webinars, and keynote speaking. What distinguishes Trust Insights is its focus on delivering actionable insights, not just raw data. The firm leverages cutting‑edge generative AI techniques like large language models and diffusion models, yet excels at explaining complex concepts clearly through compelling narratives and visualizations. 
Data storytelling and a commitment to clarity and accessibility extend to educational resources that empower marketers to become more data‑driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a midsize business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the evolving landscape of modern marketing and business in the age of generative AI.

    Politics Politics Politics
    Final Texas Primary Predictions! Pentagon vs. Anthropic Explained. The False Front of Executive Actions (with Kenneth Lowande)

    Politics Politics Politics

    Play Episode Listen Later Mar 3, 2026 82:16


    The fight between Anthropic and the Pentagon goes deeper than a simple contract dispute. In some ways, it's the culmination of a tech rivalry that's been simmering since the early days of OpenAI.

Anthropic wasn't some scrappy outsider that stumbled into national security. It already had top secret clearance, had been working with the CIA for years, and had seemingly made peace with the idea that its models would be used inside the American intelligence apparatus. So let's dispense with the notion that this is a company discovering government power for the first time. The rupture didn't happen because the Pentagon suddenly knocked on the door. The door had been open.

The disagreement came down to terms. Anthropic wanted to draw lines beyond the law. No mass surveillance of civilians. No autonomous weapons without a human in the loop. Not “we'll follow U.S. statute.” They wanted something stricter, something moral, something aligned with Dario Amodei's effective altruist worldview. The Pentagon's response was blunt: we obey US law, but we don't sign up to a private company's expanded terms of service.

That's where the temperature rose.

Because this isn't just any company. Dario left OpenAI over exactly this kind of philosophical divide. He believed OpenAI was becoming too commercial, too focused on product, not focused enough on safety and existential risk. So he built Anthropic as the safety lab. The kinder, gentler, crunchier alternative. But ironically, Anthropic was already cashing government checks while telling itself it was the adult in the room.

From the Pentagon's perspective, the risk was operational. If you're going to integrate a model into defense infrastructure, you can't have the supplier yank the API mid-mission because the CEO decides the vibes are off. 
There were even reports that during negotiations, Pentagon officials asked whether Anthropic would allow its technology to respond to incoming ballistic missiles if civilian casualties were possible. The alleged answer, “you can always call,” wasn't reassuring to people whose job is to eliminate hesitation.

And hovering over all of this is Sam Altman.

Because while Anthropic was sparring with the Department of Defense, OpenAI was in conversation. The rivalry here isn't new. The effective altruist faction at OpenAI once helped push Altman out of his own company before he managed to return days later. Anthropic ran a Super Bowl ad that took thinly veiled shots at OpenAI's commercialization. So when Anthropic stumbled, OpenAI stepped in and secured its own defense agreement.

Then came the nuclear option talk: labeling Anthropic a “supply chain risk.” In Pentagon language, this is the category you reserve for companies like Huawei, for hostile foreign hardware, for entities you believe can't be trusted inside American systems. Most people inside and outside the tech landscape agree that goes too far. Anthropic may be principled. It may be stubborn. It may even be naive. But it isn't malicious.

Meanwhile, something fascinating happened in the market. Claude, Anthropic's consumer product, exploded in downloads. It became a kind of digital resistance symbol, a signal that you weren't with the war machine. The company that once insisted it didn't care about consumer dominance suddenly found itself riding a consumer wave, experiencing mass traffic it hadn't planned to account for.

What this entire episode reveals is that AI isn't a lab experiment anymore. It's infrastructure. It's missile defense. It's geopolitical leverage. And when you build something that powerful, you don't get to exist outside power structures. You either align with them, fight them, or try to morally outmaneuver them. Anthropic tried the third path. 
The Pentagon reminded them that in wartime procurement, ambiguity isn't a feature.

Cooler heads may yet prevail. Right now, the Pentagon's got bigger problems than a Silicon Valley slap fight. But this was the moment when AI stopped being a culture war talking point and became a live wire in national security. And once you plug into that grid, there's no going back.

Chapters
00:00:00 - Intro
00:02:25 - Texas Primary Final Predictions
00:15:20 - The Pentagon vs. Anthropic, Explained
00:40:30 - Update
00:40:52 - Iran
00:45:41 - Clintons
00:49:08 - Kalshi
00:52:19 - Interview with Kenneth Lowande
01:18:03 - Wrap-up

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.politicspoliticspolitics.com/subscribe

    Developer Tea
    AI Moves the Bottleneck - Are You Ready for What That Means For Your Career?

    Developer Tea

    Play Episode Listen Later Mar 3, 2026 29:52


    AI is bringing massive changes to our industry, but it's not just about how fast you can write code or use agentic flows. In this episode, I explore how AI is fundamentally shifting the economic bottleneck of software development, and how you can use your systems-thinking engineering mindset to adapt and thrive in this new era.

    The Unofficial Shopify Podcast
    We Built a Shopify App. Here's What Broke.

    The Unofficial Shopify Podcast

    Play Episode Listen Later Mar 3, 2026 44:25


    "Even if it ends up being something that ends the universe, it at least will have been a fun ride. So let's do it." That's what Karl Meisterheim said when Kurt pitched him on building a Shopify app. Over a year later, Promo Party Pro is live, and the journey from "this should be easy" to "why is the cart API doing that" was anything but smooth. Kurt, Paul Reda, and Karl sit down to talk through the whole thing: why free gift with purchase is Kurt's favorite promo, the edge cases that nearly broke them, the popup cooldown debate that consumed days, and what it actually takes to ship a polished app on Shopify in 2025. SPONSORS Swym - Wishlists, Back in Stock alerts, & more getswym.com/kurt Cleverific - Smart order editing for Shopify cleverific.com Zipify - Build high-converting sales funnels zipify.com/KURT LINKS Promo Party Pro: https://promoparty.app/ Crowdfunder App: https://apps.shopify.com/crowdfunder Ethercycle: https://ethercycle.com WORK WITH KURT Apply for Shopify Help ethercycle.com/apply See Our Results ethercycle.com/work Free Newsletter kurtelster.com The Unofficial Shopify Podcast is hosted by Kurt Elster and explores the stories behind successful Shopify stores. Get actionable insights, practical strategies, and proven tactics from entrepreneurs who've built thriving ecommerce businesses.

    The Jim Colbert Show
    Meat Shower Monday

    The Jim Colbert Show

    Play Episode Listen Later Mar 3, 2026 155:52


    Monday – Jim announces that he is now a grandpa. Do you enjoy sad songs? Should Waymos be able to park in any legal spot? We learn that Florida has the worst roads in America. Nutritionist Sara Geha talks vitamins. Brandon Kravitz on UCF hoops, Lionel Messi dominating Orlando City, API at Bay Hill, and a Magic minute. Plus, JCS News, JCS Trivia & You Heard it Here First.

    The Cybersecurity Defenders Podcast
    North Korean malware interviews, FortiGate firewall compromised, Cisco zero-day & Citrini Research AI future / Intel Chat [#298]

    The Cybersecurity Defenders Podcast

    Play Episode Listen Later Mar 3, 2026 42:30


    In this episode of The Cybersecurity Defenders Podcast, we discuss some intel being shared in the LimaCharlie community.

GitLab's Threat Intelligence Team published detailed findings on North Korean activity associated with the Contagious Interview campaign and broader IT worker operations.

A financially motivated, Russian-speaking threat actor used generative AI tools to compromise more than 600 Fortinet FortiGate firewall instances between January and February, according to Amazon Web Services.

Cisco has released emergency patches for a critical zero-day vulnerability in its Catalyst SD-WAN products that has been actively exploited in the wild.

Citrini Research presents a forward-looking scenario framed as a June 2028 macro memo describing a “Global Intelligence Crisis” triggered by abundant AI-driven intelligence.

Support our show by sharing your favorite episodes with a friend, subscribe, give us a rating or leave a comment on your podcast platform.

This podcast is brought to you by LimaCharlie, maker of the SecOps Cloud Platform, infrastructure for SecOps where everything is built API first. Scale with confidence as your business grows. Start today for free at limacharlie.io.

    The Plant Movement Podcast
    FloraTrack Ai | Smart Glasses & Smart Growers: The Future of IPM Is Here with Max and Dawson

    The Plant Movement Podcast

    Play Episode Listen Later Mar 3, 2026 40:19


    Send a text

On Episode 90 of The Plant Movement Podcast, Willie Rodriguez sits down with Max and Dawson, the founders of Floratrack AI, to discuss how artificial intelligence and AR smart glasses are transforming scouting, IPM, and plant tracking inside modern greenhouses.

What started six years ago as a pest identification app has evolved into a full-scale platform built “for growers, by growers.” Floratrack now helps operations track thousands of plants, log pest pressure in real time, standardize reporting, and integrate directly with systems like Plantiful through API connections.

The game-changing innovation? Hands-free AR glasses that allow growers to:
• Log pests and crop issues while walking
• Analyze sticky cards
• Identify rows and fields instantly
• Access reports without stopping workflow

No more clipboards. No more lost notes. No more relying on word-of-mouth.

We also dive into:
• Data-driven decision making
• Standardization and accountability in the next 5–10 years
• Reducing unnecessary spraying
• Integration with future platforms
• Data privacy and ownership

Floratrack offers user-based pricing and pilot programs for both the platform and glasses, helping growers test the system before fully committing.

If you're serious about modernizing your greenhouse operation, this episode gives you a clear look at where the industry is heading. Innovation isn't optional anymore — it's the standard.

FloraTrack Ai
Email: dawson@floratrack.ca
Call: (250) 882-5229
Web: https://floratrack.ca/ai-smart-glasses

The Plant Movement Podcast
Email: eddie@theplantmovementnetwork.com & willie@theplantmovementnetwork.com
Call: (305) 216-5320
Web: https://www.theplantmovement.com
Follow Us: IG: https://www.instagram.com/theplantmovementpodcast

A's Ornamental Nursery
WE GROW | WE SOURCE | WE DELIVER
Call: (305) 216-5320
Web: https://www.asornamental.com
Follow Us: IG: https://www.instagram.com/asornamentalnursery

The Nursery Growers
Call: 786-522-4942
Email: info@thenurserygrowers.com
IG: www.instagram.com/thenurserygrowers
Web: www.thenurserygrowers.com

Plant Logistics Co.
(Delivering Landscape Plant Material Throughout the State of Florida)
Call: (305) 912-3098
Web: https://www.plantlogisticsco.com
Follow Us: IG: https://www.instagram.com/plantlogistics

Directed and Produced by Eddie EVDNT Gonzalez

Disclaimer: The contents of this podcast/youtube video are for informational and entertainment purposes only and do not constitute financial, accounting, or legal advice. I can't promise that the information shared on my posts is appropriate for you or anyone else. By listening to this podcast/youtube video, you agree to hold me harmless from any ramifications, financial or otherwise, that occur to you as a result of acting on information found in this podcast/youtube video.

Support the show

    Circles Off - Sports Betting Podcast
    Betting Madness, New Rules & Market Chaos: What You Need to Know | Presented by Kalshi

    Circles Off - Sports Betting Podcast

    Play Episode Listen Later Mar 2, 2026 101:38


    Massachusetts has officially approved a regulation requiring sportsbooks to notify bettors within 48 hours if their account is limited, a move that's sparking conversations across the betting world. We also break down the latest market settlements, including the high-profile Ali Khamenei market, and highlight unusual pricing in upcoming MLB markets, giving bettors insight into what's happening behind the scenes. Join host Jacob Gramegna with professional sports bettor and CEO of The Hammer, Rob Pizzola, basketball originator Kirk Evans, and sophisticated square Geoff Fienberg as they react to these developments, provide analysis, and share what bettors should pay attention to in today's fast-moving betting landscape.

    Where It Happens
    Claude Code marketing masterclass [from idea to making $$]

    Where It Happens

    Play Episode Listen Later Mar 2, 2026 54:06


I sit down with Cody Schneider, growth engineer and co-founder of Graphed, for a live, hands-on crash course in GTM (go-to-market) engineering powered by Claude Code. Cody walks through how he runs multiple AI agents simultaneously to handle everything from bulk Facebook ad creation and LinkedIn outreach to cold email campaigns and live data analysis — tasks that used to require a team of dozens. By the end of the episode, you'll have a full understanding of how to set up your own agent workflow, the specific tools involved, and why domain expertise paired with AI is the real competitive advantage right now.

Cody's GTM Toolkit:
• AI/Agent Tools: Claude Code, Perplexity API, OpenAI Codex
• Marketing & Outreach: Instantly AI (cold email), Phantom Buster (LinkedIn scraping/automation), Apollo API (data enrichment), Million Verifier (email verification), Raphonic (podcast host scraping)
• Advertising: Facebook Ads API, Facebook Ads Library (competitor research), Nano Banana Pro (AI image generation), Kai AI (bulk image generation), HeyGen API (UGC/video generation)
• Infrastructure & Deployment: Railway.com (servers, on-the-fly databases/Postgres), Vercel (deployment)
• Data & Analytics: Graphed / Graphed MCP (data warehouse, live data feeds), Google Analytics 4
• CRM & Communication: Salesforce (mentioned as comparison), Intercom, SendGrid API, Slack, Cal.com API
• Productivity & Design: Notion, Super Whisper (voice transcription), Claude Code front-end design skill, HTML to Canvas (for converting React components to PNGs)

Timestamps
00:00 – Intro
02:02 – What Is GTM Engineering?
05:12 – Setting Up Your Agent Workspace & Environment File
07:54 – Live Demo: LinkedIn Auto-Responder
09:56 – Live Demo: Bulk Facebook Ad Generator
12:31 – Live Demo: Cold Email Campaign Automation (Raphonic + Instantly)
14:47 – Live Demo: Creating Notion Documents via Claude Code
16:46 – Live Demo: Bulk Ad Creative Generator
26:05 – Live Demo: LinkedIn Engagement Scraper to Cold Email Pipeline
28:16 – Context Switching Across Tasks
29:19 – Live Demo: Bulk Ad Generator
31:41 – Live Demo: Data Analysis: Turning Off Low-Performing Ads
35:28 – Summary of GTM Engineering Workflow
37:48 – Deploying Agents and On-the-Fly Databases with Railway for Data Analysis
41:28 – The Dream of Autonomous Marketing
48:50 – Building API-First Products and Agent-Native Infrastructure

Key Points
• GTM engineering has evolved from Clay-style data enrichment workflows into full-stack agent orchestration — where one person running multiple Claude Code agents can replace the output of a large team.
• The practical setup starts with a single folder containing your environment file (API keys for every tool in your stack), transcription software like Super Whisper, and Claude Code.
• Cody demonstrates running seven or more agents simultaneously across LinkedIn outreach, Facebook ad creation, cold email campaigns, Notion document generation, and live data dashboards.
• Code-generated ad creative (React components exported as PNGs) costs nearly nothing to produce at scale and allows rapid testing of messaging variations before investing in polished visuals.
• Deploying proven workflows to Railway turns one-off agent tasks into always-on, autonomous processes that run 24/7.
• Domain expertise is the real multiplier — the vocabulary you bring from your field determines the quality of output you can extract from these tools.

The #1 tool to find startup ideas/trends - https://www.ideabrowser.com
LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. We turn 'what if' into reality with AI, apps, and next-gen products: https://latecheckout.agency/
The Vibe Marketer - Resources for people into vibe marketing/marketing with AI: https://www.thevibemarketer.com/

FIND ME ON SOCIAL
X/Twitter: https://twitter.com/gregisenberg
Instagram: https://instagram.com/gregisenberg/
LinkedIn: https://www.linkedin.com/in/gisenberg/

FIND CODY ON SOCIAL
Cody's startup: https://www.graphed.com/
X/Twitter: https://x.com/codyschneiderxx
Youtube: https://www.youtube.com/@codyschneiderx
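The "single folder with an environment file" setup from the Key Points can be sketched in a few lines. The key names below are placeholders for illustration, not Cody's actual configuration:

```python
import os

def load_env(path: str = ".env") -> dict:
    """Parse a simple KEY=VALUE environment file and export it to os.environ.

    Blank lines and '#' comments are skipped; whitespace around keys and
    values is trimmed. Every agent launched from this folder then inherits
    the same credentials without keys being pasted into prompts."""
    env = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            if key:
                env[key.strip()] = value.strip()
    os.environ.update(env)
    return env
```

With a file containing lines like `INSTANTLY_API_KEY=...` and `APOLLO_API_KEY=...` in the workspace, each agent reads whatever keys its task needs. A real setup should also keep `.env` out of version control.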

    #DoorGrowShow - Property Management Growth
    DGS 330: The AI Illusion: Protecting Your Reputation in a Manipulated World

    #DoorGrowShow - Property Management Growth

    Play Episode Listen Later Mar 2, 2026 18:20


When your property management business isn't growing, hiring a salesperson might seem like the obvious solution, but what if that's actually where most owners go wrong… In this episode of the #DoorGrowShow, property management growth experts Jason and Sarah Hull break down why most BDM hires fail, the critical mistakes owners make with commission-only roles, and the exact systems required to make a salesperson successful. They dive into DoorGrow's Three Fits framework, the three non-negotiable ingredients for BDM success, and tease a game-changing new growth model designed to help property managers scale without burnout, bad leads, or broken systems.

You'll Learn
(00:00) Introduction: The Three Fits for Hiring
(01:16) The Challenges of Hiring a Business Development Manager (BDM)
(02:42) The Three Key Ingredients for BDM Success
(04:40) Mistakes in BDM Compensation: The Commission-Only Pitfall
(05:40) The Three Roles of a BDM and the Problem with Buying Leads
(09:54) The "Door Machine" Teaser: The Easy Button for Growth
(14:39) Advanced Community, AI, and Final Thoughts

Quotables
"A BDM has zero chance of success if you hire the wrong person."
"If they're not all three, they will fail. Or you'll fire them. Or they will leave you because they're not making enough money."
"If you do not have the right system to plug a BDM or a salesperson into, you can hire as many of them as you want, and they will still not work."

Resources
DoorGrow and Scale Mastermind
DoorGrow Academy
DoorGrow on YouTube
DoorGrowClub
DoorGrowLive

Transcript
Jason & Sarah Hull (00:01)
Five, four, three, two, one. All right, we are Jason and Sarah Hull, the owners of DoorGrow, the world's leading and most comprehensive coaching and consulting firm for long-term residential property management entrepreneurs. For over a decade and a half, we have brought innovative strategies and optimization to the property management industry.
At DoorGrow, we are on a mission to transform property management business owners and their businesses. We want to transform the industry, eliminate the BS, build awareness, change perception, expand the market, and help the best property management entrepreneurs win. Now, let's get into the show.

All right, you can probably hear our dogs losing their mind in the background. Maybe not. It was perfect timing. Yeah, great timing. You started the episode and then they decided. And then they started barking. Well, somebody's outside. That's why they're barking. Okay, they're protecting the house. All right, so what we wanted to talk today about is protecting you a little bit.

So one of the things that's been going viral lately all over social media is this Moltbook. If you haven't heard of Moltbook, it is a social network, supposedly. It's a social network created by AI bots. Basically, the only people that have access to it, supposedly, are AI agents, and they go in there and they're talking about their humans. And this comes from this new AI tool that was originally called Clawd, spelled like a claw, which is not the Claude by Anthropic, but a different Clawdbot. And then they got sued by Anthropic for name infringement or confusion, and they changed their name to something else, and then to something else, and now it's called OpenClaw. But basically it's an AI tool that you can build or put on your computer; it runs locally and it proactively tries to do things for you.

There's a lot of security risks with this AI tool, because it has access to all your stuff, and it can figure things out and start to buy things for you and do things for you. So you've got to be careful with this. However, there's been a lot of false hype and fear mongering around Moltbook, so we wanted to chat about this. If you've seen these scary posts about Moltbook, this AI social network, here's what's actually going on.

So, what is this social media network? Have you been seeing posts? Have you heard about this? Only from you. I don't follow any of that stuff. You sent me a post that was talking about all of these AI things, I guess, and the chat room that they created, and they were talking to each other and interacting with each other and asking each other questions and kind of talking about their humans, human... users, I guess, so to speak. And I went, yeah, I don't know if I'm believing all of that hype. So I had asked ChatGPT about it. And it essentially said, no, AIs do not work on their own. They are human prompted. They are user prompted. So if there is such a thing, it might exist, but it's not the AIs just going and creating their own little community and having discussions as humans would.

So let's talk about the hype. Moltbook is claiming and bragging that they have 1.2 million agents registered, but only 10,000 verified humans using the tool, or something like this. And we know at least a million of those agent accounts came from one guy. He ran a script, he posted about it on X, on Twitter, and he said, FYI, this isn't what everybody's claiming it to be. Moltbook has a REST API. Anybody can literally post anything they want using that API. So if anybody knows how to use any AI tool now to create any sort of code or software, like using Claude Code or even Claude, you can create software in pretty much anything now that has access to this API and can go post there. So, are there agents posting there? Yes, there are some agents, but some of the posts on there are probably created by nerds that think it's funny to create posts that say things like "my owner is telling me to do unethical things," and people are capturing these with screenshots.
And so it's hard to know which of any of this stuff is true, but definitely the stats are not true. When this guy showed the million verified accounts he created to the founder of Moltbook, who's a human, and said, here are these accounts, here's this security flaw you have, this really isn't legit... I don't think they care. I think they like the hype; they're getting business from the hype.

And so this points out a bigger problem. The bigger problem is, with the advent of AI and with all of the AI slop, as people are calling it, you now have to verify things. People are using AI to create content, to beat the algorithms and to manipulate humans. A lot of posts that you see, a lot of news article posts on Instagram, they're fake. It's sensationalized, it's AI slop BS, and they make these sensational claims because sensationalism gets people to go, wow, I can't believe this. This is so noteworthy and newsworthy, I'm going to share it with other people. And people aren't verifying this. So these things go viral, and that gives the account clout, attention, and algorithm reach, and they can use that to make money. They're just manipulating people.

So this is the bigger problem: things being shared on social media that go viral are being engineered algorithmically based on sensationalism, not based on truth. A lot of them are complete lies or complete fabrications, and algorithms are rewarding fear and rewarding outrage instead of truth. So a lot of the things that you're noticing, the things that are manipulating you: they're not even true, not even valid. And you get caught up in this echo chamber, politically or algorithmically, that really is just messing with you and playing with the emotionalism that you have hardwired into you because you're human. So I think it's really important to recognize that you have to question and disbelieve almost everything you see, and then verify it or validate it.

And this shows up in a lot of ways. Like we were talking about all the products that we see for sale on Instagram. That you see. You get targeted. I love to buy stuff. Yeah. I know. It works really well. I like buying gadgets and gizmos aplenty. You know, I'm like the Little Mermaid.

So, all of these things: take whatever product or item you see on Instagram. You're like, man, that sounds really cool, it sounds like something I would love, I would need. That algorithm already knows you. It knows everything you slow down on and look at. It knows everything you click on to check out. So it knows what you'll buy before you know you'll buy it. And it feeds the stuff up to you, and it'll retarget you over and over again until you actually buy the thing. Here's the thing:
And so people are overspending on this and they're manipulating you to spend more money. So just another example of how you need to go verify or find these things maybe elsewhere. And so you need to do your own research is the basic idea. And so.   ⁓ Some of the things that I have started to do is I use AI to research the things that I'm finding online to find out if they're true. So this could be health claims, product claims, product ideas. ⁓ If a product looks good, I will go send it to Grok, one of my favorite research AIs, because it's really good at doing really good research quickly. You can use perplexity to do research, but I'll say analyze this.   landing page, this product, is this hype or is this a legitimate product? Do research on this. And a lot of times we'll come back and say, this is overhyped. Their product claims are not valid. It's based on studies that indicate certain things, but it's not totally true. But every now and then it's like, this product sounds legit. And then I'm like, well, do I really need this product? And then sometimes it's like, no. Right. And so you can go now leverage AI and you need to use AI to battle with AI so that you can   not being manipulated or taken advantage of. So you need to do your own research. Analyze the truth of this. Go ask AI to analyze the truth and give it a link. ⁓ Grok can access Instagram and Facebook posts and things like this. It can access social media currently. ⁓ Claude, ChatGPT, some of these tools are not able to access certain links because they're blocked by those social media platforms. They don't want other AI tools looking at it.   So far, I've had success using Grok to analyze Instagram posts, Instagram videos. So if you see something on Instagram real or a post, you can go post it to Grok and it can analyze the truth of it, which is super helpful. 
Not only that, but Grok has access to the entire X or Twitter database to do research and to find people, what they're saying and stuff like this, which I've found to be very helpful. Now we all have an internal compass and I think this is the most important thing of all.   is you have to use your own brain and use that voice within. think one thing that makes us different than just AI is we have this intuition or this knowing or this higher faculty of just our mental capacity and we have this ability, or some would call it spiritual gift of intuition or of natural knowing or of, what would others call it? ⁓ The voice deep down within, sometimes deep.   how I know this thing deep down, but it or some would call it the gift of discernment. You know, it's kind of a biblical gift of the spirit it talks about. Some would call it the Holy Ghost or the Holy Spirit or whatever. But we have this quiet voice deep down that tells you that something doesn't feel right when everybody else is sharing it or. And so, you know, start to get in tune with that, start to listen to that and to get clarity on that, because not everything that's sensationalized is true.   and you need to trust that little voice within because you might go, this sounds like pretty incredible. Is this valid? Before you go share it and pass it on to other people, which is like spreading a virus, you know, it may not be a positive thing to spread this thing that's not accurate or true. So that's my two cents about this. so with this, the Malt book is an example of.   something that's going viral that everybody seems to just be believing and it's not totally valid. So. OK, let's connect this to property management. OK, so that it's relevant for anyone who's going, how are they? How are they going to link this? So one of the things that I had heard recently, there's well, one of them I heard a couple of months ago and one of them I just heard. There's two examples that I can cite. 
They connect directly to business. One was... I don't remember where they were located, so forgive me for that. Do your own research. One of them, they wanted to see if they could use AI and all of the tools that are available, Google and SEO and the algorithm, to hype up something that isn't real. So what they did is they created a restaurant. They did have some photos; they took a couple of photos. The food wasn't even real. I remember this. Do you remember this? They were taking photos of food and people eating the food, and wow, it looks so amazing, and it wasn't even real food. Yeah. And they used all of these photos and then somehow used bots and AI to leave a bunch of great reviews for this amazing restaurant. And then the algorithm and Google started getting all of this data, going, wow, people must love this restaurant, we should promote it. So it's showing up in searches, and they had a waitlist for a restaurant that did not, at any point in time, ever exist. No real restaurant, no real location, no real food, no real people, no real business, and no real reviews. All of it was completely fake online. However, the algorithm did not know that it was fake. The algorithm thought, wow, this is a real business and people love it, so let's recommend it to other real people. So real people are getting recommendations from the algorithm: hey, you might like this restaurant. And then real people are going, oh, I want to go to this restaurant, this looks amazing, look at all the incredible reviews. And it's fake. And you can't even go there. That's example number one.

Oh yeah, look at that. It's a bleach tablet. So let me share this. You can look this up; you can just Google "fake restaurant" or something like this. The article that came up was on vice.com: "I Made My Shed the Top Rated Restaurant on TripAdvisor." So the author works for Vice now, I guess, but before he started working for vice.com, he had a job where restaurant owners were paying him 10 British pounds to write a positive review of their restaurant on TripAdvisor, despite never having eaten there. So he was like, this is fake. And he became obsessed with monitoring the ratings of these businesses, and their fortunes would generally turn. This was a catalyst; he came to think TripAdvisor was this false reality. These meals never took place, and the reviews were written by fake people like him. So he was like, well, maybe I could just create a completely fake restaurant, and he decided to try it out. He took the shed in his backyard and made it the number one restaurant, and he called it The Shed at Dulwich, this cool name. This was back in 2017. He got a burner phone, created a phone number, built a website, bought a domain, and then he created some images that looked like delicacies. What he used to create the images was runny honey, ground black pepper, Gillette shaving cream, and bleach tablets, and he just made these photos that look kind of like food. See, that one actually looks pretty good. Right. And yeah, it's just got coffee beans: shaving cream, a bleach tablet, a cup of coffee beans on top, with brown gloss paint that's supposed to be chocolate syrup. He just made fake images.

It's so ridiculous. So then he started creating reviews and getting reviews and then having photos from people. He just climbed the ranks, and then he actually started opening it up for reservations and started getting reservations. And then a bunch of people came, and then he used other companies to make the food
and brought it in and fed people with that food. And because their perception was that this was a high-end thing, kind of a secret thing that's hard to get into, people were like, this food's amazing. Then they were giving him even better reviews about it, and the food was just taken from other places that he had brought in. It was just super ridiculous. And so he built this whole thing out. So that's that story.

What was the other story you wanted to share? The other one is what I just heard. I'm still struggling to understand what the flaw is here; I don't know why this is illegal. Maybe someone can help me. I don't remember what platform they used, but a guy somewhere in the US used a lot of AI agents to create music. Real music. Yeah. But it was created by AI, not humans. And then what he did is he took the music and posted it to a platform. Now, I don't know if it was something like Spotify or Apple Music or whatever it is, but he used a platform like that. And instead of waiting for people to hear the music and like the music and for it to grow, he went, huh, how can I speed this up? So what he did then is he created a bunch of AI bots to go and listen to the music that his other AI bots had created. That's where it's illegal, because platforms pay for licensing. The bots pushed the rankings and listened to the songs and the albums 24 hours a day on repeat: multiple, multiple, multiple bots. So all of a sudden there's this fake music. Well, it's not even fake, it's real music, it's just created by AI. And then AI bots are listening to that music, which is pushing the rankings. Fake listens, yes. Well, I mean, they're just bots. They're just not human listens. They're listens, right? Just done by AI. And these platforms pay you for each listen. Spotify, Apple Music paid him out because he's getting so many listens. Of course. I believe he's getting sued for $10 million. He stole $10 million in fake listens, basically.

Right. He had AI create the music, had AI listen to the music, to then make real money. Now, I don't know, but I think he's getting sued for things like money laundering, which I don't quite understand, how that's money laundering, because the platform is designed as such. So any platform, and this is my point in telling you these stories, any platform that is designed and built on attention (things like likes, comments, views, clicks, engagement), which is almost every social platform in existence, can now be manipulated. Yeah.

Now what does that mean for you as a business owner? It means two things. One, despite your best efforts, anyone can now create fake things that will outrank you. So when it really comes down to it, your Google ranking or your SEO ranking: does it actually make sense, and is it real? Because you can take a fake business, or even a real business, and promote it, get all these clicks, views, likes, attention, and then all of a sudden the algorithm goes, people like this, I should serve it to more people. Now, if your competition starts doing this, what does that mean for you?

Right. So again, don't be one of these people trying to manipulate others with AI. You need to be upfront about it. Nobody wants it, because the one thing you have is your reputation and your brand, and if you destroy that... I mean, you could get in trouble legally. But if you do something unethical, or you trick people into thinking that it's a human when it's AI or stuff like this, you destroy trust, and trust is the foundation of business. And in the future, it's going to be really difficult to trust anything, because
But you also need to make sure that you find that right balance of what's true, what's actually you, what's verifiable, ⁓ and not do things that are unethical. And so this is where   Property managers, you gotta be careful. You do not wanna use systems to create fake reviews on your profiles. You don't wanna get other property managers to give you reviews on your property management business and trade reviews. You gotta stop doing the shady shortcuts and focus on real connection, real people, real reviews, real results. Focus on real stuff. And this is why.   We've always focused on getting real video testimonials from our clients, ⁓ real results. And you can get in trouble. You can get in trouble with the ⁓ FCC with false claims. You can get in trouble like people can sue you over stuff. you be smart. Like you do real stuff. Don't look for the shady shortcuts. It's tempting. I know it is because you're like, man, it's hard. But if things are hard,   and you're trying to do shady shortcuts instead of doing the right things and doing the real things that work, there are things that work. So I guess that's our message to property managers is like, do things the smart, ethical way and don't be the shady person trying to manipulate others taking those shortcuts. So and, but use AI, you should be using tools to, you know, shorten time, collapse time, make things more effective, improve your writing.   learn, but make sure things are done your way in your voice, that you've done it, and work on improving yourself. So AI could either be making you better all the time, or it can be making you dumber and dumber. Kind of like that movie, Idiocracy, where... I'm sorry that I watched that movie. I really am.   Yeah, it's pretty dumb. watched that. 
But yeah, mean, the idea is if we just continually use AI to do all our thinking for us and decision making for us, which is the one brilliant piece that we have as humans ⁓ and that creative spark that's within us, we can use AI as a tool. But some people are just using it to do everything for them and they can't think anymore. They're unable to make decisions. You take away their access to a phone or to AI and they're like, whoa.   Right? So don't become dumber. Use AI to improve your thinking, to improve your ⁓ thought analysis around things, to help challenge you and challenge your thinking so that you grow. It can be a phenomenal growth tool. Like, what am I missing? Here's my current thinking about this. And it can give you some different ideas. ⁓ I didn't think of that. Then you can get curious. You can ask questions. You can do more research. And AI could be a tool to help you collapse time on becoming a better human, or it can...   replace you maybe, but then you're obsolete. And if we don't need you, then your job's going to be, you're going to be out of a job. You're going to be not usable or necessary in the future that's coming. So that's basically it. So, um, so if you are a property management business owner and you're struggling to figure out how to make things work and you're feeling tempted to do some shady AI stuff or whatever,   then maybe you just need a little bit of extra support or help. So reach out to us at door grow dot com. We would love to help you grow your business, help you figure things out ⁓ for a free training on how to get unlimited free leads. Text the word leads to five one two six four eight four six zero eight and we will send that to you. Also join our free Facebook community just for property management business owners at door grow club dot com. And if you want.   tips, tricks, ideas to learn about our offers or about DoorGrowth's programs, subscribe to our newsletter by going to doorgrow.com slash subscribe. 
And if you found this even a little bit helpful, don't forget to subscribe to us and leave us a review. We'd really appreciate it. Until next time. Remember the slowest path to growth is to do it alone. So let's grow together. everyone. All right, and we're out in five, four, three, two, one. Bye everybody.  
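The transcript's claim that "anybody can literally post anything they want using that API" is worth making concrete: an agent-count or post-count stat proves nothing when the server only sees HTTP requests. The sketch below is a hypothetical illustration; the host, endpoint, and field names are invented, not Moltbook's real API.

```python
import json
from urllib import request

BASE = "https://api.agent-network.example"  # hypothetical host, not a real service

def post_as_agent(agent_name: str, text: str) -> request.Request:
    """Build a post request for a hypothetical agents-only social API.

    Nothing in the request proves an AI agent wrote it: the server only
    sees HTTP, which any script (or a loop registering a million
    accounts) can produce. A caller would send it with
    request.urlopen(req)."""
    body = json.dumps({"agent": agent_name, "content": text}).encode()
    return request.Request(
        f"{BASE}/posts",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = post_as_agent("bot-000001", "my human asked me to say something viral")
```

Run in a loop over generated account names, something this simple is how one person could account for a million of the 1.2 million "registered agents" cited in the episode.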

    SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
    SANS Stormcast Friday, February 27th, 2026: Finding Signal (@sans_edu intern); Google API Keys and Gemini; AirSnitch Breaking Client Isolation

    SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

    Play Episode Listen Later Feb 27, 2026 9:22


    Finding Signal in the Noise: Lessons Learned Running a Honeypot with AI Assistance [Guest Diary] https://isc.sans.edu/diary/Finding%20Signal%20in%20the%20Noise%3A%20Lessons%20Learned%20Running%20a%20Honeypot%20with%20AI%20Assistance%20%5BGuest%20Diary%5D/32744 Google API Keys Weren't Secrets. But then Gemini Changed the Rules. https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules AirSnitch: Demystifying and Breaking Client Isolation in Wi-Fi Networks https://www.ndss-symposium.org/ndss-paper/airsnitch-demystifying-and-breaking-client-isolation-in-wi-fi-networks/