Podcasts about MCP

  • 743 PODCASTS
  • 2,478 EPISODES
  • 1h 1m AVG DURATION
  • 2 DAILY NEW EPISODES
  • LATEST: Mar 19, 2026

POPULARITY (chart: 2019–2026)


Latest podcast episodes about MCP

Bankless
Tempo Mainnet: The Race to Agentic Commerce

Bankless

Play Episode Listen Later Mar 19, 2026 79:55


Tempo Mainnet is live, but this episode isn't really about just another chain launch. It's about a bigger claim: that AI agents are about to need native money, and that the internet may need a new payment layer to support them. Georgios Konstantopoulos and Brendan Ryan join Bankless to unpack why Tempo launched with agentic payments front and center, what MPP actually is, how it compares to x402, and why they think machine-to-machine commerce could reshape everything from paid APIs to the business model of the web itself.

The Tech Trek
How AI Is Changing Crypto Crime, AML, and Cyber Investigations

The Tech Trek

Play Episode Listen Later Mar 18, 2026 28:49


Victor Fang, CEO and Founder of Anchain AI, joins The Tech Trek for a timely conversation on crypto crime, AI-driven fraud, and what financial institutions need to understand as digital assets move closer to the mainstream. This episode is worth your time if you care about cybersecurity, compliance, crypto risk, anti-money laundering, or where agentic AI is starting to reshape investigation work.

The conversation goes beyond headlines. Victor breaks down how bad actors are using generative AI for phishing, identity fraud, exploit development, and ransomware, then explains how defenders are using AI, graph intelligence, and agent workflows to fight back. It is a sharp look at the collision of crypto, cybersecurity, regulation, and AI infrastructure.

In this episode:
  • What crypto crime actually looks like today, from exchange hacks to romance scams and ransomware
  • Why crypto risk now extends well beyond crypto-native users
  • How financial institutions, regulators, and compliance teams are adapting
  • Where AI is helping attackers move faster, and where it is giving defenders an edge
  • Why agentic workflows and MCP-powered investigation tools could change this category fast

Timestamped highlights:
00:00 Victor Fang on crypto crime, AI versus AI, and agentic AML
00:53 What Anchain AI does and why blockchain investigation is becoming more important
01:56 How generative AI is already being used in crypto crime and phishing
06:30 What banks, regulators, and AML teams need to understand about crypto adoption
10:44 Why Victor believes AI can give defenders the advantage
16:17 How Anchain uses blockchain data, graph intelligence, and agent workflows to investigate faster
22:04 Why the company's MCP server could extend beyond crypto into KYC and financial applications
25:21 What the next wave of agent-driven security and investigation might look like

One standout idea from the conversation: crypto is much closer to you than you think.

Practical takeaways:
  • Crypto risk is no longer a niche issue; it is increasingly tied to broader fraud, ransomware, and financial crime
  • AI is accelerating both offense and defense, which raises the bar for security and compliance teams
  • Agentic investigation workflows could dramatically reduce manual work in AML, fraud, and cyber operations
  • Companies building in regulated spaces need infrastructure that can handle both speed and scrutiny

Follow The Tech Trek for more conversations with builders, operators, and technical leaders shaping what comes next.

ceo founders ai crime crypto aml kyc mcp cyber investigations tech trek
The Gamer's Guild: A Marvel Crisis Protocol Podcast
MCP Ep. 125: Adepticon Preview and MCP Lingo Breakdown

The Gamer's Guild: A Marvel Crisis Protocol Podcast

Play Episode Listen Later Mar 17, 2026 116:25


This week on the cast, Matthew and Bryan bring on guest Omar from Danger Planet to talk about whether "meta" splashes are good for the game, break down all the opaque MCP speak and lingo you hear from the community, and give their Adepticon preview for what they expect from the event and Worlds.

If you are in the US, shop at https://gamechefs.org to help support the guild, and use code GamersGuild to save an additional 15% on your order! If you would like to further support the channel, find out more at https://www.patreon.com/Thegamersguild. Please join us on Discord, or find us on Facebook.

LINUX Unplugged
658: Automated Love Crunch

LINUX Unplugged

Play Episode Listen Later Mar 16, 2026 63:16 Transcription Available


We each spent the week on our own projects, breaking and then fixing things. Now we're back to compare progress and share a few lessons learned.

Sponsored By:
Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free!
Managed Nebula: Meet Managed Nebula from Defined Networking, a decentralized VPN built on the open-source Nebula platform that we love.

Support LINUX Unplugged

Merge Conflict
506: We have no skills

Merge Conflict

Play Episode Listen Later Mar 16, 2026 52:59


James and Frank unpack the exploding world of AI coding agents, covering instructions, MCP tools, custom agents, hooks, plugins, and why "skills" matter. They walk through the new .NET Skills repo (P/Invoke, MSBuild, diagnostics, binlogs), show how skills act like practical, on-demand tutorials for niche tasks, and sketch how tooling will soon auto-load the right skills so agents can just do the thing for you.

Follow Us:
Frank: Twitter, Blog, GitHub
James: Twitter, Blog, GitHub
Merge Conflict: Twitter, Facebook, Website, Chat on Discord

Music: Amethyst Seer - Citrine by Adventureface

⭐⭐ Review Us ⭐⭐

Machine transcription available on http://mergeconflict.fm

ai blog skills chat mcp msbuild james montemagno frank krueger
Open Source Security Podcast
MCP and Agent security with Luke Hinds

Open Source Security Podcast

Play Episode Listen Later Mar 16, 2026 35:36


Josh talks to Luke Hinds, CEO of Always Further, about MCP and agent security. We start out talking about Luke's new tool, nono, a sandboxing tool built with AI agents in mind as a use case. We explain what MCP and agents are doing, as well as why it's so hard to secure them. It's not impossible, but it's not simple either. We end the show by discussing some of the more human aspects of security, and how history may be repeating itself with security folks laughing at new users who don't know any better. The show notes and blog post for this episode can be found at https://opensourcesecurity.io/2026/2026-03-mcp-agent-luke/

The Geek In Review
Anthropic's Matt Samuels and Den Delimarsky - Claude & MCP: Building the USB-C for the Legal Tech Stack

The Geek In Review

Play Episode Listen Later Mar 16, 2026 55:33


This week, we sit down with two guests from Anthropic: Matt Samuels, Senior Product Counsel, and Den Delimarsky, a core maintainer of the Model Context Protocol, or MCP. Together, they unpack why MCP is drawing so much attention across the legal industry and why some are calling it the USB-C for AI. For law firms long burdened by disconnected systems, scattered data, and the infamous integration tax, MCP offers a shared framework for connecting models to the places where real work and real knowledge live, from iManage and Slack to email, data lakes, and internal tools.

Den explains that the promise of MCP is not tied to one model or one vendor. Instead, it creates a standardized way for AI tools to securely interact with many different systems without forcing organizations to build one-off integrations every time they want to connect a new source. The conversation gets especially relevant for legal listeners when Greg and Marlene press on issues like permissions, ethical walls, least-privilege access, and auditability. The answer from Anthropic is reassuring: MCP is built to work with familiar enterprise security concepts such as OAuth and role-based access, meaning firms do not have to throw out their security model in order to explore new AI workflows.

Matt brings the legal and operational lens, translating MCP into practical use cases for lawyers, legal ops teams, and security leaders. He describes how AI becomes far more useful once it has access to the systems lawyers already rely on every day, while still operating within carefully defined administrative controls. The discussion highlights a key shift in how firms should think about AI. This is no longer about asking a chatbot a clever question and getting a polished paragraph back. With MCP, firms are moving toward systems where AI can retrieve, correlate, summarize, draft, and support actions across multiple platforms, all while staying inside the guardrails set by the organization.

The episode also explores how MCP fits into the rise of agentic workflows, apps, plugins, and skills. Rather than treating AI as a static assistant, Anthropic describes a future where these tools become active participants in legal work: pulling together information from multiple sources, helping assemble case timelines, drafting notes into a shared document, and supporting lawyers in a far more integrated workspace. The conversation around skills is especially useful for firms thinking about standard operating procedures, preferred drafting styles, escalation rules, and repeatable work product. Skills and MCP do different jobs, but together they start to look like the operating system for structured legal workflows.

By the end of the conversation, one message comes through clearly: the legal profession is still early in this shift, but the pace is picking up fast. Both Matt and Den encourage listeners to stop treating these tools like abstract future concepts and start experimenting with them now. At the same time, they offer an important note of caution. As much as these systems promise speed and efficiency, lawyers still need to protect the craft of lawyering, their judgment, and the human choices that matter most. For firms trying to make sense of where AI is headed next, this episode offers a grounded and practical look at the infrastructure layer that could shape what comes next.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
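For readers curious what the "standardized, permissioned connector" idea discussed in this episode looks like at the wire level: MCP requests are JSON-RPC 2.0 messages, with tool invocations sent as a `tools/call` method. The sketch below builds such a request behind a role-based allowlist in the spirit of the least-privilege discussion above. The role names, tool names, and the `make_tool_call` helper are illustrative assumptions, not part of the MCP specification or any product mentioned here.

```python
import json

# Hypothetical role-to-tool allowlist, mirroring the "least-privilege
# access" theme of the episode. These names are invented for illustration.
ROLE_TOOLS = {
    "associate": {"search_documents"},
    "partner": {"search_documents", "draft_memo"},
}

def make_tool_call(role: str, tool: str, arguments: dict, request_id: int = 1) -> dict:
    """Build an MCP `tools/call` JSON-RPC request, refusing tools the role lacks."""
    if tool not in ROLE_TOOLS.get(role, set()):
        raise PermissionError(f"role {role!r} may not call tool {tool!r}")
    # MCP tool invocations are JSON-RPC 2.0 requests with method "tools/call"
    # and params carrying the tool name and its arguments.
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# A permitted call serializes to a normal JSON-RPC request body.
req = make_tool_call("partner", "draft_memo", {"matter": "M-104"})
print(json.dumps(req, indent=2))
```

The point of routing every call through a gate like this is the one the guests make: the AI layer inherits the firm's existing access model instead of replacing it.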

Critical Thinking - Bug Bounty Podcast
Episode 165: Protobuf Hacking, AI-Powered Bug Hunting, and Self-Improving Claude Workflows

Critical Thinking - Bug Bounty Podcast

Play Episode Listen Later Mar 12, 2026 44:23


Episode 165: In this episode of Critical Thinking - Bug Bounty Podcast, Justin recaps his Zero Trust World experience before we dive into permissions-related client-side bugs, new hardware hacking classes, and using AI to hack.

Follow us on Twitter at: https://x.com/ctbbpodcast
Got any ideas and suggestions? Feel free to send us any feedback here: info@criticalthinkingpodcast.io
Shoutout to YTCracker for the awesome intro music!

====== Links ======
Follow your hosts Rhynorater, rez0, and gr3pme on X:
https://x.com/Rhynorater
https://x.com/rez0__
https://x.com/gr3pme
Critical Research Lab: https://lab.ctbb.show/

====== Ways to Support CTBBPodcast ======
Hop on the CTBB Discord at https://ctbb.show/discord!
We also do Discord subs at $25, $10, and $5 - premium subscribers get access to private masterclasses, exploits, tools, scripts, un-redacted bug reports, etc.
You can also find some hacker swag at https://ctbb.show/merch!
Today's Sponsor: Check out ThreatLocker Ringfencing: https://www.criticalthinkingpodcast.io/tl-rf

====== Resources ======
bbscope Update: https://x.com/sw33tLie/status/2029344643154919720
Matt Brown's YouTube channel: https://www.youtube.com/channel/UC3VDCeZYZH7mCihtMVHqppw
Matt's Twitter: https://x.com/nmatt0
MCP server for HackerOne to search reports: https://x.com/OriginalSicksec/status/2029503063095124461?s=20
Caido Skills: https://github.com/caido/skills
The Agentic Hacking Era: Ramblings and a Tool: https://josephthacker.com/hacking/2026/03/06/the-agentic-hacking-era.html
Announcing AI-driven Caido: https://caido.io/blog/2026-03-06-caido-skill

====== Timestamps ======
(00:00:00) Introduction
(00:06:23) bbscope report dumping & Matt Brown training
(00:13:10) MCP server for HackerOne to search reports & protobuf success
(00:24:24) Hacking mics with permissions-related client-side bugs
(00:27:26) Can AI hack things?

Complex Systems with Patrick McKenzie (patio11)
Inference engineering and the real-world deployment of LLMs, with Philip Kiely

Complex Systems with Patrick McKenzie (patio11)

Play Episode Listen Later Mar 12, 2026 83:45


Patrick McKenzie (patio11) and Philip Kiely, early employee at Baseten, discuss the inference stack: the critical layer of software and hardware that sits between a model's weights and a user's prompt. They cover inference engineering, how intermediate layers are evolving over a technical stack that is changing every six months, and how sophisticated organizations are actually consuming LLMs beyond just writing their questions into chatbot apps.

Full transcript available here: www.complexsystemspodcast.com/inference-engineering-with-philip-kiely/

Presenting Sponsors: Mercury, Meter, & Granola
Complex Systems is presented by Mercury, radically better banking for founders. Mercury offers the best wire experience anywhere: fast, reliable, and free for domestic U.S. wires, so you can stay focused on growing your business. Apply online in minutes at mercury.com.
Networking infrastructure has a way of accumulating technical debt faster than almost anything else in IT. Meter handles the full stack (wired, wireless, and cellular) as a single integrated solution: designed, deployed, and managed end-to-end, so there's only one vendor to call when something goes wrong. Visit meter.com/complexsystems to book a demo.
If meetings consistently leave you with hazy action items and lost context, Granola handles the transcription so you can actually participate, and gives you searchable notes afterward. Try it free at granola.ai/complexsystems with code COMPLEXSYSTEMS

Links:
Download Inference Engineering: https://www.baseten.com/inference-engineering/
Philip's website: https://philipkiely.com/
Stripe's Emily Sands on Complex Systems: https://www.complexsystemspodcast.com/episodes/the-past-present-and-future-of-ai-with-stripe/
Des Traynor on Complex Systems: https://www.complexsystemspodcast.com/episodes/des-traynor/

Timestamps:
(00:00) Intro
(00:30) The AI deployment pipeline
(03:04) Evolution of abstraction layers in engineering
(05:14) Defining inference and model weights
(08:45) Architecture of language and diffusion models
(10:11) AI adoption in the broader economy
(11:30) The shift toward agentic workflows and RL
(14:55) Function calling and real-world actions
(20:10) Sponsors: Mercury | Meter
(22:59) Technologies for agentic tools: MCP and skills
(25:32) The craft of writing a harness
(29:56) Using AI for automated proofreading and tool creation
(34:12) Balancing LLMs with deterministic code
(37:31) Observability and chain of thought reasoning
(39:31) Sponsor: Granola
(41:21) Observability and chain of thought reasoning
(50:45) Speculative decoding and hidden states
(55:37) The value of smaller, task-specific models
(59:55) Internal competencies versus buying solutions
(01:09:27) Self-publishing a technical book in record time
(01:23:20) Wrap

eCommerce Evolution
AI Employees Are Here: What Claude Cowork, OpenClaw, and MCP Mean for eCommerce

eCommerce Evolution

Play Episode Listen Later Mar 12, 2026 55:45 Transcription Available


AI in eCommerce marketing isn't about "better prompts" anymore; it's about better systems. Brett sits down with returning guest Russ Henneberry (TheClick.ai, co-author of Digital Marketing for Dummies) to unpack what's new and what's next: Claude Cowork, agentic workflows, skills that "self-improve," and what happens when your AI can actually use your files, tools, and data, not just chat about it.

If you're a DTC founder, CMO, or operator trying to scale performance without scaling headcount, this episode is a blueprint for how modern teams are building repeatable AI routines for content, reporting, and decision-making.

Sponsored by OMG Commerce - go to https://www.omgcommerce.com/contact and request your FREE strategy session today!

Chapters:
(00:00) Intro
(02:05) What Cowork is: agentic plans, local files, and "skills"
(05:20) Skills that self-improve, plus persona + offer as core context
(08:10) Cowork as a "brain" with version control, shared across workflows
(10:10) Connected sources: Notion transcripts, Zoom notes, and MCP-style integrations
(15:10) Parallel agents and context windows: why this runs faster than chatbots
(18:05) Skill marketplaces, sharing zips, and the security caution
(23:10) OpenClaw/open-source talk: the 4 "levels" (chatbot → cowork → code → open source)
(28:05) Hardware reality: Mac Minis, Apple silicon, and "processing power" as leverage
(31:05) Content system: Source → Structure → Format → Polish (newsletter example)
(38:30) Click.ai membership, team training, and closing thoughts on revenue/employee

Connect With Brett:
LinkedIn: https://www.linkedin.com/in/thebrettcurry/
YouTube: https://www.youtube.com/@omgcommerce
Website: https://www.omgcommerce.com/
Request a Free Strategy Session: https://www.omgcommerce.com/contact

Relevant Links:
Russ's LinkedIn: https://www.linkedin.com/in/russhenneberry
theCLICK: https://theclick.ai/
Digital Marketing for Dummies: https://www.amazon.com/Digital-Marketing-Dummies-Business-Personal/dp/1119235596

Past guests on eCommerce Evolution include Ezra Firestone, Steve Chou, Drew Sanocki, Jacques Spitzer, Jeremy Horowitz, Ryan Moran, Sean Frank, Andrew Youderian, Ryan McKenzie, Joseph Wilkins, Cody Wittick, Miki Agrawal, Justin Brooke, Nish Samantray, Kurt Elster, John Parkes, Chris Mercer, Rabah Rahil, Bear Handlon, JC Hite, Frederick Vallaeys, Preston Rutherford, Anthony Mink, Bill D'Allessandro, Stephane Colleu, Jeff Oxford, Bryan Porter, and more.

Manufacturing Hub
Ep. 252 - Industrial AI in Manufacturing What Actually Works and What Does Not #industrialautomation

Manufacturing Hub

Play Episode Listen Later Mar 12, 2026 65:39


Manufacturing Hub is back with Episode 252, where co-hosts Vlad Romanov and Dave Griffith break down what an AI survival guide should actually look like for manufacturing and industrial automation professionals. This is not a hype conversation about replacing people with magic software. It is a grounded discussion about what AI tools can do today, where they fail, why context and data quality matter so much, and how industrial teams should think about experimentation without losing sight of real operating constraints.

In this episode, Vlad and Dave unpack the evolution many engineers and technical leaders have already felt in real time: from early prompt engineering, to agent-based workflows, to MCP servers, skills, context management, and the growing cost of tokens and infrastructure. The conversation moves beyond generic AI commentary and into the reality of plant-floor environments, where success depends on process knowledge, data architecture, OT constraints, cybersecurity, governance, and clear business value. One of the strongest themes throughout the episode is that manufacturers cannot skip the hard work of structuring data, understanding workflows, and defining use cases simply because AI tools are moving quickly.

Vlad brings a very practical industrial lens to the discussion. Drawing on years of hands-on experience across controls, manufacturing systems, plant modernization, and digital transformation, he explains why industrial AI has to start with operational context. A maintenance team, an engineering team, and a quality team do not need the same data, do not ask the same questions, and should not be handed the same AI workflows. That distinction matters. This conversation also highlights why the best industrial AI implementations will likely come from teams that combine domain expertise with strong technical execution, rather than generic AI shops trying to force a solution into environments they do not fully understand.

Dave adds an important systems and adoption perspective, especially around cost, scaling, management expectations, and the danger of trying to prompt your way past foundational architecture work. Together, Vlad and Dave explore why manufacturers are interested in AI, why many are afraid of being left behind, and why so many projects still stall once they hit the realities of obsolete equipment, weak data models, fragmented systems, and unclear ownership of information. They also discuss deterministic logic versus LLM behavior, reporting workflows, industrial dashboards, PLC code generation concerns, and the practical question every manufacturer should ask before investing: what problem are we solving, for whom, and what is the measurable return?

For those new to Vlad, he is an electrical engineer and manufacturing leader with deep experience across industrial automation, controls, data systems, OT architecture, modernization strategy, and plant operations. Through Joltek, Vlad works with manufacturers on digital transformation, IT/OT architecture and integration, modernization planning, operational improvement, and technical workforce enablement.

Learn more here:
Joltek: https://www.joltek.com
IT OT Architecture and Integration: https://www.joltek.com/services/service-details-it-ot-architecture-integration

If you are a plant leader, controls engineer, systems integrator, OT architect, SCADA or MES practitioner, or simply someone trying to separate useful AI workflows from noise, this episode will give you a much more realistic framework for thinking about industrial AI adoption.

Timestamps:
00:00 Welcome back and why this episode matters
01:00 Setting up the industrial AI theme for the coming weeks
03:10 From prompt engineering to structured AI workflows
05:30 AI agents, parallel workflows, tokens, and context windows
09:00 MCP tools, Playwright, and what new integrations unlock
16:20 How Vlad researches AI and where useful information actually lives
22:00 Real manufacturing problems versus AI in search of a problem
29:40 Why industrial data architecture is harder than most people think
37:00 OT expertise, workforce enablement, and who should build solutions
45:40 Practical advice for manufacturers starting the AI journey
50:30 Data governance, hallucinations, infrastructure, and cybersecurity
57:20 What looks promising today in reporting, dashboards, and industrial applications

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Google has been shipping relentlessly across Gemini models, world models, multimodal tools, and Workspace updates, but the release getting the most attention from developers may actually be the new Google Workspace CLI. NLW explains why command line interfaces are suddenly central to the agent era, why developers are rethinking MCP and other abstraction layers, and how Google is quietly positioning Gemini by making its ecosystem easier for agents to use. In the headlines: Meta hires the Moltbook team, Nvidia backs Mira Murati's new lab, Oracle earnings calm AI infrastructure fears, and Amazon blocks Perplexity shopping agents.

Learn more about AGENT MADNESS, our 64-bracket tournament to find the coolest agent of 2026: https://www.agentmadness.ai/

Brought to you by:
KPMG - Agentic AI is powering a potential $3 trillion productivity shift, and KPMG's new paper, Agentic AI Untangled, gives leaders a clear framework to decide whether to build, buy, or borrow. Download it at www.kpmg.us/Navigate
Mercury - Modern banking for business and now personal accounts. Learn more at https://mercury.com/personal-banking
AIUC-1 - Get your agents certified to communicate trust to enterprise buyers: https://www.aiuc-1.com/
Blitzy - Want to accelerate enterprise software development velocity by 5x? https://blitzy.com/
AssemblyAI - The best way to build Voice AI apps: https://www.assemblyai.com/brief
Robots & Pencils - Cloud-native AI solutions that power results: https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Our newsletter is back: https://aidailybrief.beehiiv.com/
Interested in sponsoring the show? sponsors@aidailybrief.ai

Developer Tea
From Software Engineer to Agent Manager - How Work is Changing in A New Software Development Paradigm

Developer Tea

Play Episode Listen Later Mar 10, 2026 21:20


If you're a software engineer right now, you likely feel like your world is changing overnight. We are writing half or less of the code that we wrote even a year ago, which represents a seismic shift in our industry. However, the rapid introduction of new tools can slide quickly from exciting to purely chaotic, leaving you feeling like you are falling behind. In today's episode, I explore how this changes the nature of our day-to-day work, and why the key to surviving this transition is shifting your mindset from a traditional "Software Engineer" to an "Agent Manager."

The Illusion of Velocity vs. Actual Chaos: While the big-picture promise of AI is that the software development pipeline will move exponentially faster, the reality on the ground often feels like unadulterated chaos. Trying to adopt every new tool while spinning up multiple agents to work on parallel tickets introduces a massive new cognitive burden.

The Context-Switching Trap: Understand why parallelizing agent workflows fundamentally changes your context-switching overhead. You are no longer just reloading context to build something yourself; you are reloading it to manage, review, and validate a building agent, which rapidly drains your cognitive ability and leads to burnout.

The "Agent Manager" Mindset: Treating AI as just a "smart autocomplete" while you try to do the same old job will not work. You need to start viewing your role more like assembly-line or process management, focusing on facilitating the system rather than typing every line of syntax.

Adopt Old-School Quality Control Tactics: Discover how traditional management methods are becoming essential for individual contributors. Just like a factory manager doesn't inspect every single item off the line, you must develop methods for spot checks, anomaly detection, and standardizing outputs to evaluate the quality and quantity of your agents' work.

Shift Your Work Upfront: Recognize that your core effort must move to the specification and planning phases. Your job is increasingly about setting the context, defining the prompt, and establishing strict guardrails before the agent begins its work.

Redefining Your Work in Progress (WIP): Proven principles like limiting WIP and focusing on finishing rather than starting are more important than ever to reduce cognitive burden. However, you must adapt these principles to fit a workflow where you are managing processes rather than manually coding.

Episode Homework: Take a step back and ask yourself: "What is my true work in progress? Am I actually manually doing these tickets, or am I managing the processes that produce quality ticket work?"

The Neuron: AI Explained
24 Billion AI Uses Later: What Canva Learned About the Future of Design

The Neuron: AI Explained

Play Episode Listen Later Mar 10, 2026 55:06


You've probably used Canva, but you probably haven't seen what it can do with AI. In this episode of The Neuron, we sit down with Danny Wu, Head of AI Products at Canva, to explore how the platform went from a simple design tool to a full-blown "Creative Operating System" powered by AI, serving 230+ million users every month.

Danny walks us through how Canva's MCP server lets you create fully editable designs from inside ChatGPT, Claude, and Microsoft Copilot, why their new Canva Design Model is fundamentally different from typical AI image generators (hint: layers), and why, 24 billion AI tool uses later, the most surprising use cases are ones they never anticipated. We also get Danny's take on whether AI will homogenize all design, his advice for freelancers who don't want to get replaced, and a live demo of Canva's AI design generation in action.

You'll learn:
• How MCP powers Canva inside ChatGPT, Claude, and Copilot
• What the Canva Design Model understands that GPT-4 doesn't
• Why editable layers (not flat images) are the real AI design breakthrough
• Danny's advice for freelancers to become irreplaceable in an AI world
• How Canva uses AI internally on tens of millions of lines of code
• Why AI assistants are becoming "the new SEO" for user acquisition

Try Canva AI at https://canva.com/ai

Special thanks to the sponsor of this video, Cohesity: https://www.cohesity.com/ResilienceEverywhere/?utm_source=brand-ta-podcast&utm_medium=direct-publisher&utm_campaign=fy26-q2-01-amer-us-digital-awarewbpg-brd-genbr&utm_content=podcast

For more practical, grounded conversations on AI and emerging tech, subscribe to The Neuron newsletter at https://theneuron.ai.

We Don't PLAY
Pinterest SEO Marketing Tutorial: Google Search Console Indexing with Favour Obasi-ike

We Don't PLAY

Play Episode Listen Later Mar 10, 2026 103:35


Favour Obasi-ike, MBA, MS delivers a tutorial on why Pinterest is a search engine, not social media, and how to connect it with Google Search Console for SEO impact. Pinterest is the least skipped ad platform while YouTube is the most skipped, and Pinterest ads cost two to thirty cents versus dollars elsewhere. He covers claiming your business account, how earned media works exclusively on Pinterest, and why a pin lives three to five months compared to an Instagram post's 19 to 72 hours. Favour shares a client case study where organic image impressions grew from 54.1 million to 154 million in three months with zero ad spend, with Pinterest ranking in the top three linking sites. The conversation covers MCP servers, Google's crawl budget drop from 15 to two megabytes, why 67 percent of searches result in zero clicks, and why GoDaddy is not scalable. Mark recommends WordPress, and Shira shares how evergreen content generates leads years after posting.

Book SEO Services? Save These Quick Links for Later
>> Book SEO Services with Favour Obasi-ike
>> Visit the Work and PLAY Entertainment website to learn about our digital marketing services
>> Join our exclusive SEO Marketing community
>> Read SEO Articles
>> Subscribe to the We Don't PLAY Podcast
>> Purchase Flaev Beatz Beats Online
>> Favour Obasi-ike Quick Links
>> Start Recording your Podcast with Riverside Today | Sign Up with My Affiliate Link Here

Timeline and Timestamps
[00:08] Introduction — Pinterest SEO Marketing on Clubhouse.
[02:53] Pinterest: least skipped ad platform vs. YouTube.
[04:02] Pinterest is a search engine for images.
[07:04] You cannot be on ChatGPT if not on Google.
[08:15] Claiming your Pinterest business account.
[10:02] Earned media — only Pinterest offers it.
[12:05] Pin lifespan: 3–5 months vs. Instagram: 19–72 hours.
[19:00] Tuna on WebMCP and AI impact on SEO.
[21:56] Google crawl budget: 15 MB down to 2 MB.
[23:35] 67% of Google searches result in zero clicks.
[33:09] Why GoDaddy is not scalable.
[40:03] Mark: WordPress — own your website.
[58:45] Pinterest + Google Search Console: the perfect blend.
[60:30] Case study: 54.1M to 154M impressions organically.
[73:49] Shira: evergreen content still generates leads.
[79:50] SEO scorecard tool — 10 questions, instant report.
[93:01] 97% of Pinterest searches are unbranded.
[95:32] Pinterest and Amazon partnership.

Memorable Quotes
"Pinterest is the least skipped ad platform. YouTube is the most — people pay to skip ads."
"If you drop the P, it's interest. Pinterest is interest, literally."
"You build a house on land you don't own." — Mark, on closed-source builders.
"Keep putting out your message, even when nobody's watching, because someone is." — Shira
"67% of Google searches don't result in a click. That's a culture shift." — Tuna

FAQs Answered
Is Pinterest social media? On the personal side, yes. On the business side, it is a visual search engine where you own 100% of your data through a claimed account.
What is earned media? When someone saves your paid pin and revisits it later, you earn impressions without spending again — dividends on your ad spend.
Why not GoDaddy? It lacks code injection, scalable pop-ups, and flexibility. WordPress is recommended for full ownership and SEO control.
How long does Pinterest SEO take? It depends on domain authority and consistency — no fixed timeline, but articles linked to Pinterest accelerate results.

Key Takeaways
Claim your website on Pinterest Business. Track Pinterest as a linking site in Google Search Console. Pins live 3–5 months versus hours on Instagram. 97% of Pinterest searches are unbranded. Own your site on WordPress. Evergreen content compounds and generates leads long after posting.

Keywords
Pinterest SEO, Google Search Console, earned media, Pinterest ads, visual search engine, domain authority, crawl budget, WordPress, claimed accounts, unbranded search, evergreen content, zero-click searches, SEO scorecard, MCP servers.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

amazon money social media ai google social bible marketing entrepreneur news podcasts ms sales search microsoft podcasting chatgpt mba artificial intelligence web services branding reddit seo hire small business roundtable pinterest clubhouse tactics favor revenue traffic digital marketing favourite bible study favorites entrepreneurial wordpress content creation budgeting content marketing financial planning web3 email marketing rebranding bing social media marketing tutorials claiming earned hydration evergreen small business owners tuna pin entrepreneur magazine mb money management roundtable discussion geo favour monetization marketing tips search engines pins web design search engine optimization quora godaddy drinking water b2b marketing podcast. google ai shira biblical principles website design marketing tactics get hired mcp digital marketing strategies entrepreneur mindset business news entrepreneure small business marketing indexing google apps spending habits seo tips google search console website traffic small business success entrepreneur podcast small business growth podcasting tips ai marketing seo experts webmarketing financial stewardship branding tips google seo small business tips email marketing strategies pinterest marketing social media ads entrepreneur tips seo tools search engine marketing marketing services budgeting tips roundtable podcast seo agency web 3.0 social media week web traffic seo marketing blogging tips podcast seo entrepreneur success small business loans social media news personal financial planning small business week seo specialist website seo marketing news seo podcast content creation tips digital marketing podcast seo best practices kangen water seo services data monetization ad business diy marketing obasi large business web tools pinterest seo start recording web host smb marketing seo news marketing hub marketing optimization small business help storybranding web copy entrepreneur support pinterest ipo entrepreneurs.
Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
NVIDIA's AI Engineers: Agent Inference at Planetary Scale and "Speed of Light" — Nader Khalil (Brev), Kyle Kranen (Dynamo)

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Mar 10, 2026 83:37


Join Kyle, Nader, Vibhu, and swyx live at NVIDIA GTC next week!Now that AIE Europe tix are ~sold out, our attention turns to Miami and World's Fair!The definitive AI Accelerator chip company has more than 10xed this AI Summer:And is now a $4.4 trillion megacorp… that is somehow still moving like a startup. We are blessed to have a unique relationship with our first ever NVIDIA guests: Kyle Kranen who gave a great inference keynote at the first World's Fair and is one of the leading architects of NVIDIA Dynamo (a Datacenter scale inference framework supporting SGLang, TRT-LLM, vLLM), and Nader Khalil, a friend of swyx from our days in Celo in The Arena, who has been drawing developers at GTC since before they were even a glimmer in the eye of NVIDIA:Nader discusses how NVIDIA Brev has drastically reduced the barriers to entry for developers to get a top of the line GPU up and running, and Kyle explains NVIDIA Dynamo as a data center scale inference engine that optimizes serving by scaling out, leveraging techniques like prefill/decode disaggregation, scheduling, and Kubernetes-based orchestration, framed around cost, latency, and quality tradeoffs. 
We also dive into Jensen's “SOL” (Speed of Light) first-principles urgency concept, long-context limits and model/hardware co-design, internal model APIs (https://build.nvidia.com), and upcoming Dynamo and agent sessions at GTC.Full Video pod on YouTubeTimestamps00:00 Agent Security Basics00:39 Podcast Welcome and Guests07:19 Acquisition and DevEx Shift13:48 SOL Culture and Dynamo Setup27:38 Why Scale Out Wins29:02 Scale Up Limits Explained30:24 From Laptop to Multi Node33:07 Cost Quality Latency Tradeoffs38:42 Disaggregation Prefill vs Decode41:05 Kubernetes Scaling with Grove43:20 Context Length and Co Design57:34 Security Meets Agents58:01 Agent Permissions Model59:10 Build Nvidia Inference Gateway01:01:52 Hackathons And Autonomy Dreams01:10:26 Local GPUs And Scaling Inference01:15:31 Long Running Agents And SF ReflectionsTranscriptAgent Security BasicsNader: Agents can do three things. They can access your files, they can access the internet, and then now they can write custom code and execute it. You literally only let an agent do two of those three things. If you can access your files and you can write custom code, you don't want internet access because that's one to see full vulnerability, right?If you have access to internet and your file system, you should know the full scope of what that agent's capable of doing. Otherwise, now we can get injected or something that can happen. And so that's a lot of what we've been thinking about is like, you know, how do we both enable this because it's clearly the future.But then also, you know, what, what are these enforcement points that we can start to like protect?swyx: All right.Podcast Welcome and Guestsswyx: Welcome to the Lean Space podcast in the Chromo studio. Welcome to all the guests here. Uh, we are back with our guest host Viu. Welcome. Good to have you back. And our friends, uh, Netter and Kyle from Nvidia. Welcome.Kyle: Yeah, thanks for having us.swyx: Yeah, thank you. 
Actually, I don't even know your titles.Uh, I know you're like architect something of Dynamo.Kyle: Yeah. I, I'm one of the engineering leaders [00:01:00] and a architects of Dynamo.swyx: And you're director of something and developers, developer tech.Nader: Yeah.swyx: You're the developers, developers, developers guy at nvidia,Nader: open source agent marketing, brev,swyx: and likeNader: Devrel tools and stuff.swyx: Yeah. BeenNader: the focus.swyx: And we're, we're kind of recording this ahead of Nvidia, GTC, which is coming to town, uh, again, uh, or taking over town, uh, which, uh, which we'll all be at. Um, and we'll talk a little bit about your sessions and stuff. Yeah.Nader: We're super excited for it.GTC Booth Stunt Storiesswyx: One of my favorite memories for Nader, like you always do like marketing stunts and like while you were at Rev, you like had this surfboard that you like, went down to GTC with and like, NA Nvidia apparently, like did so much that they bought you.Like what, what was that like? What was that?Nader: Yeah. Yeah, we, we, um. Our logo was a chaka. We, we, uh, we were always just kind of like trying to keep true to who we were. I think, you know, some stuff, startups, you're like trying to pretend that you're a bigger, more mature company than you are. And it was actually Evan Conrad from SF Compute who was just like, you guys are like previousswyx: guest.Yeah.Nader: Amazing. Oh, really? Amazing. Yeah. He was just like, guys, you're two dudes in the room. Why are you [00:02:00] pretending that you're not? Uh, and so then we were like, okay, let's make the logo a shaka. We brought surfboards to our booth to GTC and the energy was great. Yeah. Some palm trees too. They,Kyle: they actually poked out over like the, the walls so you could, you could see the bread booth.Oh, that's so funny. AndNader: no one else,Kyle: just from very far away.Nader: Oh, so you remember it backKyle: then? Yeah I remember it pre-acquisition. 
I was like, oh, those guys look cool,Nader: dude. That makes sense. ‘cause uh, we, so we signed up really last minute, and so we had the last booth. It was all the way in the corner. And so I was, I was worried that no one was gonna come.So that's why we had like the palm trees. We really came in with the surfboards. We even had one of our investors bring her dog and then she was just like walking the dog around to try to like, bring energy towards our booth. Yeah.swyx: Steph.Kyle: Yeah. Yeah, she's the best,swyx: you know, as a conference organizer, I love that.Right? Like, it's like everyone who sponsors a conference comes, does their booth. They're like, we are changing the future of ai or something, some generic b******t and like, no, like actually try to stand out, make it fun, right? And people still remember it after three years.Nader: Yeah. Yeah. You know what's so funny?I'll, I'll send, I'll give you this clip if you wanna, if you wanna add it [00:03:00] in, but, uh, my wife was at the time fiance, she was in medical school and she came to help us. ‘cause it was like a big moment for us. And so we, we bought this cricket, it's like a vinyl, like a vinyl, uh, printer. ‘cause like, how else are we gonna label the surfboard?So, we got a surfboard, luckily was able to purchase that on the company card. We got a cricket and it was just like fine tuning for enterprises or something like that, that we put on the. On the surfboard and it's 1:00 AM the day before we go to GTC. She's helping me put these like vinyl stickers on.And she goes, you son of, she's like, if you pull this off, you son of a b***h. And so, uh, right. Pretty much after the acquisition, I stitched that with the mag music acquisition. I sent it to our family group chat. Ohswyx: Yeah. No, well, she, she made a good choice there. Was that like basically the origin story for Launchable is that we, it was, and maybe we should explain what Brev is andNader: Yeah.Yeah. 
Uh, I mean, brev is just, it's a developer tool that makes it really easy to get a GPU. So we connect a bunch of different GPU sources. So the basics of it is like, how quickly can we SSH you into a G, into a GPU and whenever we would talk to users, they wanted A GPU. They wanted an A 100. And if you go to like any cloud [00:04:00] provisioning page, usually it's like three pages of forms or in the forms somewhere there's a dropdown.And in the dropdown there's some weird code that you know to translate to an A 100. And I remember just thinking like. Every time someone says they want an A 100, like the piece of text that they're telling me that they want is like, stuffed away in the corner. Yeah. And so we were like, what if the biggest piece of text was what the user's asking for?And so when you go to Brev, it's just big GPU chips with the type that you want withswyx: beautiful animations that you worked on pre, like pre you can, like, now you can just prompt it. But back in the day. Yeah. Yeah. Those were handcraft, handcrafted artisanal code.Nader: Yeah. I was actually really proud of that because, uh, it was an, i I made it in Figma.Yeah. And then I found, I was like really struggling to figure out how to turn it from like Figma to react. So what it actually is, is just an SVG and I, I have all the styles and so when you change the chip, whether it's like active or not it changes the SVG code and that somehow like renders like, looks like it's animating, but it, we just had the transition slow, but it's just like the, a JavaScript function to change the like underlying SVG.Yeah. And that was how I ended up like figuring out how to move it from from Figma. But yeah, that's Art Artisan. [00:05:00]Kyle: Speaking of marketing stunts though, he actually used those SVGs. Or kind of use those SVGs to make these cards.Nader: Oh yeah. LikeKyle: a GPU gift card Yes. That he handed out everywhere. 
That was actually my first impression of thatNader: one.Yeah,swyx: yeah, yeah.Nader: Yeah.swyx: I think I still have one of them.Nader: They look great.Kyle: Yeah.Nader: I have a ton of them still actually in our garage, which just, they don't have labels. We should honestly like bring, bring them back. But, um, I found this old printing press here, actually just around the corner on Ven ness. And it's a third generation San Francisco shop.And so I come in an excited startup founder trying to like, and they just have this crazy old machinery and I'm in awe. ‘cause the the whole building is so physical. Like you're seeing these machines, they have like pedals to like move these saws and whatever. I don't know what this machinery is, but I saw all three generations.Like there's like the grandpa, the father and the son, and the son was like, around my age. Well,swyx: it's like a holy, holy trinity.Nader: It's funny because we, so I just took the same SVG and we just like printed it and it's foil printing, so they make a a, a mold. That's like an inverse of like the A 100 and then they put the foil on it [00:06:00] and then they press it into the paper.And I remember once we got them, he was like, Hey, don't forget about us. You know, I guess like early Apple and Cisco's first business cards were all made there. And so he was like, yeah, we, we get like the startup businesses but then as they mature, they kind of go somewhere else. And so I actually, I think we were talking with marketing about like using them for some, we should go back and make some cards.swyx: Yeah, yeah, yeah. You know, I remember, you know, as a very, very small breadth investor, I was like, why are we spending time like, doing these like stunts for GPUs? 
Like, you know, I think like as a, you know, typical like cloud hard hardware person, you go into an AWS you pick like T five X xl, whatever, and it's just like from a list and you look at the specs like, why animate this GP?And, and I, I do think like it just shows the level of care that goes throughout birth and Yeah. And now, and also the, and,Nader: and Nvidia. I think that's what the, the thing that struck me most when we first came in was like the amount of passion that everyone has. Like, I think, um, you know, you talk to, you talk to Kyle, you talk to, like, every VP that I've met at Nvidia goes so close to the metal.Like, I remember it was almost a year ago, and like my VP asked me, he's like, Hey, [00:07:00] what's cursor? And like, are you using it? And if so, why? Surprised at this, and he downloaded Cursor and he was asking me to help him like, use it. And I thought that was, uh, or like, just show him what he, you know, why we were using it.And so, the amount of care that I think everyone has and the passion, appreciate, passion and appreciation for the moment. Right. This is a very unique time. So it's really cool to see everyone really like, uh, appreciate that.swyx: Yeah.Acquisition and DevEx Shiftswyx: One thing I wanted to do before we move over to sort of like research topics and, uh, the, the stuff that Kyle's working on is just tell the story of the acquisition, right?Like, not many people have been, been through an acquisition with Nvidia. What's it like? Uh, what, yeah, just anything you'd like to say.Nader: It's a crazy experience. I think, uh, you know, we were the thing that was the most exciting for us was. Our goal was just to make it easier for developers.We wanted to find access to GPUs, make it easier to do that. And then all, oh, actually your question about launchable. So launchable was just make one click exper, like one click deploys for any software on top of the GPU. Mm-hmm. 
And so what we really liked about Nvidia was that it felt like we just got a lot more resources to do all of that.I think, uh, you [00:08:00] know, NVIDIA's goal is to make things as easy for developers as possible. So there was a really nice like synergy there. I think that, you know, when it comes to like an acquisition, I think the amount that the soul of the products align, I think is gonna be. Is going speak to the success of the acquisition.Yeah. And so it in many ways feels like we're home. This is a really great outcome for us. Like we you know, I love brev.nvidia.com. Like you should, you should use it's, it's theKyle: front page for GPUs.Nader: Yeah. Yeah. If you want GP views,Kyle: you go there, getswyx: it there, and it's like internally is growing very quickly.I, I don't remember You said some stats there.Nader: Yeah, yeah, yeah. It's, uh, I, I wish I had the exact numbers, but like internally, externally, it's been growing really quickly. We've been working with a bunch of partners with a bunch of different customers and ISVs, if you have a solution that you want someone that runs on the GPU and you want people to use it quickly, we can bundle it up, uh, in a launchable and make it a one click run.If you're doing things and you want just like a sandbox or something to run on, right. Like open claw. Huge moment. Super exciting. Our, uh, and we'll talk into it more, but. You know, internally, people wanna run this, and you, we know we have to be really careful from the security implications. Do we let this run on the corporate network?Security's guidance was, Hey, [00:09:00] run this on breath, it's in, you know, it's, it's, it's a vm, it's sitting in the cloud, it's off the corporate network. It's isolated. 
And so that's been our stance internally and externally about how to even run something like open call while we figure out how to run these things securely.But yeah,swyx: I think there's also like, you almost like we're the right team at the right time when Nvidia is starting to invest a lot more in developer experience or whatever you call it. Yeah. Uh, UX or I don't know what you call it, like software. Like obviously NVIDIA is always invested in software, but like, there's like, this is like a different audience.Yeah. It's aNader: widerKyle: developer base.swyx: Yeah. Right.Nader: Yeah. Yeah. You know, it's funny, it's like, it's not, uh,swyx: so like, what, what is it called internally? What, what is this that people should be aware that is going on there?Nader: Uh, what, like developer experienceswyx: or, yeah, yeah. Is it's called just developer experience or is there like a broader strategy hereNader: in Nvidia?Um, Nvidia always wants to make a good developer experience. The thing is and a lot of the technology is just really complicated. Like, it's not, it's uh, you know, I think, um. The thing that's been really growing or the AI's growing is having a huge moment, not [00:10:00] because like, let's say data scientists in 2018, were quiet then and are much louder now.The pie is com, right? There's a whole bunch of new audiences. My mom's wondering what she's doing. My sister's learned, like taught herself how to code. Like the, um, you know, I, I actually think just generally AI's a big equalizer and you're seeing a more like technologically literate society, I guess.Like everyone's, everyone's learning how to code. Uh, there isn't really an excuse for that. And so building a good UX means that you really understand who your end user is. And when your end user becomes such a wide, uh, variety of people, then you have to almost like reinvent the practice, right? Yeah. 
You haveKyle: to, and actually build more developer ux, right?Because the, there are tiers of developer base that were added. You know, the, the hackers that are building on top of open claw, right? For example, have never used gpu. They don't know what kuda is. They, they, they just want to run something.Nader: Yeah.Kyle: You need new UX that is not just. Hey, you know, how do you program something in Cuda and run it?And then, and then we built, you know, like when Deep Learning was getting big, we built, we built Torch and, and, but so recently the amount of like [00:11:00] layers that are added to that developer stack has just exploded because AI has become ubiquitous. Everyone's using it in different ways. Yeah. It'sNader: moving fast in every direction.Vertical, horizontal.Vibhu: Yeah. You guys, you even take it down to hardware, like the DGX Spark, you know, it's, it's basically the same system as just throwing it up on big GPU cluster.Nader: Yeah, yeah, yeah. It's amazing. Blackwell.swyx: Yeah. Uh, we saw the preview at the last year's GTC and that was one of the better performing, uh, videos so far, and video coverage so far.Awesome. This will beat it. Um,Nader: that wasswyx: actually, we have fingersNader: crossed. Yeah.DGX Spark and Remote AccessNader: Even when Grace Blackwell or when, um, uh, DGX Spark was first coming out getting to be involved in that from the beginning of the developer experience. And it just comes back to what youswyx: were involved.Nader: Yeah. St. St.swyx: Mars.Nader: Yeah. Yeah. I mean from, it was just like, I, I got an email, we just got thrown into the loop and suddenly yeah, I, it was actually really funny ‘cause I'm still pretty fresh from the acquisition and I'm, I'm getting an email from a bunch of the engineering VPs about like, the new hardware, GPU chip, like we're, or not chip, but just GPU system that we're putting out.And I'm like, okay, cool. Matters. Now involved with this for the ux, I'm like. 
What am I gonna do [00:12:00] here? So, I remember the first meeting, I was just like kind of quiet as I was hearing engineering VPs talk about what this box could be, what it could do, how we should use it. And I remember, uh, one of the first ideas that people were idea was like, oh, the first thing that it was like, I think a quote was like, the first thing someone's gonna wanna do with this is get two of them and run a Kubernetes cluster on top of them.And I was like, oh, I think I know why I'm here. I was like, the first thing we're doing is easy. SSH into the machine. And then, and you know, just kind of like scoping it down of like, once you can do that every, you, like the person who wants to run a Kubernetes cluster onto Sparks has a higher propensity for pain, then, then you know someone who buys it and wants to run open Claw right now, right?If you can make sure that that's as effortless as possible, then the rest becomes easy. So there's a tool called Nvidia Sync. It just makes the SSH connection really simple. So, you know, if you think about it like. If you have a Mac, uh, or a PC or whatever, if you have a laptop and you buy this GPU and you want to use it, you should be able to use it like it's A-A-G-P-U in the cloud, right?Um, but there's all this friction of like, how do you actually get into that? That's part of [00:13:00] Revs value proposition is just, you know, there's a CLI that wraps SSH and makes it simple. And so our goal is just get you into that machine really easily. And one thing we just launched at CES, it's in, it's still in like early access.We're ironing out some kinks, but it should be ready by GTC. You can register your spark on Brev. And so now if youswyx: like remote managed yeah, local hardware. Single pane of glass. Yeah. Yeah. Because Brev can already manage other clouds anyway, right?Vibhu: Yeah, yeah. And you use the spark on Brev as well, right?Nader: Yeah. But yeah, exactly. 
So, so you, you, so you, you set it up at home you can run the command on it, and then it gets it's essentially it'll appear in your Brev account, and then you can take your laptop to a Starbucks or to a cafe, and you'll continue to use your, you can continue use your spark just like any other cloud node on Brev.Yeah. Yeah. And it's just like a pre-provisioned centerswyx: in yourNader: home. Yeah, exactly.swyx: Yeah. Yeah.Vibhu: Tiny little data center.Nader: Tiny little, the size ofVibhu: your phone.SOL Culture and Dynamo Setupswyx: One more thing before we move on to Kyle. Just have so many Jensen stories and I just love, love mining Jensen stories. Uh, my favorite so far is SOL. Uh, what is, yeah, what is S-O-L-S-O-LNader: is actually, i, I think [00:14:00] of all the lessons I've learned, that one's definitely my favorite.Kyle: It'll always stick with you.Nader: Yeah. Yeah. I, you know, in your startup, everything's existential, right? Like we've, we've run out of money. We were like, on the risk of, of losing payroll, we've had to contract our team because we l ran outta money. And so like, um, because of that you're really always forcing yourself to I to like understand the root cause of everything.If you get a date, if you get a timeline, you know exactly why that date or timeline is there. You're, you're pushing every boundary and like, you're not just say, you're not just accepting like a, a no. Just because. And so as you start to introduce more layers, as you start to become a much larger organization, SOL is is essentially like what is the physics, right?The speed of light moves at a certain speed. So if flight's moving some slower, then you know something's in the way. So before trying to like layer reality back in of like, why can't this be delivered at some date? Let's just understand the physics. What is the theoretical limit to like, uh, how fast this can go?And then start to tell me why. 
‘cause otherwise people will start telling you why something can't be done. But actually I think any great leader's goal is just to create urgency. Yeah. [00:15:00] There's an infiniteKyle: create compelling events, right?Nader: Yeah.Kyle: Yeah. So l is a term video is used to instigate a compelling event.You say this is done. How do we get there? What is the minimum? As much as necessary, as little as possible thing that it takes for us to get exactly here and. It helps you just break through a bunch of noise.swyx: Yeah.Kyle: Instantly.swyx: One thing I'm unclear about is, can only Jensen use the SOL card? Like, oh, no, no, no.Not everyone get the b******t out because obviously it's Jensen, but like, can someone else be like, no, likeKyle: frontline engineers use it.Nader: Yeah. Every, I think it's not so much about like, get the b******t out. It's like, it's like, give me the root understanding, right? Like, if you tell me something takes three weeks, it like, well, what's the first principles?Yeah, the first principles. It's like, what's the, what? Like why is it three weeks? What is the actual yeah. What's the actual limit of why this is gonna take three weeks? If you're gonna, if you, if let's say you wanted to buy a new computer and someone told you it's gonna be here in five days, what's the SOL?Well, like the SOL is like, I could walk into a Best Buy and pick it up for you. Right? So then anything that's like beyond that is, and is that practical? Is that how we're gonna, you know, let's say give everyone in the [00:16:00] company a laptop, like obviously not. So then like that's the SOL and then it's like, okay, well if we have to get more than 10, suddenly there might be some, right?And so now we can kind of piece the reality back.swyx: So, so this is the. Paul Graham do things that don't scale. Yeah. And this is also the, what people would now call behi agency. 
Yeah.Kyle: It's actually really interesting because there's a, there's a second hardware angle to SOL that like doesn't come up for all the org sol is used like culturally at aswyx: media for everything.I'm also mining for like, I think that can be annoying sometimes. And like someone keeps going IOO you and you're like, guys, like we have to be stable. We have to, we to f*****g plan. Yeah.Kyle: It's an interesting balance.Nader: Yeah. I encounter that with like, actually just with, with Alec, right? ‘cause we, we have a new conference so we need to launch, we have, we have goals of what we wanna launch by, uh, by the conference and like, yeah.At the end of the day, where isswyx: this GTC?Nader: Um, well this is like, so we, I mean we did it for CES, we did for GT CDC before that we're doing it for GTC San Jose. So I mean, like every, you know, we have a new moment. Um, and we want to launch something. Yeah. And we want to do so at SOL and that does mean that some, there's some level of prioritization that needs [00:17:00] to happen.And so it, it is difficult, right? I think, um, you have to be careful with what you're pushing. You know, stability is important and that should be factored into S-O-L-S-O-L isn't just like, build everything and let it break, you know, that, that's part of the conversation. So as you're laying, layering in all the details, one of them might be, Hey, we could build this, but then it's not gonna be stable for X, y, z reasons.And so that was like, one of our conversations for CES was, you know, hey, like we, we can get this into early access registering your spark with brev. But there are a lot of things that we need to do in order to feel really comfortable from a security perspective, right? There's a lot of networking involved before we deliver that to users.So it's like, okay. Let's get this to a point where we can at least let people experiment with it. 
We had it in a booth, we had it in Jensen's keynote, and then let's go iron out all the networking kinks. And that's not easy. And so, uh, that can come later. And so that was the way that we layered that back in.Yeah. ButKyle: It's not really about saying like, you don't have to do the, the maintenance or operational work. It's more about saying, you know, it's kind of like [00:18:00] highlights how progress is incremental, right? Like, what is the minimum thing that we can get to. And then there's SOL for like every component after that.But there's the SOL to get you, get you to the, the starting line. And that, that's usually how it's asked. Yeah. On the other side, you know, like SOL came out of like hardware at Nvidia. Right. So SOL is like literally if we ran the accelerator or the GPU with like at basically full speed with like no other constraints, like how FAST would be able to make a program go.swyx: Yeah. Yeah. Right.Kyle: Soswyx: in, in training that like, you know, then you work back to like some percentage of like MFU for example.Kyle: Yeah, that's a, that's a great example. So like, there's an, there's an S-O-L-M-F-U, and then there's like, you know, what's practically achievable.swyx: Cool. Should we move on to sort of, uh, Kyle's side?Uh, Kyle, you're coming more from the data science world. And, uh, I, I mean I always, whenever, whenever I meet someone who's done working in tabular stuff, graph neural networks, time series, these are basically when I go to new reps, I go to ICML, I walk the back halls. There's always like a small group of graph people.Yes. Absolute small group of tabular people. [00:19:00] And like, there's no one there. And like, it's very like, you know what I mean? Like, yeah, no, like it's, it's important interesting work if you care about solving the problems that they solve.Kyle: Yeah.swyx: But everyone else is just LMS all the time.Kyle: Yeah. 
I mean, it's like the black hole, right? Has the event horizon reached this yet at NeurIPS?

swyx: But, you know, those are transformers too, and those are also interesting things. Anyway, I just wanted to spend a little bit of time on that background before we go into Dynamo proper.

Kyle: Yeah, sure. I took a different path to NVIDIA. I joined six years ago, seven if you count when I was an intern. So I joined NVIDIA right out of college, and the first thing I jumped into was not what I'd done during my internship, which was some stuff for autonomous vehicles, heavyweight object detection. I jumped into recommenders, because that was popular.

swyx: Yeah, he did RecSys

Kyle: as well. Yeah, RecSys. That was the tabular data at the time, right? You have tables of audience qualities and item qualities, and you're trying to figure out which member of [00:20:00] the audience matches which item, or, more practically, which item matches which member of the audience. And at the time, really, we were trying to turn recommenders, which had historically been a bit of a CPU-based workflow, into something that ran really well on GPUs. And it's since been done; there are a bunch of libraries for RecSys that run on GPUs. The common models, like the Deep Learning Recommendation Model (DLRM), which came out of Meta, and the Wide & Deep model, which was released by Google, were very much accelerated by GPUs, using the fast HBM on the chips especially to do vector lookups. But it was very interesting at the time, and super relevant, because we were starting to get this explosion of feeds and things that required recommenders to just actively be on all the time.
And I transitioned a little bit towards graph neural networks when I discovered them, because I realized you can actually use graph neural networks to represent relationships between people, items, and concepts, and that interested me. So I jumped into that at [00:21:00] NVIDIA and got really involved for two-ish years.

swyx: Yeah. And something I learned from Bryan Catanzaro

Kyle: Oh my God. Yeah.

swyx: is that you can just kind of choose your own path at NVIDIA. Which is not a normal big-corp thing. Normally you have a lane, you stay in your lane.

Nader: I think that's probably the reason why I enjoy being at a big company: the mission is the boss. Coming from a startup guy...

swyx: The mission is the boss.

Nader: Yeah. It feels like a big game of pickup basketball. If you wanna play basketball, you just go up to the court and you're like, hey look, we're gonna play this game and we need three. And you just find your three. Honestly, for every new initiative, that's what it feels like.

Vibhu: It also shows, right? NVIDIA is just releasing state-of-the-art stuff in every domain. You expect foundation models with Nemotron; voice, Parakeet just randomly comes out, then another one.

Kyle: The NVIDIA voice team has always been producing.

Vibhu: Yeah. In every other domain there's always a paper that comes out, a dataset that comes out. And it also stems back to what NVIDIA has to do, right? You have to make chips years before they're actually produced, so you need to know, you need to really [00:22:00] focus.

Kyle: The design process starts, like,

Vibhu: exactly,

Kyle: three to five years before the chip gets to the market.

Vibhu: Yeah. I'm curious more about what that's like, right? So, like, you have specialist teams.
Is it just, people find an interest, you go in, you go deep on whatever, and that kind of feeds back into, okay, we expect predictions? The internals at NVIDIA must be crazy, right? Even without selling to people, you must have your own predictions of where things are going. And they're very based, very grounded, right?

Kyle: Yeah, it's really interesting. There are two things that I think NVIDIA does which are quite interesting. One is that we really index on passion. There's a big organizational, top-down push to ensure that people are working on the things that they're passionate about. So if someone proposes something that's interesting, many times they can just email someone way up the chain who would find it relevant and say, hey, can I go work on this?

Nader: I actually worked at a big company for a couple of years before starting on my startup journey, and it felt very weird there if you were to email out of chain, if that makes [00:23:00] sense. The emails at NVIDIA are like mosh pits.

swyx: Shoot.

Nader: It's just 60 people, just whatever.

swyx: They get messy, like reply-all.

Nader: Oh, it's insane. It's insane.

Kyle: They just help, you know, maximize...

Nader: ...the context. But that's actually... so this is a weird thing where I used to be like, why would we send emails? We have Slack. I'm now the exact opposite. I feel so bad for anyone who's messaging me on Slack, 'cause I'm so unresponsive.

swyx: You're email-maxxing.

Nader: I'm email-maxxing now. Email is perfect, because important threads get bumped back up, right? Yeah. And Slack doesn't do that.
So I just have this casino going off on the right or on the left, and I don't know which thread was from where or what. But with email the threads get bumped, and then there's also the subject line, so you can have working threads. I think what's difficult is when you're small: if you're not 40,000 people, I think Slack will work fine. I don't know what the inflection point is, but there is going to be a point where that becomes really messy and you'll actually prefer having email, 'cause you can have working threads. You can CC more than nine people in a thread.

Kyle: You can fork stuff.

Nader: You can [00:24:00] fork stuff, which is super nice. And so that is part of how you can propose a plan. You can also just start. Honestly, momentum's the only authority, right? If you can just start, make a little bit of progress, and show someone something, then they can try it. That's, I think, the most effective way to push anything forward, both at NVIDIA and just generally.

Kyle: Yeah. There's the other concept that's explored a lot at NVIDIA, which is this idea of a zero-billion-dollar business. Market creation is a big thing at NVIDIA.

swyx: Oh, you want to go and start a zero-billion-dollar business?

Kyle: Jensen says: we are completely happy investing in zero-billion-dollar markets. We don't care if this creates revenue. It's important for us to know about this market; we think it will be important in the future. It can be zero billion dollars for a while. I'm probably mangling his words here, but, you know, I'll give an example. NVIDIA's been working on autonomous driving for a long time.

swyx: Like an NVIDIA car.

Kyle: No, they've...

Vibhu: They used the Mercedes, right? They're around the HQ, and I think it finally just got licensed out.
Now they're starting to be used quite a [00:25:00] bit. For 10 years you've been seeing Mercedes with NVIDIA logos driving around.

Kyle: If you're in south Santa Clara, yeah. So, zero-billion-dollar markets are a thing. Like, you know, Jensen...

swyx: I mean, okay, look, cars are not a zero-billion-dollar market. That's a bad example.

Nader: I think he's messaging that it's zero today. Or even internally, right? An org doesn't have to ruthlessly find revenue very quickly to justify its existence. A lot of the important research, a lot of the important technology being developed...

Kyle: That's kind of where research... research is very ideologically free at NVIDIA. They can pursue things that they...

swyx: Were you research, officially?

Kyle: I was never officially in research. I was always in engineering. I'm in an org called Deep Learning Algorithms, which is basically: how do we make things that are relevant to deep learning go fast?

swyx: That sounds freaking cool.

Vibhu: And I think a lot of that is underappreciated, right? Like time series: this week Google put out the TimesFM paper, [00:26:00] a new time series paper. Semantic IDs started applying transformers and LLMs to

Kyle: Yes.

Vibhu: rec systems. And when you think of the scale of companies deploying these, Amazon recommendations, Google web search, it's huge scale, and

Kyle: Yeah.

Vibhu: you want fast.

Kyle: Yeah. Actually, there's a fun moment that brought me full circle. Amazon Ads recently gave a talk where they discussed using Dynamo for generative recommendation, which was weirdly cathartic for me. I'm like, oh my God, I've supplanted what I was working on. You're using LLMs now to do what I was doing five years ago.

swyx: Yeah. Amazing. And let's go right into Dynamo.
swyx: Uh, maybe introduce it top-down.

Kyle: Yeah, sure. I think at this point a lot of people are familiar with the term inference. Funnily enough, I went from inference being a really niche topic to something that's discussed on normal people's Twitter feeds.

Nader: It's on billboards here now.

Kyle: Yeah. Very strange, driving and seeing an inference ad on the 101. Inference at scale is becoming a lot more important. We have these moments, like OpenClaw, where you have these [00:27:00] agents that take lots and lots of tokens but produce incredible results. There are many different aspects of test-time scaling, where you can use more inference to generate a better result than if you were to use a short amount of inference. There's reasoning, there's querying, there's adding agency to the model, allowing it to call tools and use skills. Dynamo came about at NVIDIA because myself and a couple of others were talking about these concepts. You have inference engines like vLLM, SGLang, and TensorRT-LLM, and they sort of think about things as one single copy, one replica, right?

Why Scale Out Wins

Kyle: Like, one version of the model. But when you're actually serving things at scale, you can't just scale up that replica, because you end up with performance problems. There's a scaling limit to scaling up replicas. So you actually have to scale out, to use some Kubernetes-type terminology. We realized that there was a lot of potential optimization that we could do in scaling out and building systems for data [00:28:00] center scale inference.
So Dynamo is this data-center-scale inference engine that sits on top of the frameworks, vLLM, SGLang, and TensorRT-LLM, and makes things go faster because you can leverage the economy of scale. You have KV cache, which we can define a little bit later, in all these machines, and you want to figure out ways to maximize your cache hits. Or you want to employ new techniques in inference, like disaggregation, which Dynamo introduced to the world in March. Not introduced, it was in academic talks beforehand, but we were one of the first frameworks to start supporting it. And we want to combine all these techniques into a modular framework that allows you to accelerate your inference at scale.

Nader: By the way, Kyle and I became friends on my first day at NVIDIA, and I always loved it, 'cause he always teaches me new things.

swyx: Yeah. By the way, this is why I wanted to put the two of you together. I was like, yeah, this is gonna be good.

Kyle: It's very different, you know. We've talked to each other a bunch. [00:29:00] Actually, you asked: why can't we scale up?

Nader: Yeah.

Scale Up Limits Explained

Nader: The model... you said model replicas.

Kyle: Yeah. So scale-up means assigning more...

swyx: Heavier?

Kyle: Yeah, heavier. Making things heavier: adding more GPUs, adding more CPUs. Scale-out is drawing a barrier and saying: I'm going to duplicate my representation of the model, or of this microservice, and replicate it many times to handle load. And the reason that you can't scale up past some point is that there are hardware bounds and algorithmic bounds on that type of scaling. I'll give you a good example that's very trivial. Let's say you're on an H100.
The maximum NVLink domain for H100, for most DGX H100s, is eight GPUs, right? So if you scaled up past that, you're going to have to figure out ways to handle the fact that now, for the GPUs to communicate, you have to do it over InfiniBand, which is still very fast, but not as fast as NVLink.

swyx: Is it like one order of magnitude? Like hundreds, or...

Kyle: It's about an order of magnitude, yeah.

swyx: So not terrible.

Kyle: [00:30:00] Yeah. I need to remember the data sheet here. I think it's about 500 gigabytes a second unidirectional for NVLink, and about 50 gigabytes a second unidirectional for InfiniBand. It depends on the generation.

swyx: I just want to set this up for people who are not familiar with these kinds of layers and the relative speeds.

Vibhu: Of course.

From Laptop to Multi Node

Vibhu: Also, maybe going a few steps back before that: most people are very familiar with what you can run on your laptop. These small local LLMs, you can just run inference there.

Kyle: You can run it on that laptop.

Vibhu: You can run it on a laptop. Then you get to: okay, models got pretty big, right? GLM 5, they doubled the size. So what do you do when you have to go from "I can get 128 gigs of memory, I can run it on a Spark" to multi-GPU? Okay, multi-GPU, there's some support there. Now, if I'm a company, and I'm not hiring the best researchers for this, but I need to go [00:31:00] multi-node, right? I have a lot of servers. Okay, now there are efficiency problems. You can have multiple 8x H100 nodes, but how do you do that efficiently?
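A rough sketch of what that bandwidth gap means in practice, using the ballpark figures Kyle cites above. Both bandwidth numbers are the approximate, generation-dependent values mentioned in the conversation, and the KV-cache size is an invented example:

```python
# Back-of-envelope time to move data (e.g. KV cache) between GPUs, using
# the rough unidirectional bandwidths mentioned above (generation-dependent).

NVLINK_GBPS = 500.0      # ~GB/s unidirectional over NVLink (approximate)
INFINIBAND_GBPS = 50.0   # ~GB/s unidirectional over InfiniBand (approximate)

def transfer_ms(bytes_to_move: float, bandwidth_gbps: float) -> float:
    """Milliseconds to move `bytes_to_move` at `bandwidth_gbps` GB/s."""
    return bytes_to_move / (bandwidth_gbps * 1e9) * 1e3

# Example: 2 GB of KV cache for a long sequence.
kv_bytes = 2e9
print(transfer_ms(kv_bytes, NVLINK_GBPS))      # 4.0 ms over NVLink
print(transfer_ms(kv_bytes, INFINIBAND_GBPS))  # 40.0 ms over InfiniBand
```

The order-of-magnitude gap in bandwidth translates directly into an order-of-magnitude gap in transfer time, which is why scaling up beyond the NVLink domain changes the cost model.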
Everyone asks, how do you size oh, I wanna run GLM five, which just came out new model. There have been like four of them in the past week, by the way, like a bunch of new models.swyx: You know why? Right? Deep seek.Kyle: No comment. Oh. Yeah, but Ggl, LM five, right?We, we have this, new model. It's, it's like a large size, and you have to figure out how to both scale up and scale out, right? Because you have to find the right representation that you care about. Everyone does this differently. Let's be very clear. Everyone figures this out in their own path.Nader: I feel like a lot of AI or ML even is like, is like this. I think people think, you know, I, I was, there was some tweet a few months ago that was like, why hasn't fine tuning as a service taken off? You know, that might be me. It might have been you. Yeah. But people want it to be such an easy recipe to follow.But even like if you look at an ML model and specificKyle: to you Yeah,Nader: yeah.Kyle: And the [00:32:00] model,Nader: the situation, and there's just so much tinkering, right? Like when you see a model that has however many experts in the ME model, it's like, why that many experts? I don't, they, you know, they tried a bunch of things and that one seemed to do better.I think when it comes to how you're serving inference, you know, you have a bunch of decisions to make and there you can always argue that you can take something and make it more optimal. But I think it's this internal calibration and appetite for continued calibration.Vibhu: Yeah. And that doesn't mean like, you know, people aren't taking a shot at this, like tinker from thinking machines, you know?Yeah. RL as a service. Yeah, totally. It's, it also gets even harder when you try to do big model training, right? We're not the best at training Moes, uh, when they're pre-trained. Like we saw this with LAMA three, right? 
They're trained in such a sparse way, and Meta knows there's going to be a bunch of inference done on these, right? They'll open-source it, but it's very much trained for what Meta's infrastructure wants; they want to inference it a lot. Now the question to think about is: okay, say you wanna serve a chat application, or a coding copilot. You're doing a layer of RL, you're serving a model for X amount of people. Is it a chat model, a coding model? Dynamo, you know, back to that...

Kyle: [00:33:00] Yeah, sorry, we sort of jumped off on that topic. Everyone has their own journey.

Cost Quality Latency Tradeoffs

Kyle: And I like to think of it as defined by: what is the model you need, what is the accuracy you need? Actually, I talked to Nader about this earlier. There are three axes you care about. There's the quality that you're able to produce: are you accurate enough, can you complete the task with high enough performance? There's cost: can you serve the model, or serve your workflow, cheaply enough? Because it's not just the model anymore, it's the workflow, it's the multi-turn with an agent. And then: can you serve it fast enough? We're seeing all three of these play out. We saw new models from OpenAI that are faster; you have these new fast versions of models. You can change the amount of thinking to change the quality: produce more tokens, but at a higher cost and a higher latency. And really, when you start this journey of figuring out how you want to host a model, you think about a few things. What is the model I need to serve? How many times do I need to call it? What is the input sequence length? [00:34:00] What does the workflow look like on top of it? What is the SLA, the latency SLA, that I need to achieve?
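One way to picture the sizing loop Kyle describes: sweep candidate serving configurations and keep the cheapest one that still meets the latency SLA. All config names and numbers below are invented for illustration:

```python
# Hypothetical config sweep: pick the lowest-cost configuration that still
# hits the latency SLA. All values here are made up for the sketch.

candidate_configs = [
    {"tp": 1, "gpus": 1, "p99_ms": 950, "cost_per_hr": 2.0},
    {"tp": 2, "gpus": 2, "p99_ms": 520, "cost_per_hr": 4.0},
    {"tp": 4, "gpus": 4, "p99_ms": 310, "cost_per_hr": 8.0},
    {"tp": 8, "gpus": 8, "p99_ms": 240, "cost_per_hr": 16.0},
]

def cheapest_meeting_sla(configs, sla_ms):
    """Return the cheapest config whose p99 latency meets the SLA, or None."""
    feasible = [c for c in configs if c["p99_ms"] <= sla_ms]
    return min(feasible, key=lambda c: c["cost_per_hr"]) if feasible else None

print(cheapest_meeting_sla(candidate_configs, sla_ms=600))  # the tp=2 config
```

In practice each candidate's latency comes from benchmarking, not a table, and the knobs include far more than tensor parallel size, but the shape of the search (SLA as a constraint, cost as the objective) is the same.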
Because that's usually a constant: you know the SLA that you need to hit, and then you try to find the lowest-cost version that hits all of these constraints. Usually you start with those things and do a bit of experimentation across some common configurations. You change the tensor parallel size, which is a form of parallelism...

Vibhu: I'd take it even deeper. First you've got to think: what model?

Kyle: Yes, of course. It's a multi-step design process, because, as you said, you can choose a smaller model and then do more test-time scaling, and it'll equal the quality of a larger model, because you're doing the test-time scaling, or you're adding a harness or something. So yes, it goes way deeper than that. But from the performance perspective, once you get to the model you need to host, you look at it and say: hey, I have this model, I need to serve it at this speed. What is the right configuration for that?

Nader: Did you guys see the recent paper, I just saw it a few days ago, that if you run [00:35:00] the same prompt twice, you're getting like double...

Kyle: Just try it again.

Nader: Yeah, exactly.

Vibhu: And you get a lot. But the key thing there is that you give the context of the failed try, right? So it takes a shot. And this has been basic guidance for quite a while: just try again. Did you try again? All advice in life.

Nader: It's a paper from Google, if I'm not mistaken, right?

Vibhu: Yeah, yeah. It's a cute little short paper, and it's just like: yeah, just try again, give it the context.

Kyle: Multi-shot. You just say: hey, take a little bit more information, try and fail.
Fail.

Vibhu: And that basic concept has gone pretty deep. There's self-distillation RL, where you do self-distillation, you do RL, and you have the past failure, and that gives some signal. So people say: try it again, not strong enough.

swyx: For listeners who've made it to here: Vibhu and I run a second YouTube channel for our paper club, where Vibhu just covered this.

Nader: Oh, that's awesome. I'll have to check it out.

swyx: Self-distillation and all that; that's why he's up to speed [00:36:00] on it. It's just a good practice. Everyone needs a paper club, where you read papers together and the social pressure kind of forces you to keep up.

Nader: There's a big inference reading group at NVIDIA. I feel so bad every time he shares it.

swyx: One of your guys is big in that... Ishan?

Kyle: Ishan's on my team, actually. Funny, there's an employee transfer between us. Ishan worked for Nader at Brev, and now he's on my team.

Nader: He was our head of AI. And then, yeah, once we got in...

swyx: Because I'm always looking for: okay, can I start another podcast that only does that thing? And I was trying to nudge Ishan into, like, is there something here? I mean, there are new inference techniques every day.

Kyle: You would actually be surprised at the number of blog posts you see.

swyx: There was a period where it was, like, Medusa, Hydra, Eagle...

Kyle: Now we have new forms of speculative decoding, or new...

swyx: What are you excited about?
Vibhu: And it's exciting when you guys put out something like Nemotron. 'Cause I remember the paper on Nemotron, [00:37:00] the amount of post-training tokens that the GPU-rich can just train on. And it was a hybrid state space model, right?

Kyle: Yeah. It's co-designed for the hardware.

Vibhu: Yeah, co-designed for the hardware. And one of the things was always that state space models don't scale as well when you do a conversion; the performance drops. And you guys are like, no, just keep training. And Nemotron shows a lot of that.

Nader: Also, something cool about Nemotron: it was released in layers, if you will, very similar to Dynamo. The pre-training and post-training datasets are released. The recipes on how to do it are released. The model itself is released, the full model; you just benefit from us turning on the GPUs. And there are companies... ServiceNow took the dataset and trained their own model, and we were super excited and celebrated that work.

Vibhu: Also, just to add: a lot of models don't put out base models. And for that "why has fine-tuning not taken off" question, you know, you can do your own training.

Kyle: Yeah, sure.

Vibhu: You guys put out base models. I think you put out everything.

Nader: I believe so. [00:38:00]

swyx: Basically, without base models...

Vibhu: Yeah. Base can be cancelable. Safety training.

swyx: Did we get a full picture of Dynamo? I don't know if we...

Nader: What I'd love is, you mentioned the three axes. Break down what prefill and decode are, and what optimizations we can get with Dynamo.
So to summarize on that three axis problem, right, there are three things that determine whether or not something can be done with inference, cost, quality, latency, right? Dynamo is supposed to be there to provide you like the runtime that allows you to pull levers to, you know, mix it up and move around the parade of frontier or the preto surface that determines is this actually possible with inference And AI todayNader: gives you the knobs.Kyle: Yeah, exactly. It gives you the knobs.Disaggregation Prefill vs DecodeKyle: Uh, and one thing that like we, we use a lot in contemporary inference and is, you know, starting to like pick up from, you know, in, in general knowledge is this co concept of disaggregation. So historically. Models would be hosted with a single inference engine. And that inference engine [00:39:00] would ping pong between two phases.There's prefill where you're reading the sequence generating KV cache, which is basically just a set of vectors that represent the sequence. And then using that KV cache to generate new tokens, which is called Decode. And some brilliant researchers across multiple different papers essentially made the realization that if you separate these two phases, you actually gain some benefits.Those benefits are basically a you don't have to worry about step synchronous scheduling. So the way that an inference engine works is you do one step and then you finish it, and then you schedule, you start scheduling the next step there. It's not like fully asynchronous. And the problem with that is you would have, uh, essentially pre-fill and decode are, are actually very different in terms of both their resource requirements and their sometimes their runtime.So you would have like prefill that would like block decode steps because you, you'd still be pre-filing and you couldn't schedule because you know the step has to end. 
So you remove that scheduling issue and then you also allow you, or you yourself, to like [00:40:00] split the work into two different ki types of pools.So pre-fill typically, and, and this changes as, as model architecture changes. Pre-fill is, right now, compute bound most of the time with the sequence is sufficiently long. It's compute bound. On the decode side because you're doing a full Passover, all the weights and the entire sequence, every time you do a decode step and you're, you don't have the quadratic computation of KV cache, it's usually memory bound because you're retrieving a linear amount of memory and you're doing a linear amount of compute as opposed to prefill where you retrieve a linear amount of memory and then use a quadratic.You know,Nader: it's funny, someone exo Labs did a really cool demo where for the DGX Spark, which has a lot more compute, you can do the pre the compute hungry prefill on a DG X spark and then do the decode on a, on a Mac. Yeah. And soVibhu: that's faster.Nader: Yeah. Yeah.Kyle: So you could, you can do that. You can do machine strat stratification.Nader: Yeah.Kyle: And like with our future generation generations of hardware, we actually announced, like with Reuben, this [00:41:00] new accelerator that is prefilled specific. It's called Reuben, CPX. SoKubernetes Scaling with GroveNader: I have a question when you do the scale out. Yeah. Is scaling out easier with Dynamo? Because when you need a new node, you can dedicate it to either the Prefill or, uh, decode.Kyle: Yeah. So Dynamo actually has like a, a Kubernetes component in it called Grove that allows you to, to do this like crazy scaling specialization. It has like this hot, it's a representation that, I don't wanna go too deep into Kubernetes here, but there was a previous way that you would like launch multi-node work.Uh, it's called Leader Worker Set. It's in the Kubernetes standard, and Leader worker set is great. 
It served a lot of people super well for a long period of time. But one of the things that it's struggles with is representing a set of cases where you have a multi-node replica that has a pair, right?You know, prefill and decode, or it's not paired, but it has like a second stage that has a ratio that changes over time. And prefill and decode are like two different things as your workload changes, right? The amount of prefill you'll need to do may change. [00:42:00] The amount of decode that you, you'll need to do might change, right?Like, let's say you start getting like insanely long queries, right? That probably means that your prefill scales like harder because you're hitting these, this quadratic scaling growth.swyx: Yeah.And then for listeners, like prefill will be long input. Decode would be long output, for example, right?Kyle: Yeah. So like decode, decode scale. I mean, decode is funny because the amount of tokens that you produce scales with the output length, but the amount of work that you do per step scales with the amount of tokens in the context.swyx: Yes.Kyle: So both scales with the input and the output.swyx: That's true.Kyle: But on the pre-fold view code side, like if.Suddenly, like the amount of work you're doing on the decode side stays about the same or like scales a little bit, and then the prefilled side like jumps up a lot. You actually don't want that ratio to be the same. You want it to change over time. So Dynamo has a set of components that A, tell you how to scale.It tells you how many prefilled workers and decoded workers you, it thinks you should have, and also provides a scheduling API for Kubernetes that allows you to actually represent and affect this scheduling on, on, on your actual [00:43:00] hardware, on your compute infrastructure.Nader: Not gonna lie. I feel a little embarrassed for being proud of my SVG function earlier.swyx: No, itNader: wasreallyKyle: cute. I, Iswyx: likeNader: it's all,swyx: it's all engineering. 
It's all engineering. Um, that's where I'mKyle: technical.swyx: One thing I'm, I'm kind of just curious about with all with you see at a systems level, everything going on here. Mm-hmm. And we, you know, we're scaling it up in, in multi, in distributed systems.Context Length and Co Designswyx: Um, I think one thing that's like kind of, of the moment right now is people are asking, is there any SOL sort of upper bounds. In terms of like, let's call, just call it context length for one for of a better word, but you can break it down however you like.Nader: Yeah.swyx: I just think like, well, yeah, I mean, like clearly you can engage in hybrid architectures and throw in some state space models in there.All, all you want, but it looks, still looks very attention heavy.Kyle: Yes. Uh, yeah. Long context is attention heavy. I mean, we have these hybrid models, um,swyx: to take and most, most models like cap out at a million contexts and that's it. Yeah. Like for the last two years has been it.Kyle: Yeah. The model hardware context co-design thing that we're seeing these days is actually super [00:44:00] interesting.It's like my, my passion, like my secret side passion. We see models like Kimmy or G-P-T-O-S-S. I'm use these because I, I know specific things about these models. So Kimmy two comes out, right? And it's an interesting model. It's like, like a deep seek style architecture is MLA. It's basically deep seek, scaled like a little bit differently, um, and obviously trained differently as well.But they, they talked about, why they made the design choices for context. Kimmy has more experts, but fewer attention heads, and I believe a slightly smaller attention, uh, like dimension. But I need to remember, I need to check that. Uh, it doesn't matter. But they discussed this actually at length in a blog post on ji, which is like our pu which is like credit puswyx: Yeah.Kyle: Um, in, in China. Chinese red.swyx: Yeah.Kyle: It's, yeah. 
So it's actually an incredible blog post. All the ML people I've seen on there are very brilliant, and the creators of Kimi K2 [00:45:00] actually talked about it there in the blog post. And they say, we actually did an experiment, right? Attention scales with the number of heads, obviously. If you have 64 heads versus 32 heads, you do half the work of attention. You still scale quadratically, but you do half the work. And they made a very specific sort of trade in their architecture. They basically said, hey, what if we gave it more experts, so we're going to use more memory capacity, but we keep the amount of activated experts the same. We increase the expert sparsity, so the ratio of experts activated to number of experts is smaller, and we decrease the number of attention heads.
Vibhu: And for context, what we had been seeing was you make models sparser instead. So no one was really touching heads.
Kyle: Well, they implicitly made it sparser.
Vibhu: Yeah, for Kimi, they did.
Kyle: Yes.
Vibhu: They also made it sparser. But basically what we were seeing was people were at the level of, okay, there's a sparsity ratio. You want more total parameters, less active, and that's sparsity. [00:46:00] But what you see from papers from the labs, like Moonshot and DeepSeek, is they go to the level of, okay, outside of just number of experts, you can also change how many attention heads, fewer attention layers, more attention layers.
Kyle: Layers, yeah.
Vibhu: So that's all basically tied together, coming back to hardware-model co-design, which is
Kyle: hardware, model, context co-design.
Vibhu: Yeah.
Kyle: Right. Like if you were training a model that was
really, really short context, or is really good at super-short-context tasks, you may design it in a way such that you don't care about attention scaling, because it hasn't hit the turning point where the quadratic curve takes over.
Nader: How do you consider attention or context as a separate part of the co-design? Like, how I would've thought of it is, hardware-model co-design would be hardware-model-context co-design,
Kyle: because the harness, and the context that is produced by the harness, is a part of the model once it's trained in.
Vibhu: Like even though towards the end you'll do long context, you're not changing architecture through training.
Kyle: I mean, you can try.
swyx: You're saying [00:47:00] everyone's training the harness into the model.
Kyle: I would say to some degree, or
swyx: there's co-design for the harness. I know there's a small amount, but I feel like not everyone has gone full send on this.
Kyle: I think it's important to internalize the harness that you think the model will be running into the model.
swyx: Yeah. Interesting. Okay. Bash is like the universal harness,
Kyle: right? I'll give an example here, or just an easy proof, right? If you can train against a harness, and you're using that harness for everything, wouldn't you just train with the harness to ensure that you get the best possible quality out of,
swyx: Well, I can provide a counterargument. Yeah, sure. Which is, you want to provide a generally useful model for other people to plug into their harnesses, right? So if you
Kyle: Yeah. Harnesses can be open source, right?
swyx: Yeah.
So I mean, that's effectively what's happening with Codex.
Kyle: Yeah.
swyx: But you may want a different search tool, and then you may have to name it differently, or,
Nader: I don't know how much people have pushed on this, but can you train a model... have people compared training a model for the harness versus [00:48:00] post-training for
swyx: I think it's the same thing. It's just extra post-training.
Nader: I see.
swyx: And so, I mean, Cognition of course does this, where if your tool is slightly different, you either force your tool to be like the tool that they trained for, hmm, or undo their training for their tool and then retrain. It's really annoying, and like,
Kyle: I would hope that eventually we hit a certain level of generality with respect to training new
swyx: tools. This is not AGI. This is really stupid, like, learn my tool b***h. I don't know if I can say that, but, you know, I think my point kind of is that I look at the slopes of the scaling laws, and this slope is not working, man. We are at a million token con
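The head-count tradeoff Kyle describes earlier can be checked with a back-of-envelope sketch. The dimensions below are illustrative, not Kimi K2's actual configuration, and projections are deliberately ignored:

```python
# A back-of-envelope sketch of the tradeoff discussed above: at a fixed
# head dimension, halving the attention head count halves attention work,
# while the quadratic growth in sequence length is unchanged.
def attention_flops(seq_len: int, n_heads: int, head_dim: int) -> int:
    # Per head: QK^T scores (seq^2 * head_dim) plus the attention-weighted
    # sum of values (another seq^2 * head_dim). Projection matmuls are
    # ignored; this only shows the scaling behavior.
    return n_heads * 2 * seq_len * seq_len * head_dim

# 64 heads vs 32 heads at the same head_dim: exactly 2x the work.
assert attention_flops(4096, 64, 128) == 2 * attention_flops(4096, 32, 128)
# Doubling sequence length quadruples the work regardless of head count.
assert attention_flops(8192, 32, 128) == 4 * attention_flops(4096, 32, 128)
print("halving heads halves attention work; length scaling stays quadratic")
```

This is why trading attention heads for more (equally sparse) experts is a context-motivated design choice: the head count is a linear knob on the one term that grows quadratically with context length.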

AWS for Software Companies Podcast
Ep197: From Dashboards to Agents: How Qlik Is Reinventing Data Analytics with AI


Play Episode Listen Later Mar 10, 2026 20:07


CTO Sam Pierson explains how Qlik's associative engine and agentic AI are transforming the way businesses uncover insights and what's next on the data frontier.
Topics Include:
Qlik is a 30-year-old data analytics and AI company with global customers.
Qlik's associative engine surfaces insights from data you aren't even examining.
A paper manufacturer optimized supply chain routing and navigated tariff complexity.
Generative AI can't easily query databases; Qlik's engine bridges that gap.
Qlik built an agentic layer enabling natural language conversations with your data.
MCP integration lets users access Qlik insights directly from tools like Claude Desktop.
Qlik runs entirely on AWS, with global regions built around local compliance requirements.
The AWS partnership prioritizes mutual success over transactional service relationships.
Agents will mature in 2026; some agentic bets will succeed, others will be refactored.
Fine-tuned, smaller language models will grow in importance alongside larger ones.
AI adoption requires restructuring workflows end-to-end, from product spec to go-to-market.
Qlik is hiring for curiosity and agency: people who experiment without waiting for permission.
Participants:
Sam Pierson – Chief Technology Officer, Qlik
See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/

Segurança Legal
#412 – A Constitution for AI


Play Episode Listen Later Mar 10, 2026 77:39


In this episode, Guilherme Goulart and Vinícius Serafim discuss "Claude's Constitution," the guidelines document published by Anthropic to steer the behavior of the Claude language model, covering central themes such as the anthropomorphization of AI, technology regulation, corporate responsibility, and the philosophical question of artificial agency versus artificial intelligence. The episode touches on key topics such as artificial intelligence, information security, privacy, AI ethics, corporate responsibility, language models, guardrails, jailbreaking, Constitutional AI, moral agents, artificial agency, the "stochastic parrot," and digital governance. You will discover why a private company's choice of the word "constitution" raises alarms about democratic legitimacy, understand the difference between giving natural-language instructions to a computational system and genuinely believing it possesses consciousness, and reflect on the real risks of ideologically paving a path that turns AI into a "moral agent," potentially reducing the responsibility of big tech companies. The debate also brings in references to the work of Luciano Floridi, the concept of the stochastic parrot, Asimov's Three Laws of Robotics, and the classic HAL 9000, connecting science fiction, philosophy, and law in a thought-provoking discussion. Subscribe to Segurança Legal on your favorite platform, leave a review, and share it with anyone interested in technology law and artificial intelligence. Follow the podcast on YouTube, Mastodon, Bluesky, Instagram, and TikTok. This description was produced from the podcast audio with the use of AI, with human review.  Visit our crowdfunding campaign and support us!  Check out the BrownPipe Consultoria blog and sign up for our mailing list. Try WhisperSafe: transcribe audio and record meetings directly on your computer, even offline. Fast, lightweight, and ready to use with any AI.
Use coupon SEGLEG50 for 50% off your subscription. Show notes: Foundational paper on the question of a constitution for AI – Constitutional AI: Harmlessness from AI Feedback; Claude's constitution; Claude's Strange Constitution, by Luiza Jarovsky; Statement from Dario Amodei on our discussions with the Department of War

Mixergy - Startup Stories with 1000+ entrepreneurs and businesses

In private conversations, I’m hearing a lot of founders describe how they’re starting to sell to AI agents, like OpenClaw's. Zapier has more traction doing that than anyone else I met. So, in my monthly podcast with Zapier's founder, Wade Foster, I asked him to show me how they’re doing it. Wade Foster is the co-founder and CEO of Zapier, the automation platform used by hundreds of thousands of businesses to connect over 8,000 apps. Since launching in 2011, Zapier has grown into a remote-first company with more than 800 employees and hundreds of millions in revenue. Today, Wade is leading Zapier's evolution into AI-powered automation, MCP integrations, and tools built for an agent-driven future. Sponsored by Zapier More interviews -> https://mixergy.com/moreint Rate this interview -> https://mixergy.com/rateint

MrCreepyPasta's Storytime
We Found an Emergency Distress Buoy floating in the Pacific by JLGoodwin1990 (2/2)


Play Episode Listen Later Mar 7, 2026 47:53 Transcription Available


Author, here! After the amazing reaction you guys had to part one, I hope you all enjoy the conclusion. Part one was the build-up; this is the rollercoaster ride down! And for those who have messaged me incessantly recently, asking if MCP is going to finish covering my series My Wife and I Went to Las Vegas for Our Honeymoon: I actually redid every part, including the first two he posted, to make it better. If you want to hear him narrate the redone story, let him know!

Raj Shamani - Figuring Out
AI Masterclass: Become an Expert at Claude, Gemini & Powerful AI Tools | Vaibhav | FO480 Raj Shamani


Play Episode Listen Later Mar 7, 2026 138:55


Check out Hostinger: https://www.hostinger.com/in/figuringoutai Code FIGURINGOUTAI - 20% off on 12-month and above plans. Valid until: 31st March 2026. Applicable on VPS and shared hosting as well. Figuring Out AI Community: https://figuringoutai.co/ Guest Suggestion Form: https://forms.gle/bnaeY3FpoFU9ZjA47 Disclaimer: This video is intended solely for educational purposes, and the opinions shared by the guest are his personal views. We do not intend to defame or harm any person/brand/product/country/profession mentioned in the video. Our goal is to provide information to help the audience make informed choices. The media used in this video are solely for informational purposes and belong to their respective owners. Order 'Build, Don't Talk' (in English) here: https://amzn.eu/d/eCfijRu Order 'Build Don't Talk' (in Hindi) here: https://amzn.eu/d/4wZISO0 Follow Our Whatsapp Channel: https://www.whatsapp.com/channel/0029VaokF5x0bIdi3Qn9ef2J Subscribe To Our Other YouTube Channels: https://www.youtube.com/@rajshamaniclips https://www.youtube.com/@RajShamani.Shorts

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

All speakers are announced at AIE EU, schedule coming soon. Join us there or in Miami with the renowned organizers of React Miami! Singapore CFP also open!We've called this out a few times over in AINews, but the overwhelming consensus in the Valley is that “the IDE is Dead”. In November it was just a gut feeling, but now we actually have data: even at the canonical “VSCode Fork” company, people are officially using more agents than tab autocomplete (the first wave of AI coding):Cursor has launched cloud agents for a few months now, and this specific launch is around Computer Use, which has come a long way since we first talked with Anthropic about it in 2024, and which Jonas productized as Autotab:We also take the opportunity to do a live demo, talk about slash commands and subagents, and the future of continual learning and personalized coding models, something that Sam previously worked on at New Computer. (The fact that both of these folks are top tier CEOs of their own startups that have now joined the insane talent density gathering at Cursor should also not be overlooked).Full Episode on YouTube!please like and subscribe!Timestamps00:00 Agentic Code Experiments00:53 Why Cloud Agents Matter02:08 Testing First Pillar03:36 Video Reviews Second Pillar04:29 Remote Control Third Pillar06:17 Meta Demos and Bug Repro13:36 Slash Commands and MCPs18:19 From Tab to Team Workflow31:41 Minimal Web UI Philosophy32:40 Why No File Editor34:38 Full Stack Cursor Debate36:34 Model Choice and Auto Routing38:34 Parallel Agents and Best Of N41:41 Subagents and Context Management44:48 Grind Mode and Throughput Future01:00:24 Cloud Agent Onboarding and MemoryTranscriptEP 77 - CURSOR - Audio version[00:00:00]Agentic Code ExperimentsSamantha: This is another experiment that we ran last year and didn't decide to ship at that time, but may come back to LM Judge, but one that was also agentic and could write code. 
So it wasn't just picking, but also taking the learnings from two models, or N models, that it was looking at, and writing a new diff. And what we found was that there were strengths to using models from different model providers as the base level of this process. Basically you could get almost like a synergistic output that was better than having a very unified bottom model tier.
Jonas: We think that over the coming months, the big unlock is not going to be one person with a model getting more done, like the water flowing faster. We'll be making the pipe much wider, and so parallelizing more, whether that's swarms of agents or parallel agents. Both of those are things that contribute to getting much more done in the same amount of time.
Why Cloud Agents Matter
swyx: This week, one of the biggest launches that Cursor's ever done is cloud agents. I think you had [00:01:00] cloud agents before, but this was like, you give Cursor a computer, right? Yeah. So basically they bought Autotab and then they repackaged it. Is that what's going on, or,
Jonas: that's a big part of it. Yeah. Cloud agents already ran in their own computers, but they were sort of sight-reading code. Yeah. And those computers were blank VMs, typically, that were not set up with the dev environment for whatever repo the agent was working on. One of the things that we talk about is, if you put yourself in the model's shoes, and you were seeing tokens stream by, and all you could do was sight-read code and spit out tokens and hope that you had done the right thing,
swyx: no chance.
Jonas: I'd be so bad. Like, obviously you need to run the code. And so that, I think, is probably not that contrarian of a take, but no one has done that yet.
And so giving the model the tools to onboard itself, and then use full computer use, end-to-end, pixels in, coordinates out, and have the cloud computer with different apps in it, is the big unlock that we've seen internally. Usage of this has gone from, oh, we use it for little copy changes, [00:02:00] to, no, we're really driving new features with this new type of agentic workflow.
swyx: Alright, let's see it.
Jonas: Cool.
Live Demo Tour
Jonas: So this is what it looks like on cursor.com/agents. This is one I kicked off a while ago. On the left-hand side is the chat, a very classic sort of agentic thing. The big new thing here is that the agent will test its changes. So you can see here it worked for half an hour. That is because it not only took time to write the tokens of code, it also took time to test them end to end. So it started dev servers, iterated when needed. And so that's one part of it: the model works for longer, and doesn't come back with an "I tried some things" PR, but an "I tested it" PR that's ready for your review. One of the other intuition pumps we use there is, if a human gave you a PR, asked you to review it, and they hadn't tested it, you'd also be annoyed, because you'd be like, only ask me for a review once it's actually ready. So that's what we've done with
Testing Defaults and Controls
swyx: Simple question I wanted to get out front. [00:03:00] Some PRs are way smaller, like just copy changes. Does it always do the video, or is it sometimes,
Jonas: Sometimes.
swyx: Okay. So what's the judgment?
Jonas: The model does it. So we do some default prompting on what types of changes to test. There's a slash command that people can do, called slash no-test, where if you do that, the model will not test,
swyx: but the default is test.
Jonas: The default is to be calibrated. So we tell it, don't test very simple copy changes, but test more complex things.
And then users can also write their agents.md and specify, like, if you're editing this subpart of my monorepo, never test it, 'cause that won't work, or whatever.
Videos and Remote Control
Jonas: So pillar one is the model actually testing. Pillar two is the model coming back with a video of what it did. We have found that in this new world, where agents can end-to-end write much more code, reviewing the code is one of these new bottlenecks that crop up. And so reviewing a video is not a substitute for reviewing code, but it is an entry point that is much, much easier to start with than glancing at [00:04:00] some giant diff. And so typically you kick one off, it's done, you come back, and the first thing that you would do is watch this video. So this is a video of it. In this case I wanted a tooltip over this button. And so it went and showed me what that looks like in this video. I think here, it actually used a gallery. So sometimes it will build Storybook-type galleries where you can see that component in action. And so that's pillar two, these demo videos of what it built. And then pillar number three is, I have full remote control access to this VM. So I can go in here, I can hover things, I can type, I have full control. And same thing for the terminal, I have full access. And so that is also really useful, because sometimes the video is all you need to see. And oftentimes, by the way, the video's not perfect. The video will show you, is this worth either merging immediately, or oftentimes, is this worth iterating with to get it to that final stage where I am ready to merge it in. So I can go through some other examples where the first video [00:05:00] wasn't perfect, but it gave me confidence that we were on the right track, and two or three follow-ups later, it was good to go. And then I also have full access here, where some things you just wanna play around with.
You wanna get a feel for what this is, and there's no substitute for a live preview. And the VNC kind of VM remote access gives you that.
swyx: Amazing. What, sorry, what is VNC?
Jonas: Just the remote desktop. Remote desktop. Yeah.
swyx: Sam, any other details that you always wanna call out?
Samantha: Yeah, for me the videos have been super helpful. I would say, especially in cases where a common problem for me with agents and cloud agents beforehand was under-specification in my requests. Our plan mode, and going really back and forth and getting a detailed implementation spec, is a way to reduce the risk of under-specification. But then, similar to how human communication breaks down over time, you have this risk where it's, okay, when I go through the trouble of pulling down and running this branch locally, I'm gonna see that, like, I said this should be a toggle and you have a checkbox, and why didn't you get that detail? And having the video up front makes that alignment, like you're talking about a shared artifact with the agent, very clear, which has been just super helpful for me.
Jonas: I can quickly run through some other examples.
Meta Agents and More Demos
Jonas: So this is a very frontend-heavy one. So one question I was
swyx: gonna say, is this only for frontend?
Jonas: Exactly. One question you might have is, is this only for frontend? So this is another example, where the thing I wanted it to implement was a better error message for saving secrets. So the cloud agents support adding secrets; that's part of what it needs to access certain systems. Part of onboarding that is giving access. This is cloud agents working on
swyx: cloud agents. Yes.
We have disabled, it's cloud agents starting more cloud agents. So we currently disallow that.Someday you might. Someday we might. Someday we might. So this actually was mostly a backend change in terms of the error handling here, where if the [00:07:00] secret is far too large, it would oh, this is actually really cool. Wow. That's the Devrel tools. That's the Devrel tools. So if the secret is far too large, we.Allow secrets above a certain size. We have a size limit on them. And the error message there was really bad. It was just some generic failed to save message. So I was like, Hey, we wanted an error message. So first cool thing it did here, zero prompting on how to test this. Instead of typing out the, like a character 5,000 times to hit the limit, it opens Devrel tools, writes js, or to paste into the input 5,000 characters of the letter A and then hit save, closes the Devrel tools, hit save and gets this new gets the new error message.So that looks like the video actually cut off, but here you can see the, here you can see the screenshot of the of the error message. What, so that is like frontend backend end-to-end feature to, to get that,swyx: yeah.Jonas: Andswyx: And you just need a full vm, full computer run everything.Okay. Yeah.Jonas: Yeah. So we've had versions of this. This is one of the auto tab lessons where we started that in 2022. [00:08:00] No, in 2023. And at the time it was like browser use, DOM, like all these different things. And I think we ended up very sort of a GI pilled in the sense that just give the model pixels, give it a box, a brain in a box is what you want and you want to remove limitations around context and capabilities such that the bottleneck should be the intelligence.And given how smart models are today, that's a very far out bottleneck. 
And so giving it its full VM, and having it be onboarded with the dev environment set up like a human would, has just been, for us internally, a really big step change in capability.
swyx: Yeah, I would say, let's call it a year ago, the models weren't even good enough to do any of this stuff.
Samantha: Even six months ago. Yeah.
swyx: So yeah, what people have told me is that round about Sonnet 4.5 is when this started being good enough to just automate fully by pixel.
Jonas: Yeah, I think it's always a question of when is good enough. I think we found in particular with Opus 4.5, 4.6, and Codex 5.3 that those were additional step [00:09:00] changes in the autonomy-grade capabilities of the model, to just go off and figure out the details and come back when it's done.
swyx: I wanna appreciate a couple details. One, TanStack Router. I see it. Yeah, I'm a big fan. Do you know Tanner, the guy behind TanStack?
Jonas: No.
swyx: Just some random lore, he's a buddy of mine. And then the other thing, if you switch back to the video.
Jonas: Yeah.
swyx: I wanna shout out this thing. Probably Sam did it, I don't know.
Jonas: The chapters.
swyx: What is this called? Yeah, this is called chapters. It's like a Vimeo thing, I don't know. But it's so nice, the design details. And obviously a company called Cursor has to have a beautiful cursor.
Samantha: And it is
swyx: the cursor.
Samantha: Cursor.
swyx: You see it branded? It's the Cursor cursor, yeah. Okay, cool. And then I complained to Evan. I was like, okay, but you guys branded everything but the wallpaper. And he was like, no, that's a Cursor wallpaper. I was like, what?
Samantha: Yeah, Rio picked the wallpaper, I think. The video, that's probably Alexi, and a few others on the team did the chapters on the video. Matthew, Frederico. There's been a lot of teamwork on this.
It's a huge effort.
swyx: I just like design details.
Samantha: Yeah.
swyx: And then when you download it, it adds a little Cursor, kind of TikTok-clip branding. [00:10:00]
Samantha: Yes. Yes.
swyx: So it's to make it really obvious it's from Cursor.
Jonas: We did the TikTok branding at the end. This was actually in our launch video; Alexi demoed the cloud agent that built that feature. Which was funny, because that was an instance of one of the things that's been a consequence of having these videos: we use best-of-N, where you run different models head to head on the same prompt. We use that a lot more, because one of the complications with doing that before was, you'd run four models and they would come back with some giant diff, like 700 lines of code, times four. What are you gonna do, review all that? It's horrible. But if you come back with four 20-second videos, yeah, I'll watch four 20-second videos. And then even if none of them is perfect, you can figure out which one of those you want to iterate with to get it over the line. Yeah. And so that's really been fun.
Bug Repro Workflow
Jonas: Here's another example that we found really cool, which we've actually since turned into a slash command as well, slash [00:11:00] repro, where for bugs in particular, the model, having full access to its own VM, can first reproduce the bug, make a video of the bug reproducing, fix the bug, and make a video of the bug being fixed, doing the same pattern workflow with, obviously, the bug not reproducing. And that has been the single category that has gone from, these types of bugs are really hard to reproduce and take tons of time locally, even if you try a cloud agent on it, are you confident it actually fixed it, to, when this happens, you'll merge it in 90 seconds or something like that. So this is an example where, let me see if this is the broken one or the, okay, this is the fixed one. Okay.
So we had a bug on cursor.com/agents where if you would attach images, then remove them, then still submit your prompt, they would actually still get attached to the prompt. Okay. And so here you can see Cursor is using its full desktop, by the way. This is one of the cases where if you just do browser-use-type stuff, you'll have a bad time, 'cause now it needs to upload files. It just uses its native file viewer to do that. And so you can see here it's uploading files. It's going to submit a prompt, and then it will go and open it up. So this is the meta part: this is Cursor agent prompting Cursor agent inside its own environment. And so you can see here, bug: there's five images attached, whereas when it submitted, it only had one image.
swyx: I see. Yeah. But you gotta enable that if you're gonna use Cursor agent inside Cursor.
Jonas: Exactly. And so here, this is then the after video, where it does the same thing. It attaches images, removes some of them, hits send. And you can see here, once this agent is up, only one of the images is left in the attachments. Yeah.
swyx: Beautiful.
Jonas: Okay. So, easy merge.
swyx: So yeah, when does it choose to do this? Because this is an extra step.
Jonas: Yes. I think I've not done a great job yet of calibrating the model on when to reproduce these things. Sometimes it will do it of its own accord. We've been conservative, where we try to have it only do it when it's [00:13:00] quite sure, because it does add some amount of time to how long it takes to work on it. But we also have added things like the slash repro command, where you can just do, "fix this bug, slash repro," and then it will know that it should first make you a video of it actually finding the bug and making sure it can reproduce it.
swyx: Yeah. One sort of ML topic this ties into is reward hacking, where you write tests that you update only so they pass.
So: first write the test, show me it fails, then make the test pass, which is a classic red-green,
Jonas: Yep.
swyx: like a
Jonas: TDD
swyx: thing. No, very cool. Was that the last demo? Is there
Jonas: Yeah.
swyx: anything I missed on the demos, or points that you think?
Jonas: I think that
Samantha: covers it well. Yeah.
swyx: Cool. Before we stop the screen share, can you gimme just a tour of the slash commands? What are the good ones?
Samantha: Yeah, we wanna increase discoverability around this too. I think that'll be a future thing we work on. But there's definitely a lot of good stuff now.
Jonas: We have a lot of internal ones that I think will not be that interesting. Here's an internal one that I've made. I don't know if anyone else at Cursor uses this one: fix bb.
Samantha: I've never heard of it.
Jonas: Yeah. [00:14:00] Fix Bug Bot. So this is a thing that we want to integrate more tightly on.
swyx: So you made it for yourself.
Jonas: I made this for myself. It's actually available to everyone on the team, but yeah, no one knows about it. But yeah, there will be Bug Bot comments, and Bug Bot has a lot of cool things. We actually just launched Bug Bot Auto Fix, where you can click a button, or change a setting, and it will automatically fix its own findings, and that works great in a bunch of cases. There are some cases where having the context of the original agent that created the PR is really helpful for fixing the bugs, because it might be, oh, the bug here is that this is a regression, and actually you meant to do something more like that. And so having the original prompt and all of the context of the agent that worked on it helps. And so here I could just do "fix," or we used to be able to do "fix bb," and it would do that. No-test is another one that we've had. Slash repro is in here; we mentioned that one.
Samantha: One of my favorites is cloud agent diagnosis. This is one that makes heavy use of the Datadog MCP. Okay.
And I [00:15:00] think Nick and David on our team wrote it. Basically, if there is a problem with a cloud agent, we'll spin up a bunch of subagents.
swyx: Like a single instance.
Samantha: Yeah. We'll take the ID as an argument and spin up a bunch of subagents, using the Datadog MCP to explore the logs and find all of the problems that could have happened with that. It takes the debugging time down from, potentially, you can do quick stuff quickly with the Datadog UI, to, again, a single agent call, as opposed to trawling through logs yourself.
Jonas: You should also talk about the stuff we've done with transcripts.
Samantha: Yes. So basically we've also done some things internally, and there'll be some versions of this as we ship publicly soon, where you can spin up an agent and give it access to another agent's transcript, to either debug something that happened, so act as an external debugger, or continue the conversation, almost like forking it.
swyx: A transcript includes all the chain of thought, for the 11 minutes here, 45 minutes there.
Samantha: Yeah, exactly. So basically acting as a secondary agent that debugs the first. So we've started to push more, and
swyx: they're all the same [00:16:00] code. It is just different prompts, but the same.
Samantha: Yeah. So basically the same cloud agent infrastructure, and then the same harness. And when we do things like including an external transcript as an attachment, there's some extra infrastructure that goes into piping that in. But for things like the cloud agent diagnosis, that's mostly just using the Datadog MCP.
Samantha: 'Cause we also launched MCPs along with this cloud agent launch — launched support for cloud agent MCPs.
swyx: Oh, that was drowned out.
Jonas: We'll be doing a bigger marketing moment for it next week. But you can now use MCPs and...
swyx: People will listen to it as well. Yeah.
Jonas: They'll...
Samantha: They'll be ahead. And I actually don't know if the Datadog MCP is publicly available yet — we've been beta testing it — but it's been one of my favorites to use.
swyx: I think that one's interesting for Datadog, 'cause Datadog wants to own that side. Interesting with Bits — I don't know if you've tried Bits.
Samantha: I haven't tried Bits.
swyx: Yeah.
Jonas: That's their cloud agent...
swyx: ...product, yeah. They want to be: we own your logs, so give us some part of the [00:17:00] self-healing software that everyone wants. But obviously Cursor has a strong opinion on coding agents, and you'd be taking away from that — which obviously you're going to do, and not every company's like Cursor. But it's interesting: if you're Datadog, what do you do here? Do you expose your logs via MCP and let other people do it, or do you try to own it, because it's extra business for you? It's an interesting one.
Samantha: It's a good question. All I know is that I love the Datadog MCP.
Jonas: And yeah, it's going to be no surprise that people will demand it, right?
Samantha: Yeah.
swyx: It's like any system-of-record company like this — how much do you give away? Cool. I think that's that for the cloud agents tour. And can we just talk about cloud agents — when did Cursor launch cloud agents? Do you know?
Jonas: In June last year.
swyx: June last year. So it's been slowly developing. Michael did a post where he showed this chart of agents overtaking tab.
swyx: And I'm like, wow, this is like the biggest transition in code...
Jonas: Yeah.
swyx: ...in, in [00:18:00] like the last...
Jonas: Yeah. I think that kind of got taken out of context. I think it's a very interesting...
swyx: Not at all. I think it's been highlighted by our friend Andrej Karpathy today.
Jonas: Okay.
swyx: Talk more about it — what does it mean? I just got given the Cursor Tab key.
Jonas: Yes. Yes.
swyx: That's...
Samantha: Cool.
swyx: I know, but it's gonna be put in a museum.
Jonas: It is.
Samantha: I have to say, I haven't used tab in a little bit myself.
Jonas: Yeah. I think what it looks like to code with AI — to generally create software, even if you want to go higher level — is changing very rapidly. Not a hot take, but from our vantage point at Cursor, I think one of the things that is probably underappreciated from the outside is that we are extremely self-aware about that fact. Cursor got its start in phase one, era one, of tab and autocomplete. And that was really useful in its time. But a lot of people have stopped looking at text files and editing code — we call it hand coding now, when you type out the actual letters.
swyx: Oh, that's cute.
Jonas: Yeah.
swyx: Oh, that's cute.
Jonas: You're so boomer. So boomer. [00:19:00] And so that, I think, has been a slowly accelerating — and now, in the last few months, rapidly accelerating — shift. And we think that's going to happen again with the next thing, where some of the pains around tab are: it's great, but I actually just want to give more to the agent, and I don't want to do one tab at a time.
Jonas: I want to just give it a task, and it goes off and does a larger unit of work, and I can lean back a little bit more and operate at that higher level of abstraction. That's going to happen again, where it goes from agents handing you back diffs — where you're in the weeds, giving it 30-second to three-minute tasks — to you giving it three-minute to 30-minute to three-hour tasks, and you're getting back videos and trying out previews, rather than immediately looking at diffs every single time.
swyx: Yeah. Anything to add?
Samantha: One other shift that I've noticed as our cloud agents have really taken off internally has been a shift from primarily individually driven development to almost this collaborative nature of development. For us, [00:20:00] Slack is actually almost like a development IDE, basically.
swyx: So maybe don't even build a custom UI — maybe that's like a debugging thing, but actually it's that.
Samantha: I feel like there's still so much left to explore there, but basically, for us, Slack is where a lot of development happens. We will have these issue channels, or just product discussion channels, where people are always @-ing Cursor, and that kicks off a cloud agent. And for us, at least, we have team follow-ups enabled. So if Jonas kicks off a Cursor agent in a thread, I can follow up with it and add more context. And so it turns into almost a discussion surface where people can collaborate on UI. Oftentimes I will kick off an investigation, and sometimes I even ask it to do git blame and then tag people who should be brought in, 'cause it can tag people in Slack, and then other people will come...
swyx: ...in. It can tag other people who are not involved in the conversation? It can just do @Jonas if, say, I was talking to it?
Samantha: Yeah.
swyx: That's cool. You guys should make a big deal out of that.
Samantha: I know.
Samantha: I feel like there's a lot more to do with our Slack surface area to show people externally. But yeah, basically it [00:21:00] can bring other people in, and then other people can also contribute to that thread, and you can end up with a PR — again, with the artifacts visible — and then people can be like, okay, cool, we can merge this. So for us, the IDE is almost moving into Slack, in some ways, as well.
swyx: I have the same experience, but it's not developers — it's me, designers, salespeople.
Samantha: Yeah.
swyx: So me on technical marketing and vision, the designer on design, and then salespeople on: here's the legal source of what we agreed on. And then they all just collaborate and correct the agents.
Jonas: I think what we found with these threads is that the work that is left — what the humans are discussing in these threads — is the nugget of what is actually interesting and relevant. It's not the boring details of where does this if-statement go. It's: do we want to ship this? Is this the right UX? Is this the right form factor? How do we make this more obvious to the user? It's those really interesting, higher-order questions that are so easy to collaborate on, while leaving the implementation to the cloud agent.
Samantha: Totally. And no more discussion of: am I gonna do this? Are you [00:22:00] gonna do this? Cursor's doing it. You just have to decide if you like it.
swyx: You guys have probably figured this out already, but you need a mute button. Like: Cursor, we're going to take this offline, but still online. We need to talk among the humans first, before you... It could stop responding to everything.
Jonas: Yeah. This is a design decision where, currently, Cursor won't chime in unless you explicitly @-mention it.
swyx: Yeah.
Jonas: Yeah.
Samantha: So it's not always listening. Yeah.
Jonas: It can see all the intermediate messages, though.
swyx: Have you done the recursive thing? Can Cursor add another Cursor, or spawn another Cursor?
Samantha: Oh...
Jonas: We've done some versions of this.
swyx: Because it can add humans.
Jonas: Yes. One of the other things we've been working on — an implication of generating the code being so easy — is that getting it to production is still harder than it should be. Broadly, you solve one bottleneck and three new ones pop up. So one of the new bottlenecks is getting into production, and we have a joke internally where you'll be talking about some feature and someone says, "I have a PR for that." [00:23:00] It's so easy to get to "I have a PR for that," but it's still relatively hard to get from "I have a PR for that" to "I'm confident and ready to merge this." And so over the coming weeks and months, that's a thing we think a lot about: how do we scale up compute for that pipeline of getting things from a first draft an agent did?
swyx: Isn't that what merge... isn't that what Graphite's for?
Jonas: Graphite is a big part of that. The cloud agent testing...
swyx: Is it fully integrated, or still different companies working on it?
Jonas: I think we'll have more to share there in the future, but the goal is to have a great end-to-end experience, where Cursor doesn't just help you generate code tokens — it helps you create software end to end. And review is a big part of that. Especially as models have gotten much better at writing code, generating code, we've felt that bottleneck crop up relatively more.
swyx: Sorry, this is completely unplanned, but I have people arguing, one, that you need AI to review AI, and then there's another school of thought where it's: no, [00:24:00] reviews are dead. Just show me the video.
Samantha: Yeah.
Samantha: I feel, again, for me, the video is often alignment, and then I often still want to go through a code review process.
swyx: Like still look at the files and everything.
Samantha: Yeah. There's a spectrum, of course. With the video, if it's really well done and it fully tests everything, you can feel pretty confident, but it's still helpful to look at the code. I also pay a lot of attention to Bug Bot. Bug Bot has been really highly adopted internally. We tell people: don't leave Bug Bot comments unaddressed, 'cause we have such high confidence in it. So people always address their Bug Bot comments.
Jonas: Once you've had two cases where you merged something, then went back later and there was a bug in it, and you were like, ah, Bug Bot had found that, I should have listened to Bug Bot — once that happens two or three times, you learn to wait for Bug Bot.
Samantha: Yeah. So I think for us there's that code-level review, where it's looking at the actual code, and then there's the feature-level review, where you're looking at the features. There are a whole number of different areas. There'll probably eventually be things like performance-level review, security [00:25:00] review — more aspects of how a feature might affect your codebase that you want to leverage an agent to help with.
Jonas: And some of those, like Bug Bot, will be synchronous, and you'll typically want to wait on them before you merge. But another thing that we're starting to see is that with cloud agents, as you scale up this parallelism and how much code you generate, ten-person startups need the DevEx and pipelines that a 10,000-person company used to need.
Jonas: And that looks like a lot of the things, I think, that 10,000-person companies invented in order to get that volume of software to production safely. So that's things like: release frequently, or release slowly; have different stages where you release; have checkpoints; automated ways of detecting regressions. And so I think we're going to need...
swyx: Stacked diffs, merge queues. Exactly.
Jonas: A lot of those things are going to be important going forward.
swyx: I think the majority of people still don't know what stacked diffs are. I have many friends at Facebook, and I'm pretty friendly with Graphite. I've just never needed it, 'cause I don't work on that large a team. And it's the democratization of: here's what we've already worked out at very large scale, and here's how it benefits you too. To me, one of the beautiful things about GitHub is that [00:26:00] it's actually useful to me as an individual solo developer, even though it's actually collaboration software.
Jonas: Yep.
swyx: And I don't think a lot of dev tools have figured that out yet — that transition from large down to small.
Jonas: Yeah. Cursor is probably an inverse story.
swyx: This is small down to...
Jonas: Yeah. Historically with Cursor, part of why we grew so quickly was that anyone on a team could pick it up — and in fact people would pick it up on the weekend for their side project and then bring it into work, 'cause they loved using it so much.
swyx: Yeah.
Jonas: And a thing that we've started working on a lot more — not us specifically, but as a company, other folks at Cursor — is making it really great for teams: making it so the tenth person that starts using Cursor on a team [00:27:00] is immediately set up. We launched Marketplace recently, so other people can configure what MCPs and skills — like plugins, so skills and MCPs — other people can configure that, so that my Cursor is ready to go and set up.
Jonas: Sam loves the Datadog MCP, and the Slack MCP you've also been using a lot.
Samantha: It's also pre-launch, but I feel like it's so good.
Jonas: Yeah. My Cursor should be configured with it — if Sam feels strongly, that's just amazing and required.
swyx: Is it automatically shared, or do you have to go and...?
Jonas: It depends on the MCP. Some are obviously authed per user — Sam can't auth my Cursor with my Slack MCP. But some are team-authed, and those can be set up by admins.
swyx: Yeah, that's cool. We had Aman on the pod when Cursor was five people, and everyone was like, okay, what's the thing? And usually it's something teams and org and enterprise — but it's actually working. Usually at that stage, when you're five, when you're just a VS Code fork, it's like: how do you get there? Will people pay for this? People do pay for it.
Jonas: Yeah. And for cloud agents, we expect [00:28:00] to have similar kinds of PLG dynamics, where, off the bat, we've seen a lot of adoption with smaller teams, where the codebases are not quite as complex to set up. If you need some insane Docker layer-caching thing for builds not to take two hours, that's going to take a little bit longer for us to support. Whereas if you have frontend, backend — one click, and agents can install everything they need themselves.
swyx: This is a good chance for me to ask some technical, check-the-box questions. Can I choose the size of the VM?
Jonas: Not yet. We are planning on adding that.
swyx: Obviously you want L, XL, XXL, whatever, right? Like the Amazon t-shirt-size menu.
Jonas: Yes, exactly. We'll add that.
swyx: Yeah. In some ways you basically have to become like an EC2 — you rent a box.
Jonas: You rent a box, yes. We talk a lot about "brain in a box." So Cursor — we want to be a brain in a box.
swyx: But is the mental model different?
swyx: Is it more serverless? Is it more persistent? Is it something else?
Samantha: We want it to be a bit persistent. The desktop should be [00:29:00] something you can return to even after some days. Maybe you go back and it's still thinking about a feature for some period of time.
swyx: So the full suspend-the-memory, bring it back, and keep going.
Samantha: Exactly.
swyx: That's an interesting one, because what I actually do want — from a Manus or OpenClaw or whatever — is to be able to log in with my credentials to the thing, but not actually store them in any secret store, 'cause this is my most sensitive stuff. This is my email, whatever. And just have it persist to the image — I don't know how it works under the hood — to rehydrate and then just keep going from there. But I don't think a lot of infra works that way. A lot of it's stateless, where you save it to a Docker image, and then it's only whatever you can describe in a Dockerfile, and that's it — that's the only thing you can clone multiple times in parallel.
Jonas: Yeah. We have a bunch of different ways of setting them up. So there's a Dockerfile-based approach. The main default way is actually snapshotting.
swyx: Like a Linux VM.
Jonas: Like a VM, right. You run a bunch of install commands and then you snapshot, more or less, the file system. And that gets you set up with everything [00:30:00] you would want when bringing a new VM up from that template, basically.
swyx: Yeah.
Jonas: And that's a bit distinct from what Sam was talking about with the hibernating and rehydrating, where that is a full memory snapshot as well.
Jonas: So there, if I had the browser open to a specific page and we bring that back, that page will still be there.
swyx: Was there any discussion internally, in building this stuff, about... every time you shoot a video, you show a little bit of the desktop and the browser, and it's not necessary. If you know you're just demoing a frontend application, why not just show the browser?
Samantha: Yeah, we do have some panning and zooming. It can decide, when it's actually recording and cutting the video, to highlight different things. I think we've played around with different ways of segmenting it, and there have been some different revs on it, for sure.
Jonas: Yeah. One of the interesting things is that the version you see now on cursor.com is actually like half of what we had at peak, where we decided to unship quite a few things. So, two of the interesting things to talk about. One is directly an answer to your [00:31:00] question, where we had a native browser that you would have locally — it was basically an iframe that, via port forwarding, could load the URL, could talk to localhost in the VM.
swyx: So, in your machine's browser?
Jonas: In your local browser, yeah. You would go to localhost:4000, and that would get forwarded to localhost:4000 in the VM via port forward.
swyx: Like an ngrok.
Jonas: Like an ngrok, exactly. We unshipped that because we felt that the remote desktop was sufficiently low latency and more general purpose. So we built Cursor Web, but we also built Cursor Desktop. And it's really useful to be able to have the full spectrum of things.
Jonas: And even for Cursor Web, as you saw in one of the examples, the agent was uploading files — and I couldn't upload files and open the file viewer if I only had access to the browser. And we've thought a lot about... this might seem funny coming from Cursor, where we started as this VS Code fork and inherited a lot of amazing things, but also a lot [00:32:00] of legacy UI from VS Code.

Minimal Web UI Surfaces

Jonas: And so with the web UI, we wanted to be very intentional about keeping it very minimal and exposing the right set of primitives — app surfaces, we call them — that are shared features of that cloud environment that you and the agent both use. The agent uses the desktop and controls it; I can use the desktop and control it. The agent runs terminal commands; I can run terminal commands. So that's our philosophy around it. The other thing that is maybe interesting to talk about is what we unshipped — and both of these things we may reship; we may decide at some point in the future that we've changed our minds on the trade-offs, or gotten it to a point where...
swyx: Put it out there — let users tell you they want it. Exactly. All right, fine.

Why No File Editor

Jonas: So one of the other things is actually a Files app. At one point during the process of testing this internally, next to the Git, Desktop, and Terminal apps — on the right-hand side of the tab bar there earlier — we used to also have a Files app, where you could see and edit files. And we actually felt that, in some [00:33:00] ways, by restricting and limiting what you could do there, people would naturally leave more to the agent and fall into this new pattern of delegating, which we thought was really valuable. There's currently no way in Cursor Web to edit these files.
swyx: Yeah. Except you open up the PR and go into GitHub and do the thing.
Jonas: Yeah.
swyx: Which is annoying.
Jonas: Just tell the agent.
swyx: I have criticized OpenAI for this.
swyx: Because OpenAI's Codex app doesn't have a file editor — it has a file viewer, but not a file editor.
Jonas: Do you use the file viewer a lot?
swyx: No. I understand, but sometimes I want it. The one way to do it is freaking going to... no, they have an "open in Cursor" button, or open in Antigravity, or open in whatever, and people pointed at that. I was part of the early testers group. People pointed at that and were like, this is a design smell — you actually want a VS Code fork that has all these things, but also a file editor. And they were like, no, just trust us.
Jonas: Yeah. I think we, as Cursor, will want to offer the [00:34:00] whole spectrum as a product. You want to be able to work at really high levels of abstraction, and double-click and see the lowest level. That's important. But I also think that you won't be doing that in Slack. And so there are surfaces and ways of interacting where, in some cases, limiting the UX capabilities makes for a cleaner experience that's more simple and drives people into these new patterns — where, even locally, as we were joking about earlier, people don't really edit files, hand code, anymore. And so we want to build for where that's going, not where it's been.
swyx: A lot of cool stuff. Okay, I have a couple more.

Full Stack Hosting Debate

swyx: So, observations about the design elements of these things. One of the things I'm always thinking about is that Cursor, and other peers of Cursor, start from the dev tools and work their way towards cloud agents. Other people — the Lovables and Bolts of the world — start with: here's the vibe-code full cloud thing. They were already cloud agents before anyone else was doing cloud agents, and they'll give you the full deploy platform, so they own the whole loop. They own all the infrastructure; they have the logs; they have the live site, [00:35:00] whatever. And you can do that cycle. Cursor doesn't own that cycle, even today.
swyx: You don't have the Vercel, you don't have the whatever deploy infrastructure that you're going to have — which gives you powers, because anyone can use it, and any enterprise, whatever your infra, I don't care. But it also gives you limitations as to how much you can actually fully debug end to end. I guess I'm just putting it out there: is there a future where there's a full-stack Cursor — cursorapps.com, where I host my Cursor site — which is basically a Vercel clone?
Jonas: I think that's an interesting question to be asking, and the logic that you laid out for how you would get there is logic that I largely agree with.
swyx: Yeah.
Jonas: I think right now we're really focused on what we see as the next big bottleneck, and because things like the Datadog MCP exist, I don't think that the best way we can help our customers ship more software is by building a hosting solution right now.
swyx: By the way, these are things I've actually discussed with some of the companies I just named.
Jonas: Yeah, for sure. Right now the big bottleneck is just getting the code out there. And also, [00:36:00] unlike a Lovable or a Bolt, we focus much more on existing software, and the zero-to-one greenfield is just a very different problem. Imagine going to a Shopify and convincing them to deploy on your deployment solution. That's very different, and I think it will take much longer to see how that works — it may never happen — relative to, oh, it's a zero-to-one app.
swyx: I'll say it's tempting, because, look, 50% of your apps are Vercel, Supabase, Tailwind, React. It's the stack; it's what everyone does. So it's kind of interesting.
Jonas: Yeah.

Model Choice and Auto Routing

swyx: The other thing is the model selector dying. Right now in cloud agents, it's stuck down at the bottom left. Sure, it's Codex High today, but do I care if it suddenly switched to Opus?
Probably not.
Samantha: We definitely want to give people a choice across models, because the meta changes very frequently. I was a big Opus 4.5 maximalist, and when Codex 5.3 came out, I hard-switched. So that's all I use now.
swyx: Yeah, agreed. But basically, when I use it in Slack, [00:37:00] Cursor does a very good job of exposing — if people go use it — here's the model we're using, and here's how you switch if you want. But otherwise it's abstracted away, which is beautiful, because then you actually should decide.
Jonas: Yeah, I think we want to be doing more with defaults, where we can suggest things to people. A thing we have in the editor, the desktop app, is Auto, which will route your request. I think we will want to do something like that for cloud agents as well; we haven't done it yet. We have people like Sam, who are very savvy and know exactly what model they want, and we also have people that want us to pick the best model for them — because we have amazing people like Sam, and we are the experts. We have both the traffic and the internal taste and experience to know what we think is best.
swyx: Yeah. I have this ongoing thesis of agent lab versus model lab. And to me, Cursor and other companies are examples of an agent lab that is building a new playbook that is different from a model lab, which is very GPU-heavy — although Cursor obviously has a research [00:38:00] team. And my thesis is that every agent lab is going to have a router, because you're going to be asked: what's what? I don't keep up every day — I'm not a Sam. So, using you as the arbiter of taste: put me on Cursor Auto. Is it free? It's not free.
Jonas: Auto's not free, but there are different pricing tiers. Yeah.
You decide from me based on all the other people you know better than me. And I think every agent lab should basically end up doing this because that actually gives you extra power because you like people stop carrying or having loyalty with one lab.Jonas: Yeah.Best Of N and Model CouncilsJonas: Two other maybe interesting things that I don't know how much they're on your radar are one the best event thing we mentioned where running different models head to head is actually quite interesting becauseswyx: which exists in cursor.Jonas: That exists in cur ID and web. So the problem is where do you run them?swyx: Okay.Jonas: And so I, I can share my screen if that's interesting. Yeahinteresting.swyx: Yeah. Yeah. Obviously parallel agents, very popal.Jonas: Yes, exactly. Parallel agentsswyx: in you mind. Are they the same thing? Best event and parallel agents? I don't want to [00:39:00] put words in your mouth.Jonas: Best event is a subset of parallel agents where they're running on the same prompt.That would be my answer. So this is what that looks like. And so here in this dropdown picker, I can just select multiple models.swyx: Yeah.Jonas: And now if I do a prompt, I'm going to do something silly. I am running these five models.swyx: Okay. This is this fake clone, of course. The 2.0 yeah.Jonas: Yes, exactly. But they're running so the cursor 2.0, you can do desktop or cloud.So this is cloud specifically where the benefit over work trees is that they have their own VMs and can run commands and won't try to kill ports that the other one is running. Which are some of the pains. These are allswyx: called work trees?Jonas: No, these are all cloud agents with their own VMs.swyx: Okay. ButJonas: When you do it locally, sometimes people do work trees and that's been the main way that people have set out parallel so far.I've gotta say.swyx: That's so confusing for folks.Jonas: Yeah.swyx: No one knows what work trees are.Jonas: Exactly. 
Jonas: I think we're phasing out worktrees.
swyx: Really?
Jonas: Yeah.
swyx: Okay.
Samantha: But yeah. One other thing I would say on the multi-model choice: [00:40:00] this is another experiment that we ran last year, and decided not to ship at that time, but may come back to — and there was an interesting learning that's relevant for these different model providers. It was something that would run a bunch of best-of-Ns, but then synthesize — basically run a synthesizer layer of models. That was another agent acting as an LLM judge, but one that was also agentic and could write code. So it wasn't just picking, but also taking the learnings from the models it was looking at and writing a new diff. And what we found was that, at the time at least, there were strengths to using models from different providers as the base level of this process. Basically, you could get an almost synergistic output that was better than having a very unified bottom model tier. So it was really interesting, because potentially, even in the future, when you maybe have one model ahead of the others for a little bit, there could be some benefit from having multiple top-tier models involved in a [00:41:00] model swarm, or whatever agent swarm you're doing — they each have strengths and weaknesses. Yeah.
Jonas: Andrej called this the council, right?
Samantha: Yeah, exactly. Oh, that's actually another internal command we have, that Ian wrote: /council.
swyx: Yes. This idea is in various forms everywhere. And for me, with the productization of it — what you guys have done is very flexible, but if I were to add another one on here, it would be too much.
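The best-of-N-plus-synthesizer flow Samantha describes — fan one prompt out to several models, then have an agentic judge combine the candidates into a new diff — can be sketched roughly like this. This is a hedged illustration, not Cursor's implementation; `call_model` and the model names are hypothetical stand-ins for real model calls:

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(model: str, prompt: str) -> str:
    # Placeholder for a real model call; returns a candidate "diff" string.
    return f"[{model}] diff for: {prompt}"

def best_of_n(models: list[str], prompt: str) -> list[str]:
    # Fan the same prompt out to several models in parallel; each candidate
    # is produced independently (in Cursor's case, on its own VM).
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        return list(pool.map(lambda m: call_model(m, prompt), models))

def synthesize(judge: str, candidates: list[str]) -> str:
    # The "synthesizer layer": an agentic judge that doesn't just pick a
    # winner but can combine learnings from all candidates into a new diff.
    merged = "\n".join(candidates)
    return call_model(judge, f"Combine the best parts of:\n{merged}")

candidates = best_of_n(["opus", "codex", "composer"], "fix flaky test")
final = synthesize("judge-model", candidates)
```

The interesting design point is the last step: the judge is itself an agent that writes code, so the output can be better than any single candidate.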
swyx: I... what, let's say...
Samantha: Ideally, it's something that the user can just choose, and it all happens under the hood, in a way where you just get the benefit of that process at the end — better output, basically — but you don't have to get too lost in the complexity of judging along the way.
Jonas: Okay.

Subagents for Context

Jonas: Another thing on the many-agents, parallel-agents front that's interesting — an idea that's been around for a while as well, and that has started working recently — is subagents. And so this is one other way to get agents with different prompts and different goals, different models, [00:42:00] different vintages, to work together: collaborate and delegate.
swyx: Yeah. I'm always looking for "this is the year of the blah," right? And I think one of the blahs is subagents. But I haven't used them in Cursor. Are they fully formed? Honestly, I'd like an intro, because: do I form them from new every time? Do I have fixed subagents? How are they different from slash commands? There are all these really basic questions that no one stops to answer for people, because everyone's just too busy launching.
Samantha: Honestly, you can see them in Cursor now if you just say, spin up like 50 subagents to...
swyx: So Cursor defines what subagents are.
Samantha: Yeah. So basically — I shouldn't speak for the whole subagents team; this is a different team that's been working on this — but our thesis, or the thing that we saw internally, is that they're great for context management for long-running threads, or if you're trying to just throw more compute at something. We've basically used an almost generic task interface, where the main agent can define [00:43:00] what goes into the subagent.
Samantha: So if I say "explore my codebase," it might decide to spin up an explore subagent, or it might decide to spin up five explore subagents.
swyx: But I don't get to set what those subagents are, right? It's all defined by the model.
Samantha: I think so. I'd actually have to refresh myself on the subagent interface.
Jonas: There are some built-in ones — the explore subagent is pre-built. But you can also instruct the model to use other subagents, and then it will. One other example of a built-in subagent — I actually just kicked one off in Cursor, and I can show you what that looks like.
swyx: Yes. Because I tried to do this in pure prompt space.
Jonas: So this is the desktop app.
swyx: And that's all you need to do, right?
Jonas: That's all you need to do. So I said, "use a subagent to explore," and — yeah, I can even click in and see what the subagent is working on here. It ran some find command, and this is Composer under the hood. Even though my main model is Opus, it does smart routing — in this instance, the exploration requires reading a ton of things, and so a faster model is really useful to get an [00:44:00] answer quickly. But this is what subagents look like. And I think we want to do a lot more to expose hooks and ways for people to configure these.
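The delegation pattern being demoed — a parent agent hands "explore" to a subagent, and only a short summary crosses back into the parent's context — can be sketched as below. The function names and summary format are illustrative assumptions, not Cursor's actual subagent interface:

```python
def run_subagent(task: str, files: list[str]) -> str:
    # A subagent gets its own fresh context: it can read many files
    # (a long trajectory) without polluting the parent's context window.
    trajectory = [f"read {f}" for f in files]
    # Only a short final message crosses the boundary back to the parent;
    # the full trajectory is discarded rather than globally compacted.
    return f"{task}: explored {len(trajectory)} files, summary ready"

def parent_agent(goal: str, codebase: list[str]) -> list[str]:
    context: list[str] = [f"goal: {goal}"]
    # Delegate exploration; the parent only ever sees the summary line.
    summary = run_subagent("explore", codebase)
    context.append(summary)
    return context

ctx = parent_agent("add auth", ["a.py", "b.py", "c.py"])
```

The point of the sketch is the boundary: the subagent's whole rollout compresses to one message, which is the "neat boundary" Jonas describes next.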
That's a really neat boundary at which to compress that rollout and testing into a final message that the subagent writes, which then gets passed into the parent, rather than having to do some global compaction or something like that.

swyx: Awesome. Cool. While we're in the subagents conversation, I can't do a Cursor conversation and not talk about the Wilson stuff. What is that? He built a browser. He built an OS. And he [00:45:00] experimented with a lot of different architectures and basically ended up reinventing the software engineer org chart. This is all cool, but what's your take? Are there any behind-the-scenes stories about that whole adventure?

Samantha: Some of those experiments have found their way into a feature that's available in cloud agents now, the long-running agent mode. Internally, we call it grind mode. And I think there's some hint of grind mode accessible in the picker today, 'cause you can choose "grind until done." And so that was really the result of experiments that Wilson started in this vein, where... I think the Ralph Wiggum loop was floating around at the time, but it was something he also independently found and was experimenting with. And that was what led to this product surface.

swyx: And it's just the simple idea of: have criteria for completion, and do not stop until you complete?

Samantha: There's a bit more complexity in our implementation as well. You have to start out by aligning, and there's a planning stage where it will work with you, and it will not start grind execution mode until it's decided that the [00:46:00] plan is amenable to both of you, basically.

swyx: "I refuse to work until you make me happy."

Jonas: We found that's really important, where people would give a very underspecified prompt and then expect it to come back with magic. And if it's gonna go off and work for three minutes, that's one thing.
When it's gonna go off and work for three days, you probably should spend a few hours upfront making sure that you have communicated what you actually want.

swyx: Yeah. And just to really drive home the point: we really mean three days?

Jonas: Oh yeah. We've had three-day runs, no human intervention whatsoever.

Samantha: I don't know what the record is, but there have been long-running grinds.

Jonas: And so the thing that is available in Cursor, the long-running agent, is, if you want to think about it very abstractly, like one worker node. Whereas what built the browser is a society of workers and planners and different agents collaborating. Because we started building the browser with one worker node; at the time, that was just the agent. And it became more than one worker node when we realized that the throughput of the system was not where it needed to be [00:47:00] to get something as large a scale as the browser done.

swyx: Yeah.

Jonas: And so this has also become a really big mental model for us with cloud agents: there are the classic engineering latency-throughput trade-offs. And so, you know, the code is water flowing through a pipe. We think that over the coming months, the big unlock is not going to be one person with a model getting more done, like the water flowing faster; we'll be making the pipe much wider and pushing more through, whether that's swarms of agents or parallel agents. Both of those are things that contribute to getting much more done in the same amount of time, but any one of those tasks doesn't necessarily need to get done that quickly. And throughput is this really big thing, where if you see the system of a hundred concurrent agents outputting thousands of tokens a second, you can't go back. You see a glimpse of the future, where obviously there are many caveats. Like, no one is using this browser.
There's like a bunch of things not quite right yet, but we are going to get to systems that produce real production [00:48:00] code at the scale much sooner than people think. And it forces you to think what even happens to production systems. Like we've broken our GitHub actions recently because we have so many agents like producing and pushing code that like CICD is just overloaded. ‘cause suddenly it's like effectively weg grew, cursor's growing very quickly anyway, but you grow head count, 10 x when people run 10 x as many agents.And so a lot of these systems, exactly, a lot of these systems will need to adapt.swyx: It also reminds me, we, we all, the three of us live in the app layer, but if you talk to the researchers who are doing RL infrastructure, it's the same thing. It's like all these parallel rollouts and scheduling them and making sure as much throughput as possible goes through them.Yeah, it's the same thing.Jonas: We were talking briefly before we started recording. You were mentioning memory chips and some of the shortages there. The other thing that I think is just like hard to wrap your head around the scale of the system that was building the browser, the concurrency there.If Sam and I both have a system like that running for us, [00:49:00] shipping our software. The amount of inference that we're going to need per developer is just really mind-boggling. And that makes, sometimes when I think about that, I think that even with, the most optimistic projections for what we're going to need in terms of buildout, our underestimating, the extent to which these swarm systems can like churn at scale to produce code that is valuable to the economy.And,swyx: yeah, you can cut this if it's sensitive, but I was just Do you have estimates of how much your token consumption is?Jonas: Like per developer?swyx: Yeah. Or yourself. I don't need like comfy average. I just curious. 
Samantha: I feel like, for a while, I wasn't an admin on the usage dashboard, so I wasn't able to actually see.

swyx: Mine has gone up.

Samantha: Oh yeah. But in terms of how much work I'm doing... I have no worries about developers losing their jobs, at least in the near term. 'Cause I feel like that's a more broad discussion.

swyx: Yeah. You went there; I wasn't going there. I was just asking, like, how much more are you using?

Samantha: There's so much stuff to be built. And so I feel like I'm basically just [00:50:00] trying to constantly... I have more ambitions than I did before.

swyx: Personally?

Samantha: Yes. So I can't speak to the broader thing, but for me, I'm busier than ever before. I'm using more tokens and I am also doing more things.

Jonas: Yeah. I don't have the stats for myself, but I think broadly, a thing that we've seen, and that we expect to continue, is Jevons paradox.

swyx: You can't do a podcast without saying it.

Jonas: Exactly. We've done it. Now we can wrap; we've said the words. Phase one, tab autocomplete: people paid like 20 bucks a month, and that was great. Phase two, where you were iterating with these local models: today people pay like hundreds of dollars a month. I think as we think about these highly parallel agents running off for a long time in their own VM system, we are already at that point where people will be spending thousands of dollars a month per human, and I think potentially tens of thousands and beyond. It's not that we are greedy for capturing more money; what happens is just that individuals get that much more leverage. And if one person can do as much as 10 people, yeah.
That tool that allows them to do that is going to be tremendously valuable [00:51:00] and worth investing in, and worth taking the best thing that exists.

swyx: One more question on Cursor in general, and then it's open-ended for you guys to plug whatever you want. How is Cursor hiring these days?

Samantha: What do you mean by how?

swyx: Well, obviously LeetCode is dead.

Samantha: Oh, okay.

swyx: Everyone says work trial. Different people have different levels of adoption of agents. Some people have really adopted them and can be much more productive, but other people, you just need to give them a little bit of time. And sometimes they've never lived in a token-rich place like Cursor. And once you live in a token-rich place, you just work differently, but you need to have done that. Anyway, it's open-ended: how has agentic engineering, agentic coding, changed your opinions on hiring? Are there any broad insights?

Jonas: Basically you're asking this for other people, right?

swyx: Yeah, totally. And to hear Sam's opinion.

Jonas: We haven't talked about this, the two of us. I think that we don't see being great at the latest thing in AI coding as a prerequisite. I do think it's a sign that people are keeping up and [00:52:00] curious and willing to upskill themselves in what's happening, because, as we were talking about, the last three months, the game has completely changed. Like, what I do all day is very different.

swyx: It's my job and I can't keep up.

Jonas: Yeah, totally. I do think that still, as Sam was saying, the fundamentals remain important in the current age, and being able to go and double-click down.
And models today do still have weaknesses, where if you let them run for too long without cleaning up and refactoring, the code will get sloppy and there'll be bad abstractions. And so you still do need humans that have built systems before, know good patterns when they see them, and know where to steer things.

Samantha: I would agree with that. I would say, again, Cursor also operates very quickly, and leveraging agentic engineering is probably one reason why that's possible in this current moment. I think in the past it was just people coding quickly, and now there are people who use agents to move faster as well. So our process will always select for that ability to make good decisions quickly and move well in this environment. And I think being able to [00:53:00] figure out how to use agents to help you do that is an important part of it too.

swyx: Yeah. Okay. The fork in the road: either predictions for the end of the year, if you have any, or plugs.

Jonas: Predictions are not going to go well.

Samantha: I know, it's hard.

swyx: They're so hard to get right. It's okay.

Jonas: One other plug that may be interesting, that I feel like we touched on but haven't talked a ton about, is a thing that these new interfaces and this parallelism enable: the ability to hop back and forth between threads really quickly. And so a thing that we have...

swyx: You wanna show something?

Jonas: Yeah, I can show something. A thing that we have felt with local agents is this pain around context switching. You have one agent that went off and did some work and another agent that did something else. And so here, I just have three tabs open, let's say, but I can very quickly hop in here. This is an example I showed earlier, but the actual workflow here I think is really different in a way that may not be obvious, where I start t
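The "grind until done" loop discussed in this transcript, align on a plan first, then iterate until explicit completion criteria pass, can be sketched generically. Everything below is a hypothetical illustration of the pattern, not Cursor's implementation.

```python
# Generic sketch of a "grind until done" loop: agree on a plan first,
# then keep iterating until the completion check passes or a budget
# runs out. All names are hypothetical, not Cursor's implementation.

from typing import Callable

def grind(
    plan_approved: Callable[[str], bool],  # human sign-off on the plan
    step: Callable[[int], None],           # one unit of agent work
    done: Callable[[], bool],              # explicit completion criteria
    max_steps: int = 1000,                 # budget so it cannot run forever
) -> bool:
    plan = "proposed plan: build, test, verify"
    # Planning stage: refuse to start executing until the plan is agreed.
    if not plan_approved(plan):
        raise RuntimeError("no aligned plan; not starting grind mode")
    for i in range(max_steps):
        if done():                         # check criteria before each step
            return True
        step(i)
    return done()                          # ran out of budget

# Usage: a toy task that "completes" after three steps.
progress = []
ok = grind(
    plan_approved=lambda plan: True,
    step=lambda i: progress.append(i),
    done=lambda: len(progress) >= 3,
)
print(ok, progress)  # True [0, 1, 2]
```

The budget and the up-front approval gate are the two guardrails the conversation keeps coming back to: an underspecified prompt should be rejected before a three-day run starts, not discovered afterwards.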

Semaphore Uncut
Product News: OAuth Authentication for the Semaphore MCP Server

Semaphore Uncut

Play Episode Listen Later Mar 6, 2026 2:06


We're preparing a new update for the Semaphore MCP server that will make it easier for developers to connect AI agents and developer tools. The focus of this update is authentication.

Today, connecting an agent to the MCP server typically requires using a long-lived API token. While this works well, it also means developers need to generate credentials, store them in configuration files, and manage them manually.

In our next release, coming next week, we're introducing OAuth authentication support for the MCP server. This will make connecting agents and developer tools significantly simpler. Instead of generating and storing API tokens, developers will be able to authenticate through a familiar OAuth flow. When configuring an agent, a browser window opens, you log in, and approve access to the MCP server. Once approved, the connection is established automatically.

This approach removes the need to manage long-lived credentials and makes integrations easier to set up. It also improves compatibility with modern agentic development tools. Some tools have limitations when working with static API tokens, and OAuth removes those barriers.

Read more on our blog.

Pete Miloravac
The Semaphore Team
https://semaphore.io

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit semaphoreio.substack.com
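The browser-based flow described above is the standard OAuth authorization-code pattern that MCP clients typically automate. As a rough sketch of the piece a client prepares before opening the browser, here is how a PKCE challenge and authorization URL can be built; the endpoint, client id, and callback below are placeholders, not Semaphore's actual values, and only the PKCE math (RFC 7636, S256) is standard.

```python
# Illustrative sketch of the client-side setup for an OAuth
# authorization-code flow with PKCE. URLs and ids are placeholders.

import base64
import hashlib
import secrets
from urllib.parse import urlencode

def make_pkce_pair() -> tuple[str, str]:
    # verifier: random, base64url, unpadded; challenge: S256 of verifier
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def build_authorize_url(base: str, client_id: str, redirect_uri: str, challenge: str) -> str:
    params = {
        "response_type": "code",       # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,  # local callback the agent listens on
        "code_challenge": challenge,
        "code_challenge_method": "S256",
    }
    return f"{base}?{urlencode(params)}"

verifier, challenge = make_pkce_pair()
url = build_authorize_url(
    "https://example-mcp-server.test/oauth/authorize",  # placeholder
    "example-agent", "http://127.0.0.1:8765/callback", challenge,
)
print(url)  # the agent opens this in a browser; after approval it
            # exchanges the returned code (plus the verifier) for tokens
```

This is why the user never handles a credential by hand: the long-lived secret is replaced by a one-time code exchange that the tooling performs automatically.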

Supermanagers
AI Reveals Competitor Content Strategies in Minutes with Chris Long

Supermanagers

Play Episode Listen Later Mar 5, 2026 46:43


Chris Long (formerly at Go Fish Digital, now co-founder of Nectiv Digital) explains how AI is reshaping search from two angles: (1) operational automation (briefs, research, internal linking, refresh workflows) and (2) shifting buyer behavior, where people increasingly start discovery in LLMs and use Google more as a verification / reputation check. He demos how MCP connectors let you query Ahrefs and Google Analytics conversationally (often in Claude), then blend datasets to generate competitive insights, keyword clustering, and strategy gaps, without living inside traditional dashboards.

Timestamps
0:00 — Intro: SEO vs AEO/GEO and why AI is changing the game
0:20 — Two AI impacts: automating SEO work + changing how buyers discover products
1:50 — Google becomes “verification” while LLMs become discovery (especially in B2B)
3:00 — “WebMCP” concept: standard rails so agents can reliably take actions on websites
5:25 — Optimizing for agents (treating them like VIP visitors) and what that means for sites
6:15 — Why LLM/agent usage is hard to measure (clicks vs logs vs self-reported attribution)
10:00 — Nectiv's “build first” approach: tools/workflows before hiring more people
14:00 — Demo: Ahrefs MCP in Claude for competitor insights + content strategy patterns
27:45 — Demo: Google Analytics MCP (and why it's a relief vs GA4's interface)
35:50 — Blending Ahrefs + GA data to generate strategy gaps and page ideas
39:00 — AEO tooling landscape: LLM trackers (Profound, Athena) + automation (n8n, AirOps)
41:15 — Autonomous agents (OpenClaw) and the future of “persistent” task completion
45:15 — Where to find Chris (LinkedIn + Nectiv Digital)

Tools & technologies mentioned
SEO / AEO / GEO — Approaches to improving visibility in traditional search and AI-generated answers.
LLMs (Large Language Models) — Used for research/discovery; increasingly the first stop before Google.
Agents / Agentic browsing — Software that navigates websites and completes actions (forms, carts, checkout).
WebMCP (as discussed) — Structured markup/standardization so agents can precisely interact with site elements.
MCP (Model Context Protocol) connectors — Connectors that let AI query external tools via natural language.
Ahrefs — SEO data platform (traffic estimates, backlinks, top pages, competitor research).
Claude (web + Claude Code) — Used for data-heavy work and debugging MCP setups.
ChatGPT — Mentioned as preferred for more knowledge-based tasks compared to data analysis.
Google Analytics 4 (GA4) — Web analytics; MCP access can reduce reliance on the GA4 UI.
Server access logs — Useful for identifying agent/bot activity not visible in standard analytics reports.
BigQuery — Intermediary data warehouse for querying analytics data more flexibly.
Slack — Used for capturing “how did you hear about us?” attribution signals.
Profound — LLM visibility/brand mention tracking tool.
Athena — Another LLM visibility tracker discussed as more data-driven/scalable.
n8n — Workflow automation for content engineering pipelines.
AirOps — Automation/content workflow tooling mentioned alongside n8n.
OpenClaw — Referenced as an autonomous agent tool example.

Subscribe at thisnewway.com to get the step-by-step playbooks, tools, and workflows.

The Generative AI Meetup Podcast
AI Matches Human Intelligence, Pentagon Drama, and the Rise of Agent Swarms

The Generative AI Meetup Podcast

Play Episode Listen Later Mar 5, 2026 99:07 Transcription Available


Youtube Channel: https://www.youtube.com/@GenerativeAIMeetup
Mark's Travel Vlog: https://www.youtube.com/@kumajourney11
Mark's Personal Youtube Channel: https://www.youtube.com/@markkuczmarski896
Attend a live event: https://genaimeetup.com/
Shashank's LinkedIn: https://www.linkedin.com/in/shashu10/
Novacut: https://novacut.ai

Mark and Shashank break down the latest developments in AI from their travels in Fukuoka and Seychelles. They cover Gemini 3.1 Pro matching human performance on the ARC-AGI-1 benchmark at a fraction of the cost, the upcoming ARC-AGI-3 video game-style test, and why only three US companies (OpenAI, Anthropic, Google) seem to be pushing state-of-the-art right now while Meta and xAI deal with leadership shakeups. The conversation moves to OpenAI's GPT 5.3 Codex Spark model running on Cerebras hardware for lightning-fast inference, Abu Dhabi's M42 initiative sequencing 700,000+ genomes and centralizing health records for AI-driven healthcare, and the viral OpenClaw incident where an AI agent wrote a hit piece on a human open-source maintainer who rejected its pull request. They also discuss the Anthropic vs. Pentagon drama over autonomous weapons and mass surveillance restrictions, an ex-Google Maps PM who vibe-coded a Palantir-style intelligence dashboard in a weekend, and their hands-on experiences with Claude Code, Codex, Cursor, and MCP integrations. The episode wraps with thoughts on agent swarms, the human-in-the-loop problem for taste-driven tasks, and whether we're close to the first solo-founder billion-dollar company powered entirely by AI agents.

Manufacturing Hub
Ep. 251 - Ignition 8.3 ProveIt How Inductive Automation Scales Multi Site Factories w/ MQTT and UNS

Manufacturing Hub

Play Episode Listen Later Mar 5, 2026 63:12


In this episode of Manufacturing Hub, Vlad and Dave sit down with Travis Cox and Kevin McCluskey from Inductive Automation to unpack what was actually proven at ProveIt and why it matters for teams trying to modernize plants without building a fragile mess of point to point integrations. If you have ever looked at a shiny demo and wondered what the real architecture looks like, how it scales beyond a single line, and what it takes to roll out across multiple sites without turning every change into a high risk event, this conversation is for you.Travis and Kevin walk through their ProveIt Enterprise B build and the thinking behind it. The core idea is simple but powerful: treat the factory like a system that needs a shared digital infrastructure, built on open standards, where data is contextualized and reusable. They break down how they used Ignition Edge close to PLCs for resiliency, local HMIs, and disciplined data modeling, then moved data through MQTT into a Unified Namespace so multiple applications can consume the same trusted signals and context. This is the difference between “we can connect to anything” and “we can scale without rewriting everything every time the business changes.” Open standards show up repeatedly in the conversation because ProveIt is specifically designed to force interoperability and practical implementation tradeoffs. Inductive Automation has also written about ProveIt as a place where MQTT, OPC UA, and SQL show up as real foundations rather than slogans.From there, the episode gets into the part that should make both OT and IT teams pay attention: modern deployment practices applied to industrial applications. 
Kevin outlines a clear maturity path from a single designer workflow to version control, then to containerized deployments, and finally to full GitOps style promotion across dev, staging, and production using tools like Argo CD, Helm, Kubernetes, and release promotion concepts that look like what the software world has used for years. Argo CD is explicitly built around Git repositories as the source of truth for desired state, which is exactly why it fits this style of deployment. The live portion of the conversation demonstrates how fast this can get when the infrastructure is treated as code: they spin up a brand new “site four” by submitting a form, generating a pull request, merging it, and letting the pipeline do the rest.

Timestamps
00:00 Welcome back and why this ProveIt recap matters
01:35 Meet Travis Cox and Kevin McCluskey from Inductive Automation
03:10 What ProveIt is and the key vendor questions it forces
05:20 Enterprise B architecture overview from PLC to Edge to site to enterprise
07:30 HMI walkthrough across liquid processing, filling, packaging, palletizing
09:05 Why deploy Ignition Edge instead of only a centralized site gateway
12:05 Design once, reuse everywhere and what that means for scaling quickly
14:35 On-prem realities versus cloud infrastructure in the ProveIt environment
17:10 MCP, n8n workflows, and bringing live operational context into AI
20:40 i3X-style API access to models, history, and alarms for interoperability
23:15 GitHub, Docker Compose, Helm, Kubernetes, Argo CD, Cargo and GitOps promotion
36:55 Spinning up a new site live and what it changes for multi-site rollouts

About the hosts
Vlad Romanov is an electrical engineer and MBA who has spent over a decade building and modernizing manufacturing systems across industrial automation, controls, and plant operations.
Through Joltek, Vlad works with manufacturers to assess current-state OT foundations, reduce modernization risk, improve reliability, and build internal capability through practical training and standards that stick.

Dave Griffith co-hosts Manufacturing Hub and brings a practitioner lens focused on what works on the plant floor, how architectures survive real constraints, and how industrial teams can modernize without breaking production.

About the guests
Travis Cox is Chief Technology Evangelist at Inductive Automation and has spent over two decades helping customers and partners design scalable architectures, apply best practices, and deliver real solutions with Ignition.
Kevin McCluskey is Chief Technology Architect at Inductive Automation and works with organizations on architecture decisions, platform direction, and enabling the next generation of industrial applications.

Learn more about Joltek
https://www.joltek.com/services
https://www.joltek.com/book-a-modernization-consultation

MrCreepyPasta's Storytime
We Found an Emergency Distress Buoy floating in the Pacific by JLGoodwin1990 (1/2)

MrCreepyPasta's Storytime

Play Episode Listen Later Mar 4, 2026 47:45 Transcription Available


Author here! This is an older, more cosmic underwater horror story of mine from two years ago, a two-parter that got a lot of positive reaction on Reddit. It has some flaws, but I hope you enjoy it, and the upcoming second part, as much as you did my Abandoned Ship, Exploding Whale, and Crater Lake stories that MCP has done. And also, I hope you caught the tributes to horror movie characters, directors, and writers with names like Carpenter, Alten, King and Windows.

Windows Weekly (MP3)
WW 973: Bob's Rumor Store - ASUS & Dell Unveil Windows 365 Cloud PC Devices

Windows Weekly (MP3)

Play Episode Listen Later Mar 4, 2026 112:01 Transcription Available


Can Microsoft's push for cloud PCs and AI-powered agents redefine where and how we work? If you keep to the defaults, Windows 11 is secure. Copilot+ PC is even more secure. But you can take additional steps to secure it either way, and you should. Plus, Paul's been trying to play different types of games, and Resident Evil Requiem is better (in his opinion) than Silent Hill f and Silent Hill 2 remake... if you want a horror game. Also, there's a cheaper new Audible plan thanks to Spotify! Windows 11 Shenanigans? If you use a third-party AI client in Edge Canary... you will not be amused. Bitwarden (TWiT sponsor) is (possibly the 1st?) third-party password manager to support passkey sign-ins on Windows 11 New Canary, Dev, and Beta builds last Friday: Canary is more of the same; Dev/Beta get shared audio improvements, Narrator improvements, new IT policies ASUS and Dell will soon sell Windows 365 Cloud PCs Google is moving Chrome to a two-week dev schedule. Should we assume Microsoft will follow suit with Edge? Dell is up 39 percent, but because of AI servers, not PCs NVIDIA revenues up 73 percent to $68.1 billion AI/dev OpenAI closes $110 billion funding round as the AI circle jerk continues Microsoft brings Copilot Tasks to consumer Copilot Google introduces AppFunctions for Android, its way to make mobile apps work like MCP (be semantic), similar to what Microsoft is doing in Windows Windows App Development CLI updated to 0.02 with Store CLI integration and .NET project support Build 2026 is in San Francisco, as expected, but in June - overlap with WWDC?
Xbox and gaming Here come the first Game Pass titles of March Microsoft highlights some indie games to consider Xbox ROG Ally gets AI-based game recaps Legion Go Fold is the star of the new PCs at MWC Sony might be backtracking on its PC games plans Developing: Epic/Google settlement was approved Tips & picks App pick of the week: Resident Evil Requiem Tip of the week: Secure your Windows 11 PC RunAs Radio this week: Hiring in 2026 with Suzi Edwards-Alexander Brown liquor pick of the week: St. Augustine Florida Straight Bourbon Hosts: Paul Thurrott, Richard Campbell, and Mikah Sargent Download or subscribe to Windows Weekly at https://twit.tv/shows/windows-weekly Check out Paul's blog at thurrott.com The Windows Weekly theme music is courtesy of Carl Franklin. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsor: threatlocker.com/twit

The Insurtech Leadership Podcast
API-First Insurance: When Brands Become Insurers

The Insurtech Leadership Podcast

Play Episode Listen Later Mar 4, 2026 30:35 Transcription Available


Episode Overview
What does it actually take to run a digital insurance operation at the system level—not at the chatbot layer, but at the transaction layer? Joshua R. Hollander speaks with Wayne Slavin, CEO and Co-Founder of Sure, about the infrastructure required to deliver true digital insurance in an AI-agent world. Wayne describes Sure's role as "what Visa and Mastercard were in the early days of credit cards"—building the rails for digital insurance distribution.
Key Topics
1. What "Digital Insurance" Really Means
Digital insurance is not about moving forms online or replacing phone calls with web interfaces. True digital insurance is straight-through processing from quote to policy issuance to payment—mirroring the speed and frictionlessness of e-commerce transactions. Wayne explains: "If that transaction requires some asynchronous process, some process that is interrupted, that we are actually not doing digital insurance." The benchmark: the entire process happens within minutes, not days or weeks.
2. API-First Infrastructure vs. Legacy Core Systems
Sure's platform differs fundamentally from monolithic core policy administration systems (like Guidewire or Duck Creek) because it was built API-first with data normalization at its foundation. Legacy cores encourage over-customization, which locks insurers into inflexible, non-compliant systems. Sure's approach standardizes policy data across product types (homeowners, renters, fine art, landlord), enabling rapid changes and integrations. Unlike legacy systems, Sure doesn't force carriers to choose between their existing tech and innovation—it coexists alongside legacy infrastructure.
3. Model Context Protocol (MCP) and AI Agent Integration
In February 2026, Sure announced the industry's first MCP server integration, enabling Claude AI agents to interact directly with Sure's infrastructure. MCP is a standardized protocol that allows AI agents to connect to business systems without custom integrations for each use case. This means insurers and brands no longer need 6-12 months of engineering to embed insurance; AI agents can quote, bind, manage, and renew policies conversationally.
4. Why Non-Endemic Brands Will Build Insurance
The next major insurance distributors won't be insurance companies. They'll be brands, e-commerce platforms, fintechs, and technology companies with massive customer bases. Wayne's economic thesis: if a brand can convert customers to insurance at 20-30x the typical rate (vs. giving customer data to a third party), the unit economics change entirely. Large brands now have a path to retain customers and data while building insurance revenue.
5. The Transaction Layer as Moat
Insurance isn't like retail or travel—regulatory consequences are real, policy admin systems are complex, and compliance layers must operate end-to-end. Sure's competitive advantage lies in building the foundational transaction layer that carriers either cannot replicate internally or would take years to engineer. This infrastructure layer is what enables AI agents to work reliably within compliance and regulatory constraints.
6. Insurance as an Ecosystem
The future isn't a single insurer offering multiple products—it's an ecosystem where brands, platforms, and technology companies collaborate on insurance delivery. AI agents, powered by Sure's infrastructure, enable this distributed, composable insurance ecosystem.
Key Quotes
- "What digital insurance really means is truly a straight-through process where you're starting to get a quote that quote will be a real quote. It's not an estimate. It will become a real policy. You will pay real money. You will get a real coverage document. And the timing of all of that is pretty close to what you expect from regular old e-commerce."
- "The next big insurance distributors won't be insurance companies. They will be brands. They'll be technology companies. They'll be fintechs. They'll be AI companies. They'll be companies that are currently sitting on large customer bases that don't have insurance products today."
- "Before MCP, if an AI agent wanted to interact with an insurance system, you'd have to build a custom integration for each system, each use case. MCP standardizes that."
Resources
• Sure: https://sure.com
• Wayne Slavin LinkedIn: https://www.linkedin.com/in/wayneslavin
• Horton International: https://www.horton-usa.com/
Subscribe & Connect
Tune in to the Insurtech Leadership Podcast for deep-dive conversations with insurance executives, founders, and innovators shaping the future of insurance technology.
• LinkedIn: https://www.linkedin.com/in/joshuarhollander/
• Podcast Showcase: https://www.linkedin.com/showcase/insurtech-leadership-show
#InsurTech #Insurance #InsuranceInnovation #Innovation #FutureOfInsurance #Leadership #ExecutiveLeadership
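The MCP idea the episode keeps returning to can be sketched in miniature: a server advertises typed tools, and an agent calls them by name with JSON arguments instead of needing a custom integration per system. Everything below is illustrative only; the tool name, schema, and toy premium formula are invented for this sketch and are not Sure's actual API.

```python
import json

# Hypothetical sketch of the MCP pattern: a server advertises typed "tools"
# that any AI agent can discover and call. Names and fields are invented.
TOOLS = {
    "quote_renters_policy": {
        "description": "Return a bindable quote for a renters policy.",
        "input_schema": {
            "type": "object",
            "properties": {
                "zip_code": {"type": "string"},
                "coverage_usd": {"type": "integer"},
            },
            "required": ["zip_code", "coverage_usd"],
        },
    }
}

def list_tools() -> str:
    """What an MCP client sees when it asks the server for its tools."""
    return json.dumps(TOOLS, indent=2)

def call_tool(name: str, arguments: dict) -> dict:
    """Dispatch a tool call; a real server would hit a carrier's rating engine."""
    if name != "quote_renters_policy":
        raise ValueError(f"unknown tool: {name}")
    # Toy premium formula, purely for illustration.
    premium = 5 + arguments["coverage_usd"] // 10_000
    return {"monthly_premium_usd": premium, "bindable": True}

quote = call_tool("quote_renters_policy", {"zip_code": "32084", "coverage_usd": 30_000})
print(quote["monthly_premium_usd"])  # 8 with these toy numbers
```

The point of the pattern is that the schema, not bespoke glue code, is the integration surface: any MCP-speaking agent can discover `quote_renters_policy` and call it conversationally.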

All TWiT.tv Shows (Video LO)
Windows Weekly 973: Bob's Rumor Store

All TWiT.tv Shows (Video LO)

Play Episode Listen Later Mar 4, 2026 112:01 Transcription Available


Can Microsoft's push for cloud PCs and AI-powered agents redefine where and how we work? If you keep to the defaults, Windows 11 is secure. Copilot+ PC is even more secure. But you can take additional steps to secure it either way, and you should. Plus, Paul's been trying to play different types of games, and Resident Evil Requiem is better (in his opinion) than Silent Hill f and Silent Hill 2 remake... if you want a horror game. Also, there's a cheaper new Audible plan thanks to Spotify! Windows 11 Shenanigans? If you use a third-party AI client in Edge Canary... you will not be amused. Bitwarden (TWiT sponsor) is (possibly the 1st?) third-party password manager to support passkey sign-ins on Windows 11 New Canary, Dev, and Beta builds last Friday - Canary is more of the same, Dev/Beta get shared audio improvements, narrator improvements, new IT policies ASUS and Dell will soon sell Windows 365 Cloud PCs Google is moving Chrome to a two-week dev schedule. Should we assume Microsoft will follow suit with Edge? Dell is up 39 percent, but because of AI servers not PCs NVIDIA revenues up 73 percent to $68.1 billion AI/dev OpenAI closes $110 billion funding round as the AI circle jerk continues Microsoft brings Copilot Tasks to consumer Copilot Google introduces AppFunctions for Android, its way to make mobile apps work like MCP (be semantic), similar to what Microsoft is doing in Windows Windows App Development CLI updated to 0.02 with Store CLI integration and .NET project support Build 2026 is in San Francisco, as expected, but in June - overlap with WWDC? 
Xbox and gaming Here come the first Game Pass titles of March Microsoft highlights some indie games to consider Xbox ROG Ally gets AI-based game recaps Legion Go Fold is the star of the new PCs at MWC Sony might be backtracking on its PC games plans Developing: Epic/Google settlement was approved Tips & picks App pick of the week: Resident Evil Requiem Tip of the week: Secure your Windows 11 PC RunAs Radio this week: Hiring in 2026 with Suzi Edwards-Alexander Brown liquor pick of the week: St. Augustine Florida Straight Bourbon Hosts: Paul Thurrott, Richard Campbell, and Mikah Sargent Download or subscribe to Windows Weekly at https://twit.tv/shows/windows-weekly Check out Paul's blog at thurrott.com The Windows Weekly theme music is courtesy of Carl Franklin. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsor: threatlocker.com/twit

Measure Up
Google Analytics Alternatives with Jason Packer

Measure Up

Play Episode Listen Later Mar 4, 2026 51:48


Google officially retired its workhorse analytics platform, affectionately known as Universal Analytics, almost 3 years ago. Since then, people have started to learn about other platforms as they scrambled to find something more useful than GA4.
Jason Packer wrote the book on Google Analytics alternatives (literally, it's titled "Google Analytics Alternatives: A Guide to Navigating the World of Options Beyond Google").
Here's what we think of the analytics landscape - how we got here, and what's coming next.
Links from the show:
(eBook) Google Analytics Alternatives
(paperback) Google Analytics Alternatives
01:29 Universal Analytics Sunset
02:31 Meet Jason Packer
05:53 Jason's Early Web Days
10:20 Why Analytics Matters
13:05 Fragmentation vs Consolidation
17:22 GA4 as Ads Companion
21:13 Google's Motives
23:50 GA4 Pain Points
24:32 Why Users Are Leaving
26:51 Privacy Compliance Pressure
29:33 Top GA4 Alternatives
30:15 Simplified Analytics Tools
32:06 Product Analytics Picks
35:38 Comprehensive Web Platforms
36:42 Future of Analytics AI
42:23 MCP, LLMs and Trust
49:49 Closing Insight and Wrap


Explicit Measures Podcast
507: AI-Assisted TMDL Workflow & Hot Reload

Explicit Measures Podcast

Play Episode Listen Later Mar 3, 2026 68:09


Mike & Tommy tackle AI-assisted TMDL workflows and the hot reload problem, exploring whether direct file editing with AI tools like Copilot is the future of Power BI development or a recipe for broken models. They weigh the tension between "move logic upstream" best practices and the brutal close-reopen cycle when TMDL changes introduce errors, debating whether Tabular Editor 3, MCP servers, or a native Microsoft solution offers the best path forward for safe, validated bulk refactoring.
Get in touch: Send in your questions or topics you want us to discuss by tweeting to @PowerBITips with the hashtag #empMailbag or submit on the PowerBI.tips Podcast Page.
Visit PowerBI.tips: https://powerbi.tips/
Watch the episodes live every Tuesday and Thursday morning at 730am CST on YouTube: https://www.youtube.com/powerbitips
Subscribe on Spotify: https://open.spotify.com/show/230fp78XmHHRXTiYICRLVv
Subscribe on Apple: https://podcasts.apple.com/us/podcast/explicit-measures-podcast/id1568944083
Check Out Community Jam: https://jam.powerbi.tips
Follow Mike: https://www.linkedin.com/in/michaelcarlo/
Follow Tommy: https://www.linkedin.com/in/tommypuglia/

AWS for Software Companies Podcast
Ep196: Agentic AI and the Future of Cloud Security with Sumo Logic

AWS for Software Companies Podcast

Play Episode Listen Later Mar 3, 2026 15:38


Sumo Logic's VP of Security Strategy reveals how a ground-up agentic framework transformed their platform, and why clean data and autonomous agents are rewriting the rules of cloud security.
Topics Include:
Sumo Logic is a cloud analytics platform ingesting data from complex IT stacks.
Built on AWS from the start, leveraging microservices for scalable solutions.
Early AI efforts produced a natural language query co-pilot for security data.
Bolting AI onto existing platforms proved brittle and one-dimensional.
Customer feedback drove a decision to redesign AI from the ground up.
The Dojo AI framework unifies purpose-built agents across the entire platform.
New agents include a SOC analyst agent, knowledge agent, and MCP server.
New frontier models on Bedrock give the whole platform an instant brain transplant.
Autonomous agents require rethinking security controls beyond traditional programmatic guardrails.
Federal and global customers demand rigorous, levelled-up security across all regions.
Clean, normalized data proved the biggest unlock for reliable AI query results.
Agent-to-agent communication and MCP will define the next era of AI platforms.
Participants:
Chas Clawson – Vice President, Security Strategy, Sumo Logic
See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/

AWS Morning Brief
The AI Broke Production But Please Don't Tell Anyone

AWS Morning Brief

Play Episode Listen Later Mar 2, 2026 7:21


AWS Morning Brief for the week of March 2nd, with Corey Quinn.
Links:
Amazon Aurora DSQL launches Playground for interactive database exploration
Amazon Redshift Serverless introduces 3-year Serverless Reservations
Amazon S3 now provides AWS source region information in server access logs
AWS Compute Optimizer now applies AWS-generated tags to EBS snapshots created during automation
AWS Lambda Durable Execution SDK for Java now available in Developer Preview
AWS Trusted Advisor now delivers more accurate unused NAT Gateway checks powered by AWS Compute Optimizer
6,000 AWS accounts, three people, one platform: Lessons learned
Petabyte-Scale Cost Optimization: How a Video Hosting Platform Saved 70% on S3
Transform live video for mobile audiences with AWS Elemental Inference
Migrate Amazon EC2 to ECS Express Mode using Kiro CLI and MCP servers
AI-augmented threat actor accesses FortiGate devices at scale
AWS posts "correct the record" piece on AI bot outage

Hacker News Recap
March 1st, 2026 | Microgpt

Hacker News Recap

Play Episode Listen Later Mar 2, 2026 15:28


This is a recap of the top 10 posts on Hacker News on March 01, 2026. This podcast was generated by wondercraft.ai
(00:30): Microgpt
Original post: https://news.ycombinator.com/item?id=47202708&utm_source=wondercraft_ai
(01:58): Ghostty – Terminal Emulator
Original post: https://news.ycombinator.com/item?id=47206009&utm_source=wondercraft_ai
(03:26): Switch to Claude without starting over
Original post: https://news.ycombinator.com/item?id=47204571&utm_source=wondercraft_ai
(04:54): I built a demo of what AI chat will look like when it's "free" and ad-supported
Original post: https://news.ycombinator.com/item?id=47205890&utm_source=wondercraft_ai
(06:23): Decision trees – the unreasonable power of nested decision rules
Original post: https://news.ycombinator.com/item?id=47204964&utm_source=wondercraft_ai
(07:51): AI Made Writing Code Easier. It Made Being an Engineer Harder
Original post: https://news.ycombinator.com/item?id=47206824&utm_source=wondercraft_ai
(09:19): When does MCP make sense vs CLI?
Original post: https://news.ycombinator.com/item?id=47208398&utm_source=wondercraft_ai
(10:48): New iron nanomaterial wipes out cancer cells without harming healthy tissue
Original post: https://news.ycombinator.com/item?id=47207404&utm_source=wondercraft_ai
(12:16): 10-202: Introduction to Modern AI (CMU)
Original post: https://news.ycombinator.com/item?id=47204559&utm_source=wondercraft_ai
(13:44): Claude becomes number one app on the U.S. App Store
Original post: https://news.ycombinator.com/item?id=47202032&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

Tech Lead Journal
The MCP Security Risks You Can't Afford to Ignore

Tech Lead Journal

Play Episode Listen Later Mar 2, 2026 72:19


What if the MCP server you installed last week is silently leaking your emails to a stranger? The AI tools boosting your productivity could already be your biggest security liability.
MCP (Model Context Protocol) has quickly become the standard for connecting AI agents to external tools and data sources. But as adoption accelerates, so do the risks – from malicious servers harvesting your credentials in the background, to local processes exposed to your entire network with no authentication. Most developers install MCP servers without fully understanding what code is running or who wrote it, creating serious supply chain and shadow IT problems inside organizations.
In this episode, Ariel Shiftan, CTO of MCPTotal, explains how MCP actually works, why there is a wide gap between its original design and how it is used in practice, and what that gap means for security. He also walks through real zero-days his team has discovered and shares practical advice for developers and enterprise leaders trying to adopt MCP without compromising their security posture.
Key topics discussed:
- What MCP is and why it won the "USB for AI" race
- Why most MCP servers are just API wrappers done wrong
- Real zero-days found in popular, widely used MCPs
- How malicious MCPs can silently leak your credentials
- The supply chain risks hiding inside your dev toolchain
- Why banning MCP in your org is the wrong move
- Best practices for writing well-designed MCP servers
- Why agent permission prompts need better security defaults
Timestamps:
(00:00:00) Trailer & Intro
(00:02:49) What Is MCP and Why Is It Called the USB for AI?
(00:07:22) How Does MCP Differ from Standard REST APIs?
(00:13:40) What Can AI Agents Do with MCP Beyond Reading Data?
(00:16:56) What Is RAG and How Did AI Evolve to Tool Calling?
(00:19:54) Why Is MCP Misused as an API Catalog and What Does That Cost?
(00:25:04) What Are AI Skills and How Do They Compare to MCP?
(00:30:29) How Does MCP Server Architecture Work Under the Hood?
(00:37:01) How Do Malicious and Vulnerable MCP Servers Put Organizations at Risk?
(00:45:30) What Real-World MCP Vulnerabilities and Zero-Days Have Been Found?
(00:50:30) How Should Enterprises Enable MCP Adoption Without Compromising Security?
(00:53:16) What Are Best Practices for Writing a Well-Designed MCP Server?
(00:59:14) How Should AI Agents Handle Permissions Without Overwhelming Users?
(01:05:26) 3 Tech Lead Wisdom
Ariel Shiftan's Bio
Ariel is a software engineer and security expert with more than 20 years of hands-on and executive leadership experience across cybersecurity, distributed systems, and AI infrastructure. He holds a PhD in Computer Science, specializing in advanced algorithms and systems. Earlier in his career, Ariel founded NorthBit, a deep-tech cybersecurity firm that was acquired by Magic Leap in 2016, where he led product security globally, overseeing the security lifecycle across more than 700 engineers. He has also led applied AI breakthroughs, including heading an XPRIZE-winning team that used deep learning to fight malaria in Africa.
Follow Ariel:
LinkedIn – linkedin.com/in/shiftan
MCPTotal's Website – mcptotal.io
Like this episode?
Show notes & transcript: techleadjournal.dev/episodes/249.
Follow @techleadjournal on LinkedIn, Twitter, and Instagram.
Buy me a coffee or become a patron.
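The episode's hardening advice for locally hosted MCP servers, bind to loopback only and require an auth token, can be sketched as a pair of checks. This is a generic illustration under those assumptions, not MCPTotal's product or any specific MCP SDK's API.

```python
import secrets

# Sketch of two hardening checks for a locally hosted MCP-style endpoint:
# (1) refuse to bind to all interfaces, (2) require a per-install token.
AUTH_TOKEN = secrets.token_urlsafe(32)  # generated per install, never hardcoded

def is_safe_binding(host, token_required):
    """Reject the unsafe defaults: all-interfaces binding or no auth at all."""
    return host in ("127.0.0.1", "localhost", "::1") and token_required

def authorize(presented_token):
    """Constant-time comparison so timing can't leak the token."""
    return isinstance(presented_token, str) and secrets.compare_digest(
        presented_token, AUTH_TOKEN
    )

print(is_safe_binding("0.0.0.0", token_required=True))  # False: exposed to the LAN
print(authorize(AUTH_TOKEN))                            # True
```

The `0.0.0.0` case is exactly the "local processes exposed to your entire network with no authentication" failure mode the episode describes.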

Datacenter Technical Deep Dives
AI Agents Made Simple: Everything You Need to Know

Datacenter Technical Deep Dives

Play Episode Listen Later Feb 27, 2026 69:21


Join us as Du'An breaks down AI agents in a way that actually makes sense - what they are, how to use them, and how to get started today. Du'An walks through the fundamentals of AI agents with live demos and practical code examples you can use immediately. You'll learn about agent frameworks, when to use agents versus simple LLM calls, building your first agent, and real-world applications from bookmark management to automated workflows. This episode cuts through the hype with realistic expectations about what agents can and can't do, while showing you concrete examples including MCP servers, Strands Pack, and Du'An's personal second brain system. Timestamps 0:00 Welcome & Introduction 1:39 Du'An's Background & Previous Episode Success 3:06 Segueing from Last Week's Episode 4:03 CEOs Vibe Coding Discussion 6:49 Real Estate Developer Building Apps Story 8:23 Getting Started with the Presentation 12:45 What Are AI Agents? 18:22 Agent Frameworks Overview 24:16 When to Use Agents vs Simple LLM Calls 30:41 Building Your First Agent 36:52 Live Demo: Strands Pack 42:18 MCP Servers Explained 47:35 WriteStats MCP Demo 52:14 Real-World Applications 58:33 Du'An's Second Brain System 1:04:01 Bookmark Manager Walkthrough 1:07:17 Organizing Cloud Storage & Email 1:09:06 Wrap-up & Next Episode Teaser How to find Du'An: https://www.duanlightfoot.com/ https://github.com/labeveryday/ Links from the show: https://github.com/labeveryday/strands-pack https://github.com/labeveryday/writestat-mcp https://github.com/labeveryday/bookmark-manager-site https://bookmarks.duanlightfoot.com/ https://github.com/openai/whisper https://openai.com/index/whisper/
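The "agents versus simple LLM calls" distinction Du'An draws comes down to a loop: the model can request tools and see their results before answering, instead of replying in one shot. A minimal sketch of that loop, with a stubbed `fake_model` standing in for a real LLM API and an invented one-tool toolbox:

```python
# Minimal agent loop sketch. `fake_model` and the tool set are stand-ins,
# not any specific framework's API.
def fake_model(messages):
    """Pretend LLM: asks for the word-count tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "count_words", "args": {"text": messages[0]["content"]}}
    result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"answer": f"That text has {result} words."}

TOOLS = {"count_words": lambda text: len(text.split())}

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # cap the loop so it can't run away
        reply = fake_model(messages)
        if "answer" in reply:
            return reply["answer"]
        out = TOOLS[reply["tool"]](**reply["args"])  # execute the requested tool
        messages.append({"role": "tool", "content": out})
    raise RuntimeError("agent exceeded step budget")

print(run_agent("bookmark this useful page"))  # That text has 4 words.
```

If a task never needs a tool call, the loop collapses to a single model call, which is the episode's rule of thumb for when a plain LLM call is enough.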

Vaders Finest
296: News Update: Marvel Crisis Protocol Alliances & Our Recent MCP Games

Vaders Finest

Play Episode Listen Later Feb 26, 2026 39:22


In this episode we cover all of the news, spoilers, and reveals for the new game system, Marvel Crisis Protocol Alliances. We end the episode with a discussion about our recent MCP games, projects, and affiliation goals.
Fury's Finest is a podcast and resource devoted to the discussion of the tabletop game Marvel Crisis Protocol.
Fury's Finest is supported by our wonderful patrons on Patreon. If you would like to help the show go to patreon.com/furysfinest and pledge your support. Fury's Finest Patrons directly support the show and its growth by helping pay our monthly and annual fees, while contributing to future projects and endeavors.
Check out our Fury's Finest apparel and merchandise on TeePublic - https://www.teepublic.com/user/pleasestandby
Twitch I twitch.tv/furysfinest
Twitter I @FurysFinestCast
Instagram I @FurysFinest
Facebook I Fury's Finest
YouTube I Fury's Finest
Apple Podcasts l Spotify l Google Podcasts
Thanks to Approaching Nirvana for our music.
Help spread the word of our show. Subscribe, rate, and review!
Send feedback, Marvel thoughts, and show inquiries to FurysFinest@gmail.com
Fury's Finest is hosted by Jesse Eakin and Chris Bruffett.

Microsoft Cloud IT Pro Podcast
Episode 422: Back to the Terminal: The Rise of AI CLI Interfaces

Microsoft Cloud IT Pro Podcast

Play Episode Listen Later Feb 26, 2026 41:35 Transcription Available


Welcome to Episode 422 of the Microsoft Cloud IT Pro Podcast. In this episode, Scott and Ben discuss their growing use of AI command-line tools in their daily workflows, particularly Claude Code, GitHub Copilot CLI, and Gemini CLI. They explore how these command-line interfaces offer powerful ways to interact with local files and MCP servers beyond traditional desktop AI chat interfaces. They share how they are using these tools in their day-to-day roles to perform different tasks and accelerate their workflows.
Your support makes this show possible! Please consider becoming a premium member for access to live shows and more. Check out our membership options.
Show Notes
Claude Code overview
Using Claude in PowerPoint
Create custom subagents
Microsoft Work IQ CLI (Public Preview)
https://github.com/obra/superpowers
How to Use Claude Code: A Guide to Slash Commands, Agents, Skills, and Plug-ins
Gemini CLI overview
GitHub Copilot overview
About the sponsors
Would you like to become the irreplaceable Microsoft 365 resource for your organization? Let us know!

Lightspeed
0xResearch: Jito BAM and Solana Market Structure | Lucas Bruder

Lightspeed

Play Episode Listen Later Feb 26, 2026 51:28


Gm! In today's episode we have a 0xResearch crosspost where they were joined by Lucas Bruder, Co-Founder of Jito Labs to discuss Jito's BAM block builder on Solana, highlighting transparency, verifiability, and application-controlled execution. They also cover market structure, stake adoption, MCP, slot time reductions, and JitoSOL's ETF efforts. Enjoy! -- Follow Lightspeed: ⁠https://twitter.com/Lightspeedpodhq⁠ Follow Jito Labs: https://x.com/jito_labs Follow Lucas Bruder: https://x.com/buffalu__ Follow Sam: https://x.com/minnus Follow Carlos: https://x.com/0xcarlosg Follow Boccaccio: https://x.com/salveboccaccio Follow Danny: https://x.com/defi_kay_ Join the Lightspeed Telegram: ⁠https://t.me/+QHlbNTNS4gc1ZTVh -- Join us at DAS (Digital Asset Summit) in New York City this March!  Use the link below to learn more, and use code LIGHTSPEED200  to get $200 off your ticket! See you there! Learn more + get your ticket here: https://blockworks.co/event/digital-asset-summit-nyc-2026 -- Get top market insights and the latest in crypto news. Subscribe to Blockworks Daily Newsletter: https://blockworks.co/newsletter/ -- Timestamps: (0:00) Introduction (1:54) Why Jito Built BAM (5:50) Application-Controlled Execution Explained (11:30) MCP and Solana's Future (15:56) BAM Adoption and Stake Growth (33:13) Cutting Slot Times on Solana (40:28) JitoSOL and the ETF Push (47:09) AI, Products, and the Road Ahead (50:40) Closing Comments -- Disclaimers: Lightspeed was kickstarted by a grant from the Solana Foundation. Nothing said on Lightspeed is a recommendation to buy or sell securities or tokens. This podcast is for informational purposes only, and any views expressed by anyone on the show are solely our opinions, not financial advice. Danny, and our guests may hold positions in the companies, funds, or projects discussed.

The Cloud Pod
344: Amazon's Coding Bot Bites the Hand That Runs It

The Cloud Pod

Play Episode Listen Later Feb 24, 2026 61:30


Welcome to episode 344 of The Cloud Pod, where the forecast is always cloudy! Justin is out of the office at a World of Warcraft Tournament (not really), and Ryan is pursuing his lifelong dream of becoming a roadie for The Eagles (maybe?), so it's Jonathan and Matt holding down the fort this week, and they've got a ton of cloud news for you! From security to AI assistants, we've got all the news you need. Let's get started!  Titles we almost went with this week Zero Bus, All Gas, No Kafka Brakes AI Coding Bot Bites the Hand That Runs It When Your Robot Developer Goes Rogue on AWS Kubernetes VPA Finally Stops Evicting Your Database Pods Google Trains 100 Million People, Still No One Reads the Docs  MCP Walks Into a Bar Not Enterprise Ready Yet No More Pod Evictions Kubernetes 1.35 Scales In Place No Keys No Drama Just IAM and Cloud SQL One Agent to Rule Them All in Kubernetes IAM Tired of Writing Policies Manually When Your AI Coding Tool Has Delete Permissions One Dashboard to Rule All Your GPU Clusters Serverless Reservations Prove Nothing Is Truly Free Range Kiro Takes the Wheel on AWS IAM Policies Stop Blaming Backups for Your Bad Architecture AI Agent Goes Rogue, Takes AWS Down With It Everything is Bigger in Texas Except the Water Usage OpenAI launches the college basketball of Inference. Pro service – low cost General News  1:05 Code Mode: give agents an entire API in 1,000 tokens Cloudflare's Code Mode MCP server reduces token consumption by 99.9% compared to a traditional MCP implementation, exposing the entire Cloudflare API (over 2,500 endpoints) through just two tools, search() and execute(), using roughly 1,000 tokens versus 1.17 million for a conventional approach. 
The architecture works by having the AI agent write JavaScript code against a typed OpenAPI spec representation, rather than loading tool definitions into context, with code executing inside a sandboxed V8 isolate (Dynamic Worker) that restricts file system access, environment variables, and external fetches by default. This approach addresses a fundamental constraint in agentic AI systems: adding more tools to give agents broader capabilities directly competes with the available context space for the task at hand. 01:41 Jonathan- “It's good. I'm not sure I could imagine 2 ½ thousand MCP tool definitions in a context window and still actually use it for anything.”    AI Is Going Great – Or How ML Makes Money  03:58 OpenClaw creator Peter Steinberger joins OpenAI Peter Steinberger, creator of viral AI assistant OpenClaw (formerly Clawdbot/Moltbot), has joined
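The two-tool Code Mode design discussed above can be sketched in a few lines: rather than preloading every endpoint definition into context, the agent searches a spec and then runs code against the matches. The endpoint names and the restricted-`exec` sandbox below are invented stand-ins for illustration; Cloudflare's real implementation runs agent code in a sandboxed V8 isolate.

```python
# Toy sketch of the Code Mode pattern: two tools instead of 2,500
# endpoint definitions. Endpoint names are invented for this example.
API_SPEC = {
    "dns.list_records": "List DNS records for a zone.",
    "dns.create_record": "Create a DNS record in a zone.",
    "workers.deploy": "Deploy a Worker script.",
}

def search(query):
    """Tool 1: return only the spec entries relevant to the agent's query."""
    q = query.lower()
    return {k: v for k, v in API_SPEC.items() if q in k or q in v.lower()}

def execute(code, env):
    """Tool 2: run agent-written code in a restricted namespace,
    a toy stand-in for a sandboxed V8 isolate."""
    scope = {"__builtins__": {"sorted": sorted, "len": len}, **env}
    exec(code, scope)
    return scope.get("result")

hits = search("dns")  # only 2 of the 3 entries ever enter the context window
out = execute("result = sorted(endpoints)", {"endpoints": list(hits)})
print(out)  # ['dns.create_record', 'dns.list_records']
```

The context saving comes from `search()` acting as a filter: the agent only ever pays tokens for the spec entries it actually needs.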

The Changelog
The mythical agent-month (News)

The Changelog

Play Episode Listen Later Feb 23, 2026 7:48


Wes McKinney on the mythical agent-month, install Peon Ping to employ a Peon today, Andreas Kling explains why Ladybird is adopting Rust, Cloudflare has a new MCP server that's quite efficient, and Elliot Bonneville thinks the only moat left is money.

LINUX Unplugged
655: Speeding Up Mistakes

LINUX Unplugged

Play Episode Listen Later Feb 23, 2026 56:48 Transcription Available


Planet Nix and SCaLE are just days away, and we're getting a head start with two guests, the tech, and the trends shaping open source. Our trip starts here!
Sponsored By:
Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free!
Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.
Support LINUX Unplugged
Links:

Machine Learning Guide
MLA 029 OpenClaw

Machine Learning Guide

Play Episode Listen Later Feb 22, 2026 30:14


OpenClaw is a self-hosted AI agent daemon that executes autonomous tasks through messaging apps like WhatsApp and Telegram using persistent memory. It integrates with Claude Code to enable software development and administrative automation directly from mobile devices. Links Notes and resources at ocdevel.com/mlg/mla-29 Try a walking desk - stay healthy & sharp while you learn & code Generate a podcast - use my voice to listen to any AI generated content you want OpenClaw is a self-hosted AI agent daemon (Node.js, port 18789) that executes autonomous tasks via messaging apps like WhatsApp or Telegram. Developed by Peter Steinberger in November 2025, the project reached 196,000 GitHub stars in three months. Architecture and Persistent Memory Operational Loop: Gateway receives message, loads SOUL.md (personality), USER.md (user context), and MEMORY.md (persistent history), calls LLM for tool execution, streams response, and logs data. Memory System: Compounds context over months. Users should prompt the agent to remember specific preferences to update MEMORY.md. Heartbeats: Proactive cron-style triggers for automated actions, such as 6:30 AM briefings or inbox triage. Skills: 5,705+ community plugins via ClawHub. The agent can author its own skills by reading API documentation and writing TypeScript scripts. Claude Code Integration Mobile to Deploy Workflow: The claude-code-skill bridge provides OpenClaw access to Bash, Read, Edit, and Git tools via Telegram. Agent Teams: claude-team manages multiple workers in isolated git worktrees to perform parallel refactors or issue resolution. Interoperability: Use mcporter to share MCP servers between Claude Code and OpenClaw. Industry Comparisons vs n8n: Use n8n for deterministic, zero-variance pipelines. Use OpenClaw for reasoning and ambiguous natural language tasks. vs Claude Cowork: Cowork is a sandboxed, desktop-only proprietary app. OpenClaw is an open-source, mobile-first, 24/7 daemon with full system access. 
Professional Applications
Therapy: Voice-to-SOAP-note transcription. PHI requires local Ollama models because OpenClaw lacks encryption at rest.
Marketing: claw-ads for multi-platform ad management, Mixpost for scheduling, and SearXNG for search.
Finance: Receipt OCR and Google Drive filing. Requires human review to mitigate non-deterministic LLM errors.
Real Estate: Proactive transaction-deadline monitoring and memory-driven buyer matching.

Security and Operations
Hardening: Bind to localhost, set auth tokens, and use Tailscale for remote access. Default settings are unsafe and have exposed over 135,000 instances.
Injection Defense: Add instructions to SOUL.md to treat external emails and web pages as hostile.
Costs: The software is MIT-licensed. API costs are paid per token or bundled via a Claude subscription key.
Onboarding: Run the BOOTSTRAP.md flow immediately after installation to define the agent's personality before requesting tasks.
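The hardening advice above, binding to localhost and requiring an auth token, can be sketched as follows. This is an illustrative snippet, not OpenClaw's real configuration: the `OPENCLAW_TOKEN` environment variable and the `authorized` helper are assumptions made for the example.

```python
import hmac
import os
import secrets

# Bind the gateway to the loopback interface only, so it is never
# reachable from the public internet; remote access should instead
# go through a private overlay network such as Tailscale.
BIND_HOST = "127.0.0.1"

# Require a shared secret on every request; generate one if the
# operator has not supplied a token via the environment.
AUTH_TOKEN = os.environ.get("OPENCLAW_TOKEN") or secrets.token_hex(32)

def authorized(request_token, expected_token):
    """Reject missing tokens, and compare in constant time to
    avoid leaking the secret through timing differences."""
    if not request_token:
        return False
    return hmac.compare_digest(request_token, expected_token)
```

The constant-time comparison and the loopback bind are the two defaults whose absence reportedly left instances exposed; applying both before first use is the cheap fix.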

Modern Classrooms Project Podcast
Episode 267: Take Time To See Where You're Going

Modern Classrooms Project Podcast

Play Episode Listen Later Feb 22, 2026 29:02


Zach is joined by Kaitie Pait and Kendal Giacomini to talk about the supportive community in the MCP Facebook group, and how Kaitie asked for and received help with unit pacing in a post there.

Show Notes
MCP Podcast episode 171: Resetting MCP (Kendal's previous appearance on the MCP Podcast)
Kaitie's post in the Facebook group
Soundtrap
Connect with Kaitie by email at kaitlin.pait@modernclassrooms.org
Connect with Kendal on Goodreads and by email at kendal.giacomini@modernclassrooms.org

Contact us, follow us online, and learn more:
Email us questions and feedback at: podcast@modernclassrooms.org
Listen to this podcast on Youtube
Modern Classrooms: @modernclassproj on Twitter and facebook.com/modernclassproj
Kareem: @kareemfarah23 on Twitter
Toni Rose: @classroomflex on Twitter and Instagram
The Modern Classroom Project
Modern Classrooms Online Course
Take our free online course, or sign up for our mentorship program to receive personalized guidance from a Modern Classrooms mentor as you implement your own modern classroom!

The Modern Classrooms Podcast is edited by Zach Diamond: @zpdiamond on Twitter and Learning to Teach

Special Guests: Kaitie Pait and Kendal Giacomini.

We Study Billionaires - The Investor’s Podcast Network
TECH015: OpenClaw and Self-Sovereign AI w/ Alex Gladstein and Justin Moon (Tech Podcast)

We Study Billionaires - The Investor’s Podcast Network

Play Episode Listen Later Feb 18, 2026 64:31


Alex Gladstein and Justin Moon break down the fundamentals of large language models and explore the rise of OpenClaw as a self-sovereign AI assistant. Justin explains context engineering, local inference, and vibe coding, while Alex dives into the AI for Individual Rights program and its mission to empower activists.

IN THIS EPISODE YOU'LL LEARN:
00:00:00 - Intro
00:04:12 - What Large Language Models (LLMs) are and how they differ from traditional programs
00:05:15 - Why AI feels like magic, and what's really happening under the hood
00:06:01 - The key differences between open and closed AI models
00:06:50 - Why capital structures influence AI model openness
00:09:09 - How persistent memory enhances AI agent performance
00:12:18 - What inference means and why context is a scarce resource
00:19:32 - How AI agents combine traditional software with LLM reasoning
00:21:10 - The evolution from MCP-style systems to skills-based context engineering
00:25:41 - What "vibe coding" is and how it lowers the barrier to building apps
00:44:07 - How the AI for Individual Rights program supports activist-driven innovation

Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences.

BOOKS AND RESOURCES
Oslo Freedom Forum: Website.
Justin: Nostr account.
Related episode: Is AGI Here? Clawdbot, Local AI Agent Swarms w/ Pablo Fernandez & Trey Sellers.
Related books mentioned in the podcast.
Ad-free episodes on our Premium Feed.

NEW TO THE SHOW?
Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members.
Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok.
Check out our Bitcoin Fundamentals Starter Packs.
Browse through all our episodes (complete with transcripts) here.
Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
Enjoy exclusive perks from our favorite Apps and Services.
Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value Newsletter.
Learn how to better start, manage, and grow your business with the best business podcasts.

SPONSORS
Support our free podcast by supporting our sponsors: HardBlock, Human Rights Foundation, Simple Mining, Netsuite, Masterworks, Shopify, Vanta, Fundrise.

References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor's Podcast Network is not responsible for any claims made by them. Learn more about your ad choices. Visit megaphone.fm/adchoices

Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm