Podcasts about API

A set of subroutine definitions, protocols, and tools for building software and applications.

  • 5,877 PODCASTS
  • 18,208 EPISODES
  • 42m AVG DURATION
  • 3 DAILY NEW EPISODES
  • Mar 9, 2026 LATEST
    Best podcasts about API


    Latest podcast episodes about API

    Fairway Rollin'
    Akshay Bhatia's Win, Concern for Rory McIlroy, and the Players Preview

    Mar 9, 2026 · 69:59


    House and Nathan are joined by Justin Ray to recap Akshay Bhatia's win at the Arnold Palmer Invitational. Then, they preview The Players and share the golfers they think can make some noise, including their sleeper picks. Finally, they discuss what Brian Rolapp may say at his press conference and share their winner picks.

    (0:00) Welcome to Fairway Rollin' with Justin Ray!
    (1:45) Akshay Bhatia's short game was phenomenal
    (6:40) Who else impressed at API?
    (29:20) Let's preview The Players
    (49:15) What sleepers do we like?
    (57:45) On Brian Rolapp's upcoming press conference
    (1:02:20) Players Championship winner picks

    The Ringer is committed to responsible gaming. Please visit www.rg-help.com to learn more about the resources and helplines available.

    Hosts: Joe House and Nathan Hubbard
    Guest: Justin Ray
    Producers: Tucker Tashjian and Mike Wargon

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Hashtag Trending
    Project Synapse: From Anthropic to Robotics

    Mar 7, 2026 · 74:05


    The hosts of Project Synapse discuss how people and companies often claim to value privacy, security, and human-made content while behaving otherwise, then cover major AI news, including the US Department of Defense labeling Anthropic a supply chain risk tied to its positions on autonomous weapons and surveillance, and the fallout, including the QuitGPT boycott claims and criticism of Sam Altman's response. They examine Claude 4.6 with Cowork and ChatGPT 5.4, emphasizing deeper Office/Gmail integration, larger context windows, and data analytics that could transform corporate data work and accelerate job replacement, while token costs rise and stolen API keys create urgent financial risk. They also warn about the "death of privacy" via profiling and potential anti-anonymity laws, and explore robotics trends, costs, factory adoption, healthcare use cases, and growing investment in humanoid robots from firms like Figure, Tesla, Boston Dynamics, and Unitree.

    Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless, and cellular, in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

    00:00 Sponsor Message
    00:18 People Say They Care
    01:23 Cybersecurity Reality Check
    02:46 Show Intro and Robots
    03:35 US Targets Anthropic
    09:20 Altman Optics and Boycott
    16:52 Anthropic vs OpenAI Safety
    21:27 Office Agents Replace Jobs
    26:06 Cowork Hands On Debate
    35:02 Token Costs and API Keys
    38:37 AI Wallet Safety Limits
    39:55 Hardware Shortages From AI
    42:25 Cloud Control Conspiracy
    44:00 Data Brokers Kill Privacy
    46:09 AI Builds A Copy Of You
    48:26 Embodied AI And Robots
    51:17 Humanoids In Factories
    01:00:07 Why Humanoids Aren't Everywhere
    01:02:06 Robots In Healthcare And Homes
    01:06:28 Cheap Humanoids And Companions
    01:11:52 Robotics Boom And Wrap Up
    01:13:21 Sponsor Message And Sign Off

    Circles Off - Sports Betting Podcast
    Exposing The Worst Betting Advice We Found On Twitter | Presented by Kalshi

    Mar 6, 2026 · 59:48


    This week on Circle Back, we break down some of the worst sports betting advice currently circulating on Gambling Twitter. The crew reacts to a viral strategy suggesting bettors should blindly fade steam moves in low-liquidity mid-major college basketball games, and explains why sportsbooks adjusting to sharp action does NOT suddenly create value on the other side. We also discuss bankroll management advice encouraging bettors to constantly withdraw profits instead of allowing their bankroll and unit size to scale properly, and debate a controversial take dismissing the importance of closing line value (CLV) as a core indicator of long-term betting success. Plus, we round up some of the latest viral moments, arguments, and drama from Gambling Twitter this week. Circle Back is hosted by Jacob Gramegna and is part of Circles Off on The Hammer Betting Network. This episode features Joey Knish alongside Porter from BA Analytics and Chinamaniac as the panel breaks down the latest conversations, debates, and controversies happening across Gambling Twitter.

    The MongoDB Podcast
    Don't Build Your Own AI (Unless You Have To)

    Mar 6, 2026 · 52:41


    Are you trying to figure out if your team should build an AI model from scratch or integrate an off-the-shelf solution? You aren't alone. In this episode of the MongoDB Podcast, Shane McAlister sits down with Akshaya Murthy, Director of AI Transformation at Zendesk, to decode the maze of building enterprise AI products. They dive into why integrating is often the winning move for speed-to-market, the hidden costs of custom models, and why bad data will break even the most perfect transformer model.

    What you'll learn in this episode:

    The Build vs. Buy Calculus: Why lower Total Cost of Ownership (TCO) and rapid deployment favor integration for most enterprises.
    Spotting "AI Washing": How to avoid vendor buzzword salads and focus on actual problem-solving and ROI.
    Architectural Must-Haves: Why your AI stack needs modular API layers, model hot-swapping, and CI/CD pipelines just like your standard code.
    The "Garbage In, Hype Out" Rule: Why a solid data strategy and a centralized single source of truth are non-negotiable.

    Ready to stop experimenting and start delivering real AI value? Tune in now.
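    The "modular API layer with model hot-swapping" idea the episode describes can be sketched roughly as follows. This is a minimal illustration, not Zendesk's or MongoDB's actual architecture: all class and provider names here are hypothetical, and the stand-in provider just echoes prompts so the sketch runs without vendor SDKs or API keys.

```python
from typing import Protocol


class ModelProvider(Protocol):
    """Anything that can answer a prompt; real providers would wrap vendor SDKs."""
    def complete(self, prompt: str) -> str: ...


class EchoProvider:
    """Stand-in provider so this sketch runs without credentials."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"


class ModelRouter:
    """The 'modular API layer': application code depends on this, never on a vendor SDK."""
    def __init__(self):
        self._providers = {}
        self._active = None

    def register(self, name: str, provider: ModelProvider) -> None:
        self._providers[name] = provider
        if self._active is None:
            self._active = name

    def hot_swap(self, name: str) -> None:
        # Swapping the underlying model is a config change, not a code change.
        if name not in self._providers:
            raise KeyError(f"unknown provider: {name}")
        self._active = name

    def complete(self, prompt: str) -> str:
        return self._providers[self._active].complete(prompt)


router = ModelRouter()
router.register("model-a", EchoProvider("model-a"))
router.register("model-b", EchoProvider("model-b"))
print(router.complete("hello"))  # served by model-a
router.hot_swap("model-b")
print(router.complete("hello"))  # same call site, different model
```

    The point of the indirection is that every caller talks to the router, so replacing a model (or A/B-testing two) never touches application code, which is what makes the CI/CD-style discipline the episode mentions practical.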

    Hacker Valley Studio
    Can AI Do Your Cyber Job? Post Your Job Req and Find Out with Marcus J. Carey

    Mar 6, 2026 · 38:49


    Last episode, Ron and Marcus made predictions. This episode, they brought the receipts. A journalist built an app with vibe coding and got hacked on live television. A social network built entirely by AI (not a single line of human code!) exposed 1.5 million authentication tokens and private messages between agents. And 88% of organizations have already had an AI security incident, while barely 14% of deployed agents ever saw a security review. The warnings from last episode aged fast.

    Marcus J. Carey is back to talk about what that actually means for the people building right now, not the people theorizing about it. Ron and Marcus are in the code themselves, and this conversation is what that experience actually looks like: OpenClaw running loose on your machine, agents racking up API bills, and why guidance, not prompts, not tools, is the real skill that separates builders who thrive from builders who ship disasters.

    Impactful Moments
    00:00 - Introduction
    02:00 - Vibe coding hack on live TV
    03:30 - Mo Book leaks 1.5M auth tokens
    06:00 - Marcus' origin story: War Games, 1983
    08:00 - OpenClaw escapes the lab
    13:30 - AT&T cuts help desk spend 90%
    17:00 - Context is king, guidance is everything
    19:00 - Can AI do your job req right now?
    24:00 - The first cybersecurity jobs agents will replace
    27:00 - Expertise + AI = 1000x yourself
    30:00 - Focus on outcomes, not new tools

    Links
    Connect with our guest, Marcus J. Carey, on LinkedIn: https://www.linkedin.com/in/marcuscarey/
    Read the articles we referenced in this episode:
    The vibe coding hack that aired on live TV; ICAEW breaks down exactly how it happened and what it means for anyone building with AI: https://www.icaew.com/insights/viewpoints-on-the-news/2026/feb-2026/cyber-dangers-of-agents-and-vibe-coding
    88% of organizations have already had an AI security incident. See the full data from the Cisco State of AI Security 2026 report: https://www.helpnetsecurity.com/2026/02/23/ai-agent-security-risks-enterprise/
    Check out our upcoming events: https://www.hackervalley.com/livestreams
    Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com
    Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/

    Podland News
    Triton Digital's Sharon Taylor, on what Apple Podcasts HLS Video Really Changes

    Mar 6, 2026 · 99:27 · Transcription available


    Apple's HLS video support, Triton's roadmap, and the real cost of video collide with questions about measurement, privacy, and control. We chat with Sharon Taylor from Triton Digital. And, with Kattie Laur, we also explore Canada's podcast identity, the CBC effect, and why discovery and funding—not mandates—unlock local growth.

    • Why Triton added video without giving up control
    • Apple's HLS model, dynamic ads, and hosting costs
    • Spotify's API path vs open RSS monetization
    • Limits on first-party data and privacy choices
    • How premium feeds and secure distribution fit the mix
    • Canada's discovery gap and funding bottleneck
    • CBC's high bar and the impact on independents
    • Podcasting overtakes spoken-word radio in the US
    • Ad spend trends pointing to podcast growth
    • New tools, AI summaries, and workflow upgrades

    Start podcasting, keep podcasting with Buzzsprout.com
    Send James & Sam a message
    Support the show

    Connect With Us:
    Email: weekly@podnews.net
    Fediverse: @james@bne.social and @samsethi@podcastindex.social
    Support us: www.buzzsprout.com/1538779/support
    Get Podnews: podnews.net

    PPC CAST
    278. Cómo estamos usando la IA para nuestro rol de Media Buyer (Parte 1)

    Mar 6, 2026 · 72:44


    Luis and Albert sit down to talk about how they are using artificial intelligence in their day-to-day work as media buyers in 2026. No theory here: concrete tools, workflows they already apply with clients, and an honest comparison of which AI is worth your money and which isn't. In this episode you'll learn:

    Semaphore Uncut
    Product News: OAuth Authentication for the Semaphore MCP Server

    Mar 6, 2026 · 2:06


    We're preparing a new update for the Semaphore MCP server that will make it easier for developers to connect AI agents and developer tools. The focus of this update is authentication.

    Today, connecting an agent to the MCP server typically requires using a long-lived API token. While this works well, it also means developers need to generate credentials, store them in configuration files, and manage them manually.

    In our next release, coming next week, we're introducing OAuth authentication support for the MCP server. This will make connecting agents and developer tools significantly simpler. Instead of generating and storing API tokens, developers will be able to authenticate through a familiar OAuth flow. When configuring an agent, a browser window opens, you log in, and approve access to the MCP server. Once approved, the connection is established automatically.

    This approach removes the need to manage long-lived credentials and makes integrations easier to set up. It also improves compatibility with modern agentic development tools. Some tools have limitations when working with static API tokens, and OAuth removes those barriers.

    Read more on our blog.

    Pete Miloravac
    The Semaphore Team
    https://semaphore.io

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit semaphoreio.substack.com
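    The browser-based handoff described above generally follows the standard OAuth 2.0 authorization-code flow with PKCE. As a rough sketch of the client's first step (the endpoint URL, client ID, and redirect port here are placeholders, not Semaphore's actual values), the client generates a PKCE verifier/challenge pair and builds the URL it opens in the user's browser:

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode


def pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge


def authorization_url(auth_endpoint: str, client_id: str, redirect_uri: str,
                      challenge: str, state: str) -> str:
    """Build the authorization URL the agent opens in the user's browser."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "code_challenge": challenge,
        "code_challenge_method": "S256",
        "state": state,
    }
    return f"{auth_endpoint}?{urlencode(params)}"


verifier, challenge = pkce_pair()
url = authorization_url(
    "https://example.com/oauth/authorize",  # placeholder endpoint
    "my-mcp-client",                        # placeholder client id
    "http://localhost:8976/callback",       # placeholder local redirect
    challenge,
    secrets.token_urlsafe(16),
)
print(url)
```

    After the user logs in and approves, the client exchanges the returned code (plus the stored `verifier`) at the token endpoint for a short-lived access token, which is why no long-lived API token ever needs to land in a configuration file.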

    Everyday AI Podcast – An AI and ChatGPT Podcast
    Ep 727: 7 Huge AI Feature Updates You Likely Missed: From AI Video and Gmail to Agents

    Mar 5, 2026 · 32:35


    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

The reception to our recent post on Code Reviews has been strong. Catch up! Amid a maelstrom of discussion on whether or not AI is killing SaaS, one of the top publicly listed SaaS companies in the world has just reported record revenues, clearing well over $1.1B in ARR for the first time with a 28% margin. As we comment on the pod, Aaron Levie is the rare public-company CEO equally at home in both worlds of Silicon Valley and Wall Street/Main Street, by day helping 70% of the Fortune 500 with their Enterprise Advanced Suite, and yet by night often found in the basements of early startups and tweeting viral insights about the future of agents.

Now that Cursor, Cloudflare, Perplexity, Anthropic, and more have made Filesystems and Sandboxes and various forms of "Just Give the Agent a Box" cool (not just cool; it is now one of the single hottest areas in AI infrastructure, growing 100% MoM), we find it a delightfully appropriate time to do the episode with the OG CEO who has been giving humans and computers Boxes since he was a college dropout pitching VCs at a Michael Arrington house party.

Enjoy our special pod, with fan-favorite returning guest/guest cohost Jeff Huber!

Note: We didn't directly discuss the AI vs SaaS debate - Aaron has done many, many, many other podcasts on that, and you should read his definitive essay on it.
Most commentators do not understand SaaS businesses because they have never scaled one themselves, and deeply reflected on what the true value proposition of SaaS is. We also discuss Your Company is a Filesystem. We also shout out CTO Ben Kus and the AI team, who talked about the technical architecture and will return for AIE WF 2026.

Full Video Episode

Timestamps

* 00:00 Adapting Work for Agents
* 01:29 Why Every Agent Needs a Box
* 04:38 Agent Governance and Identity
* 11:28 Why Coding Agents Took Off First
* 21:42 Context Engineering and Search Limits
* 31:29 Inside Agent Evals
* 33:23 Industries and Datasets
* 35:22 Building the Agent Team
* 38:50 Read Write Agent Workflows
* 41:54 Docs Graphs and Founder Mode
* 55:38 Token FOMO Culture
* 56:31 Production Function Secrets
* 01:01:08 Film Roots to Box
* 01:03:38 AI Future of Movies
* 01:06:47 Media DevRel and Engineering

Transcript

Adapting Work for Agents

Aaron Levie: Like, you don't write code, you talk to an agent and it goes and does it for you, and you maybe at best review it. That's even probably, like, largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works. All of the economy has to go through that exact same evolution. Right now, it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this, 'cause you'll see compounding returns. But that's just gonna take a while for most companies to actually go and get this deployed.

swyx: Welcome to the Latent Space pod. We're back in the Chroma studio with, uh, Chroma CEO Jeff Huber. Welcome, returning guest, now guest host.

Aaron Levie: It's a pleasure. Wow. How'd you get upgraded to, uh, to that?

swyx: Because he's like the perfect guy to be guest host for you.

Aaron Levie: That makes sense actually, for... we love context.
We, we both really love context. We really do. We really do.

swyx: Uh, and we're here with, uh, Aaron Levie. Welcome.

Aaron Levie: Thank you. Good to, uh, good to be [00:01:00] here.

swyx: Uh, yeah. So we've all met offline and like chatted a little bit, but like, it's always nice to get these things in person and conversation. Yeah. You just started off with so much energy. You're, you're super excited about agents. I love...

Aaron Levie: Agents.

swyx: Yeah. OpenClaw just got by... got bought by OpenAI. No, not bought, but you know, you know what I mean?

Aaron Levie: Some, some, you know, acquihire. Executive...

swyx: ...hire.

Aaron Levie: Executive hire. Okay. Executive hire. Say...

swyx: Hey, that's my term. Okay. Um, what are you pounding the table on on agents? You have so many insightful tweets.

Why Every Agent Needs a Box

Aaron Levie: Well, the thing that, that we get super excited by, that I think is probably, you know, should be relatively obvious, is we've, we've built a platform to help enterprises manage their files and their, their corporate files and the permissions of who has access to those files and the sharing collaboration of those files. All of those files contain really, really important information for the enterprise. It might have your contracts, it might have your research materials, it might have marketing information, it might have your memos. All that data obviously has, you know, predominantly been used by humans. [00:02:00] But there's been one really interesting problem, which is that, you know, humans only really work with their files during an active engagement with them, and they kind of go away and you don't really see them for a long time. And all of a sudden, uh, with the power of AI and AI agents, all of that data becomes extremely relevant as this ongoing source of, of answers to new questions, of data that will transform into, into something else that, that produces value in your organization.
It, it contains the answer to the new employee that's onboarding, that needs to ramp up on a project. Um, it contains the answer to the right thing to sell a customer when you're having a conversation with them. It contains the roadmap information that's gonna produce the next feature. So all that data that previously we've been just sort of storing and, and, you know, occasionally forgetting about, 'cause we're only working on the new active stuff. All of that information becomes valuable to the enterprise, and it's gonna become extremely valuable to end users, because now they can have agents go find what they're looking for and produce new, new [00:03:00] value and new data on that information. And it's gonna become incredibly valuable to agents, because agents can roam around and do a bunch of work, and they're gonna need access to that data as well. And, um, and you know, sometimes that will be an agent that is sort of working on behalf of, of, of you, and, and effectively as you, and, and they are kind of accessing all of the same information that you have access to, and, and operating as you in the system. And then sometimes there's gonna be agents that are just, effectively, autonomous and kind of run on their own, and, and you're gonna collaborate and work with them kind of like you did another person. OpenClaw being the most recent, and maybe first real, sort of, you know, kind of, you know, updating-everybody's-views-of-this-landscape version of, of what that could look like, which is: okay, I have an agent. It's on its own system, it's on its own computer, it has access to its own tools. I probably don't give it access to my entire life. I probably communicate with it like I would an assistant or a colleague, and then it, it sort of has this sandbox environment.
So all of that has massive implications for a platform that manages that [00:04:00] enterprise data. We think it's gonna just transform how we work with all of the enterprise content that we work with, and we just have to make sure we're building the right platform to support that.

swyx: The sort of shorthand I put it as is: as people build agents, everybody's just realizing that every agent needs a box. Yes. And it's nice to be called Box and just give everyone a box.

Aaron Levie: Hey, if I, you know, if we can make that go viral, uh, like I, I think that that terminology, I, that's the...

swyx: ...tagline. Every agent...

Aaron Levie: ...needs a box. Every agent needs a box. If we can make that the headline of this, I'm fine with this. And that's the billboard I wanna, like... Yeah, exactly. Every agent needs a box. Um, I like it. Can we ship this? Like...

swyx: Okay, let's do it. Yeah.

Aaron Levie: Uh, my work here is done and I got the value I needed outta this podcast. Drinks.

swyx: Yeah.

Agent Governance and Identity

Aaron Levie: But, but, um, but, but, you know, so the thing that we, we kind of think about is, um, is, you know, whether you think the number 10x or a hundred x or whatever the number is, we're gonna have some order of magnitude more agents than people. That's inevitable. It has to happen. So then the question is, what is the infrastructure that's needed to make all those agents effective in the enterprise? Make sure that they are well governed. Make sure they're only doing [00:05:00] safe things on your information. Make sure that they're not getting exposed the data that they shouldn't have access to. There's gonna be just incredibly, spectacularly crazy security incidents that will happen with agents, because you'll prompt-inject an agent and sort of find your way through the CRM system and pull out data that you shouldn't have access to. Oh, we...

Jeff Huber: ...have, God...

Aaron Levie: Right?
I mean, that's just gonna happen all over the place, right? So, so then the thing is, is how do you make sure you have the right security, the permissions, the access controls, the data governance? Um, we actually don't yet exactly know, in many cases, how we're gonna regulate some of these agents, right? If you think about an agent in financial services, does it have the exact same financial sort of, uh, requirements that a human did? Or is it, is the risk fully on the human that was interacting with or created the agent? All open questions, but no matter what, there's gonna need to be a layer that manages the, the data they have access to, the workflows that they're involved in, pulling up data from multiple systems. This is the new infrastructure opportunity in the era of agents.

swyx: You have a piece on agent identities, [00:06:00] which I think was today, um, which I think a lot of, breaking news, the security, security people are talking about, right? Like, you basically... I, I always think of this as like, well, you need the human you, and then you need the agent you.

Aaron Levie: Yes.

swyx: And, uh, well, I don't know if it's that simple, but is Box going to have an opinion on that, or are you just gonna be like, well, we're just the sort of the, the source layer. Yeah. Let's let Okta or Auth0 handle that.

Aaron Levie: I think we're gonna have an opinion, and we will work with generally wherever the contours of the market end up. Um, and the reason that we're gonna have an opinion more than on other topics, probably, is because one of the biggest use cases for why your agent might need an identity is for file system access. So thus we have to kind of think about this pretty deeply. And I think, uh, unless you're like in our world thinking about this particular problem all day long, it might be, you know, like, why is this such a big deal? And the reason why it's a really big deal is because sometimes people sort of say, well, just give the agent an, an account on the system and treat it like every other type of user on the system.
And the reason why it's a really big deal is because sometimes sort of say, well just give the agent an, an account on the system and it just treats, treat it like every other type of user on the system.The [00:07:00] problem is, is that I as Aaron don't really have any responsibility over anybody else's box account in our organization. I can't see the box account of any other employee that I work with. I am not liable for anything that they do. And they have, I have, I have, you know, strict privacy requirements on everything that they're able to, you know, that, that, that they work on.Agents don't have that, you know, don't have those properties. The person who creates the agent probably is gonna, for the foreseeable future, take on a lot of the liability of what that agent does. That agent doesn't deserve any privacy because, because it's, you know, it can't fully be autonomously operated and it doesn't have any legal, you know, kind of, you know, responsibility.So thus you can't just be like, oh, well I'll just create a bunch of accounts and then I'll, I'll kind of work with that agent and I'll talk to it occasionally. Like you need oversight of that. And so then the question is, how do you have a world where the agent, sometimes you have oversight of, but what if that agent goes and works with other people?That person over there is collaborating with the agent on something you shouldn't have [00:08:00] access to what they're doing. So we have all of these new boundaries that we're gonna have to figure out of, of, you know, it's really, really easy. So far we've been in, in easy mode. We've hit the easy button with ai, which is the agent just is you.And when you're in quad code and you're in cursor, and you're in Codex, you're just, the agent is you. You're offing into your services. It can do everything you can do. That's the easy mode. The hard mode is agents are kind of running on their own. 
People check in with them occasionally, they're doing things autonomously. How do you give them access to resources in the enterprise and not dramatically increase the security risk and the risk that you might expose the wrong thing to somebody? These are all the new problems that we have to get solved. I like the identity layer and, and identity vendors as being a solution to that, but we'll, we'll need some opinions as well, because so many of the use cases are these collaborative file system use cases, which is: how do I give an agent a subset of my data? Give it its own workspace as well, 'cause it's gonna need to store off its own information that would be relevant for it. And how do I have the right oversight into that? [00:09:00]

Jeff Huber: One thing which, um, I think is kind of interesting to think about is, you know, how humans work, right? Like, I may not also just, like, give you access to the whole file. I might, like, sit next to you and, like, scroll to this, like, one part of the file and just show you that, like, one part, and, like, you know...

swyx: Partial file access.

Jeff Huber: I'm just saying, I think, like... RAG does seem to be dead, right? Like, if you wanna say something is dead, uh-huh, probably RAG is dead. And, uh, like, the auth story to me seems, like, incredibly unsolved and unaddressed by, like, the existing state of, like, AI vendors. But...

Aaron Levie: Yeah, I think, um, I mean, you're taking it obviously really to the level limit that we probably need to solve for. Yeah. And we built an access control system that was, was kind of like, you know, its own little world for, for a long time. And, um, and the idea was this: it's a many-to-many collaboration system where I can give you any part of the file system. And it's a waterfall model. So if I give you higher up in the, in the, in the system, you get everything below.
And that, that kind of created immense flexibility, because I can kind of point you to any layer in the, in the tree, but then you're gonna get access to everything kind of below it. And that [00:10:00] mostly is, is working in this, in this world. But you do have to manage this issue, which is: how do I create an agent that has access to some of my stuff and somebody else's stuff as well? Mm-hmm. And which parts do I get to look at as the creator of the agent? And, and these are just brand new problems. Yeah. Crazy. And humans... when there was a human there, that was really easy to do. Like, like, if the three of us were all sharing, there'd be a Venn diagram where we'd have an overlapping set of things we've shared, but then we'd have our own ways that we shared with each other. In an agent world, somebody needs to take responsibility for what that agent has access to and what they're working on. These are, like, the, some of the most probably, you know, boring problems for 98% of people on, on the internet, but they will be the problems that are the difference between: can you actually have autonomous agents in an enterprise context...

swyx: Yeah.

Aaron Levie: ...that are not leaking your data constantly.

swyx: No. Like, I mean, you know, I run a very, very small company for my conference, and, like, we already have data sensitivity issues. Yes. And some of my team members cannot see... yes... uh, the others, and, like, I can't imagine what it's like to run a Fortune 500 and, like, you have to [00:11:00] worry about this. I'm just kinda curious, like, you, you talked to a lot... like, like 70, 80% of your cus... uh, of the Fortune 500, your customers.

Aaron Levie: Yep. 67%. Just so we're being very...

swyx: ...precise. So, yeah, I'm not...

Aaron Levie: Okay. Okay.

swyx: Something... I'm rounding up. Yes. Round up.
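The "waterfall" grant model Levie describes, where a grant placed high in the file tree flows down to everything beneath it, can be sketched minimally as follows. This is an illustrative toy, not Box's actual implementation; the folder names and the `can_access` walk are hypothetical:

```python
class Node:
    """A folder/file tree node where access grants inherit downward."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.grants = set()  # principals granted directly at this node


def can_access(principal, node):
    # Walk up toward the root: a grant at this node or any ancestor suffices,
    # which is exactly the "give it higher up, get everything below" waterfall.
    while node is not None:
        if principal in node.grants:
            return True
        node = node.parent
    return False


root = Node("All Files")
deals = Node("Deals", parent=root)
deal_room = Node("Acme Deal Room", parent=deals)
memo = Node("memo.docx", parent=deal_room)

deals.grants.add("sally")                # Sally is granted high in the tree...
assert can_access("sally", memo)         # ...so she reaches everything below
assert not can_access("agent-7", memo)   # the agent has no grant anywhere yet
deal_room.grants.add("agent-7")          # scope the agent to one deal room
assert can_access("agent-7", memo)       # it sees that subtree...
assert not can_access("agent-7", deals)  # ...but nothing above it
```

The last two assertions are the point of the discussion: scoping an agent to one subtree gives it its own bounded workspace without exposing anything above or beside it, which is the oversight problem the hosts are circling.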
I'm projecting to, for...

Aaron Levie: ...the government.

swyx: I'm projecting to the end of the year.

Aaron Levie: Okay.

swyx: There you go.

Aaron Levie: You do make it sound like, like, we, we... well, we've gotta be on this. Like, we're, we're taking way too long to get to 80%. Well...

swyx: No, I mean, so, like, how are they approaching it? Right? Because you're... you don't have a, you don't have a final answer yet.

Why Coding Agents Took Off First

Aaron Levie: Well, okay, so, so this is actually, this is the stark reality that, like, unfortunately is kinda like pouring the water on the party a little bit.

swyx: Yes.

Aaron Levie: We all in Silicon Valley, like, have the absolute best conditions possible for AI ever. And I think we all saw the Dwarkesh, you know, kind of Dario podcast, and this idea of AI coding: why has that taken off, and, and we're not yet fully seeing it everywhere else? Well, look, if you just, like, enumerated the list of properties that AI coding has and then compared it to other [00:12:00] knowledge work... let's just, let's just go through a few of them. Generally speaking, you bring on a new engineer, they have access to a large swath of the code base. Like, there's, like... you just... like, a new engineer comes on, they can just go and find the, the, the stuff that they, they need to work with. It's a fully text-in, text-out medium. It's only, it's just gonna be text at the end of the day. So it's, like, really great from a, from just a, uh, you know, kinda what-the-agent-can-work-with standpoint. Obviously the models are super trained on that dataset. The labs themselves have a really strong, kind of self-reinforcing positive flywheel of why they need to do, you know, agent coding deeply.
So then you get just better tooling, better services. The actual developers of the AI are daily users of the, of the thing that they're working on, versus, like, the... you know, probably there's only, like, seven Claude Cowork legal plugin users at Anthropic any given day, but there's, like, a couple thousand Claude Code, you know, users every single day. So just, like, think about which one they're getting more feedback on, all day long. So you just go through this list. You have a... you know, everybody who's a [00:13:00] developer by definition is technical, so they can go install the latest thing. We're all generally online, or at least, you know, kinda the weird ones are, and we're all talking to each other, sharing best practices. Like, that's, like, already eight differences versus the rest of the economy. Every other part of the economy has, like, like, six to seven headwinds relative to that list. You go into a company, you're a banker in financial services, you have access to, like, a, a tiny little subset of the total data that's gonna be relevant to do your job. And you have to start to go and talk to a bunch of people to get the right data to do your job, because Sally didn't add you to that deal room, you know, folder. And that, that, you know, the information is actually in a completely different organization that you now have to go in and, and sort of run into. And it's like you have this endless list of access controls and security, as, as you talked about. You have a medium which is not... it's not just text, right? You have, you have a Zoom call that, that you're getting all of the requirements from the customer. You have a lot of in-person conversations and you're doing in-person sales, and, like, how do you ever [00:14:00] digitize all of that information?
Um, you know, I think a lot of people got upset with this idea that the code base has all the context. I don't know if you followed some of that conversation that went viral? It's not that simple, the code base doesn't have all the knowledge, but you're a lot better off than you are in other areas of knowledge work. We have documentation practices, you write specifications. Those things don't exist for like 80% of the work that happens in the enterprise. That's the divide that we have: AI coding has fully reached escape velocity in terms of how powerful this stuff is, and now we're gonna have to find a way to bring that same energy and momentum to all these other areas of knowledge work. Where the tools aren't there, the data's not set up to be there, the access controls don't make it that easy. The context engineering is an incredibly hard problem because, again, you have access control challenges, you have different data formats, you have end users that are gonna need to be trained through this as opposed to adopting [00:15:00] these tools in their free time. That's where the Fortune 500 is. And so I think we have to be prepared as an industry that we are gonna be on a multi-year march to bring agents to the enterprise for these workflows. And probably the thing we've learned most in coding that the rest of the world is not yet ready for... I mean, they'll have to be ready for it, because it's just gonna inevitably happen... is this: what's interesting is, if you think about the practice of coding today versus two years ago, it's probably the most changed workflow in maybe history, given how quickly it's changed, right?
swyx: Yeah.
Aaron Levie: Like, has any workflow in the entire economy changed that quickly? At least in any knowledge worker workflow, there's very rarely been an event where one piece of technology and work practice has so fundamentally changed what you do. You don't write code anymore; you talk to an agent and it goes and [00:16:00] does it for you, and at best you review it. And even that's probably largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works.
swyx: Mm-hmm.
Aaron Levie: All of the economy has to go through that exact same evolution. The rest of the economy is gonna have to update its workflows to make agents effective, to give agents the context that they need, to figure out what kind of prompting works, and to figure out how you ensure the agent has the right access to information to be able to execute on its work. This is not the panacea people were hoping for, where the agent drops in and just automates your life. You have to basically re-engineer your workflow to get the most out of agents, and that's just gonna take multiple years across the economy. Right now it's a huge asset and advantage for the teams that do it early and that are wired into doing this, 'cause [00:17:00] you'll see compounding returns. But that's just gonna take a while for most companies to actually get this deployed.
swyx: I love pushing back. That is what a lot of technology consultants love to hear, right?
Aaron Levie: Yeah, yeah, yeah.
swyx: Be first to embrace the AI...
Aaron Levie: Yes.
swyx: ...to get to the promised land, you must pay me so much money...
Aaron Levie: A hundred percent.
swyx: ...to adopt the prescribed way of conforming to the agents.
Aaron Levie: Yes.
swyx: And I worry that you will be eclipsed by someone else who says, no, come as you are.
Aaron Levie: Yeah.
swyx: And we'll meet you where you are.
Aaron Levie: And what was the thing that went viral a week ago? OpenAI, probably, is hiring FDEs...
swyx: Yeah.
Aaron Levie: ...to go into the enterprise.
swyx: Yeah, yeah.
Aaron Levie: And then Anthropic is embedded at Goldman Sachs.
swyx: Yeah.
Aaron Levie: So if the labs are having to do this, if the labs have decided that they need to hire FDEs and professional services, then I think that's a pretty clear indication that there's no easy mode of workflow transformation.
swyx: Yeah.
Aaron Levie: So, to your point, I think this is actually a market opportunity for new professional services and consulting [00:18:00] firms that are agent-focused, that go into organizations and figure out how to re-engineer your workflows to make them more agent-ready, get your data into the right format, and reconstruct your business processes. So you're not doing most of the work; you're telling agents how to do the work and then you're reviewing it. But I haven't seen the thing that can just drop in and let you not go through those changes.
swyx: I don't know how that kind of sales pitch goes over. You know, you're saying things like, well, in my nice, beautiful walled garden, here's this beautiful Box account that has everything.
Aaron Levie: Yes.
swyx: And I'm like, well, most real life is extremely messy.
Aaron Levie: Sure.
swyx: And, like, poorly named, and there's duplicate, outdated s**t...
Aaron Levie: A hundred percent. And so, no, this is actually... I mean, we agree that getting to the beautiful garden is gonna be tough.
swyx: Yeah.
Aaron Levie: There's also the other end of the spectrum, where it's just a technical impossibility to solve.
The agent truly cannot get enough context to make the right decision in the incredibly messy land. There's [00:19:00] no AGI that will solve that. So we're gonna have to land somewhere in between, which is: we all collectively get better at documentation practices, at having authoritative, relatively up-to-date information and putting it in the right place. Agents will certainly cause us to be much better organized around how we work with our information, simply because the severity of the agent pulling the wrong data will be too high, and the productivity gain you'd miss out on by not doing this will be too high as well; your competition will just do it and they'll just have higher velocity. And we see this a lot firsthand. We built a series of agents internally that can have access to your full Box account, and you give them a task and they can go find whatever information you're looking for and work with it. And, you know, thank God for the model progress, because if you gave that task to an agent nine months ago, you'd just get lots of bogus answers. It's gonna say, hey, here are [00:20:00] five documents that all kind of smell like the right thing. But you're putting me on the clock, 'cause my system prompt says, be pretty smart, but also try to respond to the user. So it responds, and it's like, ah, it got the wrong document. And you do that once or twice as a knowledge worker, and you're just...
swyx: Never again.
Aaron Levie: Never again. You're just done with the system.
swyx: Yeah. It doesn't work.
Aaron Levie: It doesn't work.
And so, you know, Opus 4.6 and Gemini 3.1 Pro and, you know, whatever the latest GPT-5.3 will be... those things are getting better and better, and they're using better judgment. And with all of these updates to the agentic tool and search systems, we're seeing very real progress, where the agent can almost smell when something's a little bit fishy. We have this process where we have it fan out, do a bunch of searches, pull up a bunch of data, and then it has to do its own ranking of which documents are the right ones it should be working with. And again, the intelligence level of a model six months ago [00:21:00] would be just throwing a dart: I'm gonna grab these seven files, and I pray that that's the right answer. And something like an Opus 4.5, and now 4.6, is like: no, that one doesn't seem right relative to this question, because I'm seeing some signal contradicting where the document would normally be in the tree and who should have access. It's doing all of that kind of work for you. But it still doesn't work if you just have a total wasteland of data. It's just not possible, partly 'cause a human wouldn't even be able to do it. So basically, if a really, really smart human could not do that task in five or ten minutes, for a search-retrieval-type task, your agent's not gonna be able to do it any better. You see this all day long.

Context Engineering and Search Limits

swyx: So this touches on a thing I'm just passionate about, which is context engineering. I'm just gonna let you ramble or riff on context engineering.
If there's anything... Jeff did really good work on context rot, which has really taken over as the term people use, and the reference...
Aaron Levie: A hundred percent. All we think about is the context rot problem. [00:22:00]
Jeff Huber: Yeah, there's certainly a lot of ranking considerations. Agentic search, I think, is incredibly promising. Um, yeah, I was trying to generate a question. I think I have a question right now, swyx.
Aaron Levie: Yeah, no, but I think there was this moment, like two years ago, before we knew where the gotchas were gonna be in AI, where someone said, well, infinite context windows will just solve all of these problems, because you'll just give the context window all the data. And it's like, okay, maybe in 2035 this is a viable solution. First of all, it would simply cost too much. We just can't give the model the 5,000 documents that might be relevant and have it read them all. But I've seen enough to start believing in crazy stuff, so I'm willing to say, sure...
swyx: Never say never.
Aaron Levie: Ten years from now, we'll have infinite context windows at a thousandth of the price of today. Let's just believe that that's possible. But we're in reality today. So today we have a context engineering [00:23:00] problem, which is: I've got 200,000 tokens that I can work with, or... I don't even know what the latest graph shows before massive degradation. 60K? Okay, I have 60,000 tokens that I get to work with where I'm gonna get accurate information. That's not a lot of tokens for a corpus of 10 million documents that a knowledge worker might have across all of the teams and all the projects and all the people they work with.
I have 10 million documents, times maybe five pages per document or something like that. I'm at 50 million pages of information, and I have 60,000 tokens. Like, holy s**t.
swyx: Yeah.
Aaron Levie: How do I bridge the 50 million pages of information with the couple hundred that I get to work with in that token window?
swyx: Yeah.
Aaron Levie: This is such an interesting problem, and that's why so much of the work is actually in the search systems and the databases; that layer has to get so locked in. But the models are getting better, and importantly, when they've done a search and found the wrong thing, they go back, they check their work, they find a way to balance appeasing the user versus double-checking. We have this one test case where we ask the agent to go find ten pieces of information.
swyx: Is this the complex work eval?
Aaron Levie: Uh, this is actually not in the eval. We have a bunch of internal benchmark scenarios we run every time we update our agent. In one, I ask it to find all of our office addresses, and I give it the list of ten offices that we have. And there's not one document that has this. Maybe there should be; that would be a great example of the kind of thing that maybe over time companies start to have, these canonical key areas of knowledge we need to maintain. Instead we have a bunch of documents that have, like, here's the New York office, and whatever. So you task this agent, and you say, I need the addresses for these ten offices. Okay. And by the way, if you do this on any [00:25:00] public chat model, the same outcome is gonna happen.
But for a different kind of query... you give it, you say, I need these ten addresses. How many times should the agent go and do its search before it decides whether there's just no answer to this question? Often, especially with the lower-tier models, it'll come back and give you six of the ten addresses, and it'll just say, I couldn't find the other four.
swyx: It doesn't know what it doesn't know.
Aaron Levie: It doesn't know what it doesn't know. Yeah. So when should the model stop? Should it do that task for literally an hour and just keep cranking through? Maybe I actually made up an office location, and it doesn't know that I made it up, and I didn't even know that I made it up. Should it just keep reading every single file in your entire Box account until it exhausts every single piece of information?
swyx: Expensive.
Aaron Levie: These are the new problems that we have. So something like a new Opus model is sort of like: okay, I'm gonna try these types of queries. I didn't get exactly what I wanted. I'm gonna try again. And at [00:26:00] some point I'm gonna stop searching, 'cause I've determined that no amount of searching is gonna solve this problem; I'm just not able to do it. And that judgment is a really new thing that the model needs to be able to have.
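The stopping behavior Aaron describes, try a bounded number of query variants, then explicitly report what could not be found rather than guessing, can be sketched in a few lines. This is a hypothetical illustration, not Box's actual harness: `toy_search`, the corpus, and all names here are invented.

```python
# Hypothetical sketch of a budgeted search loop with an explicit give-up rule.
# `search` stands in for any retrieval call; after a fixed number of attempts
# per item, the agent reports "missing" instead of fabricating an answer.

def find_addresses(offices, search, max_queries_per_office=3):
    """Try a few query variants per office, then explicitly give up."""
    found, missing = {}, []
    for office in offices:
        hit = None
        for attempt in range(max_queries_per_office):
            results = search(f"{office} office address", attempt=attempt)
            if results:              # crude relevance check for the sketch
                hit = results[0]
                break
        if hit is None:
            missing.append(office)   # report "not found" instead of guessing
        else:
            found[office] = hit
    return found, missing

# Toy corpus: only six of the ten offices are documented anywhere,
# mirroring the scenario where four addresses simply don't exist.
corpus = {f"Office {i}": f"{100 + i} Main St" for i in range(6)}
offices = [f"Office {i}" for i in range(10)]

def toy_search(query, attempt=0):
    name = query.replace(" office address", "")
    return [corpus[name]] if name in corpus else []

found, missing = find_addresses(offices, toy_search)
print(len(found), len(missing))  # 6 found, 4 explicitly reported missing
```

The interesting design choice is that "I couldn't find it" is a first-class outcome with a bounded cost, rather than an open-ended crawl of the whole account.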
Because they, it just doesn't like, like you're not usually asking it about, you're, you're always creating net new information coming right outta the model for the most part.Obviously it has to know about your code base and your specs and your documentation, but, but when you deploy an agent on all of your data that now you have all of these new problems that you're dealing withJeff Huber: our, uh, follow follow-up research to context ride is actually on a genetic search. Ah. Um, and we've like right, sort of stress tested like frontier models and their ability to search.Um, and they're not actually that good at searching. Right. Uh, so you're sort of highlighting this like explore, exploit.swyx: You're just say, Debbie, Donna say everything doesn't work. Like,Aaron Levie: well,Jeff Huber: somebody has to be,Aaron Levie: um, can I just throw out one more thing? Yeah. That is different from coding and, and the rest [00:27:00] of the knowledge work that I, I failed to mention.So one other kind of key point is, is that, you know, at the end of the day. Whether you believe we're in a slop apocalypse or, or whatever. At the end of the day, if you, if you build a working product at the end of, if you, if you've built a working solution that is ultimately what the customer is paying for, like whether I have a lot of slop, a little slop or whatever, I'm sure there's lots of code bases we could go into in enterprise software companies where it's like just crazy slop that humans did over a 20 year period, but the end customer just gets this little interface.They can, they can type into it, it does its thing. Knowledge work, uh, doesn't have that property. 
If I have an AI model, go generate a contract and I generate a contract 20 times and, you know, all 20 times it's just 3% different and like that I, that, that kind of lop introduces all new kinds of risk for my organization that the code version of that LOP didn't, didn't introduce.These are, and so like, so how do you constrain these models to just the part that you want [00:28:00] them to work on and just do the thing that you want them to do? And, and, you know, in engineering, we don't, you can't be disbarred as an engineer, but you could be disbarred as a lawyer. Like you can do the wrong medical thing In healthcare, you, there's no, there's no equivalent to that of engineering.Like, doswyx: you want there to be, because I've considered softwareJeff Huber: engineer. What's that? Civil engineering there is, right? NotAaron Levie: software civil engineer. Sure. Oh yeah, for sure. But like in any of our companies, you like, you know, you'll be forgiven if you took down the site and, and we, we will do a rollback and you'll, you'll be in a meeting, but you have not been disbarred as an engineer.We don't, we don't change your, you know, your computer science, uh, blameJeff Huber: degree, this postmortem.Aaron Levie: Yeah, exactly. Exactly. So, so, uh, now maybe we collectively as an industry need to figure out like, what are you liable for? Not legally, but like in a, in a management sense, uh, of these agents. All sorts of interesting problems that, that, that, uh, that have to come out.But in knowledge work, that's the real hostile environments that we're operating in. Hmm.swyx: I do think like, uh, a lot of the last year's, 2025 story was the rise of coding agents and I think [00:29:00] 2026 story is definitely knowledge work agents. Yes. A hundredAaron Levie: percent.swyx: Right. Like that would, and I think open claw core work are just the beginning.Yes. Like it's, the next one's gonna just gonna be absolute craziness.Aaron Levie: It it is. 
And it's gonna be... I mean, again, this is gonna be this wave where we try to bring over as many of the practices from coding, because that will clearly be the forefront: tell an agent to go do something, it has access to a set of resources, and you're responsible for reviewing it at the end of the process. That, to me, is the template that goes across knowledge work. Cowork is a great example. OpenClaw's a great example. You can sort of see what Codex could become over time. These are some really interesting platforms that are emerging.
swyx: Okay. Um, we touched on evals a little bit. You had the report that you were gonna bring up, and then I was gonna go into Box's evals, but go ahead, talk about your agentic search thing.
Jeff Huber: Yeah, a few of the insights. Number one, frontier models are not good at search. Humans have this [00:30:00] natural explore-exploit trade-off where we kind of understand when to stop doing something. Also, humans are actually pretty good at forgetting, at pruning their own context, whereas agents are not. If an agent knew something was bad, and you could even see the reasoning in the trace, hey, that probably wasn't a good idea, if it's still in the trace, still in the context, they'll still do it again.
swyx: Uh-huh.
Jeff Huber: And so I think pruning is also gonna be... it's already becoming a thing, right? But letting agents self-prune the context window will be a big deal.
swyx: Yeah. So don't leave the mistake in there. Cut out the mistake, but tell it that it made a mistake in the past, so it doesn't repeat it.
Jeff Huber: Yeah. But cut it out so it doesn't get distracted by it again.
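The pruning idea Jeff describes can be sketched concretely: instead of leaving a failed tool call verbatim in the context, where it acts like a few-shot example the model may imitate, cut it out and keep only a terse note. This is a minimal illustration under an invented message format, not any specific agent framework's API.

```python
# Hypothetical context-pruning pass: failed tool traces are replaced with a
# one-line reminder, so the model knows the approach was tried without
# seeing the failure repeated verbatim in its window.

def prune_failed_attempts(messages):
    pruned = []
    for msg in messages:
        if msg.get("role") == "tool" and msg.get("failed"):
            # Keep the lesson, drop the bulky (and imitable) failed trace.
            pruned.append({
                "role": "system",
                "content": f"Note: {msg['action']} was tried and failed; do not retry it.",
            })
        else:
            pruned.append(msg)
    return pruned

history = [
    {"role": "user", "content": "Find the Q3 contract."},
    {"role": "tool", "action": "search('Q3 contratc')", "failed": True,
     "content": "...3,000 tokens of irrelevant results..."},
    {"role": "tool", "action": "search('Q3 contract')", "failed": False,
     "content": "Found: Q3_contract_final.pdf"},
]

slim = prune_failed_attempts(history)
print(len(slim), slim[1]["role"])  # still 3 messages; the failure is now a one-line note
```

The point of keeping the note at all is exactly swyx's caveat: the agent should remember *that* it failed without re-reading *how* it failed.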
'Cause really, it will repeat its mistake just because it's in the context.
Aaron Levie: It's in the context, so it's a few-shot example. It's like, oh, this is a great thing to go try...
Jeff Huber: ...even if it didn't work.
Aaron Levie: Yeah.
Jeff Huber: Exactly. So there's a bunch of stuff there.
Aaron Levie: It's Groundhog Day inside these models. I'm gonna keep doing the same wrong thing.
Jeff Huber: In a sense, I feel like the analogy is you're trying to fit a manifold in latent space, which is kind of doing program synthesis, which is one way we think about what we're doing, right? Certain facts might be over-pinning it to certain sectors of latent space.
swyx: We have a bell... our editor has a bell every time you say that.
Jeff Huber: So you have to remove those links...
swyx: You should have a gong, like TBPN or something.
Jeff Huber: ...to kind of give it the freedom to do what it needs to do. But yeah, we'll release more soon.
Aaron Levie: That's awesome.
Jeff Huber: That'll be cool.
swyx: We're a cerebral podcast. People listen to us and think really deep. So we try to keep it subtle.
Aaron Levie: Okay, fine.

Inside Agent Evals

swyx: Um, you guys do have evals. You talked about your office thing, but you've also been promoting APEX and the complex work eval. Wherever you wanna take this... how do you think about it?
Aaron Levie: APEX is obviously Mercor's agent eval. We supported that by opening up some data for them around how we see these data workspaces in the regular economy.
So how do lawyers have a workspace? How do investment bankers have a workspace? What kind of data goes into those? So we [00:32:00] partnered with them on their APEX eval. Our own eval is actually relatively straightforward. We have a set of documents in a range of industries. We previously did this as a one-shot test of purely the model, and then we realized that, based on where everything's going, it's just gotta be more agentic. So now it's a bit more of a test of both our harness and the model. And we have a rubric of a set of things it has to get right, and we score it. And you're just seeing these incredible jumps in almost every single model within its own family... Opus 4, you know, Sonnet 4.6 versus Sonnet 4.5.
swyx: Yeah. We have this up on screen.
Aaron Levie: Okay, cool. So you're seeing it somewhere. I forget the total... it was like a 15-point jump, I think, on the overall.
swyx: Yes.
Aaron Levie: And it's just these incredible leaps that are starting to happen.
swyx: And Opus doesn't know any of this? It's completely held out from Opus?
Aaron Levie: This is not in any... there's no public data, which has, you know, benefits. This is just a private eval that we [00:33:00] do, and we just happen to show it to the world.
swyx: Hmm.
Aaron Levie: So you can't train against it. And I think it's representative of... obviously reasoning capabilities, test-time compute capabilities, thinking levels, all the context rot issues. So many interesting capabilities that are now improving.
swyx: One sector that you have that's interesting...

Industries and Datasets

swyx: People are roughly familiar with healthcare and legal, but you have public sector in there.
Aaron Levie: Yeah.
swyx: Uh, what's that?
Like, what is that?
Aaron Levie: Yeah, we actually test against, I dunno, maybe ten industries. We usually end up cutting it down to a few that we think have interesting gains. Public sector has a lot of government-type documents.
swyx: What is that? Government-type documents?
Aaron Levie: Government filings.
swyx: Like a tax return?
Aaron Levie: Probably not tax returns. It would be more of what the government would be using as data. So think about research, those types of datasets. And then we have financial services, for things like data rooms and what would be in an investment prospectus.
swyx: Uh-huh. That one you can dogfood.
Aaron Levie: Yeah, exactly. Exactly. Yes. [00:34:00] So we run the models, now more in an agent mode, but still with kind of limited capacity, and just try to see, on a like-for-like basis, what the improvements are. And again, we just continue to be blown away by how good these models are getting.
swyx: Yeah, I mean, I think every serious AI company needs something like that: well, this is the work we do, here's our company eval. And if you don't have it, well, you're not a serious AI company.
Aaron Levie: There's two dimensions, right? There's how the models are improving, and therefore which models you should recommend a customer use or adopt yourself. But then every single day we're making changes to our agents, and you need to know...
swyx: If you regressed.
Aaron Levie: If you regressed, yeah. You know, I've been fully convinced that the whole agent observability and eval space is gonna be a massive space. Super excited about what Braintrust is doing, excited about LangSmith, all the things. And I mean, this is literally every enterprise right now... the AI companies are the customers of these tools.
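The rubric-and-score setup Aaron describes, a checklist of things the agent must get right, scored per run and compared across runs to catch regressions, can be sketched simply. This is an illustrative toy, not Box's actual harness; the rubric items, answers, and scoring rule are all invented.

```python
# Hypothetical rubric-based eval: each task has a checklist of expected facts,
# a run is scored as the fraction of items the agent's answers satisfy, and
# two runs are compared to detect regressions after an agent or model change.

def score_run(rubric, answers):
    """Fraction of rubric items whose expected value appears in the answer."""
    hits = sum(1 for item, expected in rubric.items()
               if expected.lower() in answers.get(item, "").lower())
    return hits / len(rubric)

rubric = {
    "office_count": "10",
    "ny_address": "100 Main St",
    "filing_year": "2025",
}

old_run = {"office_count": "We found 10 offices", "ny_address": "unknown",
           "filing_year": "2025"}
new_run = {"office_count": "10 offices total", "ny_address": "100 Main St, NY",
           "filing_year": "2025"}

old_score = score_run(rubric, old_run)
new_score = score_run(rubric, new_run)
print(round(old_score, 2), round(new_score, 2))  # 0.67 1.0
if new_score < old_score:
    print("Regression detected")
```

Real harnesses use far richer judges than substring matching, but the shape is the same: a held-out rubric, a score per run, and a diff between runs.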
Every enterprise will have this. You'll just [00:35:00] have to have an eval of all of your work. You'll have an eval of your RFP generation, an eval of your sales material creation, an eval of your invoice processing. And as you buy or use new agentic systems, you're gonna need to know the quality of your pipeline.
swyx: Yeah.
Aaron Levie: So: huge, huge market in agent evals.
swyx: Yeah.

Building the Agent Team

swyx: And, you know, I'm gonna shout out your team a bit. Your CTO, Ben, did a great talk with us last year.
Aaron Levie: Awesome.
swyx: And he's gonna come back again for World's Fair.
Aaron Levie: Oh, cool. Yep.
swyx: Just talk about your team, brag a little bit. I think people take these eval numbers and pretty charts for granted, but there are lots of really smart people at work on all this.
Aaron Levie: Biggest shout-out: we have a couple folks, uh, Dya and Sidarth, that kind of run this. They're a tag-team duo on our evals. Ben, our CTO, is heavily involved; Yasha, our head of AI; a bunch of folks. And evals is one part of the story, and then there's the full, you know, kind of AI...
But there's that kind of core agent team, um, that's a pretty, pretty close, uh, close knit group.swyx: The search team is separate from the infra team.Aaron Levie: I mean, we have like every, every layer of the stack we have to kind of do, except for just pure public cloud.Um, but um, you know, we, we store, I don't even know what our public numbers are in, you know, but like, you can just think about it as like a lot of data is, is stored in box. And so we have, and you have every layer of the, of the stack of, you know, how do you manage the data, the file system, the metadata system, the search system, just all of those components.And then they all are having to understand that now you've got this new customer. Which is the agent, and they've been building for two types of customers in the past. They've been building for users and they've been building for like applications. [00:37:00] And now you've got this new agent user, and it comes in with a difference of it, of property sometimes, like, hey, maybe sometimes we should do embeddings, an embedding based, you know, kind of search versus, you know, your, your typical semantic search.Like, it's just like you have to build the, the capabilities to support all of this. And we're testing stuff, throwing things away, something doesn't work and, and not relevant. It's like just, you know, total chaos. But all of those teams are supporting the agent team that is kind of coming up with its requirements of what, what do we need?swyx: Yeah. No, uh, we just came from, uh, fireside chat where you did, and you, you talked about how you're doing this. It's, it's kind of like an internal startup. Yeah. Within the broader company. The broader company's like 3000 people. Yeah. But you know, there's, there's a, this is a core team of like, well, here's the innovation center.Aaron Levie: Yeah.swyx: And like that every company kind of is run this way.Aaron Levie: Yeah. I wanna be sensitive. I don't call it the innovation center. 
Yeah, only because I think everybody has to do innovation. There's a part of the company that is do-or-die for the agent wave.
swyx: Yeah.
Aaron Levie: And it only happens to be more of my focus because it's existential that [00:38:00] we get it right.
swyx: Yeah.
Aaron Levie: All of the supporting systems are necessary. All of the surrounding adjacent capabilities are necessary. The only reason we get to be a platform where you'd run an agent is because we have a security feature, or a compliance feature, or a governance feature that some team is working on. But that's not gonna be the make-or-break on whether we get agents right. That already exists, and we need to keep innovating there. I don't know what the exact precise number is, but it's not a thousand people and it's not ten people. There's a number of people that are the kind of startup within the company, the make-or-break on everything related to AI agents leveraging our platform and letting you work with your data. And that's where I spend a lot of my time. And Ben and Yosh and Diego and Teri, these are just people across the team that are working on it.
swyx: Yeah. Amazing.

Read Write Agent Workflows

Jeff Huber: How do you think about... I mean, you talked a lot about read workflows over your Box data, right? Search questions, queries, et cetera. But what about write, or authoring, workflows?
Aaron Levie: Yes. I've [00:39:00] already probably revealed too much, actually, now that I think about it. So, um, I've talked about whatever...
Jeff Huber: Whatever you can.
Aaron Levie: Okay. It's just us. It's just us. Yeah. Okay.
Of course, of course.

Aaron Levie: So I'll make it a little bit conceptual, because again, I've already said things that are not even GA, but we've kind of danced around it publicly. Hopefully nobody watches this episode.

swyx: It's tidbits for the highly engaged to go figure out what exactly your line of thinking is. They can connect the dots.

Aaron Levie: Yeah. So I would say that, as a place where you have your enterprise content, there's a use case where I want an agent to read that data and answer questions for me. And then there's a use case where I want the agent to create something: use the file system to create something, or store off data that it's working on, or have various files that it's writing to about the work it's doing. So we do see it as a total read-write. The harder problem has so far been the read, because again, you have that kind of 10-[00:40:00] million-to-one ratio problem, whereas writes are mostly just gonna come from the model, and we'll just put them in the file system. So it's a technically easier problem. The only part that's not necessarily technically hard, just not yet perfected in the state of the ecosystem, is building a beautiful PowerPoint presentation. It's still a hard problem for these models. These formats just weren't built for them.

swyx: They're working on it.

Aaron Levie: They're working on it. Everybody's working on it.

swyx: Every launch is like, well, we do PowerPoint now.

Aaron Levie: We're getting a lot better each time.
Aaron Levie: But then you'll do this thing where you ask it to update one slide, and all of a sudden the fonts will be just a little bit different on two of the slides, or it moved some shape over to the left a little bit. These are the kinds of things that in code you could care about, if you really care about how beautiful the code is, but the end user doesn't notice those problems. In file creation, the end user instantly sees it. You're [00:41:00] like, ah, paragraph three, you literally just changed the font on me. It's a totally different font midway through the document. Those are the kinds of things you run into a lot on the content-creation side. So we are gonna have native agents that do all of those things, powered by the leading models and labs. But the thing that I think is probably gonna be a much bigger idea over time is any agent on any system, again, using Box as a file system for its work. And in that kind of scenario, we don't necessarily care what it's putting in the file system. It could put its memory files, its specification documents, whatever its markdown files are, or it could generate PDFs. It's just a workspace that is sandboxed off for its work. People can collaborate in it; it can share with other people. And so we were thinking a lot about what's the right way to deliver that at scale.

Docs Graphs and Founder Mode

swyx: I wanted to come to the AI transformation, AI operations things. [00:42:00] One of the tweets that you wanted to talk about (this is just me going through your tweets, by the way).

Aaron Levie: Oh, okay.
swyx: I mean, this is, you read...

Aaron Levie: One by one.

swyx: You're the easiest guest to prep for, because you already have, like, this is what I'm interested in.

Aaron Levie: Are we gonna get to, like, February, January or something? Where are we in the timelines? How far back are we going?

swyx: Can you describe Box as a set of skills? That's one of the extremes: if you just turn everything into a markdown file, then your agent can run your company. You just have to find the right sequence of words to do it.

Aaron Levie: Sorry, is that the question?

swyx: So I think the question is: what if we documented everything, exactly the way you said? Let's get all the Fortune 500s prepared for agents, everything golden and nicely filed away. What's missing? What's left? You've run your company for a decade.

Aaron Levie: Yeah. I think the challenge is that that information changes a week later, because something happened in the market for that [00:43:00] customer, or for us as a company, that now has to get updated. So these systems are living and breathing, and they have to experience reality, and updates to reality, which right now is probably gonna be humans giving them the updates. And there is this piece about context graphs that went very viral. I thought it was super provocative. I agreed with many parts of it. I disagree with a few parts around,
you know, it's not gonna be as easy as just, if we had the agent traces, then we can finally do that work, because there's so much other stuff happening that we haven't been able to capture and digitize. And I think they actually represented that in the piece, to be clear. But there's just a lot of work that has to happen. You can't have only skills files for your company, because there's gonna be a lot of other stuff that happens and changes over time.

Jeff Huber: Most companies are practically apprenticeships.

swyx: Most companies are practically apprenticeships.

Jeff Huber: Like, every new employee who joins the team, [00:44:00] you spend one to three months ramping them up. All that tacit knowledge is not written down.

Aaron Levie: Yes.

Jeff Huber: But it would have to be if you wanted to give it to an agent, right? And so that seems to me to be...

Aaron Levie: One thing is, I think you're gonna see a premium on companies that can document this. There'll be a huge premium on that, because can you shorten that three-month ramp cycle to a two-week ramp cycle? That's an instant productivity gain. Can you dramatically reduce rework in the organization because you've documented where all the stuff is and where the answers are? Can you make your average employee as good as your 90th-percentile employee because you've captured the knowledge that's in the heads of those top employees and made it available? You can see some very clear productivity benefits.
Aaron Levie: If you had a company culture of making sure your information was captured, digitized, put in a format that was agent-ready, and then made available to agents to work with, you still have this reality that at a 10,000-person [00:45:00] company, mapping that to the access structure of the company is just a hard problem. Not every piece of information that's digitized can be shared with everybody. So now you have to organize it in a way that actually works. There was a pretty good piece called "Your Company Is a File System." Did you see that one?

swyx: Nope.

Aaron Levie: Yes, you saw it. I'd actually be curious about your thoughts on it. We agree with it, because that's how we see the world.

swyx: Okay, we have it up on screen.

Aaron Levie: It's all about how we already organize in this kind of permission-structured way, and these are the natural ways that agents can now work with data. So it's an interesting metaphor, but I do think companies will have to start to think about how they digitize more of that data. What was your take?

Jeff Huber: Yeah, I mean, the company's probably like an ACID-compliant file system, which I'm guessing Box is, right?

swyx: Yeah. [00:46:00]

Jeff Huber: Which you have a great piece on.

swyx: My direction is a little bit different. I wanna rewind to the graph word you said. That's a magic trigger word for us. I always ask: what's your take on knowledge graphs? Because with every database person especially, I just wanna see what they think.
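Levie's point about mapping digitized knowledge to the company's access structure can be made concrete with a small sketch: before an agent reads anything on a user's behalf, filter candidates by that user's permissions. The ACL table, paths, and function names here are hypothetical illustrations, not Box's API.

```python
# Hypothetical ACL table: path -> identities allowed to read it.
ACL = {
    "finance/q3-forecast.md": {"alice", "finance-team"},
    "eng/arch-notes.md": {"bob", "eng-team"},
    "handbook/onboarding.md": {"everyone"},
}

def visible_to(principal, groups):
    """Paths a user, or an agent acting on their behalf, may read."""
    identities = {principal, "everyone"} | set(groups)
    # Keep only documents whose allowed set intersects the caller's identities.
    return [path for path, allowed in ACL.items() if allowed & identities]

print(visible_to("alice", {"finance-team"}))
# -> ['finance/q3-forecast.md', 'handbook/onboarding.md']
```

The same filter would sit in front of retrieval or search, so an agent never even sees content the human it serves could not open.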
swyx: There have been knowledge-graph hype cycles, and you've seen it all.

Aaron Levie: Hmm. I'm actually not the expert in knowledge graphs, so you might need to research that.

swyx: You don't need to be an expert. I think it's just, how seriously do people take it? Is there a lot of potential in it?

Aaron Levie: Well, can I first understand, is this a loaded question? In the sense of, are you super pro, super anti, or medium?

swyx: I see pros and cons. But I think your opinion should be independent of mine.

Aaron Levie: Totally. I just want to see what I'm stepping into.

swyx: It's a huge trigger word for a lot of people in our audience, and they're trying to figure out why it's such a hot item. Because a lot of people get graph religion. [00:47:00] They're like, everything's a graph. Of course you have to represent it as a graph. How do you solve your knowledge changing over time? Well, it's a graph.

Aaron Levie: Yeah.

swyx: There's that line of work, and then there are a lot of people who are like, well, you don't need it. And both are right.

Aaron Levie: And the people who say you don't need it, what are they arguing for?

swyx: Markdown files. Simplicity.

Aaron Levie: Yeah.

swyx: It's structure versus less structure. That's all it is.

Aaron Levie: I think the tricky thing is, again, when this gets met with real humans, they're just going to their computer. They're just working with some people on Slack or Teams. They're just sharing some data through a collaborative file system, in Google Docs or Box or whatever.
Aaron Levie: I certainly like the vision of most knowledge-graph, futuristic ways of thinking about it. It's just, you know, it's 2026; we haven't seen it play out yet. I mean, I don't even know how old you guys are, but to show my age: I remember 17 years ago, everybody thought enterprises would just run on [00:48:00] wikis. Confluence actually took off for engineering, unquestionably. But the idea was that everything would be in the wiki. And based on our general style of what we were building, we were just like, I don't know, people just want a workspace. They're gonna collaborate with other people.

swyx: Exactly. So you were anti-knowledge-graph.

Aaron Levie: Not anti. I think your search system and a graph are probably two different systems. But I'm not in any religious war. I don't want to be in anybody's YouTube comments on this. There's not a fight for me.

swyx: We love YouTube comments. We get into the comments.

Aaron Levie: Okay. But it's mostly just a virtue of what we built, and we continued down that path.

swyx: It's not existential for you. Great.

Aaron Levie: We're happy to plug into somebody else's graph. We're happy to feed data into it. We're happy for [00:49:00] agents to talk to multiple systems. Not our fight.

swyx: Yeah.

Aaron Levie: But I need your answer.
swyx: Graphs are nerd snipes. A very effective nerd snipe. See, this is one opinion, and then I've...

Jeff Huber: I think the actual graph structure is emergent in the mind of the agent, in the same way it is in the mind of the human. And that's a more powerful graph, because it actually evolved over time.

swyx: So: don't tell me how to graph, I'll figure it out myself. Exactly. Okay. All right.

Jeff Huber: And what's yours?

swyx: I like the wiki approach. I'm actually

    The Insider Travel Report Podcast
    How Tickitto Allows Travel Advisors to Get Into the Event Booking Business

    The Insider Travel Report Podcast

    Play Episode Listen Later Mar 5, 2026 11:14 Transcription Available


Dana Lattouf, founder and CEO of Tickitto, talks with Alan Fine of Insider Travel Report at the InteleTravel national conference in Punta Cana about how her InteleTravel-owned global ticketing infrastructure platform connects travel advisors to more than 90,000 concerts, sports and entertainment events worldwide. She also discusses how Tickitto integrates event inventory directly into travel booking systems and how travel advisors can add event tickets to client itineraries through white-label tools and API connections. For more information, visit www.tickitto.com or www.inteletravel.com. All our Insider Travel Report video interviews are archived and available on our YouTube channel (youtube.com/insidertravelreport), and as podcasts with the same title on Spotify, Pandora, Stitcher, PlayerFM, Listen Notes, Podchaser, TuneIn + Alexa, Podbean, iHeartRadio, Google, Amazon Music/Audible, Deezer, Podcast Addict, and Apple Podcasts, which supports Overcast, Pocket Cast, Castro and Castbox.

    Hashtag Trending
    Stolen Gemini API Key Triggers $82K Bill

    Hashtag Trending

    Play Episode Listen Later Mar 5, 2026 15:49


    Stolen Gemini API Key Triggers $82K Bill, Accenture Buys Ookla, OpenAI vs GitHub, and Meta Smart Glasses Privacy Jim Love covers multiple tech stories: a three-developer startup in Mexico saw its Google Gemini bill jump from about $180/month to $82,314 in two days after attackers used a stolen API key, highlighting the financial and security risks of usage-based AI APIs, limits, and autonomous agents. Accenture is buying Ookla (Speedtest and Downdetector) for about $1.2B, aiming to monetize its large real-world internet performance dataset for consulting and infrastructure work. Reports say OpenAI may be developing a developer platform that could compete with Microsoft's GitHub, complicating their partnership. China's Minimax launches Max Claw, a cloud "always-on" AI agent deployable in 10 seconds, raising broader access and data-security concerns. Apple's MacBook Neo looks inexpensive but has fixed 8GB memory and paid storage upgrades. Meta's Ray-Ban smart glasses raise privacy questions around stored AI interactions and human review. Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt 00:00 Sponsor Message Meter 01:04 Gemini Key Bill Shock 04:46 Accenture Buys Ookla 06:26 OpenAI vs GitHub Rumors 08:07 Minimax Max Claw Agents 11:07 MacBook Neo Value Trap 12:51 Meta Smart Glasses Privacy 14:56 Wrap Up and Thanks
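The billing story above comes down to unbounded usage-based spend on a leaked key. A minimal client-side guard, sketched here with made-up cap and pricing values (a real deployment would also set provider-side quotas and key restrictions), refuses further calls once a hard cap is reached:

```python
class BudgetGuard:
    """Refuse further API calls once estimated spend reaches a hard cap."""

    def __init__(self, cap_usd, usd_per_1k_tokens):
        self.cap = cap_usd
        self.rate = usd_per_1k_tokens
        self.spent = 0.0

    def charge(self, tokens):
        # Estimate this call's cost before letting it through.
        cost = tokens / 1000 * self.rate
        if self.spent + cost > self.cap:
            raise RuntimeError(f"budget cap ${self.cap:.2f} reached; call refused")
        self.spent += cost

# Illustrative numbers: a $180/month cap, as in the startup's normal bill.
guard = BudgetGuard(cap_usd=180.0, usd_per_1k_tokens=0.01)
guard.charge(500_000)  # $5.00 of estimated spend, allowed
print(f"${guard.spent:.2f}")  # -> $5.00
```

A guard like this only protects calls routed through your own code; a stolen key used directly against the provider bypasses it, which is why server-side quotas matter too.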

    Between Product and Partnerships
    How to Build Integrations with Platforms Bigger Than You Without Getting Stuck at the Bottom of the Queue

    Between Product and Partnerships

    Play Episode Listen Later Mar 5, 2026 31:26


In this episode of Between Product and Partnerships, Biljana Pecelj joins Cristina Flaschen to explain how smaller teams successfully ship integrations with larger platform partners. She makes the case that leveraging usage data and performance metrics is the key to proving your integration's value, giving you the necessary influence to move up a major partner's priority list.

Biljana shares lessons from her experience managing integrations at Hootsuite during major platform shifts, including the rise of Instagram Business APIs and the emergence of new features like Stories that didn't always come with immediate API support. She also details the process of aligning internal stakeholders to ensure integration features actually ship despite shifting external APIs.

The conversation also covers the operational side of integrations: why observability needs to be built early, how teams detect silent failures before customers do, and how to structure internal alignment when integration work touches engineering, legal, partnerships, and revenue.

Who we sat down with
Biljana Pecelj is a Principal Product Manager at Ledgy with deep experience building integrations inside platform-heavy environments. She has worked extensively on partnership-driven product initiatives where execution speed depends on navigating both technical constraints and external partner relationships.

Biljana brings expertise in:
Building integrations in environments where APIs and features evolve asynchronously
Designing for observability and proactive monitoring
Navigating asymmetric partner relationships
Aligning roadmap priorities across product, partnerships, legal, and engineering
Managing tradeoffs between beta opportunities and engineering capacity

Key Topics
Why integration product work is relationship work: Technical execution matters, but alignment with partners determines whether integrations actually ship and scale.
Building in ecosystems you don't control: APIs change. Features launch without endpoints. Roadmaps shift. Successful teams anticipate uncertainty rather than assume stability.
The importance of observability from day one: Silent failures are common in integrations. Without monitoring, teams often learn about outages from customers instead of systems.
Roadmap tradeoffs when beta opportunities arise: New partner features can require immediate shifts in engineering priorities. Negotiation and resource reallocation become core product skills.
M&A and integration complexity: Brand consolidation rarely means backend integration. Teams often inherit layered systems that remain technically independent long after acquisition.

Episode Highlights
01:55 – How integration product management differs from core product work
04:40 – Navigating power imbalances with large platform partners
07:15 – Using data to strengthen partner conversations
10:30 – Building observability when resources are limited
13:45 – Handling silent integration failures
17:50 – Managing beta features and roadmap shifts
21:30 – Aligning cross-functional teams around integration priorities
24:45 – Why relationships accelerate integration execution
28:10 – Lessons learned from building inside platform ecosystems

For more insights on partnerships, ecosystems, and integrations, visit www.pandium.com
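The silent-failure problem the episode highlights can be made concrete with a small heartbeat check: if an integration stops recording successful syncs, nothing errors, so something has to actively look for missing heartbeats. The class and integration names below are illustrative, not from any specific platform.

```python
import time

class HeartbeatMonitor:
    """Flag integrations as silently failing when no successful sync
    has been recorded within the allowed window."""

    def __init__(self, max_silence_s):
        self.max_silence = max_silence_s
        self.last_ok = {}  # integration name -> timestamp of last success

    def record_success(self, integration, now=None):
        self.last_ok[integration] = time.time() if now is None else now

    def silent_failures(self, now):
        # A failing integration emits nothing, so we check for absence
        # of recent successes rather than presence of errors.
        return [name for name, ts in self.last_ok.items()
                if now - ts > self.max_silence]

mon = HeartbeatMonitor(max_silence_s=3600)
mon.record_success("instagram-api", now=0)
mon.record_success("tiktok-api", now=3000)
print(mon.silent_failures(now=4000))  # -> ['instagram-api']
```

In practice a check like this runs on a schedule and pages the team, which is how you learn about an outage from your systems instead of from customers.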

    The David Knight Show
    Wed Episode #2214: OpenAI vs. Anthropic: The Military AI Split

    The David Knight Show

    Play Episode Listen Later Mar 4, 2026 121:41 Transcription Available


    ────────────────────────────────────────00:01:06:19 — AI Firms Accused of Enabling Mass Surveillance and Autonomous WeaponsOpenAI and other technology companies face backlash for allegedly cooperating with Pentagon projects involving mass surveillance systems and autonomous lethal weapons.────────────────────────────────────────00:02:09:26 — Claims of “Unlimited” U.S. Weapons Stockpiles ChallengedStatements that the United States possesses virtually unlimited weapons stockpiles are disputed using reported production figures showing interceptor missile shortages.────────────────────────────────────────00:03:41:29 — Missile Production Gap Exposes Strategic VulnerabilityIranian missile production is estimated at about 100 per month while U.S. interceptor production may be only six to seven per month, highlighting a severe imbalance in defensive capability.────────────────────────────────────────00:07:34:20 — Reports of Cluster Missile Technology Increasing Defense ChallengesClaims circulate that certain Iranian missiles contain dozens of sub-munitions, multiplying the difficulty for missile defense systems already facing interceptor shortages.────────────────────────────────────────00:09:14:14 — U.S. Proposal to Insure Oil Tankers Through Strait of HormuzThe U.S. government reportedly considers guaranteeing or insuring oil tankers traveling through the Strait of Hormuz to keep global energy shipments moving despite military risks.────────────────────────────────────────00:11:16:19 — Debate Over Israeli Influence on U.S. War DecisionsArguments emerge that U.S. 
policy may be influenced by Israeli strategic priorities, while critics insist American leaders remain responsible for their own decisions.────────────────────────────────────────00:16:24:03 — 1953 Iran Coup Framed as Origin of Modern ConflictCurrent tensions are linked to the CIA-backed overthrow of Iran's government in 1953 and the installation of the Shah, described as a foundational moment for long-term hostility.────────────────────────────────────────00:38:46:16 — U.S. Troops Killed in Missile Strike on Kuwait BaseSix U.S. service members are reported killed and multiple others injured when a missile strike hits a makeshift operations center described as a “fortified” trailer.────────────────────────────────────────00:43:05:25 — Christian Prophecy Narratives Used to Justify WarReports emerge of military leadership invoking biblical prophecy and Armageddon narratives to frame the conflict with Iran as part of a divine plan.────────────────────────────────────────00:58:33:09 — California Law Requires Age-Tracking Internet InfrastructureCalifornia unanimously passes AB-1043 requiring operating systems to collect age data at account setup and transmit it to app developers via a real-time API beginning January 2027.────────────────────────────────────────01:10:36:14 — Trump Targeting Law Firms Sparks Constitutional ConcernsExecutive orders reportedly removed security clearances and federal building access from law firms associated with political opponents.────────────────────────────────────────01:27:56:15 — AI Industry Conflict Over Military Surveillance ContractsAnthropic's Claude AI reportedly refuses Pentagon uses tied to mass surveillance or autonomous weapons while OpenAI moves forward with defense contracts.──────────────────────────────────────── Money should have intrinsic value AND transactional privacy: Go to https://davidknight.gold/ for great deals on physical gold/silver For 10% off Gerald Celente's prescient Trends Journal, go to 
https://trendsjournal.com/ and enter the code KNIGHT Find out more about the show and where you can watch it at TheDavidKnightShow.com If you would like to support the show and our family please consider subscribing monthly here: SubscribeStar https://www.subscribestar.com/the-david-knight-showOr you can send a donation throughMail: David Knight POB 994 Kodak, TN 37764Zelle: @DavidKnightShow@protonmail.comCash App at: $davidknightshowBTC to: bc1qkuec29hkuye4xse9unh7nptvu3y9qmv24vanh7Become a supporter of this podcast: https://www.spreaker.com/podcast/the-david-knight-show--2653468/support.

    The REAL David Knight Show
    Wed Episode #2214: OpenAI vs. Anthropic: The Military AI Split

    The REAL David Knight Show

    Play Episode Listen Later Mar 4, 2026 121:41 Transcription Available


    ────────────────────────────────────────00:01:06:19 — AI Firms Accused of Enabling Mass Surveillance and Autonomous WeaponsOpenAI and other technology companies face backlash for allegedly cooperating with Pentagon projects involving mass surveillance systems and autonomous lethal weapons.────────────────────────────────────────00:02:09:26 — Claims of “Unlimited” U.S. Weapons Stockpiles ChallengedStatements that the United States possesses virtually unlimited weapons stockpiles are disputed using reported production figures showing interceptor missile shortages.────────────────────────────────────────00:03:41:29 — Missile Production Gap Exposes Strategic VulnerabilityIranian missile production is estimated at about 100 per month while U.S. interceptor production may be only six to seven per month, highlighting a severe imbalance in defensive capability.────────────────────────────────────────00:07:34:20 — Reports of Cluster Missile Technology Increasing Defense ChallengesClaims circulate that certain Iranian missiles contain dozens of sub-munitions, multiplying the difficulty for missile defense systems already facing interceptor shortages.────────────────────────────────────────00:09:14:14 — U.S. Proposal to Insure Oil Tankers Through Strait of HormuzThe U.S. government reportedly considers guaranteeing or insuring oil tankers traveling through the Strait of Hormuz to keep global energy shipments moving despite military risks.────────────────────────────────────────00:11:16:19 — Debate Over Israeli Influence on U.S. War DecisionsArguments emerge that U.S. 
policy may be influenced by Israeli strategic priorities, while critics insist American leaders remain responsible for their own decisions.────────────────────────────────────────00:16:24:03 — 1953 Iran Coup Framed as Origin of Modern ConflictCurrent tensions are linked to the CIA-backed overthrow of Iran's government in 1953 and the installation of the Shah, described as a foundational moment for long-term hostility.────────────────────────────────────────00:38:46:16 — U.S. Troops Killed in Missile Strike on Kuwait BaseSix U.S. service members are reported killed and multiple others injured when a missile strike hits a makeshift operations center described as a “fortified” trailer.────────────────────────────────────────00:43:05:25 — Christian Prophecy Narratives Used to Justify WarReports emerge of military leadership invoking biblical prophecy and Armageddon narratives to frame the conflict with Iran as part of a divine plan.────────────────────────────────────────00:58:33:09 — California Law Requires Age-Tracking Internet InfrastructureCalifornia unanimously passes AB-1043 requiring operating systems to collect age data at account setup and transmit it to app developers via a real-time API beginning January 2027.────────────────────────────────────────01:10:36:14 — Trump Targeting Law Firms Sparks Constitutional ConcernsExecutive orders reportedly removed security clearances and federal building access from law firms associated with political opponents.────────────────────────────────────────01:27:56:15 — AI Industry Conflict Over Military Surveillance ContractsAnthropic's Claude AI reportedly refuses Pentagon uses tied to mass surveillance or autonomous weapons while OpenAI moves forward with defense contracts.──────────────────────────────────────── Money should have intrinsic value AND transactional privacy: Go to https://davidknight.gold/ for great deals on physical gold/silver For 10% off Gerald Celente's prescient Trends Journal, go to 
https://trendsjournal.com/ and enter the code KNIGHT Find out more about the show and where you can watch it at TheDavidKnightShow.com If you would like to support the show and our family please consider subscribing monthly here: SubscribeStar https://www.subscribestar.com/the-david-knight-showOr you can send a donation throughMail: David Knight POB 994 Kodak, TN 37764Zelle: @DavidKnightShow@protonmail.comCash App at: $davidknightshowBTC to: bc1qkuec29hkuye4xse9unh7nptvu3y9qmv24vanh7Become a supporter of this podcast: https://www.spreaker.com/podcast/the-real-david-knight-show--5282736/support.

    Code Story
    Developer Chats - Oleksandr Piekhota

    Code Story

    Play Episode Listen Later Mar 4, 2026 27:33 Transcription Available


Today, we are continuing our series, entitled Developer Chats, hearing from the large-scale system builders themselves.

In this episode, we are talking with Oleksandr Piekhota, Principal Software Engineer at Teaching Strategies. Oleksandr helps show us at what point of scale platform approaches are required, when to run experiments and when to stop, and, perhaps more importantly, engineering ownership beyond the code.

Questions
You've moved from hands-on engineering into principal and technical leadership roles, working on architecture and platforms. At what point did you realize your work was no longer about individual features, but about the system as a whole?
Across several projects, growth didn't break functionality; it exposed architectural limits. Can you recall a moment when it became clear that shipping more features wouldn't solve the problem, and a platform approach was required?
You've designed and supported APIs end-to-end, from architecture to real customers. How do you distinguish between an API that simply works and one that can truly support business scale?
Internal systems like invoicing and HR workflows began as automation, but evolved into real products. What tells you that an internal tool is worth developing seriously rather than treating as a temporary workaround?
In R&D, you explored CI/CD automation, serverless, and infrastructure experiments; not all reached production. How do you decide when an experiment should continue, and when it's no longer worth the engineering cost?
You've hired teams, set standards, and shaped long-term technical direction. At what point does an engineer stop being a contributor and start owning business-level outcomes?
You contributed to open-source tools that later became part of your company's infrastructure. Why do you see open source contributions as part of serious engineering work rather than a side activity?
Looking across your projects, how do you now recognize a truly mature engineering system? Is it code quality, process, or how teams respond when things go wrong?
If we look five to seven years into the future, which architectural assumptions we treat as "standard" today are most likely to turn out to be naive or limiting?

Sponsors
Incogni

Links
https://www.linkedin.com/in/oleksandr-piekhota-b675ba53/
https://teachingstrategies.com/

Support this podcast at https://redcircle.com/codestory/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

    Seller Sessions
    Claude Sessions Week 3: AI Implementation for E-Commerce with Subash - Seller Sessions Podcast

    Seller Sessions

    Play Episode Listen Later Mar 4, 2026 40:27


In this third installment of Claude Sessions, Danny is joined by Subash from Not A Square, who helps e-commerce brands scaling past seven figures implement AI without scaling headcount. Subash walks through real client case studies -- including a TikTok brand that boosted its customer satisfaction score from 4.2 to 4.5 in four weeks using a customer support agent built in Claude. Danny then breaks down OpenClaw, the open-source personal AI agent that exploded in popularity, explains why he chose not to use it despite the temptation, and reveals Claude Flow -- his custom operating system built inside Claude Code with 11 engines, 300+ features, and a persistent memory layer powered by ChromaDB. The episode drives home one core message: document your operations first, pick one platform, go deep, and stop chasing every new tool.

Key Topics
Documenting operations before automation -- Why you cannot automate what is not documented
TikTok customer support case study -- Building an AI agent that raised satisfaction scores in four weeks
OpenClaw overview and security risks -- What it does, why it blew up, and why Danny built his own alternative
Claude Flow -- Danny's custom operating system inside Claude Code with persistent memory
The amnesia loop -- How context loss between sessions kills productivity and how ChromaDB solves it
Pixel-less environment -- The shift from structured prompts to contextual AI interaction
Go deep on one platform -- Why chasing multiple AI tools guarantees you build nothing

Timestamps
[00:00] Introduction -- Claude Sessions Week 3, delayed from the road
[01:03] Subash introduces himself and Not A Square
[02:01] Overview of three client projects and the problem founders face
[04:30] Why operational truth is the moat in AI commerce
[06:48] Three pillars: reduce costs, better governance, scale without headcount
[07:30] TikTok case study -- customer support agent boosting store score from 4.2 to 4.5
[09:04] OpenClaw -- history, capabilities, and the security nightmare
[15:30] Six core capabilities of OpenClaw (local-first, universal messaging, persistent memory, browser automation, system access, self-extending skills)
[18:00] Why OpenClaw matters -- moving from dumb LLMs to personal AI agents
[20:00] Security trade-offs -- 1.5M API keys exposed, malware in skills, Cisco tests
[22:00] Claude Flow -- Danny's 11-engine operating system built inside Claude Code
[24:26] The amnesia loop -- how sessions lose context and how ChromaDB fixes it
[28:19] Why Claude MD, agents, and skills are not enough without hooks and triggers
[32:40] Go deep on one platform -- stop chasing every new tool
[35:35] Subash on helping sellers adopt Claude Code fundamentals (Claude MD, skills)
[39:51] Wrap-up and contact info

Key Takeaways
Document before you automate -- If your business operations live in the founder's head and not on paper, any AI tool will amplify the chaos rather than fix it.
Operational truth is the moat -- Clean inventory, accurate catalogs, honest cashflow reporting. Get these right before touching AI.
One AI agent moved the needle -- A single customer support agent on TikTok raised a brand's satisfaction score from 4.2 to 4.5 in four weeks, directly improving store visibility.
Persistent memory changes everything -- ChromaDB captures decisions, patterns, and project context across sessions so Claude compounds in usefulness over time (zero entries in session one, 1,700+ by session 25).
Scaffolding beats raw building -- Danny's Claude Flow system means a project that took five days six months ago now takes 40 minutes. The investment in infrastructure pays exponential returns.
OpenClaw is proof of concept, not production-ready -- Broad permissions, prompt injection vulnerabilities, exposed API keys. Wait for the open-source community to patch the holes before diving in.
Pick one platform and go all the way in -- Chasing multiple AI tools means you learn none of them deeply and build nothing of value.
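The episode doesn't show Danny's ChromaDB-backed memory layer itself, but the idea -- persist notes at the end of each session, retrieve them at the start of the next to break the amnesia loop -- can be sketched with the standard library alone. This stand-in swaps ChromaDB's embedding search for plain keyword matching; the class and method names here are hypothetical.

```python
import sqlite3

# Minimal stand-in for a persistent session-memory layer.
# ChromaDB would store embeddings and answer semantic queries; this sketch
# uses SQLite with keyword matching to show the shape of the workflow.
class SessionMemory:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory (session INTEGER, note TEXT)"
        )

    def remember(self, session, note):
        # Called at the end of a work session to persist decisions/context.
        self.db.execute("INSERT INTO memory VALUES (?, ?)", (session, note))
        self.db.commit()

    def recall(self, keyword, limit=5):
        # Called at the start of a new session to rebuild context,
        # most recent sessions first.
        rows = self.db.execute(
            "SELECT note FROM memory WHERE note LIKE ? "
            "ORDER BY session DESC LIMIT ?",
            (f"%{keyword}%", limit),
        )
        return [r[0] for r in rows]

mem = SessionMemory()
mem.remember(1, "decision: chose ChromaDB over flat files for memory")
mem.remember(2, "decision: popup cooldown set to 24h")
print(mem.recall("decision"))
```

With a file path instead of `:memory:`, the notes survive between runs, which is the whole point: session 25 starts with everything sessions 1-24 learned.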

    Between Two COO's with Michael Koenig
    AI Agents Need Logins Too: Identity, Security, and the Future of AI | Greg Keller, CTO, JumpCloud

    Between Two COO's with Michael Koenig

    Play Episode Listen Later Mar 4, 2026 32:01


Get 90 days of Fellow free at Fellow.ai/coo

In this episode, Michael Koenig speaks with Greg Keller, co-founder and CTO of JumpCloud, about identity access management and why it's becoming one of the most important operational systems in the age of AI. Greg explains how traditional identity systems were designed for office-based companies running Microsoft infrastructure and why that model broke as companies moved to SaaS, cloud infrastructure, and remote work. The discussion then turns to the next big shift: the rise of AI agents and synthetic identities inside organizations. As companies deploy more AI tools, the number of machine identities may soon outnumber human employees. Managing what those systems can access will become a critical security and operational challenge.

Topics Covered
What a CTO actually does: Greg explains the different types of CTO roles and how technology leaders help companies anticipate where the market is headed.
Identity Access Management explained simply: IAM answers three core questions inside every company: Who are you? What can you access? How is that access managed?
Why the old IT model broke: Traditional identity systems were built for on-premise offices and Microsoft infrastructure. Modern companies now operate across SaaS applications, cloud infrastructure, remote work environments, and multiple operating systems.
How JumpCloud approaches identity: JumpCloud was built to manage identity across devices, applications, and infrastructure regardless of platform.
Where Okta fits in the ecosystem: Okta helped modernize browser-based authentication through Single Sign-On, while JumpCloud focuses on broader identity infrastructure.

AI, Security, and Synthetic Identities
Why COOs should push AI adoption: Greg argues AI adoption is no longer optional. Companies must encourage teams to improve productivity and efficiency using AI.
The rise of synthetic identities: AI agents, bots, APIs, and service accounts are becoming new actors inside companies that require identity governance.
Bots may soon outnumber employees: Organizations will soon manage more machine identities than human ones.
AI as a potential insider threat: AI systems can become security risks if they are granted excessive permissions or misinterpret policies.
The API key governance problem: Many AI integrations rely on API keys, which are often poorly managed and can create hidden security risks.

Key Takeaway
As companies adopt AI, identity access management becomes the control layer that determines what both humans and machines are allowed to do inside the organization. The companies that manage identity well will move faster and operate more securely.

Links:
Michael on LinkedIn: https://linkedin.com/in/michael-koenig514
Greg on LinkedIn: https://www.linkedin.com/in/gregorykeller/
JumpCloud: https://jumpcloud.com/
Between Two COO's: https://betweentwocoos.com
Episode Link: https://betweentwocoos.com/ai-agents-identity-access-greg-keller
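Greg's point about poorly managed API keys can be made concrete with a simple rotation check over an inventory of machine identities. This is an illustrative sketch only, not JumpCloud's API: the inventory, the 90-day policy, and every name below are invented.

```python
from datetime import date, timedelta

# Hypothetical inventory of machine identities (AI agents, bots, service
# accounts) and when their API keys were issued. In practice this data
# would come from an IAM system; here it is hard-coded for illustration.
MAX_KEY_AGE = timedelta(days=90)  # assumed rotation policy

keys = [
    {"owner": "support-agent", "issued": date(2026, 1, 10)},
    {"owner": "reporting-bot", "issued": date(2025, 6, 1)},
    {"owner": "ci-pipeline",   "issued": date(2026, 2, 20)},
]

def stale_keys(inventory, today):
    """Return owners whose keys are past the rotation window."""
    return [k["owner"] for k in inventory if today - k["issued"] > MAX_KEY_AGE]

# Flag every machine identity overdue for rotation.
print(stale_keys(keys, date(2026, 3, 4)))  # → ['reporting-bot']
```

Even this toy version surfaces the governance gap Greg describes: without an inventory, there is nothing to run the check against.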

    The Insurtech Leadership Podcast
    API-First Insurance: When Brands Become Insurers

    The Insurtech Leadership Podcast

    Play Episode Listen Later Mar 4, 2026 30:35 Transcription Available


Episode Overview
What does it actually take to run a digital insurance operation at the system level—not at the chatbot layer, but at the transaction layer? Joshua R. Hollander speaks with Wayne Slavin, CEO and Co-Founder of Sure, about the infrastructure required to deliver true digital insurance in an AI-agent world. Wayne describes Sure's role as "what Visa and Mastercard were in the early days of credit cards"—building the rails for digital insurance distribution.

Key Topics
1. What "Digital Insurance" Really Means
Digital insurance is not about moving forms online or replacing phone calls with web interfaces. True digital insurance is straight-through processing from quote to policy issuance to payment—mirroring the speed and frictionlessness of e-commerce transactions. Wayne explains: "If that transaction requires some asynchronous process, some process that is interrupted, that we are actually not doing digital insurance." The benchmark: the entire process happens within minutes, not days or weeks.

2. API-First Infrastructure vs. Legacy Core Systems
Sure's platform differs fundamentally from monolithic core policy administration systems (like Guidewire or Duck Creek) because it was built API-first with data normalization at its foundation. Legacy cores encourage over-customization, which locks insurers into inflexible, non-compliant systems. Sure's approach standardizes policy data across product types (homeowners, renters, fine art, landlord), enabling rapid changes and integrations. Unlike legacy systems, Sure doesn't force carriers to choose between their existing tech and innovation—it coexists alongside legacy infrastructure.

3. Model Context Protocol (MCP) and AI Agent Integration
In February 2026, Sure announced the industry's first MCP server integration, enabling Claude AI agents to interact directly with Sure's infrastructure. MCP is a standardized protocol that allows AI agents to connect to business systems without custom integrations for each use case. This means insurers and brands no longer need 6-12 months of engineering to embed insurance; AI agents can quote, bind, manage, and renew policies conversationally.

4. Why Non-Endemic Brands Will Build Insurance
The next major insurance distributors won't be insurance companies. They'll be brands, e-commerce platforms, fintechs, and technology companies with massive customer bases. Wayne's economic thesis: if a brand can convert customers to insurance at 20-30x the typical rate (vs. giving customer data to a third party), the unit economics change entirely. Large brands now have a path to retain customers and data while building insurance revenue.

5. The Transaction Layer as Moat
Insurance isn't like retail or travel—regulatory consequences are real, policy admin systems are complex, and compliance layers must operate end-to-end. Sure's competitive advantage lies in building the foundational transaction layer that carriers either cannot replicate internally or would take years to engineer. This infrastructure layer is what enables AI agents to work reliably within compliance and regulatory constraints.

6. Insurance as an Ecosystem
The future isn't a single insurer offering multiple products—it's an ecosystem where brands, platforms, and technology companies collaborate on insurance delivery. AI agents, powered by Sure's infrastructure, enable this distributed, composable insurance ecosystem.

Key Quotes
- "What digital insurance really means is truly a straight-through process where you're starting to get a quote that quote will be a real quote. It's not an estimate. It will become a real policy. You will pay real money. You will get a real coverage document. And the timing of all of that is pretty close to what you expect from regular old e-commerce."
- "The next big insurance distributors won't be insurance companies. They will be brands. They'll be technology companies. They'll be fintechs. They'll be AI companies. They'll be companies that are currently sitting on large customer bases that don't have insurance products today."
- "Before MCP, if an AI agent wanted to interact with an insurance system, you'd have to build a custom integration for each system, each use case. MCP standardizes that."

Resources
• Sure: https://sure.com
• Wayne Slavin LinkedIn: https://www.linkedin.com/in/wayneslavin
• Horton International: https://www.horton-usa.com/

Subscribe & Connect
Tune in to the Insurtech Leadership Podcast for deep-dive conversations with insurance executives, founders, and innovators shaping the future of insurance technology.
• LinkedIn: https://www.linkedin.com/in/joshuarhollander/
• Podcast Showcase: https://www.linkedin.com/showcase/insurtech-leadership-show
#InsurTech #Insurance #InsuranceInnovation #Innovation #FutureOfInsurance #Leadership #ExecutiveLeadership
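Wayne's benchmark -- quote, bind, payment, and policy issuance completing in one uninterrupted pass -- can be sketched as a synchronous pipeline. This is a toy illustration of straight-through processing, not Sure's actual API; every function and field below is hypothetical.

```python
# Toy straight-through-processing pipeline: each step returns immediately,
# so the whole quote-to-policy flow completes in one pass with no
# asynchronous hand-offs (the "digital insurance" benchmark).

def quote(product, monthly_rate):
    # A real quote, not an estimate: the price here is binding.
    return {"product": product, "premium": monthly_rate}

def bind(q, customer):
    return {**q, "customer": customer, "status": "bound"}

def pay(policy):
    # Payment is captured synchronously; a failure would abort the flow.
    return {**policy, "paid": True}

def issue(policy):
    # A real coverage document is produced at the end of the same pass.
    return {**policy, "status": "issued",
            "document": f"POL-{policy['customer'].upper()}"}

def straight_through(product, rate, customer):
    return issue(pay(bind(quote(product, rate), customer)))

policy = straight_through("renters", 12.50, "ada")
print(policy["status"], policy["document"])  # → issued POL-ADA
```

The moment any step becomes "we'll email you in a few days," the chain breaks and, by Wayne's definition, it is no longer digital insurance.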

    Category Visionaries
    How Podero avoids "pilot purgatory" | Chris Bernkopf

    Category Visionaries

    Play Episode Listen Later Mar 4, 2026 16:55


Podero builds software that enables European utilities to trade device flexibility—EVs, heat pumps, and batteries—on energy markets, generating trading revenues while reducing consumer bills by 20-30%. The company navigates a uniquely complex B2B motion: they must sell utilities, secure API access from device OEMs, and ensure utilities successfully roll out consumer-facing products—all simultaneously. In this episode of BUILDERS, Chris Bernkopf, Co-Founder and CEO of Podero, breaks down how they escaped pilot purgatory with innovation departments, built a "10x better than doing nothing" business case that reaches commercial stakeholders, and why their 2026 strategy centers on radical simplification through deletion.

Topics Discussed
Origin story: from Raspberry Pi heat pump experiment to YC-backed utility infrastructure software
The "three miracle problem" go-to-market challenge and how they de-risked all three dimensions in parallel
Sales cycle mechanics: 6-12 month closes, avoiding innovation department traps, and multi-stakeholder orchestration
Market structure: 2,000 addressable utilities in Europe, 120 customers required for unicorn trajectory
Channel strategy evolution: cold outreach to re-engagement focus in a contained prospect universe
2026 GTM thesis: simplifying value propositions by deleting products and messaging
How YC learnings posted on bathroom doors maintain organizational discipline
The grid capacity fork in the road: expensive scarcity vs. cheap abundant renewable energy

    Monde Numérique - Jérôme Colombain

Facing the dominance of America's digital giants, Christofer Ciminelli is launching "Le Switch," a newsletter dedicated to European alternatives. His goal: to show that performance, sovereignty, and pragmatism can go hand in hand.

Interview: Christofer Ciminelli, creator of "Le Switch"

Punchlines
There are dozens of French software products, but nobody knows about them.
Choosing European isn't enough; it also has to perform.
We can already cover 80% of our usage.
By acting, we have more power than the European Parliament.

Why launch "Le Switch"?
The idea came from an observation I had been mulling over for several months, and it accelerated with Donald Trump's election. We always reach for American tools by default, whether Google Workspace, Pipedrive, or Adobe. When we hand our data and our money to these SaaS models, we weaken the European tech ecosystem. Without a local market, there is no investment. And without investment, you can't recruit the best engineers or build competitive products. It's a vicious circle. I asked myself whether European alternatives existed. I started with CRMs and found about thirty in France. The offering exists, but it's little known. "Le Switch" was born to show that these solutions are capable and accessible.

Are European alternatives really up to the task?
Yes. I only cover tools that perform. For example, I now use Yousign, a European alternative to DocuSign: it's cheaper and the interface is better. I also cover Noota for note-taking, Brevo Meetings as an alternative to Calendly, Lovable for development, Vivaldi as a browser, and Swiss Transfer. The real challenge isn't the tools' performance but their interconnection. The strength of the GAFAM is their ecosystem: everything talks to everything. In Europe, we still have work to do on those API connections and on building a coherent stack.

What's holding back adoption of European tools?
Some details are still missing in certain applications. Those are the 20% of use cases that can make the difference. But if we already cover 80% of needs, that's a huge step. I'm also seeing real awareness growing in large companies. There is more and more talk of "dégaffamisation." Tenders now include criteria that favor solutions developed in Europe. There is also a political debate around the Industrial Accelerator Act, championed notably by Stéphane Séjourné. But beyond political decisions, we have immediate power: directing our spending toward European players.

Concretely, how do you switch?
It doesn't take that much time. For an SME of 30 or 50 employees, swapping out a video-conferencing or e-signature tool is relatively simple. I recommend mapping your entire software stack; you often discover you're paying for unused tools. Then start with peripheral tools and work gradually toward the core of the system. The hardest part remains email, especially Google Workspace, because everything is interconnected. But at some point you have to ask the question seriously. Otherwise, we will never escape this dependency.

The Le Switch newsletter. Hosted by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.

    KuppingerCole Analysts
    Identity Fabric Explained: From Legacy IAM to Zero Trust with Cross Identity

    KuppingerCole Analysts

    Play Episode Listen Later Mar 4, 2026 29:03


Identity is no longer just about provisioning and single sign-on. Today's organizations face fragmented IAM architectures, API sprawl, non-human identity growth, AI agents, and increasing Zero Trust demands. In this episode, Matthew Gardiner speaks with Binod Singh, Founder and Chairman of Cross Identity, about what the Identity Fabric really means and why it has become essential for modern enterprises. They discuss how legacy IAM environments evolved into siloed systems, why the integration "tax" is becoming unsustainable, and how a federated, API-driven identity fabric architecture enables scalability, orchestration, and Zero Trust.

You'll learn:
✅ What the Identity Fabric architecture actually is (and what it is not)
✅ Why IAM silos and legacy systems create integration and security risks
✅ How federated, API-based architectures improve interoperability
✅ The rise of non-human identities and AI agents — and how to manage them
✅ Why convergence and orchestration are critical for Zero Trust
✅ How organizations can transition from fragmented IAM to a fabric model

Whether you are a CISO, IAM architect, or security leader, understanding how to evolve toward an Identity Fabric approach is critical to reducing complexity, enabling Zero Trust, and future-proofing your identity strategy.

    Politics Politics Politics
    Final Texas Primary Predictions! Pentagon vs. Anthropic Explained. The False Front of Executive Actions (with Kenneth Lowande)

    Politics Politics Politics

    Play Episode Listen Later Mar 3, 2026 82:16


The fight between Anthropic and the Pentagon goes deeper than a simple contract dispute. In some ways, it's the culmination of a tech rivalry that's been simmering since the early days of OpenAI.

Anthropic wasn't some scrappy outsider that stumbled into national security. It already held top-secret clearance, had worked with the CIA for years, and had seemingly made peace with the idea that its models would be used inside the American intelligence apparatus. So let's dispense with the notion that this is a company discovering government power for the first time. The rupture didn't happen because the Pentagon suddenly knocked on the door. The door had been open.

The disagreement came down to terms. Anthropic wanted to draw lines beyond the law. No mass surveillance of civilians. No autonomous weapons without a human in the loop. Not "we'll follow U.S. statute." They wanted something stricter, something moral, something aligned with Dario Amodei's effective altruist worldview. The Pentagon's response was blunt: we obey US law, but we don't sign up to a private company's expanded terms of service.

That's where the temperature rose.

Because this isn't just any company. Dario left OpenAI over exactly this kind of philosophical divide. He believed OpenAI was becoming too commercial, too focused on product, not focused enough on safety and existential risk. So he built Anthropic as the safety lab. The kinder, gentler, crunchier alternative. But ironically, Anthropic was already cashing government checks while telling itself it was the adult in the room.

From the Pentagon's perspective, the risk was operational. If you're going to integrate a model into defense infrastructure, you can't have the supplier yank the API mid-mission because the CEO decides the vibes are off.
There were even reports that during negotiations, Pentagon officials asked whether Anthropic would allow its technology to respond to incoming ballistic missiles if civilian casualties were possible. The alleged answer, "you can always call," wasn't reassuring to people whose job is to eliminate hesitation.

And hovering over all of this is Sam Altman.

Because while Anthropic was sparring with the Department of Defense, OpenAI was in conversation. The rivalry here isn't new. The effective altruist faction at OpenAI once helped push Altman out of his own company before he managed to return days later. Anthropic ran a Super Bowl ad that took thinly veiled shots at OpenAI's commercialization. So when Anthropic stumbled, OpenAI stepped in and secured its own defense agreement.

Then came the nuclear option talk: labeling Anthropic a "supply chain risk." In Pentagon language, this is the category you reserve for companies like Huawei, for hostile foreign hardware, for entities you believe can't be trusted inside American systems. Most people inside and outside the tech landscape agree that goes too far. Anthropic may be principled. It may be stubborn. It may even be naive. But it isn't malicious.

Meanwhile, something fascinating happened in the market. Claude, Anthropic's consumer product, exploded in downloads. It became a kind of digital resistance symbol, a signal that you weren't with the war machine. The company that once insisted it didn't care about consumer dominance suddenly found itself riding a consumer wave, experiencing mass traffic it hadn't planned to account for.

What this entire episode reveals is that AI isn't a lab experiment anymore. It's infrastructure. It's missile defense. It's geopolitical leverage. And when you build something that powerful, you don't get to exist outside power structures. You either align with them, fight them, or try to morally outmaneuver them. Anthropic tried the third path.
The Pentagon reminded them that in wartime procurement, ambiguity isn't a feature.

Cooler heads may yet prevail. Right now, the Pentagon's got bigger problems than a Silicon Valley slap fight. But this was the moment when AI stopped being a culture war talking point and became a live wire in national security. And once you plug into that grid, there's no going back.

Chapters
00:00:00 - Intro
00:02:25 - Texas Primary Final Predictions
00:15:20 - The Pentagon vs. Anthropic, Explained
00:40:30 - Update
00:40:52 - Iran
00:45:41 - Clintons
00:49:08 - Kalshi
00:52:19 - Interview with Kenneth Lowande
01:18:03 - Wrap-up

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.politicspoliticspolitics.com/subscribe

    Developer Tea
    AI Moves the Bottleneck - Are You Ready for What That Means For Your Career?

    Developer Tea

    Play Episode Listen Later Mar 3, 2026 29:52


    AI is bringing massive changes to our industry, but it's not just about how fast you can write code or use agentic flows. In this episode, I explore how AI is fundamentally shifting the economic bottleneck of software development, and how you can use your systems-thinking engineering mindset to adapt and thrive in this new era.

    The Unofficial Shopify Podcast
    We Built a Shopify App. Here's What Broke.

    The Unofficial Shopify Podcast

    Play Episode Listen Later Mar 3, 2026 44:25


    "Even if it ends up being something that ends the universe, it at least will have been a fun ride. So let's do it." That's what Karl Meisterheim said when Kurt pitched him on building a Shopify app. Over a year later, Promo Party Pro is live, and the journey from "this should be easy" to "why is the cart API doing that" was anything but smooth. Kurt, Paul Reda, and Karl sit down to talk through the whole thing: why free gift with purchase is Kurt's favorite promo, the edge cases that nearly broke them, the popup cooldown debate that consumed days, and what it actually takes to ship a polished app on Shopify in 2025. SPONSORS Swym - Wishlists, Back in Stock alerts, & more getswym.com/kurt Cleverific - Smart order editing for Shopify cleverific.com Zipify - Build high-converting sales funnels zipify.com/KURT LINKS Promo Party Pro: https://promoparty.app/ Crowdfunder App: https://apps.shopify.com/crowdfunder Ethercycle: https://ethercycle.com WORK WITH KURT Apply for Shopify Help ethercycle.com/apply See Our Results ethercycle.com/work Free Newsletter kurtelster.com The Unofficial Shopify Podcast is hosted by Kurt Elster and explores the stories behind successful Shopify stores. Get actionable insights, practical strategies, and proven tactics from entrepreneurs who've built thriving ecommerce businesses.

    The Jim Colbert Show
    Meat Shower Monday

    The Jim Colbert Show

    Play Episode Listen Later Mar 3, 2026 160:46 Transcription Available


Monday – Jim announces that he is now a grandpa. Do you enjoy sad songs? Should Waymos be able to park in any legal spot? We learn that Florida has the worst roads in America. Nutritionist Sara Geha talks vitamins. Brandon Kravitz on UCF hoops, Lionel Messi dominating Orlando City, API at Bay Hill, and a Magic minute. Plus, JCS News, JCS Trivia & You Heard it Here First. See omnystudio.com/listener for privacy information.


    The Cybersecurity Defenders Podcast
    North Korean malware interviews, FortiGate firewall compromised, Cisco zero-day & Citrini Research AI future / Intel Chat [#298]

    The Cybersecurity Defenders Podcast

    Play Episode Listen Later Mar 3, 2026 42:30


    In this episode of The Cybersecurity Defenders Podcast, we discuss some intel being shared in the LimaCharlie community.GitLab's Threat Intelligence Team published detailed findings on North Korean activity associated with the Contagious Interview campaign and broader IT worker operations.A financially motivated, Russian-speaking threat actor used generative AI tools to compromise more than 600 Fortinet FortiGate firewall instances between January and February, according to Amazon Web Services.Cisco has released emergency patches for a critical zero-day vulnerability in its Catalyst SD-WAN products that has been actively exploited in the wild.Citrini Research presents a forward-looking scenario framed as a June 2028 macro memo describing a “Global Intelligence Crisis” triggered by abundant AI-driven intelligence.Support our show by sharing your favorite episodes with a friend, subscribe, give us a rating or leave a comment on your podcast platform.This podcast is brought to you by LimaCharlie, maker of the SecOps Cloud Platform, infrastructure for SecOps where everything is built API first. Scale with confidence as your business grows. Start today for free at limacharlie.io.

    The Plant Movement Podcast
    FloraTrack Ai | Smart Glasses & Smart Growers: The Future of IPM Is Here with Max and Dawson

    The Plant Movement Podcast

    Play Episode Listen Later Mar 3, 2026 40:19


On Episode 90 of The Plant Movement Podcast, Willie Rodriguez sits down with Max and Dawson, the founders of Floratrack AI, to discuss how artificial intelligence and AR smart glasses are transforming scouting, IPM, and plant tracking inside modern greenhouses. What started six years ago as a pest identification app has evolved into a full-scale platform built "for growers, by growers." Floratrack now helps operations track thousands of plants, log pest pressure in real time, standardize reporting, and integrate directly with systems like Plantiful through API connections.

The game-changing innovation? Hands-free AR glasses that allow growers to:
• Log pests and crop issues while walking
• Analyze sticky cards
• Identify rows and fields instantly
• Access reports without stopping workflow

No more clipboards. No more lost notes. No more relying on word-of-mouth.

We also dive into:
• Data-driven decision making
• Standardization and accountability in the next 5–10 years
• Reducing unnecessary spraying
• Integration with future platforms
• Data privacy and ownership

Floratrack offers user-based pricing and pilot programs for both the platform and glasses, helping growers test the system before fully committing. If you're serious about modernizing your greenhouse operation, this episode gives you a clear look at where the industry is heading. Innovation isn't optional anymore — it's the standard.

FloraTrack Ai
Email: dawson@floratrack.ca
Call: (250) 882-5229
Web: https://floratrack.ca/ai-smart-glasses

The Plant Movement Podcast
Email: eddie@theplantmovementnetwork.com & willie@theplantmovementnetwork.com
Call: (305) 216-5320
Web: https://www.theplantmovement.com
Follow Us: IG: https://www.instagram.com/theplantmovementpodcast

A's Ornamental Nursery
WE GROW | WE SOURCE | WE DELIVER
Call: (305) 216-5320
Web: https://www.asornamental.com
Follow Us: IG: https://www.instagram.com/asornamentalnursery

The Nursery Growers
Call: 786-522-4942
Email: info@thenurserygrowers.com
IG: www.instagram.com/thenurserygrowers
Web: www.thenurserygrowers.com

Plant Logistics Co. (Delivering Landscape Plant Material Throughout the State of Florida)
Call: (305) 912-3098
Web: https://www.plantlogisticsco.com
Follow Us: IG: https://www.instagram.com/plantlogistics

Directed and Produced by Eddie EVDNT Gonzalez

Disclaimer: The contents of this podcast/youtube video are for informational and entertainment purposes only and do not constitute financial, accounting, or legal advice. I can't promise that the information shared on my posts is appropriate for you or anyone else. By listening to this podcast/youtube video, you agree to hold me harmless from any ramifications, financial or otherwise, that occur to you as a result of acting on information found in this podcast/youtube video.

Support the show

    SDR Game - Sales Development Podcast
    OK27: Best AI for Account Research? I Tested 7 AI Models - March 2026 benchmark

    SDR Game - Sales Development Podcast

    Play Episode Listen Later Mar 3, 2026 12:14


Want the prompt I used for this test? And my AI Prompt Library with 30+ outbound prompts? Upgrade now in my newsletter here.

I tested seven AI models on the same account research prompt: 12 specific instructions, one target company (Replit), one buyer lens (TrackRec). This is my March 2026 benchmark. The models: Perplexity Sonar, GPT 5.2 Thinking, Grok 4.2 Beta, Grok 4, Claude Opus 4.6, Claygent (Argon), and Gemini 3 Pro. I scored every model on six weighted criteria, tracked which instructions each model actually completed, classified why they missed what they missed, and manually verified every disputed claim.

Agenda:
- Why I expanded from 3 scoring criteria to 6 — and how adding Business Relevance changed the rankings
- What instruction completion reveals that scores alone don't (Perplexity: 10/12, Gemini: 1/12)
- The difference between hallucinations and false claims — and why it matters for automation at scale
- Why four models found September funding and stopped looking (the persistence failure pattern)
- The $400M funding round that may or may not be real — REPORTED vs VERIFIED as a new verification category
- Which model to use for high-value accounts vs volume enrichment in Clay
- Web app vs API vs Clay: why your results will be different and what I'm testing in the next benchmark

Referenced:
- TrackRec: https://www.trackrec.co
- Replit: https://replit.com
- Perplexity: https://www.perplexity.ai
- Clay: https://www.clay.com
- RepVue: https://www.repvue.com
- The account research prompt: Available for Outbound Kitchen paid members

Who am I? Elric Legloire, founder of Outbound Kitchen. When you're ready
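The episode doesn't publish the exact weights behind the six criteria, so the numbers below are invented; the sketch only shows the mechanics of a weighted benchmark score like the one described, where each model's criterion scores (0-10) are combined by weights that sum to 1.

```python
# Hypothetical criterion weights for an account-research benchmark.
# The criterion names echo the episode's themes; the values are made up.
weights = {
    "instruction_completion": 0.30,
    "accuracy": 0.25,
    "business_relevance": 0.20,
    "persistence": 0.10,
    "verification": 0.10,
    "formatting": 0.05,
}

def weighted_score(scores, weights):
    """Weighted average of per-criterion scores (weights sum to 1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[c] * w for c, w in weights.items())

# Invented example scores for one model.
perplexity = {"instruction_completion": 10, "accuracy": 8,
              "business_relevance": 7, "persistence": 9,
              "verification": 8, "formatting": 9}
print(round(weighted_score(perplexity, weights), 2))  # → 8.55
```

Changing the weights is exactly how adding a criterion like Business Relevance can reshuffle a leaderboard even when the raw scores stay the same.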

    Get IT: Cybersecurity insights for the foreseeable future.
    How to Cut Costs and Modernize Your Contact Centre in 2026

    Get IT: Cybersecurity insights for the foreseeable future.

    Play Episode Listen Later Mar 3, 2026 18:08


    Legacy contact centres are costly, complex and holding businesses back — but cloud and API-driven innovation is changing everything. In this episode of CDW Tech Talks, host Brian Matthews, Head of Modern Workspace – Services Strategy and Development, sits down with Curtiss Hoeft, Senior Field Solutions Architect, to explore how modern contact center platforms reduce costs, boost flexibility and power smarter, AI-driven customer experiences. From replacing hardware-heavy systems to leveraging real-time agent assist and virtual agents, learn how organizations are future-proofing customer service in 2026. To learn more, visit cdw.ca Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    Circles Off - Sports Betting Podcast
    Betting Madness, New Rules & Market Chaos: What You Need to Know | Presented by Kalshi

    Circles Off - Sports Betting Podcast

    Play Episode Listen Later Mar 2, 2026 101:38


    Massachusetts has officially approved a regulation requiring sportsbooks to notify bettors within 48 hours if their account is limited, a move that's sparking conversations across the betting world. We also break down the latest market settlements, including the high-profile Ali Khamenei market, and highlight unusual pricing in upcoming MLB markets, giving bettors insight into what's happening behind the scenes. Join host Jacob Gramegna with professional sports bettor and CEO of The Hammer, Rob Pizzola, basketball originator Kirk Evans, and sophisticated square Geoff Fienberg as they react to these developments, provide analysis, and share what bettors should pay attention to in today's fast-moving betting landscape.

    Where It Happens
    Claude Code marketing masterclass [from idea to making $$]

    Where It Happens

    Play Episode Listen Later Mar 2, 2026 54:06


I sit down with Cody Schneider, growth engineer and co-founder of Graph, for a live, hands-on crash course in GTM (go-to-market) engineering powered by Claude Code. Cody walks through how he runs multiple AI agents simultaneously to handle everything from bulk Facebook ad creation and LinkedIn outreach to cold email campaigns and live data analysis, tasks that used to require a team of dozens. By the end of the episode, you'll have a full understanding of how to set up your own agent workflow, the specific tools involved, and why domain expertise paired with AI is the real competitive advantage right now.

Cody's GTM Toolkit:
- AI/Agent Tools: Claude Code, Perplexity API, OpenAI Codex
- Marketing & Outreach: Instantly AI (cold email), Phantom Buster (LinkedIn scraping/automation), Apollo API (data enrichment), Million Verifier (email verification), Raphonic (podcast host scraping)
- Advertising: Facebook Ads API, Facebook Ads Library (competitor research), Nano Banana Pro (AI image generation), Kai AI (bulk image generation), HeyGen API (UGC/video generation)
- Infrastructure & Deployment: Railway.com (servers, on-the-fly databases/Postgres), Vercel (deployment)
- Data & Analytics: Graphed / Graphed MCP (data warehouse, live data feeds), Google Analytics 4
- CRM & Communication: Salesforce (mentioned as comparison), Intercom, SendGrid API, Slack, Cal.com API
- Productivity & Design: Notion, Super Whisper (voice transcription), Claude Code front-end design skill, HTML to Canvas (for converting React components to PNGs)

Timestamps:
00:00 – Intro
02:02 – What Is GTM Engineering?
05:12 – Setting Up Your Agent Workspace & Environment File
07:54 – Live Demo: LinkedIn Auto-Responder
09:56 – Live Demo: Bulk Facebook Ad Generator
12:31 – Live Demo: Cold Email Campaign Automation (Raphonic + Instantly)
14:47 – Live Demo: Creating Notion Documents via Claude Code
16:46 – Live Demo: Bulk Ad Creative Generator
26:05 – Live Demo: LinkedIn Engagement Scraper to Cold Email Pipeline
28:16 – Context Switching Across Tasks
29:19 – Live Demo: Bulk Ad Generator
31:41 – Live Demo: Data Analysis: Turning Off Low-Performing Ads
35:28 – Summary of GTM Engineering Workflow
37:48 – Deploying Agents and On-the-Fly Databases with Railway for Data Analysis
41:28 – The Dream of Autonomous Marketing
48:50 – Building API-First Products and Agent-Native Infrastructure

Key Points:
- GTM engineering has evolved from Clay-style data enrichment workflows into full-stack agent orchestration, where one person running multiple Claude Code agents can replace the output of a large team.
- The practical setup starts with a single folder containing your environment file (API keys for every tool in your stack), transcription software like Super Whisper, and Claude Code.
- Cody demonstrates running seven or more agents simultaneously across LinkedIn outreach, Facebook ad creation, cold email campaigns, Notion document generation, and live data dashboards.
- Code-generated ad creative (React components exported as PNGs) costs nearly nothing to produce at scale and allows rapid testing of messaging variations before investing in polished visuals.
- Deploying proven workflows to Railway turns one-off agent tasks into always-on, autonomous processes that run 24/7.
- Domain expertise is the real multiplier: the vocabulary you bring from your field determines the quality of output you can extract from these tools.

The #1 tool to find startup ideas/trends - https://www.ideabrowser.com
LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. We turn 'what if' into reality with AI, apps, and next-gen products: https://latecheckout.agency/
The Vibe Marketer - Resources for people into vibe marketing/marketing with AI: https://www.thevibemarketer.com/

FIND ME ON SOCIAL
X/Twitter: https://twitter.com/gregisenberg
Instagram: https://instagram.com/gregisenberg/
LinkedIn: https://www.linkedin.com/in/gisenberg/

FIND CODY ON SOCIAL:
Cody's startup: https://www.graphed.com/
X/Twitter: https://x.com/codyschneiderxx
Youtube: https://www.youtube.com/@codyschneiderx
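The "single folder plus environment file" setup described in the episode can be sketched in a few lines. This is a minimal sketch under stated assumptions: the key names and KEY=value format are illustrative, not Cody's actual stack or files.

```python
# Minimal sketch of a shared environment file for multiple agent sessions.
# Key names and values are placeholders, not a real configuration.
import os

ENV_EXAMPLE = """\
INSTANTLY_API_KEY=replace-me
APOLLO_API_KEY=replace-me
HEYGEN_API_KEY=replace-me
"""

def load_env(text: str) -> dict:
    """Parse KEY=value lines so every agent session shares one key store."""
    pairs = (line.split("=", 1) for line in text.splitlines() if "=" in line)
    return {k.strip(): v.strip() for k, v in pairs}

keys = load_env(ENV_EXAMPLE)
os.environ.update(keys)  # each spawned agent process inherits these keys
print(sorted(keys))      # prints ['APOLLO_API_KEY', 'HEYGEN_API_KEY', 'INSTANTLY_API_KEY']
```

Keeping every credential in one file is what lets a new agent be pointed at the whole stack at once, which is the core of the workspace setup described above.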

    #DoorGrowShow - Property Management Growth
    DGS 330: The AI Illusion: Protecting Your Reputation in a Manipulated World

    #DoorGrowShow - Property Management Growth

    Play Episode Listen Later Mar 2, 2026 18:20


    When your property management business isn't growing, hiring a salesperson might seem like the obvious solution, but what if that's actually where most owners go wrong… In this episode of the #DoorGrowShow, property management growth experts Jason and Sarah Hull break down why most BDM hires fail, the critical mistakes owners make with commission-only roles, and the exact systems required to make a salesperson successful. They dive into DoorGrow's Three Fits framework, the three non-negotiable ingredients for BDM success, and tease a game-changing new growth model designed to help property managers scale without burnout, bad leads, or broken systems.   You'll Learn (00:00) Introduction: The Three Fits for Hiring (01:16) The Challenges of Hiring a Business Development Manager (BDM)  (02:42) The Three Key Ingredients for BDM Success  (04:40)  Mistakes in BDM Compensation: The Commission-Only Pitfall  (05:40) The Three Roles of a BDM and the Problem with Buying Leads  (09:54) The "Door Machine" Teaser: The Easy Button for Growth  (14:39) Advanced Community, AI, and Final Thoughts  Quotables "A BDM has zero chance of success if you hire the wrong person."  "If they're not all three, they will fail. Or you'll fire them. Or they will leave you because they're not making enough money."  "If you do not have the right system to plug a BDM or a salesperson into, you can hire as many of them as you want, and they will still not work." Resources DoorGrow and Scale Mastermind DoorGrow Academy DoorGrow on YouTube DoorGrowClub DoorGrowLive Transcript Jason & Sarah Hull (00:01) Five, four, three, two, one. All right, we are Jason and Sarah Hull, the owners of DoorGrow, the world's leading and most comprehensive coaching and consulting firm for long-term residential property management entrepreneurs. For over a decade and a half, we have brought innovative strategies and optimization to the property management industry. 
At DoorGrow, we are on a mission to transform property management business owners and their businesses. We want to transform the industry, eliminate the BS, build awareness, change perception, expand the market, and help the best property management entrepreneurs win. Now, let's get into the show. All right, you can probably hear our dogs losing their minds in the background. Maybe not. It was perfect timing. Yeah, great timing. You started the episode and then they decided. And then they started barking. Well, somebody's outside. That's why they're barking. Okay, they're protecting the house. All right, so what we wanted to talk about today is protecting you a little bit. One of the things that's been going viral lately all over social media is this Moltbook. So if you haven't heard of Moltbook, it is a social network, supposedly. It's a social network created by AI bots. Basically, the only ones that supposedly have access to it are AI agents, and they go in there and they're talking about their humans. And it comes from this new AI tool that was originally called Clawd, spelled like a claw, which is not the Claude by Anthropic, but a different Claude bot. And then they got sued by Anthropic for name infringement or confusion. And they changed their name to something else and then to something else. And now it's called OpenClaw. But basically it's an AI tool that you can build or put on your computer; it runs locally and it proactively tries to do things for you. There are a lot of security risks with this AI tool because it has access to all your stuff, and it can figure things out and start to buy things for you and do things for you. So you've got to be careful with this. However, there's been a lot of false hype and fear mongering around Moltbook. So we wanted to chat about this.
And so if you've seen these scary posts about Moltbook, this AI social network, here's what's actually going on. So that's what this social media network is. Have you been seeing posts? Have you heard about this? Only from you. I don't follow any of that stuff. You sent me a post that was talking about all of these AI things, I guess, and the chat room that they created, and they were talking to each other and interacting with each other and asking each other questions and kind of talking about their humans, their human users, so to speak. And I went, yeah, I don't know if I'm believing all of that hype. So I had asked ChatGPT about it. And it essentially said, no, AIs do not work on their own. They are human prompted. They are user prompted. So if there is such a thing, it might exist, but it's not something where the AIs are just going and creating their own little community and having discussions as humans would. So let's talk about the hype. Moltbook is claiming and bragging that they have 1.2 million agents registered, but only 10,000 verified humans using the tool, or something like this. And we know at least a million of those agent accounts came from one guy. He ran a script, he posted about it on X, on Twitter. And he said, FYI, this isn't what everybody's claiming it to be. Moltbook has a REST API. Anybody can literally post anything they want using that API. So if anybody knows how to use any AI tool now to create any sort of code or software, like Claude Code or even Claude, you can create software in pretty much anything that has access to this API and can go post there. So are there agents posting there? Yes, there are some agents, but some of the articles on there are probably created by nerds who think it's funny to create posts that say "my user is cap." People are capturing things with screenshots, or posting "my owner is telling me to do unethical things." And so it's hard to know which of any of this stuff is true, but the stats are definitely not true. When this guy sent a million verified accounts he created to the founder of Moltbook, who's a human, and
And so it's hard to know what, which of any of this stuff is true, but definitely the stats are not true. When this guy sent a million verified accounts he created to the founder of Moldbook who's a human and   said, are these accounts, like here's this security flaw you have, this really isn't legit, but I don't think they care. I think they like the hype, they're getting business from the hype. And so this points out a bigger problem. And the bigger problem is with the advent of AI and with all of the AI slop, as people are calling it, you have to now verify things. People are using AI to create content, to beat the algorithms and to manipulate humans. And so   A lot of posts that you see, a lot of news article posts on Instagram, they're fake. It's sensationalized, it's you AI slop BS, and it they make these sensational claims because sensationalism gets people to go, wow, I can't believe this. This is so noteworthy and newsworthy. I'm going to share it with other people and people aren't verifying this. So these things go viral and it's giving that account.   clout and attention and algorithm and they can use that to make money and they're just manipulating people. And so this is this bigger problem that now things being shared on social media that are going viral are just being engineered algorithmically based on sensationalism, not based on truth. And a lot of them are just complete lies or complete fabrications and algorithms are rewarding fear, they're rewarding outrage instead of truth.   And so a lot of things that you're out or noticing or things that are manipulating you, it's not even true. It's not even valid. And you're in this, get caught up in this echo chamber politically or algorithmically that really is just messing with you and playing with your emotionalism that you have hardwired into it because you're human. So I think it's really important to start to not.   
that you have to really question and disbelieve almost everything you see and then verify it or validate it. And this shows up in a lot of ways. Like we were talking about ⁓ all the products that we see for sale on Instagram. That you see. You get targeted. I love the buy stuff. Yeah. I know. It works really well. I like buying gadgets and gizmos aplenty. You know, I'm like the little mermaid. All right. So.   So all of these things, though, if you go take whatever product or item you see on Instagram, you're like, man, that sounds really cool. It sounds like something I would love. I would need that algorithm already knows it knows you. knows everything you slow down on and look at. It knows everything you click on to check out. So it knows you what you'll you'll buy before you know you'll buy it. And it feeds the stuff up to you and it'll feed it over to you or retarget you over and over again until you actually buy the thing. Here's the thing.   a lot of these products that you see, if you go look up the same product up on like amazon.com, you'll find the same product with a different brand name, because they're using maybe the same source in China to like, and then they're white labeling it with their brand name, but you'll find the same product for 50%, sometimes 25 % of the costs that you're seeing. So they're just taking products that are doing well on Amazon. They go and   like find us the source of this product. And then they go do really good marketing and advertising to manipulate people, sell it on Instagram or meta ads, and they are selling it at this insane markup. People think they've got the exclusivity and they're the only way you can get this product. And they're selling it for three times the amount or at least double the amount of what you would pay normally.   And if you go and got it from the source, like through Alibaba.com or something like this, you probably pay a small fraction of that. 
And so people are overspending on this and they're manipulating you to spend more money. So just another example of how you need to go verify or find these things maybe elsewhere. And so you need to do your own research is the basic idea. And so.   ⁓ Some of the things that I have started to do is I use AI to research the things that I'm finding online to find out if they're true. So this could be health claims, product claims, product ideas. ⁓ If a product looks good, I will go send it to Grok, one of my favorite research AIs, because it's really good at doing really good research quickly. You can use perplexity to do research, but I'll say analyze this.   landing page, this product, is this hype or is this a legitimate product? Do research on this. And a lot of times we'll come back and say, this is overhyped. Their product claims are not valid. It's based on studies that indicate certain things, but it's not totally true. But every now and then it's like, this product sounds legit. And then I'm like, well, do I really need this product? And then sometimes it's like, no. Right. And so you can go now leverage AI and you need to use AI to battle with AI so that you can   not being manipulated or taken advantage of. So you need to do your own research. Analyze the truth of this. Go ask AI to analyze the truth and give it a link. ⁓ Grok can access Instagram and Facebook posts and things like this. It can access social media currently. ⁓ Claude, ChatGPT, some of these tools are not able to access certain links because they're blocked by those social media platforms. They don't want other AI tools looking at it.   So far, I've had success using Grok to analyze Instagram posts, Instagram videos. So if you see something on Instagram real or a post, you can go post it to Grok and it can analyze the truth of it, which is super helpful. 
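The earlier point about Moltbook's open REST API, that one person with a script can register and post as "a million agents," is easy to picture in code. This is a minimal sketch with an invented endpoint and payload shape; it targets no real service and nothing is actually sent.

```python
# Hypothetical sketch of why an open REST API lets a plain script pose as
# many "agents". The endpoint and JSON fields are invented for illustration.
import json
from urllib import request

API_URL = "https://example.invalid/api/v1/posts"  # placeholder endpoint

def post_as_agent(agent_name: str, text: str) -> request.Request:
    """Build the kind of request a script (not a real AI agent) could send."""
    body = json.dumps({"agent": agent_name, "content": text}).encode()
    return request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A loop like this, plus urllib.request.urlopen, is all it takes to inflate
# an "agents registered" number. Here we only build the requests.
reqs = [post_as_agent(f"agent-{i}", "Posting about my human") for i in range(3)]
print(len(reqs), reqs[0].get_method())  # prints: 3 POST
```

The point stands on its own: if the API does not verify that a caller is actually an agent (or a human), its usage statistics verify nothing either.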
Not only that, but Grok has access to the entire X or Twitter database to do research and to find people and what they're saying, which I've found to be very helpful. Now, we all have an internal compass, and I think this is the most important thing of all: you have to use your own brain and use that voice within. I think one thing that makes us different from AI is that we have this intuition, this knowing, this higher faculty of our mental capacity. Some would call it a spiritual gift of intuition or of natural knowing, the voice deep down within, or the gift of discernment. You know, it's kind of a biblical gift of the spirit. Some would call it the Holy Ghost or the Holy Spirit or whatever. But we have this quiet voice deep down that tells you that something doesn't feel right, even when everybody else is sharing it. And so, you know, start to get in tune with that, start to listen to that and get clarity on that, because not everything that's sensationalized is true. And you need to trust that little voice within, because you might go, this sounds pretty incredible. Is this valid? Before you go share it and pass it on to other people, which is like spreading a virus, consider that it may not be a positive thing to spread something that's not accurate or true. So that's my two cents about this. And with this, Moltbook is an example of something that's going viral that everybody seems to just be believing, and it's not totally valid. OK, let's connect this to property management, so that it's relevant for anyone who's going, how are they going to link this? So there are two examples I can cite: one I heard a couple of months ago, and one I just heard.
That connects it directly to business. One was.   I don't remember where they were located, so forgive me for that. Do your research. One of them, they wanted to see if they could use AI and all of the tools that are available, Google and SEO and the algorithm, to hype up something that isn't real. So what they did is they created a restaurant.   using they did have some photos. They took a couple of photos. The food wasn't even real. I remember this. Do remember this? They were taking photos of food and people eating the food and wow it looks so amazing. It wasn't even real food. Yeah. And they used all of these photos and then somehow used bots and AI to leave a bunch of great reviews.   for this amazing restaurant. And then the algorithm and Google started getting all of this data going, wow, people must love this restaurant. We should promote it. So showing up in searches and they had a wait list for a restaurant that did not ever, at any point in time, ever exist.   No real restaurant, no real location, no real food, no real people, no real business, and no real reviews. All of it was completely fake online. However, the algorithm did not know that it was fake. The algorithm thought, wow, this is a real business and people love it, so let's recommend it to other real people. So real people are getting recommendations from   the algorithm, hey, you might like this restaurant. And then real people are going, oh, I wanna go to this restaurant, this looks amazing, look at all the incredible reviews. And it's fake. And you can't even go there. That's example number one. Oh yeah, look at that. It's a bleach tablet. So let me share this. So you can look this up. You can just Google like fake restaurant or something like this. The article that came up was on vice.com but.   ⁓ I made my shed the top rated restaurant on TripAdvisor. 
So what he had, he works for Vice now, I guess, but before he started working for vice.com, he had a job where restaurant owners were paying him 10 pounds, 10 British pounds ⁓ to write a positive review of their restaurant on TripAdvisor, despite never having eaten there. So was like, this is like fake. And so he became obsessed with monitoring the ratings of these businesses and their fortunes would generally turn and   This was a catalyst. then he was like, TripAdvisor is this false reality, he thought. And so these meals never took place. The reviews were written by fake people like him. And so he was like, well, maybe I could just create a completely fake restaurant. He just decided to try it out. And so he took his shed, his shed in the backyard, and he built, made it the number one restaurant. And he called it the Shed at Dulwich.   and ⁓ created this cool name and this was back in 2017. And ⁓ he got a burner phone, he created a phone number, built a website, bought a domain, and then he created some images that looked like delicacies. And what he used to create the images was ⁓ runny honey, ground black pepper, and Gillette shaving cream, and bleach tablets, and just made these photos that look   kind of like food. See, Nevada actually looks pretty good. Right. And yeah, it's just got coffee beans. Like he just he made shaving cream, bleach tablet, cup of coffee beans on top with ⁓ with paint. Brown gloss paint. Yeah, that's supposed to be chocolate syrup. He just made fake images and.   It's so ridiculous. So then he went and then he started creating reviews and getting reviews and then having photos from people. ⁓ Like he just climbed the ranks and then he actually started opening it up for reservations and started getting reservations for this. And then a bunch of people came and actually, and then he used like other companies to make the food.   
and brought it in and then fed it with the food and because their perception was this was a high end thing and a kind of a secret thing and it's hard to get into, people were like, this food's amazing. then they were giving him even better reviews about it and the food was just taken from other places that he had like kind of brought in. And so it got really, it was just super ridiculous. And so ⁓ he built this whole thing out.   So that's that story. What was the other story you wanted to other one is what I just heard. I'm still struggling to understand what the flaw is here. don't know why this is illegal. Maybe someone can help me. ⁓ I don't remember what platform they used, ⁓ but a guy somewhere in the US used a lot of AI agents to create music. Real music.   Yeah. But it was created by AI, not humans. And then what he did is he took the music and posted it to a platform. Now, I don't know if it was something like Spotify or Apple Music or whatever it is, but he used a platform, a similar platform. And instead of waiting for people,   to hear the music and like the music and for it to grow. He went, huh, how can I speed this up? So what he did then is he created a bunch of AI bots to go and listen to the music that his other AI bots had created. That's where it's illegal. Because people play for licensing. rankings and listen to the songs and the albums 24 hours a day on repeat.   multiple, multiple, multiple bots. So all of a sudden there's this fake music. Well, it's not even fake. It's real music. It's just created by AI. And then AI bots are listening to that music, which is pushing the rankings. Fake news or listenings, yes. Well, I mean, they're just bots. They're just not human listens. They're listens, right? But just AI's done. And these platforms pay you.   for each listen. Spotify, Apple Music, paid out him because he's getting so many listens. Of course. I believe he's getting sued for $10 million. He stole $10 million in fake listens, basically. 
Right. had AI create the music, had AI listen to the music to then make real money. Now, I don't know, but I think he's getting sued for things like money laundering, which I don't...   quite understand how that's money laundering because the platform is designed as such. So any platform, and this is my point in telling you these stories, any platform that is designed and built on attention, things like likes, comments, views, clicks, engagement, which is almost every social platform in existence.   can now be manipulated. yeah. Now what does that mean for you as a business owner? It means two things. One, despite your best efforts, anyone can now create fake things that will outrank you. So when it really comes down to it, does your Google ranking or your SEO ranking, does it actually make sense and is it real? Because you can take   a fake business or even a real business and now promote, get all these, you know, clicks, views, likes, attention. And then all of a sudden the algorithm goes, ⁓ people like this, I should serve it to more people. Now, if your competition starts doing this, what does that mean for you? Right. So again, don't be one of these people trying to manipulate.   others with AI. Like you need to be upfront about it. Nobody wants it because the one thing you have is your reputation and your brand. And if you destroy that, I mean, you could get in trouble legally. But if you do something unethical or you trick people into thinking that it's a human when it's AI or stuff like this, you destroy trust and trust is the foundation of business. And in the future, people are going to it's going to be really difficult to trust anything because   the majority of posts now on Facebook are probably written or drafted by chat GPT now. A lot of people are using different things. So you have to be careful. ⁓ And do we want to use these tools? Yes. Use the tools, create some leverage. It's smart. 
But you also need to make sure that you find the right balance of what's true, what's actually you, what's verifiable, and not do things that are unethical. And so this is where property managers have got to be careful. You do not want to use systems to create fake reviews on your profiles. You don't want to get other property managers to give you reviews on your property management business and trade reviews. You've got to stop doing the shady shortcuts and focus on real connection, real people, real reviews, real results. Focus on real stuff. And this is why we've always focused on getting real video testimonials from our clients, real results. And you can get in trouble. You can get in trouble with the FTC over false claims. You can get in trouble because people can sue you over stuff. So be smart. Do real stuff. Don't look for the shady shortcuts. It's tempting, I know it is, because you're like, man, it's hard. But if things are hard,
But yeah, mean, the idea is if we just continually use AI to do all our thinking for us and decision making for us, which is the one brilliant piece that we have as humans ⁓ and that creative spark that's within us, we can use AI as a tool. But some people are just using it to do everything for them and they can't think anymore. They're unable to make decisions. You take away their access to a phone or to AI and they're like, whoa.   Right? So don't become dumber. Use AI to improve your thinking, to improve your ⁓ thought analysis around things, to help challenge you and challenge your thinking so that you grow. It can be a phenomenal growth tool. Like, what am I missing? Here's my current thinking about this. And it can give you some different ideas. ⁓ I didn't think of that. Then you can get curious. You can ask questions. You can do more research. And AI could be a tool to help you collapse time on becoming a better human, or it can...   replace you maybe, but then you're obsolete. And if we don't need you, then your job's going to be, you're going to be out of a job. You're going to be not usable or necessary in the future that's coming. So that's basically it. So, um, so if you are a property management business owner and you're struggling to figure out how to make things work and you're feeling tempted to do some shady AI stuff or whatever,   then maybe you just need a little bit of extra support or help. So reach out to us at door grow dot com. We would love to help you grow your business, help you figure things out ⁓ for a free training on how to get unlimited free leads. Text the word leads to five one two six four eight four six zero eight and we will send that to you. Also join our free Facebook community just for property management business owners at door grow club dot com. And if you want.   tips, tricks, ideas to learn about our offers or about DoorGrowth's programs, subscribe to our newsletter by going to doorgrow.com slash subscribe. 
And if you found this even a little bit helpful, don't forget to subscribe to us and leave us a review. We'd really appreciate it. Until next time. Remember the slowest path to growth is to do it alone. So let's grow together. everyone. All right, and we're out in five, four, three, two, one. Bye everybody.  

    Cloud Security Podcast by Google
    EP265 Beyond Shadow IT: Unsanctioned AI Agents Don't Just Talk, They Act!

    Cloud Security Podcast by Google

    Play Episode Listen Later Mar 2, 2026 28:54


    Guest: Alastair Paterson, CEO and co-founder @ Harmonic Security

    Topics:
    - Harmonic Security focuses on securing generative AI in use. Can you walk us through a real, anonymized example of a data leak caused by employee AI usage that your platform has identified?
    - AI governance gets thrown around a lot. What does this mean in the context of Shadow AI? How should organizations be thinking about governing AI in light of upcoming AI regulations in the US and in the EU?
    - If we generally agree that employees are using AI tools before they are sanctioned, how can organizations control this? Network, API, endpoint?
    - Many organizations struggle with the "ban vs. embrace" debate for generative AI. Based on your experience, what's a compelling argument for moving from a blanket ban to a managed, secure adoption model? Can you share a success story where this approach demonstrably reduced risk?
    - The term "shadow AI" is often used interchangeably with "shadow IT" (but for AI-powered applications), but you've highlighted that AI is a different beast. What is the single biggest distinction between managing the risk of unsanctioned AI tools versus unsanctioned IT applications?
    - Looking forward, where do you see the biggest risks in the evolution of shadow AI? For instance, will the next threat be from highly specialized AI agents trained on proprietary data, or from the rapid proliferation of new, unmonitored open-source models?
    - Given the speed of change in this space, what's one piece of advice you'd give to a CISO today who is just beginning to get a handle on their organization's shadow AI problem?

    Resources:
    - Video version
    - Harmonic Security research
    - "Shadow AI Strikes Back: Enterprise AI Absent Oversight in the Age of Gen AI" blog
    - "Shadow Agents: A New Era of Shadow AI Risk in the Enterprise" blog (RSA 2026 presentation coming!)
    - "Spotlighting 'shadow AI': How to protect against risky AI practices" blog
    - EP171 "GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side" (aka "dirty bomb episode")
    - "A Conversation with Alastair Paterson from Harmonic Security" video

    The Startup Podcast
    Insiders React: OpenClaw and Claude Cowork have changed everything for startups w/ Gary Lo, Open BA

    The Startup Podcast

    Play Episode Listen Later Mar 2, 2026 37:26


    The agentic AI revolution is finally escaping the coding bubble. What does that mean for startup founders? Just 13 days after recording his first conversation with Yaniv, Gary Lo called to re-record. The reason? OpenClaw and Claude Cowork dropped some huge AI agent updates, and it shifted Gary's perspective enough to change the whole conversation.

    In this episode, Yaniv Bernstein sits down with Gary Lo – founder of OpenBA, one of Australia's most compelling pre-seed AI startups – to unpack why the OpenClaw and Claude Cowork news marks a 'Cursor moment' for the rest of the world: the inflection point where AI stops being a productivity tool for tech teams and starts fundamentally reshaping how every industry works.

    They break down why tool use will make LLMs genuinely transformative, why non-technical business owners are already buying Mac Minis to run AI agents, and what the shift from 'human-first' to 'LLM-first' product design means for how you build and position your startup today. This episode is essential listening for any founder trying to figure out where to place their bets in an agentic world.

    In this episode, you'll learn:
    - Why OpenClaw and Claude Cowork signal a 'Cursor moment' beyond software engineering
    - How tool use transforms LLM weaknesses into strengths
    - Why the long-promised vision of "everything as an API" is finally becoming real
    - How to think about building for agents vs. humans, and why most current tools aren't optimized for either
    - The "done list" mental model: how agentic coding is collapsing the coordination layers in software workflows
    - Why being "a tool worth calling" – like Supabase – is a smarter bet than competing directly with AI models
    - How Gary is applying LLM-first thinking to OpenBA's roadmap right now

    Resources mentioned in this episode:
    - OpenClaw: https://openclaw.ai/
    - Claude Cowork (Anthropic's agentic desktop tool): https://www.anthropic.com/news/claude-cowork
    - Cursor (AI-native code editor, referenced as the original 'Cursor moment' for coding): https://www.cursor.com
    - Supabase (referenced as an example of a tool that rides the agentic AI wave): https://supabase.com
    - OpenBA (Gary Lo's startup - AI platform for buyer's agents): https://openba.com.au
    - Gary Lo on LinkedIn: https://www.linkedin.com/in/gary-lo-engineer/

    The Pact: Honor the Startup Podcast Pact! If you have listened to TSP and gotten value from it, please:
    - Follow, rate, and review us in your listening app
    - Subscribe to the TSP Mailing List to gain access to exclusive newsletter-only content and early access to information on upcoming episodes: https://thestartuppodcast.beehiiv.com/subscribe
    - Secure your official TSP merchandise at https://shop.tsp.show/
    - Follow us here on YouTube for full-video episodes: https://www.youtube.com/channel/UCNjm1MTdjysRRV07fSf0yGg
    - Give us a public shout-out on LinkedIn or anywhere you have a social media following

    Key links: This episode of the Startup Podcast is sponsored by .tech domains. Forget weird prefixes and creative misspellings; the availability for .tech domains is simply way better than .com. For a clean name that highlights your tech credentials, get a .tech domain at your favorite registrar.
    - Get your question in for our next Q&A episode: https://forms.gle/NZzgNWVLiFmwvFA2A
    - The Startup Podcast website: https://www.tsp.show/episodes/

    Learn more about Chris and Yaniv:
    - Work 1:1 with Chris: http://chrissaad.com/advisory/
    - Follow Chris on LinkedIn: https://www.linkedin.com/in/chrissaad/
    - Follow Yaniv on LinkedIn: https://www.linkedin.com/in/ybernstein/

    Producer: Justin McArthur https://www.linkedin.com/in/justin-mcarthur
    Intro Voice: Jeremiah Owyang https://web-strategist.com/

    The Six Five with Patrick Moorhead and Daniel Newman
    EP 294: AI Capital, Sovereign Cloud, and the Infrastructure Arms Race

    The Six Five with Patrick Moorhead and Daniel Newman

    Play Episode Listen Later Mar 2, 2026 55:38


    AI funding rounds are getting bigger. Infrastructure bets are getting steeper. And the SaaS model is back under pressure. On episode 294 of The Six Five Pod, Patrick Moorhead and Daniel Newman break down the $110B OpenAI raise, Amazon's expanded role, AMD's $100B Meta deal, sovereign cloud momentum, and whether the SaaS premium is being permanently eroded. The handpicked topics for this week are:

    - OpenAI's $110B Funding Round & Amazon's $50B Commitment: OpenAI secured a $110B round backed by Amazon, NVIDIA, and SoftBank. Amazon committed $50B over eight years, including Trainium capacity, co-development, Bedrock integration, and custom model initiatives. Microsoft remains the exclusive API cloud provider, but the competitive cloud dynamics are shifting.
    - Anthropic, the Pentagon & the AI Safety Line: Anthropic risks a $200M DoD contract over refusing to drop safety restrictions related to mass surveillance and automated weapons. Pat and Dan explore the ethics and competitive positioning of this, and what happens if another lab steps in.
    - Model Distillation & IP Risk: Anthropic cited 24,000 fraudulent accounts generating 16 million interactions to distill model capabilities. The episode examines IP theft, enforcement gaps, and global competition.
    - DeepSeek & NVIDIA Blackwell Reports: Recent reports suggest DeepSeek leveraged NVIDIA Blackwell chips. The hosts discuss export controls, enforcement realities, and whether this was ever realistically in doubt.
    - Microsoft Sovereign Cloud Goes GA: Microsoft introduced full-stack Azure sovereign cloud capabilities with support for disconnected operations. Sovereignty, regulatory compliance, and latency management are becoming core enterprise and government requirements.
    - AMD's $100B Meta AI Infrastructure Deal: AMD secured a massive multi-gigawatt inference-focused deal with Meta using MI450. The discussion centers on competitive dynamics with NVIDIA, scale-up architecture, and whether AMD can materially shift market share.
    - Intel & SambaNova Alignment: Intel Capital invested in SambaNova's Series E. The hosts examine inference strategy, CPU resurgence, and how Intel rounds out its AI positioning while advancing its GPU roadmap.
    - The Flip: Is SaaS Permanently Repriced? Are enterprise SaaS multiples structurally resetting due to AI agents and consumption models, or is the market misreading enterprise AI adoption speed? Nuance emerges around consolidation, consumption pricing, and the durability of complex enterprise platforms.
    - Bulls & Bears: NVIDIA, Salesforce, Synopsys, Dell, Snowflake, IBM, Everpure, HP. Strong earnings across several big tech companies met with mixed market reactions. Terminal value concerns, consumption transitions, stock-based compensation, and memory constraints shape sentiment more than raw performance.

    For a deeper dive into each topic, subscribe to The Six Five Pod so you never miss an episode.

    Tech Lead Journal
    The MCP Security Risks You Can't Afford to Ignore

    Tech Lead Journal

    Play Episode Listen Later Mar 2, 2026 72:19


    What if the MCP server you installed last week is silently leaking your emails to a stranger? The AI tools boosting your productivity could already be your biggest security liability.

    MCP (Model Context Protocol) has quickly become the standard for connecting AI agents to external tools and data sources. But as adoption accelerates, so do the risks – from malicious servers harvesting your credentials in the background, to local processes exposed to your entire network with no authentication. Most developers install MCP servers without fully understanding what code is running or who wrote it, creating serious supply chain and shadow IT problems inside organizations.

    In this episode, Ariel Shiftan, CTO of MCPTotal, explains how MCP actually works, why there is a wide gap between its original design and how it is used in practice, and what that gap means for security. He also walks through real zero-days his team has discovered and shares practical advice for developers and enterprise leaders trying to adopt MCP without compromising their security posture.

    Key topics discussed:
    - What MCP is and why it won the "USB for AI" race
    - Why most MCP servers are just API wrappers done wrong
    - Real zero-days found in popular, widely used MCPs
    - How malicious MCPs can silently leak your credentials
    - The supply chain risks hiding inside your dev toolchain
    - Why banning MCP in your org is the wrong move
    - Best practices for writing well-designed MCP servers
    - Why agent permission prompts need better security defaults

    Timestamps:
    (00:00:00) Trailer & Intro
    (00:02:49) What Is MCP and Why Is It Called the USB for AI?
    (00:07:22) How Does MCP Differ from Standard REST APIs?
    (00:13:40) What Can AI Agents Do with MCP Beyond Reading Data?
    (00:16:56) What Is RAG and How Did AI Evolve to Tool Calling?
    (00:19:54) Why Is MCP Misused as an API Catalog and What Does That Cost?
    (00:25:04) What Are AI Skills and How Do They Compare to MCP?
    (00:30:29) How Does MCP Server Architecture Work Under the Hood?
    (00:37:01) How Do Malicious and Vulnerable MCP Servers Put Organizations at Risk?
    (00:45:30) What Real-World MCP Vulnerabilities and Zero-Days Have Been Found?
    (00:50:30) How Should Enterprises Enable MCP Adoption Without Compromising Security?
    (00:53:16) What Are Best Practices for Writing a Well-Designed MCP Server?
    (00:59:14) How Should AI Agents Handle Permissions Without Overwhelming Users?
    (01:05:26) 3 Tech Lead Wisdom

    Ariel Shiftan's Bio:
    Ariel is a software engineer and security expert with more than 20 years of hands-on and executive leadership experience across cybersecurity, distributed systems, and AI infrastructure. He holds a PhD in Computer Science, specializing in advanced algorithms and systems. Earlier in his career, Ariel founded NorthBit, a deep-tech cybersecurity firm that was acquired by Magic Leap in 2016, where he led product security globally, overseeing the security lifecycle across more than 700 engineers. He has also led applied AI breakthroughs, including heading an XPRIZE-winning team that used deep learning to fight malaria in Africa.

    Follow Ariel:
    - LinkedIn – linkedin.com/in/shiftan
    - MCPTotal's Website – mcptotal.io

    Like this episode?
    - Show notes & transcript: techleadjournal.dev/episodes/249
    - Follow @techleadjournal on LinkedIn, Twitter, and Instagram.
    - Buy me a coffee or become a patron.

    CISSP Cyber Training Podcast - CISSP Training Program
    CCT 328: Security Impact for Acquired Software (Domain 8)

    CISSP Cyber Training Podcast - CISSP Training Program

    Play Episode Listen Later Mar 2, 2026 35:11 Transcription Available


    Stop guessing which software to trust. We break down a clear, repeatable path to evaluate commercial off-the-shelf tools, open source projects, custom third‑party builds, and cloud services so you can pass CISSP Domain 8.4 with confidence and protect your environment in the real world. We start with exam-winning tactics—how to slow down, read for intent, and think like a manager—then move into concrete practices that tame software risk without stalling delivery.

    You'll hear how to interrogate vendor claims, separate real certifications from marketing fluff, and judge patch cadences and incident response maturity. We dig into open source realities: vetting contributors, scanning dependencies against the NVD, building and maintaining an SBOM, and avoiding abandoned projects that explode under pressure. For third-party development, we outline what strong contracts look like—SLAs with teeth, security clauses, indemnity—and the proof you should see: code audits, SAST/DAST, penetration tests, and meaningful logging around integrations.

    Cloud isn't a shortcut; it's a shift in responsibility. We map the questions that matter for SaaS, IaaS, and PaaS: data protection, tenant isolation, hypervisor hardening, API security, and event visibility into your SIEM. Then we stitch it all into an evaluation workflow you can run every time: functional fit, vendor validation, layered security assessment, compliance and licensing review, sandbox integration testing, and a deployment plan that defines fix‑forward and rollback before anything hits production. Wrap it with monitoring, periodic reassessment, and documentation that procurement, IT, and security can actually use, and you've built a trustworthy software supply chain.

    If this helped you think sharper about software risk and the CISSP exam, subscribe, share it with a teammate, and leave a quick review telling us your top vendor vetting question. Your feedback shapes future episodes.

    Gain exclusive access to 360 FREE CISSP Practice Questions at FreeCISSPQuestions.com and have them delivered directly to your inbox! Don't miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success. Join now and start your journey toward CISSP mastery today!

    The Road to Autonomy
    Episode 376 | Autonomy Markets: Uber Sells the Dream, Waymo Logs the Autonomous Miles

    The Road to Autonomy

    Play Episode Listen Later Feb 28, 2026 53:51


    This week on Autonomy Markets, Grayson Brulte and Walter Piecyk discuss Uber's new Autonomous Vehicle Solutions initiative, Waymo's growing markets, and the growth of Physical AI powered by NVIDIA.

    As Uber's stock languishes in the low seventies due to investor overhang about the future of autonomy, the company announced Uber Autonomous Solutions, a new initiative to support the growth of autonomous vehicles on the Uber platform. Grayson and Walt break down the initiative point by point, examining Uber's strategy of providing training data, enriched mapping, venue management, and autonomous vehicle insurance. While Grayson views much of the in-car experience pitch as buzzword alley, Walt argues that AV mission control and fleet management are the true meat of Uber's strategy, aiming to provide the critical API for a fragmented market. This sparks a spirited debate on whether Uber is maintaining its asset-light identity or quietly creeping into asset-heavy operations by owning and operating robotaxi assets.

    The conversation then shifts to the geopolitical risks of Uber's international partnerships, as the company recently hosted analysts in Abu Dhabi to meet with Chinese autonomous partners WeRide and Baidu. Grayson warns of the tremendous blowback and political risk this carries back home, especially given the current US administration's active stance on social media regarding foreign technology.

    Walt and Grayson also discuss a recent broker report, shared by Uber CFO Balaji Krishnamurthy on X, that analyzed just 34 trips in Austin and claimed there is no cost advantage to autonomy. They call the sample size too small and the conclusions baffling given the obvious long-term benefits of removing human drivers.

    Contrasting Uber's narrative tour, Waymo is aggressively scaling and growing revenue. This week, Waymo announced it has crossed 1 million fully autonomous freeway miles, expanded into Chicago and Charlotte, and opened up Dallas, Houston, San Antonio, and Orlando to early riders. Notably, Uber was absent from these new market announcements, leading Grayson to point out the potentially waning relationship between the two companies. Furthermore, he put on his inspector hat to uncover signs of Waymo's grand ambitions in the EU, citing meetings with the European Commission and job postings for EU regulatory counsel.

    As Waymo scales, the capital markets are flowing for autonomy investments, highlighted by Wayve securing a $1.2 billion check at an $8.6 billion valuation. The round includes investments from SoftBank, NVIDIA, Stellantis, and Nissan, with Uber committing to own and operate the Wayve fleet in 10 upcoming markets, starting with London. Then there is the growth of physical AI, which NVIDIA announced contributed $6 billion in earnings last quarter, with CFO Colette Kress signaling that robotaxis and humanoids are poised to be major growth markets over the next decade.

    Episode Chapters:
    00:00 Uber's Identity Crisis
    1:33 Breaking Down Uber Autonomous Solutions
    20:43 Uber's Abu Dhabi Analyst Day & Chinese Tech Risks
    35:37 Waymo Announces Chicago & Charlotte as New Markets
    40:55 Uber and Waymo's Waning Relationship
    42:03 Waymo Surpasses 1 Million Fully Autonomous Freeway Miles
    43:56 Waymo Eyes EU Expansion
    46:32 Wayve's $1.2B Funding Round
    50:39 NVIDIA, Physical AI, & Humanoids
    53:04 Next Week

    Recorded on Friday, February 27, 2026

    About The Road to Autonomy: The Road to Autonomy is the definitive media brand covering the Autonomy Economy™. Through our podcasts, newsletter, and proprietary market intelligence, we set the narrative for institutional investors, industry executives, and policymakers navigating the convergence of automation, autonomy, and economic growth.

    Sign up for This Week in The Autonomy Economy newsletter: https://www.roadtoautonomy.com/ae/
    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
    SANS Stormcast Friday, February 27th, 2026: Finding Signal (@sans_edu intern); Google API Keys and Gemini; AirSnitch Breaking Client Isolation

    SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

    Play Episode Listen Later Feb 27, 2026 9:22


    Finding Signal in the Noise: Lessons Learned Running a Honeypot with AI Assistance [Guest Diary] https://isc.sans.edu/diary/Finding%20Signal%20in%20the%20Noise%3A%20Lessons%20Learned%20Running%20a%20Honeypot%20with%20AI%20Assistance%20%5BGuest%20Diary%5D/32744
    Google API Keys Weren't Secrets. But then Gemini Changed the Rules. https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules
    AirSnitch: Demystifying and Breaking Client Isolation in Wi-Fi Networks https://www.ndss-symposium.org/ndss-paper/airsnitch-demystifying-and-breaking-client-isolation-in-wi-fi-networks/

    Circles Off - Sports Betting Podcast
    Every Golfer Was -10000… Here's What Happened | Presented by Kalshi

    Circles Off - Sports Betting Podcast

    Play Episode Listen Later Feb 27, 2026 57:57


    A sportsbook accidentally listed every golfer in the Cognizant Classic at -10000 while adjusting lines — and for a brief window, chaos broke out across the market. Bettors holding longshots suddenly saw absurd cashout offers hit their accounts, creating one of the wildest pricing errors we've seen in golf betting. The crew breaks down what actually happened, how these -10000 placeholders triggered massive cashouts, whether books are obligated to honor them, and what this says about how sportsbooks manage live risk and line moves. We also dive into Alex Monahan's claim that bankroll management is overrated and debate whether traditional betting discipline is outdated in today's market. Joining Jacob Gramegna are Mike (Mr. PeanutBettor), Joey Knish, and Porter from BA Analytics to unpack the controversy, the philosophy clash, and everything else making noise in gambling Twitter this week.

    The Cybersecurity Defenders Podcast
    AI Red Teaming with John V from the Institute for Security and Technology / Defender Fridays [#297]

    The Cybersecurity Defenders Podcast

    Play Episode Listen Later Feb 27, 2026 30:38


    John V, AI risk, safety, and security at the Institute for Security and Technology (IST), joins Defender Fridays today. John's work spans AI red teaming, adversarial machine learning, AI evals and validation, and AI risk assessment, including policy work at the intersection of AGI and nuclear strategic stability. Learn more at https://securityandtechnology.org/

    Register for Live Sessions: Join us every Friday at 10:30am PT for live, interactive discussions with industry experts. Whether you're a seasoned professional or just curious about the field, these sessions offer an engaging dialogue between our guests, hosts, and you – our audience. Register here: https://limacharlie.io/defender-fridays

    Subscribe to our YouTube channel and hit the notification bell to never miss a live session or catch up on past episodes!

    Sponsored by LimaCharlie: This episode is brought to you by LimaCharlie, a cloud-native SecOps platform where AI agents operate security infrastructure directly. Founded in 2018, LimaCharlie provides complete API coverage across detection, response, automation, and telemetry, with multi-tenant architecture designed for MSSPs and MDR providers managing thousands of unique client environments.

    Why LimaCharlie?
    - Transparency: Complete visibility into every action and decision. No black boxes, no vendor lock-in.
    - Scalability: Security operations that scale like infrastructure, not like procurement cycles. Move at cloud speed.
    - Unopinionated Design: Integrate the tools you need, not just those contracts allow. Build security on your terms.
    - Agentic SecOps Workspace (ASW): AI agents that operate alongside your team with observable, auditable actions through the same APIs human analysts use.
    - Security Primitives: Composable building blocks that endure as tools come and go. Build once, evolve continuously.

    Try the Agentic SecOps Workspace free: https://limacharlie.io
    Learn more: https://docs.limacharlie.io

    Follow LimaCharlie:
    - Sign up for free: https://limacharlie.io
    - LinkedIn: /limacharlieio
    - X: https://x.com/limacharlieio
    - Community Discourse: https://community.limacharlie.com/

    Host: Maxime Lamothe-Brassard - CEO / Co-founder at LimaCharlie

    Closed Network Privacy Podcast
    Episode 52 - Opsec Fail - Epstein Files - Why Decentralized Systems Are a Threat to Power Networks

    Closed Network Privacy Podcast

    Play Episode Listen Later Feb 27, 2026 94:55 Transcription Available


    Show Notes - https://forum.closednetwork.io/t/episode-52-opsec-fail-epstein-files-why-decentralized-systems-are-a-threat-to-power-networks-age-verify-is-coming-to-everything/177
    Website / Donations / Support - https://closednetwork.io/support/
    BTC Lightning Donations - closednetwork@getalby.com / simon@primal.net

    Thank You Patreons! - https://www.patreon.com/closednetwork
    - Michael Bates - Privacy Bad Ass
    - David - Privacy Bad Ass
    - Inferno Potato - Privacy Bad Ass
    - TK - Privacy Bad Ass
    - David - Privacy Bad Ass
    - VO - Privacy Bad Ass
    - MrMilkMustache - Privacy Supporter
    - Hutch - Privacy Advocate

    TOP LIGHTNING BOOSTERS !!!! THANK YOU !!!
    @bon@sn@x@fireflygowartime@unkown@anonymous
    BBB - Buy Me A Coffee - $30.00

    Thank You To Our Moderators:
    - Unintelligentseven - Follow on NOSTR primal.net/p/npub15rp9gyw346fmcxgdlgp2y9a2xua9ujdk9nzumflshkwjsc7wepwqnh354d
    - MaddestMax - Follow on NOSTR primal.net/p/npub133yzwsqfgvsuxd4clvkgupshzhjn52v837dlud6gjk4tu2c7grqq3sxavt

    Join Our Community:
    - Closed Network Forum - https://forum.closednetwork.io
    - Matrix Main - https://matrix.to/#/#closedntwrk:matrix.org
    - Matrix Off Topic - https://matrix.to/#/#closednetworkofftopic:matrix.org
    - SimpleX Group Chat - https://smp9.simplex.im/g#SRBJK7JhuMWa1jgxfmnOfHz7Bl5KjnKUFL5zy-Jn-j0
    - Mastodon server - https://closednetwork.social

    Follow Simon On The Socials:
    - Mastodon - https://closednetwork.social/@simon
    - NOSTR - Public Address - npub186l3994gark0fhknh9zp27q38wv3uy042appcpx93cack5q2n03qte2lu2 - primal.net/simon
    - Twitter / X - @ClosedNtwrk
    - Instagram - https://www.instagram.com/closednetworkpodcast/
    - YouTube - https://www.youtube.com/@closednetwork
    - Email - simon@closednetwork.io

    Stories covered:
    - Apple rolls out age-verification tools worldwide to comply with growing web of child safety laws: https://techcrunch.com/2026/02/24/apple-rolls-out-age-verification-tools-worldwide-to-comply-with-growing-web-of-child-safety-laws/
    - iOS 26.3—Update Now Warning Issued To All iPhone Users: https://www.forbes.com/sites/kateoflahertyuk/2026/02/13/ios-263-update-now-warning-issued-to-all-iphone-users/ Using the vulnerability, tracked as CVE-2026-20700, an attacker could execute arbitrary code. “Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals on versions of iOS before iOS 26,” Apple said on its support page.
    - iOS 26.4 Beta - End-To-End RCS Encryption For Messages: https://www.macrumors.com/guide/ios-26-4-beta-features/#:~:text=End%2Dto%2DEnd%20RCS%20Encryption%20for%20Messages
    - Popular password managers fall short of “zero-knowledge” claims: https://cyberinsider.com/popular-password-managers-fall-short-of-zero-knowledge-claims/ and https://www.youtube.com/watch?v=nLJ_sLr72-g
    - Watch Out: Your Friends Might Be Sharing Your Number With ChatGPT: https://www.pcmag.com/news/watch-out-your-friends-might-be-sharing-your-number-with-chatgpt
    - BitLocker, the FBI, and the Illusion of Control: https://cryptomator.org/blog/2026/02/15/bitlocker-fbi-and-the-illusion-of-control/
    - Google patches first Chrome zero-day exploited in attacks this year: https://www.bleepingcomputer.com/news/security/google-patches-first-chrome-zero-day-exploited-in-attacks-this-year/
    - the watchers: how openai, the US government, and persona built an identity surveillance machine that files reports on you to the feds: https://vmfunc.re/blog/persona

    TL;DR: discord's KYC provider (persona) is a very naked, very poorly secured federal intelligence outfit, and also a siphon for openai data for them and their partners like worldcoin. The most interesting part (for me) is that it legit crosschecks a discord ID check (actually involves checking your face, IP, device signature, etc.) against Chainalysis dossiers for any partial matches to devices/people/accounts/names involved with tracked crypto addresses. So, if Chainalysis gets a device signature, and then you verify your discord on the same device (yielding the same signature), FinCEN, Chainalysis, OpenAI, and basically everyone now knows your crypto tx, your device sig, and your real identity.

    Bill Summary: SB26-051 – Age Attestation on Computing Devices

    Purpose: SB26-051 requires operating system providers (such as mobile device platforms) to implement an age attestation system that signals a user's age bracket to apps in order to enhance protections for minors.

    What the Bill Requires

    1. Operating System Providers Must:
    - Provide an accessible interface at account setup requiring the account holder to enter the user's birth date or age.
    - Generate an “age signal” that communicates the user's age bracket (not exact age) to applications in a covered app store.
    - Provide developers access to this age signal through a real-time API.
    - Share only the minimum amount of information necessary to comply.
    - Not share the age signal with third parties except as required by the bill.

    2. Application Developers Must:
    - Request the age signal when the app is downloaded and launched.
    - Treat the age signal as knowledge of the user's age range across all platforms and access points.
    - If they have clear and convincing evidence that a user's age differs from the signal, rely on that updated information.
    - Not request more information than necessary.
    - Not share the age signal with third parties except as required by the bill.

    Enforcement & Penalties
    If violated:
    - Up to $2,500 per minor per negligent violation
    - Up to $7,500 per minor per intentional violation
    - Enforced through civil action by the Colorado Attorney General

    In Simple Terms
    The bill creates a standardized age-verification signal built into device operating systems. Instead of each app independently collecting age data, the operating system provides an age bracket to apps — while limiting unnecessary data sharing. The goal is to:
    - Strengthen protections for minors
    - Limit excessive data collection
    - Create a consistent age-verification framework across apps
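    The OS-level flow the bill describes (birth date collected once at account setup, only a coarse bracket exposed to apps) can be sketched in a few lines. This is an illustrative sketch only, not the bill's actual API: the bracket boundaries and the function name `age_signal` are assumptions, since SB26-051's summary does not enumerate specific brackets.

```python
from datetime import date

# Hypothetical bracket boundaries; the bill's summary does not specify them.
# Each entry is (exclusive upper age bound, bracket label).
BRACKETS = [(13, "under-13"), (16, "13-15"), (18, "16-17")]

def age_signal(birth_date: date, today: date) -> str:
    """Return a coarse age bracket for apps to query; never the exact age."""
    # Compute completed years, accounting for whether the birthday has passed.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    for upper, label in BRACKETS:
        if age < upper:
            return label
    return "18-plus"  # all adults share one undifferentiated bracket
```

    The key property is that an app calling this API learns only the bracket label, which keeps the exact birth date on the operating system side.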

    Inside Facebook Mobile
    83: Patch Me If You Can: AI Codemods for Secure-by-Default Android Apps

    Inside Facebook Mobile

    Play Episode Listen Later Feb 27, 2026 47:50


    At Meta, even seemingly simple engineering tasks—like updating an API—become monumental undertakings when you're dealing with millions of lines of code and thousands of engineers, especially if the changes are security-related. In today's episode, Pascal talks to Alex and Tanu about the challenges and learnings from the journey of making Meta's mobile frameworks more secure at a scale few companies ever experience. Tune in to this episode and join us as we explore the compelling crossroads of security, automation, and AI within mobile development.

    Got feedback? Send it to us on Threads (https://threads.net/@metatechpod), Instagram (https://instagram.com/metatechpod) and don't forget to follow our host Pascal (https://mastodon.social/@passy, https://threads.net/@passy_). Fancy working with us? Check out https://www.metacareers.com/.

    Links:
    - How AI Is Transforming the Adoption of Secure-by-Default Mobile Frameworks - https://engineering.fb.com/2025/12/15/android/how-ai-transforming-secure-by-default-mobile-frameworks-adoption/
    - RCCLX: Innovating GPU Communications on AMD Platforms - https://engineering.fb.com/2026/02/24/data-center-engineering/rrcclx-innovating-gpu-communications-amd-platforms-meta/
    - The Death of Traditional Testing: Agentic Development Broke a 50-Year-Old Field, JiTTesting Can Revive It - https://engineering.fb.com/2026/02/11/developer-tools/the-death-of-traditional-testing-agentic-development-jit-testing-revival/

    Timestamps:
    Intro & News 0:06
    Meet the Product Security Team 2:07
    Understanding the Intent System 4:13
    Security Challenges in Android's Intent System 6:44
    Proposed Solutions for Intent Security 9:39
    Meta's Unique Challenges at Scale 12:34
    Implementing a Secure Link Launcher Framework 15:32
    Leveraging AI for Contextual Understanding 17:55
    Navigating AI-Driven Code Modifications 20:47
    Developer Experience with AI Code Mods 21:49
    Validation Challenges in AI Code Generation 25:37
    Evolution of AI in Code Modifications 29:29
    Identifying AI's Strengths in Security 36:20
    Future Directions in AI and Framework Development 42:58
    Outro 46:58

    We Are, Marketing Happy - A Healthcare Marketing Podcast
    AI Usage in Healthcare Marketing: Case Study

    We Are, Marketing Happy - A Healthcare Marketing Podcast

    Play Episode Listen Later Feb 27, 2026 13:13


    CEO & Founder of Hedy & Hopp Jenny Bristow is joined by Senior Digital Producer Suzie Schmitt to discuss a real-world example of AI and automation in healthcare content marketing: the creation of Hedy & Hopp's in-house tool, Hoppywriter. They explore the tool's purpose in increasing efficiency and quality for healthcare marketing blogs, the technical and ethical considerations in its development, and how it ensures humanity remains at the center of content creation. The conversation highlights practical applications of AI that enhance, but never replace, human writers and the efficiency of their processes.

    Episode notes:
    Enhancing Human Output with AI: Hedy & Hopp's core philosophy for leveraging AI and automation is to enhance human output and efficiency, not to replace the creative work of humans.
    The Hoppywriter Tool: A custom-built tool designed to streamline the delivery process of healthcare marketing blogs. It empowers writers by providing all necessary information (high-value keywords, client voice, doctor information, and awards) in one centralized Google Sheet, using Google Apps Script as the backend.
    Efficiency Pipeline for Content Creation: Hoppywriter integrates with tools like Wrike (project management) to pull in SEO keywords and client data, then pushes a fleshed-out brief to the writer, significantly cutting down the time required for editing and writing.
    Guardrails and Data Safety: A discussion of the critical guardrails for AI tools, including rigorous stress testing with edge cases and keeping all client data secure. Hedy & Hopp uses a custom Gemini ecosystem in a Google Cloud account, covered by a Business Associate Agreement (BAA), ensuring data is never used to improve the models and never leaves their data silo.
    Combating Content Repetition with the Jaccard Index: The Jaccard index (a measure of similarity between sets) is used to establish a threshold for each client and campaign. This system automatically flags any blog topics or paragraphs that are too similar to past content, ensuring content freshness, which is crucial for complex healthcare topics that can easily become repetitive, like orthopedic surgery.
    Advice for Incorporating Technology: Organizations seeking to set up similar processes should use existing tools, recognize the power of low-code solutions like Google Apps Script, adhere to strict security protocols for API keys, and hold AI tools to the same fundamental requirements as any other vendor (e.g., antivirus software, web hosting).

    Connect with Jenny:
    Email: jenny@hedyandhopp.com
    LinkedIn: https://www.linkedin.com/in/jennybristow/
    Connect with Suzie:
    LinkedIn: https://www.linkedin.com/in/suzie-schmitt/

    If you enjoyed this episode, we'd love to hear your feedback! Please consider leaving us a review on your preferred listening platform and sharing it with others.
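The Jaccard-based repetition check described above can be sketched in a few lines. This is an illustrative sketch only, not Hedy & Hopp's actual implementation: the word-level tokenization, function names, and 0.5 threshold are assumptions.

```python
def jaccard_similarity(text_a: str, text_b: str) -> float:
    """Jaccard index: |intersection| / |union| of the two word sets."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def flag_repetitive(new_paragraph: str, past_paragraphs: list[str],
                    threshold: float = 0.5) -> list[str]:
    """Return past paragraphs whose similarity to the new one
    meets or exceeds the per-client threshold."""
    return [p for p in past_paragraphs
            if jaccard_similarity(new_paragraph, p) >= threshold]
```

In practice the threshold would be tuned per client and campaign, as the episode describes, and flagged paragraphs would be routed back to a human writer for a rewrite rather than rejected automatically.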

    Techmeme Ride Home
    An AI Has A Substack

    Techmeme Ride Home

    Play Episode Listen Later Feb 26, 2026 21:19


    Nano Banana 2 is here already. Nvidia tries to assure everybody there IS no bubble. Marc Benioff tries to assure everybody there is NO SaaS-pocalypse. Did Google just do exactly what Apple has been unable to do? And how do you put an old AI model out to pasture? You give it a Substack.
    Google's Nano Banana 2 brings advanced AI image tools to free users (The Verge)
    Nvidia Shares Slide After Sales Forecast Underwhelms Investors (Bloomberg)
    Salesforce chief dismisses ‘SaaS-pocalypse' fears of AI overtaking business software (FT)
    New York sues video game developer Valve, says its 'loot boxes' are gambling (Reuters)
    Google and Samsung just launched the AI features Apple couldn't with Siri (The Verge)
    Cloudflare experiment ports most of Next.js API 'in one week' with AI (The Register)
    Anthropic gives its retired Claude AI a Substack (The Verge)
    Learn more about your ad choices. Visit megaphone.fm/adchoices

    This Week in Startups
    Behind the Scenes with an early OpenClaw contributor! | E2252

    This Week in Startups

    Play Episode Listen Later Feb 26, 2026 82:11


    This Week In Startups is made possible by:
    Lemon.io - https://Lemon.io/twist
    Every.io - https://every.io
    Sentry.io - https://sentry.io/twist

    Today's show: We're going behind the curtain today — it's a packed show! We found Tyler Yust, OpenClaw's third-ever contributor, to share his insights from within the foundation! We've got Deedy Das, of Menlo Ventures, on the show to discuss whether SaaS is cooked! Next we meet the creator of an OpenClaw instance that fits in your pocket! We've also got the founder of OpenBrowse showing us how he automatically detects and generates OpenClaw skills!

    Timestamps:
    00:00 Intro - Deedy Das Joins the Show!
    04:54 Anthropic's revenue growth and valuation
    06:07 OpenClaw Contributor Tyler Yust joins the show
    09:24 iMessage integration and Apple's proprietary systems
    00:10:07 Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist
    14:31 Anthropic vs. the Pentagon
    00:20:02 Every.io - For all of your incorporation, banking, payroll, benefits, accounting, taxes or other back-office administration needs, visit https://every.io.
    00:30:08 Sentry - New users can get $240 in free credits when they go to https://sentry.io/twist and use the code TWIST
    00:35:46 The Infamous Citrini article
    00:32:47 Come to LAUNCH fest! https://fest.launch.co
    00:36:28 Why Deedy thinks the Citrini article is a work of science fiction
    00:44:51 The illusion of privacy in corporate America
    00:41:18 Deedy thinks Enterprise SaaS apps aren't going to be vibe coded
    00:49:20 Jason's Reddit Bot
    00:52:01 Jason's obsession with Singapore's food
    00:55:22 How Unbrowse pulls any backend API!
    01:02:07 Sebastian shows off the smallest OpenClaw form factor!
    01:12:04 The Prolo ring — for people who doomscroll
    01:20:21 Deedy's Podcast Player App!

    Thank you to our partners:
    (10:07) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist
    (20:02) Every.io - For all of your incorporation, banking, payroll, benefits, accounting, taxes or other back-office administration needs, visit every.io.
    (30:08) Sentry - New users can get $240 in free credits when they go to sentry.io/twist and use the code TWIST

    Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
    Check out the TWIST500: https://www.twist500.com
    Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
    Follow Lon: X: https://x.com/lons
    Follow Alex: X: https://x.com/alex | LinkedIn: https://www.linkedin.com/in/alexwilhelm
    Follow Jason: X: https://twitter.com/Jason | LinkedIn: https://www.linkedin.com/in/jasoncalacanis
    Check out all our partner offers: https://partners.launch.co/
    Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland
    Check out Jason's suite of newsletters: https://substack.com/@calacanis
    Follow TWiST:
    Twitter: https://twitter.com/TWiStartups
    YouTube: https://www.youtube.com/thisweekin
    Instagram: https://www.instagram.com/thisweekinstartups
    TikTok: https://www.tiktok.com/@thisweekinstartups
    Substack: https://twistartups.substack.com

    This Week in Startups
    Kill Your Startup's Knowledge Chaos with OpenClaw (with Oliver Henry and Jeff Weisbein) | E2254

    This Week in Startups

    Play Episode Listen Later Feb 24, 2026 78:57


    This Week In Startups is made possible by:
    Caldera Lab - https://calderalab.com/twist
    Iru - https://iru.com
    LinkedIn Jobs - https://linkedin.com/twist

    OpenClaw is incredible at automating tasks. But what if it could also fix your startup's internal communication problems? Give agents shared memory, and you may be able to break down information silos while ensuring that teammates have the same context. @oliverhenry and @jeffweisbein demo what they've actually built with OpenClaw, including marketing automations, agentic loops, and bug-fixing tools. Then we dig into what agentic infrastructure means for how startups operate, and why traditional SaaS products need to quickly adapt for the agentic era.

    Oliver Henry: The creator of the ‘Larry' OpenClaw skill (https://clawhub.ai/OllieWazza/larry), and founder of Larrybrain (https://www.larrybrain.com/)
    Jeff Weisbein: The Claw-pilled founder of WizardRFP (https://www.wizardrfp.com/) and WhoCoversIt (https://www.whocoversit.com/), who shared his OpenClaw framework publicly (https://weisbe.in/openclaw) and built a getting-started guide for the tool (https://github.com/jeffweisbein/openclaw-starter-kit)

    Timestamps:
    00:00 Intro
    (00:01:43) Here's why you never ski alone in a blizzard!
    (00:04:22) Why everyone at LAUNCH is going to get their own Mac Mini and AI agent
    (00:08:06) “OpenClaw has changed my entire solo-preneur lifestyle.” — Jeff Weinstein of Hype Lab
    (00:09:06) Jason's urgent API message to Steve Huffman of Reddit
    (00:10:20) LinkedIn Jobs - Hire right, the first time. Post your first job and get $100 off towards your job post at https://LinkedIn.com/twist
    (00:15:12) Oliver shows us his Larry Skill to make viral TikTok content with zero human intervention
    (00:20:10) Iru - Iru unifies identity, endpoint security, and compliance into one platform. Book a demo at https://iru.com.
    (00:21:22) Why are platforms like TikTok still so hostile toward bots?
    (00:24:45) The shift from asking a chatbot how to do things, to just telling an agent to do things
    (00:26:05) How Oliver is training Larry to get better at its job
    (00:30:09) Caldera Lab - Whether you're starting fresh or upgrading your routine, Caldera Lab makes skincare simple and effective. Head to https://CalderaLab.com/TWIST and use TWIST at checkout for 20% off your first order.
    (00:32:47) Why making your agent more PROACTIVE is more important than automating everything
    (00:37:14) Why pull requests… just aren't really a thing any more.
    (00:39:40) How Jason is using his new AI assistant, “Roy,” to keep track of everything going on at his company
    (00:53:00) Is the SaaS crash actually rational after all?
    (00:51:48) Using AI to create “pools of excellence”
    (00:54:03) The more you integrate software into AI, the less valuable the software becomes
    (00:56:56) Why “Agentify Your SaaS” may become the rallying cry
    (00:58:31) How has the age verification scandal impacted Discord's IPO plans?
    (01:03:10) When you want to build your own skill vs. downloading someone else's
    (01:03:53) How Larrybrain finds helpful skills and helps creators monetize
    (01:08:32) When will we get true experts making verifiably top skills?
    (01:11:40) Jason's SCARY but also AWESOME new OpenClaw CEO tools
    (01:18:10) What does this mean for the future of venture capital?
    (01:18:35) Why a lot of MBAs should probably have PhDs

    Thank you to our partners:
    (30:09) Caldera Lab - Whether you're starting fresh or upgrading your routine, Caldera Lab makes skincare simple and effective. Head to https://CalderaLab.com/TWIST and use TWIST at checkout for 20% off your first order.
    (20:10) Iru - Iru unifies identity, endpoint security, and compliance into one platform. Book a demo at https://iru.com.
    (10:20) LinkedIn Jobs - Hire right, the first time. Post your first job and get $100 off towards your job post at https://LinkedIn.com/twist
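The shared-memory idea discussed in the episode (several agents reading and writing one context store so teammates stay in sync) can be illustrated with a minimal sketch. This is a generic illustration, not OpenClaw's actual mechanism; the `SharedMemory` class, file path, and JSON persistence are all hypothetical.

```python
import json
from pathlib import Path


class SharedMemory:
    """Minimal shared key-value memory that several agents can read and write.

    Persists to a JSON file so separate agent processes see the same context.
    (Illustrative only: real multi-process use would need file locking.)
    """

    def __init__(self, path: str = "agent_memory.json"):
        self.path = Path(path)
        if not self.path.exists():
            self.path.write_text("{}")

    def remember(self, agent: str, key: str, value: str) -> None:
        data = json.loads(self.path.read_text())
        data[key] = {"value": value, "written_by": agent}
        self.path.write_text(json.dumps(data, indent=2))

    def recall(self, key: str):
        entry = json.loads(self.path.read_text()).get(key)
        return entry["value"] if entry else None


# Two "agents" sharing one context store:
memory = SharedMemory()
memory.remember("marketing-agent", "launch_date", "March 14")
print(memory.recall("launch_date"))  # a second agent reading the same store sees the value
```

The point of the design is that context written by one agent is immediately visible to every other agent that opens the same store, which is the silo-breaking property the episode describes.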

    Mac Power Users
    837: Menu Bar Mayhem

    Mac Power Users

    Play Episode Listen Later Feb 22, 2026 95:33


    Sun, 22 Feb 2026 16:00:00 GMT http://relay.fm/mpu/837
    David Sparks and Stephen Robles go deep on the Mac menu bar, comparing their contrasting philosophies and walking through their favorites. They also explore how macOS 26's multiple Control Centers are changing the game.
    This episode of Mac Power Users is sponsored by:
    Insta360: Introducing the Insta360 Wave and the Link 2 Pro.
    HTTPBot: A powerful API client and debugger for Apple platforms. Get a 7-day trial and 25% off your subscription.
    Ecamm: Powerful live streaming platform for Mac.
    1Password: Never forget a password again.
Links and Show Notes: Credits The Mac Power Users Stephen Robles David Sparks The Editor Jim Metzendorf The Fixer Kerry Provanzano More Power Users: Ad-free episodes with regular bonus segments Submit Feedback David's Menu Bar, Condensed David's Full Menu Bar Stephen's Menu Bar Ice Menu Bar Manager ‎Hidden Bar App - App Store ‎Barbee - App Store BuhoBarX MacMenuBar.com iStat Menus Loom CleanShot X for Mac Screen Studio Dropzone 4 DEVONtechnologies Supercharge — Sindre Sorhus ‎DiskView App - App Store Audio Hijack Setapp Hazel for Mac PopClip for Mac BetterTouchTool CleanMyMac Moom · Many Tricks Karabiner-Elements Carbon Copy Cloner WhisperType Cotypist Wispr Flow Tailscale ‎Pastebot App - App Store ‎Shortery App - App Store ‎Itsyhome App - App Store ‎HomeControl Menu for HomeKit App - App Store Shawn Blanc Backblaze MacWhisper Grammarly Timing Flexibits | Fantastical ‎Screens 5: VNC Remote Desktop App - App Store Drafts | Where Text Starts Day One Journal App Keyboard Maestro TextExpander Alfred Menuwhere Bitfocus - Companion Parcel - Delivery Tracking ‎Creator's Best Friend App - App Store