In 1263 BCE, priests announced the death of the APIS BULL. Sacred to Ptah, the bull dwelled in the temple at Men-nefer (Memphis). Now, in year 30 of Ramesses II, the King's son KHA-EM-WASET would lead the funerary processions. Shortly after, the prince inaugurated the first phase of a now-famous monument: the Lesser Vaults of the SERAPEUM began to take shape. The prince also began a project for which he is renowned: the preservation and restoration of old monuments. These acts have earned him the moniker "the first Egyptologist." Logo: Statue of Khaemwaset from Asyut, now in the British Museum (Photo: Dominic Perry). Music: Keith Zizza, www.keithzizza.net, used with the artist's permission. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this week's podcast: Nosema apis and Nosema ceranae, two spore-forming, parasitic microsporidia! They sound like something of a horror show for our bees, and the effects of a heavy infection can be quite damaging. Listen in as I explain what it is, how you can identify it, and ultimately deal with it, so your bees can have a healthy and productive summer.
Hi, I'm Stewart Spinks. Welcome to Episode 382 of my podcast, Beekeeping Short and Sweet.
Please support us through the affiliate links below; they cost you nothing and help us continue to produce our content.
References: Glavinic U, Blagojevic J, Ristanic M, Stevanovic J, Lakic N, Mirilovic M, Stanimirovic Z. Use of Thymol in Nosema ceranae Control and Health Improvement of Infected Honey Bees. Insects. 2022 Jun 24;13(7):574. doi: 10.3390/insects13070574. PMID: 35886750; PMCID: PMC9319372.
Hive Five Multi Guard Entrances
Beekeeping Courses at Thorne Beehives in Wragby, Lincolnshire 2026
Some of my favourite microscopy books:
Pollen Loads of the Honeybee by Dorothy Hodges
Rex Sawyer's Pollen Identification
Pollen Grains and Honeydew by Margaret Adams
The Pollen Landscape by Joss Bartlett
Pollen Microscopy by Norman Chapman
The National Bee Unit Varroa information can be found HERE
Bee Aware Varroa information can be found HERE
Thorne Beehives Bees on a Budget Hive
The Beekeeper's Dictionary website
Ethyl acetate for colony destruction can be found here
Gardening potting tray for effective frame cleaning
Stainless steel stock pots for use as a double boiler. Get one slightly larger than the other to fit inside.
Gas stove for outdoor use to render wax and old comb.
Contact me at The Norfolk Honey Company
VMD website: Click HERE
Join our beekeeping community in the following ways:
Early release & additional video and podcast content - Access Here
Stewart's Beekeeping Basics Facebook private group - Click Here
Twitter - @NorfolkHoneyCo - check out our feed
Instagram - @norfolkhoneyco - view our great photographs
Sign up for my email updates by visiting my website here
Amazon links are affiliate links. I receive a small commission should you choose to purchase.
Support the show
Woodson Martin, CEO of OutSystems, argues that successful enterprise AI deployments rarely rely on standalone agents. Instead, production systems combine AI agents with data, workflows, APIs, applications, and human oversight. While claims that "95% of agent pilots fail" are common, Martin suggests many of those pilots were simply low-commitment experiments made possible by the low cost of testing AI. Enterprises that succeed typically keep humans in the loop, at least initially, to review recommendations and maintain control over decisions. Current enterprise use cases for agents include document processing, decision support, and personalized outputs. When integrated into broader systems, these applications can deliver measurable productivity gains. For example, Travel Essence built an agentic system that reduced a two-hour customer planning process to three minutes, allowing staff to focus more on sales and helping drive 20% top-line growth. Martin also believes AI will pressure traditional SaaS seat-based pricing and accelerate custom software development. In this environment, governed platforms like OutSystems can help enterprises adopt "vibe coding" while maintaining compliance, security, and lifecycle management.
Learn more from The New Stack about the latest developments around enterprise adoption of vibe coding:
How To Use Vibe Coding Safely in the Enterprise
5 Challenges With Vibe Coding for Enterprises
Vibe Coding: The Shadow IT Problem No One Saw Coming
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
From WEDI's 2026 Winter Forum, Michael chats with three payer representatives who discuss how access APIs are improving the patient experience by making data easier to access, use, and share across care journeys.
Tom Loomis, Enterprise Architecture - Interoperability, Evernorth
Nancy Bevin, Director, Provider Connectivity, Medica
Ron Wampler, Executive Director, Interoperability, Aetna, a CVS Health Company
Luis and Albert sit down to talk about how they're using artificial intelligence in their day-to-day work as media buyers in 2026. No theory: concrete tools, workflows they already apply with clients, and an honest comparison of which AI is worth your money and which isn't. In this episode you'll learn:
The reception to our recent post on Code Reviews has been strong. Catch up!
Amid a maelstrom of discussion on whether or not AI is killing SaaS, one of the top publicly listed SaaS companies in the world has just reported record revenues, clearing well over $1.1B in ARR for the first time with a 28% margin. As we comment on the pod, Aaron Levie is the rare public-company CEO equally at home in both worlds of Silicon Valley and Wall Street/Main Street: by day helping 70% of the Fortune 500 with their Enterprise Advanced Suite, yet by night often found in the basements of early startups and tweeting viral insights about the future of agents.
Now that Cursor, Cloudflare, Perplexity, Anthropic, and more have made Filesystems and Sandboxes and various forms of "Just Give the Agent a Box" cool (not just cool; it is now one of the single hottest areas in AI infrastructure, growing 100% MoM), we find it a delightfully appropriate time to do the episode with the OG CEO who has been giving humans and computers Boxes since he was a college dropout pitching VCs at a Michael Arrington house party.
Enjoy our special pod, with fan-favorite returning guest/guest cohost Jeff Huber!
Note: We didn't directly discuss the AI vs SaaS debate - Aaron has done many, many, many other podcasts on that, and you should read his definitive essay on it.
Most commentators do not understand SaaS businesses because they have never scaled one themselves, nor deeply reflected on what the true value proposition of SaaS is.
We also discuss Your Company is a Filesystem.
We also shout out CTO Ben Kus and the AI team, who talked about the technical architecture and will return for AIE WF 2026.
Full Video Episode
Timestamps
* 00:00 Adapting Work for Agents
* 01:29 Why Every Agent Needs a Box
* 04:38 Agent Governance and Identity
* 11:28 Why Coding Agents Took Off First
* 21:42 Context Engineering and Search Limits
* 31:29 Inside Agent Evals
* 33:23 Industries and Datasets
* 35:22 Building the Agent Team
* 38:50 Read Write Agent Workflows
* 41:54 Docs Graphs and Founder Mode
* 55:38 Token FOMO Culture
* 56:31 Production Function Secrets
* 01:01:08 Film Roots to Box
* 01:03:38 AI Future of Movies
* 01:06:47 Media DevRel and Engineering
Transcript
Adapting Work for Agents
Aaron Levie: Like, you don't write code, you talk to an agent and it goes and does it for you, and you maybe at best review it. That's even probably, like, largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works. All of the economy has to go through that exact same evolution. Right now, it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this, 'cause you'll see compounding returns. But that's just gonna take a while for most companies to actually go and get this deployed.
swyx: Welcome to the Latent Space pod. We're back in the Chroma studio with, uh, Chroma CEO Jeff Huber. Welcome, returning guest, now guest host.
Aaron Levie: It's a pleasure. Wow. How'd you get upgraded to, uh, to that?
swyx: Because he's, like, the perfect guy to be guest host for you.
Aaron Levie: That makes sense actually, for we love context.
We, we both really love context. We really do. We really do.
swyx: Uh, and we're here with, uh, Aaron Levie. Welcome.
Aaron Levie: Thank you. Good to, uh, good to be [00:01:00] here.
swyx: Uh, yeah. So we've all met offline and, like, chatted a little bit, but, like, it's always nice to get these things in person and conversation. Yeah. You just started off with so much energy. You're, you're super excited about agents.
Aaron Levie: I love agents.
swyx: Yeah. OpenClaw just got by... got bought by OpenAI. No, not bought, but you know, you know what I mean?
Aaron Levie: Some, some, you know, acquihire. Executive...
swyx: hire.
Aaron Levie: Executive hire. Okay. Executive hire.
swyx: Say, hey, that's my term. Okay. Um, what are you pounding the table on on agents? You have so many insightful tweets.
Why Every Agent Needs a Box
Aaron Levie: Well, the thing that, that we get super excited by that I think is probably, you know, should be relatively obvious is: we've, we've built a platform to help enterprises manage their files, and their, their corporate files, and the permissions of who has access to those files, and the sharing and collaboration of those files. All of those files contain really, really important information for the enterprise. It might have your contracts, it might have your research materials, it might have marketing information, it might have your memos. All that data obviously has, you know, predominantly been used by humans. [00:02:00] But there's been one really interesting problem, which is that, you know, humans only really work with their files during an active engagement with them, and they kind of go away and you don't really see them for a long time. And all of a sudden, uh, with the power of AI and AI agents, all of that data becomes extremely relevant as this ongoing source of, of answers to new questions, of data that will transform into, into something else that, that produces value in your organization.
It, it contains the answer for the new employee that's onboarding, that needs to ramp up on a project. Um, it contains the answer to the right thing to sell a customer when you're having a conversation with them. It contains the roadmap information that's gonna produce the next feature. So all that data that previously we've been just sort of storing and, and, you know, occasionally forgetting about, 'cause we're only working on the new active stuff... all of that information becomes valuable to the enterprise. And it's gonna become extremely valuable to end users, because now they can have agents go find what they're looking for and produce new, new [00:03:00] value and new data on that information. And it's gonna become incredibly valuable to agents, because agents can roam around and do a bunch of work, and they're gonna need access to that data as well. And, um, and you know, sometimes that will be an agent that is sort of working on behalf of, of, of you, and, and effectively as you, and, and they are kind of accessing all of the same information that you have access to and, and operating as you in the system. And then sometimes there's gonna be agents that are just effectively autonomous and kind of run on their own, and, and you're gonna collaborate and work with them kind of like you did another person. OpenClaw being the most recent, and maybe first real sort of, you know, kind of, you know, updating-everybody's-views-of-this-landscape version of, of what that could look like, which is: okay, I have an agent. It's on its own system, it's on its own computer, it has access to its own tools. I probably don't give it access to my entire life. I probably communicate with it like I would an assistant or a colleague, and then it, it sort of has this sandbox environment.
So all of that has massive implications for a platform that manages that [00:04:00] enterprise data. We think it's gonna just transform how we work with all of the enterprise content that we work with, and we just have to make sure we're building the right platform to support that.
swyx: The sort of shorthand I put it as is: as people build agents, everybody's just realizing that every agent needs a box. Yes. And it's nice to be called Box and just give everyone a box.
Aaron Levie: Hey, if I, you know, if we can make that go viral, uh, like I, I think that that terminology, I, that's the...
swyx: tagline. Every agent...
Aaron Levie: needs a box. Every agent needs a box. If we can make that the headline of this, I'm fine with this. And that's the billboard I wanna... like, yeah, exactly. Every agent needs a box. Um, I like it. Can we ship this? Like...
swyx: okay, let's do it. Yeah.
Aaron Levie: Uh, my work here is done and I got the value I needed outta this podcast. Drinks.
swyx: Yeah.
Agent Governance and Identity
Aaron Levie: But, but, um, but, but, you know, so the thing that we, we kind of think about is, um, is, you know, whether you think the number's 10x or a hundred x or whatever the number is, we're gonna have some order of magnitude more agents than people. That's inevitable. It has to happen. So then the question is: what is the infrastructure that's needed to make all those agents effective in the enterprise? Make sure that they are well governed. Make sure they're only doing [00:05:00] safe things on your information. Make sure that they're not getting exposed the data that they shouldn't have access to. There's gonna be just incredibly, spectacularly crazy security incidents that will happen with agents, because you'll prompt-inject an agent and sort of find your way through the CRM system and pull out data that you shouldn't have access to.
Jeff Huber: Oh, we have, God...
Aaron Levie: Right?
I mean, that's just gonna happen all over the place, right? So, so then the thing is: how do you make sure you have the right security, the permissions, the access controls, the data governance? Um, we actually don't yet exactly know in many cases how we're gonna regulate some of these agents, right? If you think about an agent in financial services, does it have the exact same financial sort of, uh, requirements that a human did? Or is the risk fully on the human that was interacting with, or created, the agent? All open questions, but no matter what, there's gonna need to be a layer that manages the, the data they have access to, the workflows that they're involved in, pulling up data from multiple systems. This is the new infrastructure opportunity in the era of agents.
swyx: You have a piece on agent identities, [00:06:00] which I think was today, um, which I think a lot of, breaking news, the security, security people are talking about, right? Like, you basically... I, I always think of this as, like, well, you need the human you, and then you need the agent you.
Aaron Levie: Yes.
swyx: And, uh, well, I don't know if it's that simple, but is Box going to have an opinion on that, or are you just gonna be like, well, we're just sort of the, the source layer? Yeah. Let's let Okta or Auth0 handle that.
Aaron Levie: I think we're gonna have an opinion, and we will work with generally wherever the contours of the market end up. Um, and the reason that we're gonna have an opinion more than other topics, probably, is because one of the biggest use cases for why your agent might need an identity is for file system access. So thus we have to kind of think about this pretty deeply. And I think, uh, unless you're, like, in our world thinking about this particular problem all day long, it might be, you know, like, why is this such a big deal?
And the reason why it's a really big deal is because sometimes people sort of say, well, just give the agent an account on the system and treat it like every other type of user on the system. The [00:07:00] problem is that I as Aaron don't really have any responsibility over anybody else's Box account in our organization. I can't see the Box account of any other employee that I work with. I am not liable for anything that they do. And they have, you know, strict privacy requirements on everything that they, that they work on. Agents don't have those properties. The person who creates the agent probably is gonna, for the foreseeable future, take on a lot of the liability of what that agent does. That agent doesn't deserve any privacy, because, because it's, you know, it can't fully be autonomously operated and it doesn't have any legal, you know, kind of, you know, responsibility. So thus you can't just be like, oh, well, I'll just create a bunch of accounts and then I'll, I'll kind of work with that agent and I'll talk to it occasionally. Like, you need oversight of that. And so then the question is: how do you have a world where the agent, sometimes, you have oversight of, but what if that agent goes and works with other people? That person over there is collaborating with the agent on something; you shouldn't have [00:08:00] access to what they're doing. So we have all of these new boundaries that we're gonna have to figure out. Of, of, you know... it's really, really easy... so far we've been in, in easy mode. We've hit the easy button with AI, which is: the agent just is you. And when you're in Claude Code and you're in Cursor and you're in Codex, you're just... the agent is you. You're auth'ing into your services. It can do everything you can do. That's the easy mode. The hard mode is agents are kind of running on their own.
People check in with them occasionally; they're doing things autonomously. How do you give them access to resources in the enterprise and not dramatically increase the security risk, and the risk that you might expose the wrong thing to somebody? These are all the new problems that we have to get solved. I like the identity layer and, and identity vendors as being a solution to that, but we'll, we'll need some opinions as well, because so many of the use cases are these collaborative file system use cases, which is: how do I give an agent a subset of my data? Give it its own workspace as well, 'cause it's gonna need to store off its own information that would be relevant for it. And how do I have the right oversight into that? [00:09:00]
Jeff Huber: One thing which, um, I think is kind of interesting to think about is, you know, how humans work, right? Like, I may not also just, like, give you access to the whole file. I might, like, sit next to you and, like, scroll to this, like, one part of the file and just show you that, like, one part, and, like, you know...
swyx: partial file access.
Jeff Huber: I'm just saying, I think, like... RAG does seem to be dead, right? Like, if you wanna say something is dead, uh-huh, probably RAG is dead. And, uh, like, the auth story to me seems, like, incredibly unsolved and unaddressed by, like, the existing state of, like, AI vendors. But...
Aaron Levie: Yeah, I think, um, I mean, you're taking it obviously really to the level limit that we probably need to solve for. Yeah. And we built an access control system that was, was kind of, like, you know, its own little world for, for a long time. And, um, and the idea was this: it's a many-to-many collaboration system where I can give you any part of the file system. And it's a waterfall model. So if I give you higher up in the, in the, in the system, you get everything below.
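Aaron's "waterfall" model (grant a principal a node in the folder tree, and they inherit everything below it) can be sketched in a few lines. This is a toy illustration, not Box's actual API; the `Folder` class and `can_access` check are invented for the example:

```python
# Toy sketch of the "waterfall" permission model described above: a grant
# at any node in the folder tree implicitly covers everything beneath it.
# All names here are invented for illustration; this is not Box's real API.

class Folder:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.grants = set()  # principals (humans or agents) granted at this node

    def grant(self, principal):
        self.grants.add(principal)

    def can_access(self, principal):
        # Walk up the tree: a grant at this node or any ancestor applies.
        node = self
        while node is not None:
            if principal in node.grants:
                return True
            node = node.parent
        return False

root = Folder("corp")
deals = Folder("deals", parent=root)
deal_a = Folder("deal-a", parent=deals)

deals.grant("sally")        # Sally gets the whole deals subtree
deal_a.grant("agent-007")   # the agent gets only one deal room

print(deal_a.can_access("sally"))     # True - inherited from "deals"
print(deals.can_access("agent-007"))  # False - grant was lower in the tree
```

The same check applies unchanged to an agent principal, which is exactly why the follow-on questions (which subtree does the agent get, and who oversees it?) are the hard part.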
And that, that kind of created immense flexibility, because I can kind of point you to any layer in the, in the tree, but then you're gonna get access to everything kind of below it. And that [00:10:00] mostly is, is working in this, in this world. But you do have to manage this issue, which is: how do I create an agent that has access to some of my stuff and somebody else's stuff as well? Mm-hmm. And which parts do I get to look at as the creator of the agent? And, and these are just brand new problems. Yeah. Crazy. And humans... when there was a human there, that was really easy to do. Like, like, if the three of us were all sharing, there'd be a Venn diagram where we'd have an overlapping set of things we've shared, but then we'd have our own ways that we shared with each other. In an agent world, somebody needs to take responsibility for what that agent has access to and what they're working on. These are, like, some of the most probably, you know, boring problems for 98% of people on, on the internet, but they will be the problems that are the difference between: can you actually have autonomous agents in an enterprise context...
swyx: Yeah.
Aaron Levie: ...that are not leaking your data constantly.
swyx: No, like, I mean, you know, I run a very, very small company for my conference and, like, we already have data sensitivity issues.
Aaron Levie: Yes.
swyx: And some of my team members cannot see...
Aaron Levie: Yes.
swyx: ...uh, the others, and, like, I can't imagine what it's like to run a Fortune 500 and, like, you have to [00:11:00] worry about this. I'm just kinda curious, like, you, you talk to a lot... like, like, 70, 80% of your cus... uh, of the Fortune 500 are your customers.
Aaron Levie: Yep. 67%. Just so we're being very precise.
swyx: So yeah, I'm not...
Aaron Levie: Okay. Okay.
swyx: Something I'm rounding up. Yes. Round up.
I'm projecting to, for...
Aaron Levie: the government.
swyx: I'm projecting to the end of the year.
Aaron Levie: Okay.
swyx: There you go.
Aaron Levie: You do make it sound like, like we, we... well, we've gotta be on this. Like, we're, we're taking way too long to get to 80%.
swyx: Well, no, I mean, so, like, how are they approaching it, right? Because you're... you don't have a, you don't have a final answer yet.
Why Coding Agents Took Off First
Aaron Levie: Well, okay, so, so this is actually... this is the stark reality that, like, unfortunately is kinda like pouring the water on the party a little bit.
swyx: Yes.
Aaron Levie: We all in Silicon Valley, like, have the absolute best conditions possible for AI ever. And I think we all saw the Dwarkesh, you know, kind of Dario podcast and this idea of AI coding. Why has that taken off? And, and we're not yet fully seeing it everywhere else. Well, look, if you just, like, enumerated the list of properties that AI coding has and then compared it to other [00:12:00] knowledge work... let's just, let's just go through a few of them. Generally speaking, you bring on a new engineer, they have access to a large swath of the code base. Like, there's, like... a new engineer comes on, they can just go and find the, the, the stuff that they, they need to work with. It's a fully text-in, text-out medium. It's only... it's just gonna be text at the end of the day. So it's, like, really great from a, from just a, uh, you know, kinda what-the-agent-can-work-with standpoint. Obviously the models are super trained on that dataset. The labs themselves have a really strong, kind of self-reinforcing positive flywheel of why they need to do, you know, agent coding deeply.
So then you get just better tooling, better services. The actual developers of the AI are daily users of the, of the thing that they're working on, versus, like, the... you know, probably there's only, like, seven Claude Cowork legal plugin users at Anthropic any given day, but there's, like, a couple thousand Claude Code, you know, users every single day. So just, like, think about which one they're getting more feedback on, all day long. So you just go through this list. You have a... you know, everybody who's a [00:13:00] developer by definition is technical, so they can go install the latest thing. We're all generally online, or at least, you know, kinda the weird ones are, and we're all talking to each other, sharing best practices. Like, that's already eight differences versus the rest of the economy. Every other part of the economy has, like, six to seven headwinds relative to that list. You go into a company, you're a banker in financial services, you have access to, like, a, a tiny little subset of the total data that's gonna be relevant to do your job. And you have to start to go and talk to a bunch of people to get the right data to do your job, because Sally didn't add you to that deal room, you know, folder. And that, you know, the information is actually in a completely different organization that you now have to go and, and sort of run into. And it's like you have this endless list of access controls and security. As, as you talked about, you have a medium which is not just text, right? You have, you have a Zoom call that, that you're getting all of the requirements from the customer on. You have a lot of in-person conversations and you're doing in-person sales, and, like, how do you ever [00:14:00] digitize all of that information?
Um, you know, I think a lot of people got upset with this idea that the code base has all the context. Um, I don't know if you followed... did you follow some of that conversation that went viral? It's, like, you know, it's not that simple, that, that the code base doesn't have all the knowledge, but, like... you're a lot better off than you are with other areas of knowledge work. Like, we, like, have documentation practices, you write specifications. Those things don't exist for, like, 80% of work that happens in the enterprise. That's the divide that we have, which is, which is: AI coding has, has just fully, you know... we've reached escape velocity of how powerful this stuff is, and then we're gonna have to find a way to bring that same energy and momentum to all these other areas of knowledge work, where the tools aren't there, the data's not set up to be there, the access controls don't make it that easy. The context engineering is an incredibly hard problem, because again, you have access control challenges, you have different data formats, you have end users that are gonna need to kind of be trained through this, as opposed to them adopting [00:15:00] these tools in their free time. That's where the Fortune 500 is. And so we, I think, you know, have to be prepared as an industry where we are gonna be on a multi-year march to, to be able to bring agents to the enterprise for these workflows. And I think probably the, the thing that we've learned most in coding that, that the rest of the world is not yet, I think, ready for... I mean, they'll, they'll have to be ready for it, because it's just gonna inevitably happen... is, I think, in coding. What, what's interesting is if you think about the practice of coding today versus two years ago: it's probably the most changed workflow in maybe the history of time, in terms of how much it's changed, right? Yeah.
Like, like has any, has any workflow in the entire economy changed that quickly, in terms of the amount of change? I just... you know, at least in any knowledge worker workflow, there's, like, very rarely been an event where one piece of technology and work practice has so fundamentally, you know, changed, changed what you do. Like, you don't write code, you talk to an agent and it goes and [00:16:00] does it for you, and you maybe at best review it. And even that's even probably, like, largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works. Mm-hmm. All of the economy has to go through that exact same evolution. The rest of the economy is gonna have to update its workflows to make agents effective, and to give agents the context that they need, and to actually figure out what kind of prompting works, and to figure out how do you ensure that the agent has the right access to information to be able to execute on its work. I... you know, this is not the panacea that people were hoping for, of: the agent drops in, just automates your life. Like, you have to basically re-engineer your workflow to get the most out of agents, and, uh, and that, that's just gonna take, you know, multiple years across the economy. Right now it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this, 'cause [00:17:00] you'll see compounding returns, but that's just gonna take a while for most companies to actually go and get this deployed.
swyx: I love, I love pushing back. I think that that is what a lot of technology consultants love to hear, this sort of thing, right? Yeah, yeah, yeah. First to, to embrace the AI. Yes. To get to the promised land, you must pay me so much money, a hundred percent, to adopt the prescribed way of, uh, conforming to the agents. Yes.
And I worry that you will be eclipsed by someone else who says, no, come as you are.
Aaron Levie: Yeah.
swyx: And we'll meet you where you are.
Aaron Levie: And, and, and what was the thing that went viral a week ago? OpenAI probably, uh, is hiring FDEs. Yeah. Uh, to go into the enterprise. Yeah. Yeah. And then Anthropic is embedded at Goldman Sachs. Yeah. So if the labs are having to do this, if, if the labs have decided that they need to hire FDEs and professional services, then I think that's a pretty clear indication that there's no easy mode of workflow transformation. Yeah. Yeah. So, so to your point, I think actually this is a market opportunity for, you know, new professional services and consulting [00:18:00] firms that are, like, agent builders, and they, and they kind of, you know, go into organizations and they figure out how to re-engineer your workflows to make them more agent-ready and get your data into the right format and, you know, reconstruct your business process. So you're, you're not doing most of the work; you're telling agents how to do the work and then you're reviewing it. But I haven't seen the thing that can just drop in and, and kinda let you not go through those changes.
swyx: I don't know how that kind of sales pitch goes over. Yeah. You know, you're, you're saying things like, well, in my sort of nice, beautiful walled garden, here's... there's, uh... here's this beautiful Box account that has everything.
Aaron Levie: Yes.
swyx: And I'm like, well, most, most real life is extremely messy. Sure. And, like, poorly named, and there's duplicate, outdated s**t.
Aaron Levie: A hundred percent. And so... no, no, a hundred percent. And so this is actually... no. So, so this is... I mean, we agree that, that getting to the beautiful garden is gonna be tough.
swyx: Yeah.
Aaron Levie: There's also the other end of the spectrum, where I, I just, like... it's a technical impossibility to solve.
The agent truly, truly cannot get enough context to make the right decision in the, in the incredibly messy land. Like, there's [00:19:00] no AGI that will solve that. So, so we're gonna have to kind of land somewhere in between, which is, like, we all collectively get better at documentation practices and, and having authoritative, relatively up-to-date information and putting it in the right place. Like, agents will, will certainly cause us to be much better organized around how we work with our information, simply because the severity of the agent pulling the wrong data will be too high, and the productivity gain that you'll miss out on by not doing this will be too high as well... that your competition will just do it and they'll just have higher velocity. So, uh, and, and we, we see this a lot firsthand. So we, we built a series of agents internally that kind of have access to your full Box account, and you give it a task and it can go find whatever information you're looking for and work with it. And, you know, thank God for the model progress, but, like, if, if you gave that task to an agent nine months ago, you're just gonna get lots of bogus answers, because it's gonna, it's gonna say, hey, here are [00:20:00] five, you know, documents that all kind of smell like the right thing. And I'm gonna... but I... but you're, you're putting me on the clock, 'cause my system prompt says, like, you know, be pretty smart, but also try and respond to the user, and it's gonna respond. And it's like, ah, it got the wrong document. And then you do that once or twice as a knowledge worker and you're just never...
swyx: again.
Aaron Levie: Never again. You're just, like, done with the system.
swyx: Yeah. It doesn't work.
Aaron Levie: It doesn't work.
And so, you know, Opus 4.6 and Gemini 3.1 Pro and whatever the latest GPT-5.3 will be, those things are getting better and better, and using better judgment. With all of these updates to the agentic tool and search systems, we're seeing very real progress, where the agent can almost smell when something's a little bit fishy. We have this process where we have it fan out, do a bunch of searches, pull up a bunch of data, and then it has to do its own ranking of what are the right documents that it should be working with.

And the intelligence level of a model six months ago, [00:21:00] it'd be just throwing a dart: I'm gonna grab these seven files, and I pray that that's the right answer. And something like an Opus, first 4.5 and now 4.6, is like, no, that one doesn't seem right relative to this question, because I'm seeing some signal that contradicts where the document would normally be in the tree and who should have access. It's doing all of that kind of work for you. But it still doesn't work if you just have a total wasteland of data. It's just not possible, partly because a human wouldn't even be able to do it. So basically, if a really, really smart human could not do that task in five or ten minutes, for a search-retrieval type task, your agent's not gonna be able to do it any better. You see this all day long.

Context Engineering and Search Limits

swyx: So this touches on a thing that Jeff is just passionate about, which is context engineering. I'm just gonna let you ramble or riff on context engineering.
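The fan-out-and-rank loop Aaron describes can be sketched in a few lines. This is an illustrative sketch, not Box's implementation; every function name here is hypothetical, and in practice the reformulation and scoring steps would be LLM calls rather than the toy lambdas used below.

```python
# Sketch of the fan-out search pattern: issue several reformulated
# searches, pool the hits, then have the model rank the candidates
# against the original question before anything enters the context.

def fan_out_search(question, reformulate, search, score):
    """Run one search per query reformulation, dedupe, rank by score."""
    queries = reformulate(question)            # e.g. an LLM generating variants
    candidates = {}
    for q in queries:                          # fan out: one search per variant
        for doc_id, snippet in search(q):
            candidates.setdefault(doc_id, snippet)
    # the agent's own ranking pass: score each candidate vs. the question
    ranked = sorted(candidates.items(),
                    key=lambda kv: score(question, kv[1]),
                    reverse=True)
    return ranked[:5]                          # keep a handful for the window

# Toy stand-ins so the sketch runs end to end.
docs = {1: "NY office lease", 2: "London office address", 3: "lunch menu"}
hits = fan_out_search(
    "office addresses",
    reformulate=lambda q: [q, "office locations"],
    search=lambda q: docs.items(),
    score=lambda q, s: 1.0 if "office" in s else 0.0,
)
```

The key design point from the conversation is the second pass: the model grades its own retrieval rather than trusting whatever "smells like the right thing."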
If there's anything... he did really good work on Context Rot, which has really taken over as the term that people use, and the reference.

Aaron Levie: A hundred percent. All we think about is the context rot problem. [00:22:00]

Jeff Huber: Yeah, there are certainly a lot of ranking considerations. Agentic search I think is incredibly promising. Um, yeah, I was trying to generate a question, though. I think I have a question right now, swyx.

Aaron Levie: Yeah, no, but I think there was this moment, I don't know, two years ago, before we knew where the gotchas were gonna be in AI, when someone was like, well, infinite context windows will just solve all of these problems, because you'll just give the context window all the data. And it's like, okay, maybe in 2035 this is a viable solution. First of all, it would simply cost too much. We just can't give the model the 5,000 documents that might be relevant and have it read them all. And I've seen enough to start believing in crazy stuff, so I'm willing to just say, sure...

swyx: Never say never.

Aaron Levie: In ten years from now, we'll have infinite context windows at a thousandth of the price of today. Let's just believe that that's possible. But we're in reality today. So today we have a context engineering [00:23:00] problem, which is: I've got 200,000 tokens that I can work with, or, I don't even know what the latest graph says about where massive degradation starts. Okay, say I have 60,000 tokens that I get to work with where I'm gonna get accurate information. That's not a lot of tokens for a corpus of 10 million documents that a knowledge worker might have across all of the teams and all the projects and all the people they work with.
I have 10 million documents, which maybe is times five pages per document or something like that, so I'm at 50 million pages of information, and I have 60,000 tokens. Like, holy s**t. How do I bridge the 50 million pages of information with the couple hundred pages that I get to work with in that token window?

This is such an interesting problem, and that's why so much of the work is actually the search systems and the databases; that layer has to get so locked in. But the models are getting better, and importantly, [00:24:00] knowing when they've done a search and found the wrong thing: they go back, they check their work, they find a way to balance appeasing the user versus double-checking.

We have this one test case where we ask the agent to go find ten pieces of information.

swyx: Is this the complex work eval?

Aaron Levie: This is actually not in the eval. We have a bunch of internal benchmark scenarios that we run every time we update our agent. We have one where I ask it to find all of our office addresses, and I give it the list of the ten offices that we have. And there's not one document that has this. Maybe there should be; that would be a great example of the kind of thing where maybe, over time, companies start to have these canonical key areas of knowledge that you need to have. We don't seem to have this one document that says, here are all of our offices. We have a bunch of documents that have, here's the New York office, and whatever. So you task this agent, and you say, I need the addresses for these ten offices. Okay. And by the way, if you do this on any [00:25:00] public chat model, the same outcome is gonna happen.
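The arithmetic Aaron is doing out loud can be made concrete. A back-of-envelope sketch using the round numbers from the conversation; the tokens-per-page figure is an assumption, since the episode doesn't give one.

```python
# Rough version of the corpus-vs-window ratio discussed above.
# All figures are illustrative round numbers from the conversation.

DOCS = 10_000_000          # documents in a large enterprise corpus
PAGES_PER_DOC = 5          # rough average cited in the conversation
TOKENS_PER_PAGE = 500      # assumed rule of thumb, not from the episode

corpus_pages = DOCS * PAGES_PER_DOC              # 50 million pages
corpus_tokens = corpus_pages * TOKENS_PER_PAGE
usable_window = 60_000                           # tokens before heavy degradation

# What a single context window can actually hold:
pages_in_window = usable_window // TOKENS_PER_PAGE   # a couple hundred pages
fraction = usable_window / corpus_tokens             # tiny sliver of the corpus
```

Under these assumptions the window holds roughly 120 pages out of 50 million, which is the "couple hundred pages" gap that search and ranking have to bridge.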
But for this kind of query, you say, I need these ten addresses. How many times should the agent go and do its search before it decides whether there's just no answer to this question? Often, and especially with the, let's say, lower-tier models, it'll come back and give you six of the ten addresses, and it'll just say, I couldn't find the other...

swyx: Four. It doesn't know what it doesn't know.

Aaron Levie: It doesn't know what it doesn't know. Yeah. So the model is just like, when should it stop? Should it do that task for literally an hour and just keep cranking through? Maybe I actually made up an office location, and it doesn't know that I made it up; I didn't even know that I made it up. Should it just keep re-reading every single file in your entire Box account until it has exhausted every single piece of information?

swyx: Expensive.

Aaron Levie: These are the new problems that we have. So something like a new Opus model is like, okay, I'm gonna try these types of queries. I didn't get exactly what I wanted. I'm gonna try again. And at [00:26:00] some point I'm gonna stop searching, because I've determined that no amount of searching is gonna solve this problem; I'm just not able to do it. And that judgment is a really new thing that the model needs to be able to have: when should it give up on a task, because it just can't find the thing. That's the real world of knowledge work problems. And this is the stuff that the coding agents don't have to deal with.
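The give-up judgment described here amounts to an explicit search budget plus an honest report of what's missing. A hedged sketch of the office-address task from the conversation, with made-up names and a toy search function standing in for the real retrieval calls:

```python
# Sketch of a stopping rule: retry each lookup under a fixed budget,
# then report what could not be found instead of searching forever
# (or, worse, silently returning a partial answer).

def find_addresses(offices, search, max_attempts_per_office=3):
    found, missing = {}, []
    for office in offices:
        for attempt in range(max_attempts_per_office):
            result = search(office, attempt)   # e.g. reformulate per attempt
            if result is not None:
                found[office] = result
                break
        else:
            # budget exhausted: give up on this office explicitly
            missing.append(office)
    return found, missing

offices = ["New York", "London", "Atlantis"]   # one deliberately unfindable
known = {"New York": "100 Main St", "London": "1 High St"}
found, missing = find_addresses(offices, lambda office, attempt: known.get(office))
```

The point is the explicit `missing` list: the agent says "I could not find Atlantis" rather than returning six of ten addresses as if the job were done.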
Because with coding, you're not usually asking it about existing information; you're mostly creating net new information coming right out of the model. Obviously it has to know about your code base and your specs and your documentation, but when you deploy an agent on all of your data, now you have all of these new problems that you're dealing with.

Jeff Huber: Our follow-up research to Context Rot is actually on agentic search. We've stress-tested frontier models and their ability to search, and they're not actually that good at searching. So you're sort of highlighting this explore-exploit...

swyx: You're just the Debbie Downer, saying everything doesn't work.

Jeff Huber: Somebody has to be.

Aaron Levie: Um, can I just throw out one more thing that is different between coding and the rest [00:27:00] of knowledge work, that I failed to mention? One other key point is that, at the end of the day, whether you believe we're in a slop apocalypse or whatever, if you've built a working solution, that is ultimately what the customer is paying for. Whether I have a lot of slop, a little slop, or whatever: I'm sure there are lots of code bases we could go into at enterprise software companies that are just crazy slop that humans did over a 20-year period, but the end customer just gets this little interface. They can type into it, it does its thing. Knowledge work doesn't have that property.
If I have an AI model generate a contract, and I generate that contract 20 times, and all 20 times it's just 3% different, that kind of slop introduces all new kinds of risk for my organization that the code version of that slop didn't introduce. So how do you constrain these models to just the part that you want [00:28:00] them to work on, and just do the thing that you want them to do? And, you know, in engineering, you can't be disbarred as an engineer, but you can be disbarred as a lawyer. You can do the wrong medical thing in healthcare. There's no equivalent to that in engineering.

swyx: Do you want there to be? Because I've considered it for software engineering.

Jeff Huber: What's that? Civil engineering has it, right? Not software.

Aaron Levie: Civil engineering, sure. But in any of our companies, you'll be forgiven if you took down the site. We'll do a rollback, and you'll be in a meeting, but you have not been disbarred as an engineer. We don't revoke your computer science degree.

Jeff Huber: Blameless postmortem.

Aaron Levie: Yeah, exactly. So maybe we collectively as an industry need to figure out what you're liable for, not legally, but in a management sense, with these agents. All sorts of interesting problems that have to come out. But in knowledge work, that's the real hostile environment that we're operating in.

swyx: I do think a lot of last year's, 2025's, story was the rise of coding agents, and I think the [00:29:00] 2026 story is definitely knowledge work agents.

Aaron Levie: Yes, a hundred percent.

swyx: Right. And I think OpenClaw and Cowork are just the beginning. The next one's gonna be absolute craziness.

Aaron Levie: It is.
And it's gonna be this wave where we try to bring over as many of the practices from coding, because that will clearly be the forefront: tell an agent to go do something, it has access to a set of resources, and you're responsible for reviewing the work at the end of the process. That to me is the template that goes across knowledge work. Cowork is a great example. OpenClaw is a great example. You can sort of see what Codex could become over time. These are some really interesting platforms that are emerging.

swyx: Okay. We touched on evals a little bit. You had the report that you were gonna bring up, and then I was gonna go into Box's evals, but go ahead: talk about your agentic search thing.

Jeff Huber: Yeah, a few of the insights. Number one, frontier models are not good at search. Humans have this [00:30:00] natural explore-exploit trade-off where we understand when to stop doing something. Also, humans are pretty good at forgetting, at pruning their own context, whereas agents are not. If an agent's context history has something it knew was bad, and you can even see the reasoning in the trace, hey, that probably wasn't a good idea: if it's still in the trace, still in the context, it'll do it again. And so I think pruning is also gonna be really... it's already becoming a thing, right? Letting agents self-prune the context window is gonna be a big deal.

swyx: Yeah. So don't leave the mistake in there. Cut out the mistake, but tell it that it made a mistake in the past, so it doesn't repeat it.

Jeff Huber: Yeah. But cut it out so it doesn't get distracted by it again.
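The prune-but-remember idea swyx and Jeff land on can be sketched directly: drop the failed tool call's bulky output from the message history, but leave a one-line note so the agent knows the path was already tried. The message shape below is illustrative, not any particular agent framework's format.

```python
# Sketch of context pruning: replace a failed tool call's full output
# with a short reminder, so the mistake neither pollutes the context
# as a few-shot example nor gets silently retried.

def prune_failures(history):
    pruned = []
    for msg in history:
        if msg.get("role") == "tool" and msg.get("error"):
            # swap the failed call's long output for a one-line note
            pruned.append({
                "role": "system",
                "content": f"Note: tool '{msg['name']}' already failed; do not retry it.",
            })
        else:
            pruned.append(msg)
    return pruned

history = [
    {"role": "user", "content": "Find the Q3 report"},
    {"role": "tool", "name": "search_drive", "error": True,
     "content": "...3,000 tokens of irrelevant hits..."},
    {"role": "assistant", "content": "Trying another index."},
]
cleaned = prune_failures(history)
```

This is the compromise from the exchange above: cut out the distracting trace, but keep a record that the mistake happened.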
Because it will repeat its mistake just because it's in...

swyx: The context.

Jeff Huber: It's in the context.

Aaron Levie: It's in the context, so it's a few-shot example: oh, this is a great thing to go try, even if it didn't work.

Jeff Huber: Yeah, exactly. So there's a bunch of stuff there.

Aaron Levie: It's just Groundhog Day inside these models: I'm gonna keep doing the same wrong thing.

Jeff Huber: In a hand-wavy sense, I feel like, as a creative analogy, you're trying to fit a manifold in latent space, which is kind of doing program synthesis, which is kind of how we think about what we're doing. Certain [00:31:00] facts might be sort of overly pinning it to certain sectors of latent space...

swyx: We have a bell; our editor rings a bell every time you say that.

Jeff Huber: So you have to remove those, like...

swyx: You should have a gong, like TBPN or something.

Jeff Huber: You remove those links to kind of give it the freedom to do what it needs to do. But yeah, we'll release more soon.

Aaron Levie: That's awesome.

Jeff Huber: That'll be cool.

swyx: We're a cerebral podcast; people listen to us and think really deep thoughts. So we try to keep it subtle.

Aaron Levie: Okay, fine.

Inside Agent Evals

swyx: You guys do have evals. You talked about your office thing, but you've also been promoting APEX and complex work. Wherever you wanna take this.

Aaron Levie: APEX is obviously Mercor's agent eval. We supported that by opening up some data for them around how we see these data workspaces in the regular economy.
So how do lawyers have a workspace? How do investment bankers have a workspace? What kind of data goes into those? And so we [00:32:00] partnered with them on their APEX eval.

Our own eval is actually relatively straightforward. We have a set of documents in a range of industries that we give the agent. We previously did this as a one-shot test of purely the model, and then we realized, based on where everything's going, it's just gotta be more agentic. So now it's a bit more of a test of both our harness and the model. And we have a rubric of a set of things it has to get right, and we score it. And you're just seeing these incredible jumps in almost every single model within its own family: Opus 4, Sonnet 4.6 versus Sonnet 4.5...

swyx: Yeah. We have this up on screen.

Aaron Levie: Okay, cool. I forget the total; it was like a 15-point jump, I think, on the overall.

swyx: Yes.

Aaron Levie: It's just these incredible leaps that are starting to happen.

swyx: And it's completely held out; the models haven't seen any of it?

Aaron Levie: This is not in any public data, which has benefits. This is just a private eval that we [00:33:00] do, and we just happen to show it to the world. So you can't train against it. And I think it's representative of the models' reasoning capabilities, their test-time compute capabilities, thinking levels, all the context rot issues. So many interesting capabilities that are now improving.

swyx: One sector that you have is interesting.

Industries and Datasets

swyx: People are roughly familiar with healthcare and legal, but you have public sector in there.

Aaron Levie: Yeah.

swyx: What's that?
Like, what is that?

Aaron Levie: Yeah, we actually test against, I dunno, maybe ten industries, and we usually end up just showing the few that we think have interesting gains. Public sector has a lot of government-type documents.

swyx: What is that, government-type documents?

Aaron Levie: Government filings.

swyx: Like a tax return?

Aaron Levie: Probably not tax returns. It would be more of what the government would be using as data. So think about research, those types of data sets. And then we have financial services, for things like data rooms and what would be in an investment prospectus.

swyx: That one you can dogfood.

Aaron Levie: Yeah, exactly. Yes. [00:34:00] So we run the models, now in more of an agent mode, but still with kind of limited capacity, and just try to see, on a like-for-like basis, what the improvements are. And again, we just continue to be blown away by how good these models are getting.

swyx: Yeah, I mean, I think every serious AI company needs something like that: this is the work we do, here's our company eval. And if you don't have it, well, you're not a serious AI company.

Aaron Levie: There are two dimensions, right? There's how the models are improving, so which models should you recommend a customer use, or which should you adopt. But then every single day we're making changes to our agents, and you need to know...

swyx: If you regressed.

Aaron Levie: If you regressed, yeah. I've become fully convinced that the whole agent observability and eval space is gonna be a massive space. Super excited for what Braintrust is doing; excited for LangSmith, all the things. And right now, it's literally the AI companies that are the customers of these tools.
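The rubric-style scoring Aaron describes, plus the daily "did we regress" check, can be sketched in a few lines. This is a toy illustration, not Box's actual eval; the rubric items and answers are made up, and real checks would be LLM-graded rather than substring matches.

```python
# Sketch of rubric-based eval scoring: grade an answer as the fraction
# of required rubric items it satisfies, then compare runs over time to
# catch regressions after each change to the agent.

def score_against_rubric(answer, rubric):
    """rubric: list of (name, predicate) pairs; returns a 0-100 score."""
    passed = sum(1 for _, check in rubric if check(answer))
    return round(100 * passed / len(rubric))

rubric = [
    ("finds the New York address", lambda a: "100 Main St" in a),
    ("finds the London address",   lambda a: "1 High St" in a),
    ("admits what it missed",      lambda a: "could not find" in a),
]

yesterday = score_against_rubric("NY: 100 Main St.", rubric)
today = score_against_rubric(
    "NY: 100 Main St. London: 1 High St. Atlantis: could not find.", rubric)

regressed = today < yesterday   # the check to run on every agent change
```

Note the third rubric item rewards honest "could not find" answers, tying back to the stopping-judgment problem discussed earlier.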
Every enterprise will have this. You'll just [00:35:00] have to have an eval of all of your work: you'll have an eval of your RFP generation, an eval of your sales material creation, an eval of your invoice processing. And as you buy or use new agentic systems, you're gonna need to know the quality of your pipeline.

swyx: Yeah.

Aaron Levie: So, a huge market with agent evals.

Building the Agent Team

swyx: And I'm gonna shout out your team a bit. Your CTO, Ben, did a great talk with us last year.

Aaron Levie: Awesome.

swyx: And he's gonna come back again for World's Fair.

Aaron Levie: Oh, cool. Yep.

swyx: Just talk about your team; brag a little bit. I think people take these eval numbers and pretty charts for granted, but there are lots of really smart people at work behind all this.

Aaron Levie: Biggest shout-out is we have a couple folks, Dya and Sidarth, that kind of run this; they're a tag-team duo on our evals. Ben, our CTO, is heavily involved; Yasha, head of AI; a bunch of folks. And evals is one part of the story; the full AI and agent team [00:36:00] is core to this whole effort. So there's probably, I don't know, maybe a few dozen people that are the epicenter, and then you have layers and layers of concentric circles: there's a search team that supports them, and an infrastructure team that supports them, and it's starting to ripple through the entire company.
But there's that kind of core agent team that's a pretty close-knit group.

swyx: The search team is separate from the infra team?

Aaron Levie: I mean, we have to do every layer of the stack ourselves, except for pure public cloud. I don't even know what our public numbers are, but you can just think of it as: a lot of data is stored in Box. So we have every layer of the stack: how you manage the data, the file system, the metadata system, the search system, all of those components. And then they all have to understand that now you've got this new customer, [00:37:00] which is the agent. They've been building for two types of customers in the past: users and applications. And now you've got this new agent user, and it comes in with different properties sometimes; like, hey, maybe sometimes we should do an embedding-based search versus your typical semantic search. You just have to build the capabilities to support all of this. And we're testing stuff, throwing things away when something doesn't work or isn't relevant. It's total chaos. But all of those teams are supporting the agent team, which is coming up with its requirements of what we need.

swyx: Yeah. We just came from a fireside chat you did, and you talked about how you're doing this. It's kind of like an internal startup within the broader company. The broader company is like 3,000 people, but there's a core team of, well, here's the innovation center.

Aaron Levie: Yeah.

swyx: And every company kind of is run this way.

Aaron Levie: Yeah. I wanna be sensitive: I don't call it the innovation center.
Only because I think everybody has to do innovation. There's a part of the company that is sort of do-or-die for the agent wave.

swyx: Yeah.

Aaron Levie: And it happens to be more of my focus simply because it's existential that [00:38:00] we get it right.

swyx: Yeah.

Aaron Levie: All of the supporting systems are necessary. All of the surrounding adjacent capabilities are necessary. The only reason we get to be a platform where you'd run an agent is because we have a security feature, or a compliance feature, or a governance feature that some team is working on. But that's not gonna be the make-or-break of whether we get agents right; that already exists, and we need to keep innovating there. I don't know what the right, exact, precise number is, but it's not a thousand people and it's not ten people. There's a number of people that are the kind of startup within the company, the make-or-break on everything related to AI agents leveraging our platform and letting you work with your data. And that's where I spend a lot of my time, with Ben and Yosh and Diego and Teri, people across the team.

swyx: Yeah. Amazing.

Read Write Agent Workflows

Jeff Huber: How do you think about... I mean, you've talked a lot about read workflows over your Box data, right? Search questions, queries, et cetera. But what about write, or authoring, workflows?

Aaron Levie: Yes. I've [00:39:00] probably already revealed too much, actually, now that I think about it.

Jeff Huber: Whatever you can.

Aaron Levie: Okay. It's just us. Yeah. Okay.
Of course, of course. So I guess I'll make it a little bit conceptual, because I've already said things that are not even GA. But we've kind of danced around it publicly, so, yeah. Hopefully nobody watches this episode.

swyx: It's tidbits for the highly engaged to go figure out what exactly your line of thinking is. They can connect the dots.

Aaron Levie: Sure. Yeah. So I would say that, as the place where you have your enterprise content, there's a use case where I want an agent to read that data and answer questions for me. And then there's a use case where I want the agent to create something: use the file system to create something, or store off data that it's working on, or have various files it's writing to about the work it's doing. So we do see it as a total read-write story. The harder problem so far has been the read, because you have that ten [00:40:00] million-to-one ratio problem, whereas a lot of the writes will just come from the model, and we'll just put them in the file system and use them. So it's a technically easier problem. The only part that's not necessarily technically hard, it's just not yet perfected in the state of the ecosystem, is building a beautiful PowerPoint presentation. It's still a hard problem for these models; these formats just weren't built for them.

swyx: They're working on it.

Aaron Levie: They're working on it. Everybody's working on it.

swyx: Every launch is like, well, we do PowerPoint now.

Aaron Levie: Yeah, it's getting a lot better each time.
But then you'll do this thing where you ask it to update one slide, and all of a sudden the fonts will be just a little bit different on two of the slides, or it moved some shape over to the left a little bit. And again, these are the kinds of things that, in code, you could really care about if you care about how beautiful the code is, but the end user doesn't notice those problems. In file creation, the end user instantly sees it. You're [00:41:00] like, ah, paragraph three: you literally just changed the font on me, a totally different font midway through the document. Those are the kinds of things you run into a lot on the content creation side.

So we are gonna have native agents that do all of those things, and they'll be powered by the leading models and labs. But the thing that I think is probably gonna be a much bigger idea over time is any agent on any system using Box as a file system for its work. And in that scenario, we don't necessarily care what it's putting in the file system. It could put its memory files, its specification documents, whatever its markdown files are, or it could generate PDFs. It's a workspace that is sandboxed off for its work; people can collaborate in it, and it can share with other people. And so we've been thinking a lot about what's the right way to deliver that at scale.

Docs Graphs and Founder Mode

swyx: I wanted to come to the AI transformation, AI operations things. [00:42:00] One of the tweets that you wanted to talk about... this is just me going through your tweets, by the way.

Aaron Levie: Oh, okay.
I mean, you read them one by one. You're the easiest guest to prep for, because you already have, like, this is what I'm interested in.

Aaron Levie: Are we gonna get to, like, February, January or something? Where are we in the timelines? How far back are we going?

swyx: Can you describe Box as a set of skills? Right? That's one of the extremes: well, if you just turn everything into a markdown file, then your agent can run your company. You just have to find the right sequence of words...

Aaron Levie: Yes.

swyx: To do it.

Aaron Levie: Sorry, is that the question?

swyx: So I think the question is, what if we documented everything? The way that you exactly said.

Aaron Levie: Yes.

swyx: Let's get all the Fortune 500s prepared for agents. Everything's golden and nicely filed away.

Aaron Levie: Yes.

swyx: What's missing? What's left? You've run your company for a decade.

Aaron Levie: Yeah. I think the challenge is that that information changes a week later, because something happened in the market for that [00:43:00] customer, or for us as a company, that now has to get updated. So these systems are living and breathing, and they have to experience reality, and updates to reality, which right now is probably gonna be humans giving them the updates. And, you know, there's this piece about context graphs that went very viral.

swyx: Yeah.

Aaron Levie: I thought it was super provocative. I agreed with many parts of it. I disagree with a few parts around...
It's not gonna be as easy as: if we just had the agent traces, then we can finally do that work. Because there's just so much other stuff happening that we haven't been able to capture and digitize. And I think they actually represented that in the piece, to be clear. But there's just a lot of work that has to happen. You can't have only skills files for your company, because there's gonna be a lot of other stuff that happens and changes over time.

swyx: Most companies are practically apprenticeships.

Jeff Huber: Every new employee who joins the team, [00:44:00] you spend one to three months ramping them up.

Aaron Levie: Yes.

Jeff Huber: All that tacit knowledge is not written down.

Aaron Levie: Yes.

Jeff Huber: But it would have to be if you wanted to give it to an agent, right? And so that seems to me to be...

Aaron Levie: One thing is, I think you're gonna see a premium on companies that can document this. There'll be a huge premium on that. Because: can you shorten that three-month ramp cycle to a two-week ramp cycle? That's an instant productivity gain. Can you dramatically reduce rework in the organization because you've documented where all the stuff is and where the answers are? Can you make your average employee as good as your 90th-percentile employee because you've captured the knowledge that's in the heads of those top employees and made it available? So you can see some very clear productivity benefits.
If you had a company culture of making sure your information was captured, digitized, put in a format that was agent-ready, and then made available to agents to work with, you'd get those benefits. And then you just have this reality that, at a 10,000-person [00:45:00] company, mapping that to the access structure of the company is just a hard problem. Not every piece of information that's digitized can be shared with everybody, so now you have to organize that in a way that actually works. There was a pretty good piece called "Your Company Is a File System." Did you see that one?

swyx: Nope.

Aaron Levie: Yes, you saw it. Yeah. I'd actually be curious about your thoughts on it. We agree with it, because that's how we see the world.

swyx: Okay, we have it up on screen.

Aaron Levie: Okay. Yeah. It's all about how we already organize in this kind of permission-structure way, and these are the natural ways that agents can now work with data. So it's kind of an interesting metaphor, but I do think companies will have to start thinking about how they digitize more of that data. What was your take?

Jeff Huber: Yeah, I mean, the company is probably like an ACID-compliant file system.

Aaron Levie: Uh...

Jeff Huber: Which I'm guessing Box is, right?

Aaron Levie: Yes.

swyx: Yeah. [00:46:00]

Jeff Huber: Which you have a great piece on.

swyx: Yeah. Well, my direction is a little different: I wanna rewind a little bit to the graph word you said. That's a magic trigger word for us. I always ask, what's your take on knowledge graphs? Especially of every database person; I just wanna see what they think.
There have been knowledge-graph hype cycles, and you've seen them all.

Aaron Levie: Hmm. I'm actually not the expert in knowledge graphs, so you might need to...

swyx: You don't need to be an expert. I think it's just: how seriously do people take it? Is there a lot of potential in the idea?

Aaron Levie: Well, can I first understand if this is a loaded question, in the sense of: are you super pro, super anti, or medium?

swyx: I see pros and cons. But I think your opinion should be independent of mine.

Aaron Levie: Yeah, no, totally. I just want to see what I'm stepping into.

swyx: No, I know. It's a huge trigger word for a lot of people out there in our audience, and they're trying to figure out why that is.

Aaron Levie: Why is this such a hot item for them?

swyx: Because a lot of people get graph religion. They're like, everything's a graph; of course you have to represent it as a graph. [00:47:00] How do you solve your knowledge changing over time? Well, it's a graph.

Aaron Levie: Yeah.

swyx: And there's that line of work, and then there are a lot of people who are like, well, you don't need it. And both are right.

Aaron Levie: Yeah. And what are the people who say you don't need it arguing for?

swyx: Markdown files. Simplicity.

Aaron Levie: Oh, sure, sure.

swyx: It's structure versus less structure. That's all it is.

Aaron Levie: I think the tricky thing is, again, when this gets met with real humans: they're just going to their computer. They're just working with some people on Slack or Teams. They're just sharing some data through a collaborative file system, in Google Docs or Box or whatever.
I certainly like the vision of most knowledge-graph, futuristic ways of thinking about it. It's just, you know, it's 2026 and we haven't seen it play out yet. I mean, I remember... actually, I don't even know how old you guys are, but to show my age: I remember 17 years ago, everybody thought enterprises would just run on [00:48:00] wikis.

swyx: Yeah.

Aaron Levie: Confluence and... I mean, Confluence actually took off for engineering, for sure, unquestionably. But the idea was that everything would be in the wiki. And based on our general style of what we were building, we were just like, I don't know, people just want a workspace. They're gonna collaborate with other people.

swyx: Exactly. Yeah. So you were anti-knowledge-graph.

Aaron Levie: Not anti, not anti.

swyx: Non.

Aaron Levie: I'm not anti. 'Cause I think your search system... I just think these are two systems that probably... Look, I'm not in any religious war. I don't want to be in anybody's YouTube comments on this. There's not a fight for me.

swyx: We love YouTube comments. We get into the comments.

Aaron Levie: Okay. But it's mostly just a virtue of what we built, and we just continued down that path. That was what we pursued. But this is not a...

swyx: It's not existential for you. Great.

Aaron Levie: We're happy to plug into somebody else's graph. We're happy to feed data into it. We're happy for [00:49:00] agents to talk to multiple systems. Not our fight.

swyx: Yeah.

Aaron Levie: But I need your answer. Yeah.
Graphs as nerd snipes... it's a very effective nerd snipe.

swyx: See, this is one opinion, and then I've...

Jeff Huber: And I think that the actual graph structure is emergent in the mind of the agent, in the same way it is in the mind of the human. And that's a more powerful graph, 'cause it actually evolved over time.

swyx: So: don't tell me how to graph, I'll figure it out myself.

Jeff Huber: Exactly.

swyx: Okay. All right.

Jeff Huber: And what's yours?

swyx: I like the wiki approach. Uh, my... I'm actually
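The graph-versus-markdown tradeoff the three of them are circling can be made concrete with a small sketch. This is purely illustrative (the facts, names, and functions below are invented, and none of it is anything Box or Chroma ships): the same three facts stored as flat markdown, which a keyword search can answer one hop at a time, versus as subject-relation-object triples, which answer multi-hop questions by traversal.

```python
# Illustrative sketch of the two representations discussed above.
# "Markdown files": flat text, retrieved by keyword match.
# "Knowledge graph": explicit typed edges, queried by traversal.
from collections import deque

markdown_notes = """\
# Onboarding
The billing service is owned by the payments team.
The payments team reports to Dana.
Dana approves production deploys.
"""

# The same facts as (subject, relation, object) triples.
triples = [
    ("billing-service", "owned_by", "payments-team"),
    ("payments-team", "reports_to", "dana"),
    ("dana", "approves", "production-deploys"),
]

def neighbors(node):
    """Outgoing edges from one node."""
    return [(rel, obj) for subj, rel, obj in triples if subj == node]

def reachable(start):
    """Multi-hop traversal: every node connected downstream of `start`."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for _, obj in neighbors(node):
            if obj not in seen:
                seen.add(obj)
                queue.append(obj)
    return seen

# A keyword search over markdown answers one-hop questions...
assert "payments team" in markdown_notes.lower()
# ...but "who is ultimately connected to billing deploys?" is a
# multi-hop question the graph answers mechanically:
print(reachable("billing-service"))
```

The tradeoff in miniature: the markdown version is trivial to write and read, while the triple version demands upfront structure but makes "what is reachable from X" a mechanical query. That is roughly the structure-versus-less-structure split swyx describes.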
In this episode, Lex chats with Yoshi Yokokawa, CEO of Alpaca — a brokerage infrastructure company that provides API-based trading and custody services to fintechs and developers globally. The conversation begins with their shared experience at Lehman Brothers during the 2008 financial crisis, where Yoshi worked in fixed income securitization and learned that even when market participants sense a bubble, they keep dancing because timing the exit is impossible. After Lehman's collapse, Yoshi pursued entrepreneurship, building a computer vision AI company acquired by Kyocera before founding Alpaca in 2017. Initially inspired by Robinhood, Yoshi pivoted after experiencing firsthand the friction of accessing brokerage infrastructure—realizing the deeper opportunity was building API-first brokerage rails for developers. Today Alpaca powers 9 million accounts through 300+ partners across 45 countries, recently raising $150 million at a unicorn valuation. The discussion explores how Alpaca follows Robinhood's product roadmap to anticipate partner demand, the challenges of adding crypto, and Yoshi's thesis that finance is undergoing a generational shift from digital to on-chain operations. Lex shares examples of legacy infrastructure dysfunction—from faxing PDFs to TD Ameritrade in 2012 to the Synapse collapse caused by manual CSV uploads—illustrating why Alpaca built its own custody and ledger systems as a path to competing in the $350 trillion global securities custody market. NOTABLE DISCUSSION POINTS: Alpaca's biggest breakthrough was not a better investing app idea, but recognizing that the real bottleneck was brokerage infrastructure. Yokokawa and team initially explored B2C product concepts, but pivoted once they experienced firsthand how painful broker-dealer setup, custody, and clearing integrations were. For readers building fintech, this is a huge lesson: the highest-value opportunity is often the “invisible” infrastructure pain, not the user-facing feature set. 
They found product-market fit by starting with a narrow wedge (API for automated traders) and only then expanding into a broader platform (Broker API for fintech apps). Alpaca did not begin by serving large fintechs; it first attracted power users who urgently needed programmable execution, then used inbound demand ("can I build my own Robinhood?") as proof to build account opening, reporting, and full brokerage APIs. This is a valuable go-to-market pattern for infrastructure startups: win with a sharp use case, then expand into the system of record. Yokokawa's core strategic edge is full-stack control of licenses, memberships, and ledger technology rather than relying on legacy vendors. He explicitly ties this to lessons from historical fintech fragility (manual workflows, broken reconciliations, middleware failures) and argues that owning the custody/clearing layer is what makes Alpaca defensible long term. For readers, this is the key takeaway on moat-building in financial services: if you don't control the ledger and operational core, your product may scale faster at first but remains structurally fragile. TOPICS Alpaca, Lehman Brothers, Barclays, Nomura, Neuberger Berman, BlackRock, Robinhood, Interactive Brokers, TD Ameritrade, BNY Mellon, Brokerage infrastructure, API, trading, tokenization, embedded finance, fintech, crypto, web3 ABOUT THE FINTECH BLUEPRINT
In this episode of Between Product and Partnerships, Biljana Pecelj joins Cristina Flaschen to explain how smaller teams successfully ship integrations with larger platform partners. She makes the case that leveraging usage data and performance metrics is the key to proving your integration's value, giving you the necessary influence to move up a major partner's priority list.

Biljana shares lessons from her experience managing integrations at Hootsuite during major platform shifts, including the rise of Instagram Business APIs and the emergence of new features like Stories that didn't always come with immediate API support. She also details the process of aligning internal stakeholders to ensure integration features actually ship despite shifting external APIs.

The conversation also covers the operational side of integrations: why observability needs to be built early, how teams detect silent failures before customers do, and how to structure internal alignment when integration work touches engineering, legal, partnerships, and revenue.

Who we sat down with

Biljana Pecelj is a Principal Product Manager at Ledgy with deep experience building integrations inside platform-heavy environments. She has worked extensively on partnership-driven product initiatives where execution speed depends on navigating both technical constraints and external partner relationships.

Biljana brings expertise in:

Building integrations in environments where APIs and features evolve asynchronously
Designing for observability and proactive monitoring
Navigating asymmetric partner relationships
Aligning roadmap priorities across product, partnerships, legal, and engineering
Managing tradeoffs between beta opportunities and engineering capacity

Key Topics

Why integration product work is relationship work
Technical execution matters, but alignment with partners determines whether integrations actually ship and scale.

Building in ecosystems you don't control
APIs change.
Features launch without endpoints. Roadmaps shift. Successful teams anticipate uncertainty rather than assume stability.

The importance of observability from day one
Silent failures are common in integrations. Without monitoring, teams often learn about outages from customers instead of systems.

Roadmap tradeoffs when beta opportunities arise
New partner features can require immediate shifts in engineering priorities. Negotiation and resource reallocation become core product skills.

M&A and integration complexity
Brand consolidation rarely means backend integration. Teams often inherit layered systems that remain technically independent long after acquisition.

Episode Highlights

01:55 – How integration product management differs from core product work
04:40 – Navigating power imbalances with large platform partners
07:15 – Using data to strengthen partner conversations
10:30 – Building observability when resources are limited
13:45 – Handling silent integration failures
17:50 – Managing beta features and roadmap shifts
21:30 – Aligning cross-functional teams around integration priorities
24:45 – Why relationships accelerate integration execution
28:10 – Lessons learned from building inside platform ecosystems

--
For more insights on partnerships, ecosystems, and integrations, visit www.pandium.com
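The "silent failures" theme above lends itself to a small sketch. The code below is hypothetical (the integration names, thresholds, and function names are all invented for illustration): a heartbeat-style check that flags an integration as quietly broken when expected traffic stops arriving, so the team hears about it from monitoring rather than from a customer.

```python
# Hypothetical sketch of a "silent failure" detector for integrations:
# instead of waiting for an error, alert when expected traffic goes quiet.
import time

# Invented example state: last time each partner integration delivered an event.
last_event_at = {
    "instagram": time.time() - 30,    # healthy: an event 30 seconds ago
    "linkedin": time.time() - 7200,   # quiet for two hours
}

# Invented per-integration expectations: maximum silence before alerting.
max_silence_seconds = {
    "instagram": 300,    # busy API, expect events every few minutes
    "linkedin": 3600,    # slower API, an hour of silence is tolerable
}

def silently_failing(now=None):
    """Return integrations that have been quiet longer than expected."""
    now = now if now is not None else time.time()
    return sorted(
        name
        for name, last in last_event_at.items()
        if now - last > max_silence_seconds[name]
    )

# "linkedin" has exceeded its allowed silence window, so it gets flagged
# even though no request ever returned an error.
print(silently_failing())
```

In practice a check like this would run on a schedule against real delivery timestamps, and the per-integration silence budgets would come from observed traffic patterns rather than hard-coded guesses.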
Pinar Ormeci, CEO of Lexful

For MSPs, documentation is essential. But it's also one of the hardest parts of running a service business. Inaccurate, outdated, or inaccessible documentation slows teams down, increases onboarding time for new technicians, and can even put service quality at risk. That's the problem Lexful is aiming to solve with a new approach.

In this episode, we sit down with Pinar Ormeci, CEO of Lexful, to discuss the company's new AI-native platform built specifically for managed service providers. Pinar explains how Lexful uses artificial intelligence to capture and organize MSP best practices in real time, making documentation not just a compliance task, but a practical tool that drives efficiency and reduces errors.

We also dive into some of the challenges MSPs face when adopting AI tools, like ensuring sensitive client data stays secure and meets regulatory or geographic requirements, and how Lexful addresses these concerns with flexible data residency options. Plus, Pinar shares her thoughts on global expansion, including the Canadian MSP market, and what makes Lexful different from traditional IT documentation tools.

Whether you're looking for ways to improve operational efficiency, reduce technician burnout, or future-proof your MSP business with AI, this conversation offers practical insights and a glimpse at where documentation technology is heading. Tune in to hear Pinar Ormeci explain how AI can transform the way MSPs capture, store, and use the knowledge that keeps their businesses running.

Hello and welcome to the ChannelBuzz.ca podcast, bringing news and information to the Canadian IT channel for the last 16 years. I'm Robert Dutt, editor of ChannelBuzz.ca, and as always your host for the show. If you're an MSP, you know that documentation is both critical and, let's be honest, often a pain.
From onboarding new technologies to keeping client procedures up to date, maintaining clean, accurate and accessible documentation can feel like a full-time job and even then it’s rarely perfect. That’s where Lexful comes in. Founded by Pinar Ormeci, Lexful is a new AI-native platform designed specifically for managed service providers. The goal is to make documentation smarter, faster and more useful, not just for the teams doing the work today, but for future technicians, clients and partners. Think of it as giving your organization a digital brain that learns your processes, organizes your best practices and helps your team actually use the documentation you spent so long building. In today’s conversation, Pinar walks us through what makes Lexful different from traditional IT documentation tools, how the platform’s AI assistant Ask Lex works, and how MSPs can balance the need for actionable insights with security and control over sensitive client data. We also talk about global expansion, including Canada, of course, and what it takes to bring AI-powered documentation to MSPs operating in regulated markets or multiple geographies. Whether you’re curious about AI in the MSP workflow, looking for ways to improve operational efficiency, or just interested in the next wave of tools that may be shaping the channel, this episode’s full of insights from someone who’s building a platform designed for exactly that. Grab your headphones and let’s jump into a conversation with Pinar Ormeci, CEO of Lexful. Robert Dutt: Thanks for taking the time. I appreciate you’re joining us to talk a little bit about what’s going on over at Lexful. Pinar Ormeci: Thank you so much for having me, Robert. Robert Dutt: You’re entering a market that MSPs already know well in terms of documentation tools. 
What was it that was broken enough about the status quo, the situation, that you felt like, “Oh, it’s time to start from scratch with something brand new.” Pinar Ormeci: Yeah, as you can imagine, everything changed with AI, with the advent of AI and the pace of doing things and how MSPs must react and are reacting to an AI-first world even today, and it’s even accelerating as we continue. So as such, we fundamentally believe that the things that worked yesterday will not work today and definitely not tomorrow, right, for the workforce that contains humans and AI agents. So we are the response to a long-standing pain point that the MSPs have when it comes to documenting what they have, finding answers and context when they need, and also having the ability to update that documentation as needed, right? So MSPs, when they’re operating, they’re going 100 miles an hour across clients, across tabs, across tools, and the last thing they need is wasting time trying to find the right answer, right network diagram, trying to see if that’s actually the latest and greatest. And usually that doesn’t happen. There’s a lot of tribal knowledge that lives in the MSPs because they honestly, at some point, stop trusting the data that they have and things start living in their minds. And that’s the reason why we exist. So yes, we are an IT documentation solution, but we are an AI-native platform that is starting with documentation and our goal is to really help MSPs move into knowledge operations, an AI operating layer, where the knowledge becomes autonomous, the outcomes become autonomous, and really the knowledge becomes a living thing. Robert Dutt: Well, let’s start with where you’re at in that regard. From your perspective and from what you were hearing as you were building up Lexful and planning it out, what’s the real cost of bad, outdated, unfindable documentation inside an MSP’s operation? 
Both in terms of operational stuff for the organization, but also in terms of ability to grow, margins of the business, the experience that technicians have, those kinds of things that are not peripheral, but not right at the center of operations. Pinar Ormeci: Excellent question. And what we say is that MSP documentation as it stands today is really broken. And ultimately, this is an economic problem. This is not a technical problem in the sense that it costs MSPs real margin. And how does that happen? So today, documents become stale as soon as they are written. Technicians waste hours collectively trying to find the right information, and manual updates really don’t scale. So what this ends up resulting in is missed signals, right? So you don’t act when you should be acting. You don’t find answers as fast as you could. Your technicians get burned out because literally after five, ten minutes of searching and not being able to find what they need, technicians go to other technicians. So everybody’s pinging each other, disrupting. So there’s also a lot of context switching. And this results in errors where you’re trying to solve different clients’ problems. And ultimately and fundamentally, this really results in eroding client trust and churn, right? So we see this documentation problem not as a technical problem, but fundamentally an economic problem that has real impact on the bottom line of the MSPs. And also their top line, because knowledge is also critical, Robert, for AI agents, for workflows. Your AI workflow or your agentic workforce is only as strong as the data that they rely on. So if you have a bunch of unstructured data lying around across different tools and you have no clue how stale or up to date they are, your agents won’t be as useful as they could be. 
So we are approaching the problem on both sides, both reducing your costs and increasing your margins, but also really preparing you for the agentic workflow and also AI-driven new revenue streams. Robert Dutt: You’ve positioned Lexful as an AI-native platform rather than a traditional documentation tool with AI built in, strapped on, however you want to phrase that. What does that mean in practice for an MSP that’s using Lexful on a day-to-day basis as opposed to using traditional documentation tools or methodologies? Pinar Ormeci: Sure. Legacy documentation tools were built in a different era, right? Before AI existed, they really depended on manual entry, keyword search, and they’re optimized for storage really, not to be an operational workhorse. Not for knowledge operations, where you’re able to put data to work for you 24/7. So our goal with Lexful is to move from this world of scattered docs and tribal knowledge to a unified AI-native platform that delivers the right solution to the right technician, anchored to the right context, to the right client, instantly. So this is how this looks in real life. Let’s say that you’re using a legacy documentation tool and you say, “Hey, I’m going to give Lexful a go. I want to try it.” By the way, you can have a completely free trial where you get to use the full functionality of Lexful in parallel to your existing tool. So there’s no risk. We call it migration without mayhem. So if you don’t like it, no feelings hurt. You can always continue with your existing platform. But this is how it looks. The first thing that we do is we migrate all your existing documentation. That means including your SOPs, onboarding guidelines, runbooks, what have you, your MSP-specific documentation, plus all your client assets and passwords and their documents into the Lexful schema. And while we are doing that, we transform that data into context, relationships, assets. 
So everything becomes structured so that AI can operate seamlessly and securely, very fast, within the guardrails that we put. So that’s fundamentally different than bolting AI into the scattered docs that are unstructured and expecting much from that AI agent. Before we even migrate the documents, Robert, what we’ve done is we completely context-engineered an LLM model to live in the MSP space. So you have this, let’s say, AI technician now that has access to all your data. And the things that you can do with this are really amazing. So we have AI as UI, as entry point to Lexful. And what that means is you can ask natural query questions in plain English. For example, a technician can easily ask, “Hey, what’s the admin password for this client?” Or they can ask, “Hey, what devices need patching for the clients that are in the Ohio area?” Or “What should I do about it?” Or you can say, “Hey, give me a project plan for me to patch these devices and make sure you’re prioritizing them based on urgency.” Or an L1 tech who you just hired and you’re trying to onboard, instead of pinging the senior technicians all the time, they can literally go to Ask Lex, which is our AI-powered knowledge assistant, and say, “Hey, how does my MSP do onboarding? What’s the best way for me to increase my learning curve immediately? What would you propose?” Because this is an LLM now that has access to all your knowledge and is context-engineered, as I mentioned, in the MSP and all things IT. Robert Dutt: And you mentioned data throughout that. And clearly, for Ask Lex, for the AI infrastructure to have the value that it potentially has, it has to have access to both an MSP’s most valuable data, the best practices, the procedures, the stuff that folks have developed over the however many years the business has been in place, and customer data, network diagrams and passwords, et cetera. 
How are you balancing getting the most out of that and getting the most value out of Lexful with trust, security, control, all those kinds of things that MSPs and rightly customers are going to be asking about? Pinar Ormeci: Yeah, 100%. And that’s why vibe coding is not going to work for any production-grade solution, but also definitely for MSPs, where you have multi-tenancy, security is of utmost importance. You have all these compliances and regulations and all of that, right? So you have to have a real MSP-grade solution. So in our case, obviously, we are handling really sensitive data, the client’s data, and also passwords, right? As a documentation tool, we have password management as part of that, a rich document creator and asset management. So it’s as sensitive as it gets. What we do is zero-trust security from day one. So Robert, I was the CEO of another MSP-first vendor before I joined Lexful, and what we did was Secure Access Service Edge, which is a SASE solution, right? So I’m so security-first because I’ve seen firsthand all the horrible consequences when security is optional. Security is a must-have. It has to belong in an MSP stack, and MSPs actually shouldn’t even deal with clients if the client says, “Oh, security is optional for me.” So I am very, very security-first. So from day one, what we’ve done at Lexful is we said that we’re going to be SOC 2 Type 2 compliant. So the whole thing that we’re building is built in that framework. We are already in SOC 2 audit, by the way, so hopefully we’ll get the SOC 2 Type 2 compliance. That’s the earliest you can get, by the way, as a young company, by the end of this half. Yeah, so we have a never trust, always verify framework, and we do take it very seriously. 
Robert Dutt: And similar issue, but from a different point of view, many MSPs, especially those outside the US, care about where data lives or even is in transit, or are required by regulation to care about where data lives or is in transit, whether that’s in-country, region-specific, or even locked down to the level of on-prem. I guess, how are you guys thinking about data residency and deployment flexibility as you scale and as your customer base scales? Pinar Ormeci: Oh, yeah, 100%. So as part of the SOC 2 Type 2, we are GDPR compliant. We are California CCPA compliant. So from a data residency perspective, similarly, we use AWS because we’re a global cloud-native platform. So we have data centers in the US, but also in Europe, in Canada, in Australia. So based on need, we have no problems having data centers locally in the region the MSP resides. Robert Dutt: You touched on this a little bit earlier, but I think for a lot of MSPs who are changing something like a documentation system that’s core to the business, it feels like there’s a risk there. Even if you see potential benefits, there’s also the challenge of leaving familiar systems, even if they aren’t your favorite things in the world. Can you elaborate a little bit on how you guys approach migration and early adoption so that partners can evaluate Lexful and still keep the business running at the same time? You touched on kind of having that parallel migration path. How exactly does that look for an MSP? Pinar Ormeci: Oh, yeah. As an operational tool, you cannot disrupt the MSP operations. That’s fundamental. So that’s why we say migration without mayhem, and it’s actually one of our core features. The other thing is we are very API-first, meaning even the product that we built is built on APIs. Our front end and back end are decoupled. Everything we do is via APIs. We have a RESTful API already out there for the MSPs to utilize. 
And for the migration as well, we have an API that automates the migration from an existing tool into the Lexful schema. But while we do that, we also have the MSP continue to use their existing tool while we bring that knowledge into Lexful. And then in that two-week trial, the MSP can use both platforms at the same time, really make sure all that data is there. They can validate that everything is to their liking and all of that. And at the end of that trial, if they continue to move with Lexful, then they can let go of their existing tool. So yeah, migration is very important. And like I say, we automate the migration to the extent possible using the API. Of course, migration is not trivial in any tool, let alone a documentation tool, especially if the MSP has so much documentation. So we always suggest, do this after Friday. Your workday is over, or during the weekend. So just don’t do it Monday 9 AM, just in case, because it might take one hour, two hours or whatever. But having said that, hopefully the migration is the easiest part of switching to Lexful. Robert Dutt: You’re working with AWS. I think you’re thinking on sort of a global scale, and why wouldn’t you, since it’s all online, it’s all technology. But as you think about global expansion, and I’m going to be biased here and say Canada in particular since that’s where this audience lives, how are you thinking about global focus? And also, I’m curious, as you’re talking to MSPs, what differences do you see in how MSPs think about and approach documentation, compliance, AI across the various regions that you’re talking to partners in? Pinar Ormeci: I think Canadian MSPs are pretty amazing and very innovation-forward. They’re definitely thinking about AI, their clients. They’re not that different from the North American ones, obviously. So we have very mature MSPs in Canada. 
And I don’t see massive differences when it comes to Canadian MSPs versus American MSPs, honestly, because the level of maturity in both countries is similar. So from a distribution perspective, we want to go wherever the pain points exist today when it comes to knowledge and documentation. And that is literally everywhere, right, Robert? So we are a global player and we also want to make it easy for the MSPs to get access to Lexful. We are working with Sherweb, we are working with Pax8. So the hope is that we will be part of those marketplaces definitely within this year. So by the way, a lot of our developers are in Vancouver. So we have great ties to Canada. I’m actually flying on Sunday to Vancouver for some internal meetings next week. So from our perspective, everything we do, everything we envision, our vision, we are a global player. We want to be the de facto central intelligence layer the MSPs trust for years to come. Robert Dutt: And along those lines, kind of looking forward, for an MSP who comes on board early days, as you guys are launching, how do you hope their business looks different a year from now after they’ve fully realized what you guys are doing and what you guys will do with Lexful over the course of that year? Pinar Ormeci: Yeah, excellent question. So we are a paradigm shift. I really see us, remember those days, for people who are old enough, like we used to have no internet, man. Like we used to have encyclopedias and the books, and like, my background is in engineering, I’m an electrical engineer. If I didn’t know something, I had to go open a book and like, it was these weird times without the internet. And then suddenly there was the internet, where this collective information and you can search for anything and, you know, then Google and so on. So that’s the paradigm shift that we are trying to bring the MSPs into. Instead of manual keyword-based search, manual updates and so on, now you live in that knowledge. 
Knowledge is always up to date. You do in-context troubleshooting. The technicians, they can be in co-pilot, they can be in their PSA, they can be in their Teams and they can just ask Lex to get the right answer contextually. The next steps, and then whatever is new discovered in that discussion is automatically detected if there is a gap and then trickled down to the right SOP, right KB. So this is the paradigm shift that we are talking about, so that MSPs can focus on not the mundane, like, “Hey, we need to update this document,” try to incentivize technicians on actually what makes the money, what delights their customers. They can be so much more strategic with their clients because just imagine now all the insights you can bubble up utilizing an AI and LLM that knows all your clients, that knows all the trends, that knows all the compliance needs. It is just a different game. So we’re really trying to bring the MSPs into an AI-first world because otherwise people will get left behind, right? The old ways don’t scale. Robert Dutt: And finally, probably the most important question we’re going to ask today, and that’s good journalistic practice, right, to wait till the very end to ask the most important question. I do have to ask though, is it true that your AI is also your channel chief? And if so, how sure are you that Lex isn’t coming for your job? Pinar Ormeci: Yeah, so I was like, you know, if you’re an AI-native company, we need to have some teammates that are not just human, but humanoid, let’s say. So we have as our channel chief a humanoid robot that has an LLM, has an NVIDIA chip. We have trained him on all the right things. Although at Right of Boom, people told me, “Oh, we thought he was a female,” but so yeah, Lex is amazing. And he is very clumsy though, so I don’t know that he’s coming after our jobs that fast. But yeah, we’re living in some amazing times. 
It’s just really fascinating as a technical person myself who’s been in the tech industry for 20-plus years. It’s fascinating to be living in these times where everything is moving exponentially. And yeah, so we do have a channel chief that is not a human. And he is with us at all the events that we go to. You can come to our booth and say hello, and then you can converse with him as well, right? Ask him like, “Hey dude, what do you think the MSP’s pain points are? Is Lex doing a good job? Is Pinar a good boss?” So he’ll have an opinion for you. Robert Dutt: All right, so flesh-and-bone channel chiefs have been put on notice. They are in fact on the list of roles that can be replaced. But jokes aside, no matter how good Lex and his AI pals get, what’s kind of the one role in all of this that you think humans will always play no matter where the technology goes? Pinar Ormeci: I think the judgment layer, at least for the, let’s say, near term, right? I honestly don’t know, 20 years… the thing is moving so fast. I keep reading Anthropic’s CEO and it’s just, things are changing a lot. But in the near term, the human judgment is still paramount. Human in the loop is paramount. And with AI, you have to always trust, but verify. So at Lexful, we make it such that we give all the reasoning the AI is doing to reach that conclusion, all the links where it’s going. So we make sure that the hallucinations, if there are any, are minimized and the humans can verify everything. So the human in the loop is ultimately critical and they are the judgment factor. And especially in the MSP channel, relationships are key. One of the things I love about the MSPs and this ecosystem is the community aspect, people helping each other. Then there’s MSPs being like, “Hey, we’re all on the same team” attitude. So I don’t think you can replace that for small, medium businesses. Ultimately, the best we can be is human. We are not AI, we are not robots. 
Humans, we’ve evolved to be social animals and community is such an important part of the MSP ecosystem. I don’t think that’s going anywhere soon. So we are here, as we say at Lexful, not to replace expertise. We’re just here to expose it to more people so that the technicians can do more important jobs other than just wasting hours documenting or finding the right information. Robert Dutt: I appreciate your taking the time. Good luck on rolling out and evolving Lexful. It will be exciting to see where things go from here. Thank you very much. Pinar Ormeci: Thank you so much. Thanks for having me. There you have it, a look at how AI may change your documentation system and maybe even provide a new business platform for your managed services business in the long run, courtesy of Lexful’s Pinar Ormeci. I’d like to thank Pinar for joining us and thank you for listening. That wraps up this week on the podcast. We’ll be back on Monday with In Case You Missed It, our weekly roundup of channel news and trends that you need to know about. And next week and into the near future, we’ll be taking a look at why modern IT environments are increasingly hard to monitor and have a chat with our frequent guest, Tony Anscombe, about the security forces you need to know about. Between now and then, please do subscribe to or follow the podcast in your podcast app of choice. And if it allows you to do so, please consider leaving a review or rating for the show. Have a great weekend. I’m Robert Dutt for ChannelBuzz.ca and I’ll see you around the channel.
Today, we are continuing our series, entitled Developer Chats, hearing from the large-scale system builders themselves.

In this episode, we are talking with Oleksandr Piekhota, Principal Software Engineer at Teaching Strategies. Oleksandr shows us at what point of scale platform approaches are required, when to run experiments and when to stop, and, perhaps most importantly, what engineering ownership beyond the code looks like.

Questions
- You've moved from hands-on engineering into principal and technical leadership roles, working on architecture and platforms. At what point did you realize your work was no longer about individual features, but about the system as a whole?
- Across several projects, growth didn't break functionality; it exposed architectural limits. Can you recall a moment when it became clear that shipping more features wouldn't solve the problem, and a platform approach was required?
- You've designed and supported APIs end-to-end, from architecture to real customers. How do you distinguish between an API that simply works and one that can truly support business scale?
- Internal systems like invoicing and HR workflows began as automation but evolved into real products. What tells you that an internal tool is worth developing seriously rather than treating it as a temporary workaround?
- In R&D, you explored CI/CD automation, serverless, and infrastructure experiments; not all reached production. How do you decide when an experiment should continue, and when it's no longer worth the engineering cost?
- You've hired teams, set standards, and shaped long-term technical direction. At what point does an engineer stop being a contributor and start owning business-level outcomes?
- You contributed to open-source tools that later became part of your company's infrastructure. Why do you see open-source contributions as part of serious engineering work rather than a side activity?
- Looking across your projects, how do you now recognize a truly mature engineering system? Is it code quality, process, or how teams respond when things go wrong?
- If we look five to seven years into the future, which architectural assumptions we treat as "standard" today are most likely to turn out to be naive or limiting?

Sponsors
Incogni

Links
https://www.linkedin.com/in/oleksandr-piekhota-b675ba53/
https://teachingstrategies.com/

Support this podcast at: https://redcircle.com/codestory/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
Software Engineering Radio - The Podcast for Professional Software Developers
Marc Brooker, VP and Distinguished Engineer at AWS, joins host Kanchan Shringi to explore specification-driven development as a scalable alternative to prompt-by-prompt "vibe coding" in AI-assisted software engineering. Marc explains how accelerating code generation shifts the bottleneck to requirements, design, testing, and validation, making explicit specifications the central artifact for maintaining quality and velocity over time. He describes how specifications can guide both code generation and automated testing, including property-based testing, enabling teams to catch regressions earlier and reason about behavior without relying on line-by-line code review. The conversation examines how spec-driven development fits into modern SDLC practices; how AI agents can support design, code review, documentation, and testing; and why managing context is now one of the hardest problems in agentic development. Marc shares examples from AWS, including building drivers and cloud services using this approach, and discusses the role of modularity, APIs, and strong typing in making both humans and AI more effective. The episode concludes with guidance on rollout, evaluation metrics, cultural readiness, and why AI-driven development shifts the engineer's role toward problem definition, system design, and long-term maintainability rather than raw code production. Brought to you by IEEE Computer Society and IEEE Software magazine.
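To make the property-based testing idea from the conversation concrete, here is a minimal, self-contained Python sketch. Rather than asserting on hand-picked examples, it checks that a property (idempotence) holds across many randomly generated inputs. The `normalize_path` function and the property itself are invented for illustration; they are not from the episode or from any AWS codebase.

```python
import random

def normalize_path(path: str) -> str:
    """Hypothetical function under test: collapse duplicate slashes."""
    while "//" in path:
        path = path.replace("//", "/")
    return path

def check_idempotent(fn, cases):
    """Property: applying fn twice gives the same result as applying it once.
    A spec-level statement like this can be checked mechanically, without
    line-by-line review of the generated implementation."""
    for case in cases:
        once = fn(case)
        twice = fn(once)
        assert once == twice, f"not idempotent on {case!r}"

# Generate random inputs instead of enumerating examples by hand.
random.seed(0)
alphabet = "ab/"
cases = ["".join(random.choice(alphabet) for _ in range(random.randint(0, 12)))
         for _ in range(500)]

check_idempotent(normalize_path, cases)
```

Libraries such as Hypothesis automate the input generation and shrinking; the point here is only that a specification expressed as a property can validate AI-generated code at scale.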
Get 90 days of Fellow free at Fellow.ai/coo

In this episode, Michael Koenig speaks with Greg Keller, co-founder and CTO of JumpCloud, about identity access management and why it's becoming one of the most important operational systems in the age of AI. Greg explains how traditional identity systems were designed for office-based companies running Microsoft infrastructure and why that model broke as companies moved to SaaS, cloud infrastructure, and remote work. The discussion then turns to the next big shift: the rise of AI agents and synthetic identities inside organizations. As companies deploy more AI tools, the number of machine identities may soon outnumber human employees. Managing what those systems can access will become a critical security and operational challenge.

Topics Covered
- What a CTO actually does: Greg explains the different types of CTO roles and how technology leaders help companies anticipate where the market is headed.
- Identity Access Management explained simply: IAM answers three core questions inside every company: Who are you? What can you access? How is that access managed?
- Why the old IT model broke: Traditional identity systems were built for on-premise offices and Microsoft infrastructure. Modern companies now operate across SaaS applications, cloud infrastructure, remote work environments, and multiple operating systems.
- How JumpCloud approaches identity: JumpCloud was built to manage identity across devices, applications, and infrastructure regardless of platform.
- Where Okta fits in the ecosystem: Okta helped modernize browser-based authentication through Single Sign-On, while JumpCloud focuses on broader identity infrastructure.

AI, Security, and Synthetic Identities
- Why COOs should push AI adoption: Greg argues AI adoption is no longer optional. Companies must encourage teams to improve productivity and efficiency using AI.
- The rise of synthetic identities: AI agents, bots, APIs, and service accounts are becoming new actors inside companies that require identity governance.
- Bots may soon outnumber employees: Organizations will soon manage more machine identities than human ones.
- AI as a potential insider threat: AI systems can become security risks if they are granted excessive permissions or misinterpret policies.
- The API key governance problem: Many AI integrations rely on API keys, which are often poorly managed and can create hidden security risks.

Key Takeaway
As companies adopt AI, identity access management becomes the control layer that determines what both humans and machines are allowed to do inside the organization. The companies that manage identity well will move faster and operate more securely.

Links:
Michael on LinkedIn: https://linkedin.com/in/michael-koenig514
Greg on LinkedIn: https://www.linkedin.com/in/gregorykeller/
JumpCloud: https://jumpcloud.com/
Between Two COO's: https://betweentwocoos.com
Episode Link: https://betweentwocoos.com/ai-agents-identity-access-greg-keller
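The API key governance problem Greg describes can be made tangible with a small sketch: audit a key inventory for keys that have outlived their rotation window. Everything here is illustrative; the inventory, field names, and 180-day policy are invented for the example, and a real deployment would pull this data from an IAM system's API rather than a hard-coded list.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of machine identities and their API keys.
KEY_INVENTORY = [
    {"owner": "billing-agent",  "key_id": "ak_001", "created": "2025-01-10"},
    {"owner": "support-bot",    "key_id": "ak_002", "created": "2023-06-01"},
    {"owner": "report-service", "key_id": "ak_003", "created": "2025-11-20"},
]

MAX_KEY_AGE_DAYS = 180  # example rotation policy, not a recommendation

def stale_keys(inventory, now=None):
    """Return the key IDs older than the rotation window -- the kind of
    unmanaged credential that becomes a hidden risk as agent counts grow."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=MAX_KEY_AGE_DAYS)
    flagged = []
    for entry in inventory:
        created = datetime.fromisoformat(entry["created"]).replace(tzinfo=timezone.utc)
        if created < cutoff:
            flagged.append(entry["key_id"])
    return flagged
```

Even a check this simple surfaces the core governance question: every synthetic identity needs an owner, an age, and a scope that someone is accountable for.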
In this episode, we debrief the second annual Heatpunk Summit from the legendary Hashtub in Denver. We recap how builders from HVAC, hydronics, and home mining came together to advance hashrate heating—complete with live hardware demos, workshops, and a brutally constructive critique of our boiler setup from a pro hydronics engineer. We dig into galvanic corrosion gotchas, smarter system design, and why practical, hands-on education is the real unlock for bringing Bitcoin miners back into homes and businesses as useful heaters.We also break down the big development with Canaan's openness to support the home-mining and heat reuse market, what a “willing partner” ASIC manufacturer could mean for decentralization, and how small improvements—docs, APIs, and integrations—can catalyze a whole ecosystem. From workshop highlights (Home Assistant control, hydronics integration, open-source mining OS, and regulatory/insurance insights) to the industry's AI pivots and the investability of open source, this is a high-signal builder's recap with clear next steps and renewed momentum for hashrate heating.
What if innovation is not about moving faster, but moving with purpose? In this episode of Innovators Inside, Ian Bergman sits down with Dr. Hisham Alasad, head of innovation enablement at Qatar Airways, to unpack a human-first view of innovation shaped by fintech, academia, and a bold move to Qatar. They break down what open banking really changes, why banks fight it, and how open finance could unlock better, cheaper products for consumers. Then they go deeper: why innovation requires overcoming fear, why closed systems stall progress, and what a “Responsible Innovation” framework could look like that is ethical, inclusive, scalable, and beneficial beyond the balance sheet. They close with a big vision: using AI to help create opportunity and peace in the Middle East.Topics & Timestamps
In this engaging episode of MSP Business School, host Brian Doyle sits down with Shane Naugher, a pioneering figure in the world of AI and automation for MSPs. The discussion takes a deep dive into the real-world application of AI, focusing on how it can be utilized to streamline operations and deliver tangible ROI for businesses. Whether you're curious about how AI fits into your MSP strategy or eager to learn about automation opportunities, this episode delivers practical insights into what Shane calls the "mature business model" of MSPs. As the conversation unfolds, Shane shares his dual expertise as the CEO of DaZZee IT Services and founder of Innovative Automations, offering a rare glimpse into the intersection of AI, automation, and managed services. The episode explores the challenges of integrating AI into everyday business operations, shedding light on how AI-enabled automations can transform traditional processes, particularly in professional services and industries reliant on legacy systems. Shane shares valuable experiences and success stories, highlighting key automation opportunities and the significance of partnering with trusted AI advisors to navigate the rapidly evolving tech landscape. Key Takeaways: Practical AI Application: Understanding the difference between shiny AI tools and meaningful automation that drives business outcomes. Industry-Specific Automation: How different sectors, particularly professional services, can benefit from AI to achieve significant ROI. The Role of APIs: Leveraging open APIs and traditional RPA platforms for connecting disparate business applications and optimizing workflows. Partnership Model: The importance of MSPs partnering with AI and automation specialists to provide comprehensive client solutions. Strategic AI Conversations: Encouraging MSPs to lead AI integration discussions with clients to maintain a competitive edge. 
Guest Name: Shane Naugher LinkedIn page: https://www.linkedin.com/in/shanenaugher/ Company: Innovative Automations / DaZZee IT Website: https://innovativeautomations.ai/ / https://dazzee.com/ Show Website: https://mspbusinessschool.com/ Host Brian Doyle: https://www.linkedin.com/in/briandoylevciotoolbox/ Sponsor vCIOToolbox: https://vciotoolbox.com
Wes and Scott talk about building v_framer, Scott's custom multi-source video recording app, and why Electron beat Tauri and native APIs for the job. They dig into MKV vs WebM, crash-proof recording, licensing with Stripe and Keygen, auto-updates, and the real challenges of shipping a polished desktop app. Show Notes 00:00 Welcome to Syntax! March MadCSS 02:28 Why screen recording apps are so frustrating 07:14 The requirements behind Scott's app, v_framer 09:47 Tauri, WKWebView, and blurry screen recording headaches 13:00 Why switching to Electron was a game changer 14:02 Electrobun and the hybrid desktop experiment 16:29 Browser-based capture vs native APIs 18:50 Brought to you by Sentry.io 22:32 Notarization, certificates, and shipping a Mac app 24:52 One-time purchases, trials, and selling desktop software 26:37 Self-hosting Keygen for license keys 30:27 A scrappy Google Sheets-powered waitlist 31:56 Keyboard shortcuts, FPS locks, and app customization 34:50 CI/CD and painless auto-updates with Electron Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
Show Description
We talk with Frederik Braun from Mozilla about the Sanitizer API, how it works with HTML tags and web components, what it does with malformed HTML, and where CSP fits in alongside the Sanitizer API.

Listen on Website • Watch on YouTube

Guests
Frederik Braun
Guest's Main URL • Guest's Social
Security engineer and manager working on the Mozilla Firefox web browser

Links
- Frederik Braun: Why the Sanitizer API is just setHTML()
- Frederik Braun
- freddyb (Frederik B)

Sponsors
Bluehost
Do you ever feel like pre-configured hosting is slowing you down? That is where VPS hosting starts to make a lot more sense. With Bluehost VPS, you are not stuck inside someone else's environment. You get full control of the server. You can spin up Docker, deploy containerized apps, run workflows, and connect your CRM, databases, and APIs without weird restrictions. No shared bottlenecks. No artificial limits. If you want to actually own your stack, your data, your performance, your roadmap, VPS is the move.
In this episode of the Wharton FinTech Podcast, Bobby Ma sits down with Kyle Mack, CEO & Co-Founder of Middesk, a Series B company. Kyle shares his experience building Middesk, the leading business identity platform modernizing business verification, risk evaluation, and compliance. Its fast, frictionless APIs support KYB, credit assessment, and tax registration use cases, with data updated in days, not months. More than 500 customers trust Middesk to verify, underwrite, and grow with confidence. The company has raised over $70 million in funding and is backed by top-tier investors including Accel, Sequoia, and Insight Partners. We discuss: - Kyle's journey building Middesk, starting from developing proprietary data pipelines to creating a leading business identity platform - The value proposition of KYB and how it is fundamentally more complex than KYC - How Middesk serves and plugs into its customers' decisioning workflows - The future of business identity as it evolves with AI and other technology trends
What if public health agencies could access better, faster, and more complete data without giving up control? In this episode, we sit down with Dr. Jen Layden, senior vice president of population and innovation at ASTHO, to explore the new Public Health Data Consortium and what it means for the future of public health decision-making. Dr. Layden explains how this unique public–private partnership is designed to improve data access, quality, and analytics while keeping governance firmly in the hands of state and territorial health agencies. She discusses why mortality data is a critical starting point, how emerging technologies like APIs and advanced analytics can help close long-standing data gaps, and what new insights could come from linking public health data with sources like pharmacy, claims, and real-world data.Leadership Power Hour: Your Launchpad for Impact | ASTHO
I am thrilled to welcome Marenza Altieri-Douglas, an executive in sales and technology. She's trained in structured enterprise environments and startups, and is steeped in opening new markets and building commercial enterprise. That's not going to be our focus today; instead, we talk about how she is an incredible storyteller, rooted in concepts like disruption and cultivation. Her personal story is key to the narrative, and I was thrilled she joined us to share that story and how she ties it all together, leading and operating in the current business climate. Marenza Altieri-Douglas' career sits at the intersection of technology evangelism and disciplined execution. Trained in structured, enterprise environments and refined in startups and scale-ups, she specializes in defining strategic direction, opening new markets, and building compelling commercial propositions for enterprise and C-suite customers across Fortune 500 and Global 5000 organizations. She has worked across and alongside technologies including Conversational and Generative AI, APIs, DevOps, open-source platforms, cloud and containerized architectures, enterprise mobility, security, communications, media and broadcast, telecoms, and digital platforms. AI is a natural evolution of this journey, alongside a strong strategic interest in GPU-enabled infrastructure and quantum technologies. Marenza is known for building high-trust relationships, spotting and growing talent, and connecting product, engineering, and commercial teams around clear outcomes. A natural storyteller and facilitator, she enjoys shaping narratives that help organizations and customers understand why a technology matters, not just what it does. (4:50) We delve into Marenza's formative years that put her on her current path. She shares her personal and professional story. (17:18) When did Marenza realize that "disruption" and challenging things became a part of her brand? 
(22:38) What does Marenza feel are some of the important qualities that people should embody? (28:20) Marenza shares how she focuses on the future and the next generation. (39:16) We reflect on what Marenza would like her impact to be over the next couple of years.Connect with Marenza Altieri-Douglashttps://www.linkedin.com/in/marenza/ Subscribe: Warriors At Work PodcastsWebsite: https://jeaniecoomber.comFacebook: https://www.facebook.com/groups/986666321719033/Instagram: https://www.instagram.com/jeanie_coomber/Twitter: https://twitter.com/jeanie_coomberLinkedIn: https://www.linkedin.com/in/jeanie-coomber-90973b4/YouTube: https://www.youtube.com/channel/UCbMZ2HyNNyPoeCSqKClBC_w
Host: Annik Sobing
Guest: Kenneth G. Peters
Published: February 2026
Length: ~20 minutes
Presented by: Global Training Center

GTM Software Prep: Don't Install Until You've Done These 3 Things First

In this Simply Trade Roundup, Annik talks with Kenneth G. Peters, President at MIC US and Director of Commercial Operations in North America, about Global Trade Management (GTM) software: specifically, what trade teams must do before implementation to avoid creating "digital chaos." Ken shares real talk from his ATCC presentation on data cleanup, process mapping, and testing, plus why "cleaning your data like you're hosting the in-laws" is now his signature advice. Shoutout to Alison for the killer slides.

What You'll Learn in This Episode
- Ken's new grandpa status (the little guy is 7 months old, congrats!) and why it's the "next step in life" that keeps him energized for trade tech.
- The #1 mistake companies make with GTM software: skipping data cleanup. Don't dump junk into GTM. Scrub inactive vendors, obsolete parts, and invalid HS codes (like 111111 or all zeros). Clean it like you're hosting the in-laws: no mess allowed. Why: GTM amplifies what you give it. Bad data in = faster mistakes out.
- Avoid the "Big Bang" implementation trap. Don't try to do everything at once (denied party screening + classification + FTA rules + solicitation). Start small: 1) classification (builds the foundation: parts, HS codes, values); 2) denied party screening (uses your vendor/part data); 3) FTA analysis (relies on classification/HS from step 1). Why: master data dependencies mean you build once and reuse everywhere.
- Processes over pixels. GTM won't fix broken workflows. Map your processes before going live. If your current setup is emailing Excel files between systems, you're not automating; you're digitizing chaos. True automation: ERP ↔ GTM via SFTP, APIs, XML, with no human hands on keyboards. Reduces errors, speeds everything up.
- Who owns what after go-live. MIC US (GTM provider) manages the software backend: regulatory updates, HS databases, platform maintenance. Your team owns the process: classification, entry creation, decision-making. Someone still reviews outputs for accuracy. No "managed services" from MIC; GTM is a tool, not a full-service outsource.
- Testing: where most implementations fail. Allocate real time and resources to testing, and don't rush it. Test end-to-end: data flow, workflows, edge cases. Why: skipped or rushed testing = live problems that cost more to fix later.
- "If your systems are emailing Excel files to each other, you're not automating." Ken's golden rule: hands-off data flow (ERP → GTM) eliminates errors. Excel handoffs = manual errors waiting to happen.

Key Takeaways
- Clean data first: active parts, valid HS codes, no ghosts. GTM makes good data shine and bad data explode.
- Start small, build smart: classification → screening → FTA, not "big bang everything."
- Fix processes before pixels: GTM won't save broken workflows; it speeds them up.
- Testing is non-negotiable: rushed testing = expensive live fixes.
- GTM is a force multiplier, if your foundation is solid.

Credits
Host: Annik Sobing
Guest: Kenneth G. Peters, President, MIC US
Producer: Annik Sobing

Listen & Subscribe
Simply Trade main page: https://simplytrade.podbean.com
Apple Podcasts: https://podcasts.apple.com/us/podcast/simply-trade/id1640329690
Spotify: https://open.spotify.com/show/09m199JO6fuNumbcrHTkGq
Amazon Music: https://music.amazon.com/podcasts/8de7d7fa-38e0-41b2-bad3-b8a3c5dc4cda/simply-trade

Connect with Simply Trade
Podcast page: https://www.globaltrainingcenter.com/simply-trade-podcast
LinkedIn: https://www.linkedin.com/showcase/simply-trade-podcast
YouTube: https://www.youtube.com/@SimplyTradePod

Join the Trade Geeks Community
Trade Geeks (by Global Training Center): https://globaltrainingcenter.com/trade-geeks/
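Ken's "clean your data before GTM load" advice lends itself to a quick sanity-check script. The sketch below flags the obviously bad HS codes he mentions (placeholder values like 111111 or all zeros) before they ever reach the GTM system. The validity heuristics and sample part records are illustrative assumptions, not MIC's actual rules; real HS validation would check codes against the current tariff schedule.

```python
def looks_invalid_hs(code: str) -> bool:
    """Heuristic pre-load checks for obviously bad HS codes:
    non-numeric, unexpected length, or a single repeated digit
    (placeholder junk like '111111' or all zeros)."""
    digits = code.replace(".", "")
    if not digits.isdigit():
        return True
    if len(digits) not in (6, 8, 10):   # HS6 plus common national extensions
        return True
    if len(set(digits)) == 1:           # 000000, 111111, ...
        return True
    return False

# Hypothetical parts master extract.
parts = [
    {"sku": "A-100", "hs": "8471.30"},     # plausible code
    {"sku": "B-200", "hs": "111111"},      # placeholder junk
    {"sku": "C-300", "hs": "0000000000"},  # all zeros
    {"sku": "D-400", "hs": "12AB56"},      # non-numeric
]

to_review = [p["sku"] for p in parts if looks_invalid_hs(p["hs"])]
```

Running a check like this over the full parts master before go-live is exactly the "scrub first" step: GTM amplifies whatever it is fed, so the junk has to be caught upstream.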
In episode 200 of BIMrras we step onto dangerous ground. What if, instead of using a CDE, we program one? What if we stop treating it as a commercial brand and start understanding it for what it really is: a system of rules for managing information? We talk about what it means to build an open-source CDE, about translating ISO 19650 into code, about no longer confusing software with methodology, and about how uncomfortable it is to discover that we don't understand the system we use every day as well as we thought. Because automating chaos doesn't bring it to order. It accelerates it. An episode about digital sovereignty, processes, and responsibility. And about an idea that may sting a little: if you don't understand your CDE, you aren't managing information. You're renting convenience. Welcome to episode 200 of BIMrras! Episode contents: 00:00:00 Introduction and celebration of episode 200 00:06:00 Origin of the open-source CDE project 00:10:30 Adapting the software to the way we work 00:28:00 The CDE as infrastructure versus a closed platform 00:38:30 Version control and problems with IFC files 00:45:00 Data management versus file management in BIM 00:57:00 Interoperability, APIs, and automation 01:02:00 Security and backups 01:07:00 Open source and data sovereignty 01:13:00 Responsibilities and traceability in the project
John V, AI risk, safety, and security at the Institute for Security and Technology (IST), joins Defender Fridays today. John's work spans AI red teaming, adversarial machine learning, AI evals and validation, and AI risk assessment, including policy work at the intersection of AGI and nuclear strategic stability. Learn more at https://securityandtechnology.org/Register for Live SessionsJoin us every Friday at 10:30am PT for live, interactive discussions with industry experts. Whether you're a seasoned professional or just curious about the field, these sessions offer an engaging dialogue between our guests, hosts, and you – our audience.Register here: https://limacharlie.io/defender-fridaysSubscribe to our YouTube channel and hit the notification bell to never miss a live session or catch up on past episodes!Sponsored by LimaCharlieThis episode is brought to you by LimaCharlie, a cloud-native SecOps platform where AI agents operate security infrastructure directly. Founded in 2018, LimaCharlie provides complete API coverage across detection, response, automation, and telemetry, with multi-tenant architecture designed for MSSPs and MDR providers managing thousands of unique client environments.Why LimaCharlie?Transparency: Complete visibility into every action and decision. No black boxes, no vendor lock-in.Scalability: Security operations that scale like infrastructure, not like procurement cycles. Move at cloud speed.Unopinionated Design: Integrate the tools you need, not just those contracts allow. Build security on your terms.Agentic SecOps Workspace (ASW): AI agents that operate alongside your team with observable, auditable actions through the same APIs human analysts use.Security Primitives: Composable building blocks that endure as tools come and go. 
Build once, evolve continuously.Try the Agentic SecOps Workspace free: https://limacharlie.ioLearn more: https://docs.limacharlie.ioFollow LimaCharlieSign up for free: https://limacharlie.ioLinkedIn: / limacharlieio X: https://x.com/limacharlieioCommunity Discourse: https://community.limacharlie.com/Host: Maxime Lamothe-Brassard - CEO / Co-founder at LimaCharlie
In this episode of The Ross Simmonds Show, Ross breaks down the so-called "SaaSpocalypse" after $1 trillion in SaaS market cap vanished in a single week. While headlines scream that "AI will replace SaaS," Ross argues the reality is far more nuanced. He introduces a three-part framework (Exposed, Embedded, Evolved) and outlines the strategic shifts founders and marketers must make to survive and compound in the age of AI agents. Key Takeaways and Insights: 1. The $1 Trillion Wake-Up Call - SaaS stocks were crushed in early 2026, triggering fear across markets. - AI agents, LLM advancements, and disappointing earnings accelerated the correction. - The dominant narrative says AI will replace SaaS, but the situation is more complex. - Market fear is loud. Structural change is quieter, but very real. 2. AI Agents, Vibe Coding & the Death of Per-Seat Pricing? - AI agents interacting directly with APIs challenge traditional SaaS interfaces. - "Vibe coding" demonstrates how quickly software can now be replicated. - Per-seat pricing models are under pressure as automation scales output. - The interface is shifting from dashboards to conversations. 3. The Data Reality Most People Ignore - Global SaaS spending is projected to grow from $318B (2025) to $500B+ (2028). - Enterprise contracts and deep dependencies don't disappear overnight. - Pricing models may change. Market leaders may change. - Software demand isn't vanishing; it's evolving. 4. The Extinction Stack: Exposed, Embedded, Evolved - SaaS companies fall into three survival tiers. - Not all SaaS companies face equal risk. - Your future depends on depth of integration and data moat. - Operators must identify where they sit, now. 5. Type 1: The Exposed - Horizontal point solutions with weak moats and low switching costs. - Easily replicated with AI tools in days or weeks. - Rely on habit rather than proprietary advantage. - Most vulnerable to margin compression and churn. 
6. Type 2: The Embedded - Deeply integrated systems of record inside enterprises. - Painful and complex to replace due to migration risk. - The risk isn't extinction; it's interface disruption. - Must become AI-first before agents abstract them away. 7. Type 3: The Evolved - AI-native or aggressively AI-integrated platforms. - Built on proprietary data, regulatory moats, and deep user memory. - AI increases the value of their data advantage. - Positioned not just to survive, but to accelerate. 8. Distribution Is the New Defensive Moat - AI can replicate features. It cannot replicate trust. - Brand equity, audience relationships, and distribution compound. - As product development gets cheaper, distribution becomes the advantage. - This is the moment to double down on quality and amplification. 9. From Time-Based to Outcome-Based Thinking - Per-seat and time-based pricing models face structural pressure. - The future favors outcome-driven pricing and accountability. - Buyers will demand measurable impact, not access. - Service businesses must shift from hours sold to results delivered. 10. Intentional AI vs Fear-Based AI - Two types of teams are emerging: intentional adopters and reactive adopters. - AI without process creates noise, not leverage. - 10,000 mediocre AI assets won't move the needle. - 10 strategic, AI-enabled assets can change a business trajectory.
In this episode of Scene from Above, Julia Wagemann speaks with Matthias Mohr, independent software developer and one of the key contributors to the STAC (SpatioTemporal Asset Catalog) and STAC API specifications. STAC has become foundational to how Earth observation data is discovered and accessed across cloud platforms. But its origins lie in a fragmented landscape of portals, inconsistent metadata, and incompatible APIs. Matthias shares how STAC emerged from practical needs within the community and how it evolved into a widely adopted standard for geospatial data discovery. Together, Julia and Matthias unpack: Why STAC was created and what problem it solved The difference between static STAC catalogues and STAC APIs How organisations struggle when adopting STAC internally The role of extensions and interoperability Where cloud-native geospatial infrastructure may head next A thoughtful conversation for anyone working with large-scale Earth observation data, from analysts querying data, to engineers publishing catalogues, to decision-makers shaping data infrastructure. Host: Julia Wagemann Guest: Matthias Mohr
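For listeners new to STAC, here is a minimal sketch of what a static STAC Item looks like as plain JSON-style data, plus a check of the required top-level fields. The scene ID, coordinates, and asset URL are invented for illustration; the field names follow the STAC Item specification (version 1.0.0), which is exactly the kind of shared structure the episode credits with replacing fragmented, portal-specific metadata.

```python
# Minimal static STAC Item of the kind a static catalogue links to.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "scene-20240101",          # hypothetical scene ID
    "geometry": {"type": "Point", "coordinates": [7.6, 51.9]},
    "bbox": [7.6, 51.9, 7.6, 51.9],
    "properties": {"datetime": "2024-01-01T10:30:00Z"},
    "links": [],
    "assets": {
        "visual": {
            "href": "https://example.com/scene.tif",  # placeholder URL
            "type": "image/tiff; application=geotiff",
        }
    },
}

REQUIRED = ["type", "stac_version", "id", "geometry", "properties", "links", "assets"]

def missing_fields(stac_item: dict) -> list:
    """Report required top-level Item fields that are absent -- the kind
    of consistency a shared spec lets every client rely on."""
    absent = [k for k in REQUIRED if k not in stac_item]
    if "properties" in stac_item and "datetime" not in stac_item["properties"]:
        absent.append("properties.datetime")
    return absent
```

Because every Item shares this shape, a static catalogue is just files that clients can crawl, and a STAC API adds search (by bbox, datetime, collection) over the same structure.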
Vitalik outlines Ethereum's Post-Quantum roadmap. Ethereum researchers introduce the leanSig signature scheme. Alchemy releases crypto APIs for agents. And Brevis reduces RTP costs on its ZKVM. Read more: https://ethdaily.io/892 Borrow against ETH at the lowest fixed rates in DeFi. Liquity V2 lets you use ETH as collateral to mint BOLD, the Ethereum native dollar. Learn more at liquity.org Disclaimer: Content is for informational purposes only, not endorsement or investment advice. The accuracy of information is not guaranteed.
As has become tradition, the guys are delighted to welcome back special guest Analyst Dean Bubley, to look ahead to the main themes of this year's big telecoms trade show. They start with the obvious – AI – but strive to inject focus and substance into this ubiquitous and often hyperbolic topic by discussing agents, automation, and APIs. They then move on to the matter of sovereignty and how viable it is for countries to become more self-sufficient in a time of ultra globalisation. The final big theme is satellite telecoms, with direct-to-device likely to be a hot topic at the show. They conclude by examining recent conjecture on the effect AI will have on the world and its workers.
Stablecoin yield doesn't have to mean complexity, counterparty mystery, or a leap of faith. We sit down with Jeff Handler, co-founder and CCO of OpenTrade, to unpack how enterprise-grade infrastructure turns on-chain dollars into real returns, why tokenization only matters when it solves a user's problem, and how crypto-native strategies like delta-neutral Solana staking can deliver yield without riding the market's mood swings.

Jeff walks us through his journey from early Bitcoin wallets to USDC's formative years, then into building a platform that looks more like SaaS than a protocol. We dig into the operations hiding behind clean APIs: bank-grade asset management, reporting, and legal structures that meet treasury standards. If you've wondered how fintechs, exchanges, and neobanks can keep funds on chain while accessing money market exposure or hedged staking strategies, this is the blueprint.

We also get practical about adoption. Trust is earned through credible investors and counterparties, but it's cemented with enforceable contracts, account controls, and bankruptcy-aware structures. For product teams, the takeaway is clear: avoid vanity metrics, pursue product-market fit, and accept that real usage trails real utility. On regulation, Jeff advocates a proven path: operate responsibly under existing laws, engage policymakers, and keep shipping rather than waiting for a perfect rulebook.

To close, we explore how embedded yield becomes a retention and growth engine. With configurable terms, rates, and minimums, teams can shape offerings to reduce churn or boost balances while keeping a "stablecoins in, stablecoins out" experience. If you're building in fintech or web3 and need a clear, compliant, and scalable way to deliver yield, this conversation will sharpen your roadmap. Enjoy the episode, then subscribe, share with a teammate, and leave a quick review so others can find it too.

This episode was recorded through a Descript call on January 30, 2026.
Read the blog article and show notes here: https://webdrie.net/stablecoin-yield-without-the-headache
Episode 300 is a milestone we never imagined when we hit record for the first time in 2018 at 470 Claims, which was acquired by Alacrity Solutions. Seven years later, Rob and Lee sit down to reflect on how FNO: InsureTech began, how it evolved, and what has surprised us most along the way. In this special episode, we talk through the genesis of the podcast and how a simple idea turned into hundreds of conversations across insurance and insuretech. We reflect on the consistency, curiosity, and commitment it took to keep showing up week after week, and how the journey shaped us as hosts.

Key Highlights
• [3:19] The genesis of FNO: InsureTech and how the podcast started in 2018
• [9:03] The unexpected relationships and networking that grew from the show
• [25:42] How insuretech conversations evolved from APIs to AI
• [29:36] The industry shifts between 2018 and 2025 that quietly changed startup thinking
• [31:09] Reflections on seven years of recording and the plans ahead

We are grateful to everyone who has listened, shared, or joined us as a guest, and to our sponsor Alacrity Solutions for supporting us all these years. Cheers to the next 100!
In this episode of Valley of Depth, we dive into Aalyria's newly announced $100 million raise at a $1.3 billion valuation with cofounder and CTO Brian Barritt and unpack why investors are betting big on the future of networks that don't sit still. Aalyria is building two core technologies born inside Google: Spacetime, a software orchestration layer designed to manage networks in motion, and Tightbeam, a laser communications system delivering fiber-like speeds through the atmosphere. Together, they aim to solve one of the hardest infrastructure challenges in aerospace and defense: how to coordinate satellites, aircraft, drones, ships, and ground systems into a seamless "network of networks." The conversation spans laser physics, diffraction challenges in space-to-ground links, feeder link bottlenecks in mega-constellations, and why routing data across moving infrastructure is fundamentally different from routing across fixed networks.

We cover:
• Why Aalyria's $100M raise signals a shift from R&D to deployment
• What "network in motion" really means and why it's so hard
• How laser communications can reach 100 gigabits per second through the atmosphere
• The technical challenge of Earth-to-space vs. space-to-Earth optical links
• Why interoperability has been a 40-year ambition inside the DoD
• How open APIs could become the connective tissue for JADC2 and beyond
• What resilience and roaming look like in hybrid satellite architectures
• Why optical ground stations require orchestration software to scale

• Chapters •
00:00 – Intro
00:59 – The history of Aalyria
02:47 – Aalyria's Spacetime
06:09 – Building the connective software stack that links all of Aalyria's technology together
07:12 – The non-geostationary network problem
11:12 – The rebirth of Loon Technology
14:50 – How Tightbeam ties in to Aalyria
17:21 – 100 Gb/s through the atmosphere
19:42 – Brian's mandate as CTO when Aalyria forms
20:37 – State of Tightbeam at formation of Aalyria
22:17 – Why can't other companies do what Spacetime does yet?
26:05 – The significance of having different architectures with different source codes talk to each other without modification
28:21 – How Aalyria integrates a new customer's network
31:05 – What is a long distance for Tightbeam and customer reaction to demos
32:48 – Who has Aalyria surprised the most with their demos?
34:28 – What has prevented the government from making a network of networks?
39:14 – Why wouldn't a space version of the Tightbeam terminal work?
42:01 – How Aalyria is thinking about customers adopting Tightbeam
45:15 – Aalyria in the defense industry
47:05 – Aalyria's commercial aspects
48:30 – Aalyria's latest investment round
51:39 – Next milestones
53:00 – What keeps Brian up at night?
54:00 – Long-term vision for Aalyria
56:16 – What does Brian do for fun?
• Show notes • Aalyria's website — https://www.aalyria.com/ Mo's socials — https://x.com/itsmoislam Payload's socials — https://twitter.com/payloadspace / https://www.linkedin.com/company/payloadspace Ignition's socials — https://twitter.com/ignitionnuclear / https://www.linkedin.com/company/ignition-nuclear/ Tectonic's socials — https://twitter.com/tectonicdefense / https://www.linkedin.com/company/tectonicdefense/ Valley of Depth archive — Listen: https://pod.payloadspace.com/ • About us • Valley of Depth is a podcast about the technologies that matter — and the people building them. Brought to you by Arkaea Media, the team behind Payload (space), Ignition (nuclear energy), and Tectonic (defense tech), this show goes beyond headlines and hype. We talk to founders, investors, government officials, and military leaders shaping the future of national security and deep tech. From breakthrough science to strategic policy, we dive into the high-stakes decisions behind the world's hardest technologies. Payload: www.payloadspace.com Tectonic: www.tectonicdefense.com Ignition: www.ignition-news.com
In this episode of the Ardan Labs Podcast, Ale Kennedy talks with Jens Neuse, CEO and co-founder of WunderGraph, about his unconventional path into technology and entrepreneurship. After a life-altering accident ended his carpentry career, Jens taught himself to code during recovery and eventually built WunderGraph to solve modern API challenges.

Jens shares the evolution of WunderGraph from an early-stage startup to a successful open-source platform, including pivotal moments like securing eBay as a customer. The conversation highlights the importance of resilience, community-driven development, and balancing startup life with family, offering insight into what it takes to build meaningful technology through adversity and persistence.

00:00 Introduction and Current Life
07:19 Dropping Out and Carpentry Career
10:52 Life-Altering Accident and Recovery
18:01 Learning to Walk and Finding Direction
27:46 Discovering Coding and Technology
31:17 Starting the Startup Journey
33:07 Discovering the Power of APIs
40:50 Building a Team and Leadership Growth
48:17 Founding WunderGraph
59:07 Pivoting to Open Source
01:05:32 eBay Breakthrough and Validation
01:10:08 Balancing Family and Startup Life

Connect with Jens: LinkedIn: https://www.linkedin.com/in/jens-neuse
Mentioned in this Episode: WunderGraph: https://wundergraph.com

Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/
Live Events: https://www.ardanlabs.com/live-training-events/
Blog: https://www.ardanlabs.com/blog
Github: https://github.com/ardanlabs
How real-time security transforms ERP systems in a cloud-driven world: spotting threats instantly, leveraging AI for proactive defense, and closing common blind spots before breaches escalate. Curious about staying ahead of cyber risks?
=====
Mohammed Moidheen, SAP security architect at Infosys, unpacks why real-time monitoring is vital amid 2,200 daily cyber attacks costing trillions annually. He highlights blind spots like unmonitored access vulnerabilities, ignored audit logs, unsecured APIs, privileged accounts, insider threats, and poor event correlation in S/4HANA Cloud setups. AI evolves detection with predictive intelligence, automated responses, natural language queries, and cross-system pattern spotting, shifting from reactive to proactive security. Real-world cases show systems halting unusual data downloads and insider data exfiltration in minutes. Advice includes aligning with governance, prioritizing crown jewels, setting baselines, training teams, and correlating data. Infosys aids via assessments and foundational builds.

Listen now and rethink what ERP can do for your organization!

Download Episode Transcript
Useful Links: SAP Cloud ERP | Infosys.com
Follow Us on Social Media! SAP S/4HANA Cloud ERP: LinkedIn
=====
Guest: Mohammed Khan Moidheen, SAP Security Architect at Infosys Consulting
Mohammed Khan Moidheen is a senior SAP security architect with over 12 years of experience securing and operating large-scale SAP landscapes across global enterprises. His expertise spans SAP S/4HANA security, ERP platform services, DevSecOps enablement, and designing audit-ready security architectures aligned with frameworks such as ISO 27001, NIST, and GDPR. Mohammed is CISSP and CISA certified and excels at translating complex security requirements into actionable strategies that are practical, strategically aligned, and strengthen organisational resilience.

Host 1: Richard Howells, SAP
Richard Howells has been working in the Supply Chain Management and Manufacturing space for over 30 years. He is responsible for driving the thought leadership and awareness of SAP's ERP, Finance, and Supply Chain solutions and is an active writer, podcaster, and thought leader on the topics of supply chain, Industry 4.0, digitization, and sustainability.
Follow Richard Howells on LinkedIn and X

Host 2: Oyku Ilgar, SAP
Oyku Ilgar is a marketer and thought leader specializing in SAP's digital supply chain and ERP solutions since 2017. As a marketer, blogger, and podcaster, she creates engaging content that highlights innovative SAP technologies and explores key topics including business trends, AI, Industry 4.0, and sustainability. She holds dual bachelor's degrees in Finance & Accounting and English Translation, along with a master's degree in Business Administration and Foreign Trade, specializing in marketing. With her background in digital transformation, Oyku communicates technology trends and industry insights to help professionals navigate the evolving business landscape.
Oyku's LinkedIn and SAP Community
=====
Key Topics: real-time security, ERP monitoring, cloud threats, SAP S/4HANA, access management, audit logs, AI threat detection, insider threats, privileged accounts, predictive intelligence
Dylan and Max sit down with Aaron, Software Architect at Airplane Manager, to talk business aviation ops tech and where AI is headed. If you're running lean (two pilots, one tail, no dispatcher), this is the roadmap for reducing busywork without losing operational control. They dig into integrations, offline trip tools, and why "apps" might just become background APIs. Listen in and subscribe for more pilot-to-pilot ops talk. Check out the software Dylan and Max both use to run their departments: Airplane Manager

Show Notes
0:00 Intro
2:01 Airplane Manager Overview
11:07 App, AI, and Security
21:08 Flight Operations Efficiency
36:18 Evolving Best Practices with Tech
49:39 Final Thoughts

Our Sponsors
Tim Pope, CFP® — Tim is both a CERTIFIED FINANCIAL PLANNER™ and a pilot. His practice specializes in aviation professionals and aviation 401k plans, helping clients pursue their financial goals by defining them, optimizing resources, and monitoring progress. Click here to learn more. Also check out The Pilot's Portfolio Podcast.
Advanced Aircrew Academy — Enables flight operations to fulfill their training needs in the most efficient and affordable way—anywhere, at any time. They provide high-quality training for professional pilots, flight attendants, flight coordinators, maintenance, and line service teams, all delivered via a world-class online system. Click here to learn more.
Raven Careers — Helping your career take flight. Raven Careers supports professional pilots with resume prep, interview strategy, and long-term career planning. Whether you're a CFI eyeing your first regional, a captain debating your upgrade path, or a legacy hopeful refining your application, their one-on-one coaching and insider knowledge give you a real advantage. Click here to learn more.
The AirComp Calculator™ is business aviation's only online compensation analysis system.
It can provide precise compensation ranges for 14 business aviation positions in six aircraft classes at over 50 locations throughout the United States in seconds. Click here to learn more. Vaerus Jet Sales — Vaerus means right, true, and real. Buy or sell an aircraft the right way, with a true partner to make your dream of flight real. Connect with Brooks at Vaerus Jet Sales or learn more about their DC-3 Referral Program. Harvey Watt — Offers the only true Loss of Medical License Insurance available to individuals and small groups. Because Harvey Watt manages most airlines' plans, they can assist you in identifying the right coverage to supplement your airline's plan. Many buy coverage to supplement the loss of retirement benefits while grounded. Click here to learn more. VSL ACE Guide — Your all-in-one pilot training resource. Includes the most up-to-date Airman Certification Standards (ACS) and Practical Test Standards (PTS) for Private, Instrument, Commercial, ATP, CFI, and CFII. 21.Five listeners get a discount on the guide—click here to learn more. ProPilotWorld.com — The premier information and networking resource for professional pilots. Click here to learn more. Feedback & Contact Have feedback, suggestions, or a great aviation story to share? Email us at info@21fivepodcast.com. Check out our Instagram feed @21FivePodcast for more great content (and our collection of aviation license plates). The statements made in this show are our own opinions and do not reflect, nor were they under any direction of any of our employers.
The host of episode 108 of Venture Everywhere is Harm-Julian Schumacher, co-founder and CEO of OneLot, a financing platform for used car dealers in the Philippines. He talks with Reto Bolliger, co-founder and CEO of Chaiz, an online marketplace for extended vehicle warranties. Reto shares how climbing Kilimanjaro led him to build a travel company, and how an investor in that business introduced him to the surprisingly profitable world of extended car warranties. He discusses how Chaiz challenges the industry consensus that warranties "must be sold" through aggressive tactics, instead building trust through transparency and offering consumers prices up to 40% cheaper than dealerships.

In this episode, you will hear:
• Building the first online marketplace to compare and buy extended car warranties.
• Offering dealership products at 40% lower prices through digital channels.
• Replacing aggressive sales tactics with transparency and education.
• Leveraging AI for customer support and AI search optimization.
• Embedding warranty APIs for cross-selling through partner platforms.

Learn more about Reto Bolliger | Chaiz
LinkedIn: https://www.linkedin.com/in/reto-bolliger
Website: https://www.chaiz.com
Learn more about Harm-Julian Schumacher | OneLot
LinkedIn: https://www.linkedin.com/in/harm-julian-schumacher
Website: https://www.onelot.ph
Today, host Sandy Vance sits down with Jeff McCool, the AVP of Healthcare Conversational AI at Amelia. Join a discussion with SoundHound AI, the leader in conversational intelligence, to learn how AI agents are helping healthcare companies overcome challenges like improving patient care and streamlining operations. Hear how the SoundHound Amelia Platform lets you build AI agents that understand, reason, and act so you can create the most seamless conversational experience. In this episode, they talk about: The types of healthcare organizations Amelia partners with How Amelia's platform approach supports health systems in multiple ways beyond a single tool Working with clients to establish guardrails for safe and effective AI adoption How conversational AI is expected to evolve in the coming years Real-world implementation success stories and lessons learned What differentiates SoundHound AI's agents and the broader ecosystem created through partnerships Advice for healthcare leaders at provider and payer organizations navigating next steps with AI A Little About Jeff: Jeff McCool works at the intersection of healthcare and AI, helping organizations use conversational technology to solve real operational challenges. He is AVP of Healthcare Conversational AI at Amelia, where he partners with health systems to deploy AI-powered virtual agents that improve patient and employee experiences while reducing friction in everyday workflows. His focus is on practical AI adoption, what works in production, how teams implement it, and how to scale responsibly. Previously, Jeff held leadership roles at Ciox and Datavant Health, leading digital growth initiatives centered on interoperability, APIs, and healthcare data exchange. His background combines healthcare operations, technology, and go-to-market strategy. Jeff holds an MBA from UNC Kenan-Flagler Business School and a degree in Banking and Finance from the University of Georgia's Terry College of Business.
The voices telling you it won't work usually belong to people who never tried. Nobody gives you permission to take a chance. You just do it.

Chris built a 50K MRR business without a formal education, a tech background, or a plan. As an actor, a car dealership paid him $400 to be in a commercial and he thought, "If I can pretend to do this, what happens if I just actually do it?" From there came teaching himself APIs, webhooks, and integrations, and enough failures to make most people quit. He's now responsible for 40% of some dealerships' bottom lines, working remotely from Ottawa, heading to Costa Rica.

We talked about why people don't take that first step. Chris's take is it's mostly the room you're in. When you move somewhere nobody knows you, the risk calculus changes. The voices telling you you're going to look stupid usually belong to people who never left.

We also got into social media: the throttled notification drip sequences designed to keep you coming back, the rage bait economy, the positive reinforcement loop that rewards the most outrageous behavior. His advice was simple: put your phone down and tackle your life goals head on.

Chris also hosts Bad Hombres TV on YouTube.
Michael Truell, CEO of Cursor, sits down with Patrick Collison, CEO of Stripe and an investor in Anysphere, to talk about Collison's history with Smalltalk and Lisp, the MongoDB and Ruby decisions Stripe still lives with 15 years later, why he'd spend even more time on API design if he could do it over, and whether AI is actually showing up in economic productivity data. This episode originally aired on Cursor's podcast. Resources: Follow Patrick Collison on X: https://twitter.com/patrickc Follow Michael Truell on X: https://twitter.com/mntruell Follow Cursor: https://www.youtube.com/@cursor_ai Stay Updated:Find a16z on YouTube: YouTubeFind a16z on XFind a16z on LinkedInListen to the a16z Show on SpotifyListen to the a16z Show on Apple PodcastsFollow our host: https://twitter.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Have you ever wondered why "compliance" still gets treated like a slow, spreadsheet-heavy chore, even though the rest of the business is moving at machine speed? In this episode of Tech Talks Daily, I sit down with Matt Hillary, Chief Information Security Officer at Drata, to talk about what actually changes when AI and automation land in the middle of governance, risk, and compliance. Matt brings a rare viewpoint because he lives this day-to-day as "customer zero," running Drata internally while also leading IT, security, GRC, and enterprise apps. We get practical fast. Matt shares how AI-assisted questionnaire workflows can turn a 120-question security assessment from a late-afternoon time sink into something you can complete with confidence in minutes, then still make it upstairs in time for dinner. He also explains how automation flips the audit dynamic by moving from random sampling to continuous, full-population checks, using APIs to validate evidence at scale, without hounding control owners unless something is actually wrong. We also talk about what security leadership really looks like when the stakes rise. Matt reflects on lessons from his time at AWS, why curiosity and adaptability matter when the "canvas" keeps changing, and how customer focus becomes the foundation of trust. That theme runs through the whole conversation, including the idea that the CISO role is steadily turning into a chief trust officer role, where integrity, transparency, and credibility under pressure matter as much as tooling. And because burnout is never far away in security, we dig into the human side too. Matt unpacks how automation can reduce cognitive load, but also warns about swapping one kind of pressure for another, especially when teams get trapped producing endless dashboards and vanity metrics instead of focusing on the few measures that actually reduce risk. 
To wrap things up, Matt leaves a song for the playlist, Illenium's "You're Alive," plus a book recommendation, "Lessons from the Front Lines, Insights from a Cybersecurity Career" by Asaf Karen, which he says stands out for how it treats the human side of security leadership. If you're thinking about modernizing compliance in 2026 without losing the human element, his parting principle is simple and powerful: be intentional, keep asking why, and spend your limited time on what truly matters. So where do you land on this shift toward continuous trust, do you see it becoming the default expectation for buyers and auditors, and what should leaders do now to make sure automation reduces pressure instead of quietly adding more? Share your thoughts with me, I'd love to hear how you're approaching it.
This week on Defender Fridays, Farshad Abasi, Founder and CEO of Forward Security and Eureka DevSecOps, discusses how AI can help us set a new standard in app and cloud security. Farshad brings over 27 years of industry experience to the forefront of cybersecurity innovation. His professional journey includes key technical roles at Intel and Motorola, evolving into senior security positions as the Principal Security Architect for HSBC Global, and Head of IT Security for the Canadian division. Farshad's commitment to the field extends to his role as an instructor at BCIT, where he imparts his wealth of knowledge to the next generation of cybersecurity experts. His diverse experience, which spans startups to large enterprises, informs his approach to delivering adaptive and reliable solutions.

Engaged actively in the cybersecurity community through roles in BSides Vancouver/MARS, OWASP Vancouver/AppSec PNW, and as a CISSP designate, Farshad's vision and leadership continue to drive the industry forward. Under his guidance, Forward Security is setting new standards in application and cloud security. Learn more at https://www.eurekadevsecops.com/ and https://forwardsecurity.com/

Register for Live Sessions
Join us every Friday at 10:30am PT for live, interactive discussions with industry experts. Whether you're a seasoned professional or just curious about the field, these sessions offer an engaging dialogue between our guests, hosts, and you – our audience.
Register here: https://limacharlie.io/defender-fridays
Subscribe to our YouTube channel and hit the notification bell to never miss a live session or catch up on past episodes!

Sponsored by LimaCharlie
This episode is brought to you by LimaCharlie, a cloud-native SecOps platform where AI agents operate security infrastructure directly. Founded in 2018, LimaCharlie provides complete API coverage across detection, response, automation, and telemetry, with multi-tenant architecture designed for MSSPs and MDR providers managing thousands of unique client environments.

Why LimaCharlie?
Transparency: Complete visibility into every action and decision. No black boxes, no vendor lock-in.
Scalability: Security operations that scale like infrastructure, not like procurement cycles. Move at cloud speed.
Unopinionated Design: Integrate the tools you need, not just those contracts allow. Build security on your terms.
Agentic SecOps Workspace (ASW): AI agents that operate alongside your team with observable, auditable actions through the same APIs human analysts use.
Security Primitives: Composable building blocks that endure as tools come and go. Build once, evolve continuously.

Try the Agentic SecOps Workspace free: https://limacharlie.io
Learn more: https://docs.limacharlie.io

Follow LimaCharlie
Sign up for free: https://limacharlie.io
LinkedIn: /limacharlieio
X: https://x.com/limacharlieio
Community Discourse: https://community.limacharlie.com/
Host: Maxime Lamothe-Brassard - CEO / Co-founder at LimaCharlie
There's a lethal trifecta of AI risks: access to private data, exposure to untrusted content, and external communication. In this conversation, Risky Business host Patrick Gray chats with Josh Devon, the co-founder of Sondera, about how to best address these risks. There is no magic solution to this problem. AI models mix code and data, are non-deterministic, and are crawling around all over your enterprise data and APIs as you read this. But in this sponsored interview, Josh outlines how we can start to wrap our hands around the problem. This episode is also available on Youtube. Show notes
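The "lethal trifecta" framing can be expressed as a simple guardrail check: an agent that combines all three risk factors deserves extra scrutiny before deployment. A minimal sketch, assuming a toy capability model; the capability names and example agent configurations are illustrative, not from any real product discussed in the episode.

```python
# The three risk factors: an AI agent is most dangerous when it holds
# all of them at once, because untrusted content can then steer it
# into exfiltrating private data over an outbound channel.
TRIFECTA = {
    "private_data_access",
    "untrusted_content",
    "external_communication",
}

def has_lethal_trifecta(capabilities: set) -> bool:
    """Return True if an agent's capability set contains all three factors."""
    return TRIFECTA <= capabilities

# Hypothetical agent configurations for illustration.
web_summarizer = {"untrusted_content"}  # reads web pages, nothing else
enterprise_assistant = {
    "private_data_access",      # crawls internal docs and APIs
    "untrusted_content",        # ingests inbound email
    "external_communication",   # can send messages out
}

print(has_lethal_trifecta(web_summarizer))       # False
print(has_lethal_trifecta(enterprise_assistant)) # True
```

The point Josh makes still holds: a check like this only flags the combination; because models mix code and data non-deterministically, removing one leg of the trifecta (or mediating it) is the mitigation, not a filter on prompts.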
Open banking in the United States has been on a long and winding road, and the journey is far from over. In this episode, I sit down with Steve Boms, Executive Director of FDATA North America, the trade association representing the fintech companies at the heart of the open banking ecosystem. Steve has been one of the most active voices in shaping U.S. open banking policy for over a decade, and he brings a uniquely informed perspective to where things stand today.

We dig into the current state of the 1033 rule and what amendments are likely coming, FDATA's firm stance that banks should not be permitted to charge fees for consumer-directed data access, and the growing complexity created by a patchwork of state-level regulations on data privacy, AI, and fintech products. We close with a fascinating discussion on how agentic AI, with its need for clear consent frameworks, robust APIs, and defined liability rules, could become the next major catalyst that finally forces meaningful open banking progress in this country.

In this podcast you will learn:
• The origin story of FDATA in the UK and how it came to the US.
• How Steve has been involved with the CFPB and Section 1033 since 2015.
• How FDATA has been engaged in open banking policy over the past 10+ years.
• How open banking and open finance have evolved in the UK.
• Who their members are and what FDATA does for them.
• Where we are at today when it comes to the 1033 rule.
• The FDATA view on banks charging fees for access to their data.
• Why this is not really a bank versus fintech fight.
• Why it may be many years before we have a final rule for open banking.
• Why data access negotiations have been put on pause for now.
• What else Steve is working on beyond open banking.
• Why he is increasingly concerned about the Balkanization of financial services regulation (see his recent Open Banker column).
• How they coordinate with the other fintech trade associations.
• How they think about the standardization of API and other data standards.
• Why Steve is optimistic about the future of open banking in the U.S.
• Why AI agents could be a catalyzing force for clear open banking rules.

Connect with Fintech One-on-One:
Tweet me @PeterRenton
Connect with me on LinkedIn
Find previous Fintech One-on-One episodes
This episode is a full "build a business in 40 minutes" demo showing how AI collapses what used to take teams (creative production + sales ops + support) into a handful of prompts. Samruddhi generates a high-production video ad in Google AI Studio using a JSON-style prompt framework, then spins up a working voice sales/support agent in Vapi via Claude Desktop + MCP, so the agent is created from a single prompt instead of clicking through the UI. The conversation also covers why "interfaces matter less" in an agent-first world, why workflow tools (like n8n) still have a role, and how memory layers like Mem0 unify context across channels (email/WhatsApp/etc.) so you can take actions without hunting.

Timestamps
0:00 — "Single person billion-dollar company" belief + AI driving 10x execution speed
1:57 — Plan: create the ad in Google AI Studio (Veo 3.1) + build a voice agent using Vapi MCP via Claude Desktop
2:42 — Smithery: marketplace for MCP servers
3:39 — MCP for non-technical listeners: "like an API, but agents use it to talk to external services"
4:22 — Inside Vapi MCP: tool list = APIs the agent can choose from
5:06 — AI Studio setup: video generation playground + select Veo 3.1
6:16 — JSON prompting framework begins (structure → production-level output)
6:28 — Keys: description, style, camera, lighting, environment, elements, motion, ending, text
9:05 — Prompts/scripts can be AI-generated (humans provide guardrails)
10:41 — Need an API key to generate videos in AI Studio
10:54 — Ad review: strong realism; last segment looks AI-ish → iterate prompt
13:05 — Install Vapi MCP via npx from Smithery + add Vapi API key
13:46 — Claude Desktop: Vapi MCP appears under Connectors/Tools (not Claude web)
14:05 — Prompt the agent build: "Fresh Pause" + role, tasks, FAQs, call flows
18:23 — Testing: "Talk to assistant" starts a live call simulation
19:20 — Deployment: assign a phone number; Vapi provides free/test numbers (up to a limit)
21:57 — Mem0 / Supermemory: memory layer across apps/agents to keep context
24:13 — Why memory layers help: fewer MCPs → less slowdown/hallucination; no need to specify where to search
26:36 — MCPs + slide decks: mention of Gamma MCP via Claude
27:34 — Future of n8n/Zapier: they persist, but prompting increasingly generates workflows
31:38 — Prediction market trading algos (Kalshi/Polymarket) + AI improves speed/decision-making
36:02 — Closing vision: help orgs 10x execution speed, especially non-technical leaders (40+) with domain expertise

Tools & technologies mentioned
Google AI Studio (Video Generation Playground) — Generate an 8-second video ad.
Veo 3.1 — Google video model used for "production-level" output.
JSON Prompting Framework — Structured key/value prompts for story, visuals, camera, lighting, motion, ending frame.
Claude Desktop — Runs connectors/tools (including MCP servers).
MCP (Model Context Protocol) — Lets agents call external services/tools based on intent.
Smithery — Directory/marketplace for MCP servers.
Vapi — Voice agent platform; create agents + assign phone numbers.
Vapi MCP Server — Enables Claude to operate Vapi via prompts (create/list/configure).
npx — Installs MCP server quickly from the terminal.
API Keys — Required for AI Studio generation + Vapi authentication.
Mem0 / Supermemory — Cross-channel memory layer to retrieve context automatically.
Knowledge Graph — Underlying structure for semantic retrieval across interactions.
Glean — Referenced as a comparison point for search/context retrieval.
Gamma MCP — Example of generating slide decks via MCP.
n8n / Zapier — Workflow automation tools discussed in an MCP-first future.
OpenClaw — Mentioned as agent tooling that can help with steps like obtaining API keys.
Kalshi / Polymarket — Prediction markets referenced in the trading/AI speed discussion.

Subscribe at thisnewway.com to get the step-by-step playbooks, tools, and workflows.
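To make the JSON prompting framework from the episode concrete, here is a sketch of what such a structured video prompt might look like. The keys follow the list given in the show notes (description, style, camera, lighting, environment, elements, motion, ending, text) and "Fresh Pause" is the brand name used in the episode's demo; all other values are invented for illustration.

```python
import json

# Structured prompt: each key constrains one aspect of the generated video,
# which is what pushes the model toward "production-level" output.
video_prompt = {
    "description": "An 8-second ad for the Fresh Pause brand",
    "style": "cinematic, high production value",
    "camera": "slow dolly-in, shallow depth of field",
    "lighting": "golden hour, soft rim light",
    "environment": "rooftop terrace at sunset",
    "elements": ["product centered in frame", "city skyline behind"],
    "motion": "gentle steam rising in slow motion",
    "ending": "product centered, logo crisp and in focus",
    "text": "Fresh Pause",
}

# Serialize; the JSON string is what gets pasted into the video model's prompt box.
prompt_str = json.dumps(video_prompt, indent=2)
print(prompt_str)
```

As discussed around the 9:05 mark, the values themselves can be AI-generated; the human contribution is the guardrail structure, and iterating on individual keys (e.g. tightening "ending") is how the AI-ish last segment gets fixed.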
You ever wonder what happens when you mix AI, multifamily real estate, and a bit of jet lag? In this episode, Mike Brewer welcomes Dan Smith — the man who's turning global PropTech upside down. From student housing in the UK to shaking hands in Dubai, Dan shares his journey from launching gin distilleries to launching VerbaFlow, an AI-driven communications platform revolutionizing how we operate multifamily communities.

You'll hear how Dan went from sports travel to student housing and landed right in the heart of real estate tech — and how he's using AI to actually solve problems, not just throw buzzwords at them.

They unpack:
The true power of AI in streamlining operations and enhancing resident experience
Why staffing strategies must evolve (and how AI gives time back)
What today's operators can learn from startups with zero legacy systems
The future of centralized systems, open APIs, and AI-first operations
Why community and hospitality are more vital now than ever

Dan also sounds off on leapfrogging legacy tech, the critical role of empathy in leadership, and how multifamily pros can gear up for what's not just a transformation — but a metamorphosis.

Oh, and yes, he does look tired. And no, it's not because of his baby — it's because he's singlehandedly bringing AI to the global stage.

Smash that like button if you believe the future of multifamily is more human because of AI — not in spite of it.
We're here for a CHIPS Act megapod, in person with Mike Schmidt and Todd Fisher, the director and founding CIO of the CHIPS Program Office, respectively. We discuss… The mechanisms behind the success of the CHIPS Act, What CHIPS can teach us about other industrial policy challenges, like active pharmaceutical ingredients (APIs) and rare earths, What it takes to build a successful industrial policy implementation team, How the fear of “another Solyndra” is holding back US industrial policy, Chris Miller's recent interest in revitalizing America's chemical industry. This post is a collaboration with the Factory Settings Substack: https://www.factorysettings.org/. Subscribe for more insights from former CHIPS Program Office leaders! Suno song link: https://suno.com/s/wwVYK10LfrAD5zK2 Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of Next in Media, I sit down live at the Kochava Summit in Sandpoint, Idaho, with Charles Manning, founder and CEO of Kochava. We go deep on one of the most pressing questions facing the industry right now: how profound is the shift to agentic advertising and AI-driven workflows? Charles argues it is not a decade-long evolution like programmatic was. It is breathtakingly faster, and the companies that understand how to use their first-party data as a competitive kernel, rather than leaking it to the walled gardens, are the ones that will come out ahead. He draws a compelling analogy: if programmatic changed the auction, AI is about to change the workflow.

We also dig into Kochava's CTV journey, from its mobile app roots to building measurement tools adopted by LG, Samsung, Vizio, and Roku, and how the view-and-do combo between the TV screen and the mobile device is creating powerful new outcome-based measurement opportunities for brands. Charles breaks down what holding companies should fear (and fix), why the ad tech supply chain is due for serious consolidation, and why he predicts a wave of take-privates and roll-ups followed by a bonanza of public offerings over the next two years. He also introduces Station One, Kochava's integrative AI hub that acts like a Slack for AI workflows, designed to help teams transform how they work without giving up control of their data.

Key Highlights:
⚡ AI vs. Programmatic: Charles explains why the shift to agentic advertising is moving breathtakingly faster than programmatic did. While programmatic took over a decade to fully reshape the auction, AI is set to transform the entire workflow within the next 16 months.
Scott and Wes unpack WebMCP, a new standard that lets AI interact with websites through structured tools instead of slow, bot-style clicking. They demo it, debate imperative vs declarative APIs, and share their hottest take: this might be the web's real AI moment. Show Notes 00:00 Welcome to Syntax! 00:16 Introduction to WebMCP 01:07 Understanding WebMCP Functionality. 03:06 Interacting with AI through WebMCP. 06:49 WebMCP browser integration. 08:25 Brought to you by Sentry.io. 08:49 Benefits of WebMCP. 11:51 Token efficiency. 13:02 My biggest questions. 14:13 My take on this tech. Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
In this episode, I'm joined by Bill Briggs, CTO at Deloitte, for a straight-talking conversation about why so many organizations get stuck in what he calls "pilot purgatory," and what it takes to move from impressive demos to measurable outcomes. Bill has spent nearly three decades helping leaders translate the "what" of new technology into the "so what" and the "now what," and he brings that lens to everything from GenAI to agentic systems, core modernization, and the messy reality of technical debt. We start with a moment of real-world context: Bill calling in from San Francisco with Super Bowl week chaos nearby, and the funny way Waymo selfies quickly turn into "oh, another Waymo" once the novelty fades. That same pattern shows up in enterprise tech, where shiny tools can grab attention fast while the harder work (data foundations, APIs, governance, and process redesign) gets pushed to the side. Bill breaks down why layering AI on top of old workflows can backfire, including the idea that you can "weaponize inefficiency" and end up paying for it twice: once in complexity and again in compute costs. From there, we get into his "innovation flywheel" view, where progress depends on getting AI into the hands of everyday teams, building trust beyond the C-suite, and embedding guardrails into engineering pipelines so safety and discipline do not rely on wishful thinking. We also dig into technical debt with a framing I suspect will stick with a lot of listeners. Bill explains three types (malfeasance, misfeasance, and non-feasance) and why most debt comes from understandable trade-offs, not bad intent. It leads into a practical discussion of how to prioritize modernization without falling for simplistic "cloud good, mainframe bad" narratives.
We finish with a myth-busting riff on infrastructure choices, a quick look at what he sees coming next in physical AI and robotics, and a human ending that somehow lands on Beach Boys songs and pinball machines, because tech leadership is still leadership, and leaders are still people. So after hearing Bill's take, where do you think your organization is right now: measurable outcomes, success theater, or somewhere in between? What would you change first? Please share your thoughts.

Useful Links
Connect With Bill Briggs
Deloitte Tech Trends 2026 report
Deloitte The State of AI in the Enterprise report
Justin Moon leads the open-source AI initiative at the Human Rights Foundation.
Justin on Nostr: https://primal.net/justinmoon
Human Rights Foundation: https://hrf.org/program/ai-for-individual-rights/
Easy OpenClaw Deployment: https://clawi.ai/
EPISODE: 191
BLOCK: 936962
PRICE: 1473 sats per dollar
(00:01:35) Justin Moon and early show memories
(00:03:52) OpenClaw
(00:04:16) Agents change how we use computers
(00:07:07) OpenClaw's light bulb moment
(00:09:25) Agents as UX glue for Freedom Tech
(00:10:00) HRF AI work, self-hosting breakthrough, and running your own stack
(00:12:50) AI simplifies hard Bitcoin UX: coin control, backups, photos
(00:14:22) OpenClaw + OpenAI: does it matter?
(00:16:01) AI leverage for builders: open protocols win
(00:19:22) Positive feedback loop: agents and open protocols
(00:20:14) Costs vs privacy: local models, token spend, and KYC walls
(00:23:15) Local hardware economics and historical parallels
(00:27:20) Will capability gaps narrow? Mobile and on-device futures
(00:29:56) Cutting-edge vs private setups; data lock-in and training moats
(00:31:53) Competition, regulation risks, and hidden capabilities
(00:34:05) China's open models: incentives, biases, and global adoption
(00:38:56) American and European open models; Big Tech dynamics
(00:40:56) Apple, hardware positioning, and agent UX form factors
(00:42:48) Google's advantage: data, integration, and vertical stack
(00:44:32) Acceleration ahead: productivity leaps and societal shifts
(00:45:21) Jobs, layoffs, and disruptive labor realignment
(00:47:55) From global commons to gated neighborhoods: bots and slop
(00:50:21) Nostr as local internet: webs of trust and bot filters
(00:51:57) Cancel culture contagion and shrinking public square
(00:54:59) Demographic decentralization and small-town resilience
(00:55:00) Lean platforms: X/Twitter staffing as canary
(00:56:59) Universal high income: incentives and realism
(00:58:48) Prepare your household: seize tools, avoid flat feet
(01:01:01) Marmot DMs over Nostr: agents need open messaging
(01:03:11) Building Pika: encrypted chat and voice over Marmot
(01:07:00) Generative UI and real-time media over Nostr
(01:10:07) APIs, bans, and why open protocols become the convenient path
(01:14:02) Future gates: Bitcoin paywalls, webs of trust, or dystopian KYC
(01:17:19) Getting started: try OpenClaw safely and learn by play
(01:22:14) Agents, Cashu, and Lightning UX: bots as channel managers
(01:25:10) Federations run by machines? Enclaves and AI guardians
(01:27:50) Maple, Vora, and bringing self-sovereign AI to mainstream
(01:29:00) Security kudos and caveats; Coinbase and cold storage
(01:30:02) Justin's education plan and upcoming streams
more info on the show: https://citadeldispatch.com
learn more about me: https://odell.xyz
Masks, costumes, confetti, and expensive floats for world politics. As in the Saturnalia and Lupercalia, as in the festivals of the Apis bull, from Mardi Gras in New Orleans to Venice with its masks, samba schools, comparsas, diabladas, the beat of candombe drums, murgas, and everything we can celebrate in this sad reality of discrimination, persecution, violence, and ambition.

ECDQEMSD podcast episode 6241: Carnaval Descarnado
Hosts: El Pirata and El Sr. Lagartija
https://canaltrans.com

Noticias Del Mundo (World News): Navalny died poisoned - The Pentagon used A.I. to capture Maduro - Ju-ae, Kim Jong-un's daughter - Popularity beyond measure - Arranging the agenda - Therian out of control - The fat of the Capitals - The councilwoman's little monkey

Historias Desintegradas (Disintegrated Stories): The sensual machine - Too much pressure - In matters of love - Returning the product - In chilly Punta Arenas - What he didn't want to hear - She can't dance either? - Kempes in '78 - Error in the registry - Extra-long cigarettes - Original name - Carnival in full swing - The thieves - Breaded almonds - The almond tree - Impossible loves and more...

En Caso De Que El Mundo Se Desintegre - Podcast has no advertising, sponsors, or organizations contributing to keep it on the air. Only the cooperative system of those who contribute through subscriptions makes it possible for all of this to remain a reality. Thank you, Dragones Dorados!!

NO AI: ECDQEMSD Podcast does not use any artificial intelligence directly in its production. Design, scripting, music, editing, and voices are entirely our own human work.
Human-Centric Merchant Services: Optimizing Payment Systems with Jimmy Estrada

In this episode of The Thoughtful Entrepreneur Podcast, host Josh Elledge sits down with Jimmy Estrada, the Founder and Owner of JELA Payments Systems, to demystify the complex world of merchant services. Jimmy shares how his firm bridges the gap between massive, faceless payment processors and the independent business owners who often find themselves stranded when technical glitches or funding holds arise. This conversation offers a strategic look at how a high-touch, consultative approach to payment systems can help B2B firms and retailers reclaim their profit margins, mitigate risk, and ensure that their financial "plumbing" remains reliable and transparent in an increasingly automated marketplace.

The Power of Personal Partnership in Payment Processing

The modern payment landscape is dominated by automated "plug-and-play" solutions, yet many business owners discover the limitations of these platforms only when a crisis occurs. Jimmy explains that the primary value of a merchant services partner lies in providing a direct human advocate who understands the specific risk profile of a client's industry. When funds are unexpectedly held or a technical integration fails, having a dedicated account manager often means the difference between a 24-hour resolution and weeks of lost revenue. By moving beyond a "set it and forget it" mentality, businesses can proactively address industry-specific risk factors—such as those found in medical or legal services—before they escalate into costly holds or compliance headaches.

Transparency in pricing remains one of the greatest challenges for entrepreneurs, as merchant statements are notoriously difficult to decipher. Jimmy advocates for a "clear-water" approach to fee structures, emphasizing that business owners should have a granular understanding of non-negotiable interchange fees versus provider markups.
Whether a business utilizes an interchange-plus model, compliant surcharging, or dual pricing, the key to long-term profitability is consistent monitoring and "junk fee" audits. These regular reviews ensure that businesses aren't paying for redundant services or hidden charges that frequently creep into statements over time, allowing leaders to reinvest those savings back into their core operations.

Optimization is not just about chasing the lowest possible rate; it is about ensuring that a payment system is fully integrated with a company's existing software and customer journey. Jimmy discusses how his firm works with various hardware and software vendors to create seamless API integrations that simplify the checkout experience for both in-person and card-not-present transactions. For businesses lacking in-house technical expertise, a trusted payment partner acts as an outsourced department that manages the technical burden of PCI compliance and security updates. Ultimately, a true partnership is built on integrity—where the provider prioritizes the client's long-term stability over a quick sale, even if that means advising a client to stay with their current provider if the rates are already fair.

About Jimmy Estrada

Jimmy Estrada is the Founder and Owner of JELA Payments Systems, where he leverages over a decade of experience in the merchant services industry. Known for his "integrity-first" approach, Jimmy specializes in helping high-volume and B2B merchants navigate the technical and financial complexities of credit card processing with a focus on education and personalized support.

About JELA Payments Systems

JELA Payments Systems is a merchant services provider that offers customized payment solutions ranging from mobile processing to enterprise-level integrations. The company prioritizes human-to-human interaction, providing dedicated account management...
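The interchange-plus model Jimmy describes splits every card fee into two parts: non-negotiable interchange set by the card networks, plus the provider's markup. A minimal Python sketch of that split, using hypothetical round-number rates (not real interchange tables), shows why a "junk fee" audit focuses only on the markup column:

```python
# Hedged illustration of interchange-plus pricing: total card cost =
# network-set interchange + provider markup. All rates are hypothetical.

def interchange_plus_cost(amount, interchange_pct, interchange_fixed,
                          markup_pct, markup_fixed):
    """Return (interchange_fee, markup_fee, total_fee) for one transaction."""
    interchange = amount * interchange_pct + interchange_fixed
    markup = amount * markup_pct + markup_fixed
    return interchange, markup, interchange + markup

# A hypothetical $100.00 sale: 1.80% + $0.10 interchange, 0.25% + $0.10 markup.
interchange, markup, total = interchange_plus_cost(
    100.00, 0.018, 0.10, 0.0025, 0.10)
print(f"interchange ${interchange:.2f}, markup ${markup:.2f}, total ${total:.2f}")
# The interchange portion is fixed by the networks, so only the provider's
# markup is negotiable; that is the line a statement audit targets.
```

Under these example numbers the merchant pays $2.25 in total, of which only $0.35 (the markup) is up for negotiation; the rest passes through to the card networks regardless of provider.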