The reception to our recent post on Code Reviews has been strong. Catch up!

Amid a maelstrom of discussion on whether or not AI is killing SaaS, one of the top publicly listed SaaS companies in the world has just reported record revenues, clearing well over $1.1B in ARR for the first time with a 28% margin. As we comment on the pod, Aaron Levie is the rare public company CEO equally at home in both worlds of Silicon Valley and Wall Street/Main Street, by day helping 70% of the Fortune 500 with their Enterprise Advanced Suite, and yet by night often found in the basements of early startups and tweeting viral insights about the future of agents.

Now that Cursor, Cloudflare, Perplexity, Anthropic, and more have made Filesystems and Sandboxes and various forms of "Just Give the Agent a Box" cool (not just cool; it is now one of the single hottest areas in AI infrastructure, growing 100% MoM), we find it a delightfully appropriate time to do the episode with the OG CEO who has been giving humans and computers Boxes since he was a college dropout pitching VCs at a Michael Arrington house party.

Enjoy our special pod, with fan favorite returning guest/guest cohost Jeff Huber!

Note: We didn't directly discuss the AI vs SaaS debate - Aaron has done many, many, many other podcasts on that, and you should read his definitive essay on it.
Most commentators do not understand SaaS businesses because they have never scaled one themselves and deeply reflected on what the true value proposition of SaaS is.

We also discuss Your Company is a Filesystem. We also shout out CTO Ben Kus and the AI team, who talked about the technical architecture and will return for AIE WF 2026.

Full Video Episode

Timestamps

* 00:00 Adapting Work for Agents
* 01:29 Why Every Agent Needs a Box
* 04:38 Agent Governance and Identity
* 11:28 Why Coding Agents Took Off First
* 21:42 Context Engineering and Search Limits
* 31:29 Inside Agent Evals
* 33:23 Industries and Datasets
* 35:22 Building the Agent Team
* 38:50 Read Write Agent Workflows
* 41:54 Docs Graphs and Founder Mode
* 55:38 Token FOMO Culture
* 56:31 Production Function Secrets
* 01:01:08 Film Roots to Box
* 01:03:38 AI Future of Movies
* 01:06:47 Media DevRel and Engineering

Transcript

Adapting Work for Agents

Aaron Levie: Like, you don't write code, you talk to an agent and it goes and does it for you, and you maybe at best review it. That's even probably, like, largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works. All of the economy has to go through that exact same evolution. Right now, it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this, 'cause you'll see compounding returns. But that's just gonna take a while for most companies to actually go and get this deployed.

swyx: Welcome to the Latent Space pod. We're back in the Chroma studio with, uh, Chroma CEO Jeff Huber. Welcome, returning guest, now guest host.

Aaron Levie: It's a pleasure. Wow. How'd you get upgraded to, uh, to that?

swyx: Because he's like the perfect guy to be guest host for you.

Aaron Levie: That makes sense, actually. We love context.
We, we both really love context. We really do. We really do.

swyx: Uh, and we're here with, uh, Aaron Levie. Welcome.

Aaron Levie: Thank you. Good to, uh, good to be [00:01:00] here.

swyx: Uh, yeah. So we've all met offline and, like, chatted a little bit, but it's always nice to get these things in person and in conversation. Yeah. You just started off with so much energy. You're, you're super excited about agents.

Aaron Levie: I love agents.

swyx: Yeah. OpenClaw just got, uh, got bought by OpenAI. No, not bought, but you know, you know what I mean?

Aaron Levie: Some, some, you know, acquihire.

swyx: Executive hire.

Aaron Levie: Executive hire. Okay. Executive hire.

swyx: Hey, that's my term. Okay. Um, what are you pounding the table on on agents? You have so many insightful tweets.

Why Every Agent Needs a Box

Aaron Levie: Well, the thing that we get super excited by, that I think should be relatively obvious, is we've built a platform to help enterprises manage their corporate files, and the permissions of who has access to those files, and the sharing and collaboration of those files. All of those files contain really, really important information for the enterprise. It might have your contracts, it might have your research materials, it might have marketing information, it might have your memos. All that data obviously has, you know, predominantly been used by humans. [00:02:00] But there's been one really interesting problem, which is that, you know, humans only really work with their files during an active engagement with them, and then they kind of go away and you don't really see them for a long time. And all of a sudden, uh, with the power of AI and AI agents, all of that data becomes extremely relevant as this ongoing source of answers to new questions, of data that will transform into something else that produces value in your organization.
It, it contains the answer for the new employee that's onboarding, that needs to ramp up on a project. Um, it contains the answer to the right thing to sell a customer when you're having a conversation with them. It contains the roadmap information that's gonna produce the next feature. So all that data that previously we've been just sort of storing and, you know, occasionally forgetting about, 'cause we're only working on the new active stuff, all of that information becomes valuable to the enterprise. And it's gonna become extremely valuable to end users, because now they can have agents go find what they're looking for and produce new, new [00:03:00] value and new data on that information. And it's gonna become incredibly valuable to agents, because agents can roam around and do a bunch of work, and they're gonna need access to that data as well.

And, um, you know, sometimes that will be an agent that is sort of working on behalf of you, effectively as you, and they are kind of accessing all of the same information that you have access to and operating as you in the system. And then sometimes there's gonna be agents that are just, effectively, autonomous and kind of run on their own, and you're gonna collaborate and work with them kind of like you would another person. OpenClaw being the most recent, and maybe first real, sort of, you know, updating-everybody's-views-of-this-landscape version of what that could look like, which is: okay, I have an agent. It's on its own system, it's on its own computer, it has access to its own tools. I probably don't give it access to my entire life. I probably communicate with it like I would an assistant or a colleague, and then it sort of has this sandbox environment.
So all of that has massive implications for a platform that manages that [00:04:00] enterprise data. We think it's gonna just transform how we work with all of the enterprise content that we work with, and we just have to make sure we're building the right platform to support that.

swyx: The sort of shorthand I put it as is: as people build agents, everybody's just realizing that every agent needs a box. Yes. And it's nice to be called Box and just give everyone a box.

Aaron Levie: Hey, if, you know, if we can make that go viral, uh, like, I think that terminology...

swyx: That's the tagline. Every agent needs a box.

Aaron Levie: Every agent needs a box. If we can make that the headline of this, I'm fine with this. And that's the billboard I wanna, like... Yeah, exactly. Every agent needs a box. Um, I like it. Can we ship this? Like...

swyx: Okay, let's do it. Yeah.

Aaron Levie: Uh, my work here is done and I got the value I needed outta this podcast. Drinks.

swyx: Yeah.

Agent Governance and Identity

Aaron Levie: But, but, um, you know, so the thing that we kind of think about is, um, whether you think the number is 10x or a hundred x or whatever the number is, we're gonna have some order of magnitude more agents than people. That's inevitable. It has to happen. So then the question is, what is the infrastructure that's needed to make all those agents effective in the enterprise? Make sure that they are well governed. Make sure they're only doing [00:05:00] safe things on your information. Make sure that they're not getting exposed to the data that they shouldn't have access to. There's gonna be just incredibly, spectacularly crazy security incidents that will happen with agents, because you'll prompt-inject an agent and sort of find your way through the CRM system and pull out data that you shouldn't have access to.

Jeff Huber: Oh, we have... God.

Aaron Levie: Right?
I mean, that's just gonna happen all over the place, right? So, so then the thing is, how do you make sure you have the right security, the permissions, the access controls, the data governance? Um, we actually don't yet exactly know, in many cases, how we're gonna regulate some of these agents, right? If you think about an agent in financial services, does it have the exact same financial, sort of, uh, requirements that a human had? Or is the risk fully on the human that was interacting with or created the agent? All open questions. But no matter what, there's gonna need to be a layer that manages the data they have access to, the workflows that they're involved in, pulling up data from multiple systems. This is the new infrastructure opportunity in the era of agents.

swyx: You have a piece on agent identities, [00:06:00] which I think was today, um, which I think a lot of the security people are talking about, right? Like, I always think of this as: well, you need the human you, and then you need the agent you.

Aaron Levie: Yes.

swyx: And, uh, well, I don't know if it's that simple, but is Box going to have an opinion on that, or are you just gonna be like, well, we're just sort of the source layer? Yeah, let Okta or Auth0 handle that.

Aaron Levie: I think we're gonna have an opinion, and we will work with generally wherever the contours of the market end up. Um, and the reason that we're gonna have an opinion, more than on other topics probably, is because one of the biggest use cases for why your agent might need an identity is for file system access. So thus we have to kind of think about this pretty deeply. And I think, uh, unless you're like in our world, thinking about this particular problem all day long, it might be, you know, like, why is this such a big deal?
And the reason why it's a really big deal is because sometimes people sort of say, well, just give the agent an account on the system and just treat it like every other type of user on the system. The [00:07:00] problem is that I, as Aaron, don't really have any responsibility over anybody else's Box account in our organization. I can't see the Box account of any other employee that I work with. I am not liable for anything that they do. And they have, you know, strict privacy requirements on everything that they work on. Agents don't have those properties. The person who creates the agent probably is gonna, for the foreseeable future, take on a lot of the liability of what that agent does. That agent doesn't deserve any privacy, because it can't fully be autonomously operated and it doesn't have any legal, you know, kind of responsibility. So thus you can't just be like, oh, well, I'll just create a bunch of accounts and then I'll kind of work with that agent and I'll talk to it occasionally. Like, you need oversight of that. And so then the question is, how do you have a world where, sometimes, you have oversight of the agent, but what if that agent goes and works with other people? That person over there is collaborating with the agent on something you shouldn't have [00:08:00] access to. So we have all of these new boundaries that we're gonna have to figure out. You know, so far we've been in easy mode. We've hit the easy button with AI, which is: the agent just is you. When you're in Claude Code and you're in Cursor and you're in Codex, the agent is you. You're auth'ing into your services. It can do everything you can do. That's the easy mode. The hard mode is agents kind of running on their own.
People check in with them occasionally, they're doing things autonomously. How do you give them access to resources in the enterprise and not dramatically increase the security risk, and the risk that you might expose the wrong thing to somebody? These are all the new problems that we have to get solved. I like the identity layer and identity vendors as being a solution to that, but we'll need some opinions as well, because so many of the use cases are these collaborative file system use cases, which is: how do I give an agent a subset of my data? Give it its own workspace as well, 'cause it's gonna need to store off its own information that would be relevant for it. And how do I have the right oversight into that? [00:09:00]

Jeff Huber: One thing which, um, I think is kind of interesting to think about is, you know, how humans work, right? Like, I may not just give you access to the whole file. I might, like, sit next to you and scroll to this one part of the file and just show you that one part, and, like, you know...

swyx: Partial file access.

Jeff Huber: I'm just saying, I think, like, RAG does seem to be dead, right? Like, you wanna say something is dead? Uh-huh, probably RAG is dead. And, uh, like, the auth story to me seems incredibly unsolved and unaddressed by the existing state of, like, AI vendors. But...

Aaron Levie: Yeah, I think, um, I mean, you're taking it obviously really to the limit that we probably need to solve for. Yeah. And we built an access control system that was kind of like, you know, its own little world for a long time. And, um, the idea was this: it's a many-to-many collaboration system where I can give you any part of the file system. And it's a waterfall model. So if I give you access higher up in the system, you get everything below.
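As an aside, the waterfall model Aaron describes can be sketched in a few lines of Python. All names here are illustrative, not Box's actual API: the point is just that a grant at any node of the folder tree implicitly covers everything beneath it.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One folder in the tree; `grants` holds principals granted at this node."""
    name: str
    children: list["Node"] = field(default_factory=list)
    grants: set[str] = field(default_factory=set)

def has_access(root: Node, principal: str, path: list[str]) -> bool:
    """Walk from the root along `path`; access waterfalls down from any grant."""
    node = root
    if principal in node.grants:
        return True
    for part in path:
        node = next(c for c in node.children if c.name == part)
        if principal in node.grants:
            return True
    return False

# Grant an agent access to one subtree only.
docs = Node("docs", children=[Node("contracts"), Node("roadmap")])
root = Node("/", children=[docs, Node("hr")])
docs.grants.add("agent:research-bot")

print(has_access(root, "agent:research-bot", ["docs", "contracts"]))  # True
print(has_access(root, "agent:research-bot", ["hr"]))                 # False
```

Because the grant sits on `docs`, the agent sees everything under it, but nothing in `hr`; the hard problem Aaron goes on to describe is an agent needing grants that span several people's subtrees at once.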
And that kind of created immense flexibility, because I can kind of point you to any layer in the tree, but then you're gonna get access to everything below it. And that [00:10:00] mostly is working in this world. But you do have to manage this issue, which is: how do I create an agent that has access to some of my stuff and somebody else's stuff as well? Mm-hmm. And which parts do I get to look at as the creator of the agent? And these are just brand new problems. Yeah. Crazy. And when there was a human there, that was really easy to do. Like, if the three of us were all sharing, there'd be a Venn diagram where we'd have an overlapping set of things we've shared, but then we'd have our own ways that we shared with each other. In an agent world, somebody needs to take responsibility for what that agent has access to and what they're working on. These are, like, some of the most probably, you know, boring problems for 98% of people on the internet, but they will be the problems that are the difference between whether you can actually have autonomous agents in an enterprise context...

swyx: Yeah.

Aaron Levie: ...that are not leaking your data constantly.

swyx: No, like, I mean, you know, I run a very, very small company for my conference, and we already have data sensitivity issues. Yes. And some of my team members cannot see, uh, the others', and I can't imagine what it's like to run a Fortune 500, where you have to [00:11:00] worry about this. I'm just kinda curious... like, you talk to a lot, like, 70, 80% of the Fortune 500 are your customers.

Aaron Levie: Yep. 67%. Just so we're being very precise.

swyx: So, yeah, I'm not...

Aaron Levie: Okay. Okay.

swyx: Something I'm rounding up. Yes. Round up.
swyx: I'm projecting to, for...

Aaron Levie: The government.

swyx: I'm projecting to the end of the year.

Aaron Levie: Okay.

swyx: There you go.

Aaron Levie: You do make it sound like we've gotta be on this. Like, we're taking way too long to get to 80%.

swyx: Well, no, I mean, so, like, how are they approaching it, right? Because you don't have a final answer yet.

Why Coding Agents Took Off First

Aaron Levie: Well, okay, so this is actually the stark reality that, unfortunately, is kinda like pouring the water on the party a little bit.

swyx: Yes.

Aaron Levie: We all in Silicon Valley have the absolute best conditions possible for AI ever. And I think we all saw the Dwarkesh, you know, kind of Dario podcast, and this idea of AI coding. Why has that taken off, and why are we not yet fully seeing it everywhere else? Well, look, if you just enumerated the list of properties that AI coding has and then compared it to other [00:12:00] knowledge work... let's just go through a few of them. Generally speaking, you bring on a new engineer, they have access to a large swath of the code base. Like, a new engineer comes on, and they can just go and find the stuff that they need to work with. It's a fully text-in, text-out medium. It's just gonna be text at the end of the day. So it's really great from, uh, you know, kinda what the agent can work with. Obviously the models are super trained on that dataset. The labs themselves have a really strong, kind of self-reinforcing, positive flywheel of why they need to do agentic coding deeply.
So then you get just better tooling, better services. The actual developers of the AI are daily users of the thing that they're working on. Versus, like, you know, probably there's only like seven Claude Cowork legal plugin users at Anthropic on any given day, but there's like a couple thousand Claude Code users every single day. So just think about which one they're getting more feedback on, all day long. So you just go through this list. Everybody who's a [00:13:00] developer by definition is technical, so they can go install the latest thing. We're all generally online, or at least, you know, kinda the weird ones are, and we're all talking to each other, sharing best practices. Like, that's already eight differences versus the rest of the economy. Every other part of the economy has, like, six to seven headwinds relative to that list. You go into a company, you're a banker in financial services, and you have access to a tiny little subset of the total data that's gonna be relevant to do your job. And you have to start to go and talk to a bunch of people to get the right data to do your job, because Sally didn't add you to that deal room, you know, folder. And, you know, the information is actually in a completely different organization that you now have to go and sort of run into. And you have this endless list of access controls and security, as you talked about. You have a medium which is not just text, right? You have a Zoom call that you're getting all of the requirements from the customer on. You have a lot of in-person conversations, and you're doing in-person sales, and, like, how do you ever [00:14:00] digitize all of that information?
Um, you know, I think a lot of people got upset with this idea that the code base has all the context. Um, I don't know if you followed some of that conversation that went viral? Like, you know, it's not that simple, the code base doesn't have all the knowledge, but you're a lot better off than you are with other areas of knowledge work. Like, we have documentation practices, you write specifications. Those things don't exist for, like, 80% of work that happens in the enterprise. That's the divide that we have, which is: AI coding has just fully, you know... we've reached escape velocity of how powerful this stuff is, and then we're gonna have to find a way to bring that same energy and momentum to all these other areas of knowledge work, where the tools aren't there, the data's not set up to be there, the access controls don't make it that easy. The context engineering is an incredibly hard problem, because again, you have access control challenges, you have different data formats, you have end users that are gonna need to kind of be trained through this, as opposed to adopting [00:15:00] these tools in their free time. That's where the Fortune 500 is. And so we, I think, have to be prepared as an industry that we are gonna be on a multi-year march to be able to bring agents to the enterprise for these workflows. And I think probably the thing that we've learned most in coding that the rest of the world is not yet ready for... I mean, they'll have to be ready for it, because it's just gonna inevitably happen... is, I think, in coding, what's interesting is if you think about the practice of coding today versus two years ago, it's probably the most changed workflow in maybe the history of time, in terms of how much it's changed, right? Yeah.
Like, has any workflow in the entire economy changed that quickly, in terms of the amount of change? At least in any knowledge worker workflow, there's very rarely been an event where one piece of technology and work practice has so fundamentally, you know, changed what you do. Like, you don't write code, you talk to an agent and it goes and [00:16:00] does it for you, and you maybe at best review it. And even that's probably, like, largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works. Mm-hmm. All of the economy has to go through that exact same evolution. The rest of the economy is gonna have to update its workflows to make agents effective, and to give agents the context that they need, and to actually figure out what kind of prompting works, and to figure out how do you ensure that the agent has the right access to information to be able to execute on its work. You know, this is not the panacea that people were hoping for, of the agent drops in and just automates your life. Like, you have to basically re-engineer your workflow to get the most out of agents, and, uh, that's just gonna take, you know, multiple years across the economy. Right now it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this, 'cause [00:17:00] you'll see compounding returns. But that's just gonna take a while for most companies to actually go and get this deployed.

swyx: I love, I love pushing back. I think that that is what a lot of technology consultants love to hear, this sort of thing, right? Yeah, yeah, yeah. First to embrace the AI. Yes. To get to the promised land, you must pay me so much money, a hundred percent, to adopt the prescribed way of, uh, conforming to the agents. Yes.
And I worry that you will be eclipsed by someone else who says, no, come as you are.

Aaron Levie: Yeah.

swyx: And we'll meet you where you are.

Aaron Levie: And what was the thing that went viral a week ago? OpenAI probably, uh, is hiring FDEs. Yeah. Uh, to go into the enterprise. Yeah. Yeah. And then Anthropic is embedded at Goldman Sachs. Yeah. So if the labs are having to do this, if the labs have decided that they need to hire FDEs and professional services, then I think that's a pretty clear indication that there's no easy mode of workflow transformation. Yeah. Yeah. So, to your point, I think actually this is a market opportunity for, you know, new professional services and consulting [00:18:00] firms that are, like, agent builders, and they kind of, you know, go into organizations and figure out how to re-engineer your workflows to make them more agent-ready, and get your data into the right format, and, you know, reconstruct your business process. So you're not doing most of the work; you're telling agents how to do the work, and then you're reviewing it. But I haven't seen the thing that can just drop in and kinda let you not go through those changes.

swyx: I don't know how that kind of sales pitch goes over. Yeah. You know, you're saying things like, well, in my sort of nice, beautiful walled garden, here's this beautiful Box account that has everything.

Aaron Levie: Yes.

swyx: And I'm like, well, most real life is extremely messy. Sure. And, like, poorly named, and there's duplicate, outdated s**t.

Aaron Levie: A hundred percent. And so... No, no, a hundred percent. So this is... I mean, we agree that getting to the beautiful garden is gonna be tough.

swyx: Yeah.

Aaron Levie: There's also the other end of the spectrum, where, I just... like, it's a technical impossibility to solve.
The agent truly cannot get enough context to make the right decision in the incredibly messy land. Like, there's [00:19:00] no AGI that will solve that. So we're gonna have to land somewhere in between, which is: we all collectively get better at documentation practices, and having authoritative, relatively up-to-date information, and putting it in the right place. Like, agents will certainly cause us to be much better organized around how we work with our information, simply because the severity of the agent pulling the wrong data will be too high, and the productivity gain that you'll miss out on by not doing this will be too high as well; your competition will just do it and they'll just have higher velocity. So, uh, and we see this a lot firsthand. So we built a series of agents internally that kind of have access to your full Box account, and you give it a task and it can go off and find whatever information you're looking for and work with it. And, you know, thank God for the model progress, because if you gave that task to an agent nine months ago, you're just gonna get lots of bogus answers, because it's gonna say, hey, here are [00:20:00] five, you know, documents that all kind of smell like the right thing. But you're putting me on the clock, 'cause my system prompt says, like, you know, be pretty smart, but also try and respond to the user, and it's gonna respond. And it's like, ah, it got the wrong document. And then you do that once or twice as a knowledge worker and you're just never...

swyx: ...again.

Aaron Levie: Never again. You're just, like, done with the system.

swyx: Yeah. It doesn't work.

Aaron Levie: It doesn't work.
And so, you know, Opus 4.6 and Gemini 3.1 Pro and, you know, whatever the latest GPT 5.3 will be... like, those things are getting better and better, and using better judgment. And all of these updates to the agentic tool and search systems... we're seeing very real progress, where the agent kind of can almost smell when something's a little bit fishy. We have this process where we have it go fan out, do a bunch of searches, pull up a bunch of data, and then it has to sort of do its own ranking of, you know, what are the right documents that it should be working with. And again, the intelligence level of a model six months ago [00:21:00] would be just throwing a dart, like: I'm just gonna grab these seven files, and I pray, I hope that that's the right answer. And something like an Opus, first 4.5 and now 4.6, is like: oh, no, that one doesn't seem right relative to this question, because I'm seeing some signal that's contradicting the document, where it would normally be in the tree, and who should have access. Like, it's doing all of that kind of work for you. But it still doesn't work if you just have a total wasteland of data. Like, it's just not possible. Partly 'cause a human wouldn't even be able to do it. So basically, if a really, really smart human could not do that task in five or 10 minutes, for a search-retrieval-type task, look, you know, your agent's not gonna be able to do it any better. You see this all day long. So...

Context Engineering and Search Limits

swyx: This touches on a thing that Jeff is just passionate about, which is context engineering. I'm just gonna let you ramble or riff on, on context engineering.
If there's anything... like, he did really good work on Context Rot, which has really taken over as, like, the term that people use and the reference.

Aaron Levie: A hundred percent. All we think about is the context rot problem. [00:22:00]

Jeff Huber: Yeah, there's certainly a lot of, like, ranking considerations. Agentic search, I think, is incredibly promising. Um, yeah, I was trying to generate a question, though. I think I have a question right now, swyx.

Aaron Levie: Yeah, no, but, like, I think there was this moment, um, you know, like, I don't know, two years ago, before we knew where the gotchas were gonna be in AI, and I think someone was like, well, infinite context windows will just solve all of these problems, 'cause you'll just give the context window, like, all the data. And it's just like, okay, I mean, maybe in 2035, like, this is a viable solution. First of all, it would just simply cost too much. Like, we just can't give the model the 5,000 documents that might be relevant and have it read them all. And I've seen enough to start believing in crazy stuff. So, like, I'm willing to just say, sure...

swyx: Never say never.

Aaron Levie: In 10 years from now, we'll have infinite context windows at a thousandth of the price of today. Like, let's just believe that that's possible. But, right, we're in reality today. So today we have a context engineering [00:23:00] problem, which is: I've got, you know, 200,000 tokens that I can work with... or, I don't even know what the latest graph is before, like, massive degradation. 60? Okay, I have 60,000 tokens that I get to work with where I'm gonna get accurate information. That's not a lot of tokens for a corpus of 10 million documents that a knowledge worker might have across all of the teams and all the projects and all the people they work with.
I have 10 million documents, which, you know, maybe is times five pages per document or something like that. I'm at 50 million pages of information, and I have 60,000 tokens. Like, holy s**t. Yeah. This is, like, how do I bridge the 50 million pages of information with, you know, the couple hundred that I get to work with in that token window? Yeah. This is such an interesting problem, and that's why actually so much work is, like, just search systems and the databases; that layer has to just get so locked in. But models are getting better, and importantly, [00:24:00] knowing when they've done a search and found the wrong thing: they go back, they check their work, they find a way to balance sort of appeasing the user versus double-checking. We have this one test case where we ask the agent to go find 10 pieces of information.

swyx: Is this the complex work eval?

Aaron Levie: Uh, this is actually not in the eval. This is sort of just... we have a bunch of internal benchmark kind of scenarios every time we update our agent. We have one which is: I ask it to find all of our office addresses, and I give it the list of 10 offices that we have. And there's not one document that has this. Maybe there should be; that would be a great example of the kind of thing that maybe over time companies start to have, these sort of canonical, you know, key areas of knowledge that we need to have. We don't seem to have this one document that says, here are all of our offices. We have a bunch of documents that have, like, here's the New York office, and whatever. So you task this agent, and you say, I need the addresses for these 10 offices. Okay. And by the way, if you do this on any, you know, [00:25:00] public chat model, the same outcome is gonna happen.
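The fan-out-then-rank pattern Aaron describes above can be sketched as follows. This is a minimal illustration, not Box's actual pipeline: `search` and `rank` are toy stand-ins for a real enterprise search call and for the model's own relevance pass.

```python
from concurrent.futures import ThreadPoolExecutor

def search(query: str) -> list[dict]:
    # Stand-in for a real enterprise search call over the corpus.
    corpus = [
        {"doc": "ny-office.md", "text": "New York office: 123 Broadway"},
        {"doc": "q3-plans.md", "text": "Q3 roadmap draft"},
        {"doc": "sf-office.md", "text": "SF office: 500 Howard St"},
    ]
    return [d for d in corpus
            if any(w in d["text"].lower() for w in query.lower().split())]

def rank(question: str, candidates: list[dict]) -> list[dict]:
    # Stand-in for the model's own ranking pass; here, naive keyword overlap.
    words = question.lower().split()
    return sorted(candidates,
                  key=lambda d: -sum(w in d["text"].lower() for w in words))

def fan_out_answer(question: str, variants: list[str]) -> list[str]:
    # Fan out: issue several query variants in parallel, then pool the hits.
    with ThreadPoolExecutor() as pool:
        hits = [h for batch in pool.map(search, variants) for h in batch]
    seen, pooled = set(), []
    for h in hits:  # dedupe documents found by multiple variants
        if h["doc"] not in seen:
            seen.add(h["doc"])
            pooled.append(h)
    # Rank the pooled candidates before handing them to the answering step.
    return [d["doc"] for d in rank(question, pooled)]

print(fan_out_answer("office address", ["office", "address", "location"]))
```

The key moves are the same as in the transcript: multiple searches instead of one, a dedup over the pooled results, and a separate ranking step that decides which of the candidates actually deserve the scarce tokens in the context window.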
Aaron Levie: But for that kind of query, you give it: I need these 10 addresses. How many times should the agent go and do its search before it decides whether or not there's just no answer to this question? Often, especially with the lower-tier models, it'll come back and give you six of the 10 addresses, and it'll just say, I couldn't find the other four.
swyx: It doesn't know what it doesn't know.
Aaron Levie: It doesn't know what it doesn't know. Yeah. So when should the model stop? Should it do that task for literally an hour and just keep cranking through? Maybe I actually made up an office location, and it doesn't know that I made it up, and I didn't even know that I made it up. Should it just keep reading every single file in your entire Box account until it exhausts every single piece of information?
swyx: Expensive.
Aaron Levie: These are the new problems that we have. So something like, let's say, a new Opus model is sort of like: okay, I'm gonna try these types of queries. I didn't get exactly what I wanted. I'm gonna try again. At [00:26:00] some point I'm gonna stop searching, 'cause I've determined that no amount of searching is gonna solve this problem. I'm just not able to do it. And that judgment is a really new thing that the model needs to have: when should it give up on a task because it just can't find the thing? That's the real world of knowledge work problems. And this is the stuff that the coding agents don't have to deal with.
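A minimal sketch of the "when should it give up" judgment Aaron describes: a lookup loop with an explicit attempt budget per item, which records a miss honestly instead of guessing or searching forever. The function and the search backend are invented for illustration; this is not Box's actual agent.

```python
from typing import Callable, Optional

def bounded_find(items: list[str],
                 search: Callable[[str], Optional[str]],
                 max_attempts: int = 3) -> dict[str, Optional[str]]:
    """Look up each item with a bounded number of search attempts.

    A miss is recorded as None ("I couldn't find it") rather than
    letting the agent invent an answer or search indefinitely.
    """
    results: dict[str, Optional[str]] = {}
    for item in items:
        found = None
        for _ in range(max_attempts):
            found = search(item)  # a real agent would reformulate the query between attempts
            if found is not None:
                break
        results[item] = found
    return results
```

With 10 office names and a corpus that only covers six of them, a loop like this surfaces the four explicit misses rather than padding out the answer.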
Aaron Levie: Because in coding, you're usually creating net-new information coming right out of the model, for the most part. Obviously it has to know about your code base and your specs and your documentation, but when you deploy an agent on all of your data, now you have all of these new problems that you're dealing with.
Jeff Huber: Our follow-up research to context rot is actually on agentic search. We've stress-tested frontier models and their ability to search, and they're not actually that good at searching. So you're highlighting this explore-exploit trade-off.
swyx: You're just a Debbie Downer, saying everything doesn't work.
Jeff Huber: Well, somebody has to be.
Aaron Levie: Um, can I just throw out one more thing that is different between coding and the rest [00:27:00] of knowledge work that I failed to mention? One other key point is that, at the end of the day, whether you believe we're in a slop apocalypse or whatever, if you've built a working solution, that is ultimately what the customer is paying for, whether there's a lot of slop, a little slop, or whatever. I'm sure there are lots of code bases we could go into in enterprise software companies where it's just crazy slop that humans did over a 20-year period, but the end customer just gets this little interface. They can type into it, it does its thing. Knowledge work doesn't have that property.
Aaron Levie: If I have an AI model go generate a contract, and I generate a contract 20 times, and all 20 times it's just 3% different, that kind of slop introduces all new kinds of risk for my organization that the code version of that slop didn't introduce. So how do you constrain these models to just the part that you want [00:28:00] them to work on, and just do the thing that you want them to do? And, you know, you can't be disbarred as an engineer, but you could be disbarred as a lawyer. You can do the wrong medical thing in healthcare. There's no equivalent to that in engineering.
swyx: Do you want there to be? Because I've considered...
Jeff Huber: Software engineering? In civil engineering there is, right?
Aaron Levie: Not software. Civil engineering, sure. Oh yeah, for sure. But in any of our companies, you'll be forgiven if you took down the site. We'll do a rollback, and you'll be in a meeting, but you have not been disbarred as an engineer. We don't revoke your computer science degree.
Jeff Huber: Blameless postmortem.
Aaron Levie: Yeah, exactly. Exactly. So maybe we collectively as an industry need to figure out what you're liable for, not legally, but in a management sense, with these agents. All sorts of interesting problems that have to come out. But in knowledge work, those are the real hostile environments that we're operating in.
swyx: I do think a lot of last year's, 2025's, story was the rise of coding agents, and I think the [00:29:00] 2026 story is definitely knowledge work agents.
Aaron Levie: Yes. A hundred percent.
swyx: Right. And I think OpenClaw and Cowork are just the beginning. The next wave is gonna be absolute craziness.
Aaron Levie: It is.
Aaron Levie: And it's gonna be this wave where we try to bring over as many of the practices from coding, because that will clearly be the forefront: tell an agent to go do something, it has access to a set of resources, and you're responsible for reviewing it at the end of the process. That, to me, is the template that goes across knowledge work. Cowork is a great example. OpenClaw's a great example. You can sort of see what Codex could become over time. These are some really interesting platforms that are emerging.
swyx: Okay. Um, we touched on evals a little bit. You had the report that you're gonna bring up, and then I was gonna go into Box's evals, but go ahead, talk about your agentic search thing.
Jeff Huber: Yeah. A few of the insights. Number one, frontier models are not good at search. Humans have this [00:30:00] natural explore-exploit trade-off where we understand when to stop doing something. Also, humans are actually pretty good at forgetting, at pruning their own context, whereas agents are not. An agent, in its context history, if it knew something was bad (you can even see in the trace the reasoning, "hey, that probably wasn't a good idea"), if it's still in the trace, still in the context, it'll still do it again. And so I think pruning is already becoming a thing, right? Letting agents self-prune the context window is gonna be a big deal.
swyx: Yeah. So don't leave the mistake in there. Cut out the mistake, but tell it that you made a mistake in the past so it doesn't repeat it.
Jeff Huber: Yeah. But cut it out so it doesn't get distracted by it again.
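The pruning idea Jeff describes, cutting a failed attempt out of the context while leaving a one-line note so the agent doesn't retry it, might look something like this sketch. The message shape is an assumption for illustration, not any particular framework's API.

```python
def prune_failed_attempt(context: list[dict], failed_index: int, lesson: str) -> list[dict]:
    """Remove a failed step from the context window entirely, then append a
    short note. Leaving the full mistake in place reads like a few-shot
    example and tends to get repeated; the note keeps the lesson without
    the distraction."""
    pruned = [msg for i, msg in enumerate(context) if i != failed_index]
    pruned.append({"role": "system",
                   "content": f"Earlier attempt failed: {lesson}. Do not retry it."})
    return pruned
```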
Jeff Huber: Because otherwise it will repeat its mistake just because it's in the context.
Aaron Levie: It's in the context, so it's a few-shot example. "This is a great thing to go try," even if it didn't work.
Jeff Huber: Yeah, exactly. So there's a bunch of stuff there.
Aaron Levie: It's Groundhog Day inside these models. I'm gonna keep doing the same wrong thing.
Jeff Huber: It makes sense. I feel like the analogy is you're trying to fit a manifold in latent space, which is kind of doing program synthesis, which is kind of how we think about what we're doing. Certain facts might be sort of overfitting it to certain [00:31:00] sectors of latent space.
swyx: Our editor has a bell for every time you say that.
Jeff Huber: So you have to remove those, like...
swyx: You should have a gong, like TBPN or something.
Jeff Huber: You remove those links to give it the freedom to do what it needs to do. But yeah, we'll release more soon.
Aaron Levie: That's awesome.
Jeff Huber: That'll be cool.
swyx: We're a cerebral podcast. People listen to us and think really deep thoughts. So yeah, we try to keep it subtle.
Aaron Levie: Okay, fine.
Inside Agent Evals
swyx: Um, you guys do have evals. You talked about your office thing, but you've also been promoting APEX and your complex work eval. Wherever you wanna take this. Yeah, how do you...
Aaron Levie: APEX is obviously Mercor's agent eval. We supported that by opening up some data for them around how we see these data workspaces in the regular economy.
So how do lawyers have a workspace? How do investment bankers have a workspace? What kind of data goes into those? And so we [00:32:00] partnered with them on their APEX eval. Our own eval is actually relatively straightforward. We have a set of documents in a range of industries. Previously we did this as a one-shot test of just purely the model, and then we realized, based on where everything's going, it's just gotta be more agentic. So now it's a bit more of a test of both our harness and the model. And we have a rubric of a set of things that it has to get right, and we score it. And you're just seeing these incredible jumps in almost every single model within its own family, you know, Opus 4, or Sonnet 4.6 versus Sonnet 4.5.
swyx: Yeah. We have this up on screen.
Aaron Levie: Okay, cool. I forget the total, it was like a 15-point jump, I think, on the overall.
swyx: Yes. And it's completely held out? The models don't know any of it?
Aaron Levie: This is not in any public data, which has, you know, benefits. This is just a private eval that we [00:33:00] do, and then we happen to show it to the world. So you can't train against it. And I think it's representative of, obviously, reasoning capabilities, test-time compute capabilities, thinking levels, all the context rot issues. So many interesting capabilities that are now improving.
swyx: One sector that you have, that's interesting.
Industries and Datasets
swyx: People are roughly familiar with healthcare and legal, but you have public sector in there.
Aaron Levie: Yeah.
swyx: Uh, what's that?
Like, what is that?
Aaron Levie: Yeah, and we actually test against, I dunno, maybe 10 industries. We end up usually just cutting a few that we think have interesting gains. Public sector has a lot of government-type documents.
swyx: What are government-type documents?
Aaron Levie: Government filings.
swyx: Like a tax return?
Aaron Levie: Probably not tax returns. It would be more of what the government would be using as data. So think about research, those types of data sets. And then we have financial services for things like data rooms and what would be in an investment prospectus.
swyx: That one you can dogfood.
Aaron Levie: Yeah, exactly. Yes. [00:34:00] So we run the models now in more of an agent mode, but still with kind of limited capacity, and just try to see, on a like-for-like basis, what the improvements are. And again, we just continue to be blown away by how good these models are getting.
swyx: Yeah, I mean, I think every serious AI company needs something like that: this is the work we do, here's our company eval. And if you don't have it, well, you're not a serious AI company.
Aaron Levie: There are two dimensions, right? There's how the models are improving, and so which models should you recommend a customer use, or which should you adopt? But then every single day we're making changes to our agents, and you need to know...
swyx: If you regressed.
Aaron Levie: If you regressed, yeah. You know, I've been fully convinced that the whole agent observability and eval space is gonna be a massive space. Super excited for what Braintrust is doing, excited for LangSmith, all the things. And I mean, literally every enterprise right now: the AI companies are the customers of these tools.
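The rubric-style eval Aaron outlines, a checklist of facts a response has to get right with a score rolled up from it, can be sketched as below. The rubric items and matching logic are invented examples, not Box's actual eval; real harnesses usually use an LLM judge per rubric item rather than substring matching.

```python
def score_against_rubric(answer: str, rubric: list[str]) -> float:
    """Return the fraction of required rubric items present in the answer.

    Simple case-insensitive substring matching stands in for the
    per-item judge a production eval harness would use.
    """
    if not rubric:
        return 0.0
    hits = sum(1 for item in rubric if item.lower() in answer.lower())
    return hits / len(rubric)
```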
Every enterprise will have this. You'll just [00:35:00] have to have an eval of all of your work. You'll have an eval of your RFP generation, an eval of your sales material creation, an eval of your invoice processing. And as you buy or use new agentic systems, you're gonna need to know the quality of your pipeline.
swyx: Yeah.
Aaron Levie: So, huge market with agent evals.
swyx: Yeah.
Building the Agent Team
swyx: And, you know, I'm gonna shout out your team a bit. Your CTO, Ben, did a great talk with us last year, and he's gonna come back again for World's Fair.
Aaron Levie: Oh, cool. Yep.
swyx: Just talk about your team, brag a little bit. I think people take these eval numbers and pretty charts for granted, but there are lots of really smart people at work on all this.
Aaron Levie: Biggest shout out: we have a couple folks, uh, Dya and Sidarth, that kind of run this. They're like a tag-team duo on our evals. Ben, our CTO, is heavily involved, Yasha, head of AI, a bunch of folks. And evals is one part of the story, and then the full AI and agent team [00:36:00] is core to this whole effort. So there's probably, I don't know, maybe a few dozen people that are the epicenter, and then you have layers and layers of concentric circles: okay, then there's a search team that supports them, and an infrastructure team that supports them. And it's starting to ripple through the entire company.
But there's that kind of core agent team that's a pretty close-knit group.
swyx: The search team is separate from the infra team?
Aaron Levie: I mean, we have to do every layer of the stack ourselves, except for pure public cloud. I don't even know what our public numbers are, but you can just think about it as: a lot of data is stored in Box. And so you have every layer of the stack: how do you manage the data, the file system, the metadata system, the search system, all of those components. And they all have to understand that now you've got this new customer, [00:37:00] which is the agent. They've been building for two types of customers in the past: users and applications. And now you've got this new agent user, and it comes in with different properties sometimes. Like, hey, maybe sometimes we should do an embedding-based search versus your typical semantic search. You have to build the capabilities to support all of this. And we're testing stuff, throwing things away when something doesn't work or isn't relevant. It's just total chaos. But all of those teams are supporting the agent team that's coming up with its requirements of what we need.
swyx: Yeah. We just came from a fireside chat that you did, and you talked about how you're doing this. It's kind of like an internal startup within the broader company. The broader company's like 3,000 people, but there's a core team, like, here's the innovation center. And every company kind of is run this way.
Aaron Levie: Yeah. I wanna be sensitive. I don't call it the innovation center.
Only because I think everybody has to do innovation. There's a part of the company that is do-or-die for the agent wave.
swyx: Yeah.
Aaron Levie: And it only happens to be more of my focus simply because it's existential that [00:38:00] we get it right.
swyx: Yeah.
Aaron Levie: All of the supporting systems are necessary. All of the surrounding adjacent capabilities are necessary. The only reason we get to be a platform where you'd run an agent is because we have a security feature, or a compliance feature, or a governance feature that some team is working on. But that's not gonna be the make-or-break of whether we get agents right. That already exists, and we need to keep innovating there. I don't know what the exact precise number is, but it's not a thousand people and it's not 10 people. There's a number of people that are the kind of startup within the company, the make-or-break on everything related to AI agents leveraging our platform and letting you work with your data. And that's where I spend a lot of my time, and Ben and Yosh and Diego and Teri, you know, people across the team, are working on it.
swyx: Yeah. Amazing.
Read Write Agent Workflows
Jeff Huber: How do you think about... I mean, you've talked a lot about read workflows over your Box data, right? Search questions, queries, et cetera. But what about write, or authoring, workflows?
Aaron Levie: Yes. I've [00:39:00] already probably revealed too much, actually, now that I think about it.
Jeff Huber: Whatever you can.
Aaron Levie: Okay. It's just us. It's just us. Yeah. Okay.
Of course, of course. So I guess I'll make it a little bit conceptual, because again, I've already said things that are not even GA. But we've kind of danced around it publicly, so, yeah. Hopefully nobody watches this episode.
swyx: It's tidbits for the highly engaged to go figure out what exactly your line of thinking is. They can connect the dots.
Aaron Levie: Yeah. So I would say that, as a place where you have your enterprise content, there's a use case where I want an agent to read that data and answer questions for me. And then there's a use case where I want the agent to create something, and use the file system to create it, or store off data that it's working on, or have various files that it's writing to about the work it's doing. So we do see it as total read-write. The harder problem has so far been the read, because again, you have that kind of 10-million-to-one [00:40:00] ratio problem, whereas writes are mostly just gonna come from the model, and we'll just put it in the file system and use it. So it's a little bit of a technically easier problem. The only part that's not necessarily technically hard, it's just not yet perfected in the state of the ecosystem, is building a beautiful PowerPoint presentation. It's still a hard problem for these models. These formats just weren't built for them.
swyx: They're working on it.
Aaron Levie: They're working on it. Everybody's working on it.
swyx: Every launch is like, well, we do PowerPoint now.
Aaron Levie: Yeah, it's getting a lot better each time.
But then you'll do this thing where you'll ask it to update one slide, and all of a sudden the fonts will be just a little bit different on two of the slides, or it moved some shape over to the left a little bit. And again, these are the kinds of things that in code, obviously, you could really care about if you care about how beautiful the code is, but the end user doesn't notice all those problems. In file creation, the end user instantly sees it. You're [00:41:00] like, ah, paragraph three, you literally just changed the font on me. It's a totally different font, midway through the document. Those are the kinds of things you run into a lot on the content creation side. So we are gonna have native agents that do all of those things. They'll be powered by the leading models and labs. But the thing that I think is probably gonna be a much bigger idea over time is any agent on any system, again, using Box as a file system for its work. And in that kind of scenario, we don't necessarily care what it's putting in the file system. It could put its memory files, its specification documents, whatever its markdown files are, or it could generate PDFs. It's a workspace that is sandboxed off for its work. People can collaborate in it, it can share with other people. And so we're thinking a lot about what's the right way to deliver that at scale.
Docs Graphs and Founder Mode
swyx: I wanted to come to the AI transformation, or AI operations, things. [00:42:00] One of the tweets that you wanted to talk about... this is just me going through your tweets, by the way.
Aaron Levie: Oh, okay.
I mean, you read them one by one?
swyx: You're the easiest guest to prep for, because you already have, like, this is what I'm interested in.
Aaron Levie: Are we gonna get to, like, February, January or something? Where are we in the timelines? How far back are we going?
swyx: Can you describe Box as a set of skills, right? That's one of the extremes: if you just turn everything into a markdown file, then your agent can run your company. You just have to find the right sequence of words to...
Aaron Levie: Yes.
swyx: ...to do it.
Aaron Levie: Sorry, is that the question?
swyx: So I think the question is: what if we documented everything? The way that you exactly said.
Aaron Levie: Yes.
swyx: Let's get all the Fortune 500s prepared for agents, and everything's golden and nicely filed away and everything.
Aaron Levie: Yes.
swyx: What's missing? What's left? You've run your company for a decade.
Aaron Levie: Yeah. I think the challenge is that that information changes a week later, because something happened in the market for that [00:43:00] customer, or for us as a company, that now has to go get updated. So these systems are living and breathing, and they have to experience reality and updates to reality, which right now is probably gonna be humans giving them the updates. And, you know, there is this piece about context graphs that went very viral. Yeah. I thought it was super provocative. I agreed with many parts of it. I disagreed with a few parts.
You know, it's not gonna be as easy as "if we just had the agent traces, then we can finally do that work," because there's so much other stuff happening that we haven't been able to capture and digitize. And I think they actually represented that in the piece, to be clear. But there's just a lot of work that has to happen. You can't have only skills files for your company, because there's gonna be a lot of other stuff that happens and changes over time. Most companies are practically apprenticeships.
swyx: Most companies are practically apprenticeships?
Jeff Huber: Every new employee who joins the team, [00:44:00] you spend one to three months ramping them up.
Aaron Levie: Yes.
Jeff Huber: All that tacit knowledge is not written down.
Aaron Levie: Yes.
Jeff Huber: But it would have to be if you wanted to give it to an agent. Right? And so that seems to me to be...
Aaron Levie: One thing is, I think you're gonna see a premium on companies that can document this. There'll be a huge premium on that. Because can you shorten that three-month ramp cycle to a two-week ramp cycle? That's an instant productivity gain. Can you dramatically reduce rework in the organization, because you've documented where all the stuff is and where the answers are? Can you make your average employee as good as your 90th-percentile employee, because you've captured the knowledge that's in the heads of those top employees and made it available? So you can see some very clear productivity benefits.
If you had a company culture of making sure your information was captured, digitized, put in a format that was agent-ready, and then made available to agents to work with... and then you just have this reality that at a 10,000-person [00:45:00] company, mapping that to the access structure of the company is just a hard problem. Not every piece of information that's digitized can be shared with everybody. So now you have to organize that in a way that actually works. There was a pretty good piece, this piece called "Your Company is a Filesystem." Did you see that one?
swyx: Nope.
Aaron Levie: Uh, yes, you saw it. Yeah. And I'd actually be curious about your thoughts on it. We agree with it, because that's how we see the world.
swyx: Okay, we have it up on screen.
Aaron Levie: Okay. Yeah. But it's all about how we already organize in this kind of permission-structure way, and these are the natural ways that agents can now work with data. So it's kind of an interesting metaphor. But I do think companies will have to start to think about how they digitize more of that data. What was your take?
Jeff Huber: Yeah, I mean, the company's probably like an ACID-compliant file system.
Aaron Levie: Uh, yeah. Which I'm guessing Box is, right? So, yeah. Yes.
swyx: Yeah. [00:46:00]
Jeff Huber: Which you have a great piece on, but...
swyx: Uh, yeah. Well, my direction is a little bit... I wanna rewind to the graph word you said. That's a magic trigger word for us. I always ask: what's your take on knowledge graphs? 'Cause especially with every database person, I just wanna see what they think.
There have been knowledge graph hype cycles, and you've seen them all.
Aaron Levie: Hmm. I actually am not the expert in knowledge graphs, so you might need to...
swyx: You don't need to be an expert. I think it's just, how seriously do people take it? Is there a lot of potential in the whole idea?
Aaron Levie: Well, can I first understand: is this a loaded question, in the sense of, are you super pro, super con, super anti, medium?
swyx: I see pros and cons. But I think your opinion should be independent of mine.
Aaron Levie: Yeah, no, totally. I just want to see what I'm stepping into.
swyx: No, I know. It's a huge trigger word for a lot of people [00:47:00] in our audience, and they're trying to figure out, why is this such a hot item? Because a lot of people get graph religion. They're like, everything's a graph. Of course you have to represent it as a graph. Well, how do you solve your knowledge changing over time? Well, it's a graph.
Aaron Levie: Yeah.
swyx: And there's that line of work, and then there are a lot of people who are like, well, you don't need it. And both are right.
Aaron Levie: Yeah. And the people who say you don't need it, what are they arguing for?
swyx: Markdown files.
Aaron Levie: Oh, sure, sure. Simplicity.
swyx: It's structure versus less structure, right? That's all it is.
Aaron Levie: I think the tricky thing is, again, when this gets met with real humans, they're just going to their computer. They're just working with some people on Slack or Teams. They're just sharing some data through a collaborative file system and Google Docs or Box or whatever.
I certainly like the vision of most knowledge-graph, futuristic ways of thinking about it. It's just, you know, it's 2026, and we haven't seen it play out yet. I mean, I remember... actually, I don't even know how old you guys are, but to show my age: I remember 17 years ago, everybody thought enterprises would just run on [00:48:00] wikis.
swyx: Yeah.
Aaron Levie: And Confluence... I mean, Confluence actually took off for engineering, for sure, unquestionably. But the idea was that everything would be in the wiki. And I think, based on our general style of what we were building, we were just like, I don't know, people just want a workspace. They're gonna collaborate with other people.
swyx: Exactly. Yeah. So you were anti-knowledge graph.
Aaron Levie: Not anti. Not anti.
swyx: Non, then.
Aaron Levie: I'm not anti, 'cause I think your search system... I just think these are two systems that probably coexist. But I'm not in any religious war. I don't want to be in anybody's YouTube comments on this. There's no fight for me.
swyx: We love YouTube comments. We get into the comments.
Aaron Levie: Okay. But it's mostly just a virtue of what we built, and we just continued down that path. And that was what we pursued. But this is not, you know...
swyx: It's not existential for you. Great.
Aaron Levie: We're happy to plug into somebody else's graph. We're happy to feed data into it. We're happy for [00:49:00] agents to talk to multiple systems. Not our fight.
swyx: Yeah.
Aaron Levie: But I need your answer. Yeah.
Graphs are nerd snipes. A very effective nerd snipe.
swyx: See, this is one opinion, and then I've...
Jeff Huber: And I think that the actual graph structure is emergent in the mind of the agent, in the same way it is in the mind of the human. And that's a more powerful graph, 'cause it actually evolves over time.
swyx: So: don't tell me how to graph, I'll figure it out myself. Exactly. Okay. All right.
Jeff Huber: And what's yours?
swyx: I like the wiki approach. Uh, my, I'm actually
Published as a 47-page pamphlet in colonial America on January 10, 1776, Common Sense challenged the authority of the British government and the royal monarchy. The elegantly plain and persuasive language that Thomas Paine used touched the hearts and minds of the average American and was the first work to openly ask for political freedom and independence from Great Britain. Paine’s powerful words came to symbolize the spirit of the Revolution itself. General George Washington had it read to his troops. Common Sense by Thomas Paine (read by Walter Dixon) at https://amzn.to/3MHAIYr Common Sense by Thomas Paine (book) available at https://amzn.to/3MKX77b Writings of Thomas Paine available at https://amzn.to/3MCaFC2 Books about Thomas Paine available at https://amzn.to/4s3qxOg ENJOY Ad-Free content, Bonus episodes, and Extra materials when joining our growing community on https://patreon.com/markvinet SUPPORT this channel by purchasing any product on Amazon using this FREE entry LINK https://amzn.to/3POlrUD (Amazon gives us credit at NO extra charge to you). Mark Vinet's HISTORICAL JESUS podcast at https://parthenonpodcast.com/historical-jesus Mark's TIMELINE video channel: https://youtube.com/c/TIMELINE_MarkVinet Website: https://markvinet.com/podcast Facebook: https://www.facebook.com/mark.vinet.9 Twitter: https://twitter.com/MarkVinet_HNA Instagram: https://www.instagram.com/denarynovels Mark's books: https://amzn.to/3k8qrGM Audio credits: Common Sense—The Origin and Design of Government by Thomas Paine, audio recording read by Walter Dixon (Public Domain 2011 Gildan Media). Audio excerpts reproduced under the Fair Use (Fair Dealings) Legal Doctrine for purposes such as criticism, comment, teaching, education, scholarship, research and news reporting.See omnystudio.com/listener for privacy information.
My guest today on the Online for Authors podcast is Michael Colon, author of the book The Gift from Aelius. Michael Colon is a creative freelance writer and novelist, born and raised in the Big Apple, New York City. He uses his craft to profoundly impact the lives of others with thought-provoking words that breathe life into his characters. He often equates his writing to painting masterpieces with prose. His inspiration comes from various societal art forms and his own life experiences. When he isn't writing he enjoys working out, watching sports, visiting museums, and exploring nature trails. In my book review, I stated The Gift from Aelius is a fantasy novella. Despite not being a hardcore fantasy reader, I like the premise of this book. What happens when AI is smart enough to take over? What do humans do? And more importantly, what does AI, known in this story as Codex, do? And wouldn't it be ironic if Codex determined power at any cost was the answer, given they were created by humans who believe that power at any cost is the answer? And all would be going as planned, except one Codex is more than meets the eye. As he begins having 'glitches', he comes to understand that the world would be a better place if Codex and humans could live side by side in harmony. For A191, this becomes his personal mission. But will he be decommissioned before he can reach his goal? At times, I struggled with the writing, feeling like I was being told rather than left to experience. However, the author did a good enough job that I had to finish reading to find out what happened in the end. Subscribe to Online for Authors to learn about more great books! 
https://www.youtube.com/@onlineforauthors?sub_confirmation=1 Join the Novels N Latte Book Club community to discuss this and other books with like-minded readers: https://www.facebook.com/groups/3576519880426290 You can follow Author Michael Colon Website: https://www.twbpress.com/authormichaelcolon.html FB: @Michael Colon IG: @michaelcolonauthor Purchase The Gift from Aelius on Amazon: Paperback: https://amzn.to/3Nayf9r Ebook: https://amzn.to/4rb39gD Teri M Brown, Author and Podcast Host: https://www.terimbrown.com FB: @TeriMBrownAuthor IG: @terimbrown_author X: @terimbrown1 Want to be a guest on Online for Authors? Send Teri M Brown a message on PodMatch, here: https://www.podmatch.com/member/onlineforauthors #michaelcolon #thegiftfromaelius #fantasy #terimbrownauthor #authorpodcast #onlineforauthors #characterdriven #researchjunkie #awardwinningauthor #podcasthost #podcast #readerpodcast #bookpodcast #writerpodcast #author #books #goodreads #bookclub #fiction #writer #bookreview *As an Amazon Associate I earn from qualifying purchases.
March 3rd, Computer History Museum CODING AGENTS CONFERENCE, come join us while there are still tickets left.https://luma.com/codingagentsChris Fregly is currently focused on building and scaling high-performance AI systems, writing and teaching about AI infrastructure, helping organizations adopt generative AI and performance engineering principles on AWS, and fostering large developer communities around these topics.Performance Optimization and Software/Hardware Co-design across PyTorch, CUDA, and NVIDIA GPUs // MLOps Podcast #363 with Chris Fregly, Founder, AI Performance Engineer, and InvestorJoin the Community: https://go.mlops.community/YTJoinInGet the newsletter: https://go.mlops.community/YTNewsletterMLOps GPU Guide: https://go.mlops.community/gpuguide// AbstractIn today's era of massive generative models, it's important to understand the full scope of AI systems' performance engineering. This talk discusses the new O'Reilly book, AI Systems Performance Engineering, and the accompanying GitHub repo (https://github.com/cfregly/ai-performance-engineering). This talk provides engineers, researchers, and developers with a set of actionable optimization strategies. You'll learn techniques to co-design and co-optimize hardware, software, and algorithms to build resilient, scalable, and cost-effective AI systems for both training and inference. // BioChris Fregly is an AI performance engineer and startup founder with experience at AWS, Databricks, and Netflix. He's the author of three (3) O'Reilly books, including Data Science on AWS (2021), Generative AI on AWS (2023), and AI Systems Performance Engineering (2025). 
He also runs the global AI Performance Engineering meetup and speaks at many AI-related conferences, including Nvidia GTC, ODSC, Big Data London, and more.// Related LinksAI Systems Performance Engineering: Optimizing Model Training and Inference Workloads with GPUs, CUDA, and PyTorch 1st Edition by Chris Fregly: https://www.amazon.com/Systems-Performance-Engineering-Optimizing-Algorithms/dp/B0F47689K8/Coding Agents Conference: https://luma.com/codingagents~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreJoin our Slack community [https://go.mlops.community/slack]Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)] Sign up for the next meetup: [https://go.mlops.community/register]MLOps Swag/Merch: [https://shop.mlops.community/]Connect with Demetrios on LinkedIn: /dpbrinkmConnect with Chris on LinkedIn: /cfreglyTimestamps:[00:00] SageMaker HyperPod Resilience[00:27] Book Creation and Software Engineering[04:57] Software Engineers and Maintenance[11:49] AI Systems Performance Engineering[22:03] Cognitive Biases and Optimization / "Mechanical Sympathy"[29:36] GPU Rack-Scale Architecture[33:58] Data Center Reliability Issues[43:52] AI Compute Platforms[49:05] Hardware vs Ecosystem Choice[1:00:05] Claude vs Codex vs Gemini[1:14:53] Kernel Budget Allocation[1:18:49] Steerable Reasoning Challenges[1:24:18] Data Chain Value Awareness
Olivia Watkins (Frontier Evals team) and Mia Glaese (VP of Research at OpenAI, leading the Codex, human data, and alignment teams) discuss a new blog post (https://openai.com/index/why-we-no-longer-evaluate-swe-bench-verified/) arguing that SWE-Bench Verified—long treated as a key “North Star” coding benchmark—has become saturated and highly contaminated, making it less useful for measuring real coding progress. SWE-Bench Verified originated as a major OpenAI-led cleanup of the original Princeton SWE-Bench benchmark, including a large human review effort with nearly 100 software engineers and multiple independent reviews to curate ~500 higher-quality tasks. But recent findings show that many remaining failures can reflect unfair or overly narrow tests (e.g., requiring specific naming or unspecified implementation details) rather than true model inability, and cite examples suggesting contamination such as models recalling repository-specific implementation details or task identifiers. From now on, OpenAI plans to stop reporting SWE-Bench Verified and instead focus on SWE-Bench Pro (from Scale), which is harder, more diverse (more repos and languages), includes longer tasks (1–4 hours and 4+ hours), and shows substantially less evidence of contamination under their “contamination auditor agent” analysis. We also discuss what future coding/agent benchmarks should measure beyond pass/fail tests—longer-horizon tasks, open-ended design decisions, code quality/maintainability, and real-world product-building—along with the tradeoffs between fast automated grading and human-intensive evaluation. 
00:00 Meet the Frontier Evals Team00:56 Why SWE Bench Stalled01:47 How Verified Was Built04:32 Contamination In The Wild06:16 Unfair Tests And Narrow Specs08:40 When Benchmarks Saturate10:28 Switching To SWE Bench Pro12:31 What Great Coding Evals Measure18:17 Beyond Tests Dollars And Autonomy21:49 Preparedness And Future Directions Get full access to Latent.Space at www.latent.space/subscribe
Weirdly Magical with Jen and Lou - Astrology - Numerology - Weird Magic - Akashic Records
Louise Edington Wisdom Weaver discusses the impact of the recent Saturn-Neptune conjunction on her personal and professional life, leading to the termination of a virtual assistant. She promotes her "Reborn Immersion" series, a two-day journey into the Demeter-Persephone myth, using the Red Seeds Tarot deck. The forecast for February 22-28 highlights significant astrological aspects, including Mercury's retrograde in Pisces, the moon's movements, and the conjunction of Venus and Ceres. Louise emphasizes the shift towards a more grounded spirituality and the importance of addressing patriarchal systems and abuses of power.
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Alexander Embiricos is the Head of Codex at OpenAI, leading the development of the company's flagship AI coding systems that power automated software generation, debugging and developer workflows. Under his leadership, Codex has become one of the most widely adopted AI developer platforms. AGENDA: 05:13 Will Coding Be Automated? Why AI Could Create More Engineers, Not Fewer 07:17 Do We Need PMs? The "Undefined" Product Role and When It Matters 08:06 The Real AGI Bottleneck: Human Prompting, Validation, and "Too Much Effort" 13:04 Three Phases of Agents: Coding → Computer Use → Productized Workflows 13:52 Enterprise Reality Check: Security, Permissions, and Safe Agentic Browsing 17:57 Is Inference the New Sales and Marketing? 18:49 What % of Codex Was Written by AI? 21:33 Do OpenAI Use AI for Code Review? 23:31 Is there any stickiness to AI coding tools? 28:22 What Does "Winning" Mean at OpenAI? Mission, Competition, and Moats 32:04 The Future UI: Chat or Voice 34:10 Agent-to-Agent Workflows: Designing for Approvals, Compliance, and Automation 35:39 Do Coding Models Have a Data Moat? 36:50 How does Codex View Data: Will They Build Their Own Mercor and Turing? 37:27 How Does Codex View Consumer: Will They Compete with Lovable? 41:56 Benchmarks vs "Vibes": How People Actually Judge Models 42:43 Cursor's Edge and the Case for Building Your Own Models 47:37 Is SaaS Dead? What Still Defends Value (Humans + Systems of Record) 51:28 Talent Wars and Career Advice for New Engineers in the AI Era 01:01:03 Guardrails, the Fully AI-Managed Stack, and a 10-Year Vision for Everyone
Ep 278 Pages, Keynote, and Numbers 15 Go Freemium Kuzu database company joins Apple's list of recent acquisitions iOS 26.3 Features: Everything New in iOS 26.3 Tauri 2.0 — The cross-platform app building toolkit Rork — Create mobile app in minutes, using AI OpenClaw, OpenAI and the future | Peter Steinberger jordy: I wasted 80 hours and $800 setting up OpenClaw - so you don't have to. I used Xcode 26.3 to build an iOS app with my voice in just two days - and it was exhilarating Steve Troughton-Smith: In case you missed it, I've been testing the limits of Xcode 26.3's agentic programming support this week, using Codex. This entire app used 7% of my weekly Codex usage limit. Compare that to a single (awful) slideshow in Keynote using 47% of my monthly Apple Creator Studio usage limit. Aditya: Cons of being a software engineer no one really talks about… HackerTyper: Use This Site To Prank Your Friends With Your Coding Skills :) Virtualization Explained: We Install 1TB of RAM for HyperVisors, Virtual Machines, and Docker! Mr. Macintosh: The very first email from space was sent on a Macintosh Portable by James Adamson & Shannon Lucid aboard the Shuttle Atlantis STS-43 Public Domain Remastered — Looney Tunes MEGA Compilation, 118 FULL Episodes in 4K 60FPS Zahvalnice Snimano 20.2.2026. Uvodna muzika by Vladimir Tošić, stari sajt je ovde. Logotip by Aleksandra Ilić. Artwork epizode by Saša Montiljo, njegov kutak na Devianartu
Pour l'épisode de cette semaine, je reçois Gilles Barbier, entrepreneur récidiviste et fondateur de TinyStaff.Gilles évolue dans l'écosystème tech depuis plus de 20 ans : créateur de startups, ancien CTO de The Family, contributeur open source… Il suit aujourd'hui de très près la révolution en cours autour des agents IA et des nouveaux outils de développement.Au cours de cet épisode, nous avons parlé d'OpenClaw, le projet open source qui a explosé en quelques semaines (plus de 200 000 stars sur GitHub), et de ce qu'il change concrètement dans la façon de travailler.Nous avons abordé :Ce qu'est réellement OpenClaw et pourquoi il a suscité un tel engouementLa différence entre une IA “chat” classique et une IA agentique proactiveComment Gilles a construit TinyStaff au-dessus d'OpenClaw pour proposer des “virtual employees” prêts à l'emploiL'impact des outils comme Claude Code, Codex ou Cursor sur la productivité des développeursLe coût réel des tokens et la question des abonnements vs APIL'avenir des SaaS face aux agents : disparition, transformation ou adaptation ?Pourquoi les éditeurs devront rendre leurs produits “agent-compatible” (API, CLI, MCP…)Ce que cette révolution va changer, au-delà des développeurs, pour tous les métiersUn épisode un peu différent, plus “actu chaude” que d'habitude, mais passionnant pour comprendre la vague en cours et anticiper ses conséquences sur l'écosystème SaaS.Vous pouvez suivre Gilles sur LinkedIn.Bonne écoute !Pour soutenir SaaS Connection en 1 minute⏱ (et 2 secondes) :Abonnez-vous à SaaS Connection sur votre plateforme préférée pour ne rater aucun épisode
Weirdly Magical with Jen and Lou - Astrology - Numerology - Weird Magic - Akashic Records
Louise Edington discusses the significance of the current Saturn-Neptune conjunction at 0 degrees Aries, a rare event not seen since before 4300 BCE at 0˚ Aries. She highlights its impact on personal and collective levels, referencing historical events from 1989, such as the fall of the Berlin Wall and the Tiananmen Square protests. Louise emphasizes the conjunction's influence on boundaries, dissolution, and structural changes, particularly in politics and societal norms. She also mentions the conjunction's alignment with eclipses and other astrological factors, suggesting profound shifts in identity, values, and community dynamics.
Join Scott as he shows off CircuitPython running locally in the Zephyr native simulator and discusses how it provides a feedback loop for LLM agents. He'll also answer any questions folks have. Thanks to dcd for the timecodes: 0:00 Getting started 3:00 Hello everyone - welcome to deep dive 4:10 adafruit ESP32-S2 example microcomputer running circuitpython 5:32 using LLM agents to generate code 5:55 new monitor - mouse tiler 6:37 mouse tiler using absolute positioning 7:25 resumed pi session with generate_mousetiler_layouts.py 8:49 example how LLM's are game changing 9:19 update KWIN scripts settings 11:00 "My AI Adoption Journey" https://mitchellh.com/writing/my-ai-adoption-journey 15:00 How to test USB without the linux kernel 16:55 Testing is more important now that LLMs are in the loop 17:43 Low level USB IP - using Raspberry Pi to share mouse and keyboard over internet 18:48 USB OCD esp32p4-usbip $35 asked Codex to write code overnight to send USB over wifi 20:30 usbip-pyusb-test w/MNS 21:49 upgraded from $20 to $200 subscription ( only 14% used ) 23:00 S3 USB Host not supported yet 23:46 esp32-S3-USB-OTG https://docs.espressif.com/projects/esp-dev-kits/en/latest/esp32s3/esp32-s3-usb-otg/user_guide.html 25:04 ESP P4 has Ethernet 29:13 considering Octo probes could be accessible over the internet ( over tailscale ) 31:33 Gross PR with job server (build all boards - agent generated) 34:00 demo the TUI interface 38:07 chef analogy in https://www.avo.app/blog/from-pairing-to-leading 40:25 Keep PRs small! 
( multiple branches ) 42:20 skip to the testing virtual desktop 43:10 using the zephyr simulator 44:50 edit settings.toml / using pi 47:50 testing to verify web workflow 49:25 web workflow test not working 50:20 pi: "figure out why web workflow not working" 52:07 look at tests/test_web_workflow.py 59:56 wrap back to "My AI aboption" 1:01:23 prioritize step 5 engineer the harnesses 1:03 Wrapping up - new channel #coding-agents-and-llms 1:05:48 out on the 6th ( 2 weeks from now ) Visit the Adafruit shop online - http://www.adafruit.com ----------------------------------------- LIVE CHAT IS HERE! http://adafru.it/discord Subscribe to Adafruit on YouTube: http://adafru.it/subscribe New tutorials on the Adafruit Learning System: http://learn.adafruit.com/ -----------------------------------------
"I Have a Dream" is a public speech that was delivered by American civil rights activist and Baptist minister Martin Luther King Jr. during the March on Washington for Jobs and Freedom on August 28, 1963. Black History Month is an annually observed commemorative month originating in the United States, where it is also known as African-American History Month. MLK books available at https://amzn.to/49zwY32 Civil Rights books available at https://amzn.to/4q0jbJf Inquisikids products available at https://amzn.to/49ZRrhV ENJOY Ad-Free content, Bonus episodes, and Extra materials when joining our growing community on https://patreon.com/markvinet SUPPORT this channel by purchasing any product on Amazon using this FREE entry LINK https://amzn.to/3POlrUD (Amazon gives us credit at NO extra charge to you). Mark Vinet's HISTORICAL JESUS podcast at https://parthenonpodcast.com/historical-jesus Mark's TIMELINE video channel: https://youtube.com/c/TIMELINE_MarkVinet Website: https://markvinet.com/podcast Facebook: https://www.facebook.com/mark.vinet.9 X (Twitter): https://twitter.com/MarkVinet_HNA Instagram: https://www.instagram.com/denarynovels Mark's books: https://amzn.to/3k8qrGM Audio credits: Inquisikids Daily 15jan2024 Who Was Martin Luther King Jr.?; I Have a Dream speech by Martin Luther King Jr. (Archive.org). Audio excerpts reproduced under the Fair Use (Fair Dealings) Legal Doctrine for purposes such as criticism, comment, teaching, education, scholarship, research and news reporting.See omnystudio.com/listener for privacy information.
Andrew and David hold down the fort without Chris and catch up on what they've been watching and reading, before welcoming back Joe Masilotti, the show's most listened-to guest from last year. They talk about Hotwire Native's momentum, why “Bridge Components” are the unlock for truly native features, Joe's push toward SwiftUI compatibility, the messy reality of in-app purchases, and how his “PurchaseKit” aims to simplify the whole Apple/Google webhook maze. We also hear about Joe's new podcast with Colleen, the hosts' AI tool usage (Claude, Augment, Codex), and Joe's intent to submit a CFP to speak at RubyConf in Vegas. Hit download now to hear more!
Links:
Judoscale - Remote Ruby listener gift
Joe Masilotti Website
Joe Masilotti X
Bridge Components
PurchaseKit
Permission Not Required Podcast
Dungeon Crawler Carl
Godfather of Harlem
Claude Code
Codex
missing (APIdock)
RubyConf 2026, July 14-16, Las Vegas, NV
Honeybadger - an application health monitoring tool built by developers for developers.
Judoscale - Make your deployments bulletproof with autoscaling that just works.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Chris Oliver X/Twitter
Andrew Mason X/Twitter
Jason Charnes X/Twitter
Thank this podcast for so many hours of entertainment and enjoy exclusive episodes like this one. Support it on iVoox! This time we venture inside an abandoned house in the middle of the forest. You can find some videos on YouTube in which urbex explorers visit its interior and run some tests, reporting a great deal of paranormal activity. We were able to confirm it ourselves. A "virgin" location, so to speak, in the podcast world. Join us in this investigation and discover all the secrets of the so-called Casa Gaudí. You can also find us on YouTube, TikTok, and in the Telegram group Codex más allá del misterio. Published essays and novels: ENTRE HISTORIAS EXTRAÑAS (Amazon); CAZADORES DE MISTERIOS (Ediciones Cydonia); CAZADORES DE MISTERIOS 2 (Editorial Guante Blanco); CAZADORES DE MISTERIOS 3 (Amazon); CAZAVAMPIROS. MITO Y REALIDAD (Colección Biblioteca del Misterio, Ediciones Oblicuas); ENIGMA VALLÉS (Bohodón Ediciones); ARCA SACRARIUM. Listen to this full episode and get access to all the exclusive content of CODEX... más allá del misterio PODCAST. Discover new episodes before anyone else and join the exclusive listener community at https://go.ivoox.com/sq/130420
The Tata Group, Tata Consultancy Services (TCS), and OpenAI have announced a multi-dimensional strategic partnership that will drive AI-powered innovation across enterprise, consumer, and social sectors. TCS, a leading global IT services, consulting, and business solutions company, operates a global delivery centre in Letterkenny. The announcement, which coincides with this week's India AI Impact Summit, shows a continued aspiration for TCS to become the world's largest AI-enabled tech services company. This partnership spans multiple high-impact areas, including powering AI-led innovation across Tata Group companies, joint efforts to drive AI transformation across industries globally, and setting up AI infrastructure. Key Partnership Highlights: Empowering Tata Group employees with Enterprise ChatGPT: Several thousand Tata Group employees will get access to Enterprise ChatGPT, accelerating innovation and productivity. In addition, TCS will leverage OpenAI's Codex to boost software engineering outcomes. Building industry-specific Agentic AI solutions: Under this partnership, OpenAI, with its leading Agentic AI solutions, and TCS, with its contextual knowledge of industries and deep AI skills, will come together to build impactful industry-specific solutions. Joint go-to-market (GTM) initiatives: TCS and OpenAI will jointly enable global enterprises to transform with AI-powered solutions specific to their organisational context. Through this collaboration, TCS will help its customers accelerate AI-led transformation by deploying, integrating, and scaling OpenAI's advanced AI platforms worldwide. Developing AI infrastructure: TCS's HyperVault unit and OpenAI have agreed to a multi-year partnership to develop AI infrastructure. In the initial phase, TCS will develop AI infrastructure with 100MW capacity, with an option to scale to 1 GW. This infrastructure will power next-generation AI workloads and position India as a global AI hub. 
Sam Altman, CEO, OpenAI, said, "India is already leading the way in AI adoption, and with its talent, ambition, and strong government support, it is well placed to help shape its future. Through OpenAI for India and our partnership with Tata Group, we're working together to build the infrastructure, skills, and local partnerships needed to build AI with India, for India, and in India, so that more people across the country can access and benefit from it." N Chandrasekaran, Chairman, Tata Sons, said, "This deep collaboration between OpenAI and Tata Group marks a major milestone in India's vision to become a global leader in AI. We are pleased to partner with OpenAI to create state-of-the-art AI infrastructure in India. This is a unique opportunity for OpenAI and TCS to transform industries. Together we will skill India's youth and empower them to succeed in the AI era." TCS established HyperVault in 2025 with a vision to deliver gigawatt-scale secure, reliable, large-scale AI-ready infrastructure for hyperscalers and AI-driven organisations. Powered by green energy, it will offer purpose-built, liquid-cooled data centres with high rack densities, and network connectivity across all key cloud regions. This partnership marks a pivotal moment in India's vision to become a global leader in AI and build an ecosystem that accelerates AI development and adoption. More about Irish Tech News Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No.1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming Podcast email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. 
Boris Cherny is the creator and head of Claude Code at Anthropic. What began as a simple terminal-based prototype just a year ago has transformed the role of software engineering and is increasingly transforming all professional work.
We discuss:
1. How Claude Code grew from a quick hack to 4% of public GitHub commits, with daily active users doubling last month
2. The counterintuitive product principles that drove Claude Code's success
3. Why Boris believes coding is “solved”
4. The latent demand that shaped Claude Code and Cowork
5. Practical tips for getting the most out of Claude Code and Cowork
6. How underfunding teams and giving them unlimited tokens leads to better AI products
7. Why Boris briefly left Anthropic for Cursor, then returned after just two weeks
8. Three principles Boris shares with every new team member
Brought to you by:
DX - The developer intelligence platform designed by leading researchers: https://getdx.com/lenny
Sentry - Code breaks, fix it faster: https://sentry.io/lenny
Metaview - The AI platform for recruiting: https://metaview.ai/lenny
Episode transcript: https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens
Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0
Where to find Boris Cherny:
• X: https://x.com/bcherny
• LinkedIn: https://www.linkedin.com/in/bcherny
• Website: https://borischerny.com
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
In this episode, we cover:
(00:00) Introduction to Boris and Claude Code
(03:45) Why Boris briefly left Anthropic for Cursor (and what brought him back)
(05:35) One year of Claude Code
(08:41) The origin story of Claude Code
(13:29) How fast AI is transforming software development
(15:01) The importance of experimentation in AI innovation
(16:17) Boris's current coding workflow (100% AI-written)
(17:32) The next frontier
(22:24) The downside of rapid innovation
(24:02) Principles for the Claude Code team
(26:48) Why you should give engineers unlimited tokens
(27:55) Will coding skills still matter in the future?
(32:15) The printing press analogy for AI's impact
(36:01) Which roles will AI transform next?
(40:41) Tips for succeeding in the AI era
(44:37) Poll: Which roles are enjoying their jobs more with AI
(46:32) The principle of latent demand in product development
(51:53) How Cowork was built in just 10 days
(54:04) The three layers of AI safety at Anthropic
(59:35) Anxiety when AI agents aren't working
(01:02:25) Boris's Ukrainian roots
(01:03:21) Advice for building AI products
(01:08:38) Pro tips for using Claude Code effectively
(01:11:16) Thoughts on Codex
(01:12:13) Boris's post-AGI plans
(01:14:02) Lightning round and final thoughts
References: https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
Weirdly Magical with Jen and Lou - Astrology - Numerology - Weird Magic - Akashic Records
Louise Edington discusses the upcoming lunar eclipse on March 3, 2026, and its astrological implications. She highlights the significance of the Virgo and Pisces nodes, which have been integrating the material and spiritual since early last year. The eclipse at 12 degrees will mark a completion and a shift to Leo and Aquarius nodes on July 26, 2026. Key astrological aspects include Mercury retrograde, Mars in Pisces, and Neptune in Aries. Louise emphasizes the eclipse's impact on emotional and spiritual balance, urging viewers to prioritize their passions and prepare for significant changes.
The u-blox SAM-M8Q has been sitting on my bench for months. This little GPS module has a built-in antenna and coin cell backup, and speaks both NMEA and the UBX binary protocol over UART or I2C. So why isn't it in the shop already? Well, it's mostly because of the 475-page interfacing datasheet documenting every command, struct, and config register. Hundreds of message types. I got partway through by hand with some Claude Code Sonnet assistance, but ran out of time - plus it was still tedious babysitting Sonnet. However, now we're living in an Opus + Codex era! So I pointed my Raspberry Pi OpenClaw at it. https://github.com/adafruit/openclaw Here's the setup: a Raspberry Pi 5 running OpenClaw, wired to a QT Py RP2040, which talks to the SAM-M8Q. Opus 4.6 reads the datasheet (converted to markdown first by Sonnet 4.6 with 1M context, to minimize re-parsing that PDF every session) and builds the implementation plan. I review the plan to make sure it prioritizes the most common commands and reports, and flag nonessential sections like automotive-assist or RTK-specific ones. Then Codex is assigned each message implementation task as a sub-agent and writes the actual C code for the Arduino library. Opus suggested using struct-based parsing rather than digging through each uint8_t array: we just memcpy the checksummed message's raw bytes onto the matching struct and read out the typed bit fields. We've got four message types done so far. After each message is implemented, Codex also writes a test sketch that exercises and pretty-prints the results of each message - great for self-testing as well as regression testing later. Tonight I'm telling it to keep going while I sleep: code, parse, test against live satellite data, fix failures, commit and push on success, then move on to the next. To me this is a great use of "agentic" firmware development: there's no creativity in transcribing 84 different structs from a 475-page datasheet.
Once the LLMs are done, I can review the PRs as if it were an everyday contributor and even make revision suggestions.
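The struct-based parsing idea described above can be sketched in plain C. This is an illustrative sketch only, not code from the actual Adafruit library: the struct below is a hypothetical subset of a UBX time payload with made-up field names, and it assumes a little-endian host (like the RP2040 or Pi mentioned), since the UBX wire format is little-endian.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical subset of a UBX time payload. Packed so the struct layout
 * matches the wire bytes exactly; field names are illustrative only. */
typedef struct __attribute__((packed)) {
    uint32_t iTOW;   /* GPS time of week, ms */
    uint16_t year;
    uint8_t  month;
    uint8_t  day;
    uint8_t  hour;
    uint8_t  min;
    uint8_t  sec;
} ubx_nav_time_t;

/* Verify the UBX 8-bit Fletcher checksum (computed over class, id, length,
 * and payload), then memcpy the raw payload bytes onto the typed struct --
 * the "struct-based parsing" approach, instead of indexing a uint8_t array
 * field by field. Returns 1 on success, 0 on a malformed frame. */
int ubx_parse(const uint8_t *frame, size_t frame_len, ubx_nav_time_t *out) {
    if (frame_len < 8) return 0;              /* sync(2) + header(4) + cksum(2) */
    uint16_t payload_len = frame[4] | (frame[5] << 8);   /* little-endian length */
    if (frame_len < (size_t)(8 + payload_len)) return 0;
    if (payload_len < sizeof(ubx_nav_time_t)) return 0;

    uint8_t ck_a = 0, ck_b = 0;
    for (size_t i = 2; i < (size_t)(6 + payload_len); i++) { /* skip sync bytes */
        ck_a += frame[i];
        ck_b += ck_a;
    }
    if (ck_a != frame[6 + payload_len] || ck_b != frame[7 + payload_len]) return 0;

    memcpy(out, frame + 6, sizeof(ubx_nav_time_t));  /* raw bytes -> typed fields */
    return 1;
}
```

The appeal of this pattern for an agent transcribing 84 structs is that each message needs only a packed struct definition copied from the datasheet table; the parse/checksum logic stays identical. On a big-endian target the multi-byte fields would additionally need byte swapping.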
In this episode, we unpack a viral AI essay that compares today's AI moment to February 2020, the calm before global disruption.
The author argues that AI has entered a new “phase change”: that systems like GPT-5.3 Codex and Claude Opus 4.6 are no longer just assistants, but autonomous agents capable of planning, executing, and iterating on complex work independently.
But is this reality or hype? As UX designers, we don't need panic. We need perspective.
In this episode, I explore:
What the viral article actually claims
Why it's spreading so fast
What “agentic AI” really means
How leading voices are reacting — from optimism to skepticism
And how you can prepare long-term as a designer in an accelerating world
This isn't about fear. It's about clarity, adaptability, and momentum.
Here are some of the voices referenced in this episode. I highly recommend exploring their work and forming your own opinion:
Matt Shumer - Founder & AI entrepreneur. Author of the viral “February 2020 moment” essay.
Nate B. Jones - AI commentator discussing the “phase change” toward agent swarms and autonomous systems. (YouTube)
Allie K. Miller - AI advisor and former Amazon AI leader. Talks about “information asymmetry” and hands-on benchmarking with advanced AI systems. (LinkedIn)
Ann Handley - Marketing leader and author advocating against AI panic, emphasizing human judgment and relationships. (LinkedIn)
Gary Marcus - AI researcher and cognitive scientist offering a skeptical counterpoint on reliability and hype. (Substack)
AI for Designers: 5-week Bootcamp
Raghu Raghuram, Managing Partner at a16z, and Sarah Wang, General Partner at a16z, speak with Samar Abbas, CEO of Temporal, about how durable execution became the infrastructure layer behind some of the world's most widely used AI agents. They cover why long-running agents require state management and recoverability, how Temporal powers OpenAI's Codex and Snap's Story processing, and why the shift from interactive to background agents is creating distributed systems challenges at a scale that didn't exist two years ago. Resources: Follow Samar Abbas: https://x.com/SamarAtTemporal Follow Sarah Wang: https://x.com/sarahdingwang Follow Raghu Raghuram: https://x.com/RaghuRaghuram Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts. Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Live from Wild West Hackin' Fest Denver 2026, the Black Hills Information Security crew brings their signature mix of sharp security insight and off-the-cuff banter to a packed in-person audience. This episode centers on a controversial Notepad update that introduced Markdown rendering—along with a potential remote code execution (RCE) issue. The hosts unpack what this says about modern software bloat, “vibe coding,” and the growing push to embed AI into everything—whether it belongs there or not. They also explore the implications of Discord's age-verification requirements and AI-generated code, including OpenAI's latest Codex model, and debate whether we're headed toward a wave of AI-assisted vulnerabilities.
Join us LIVE on Mondays, 4:30pm EST.
A weekly podcast with BHIS and friends. We discuss notable infosec and infosec-adjacent news stories gathered by our community news team.
https://www.youtube.com/@BlackHillsInformationSecurity
Chat with us on Discord! - https://discord.gg/bhis
Aaron and Brian review some of the latest AI model releases and discuss how they would evaluate them through the lens of an Enterprise AI Architect.
SHOW: 1003
SHOW TRANSCRIPT: The Cloudcast #1003 Transcript
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - "CLOUDCAST BASICS"
SHOW NOTES:
Last Week in AI Podcast #234
Artificial Analysis.AI
Opus 4.6 Release
GPT Codex 5.3 Release
GLM-5 Release
OpenAI Preparedness Framework
Sam's Tweet that 5.3 Codex hit “high” ranking for cybersecurity
Fortune Article on 5.3 high ranking
TAKEAWAYS:
The frequency of AI model releases can lead to numbness among users.
Evaluating AI models requires understanding their specific use cases and benchmarks.
Enterprises must consider the compatibility and integration of new models with existing systems.
Benchmarks are becoming more accessible but still require careful interpretation.
The rapid pace of AI development creates challenges for enterprise adoption and integration.
Companies need to be proactive in managing the versioning of AI models.
The industry may need to establish clearer standards for evaluating AI performance.
Efficiency and cost-effectiveness are becoming critical metrics for AI adoption.
The timing of model releases can impact their market reception and user adoption.
Businesses must adapt to the fast-paced changes in AI technology to remain competitive.
FEEDBACK?
Email: show at the cloudcast dot net
Bluesky: @cloudcastpod.bsky.social
Twitter/X: @cloudcastpod
Instagram: @cloudcastpod
TikTok: @cloudcastpod
In today's episode of the RattlerGator Report, JB White conducts a live read-through and reaction to a powerful essay by Matt Schumer detailing the rapid acceleration of artificial intelligence. Framing the moment as a “February 2020” style inflection point, JB walks through Schumer's firsthand account of GPT-5.3 Codex and Opus 4.6, highlighting AI systems that now write code, debug themselves, iterate independently, and even help build their own successors. The discussion centers on exponential growth, task-duration benchmarks, AI-assisted model training, and projections that superhuman capability across most cognitive work could arrive within just a few years. JB expands the conversation into geopolitics, energy infrastructure, Elon Musk's role in robotics and satellite systems, and what AI dominance could mean for military, economic, and cultural power. He also ties the technological shift to 2026–2028 political positioning, global alliances, and internal GOP battles, arguing that America's strategic advantage depends on recognizing the scale of change underway. The episode blends technological urgency, political forecasting, and philosophical reflection on generational responsibility in the face of accelerating transformation.
"I Have a Dream" is a public speech that was delivered by American civil rights activist and Baptist minister Martin Luther King Jr. during the March on Washington for Jobs and Freedom on August 28, 1963. Black History Month is an annually observed commemorative month originating in the United States, where it is also known as African-American History Month. MLK books available at https://amzn.to/49zwY32 Civil Rights books available at https://amzn.to/4q0jbJf Inquisikids products available at https://amzn.to/49ZRrhV ENJOY Ad-Free content, Bonus episodes, and Extra materials when joining our growing community on https://patreon.com/markvinet SUPPORT this channel by purchasing any product on Amazon using this FREE entry LINK https://amzn.to/3POlrUD (Amazon gives us credit at NO extra charge to you). Mark Vinet's HISTORICAL JESUS podcast at https://parthenonpodcast.com/historical-jesus Mark's TIMELINE video channel: https://youtube.com/c/TIMELINE_MarkVinet Website: https://markvinet.com/podcast Facebook: https://www.facebook.com/mark.vinet.9 X (Twitter): https://twitter.com/MarkVinet_HNA Instagram: https://www.instagram.com/denarynovels Mark's books: https://amzn.to/3k8qrGM Audio credits: Inquisikids Daily 15jan2024 Who Was Martin Luther King Jr.?; I Have a Dream speech by Martin Luther King Jr. (Archive.org). Audio excerpts reproduced under the Fair Use (Fair Dealings) Legal Doctrine for purposes such as criticism, comment, teaching, education, scholarship, research and news reporting.See omnystudio.com/listener for privacy information.
Focus on Apple's new 600-euro MacBook, LLMs and their impact on work, the space ambitions of Musk and Bezos, and what's new in Android 17.
Support me on Patreon
Find me on YouTube
Chat with us on Discord
Listener interactions: Lou and the low-cost Mac. Gremi, you're hard on me about Temu and... Lucy 2!? Fortunately... maybe not. Mika: Braga leaves, Seedance 2 is a bloodbath. Will Smith approves. Alih is settling in nicely, and so are the AIs.
Per Aspera: Claude Wars: Kimi's revenge. Codex shifts into second gear with Cerebras. Ad culture: bad faith works. Gemini man: the ARC comes to an end. Alternative medicine: not a general practitioner yet, but maybe a proctologist? The excessive digital habits of the French. It's an eXodus at X / SpaceX / xAI.
Ad Astra: The hare, the tortoise, the frog, and the panda. Crystal Chronicles: some very space-grade semiconductors. Aurora is drop-dead gorgeous, but in the US, electric vehicles are under strain. Android 17: living for cinnamon.
Video games: Control, vampires, a giant chicken, some Kratos and some Saros; that was the State of Play.
Participants: With Cassim Montilla. Hosted by Guillaume Poggiaspalla.
OpenAI's hottest app isn't ChatGPT—it's Codex. In the last few weeks alone, the Codex team shipped a desktop app, GPT-5.3 Codex (a new flagship model), and Spark, the fastest coding model I've ever used. Usage has grown fivefold since January, and over a million people now use Codex weekly. Codex was also the app that OpenAI chose to run an ad for in the Super Bowl.
Dan Shipper talked to Thibault Sottiaux, head of Codex, and Andrew Ambrosino, a member of technical staff who built the Codex app, for Every's AI & I about what OpenAI is building and how they're using it internally.
If you found this episode interesting, please like, subscribe, comment, and share!
Want even more? Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It's usually only for paying subscribers, but you can get it here for free.
To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper
Head to granola.ai/every and get 3 months free with the code EVERY.
Timestamps:
00:00:00 - Start
00:01:27 - Introduction
00:05:27 - OpenAI's evolving bet on its coding agent
00:09:42 - The choice to invest in a GUI (over a terminal)
00:20:38 - The AI workflows that the Codex team relies on to ship
00:26:45 - Teaching Codex how to read between the lines
00:28:45 - Building affordances for a lightning-fast model
00:33:15 - Why speed is a dimension of intelligence
00:36:30 - Code review is the next bottleneck for coding agents
00:41:24 - How the Codex team positions against the competition
Links to resources mentioned in the episode:
Thibault Sottiaux: Tibo (@thsottiaux)
Andrew Ambrosino: Andrew Ambrosino (@ajambrosino)
Every's vibe check on everything the Codex team launched: OpenAI's Codex App Gains Ground on Claude Code, GPT-5.3 Codex—The 10x Engineer, Now More Fun at Parties, AI as Fast as Your Train of Thought
Yeah, you prolly saw the news: OpenAI acquihired OpenClaw.
Louise Edington discusses the significance of the February 17, 2026, solar annular eclipse, noting its impact on Aquarius and its conjunction with Uranus. She highlights the eclipse's path through the U.S., Greenland, Russia, and Iran, and its proximity to the Saturn-Neptune conjunction at zero Aries. Louise promotes her "Reborn Immersion" series, offering early bird pricing and additional resources. She reflects on the Year of the Fire Horse, linking it to social movements and the U.S. chart. Louise also mentions the astrological influences on key figures like Donald Trump and the potential for revolutionary changes. She concludes with tarot and oracle card readings, emphasizing personal growth and the importance of imperfection.
Get our AI Video Guide: https://clickhubspot.com/dth Episode 97: How close are we to a world where AI-generated videos are indistinguishable from reality? Matt Wolfe (https://x.com/mreflow) and Joe Fier (linkedin.com/in/joefier) dive deep into Seedance 2.0—ByteDance's new AI video model that could outpace giants like Sora and Veo. Joe, a marketing and business expert known for his hands-on approach and insights into AI's rapid evolution, helps break down the five most fascinating developments in the AI space this week. They tackle game-changing AI advances: Seedance 2.0's mind-blowing video generation for ads and motion graphics, the rollout of Google's Veo 3.1 in Google Ads, the GPT-5.3 Codex Spark coding model built on specialized inference chips, Gemini's DeepThink model for scientific research, and the early rollout of ChatGPT ads. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) Seedance 2.0 arrives – AI video generation blurs reality, ad creation moves fast. (03:03) Google's Veo 3.1 powers video ads, advertisers can now generate clips directly from image uploads. (05:33) Comparison of Runway, Kling, Veo, and Sora—head-to-head prompt showdown. (07:00) Motion graphics and explainers—AI's take on the creative industry. (08:35) US vs. China—Copyright, IP, and training data debates. (12:10) Deepfake and video authenticity—why we now default to skepticism. (13:30) Google's edge in visual AI via YouTube's massive corpus. (14:39) The next frontier: Longer, more consistent video generation. (15:14) Where do humans fit in? Taste, storytelling, and creative direction. (18:30) GPT-5.3 Codex Spark—coding models on Cerebras inference chips, demo generating a website in 18 seconds. (24:34) AI tool comparisons—Codex vs. Cursor vs. Claude Code. (25:12) Speed as the key bottleneck breaker in creative and technical workflows. 
(28:02) Google's Gemini DeepThink—state-of-the-art research, advanced coding and physics capabilities. (32:52) Gemini demo attempt—3D-printable STL file and solving the three-body problem. (33:20) ChatGPT rolls out ads—impact on monetization and user trust. (40:02) Google's ad history—how “sponsored” is becoming harder to distinguish. (44:02) Democratizing AI access via ad-supported models. (45:03) Matt Schumer's viral article—why AI is moving even faster than most people realize. (51:11) Tools that build tools—AGI's path and the new role for humans. (53:12) Real-world skills and taste—where humanity still wins (for now). (54:01) Final thoughts—wake up, pay attention, and stay on the leading edge. — Mentions: Seedance 2.0: https://www.seedance.com/ ByteDance: https://www.bytedance.com/ CapCut: https://www.capcut.com/ Veo: https://deepmind.google/models/veo/ Runway: https://runwayml.com/ ChatGPT Codex: https://chatgpt.com/codex Matt Schumer's Viral Article: https://www.mattshumer.com/blog/ai-changes-everything Super Bowl Claude Commercial: https://www.anthropic.com/news/super-bowl-ad Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
Our 235th episode with a summary and discussion of last week's big AI news!
Recorded on 01/02/2026. Hosted by Andrey Kurenkov and Jeremie Harris.
Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Read our text newsletter and comment on the podcast at https://lastweekin.ai/
In this episode:
* Major model launches include Anthropic's Opus 4.6 with a 1M-token context window and “agent teams,” OpenAI's GPT-5.3 Codex and faster Codex Spark via Cerebras, and Google's Gemini 3 Deep Think posting big jumps on ARC-AGI-2 and other STEM benchmarks amid criticism about missing safety documentation.
* Generative media advances feature ByteDance's Seedance 2.0 text-to-video with high realism and broad prompting inputs, new image models Seedream 5.0 and Alibaba's Qwen Image 2.0, plus xAI's Grok Imagine API for text/image-to-video.
* Open and competitive releases expand with Zhipu's GLM-5, DeepSeek's 1M-token context model, Cursor Composer 1.5, and open-weight Qwen3 Coder Next using hybrid attention aimed at efficient local/agentic coding.
* Business updates include ElevenLabs raising $500M at an $11B valuation, Runway raising $315M at a $5.3B valuation, humanoid robotics firm Apptronik raising $935M at a $5.3B valuation, Waymo announcing readiness for high-volume production of its 6th-gen hardware, plus industry drama around Anthropic's Super Bowl ad and departures from xAI.
Timestamps:
(00:00:10) Intro / Banter
(00:02:03) Sponsor Break
(00:05:33) Response to listener comments
Tools & Apps
(00:07:27) Anthropic releases Opus 4.6 with new 'agent teams' | TechCrunch
(00:11:28) OpenAI's new GPT-5.3-Codex is 25% faster and goes way beyond coding now - what's new | ZDNET
(00:25:30) OpenAI launches new macOS app for agentic coding | TechCrunch
(00:26:38) Google Unveils Gemini 3 Deep Think for Science & Engineering | The Tech Buzz
(00:31:26) ByteDance's Seedance 2.0 Might be the Best AI Video Generator Yet - TechEBlog
(00:35:14) China's ByteDance, Alibaba unveil AI image tools to rival Google's popular Nano Banana | South China Morning Post
(00:36:54) DeepSeek boosts AI model with 10-fold token addition as Zhipu AI unveils GLM-5 | South China Morning Post
(00:43:11) Cursor launches Composer 1.5 with upgrades for complex tasks
(00:44:03) xAI launches Grok Imagine API for text and image to video
Applications & Business
(00:45:47) Nvidia-backed AI voice startup ElevenLabs hits $11 billion valuation
(00:52:04) AI video startup Runway raises $315M at $5.3B valuation, eyes more capable world models | TechCrunch
(00:54:02) Humanoid robot startup Apptronik has now raised $935M at a $5B+ valuation | TechCrunch
(00:57:10) Anthropic says 'Claude will remain ad-free,' unlike an unnamed rival | The Verge
(01:00:18) Okay, now exactly half of xAI's founding team has left the company | TechCrunch
(01:04:03) Waymo's next-gen robotaxi is ready for passengers — and also 'high-volume production' | The Verge
Projects & Open Source
(01:04:59) Qwen3-Coder-Next: Pushing Small Hybrid Models on Agentic Coding
(01:08:38) OpenClaw's AI 'skill' extensions are a security nightmare | The Verge
Research & Advancements
(01:10:40) Learning to Reason in 13 Parameters
(01:16:01) Reinforcement World Model Learning for LLM-based Agents
(01:20:00) Opus 4.6 on Vending-Bench – Not Just a Helpful Assistant
Policy & Safety
(01:22:28) METR GPT-5.2
(01:26:59) The Hot Mess of AI: How Does Misalignment Scale with Model Intelligence and Task Complexity?
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
If you enjoy my content, you're welcome to become a member and support me — it gives me more motivation to keep sharing great content!
"Two years from now, all white-collar jobs may be gone." — Dario Amodei (via Keith Teare)Keith Teare leads this week's tech roundup with a video he made on Google's Veo: one glass half-full of water, another half-full of spiders. It's a metaphor for the AI moment. The water represents the tools released in the past two weeks—Anthropic's Claude 4.6, OpenAI's CodeX 5.3—which Keith calls "beyond belief." The spiders represent the fear, which he acknowledges is not irrational. But maybe spiders are the wrong metaphor. Maybe we're the frogs being slowly boiled, not noticing the temperature rise until it's too late.The trigger was Matt Schumer's viral essay "Something Big is Happening," which got 50 million views by telling engineers to become AI experts immediately or become irrelevant. Keith tested the thesis: he built venturebets.io, a prediction market, in a single day. He automated That Was The Week so completely that his weekly workflow dropped from six hours to under one. But then Dario Amodei and Satya Nadella both said the quiet part loud: in two years, there may be no white-collar jobs left. Keith's response? The glass doesn't contain jobs—it contains the future of life. And he'd rather have time to make videos of spiders crawling out of glasses than spend six hours curating links. The rest of us may not have the luxury of choosing. About the GuestKeith Teare is a serial entrepreneur and investor, founder of SignalRank, and author of the newsletter That Was The Week. 
He co-hosts the weekly tech roundup on Keen On America.ReferencesEssays discussed:● Matt Schumer's "Something Big is Happening" went viral with 50 million views, arguing that engineers must become AI experts immediately or face obsolescence.● Noah Smith published two essays: "The Fall of the Nerds" and "You Are No Longer the Smartest Type of Thing on Earth," arguing that humanity's destiny is now mostly out of our own hands.● Josh Tyrangiel wrote "America Isn't Ready for What AI Will Do to Jobs" in The Atlantic.● The Financial Times published "Anthropic's Breakout Moment" on the company's enterprise momentum.Tools and companies mentioned:● Claude 4.6 from Anthropic and CodeX 5.3 from OpenAI represent a "step change" in agentic AI—you give tasks, not prompts, and sub-agents complete them autonomously.● Google Veo is Google's video generation tool, which Keith used to create the glass-half-full-of-spiders metaphor.● Polymarket and Kalshi are prediction markets that Keith's new venturebets.io aims to match in quality.People mentioned:● Dario Amodei, CEO of Anthropic, predicted that white-collar jobs may be gone in two years.● Satya Nadella, CEO of Microsoft, echoed Amodei's prediction about the end of white-collar work.About Keen On AmericaNobody asks more awkward questions than the Anglo-American writer and filmmaker Andrew Keen. In Keen On America, Andrew brings his pointed Transatlantic wit to making sense of the United States—hosting daily interviews about the history and future of this now venerable Republic. 
With nearly 2,800 episodes since the show launched on TechCrunch in 2010, Keen On America is the most prolific intellectual interview show in the history of podcasting.WebsiteSubstackYouTubeApple PodcastsSpotifyChapters:(00:00) - The glass half-full of spiders (01:30) - Matt Schumer's viral essay (03:15) - Every week is the biggest week in AI (04:30) - Claude 4.6 and CodeX 5.3: a step change (06:00) - Keith builds a prediction market in a day (07:45) - Fear is a bad operating system (09:30) - What's actually changed with That Was The Week? (12:00) - Trusting the algorithm to read for you (14:00) - Noah Smith: You're no longer the smartest thing on Earth (16:00) - The rabbit vs. the tiger (17:30) - Google's quantum computer and parallel universes (19:00) - America isn't ready for what AI will do to jobs (20:30) - Amodei and Nadella: two years to no white-collar jobs (22:00) - What's in the glass is the future of life (24:00) - Anthropic's breakout moment (26:00) - Claude Code vs. CodeX: Keith switches sides
HTML All The Things - Web Development, Web Design, Small Business
The pace of AI model releases is becoming almost impossible to follow. In just two weeks we saw GPT-5.3-Codex, GPT-5.2 updates, Gemini 3 Deep Think upgrades, Claude Opus 4.6 with a 1M context window in beta, Qwen3-Coder-Next, GLM-5, MiniMax M2.5, Cursor Composer 1.5, and even Kimi 2.5 just outside the window. This isn't a quarterly product cycle anymore - it's a daily arms race. In this episode Matt and Mike break down what this acceleration means for developers, open source, frontier labs, and the broader industry. Are we witnessing healthy innovation, or unsustainable velocity? At what point does this stabilize - if it ever does? If you're trying to build, learn, or compete in AI right now… this conversation is for you. Show Notes: https://www.htmlallthethings.com/podcast/ai-competition-is-out-of-control
Join Simtheory: https://simtheory.ai
Register for the STILL RELEVANT tour: https://simulationtheory.ai/16c0d1db-a8d0-4ac9-bae3-d25074589a80

GLM-5 just dropped and it's trained entirely on Huawei chips – zero US hardware dependency. Meanwhile, we're having existential crises about whether we're even needed anymore. In this episode, we break down China's new frontier model that's competing with Opus 4.6 and Codex at a fraction of the price, why agentic loops are making 200K context windows the sweet spot (sorry, million-token dreams), and the very real phenomenon of AI productivity psychosis. We dive into why coding-optimized models are secretly winning at everything, the Harvard study confirming AI doesn't reduce work – it intensifies it, and the exodus of safety researchers from xAI, Anthropic, and OpenAI (spoiler: they're not giving back their shares). Plus: Mike's arm is failing from too much mouse usage, we debate whether the chatbot era is actually fading, and yes – there's a safety researcher diss track called "Is This The End?"

CHAPTERS:
0:00 Intro - Is This The End? (Song Preview)
0:11 Still Relevant Tour Update & NASA Listener Callout
1:42 AI Productivity Psychosis: The Pressure of Infinite Capability
4:25 GLM-5 Breakdown: China's New Frontier Model on Huawei Chips
7:24 First Impressions: GLM-5 in Agentic Loops
9:48 Why Cheap Models Matter & The New Model War
14:09 Codex Vibe Shift: Is OpenAI Winning?
16:24 Does Context Window Size Even Matter Anymore?
22:27 The Parallelization Problem & Cognitive Overload
27:27 Mike's Arm Injury & The Voice Input Pivot
31:17 Single-Threaded Work & The 95% Problem
35:06 UX is Unsolved: Rolling Back Agentic Mistakes
38:45 Harvard Study: AI Doesn't Reduce Work, It Intensifies It
44:01 How AI Erodes Company Structure & Why Adoption Takes Years
50:14 My AI vs Your AI: Household Debates
50:43 The Safety Researcher Exodus: xAI, Anthropic, OpenAI
56:49 Final Thoughts: Are We All Still Relevant?
59:04 BONUS: Full "Is This The End?" Diss Track

Thanks for listening. Like & Sub. Links above for the Still Relevant Tour signup and Simtheory. GLM-5 is here, your productivity psychosis is valid, and the safety researchers are becoming poets. xoxo
Sherwin Wu leads engineering for OpenAI's API platform, where roughly 95% of engineers use Codex, often working with fleets of 10 to 20 parallel AI agents.

We discuss:
1. What OpenAI did to cut code review times from 10-15 minutes to 2-3 minutes
2. How AI is changing the role of managers
3. Why the productivity gap between AI power users and everyone else is widening
4. Why "models will eat your scaffolding for breakfast"
5. Why the next 12 to 24 months are a rare window where engineers can leap ahead before the role fully transforms

Brought to you by:
• DX—The developer intelligence platform designed by leading researchers
• Sentry—Code breaks, fix it faster
• Datadog—Now home to Eppo, the leading experimentation and feature flagging platform

Episode transcript: https://www.lennysnewsletter.com/p/engineers-are-becoming-sorcerers
Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0

Where to find Sherwin Wu:
• X: https://x.com/sherwinwu
• LinkedIn: https://www.linkedin.com/in/sherwinwu1

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Sherwin Wu
(03:10) AI's role in coding at OpenAI
(06:53) The future of software engineering with AI
(12:26) The stress of managing agents
(15:07) Codex and code review automation
(19:29) The changing role of engineering managers
(24:14) The one-person billion-dollar startup
(31:40) Management lessons
(37:28) Challenges and best practices in AI deployment
(43:56) Hot takes on AI and customer feedback
(48:57) Building for future AI capabilities
(50:16) Where models are headed in the next 18 months
(53:35) Business process automation
(57:22) OpenAI's ecosystem and platform strategy
(01:00:50) OpenAI's mission and global impact
(01:05:21) Building on OpenAI's API and tools
(01:08:16) Lightning round and final thoughts

Referenced:
• Codex: https://openai.com/codex
• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai
• OpenClaw: https://openclaw.ai
• The creator of Clawd: "I ship code I don't read": https://newsletter.pragmaticengineer.com/p/the-creator-of-clawd-i-ship-code
• The Sorcerer's Apprentice: https://en.wikipedia.org/wiki/The_Sorcerer%27s_Apprentice_(Dukas)
• Quora: https://www.quora.com
• Marc Andreessen: The real AI boom hasn't even started yet: https://www.lennysnewsletter.com/p/marc-andreessen-the-real-ai-boom
• Sarah Friar on LinkedIn: https://www.linkedin.com/in/sarah-friar
• Sam Altman on X: https://x.com/sama
• Nicolas Bustamante's "LLMs Eat Scaffolding for Breakfast" post on X: https://x.com/nicbstme/status/2015795605524901957
• The Bitter Lesson: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
• Overton window: https://en.wikipedia.org/wiki/Overton_window
• Developers can now submit apps to ChatGPT: https://openai.com/index/developers-can-now-submit-apps-to-chatgpt
• Responses: https://platform.openai.com/docs/api-reference/responses
• Agents SDK: https://platform.openai.com/docs/guides/agents-sdk
• AgentKit: https://openai.com/index/introducing-agentkit
• Ubiquiti: https://ui.com
• Jujutsu Kaisen on Crunchyroll: https://www.crunchyroll.com/series/GRDV0019R/jujutsu-kaisen?srsltid=AfmBOoqvfzKQ6SZOgzyJwNQ43eceaJTQA2nUxTQfjA1Ko4OxlpUoBNRB
• eero: https://eero.com
• Opendoor: https://www.opendoor.com

Recommended books:
• Structure and Interpretation of Computer Programs: https://www.amazon.com/Structure-Interpretation-Computer-Programs-Engineering/dp/0262510871
• The Mythical Man-Month: Essays on Software Engineering: https://www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959
• There Is No Antimemetics Division: A Novel: https://www.amazon.com/There-No-Antimemetics-Division-Novel/dp/0593983750
• Breakneck: China's Quest to Engineer the Future: https://www.amazon.com/Breakneck-Chinas-Quest-Engineer-Future/dp/1324106034
• Apple in China: The Capture of the World's Greatest Company: https://www.amazon.com/Apple-China-Capture-Greatest-Company/dp/1668053373

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
Interview starts at 40:35

Jason Wilde joins us for a great chat about his 4-year sabbatical from the West after meeting his Master in a miraculous way. We talk about the big pattern, atheism, UFOs and ancient texts, reality, studying the wrong thing, no free will, gurus, life is the message, Hinduism, 108, loosh, higher power as protection, Vimanas, the Great Flood constant, the hive, the split in humanity, and how people are sleeping more and not waking up.

Check out the Codex of Infinite Beings:
https://x.com/JasonWilde108
https://www.youtube.com/@jasonwilde5829
https://www.jasonwildephotography.com/

Become a Lord or Lady with 1k donations over time. And a Noble with any donation. Leave Serfdom behind and help Grimerica stick to 0 ads and sponsors, fully listener supported. Thanks for listening!! Help support the show, because we can't do it without ya.
https://www.amazon.com/Unlearned-School-Failed-What-About/dp/1998704904/ref=sr_1_3?sr=8-3

Support the show directly:
Watch or Listen on Spotify: https://open.spotify.com/show/2punSyd9Cw76ZtvHxMKenI?si=ImKxfMHgQZ-oshl499O4dQ&nd=1&dlsi=4c25fa9c78674de3
CBD / THC Gummies and Tinctures: https://grimericacbd.com/
http://www.grimerica.ca/support
https://www.patreon.com/grimerica
http://www.grimericaoutlawed.ca/support
www.Rokfin.com/Grimerica

Adultbrain Audiobook YouTube Channel: https://www.youtube.com/@adultbrainaudiobookpublishing
The newer, controversial Grimerica Outlawed: https://grimericaoutlawed.ca/

Check out our next trip/conference/meetup - Contact at the Cabin: www.contactatthecabin.com
Our audiobook website: www.adultbrain.ca
Shrooms and microdosing: www.grimerica.ca/shrooms
Darren's book: www.acanadianshame.ca
Grimerica on Rumble: https://rumble.com/c/c-2312992

Join the chat / hangout with a bunch of fellow Grimericans:
Https://t.me.grimerica
https://www.guilded.gg/i/EvxJ44rk

The Eh-List site, Canadian Propaganda Deconstruction: https://eh-list.ca/
The Eh-List YouTube Channel: https://youtube.com/@theeh-list?si=d_ThkEYAK6UG_hGX

Leave a review on iTunes and/or Stitcher:
https://itunes.apple.com/ca/podcast/grimerica-outlawed
http://www.stitcher.com/podcast/grimerica-outlawed

Sign up for our newsletter: https://grimerica.substack.com/

SPAM Graham and send him your synchronicities, feedback, strange experiences and psychedelic trip reports!! graham@grimerica.com
InstaGRAM: https://www.instagram.com/the_grimerica_show_podcast/
Tweet Darren: https://twitter.com/Grimerica Can't. Darren is still deleted.

Purchase swag, with partial proceeds donated to the show: www.grimerica.ca/swag
Send us a postcard or letter: http://www.grimerica.ca/contact/

Episode ART - Napolean Duheme's site: http://www.lostbreadcomic.com/
MUSIC: https://brokeforfree.bandcamp.com/ - Something Jah
Felix's Site: sirfelix.bandcamp.com - A Grimerica Christmas Carols
You can develop a working Mac app in minutes.