Podcasts about graphs

  • 1,744 podcasts
  • 3,639 episodes
  • 40m average duration
  • 5 weekly new episodes
  • Latest episode: Mar 5, 2026



Latest podcast episodes about graphs

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

The reception to our recent post on Code Reviews has been strong. Catch up! Amid a maelstrom of discussion on whether or not AI is killing SaaS, one of the top publicly listed SaaS companies in the world has just reported record revenues, clearing well over $1.1B in ARR for the first time with a 28% margin. As we comment on the pod, Aaron Levie is the rare public-company CEO equally at home in both worlds of Silicon Valley and Wall Street/Main Street: by day helping 70% of the Fortune 500 with their Enterprise Advanced Suite, yet by night often found in the basements of early startups and tweeting viral insights about the future of agents. Now that Cursor, Cloudflare, Perplexity, Anthropic, and more have made filesystems, sandboxes, and various forms of "Just Give the Agent a Box" cool (not just cool; it is now one of the single hottest areas in AI infrastructure, growing 100% MoM), we find it a delightfully appropriate time to do this episode with the OG CEO who has been giving humans and computers boxes since he was a college dropout pitching VCs at a Michael Arrington house party. Enjoy our special pod, with fan-favorite returning guest/guest cohost Jeff Huber! Note: we didn't directly discuss the AI vs SaaS debate - Aaron has done many, many other podcasts on that, and you should read his definitive essay on it.
Most commentators do not understand SaaS businesses because they have never scaled one themselves and deeply reflected on what the true value proposition of SaaS is. We also discuss Your Company is a Filesystem, and we shout out CTO Ben Kus and the AI team, who talked about the technical architecture and will return for AIE WF 2026.

Full Video Episode

Timestamps

* 00:00 Adapting Work for Agents
* 01:29 Why Every Agent Needs a Box
* 04:38 Agent Governance and Identity
* 11:28 Why Coding Agents Took Off First
* 21:42 Context Engineering and Search Limits
* 31:29 Inside Agent Evals
* 33:23 Industries and Datasets
* 35:22 Building the Agent Team
* 38:50 Read Write Agent Workflows
* 41:54 Docs Graphs and Founder Mode
* 55:38 Token FOMO Culture
* 56:31 Production Function Secrets
* 01:01:08 Film Roots to Box
* 01:03:38 AI Future of Movies
* 01:06:47 Media DevRel and Engineering

Transcript

Adapting Work for Agents

Aaron Levie: Like, you don't write code, you talk to an agent and it goes and does it for you, and you maybe at best review it. That's even probably, like, largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works. All of the economy has to go through that exact same evolution. Right now it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this, 'cause you'll see compounding returns. But that's just gonna take a while for most companies to actually go and get this deployed.

swyx: Welcome to the Latent Space pod. We're back in the Chroma studio with, uh, Chroma CEO Jeff Huber. Welcome, returning guest, now guest host.

Aaron Levie: It's a pleasure. Wow. How'd you get upgraded to, uh, to that?

swyx: Because he's like the perfect guy to be guest host for you.

Aaron Levie: That makes sense, actually. We love context.
We, we both really love context. We really do. We really do.

swyx: Uh, and we're here with, uh, Aaron Levie. Welcome.

Aaron Levie: Thank you. Good to, uh, good to be [00:01:00] here.

swyx: Uh, yeah. So we've all met offline and chatted a little bit, but it's always nice to get these things in person and in conversation. Yeah. You just started off with so much energy. You're, you're super excited about agents.

Aaron Levie: I love agents.

swyx: Yeah. Open Claw just got, uh, got bought by OpenAI. No, not bought, but you know, you know what I mean?

Aaron Levie: Some, some, you know, acquihire.

swyx: Executive hire.

Aaron Levie: Executive hire. Okay. Executive hire.

swyx: Hey, that's my term. Okay. Um, what are you pounding the table on, on agents? You have so many insightful tweets.

Why Every Agent Needs a Box

Aaron Levie: Well, the thing that we get super excited by, that I think should be relatively obvious, is we've built a platform to help enterprises manage their files, their corporate files, the permissions of who has access to those files, and the sharing and collaboration of those files. All of those files contain really, really important information for the enterprise. It might have your contracts, it might have your research materials, it might have marketing information, it might have your memos. All that data obviously has, you know, predominantly been used by humans. [00:02:00] But there's been one really interesting problem, which is that, you know, humans only really work with their files during an active engagement with them, and then they kind of go away and you don't really see them for a long time. And all of a sudden, uh, with the power of AI and AI agents, all of that data becomes extremely relevant as this ongoing source of answers to new questions, of data that will transform into something else that produces value in your organization.
It, it contains the answer to the new employee that's onboarding, that needs to ramp up on a project.Um, it contains the answer to the right thing to sell a customer when you're having a conversation to them, with them contains the roadmap information that's gonna produce the next feature. So all that data. That previously we've been just sort of storing and, and you know, occasionally forgetting about, ‘cause we're only working on the new active stuff.All of that information becomes valuable to the enterprise and it's gonna become extremely valuable to end users because now they can have agents go find what they're looking for and produce new, new [00:03:00] value and new data on that information. And it's gonna become incredibly valuable to agents because agents can roam around and do a bunch of work and they're gonna need access to that data as well.And um, and you know, sometimes that will be an agent that is sort of working on behalf of, of, of you and, and effectively as you as and, and they are kind of accessing all of the same information that you have access to and, and operating as you in the system. And then sometimes there's gonna be agents that are just.Effectively autonomous and kind of run on their own and, and you're gonna collaborate and work with them kind of like you did another person. Open Claw being the most recent and maybe first real sort of, you know, kind of, you know, up updating everybody's, you know, views of this landscape version of, of what that could look like, which is, okay, I have an agent.It's on its own system, it's on its own computer, it has access to its own tools. I probably don't give it access to my entire life. I probably communicate with it like I would an assistant or a colleague and then it, it sort of has this sandbox environment. 
So all of that has massive implications for a platform that manage that [00:04:00] enterprise data.We think it's gonna just transform how we work with all of the enterprise content that we work with, and we just have to make sure we're building the right platform to support that.swyx: The sort of shorthand I put it is as people build agents, everybody's just realizing that every agent needs a box. Yes.And it's nice to be called box and just give everyone a box.Aaron Levie: Hey, I if I, you know, if we can make that go viral, uh, like I, I think that that terminology, I, that's theswyx: tagline. Every agentAaron Levie: needs a box. Every agent needs a box. If we can make that the headline of this, I'm fine with this. And that's the billboard I wanna like Yeah, exactly.Every agent needs a box. Um, I like it. Can we ship this? Like,swyx: okay, let's do it. Yeah.Aaron Levie: Uh, my work here is done and I got the value I needed outta this podcast Drinks.swyx: Yeah.Agent Governance and IdentityAaron Levie: But, but, um, but, but, you know, so the thing that we, we kind of think about is, um, is, you know, whether you think the number 10 x or a hundred x or whatever the number is, we're gonna have some order of magnitude more agents than people.That's inevitable. It has to happen. So then the question is, what is the infrastructure that's needed to make all those agents effective in the enterprise? Make sure that they are well governed. Make sure they're only doing [00:05:00] safe things on your information. Make sure that they're not getting exposed. The data that they shouldn't have access to.There's gonna be just incredibly spectacularly crazy security incidents that will happen with agents because you'll prompt, inject an agent and sort of find your way through the CRM system and pull out data that you shouldn't have access to. Oh, weJeff Huber: have God,Aaron Levie: right? 
I mean, that's just gonna happen all over the place, right? So, so then the thing is, how do you make sure you have the right security, the permissions, the access controls, the data governance? Um, we actually don't yet exactly know, in many cases, how we're gonna regulate some of these agents, right? If you think about an agent in financial services, does it have the exact same financial, sort of, uh, requirements that a human did? Or is the risk fully on the human that was interacting with or created the agent? All open questions, but no matter what, there's gonna need to be a layer that manages the data they have access to, the workflows that they're involved in, pulling up data from multiple systems. This is the new infrastructure opportunity in the era of agents.

swyx: You have a piece on agent identities, [00:06:00] which I think was today, um, which a lot of the security people are talking about, right? I always think of this as like, well, you need the human you, and then you need the agent you.

Aaron Levie: Yes.

swyx: And, uh, well, I don't know if it's that simple, but is Box going to have an opinion on that, or are you just gonna be like, well, we're just sort of the source layer, let Okta or Auth0 handle that?

Aaron Levie: I think we're gonna have an opinion, and we will work with generally wherever the contours of the market end up. Um, and the reason that we're gonna have an opinion more than on other topics, probably, is because one of the biggest use cases for why your agent might need an identity is for file system access. So thus we have to kind of think about this pretty deeply. And I think, uh, unless you're like in our world, thinking about this particular problem all day long, it might be, you know, like, why is this such a big deal?
And the reason why it's a really big deal is because sometimes people sort of say, well, just give the agent an account on the system and treat it like every other type of user on the system. The [00:07:00] problem is that I as Aaron don't really have any responsibility over anybody else's Box account in our organization. I can't see the Box account of any other employee that I work with. I am not liable for anything that they do. And they have, you know, strict privacy requirements on everything that they work on. Agents don't have those properties. The person who creates the agent probably is gonna, for the foreseeable future, take on a lot of the liability of what that agent does. That agent doesn't deserve any privacy because, because it, you know, can't fully be autonomously operated, and it doesn't have any legal, you know, kind of responsibility. So thus you can't just be like, oh, well, I'll just create a bunch of accounts and then I'll kind of work with that agent and I'll talk to it occasionally. Like, you need oversight of that. And so then the question is, how do you have a world where the agent, sometimes you have oversight of, but what if that agent goes and works with other people? That person over there is collaborating with the agent on something, and you shouldn't have [00:08:00] access to what they're doing. So we have all of these new boundaries that we're gonna have to figure out. It's really, really easy so far; we've been in easy mode. We've hit the easy button with AI, which is the agent just is you. When you're in Claude Code and you're in Cursor and you're in Codex, the agent is you. You're auth'ing into your services. It can do everything you can do. That's the easy mode. The hard mode is agents are kind of running on their own.
People check in with them occasionally; they're doing things autonomously. How do you give them access to resources in the enterprise and not dramatically increase the security risk, and the risk that you might expose the wrong thing to somebody? These are all the new problems that we have to get solved. I like the identity layer and identity vendors as being a solution to that, but we'll need some opinions as well, because so many of the use cases are these collaborative file system use cases, which is: how do I give an agent a subset of my data? Give it its own workspace as well, 'cause it's gonna need to store off its own information that would be relevant for it. And how do I have the right oversight into that? [00:09:00]

Jeff Huber: One thing which, um, I think is kind of interesting to think about is, you know, how humans work, right? Like, I may not just give you access to the whole file. I might sit next to you and scroll to this one part of the file and just show you that one part, and like, you know...

swyx: Partial file access.

Jeff Huber: I'm just saying, I think, like, RAG does seem to be dead, right? Like, if you wanna say something is dead, uh-huh, probably RAG is dead. And, uh, the auth story to me seems incredibly unsolved and unaddressed by the existing state of AI vendors. But...

Aaron Levie: Yeah, I think, um, I mean, you're taking it, obviously, really to the limit that we probably need to solve for. Yeah. And we built an access control system that was kind of like, you know, its own little world for a long time. And, um, the idea was this: it's a many-to-many collaboration system where I can give you any part of the file system. And it's a waterfall model, so if I give you access higher up in the system, you get everything below.
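The "waterfall" model Aaron describes here can be sketched as a folder tree where a grant placed at any node implies access to everything beneath it. This is a minimal illustrative sketch, not Box's actual API; all class and account names are hypothetical.

```python
# Sketch of a "waterfall" permission model: a grant placed at any node of the
# folder tree implies access to everything below that node.
# All names here are illustrative, not Box's real data model.

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.grants = set()  # principals granted access at this node

    def has_access(self, principal):
        # Walk upward: a grant anywhere on the path to the root suffices.
        node = self
        while node is not None:
            if principal in node.grants:
                return True
            node = node.parent
        return False

root = Node("All Files")
deals = Node("Deal Rooms", parent=root)
acme = Node("Acme Deal", parent=deals)

deals.grants.add("sally")        # Sally sees every deal room
acme.grants.add("agent-123")     # the agent sees only one deal

print(acme.has_access("sally"))       # True (inherited from Deal Rooms)
print(deals.has_access("agent-123"))  # False (its grant is lower in the tree)
```

The upward walk is what makes the model a waterfall: granting at a folder quietly grants the whole subtree, which is exactly why scoping an agent's grant to the narrowest possible node matters.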
And that kind of created immense flexibility, because I can point you to any layer in the tree, but then you're gonna get access to everything below it. And that [00:10:00] mostly is working in this world. But you do have to manage this issue, which is: how do I create an agent that has access to some of my stuff and somebody else's stuff as well? Mm-hmm. And which parts do I get to look at as the creator of the agent? And these are just brand new problems. Yeah. Crazy. And with humans, when there was a human there, that was really easy to do. Like, if the three of us were all sharing, there'd be a Venn diagram where we'd have an overlapping set of things we've shared, but then we'd have our own ways that we shared with each other. In an agent world, somebody needs to take responsibility for what that agent has access to and what they're working on. These are probably some of the most, you know, boring problems for 98% of people on the internet, but they will be the problems that are the difference between whether you can actually have autonomous agents in an enterprise context...

swyx: Yeah.

Aaron Levie: ...that are not leaking your data constantly.

swyx: No, like, I mean, you know, I run a very, very small company for my conference, and we already have data sensitivity issues.

Aaron Levie: Yes.

swyx: And some of my team members cannot see what the others can. I can't imagine what it's like to run a Fortune 500 where you have to [00:11:00] worry about this. I'm just kinda curious: you talk to a lot, like 70, 80% of the Fortune 500 are your customers.

Aaron Levie: Yep. 67%. Just so we're being very precise.

swyx: So, yeah, I'm not...

Aaron Levie: Okay. Okay.

swyx: Something I'm rounding up. Yes. Round up.
I'm projecting to, for...

Aaron Levie: The government.

swyx: I'm projecting to the end of the year.

Aaron Levie: Okay.

swyx: There you go.

Aaron Levie: You do make it sound like, well, we've gotta be on this. Like, we're taking way too long to get to 80%.

swyx: Well, no, I mean, so like, how are they approaching it? Right? Because you don't have a final answer yet.

Why Coding Agents Took Off First

Aaron Levie: Well, okay, so this is actually the stark reality that, like, unfortunately is kinda pouring the water on the party a little bit.

swyx: Yes.

Aaron Levie: We all in Silicon Valley have, like, the absolute best conditions possible for AI ever. And I think we all saw the Dwarkesh, you know, kind of Dario podcast and this idea of AI coding. Why has that taken off, and why are we not yet fully seeing it everywhere else? Well, look, if you just enumerated the list of properties that AI coding has and then compared it to other [00:12:00] knowledge work, let's just go through a few of them. Generally speaking, you bring on a new engineer, they have access to a large swath of the code base. Like, a new engineer comes on, they can just go and find the stuff that they need to work with. It's a fully text-in, text-out medium; it's just gonna be text at the end of the day. So it's really great from a, uh, you know, kinda what-the-agent-can-work-with standpoint. Obviously the models are super trained on that dataset. The labs themselves have a really strong, kind of self-reinforcing positive flywheel of why they need to do, you know, agent coding deeply.
So then you get just better tooling, better services.The actual developers of the AI are daily users of the, of the thing that they're we're working on versus like the, you know, probably there's only like seven Claude Cowork legal plugin users at Anthropic any given day, but there's like a couple thousand Claude code and you know, users every single day.So just like, think about which one are they getting more feedback on. All day long. So you just go through this list. You have a, you know, everybody who's a [00:13:00] developer by definition is technical so they can go install the latest thing. We're all generally online, or at least, you know, kinda the weird ones are, and we're all talking to each other, sharing best practices, like that's like already eight differences.Versus the rest of the economy. Every other part of the economy has like, like six to seven headwinds relative to that list. You go into a company, you're a banker in financial services, you have access to like a, a tiny little subset of the total data that's gonna be relevant to do your job. And you're have to start to go and talk to a bunch of people to get the right data to do your job because Sally didn't add you to that deal room, you know, folder.And that that, you know, the information is actually in a completely different organization that you now have to go in and, and sort of run into. And it's like you have this endless list of access controls and security. As, as you talked about, you have a medium, which is not, it's not just text, right? You have, you have a zoom call that, that you're getting all of the requirements from the customer.You have a lot of in-person conversations and you're doing in-person sales and like how do you ever [00:14:00] digitize all of that information? 
Um, you know, I think a lot of people got upset with this idea that the code base has all the context, um, that I don't know if you follow, you know, did you follow some of that conversation that that went viral?Is like, you know, it's not that simple that, that the code base doesn't have all the knowledge, but like it's a lot, you're a lot better off than you are with other areas of knowledge work. Like you, we like, we like have documentation practices, you write specifications. Those things don't exist for like 80% of work that happens in the enterprise.That's the divide that we have, which is, which is AI coding has, has just fully, you know, where we've reached escape velocity of how powerful this stuff is, and then we're gonna have to find a way to bring that same energy and momentum, but to all these other areas of knowledge work. Where the tools aren't there, the data's not set up to be there.The access controls don't make it that easy. The context engineering is an incredibly hard problem because again, you have access control challenges, you have different data formats. You have end users that are gonna need to kind of be kind of trained through this as opposed to their adopting [00:15:00] these tools in their free time.That's where the Fortune 500 is. And so we, I think, you know, have to be prepared as an industry where we are gonna be on a multi-year march to, to be able to bring agents to the enterprise for these workflows. And I think probably the, the thing that we've learned most in coding that, that the rest of the world is not yet, I think ready for, I mean, we're, they'll, they'll have to be ready for it because it's just gonna inevitably happen is I think in coding.What, what's interesting is if you think about the practice of coding today versus two years ago. It's probably the most changed workflow in maybe the history of time from the amount of time it's changed, right? Yeah. 
Like, like has any, has any workflow in the entire economy changed that quickly in terms of the amount of change?I just, you know, at least in any knowledge worker workflow, there's like very rarely been an event where one piece of technology and work practice has so fundamentally, you know, changed, changed what you do. Like you don't write code, you talk to an agent and it goes and [00:16:00] does it for you, and you may be at best review it.And even that's even probably like, like largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works. Mm-hmm. All of the economy has to go through that exact same evolution.The rest of the economy is gonna have to update its workflows to make agents effective. And to give agents the context that they need and to actually figure out what kind of prompting works and to figure out how do you ensure that the agent has the right access to information to be able to execute on its work.I, you know, this is not the panacea that people were hoping for, of the agent drops in, just automates your life. Like you have to basically re-engineer your workflow to get the most out of agents and, uh, and that, that's just gonna take, you know, multiple years across the economy. Right now it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this.‘cause [00:17:00] you'll see compounding returns, but that's just gonna take a while for most companies to actually go and get this deployed.swyx: I love, I love pushing back. I think that. That is what a lot of technology consultants love to hear this sort of thing, right? Yeah, yeah, yeah. First to, to embrace the ai. Yes. To get to the promised land, you must pay me so much money to a hundred percent to adopt the prescribed way of, uh, conforming to the agents.Yes. 
And I worry that you will be eclipsed by someone else who says, no, come as you are.

Aaron Levie: Yeah.

swyx: And we'll meet you where you are.

Aaron Levie: And, and what was the thing that went viral a week ago? OpenAI, probably, uh, is hiring FDEs. Yeah. Uh, to go into the enterprise. Yeah. Yeah. And then Anthropic is embedded at Goldman Sachs. Yeah. So if the labs are having to do this, if the labs have decided that they need to hire forward-deployed engineers and professional services, then I think that's a pretty clear indication that there's no easy mode of workflow transformation. Yeah. Yeah. So, to your point, I think actually this is a market opportunity for, you know, new professional services and consulting [00:18:00] firms that are like agent builders, and they kind of, you know, go into organizations and figure out how to re-engineer your workflows to make them more agent-ready, get your data into the right format, and, you know, reconstruct your business process. So you're not doing most of the work; you're telling agents how to do the work, and then you're reviewing it. But I haven't seen the thing that can just drop in and kinda let you not go through those changes.

swyx: I don't know how that kind of sales pitch goes over. Yeah. You know, you're saying things like, well, in my sort of nice, beautiful walled garden, here's this beautiful Box account that has everything.

Aaron Levie: Yes.

swyx: And I'm like, well, most real life is extremely messy. Sure. And, like, poorly named, and there's duplicate, outdated s**t...

Aaron Levie: A hundred percent. And so, no, no, a hundred percent. And so this is actually, no. So, so this is, I mean, we agree that getting to the beautiful garden is gonna be tough.
The agent truly cannot get enough context to make the right decision in the incredibly messy land. Like, there's [00:19:00] no AGI that will solve that. So, so we're gonna have to land somewhere in between, which is, like, we all collectively get better at documentation practices and having authoritative, relatively up-to-date information and putting it in the right place. Like, agents will certainly cause us to be much better organized around how we work with our information, simply because the severity of the agent pulling the wrong data will be too high, and the productivity gain that you'll miss out on by not doing this will be too high as well; your competition will just do it and they'll just have higher velocity. So, uh, and we see this a lot firsthand. So we built a series of agents internally that can kind of have access to your full Box account, and you give it a task and it can go find whatever information you're looking for and work with it. And, you know, thank God for the model progress, but, like, if you gave that task to an agent nine months ago, you're just gonna get lots of bogus answers, because it's gonna say, hey, here are [00:20:00] five, you know, documents that all kind of smell like the right thing. But you're putting me on the clock, 'cause my system prompt says, like, you know, be pretty smart, but also try and respond to the user, and it's gonna respond. And it's like, ah, it got the wrong document. And then you do that once or twice as a knowledge worker and you're just never...

swyx: Again.

Aaron Levie: Never again. You're just, like, done with the system.

swyx: Yeah. It doesn't work.

Aaron Levie: It doesn't work.
And so, you know, Opus 4.6 and Gemini 3.1 Pro and, you know, whatever the latest GPT-5.3 will be, like, those things are getting better and better, and using better judgment. And all of these updates to the agentic tool and search systems... we're seeing very real progress, where the agent can almost smell when something's a little bit fishy. You know, we have this process where we have it go fan out, do a bunch of searches, pull up a bunch of data, and then it has to sort of do its own ranking of, you know, what are the right documents that it should be working with. And again, like, you know, at the intelligence level of a model six months ago, [00:21:00] it'd be just throwing a dart, like, I'm just gonna grab these seven files and I pray, I hope that that's the right answer. And something like an Opus, first 4.5 and now 4.6, is like, oh, no, that one doesn't seem right relative to this question, because I'm seeing some signal that's contradicting the document, where it would normally be in the tree and who should have access. Like, it's doing all of that kind of work for you. But it still doesn't work if you just have a total wasteland of data. It's just not possible, partly 'cause a human wouldn't even be able to do it. So basically, if a really, really smart human could not do that task in five or 10 minutes, for a search-retrieval-type task, look, you know, your agent's not gonna be able to do it any better. You see this all day long. So...

Context Engineering and Search Limits

swyx: This touches on a thing that I'm just passionate about, which is context engineering. I'm just gonna let you ramble or riff on, on context engineering.
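The fan-out-then-rank loop Aaron describes can be sketched in a few lines. This is a hedged toy, not Box's system: `search` and `score` are hypothetical stand-ins (simple word overlap), whereas in the real pipeline the re-ranking step is the model's own judgment.

```python
# Sketch of the fan-out search pattern: run several query variants, pool the
# candidate documents, then re-rank against the original question and keep
# the best. search() and score() are toy stand-ins, not a real retrieval API.

def search(corpus, query):
    # toy retrieval: any document sharing a word with the query
    words = set(query.lower().split())
    return [doc for doc in corpus if words & set(doc.lower().split())]

def score(doc, question):
    # toy ranking: word overlap with the original question
    return len(set(doc.lower().split()) & set(question.lower().split()))

def fan_out_search(corpus, question, variants, k=2):
    candidates = set()
    for q in [question] + variants:  # fan out over query variants
        candidates.update(search(corpus, q))
    ranked = sorted(candidates, key=lambda d: score(d, question), reverse=True)
    return ranked[:k]                # keep only the top-k documents

corpus = [
    "new york office address 123 broadway",
    "london office lease renewal memo",
    "q3 marketing plan draft",
]
top = fan_out_search(corpus, "office address list",
                     variants=["office locations", "hq address"])
print(top[0])  # the New York address doc ranks first on overlap
```

The interesting engineering lives in the two stubbed functions: broadening recall on the fan-out side, and having a strong enough ranking step to notice, as Aaron puts it, when a candidate "doesn't seem right relative to this question."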
If there's anything... like, he did really good work on Context Rot, which has really taken over as the term that people use, and the reference.

Aaron Levie: A hundred percent. All we think about is the context rot problem. [00:22:00]

Jeff Huber: Yeah, there's certainly a lot of, like, ranking considerations. Agentic search, I think, is incredibly promising. Um, yeah, I was trying to generate a question, though. I think I have a question right now, swyx.

Aaron Levie: Yeah, no, but, like, I think there was this moment, um, you know, like, I don't know, two years ago, before we knew where the gotchas were gonna be in AI, and I think someone was like, well, infinite context windows will just solve all of these problems, 'cause you'll just give the context window, like, all the data. And it's just like, okay, I mean, maybe in 2035, like, this is a viable solution. First of all, it would just simply cost too much. Like, we just can't give the model, like, the 5,000 documents that might be relevant and have it read them all. And I've seen enough to start believing in crazy stuff. So, like, I'm willing to just say, sure, like, in 10 years from now...

swyx: Never say never.

Aaron Levie: In 10 years from now, we'll have infinite context windows at a thousandth of the price of today. Like, let's just believe that that's possible. But we're in reality today. So today we have a context engineering [00:23:00] problem, which is: I got, you know, 200,000 tokens that I can work with, or, prob... I don't even know what the latest number is before, like, massive degradation. 60? Okay, I have 60,000 tokens that I get to work with where I'm gonna get accurate information. That's not a lot of tokens for a corpus of 10 million documents that a knowledge worker might have across all of the teams and all the projects and all the people they work with.
I have, I have 10 million documents.Which, you know, maybe is times five pages per document or something like that. I'm at 50 million pages of information and I have 60,000 tokens. Like, holy s**t. Yeah. This is like, how do I bridge the 50 million pages of information with, you know, the couple hundred that I get to work with in that, in that token window.Yeah. This is like, this is like such an interesting problem and that's why actually so much work is actually like, just like search systems and the databases and that layer has to just get so locked in, but models getting better and importantly [00:24:00] knowing when they've done a search, they found the wrong thing, they go back, they check their work, they, they find a way to balance sort of appeasing the user versus double checking.We have this one, we have this one test case where we ask the agent to go find. 10 pieces of information.swyx: Is this the complex work eval?Aaron Levie: Uh, this is actually not in the eval. This is, this is sort of just like we have a bunch of different, we have a bunch of internal benchmark kind of scenarios. Every time we, we update our agent, we have one, which is, I ask it to find all of our office addresses, and I give it the list of 10 offices that we have.And there's not one document that has this, maybe there should be, that would be a great example of the kind of thing that like maybe over time companies start to, you know, have these sort of like, what are the canonical, you know, kind of key areas of knowledge that we need to have. We don't seem to have this one document that says, here are all of our offices.We have a bunch of documents that have like, here's the New York office and whatever. So you task this agent and you, you get, you say, I need the addresses for these 10 offices. Okay. And by the way, if you do this on any, you know, [00:25:00] public chat model, the same outcome is gonna happen. 
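As a sanity check on the back-of-envelope numbers above: 10 million documents at roughly five pages each, against a roughly 60K-token usable budget. The tokens-per-page figure below is an assumed value for illustration; the other numbers come from the conversation.

```python
# Worked check of the back-of-envelope context budget above.
# tokens_per_page is an assumption; the other figures are from the transcript.

docs = 10_000_000        # documents in a knowledge worker's corpus
pages_per_doc = 5
budget_tokens = 60_000   # usable context before quality degrades
tokens_per_page = 400    # assumed average

total_pages = docs * pages_per_doc
pages_in_budget = budget_tokens // tokens_per_page
fraction = pages_in_budget / total_pages

print(f"{total_pages:,} pages in the corpus")   # 50,000,000 pages in the corpus
print(f"{pages_in_budget} pages fit in budget")  # 150 pages fit in budget
print(f"{fraction:.1e} of the corpus per call")  # 3.0e-06 of the corpus per call
```

Under these assumptions a single context window holds on the order of a couple hundred pages out of fifty million, which is the gap the search-and-ranking layer has to bridge.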
But for a different kind of query, you give it, you say, I need these 10 addresses. How many times should the agent go and do its search before it decides whether or not there's just no answer to this question? Often, and especially the, the, let's say, lower-tier models, it'll come back and it'll give you six of the 10 addresses, and it'll just say, I couldn't find the other four.
swyx: It doesn't know what it doesn't know.
Aaron Levie: It doesn't know what it doesn't know. Yeah. So the model is just, like, like, when should it stop? When should it stop doing? Like, should it, should it do that task for literally an hour and just keep cranking through? Maybe I actually made up an office location and it doesn't know that I made it up, and I didn't even know that I made it up. Like, should it just keep, should it read every single file in your entire Box account until it, until it's exhausted every single piece of information?
swyx: Expensive.
Aaron Levie: These are the new problems that we have. So, you know, something like, let's say, a new Opus model is sort of like, okay, I'm gonna try these types of queries. I didn't get exactly what I wanted. I'm gonna try again. I'm gonna, at [00:26:00] some point, I'm gonna stop searching, ‘cause I've determined that no amount of searching is gonna solve this problem. I'm just not able to do it. And that judgment is, like, a really new thing that the model needs to be able to have. It's like, when should it give up on a task? ‘Cause, ‘cause you just, it just can't find the thing. That's the real world of knowledge work problems. And this is the stuff that the coding agents don't have to deal with.
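The stopping judgment Aaron describes, when an agent should give up on a search, can be sketched as a bounded retry loop. Everything here is hypothetical (the `search` callable, the query reformulations, the budget); the point is the explicit give-up condition and the explicit report of what was not found:

```python
from typing import Callable, Optional

def find_address(office: str, search: Callable[[str], Optional[str]],
                 max_attempts: int = 4) -> Optional[str]:
    """Try a few query reformulations, then give up explicitly.

    `search` is a hypothetical retrieval call returning a snippet or None.
    The explicit budget is the point: without one, an agent either quits
    after a single miss or keeps cranking for an hour."""
    queries = [
        f"{office} office address",
        f"{office} office location",
        f"{office} lease agreement",   # indirect documents often hold the answer
        f"{office} mailing address",
    ]
    for query in queries[:max_attempts]:
        hit = search(query)
        if hit is not None:
            return hit
    return None  # give up, and say so, rather than silently dropping the item

# Usage: collect partial results and report which offices were NOT found,
# instead of returning six answers and staying quiet about the other four.
fake_index = {"New York office address": "123 Example St"}  # hypothetical corpus
results = {o: find_address(o, fake_index.get) for o in ["New York", "Tokyo"]}
missing = sorted(o for o, addr in results.items() if addr is None)
print(results, "not found:", missing)
```

The harness-level fix is modest, but it turns "I couldn't find the other four" from a silent omission into an explicit, reviewable output.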
Because they, it just doesn't, like, like, you're not usually asking it about, you're, you're always creating net new information coming right outta the model for the most part. Obviously it has to know about your code base and your specs and your documentation, but, but when you deploy an agent on all of your data, now you have all of these new problems that you're dealing with.
Jeff Huber: Our, uh, follow-up research to context rot is actually on agentic search.
swyx: Ah.
Jeff Huber: Um, and we've, like, sort of stress-tested frontier models and their ability to search. Um, and they're not actually that good at searching. Right. Uh, so you're sort of highlighting this, like, explore-exploit.
swyx: You're just, say, Debbie Downer, say everything doesn't work.
Aaron Levie: Well,
Jeff Huber: somebody has to be.
Aaron Levie: Um, can I just throw out one more thing? Yeah. That is different from coding and, and the rest [00:27:00] of the knowledge work that I, I failed to mention. So one other kind of key point is, is that, you know, at the end of the day, whether you believe we're in a slop apocalypse or, or whatever, at the end of the day, if you, if you've built a working solution, that is ultimately what the customer is paying for. Like, whether I have a lot of slop, a little slop, or whatever, I'm sure there's lots of code bases we could go into in enterprise software companies where it's, like, just crazy slop that humans did over a 20-year period, but the end customer just gets this little interface. They can, they can type into it, it does its thing. Knowledge work, uh, doesn't have that property.
If I have an AI model go generate a contract, and I generate a contract 20 times, and, you know, all 20 times it's just 3% different, like, that, that kind of slop introduces all new kinds of risk for my organization that the code version of that slop didn't, didn't introduce. And so, like, how do you constrain these models to just the part that you want [00:28:00] them to work on and just do the thing that you want them to do? And, and, you know, in engineering, we don't, you can't be disbarred as an engineer, but you could be disbarred as a lawyer. Like, you can do the wrong medical thing in healthcare. There's no, there's no equivalent to that in engineering.
swyx: Do you want there to be? Because I've considered software engineering.
Jeff Huber: What's that? Civil engineering, there is, right?
Aaron Levie: Not software. Civil engineering, sure. Oh yeah, for sure. But, like, in any of our companies, you, like, you know, you'll be forgiven if you took down the site, and, and we, we will do a rollback, and you'll, you'll be in a meeting, but you have not been disbarred as an engineer. We don't, we don't change your, you know, your computer science, uh, degree.
Jeff Huber: Blameless postmortem.
Aaron Levie: Yeah, exactly. Exactly. So, so, uh, now maybe we collectively as an industry need to figure out, like, what are you liable for? Not legally, but, like, in a, in a management sense, uh, of these agents. All sorts of interesting problems that, that, that, uh, that have to come out. But in knowledge work, that's the real hostile environment that we're operating in. Hmm.
swyx: I do think, like, uh, a lot of the last year's, 2025 story was the rise of coding agents, and I think [00:29:00] 2026's story is definitely knowledge work agents.
Aaron Levie: Yes. A hundred percent.
swyx: Right. And I think OpenClaw and Cowork are just the beginning.
Aaron Levie: Yes.
swyx: Like, the next one's just gonna be absolute craziness.
Aaron Levie: It, it is.
And, and, uh, and it's gonna be, I mean, again, like, this is gonna be this, this wave where we, we are gonna try and bring as many of the practices from coding, because that, that will clearly be the forefront, which is: tell an agent to go do something, it has access to a set of resources, and you need to be responsible for reviewing it at the end of the process. That to me is the, is the kind of template that I just think goes across knowledge work. And, uh, Cowork is a great example. OpenClaw is a great example. You can kind of, sort of see what Codex could become over time. These are some, some really interesting kind of platforms that are emerging.
swyx: Okay. Um, I wanted to, we touched on evals a little bit. You had, you had the report that you were gonna bring up, and then I was gonna go into, like, uh, Box's evals, but, uh, go ahead, talk about your agentic search thing.
Jeff Huber: Yeah. Mostly, I think, kind of a few of the insights. It's, like, number one, frontier models are not good at search. Humans have this [00:30:00] natural explore-exploit trade-off where we kind of understand, like, when to stop doing something. Also, humans are pretty good at, like, forgetting, actually, and, like, pruning their own context, whereas agents are not. And actually an agent, in their kind of context history, if they knew something was bad, and you could even see in the reasoning trace, hey, that probably wasn't a good idea: if it's still in the trace, still in the context, they'll still do it again.
swyx: Uh-huh.
Jeff Huber: Uh, and so, like, I think pruning is also gonna be, like, really, it's already becoming a thing, right? But, like, letting agents self-prune the context window is gonna be a big deal.
swyx: Yeah. So, so don't leave the mistake. Don't leave the mistake in there. Cut out the mistake, but tell it that you made a mistake in the past, so it doesn't repeat it.
Jeff Huber: Yeah. But, like, cut it out so it doesn't get, like, distracted by it again.
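The pruning idea Jeff describes, cut the failed attempt out of the context but keep a note that it failed, might look roughly like this. The message shapes (`role`, `name`, `error`, `content`) are simplified stand-ins, not any particular model API:

```python
def prune_failed_attempts(history: list[dict]) -> list[dict]:
    """Replace each failed tool call with a one-line warning note.

    Leaving the full failed trace in context acts like a few-shot example,
    so the model tends to retry it; deleting it entirely loses the lesson.
    """
    pruned = []
    for message in history:
        if message.get("role") == "tool" and message.get("error"):
            pruned.append({
                "role": "system",
                "content": (f"(note: tool call `{message['name']}` failed earlier; "
                            f"do not retry it with the same arguments)"),
            })
        else:
            pruned.append(message)
    return pruned

history = [
    {"role": "user", "content": "Find our Tokyo office address."},
    {"role": "tool", "name": "search", "error": True,
     "content": "0 results for 'Tokyo HQ street'"},
    {"role": "assistant", "content": "Trying a different query..."},
]
print(prune_failed_attempts(history))
```

The failed call is gone from the window, but the lesson survives as a short instruction, so the trace stops working as an accidental few-shot example.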
‘Cause really, you know, it will repeat its mistake just because it's in the
swyx: context.
Aaron Levie: It's in the context so much, that's a few-shot example. Even if it, yeah.
Jeff Huber: It's like, oh, this is a great thing to go try, even if it didn't work.
Aaron Levie: Yeah. It's Groundhog Day inside these models. I'm gonna go keep doing the same wrong thing.
Jeff Huber: In a hand-wavy sense, I feel like, you know, to make a Chroma analogy, you're trying to, like, fit a manifold in latent space, which is kind of doing program synthesis, which is kind of one way we think about what we're doing, right? Like, you know, certain facts might be, like, sort of overfitting it to certain, you know, sectors of latent space.
swyx: And so we have a bell; our editor has a bell every time you say that.
Jeff Huber: So you have, you have to, like, remove those, like,
swyx: You shoulda had a gong, like TBPN or something.
Jeff Huber: You remove those links to, like, kind of give it the freedom to kind of do what you need to do. So, but yeah, we'll, we'll release more soon.
Aaron Levie: That's awesome.
Jeff Huber: That'll, that'll be cool.
swyx: We're a cerebral podcast. People listen to us and, and sort of think really deep. So, yeah, we try to keep it subtle. Okay. We try to keep it.
Aaron Levie: Okay, fine.
Inside Agent Evals
swyx: Um, you, you guys do, you guys do have evals. You talked about your, your office thing, but, uh, you've been also promoting the APEX agents eval and your complex work eval. Uh, yeah, wherever you wanna take this. Just, yeah, how you
So how do lawyers have a workspace? How do investment bankers have a workspace? What kind of data goes into those? And so we, [00:32:00] we partner with them on their, their apex eval.Our own, um, eval is, it's actually relatively straightforward. We have a, a set of, of documents in a, in a range of industries. We give the agent previously did this as a one shot test of just purely the model. And then we just realized we, we need to, based on where everything's going, it's just gotta be more agentic.So now it's a bit more of a test of both our harness and the model. And we have a rubric of a set of things that has to get right and we score it. Um, and you're just seeing, you know, these incredible jumps in almost every single model in its own family of, you know, opus four, um, you know, sonnet four six versus sonnet four five.swyx: Yeah. We have this up on screen.Aaron Levie: Okay, cool. So some, you're seeing it somewhere like. I, I forget the to, it was like 15 point jump, I think on the main, on the overall,swyx: yes.Aaron Levie: And it's just like, you know, these incredible leaps that, that are starting to happen. Um,swyx: and OP doesn't know any, like any, it's completely held out from op.Aaron Levie: This is not in any, there's no public data which has, you know, Ben benefits and this is just a private eval that we [00:33:00] do, and then we just happen to show it to, to the world. Hmm. So you can't, you can't train against it. And I think it's just as representative of. It's obviously reasoning capabilities, what it's doing at, at, you know, kind of test time, compute capabilities, thinking levels, all like the context rot issues.So many interesting, you know, kind of, uh, uh, capabilities that are, that are now improvingswyx: one sector that you have. That's interesting.Industries and Datasetsswyx: Uh, people are roughly familiar with healthcare and legal, but you have public sector in there.Aaron Levie: Yeah.swyx: Uh, what's that? 
Like, what, what, what is that?
Aaron Levie: Yeah, and, and we actually test against, I dunno, maybe 10 industries. We, we end up usually just cutting a few that we think have interesting gains. The public sector one has a lot of, like, government-type documents.
swyx: What is that? What is it, government-type documents?
Aaron Levie: Government filings.
swyx: Like a tax return?
Aaron Levie: Probably not tax returns. It would be more of, what would the government be using, uh, as data. So, okay. Um, so think about research, that, that type of, of, of data sets. And then we have financial services, for things like data rooms and what would be in an investment prospectus.
swyx: Uh-huh. That one you can dogfood.
Aaron Levie: Yeah, exactly. Exactly. Yes. Yes. [00:34:00] So, uh, so we, we run the models, um, in now, you know, more of an agent mode, but, but still with, with kind of limited capacity, and just try and see, like, on a like-for-like basis, what are the improvements. And, and again, we just continue to be blown away by how, how good these models are getting.
swyx: Yeah, I mean, I think every serious AI company needs something like that, where, like, well, this is the work we do, here's our company eval. Yeah. And if you don't have it, well, you're not a serious AI company.
Aaron Levie: There's two dimensions, right? So there's, there's, like, how are the models improving, and so which models should you either recommend a customer use, which one should you adopt? But then every single day, we're making changes to our agents. And you need to know
swyx: if you regressed.
Aaron Levie: If you regressed. Yeah. You know, I've been fully convinced that the whole agent observability and eval space is gonna be a massive space. Um, super excited for what Braintrust is doing, excited for, you know, LangSmith, all the things. And I think what you're going to, I mean, this is like every enter, like, literally every enterprise right now, it's like the AI companies are the customers of these tools.
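A rubric-style eval of the kind Aaron describes, a fixed list of facts the answer must contain, scored per run, reduces to something like this sketch. The rubric items are invented, and substring matching is the crudest possible grader; a real setup would use LLM judges or fuzzier matching:

```python
def score_against_rubric(answer: str, rubric: list[str]) -> float:
    """Fraction of required facts present in the agent's answer.

    Substring matching here is a stand-in for the LLM judges or fuzzy
    matchers a production eval would use."""
    hits = sum(1 for fact in rubric if fact.lower() in answer.lower())
    return hits / len(rubric)

# Hypothetical rubric: the office addresses the agent was asked to find.
rubric = ["123 Example St", "456 Sample Ave"]
answer = "Found New York at 123 Example St; could not find the London office."
print(score_against_rubric(answer, rubric))   # 0.5
```

Run this over a fixed, held-out document set on every agent change and every new model, and you get exactly the regression signal discussed here: did the harness or model update move the score up or down?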
Every enterprise will have this. Yeah, you'll just [00:35:00] have to have an eval.Of all of your work and like, we'll, you'll have an eval of your RFP generation, you'll have an eval of your sales material creation. You'll have an eval of your, uh, invoice processing. And, and as you, you know, buy or use new agentic systems, you are gonna need to know like, what's the quality of your, of your pipeline.swyx: Yeah.Aaron Levie: Um, so huge, huge market with agent evals.swyx: Yeah.Building the Agent Teamswyx: And, and you know, I'm gonna shout out your, your team a bit, uh, your CTO, Ben, uh, did a great talk with us last year. Awesome. And he's gonna come back again. Oh, cool. For World's Fair.Aaron Levie: Yep.swyx: Just talk about your team, like brag a little bit. I think I, I think people take these eval numbers in pretty charts for granted, but No, there, I mean, there's, there's lots of really smart people at work during all this.Aaron Levie: Biggest shout out, uh, is we have a, we have a couple folks at Dya, uh, Sidarth, uh, that, that kind of run this. They're like a, you know, kind of tag tag team duo on our evals, Ben, our CTO, heavily involved Yasha, head of ai, uh, you know, a bunch of folks. And, um, evals is one part of the story. And then just like the full, you know, kind of AI.An agent team [00:36:00] is, uh, is a, is a pretty, you know, is core to this whole effort. So there's probably, I don't know, like maybe a few dozen people that are like the epicenter. And then you just have like layers and layers of, of kind of concentric circles of okay, then there's a search team that supports them and an infrastructure team that supports them.And it's starting to ripple through the entire company. 
But there's that kind of core agent team, um, that's a pretty, pretty close, uh, close knit group.swyx: The search team is separate from the infra team.Aaron Levie: I mean, we have like every, every layer of the stack we have to kind of do, except for just pure public cloud.Um, but um, you know, we, we store, I don't even know what our public numbers are in, you know, but like, you can just think about it as like a lot of data is, is stored in box. And so we have, and you have every layer of the, of the stack of, you know, how do you manage the data, the file system, the metadata system, the search system, just all of those components.And then they all are having to understand that now you've got this new customer. Which is the agent, and they've been building for two types of customers in the past. They've been building for users and they've been building for like applications. [00:37:00] And now you've got this new agent user, and it comes in with a difference of it, of property sometimes, like, hey, maybe sometimes we should do embeddings, an embedding based, you know, kind of search versus, you know, your, your typical semantic search.Like, it's just like you have to build the, the capabilities to support all of this. And we're testing stuff, throwing things away, something doesn't work and, and not relevant. It's like just, you know, total chaos. But all of those teams are supporting the agent team that is kind of coming up with its requirements of what, what do we need?swyx: Yeah. No, uh, we just came from, uh, fireside chat where you did, and you, you talked about how you're doing this. It's, it's kind of like an internal startup. Yeah. Within the broader company. The broader company's like 3000 people. Yeah. But you know, there's, there's a, this is a core team of like, well, here's the innovation center.Aaron Levie: Yeah.swyx: And like that every company kind of is run this way.Aaron Levie: Yeah. I wanna be sensitive. I don't call it the innovation center. 
Yeah. Only because I think everybody has to do innovation. Um, there, there's a part of the, the, the company that is, is sort of do or die for the agent wave.swyx: Yeah.Aaron Levie: And it only happens to be more of my focus simply because it's existential that [00:38:00] we get it right.swyx: Yeah.Aaron Levie: All of the supporting systems are necessary. All of the surrounding adjacent capabilities are necessary. Like the only reason we get to be a platform where you'd run an agent is because we have a security feature or a compliance feature, or a governance feature that, that some team is working on.But that's not gonna be the make or break of, of whether we get agents right. Like that already exists and we need to keep innovating there. I don't know what the right, exact precise number is, but it's not a thousand people and it's not 10 people. There's a number of people that are like the, the kind of like, you know, startup within the company that are the make or break on everything related to AI agents, you know, leveraging our platform and letting you work with your data.And that's where I spend a lot of my time, and Ben and Yosh and Diego and Teri, you know, these are just, you know, people that, that, you know, kind of across the team. Are working.swyx: Yeah. Amazing.Read Write Agent WorkflowsJeff Huber: How do you, how do you think about, I mean, you talked a lot about like kinda read workflows over your box data. Yep.Right. You know, gen search questions, queries, et cetera. But like, what about like, write or like authoring workflows?Aaron Levie: Yes. I've [00:39:00] already probably revealed too much actually now that I think about it. So, um, I've talked about whatever,Jeff Huber: whatever you can.Aaron Levie: Okay. It's just us. It's just us. Yeah. Okay. 
Of course, of course. So I, I guess I would just, uh, I'll make it a little bit conceptual, uh, because again, I've already, I've already said things that are not even GA, but, but we've, we've kinda, like, danced around it publicly, so, yeah, yeah. Okay. Just, like, hopefully nobody watches this, um, episode. No.
swyx: It's tidbits for the highly engaged to go figure out, like, what exactly, um, you know, is, is your sort of line of thinking. Sure. They can connect the dots.
Aaron Levie: Yeah. So, so I would say that, that, uh, we, you know, as a, as a place where you have your enterprise content, there's a use case where I want to, you know, have an agent read that data and answer questions for me. And then there's a use case where I want the agent to create something, and use the file system to create something, or store off data that it's working on, or be able to have, you know, various files that it's writing to about the work it's doing. So we do see it as a total read-write. The harder problem has so far been the read, only because, because again, you have that kind of, like, 10 [00:40:00] million to one ratio problem, whereas writes, a lot of that's just gonna come from the model, and, and we just, like, we'll just put it in the file system and kinda use it. So it's a little bit of a technically easier problem. But the one part that's, like, not necessarily technically hard, it's just, like, not yet perfected in the state of the ecosystem, is, you know, building a beautiful PowerPoint presentation. It's still a hard problem for these models. Like, like, we still, you know, like, these formats we're just not built for.
swyx: They're working on it.
Aaron Levie: They're, they're working on it. Everybody's working on it.
swyx: Every launch is like, well, we do PowerPoint now.
Aaron Levie: We're getting, yeah, getting a lot, getting a lot better each time.
But then you'll do this thing where you'll ask it to update one slide, and all of a sudden, like, the fonts will be just, like, a little bit different, you know, on two of the slides, or it moved, you know, some shape over to the left a little bit. And again, these are the kind of things that, like, in code, obviously you could really care about, if you really care about, you know, how beautiful the code is, but the end user doesn't notice all those problems. In file creation, the end user instantly sees it. You're [00:41:00] like, ah, like, paragraph three, you literally just changed the font on me. Like, it's a totally different font, and, like, midway through the document. Mm-hmm. Those are the kind of things that you run into a lot of on the, on the content creation side. So, mm-hmm, we are gonna have native agents that do all of those things. They'll be powered by the leading kind of models and labs. But the thing that I think is, is probably gonna be a much bigger idea over time is any agent on any system, again, using Box as a file system for its work. And in that kind of scenario, we don't necessarily care what it's putting in the file system. It could put its memory files, it could put its, you know, specification, you know, documents. It could put, you know, whatever its markdown files are, or it could, you know, generate PDFs. It's just, like, it's a workspace that is, is sort of sandboxed off for its work. People can collaborate into it, it can share with other people. And, and so we, we were thinking a lot about what's the right, you know, kind of way to, to deliver that at scale.
Docs Graphs and Founder Mode
swyx: I wanted to come into the sort of AI transformation or AI sort of, uh, operations things. [00:42:00] Um, one of the tweets that you, that you wanted to talk about, this is just me going through your tweets, by the way.
Aaron Levie: Oh, okay.
I mean, like, this is, you read
Aaron Levie: one by one,
swyx: You're the, you're the easiest guest to prep for, because you, you already have, like, this is the, this is what I'm interested in. I'm like, okay, well,
Aaron Levie: Are we gonna get to, like, like, February, January or something? Where are we in the, in the timelines? How far back are we going?
swyx: Can you, can you describe Box as a set of skills? Right? Like, that, that's like, that's like one of the extremes of, like, well, you just turn everything into a markdown file. Yeah. Then your agent can run your company. Uh, like, you just have to write, find the right sequence of words to
Aaron Levie: Yes.
swyx: To do it.
Aaron Levie: Sorry, is that the question?
swyx: So I think the question is, like, what if we documented everything?
Aaron Levie: Yes.
swyx: The way that you exactly said, like,
Aaron Levie: Yes.
swyx: Um, let's get all the Fortune 500s, uh, prepared for agents.
Aaron Levie: Yes.
swyx: And, like, you know, everything's golden and, and nicely filed away and everything.
Aaron Levie: Yes.
swyx: What's missing? Like, what's left, right? Like,
Aaron Levie: Yeah.
swyx: You've, you've run your company for a decade. Like,
Aaron Levie: Yeah. I think the challenge is that, that that information changes a week later, because something happened in the market for that [00:43:00] customer, or us as a company, that now has to go get updated. And so these systems are living and breathing, and they have to experience reality and updates to reality, which right now is probably gonna be humans, you know, kinda giving those, giving them the updates. And, you know, there is this piece about context graphs, uh, that kinda went very viral.
swyx: Yeah.
Aaron Levie: And I, I, I thought it was super provocative. I agreed with many parts of it. I disagree with a few parts around.
You know, it's not gonna be as easy as just, if we just had the agent traces, then we can finally do that work, because there's just, like, there's so much more other stuff that's happening that, that we haven't been able to capture and digitize. And I think they actually represented that in the piece, to be clear. But, like, there's just a lot of work, you know, that, that has to, you just can't have only skills files, you know, for your company, because it's just gonna be, like, there's gonna be a lot of other stuff that happens. Yeah. Change over time. Yeah.
Jeff Huber: Most companies are practically apprenticeships.
swyx: Most companies are practically apprenticeships?
Jeff Huber: Like, every new employee who joins the team, [00:44:00] like, you spend one to three months, like, ramping them up.
Aaron Levie: Yes.
Jeff Huber: All that tacit knowledge
Aaron Levie: is
Jeff Huber: not written down.
Aaron Levie: Yes.
Jeff Huber: But, like, it would have to be if you wanted to, like, give it to an agent. Right. And so, like, that seems to me to be
Aaron Levie: One is, I think you're gonna see, again, a premium on companies that can document this. Mm-hmm. There'll be a huge premium on that because, because, you know, can you shorten that three-month ramp cycle to a two-week ramp cycle? That's an instant productivity gain. Can you dramatically reduce rework in the organization because you've documented where all the stuff is and where the answers are? Can you make your average employee as good as your 90th percentile employee because you've captured the knowledge that's sort of in the heads of, of those top employees and made that available? So, like, you can see some very clear productivity benefits. Mm-hmm.
If you had a company culture of making sure, you know, your information was captured, digitized, put in a format that was agent-ready, and then made available to agents to work with. And then you just, again, have this reality of, like, at a 10,000-person [00:45:00] company, mapping that to the, you know, access structure of the company is just a hard problem. It's like, yeah, well, not every piece of information that's digitized can be shared with everybody. And so now you have to organize that in a way that actually works. There was a pretty good piece, um, this, this, uh, this piece called Your Company Is a File System. Did you see that one?
swyx: Nope.
Aaron Levie: Uh, yes. You saw it. Yeah. And, and, uh, I'd actually be curious your thoughts on it. Um, like, like, an interesting kind of, like, we, we agree with it because, because that's how we see the world, and, uh,
swyx: Okay. We, we have it up on screen.
Aaron Levie: Oh, okay. Yeah. But, but it's all about basically, like, you know, we've already, we, we, we already organize in this kind of, like, you know, permission-structure way. Uh, and, and these are the kind of, you know, natural ways that, that agents can now work with data. So it's kind of, like, this, this, you know, kind of interesting metaphor. But I do think companies will have to start to think about how they start to digitize more, more of that data. What was your take?
Jeff Huber: Yeah, I mean, like, the company's probably, like, an ACID-compliant file system.
Aaron Levie: Uh,
Jeff Huber: yeah. Which I'm guessing Box is, right? So, yeah. Yes.
swyx: Yeah. [00:46:00]
Jeff Huber: Which you have a great piece on, but,
swyx: uh, yeah. Well, uh, I, my, my direction is a little bit, like, I wanna rewind a little bit to the graph word you said there. That's a magic trigger word for us. I always ask, what's your take on knowledge graphs? Yeah. Uh, ‘cause, especially every database person, I just wanna see what they think.
There's been knowledge graph hype cycles, and you've seen it all. So.
Aaron Levie: Hmm. I actually am not the expert in knowledge graphs, so, so that you might need to research.
swyx: You don't need to be an expert. Yeah. I think it's just, like, well, how, how seriously do people take it? Yeah. Like, is, is, is there a lot of potential in the, in the whole idea?
Aaron Levie: Uh, well, can I, can I, uh, understand first if it's, um, is this a loaded question, in the sense of, are you super pro, super con, super anti, medium?
swyx: I see pros, I see pros and cons. Okay. Uh, but I, I think your opinion should be independent of mine.
Aaron Levie: Yeah. No, no, totally. Yeah. I just want to see what I'm stepping into.
swyx: No, I know. And it's a huge trigger word for a lot of people out there in our audience. And they're, they're trying to figure out, why is that?
Aaron Levie: Why is this such a
swyx: hot item for them? Because a lot of people get graph religion. And they're like, everything's a graph. Of course you have to represent it as a graph. Well, [00:47:00] how do you solve your knowledge, um, changing over time? Well, it's a graph.
Aaron Levie: Yeah.
swyx: And, and I think there, there's that line of work, and then there's, there's a lot of people who are like, well, you don't need it. And both are right.
Aaron Levie: Yeah. And what do the people who say you don't need it, what are they arguing for?
swyx: Markdown files. Oh, sure, sure. Simplicity.
Aaron Levie: Yeah.
swyx: Versus, it's, it's structure versus less structure. Right. That's, that's all what it is. I do.
Aaron Levie: I think the tricky thing is, um, is, is again, when this gets met with real humans, they're just going to their computer. They're just working with some people on Slack or Teams. They're just sharing some data through a collaborative file system and Google Docs or Box or whatever.
I certainly like the vision of most, most knowledge graph, you know, kind of futuristic kind of ways of thinking about it. Uh, it's just, like, you know, it's 2026. We haven't seen it yet kind of play out. I mean, I remember, um, actually, I don't even know how old you guys are, but, to show my age, I remember 17 years ago, everybody thought enterprises would just run on [00:48:00] wikis. Yeah. And, uh, Confluence, and, and not even, I mean, Confluence actually took off for engineering, for sure. Like, unquestionably. But, like, this was like, everything would be in the wiki. And I think, based on our, uh, our, uh, general style of, of, of what we were building, like, we were just like, I don't know, people just, like, want a workspace. They're gonna collaborate with other people.
swyx: Exactly. Yeah. So you were, you were anti-knowledge graph.
Aaron Levie: Not anti, not anti. So
swyx: not anti.
Aaron Levie: I'm not, I'm not anti. ‘Cause I think, I think your search system, I just think these are two systems that probably... but, like, I'm, I'm not in any religious war. I don't want to be in anybody's YouTube comments on this. There's not a fight for me.
swyx: We, we love YouTube comments. We're, we're, we're gonna get into the comments.
Aaron Levie: Okay. Uh, but, like, but I, I, it's mostly just a virtue of what we built. Yeah. And we just continued down that path. Yeah.
swyx: Yeah.
Aaron Levie: And, um, and that, that was what we pursued. But I'm not, this is not a, you know, kind of, this is not a, uh,
swyx: it's not existential for you. Great.
Aaron Levie: We're happy to plug into somebody else's graph. We're happy to feed data into it. We're happy for [00:49:00] agents to, to talk to multiple systems. Not, not our fight.
swyx: Yeah.
Aaron Levie: But I need your answer. Yeah.
Graphs are nerd snipes. It's very effective nerd sniping.
swyx: See, this is, this is one, one opinion, and then I've,
Jeff Huber: And I think that the actual graph structure is emergent in the mind of the agent.
Aaron Levie: Ah.
Jeff Huber: In the same way it is in the mind of the human. And that's a more powerful graph, ‘cause it actually evolved over time.
swyx: So don't tell me how to graph. I'll, I'll figure it out myself. Exactly. Okay. All right.
Jeff Huber: And what's yours?
swyx: I like the, the wiki approach. Uh, my, I'm actually

Where It Happens
Claude Code marketing masterclass [from idea to making $$]

Where It Happens

Play Episode Listen Later Mar 2, 2026 54:06


I sit down with Cody Schneider, growth engineer and co-founder of Graphed, for a live, hands-on crash course in GTM (go-to-market) engineering powered by Claude Code. Cody walks through how he runs multiple AI agents simultaneously to handle everything from bulk Facebook ad creation and LinkedIn outreach to cold email campaigns and live data analysis — tasks that used to require a team of dozens. By the end of the episode, you'll have a full understanding of how to set up your own agent workflow, the specific tools involved, and why domain expertise paired with AI is the real competitive advantage right now.

Cody's GTM Toolkit:
- AI/Agent Tools: Claude Code, Perplexity API, OpenAI Codex
- Marketing & Outreach: Instantly AI (cold email), Phantom Buster (LinkedIn scraping/automation), Apollo API (data enrichment), Million Verifier (email verification), Raphonic (podcast host scraping)
- Advertising: Facebook Ads API, Facebook Ads Library (competitor research), Nano Banana Pro (AI image generation), Kai AI (bulk image generation), HeyGen API (UGC/video generation)
- Infrastructure & Deployment: Railway.com (servers, on-the-fly databases/Postgres), Vercel (deployment)
- Data & Analytics: Graphed / Graphed MCP (data warehouse, live data feeds), Google Analytics 4
- CRM & Communication: Salesforce (mentioned as comparison), Intercom, SendGrid API, Slack, Cal.com API
- Productivity & Design: Notion, Super Whisper (voice transcription), Claude Code front-end design skill, HTML to Canvas (for converting React components to PNGs)

Timestamps
00:00 – Intro
02:02 – What Is GTM Engineering?
05:12 – Setting Up Your Agent Workspace & Environment File
07:54 – Live Demo: LinkedIn Auto-Responder
09:56 – Live Demo: Bulk Facebook Ad Generator
12:31 – Live Demo: Cold Email Campaign Automation (Raphonic + Instantly)
14:47 – Live Demo: Creating Notion Documents via Claude Code
16:46 – Live Demo: Bulk Ad Creative Generator
26:05 – Live Demo: LinkedIn Engagement Scraper to Cold Email Pipeline
28:16 – Context Switching Across Tasks
29:19 – Live Demo: Bulk Ad Generator
31:41 – Live Demo: Data Analysis: Turning Off Low-Performing Ads
35:28 – Summary of GTM Engineering Workflow
37:48 – Deploying Agents and On-the-Fly Databases with Railway for Data Analysis
41:28 – The Dream of Autonomous Marketing
48:50 – Building API-First Products and Agent-Native Infrastructure

Key Points
- GTM engineering has evolved from Clay-style data enrichment workflows into full-stack agent orchestration — where one person running multiple Claude Code agents can replace the output of a large team.
- The practical setup starts with a single folder containing your environment file (API keys for every tool in your stack), transcription software like Super Whisper, and Claude Code.
- Cody demonstrates running seven or more agents simultaneously across LinkedIn outreach, Facebook ad creation, cold email campaigns, Notion document generation, and live data dashboards.
- Code-generated ad creative (React components exported as PNGs) costs nearly nothing to produce at scale and allows rapid testing of messaging variations before investing in polished visuals.
- Deploying proven workflows to Railway turns one-off agent tasks into always-on, autonomous processes that run 24/7.
- Domain expertise is the real multiplier — the vocabulary you bring from your field determines the quality of output you can extract from these tools.

The #1 tool to find startup ideas/trends - https://www.ideabrowser.com
LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox.
We turn 'what if' into reality with AI, apps, and next-gen products https://latecheckout.agency/ The Vibe Marketer - Resources for people into vibe marketing/marketing with AI: https://www.thevibemarketer.com/ FIND ME ON SOCIAL X/Twitter: https://twitter.com/gregisenberg Instagram: https://instagram.com/gregisenberg/ LinkedIn: https://www.linkedin.com/in/gisenberg/ FIND CODY ON SOCIAL: Cody's startup: https://www.graphed.com/ X/Twitter: https://x.com/codyschneiderxx Youtube: https://www.youtube.com/@codyschneiderx
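The bulk-variant idea described in the episode, generating many cheap messaging combinations to test before investing in polished creative, can be sketched in a few lines. This is a hypothetical illustration: the headline and hook strings are invented, and a real workflow would feed the variants through the React-to-PNG rendering step mentioned in the toolkit.

```python
from itertools import product

# Invented example copy; in practice an agent would draft these.
HEADLINES = ["Ship campaigns in hours", "Your marketing team of one", "Agents that never sleep"]
HOOKS = ["Stop paying agencies", "Automate the busywork", "Test 50 messages by Friday"]

def make_variants(headlines, hooks):
    """Cross every headline with every hook to get a grid of testable ad variants."""
    return [
        {"id": f"ad_{i:03d}", "headline": h, "hook": k}
        for i, (h, k) in enumerate(product(headlines, hooks))
    ]

variants = make_variants(HEADLINES, HOOKS)
print(len(variants))  # 9 variants from a 3x3 grid
```

Each variant dict would then be handed to the rendering and upload steps; the point is that the combinatorial copy-testing stage itself costs almost nothing.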

The Ravit Show
Data + AI: Reltio's Big DataDriven Announcements (Agent Flow, Intelligent Data Graph)

The Ravit Show

Play Episode Listen Later Feb 26, 2026 12:54


Most companies think they are building data platforms. What they are actually building is the foundation for AI agents to make decisions!!!! At DataDriven 2026, I sat down with Manish Sood, Founder and CEO of Reltio, and our conversation made one thing clear: the role of data platforms is changing fast.

Reltio just crossed $185M in ARR with strong growth, but the bigger story is how they are redefining master data management for the AI era. Instead of focusing only on data storage, they are pushing toward Context Intelligence, where AI systems operate on real-time, curated, and governed data.

We discussed their new AgentFlow approach, where prebuilt AI agents automate stewardship work like match resolution, profiling, and data exploration. This moves data teams from manual cleanup to intelligent automation.

We also talked about speed and access. Their Lightspeed data delivery network aims to make enterprise data globally available in milliseconds so customer-facing and agentic AI systems can actually function in real time.

What stood out from customers at the event is simple:
- The conversation is no longer about adopting AI
- It is about whether their data foundation can support it

Sharing the full interview here where Manish breaks down the announcements, the strategy behind AgentFlow, and what enterprises should be preparing for next.

#data #ai #datadriven #reltio #theravitshow

Humans of Martech
208: Anthony Rotio: Exploring causal context graphs and machine customers, starting in retail media networks

Humans of Martech

Play Episode Listen Later Feb 24, 2026 58:53


What's up folks, today we have the pleasure of sitting down with Anthony Rotio, Chief Data Strategy Officer at GrowthLoop.

(00:00) - Intro
(01:10) - In this episode
(04:05) - Journeying From Robotics to Modern Marketing Systems
(11:05) - Most Marketing Systems Don't Learn Because They Lack Feedback Loops
(16:10) - The Martech Engineering Talent Gap
(19:51) - AI Will Amplify Whoever Has the Cleanest Causal Feedback Loop
(29:17) - Agent Context Graphs for Drift Detection in Marketing Systems
(31:51) - Humans Will Set Hypotheses, AI Will Accelerate Iteration
(35:50) - The Evolution of Retail Media Networks
(45:07) - How Commerce Networks Redefine Targeting With Governed Data
(48:26) - How Agent to Agent Commerce Operates Inside Marketing Funnels
(53:04) - Google Universal Commerce Protocol Explained
(54:43) - Personal Happiness System
(56:30) - Favorite Books

Summary: Anthony traces a path from robotics and computer science to his current role where he approaches marketing as an engineering system. He explains how execution-first marketing stacks weaken feedback loops and fragment data, which slows learning and iteration. He introduces the agent context graph as a causality model that lets AI simulate and predict customer behavior with greater confidence. The conversation also covers retail media networks, first-party data monetization through governed access, and a shift toward zero-to-zero marketing driven by agent-to-agent transactions. He closes by stressing that strong data foundations determine who can compete as marketing becomes more automated and agent-driven.

About Anthony
Anthony Rotio is the Chief Data Strategy Officer at GrowthLoop, where he leads partnerships and builds generative AI product features for marketers, including multi-agent systems, AI-driven audience building, and benchmarking and evaluation work.
He previously served as GrowthLoop's Chief Customer Officer, where he built and led teams across data engineering, data science, and solutions architecture while supporting product development and strategic sales efforts.

Before GrowthLoop, Anthony spent nearly six years at AB InBev, where he led a $100M owned retail business unit with full P&L responsibility and drove major growth through operational and digital transformation work. He also led U.S. marketing for Budweiser, Bud Light, Michelob Ultra, Stella Artois, and other brands across music, food, and related consumer programs. He earned a B.A. in computer science from Harvard, played linebacker on the Harvard football team, founded the consumer product Pizza Shelf, and holds a Google Professional Cloud Architect certification.

Journeying From Robotics to Modern Marketing Systems
Anthony's career started far away from marketing. He trained as a computer scientist and spent his early years working with robotics and reinforcement learning. His first exposure to a learning agent left a lasting impression because the system behaved less like traditional software and more like something adaptive. That experience shaped how he would later think about work, systems, and feedback. He learned early that progress comes from loops that learn, not static instructions.

That mindset followed him into an unexpected chapter at AB InBev. Anthony entered a world defined by scale, brands, and operational complexity. He treated his technical background like a carpenter treats tools, useful only when applied to real problems. Running marketing across major beer brands taught him how value is created inside large organizations. It also exposed a recurring issue. Marketing teams had ambition and data, but execution moved slowly because ideas had to travel through layers of translation before anything reached customers.

That friction became impossible to ignore. Audience definitions moved through tickets. Campaigns waited on queries. Data teams became bottlenecks through no fault of their own. Anthony felt the pull back toward technology, where systems could shorten the distance between intent and action. That pull led him to GrowthLoop, where he joined early and worked directly with customers. The appeal was immediate. The product connected straight to cloud data and removed several layers of mediation that marketing teams had accepted as normal.

As language models improved, Anthony recognized a familiar pattern. Audience building behaved like a translation problem. Marketers described people and intent in natural language, while systems demanded structured logic. Early experiments showed that natural language models could close that gap. Anthony framed the idea clearly.

"Audience building is a translation problem. You start with a business idea and you end with a query on top of data."

Momentum followed quickly. Customers like Indeed and Google responded because speed changed behavior. Teams experimented more often and refined audiences based on results instead of assumptions. Conversations with Sam Altman and collaboration with OpenAI reinforced that this capability belonged in the core workflow. Standing on stage at Google Cloud Next marked a clear moment of validation.

That arc reshaped Anthony's role into Chief Data Strategy Officer. His work now focuses on building systems that learn over time. Faster audience creation leads to shorter feedback loops. Shorter loops improve decision quality. Better decisions compound. The throughline from robotics to marketing holds steady. Systems improve when learning sits at the center of execution.

Key takeaway: Career leverage often comes from carrying one mental model across multiple domains. Anthony applied learning systems thinking from computer science to enterprise marketing, then rebuilt the tooling to match that mindset. You can do the same by identifying where translation slows your work, then designing interfaces that move intent directly into action. When feedback loops tighten, progress accelerates naturally.

Most Marketing Systems Don't Learn Because They Lack Feedback Loops
Marketing organizations generate enormous amounts of activity, but learning often lags behind execution. Campaigns launch on schedule, dashboards fill with numbers, and post-campaign reviews happen right on time. The pattern repeats month after month with small adjustments and familiar explanations. Over time, teams become highly efficient at producing output while remaining surprisingly weak at retaining knowledge. The system rewards motion, visibility, and short-term lifts, which slowly conditions teams to forget what they learned last quarter.

Anthony connects this behavior to structural pressure inside large organizations. Quarterly reporting cycles dominate priorities, and executive tenures continue to compress. Leaders feel urgency to show impact quickly and publicly. Compounding growth requires early patience and repeated reinforcement, which rarely aligns with board expectations or career incentives. Short time horizons shape long-term behavior, even when everyone agrees that learning should stack over time.

"When you think about compound interest in finance, the early part looks almost linear. People want big bumps now, even if those bumps never build momentum."

Technology choices deepen the problem. Many companies invested heavily in customer data and built impressive data clouds that capture transactions, events, and engagement in detail. Activation remains slow because teams still rely on handoffs between marketing and data groups. A familiar sequence plays out:
- A marketer defines a campaign and requests an audience.
- A ticket moves to a data team for interpretation and SQL.
- The audience returns weeks later.
- The marketer realizes the audience lacks scale for ne...

Crazy Wisdom
Episode #533: The Universe Doing Its Thing: AI Evolution Is Already Here

Crazy Wisdom

Play Episode Listen Later Feb 20, 2026 73:51


In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Markus Buehler, the McAfee Professor of Engineering at MIT, to explore how seemingly different systems—from proteins and music to knowledge structures and AI reasoning—share underlying patterns through hierarchy, self-organization, and scale-free networks. The conversation ranges from the limits of current AI interpolation versus true discovery (using the fire-to-fusion example), to the emergence of agent swarms and their non-linear effects, to practical questions about ontologies, knowledge graphs, and whether humans will remain necessary in the creative discovery process. Markus discusses his lab's work automating scientific discovery through AI agents that can generate hypotheses, run simulations, and even retrain themselves, while Stewart shares his own experiences building applications with AI coding agents and grapples with questions about intellectual property, material science constraints, and the future of human creativity in an AI-abundant world.

Timestamps
00:00 - Introduction to Markus Buehler's work on knowledge graphs, structural grammar across proteins, music, and AI reasoning
05:00 - Discussion of AI discovery versus interpolation, using fire and fusion as examples of fundamental versus incremental innovation
10:00 - Language models as connective glue between agents, enabling communication despite imperfect outputs and canonical averaging
15:00 - Embodiment and agency in AI systems, creating adversarial agents that challenge theories and expand world models
20:00 - Emergent properties in materials and AI, comparing dislocations in metals to behaviors in agent swarms
25:00 - Human role-playing and phase separation in society, parallels to composite materials and heterogeneity
30:00 - Physical world challenges, atom-by-atom manufacturing at MIT.nano, limitations of lithography machines
35:00 - Synthetic biology as alternative to nanotechnology, programming microorganisms for materials discovery
40:00 - Intellectual property debates, commodification of AI models, control layers more valuable than model architecture
45:00 - Automation of ontologies, agent self-testing, daughter's coding success at age 11
50:00 - Graph theory for knowledge compression, neurosymbolic approaches combining symbolic and neural methods
55:00 - Nonlinear acceleration in AI, emergence from accumulated innovations, restaurant owner embracing AI
01:00:00 - Future generations possibly rejecting AI, democratization of knowledge, social media as real-time scientific discourse

Key Insights
1. Universal Patterns Across Disciplines: Seemingly different systems in nature—proteins, music, social networks, and knowledge itself—share fundamental structural patterns including hierarchy, self-organization, and scale-free networks. This commonality allows creative thinkers to draw insights across disciplines, applying principles from one domain to solve problems in another. As an engineer and materials scientist, Buehler has leveraged these isomorphisms to advance scientific understanding by mapping the "plumbing" of different systems onto each other, revealing hidden relationships that enable extrapolation beyond what's observable in any single domain.

2. The Discovery Versus Interpolation Problem: Current AI systems, particularly large language models, excel at interpolation—recombining existing knowledge in new ways—but struggle with genuine discovery that requires fundamental rewiring of world models. Using the example of fire versus fusion, Buehler explains that an AI trained on combustion chemistry would propose bigger fires or new fuels, but couldn't conceive of fusion because that requires stepping back to more fundamental physics. True discovery demands the ability to recognize when existing theories have boundaries and to develop entirely new frameworks, something current AI architectures aren't designed to achieve due to their training objective of predicting the most likely outcome.

3. The Role of Ontologies and Knowledge Graphs: While some AI researchers argue that ontologies are unnecessary because models form internal representations, Buehler advocates for explicit knowledge graphs as essential discovery tools. External ontologies provide sharp, analytical, symbolic representations that complement the fuzzy internal representations of neural networks. They enable verification of rare connections—like obscure papers that might hold key insights—which would be averaged away in standard AI training. This neurosymbolic approach combines the generalization capabilities of neural networks with the precision of formal knowledge structures, creating more powerful discovery systems.

4. Emergent Properties and Agent Swarms: Just as materials science shows that collections of atoms exhibit properties impossible to predict from individual components, AI agent swarms demonstrate emergent behaviors beyond single models. When agents are incentivized not just to answer questions but to challenge each other adversarially, propose theories, and test hypotheses, they can spawn new copies of themselves and evolve understanding beyond their initial programming. This emergence isn't surprising from a materials science perspective—dislocations, grain boundaries, and other collective phenomena only appear at scale, fundamentally determining material behavior in ways unpredictable from studying just a few atoms.

5. The Commoditization of Intelligence: The fundamental AI models themselves are becoming commodities, as evidenced by events like the Moldbug phenomenon where people built agents using various providers interchangeably. The real value is shifting from who has the smartest model to how models are orchestrated, integrated, and deployed. This parallels historical technology adoption patterns—just as we moved past debating who makes the best electricity to focusing on applications, AI is transitioning from a horse race over model capabilities to questions of infrastructure, energy, access speed, and agent coordination at the systems level.

6. Human-AI Collaboration and Creative Control: Rather than wholesale replacement, AI enables humans to operate in an intensely creative space as orchestrators sampling from vast possibility spaces. Similar to how Buehler's 11-year-old daughter now builds sophisticated applications that would have required professional developers years ago, AI democratizes access to capabilities while humans retain the creative judgment about direction and meaning. The human role becomes curating emergence, finding rare connections, playing at the edges of knowledge, and exercising the kind of curiosity-driven exploration that AI systems lack without embodied stakes in their own survival and continuation.

7. Technology as Evolutionary Inevitability: The development of AI represents not an unnatural threat but the next stage of human evolution—an extension of our innate drive to build models of ourselves and our world. From cave paintings to partial differential equations to artificial intelligence, humans continuously create increasingly sophisticated representations and tools. Attempting to stop this technological evolution is futile; instead, the focus should be on steering it ...

Talking About Birds: A St. Louis Cardinals Podcast

Ben Clemens of FanGraphs joins the show to break down where the 2026 St. Louis Cardinals are headed, from win expectations and outfield needs to Chaim Bloom's “down to the studs” rebuild and the strategy behind recent trades and draft-pick flexibility. We dive into the upside-heavy approach highlighted by Jurrangelo Cijntje and other recent additions, react to FanGraphs' updated Cardinals prospect rankings, see a sneak preview of a new feature on FanGraphs, and wrap with a quick spin around the latest league news.

Have a question or comment for the show? Text or leave us a voicemail at: (848) 48-BIRDS (848-482-4737)

Talking About Birds is listener supported on Patreon. Support the show and join our private discord server at: www.patreon.com/talkingaboutbirds.

Web3 CMO Stories
How AI Agents Will Spend, Earn, And Prove Trust On Blockchain Rails | S6 E09

Web3 CMO Stories

Play Episode Listen Later Feb 19, 2026 27:10 Transcription Available


Imagine an autonomous agent that dreams up a business, raises funds, ships code, and starts earning—all without a human in the loop. That's no longer sci‑fi. We sit down with Rodrigo Coelho to map the rails that make it plausible: reliable blockchain data, open payment standards, and human‑grade controls that keep machine spenders on track.

We start with a myth many still believe: blockchains are easy to read. Rodrigo explains why they were write‑first, and how The Graph became a quiet backbone of DeFi by turning messy ledgers into queryable data. Years of running high‑throughput infrastructure set the stage for AMP, a SQL‑first, local‑first approach that unifies access across chains, runs on‑prem for banks, and proves that internal datasets match on‑chain truth—fuel for compliance, audit, and real‑world finance moving on blockchain rails.

Then we connect the dots with AI. Leaders who once shrugged at crypto now see agents as the perfect fit: low fees, transparency, and observability. With X402 enabling open micropayments over HTTP, the next missing piece was control. Enter "ampersend", a dashboard and policy plane for agent wallets, spend limits, batching, and reputation‑aware routing. Think: “only transact with agents above a reputation threshold,” “cap this task at 50 cents,” or “enforce daily budgets,” all verifiable and auditable. We also unpack emerging standards like ERC‑8004 for reputation and the Advanced AI Society's proof of control, outlining the identity, trust, and policy stack enterprises need before they unleash agents at scale.

By 2026, expect major institutions to settle on blockchain rails, blending privacy with auditability, and tokenizing everything from bonds to real estate. The opportunity is clear: give agents the autonomy to create value while giving humans the levers to define, observe, and verify. If you care about AI agents, Web3 data, enterprise compliance, and the future of payments, this conversation connects the technical dots to the business outcomes.

Enjoyed the episode? Follow the show, share it with a friend who loves AI or Web3, and leave a 5‑star review to help more people find us.

This episode was recorded through a Descript call on February 5, 2026. Read the blog article and show notes here: https://webdrie.net/how-ai-agents-will-spend-earn-and-prove-trust-on-blockchain-rails/
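The spend controls described in the episode (per-task caps, daily budgets, reputation thresholds) boil down to a small policy check that runs before each payment. A minimal sketch, assuming a simplified model: the class and field names below are invented for illustration and are not ampersend's actual API.

```python
from dataclasses import dataclass

@dataclass
class AgentPolicy:
    # Hypothetical policy plane for an agent wallet (all names invented).
    per_task_cap_cents: int = 50      # "cap this task at 50 cents"
    daily_budget_cents: int = 500     # "enforce daily budgets"
    min_reputation: float = 0.8       # "only transact above a reputation threshold"
    spent_today_cents: int = 0

    def authorize(self, amount_cents: int, counterparty_reputation: float) -> bool:
        """Return True and record the spend only if every policy rule passes."""
        if counterparty_reputation < self.min_reputation:
            return False  # counterparty not reputable enough
        if amount_cents > self.per_task_cap_cents:
            return False  # exceeds the per-task cap
        if self.spent_today_cents + amount_cents > self.daily_budget_cents:
            return False  # would blow the daily budget
        self.spent_today_cents += amount_cents
        return True

policy = AgentPolicy()
print(policy.authorize(40, 0.9))  # True: within caps, reputable counterparty
print(policy.authorize(60, 0.9))  # False: exceeds the 50-cent task cap
```

Every decision here is deterministic and loggable, which is the property that makes such policies auditable in the way the episode describes.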

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SANS Stormcast Monday, February 16th, 2026: Graph Generator; nslookup and clickfix; Chrome 0-Day; TURN Threats

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Play Episode Listen Later Feb 16, 2026 6:00


AI-Powered Knowledge Graph Generator & APTs https://isc.sans.edu/diary/AI-Powered%20Knowledge%20Graph%20Generator%20%26%20APTs/32712
nslookup and ClickFix https://x.com/MsftSecIntel/status/2022456612120629742
Google Chrome 0-Day Patch https://chromereleases.googleblog.com/2026/02/stable-channel-update-for-desktop_13.html
TURN Security Threats https://www.enablesecurity.com/blog/turn-server-security-threats/

Daily Short Stories - Science Fiction
The 4D Doodler - Graph Waldeyer

Daily Short Stories - Science Fiction

Play Episode Listen Later Feb 13, 2026 33:31 Transcription Available


Immerse yourself in captivating science fiction short stories, delivered daily! Explore futuristic worlds, time travel, alien encounters, and mind-bending adventures. Perfect for sci-fi lovers looking for a quick and engaging listen each day.

Run Your Mouth Podcast
Mainstream Myths about Labor Income And Profit Rates

Run Your Mouth Podcast

Play Episode Listen Later Feb 12, 2026 179:49


@GeneSohoForum coming through with the graphs and a walkthrough of the real story in American labor markets. Are Americans underpaid by greedy corporations? Find out on today's episode.


Gospel Simplicity Podcast
The State of Religion in America | Dr. Ryan Burge

Gospel Simplicity Podcast

Play Episode Listen Later Feb 9, 2026 49:03


In this interview I'm joined by Dr. Ryan Burge (aka, Graphs about Religion) to discuss the state of religion in America. We cover claims of a Gen Z revival, the decline of mainline Protestantism, and what the data tells us about polarization.

Read the Book: https://amzn.to/3ZrUpqn

Want to support the channel? Here's how!
Give monthly: https://patreon.com/gospelsimplicity
Make a one-time donation: https://paypal.me/gospelsimplicity
Book a meeting: https://calendly.com/gospelsimplicity/meet-with-austin
Read my writings: https://austinsuggs.substack.com/

Support the show

Crazy Wisdom
Episode #529: Semantic Sovereignty: Why Knowledge Graphs Beat $100 Billion Context Graphs

Crazy Wisdom

Play Episode Listen Later Feb 6, 2026 56:29


In this episode of the Crazy Wisdom Podcast, host Stewart Alsop explores the complex world of context and knowledge graphs with guest Youssef Tharwat, the founder of NoodlBox who is building dot get for context. Their conversation spans from the philosophical nature of context and its crucial role in AI development, to the technical challenges of creating deterministic tools for software development. Tharwat explains how his product creates portable, versionable knowledge graphs from code repositories, leveraging the semantic relationships already present in programming languages to provide agents with better contextual understanding. They discuss the limitations of large context windows, the advantages of Rust for AI-assisted development, the recent Claude/Bun acquisition, and the broader geopolitical implications of the AI race between big tech companies and open-source alternatives. The conversation also touches on the sustainability of current AI business models and the potential for more efficient, locally-run solutions to challenge the dominance of compute-heavy approaches.

For more information about NoodlBox and to join the beta, visit NoodlBox.io.

Timestamps
00:00 Stewart introduces Youssef Tharwat, founder of NoodlBox, building context management tools for programming
05:00 Context as relevant information for reasoning; importance when hitting coding barriers
10:00 Knowledge graphs enable semantic traversal through meaning vs keywords/files
15:00 Deterministic vs probabilistic systems; why critical applications need 100% reliability
20:00 CLI tool makes knowledge graphs portable, versionable artifacts with code repos
25:00 Compiler front-ends, syntax trees, and Rust's superior feedback for AI-assisted coding
30:00 Claude's Bun acquisition signals potential shift toward runtime compilation and graph-based context
35:00 Open source vs proprietary models; user frustration with rate limits and subscription tactics
40:00 Singularity path vs distributed sovereignty of developers building alternative architectures
45:00 Global economics and why brute force compute isn't sustainable worldwide
50:00 Corporate inefficiencies vs independent engineering; changing workplace dynamics
55:00 February open beta for NoodlBox.io; vision for new development tool standards

Key Insights
1. Context is semantic information that enables proper reasoning, and traditional LLM approaches miss the mark. Youssef defines context as the information you need to reason correctly about something. He argues that larger context windows don't scale because quality degrades with more input, similar to human cognitive limitations. This insight challenges the Silicon Valley approach of throwing more compute at the problem and suggests that semantic separation of information is more optimal than brute force methods.

2. Code naturally contains semantic boundaries that can be modeled into knowledge graphs without LLM intervention. Unlike other domains where knowledge graphs require complex labeling, code already has inherent relationships like function calls, imports, and dependencies. Youssef leverages these existing semantic structures to automatically build knowledge graphs, making his approach deterministic rather than probabilistic. This provides the reliability that software development has historically required.

3. Knowledge graphs can be made portable, versionable, and shareable as artifacts alongside code repositories. Youssef's vision treats context as a first-class citizen in version control, similar to how Git manages code. Each commit gets a knowledge graph snapshot, allowing developers to see conceptual changes over time and share semantic understanding with collaborators. This transforms context from an ephemeral concept into a concrete, manageable asset.

4. The dependency problem in modern development can be solved through pre-indexed knowledge graphs of popular packages. Rather than agents struggling with outdated API documentation, Youssef pre-indexes popular npm packages into knowledge graphs that automatically integrate with developers' projects. This federated approach ensures agents understand exact APIs and current versions, eliminating common frustrations with deprecated methods and unclear documentation.

5. Rust provides superior feedback loops for AI-assisted programming due to its explicit compiler constraints. Youssef rebuilt his tool multiple times in different languages, ultimately settling on Rust because its picky compiler provides constant feedback to LLMs about subtle issues. This creates a natural quality control mechanism that helps AI generate more reliable code, making Rust an ideal candidate for AI-assisted development workflows.

6. The current AI landscape faces a fundamental tension between expensive centralized models and the need for global accessibility. The conversation reveals growing frustration with rate limiting and subscription costs from major providers like Claude and Google. Youssef believes something must fundamentally change because $200-300 monthly plans only serve a fraction of the world's developers, creating pressure for more efficient architectures and open alternatives.

7. Deterministic tooling built on semantic understanding may provide a competitive advantage against probabilistic AI monopolies. While big tech companies pursue brute force scaling with massive data centers, Youssef's approach suggests that clever architecture using existing semantic structures could level the playing field. This represents a broader philosophical divide between the "singularity" path of infinite compute and the "disagreeably autistic engineer" path of elegant solutions that work locally and affordably.
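Insight 2 above, that code's existing relationships (imports, function calls) can be turned into a graph deterministically with no LLM in the loop, can be illustrated with an ordinary parser. A minimal sketch using Python's standard-library `ast` module; this is an assumption-laden illustration of the general technique, not NoodlBox's implementation.

```python
import ast

# Toy source to index (invented example).
SOURCE = '''
import json

def load(path):
    with open(path) as f:
        return json.load(f)

def main():
    data = load("config.json")
    print(data)
'''

def build_graph(source):
    """Deterministically extract (caller, callee) and (module, import) edges."""
    tree = ast.parse(source)
    edges = []
    current = [None]  # name of the enclosing function, if any

    class Visitor(ast.NodeVisitor):
        def visit_FunctionDef(self, node):
            prev = current[0]
            current[0] = node.name
            self.generic_visit(node)
            current[0] = prev

        def visit_Call(self, node):
            # Record simple-name and attribute calls made inside a function.
            callee = None
            if isinstance(node.func, ast.Name):
                callee = node.func.id
            elif isinstance(node.func, ast.Attribute):
                callee = node.func.attr
            if current[0] and callee:
                edges.append((current[0], callee))
            self.generic_visit(node)

        def visit_Import(self, node):
            for alias in node.names:
                edges.append(("<module>", alias.name))

    Visitor().visit(tree)
    return edges

# Same input always yields the same graph: deterministic, no model involved.
print(build_graph(SOURCE))  # edges include ('main', 'load') and ('<module>', 'json')
```

Because the output depends only on the parse tree, the resulting graph can be snapshotted per commit and diffed like any other artifact, which is the portability idea in insight 3.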

Infinite Code Context: AI Coding at Enterprise Scale w/ Blitzy CEO Brian Elliott & CTO Sid Pardeshi

Play Episode Listen Later Feb 5, 2026 116:32


Blitzy founders Brian and Sid break down how their “infinite code context” system lets AI autonomously complete over 80% of major enterprise software projects in days. They dive into their dynamic agent architecture, how they choose and cross-check different models, and why they prioritize advances in AI memory over fine-tuning. The conversation also covers their 20¢/line pricing model, the path to 99%+ autonomous project completion, and what this all means for the future software engineering job market. Sponsors: Blitzy: Blitzy is the autonomous code generation platform that ingests millions of lines of code to accelerate enterprise software development by up to 5x with premium, spec-driven output. Schedule a strategy session with their AI solutions consultants at https://blitzy.com Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai Serval: Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. 
Book your free pilot and guarantee 50% help desk automation by week four at https://serval.com/cognitive CHAPTERS: (00:00) About the Episode (03:02) AGI effects without AGI (07:07) Domain-specific context engineering (16:54) Dynamic harness and evals (Part 1) (17:00) Sponsors: Blitzy | Tasklet (20:00) Dynamic harness and evals (Part 2) (30:42) Graphs, RAG, and memory (Part 1) (30:49) Sponsor: Serval (32:26) Graphs, RAG, and memory (Part 2) (41:17) Model zoo and memory (50:07) Planning, scaling, and parallelism (56:13) Pricing, onboarding, and autonomy (01:04:24) Closing the last 20% (01:12:34) Strange behaviors and judges (01:22:23) Reasoning budgets and autonomy (01:33:36) Fine-tuning, benchmarks, and training (01:42:31) Securing AI-generated code (01:49:52) Future of software work (01:57:05) Outro PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://linkedin.com/in/nathanlabenz/ Youtube: https://youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk

IBM Analytics Insights Podcasts
This Episode is Deep into the Tech! From Graph DB Fundamentals to Grid Security: Engineering the Next-Gen Infrastructure

IBM Analytics Insights Podcasts

Play Episode Listen Later Feb 4, 2026 19:54


Prasad Calyam, Curators' Distinguished Professor and Center Director at the University of Missouri, joins the show to explore how knowledge graphs, modern data platforms, and AI are reshaping power grids and cybersecurity. He breaks down graph database fundamentals, real-world research projects, and how industry can tap into cutting-edge university work—all in language that engineers, data folks, and developers can put to use. Timestamps: 01:30 Meet Prasad Calyam 02:57 Why Higher Education? 05:22 Data Analytics 06:59 The Modern Power Grid 09:40 Graph DB Fundamentals 12:21 Cybersecurity via Graphs and RAG 13:45 Research Projects 14:38 Industry Leveraging University Research 16:07 Advice for Students 17:16 What's Fun for Professors. Links: LinkedIn: linkedin.com/in/prasadcalyam Website: http://www.missouri.edu #KnowledgeGraphs #GraphDatabase #RAG #Cybersecurity #PowerGrid #DataEngineering #AI #MLOps #TechPodcast #Developers #ResearchToProduction #UniversityResearch Want to be featured as a guest on Making Data Simple? Reach out to us at almartintalksdata@gmail.com and tell us why you should be next. The Making Data Simple Podcast is hosted by Al Martin, WW VP Technical Sales, IBM, where we explore trending technologies, business innovation, and leadership ... while keeping it simple & fun.
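The "Cybersecurity via Graphs and RAG" segment combines the two ideas the episode covers: store entities and relations as graph edges, then walk the graph at query time to assemble context for a model. A dict-based toy sketch (the grid entities and relation names here are invented for illustration):

```python
# Adjacency-list knowledge graph: node -> list of (relation, neighbor).
# All entity names are hypothetical examples.
graph = {
    "substation_A": [("feeds", "transformer_1"), ("monitored_by", "sensor_9")],
    "transformer_1": [("supplies", "feeder_3")],
    "sensor_9": [("reports_to", "scada_hub")],
}

def retrieve_context(start: str, hops: int = 2) -> list[str]:
    """Collect relation triples within `hops` of a query node:
    the retrieval half of a toy graph-RAG pipeline."""
    frontier, facts, seen = [start], [], {start}
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            for rel, nbr in graph.get(node, []):
                facts.append(f"{node} {rel} {nbr}")
                if nbr not in seen:
                    seen.add(nbr)
                    next_frontier.append(nbr)
        frontier = next_frontier
    return facts

facts = retrieve_context("substation_A")
assert "substation_A feeds transformer_1" in facts
assert "transformer_1 supplies feeder_3" in facts  # reached on hop 2
```

The retrieved triples would then be serialized into the prompt, which is what lets the model answer with the graph's facts rather than its own guesses.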


Effective Altruism Forum Podcast
[Linkpost] “Inference Scaling Reshapes AI Governance” by Toby_Ord

Effective Altruism Forum Podcast

Play Episode Listen Later Feb 2, 2026 34:49


This is a link post. The shift from scaling up the pre-training compute of AI systems to scaling up their inference compute may have profound effects on AI governance. The nature of these effects depends crucially on whether this new inference compute will primarily be used during external deployment or as part of a more complex training programme within the lab. Rapid scaling of inference-at-deployment would: lower the importance of open-weight models (and of securing the weights of closed models), reduce the impact of the first human-level models, change the business model for frontier AI, reduce the need for power-intense data centres, and derail the current paradigm of AI governance via training compute thresholds. Rapid scaling of inference-during-training would have more ambiguous effects that range from a revitalisation of pre-training scaling to a form of recursive self-improvement via iterated distillation and amplification. The end of an era — for both training and governance The intense year-on-year scaling up of AI training runs has been one of the most dramatic and stable markers of the Large Language Model era. Indeed it had been widely taken to be a permanent fixture of the AI landscape and the basis of many approaches to [...] ---Outline:(01:06) The end of an era -- for both training and governance(05:24) Scaling inference-at-deployment(06:42) Reducing the number of simultaneously served copies of each new model(08:45) Reducing the value of securing model weights(09:30) Reducing the benefits and risks of open-weight models(10:05) Unequal performance for different tasks and for different users(12:08) Changing the business model and industry structure(12:50) Reducing the need for monolithic data centres(17:16) Scaling inference-during-training(28:07) Conclusions(30:17) Appendix. 
Comparing the costs of scaling pre-training vs inference-at-deployment --- First published: February 2nd, 2026 Source: https://forum.effectivealtruism.org/posts/RnsgMzsnXcceFfKip/inference-scaling-reshapes-ai-governance Linkpost URL:https://www.tobyord.com/writing/inference-scaling-reshapes-ai-governance --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Effective Altruism Forum Podcast
[Linkpost] “The Extreme Inefficiency of RL for Frontier Models” by Toby_Ord

Effective Altruism Forum Podcast

Play Episode Listen Later Feb 2, 2026 14:34


This is a link post. The new scaling paradigm for AI reduces the amount of information a model can learn from per hour of training by a factor of 1,000 to 1,000,000. I explore what this means and its implications for scaling. The last year has seen a massive shift in how leading AI models are trained. 2018–2023 was the era of pre-training scaling. LLMs were primarily trained by next-token prediction (also known as pre-training). Much of OpenAI's progress from GPT-1 to GPT-4, came from scaling up the amount of pre-training by a factor of 1,000,000. New capabilities were unlocked not through scientific breakthroughs, but through doing more-or-less the same thing at ever-larger scales. Everyone was talking about the success of scaling, from AI labs to venture capitalists to policy makers. However, there's been markedly little progress in scaling up this kind of training since (GPT-4.5 added one more factor of 10, but was then quietly retired). Instead, there has been a shift to taking one of these pre-trained models and further training it with large amounts of Reinforcement Learning (RL). This has produced models like OpenAI's o1, o3, and GPT-5, with dramatic improvements in reasoning (such as solving [...] --- First published: February 2nd, 2026 Source: https://forum.effectivealtruism.org/posts/64iwgmMvGSTBHPdHg/the-extreme-inefficiency-of-rl-for-frontier-models Linkpost URL:https://www.tobyord.com/writing/inefficiency-of-reinforcement-learning --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
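The 1,000x-to-1,000,000x figure comes from comparing how much learning signal each paradigm extracts per unit of training: next-token prediction gets a signal from every token, while RL gets roughly one scalar reward per episode, however long the episode is. A back-of-envelope sketch (all numbers are round assumptions for illustration, not the post's exact figures):

```python
# Pre-training: every token carries a learning signal.
bits_per_token = 1.0          # order-of-magnitude assumption
tokens_per_hour = 1_000_000   # assumed throughput for a fixed compute budget

# RL: one scalar reward per episode, however many tokens it took.
bits_per_episode = 1.0        # e.g. a binary success/fail reward
tokens_per_episode = 10_000   # assumption: long reasoning traces
episodes_per_hour = tokens_per_hour / tokens_per_episode

pretrain_bits = bits_per_token * tokens_per_hour   # 1e6 bits/hour
rl_bits = bits_per_episode * episodes_per_hour     # 1e2 bits/hour

assert pretrain_bits / rl_bits == 10_000  # a 10,000x gap under these assumptions
```

Longer episodes or sparser rewards push the gap toward the upper end of the range the post cites.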

Effective Altruism Forum Podcast
[Linkpost] “Evidence that Recent AI Gains are Mostly from Inference-Scaling” by Toby_Ord

Effective Altruism Forum Podcast

Play Episode Listen Later Feb 2, 2026 10:01


This is a link post. In the last year or two, the most important trend in modern AI came to an end. The scaling-up of computational resources used to train ever-larger AI models through next-token prediction (pre-training) stalled out. Since late 2024, we've seen a new trend of using reinforcement learning (RL) in the second stage of training (post-training). Through RL, the AI models learn to do superior chain-of-thought reasoning about the problem they are being asked to solve. This new era involves scaling up two kinds of compute: the amount of compute used in RL post-training the amount of compute used every time the model answers a question Industry insiders are excited about the first new kind of scaling, because the amount of compute needed for RL post-training started off being small compared to the tremendous amounts already used in next-token prediction pre-training. Thus, one could scale the RL post-training up by a factor of 10 or 100 before even doubling the total compute used to train the model. But the second new kind of scaling is a problem. Major AI companies were already starting to spend more compute serving their models to customers than in the training [...] --- First published: February 2nd, 2026 Source: https://forum.effectivealtruism.org/posts/5zfubGrJnBuR5toiK/evidence-that-recent-ai-gains-are-mostly-from-inference Linkpost URL:https://www.tobyord.com/writing/mostly-inference-scaling --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Effective Altruism Forum Podcast
[Linkpost] “Are the Costs of AI Agents Also Rising Exponentially?” by Toby_Ord

Effective Altruism Forum Podcast

Play Episode Listen Later Feb 2, 2026 15:16


This is a link post. There is an extremely important question about the near-future of AI that almost no-one is asking. We've all seen the graphs from METR showing that the length of tasks AI agents can perform has been growing exponentially over the last 7 years. While GPT-2 could only do software engineering tasks that would take someone a few seconds, the latest models can (50% of the time) do tasks that would take a human a few hours. As this trend shows no signs of stopping, people have naturally taken to extrapolating it out, to forecast when we might expect AI to be able to do tasks that take an engineer a full work-day; or week; or year. But we are missing a key piece of information — the cost of performing this work. Over those 7 years AI systems have grown exponentially. The size of the models (parameter count) has grown by 4,000x and the number of times they are run in each task (tokens generated) has grown by about 100,000x. AI researchers have also found massive efficiencies, but it is eminently plausible that the cost for the peak performance measured by METR has been [...] ---Outline:(13:02) Conclusions(14:05) Appendix(14:08) METR has a similar graph on their page for GPT-5.1 codex. It includes more models and compares them by token counts rather than dollar costs: --- First published: February 2nd, 2026 Source: https://forum.effectivealtruism.org/posts/AbHPpGTtAMyenWGX8/are-the-costs-of-ai-agents-also-rising-exponentially Linkpost URL:https://www.tobyord.com/writing/hourly-costs-for-ai-agents --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
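The arithmetic behind the worry is simple: if compute per token scales roughly with parameter count, then compute per task scales with parameters times tokens generated, so multiplying the two growth factors quoted above gives the raw cost growth before efficiency gains (the 1,000x efficiency factor below is purely illustrative):

```python
param_growth = 4_000     # model size growth over ~7 years (figure from the post)
token_growth = 100_000   # tokens generated per task growth (figure from the post)

# Naive compute-per-task growth: params (per-token cost) x tokens (task length).
raw_compute_growth = param_growth * token_growth
assert raw_compute_growth == 400_000_000  # 4e8x before efficiency gains

# Even a hypothetical 1,000x hardware/algorithmic efficiency gain
# still leaves cost per task up by a very large factor.
effective_cost_growth = raw_compute_growth / 1_000
assert effective_cost_growth == 400_000
```

This is why the post argues the METR trend cannot be extrapolated without also tracking the dollar cost of each data point.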

Effective Altruism Forum Podcast
[Linkpost] “Inference Scaling and the Log-x Chart” by Toby_Ord

Effective Altruism Forum Podcast

Play Episode Listen Later Feb 2, 2026 16:32


This is a link post. Improving model performance by scaling up inference compute is the next big thing in frontier AI. But the charts being used to trumpet this new paradigm can be misleading. While they initially appear to show steady scaling and impressive performance for models like o1 and o3, they really show poor scaling (characteristic of brute force) and little evidence of improvement between o1 and o3. I explore how to interpret these new charts and what evidence for strong scaling and progress would look like. From scaling training to scaling inference The dominant trend in frontier AI over the last few years has been the rapid scale-up of training — using more and more compute to produce smarter and smarter models. Since GPT-4, this kind of scaling has run into challenges, so we haven't yet seen models much larger than GPT-4. But we have seen a recent shift towards scaling up the compute used during deployment (aka 'test-time compute' or 'inference compute'), with more inference compute producing smarter models. You could think of this as a change in strategy from improving the quality of your employees' work via giving them more years of training in which to acquire [...] --- First published: February 2nd, 2026 Source: https://forum.effectivealtruism.org/posts/zNymXezwySidkeRun/inference-scaling-and-the-log-x-chart Linkpost URL:https://www.tobyord.com/writing/inference-scaling-and-the-log-x-chart --- Narrated by TYPE III AUDIO. ---Images from the article:

Effective Altruism Forum Podcast
[Linkpost] “Is there a Half-Life for the Success Rates of AI Agents?” by Toby_Ord

Effective Altruism Forum Podcast

Play Episode Listen Later Feb 2, 2026 19:45


This is a link post. Building on the recent empirical work of Kwa et al. (2025), I show that within their suite of research-engineering tasks the performance of AI agents on longer-duration tasks can be explained by an extremely simple mathematical model — a constant rate of failing during each minute a human would take to do the task. This implies an exponentially declining success rate with the length of the task and that each agent could be characterised by its own half-life. This empirical regularity allows us to estimate the success rate for an agent at different task lengths. And the fact that this model is a good fit for the data is suggestive of the underlying causes of failure on longer tasks — that they involve increasingly large sets of subtasks where failing any one fails the task. Whether this model applies more generally on other suites of tasks is unknown and an important subject for further work. METR's results on the length of tasks agents can reliably complete A recent paper by Kwa et al. (2025) from the research organisation METR has found an exponential trend in the duration of the tasks that frontier AI agents can [...] ---Outline:(05:33) Explaining these results via a constant hazard rate(14:54) Upshots of the constant hazard rate model(18:47) Further work(19:25) References --- First published: February 2nd, 2026 Source: https://forum.effectivealtruism.org/posts/qz3xyqCeriFHeTAJs/is-there-a-half-life-for-the-success-rates-of-ai-agents-3 Linkpost URL:https://www.tobyord.com/writing/half-life --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
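The constant-hazard model described above has a simple closed form: if the agent fails with a fixed probability in each human-minute of the task, success declines exponentially with task length, and the agent is fully characterised by its half-life. A minimal sketch (the 60-minute half-life is an illustrative number, not a fitted value from Kwa et al.):

```python
def success_rate(task_minutes: float, half_life_minutes: float) -> float:
    """Constant hazard rate implies exponential decay:
    P(success) = 0.5 ** (task_length / half_life)."""
    return 0.5 ** (task_minutes / half_life_minutes)

# Hypothetical agent with a 60-minute half-life:
assert abs(success_rate(60, 60) - 0.5) < 1e-9    # 50% on 1-hour tasks
assert abs(success_rate(120, 60) - 0.25) < 1e-9  # 25% on 2-hour tasks
```

Fitting one half-life per agent then lets you read off its expected success rate at any task length, which is the kind of extrapolation the post describes.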

Eggplant: The Secret Lives of Games
All Systems Brough - Vertex Dispenser

Eggplant: The Secret Lives of Games

Play Episode Listen Later Jan 30, 2026 112:29


We're joined by droqen (The End of Gameplay, Starseed Pilgrim), Darius Kazemi (Tiny Subversions, Harvard Applied Social Media Lab), and Tara Macalister (mathematician, composer) to discuss Vertex Dispenser, the second game in our year-long exploration of the work of Michael Brough. Next Month: Kompendium Audio edited by Dylan Shumway. Discussed in this episode: Vertex Dispenser https://store.steampowered.com/app/102400/Vertex_Dispenser/ Michael Brough's Website https://www.smestorp.com/ Four color theorem https://en.wikipedia.org/wiki/Four_color_theorem Graph coloring https://en.wikipedia.org/wiki/Graph_coloring  Starcraft II https://starcraft2.blizzard.com/en-us/ Splatoon https://splatoon.nintendo.com/  Dota 2 https://www.dota2.com/home  Droqen's rare color graph/explanation https://discord.com/channels/690388280767807518/1442554518092120186/1465039921147412510  lots of michael brough games https://smestorp.itch.io/lots-of-michael-brough-games  The Sense of Connectedness https://forums.tigsource.com/index.php?topic=16151.0  Kompendium https://mightyvision.blogspot.com/2012/06/kompendium.html The End of Gameplay https://droqen.itch.io/the-end-of-gameplay Utopia Clicker https://tinysubversions.com/game/utopia/ A Jackpot of Skulls https://brainfruit.studio/games/jackpot _update() Jam https://adamatomic.itch.io/update-jam   https://secretlives.games/  https://discord.gg/tslog https://www.patreon.com/tslog https://www.youtube.com/eggplantshow

Effective Altruism Forum Podcast
[Linkpost] “The Scaling Paradox” by Toby_Ord

Effective Altruism Forum Podcast

Play Episode Listen Later Jan 30, 2026 16:16


This is a link post. AI capabilities have improved remarkably quickly, fuelled by the explosive scale-up of resources being used to train the leading models. But if you examine the scaling laws that inspired this rush, they actually show extremely poor returns to scale. What's going on? AI Scaling is Shockingly Impressive The era of LLMs has seen remarkable improvements in AI capabilities over a very short time. This is often attributed to the AI scaling laws — statistical relationships which govern how AI capabilities improve with more parameters, compute, or data. Indeed AI thought-leaders such as Ilya Sutskever and Dario Amodei have said that the discovery of these laws led them to the current paradigm of rapid AI progress via a dizzying increase in the size of frontier systems. Before the 2020s, most AI researchers were looking for architectural changes to push the frontiers of AI forwards. The idea that scale alone was sufficient to provide the entire range of faculties involved in intelligent thought was unfashionable and seen as simplistic. A key reason it worked was the tremendous versatility of text. As Turing had noted more than 60 years earlier, almost any challenge that one could pose to [...] --- First published: January 30th, 2026 Source: https://forum.effectivealtruism.org/posts/742xJNTqer2Dt9Cxx/the-scaling-paradox Linkpost URL:https://www.tobyord.com/writing/the-scaling-paradox --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Letting & Estate Agent Podcast
UK Property Market Stats Show - Week 3 2026 - Ep.2464

Letting & Estate Agent Podcast

Play Episode Listen Later Jan 30, 2026 46:21


UK Property Market Weekly Update - Week 3 of 2026. I look at the UK property market in the “UK Property Market Stats Show” for the week ending Sunday 25th January 2026 (week 3) with the brilliant Steph Vass. YouTube https://youtu.be/496XoAgOVIU

✅ New Listings
* 35.2k new properties came to market this week (week 3), up as expected from 32.8k last week.
* 2025 weekly average: 30.6k.
* 10-year week 3 average: 31.8k.
* Year-to-date (YTD): 96.5k new listings, 0.5% above 2025 YTD (96.1k), 17.5% above 2024 YTD (82.1k) and 34% above the 2017–19 average (72k).

✅ Price Reductions
* 20k reductions this week.
* 7.6% of resi homes for sale were reduced in December, compared to 12.8% in October, 14.1% in September, 11.1% in August, 14.1% in July and 14% in June.
* The 2025 average was 12.8%, versus the five-year long-term average of 10.74%.

✅ Sales Agreed
* 24.6k homes sold STC in week 3, up as expected from 21.2k last week.
* Week 3 average (last 10 years): 23.4k.
* 2026 weekly average: 19.1k.
* YTD: 62.7k gross sales, which is 8.7% behind week 3 YTD of 2025 (68.7k), yet 23.5% ahead of week 3 2024 (50.8k) and 30.6% above the 2017–19 average (48k).
* Thoughts: January 2025 was an exceptional month because of the April 2025 stamp duty deadline; even so, this was a good sales month. To be ahead of 2024 and the pre-Covid years by such an amount is good to see.

✅ Price Diff between Listings & Sales
* Average asking price of listings last week: £413k.
* Average asking price of sales agreed (SSTC) last week: £348k.
* An 18.8% difference (the long-term 9-year average is 16% to 17%).

✅ Sell-Through Rate
* 9.9% of homes on agents' books went SSTC in December '25, down as expected from 13.5% in November, 15% in October, 14.1% in September, 14.5% in August, 15.4% in July, 15.3% in June, and 16.1% in May.
* Pre-Covid average: 15.5%.

✅ Fall-Throughs
* 4,783 fall-throughs last week (pipeline of 482k homes sold STC).
* Weekly average for 2025: 6,100.
* Fall-through rate: 25.8%, slightly up from 24.9% last week.
* Long-term average: 24.2% (post-Truss chaos saw levels exceed 40%).

✅ Net Sales
* Huge jump in net sales from last week: 19.3k, up from 15.8k.
* Ten-year week 3 average: 18.2k.
* Weekly average for 2026: 15.4k.
* Weekly average for the whole of 2025: 19.2k.
* YTD: 46.1k, which is 8.3% behind week 3 of 2025 (30.6k), 35% ahead of week 2 2024 (19.9k) and 40% ahead of week 2 2017–19 (19.1k).

✅ Probability of Selling (% that exchange vs withdraw)
* December stats: 60.2% of homes that left agents' books exchanged & completed in December. (Note: this figure will change throughout the month as more December stats come in.)
* November: 55.2% / October: 53.3% / September: 53.1% / August: 55.8% / July: 50.9% / June: 51.3% / May: 51.7% / April: 53.2%.
* Dec '24: 60.3% / Dec '23: 57.7% / Dec '22: 64.4% / Dec '21: 73.7%.

✅ Stock Levels
* 613k homes on the market on the 1st of January '26, down from 678k on the 1st of December '25 (605k on the market on 1st Jan '25 for comparison).
* 434k homes in agents' sales pipelines on the 1st Jan 2026, almost identical to 12 months ago on 1st Jan '25 (439k).

✅ House Prices (£/sq.ft)
* December 2025 agreed sales averaged £337.09 per sq.ft, 0.6% higher than 12 months ago (£335.04) and 12.6% higher than 5 years ago (£299.30). The £/sq.ft at sale agreed matches the HM Land Registry Index with 98% accuracy, 5 months in advance. That is why it is so important.

✅ UK Rental Market Overview
* Average rent in December 2025: £1,702 pcm, compared to £1,719 pcm in Dec 2024 and £1,301 pcm in Dec 2017.
* Available rental properties in December '25: 285k, compared to 321k in November '25 (Dec '24: 258k; Dec '23: 235k).

✅ Graphs https://youtu.be/496XoAgOVIU

Analyst Talk With Jason Elder
Analyst Talk - Dr. Andreas Olligschlaeger - AI is Challenging, Not Replacing

Analyst Talk With Jason Elder

Play Episode Listen Later Jan 19, 2026 47:02 Transcription Available


Episode: 00302 Released on January 19, 2026 Description: Artificial intelligence is everywhere but to what degree? Andreas Olligschlaeger returns to Analyst Talk for a deep dive into AI in law enforcement analysis. We break down what AI really is (and isn't), explore graph databases, anomaly detection, and Graph RAG, and discuss how analysts can use AI without replacing human judgment. The conversation also tackles ethics, explainability, and why validation and transparency matter more than ever. This episode is a must-listen for analysts trying to separate real capability from AI hype.

NCLEX High Yield
Prioritization - ASK GRAPH©️, AASH©️ and Test-Taking Strategies

NCLEX High Yield

Play Episode Listen Later Jan 18, 2026 10:11


Submit your CPR Report here. Get a call from Dr. Zeeshan or Nurse Brittany - fill out the form here: https://docs.google.com/forms/d/e/1FAIpQLSeAO_cq5OE6ONYgDFSz0HHrUqKt2Nk1JfC-3D7eXUl8LlzGdg/viewform Our February course is $149.99 - join ASAP: https://nclexhighyield.com/collections/february-courses Our Self-Paced Online Videos are on sale for $44.99 and have updated notes, videos, and practice questions! You can join at https://nclexhighyieldcourse.com/p/full-nclex-course7

Catalog & Cocktails
It's Friday! Juan and Tim rant about data and context graphs

Catalog & Cocktails

Play Episode Listen Later Jan 16, 2026 20:23


Juan and Tim rant about Context Graphs and categorizations of how companies work with data. See omnystudio.com/listener for privacy information.


Future of Data and AI
Emil Eifrem on Neo4j, Graph Databases, Connected Data & Graph-Native AI

Future of Data and AI

Play Episode Listen Later Jan 16, 2026 70:43


How to B2B a CEO (with Ashu Garg)
Why context graphs are the missing layer for AI

How to B2B a CEO (with Ashu Garg)

Play Episode Listen Later Jan 15, 2026 48:43


My guests today are Animesh Koratana and Jamin Ball. Animesh is the founder and CEO of our portfolio company PlayerZero, which is building AI production engineers that operate complex enterprise software autonomously - resolving production incidents, catching defects before release, and building durable models of how systems actually behave.Jamin is a partner at Altimeter Capital and the writer behind Clouded Judgement, a Substack where he analyzes emerging trends in enterprise software. Jamin recently sparked a debate with an essay titled “Long Live Systems of Record.” His core argument is that while agents are changing how software is used and where value accrues, they still depend on ground truth. Systems of record won't disappear so much as get pushed down the stack as new agent-native interfaces emerge on top.My partner Jaya and I felt compelled to respond, with Animesh contributing insights based on what he's seeing on the ground as he builds PlayerZero. From our perspective, the missing layer is what happens inside the workflow itself: the judgment, exceptions, and reasoning that agents and humans apply as work gets done. 
We call these decision traces, and we believe the context graph they form over time will become the most valuable asset for companies building and deploying AI systems.It's a genuine debate - and one that's only going to matter more as agents move from demos to production.Looking forward to keeping the conversation going!Chapters00:00 Why Jamin's essay sparked debate00:35 Jamin's thesis: why agents need ground truth02:00 Animesh on why context graphs become the new source of leverage07:58 What current systems of record miss08:28 PlayerZero's perspective: context graphs in practice10:00 How context graphs could change org structures11:10 How to capture decision traces without forcing humans to log it?14:35 Which systems of record are most at risk17:04 Two workflows ripe for disruption: GTM and software development22:31 Animesh on where context graphs can add most value 28:50 Why context graphs create durability vs short-lived point solutions30:00 Will context graphs be verticalized or universal?34:00 Bear case: do context graphs fail like semantic layers?43:27 2026 predictions: big AI IPOs, world models, enterprise agent adoption45:00 Hot takes: point solutions die; AI job-loss discourse hits a fever pitch47:30 Jevons paradox: why agents create more work, not less

Catalog & Cocktails
2026 Trends in the data world with Tony Baer and Matt Housley

Catalog & Cocktails

Play Episode Listen Later Jan 15, 2026 49:46


Tim and Juan chat with Tony Baer and Matt Housley, hosts of the “It’s About Data” podcast, about the trends they are seeing at the start of 2026. We talked about AI magical thinking, agentic architectures, graphs, careers and much more. See omnystudio.com/listener for privacy information.


Catalog & Cocktails
TAKEAWAY - 2026 Trends in the data world with Tony Baer and Matt Housley

Catalog & Cocktails

Play Episode Listen Later Jan 14, 2026 4:33


This is the takeaway episode from Tim and Juan's chat with Tony Baer and Matt Housley, hosts of the “It’s About Data” podcast, about the trends they are seeing at the start of 2026. We talked about AI magical thinking, agentic architectures, graphs, careers and much more. See omnystudio.com/listener for privacy information.

Sean White's Solar and Energy Storage Podcast
Why Stories Beat Graphs in Solar with Aaron Nichols

Sean White's Solar and Energy Storage Podcast

Play Episode Listen Later Jan 14, 2026 22:30


Join Sean White and Aaron Nichols as they turn the world of solar energy upside down with humor, stories, and a fresh take on education. Discover why facts and graphs aren't enough, and how laughter and creativity can make even the most technical topics memorable. From fake ads to satirical brainstorms, this episode proves that learning about solar can be fun, engaging, and unforgettable. Tune in for a brighter perspective!   Topics Covered Sitcom = Situation Comedy Making Boring Stuff Fun Graphs & Acronyms Exact Solar This Week in Solar Substack www.exactsolar.substack.com The Modern Mythmaker www.themodernmythmaker.substack.com Podbean Colbert Report Solyndra Fake Advertisement Big Oil Beverly Hillbillies YouTube Channel Repurposing   Reach out to Aaron Nichols here: LinkedIn: www.linkedin.com/in/aaron-nichols Substack: www.exactsolar.substack.com   Learn more at www.solarSEAN.com and be sure to get NABCEP certified by taking Sean's classes at www.heatspring.com/sean www.solarsean.com/pvip www.solarsean.com/esip

ServiceNow Podcasts
TAKEAWAY - 2026 Trends in the data world with Tony Baer and Matt Housley

ServiceNow Podcasts

Jan 14, 2026 · 4:33


This is the takeaway episode from Tim and Juan's chat with Tony Baer and Matt Housley, hosts of the “It’s About Data” podcast, about the trends they are seeing at the start of 2026. We talked about AI magical thinking, agentic architectures, graphs, careers and much more. See omnystudio.com/listener for privacy information.

Effective Altruism Forum Podcast
“Is EA underfunding animal advocacy according to our own preferences?” by ElliotTep

Effective Altruism Forum Podcast

Jan 14, 2026 · 7:37


TL;DR: When surveyed, the EA community and leaders think ~18-24% of resources should go towards animal advocacy. The actual figure is about 7%. We as the EA ecosystem are putting fewer resources (money and time) into animal advocacy than the movement thinks we should when surveyed. This disparity could be because of loss of message fidelity, because it's a harder cause area to pitch to donors, or because of the role of large funders, but I'm honestly not too sure. My job at Senterra Funders involves making the case to EA/EA adjacent prospective donors that they can do a tonne of good by donating to animal advocacy charities. As part of this work I've noticed a certain level of inconsistency in the EA ecosystem: I encounter a lot more people who want the animal advocacy movement to 'win' than people working in or donating to the space. The numbers: It turns out this intuition is backed up by survey data. Sources (see Appendix for extra details): Meta Coordination Forum (MCF; 2024) / Talent Need Survey on ideal allocation of financial resources; EA Community survey data from 2023 on jobs by cause area I obtained in private correspondence with David Moss; Historical EA [...] --- Outline: (01:07) The numbers; (02:37) Accounting for the disparity; (05:04) Appendix 1. Data Sources --- First published: January 13th, 2026. Source: https://forum.effectivealtruism.org/posts/FxZdQJXs45fTFnMEe/is-ea-underfunding-animal-advocacy-according-to-our-own --- Narrated by TYPE III AUDIO. --- Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Crypto Altruism Podcast
Episode 234 - Geo - Community-Governed Knowledge for the Open Internet, with The Graph Co-Founder Yaniv Tal

Crypto Altruism Podcast

Jan 13, 2026 · 46:23


For episode 234, we're excited to welcome Yaniv Tal, a legendary builder who has helped shape the foundations of Web3 as we know it. Yaniv is the co-founder and former CEO of The Graph, one of the most critical pieces of decentralized infrastructure in the ecosystem, powering tens of thousands of applications across Web3. Today, he's building Geo, a project focused not on scaling transactions, but on rebuilding trust, knowledge, and coordination on the internet itself. In today's episode you'll learn:

Retirement Talk for Boomers, Seniors, and Retirees

Graphs and charts: One dealt with the average life span. I still have 17 years left according to statistical data. Another dealt with how much time we have left after arriving at age 65 to enjoy good health.

The Secret World of Slimming Clubs

We're a week into 2026 and the optimism is still flowing! But that doesn't stop us hearing about all the binging and gains you had over the festive period. Plus, joining gyms, chocolate oranges, mixing Marmite with avocado and clothes not fitting. NOTE: there is more Easter chat in this episode than you'd expect from a January podcast. Send us a voicenote: 07468 286104. If you'd like to mark your weight loss with our exclusive certificates, get Extra Portions of this podcast and win CASH PRIZES go to patreon.com/noshameinagain or find us on the Patreon app. Hosted on Acast. See acast.com/privacy for more information.

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Why “context graphs” have suddenly become one of the most important ideas in enterprise AI, and what they reveal about why agents fail or succeed at real work. This episode explains the core idea behind context graphs, how they differ from systems of record and knowledge graphs, and why capturing decision traces — the why, not just the what — may be the key to scalable autonomy inside organizations. In the headlines: AI wearables make another run at relevance, China reports early success using AI for cancer detection, X faces global backlash over Grok moderation failures, and Yann LeCun publicly breaks with Meta's AI strategy. Brought to you by: KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/ The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Interested in sponsoring the show? sponsors@aidailybrief.ai

Jim and Them
Corey's Angels Live Continued (Love or Poop) - #893 Part 2

Jim and Them

Dec 22, 2025 · 121:32


Tots TURNT: We have an update on the final push for Tots TURNT. Shout outs to everyone that has donated. Corey's Angels Live: We must continue our watch of the best show on the Internet, Corey's Angels Live. This is a complete disaster. Special Guests: Fred Durst is in the building and Gerard McMahon, all the celebs! THE BEAR!, FUCK YOU, WATCH THIS!, THE KINKS!, FATHER CHRISTMAS!, SENTIENT NECK PUSSY!, AI!, ROAST!, TOTS TURNT!, DONATIONS!, SUPPORT!, COREY'S ANGELS LIVE!, COMMENTS!, CHAT!, SCROLLING!, DAISY DE LA HOYA!, HATERS!, TROLLS!, MODELS!, MERCH!, MONOLOGUE!, HOWARD STERN!, ROAST!, MICHAEL JACKSON!, SURGERY!, PHILIP SEYMOUR HOFFMAN!, LOVE OR POOP!, VOTES!, GRAPH!, GUY ON THE BOARDS!, TRYOUTS!, HATERS!, FRED DURST!, CRY LITTLE SISTER!, GERARD MCMAHON!, MICHAEL SCOTT!, SCUMBAG JOSH!, SUPERCHATS!, JAMESONANDJACK!, JUSTIN BIEBER!, PHILIP SEYMOUR HOFFMAN!, OVERDOSE!, BILL SHYTE!, FASHION SHOW!, ROCK OF LOVE!, DAISY OF LOVE!, SCIENCE!, BEANIE!, FISHERMAN HAT!, COOCOO!, AMERICA'S GOT TALENT!, BOOGIE DOWN!, PERFECT ENDING!  You can find the videos from this episode at our Discord RIGHT HERE!

Homebrewed Christianity Podcast
The Great Disconnect: When the Pulpit and the Pew aren't Speaking the Same Language

Homebrewed Christianity Podcast

Dec 20, 2025 · 84:07


What happens when the person preaching on Sunday morning believes something completely different than the folks sitting in the pews? Well friends, that's exactly what we're digging into today. My buddy Ryan Burge brought the graphs—including some brand new data that hasn't even dropped on his Substack yet—and let me tell you, it's a real deal predicament for Mainline Protestantism. Turns out about 60-70% of mainline clergy identify as liberal, but only about 25% of the people in the pews do. That's not a gap, that's a canyon. We're talking ELCA, UCC, PCUSA, Episcopalians—the whole crew. And look, Ryan and I are both mainline folks, so we're not throwing rocks across the river here. We're throwing rocks at our own faces. We get into why this disconnect exists, what the "silver tsunami" of aging Boomers means for these congregations, and why young progressive folks aren't joining our churches even though we thought we built them a home. It's honest, it's a little uncomfortable, and yeah, we also talk about Zion Williamson and Christmas movies because that's just how we roll. If you want to go deeper on where American religion is headed, join me and Ryan along with Tony Jones for our upcoming class The Rise of the Nones this January at www.AmericanNones.com. Come on. You can WATCH the conversation and see the graphs on YouTube Dr. Ryan Burge is a professor of practice at the Danforth Center on Religion and Politics at Washington University in St. Louis. He is the author or co-author of four books including The Nones, The American Religious Landscape, and The Great Dechurching. He has written for the New York Times, the Wall Street Journal and POLITICO. 
He has also appeared on 60 Minutes, where Anderson Cooper called him “one of the leading data analysts of religion and politics in the United States.” Previous Visits from Ryan Burge: Gen Z Revival?: The Next Chapter in American Religious Life; The 2024 Election & Religion Post-Mortem; Distrust & Denominations; Trust, Religion, & a Functioning Democracy; What it's like to close a church; The Future of Christian Education & Ministry in Charts; The Sky is Falling & the Charts are Popping!; Graphs about Religion & Politics w/ Spicy Banter; a Year in Religion (in Graphs); Evangelical Jews, Educated Church-Goers, & other bits of dizzying data; 5 Religion Graphs w/ a side of Hot Takes; Myths about Religion & Politics. Join us at Theology Beer Camp, October 8-10, in Kansas City! UPCOMING ONLINE CLASS: The Rise of the Nones. One-third of Americans now claim no religious affiliation. That's 100 million people. But here's what most church leaders get wrong: they're not all the same. Some still believe in God. Some are actively searching. Some are quietly indifferent. Some think religion is harmful. Ryan Burge & Tony Jones have conducted the first large-scale survey of American "Nones", which reveals 4 distinct categories—each requiring a different approach. Understanding the difference could transform everything from your ministry to your own spiritual quest. Get info & join the donation-based class here. This podcast is a Homebrewed Christianity production. Follow the Homebrewed Christianity, Theology Nerd Throwdown, & The Rise of Bonhoeffer podcasts for more theological goodness for your earbuds.
Join over 75,000 other people by joining our Substack - Process This! Get instant access to over 50 classes at www.TheologyClass.com Follow the podcast, drop a review, send feedback/questions or become a member of the HBC Community. Learn more about your ad choices. Visit megaphone.fm/adchoices

People I (Mostly) Admire
Ninety-Eight Years of Economic Wisdom (Replay)

People I (Mostly) Admire

Dec 13, 2025 · 49:09


The late Robert Solow was a giant among economists. When he was 98 years old he told Steve about cracking German codes in World War II, why it's so hard to reduce inequality, and how his field lost its way. SOURCES: Robert Solow, professor emeritus of economics at the Massachusetts Institute of Technology. RESOURCES: "Secrecy, Cigars, and a Venetian Wedding: How the P.G.A. Tour Made a Deal with Saudi Arabia," by Alan Blinder, Lauren Hirsch, Kevin Draper, and Kate Kelly (The New York Times, 2023). "Global Assessment of Environmental-Economic Accounting and Supporting Statistics: 2020," by United Nations Committee of Experts on Environmental-Economic Accounting (2021). "Where Modern Macroeconomics Went Wrong," by Joseph E. Stiglitz (Oxford Review of Economic Policy, 2015). "As Inequality Grows, So Does the Political Influence of the Rich" (The Economist, 2018). "Big Bang Financial Deregulation and Income Inequality: Evidence From U.K. and Japan," by Daniel Waldenstrom and Julia Tanndal (VoxEU, 2016). "The Fall And Rise Of U.S. Inequality, In 2 Graphs," by Quoctrung Bui (Planet Money, 2015). Nobel Prize Biographical, by Robert Solow (1987). Principles of Political Economy, by John Stuart Mill (1848). EXTRAS: "Is Economic Growth the Wrong Goal? (Update)," by Freakonomics Radio (2023). Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Risky Business
Risky Biz Soap Box: Graph the planet!

Risky Business

Dec 11, 2025 · 42:53


In this sponsored Soap Box edition of the Risky Business podcast, Patrick Gray chats with Jared Atkinson, CTO of SpecterOps, about BloodHound OpenGraph. OpenGraph enumerates attack paths across platforms and services, not just your primary directories. A compromised GitHub account to on-prem AD compromise attack path? It's a thing, and OpenGraph will find it. Cross-platform attack path enumeration! So good! This episode is also available on YouTube. Show notes

The Refrigeration Mentor Podcast
Episode 358. Trend Graphs and CO2 System Troubleshooting with Andrew Freeburg and Erik Holland

The Refrigeration Mentor Podcast

Dec 8, 2025 · 58:20


Join the Refrigeration Mentor Hub here Learn more about Refrigeration Mentor Customized Technical Training Programs at www.refrigerationmentor.com/courses This episode is another of our "Morning Coffee" sessions with longtime refrigeration professionals Andrew Freeburg and Erik Holland, diving deep into trend graphs and CO2 refrigeration system troubleshooting. We cover understanding and reading trend graphs, practical tips for locating and analyzing system inefficiencies, and the procedures for fine-tuning refrigeration systems. We also discuss gas coolers, the impact of ambient temperature on system performance, and effective strategies for maintaining system stability in cold climates. Interested in joining the next Morning Coffee live? Join our FREE Refrigeration Mentor Community today. In this episode, we cover: -Understanding trend graphs -Comparing graphs on E2 systems -Trend graph analysis -Graphing techniques on E2 systems -Analyzing CO2 system graphs -System fine-tuning -Creating work and building credibility -Advanced graph analysis and troubleshooting -High pressure valve and bypass valve correlation -Gas cooler and system stability -System oscillations and expansion valves -Gas cooler maintenance -High pressure valve control -Fan control strategies -Troubleshooting fan and gas cooler issues -Winter challenges in CO2 systems -Heat reclaim and system efficiency Helpful Links and Resources: Episode 287. CO2 Experts: Using Trend Graphs to Troubleshoot CO2 Systems with Andrew Freeburg Episode 144.  Troubleshooting CO2 High Pressure Valves Using Trend Graphs with the CAREL Boss System Episode 350. Supermarket Refrigeration Tips and Tricks with Robert Ochs  

Reclaim Your Rise: Type 1 Diabetes with Lauren Bongiorno
200. “Is a Flat Graph Without Restriction Even Possible?” How Layne Found Freedom After 30 Years with Type 1 Diabetes

Reclaim Your Rise: Type 1 Diabetes with Lauren Bongiorno

Nov 25, 2025 · 46:57


In this special 200th episode of Reclaim Your Rise, I sit down with Risely coaching alum Layne—an ICU nurse practitioner who has lived with type 1 diabetes for over 30 years—to explore a struggle I know so many in our community quietly carry: the weight of comparison, perfectionism, and those triggering “perfect” flat-line graphs. Even with an A1C of 5.8, Layne shares how she felt mentally drained from micromanaging every detail of her diabetes and questioning whether true freedom and stable numbers could ever exist together. She opens up about the years when flat graphs came only from restriction, and how that left her wondering if peace was possible without losing herself again. In our conversation, Layne reflects on how she learned to redefine progress, shift her mindset, and rebuild trust with her body in a way that finally brought her steadier days and more ease. I'm so excited for you to hear this honest, raw, and deeply relatable story. And to celebrate episode 200, I'm hosting a special giveaway for the community—details are inside!

The Fitness Movement: Training | Programming | Competing
Heart Rate Graph Analysis: CrossFit Athletes [Ep.209]

The Fitness Movement: Training | Programming | Competing

Nov 25, 2025 · 44:26


Day & Ben analyze 6 different heart rate graphs from the athletes we coach.» Watch on YouTube: https://youtu.be/ibpFvGpzIqs» View All Episodes: https://zoarfitness.com/podcast/» Hire a Coach: https://www.zoarfitness.com/coach/» Shop Programs: https://www.zoarfitness.com/product-category/downloads/» Follow ZOAR Fitness on Instagram: https://www.instagram.com/zoarfitness/Support the show

CISO-Security Vendor Relationship Podcast
Are You Implying This Line Graph Isn't a Compelling Cybersecurity Narrative?

CISO-Security Vendor Relationship Podcast

Nov 18, 2025 · 41:01


All links and images can be found on CISO Series. This week's episode is hosted by David Spark, producer of CISO Series, and Andy Ellis (@csoandy), principal of Duha. Joining them is our sponsored guest, Nathan Hunstad, director, security, Vanta. In this episode: Metrics that matter; Testing for real; AI as an assistant; Intelligence without context. Huge thanks to our sponsor, Vanta. Vanta automates key areas of your GRC program—including compliance, risk, and customer trust—and streamlines the way you manage information. A recent IDC analysis found that compliance teams using Vanta are 129% more productive. Get back time to focus on strengthening security and scaling your business at vanta.com/ciso

ShopTalk » Podcast Feed
691: Charts + Graphs, Vibe Coding an App, and Debating Affordances

ShopTalk » Podcast Feed

Nov 17, 2025 · 68:59


Show Description: What do Balatro streamers do when the game is over, Random in CSS is so hot right now, Dave has a better idea for charts and graphs that would change the world, Quiet UI follow up, Dave tries vibe coding a tennis app and doesn't completely John McEnroe his laptop, Chris wonders about better cursor UI on the web, and debating affordances vs conventions. Listen on Website. Watch on YouTube. Links: Jynxzi - Twitch; BALL x PIT on Steam; Could Open Graph Just Be a CSS Media Type? | Scott Jehl, Web Designer/Developer; https://webawesome.com; Podcast Awesome; Quiet UI; A Beautiful Site; Eleventy is a simpler static site generator; Don't use custom CSS mouse cursors – Eric Bailey; Home | Rach Smith's digital garden; The Two Button Problem – Frontend Masters Blog. Sponsors: tldraw. Have you ever wanted to build an app that works kinda like Miro or Figma, that has a zoomable infinite canvas, that's multiplayer, and really good, but you also want to build it in React with normal React components on the canvas? Good news! tldraw is the world's first, best, and only SDK for building infinite canvas apps in React. tldraw takes care of all the canvas complexities — things like the camera, selection logic, and undo redo — so that you can focus on building the features that matter to your users. It's easy to use with plenty of examples and starter kits, including a kit where you can use AI to create things on the canvas. Get started for free at tldraw.dev/shoptalk, or run npm create tldraw to spin up a starter kit.