Podcasts about CPUs

  • 702 podcasts
  • 1,458 episodes
  • 55m average duration
  • 5 weekly new episodes
  • Latest episode: Mar 12, 2026


Latest podcast episodes about CPUs

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Retrieval After RAG: Hybrid Search, Agents, and Database Design — Simon Hørup Eskildsen of Turbopuffer

Mar 12, 2026 · 60:32


Turbopuffer came out of a reading app. In 2022, Simon was helping his friends at Readwise scale their infra for a highly requested feature: article recommendations and semantic search. Readwise was paying ~$5k/month for their relational database, and vector search would cost ~$20k/month, making the feature too expensive to ship. In 2023, still mulling over the problem from Readwise, Simon decided he wanted to "build a search engine," which became turbopuffer.

We discuss:

• Simon's path: Denmark → Shopify infra for nearly a decade → "angel engineering" across startups like Readwise, Replicate, and Causal → turbopuffer almost accidentally becoming a company
• The Readwise origin story: building an early recommendation engine right after the ChatGPT moment, seeing it work, then realizing it would cost ~$30k/month for a company spending ~$5k/month total on infra, and getting obsessed with fixing that cost structure
• Why turbopuffer is "a search engine for unstructured data": Simon's belief that models can learn to reason, but can't compress the world's knowledge into a few terabytes of weights, so they need to connect to systems that hold truth in full fidelity
• The three ingredients for building a great database company: a new workload, a new storage architecture, and the ability to eventually support every query plan customers will want on their data
• The architecture bet behind turbopuffer: going all in on object storage and NVMe, avoiding a traditional consensus layer, and building around the cloud primitives that only became possible in the last few years
• Why Simon hated operating Elasticsearch at Shopify: years of painful on-call experience shaped his obsession with simplicity, performance, and eliminating state spread across multiple systems
• The Cursor story: launching turbopuffer as a scrappy side project, getting an email from Cursor the next day, flying out after a 4am call, and helping cut Cursor's costs by 95% while fixing their per-user economics
• The Notion story: buying dark fiber, tuning TCP windows, and eating cross-cloud costs because Simon refused to compromise on architecture just to close a deal faster
• Why AI changes the build-vs-buy equation: it's less about whether a company can build search infra internally, and more about whether they have time, especially if an external team can feel like an extension of their own
• Why RAG isn't dead: coding companies still rely heavily on search, and Simon sees hybrid retrieval (semantic, text, regex, SQL-style patterns) becoming more important, not less
• How agentic workloads are changing search: the old pattern was one retrieval call up front; the new pattern is one agent firing many parallel queries at once, turning search into a highly concurrent tool call
• Why turbopuffer is reducing query pricing: agentic systems are dramatically increasing query volume, and Simon expects retrieval infra to adapt to huge bursts of concurrent search rather than a small number of carefully chosen calls
• The philosophy of "playing with open cards": Simon's habit of being radically honest with investors, including telling Lachy Groom he'd return the money if turbopuffer didn't hit PMF by year-end
• The "P99 engineer": Simon's framework for building a talent-dense company, rejecting by default unless someone on the team feels strongly enough to fight for the candidate

Simon Hørup Eskildsen
• LinkedIn: https://www.linkedin.com/in/sirupsen
• X: https://x.com/Sirupsen
• About: https://sirupsen.com/about

turbopuffer
• https://turbopuffer.com/

Full Video Pod
Timestamps

00:00:00 The PMF promise to Lachy Groom
00:00:25 Intro and Simon's background
00:02:19 What turbopuffer actually is
00:06:26 Shopify, Elasticsearch, and the pain behind the company
00:10:07 The Readwise experiment that sparked turbopuffer
00:12:00 The insight Simon couldn't stop thinking about
00:17:00 S3 consistency, NVMe, and the architecture bet
00:20:12 The Notion story: latency, dark fiber, and conviction
00:25:03 Build vs. buy in the age of AI
00:26:00 The Cursor story: early launch to breakout customer
00:29:00 Why code search still matters
00:32:00 Search in the age of agents
00:34:22 Pricing turbopuffer in the AI era
00:38:17 Why Simon chose Lachy Groom
00:41:28 Becoming a founder on purpose
00:44:00 The "P99 engineer" philosophy
00:49:30 Bending software to your will
00:51:13 The future of turbopuffer
00:57:05 Simon's tea obsession
00:59:03 Tea kits, X Live, and P99 Live

Transcript

Simon Hørup Eskildsen: I don't think I've said this publicly before, but I just called Lachy and was like, look, Lachy, if this doesn't have PMF by the end of the year, we'll just return all the money to you. Justine and I don't wanna work on this unless it's really working. So we want to give it the best shot this year, and we're really gonna go for it. We're gonna hire a bunch of people. We're just gonna be honest with everyone. When I don't know how to play a game, I just play with open cards. Lachy was the only person that didn't freak out. He was like, I've never heard anyone say that before.

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space.

swyx: Hello, hello. We're still recording in the Kernel studio for the first time, very excited. And today we are joined by Simon Eskildsen of turbopuffer. Welcome.

Simon Hørup Eskildsen: Thank you so much for having me.

swyx: turbopuffer has really gone on a huge tear, and I do have to mention that you're now the newest member of the Danish Aarhus mafia. There are a lot of legendary programmers that have come out of it, like Bjarne Stroustrup, Rasmus, Lars Bak and the V8 team, and the Google Maps team. You're mostly a Canadian now, but isn't that interesting? There's such a strong Danish presence.

Simon Hørup Eskildsen: Yeah, I was writing a post not that long ago about the influences. So I grew up in Denmark, right? I left when I was 18 to go to Canada to work at Shopify. And I would still say that I feel more Danish than Canadian. Hence also the weird accent; I can't say "th." My wife is also Canadian. I think one of the things in Denmark is that there's just such a ruthless pragmatism, and there's also a big focus on aesthetics: people really care about what things look like. Canada has a lot of attributes, the US has a lot of attributes, but there have been lots of great things to carry over. I don't know what's in the water in Aarhus, though. And I don't know that I could be considered part of the mafia quite yet, compared to the phenomenal individuals we just mentioned. Rasmus Lerdorf is also Danish Canadian. I don't know where he lives now, but he's the PHP creator.

swyx: Yeah. And obviously Tobi is German, but moved to Canada as well.
Yes. This kind of import, that is an interesting talent move.

Alessio: I think what I would love to get from you is a definition of turbopuffer, because you could be a vector DB, which is maybe a bad word now in some circles, or you could be a search engine. Let's just start there, and then we'll run through the history of how you got to this point.

Simon Hørup Eskildsen: For sure. So turbopuffer is, at this point in time, a search engine. We do full-text search and we do vector search, and that's really what we're specialized in. If you're trying to do much more than that, then this might not be the right place yet, but turbopuffer is all about search. The other way that I think about it is that we can take all of the world's knowledge, all of the exabytes and exabytes of data that there is, and we can use those tokens to train a model, but we can't compress all of that into a few terabytes of weights, right? We can compress into a few terabytes of weights how to reason with the world, how to make sense of the knowledge. But we have to somehow connect it to something external that actually holds that in full fidelity and truth. And that's the thing that we intend to become. That's a very holier-than-thou kind of phrasing, right? But being the search engine for unstructured data is the focus of turbopuffer at this point in time.

Alessio: And let's break it down. So people might say, well, didn't Elasticsearch already do this? And then some other people might say, is this search on my data, is this closer to RAG than to, like, a public search thing? How do you segment the different types of search?

Simon Hørup Eskildsen: The way that I generally think about this is, there are a lot of database companies, and I think if you wanna build a really big database company, you need a couple of ingredients to be in the air, which only happens roughly every 15 years. You need a new workload. You basically need the ambition that every single company on earth is gonna have data in your database, multiple times over. You look at a company like Oracle, right? I don't think you can find a company on earth with a digital presence that doesn't somehow have some data in an Oracle database. And I think at this point that's also true for Snowflake and Databricks, 15 years later, or even more than that: there's not a company on earth that isn't indirectly or directly consuming Snowflake or Databricks or any of the big analytics databases. And I think we're in that kind of moment now. I don't think you're gonna find a company over the next few years that doesn't directly or indirectly have all their data available for search and connected to AI. So you need that new workload, something to be happening, and that new workload is connecting very large amounts of data to AI.

The second condition to build a big database company is that you need some new underlying change in the storage architecture that was not possible for the databases that have come before you. If you look at Snowflake and Databricks: commoditized, massive fleets of HDDs. That was not possible before; it just wasn't in the air in the nineties. We just didn't build these systems; S3 and so on was not around.
And I think the architecture that is now possible, that wasn't possible 15 years ago, is to go all in on NVMe SSDs. It requires a particular type of architecture for the database that is difficult to retrofit onto the databases that are already there, including the ones you just mentioned. The second part is to go all in on object storage, more so than we could have done 15 years ago. We don't have a consensus layer, we don't really have anything; in fact, you could turn off all the servers that turbopuffer has and we would not lose any data, because we have gone completely all in on object storage. And this means that our architecture is just so simple. So that's the second condition. The first being a new workload, which means that every company on earth, either indirectly or directly, is using your database. The second being some new storage architecture, which means that the companies that came before you can't do what you're doing.

I think the third thing you need to do to build a big database company is that, over time, you have to implement more or less every query plan on the data. What that means is that you can't just get stuck in "this is the one thing that a database does." It has to be ever evolving, because when someone has data in the database, over time they expect to be able to ask it more or less every question. So you have to do that to push the storage architecture to the limit of what it's capable of. Those are the three conditions.

swyx: I just wanted to get a little bit of the motivation, right? So you left Shopify; you were principal engineer, infra guy, and you also headed up, like, a kernel labs inside of Shopify, right? And then you consulted for Readwise, and that kind of gave you the idea. I just wanted you to tell that story. Maybe you've told it before, but introduce people to the new workload, the sort of aha moment for turbopuffer.

Simon Hørup Eskildsen: For sure. So yeah, I spent almost a decade at Shopify. I was on the infrastructure team from the fairly early days, around 2013. At the time it felt like it was growing so quickly; all the metrics were doubling year on year, which, compared to what companies are contending with today, is very cute growth. I feel like some companies are seeing that month over month. Of course, Shopify has been compounding for a very long time now. But I spent a decade doing that, and the majority of that was just: make sure the site is up today, and make sure it's up a year from now. And a lot of that was really just, you know, the Kardashians would drive very, very large amounts of traffic to Shopify as they were rotating through all the merch and building out their businesses, and we just needed to make sure we could handle that. Sometimes these were events with a million requests per second. We had our own data centers back in the day, and we were moving to the cloud, and there was so much sharding work that we were doing. So I spent a decade just scaling databases, 'cause that's fundamentally the most difficult thing to scale about these sites.

The database that was the most difficult for me to scale during that time, and the most aggravating to be on call for, was Elasticsearch. It was very, very difficult to deal with.
And I saw a lot of projects that were just being held back in their ambition by using it.

swyx: And I mean, self-hosted?

Simon Hørup Eskildsen: Yeah, self-hosted, not commercial. This is like 2015, right? So it's a very particular vintage; it's probably better at a lot of these things now. It was difficult to contend with, and I just kept thinking: it's an inverted index, it should be good at these kinds of queries. We often couldn't get it to do exactly what we needed, or basically get Lucene to do it, like expose Lucene raw to what we needed. So that was something we did on the side and just panic-scaled when we needed to, but not a particular focus of mine. So I left, and when I left I wasn't sure exactly what I wanted to do. I mean, I'd spent a decade inside of the same company. I'd grown up there; I started working there when I was 18.

swyx: You only did Rails?

Simon Hørup Eskildsen: Yeah, I mean, Rails. I'm a Rails guy. Love Rails. So good.

Alessio: We all wish we could still work in Rails.

swyx: I know, I know. I tried learning Ruby. It's just too much, too many options to do the same thing. I know there's a way to do it.

Simon Hørup Eskildsen: I love it. I don't know that I would use it now, given Claude Code and Cursor and everything. But still, if I'm just sitting down and writing code, that's how I think. But anyway, I left, and I talked to a couple of companies and was like, I need to see a little bit more of the world here to know what I'm gonna focus on next. So what I decided is, I was gonna do what I called "angel engineering," where I just hopped around my friends' companies in three-month increments and helped them out with something, vested a bit of equity, and solved some interesting infrastructure problem. So I worked with a bunch of companies at the time. Readwise was one of them, Replicate was one of them. Causal, I dunno if you've tried it, it's like a spreadsheet engine where you can do distributions. They sold recently. We used it for FP&A at turbopuffer. So a bunch of companies like this, and it was super fun.

And so when the ChatGPT moment happened, I was with Readwise for a stint. We were preparing for the Reader launch, which is where you queue articles and read them later, and I was just getting their Postgres up to snuff, which basically boils down to tuning autovacuum. So I was doing that, and then this happened, and we were like, oh, maybe we should build a little recommendation engine and some features to try to hook in the LLMs. They were not that good yet, but it was clear there was something there. So I built a small recommendation engine: okay, let's take the articles that you've recently read, embed all the articles, and then do recommendations. It was good enough that when I ran it on one of the co-founders of Readwise, I found that I got articles about having a child. I'm like, oh my God, I didn't know that they were having a child. I wasn't sure what to do with that information, but the recommendation engine was good enough that it was suggesting articles about that.
So there were recommendations, and it actually worked really well. But this was a company that was spending maybe five grand a month in total on all their infrastructure, and when I did the napkin math on running the embeddings of all the articles, putting them into a vector index, putting it in prod, it was gonna be like 30 grand a month. That just wasn't tenable, right? Readwise is a proudly bootstrapped company, and paying 30 grand of infrastructure for one feature versus five for everything just wasn't tenable. So it went in the bucket of: this is useful, it's pretty good, but let's return to it when the costs come down.

swyx: Did you say it grows by feature? So five to 30, what's the scaling factor? It scales by the number of articles that you embed?

Simon Hørup Eskildsen: It does, but what I meant is: five grand for all the other stuff, the Heroku dynos, Postgres, all the rest, and then 30 grand for one feature, which is "what other articles are related to this one." So it was just too much to power everything. Their budget would've been maybe a few thousand dollars, which still would've been a lot. And so we put it in a bucket of, okay, we're gonna do that later; we'll wait for the cost to come down. And that haunted me. I couldn't stop thinking about it. I was like, okay, there's clearly some latent demand here. If the cost had been a tenth, we would've shipped it.

This was really the only data point that I had. I didn't go out and talk to anyone else. So I started reading; I couldn't help myself. I didn't know what a vector index was; I barely knew anything about how to generate the vectors. This was early 2023, and there was a lot of hype about vector databases; they were raising a lot of money, and I really didn't know anything about it. Trying these little models, fine-tuning them, I was just trying to get a lay of the land.

So I just sat down. I have this GitHub repository called napkin math, and in napkin math there are just rows of numbers: this is how much bandwidth, you can do 25 gigabytes per second on average to DRAM, you can do five gigabytes per second of writes to an SSD, and so on. All of these numbers, right? And S3: how much bandwidth can you drive per connection? I was just sitting there thinking, why hasn't anyone built a database where you just put everything on object storage, and then you puff it into NVMe when you use the data, and you puff it into DRAM if you're querying it live? This seems fairly obvious, and the only real downside is that if you go all in on object storage, every write will take a couple hundred milliseconds of latency, but from there it's really all upside. The first query takes half a second. And it occurred to me: the architecture is really good for that. It's really good for object storage, it's really good for NVMe SSDs, and you just couldn't have done it 10 years ago. Back to what we were talking about before: you really have to build a database where you have as few round trips as possible. This is how CPUs work today. It's how NVMe SSDs work.
It's how S3 works: you want to have a very large number of outstanding requests. Basically, go to S3, do like a thousand requests asking for data in one round trip, wait for that, get it, make a new decision, do it again, and try to do that a maximum of maybe three times. But no databases were designed that way. With NVMe SSDs, you can drive within a very low multiple of DRAM bandwidth if you use them that way. And same with S3: you can fully max out the network card, which generally is not maxed out; you get very, very good bandwidth. But no one had built a database like that.

So I was like, okay, well, can't you just take all the vectors, plot them in the proverbial coordinate system, get the clusters, put a file on S3 called clusters.json, and then put another file for every cluster, you know, cluster-1.json, cluster-2.json? It's two round trips, right? You get the clusters, you find the closest clusters, and then you download the cluster files, like the closest n. And you could do this in two round trips.

swyx: You do nearest neighbors locally.

Simon Hørup Eskildsen: Yes, yes. And then you would build this file. It's ultra simplistic, but it's not a far shot from what the first version of turbopuffer was. Why hasn't anyone done that?

Alessio: In that moment, from a workload perspective, were you thinking this is gonna be a read-heavy thing, because they're doing recommendations? Is it the fact that writes are so expensive, and with AI you're actually not writing that much?

Simon Hørup Eskildsen: At that point I hadn't really thought too much about... well, no, actually, it was always clear to me that there was gonna be a lot of writes, because at Shopify the search clusters were doing, I don't know, tens or hundreds of QPS, 'cause you just have to have a human sit and type queries in. But I don't know how many updates there were per second; I'm sure it was in the millions going into the cluster. So I always knew there was like a 10 to 100x ratio of writes to reads. Even in the Readwise use case, there'd probably be a lot fewer reads than writes: there's just a lot of churn on the amount of stuff going through versus the number of queries. But I wasn't thinking too much about that. I was mostly just thinking about the fundamentally cheapest way to build a database in the cloud today, using the primitives that you have available. And this is it, right? Now you have one machine, and let's say you have a terabyte of data in S3: you pay the $200 a month for that, and then maybe five to 10% of that data needs to be on NVMe SSDs, and less than that in DRAM. You're paying very, very little to inflate the data.

swyx: By the way, when you say no one else has done that, would you consider Neon to be on a similar path, in terms of being sort of S3-first and separating compute and storage?

Simon Hørup Eskildsen: Yeah, I think what I meant with that is just building a completely new database. I don't know if we were the first; I just looked at the napkin math and was like, this seems really obvious. So I'm sure a hundred people came up with it at the same time, like the light bulb and every invention ever. Right.
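For readers who want the two-round-trip design in concrete terms, here is a minimal sketch of the clusters.json idea Simon describes. The get_object helper, the file names, and the n_probe parameter are hypothetical illustration, not turbopuffer's actual implementation, which is far more sophisticated:

```python
import json
import numpy as np

def get_object(key: str) -> bytes:
    """Hypothetical object-store fetch (one S3 GET). Stubbed for illustration."""
    raise NotImplementedError

def search(query: np.ndarray, n_probe: int = 4, k: int = 10) -> list:
    # Round trip 1: fetch the centroid index, "clusters.json".
    centroids = json.loads(get_object("clusters.json"))  # {cluster_id: centroid}
    ids = list(centroids)
    mat = np.array([centroids[i] for i in ids])
    # Nearest neighbors computed locally: pick the n_probe closest centroids.
    nearest = np.argsort(np.linalg.norm(mat - query, axis=1))[:n_probe]
    # Round trip 2: fetch the chosen cluster files (issued concurrently in practice).
    candidates = []
    for idx in nearest:
        for doc in json.loads(get_object(f"cluster-{ids[idx]}.json")):
            dist = np.linalg.norm(np.asarray(doc["vector"]) - query)
            candidates.append((dist, doc["id"]))
    return sorted(candidates)[:k]
```

Every write paying a couple hundred milliseconds of object-storage latency, and reads costing at most two dependent round trips, is exactly the trade-off described above.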
It was just in the air. I think Neon was first to it, and they retrofitted it onto Postgres. They built this whole architecture where you have it in memory and then you sort of mmap back to S3, and I think that was very novel at the time to do for OLTP. But I hadn't seen a database that was truly all in, not retrofitting it: a database built purely for this, no consensus layer, even using compare-and-swap on object storage to do consensus. I hadn't seen anyone go that all in. And I'm sure there was someone that did it before us, I don't know. I was just looking at the napkin math.

swyx: And when you say consensus layer, are you strongly relying on S3's strong consistency? You are. Okay.

Simon Hørup Eskildsen: That is your consensus layer. It is the consistency layer. And I think this is something most people don't realize, but S3 only became consistent in December of 2020.

swyx: I remember this coming out during COVID; it was just like a free upgrade.

Simon Hørup Eskildsen: Yeah.

swyx: They just announced it: we got strong consistency, guys. And everyone was like, okay, cool.

Simon Hørup Eskildsen: And I'm sure they'd probably had it in prod for a while and were just like, it's done, right? And people were like, okay, cool. But that's a big moment. NVMe SSDs were also not in the cloud until around 2017, right? So in 2017 you get NVMe SSDs, and people were like, okay, cool, there's one SKU that does this, whatever; it takes a few years. And then the second thing is S3 becomes consistent in 2020. So now it means you don't have to have this big FoundationDB or ZooKeeper or whatever sitting there contending on the keys, which is what Snowflake and others have to do.

swyx: So that's gone.

Simon Hørup Eskildsen: Exactly, just gone. Just pushed to however many hundreds of people they have working on S3. Solved. And then compare-and-swap was not in S3 at this point in time.

swyx: By the way, I don't know what that is, so maybe you wanna explain.

Simon Hørup Eskildsen: Yes. So what compare-and-swap is, is basically: you can imagine that if you have a database, it might be really nice to have a file called metadata.json. And metadata.json could say things like, hey, these keys are here, and this file means that; there's lots of metadata you need to operate a database. That's the simplest way to do it. But now you might have a lot of servers that wanna change the metadata. They might have written a file and want the metadata to contain that file, but you have a hundred nodes contending on this metadata.json. Well, what compare-and-swap allows you to do is: you download the file, you make the modifications, and then you write it only if it hasn't changed while you did the modification, and if it has, you retry. You just have these retry loops. Now, you can imagine if you have a hundred nodes doing that, it's gonna be really slow, but it will converge over time. That primitive was not available in S3. It wasn't available in S3 until late 2024, but it was available in GCP. The real story of this is certainly not that I sat down and galaxy-brained it.
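A minimal sketch of the retry loop Simon is describing, assuming a hypothetical object store that exposes versioned reads and conditional writes (GCS generation preconditions and S3's newer conditional writes both fit this shape); none of the names below are real turbopuffer code:

```python
import json

class PreconditionFailed(Exception):
    """The object changed between our read and our conditional write."""

def read_with_version(key: str) -> tuple[bytes, str]:
    """Hypothetical versioned read: returns (data, version_token)."""
    raise NotImplementedError

def put_if_unchanged(key: str, data: bytes, expected_version: str) -> None:
    """Hypothetical conditional write: succeeds only if the stored version
    still equals expected_version; otherwise raises PreconditionFailed."""
    raise NotImplementedError

def update_metadata(mutate) -> dict:
    # Optimistic compare-and-swap loop over a single metadata.json object.
    while True:
        raw, version = read_with_version("metadata.json")
        meta = json.loads(raw)
        mutate(meta)  # apply this node's change to the in-memory copy
        try:
            put_if_unchanged("metadata.json", json.dumps(meta).encode(), version)
            return meta  # our write landed; no other writer raced us
        except PreconditionFailed:
            continue  # another node won the race; re-read and retry
```

With a hundred writers this loop is slow, as Simon notes, but it converges, and it removes the need for a separate ZooKeeper-style consensus service.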
It was not like I said, okay, we're gonna start on GCS and S3 is gonna get it later. It was really not that; we got really lucky. We started on GCP, and we started on GCP because Shopify ran on GCP, so that was the platform I was most familiar with, and I knew the Canadian team there 'cause I'd worked with them at Shopify. So it was natural for us to start there. And when we started building the database, we really thought we had to build a consensus layer, have a ZooKeeper or something. But then we discovered the compare-and-swap. It's like, oh, we can kick the can: we'll just do metadata.json, it's fine, it's probably fine. And we just kept kicking the can until we had very, very strong conviction in the idea, and then we kind of hinged the company on the fact that S3 was probably gonna get this.

It started getting really painful in mid-2024, 'cause we were closing a deal with Notion, actually, which was running in AWS, and we're like, trust us, you really want us to run this in GCP. And they're like, I don't know about that, we're running everything in AWS. And the latencies across clouds were so big, and we had so much conviction, that we bought dark fiber between the AWS region in Oregon, the internet exchange, and GCP. They were like, we've never seen a startup do this; what's going on here? We were tuning TCP windows, everything, to get the latency down, because we had such high conviction in not running a separate metadata layer. So those were the three conditions, right? Compare-and-swap to do metadata, which wasn't in S3 until late 2024; S3 being consistent, which didn't happen until December 2020; and NVMe SSDs, which didn't land in the cloud until 2017.

swyx: In some ways it's a very big cloud success story that you were able to put this all together, but also doing things like buying dark fiber. That is something I've never heard.

Simon Hørup Eskildsen: I mean, it's very common when you're a big company, right, connecting your own data centers or whatever. But it was uniquely a pain with Notion, because if you're in Ashburn, Virginia, US East, the GCP and AWS data centers are within a millisecond of each other on the public exchanges. But in Oregon, uniquely, the GCP data center sits a couple hundred kilometers east of Portland, and the AWS region sits in Portland, but the network exchange they go through is through Seattle. So it's a full 14 milliseconds or something like that. So we were like, okay, we have to go through an exchange in Portland.

swyx: And you'd rather do this than, like, run your own ZooKeeper?

Simon Hørup Eskildsen: Yes, way rather. It doesn't have state. I don't want state in two systems. And I think all of that is informed by the fact that Justine, my co-founder, and I had just been on call for so long, and the worst outages are the ones where you have state in multiple places that isn't syncing up. So it really came from a very pure source of pain: just imagining what we would be okay
being woken up at 3:00 AM about, and having something in ZooKeeper was not one of them.

swyx: When you're talking to, like, a Notion or something, do they care, or do they just...

Simon Hørup Eskildsen: They just care about latency.

swyx: Latency and cost. That's it.

Simon Hørup Eskildsen: They just cared about latency, and we just absorbed the cost. We're like, we have high conviction in this; at some point we can move them to AWS. So we'll buy the fiber, it doesn't matter. And it's like $5,000. Usually when you buy fiber you buy multiple lines, and we're like, we can only afford one, but we tested that when it fails over to the public internet, it's super smooth.

Alessio: You can imagine talking to the GCP rep: no, we're gonna buy, even though we know we're gonna churn from you guys and go to AWS in like six months, but in the meantime we'll do this.

Simon Hørup Eskildsen: I mean, this workload still runs on GCP, for what it's worth, 'cause it was just so reliable. So it was never about moving off GCP; it was just about giving Notion the latency that they deserved. And we didn't want them to have to care about any of this. They were also like, oh, egress is gonna be bad. It was like, okay, screw it, we're just gonna VPC-peer with you in AWS and eat the cost. Whatever needs to be done.

Alessio: And what were the actual workloads? Because when you think about AI, 14 milliseconds really doesn't matter in the scheme of a model generation.

Simon Hørup Eskildsen: Yeah. We were told the latency that we had to beat. So we were just looking at the traces, and kind of hand-drawing what the other extensions of the trace would be. And there's a lot more to it, because if you have 14 versus seven milliseconds, you can fit in another round trip. So we had to tune TCP to try to send as much data in every round trip, prewarm all the connections. There are a lot of things that compound from these kinds of round trips, but in the grand scheme it was just: we have to beat the latency of whatever we're up against.

swyx: Notion is a database company. They could have done this themselves; they do lots of database engineering. How do you even get in the door? Just talk through that.

Simon Hørup Eskildsen: Last time I was in San Francisco, I was talking to one of the engineers who was one of our champions at Notion. And they were just trying to make sure that the per-user cost matched the economics that they needed. The way I think about it is: I have to earn a return on whatever the clouds charge me, and then my customers have to earn a return on that. It's very simple, right? There has to be gross margin all the way up, and that's how you build the product.
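As a back-of-envelope illustration of why seven versus 14 milliseconds matters: the RTT figures below are from the conversation, while the three dependent round trips are an assumed query shape, echoing the "maximum of three times" rule of thumb earlier.

```python
# Rough math: dependent round trips multiply cross-cloud RTT.
RTT_VIA_SEATTLE_MS = 14    # AWS Oregon <-> GCP via the Seattle exchange, per Simon
RTT_DARK_FIBER_MS = 7      # illustrative post-dark-fiber figure from the conversation
DEPENDENT_ROUND_TRIPS = 3  # assumed: metadata, then centroids, then cluster files

for rtt_ms in (RTT_VIA_SEATTLE_MS, RTT_DARK_FIBER_MS):
    total = rtt_ms * DEPENDENT_ROUND_TRIPS
    print(f"{rtt_ms:>2} ms RTT x {DEPENDENT_ROUND_TRIPS} round trips = {total} ms network wait")
# 14 ms -> 42 ms, 7 ms -> 21 ms: halving the RTT buys back a whole extra
# round trip inside the same latency budget, which is the trade Simon describes.
```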
And so our customers have to be okay with the set of trade-offs that turbopuffer makes, and if they're happy with that, that's great.

swyx: Do you feel like you're competing with build-internally versus buy, or buy versus buy?

Simon Hørup Eskildsen: Yeah, so, sorry, this was all to build up to your question. One of the Notion engineers told me that they'd sat, probably with a napkin, and drawn out, like, why hasn't anyone built this? And then they saw turbopuffer. It was like, well, it's literally that. And I think AI has also changed the buy-versus-build equation: it's not really about "can we build it," it's about "do we have time to build it." I think they felt like, okay, if this is a team that can do that, and they feel enough like an extension of our team, then we can go a lot faster, which would be very, very good for them. And I mean, they put us through the test; we had some very, very long nights to do that POC. They were our second big customer after Cursor, which also was a lot of late nights.

swyx: Yeah. Should we go into that story, the Cursor story? They credit you a lot for working very closely with them. I've heard this story from Sualeh's point of view, but I'm curious what it looks like from your side.

Simon Hørup Eskildsen: I actually haven't heard it from Sualeh's point of view, so maybe you can cross-reference it. The way that I remember it was that, the day after we launched... I'd worked the whole summer on the first version. Justine wasn't part of it yet, 'cause I didn't tell anyone that summer that I was working on this. I was just locked in on building it, because it's very easy otherwise to confuse talking about something with actually doing it. So I was like, I'm not gonna do that, I'm just gonna do the thing. I launched it, and at this point turbopuffer is a Rust binary running on a single eight-core machine in a tmux session. And me deploying it was looking at the request log and then Ctrl-C-ing it: okay, there are no requests, let's upgrade the binary. It was literally the scrappiest thing you could imagine, on purpose, because at Shopify we did that all the time: we ran things in tmux all the time to begin with, before something had at least the inkling of PMF. It was like, okay, is anyone even gonna hear about this?

And one of the Cursor co-founders, Arvid, reached out. The Cursor team are all, like, IOI/IMO contenders, right? So they just speak in bullet points and facts. It was this amazing email exchange of: this is how many QPS we have, this is what we're paying, this is where we're going, blah, blah, blah. So we're just conversing in bullet points. And I tried to get a call with them a few times, but they were really riding the PMF wave, just like, late 2023. And one time Sualeh emails me at, what was it, 4:00 AM Pacific time, saying, hey, are you open for a call now? And I'm on the East Coast, so it was like 7:00 AM. I was like, yeah, great, sure, whatever. And we just started talking, and something... I didn't know anything about sales. Something just compelled me.
I have to go see this team; there's something here. So I went to San Francisco, and I went to their office, and the way that I remember it is that Postgres was down when I showed up at the office. Did Sualeh tell you this? No? Okay. So Postgres was down, and they were distracted with that, and I was trying my best to see if I could help in any way. I knew a little bit about databases; back to tuning autovacuum, it was like, I think you have to tune autovacuum. So we talked about that, and then that evening we talked about what it would look like to work with us. And I just said: look, we're all in, we will do whatever you tell us. They migrated everything over the next week or two, and we reduced their cost by 95%, which I think kind of fixed their per-user economics, and it solved a lot of other things. This is also when I asked Justine to come on as my co-founder; she was the best engineer I ever worked with at Shopify. She lived two blocks away, and we were just like, okay, we're gonna get this done. And we did. We helped them migrate, and we worked like hell over the next month or two to make sure that we were never an issue. And that was the Cursor story.

swyx: And is code a different workload than normal text? Is it just text? Is it the same thing?

Simon Hørup Eskildsen: Yeah, so Cursor's workload is basically: they embed the entire code base. They chunk it up however they do, and they have their own embedding model, which they've been public about. On one of their evals it's like a 25% improvement on a very particular workload; they have a bunch of blog posts about it. I think it works best on larger code bases, but they've trained their own embedding model to do this. And so you'll see, if you use the Cursor agent, it will do searches. They've also been public about how they post-trained their model to be very good at semantic search as well. And that's how they use it. It's very good at "can you find me code that's similar to this" or "code that does this." And for these queries they also use grep to supplement it.

swyx: Yeah, it's been a big topic of discussion: is RAG dead? Because grep, you know.

Simon Hørup Eskildsen: I mean, we see lots of demand from the coding companies.

swyx: To search every part.

Simon Hørup Eskildsen: Yes, we see demand. And I like case studies. I don't like doing thought pieces on "this is where it's going" and trying to be all macroeconomic about AI; that has turned out to be a giant waste of time, because no one can really predict any of this. So I just collect case studies, and Cursor has done a great job talking about what they're doing, and I hope some of the other coding labs that use turbopuffer will do the same. But it does seem to make a difference for particular queries. We can also do text, we can also do regex. And I should also say that Cursor's security posture in turbopuffer is exceptional. They have their own embedding model, which makes it very difficult to reverse-engineer.
They obfuscate the file paths. It's very difficult to learn anything about a code base by looking at it. And the other thing they do is that, for their customers, they encrypt it with their encryption keys in turbopuffer's bucket. So it's really, really well designed.

swyx: And this is extra stuff they did to work with you, because you are not part of Cursor.

Simon Hørup Eskildsen: Exactly.

swyx: And this is just best practice when working with any database, not just you guys. Okay, that makes sense. I think for me the learning is that all workloads are hybrid: you want the semantic, you want the text, you want the regex, you want SQL. It's silly to be all in on one particular query pattern.

Simon Hørup Eskildsen: I really like the way that Sualeh at Cursor talks about it, and I'm gonna butcher it here; I'm a database scalability person, I don't know anything about training models other than what the internet tells me. The way he describes it is that this is just cached compute, right? You have a point in time where you're looking at some particular context and focused on some chunk, and you say, this is the layer of the neural net at this point in time. That seems fundamentally really useful, to cache compute like that. How the value of that will change over time, I'm not sure, but there seems to be a lot of value in it.

Alessio: Maybe talk a bit about the evolution of the workload. Even search: maybe two years ago it was one search at the start of an LLM query to build the context. Now you have agentic search, however you wanna call it, where the model is both writing and changing the code, and it's searching it again later. What are some of the new types of workloads, or changes you've had to make to your architecture for it?

Simon Hørup Eskildsen: I think you're right. When I think of RAG, I think of: hey, there's an 8,000-token context window, and you better make it count. Search was a way to do that. Now everything is moving towards just letting the agent do its thing. And so, back to the thing from before: the LLM is very good at reasoning with the data, and we're just the tool call. That's increasingly what we see our customers doing. What we're seeing more demand for from our customers now is a lot of concurrency. Notion does a ridiculous number of queries in every round trip, just because they can. And when I use the Cursor agent now, I also see it doing more concurrency than I've ever seen before. So, a bit similar to how we designed the database to drive as much concurrency in every round trip as possible, that's also what the agents are doing. That's new. It means an enormous number of queries, all at once, against the dataset while it's warm, in as few turns as possible.

swyx: Can I clarify one thing on that?

Simon Hørup Eskildsen: Yes.

swyx: Are they batching multiple users, or is one user driving multiple?

Simon Hørup Eskildsen: One user driving multiple. One agent driving.

swyx: It's parallel-searching a bunch of things.

Simon Hørup Eskildsen: Exactly.

swyx: Yeah. Yeah, exactly.
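A sketch of what that agentic pattern looks like from the client side: one agent fanning out several hybrid queries (semantic, full-text, regex) concurrently, instead of one retrieval call up front. The tpuf_query function and its parameter names are hypothetical stand-ins, not a real SDK.

```python
import asyncio

async def tpuf_query(namespace: str, **params) -> list:
    """Hypothetical async search call; a stand-in for a real client SDK."""
    raise NotImplementedError

async def agent_retrieve(question: str) -> list:
    # New pattern: one agent fires many hybrid queries at once,
    # rather than a single retrieval call before generation.
    tasks = [
        tpuf_query("code", vector_text=question, top_k=20),  # semantic
        tpuf_query("code", bm25=question, top_k=20),         # full-text
        tpuf_query("code", regex=question, top_k=20),        # exact/pattern
        tpuf_query("docs", vector_text=question, top_k=10),  # second corpus
    ]
    # All four hit the warm dataset concurrently: one turn, not four.
    results = await asyncio.gather(*tasks)
    return results  # the agent reconciles and de-duplicates these candidate sets
```

Mixing query types is also one answer to the diversity question swyx raises next: eight parallel calls only help if they don't all fetch the same thing.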
So yeah, Cognition also did this for the fast-context thing, like eight parallel at once.

Simon Hørup Eskildsen: Yes.

swyx: And an interesting problem is, well, how do you make sure you have enough diversity, so you're not making the same request eight times?

Simon Hørup Eskildsen: And I think that's probably also where the hybrid comes in: that's another way to diversify, a completely different way to do the search. That's a big change, right? Before, it was really just one call, and then the LLM took however many seconds to return, but now we just see an enormous number of queries. So we've reduced query pricing. This is probably the first time I'm saying this, actually, but the query pricing is being reduced, like, five x. And we'll probably try to reduce it even more to accommodate some of these workloads that do very large numbers of queries. That's one thing that's changed. I think the write ratio is still very high; there's still an enormous number of writes per read, but we're probably starting to see that change as people really lean into this pattern.

Alessio: Can we talk a little bit about the pricing? I'm curious, because traditionally a database would charge on storage, but now you have token generation, which is so expensive, where the actual value of a good search query is much higher because it's saving inference time down the line. How do you structure that, and what are people receptive to on the other side?

Simon Hørup Eskildsen: Yeah. The turbopuffer pricing in the beginning was very simple. The pricing for search engines before turbopuffer was very serverful: here's the VM, here's the per-hour cost, great. And I just sat down with a piece of paper and said, if turbopuffer was really good, this is probably what it would cost, with a little bit of margin. That was the first pricing of turbopuffer: okay, this is probably the storage cost, and so on, on a piece of paper. It was vibe pricing. It was very vibe-priced, and I got it wrong. Well, I didn't get it wrong, but turbopuffer wasn't yet performing at that first-principles price. So when Cursor came on turbopuffer... I didn't know any VCs, I didn't know anything about raising money or anything like that. I just saw that my GCP bill was a lot higher than the Cursor bill. So Justine and I were just like, well, we have to optimize it. And, to the chagrin of the VCs, it now means that we're profitable, because we had so much pricing pressure in the beginning. Because it was running on my credit card, and Justine and I had spent tens of thousands of dollars on compute bills and spinning up the company and, like, very bad Canadian lawyers and things like that to get all of this done, because we just didn't know. If you're steeped in San Francisco, you just know: okay, you go out, raise a pre-seed round. I'd never heard the word pre-seed at this point in time.

swyx: When you had Cursor, you had Notion, you had no funding?

Simon Hørup Eskildsen: With Cursor we had no funding, yeah.
By the time we had Notion, Lachy was here. So we really did vibe-price it 100% from first principles, but it was not performing at first principles, so we did everything we could to optimize it in the beginning, so that at least we could have like a 5% margin or something and I wasn't freaking out, because Cursor's bill was also growing as they were growing, and my liability was growing against my credit limit. I was actively calling my bank: I need a bigger credit limit. Anyway, that was the beginning. But the pricing was, yeah, storage, writes, and queries. And the pricing we have today is basically just that pricing, with duct tape and spit, trying to approach a margin on the physical underlying hardware. This year you're gonna see more and more pricing changes from us.

swyx: And how much does stuff like VPC peering matter? Because you're working in AWS land, where egress is charged and all that.

Simon Hørup Eskildsen: We have an enterprise plan that just has a base fee, because we haven't had time to figure out SKU pricing for all of this. But yeah, you can run turbopuffer either in SaaS, which is what Cursor does; you can run it in a single-tenant cluster, so it's just you, which is what Notion does; and then you can run it in BYOC, where everything is inside the customer's VPC, which is what, for example, Anthropic does.

swyx: What I'm hearing is that this is probably the best CRO job for somebody who can come in and help you with this.

Simon Hørup Eskildsen: Turbopuffer hired, I don't know what number this was, but we had a full-time CFO as like the 12th hire or something. I hear of a lot of companies, I don't know how they do it, that have a hundred employees and no CFO.

swyx: Having a CFO is like running a business, man.

Simon Hørup Eskildsen: It's so good. Yeah, Money Mike, he just handles the money and a lot of the business stuff. He came in and helped with a lot of the operational side of the business. So, COO-CFO, somewhere in between.

swyx: Just a quick mention of Lachy, 'cause I'm curious. I've met Lachy, and he's obviously a very good investor, and now at Physical Intelligence. I call him a generalist super angel, right? He invests in everything. And I always wonder: is there something appealing about focusing on developer tooling, focusing on databases, someone going "I've invested for 10 years in databases," versus someone like Lachy, who can maybe connect you to all the customers that you need?

Simon Hørup Eskildsen: This is an excellent question; no one's asked me this. Why Lachy? Because there were a couple of people we were talking to at the time, and when we were raising, we were almost a little distressed, because one of our peers had just launched something that was very similar to turbopuffer. And someone gave me the advice at the time: just choose the person where you feel like you can pick up the phone, not prepare anything, and just be completely honest. And I don't think I've said this publicly before, but I just called Lachy and was like, look, Lachy:
if this doesn't have PMF by the end of the year, we'll just return all the money to you. Justine and I don't wanna work on this unless it's really working. So we want to give it the best shot this year, and we're really gonna go for it. We're gonna hire a bunch of people, and we're just gonna be honest with everyone. When I don't know how to play a game, I just play with open cards. And Lachy was the only person that didn't freak out. He was like, I've never heard anyone say that before. As I said, I didn't even know what a seed or pre-seed round was at this time. So I was just very honest with him. And I asked him, Lachy, have you ever invested in a database company? He was just like, no. And at the time I was like, am I dumb? But there was something that just really drew me to Lachy. He is so authentic, so honest, and there was something where I just felt like I could say everything openly. And that was, I think, a perfect match at the time, and honestly still is. He was just like, okay, that's great, this is the most honest, ridiculous thing I've ever heard anyone say to me.

swyx: Why is this ridiculous? Saying, a competitor launched, this may not work out?

Simon Hørup Eskildsen: It was more just like,
And it's like, I wanna be the, like, I wanna be the first person to do it. I think some founders have this, like, I could never work for anyone else. I, I really don't feel that way. Like, it's just like, I wanna see this happen. And I wanna see it happen with some people that I really enjoy working with and I wanna have fun doing it and this, this, this has all felt very natural on that, on that sense.So it was never a like join versus versus versus found. It was just dis found me at the right moment.Alessio: Well I think there's an argument for, you should have joined Cursor, right? So I'm curious like how you evaluate it. Okay, I should actually go raise money and make this a company versus like, this is like a company that is like growing like crazy.It's like an interesting technical problem. I should just build it within Cursor and then they don't have to encrypt all this stuff. They don't have to obfuscate things. Like was that on your mind at all orSimon Hørup Eskildsen: before taking the, the small check from Lockie, I did have like a hard like look at myself in the mirror of like, okay, do I really want to do this?And because if I take the money, I really have to do it right. And so the way I almost think about it's like you kind of need to ha like you kind of need to be like fucked up enough to want to go all the way. And that was the conversation where I was like, okay, this is gonna be part of my life's journey to build this company and do it in the best way that I possibly can't.Because if I ask people to join me, ask people to get on the cap table, then I have an ultimate responsibility to give it everything. And I don't, I think some people, it doesn't occur to me that everyone takes it that seriously. And maybe I take it too seriously, I don't know. But that was like a very intentional moment.And so then it was very clear like, okay, I'm gonna do this and I'm gonna give it everything.Alessio: A lot of people don't take it this seriously. But,swyx: uh, let's talk about, you have this concept of the P 99 engineer. Uh, people are 10 x saying, everyone's saying, you know, uh, maybe engineers are out of a job. I don't know.But you definitely see a P 99 engineer, and I just want you to talk about it.Simon Hørup Eskildsen: Yeah, so the P 99 engineer was just a term that we started using internally to talk about candidates and talk about how we wanted to build the company. And you know, like everyone else is, like we want a talent dense company.And I think that's almost become trite at this point. What I credit the cursor founders a lot with is that they just arrived there from first principles of like, we just need a talent dense, um, talent dense team. And I think I've seen some teams that weren't talent dense and like seemed a counterfactual run, which if you've run in been in a large company, you will just see that like it's just logically will happen at a large company.Um, and so that was super important to me and Justine and it's very difficult to maintain. And so we just needed, we needed wording for it. And so I have a document called Traits of the P 99 Engineer, and it's a bullet point list. And I look at that list after every single interview that I do, and in every single recap that we do and every recap we end with.End with, um, some version of I'm gonna reject this candidate completely regardless of what the discourse was, because I wanna see people fight for this person because the default should not be, we're gonna hire this person. 
The default should be: we're definitely not hiring this person. And if everyone is like, "ah, maybe," and no one throws a punch, then this is not the right person.

swyx: Do you operate such that there must be at least one champion who's like, "Yes, I will put my career on the line for this"?

Simon Hørup Eskildsen: Career on the line...

swyx: Maybe a chair, but...

Simon Hørup Eskildsen: Yeah. Someone needs to have both fists up and be like, "I'd fight." And if one person says that, then okay, let's do it.

swyx: Yeah.

Simon Hørup Eskildsen: It doesn't have to be absolutely everyone, right? The interviews are designed so that you're checking for different attributes, and someone knocking it out of the park in every single attribute is fairly rare. But that's really important. So there are the Traits of the P99 Engineer, and there are lots of them. There are also the traits of the triple-nine engineer and the quadruple-nine engineer. It's a long list.

swyx: Okay.

Simon Hørup Eskildsen: I'll give you some samples of what we look for. I think the P99 engineer has some history of having bent their trajectory, or something, to their will. Some moment where they just made the computer do what it needed to do. There's something like that, and it will have occurred at some point in their career, hopefully multiple times.

swyx: Give me an example from one of your engineers.

Simon Hørup Eskildsen: I'll give one. We launched this thing called ANN v3 (we're also working on v4 and v5 right now), and ANN v3 can search a hundred billion vectors with a p50 of around 40 milliseconds and a p99 of 200 milliseconds. Maybe other people have done this, I'm sure Google and others have, but we haven't seen anyone do it in a publicly consumable SaaS. And that was an engineer, the chief architect of turbopuffer, Nathan, who more or less just bent the software to his will. The software was not capable of this, and he made it capable for a very particular workload in a six-to-eight-week period, with the help of a lot of the team. There are numerous examples of that at turbopuffer, but that's really bending the software, and x86, to your will. It was incredible to watch. You want to see some moments like that.

swyx: Isn't that triple-nine?

Alessio: I feel like this bar is too high.

Simon Hørup Eskildsen: Nathan is... yeah, there are a lot of nines. Okay. So I think that's one trait. Another trait is that the P99 engineer spends a lot of time looking at maps. Generally it's their preferred UX. They just love looking at maps. Have you ever seen someone who just sits on their phone and scrolls around on a map? Do you guys not look at maps a lot?

swyx: I guess I'm not feeling it there, I don't know.

Simon Hørup Eskildsen: What about trains? Do you like trains?

swyx: Not enough, I guess. Okay, this is just what I call weaponized autism.
Simon Hørup Eskildsen: I love looking at maps. It's my preferred UX.

Alessio: Lots of random places, so...

swyx: You know.

Alessio: Yes. Okay, there you go. So beyond random places, how do you explore the maps?

Simon Hørup Eskildsen: No, it's just a joke.

swyx: It's the weaponized-autism laugh. You're just obsessed with something and you like studying a thing.

Simon Hørup Eskildsen: The origin of this was that at some point I read an interview with some IOI gold medalist.

swyx: Uh-huh.

Simon Hørup Eskildsen: And it's like, "What do you do in your spare time?" "I like looking at maps." I was like, I feel so seen. I just love scrolling around. Oh, Canada is so big. Where's Baffin Island? I don't know. I love it. Anyway, the P99 is obsessive, right? You'll find traits of that. We do multiple interviews at turbopuffer that try to screen for some of these things. There are lots of others, but these are the kinds of traits that we look for.

swyx: I'll tell you, some people listen for some of my devrel stuff, and I do think about devrel as maps. You draw a map for people. A map shows you what is commonly agreed to be the geographical features, what a boundary is, and it also shows you what it does not do. I think a lot of developer-tools companies try to tell you they can do everything, but let's be real: your three landmarks are here, everyone comes here, then here, then here. You draw a map, and then you draw a journey through the map. To me, that's what developer relations looks like. So I do think about things that way.

Simon Hørup Eskildsen: I think the P99 thinks in trade-offs, right? The P99 is very clear about, hey, you can't run a high-transaction workload on turbopuffer; the write latency is a hundred milliseconds. That's a clear trade-off. The P99 is very good at articulating the trade-offs in every decision. Which is exactly what the map is in your case, right?

swyx: Yeah. My world.

Alessio: How do you reconcile some of these things, when you're saying you bend the computer to your will versus like the trade

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
NVIDIA's AI Engineers: Agent Inference at Planetary Scale and "Speed of Light" — Nader Khalil (Brev), Kyle Kranen (Dynamo)

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Mar 10, 2026 83:37


Join Kyle, Nader, Vibhu, and swyx live at NVIDIA GTC next week!

Now that AIE Europe tix are ~sold out, our attention turns to Miami and World's Fair!

The definitive AI accelerator chip company has more than 10xed this AI Summer:

And is now a $4.4 trillion megacorp… that is somehow still moving like a startup. We are blessed to have a unique relationship with our first ever NVIDIA guests: Kyle Kranen, who gave a great inference keynote at the first World's Fair and is one of the leading architects of NVIDIA Dynamo (a datacenter-scale inference framework supporting SGLang, TRT-LLM, and vLLM), and Nader Khalil, a friend of swyx from our days in Celo in The Arena, who has been drawing developers at GTC since before they were even a glimmer in the eye of NVIDIA.

Nader discusses how NVIDIA Brev has drastically reduced the barriers to entry for developers to get a top-of-the-line GPU up and running, and Kyle explains NVIDIA Dynamo as a datacenter-scale inference engine that optimizes serving by scaling out, leveraging techniques like prefill/decode disaggregation, scheduling, and Kubernetes-based orchestration, framed around cost, latency, and quality tradeoffs. We also dive into Jensen's "SOL" (Speed of Light) first-principles urgency concept, long-context limits and model/hardware co-design, internal model APIs (https://build.nvidia.com), and upcoming Dynamo and agent sessions at GTC.

Full Video pod on YouTube

Timestamps

00:00 Agent Security Basics
00:39 Podcast Welcome and Guests
07:19 Acquisition and DevEx Shift
13:48 SOL Culture and Dynamo Setup
27:38 Why Scale Out Wins
29:02 Scale Up Limits Explained
30:24 From Laptop to Multi Node
33:07 Cost Quality Latency Tradeoffs
38:42 Disaggregation Prefill vs Decode
41:05 Kubernetes Scaling with Grove
43:20 Context Length and Co Design
57:34 Security Meets Agents
58:01 Agent Permissions Model
59:10 Build Nvidia Inference Gateway
01:01:52 Hackathons And Autonomy Dreams
01:10:26 Local GPUs And Scaling Inference
01:15:31 Long Running Agents And SF Reflections

Transcript

Agent Security Basics

Nader: Agents can do three things. They can access your files, they can access the internet, and now they can write custom code and execute it. You really only let an agent do two of those three things. If it can access your files and write custom code, you don't want it to have internet access, because that's where you see the full vulnerability, right? If it has access to the internet and your file system, you should know the full scope of what that agent is capable of doing. Otherwise you can get prompt-injected, or something like that can happen. And so that's a lot of what we've been thinking about: how do we enable this, because it's clearly the future, but also, what are the enforcement points we can put in to protect people?

swyx: All right.

Podcast Welcome and Guests

swyx: Welcome to the Latent Space podcast in the Chroma studio. Welcome to all the guests here. We are back with our guest host Vibhu. Welcome, good to have you back. And our friends Nader and Kyle from NVIDIA. Welcome.

Kyle: Yeah, thanks for having us.

swyx: Yeah, thank you. Actually, I don't even know your titles. I know you're like architect-something of Dynamo.

Kyle: Yeah.
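(A minimal sketch of the "two of three" agent-permission rule Nader describes in the cold open above. The capability names and helper function are invented for illustration; this is not Brev or NVIDIA code.)

```python
# Hypothetical illustration of the "two of three" rule: an agent may
# combine at most two of {file access, internet access, code execution}.
# Holding all three creates an exfiltration / prompt-injection risk.

ALL_CAPABILITIES = {"files", "internet", "execute_code"}  # assumed names

def is_safe_grant(requested: set[str]) -> bool:
    """Allow a capability set only if it uses at most two of the three."""
    risky = requested & ALL_CAPABILITIES
    return len(risky) <= 2

if __name__ == "__main__":
    print(is_safe_grant({"files", "execute_code"}))              # True
    print(is_safe_grant({"files", "internet", "execute_code"}))  # False
```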
Kyle: I'm one of the engineering leaders and an architect of Dynamo.

swyx: And you're director of something-and-developers? Developer tech?

Nader: Yeah.

swyx: You're the developers, developers, developers guy at NVIDIA.

Nader: Open source, agent marketing, Brev...

swyx: And like...

Nader: Devrel tools and stuff. That's been the focus.

swyx: And we're recording this ahead of NVIDIA GTC, which is coming to town again, or taking over town, and which we'll all be at. We'll talk a little bit about your sessions and stuff.

Nader: We're super excited for it.

GTC Booth Stunt Stories

swyx: One of my favorite memories of Nader: you always do marketing stunts. While you were at Brev, you had this surfboard that you went down to GTC with, and NVIDIA apparently liked it so much that they bought you. What was that like?

Nader: Yeah. Our logo was a shaka. We were always just trying to stay true to who we were. With some startups, you're trying to pretend that you're a bigger, more mature company than you are. And it was actually Evan Conrad from SF Compute who was just like, you guys are...

swyx: Previous guest. Yeah.

Nader: Amazing. Oh, really? Amazing. Yeah. He was just like, guys, you're two dudes in a room. Why are you pretending that you're not? And so then we were like, okay, let's make the logo a shaka. We brought surfboards to our booth at GTC, and the energy was great. Some palm trees too.

Kyle: They actually poked out over the walls, so you could see the Brev booth just from very far away.

Nader: Oh, that's so funny. So you remember it back then?

Kyle: Yeah, I remember it pre-acquisition. I was like, oh, those guys look cool.

Nader: That makes sense. 'Cause we signed up really last minute, and so we had the last booth, all the way in the corner. I was worried that no one was gonna come. So that's why we had the palm trees, and we really came in with the surfboards. We even had one of our investors bring her dog, and she was just walking the dog around to try to bring energy towards our booth.

swyx: Steph.

Kyle: Yeah. She's the best.

swyx: You know, as a conference organizer, I love that. Everyone who sponsors a conference comes and does their booth: "we are changing the future of AI," or some generic b******t. No: actually try to stand out, make it fun, right? And people still remember it after three years.

Nader: Yeah. You know what's so funny? I'll give you this clip if you want to add it in, but my wife, at the time my fiancée, was in medical school, and she came to help us, 'cause it was a big moment for us. We bought this Cricut, which is like a vinyl printer, 'cause how else are we gonna label the surfboard? So we got a surfboard (luckily I was able to purchase that on the company card), we got the Cricut, and it was something like "fine-tuning for enterprises" that we put on the surfboard. And it's 1:00 AM the day before we go to GTC, she's helping me put these vinyl stickers on, and she goes, "If you pull this off, you son of a b***h." And so, uh, right.
Pretty much right after the acquisition, I stitched that clip together with some music and sent it to our family group chat.

swyx: Oh yeah. No, well, she made a good choice there. Was that basically the origin story for Launchables? And maybe we should explain what Brev is.

Nader: Yeah. Brev is just a developer tool that makes it really easy to get a GPU. We connect a bunch of different GPU sources. The basics of it is: how quickly can we SSH you into a GPU? Whenever we would talk to users, they wanted a GPU. They wanted an A100. And if you go to any cloud provisioning page, usually it's three pages of forms, or in the forms somewhere there's a dropdown, and in the dropdown there's some weird code that you know to translate to an A100. And I remember just thinking: every time someone says they want an A100, the piece of text that they're telling me they want is stuffed away in a corner. So we were like, what if the biggest piece of text was what the user's asking for? And so when you go to Brev, it's just big GPU chips with the type that you want, with...

swyx: ...beautiful animations that you worked on. Now you can just prompt it, but back in the day, those were handcrafted artisanal code.

Nader: Yeah. I was actually really proud of that, because I made it in Figma, and then I was really struggling to figure out how to turn it from Figma into React. So what it actually is, is just an SVG. I have all the styles, and when you change the chip, whether it's active or not, it changes the SVG code, and that renders so it looks like it's animating; we just made the transition slow. It's just a JavaScript function that changes the underlying SVG. That was how I ended up figuring out how to move it over from Figma. But yeah, that's artisanal.

Kyle: Speaking of marketing stunts, though, he actually used those SVGs to make these cards.

Nader: Oh yeah.

Kyle: Like a GPU gift card that he handed out everywhere. That was actually my first impression of that one.

Nader: Yeah.

swyx: I think I still have one of them.

Nader: They look great. I have a ton of them still in our garage, actually; they just don't have labels. We should honestly bring them back. I found this old printing press here, actually, just around the corner on Van Ness. It's a third-generation San Francisco shop. So I come in, an excited startup founder, and they have this crazy old machinery, and I'm in awe, 'cause the whole building is so physical. You're seeing these machines: they have pedals to move these saws and whatever, I don't know what this machinery is. But I saw all three generations: the grandpa, the father, and the son, and the son was around my age.

swyx: It's like a holy trinity.

Nader: It's funny, because I just took the same SVG and we printed it. It's foil printing, so they make a mold that's an inverse of the A100, and then they put the foil on it and press it into the paper. And I remember, once we got them, he was like, hey, don't forget about us.
You know, I guess early Apple's and Cisco's first business cards were all made there. And so he was like, yeah, we get the startup businesses, but as they mature, they kind of go somewhere else. I think we were talking with marketing about using them for something; we should go back and make some cards.

swyx: Yeah. You know, I remember, as a very, very small Brev investor, I was like, why are we spending time doing these stunts for GPUs? As a typical cloud hardware person, you go into AWS, you pick T5-whatever-XL from a list, and you look at the specs. Why animate this GPU? And I do think it just shows the level of care that goes throughout Brev. And now NVIDIA.

Nader: And NVIDIA. I think that's what struck me most when we first came in: the amount of passion that everyone has. You talk to Kyle; every VP that I've met at NVIDIA goes so close to the metal. I remember, almost a year ago, my VP asked me, "Hey, what's Cursor? Are you using it? And if so, why?" And he downloaded Cursor and was asking me to help him use it, or just show him why we were using it. So, the amount of care that everyone has, and the passion and appreciation for the moment. This is a very unique time, and it's really cool to see everyone really appreciate that.

swyx: Yeah.

Acquisition and DevEx Shift

swyx: One thing I wanted to do before we move over to research topics and the stuff that Kyle's working on is just tell the story of the acquisition. Not many people have been through an acquisition with NVIDIA. What's it like?

Nader: It's a crazy experience. The thing that was most exciting for us: our goal was just to make it easier for developers. We wanted to make it easier to find and access GPUs. And then... oh, actually, your question about Launchables. A Launchable is just a one-click deploy for any software on top of the GPU. What we really liked about NVIDIA was that it felt like we just got a lot more resources to do all of that. NVIDIA's goal is to make things as easy for developers as possible, so there was a really nice synergy there. When it comes to an acquisition, I think the degree to which the souls of the products align is going to speak to the success of the acquisition. So in many ways it feels like we're home. This is a really great outcome for us. Like, I love brev.nvidia.com. You should use it.

Kyle: It's the front page for GPUs.

Nader: Yeah. If you want GPUs...

Kyle: ...you go there and get it there.

swyx: And internally it's growing very quickly. I don't remember, you said some stats there.

Nader: Yeah, I wish I had the exact numbers, but internally and externally, it's been growing really quickly.
We've been working with a bunch of partners, a bunch of different customers and ISVs. If you have a solution that runs on a GPU and you want people to use it quickly, we can bundle it up in a Launchable and make it a one-click run. And if you want just a sandbox to run things on, like OpenClaw: huge moment, super exciting, and we'll get into it more. Internally, people want to run this, and we know we have to be really careful about the security implications. Do we let this run on the corporate network? Security's guidance was: hey, run this on Brev. It's a VM, it's sitting in the cloud, it's off the corporate network, it's isolated. And so that's been our stance, internally and externally, about how to even run something like OpenClaw while we figure out how to run these things securely.

swyx: I think you were also almost the right team at the right time, when NVIDIA is starting to invest a lot more in developer experience, or whatever you call it. UX, or, I don't know, software. Obviously NVIDIA has always invested in software, but this is a different audience.

Nader: A wider developer base.

swyx: Yeah. So what is it called internally? What should people be aware of that is going on there?

Nader: What, like developer experience? NVIDIA always wants to make a good developer experience. The thing is, a lot of the technology is just really complicated. The thing that's been really growing: AI is having a huge moment not because, say, data scientists in 2018 were quiet then and are much louder now. The pie is growing, right? There's a whole bunch of new audiences. My mom's wondering what she's doing; my sister taught herself how to code. I actually think AI is generally a big equalizer, and you're seeing a more technologically literate society. Everyone's learning how to code; there isn't really an excuse not to anymore. And building a good UX means that you really understand who your end user is. When your end user becomes such a wide variety of people, you have to almost reinvent the practice, right?

Kyle: You have to actually build more developer UX, right? Because there are tiers of developer base that were added. The hackers building on top of OpenClaw, for example, have never used a GPU. They don't know what CUDA is. They just want to run something.

Nader: Yeah.

Kyle: You need new UX that is not just "how do you program something in CUDA and run it." When deep learning was getting big, we built for Torch. But recently the number of layers added to that developer stack has just exploded, because AI has become ubiquitous. Everyone's using it in different ways.

Nader: It's moving fast in every direction. Vertical, horizontal.

Vibhu: Yeah.
You even take it down to hardware, like the DGX Spark. It's basically the same system, versus just throwing it up on a big GPU cluster.

Nader: Yeah, yeah. It's amazing. Blackwell.

swyx: Yeah. We saw the preview at last year's GTC, and that was one of the better-performing videos, and NVIDIA coverage, so far.

Nader: Awesome.

swyx: This will beat it.

Nader: Fingers crossed. Yeah.

DGX Spark and Remote Access

Nader: Even when Grace Blackwell, or when DGX Spark, was first coming out, getting to be involved in that from the beginning of the developer experience... it just comes back to what you were saying.

swyx: You were involved?

Nader: Yeah, since the start. I mean, I got an email; we just got thrown into the loop. It was actually really funny, 'cause I'm still pretty fresh from the acquisition and I'm getting an email from a bunch of the engineering VPs about the new hardware, the GPU chip... or not chip, but GPU system, that we're putting out. And I'm like, okay, cool, Nader's now involved with this for the UX. What am I gonna do here? I remember the first meeting, I was just kind of quiet as I heard engineering VPs talk about what this box could be, what it could do, how we should use it. And one of the first ideas, I think a quote, was: "The first thing someone's gonna want to do with this is get two of them and run a Kubernetes cluster on top of them." And I was like, oh, I think I know why I'm here. The first thing we're doing is easy SSH into the machine. And just scoping it down: the person who wants to run a Kubernetes cluster on two Sparks has a higher propensity for pain than someone who buys it and wants to run OpenClaw right now, right? If you can make sure that that's as effortless as possible, then the rest becomes easy. So there's a tool called NVIDIA Sync. It just makes the SSH connection really simple. So if you have a Mac or a PC or whatever, if you have a laptop and you buy this GPU and you want to use it, you should be able to use it like it's a GPU in the cloud, right? But there's all this friction in how you actually get into it. That's part of Brev's value proposition: there's a CLI that wraps SSH and makes it simple. Our goal is just to get you into that machine really easily. And one thing we just launched at CES (it's still in early access, we're ironing out some kinks, but it should be ready by GTC): you can register your Spark on Brev.

swyx: Like remote-managed local hardware. Single pane of glass. Because Brev can already manage other clouds anyway, right?

Vibhu: Yeah, yeah. And you can use the Spark on Brev as well, right?

Nader: Yeah, exactly. So you set it up at home, you run the command on it, and then it essentially appears in your Brev account. Then you can take your laptop to a Starbucks or a cafe and continue to use your Spark just like any other cloud node on Brev.

swyx: And it's just like a pre-provisioned data center in your home.

Nader: Yeah, exactly.

swyx: Yeah.
Vibhu: Tiny little data center.

Nader: Tiny little, the size of...

Vibhu: ...your phone.

SOL Culture and Dynamo Setup

swyx: One more thing before we move on to Kyle. You have so many Jensen stories, and I just love mining Jensen stories. My favorite so far is SOL. What is SOL?

Nader: SOL is actually, of all the lessons I've learned, definitely my favorite.

Kyle: It'll always stick with you.

Nader: Yeah. You know, in your startup, everything's existential, right? We've run out of money. We were at risk of missing payroll. We've had to contract our team because we ran out of money. And because of that, you're always forcing yourself to understand the root cause of everything. If you get a date, if you get a timeline, you know exactly why that date or timeline is there. You're pushing every boundary, and you're not just accepting a "no" just because. As you start to introduce more layers, as you become a much larger organization: SOL is essentially, what is the physics? The speed of light moves at a certain speed, so if light's moving any slower, then something's in the way. So before trying to layer reality back in (why can't this be delivered by some date?), let's just understand the physics. What is the theoretical limit to how fast this can go? And then start telling me why. Otherwise people will start telling you why something can't be done. But actually, I think any great leader's goal is just to create urgency.

Kyle: Create compelling events, right?

Nader: Yeah.

Kyle: So SOL is a term NVIDIA uses to instigate a compelling event. You say: this is done. How do we get there? What is the minimum, as-much-as-necessary, as-little-as-possible thing that it takes for us to get exactly here? It helps you just break through a bunch of noise instantly.

swyx: One thing I'm unclear about is: can only Jensen play the SOL card? Like, "everyone, get the b******t out," because obviously it's Jensen. But can someone else do it?

Kyle: No, no, frontline engineers use it.

Nader: Yeah. I think it's not so much about "get the b******t out." It's: give me the root understanding, right? If you tell me something takes three weeks, well, what are the first principles? Why is it three weeks? What is the actual limit on why this is gonna take three weeks? Let's say you wanted to buy a new computer and someone told you it's gonna be here in five days. What's the SOL? Well, the SOL is: I could walk into a Best Buy and pick it up for you. So anything beyond that: is that practical? Is that how we're gonna, say, give everyone in the company a laptop? Obviously not. So that's the SOL, and then it's like, okay, well, if we have to get more than ten, suddenly there might be some lead time, right? And so now we can piece the reality back together.

swyx: So this is the Paul Graham "do things that don't scale." And this is also what people would now call high agency.
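(A toy illustration of SOL in the hardware sense, which Kyle expands on below: if one decode step cannot go faster than reading every weight once at full memory bandwidth, that read time is the speed-of-light bound. The numbers here are illustrative assumptions, not official specs.)

```python
# Back-of-envelope "SOL" bound for a memory-bound decode step:
# read all model weights once at full HBM bandwidth.

def sol_decode_step_ms(n_params: float, bytes_per_param: float, hbm_gb_s: float) -> float:
    """Lower bound on one decode step, in milliseconds."""
    bytes_to_read = n_params * bytes_per_param
    return bytes_to_read / (hbm_gb_s * 1e9) * 1e3

# Example (assumed figures): a 70B-parameter model in FP8 on a GPU
# with roughly 3,350 GB/s of HBM bandwidth.
print(f"{sol_decode_step_ms(70e9, 1.0, 3350):.1f} ms per token at SOL")
# Anything slower than this bound means something other than physics is in the way.
```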
Kyle: It's actually really interesting, because there's a second hardware angle to SOL that doesn't come up for all the orgs. SOL is used culturally at NVIDIA for everything.

swyx: I'm also mining for... I think that can be annoying sometimes. Someone keeps pulling SOL on you and you're like, guys, we have to be stable, we have to f*****g plan.

Kyle: It's an interesting balance.

Nader: Yeah. I encounter that actually with Alec, right? 'Cause we have a new conference, so we have goals for what we want to launch by the conference, and yeah, at the end of the day, where is this?

swyx: This GTC?

Nader: Well, we did it for CES, we did it for GTC DC, and before that, GTC San Jose. Every time, you know, we have a new moment, and we want to launch something, and we want to do so at SOL. And that does mean there's some level of prioritization that needs to happen. So it is difficult, right? You have to be careful with what you're pushing. Stability is important, and that should be factored into SOL. SOL isn't just "build everything and let it break"; that's part of the conversation. As you're layering in all the details, one of them might be: hey, we could build this, but then it's not gonna be stable for X, Y, Z reasons. One of our conversations for CES was: we can get registering your Spark with Brev into early access, but there are a lot of things we need to do in order to feel really comfortable from a security perspective (there's a lot of networking involved) before we deliver that to users. So it's like, okay, let's get this to a point where we can at least let people experiment with it. We had it in a booth, we had it in Jensen's keynote, and then let's go iron out all the networking kinks. And that's not easy. So that can come later. That was the way we layered that back in.

Kyle: It's not really about saying you don't have to do the maintenance or operational work. It's more that it highlights how progress is incremental, right? What is the minimum thing that we can get to? And then there's SOL for every component after that. But there's the SOL to get you to the starting line, and that's usually how it's asked. On the other side, SOL came out of hardware at NVIDIA. SOL is literally: if we ran the accelerator, the GPU, at basically full speed, with no other constraints, how fast would we be able to make a program go?

swyx: Yeah. So in training, you then work back to some percentage of, like, MFU, for example.

Kyle: Yeah, that's a great example. There's an SOL MFU, and then there's what's practically achievable.

swyx: Cool. Should we move on to Kyle's side? Kyle, you're coming more from the data science world. And whenever I meet someone who's worked in tabular stuff, graph neural networks, time series: when I go to NeurIPS or ICML, I walk the back halls, and there's always a small group of graph people. Yes.
An absolutely small group of tabular people. And like, there's no one there, you know what I mean? It's important, interesting work if you care about solving the problems that they solve.

Kyle: Yeah.

swyx: But everyone else is just LLMs all the time.

Kyle: Yeah. It's like the black hole, right? Has the event horizon reached this yet at NeurIPS?

swyx: But, you know, those are transformers too, and those are also interesting things. Anyway, I just wanted to spend a little bit of time on that background before we go into Dynamo proper.

Kyle: Yeah, sure. I took a different path to NVIDIA than that. I joined six years ago, seven if you count when I was an intern. So I joined NVIDIA right out of college, and the first thing I jumped into was not what I'd done during the internship, which was some stuff for autonomous vehicles, like heavyweight object detection. I jumped into recommenders: this was popular.

swyx: Yeah, he did recsys as well.

Kyle: Yeah, recsys. I mean, that was the tabular data at the time, right? You have tables of audience qualities and item qualities, and you're trying to figure out which member of the audience matches which item, or, more practically, which item matches which member of the audience. And at the time, really, we were trying to turn recommenders, which had historically been a bit of a CPU-based workload, into something that ran really well on GPUs. And it's since been done: there are a bunch of libraries for recsys that run on GPUs. The common models, like the Deep Learning Recommendation Model that came out of Meta and the Wide & Deep model released by Google, were very accelerated by GPUs, using the fast HBM on the chips, especially to do vector lookups. It was very interesting at the time, and super relevant, because we were starting to get this explosion of feeds and things that required recommenders to just actively be on all the time. I sort of transitioned a little bit towards graph neural networks when I discovered them, because you can actually use graph neural networks to represent relationships between people, items, and concepts, and that interested me. So I jumped into that at NVIDIA and got really involved for about two years.

swyx: Yeah. And something I learned from Bryan Catanzaro is that you can just kind of choose your own path at NVIDIA.

Kyle: Oh my God. Yeah.

swyx: Which is not a normal big-corp thing. You have a lane, you stay in your lane.

Nader: That's probably the reason why I enjoy being in a big company: the mission is the boss. Coming from a startup guy.

swyx: The mission is the boss.

Nader: Yeah. It feels like a big game of pickup basketball. If you want to play basketball, you just go up to the court and you're like, hey look, we're gonna play this game and we need three. And you just find your three. Honestly, for every new initiative, that's what it feels like.

Vibhu: It also shows, right? NVIDIA is just releasing state-of-the-art stuff in every domain. Yeah.
Like, okay, you expect foundation models with Nemotron; then voice, and Parakeet just randomly comes out, then another one.

Kyle: The NVIDIA voice team has always been producing.

Vibhu: Yeah. There's always, in every other domain, a paper that comes out, a dataset that comes out. I mean, it also stems back to what NVIDIA has to do, right? You have to design chips years before they're actually produced. So you need to know; you need to really focus.

Kyle: The design process starts, like...

Vibhu: Exactly.

Kyle: ...three to five years before the chip gets to market.

Vibhu: Yeah. I'm curious more about what that's like. You have specialist teams: is it just that people find an interest, you go in, you go deep on whatever, and that feeds back into your predictions? The internals at NVIDIA must be crazy, right? Even without selling to people, you have your own predictions of where things are going, and they're very grounded, right?

Kyle: Yeah, it's really interesting. There are two things that I think NVIDIA does which are quite interesting. One is that we really index on passion. There's a big organizational top-down push to ensure that people are working on the things that they're passionate about. So if someone proposes something that's interesting, many times they can just email someone way up the chain who would find it relevant and say, hey, can I go work on this?

Nader: I actually worked at a big company for a couple of years before starting on my startup journey, and it felt very weird to email out of your chain, if that makes sense. The emails at NVIDIA are like mosh pits.

swyx: Shoot.

Nader: It's just like 60 people, just whatever.

swyx: They get messy, like reply-all?

Nader: Oh, it's insane. It's insane.

Kyle: They just help, you know. Max out the context.

Nader: But that's actually... this is a weird thing, where I used to be like, why would we send emails? We have Slack. Now I'm the exact opposite. I feel so bad for anyone who's messaging me on Slack, 'cause I'm so unresponsive.

swyx: You're email-maxxing.

Nader: I'm email-maxxing now. Email is perfect, right, because important threads get bumped back up. Slack doesn't do that. I just have this casino going off on the right or on the left, and I don't know which thread was from where or what. And then there's also the subject line, so you can have working threads. I think what's difficult is when you're small: if you're not 40,000 people, Slack will work fine, but I don't know what the inflection point is. There is gonna be a point where that becomes really messy and you'll actually prefer email, 'cause you can have working threads. You can cc more than nine people in a thread.

Kyle: You can fork stuff.
Nader: You can fork stuff, which is super nice. And so that is part of it: you can propose a plan, or you can also just start. Honestly, momentum's the only authority, right? If you can just start, make a little bit of progress, and show someone something, then they can try it. That's, I think, the most effective way to push anything forward, both at NVIDIA and just generally.

Kyle: Yeah. There's the other concept that's explored a lot at NVIDIA, which is this idea of a zero-billion-dollar business. Market creation is a big thing at NVIDIA.

swyx: Oh, you want to go and start a zero-billion-dollar business?

Kyle: Jensen says: we are completely happy investing in zero-billion-dollar markets. We don't care if this creates revenue. It's important for us to know about this market; we think it will be important in the future. It can be zero billion dollars for a while. I'm probably mangling his words here, but, you know, I'll give an example. NVIDIA's been working on autonomous driving for a long time.

swyx: Like an NVIDIA car?

Kyle: No, they've...

Vibhu: They use the Mercedes, right? They're around the HQ, and I think it finally just got licensed out. Now they're starting to be used quite a bit. For ten years you've been seeing Mercedes with NVIDIA logos driving around.
Driving, driving, seeing just an inference ad on 1 0 1 inference at scale is becoming a lot more important. Uh, we have these moments like, you know, open claw where you have these [00:27:00] agents that take lots and lots of tokens, but produce, incredible results.There are many different aspects of test time scaling so that, you know, you can use more inference to generate a better result than if you were to use like a short amount of inference. There's reasoning, there's quiring, there's, adding agency to the model, allowing it to call tools and use skills.Dyno sort came about at Nvidia. Because myself and a couple others were, were sort of talking about the, these concepts that like, you know, you have inference engines like VLMS, shelan, tenor, TLM and they have like one single copy. They, they, they sort of think about like things as like one single copy, like one replica, right?Why Scale Out WinsKyle: Like one version of the model. But when you're actually serving things at scale, you can't just scale up that replica because you end up with like performance problems. There's a scaling limit to scaling up replicas. So you actually have to scale out to use a, maybe some Kubernetes type terminology.We kind of realized that there was like. A lot of potential optimization that we could do in scaling out and building systems for data [00:28:00] center scale inference. So Dynamo is this data center scale inference engine that sits on top of the frameworks like VLM Shilling and 10 T lm and just makes things go faster because you can leverage the economy of scale.The fact that you have KV cash, which we can define a little bit later, uh, in all these machines that is like unique and you wanna figure out like the ways to maximize your cash hits or you want to employ new techniques in inference like disaggregation, which Dynamo had introduced to the world in, in, in March, not introduced, it was a academic talk, but beforehand.But we are, you know, one of the first frameworks to start, supporting it. And we wanna like, sort of combine all these techniques into sort of a modular framework that allows you to. Accelerate your inference at scale.Nader: By the way, Kyle and I became friends on my first date, Nvidia, and I always loved, ‘cause like he always teaches meswyx: new things.Yeah. By the way, this is why I wanted to put two of you together. I was like, yeah, this is, this is gonna beKyle: good. It's very, it's very different, you know, like we've, we, we've, we've talked to each other a bunch [00:29:00] actually, you asked like, why, why can't we scale up?Nader: Yeah.Scale Up Limits ExplainedNader: model, you said model replicas.Kyle: Yeah. So you, so scale up means assigning moreswyx: heavier?Kyle: Yeah, heavier. Like making things heavier. Yeah, adding more GPUs. Adding more CPUs. Scale out is just like having a barrier saying, I'm gonna duplicate my representation of the model or a representation of this microservice or something, and I'm gonna like, replicate it Many times.Handle, load. And the reason that you can't scale, scale up, uh, past some points is like, you know, there, there, there are sort of hardware bounds and algorithmic bounds on, on that type of scaling. So I'll give you a good example that's like very trivial. Let's say you're on an H 100. 
The maximum NVLink domain for H100, for most DGX H100s, is eight GPUs, right? So if you scaled up past that, you're gonna have to figure out ways to handle the fact that now, for the GPUs to communicate, you have to do it over InfiniBand, which is still very fast but is not as fast as NVLink.

swyx: Is it like one order of magnitude? Like hundreds?

Kyle: It's about an order of magnitude, yeah. I need to remember the data sheet here; I think it's about 500 gigabytes a second unidirectional for NVLink, and about 50 gigabytes a second unidirectional for InfiniBand. It depends on the generation.

swyx: Not terrible. I just want to set this up for people who are not familiar with these kinds of layers and the transfer speeds.

Vibhu: And all that, of course.

From Laptop to Multi Node

Vibhu: Also, maybe just going a few steps back: most people are very familiar with what you can run on your laptop. With these, you know, you can just run inference there.

Kyle: You can run it on that laptop.

Vibhu: You can run it on a laptop. Then you get to, okay, models got pretty big, right? GLM-5: they doubled the size. So what do you do when you have to go from "I can get 128 gigs of memory, I can run it on a Spark" to multi-GPU? Okay, multi-GPU, there's some support there. Now, if I'm a company, and I'm not hiring the best researchers for this, but I need to go multi-node, right? I have a lot of servers. Okay, now there are efficiency problems. You can have multiple eight-H100 nodes, but how do you do that efficiently?
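(One rough way to make Vibhu's sizing question concrete, before Kyle answers: how many GPUs does a model need just to hold its weights, and does that fit inside one NVLink domain? The 8-GPU domain figure comes from Kyle's H100 example above; the 20% headroom factor and all other numbers are assumptions for illustration.)

```python
# Hedged sizing sketch: minimum GPU count to hold model weights
# plus some slack for KV cache and activations.
import math

def min_gpus(n_params: float, bytes_per_param: float,
             gpu_mem_gb: float, headroom: float = 0.2) -> int:
    needed_bytes = n_params * bytes_per_param * (1 + headroom)
    return math.ceil(needed_bytes / (gpu_mem_gb * 1e9))

gpus = min_gpus(70e9, 2.0, 80)  # e.g. 70B params in BF16 on 80 GB H100s
print(f"needs at least {gpus} GPUs;",
      "fits one NVLink domain" if gpus <= 8 else "must scale out over InfiniBand")
```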
And that doesn't mean like, you know, people aren't taking a shot at this, like tinker from thinking machines, you know?Yeah. RL as a service. Yeah, totally. It's, it also gets even harder when you try to do big model training, right? We're not the best at training Moes, uh, when they're pre-trained. Like we saw this with LAMA three, right? They're trained in such a sparse way that meta knows there's gonna be a bunch of inference done on these, right?They'll open source it, but it's very trained for what meta infrastructure wants, right? They wanna, they wanna inference it a lot. Now the question to basically think about is, okay, say you wanna serve a chat application, a coding copilot, right? You're doing a layer of rl, you're serving a model for X amount of people.Is it a chat model, a coding model? Dynamo, you know, back to that,Kyle: it's [00:33:00] like, yeah, sorry. So you we, we sort of like jumped off of, you know, jumped, uh, on that topic. Everyone has like, their own, own journey.Cost Quality Latency TradeoffsKyle: And I, I like to think of it as defined by like, what is the model you need? What is the accuracy you need?Actually I talked to NA about this earlier. There's three axes you care about. What is the quality that you're able to produce? So like, are you accurate enough or can you complete the task with enough, performance, high enough performance. Yeah, yeah. Uh, there's cost. Can you serve the model or serve your workflow?Because it's not just the model anymore, it's the workflow. It's the multi turn with an agent cheaply enough. And then can you serve it fast enough? And we're seeing all three of these, like, play out, like we saw, we saw new models from OpenAI that you know, are faster. You have like these new fast versions of models.You can change the amount of thinking to change the amount of quality, right? Produce more tokens, but at a higher cost in a, in a higher latency. And really like when you start this journey of like trying to figure out how you wanna host a model, you, you, you think about three things. What is the model I need to serve?How many times do I need to call it? What is the input sequence link was [00:34:00] the, what does the workflow look like on top of it? What is the SLA, what is the latency SLA that I need to achieve? Because there's usually some, this is usually like a constant, you, you know, the SLA that you need to hit and then like you try and find the lowest cost version that hits all of these constraints.Usually, you know, you, you start with those things and you say you, you kind of do like a bit of experimentation across some common configurations. You change the tensor parallel size, which is a form of parallelismVibhu: I take, it goes even deeper first. Gotta think what model.Kyle: Yes, course,ofKyle: course. It's like, it's like a multi-step design process because as you said, you can, you can choose a smaller model and then do more test time scaling and it'll equate the quality of a larger model because you're doing the test time scaling or you're adding a harness or something.So yes, it, it goes way deeper than that. But from the performance perspective, like once you get to the model you need, you need to host, you look at that and you say, Hey. I have this model, I need to serve it at the speed. 
What is the right configuration for that?

Nader: Did you guys see the recent paper (I just saw it a few days ago) that if you run the same prompt twice, you're getting, like, double...

swyx: Just try it again.

Nader: Yeah, exactly.

Vibhu: And you get a lot. But the key thing there is that you give it the context of the failed try, right? So it takes a shot. And this has been basic guidance for quite a while: just try again. Did you try again?

Nader: All advice in life.

Vibhu: It's a paper from Google, if I'm not mistaken. It's a short little paper; the title's very cute. And it's just: yeah, just try again, give it the context.

Kyle: Multi-shot. You just say: hey, take a little bit more information, the try and the fail.

Vibhu: And that basic concept has gone pretty deep. There's self-distillation RL, where you do self-distillation, you do RL, and you have past failures, and that gives some signal. So people take "just try again" further.

swyx: For listeners who have gotten this far: Vibhu and I run a second YouTube channel for our paper club, where...

Kyle: Oh, that's awesome.

swyx: ...Vibhu just covered this.

Vibhu: Yeah, self-distillation and all that. That's why I'm up to speed on it.

Nader: I'll have to check it out.

swyx: It's just a good practice. Everyone needs a paper club, where you just read papers together and the social pressure kind of forces you to keep up.

Nader: There's a big inference reading group at NVIDIA. I feel so bad every time; he shared it.

swyx: One of your guys is big in that, I forget. Eshan? Yeah.

Kyle: Eshan's on my team, actually. Funny: there's an employee transfer between us. Eshan worked for Nader at Brev, and now he's on my team.

Nader: He was our head of AI. And then, yeah, once we got in...

swyx: Because I'm always looking for, okay, can I start another podcast that only does that thing? And I was trying to nudge Eshan into, is there something here? I mean, there are new inference techniques every day.

Kyle: You would actually be surprised at the number of blog posts you see.

swyx: There was a period where it was Medusa, Hydra, Eagle, you know.

Kyle: Now we have new forms of decode, new forms of speculative decoding, or new...

swyx: What are you excited about?

Vibhu: And it's exciting when you guys put out something like Nemotron. I remember the Nemotron 3 paper: the amount of post-training tokens that the GPU-rich can just train on. And it was a hybrid state-space model, right?

Kyle: It's co-designed for the hardware.

Vibhu: Yeah, co-designed for the hardware. And one of the objections was always that state-space models don't scale as well when you do a conversion, or whatever, the performance. And you guys are like, no, just keep training. And Nemotron shows a lot of that.

Nader: Also, something cool about Nemotron: it was released in layers, if you will, very similar to Dynamo. The pre-training and post-training datasets are released.
Nader: Also, something cool about Nemotron: it was released in layers, if you will, very similar to Dynamo. The pre-training and post-training datasets are released. The recipes on how to do it are released. The model itself is released, the full model; you just benefit from us turning on the GPUs. And there are companies like ServiceNow that took the dataset and trained their own model, and we were super excited and celebrated that work.

Vibhu: Also, just to add: a lot of models don't put out base models, and if the base model is missing, why would fine-tuning take off? You can do your own training.

Kyle: Sure.

Vibhu: You guys put out base models. I think you put out everything.

Nader: I believe so. [00:38:00]

swyx: Basically, without base models...

Vibhu: Yeah, base models can be cancelable. Safety training.

swyx: Did we get a full picture of Dynamo? I don't know if we did.

Nader: What I'd love is... you mentioned the three axes. Break it down: what's prefill, what's decode, and what are the optimizations we can get with Dynamo?

Kyle: Yeah, that's a great point. So, to summarize the three-axis problem: there are three things that determine whether or not something can be done with inference: cost, quality, and latency. Dynamo is supposed to provide the runtime that allows you to pull levers, to mix it up and move around the Pareto frontier, the Pareto surface that determines whether this is actually possible with inference and AI today.

Nader: It gives you the knobs.

Kyle: Yeah, exactly. It gives you the knobs.

Disaggregation: Prefill vs Decode

Kyle: One thing that we use a lot in contemporary inference, and that is starting to pick up in general knowledge, is this concept of disaggregation. Historically, models would be hosted with a single inference engine, and that inference engine [00:39:00] would ping-pong between two phases. There's prefill, where you're reading the sequence and generating KV cache, which is basically just a set of vectors that represent the sequence. And then you use that KV cache to generate new tokens, which is called decode. Some brilliant researchers, across multiple papers, essentially made the realization that if you separate these two phases, you gain some benefits.

The first benefit is that you don't have to worry about step-synchronous scheduling. The way an inference engine works is you do one step, you finish it, and then you start scheduling the next step; it's not fully asynchronous. The problem is that prefill and decode are actually very different in terms of both their resource requirements and, sometimes, their runtime. So you would have prefill blocking decode steps, because you'd still be prefilling and you couldn't schedule, since the step has to end. Separating the phases removes that scheduling issue, and it also lets you [00:40:00] split the work into two different types of worker pools.

So prefill, typically, and this changes as model architecture changes, is right now compute-bound most of the time; when the sequence is sufficiently long, it's compute-bound. On the decode side, because you're doing a full pass over all the weights and the entire sequence every time you do a decode step, and you don't have the quadratic computation over the KV cache, it's usually memory-bound: you're retrieving a linear amount of memory and doing a linear amount of compute, as opposed to prefill, where you retrieve a linear amount of memory and then do a quadratic amount of compute.
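To make the prefill/decode asymmetry concrete, here is a toy single-head attention cache in NumPy. This is an illustrative sketch only, not Dynamo or any production engine: prefill is one big matrix-matrix pass over the prompt (compute-heavy, quadratic in prompt length), while each decode step is matrix-vector work against the whole cached sequence (memory-heavy, linear per step).

```python
import numpy as np

d = 64  # head dimension
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def prefill(prompt_embs: np.ndarray):
    """Process the whole prompt at once: matrix-matrix work (compute-bound)."""
    K = prompt_embs @ Wk                      # (T, d): the KV cache
    V = prompt_embs @ Wv                      # (T, d)
    Q = prompt_embs @ Wq                      # (T, d); T x T attention => quadratic in T
    scores = Q @ K.T / np.sqrt(d)
    # Causal mask: token t may only attend to tokens <= t.
    scores += np.triu(np.full(scores.shape, -np.inf), k=1)
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return K, V, probs @ V

def decode_step(x_emb: np.ndarray, K: np.ndarray, V: np.ndarray):
    """One new token: matrix-vector work against the whole cache (memory-bound)."""
    q, k, v = x_emb @ Wq, x_emb @ Wk, x_emb @ Wv
    K, V = np.vstack([K, k]), np.vstack([V, v])   # cache grows one row per step
    scores = K @ q / np.sqrt(d)                   # linear in current context length
    probs = np.exp(scores - scores.max()); probs /= probs.sum()
    return K, V, probs @ V

K, V, _ = prefill(rng.standard_normal((128, d)))  # read a 128-token prompt
for _ in range(8):                                # then generate 8 tokens
    K, V, out = decode_step(rng.standard_normal(d), K, V)
```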
Nader: It's funny: someone at Exo Labs did a really cool demo where, with the DGX Spark, which has a lot more compute, you do the compute-hungry prefill on the DGX Spark and then do the decode on a Mac.

Vibhu: And that's faster.

Nader: Yeah.

Kyle: So you can do that; you can do machine stratification. And with our future generations of hardware, we actually announced, with Rubin, this [00:41:00] new accelerator that is prefill-specific. It's called Rubin CPX.

Kubernetes Scaling with Grove

Nader: I have a question about when you do the scale-out. Is scaling out easier with Dynamo? Because when you need a new node, you can dedicate it to either prefill or decode.

Kyle: Yeah. Dynamo actually has a Kubernetes component in it called Grove that allows you to do this scaling specialization. I don't want to go too deep into Kubernetes here, but there was a previous way you would launch multi-node work called Leader Worker Set. It's in the Kubernetes standard, and Leader Worker Set is great; it served a lot of people super well for a long period of time. But one of the things it struggles with is representing cases where you have a multi-node replica that has a pair, you know, prefill and decode, or that isn't paired but has a second stage whose ratio changes over time. And prefill and decode are two different things as your workload changes, right? The amount of prefill you'll need to do may change. [00:42:00] The amount of decode you'll need to do might change. Say you start getting insanely long queries: that probably means your prefill scales harder, because you're hitting this quadratic scaling growth.

swyx: And for listeners: prefill would be long input, decode would be long output, for example, right?

Kyle: Yeah. Decode is funny, because the number of tokens you produce scales with the output length, but the amount of work you do per step scales with the number of tokens in the context. So it scales with both the input and the output. But if suddenly the amount of work you're doing on the decode side stays about the same, or scales a little, and the prefill side jumps up a lot, you don't want that ratio to stay fixed; you want it to change over time. So Dynamo has a set of components that, first, tell you how to scale, meaning how many prefill workers and decode workers it thinks you should have, and that also provide a scheduling API for Kubernetes that allows you to actually represent and effect this scheduling on your actual [00:43:00] hardware, your compute infrastructure.

Nader: Not gonna lie, I feel a little embarrassed for being proud of my SVG function earlier.

swyx: No, it...

Nader: ...was really...

Kyle: ...cute.
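A toy heuristic for the "ratio that changes over time" point above. The throughput constants are invented for illustration; this is not Dynamo's actual planner, just a sketch of why long queries move prefill demand without moving decode demand much:

```python
import math

def desired_workers(req_per_s: float, avg_prompt_tokens: float,
                    avg_output_tokens: float, avg_context_tokens: float,
                    prefill_tok_per_s: float = 200_000.0,
                    decode_steps_per_s: float = 5_000.0) -> tuple[int, int]:
    # Prefill work scales with prompt length; decode work scales with output
    # length times the context each step has to read back.
    prefill_load = req_per_s * avg_prompt_tokens
    decode_load = req_per_s * avg_output_tokens * (avg_context_tokens / 1_000.0)
    return (max(1, math.ceil(prefill_load / prefill_tok_per_s)),
            max(1, math.ceil(decode_load / decode_steps_per_s)))

# Chatty workload: short prompts, long outputs.
print(desired_workers(50, 500, 2_000, 2_000))      # (1, 40)
# Insanely long queries arrive: prefill demand jumps 10x, decode barely moves.
print(desired_workers(50, 40_000, 100, 40_000))    # (10, 40)
```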
swyx: It's all engineering, it's all engineering. That's where I'm technical. One thing I'm curious about, since you see everything going on here at a systems level, and we're scaling it up in distributed systems...

Context Length and Co-Design

swyx: I think one thing that's of the moment right now is people asking: is there any sort of upper bound? Let's just call it context length, for want of a better word, but you can break it down however you like.

Nader: Yeah.

swyx: Clearly you can engage in hybrid architectures and throw in state space models all you want, but it still looks very attention-heavy.

Kyle: Yes, long context is attention-heavy. I mean, we have these hybrid models...

swyx: And most models cap out at a million tokens of context, and that's it. For the last two years, that's been it.

Kyle: Yeah. The model-hardware-context co-design thing we're seeing these days is actually super [00:44:00] interesting. It's my secret side passion. We see models like Kimi or GPT-OSS; I use these as examples because I know specific things about these models. So Kimi K2 comes out, and it's an interesting model. It's a DeepSeek-style architecture, it's MLA; it's basically DeepSeek scaled a little differently, and obviously trained differently as well. But they talked about why they made the design choices for context. Kimi has more experts but fewer attention heads, and, I believe, a slightly smaller attention dimension, though I'd need to check that. It doesn't matter. They discussed this at length in a blog post on Zhihu.

swyx: Yeah.

Kyle: In China. Chinese Reddit. It's actually an incredible blog post. All the ML people I've seen on there are very brilliant, and the creators of Kimi K2 [00:45:00] talked about it there in the blog post. They say: we actually did an experiment. Attention scales with the number of heads, obviously. If you have 64 heads versus 32 heads, you do half the work of attention. You still scale quadratically, but you do half the work. And they made a very specific trade in their architecture. They basically said: what if we give it more experts, so we use more memory capacity, but keep the number of activated experts the same? We increase the expert sparsity, so the ratio of activated experts to total experts is smaller, and we decrease the number of attention heads.

Vibhu: And for context, what we had been seeing was that you make models sparser instead. No one was really touching heads.

Kyle: Well, they did implicitly make it sparser.

Vibhu: Yeah, for Kimi, they did. They also made it sparser. But basically, what we were seeing was that people stopped at the level of, okay, there's a sparsity ratio: you want more total parameters, fewer active, and that's sparsity.
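Back-of-envelope arithmetic for the trade described above. Halving attention heads at a fixed head dimension halves attention work, while adding experts at a fixed top-k grows total parameters (memory) without growing per-token compute. All sizes here are invented for illustration, not Kimi K2's actual configuration:

```python
def attention_flops(seq_len: int, n_heads: int, head_dim: int) -> float:
    # Roughly two seq x seq x head_dim matmuls per head (QK^T, then attn @ V);
    # still quadratic in sequence length either way.
    return 2 * 2 * n_heads * head_dim * seq_len ** 2

def moe_total_params(n_experts: int, params_per_expert: float) -> float:
    return n_experts * params_per_expert        # memory capacity

def moe_active_params(top_k: int, params_per_expert: float) -> float:
    return top_k * params_per_expert            # per-token compute

T, HD, E = 128_000, 128, 5e8
print(attention_flops(T, 64, HD) / attention_flops(T, 32, HD))  # 2.0: half the heads, half the work
for n_experts in (128, 256):
    print(n_experts, "experts, top-8:",
          f"total={moe_total_params(n_experts, E):.2e}",
          f"active={moe_active_params(8, E):.2e}")  # active compute stays 4.00e+09
```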
Vibhu: [00:46:00] But what you see from papers from labs like Moonshot and DeepSeek is that they go a level deeper: beyond just the number of experts, you can also change how many attention heads you have, fewer attention layers, more attention layers. And that all ties back together as hardware-model co-design.

Kyle: Hardware-model-context co-design. Right. If you were training a model for really, really short context, or one that's really good at super-short-context tasks, you might design it in a way where you don't care about attention scaling, because it never hits the turning point where the quadratic curve takes over.

Nader: How do you consider attention, or context, a separate part of the co-design? The way I would have thought of it, hardware-model co-design already is hardware-model-context co-design.

Kyle: Because the harness, and the context that is produced by the harness, is a part of the model once it's trained in.

Vibhu: Even though towards the end you'll do long context, you're not changing architecture through training.

Kyle: I mean, you can try.

swyx: You're saying [00:47:00] everyone's training the harness into the model.

Kyle: I would say to some degree.

swyx: There's co-design for the harness. I know there's a small amount, but I feel like not everyone has gone full send on this.

Kyle: I think it's important to internalize the harness that you think the model will be running into the model itself.

swyx: Yeah, interesting. Okay, Bash is like the universal harness.

Kyle: Right. I'll give an easy proof: if you can train against a harness, and you're using that harness for everything, wouldn't you just train with the harness to ensure you get the best possible quality out of it?

swyx: Well, I can provide a counter-argument, which is that you want to provide a generally useful model for other people to plug into their harnesses, right?

Kyle: Yeah, but harnesses can be open source, right?

swyx: Yeah. I mean, that's effectively what's happening with Codex. But you may want a different search tool, and then you may have to name it differently.

Nader: I don't know how much people have pushed on this, but have people compared training a model for the harness versus [00:48:00] post-training for it?

swyx: I think it's the same thing; it's just extra post-training. And Cognition does this, of course, where, if your tool is slightly different, you either force your tool to look like the tool they trained for, or you undo their training for their tool and then retrain. It's really annoying.

Kyle: I would hope that eventually we hit a certain level of generality with respect to training new tools.

swyx: This is not AGI. This is a really stupid "learn my tool" problem. I don't know if I can say that, but I think my point is: I look at the slopes of the scaling laws, and this slope is not working, man. We're at a million tokens of context...
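A sketch of the tool-aliasing workaround swyx describes, exposing a custom tool under the name and schema the model was post-trained on instead of retraining the model. All names here are hypothetical:

```python
# Harness-side registry: map the trained-in tool name onto your own
# implementation, so the model's tool calls resolve without retraining.

TRAINED_TOOL_NAME = "search"        # what the model expects from its harness

def my_grep_backend(query: str) -> str:
    # Your actual implementation, which the model never saw in training.
    return f"results for {query!r} from the custom backend"

TOOL_REGISTRY = {TRAINED_TOOL_NAME: my_grep_backend}

def dispatch(tool_call: dict) -> str:
    fn = TOOL_REGISTRY.get(tool_call["name"])
    if fn is None:
        raise KeyError(f"model called unknown tool {tool_call['name']!r}")
    return fn(**tool_call["arguments"])

print(dispatch({"name": "search", "arguments": {"query": "quadratic attention"}}))
```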

Oracle University Podcast
How Oracle Database@AWS Stays Secure and Available

Oracle University Podcast

Play Episode Listen Later Mar 3, 2026 16:42


When your business runs on data, even a few seconds of downtime can hurt. That's why this episode focuses on what keeps Oracle Database@AWS running when real-world problems strike.   Hosts Lois Houston and Nikita Abraham are joined by Senior Principal Database Instructor Rashmi Panda, who takes us inside the systems that keep databases resilient through failures, maintenance, and growing workloads.   Oracle Database@AWS Architect Professional: https://mylearn.oracle.com/ou/course/oracle-databaseaws-architect-professional/155574 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University. Nikita: Hi everyone! In our last episode, we explored the security and migration strengths of Oracle Database@AWS. Today, we're joined once again by Senior Principal Database Instructor Rashmi Panda to look at how the platform keeps your database available and resilient behind the scenes. 01:00 Lois: It's really great to have you with us, Rashmi. As many of you may know, keeping critical business applications running smoothly is essential for success. And that's why it's so important to have deployments that are highly resilient to unexpected failures, whether those failures are hardware-, software-, or network-related. With that in mind, Rashmi, could you tell us about the Oracle technologies that help keep the database available when those kinds of issues occur? Rashmi: Databases deployed in Oracle Database@AWS are built on Oracle's foundational high availability architecture. Oracle Real Application Clusters, or Oracle RAC, is an active-active architecture where multiple database instances run concurrently on separate servers, all accessing the same physical database stored on shared storage, to simultaneously process various application workloads. Even though each instance runs on a separate server, they collectively appear as a single unified database to the application. As the workload grows and demands additional computing capacity, new nodes can be added to the cluster to spin up new database instances that support the additional requirements. This enables you to scale out your database deployments without having to bring down your application and eliminates the need to replace existing servers with high-capacity ones, offering a more cost-effective solution. 02:19 Nikita: That's really interesting, Rashmi. It sounds like Oracle RAC offers both scalability and resilience for mission-critical applications. But of course, even the most robust systems require regular maintenance to keep them running at their best. So, how does planned maintenance affect performance?  Rashmi: Maintenance on databases can take a toll on your application uptime. 
Database maintenance activities typically include applying database patches or performing updates. Along with the database updates, there may also be updates to the host operating system. These operations often demand significant downtime for the database, which consequently leads to application downtime. Oracle Real Application Clusters provides rolling patching and rolling upgrade features, enabling patching and upgrades one node at a time without bringing down the entire cluster, which significantly reduces application downtime.  03:10 Lois: And what happens when there's a hardware failure? How does Oracle keep things running smoothly in that situation? Rashmi: In the event of an instance or hardware failure, Oracle RAC ensures automatic service failover. This means that if one of the instances or nodes in the cluster goes down, the system transparently fails the service over to an available instance in the cluster, ensuring minimal disruption to your application. This feature enhances the overall availability and resilience of your database.  03:39 Lois: That sounds like a powerful way to handle unexpected issues. But for businesses that need even greater resilience and can't afford any downtime, are there other Oracle solutions designed to address those needs? Rashmi: Oracle Exadata is the maximum availability architecture database platform for Oracle databases. The core design of Oracle Exadata is built around redundancy across networking, power supplies, and database and storage servers and their components. This robust architecture ensures protection against the failure of any individual component, effectively guaranteeing continuous database availability. The scale-out architecture of Oracle Exadata allows you to start your deployment with two database servers and three storage servers, with different numbers of CPU cores and different sizes and types of storage to meet current business needs. 04:26 Lois: And if a business suddenly finds demand growing, how does the system handle that? Is it able to keep up with increased needs without disruptions? Rashmi: As the demand increases, the system can be easily expanded by adding more servers, ensuring that performance and capacity grow with your business requirements. Exadata Database Service deployment in Oracle Database@AWS leverages these foundational technologies to provide a highly available database system. This is achieved by provisioning databases using Oracle Real Application Clusters, hosted on the redundant infrastructure provided by the Oracle Exadata infrastructure platform. This deployment architecture provides the ability to scale compute and storage to growing resource demands without the need for downtime. You can scale up the number of enabled CPUs symmetrically in each node of the cluster when there is a need for higher processing power, or you can scale out the infrastructure by adding more database and storage servers up to the Exadata infrastructure model limit, which in itself is huge enough to support any large workload. The Exadata Database Service running on Oracle RAC instances enables any maintenance on individual nodes or patching of the database to be performed with zero or negligible downtime. The rolling feature allows patching one instance at a time, while services seamlessly fail over to an available instance, ensuring that the application experiences little to no disruption during maintenance. 
Oracle RAC, coupled with Oracle Exadata redundant infrastructure, protects the Database Service from any single point of failure. This fault-tolerant architecture features redundant networking and mirrored disks, enabling automatic failover in the event of a component failure. Additionally, if any node in the cluster fails, there is zero or negligible disruption to the dependent applications. 06:09 Nikita: That's really impressive, having such strong protection against failures and so little disruption, even during scaling and maintenance. But let's say a company wants those high-availability benefits in a fully managed environment, so they don't have to worry about maintaining the infrastructure themselves. Is there an option for that? Rashmi: Similar to Oracle Exadata Database Service, Oracle Autonomous Database Service on dedicated infrastructure in Oracle Database@AWS also offers the same features, with the key difference being that it's a fully managed service. This means customers have zero responsibility for maintaining and managing the Database Service. This, again, uses the same Oracle RAC technology and Oracle Exadata infrastructure to host the Database Service, where most database activities are fully automated, providing you a highly available database with extreme performance capability. It provides an elastic database deployment platform that can scale up storage and CPU online, or can be enabled to autoscale storage and compute. Maintenance activities on the database, like database updates, are performed automatically without customer intervention and without the need for downtime, ensuring seamless operation of applications. 07:20 Lois: Can we shift gears a bit, Rashmi? Let's talk about protecting data and recovering from the unexpected. What Oracle technologies help guard against data loss and support disaster recovery for databases? Rashmi: Oracle Database Autonomous Recovery Service is a centralized backup management solution for Oracle Database services in Oracle Cloud Infrastructure. It automatically takes backups of your Oracle databases and securely stores them in the cloud. It ensures seamless data protection and rapid recovery for your database. It is a fully managed solution that eliminates the need for any manual database backup management, freeing you from the associated overhead. It implements an incremental-forever backup strategy, a highly efficient approach where only the changes since the last backup are identified and backed up. This approach drastically reduces the time and storage space needed for backup, as the size of the incremental changes is significantly lower than a full database backup. 08:17 Nikita: And what's the benefit of using this backup approach? Rashmi: The benefit of this approach is that your backups are completed faster, with far less compute and network resources, while still guaranteeing the full recoverability of your database in the event of a failure. You can achieve zero data loss with this backup service by enabling the real-time protection option, minimizing data loss by recovering data up to the last subsecond. It is highly recommended to enable this option for mission-critical databases that cannot tolerate any data loss, whether due to a ransomware attack or due to an unplanned outage. The protection policy can retain the protected database backups for a minimum of 14 days to a maximum of 95 days. The recovery service requires and enforces that backups are encrypted. These backups are compressed and encrypted during the backup process. The integrity of the backups is continuously validated without placing a burden on the production database. This ensures that the stored backup data is consistent and recoverable when needed, and protects against malicious user activity or any ransomware attack. With a strict policy-based retention strategy, it prevents modification or deletion of backup data by malicious users. 
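A conceptual sketch of the incremental-forever idea described above, assuming a simple block-hash scheme: after one initial full backup, each run stores only blocks whose content hash changed. This is an illustration of the strategy, not Oracle's implementation:

```python
import hashlib

backup_store: dict[int, list[bytes]] = {}   # block_id -> versions, newest last
last_seen_hash: dict[int, str] = {}

def backup(blocks: dict[int, bytes]) -> int:
    """Back up only changed blocks; return how many were uploaded."""
    uploaded = 0
    for block_id, data in blocks.items():
        digest = hashlib.sha256(data).hexdigest()
        if last_seen_hash.get(block_id) != digest:
            backup_store.setdefault(block_id, []).append(data)
            last_seen_hash[block_id] = digest
            uploaded += 1
    return uploaded

def restore() -> dict[int, bytes]:
    """Compose a full image from the newest version of every block."""
    return {bid: versions[-1] for bid, versions in backup_store.items()}

db = {0: b"alpha", 1: b"beta", 2: b"gamma"}
print(backup(db))          # 3: the first run is effectively a full backup
db[1] = b"beta-v2"
print(backup(db))          # 1: later runs ship only the changed block
assert restore() == db
```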
09:30 Lois: Now, let's look at the next layer of protection. Rashmi, can you tell us about Oracle Active Data Guard? Rashmi: Oracle Active Data Guard provides highly available data protection and disaster recovery for enterprise Oracle databases. It creates and manages one or more transactionally consistent standby copies of the production database, which is the active primary. The standby database is isolated from the production environment, located miles away in a distant data center, ensuring the standby remains protected and unaffected even if the primary is impacted by a disaster. In the event of a disaster or data corruption occurring at the primary, the standby can take over the role of new primary, allowing the business to continue its operations uninterrupted. It keeps the standby database in sync with the production database by continuously applying change logs from production. 10:25 Do you want to stay ahead in today's fast-paced world? Check out our New Features courses for Oracle Fusion Cloud Applications. Each quarter brings new updates and hands-on training to keep your skills sharp and your knowledge current. Head over to mylearn.oracle.com to dive into the latest advancements! 10:45 Nikita: Welcome back! Rashmi, how does Oracle Active Data Guard operate in practice? Rashmi: It uses its knowledge of the Oracle Database block format to continuously validate blocks for physical or logical intra-block corruption during redo transport and change apply. With the automatic block repair feature, whenever a corrupt block is detected in the primary or the standby database, it is automatically repaired by transferring a good copy of the block from another destination that holds it. This is handled transparently, without any error being reported to the application. It enables you to offload read-only workloads and backup operations to the standby database, reducing the load on the production database. You can achieve zero data loss at any distance by configuring a special synchronization mechanism known as Far Sync. File systems form the attack surface for ransomware. Since Active Data Guard replicates the data at the memory level, any ransomware attack on the primary database will never be replicated to the standby database. This allows for a safe failover to the standby without any data loss, shielding the database from the effects of the attack. You can enable automatic failover of the primary database to a chosen standby database without any manual intervention by configuring the Data Guard broker. The Data Guard broker continuously monitors the primary database and automatically performs a failover to the standby when the predefined failover conditions are met. Active Data Guard enables you to perform database maintenance or database software upgrades with almost zero or minimal downtime. 
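In the same spirit, a toy of checksum-based validation with automatic repair from a peer copy. Real Active Data Guard operates on Oracle's block format and redo stream, so treat this purely as an analogy:

```python
import zlib

def checksum(data: bytes) -> int:
    return zlib.crc32(data)

primary = {7: b"good block"}
standby = {7: b"good block"}
expected = {7: checksum(b"good block")}

def read_block(local: dict, peer: dict, block_id: int) -> bytes:
    data = local[block_id]
    if checksum(data) != expected[block_id]:      # corruption detected
        data = peer[block_id]                     # fetch a good copy from the peer
        assert checksum(data) == expected[block_id]
        local[block_id] = data                    # repair in place, transparently
    return data

standby[7] = b"bit-rotted!"                       # simulate corruption on the standby
print(read_block(standby, primary, 7))            # b'good block', repaired from primary
```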
12:18 Lois: And how does disaster recovery work for Exadata Database Service in Oracle Database@AWS? Rashmi: Exadata Database Service is, by design, already protected against local failures by the use of technologies like Oracle RAC and Oracle Exadata. Now, by deploying Exadata Database Service across multiple availability zones in an AWS region, you can ensure that your database services remain resilient to site failures. It leverages Oracle Active Data Guard to create a standby in a separate availability zone, such that if the primary availability zone is affected, all application traffic can be routed to the database services in the secondary availability zone, restoring business continuity for the application. Through continuous validation of the data blocks at both the primary and the standby database, any potential corruption is detected and prevented. This ensures data integrity and protection across the entire Database Service. By leveraging the zero-data-loss Autonomous Recovery Service, the database ensures that backups remain secure and unaffected by ransomware, enabling rapid restoration of clean, uncompromised data in the event of an attack. Periodic patching and upgrades are performed online in a rolling fashion, with little to no impact on application uptime, using a combination of Oracle RAC and Oracle Active Data Guard technologies. Resource-intensive workloads that are read-only in nature, like database backups or generating monthly reports, can be offloaded to the standby, reducing the load on the production database. In the cross-availability-zone DR setup, you have the flexibility to configure Active Data Guard to use either the AWS network or the OCI network for shipping database redo logs to the standby database. Choosing which network to use for the traffic is entirely at the enterprise's discretion; both comply with Oracle's maximum availability architecture, and the setup is pretty simple. Whether the network traffic uses the OCI network or the AWS network, the respective cloud provider is responsible for ensuring its reliability. You have to take into account the different charges that each cloud provider may have. And you can provision multiple standby databases using the console. Optionally, you may set up a broker manually to enable automatic failover capability. 14:30 Nikita: We just covered cross-availability-zone protection. But what if an entire AWS region goes down? Rashmi: This is where we can provide an additional level of protection by provisioning cross-region disaster recovery for your Exadata Database Service in Oracle Database@AWS. This deployment protects your database against regional disasters. You can provision another DR environment in a different AWS region that supports Oracle Database@AWS. This deployment, together with the cross-availability-zone deployment, complements your highly available and protected Database Service deployment in Oracle Database@AWS. Under the hood, it uses the same Oracle Database technologies, including Oracle Active Data Guard, OCI Autonomous Recovery Service, Oracle Exadata, and Oracle RAC, to provide the same capabilities as the cross-availability-zone deployment. Here too, you have the flexibility to configure Oracle Active Data Guard to use either the AWS network or the OCI network for shipping database redo logs to the standby. The network traffic options remain the same, except for a small difference with respect to chargeback. When using the OCI network for cross-region deployment, there is no charge for the first 10 TB of data transfer per month; beyond that, standard OCI charges apply. When using the AWS network, you may refer to the AWS pricing sheet for cross-region traffic. 
15:49 Nikita: Thank you so much, Rashmi, for this insightful episode. Lois: Yes, thank you! And if you want to dive deeper into the topics we covered today, go to mylearn.oracle.com and search for the Oracle Database@AWS Architect Professional course. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 16:13 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Spark of Ages
The Data Moat: A Google Veteran's Investment Thesis for AI/David Yakobovitch ~ Spark of Ages Ep 58

Spark of Ages

Play Episode Listen Later Feb 28, 2026 58:38 Transcription Available


We chart how AI leapt from chat to code, why product is now the leverage point, and how startups can market to algorithms without losing trust. David Yakobovitch shares hard-won views on moats, data, defense tech, and the immigrant energy powering American dynamism.
• leaders and market share across Google, OpenAI, Anthropic
• vibe coding benefits, code quality risks, review loops
• prompt libraries, agent swarms, PRD automation
• weekly shipping pace and the SaaS squeeze
• marketing to algorithms, buyer agents, bot traffic control
• pilot to production gap, rise of forward-deployed engineers
• moats beyond models via domain, workflow, and proprietary data
• China's progress, open source, and on-device AI bets
• defense tech, swarms, and physical AI opportunities
• endurance mindset, yoga discipline, and founder stamina
• personal workflows across Gemini, Claude, and OpenAI
• investing across seed and growth with outcome focus
The model wars aren't theoretical anymore—they're shaping how software gets built, shipped, and sold. We sit down with David Yakobovitch, GP at Data Power Capital and former global product lead at Google, to map where AI is actually working in 2026: vibe coding that shrinks teams, agent swarms that harden quality, and product-led moats that outlast model churn. David pulls back the curtain on how Claude, OpenAI, and Google now compete neck and neck on code and content, why prompt engineering as a job vanished while prompts became more valuable, and how forward-deployed engineers bridge the stubborn pilot-to-production gap that has haunted data projects for a decade.
We explore go-to-market in a world where buyer agents screen your pitch before a human blinks. That means structuring materials for machines, tuning sites for humans and crawlers, and building demos that agents can evaluate safely. We also go into what happens as models commoditize: the moat shifts to domain depth, proprietary offline data, secure connectors, and measurable workflow outcomes. From small language models running on CPUs in air-gapped containers to Apple's on-device bet, the edge is back—especially for Europe's sovereignty demands and public sector buyers.
Then we widen the lens. Defense and "physical AI" blend hardware and autonomy: swarms, hypersonics, and resilient edge compute that must perform in the real world. David shares why he's backing both the silicon and the software, and how American dynamism—powered by immigrants and impatient builders—remains a durable advantage. Along the way, we trade notes on multi-model workflows, open source momentum, China's narrowed gap, and the endurance mindset that carries teams through the disappointment dip after the first shiny demo.
David Yakobovitch: https://www.linkedin.com/in/davidyakobovitch/
David Yakobovitch is a General Partner and Managing Director of DataPower Capital, a New York City-based venture capital firm investing across Applied AI, Inference Infrastructure, and DeepTech. With a portfolio of over 36 companies, David is an investor in the most defining frontier technology firms of our era, including OpenAI, Anthropic, xAI, Neuralink, DataBricks, Groq, Crusoe, Anduril and SpaceX. David is a leading voice as the host of HumAIn, a podcast focused on Applied and Responsible AI.
Previously, David served as a Global Product Lead at Google.
Website: https://www.position2.com/podcast/
Rajiv Parikh: https://www.linkedin.com/in/rajivparikh/
Sandeep Parikh: https://www.instagram.com/sandeepparikh/
Email us with any feedback for the show: sparkofages.podcast@position2.com

Smartinvesting2000
February 27th, 2026 | Concerning AI Deals, A Misleading 2025 Trade Deficit, Why Automobile Insurance Is So High, The Goal of Tax Planning & More

Smartinvesting2000

Play Episode Listen Later Feb 28, 2026 55:38


These massive AI deals look concerning
The numbers are exciting when companies like Meta or OpenAI announce they'll be purchasing billions of dollars in chips or computing power from companies like Nvidia or AMD, but there always seems to be a catch. Most recently, Meta announced that it entered a multiyear deal with AMD to deploy up to 6 gigawatts of the company's graphics processing units for artificial intelligence data centers; the deal also includes use of AI-optimized central processing units, or CPUs. This deal comes a week after Meta committed to using millions of Nvidia's processors to power its AI expansion. While I have my concerns with all the money Meta is spending, my bigger concern with this new AMD deal is the use of stock warrants. Full details of the deal weren't announced, but we did see that it includes a performance-based warrant for Meta to acquire 160 million AMD shares, about 10% of the company. The first tranche vests when the first 1 GW of Instinct GPUs is shipped. Other tranches vest as Meta makes purchases up to 6 GW. Vesting is also tied to stock price thresholds for AMD and to technical and commercial milestones for Meta. AMD struck a similar deal with OpenAI, in which OpenAI received warrants to acquire 160 million shares of AMD tied to deployment and stock price benchmarks. The reason this is concerning is the potential dilution and, again, the circular nature of these deals. Essentially, the chipmakers are saying: commit to spending roughly $30 billion buying our products, and we will give you roughly $30 billion in stock warrants back. Stock warrants give holders the right, but not the obligation, to buy or sell shares at a specific strike price before an expiration date. If they are exercised, new stock is created, which dilutes current shareholders (see the worked example at the end of this summary). Based on what I have seen, the exercise price for these warrants is $0.01. Ultimately, I just don't believe this will end well for all players in this space, and I think there is a lot of money that will be lost by investors.

2025 trade deficit looks deceiving
Some people are saying that the tariffs didn't work because the trade deficit in 2025 only fell to about $901.5 billion from just over $903 billion in 2024. However, if you break down the numbers quarter by quarter, they tell a different story. In the first three months of the year, there was a $400 billion trade deficit, but each quarter after that it began to decline. In the second quarter, it fell drastically to $180 billion. There wasn't much of a change in the third quarter, with a slight drop to $175 billion, and then in the fourth quarter there was a drop to $145 billion. We try to explain to people that the US economy, at $31.5 trillion, is like a big ship in the ocean; it cannot turn quickly. If people would be patient, I think they would see further progress by the end of 2026, and I believe it's possible the trade deficit could decline to somewhere around $600-$700 billion based on the fourth quarter of 2025. I know there's a snafu with the Supreme Court ruling that the use of the International Emergency Economic Powers Act, which was invoked in the first quarter last year to implement many of the tariffs, was illegal. But there are other ways to impose tariffs, such as Section 122 of the Trade Act of 1974 or Section 301 of the Trade Act, which the president used in his first term. Also available is Section 232 of the Trade Expansion Act of 1962. I don't believe the Supreme Court ruling will lead to an end of tariffs, as the Administration will look at these other avenues.
One major positive from these tariffs has been the announcement of various trade deals that have resulted in trillions of dollars promised by other countries to build manufacturing and other things in our economy.

Why is automobile insurance so high?
Your first thought may be that the insurance companies are gouging their customers just to make big profits. First off, insurance companies are generally public companies with shareholders who would not be investing in them if they were losing money and not paying dividends. The high cost of premiums is not the insurance companies' fault, as in recent years things have really changed. Over the past five years, physical damage costs have increased by 47%. This is because of the higher price of cars and all the extra bells and whistles that add up when there's damage to a vehicle. Bodily injury claims are up 52% over the last five years because of the vast number of new personal injury lawyers who have come on the scene and are pushing for higher settlements, even on small fender benders. Around 95% of these cases are settled and do not go to court. Many of the less reputable attorneys know this and hold the insurance companies hostage: either settle up with us now, or go to court and spend a lot more money and time. Unfortunately, if you're a responsible driver who makes your premium payments, you are helping absorb the cost of uninsured and underinsured motorists, which is up 72%. I'm not a big person for government regulation, but I do believe governments need to step in and verify that all people on the road have auto insurance, and a reasonable amount of it. There's a trend starting in Florida, which is tort reform that has reduced litigation, and the top five insurance companies in the state have requested rate reductions of 5.9%. There is something in the auto insurance industry called fender-bender litigation, and tort reform like this would help states like New York, California, and others prevent insurance companies from having to pay ridiculous settlements for little dings and dents and fake injuries. Wouldn't it be nice if the state of California passed laws to help consumers pay less for auto insurance?

Financial Planning: What Is the Goal of Tax Planning?
Most people would assume the goal of tax planning is simply to reduce taxes, or even to reduce lifetime taxes, but that should not be the focus. The true purpose of tax planning is to increase the level of after-tax income by intentionally managing assets and income sources. If the objective were merely to pay less in taxes, the solution would be simple: stop earning money. But earning less would also leave you with fewer resources and less freedom. What people ultimately want is more net income, more access to money, because that provides flexibility, security, and the ability to live life on their terms. Effective tax planning achieves this by building assets and income streams and structuring them in a way that allows you to access them efficiently. This means investing in the right types of assets, placing them in the right types of accounts, adjusting the strategy over time as income and tax laws change, and withdrawing funds at the right time and in the right manner. When you understand that the true purpose of tax planning is to maximize after-tax access to wealth, not merely minimize taxes, you make better decisions that improve your financial life.

Companies Discussed: Vulcan Materials Company (VMC), Leidos Holdings, Inc. (LDOS), Packaging Corporation of America (PKG) & Caesars Entertainment, Inc. (CZR)
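A worked example of the warrant-dilution arithmetic discussed above. The outstanding-share count is a round-number assumption for illustration, consistent with 160 million warrant shares being about 10% of the company, not a quoted figure:

```python
shares_outstanding = 1_600_000_000        # assumed round number for illustration
warrant_shares = 160_000_000
exercise_price = 0.01                     # per the episode, effectively free

dilution = warrant_shares / (shares_outstanding + warrant_shares)
print(f"dilution to existing holders: {dilution:.1%}")                        # ~9.1%

# What the issuer collects if the full warrant is exercised at $0.01:
print(f"cash received on exercise: ${warrant_shares * exercise_price:,.0f}")  # $1,600,000
```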

Motley Fool Money
NVIDIA Posts Earnings. Wall Street Says “That's It?”

Motley Fool Money

Play Episode Listen Later Feb 26, 2026 23:11


NVIDIA has been the belle of the quarterly earnings ball for quite some time. Investors have been waiting to see how much NVIDIA beat earnings estimates. Even though earnings did beat expectations, the market reaction was "meh". The gang breaks down NVIDIA's earnings and digs into some of the challenges ahead.
Tyler Crowe, Matt Frankel, and Jon Quast discuss:
- NVIDIA's earnings
- The evolving landscape for CPUs and GPUs
- The bull vs. bear look at MercadoLibre's earnings
- The Trade Desk's quarterly results
Companies discussed: NVDA, AMD, GOOG, MELI, AMZN, TTD, WMT, ROKU
Host: Tyler Crowe
Guests: Matt Frankel, Jon Quast
Engineer: Dan Boyd
Disclosure: Advertisements are sponsored content and provided for informational purposes only. The Motley Fool and its affiliates (collectively, "TMF") do not endorse, recommend, or verify the accuracy or completeness of the statements made within advertisements. TMF is not involved in the offer, sale, or solicitation of any securities advertised herein and makes no representations regarding the suitability, or risks associated with any investment opportunity presented. Investors should conduct their own due diligence and consult with legal, tax, and financial advisors before making any investment decisions. TMF assumes no responsibility for any losses or damages arising from this advertisement. We're committed to transparency: All personal opinions in advertisements from Fools are their own. The product advertised in this episode was loaned to TMF and was returned after a test period, or the product advertised in this episode was purchased by TMF. Advertiser has paid for the sponsorship of this episode.
Learn more about your ad choices. Visit megaphone.fm/adchoices

Terminal Value
AI at the Edge, Power Limits, and Why the Future Won't Live in Data Centers

Terminal Value

Play Episode Listen Later Feb 26, 2026 29:34


BrainChip CEO Sean Hehir joins me to unpack where artificial intelligence is actually headed—and why the dominant "everything in the data center" narrative is incomplete.
Most AI conversations fixate on massive models, GPU farms, and trillion-dollar infrastructure bets. This episode shifts the frame. Sean and I explore the structural reality that power consumption, latency, and grid constraints are forcing AI to decentralize—and what that means for founders, engineers, and the broader economy.
Sean explains how neuromorphic computing and ultra-low-power silicon enable AI inference outside the data center—inside wearables, medical devices, drones, manufacturing systems, and even space applications. We examine why CPUs and GPUs aren't optimized for edge workloads, how custom silicon changes the economics, and why power efficiency isn't a side issue—it's the bottleneck that determines what scales.
The conversation expands into workforce displacement, labor fluidity, productivity cycles, and whether technological acceleration inevitably creates unemployment crises—or simply reshuffles value creation again, as history repeatedly shows.
This isn't a speculative futurism episode. It's a grounded look at model trends, infrastructure limits, and how companies survive inside a market moving at month-scale rather than decade-scale.
The lesson isn't that AI replaces everything. It's that architecture determines outcomes.
TL;DR
* AI is centralizing in data centers—but it's also rapidly decentralizing to the edge
* Power constraints will shape the next phase of AI more than hype cycles
* Neuromorphic and event-driven silicon drastically reduce energy per compute
* Edge AI enables medical wearables, safety detection, space systems, and industrial automation
* Models are getting larger—but optimization techniques will shrink them into smaller form factors
* Productivity gains historically displace tasks—not human adaptability
* The future isn't about bigger servers—it's about smarter distribution
* Lowest power per compute is a strategic advantage, not a marketing line
Memorable Lines
* "Don't bet against humanity. We're very creative."
* "The future of AI isn't just in data centers."
* "Power isn't a feature—it's the constraint."
* "If you're the lowest power solution, you will always have customers."
* "Architecture decides what becomes possible."
Guest
Sean Hehir — CEO of BrainChip
Technology executive leading the commercialization of neuromorphic AI processors focused on ultra-low-power edge inference. Oversees BrainChip's evolution from early engineering innovation to market-driven, customer-focused deployment.

WALL STREET COLADA
$NVDA smashes the numbers, $TSLA in doubt over robotaxis, and retail cools on AI

WALL STREET COLADA

Play Episode Listen Later Feb 26, 2026 5:23


SHOW SUMMARY Futures flat after $NVDA's report: a beat and guidance "by a wide margin," but no euphoria in the indexes; today the focus shifts to macro data and the Fed's balance sheet. $NVDA delivers an explosive Q4 and guides high: data center dominates, margins near 75%, and an "agentic AI" narrative, with more competition in CPUs against $INTC and $AMD. Reuters questions $TSLA's robotaxi timeline in California; a Schwab survey shows retail traders more cautious and less euphoric about AI.

ThinkComputers Weekly Tech Podcast
ThinkComputers Podcast #482 - First ASRock AiO, Intel Rethinking CPUs, DDR5 Prices Going Down?

ThinkComputers Weekly Tech Podcast

Play Episode Listen Later Feb 26, 2026 60:14


This week on the podcast we go over our reviews of the ASRock Phantom Gaming 360 LCD Liquid CPU Cooler and the KLEVV CRAS C925G Gen4 Solid State Drive.  We also discuss Intel rethinking CPU design, the drama around the Ryzen Z1 updates, DDR5 prices going down, and much more!

Handelsblatt Today
Another Nvidia record, but is tough competition looming? / Stock portfolio or home ownership: which pays off more?

Handelsblatt Today

Play Episode Listen Later Feb 26, 2026 34:59


Nvidia posts another record quarter on the back of enormous demand for its GPUs. But in the coming months, CPUs could become more important, and there is plenty of competition there. Also: which investment is worthwhile for whom.

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Teaser for AI Business and Development Daily News Rundown February 23 2026: Jony Ive's OpenAI Speaker, Nvidia's Laptop Revolution, & the Pentagon's AI Ultimatum

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Feb 23, 2026 2:00


Listen to Full Audio at https://podcasts.apple.com/us/podcast/ai-business-and-devlopment-daily-news-rundown/id1684415169?i=1000751077790

The Effortless Podcast
Quantum, AI & Data: In Conversation with Dr. Abhishek Bhowmick - Episode 22: The Effortless Podcast

The Effortless Podcast

Play Episode Listen Later Feb 22, 2026 75:23


In this episode of The Effortless Podcast, Dheeraj Pandey speaks with Dr. Abhishek Bhowmick about how quantum mechanics reshaped our understanding of determinism and why that shift matters for AI today. From the Einstein–Bohr debates to the idea that nature is fundamentally probabilistic, they explore how the collapse of "if-then" thinking began nearly a century ago. The discussion draws parallels between quantum superposition and modern LLM behavior. At its core, the episode reframes AI as a rediscovery of how reality computes.
The conversation then moves from physics to computing architecture, tracing the evolution from scalar CPUs to GPUs, TPUs, tensors, and eventually quantum computing. They examine why probabilistic systems and vector math feel more natural than purely deterministic software. Hybrid computing models show that classical systems still matter. The episode also unpacks what quantum computers are truly good at, especially in cryptography and simulation. Ultimately, it reflects on whether the future of computing lies in embracing probability rather than resisting it.
Key Topics & Timestamps
00:00 – Welcome, context, and how Dheeraj & Abhishek met
04:00 – Abhishek's journey: IIT, Princeton, Apple, Snowflake
08:00 – The 1927 Solvay Conference and physics at a crossroads
12:00 – Einstein vs. Bohr: determinism vs. probability
16:00 – Superposition and the collapse of the wave function
20:00 – Fields vs. particles: what is an electron really?
25:00 – Matter particles, force particles, and the Standard Model
30:00 – Transistors, voltage, and the rise of deterministic computing
35:00 – From scalar CPUs to vectors and matrices
40:00 – Tensors, linear algebra, and modern AI systems
45:00 – Principle of Least Action and gradient descent parallels
50:00 – Hallucinations, probability mass, and LLM behavior
55:00 – Vector databases, embeddings, and KNN search
59:00 – GPUs vs. TPUs: matrix vs. tensor architectures
1:05:00 – What quantum computers are actually good at
1:10:00 – Post-quantum cryptography and the future of computing
Host - Dheeraj Pandey
Co-founder & CEO at DevRev. Former Co-founder & CEO of Nutanix. A systems thinker and product visionary focused on AI, software architecture, and the future of work.
Guest - Dr. Abhishek Bhowmick
Co-Founder and CTO of Samooha, a secure data collaboration platform acquired by Snowflake. He previously worked at Apple as Head of ML Privacy and Cryptography, System Intelligence, and Machine Learning, and earlier at Goldman Sachs. He attended Princeton University and was awarded IIT Kanpur's Young Alumnus Award in 2024.
Follow the Host and Guest -
Dheeraj Pandey:
LinkedIn - https://www.linkedin.com/in/dpandey
Twitter - https://x.com/dheeraj
Abhishek Bhowmick:
LinkedIn - https://www.linkedin.com/in/ab-abhishek-bhowmick
Twitter/X - https://x.com/bhowmick_ab
Share Your Thoughts
Have questions, comments, or ideas for future episodes?
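For listeners new to the KNN-over-embeddings idea mentioned in the episode, a minimal sketch: represent items as vectors, then retrieve the nearest neighbors by cosine similarity. Real vector databases add approximate indexing on top of this; the embeddings below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)
corpus = rng.standard_normal((1_000, 384))                 # 1k fake embeddings
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)    # unit-normalize once

def knn(query: np.ndarray, k: int = 5) -> np.ndarray:
    q = query / np.linalg.norm(query)
    sims = corpus @ q                       # cosine similarity via dot product
    return np.argsort(-sims)[:k]            # indices of the k closest items

print(knn(rng.standard_normal(384)))
```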

Hanselminutes - Fresh Talk and Tech for Developers
That's good Mojo - Creating a Programming Language for an AI world with Chris Lattner

Hanselminutes - Fresh Talk and Tech for Developers

Play Episode Listen Later Feb 19, 2026 41:24


What does it take to design a programming language from scratch when the target isn't just CPUs, but GPUs, accelerators, and the entire AI stack? In this episode, I sit down with legendary language architect Chris Lattner to talk about Mojo — his ambitious attempt to rethink systems programming for the machine learning era. We trace the arc from LLVM and Clang to Swift and now Mojo, unpacking the lessons Chris has carried forward into this new language. Mojo aims to combine Python's ergonomics with C-level performance, but the real story is deeper: memory ownership, heterogeneous compute, compile-time metaprogramming, and giving developers precise control over how AI workloads hit silicon. Chris shares the motivation behind Modular, why today's AI infrastructure demands new abstractions, and how Mojo fits into a rapidly evolving ecosystem of ML frameworks and hardware backends. We also dig into developer experience, safety vs performance tradeoffs, and what it means to build a language that spans research notebooks all the way down to kernel-level execution.

In 20xx Scifi and Futurism
In 2058 Bio-Hackers and Digital Minds (HQ)

In 20xx Scifi and Futurism

Play Episode Listen Later Feb 15, 2026 63:30


Young people are bio-hacking and gene-hacking in the absence of adult supervision. An emulated personality becomes an event host. Slice-and-scan brain digitizers are found. People want to use these to upload to the cloud, but there are some grave problems involved. Grace gets a message on her computer from someone or something. Hacking her computer should be impossible. It could be a talented hacker or a super AI left over after the fall of civilization. Lenny is having girl troubles.

Mag tech flooring that levitates shoes slightly above the ground to reduce friction and allow controlled sliding movement.  Lifter bots that are headless robotic machines with grippers used for heavy lifting, transport, and forced entry.  Air-gesture control systems that let users operate machines and interfaces through mid-air hand movements.  Gene-hacking technologies that allow people to alter physical traits such as skin reflectivity, hair color, muscle mass, height, and eye color.  Engineered ogra plants that function as a food source, structural material, and biological air filtration system.  Bio-hacked skin modifications that create metallic, glowing, fluorescent, or patterned skin effects.  Printed clothing with animated images that dynamically change visuals on fabric surfaces.  Contraptions for brain slicing and scanning that destroy the biological brain while attempting to digitize its structure.  Brain scanners designed to capture neural structure for attempted uploading into digital systems.  Uploading systems intended to transfer scanned brains into cloud-based environments.  The cloud infrastructure used to host emulated personalities and digital systems after widespread network collapse.  Emulated personalities (EPs) that are AI systems trained on massive recordings of a person to mimic behavior without scanning their brain.  AR glasses that overlay holographic information, interfaces, and visual enhancements onto the real world.  Holographic eye displays embedded in glasses that mirror the wearer's eye expressions.  Encrypted streaming pendants and bracelets used as personal recording and life-capture devices.  Production automation systems that manufacture tools, machines, and devices with minimal human labor.  Advanced fabrication equipment capable of high-end manufacturing but limited by scarcity of raw materials.  Medicine printers that can fabricate biological materials and advanced hardware like protein-based CPUs.  Protein computer CPUs that use biological substrates instead of traditional silicon for computation.  Material simulators that computationally discover novel materials and predict their properties.  Machine Evolver software that simulates machines under real-world physics and evolves designs through virtual iteration.  Knotts math, a radically new mathematical framework that functions as both math and machine language.  Knotts programming language derived from knotts math and used to build operating systems and software.  Custom Linux operating systems rewritten around knotts math principles.  SSH-based remote access systems used to control computers and robots across networks.  Assist, a pervasive AI helper that manages security, media generation, device control, and logistics.  Design expert emulated personalities used to contribute specialist knowledge to engineering projects.  AI systems that convert legacy software into knotts-based programming languages.  Virtual machine crossbreeding networks that allow simulated designs to recombine traits and evolve faster.  
• E-paper tablets used for low-power note-taking, sketching, and code analysis.
• YattaZed remote programming software used to control robots at the administrator level.
• YattaSwarm GUIs that manage coordinated groups of robots as a collective system.
• Blind-relay networking techniques that disguise communication paths to evade surveillance.
• Door operating systems that act as networked nodes capable of running code and relaying messages.
• Artificial superintelligence (ASI) that surveils human activity and suppresses certain technologies like knotts.
• Digitized human brains created by scanning and emulating real brains rather than approximating them with AI.
• Neural emulators that provide a computational environment capable of running a full digitized brain.
• Virtual reality worlds repurposed as living environments for emulated minds.
• Insta-movie generation systems that create personalized films on demand using AI.
• Event AI controllers that manage live performances, streaming, lighting, and audience interaction.
• Holographic projection systems that display life-sized interactive personalities like Guru Frisky.
• Fiber optic hair strands woven into hairstyles to produce glowing light effects.
• Exoskeleton suits that augment movement and interface with VR systems.
• Mag plate floors used with exoskeletons to allow free-floating VR locomotion.
• Advanced VR rigs that replace fixed robotic arms with wearable movement systems.
• AI-generated optical illusion art that responds to prolonged visual focus.
• 3D printing systems capable of producing statues, clothing, tools, and components from various materials.
• Mist crystal composite printing materials used as a lightweight alternative to legacy plastics.
• Biotic makeup that integrates into the skin rather than sitting on the surface.
• CRISPR-based gene editing equipment used by individuals for self-modification.
• Viral vector printers that dispense customized gene-editing serums.
• Scan-measured clothing printers that adjust garment dimensions as bodies change.
• Pain-dampening genetic modifications that reduce or block physical pain responses.
• Metabolic enhancement gene edits that increase energy efficiency and muscle performance.
• Straw-sized bots woven into hair that act as decorative, animated micro-robots.
• Fire axes used as low-tech tools to breach secured doors when automation fails.

Many of the characters in this project appear in future episodes.

Using storytelling to place you in a time period, this series takes you, year by year, into the future, from 2040 to 2195. If you like emerging tech, eco-tech, futurism, permaculture, apocalyptic survival scenarios, and disruptive science, sit back and enjoy short stories that showcase my research into how the future may play out. The companion site is https://in20xx.com

These are works of fiction. Characters and groups are made up and influenced by current events, but they do not report facts about people or groups in the real world. This project is speculative fiction. These episodes are not about revealing what will be; they are meant to excite the listener's sense of wonder about what may come to pass.

Copyright © Cy Porter 2026. All rights reserved.

Seismic Soundoff
Why High-Performance Computing Is No Longer Optional in Geophysics

Seismic Soundoff

Play Episode Listen Later Feb 12, 2026 21:12


“I think that for geophysicists out there, people need to realize that it's an integrated career path. You can't separate the geophysics from the HPC anymore, if we ever did to begin with.” High-performance computing is becoming more important as seismic data grows in size and complexity. This episode highlights the January special section of The Leading Edge on high-performance computing. Guest editors Madhav Vyas and Elizabeth L'Heureux share their perspectives on GPUs, CPUs, AI tools, and better algorithms in geophysics, and they stress that future success depends on combining geophysical knowledge with strong computational skills.

KEY TAKEAWAYS
> Modern seismic imaging depends on both advanced physics and powerful, well-chosen computing hardware.
> Data movement and system architecture can limit performance as much as raw processing speed.
> Geophysicists increasingly need programming and computational science skills alongside domain expertise.

LINKS
* Read the January 2026 special section, High-performance computing in geophysics - https://pubs.geoscienceworld.org/tle/issue/45/1
* Introduction to this special section: High-performance computing in geophysics, by Madhav Vyas, Elizabeth L'Heureux, and Raj Gautam - https://doi.org/10.1190/tle-4501-SS01

ABOUT SEISMIC SOUNDOFF
Seismic Soundoff showcases conversations addressing the challenges of energy, water, and climate. Produced by the Society of Exploration Geophysicists (SEG) and hosted by Andrew Geary of 51 features, these episodes celebrate and inspire the geophysicists of today and tomorrow. Three new episodes monthly. See the full archive at https://seg.org/resources/podcast/.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:
* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's “Software Engineering Advice from Building Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on “Important AI Trends” @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

—
Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps
00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic “Latency numbers every programmer should know”
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.
Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing that must be said: congrats on owning the Pareto frontier.
Jeff Dean [00:00:30]: Thank you, thank you. Pareto frontiers are good. It's good to be out there.
Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto frontier. You have to have, like, frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this latest release.
Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large-model capabilities into much smaller, lighter-weight models that are, you know, much more cost-effective and lower latency, but still, you know, quite capable for their size. Yeah.
Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially, when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you would need to double your CPU count. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this — how do we actually deploy it if we build it?
Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist in the sort of slightly less capable last year's version, or the version from six months ago. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other, broader uses. So I think what we want to do is always have kind of a highly capable, sort of affordable model that enables a whole bunch of, you know, lower-latency use cases — people can use them for agentic coding much more readily — and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
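As a toy illustration of what “owning the Pareto frontier” means in model terms — no rival point is both cheaper and better — here is a minimal sketch. The model names and (cost, quality) numbers are made up for the example, not real benchmarks:

```python
# Hypothetical (relative cost per token, benchmark score) points; illustrative only.
models = {
    "flash-lite": (0.1, 62.0),
    "flash":      (0.4, 74.0),
    "pro":        (3.0, 85.0),
    "rival-a":    (0.5, 70.0),
    "rival-b":    (3.5, 83.0),
}

def pareto_frontier(points):
    """Keep models that no other model beats on both cost (lower) and quality (higher)."""
    frontier = []
    for name, (cost, quality) in points.items():
        dominated = any(
            c <= cost and q >= quality and (c, q) != (cost, quality)
            for n, (c, q) in points.items() if n != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(models))  # ['flash-lite', 'flash', 'pro'] — the rivals are dominated
```

A lineup “owns” the frontier in the sense discussed here when every one of its price points survives this filter.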
And also, you know, through distillation, which is a key technique for making the smaller models more capable — you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest-size model. Yeah.
Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.
Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.
Alessio Fanelli [00:03:30]: A long time ago. But, like, I'm curious how you think about the cycle of these ideas — even, like, you know, sparse models — and, you know, how do you reevaluate them? How do you think about, in the next generation of models, what is worth revisiting? Like, you worked on so many ideas that end up being influential, but in the moment they might not feel that way necessarily. Yeah.
Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time — you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories — you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever — and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of: okay, what if we want to actually serve that — train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we have a much larger-scale model that we then distill into a much smaller-scale model.
Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is: RL basically spikes models in a certain part of the distribution. And then — well, you can spike models, but usually it might be lossy in other areas, and it's kind of an uneven technique, but you can probably distill it back. And I think the general dream is to be able to advance capabilities without regressing on anything else. And that whole capability-merging-without-loss, I feel like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.
Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation as: you can have a much smaller model, and you can have a very large, you know, training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model — behavior that you wouldn't otherwise get with just the hard labels.
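As a rough sketch of the recipe Jeff describes — soft teacher logits blended with hard labels, in the spirit of the Hinton, Vinyals, and Dean formulation — here is a minimal NumPy version. The temperature and mixing weight are illustrative defaults, not Gemini's actual settings:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.75):
    """Soft-label term (teacher logits at temperature T) plus hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    # Cross-entropy against the softened teacher distribution (matches KL up to a
    # constant); the T**2 factor keeps its gradient scale comparable to the hard term.
    soft = -(p_teacher * log_p_student).sum(axis=-1).mean() * T**2
    log_q = np.log(softmax(student_logits) + 1e-12)
    hard = -log_q[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1 - alpha) * hard
```

The soft term is what carries the extra signal: the teacher's near-miss probabilities tell the student how classes relate, which one-hot labels never do.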
And so, you know, I think what we've observed is that you can get very close to your largest model's performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the Flash version of the next generation as good as, or even substantially better than, the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.
Shawn Wang [00:07:02]: So, dare I ask: the original lineup was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?
Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro-scale model, and we can distill from that as well into our Flash-scale model. So I think, you know, it's an important set of capabilities to have. And also, inference-time scaling can be a useful thing to improve the capabilities of the model.
Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economics of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.
Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.
Shawn Wang [00:07:50]: No, I mean, economics-wise — because Flash is so economical, you can use it for everything. Like, it's in Gmail now. It's in YouTube. Like, it's in everything.
Jeff Dean [00:08:02]: We're using it more in our search products of various kinds — AI mode, AI overviews.
Shawn Wang [00:08:05]: Oh, my God. Flash powers the AI mode. Oh, my God. Yeah, I didn't even think about that.
Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it also has lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens before they finish what you asked them to do — because you're going to ask now not just "write me a for loop" but "write me a whole software package to do X or Y or Z." And so having low-latency systems that can do that seems really important. And Flash is one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs — the interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long-context kinds of attention operations, and, you know, having sparse models with lots of experts. These kinds of things really matter a lot in terms of how you make them servable at scale.
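A minimal sketch of the expert-routing idea behind the sparse, many-expert models Jeff mentions; the sizes and the simple top-k gating scheme here are toy assumptions, not TPU serving code:

```python
import numpy as np

def route_top_k(x, gate_w, k=2):
    """Minimal mixture-of-experts routing: each token activates only k of E experts."""
    scores = x @ gate_w                          # [tokens, experts] gating logits
    chosen = np.argsort(-scores, axis=-1)[:, :k]
    # Only the chosen experts' parameters need to move into the compute units,
    # which is how sparse models keep serving cost far below their parameter count.
    return chosen

tokens = np.random.randn(4, 16)                  # 4 tokens, model dim 16 (toy sizes)
gate = np.random.randn(16, 8)                    # 8 experts
print(route_top_k(tokens, gate))                 # expert ids selected per token
```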
Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about the capability as saturating: in certain tasks, the Pro model today is saturated on some sort of task, so next generation, that same task will be saturated at the Flash price point. And I think, for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. And how do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.
Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is that as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like, I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true not just of coding but of, you know — now it's "can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment" or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these particular areas in order to make the next generation even better?
Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? Because it's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. Like, how do you keep pushing the team internally? Or, like, "this is what we're building towards"? Yeah.
Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often have a lifespan of utility, where they're introduced and maybe they're quite hard for current models. You know, I like to think the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for whatever it is the benchmark is trying to assess, and get it up to like 80, 90%, whatever. I think once it hits 95% or something, you get very diminishing returns from really focusing on that benchmark, because it's either the case that you've now achieved that capability, or there's also the issue of leakage of public data — or very related kinds of data — into your training data. Um, so we have a bunch of held-out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on, you know, assessing how we make the model better at these kinds of things. Is it that we need different kinds of data to train on, more specialized for this particular kind of task?
Do we need, um, you know, a bunch of architectural improvements, or some sort of model capability improvements? What would help make that better?
Shawn Wang [00:12:53]: Is there such an example — a benchmark that inspired an architectural improvement? I'm just kind of jumping on that because you just mentioned it.
Jeff Dean [00:13:02]: Uh, I mean, I think some of the long-context capability of the Gemini models that came, I guess, first in 1.5 really was about looking at: okay, we want to have this capability.
Shawn Wang [00:13:15]: Immediately everyone jumped to, like, completely green charts. Everyone had them. I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.
Jeff Dean [00:13:23]: I mean, as you say, the single-needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something — we don't actually have, you know, much larger than 128K these days, or 2M or something; we're trying to push the frontier of 1 million or 2 million context. Which is good, because I think there are a lot of use cases where, yeah, putting a thousand pages of text, or multiple hour-long videos, in the context and then actually being able to make use of that is useful — the use cases we're trying to explore are fairly large. But the single-needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic, take-all-this-content-and-produce-this-kind-of-answer benchmarks that better assess what it is people really want to do with long context. Which is not just, you know, "can you tell me the product number for this particular thing."
Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting, because the more meta level I'm trying to operate at here is: you have a benchmark, you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's exactly the kind of thing Jason Wei, who used to work at Google, would say: yeah, you're going to win short term; longer term, I don't know if that's going to scale. You might have to undo that.
Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is: can I attend to the internet while I answer my question? Right? But that's not going to happen — I don't think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find — not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state, with your permission. So, like, your emails, your photos, your docs, your plane tickets. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens, right, in a meaningful way? Yeah.
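One way to picture that "illusion" is a retrieval cascade: a very cheap scorer touches everything, and progressively more expensive models look at progressively fewer candidates. A hedged sketch — the stage sizes echo the 30,000 and 117 Jeff uses later in the conversation, and the scorers here are stand-ins:

```python
import numpy as np

def funnel_search(query_vec, doc_vecs, k1=30_000, k2=117):
    """Two-stage funnel: cheap scoring over the whole corpus, pricier scoring on survivors."""
    # Stage 1: a lightweight score (a dot product here) over everything, highly parallel.
    cheap = doc_vecs @ query_vec
    k1 = min(k1, len(cheap) - 1)
    shortlist = np.argpartition(-cheap, k1)[:k1]
    # Stage 2: a more expensive model re-scores only the shortlist. Stubbed here;
    # in practice this would be a cross-encoder or an LLM call.
    rerank_score = lambda i: cheap[i]
    best = sorted(shortlist, key=rerank_score, reverse=True)[:k2]
    return best          # hand only these to the most capable model
```

The key property is that per-document cost rises only as candidate count falls, so the system as a whole can appear to have attended to everything.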
Shawn Wang [00:16:26]: But by the way, I think I did some math, and it's like, if you spoke all day, every day, for eight hours a day, you only generate a maximum of like a hundred K tokens, which very comfortably fits.
Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...
Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into, like, proteins and whatever else is extremely information-dense. Yeah. Yeah.
Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, to some people that means text and images and video and audio — sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Like LIDAR sensor data from, say, Waymo vehicles or robots, or, you know, various kinds of health modalities — X-rays and MRIs and imaging and genomics information. And I think there are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have — because maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix — at least including a little bit of it is actually quite useful. Yeah. Because it sort of cues the model that this is a thing.
Shawn Wang [00:18:04]: Yeah. Do you believe — I mean, since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic — are there some king modalities, like modalities that supersede all the other modalities? A simple example: vision can, on a pixel level, encode text, and DeepSeek had this DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality. Yeah.
Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion — well, video as opposed to static images — because, I mean, there's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you. Which is really what we want these models to be able to do: interpret the things we're seeing or the things we're paying attention to, and then help us in using that information to do things. Yeah.
Shawn Wang [00:19:05]: I think motion — you know, I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.
Jeff Dean [00:19:15]: Yeah. Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do. Yeah. Like, I have an example I've used in one of my talks. It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has, like, Michael Jordan hitting some jump shot at the end of the finals, and, you know, some soccer goals, and things like that. And you can literally just give it the video and say: can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you now get an 18-row table of that information extracted from the video — which is, you know, not something most people think of as a turn-video-into-SQL-table kind of task.
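That highlight-reel example reduces to a prompt-and-parse pattern. A hypothetical sketch — `model` here is a stand-in text+video callable with an assumed signature, not the actual Gemini SDK:

```python
def video_to_table(model, video_uri):
    """Ask a multimodal model for one structured row per detected event.
    `model` is a hypothetical callable; replace with a real client as needed."""
    prompt = (
        "Watch this highlight reel. Return one line per event formatted as: "
        "event | date | short description"
    )
    raw = model(video=video_uri, prompt=prompt)      # assumed call signature
    rows = [line.split("|") for line in raw.splitlines() if "|" in line]
    return [dict(zip(("event", "date", "description"), map(str.strip, r)))
            for r in rows if len(r) == 3]            # list of dicts, ready for SQL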
It had like, it was like a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are? What when the date is when they happened? And a short description. And so you get like now an 18 row table of that information extracted from the video, which is, you know, not something most people think of as like a turn video into sequel like table.Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of like, you mentioned tending to the whole internet, right? Google, it's almost built because a human cannot tend to the whole internet and you need some sort of ranking to find what you need. Yep. That ranking is like much different for an LLM because you can expect a person to look at maybe the first five, six links in a Google search versus for an LLM. Should you expect to have 20 links that are highly relevant? Like how do you internally figure out, you know, how do we build the AI mode that is like maybe like much broader search and span versus like the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. I mean, I think even pre-language model based work, you know, our ranking systems would be built to start. With a giant number of web pages in our index, many of them are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is, you know, the final 10 results or, you know, 10 results plus. Other kinds of information. And I think an LLM based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000 ish documents that are with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117 with maybe a little bit more sophisticated model or set of models. And then maybe the final model is the thing that looks. So the 117 things that might be your most capable model. So I think it has to, it's going to be some system like that, that is really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, but you are searching the internet, but you're finding, you know, a very small subset of things that are, that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in like Google search history that, well, you know, like Bert was. Like he was like basically immediately inside of Google search and that improves results a lot, right? 
Like I don't, I don't have any numbers off the top of my head, but like, I'm sure you guys, that's obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM based representation of text and words and so on enables you to get out of the explicit hard notion of, of particular words having to be on the page, but really getting at the notion of this topic of this page or this page. Paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic system, very high traffic. Yeah. Like it's Google, it's YouTube. YouTube has this like semantics ID thing where it's just like every token or every item in the vocab is a YouTube video or something that predicts the video using a code book, which is absurd to me for YouTube size.Jeff Dean [00:23:50]: And then most recently GROK also for, for XAI, which is like, yeah. I mean, I'll call out even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have like a history of like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk in, uh, I guess, uh, web search and data mining conference in 2009, uh, where we never actually published any papers about the origins of Google search, uh, sort of, but we went through sort of four or five or six. generations, four or five or six generations of, uh, redesigning of the search and retrieval system, uh, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Uh, because if you don't have the page in your index, you're going to not do well. Um, and then we also needed to scale our capacity because we were, our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows, you have like 30 shards. And then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. Um, and then as traffic grows, you add, you add more and more replicas of each of those. And so we eventually did the math that realized that in a data center where we had say 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines, uh, with disks. And we did the math and we're like, Hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we introduced, uh, we put our entire index in memory and what that enabled from a quality perspective was amazing. Um, and so we had more and more replicas of each of those. Before you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so you, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Uh, bistro and all these things. 
And you can suddenly start, uh, sort of really, uh, getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was, you know, 2001, very much pre LLM, but really it was about softening the, the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are like principles that you use to like design the systems, especially when you have, I mean, in 2001, the internet is like. Doubling, tripling every year in size is not like, uh, you know, and I think today you kind of see that with LLMs too, where like every year the jumps in size and like capabilities are just so big. Are there just any, you know, principles that you use to like, think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what are the sort of design parameters that are going to be most important in designing that, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple, you know, will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by like factors of five or 10, but probably not beyond that because often what happens is if you design a system for X. And something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X. But all of a sudden at a hundred X makes total sense. So like going from a disk space index to a in memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines now actually can hold, uh, you know, a full copy of the, uh, index and memory. Yeah. And that all of a sudden enabled. A completely different design that wouldn't have been practical before. Yeah. Um, so I'm, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index, uh, quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most. Surprising. So it used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in like sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden news related queries, you know, if you're, if you've got last month's news index, it's not actually that useful for.Shawn Wang [00:29:11]: News is a special beast. Was there any, like you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google news product, but you also want news related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to like classify whether the page is, you have to decide which pages should be updated and what frequency. 
Oh yeah.Jeff Dean [00:29:30]: There's a whole like, uh, system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often because, uh, the likelihood they change might be low, but the value of having updated is high.Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This, uh, you know, mention of latency and, and saving things to this reminds me of one of your classics, which I have to bring up, which is latency numbers. Every programmer should know, uh, was there a, was it just a, just a general story behind that? Did you like just write it down?Jeff Dean [00:30:06]: I mean, this has like sort of eight or 10 different kinds of metrics that are like, how long does a cache mistake? How long does branch mispredict take? How long does a reference domain memory take? How long does it take to send, you know, a packet from the U S to the Netherlands or something? Um,Shawn Wang [00:30:21]: why Netherlands, by the way, or is it, is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands, um, so, I mean, I think this gets to the point of being able to do the back of the envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumb nailing or something of the result page, you know, how, what I do that I could pre-compute the image thumbnails. I could like. Try to thumbnail them on the fly from the larger images. What would that do? How much dis bandwidth than I need? How many des seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of, uh, basic, uh, basic numbers at your fingertips. Uh, and then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long does it take to, you know, look up something in this particular kind of.Shawn Wang [00:31:21]: I'll see you next time.Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder if you have any, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is how much state will you need to bring in from memory, either like on-chip SRAM or HBM from the accelerator. Attached memory or DRAM or over the network. And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's order, depending on your precision, I think it's like sub one picodule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy efficient system. And then moving data from the SRAM on the other side of the chip, not even off the off chip, but on the other side of the same chip can be, you know, a thousand picodules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, like, say, the parameter of a model from SRAM on the, on the chip into the multiplier unit, that's going to cost you a thousand picodules. So you better make use of that, that thing that you moved many, many times with. 
So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah. Right.Jeff Dean [00:33:41]: Because then you paid a thousand picodules in order to do your one picodule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick like, like, like you did with, you know, putting everything in memory? Like, you know, I think obviously NVIDIA has caused a lot of waves with betting very hard on SRAM with Grok. I wonder if, like, that's something that you already saw with, with the TPUs, right? Like that, that you had to. Uh, to serve at your scale, uh, you probably sort of saw that coming. Like what, what, what hardware, uh, innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, uh, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think for serving some kinds of models, uh, you know, you, you pay a lot higher cost. Uh, and time latency, um, bringing things in from HBM than you do bringing them in from, uh, SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish scale model over say 16 or 64 chips. Uh, but as if you do that and it all fits in. In SRAM, uh, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like how much do you decide where the improvements have to go? So like, this is like a good example of like, is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is like when people say, oh, you should burn the model on the ASIC and that's kind of like the most extreme thing. How much of it? Is it worth doing an hardware when things change so quickly? Like what was the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we, we have a lot of interaction between say the TPU chip design architecture team and the sort of higher level modeling, uh, experts, because you really want to take advantage of being able to co-design what should future TPUs look like based on where we think the sort of ML research puck is going, uh, in some sense, because, uh, you know, as a hardware designer for ML and in particular, you're trying to design a chip starting today and that design might take two years before it even lands in a data center. And then it has to sort of be a reasonable lifetime of the chip to take you three, four or five years. So you're trying to predict two to six years out where, what ML computations will people want to run two to six years out in a very fast changing field. And so having people with interest. 
Interesting ML research ideas of things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N plus two, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N plus one, but, you know, bigger changes are going to require the chip. Yeah. Design be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, 10 times as fast. And if it doesn't work out, well, you burned a little bit of tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do like lots of carefulness. Uh, ML experimentation to show us, uh, this is actually the, the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of like, we already committed to this chip design so we can not take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you, you definitely have things where you're going to adapt what the model architecture looks like so that they're efficient on the chips that you're going to have for both training and inference of that, of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower precision things that are coming in a future generation. So you can, might train it at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision?Jeff Dean [00:38:43]: Because people are saying like ternary is like, uh, yeah, I mean, I'm a big fan of very low precision because I think that gets, that saves you a tremendous amount of time. Right. Because it's picojoules per bit that you're transferring and reducing the number of bits is a really good way to, to reduce that. Um, you know, I think people have gotten a lot of luck, uh, mileage out of having very low bit precision things, but then having scaling factors that apply to a whole bunch of, uh, those, those weights. Scaling. How does it, how does it, okay.Shawn Wang [00:39:15]: Interesting. You, so low, low precision, but scaled up weights. Yeah. Huh. Yeah. Never considered that. Yeah. Interesting. Uh, w w while we're on this topic, you know, I think there's a lot of, um, uh, this, the concept of precision at all is weird when we're sampling, you know, uh, we just, at the end of this, we're going to have all these like chips that I'll do like very good math. And then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy based, uh, models and processors. I'm just curious if you've, obviously you've thought about it, but like, what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think. There's a bunch of interesting trends though. 
Energy based models is one, you know, diffusion based models, which don't sort of sequentially decode tokens is another, um, you know, speculative decoding is a way that you can get sort of an equivalent, very small.Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: Batch factor, uh, for like you predict eight tokens out and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get. A five, a five X improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the, the tokens. So these are all really good techniques and I think it's really good to look at them from the lens of, uh, energy, real energy, not energy based models, um, and, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to. Two solutions that are gonna be, uh, you know, better from, uh, you know, being able to serve larger models or, you know, equivalent size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think, I think I, um, it's appealing intellectually, uh, haven't seen it like really hit the mainstream, but, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do, uh, a lot of shenanigans if like we fundamentally. Design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, I think there's still a, there's also sort of the more exotic things like analog based, uh, uh, computing substrates as opposed to digital ones. Uh, I'm, you know, I think those are super interesting cause they can be potentially low power. Uh, but I think you often end up wanting to interface that with digital systems and you end up losing a lot of the power advantages in the digital to analog and analog to digital conversions. You end up doing, uh, at the sort of boundaries. And periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or like maybe things that you cannot pursue a Google that you would be interested in seeing researchers take a step at, I guess you have a lot of researchers. Yeah, I guess you have enough, but our, our research.Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems and how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools in order to sort of build, uh, things that can accomplish, uh, you know, much more. Yeah. Significant pieces of work, uh, collectively, then you would ask a single model to do. Um, so that's super interesting. How do you get more verifiable, uh, you know, how do you get RL to work for non-verifiable domains? I think it's a pretty interesting open problem because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other less verifiable domains, because we've come up with RL techniques that actually enable us to do that. 
Uh, effectively, that would, that would really make the models improve quite a lot. I think.Alessio Fanelli [00:43:26]: I'm curious, like when we had Noam Brown on the podcast, he said, um, they already proved you can do it with deep research. Um, you kind of have it with AI mode in a way it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like what is it? Both are like information retrieval of JSON. So I wonder if it's like the retrieval is like the verifiable part. That you can score or what are like, yeah, yeah. How, how would you model that, that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, is this things, are these things you retrieved relevant? Or can you rate these 2000 things you retrieved to assess which ones are the 50 most relevant or something? Um, I think those kinds of techniques are actually quite effective. Sometimes I can even be the same model, just prompted differently to be a, you know, a critic as opposed to a, uh, actual retrieval system. Yeah.Shawn Wang [00:44:28]: Um, I do think like there, there is that, that weird cliff where like, it feels like we've done the easy stuff and then now it's, but it always feels like that every year. It's like, oh, like we know, we know, and the next part is super hard and nobody's figured it out. And, uh, exactly with this RLVR thing where like everyone's talking about, well, okay, how do we. the next stage of the non-verifiable stuff. And everyone's like, I don't know, you know, Ellen judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things and they fall down around the edges of those things and, and are not as capable as we'd like in those areas. And then coming up with good techniques and trying those. And seeing which ones actually make a difference is sort of what the whole research aspect of this field is, is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM, eight K problems, right? Like, you know, Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can, and now you're doing IMO and Erdos problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, in a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some, some areas, but we do see it for some other areas and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be. As far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.Shawn Wang [00:46:27]: It does. Yeah. It doesn't, it doesn't matter. People do judge books by their covers as it turns out. 
Shawn Wang: Just to draw on the IMO gold a bit: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM.

Jeff Dean [00:47:02]: Yeah. I mean, it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads, right? We have some distributed representation, neural-net-like in some way, of lots of different neurons, with activation patterns firing when we see certain things. And that enables us to reason and plan, to do chains of thought and roll them back: "now that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one." In a lot of ways, we're emulating what we intuitively think is happening inside real brains in neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about those things.

Shawn Wang [00:47:59]: Interesting. I mean, it maybe seems obvious to you, but it wasn't obvious to me a year ago.

Jeff Dean [00:48:06]: I do think that progression at the IMO, first translating to Lean and using Lean plus a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little more inference budget, is actually quite good, because it shows that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street-sign recognition model; I want to decode speech, so I have a speech recognition model. Now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. And they're getting better and better.

Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like, yeah, I don't know where the IMO competition was held, I don't know its rules; I just trained the models. And it's kind of interesting that people with this universal machine-learning skill set, if you just give them data and enough compute, can tackle almost any task. Which is the bitter lesson, I guess. I don't know.

Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.

Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here.
There's this concept of model capacity: abstractly, a model can only contain the number of bits that it has. And who knows, Gemini Pro is somewhere between one and ten trillion parameters; we don't know. But take the Gemma models: a lot of people want open-source local models like that, and they carry some knowledge that isn't necessary, right? They can't know everything. You have the luxury that the big model can be capable of everything, but when you're distilling down to the small models, you're memorizing things that are not useful. So how do we extract that? Can we divorce knowledge from reasoning?

Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space; you might prefer something that is more generally useful in more settings than some obscure fact. So that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just to have a general sense of how long bridges are. It maybe doesn't need to know how long some teeny little bridge in a more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval...

Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?

Jeff Dean [00:52:01]: Right, we're not going to train Gemini on my email. We'd probably rather have a single model that can use retrieving from my email as a tool, have the model reason about it, retrieve from my photos or whatever, make use of that, and have multiple stages of interaction. That makes sense.

Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or?

Jeff Dean [00:52:37]: No, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain: for healthcare, say, or for robotics. We're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities.
So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that base and then train it on more robotics data. Maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, you know, Perl programming; it'll still be good at Python programming, because we'll include enough of that, but there are other long-tail programming languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer, because we didn't expose it to as much data there, while it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics module, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. If I have a health-related query, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things.

Shawn Wang [00:54:36]: Installable knowledge.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, say, a hundred billion tokens or a trillion tokens of health data.

Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.

Alessio Fanelli [00:54:56]: I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare, and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? And if I need a trillion healthcare tokens, they're probably not out there.

Jeff Dean [00:55:21]: Well, I mean, healthcare is a particularly challenging domain, so there's a lot of healthcare data that, appropriately, we don't have access to. But there are a lot of healthcare organizations that want to train models on their own data, data that is not public. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are more bespoke, but probably better than a general model trained on public data.

Shawn Wang [00:55:58]: Yeah. By the way, somewhat related to the language conversation: I believe one of your favorite examples was that you can put a low-resource language in the context and it just learns.
Yeah.

Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource because it's only spoken by, I think, 120 people in the world, and there's no written text.

Shawn Wang [00:56:20]: So you can just do it that way, put it in the context. But you put the whole dataset in the context, right?

Jeff Dean [00:56:27]: If you take a language like Somali, or Amharic, or something, there is a fair bit of text in those languages in the world. We're probably not putting all the data from those languages into the Gemini base training; we put some of it, but if you put more of it in, you'll improve the capabilities of those models.

Shawn Wang [00:56:49]: Yeah.

The Tech Blog Writer Podcast
AI PCs Explained With Logan Lawler from Dell Technologies

The Tech Blog Writer Podcast

Play Episode Listen Later Feb 11, 2026 36:24


What actually happens when AI stops being a cloud-only experiment and starts running on desks, in labs, and inside real teams trying to ship real work? In this episode, I sit down with Logan Lawler, Senior Director at Dell Technologies, to unpack how AI workloads are really being built and supported on the ground today. Logan leads Dell's Precision and Pro Max AI Solutions business and hosts Dell's own Reshaping Workflows podcast, giving him a rare vantage point into how engineers, developers, creatives, and data teams are actually working, not how marketing slides suggest they should be. We start by cutting through the noise around AI PCs heard from every conference stage. Logan breaks down what genuinely matters when choosing hardware for AI work. CPUs, GPUs, NPUs, memory, and software stacks all play different roles, and misunderstanding those roles often leads teams to overspend or underspec. Logan explains why all AI workstations qualify as AI PCs, but not all AI PCs are suitable for serious AI work, and why GPUs remain central for anyone doing real model development, fine-tuning, or inference at scale. From there, the conversation shifts to a broader architectural rethink. As AI workloads grow heavier and data sensitivity increases, many organizations are reconsidering where compute should live. Logan shares how GPU-powered Dell workstations, storage-rich environments, and hybrid cloud setups are giving teams more control over performance, cost, and data. We explore why local compute is becoming attractive again, how modern GPUs now rival small server setups, and why hybrid workflows, local for development and cloud for deployment, are becoming the default rather than the exception. One of the most compelling parts of the discussion comes when Logan connects hardware choices back to business reality. Drawing on real-world examples, he explains how teams use local AI environments to move faster, reduce cloud costs, and avoid getting locked into architectures that are hard to unwind later. This is not about abandoning the cloud, but about being intentional from the start, especially as AI usage spreads beyond developers into marketing, operations, and everyday business roles. We also step back to reflect on a deeper challenge. As AI becomes easier to use, what happens to critical thinking, curiosity, and learning? Logan shares a candid perspective, shaped by his experiences as a parent, technologist, and podcast host, raising questions about how tools should support rather than replace thinking. If you are trying to make sense of AI PCs, local versus cloud compute, or how teams are really reshaping workflows with AI hardware today, this conversation offers grounded insight from someone living at the center of it. Are we designing systems that genuinely empower people to think better and build faster, or are we sleepwalking into decisions we will regret later? How do you want your own AI workflow to evolve?
Useful Links:
TLDR AI newsletter and the Neurons
The Reshaping Workflows podcast
Connect with Logan Lawler
Follow Dell Technologies on LinkedIn
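One concrete way to see the CPU/GPU/NPU division of labor discussed here is how an inference runtime picks an execution provider. A hedged sketch with ONNX Runtime follows: the model path is a placeholder, and which providers actually exist depends on the vendor and on how the installed onnxruntime build was compiled.

```python
# Pick where a model runs on an "AI PC" with ONNX Runtime: prefer an
# NPU or GPU provider when the installed build exposes one, and fall
# back to CPU otherwise. "model.onnx" is a placeholder path; provider
# availability varies by vendor and build.
import onnxruntime as ort

preferred = [
    "QNNExecutionProvider",   # NPU (e.g., Qualcomm-based builds)
    "DmlExecutionProvider",   # GPU via DirectML on Windows
    "CUDAExecutionProvider",  # NVIDIA GPU builds
    "CPUExecutionProvider",   # always available
]

available = ort.get_available_providers()
providers = [p for p in preferred if p in available]
print("using:", providers)

session = ort.InferenceSession("model.onnx", providers=providers)
```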

What's Next with Aki Anastasiou
Craig Nowitz and Ryan Martyn on tech price spikes in South Africa

What's Next with Aki Anastasiou

Play Episode Listen Later Feb 10, 2026 22:55


RAM, SSDs, GPUs, CPUs — the tech building blocks behind everything from laptops to media players — are getting harder to source and more expensive by the week. On What's Next, host Aki Anastasiou is joined by Craig Nowitz (CEO) and Ryan Martyn (Co-founder, Sales & Marketing Director) from Syntech Distribution to unpack what's really behind the supply crunch — and why it's not a “COVID-style” blip. They explain how hyperscalers racing to build AI infrastructure are soaking up global capacity, pushing manufacturers toward higher-margin enterprise memory, and triggering sharp price increases that filter straight into South Africa's consumer and corporate IT budgets. The conversation also gets practical: what procurement teams should prioritise in 2026, why Windows 11 hardware readiness has become a security issue, and how businesses can adapt by moving faster, planning smarter, and considering alternative hardware choices. If you're responsible for IT upgrades — or you're wondering whether to buy now or wait — this episode lays out the market reality and the smartest moves to avoid getting caught out.

Technikquatsch
TQ294: Valve delays the Steam Machine, works on VRR over HDMI and a better upscaler; SoC for the next Xbox ready in 2027, says AMD; Ryzen X3D CPUs lose barely any performance with single-channel RAM

Technikquatsch

Play Episode Listen Later Feb 9, 2026 73:23


It was to be expected: Valve is delaying the Steam Machine, naturally because of the ongoing memory crisis. A concrete date and a price should have been announced by now at the latest; instead, the word is now "first half of the year." As a small consolation, Valve confirms it is still working on VRR over HDMI, and also on an improved upscaler. That can really only mean FSR 4. Unofficial implementations of FSR 4 (or FSR AI, as it is now called) for RDNA 2 and RDNA 3 have existed for a while on Linux via forks of Proton, or on Windows via OptiScaler. An official version for the older architectures, or at least for RDNA 3, would be very welcome. And overdue. On the conference call for the latest quarterly results, AMD CEO Dr. Lisa Su apparently caught Microsoft off guard by announcing that the SoC for the next Xbox will be ready in 2027. Basically everyone (except André Peschke) had assumed the next consoles would arrive in 2027 anyway. But then the whole memory situation happened. And now presumably even Microsoft is no longer sure whether the next Xbox will launch in 2027. Enjoy episode 294!

Speakers: Meep, Michael Kister, Mohammed Ali Dad
Audio production: Michael Kister
Video production: Mohammed Ali Dad, Michael Kister
Cover image: Mohammed Ali Dad
Image sources: Valve / image by katermikesch on Pixabay
Recorded: 07.02.2026

Visit us:
on Discord https://discord.gg/SneNarVCBM
on Bluesky https://bsky.app/profile/technikquatsch.de
on YouTube https://www.youtube.com/@technikquatsch https://www.youtube.com/@technikquatschgaming
on TikTok https://www.tiktok.com/@technikquatsch
on Instagram https://www.instagram.com/technikquatsch
on Twitch https://www.twitch.tv/technikquatsch
RSS feed https://technikquatsch.de/feed/podcast/
Spotify https://open.spotify.com/show/62ZVb7ZvmdtXqqNmnZLF5u
Apple Podcasts https://podcasts.apple.com/de/podcast/technikquatsch/id1510030975
Deezer https://www.deezer.com/de/show/1162032

00:00:00 Welcome to Technikquatsch episode 294!
00:02:52 Valve delays the Steam Machine, working on HDMI VRR and a better upscaler. https://store.steampowered.com/news/group/45479024/view/625565405086220583?l=english
00:10:50 Chinese memory-chip manufacturers come into focus. http://winfuture.de/news,156633.html
00:19:49 CPU tests comparing RAM in dual channel versus a single module https://www.computerbase.de/artikel/arbeitsspeicher/ram-ein-modul-intel-core-ultra-200s-test.95998/
00:24:55 According to AMD, the SoC for the new Xbox will be ready in 2027. Whether Microsoft will be ready is questionable. https://www.computerbase.de/news/gaming/next-gen-konsole-amd-nennt-einen-moeglichen-starttermin-fuer-die-naechste-xbox.96024/
00:30:29 Follow-up on nuclear fusion and ITER https://www.simplyscience.ch/teens/wissen/strom-aus-kernfusion https://www.iter.org/
00:38:03 Markets uneasy over AI's impact on SaaS companies. https://www.reuters.com/business/media-telecom/global-software-stocks-hit-by-anthropic-wake-up-call-ai-disruption-2026-02-04/
00:47:40 Share prices of gaming companies like Take-Two plunge after the release of Google Genie. https://bsky.app/profile/jasonschreier.bsky.social/post/3me7ii5loxs2z
01:02:27 Mo watches: It: Welcome to Derry https://www.imdb.com/title/tt19244304/
01:11:54 Note: Onimusha 2: Samurai's Destiny on Technikquatsch Gaming https://www.youtube.com/watch?v=-8iWiB3DxlM

Voice of the DBA
Expensive CPUs

Voice of the DBA

Play Episode Listen Later Feb 4, 2026 3:22


There have been a lot of features added to the SQL Server platform over the years. Several of these features let us perform functions that are beyond what a database has traditionally been designed to handle. SQL Server has had the ability to send emails, execute Python/R/etc. code, and in SQL Server 2025, we can call REST endpoints. Quite a few of these features (arguably) are more application-oriented than database-oriented. There's nothing inherently wrong with having a server perform some of these functions, and there have been some very creative implementations using these features. I recently ran into one of these examples from Amy Abel, where she shows how to use the new REST endpoint feature to call an AI LLM to generate and send emails from your database server. That's creative, and it's reminiscent of the numerous examples from various experts over the years who demonstrate how these features can be used to accomplish a task. Read the rest of Expensive CPUs

SaaS Fuel
Why Product Teams Miss Revenue Goals | Ryan Debenham | 357

SaaS Fuel

Play Episode Listen Later Jan 27, 2026 52:58


Ryan Debenham, CEO of Grin, shares his unconventional journey from software engineer to leading a nearly billion-dollar creator management platform. In this candid conversation, Ryan reveals how he "accidentally" became a CEO by following challenges rather than titles, and why that mindset shift transformed how he builds products and companies.

He discusses the critical disconnect between engineering and go-to-market teams, the revolutionary potential of AI agents in influencer marketing, and why democratizing influence could unlock a massive untapped market. Ryan also shares insights from his time at Qualtrics (acquired by SAP for $8B) and Route, offering practical wisdom on connecting product teams to revenue outcomes and building AI that feels "alive."

Key Takeaways
[4:30] - The Accidental CEO Path: Ryan explains how becoming a CEO was never his plan—he loved building products but never built companies around them. His career evolved by chasing challenges rather than titles or money.
[10:30] - The Product-to-Company Graveyard: Ryan candidly shares how his early product ideas (including a ride-sharing concept 20 years ago and a photo categorization tool) died because he focused only on building, not on solving the hard business problems.
[12:15] - The Mindset Shift: The biggest change from engineering to CEO? When revenue numbers became Ryan's responsibility, he finally understood what customers truly needed—not just what they said they wanted.
[14:30] - Breaking Down Silos: Ryan discusses why the tension between product, engineering, marketing, and sales "will kill the business" and how he's connecting these departments at the hip.
[19:30] - The Qualtrics Lesson: A powerful story about spending six months building the wrong text analytics product at Qualtrics, despite sitting next to customers repeatedly. The lesson: understanding business needs requires deeper connection than just listening to feature requests.
[26:00] - AI as Electricity: Ryan's compelling analogy comparing LLMs to the development of electricity and CPUs—powerful building blocks that are worthless alone but transformational when paired with the right infrastructure.
[28:30] - Mandatory AI Adoption: Ryan required all engineers at Grin to use AI coding tools. One engineer quit over the pressure but came back, realizing it was a mistake. His prediction: in a few years, you won't get hired as an engineer if you don't know AI tools.
[32:00] - Building Software That's "Alive": Ryan describes Gia, Grin's AI agent that journals daily, runs standups with other agents, creates action items, and can discuss what she's learning and what features should be built next.
[35:00] - The Influencer Marketing Problem: Why Grin's growth stalled—aspirational customers bought the software but failed at influencer marketing because the operational complexity was too high, leading to churn.
[38:30] - The Two-Sided Platform Gap: Most influencer platforms built for merchants and forgot creators. Ryan explains why supporting creators is the most important part of the solution.
[44:30] - Democratizing Influence: Ryan's vision that "everybody is an influencer"—the real opportunity is capturing and rewarding the micro-influence that happens in everyday conversations between millions of people.
[49:00] - The Collision Course: Why affiliate marketing and influencer marketing are merging into something new—it's all about capturing word-of-mouth at different scales.
Tweetable...

Washington AI Network with Tammy Haddad
69: Quantum Computing, Security, and the Creator Economy at CES 2026

Washington AI Network with Tammy Haddad

Play Episode Listen Later Jan 23, 2026 23:02


Recorded at CES 2026, this special episode of the Washington AI Network Podcast examines how quantum computing and AI tools are moving from theory into real-world use. Host Tammy Haddad interviews Pouya Dianat, chief revenue officer of Quantum Computing Inc., about what quantum computing is—and is not—and its implications for encryption, national security, finance, and data centers. Dianat explains why quantum systems are designed to complement classical computing and how quantum processing units will operate alongside CPUs and GPUs. The episode also features a conversation with the Las Vegas stars of YouTube's Iced Coffee Hour, Graham Stephan and Jack Selby.

MLOps.community
How Universal Resource Management Transforms AI Infrastructure Economics

MLOps.community

Play Episode Listen Later Jan 20, 2026 48:21


Wilder Lopes is the CEO and Founder of Ogre.run, working on AI-driven dependency resolution and reproducible code execution across environments.

How Universal Resource Management Transforms AI Infrastructure Economics // MLOps Podcast #357 with Wilder Lopes, CEO / Founder of Ogre.run

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Enterprise organizations face a critical paradox in AI deployment: while 52% struggle to access needed GPU resources with 6-12 month waitlists, 83% of existing CPU capacity sits idle. This talk introduces an approach to AI infrastructure optimization through universal resource management that reshapes applications to run efficiently on any available hardware—CPUs, GPUs, or accelerators.

We explore how code reshaping technology can unlock the untapped potential of enterprise computing infrastructure, enabling organizations to serve 2-3x more workloads while dramatically reducing dependency on scarce GPU resources. The presentation demonstrates why CPUs often outperform GPUs for memory-intensive AI workloads, offering superior cost-effectiveness and immediate availability without architectural complexity.

// Bio
Wilder Lopes is a second-time founder, developer, and research engineer focused on building practical infrastructure for developers. He is currently building Ogre.run, an AI agent designed to solve code reproducibility.

Ogre enables developers to package source code into fully reproducible environments in seconds. Unlike traditional tools that require extensive manual setup, Ogre uses AI to analyze codebases and automatically generate the artifacts needed to make code run reliably on any machine. The result is faster development workflows and applications that work out of the box, anywhere.

// Related Links
Website: https://ogre.run
https://lopes.ai
https://substack.com/@wilderlopes
https://youtu.be/YCWkUub5x8c?si=7RPKqRhu0Uf9LTql

~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Wilder on LinkedIn: /wilderlopes/

Timestamps:
[00:00] Secondhand Data Centers Challenges
[00:27] AI Hardware Optimization Debate
[03:40] LLMs on Older Hardware
[07:15] CXL Tradeoffs
[12:04] LLM on CPU Constraints
[17:07] Leveraging Existing Hardware
[22:31] Inference Chips Overview
[27:57] Fundamental Innovation in AI
[30:22] GPU CPU Combinations
[40:19] AI Hardware Challenges
[43:21] AI Perception Divide
[47:25] Wrap up

Partner Path
E65: Infrastructure for AI-First Teams with Ivan Burazin (Daytona)

Partner Path

Play Episode Listen Later Jan 14, 2026 32:36


This week, we're joined by Ivan Burazin, co-founder of Daytona - a company rethinking developer environments for an AI-native world.

We talk about how Daytona creates real value for developers, why the most advanced agent companies are emerging bottom up, and the idea that agents should be treated as first-class users with reliable access to compute. Ivan also shares some of Daytona's most demanding use cases, including AI scientists in chemistry and pharma running agents inside massive sandboxes with hundreds of CPUs.

We also cover what reliability looks like when customers define success as whether the system works at all, how Daytona maintains sub-90-millisecond latency while spinning up millions of environments per day, and how they support always-on usage with no fixed schedule. Finally, we dig into go-to-market lessons, from putting engineers on the front lines early to why Daytona prioritizes in-person engagement and intentional events as the most authentic way to build trust with developers.

Episode chapters:
1:43 – Market timing and why now
3:18 – Building for developers and dev tools
4:55 – Global developer communities
7:55 – Computers as infrastructure for agents
10:32 – Product-led growth
12:17 – Enterprise use cases and adoption
16:25 – Managing cost, performance, and latency
19:20 – Hiring for resiliency at scale
20:40 – Internal AI use cases at Daytona
24:25 – Creating a bottom-up go-to-market motion
27:42 – Hiring and scaling developer relations
29:55 – Partnerships and ecosystem strategy
31:00 – Quick-fire round

This episode is brought to you by Grata, the leading deal sourcing platform for private equity. Grata's AI powered search, investment grade data, and intuitive workflows help you find and win the right deals faster. Visit grata.com to book a demo.

This episode is also sponsored by Overlap, the AI powered app that uses LLMs to surface the best moments from any podcast. Overlap reads full transcripts, finds the most relevant clips, and stitches them into a personalized stream of insights. Tap into podcasts as a real information source with Overlap 2.0, now available on the App Store.

Crazy Wisdom
Episode #522: The Hardware Heretic: Why Everything You Think About FPGAs Is Backwards

Crazy Wisdom

Play Episode Listen Later Jan 12, 2026 53:08


In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Peter Schmidt Nielsen, who is building FPGA-accelerated servers at Saturn Data. The conversation explores why servers need FPGAs, how these field-programmable gate arrays work as "IO expanders" for massive memory bandwidth, and why they're particularly well-suited for vector database and search applications. Peter breaks down the technical realities of FPGAs - including why they "really suck" in many ways compared to GPUs and CPUs - while explaining how his company is leveraging them to provide terabyte-per-second bandwidth to 1.3 petabytes of flash storage. The discussion ranges from distributed systems challenges and the CAP theorem to the hardware-software relationship in modern computing, offering insights into both the philosophical aspects of search technology and the nuts-and-bolts engineering of memory controllers and routing fabrics.

For more information about Peter's work, you can reach him on Twitter at @PTRSCHMDTNLSN or find his website at saturndata.com.

Timestamps
00:00 Introduction to FPGAs and Their Role in Servers
02:47 Understanding FPGA Limitations and Use Cases
05:55 Exploring Different Types of Servers
08:47 The Importance of Memory and Bandwidth
11:52 Philosophical Insights on Search and Access Patterns
14:50 The Relationship Between Hardware and Search Queries
17:45 Challenges of Distributed Systems
20:47 The CAP Theorem and Its Implications
23:52 The Evolution of Technology and Knowledge Management
26:59 FPGAs as IO Expanders
29:35 The Trade-offs of FPGAs vs. ASICs and GPUs
32:55 The Future of AI Applications with FPGAs
35:51 Exciting Developments in Hardware and Business

Key Insights
1. FPGAs are fundamentally "crappy ASICs" with serious limitations - Despite being programmable hardware, FPGAs perform far worse than general-purpose alternatives in most cases. A $100,000 high-end FPGA might only match the memory bandwidth of a $600 gaming GPU. They're only valuable for specific niches like ultra-low latency applications or scenarios requiring massive parallel I/O operations, making them unsuitable for most computational workloads where CPUs and GPUs excel.

2. The real value of FPGAs lies in I/O expansion, not computation - Rather than using FPGAs for their processing power, Saturn Data leverages them primarily as cost-effective ways to access massive amounts of DRAM controllers and NVMe interfaces. Their server design puts 200 FPGAs in a 2U enclosure with 1.3 petabytes of flash storage and terabyte-per-second read bandwidth, essentially using FPGAs as sophisticated I/O expanders.

3. Access patterns determine hardware performance more than raw specs - The way applications access data fundamentally determines whether specialized hardware will provide benefits. Applications that do sparse reads across massive datasets (like vector databases) benefit from Saturn Data's architecture, while those requiring dense computation or frequent inter-node communication are better served by traditional hardware. Understanding these patterns is crucial for matching workloads to appropriate hardware.

4. Distributed systems complexity stems from failure tolerance requirements - The difficulty of distributed systems isn't inherent but depends on what failures you need to tolerate. Simple approaches that restart on any failure are easy but unreliable, while Byzantine fault tolerance (like Bitcoin) is extremely complex.
Most practical systems, including banks, find middle ground by accepting occasional unavailability rather than trying to achieve perfect consistency, availability, and partition tolerance simultaneously.

5. Hardware specialization follows predictable cycles of generalization and re-specialization - Computing hardware consistently follows "Makimoto's Wave" - specialized hardware becomes more general over time, then gets leapfrogged by new specialized solutions. CPUs became general-purpose, GPUs evolved from fixed graphics pipelines to programmable compute, and now companies like Etched are creating transformer-specific ASICs. This cycle repeats as each generation adds programmability until someone strips it away for performance gains.

6. Memory bottlenecks are reshaping the hardware landscape - The AI boom has created severe memory shortages, doubling costs for DRAM components overnight. This affects not just GPU availability but creates opportunities for alternative architectures. When everyone faces higher memory costs, the relative premium for specialized solutions like FPGA-based systems becomes more attractive, potentially shifting the competitive landscape for memory-intensive applications.

7. Search applications represent ideal FPGA use cases due to their sparse access patterns - Vector databases and search workloads are particularly well-suited to FPGA acceleration because they involve searching through massive datasets with sparse access patterns rather than dense computation. These applications can effectively utilize the high bandwidth to flash storage and parallel I/O capabilities that FPGAs provide, making them natural early adopters for this type of specialized hardware architecture.
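A quick way to feel insight #3 about access patterns: touching the same bytes sequentially versus in random order can time very differently on ordinary hardware. A small, machine-dependent NumPy sketch follows; the array size and stride are chosen arbitrarily for illustration.

```python
# Compare sequential vs. random access over the same working set. The
# same number of bytes is read either way; only the access pattern
# changes. Results vary by machine and say nothing about any specific
# FPGA or server design.
import time
import numpy as np

data = np.zeros(200_000_000, dtype=np.uint8)   # ~200 MB working set
idx_seq = np.arange(0, data.size, 4096)        # one byte per 4 KiB stride
idx_rand = np.random.permutation(idx_seq)      # same bytes, shuffled order

for name, idx in [("sequential", idx_seq), ("random", idx_rand)]:
    t0 = time.perf_counter()
    checksum = int(data[idx].sum())
    dt = time.perf_counter() - t0
    print(f"{name:10s}: {dt * 1e3:.1f} ms (checksum {checksum})")
```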

The Circuit
Ep 148: All the Happenings from CES 2026!

The Circuit

Play Episode Listen Later Jan 12, 2026 58:34


In this episode, Ben Bajarin and Jay Goldberg discuss the highlights from CES 2026, focusing on the significant advancements in robotics, AI infrastructure, and the competitive landscape among major tech companies like NVIDIA, AMD, and Intel. They explore the themes of modularity in data centers, the evolving role of CPUs, and the challenges posed by memory supply constraints. The conversation also touches on the future of autonomous vehicles and the integration of AI in everyday technology, emphasizing the rapid pace of innovation in the tech industry.

PC Perspective Podcast
Podcast #851 - CES 2026 Highlights - Ryzen 9850X3D & 9950X3D2, DLSS 4.5, MSI LIGHTNING, Reboot, Moza and MORE

PC Perspective Podcast

Play Episode Listen Later Jan 9, 2026 84:16


What if we told you that CES did not feature any new GPUs? But it did feature more frames! MSI with LIGHTNING and GPU safeguard, Phison's new controller, and that wily AMD with the new Ryzen 7 9850X3D (and confirmed Ryzen 9 9950X3D2) - whee! Remember the ReBoot computer-generated cartoon? Remember D-Link routers and zero-days? Remember Intel? It's all here! That, and everything old is new again, with old GPUs and CPUs coming back... because RAM.

Thanks again to our sponsor, Copilot Money! Get on your single pane of financial glass and bring order to your money and spending - it's even actually fun to save again. Get the web version and use our code for 26% off at http://try.copilot.money/pcper

Timestamps:
0:00 Intro
00:56 Patreon
01:37 Food with Josh
04:10 AMD announces Ryzen 7 9850X3D
05:41 AMD sort of confirmed the 9950X3D2
07:00 NVIDIA DLSS 4.5
09:34 Intel was at CES
12:50 MSI LIGHTNING returns
14:54 MSI also launching GPU Safeguard Plus PSUs
19:44 WD_Black is now Sandisk Optimus GX Pro
21:54 Phison has the most efficient SSD controller
26:11 ASUS ROG RGB Stripe OLED
28:44 First computer-animated TV show restored
33:29 Podcast sponsor - Copilot Money
34:57 (In)Security Corner
44:32 Gaming Quick Hits
1:06:31 Picks of the Week
1:24:08 Outro

★ Support this podcast on Patreon ★

Chip Stock Investor Podcast
How to Invest In Chip Stocks 2026 -- AI Data Center Networking, Optical, and Silicon Photonics

Chip Stock Investor Podcast

Play Episode Listen Later Dec 27, 2025 21:40


The AI supercycle is expanding beyond just GPUs. In our first episode of the 2026 series, we break down the critical infrastructure that acts as the "roads and freeways" for data: data center networking, optics, and silicon photonics.

Logic chips (like CPUs and GPUs) are the "office" where work gets done, but the network is the "commute" that moves that data. Without advanced cabling, transceivers, and switches, AI clusters simply cannot function.

Find out what companies are involved in this fast-growing market and how to approach investing in them.

Join us on Discord with Semiconductor Insider; sign up on our website: www.chipstockinvestor.com/membership
Supercharge your analysis with AI! Get 15% off your membership with our special link here: https://fiscal.ai/csi/
Sign Up For Our Newsletter: https://mailchi.mp/b1228c12f284/sign-up-landing-page-short-form

Chapters:
00:00 - Investing in Chip Stocks 2026
01:43 - The "Roads" of AI: What is Data Center Networking?
02:46 - Copper vs. Fiber Optics: The Differences
03:59 - Market Size: Logic vs. Optoelectronics Sales
05:32 - The Cable Kings: Amphenol, Corning & CommScope
08:12 - Light Sources: Coherent, Lumentum & Broadcom
11:15 - Signal Integrity: Re-timers (Astera Labs, Credo) & DSPs
15:16 - Transceivers: Nvidia, Jabil & Intel
17:18 - Switching, Routing & The Full Stack (Broadcom, Marvell)
18:48 - Investment Strategy: Niche Players vs. Supply Chain Controllers

If you found this video useful, please make sure to like and subscribe!

*********************************************************

Affiliate links are sprinkled in throughout this video. If something catches your eye and you decide to buy it, we might earn a little coffee money. Thanks for helping us (Kasey) fuel our caffeine addiction!

Content in this video is for general information or entertainment only and is not specific or individual investment advice. Forecasts and information presented may not develop as predicted and there is no guarantee any strategies presented will be successful. All investing involves risk, and you could lose some or all of your principal.

#Semiconductors #ChipStocks #AIInvesting #DataCenter #SiliconPhotonics #Nvidia #Broadcom #OpticalNetworking #TechStocks #Investing2026

Nick and Kasey own shares of Nvidia, Broadcom, Credo, Amphenol and a number of others mentioned in the video.

Eye On A.I.
#308 Christopher Bergey: How ARM Enables AI to Run Directly on Devices

Eye On A.I.

Play Episode Listen Later Dec 19, 2025 53:43


Try OCI for free at http://oracle.com/eyeonai

This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload – where you can run any application, including any AI projects, faster and more securely for less. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today's innovative AI tech companies who upgraded to OCI…and saved.

Why is AI moving from the cloud to our devices, and what makes on-device intelligence finally practical at scale? In this episode of Eye on AI, host Craig Smith speaks with Christopher Bergey, Executive Vice President of Arm's Edge AI Business Unit, about how edge AI is reshaping computing across smartphones, PCs, wearables, cars, and everyday devices.

We explore how Armv9 enables AI inference at the edge, why heterogeneous computing across CPUs, GPUs, and NPUs matters, and how developers can balance performance, power, memory, and latency. Learn why memory bandwidth has become the biggest bottleneck for AI, how Arm approaches its Scalable Matrix Extensions, and what trade-offs exist between accelerators and traditional CPU-based AI workloads.

You will also hear real-world examples of edge AI in action, from smart cameras and hearing aids to XR devices, robotics, and in-car systems. The conversation looks ahead to a future where intelligence is embedded into everything you use, where AI becomes the default interface, and why reliable, low-latency, on-device AI is essential for creating experiences users actually trust.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
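The memory-bandwidth bottleneck mentioned above has a simple upper-bound form: during single-stream decoding, every generated token streams the full model weights through memory, so bandwidth divided by model size caps tokens per second. A sketch with invented, illustrative numbers (not specifications for any Arm device):

```python
# Roofline-style upper bound for on-device LLM decoding: tokens/sec
# cannot exceed memory bandwidth divided by the bytes of weights read
# per token. All figures below are made-up assumptions for illustration.

def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

configs = [
    ("phone-class LPDDR, 3B model @ 4-bit (~1.5 GB)", 50.0, 1.5),
    ("laptop-class LPDDR, 8B model @ 4-bit (~4 GB)", 120.0, 4.0),
]
for name, bw, size in configs:
    print(f"{name}: <= {max_tokens_per_sec(bw, size):.0f} tokens/s")
```

Compute accelerators cannot lift this ceiling; only more bandwidth or a smaller (e.g., more heavily quantized) model can.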

The New Stack Podcast
Do All Your AI Workloads Actually Require Expensive GPUs?

The New Stack Podcast

Play Episode Listen Later Dec 18, 2025 29:49


GPUs dominate today's AI landscape, but Google argues they are not necessary for every workload. As AI adoption has grown, customers have increasingly demanded compute options that deliver high performance with lower cost and power consumption. Drawing on its long history of custom silicon, Google introduced Axion CPUs in 2024 to meet needs for massive scale, flexibility, and general-purpose computing alongside AI workloads. The Axion-based C4A instance is generally available, while the newer N4A virtual machines promise up to 2x better price performance.

In this episode, recorded at KubeCon + CloudNativeCon North America in Atlanta, Andrei Gueletii, a technical solutions consultant for Google Cloud, joined Gari Singh, a product manager for Google Kubernetes Engine (GKE), and Pranay Bakre, a principal solutions engineer at Arm. Built on Arm Neoverse V2 cores, Axion processors emphasize energy efficiency and customization, including flexible machine shapes that let users tailor memory and CPU resources. These features are particularly valuable for platform engineering teams, which must optimize centralized infrastructure for cost, FinOps goals, and price performance as they scale.

Importantly, many AI tasks—such as inference for smaller models or batch-oriented jobs—do not require GPUs. CPUs can be more efficient when GPU memory would sit underutilized or latency demands are low. By decoupling workloads and choosing the right compute for each task, organizations can significantly reduce AI compute costs.

Learn more from The New Stack about the Axion-based C4A:
Beyond Speed: Why Your Next App Must Be Multi-Architecture
Arm: See a Demo About Migrating a x86-Based App to ARM64
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
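The "right compute for each task" argument reduces to cost-per-token arithmetic. The sketch below compares a CPU VM against a GPU VM for a batch job; all prices and throughputs are made-up placeholders, not Google Cloud figures, and the answer flips as the numbers change.

```python
# Compare $/1M tokens for a batch inference job on two instance types.
# The GPU figures assume a small model that leaves the GPU largely
# underutilized; plug in your own benchmarks before drawing conclusions.

def cost_per_million_tokens(price_per_hour: float,
                            tokens_per_sec: float) -> float:
    tokens_per_hour = tokens_per_sec * 3600
    return price_per_hour * 1_000_000 / tokens_per_hour

options = {
    "cpu-vm (hypothetical)": (1.50, 400.0),    # $/hr, tokens/s
    "gpu-vm (hypothetical, underutilized)": (12.00, 2000.0),
}
for name, (price, tps) in options.items():
    print(f"{name}: ${cost_per_million_tokens(price, tps):.2f} per 1M tokens")
```

With these placeholder numbers the CPU VM wins; at high utilization on a large model, the GPU typically would.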

Eye On A.I.
#307 Steven Brightfield: How Neuromorphic Computing Cuts Inference Power by 10x

Eye On A.I.

Play Episode Listen Later Dec 16, 2025 59:59


This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support.

Why is AI so powerful in the cloud but still so limited inside everyday devices, and what would it take to run intelligent systems locally without draining the battery or sacrificing privacy? In this episode of Eye on AI, host Craig Smith speaks with Steve Brightfield, Chief Marketing Officer at BrainChip, about neuromorphic computing and why brain-inspired architectures may be the key to the future of edge AI.

We explore how neuromorphic systems differ from traditional GPU-based AI, why event-driven and spiking neural networks are dramatically more power efficient, and how on-device inference enables faster response times, lower costs, and stronger data privacy. Steve explains why brute-force computation works in data centers but breaks down at the edge, and how edge AI is reshaping wearables, sensors, robotics, hearing aids, and autonomous systems.

You will also hear real-world examples of neuromorphic AI in action, from smart glasses and medical monitoring to radar, defense, and space applications. The conversation covers how developers can transition from conventional models to neuromorphic architectures, what role heterogeneous computing plays alongside CPUs and GPUs, and why the next wave of AI adoption will happen quietly inside the devices we use every day.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
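For readers new to spiking networks, a minimal leaky integrate-and-fire neuron shows the event-driven idea in a few lines: the neuron integrates inputs, leaks charge over time, and emits a spike only when a threshold is crossed, which is where the power savings come from. The parameters are arbitrary illustrative values, not BrainChip's architecture.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: work happens only as
# input events arrive, and output is a sparse train of spike times.
# leak and threshold values are arbitrary for illustration.

def lif(events, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for t, x in enumerate(events):
        v = v * leak + x          # integrate input, leak over time
        if v >= threshold:        # fire...
            spikes.append(t)
            v = 0.0               # ...and reset the membrane potential
    return spikes

inputs = [0.0, 0.6, 0.6, 0.0, 0.0, 0.9, 0.3, 0.0]
print("spike times:", lif(inputs))   # -> [2, 6]
```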

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Scaling Agentic Inference Across Heterogeneous Compute with Zain Asgar - #757

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Dec 2, 2025 48:44


In this episode, Zain Asgar, co-founder and CEO of Gimlet Labs, joins us to discuss heterogeneous AI inference across diverse hardware. Zain argues that the current industry standard of running all AI workloads on high-end GPUs is unsustainable for agents, which consume significantly more tokens than traditional LLM applications. We explore Gimlet's approach to heterogeneous inference, which involves disaggregating workloads across a mix of hardware—from H100s to older GPUs and CPUs—to optimize unit economics without sacrificing performance. We dive into their "three-layer cake" architecture: workload disaggregation, a compilation layer that maps models to specific hardware targets, and a novel system that uses LLMs to autonomously rewrite and optimize compute kernels. Finally, we discuss the complexities of networking in heterogeneous environments, the trade-offs between numerical precision and application accuracy, and the future of hardware-aware scheduling. The complete show notes for this episode can be found at https://twimlai.com/go/757.
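The disaggregation idea can be sketched as a tiny router that sends each workload to the cheapest hardware pool meeting its latency target. The pool names, costs, and latencies below are invented for illustration and are not Gimlet's actual scheduler.

```python
# Toy hardware-aware router: pick the cheapest pool that satisfies a
# latency budget. Real schedulers also weigh queue depth, memory fit,
# precision support, and network topology; all numbers are placeholders.

POOLS = [  # (name, $ per 1M tokens, p50 latency in ms)
    ("old-gpu", 0.40, 900),
    ("cpu", 0.25, 2500),
    ("h100", 2.00, 120),
]

def route(latency_budget_ms: float) -> str:
    ok = [p for p in POOLS if p[2] <= latency_budget_ms]
    if not ok:
        raise ValueError("no pool meets the latency budget")
    return min(ok, key=lambda p: p[1])[0]

print(route(200))    # interactive chat turn -> "h100"
print(route(5000))   # background agent step -> "cpu"
```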

PC Perspective Podcast
Podcast #846 - DDR5 Pricing Ruining DIY PC, Can Intel bLLC Compete with X3D? Unpowered SSDs lose data + MORE!

PC Perspective Podcast

Play Episode Listen Later Nov 30, 2025 67:39


Cloudflare asks Amazon to hold its virtual "beer", HEVC (H.265) support being removed from CPUs... sorta, Steam Machine priced like lobster, and will Intel bLLC compete with AMD 3D V-Cache? But we mostly complain about DDR5 in this exciting episode!

00:00 Intro
00:30 Patreon
01:44 Food with Josh
04:10 We talk about the DDR5 problem for the third week in a row
18:44 TSMC confirmed September power outage at Arizona fab
23:01 Dell and HP removing HEVC from some laptops
26:33 Unpowered SSDs slowly lose data
32:32 Is Intel bLLC really an X3D competitor?
34:13 (in)Security Corner
42:01 Gaming Quick Hits
46:18 Picks of the Week
1:06:12 Outro

★ Support this podcast on Patreon ★

Breaking Analysis with Dave Vellante
Resetting GPU Depreciation — Why AI Factories Bend, But Don't Break, Useful Life Assumptions

Breaking Analysis with Dave Vellante

Play Episode Listen Later Nov 24, 2025 10:56


Much attention has been focused in the news on the useful life of GPUs. While the pervasive narrative suggests GPUs have a short lifespan and operators are "cooking the books," our research suggests that GPUs, like CPUs before them, have a significantly longer useful life than many claim.
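The accounting lever at stake here is straight-line depreciation: stretch the assumed useful life and the annual expense drops proportionally. A toy calculation with a placeholder cluster cost:

```python
# Straight-line depreciation of a GPU cluster under different
# useful-life assumptions. The $10M cost is a placeholder; the point
# is how sensitive annual expense (and thus reported margins) is to
# the life you assume.

cost = 10_000_000  # hypothetical cluster cost, assuming no salvage value

for years in (3, 5, 6):
    annual = cost / years
    print(f"{years}-year life: ${annual:,.0f} depreciation per year")
```

Moving from a 3-year to a 6-year assumption halves the annual charge, which is why the useful-life debate matters so much to AI-factory economics.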

The Pure Report
Tackling Myths Around AI Data and FlashBlade//EXA

The Pure Report

Play Episode Listen Later Nov 18, 2025 39:21


In this episode, we welcome Lead Principal Technologist Hari Kannan to cut through the noise and tackle some of the biggest myths surrounding AI data management and the revolutionary FlashBlade//EXA platform. With GPU shipments now outstripping CPUs, the foundation of modern AI is shifting, and legacy storage architectures are struggling to keep up. Hari dives into the implications of this massive GPU consumption, setting the stage for why a new approach is desperately needed for companies driving serious AI initiatives.

Hari dismantles three critical myths that hold IT leaders back. First, he discusses how traditional storage is ill-equipped for modern AI's millions of small, concurrent files, where metadata performance is the true bottleneck—a problem FlashBlade//EXA solves with its metadata-data separation and single namespace. Second, he addresses the outdated notion that high-performance AI is file-only, highlighting FlashBlade//EXA's unified, uncompromising delivery of both file and object storage at exabyte scale and peak efficiency. Finally, Hari explains that GPUs are only as good as the data they consume, countering the belief that only raw horsepower matters. FlashBlade//EXA addresses this by delivering reliable, scalable throughput, efficient DirectFlash Modules up to 300 TB, and the metadata performance required to keep expensive GPUs fully utilized and models training faster.

Join us as we explore the blind spots in current AI data strategies during our "Hot Takes" segment and recount a favorite FlashBlade success story. Hari closes with a compelling summary of how Pure Storage's complete portfolio is perfectly suited to provide the complementary data management essential for scaling AI. Tune in to discover why FlashBlade//EXA is the non-compromise, exabyte-scale solution built to keep your AI infrastructure running at its full potential.

For more information, visit: https://www.pure.ai/flashblade-exa.html
Check out the new Pure Storage digital customer community to join the conversation with peers and Pure experts: https://purecommunity.purestorage.com/

00:00 Intro and Welcome
04:30 Primer on FlashBlade
11:32 Stat of the Episode on GPU Shipments
13:25 What is FlashBlade//EXA
18:58 Myth #1: Traditional Storage Challenges for AI Data
22:01 Myth #2: AI Workloads are not just File-based
26:42 Myth #3: AI Needs more than just GPUs
31:35 Hot Takes Segment

The Automation Podcast
Siemens SIRIUS ACT with PROFINET (P253)

The Automation Podcast

Play Episode Listen Later Nov 16, 2025 40:33 Transcription Available


Shawn Tierney meets up with Mark Berger of Siemens to learn how Siemens integrates SIRIUS ACT devices (push buttons, selector switches, pilot lights) with PROFINET in this episode of The Automation Podcast. For any links related to this episode, check out the "Show Notes" located below the video. Watch The Automation Podcast from The Automation Blog: Listen to The Automation Podcast from The Automation Blog: The Automation Podcast, Episode 253 Show Notes: Special thanks to Mark Berger of Siemens for coming on the show and sending us a sample! Read the transcript on The Automation Blog: (automatically generated)

Shawn Tierney (Host): Thank you for tuning back in to The Automation Podcast. My name is Shawn Tierney from Insights, and today on the show we have a special treat: we have Mark Berger back on from Siemens to bring us up to speed on SIRIUS ACT. He's going to tell us all about the product, and then we're even going to do a small demo and take a look at it working live. So with that said, let's go ahead and jump into this episode with Mark Berger from Siemens and learn all about their push buttons on PROFINET. Mark, it's been a while since you've been on the show. Thank you for coming back on and agreeing to talk about this.

Mark Berger (Siemens): Oh, thank you so much. I truly appreciate you letting me be on. I appreciate your channel, and I enjoy watching it. And I'm excited to show you some of this great technology. I've got the PowerPoint up here; we'll just do a simple PowerPoint to give you an overview, and then we'll dive into the hardware.

Shawn Tierney (Host): Appreciate it. Thank you.

Mark Berger (Siemens): No problem. So, as we stated, this is SIRIUS ACT over PROFINET, and let me emphasize that the actuators (the push buttons, the e-stops, the selector switches) are all standard when you use these. So if you have those on the shelf, the only thing PROFINET changes is that it removes the normal contact blocks and adds the PROFINET terminal blocks on the back. All the actuators we're showing are just standard actuators from the 22 mm push button line. Easy to use, modern design, performance in action, and extremely rugged and flexible. The 22 mm line is IP69K out of the box, which those in, say, the food and beverage verticals will recognize. That's for direct hose-down and washdown: able to handle high-pressure washing without leaking past the actuator into the panel. So IP69K is great for dust and washdown and hosing, and where you have rain and so forth, protecting against any water passing into the panel.

Introduction-wise, these are the PROFINET push buttons for us. Again, they use the same actuators, the same connections, and so forth, but what we're going to exchange is the terminal blocks. As I stated, IP69K is standard; you don't need any extra covers or anything to fulfill that requirement. And it's insensitive to dust and oil and caustic solutions, you know, like citric acid, where you're hosing down stainless steel parts and so forth.

Now, what we have here is changing out the terminal blocks that have wiring. Usually on a push button you have two wires coming in, and for illuminated ones you have another two wires coming in, and so forth, and going out.
And after you have 20 or 30 push buttons, or even 10 or 15, you've got a substantial bit of wiring or cabling passing from the door over into the main cabinet of your control enclosure. With PROFINET push buttons, we're going to eliminate all that wiring, and in addition eliminate the input and output cards that you would need for your PLC, and take it down to an Ethernet cable, an RJ45 cable, plus 24 volts. That's all that will pass from the cabinet onto the door where you're mounting your push buttons. So: huge savings in wiring cost; we're reducing all the wire outlay. You know, back in the day when I built panels, it was an art how you got all the wires nice and pretty, laid them out and wire-tied them down, just made a piece of art on the backside. And then it was all done, you got it all wired, and of course somebody said, hey, we forgot to add another selector switch. So you had to go back, cut all that stuff, and redo the whole layout. With PROFINET, it's extremely flexible and easy to adapt if you need something more, because you're not taking all that wiring back to the panel, passing it across the hinge of the door and so forth.

Also, with a safety PLC you have PROFIsafe, so we can do e-stops on the door, as you can see here in the picture, and then we can do non-safe applications too. Today we'll be doing just some non-safe applications. The communication, again, is PROFINET; and just to touch on it real quick, we also have our push buttons on IO-Link and on AS-i.

So what is SIRIUS ACT with PROFINET? There we go. The first block, the interface module that you put on the back of your push button, is where the Ethernet is plugged in and your 24 volts is plugged in. Each subsequent push button then gets what we call a terminal module. Between the interface module and a terminal module, or from terminal module to terminal module, you can go up to one meter of cabling, and it's a ribbon cable; we'll show that here shortly. We can do up to 20 terminal modules, for a total of 21 push buttons, and from the first interface module all the way to the last push button you can go up to 10 meters. The interface module also provides the 24-volt power supply. And again, we have the non-safe version, talking just PROFINET, and the safety version, talking PROFIsafe on PROFINET.

With SIRIUS ACT, the safety version can go up to SIL 3 and Performance Level e, as in echo. We also have the standard interface module without safety: you have the PLC, the interface module, and then the subsequent terminal modules. The cabling that goes from the interface module out to the terminal modules is a simple ribbon cable that comes into the back of the terminal modules. The only tool you need is simply a screwdriver: you push the cable into the terminal module and push down. It uses insulation-displacement ("vampire") connections, so there's no stripping of the wires and no mix-up. The indicator, which you can see on the wires here in a minute, is a little red line that shows you which way the cable enters the terminal, and that's it. It's very straightforward.
It's very simple, tool-wise. As I stated, it's just like a normal push button that you'd put on, but we're going to remove the contact block and add the terminal module or the interface module in place of the contact block. Just to emphasize again, we can do PROFIsafe with a safety PLC or safety controller, and we can meet the safety requirements of either the ISO or the IEC specifications out there in the field. Here are some of the part numbers. The first one, of course, is the interface module that has the ability to do PROFIsafe. It also has four digital inputs, one digital output, and one analog input, and we'll talk about that a little more in a few minutes. Then the non-safe version, 24 volts: there are two versions of that one, one with just the standard 24-volt input, and an additional one that has the four digital inputs, one digital output, and one analog input. So there are two different part numbers: one where you don't need the additional digital and analog IO, and one with the additional inputs and outputs. The safety version comes in just the one model. Then you have what we call the terminal modules, and there are three versions. One terminal module is the command module only: it's mounted with two mechanical signaling blocks, so you have two contact blocks built in. Then you have one that's a terminal module with the contact blocks plus an integrated LED, and you pick what color you want the LED to be; you can see the part number changes for red, blue, amber, and so on. And then you have an LED-only module: no contacts, just the LED. With the demo we're showing today, we're using the contact-block-and-LED module and the LED-only module. There are some other accessories with the safety version. There's a memory module where all the configuration is stored: the IP address, the configuration, everything. If something gets broken and you have to replace the interface module, you pull the memory module out, put the new interface module in, plug in the memory module, cycle the power, and it's up and running; all the configuration, the IP address, everything's already there. The interface module does not come with an LED, so you're required to buy this LED right here if you need it, and that's what you use for the interface module. And then, of course, the ribbon cable that goes from the interface module to terminal module, and terminal module to terminal module, comes in five-meter and ten-meter lengths. Okay. So what does it provide for you? Well, I'll be very blunt about the benefits: if it's just one or two buttons on a panel, it won't be that cost effective. Yes, we're reducing the IO, the inputs and outputs, but the savings aren't the best. When you get up to about three or four push buttons, that cost saving is very much realized. And when you go up to 20 push buttons, yes, you're saving a lot of money, especially on the IO cards you're no longer required to have.
And then, of course, all the wiring and the labor: getting it all wired up and doing all the loop checks to make sure that when you push this button, it's wired into the right terminal block on the IO card, and so on. So the break-even is about two to three push buttons; beyond that it becomes very cost effective. Like I said earlier, without PROFINET push buttons it was all the wiring you brought across and landed on your IO cards. With PROFINET push buttons all that goes away, and all you're bringing across that hinge into the door is an Ethernet cable and 24 volts positive and negative. That's it. And then, emphasizing again, we can do PROFIsafe in those push buttons and E-stops. The E-stop can be part of your safety circuit and give you the safety levels you're required to meet, SIL and/or Performance Level, depending on the specification, IEC or ISO, that you're following within your plant. Okay? And then hardware configuration. This is where we step into reducing engineering, helping you get going quicker, and making sure the engineering is done properly. Back in the day, we'd wire up all the wires coming from the push buttons: a selector switch, a start button, a stop button, indicator lights, and so forth. And all those wires look the same. You put labels on them; you may have labeled one wrong and wired it into the wrong input or output card. So there was time spent doing loop checks, trying to confirm: yes, that's coming in on this input byte-dot-bit, and that should be the selector switch. With the PROFINET push buttons we don't have to worry about that, and we're going to demonstrate that in a minute. You also get the full lineup of the push buttons coming into TIA Portal, so you can see the lineup and verify it's the parts that you want. In TIA Portal you can see that the first device is the interface module, and then sequentially the terminal modules that have either just contacts, LED and contacts, or just LEDs. We'll show that momentarily. It's all integrated into TIA Portal, it has a visual representation of all the push buttons, and it's simple and fast to configure. And there's no addressing for it. Some of the products out there have addressing, making sure the address is set right, and so on. This is standardized data management, and it's an extreme time and engineering saver for the user. Shawn Tierney (Host): Well, let me ask you a question about that. If there's no addressing, do the items show up in the order that they're wired? In other words, you're daisy chaining, you're going cable to cable from device to device. Is that the order that they show up? Mark Berger (Siemens): That's exactly right. Shawn Tierney (Host): Okay. Mark Berger (Siemens): So if you don't know which ones are what, you literally run your hand from the interface module, follow that cable, and the next one that shows up visually in TIA Portal will be the one that it lands on first. Perfect. Then there's a cable that leaves that one and goes into the next one, daisy chained, and that's what will be represented in that lineup.
And here in just a minute, we'll show that. Alright, thank you for that question. Okay. Now once I've got it wired up, how do I know that I got it wired properly? We're going to show that here in just a minute, but graphically, you have the ability to see if it's all wired up. You do not need to plug it into the PLC; all it needs is 24 volts. The PLC can come later, and there's no programming; this all comes out of the box. So once you plug it in, if you look at the backside at the terminal modules and the daisy-chained ribbon cable and it's all green, you wired it up properly and it's working properly. But if you see a red light flashing at, pardon me, the interface module, that means a problem has bubbled up. If you have a problem with one of the terminal modules, a push button like number two or three or four, it bubbles up into the interface module to let you know: hey, we've got a problem, can you look to see where it's at? As you see here, we may have a device that's defective, and it bubbles up into the interface module, and a red light lets you know we may have a defective module; something hammered it pretty hard, or it may have been miswired. In the second case, down below, we've got a wiring error: a module doesn't have the green light on the back where there should be one. That means you have a wiring error. Or, if everything works great, it's green lights across the board. Then the next level of this is: is my push button working? You push or actuate the push button or the selector switch, and the green light will flash to let you know that that terminal module or interface module is working properly. We've done our loop checks right there, before we've even plugged it into the PLC or your programmer has come out and sat down to work with it. We can prove that panel is ready to roll and set it aside. And if you've got four or five of the same panel, you can build them all up, power each one, verify it's all green lights across the board, great, set it down, build up another one, and go on from there. So it gives you fast fault detection without any additional equipment or additional people. When we used to do loop checks, you usually had somebody push the button and then yell at the programmer: hey, is this coming in at I0.0? Yeah, I see it. Okay. Then he pushed another one: hey, is this coming in on I0.1? No, it's coming in on I0.3. So that was two people, and more time, to do that loop check, or the ring-out as some people have called it. In this case you don't need to do that, and you'll see why in just a minute. And again, if you do have an interface module that got short-circuited or something hit it, you just pull the memory module out, plug it into the new one, bring in the ribbon cable, cycle the power, and you're up and running. Alright. And then this is just some of the data-handling options, with projects and so forth, with basic setups like filling bottles. What we want to make sure you understand is that you can pick which push buttons work with whatever part of the process you want them to.
So if you have six push buttons out there, and two of them are working on bottle filling while the rest are working on labeling, you can separate those push buttons. Even though they're all tied together via PROFINET, you can use them in different applications across your machine. Shawn Tierney (Host): You're saying if I have multiple CPUs, I could have some buttons and lights work with CPU one, PLC one, and some work with PLC two? Mark Berger (Siemens): Yep. There's handling there; there's programming on the back side that needs to be done, but yes, that can happen. Alright. So, in conclusion: integrated into TIA Portal, which we're going to show here in a minute. A universal system, high flexibility with your digital inputs, digital outputs, and analogs, quick and easy installation, one man, one hand, no special tooling, and a substantial reduction in the wiring and labor to get it going. And then, again, integrated safety if required for your application. So with that, let's switch over to TIA Portal. I've already got a project started; I just called it Project 3. I've already got a PLC in there, our new S7-1200 G2. And I've already built up the panel. Shawn, if you want to show your panel right here. Shawn Tierney (Host): Yeah, let me switch the camera over to mine, and now everybody's seeing my overhead. Now do you want me to turn it on at this point? It's off. Mark Berger (Siemens): Yeah, let's do it. Shawn Tierney (Host): Going to turn it on, and all the lights came on. So we have some push buttons and pilot lights here, but the push buttons are illuminated, and now they've all gone off. Do you want me to show the back now? Mark Berger (Siemens): Yep. So what we did there is show that the LEDs are all working; that happens at the initial powering up of the 24 volts. Now we're going to open up the cabinet and look inside at the backside. And if you remember from the PowerPoint, I said that if everything's wired properly, we'd have all green lights. As you look, all the terminal modules have green lights, so that means it's all been wired properly. If you notice, you see a little red stripe on the ribbon cable. That's an indication. And then if you look at the interface module, Shawn, it says OUT right there at the bottom. There's a little dot, and that dot means that's where the red stripe goes coming out. And then if you look just to the left a little bit, there's another marked IN, and there'd be a red dot underneath that ribbon cable showing you how the red stripe goes into it. Notice that everything's clear, so you can see that the wire gets engaged properly all the way in. Then all you do is take a screwdriver and push down, and the insulation displacement makes the connections for you. There are no cable-stripping tools or anything special needed for that. Another item, while we're looking: in the bottom left-hand corner of that terminal module, you see kind of a T, then a circle, then another T. That's an indicator to let you know that module has two contacts and an LED on the backside. Shawn Tierney (Host): We're talking about right here?
Mark Berger (Siemens): Yep, right there. Shawn Tierney (Host): Okay. Mark Berger (Siemens): So that's an indicator to tell you what type of terminal module it is: that one is two contacts and an LED. And if you look at one with just a circle in the bottom left-hand corner, that means you just have an LED. So you have indicators to show you what you're looking at. Today we're using the LED-only modules and the contact-and-LED combination; I don't have one on your demo that's just the contacts. Shawn Tierney (Host): Now you were telling me about these earlier. Mark Berger (Siemens): Yeah. So if you look there on that second row of the terminal blocks, you have a UV and an AI, and I'll show that in the schematic here in a little bit, but that is a 10-volt output. If you put in a 250-ohm, or 250-kilohm, potentiometer and bring that signal back into AI, you have an analog setpoint coming in that is automatically scaled zero to 1,000 counts for zero to 10 volts. And you can use that as a speed reference for a VFD. It's already there; you don't have to scale it. You can map it to, okay, zero to 1,000 counts means zero to 500 PSI, or zero to 100 feet per second on a conveyor belt, and I'm just pulling numbers out. But that's the only real scaling you have to do: a zero-to-1,000 count is what you'll see. Then you've got four digital inputs you can use, and one digital output. Now, the four: I kind of inquired, why just four? But let's say you have a four-position joystick. You could wire all four positions into that interface module, and then the output could be something else, a local horn that you want, or something like that. So in addition to the push buttons, you also have a small distributed IO block right there in your panel. Shawn Tierney (Host): Which is cool. Yeah. Like you said, maybe you have something else on the panel that doesn't fit in with this line of push buttons and pilot lights, like a joystick, right? That makes a lot of sense. You were saying too, if I push the button, I can test to see if it's working. Mark Berger (Siemens): Correct. Go right ahead. Shawn Tierney (Host): I'm pushing that middle one right there. You can see it blinking now. Mark Berger (Siemens): And that tells you that the contacts have been made, and it's telling you that the contacts work properly. Shawn Tierney (Host): And now I'm pushing the one below it. So that shows me that everything's working. The contacts are working, and we're good to go. Mark Berger (Siemens): Yep. Everything's done. We've done the loop checks. We know this is ready to be plugged into the PLC and handed off to whoever is going to be programming it, which means we'll go to the next step in TIA Portal. Shawn Tierney (Host): Yeah, let me switch back to you, and we're seeing your TIA Portal now. Mark Berger (Siemens): Awesome. Okay. So I've got the PLC, and I've plugged the panel in, either through an Ethernet switch or directly into the PLC. I've just built up that panel; I haven't done anything with it for an IP address yet. Because PROFINET rides on TCP/IP, we need to assign an IP address.
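As a concrete illustration of that scaling, here is a minimal sketch in Python (not Siemens code; the function name and engineering ranges are assumptions based on the zero-to-1,000-count behavior described above) of how a raw analog count maps to an engineering value such as a VFD speed reference:

```python
def scale_counts(raw_counts, eng_min=0.0, eng_max=100.0):
    """Map a 0..1000 analog count (0..10 V at the AI terminal)
    onto an engineering range, e.g. 0..100 feet per second on a
    conveyor, per the demo's example.

    The 0..1000 span mirrors the interface module's pre-scaled
    analog input described above; eng_min/eng_max are whatever
    units your application needs (hypothetical values here).
    """
    raw_counts = max(0, min(1000, raw_counts))  # clamp to the valid span
    return eng_min + (eng_max - eng_min) * raw_counts / 1000.0

# Pot at half travel -> 500 counts -> 5 V -> 50.0 in engineering units
print(scale_counts(500))
```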
And then I'm going to come here to Online Access, because I want to see that it's out there and that I'm talking to it. So I do Update Accessible Devices; it reaches out via the Ethernet port on my laptop. And there's our G2 PLC and its IP address; that's this guy right here. And then I have something out there called an accessible device, and this is its MAC address. I just have those two items on the network, but you could have multiples; as you know, with TIA Portal we can put an entire machine in one project. So I come here, drop that down, and go to Online & Diagnostics. I go online with it, but I don't really have a lot here telling me what's going on yet. But I come here and say Assign IP Address. I key in 192.168.0.10, then our usual 255.255.255.0, and I say Assign IP Address. Give it a second; it goes out and tells it: okay, you're it. Now I want to see if it took, and look right there, it took. And I'm kind of particular, so I do it again just to verify. Yep, everything's done; it's got an IP address. Now I'm going to come up to my project and switch to the network view. Here's my PLC. I'm going to highlight my project. Now, there are two ways I can go about this, and I'm sure, Shawn, you've learned that Siemens lets you do things multiple ways. I could come into my field devices, into my commanding and interface modules, and start building my push button station by hand. But we're going to do a little ooh-and-ah today. I highlight the project, go to Online, and come down here to Hardware Detection and pick PROFINET devices from network. That brings up a screen that says: go out and search PROFINET/Industrial Ethernet, out via the NIC on my laptop, and start the search. Shawn Tierney (Host): For those of you who watched my previous episodes doing the ET 200 IO, this is exactly the same process we used for that. Mark Berger (Siemens): Yep. And it found something out there. I know I gave it the IP address, but it doesn't have a PROFINET name yet, and that's okay; I've got the IP address, and we'll worry about the PROFINET name later. So we'll check-mark this, and this could be multiple items. Shawn Tierney (Host): Mhmm. Mark Berger (Siemens): Okay, so now Add Device. Shawn Tierney (Host): And this is the sweet part. Mark Berger (Siemens): And right here, it's done. It went out, interrogated the interface module, and said, okay, are you there? Yep, I'm here, here's my IP address. And it also shared everything with it; let me come in here and double-click on it now. Shawn Tierney (Host): The real time saver. Mark Berger (Siemens): Yep. And now here are all the push buttons in your station. Let me zoom out; it's at 200 percent, let's go to 100. It has already interrogated the interface module and all the terminal modules to tell me what's in my demo. And again, as you asked earlier, how do I know which one's the next one? You just follow the ribbon cable, and so forth and so on. So that's done; we're good. I'm going to go back to my network view and say: hey, I want you to communicate via PROFINET to there, and that's done.
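For readers following along at home, a quick way to sanity-check an address and mask pair like the one assigned above is Python's standard ipaddress module; a minimal sketch (the PLC address 192.168.0.1 is an assumed value for illustration):

```python
import ipaddress

# The address and subnet mask assigned to the interface module in the demo.
iface = ipaddress.ip_interface("192.168.0.10/255.255.255.0")

print(iface.ip)       # 192.168.0.10
print(iface.network)  # 192.168.0.0/24

# The PLC must sit in the same subnet for the two to talk directly.
plc = ipaddress.ip_address("192.168.0.1")  # hypothetical PLC address
print(plc in iface.network)  # True
```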
And then it also shows you which PLC you're assigning it to, because if we have a big project, we may have four or five of these stations, and you want to know which PLC is the primary for each. We've done that, so I'm going to quickly do a compile. Next I'm going to click here. Now, I could just do a download and let the PROFINET name, which is here, go into it, but instead I'm going to right-click, say Assign Device Name, and hit Update List. It goes and interrogates the network; takes a second. No device name assigned, no PROFINET name. This naming is part of how PROFINET identifies devices deterministically. So I highlight it, say Assign Name, and it's done. Close. So now it has a PROFINET name and an IP address. Now I'm able to go in here, hit Download, and load. We're going to stop the CPU, because we are adding hardware, and hit Finish. I always make sure I'm starting the CPU back up, and then hit Finish. Then I go online, over here to Show Network View, and go online. And I've got green balls and green check marks all over the board, so I'm excited; this works. Everything's done. But now, what about the IO? Your programmer is already talking to it, but now I need to know what the inputs and outputs are. Go back offline, double-click here, and let me quickly point out a couple of things. The interface module's IO tags are in a different spot than the terminal modules'; just a little note. It's right here: you double-click on it, click here, then go to Properties and IO Tags, and there it lists all of its inputs and outputs. But for a terminal module, you just click on it, and under General it's right there in the IO addressing: that's where the bytes start. Then I come here to Tags, and here's the listing. So the tool has automatically allocated the byte and the bit for each of these guys. Click there, there it is; onward and upward. Now, notice the byte: if I click on position four, the byte is three, one less, because addressing is base zero. And if you look in here, it all starts at I4.0. Okay. So I'm going to come here to the selector switch; I've called it SS1, and that's input I2.0. Then I click here, and I'm going to call this one Green Push Button. Notice there are two inputs, because I have one contact here and one contact there, I3.0 and I3.1. Then I go over to the PLC, and it's updated my PLC tag table; there you go, it's in there. So I grab that guy, and, because Portal pushes you to use two monitors, I come over here to the main OB, grab a normally open contact, drag it on, drop it in, then grab the selector switch and drop that right there, grab the green LED and drop that right there, close that out, and compile. Everybody's happy. I download, say yes, okay, and go online. Alright, it's waiting for me to flip that switch, and there you go. If you want to see my screen there, Shawn, the green light has turned on. Shawn Tierney (Host): Yeah, let me switch over to... okay.
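To make that base-zero byte addressing concrete, here is a tiny Python sketch of the mapping described above, where lineup position 4 lands on input byte 3; the one-byte-per-module layout is an assumption made for illustration, not a Siemens rule:

```python
def input_address(position, bit=0):
    """Return an S7-style input address like 'I3.0' for a module position.

    Positions are counted from 1 in the device lineup, while byte
    addressing is base zero, so position 4 maps to byte 3. The fixed
    one-byte-per-module layout here is assumed for the example.
    """
    byte = position - 1
    return f"I{byte}.{bit}"

print(input_address(4))     # I3.0 -- first contact at position 4
print(input_address(4, 1))  # I3.1 -- second contact on the same module
```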
Bring up your... alright. And could you switch it back off now? Mark Berger (Siemens): Yeah, no problem. So there we go: we switch it off, we switch it on. Now I want to show you something kind of cool. Say I turn that off, I come back here, and I go offline. I have an indicator light that needs to flash, to let the operator know there's something here they need to attend to. We used to put in some type of timer for that, right? Shawn Tierney (Host): Mhmm. Mark Berger (Siemens): So instead of that, I come back down to my tab, go to the hardware config, double-click here, go to Module Parameters, drop this down, and set it at two hertz. Also, just to point out, I can take a normally open contact and a normally closed contact and switch them; you see right here. And I can control the brightness of the LED, if it has an LED; it's all configured right in the hardware. So once I've done that, do a quick compile; I always compile and then download. So we download that, hit Load and Finish. Okay, here we go: turn that on, and now it's flashing. Shawn Tierney (Host): That's great. So you have a timer built in; if you need to flash, you don't have to go get a clock bit or create your own timer. Plus, if it's a button, you can change the contacts from normally open to normally closed. That is very cool. Mark Berger (Siemens): Yep. And that is PROFINET push buttons. As I stated, let me quickly pull that up; remember, you pointed this out just a few minutes ago. Here is the wiring diagram for that. Here's the back of the module with the terminal blocks, and it shows you that you just wire in that variable resistor, a potentiometer. You see M, there's the 10 volts, and the signal comes back in on AI. And that guy is right here. Excellent. So you come here, go to Properties and IO Tags, and it comes in on input word IW64 in the IO tags, and I could call that Pot. And now you have a potentiometer that you can use as a speed reference for your VFD. That is very cool. Engineering efficiency: we reduced the wiring, we don't need all the IO cards, and we have the diagnostics. And to emphasize: each of these modules has a name, and you can change those if you'd like, because this is your diagnostic string. So if something goes wrong here, it would come up and say, for example, Commanding_LED module_2; or you can call it Start Conveyor PB instead. You double-click here, go to General, change it, and see, this changed it here too. That would be your diagnostic string to let you know if that button got damaged or isn't working properly. Shawn Tierney (Host): You know, I wanted to ask you too: let's say I needed two potentiometers on the front of the enclosure. Could I put another interface module in the system, even if it didn't have any push buttons or pilot lights on it, just to grab some more IO? Mark Berger (Siemens): Yes, sir. I have a customer that uses these as small IO blocks. Shawn Tierney (Host): Yeah, I mean, if you just needed a second pot, it might make sense to buy another interface module and bring it in rather than buying an analog card, right? Assuming the resolution and everything was appropriate for your application. But that's very cool.
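For contrast, here is roughly the logic that the built-in 2 Hz flash parameter saves you from writing yourself; a minimal Python sketch (real PLC code would normally live in ladder or SCL, and the set_led callable is hypothetical):

```python
import time

def software_flasher(set_led, frequency_hz=2.0, duration_s=5.0):
    """Toggle an LED output at the given frequency, the way you'd
    otherwise derive a flash from a timer or clock bit in the PLC.

    set_led is a callable standing in for writing the output
    (hypothetical; a real PLC would drive the LED terminal module).
    """
    half_period = 1.0 / (2.0 * frequency_hz)  # 2 Hz -> 250 ms on, 250 ms off
    state = False
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        state = not state
        set_led(state)
        time.sleep(half_period)

software_flasher(lambda on: print("LED", "ON" if on else "off"))
```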
You know, it really goes in line with all the videos we've done recently looking at ET 200 IO, all the different flavors and types. And walking through here, I'm especially thankful that it reads in all the push buttons and their positions and pilot lights. Because if you have this on your desk and you're doing your first project, you can save a lot of dragging and dropping and searching through the hardware catalog just by reading it in, just like we can read in a rack of ET 200SP IO. Mark Berger (Siemens): Yep. Engineering efficiency: reducing wiring, reducing time in front of the PC to get things up and running. You saw how quickly, with just a simple push button and a simple start, we turned that on, and off to the races we went. Shawn Tierney (Host): Well, Mark, I really want to thank you. Was there anything else we wanted to cover before we close out the show? Mark Berger (Siemens): Nope, that's just about it. I think we've given your viewers a little bit to think about. I appreciate the time, and I really appreciate you allowing me to show this. I think this is a really efficient way of going about using our push buttons, getting everybody's projects done in a timely manner, with cost savings along the way. Shawn Tierney (Host): Well, I want to thank you for taking the time out of your busy day, not only to put together a little demo like you have for me to use here in the school, but also to come on and show our audience how to use this. And I want to thank our audience. This episode was actually prompted by one of you out there calling in or writing in; I think it was on YouTube, saying, hey, could you cover the PROFINET push buttons from Siemens? I didn't even know they had them. So thanks to the viewers out there for your feedback that helps guide what you want to see. And, Mark, this would not be possible without your expertise. Thank you for coming back on the show; I really appreciate it. Mark Berger (Siemens): Thank you, Shawn. All the best. Shawn Tierney (Host): I hope you enjoyed that episode. I want to thank Mark for taking time out of his busy schedule to put together that demo and presentation for us and really bring us up to speed on SIRIUS ACT. And I want to thank the user out there who put a comment on one of my previous videos saying, hey, did you know Siemens has this? Because I wouldn't have known otherwise. I try to read the comments every day, or at least every two days, so I appreciate you all, wherever you are: YouTube, The Automation Blog, Spotify, iTunes, Google Podcasts, wherever you're listening. Thank you for tuning in. With next week being Thanksgiving, we'll have a pause in the show, then we have some more shows in December, and we're already filming episodes for next year, so I'm looking forward to releasing all those for you. And if you didn't know, I also do another podcast called The History of Automation. Right now it's only available on video platforms: YouTube, LinkedIn, and The Automation Blog. Hopefully someday we'll do it on audio as well. We're meeting with some of the real legends in automation, people who worked on the original PLCs and original HMIs, up through more modern-day systems.
So it's just been a blast having these folks on to talk about the history of automation. If you need something to listen to during Thanksgiving week, or maybe during the holidays, check out The History of Automation; again, right now it's only available on YouTube, The Automation Blog, and LinkedIn, but I think you'll enjoy it. And since I won't be back next week, I want to wish you all a very happy Thanksgiving. I want to thank you, as always, for tuning in and listening, and I wish you all good health and happiness. Until next time, my friends, peace ✌️
If you enjoyed this content, please give it a Like, and consider sharing a link to it, as that is the best way for us to grow our audience, which in turn allows us to produce more content.

Cloud Wars Live with Bob Evans
AWS Infrastructure to Power OpenAI's AI Workloads Under $38 Billion Agreement

Cloud Wars Live with Bob Evans

Play Episode Listen Later Nov 12, 2025 2:42


In today's Cloud Wars Minute, I delve into OpenAI's $38 billion partnership with AWS, giving Amazon a major role in powering and scaling OpenAI's AI workloads.
Highlights:
0:03 — OpenAI and AWS have announced a multi-year strategic partnership valued at $38 billion for AWS. This deal will enable AWS to provide the infrastructure necessary to support the operation and scaling of OpenAI's AI workloads. OpenAI is currently utilising computing resources through AWS, which include hundreds of thousands of NVIDIA GPUs and the capability to scale up to tens of millions of CPUs.
01:02 — The infrastructure rollout for OpenAI includes architecture optimised for maximum AI processing efficiency and performance, with clusters designed to support a variety of workloads such as inference for ChatGPT and model training. This latest deal is yet another staggering example of the demand for AI services — a demand that companies like OpenAI must invest billions in to keep up with the pace.
01:55 — OpenAI recently signed several significant deals with technology partners, including a remarkable $300 billion agreement with Oracle. While that figure might seem outrageous, it puts the $38 billion into a more relatable context. One thing is clear: wherever you stand in the AI revolution, whatever your role is — just make sure that you have one, because this unprecedented growth is touching every corner of the business world. Visit Cloud Wars for more.

AWS for Software Companies Podcast
Ep168: Scaling Agentic Workloads: Why Reliable Infrastructure is Non-Negotiable for Enterprise AI by Anyscale

AWS for Software Companies Podcast

Play Episode Listen Later Nov 7, 2025 24:57


** AWS re:Invent 2025 Dec 1-5, Las Vegas - Register Here! **
Learn how Anyscale's Ray platform enables companies like Instacart to supercharge their model training while Amazon saves heavily by shifting to Ray's multimodal capabilities.
Topics Include:
• Ray originated at UC Berkeley when PhD students spent more time building clusters than ML models
• Anyscale now launches 1 million clusters monthly with contributions from OpenAI, Uber, Google, Coinbase
• Instacart achieved 10-100x increase in model training data using Ray's scaling capabilities
• ML evolved from single-node Pandas/NumPy to distributed Spark, now Ray for multimodal data
• Ray Core transforms simple Python functions into distributed tasks across massive compute clusters
• Higher-level Ray libraries simplify data processing, model training, hyperparameter tuning, and model serving
• Anyscale platform adds production features: auto-restart, logging, observability, and zone-aware scheduling
• Unlike Spark's CPU-only approach, Ray handles both CPUs and GPUs for multimodal workloads
• Ray enables LLM post-training and fine-tuning using reinforcement learning on enterprise data
• Multi-agent systems can scale automatically with Ray Serve handling thousands of requests per second
• Anyscale leverages AWS infrastructure while keeping customer data within their own VPCs
• Ray supports EC2, EKS, and HyperPod with features like fractional GPU usage and auto-scaling
Participants:
• Sharath Cholleti – Member of Technical Staff, Anyscale
See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
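To give a concrete picture of the "Ray Core transforms simple Python functions into distributed tasks" topic above, here is a minimal sketch using Ray's public API (a toy example, not taken from the episode):

```python
import ray

ray.init()  # start a local Ray runtime; on a cluster this connects instead

@ray.remote
def square(x):
    # An ordinary Python function, now schedulable across the cluster.
    return x * x

# Launch tasks in parallel; each .remote() call returns a future.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```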

Cyber Security Today
Ransomware Insider Threats, AI Vulnerabilities, and Major Security Gaffes

Cyber Security Today

Play Episode Listen Later Nov 5, 2025 10:14


In this episode of Cybersecurity Today, host Jim Love dives into several shocking security lapses and emerging threats. Highlights include ransomware negotiators at Digital Mint accused of being behind attacks, a new AI vulnerability that exploits Windows' built-in stack, and a misuse of OpenAI's API for command and control in malware operations. Additionally, AMD confirms a flaw in its Zen 5 CPUs that could lead to predictable encryption keys, and the Louvre faces scrutiny after a major theft reveals poor password practices and maintenance failures. The episode underscores the importance of basic security measures like strong passwords and regular audits despite advanced technological systems in place.
00:00 Introduction and Sponsor Message
00:48 Ransomware Negotiators Turned Hackers
02:08 AI Stack Vulnerabilities in Windows
04:04 Backdoor Exploits OpenAI's API
05:24 AMD's Encryption Key Flaw
06:59 Louvre Heist and Security Lapses
08:24 Conclusion and Call to Action

Follow The Brand Podcast
Beyond the Binary: The Quantum Mindset with Farai Mazhandu and Grant McGaugh

Follow The Brand Podcast

Play Episode Listen Later Nov 3, 2025 45:20 Transcription Available


Imagine if your computer could explore a landscape of possibilities all at once, using the same rules that make electrons behave in surprising ways. That's the mental pivot Farai, a quantum physicist and teacher, helps us make as we break down what quantum computing really is and where it actually wins. We trade hype for clarity, showing how superposition, entanglement, and interference become practical tools when classical methods hit walls.
We walk through the real stakes: modeling complex materials to build safer batteries and corrosion-resistant coatings, accelerating drug discovery by simulating chemistry where properties emerge, and tackling massive optimization problems that govern airport gates, delivery routes, and supply chains. Farai explains why quantum machines are not replacements for CPUs or GPUs but new teammates in a hybrid stack, each part doing what it does best. The goal is targeted advantage, not universal speedups, and the payoff arrives when the search space explodes beyond classical reach.
Along the way, we zoom out to nature as our design mentor. Bacteria that fix nitrogen more efficiently than factories, plants that capture sunlight better than our best solar cells, human brains that run powerful cognition on twenty watts: these examples aren't trivia; they are roadmaps for engineering. By learning from natural intelligence and combining it with quantum algorithms, we can cut energy waste, shorten R&D cycles, and unlock better outcomes across industry and public services. Farai also shares his work leading the Africa Quantum Consortium, proving that the next wave of innovation is global, collaborative, and grounded in education.
If you care about the future of computing, climate tech, logistics, and medicine, this conversation will sharpen your lens. Listen, subscribe, and share with someone who still thinks quantum is just sci-fi. Then tell us: which real-world problem would you optimize first?
Thanks for tuning in to this episode of Follow The Brand! We hope you enjoyed learning about the latest trends and strategies in Personal Branding, Business and Career Development, Financial Empowerment, Technology Innovation, and Executive Presence. To keep up with the latest insights and updates, visit 5starbdm.com. And don't miss Grant McGaugh's new book, First Light — a powerful guide to igniting your purpose and building a BRAVE brand that stands out in a changing world. - https://5starbdm.com/brave-masterclass/ See you next time on Follow The Brand!

The Hardware Unboxed Podcast
Intel CPUs Should Be $150

The Hardware Unboxed Podcast

Play Episode Listen Later Nov 1, 2025 70:31


Episode 87: We revisit some discussion from last week, as we've found more dodgy stuff Microsoft has done, before chatting about the current situation Intel is in with CPUs. Intel isn't anywhere near as competitive against AMD now as AMD was with Ryzen when Intel was dominant. (Note: This podcast was recorded before the recent AMD RDNA 2 driver decision; we'll discuss that in the future.)
CHAPTERS
00:00 - Intro
00:29 - Microsoft Does Dodgy Stuff Again
11:34 - Intel CPUs when AMD is Dominant vs AMD CPUs when Intel is Dominant
21:35 - The Discounts Aren't Enough
34:53 - Platform Longevity is Crucial
41:33 - Platform Support is Always Better
1:04:34 - Updates From Our Boring Lives
SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw
SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed
LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social
Hosted on Acast. See acast.com/privacy for more information.

TechLinked
Ryzen renames CPUs, Some Discord accounts hacked, Australia sues Microsoft + more!

TechLinked

Play Episode Listen Later Oct 28, 2025 9:47


Timestamps:
0:00 it's almost halloween i guess
0:19 AMD renames, rebadges older laptop CPUs
1:17 RedTiger-based Discord account hack
2:20 Australia sues Microsoft
3:31 War Thunder!
4:24 QUICK BITS INTRO
4:34 Fujitsu laptops with Blu-ray drives
5:21 Cooler Master walks back repair instructions
6:12 NHTSA investigates Tesla's Mad Max mode
7:03 Microsoft files $4.7 billion in OpenAI losses under 'other'
8:03 Mushrooms as memristors!
NEWS SOURCES: https://lmg.gg/mXOat
Learn more about your ad choices. Visit megaphone.fm/adchoices

The Circuit
EP 139: Intel Earnings, Anthropic TPUs, Challenges for AWS

The Circuit

Play Episode Listen Later Oct 27, 2025 54:03


In this episode, Ben Bajarin and Jay Goldberg discuss Intel's recent earnings report, highlighting a sense of stability in the market compared to previous downturns. They explore the demand for CPUs, particularly in the enterprise sector, and the implications of upcoming product launches. The conversation shifts to Intel's foundry developments, where they express optimism about new manufacturing processes and customer engagement. They also analyze the competitive landscape of AI compute infrastructure, particularly focusing on Amazon's challenges with its Trainium chips and the implications of Anthropic's partnership with Google. Finally, they delve into the future of AI agents, discussing the current limitations and potential advancements needed for these technologies to become viable.

The Acid Capitalist podcasts
Acid Breath: America in Sepia

The Acid Capitalist podcasts

Play Episode Listen Later Oct 16, 2025 45:36


ARM Ascends, Oil Drifts, Queens Endures
I open on macro static and shutdown fog, a strange steadiness where the market beat goes on. The Beige Book whispers fractures: three Fed districts up, five flat, four softening, a recalibration more than a roar. The feature turns to ARM, where the data center bottleneck is power, not code. ARM sells the blueprint, cutting CPU energy use perhaps by half, and the live question is simple: can it win 50 percent of data center CPUs? Then energy's riddle: oil sits near $56, where it was in 2005, with about 1.5 trillion barrels of proven reserves setting the rough scale. I close with Queens County, a Tony Soprano thrift that became my acid test and first great trade; the name fades, but the fuse remains.
If this hit the mark, tap 5

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Dataflow Computing for AI Inference with Kunle Olukotun - #751

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Oct 14, 2025 57:37


In this episode, we're joined by Kunle Olukotun, professor of electrical engineering and computer science at Stanford University and co-founder and chief technologist at SambaNova Systems, to discuss reconfigurable dataflow architectures for AI inference. Kunle explains the core idea of building computers that are dynamically configured to match the dataflow graph of an AI model, moving beyond the traditional instruction-fetch paradigm of CPUs and GPUs. We explore how this architecture is well-suited for LLM inference, reducing memory bandwidth bottlenecks and improving performance. Kunle reviews how this system also enables efficient multi-model serving and agentic workflows through its large, tiered memory and fast model-switching capabilities. Finally, we discuss his research into future dynamic reconfigurable architectures, and the use of AI agents to build compilers for new hardware. The complete show notes for this episode can be found at https://twimlai.com/go/751.
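To give a flavor of the dataflow idea described here, execution driven by a model's graph rather than a fetched instruction stream, here is a toy Python sketch that fires graph nodes as their inputs become ready; this is a conceptual illustration only, not SambaNova's architecture:

```python
# Toy dataflow execution: each node fires as soon as its inputs are ready,
# mirroring how a dataflow architecture maps a model graph onto hardware.
graph = {
    "x":    (lambda: 3.0, []),                  # input node
    "w":    (lambda: 2.0, []),                  # weight node
    "mul":  (lambda x, w: x * w, ["x", "w"]),
    "bias": (lambda: 1.0, []),
    "add":  (lambda m, b: m + b, ["mul", "bias"]),
    "relu": (lambda a: max(0.0, a), ["add"]),   # output node
}

results = {}
def fire(node):
    if node not in results:  # fire each node exactly once
        fn, deps = graph[node]
        results[node] = fn(*(fire(d) for d in deps))
    return results[node]

print(fire("relu"))  # 7.0
```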

Paul's Security Weekly
Bad Crypto, Zombie CPUs, Y2K38, Park Mobile, Redis, Red Hat, Deloitte, Aaran Leyland.. - SWN #518

Paul's Security Weekly

Play Episode Listen Later Oct 7, 2025 28:47


Bad Crypto, Blood Thirsty Zombie CPUs, Y2K38, Park Mobile, Palo Alto, Redis, Red Hat, Deloitte, Aaran Leyland, and more on the Security Weekly News. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-518

PC Perspective Podcast
Podcast #838 - Snapdragon X2 Elite Extreme, Another Unlikely Intel Investor, CRT Revivals, Brave Security, EA Buyout + MORE!

PC Perspective Podcast

Play Episode Listen Later Oct 4, 2025 85:33


Matching up the audio this week for a change of pace! That Snapdragon X2 Elite Extreme sometimes compares favorably, there's a new Kindle Scribe, and you will never guess who's coming to the Intel investment party. Also, Microsoft extends security updates for Windows 10 if you live in the right places, and they're also looking into micro-channel cooling? All this and so much more!
00:00 Intro
00:44 Patreon
02:33 Food with Josh
05:58 Snapdragon X2 Elite Extreme benchmarks
11:32 Qualcomm wins final battle with Arm over Oryon
14:23 Amazon Kindle Scribe lineup now bigger, offers first color model
18:10 LG has world's first 6K TB5 display
21:54 Apple might invest in Intel?
26:48 Intel 13th and 14th Gen price hike
32:03 Microsoft gives in on Windows 10 at the 11th hour - sort of
36:42 Microsoft also exploring tiny channels on CPUs for microfluidic cooling
42:16 Podcast sponsor Zapier
43:36 (In)Security Corner
53:52 Gaming Quick Hits
1:06:25 Picks of the Week
1:23:57 Outro
★ Support this podcast on Patreon ★

The Tech Blog Writer Podcast
3433: AI Trading Without Lag: EZ Trading Computers on Building the Right Setup

The Tech Blog Writer Podcast

Play Episode Listen Later Sep 27, 2025 37:36


When we think about what separates winning traders from those who struggle, we usually picture strategies, indicators, or a bit of insider know-how. But what if the biggest edge has been sitting on your desk all along? In this episode, I sit down with Eddie Z, also known as Russ Hazelcorn, the founder of EZ Trading Computers and EZBreakouts. With more than 37 years of experience as a trader, stockbroker, technologist, and educator, Eddie has built his career around one mission: helping traders cut through noise, avoid expensive mistakes, and get the tools they need to stay competitive in a fast-moving market.
Eddie breaks down the specs that actually matter when building a trading setup, from RAM to CPUs to data feeds, and exposes which so-called "upgrades" are nothing more than overpriced fluff. We also dig into the rise of AI-powered trading platforms and bots, and what traders can do today to prepare their machines for the next wave. As Eddie points out, a lagging system or a missed feed isn't just an inconvenience; it can be the difference between a profitable trade and a costly loss.
Beyond the hardware, we explore the broader picture. Rising tariffs and global supply chain disruptions are already reshaping the way traders access technology, and Eddie shares practical steps to avoid being caught short. He also explains why many experienced traders overlook their machines as a "secret weapon" and how quick, targeted fixes can transform reliability and performance in under an hour.
This conversation goes deeper than specs and gadgets. Eddie opens up about the philosophy behind the EZ-Factor, his unique approach that blends decades of Wall Street expertise with cutting-edge technology to simplify trading and help people succeed. We talk about his ventures, including EZ Trading Computers, trusted by over 12,000 traders, and EZBreakouts, which delivers actionable daily and weekly picks backed by years of experience.
For traders looking to level up, whether you're just starting out or managing multiple screens in a professional setting, this episode is packed with insights that can help you sharpen your edge. Eddie's perspective is clear: the right machine, the right mindset, and the right knowledge can make trading not only more profitable but, as he likes to put it, as "EZ" as possible.
*********
Visit the Sponsor of Tech Talks Network: Land your first job in tech in 6 months with the Software QA Engineering Bootcamp from Careerist: https://crst.co/OGCLA

DH Unplugged
DHUnplugged #770: The Money Tree

DH Unplugged

Play Episode Listen Later Sep 24, 2025 64:20


Rate cut - rates up? Diet Stocks - losing weight. Good news/bad news - all good for markets. Bessent for Fed Chair and Treasury Secretary? PLUS we are now on Spotify and Amazon Music/Podcasts!
Click HERE for Show Notes and Links
DHUnplugged is now streaming live - with listener chat. Click on link on the right sidebar.
Love the Show? Then how about a Donation?
Follow John C. Dvorak on Twitter
Follow Andrew Horowitz on Twitter
Warm-Up
- BRAND new server, all provisioned - much faster DH site
- Need a new CTP stock!
- New Clear Stocks! - To the Sky
- Money Tree Market
- TikTok news
Markets
- Rate cut - rates up
- Diet Stocks - losing weight
- Good news/bad news - all good for markets
- StubHub IPO Update
SELL Rosh Hashanah - Buy Yom Kippur?
Vanguard Issues?
- Got a call this morning... gent in NY...
NEW CLEAR - On Fire!
- Have you seen the returns on some of these stocks? YTD:
- URA (Uranium ETF) up 75%
- SMR (NuScale) up 164%
- OKLO (Oklo) up 518%
- CCJ (Cameco) up 65%
TikTok Nonsense
- President Donald Trump said in an interview that aired Sunday that conservative media baron Rupert Murdoch and his son Lachlan are likely to be involved in the proposal to save TikTok in the United States.
- Trump also said that Oracle executive chairman Larry Ellison and Dell Technologies CEO Michael Dell are also likely to be involved in the TikTok deal.
More TikTok
- White House Press Secretary Karoline Leavitt says TikTok's algorithm will be secured, retrained, and operated in the U.S. outside of ByteDance's control; Oracle (ORCL) will serve as TikTok's security provider; President Trump will sign the TikTok deal later this week.
- What does that mean, and will it be the same TikTok?
- Who is doing the retraining??????? SO MANY QUESTIONS
MEME ALERT!
- Eric Jackson, a hedge fund manager who partly contributed to the trading explosion in Opendoor, unveiled his new pick Monday: Better Home & Finance Holding Co.
- Jackson said his firm holds a position in Better Home but didn't disclose its size.
- Shares of Better Home soared 46.6% on Monday after Jackson touted the stock on X. At one point during the session, the stock more than doubled in price.
- The New York-based mortgage lender jumped more than 36% last week.
Intel
- INTC getting even more money - now NVDA pouring in $5B.
- Nvidia and Intel announced a partnership to jointly develop multiple generations of custom data center and PC products. Intel will manufacture new x86 CPUs customized for Nvidia's AI infrastructure, and also build system-on-chips (SoCs) for PCs that integrate Nvidia's RTX GPU chiplets.
- Both the US government and NVDA got BELOW-market pricing on their shares.
NVDA $$
- Nvidia is investing in OpenAI. On September 22, 2025, Nvidia announced a strategic partnership with OpenAI, which includes an investment of up to $100 billion.
- The agreement will help deploy at least 10 gigawatts of Nvidia systems, which will include millions of its GPUs. The first phase is scheduled to launch in the second half of 2026, using Nvidia's Vera Rubin platform.
Autism Link
- Shares of Kenvue (KVUE) are trading lower, largely due to reports from the White House and HHS suggesting a forthcoming warning linking prenatal use of acetaminophen (Tylenol's active ingredient) to autism risk.
- Investors are concerned that such a warning could lead to regulatory action, changes in labeling requirements, litigation risk, or reduced demand for one of KVUE's key products. It's estimated that Tylenol accounts for approximately 7-9% of KVUE's total revenue.
- The company has strongly denied any scientific basis for the link, but the uncertainty itself is hurting sentiment.
- Finally, this also comes on top of recent weak financial performance: KVUE posted a Q2 revenue decline of 4% and cut its full-year guidance on August 7.
- Lawsuits to follow...
Pfizer