Podcasts about Dram

  • 1,457 podcasts
  • 3,119 episodes
  • 41m avg duration
  • 5 weekly new episodes
  • Mar 16, 2026 latest


Latest podcast episodes about Dram

WALL STREET COLADA
Rebound on expensive oil, $NBIS soars on a deal with $META, $MU expands HBM, and $BABA launches an AI agent.


Mar 16, 2026 · 4:16


SHOW SUMMARY

  • Futures in the green after a weak week driven by the energy shock, but the tape is still dominated by Iran and the Strait of Hormuz
  • Crude stays volatile, with Brent near $106 and WTI around $96, while Trump pressures allies to reopen shipping routes
  • $NBIS soars on an AI infrastructure deal with $META, $MU accelerates DRAM and HBM capacity in Taiwan, and $BABA readies an enterprise AI agent built on Qwen

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Retrieval After RAG: Hybrid Search, Agents, and Database Design — Simon Hørup Eskildsen of Turbopuffer


Mar 12, 2026 · 60:32


Turbopuffer came out of a reading app. In 2022, Simon was helping his friends at Readwise scale their infra for a highly requested feature: article recommendations and semantic search. Readwise was paying ~$5k/month for their relational database, and vector search would cost ~$20k/month, making the feature too expensive to ship. In 2023, after mulling over the problem from Readwise, Simon decided he wanted to "build a search engine," which became Turbopuffer.

We discuss:

• Simon's path: Denmark → Shopify infra for nearly a decade → "angel engineering" across startups like Readwise, Replicate, and Causal → turbopuffer almost accidentally becoming a company
• The Readwise origin story: building an early recommendation engine right after the ChatGPT moment, seeing it work, then realizing it would cost ~$30k/month for a company spending ~$5k/month total on infra, and getting obsessed with fixing that cost structure
• Why turbopuffer is "a search engine for unstructured data": Simon's belief that models can learn to reason, but can't compress the world's knowledge into a few terabytes of weights, so they need to connect to systems that hold truth in full fidelity
• The three ingredients for building a great database company: a new workload, a new storage architecture, and the ability to eventually support every query plan customers will want on their data
• The architecture bet behind turbopuffer: going all in on object storage and NVMe, avoiding a traditional consensus layer, and building around the cloud primitives that only became possible in the last few years
• Why Simon hated operating Elasticsearch at Shopify: years of painful on-call experience shaped his obsession with simplicity, performance, and eliminating state spread across multiple systems
• The Cursor story: launching turbopuffer as a scrappy side project, getting an email from Cursor the next day, flying out after a 4am call, and helping cut Cursor's costs by 95% while fixing their per-user economics
• The Notion story: buying dark fiber, tuning TCP windows, and eating cross-cloud costs because Simon refused to compromise on architecture just to close a deal faster
• Why AI changes the build-vs-buy equation: it's less about whether a company can build search infra internally, and more about whether they have time, especially if an external team can feel like an extension of their own
• Why RAG isn't dead: coding companies still rely heavily on search, and Simon sees hybrid retrieval (semantic, text, regex, SQL-style patterns) becoming more important, not less
• How agentic workloads are changing search: the old pattern was one retrieval call up front; the new pattern is one agent firing many parallel queries at once, turning search into a highly concurrent tool call
• Why turbopuffer is reducing query pricing: agentic systems are dramatically increasing query volume, and Simon expects retrieval infra to adapt to huge bursts of concurrent search rather than a small number of carefully chosen calls
• The philosophy of "playing with open cards": Simon's habit of being radically honest with investors, including telling Lachy Groom he'd return the money if turbopuffer didn't hit PMF by year-end
• The "P99 engineer": Simon's framework for building a talent-dense company, rejecting by default unless someone on the team feels strongly enough to fight for the candidate

Simon Hørup Eskildsen
• LinkedIn: https://www.linkedin.com/in/sirupsen
• X: https://x.com/Sirupsen
• About: https://sirupsen.com/about

turbopuffer
• https://turbopuffer.com/
Timestamps

00:00:00 The PMF promise to Lachy Groom
00:00:25 Intro and Simon's background
00:02:19 What turbopuffer actually is
00:06:26 Shopify, Elasticsearch, and the pain behind the company
00:10:07 The Readwise experiment that sparked turbopuffer
00:12:00 The insight Simon couldn't stop thinking about
00:17:00 S3 consistency, NVMe, and the architecture bet
00:20:12 The Notion story: latency, dark fiber, and conviction
00:25:03 Build vs. buy in the age of AI
00:26:00 The Cursor story: early launch to breakout customer
00:29:00 Why code search still matters
00:32:00 Search in the age of agents
00:34:22 Pricing turbopuffer in the AI era
00:38:17 Why Simon chose Lachy Groom
00:41:28 Becoming a founder on purpose
00:44:00 The "P99 engineer" philosophy
00:49:30 Bending software to your will
00:51:13 The future of turbopuffer
00:57:05 Simon's tea obsession
00:59:03 Tea kits, X Live, and P99 Live

Transcript

Simon Hørup Eskildsen: I don't think I've said this publicly before, but I just called Lachy and was like, look, Lachy, if this doesn't have PMF by the end of the year, we'll just return all the money to you. Justine and I don't wanna work on this unless it's really working. So we want to give it the best shot this year, and we're really gonna go for it. We're gonna hire a bunch of people. We're just gonna be honest with everyone. When I don't know how to play a game, I just play with open cards. Lachy was the only person that didn't freak out. He was like, I've never heard anyone say that before.

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

swyx: Hello, hello. We're still recording in the Kernel studio for the first time, very excited. And today we are joined by Simon Eskildsen of turbopuffer. Welcome.

Simon Hørup Eskildsen: Thank you so much for having me.

swyx: turbopuffer has really gone on a huge tear, and I do have to mention that you're one of the newest members of the Danish Aarhus mafia. There are a lot of legendary programmers that have come out of there, like Bjarne Stroustrup, Rasmus Lerdorf, the V8 team, and the Google Maps team. You're mostly a Canadian now, but isn't that interesting? There's such a strong Danish presence.

Simon Hørup Eskildsen: Yeah, I was writing a post not that long ago about the influences. So I grew up in Denmark, right? I left when I was 18 to go to Canada to work at Shopify, and I would still say that I feel more Danish than Canadian. Hence also the weird accent; I can't say "th." You know, my wife is also Canadian. I think one of the things in Denmark is there's just such a ruthless pragmatism, and there's also a big focus on aesthetics: people really care about what things look like. Canada has a lot of attributes, the US has a lot of attributes, but there have been lots of great things to carry over. I don't know what's in the water in Aarhus, though. And I don't know that I could be considered part of the mafia quite yet, compared to the phenomenal individuals we just mentioned. Rasmus Lerdorf is also Danish Canadian. I don't know where he lives now, but he's the PHP guy.

swyx: Yeah. And obviously Tobi is German, but moved to Canada as well.
Yes. Like, this is like importing talent. That is an interesting talent move.

Alessio: I would love to get from you a definition of turbopuffer, because you could be a vector DB, which is maybe a bad word now in some circles, or you could be a search engine. Let's just start there, and then we'll maybe run through the history of how you got to this point.

Simon Hørup Eskildsen: For sure. Yeah. So turbopuffer is, at this point in time, a search engine, right? We do full-text search and we do vector search, and that's really what we're specialized in. If you're trying to do much more than that, this might not be the right place yet, but turbopuffer is all about search. The other way that I think about it is that we can take all of the world's knowledge, all of the exabytes and exabytes of data that there is, and we can use those tokens to train a model, but we can't compress all of that into a few terabytes of weights. We compress into a few terabytes of weights how to reason with the world, how to make sense of the knowledge, but we have to somehow connect that to something external that actually holds the knowledge in full fidelity and truth. And that's the thing that we intend to become. That's a very holier-than-thou kind of phrasing, right? But being the search engine for unstructured data is the focus of turbopuffer at this point in time.

Alessio: And let's break it down. Some people might say, well, didn't Elasticsearch already do this? And then some other people might say, is this search on my data? Is this closer to RAG than to, like, an Exa, like a public search thing? How do you segment the different types of search?

Simon Hørup Eskildsen: The way that I generally think about this is, there are a lot of database companies, and if you wanna build a really big database company, you need a couple of ingredients to be in the air, which only happens roughly every 15 years. You need a new workload. You basically need the ambition that every single company on earth is gonna have data in your database, multiple times over. Look at a company like Oracle: I don't think you can find a company on earth with a digital presence that doesn't somehow have some data in an Oracle database. And I think at this point that's also true for Snowflake and Databricks, 15 years later, or even more than that: there's not a company on earth that doesn't indirectly or directly consume Snowflake or Databricks or any of the big analytics databases. And I think we're in that kind of moment now. I don't think you're gonna find a company over the next few years that doesn't directly or indirectly have all their data available for search and connected to AI. So you need that new workload, something happening that causes this, and that new workload is connecting very large amounts of data to AI. The second condition for building a big database company is that you need some new underlying change in the storage architecture that was not possible for the databases that came before you. If you look at Snowflake and Databricks, it was commoditized, massive fleets of HDDs; that just wasn't in the air in the nineties, so we didn't build those systems then. S3 and so on was not around.
And I think the architecture that is now possible, that wasn't possible 15 years ago, is to go all in on NVMe SSDs. It requires a particular type of architecture for the database that is difficult to retrofit onto the databases that are already there, including the ones you just mentioned. The second part is to go all in on object storage, more so than we could have done 15 years ago. We don't have a consensus layer; we don't really have anything. In fact, you could turn off all the servers that turbopuffer has and we would not lose any data, because we have gone completely all in on object storage. And this means that our architecture is just so simple. So that's the second condition, the first being a new workload that means every company on earth, either indirectly or directly, is using your database, and the second being some new storage architecture that means the companies that came before you can't do what you're doing. I think the third thing you need to do to build a big database company is that, over time, you have to implement more or less every query plan on the data. What that means is that you can't just get stuck in "this is the one thing a database does." It has to be ever-evolving, because when someone has data in the database, over time they expect to be able to ask it more or less every question. So you have to do that to take the storage architecture to the limit of what it's capable of. Those are the three conditions.

swyx: I just wanted to get a little bit of the motivation, right? So you left Shopify; you were like principal engineer, infra guy. You were also head of kernel labs inside of Shopify, right? And then you consulted for Readwise, and that kind of gave you the idea. I just wanted you to tell that story. Maybe you've told it before, but just introduce people to the new workload, the sort of aha moment for turbopuffer.

Simon Hørup Eskildsen: For sure. So yeah, I spent almost a decade at Shopify. I was on the infrastructure team from the fairly early days, around 2013. At the time it felt like it was growing so quickly; all the metrics were doubling year on year. Compared to what companies are contending with today, that's very cute growth; I feel like some companies are seeing that month over month. Of course, Shopify has been compounding for a very long time now. But I spent a decade doing that, and the majority of it was just: make sure the site is up today, and make sure it's up a year from now. A lot of that was really just, you know, the Kardashians would drive very, very large amounts of traffic to Shopify as they were rotating through all the merch and building out their businesses, and we just needed to make sure we could handle that. Sometimes these were events with a million requests per second. We had our own data centers back in the day, and we were moving to the cloud, and there was so much sharding work that we were doing. So I spent a decade just scaling databases, 'cause that's fundamentally the most difficult thing to scale about these sites. The database that was the most difficult for me to scale during that time, and the most aggravating to be on call for, was Elasticsearch. It was very, very difficult to deal with.
And I saw a lot of projects that were just being held back in their ambition by using it.

swyx: And I mean, self-hosted?

Simon Hørup Eskildsen: Self-hosted, yeah, and the commercial one. This is like 2015, right? So it's a very particular vintage; it's probably better at a lot of these things now. It was difficult to contend with, and I just kept thinking about it: it's an inverted index, it should be good at these kinds of queries. We often couldn't get it to do exactly what we needed, or basically couldn't get Lucene to do it, like expose Lucene raw to what we needed. So that was just something we did on the side and panic-scaled when we needed to, but not a particular focus of mine. So I left, and when I left I wasn't sure exactly what I wanted to do. I mean, I'd spent a decade inside of the same company. I'd grown up there; I started working there when I was 18.

swyx: You only did Rails?

Simon Hørup Eskildsen: Yeah. I mean, yeah, Rails. And he's a Rails guy. Love Rails. So good.

Alessio: We all wish we could still work in Rails.

swyx: I know, I know. But I tried learning Ruby; it's just too much, like too many options to do the same thing. I know there's a way to do it.

Simon Hørup Eskildsen: I love it. I don't know that I would use it now, given Claude Code and Cursor and everything, but still, if I'm just sitting down and writing a little code, that's how I think. But anyway, I left, and I talked to a couple of companies and was like, I need to see a little bit more of the world here to know what I'm gonna focus on next. And so what I decided is I was gonna do what I called "angel engineering," where I just hopped around my friends' companies in three-month increments and helped them out with something, vested a bit of equity, and solved some interesting infrastructure problem. So I worked with a bunch of companies at the time. Readwise was one of them, Replicate was one of them, Causal — I dunno if you've tried it, it's a spreadsheet engine where you can do distributions; they sold recently. We used it for FP&A at turbopuffer. So a bunch of companies like that, and it was super fun. And so when the ChatGPT moment happened, I was with Readwise for a stint. We were preparing for the Reader launch, which is where you queue articles and read them later, and I was just getting their Postgres up to snuff, which basically boils down to tuning autovacuum. So I was doing that, and then this happened, and we were like, oh, maybe we should build a little recommendation engine and some features to try to hook in the LLMs. They were not that good yet, but it was clear there was something there. And so I built a small recommendation engine: okay, let's take the articles that you've recently read, embed all the articles, and then do recommendations. It was good enough that when I ran it for one of the co-founders of Readwise, I got articles about having a child. I'm like, oh my God, I didn't know they were having a child. I wasn't sure what to do with that information, but the recommendation engine was good enough that it was suggesting articles about that.
And so there were recommendations, and it actually worked really well. But this was a company that was spending maybe five grand a month in total on all their infrastructure, and when I did the napkin math on running the embeddings of all the articles, putting them into a vector index, and putting it in prod, it was gonna be like 30 grand a month. That just wasn't tenable, right? Readwise is a proudly bootstrapped company, and paying 30 grand of infrastructure for one feature versus five for everything else just wasn't tenable.

swyx: Did you say it grows by feature? So five to 30, what's the scaling factor? It scales by the number of articles that you embed?

Simon Hørup Eskildsen: It does, but what I meant is: five grand for all of the rest — the Heroku dynos, Postgres, everything else — and then 30 grand for this one feature, right? Which is, what other articles are related to this one. So it was just too much to power everything. Their budget would've been maybe a few thousand dollars, which still would've been a lot. And so we put it in the bucket of: okay, we're gonna do that later; we'll wait for the cost to come down. And that haunted me. I couldn't stop thinking about it. I was like, okay, there's clearly some latent demand here. If the cost had been a tenth, we would've shipped it. And this was really the only data point that I had; I didn't go out and talk to anyone else. So I started reading — I couldn't help myself. I didn't know what a vector index was; I barely knew how to generate the vectors. This is early 2023, and there was a lot of hype about vector databases. They were raising a lot of money, and I really didn't know anything about it. So I was trying these little models, fine-tuning them, just trying to get a lay of the land. I have this GitHub repository called napkin-math, and in it there are just rows of numbers: this is how much bandwidth — you can do 25 gigabytes per second on average to DRAM, you can do five gigabytes per second of writes to an SSD, and for S3, how much bandwidth you can drive per connection, all of these numbers. And I was just sitting there like, why hasn't anyone built a database where you just put everything on object storage, and then you puff it into NVMe when you use the data, and you puff it into DRAM if you're querying it live? This seems fairly obvious, and the only real downside is that if you go all in on object storage, every write will take a couple hundred milliseconds of latency. But from there it's really all upside: the first query takes half a second.
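The arithmetic behind that napkin math is easy to make concrete. Here is a minimal sketch in Python using the two throughput figures Simon quotes above; the S3 per-connection bandwidth and round-trip latency are rough assumptions of mine, not numbers from his repo:

```python
# Storage-tier napkin math in the spirit of sirupsen/napkin-math.
# DRAM and SSD figures are the ones quoted above; the S3 ones are assumptions.

DRAM_READ_BW = 25e9   # bytes/sec to DRAM (quoted above)
SSD_WRITE_BW = 5e9    # bytes/sec of writes to an NVMe SSD (quoted above)
S3_CONN_BW = 100e6    # assumed ~100 MB/s per S3 connection
S3_RTT = 0.050        # assumed ~50 ms of latency per S3 round trip

GiB = 2**30

def transfer_seconds(nbytes, bandwidth, round_trips=0, rtt=0.0, connections=1):
    """Rough time to move nbytes: round-trip latency plus streaming time."""
    return round_trips * rtt + nbytes / (bandwidth * connections)

print(f"1 GiB from DRAM:         {transfer_seconds(GiB, DRAM_READ_BW):6.3f}s")
print(f"1 GiB from S3, 1 conn:   {transfer_seconds(GiB, S3_CONN_BW, 1, S3_RTT):6.3f}s")
print(f"1 GiB from S3, 64 conns: {transfer_seconds(GiB, S3_CONN_BW, 1, S3_RTT, 64):6.3f}s")
```

On these assumed numbers, one S3 connection needs roughly ten seconds for a GiB while 64 parallel connections need well under one, which is the whole argument for issuing huge batches of outstanding requests in as few round trips as possible.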
And it sort of occurred to me: the architecture is really good for that. It's really good for object storage, it's really good for NVMe SSDs, and you just couldn't have done that 10 years ago. Back to what we were talking about before: you really have to build a database where you have as few round trips as possible. This is how CPUs work today, it's how NVMe SSDs work, and it's how S3 works: you want to have a very large number of outstanding requests. Basically, go to S3 and do a thousand requests asking for data in one round trip, wait for that, get it, make a new decision, do it again, and try to do that a maximum of maybe three times. But no databases were designed that way. With NVMe SSDs you can drive within a very low multiple of DRAM bandwidth if you use them that way, and same with S3: you can fully max out the network card, which generally is not maxed out, and get very, very good bandwidth. But no one had built a database like that. So I was like, okay, well, can't you just take all the vectors and plot them in the proverbial coordinate system, get the clusters, put a file on S3 called clusters.json, and then put another file for every cluster — cluster_1.json, cluster_2.json — and it's two round trips, right? You get the clusters, you find the closest clusters, and then you download the closest N cluster files. And you could do this in two round trips.

swyx: And you do nearest neighbors locally.

Simon Hørup Eskildsen: Yes. And you would build this file, right? It's ultra-simplistic, but it's not a far shot from what the first version of turbopuffer was. Why hasn't anyone done that?
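That clusters.json layout is simple enough to sketch end to end. Below is a toy Python version under stated assumptions: a plain dict stands in for the S3 bucket, the k-means is deliberately naive, and all names (build_index, search, cluster_N.json) are illustrative rather than anything turbopuffer actually ships. What mirrors Simon's description is the shape: one centroid object, one object per cluster, and exactly two fetch phases per query.

```python
import json
import numpy as np

def build_index(store, vectors, n_clusters=4, iters=10):
    """Partition vectors into clusters; write one object per cluster."""
    # toy k-means; a real system would use a proper clustering library
    centroids = vectors[np.random.choice(len(vectors), n_clusters, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((vectors[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if (assign == c).any():
                centroids[c] = vectors[assign == c].mean(axis=0)
    store["clusters.json"] = json.dumps({"centroids": centroids.tolist()})
    for c in range(n_clusters):
        ids = np.where(assign == c)[0]
        store[f"cluster_{c}.json"] = json.dumps(
            {"ids": ids.tolist(), "vectors": vectors[ids].tolist()})

def search(store, query, k=5, n_probe=2):
    # round trip 1: fetch the centroid file, pick the n_probe nearest clusters
    centroids = np.array(json.loads(store["clusters.json"])["centroids"])
    nearest = np.argsort(((centroids - query) ** 2).sum(-1))[:n_probe]
    # round trip 2: fetch those cluster objects (in parallel in a real system)
    ids, vecs = [], []
    for c in nearest:
        blob = json.loads(store[f"cluster_{c}.json"])
        ids += blob["ids"]
        vecs += blob["vectors"]
    # exact nearest-neighbor scan locally over the fetched candidates
    top = np.argsort(((np.array(vecs) - query) ** 2).sum(-1))[:k]
    return [ids[i] for i in top]

# usage: a dict standing in for the bucket
store = {}
build_index(store, np.random.rand(1000, 8).astype(np.float32))
print(search(store, np.random.rand(8).astype(np.float32)))
```

In a real engine the second phase's fetches would go out concurrently, which is where the round-trip arithmetic above pays off.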
Alessio: In that moment, from a workload perspective, were you thinking this was gonna be a read-heavy thing, because they're doing recommendations? Is it the fact that writes are so expensive? Or that with AI you're actually not writing that much?

Simon Hørup Eskildsen: At that point I hadn't really thought too much about — well, no, actually, it was always clear to me that there were gonna be a lot of writes, because at Shopify the search clusters were doing, I don't know, tens or hundreds of queries per second, 'cause you just have to have a human sit and type them in. But I don't know how many updates there were per second; I'm sure it was in the millions, written into the cluster. So I always knew there was something like a 10-to-100x ratio of writes to reads. In the Readwise use case there'd probably be even fewer reads per write: there's just a lot of churn on the amount of stuff going through versus the amount of queries. But I wasn't thinking too much about that. I was mostly thinking about the fundamentally cheapest way to build a database in the cloud today using the primitives you have available. And this is it, right? Now you have one machine, and say you have a terabyte of data in S3: you pay the $200 a month for that, and then maybe five to ten percent of that data needs to be on NVMe SSDs, and less than that in DRAM. You're paying very, very little to inflate the data.

swyx: By the way, when you say no one else has done that, would you consider Neon to be on a similar path, in terms of being sort of S3-first and separating compute and storage?

Simon Hørup Eskildsen: Yeah, what I meant with that is just building a completely new database. I don't know if we were the first. I just looked at the napkin math and was like, this seems really obvious, so I'm sure a hundred people came up with it at the same time, like the light bulb and every invention ever. It was just in the air. I think Neon was first to it, and they retrofitted it onto Postgres. They built this whole architecture where you have it in memory and then you sort of mmap back to S3, and I think that was very novel at the time to do for OLTP. But I hadn't seen a database that was truly all in, not retrofitting it: a database built purely for this, with no consensus layer, even using compare-and-swap on object storage to do consensus. I hadn't seen anyone go that all in. And I'm sure there was someone that did it before us, I don't know; I was just looking at the napkin math.

swyx: And when you say consensus layer, are you strongly relying on S3's strong consistency? You are, okay. So that is your consensus layer.

Simon Hørup Eskildsen: It is the consistency layer. And this is something most people don't realize, but S3 only became consistent in December of 2020.

swyx: I remember this coming out during COVID, and it was just like a free upgrade. They just announced it — strong consistency, guys — and people were like, okay, cool.

Simon Hørup Eskildsen: And I'm sure they'd had it in prod for a while and were just like, it's done, right? And people were like, okay, cool. But that's a big moment. NVMe SSDs were also not in the cloud until around 2017, right? So in 2017 NVMe SSDs arrive, and people are like, okay, cool, there's one SKU that does this, whatever; it takes a few years. And then the second thing is S3 becomes consistent in 2020. So now it means you don't have to have this big FoundationDB or ZooKeeper or whatever sitting there contending with the keys, which is what Snowflake and others have to do.

swyx: So that's gone.

Simon Hørup Eskildsen: Exactly, just gone. Just push it to the, you know, however many hundreds of people they have working on S3. Solved. And then, compare-and-swap was not in S3 at this point in time.

swyx: By the way, I don't know what that is, so maybe you wanna explain.

Simon Hørup Eskildsen: Yes. So what compare-and-swap is: you can imagine that if you have a database, it might be really nice to have a file called metadata.json. And metadata.json could say things like, hey, these keys are here, and this file means that — there's lots of metadata you need to operate a database, and that's the simplest way to do it. But now you might have a lot of servers that wanna change the metadata. They might have written a file and want the metadata to contain that file, so you have a hundred nodes contending for this metadata.json. What compare-and-swap allows you to do is: you download the file, you make your modifications, and then you write it back only if it hasn't changed while you were modifying it; if it has, you retry. You just have this retry loop. Now, you can imagine that if you have a hundred nodes doing that, it's gonna be really slow, but it will converge over time. That primitive was not available in S3 — it wasn't available in S3 until late 2024 — but it was available in GCP.
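The retry loop Simon describes is only a few lines. Here is a sketch assuming a hypothetical object-store client that returns a version with every read and rejects a write when that version has moved on — which is what GCS generation preconditions give you, and what S3 gained with conditional (If-Match) writes in late 2024. The in-memory MemStore exists only to make the example runnable.

```python
import json

class CasConflict(Exception):
    """Raised when the object changed between our read and our write."""

class MemStore:
    # stand-in for a bucket: maps key -> (bytes, version)
    def __init__(self):
        self.objs = {}
    def get(self, key):
        return self.objs[key]
    def put_if(self, key, blob, expected_version):
        current = self.objs.get(key)
        if current is not None and current[1] != expected_version:
            raise CasConflict(key)
        version = 0 if current is None else current[1] + 1
        self.objs[key] = (blob, version)

def update_metadata(store, key, mutate, max_retries=100):
    """Optimistic read-modify-write: commit only if nobody wrote in between."""
    for _ in range(max_retries):
        blob, version = store.get(key)
        meta = json.loads(blob)
        mutate(meta)                        # apply our change in memory
        try:
            store.put_if(key, json.dumps(meta).encode(), expected_version=version)
            return meta                     # committed: nobody raced us
        except CasConflict:
            continue                        # lost the race; re-read and retry
    raise RuntimeError("metadata update did not converge")

# usage: register a freshly written segment file in metadata.json
store = MemStore()
store.objs["metadata.json"] = (b'{"segments": []}', 0)
print(update_metadata(store, "metadata.json",
                      lambda m: m["segments"].append("seg_000123")))
```

A hundred nodes hammering the same metadata.json this way is slow but converges, which is exactly the trade Simon says they accepted to avoid running ZooKeeper.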
The real story is certainly not that I sat down and galaxy-brained it, like, okay, we're gonna start on GCS, and S3 is gonna get it later. It was really not that; we got really lucky. We started on GCP because Shopify ran on GCP, so that was the platform I was most familiar with, and I knew the Canadian team there 'cause I'd worked with them at Shopify, so it was natural for us to start there. And when we started building the database, we really thought we had to build a consensus layer, have a ZooKeeper or something to do this. But then we discovered the compare-and-swap. It's like, oh, we can kick the can: we'll just do metadata.json, and it's fine. It's probably fine. And we just kept kicking the can until we had very, very strong conviction in the idea. And then we kind of hinged the company on the fact that S3 probably was gonna get this. It started getting really painful in mid-2024, 'cause we were closing a deal with Notion, actually, which was running in AWS, and we're like, trust us, you really want us to run this in GCP. And they're like, no, I don't know about that — we're running everything in AWS. And the latency across clouds was so big, and we had so much conviction, that we bought dark fiber between the AWS region in Oregon and GCP, at the interexchange. They were like, we've never seen a startup do this — what's going on here? And we're just like, no, we don't wanna do it any other way. We were tuning TCP windows, everything, to get the latency down, because we had such high conviction in not doing a metadata layer on S3. So those were the three conditions, right? Compare-and-swap to do metadata, which wasn't in S3 until late 2024; S3 being consistent, which didn't happen until December 2020; and then NVMe SSDs, which didn't land in the cloud until 2017.

swyx: I mean, in some ways it's a very big cloud success story that you were able to put this all together, but you're also doing things like buying dark fiber. That actually is something I've never heard.

Simon Hørup Eskildsen: I mean, it's very common when you're a big company, right — connecting your own data center or whatever. But it was uniquely a pain with Notion, because if you're buying in Ashburn, Virginia, like us-east, the GCP and AWS data centers are within a millisecond of each other on the public exchanges. But in Oregon, uniquely, the GCP data center sits a couple hundred kilometers east of Portland, and the AWS region sits in Portland, but the network exchange they go through is through Seattle. So it's a full 14 milliseconds or something like that. So we were like, okay, we have to go through an exchange in Portland.

swyx: And you'd rather do this than run your own ZooKeeper?

Simon Hørup Eskildsen: Yes, way rather. It doesn't have state. I don't want state in two systems. And I think all of that is informed by the fact that Justine, my co-founder, and I had been on call for so long, and the worst outages are the ones where you have state in multiple places that's not syncing up. So it really came from a very pure source of pain, of imagining what we would be okay being woken up at 3:00 AM about, and having something in ZooKeeper was not one of them.
swyx: You're talking to, like, a Notion or something. Do they care, or do they just...

Simon Hørup Eskildsen: They care about latency.

swyx: Latency, cost. That's it.

Simon Hørup Eskildsen: They just cared about latency, right, and we just absorbed the cost. We're like, we have high conviction in this; at some point we can move them to AWS, so we'll buy the fiber, it doesn't matter. And it's like $5,000. Usually when you buy fiber you buy multiple lines, and we're like, we can only afford one, but we tested that when it fails over to the public internet, it's still super smooth. And so we did a lot of that. Anyway, yeah.

Alessio: You can imagine talking to the GCP rep, and it's like, no, we're gonna buy, because we know we're gonna churn from you guys and go to AWS in like six months, but in the meantime we'll do this.

Simon Hørup Eskildsen: I mean, this workload still runs on GCP, for what it's worth, 'cause it was just so reliable. So it was never about moving off GCP; it was just about giving Notion the latency that they deserved, and we didn't want 'em to have to care about any of this. They were also like, oh, egress is gonna be bad. It was like, okay, screw it, we're just gonna VPC-peer with you in AWS; we'll eat the cost. Whatever needs to be done.

Alessio: And what were the actual workloads? Because when you think about AI, 14 milliseconds really doesn't matter in the scheme of a model generation.

Simon Hørup Eskildsen: Yeah. We were told the latency that we had to beat, right? So we were just looking at the traces, and then sort of hand-drawing, looking at the trace and thinking, what are the other extensions of the trace? And there's a lot more to it, because if you have 14 versus seven milliseconds, you can fit in another round trip. So we had to tune TCP to try to send as much data in every round trip, prewarm all the connections. There are a lot of things that compound from having these kinds of round trips, but in the grand scheme it was just: well, we have to beat the latency of whatever we're up against.

swyx: Notion is a database company; they could have done this themselves. They do lots of database engineering themselves. How do you even get in the door? Just talk through that.

Simon Hørup Eskildsen: Last time I was in San Francisco, I was talking to one of the engineers who was one of our champions at Notion, and they were just trying to make sure that the per-user cost matched the economics that they needed. The way I think about it is: I have to earn a return on whatever the clouds charge me, and then my customers have to earn a return on that. It's very simple, right? There has to be gross margin all the way up, and that's how you build the product.
And so our customers have to like the set of trade-offs that turbopuffer makes, and if they're happy with that, that's great.

swyx: Do you feel like you're competing with build-internally versus buy, or buy versus buy?

Simon Hørup Eskildsen: Yeah — and sorry, this was all to build up to your question. One of the Notion engineers told me that they'd sat, probably with a napkin, and drawn out: why hasn't anyone built this? And then they saw turbopuffer, and it was like, well, it's literally that. And I think AI has also changed the buy-versus-build equation, in the sense that it's not really about "can we build it," it's about "do we have time to build it." I think they felt like, okay, if this is a team that can do that, and they feel enough like an extension of our team, then we can go a lot faster, which would be very, very good for them. And I mean, they put us through the test; we had some very, very long nights to do that POC. And they were really our second big customer after Cursor, which also was a lot of late nights.

swyx: Yeah. Should we go into that story, the Cursor story? They credit you a lot for working very closely with them. I've heard this story from Sualeh's point of view, but I'm curious what it looks like from your side.

Simon Hørup Eskildsen: I actually haven't heard it from Sualeh's point of view, so maybe you can cross-reference it. The way that I remember it: the day after we launched — I'd worked the whole summer on the first version. Justine wasn't part of it yet, 'cause I didn't tell anyone that summer that I was working on this. I was just locked in on building it, because it's very easy otherwise to confuse talking about something with actually doing it. And so I was like, I'm not gonna do that; I'm just gonna do the thing. I launched it, and at this point turbopuffer is a Rust binary running on a single eight-core machine in a tmux session, and me deploying it was looking at the request log and then Ctrl-C'ing it, like, okay, there are no requests, let's upgrade the binary. It was literally the scrappiest thing you could imagine, and it was on purpose, because just like at Shopify, we did that all the time: we ran things in tmux to begin with, before something had at least an inkling of PMF. It was like, is anyone even gonna hear about this? And one of the Cursor co-founders, Arvid, reached out. The Cursor team are all IOI and IMO contenders, right? So they just speak in bullet points and facts. It was this amazing email exchange of: this is how many QPS we have, this is what we're paying, this is where we're going. And so we're just conversing in bullet points. I tried to get a call with them a few times, but they were really riding the PMF wave here, late 2023. And one time Sualeh emails me at, what was it, like 4:00 AM Pacific time, saying, hey, are you open for a call now? I'm on the East Coast, so it was like 7:00 AM. I was like, yeah, great, sure, whatever. And we just started talking, and something — I didn't know anything about sales, but something just compelled me.
I have to go see this team. There's something here. So I went to San Francisco and I went to their office, and the way that I remember it is that Postgres was down when I showed up at the office. Did Sualeh tell you this? No? Okay. So Postgres was down, and they were distracted with that, and I was trying my best to see if I could help in any way. I knew a little bit about databases — back to tuning autovacuum; it was like, I think you have to tune autovacuum. So we talked about that, and then that evening we talked about what it would look like to work with us. And I just said, look, we're all in; we will do whatever you tell us. They migrated everything over the next week or two, and we reduced their cost by 95%, which I think kind of fixed their per-user economics, and it solved a lot of other things. This is also when I asked Justine to come on as my co-founder; she was the best engineer I ever worked with at Shopify, she lived two blocks away, and we were just like, okay, we're gonna get this done. And we did. We helped them migrate, and we worked like hell over the next month or two to make sure that we were never an issue. And that was the Cursor story.

swyx: And is code a different workload than normal text? Is it just text? Is it the same thing?

Simon Hørup Eskildsen: Yeah, so Cursor's workload is basically: they will embed the entire code base. They chunk it up in whatever way they do; they have their own embedding model, which they've been public about. On one of their evals it's like a 25% improvement on a very particular workload; they have a bunch of blog posts about it. I think it works best on larger code bases, but they've trained their own embedding model to do this. And you'll see it if you use the Cursor agent: it will do searches. They've also been public about how they post-trained their model to be very good at semantic search as well. And that's how they use it. So it's very good at, like, can you find me code that's similar to this, or code that does this. And for these queries they also use grep to supplement it.

swyx: It's been a big topic of discussion: is RAG dead, because grep, you know.

Simon Hørup Eskildsen: I mean, we see lots of demand from the coding companies to—

swyx: Search in every part. Yes.

Simon Hørup Eskildsen: We see demand. And, I mean, I like case studies. I don't like doing thought pieces on "this is where it's going," trying to be all macroeconomic about AI; that has turned out to be a giant waste of time, because no one can really predict any of this. So I just collect case studies, and Cursor has done a great job talking about what they're doing, and I hope some of the other coding labs that use turbopuffer will do the same. But it does seem to make a difference for particular queries. We can also do text; we can also do regex. But I should also say that Cursor's security posture in turbopuffer is exceptional. They have their own embedding model, which makes it very difficult to reverse-engineer.
They obfuscate the file paths. It's very difficult to learn anything about a code base by looking at it. And the other thing they do is that, for their customers, they encrypt it with their own encryption keys in turbopuffer's bucket. So it's really, really well designed.

swyx: And so this is extra stuff they did to work with you, because you are not part of Cursor — and this is just best practice when working with any database, not just you guys. Okay, that makes sense. I think for me the learning is that all workloads are hybrid: you want the semantic, you want the text, you want the regex, you want SQL, I dunno. It's silly to be all in on one particular query pattern.

Simon Hørup Eskildsen: I really like the way Sualeh at Cursor talks about it — I'm gonna butcher it here; I'm a database scalability person, I don't know anything about training models other than what the internet tells me. The way he describes it is that this is just cached compute. You have a point in time where you're looking at some particular context, focused on some chunk, and you say, this is the layer of the neural net at this point in time. That seems fundamentally really useful, to cache compute like that. How the value of that will change over time, I'm not sure, but there seems to be a lot of value in it.

Alessio: Maybe talk a bit about the evolution of the workload. Even search: maybe two years ago it was one search at the start of an LLM query to build the context. Now you have agentic search, however you wanna call it, where the model is both writing and changing the code and searching it again later. What are some of the new types of workloads, or changes you've had to make to your architecture for them?

Simon Hørup Eskildsen: I think you're right. When I think of RAG, I think of: hey, there's an 8,000-token context window, and you better make it count. Search was a way to do that. Now everything is moving towards just letting the agent do its thing. And back to the thing from before: the LLM is very good at reasoning with the data, and so we're just the tool call, right? That's increasingly what we see our customers doing. What we're seeing more demand for from customers now is a lot of concurrency. Notion does a ridiculous number of queries in every round trip, just because they can. And now when I use the Cursor agent, I also see it doing more concurrency than I've ever seen before. So, a bit similar to how we designed the database to drive as much concurrency in every round trip as possible, that's also what the agents are doing. That's new. It means an enormous number of queries, all at once, against the dataset while it's warm, in as few turns as possible.

swyx: Can I clarify one thing on that? Are they batching multiple users, or is one user driving multiple?

Simon Hørup Eskildsen: One user driving multiple. One agent driving.

swyx: It's parallel-searching a bunch of things.

Simon Hørup Eskildsen: Exactly.

swyx: Yeah, exactly. So yeah, Cognition also did this for the fast-context thing, like eight searches in parallel at once.

Simon Hørup Eskildsen: Yes.
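A sketch of what that fan-out looks like from the client side, with hypothetical names throughout: search() stands in for whichever retrieval API you use, the three modes echo the hybrid patterns discussed above, and asyncio.gather supplies the concurrency of one agent turn.

```python
import asyncio

async def search(query: str, mode: str) -> list[str]:
    # hypothetical retrieval call; mode is "vector", "fulltext", or "regex"
    await asyncio.sleep(0.05)  # stands in for one network round trip
    return [f"{mode} hit for {query!r}"]

async def agent_turn(sub_queries: list[tuple[str, str]]) -> list[str]:
    # old RAG pattern: one retrieval call up front, then generate.
    # agentic pattern: many hybrid queries fired concurrently in one turn.
    results = await asyncio.gather(*(search(q, m) for q, m in sub_queries))
    seen, merged = set(), []
    for hits in results:        # dedupe so parallel calls don't feed the
        for hit in hits:        # model the same chunk twice
            if hit not in seen:
                seen.add(hit)
                merged.append(hit)
    return merged

print(asyncio.run(agent_turn([
    ("where is the auth middleware", "vector"),
    ("def authenticate", "regex"),
    ("session token refresh", "fulltext"),
])))
```

Eight such calls in flight cost roughly one round trip of wall-clock time instead of eight, which is why the query mix shifts the way Simon describes next.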
swyx: And an interesting problem is: how do you make sure you have enough diversity, so you're not making the same request eight times?

Simon Hørup Eskildsen: And I think that's probably also where the hybrid comes in: it's another way to diversify, a completely different way to do the search. That's a big change, right? Before, it was really just one call, and then the LLM took however many seconds to return; now we just see an enormous number of queries. So we've reduced query pricing — this is probably the first time I'm saying that, actually, but the query pricing is being reduced, like 5x — and we'll probably try to reduce it even more to accommodate some of these workloads of doing very large numbers of queries. That's one thing that's changed. I think the write ratio is still very high; there's still an enormous number of writes per read. But we're probably starting to see that change as people really lean into this pattern.

Alessio: Can we talk a little bit about the pricing? I'm curious, because traditionally a database would charge on storage, but now you have token generation that is so expensive, where the actual value of a good search query is much higher because it's saving inference time down the line. How do you structure that, and what are people receptive to on the other side?

Simon Hørup Eskildsen: Yeah. The turbopuffer pricing in the beginning was very simple. The pricing for search engines before turbopuffer was very serverful: here's the VM, here's the per-hour cost, great. And I just sat down with a piece of paper and said, if turbopuffer were really good, this is probably what it would cost, with a little bit of margin. That was the first pricing of turbopuffer — this is probably the storage amount, and so on, on a piece of paper. It was vibe pricing. It was very vibe-priced, and I got it wrong. Well, I didn't get it wrong, but turbopuffer wasn't performing at that first-principles pricing yet. So when Cursor came on turbopuffer — I didn't know any VCs, I didn't know anything about raising money or anything like that — I just saw that my GCP bill was a lot higher than the Cursor bill. So Justine and I were like, well, we have to optimize it. And, to the chagrin of the VCs now, it means that we're profitable, because we had so much pricing pressure in the beginning. It was running on my credit card, and Justine and I had spent tens of thousands of dollars on compute bills, on spinning up the company, on bad Canadian lawyers, and on things like that to get all of this done, because we just didn't know. If you're steeped in San Francisco, you just know: you go out and raise a pre-seed round. I had never heard the word pre-seed at this point in time.

swyx: When you had Cursor, you had Notion, you had no funding?

Simon Hørup Eskildsen: With Cursor we had no funding, yeah.
By the time we had Notion, Lachy was here. So it was really just: we vibe-priced it, and it was not performing at first principles, so we did everything we could to optimize it in the beginning, so that at least we could have like a 5% margin or something, and I wasn't freaking out — because Cursor's bill was also climbing as they were growing, and my liability with it. I was actively calling my bank, like, I need a bigger credit limit. Anyway, that was the beginning. The pricing was storage, writes, and queries, and the pricing we have today is basically just that pricing with duct tape and spit, trying to approach a margin on the physical underlying hardware. And this year you're gonna see more and more pricing changes from us.

swyx: And how much does stuff like VPC peering matter? Because you're working in AWS land, where egress is charged and all that.

Simon Hørup Eskildsen: We have an enterprise plan that just has a base fee, because we haven't had time to figure out SKU pricing for all of this. But you can run turbopuffer in SaaS, which is what Cursor does; you can run it in a single-tenant cluster, so it's just you, which is what Notion does; and then you can run it BYOC, where everything is inside the customer's VPC, which is what, for example, Anthropic does.

swyx: What I'm hearing is that this is probably the best CRO job for somebody who can come in and help you with this.

Simon Hørup Eskildsen: I mean — turbopuffer hired, I don't know what number this was, but we had a full-time CFO as like the 12th hire or something. I hear about a lot of companies — I don't know how they do it — that have a hundred employees and no CFO.

swyx: Having a CFO is like running a business, man.

Simon Hørup Eskildsen: It's so good. Money Mike, he just handles the money and a lot of the business stuff. He came in and helped with a lot of the operational side of the business, so like COO/CFO, somewhere in between.

swyx: Just a quick mention of Lachy, 'cause I'm curious. I've met Lachy, and he's obviously a very good investor, now at Physical Intelligence. I call him a generalist super angel; he invests in everything. And I always wonder: is there something appealing about focusing on developer tooling, focusing on databases — someone going, I've invested for 10 years in databases — versus someone like a Lachy, who can maybe connect you to all the customers that you need?

Simon Hørup Eskildsen: This is an excellent question; no one's asked me this. Why Lachy? There were a couple of people we were talking to at the time, and when we were raising, we were a bit distressed, because one of our peers had just launched something that was very similar to turbopuffer. And someone gave me the advice at the time: just choose the person where you feel like you can pick up the phone without preparing anything and just be completely honest. And I don't think I've said this publicly before, but I just called Lachy and was like, look, Lachy.
If this doesn't have PMF by the end of the year, we'll just return all the money to you. Justine and I don't wanna work on this unless it's really working. So we want to give it the best shot this year, and we're really gonna go for it. We're gonna hire a bunch of people, and we're just gonna be honest with everyone. When I don't know how to play a game, I just play with open cards. And Lachy was the only person that didn't freak out. He was like, I've never heard anyone say that before. As I said, I didn't even know what a seed or pre-seed round was, probably even at this time, so I was just very honest with him. And I asked him, Lachy, have you ever invested in a database company? He was like, no. And at the time I was like, am I dumb? But there was something that just really drew me to Lachy. He is so authentic, so honest, and there was something where I felt I could just say everything openly. That was a perfect match at the time, and honestly still is. He was just like, okay, that's great; this is the most honest, ridiculous thing I've ever heard anyone say to me.

swyx: Why is this ridiculous? A competitor launches, this may not work out...

Simon Hørup Eskildsen: It was more just: if this doesn't work out, I'm gonna close up shop by the end of the year, right? I don't know, maybe it's common; he told me it was uncommon. That's why we chose him, and he's been phenomenal. The other people we were talking to at the time were database experts: they knew a lot about databases, and Lachy didn't. This turned out to be a phenomenal asset. Justine and I know a lot about databases; the people that we hire know a lot about databases. What we needed was someone who didn't know a lot about databases, didn't pretend to know a lot about databases, and just wanted to help us with candidates and customers. And he did. I have a list of the investors that I have a relationship with, and Lachy has just performed excellently in the number of sub-bullets of what we can attribute back to him. Absolutely incredible. And when people talk about no ego and just doing the best thing for the founder — even my lawyer is like, yeah, Lachy is the most friendly person you will find.

swyx: Okay, this is the most glowing recommendation I've ever heard.

Alessio: He deserves it. He's very special.

swyx: Yeah. Okay. Amazing.

Alessio: Since you mentioned candidates, maybe we can talk about team building. Especially in SF, it feels like it's just easier to start a company than to join a company. I'm curious about your experience, especially not being in SF full-time and doing something that is maybe at a very low level of technical detail.

Simon Hørup Eskildsen: Yeah. So, joining versus starting: I never thought that I would be a founder. turbopuffer started as a blog post, and then it became a project, and then almost accidentally became a company, and now it feels like it's becoming a bigger company. That was never the intention. The intentions were very pure. It's just: why hasn't anyone done this?
And I wanna be the first person to do it. I think some founders have this "I could never work for anyone else" thing. I really don't feel that way. It's just: I wanna see this happen, and I wanna see it happen with some people that I really enjoy working with, and I wanna have fun doing it. This has all felt very natural in that sense. So it was never join versus found. Founding just found me at the right moment.

Alessio: Well, I think there's an argument that you should have joined Cursor, right? So I'm curious how you evaluated it: okay, I should actually go raise money and make this a company, versus, this is a company that is growing like crazy, it's an interesting technical problem, I should just build it within Cursor, and then they don't have to encrypt all this stuff, they don't have to obfuscate things. Was that on your mind at all?

Simon Hørup Eskildsen: Before taking the small check from Lachy, I did have a hard look at myself in the mirror: okay, do I really want to do this? Because if I take the money, I really have to do it, right? And the way I almost think about it is, you kind of need to be fucked up enough to want to go all the way. And that was the conversation where I was like, okay, this is gonna be part of my life's journey, to build this company and do it in the best way that I possibly can. Because if I ask people to join me, ask people to get on the cap table, then I have an ultimate responsibility to give it everything. It doesn't occur to me that everyone takes it that seriously, and maybe I take it too seriously, I don't know. But that was a very intentional moment. And so then it was very clear: okay, I'm gonna do this and I'm gonna give it everything.

Alessio: A lot of people don't take it this seriously. But...

swyx: Let's talk about your concept of the P99 engineer. People are saying 10x engineer; everyone's saying, you know, maybe engineers are out of a job, I don't know. But you definitely see a P99 engineer, and I just want you to talk about it.

Simon Hørup Eskildsen: Yeah, so the P99 engineer was just a term that we started using internally to talk about candidates and talk about how we wanted to build the company. And, you know, like everyone else, we want a talent-dense company. I think that's almost become trite at this point. What I credit the Cursor founders a lot with is that they just arrived there from first principles: we just need a talent-dense team. And I've seen some teams that weren't talent dense, which, if you've been in a large company, you will just see will logically happen at a large company. So that was super important to me and Justine, and it's very difficult to maintain. And so we just needed wording for it. So I have a document called Traits of the P99 Engineer, and it's a bullet-point list. And I look at that list after every single interview that I do, and every single recap that we do ends with some version of: I'm gonna reject this candidate, completely regardless of what the discourse was, because I wanna see people fight for this person. Because the default should not be, we're gonna hire this person.
The default should be, we're definitely not hiring this person. And, you know, if everyone was like, "ah, maybe throw a punch," then this is not the right person.

swyx: Do you operate like... there must be at least one champion who's like, yes, I will put my career on the line for this, you know?

Simon Hørup Eskildsen: I think career on the line...

swyx: Maybe a chair.

Simon Hørup Eskildsen: Yeah. You know, someone needs to have both fists up and be like, "I'd fight," right? Yeah. And if one person says that, then okay, let's do it, right?

swyx: Yeah.

Simon Hørup Eskildsen: It doesn't have to be absolutely everyone, right? And in the interviews you're always checking for different attributes, and if someone is knocking it out of the park in every single attribute, that's fairly rare. But that's really important. And so there are the traits of the P99 engineer, and there's lots of them. There are also the traits of the triple-nine engineer and the quadruple-nine engineer. It's a long list.

swyx: Okay.

Simon Hørup Eskildsen: I'll give you some samples, right, of what we look for. I think that the P99 engineer has some history of having bent their trajectory, or something, to their will. Right? Some moment where they just, you know, made the computer do what it needed to do. There's something like that, and it will have occurred for them at some point in their career, and hopefully multiple times. Right.

swyx: Give me an example from one of your engineers.

Simon Hørup Eskildsen: I'll give one. So we launched this thing called ANN v3. We're also working on v4 and v5 right now, but ANN v3 can search a hundred billion vectors with a P50 of around 40 milliseconds and a P99 of 200 milliseconds. Maybe other people have done this, I'm sure Google and others have done this, but we haven't seen anyone do it, at least not in a publicly consumable SaaS. And that was an engineer, the chief architect of turbopuffer, Nathan, who more or less just bent this: the software was not capable of this, and he just made it capable for a very particular workload in like a six-to-eight-week period, with the help of a lot of the team. Right. There are numerous examples of that at turbopuffer, but that's really bending the software and x86 to your will. It was incredible to watch. You wanna see some moments like that.

swyx: Isn't that triple-nine?

Simon Hørup Eskildsen: Um, I think Nathan...

Alessio: Triple nine? I feel like this bar is too high for P99.

Simon Hørup Eskildsen: Nathan is, uh, yeah, there's a lot of nines. Okay. So I think that's one trait. I think another trait is that the P99 spends a lot of time looking at maps. Generally it's their preferred UX. They just love looking at maps. You ever seen someone who just, like, sits on their phone and just, like, scrolls around on a map? Or do you not look at maps a lot? You guys don't look at...

swyx: Maps? I guess I'm not feeling there, I don't know, but...

Simon Hørup Eskildsen: What about trains? Do you like trains?

swyx: Uh, I mean, not enough. Okay. This is just, like, weaponized autism is what I call it.
Simon Hørup Eskildsen: I love looking at maps. It's like my preferred UX.

Alessio: Like lots of, like, random places?

swyx: You know.

Alessio: Yes. Okay. There you go. So with, like, random places, how do you explore the maps?

Simon Hørup Eskildsen: No, it's just a joke.

swyx: It's, like, you are just obsessed by something and you like studying a thing.

Simon Hørup Eskildsen: The origin of this was that at some point I read an interview with some IOI gold medalist...

swyx: Uh-huh.

Simon Hørup Eskildsen: ...and it's like, what do you do in your spare time? "I like looking at maps." I was like, I feel so seen. I just love zooming out. I was like, oh, Canada is so big. Where's Baffin Island? I don't know. I love it. Yeah. Anyway, so the traits of the P99: the P99 is obsessive, right? You'll find traits of that. We do an interview at turbopuffer, or like multiple interviews, that just try to screen for some of these things. There's lots of others, but these are the kinds of traits that we look for.

swyx: I'll tell you, some people listen for some of my devrel stuff. I do think about devrel as maps. You draw a map for people. Maps show you what is commonly agreed to be the geographical features, what a boundary is. And it also shows you what it's not doing. I think a lot of developer-tools companies try to tell you they can do everything, but let's be real: your three landmarks are here, everyone comes here, then here, then here, and you draw a map, and then you draw a journey through the map. To me, that's what developer relations looks like. So I do think about things that way.

Simon Hørup Eskildsen: I think the P99 thinks in trade-offs, right? The P99 is very clear about, you know, hey, you can't run a high-transaction workload on turbopuffer; the write latency is a hundred milliseconds. That's a clear trade-off. I think the P99 is very good at articulating the trade-offs in every decision. Which is exactly what the map is in your case, right?

swyx: Uh, yeah. My world.

Alessio: How do you reconcile some of these things, when you're saying you bend the computer to your will versus, like, the trade-offs...
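A note on the latency figures quoted above: P50 and P99 are percentiles over many queries, not averages. Below is a minimal sketch of how such numbers are derived from raw timings; `fake_search` and its 40 ms / 200 ms split are hypothetical stand-ins for illustration, not turbopuffer's client API.

```python
import random
import time

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (in ms)."""
    ordered = sorted(samples)
    # Smallest value with at least p% of samples at or below it.
    k = max(0, min(len(ordered) - 1, int(round(p / 100 * len(ordered))) - 1))
    return ordered[k]

def measure(search, queries):
    """Time each query and report P50/P99, the stats quoted for ANN v3."""
    latencies_ms = []
    for q in queries:
        start = time.perf_counter()
        search(q)  # hypothetical search call
        latencies_ms.append((time.perf_counter() - start) * 1000)
    return percentile(latencies_ms, 50), percentile(latencies_ms, 99)

# Stand-in workload: ~99% of queries take ~40 ms, ~1% take ~200 ms,
# loosely mirroring the P50/P99 shape mentioned above.
fake_search = lambda q: time.sleep(random.choices([0.040, 0.200], [99, 1])[0])
p50, p99 = measure(fake_search, range(200))
print(f"P50 {p50:.0f} ms, P99 {p99:.0f} ms")
```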

Jay Fonseca
PODCAST LAS NOTICIAS CON CALLE, MARCH 10, 2026

Jay Fonseca

Play Episode Listen Later Mar 10, 2026 19:58


PODCAST LAS NOTICIAS CON CALLE, MARCH 10, 2026 - Trump and Putin spoke and reached an agreement to free up Russian oil (Reuters). Oil fell in price to $87 a barrel (CNBC). Iranian women who refused to sing Iran's anthem in protest are granted asylum (One Sports). A representative wants the informant who investigated Ciary to appear at a Legislature hearing (Notidel). A proposal to bring rockets to Roosevelt Roads (El Nuevo Día). The investigation into Ciary will continue for three more months; Rivera Schatz asks for a review (El Nuevo Día). A measure is proposed to prevent the federal government from withholding federal funds (El Nuevo Día). The trial against Anthonieska begins April 6 (El Nuevo Día). Should waiters speak English? (El Nuevo Día). A dramatic drop in fuel prices after the spike, though a liter of gasoline still hovers around a dollar (El Vocero). Forced expropriations after the state of emergency took effect (El Vocero). There are moments when you don't just need answers but calm and support. At Universal Group we combine advanced technology with the Human Touch that gives you peace of mind and responds instantly. Universal: official sponsor of the Puerto Rico team in the World Baseball Classic. Includes sponsorship.

Whis-Cast - 50 Shades of Grain
Episode 65: Whis-Cast Special - Kalsarikännit and Korvapuusti in the Land of a Thousand Lakes with Lucas Werner (from Kirsch Import)

Whis-Cast - 50 Shades of Grain

Play Episode Listen Later Mar 6, 2026 81:28


In this episode we leave the Scottish Highlands and trade bagpipes for heavy metal and sauna sessions! Together with Lucas Werner of Kirsch Import we travel to Lahti, where the Teerenpeli Distillery proves that Finns are not only world champions in sauna-going and happiness, but can also make single malt that softens even the toughest Viking.

What awaits you in this episode:
From beer to spirit: how you start out as a brewery in 1995 and suddenly realize in 2002: "Hey, if we distill this stuff one more time, it's even more fun!"
Teeren-what? We practice the Finnish pronunciation without twisting our tongues (tip: after the third dram it works all by itself).
Akte, Portti & co.: we taste our way through bottlings that taste of sherry casks and harbor containers, and NOT of the generic Nordic pigeonhole!

Grab a nosing glass, throw on your bathrobe, and find out why Teerenpeli has already, entirely deservedly, won gold twice in the Worldwide Whiskey Producer category. On the way there we take short detours to a terminal moraine, to geography class, and to the swimming pool, creating tasting notes like "backside on a leather sofa," golden juice, or green Center Shock, and investigating the mysterious "Akte 11"!

Let's just say: an educational podcast at "Wer weiß denn sowas?" level, with a Finnish finish!

Warning: this episode may contain traces of dry humor, black grouse courtship dances, and acute wanderlust for Finland.

Kippis!

Whiskies tasted:
Teerenpeli Soidin
Teerenpeli Portti
Teerenpeli Kulo
Teerenpeli 15
Teerenpeli Akte
Teerenpeli Palo

Talking Heads - Craft Computing
Ep. 423 - Apple's $599 MacBook Neo; DRAM Pricing; Windows 12 Enshitification

Talking Heads - Craft Computing

Play Episode Listen Later Mar 5, 2026 147:45


Apple's $599 MacBook Neo; DRAM Pricing; Windows 12 Enshitification

Equity Mates Investing Podcast
The eBay harassment scandal, RAMageddon explained & Pimp My Portfolio with Matt Ingram

Equity Mates Investing Podcast

Play Episode Listen Later Mar 4, 2026 33:27


A wild eBay stalking scandal, a global DRAM shortage reshapes consumer tech pricing, and Matt Ingram helps community member Maddison simplify a heavily thematic ETF portfolio with a big Nvidia exposure.

In this episode:
00:00 eBay stalking scandal: what happened
07:14 eBay fallout: pleas, severance, board governance
09:23 RAMageddon: what DRAM is and why AI is driving shortage
14:18 Winners: DRAM stocks ripping
17:28 Phones + broader electronics inflation pressures
20:49 Pimp My Portfolio: Maddison + Matt Ingram
21:15 Portfolio breakdown (index ETFs, thematics, stocks)
26:56 Core simplification
27:42 Thematics: what to drop, what to consider
31:24 DCA + using gains for life flexibility

Stocks and ETFs mentioned: eBay (NASDAQ:EBAY), Micron Technology (NASDAQ:MU), SK Hynix (KRX:000660), Samsung Electronics (KRX:005930), Tesla (NASDAQ:TSLA), NVIDIA (NASDAQ:NVDA), Alphabet (NASDAQ:GOOGL), Microsoft (NASDAQ:MSFT), Betashares Diversified All Growth ETF (ASX:DHHF), Betashares Asia Technology Tigers ETF (ASX:ASIA), Betashares Global Cybersecurity ETF (ASX:HACK), Betashares Sustainability Leaders ETF (ASX:ETHI), VanEck FANG+ ETF (ASX:FANG), Vanguard Total US Market ETF (NYSE:VTI), Vanguard US Total Market Shares Index ETF (ASX:VTS), Vanguard MSCI Index International Shares ETF (ASX:VGS), BetaShares Australia 200 ETF (ASX:A200), BetaShares NASDAQ 100 ETF (ASX:NDQ).

Want to get involved in the podcast? Record a voice note or send us a message. And come and join the conversation in the Equity Mates Facebook Discussion Group.

Want more Equity Mates? Across books, podcasts, video and email, however you want to learn about investing, we've got you covered. Keep up with the news moving markets with our daily newsletter and podcast (Apple | Spotify). We're particularly excited to share our latest show: Basis Points. Listen to the podcast (Apple | Spotify). Watch on YouTube. Read the monthly email.

Looking for some of our favourite research tools? Download our free Basics of ETF handbook, or our free 4-step stock checklist. Find company information on TIKR. Research reports from Good Research. Track your portfolio with Sharesight.

In the spirit of reconciliation, Equity Mates Media and the hosts of Equity Mates Investing acknowledge the Traditional Custodians of country throughout Australia and their connections to land, sea and community. We pay our respects to their elders past and present and extend that respect to all Aboriginal and Torres Strait Islander people today.

Equity Mates Investing is a product of Equity Mates Media. Hosted on Acast. See acast.com/privacy for more information.

Business of Tech
Hardware Cost Volatility Forces MSPs to Reprice Contracts and Restructure Service Models

Business of Tech

Play Episode Listen Later Mar 2, 2026 12:49


Enterprise IT spending is projected to reach $4.5 trillion by 2026, but this growth is concentrated in software, cloud services, and AI infrastructure for large organizations, according to HG Insights and Omdia research cited by Dave Sobel. The system integration market is positioned to approach $950 billion in 2025, with enterprises working with an average of 6.3 technology partners. A substantial surge in AI-optimized server sales, as reflected in Dell Technologies' reported 342% year-over-year increase in revenue for those systems, is reshaping supply chains and vendor dynamics, leading to shortages of DRAM, SSDs, and hard drives. Underlying this development are volatile component costs. DRAM prices have doubled quarter over quarter, and both Micron Technology and Western Digital have indicated they are sold out for 2026. HP reports that RAM now constitutes 35% of new PC materials costs, up dramatically from 18% the previous quarter. Such cost shifts are creating downstream risks for managed service providers (MSPs) with fixed-price agreements, as the economic assumptions underpinning many contracts, stable hardware prices and predictable cloud costs, no longer hold.

The episode also highlights an increase in application sprawl and a widening gap between IT budgets and other operational costs. A Torii report shows large enterprises use over 2,191 applications on average, with more than 61% bypassing formal IT approvals, resulting in unmanaged security and compliance exposure. Additionally, 80% of small businesses report rising energy costs that directly compete with IT budget allocations. Industry analysis from Jefferies and Boston Consulting Group signals that AI and automation are not viewed uniformly as productivity boosters and may compress revenue models in both Indian and domestic IT services sectors.

The practical implication for MSPs is the urgent need to audit and reprice contracts related to hardware procurement and refresh cycles, clearly documenting and communicating current cost realities with clients. Dave Sobel stresses reframing device lifecycle extensions as a security risk rather than a cost-saving measure and warns against selling clients on speculative AI market projections. The advice is to focus on specific, scoped use cases and to structure agreements that accurately reflect volatility in component costs and the operational burden of application sprawl, ensuring financial and legal accountability as the IT services landscape evolves.

00:00 $4.96T IT Spend Surge Bypasses SMBs as AI Infrastructure Captures Enterprise Budgets
03:58 Dell's $43B AI Server Backlog Triggers DRAM Shortage, Repricing Downstream Hardware
05:52 AI Shrinks IT Services Revenue Model; MSPs Face Contested Implementation Role

This is the Business of Tech.
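As a rough consistency check on those HP figures (a back-of-the-envelope sketch, not numbers from the episode): if DRAM prices double while everything else in a PC's bill of materials stays flat, an 18% RAM share rises to only about 30%, so a jump to 35% implies roughly a 2.4x RAM cost increase, or a shrinking non-memory cost base at the same time.

```python
def ram_share_after(multiplier, start_share=0.18):
    """New share of the bill of materials taken by RAM after its price is
    multiplied, assuming all non-RAM costs stay constant."""
    ram, rest = start_share * multiplier, 1 - start_share
    return ram / (ram + rest)

# Doubling DRAM cost alone lifts an 18% RAM share to ~30.5%...
print(f"{ram_share_after(2.0):.1%}")
# ...so the reported 18% -> 35% move implies roughly a 2.4x RAM cost
# increase (or other BOM costs falling simultaneously).
print(f"{ram_share_after(2.44):.1%}")
```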

ChannelBuzz.ca
ICYMI: Cisco rewrites partner pricing rules as component shortages bite

ChannelBuzz.ca

Play Episode Listen Later Mar 2, 2026 6:26


Today is Monday, March 2, 2026. Welcome to In Case You Missed It, our weekly five-minute rundown of important channel news stories that might have flown under the radar last week. In this edition:

Component shortages start hitting the channel: Rising memory and storage costs are prompting vendors to revisit pricing and deal protections, highlighted by a letter from Cisco to partners and reinforced by warnings from other vendors, distributors, and suppliers as availability tightens across servers, storage, and PCs.

Pure Storage rebrands as Everpure: Pure Storage has rebranded to Everpure, signaling a shift toward AI-ready data management and rolling out partner program changes aimed at supporting subscription services and platform-led growth.

WatchGuard targets MSPs with enterprise-grade security: WatchGuard says new platform enhancements allow MSPs to deliver enterprise-level security outcomes, including zero trust, MDR, and unified management, without enterprise-level complexity.

AWS threat research highlights AI-driven attacks: New findings from Amazon Web Services show attackers using AI-assisted techniques to accelerate exploitation of perimeter devices, including firewalls, underscoring how rapidly the threat landscape is evolving.

Hello and welcome to In Case You Missed It from ChannelBuzz.ca, your Monday morning recap where we catch you up on some of the channel news and trend headlines you may have missed in the last week. I'm Robert Dutt, editor of ChannelBuzz.ca. Today is Monday, March 2, 2026. Let's get your week started right.

This week, the IT channel is being forced to confront an uncomfortable reality. Global component shortages and memory price spikes are fundamentally reshaping how hardware deals are negotiated and fulfilled, and vendors are already updating partner policies as they try to cope. At the center of the storm is a note from Cisco Systems to partners, which was obtained by CRN, in which Cisco says it'll adjust partner contract terms in response to rapidly rising memory costs and supply volatility. The company now reserves the right to cancel compute orders up to 45 days prior to shipment and to adjust pricing between order and shipment date if component costs, tariffs, or other external factors shift dramatically. That's a significant departure from traditional price protection norms.

And this isn't isolated. Executives from major distributors told CRN that memory and storage shortages, particularly DRAM and SSDs, are pushing prices up and tightening supplies across servers, storage, and PC portfolios. Memory prices are reported to have doubled year over year in early 2026, and are expected to continue rising, leading many distributors to shorten their own quote validities and revisit backlog pricing with vendors. Vendors themselves are directly advising partners of pricing shifts too. Lenovo has warned partners that select PC and server products will see price hikes in March unless orders are placed and shipped promptly, reflecting those costs. And hardware availability is also tightening in real terms. For example, Western Digital says its entire 2026 hard drive production capacity is already spoken for, with most allocations locked up in long-term agreements with hyperscale cloud and AI customers, a trend that could push prices higher and leave less inventory for channel projects.
As memory, storage, and other components become harder to source and pricier to procure, partners may face shortened quote windows, less pricing certainty, and project timing risk, compelling MSPs and VARs to rethink their quoting strategies, accelerate their sales cycles, and build supply chain agility into their roadmaps. Good luck out there.

Also worth noting: Everpure, the company formerly known as Pure Storage, has completed a major strategic evolution, rebranding itself to signal a transition from traditional storage vendor to a broader AI-ready data management platform, and announcing changes that partners should really pay attention to. The name change, which takes effect on the New York Stock Exchange March 5, reflects the company's push into enterprise data orchestration and intelligence beyond simply shipping storage hardware and arrays. Central to this transformation is Everpure's planned acquisition of data intelligence firm 1touch, a move designed to bring automated data discovery, classification, and semantic enrichment capabilities into its portfolio. This expands the enterprise data cloud vision, equipping enterprises to make data inherently AI-ready and more valuable across hybrid environments.

Alongside the rebrand, Everpure has updated its partner engagement model with a new tiering structure that gives MSPs, resellers, and distributors clearer pathways to profitability and growth, reflecting the broader mission of the company going forward. Recent results show that demand for data management and subscription services is driving double-digit growth, the company says, underscoring why partners should lean into Everpure's evolving platform play. For channel pros, the message is that Everpure sees partners as critical to selling data-centric solutions in the AI era and is aligning its incentives and program structure accordingly.

Up next, WatchGuard is positioning its latest platform updates as a way for MSPs to deliver what it calls enterprise-grade security to small and mid-sized customers, without the complexity typically associated with large enterprise tools. The company says the enhancements are focused on unifying endpoint, network, identity, and MDR capabilities into a single manageable platform designed for service providers. Key to the message is simplification: WatchGuard is emphasizing centralized management, automated threat response, and bundled security services that allow MSPs to deploy advanced protection, like zero-trust network access, AI-driven threat detection, and 24/7 monitoring, at scale and under predictable pricing models. For MSPs, the pitch is that this closes a long-standing gap, giving smaller customers access to security capabilities that rival enterprise deployments, while still fitting MSP operational and margin requirements. WatchGuard argues that as threats become more sophisticated, the ability to offer enterprise-grade outcomes without enterprise-grade overhead is becoming a baseline expectation rather than a premium add-on.

And speaking of more sophisticated threats, to bring this week's roundup home: new threat research from Amazon Web Services adds to the evidence that AI is actively changing how attacks are carried out, not just how they're defended against. AWS researchers report seeing threat actors use AI-assisted techniques to more quickly identify and exploit vulnerabilities in perimeter devices, including Fortinet FortiGate firewalls, reducing the time between disclosure and real-world exploitation.
The finding reinforces a growing concern for solution providers: attackers are using AI to scale reconnaissance, speed up exploit development, and adapt attacks faster than traditional defenses expect. For MSPs and VARs, the implication is clear. Staying ahead now requires faster patching cycles, continuous monitoring, and security platforms that assume AI-accelerated threats are the norm and not an edge case.

Those are some of the things we were paying attention to last week. This week on the podcast, expect to hear how Citrix is thinking of partners as it hands off more of its channel management to Arrow Electronics, a look at the role of identity in taming shadow AI, and how startup Lexful is aiming to redefine how MSPs think about documentation. I'm Robert Dutt for ChannelBuzz.ca. Have a great week!

M2 Podcast
Control and Consequences in the Modern Game Industry S7E7

M2 Podcast

Play Episode Listen Later Feb 27, 2026 100:05


This week on the M2 Podcast, we break down major power shifts and mounting pressures across the gaming industry. Microsoft has appointed Asha Sharma as the new CEO of Xbox following the departures of Phil Spencer and Sarah Bond, signaling a strategic reset as the company looks to redefine the brand beyond consoles. Meanwhile, a surge in global DRAM and NAND memory prices, driven largely by AI infrastructure demand, is squeezing hardware margins and could lead to higher costs across consoles, components, and subscriptions. On the legal front, New York Attorney General Letitia James is suing Valve, alleging its loot box systems in games like Counter-Strike 2 constitute illegal gambling and could have sweeping regulatory consequences. Finally, Discord has delayed its global age-verification rollout after user backlash, as platforms continue navigating increasing regulatory scrutiny around teen safety. It's a week defined by leadership changes, economic strain, and growing legal accountability in gaming.

0:00 Intro
1:44 Updates
17:08 Inside Xbox leadership shake-up https://tinyurl.com/ek5vv5tb
49:36 Surge in memory prices https://tinyurl.com/444va88t
1:07:47 New York Attorney General Sues Valve https://tinyurl.com/2arjkpwe
1:25:03 Discord Is Delaying Age Checks https://tinyurl.com/7m9mrxde
1:35:50 Outro

Leave a LIKE and a comment, thanks for watching/listening!

PODCAST ►► https://anchor.fm/m2podcast
AMAZON Music ► https://music.amazon.com/podcasts/091902c3-b83b-487c-8fe7-4c96787434fe/M2-Podcast
APPLE ► https://podcasts.apple.com/podcast/id1531832410
BREAKER ► https://www.breaker.audio/m2-podcast-2
CASTRO ► https://castro.fm/podcast/6f69d373-d879-46d9-9f1c-bcf7c4bf1741
GOOGLE ► https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy8zNTYwNWZiMC9wb2RjYXN0L3Jzcw==
OVERCAST ► https://overcast.fm/itunes1531832410/m2-podcast
POCKETCASTS ► https://pca.st/5jghvf6e
RADIOPUBLIC ► https://radiopublic.com/m2-podcast-GMZkY4
SPOTIFY ► https://open.spotify.com/show/2VedhO03IRoHERJqF6Sy87
STITCHER ► https://www.stitcher.com/podcast/m2-podcast
TUNEIN ► http://tun.in/pj3ZI

#podcast

JOIN THE DISCORD! ►► https://discord.gg/Kp5Gre6

KyleHeath Socials:
TIKTOK ►► https://www.tiktok.com/@mrjkheath
TWITCH ►► https://www.twitch.tv/kyleheath
TWITTER ►► https://twitter.com/mrjkheath
YOUTUBE ►► https://www.youtube.com/MrJkheath

MadMikeWillEatU Socials:
TIKTOK ►► https://www.tiktok.com/@madmikewilleatu
TWITCH ►► https://www.twitch.tv/madmikewilleatu/about
TWITTER ►► https://twitter.com/madmikewilleatu
YOUTUBE ►► https://www.youtube.com/channel/UC1MoIvzyMDvH_5Ta

TechTimeRadio
287: TechTime Radio: A Courtroom Clash with META, Sci‑Fi Pigeons, and a Hardware Squeeze Reveal the Growing Tension Between Innovation and Control. Why Do Your Devices, Data, and Autonomy Feel Increasingly Up for Grabs? | Air Date: 2/24 - 3/2/26

TechTimeRadio

Play Episode Listen Later Feb 25, 2026 57:52 Transcription Available


287: TechTime Radio: A landmark social-media addiction trial, brain-steered pigeons, and a global memory crunch collide in an hour that questions who really controls attention, autonomy, and access. We break down Zuckerberg's courtroom spotlight, the stakes of age-verification and identity collection, and the eerie rise of biodrone pigeons that blur the line between experimentation and coercive tech. The conversation widens to AI-driven DRAM shortages slowing devices, inflating prices, and reshaping hardware roadmaps, all while Copilot's sensitive-email summarization misstep raises fresh questions about guardrails and trust.

From bioethics to supply chains, the episode tracks how emerging systems quietly reshape daily life, from slower AI tools to pricier gadgets to new surveillance risks. We even detour into Japan's "Monster Wolf" deterrent, a reminder that strange inventions often surface deeper debates about safety and unintended consequences. And as always, we ground the big stories with our whiskey tasting and game segment, keeping the tech turbulence both sharp and fun.

Full Details:

A courtroom showdown, brain-steered birds, and a supply chain squeeze collide in a fast-moving hour where we probe who truly controls attention, autonomy, and access. We start with the landmark social media addiction trial putting Mark Zuckerberg under the spotlight and ask what "less than one percent of ad revenue" really means against testimony, internal emails, and the lived experiences of teens and parents. We debate how age verification could evolve, why "government made us do it" might justify deeper identity collection, and where meaningful safety ends and surveillance begins.

Then we pivot to a story that feels ripped from science fiction: a Russian startup turning pigeons into biodrones via neural stimulation. The birds navigate cities with uncanny stealth (no rotors, no glare, just feathers and control signals), raising red flags for bioethics, law enforcement, and civil liberties. We unpack the slippery slope from animal experiments to human augmentation, along with the unsettling possibility that autonomy becomes optional when enhancement is sold as progress.

Meanwhile, the hardware reality bites. AI data centers are inhaling global DRAM, driving prices up and forcing even top-tier firms to rethink roadmaps. With a handful of manufacturers controlling production and expansion lagging demand, the industry faces delayed launches, pricier devices, and a renewed interest in repair and refurbishment. We connect the dots to everyday users: why your AI tools feel slower, why memory costs more, and how scarcity triggers hoarding and gray markets.

We also break down Microsoft Copilot's eyebrow-raising leap into summarizing sensitive emails and drafts, exploring what went wrong, why "code issue" isn't a satisfying answer, and what robust guardrails should look like. Plus, a wild detour into Japan's "Monster Wolf" bear deterrent, proof that even quirky gadgets can surface deep questions about safety, design, and unintended consequences. Along the way, we keep it grounded with our whiskey tasting and game segment.

If you're curious about where tech policy, bioethics, and infrastructure collide, and what it means for your devices, data, and daily life, this one's for you. If it sparks a thought, share it with a friend, subscribe, and leave a review with the one change you'd make to social platforms today.

Support the show

CRAFTED
Bitters, Ghost Towns, & Botanical Sodas w/ Shae Whitney of Dram

CRAFTED

Play Episode Listen Later Feb 25, 2026 46:33


How do you grow a national beverage brand without investors, trend-chasing, or artificial ingredients? Dram began as an all-natural cocktail bitters producer, and (unintentionally) went on to become the leader in the 'functional beverage' space.

Today, Eli talks with Dram founder Shae Whitney about Dram's unlikely origin story; what led her to start making bitters with real botanicals; building a business in a Colorado ghost town; the strange history of their original production space; and how Dram navigated the rise of functional beverages without chasing trends.

We Want to Hear from You! Have a topic, craft category, or craft company you'd like to see us cover? Email us here to share those or any other thoughts you have about CRAFTED. Hosted on Acast. See acast.com/privacy for more information.

Radio Alicante
Dr. Mª Elena Martín, president of the Healthcare Bioethics Committee of the Alicante Health Department, on Hoy por Hoy Alicante

Radio Alicante

Play Episode Listen Later Feb 24, 2026 19:52


Neydi Bu ?
CAN LACK OF LOVE KILL? The Drama of Baby Monkey Punch

Neydi Bu ?

Play Episode Listen Later Feb 23, 2026 12:18


Abandoned by his mother, why does baby monkey Punch cling so desperately to a lifeless toy?

Dram Good - Der Whisky Podcast
We Have Guests: Boris and Sebastian from the Wu Dram Clan

Dram Good - Der Whisky Podcast

Play Episode Listen Later Feb 22, 2026 106:01


In this episode we meet Boris and Sebastian from the Wu Dram Clan, an indie bottler based in Munich, the Black Forest, and Kyoto that curates and releases not only whisky but also rum, cognac, and armagnac single-cask bottlings, always with a focus on quality, authenticity, and craft. Together we talk about:

Dilo Camilo
Dilo Camilo - Synthetic - 22/02/26

Dilo Camilo

Play Episode Listen Later Feb 22, 2026 60:05


It's charming and pathetic. Dramatic and energetic. Or maybe just synthetic. A Dilo Camilo with music by Los Yolos, Jupiter en Casa, Rixxia, Baile Que, Persempre, Baltasar sin dinero, Palomo Palomo, Juan Cirerol, and many more.

The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier
GM Bets on Lean Inventory, AI Taking Chips From Cars, The Power of Storytelling

The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier

Play Episode Listen Later Feb 19, 2026 10:16


Shoot us a Text.

Episode #1273: GM is staying lean to outmaneuver the next sales slowdown. AI's appetite for memory chips could spark a new supply squeeze across autos and tech. Retailers are proving that telling better stories sells.

Show Notes with links:

General Motors is rewriting its inventory playbook, running 30-40% leaner and hoping that tighter supply, stronger cash flow, and faster decision-making could turn the next cycle into a competitive advantage.
S&P Global Mobility forecasts U.S. sales down 2.5% to 15.8M units as affordability and softer EV demand weigh on the market.
GM is targeting a 50-60 day supply versus the pre-pandemic 100+ days.
Leaner inventory gives GM more flexibility to adjust incentives in a downturn without crushing profitability.
Dealers have felt the squeeze, especially on affordable models, prompting GM to stage select Trax and Trailblazer units at ports to speed delivery.
CFO Paul Jacobson summed up the strategy: "It's easier to do when you have less inventory in the system because you can just respond much more quickly."

Just when the auto industry thought it survived the chip crisis, here comes round two, this time powered by AI. Data centers are devouring global memory supply, forcing automakers to brace for tighter supply, higher costs, and potential production headaches.
AI data centers are soaking up global DRAM and memory production, with Western Digital and Seagate already sold out of most 2026 capacity.
Memory chip prices have jumped 90% quarter-over-quarter, prompting PC makers like Dell to raise prices 15-20%.
Tesla's Elon Musk says the solution may be vertical integration: "We're going to hit a chip wall if we don't do the fab."

Retailers are doubling down on something we at More Than Cars know well: storytelling sells. Brands are shifting from simply stocking products to crafting narratives that spark emotion, build loyalty, and turn casual shoppers into long-term fans.
Nordstrom says department stores no longer "introduce" brands; they help tell their story and build deeper consumer connection.
Five Below credits curated social storytelling, merchandising and marketing working together, for stronger engagement with younger shoppers.
Under Armour's Kevin Plank says brands must inspire emotion: "The world does not need another capable apparel and footwear manufacturer. The world needs hope and they need a dream."

Today's show is brought to you by ESi-Q. ESi-Q measures employee satisfaction and provides actionable insight into what's driving employee engagement and turnover - before employees leave.

Join Paul J Daly and Kyle Mountsier every morning for the Automotive State of the Union podcast as they connect the dots across car dealerships, retail trends, emerging tech like AI, and cultural shifts, bringing clarity, speed, and people-first insight to automotive leaders navigating a rapidly changing industry.

Get the Daily Push Back email at https://www.asotu.com/
JOIN the conversation on LinkedIn at: https://www.linkedin.com/company/asotu/

Press X to Start
PX2S 9.50 - Our Favorite 5 of 2025!

Press X to Start

Play Episode Listen Later Feb 18, 2026 67:46


SUBSCRIBE NOW!!!! on iTunes, Google Podcasts, Spotify, Stitcher & Audible.

Happy Holidays, Merry Christmas, and all of the above. We are ending the year by talking about our Favorite 5 of 2025 (games, that is), but unfortunately, not all is good: the industry got hit with some terribly sad news. All this and more on this episode of Press X to Start Gamer's Digest.

Gaming News:
Vince Zampella, video game developer behind 'Call of Duty,' killed in Ferrari crash
Video Game Physical Software and Hardware Sales Just Had the Worst November in the U.S. Since 1995 - IGN
Nintendo Switch 2 Hardware Hit by 41% Price Jump in DRAM, NAND Up 8% | TechPowerUp
Sucker Punch Productions Co-Founder Brian Fleming Passes the Reins to the Next Chapter of Creative Leadership
Report claims Naughty Dog recently ordered employees take on "mandatory" overtime | GamesIndustry.biz
Zoë Quinn and multiple game writers come forward about Larian's hiring practices ("unpaid writing tests you have to make playable") | ResetEra

Quick Hits:
Sony's legal battle against Tencent's Horizon 'clone' is already over | The Verge
Bethesda reportedly held a secret Starfield event to showcase an upcoming update that will add faster loading times and technical improvements to the Creation Engine, along with a PS5 port that will be announced in 2026 | TechRadar

What We Been Playing:
Sean - Battlefield 6
Dj - Returnal, Battlefield 6, Where Winds Meet

Favorite 5 of 2025:
DJ: Expedition 33; Ghost of Yotei; Battlefield 6; Dispatch; Monster Hunter Wilds
Sean: Expedition 33; Like a Dragon: Pirate Yakuza in Hawaii; Death Stranding 2: On the Beach; Ghost of Yotei; Battlefield 6

If you're enjoying the show, please take a moment to rate/review it on whatever service you're using. Every little bit helps! Want to ask a question? Ask us at PressX2start.com/Questions

Join/Follow Us:
Youtube: Press X To Start TV
Twitch: pressxtostarttv
Facebook: https://www.facebook.com/pressx2start
Twitter: @PressX2S
Instagram: @PressX2Start
TikTok: @pressx2start

You can find more info about Press X and who we are at www.PressX2start.com. If you have any questions or just want to tell us how great (or just slightly okay) we're doing or how we can be better, be a friend and reach out and email us at pressxtostartpodcast@gmail.com

End music by @MarcoMavy on IG & Twitter. Be good to each other. Peace!

Tu dosis diaria de noticias
February 18, 2026 - The firing of Marx Arriaga caused big drama at the SEP

Tu dosis diaria de noticias

Play Episode Listen Later Feb 18, 2026 10:45


The SEP has already named Marx Arriaga's replacement as head of the General Directorate of Educational Materials: Nadia López, a Mixtec poet and pedagogue. Hillary Clinton accused the Trump Administration of a "cover-up" of the "Epstein files." Also: Claudia Sheinbaum declined Donald Trump's invitation to the Board of Peace; banknote counterfeiting in Mexico rose 1.6% last year; Peru's Congress removed interim president José Jerí yesterday; the European Union will open an investigation into Shein over suspected violations of the bloc's legislation; Paramount launched a new offer to buy Warner Bros. Discovery; Mark Zuckerberg will testify this Wednesday before a Los Angeles jury for allegedly designing features that encourage social media addiction. And for #ElVasoMedioLleno (the glass half full): Mara Santiago will perform on the EDC Búho stage on February 21. To find out about more news like this, follow us on social media. We are on all platforms as @telokwento. Hosted on Acast. See acast.com/privacy for more information.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:
* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human-agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

—Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps
00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 —
Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said: congrats on owning the Pareto frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto frontier: you have to have frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But it's really impressive to see it all come together like this.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as software techniques to get those large-model capabilities into much smaller, lighter-weight models that are much more cost-effective and lower latency, but still, you know, quite capable for their size.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto frontier, too? I think the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, the thinking was: if everybody that used Google used the voice model for, like, three minutes a day, you would need to double your CPU count. What's that discussion today at Google? How do you prioritize frontier versus: if we build it, how do we actually deploy it?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist in the slightly less capable last year's version, or the version from six months ago. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable, affordable model that enables a whole bunch of lower-latency use cases. People can use them for agentic coding much more readily, and then have the high-end frontier model that is really useful for, you know, deep reasoning, solving really complicated math problems, those kinds of things. And it's not one or the other. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable: you have to have the frontier model in order to then distill it into your smaller model. So it's not an either-or choice. You sort of need the frontier model in order to actually get a highly capable, more modest-size model. Yeah.

Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago. But I'm curious how you think about the cycle of these ideas, even, like, sparse models. How do you reevaluate them? How do you think about what is worth revisiting in the next generation of models? You worked on so many ideas that end up being influential, but in the moment they might not feel that way necessarily. Yeah.

Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image dataset at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories (this one's going to be really good at mammals, this one's going to be really good at indoor room scenes, or whatever), and you cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of: okay, what if we want to actually serve that? You train all these independent expert models and then squish them into something that actually fits in a form factor that you can serve. And that's not that different from what we're doing today. Often today, instead of having an ensemble of 50 models, we're having a much larger-scale model that we then distill into a much smaller-scale model.

Shawn Wang [00:05:09]: Yeah. Part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that. RL basically spikes models in a certain part of the distribution. And you can spike models, but it might be lossy in other areas, so it's kind of an uneven technique. But you can probably distill it back. And I think the general dream is to be able to advance capabilities without regressing on anything else. And I feel like some part of that capability merging without loss should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation as that you can have a much smaller model with a very large training dataset, and you can get utility out of making many passes over that dataset, because you're now getting the logits from the much larger model in order to coax the right behavior out of the smaller model, behavior that you wouldn't otherwise get with just the hard labels. And I think that's what we've observed.
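What "getting the logits from the much larger model" looks like mechanically, as a minimal NumPy sketch: the temperature, blend weight, and array shapes here are illustrative assumptions, not Gemini's actual recipe.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels, T=2.0, alpha=0.5):
    """Hinton/Vinyals/Dean-style distillation: blend the usual cross-entropy on
    hard labels with cross-entropy against the teacher's temperature-softened
    distribution (equivalent to KL up to a constant)."""
    p_teacher = softmax(teacher_logits / T)          # soft targets from the big model
    log_p_student = np.log(softmax(student_logits / T))
    soft = -(p_teacher * log_p_student).sum(-1).mean() * T * T  # T^2 restores gradient scale
    hard = -np.log(softmax(student_logits)[np.arange(len(hard_labels)), hard_labels]).mean()
    return alpha * soft + (1 - alpha) * hard

# Toy batch: 4 examples over a 10-class vocabulary; values are made up.
rng = np.random.default_rng(0)
student = rng.normal(size=(4, 10))
teacher = rng.normal(size=(4, 10)) * 3  # a "sharper" teacher
print(distillation_loss(student, teacher, hard_labels=np.array([1, 0, 3, 7])))
```

The soft targets carry the teacher's full output distribution per example, which is why many passes over the same data keep paying off in a way hard labels alone would not.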
You can get very close to your largest-model performance with distillation approaches. And that seems to be a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the Flash version of the next generation as good as, or even substantially better than, the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.

Shawn Wang [00:07:02]: So the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are our Pro-scale model, and we can distill from that as well into our Flash-scale model. So I think it's an important set of capabilities to have. And inference-time scaling can also be a useful thing to improve the capabilities of the model.

Shawn Wang [00:07:35]: Yeah, yeah. Cool. And obviously, I think the economics of Flash are what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, economics-wise: because Flash is so economical, you can use it for everything. It's in Gmail now. It's in YouTube. It's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products, various AI Overviews and AI Mode.

Shawn Wang [00:08:05]: Oh my God, Flash powers AI Mode. Oh my God. Yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve generating many more tokens before they actually finish what you ask them to do. You're going to ask now not just "write me a for loop" but "write me a whole software package to do X or Y or Z." And so having low-latency systems that can do that seems really important. Flash is one way of doing that. Obviously our hardware platforms enable a bunch of interesting aspects of our serving stack as well, like TPUs: the interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long-context attention operations, or having sparse models with lots of experts. These kinds of things really matter a lot in terms of how you make models servable at scale.

Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of one generation delayed? I almost think about the capability this way: in certain tasks, the Pro model today saturates some sort of task. So next generation, that same task will be saturated at the Flash price point.
And I think for most of the things people use models for, at some point the Flash model two generations out will be able to do basically everything. How do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.

Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But what often happens is that as the models become more capable, people ask them to do more. I think this happens in my own usage. I used to try our models a year ago for some sort of coding task, and they were okay at some simpler things, but wouldn't work very well for more complicated things. Since then, we've improved dramatically on the more complicated coding tasks, and now I'll ask for much more complicated things. And I think that's true not just of coding but of, say, "can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment," which is a much more complicated task than people would have asked a year ago. So you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these particular areas in order to make the next generation even better?

Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? Because it's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally, like "this is what we're building towards"? Yeah.

Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often have a lifespan of utility: they're introduced, and maybe they're quite hard for current models. I like to think the best kinds of benchmarks are ones where the initial scores are like 10 to 30%, but not higher. Then you can work on improving that capability, whatever it is the benchmark is trying to assess, and get it up to 80 or 90%. Once it hits 95% or so, you get very diminishing returns from really focusing on that benchmark, because either you've now achieved that capability, or there's the issue of leakage of the public data, or very related data, into your training data. So we have a bunch of held-out internal benchmarks that we really look at, where we know the data wasn't represented in the training set at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on assessing how to make the model better at those kinds of things. Is it that we need a different kind of data to train on, more specialized for this particular kind of task?
Do we need a bunch of architectural improvements, or some sort of model capability improvements? What would help make that better?

Shawn Wang [00:12:53]: Is there such an example: a benchmark that inspired an architectural improvement? I'm just kind of jumping on that because you just...

Jeff Dean [00:13:02]: I mean, I think some of the long-context capability of the Gemini models, which came first in 1.5, really was about looking at, okay, we want to have...

Shawn Wang [00:13:15]: Immediately everyone jumped to completely green charts. I was like, how did everyone crack this at the same time? Right. Yeah.

Jeff Dean [00:13:23]: I mean, as you say, the single-needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or so. Most models don't actually have much larger than 128K context these days; we're trying to push the frontier to 1 million or 2 million context, which is good, because I think there are a lot of use cases, like putting a thousand pages of text, or multiple hour-long videos, in the context and then actually being able to make use of that, where the opportunities to explore are fairly large. But the single-needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, multi-needle or more realistic "take all this content and produce this kind of answer" benchmarks that better assess what people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing.

Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting, because the more meta level I'm trying to operate at here is: you have a benchmark, and you're like, okay, I see the architectural thing I need to do to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say: exactly the kind of thing where you win short term. Longer term, I don't know if that's going to scale. You might have to undo it.

Jeff Dean [00:15:01]: I mean, I like to focus not on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that long context is useful, but it's way too short today. What you would really want is: can I attend to the internet while I answer my question? But that's not going to be solved by purely scaling the existing solutions, which are quadratic. A million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube, and the deeper representations we can find, not just for a single video but across many videos. On a personal Gemini level, you could attend to all of your personal state, with your permission: your emails, your photos, your docs, the plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way?
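To see why "just scale it" fails, here is a rough back-of-the-envelope on the quadratic term in standard attention. The constants (depth 64, two FLOPs per score entry) are illustrative assumptions, not real model numbers:

```python
# Back-of-the-envelope: the attention score matrix alone has n^2 entries
# per layer. All constants are illustrative, not actual Gemini figures.
LAYERS = 64          # hypothetical depth
FLOPS_PER_ENTRY = 2  # one multiply-accumulate per score

for n in [128_000, 1_000_000, 1_000_000_000, 1_000_000_000_000]:
    flops = LAYERS * FLOPS_PER_ENTRY * n * n
    print(f"context {n:,}: ~{flops:.1e} FLOPs for attention scores alone")
# ~2.1e+12, ~1.3e+14, ~1.3e+20, ~1.3e+26 FLOPs respectively
```

Going from 128K to 1M tokens already costs about 60x more; a trillion tokens is another twelve orders of magnitude beyond that, which is why the goal is framed as an illusion of attending, built out of retrieval and system tricks, rather than literal attention.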
Shawn Wang [00:16:26]: But by the way, I did some math, and if you spoke all day, every day, for eight hours a day, you'd only generate a maximum of like a hundred K tokens a day, which very comfortably fits.

Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...

Shawn Wang [00:16:46]: Well, also, I think the classic example is when you start going beyond language into proteins and whatever else, which is extremely information dense. Yeah.

Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is that we've always wanted it to be multimodal from the start. To some people that means text and images and video and audio, the human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities: LIDAR sensor data from, say, Waymo vehicles, or robots, or various kinds of health modalities, X-rays and MRIs and imaging and genomics information. There are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful, because it hints to the model that this is a thing.

Shawn Wang [00:18:04]: Yeah. Do you believe, since we're on this topic and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, modalities that supersede all the other modalities? A simple example is that vision can, on a pixel level, encode text, and DeepSeek had the DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality? Yeah.

Jeff Dean [00:18:36]: I mean, vision and motion are quite important things. Motion, meaning video as opposed to static images. There's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you, which is really what we want these models to do, to interpret the things we're seeing or paying attention to and then help us use that information to do things. Yeah.

Shawn Wang [00:19:05]: I think motion, you know... I still want to shout out: I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.

Jeff Dean [00:19:15]: Yeah. I mean, actually, I think people are kind of not necessarily aware of what the Gemini models can actually do. Like, I have an example I've used in one of my talks. It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has Michael Jordan hitting some jump shot at the end of the finals, and some soccer goals, and things like that. And you can literally just give it the video and say: can you please make me a table of what all these different events are, what the date is when they happened, and a short description? And you now get an 18-row table of that information extracted from the video, which is not something most people think of: like, turn video into a SQL-like table.
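That "video into a SQL-like table" flow is mostly glue once the model returns structured output. A sketch of the downstream half, with the model call itself stubbed out (in practice you would prompt a video-capable model such as Gemini for JSON rows):

```python
import json
import sqlite3

# Pretend this came back from a video-understanding model asked for JSON;
# the second row is a made-up placeholder for illustration.
model_output = json.dumps([
    {"event": "Jordan's final-shot jumper", "date": "1998-06-14",
     "description": "Game 6 of the NBA Finals"},
    {"event": "Example soccer goal", "date": "2014-07-13",
     "description": "Hypothetical highlight for illustration"},
])

rows = json.loads(model_output)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE highlights (event TEXT, date TEXT, description TEXT)")
db.executemany(
    "INSERT INTO highlights VALUES (:event, :date, :description)", rows
)
for row in db.execute("SELECT * FROM highlights ORDER BY date"):
    print(row)
```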
Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of... you mentioned attending to the whole internet, right? Google was almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. That ranking is much different for an LLM: you can expect a person to look at maybe the first five or six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? How do you internally figure out how to build the AI mode that is maybe a much broader search in span, versus the more human one? Yeah.

Jeff Dean [00:20:47]: I mean, even in pre-language-model-based work, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset that is relevant with very lightweight methods, and you're down to like 30,000 documents or something. Then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, in order to get down to what you ultimately show, which is the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar. You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents, with maybe 30 million interesting tokens, and then how do you go from that to the 117 documents I really should be paying attention to in order to carry out the task the user has asked? You can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight models. Then you have some system that helps you narrow down from 30,000 to the 117, with maybe a slightly more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, is your most capable model. So I think it's going to be some system like that which really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, well, not the illusion: you are searching the internet, but you're finding a very small subset of things that are relevant.

Shawn Wang [00:22:47]: Yeah. I often tell people who are not steeped in Google search history that BERT was basically immediately put inside of Google search, and that improved results a lot, right? I don't have any numbers off the top of my head, but I'm sure you do; those are obviously the most important numbers to Google. Yeah.
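Jeff's funnel (a giant index, then ~30,000 candidates, then ~117 documents, then a final read) is a classic cascade, and its shape fits in a few lines. The three scorers below are toy stand-ins; the real stages would be an inverted index or embedding lookup, a cheap reranker, and a frontier model:

```python
from typing import Callable

def cascade(query: str, corpus: list[str],
            cheap: Callable, mid: Callable, expensive: Callable,
            k1: int = 30_000, k2: int = 117) -> list[str]:
    """Progressively narrow candidates with increasingly costly scorers."""
    # Stage 1: embarrassingly parallel, very lightweight per document.
    stage1 = sorted(corpus, key=lambda d: cheap(query, d), reverse=True)[:k1]
    # Stage 2: a mid-size reranker over the survivors.
    stage2 = sorted(stage1, key=lambda d: mid(query, d), reverse=True)[:k2]
    # Stage 3: the most capable (and most expensive) model reads what's left.
    return sorted(stage2, key=lambda d: expensive(query, d), reverse=True)

# Stand-in scorers: token overlap, length-normalized overlap, and a fake
# "read it carefully" score. Real systems would swap in real models here.
def overlap(q, d):     return len(set(q.split()) & set(d.split()))
def normalized(q, d):  return overlap(q, d) / (1 + len(d.split()))
def careful(q, d):     return overlap(q, d) * 2 + normalized(q, d)

docs = ["solar panel deployment report", "cat videos", "solar energy trends"]
print(cascade("solar deployment", docs, overlap, normalized, careful, k1=2, k2=2))
```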
Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words enables you to get out of the explicit hard notion of particular words having to be on the page, and really get at the notion that the topic of this page, or this paragraph, is highly relevant to this query. Yeah.

Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems. It's Google, it's YouTube. YouTube has this semantic ID thing, where every token, every item in the vocab, is a YouTube video or something, and it predicts the video using a codebook, which is absurd to me at YouTube's size.

Jeff Dean [00:23:50]: And then most recently Grok as well, for xAI, which is, yeah... I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.

Shawn Wang [00:24:06]: So do you have a history of, like, what's the progression?

Jeff Dean [00:24:09]: Oh yeah. I mean, I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009. We never actually published any papers about the origins of Google search, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or 2005, and that talk is really about that evolution. One of the things that really happened in 2001 was we were working to scale the system in multiple dimensions. One: we wanted to make our index bigger, so we could retrieve from a larger index, which generally helps your quality, because if you don't have the page in your index, you're not going to do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. We had a sharded system where you have more and more shards as the index grows: you have like 30 shards, and if you want to double the index size, you make 60 shards, so that you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those. We eventually did the math and realized that in a data center where we had, say, 60 shards and 20 copies of each shard, we now had 1,200 machines with disks. And one copy of that index would actually fit in memory across those 1,200 machines. So in 2001 we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms like restaurant and restaurants and cafe and bistro and all these things. And you can suddenly start really getting at the meaning of the words, as opposed to the exact syntactic form the user typed in. And that was 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.
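The shape of that trick, lookups cheap enough that you can afford to expand the query, fits in a small sketch. This is a toy in-memory inverted index with a hand-coded synonym table; the real system learned expansions from data rather than using a fixed dict:

```python
from collections import defaultdict

# Toy corpus and an illustrative, hand-coded synonym table.
DOCS = {
    1: "best cafe near the park",
    2: "cheap restaurants downtown",
    3: "bistro with outdoor seating",
}
SYNONYMS = {"restaurant": {"restaurant", "restaurants", "cafe", "bistro"}}

index = defaultdict(set)            # term -> set of doc ids
for doc_id, text in DOCS.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query: str) -> set[int]:
    hits = set()
    for term in query.split():
        # Expand each query term. A disk-based index would pay one seek per
        # expanded term per shard, which is why this was off-limits before
        # the whole index fit in RAM.
        for expanded in SYNONYMS.get(term, {term}):
            hits |= index[expanded]
    return hits

print(search("restaurant"))   # {1, 2, 3}: all three docs via expansion
```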
Alessio Fanelli [00:26:47]: What are the principles you use to design these systems, especially when, in 2001, the internet is doubling or tripling in size every year? I think today you see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles you use to think about this? Yeah.

Jeff Dean [00:27:08]: I mean, first, whenever you're designing a system, you want to understand which design parameters are going to be most important. How many queries per second do you need to handle? How big is the internet, and how big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple; will the system work well? And I think a good design principle is to design a system so that the most important characteristics can scale by factors of five or ten, but probably not beyond that, because often what happens is that if you design a system for X, and something suddenly becomes a hundred X, a very different point in the design space becomes possible, one that would not make sense at X but at a hundred X makes total sense. Like going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the state on disk that those machines can actually hold a full copy of the index in memory. That all of a sudden enabled a completely different design that wouldn't have been practical before. So I'm a big fan of thinking through designs in your head, playing with the design space a little, before you actually do a lot of writing of code. But, as you said, in the early days of Google we were growing the index quite extensively, and we were growing the update rate of the index. The update rate is actually the parameter that changed the most, surprisingly. It used to be once a month.

Shawn Wang [00:28:55]: Yeah.

Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in sub one minute.

Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?

Jeff Dean [00:29:04]: Because all of a sudden, for news-related queries, if you've got last month's news index, it's not actually that useful.

Shawn Wang [00:29:11]: News is a special beast. Was there any... you could have split it onto a separate system.

Jeff Dean [00:29:15]: Well, we did. We launched a Google News product. But you also want news-related queries that people type into the main index to be updated as well.

Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to classify whether the page... you have to decide which pages should be updated, and at what frequency.
Jeff Dean [00:29:30]: Oh yeah, there's a whole system behind the scenes that's trying to decide update rates and the importance of pages. So even if a page's update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they changed might be low, but the value of having them updated is high.

Shawn Wang [00:29:50]: Yeah, yeah. Well, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up: latency numbers every programmer should know. Was there a general story behind that? Did you just write it down?

Jeff Dean [00:30:06]: I mean, this is like eight or ten different kinds of metrics: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send a packet from the US to the Netherlands?

Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?

Jeff Dean [00:30:25]: We had a data center in the Netherlands. So, I mean, I think this gets to the point of being able to do back-of-the-envelope calculations. These are the raw ingredients of those, and you can use them to say, okay, if I need to design a system to do image search and thumbnailing of the result page, how would I do that? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth do I need? How many disk seeks would I do? You can actually do thought experiments in 30 seconds or a minute with the basic numbers at your fingertips. And then, as you build software using higher-level libraries, you want to develop the same intuitions for how long it takes to look up something in each particular kind of system.

Shawn Wang [00:31:51]: I wonder, if you were to update your numbers...

Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference. Often a good way to view that is: how much state will you need to bring in from memory, whether on-chip SRAM, or HBM (the accelerator-attached memory), or DRAM, or over the network? And then, how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low: depending on your precision, it's something like sub one picojoule.

Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah.

Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy, and how you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip, not even off-chip, but on the other side of the same chip, can be a thousand picojoules. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules, so you'd better make use of that thing you moved many, many times.
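Jeff's picojoule figures make the batching argument a one-liner of arithmetic. Using his rough numbers (~1 pJ per multiply, ~1,000 pJ to move a parameter across the chip; both are order-of-magnitude figures from the conversation, not datasheet values):

```python
MOVE_PJ = 1000.0   # energy to move one parameter from far SRAM (rough figure)
MAC_PJ = 1.0       # energy for one low-precision multiply (rough figure)

for batch in [1, 8, 64, 256]:
    # Each moved weight is reused once per example in the batch.
    pj_per_mac = MAC_PJ + MOVE_PJ / batch
    print(f"batch {batch:>3}: ~{pj_per_mac:7.1f} pJ per useful multiply")

# batch   1: ~ 1001.0 pJ per useful multiply
# batch   8: ~  126.0 pJ per useful multiply
# batch  64: ~   16.6 pJ per useful multiply
# batch 256: ~    4.9 pJ per useful multiply
```

The move cost amortizes linearly in the batch size, which is exactly the trade against latency discussed next.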
So that's where the batch dimension comes in. Because all of a sudden, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.

Shawn Wang [00:33:40]: Yeah. Right.

Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.

Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.

Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.

Shawn Wang [00:33:56]: The best latency.

Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency you get are quite large. So, yeah.

Shawn Wang [00:34:04]: Is there a similar trick, like what you did with putting everything in memory? Obviously Groq has caused a lot of waves by betting very hard on SRAM. I wonder if that's something you already saw with the TPUs: to serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you were seeing there?

Jeff Dean [00:34:33]: Yeah. I mean, TPUs have this nice regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. For serving some kinds of models, you pay a much higher cost and latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism: spread it out over lots of chips, and you get quite good throughput improvements and latency improvements from doing that. You're now striping your smallish-scale model over, say, 16 or 64 chips, but if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.
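A quick feasibility check on that striping idea. All the hardware numbers here are illustrative placeholders (per-chip SRAM and bandwidth vary a lot across real parts), so treat this as the shape of the estimate, not a spec:

```python
# Hypothetical part: ~256 MB of usable on-chip SRAM, ~8 TB/s SRAM bandwidth.
SRAM_BYTES = 256e6
SRAM_BW = 8e12

def serving_estimate(params: float, bytes_per_param: float, chips: int):
    per_chip = params * bytes_per_param / chips
    fits = per_chip <= SRAM_BYTES
    # Lower bound: every parameter read once per decoded token, all chips
    # streaming in parallel; ignores interconnect and attention costs.
    token_time_s = per_chip / SRAM_BW
    return fits, token_time_s

for chips in [16, 64]:
    fits, t = serving_estimate(params=8e9, bytes_per_param=1, chips=chips)
    print(f"{chips} chips: fits={fits}, >= {t*1e6:.0f} us/token lower bound")

# 16 chips: fits=False, >= 62 us/token lower bound
# 64 chips: fits=True, >= 16 us/token lower bound
```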
Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? How much do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, which is the most extreme version. How much of it is worth doing in hardware when things change so quickly? What's the internal discussion? Yeah.

Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design and architecture team and the higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. Because as a hardware designer, for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime as a chip, to take you three, four, five years. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast-changing field. And so having people with interesting ML research ideas, things we think will start to work in that timeframe, or will be more important in that timeframe, really enables us to get interesting hardware features put into TPU N+2, where TPU N is what we have today.

Shawn Wang [00:37:10]: Oh, the cycle time is plus two.

Jeff Dean [00:37:12]: Roughly. Because sometimes you can squeeze some changes into N+1, but bigger changes are going to require the chip design to be earlier in its lifetime. So whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if they work out, they would make something ten times as fast. And if they don't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change, and we want to be pretty sure it's going to work out, so we'll do lots of careful ML experimentation to show us that this is actually the way we want to go. Yeah.

Alessio Fanelli [00:37:58]: Is there a reverse of that: we already committed to this chip design, so we cannot take the model architecture that way, because it doesn't quite fit?

Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips you're going to have for both training and inference of that generation of model. So it goes both ways. Sometimes you can take advantage of, say, lower-precision things that are coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do it.

Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying ternary...

Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount: it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. I think people have gotten a lot of mileage out of having very low bit precision, but then having scaling factors that apply to a whole bunch of those weights.
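That "low bits plus a shared scale" idea is block-wise quantization. A minimal NumPy sketch with int4-style symmetric quantization and one float scale per block of 32 weights; the block size and bit width are arbitrary choices here, not anything TPU-specific:

```python
import numpy as np

def quantize_blocks(weights: np.ndarray, block: int = 32, bits: int = 4):
    """Symmetric per-block quantization: int codes plus one scale per block."""
    qmax = 2 ** (bits - 1) - 1                      # 7 for 4-bit signed
    w = weights.reshape(-1, block)
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                       # avoid divide-by-zero
    codes = np.clip(np.round(w / scales), -qmax - 1, qmax).astype(np.int8)
    return codes, scales

def dequantize_blocks(codes, scales, shape):
    return (codes * scales).reshape(shape).astype(np.float32)

w = np.random.randn(4, 64).astype(np.float32)
codes, scales = quantize_blocks(w)
w_hat = dequantize_blocks(codes, scales, w.shape)
# Storage: 4 bits per weight plus one scale per 32 weights, vs 32 bits each.
print("max abs error:", np.abs(w - w_hat).max())
```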
Shawn Wang [00:39:15]: Interesting. So low precision, but scaled weights. Huh. Never considered that. While we're on this topic: the whole concept of precision is weird when we're sampling. At the end of this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator at it. So there's a movement towards energy-based models and processors. Obviously you've thought about it; I'm just curious what your commentary is.

Jeff Dean [00:39:50]: Yeah. I mean, I think there are a bunch of interesting trends. Energy-based models are one. Diffusion-based models, which don't sequentially decode tokens, are another. Speculative decoding is a way you can get an equivalent, very small...

Shawn Wang [00:40:06]: Draft.

Jeff Dean [00:40:07]: ...batch factor. You predict, say, eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight; then you maybe accept five or six of those tokens, so you get a 5x improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. So these are all really good techniques, and I think it's really good to look at them through the lens of energy (real energy, not energy-based models), and also latency and throughput. If you look at things through that lens, it guides you to solutions that are going to be better at serving larger models, or equivalent-size models more cheaply and with lower latency.
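The speculative decoding loop Jeff sketches is simple to write down. A toy version with stand-in draft and target models; a real implementation verifies the draft tokens with one batched forward pass of the big model and compares probabilities token by token, which is where the amortization comes from:

```python
import random

def draft_model(prefix: list[str], k: int) -> list[str]:
    # Hypothetical cheap drafter: proposes k tokens quickly.
    return [f"tok{len(prefix) + i}" for i in range(k)]

def target_accepts(prefix: list[str], token: str) -> bool:
    # Stand-in for the big model's verification of one draft token.
    return random.random() < 0.7   # ~70% acceptance, like Jeff's 5-6 of 8

def speculative_step(prefix: list[str], k: int = 8) -> list[str]:
    proposed = draft_model(prefix, k)
    accepted = []
    for tok in proposed:
        if not target_accepts(prefix + accepted, tok):
            break                   # first rejection ends the run
        accepted.append(tok)
    # On rejection (or exhaustion) the target model supplies one token,
    # so every step makes progress even if the draft is all wrong.
    accepted.append("target-tok")
    return prefix + accepted

random.seed(42)
out = speculative_step(["<s>"])
print(f"{len(out) - 1} tokens from one big-model verification pass")
```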
Shawn Wang [00:41:03]: Yeah. Well, I think it's appealing intellectually. I haven't seen it really hit the mainstream, but I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.

Jeff Dean [00:41:23]: I mean, there are also the more exotic things, like analog computing substrates as opposed to digital ones. I think those are super interesting, because they can potentially be low power. But you often end up wanting to interface them with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you do at the boundaries and periphery of that system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.

Shawn Wang [00:42:05]: Yeah.

Alessio Fanelli [00:42:06]: Any other interesting research ideas you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. I mean, in terms of research directions, there are a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate maybe one model that's using other models as tools, in order to build things that can collectively accomplish much more significant pieces of work than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden out the capabilities of the models. If we could apply the improvements you're seeing in math and coding to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that, effectively that would really make the models improve quite a lot, I think.

Alessio Fanelli [00:43:26]: I'm curious: when we had Noam Brown on the podcast, he said they already proved you can do it with Deep Research. You kind of have it with AI mode too, in a way; it's not verifiable. I'm curious if there's any thread you think is interesting there. Both are basically information retrieval domains. So I wonder if the retrieval is the verifiable part that you can score, or... yeah. How would you model that problem?

Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved, to assess which ones are the 50 most relevant? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic, as opposed to an actual retrieval system. Yeah.
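That same-model-as-critic pattern is mostly prompt plumbing. A sketch of the scoring half, with the model call injected so the example stays self-contained; the rubric wording and the 1-10 scale are arbitrary choices for illustration, not anything Google-specific:

```python
import re
from typing import Callable

CRITIC_PROMPT = """You are a strict relevance judge.
Query: {query}
Document: {doc}
Rate relevance from 1 (useless) to 10 (perfect). Reply as: SCORE: <n>"""

def critique(query: str, doc: str, call_model: Callable[[str], str]) -> int:
    """Ask a model, prompted as a critic, to score one retrieved document."""
    reply = call_model(CRITIC_PROMPT.format(query=query, doc=doc))
    match = re.search(r"SCORE:\s*(\d+)", reply)
    return int(match.group(1)) if match else 0   # unparseable -> worst score

# Fake model so the sketch runs; swap in a real generate() call here.
def fake_model(prompt: str) -> str:
    return "SCORE: 8" if "solar deployment" in prompt else "SCORE: 2"

docs = ["solar deployment stats 2024", "celebrity gossip roundup"]
scores = [critique("solar panel growth", d, fake_model) for d in docs]
print(scores)   # [8, 2]: keep the first, drop the second
```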
Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard. But it always feels like that every year. And exactly with this RLVR thing, everyone's talking about, well, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know... LLM judge.

Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is that there are lots and lots of smart people thinking about creative solutions to the problems we all see. Everyone sees that the models are great at some things, that they fall down around the edges of those things, and that they're not as capable as we'd like in those areas. Coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward. And that's why it's super interesting. If you think about two years ago, we were struggling with GSM8K problems: "Fred has two rabbits. He gets three more rabbits. How many rabbits does he have?" That's a pretty far cry from the kinds of mathematics the models can do now, doing IMO and Erdős problems in pure language. That is a really amazing jump in capabilities in a year and a half or so. For other areas, it'd be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do see it for others, and we're going to work hard on making that better. Yeah.

Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI.

Shawn Wang [00:46:20]: That would be, as far as content creators go.

Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.

Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out. Just to dwell a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM.

Jeff Dean [00:47:02]: Yeah. I mean, it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. We have some distributed representation, neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things. And that enables us to reason and plan and do chains of thought, and roll them back: "now that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one." In a lot of ways, we're emulating what we intuitively think is happening inside real brains with neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things, and then a completely different way of thinking about them.

Shawn Wang [00:47:59]: Interesting. Maybe it seems obvious to you, but it wasn't obvious to me a year ago.

Jeff Dean [00:48:06]: I mean, I do think the progression, first translating to Lean and using Lean plus a specialized geometry model, and then this year switching to a single unified model, roughly the production model with a little more inference budget, is actually quite good, because it shows you that the capabilities of the general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street sign recognition model; I want to do speech recognition, so I have a speech model. Now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. And they're getting better and better.

Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed one of the people on that team, and he was like, yeah, I don't know where the IMO competition was held, I don't know the rules of it. I just trained the models. And it's kind of interesting that people with this universal skill set of machine learning can just be given data and enough compute and tackle any task, which is the bitter lesson, I guess. I don't know. Yeah.

Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.

Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here.
There's this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And God knows, Gemini Pro is like one to ten trillion parameters; we don't know. But take the Gemma models, for example: a lot of people want the open-source local models, and those have some knowledge that is not necessary, right? They can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling and going down to the small models, you're actually memorizing things that are not useful. So how do we extract that? Can we divorce knowledge from reasoning, you know?

Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space; you might prefer something that is more generally useful in more settings than some obscure fact. So that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just to have a general sense of how long bridges are. It maybe doesn't need to know how long some teeny little bridge in a more obscure part of the world is, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval, is important.

Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?

Jeff Dean [00:52:01]: Like, we're not going to train Gemini on my email, probably. We'd rather have a single model that can retrieve from my email as a tool, have the model reason about it, retrieve from my photos or whatever, and then make use of that, with multiple stages of interaction. That makes sense.
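That multi-stage retrieve-then-reason loop is basically a tool-calling agent. A skeletal version, with the planner model and the permissioned personal-data search stubbed out as placeholders (names like search_email are hypothetical, not a real API):

```python
from typing import Callable

def agent_answer(question: str,
                 plan: Callable[[str, list], str | None],
                 tools: dict[str, Callable[[str], list[str]]],
                 max_steps: int = 4) -> list:
    """Alternate between choosing a retrieval tool and reasoning over results."""
    evidence: list = []
    for _ in range(max_steps):
        step = plan(question, evidence)       # model decides: retrieve or stop
        if step is None:
            break                             # enough evidence gathered
        tool_name, query = step.split(":", 1)
        evidence.append((tool_name, tools[tool_name](query)))
    return evidence

# Stubs standing in for the model and the user's permissioned data stores.
def toy_plan(question, evidence):
    return "search_email:flight confirmation" if not evidence else None

tools = {"search_email": lambda q: [f"result for {q!r}"],
         "search_photos": lambda q: []}

print(agent_answer("When do I fly to Tokyo?", toy_plan, tools))
```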
Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, oh, we're building the best healthcare LLM, we're building the best law LLM: are those kind of short-term stopgaps, or...?

Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain. For healthcare, say, or for robotics: we're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities. So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail programming languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer, because we didn't get to expose it to as much data there; but it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare module, all knitted together to work in concert and called upon in different circumstances. If I have a health-related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.

Shawn Wang [00:54:36]: Installable knowledge.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion tokens or a trillion tokens of health data. Yeah.

Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.

Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare and the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? If I need a trillion healthcare tokens, they're probably not out there, you know? I think that's really the...

Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain. There's a lot of healthcare data that we don't have access to, appropriately. But there are a lot of healthcare organizations that want to train models on their own data, which is not public healthcare data. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but probably better than a general model trained on, say, public data. Yeah.

Shawn Wang [00:55:58]: Yeah. I believe, by the way, and this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns.
Jeff Dean [00:56:09]: Oh yeah. I think the example we used was Kalamang, which is truly low-resource, because it's only spoken by, I think, 120 people in the world, and there's no written text.

Shawn Wang [00:56:20]: So you can just do it that way: put it in the context. You put basically the whole data set in the context, right?

Jeff Dean [00:56:27]: If you take a language like Somali, or Ethiopian Amharic or something, there is a fair bit of text in the world in those languages, and we're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it in, you'll improve the capabilities of those models.

Shawn Wang [00:56:49]: Yeah.

Art of Boring
Emerging Markets: AI "Picks and Shovels," ROIC, and the Great Supply Chain Reshuffle | EP 210

Art of Boring

Play Episode Listen Later Feb 12, 2026 28:04


Wen Quan Cheong, co-manager of Mawer's emerging markets equity strategy, outlines four major themes shaping the opportunity set today. First, the "picks and shovels" of AI: upstream enablers such as advanced chip manufacturers, memory makers, and specialized chip-testing firms that are benefiting from structural bottlenecks in the AI supply chain. Second, companies that are actually converting AI investment into higher returns on capital. Third, the "Great Supply Chain Reshuffle," where national security concerns, tariffs, and "China plus one" strategies are driving a reconfiguration of strategic manufacturing infrastructure across Asia and the U.S. And finally, a broader universe of less obvious EM stories that illustrate how opportunity is evolving across regions and sectors as these forces play out.   Highlights: Why upstream AI enablers are seeing such powerful earnings leverage: how capacity cuts, equipment bottlenecks, and surging demand for DRAM, HBM, and NAND have flipped the memory market from oversupplied to structurally tight. What it takes for companies to truly convert AI investment into sustainable returns on invested capital, and why early, well-run adopters may enjoy a multi year edge. How shifting geopolitics, U.S. tariffs, and national security concerns are driving a "Great Supply Chain Reshuffle," from TSMC-linked clean room specialists like Actor Group supporting new fabs to Chinese manufacturers using their domestic scale and integration to expand overseas. Why emerging markets are more than just China and tech, with examples ranging from Saudi insurance aggregation and Vietnamese pharmacies to ship maintenance businesses with recurring revenues.   Host: Rob Campbell, CFA Institutional Portfolio Manager Guest: Wen Quan Cheong, CFA Portfolio Manager   This episode is available for download anywhere you get your podcasts. Founded in 1974, Mawer Investment Management Ltd. (pronounced "more") is a privately owned independent investment firm managing assets for institutional and individual investors. Mawer employs over 250 people in Canada, U.S., and Singapore. Visit Mawer at https://www.mawer.com. Follow us on social: LinkedIn - https://www.linkedin.com/company/mawer-investment-management/ Instagram - https://www.instagram.com/mawerinvestmentmanagement/

Linha de Passe
Big win for São Paulo, Bahia triumphing over Vasco, and a dramatic draw between Atlético-MG and Remo

Linha de Passe

Play Episode Listen Later Feb 12, 2026 80:34


On this Wednesday's program, our commentators talked about everything that happened in this round of the Brasileirão. Come join us! Learn more about your ad choices. Visit podcastchoices.com/adchoices

Calendar Call
Dram Shop Act

Calendar Call

Play Episode Listen Later Feb 11, 2026 29:00


Episode 106: Dram Shop Act. This month on Calendar Call, Matt Berardino talks with Attorney Daniel Petroskey, counsel at DeFronzo & Petroskey, P.C., about the Dram Shop Act. Matt and Attorney Petroskey discuss the elements of a Dram Shop claim, intoxication, establishing causation, as well as the many nuances and additional claims. References: Sec. 30-102; Sec. 30-86; O'Dell v. Kozee; Sanders v. Officers Club of CT

Kinky Cocktail Hour
The Gay Leather Scene

Kinky Cocktail Hour

Play Episode Listen Later Feb 11, 2026 59:22


In this episode, Lady Petra and Saffermaster chat with Rocket Rob of the Rocket Review podcast about the gay leather scene, over a dram of Caol Ila Single Malt Scotch Whisky. The Kinky Cocktail Hour is brought to you by Motorbunny, the best saddle-style vibrator on the market today. Save $40 on your Motorbunny purchase with the code LADYPETRAPLAYGROUND at Motorbunny.com. You can order the TechRing, "where health meets pleasure," at http://myfirmtech.com using the code "KINKY" to save 15%. Put a ring on it! Support the show: Hard Married: A Guide to Building Lasting Love by Unlocking the Secrets of Deep Intimacy. Get your copy of this new book by Saffer here: https://tinyurl.com/Hard-Married Visit Hardmarried.net Listen on Podurama https://podurama.com

El Podcast de Aníbal
Sobre La Mesa - Monday, February 9, 2026

El Podcast de Aníbal

Play Episode Listen Later Feb 10, 2026 101:15


1. Bad Bunny, his political messages at the Super Bowl, and the repercussions 2. Many questions about the Secretary of Housing, Ciary Pérez Peña, and her Inspection Center 3. Tax filing season begins while the "refundable refund" check and the supposed tax reform remain in limbo 4. Controversy over the guilty verdict in the case of the death of marine biologist Roberto Viqueira at the hands of nurse Eduardo Meléndez 5. We want to promote and grow tourism, but we have some of the most unsafe beaches in the entire United States 6. The situation of the Haitians who arrived legally in Puerto Rico and are now being deported is dramatic 7. Epstein's accomplice takes the Fifth Amendment and says that if Trump pardons her, she will talk to exonerate Trump and Clinton 8. SPORTS ZONA-5. See omnystudio.com/listener for privacy information.

Podcast de La Hora de Walter
02 09-02-26 LHDW Charly 015: Dramatic situation in Cuba amid the fuel shortage. Will the communist regime fall?

Podcast de La Hora de Walter

Play Episode Listen Later Feb 9, 2026 16:46


02 09-02-26 LHDW Charly 015: Dramatic situation in Cuba amid the fuel shortage. Will the communist regime fall? A domino effect after Venezuela.

Geek Forever's Podcast
Why South Korea Rules the World of Memory Chips: Behind Samsung and SK Hynix's 65% Grip on the Memory Market | Geek Story EP600

Geek Forever's Podcast

Play Episode Listen Later Feb 5, 2026 14:27


Believe it or not, in the phone you're holding right now, in the computer you use for work, or even in the giant servers running Facebook and Google for the whole world, there is one small component without which the entire digital world would grind to a halt. We're not talking about processor chips like Intel's CPUs or the cutting-edge AI GPUs from NVIDIA that we hear about in the news all the time. We're talking about the "memory" of this world. Imagine: if NVIDIA is the brain that does math faster than anything else, the chips we're discussing today are the scratch paper that records those enormous numbers and feeds them back to the brain. Without that scratch paper, no matter how fast the brain is, it cannot work. And the most amazing thing is that the market leader in this "scratch paper," memory, is not in America, not in Taiwan, and not in China. Its center sits on a small peninsula that was once so battered by war that almost nothing was left. Yes, we're talking about South Korea and the two giants named Samsung and SK Hynix. The latest data from 2025 tells us that these two Korean companies alone hold nearly 65% of the world's DRAM market. The question is: how did a country that started out making wigs and garments 60 years ago climb to become the keeper of the digital world's fate? Today I'll take you back to the beginning, the all-in bets that nearly bankrupted the company, and the technology battles decided by a thickness of just a few nanometers. Have a listen, and don't forget to follow the Geek Forever's Podcast channel. #Samsung #SKHynix #Semiconductor #ธุรกิจเทคโนโลยี #ประวัติศาสตร์ธุรกิจ #ชิปคอมพิวเตอร์ #เศรษฐกิจเกาหลีใต้ #ความรู้รอบตัว #DRAM #สาระน่ารู้ #geekstory #geekforeverpodcast

Asia Centric by Bloomberg Intelligence
How AI Created an Unprecedented Memory Chip Crisis

Asia Centric by Bloomberg Intelligence

Play Episode Listen Later Feb 4, 2026 32:47 Transcription Available


A surprising new bottleneck has emerged in the global AI infrastructure build-out: memory chips. Major manufacturers including SK Hynix, Micron and Samsung Electronics have effectively run out of capacity, sparking a scramble among customers to secure supply. Contract prices for certain DRAM chips surged 78% in the fourth quarter alone, with another 50% jump forecast by March. This price shock is creating a squeeze — especially for makers of smartphones, PCs and automobiles — as memory suppliers prioritize high-margin AI chips over "legacy" components. The result is a widening supply gap that threatens to leave consumer electronics companies struggling to secure essential parts through 2027. MS Hwang, research director at Counterpoint Research, joins John Lee and Bloomberg News technology editor Vlad Savov on the Asia Centric podcast. They unpack the dynamics of the shortage and how Chinese upstarts are racing to fill the void.See omnystudio.com/listener for privacy information.

Actualidade - Renascença V+ - Videocast
Autarca de Leiria: "É dramático. 50 mil casas estão 'out' há quatro dias, não há 12 horas como no apagão"

Actualidade - Renascença V+ - Videocast

Play Episode Listen Later Feb 1, 2026 1:35


Mayor of Leiria: "It's dramatic. 50,000 homes have been 'out' for four days, not 12 hours like during the blackout"

Analytic Dreamz: Notorious Mass Effect
"EXPLAINING WHY RAM PRICES ARE SKYROCKETING AND REACHING NEW HIGHS, SUGGESTING THERE MAY BE NO END IN SIGHT"

Analytic Dreamz: Notorious Mass Effect

Play Episode Listen Later Jan 30, 2026 12:49


Linktree: https://linktr.ee/Analytic
Join The Normandy For Additional Bonus Audio And Visual Content For All Things Nme+! Join Here: https://ow.ly/msoH50WCu0K
In the Notorious Mass Effect segment, Analytic Dreamz dives deep into the RAM Price Crisis (2025–2026), unpacking the key data, market drivers, and real consumer impact behind the dramatic surge in memory costs.
RAM prices have skyrocketed into a sustained inflation cycle heading into 2026, fueled by explosive AI data center demand that prioritizes high-bandwidth memory (HBM) and diverts supply from consumer DRAM. Manufacturing bottlenecks, limited cleanroom capacity, and lithography constraints exacerbate the shortage, while major players like Micron exit consumer RAM sales (Crucial brand in December 2025) to focus on higher-margin AI segments. Samsung and SK hynix report massive profit surges amid the boom.
DDR5 RAM has seen prices more than quadruple (+340–344%) since July 2025, with a +27% month-on-month jump from December to January 2026. DDR4 and older standards are rising even faster recently (+46% MoM in January), narrowing the gap with newer tech. ComputerBase's fixed-basket analysis confirms average prices have quadrupled versus September 2025, with Germany's retail tracking—Europe's largest PC hardware market—mirroring global trends, including growing secondary-market distortions.
Secondary effects hit related components hard: SSDs up +79%, hard drives +53%, GPUs +14% (with street prices far exceeding MSRP on models like RTX 5070 Ti). Specific examples include 2TB NVMe drives jumping 60–159% and NAS HDDs doubling.
Analyst forecasts from TrendForce and Omdia point to +50–60% DRAM contract price hikes in Q1 2026, following 40–70% YoY increases in 2025. PC shipments grew +9.2% in 2025 but face potential declines in 2026, while smartphone output forecasts drop ~20% for some brands, risking +30% price hikes or spec downgrades. Gaming consoles may see delays or higher launch prices.
Apple's upgrade costs (e.g., $400 for 16GB→32GB) already outpace comparable DDR5 sticks, with M6 Macs potentially facing steeper hikes or supply delays if AI firms continue outbidding.
The core takeaway: This AI-driven structural shift has quadrupled RAM prices in under six months, with volatility persisting through 2026. A plateau is the most optimistic scenario—no full reversal in sight. Analytic Dreamz breaks down the data, root causes, and widespread ripple effects across PCs, smartphones, and beyond.
Support this podcast at — https://redcircle.com/analytic-dreamz-notorious-mass-effect/donations
Privacy & Opt-Out: https://redcircle.com/privacy

¡Buenos días, Javi y Mar!
09:00H | 26 ENE 2026 | ¡Buenos días, Javi y Mar!

¡Buenos días, Javi y Mar!

Play Episode Listen Later Jan 26, 2026 60:00


On `¡Buenos días, Javi y Mar!` on CADENA 100, José Real reports on the investigation into a fracture in a railway track. The PP and Esquerra call for Óscar Puente's resignation and criticize the chaos on Rodalíes. The Feroz Awards, the prelude to the Goyas, are announced, with "Los Domingos" winning Best Drama Film and "La Cena" Best Comedy. Calçotada season kicks off in Valls. Music from Ana Mena, Ben Sombun, Maldita Nerea and Melendi, and Modjo. Jimeno poses picture-puzzle riddles built around fish recipes. Three Madrid murals compete in Street Art Cities for the world's best urban art. How to make popcorn with water is explained. The show shares funny listener audio clips about neighbors' conversations and family situations. The film "Marty Supreme" is also promoted, and the Maroon 5 concert in Seville is announced. Uninterrupted music rounds out the hour.

SpreadShotNews
SpreadShotNews Podcast 700: Episodio DCC - Panteon de las Pastas Edition

SpreadShotNews

Play Episode Listen Later Jan 25, 2026 146:47


Not even the most coveted carbohydrates can stop us! Because it's Monday and the SpreadShotNews Podcast is here! In this episode: Nico continues his misadventures in Project Wingman, while Maxi carries on with the Ace Combat marathon, this time finishing Ace Combat 2. He is also deep into Trails to Azure and shares some details about his adventure. In the Rapid-Fire, we have news about Hitman adding cross-progression, the umpteenth Ubisoft reorganization, and the likelihood of seeing Xbox Cloud Gaming with ads. For the Hot Coffee, we go over what last week's Xbox Developer_Direct left us. To wrap up, in the Special Move, Maxi recommends a Gamers Nexus documentary on how China is closing the gap with the rest of the world in DRAM and NAND manufacturing. He also recommends a video on upscaling technologies and how far you can lower the resolution while still getting a legible result. Nico, for his part, recommends CoolerControl, the platform-agnostic version of Fan Control for regulating and controlling your PC's fans. Finally, remember you can send us questions directly through Google Forms at the following link: spreadshotnews.com/preguntas

Loucos por Biografias
A Dramática História de SALVADOR ALLENDE - De Presidente do Chile á Mártir!

Loucos por Biografias

Play Episode Listen Later Jan 23, 2026 9:07


Support culture: become a channel member on YouTube from R$1.99 per month. Salvador Allende (1908–1973) was a Chilean physician and socialist politician, known for being the first Marxist elected president of a democratic country in Latin America, governing Chile from 1970 to 1973. He pursued the "Chilean road to socialism" through structural reforms such as the nationalization of copper. That is our story today. If you enjoyed it, leave a like, comment, and share this biography with more people. Let's encourage culture in our country. See you in the next story. Until then! (Tania Barros)

Personal Injury Primer
Ep 348 What is a Dram Shop Claim?

Personal Injury Primer

Play Episode Listen Later Jan 21, 2026 3:58


What is a Dram Shop Claim? I’m David Holub, an attorney focusing on personal injury law in northwest Indiana. Welcome to Personal Injury Primer, where we break down the law into simple terms, provide legal tips, and discuss personal injury law topics. Today’s question comes from a caller concerned about a relative who was hit […] The post Ep 348 What is a Dram Shop Claim? first appeared on Personal Injury Primer.

Club de Malasmadres
Cómo cuidarse sin culpa y sin perfección con Patri Psicóloga

Club de Malasmadres

Play Episode Listen Later Jan 19, 2026 73:42 Transcription Available


This episode is produced in collaboration with HAMNET. On the podcast we often talk about what goes unseen, what hurts, and how we keep going even when things don't turn out as we hoped. And HAMNET connects deeply with all of that. Directed by Oscar winner Chloé Zhao, HAMNET is based on a true story of love, motherhood and loss that lies at the origin of Shakespeare's Hamlet. Starring Jessie Buckley and Paul Mescal, it speaks about grief, but also about resilience, care, and the way we carry on. HAMNET has won the Golden Globe for Best Drama Film and the Golden Globe for Best Actress. It connects because it is honest, sensitive and deeply human. It opens January 23, exclusively in theaters. A film to watch calmly and let it run its own course. In this first episode of the third season of the Club de Malasmadres podcast we open a much-needed conversation about self-care without guilt, without perfection and without excuses. With Patri Psicóloga we talk about why taking care of ourselves makes us feel so guilty, how to start even when we have no time, real micro-habits, shared responsibility, and how to hold ourselves up emotionally in the middle of exhaustion. Because mothers' self-care matters, and in this podcast Patri Psicóloga gives us the roadmap to at least try. Discover Hamnet here: Https://www.Hamnet-LaPelicula.es/entradas
Podcast books:
Autocuidado: 52 semanas para cuidar de ti: (https://www.amazon.es/Autocuidado-semanas-para-cuidar-Psicolog%C3%ADa/dp/8425369134/)
Cómo tener tiempo para todo: (https://www.amazon.es/C%C3%B3mo-tener-tiempo-para-Psicolog%C3%ADa/dp/8425366720/)
*You can follow Malasmadres on: Facebook (https://www.facebook.com/malasmadres) Instagram (https://www.instagram.com/malasmadres) Twitter (https://twitter.com/malasmadres) Youtube (https://www.youtube.com/Malasmadres) and on our website (https://clubdemalasmadres.com/)
*You can follow Patri Psicóloga on: Facebook (https://www.facebook.com/patripsicologaoficial/) Twitter (https://x.com/Patri_Psicologa) Instagram (https://www.instagram.com/patri_psicologa/) and her website (https://www.patripsicologa.com/)

Les Cast Codeurs Podcast
LCC 335 - 200 terminaux en prod vendredi

Les Cast Codeurs Podcast

Play Episode Listen Later Jan 16, 2026 103:16


Back at full strength with five hosts, les cast codeurs open the year with a big episode packed with news and in-depth articles. AI of course and its impact on our practices, Mockito turning a page, some CSS (yes, really), the (non-)mapping of REST APIs to MCP, and a whole pile of tools for you. Recorded January 9, 2026. Download the episode LesCastCodeurs-Episode-335.mp3 or watch it as video on YouTube.

News

Languages

Will 2026 be the year of Java in the terminal? (word has it that it just might be...) https://xam.dk/blog/lets-make-2026-the-year-of-java-in-the-terminal/
- 2026: the year of Java in the terminal, catching up with Python, Rust, Go and Node.js.
- Java is underrated for CLI applications and TUIs (terminal user interfaces) despite its capabilities.
- The old excuses (slow startup, heavy tooling, verbosity, complex distribution) are obsolete thanks to recent advances: GraalVM Native Image for millisecond startup. JBang for simplified execution of Java scripts (single files, dependencies) and JARs. JReleaser to automate multi-platform distribution (Homebrew, SDKMAN, Docker, native images). Project Loom for easy concurrency with virtual threads. PicoCLI for argument handling.
- The potential goes beyond scripts: building complete, good-looking TUIs (e.g. dashboards, file managers, AI assistants).
- Obsolete excuses: fast startup (GraalVM), lightness (JBang), simple distribution (JReleaser), concurrency (Loom).
- Potential: build rich, attractive TUI applications.
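Since the item above leans on PicoCLI, JBang and GraalVM, here is a minimal, hedged sketch of what such a terminal-first Java CLI can look like — an illustrative example, not code from the article; the greet command and its option are invented for the demo:

import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// Run as a single file with `jbang Greet.java`, or compile with GraalVM Native Image for ms startup.
@Command(name = "greet", mixinStandardHelpOptions = true, description = "Says hello from the terminal.")
public class Greet implements Runnable {
    @Option(names = {"-n", "--name"}, defaultValue = "world", description = "Who to greet")
    String name;

    @Override
    public void run() {
        System.out.println("Hello, " + name + "!");
    }

    public static void main(String[] args) {
        System.exit(new CommandLine(new Greet()).execute(args));
    }
}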
Ruby 4.0.0 released https://www.ruby-lang.org/en/news/2025/12/25/ruby-4-0-0-released/
- Ruby Box (experimental): a new feature for isolating definitions (classes, modules, monkey patches) in separate boxes to avoid global conflicts.
- ZJIT: a new next-generation JIT compiler written in Rust, aiming to eventually surpass YJIT (currently experimental).
- Ractor improvements: Ractor::Port for better communication between Ractors, plus optimized internal structures to reduce global lock contention.
- Syntax changes: logical operators (||, &&, and, or) at the start of a line can now continue the previous line, making a "fluent" style easier.
- Core classes: Set and Pathname become built-in (core) classes instead of living in the standard library.
- Improved diagnostics: ArgumentError now shows code snippets for both the caller AND the method definition.
- Performance: optimized Class#new, faster instance variable access and significant garbage collector (GC) improvements.
- Cleanup: removal of obsolete behaviors (such as spawning processes via IO.open with |) and an update to Unicode 17.0.

Libraries

An introduction to building a multi-tenant app with Quarkus and nip.io https://www.the-main-thread.com/p/quarkus-multi-tenant-api-nipio-tutorial
- Building a multi-tenant REST API in Quarkus with per-subdomain isolation
- nip.io provides automatic DNS resolution with no local configuration
- The tenant is extracted from the HTTP Host header via a JAX-RS filter (a hedged sketch follows below)
- Tenant context managed with a request-scoped CDI bean for data isolation
- An application service managing per-tenant data with a concurrent Map
- An HTML/JS web page to view and add data per tenant
- CORS configuration is needed for local development
- The pattern acme.127-0-0-1.nip.io resolves automatically to localhost
- Full code available on GitHub with curl examples and browser tests
- An ideal base for SaaS prototyping and multi-tenant testing
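As a rough sketch of the Host-header extraction step described above (assuming a Quarkus REST application; the TenantFilter name and the request property are illustrative, not necessarily the tutorial's exact code):

import jakarta.ws.rs.container.ContainerRequestContext;
import jakarta.ws.rs.container.ContainerRequestFilter;
import jakarta.ws.rs.ext.Provider;

@Provider
public class TenantFilter implements ContainerRequestFilter {
    @Override
    public void filter(ContainerRequestContext ctx) {
        // "acme.127-0-0-1.nip.io:8080" -> tenant "acme"
        String host = ctx.getHeaderString("Host");
        String tenant = host.split("\\.")[0];
        ctx.setProperty("tenant", tenant); // a request-scoped CDI bean could carry this instead
    }
}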
Hibernate 7.2 with a few interesting improvements https://docs.hibernate.org/orm/7.2/whats-new/%7Bhtml-meta-canonical-link%7D
- Read-only replica (experimental): creates two session factories and swaps at the JDBC level if the driver supports it, with a custom mechanism otherwise. A read-only child StatelessSession can be opened that shares the transactional context.
- The Hibernate vector module adds binary, float16 and sparse vectors
- The SchemaManager can resynchronize sequences against the data in the tables
- Regexp in HQL with like

A new version of Hibernate with Panache for Quarkus https://quarkus.io/blog/hibernate-panache-next/
- A new experimental extension that unifies Hibernate ORM with Panache and Hibernate Reactive with Panache
- Entities can now operate in blocking or reactive mode without changing the base type
- Support for stateless sessions (StatelessSession) in addition to traditional managed entities
- Jakarta Data integration for type-safe queries checked at compile time
- Operations are defined in nested repositories rather than static methods
- Several repositories can be defined for different operation modes on the same entity
- The different modes (blocking/reactive, managed/stateless) are reached via supertype methods
- Support for the @Find and @HQL annotations to generate type-safe queries
- Repository access via injection or via the generated metamodel
- The extension is available on the main branch; feedback welcome on Zulip or GitHub

Spring Shell 4.0.0 GA released - https://spring.io/blog/2025/12/30/spring-shell-4-0-0-ga-released
- The final 4.0.0 release is available on Maven Central
- Compatible with the latest Spring Framework and Spring Boot versions
- A reworked command model that simplifies building interactive CLI applications
- jSpecify integration for better protection against NullPointerExceptions
- A more modular architecture allowing better customization and extension
- Fully updated documentation and examples to ease onboarding
- A v4 migration guide is available on the project wiki
- Bug fixes improving stability and reliability
- Lets you build standalone Java applications that run with java -jar or as GraalVM native binaries
- An opinionated approach to CLI development that stays flexible for specific needs

A new release of the library that implements gatherers beyond those shipped in the JDK https://github.com/tginsberg/gatherers4j/releases/tag/v0.13.0 gatherers4j v0.13.0.
- New gatherers: uniquelyOccurringBy(), moving/runningMedian(), moving/runningMax/Min().
- Change: the "moving" gatherers now include partial values by default (use excludePartialValues() to turn that off). (A JDK-only Gatherers sketch follows this Libraries section.)

LangChain4j 1.10.0 https://github.com/langchain4j/langchain4j/releases/tag/1.10.0
- Introduces a model catalog for Anthropic, Gemini, OpenAI and Mistral.
- Adds observability and monitoring capabilities for agents.
- Structured outputs, advanced tools and PDF-analysis-via-URL support for Anthropic.
- Transcription services support for OpenAI.
- Chat configuration parameters can be passed as method arguments.
- A new moderation guardrail for incoming messages.
- Reasoning content support for models.
- Introduces hybrid search.
- MCP client improvements.

The Mockito lead steps down after 10 years https://github.com/mockito/mockito/issues/3777
- Tim van der Lippe, Mockito's main maintainer, announces his departure for March 2026, closing a decade of contributions to the project.
- One of the main reasons is burnout tied to recent JVM changes (JVM 22+) around agents, imposing heavy technical constraints with no simple alternative offered by the JDK maintainers.
- He points to the lack of support and the pressure put on open source volunteers during these major technology transitions.
- The growing complexity of supporting Kotlin, which uses the JVM in its own specific way, makes the Mockito codebase harder to maintain and less pleasant to evolve, in his view.
- He has lost the enjoyment and now prefers to spend his free time on other projects such as Servo, a web engine written in Rust.
- A transition period is planned until March to hand maintenance over to new contributors.
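To make the gatherer idea concrete, here is a moving-max sketch using only the JDK's built-in java.util.stream.Gatherers (Java 24+), in the spirit of gatherers4j's movingMax but not its actual API — note the JDK version emits only full windows, whereas gatherers4j now includes partial values by default:

import java.util.Collections;
import java.util.stream.Gatherers;
import java.util.stream.Stream;

public class MovingMax {
    public static void main(String[] args) {
        Stream.of(3, 1, 4, 1, 5, 9, 2, 6)
              .gather(Gatherers.windowSliding(3)) // [3,1,4], [1,4,1], [4,1,5], ...
              .map(Collections::max)              // max of each window: a moving max
              .forEach(System.out::println);      // prints 4, 4, 5, 9, 9, 9
    }
}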
Infrastructure

Kubernetes' primary benefit is not scaling - https://mcorbin.fr/posts/2025-12-29-kubernetes-scale/
- Before Kubernetes, running applications in production required a pile of complex tools (Ansible, Puppet, Chef) and lots of manual configuration
- Load balancing was done with HAProxy and Keepalived in active/passive mode, requiring manual configuration updates on every instance change
- Service discovery and rollouts were orchestrated by hand, instance by instance, with no automated reconciliation
- Each stack (Java, Python, Ruby) had its own deployment method, with no standardization (rpm, deb, tar.gz, jar)
- Resource management was manual, often with one application per machine, wasting capacity and complicating maintenance
- Kubernetes standardizes everything into a few YAML resources (Deployment, Service, Ingress, ConfigMap, Secret) with a simple declarative format
- All the critical features are built in: service discovery, load balancing, scaling, storage, firewalling, logging, fault tolerance
- The hundreds of shell scripts and Ansible playbooks maintained before were more complex than Kubernetes itself
- Kubernetes becomes worthwhile as soon as you start rebuilding these features by hand, which happens very quickly
- The technology is flexible and can run modern applications as well as legacy monoliths with specific constraints

Mole https://github.com/tw93/Mole
- An all-in-one command-line (CLI) tool to clean and optimize macOS.
- Combines the features of popular apps such as CleanMyMac, AppCleaner, DaisyDisk and iStat Menus.
- Deep-scans and removes caches, log files and browser leftovers.
- A smart uninstaller that cleanly removes applications and their hidden files (Launch Agents, preferences).
- An interactive disk-space analyzer to visualize file usage and manage large documents.
- A real-time dashboard (mo status) to monitor CPU, GPU, memory and network.
- A developer-specific purge to delete build artifacts (node_modules, target, etc.).
- Optional integration with Raycast or Alfred to launch commands quickly.
- Simple installation via Homebrew or a curl script.

Hardened Docker images for every developer https://www.docker.com/blog/docker-hardened-images-for-every-developer/
- Docker is making its Hardened Images (DHI) free and open source (Apache 2.0 license) for all developers.
- These images are designed to be minimal, production-ready and secure by default, to fight the explosion of software supply chain attacks.
- They build on familiar bases such as Alpine and Debian, guaranteeing high compatibility and easy migration.
- Each image includes a complete, verifiable SBOM (Software Bill of Materials) as well as SLSA level 3 provenance for full transparency.
- Using these images drastically reduces the number of vulnerabilities (CVEs) and image size (up to 95% smaller).
- Docker is extending this hardened approach to Helm charts and MCP servers (Mongo, Grafana, GitHub, etc.).
- Commercial offerings (DHI Enterprise) remain available for specific needs: critical fixes within 7 days, FIPS/FedRAMP support, or extended lifecycle support (ELS).
- An experimental Docker AI assistant can analyze existing containers and recommend adopting the matching hardened versions.
- The initiative is backed by major partners such as Google, MongoDB, Snyk and the CNCF.

Web

Masonry is landing in the CSS specification and is starting to be implemented by browsers https://webkit.org/blog/17660/introducing-css-grid-lanes/
- It lays HTML elements out in columns one after another: first along the first row, and once that row is full, each next element goes into the column where it can sit highest, and so on.
- After middleware plumbing, masonry for the front end :laughing:

Data and Artificial Intelligence

We shouldn't map REST APIs 1:1 to MCP https://nordicapis.com/why-mcp-shouldnt-wrap-an-api-one-to-one/
- The problem: wrapping an API as-is in the MCP (Model Context Protocol) is an anti-pattern.
- MCP's purpose: designed for AI agents, it should act as an intent interface, not an API mirror. Agents understand tasks, not complex API logic (authentication, pagination, orchestration).
- Consequences of one-to-one mapping: agent confusion, errors, hallucinations. Difficulty handling complex orchestrations (several calls for a single action). Exposure of the API's weaknesses (heavy schemas, deprecated endpoints). Higher maintenance whenever the API changes.
- Better approach: build MCP tools like SDKs for agents, encapsulating the logic needed to accomplish a specific task.
- Recommended practices: design around user intents/actions (e.g. "create a project", "summarize a document"). Group calls into single workflows or actions. Use natural language for definitions and names. Limit the exposed API surface for security and clarity. Apply strict input/output schemas to guide the agent and reduce ambiguity.
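A hedged illustration of that "intent, not endpoint" advice, written with LangChain4j's @Tool annotation (the library discussed above); ProjectsApi and its methods are hypothetical stand-ins for a real REST client:

import dev.langchain4j.agent.tool.Tool;
import java.util.List;

public class ProjectTools {
    // Hypothetical REST client: the agent never sees these individual endpoints.
    interface ProjectsApi {
        String createProject(String name);          // POST /projects
        void invite(String projectId, String user); // POST /projects/{id}/members
    }

    private final ProjectsApi api;
    public ProjectTools(ProjectsApi api) { this.api = api; }

    // One intent-level tool wraps several REST calls instead of mirroring them 1:1.
    @Tool("Create a project and invite the given members")
    public String createProject(String name, List<String> members) {
        String id = api.createProject(name);
        members.forEach(m -> api.invite(id, m));
        return id;
    }
}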
Agents in production with AWS - https://blog.ippon.fr/2025/12/22/des-agents-en-production-avec-aws/
- AWS re:Invent 2025 put generative AI and AI agents massively front and center
- An AI agent combines an LLM, a calling loop and invocable tools
- The Strands Agents SDK eases prototyping with built-in ReAct loops and memory management
- Managed MLflow makes it possible to track experiments and define performance metrics
- Nova Forge optimizes models by retraining them on specific data to reduce cost and latency
- Bedrock Agent Core industrializes deployment with a serverless, auto-scaling runtime
- Agent Core offers nine pillars including observability, authentication, a code interpreter and a managed browser
- Anthropic's MCP protocol standardizes how tools are provided to agents
- SageMaker AI and Bedrock centralize access to closed source and open source models behind a single API
- AWS is betting on chatbots evolving into agentic systems optimized with more frugal models

Debezium 3.4 brings several interesting improvements https://debezium.io/blog/2025/12/16/debezium-3-4-final-released/
- Fixed the Oracle low-watermark computation problem that caused performance loss
- Fixed heartbeat event emission in the Oracle connector with CTE queries
- Improved logs for understanding active transactions in the Oracle connector
- Memory guards to protect against very large database schemas
- Support for transforming geometry coordinates, for better handling of spatial data
- A Quarkus DevServices extension that automatically starts a database and Debezium in dev mode
- OpenLineage integration to trace data lineage and follow data flows through pipelines
- Compatibility tested with Kafka Connect 4.1 and Kafka 4.1 brokers

Infinispan 16.0.4 and .5 https://infinispan.org/blog/2025/12/17/infinispan-16-0-4
- Spring Boot 4 and Spring 7 supported
- Metrics evolutions
- Two serialization bug fixes

Building a research agent in Java with the Interactions API https://glaforge.dev/posts/2026/01/03/building-a-research-assistant-with-the-interactions-api-in-java/
- A Java AI research assistant (Gemini Interactions API), trying out the SDK implemented by Guillaume.
- A four-phase workflow: Planning: Gemini Flash + Google Search. Research: a "Deep Research" model (background task). Synthesis: Gemini Pro (executive report). Infographic: Nano Banana Pro (from the synthesis).
- Interactions API: server-side state management, background tasks, multimodal responses (images).
- What stood out: the API's state management (vs. stateless LLMs).
- Validation: the Java SDK proves effective for complex use cases.
Stephan Janssen (the father of Devoxx) built an MCP (Model Context Protocol) server based on LSP (Language Server Protocol) so coding assistants can analyze code by genuinely understanding it rather than grepping through it https://github.com/stephanj/LSP4J-MCP
- The problem: AI assistants often navigate code with text search (grep-style), which lacks semantic context, produces noise (false positives) and burns huge numbers of tokens for nothing.
- The LSP4J-MCP solution: a standalone approach that wraps the Eclipse language server (JDTLS) behind the MCP protocol.
- Main advantage: deep semantic understanding of Java code (types, hierarchies, references) without having to open a heavyweight IDE like IntelliJ.
- Comparing the approaches: AST: too light (no cross-file understanding). IntelliJ MCP: powerful but requires the IDE to be open (resource-hungry). LSP4J-MCP: the best of both worlds for terminal, remote (SSH) or CI/CD workflows.
- Key features: exposes 5 tools to the AI (find_symbols, find_references, find_definition, document_symbols, find_interfaces_with_method).
- Results: a 100x reduction in tokens used for navigation and higher precision (distinguishing overloads, scopes, etc.).
- Availability: the project is open source and available on GitHub for immediate integration (e.g. with Claude Code, Gemini CLI, etc.). Also of note: Claude Code 2.0.74 added a tool supporting LSP (https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md#2074)

Awesome (GitHub) Copilot https://github.com/github/awesome-copilot
- A community collection of instructions, prompts and configurations to optimize how you use GitHub Copilot.
- Offers specialized "Agents" that integrate with MCP servers to improve specific workflows.
- Includes targeted prompts for code generation, documentation and solving complex problems.
- Provides detailed instructions on coding standards and best practices for various frameworks.
- Offers "Skills": folders of resources for specialized technical tasks. (Skills have been available in Copilot for a month: https://github.blog/changelog/2025-12-18-github-copilot-now-supports-agent-skills/)
- Easy installation via a dedicated MCP server, compatible with VS Code and Visual Studio.
- Encourages community contributions to grow the prompt and agent libraries.
- Helps boost productivity with pre-configured solutions for many languages and domains.
- MIT licensed and actively maintained by contributors around the world.

AI and productivity: 2025 year in review (Laura Tacho - DX) https://newsletter.getdx.com/p/ai-and-productivity-year-in-review?aid=recNfypKAanQrKszT
- In 2025, AI-assisted engineering became the norm: roughly 90% of developers use AI tools monthly, and more than 40% daily.
- Researchers (Microsoft, Google, GitHub) stress that lines of code (LOC) remain a poor impact metric, since AI generates lots of code without necessarily guaranteeing more business value.
- While AI improves individual efficiency, it could hurt collaboration in the long run, as developers spend more time "talking" to the AI than to their colleagues.
- The developer's identity is evolving: from "code producer" to a "director" role that delegates, validates and applies strategic judgment.
- AI could speed up junior developers' growth by pushing them to manage projects and delegate earlier, acting as an "accelerator" rather than making them obsolete.
- The emphasis is on creativity rather than mere automation, in order to reimagine how we work and achieve more impactful results.
- Success in 2026 will depend on companies' ability to target real bottlenecks (technical debt, documentation, compliance) rather than simply trying every new AI model.
- The newsletter warns that press headlines often oversimplify AI research, sometimes hiding the crucial nuances of the actual studies.

In a Twitter thread, a developer describes his advanced use of Claude Code for development, with subagents, slash commands, context optimization tips, and more https://x.com/AureaLibe/status/2008958120878330329?s=20

Tooling

IntelliJ IDEA, thread dumps and Project Loom (virtual threads) - https://blog.jetbrains.com/idea/2025/12/thread-dumps-and-project-loom-virtual-threads/
- Java virtual threads improve hardware utilization for parallel I/O operations with few code changes
- A server can now handle millions of threads instead of a few hundred
- Existing tools struggle to display and analyze millions of simultaneous threads
- Asynchronous debugging is complex because the scheduler and the worker run in different threads
- Thread dumps remain essential for diagnosing deadlocks, frozen UIs and thread leaks
- Netflix discovered a virtual-thread-related deadlock by analyzing a heap dump, a bug fixed in Java 25 — high-wire debugging, though
- IntelliJ IDEA supports virtual threads natively from day one, including display of acquired locks
- IntelliJ IDEA can open thread dumps generated by other tools such as jcmd
- The support also extends to Kotlin coroutines in addition to virtual threads
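For context, a minimal Loom sketch (Java 21+, illustrative only, not from the article): a million blocking tasks on virtual threads, exactly the kind of workload whose thread dumps the item above is about:

import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        // One virtual thread per task; close() waits for all submitted tasks to finish.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 1_000_000).forEach(i ->
                executor.submit(() -> {
                    Thread.sleep(10); // parks the virtual thread, not an OS thread
                    return i;
                }));
        }
    }
}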
A few notes on IntelliJ IDEA 2025.3 https://blog.jetbrains.com/idea/2025/12/intellij-idea-2025-3/
- A unified distribution bundling more free features
- Improved command completion in the IDE
- New features for the Spring debugger
- The Islands theme becomes the default theme
- Full support for Spring Boot 4 and Spring Framework 7
- Java 25 compatibility
- Support for Spring Data JDBC and Vitest 4
- Native support for Junie and Claude Agent for AI
- A transparent AI quota, with a Bring Your Own Key option on the way
- Stability, performance and user experience fixes

Lots of small online tools for developers https://blgardner.github.io/prism.tools/ password, CSS gradient and QR code generation, Base64 and JWT encoding/decoding, JSON formatting, etc.

resumectl - Your resume as code https://juhnny5.github.io/resumectl/
- A command-line (CLI) tool written in Go to generate a resume from a YAML file.
- Exports to several formats: PDF, HTML, or straight to the terminal.
- Offers 5 built-in themes (Modern, Classic, Minimal, Elegant, Tech) customizable with specific colors.
- An init feature (resumectl init) that automatically imports data from LinkedIn and GitHub (most-starred projects).
- Supports adding photos, with black-and-white filter or shape options (round/square).
- Includes a server mode (resumectl serve) to preview changes in real time in a local browser.
- Works as a single binary without complex external dependencies for the templates.

mactop - a "top"-style monitor for Apple Silicon https://github.com/metaspartan/mactop
- A command-line (TUI) monitoring tool designed specifically for Apple Silicon chips (M1, M2, M3, M4, M5).
- Tracks CPU (E-cores and P-cores), GPU and ANE (Neural Engine) usage in real time.
- Shows power consumption (wattage) of the system, CPU, GPU and DRAM.
- Provides SoC temperatures, GPU frequencies and overall thermal state.
- Monitors RAM and swap usage, plus network and disk activity (I/O).
- Offers 10 different layouts and several customizable color themes.
- Does not require sudo, as it relies on Apple's native APIs (SMC, IOReport, IOKit).
- Includes a detailed process list (similar to htop) with the ability to kill processes directly from the interface.
- Offers a headless mode to export metrics as JSON and an optional Prometheus server.
- Developed in Go with CGO and Objective-C components.

Goodbye direnv, hello mise https://codeka.io/2025/12/19/adieu-direnv-bonjour-mise/
- The author replaces his usual tools (direnv, asdf, task, just) with a single versatile tool written in Rust: mise.
- mise offers three main functions: package manager (languages and tools), environment variable manager and task runner.
- Unlike direnv, it can manage aliases and uses a structured configuration file (mise.toml) rather than shell scripting.
- The configuration is hierarchical, allowing settings to be overridden per directory, with a "trust" system for security.
- A highlighted killer feature is secrets management: mise integrates with age to encrypt secrets (via SSH keys) directly in the configuration file.
- The tool supports a huge list of languages and tools via an internal registry and plugins (compatible with the asdf ecosystem).
- It simplifies the development workflow by combining tool installation and task automation in a single file.
- The author concludes on the tool's power, flexibility and excellent performance after a few hours of testing.
Claude Code v2.1.0 https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md#210
- Hot reload of skills: changes to skills in ~/.claude/skills are now applied instantly without restarting the session.
- Subagents and forks: support for running skills and slash commands in a forked subagent context via context: fork.
- Language settings: a language setting to configure the default response language (e.g. language: "french").
- Terminal improvements: Shift+Enter now works natively in several terminals (iTerm2, WezTerm, Ghostty, Kitty) without manual configuration.
- Security and bug fixes: fixed a flaw where sensitive data (API keys, OAuth tokens) could appear in debug logs.
- New slash commands: /teleport and /remote-env for claude.ai subscribers to manage remote sessions.
- Plan mode: the /plan shortcut enables plan mode directly from the prompt, and the permission request on entering that mode has been removed.
- Vim and navigation: many Vim motions added (text objects, f/F/t/T motion repeats, indentation, etc.).
- Performance: optimized startup time and terminal rendering for Unicode/emoji characters.
- gitignore handling: support for the respectGitignore setting in settings.json to control the behavior of the @-mention file picker.

Methodologies

200 production deployments a day, even on Fridays: lessons learned https://mcorbin.fr/posts/2025-03-21-deploy-200/
- Deploying frequently, including on Fridays, is a sign of technical maturity and raises overall productivity.
- Technical excellence is an indispensable strategic asset for shipping quality products quickly.
- A pragmatic service-oriented architecture (SOA) enables independent deployments and reduces cognitive load.
- Service isolation is crucial: a developer must be able to test their service locally without depending on the entire infrastructure.
- Automation via Kubernetes and a GitOps approach with ArgoCD enables continuous, safe deployments.
- Feature flags and a solid permission system decouple the technical deployment from functional activation for users (a small flag sketch follows this item).
- Developer autonomy is reinforced by self-service tools (a homegrown CLI) for managing infrastructure and diagnosing incidents without bottlenecks.
- A culture of observability designed in from the start makes it possible to detect and react quickly to anomalies in production.
- Accepting failure as inevitable leads to more resilient systems that can recover automatically.
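A tiny sketch of the flag-based decoupling mentioned in that item — FeatureFlags is a hypothetical interface standing in for LaunchDarkly, Unleash or a homegrown service, not the article's code:

public class CheckoutService {
    interface FeatureFlags {
        boolean isEnabled(String flag, String userId);
    }

    private final FeatureFlags flags;
    public CheckoutService(FeatureFlags flags) { this.flags = flags; }

    // The new code path is deployed for everyone; the flag decides who actually runs it.
    String checkout(String userId) {
        return flags.isEnabled("new-checkout", userId)
                ? newCheckout(userId)
                : legacyCheckout(userId);
    }

    private String newCheckout(String u)    { return "new:" + u; }
    private String legacyCheckout(String u) { return "legacy:" + u; }
}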
"Vibe Coding" vs "Prompt Engineering": AI and the future of software development https://www.romenrg.com/blog/2025/12/25/vibe-coding-vs-prompt-engineering-ai-and-the-future-of-software-development/
- AI went from experiment to essential infrastructure for software development in 2025.
- AI does not replace engineers; it acts as an amplifier of their skills, their judgment and the quality of their thinking.
- A distinction between "vibe coding" (fast, intuitive, ideal for prototypes) and "prompt engineering" (deliberate, constrained, necessary for maintainable systems).
- The crucial importance of context ("context engineering"): AI becomes truly powerful when connected to real systems (GitHub, Jira, etc.) via protocols like MCP.
- Using specialized agents (RFC writing, code review, architecture) rather than generic models yields better results.
- The rise of the "Technical Product Manager" engineer, able to do the work of a small team alone thanks to AI, provided they master the technical fundamentals.
- The major risk: AI lets you move very fast in the wrong direction if human judgment and experience are missing.
- The overall bar is rising: solid technical foundations matter more than ever to avoid rapidly accumulating technical debt.

A solo code review (Kent Beck)! https://tidyfirst.substack.com/p/party-of-one-for-code-review?r=64ov3&utm_campaign=post&utm_medium=web&triedRedirect=true
- Traditional code review, inherited from IBM's formal inspections, is running out of steam: it has become too slow and asynchronous for the pace of modern development.
- With the arrival of AI ("the genie"), the speed of code production exceeds humans' capacity to review it, creating a major bottleneck.
- Code review needs to evolve toward two new primary goals: a "sanity check" verifying that the AI did what was asked, and controlling structural drift in the codebase.
- Keeping the structure healthy is crucial not only for future human developers, but also so the AI can keep understanding and modifying the code effectively without losing context.
- Kent Beck is experimenting with automated tools (such as CodeRabbit) to get summaries and architecture diagrams and keep a global awareness of fast-moving changes.
- Even if automated tools help, pair programming remains irreplaceable for the richness of the exchange and the beneficial social pressure it puts on thinking.
- Solo code review is not an end in itself, but a necessary adaptation when working alone with augmented code-generation tools.

Law, society and organization

Lego launches Lego Smart Play, with a Brick, Smart Tags and Smart Figurines for new interactive Lego builds https://www.lego.com/fr-fr/smart-play
- LEGO SMART Play: technology that reacts to children's play.
- Three key elements: the SMART Brick: a 2x4 LEGO "brain" brick with an accelerometer, reactive lights, a color detector and a sound synthesizer, reacting to movement (holding, turning, tapping). SMART Tags: small smart pieces that tell the SMART Brick its role (e.g. helicopter, car) and which sounds to make; they unlock sounds, mini-games and secret missions. SMART Minifigures: activated near a SMART Brick, they reveal unique personalities (sounds, moods, reactions) through the SMART Brick and encourage imagination.
- How it works: the SMART Brick detects SMART Tags and SMART Minifigures and reacts to movement with dynamic lights and sounds.
- Compatibility: assembles with classic LEGO bricks.
- Goal: create interactive, unique and unlimited play experiences.
Conferences

The conference list comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
January 14-17, 2026: SnowCamp 2026 - Grenoble (France)
January 22, 2026: DevCon #26: security / post-quantum / hacking - Paris (France)
January 28, 2026: Software Heritage Symposium - Paris (France)
January 29-31, 2026: Epitech Summit 2026 - Paris - Paris (France)
February 2-5, 2026: Epitech Summit 2026 - Moulins - Moulins (France)
February 3, 2026: Cloud Native Days France 2026 - Paris (France)
February 3-4, 2026: Epitech Summit 2026 - Lille - Lille (France)
February 3-4, 2026: Epitech Summit 2026 - Mulhouse - Mulhouse (France)
February 3-4, 2026: Epitech Summit 2026 - Nancy - Nancy (France)
February 3-4, 2026: Epitech Summit 2026 - Nantes - Nantes (France)
February 3-4, 2026: Epitech Summit 2026 - Marseille - Marseille (France)
February 3-4, 2026: Epitech Summit 2026 - Rennes - Rennes (France)
February 3-4, 2026: Epitech Summit 2026 - Montpellier - Montpellier (France)
February 3-4, 2026: Epitech Summit 2026 - Strasbourg - Strasbourg (France)
February 3-4, 2026: Epitech Summit 2026 - Toulouse - Toulouse (France)
February 4-5, 2026: Epitech Summit 2026 - Bordeaux - Bordeaux (France)
February 4-5, 2026: Epitech Summit 2026 - Lyon - Lyon (France)
February 4-6, 2026: Epitech Summit 2026 - Nice - Nice (France)
February 5, 2026: Web Days Convention - Aix-en-Provence (France)
February 12, 2026: Strasbourg Craft #1 - Strasbourg (France)
February 12-13, 2026: Touraine Tech #26 - Tours (France)
February 19, 2026: ObservabilityCON on the Road - Paris (France)
March 6, 2026: WordCamp Nice 2026 - Nice (France)
March 18-19, 2026: Agile Niort 2026 - Niort (France)
March 20, 2026: Atlantique Day 2026 - Nantes (France)
March 26, 2026: Data Days Lille - Lille (France)
March 26-27, 2026: SymfonyLive Paris 2026 - Paris (France)
March 26-27, 2026: REACT PARIS - Paris (France)
March 27-29, 2026: Shift - Nantes (France)
March 31, 2026: ParisTestConf - Paris (France)
April 1, 2026: AWS Summit Paris - Paris (France)
April 2, 2026: Pragma Cannes 2026 - Cannes (France)
April 9-10, 2026: AndroidMakers by droidcon - Paris (France)
April 16-17, 2026: MiXiT 2026 - Lyon (France)
April 22-24, 2026: Devoxx France 2026 - Paris (France)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
April 24-25, 2026: Faiseuses du Web 5 - Dinan (France)
May 6-7, 2026: Devoxx UK 2026 - London (UK)
May 22, 2026: AFUP Day 2026 Lille - Lille (France)
May 22, 2026: AFUP Day 2026 Paris - Paris (France)
May 22, 2026: AFUP Day 2026 Bordeaux - Bordeaux (France)
May 22, 2026: AFUP Day 2026 Lyon - Lyon (France)
May 29, 2026: NG Baguette Conf 2026 - Paris (France)
June 5, 2026: TechReady - Nantes (France)
June 5, 2026: Fork it!
- Rouen - Rouen (France)
June 6, 2026: Polycloud - Montpellier (France)
June 11-12, 2026: DevQuest Niort - Niort (France)
June 11-12, 2026: DevLille 2026 - Lille (France)
June 12, 2026: Tech F'Est 2026 - Nancy (France)
June 17-19, 2026: Devoxx Poland - Krakow (Poland)
June 17-20, 2026: VivaTech - Paris (France)
July 2, 2026: Azur Tech Summer 2026 - Valbonne (France)
July 2-3, 2026: Sunny Tech - Montpellier (France)
July 3, 2026: Agile Lyon 2026 - Lyon (France)
August 2, 2026: 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France)
September 4, 2026: JUG Summer Camp 2026 - La Rochelle (France)
September 17-18, 2026: API Platform Conference 2026 - Lille (France)
September 24, 2026: PlatformCon Live Day Paris 2026 - Paris (France)
October 1, 2026: WAX 2026 - Marseille (France)
October 1-2, 2026: Volcamp - Clermont-Ferrand (France)
October 5-9, 2026: Devoxx Belgium - Antwerp (Belgium)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us via X/twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Do a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/

Chip Stock Investor Podcast
The Best Memory Stocks For 2026: How To Play the Memory Shortage

Chip Stock Investor Podcast

Play Episode Listen Later Jan 15, 2026 15:27


Memory shortages are all the rage in 2026. How should you play the AI data center supply crunch? We discussed this back in 2025, and now it is here: memory shortages are hitting the AI data center supply chain across the board. But is this an AI bubble, or just a normal cyclical growth cycle? In this video, we break down the entire memory hierarchy—from ultra-fast on-chip SRAM to HBM and long-term storage—and give you the basket of companies to watch for each layer. We also discuss why Pure Storage is our top bet for secondary storage and how equipment suppliers like Lam Research could benefit as manufacturers race to expand capacity.
Join us on Discord with Semiconductor Insider; sign up on our website: www.chipstockinvestor.com/membership
Supercharge your analysis with AI! Get 15% off your membership with our special link here: https://fiscal.ai/csi/
Sign Up For Our Newsletter: https://mailchi.mp/b1228c12f284/sign-up-landing-page-short-form
Chapters:
00:00 – Memory Shortages: Bubble vs. Cyclical Growth
02:13 – The AI Memory Hierarchy Explained (SRAM, DRAM, NAND)
04:59 – SRAM Stocks: Nvidia, AMD, & Synopsys
06:50 – Embedded Memory: Weebit Nano & MRAM players
07:46 – DRAM & HBM Leaders: SK Hynix, Micron, Samsung
09:00 – The NAND & HDD Resurgence (Seagate & WD)
11:00 – Why Pure Storage is a Top Bet
14:00 – The Fab Five & Lam Research Opportunity
If you found this video useful, please make sure to like and subscribe!
Affiliate links are sprinkled in throughout this video. If something catches your eye and you decide to buy it, we might earn a little coffee money. Thanks for helping us (Kasey) fuel our caffeine addiction!
Content in this video is for general information or entertainment only and is not specific or individual investment advice. Forecasts and information presented may not develop as predicted and there is no guarantee any strategies presented will be successful. All investing involves risk, and you could lose some or all of your principal.
#semiconductors #chips #investing #stocks #finance #financeeducation #silicon #artificialintelligence #ai #chipstocks #investor #stockmarket #chipstockinvestor #fablesschipdesign #chipmanufacturing #semiconductormanufacturing #semiconductorstocks
Nick and Kasey own shares of Nvidia, Micron, Pure Storage, SK hynix, Kioxia, Lam Research

Crazy Wisdom
Episode #522: The Hardware Heretic: Why Everything You Think About FPGAs Is Backwards

Crazy Wisdom

Play Episode Listen Later Jan 12, 2026 53:08


In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Peter Schmidt Nielsen, who is building FPGA-accelerated servers at Saturn Data. The conversation explores why servers need FPGAs, how these field-programmable gate arrays work as "IO expanders" for massive memory bandwidth, and why they're particularly well-suited for vector database and search applications. Peter breaks down the technical realities of FPGAs - including why they "really suck" in many ways compared to GPUs and CPUs - while explaining how his company is leveraging them to provide terabyte-per-second bandwidth to 1.3 petabytes of flash storage. The discussion ranges from distributed systems challenges and the CAP theorem to the hardware-software relationship in modern computing, offering insights into both the philosophical aspects of search technology and the nuts-and-bolts engineering of memory controllers and routing fabrics.
For more information about Peter's work, you can reach him on Twitter at @PTRSCHMDTNLSN or find his website at saturndata.com.
Timestamps
00:00 Introduction to FPGAs and Their Role in Servers
02:47 Understanding FPGA Limitations and Use Cases
05:55 Exploring Different Types of Servers
08:47 The Importance of Memory and Bandwidth
11:52 Philosophical Insights on Search and Access Patterns
14:50 The Relationship Between Hardware and Search Queries
17:45 Challenges of Distributed Systems
20:47 The CAP Theorem and Its Implications
23:52 The Evolution of Technology and Knowledge Management
26:59 FPGAs as IO Expanders
29:35 The Trade-offs of FPGAs vs. ASICs and GPUs
32:55 The Future of AI Applications with FPGAs
35:51 Exciting Developments in Hardware and Business
Key Insights
1. FPGAs are fundamentally "crappy ASICs" with serious limitations - Despite being programmable hardware, FPGAs perform far worse than general-purpose alternatives in most cases. A $100,000 high-end FPGA might only match the memory bandwidth of a $600 gaming GPU. They're only valuable for specific niches like ultra-low latency applications or scenarios requiring massive parallel I/O operations, making them unsuitable for most computational workloads where CPUs and GPUs excel.
2. The real value of FPGAs lies in I/O expansion, not computation - Rather than using FPGAs for their processing power, Saturn Data leverages them primarily as cost-effective ways to access massive amounts of DRAM controllers and NVMe interfaces. Their server design puts 200 FPGAs in a 2U enclosure with 1.3 petabytes of flash storage and terabyte-per-second read bandwidth, essentially using FPGAs as sophisticated I/O expanders.
3. Access patterns determine hardware performance more than raw specs - The way applications access data fundamentally determines whether specialized hardware will provide benefits. Applications that do sparse reads across massive datasets (like vector databases) benefit from Saturn Data's architecture, while those requiring dense computation or frequent inter-node communication are better served by traditional hardware. Understanding these patterns is crucial for matching workloads to appropriate hardware.
4. Distributed systems complexity stems from failure tolerance requirements - The difficulty of distributed systems isn't inherent but depends on what failures you need to tolerate. Simple approaches that restart on any failure are easy but unreliable, while Byzantine fault tolerance (like Bitcoin) is extremely complex. Most practical systems, including banks, find middle ground by accepting occasional unavailability rather than trying to achieve perfect consistency, availability, and partition tolerance simultaneously.
5. Hardware specialization follows predictable cycles of generalization and re-specialization - Computing hardware consistently follows "Makimoto's Wave" - specialized hardware becomes more general over time, then gets leapfrogged by new specialized solutions. CPUs became general-purpose, GPUs evolved from fixed graphics pipelines to programmable compute, and now companies like Etched are creating transformer-specific ASICs. This cycle repeats as each generation adds programmability until someone strips it away for performance gains.
6. Memory bottlenecks are reshaping the hardware landscape - The AI boom has created severe memory shortages, doubling costs for DRAM components overnight. This affects not just GPU availability but creates opportunities for alternative architectures. When everyone faces higher memory costs, the relative premium for specialized solutions like FPGA-based systems becomes more attractive, potentially shifting the competitive landscape for memory-intensive applications.
7. Search applications represent ideal FPGA use cases due to their sparse access patterns - Vector databases and search workloads are particularly well-suited to FPGA acceleration because they involve searching through massive datasets with sparse access patterns rather than dense computation. These applications can effectively utilize the high bandwidth to flash storage and parallel I/O capabilities that FPGAs provide, making them natural early adopters for this type of specialized hardware architecture.
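A toy illustration of insight 3 (access patterns can matter more than raw specs), assuming the gap comes from cache and prefetch behavior — the same force behind the sparse-read discussion above; actual numbers will vary by machine:

import java.util.Random;

public class AccessPatterns {
    public static void main(String[] args) {
        int n = 1 << 24; // ~16M ints (~64 MB)
        int[] data = new int[n];
        int[] order = new int[n];
        Random rnd = new Random(42);
        for (int i = 0; i < n; i++) order[i] = rnd.nextInt(n);

        long t0 = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < n; i++) sum += data[i];        // sequential scan: prefetch-friendly
        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) sum += data[order[i]]; // random hops: cache-miss bound
        long t2 = System.nanoTime();

        System.out.printf("sequential: %d ms, random: %d ms (sum=%d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sum);
    }
}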

Software Defined Talk
Episode 554: The Alpha and The Omega

Software Defined Talk

Play Episode Listen Later Jan 9, 2026 72:05


This week, we discuss AI's impact on Stack Overflow, Docker's Hardened Images, and Nvidia buying Groq. Plus, thoughts on playing your own game and having fun.

Watch the YouTube Live Recording of Episode 554 (https://www.youtube.com/live/LQSxLbjvz3c?si=ao8f3hwxlCrmH1vX)
Please complete the Software Defined Talk Listener Survey! (https://docs.google.com/forms/d/e/1FAIpQLSfl7eHWQJwu2tBLa-FjZqHG2nr6p_Z3zQI3Pp1EyNWQ8Fu-SA/viewform?usp=header)

Runner-up Titles
It's all brisket after that.
Exploring Fun
Should I go build a snow man?
Pets
Innersourcing
Two books Michael Lewis should write.
Article IV is foundational.
Freedom is options.

Rundown
Stack Overflow is dead. (https://x.com/rohanpaul_ai/status/2008007012920209674?s=20)
Hardened Images for Everyone (https://www.docker.com/blog/docker-hardened-images-for-every-developer/)
Tanzu's Bitnami stuff does this too (https://blogs.vmware.com/tanzu/what-good-software-supply-chain-security-looks-like-for-highly-regulated-industries/).

OpenAI
OpenAI's New Fundraising Round Could Value Startup at as Much as $830 Billion (https://www.wsj.com/tech/ai/openais-new-fundraising-round-could-value-startup-at-a[…]4238&segment_id=212500&user_id=c5a514ba8b7d9a954711959a6031a3fa)
OpenAI Reportedly Planning to Make ChatGPT "Prioritize" Advertisers in Conversation (https://futurism.com/artificial-intelligence/openai-chatgpt-sponsored-ads)
OpenAI bets big on audio as Silicon Valley declares war on screens (https://techcrunch.com/2026/01/01/openai-bets-big-on-audio-as-silicon-valley-declares-war-on-screens/)
Sam Altman says: He has zero percent interest in remaining OpenAI CEO, once (https://timesofindia.indiatimes.com/technology/tech-news/sam-altman-says-he-has-zero-percent-interest-remaining-openai-ceo-once-/articleshow/126350602.cms)
Nvidia buying AI chip startup Groq's assets for about $20 billion in its largest deal on record (https://www.cnbc.com/2025/12/24/nvidia-buying-ai-chip-startup-groq-for-about-20-billion-biggest-deal.html)

Relevant to your Interests
Broadcom IT uses Tanzu Platform to host MCP Servers (https://news.broadcom.com/app-dev/broadcom-tanzu-platform-agentic-business-transformation).
A Brief History Of The Spreadsheet (https://hackaday.com/2025/12/15/a-brief-history-of-the-spreadsheet/)
Databricks is raising over $4 billion in Series L funding at a $134 billion valuation (https://x.com/exec_sum/status/2000971604449485132?s=20)
Amazon's big AGI reorg decoded by Corey Quinn (https://www.theregister.com/2025/12/17/jassy_taps_peter_desantis_to_run_agi/)
“They burned millions but got nothing.” (https://automaton-media.com/en/news/japanese-game-font-services-aggressive-price-hike-could-be-result-of-parent-companys-alleged-ai-failu/)
X sues to protect Twitter brand Musk has been trying to kill (https://www.theregister.com/2025/12/17/x_twitter_brand_lawsuit/)
Mozilla's new CEO says AI is coming to Firefox, but will remain a choice | TechCrunch (https://techcrunch.com/2025/12/17/mozillas-new-ceo-says-ai-is-coming-to-firefox-but-will-remain-a-choice/)
Why Oracle keeps sparking AI-bubble fears (https://www.axios.com/2025/12/18/ai-oracle-stock-blue-owl)
What's next for Threads (https://sources.news/p/whats-next-for-threads)
Salesforce Executives Say Trust in Large Language Models Has Declined (https://www.theinformation.com/articles/salesforce-executives-say-trust-generative-ai-declined?rc=giqjaz)
Akamai Technologies Announces Acquisition of Function-as-a-Service Company Fermyon (https://www.akamai.com/newsroom/press-release/akamai-announces-acquisition-of-function-as-a-service-company-fermyon)
Google Rolling Out Gmail Address Change Feature: Here Is How It Works (https://finance.yahoo.com/news/google-rolling-gmail-address-change-033112607.html)
The Enshittifinancial Crisis (https://www.wheresyoured.at/the-enshittifinancial-crisis/)
MongoBleed: Critical MongoDB Vulnerability CVE-2025-14847 | Wiz Blog (https://www.wiz.io/blog/mongobleed-cve-2025-14847-exploited-in-the-wild-mongodb)
Softbank to buy data center firm DigitalBridge for $4 billion in AI push (https://www.cnbc.com/amp/2025/12/29/digitalbridge-shares-jump-on-report-softbank-in-talks-to-acquire-firm.html)
The best tech announced at CES 2026 so far (https://www.theverge.com/tech/854159/ces-2026-best-tech-gadgets-smartphones-appliances-robots-tvs-ai-smart-home)
Who's who at X, the deepfake porn site formerly known as Twitter (https://www.ft.com/content/ad94db4c-95a0-4c65-bd8d-3b43e1251091?accessToken=zwAGR7kzep9gkdOtlNtMlaBMZdO9jTtD4SUQkQ.MEYCIQCdZajuC9uga-d9b5Z1t0HI2BIcnkVoq98loextLRpCTgIhAPL3rW72aTHBNL_lS7s1ONpM2vBgNlBNHDBeGbHkPkZj&sharetype=gift&token=a7473827-0799-4064-9008-bf22b3c99711)
Manus Joins Meta for Next Era of Innovation (https://manus.im/blog/manus-joins-meta-for-next-era-of-innovation)
The WELL: State of the World 2026 with Bruce Sterling and Jon Lebkowsky (https://people.well.com/conf/inkwell.vue/topics/561/State-of-the-World-2026-with-Bru-page01.html)
Virtual machines still run the world (https://cote.io/2026/01/07/virtual-machines-still-run-the.html)
Databases in 2025: A Year in Review (https://www.cs.cmu.edu/~pavlo/blog/2026/01/2025-databases-retrospective.html)
Chat Platform Discord Files Confidentially for IPO (https://www.bloomberg.com/news/articles/2026-01-06/chat-platform-discord-is-said-to-file-confidentially-for-ipo?embedded-checkout=true)
The DRAM shortage explained: AI, rising prices, and what's next (https://www.techradar.com/pro/why-is-ram-so-expensive-right-now-its-more-complicated-than-you-think)

Nonsense
Palantir CEO buys monastery in Old Snowmass for $120 million (https://www.denverpost.com/2025/12/17/palantir-alex-karp-snowmass-monastery/amp/)
H-E-B gives free groceries to all customers after registers glitch today in Burleson, Texas. (https://www.reddit.com/r/interestingasfuck/s/ZEcblg7atP)

Conferences
cfgmgmtcamp 2026 (https://cfgmgmtcamp.org/ghent2026/), February 2nd to 4th, Ghent, BE. Coté speaking - anyone interested in being a SDI guest?
DevOpsDayLA at SCALE23x (https://www.socallinuxexpo.org/scale/23x), March 6th, Pasadena, CA. Use code: DEVOP for 50% off.
Devnexus 2026 (https://devnexus.com), March 4th to 6th, Atlanta, GA. Coté has a discount code, but he's not sure if he can give it out. He's asking! Send him a DM in the meantime.
KubeCon EU, March 23rd to 26th, 2026 - Coté will be there on a media pass.
Whole bunch of VMUGs, mostly in the US. The CFPs are open (https://app.sessionboard.com/submit/vmug-call-for-content-2026/ae1c7013-8b85-427c-9c21-7d35f8701bbe?utm_campaign=5766542-VMUG%20Voice&utm_medium=email&_hsenc=p2ANqtz-_YREN7dr6p3KSQPYkFSN5K85A-pIVYZ03ZhKZOV0O3t3h0XHdDHethhx5O8gBFguyT5mZ3n3q-ZnPKvjllFXYfWV3thg&_hsmi=393690000&utm_content=393685389&utm_source=hs_email), go speak at them! Coté speaking in Amsterdam. Amsterdam (March 17-19, 2026), Minneapolis (April 7-9, 2026), Toronto (May 12-14, 2026), Dallas (June 9-11, 2026), Orlando (October 20-22, 2026)

SDT News & Community
Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email)
Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com)
Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com)
Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com)
Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk)
Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt)
Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com)

Recommendations
Brandon: Why Data Doesn't Always Win, with a Philosopher of Art (https://podcasts.apple.com/us/podcast/the-points-you-shouldnt-score-a-new-years-resolution/id1685093486?i=1000743950053) (Apple Podcasts)
Why Data Doesn't Always Win, with a Philosopher of Art (https://www.youtube.com/watch?v=7AdbePyGS2M&list=RD7AdbePyGS2M&start_radio=1) (YouTube)
Coté: "Databases in 2025: A Year in Review." (https://www.cs.cmu.edu/~pavlo/blog/2026/01/2025-databases-retrospective.html)

Photo Credits
Header (https://unsplash.com/photos/red-and-black-love-neon-light-signage-igJrA98cf4A)

Choses à Savoir ÉCONOMIE
Why will electronic devices cost more in 2026?

Choses à Savoir ÉCONOMIE

Play Episode Listen Later Jan 5, 2026 2:45


In 2026, electronic devices (smartphones, computers, tablets, consoles, and connected objects) will cost more. One of the major reasons, still barely visible to the general public, is the rapid rise in the price of memory, RAM. And this increase is directly linked to the explosion of artificial intelligence.

RAM is an essential component of any electronic device. It temporarily stores the data used by the processor and determines a system's speed and responsiveness. Without RAM, there is no multitasking, no modern applications, no on-device AI. And over the past two years, global demand for memory has changed in nature.

Traditionally, RAM went mostly to PCs, smartphones, and conventional servers. Now, the big AI companies (OpenAI, Google, Microsoft, Meta, Amazon) consume colossal amounts of memory to train and run their models. AI servers use specific memory types, such as HBM (High Bandwidth Memory), essential for feeding GPU-class compute chips. A single AI server can carry several hundred gigabytes of RAM, the equivalent of dozens or even hundreds of smartphones.

According to several analyst firms, AI-related memory demand is growing by more than 40% per year. Supply is not keeping up. The memory makers (Samsung, SK Hynix, and Micron) deliberately limited their investments after the overproduction crisis of 2022-2023. As a result, global DRAM production is expected to grow by only about 15 to 16% in 2026, well below demand.

This imbalance is already affecting prices. In 2025, DRAM prices rose by more than 50%. For 2026, several forecasts point to a further increase of 30 to 50%, depending on the segment. HBM memory, heavily used for AI, is under even more pressure, because it consumes more silicon and requires complex production lines, at the expense of "classic" RAM.

RAM accounts for 10 to 20% of the manufacturing cost of a mid-range or high-end PC or smartphone. When that component rises sharply, manufacturers have only two options: cut performance or raise prices. Increasingly, they are choosing the latter. Price increases are already expected for PCs and smartphones from 2026, with an average rise estimated at 6 to 8%.

In short, the meteoric rise of artificial intelligence is monopolizing the world's memory. And this invisible battle for RAM will translate very concretely, in 2026, into more expensive electronic devices for consumers. Hosted on Acast. See acast.com/privacy for more information.
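The pass-through arithmetic in that summary can be checked directly: if RAM is 10 to 20% of a device's build cost and DRAM prices rise another 30 to 50%, the device-level cost impact brackets the 6 to 8% retail increase the episode forecasts. A minimal Python sketch using only the figures quoted above (the bounds are illustrative, not from the episode):

    # Sanity check of the episode's pass-through claim: a 30-50% DRAM price
    # rise on a component that is 10-20% of build cost implies a device-level
    # cost increase of roughly 3-10%, bracketing the 6-8% retail forecast.

    ram_share = (0.10, 0.20)      # RAM as a fraction of build cost (from the episode)
    dram_increase = (0.30, 0.50)  # forecast DRAM price rise for 2026 (from the episode)

    low = ram_share[0] * dram_increase[0]   # 0.03 -> roughly +3% device cost
    high = ram_share[1] * dram_increase[1]  # 0.10 -> roughly +10% device cost

    print(f"device cost impact: +{low:.0%} to +{high:.0%}")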

The Scotchy Bourbon Boys
We Battle Blind For Our 2025: Bourbon Of The Year

The Scotchy Bourbon Boys

Play Episode Listen Later Jan 2, 2026 74:27 Transcription Available


Send us a text

Five heavy hitters enter a blind tasting; a tie forces a live tiebreak and the crown goes to Old Man Winter with a prestige French oak finish. We trade hype for flavor, argue proof vs. balance, and learn how availability shapes a worthy Scotchy Bourbon Boys "Bourbon of the Year."

• sponsors thanked and distillery updates shared
• lineup set for the blind: five bourbons, one wild card on the side
• simple scoring rules agreed, color coding to prevent bias
• first-pass notes on cola, peanut, vanilla, and fruit-forward profiles
• proof chat and how it affects palate, not just heat
• debate on noses vs palates, dryness vs sweetness
• tally confusion resolved, two-way tie identified
• audience tiebreak selects Old Man Winter as winner
• value and availability weighed against rarity and price
• honorable mentions and how oxidation changes bottles over time

Make sure that you leave us good feedback on Apple and iHeart and Spotify: a five-star rating and everything. Leave a review and then become members.

What happens when you strip away labels, lock in a scoring system, and let the glass do the talking? We gathered the full crew, poured five of the year's most talked-about bourbons completely blind, and chased the truth through cola notes, peanut vibes, dessert-like vanilla, and bright, fruit-forward finishes. The lineup was stacked: Knob Creek 21, Russell's Reserve 13 (2025), A Midwinter Night's Dram, Cathedral French Oak, and Old Man Winter from Preservation. Expectations were sky high for the heavy hitters—but the scoreboard had other plans.

We walk you through the tasting rules, the early favorites, and the turning point when pour number three changed the room's mood. Proof chasers met balance seekers as a silky 90s-proof contender outperformed its label, while a 110-proof nose bomb turned out more polarizing on the palate than predicted. Cathedral French Oak cast a spell on the nose. Knob 21 delivered oak-driven structure. Russell's 13 flashed that rich sweetness many love. Midwinter offered juicy fruit and charm. But the question we kept asking was simple: which glass makes you want another pour?

By the end, scores tied between two colors and we pulled in a live audience to break it. The winner? Old Man Winter—an underdog that paired layered fruit, spice control, and a welcoming finish with the practical upside of being findable at retail. We dig into why availability matters for a "Bourbon of the Year," what blind tasting reveals about our biases, and how time in the bottle can flip your rankings weeks later. Stick around for honorable mentions, lessons learned from oxidation, and a reminder that great whiskey doesn't always wear the most expensive label.

If you enjoyed this blind battle, follow the show, share it with a bourbon friend, and drop your own top pick of the year in a review. Your palate belongs in this conversation.

Voice over: Whiskey Thief ad for SOFL

Support the show: https://www.scotchybourbonboys.com

The Scotchy Bourbon Boys are #3 in Feedspot's Top 60 whiskey podcasts in the world: https://podcast.feedspot.com/whiskey_podcasts/

GreyBeards on Storage
173: GreyBeards Year End 2025 podcast

GreyBeards on Storage

Play Episode Listen Later Dec 29, 2025 41:13


Once again in 2025, AI was all the news. We are seeing some cracks in the NVIDIA moat, but it's early yet. Broadcom's VMware moves and sky-high DRAM pricing round out the year for us. Listen to our year-end podcast to learn more.

WSJ Tech News Briefing
TNB Tech Minute: India's Coforge to Acquire AI Software Firm Encora

WSJ Tech News Briefing

Play Episode Listen Later Dec 26, 2025 2:24


Plus: Nvidia and Coupang shares jump. And analysts say Samsung Electronics and SK Hynix could benefit from DRAM demand. Julie Chang hosts. Learn more about your ad choices. Visit megaphone.fm/adchoices

伊藤洋一のRound Up World Now!
Round Up World Now! (broadcast 2025.12.26)

伊藤洋一のRound Up World Now!

Play Episode Listen Later Dec 26, 2025


<Headlines>
The government approves at a Cabinet meeting a FY2026 draft budget putting general-account spending at 122.3092 trillion yen, topping FY2025 by more than 7 trillion yen and marking a record high for the second straight year; new government bond issuance, at 29.584 trillion yen, also exceeds FY2025. Prime Minister Takaichi, in an exclusive Nikkei interview: "The 'responsibility' in what I call responsible, proactive fiscal policy is a responsibility to the citizens living today and to those who will live in the future."
A Nikkei/TV Tokyo opinion poll conducted December 19-21 puts the Takaichi cabinet's approval rating at 75%, unchanged from November's survey, keeping it in the 70s for the third straight month since the cabinet launched in October.
Ukrainian President Zelensky, on the eastern Donbas region: "Besides a plan in which international forces monitor based on the front line at the time of an agreement, there is a plan to demilitarize the Donbas region. In that case, Russian forces would also need to withdraw." Details released of the 20-point peace plan with Russia now in final coordination with the US.
US President Trump approves a plan to build two large 'Trump-class' navy battleships, displacing 30,000 to over 40,000 tons, under a 'golden fleet' concept to expand the fleet to 20-25 ships in total; they will be built domestically using domestic steel, in a bid to revive the US shipbuilding industry.
China's rare-earth magnet exports to Japan in November rose 34.7% month on month to 304 tons, the highest monthly total in 2025; China has refrained from overt rare-earth countermeasures in Japan-China relations.
The Cabinet Office will set up a facility on Minamitorishima in the Ogasawara Islands by 2027 to process rare-earth-bearing mud, with demonstration tests recovering it from the seabed at a depth of about 6,000 meters to begin in 2027; viewed as important for economic security, development is being accelerated.
In semiconductor memory, DRAM saw the unprecedented situation of October's bulk contract-price negotiations failing to set a price, as major manufacturers cut production of the benchmark DDR4 type and supply fell sharply; an electronics trading house source: "It is the first time DRAM price negotiations have failed to settle."
The government plans to raise the lending limit of the fixed-rate public mortgage 'Flat 35' from 80 million yen to 120 million yen in response to soaring housing prices; with further BOJ rate hikes likely to increase the burden of variable-rate loans, demand for fixed rates is expected to grow, so the program is being made easier to use.
<Points>
(1) Watch the markets closely over the year-end and New Year period
(2) The outlook for 2026
(3) The Takaichi cabinet faces a critical juncture
<Segment: What we've been watching>
An early New Year's greeting card to all of you

Sacred Symbols: A PlayStation Podcast
#390 | But Without Me, You're Only You

Sacred Symbols: A PlayStation Podcast

Play Episode Listen Later Dec 22, 2025 209:09


Let's be honest: This whole Light of Motiram thing was shady from the get-go. A game that blatantly rips off Guerrilla's Horizon franchise, created by the publisher-and-developer combo that was later revealed to have unsuccessfully pitched Sony a Horizon spin-off beforehand? They call that 'dead to rights.' And now it's official, because Sony's lawsuit against Tencent over this facsimile has been dropped with prejudice, and the game has been delisted from Steam. That's a win for PlayStation, and an even bigger win for creativity in the AAA space. We discuss. Plus: Sucker Punch co-founder, long-time producer, and studio lead Brian Fleming is officially retiring from the team after 28 years, Bungie's Marathon is slated to come to PS5 in March, Hollow Knight: Silksong is getting a free nautical expansion in '26, and more. Then: Listener inquiries! Should we expect devs to lean more deeply into attractive characters? Do we think "made using AI" warnings will appear on games at some point in the near future? Who's to blame for Highguard's tepid response at The Game Awards? Is it strange if a wheelchair-bound listener wants to proudly wear a Stand Down shirt?

Please keep in mind that our timestamps are approximate, and will often be slightly off due to dynamic ad placement.

0:00:00 - Intro
0:37:46 - A nice note from William
0:43:47 - Andrew and Steven
0:52:25 - Ratchet & Clank: Ranger Rumble is out in some European countries
0:53:55 - Light of Motiram case dropped
1:02:24 - Sucker Punch's co-founder is leaving
1:15:19 - Marathon gets a release month
1:28:21 - Silksong sells 7 million with an expansion on the way
1:31:05 - Ubisoft acquires an Amazon team
1:32:32 - Mega Man Star Force Legacy collection announced
1:35:14 - US sales data
1:47:35 - What We're Playing (Terminator 2D: NO FATE, Dying Light: The Beast, Marvel Cosmic Invasion, The Outer Worlds 2, Destiny 2: Renegades (The Star Wars Expansion), Skate Story, Tomba 2, Metroid Prime 4)
2:11:17 - Featuring "attractive female characters"
2:27:14 - Game studios and AI
2:45:51 - Will the DRAM shortage affect consoles?
2:53:46 - One handed gaming
2:57:39 - Highguard
3:08:25 - Games that will never get revived

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Android Faithful
An Android Tablet... On Wheels?

Android Faithful

Play Episode Listen Later Dec 17, 2025 74:42


Another glorious week on the Android Faithful podcast with Jason Howell reviewing a 32" tablet, Mishaal flexing his Google sleuthing muscles all over Android 17, and Huyen bringing the TURBO to 2026!

Don't forget the 2025 Annual Podcast Kudos (APKs!!!). You can vote for your favorite phones and news stories at https://bit.ly/2025apks - Voting closes on 12/22 at 11:59pm ET!

Note: Time codes subject to change depending on dynamic ad insertion by the distributor.

0:05:18 - NEWS
Google and Apple partner on better Android-iPhone switching
Galaxy Z TriFold Becomes Instant Hit, Sold Out Within Minutes
Android 17 may finally add the native App Lock feature Pixel users have been waiting for
Android 17 could mimic this helpful iOS feature to reduce motion sickness
Patron pick: Google's Stadia controller just got a big upgrade courtesy of Steam

0:32:25 - HARDWARE
Jason reviews the KTC 32" Android Tablet
OnePlus says its upcoming 'Turbo' gaming phones are 'frighteningly strong'
Return of 4GB RAM in smartphones by 2026 amidst DRAM crisis
2026 Smartphone Shipment Forecasts Revised Down as Memory Shortage Drives BoM Costs Up

0:52:09 - APPS
Instagram hands you the keys to control 'Your Algorithm' in Reels, plans to expand
Bringing state-of-the-art Gemini translation capabilities to Google Translate

0:59:35 - FEEDBACK
Jeff is having Pixel 10 Qi2 issues
Hilton thinks monochrome icons make Android less usable
Morgan shares a (long) report on Google's sub-par customer service

Hosted on Acast. See acast.com/privacy for more information.

TD Ameritrade Network
A.I. Storage Price Hikes & Demand Benefit MU, Data Centers Biggest Question Mark

TD Ameritrade Network

Play Episode Listen Later Dec 17, 2025 7:03


Matthew Bryson with Wedbush says Micron's (MU) earnings will be all about the guidance. He believes price increases in DRAM and demand for storage chips will open a wide runway for Micron's growth. The biggest uncertainty for Matthew lies in the data center outlook, as a demand pullback could hit Micron. Tom White offers an example options trade for the A.I. chipmaker ahead of the company's earnings.

======== Schwab Network ========
Empowering every investor and trader, every market day.
Options involve risks and are not suitable for all investors. Before trading, read the Options Disclosure Document: http://bit.ly/2v9tH6D
Subscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribe
Download the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185
Download the Amazon Fire TV App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7
Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watch
Watch on Vizio - https://www.vizio.com/en/watchfreeplus-explore
Watch on DistroTV - https://www.distro.tv/live/schwab-network/
Follow us on X – https://twitter.com/schwabnetwork
Follow us on Facebook – https://www.facebook.com/schwabnetwork
Follow us on LinkedIn - https://www.linkedin.com/company/schwab-network/
About Schwab Network - https://schwabnetwork.com/about

PC Perspective Podcast
Podcast #848 - Weekly DDR5 Discussion, AMD Redstone, Steam on Windows 7, Noctua 3D Filament, Cyberpunk Police +more!

PC Perspective Podcast

Play Episode Listen Later Dec 13, 2025 65:14


There are FOUR lights! But besides that, we have AMD news on Redstone, their B650 chipset, and so much DDR pricing and related news that you'll platz. Oh, Kohler has got a toilet cam security issue (seriously) and your One Time Password use is so very passé. Microsoft also plans to finally make Windows 11 better for gaming in 2026 - Steam and Bazzite reportedly not worried.

Thanks to Notion and their AI-infused Notion Agent for the sponsorship this week; help us out and check out their very cool offer at: notion.com/pcper

Timestamps:
00:00 Preroll
00:04 Intro
00:25 Patreon
01:34 Food with Josh
03:15 Weekly DDR5 price check
04:44 Micron is making more than ever before on DRAM
06:51 A DDR4 / DDR5 combo board!
08:57 The DRAM problem - now vs. 2017
12:11 AMD Redstone is out
16:45 AMD extending life of B650
20:05 Heavy SSD writes due to AMD chipset driver?
22:55 Steam client back-ported to Windows 7, 8
24:50 NVIDIA allowed to export H200 to China now
26:25 Noctua beige and brown 3D printer filament
28:24 Windows update removes Start Menu and Explorer
31:36 Podcast sponsor - Notion
32:50 (In)Security Corner
45:18 Gaming Quick Hits
52:37 Picks of the Week
1:03:47 Outro

★ Support this podcast on Patreon ★

The Hardware Unboxed Podcast
How The DRAM Crisis Will Affect Gaming GPUs (feat. Ed from Sapphire)

The Hardware Unboxed Podcast

Play Episode Listen Later Dec 12, 2025 68:56


Episode 92: Edward Crisler from Radeon-exclusive AIB Sapphire joins the podcast to chat about the current GPU market. How will rising DRAM prices affect gaming GPUs? Can the GPU makers and AIBs absorb some of the increased cost? Also we talk about RDNA 4 and how successful it's been compared to previous generations, AMD's true market share, and of course, the Sapphire Puke box art.

CHAPTERS
00:00 - Intro
01:03 - RDNA 4 Launch at Sapphire
05:11 - RDNA 4 vs Older Generations Success
11:32 - The DRAM Crisis
20:25 - AIBs Want More Control
24:48 - Thoughts on 12VHPWR
26:32 - How Are SKU Decisions Made?
32:35 - Sapphire Puke
35:27 - DRAM Pricing: What Can AMD and AIBs Do?
44:50 - AI-Focused GPU Makers Owe Everything to Gamers
50:56 - AMD's True Market Share
59:05 - The Key to RDNA 4's Success
1:03:13 - Outro with Ed's Favorite Sapphire Generation

SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw

SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed

LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social

Hosted on Acast. See acast.com/privacy for more information.

The Hardware Unboxed Podcast
Bad Signs for PC Gaming: DRAM for AI Kills Crucial

The Hardware Unboxed Podcast

Play Episode Listen Later Dec 5, 2025 76:58


Episode 91: Troubling signs in the PC gaming space with DRAM prices continuing to skyrocket and the surprising death of Micron's Crucial brand. Also we chat about the Ryzen 7 9850X3D and Nvidia ending driver support for Pascal.

CHAPTERS
00:00 - Intro
00:31 - DRAM Demand for AI Kills Micron's Crucial Brand
29:39 - AMD Confirms Ryzen 7 9850X3D
41:08 - Nvidia Ends Pascal and Maxwell Driver Support
45:20 - Updates From Our Boring Lives

SUBSCRIBE TO THE PODCAST
Audio: https://shows.acast.com/the-hardware-unboxed-podcast
Video: https://www.youtube.com/channel/UCqT8Vb3jweH6_tj2SarErfw

SUPPORT US DIRECTLY
Patreon: https://www.patreon.com/hardwareunboxed

LINKS
YouTube: https://www.youtube.com/@Hardwareunboxed/
Twitter: https://twitter.com/HardwareUnboxed
Bluesky: https://bsky.app/profile/hardwareunboxed.bsky.social

Tombstone image on thumbnail: www.freepik.com

Hosted on Acast. See acast.com/privacy for more information.

Get Up in the Cool
Episode 484: Joe Seamons (Songs of the Pacific Northwest)

Get Up in the Cool

Play Episode Listen Later Dec 3, 2025 59:57


Welcome to Get Up in the Cool: Old Time Music with Cameron DeWhitt and Friends. This week's friend is Joe Seamons! We recorded this last weekend at my home in Portland, OR.

Tunes in this episode:
* Cedar Mill Boys (lyrics from Hobe Kytr) (1:13)
* Waterbound (extra lyrics from Joe Seamons) (14:34)
* Give the Fiddler a Dram (extra lyrics from John Cunnick) (30:12)
* Same Old Wind (John and Kim Cunnick original) (39:21)
* Memphis Blues (W.C. Handy original) (55:35)
* BONUS TRACK: Love, Love Alone (John Hardy original)

Follow Joe Seamons on Instagram (https://www.instagram.com/joebanjo/?hl=en)
Support Get Up in the Cool on Patreon (https://www.patreon.com/getupinthecool)
Send Tax Deductible Donations to Get Up in the Cool through Fractured Atlas (https://fundraising.fracturedatlas.org/get-up-in-the-cool)
Sign up at Pitchfork Banjo for my clawhammer instructional series! (https://www.pitchforkbanjo.com/)
Schedule a banjo lesson with Cameron (https://www.camerondewhitt.com/banjolessons)
Visit Tall Poppy String Band's website (https://www.tallpoppystringband.com/) and follow us on Instagram (https://www.instagram.com/tallpoppystringband/)
Follow Sweeten the Third on Instagram (https://www.instagram.com/sweetenthethird/?hl=en)