Join Kyle, Nader, Vibhu, and swyx live at NVIDIA GTC next week! Now that AIE Europe tix are ~sold out, our attention turns to Miami and World's Fair!

The definitive AI accelerator chip company has more than 10xed this AI Summer, and is now a $4.4 trillion megacorp… that is somehow still moving like a startup. We are blessed to have a unique relationship with our first ever NVIDIA guests: Kyle Kranen, who gave a great inference keynote at the first World's Fair and is one of the leading architects of NVIDIA Dynamo (a datacenter-scale inference framework supporting SGLang, TRT-LLM, and vLLM), and Nader Khalil, a friend of swyx from our days in Celo in The Arena, who has been drawing developers at GTC since before they were even a glimmer in the eye of NVIDIA.

Nader discusses how NVIDIA Brev has drastically reduced the barriers to entry for developers to get a top-of-the-line GPU up and running, and Kyle explains NVIDIA Dynamo as a datacenter-scale inference engine that optimizes serving by scaling out, leveraging techniques like prefill/decode disaggregation, scheduling, and Kubernetes-based orchestration, framed around cost, latency, and quality tradeoffs. We also dive into Jensen's "SOL" (Speed of Light) first-principles urgency concept, long-context limits and model/hardware co-design, internal model APIs (https://build.nvidia.com), and upcoming Dynamo and agent sessions at GTC.

Full video pod on YouTube

Timestamps
00:00 Agent Security Basics
00:39 Podcast Welcome and Guests
07:19 Acquisition and DevEx Shift
13:48 SOL Culture and Dynamo Setup
27:38 Why Scale Out Wins
29:02 Scale Up Limits Explained
30:24 From Laptop to Multi Node
33:07 Cost Quality Latency Tradeoffs
38:42 Disaggregation Prefill vs Decode
41:05 Kubernetes Scaling with Grove
43:20 Context Length and Co Design
57:34 Security Meets Agents
58:01 Agent Permissions Model
59:10 Build Nvidia Inference Gateway
01:01:52 Hackathons And Autonomy Dreams
01:10:26 Local GPUs And Scaling Inference
01:15:31 Long Running Agents And SF Reflections

Transcript

Agent Security Basics

Nader: Agents can do three things. They can access your files, they can access the internet, and now they can write custom code and execute it. You should only ever let an agent do two of those three things. If it can access your files and it can write custom code, you don't want internet access, because that's full vulnerability, right? If it has access to the internet and your file system, you should know the full scope of what that agent is capable of doing. Otherwise you can get injected, or something can happen. And so that's a lot of what we've been thinking about: how do we enable this, because it's clearly the future, but also, what are the enforcement points that we can start to put in to protect things?

swyx: All right.
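Nader's "two of three" rule is concrete enough to sketch as code. Here is a minimal illustration in Python of what an enforcement point could look like; the names and structure are ours, not Brev's or NVIDIA's actual implementation.

```python
from dataclasses import dataclass

# The three capabilities Nader names: file access, internet access, code execution.
ALL_CAPABILITIES = {"files", "internet", "exec"}

@dataclass
class AgentPolicy:
    granted: set

    def validate(self) -> None:
        unknown = self.granted - ALL_CAPABILITIES
        if unknown:
            raise ValueError(f"unknown capabilities: {unknown}")
        # The rule: never grant all three at once. Files + exec + internet
        # means injected code can read secrets and exfiltrate them.
        if self.granted >= ALL_CAPABILITIES:
            raise PermissionError("files + internet + exec is full vulnerability")

AgentPolicy({"files", "exec"}).validate()      # offline coding agent: allowed
AgentPolicy({"internet", "files"}).validate()  # research agent, no exec: allowed
```

Granting any pair passes; requesting the full set raises, which is the whole policy in one check.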
Podcast Welcome and Guests

swyx: Welcome to the Latent Space podcast in the Chroma studio. Welcome to all the guests here. We are back with our guest host Vibhu. Welcome, good to have you back. And our friends Nader and Kyle from NVIDIA. Welcome.

Kyle: Yeah, thanks for having us.

swyx: Yeah, thank you. Actually, I don't even know your titles. I know you're like architect something of Dynamo.

Kyle: Yeah, I'm one of the engineering leaders [00:01:00] and architects of Dynamo.

swyx: And you're director of something and developers, developer tech.

Nader: Yeah.

swyx: You're the developers, developers, developers guy at NVIDIA.

Nader: Open source, agent marketing, Brev, DevRel tools and stuff. That's been the focus.

swyx: And we're kind of recording this ahead of NVIDIA GTC, which is coming to town again, or taking over town, which we'll all be at. And we'll talk a little bit about your sessions and stuff.

Nader: We're super excited for it.

GTC Booth Stunt Stories

swyx: One of my favorite memories of Nader: you always do marketing stunts, and while you were at Brev, you had this surfboard that you went down to GTC with, and NVIDIA apparently liked it so much that they bought you. What was that like?

Nader: Yeah. Our logo was a shaka. We were always just trying to keep true to who we were. I think with some startups, you're trying to pretend that you're a bigger, more mature company than you are. And it was actually Evan Conrad from SF Compute...

swyx: A previous guest. Yeah.

Nader: Amazing. Oh, really? Amazing. He was just like, guys, you're two dudes in a room. Why are you [00:02:00] pretending that you're not? And so then we were like, okay, let's make the logo a shaka. We brought surfboards to our booth at GTC, and the energy was great. Some palm trees too.

Kyle: They actually poked out over the walls, so you could see the Brev booth.

Nader: Oh, that's so funny.

Kyle: And no one else's, just from very far away.

Nader: Oh, so you remember it back then?

Kyle: Yeah, I remember it pre-acquisition. I was like, oh, those guys look cool.

Nader: That makes sense. 'Cause we signed up really last minute, and so we had the last booth. It was all the way in the corner, and I was worried that no one was gonna come. So that's why we had the palm trees, and we really came in with the surfboards. We even had one of our investors bring her dog, and she was just walking the dog around to try to bring energy towards our booth.

swyx: Steph.

Kyle: Yeah, she's the best.

swyx: You know, as a conference organizer, I love that. Right? Everyone who sponsors a conference comes, does their booth, and they're like, we are changing the future of AI or something, some generic b******t. And like, no, actually try to stand out, make it fun, right? And people still remember it after three years.

Nader: Yeah. You know what's so funny? I'll give you this clip if you wanna add it [00:03:00] in, but my wife, at the time my fiancée, was in medical school, and she came to help us, 'cause it was a big moment for us. And so we bought this Cricut, it's like a vinyl printer, 'cause how else are we gonna label the surfboard? So we got a surfboard, luckily was able to purchase that on the company card. We got a Cricut, and it was just like "fine tuning for enterprises" or something like that, that we put on the surfboard. And it's 1:00 AM the day before we go to GTC, she's helping me put these vinyl stickers on, and she goes, she's like, if you pull this off, you son of a b***h.
Nader: Pretty much after the acquisition, I stitched that clip together with the acquisition announcement and sent it to our family group chat.

swyx: Oh yeah. Well, she made a good choice there. Was that basically the origin story for Launchables?

Nader: It was.

swyx: And maybe we should explain what Brev is.

Nader: Yeah. Brev is just a developer tool that makes it really easy to get a GPU. We connect a bunch of different GPU sources. The basics of it is: how quickly can we SSH you into a GPU? Whenever we would talk to users, they wanted a GPU. They wanted an A100. And if you go to any cloud [00:04:00] provisioning page, usually it's three pages of forms, or somewhere in the forms there's a dropdown, and in the dropdown there's some weird code that you have to know how to translate to an A100. And I remember just thinking: every time someone says they want an A100, the piece of text that tells them that's what they're getting is stuffed away in a corner. So we were like, what if the biggest piece of text was what the user's asking for? And so when you go to Brev, it's just big GPU chips with the type that you want.

swyx: With beautiful animations that you worked on. Now you can just prompt it, but back in the day, those were handcrafted, artisanal code.

Nader: Yeah, I was actually really proud of that, because I made it in Figma, and then I was really struggling to figure out how to turn it from Figma into React. So what it actually is, is just an SVG. I have all the styles, and when you change the chip, whether it's active or not, it changes the SVG code, and that renders so it looks like it's animating. We just had the transition slow, but it's just a JavaScript function changing the underlying SVG. That was how I ended up figuring out how to move it over from Figma. But yeah, that's artisanal. [00:05:00]

Kyle: Speaking of marketing stunts though, he actually used those SVGs to make these cards.

Nader: Oh yeah.

Kyle: A GPU gift card that he handed out everywhere. That was actually my first impression of that one.

swyx: I think I still have one of them.

Nader: They look great. I have a ton of them still in our garage, but they don't have labels. We should honestly bring them back. But I found this old printing press here, actually just around the corner on Van Ness. It's a third-generation San Francisco shop. And so I come in, an excited startup founder, and they just have this crazy old machinery, and I'm in awe, 'cause the whole building is so physical. You're seeing these machines, they have pedals to move these saws and whatever, I don't know what the machinery is, but I saw all three generations. There's the grandpa, the father, and the son, and the son was around my age.

swyx: It's like a holy trinity.

Nader: It's funny, because I just took the same SVG and we printed it, and it's foil printing, so they make a mold that's an inverse of the A100, and then they put the foil on it [00:06:00] and press it into the paper. And I remember once we got them, he was like, hey, don't forget about us.
Nader: I guess early Apple's and Cisco's first business cards were all made there. And so he was like, yeah, we get the startup businesses, but then as they mature, they kind of go somewhere else. I think we were talking with marketing about using them for something. We should go back and make some cards.

swyx: Yeah. You know, I remember, as a very, very small Brev investor, I was like, why are we spending time doing these stunts for GPUs? As a typical cloud hardware person, you go into AWS, you pick like a g5.xlarge, whatever, from a list, and you look at the specs. Why animate this GPU? And I do think it just shows the level of care that goes throughout Brev.

Nader: And NVIDIA. I think that's the thing that struck me most when we first came in: the amount of passion that everyone has. You talk to Kyle, you talk to, like, every VP that I've met at NVIDIA, they go so close to the metal. I remember, almost a year ago, my VP asked me, he's like, hey, [00:07:00] what's Cursor? Are you using it, and if so, why? And he downloaded Cursor and was asking me to help him use it, or just show him why we were using it. So, the amount of care that everyone has, and the passion and appreciation for the moment. This is a very unique time, so it's really cool to see everyone really appreciate that.

swyx: Yeah.

Acquisition and DevEx Shift

swyx: One thing I wanted to do before we move over to research topics and the stuff that Kyle's working on is just tell the story of the acquisition. Not many people have been through an acquisition with NVIDIA. What's it like? Anything you'd like to say.

Nader: It's a crazy experience. The thing that was the most exciting for us was, our goal was just to make it easier for developers. We wanted to make access to GPUs easier. Oh, actually, your question about Launchables: a Launchable is just a one-click deploy for any software on top of the GPU. And what we really liked about NVIDIA was that it felt like we just got a lot more resources to do all of that. [00:08:00] NVIDIA's goal is to make things as easy for developers as possible, so there was a really nice synergy there. When it comes to an acquisition, I think the degree to which the souls of the products align is going to speak to the success of the acquisition. And so in many ways it feels like we're home. This is a really great outcome for us. I love brev.nvidia.com. You should use it.

Kyle: It's the front page for GPUs.

Nader: Yeah. If you want GPUs, you go there.

swyx: And internally it's growing very quickly. I don't remember, you said some stats there.

Nader: Yeah, I wish I had the exact numbers, but internally and externally, it's been growing really quickly.
Nader: We've been working with a bunch of partners, a bunch of different customers and ISVs. If you have a solution that runs on the GPU and you want people to use it quickly, we can bundle it up in a Launchable and make it a one-click run. And if you're doing things and you want just a sandbox or something to run on, like OpenClaw: huge moment, super exciting, and we'll get into it more. But internally, people wanna run this, and we know we have to be really careful about the security implications. Do we let this run on the corporate network? Security's guidance was, hey, [00:09:00] run this on Brev. It's a VM, it's sitting in the cloud, it's off the corporate network, it's isolated. And so that's been our stance, internally and externally, about how to even run something like OpenClaw while we figure out how to run these things securely.

swyx: I think you were also almost the right team at the right time, when NVIDIA is starting to invest a lot more in developer experience, or whatever you call it. UX, or, I don't know what you call it. Software. Obviously NVIDIA has always invested in software, but this is a different audience.

Nader: It's a wider

Kyle: developer base.

swyx: Yeah. Right. So what is it called internally? What is this that people should be aware is going on there?

Nader: What, like developer experience?

swyx: Yeah, is it just called developer experience, or is there a broader strategy here at NVIDIA?

Nader: NVIDIA always wants to make a good developer experience. The thing is, a lot of the technology is just really complicated. The reason AI is having a huge moment is not [00:10:00] because, let's say, data scientists who were quiet in 2018 are much louder now. The pie is growing, right? There's a whole bunch of new audiences. My mom's wondering what she can do with it. My sister taught herself how to code. I actually think AI is generally a big equalizer, and you're seeing a more technologically literate society. Everyone's learning how to code; there isn't really an excuse not to. And building a good UX means that you really understand who your end user is. And when your end user becomes such a wide variety of people, you have to almost reinvent the practice, right?

Kyle: You have to actually build more developer UX, right? Because there are tiers of developer base that were added. The hackers that are building on top of OpenClaw, for example, have never used a GPU. They don't know what CUDA is. They just want to run something.

Nader: Yeah.

Kyle: You need new UX that is not just, hey, how do you program something in CUDA and run it? When deep learning was getting big, we built Torch support and so on. But recently the amount of [00:11:00] layers added to that developer stack has just exploded, because AI has become ubiquitous. Everyone's using it in different ways.

Nader: It's moving fast in every direction. Vertical, horizontal.

Vibhu: Yeah.
Vibhu: You even take it down to hardware, like the DGX Spark. It's basically the same system as what you'd throw up on a big GPU cluster.

Nader: Yeah, it's amazing. Blackwell.

swyx: We saw the preview at last year's GTC, and that was one of our better performing videos so far, and NVIDIA coverage so far. This will beat it.

Nader: Fingers crossed.

DGX Spark and Remote Access

Nader: Even when Grace Blackwell, or when DGX Spark was first coming out, I got to be involved in the developer experience from the beginning.

swyx: You were involved.

Nader: Yeah, from the start. I just got an email, we got thrown into the loop, and suddenly, it was actually really funny, 'cause I'm still pretty fresh from the acquisition and I'm getting an email from a bunch of the engineering VPs about the new hardware, the new GPU system that we're putting out. And I'm like, okay, cool, Nader's now involved with this for the UX. I'm like, what am I gonna do [00:12:00] here? I remember the first meeting, I was just kind of quiet as I was hearing engineering VPs talk about what this box could be, what it could do, how we should use it. And one of the first ideas, I think a quote was, the first thing someone's gonna wanna do with this is get two of them and run a Kubernetes cluster on top of them. And I was like, oh, I think I know why I'm here. I was like, the first thing we're doing is easy SSH into the machine. And just kind of scoping it down: the person who wants to run a Kubernetes cluster on two Sparks has a higher propensity for pain than someone who buys it and wants to run OpenClaw right now, right? If you can make sure that that's as effortless as possible, then the rest becomes easy. So there's a tool called NVIDIA Sync. It just makes the SSH connection really simple. If you have a Mac, or a PC or whatever, if you have a laptop and you buy this GPU and you want to use it, you should be able to use it like it's a GPU in the cloud. [00:13:00] But there's all this friction in how you actually get into it. That's part of Brev's value proposition: there's a CLI that wraps SSH and makes it simple. So our goal is just to get you into that machine really easily. And one thing we just launched at CES, it's still in early access, we're ironing out some kinks, but it should be ready by GTC: you can register your Spark on Brev.

swyx: So it's like remote-managed local hardware. Single pane of glass. Because Brev can already manage other clouds anyway, right?

Vibhu: Yeah. And you can use the Spark on Brev as well, right?

Nader: Yeah, exactly. So you set it up at home, you run the command on it, and it'll appear in your Brev account. And then you can take your laptop to a Starbucks or a cafe, and you can continue to use your Spark just like any other cloud node on Brev.

swyx: And it's just like a pre-provisioned data center in your home.

Nader: Yeah, exactly.
Vibhu: Tiny little data center.

Nader: Tiny little, the size of your phone.

SOL Culture and Dynamo Setup

swyx: One more thing before we move on to Kyle. I just have so many Jensen stories, and I love mining Jensen stories. My favorite so far is SOL. What is [00:14:00] SOL?

Nader: SOL is actually, of all the lessons I've learned, definitely my favorite.

Kyle: It'll always stick with you.

Nader: Yeah. In your startup, everything's existential, right? We've run out of money. We were at risk of missing payroll. We've had to contract our team because we ran outta money. And because of that, you're always forcing yourself to understand the root cause of everything. If you get a date, if you get a timeline, you know exactly why that date or timeline is there. You're pushing every boundary, and you're not just accepting a no, just because. And as you start to introduce more layers, as you start to become a much larger organization, SOL is essentially: what is the physics, right? The speed of light moves at a certain speed. So if light's moving any slower, then you know something's in the way. So before trying to layer reality back in about why something can't be delivered by some date, let's just understand the physics. What is the theoretical limit on how fast this can go? And then start to tell me why. 'Cause otherwise, people will start telling you why something can't be done. But actually, I think any great leader's goal is just to create urgency. [00:15:00]

Kyle: Create compelling events, right? SOL is a term NVIDIA uses to instigate a compelling event. You say: this is done. How do we get there? What is the minimum, as much as necessary, as little as possible, thing that it takes for us to get exactly here? It helps you break through a bunch of noise instantly.

swyx: One thing I'm unclear about is, can only Jensen use the SOL card? Obviously it's Jensen, but can someone else be like, no...

Kyle: Oh, no, no, no. Frontline engineers use it.

Nader: Yeah. I think it's not so much about "get the b******t out." It's: give me the root understanding, right? If you tell me something takes three weeks, well, what's the first principles? Why is it three weeks? What's the actual limit on why this is gonna take three weeks? Let's say you wanted to buy a new computer and someone told you it's gonna be here in five days. What's the SOL? Well, the SOL is: I could walk into a Best Buy and pick it up for you, right? So anything beyond that, is it practical? Is that how we're gonna, let's say, give everyone in the [00:16:00] company a laptop? Obviously not. So that's the SOL, and then it's like, okay, well, if we have to get more than ten, suddenly there might be some lead time, right? And so now we can piece reality back in.

swyx: So this is the Paul Graham "do things that don't scale." And this is also what people would now call high agency.
Kyle: Yeah. It's actually really interesting, because there's a second, hardware angle to SOL that doesn't come up for the rest of the org. SOL is used culturally at NVIDIA for everything.

swyx: I'm also mining for: I think that can be annoying sometimes. Someone keeps pulling SOL on you, and you're like, guys, we have to be stable. We have to f*****g plan.

Kyle: It's an interesting balance.

Nader: Yeah, I encounter that with Alec, actually, 'cause we have a new conference, so we have goals for what we wanna launch by the conference.

swyx: Where is this, GTC?

Nader: Well, we did it for CES, we did it for GTC DC before that, and we're doing it for GTC San Jose. Every time we have a new moment, [00:17:00] we want to launch something, and we want to do so at SOL. And that does mean some level of prioritization needs to happen. So it is difficult, right? You have to be careful with what you're pushing. Stability is important, and that should be factored into SOL. SOL isn't just "build everything and let it break"; that's part of the conversation. As you're layering in all the details, one of them might be: hey, we could build this, but then it's not gonna be stable for X, Y, Z reasons. And so one of our conversations for CES was, hey, we can get registering your Spark with Brev into early access, but there are a lot of things we need to do in order to feel really comfortable from a security perspective. There's a lot of networking involved before we deliver that to users. So it's like, okay, let's get this to a point where we can at least let people experiment with it. We had it in a booth, we had it in Jensen's keynote, and then let's go iron out all the networking kinks. That's not easy, and so that can come later. That was the way we layered that back in.

Kyle: It's not really about saying you don't have to do the [00:18:00] maintenance or operational work. It's more that it highlights how progress is incremental, right? What is the minimum thing that we can get to? And then there's an SOL for every component after that. But there's the SOL to get you to the starting line, and that's usually how it's asked. On the other side, SOL came out of hardware at NVIDIA. SOL is literally: if we ran the accelerator, the GPU, at basically full speed with no other constraints, how fast would we be able to make a program go?

swyx: Yeah. So in training, you then work back to some percentage of, like, MFU, for example.

Kyle: Yeah, that's a great example. There's an SOL MFU, and then there's what's practically achievable.
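To make the SOL/MFU framing concrete, here is a back-of-envelope sketch. Every number below is invented for illustration; the only point is the shape of the math: SOL is the hardware's theoretical peak, MFU is the fraction of it you actually achieve.

```python
# "Speed of light" check for a training step, with made-up numbers.
peak_tflops = 1000           # nominal dense peak of an illustrative accelerator

batch, seq_len = 4, 2048     # assumed workload
params = 7e9                 # assumed model size
flops_per_token = 6 * params # ~6 * params FLOPs/token is the usual training rule of thumb
step_time_s = 0.9            # measured wall-clock time (hypothetical)

achieved_tflops = batch * seq_len * flops_per_token / step_time_s / 1e12
mfu = achieved_tflops / peak_tflops
print(f"achieved ~{achieved_tflops:.0f} TFLOP/s -> {mfu:.0%} of SOL")
# If this prints 38%, the SOL question is: what physically accounts for the other 62%?
```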
swyx: Cool. Should we move on to Kyle's side? Kyle, you're coming more from the data science world. Whenever I meet someone who's worked on tabular stuff, graph neural networks, time series: when I go to NeurIPS, when I go to ICML, I walk the back halls. There's always a small group of graph people, a small group of tabular people. [00:19:00] And, like, there's no one there. You know what I mean? It's important, interesting work if you care about solving the problems that they solve.

Kyle: Yeah.

swyx: But everyone else is just LLMs all the time.

Kyle: Yeah. It's like the black hole, right? Has the event horizon reached this yet at NeurIPS?

swyx: But, you know, those are transformers too, and those are also interesting things. Anyway, I just wanted to spend a little bit of time on that background before we go into Dynamo proper.

Kyle: Yeah, sure. I took a different path to NVIDIA. I joined six years ago, seven if you count when I was an intern. So I joined NVIDIA right outta college, and the first thing I jumped into was not what I'd done during my internship, which was stuff for autonomous vehicles, heavyweight object detection. I jumped into recommenders, because they were popular.

swyx: Yeah, he did RecSys as well.

Kyle: Yeah, RecSys. That was the tabular data at the time, right? You have tables of [00:20:00] audience qualities and item qualities, and you're trying to figure out which member of the audience matches which item, or, more practically, which item matches which member of the audience. And at the time, really, we were trying to turn recommenders, which had historically been a bit of a CPU-based workflow, into something that ran really well on GPUs. And it's since been done. There are a bunch of libraries for RecSys that run on GPUs. The common models, like the deep learning recommendation model, which came outta Meta, and the wide-and-deep model, which was released by Google, were very accelerated by GPUs, using the fast HBM on the chips especially to do vector lookups. It was very interesting at the time, and super relevant, because we were starting to get this explosion of feeds and things that required recommenders to just actively be on all the time. And I transitioned a little bit towards graph neural networks when I discovered them, because I realized you can use graph neural networks to represent relationships between people, items, concepts, and that interested me. So I jumped into that at [00:21:00] NVIDIA and got really involved for two-ish years.

swyx: And something I learned from Bryan Catanzaro is that you can just kind of choose your own path at NVIDIA.

Kyle: Oh my God. Yeah.

swyx: Which is not a normal big-corp thing. Usually you have a lane, you stay in your lane.

Nader: Coming from a startup guy, I think that's probably the reason why I enjoy being in a big company: the mission is the boss.

swyx: The mission is the boss.

Nader: Yeah. It feels like a big game of pickup basketball. If you wanna play basketball, you just go up to the court and you're like, hey look, we're gonna play this game and we need three. And you just find your three. Honestly, for every new initiative, that's what it feels like.

Vibhu: It also shows, right? NVIDIA is just releasing state-of-the-art stuff in every domain.
Vibhu: Like, okay, you expect foundation models with Nemotron, and then a voice model like Parakeet just randomly comes out, then another voice one.

Kyle: The NVIDIA voice team has always been producing.

Vibhu: Yeah. In every other domain there's always a paper that comes out, a dataset that comes out. I mean, it also stems back to what NVIDIA has to do, right? You have to make chips years before they're actually produced. So you need to really [00:22:00] focus.

Kyle: The design process starts like three to five years before the chip gets to the market.

Vibhu: Yeah, I'm curious what that's like. You have specialist teams. Is it just, people find an interest, you go in, you go deep on whatever, and that feeds back into your predictions? The internals at NVIDIA must be crazy, right? Even without selling to people, you have your own predictions of where things are going, and they're very grounded, right?

Kyle: Yeah, it's really interesting. There are two things that I think NVIDIA does which are quite interesting. One is, we really index on passion. There's a big organizational top-down push to ensure that people are working on the things that they're passionate about. So if someone proposes something that's interesting, many times they can just email someone way up the chain who would find it relevant and say, hey, can I go work on this?

Nader: I worked at a big company for a couple years before starting on my startup journey, and it felt very weird if you were to email out of your chain, if that makes [00:23:00] sense. The emails at NVIDIA are like mosh pits. It's just 60 people, just whatever.

swyx: They get messy, like reply-all.

Nader: Oh, it's insane. It's insane.

Kyle: They just help you, you know, maximize the context.

Nader: But that's actually, this is a weird thing, where I used to be like, why would we send emails? We have Slack. Now I'm the exact opposite. I feel so bad for anyone who's messaging me on Slack, 'cause I'm so unresponsive.

swyx: You're email-maxing.

Nader: I'm email-maxing now. Email is perfect, right? Because important threads get bumped back up. Slack doesn't do that. So I just have this casino going off on the right or on the left, and I don't know which thread was from where or what. And then also, with subjects, you can have working threads. What's difficult is, when you're small, if you're not 40,000 people, I think Slack will work fine. But there's some inflection point where that becomes really messy and you'll actually prefer having email, 'cause you can have working threads. You can CC more than nine people in a thread.

Kyle: You can fork stuff.
Nader: You can [00:24:00] fork stuff, which is super nice. And that is part of how you can propose a plan. You can also just start. Honestly, momentum's the only authority, right? If you can just start, make a little bit of progress, and show someone something, then they can try it. I think that's been the most effective way to push anything forward, both at NVIDIA and just generally.

Kyle: Yeah. There's the other concept that's explored a lot at NVIDIA, which is this idea of a zero-billion-dollar business. Market creation is a big thing at NVIDIA.

swyx: Oh, you want to go and start a zero-billion-dollar business?

Kyle: Jensen says: we are completely happy investing in zero-billion-dollar markets. We don't care if this creates revenue. It's important for us to know about this market; we think it will be important in the future. It can be zero billion dollars for a while. I'm probably mangling his words here, but I'll give an example. NVIDIA's been working on autonomous driving for a long time.

swyx: Like an NVIDIA car.

Kyle: No, they've...

Vibhu: They used the Mercedes, right? They're around the HQ, and I think it finally just got licensed out. Now they're starting to be used quite a bit. [00:25:00] For ten years you've been seeing Mercedes with NVIDIA logos driving around.

Kyle: If you're in South Santa Clara, you see it, actually. So, zero-billion-dollar markets are a thing.

swyx: I mean, okay, look, cars are not a zero-billion-dollar market. That's a bad example.

Nader: I think he's messaging that it's zero today. Or even internally, right? An org doesn't have to ruthlessly find revenue very quickly to justify its existence. A lot of the important research, a lot of the important technology being developed...

Kyle: That's kind of where research sits. Research is very ideologically free at NVIDIA. They can pursue the things they want.

swyx: Were you in research officially?

Kyle: I was never in research officially. I was always in engineering. I'm in an org called Deep Learning Algorithms, which is basically: how do we make things that are relevant to deep learning go fast?

swyx: That sounds freaking cool.

Vibhu: And I think a lot of that is underappreciated, right? Like time series. This week Google put out the TimesFM paper, [00:26:00] a new time series paper. Semantic IDs started applying transformers, LLMs, to rec systems. And when you think of the scale of companies deploying these, Amazon recommendations, Google web search, it's huge scale.

Kyle: Yeah.

Vibhu: And you want fast.

Kyle: Yeah. Actually, there's a fun moment that brought me full circle. Amazon Ads recently gave a talk where they talked about using Dynamo for generative recommendation, which was weirdly cathartic for me. I'm like, oh my God, I've supplanted what I was working on. You're using LLMs now to do what I was doing five years ago.

swyx: Amazing. And let's go right into Dynamo. Maybe introduce it top down.

Kyle: Yeah, sure. At this point a lot of people are familiar with the term inference. Funnily enough, I went from inference being a really niche topic to being something that's discussed on normal people's Twitter feeds.

Nader: It's on billboards here now.

Kyle: Yeah. Very, very strange.
Kyle: Driving and seeing an inference ad on the 101. Inference at scale is becoming a lot more important. We have these moments like OpenClaw, where you have these [00:27:00] agents that take lots and lots of tokens but produce incredible results. There are many different aspects of test-time scaling, so that you can use more inference to generate a better result than if you were to use a short amount of inference. There's reasoning, there's querying, there's adding agency to the model, allowing it to call tools and use skills. Dynamo sort of came about at NVIDIA because myself and a couple others were talking about these concepts: you have inference engines like vLLM, SGLang, TensorRT-LLM, and they sort of think about things as one single copy, one replica, right?

Why Scale Out Wins

Kyle: Like one version of the model. But when you're actually serving things at scale, you can't just scale up that replica, because you end up with performance problems. There's a scaling limit to scaling up replicas. So you actually have to scale out, to use some Kubernetes-type terminology. We realized that there was a lot of potential optimization that we could do in scaling out and building systems for datacenter-scale [00:28:00] inference. So Dynamo is this datacenter-scale inference engine that sits on top of the frameworks, like vLLM, SGLang, and TensorRT-LLM, and just makes things go faster, because you can leverage the economy of scale. You have KV cache, which we can define a little bit later, in all these machines, and you wanna figure out ways to maximize your cache hits. Or you want to employ new techniques in inference like disaggregation, which Dynamo introduced to the world in March. Not introduced, it was an academic topic beforehand, but we were one of the first frameworks to start supporting it. And we wanna combine all these techniques into a modular framework that allows you to accelerate your inference at scale.

Nader: By the way, Kyle and I became friends on my first day at NVIDIA, and I always loved it, 'cause he always teaches me new things.

swyx: By the way, this is why I wanted to put the two of you together. I was like, this is gonna be good.

Kyle: It's very different, you know. We've talked to each other a bunch. [00:29:00] Actually, you asked: why can't we scale up?

Scale Up Limits Explained

Nader: Yeah, you said model replicas.

Kyle: Yeah. So scale up means assigning more...

swyx: Heavier?

Kyle: Yeah, heavier. Making things heavier. Adding more GPUs, adding more CPUs. Scale out is having a barrier and saying: I'm gonna duplicate my representation of the model, or a representation of this microservice, and I'm gonna replicate it many times to handle load. And the reason that you can't scale up past some point is that there are hardware bounds and algorithmic bounds on that type of scaling. I'll give you a good example that's very trivial. Let's say you're on an H100.
Kyle: The maximum NVLink domain for H100, for most DGX H100s, is eight GPUs, right? So if you scaled up past that, you're gonna have to figure out ways to handle the fact that now, for the GPUs to communicate, you have to do it over InfiniBand, which is still very fast, but is not as fast as NVLink.

swyx: Is it like one order of magnitude? Like hundreds, or...

Kyle: It's about an order of magnitude, yeah.

swyx: Okay, so not terrible.

Kyle: [00:30:00] I'd need to remember the data sheet here, but I think it's about 500 gigabytes a second unidirectional for NVLink, and about 50 gigabytes a second unidirectional for InfiniBand. It depends on the generation.

swyx: I just wanna set this up for people who are not familiar with these kinds of layers and the transfer speeds.

Vibhu: Of course.
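Kyle's order-of-magnitude point is easy to sanity-check. A rough sketch using his ballpark figures (real numbers vary by generation, link count, and payload):

```python
# Time to move one activation tensor across the scale-up vs. scale-out boundary.
NVLINK_GBPS = 500      # ~GB/s unidirectional inside the NVLink domain (ballpark)
INFINIBAND_GBPS = 50   # ~GB/s unidirectional across nodes (ballpark)

payload_gb = 0.5       # hypothetical 500 MB all-gather payload

print(f"NVLink:     {payload_gb / NVLINK_GBPS * 1e3:.0f} ms")      # ~1 ms
print(f"InfiniBand: {payload_gb / INFINIBAND_GBPS * 1e3:.0f} ms")  # ~10 ms
```

That tenfold gap per transfer is why tensor parallelism usually stays inside the NVLink domain, and why going past it pushes you toward scaling out at the replica level instead.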
And that doesn't mean like, you know, people aren't taking a shot at this, like tinker from thinking machines, you know?Yeah. RL as a service. Yeah, totally. It's, it also gets even harder when you try to do big model training, right? We're not the best at training Moes, uh, when they're pre-trained. Like we saw this with LAMA three, right? They're trained in such a sparse way that meta knows there's gonna be a bunch of inference done on these, right?They'll open source it, but it's very trained for what meta infrastructure wants, right? They wanna, they wanna inference it a lot. Now the question to basically think about is, okay, say you wanna serve a chat application, a coding copilot, right? You're doing a layer of rl, you're serving a model for X amount of people.Is it a chat model, a coding model? Dynamo, you know, back to that,Kyle: it's [00:33:00] like, yeah, sorry. So you we, we sort of like jumped off of, you know, jumped, uh, on that topic. Everyone has like, their own, own journey.Cost Quality Latency TradeoffsKyle: And I, I like to think of it as defined by like, what is the model you need? What is the accuracy you need?Actually I talked to NA about this earlier. There's three axes you care about. What is the quality that you're able to produce? So like, are you accurate enough or can you complete the task with enough, performance, high enough performance. Yeah, yeah. Uh, there's cost. Can you serve the model or serve your workflow?Because it's not just the model anymore, it's the workflow. It's the multi turn with an agent cheaply enough. And then can you serve it fast enough? And we're seeing all three of these, like, play out, like we saw, we saw new models from OpenAI that you know, are faster. You have like these new fast versions of models.You can change the amount of thinking to change the amount of quality, right? Produce more tokens, but at a higher cost in a, in a higher latency. And really like when you start this journey of like trying to figure out how you wanna host a model, you, you, you think about three things. What is the model I need to serve?How many times do I need to call it? What is the input sequence link was [00:34:00] the, what does the workflow look like on top of it? What is the SLA, what is the latency SLA that I need to achieve? Because there's usually some, this is usually like a constant, you, you know, the SLA that you need to hit and then like you try and find the lowest cost version that hits all of these constraints.Usually, you know, you, you start with those things and you say you, you kind of do like a bit of experimentation across some common configurations. You change the tensor parallel size, which is a form of parallelismVibhu: I take, it goes even deeper first. Gotta think what model.Kyle: Yes, course,ofKyle: course. It's like, it's like a multi-step design process because as you said, you can, you can choose a smaller model and then do more test time scaling and it'll equate the quality of a larger model because you're doing the test time scaling or you're adding a harness or something.So yes, it, it goes way deeper than that. But from the performance perspective, like once you get to the model you need, you need to host, you look at that and you say, Hey. I have this model, I need to serve it at the speed. 
Nader: Did you guys see the recent paper, I just saw it a few days ago, that if you run [00:35:00] the same prompt twice, you get, like, double...

swyx: Just try it again.

Nader: Yeah, exactly.

Vibhu: And you get a lot. But the key thing there is you give it the context of the failed try, right? So it takes a shot. And this has been basic guidance for quite a while: just try again. 'Cause, you know, did you try again?

Nader: All advice in life.

Vibhu: It's a paper from Google, if I'm not mistaken. I think it's a little seven-page short paper. The title's very cute. And it's just like, yeah, just try again, give it the context.

Kyle: Multi-shot. You just say: hey, take a little bit more information, try and fail, fail.

Vibhu: And that basic concept has gone pretty deep. There's self-distillation RL, where you do self-distillation, you do RL, and you have past failures, and that gives some signal. So people take "try it again": not strong enough.

swyx: For listeners who've made it to here: Vibhu and I run a second YouTube channel for our paper club, where [00:36:00] Vibhu just covered this.

Nader: Oh, that's awesome. I'll have to check it out.

swyx: Yeah. It's just a good practice. Everyone needs a paper club, where you just read papers together and the social pressure kind of forces you to keep up.

Nader: There's a big inference reading group at NVIDIA. I feel so bad every time he shares it.

swyx: One of your guys is big in that, I forget. Eshan?

Kyle: Yeah, Eshan's on my team, actually. Funny, there's an employee transfer between us: Eshan worked for Nader at Brev, and now he's on my team.

Nader: He was our head of AI, and then, once we got in...

swyx: Because I'm always looking for, okay, can I start another podcast that only does that thing? And I was trying to nudge Eshan into: is there something here? I mean, there are new inference techniques every day.

Kyle: You would actually be surprised at the amount of blog posts you see.

swyx: There was a period where it was like Medusa, Hydra, Eagle, you know.

Kyle: Now we have new forms of speculative decoding, or new...

swyx: What are you excited about?

Vibhu: And it's exciting when you guys put out something like Nemotron. 'Cause I remember the paper on Nemotron 3, [00:37:00] the amount of post-training tokens that the GPU-rich can just train on. And it was a hybrid state space model, right?

Kyle: It's co-designed for the hardware.

Vibhu: Yeah, co-designed for the hardware. And one of the things was always, you know, state space models don't scale as well when you do a conversion, or whatever, the performance. And you guys are like, no, just keep training. And Nemotron shows a lot of that.

Nader: Also, something cool about Nemotron: it was released in layers, if you will, very similar to Dynamo. The pre-training and post-training datasets are released.
Nader: The recipes for how to do it are released. The model itself is released, the full model. You just benefit from us turning on the GPUs. But there are companies, like ServiceNow, that took the dataset and trained their own model, and we were super excited and [00:38:00] celebrated that work.

Vibhu: Zoom's different. Zoom is CGI, I think. Also, just to add: a lot of models don't put out base models, and if there's that question of why fine-tuning hasn't taken off, you know, you can do your own training. You guys put out base models; I think you put out everything.

swyx: Basically, base models can be cancelable.

Vibhu: Yeah, base models can be cancelable. Safety training.

swyx: Did we get a full picture of Dynamo? I don't know if we did.

Nader: What I'd love is: you mentioned the three axes. Break it down. What's prefill and decode, and what are the optimizations that we can get with Dynamo?

Kyle: Yeah, that's a great point. So to summarize on that three-axes problem: there are three things that determine whether or not something can be done with inference. Cost, quality, latency, right? Dynamo is supposed to provide you the runtime that allows you to pull levers to move around the Pareto frontier, or the Pareto surface, that determines: is this actually possible with inference and AI today?

Nader: It gives you the knobs.

Kyle: Yeah, exactly. It gives you the knobs.

Disaggregation Prefill vs Decode

Kyle: One thing that we use a lot in contemporary inference, and that is starting to pick up in general knowledge, is this concept of disaggregation. So, historically, [00:39:00] models would be hosted with a single inference engine, and that inference engine would ping-pong between two phases. There's prefill, where you're reading the sequence and generating KV cache, which is basically a set of vectors that represent the sequence, and then there's using that KV cache to generate new tokens, which is called decode. And some brilliant researchers, across multiple different papers, essentially made the realization that if you separate these two phases, you gain some benefits. First, you don't have to worry about step-synchronous scheduling. The way that an inference engine works is: you do one step, you finish it, and then you start scheduling the next step. It's not fully asynchronous. And the problem with that is that prefill and decode are actually very different, in terms of both their resource requirements and, sometimes, their runtime. So you would have prefill blocking decode steps, because you'd still be prefilling and you couldn't schedule, because the step has to end. Disaggregation removes that scheduling issue, and it also allows you to [00:40:00] split the work into two different types of pools. Prefill, typically (and this changes as model architecture changes), is right now compute-bound most of the time; when the sequence is sufficiently long, it's compute-bound. On the decode side, because you're doing a full pass over all the weights and the entire sequence every time you do a decode step, and you don't have the quadratic computation of KV cache, it's usually memory-bound: you're retrieving a linear amount of memory and doing a linear amount of compute, as opposed to prefill, where you retrieve a linear amount of memory and then use a quadratic amount of compute.
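The compute-bound/memory-bound split comes down to arithmetic intensity: FLOPs done per byte of weights read. A rough sketch for an illustrative 7B dense model (KV-cache traffic and attention are ignored to keep the point visible):

```python
params = 7e9
weight_bytes = params * 2              # BF16 weights

prompt_tokens = 4096

# Prefill: ~2 * params FLOPs per token, and one pass over the weights
# covers the whole prompt.
prefill_intensity = (2 * params * prompt_tokens) / weight_bytes  # ~= prompt_tokens

# Decode: the same pass over the weights yields just one new token.
decode_intensity = (2 * params * 1) / weight_bytes               # ~= 1

print(f"prefill ~{prefill_intensity:.0f} FLOPs/byte, decode ~{decode_intensity:.0f} FLOPs/byte")
```

Thousands of FLOPs per byte keeps the tensor cores busy (compute-bound); roughly one FLOP per byte leaves them waiting on HBM (memory-bound), which is exactly why the two phases want different hardware pools.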
Nader: You know, it's funny, someone at Exo Labs did a really cool demo where, because the DGX Spark has a lot more compute, you do the compute-hungry prefill on a DGX Spark and then do the decode on a Mac.

Vibhu: And that's faster.

Nader: Yeah.

Kyle: So you can do that. You can do machine stratification. And with our future generations of hardware, we actually announced, with Rubin, this [00:41:00] new accelerator that is prefill-specific. It's called Rubin CPX.

Kubernetes Scaling with Grove

Nader: I have a question. When you do the scale out, is scaling out easier with Dynamo? Because when you need a new node, you can dedicate it to either the prefill or the decode.

Kyle: Yeah. So Dynamo actually has a Kubernetes component in it called Grove that allows you to do this crazy scaling specialization. I don't wanna go too deep into Kubernetes here, but there was a previous way that you would launch multi-node work. It's called LeaderWorkerSet. It's in the Kubernetes standard, and LeaderWorkerSet is great; it served a lot of people super well for a long period of time. But one of the things it struggles with is representing the set of cases where you have a multi-node replica that has a pair, right? Prefill and decode. Or not exactly a pair, but a second stage with a ratio that changes over time. And prefill and decode are two different things: as your workload changes, [00:42:00] the amount of prefill you need to do may change, and the amount of decode you need to do might change. Let's say you start getting insanely long queries. That probably means your prefill scales harder, because you're hitting this quadratic scaling growth.

swyx: Yeah. And for listeners: prefill would be long input, decode would be long output, for example, right?

Kyle: Yeah. Decode is funny, because the amount of tokens that you produce scales with the output length, but the amount of work that you do per step scales with the amount of tokens in the context.

swyx: Yes.

Kyle: So it scales with both the input and the output.

swyx: That's true.

Kyle: But on the prefill-versus-decode side: if, suddenly, the amount of work you're doing on the decode side stays about the same, or scales a little bit, and the prefill side jumps up a lot, you actually don't want that ratio to be the same. You want it to change over time. So Dynamo has a set of components that, one, tell you how to scale (how many prefill workers and decode workers it thinks you should have), and, two, provide a scheduling API for Kubernetes that allows you to actually represent and effect this scheduling on your actual [00:43:00] hardware, on your compute infrastructure.
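A toy version of that scaling decision (sizing the prefill and decode pools independently as traffic shifts) might look like the following. The function name, capacities, and policy are our invention for illustration, not Dynamo's or Grove's actual API:

```python
import math

def plan_workers(prefill_tok_s: float, decode_tok_s: float,
                 prefill_cap: float = 50_000, decode_cap: float = 8_000):
    """Return (prefill_workers, decode_workers) for the observed token rates."""
    return (math.ceil(prefill_tok_s / prefill_cap),
            math.ceil(decode_tok_s / decode_cap))

# Normal traffic: short prompts, chatty outputs.
print(plan_workers(prefill_tok_s=100_000, decode_tok_s=24_000))  # (2, 3)

# Long-query spike: prefill demand jumps, decode barely moves,
# so only the prefill pool scales up -- the ratio is not fixed.
print(plan_workers(prefill_tok_s=400_000, decode_tok_s=26_000))  # (8, 4)
```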
swyx: I, I like
Nader: it's all,
swyx: it's all engineering. It's all engineering. Um, that's where I'm
Kyle: technical.
swyx: One thing I'm kind of just curious about, with all you see at a systems level, everything going on here. Mm-hmm. And we, you know, we're scaling it up in distributed systems.
Context Length and Co Design
swyx: Um, I think one thing that's kind of of-the-moment right now is people are asking: is there any SOL, sort of, upper bound? In terms of, let's just call it context length, for want of a better word, but you can break it down however you like.
Nader: Yeah.
swyx: I just think, like, well, yeah, I mean, clearly you can engage in hybrid architectures and throw in some state space models in there, all you want, but it still looks very attention heavy.
Kyle: Yes. Uh, yeah. Long context is attention heavy. I mean, we have these hybrid models, um,
swyx: and most models, like, cap out at a million context, and that's it. Yeah. Like, for the last two years that has been it.
Kyle: Yeah. The model-hardware-context co-design thing that we're seeing these days is actually super [00:44:00] interesting. It's, like, my passion, my secret side passion. We see models like Kimi or GPT-OSS. I use these because I know specific things about these models. So Kimi K2 comes out, right? And it's an interesting model. It's a DeepSeek-style architecture with MLA. It's basically DeepSeek, scaled a little bit differently, um, and obviously trained differently as well. But they talked about why they made the design choices for context. Kimi has more experts, but fewer attention heads, and I believe a slightly smaller attention dimension, but I'd need to check that. Uh, it doesn't matter. But they actually discussed this at length in a blog post on Zhihu, which is like our Quora,
swyx: Yeah.
Kyle: um, in China. Chinese Reddit.
swyx: Yeah.
Kyle: It's, yeah, it's actually an incredible blog post. All the ML people I've seen on there are very brilliant, and the creators of Kimi K2 [00:45:00] actually talked about it there, in the blog post. And they say: we actually did an experiment, right? Attention scales with the number of heads, obviously. Like, if you have 64 heads versus 32 heads, you do half the work of attention. You still scale quadratically, but you do half the work. And they made a very specific sort of trade in their system, in their architecture. They basically said: hey, what if we give it more experts, so we're gonna use more memory capacity, but we keep the number of activated experts the same. We increase the expert sparsity, so we have fewer experts active: the ratio of experts activated to total number of experts is smaller. And we decrease the number of attention heads.
Vibhu: And kind of for context, what we had been seeing was: you make models sparser instead. So no one was really touching heads. You're just having, uh,
Kyle: well, they did, they implicitly made it sparser.
Vibhu: Yeah, yeah. For Kimi. They did,
Kyle: yes.
Vibhu: They also made it sparser. But basically what we were seeing was people were at the level of, okay, there's a sparsity ratio.
Vibhu: You want more total parameters, fewer active, and that's sparsity. [00:46:00] But what you see from the papers, from labs like Moonshot and DeepSeek, is they go to the level of: okay, outside of just the number of experts, you can also change how many attention heads, fewer attention layers, more attention layers. Layers, yeah. Yes, yes. So that all basically ties back together into hardware-model co-design, which is
Kyle: hardware, model, context co-design.
Vibhu: Yeah.
Kyle: Right. Like, if you were training a model that was really, really short context, or that's really good at super-short-context tasks, you may design it in a way such that you don't care about attention scaling, because it hasn't hit the turning point where the quadratic curve takes over.
Nader: How do you consider attention, or context, as a separate part of the co-design? Like, how I would've thought of it is that hardware-model co-design would be hardware-model-context co-design.
Kyle: Because the harness, and the context that is produced by the harness, is a part of the model once it's trained in.
Vibhu: Like, even though towards the end you'll do long context, you're not changing architecture through training. I see. Yeah.
Kyle: I mean, you can try.
swyx: You're saying [00:47:00] everyone's training the harness into the model.
Kyle: I would say to some degree, or
swyx: there's co-design for the harness. I know there's a small amount, but I feel like not everyone has gone full send on this.
Kyle: I think it's important to internalize the harness that you think the model will be running in. Train it into the model.
swyx: Yeah. Interesting. Okay. Bash is like the universal harness,
Kyle: right? I'll give an example here, right? I mean, it's an easy proof, right? If you can train against a harness and you're using that harness for everything, wouldn't you just train with the harness to ensure that you get the best possible quality out of it?
swyx: Well, I can provide a counterargument. Yeah, sure. Which is: you wanna provide a generally useful model for other people to plug into their harnesses, right? So if you
Kyle: Yeah. Harnesses can be open source, right?
swyx: Yeah. So I mean, that's effectively what's happening with Codex.
Kyle: Yeah.
swyx: And, but, like, you may want a different search tool, and then you may have to name it differently, or,
Nader: I don't know how much people have pushed on this, but can you train a model... have people compared training a model for the harness versus [00:48:00] post-training for it?
swyx: I think it's the same thing. It's the same thing. It's okay. Just extra post-training.
Nader: I see.
swyx: And so, I mean, Cognition does this, of course. It does this where, if your tool is slightly different, you either force your tool to be like the tool that they trained for. Hmm. Or undo their training for their tool and then retrain. Yeah. It's really annoying, and like,
Kyle: I would hope that eventually we hit a certain level of generality with respect to training new
swyx: tools. This is not AGI. Like, this is a really stupid, like, learn-my-tool b***h. Like, I don't know if I can say that, but, you know, um, I think what my point kind of is, is that, like, I look at the slopes of the scaling laws and, like, this slope is not working, man.
We, we are at a million token context...
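To put rough numbers on the trade Kyle describes from the Kimi K2 write-up (more total experts for memory capacity, the same number of activated experts, half the attention heads), here's an illustrative per-layer cost model. The configs are made up for illustration; they are not Kimi K2's or DeepSeek's actual numbers.

```python
# Illustrative per-layer cost model for the "more experts, fewer heads" trade.
# Configs are invented; not the real Kimi K2 / DeepSeek architectures.

def layer_costs(d_model, n_heads, head_dim, seq_len,
                n_experts, n_active, d_expert):
    # Attention score/value FLOPs scale quadratically in seq_len and
    # linearly in the number of heads.
    attn_flops = 2 * n_heads * head_dim * seq_len * seq_len
    # MoE FLOPs depend only on *activated* experts; total experts only
    # grow memory capacity (parameter count), not per-token compute.
    moe_flops = n_active * 2 * d_model * d_expert
    moe_params = n_experts * 2 * d_model * d_expert
    return attn_flops, moe_flops, moe_params

base = layer_costs(d_model=7168, n_heads=64, head_dim=128, seq_len=128_000,
                   n_experts=256, n_active=8, d_expert=2048)
trade = layer_costs(d_model=7168, n_heads=32, head_dim=128, seq_len=128_000,
                    n_experts=384, n_active=8, d_expert=2048)

print("attention FLOPs: %.1fx" % (trade[0] / base[0]))   # 0.5x: half the heads
print("MoE FLOPs:       %.1fx" % (trade[1] / base[1]))   # 1.0x: same activated
print("MoE params:      %.1fx" % (trade[2] / base[2]))   # 1.5x: more capacity
```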
In this episode of The Canadian Investor Podcast, Simon Belanger and Dan Kent kick things off with a surprising ripple effect from the AI boom: a full-blown RAM/memory shortage that's sending PC upgrade costs through the roof. They break down why high-bandwidth memory (HBM) is crowding out "normal" consumer RAM production, how Micron, Samsung, and SK Hynix are prioritizing the most profitable AI-driven demand, and what that could mean for pricing, upgrade cycles, and the broader tech supply chain. From there, they shift into a pragmatic, investor-focused look at positioning during geopolitical uncertainty, without cheerleading conflict. Dan outlines key areas investors often look at in these environments: defense contractors (and why buying after the headlines can be "buying the umbrella in the rain"), Canadian energy as a cleaner way to express higher oil prices with less Middle East exposure, the growing (and expensive) opportunity set in cybersecurity, and gold as both a safe haven and an inflation hedge. They also touch on different ways to gain exposure (individual names vs. ETFs) and wrap up with updates on the podcast's YouTube live plans and what's coming next. Tickers of stocks discussed: LMT, NOC, GD, RTX, MU, AEM, FNV, WPM, ZJG.TO
In this extensive tech webinar, Nico Inberg hosts tech analyst Marc Langeveld of the Antaurus AI Tech Fund. Marc Langeveld gives his view on the AI and technology sector and discusses a number of tech and AI-related stocks. Topics covered include the sell-off in software stocks, the AI capex boom, and the bottlenecks in datacenter construction. A number of stocks are also discussed, including Nvidia, Besi, ServiceNow, ASML, TSMC, and Broadcom. The Antaurus AI Tech Fund itself is also covered, and Marc Langeveld explains how investors can invest in the fund. For more information about Antaurus and its AI Tech Fund, go to: www.antaurus.nl The information and opinions in this podcast are intended solely for information and education. They do not constitute investment advice, an investment recommendation, or financial advice. Investing involves risk. Past results are no guarantee of future performance. Always do your own research or consult a financial adviser before making investment decisions. Timeline:
00:00 - 7:55 Introduction to Antaurus
7:55 - 10:35 Introduction to Marc Langeveld
10:35 - 20:00 AI capex boom
20:00 - 27:40 AI stack: where is the moat?
27:40 - 31:44 Claude/AI agents: disruption of software
31:44 - 35:35 Compute wars: Nvidia vs TPU
35:35 - 38:08 Datacenter bottlenecks
38:08 - 41:19 HBM, advanced packaging & foundry
41:19 - 44:18 Winners & losers
44:18 - 46:30 Catalysts to monitor
46:30 - 50:40 Tech valuations
50:40 - 54:40 Nvidia
54:40 - 57:05 TSMC
57:05 - 1:02:20 ASML
1:02:20 - 1:06:25 Besi
1:06:25 - 1:10:38 ServiceNow
1:10:38 - 1:12:00 Broadcom
1:12:00 Good start for the AI Tech Fund
In the tech sector, memory is often treated as an interchangeable commodity. In reality, it is the bottleneck on which entire investment cycles turn, especially right now, amid the AI build-out. Micron thus sits at an intersection that electrifies investors: rising prices for DRAM, HBM, and storage solutions feed directly into revenue and margins. At the same time, the memory market is notorious for its abrupt reversals. What looks like a tailwind today can become a burden in the next cycle. This Micron stock analysis for 2026 starts exactly there: are memory prices already at their peak, or are we only in the middle phase of a scarcity regime? The focus is on the mechanisms that have shaped this industry for decades: over- and under-capacity, price spikes, and margins that swing between boom and disillusionment within a few quarters. Added to that is an assessment of Micron's competitive position, its DRAM market share, and the strategic shift toward areas where data centers and AI infrastructure drive demand.
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:
* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and "outrageously large" networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:
* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on "Important AI Trends" @Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps
00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic "Latency numbers every programmer should know"
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript
Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.
Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said: congrats on owning the Pareto Frontier.
Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.
Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have, like, frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this steadily advancing way.
Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's, like, a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.
Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially, when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you would need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?
Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or last-six-months-ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable, sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily, and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable: you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.
Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.
Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.
Alessio Fanelli [00:03:30]: A long time ago. But, like, I'm curious how you think about the cycle of these ideas, even, like, you know, sparse models, and, you know, how do you reevaluate them? How do you think about, in the next generation of models, what is worth revisiting? Like, yeah, you worked on so many ideas that end up being influential, but, like, in the moment, they might not feel that way necessarily. Yeah.
Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. But if you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of: okay, what if we want to actually serve that, and train all these independent sort of expert models and then squish them into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.
Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is: RL basically spikes models in a certain part of the distribution. And then you have to sort of... well, you can spike models, but sometimes it might be lossy in other areas, and it's kind of, like, an uneven technique. But you can probably distill it back, and I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think, like, that whole capability-merging without loss, I feel like some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.
Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation as being that you can have a much smaller model and a very large, you know, training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model, which you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed.
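For readers who haven't seen it, this is roughly what "logits as soft supervision" looks like in code: a minimal sketch of the Hinton, Vinyals & Dean-style distillation loss, in PyTorch. The temperature, mixing weight, and shapes are illustrative choices, not Gemini's actual training setup.

```python
# Minimal sketch of logit distillation: the student matches the teacher's
# temperature-softened distribution instead of only the hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: teacher logits carry "dark knowledge" about
    # similarities between classes that hard labels throw away.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(log_student, soft_targets, reduction="batchmean")
    hard_loss = F.cross_entropy(student_logits, labels)
    # T^2 keeps the soft-loss gradient scale comparable across temperatures.
    return alpha * (temperature ** 2) * soft_loss + (1 - alpha) * hard_loss

teacher_logits = torch.randn(8, 1000)        # stand-in for a big model's outputs
student_logits = torch.randn(8, 1000, requires_grad=True)
labels = torch.randint(0, 1000, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```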
You can get, you know, very close to your largest model's performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people, because it enables us, for multiple Gemini generations now, to make the sort of Flash version of the next generation as good as, or even substantially better than, the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.
Shawn Wang [00:07:02]: So, Dara asked: the original map was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that, like, the mother lode?
Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro-scale model, and we can distill from that as well into our Flash-scale model. So I think, you know, it's an important set of capabilities to have. And also, inference-time scaling can be a useful thing to improve the capabilities of the model.
Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economics of Flash is what led to the total dominance. I think the latest number is, like, 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.
Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.
Shawn Wang [00:07:50]: No, I mean, economics-wise: because Flash is so economical, you can use it for everything. Like, it's in Gmail now. It's in YouTube. Like, it's in everything.
Jeff Dean [00:08:02]: We're using it more in our search products, the various AI Mode and AI Overviews.
Shawn Wang [00:08:05]: Oh, my God. Flash is behind AI Mode. Oh, my God. Yeah, that's... yeah, I didn't even think about that.
Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, you're going to ask the model to do something until it actually finishes what you asked it to do, because you're going to ask now not just "write me a for loop," but "write me a whole software package to do X or Y or Z." And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long-context kinds of attention operations, you know, having sparse models with lots of experts. These kinds of things really matter a lot in terms of: how do you make them servable at scale?
Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of, like, one generation delayed? I almost think about it as, like, the capability... in certain tasks, the Pro model today has saturated some sorts of tasks.
And I think, for most of the things that people use models for, at some point the Flash model two generations out will be able to do basically everything. And how do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.
Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is, as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like, I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true not just of coding but of, you know, now, you know, "can you analyze all the, you know, renewable energy deployments in the world and give me a report on solar panel deployment," or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier of what people ask the models to do. And that also then gives us insight into: okay, where do things break down? How can we improve the model in these particular areas, in order to sort of make the next generation even better?
Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or, like, test sets you use internally? Because it's almost like the same benchmarks get reported every time. And it's like, all right, it's, like, 99 instead of 97. Like, how do you keep pushing the team internally? Or, like, "this is what we're building towards"? Yeah.
Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I like to think of the best kinds of benchmarks as ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability for whatever it is the benchmark is trying to assess, and get it up to, like, 80, 90%, whatever. I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, 'cause it's either the case that you've now achieved that capability, or there's also the issue of leakage in public data, or very related kind of data being in your training data. Um, so we have a bunch of held-out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have, um, that it doesn't have now, and then we can work on, you know, assessing: how do we make the model better at these kinds of things? Is it that we need different kinds of data to train on, more specialized for this particular kind of task?
Do we need, um, you know, a bunch of architectural improvements, or some sort of model capability improvements? You know, what would help make that better?
Shawn Wang [00:12:53]: Is there such an example, where a benchmark inspired an architectural improvement? I'm just kind of jumping on that because you just...
Jeff Dean [00:13:02]: Uh, I mean, I think some of the long-context capability of the Gemini models, which came, I guess, first in 1.5, really was about looking at: okay, we want to have, um, you know,
Shawn Wang [00:13:15]: Immediately everyone jumped to, like, completely green charts. Like, everyone had... I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.
Jeff Dean [00:13:23]: I mean, I think, um, as you say, that single needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something. Models don't actually go, you know, much larger than 128K these days, or, we're trying to push the frontier to 1 million or 2 million context, which is good, because I think there are a lot of use cases where, yeah, you know, putting a thousand pages of text, or putting, you know, multiple-hour-long videos in the context, and then actually being able to make use of that, is useful. The opportunities to explore there are fairly large. But the single-needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle, or more realistic "take all this content and produce this kind of answer from a long context" benchmarks, which better assess what it is people really want to do with long context. Which is not just, you know, "can you tell me the product number for this particular thing?"
Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting, because I think the more meta level I'm trying to operate at here is: you have a benchmark, and you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's what Jason Wei, who used to work at Google, would say: exactly the kind of thing where you're going to win short term; longer term, I don't know if that's going to scale. You might have to undo that.
Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that, you know, long context is useful, but it's way too short today, right? Like, I think what you would really want is: can I attend to the internet while I answer my question? Right? But that's not going to happen... I don't think that's going to be solved by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state, with your permission.
So, like, your emails, your photos, your docs, the plane tickets you have. I think that would be really, really useful. And the question is: how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way? Yeah.
Shawn Wang [00:16:26]: But, by the way, I think I did some math, and it's like, if you spoke all day, every day, for eight hours a day, you only generate a maximum of, like, a hundred K tokens, which, like, very comfortably fits.
Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...
Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into, like, proteins and whatever else that is extremely information dense. Yeah. Yeah.
Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, to some people that means text and images and video and audio, sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from, say, Waymo vehicles, or robots, or various kinds of health modalities: x-rays and MRIs and imaging and genomics information. And I think there are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Where, even if you haven't trained on all the LIDAR data or MRI data (you could have, but maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix), at least including a little bit of it is actually quite useful. Yeah. Because it sort of hints to the model that this is a thing.
Shawn Wang [00:18:04]: Yeah. Do you believe... I mean, since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, like modalities that supersede all the other modalities? So a simple example was: vision can, on a pixel level, encode text. And DeepSeek had this DeepSeek-OCR paper that did that. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality. Yeah.
Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, like video as opposed to static images, because, I mean, there's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you. Which is really what we want these models to do: interpret the things we're seeing, or the things we're paying attention to, and then help us in using that information to do things. Yeah.
Shawn Wang [00:19:05]: I think motion, you know... I still want to shout out: I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.
Jeff Dean [00:19:15]: Yeah. Yeah. I mean, it's actually... I think people are kind of not necessarily aware of what the Gemini models can actually do. Yeah. Like, I have an example I've used in one of my talks.
It was, like, a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has, like, Michael Jordan hitting some jump shot at the end of the finals, and, you know, some soccer goals and things like that. And you can literally just give it the video and say: can you please make me a table of what all these different events are, what the date is, when they happened, and a short description? And so you now get an 18-row table of that information extracted from the video, which is, you know, not something most people think of as, like, a turn-video-into-a-SQL-like-table capability.
Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, like... you mentioned attending to the whole internet, right? Google was almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus, for an LLM, should you expect to have 20 links that are highly relevant? Like, how do you internally figure out, you know, how do we build the AI mode that is maybe much broader in search and span versus, like, the more human one? Yeah.
Jeff Dean [00:20:47]: I mean, I think even with pre-language-model-based work, you know, our ranking systems would be built to start with a giant number of web pages in our index, many of them not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to, like, 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated sorts of signals of various kinds, in order to get down to ultimately what you show, which is, you know, the final 10 results, or, you know, 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents, with, you know, maybe 30 million interesting tokens. And then how do you go from that into: what are the 117 documents I really should be paying attention to in order to carry out the task the user has asked? And I think, you know, you can imagine systems where you have, you know, a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from the 30,000 to the 117, with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that, which really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you... you know, not the illusion: you are searching the internet, but you're finding, you know, a very small subset of things that are relevant.
Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in Google search history that, well, you know, BERT was basically immediately put inside of Google search, and that improved results a lot, right?
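The funnel Jeff describes maps naturally onto a staged cascade. Here's a toy sketch of that shape: cheap scoring over everything, a mid-range model over the survivors, and the most capable model only over the final handful. The stage sizes echo his 30,000 → 117 example; the scoring functions are invented stand-ins, not Google's ranking signals.

```python
# Toy retrieval cascade: progressively more expensive models over smaller sets.
# Scorers are stand-ins for real ranking signals and models.

def cheap_score(query: str, doc: str) -> float:
    # Stand-in for a lightweight signal (e.g. term overlap).
    return len(set(query.split()) & set(doc.split()))

def midrange_score(query: str, doc: str) -> float:
    # Stand-in for a smallish ranking model.
    return cheap_score(query, doc) / (1 + abs(len(doc.split()) - 50))

def expensive_read(query: str, docs: list[str]) -> str:
    # Stand-in for the most capable model attending to the final set.
    return f"answer for {query!r} synthesized from {len(docs)} documents"

def cascade(query: str, corpus: list[str]) -> str:
    stage1 = sorted(corpus, key=lambda d: cheap_score(query, d),
                    reverse=True)[:30_000]      # highly parallel, lightweight
    stage2 = sorted(stage1, key=lambda d: midrange_score(query, d),
                    reverse=True)[:117]         # more sophisticated narrowing
    return expensive_read(query, stage2)        # frontier model, tiny set
```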
Like, I don't have any numbers off the top of my head, but I'm sure you guys do; that's obviously the most important number to Google. Yeah.
Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit, hard notion of particular words having to be on the page, and really get at the notion of: the topic of this page, or this paragraph, is highly relevant to this query. Yeah.
Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high-traffic systems. Yeah. Like, it's Google, it's YouTube. YouTube has this semantic ID thing, where every item in the vocab is a YouTube video or something that predicts the video using a codebook, which is absurd to me at YouTube's size. And then, most recently, Grok also, for xAI, which is, like, yeah.
Jeff Dean [00:23:50]: I mean, I'll call out that, even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.
Shawn Wang [00:24:06]: So do you have, like, a history of, like, what's the progression? Oh yeah.
Jeff Dean [00:24:09]: I mean, I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009, where... we never actually published any papers about the origins of Google search, sort of, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So, one: we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality, in general, because if you don't have the page in your index, you're not going to do well. Um, and then we also needed to scale our capacity, because our traffic was growing quite extensively. Um, and so we had, you know, a sharded system where you have more and more shards as the index grows. You have, like, 30 shards, and then, if you want to double the index size, you make 60 shards, so that you can bound the latency with which you respond to any particular user query. Um, and then, as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that, in a data center where we had, say, 60 shards and, um, you know, 20 copies of each shard, we now had 1,200 machines with disks. And we did the math and we're like: hey, one copy of that index would actually fit in memory across 1,200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And so, as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms that you throw into the query from the user's original three- or four-word query, because now you can add synonyms like restaurant and restaurants and cafe and, uh, you know, things like that. Bistro and all these things.
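The capacity math in that story is worth spelling out, since it's a textbook back-of-the-envelope calculation. A quick version, where the shard and replica counts are the ones Jeff gives, but the index size and per-machine RAM are assumed round numbers:

```python
# Back-of-envelope for the 2001 in-memory index decision.
# Shard/replica counts are from the anecdote; RAM and index sizes are assumed.

shards = 60          # index split 60 ways to bound per-query latency
replicas = 20        # copies of each shard to handle traffic growth
machines = shards * replicas
print(machines)      # 1200 machines, each with disks

ram_per_machine_gb = 2     # assumed, early-2000s-class hardware
index_size_gb = 2000       # assumed total index size
total_ram_gb = machines * ram_per_machine_gb
print(total_ram_gb >= index_size_gb)  # True: one in-memory copy now fits

# Why it mattered: at ~10 ms per disk seek, per term, per shard, a 50-term
# expanded query was unaffordable; at ~100 ns per RAM lookup, it's cheap.
```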
And you can suddenly start really getting at the meaning of the words, as opposed to the exact form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed, in order to get at the meaning.
Alessio Fanelli [00:26:47]: What are principles that you use to design these systems? Especially when, I mean, in 2001 the internet is, like, doubling, tripling every year in size, and I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any, you know, principles that you use to think about this? Yeah.
Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what the design parameters are that are going to be most important in designing it, you know? So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? Um, what happens if traffic were to double or triple? You know, will that system work well? And I think a good design principle is: you're going to want to design a system so that the most important characteristics can scale by, like, factors of five or ten, but probably not beyond that. Because often what happens is, if you design a system for X, and something suddenly becomes a hundred X, that enables a very different point in the design space, one that would not make sense at X but, all of a sudden, at a hundred X, makes total sense. So, like, going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines actually can hold, uh, you know, a full copy of the index in memory. Yeah. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Yeah. Um, so I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little, before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. The update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.
Shawn Wang [00:28:55]: Yeah.
Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in, like, sub one minute. Okay.
Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?
Jeff Dean [00:29:04]: Because, all of a sudden, for news-related queries, you know, if you've got last month's news index, it's not actually that useful.
Shawn Wang [00:29:11]: News is a special beast. Was there any... like, you could have split it onto a separate system.
Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to also be sort of updated.
Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to, like, classify the pages; you have to decide which pages should be updated and at what frequency.
Oh yeah.
Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and the importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.
Shawn Wang [00:29:50]: Yeah, yeah, yeah, yeah. Uh, well, you know, yeah. This mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is "latency numbers every programmer should know." What's the story behind that? Did you just write it down?
Jeff Dean [00:30:06]: I mean, this has, like, sort of eight or ten different kinds of metrics, like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something? Um,
Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?
Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands. Um, so, I mean, I think this gets to the point of being able to do back-of-the-envelope calculations. So these are sort of the raw ingredients of those, and you can use them to say: okay, well, if I need to design a system to do image search and thumbnailing or something for the result page, how would I do that? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth do I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute, with the sort of basic numbers at your fingertips. Uh, and then, as you sort of build software using higher-level libraries, you kind of want to develop the same intuitions for how long it takes to, you know, look up something in this particular kind of...
Shawn Wang [00:31:51]: ...which is a simple byte conversion, that's nothing interesting. I wonder, if you were to update your...
Jeff Dean [00:31:58]: I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference. Often a good way to view that is: how much state will you need to bring in from memory, either, like, on-chip SRAM, or HBM, the accelerator-attached memory, or DRAM, or over the network? And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's, on the order of, depending on your precision, I think, sub one picojoule.
Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.
Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy, and how do you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so, all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you'd better make use of that thing that you moved, many, many times.
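Jeff's numbers make the batching argument easy to run yourself. A quick sketch, using his illustrative figures (roughly 1 picojoule per multiply, roughly 1,000 picojoules to move a weight to the multiplier); these are order-of-magnitude stand-ins, not chip specs:

```python
# Back-of-envelope energy view of batching, using illustrative figures.

PJ_PER_MULTIPLY = 1.0       # ~1 pJ per multiply (precision-dependent)
PJ_PER_WEIGHT_MOVE = 1000.0  # ~1000 pJ to move one weight to the multiplier

def energy_per_token_pj(batch_size: int) -> float:
    # Moving a weight is paid once per batch; the multiply is paid per token.
    return PJ_PER_WEIGHT_MOVE / batch_size + PJ_PER_MULTIPLY

for b in (1, 8, 256):
    print(f"batch {b:>3}: {energy_per_token_pj(b):8.1f} pJ per token per weight")
# batch   1:   1001.0  -> dominated by data movement
# batch   8:    126.0
# batch 256:      4.9  -> movement amortized across the batch
```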
So that's where the batch dimension comes in. Because, all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.
Shawn Wang [00:33:40]: Yeah. Yeah. Right.
Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.
Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.
Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one, because the latency would be great.
Shawn Wang [00:33:56]: The best latency.
Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.
Shawn Wang [00:34:04]: Is there a similar trick, like the one you did with, you know, putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something that you already saw with the TPUs, right? Like, to serve at your scale, you probably sort of saw that coming. Like, what hardware innovations or insights were formed because of what you're seeing there?
Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. Um, I think, for serving some kinds of models, you pay a lot higher cost and time latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish-scale model over, say, 16 or 64 chips. But if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.
Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like, how do you decide where the improvements have to go? So, like, this is a good example: is there a way to bring the thousand picojoules down to 50? Like, is it worth designing a new chip to do that? The extreme is when people say: oh, you should burn the model onto the ASIC, and that's kind of the most extreme thing. How much of it is worth doing in hardware when things change so quickly? Like, what was the internal discussion? Yeah.
Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design architecture team and the sort of higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like, based on where we think the sort of ML research puck is going, in some sense. Because, you know, as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to sort of have a reasonable lifetime as a chip, to take you three, four, five years. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast-changing field. And so having people with
interesting ML research ideas, of things we think will start to work in that timeframe, or will be more important in that timeframe, really enables us to then get, you know, interesting hardware features put into, you know, TPU N+2, where TPU N is what we have today.
Shawn Wang [00:37:10]: Oh, the cycle time is plus two.
Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N+1, but, you know, bigger changes are going to require the chip design to be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area but, if they work out, would make something, you know, 10 times as fast. And if they don't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change, and we want to be pretty sure it's going to work out. So we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.
Alessio Fanelli [00:37:58]: Is there a reverse of: we already committed to this chip design, so we cannot take the model architecture that way, because it doesn't quite fit?
Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips that you're going to have for both training and inference of that generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower-precision things that are coming in a future generation, so you might train at that lower precision, even if the current generation doesn't quite do that. Mm.
Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying, like, ternary is, like, uh...
Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount of energy and time, right? Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. Um, you know, I think people have gotten a lot of mileage out of having very low bit-precision things, but then having scaling factors that apply to a whole bunch of those weights.
Shawn Wang [00:39:15]: Scaling... how does it... okay. Interesting. So, low precision, but with scaling factors. Yeah. Huh. Never considered that. Yeah. Interesting.
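That "low-bit weights plus per-block scaling factors" pattern is straightforward to sketch. Below is a minimal blockwise quantizer; the 4-bit width and block size of 32 are illustrative choices, not a specific TPU or Gemini format.

```python
# Minimal sketch of low-bit weights with per-block scaling factors.
# Bit width and block size are illustrative, not a real hardware format.
import numpy as np

def quantize_blockwise(weights: np.ndarray, block: int = 32, bits: int = 4):
    qmax = 2 ** (bits - 1) - 1                   # e.g. 7 for signed 4-bit
    w = weights.reshape(-1, block)
    # One higher-precision scale per block preserves each block's range.
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    q = np.round(w / scales).astype(np.int8)     # low-bit integer codes
    return q, scales

def dequantize_blockwise(q, scales, shape):
    return (q * scales).reshape(shape)           # scale restores dynamic range

w = np.random.randn(4, 64).astype(np.float32)
q, s = quantize_blockwise(w)
w_hat = dequantize_blockwise(q, s, w.shape)
print(float(np.abs(w - w_hat).max()))  # small reconstruction error
```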
Shawn Wang: While we're on this topic, the concept of precision at all is weird when we're sampling, you know? At the end of this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards energy-based models and processors. Obviously you've thought about it; I'm just curious what your commentary is.

Jeff Dean [00:39:50]: Yeah. I mean, I think there's a bunch of interesting trends there. Energy-based models is one. Diffusion-based models, which don't sequentially decode tokens, is another. And speculative decoding is a way you can get sort of an equivalent...

Shawn Wang [00:40:06]: Draft.

Jeff Dean [00:40:07]: ...batch factor. You predict eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, and then you maybe accept five or six of those tokens. So you get a five X improvement in the amortization of moving weights into the multipliers to do the prediction for those tokens. So these are all really good techniques, and I think it's really good to look at them from the lens of energy (real energy, not energy-based models) and also latency and throughput, right? If you look at things from that lens, it guides you to solutions that are going to be better from the standpoint of serving larger models, or equivalent-size models more cheaply and with lower latency.
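The "predict eight, accept five or six" arithmetic can be simulated in a few lines. A toy sketch; the 90% per-token acceptance rate is an assumed figure chosen so the average lands in the range Dean quotes:

```python
import random

DRAFT_LEN = 8     # tokens the small draft model proposes per big-model pass
P_ACCEPT = 0.90   # assumed chance each draft token matches the big model

random.seed(0)

def tokens_per_verify_pass() -> int:
    # Acceptance is prefix-based: the first rejected draft token ends the
    # run, and the verifier emits one corrected token in its place.
    accepted = 0
    while accepted < DRAFT_LEN and random.random() < P_ACCEPT:
        accepted += 1
    return accepted + 1  # the verify pass always yields at least one token

trials = 100_000
avg = sum(tokens_per_verify_pass() for _ in range(trials)) / trials
# Each verify pass moves the weights once, so ~6 tokens per pass means ~6x
# amortization of weight movement versus one-token-at-a-time decoding.
print(f"average tokens per weight-loading pass: {avg:.1f}")
```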
Shawn Wang [00:41:03]: Yeah. Well, I think it's appealing intellectually. I haven't seen it really hit the mainstream, but I do think there's some poetry in the sense that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.

Jeff Dean [00:41:23]: I mean, there are also the more exotic things, like analog-based computing substrates as opposed to digital ones. I think those are super interesting, because they can potentially be low power. But you often end up wanting to interface them with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you do at the boundaries and periphery of that system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and specialized hardware for the models we care about.

Shawn Wang [00:42:05]: Yeah.

Alessio Fanelli [00:42:06]: Any other interesting research ideas that you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad. I would say, in terms of research directions, there's a whole bunch of open problems in how you make these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, say, one model that's using other models as tools, in order to build things that can accomplish much more significant pieces of work collectively than you would ask a single model to do? That's super interesting. And how do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden out the capabilities of the models. If we could apply the improvements you're seeing in both math and coding to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.

Alessio Fanelli [00:43:26]: I'm curious. When we had Noam Brown on the podcast, he said they already proved you can do it with Deep Research, and you kind of have it with AI Mode; in a way it's not verifiable. I'm curious if there's any thread you think is interesting there. Both are information retrieval jobs, so I wonder if the retrieval is the verifiable part that you can score, or what. How would you model that problem?

Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even the retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant, or something? I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic as opposed to an actual retrieval system. Yeah.
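The critic pattern Dean describes above — the same model, prompted differently, rating what a first pass retrieved — is mostly orchestration. A minimal sketch; `call_model` is a hypothetical stand-in for whatever chat API you use:

```python
JUDGE_PROMPT = (
    "Query: {query}\n\nDocument: {doc}\n\n"
    "Rate this document's relevance to the query from 0 to 10. "
    "Reply with a single integer."
)

def call_model(prompt: str) -> str:
    # Hypothetical stub: wire up your actual model client here. The judge
    # can be the same model that did retrieval, just prompted as a critic.
    raise NotImplementedError

def rerank(query: str, docs: list[str], keep: int = 50) -> list[str]:
    scored = []
    for doc in docs:
        reply = call_model(JUDGE_PROMPT.format(query=query, doc=doc))
        try:
            score = int(reply.strip())
        except ValueError:
            score = 0  # an unparseable judgment counts as irrelevant
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # e.g. rate 2,000 retrieved documents, keep the 50 most relevant
    return [doc for _, doc in scored[:keep]]
```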
Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard and nobody's figured it out. But it always feels like that every year. And exactly with this RLVR thing, everyone's talking about, well, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know... LLM judge?

Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Because I think everyone sees that the models are great at some things, and they fall down around the edges of those things, and are not as capable as we'd like in those areas. And then coming up with good techniques, trying those, and seeing which ones actually make a difference is what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM8K problems, right? Like, Fred has two rabbits, he gets three more rabbits, how many rabbits does he have? That's a pretty far cry from the kinds of mathematics the models can do now, where you're doing IMO and Erdős problems in pure language. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And I think for other areas it'd be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do see it for some others, and we're going to work hard on making that better. Yeah.

Shawn Wang [00:46:13]: Yeah.

Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.

Shawn Wang [00:46:20]: That would be, as far as content creators go.

Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.

Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out. Just to dwell a bit on the IMO gold: I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM.

Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me, because, you know, humans manipulate symbols, but we probably don't have a symbolic representation in our heads. Right? We have some distributed representation, neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things. And that enables us to reason and plan and do chains of thought, and to roll them back: "now that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one." And in a lot of ways we're emulating what we intuitively think is happening inside real brains in neural-net-based models. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about those things.

Shawn Wang [00:47:59]: Interesting. Yeah. I mean, maybe it seems obvious to you, but it wasn't obvious to me a year ago. Yeah.

Jeff Dean [00:48:06]: I mean, I do think that progression — the IMO with translating to Lean and using Lean, plus a specialized geometry model, and then this year switching to a single unified model that is roughly the production model with a little bit more inference budget — is actually quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train separate models for each different problem. I want to recognize street signs, so I train a street sign recognition model; I want to do speech recognition, I have a speech model. I think the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. And they're getting better and better.

Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like, yeah, I don't know how they work, I don't know where the IMO competition was held, I don't know the rules of it. I just trained the models. And it's kind of interesting that people with this universal skill set of machine learning, you just give them data and enough compute, and they can kind of tackle any task. Which is the bitter lesson, I guess. I don't know. Yeah.

Jeff Dean [00:49:39]: I mean, I think general models will win out over specialized ones in most cases.

Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here, which is this:
There's this concept of, maybe, the capacity of a model: abstractly, a model can only contain the number of bits that it has. And God knows, Gemini Pro is, like, one to ten trillion parameters; we don't know. But take the Gemma models, for example. A lot of people want the open-source local models, and those models carry some knowledge which is not necessary, right? They can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling down to the small models, you're actually memorizing things that are not useful. So I guess, do we want to extract that? Can we divorce knowledge from reasoning, you know?

Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space; you might prefer something that is more generally useful in more settings than this obscure fact. So I think that's always a tension. At the same time, you also don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just as a general sense of how long bridges are, right? And it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some more obscure part of the world is, but it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval... yeah.

Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?

Jeff Dean [00:52:01]: Like, we're not going to train Gemini on my email, probably. We'd rather have a single model that we can then use, with the ability to retrieve from my email as a tool, and have the model reason about it, and retrieve from my photos or whatever, and then make use of that, with multiple stages of interaction. That makes sense.
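The "multiple stages of retrieval" loop sketches out the same way as the critic above. Here, `search` and `call_model` are hypothetical stubs; the point is the control flow of retrieving, reasoning over intermediate results, and deciding whether to retrieve again:

```python
def search(source: str, query: str) -> list[str]:
    # Hypothetical stub: e.g. retrieve from email, photos, or documents.
    raise NotImplementedError

def call_model(prompt: str) -> str:
    raise NotImplementedError  # hypothetical model-client stub

def answer(question: str, max_rounds: int = 3) -> str:
    context: list[str] = []
    query = question
    for _ in range(max_rounds):
        context += search("email", query)
        decision = call_model(
            f"Question: {question}\n"
            "Retrieved so far:\n" + "\n".join(context) + "\n"
            "If you can answer now, reply 'ANSWER: <answer>'. "
            "Otherwise reply 'SEARCH: <refined query>'."
        )
        if decision.startswith("ANSWER:"):
            return decision.removeprefix("ANSWER:").strip()
        query = decision.removeprefix("SEARCH:").strip()
    # Fall back to a best-effort answer from whatever was gathered.
    return call_model(f"Question: {question}\nContext:\n" + "\n".join(context))
```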
Alessio Fanelli [00:52:24]: Do you think the vertical models are an interesting pursuit? When people say, oh, we're building the best healthcare LLM, we're building the best law LLM, are those kind of short-term stopgaps, or...?

Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain. For healthcare, say, or for, say, robotics: we're probably not going to train Gemini on all possible robotics data you could train it on, because we want it to have a balanced set of capabilities. So we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. We're always making these kinds of trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, you know, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but there are other long-tail computer languages or coding capabilities it may suffer on, or multimodal reasoning capabilities may suffer, because we didn't get to expose it to as much data there, even though it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics module, plus this awesome healthcare module, all of which can be knitted together to work in concert and called upon in different circumstances. Right? Like, if I have a health-related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.

Shawn Wang [00:54:36]: Installable knowledge. Yeah.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, a hundred billion tokens or a trillion tokens of health data. Yeah.

Shawn Wang [00:54:51]: And for listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think. Yeah.

Alessio Fanelli [00:54:56]: Yeah. I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare while the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? And if I need a trillion healthcare tokens, they're probably not out there, you know. I think that's really the question.

Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain. There's a lot of healthcare data that, you know, we don't have access to, appropriately. But there are a lot of healthcare organizations that want to train models on their own data, data that is not public. So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be more bespoke, but probably might be better than a general model trained on, say, public data. Yeah.

Shawn Wang [00:55:58]: Yeah. I believe so. By the way, this is somewhat related to the language conversation: I think one of your favorite examples was that you can put a low-resource language in the context and it just learns. Yeah.
Jeff Dean [00:56:09]: Oh, yeah. I think the example we used was Kalamang, which is truly low-resource, because it's only spoken by, I think, 120 people in the world, and there's no written text.

Shawn Wang [00:56:20]: So you can just do it that way, just put it in the context. But, I mean, your whole data set in the context, right?

Jeff Dean [00:56:27]: If you take a language like, you know, Somali or something, there is a fair bit of Somali text in the world, or Ethiopian Amharic or something. We're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.

Shawn Wang [00:56:49]: Yeah.
Wen Quan Cheong, co-manager of Mawer's emerging markets equity strategy, outlines four major themes shaping the opportunity set today. First, the "picks and shovels" of AI: upstream enablers such as advanced chip manufacturers, memory makers, and specialized chip-testing firms that are benefiting from structural bottlenecks in the AI supply chain. Second, companies that are actually converting AI investment into higher returns on capital. Third, the "Great Supply Chain Reshuffle," where national security concerns, tariffs, and "China plus one" strategies are driving a reconfiguration of strategic manufacturing infrastructure across Asia and the U.S. And finally, a broader universe of less obvious EM stories that illustrate how opportunity is evolving across regions and sectors as these forces play out.

Highlights:
- Why upstream AI enablers are seeing such powerful earnings leverage: how capacity cuts, equipment bottlenecks, and surging demand for DRAM, HBM, and NAND have flipped the memory market from oversupplied to structurally tight.
- What it takes for companies to truly convert AI investment into sustainable returns on invested capital, and why early, well-run adopters may enjoy a multi-year edge.
- How shifting geopolitics, U.S. tariffs, and national security concerns are driving a "Great Supply Chain Reshuffle," from TSMC-linked clean room specialists like Actor Group supporting new fabs to Chinese manufacturers using their domestic scale and integration to expand overseas.
- Why emerging markets are more than just China and tech, with examples ranging from Saudi insurance aggregation and Vietnamese pharmacies to ship maintenance businesses with recurring revenues.

Host: Rob Campbell, CFA, Institutional Portfolio Manager
Guest: Wen Quan Cheong, CFA, Portfolio Manager

This episode is available for download anywhere you get your podcasts. Founded in 1974, Mawer Investment Management Ltd. (pronounced "more") is a privately owned independent investment firm managing assets for institutional and individual investors. Mawer employs over 250 people in Canada, the U.S., and Singapore. Visit Mawer at https://www.mawer.com.
Follow us on social:
LinkedIn - https://www.linkedin.com/company/mawer-investment-management/
Instagram - https://www.instagram.com/mawerinvestmentmanagement/
Samsung prepares to ship HBM4 after the Lunar New Year and accelerate memory for AI
By Félix Riaño @LocutorCo

On February 9, 2026, semiconductor fabs in South Korea are operating on a special calendar. The Lunar New Year, known locally as Seollal, will be celebrated on February 17, and the official holidays run from February 16 to 18. During those days, much of the country's industrial activity will stop or run in a limited mode. In that context, Samsung Electronics has confirmed that it will return to full production right after the holiday to begin the first commercial shipments of its HBM4 memory to Nvidia. This memory is destined for Nvidia's next artificial intelligence accelerators, and its production schedule is directly shaped by this annual pause, one of the most significant of the year for the Asian tech industry.

Artificial intelligence depends on very concrete industrial calendars. Modern AI runs on data centers that process enormous volumes of information around the clock. At the core of those systems are processors designed by Nvidia, a U.S. company specializing in graphics processing units, or GPUs. These chips excel at performing many calculations at once, but their performance depends directly on the memory that feeds them. High Bandwidth Memory, or HBM, is a type of memory created for exactly that purpose. Unlike traditional memory, HBM is stacked in layers and placed very close to the processor, which lets data move faster with lower power consumption. The technology has evolved in stages: HBM, HBM2, HBM2E, HBM3, HBM3E, and now HBM4. Each generation responds to the rising demand driven by ever-larger AI models. Samsung Electronics developed HBM4 using its 1c DRAM process, the sixth generation in the ten-nanometer class, together with a logic base die built on four-nanometer technology.

The transition to HBM4 comes after a difficult stretch for Samsung. In the previous generation, HBM3E, the company failed to position itself as quickly as SK hynix, another South Korean memory specialist. SK hynix became Nvidia's main HBM supplier and captured most of the contracts tied to the AI boom, while Micron Technology, the U.S. memory maker, was left in a secondary position in this category. As AI demand kept growing, global memory manufacturing capacity became a scarce resource. The problem worsens every year around the Lunar New Year, when fabs in South Korea, China, and other Asian countries scale back activity for several days. That pause ripples through global supply chains and forces precise planning of what gets made before the holiday and what ships after it.

Against this backdrop, Samsung has arranged its calendar so that HBM4 production and shipments begin immediately after Seollal. At its Pyeongtaek industrial complex, one of the largest semiconductor manufacturing sites in the world, the company is expanding the P4 line to produce between 100,000 and 120,000 wafers per month dedicated to HBM4.
Together with its other lines, the goal is to reach around 200,000 wafers per month, a meaningful share of its total DRAM output. The first shipments to Nvidia are expected in the third week of February, in line with Nvidia's plans to unveil its new AI accelerator platform, called Vera Rubin, at the GTC 2026 conference scheduled for March. Although analysts estimate that SK hynix will keep a larger share of supply, getting to market early lets Samsung strengthen its technical and commercial position.

HBM4 brings meaningful gains in energy efficiency over the previous generation. That matters especially for data centers that run around the clock, where power and cooling account for a sizable share of costs. Nvidia needs this kind of memory to reach total bandwidth above twenty terabytes per second in its most advanced systems; without HBM4, that level of performance would not be feasible. At the same time, memory makers' focus on HBM reduces the supply of conventional memory for PCs and mobile devices, which keeps pressure on prices. In this context, memory manufacturers no longer influence just components, but the overall pace of technological innovation.

Days before the Lunar New Year, Samsung is preparing to switch on HBM4 production and shipments to Nvidia. This memory will be a centerpiece of the next generation of AI systems, and the Asian industrial calendar is once again setting the global tempo. Listen to more stories like this and follow Flash Diario on Spotify.
Days before the Lunar New Year, Samsung gets ready to ship HBM4 to Nvidia and accelerate artificial intelligence.
For the 100th episode of Astonishing Healthcare, we welcomed AJ Loiacono, our co-founder and CEO, back to the show for a lively discussion about the evolution of our industry and business. What started as a transparent pharmacy benefits manager (PBM) in the "age of indifference" is now a more comprehensive health benefits manager (HBM), and we've entered the "era of acceptance." It's been an incredible 8+ years of growth, fueled by innovation and an unwavering commitment to our clients and delivering on our mission: to build the infrastructure our country needs to deliver the healthcare we deserve. But we had to endure an "age of confusion" to get here!

AJ explains why traditional healthcare giants are facing a "BlackBerry moment" - trying to emulate a conflict-free challenger when "it's already too late." The balance of power is shifting away from the traditional PBMs, as the industry now demands full transparency - buyers of health benefits today are smarter than ever before. We also discuss how and why the U.S. wastes [at least] a trillion dollars annually by trying to deliver care using inefficient, fragmented systems; we built the infrastructure to stop it. This episode isn't just a retrospective; it's a blueprint of sorts, and we've got the cultural DNA required to bring about sustainable change (vs. just daydreaming about it).

Related Content:
- Replay - Unifying Medical and Pharmacy Benefits: The Blueprint for Better Employee Health and Wellness
- Judi Health's Capital Rx Surpasses Five Million Contracted PBM Lives as America's Largest Employers, Unions, and Leading Health Systems Evolve Their Health Benefits Strategies
- AH095 - What's in Store for the New Year? A Special Round-Robin Episode of Astonishing Healthcare
- Health Benefits 101: Service Excellence & Scaling an Award-Winning Call Center Model

For more information about Judi Health and this episode, please visit Judi Health - Insights.
In this episode, Katherine Forrest and Scott Caravello take us down "memory lane" to explain the importance of high bandwidth memory (HBM) and RAM to AI development. Our hosts also give us a rundown of potential challenges ahead, unpacking developments in the market for memory, including plans for additional capacity and lobster-style RAM pricing.
Learn more about Paul, Weiss's Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence
See my $350,000+ Stock Portfolio: https://www.patreon.com/citizenoftheyear/posts
Join the Discord: https://discord.gg/Gq8hGbg2Cq
Check out these AMAZING Deals: https://amzn.to/3NGmBPT

Micron stock has surged because the company has become a key supplier of memory chips for AI, especially high-bandwidth memory (HBM), which is already sold out through 2026. Strong AI demand, record earnings, and unusually high profit margins have caused investors to view Micron less as a cyclical memory stock and more as critical AI infrastructure. The big risk for Micron stock long term is whether competitors like Samsung flood the market and drive prices down, or if memory truly stays essential to the AI economy.

Check out my favorite research tool, Seeking Alpha!
Premium: https://link.seekingalpha.com/3B2L85W/4G6SHH/
Alpha Picks: https://www.sahg6dtr.com/3B2L85W/J8P3N/

Disclaimer: This is not financial advice and I am not a licensed financial advisor. Always do your own research before investing and work with a licensed financial advisor. These are my opinions for informational purposes only and not to be taken as investing advice. Some of the links on this page are affiliate links, meaning, at no additional cost to you, I may earn a commission if you click through and make a purchase and/or subscribe. As an Amazon Associate, I earn from qualifying purchases. Affiliate commissions help fund videos like this one.
Linktree: https://linktr.ee/Analytic
Join The Normandy for additional bonus audio and visual content for all things NME+! Join here: https://ow.ly/msoH50WCu0K

In the Notorious Mass Effect segment, Analytic Dreamz dives deep into the RAM Price Crisis (2025–2026), unpacking the key data, market drivers, and real consumer impact behind the dramatic surge in memory costs.

RAM prices have skyrocketed into a sustained inflation cycle heading into 2026, fueled by explosive AI data center demand that prioritizes high-bandwidth memory (HBM) and diverts supply from consumer DRAM. Manufacturing bottlenecks, limited cleanroom capacity, and lithography constraints exacerbate the shortage, while major players like Micron exit consumer RAM sales (Crucial brand in December 2025) to focus on higher-margin AI segments. Samsung and SK hynix report massive profit surges amid the boom.

DDR5 RAM has seen prices more than quadruple (+340–344%) since July 2025, with a +27% month-on-month jump from December to January 2026. DDR4 and older standards are rising even faster recently (+46% MoM in January), narrowing the gap with newer tech. ComputerBase's fixed-basket analysis confirms average prices have quadrupled versus September 2025, with Germany's retail tracking (Europe's largest PC hardware market) mirroring global trends, including growing secondary-market distortions.

Secondary effects hit related components hard: SSDs up +79%, hard drives +53%, GPUs +14% (with street prices far exceeding MSRP on models like the RTX 5070 Ti). Specific examples include 2TB NVMe drives jumping 60–159% and NAS HDDs doubling.

Analyst forecasts from TrendForce and Omdia point to +50–60% DRAM contract price hikes in Q1 2026, following 40–70% YoY increases in 2025. PC shipments grew +9.2% in 2025 but face potential declines in 2026, while smartphone output forecasts drop ~20% for some brands, risking +30% price hikes or spec downgrades. Gaming consoles may see delays or higher launch prices. Apple's upgrade costs (e.g., $400 for 16GB→32GB) already outpace comparable DDR5 sticks, with M6 Macs potentially facing steeper hikes or supply delays if AI firms continue outbidding.

The core takeaway: this AI-driven structural shift has quadrupled RAM prices in under six months, with volatility persisting through 2026. A plateau is the most optimistic scenario—no full reversal in sight. Analytic Dreamz breaks down the data, root causes, and widespread ripple effects across PCs, smartphones, and beyond.

Support this podcast at — https://redcircle.com/analytic-dreamz-notorious-mass-effect/donations
Privacy & Opt-Out: https://redcircle.com/privacy
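The two DDR5 figures in that segment are consistent with each other: a +27% month-on-month pace, held from July to January, compounds to roughly the reported quadrupling. A quick check, assuming a uniform monthly rate (a simplification):

```python
# Consistency check on the DDR5 numbers above: does +27% month-on-month,
# held from July 2025 to January 2026, reproduce the reported ~4x rise?
# A uniform monthly rate is a simplifying assumption.
monthly_rate = 1.27
months = 6  # July 2025 -> January 2026
total = monthly_rate ** months
print(f"{total:.2f}x overall, i.e. +{(total - 1) * 100:.0f}%")
# ~4.2x (+320%), in the ballpark of the reported +340% "more than quadrupled".
```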
This Morning's Headlines
1. Tariff talks
2. Housing supply
3. Chip support
4. HBM race
5. Expelled
NIO ships its NWM update to 460K+ vehicles, Li Auto goes all-in on robots, and the memory crisis hits everyone. This is execution vs. vision vs. reality.

NIO NWM UPDATE (460K+ VEHICLES):
- Major "human-like" driving update using closed-loop reinforcement learning
- Learns from REAL human driving, not just experts
- Battery swap navigation: industry-first piloted driving to 2,000+ stations
- Shenji in-house chips (no NVIDIA delays)
- EXECUTION AT SCALE despite sales struggles

LI AUTO ROBOT PIVOT (LEAKED INTERNAL MEETING):
Jan 26 all-hands: Li Xiang announces humanoid robot push
Key points:
- 2026 = last year to become a top AI company
- Only 3 global companies will master foundation models + chips + OS + embodied intelligence
- Li Auto will be one
- Restructuring: cars + robots = "hardware ontology team"
- Aggressive hiring: "bring back employees who left for robot startups"
- Multiple robot R&D roles posted
Context: Sales 500K (2024) → 400K (2025), -20%. Pure EV struggling. Is this genius or desperation?

MEMORY CHIP CRISIS (AFFECTS ALL):
DDR4/DDR5 prices +40-70%, adding 1,000-2,000 yuan per vehicle
Stats:
Li Auto:
S&P futures are up +0.2% and pointing to a higher open today. Asian equities closed broadly higher Tuesday. SK Hynix has emerged as the exclusive supplier of HBM chips for Microsoft's Maia 200 AI chip, driving outsized gains in South Korea's markets. Japan's Nikkei was also higher on strength in exporters, while the Hang Seng led Greater China market gains. European markets are also higher in early trading. Companies Mentioned: Meta, SK Hynix, Ford, General Motors
Rambus (RMBS) has historically been known as an IP and patent powerhouse—but in 2026, the story has changed. With a massive memory chip shortage driving demand, Rambus is pivoting hard into becoming a fabless designer of memory interface chips. In this video, Nick and Kasey break down exactly how Rambus fits into the electronics manufacturing supply chain today. We analyze their transition from pure licensing to selling their own silicon (like memory interface chips for DDR5 and HBM), review their latest Q3 2025 financials, and discuss whether the current valuation makes sense for your portfolio.

Join us on Discord with Semiconductor Insider; sign up on our website: www.chipstockinvestor.com/membership
Supercharge your analysis with AI! Get 15% off your membership with our special link here: https://fiscal.ai/csi/
Sign Up For Our Newsletter: https://mailchi.mp/b1228c12f284/sign-up-landing-page-short-form
Memory shortages are all the rage in 2026. How should you play the AI data center supply crunch?

We discussed this back in 2025, and now it is here: memory shortages are hitting the AI data center supply chain across the board. But is this an AI bubble, or just a normal cyclical growth cycle? In this video, we break down the entire memory hierarchy—from ultra-fast on-chip SRAM to HBM and long-term storage—and give you the basket of companies to watch for each layer. We also discuss why Pure Storage is our top bet for secondary storage and how equipment suppliers like Lam Research could benefit as manufacturers race to expand capacity.

Join us on Discord with Semiconductor Insider; sign up on our website: www.chipstockinvestor.com/membership
Supercharge your analysis with AI! Get 15% off your membership with our special link here: https://fiscal.ai/csi/
Sign Up For Our Newsletter: https://mailchi.mp/b1228c12f284/sign-up-landing-page-short-form

Chapters:
00:00 – Memory Shortages: Bubble vs. Cyclical Growth
02:13 – The AI Memory Hierarchy Explained (SRAM, DRAM, NAND)
04:59 – SRAM Stocks: Nvidia, AMD, & Synopsys
06:50 – Embedded Memory: Weebit Nano & MRAM Players
07:46 – DRAM & HBM Leaders: SK Hynix, Micron, Samsung
09:00 – The NAND & HDD Resurgence (Seagate & WD)
11:00 – Why Pure Storage is a Top Bet
14:00 – The Fab Five & Lam Research Opportunity

If you found this video useful, please make sure to like and subscribe!
*********************************************************
Affiliate links are sprinkled in throughout this video. If something catches your eye and you decide to buy it, we might earn a little coffee money. Thanks for helping us (Kasey) fuel our caffeine addiction!
Content in this video is for general information or entertainment only and is not specific or individual investment advice. Forecasts and information presented may not develop as predicted and there is no guarantee any strategies presented will be successful. All investing involves risk, and you could lose some or all of your principal.
#semiconductors #chips #investing #stocks #finance #financeeducation #silicon #artificialintelligence #ai #chipstocks #stockmarket #chipstockinvestor #fablesschipdesign #chipmanufacturing #semiconductormanufacturing #semiconductorstocks
Nick and Kasey own shares of Nvidia, Micron, Pure Storage, SK hynix, Kioxia, and Lam Research.
Hey folks, Alex here from Weights & Biases, with your weekly AI update (and the first live show of this year!). For the first time, we had a co-host of the show also be a guest on the show: Ryan Carson (from Amp) went supernova viral this week with an X article (1.5M views) about Ralph Wiggum (yeah, from The Simpsons), and he broke down that agentic coding technique at the end of the show. LDJ and Nisten helped cover NVIDIA's incredible announcements during CES with their upcoming Vera Rubin platform (4-5X improvements), and we all got excited about AI medicine with ChatGPT going into health officially! Plus a bunch of open source news. Let's get into it:

Open Source: The "Small" Models Are Winning
We often talk about the massive frontier models, but this week open source came largely from unexpected places and focused on efficiency, agents, and specific domains.

Solar Open 100B: A Data Masterclass
Upstage released Solar Open 100B, and it's a beast. It's a 102B-parameter Mixture-of-Experts (MoE) model, but thanks to MoE magic it only uses about 12B active parameters during inference. This means it punches far above its weight while running fast.

What I really appreciated here wasn't just the weights, but the transparency. They released a technical report detailing their "Data Factory" approach. They trained on nearly 20 trillion tokens, with a huge chunk being synthetic. They also used a dynamic curriculum that adjusted the difficulty and the ratio of synthetic data as training progressed. This transparency is what pushes the whole open source community forward. Technically, it hits 88.2 on MMLU and competes with top-tier models, especially in Korean language tasks. You can grab it on Hugging Face.

MiroThinker 1.5: The DeepSeek Moment for Agents?
We also saw MiroThinker 1.5, a 30B-parameter model that is challenging the notion that you need massive scale to be smart. It uses something they call "Interactive Scaling." Wolfram broke this down for us: this agent forms hypotheses, searches for evidence, and then iteratively revises its answers in a time-sensitive sandbox. It effectively "thinks" before answering. The result? It beats trillion-parameter models on search benchmarks like BrowseComp. It's significantly cheaper to run, too. This feels like the year where smaller models + clever harnesses (harnesses are the software wrapping the model) will outperform raw scale.

Liquid AI LFM 2.5: Running on Toasters (Almost)
We love Liquid AI and they are great friends of the show. They announced LFM 2.5 at CES with AMD, and these are tiny ~1B-parameter models designed to run on-device. We're talking about running capable AI on your laptop, your phone, or edge devices (or the Reachy Mini bot that I showed off during the show! I gotta try and run LFM on him!).

Probably the coolest part is the audio model. Usually, talking to an AI involves a pipeline: Speech-to-Text (ASR) -> LLM -> Text-to-Speech (TTS). Liquid's model is end-to-end. It hears audio and speaks audio directly. We watched a demo from Maxime Labonne where the model was doing real-time interaction, interleaving text and audio. It's incredibly fast and efficient.
While it might not write a symphony for you, for on-device tasks like summarization or quick interactions, this is the future.

NousCoder-14B and Zhipu AI IPO
A quick shoutout to our friends at Nous Research, who released NousCoder-14B, an open-source competitive programming model that achieved a 7% jump in LiveCodeBench accuracy in just four days of RL training on 48 NVIDIA B200 GPUs. The model was trained on 24,000 verifiable problems, and the lead researcher Joe Li noted it achieved in 4 days what took him 2 years as a teenager competing in programming contests. The full RL stack is open-sourced on GitHub, and Nous published a great WandB results page as well!

And in historic news, Zhipu AI (Z.ai), the folks behind the GLM series, became the world's first major LLM company to IPO, raising $558 million on the Hong Kong Stock Exchange. Their GLM-4.7 currently ranks #1 among open-source and domestic models on both Artificial Analysis and LM Arena. Congrats to them!

Big Companies & APIs

NVIDIA CES: Vera Rubin Changes Everything
LDJ brought the heat on this one, covering Jensen's CES keynote that unveiled the Vera Rubin platform, and the numbers are almost hard to believe. We're talking about a complete redesign across six chips: the Rubin GPU delivering 50 petaFLOPS of AI inference (5x Blackwell), the Vera CPU with 88 custom Olympus ARM cores, NVLink 6, the ConnectX-9 SuperNIC, the BlueField-4 DPU, and Spectrum-6 Ethernet.

Let me put this in perspective using LDJ's breakdown: if you look at FP8 performance, the jump from Hopper to Blackwell was about 5x. The jump from Blackwell to Vera Rubin is over 3x again—but here's the kicker—while only adding about 200 watts of power draw. That's an insane efficiency improvement.

The real-world implications Jensen shared: training a 10-trillion-parameter mixture-of-experts model now requires 75% fewer GPUs compared to Blackwell. Inference token costs drop roughly 10x: a 1MW cluster goes from 1 million to 10 million tokens per second at the same power. HBM4 memory delivers 22 TB/s bandwidth with 288GB capacity, exceeding NVIDIA's own 2024 projections by nearly 70%.

As Ryan noted, when people say there's an AI bubble, this is why it's hilarious. Jensen keeps saying the need for inference is unbelievable and only going up exponentially. We all see this. I can't get enough inference—I want to spin up 10 Ralphs running concurrently! The NVL72 rack-scale system achieves 3.6 exaFLOPS of inference with 20.7TB of total HBM, and it's already shipping. Runway 4.5 is already running on the new platform, having ported their model from Hopper to Vera Rubin NVL72 in a single day. NVIDIA also recently acqui-hired Groq (with a Q) in a ~$20 billion deal, bringing in-house the inference chip expertise of the guy who created Google's TPUs.

Nemotron Speech ASR & The Speed of Voice (X, HF, Blog)
NVIDIA also dropped Nemotron Speech ASR. This is a 600M-parameter model that offers streaming transcription with 24ms latency. We showed a demo from our friend Kwindla Kramer at Daily. He was talking to an AI, and the response was virtually instant. The pipeline is: Nemotron (hearing) -> Llama/Nemotron Nano (thinking) -> Magpie TTS (speaking). The total latency is under 500ms. It feels like magic. Instant voice agents are going to be everywhere this year.

XAI Raises $20B While Grok Causes Problems (Again)
So here's the thing about covering anything Elon-related: it's impossible to separate signal from noise, because there's an army of fans who hype everything and an army of critics who hate everything.
But let me try to be objective here. XAI raised another massive Round E of $20 billion at a $230 billion valuation, with NVIDIA and Cisco as strategic investors. The speed of their infrastructure buildout is genuinely incredible. Grok's voice mode is impressive. I use Grok for research and it's really good, notable for its unprecedented access to X!

But. This raise happened in the middle of a controversy where Grok's image model was being used to "put bikinis" on anyone in reply threads, including—and this is where I draw a hard line—minors. As Nisten pointed out on the show, it's not even hard to implement guardrails. You just put a 2B VL model in front and ask "is there a minor in this picture?" But people tested it, asked Grok not to use the feature, and it did it anyway. And yeah, putting a bikini on Claude is funny, but basic moderation is lacking! The response of "we'll prosecute illegal users" is stupid when there's no moderation built into the product. There's an enormous difference between Photoshop technically being able to do something after hours of work, and a feature that generates edited images in one second as the first comment to a celebrity, then gets amplified by the platform's algorithm to millions of people. One is a tool. The other is a product with amplification mechanics. Products need guardrails. I don't often link to CNN (in fact, this is the first time), but they have a great writeup about the whole incident here, which apparently includes the quitting of a few trust and safety folks and Elon's pushback on guardrails. Crazy.

That said, Grok 5 is in training and XAI continues to ship impressive technology. I just wish they'd put the same engineering effort into safety as they do into capabilities!

OpenAI Launches ChatGPT Health
This one's exciting. OpenAI's CEO of Applications, Fidji Simo, announced ChatGPT Health, a privacy-first space for personalized health conversations that can connect to electronic health records, Apple Health, Function Health, Peloton, and MyFitnessPal. Here's why this matters: health already represents about 5% of all ChatGPT messages globally and touches 25% of weekly active users—often outside clinic hours or in underserved areas. People are already using these models for health advice constantly.

Nisten, who has worked on AI doctors since the GPT-3 days and even published papers on on-device medical AI, gave us some perspective: the models have been fantastic for health stuff for two years now. The key insight is that medical data seems like a lot, but there are really only about 2,000 prescription drugs and 2,000 diseases (10,000 if you count rare ones). That's nothing for an LLM. The models excel at pattern recognition across this relatively contained dataset. The integration with Function Health is particularly interesting to me. Function does 160+ lab tests, but many doctors won't interpret them because they didn't order them. ChatGPT could help bridge that gap, telling you "hey, this biomarker looks off, you should discuss this with your doctor." The bad news: it's waitlist-only for now. You can add yourself to the waitlist here; we'll keep monitoring the situation and let you know when it opens up.

Doctronic: AI Prescribing Without Physician Oversight
Speaking of healthcare, Doctronic launched a pilot in Utah where AI can autonomously renew prescriptions for chronic conditions without any physician in the loop. The system covers about 190 routine medications (excluding controlled substances) at just $4 per renewal.
Trial data showed 99.2% concordance with physician treatment plans, and they've secured pioneering malpractice insurance that treats the AI like a clinician. Nisten made the case that it's ethically wrong to delay this kind of automation when ER wait times keep increasing and doctors are overworked. The open source models are already excellent at medical tasks. Governments should be buying GPUs rather than creating administrative roadblocks. Strong, strong agree here!

Google Brings Gmail into the Gemini Era (X)
Breaking news from the day of our show: Google announced Gmail's biggest AI transformation since its 2004 launch, powered by Gemini 3. This brings AI Overviews that summarize email threads, natural language queries ("Who gave me a plumber quote last year?"), Help Me Write, contextual Suggested Replies matching your writing style, and the upcoming AI Inbox that filters noise to surface VIPs and urgent items. For 3 billion Gmail users, this is huge. I'm very excited to test it—though not live on the show, because I don't want you reading my emails.

This week's buzz - covering Weights & Biases updates
Not covered on the show, but a great update on stuff from WandB: Chris Van Pelt (@vanpelt), one of the 3 co-founders, released a great project I wanted to tell you about! For coders, this is an app that allows you to run multiple Claude Codes on free GitHub sandboxes, so you can code (or Ralph) and control everything away from home! GitHub gives personal users 120 free Codespaces hours/month, and Catnip automatically shuts down inactive instances, so you can code for quite a while with Catnip! It's fully open source on GitHub, and you can download the app here.

Interview: Ryan Carson - What the hell is Ralph Wiggum?
Okay, let's talk about the character everyone is seeing on their timeline: Ralph Wiggum. My co-host Ryan Carson went viral this week with an article about this technique, and I had to have him break it down.

Ralph isn't a new model; it's a technique for running agents in a loop to perform autonomous coding. The core idea is deceptively simple: Ralph is a bash script that runs an AI coding agent in a loop until a certain condition is met. But why is it blowing up? Normally when you use a coding agent like Cursor, Claude Code, or Amp, you need to be in the loop. You approve changes, look at code, fix things when the agent hits walls or runs out of context. Ralph solves this by letting the agent run autonomously while you sleep.

Here's how it works: first, you write a Product Requirements Doc (PRD) by talking to your agent for a few minutes about what you want to build. Then you convert that PRD into a JSON file containing atomic user stories with clear acceptance criteria. Each user story is small enough for the agent to complete in one focused thread. The Ralph script then loops: it picks the first incomplete user story, the agent writes code to implement it, tests against the acceptance criteria, commits the changes, marks the story as complete, writes what it learned to a shared "agents.md" file, and loops to the next story. That compound learning step is crucial—without it, the agent would keep making the same mistakes.

What makes this work is the pre-work. As Ryan put it, "no real work is done one-shot." This is how software engineering has always worked—you break big problems into smaller problems into user stories and solve them incrementally. The innovation is letting AI agents work through that queue autonomously while you sleep! Ryan's excellent (and viral) X article is here!
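Ryan describes Ralph as a bash loop; here's a minimal Python sketch of the same control flow. The `prd.json` schema and the `agent` CLI invocation are assumptions standing in for whatever coding agent you actually use:

```python
import json
import pathlib
import subprocess

PRD = pathlib.Path("prd.json")     # assumed schema: [{"story": ..., "criteria": ..., "done": false}, ...]
NOTES = pathlib.Path("agents.md")  # shared scratchpad so each run learns from the last

def run_agent(prompt: str) -> None:
    # Hypothetical CLI invocation; swap in Claude Code, Amp, or your agent.
    subprocess.run(["agent", "--prompt", prompt], check=True)

for _ in range(100):  # safety cap instead of a bare infinite loop
    stories = json.loads(PRD.read_text())
    todo = next((s for s in stories if not s["done"]), None)
    if todo is None:
        break  # every story is complete: the loop's exit condition

    lessons = NOTES.read_text() if NOTES.exists() else ""
    run_agent(
        "Implement this user story and make its acceptance criteria pass:\n"
        f"Story: {todo['story']}\nCriteria: {todo['criteria']}\n\n"
        f"Lessons from earlier runs:\n{lessons}\n\n"
        "When done: run the tests, commit, mark this story done in prd.json, "
        "and append anything you learned to agents.md."
    )
    # Each iteration gets a fresh, focused context; prd.json and agents.md
    # carry the state and the compound learning between iterations.
```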
Vision & Video

LTX-2 Goes Fully Open Source (HF, Paper)
Lightricks finally open-sourced LTX-2, marking a major milestone as the first fully open audio-video generation model. This isn't just "we released the weights" open—it's complete model weights (13B and 2B variants), distilled versions, controllable LoRAs, a full multimodal trainer, benchmarks, and evaluation scripts. For a video model aiming to be the open-source Sora, it supports audio and lip sync.

The model generates synchronized audio and video in a single DiT-based architecture—motion, dialogue, ambience, and music flow simultaneously. Native 4K at up to 50 FPS, with audio, for up to 10 seconds. And there's also a distilled version (thanks, Pruna AI!) hosted on Replicate. ComfyUI provided day-0 native support, and community testing shows an A6000 generating 1280x720 at 120 frames in 50 seconds. This is near-Sora-level quality that you can fine-tune on your own data for custom styles and voices in about an hour.

What a way to start 2026. From chips that are 5x faster to AI doctors prescribing meds in Utah, the pace is only accelerating. If anyone tells you we're in an AI bubble, just show them what we covered today. Even if the models stopped improving tomorrow, techniques like "Ralph" prove we have years of work ahead of us just figuring out how to use the intelligence we already have. Thank you for being a ThursdAI subscriber. See you next week!

As always, here are the show notes and TL;DR links:
* Hosts & Guests
* Alex Volkov - AI Evangelist & Weights & Biases (@altryne)
* Co-Hosts - @WolframRvnwlf, @nisten, @ldjconfirmed
* Special Guest - Ryan Carson (@ryancarson), breaking down the Ralph Wiggum technique
* Open Source LLMs
* Solar Open 100B - Upstage's 102B MoE model. Trained on 19.7T tokens with a heavy focus on "data factory" synthetic data and high-performance Korean reasoning (X, HF, Tech Report).
* MiroThinker 1.5 - A 30B parameter search agent that uses "Interactive Scaling" to beat trillion-parameter models on search benchmarks like BrowseComp (X, HF, GitHub).
* Liquid AI LFM 2.5 - A family of 1B models designed for edge devices. Features a revolutionary end-to-end audio model that skips the ASR-LLM-TTS pipeline (X, HF).
* NousCoder-14B - Competitive coding model from Nous Research that saw a 7% LiveCodeBench accuracy jump in just 4 days of RL (X, WandB Dashboard).
* Zhipu AI IPO - The makers of GLM became the first major LLM firm to go public on the HKEX, raising $558M (Announcement).
* Big Co LLMs & APIs
* NVIDIA Vera Rubin - Jensen Huang's CES reveal of the next-gen platform. Delivers 5x Blackwell inference performance and 75% fewer GPUs needed for MoE training (Blog).
* OpenAI ChatGPT Health - A privacy-first vertical for EHR and fitness data integration (Waitlist).
* Google Gmail Era - Gemini 3 integration into Gmail for 3 billion users, featuring AI Overviews and natural language inbox search (Blog).
* XAI $20B Raise - Elon's XAI raises Series E at a $230B valuation, even as Grok faces heat over bikini-gate and safety guardrails (CNN Report).
* Doctronic - The first US pilot in Utah for autonomous AI prescription renewals without a physician in the loop (Web).
* Alexa+ Web - Amazon brings the "Smart Alexa" experience to browser-based chat (Announcement).
* Autonomous Coding & Tools
* Ralph Wiggum - The agentic loop technique for autonomous coding using small, atomic user stories.
Ryan Carson's breakdown of why this is the death of "vibe coding" (Viral X Article).
* Catnip by W&B - Chris Van Pelt's open-source iOS app to run Claude Code anywhere via GitHub Codespaces (App Store, GitHub).
* Vision & Video
* LTX-2 - Lightricks open-sources the first truly open audio-video generation model with synchronized output and full training code (GitHub, Replicate Demo).
* Avatar Forcing - KAIST's framework for real-time interactive talking heads with ~500ms latency (Arxiv).
* Qwen Edit 2512 - Optimized by PrunaAI to generate high-res realistic images in under 7 seconds (Replicate).
* Voice & Audio
* Nemotron Speech ASR - NVIDIA's 600M parameter streaming model with sub-100ms stable latency for massive-scale voice agents (HF).

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit sub.thursdai.news/subscribe
In 2026, electronic devices (smartphones, computers, tablets, consoles, and connected objects) are going to cost more. One of the main reasons, still barely visible to the general public, is the rapid rise in the price of random-access memory, RAM. And that rise is directly tied to the explosion of artificial intelligence.

RAM is an essential component of any electronic device. It temporarily stores the data the processor is working on and determines how fast and smooth a system feels. Without RAM, there is no multitasking, no modern applications, no on-device AI. And over the past two years, the nature of global memory demand has changed.

Traditionally, RAM mostly went into PCs, smartphones, and conventional servers. Now the big AI companies (OpenAI, Google, Microsoft, Meta, Amazon) consume colossal amounts of memory to train and run their models. AI servers use specialized memory, such as HBM (High Bandwidth Memory), which is essential for feeding GPU-class compute chips. A single AI server can carry several hundred gigabytes of RAM, the equivalent of dozens or even hundreds of smartphones.

According to several analyst firms, AI-related memory demand is growing by more than 40% per year. Supply is not keeping up. The memory makers, Samsung, SK Hynix, and Micron, deliberately limited their investments after the overproduction crisis of 2022-2023. The result: in 2026, global DRAM output is expected to grow only about 15 to 16%, far less than demand.

That imbalance is already hitting prices. In 2025, DRAM prices rose by more than 50%. For 2026, several forecasts point to a further increase of 30 to 50%, depending on the segment. HBM memory, heavily used by AI, is under even more strain, because it consumes more silicon and more complex production lines, at the expense of "regular" RAM.

RAM represents between 10 and 20% of the manufacturing cost of a mid-range or high-end PC or smartphone. When that component gets sharply more expensive, manufacturers have only two options: cut performance or raise prices. Increasingly, they are choosing the latter. Price increases are already expected on PCs and smartphones from 2026, with the average rise estimated at between 6 and 8%.

In short, the meteoric rise of artificial intelligence is monopolizing the world's memory. And this invisible battle for RAM will translate very concretely, in 2026, into more expensive electronics for consumers. Hosted by Acast. Visit acast.com/privacy for more information.
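The pass-through arithmetic in that last step checks out against the article's own figures, applied naively (ignoring margins, substitutions, and spec downgrades):

```python
# Naive cost pass-through using the article's figures: RAM is 10-20% of a
# device's build cost, and RAM prices are forecast to rise 30-50% in 2026.
# Margins, substitutions, and spec downgrades are ignored (a simplification).
for ram_share in (0.10, 0.20):
    for ram_increase in (0.30, 0.50):
        device_increase = ram_share * ram_increase
        print(f"RAM at {ram_share:.0%} of cost, RAM +{ram_increase:.0%} "
              f"-> device cost +{device_increase:.1%}")
# The range runs from +3% to +10%, bracketing the article's 6-8% estimate.
```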
AI is devouring the world's RAM. Micron shuts down Crucial after 29 years, Samsung won't sell memory even to itself, Raspberry Pi is raising prices. And NVIDIA spends 20 billion on Groq and its SRAM architecture. What is happening, and when will it end?
00:00 Intro
01:07 HBM and the RAM price crisis
06:30 When the RAM price crisis will end
08:40 NVIDIA acquires Groq
15:02 Go buy some used DDR4
#ram #dram #hbm #sram #ai #micron #crucial #nvidia #groq
MONEY FM 89.3 - Prime Time with Howie Lim, Bernard Lim & Finance Presenter JP Ong
Singapore shares rose in the first trading session of 2026 today. The Straits Times Index was up 0.4% at 4,664.97 points at 12.31pm Singapore time, with a value turnover of S$418.45M seen in the broader market. In terms of counters to watch, we have Nio, after the Chinese electric vehicle (EV) maker said it had new record-high monthly and quarterly deliveries. Meanwhile, from how Singapore’s economy expanded 4.8 per cent year on year in 2025 to how Samsung Electronics said its customers have praised the differentiated competitiveness of its next-generation high-bandwidth memory (HBM) chips, or HBM4, more economic and corporate headlines remained in focus. Also on deck, how US markets are expected to kickstart the year in the first trading session of 2026. On Market View, Money Matters’ finance presenter Chua Tian Tian unpacked the developments with Benjamin Goh, Head of Research and Investor Education, SIAS.
The warning lights are flashing red for consumer electronics prices in 2026. Executives at Asus and Acer have confirmed that laptops and desktops will see their prices rise from the start of next year. The cause: the surging price of memory and storage, swallowed up by massive demand from data centers dedicated to artificial intelligence.

According to the Taiwanese daily Commercial Times, Asus chief Samson Hu and Acer CEO Jason Chen agree on an assessment shared across the industry: the cost increases will inevitably have to be passed on to retail prices. Until now, manufacturers had managed to contain the inflation thanks to inventories built up before the shortage. But that respite is coming to an end. From the first quarter of 2026, new machines will carry components bought at full price. Asus intends to fine-tune its lineup, adjusting configurations and price positioning to stay competitive. Acer is more blunt: "fourth-quarter prices will not be the prices of the first quarter of next year," warned Jason Chen. To limit the damage, some manufacturers may cut back on specs: 8 GB of RAM instead of 16 GB, reduced storage capacities. A defensive strategy, at the very moment the shortage is also hitting SSDs.

The situation could set in for the long haul. The sector's two giants, SK Hynix and Samsung, have no plans to significantly increase production capacity. Building a memory fab takes three to five years, a risky bet in a cyclical market. As for Micron, the group has refocused its efforts on high-bandwidth memory (HBM) for AI, at the expense of the consumer market, and warns that price pressure could last beyond 2026. The result: consumers risk paying more for machines that are sometimes less well equipped. An irony at a time when AI-boosted software keeps getting hungrier for resources. Personal computing is thus entering a paradoxical phase: more powerful in its uses, but more expensive and more constrained in its hardware.
December 22, 2025 | Season 7 | Episode 47
We trace a line from the contested elections of the late 1800s to today's market mood, then dig into AI-driven pricing, chip supply pinch points, prediction markets, and the real progress of robotaxis. The goal is to separate noise from durable drivers of earnings and risk.
• Parallels between the 1876–1880 elections and present-day policy debates
• AI's role in personalized pricing and margin expansion
• Market tone, rates, and a commodities surge led by gold
• Upcoming GDP, durable goods, and confidence data
• Earnings growth scenarios tied to stable policy and margins
• HBM-driven memory shortages and downstream effects
• Prediction markets inside trading apps and data value
• Robotaxis now, Tesla versus Waymo, and adoption questions
This podcast is available on most platforms, including Apple Podcasts and Spotify. For more information, please visit our website at www.heroldlantern.com
** For informational and educational purposes only, not intended as investment advice. Views and opinions are subject to change without notice. For full disclosures, ADVs, and CRS Forms, please visit https://heroldlantern.com/disclosure **
To learn about becoming a Herold & Lantern Investments valued client, please visit https://heroldlantern.com/wealth-advisory-contact-form
Follow and Like Us on Youtube, Facebook, Twitter, and LinkedIn | @HeroldLantern
US equity futures point to a mixed open, with Asian markets mostly lower and European equities trading slightly higher. Today's focus is on continued risk aversion in US equities. Moreover, the global rate backdrop remains a headwind as markets digest a hawkish tilt in central bank expectations, with investors increasingly focused on upcoming US inflation data and jobless claims for confirmation on whether policy easing can resume next year. In addition, corporate developments remained in focus as Micron guided above expectations and lifted medium-term capital expenditure plans tied to HBM demand, offering selective support to memory-related names but failing to offset broader concerns around AI monetization, positioning fatigue, and elevated valuations.
Companies Mentioned: OpenAI, Warner Bros. Discovery, lululemon athletica
Fifty years of Semicon Europa set a fitting backdrop for a conversation that feels both celebratory and unsentimental about the state of advanced packaging in Europe. We walk the floor in Munich and pull together a story that spans chemical metrology, panel plating, glass substrates, thermal materials, logistics resilience, and the push from R&D to production, plus a heartfelt goodbye.
Dena Mitchell, Nova, opens the curtain on chemical metrology for electroplating, showing how bath health drives TSV fill, hybrid bond grain structure, and environmental wins through longer bath life. Sally Ann Henry, ACM Research, explains why horizontal panel electroplating can deliver better uniformity than vertical as panel-level packaging grows. Thomas Uhrmann, EV Group, zooms out to the strategy: Europe's strength in pilot lines and research consortia, the urgency to materialize large-scale packaging fabs, and how the EU Chips Act is knitting packaging into every node from photonics to logic.
Henkel's Ram Trichur takes on thermals, from kilowatt-class data center processors with backside power delivery to mobile's shift from package-on-package to side-by-side for exposed die cooling, and the heat challenges inside HBM stacks. Comet's Isabella Drolz steps into glass panel territory with TGV inspection at 610 x 610 mm, aligning tools, standards, and timelines toward late-decade ramps. Martin Wynaendts van Resandt explains how Lab14 brings agility with direct-write lithography for large substrates and optical interconnect masters, speeding iteration and trimming mask overhead as co-packaged optics advances. Jim Garstka, Shellback Semiconductor, talks about its Hydrozone product that is finding traction in photomask cleaning. We also get practical about moving all this innovation: Barry O'Dowd and Robin Knopf, of Kuehne+Nagel, detail how Europe's packaging supply chains remain global, and how sea-air blends can cut cost and time for non-sensitive, high-volume flows while building resilience against disruptions. ASE's Patricia MacLeod, Christophe Zinck, and Bradford Factor tie it together with automotive realities (centralized compute, heterogeneous integration, reliability constraints) and the enduring role of MEMS and sensors to feed the brain of the car.
It's a grounded, forward-looking journey through the technologies and decisions that will determine whether Europe turns its R&D leadership into production momentum. Listen for clear takeaways, candid perspectives, and a final toast to the community that made the 3D InCites Podcast possible.
If this conversation resonates, follow the show, share it with a colleague, and leave a review to help more listeners find it.
Support the show
Micron is leaving the consumer memory market, including its Crucial brand, to focus on high-bandwidth memory (HBM) for AI data centers. The company will continue selling consumer products until February 2026. The move comes amid a global chip shortage, and HBM sales are growing fast, making AI-focused memory more profitable than consumer products. This and more on the Tech Field Day News Rundown with Tom Hollingsworth and Alastair Cooke. Time Stamps: 0:00 - Cold Open0:27 - Welcome to the Tech Field Day News Rundown1:22 - Trump Administration Lets Nvidia Sell H200 AI Chips to China4:17 - React Server Flaw Lets Hackers Run Code7:34 - IBM Strikes $11 Billion Deal to Acquire Confluent11:17 - Intel Reverses Plan to Sell Networking Unit, Keeps NEX In-House14:59 - IBM CEO Says Today's AI Datacenter Boom Isn't Financially Sustainable19:42 - Cloudflare Forces Outage to Stop Critical React2Shell Exploit22:53 - Micron Exits Consumer Memory to Focus on AI Chips30:53 - The Weeks Ahead31:58 - Thanks for Watching Follow our hosts Tom Hollingsworth, Alastair Cooke, and Stephen Foskett. Follow Tech Field Day on LinkedIn, on X/Twitter, on Bluesky, and on Mastodon.
Memory prices have quadrupled or even quintupled in recent months, and that has everything to do with OpenAI's gigantic deal with Samsung and SK Hynix. In this episode of Techzine Talks we analyze why OpenAI has bought up 40% of global DRAM production capacity and what that means for the rest of the market.

From laptops to servers and smartphones: all devices are getting more expensive because of the extreme memory shortage. Dell is considering price increases of 15%, Micron is discontinuing Crucial memory for consumers, and Samsung is even refusing to supply its own Galaxy division with memory. We discuss how long this crisis will last, what companies can do to manage costs, and whether there are alternatives such as more efficient software or new production capacity.

We also cover AGI ambitions, the role of AI inferencing, and the question of why OpenAI needs so much memory. Is it about Stargate data centers, a mysterious hardware gadget with Jonathan Ive, or something else entirely? And what does this mean for Windows 11, ARM laptops, and the future of enterprise IT?
• OpenAI's deal for 900,000 memory wafers per month
• Memory prices rise from €100 to €400 for 32GB DDR5
• Impact on Dell, HP, Lenovo, and smartphone manufacturers
• Production capacity grows only 8% while demand explodes
• Samsung refuses to supply its own Galaxy division with memory
• Alternative efficiency solutions and DeepSeek OCR innovations
• Long-term outlook: 2-10 years of shortages?
0:09 - Memory prices rising explosively
1:24 - OpenAI buys 40% of global memory capacity
3:09 - Production capacity and shortages
3:42 - Consequences for PC and laptop prices
6:44 - Market dynamics and suppliers
7:46 - AI infrastructure and memory needs
23:30 - Future scenarios and efficiency gains
Tags: OpenAI, memory prices, DRAM, DDR5, HBM, Samsung, SK Hynix, Micron, AI infrastructure, memory shortage, laptop prices, Dell, enterprise IT, datacenter, GPU, Nvidia, Windows 11, AGI
Tech Contrarians explains the market's AI obsession, and why fears of a bubble might be premature (1:00). OpenAI's spending spree (3:20). Big tech's CapEx surge and what it signals about market anxiety (5:40). Red flags may indicate short-term supply chain hiccups not AI collapse (8:00). AI bubble or deflation? Mid-2026 more likely for major corrections (10:15). AMD, Nvidia & Broadcom (15:30). Intel's turning point (25:40). Why data storage and HBM memory are long-term AI plays (33:50). Opportunities outside AI (36:00).Episode TranscriptsShow Notes:AMD: OpenAI Got A Bargain - I Wouldn't Hold Into EarningsTaking Profits For Yield And Growth With David Alton ClarkMichael Burry to shut down hedge fundRegister for Top Income & AI Growth Stocks Worth Watching: https://bit.ly/4ifR7PPFor full access to analyst ratings, stock and ETF quant scores, and dividend grades, subscribe to Seeking Alpha Premium at seekingalpha.com/subscriptions
【Analyst Hsieh Chen-yen's (謝晨彥) official LINE account】 https://lin.ee/se5Bh8n
2025.11.07 [SK Hynix HBM prices up 50%! Memory shortage through 2027!] Analyst Hsieh Chen-yen, Wall Street News
☆ SK Hynix's entire production schedule for next year is fully booked; is HBM sold out until 2027?
☆ HBM industry update: institutional investors estimate SK Hynix's operations could surpass TSMC?
☆ Which Taiwanese companies are in the HBM supply chain, and how should investors position?
Add the LINE account now for more stock market updates! LINE ID search: @gp520 https://lin.ee/se5Bh8n
You can also call the toll-free line with any questions: 0800-66-8085
#摩爾投顧 #謝晨彥 #分析師 #股怪教授 #股票 #台股 #飆股 #三大法人 #漲停 #選股 #技術分析 #波段 #獲利 #飆股啟航 #大賺 #美債 #華爾街見聞
Everyone is talking about a new memory super cycle related to AI data centers, and suddenly, NAND flash is having its moment. SanDisk (SNDK) has returned to the public market after its spinoff IPO from Western Digital, and it's back in growth mode.In this deep dive, we use our investing framework to analyze SanDisk's position in the storage market. We examine the major shift from HDDs (Hard Disk Drives) to SSDs (Solid State Drives) in data centers due to product shortages and the need for new solutions.Key Topics Covered:The Market: Why the NAND flash market is about to heat up and how SanDisk is uniquely positioned against memory chip makers like SK hynix and Micron.The Partnership: Our preference for SanDisk over Kioxia due to their Flash Ventures joint venture, allowing SanDisk to buy finished wafers at cost with a small markup (asset light model).The Innovation: SanDisk's invention of HBF (High Bandwidth Flash), which might be an answer to HBM for co-packaging next to GPUs.The Financials: Analyzing the 30x expected free cash flow valuation, the company's flip from free cash flow loss to free cash flow positive, the GAAP net loss, and the loan inherited from Western Digital.Investment Thesis: Whether SanDisk should be a small bet in a basket play alongside Lam Research and Pure Storage.Timestamps:(00:00:00) | Introduction: The Memory Supercycle and SanDisk's Re-IPO(00:01:06) | Core Product: NAND Flash, IDMs, and the $200 Billion Market(00:03:00) | SanDisk's History: Spin-off from Western Digital & The NAND Landscape(00:03:38) | The Storage Supply Chain: Lam Research, Kioxia, and Pure Storage(00:05:14) | Kioxia Partnership: Why SanDisk Gets Wafers "At Cost"(00:07:34) | The Market Catalyst: HDD Shortages and Data Center SSD Demand(00:09:56) | Next-Gen Innovation: High Bandwidth Flash (HBF) vs. HBM(00:11:15) | Enablers & Market Exposure: Fab Equipment (Lam, Applied) and Client/Cloud Segments(00:14:02) | Financials: Flipping from Free Cash Flow Loss to Positive(00:16:08) | Q1 Fiscal 2026 Guidance, Debt, and NTM Valuation(00:18:12) | Final Takeaway: SanDisk as a "Small Bet" in a Basket PlayJoin us on Discord with Semiconductor Insider, sign up on our website: www.chipstockinvestor.com/membershipSupercharge your analysis with AI! Get 15% off your membership with our special link here: https://fiscal.ai/csi/Sign Up For Our Newsletter: https://mailchi.mp/b1228c12f284/sign-up-landing-page-short-formIf you found this video useful, please make sure to like and subscribe!********************************************************Affiliate links are sprinkled throughout this video. If something catches your eye and you decide to buy it, we might earn a little coffee money. Thanks for helping us (Kasey) fuel our caffeine addiction!Content in this video is for general information or entertainment only and is not specific or individual investment advice. Forecasts and information presented may not develop as predicted and there is no guarantee any strategies presented will be successful. All investing involves risk, and you could lose some or all of your principal. #SanDisk #NANDFlash #AIDC #MemorySupercycle #investing #semiconductors #chips #stocks #finance #financeeducation #silicon #artificialintelligence #ai #chipstocks #investor #stockmarket #chipstockinvestor #fablesschipdesign #chipmanufacturing #semiconductormanufacturing #semiconductorstocks
Welcome to In-Depth Finance, produced by Xueqiu, China's leading integrated wealth management platform for investment discussion and trading, where smart investors gather. Today's piece is called "NVIDIA's Moat," by Gudongyu (古董鱼).

I spent a whole evening reading about NVIDIA's moat, forcibly convincing myself, and my conclusion is: as long as NVIDIA doesn't fall, I don't retreat, and I'll keep riding AI. If one day NVIDIA gets disrupted, don't ask me whether I can keep holding, because by then I'll already be gone.

Everyone assumes NVIDIA's strength is hardware, but its hidden moat is its computing platform and programming model, plus networking. Start with the first-mover advantage and maturity: the computing platform and programming model launched in 2007 and, after nearly 20 years of development, has become the industry standard for GPU computing. It has accumulated more than 4 million developers, forming a huge community and network effect.

On full-stack optimization and the toolchain: the platform provides a complete set of tools, from compilers and debuggers to highly optimized core libraries. These libraries are deeply optimized by NVIDIA to fully exploit its hardware, so developers can get top-tier performance without writing low-level code.

On developer habits and switching costs: the platform has been widely incorporated into university courses and training programs, so engineers encounter it from their very first steps. Companies have accumulated large amounts of CUDA code and expertise; switching to another platform means rewriting code, retraining staff, and facing performance uncertainty. The switching cost is almost unimaginably high.

One key advantage of this platform is that, over time, new software updates keep improving the hardware. Benchmark comparisons of AI training on the H100 and the newer Blackwell GB200 NVL72 show why the continuous improvement of the platform and its software matters so much. In the latest benchmark data from CoreWeave for the NVIDIA GB300 NVL72, 4 of its GPUs ran AI workloads 6 times faster than 16 H100s. The ratio was not like that at first; it was reached through NVIDIA's continuous optimization of the computing platform and programming model.

CUDA converters have always existed, but those who have used them report that they convert about 80% of CUDA code automatically, while the remaining 20% must be finished by kernel engineers by hand, which is not cheap. It is also telling that while other companies are forming alliances to build alternatives to parts of NVIDIA's full stack, no alliance capable of competing with NVIDIA has emerged so far.

Next is NVIDIA's networking moat. Networking is usually discussed in two parts, scale-up and scale-out (setting aside the recently popular "scale-across"). Scale-up means connecting the GPUs within a rack into a single GPU node and making that node as powerful as possible. Scale-out networking then connects these GPU nodes to other GPU nodes to form a large GPU cluster. NVIDIA uses its proprietary NVLink and NVSwitch for scale-up; for scale-out, it uses the InfiniBand it gained through the Mellanox acquisition, with Ethernet as a secondary option.

NVIDIA's rivals have jointly created the UALink consortium, whose members include just about every other company you can think of: AMD, Amazon, Google, Intel, Meta, Microsoft, Cisco, Apple, Astera Labs, and others. UALink matters especially to AMD, because compared with NVIDIA, networking is one of its biggest weaknesses. Networking is important not only for training AI workloads but also for inference; as inference for reasoning models becomes more complex, good scale-up and scale-out are key. To address this challenge, AMD wants to support all available alternatives, which is why it has flexible I/O lanes that allow it to support different standards.

Although UALink is still young, it has already suffered a major setback. At first Broadcom was one of the key participating companies, but it later withdrew. That was a serious blow, because AMD must now rely on Astera Labs and Marvell to produce switches for the UALink consortium, and UALink switches will not be ready until 2027. This is why, although AMD's MI400x GPUs have UALink SerDes, they do not form a complete scale-up network.

NVIDIA is not merely watching this development. One month after UALink 1.0 was released, it announced NVLink Fusion, which on paper opens up the NVLink ecosystem. That is a big step for NVIDIA. A former senior NVIDIA employee explained how challenging it was to push this through internally: while he worked there, Meta wanted to use NVLink for its MTIA, and NVIDIA's answer was a firm "no." NVLink modules move data in NVIDIA's own proprietary way, using NVIDIA's own chips, and parts of that technology remain exclusive to NVIDIA to this day. With this technology, NVIDIA can force customers to use its chip-to-chip interconnect. Customers have realized this too; as that former employee noted, they worry that even with their own custom ASICs they would be further tied into NVIDIA's ecosystem, which is why UALink remains the alternative to this day.

Between NVIDIA and UALink, one key player is Astera Labs. After all, Broadcom has now gone off on its own technology path, so the UALink consortium depends on Astera Labs for switches. NVIDIA knows very well that Astera Labs is now a core part of the UALink consortium and may look for ways to induce it to order more NVLink Fusion; once Astera Labs adopts NVLink Fusion, its capacity to serve UALink will be constrained. Whether this ultimately helps NVIDIA, time will tell.

On the scale-out side, the alternative to NVIDIA's InfiniBand is Ethernet with RDMA support. NVIDIA supports this alternative too, but only as a secondary option; it even has the Spectrum-X Ethernet platform, having obtained the Spectrum switch line's technology and capacity through acquisition. Many big tech companies back Ethernet for a practical reason: it costs less, has long been widely used in data centers, and is available from multiple suppliers. RDMA-capable Ethernet has already gained considerable adoption, because hyperscalers and companies like Meta are willing to use it to reduce their dependence on NVIDIA.

Beyond the two core layers of scale-up and scale-out software and networking discussed above, a new key layer is just emerging: HBM, high-bandwidth memory. As one of the core components of AI accelerators, HBM will only grow in importance as AI models become larger and more complex. Today, SK Hynix and Micron are the main suppliers of HBM3, though Samsung is expected to complete qualification and join the HBM3 supply chain. The transition to HBM4 brings a key change: the HBM4 base die must be manufactured on an advanced logic process. This means SK Hynix and Micron cannot do it alone; they must outsource that manufacturing to TSMC, and the memory makers must also partner with logic chip design companies or IP licensors to complete the design. This change creates room for custom HBM solutions, but it also means HBM4 profits must be partly shared with TSMC, since manufacturing depends heavily on it. Moreover, HBM4 is far more complex than HBM3, combining the memory makers' die-stacking technology with the foundry's advanced process. This situation actually favors NVIDIA, which already plans to design its own 3nm HBM4 base die.

Frankly, I am not worried about ASICs taking too much market share. Most cloud providers chose to develop their own chips mainly because of NVIDIA's market dominance and GPU supply shortages. It was a move of necessity: to get usable compute faster, they had no choice but to go down the self-development path. NVIDIA's new Rubin-series CPX products target exactly this, improving AI's long-context inference capability. In my view, the real leader in inference is still NVIDIA's products, not dedicated inference ASICs.

One more key issue cannot be ignored: data centers face limits on available power, and in North America especially, power is a hard constraint that must be taken seriously. Why was xAI able to build the world's largest compute cluster in 122 days? Partly because Musk has a world-class engineering team and execution ability, but more importantly because the power supply available to xAI is among the best in the world. When you operate an existing data center or plan a new one, you must work with the utility to set a fixed power allocation, and that allocation has a hard cap; you cannot simply call the utility and ask for 10% more power. If we compare NVIDIA's current and next-generation servers, the core metric for evaluating H100 versus GB300 servers should be how much power is saved when processing the same number of tokens. Every NVIDIA product refresh advances exactly this power-efficiency work.

So what I am saying is that NVIDIA holds a lot of cards, and Jensen Huang's ability is frighteningly strong. Even as ASICs and other GPU competitors appear, they are mostly followers and imitators. That is good news for every hardware company in the supply chain, because total demand keeps growing; flowers are blooming everywhere.
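For readers who have never touched the "programming model" the piece keeps referring to, here is a minimal, illustrative CUDA kernel (a vector add, the platform's hello-world). It is a sketch of the programming model only, not anything from NVIDIA's optimized libraries; the array size, unified-memory allocation, and launch geometry are arbitrary choices for the example:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element of c = a + b.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // guard against overrun
}

int main() {
    const int n = 1 << 20;                 // ~1M elements, arbitrary size
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory keeps the sketch short; production code typically
    // manages host/device copies explicitly with cudaMalloc + cudaMemcpy.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();               // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);         // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The switching-cost argument follows directly: a large codebase written against these launch semantics and NVIDIA's tuned libraries has to be rewritten and revalidated to run anywhere else.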
Stephanie Walter and John Freeman preview Micron (MU) earnings. Stephanie notes that Micron is “leaning in” to HBM chips, which are a prerequisite for AI data centers – potentially bullish, but with raised market expectations, it might not be enough. They discuss how Micron has become more of a strategic play than in the past as it becomes a bedrock of AI infrastructure. John owns MU shares and gives his outlook for shares, which are near highs going into the report.======== Schwab Network ========Empowering every investor and trader, every market day.Options involve risks and are not suitable for all investors. Before trading, read the Options Disclosure Document. http://bit.ly/2v9tH6DSubscribe to the Market Minute newsletter - https://schwabnetwork.com/subscribeDownload the iOS app - https://apps.apple.com/us/app/schwab-network/id1460719185Download the Amazon Fire Tv App - https://www.amazon.com/TD-Ameritrade-Network/dp/B07KRD76C7Watch on Sling - https://watch.sling.com/1/asset/191928615bd8d47686f94682aefaa007/watchWatch on Vizio - https://www.vizio.com/en/watchfreeplus-exploreWatch on DistroTV - https://www.distro.tv/live/schwab-network/Follow us on X – https://twitter.com/schwabnetworkFollow us on Facebook – https://www.facebook.com/schwabnetworkFollow us on LinkedIn - https://www.linkedin.com/company/schwab-network/About Schwab Network - https://schwabnetwork.com/about
Breastmilk is Dynamic
Cellular and transcriptional diversity over the course of human lactation
This recent 2022 paper in the Proceedings of the National Academy of Sciences by Dr. Nyquist and colleagues is a sight for sore eyes. It offers a remarkable, high-resolution portrait of how the cellular landscape of human breast milk (hBM) shifts over time. The authors capture something both scientifically rich and uniquely human: the dynamic, living composition of milk as it adapts to the changing needs of mother and child. The abstract: "Human breast milk is a dynamic fluid that contains millions of cells, but their identities and phenotypic properties are poorly understood. We generated and analyzed single-cell RNA-sequencing (scRNA-seq) data to characterize the transcriptomes of cells from hBM across lactational time from 3 to 632 d postpartum in 15 donors. We found that the majority of cells in hBM are lactocytes, a specialized epithelial subset, and that cell-type frequencies shift over the course of lactation, yielding greater epithelial diversity at later points. Analysis of lactocytes reveals a continuum of cell states characterized by transcriptional changes in hormone-, growth factor-, and milk production-related pathways. Generalized additive models suggest that one subcluster, LC1 epithelial cells, increases as a function of time postpartum, daycare attendance, and the use of hormonal birth control. We identify several subclusters of macrophages in hBM that are enriched for tolerogenic functions, possibly playing a role in protecting the mammary gland during lactation. Our description of the cellular components of breast milk, their association with maternal–infant dyad metadata, and our quantification of alterations at the gene and pathway levels provide a detailed longitudinal picture of hBM cells across lactational time. This work paves the way for future investigations of how a potential division of cellular labor and differential hormone regulation might be leveraged therapeutically to support healthy lactation and potentially aid in milk production." (Nyquist et al. 2022) And more information on breastmilk immunology and a recipe. Dr. M
Today’s episode is all about high-performance memory in switches. We dig into the differences among TCAM, SRAM, DRAM, and HBM, and all the complex tradeoffs that go into allocating memory resources to networking functions. If you've ever had to select a Switching Database Manager template or done similar operations on a switch, this is your... Read more »
At FMS25 there was lots of discussion of HBM, QLC and SCM SSDs, UALink/UEC, UCIe for SSDs, and liquid-cooled M.2 SSDs. Listen to the podcast to learn more.
So we're selling AI chips to China now. Chris Miller, author of Chip Wars, and Lennart Heim at RAND join to discuss:
* What are the tradeoffs involved in selling
* Why China is talking like they don't even want the H20s
* Why selling HBM and semiconductor manufacturing equipment might be an even bigger deal than Nvidia chips
Check out the Horizon Fellowship to work in DC on emerging tech policy issues like AI chip export controls! https://horizonpublicservice.org/applications-open-for-2026-horizon-fellowship-cohort/
Outtro Music: It's a Shame, The Spinners, 1970 https://www.youtube.com/watch?v=uRQQudHLi0A&ab_channel=TheSpinners-Topic
Guest: Dr. Tom Coughlin, President, Coughlin Associates, IEEE Past President (2025)
Website: https://tomcoughlin.com
FMS Conference: https://futurememorystorage.com/
Episode Summary: Join us for an enlightening conversation with Dr. Tom Coughlin, a seasoned digital storage analyst and consultant with over 40 years in the industry. Tom, the President of Coughlin Associates and former IEEE President, shares unparalleled insights into the foundational technologies shaping our digital world. We delve into the crucial role of memory in AI's development, the surprising realities of storage demand, and the fascinating world of breakthrough memory technologies. Discover why memory often gets overlooked in AI discussions, critical considerations for data privacy, and the global impact of the IEEE. Tom also previews the upcoming Future of Memory and Storage (FMS) conference and offers invaluable career advice for tech entrepreneurs.
Key Discussion Points:
* Behind-the-Scenes of Storage Innovation: Tom shares a surprising story about the 25-year research journey behind HAMR technology now rolling out in HDDs.
* Evolving Storage Demands: Learn how SSDs have become primary data center storage and replaced HDDs in personal computers and consumer applications. Understand HDDs' shift to colder storage in data centers (this is their growth market, and much of the world's data lives on HDDs). Discover magnetic tape's vital role in archiving and backing up cloud data. Explore new archive storage technologies being developed, such as optical recording and DNA storage.
* Memory's Critical Role in AI: Memory, particularly DRAM, is playing a big role in training AI models. Approaches are emerging that reduce the need for expensive DRAM (especially in HBM) for inference applications, using storage technologies like SSDs (e.g., Kioxia's AiSAQ for tuning LLMs). Consider optical storage or DNA for long-term data storage and preservation.
* Why Memory is Overlooked in AI: Insights into why people tend to focus more on processing (GPUs) than on the data itself, despite memory and storage advances being as impressive as those in GPUs.
* Data Privacy & Security in Storage: Essential considerations include having copies of data on immutable storage for ransomware recovery, using AI for anomaly detection on networked systems to prevent malware, and proper encryption use in storage systems for data security.
* The Global Impact of IEEE: Learn about IEEE as the world's largest technical professional organization with nearly half a million members in over 190 countries. IEEE puts on over 2,000 conferences and events each year and publishes a good percentage of the world's technical literature. IEEE standards enable interoperability and industries, with a recent focus on sustainability and ethical AI practices to solve global problems and benefit humanity.
* Future of Memory and Storage (FMS) Conference: Dr. Coughlin, the general chair, provides details on the 2025 FMS (August 4-7, 2025, at the Santa Clara Convention Center). The conference will feature keynotes by major players in the digital storage and memory industry and sessions covering all major technologies and applications. FMS is the largest independent event focused on digital storage and memory.
* Highlight Speakers at FMS: Keynote talks include representatives from Kioxia, Fadu, Micron, Silicon Motion, SK hynix, Samsung, Neo, Sandisk, Max Linear, VergeIO, and Kove. There will also be a special session on AI, memory, and storage organized by NVIDIA, and Dr.
Coughlin will give a talk on his experiences as IEEE President in 2024. Many parallel sessions will feature speakers from important industry players.
* Major Disruption in Digital Storage: Dr. Coughlin predicts that just managing the massive amounts of data generated by AI and IoT will be a huge challenge. He also foresees a growing need for technology to ensure data provenance, to identify false information and curate data for AI training.
* Career Advice for Tech Professionals: Dr. Coughlin advises aspiring tech professionals to be part of their industry and join technical professional organizations like the IEEE. This provides opportunities to develop professional networks and learn important skills like working with others and communicating through volunteer leadership.
Learn More About Dr. Tom Coughlin and FMS:
Future of Memory and Storage (FMS) Conference: https://futurememorystorage.com/
Tom Coughlin's Work: https://tomcoughlin.com
Disclaimer: The information provided in these show notes is for informational purposes only and does not constitute financial, investment, or technical advice. Views expressed by the guest are their own and do not necessarily reflect the views of the podcast host, its affiliates, Finalis Inc., or Finalis Securities LLC, Member FINRA/SIPC. Listeners should conduct their own research and consult with qualified professionals before making any decisions.
In today's episode of VG Daily, Eugenio Garibay and Andre Dos Santos take a deep dive into the most recent financial reports of three market giants: Paychex, Micron Technology, and Walgreens Boots Alliance. The episode opens with a review of Micron's performance and its revolutionary HBM memory, explaining in simple terms how the technology works, why it is key in the AI era, and how it is driving the company's growth. The focus then shifts to Paychex following its acquisition of Paycor, highlighting the synergies, the integration strategy, and the reaction from analysts. Finally, they turn to Walgreens, a company going through a privatization and restructuring process, facing challenges in sales and margins, whose story reflects the transformation of the retail sector in the United States. Throughout the episode, Andre and Eugenio add curious facts, historical context, and relevant opinions to help listeners understand not just the numbers, but also the stories and trends moving the market these days.
In this special episode of the Astonishing Healthcare podcast, Capital Rx Co-Founder and CEO, AJ Loiacono, and John Asalone, Executive Vice President of the newly formed Judi Care (former CEO of Amino Health), join Justin Venneri in the studio for a discussion about Capital Rx's acquisition of Amino, a unique care navigation company. The conversation covers everything from the background on how AJ and John met to "What is care navigation?" and how Judi Care offers 1) health plan members (i.e., healthcare consumers) a differentiated way to take control of their individual healthcare journeys, and 2) plan sponsors and other payers a user-friendly, unified pharmacy and medical care navigation front end that empowers plan members to find the care they need, when they need it.We're incredibly excited about the future and the opportunity to meaningfully improve access to care and the overall health benefits experience while helping reduce costs. Capital Rx has evolved into an HBM - or health benefits manager - as a result of Judi® processing medical AND pharmacy claims (and supporting all related workflows in one system), and a unified front end that "puts quality, cost insights, and all of the benefits that your health insurance provides into one simple search box" is a natural extension of our enterprise health tech capabilities. We hope you enjoy learning more about our journey and evolving mission!Related ContentJudi Health™ Earns Best Healthcare InsurTech Solution in the 9th Annual MedTech Breakthrough Awards ProgramCapital Rx Unveils Healthcare's First Unified Pharmacy and Medical Claims Processing PlatformCapital Rx Adds More than 80 New Partnerships in 2024 and Eyes Another Year of Record Growth in 2025AH065 - The Bridge to Value-Based Care: Unified Claims Processing™, with Dr. Sunil BudhraniFor more information about Capital Rx and this episode, please visit Capital Rx Insights.
Today's guest is a powerhouse of resilience and resourcefulness: Janice Thayer of Curated Properties in Abingdon, Virginia. As a longtime member of Hosting Business Mastery (HBM), Janice shares her remarkable journey from navigating personal upheaval to launching a thriving short-term rental business — all while living on-site.In this inspiring conversation, Janice explains how she leveraged her design expertise and entrepreneurial background to build a luxury hosting brand, create multiple rentable spaces within her own home, and master the art of direct bookings. She walks us through the challenges of launching during COVID, navigating property renovations, and the mindset that has allowed her to double her revenue every year since opening.We also discuss how Janice uses dynamic pricing, multiple social media platforms, and creative SEO tactics to attract guests — including how she's achieved a 28% direct booking rate with zero paid advertising. Plus, she shares her strategies for guest screening, why she chooses to stay involved in HBM years after launching, and why resilience is every host's superpower.If you've ever wondered whether you can truly take control of your STR income and build a sustainable, guest-loved business — this is the episode for you.In this episode, we cover:• How Janice's interior design and hospitality background set her apart as a host• Starting a STR business after a major life transition• The pros and cons of living onsite with your guests• Renovating and reconfiguring space to maximize bookings• Why dynamic pricing was a game-changer for Janice's revenue• How she built a direct booking website that now accounts for nearly 30% of her income• Tips for using social media and SEO to attract direct bookings• Managing cleaners and maintaining high standards• Why Janice stays engaged in Hosting Business Mastery year after year• The #1 mindset shift every host needs to succeedResources mentioned:• Join our FREE upcoming workshop: thanksforvisiting.com/workshop• Watch on YouTube: The #1 Airbnb Revenue Management Metric You NEED to know about!• Follow Janice and explore her properties: linktr.ee/curatedpropertiesllcMentioned in this episode:Make More Money This Year | Join our LIVE Workshop!Minoan | Visit MinoanExperience.com and tell them TFV sent you!Hostfully | Go to https://www.hostfully.com/tfv and use TFV500 to get $500 off your subscription.Make More Money This Year | Join our LIVE Workshop!
※※※ While anchor Kim Hyun-jung is away on sabbatical, former presidential senior secretary for political affairs Lee Cheol-hee is filling in as host of "Kim Hyun-jung's News Show" ※※※
Samsung Electronics plus SK Hynix account for 25% of Korea's domestic stock market capitalization
Samsung Electronics' crisis is structural, not cyclical
It has failed to catch up in HBM, and its foundry business is running at a loss
In commodity chips it is being pushed aside by China... bold structural reform is essential
Units should be spun off under independent management... this is not something a special law can fix
■ Broadcast: CBS Radio "Kim Hyun-jung's News Show" FM 98.1 (07:10-09:00)
■ Host: Lee Cheol-hee (filling in for anchor Kim Hyun-jung)
■ Guest: Park Sang-in (Professor, Seoul National University Graduate School of Public Administration)
[The Economy in Your Hands (손에잡히는경제) interview] The future of Korea's HBM development and its survival strategy, with Professor Shim Dae-yong of Dong-A University (former Vice President, SK Hynix)
Hello everyone!! For our first HBM of 2025 we offer something quite strange: at first glance, something that's all about military propaganda and not quite in our podcast's usual range, Ace Combat 7! But, as we are joined by our friend Sid of, among other things, The Bad Game Hall of Fame, we dive deeper! And yeah... there's a lot of propaganda, but there are still attempts at telling a story, and even at being anti-war; we shall see. Happy new year, and here's to plenty of HBM in 2025! Enjoy!
Check out Sid's stuff! https://linktr.ee/beamsplashx
If you can and are interested in early episodes and the Here Be Extras, check our Patreon! https://www.patreon.com/leftpage
Also! If you're not there already, feel free to join our Discord, as we have been more talkative than usual, and plan to be so more and more! https://discord.gg/J2wgG3yrPN
Intro Music: Home, by Karl Casey @ White Bat Audio
Outro Music: Leve Palestina, Spartacus
In this episode, Ben Bajarin and Jay Goldberg discuss the recent Marvell Industry Analyst Day, focusing on the concept of accelerated infrastructure in data centers, the competitive landscape with Broadcom, and the significance of custom HBM in AI silicon. They explore how Marvell is positioning itself as a data center company and the implications of custom solutions in the evolving semiconductor industry. The conversation also touches on Nvidia's dominance and the future of data centers, emphasizing the need for optimization and the potential for a shift back to more affordable solutions. In this conversation, Ben Bajarin and Jay Goldberg discuss the recent developments surrounding Broadcom, particularly its stock surge attributed to optimism in AI. They delve into the company's market position, the significance of data center design, and the distinction between Total Addressable Market (TAM) and Serviceable Addressable Market (SAM). The discussion also covers the critical role of networking in AI, the rise of million-node data centers, and Broadcom's strategy regarding M&A and custom silicon. The conversation highlights the evolving landscape of AI and the competitive dynamics between major players in the industry.
[In-Depth Economic News] 1) Consumer groups: "OTT subscription fees should be disclosed along with their pricing basis" 2) The government's Capital Markets Act amendment: how does it differ from the Commercial Act? 3) China's 10-year government bond yield falls below 2% 4) US semiconductor export controls on China deal a blow to Korean-made HBM
- Kim Chi-hyung, economic news curator
- Cho Mi-hyun, reporter, The Korea Economic Daily
- Na Su-ji, reporter, The Korea Economic Daily