Categories
Think of this one as something of a "sequel," insofar as podcasts can have sequels. Or don't. It's not really our concern. Join Spencer, Ty, and Andy as they rank all of your favorite genres of film, such as "Suburban Gothic," "Devotional," "Gokudō," "Action," and "Commedia Sexy All'italiana." Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace. A big shout-out to our $10/month patrons: Celeste, Yung Zoe, Dane Stephen, Weedworf, James Lloyd-Jones, Sam Thomas, Josh O'Brien, Kilo, David, Sam, T, Rach, Tomix, Adam W, L M, Revidicism, Jennifer Knowles, Jeremy-Alice, Louis Ceresa, Charles Doyle, Dean, Axon, Themandme, Raouldyke, Stephen Tucker, Lawrence, Rebecca Kimpel, Malek Douglas, Jacon Sauber-Cavazos, Bernventers, William Copping, NewmansOwn, Heather-Pleather, Bunknown, Dinosarden, Bedi, Francis Wolf, King Krang, Anthony C, ASDF, Buffoonworld, Bavbiff, D Love, and Tugboat!
LeBron James puttering around in the year 2126, scoring layups on the wasteland mutants from his diesel-powered mechanical exoskeleton? It's more likely than you might think. Join Spencer, Ty, and Andy as they once again debate the battle prowess of such heroes as Popeye, Samus Aran, Chuck E. Cheese, Millard Fillmore, and many more. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace.
VOV1 - The Ministry of Industry and Trade remains committed to its target of 15-16% export growth over 2025, with total turnover expected to reach roughly USD 546-550 billion and a trade surplus of about USD 23 billion, contributing to the implementation of the Government's Resolution No. 01/NQ-CP. On March 12, 2026, the Ministry held its Q1 2026 trade promotion briefing with Vietnam's network of overseas Trade Offices and launched the "Digital Platform for Foreign Market Development". The conference was held both in person and online with Vietnamese Trade Offices worldwide, chaired by Acting Minister of Industry and Trade Lê Mạnh Hùng. Deputy Prime Minister Bùi Thanh Sơn attended and delivered guidance on several key issues. Reported by Nguyên Long.
Having chronic pain is a problem not only because of the pain itself, but also because of the frustration of having it
Join Kyle, Nader, Vibhu, and swyx live at NVIDIA GTC next week! Now that AIE Europe tix are ~sold out, our attention turns to Miami and World's Fair!

The definitive AI accelerator chip company has more than 10xed this AI Summer, and is now a $4.4 trillion megacorp… that is somehow still moving like a startup. We are blessed to have a unique relationship with our first-ever NVIDIA guests: Kyle Kranen, who gave a great inference keynote at the first World's Fair and is one of the leading architects of NVIDIA Dynamo (a datacenter-scale inference framework supporting SGLang, TRT-LLM, and vLLM), and Nader Khalil, a friend of swyx from our days in Celo in The Arena, who has been drawing developers at GTC since before they were even a glimmer in the eye of NVIDIA.

Nader discusses how NVIDIA Brev has drastically reduced the barriers to entry for developers to get a top-of-the-line GPU up and running, and Kyle explains NVIDIA Dynamo as a data-center-scale inference engine that optimizes serving by scaling out, leveraging techniques like prefill/decode disaggregation, scheduling, and Kubernetes-based orchestration, framed around cost, latency, and quality tradeoffs.
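The prefill/decode split mentioned above can be made concrete with a toy cost model. Everything here, the throughput numbers, the interference penalty, and the function itself, is an illustrative assumption, not Dynamo's actual scheduler:

```python
# Toy cost model for prefill/decode disaggregation (illustrative only;
# the numbers and the interference factor are made up for this sketch).

def request_latency(prompt_tokens, output_tokens,
                    prefill_tps=10_000, decode_tps=100,
                    colocated=True, interference=1.3):
    """Rough seconds to serve one request.

    prefill_tps: tokens/s processed during prefill (compute-bound).
    decode_tps:  tokens/s generated during decode (memory-bound).
    When prefill and decode share a GPU, long prefills stall decode
    steps; model that as a multiplicative interference penalty.
    """
    prefill = prompt_tokens / prefill_tps
    decode = output_tokens / decode_tps
    penalty = interference if colocated else 1.0
    return (prefill + decode) * penalty

colo = request_latency(8_000, 200, colocated=True)
disagg = request_latency(8_000, 200, colocated=False)
print(f"colocated:     {colo:.2f} s")
print(f"disaggregated: {disagg:.2f} s")
```

Even this crude model shows why separating the two phases onto dedicated workers helps at scale: the penalty term disappears, at the cost of shipping KV-cache state between pools.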
We also dive into Jensen's "SOL" (Speed of Light) first-principles urgency concept, long-context limits and model/hardware co-design, internal model APIs (https://build.nvidia.com), and upcoming Dynamo and agent sessions at GTC.

Full video pod on YouTube

Timestamps
00:00 Agent Security Basics
00:39 Podcast Welcome and Guests
07:19 Acquisition and DevEx Shift
13:48 SOL Culture and Dynamo Setup
27:38 Why Scale Out Wins
29:02 Scale Up Limits Explained
30:24 From Laptop to Multi Node
33:07 Cost Quality Latency Tradeoffs
38:42 Disaggregation Prefill vs Decode
41:05 Kubernetes Scaling with Grove
43:20 Context Length and Co Design
57:34 Security Meets Agents
58:01 Agent Permissions Model
59:10 Build Nvidia Inference Gateway
01:01:52 Hackathons and Autonomy Dreams
01:10:26 Local GPUs and Scaling Inference
01:15:31 Long Running Agents and SF Reflections

Transcript

Agent Security Basics

Nader: Agents can do three things: they can access your files, they can access the internet, and now they can write custom code and execute it. You only let an agent do two of those three things. If it can access your files and write custom code, you don't want it to have internet access, because that's where you get the full vulnerability, right? If it has access to the internet and your file system, you should know the full scope of what that agent is capable of doing; otherwise it can get injected or something can happen. So that's a lot of what we've been thinking about: how do we enable this, because it's clearly the future, but also, what are the enforcement points we can use to protect people?

swyx: All right.

Podcast Welcome and Guests

swyx: Welcome to the Latent Space podcast in the Chroma studio. Welcome to all the guests here. We are back with our guest host Vibhu. Welcome, good to have you back. And our friends Nader and Kyle from NVIDIA. Welcome.

Kyle: Yeah, thanks for having us.

swyx: Yeah, thank you.
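Nader's "two of the three" rule above can be sketched as a simple capability check. The capability names and the function are hypothetical, purely to illustrate the policy, not any real agent framework's API:

```python
# Sketch of the "agents get at most two of {file access, internet,
# code execution}" policy Nader describes. Names are illustrative.

from itertools import combinations

CAPABILITIES = {"files", "internet", "exec"}

def is_allowed(granted: set) -> bool:
    """Allow any agent configuration with at most two capabilities."""
    assert granted <= CAPABILITIES, f"unknown capability in {granted}"
    return len(granted) <= 2

# Every pair is acceptable; all three together is the dangerous combo,
# since file access + code execution + internet enables exfiltration.
for pair in combinations(CAPABILITIES, 2):
    assert is_allowed(set(pair))
assert not is_allowed(CAPABILITIES)
```

An enforcement point like this would sit wherever capabilities are granted, so the unsafe triple can never be configured in the first place.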
Actually, I don't even know your titles. I know you're, like, architect-something of Dynamo.

Kyle: Yeah, I'm one of the engineering leaders [00:01:00] and architects of Dynamo.

swyx: And you're director of something-something developers, developer tech.

Nader: Yeah.

swyx: You're the developers-developers-developers guy at NVIDIA.

Nader: Open source, agent marketing, Brev, DevRel tools and stuff. That's been the focus.

swyx: And we're recording this ahead of NVIDIA GTC, which is coming to town again, or taking over town, which we'll all be at. And we'll talk a little bit about your sessions and stuff.

Nader: We're super excited for it.

GTC Booth Stunt Stories

swyx: One of my favorite memories of Nader: you always do marketing stunts, and while you were at Brev, you had this surfboard that you went down to GTC with, and NVIDIA apparently liked it so much that they bought you. What was that like?

Nader: Yeah. Our logo was a shaka. We were always just trying to keep true to who we were. I think with some startups, you're trying to pretend that you're a bigger, more mature company than you are. And it was actually Evan Conrad from SF Compute...

swyx: Previous guest, yeah.

Nader: Amazing. Oh, really? Amazing. He was just like, guys, you're two dudes in a room, why are you [00:02:00] pretending that you're not? And so then we were like, okay, let's make the logo a shaka. We brought surfboards to our booth at GTC, and the energy was great. Some palm trees too.

Kyle: They actually poked out over the walls, so you could see the Brev booth from very far away.

Nader: Oh, that's so funny. So you remember it back then?

Kyle: Yeah, I remember it pre-acquisition.
I was like, oh, those guys look cool.

Nader: Dude, that makes sense. 'Cause we signed up really last minute, so we had the last booth, all the way in the corner. And I was worried that no one was gonna come. So that's why we had the palm trees, and we really came in with the surfboards. We even had one of our investors bring her dog, and she was just walking the dog around to try to bring energy toward our booth.

swyx: Steph.

Kyle: Yeah, she's the best.

swyx: You know, as a conference organizer, I love that. Everyone who sponsors a conference comes, does their booth, and they're like, "we are changing the future of AI" or some generic b******t. No: actually try to stand out, make it fun, right? And people still remember it after three years.

Nader: Yeah. You know what's so funny? I'll give you this clip if you want to add it [00:03:00] in. My wife, at the time my fiancée, was in medical school, and she came to help us, 'cause it was a big moment for us. And so we bought this Cricut, it's like a vinyl printer, 'cause how else are we gonna label the surfboard? So we got a surfboard, luckily I was able to purchase that on the company card. We got a Cricut, and it was just "fine-tuning for enterprises" or something like that, that we put on the surfboard. And it's 1:00 AM the day before we go to GTC, she's helping me put these vinyl stickers on, and she goes, "if you pull this off, you son of a b***h." And so pretty much right after the acquisition, I stitched that clip together with the acquisition announcement and sent it to our family group chat.

swyx: Oh yeah. Well, she made a good choice there. Was that basically the origin story for Launchable? And maybe we should explain what Brev is.

Nader: Yeah, yeah.
I mean, Brev is just a developer tool that makes it really easy to get a GPU. We connect a bunch of different GPU sources, so the basics of it is: how quickly can we SSH you into a GPU? Whenever we would talk to users, they wanted a GPU; they wanted an A100. And if you go to any cloud [00:04:00] provisioning page, usually it's three pages of forms, or somewhere in the forms there's a dropdown, and in the dropdown there's some weird code that you have to know to translate to an A100. And I remember just thinking: every time someone says they want an A100, the piece of text that says what they want is stuffed away in a corner. So we were like, what if the biggest piece of text was the thing the user is asking for? And so when you go to Brev, it's just big GPU chips with the type that you want.

swyx: With beautiful animations that you worked on. Like, now you can just prompt it, but back in the day, those were handcrafted, artisanal code.

Nader: Yeah, I was actually really proud of that, because I made it in Figma, and then I was really struggling to figure out how to turn it from Figma into React. So what it actually is, is just an SVG. I have all the styles, and when you change the chip, whether it's active or not, it changes the SVG code, and that somehow renders so it looks like it's animating. We just had the transition slow, but it's just a JavaScript function changing the underlying SVG. That was how I ended up figuring out how to move it over from Figma. But yeah, that's artisanal. [00:05:00]

Kyle: Speaking of marketing stunts, he actually used those SVGs to make these cards.

Nader: Oh yeah.

Kyle: A GPU gift card that he handed out everywhere.
Kyle: That was actually my first impression of that one.

swyx: Yeah, I think I still have one of them.

Nader: They look great. I have a ton of them still in our garage, actually; they just don't have labels. We should honestly bring them back. But I found this old printing press here, actually, just around the corner on Van Ness. It's a third-generation San Francisco shop. And so I come in, an excited startup founder, and they just have this crazy old machinery, and I'm in awe, 'cause the whole building is so physical. You're seeing these machines; they have pedals to move these saws and whatever. I don't know what the machinery is, but I saw all three generations: there's the grandpa, the father, and the son, and the son was around my age.

swyx: It's like a holy trinity.

Nader: It's funny, because I just took the same SVG and we printed it. It's foil printing, so they make a mold that's an inverse of the A100, and then they put the foil on it [00:06:00] and press it into the paper. And I remember once we got them, he was like, "hey, don't forget about us." I guess early Apple's and Cisco's first business cards were all made there. And so he was like, yeah, we get the startup businesses, but then as they mature, they kind of go somewhere else. I think we were talking with marketing about using them for something; we should go back and make some cards.

swyx: Yeah. You know, I remember, as a very, very small Brev investor, I was like, why are we spending time doing these stunts for GPUs?
Like, I think as a typical cloud hardware person, you go into AWS, you pick some instance code from a list, and you look at the specs. Why animate this GPU? But I do think it just shows the level of care that goes throughout Brev, and now NVIDIA.

Nader: I think that's the thing that struck me most when we first came in: the amount of passion that everyone has. You talk to Kyle, you talk to... every VP that I've met at NVIDIA goes so close to the metal. I remember, almost a year ago, my VP asked me, "hey, [00:07:00] what's Cursor? Are you using it? And if so, why?" I was surprised at this, and he downloaded Cursor and was asking me to help him use it, or just show him why we were using it. So, the amount of care that everyone has, and the passion and appreciation for the moment. This is a very unique time, so it's really cool to see everyone appreciate that.

swyx: Yeah.

Acquisition and DevEx Shift

swyx: One thing I wanted to do before we move over to research topics and the stuff Kyle's working on is just tell the story of the acquisition. Not many people have been through an acquisition with NVIDIA. What's it like?

Nader: It's a crazy experience. The thing that was most exciting for us: our goal was just to make it easier for developers. We wanted to make it easier to find access to GPUs. Oh, actually, your question about Launchable: a Launchable was just one-click deploys for any software on top of the GPU. Mm-hmm.
And so what we really liked about NVIDIA was that it felt like we just got a lot more resources to do all of that. NVIDIA's goal is to make things as easy for developers as possible, so there was a really nice synergy there. When it comes to an acquisition, I think how much the souls of the products align is going to speak to the success of the acquisition. So in many ways it feels like we're home. This is a really great outcome for us. I love brev.nvidia.com. You should use it.

Kyle: It's the front page for GPUs.

Nader: Yeah. If you want GPUs, you go there.

swyx: And internally it's growing very quickly. I don't remember, you said some stats there.

Nader: Yeah, I wish I had the exact numbers, but internally and externally it's been growing really quickly. We've been working with a bunch of partners, a bunch of different customers and ISVs. If you have a solution that runs on a GPU and you want people to use it quickly, we can bundle it up in a Launchable and make it a one-click run. And if you're doing things and you want a sandbox or something to run on, right, like OpenClaw: huge moment, super exciting, and we'll get into it more. Internally, people want to run this, and we know we have to be really careful about the security implications. Do we let this run on the corporate network? Security's guidance was: hey, [00:09:00] run it on Brev. It's a VM, it's sitting in the cloud, it's off the corporate network, it's isolated.
And so that's been our stance, internally and externally, about how to even run something like OpenClaw while we figure out how to run these things securely.

swyx: I think you were also the right team at the right time, when NVIDIA is starting to invest a lot more in developer experience, or whatever you call it. UX, or, I don't know, software. Obviously NVIDIA has always invested in software, but this is a different audience.

Nader: It's a wider developer base.

swyx: Yeah, right. So what is it called internally? What is this that people should be aware is going on there? Is it just called "developer experience," or is there a broader strategy here at NVIDIA?

Nader: NVIDIA always wants to make a good developer experience. The thing is, a lot of the technology is just really complicated. The reason AI is having a huge moment is not that, say, data scientists in 2018 were quiet then and are much louder now. The pie is growing, right? There's a whole bunch of new audiences. My mom's wondering what she can do with it. My sister taught herself how to code. I actually think AI is generally a big equalizer, and you're seeing a more technologically literate society. Everyone's learning how to code; there isn't really an excuse anymore. And building a good UX means you really understand who your end user is. And when your end users become such a wide variety of people, you have to almost reinvent the practice, right? You have to...
Kyle: Right, and you actually have to build more developer UX, because there are tiers of the developer base that were added. The hackers building on top of OpenClaw, for example, have never used a GPU. They don't know what CUDA is. They just want to run something.

Nader: Yeah.

Kyle: You need new UX that is not just "how do you program something in CUDA and run it?" When deep learning was getting big, we built Torch, but recently the number of [00:11:00] layers added to that developer stack has just exploded, because AI has become ubiquitous. Everyone's using it in different ways.

Nader: It's moving fast in every direction, vertical and horizontal.

Vibhu: Yeah, and you even take it down to hardware, like the DGX Spark. It's basically the same system as what you'd throw up on a big GPU cluster.

Nader: Yeah, it's amazing. Blackwell.

swyx: We saw the preview at last year's GTC, and that was one of our better-performing videos so far, and NVIDIA coverage so far. This will beat it.

Nader: Fingers crossed.

DGX Spark and Remote Access

Nader: Even when Grace Blackwell, or when DGX Spark, was first coming out, I got to be involved in that from the beginning of the developer experience.

swyx: You were involved.

Nader: Yeah. I mean, I just got an email; we just got thrown into the loop. It was actually really funny, 'cause I'm still pretty fresh from the acquisition, and I'm getting an email from a bunch of the engineering VPs about the new GPU system that we're putting out. And I'm like, okay, cool, Nader's now involved with this for the UX. I'm like...
What am I gonna do [00:12:00] here? I remember in the first meeting I was just kind of quiet, hearing engineering VPs talk about what this box could be, what it could do, how we should use it. And one of the first ideas, I think a quote was, "the first thing someone's gonna want to do with this is get two of them and run a Kubernetes cluster on top of them." And I was like, oh, I think I know why I'm here. The first thing we're doing is easy SSH into the machine. And then scoping it down from there: the person who wants to run a Kubernetes cluster on two Sparks has a higher propensity for pain than someone who buys it and wants to run OpenClaw right now, right? If you can make sure that's as effortless as possible, the rest becomes easy. So there's a tool called NVIDIA Sync; it just makes the SSH connection really simple. If you have a Mac or a PC or whatever, if you have a laptop and you buy this GPU and you want to use it, you should be able to use it like it's a GPU in the cloud, right? But there's all this friction around how you actually get into it. That's part of [00:13:00] Brev's value proposition: there's a CLI that wraps SSH and makes it simple. So our goal is just to get you into that machine really easily. And one thing we just launched at CES, it's still in early access, we're ironing out some kinks, but it should be ready by GTC: you can register your Spark on Brev.

swyx: So it's like remote-managed local hardware. Single pane of glass. Because Brev can already manage other clouds anyway, right?

Vibhu: Yeah, and you can use the Spark on Brev as well, right?

Nader: Yeah, exactly.
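We don't know the internals of NVIDIA Sync or the Brev CLI, but the effect Nader describes, a home Spark you can reach like any cloud node, reduces to something like generating an SSH config entry for the box. All hostnames, the user, and the relay host below are made up for illustration:

```python
# Illustrative sketch of "register your box, then SSH in from anywhere":
# generate an SSH config stanza for the machine. This is NOT the Brev
# CLI's actual implementation; every name here is hypothetical.

def ssh_config_entry(name, host, user="ubuntu", proxy=None):
    """Return an ssh_config stanza for a remote GPU box."""
    lines = [f"Host {name}",
             f"    HostName {host}",
             f"    User {user}"]
    if proxy:
        # Hop through a relay when the box sits behind home NAT,
        # so `ssh name` works from a cafe as well as the home LAN.
        lines.append(f"    ProxyJump {proxy}")
    return "\n".join(lines)

print(ssh_config_entry("my-spark", "10.0.0.42", proxy="relay.example.com"))
```

Appending a stanza like this to `~/.ssh/config` is what makes `ssh my-spark` feel identical whether the GPU is in a cloud region or under your desk.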
Nader: So you set it up at home, you run the command on it, and it'll essentially appear in your Brev account. Then you can take your laptop to a Starbucks or a cafe, and you can continue to use your Spark just like any other cloud node on Brev.

swyx: And it's just like a pre-provisioned data center in your home.

Nader: Yeah, exactly.

Vibhu: A tiny little data center.

Nader: Tiny little, the size of your phone.

SOL Culture and Dynamo Setup

swyx: One more thing before we move on to Kyle. You just have so many Jensen stories, and I love mining Jensen stories. My favorite so far is SOL. What is SOL?

Nader: SOL is actually, [00:14:00] of all the lessons I've learned, definitely my favorite.

Kyle: It'll always stick with you.

Nader: Yeah. You know, in a startup, everything's existential, right? We've run out of money; we were at risk of missing payroll; we've had to contract our team because we ran out of money. Because of that, you're always forcing yourself to understand the root cause of everything. If you get a date, if you get a timeline, you know exactly why that date or timeline is there. You're pushing every boundary, and you're not just accepting a "no" just because. As you start to introduce more layers, as you become a much larger organization, SOL is essentially: what is the physics? The speed of light moves at a certain speed, so if light's moving any slower, you know something's in the way. So before layering reality back in, of why this can't be delivered by some date, let's just understand the physics. What is the theoretical limit to how fast this can go? And then start telling me why.
'Cause otherwise people will start telling you why something can't be done. Actually, I think any great leader's goal is just to create urgency. [00:15:00]

Kyle: Create compelling events, right? SOL is a term NVIDIA uses to instigate a compelling event. You say: this is done. How do we get there? What is the minimum, as-much-as-necessary, as-little-as-possible thing that it takes for us to get exactly here? It helps you break through a bunch of noise instantly.

swyx: One thing I'm unclear about is: can only Jensen use the SOL card? Can someone else be like, no, get the b******t out? Because obviously Jensen can.

Kyle: Oh, no, no. Frontline engineers use it.

Nader: Yeah. I think it's not so much "get the b******t out" as "give me the root understanding," right? If you tell me something takes three weeks: well, what are the first principles? Why is it three weeks? What's the actual limit on why this is gonna take three weeks? Let's say you wanted to buy a new computer and someone told you it's gonna be here in five days. What's the SOL? Well, the SOL is: I could walk into a Best Buy and pick it up for you, right? So anything beyond that, is it practical? Say we want to give everyone in the [00:16:00] company a laptop; obviously not. So that's the SOL, and then if we have to get more than ten, suddenly there might be some constraints. And so now we can piece the reality back in.

swyx: So this is the Paul Graham "do things that don't scale," and this is also what people would now call high agency.
Nader: Yeah.

Kyle: It's actually really interesting, because there's a second, hardware angle to SOL that doesn't come up for all of the org. SOL is used culturally at NVIDIA for everything.

swyx: I'm also mining for... I think that can be annoying sometimes. Like, someone keeps pulling SOL on you and you're like, guys, we have to be stable, we have to f*****g plan.

Kyle: It's an interesting balance.

Nader: Yeah, I encounter that with Alec, actually. 'Cause we have a new conference, so we have goals for what we want to launch by the conference.

swyx: Where is this, GTC?

Nader: Well, we did it for CES, we did it for GTC DC before that, and we're doing it for GTC San Jose. Every time, we have a new moment [00:17:00] and we want to launch something, and we want to do so at SOL. And that does mean there's some level of prioritization that needs to happen. So it is difficult, right? You have to be careful with what you're pushing. Stability is important, and that should be factored into SOL. SOL isn't just "build everything and let it break"; that's part of the conversation. As you're layering in all the details, one of them might be: hey, we could build this, but then it's not gonna be stable for X, Y, Z reasons. One of our conversations for CES was: we can get registering your Spark with Brev into early access, but there are a lot of things we need to do in order to feel really comfortable from a security perspective. There's a lot of networking involved before we deliver that to users. So it's like, okay, let's get this to a point where we can at least let people experiment with it.
We had it in a booth, we had it in Jensen's keynote, and then let's go iron out all the networking kinks. That's not easy, so that can come later. That was the way we layered that back in.

Kyle: It's not really about saying you don't have to do the maintenance or operational work. It's more that it highlights how progress is incremental, right? What is the minimum thing we can get to? And then there's an SOL for every component after that. But there's the SOL that gets you to the [00:18:00] starting line, and that's usually how it's asked. On the other side, SOL came out of hardware at NVIDIA. SOL is literally: if we ran the accelerator, the GPU, at basically full speed with no other constraints, how fast would we be able to make a program go?

swyx: Right. So in training, you then work back to some percentage of, like, MFU, for example.

Kyle: Yeah, that's a great example. There's an SOL MFU, and then there's what's practically achievable.

swyx: Cool. Should we move on to Kyle's side? Kyle, you're coming more from the data science world. Whenever I meet someone who's worked in tabular stuff, graph neural networks, time series... when I go to NeurIPS, when I go to ICML, I walk the back halls, and there's always a small group of graph people, [00:19:00] a small group of tabular people, and almost no one else there. You know what I mean? It's important, interesting work if you care about solving the problems that they solve.

Kyle: Yeah.

swyx: But everyone else is just LLMs all the time.

Kyle: Yeah.
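Kyle's hardware sense of SOL, and the SOL-MFU versus achieved-MFU distinction, comes down to back-of-envelope arithmetic: the floor on a kernel's runtime is whichever is slower, moving the bytes or doing the math. The peak numbers below are round illustrative figures, not any particular chip's spec sheet:

```python
# "Speed of light" (SOL) back-of-envelope for a GPU kernel.
# Hardware constants are illustrative, not a real chip's spec.

PEAK_FLOPS = 1.0e15   # 1 PFLOP/s peak compute
PEAK_BW    = 3.0e12   # 3 TB/s memory bandwidth

def sol_seconds(flops, bytes_moved):
    """Lower bound on runtime: a kernel can't beat both the
    compute-bound time and the bandwidth-bound time."""
    return max(flops / PEAK_FLOPS, bytes_moved / PEAK_BW)

def mfu(flops, measured_seconds):
    """Model FLOPs utilization: achieved fraction of peak compute."""
    return flops / (measured_seconds * PEAK_FLOPS)

flops, data = 2.0e12, 8.0e9          # 2 TFLOPs of work, 8 GB of traffic
floor = sol_seconds(flops, data)      # bandwidth-bound in this example
print(f"SOL floor: {floor*1e3:.2f} ms, MFU at SOL: {mfu(flops, floor):.0%}")
```

Note that for a bandwidth-bound workload, the SOL MFU is already below 100%; "what's practically achievable" then sits somewhere below that floor-implied number.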
Kyle: I mean, it's like the black hole, right? Has the event horizon reached this yet at NeurIPS?

swyx: But those are transformers too, and those are also interesting things. Anyway, I just wanted to spend a little bit of time on that background before we go into Dynamo proper.

Kyle: Sure. I took a different path to NVIDIA. I joined six years ago, seven if you count when I was an intern. So I joined NVIDIA right out of college, and the first thing I jumped into was not what I'd done during my internship, which was some stuff for autonomous vehicles, heavyweight object detection. I jumped into recommenders, which were popular.

swyx: Yeah, he did RecSys as well.

Kyle: Yeah, RecSys. That was the tabular data of the time, right? You have tables of audience qualities and item qualities, and you're trying to figure out which member of [00:20:00] the audience matches which item, or, more practically, which item matches which member of the audience. At the time, we were really trying to turn recommenders, which had historically been a somewhat CPU-based workflow, into something that ran really well on GPUs. And it's since been done: there are a bunch of recommender libraries that run on GPUs. The common models, like the Deep Learning Recommendation Model (DLRM), which came out of Meta, and the Wide & Deep model released by Google, were heavily accelerated by GPUs, especially using the fast HBM on the chips to do vector lookups. It was very interesting at the time, and super relevant, because we were starting to get this explosion of feeds and things that required recommenders to just actively be on all the time.
And I sort of transitioned that a little bit towards graph neural networks when I discovered them, because I was like, okay, you can actually use graph neural networks to represent relationships between people, items, concepts — and that, that interested me. So I jumped into that at [00:21:00] Nvidia and, and got really involved for, like, two-ish years.
swyx: Yeah. Uh, and something I learned from Bryan Catanzaro — yeah — is that you can just kind of choose your own path at Nvidia.
Kyle: Oh my God. Yeah.
swyx: Which is not a normal big-corp thing. Yeah. Like, you, you have a lane, you stay in your lane.
Nader: I think probably the reason why I enjoy being at a, a big company — the mission is the boss. Probably, from a startup guy. Yeah.
swyx: The mission is the boss.
Nader: Yeah. Uh, it feels like a big game of pickup basketball. Like, you know, if you wanna play basketball, you just go up to the court and you're like, hey look, we're gonna play this game and we need three. Yeah. And you just, like, find your three. That's honestly what it feels like for every new initiative. Yeah.
Vibhu: It also, like, shows, right? Like, Nvidia just releasing state-of-the-art stuff in every domain. Yeah. Like, okay, you expect foundation models with Nemotron — voice, just randomly, Parakeet. Parakeet just comes out, another one, uh, voice.
Kyle: The Nvidia voice team has always been producing.
Vibhu: Yeah. There's always just, in every other domain, a paper that comes out, a dataset that comes out. It's like — I mean, it also stems back to what Nvidia has to do, right? You have to make chips years before they're actually produced. Right? So you need to know, you need to really [00:22:00] focus.
Kyle: The design process starts, like —
Vibhu: Exactly.
Kyle: three to five years before the chip gets to the market.
Vibhu: Yeah. I, I'm curious more about what that's like, right? So, like, you have specialist teams.
Is it just, like, you know, people find an interest, you go in, you go deep on whatever, and that kind of feeds back into, you know — okay, we, we expect predictions? Like, the internals at Nvidia must be crazy, right? You know? Yeah. Yeah. You know, you, you must — even without selling to people — you have your own predictions of where things are going. Yeah. And they're very based, very grounded. Right?
Kyle: Yeah. It, it, it's really interesting. So there are, like, two things that I think Nvidia does which are quite interesting. Uh, one is, like, we really index into passion. There's a big sort of organizational top-down push to, like, ensure that people are working on the things that they're passionate about. So if someone proposes something that's interesting, many times they can just email someone, like, way up the chain who would find this relevant, and say, like, hey, can I go work on this?
Nader: So — I worked at a, a big company for a couple years before, uh, starting on my startup journey, and, like, it felt very weird if you were to, like, email out of your chain, if that makes [00:23:00] sense. Yeah. The emails at Nvidia are like mosh pits.
swyx: Shoot.
Nader: And it's just, like, 60 people, just, whatever. And, like, there's this —
swyx: They get messy, like, reply-all —
Nader: Oh, it's in — it's insane. It's insane.
Kyle: They just help, you know. Email-maxing.
Nader: The context. But, but that's actually — so this is a weird thing where I used to be like, why would we send emails? We have Slack. I am the exact opposite now. I feel so bad for anyone who's, like, messaging me on Slack, ‘cause I'm so unresponsive.
swyx: Your email —
Nader: I'm email-maxing now. Email is different — email is perfect, because, man — email is great, right? Because important threads get bumped back up, right? Yeah, yeah. Um, and Slack doesn't do that.
So I just have, like, this casino going off on the right or on the left, and, like, I don't know which thread was from where or what — but, like, the threads get bumped. And then also there's just, like, the subject, so you can have, like, working threads. I think what's difficult is, like, when you're small — if you're not 40,000 people, I think Slack will work fine — but I don't know what the inflection point is. There is gonna be a point where that becomes really messy, and you'll actually prefer having email, ‘cause you can have working threads. You can CC more than nine people in a thread.
Kyle: You can fork stuff.
Nader: You can [00:24:00] fork stuff, which is super nice. Yeah. And so — but that is part of where you can propose a plan. You can also just start. Honestly, momentum's the only authority, right? So, like, if you can just start, start to make a little bit of progress and show someone something, and then they can try it — that's, I think, the most effective way to push anything forward. And that's both at Nvidia and, I think, just generally.
Kyle: Yeah, there's, there's the other concept that, like, is explored a lot at Nvidia, which is this idea of a zero-billion-dollar business. Like, market creation is a big thing at Nvidia. Like —
swyx: Oh, you want to go and start a zero-billion-dollar business?
Kyle: Jensen says we are completely happy investing in zero-billion-dollar markets. We don't care if this creates revenue. It's important for us to know about this market. We think it will be important in the future. It can be zero billion dollars for a while. I'm probably mangling his words here, but, like, you know — I'll give an example. Nvidia's been working on autonomous driving for a long time.
swyx: Like an Nvidia car.
Kyle: No, they, they've —
Vibhu: used the Mercedes, right? They're around the HQ, and I think it finally just got licensed out.
Now they're starting to be used quite a [00:25:00] bit. For 10 years you've been seeing Mercedes with Nvidia logos driving.
Kyle: If you're in, like, south Santa Clara — it's, it's actually — yeah. So, um, zero-billion-dollar markets are, are a thing. Like, you know, Jensen —
swyx: I mean, okay, look, cars are not a zero-billion-dollar market. But yeah, that's a bad example.
Nader: I think, I think he's, he's messaging, uh, zero today. Or even, like, internally, right? Like, it's like, uh, an org doesn't have to ruthlessly find revenue very quickly to justify its existence. Right? Like, a lot of the important research, a lot of the important technology being developed — that, that's kind of where —
Kyle: Research, research is very ideologically free at Nvidia. Yeah. Like, they can pursue things that they —
swyx: Were you research, officially?
Kyle: I was never in research, officially. I was always in engineering. Yeah. I'm in an org called Deep Learning Algorithms, which is basically just: how do we make things that are relevant to deep learning go fast?
swyx: That sounds freaking cool.
Vibhu: And I think a lot of that is underappreciated, right? Like, time series — this week Google put out the TimesFM paper. Yeah. A new time series paper. Uh, Semantic IDs [00:26:00] started applying transformers, LLMs, to — yes — RecSys. Yes. And when you think of the scale of companies deploying these, right — Amazon recommendations, Google web search — it's, like, huge scale, and —
Kyle: Yeah.
Vibhu: you want fast.
Kyle: Yeah. Yeah. Yeah. Actually, there's a fun moment that brought me, like, full circle. Like, uh, Amazon Ads recently gave a talk where they talked about using Dynamo for generative recommendation, which was, like, super — like, weirdly cathartic for me. I'm like, oh my God — it's supplanted what I was working on. Like, you're using LLMs now to do what I was doing five years ago.
swyx: Yeah. Amazing. And let's go right into Dynamo.
Uh, maybe introduce it — yeah, sure — top down. And yeah.
Kyle: I think at this point a lot of people are familiar with the term inference. Like, funnily enough, I went from, you know, inference being a really niche topic to it being something that's, like, discussed on, like, normal people's Twitter feeds.
Nader: It's, it's on billboards here now.
Kyle: Yeah. Very, very strange. Driving, driving — seeing just an inference ad on the 101. Inference at scale is becoming a lot more important. Uh, we have these moments, like, you know, OpenClaw, where you have these [00:27:00] agents that take lots and lots of tokens but produce incredible results. There are many different aspects of test-time scaling, so that, you know, you can use more inference to generate a better result than if you were to use, like, a short amount of inference. There's reasoning, there's querying, there's adding agency to the model — allowing it to call tools and use skills. Dynamo sort of came about at Nvidia because myself and a couple others were, were sort of talking about these concepts: that, like, you know, you have inference engines — like vLLM, SGLang, TensorRT-LLM — and they have, like, one single copy. They, they, they sort of think about things as, like, one single copy — like, one replica, right?
Why Scale Out Wins
Kyle: Like, one version of the model. But when you're actually serving things at scale, you can't just scale up that replica, because you end up with, like, performance problems. There's a scaling limit to scaling up replicas. So you actually have to scale out — to use, uh, maybe, some Kubernetes-type terminology. We kind of realized that there was, like, a lot of potential optimization that we could do in scaling out and building systems for data-[00:28:00]center-scale inference.
So Dynamo is this data-center-scale inference engine that sits on top of the frameworks — like vLLM, SGLang, and TensorRT-LLM — and just makes things go faster, because you can leverage the economy of scale. The fact that you have KV cache — which we can define a little bit later — uh, in all these machines, that is, like, unique, and you wanna figure out the ways to maximize your cache hits. Or you want to employ new techniques in inference, like disaggregation, which Dynamo introduced to the world in, in, in March — not introduced, there was academic work beforehand — but we were, you know, one of the first frameworks to start supporting it. And we wanna, like, sort of combine all these techniques into a modular framework that allows you to accelerate your inference at scale.
Nader: By the way, Kyle and I became friends on my first day at Nvidia, and I always loved it, ‘cause, like, he always teaches me new things.
swyx: Yeah. By the way, this is why I wanted to put the two of you together. I was like, yeah, this is, this is gonna be good.
Kyle: It's very, it's very different, you know. Like, we've, we, we've, we've talked to each other a bunch. [00:29:00] Actually, you asked, like, why, why can't we scale up?
Nader: Yeah.
Scale Up Limits Explained
Nader: Model — you said model replicas.
Kyle: Yeah. So scale up means assigning more —
swyx: Heavier?
Kyle: Yeah, heavier. Like, making things heavier. Yeah — adding more GPUs, adding more CPUs. Scale out is just, like, having a barrier, saying: I'm gonna duplicate my representation of the model — or a representation of this microservice or something — and I'm gonna replicate it many times. Handle load. And the reason that you can't scale, scale up, uh, past some point is, like, you know, there, there, there are sort of hardware bounds and algorithmic bounds on, on that type of scaling. So I'll give you a good example that's, like, very trivial. Let's say you're on an H100.
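On the cache-hit point above: one way to "maximize your cache hits" across many replicas is to route each request to the replica that already holds the longest cached prefix. The toy sketch below is purely illustrative and not Dynamo's actual router or API — the block size, hashing scheme, and greedy tie-breaking are all invented for the example.

```python
# Toy sketch of KV-cache-aware routing: send each request to the replica
# holding the longest matching cached prefix, tracked by cumulative
# token-block hashes. Illustrative only; a real router is more involved.

BLOCK = 4  # tokens per KV block (real systems use larger blocks)

def block_hashes(tokens):
    # Hash cumulative prefixes, so a hit implies the whole prefix matches.
    return [hash(tuple(tokens[: i + BLOCK])) for i in range(0, len(tokens), BLOCK)]

class Router:
    def __init__(self, n_replicas):
        self.cache = [set() for _ in range(n_replicas)]  # cached blocks per replica

    def route(self, tokens):
        hashes = block_hashes(tokens)
        # Replica with the most cached prefix blocks wins; ties go to index 0.
        best = max(range(len(self.cache)),
                   key=lambda r: sum(1 for h in hashes if h in self.cache[r]))
        self.cache[best].update(hashes)  # that replica now holds this prefix
        return best

r = Router(n_replicas=2)
a = r.route([1, 2, 3, 4, 5, 6, 7, 8])  # cold start: lands on replica 0
b = r.route([1, 2, 3, 4, 9, 9, 9, 9])  # shares the first block → replica 0 again
print(a, b)  # → 0 0
```

A production router also has to weigh load balance against cache affinity — always chasing the warmest cache can overload one replica.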
The maximum NVLink domain for H100 — for most DGX H100s — is eight GPUs, right? So if you scaled up past that, you're gonna have to figure out ways to handle the fact that now, for the GPUs to communicate, you have to do it over InfiniBand, which is still very fast, but is not as fast as NVLink.
swyx: Is it like one order of magnitude? Like, hundreds, or —
Kyle: It's about an order of magnitude, yeah. Okay. Um, so —
swyx: Not terrible.
Kyle: [00:30:00] Yeah. I, I need to, I need to remember the, the datasheet here. Like, I think it's about 500 gigabytes a second unidirectional for NVLink, and about 50 gigabytes a second unidirectional for InfiniBand. It, it depends on the, the generation.
swyx: I just wanna set this up for people who are not familiar with these kinds of layers and tiers of speed and all that.
Vibhu: Of course.
From Laptop to Multi Node
Vibhu: Also, maybe even just going, like, a few steps back before that: most people are very familiar with what you can use on your laptop — whatever, these, SGLang, vLLM — you can just run inference there.
Kyle: You can run it on that laptop.
Vibhu: You can run it on a laptop. Then you get to — okay, uh, models got pretty big, right? GLM-5 — they doubled the size. So, mm-hmm, uh, what do you do when you have to go from, okay, I can get 128 gigs of memory, I can run it on a Spark — then you have to go multi-GPU. Yeah. Okay, multi-GPU, there's some support there. Now, if I'm a company and I don't have, like — I'm not hiring the best researchers for this, right? But I need to go [00:31:00] multi-node, right? I have a lot of servers. Okay, now there's efficiency problems, right? You can have multiple 8×H100 nodes, but, you know — how do you do that efficiently?
Kyle: Yeah. How do you, like, represent them? How do you choose how to represent the model? Yeah, exactly right. That's a, that's, like, a hard question.
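Kyle's rough numbers — ~500 GB/s for NVLink vs ~50 GB/s for InfiniBand, unidirectional — make the scale-up cliff concrete: the same transfer takes about 10× longer once traffic leaves the NVLink domain. A sketch using those conversational (not datasheet) figures and a hypothetical payload size:

```python
# Order-of-magnitude cost of leaving the NVLink domain. Bandwidths are
# the rough numbers from the conversation, not datasheet values.

NVLINK_GBPS = 500  # GB/s unidirectional, approximate
IB_GBPS = 50       # GB/s unidirectional, approximate

def transfer_ms(size_gb: float, bandwidth_gbps: float) -> float:
    """Time to move size_gb over a link, in milliseconds (bandwidth only)."""
    return size_gb / bandwidth_gbps * 1000

payload_gb = 10  # hypothetical tensor/KV-cache slab to move between GPUs
print(f"NVLink:     {transfer_ms(payload_gb, NVLINK_GBPS):.0f} ms")  # → 20 ms
print(f"InfiniBand: {transfer_ms(payload_gb, IB_GBPS):.0f} ms")      # → 200 ms
```

Latency and congestion make the real gap workload-dependent, but the bandwidth ratio alone explains why communication-heavy parallelism is usually kept inside one NVLink domain.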
Everyone asks: how do you size it? Oh, I wanna run GLM-5, which just came out — new model. There have been, like, four of them in the past week, by the way. Like, a bunch of new models.
swyx: You know why, right? DeepSeek.
Kyle: No comment. Oh — yeah. But GLM-5, right? We, we have this new model. It's, it's, like, a large size, and you have to figure out how to both scale up and scale out, right? Because you have to find the right representation that you care about. Everyone does this differently. Let's be very clear: everyone figures this out in their own path.
Nader: I feel like a lot of AI — or ML, even — is, like, is like this. I think people think — you know, there was some tweet a few months ago that was like, why hasn't fine-tuning-as-a-service taken off? You know, that might be me. It might have been you. Yeah. But people want it to be such an easy recipe to follow. But even if you look at an ML model —
Kyle: It's specific to you. Yeah.
Nader: Yeah.
Kyle: And the [00:32:00] model —
Nader: The situation. And there's just so much tinkering, right? Like, when you see a model that has however many experts in the MoE model, it's like — why that many experts? I don't — you know, they tried a bunch of things and that one seemed to do better. I think when it comes to how you're serving inference, you know, you have a bunch of decisions to make, and there you can always argue that you can take something and make it more optimal. But I think it's this internal calibration, and appetite for continued calibration.
Vibhu: Yeah. And that doesn't mean, like, you know, people aren't taking a shot at this — like, Tinker from Thinking Machines, you know?
Kyle: Yeah. RL as a service. Yeah, totally.
Vibhu: It also gets even harder when you try to do big model training, right? We're not the best at training MoEs, uh, when they're pre-trained. Like, we saw this with Llama 3, right?
They're trained in such a sparse way that Meta knows there's gonna be a bunch of inference done on these, right? They'll open-source it, but it's very trained for what Meta's infrastructure wants, right? They wanna, they wanna inference it a lot. Now the question to basically think about is: okay, say you wanna serve a chat application, a coding copilot, right? You're doing a layer of RL, you're serving a model for X amount of people. Is it a chat model, a coding model? Dynamo, you know — back to that.
Kyle: It's [00:33:00] — like, yeah, sorry. So we, we sort of, like, jumped off of, you know, jumped, uh, on that topic. Everyone has, like, their own, own journey.
Cost Quality Latency Tradeoffs
Kyle: And I, I like to think of it as defined by, like: what is the model you need? What is the accuracy you need? Actually, I talked to Nader about this earlier. There are three axes you care about. What is the quality that you're able to produce — so, like, are you accurate enough, or can you complete the task with high enough performance? Yeah, yeah. Uh, there's cost — can you serve the model, or serve your workflow (because it's not just the model anymore, it's the workflow, it's the multi-turn with an agent), cheaply enough? And then: can you serve it fast enough? And we're seeing all three of these, like, play out. Like, we saw, we saw new models from OpenAI that, you know, are faster. You have, like, these new fast versions of models. You can change the amount of thinking to change the amount of quality, right? Produce more tokens, but at a higher cost and a, a higher latency. And really, like, when you start this journey of trying to figure out how you wanna host a model, you, you, you think about three things. What is the model I need to serve? How many times do I need to call it? What is the input sequence length? [00:34:00] What does the workflow look like on top of it? What is the SLA — the latency SLA — that I need to achieve?
Because there's usually some — this is usually, like, a constant. You, you know the SLA that you need to hit, and then, like, you try and find the lowest-cost version that hits all of these constraints. Usually, you know, you, you start with those things, and you say — you, you kind of do, like, a bit of experimentation across some common configurations. You change the tensor parallel size, which is a form of parallelism —
Vibhu: I'd say it goes even deeper. First, gotta think what model.
Kyle: Yes, of course, of course. It's like, it's like a multi-step design process, because, as you said, you can, you can choose a smaller model and then do more test-time scaling, and it'll equate to the quality of a larger model, because you're doing the test-time scaling, or you're adding a harness or something. So yes, it, it goes way deeper than that. But from the performance perspective, like, once you get to the model you need to host, you look at that and you say: hey, I have this model, I need to serve it at this speed. What is the right configuration for that?
Nader: You guys see the recent — there was a paper I just saw, like, a few days ago, that, uh, if you run [00:35:00] the same prompt twice, you're getting, like, double —
swyx: Just try it again.
Nader: Yeah, exactly.
Vibhu: And you get a lot. Yeah. But the, the key thing there is you give the context of the failed try, right? Yeah. So it takes a shot. And this has been, like, you know, basic guidance for quite a while. Just try again. ‘Cause, you know — just try again. Did you try again? All advice —
Nader: in life.
Vibhu: It's a paper from Google, if I'm not mistaken, right?
Nader: Yeah.
Vibhu: Yeah. I think it, it's, like, a little short paper. Yeah. Yeah. The title's very cute. And it's just, like, yeah: just try again, give it the task context.
Kyle: Multi-shot. You just, like, say, like, hey — like, you know, take, take a little bit more, take a little bit more information, try, and fail.
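The sizing loop Kyle describes — fix the latency SLA, sweep a few common configurations, take the cheapest one that meets it — can be sketched like this. Every number here (parallelism sizes, latencies, prices) is invented for illustration; a real sweep would also vary batch size, quantization, and more.

```python
# Toy version of the sizing loop: sweep parallelism configs, keep the
# cheapest one that meets the latency SLA. All numbers are invented.

configs = [
    # (tensor_parallel, inter_token_latency_ms, dollars_per_hour)
    (1, 45, 2.0),
    (2, 22, 4.0),
    (4, 12, 8.0),
    (8, 8, 16.0),
]

def cheapest_meeting_sla(configs, sla_ms):
    feasible = [c for c in configs if c[1] <= sla_ms]
    if not feasible:
        raise ValueError("no config meets the SLA; relax it or change the model")
    return min(feasible, key=lambda c: c[2])  # minimize cost among feasible

tp, lat, cost = cheapest_meeting_sla(configs, sla_ms=25)
print(f"TP={tp}, {lat} ms/token, ${cost}/hr")  # → TP=2, 22 ms/token, $4.0/hr
```

This is the "SLA is the constant, cost is the objective" framing from the conversation in its simplest possible form.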
Vibhu: And that basic concept has gone pretty deep. There's, like, um, self-distillation RL, where you, you do self-distillation, you do RL, and you have past failures, and, you know, that gives some signal. So people take — try it again — not strong enough.
swyx: Uh, for, for listeners, uh, who listen to here — Vibhu and I, we run a second YouTube channel for our paper club, where —
Kyle: Oh, that's awesome.
swyx: Vibhu just covered this. Yeah. Awesome. Self-distillation and all that. That's why he's up to speed on it.
Nader: I'll have to check it out.
swyx: Yeah. It, it's just a good practice. Like, everyone needs, like, a paper club, where, like, you just read papers together and the social pressure just kind of forces you to just —
Nader: We, we — there's, like, a big inference reading group at Nvidia. I feel so bad every time. He put it on, like, on our — he shared it.
swyx: One, one of your guys, uh, is, is big in that — I forget — Ishan? Yeah, yeah.
Kyle: Ishan's on my team, actually. Funny — there's a, there's a, there's an employee transfer between us. Ishan worked for Nader at Brev, and now he, he's on my team.
Nader: He was our head of AI. And then, yeah, once we got in — and —
swyx: Because I'm always looking for, like, okay, can, can I start another podcast that only does that thing? Yeah. And, uh, Ishan — I was trying to, like, nudge Ishan into, like, is there something here? I mean, I don't think there's — there's new inference techniques every day. So it's like, it's like —
Kyle: You would, you would actually be surprised — um, the amount of blog posts you see.
swyx: And if — there was a period where it was, like, Medusa, Hydra, what, Eagle, like, you know —
Kyle: Now we have new forms of decode — uh, we have new forms of speculative decoding, or new —
swyx: What, what are you —
Vibhu: excited?
And it's exciting when you guys put out something like Nemotron. ‘Cause I remember the paper on this — Nemotron 3, uh, the amount of, like, post-training tokens that the GPU-rich can just train on. And it, it was a hybrid state-space model, right? Yeah.
Kyle: It's co-designed for the hardware.
Vibhu: Yeah, co-designed for the hardware. And one of the things was always, you know, the state-space models don't scale as well when you do a conversion, or whatever — the performance. And you guys are like, no, just keep training. And Nemotron shows a lot of that. Yeah.
Nader: Also, something cool about Nemotron: it was released in layers, if you will — very similar to Dynamo. It's, it's, it's essentially — the pre-training and post-training datasets are released. Yeah. The recipes on how to do it are released. The model itself is released. It's the full model. You just benefit from us turning on the GPUs. But there are companies — like, uh, ServiceNow took the dataset and they trained their own model, and we were super excited and, like, you know, celebrated that work.
Vibhu: Different — Zoom is, Zoom is CGI, I think. Uh, you know, also just to add: a lot of models don't put out base models, and if there's that — why has fine-tuning not taken off? You know, you can do your own training.
Kyle: Yeah, sure.
Vibhu: You guys put out base models. I think you put out everything.
Nader: I believe so. I know [00:38:00] —
swyx: about base. Basically —
Vibhu: without base —
swyx: base can be cancelable.
Vibhu: Yeah. Base can be cancelable.
swyx: Yeah.
Vibhu: Safety training.
swyx: Did we get a full picture of Dynamo? I, I don't know if we —
Nader: What, what I'd love is — you, you mentioned the three axes. Like, break it down of, like, you know, what's prefill and decode, and, like, what are the optimizations that we can get with Dynamo?
Kyle: Yeah. That, that's, that's, that's a great point.
So, to summarize on that three-axes problem, right: there are three things that determine whether or not something can be done with inference — cost, quality, latency, right? Dynamo is supposed to be there to provide you, like, the runtime that allows you to pull levers to, you know, mix it up and move around the Pareto frontier — or the Pareto surface — that determines: is this actually possible with inference and AI today?
Nader: Gives you the knobs.
Kyle: Yeah, exactly. It gives you the knobs.
Disaggregation Prefill vs Decode
Kyle: Uh, and one thing that, like, we, we use a lot in contemporary inference — and is, you know, starting to, like, pick up in general knowledge — is this concept of disaggregation. So, historically, models would be hosted with a single inference engine, and that inference engine [00:39:00] would ping-pong between two phases. There's prefill, where you're reading the sequence, generating KV cache — which is basically just a set of vectors that represent the sequence — and then using that KV cache to generate new tokens, which is called decode. And some brilliant researchers, across multiple different papers, essentially made the realization that if you separate these two phases, you actually gain some benefits. Those benefits are, basically: (a) you don't have to worry about step-synchronous scheduling. So the way that an inference engine works is you do one step, and then you finish it, and then you start scheduling the next step. It's not, like, fully asynchronous. And the problem with that is — essentially, prefill and decode are, are actually very different, in terms of both their resource requirements and, sometimes, their runtime. So you would have, like, prefill that would, like, block decode steps, because you, you'd still be prefilling and you couldn't schedule, because, you know, the step has to end. So you remove that scheduling issue, and then you also allow yourself to, like,
So you remove that scheduling issue and then you also allow you, or you yourself, to like [00:40:00] split the work into two different ki types of pools.So pre-fill typically, and, and this changes as, as model architecture changes. Pre-fill is, right now, compute bound most of the time with the sequence is sufficiently long. It's compute bound. On the decode side because you're doing a full Passover, all the weights and the entire sequence, every time you do a decode step and you're, you don't have the quadratic computation of KV cache, it's usually memory bound because you're retrieving a linear amount of memory and you're doing a linear amount of compute as opposed to prefill where you retrieve a linear amount of memory and then use a quadratic.You know,Nader: it's funny, someone exo Labs did a really cool demo where for the DGX Spark, which has a lot more compute, you can do the pre the compute hungry prefill on a DG X spark and then do the decode on a, on a Mac. Yeah. And soVibhu: that's faster.Nader: Yeah. Yeah.Kyle: So you could, you can do that. You can do machine strat stratification.Nader: Yeah.Kyle: And like with our future generation generations of hardware, we actually announced, like with Reuben, this [00:41:00] new accelerator that is prefilled specific. It's called Reuben, CPX. SoKubernetes Scaling with GroveNader: I have a question when you do the scale out. Yeah. Is scaling out easier with Dynamo? Because when you need a new node, you can dedicate it to either the Prefill or, uh, decode.Kyle: Yeah. So Dynamo actually has like a, a Kubernetes component in it called Grove that allows you to, to do this like crazy scaling specialization. It has like this hot, it's a representation that, I don't wanna go too deep into Kubernetes here, but there was a previous way that you would like launch multi-node work.Uh, it's called Leader Worker Set. It's in the Kubernetes standard, and Leader worker set is great. 
It served a lot of people super well for a long period of time. But one of the things that it's struggles with is representing a set of cases where you have a multi-node replica that has a pair, right?You know, prefill and decode, or it's not paired, but it has like a second stage that has a ratio that changes over time. And prefill and decode are like two different things as your workload changes, right? The amount of prefill you'll need to do may change. [00:42:00] The amount of decode that you, you'll need to do might change, right?Like, let's say you start getting like insanely long queries, right? That probably means that your prefill scales like harder because you're hitting these, this quadratic scaling growth.swyx: Yeah.And then for listeners, like prefill will be long input. Decode would be long output, for example, right?Kyle: Yeah. So like decode, decode scale. I mean, decode is funny because the amount of tokens that you produce scales with the output length, but the amount of work that you do per step scales with the amount of tokens in the context.swyx: Yes.Kyle: So both scales with the input and the output.swyx: That's true.Kyle: But on the pre-fold view code side, like if.Suddenly, like the amount of work you're doing on the decode side stays about the same or like scales a little bit, and then the prefilled side like jumps up a lot. You actually don't want that ratio to be the same. You want it to change over time. So Dynamo has a set of components that A, tell you how to scale.It tells you how many prefilled workers and decoded workers you, it thinks you should have, and also provides a scheduling API for Kubernetes that allows you to actually represent and affect this scheduling on, on, on your actual [00:43:00] hardware, on your compute infrastructure.Nader: Not gonna lie. I feel a little embarrassed for being proud of my SVG function earlier.swyx: No, itNader: wasreallyKyle: cute. I, Iswyx: likeNader: it's all,swyx: it's all engineering. 
It's all engineering. Um, that's where I'm —
Kyle: technical.
swyx: One thing I'm, I'm kind of just curious about, with what you see at a systems level, everything going on here — mm-hmm — and, you know, we're scaling it up in, in multi — in distributed systems.
Context Length and Co Design
swyx: Um, I think one thing that's, like, kind of of-the-moment right now is people are asking: is there any SOL, sort of, upper bound, in terms of — let's call, just call it context length, for want of a better word — but you can break it down however you like.
Nader: Yeah.
swyx: I just think, like — well, yeah, I mean, clearly you can engage in hybrid architectures and throw in some state-space models in there, all, all you want, but it still looks very attention-heavy.
Kyle: Yes. Uh, yeah. Long context is attention-heavy. I mean, we have these hybrid models, um —
swyx: And most, most models, like, cap out at a million tokens of context, and that's it. Yeah. Like, for the last two years, that has been it.
Kyle: Yeah. The model-hardware-context co-design thing that we're seeing these days is actually super [00:44:00] interesting. It's, like, my, my passion — like, my secret side passion. We see models like Kimi or GPT-OSS — I use these because I, I know specific things about these models. So Kimi K2 comes out, right? And it's an interesting model. It's, like — like, a DeepSeek-style architecture. It's MLA. It's basically DeepSeek, scaled a little bit differently, um, and obviously trained differently as well. But they, they talked about why they made the design choices for context. Kimi has more experts, but fewer attention heads, and, I believe, a slightly smaller attention, uh, like, dimension — but I need to remember, I need to check that. Uh, it doesn't matter. But they discussed this actually at length in a blog post on Zhihu, which is, like —
swyx: Yeah.
Kyle: um, in, in China — Chinese Reddit.
swyx: Yeah.
Kyle: It's — yeah.
So it, it's, it's actually an incredible blog post. Uh, like, all the ML people I've seen on Zhihu are, like, very brilliant. But they, they talk about it — like, the creators of Kimi K2 [00:45:00] actually, like, talked about it on, on, on there, in the blog post. And they say: we, we actually did an experiment, right? Attention scales with the number of heads, obviously. Like, if you have 64 heads versus 32 heads, you do half the work of attention. You still scale quadratically, but you do half the work. And they made a, a very specific, like, sort of trade in their system, in their architecture. They basically said: hey, what if we gave it more experts — so we're gonna use more memory capacity — but we keep the amount of activated experts the same? We increase the expert sparsity, so we have fewer experts active — the ratio of experts activated to number of experts is smaller — and we decrease the number of attention heads.
Vibhu: And kind of for context: what the, what we had been seeing was, you make models sparser instead. So no one was really touching heads. You're just having, uh —
Kyle: Well, they, they did — they implicitly made it sparser.
Vibhu: Yeah, yeah. For, for Kimi, they did.
Kyle: Yes.
Vibhu: They also made it sparser. But basically what we were seeing was, people were at the level of: okay, there's a sparsity ratio — you want more total parameters, less active, and that's sparsity. [00:46:00] But what you see from papers, like, the labs — like Moonshot, DeepSeek — they go to the level of: okay, outside of just number of experts, you can also change how many attention heads, fewer attention layers, more attention layers. Layers, yeah. Yes, yes. So, and that's all basically coming back to — just to tie it together — hardware-model co-design, which is —
Kyle: hardware-model-context co-design.
Vibhu: Yeah.
Kyle: Right. Like, if you were training a, a model that was, like,
Kyle: If you were training a model that was really good at super-short-context tasks, you might design it in a way where you don't care about attention scaling, because it never hits the turning point where the quadratic curve takes over.

Nader: How do you consider attention or context a separate part of the co-design? The way I would have thought of it, hardware-model co-design would just be hardware-model-context co-design.

Kyle: Because the harness, and the context the harness produces, is part of the model once it's trained in.

Vibhu: Even though towards the end you'll do long-context training, you're not changing the architecture partway through training. I see.

Kyle: I mean, you can try.

swyx: You're saying [00:47:00] everyone's training the harness into the model.

Kyle: I would say to some degree.

swyx: There's co-design for the harness; I know there's a small amount, but I feel like not everyone has gone full send on this.

Kyle: I think it's important to internalize the harness that you think the model will be running in into the model itself.

swyx: Yeah. Interesting. Okay. Bash is like the universal harness.

Kyle: Right. I'll give an easy proof: if you can train against a harness and you're using that harness for everything, wouldn't you just train with the harness to ensure you get the best possible quality out of it?

swyx: Well, I can provide a counterargument, which is that you want to provide a generally useful model for other people to plug into their harnesses, right?

Kyle: Yeah, but harnesses can be open source, right?

swyx: Yeah.
swyx: I mean, that's effectively what's happening with Codex.

Kyle: Yeah.

swyx: But you may want a different search tool, and then you may have to name it differently.

Nader: I don't know how much people have pushed on this, but have people compared training a model for the harness versus [00:48:00] post-training for it?

swyx: I think it's the same thing; it's just extra post-training.

Nader: I see.

swyx: Cognition does this, of course, where if your tool is slightly different, you either force your tool to look like the tool they trained for, or you undo their training for their tool and then retrain. It's really annoying.

Kyle: I would hope that eventually we hit a certain level of generality with respect to training new tools.

swyx: This is not AGI. This is a really stupid "learn my tool, b***h." I don't know if I can say that, but my point is that I look at the slopes of the scaling laws, and this slope is not working, man. We are at a million token context
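The "turning point" where the quadratic attention curve takes over can be made concrete with a toy per-token FLOP model. The dimensions below are invented for illustration, not any real model's configuration.

```python
# Rough per-token FLOP model showing where attention overtakes the MLP.
# All dimensions here are made up for illustration.

D_MODEL = 4096
D_FF = 4 * D_MODEL          # MLP hidden size
N_HEADS = 32
HEAD_DIM = D_MODEL // N_HEADS

def mlp_flops_per_token() -> int:
    # Two matmuls, d_model -> d_ff -> d_model; multiply-accumulate counted as 2 FLOPs.
    return 2 * (D_MODEL * D_FF) * 2

def attn_flops_per_token(context: int) -> int:
    # QK^T and AV against `context` cached keys/values, summed over heads.
    return 2 * N_HEADS * (2 * context * HEAD_DIM)

def crossover_context() -> int:
    # Context length (doubling search) at which attention work matches the MLP's.
    c = 1
    while attn_flops_per_token(c) < mlp_flops_per_token():
        c *= 2
    return c

print(crossover_context())  # 16384 with these toy dimensions
```

Below the crossover, the MLP dominates and attention scaling barely matters; at million-token contexts, attention work in this toy model is dozens of times the MLP's, which is why long-context designs fight over head counts and hybrid layers.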
Laudetur Jesus Christus - Praised be Jesus Christ. The daily Radio Vatican program from Vatican News Tiếng Việt. In today's program: 0:00 News bulletin; 16:55 Sharing the Word of God: Fr. Đaminh Vũ Duy Cường, SJ, reflects on the Gospel for the Third Sunday of Lent; 24:55 Women religious in the Church: Kenyan sisters confront human trafficking in the digital age. --- These images belong to the Holy See's Dicastery for Communication. Any third-party use of these images is prohibited and will result in copyright claims, unless authorized in writing by the Dicastery for Communication. Copyright © Dicasterium pro Communicatione - All rights reserved.
We're not talking about nutrients. We're not talking about meal planning. We're talking about low-down, muddy, kick-you-in-the-balls-and-run-away combat between these here two methods of food conveyance. Join Spencer, Ty, and Andy as they debate which is yummier and more satisfying: the humble snack or the glorious meal? Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace. A big shout-out to our $10/month patrons: Celeste, Yung Zoe, Dane Stephen, Weedworf, James Lloyd-Jones, Sam Thomas, Josh O'Brien, Kilo, David, Sam, T, Rach, Tomix, Adam W, L M, Revidicism, Jennifer Knowles, Jeremy-Alice, Louis Ceresa, Charles Doyle, Dean, Axon, Themandme, Raouldyke, Stephen Tucker, Lawrence, Rebecca Kimpel, Malek Douglas, Jacon Sauber-Cavazos, Bernventers, William Copping, NewmansOwn, Heather-Pleather, Bunknown, Dinosarden, Bedi, Francis Wolf, King Krang, Anthony C, ASDF, Buffoonworld, Bavbiff, D Love, and Tugboat!
In the joyful atmosphere at the start of the Year of the Horse (Bính Ngọ), 57 elders aged 80 and above were honored at the Spring Celebration and Longevity Ceremony, held solemnly in Melbourne by the Indochinese Refugee Elderly Association of Victoria (IERA-VIC). The ceremony gathered a large number of members and many distinguished guests, as a tribute to the generation that helped build the Vietnamese community in Australia.
Staying up all night watching for missiles, then anxiously tracking the direction of falling bombs during the workday so he can run, Mr. Lê Mạnh in the UAE is determined to stay on at the oil refinery because he still carries his family's financial burden.
It was raining here today, and I can't deny that I was inspired by today's weather.
Allen, Rosemary, Yolanda, and Matthew discuss highlights from Blades USA, including the carbon blade debate. Plus, TPI Composites' bankruptcy sale hits major obstacles as partners dispute over $100M in claims. And Europe's offshore and onshore wind developers clash over state aid, with WindEurope's new CEO urging unity. Sign up now for Uptime Tech News, our weekly newsletter on all things wind technology. This episode is sponsored by Weather Guard Lightning Tech. Learn more about Weather Guard's StrikeTape Wind Turbine LPS retrofit. Follow the show on YouTube and LinkedIn, and visit Weather Guard on the web. And subscribe to Rosemary's "Engineering with Rosie" YouTube channel. Have a question we can answer on the show? Email us!

[00:00:00] The Uptime Wind Energy Podcast, brought to you by StrikeTape, protecting thousands of wind turbines from lightning damage worldwide. Visit striketape.com. And now, your hosts.

Allen Hall: Welcome to the Uptime Wind Energy Podcast. I'm your host, Allen Hall, and I'm here with Yolanda Padron, Rosemary Barnes, and Matthew Stead. Yolanda and Matthew have just wrapped up a couple of days at the Blades USA forum in Austin, Texas. Maybe we should start there. Thoughts on the forum this year? Things that were highlights?

Matthew Stead: Yeah. Lightning, root de-bonds. One positive was that there were a couple of startups there, so kudos to them for making the investment. There was a startup around data analytics, bringing machine learning in, and there was also another startup looking at recycling, [00:01:00] really trying to get that value chain through, grinding the material and turning it into some sort of valuable product. However, someone from EPRI said that at the moment the recycling path is about eight times more expensive than the landfill path. There was a lot of carbon discussion, actually.
Matthew Stead: There was a lot of discussion about repairs, about testing, about how maybe a carbon blade can last 40 years, so a lot of discussion about lifetime extension around carbon. But carbon blades are really, really hard to repair.

Allen Hall: That goes back to the comments Rosemary and Morton Hanberg made about carbon blades: should we be making carbon blades or not? I think Morton's opinion, and maybe Rosemary's, I don't want to speak for her, was that carbon blades are okay, but they are really difficult to repair. Almost impossible to repair. And is it [00:02:00] worth even building them?

Rosemary Barnes: I think if you consider the blade in isolation, then it probably is adding more headaches than it's worth. But carbon fiber is a bit of an enabler for improvements across the whole system of a wind turbine, because you can take a lot of weight out of a blade by using carbon fiber. It's never been cheaper to make a blade with carbon fiber than an equivalent blade with glass; you buy the more expensive carbon fiber blade because it's lighter, a lot lighter, and that reduces the requirements for basically every other component in the wind turbine, especially things like the pitch bearings. So you solve a lot of other problems, but you create blade problems. If you ask someone who only works on maintaining blades, they're going to say: why would you make a carbon fiber blade? It's so much headache. But that's not the reason they were ever made in the first place. [00:03:00] So you'd need to talk to somebody in front-end engineering, or someone from the sales team, about why they're going with a more expensive carbon fiber blade.
Rosemary Barnes: Even acknowledging that they probably underestimate how many O&M problems there are with carbon fiber blades, they're already aware that there are trade-offs. And there are non-blade reasons for taking that pain.

Allen Hall: Are there other fibers that could be substituted besides carbon? I know fiberglass is a good, relatively strong fiber, and carbon obviously is much stronger. But are there things in the middle that could be substituted that are non-conductive?

Rosemary Barnes: Yeah, there are, but carbon fiber is not just strong, it's really stiff, and that's its benefit. There's Kevlar, but it's not very stiff, so we would make a really heavy blade if we used Kevlar. It would probably be bulletproof, though, so I guess that would be a plus. I haven't looked into it recently, but nothing has [00:04:00] the performance specs and the cost specs you would need to replace carbon fiber.

Matthew Stead: One thing I picked up that I thought was pretty interesting was that by having a stronger carbon pultrusion, the backbone of the blade, it takes a little bit of pressure off the skin. And therefore the life of the blade, and the ability to keep running it because the skin is not so critical, those seem to be real pluses as well.

Rosemary Barnes: People talk about this in absolutes, but everything is just a continuum, right? You can make an all-glass blade that would last a thousand years if you really wanted to; you just have to make it very, very strong. It's all based on fatigue lifetime, and the smaller the strain on every component in the blade, the less fatigue damage is going to accumulate.
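The claim that lower strain means disproportionately less fatigue damage can be sketched with a Basquin-style power law. The exponent and strain values here are illustrative, order-of-magnitude choices, not real blade or composite data.

```python
# Basquin-style fatigue sketch: cycles to failure N = C * strain^(-m).
# The exponent m and the strain values are illustrative, not real blade specs.

def cycles_to_failure(strain: float, m: float = 10.0, c: float = 1.0) -> float:
    """Predicted cycles to failure under a given strain amplitude."""
    return c * strain ** -m

# A 10% reduction in strain amplitude (e.g. from a stiffer spar) multiplies
# predicted life by (1 / 0.9)^10, roughly 2.9x.
gain = cycles_to_failure(0.9) / cycles_to_failure(1.0)
print(round(gain, 2))
```

With a fatigue exponent around 10, even a 10% strain reduction nearly triples the predicted cycle life, which is why a modest stiffness gain can extend lifetime "by a lot."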
Rosemary Barnes: Making it a little bit stiffer will actually increase the lifetime by [00:05:00] a lot. I think the main benefit of pultrusions is just that you avoid a lot of the possibilities for manufacturing defects. It's easier to control the manufacture, because carbon fiber, much more so than glass fiber, depends on the fibers being perfectly straight. A little wrinkle is bad in glass fiber, but it's really bad in carbon fiber. So pultrusions mean you won't get wrinkles, and you can control the manufacturing process a lot better. But they are barely repairable, right? That's the trade-off. You can do some small repairs, but if you've got a full-thickness crack or something, it's going to be game over. You're not going to be building that up again.

Allen Hall: Delamination and bond line failures in blades are difficult problems to [00:06:00] detect early. These hidden issues can cost you millions in repairs and lost energy production. CIC NDT are specialists at detecting these critical flaws before they become expensive burdens. Their non-destructive test technology penetrates deep into blade materials to find voids and cracks traditional inspections completely miss. CIC NDT maps every critical defect, delivers actionable reports, and provides support to get your blades back in service. So visit cicndt.com, because catching blade problems early will save you millions.

Allen Hall: Let's keep going on the subject of blades. Imagine if you were selling your house and you told the bank you owe nothing on it. Then the bank shows up with a bill for over a hundred million dollars. That is essentially what's happening right now in the TPI Composites bankruptcy. The wind blade manufacturer canceled its [00:07:00] February 17th asset auction after only one bidder came forward.
Allen Hall: A firm called ECP Five LLC, which is part of Energy Capital Partners, based in New Jersey. But before TPI can hand over the keys, it has to settle up with its business partners. TPI told the court many of those partners were owed little or nothing. The partners checked their books and strongly disagree. Now the judge has a mountain of competing claims to sort through before the sale can close. And the claims are big. There are several large names listed, and if you go through the filings, Siemens Gamesa is probably the largest: it claims TPI owes about $84 million in unpaid inspection, repair, and replacement costs, plus under $22 million [00:08:00] under an apparent guarantee. Others include Aurora Energy Services, stating it is owed about $5 million for post-bankruptcy services, plus $38,000 from before the bankruptcy filing. The landlord in Iowa for the TPI facility there is objecting because they're owed some rent. Others include Oracle, which holds a lot of software licenses with TPI and says those licenses will not swap over to the new owner. So there's a series of these filings going on at the minute, and they're pushing the closing of the sale hearing back until March 9th, so they've got about another two weeks as we record right now. This is a big deal, although I have seen almost nothing about it in the press, because one, it's hard to find, and two, it's really [00:09:00] difficult to sort through. But it is a major milestone for TPI that they're going to be able to sell, or at least transfer ownership to, Energy Capital Partners. But GE Vernova, and Siemens Gamesa for that matter, are not involved, at least at the top level, which is, in my opinion, really odd.
Allen Hall: I thought GE Vernova would have been involved at least at some level; they have been supporting TPI through this process. But going forward, it doesn't look like much is happening with Vernova or Siemens Gamesa in terms of the operations of these facilities. Thoughts?

Rosemary Barnes: Yeah, I agree. It's strange that they wouldn't have taken that opportunity, and that makes me wonder what I don't know. Obviously it's not a strange decision to the people who made it; they've got a lot more information than us. So what is it that made it unappealing to them? That's my question. [00:10:00]

Yolanda Padron: What did TPI think was going to happen with all of that money that they owe everyone?

Allen Hall: Well, it's a bankruptcy hearing. Obviously they'd like to wipe that debt clean, and so would Energy Capital Partners. They don't want to pay the hundred million plus of whatever the court would dictate; you'd just like to get the assets. If you can do that, it's your cheapest option if you're Energy Capital Partners. But do you see Energy Capital Partners running the facilities? There's a lot of organization within TPI that manages those facilities and controls the operation, from the quality side to the engineering side. There are a lot of pieces to TPI here. Do you think they're just going to pick it up and run the company as it stands today?

Rosemary Barnes: Oh my goodness, I would be so nervous to buy blades from them in that situation. We've seen so many examples in the last few years of decisions made by senior management that have really compromised quality at the end of the day. In theory, yes, the factory has all the processes in place to do things [00:11:00] right, but as soon as they get the next new project, which they're doing constantly, right?
It’s not like they just make a blade and they just make it over and over again. They make many different kinds of blades. There’s decisions to be made and you’re trying to get the price right and the quality right. And then, you know, given that we know that TPI was not profitable the way they were doing it before, they’re gonna have to spend less money. Then somebody who isn’t from the industry is making those calls about where to save it. It just seems like totally implausible to me. Matthew Stead: Can I just add though, you know, TPI was mentioned multiple times at, um, at Blades, USA, and so, you know, a lot of people are relying on them or have relied on them and so forth. And so maybe this is a strategy about supporting the industry into the future. Like I think Alan, you, you said that they’re involved in, um, this investment business has other wind assets, so maybe it’s just like. Securing supply chain and, which I mean, that’s a pretty logical approach, isn’t it? Allen Hall 2025: Oh, it would be. Uh, they’re about 50% owners of Ted’s US onshore fleet and a number. There are [00:12:00] other projects they’re involved in a number of renewable projects. Uh, so it would make sense for them to try to keep the supply chain going. But the largest purchaser of GB GE turbines that I know of is NextEra. So you would think NextEra would want to step into the mix too and at least in all the court filings, I haven’t seen much from NextEra or nothing from them at all. It if Osted US is wanting to keep their supply chain and Energy Capital partners wanted to keep the supply chain going, that would make a lot of sense to me. However, I just don’t know if they have the infrastructure to manage it. As Rosemary has described on numerous occasions running LM wind power is not easy. There’s just a lot of moving pieces, supply chain problems. You’ve got people problems, you have quality problems, you have repair problems, warranty issues. It’s a lot to that business. 
It isn’t like you’re stamping out widgets. You, you have a responsibility to that product after it goes out into [00:13:00] service. So if you have problems out in service, you’re, you’re kind of on the hook for all those warranty claims. It’s complicated. Rosemary Barnes: You make it sound like I was running lm Yolanda Padron: Rosie runs the world. Rosemary Barnes: I just wanna make it clear I was not running lm Allen Hall 2025: Not yet. Rosie. There’s still time. Rosemary Barnes: I was ru running one very tiny, tiny corner of it. Yolanda Padron: I’d almost be curious ’cause like since ECP is so much into risk management and just, just in general, they have so many things that they are like part owners in, but they don’t necessarily manage the day to day hands on. Uh. I’d almost be curious to see if maybe they take a page out of Rosie’s book and try to make one thing. Well, Matthew Stead: mm, that’d be novel, wouldn’t it? Rosemary Barnes: It has actually been tried before. Um, you know, it’s, it’s uh, not something that has escaped the notice of blade engineers, uh, that if you make one thing, you can do it right. And wind turbine blades are a pretty similar there. No, you know, like great [00:14:00] differentiator between. How well performing the blades are from one company to another. I know at, at least at lm, they did have a blade that they designed, and their plan was to sell just heaps and heaps of those to multiple different manufacturers and just no one wanted it. Um, so it just quietly died. Um, so yeah, the, the concept is good. I think it’s. A little bit harder to pull off than you would hope. There are also some Chinese companies that are kind of selling just parts, generic parts. And so if you wanted to make your own wind turbine, um, company, if you wanted to be a wind energy o and m Yolanda, you could just buy an assortment of parts from Chinese manufacturers and put a. Yolanda Wind energy sticker on it and um, and, and, and you could be an an OEM. 
Rosemary Barnes: So it is possible. I haven't seen any of these out in the wild. I have [00:15:00] heard of people considering it for certain aspects of certain types of projects. So it kind of exists, in a way.

Matthew Stead: But the financial aspect, I mean, that's Accounting 101. You've got to know your assets, and to owe people a hundred million dollars, that's absolutely shocking, really.

Allen Hall: They owed a lot more than that before the bankruptcy. It is a lot of money.

Matthew Stead: How do you miss that?

Allen Hall: Well, I don't think they missed it. I just think the warranty claims, some of the repair work that was going on, and what sounded like price discounting to some of the OEMs just caught up to them. But at the end of the day, I guess the question is: does TPI as an entity remain? Obviously the Vestas portion will, because Vestas is going to make them Vestas factories, in a sense, and integrate them as part of their overall operations. But Vernova is not interested, Siemens is not interested, at least as we speak, and no one's [00:16:00] making any noise over at Nordex. It does leave these assets questionable as to what the real value is. We haven't heard how much ECP has paid for them yet. For the Vestas factories that were purchased, the two TPI factories in Mexico, I think Vestas paid about $10 million for each factory, which is a really inexpensive price for new factories, because Vestas had talked at one point, a year or two ago, about standing up a new factory, saying it would cost roughly half a billion dollars to do. So buying that same asset for $10 million is a deep, deep discount. Maybe Vestas figures: hey, it's 20 million bucks, plus they got the India operations; if it all goes sour, it's not that much money and we're okay. Whereas Vernova decided not to participate.
Allen Hall: As wind energy professionals, staying informed is crucial and, let's face it, difficult. That's why [00:17:00] the Uptime podcast recommends PES Wind Magazine. PES Wind offers a diverse range of in-depth articles and expert insights that dive into the most pressing issues facing our energy future. Whether you're an industry veteran or new to wind, PES Wind has the high-quality content you need. Don't miss out: visit peswind.com.

Allen Hall: Today, over in Denmark, a fight has been brewing between offshore and onshore wind developers. Ørsted wants state aid brought back for offshore wind auctions; onshore developers say that would tilt the playing field against them, and some have even walked out on their own trade group over it. Now the new CEO of WindEurope, Tina Van Stratton, is stepping into the middle of that discussion with a simple message: we need both, so don't let offshore and onshore wind divide us. Nearly 90% of Europe's installed wind capacity currently sits on land, and [00:18:00] she says that is not going to change anytime soon. So there is a big dispute here. There does seem to be a large amount of money being poured into offshore wind, and requests for governments to support offshore wind, while at the same time onshore wind, which has been the primary growth market for wind in Europe, is getting the cold shoulder, in a sense. How does this play out, everyone? Is there a good solution, or is the need for offshore wind so great that they have to ignore onshore wind development for a couple of years?

Matthew Stead: I think we should just all be friends. Really, we need both, for the diversity. I'll leave the technical topics to Rosie, but really, I think we need both. It would be crazy to drop the onshore industry.

Yolanda Padron: Yeah. I mean, it makes sense, since Ørsted in Europe doesn't have any onshore anymore.
Right. So it’s just [00:19:00]offshore. It would make sense that they really wanna push for help for themselves. And it’s, it’s great. It, it’s, it’s great to help, but I, I agree with Matt. Allen Hall 2025: Well, the Northern Europe and Scandinavian countries are talking about 100 gigawatts in the water by what, 2050? Something of that sort. So that’s a lot of energy in the water. In order to do that, you have to devote a number of resources to it, which. Will mean onshore wind is not gonna get the support it probably deserves, even though it has a proven track record. Rosemary Barnes: I just think it, it’s really interesting because I guess wind is, um, a very Europe. LED industry. Um, and so yeah, in Europe, e everything big and exciting is in offshore and the volume is in offshore. Um, I feel like that’s kind of filtered through to other regions though, because I mean, in Australia we don’t even have any offshore wind yet. We are probably getting some, but you go to any wind energy event, it’s gonna be. [00:20:00] More than 50% offshore wind and sometimes like 90% offshore wind, um, focused, which is, I think crazy when onshore is, is exists and has plenty of problems that need to be solved, and we need to be building more, a lot faster. I, I do actually wish that. If we could spend as much of the, you know, like some of the effort and the political effort that’s going into paving the way for offshore wind, I think would be much better spent on solving the problems. Um, the obstacles stopping us from rolling out onshore wind faster. Because we’re not on track in Australia to meet our renewable energy targets if we can’t get that under control. And then in the US yes you have some offshore wind, but it is not a growth industry at the moment or it’s not very appealing at the moment, at least. Right. 
Rosemary Barnes: I don't know how much you talk about it there, but I do hear a whole lot of talk about offshore compared to how important it is for regions outside of Europe.

Yolanda Padron: I think it's important to [00:21:00] note that when you have a lot of offshore wind in your fleet, you can sometimes test out products onshore. Of course it's not the exact same conditions, but you can test products to a degree onshore. And I've seen owner-operators that have to go across continents just to test a product, because it's cheaper to do that onshore than to do it offshore at your home site, in your backyard. From an R&D standpoint, it would really benefit everyone if they gave more attention to onshore.

Rosemary Barnes: When I was at LM, my key team member, an electrical engineer, had done a bunch of work for a system that was only implemented on an offshore wind farm. And it sucked up so much time when things started going wrong with it, even small things, and he was the only one [00:22:00] who could fix it. If you've got a five-minute job to do, like turning something off and on again, or reconnecting something, that's a whole day of work offshore. And not a normal day, but a 12-hour day: you go out in the morning, they go around in a boat and drop people off, and they don't come get you when you're done 10 minutes later; they come get you at the end of the day when they're picking everyone up again. So it was incredibly challenging, for him personally and for the team. And I'm sometimes advising companies that have offshore wind technologies.
And I’m always advising anything that you can test on shore, do it and get creative about it as well. ’cause you might think that you can’t, you certainly can’t get all the way there without testing in your real operating environment. But any problem that could happen onshore that you, um, learn about when it’s onshore is gonna cost you probably like, you know, one 10th as much [00:23:00] to fix. Um. So, and, and the time as well. So, yeah, I, I think that you’re right that we should be actually considering onshore as an opportunity for, um, improving offshore technology as well. Allen Hall 2025: Can we talk about, uh, data centers for a minute? Just off the top of mind, I’ve been listening to a number of podcasts over the last month or two talking about powering AI data centers and how much coal or natural gas. It’s gonna be needed to provide the stable, reliable power that these data centers supposedly need. In the meantime, there’s like this industry being built, uh, and you see the, the purchases of gas turbines going out to like, what, 2032? I think it’s what Renova is talking about now is when you could actually get in line for a gas turbine. Other manufacturers or gas turbines are basically saying the same thing in the meantime. [00:24:00] Elon Musk and SpaceX are talking about putting AI data centers up in space where you don’t have any regulatory issues. You don’t have to burn coal or natural gas or any of these things. So the, the ground-based AI data centers appear to be locked into making these really expensive buildings and assets and putting generation and transmission and, and this infrastructure together, which will cost them. Hundreds of millions at a minimum, likely tens of billions of dollars to do, and that’s just in the United States. Meanwhile, SpaceX is really on a pathway of doing this up in the sky for probably a fraction of the cost. Is there a break point here? 
Allen Hall: Because it does seem like the natural gas, coal, and petroleum industry, and the on-the-ground building people, are ignoring that SpaceX has a [00:25:00] capability of doing this. If Musk and SpaceX decide to do it, all those gas turbine orders, all that infrastructure, all the gas pipelines, all the drilling that would have to happen, would just go immediately: poof, gone.

Rosemary Barnes: I don't know about immediately, because we're not at the point yet where you can just launch a data center into space, so there is a bit of a transition period. I also think it's overblown; you might have even fallen into the trap where you go: oh, data centers need more energy, therefore it has to be coal or gas or nuclear.

Allen Hall: Nope, I agree with you.

Rosemary Barnes: Those things aren't quick to build either. If you truly wanted to do it quickly, you'd be putting in heaps of solar panels and batteries, and wind turbines where that made sense. But that said, I don't think space-based data centers are farfetched at all. I guess the biggest [00:26:00] challenges are the cooling and heating requirements; space has very large temperature fluctuations, so you're going to need to design for that carefully. I don't think it's insurmountable. And then the next thing is the cost of launch, which I'm sure you're about to tell me is dropping dramatically. It's got a very good learning curve, space launch, and SpaceX is probably the main reason why it's just dropping and dropping. So I don't think it's unrealistic at all. I don't know the timeframe; you would know more, Allen, you work in aerospace. I just follow it for general interest.
Matthew Stead: I reckon it's stupid. It's really stupid on a number of grounds. First of all, why do that when [00:27:00] I can't see how it can ever be more cost-effective? And you really should be putting that effort into things like better healthcare and so forth. I mean, what a waste of resources. But why? Allen Hall 2025: Because it's a lot less expensive and it's faster. Matthew Stead: You'd do it in the ocean before that, wouldn't you? Rosemary Barnes: No, the ocean still has the problem of how you power it. You get 24/7 solar power in space; that's what you get, which you can't get on Earth. Matthew Stead: Or you put it next to a wind farm and you make the load go up and down depending on the wind. I mean, seriously, there are so many other ways of doing it. You put it next to wind and solar. Rosemary Barnes: I agree with you, Matt. I think the bulk of the solution with data centers is gonna come from, one, demand not being what people think it is today. The numbers that get reported are just the absolute best case scenario, and then multiplied by three or four times because they're looking at different options for locating each of the data centers they plan to make. So I wouldn't be surprised if we end up with 10% of what people think we're gonna get. [00:28:00] That's the first thing. Secondly, people assume that it needs to be 24/7, a hundred percent reliable power, and that's simply not true; not everything needs to be done at the exact time it's requested. There are heaps of things that can be shifted, and when the price differential is there, people are naturally going to choose that. In fact, there are already some companies offering different levels of reliability for different prices.
And companies can choose which of their processes can be put on hold. A lot of the training stuff doesn't need 99.999% reliability; you're probably happy with 90% reliability. So if it costs a whole lot less, then you will. I agree with you, Matt, that that's gonna take most of it. But I do still think that for the super-reliable data centers, I bet we see at least one. Even if it's just because Elon Musk is the type to push something through [00:29:00] first and wait for the market to catch up later, maybe that will be the reason, but I honestly think it's more than 50% likely that we see a data center in space in the next decade. Matthew Stead: It would make more sense to drill a hole to the center of the earth and get the hot dry rock. Rosemary Barnes: Or there's also plenty of geothermal, deep geothermal projects as well. Matthew Stead: Yeah, it's just ridiculous. Rosemary Barnes: I think we've had our first hot take from Matthew, so some sort of sound effect needs to be added here, Claire. Allen Hall 2025: That wraps up another episode of the Uptime Wind Energy Podcast. If today's discussion sparked any questions or ideas, we'd love to hear from you. Just reach out to us on LinkedIn, and don't forget to subscribe so you never miss an episode. And if you found value in today's conversation, please give us a review; it really helps other wind energy professionals discover the show. For Rosemary and [00:30:00] Matthew, I'm Allen Hall, and we'll see you next week on the Uptime Wind Energy Podcast.
Search inside the app stores is changing — and AI is accelerating that shift. In this episode, we speak with Dave Bell, CEO at Gummicube, about how artificial intelligence is transforming the way users discover apps. Dave explains why search is becoming more conversational and intent-driven, how natural language queries are reshaping rankings, and why the era of optimizing for a single dominant keyword is fading. As users ask longer, more specific questions — both inside the App Store and through tools like ChatGPT — ASO strategies must evolve to reflect how people actually search. If you're responsible for app visibility, organic growth, or ASO strategy, this episode offers a timely look at where search is heading next. Without any further ado, let's get started. Today's topics include: how AI is changing app store search behavior; natural language queries and intent-based ranking; why single-keyword optimization no longer works; the growing role of LLMs in app discovery; Apple opening the App Store to web indexing; and what AI-driven search means for future ASO strategy. Links and Resources: Dave Bell on LinkedIn; Gummicube; Business Of Apps - connecting the app industry. Quotes from Dave Bell: "It's not about looking for the one keyword to rule them all. It's not The Lord of the Rings — it's about understanding all the ways users might search and find your app." "Users are really being retrained both in the way that they search for information and in terms of what results they expect from a natural search." "LLM models are now including summaries and links to apps that best fit a user's prompt, giving users a new path into the app stores." Host: Business Of Apps - connecting the app industry since 2012
It happens to me, as I think it happens to anyone, when people ask us: hey, what did you eat today?
Laudetur Jesus Christus - Praised be Jesus Christ. The daily Vatican Radio program from Vatican News Tiếng Việt. Today's program: 0:00 News bulletin; 16:45 Reflection on the Word of God: Fr. Bartolomeo Nguyễn Anh Huy, SJ, reflects on the Word of God for the Second Sunday of Lent; 23:36 Women religious in the Church: the Missionary Sisters of the Eucharist in Manila bring the presence of Jesus to the poor. --- These images belong to the Holy See's Dicastery for Communication. Any use of these images by third parties is prohibited and constitutes copyright infringement unless authorized in writing by the Dicastery for Communication. Copyright © Dicasterium pro Communicatione - All rights reserved.
In this episode of the Homeopathy247 podcast, host Mary is joined by expert homeopath Robin Gladstone to discuss a topic that touches many families: asthma. They dive into how homeopathy works alongside conventional medicine to provide deep, long-term relief for children and adults. Understanding Asthma and Its Triggers: Robin explains that many cases, especially in children, involve allergic asthma. While winter is often blamed, many children find their symptoms "tick up" in the springtime due to hay fever, pollen, and increased outdoor sports. Common triggers include exercise, since physical activity frequently brings on coughing or wheezing; allergens, such as sensitivity to cats, milk, or sweets; and environment, including dampness, moldy leaves in the fall, or high pollen counts in spring. Prescribing for the Person, Not Just the Flare: A key theme of this episode is that homeopathy for asthma doesn't just treat the "flare" or the emergency; it works at a deeper, constitutional level to change how the body reacts over time. Robin shares a success story of a 4-year-old boy who frequently visited the hospital for severe wheezing. By using Tuberculinum and LM potencies (gentle daily liquid doses), he was eventually able to have sleepovers at houses with cats without any reaction. Modern Homeopathic Prescribing: LM Potencies & Compounding: Robin explains specific methods used in her practice: LM potencies, gentle remedies used for chronic conditions, designed to be taken daily to nudge the "vital force" without causing large aggravations; compounding, turning pellet remedies into a liquid solution, which is legally required in parts of Canada; and succussing, shaking the bottle before each dose to slightly "bump up" the potency, keeping the healing response active. Mainstream Medicine and Homeopathy Together: It is important to remember that homeopathy is complementary.
Robin encourages parents to always use life-saving medications like nebulizers or albuterol when needed in emergencies. Homeopathy works alongside these to make the lungs stronger so that, eventually, those emergency medications are needed less and less. Links in this episode: Robin Gladstone's website: https://www.homeopathicfamilywellness.com/ Subscribe to our YouTube channel and be updated with our latest episodes. You can also subscribe to our podcast channels available on your favourite podcast listening app below: Apple Podcast: https://podcasts.apple.com/us/podcast/homeopathy247-podcast/id1628767810 Spotify: https://open.spotify.com/show/39rjXAReQ33hGceW1E50dk Follow us on our social media accounts: Facebook: https://www.facebook.com/homeopathy247 Instagram: https://www.instagram.com/homeopathy247 You can also visit our website at https://homeopathy247.com/
Chris Cieslak, CEO of BladeBug, joins the show to discuss how their walking robot is making ultrasonic blade inspections faster and more accessible. They cover new horizontal scanning capabilities for lay down yards, blade root inspections for bushing defects, and plans to expand into North America in 2026. Sign up now for Uptime Tech News, our weekly newsletter on all things wind technology. This episode is sponsored by Weather Guard Lightning Tech. Learn more about Weather Guard's StrikeTape Wind Turbine LPS retrofit. Follow the show on YouTube and LinkedIn, and visit Weather Guard on the web. And subscribe to Rosemary's "Engineering with Rosie" YouTube channel here. Have a question we can answer on the show? Email us! Welcome to Uptime Spotlight, shining light on wind energy's brightest innovators. This is the progress powering tomorrow. Allen Hall: Chris, welcome back to the show. Chris Cieslak: It's great to be back. Thank you very much for having me on again. Allen Hall: It's great to see you in person, and a lot has been happening at BladeBug since the last time I saw BladeBug in person. Yeah, the robot looks a lot different, and it has really new capabilities. Chris Cieslak: So we've continued to develop the ultrasonic non-destructive testing capabilities of the BladeBug robot, but what we've now added is the ability to do horizontal blade scans as well. So we're able to do blades that are in lay down yards or blades that have come down for inspections, as well as up tower. So we can do up-tower and down-tower inspections. We're trying to capture the opportunity to inspect blades after transportation, when they get delivered to site, to look [00:01:00] for any transport damage or anything that might have been missed in the factory inspections. And then we can do subsequent post-installation inspections as well, to make sure there's no mishandling damage on those blades.
So yeah, we've been refining what we can do with the NDT side of things and improving its capabilities. Joel Saxum: Was that need driven from market response, people saying, hey, we like the BladeBug product, we like what you're doing, but we need it here? Or did you guys just say, hey, this is the next thing we can do, why not? Chris Cieslak: It was very much market response. We had a lot of inquiries this year from OEMs and blade manufacturers across the board with issues within their blades that need to be inspected on the ground, up tower, any which way they can. There was no rhyme or reason which was better, but the fact that they wanted us to improve the ability to do it horizontally has led to the sort of modifications that you've seen, and now we're doing down-tower root and blade scans really fast. Joel Saxum: I think the important thing there too is the way the robot is built [00:02:00] now. When you see NDT in a factory, a robot rolls along this perfectly flat concrete floor and does this and does that. But the way this robot is built, if a blade is sitting in a cradle trailing edge up, or if it's flap-wise, any which way, the robot can adapt to it, right? And the idea is, we looked at it today, and with the new cage and everything you have around it, with all the different encoders for the heads, you can collect data however it's needed. If it's rasterized, if there's a vector, if there's a line, if we go down a bond line, if we need to scan a two-foot-wide path down the middle of the top of the spar cap, we can do all those different things in all kinds of orientations. That's a fantastic capability. Chris Cieslak: Yeah, absolutely. And that's again for the market needs. So we are able to scan maybe a meter wide in one chord-wise pass of that probe whilst walking in the span-wise direction.
So we're able to do that raster scan at various spacings. If you've got a defect that you wanna find that's a maximum of 20 mm, we'll just have a 20 mm step [00:03:00] size between each scan. If you've got a bigger tolerance, we can have 50 mm, a hundred mm; it's so tuneable, and it removes any of the variability that you get from human-to-human operators doing that scanning. This is all about repeatable, consistent, high-quality data that you can then use to make real, informed decisions about the state of those blades and act upon. So this is not an alternative to humans; it's just an evolution of how humans do it. We can just do it really quickly. We say it's like six times faster than a human, but actually we're ten times faster: we don't need to do any of the mapping out of the blade, because it's all encoded; we know where the robot is as we walk, and that's all captured. And then you end up with really consistent data. It doesn't matter who's operating the robot; the robot will have those settings preset, and you just walk down the blade and get that data. Then our subject matter experts, offline in their warm, cozy offices, review data from multiple robots, and it's about improving that [00:04:00] efficiency of getting that report out to the customer and letting them know what's wrong with their blades. Allen Hall: Because that's always been the drawback with NDT. I think the engineers have always wanted to go do it. There's been crushed-core transportation damage, which is sometimes hard to see; you can maybe see a little bit of a wobble on the blade surface, but you're not sure what's underneath. Bond lines are always an issue for engineering, but the cost to take a person and fly them out to look at a spot on a blade is really expensive, especially someone who is qualified.
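The step-size logic Chris describes can be sketched as a small calculation (a hypothetical illustration, not BladeBug's actual software; the function name and fields are invented for this sketch): choosing a chord-wise step no larger than the smallest defect you need to catch guarantees at least one span-wise pass crosses it, and that choice fixes how many passes cover a given probe swath.

```python
def raster_plan(scan_width_mm: float, min_defect_mm: float) -> dict:
    """Plan a raster scan: pick the chord-wise step between span-wise
    passes and count how many passes cover the scan width.

    A step equal to the minimum detectable-defect size guarantees any
    defect at least that wide is crossed by at least one pass.
    """
    step = min_defect_mm                      # e.g. 20 mm step for 20 mm defects
    passes = int(scan_width_mm // step) + 1   # passes needed to cover the width
    return {"step_mm": step, "passes": passes}

# A tighter tolerance (smaller defect) means a finer step and more passes:
plan_fine = raster_plan(scan_width_mm=1000, min_defect_mm=20)   # 1 m swath, 20 mm step
plan_coarse = raster_plan(scan_width_mm=1000, min_defect_mm=50)  # 1 m swath, 50 mm step
```

This is the trade-off mentioned in the conversation: a 20 mm step catches smaller defects but takes more passes than a 50 mm or 100 mm step over the same meter-wide swath.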
Yeah, so the difference now with BladeBug is you have the technology to do the scan much faster and do a lot of blades, which is what the market demand is right now: do a lot of blades simultaneously and get the same level of data, reviewed by the same expert just sitting somewhere else. Chris Cieslak: Absolutely. Joel Saxum: I think the quality of data is something to touch on here, because when you send someone out to the field, it's like, if I go to the wall here and you go to the wall here and we both take a paintbrush, we paint a little bit [00:05:00] different. You're probably gonna be better; you're gonna be able to reach higher spots than I can. Allen Hall: This is true. Joel Saxum: It's the same thing with an NDT process. Now you're taking the variability of the technician out of it as well. So the data quality collection at the source, that's what BladeBug does. Allen Hall: Yeah. Joel Saxum: That's the robotic process. It's making sure that if I scan this blade, whatever it may be, an LM 48.7, and I do another one and another one and another one, I'm gonna get a consistent set of quality data, and then it goes to analysis we can make real decisions off. Allen Hall: Well, I think in today's world, especially with transportation damage and warranties, they're trying to pick up at two years in a lot of things that they could have picked up pre-installation. Yeah. Or at lifting of the blades. That world is changing very rapidly. I think a lot of operators are getting smarter about this, but they haven't thought about where to go find the tool. Speaker: Yeah. Allen Hall: And I know Joel knows that: hey, it's Chris at BladeBug, you need to call him and get the technology.
But I think a lot of [00:06:00] operators around the world haven't thought about the cost. They're paying the warranty costs, they're paying the insurance costs, because they don't have the set of data. And it's not tremendously expensive to go get. But now the capability is here. What is the market saying? Is it coming back to you now and saying, okay, let's go, we gotta mobilize, we need 10 of these BladeBugs out here to go take a scan? Where are we at today? Chris Cieslak: We've had validation this year that this is needed, and it's a case of we just need to be around for when they come back round for it. The issues that we're looking for, you know, it solves the problem of these new big 80, 100-meter-plus blades that have issues which frankly shouldn't exist, like process manufacturing issues, but they are there and they need to be investigated. If you're an asset owner, you wanna know: do I have a blade that's likely to fail, compared to one which is okay? And sort of focus on that, and essentially remove any uncertainty or worry that you have about your assets, 'cause you can see other [00:07:00] turbine blades failing. So we are trying to solve that problem. But at the same time, end-of-warranty claims: if you're gonna be taking over these blades and doing the maintenance yourself, you wanna know that what you are being given hasn't got any nasties lurking inside that are gonna bite you. Joel Saxum: Yeah. Chris Cieslak: Very expensively, a few years down the line. And so you wanna be able to tick a box and go, actually, these are fine. Or, actually, these are problems; you need to give me some money so I can perform remedial work on these blades. And then at end of life, you know, how hard have they lived? Can you do an assessment to go, actually, you can sweat these assets for longer?
So we kind of see ourselves being useful right now for new blades, but actually throughout the value chain of the life of a blade. People need to start seeing that NDT, ultrasonic being one form of it (we are working on other forms of NDT as well), is a way of removing a lot of uncertainty and potential risk. Otherwise you're gonna end up paying through the roof because you've underestimated something or missed something which you could have captured with a quick inspection. Joel Saxum: To [00:08:00] me, NDT has been floating around out there, but it just hasn't been as accessible or easy, and the knowledge hasn't been there about it. But what it can do for an operator in de-risking their fleet is amazing; they just need to understand it and know it. But you guys, with the robotic technology, to me, are bringing NDT to the masses. Chris Cieslak: Yeah. Joel Saxum: In a way that hasn't been able to be done before. Chris Cieslak: And that's it; we are trying to roll it out in a way that you're not limited to those few experts in the composite NDT world. We wanna work with them, with the C-N-C-C-I-C NDTs of this world, because they are the expertise in composites. Being able to interpret those scans is not a quick thing to become proficient at. So we are like, okay, let's work with these people, but let's give them the best-quality, most consistent data that we possibly can, and let's remove the barrier of those limited people so we can roll it out to the masses. And we are that sort of next level of information, where it isn't just seen as a nice-to-have; it's an essential-to-have. That's just how [00:09:00] we see it now. NDT is no longer the last thing we would look at; it should just be part of the drone inspections, part of the internal crawler regimes.
Yeah, it's just part of it, 'cause there isn't one type of inspection that ticks all the boxes; there isn't a silver bullet of NDT. So it's just making sure that you use the right system for the right inspection type. It's complementary to drones, complementary to the internal crawlers. It's just the next level to give you certainty. If you see something indicated on a photograph, that doesn't tell you the true picture of what's going on with the structure. So this is really about, okay, I've got an indication of something there; let's find out what that really is. And then with that information you can go, right, I know a repair is gonna take this long, the downtime of that turbine is gonna be this long, and you can plan it in. 'Cause everyone's already got limited budgets, which I think is why NDT hasn't taken off as it should have done: nobody's got money for more inspections, right? Even though there is a money saving to be had long term, everyone is fighting [00:10:00] fires, and they've really got a limited inspection budget. Drone inspection prices have come down; it's sort of a race to the bottom. But this is that next value-add to really add certainty to what you're trying to inspect, so you don't go to do a one-day repair and it ends up being three months or something. Allen Hall: Well, that's the lightning, Joel Saxum: right? Allen Hall: Yeah. Lightning is the one case where every time you start to scarf the exterior of the blade, you're not sure how deep that's going and how expensive it is. Yeah, and it always amazes me when we talk to a customer and they start with, well, you know, it's gonna be a foot-wide scarf, and now we're into 10 meters, and now we're on the inside. Yeah. And the outside. Why did you not do an NDT? It seems like money well spent. Yeah. Especially if you have a quantity of them.
And I think the quantity is key now, because in the US there are 75,000 turbines; worldwide, several hundred thousand. The number of turbines is there, the number of problems is there. It makes more financial sense today than ever, because drone [00:11:00] inspection has come down in cost, and the internal rovers, though expensive, have also come down in cost. NDT has also come down to where it's now available to the masses. Yeah. But it has been such a mental barrier, and that barrier has to go away if we're going to keep blades in operation for 25, 30 years. Joel Saxum: I mean, we're seeing, Allen Hall: no way you can do it Joel Saxum: otherwise. We're seeing serial defects, but the only way that you can inspect and/or control them is with NDT now. Allen Hall: Sure. Joel Saxum: And if we would've been on this years ago, we wouldn't have so many, what is our term? Blade liberations. Chris Cieslak: Liberating blades. Joel Saxum: Right, right. Allen Hall: What about the blade root? Can the robot get around the blade root and check for the bushing and insert issues? Chris Cieslak: Yeah, the robot can. We can walk circumferentially around that blade root, and we can look for issues which are affecting thousands of blades, especially in North America. Allen Hall: Oh yeah. Chris Cieslak: So that is an area where, you know, we are lucky that we've got a warehouse full of blade samples, root down to tip, and we were able to calibrate, verify, and prove everything in our facility to [00:12:00] then take out to the field. Because NDT of bushings is great, whether it's ultrasonic or whether we're using CMS-type systems as well, but we can really just say, okay, this is the area where the problem is, this needs to be resolved, and then we go to some of the companies that can resolve those issues.
And this is really about BladeBug being part of a group of technologies working together to give overall solutions. Allen Hall: Because the robot's not that big. It could be taken up tower relatively easily, put on the root of the blade, and told to walk around it, and you've got a scan. It's a lot easier than trying to put a technician on ropes out there, for sure. Chris Cieslak: Yeah. Allen Hall: And the speed of it. Joel Saxum: So let's talk about execution for a second. When that goes to the field, someone says, Chris, I need some help. What does it look like? How does it work? Chris Cieslak: Once we get a call out, we'll do a site assessment. We've got all our RAMS, everything in place. You know, we've been on turbines; we know the process of getting out there. We're all GWO qualified to go to site and do the work. For us, we can [00:13:00] turn up on site, unload the van, and the robot is on a blade in less than an hour, ready to inspect. Typically half an hour. If we've been on that same turbine a number of times, it's just like clockwork; muscle memory comes in, you've got all those processes down, and then it's just scanning. Our robot operator just presses a button, and we watch it perform scans. And as I said, we are not necessarily the NDT experts; we are obviously very mindful of NDT and know what scans look like, but if there are any issues, we dial in remotely to our subject matter expert, and they can actually remotely take control and change the settings and parameters. Allen Hall: Wow. Chris Cieslak: So they're virtually present, and that's one of the beauties: you don't need to have those people on site. You can have our general robot techs do the work, but you still have the comfort of knowing that the data is being reviewed, if need be, by those experts.
Joel Saxum: The next level of commercial evolution would be being able to lease the kit to someone, and/or have ISPs do it for [00:14:00] you guys globally. What is the thought there? Chris Cieslak: Absolutely. To really roll this out, we just wanna have people operating the robots as if it's like a drone. So drone inspection companies are a classic company that we see perfectly aligned with. You've got the SkySpecs of this world: a drone operator does a scan, they find something, put the robot up there, get that next level of information straight away, and feed that into their systems to give that insight to the customer. Or, be it an OEM who's got a small service team, they can all be trained up. You've got general turbine technicians; they've all got GWO working-at-height training, and that's all you need to operate the BladeBug robot. You don't need the IRATA-level qualified people, who are in short supply anyway. Let them do the jobs that we are not gonna solve; they can do the big repairs. We are taking away another problem for them, but giving them insights that make their job easier and more successful by removing any of those surprises when they go to do that work. Allen Hall: So what are the plans for 2026, then? Chris Cieslak: 2026 for us is to pick up where 2025 should have ended. [00:15:00] We were meant to be in the States on some projects that got postponed until '26. So for us, North America is what we're really focused on. As you said, there are 75,000 turbines there, but there are also a lot of turbines with known issues where we can help determine which blades are affected. And that involves blades on the ground and blades that are flying. So we wanna get out to the States as soon as possible, and we're working with some of the OEMs and some of the asset owners.
Allen Hall: Chris, it's so great to meet you in person and talk about the latest that's happening with BladeBug. If people need to get a hold of you or BladeBug, how do they do that? Chris Cieslak: I would say LinkedIn is probably the best place to find myself and also BladeBug, and to contact us through that. Allen Hall: Alright, great. Thanks, Chris, for joining us, and we will see you at the next one, hopefully in America. Come to America sometime; we'd love to see you there. Chris Cieslak: Thank you very [00:16:00] much.
Did you know that a beloved podcast episode now has a sequel? Join Spencer, Ty, and Andy as they share hundreds of true facts about the world around you, all sourced from limited-run Snapple caps. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace. A big shout-out to our $10/month patrons: Celeste, Yung Zoe, Dane Stephen, Weedworf, James Lloyd-Jones, Sam Thomas, Josh O'Brien, Kilo, David, Sam, T, Rach, Tomix, Adam W, L M, Revidicism, Jennifer Knowles, Jeremy-Alice, Louis Ceresa, Charles Doyle, Dean, Axon, Themandme, Raouldyke, Stephen Tucker, Lawrence, Rebecca Kimpel, Malek Douglas, Jacon Sauber-Cavazos, Bernventers, William Copping, NewmansOwn, Heather-Pleather, Bunknown, Dinosarden, Bedi, Francis Wolf, King Krang, Anthony C, ASDF, Buffoonworld, Bavbiff, D Love, and Tugboat!
It seems like every so often life takes it upon itself to remind us that things are not under our control ⏳ But I believe it's not about trying to do everything to bring things under our control, but rather accepting that many things really aren't.
Laudetur Jesus Christus - Praised be Jesus Christ. The daily Vatican Radio program from Vatican News Tiếng Việt. Today's program: 0:00 News bulletin; 17:33 Reflection on the Word of God: Fr. JB Phương Đình Toại, MI, reflects on the Word of God for the First Sunday of Lent. --- These images belong to the Holy See's Dicastery for Communication. Any use of these images by third parties is prohibited and constitutes copyright infringement unless authorized in writing by the Dicastery for Communication. Copyright © Dicasterium pro Communicatione - All rights reserved.
Tickets for AIE Miami and AIE Europe are live, with first-wave speakers announced! From pioneering software-defined networking to backing many of the most aggressive AI model companies of this cycle, Martin Casado and Sarah Wang sit at the center of the capital, compute, and talent arms race reshaping the tech industry. As partners at a16z investing across infrastructure and growth, they've watched venture and growth blur, model labs turn dollars into capability at unprecedented speed, and startups raise nine-figure rounds before monetization. Martin and Sarah join us to unpack the new financing playbook for AI: why today's rounds are really compute contracts in disguise, how the “raise → train → ship → raise bigger” flywheel works, and whether foundation model companies can outspend the entire app ecosystem built on top of them. They also share what's underhyped (boring enterprise software), what's overheated (talent wars and compensation spirals), and the two radically different futures they see for AI's market structure. We discuss:* Martin's “two futures” fork: infinite fragmentation and new software categories vs. a small oligopoly of general models that consume everything above them* The capital flywheel: how model labs translate funding directly into capability gains, then into revenue growth measured in weeks, not years* Why venture and growth have merged: $100M–$1B hybrid rounds, strategic investors, compute negotiations, and complex deal structures* The AGI vs.
product tension: allocating scarce GPUs between long-term research and near-term revenue flywheels* Whether frontier labs can out-raise and outspend the entire app ecosystem built on top of their APIs* Why today's talent wars ($10M+ comp packages, $B acqui-hires) are breaking early-stage founder math* Cursor as a case study: building up from the app layer while training down into your own models* Why “boring” enterprise software may be the most underinvested opportunity in the AI mania* Hardware and robotics: why the ChatGPT moment hasn't yet arrived for robots and what would need to change* World Labs and generative 3D: bringing the marginal cost of 3D scene creation down by orders of magnitude* Why public AI discourse is often wildly disconnected from boardroom reality and how founders should navigate the noise. Show Notes:* “Where Value Will Accrue in AI: Martin Casado & Sarah Wang” - a16z show* “Jack Altman & Martin Casado on the Future of Venture Capital”* World Labs. Martin Casado • LinkedIn: https://www.linkedin.com/in/martincasado/ • X: https://x.com/martin_casado Sarah Wang • LinkedIn: https://www.linkedin.com/in/sarah-wang-59b96a7 • X: https://x.com/sarahdingwang a16z • https://a16z.com/ Timestamps: 00:00:00 – Intro: Live from a16z 00:01:20 – The New AI Funding Model: Venture + Growth Collide 00:03:19 – Circular Funding, Demand & “No Dark GPUs” 00:05:24 – Infrastructure vs Apps: The Lines Blur 00:06:24 – The Capital Flywheel: Raise → Train → Ship → Raise Bigger 00:09:39 – Can Frontier Labs Outspend the Entire App Ecosystem? 00:11:24 – Character AI & The AGI vs Product Dilemma 00:14:39 – Talent Wars, $10M Engineers & Founder Anxiety 00:17:33 – What's Underinvested?
The Case for “Boring” Software00:19:29 – Robotics, Hardware & Why It's Hard to Win00:22:42 – Custom ASICs & The $1B Training Run Economics00:24:23 – American Dynamism, Geography & AI Power Centers00:26:48 – How AI Is Changing the Investor Workflow (Claude Cowork)00:29:12 – Two Futures of AI: Infinite Expansion or Oligopoly?00:32:48 – If You Can Raise More Than Your Ecosystem, You Win00:34:27 – Are All Tasks AGI-Complete? Coding as the Test Case00:38:55 – Cursor & The Power of the App Layer00:44:05 – World Labs, Spatial Intelligence & 3D Foundation Models00:47:20 – Thinking Machines, Founder Drama & Media Narratives00:52:30 – Where Long-Term Power Accrues in the AI StackTranscriptLatent.Space - Inside AI's $10B+ Capital Flywheel — Martin Casado & Sarah Wang of a16z[00:00:00] Welcome to Latent Space (Live from a16z) + Meet the Guests[00:00:00] Alessio: Hey everyone. Welcome to the Latent Space podcast, live from a 16 z. Uh, this is Alessio founder Kernel Lance, and I'm joined by Twix, editor of Latent Space.[00:00:08] swyx: Hey, hey, hey. Uh, and we're so glad to be on with you guys. Also a top AI podcast, uh, Martin Cado and Sarah Wang. Welcome, very[00:00:16] Martin Casado: happy to be here and welcome.[00:00:17] swyx: Yes, uh, we love this office. We love what you've done with the place. Uh, the new logo is everywhere now. It's, it's still getting, takes a while to get used to, but it reminds me of like sort of a callback to a more ambitious age, which I think is kind of[00:00:31] Martin Casado: definitely makes a statement.[00:00:33] swyx: Yeah.[00:00:34] Martin Casado: Not quite sure what that statement is, but it makes a statement.[00:00:37] swyx: Uh, Martin, I go back with you to Netlify.[00:00:40] Martin Casado: Yep.[00:00:40] swyx: Uh, and, uh, you know, you create a software defined networking and all, all that stuff people can read up on your background. Yep. Sarah, I'm newer to you. 
Uh, you, you sort of started working together on AI infrastructure stuff.
[00:00:51] Sarah Wang: That's right. Yeah. Seven, seven years ago now.
[00:00:53] Martin Casado: Best growth investor in the entire industry.
[00:00:55] swyx: Oh, say more.
[00:00:56] Martin Casado: Hands down, there is, there is. [00:01:00] I mean, when it comes to AI companies, Sarah, I think, has done the most kind of aggressive, um, investment thesis around AI models, right? So, worked with Noam, Mira, Fei-Fei, and so just these frontier, kind of like large AI models.
[00:01:15] I think, you know, Sarah's been the, the broadest investor. Is that fair?

[00:01:20] Venture vs. Growth in the Frontier Model Era

[00:01:20] Sarah Wang: No, I, well, I was gonna say, I think it's been a really interesting tag team actually, just 'cause a lot of these big seed deals, not only are they raising a lot of money, um, it's still a tech founder bet, which obviously is inherently early stage.
[00:01:33] But the resources,
[00:01:36] Martin Casado: So many, I
[00:01:36] Sarah Wang: was gonna say the resources: one, they just grow really quickly. But then two, the resources that they need day one are kind of growth scale. So the hybrid tag team that we have is quite effective, I think.
[00:01:46] Martin Casado: What is growth these days? You know, you don't wake up if it's less than a billion. Like, no, it's a very interesting time in investing, because, like, you know, take like the Character round, right?
[00:01:59] These tend to [00:02:00] be like pre-monetization, but the dollars are large enough that you need to have a larger fund, and the analysis, you know, because you've got lots of users, 'cause this stuff has such high demand, requires, you know, more of a numbers sophistication. And so most of these deals, whether it's us or other firms, on these large model companies, are like this hybrid between venture and growth.
[00:02:18] Sarah Wang: Yeah. Totally.
And I think, you know, stuff like BD, for example. You wouldn't usually need BD when you were seed stage trying to get to market. Biz dev, DevRel. Biz dev, DevRel, exactly. Okay. But like now, sorry, I'm,
[00:02:27] swyx: I'm not familiar. What, what does biz dev mean for a venture fund? Because I know what biz dev means for a company.
[00:02:31] Sarah Wang: Yeah.

[00:02:32] Compute Deals, Strategics, and the 'Circular Funding' Question

[00:02:32] Sarah Wang: You know, so a, a good example is, I mean, we talk about buying compute, but there's a huge negotiation involved there in terms of, okay, do you get equity for the compute? What, what sort of partner are you looking at? Is there a go-to-market arm to that? Um, and these are just things on this scale, hundreds of millions, you know, maybe
[00:02:50] six months into the inception of a company. You just wouldn't have had to negotiate these deals before.
[00:02:54] Martin Casado: Yeah. These large rounds are very complex now. Like, in the past, if you did a series A [00:03:00] or a series B, like whatever, you're writing a 20 to a $60 million check and you call it a day. Now you normally have financial investors and strategic investors, and then the strategic portion always still goes with these kind of large compute contracts, which can take months to do.
[00:03:13] And so it's, it's very different times. I've been doing this for 10 years. I've never seen anything like this.
[00:03:19] swyx: Yeah. Do you have worries about the circular funding from all these strategics?
[00:03:24] Martin Casado: I mean, listen, as long as the demand is there, like, the demand is there. Like, the problem with the internet is the demand wasn't there.
[00:03:29] swyx: Exactly. All right.
This, this is like the, the whole pyramid scheme bubble thing, where, like, as long as you mark to market on the notional value of these deals, fine, but once it starts to chip away, it really...
[00:03:41] Martin Casado: Well, no, like as, as long as there's demand. I mean, you know, a lot of these sound bites have already become kind of cliches, but they're worth saying, right?
[00:03:47] Like, during the internet days, we were, um, raising money to put fiber in the ground that wasn't used. And that's a problem, right? Because now you actually have a supply overhang.
[00:03:58] swyx: Mm-hmm.
[00:03:59] Martin Casado: And even in the, [00:04:00] the time of the internet, the supply and, and bandwidth overhang, as massive as it was, and as massive as the crash was, only lasted about four years.
[00:04:09] But we don't have a supply overhang. Like, there's no dark GPUs, right? I mean, and so, you know, circular or not, I mean, you know, if, if someone invests in a company, um, you know, they'll actually use the GPUs. And on the other side of it is the, is the ask from the customer. So I, I think it's a different time.
[00:04:25] Sarah Wang: I think the other piece, maybe just to add onto this, and I'm gonna quote Martin in front of him, but this is probably also a unique time in that, for the first time, you can actually trace dollars to outcomes. Yeah, right. Provided that scaling laws are, are holding, um, and capabilities are actually moving forward.
[00:04:40] Because if you can translate dollars into a capability improvement, there's demand there, to Martin's point. But if that somehow breaks, you know, obviously that's an important assumption in this whole thing to make it work.
But you know, instead of investing dollars into sales and marketing, you're, you're investing into R&D to get to the capability, um, you know, increase.
[00:04:59] And [00:05:00] that's sort of been the demand driver, because once there's an unlock there, people are willing to pay for it.
[00:05:05] Alessio: Yeah.

[00:05:06] Blurring Lines: Models as Infra + Apps, and the New Fundraising Flywheel

[00:05:06] Alessio: Is there any difference in how you build the portfolio now that some of your growth companies are, like, the infrastructure of the early-stage companies? Like, you know, OpenAI is now the same size as some of the cloud providers were early on.
[00:05:16] Like, what does that look like? Like, how much information can you feed off each other between the, the two?
[00:05:24] Martin Casado: There's so many lines that are being crossed right now, or blurred, right? So we already talked about venture and growth. Another one that's being blurred is between infrastructure and apps, right? So, like, what is a model company?
[00:05:35] Mm-hmm. Like, it's clearly infrastructure, right? Because it's, you know, doing kind of core R&D. It's a horizontal platform. But it's also an app, because it, um, touches the users directly. And then of course, you know, the, the growth of these is just so high. And so I actually think you're just starting to see a, a new financing strategy emerge, and, you know, we've had to adapt as a result of that.
[00:05:59] And [00:06:00] so there's been a lot of changes. Um, you're right that these companies become platform companies very quickly. You've got ecosystem build-out. So none of this is necessarily new, but the timescale on which it's happened is pretty phenomenal.
And the way we'd normally cut lines before is blurred a little bit.
[00:06:16] But that, that said, I mean, a lot of it also just does feel like things that we've seen in the past, like the cloud build-out, the internet build-out as well.
[00:06:24] Sarah Wang: Yeah. Um, yeah, I think it's interesting. Uh, I don't know if you guys would agree with this, but it feels like the emerging strategy is, and this builds off of your other question, um:
You raise money for compute. You pour the money into compute, you get some sort of breakthrough. You funnel the breakthrough into your vertically integrated application. That could be ChatGPT, that could be Claude Code, you know, whatever it is. You massively gain share and get users.
[00:06:49] Maybe you're even subsidizing at that point, um, depending on your strategy. You raise money at the peak momentum, and then you rinse and repeat. Um, and that wasn't [00:07:00] true even two years ago, I think. Mm-hmm. And so, just tying it to fundraising strategy, right, and hiring strategy,
[00:07:07] all of these are tied. I think the lines are blurring even more today. But of course these companies all have API businesses, and so there are these frenemy lines getting blurred, in that, I mean, they have billions of dollars of API revenue, right? And so there are customers there.
[00:07:23] But they're competing on the app layer.
[00:07:24] Martin Casado: Yeah. So this is a really, really important point. So I, I would say for sure, venture and growth, that line is blurry. App and infrastructure, that line is blurry. Um, but I don't think that that changes our practice so much.
But, like, where the very open questions are is, does this layer in the same way [00:07:43] compute traditionally has? Like during the cloud, you know, somebody wins one layer, but then another whole set of companies wins another layer. But that might not, might not be the case here. It may be the case that you actually can't verticalize on the token string. Like, you can't build an app; it, it necessarily goes down, just because there are no [00:08:00] abstractions.
[00:08:00] So those are kind of the bigger existential questions we ask. Another thing that is very different this time than in the history of computer science is: in the past, if you raised money, then you basically had to wait for engineering to catch up. Which famously doesn't scale, like The Mythical Man-Month. It takes a very long time.
[00:08:18] But that's not the case here. Like, a model company can raise money and drop a model in a, in a year, and it's better, right? And, and it does it with a team of 20 people or 10 people. So this type of, like, money entering a company and then producing something that has demand and growth right away, and using that to raise more money, is a very different capital flywheel than we've ever seen before.
[00:08:39] And I think everybody's trying to understand what the consequences are. So I think it's less about, like, big companies and growth and this, and more about these more systemic questions that we actually don't have answers to.
[00:08:49] Alessio: Yeah, like at Kernel Labs, one of our ideas is, like, if you had unlimited money to spend productively to turn tokens into products, the whole early-stage [00:09:00] market is very different, because today you're investing X amount of capital to win a deal because of price structure and whatnot, and you're kind of pot-committing
[00:09:07] Yeah. to a certain strategy for a certain amount of time. Yeah.
But if you could, like, iteratively spin out companies and products, and just, I wanna spend a million dollars of inference today and get a product out tomorrow.
[00:09:18] swyx: Yeah.
[00:09:19] Alessio: Like, we should get to the point where the friction of token to product is so low that you can do this, and then you can change the early-stage venture model to be much more iterative.
And then every round is, like, either 100K of inference or, like, a hundred million from a16z. There's no, there's no, like, $8 million seed round anymore, right?

[00:09:38] When Frontier Labs Outspend the Entire App Ecosystem

[00:09:38] Martin Casado: But, but there's an industry structural question that we don't know the answer to, which involves the frontier models. Which is, let's take Anthropic. Let's say Anthropic has a state-of-the-art model that has some large percentage of market share. And let's say that, uh, you know, a company's building smaller models [00:10:00] that, you know, use the bigger model in the background, Opus 4.5, but they add value on top of that. Now, if Anthropic can raise three times more
[00:10:10] every subsequent round, they probably can raise more money than the entire app ecosystem that's built on top of it. And if that's the case, they can expand beyond everything built on top of it. It's like, imagine a star that's just kind of expanding. So there could be a, a systemic situation where the SOTA models can raise so much money that they can outpay anybody that builds on top of 'em, which would be something I don't think we've ever seen before, just because we were so bottlenecked on engineering. And this is a very open question.
[00:10:41] swyx: Yeah. It's, it is almost like the bitter lesson applied to the startup industry.
[00:10:45] Martin Casado: Yeah, a hundred percent.
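Martin's "raise three times more every round" scenario is just compound growth racing compound growth. A toy sketch of the arithmetic, with all figures hypothetical illustrations rather than anything from the conversation:

```python
# Toy model of the "expanding star" scenario: a frontier lab that can raise
# 3x more each round versus the aggregate funding of the app ecosystem built
# on top of it. All numbers are hypothetical assumptions for illustration.

def rounds_to_outspend(lab_raise: float, ecosystem_total: float,
                       lab_multiple: float = 3.0,
                       ecosystem_growth: float = 1.5) -> int:
    """Count rounds until one lab raise exceeds the whole ecosystem's funding."""
    rounds = 0
    while lab_raise <= ecosystem_total:
        lab_raise *= lab_multiple            # lab raises 3x its previous round
        ecosystem_total *= ecosystem_growth  # apps keep raising too, just slower
        rounds += 1
    return rounds

# Lab last raised $1B; apps above it have raised $20B in aggregate.
print(rounds_to_outspend(1e9, 20e9))  # -> 5 rounds under these assumptions
```

The mechanism is just that any per-round multiple above the ecosystem's growth rate wins eventually; the open question in the conversation is whether the multiple holds.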
It literally becomes an issue of, like, raise capital, turn that directly into growth, use that to raise three times more. Exactly. And if you can keep doing that, you literally can outspend, not any company, you can outspend the aggregate of companies on top of [00:11:00] you, and therefore you'll necessarily take their share, which is crazy.
[00:11:02] swyx: Would you say that kind of happened with Character? Is that the, the sort of postmortem on what happened?
[00:11:10] Sarah Wang: Um,
[00:11:10] Martin Casado: No.
[00:11:12] Sarah Wang: Yeah, because I think so,
[00:11:13] swyx: I mean, the actual postmortem is, he wanted to go back to Google. Exactly. But like
[00:11:18] Martin Casado: That's another difference that
[00:11:19] Sarah Wang: you said
[00:11:21] Martin Casado: it. We should talk, we should actually talk about that.
[00:11:22] swyx: Yeah,
[00:11:22] Sarah Wang: that's
[00:11:23] swyx: Go for it. Take it.
[00:11:23] Sarah Wang: Yeah.

[00:11:24] Character.AI, Founder Goals (AGI vs Product), and GPU Allocation Tradeoffs

[00:11:24] Sarah Wang: I was gonna say, I think, um, the, the Character thing raises actually a different issue, which actually the frontier labs will face as well. So we'll see how they handle it.
[00:11:34] But, um, so we invested in Character in January 2023, which feels like eons ago. I mean, three years ago feels like lifetimes ago. But, um, and then they, uh, did the IP licensing deal with Google in August 2024. And so, um, you know, at the time, you know, he's talked publicly about this, right? He wanted to, but Google wouldn't let him put out products in the world.
[00:11:56] That's obviously changed drastically. But, um, he went to go do [00:12:00] that. Um, but he had a product attached. The goal was, I mean, it's Noam Shazeer, he wanted to get to AGI. That was always his personal goal.
But, you know, I think through collecting data, right, and this sort of very human use case that the Character product
[00:12:13] originally was and still is, um, was one of the vehicles to do that. Um, I think the real reason, you know, if you think about the, the stress that any company feels before ultimately going one way or the other, is sort of this AGI-versus-product tension. Um, and I think a lot of the big, I think, you know, OpenAI is feeling that. Um, Anthropic, if they haven't started, you know, felt it, certainly given the success of their products, they may start to feel that soon.
[00:12:39] And I think there are real trade-offs, right? It's like, when you think about GPUs, that's a limited resource. Where do you allocate the GPUs? Is it toward the product? Is it toward new research, or long-term research? Is it toward, um, you know, near- to mid-term research? And so, um, in a case where you're resource-constrained, um, [00:13:00] of course there's this fundraising game you can play, right?
[00:13:01] But the market was very different back in 2023 too. Um, I think the best researchers in the world have this dilemma of, okay, I wanna go all in on AGI, but it's the product usage revenue flywheel that keeps the revenue in the house to power all the GPUs to get to AGI. And so it does make, um, you know, I think it sets up an interesting dilemma for any startup that has trouble raising up to that level, right?
[00:13:27] And certainly if you don't have that progress, you can't continue this, you know, fundraising flywheel.
[00:13:32] Martin Casado: I would say, 'cause we're keeping track of all of the things that are different, right? Like, you know, venture/growth and, uh, app/infra, one of the ones is definitely the personalities of the founders.
[00:13:45] It's just very different this time. I've been doing this for a decade, and I've been doing startups for 20 years.
And so, um, I mean, a lot of people start this to do AGI, and we've never had, like, a unified North Star that I recall in the same [00:14:00] way. Like, people built companies to start companies in the past.
[00:14:02] Like, that was what it was. Like, I would create an internet company, I would create an infrastructure company. It's kind of more engineering builders, and this is kind of a different, you know, mentality. And some companies have harnessed that incredibly well, because their direction is so obviously on the path to what somebody would consider AGI, but others have not.
[00:14:20] And so there is always this tension with personnel. And so I think we're seeing more kind of founder movement,
[00:14:27] Sarah Wang: Yeah.
[00:14:27] Martin Casado: you know, as a fraction of founders, than we've ever seen. I mean, maybe since, like, I don't know, the time of Shockley and the Traitorous Eight or something like that, way back in the beginning of the industry. It's a very, very unusual time for personnel.
[00:14:39] Sarah Wang: Totally.

[00:14:40] Talent Wars, Mega-Comp, and the Rise of Acquihire M&A

[00:14:40] Sarah Wang: And I think it's exacerbated by the fact that, talent wars, I mean, every industry has talent wars, but not at this magnitude, right? No. Yeah. Very rarely can you see someone get poached for $5 billion. That's hard to compete with. And then secondly, if you're a founder in AI, you could fart and it would be on the front page of, you know, The Information these days.
[00:14:59] And so there's [00:15:00] sort of this fishbowl effect that I think adds to the deep anxiety that, that these AI founders are feeling.
[00:15:06] Martin Casado: Hmm.
[00:15:06] swyx: Uh, yes. I mean, just to briefly comment on the sort of talent wars thing. I feel like 2025 was just, like, a blip. Like, I don't know if we'll see that again,
[00:15:17] 'cause Meta built the team.
Like, I don't know, I think, I think they're kind of done, and, like, who's gonna pay more than Meta? I, I don't know.
[00:15:23] Martin Casado: I, I agree. It feels this way to me too. It's like, basically Zuckerberg kind of came out swinging, and then now he's kind of back to building.
Yeah,
[00:15:31] swyx: yeah. You know, you gotta pay up to assemble the team, to rush the job, whatever. But now you, you made your choices, and now they gotta ship.
[00:15:38] Martin Casado: I mean, the, the other side of that is, like, you know, we're, we're actually in the job hiring market. We've got 600 people here. I hire all the time.
[00:15:44] I've got three open recs if anybody's interested that's listening to this, for investors. Yeah, on, on the investing side of the team. And, um, a lot of the people we talk to have, you know, active, um, offers for 10 million a year or something like that. And, like, you know, and we pay really, [00:16:00] really well.
[00:16:00] And just to see what's out on the market is really, is really remarkable. And so I would just say, you're right about the really flashy ones, like "I will get someone for, you know, a billion dollars," but the inflated comp trickles down. Yeah, it is still very active today. I mean,
[00:16:18] Sarah Wang: Yeah, you could be an L5 and get an offer in the tens of millions. Okay. Yeah. Easily. Yeah. So I think you're right that it felt like a blip. I hope you're right. Um, but I think the steady state now, I think, got pulled up. Yeah.
[00:16:31] Martin Casado: Pulled up for sure. Yeah.
[00:16:32] Alessio: Yeah. And I think that's breaking the early-stage founder math too. I think before, a lot of people would be like, well, maybe I should just go be a founder instead of, like, getting paid,
yeah, 800K, a million at Google. But if I'm getting paid
five, six million? That's different. But
[00:16:45] Martin Casado: But on the other hand, there's more strategic money than we've ever seen historically, right? Mm-hmm. And so, yep, the calculus on the economics is very different in a number of ways. And, uh, it's crazy.
It's causing, like, a [00:17:00] ton of change and confusion in the market. Some very positive, some negative. So, for example, the other side of the, um, the co-founder, like, um, acquisition, you know, Mark Zuckerberg poaching someone for a lot of money, is, like, we're actually seeing a historic amount of M&A for basically acquihires, right?
Like, you know, really good outcomes from a venture perspective that are effectively acquihires, right? So I would say it's probably net positive from the investment standpoint, even though it seems from the headlines to be very disruptive in a negative way.
[00:17:33] Alessio: Yeah.

[00:17:33] What's Underfunded: Boring Software, Robotics Skepticism, and Custom Silicon Economics

[00:17:33] Alessio: Um, let's talk maybe about what's not being invested in, like maybe some interesting ideas where you would like to see more people build. It seems, in a way, you know, as YC is getting more popular, as accelerators are getting more popular, there's a startup-school path that a lot of founders take, and they know what's hot in the VC circles and they know what gets funded. Uh, and there's maybe not as much risk appetite for things outside of that. Um, I'm curious if you feel [00:18:00] like that's true, and what are maybe, uh, some of the areas, uh, that you think are underdiscussed?
[00:18:06] Martin Casado: I mean, I actually think that we've taken our eye off the ball on a lot of just traditional, you know, software companies. Um, so like, I mean,
you know, I think right now there's almost a barbell: like, you're either the hot thing on X or you're deep tech.
[00:18:21] swyx: Mm-hmm.
[00:18:22] Martin Casado: Right? But I, you know, I feel like there's just kind of a long, you know, list of good,
[00:18:28] good companies that will be around for a long time in very large markets. Say you're building a database, you know, say you're building, um, you know, kind of monitoring or logging or tooling or whatever. There are some good companies out there right now, but they have a really hard time getting, um, the attention of investors.
[00:18:43] And it's almost become a meme, right? Which is, like, if you're not basically growing from zero to a hundred in a year, you're not interesting. Which is just, is the silliest thing to say. I mean, think of yourself as, like, an individual investor, like, your personal money, right? Mm-hmm. So,
your personal money: will you put it in the stock market at 7%, or will you put it in this company growing five X in a very large [00:19:00] market?
[00:19:00] Of course you put it in the company growing five X. So it's just, like, we say these stupid things, like "if you're not going from zero to a hundred," but, like, who knows what the margins of those are? I mean, clearly these are good investments. True for anybody, right? Like, our LPs want, whatever, three X net over, you know, the life cycle of a fund, right?
So a, a company in a big market growing five X is a great investment. Everybody would be happy with these returns, but we've got this kind of mania on this strong growth. And so I would say that that's probably the most underinvested sector right now.
[00:19:29] swyx: Boring software, boring enterprise software.
[00:19:31] Martin Casado: Traditional, really good companies.
[00:19:33] swyx: No, no AI here.
[00:19:34] Martin Casado: No, like, boring. Well, well, the AI of course is pulling them into use cases.
Yeah, but that's not what they, they're not on the token path, right? Yeah. Let's just say they're software, but they're not on the token path.
[00:19:41] Like, these are, like, they're great investments by any definition, except for, like, some random VC on X saying it's not growing fast enough. What do you think?
[00:19:52] Sarah Wang: Yeah, maybe I'll answer a slightly different question, but adjacent to what you asked, um, which is maybe an area that we're not, uh, investing in [00:20:00] right now that I think is a question, and we're spending a lot of time in regardless of whether we pull the trigger or not.
[00:20:05] Um, and it would probably be on the hardware side, actually. Robotics, right? The robotics side. Which is, I don't wanna say that it's not getting funding, 'cause clearly, uh, it's, it's sort of non-consensus to almost not invest in robotics at this point. But, um, we've spent a lot of time in that space, and I think for us, we just haven't seen the ChatGPT moment
[00:20:22] happen on the hardware side. Um, and the funding going into it feels like it's already taking that for granted.
[00:20:30] Martin Casado: Yeah. Yeah. But we also went through the drone wave, you know. Um, there's a Zipline right, right out there. What's that? Oh yeah, there's a Zipline. Yeah. The drones, the AVs. And, like, one of the takeaways is, when it comes to hardware, um, most companies will end up verticalizing.
[00:20:46] Like, if you're, if you're investing in a robot company for agriculture, you're investing in an ag company, 'cause that's the competition, and that's the pricing, and that's the supply chain. And if you're doing it for mining, that's mining.
And so the AD team does a lot of that type of stuff, 'cause they're actually set up to [00:21:00] diligence that type of work.
[00:21:01] But for, like, horizontal technology investing, there's very little when it comes to robots, just because it's so fit-for-purpose. And so we kind of like to look at software solutions or horizontal solutions, like Applied Intuition, clearly from the AV wave. DeepMap, clearly from the AV wave. I would say Scale AI was actually a horizontal one, that's fair, you know, for robotics early on.
[00:21:23] And so that sort of thing we're very, very interested in. But the actual robot interacting with the world is probably better for a different team. Agreed.
[00:21:30] Alessio: Yeah, I'm curious who these teams are supposed to be that invest in them. I feel like everybody's like, yeah, robotics, it's important, and, like, people should invest in it.
[00:21:38] But then when you look at the numbers, like the capital requirements early on, versus the moment of "okay, this is actually gonna work, let's keep investing," that seems really hard to predict.
[00:21:49] Martin Casado: I think Coatue, Khosla, GC, I mean, these have all invested in hardware companies. And, you know, [00:22:00] listen, I mean, it could work this time for sure.
[00:22:01] Right? I mean, if Elon's doing it, right, just, just the fact that Elon's doing it means that there's gonna be a lot of capital and a lot of attempts for a long period of time. So that alone maybe suggests that we should just be investing in robotics, just 'cause you have this North Star, who's Elon with a humanoid, and that's gonna, like, basically will it into being an industry.
[00:22:17] Um, but we've just historically found, like, we're huge believers that this is gonna happen. We just don't feel like we're in a good position to diligence these things. 'Cause again, robotics companies tend to be vertical.
You really have to understand the market they're being sold into. Like, that competitive equilibrium with a human being is what's important.
[00:22:34] It's not, like, the core tech. And we're kind of more horizontal, core-tech-type investors, and this is Sarah and I. Yeah, the AD team is different. They can actually do these types of things.
[00:22:42] swyx: Uh, just to clarify, AD stands for
[00:22:44] Martin Casado: American Dynamism.
[00:22:45] swyx: Alright. Okay. Yeah, yeah. Uh, I actually do have a related question. First of all, I wanna acknowledge, on the chip side, I recall a podcast that you were on, I think it was the a CC podcast, about two or three years ago, where you said something which really stuck in my head, about how at some point, at some kind of scale, it makes sense to build a custom ASIC per run.
[00:23:07] Martin Casado: Yes. It's crazy. Yeah.
[00:23:09] swyx: We're here, and I think you estimated 500 billion, uh, something.
[00:23:12] Martin Casado: No, no, no. A billion, a billion-dollar training run. At a $1 billion training run, it makes sense to actually do a custom ASIC, if you can do it in time. The question now is timelines, not money. Because just, just rough math:
[00:23:22] if it's a billion-dollar training run, then the inference for that model has to be over a billion, otherwise it won't be solvent. So let's assume, if you could save 20%, and you could save much more than that with an ASIC, 20%, that's $200 million. You can tape out a chip for $200 million, right? So now you can literally justify it economically, not timeline-wise.
[00:23:41] That's a different issue. An ASIC per model, which
[00:23:44] swyx: is because that's how much we leave on the table every single time we do, like, generic Nvidia.
[00:23:48] Martin Casado: Exactly. Exactly.
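Martin's rough math here can be written down directly. A minimal sketch using the conversation's illustrative numbers (the 20% savings and $200M tape-out figures are his back-of-the-envelope assumptions, not measured costs):

```python
# Back-of-the-envelope sketch of the per-model custom-ASIC argument.
# All numbers are illustrative assumptions from the conversation.

def asic_breaks_even(training_run_cost: float,
                     savings_fraction: float,
                     tapeout_cost: float) -> bool:
    """Return True if a custom ASIC pays for itself on inference alone.

    Assumes inference spend for the model must at least match the
    training-run cost for the model to be economically solvent.
    """
    inference_spend = training_run_cost           # lower bound, per the argument
    savings = inference_spend * savings_fraction  # ASIC savings vs. generic GPUs
    return savings >= tapeout_cost

# $1B training run, 20% savings, $200M tape-out: right at break-even.
print(asic_breaks_even(1_000_000_000, 0.20, 200_000_000))  # -> True
# A factor-of-two efficiency gain (~50% savings) clears the bar easily.
print(asic_breaks_even(1_000_000_000, 0.50, 200_000_000))  # -> True
```

As he notes, the binding constraint in this sketch is the tape-out timeline, not the dollars.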
No, it is actually much more than that. You could probably get a factor of two, which would be $500 million.[00:23:54] swyx: Typical MFU would be like 50%.[00:23:55] Yeah, yeah. And that's good.[00:23:57] Martin Casado: Exactly. Yeah.[00:23:57] swyx: A hundred percent. Um, so yeah, and I [00:24:00] just wanna acknowledge: here we are in 2025, and OpenAI is confirming, like, Broadcom and all the other custom-silicon deals, which is incredible. I think, uh, speaking about AD, there's a really interesting tie-in that you guys have hit on, which is this sort of America-first movement, this re-industrialize-here push.[00:24:17] Yeah. Uh, move TSMC here, if that's possible. Um, how much overlap is there from AD[00:24:23] Martin Casado: Yeah.[00:24:23] swyx: to, I guess, growth, and investing particularly in, you know, US AI companies that are strongly bounded by their compute?[00:24:32] Martin Casado: Yeah. So I would view AD more as a market segmentation than, like, a mission, right?[00:24:37] So the market segmentation is: it has kind of regulatory or compliance issues, or government sale, or it deals with hardware. I mean, they're just set up to diligence those types of companies. So it's more of a market segmentation thing. I would say the entire firm, you know, since its inception, has had geographical biases, right?[00:24:58] I mean, for the longest time we were like, you know, the [00:25:00] Bay Area is gonna be where the majority of the dollars go. Yeah. And listen, there are actually a lot of compounding effects to having a geographic bias, right? You know, everybody's in the same place.
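Casado's tape-out math can be sketched in a few lines. The figures (a $1B training run, a 20% to 50% saving, a $200M tape-out) are the ones quoted in the conversation, and the solvency assumption, that inference spend must at least match training cost, is his stated premise:

```python
# Back-of-envelope sketch of the custom-ASIC argument above.
# All dollar figures are the illustrative ones from the conversation.

def asic_breakeven(training_cost, savings_frac, tapeout_cost):
    """Return (inference savings, whether they cover a custom tape-out),
    assuming inference spend at least matches the training cost,
    since otherwise the model "won't be solvent"."""
    inference_spend = training_cost  # the solvency lower bound
    savings = inference_spend * savings_frac
    return savings, savings >= tapeout_cost

# Conservative case: 20% savings on a $1B run vs. a $200M tape-out.
print(asic_breakeven(1e9, 0.20, 200e6))  # $200M saved: breaks even

# Aggressive case: a factor of two, roughly the ~50% MFU headroom.
print(asic_breakeven(1e9, 0.50, 200e6))  # $500M saved: comfortably positive
```

At 20% the savings exactly cover the tape-out, which is why the open question is timelines rather than money.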
You've got an ecosystem, you're there, you've got presence, you've got a network.[00:25:12] Um, and I would say the Bay Area is very much back. You know, I remember pre-COVID, crypto had kind of pulled startups away from the Bay Area: Miami, yeah; New York came up, you know, because it's so close to finance; Los Angeles had a moment ‘cause it was so close to consumer. But now it's kind of come back here.[00:25:29] And so I would say we tend to be very Bay Area focused historically, even though of course we've invested all over the world. And then I would say, if you take the ring out one more, it's gonna be the US, of course, because we know it very well. And then one more out is gonna be the US and its allies, and yeah,[00:25:44] it goes from there.[00:25:45] Sarah Wang: Yeah,[00:25:45] Martin Casado: sorry.[00:25:46] Sarah Wang: No, no, I agree. But I think that's sort of where the companies are headquartered. Maybe your question is on supply chain and customer base. Uh, I would say our companies are fairly international from that perspective.[00:25:59] Like, they're selling [00:26:00] globally, right? They have global supply chains in some cases.[00:26:03] Martin Casado: I would say also the stickiness is very different[00:26:05] Sarah Wang: Yeah.[00:26:05] Martin Casado: historically between venture and growth. There's so much company building in venture, so much so: hiring the next PM, introducing the customer, all of that stuff.[00:26:15] Of course we're just gonna be stronger where we have our network and we've been doing business for 20 years. I've been in the Bay Area for 25 years, so clearly I'm just more effective here than I would be somewhere else.
Um, where I think, for some of the later-stage rounds, the companies don't need that much help.[00:26:30] They're already pretty mature, historically, so they can kind of be everywhere. So there's less of that stickiness. This is different in the AI time. I mean, Sarah is now the, uh, chief of staff of, like, half the AI companies in the Bay Area right now. She's like ops ninja, BizDev, BizOps.[00:26:48] swyx: Are you finding much AI automation in your work? Like, what is your stack?[00:26:53] Sarah Wang: Oh my, in my personal stack?[00:26:54] swyx: I mean, the reason for this is, [00:27:00] I'm hiring ops people, a lot of founders I know are also hiring ops people, and it's opportune since you're also basically helping out with ops at a lot of companies.[00:27:09] What are people doing these days? Because it's still very manual as far as I can tell.[00:27:13] Sarah Wang: Hmm. Yeah. I think the things that we help with are pretty network-based, um, in that it's sort of like, hey, how do I shortcut this process? Well, let's connect you to the right person. So there's not quite an AI workflow for that.[00:27:26] I will say, as a growth investor, Claude Cowork is pretty interesting. Yeah. Like, for the first time you can actually get one-shot data analysis, right? Which, you know, if you're gonna take a customer database and analyze cohort retention, right, that's just stuff you had to do by hand before. And our team, the other night, it was like midnight and the three of us were playing with Claude Cowork.[00:27:47] We gave it a raw file. Boom. Perfectly accurate. We checked the numbers. It was amazing. That was my, like, aha moment. That sounds so boring, but, you know, that's the kind of thing that a growth investor is [00:28:00] slaving away on late at night.
Um, done in a few seconds.[00:28:03] swyx: Yeah. You gotta wonder what the whole, like, Anthropic Labs, which is, like, their new sort of product studio,[00:28:10] yeah, what would that be worth as an independent, uh, startup? You know, like a[00:28:14] Martin Casado: Lot.[00:28:14] Sarah Wang: Yeah, true.[00:28:16] swyx: Yeah.[00:28:16] Martin Casado: You gotta hand it to them. They've been executing incredibly well.[00:28:19] swyx: Yeah. I mean, to me, like, you know, Anthropic building on Claude Code, I think, uh, it makes sense to me. The real, um, pedal-to-the-metal, whatever the phrase is, is when they start coming after consumer, against OpenAI. Like, that is red alert at OpenAI.[00:28:35] Martin Casado: Oh, I[00:28:35] think they've been pretty clear they're enterprise focused.[00:28:37] swyx: They have been, but[00:28:40] Martin Casado: they've declared publicly[00:28:40] swyx: it's enterprise focused, it's coding, right. Yeah.[00:28:43] AI Labs vs Startups: Disruption, Undercutting & the Innovator's Dilemma[00:28:43] swyx: But here's Claude Cowork, and, well, apparently they're running Instagram ads for Claude.[00:28:50] You know, I get them all the time, right. And so, like,[00:28:54] Martin Casado: uh,[00:28:54] swyx: it's kind of like this disruption thing: OpenAI has been doing [00:29:00] consumer, been pursuing general intelligence in every modality, and here's Anthropic that only focused on this thing, but now they're sort of undercutting and doing the whole innovator's-dilemma thing on, like, everything else.[00:29:11] Martin Casado: It's very[00:29:11] swyx: interesting.[00:29:12] Martin Casado: Yeah, I mean, there's a very open question. So for me, do you know that meme where there's a guy at a fork in the path?
There's a path this way, there's a path that way: which way, Western man. Yeah.[00:29:23] Two Futures for AI: Infinite Market vs AGI Oligopoly[00:29:23] Martin Casado: And for me, the entire industry kind of hinges on two potential futures.[00:29:29] So in one potential future, um, the market is infinitely large. There are perverse economies of scale, ‘cause as soon as you put a model out there, it kind of sublimates and all the other models catch up, and, like, software's being rewritten and fractured all over the place, and there's tons of upside and it just grows.[00:29:48] And then there's another path, which is: well, maybe these models actually generalize really well, and all you have to do is train them with three times more money. That's all you have to [00:30:00] do, and it'll just consume everything beyond it. And if that's the case, you end up with basically an oligopoly for everything, mm-hmm,[00:30:06] because they're perfectly general. So this would be the AGI path: these are perfectly general, they can do everything. And the other one is: this is actually normal software, the universe is complicated. And nobody knows the answer.[00:30:18] The Economics Reality Check: Gross Margins, Training Costs & Borrowing Against the Future[00:30:18] Martin Casado: My belief is, if you actually look at the numbers of these companies: generally, if you look at the amount they're making and how much they spent training the last model, they're gross-margin positive.[00:30:30] You're like, oh, that's really working. But if you look at the current training that they're doing for the next model, they're gross-margin negative. So part of me thinks that a lot of ‘em are kind of borrowing against the future, and that's gonna have to slow down.
It's gonna catch up to them at some point in time, but we don't really know.[00:30:47] Sarah Wang: Yeah.[00:30:47] Martin Casado: Does that make sense? I mean, it could be the case that the only reason this is working is ‘cause they can raise that next round and they can train that next model, ‘cause these models have such a short life. And so at some point in time, you know, they won't be able to [00:31:00] raise that next round for the next model, and then things will kind of converge and fragment again.[00:31:03] But right now it's not.[00:31:04] Sarah Wang: Totally. By the way, just a meta point: I think the other lesson from the last three years is, and we talk about this all the time ‘cause we're in this Twitter/X bubble, but if you go back to, let's say, March 2024, that period, it felt like an open-source model with a benchmark-leading capability was launching on a daily basis.[00:31:27] And so that's one period: suddenly it's sort of like open source takes over the world, there's gonna be a plethora, it's not an oligopoly. And if you rewind time even before that, GPT-4 was number one for nine months, ten months. That's a long time, right?[00:31:44] Um, and of course now we're in this era where it feels like an oligopoly. Um, maybe some very steady-state shifts, and, you know, it could look like this in the future too, but it's just so hard to call.
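The "gross-margin positive on the last model, negative against the next" point reduces to a single amortization choice. The numbers below are hypothetical, chosen only to show the sign flip when the next model costs roughly 3x more to train, per the discussion:

```python
# Hypothetical illustration of margins flipping sign depending on
# which training run you amortize against current revenue.

def gross_margin(revenue, serving_cost, training_cost):
    """Gross margin with a given training cost amortized in."""
    return (revenue - serving_cost - training_cost) / revenue

revenue      = 2.0e9  # annual inference revenue (made up)
serving_cost = 0.6e9  # GPU serving cost (made up)

last_model = 1.0e9    # what the model now in production cost to train
next_model = 3.0e9    # ~3x more for its successor

print(gross_margin(revenue, serving_cost, last_model))  # positive: "it's working"
print(gross_margin(revenue, serving_cost, next_model))  # negative: borrowing against the future
```

The same revenue stream looks healthy or insolvent depending on which training bill you count against it, which is the crux of the "borrowing against the future" framing.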
And I think the thing that keeps, you know, us up at [00:32:00] night, in a good way and a bad way, is that the capability progress is actually not slowing down.[00:32:06] And so until that happens, right, you don't know what it's gonna look like.[00:32:09] Martin Casado: But I would say, for sure it's not converged. For sure the systemic capital flows have not converged, meaning right now it's still borrowing against the future to subsidize current growth, which you can do for a period of time.[00:32:23] But, you know, at some point the market will rationalize that, and nobody knows what that will look like.[00:32:29] Alessio: Yeah.[00:32:29] Martin Casado: Or the drop in the price of compute will save them. Who knows?[00:32:34] Alessio: Yeah. Yeah. I think you need to adapt the models to specific tasks, you know? It's like, okay, now Opus 4.5 might be AGI at some specific task, and now you can depreciate the model over a longer time.[00:32:45] Right now there's, like, no old model.[00:32:47] Martin Casado: No, but that used to be my mental model. Let me just change it a little bit.[00:32:53] Capital as a Weapon vs Task Saturation: Where Real Enterprise Value Gets Built[00:32:53] Martin Casado: If you can raise more than the aggregate of anybody that uses your models, that doesn't even matter.[00:32:59] It doesn't [00:33:00] even matter. See what I'm saying? Like, yeah. So I have an API business. My API business is 60% margin, or 70% margin, or 80% margin; it's a high-margin business. So I know what everybody is using. If I can raise more money than the aggregate of everybody that's using it, I will consume them, whether I'm AGI or not.[00:33:14] And I will know what they're using, ‘cause they're using it.
And unlike in the past, where engineering stopped me from doing that,[00:33:21] Alessio: Mm-hmm.[00:33:21] Martin Casado: it is very straightforward: you just train. So I also used to think of it as task-specific versus AGI, general, general, general. But I think there's also just a possibility that the capital markets will give them the ammunition to just go after everybody on top of ‘em.[00:33:36] Sarah Wang: I do wonder, though, to your point, um, if there's a certain task where getting marginally better isn't actually that much better. Like, we've saturated it; we can call it AGI or whatever. Actually, Ali Ghodsi talks about this: we're already at AGI for a lot of functions in the enterprise.[00:33:50] Um, for those tasks, you probably could build very specific companies that focus on just getting as much value out of that task as possible, value that isn't [00:34:00] coming from the model itself. There's probably a rich enterprise business to be built there. I mean, I could be wrong on that, but there are a lot of interesting examples.[00:34:08] So, right, if you're looking at the legal profession or whatnot, and maybe that's not a great one ‘cause the models are getting better on that front too, but just something where it's a bit saturated, then the value comes from services. It comes from implementation, right? It comes from all these things that actually make it useful to the end customer.[00:34:24] Martin Casado: Sorry, one more thing I think is under-discussed in all of this: to what extent is every task AGI-complete?[00:34:31] Sarah Wang: Mm-hmm.[00:34:32] Martin Casado: Yeah. I code every day. It's so fun.[00:34:35] Sarah Wang: That's a core question. Yeah.[00:34:36] Martin Casado: And, like, when I'm talking to these models, it's not just code. I mean, it's everything, right?
[00:34:43] swyx: It's healthcare, it's...[00:34:44] Martin Casado: I mean, it's every... it is exactly that.[00:34:47] Sarah Wang: Great support. Yeah.[00:34:48] Martin Casado: It's everything. I'm asking these models to understand compliance, I'm asking these models to go search the web, I'm asking these models to talk about things I know in history. It's having a full conversation with me while I engineer. And so it could be [00:35:00] the case that, mm-hmm,[00:35:01] the most AGI-complete, you know... I'm not an AGI guy, I think that's, you know... but the most AGI-complete model will win, independent of the task. And we don't know the answer to that one either.[00:35:11] swyx: Yeah.[00:35:12] Martin Casado: But it seems to me that, listen, Codex in my experience is for sure better than Opus 4.5 for coding.[00:35:18] It finds the hardest bugs that I work on with, like, you know, the smartest developers. It's great. Um, but I think Opus 4.5 actually has a great bedside manner, and it really matters if you're building something very complex, because, like, you're a partner, a brainstorming partner, for somebody.[00:35:38] And I think we don't discuss enough how every task kind of has that quality.[00:35:42] swyx: Mm-hmm.[00:35:43] Martin Casado: And what does that mean for capital investment and frontier models and sub-models? Yeah.[00:35:47] Why “Coding Models” Keep Collapsing into Generalists (Reasoning vs Taste)[00:35:47] Martin Casado: Like, what happened to all the special coding models? None of ‘em worked, right? So[00:35:51] Alessio: some of them didn't even get released.[00:35:53] Magic[00:35:54] Martin Casado: dot dev. There's a whole, there's a whole host.
We saw a bunch of them, and there's this whole theory that there could be... and [00:36:00] I think one of the conclusions is, like, there's no such thing as a coding model,[00:36:04] Alessio: you know?[00:36:04] Martin Casado: Like, that's not a thing. You're talking to another human being, and it's good at coding, but it's gotta be good at everything.[00:36:10] swyx: Uh, minor disagree, only because I have pretty high confidence that basically OpenAI will always release a GPT-5 and a GPT-5 Codex. Like, that's the cadence. Yeah. The way I call it is: one for reasoning, one for taste. Um, and then someone internal at OpenAI was like, yeah, that's a good way to frame it.[00:36:32] Martin Casado: That's so funny.[00:36:33] swyx: Uh, but maybe it collapses down to reasoning, and that's it. It's not like a hundred dimensions. Yeah. It's two dimensions, yeah: bedside manner versus coding. Yeah.[00:36:43] Martin Casado: Yeah.[00:36:44] swyx: It's, yeah.[00:36:46] Martin Casado: It's hilarious. For anybody listening to this, for you: when you're coding or using these models for something like that, actually just be aware of how much of the interaction has nothing to do with coding. It just turns out to be a large portion of it. And so I [00:37:00] think, like, the best SOTA-ish model, you know, is going to remain very important no matter what the task is.[00:37:06] swyx: Yeah.[00:37:07] What He's Actually Coding: Gaussian Splats, Spark.js & 3D Scene Rendering Demos[00:37:07] swyx: Uh, speaking of coding, I'm gonna be cheeky and ask: what actually are you coding?[00:37:11] Because obviously you could code anything, and you are obviously a busy investor and a manager of a giant team.
Um, what are you coding?[00:37:18] Martin Casado: I help, um, uh, Fei-Fei at World Labs. It's one of the investments, and they're building a foundation model that creates 3D scenes.[00:37:27] swyx: Yeah, we had it on the pod.[00:37:28] Yeah. Yeah,[00:37:28] Martin Casado: yeah. And these 3D scenes are Gaussian splats, just by the way that kind of AI works. Like, you can reconstruct a scene better with radiance fields than with meshes, ‘cause they don't really have topology. So they produce these beautiful 3D rendered scenes that are Gaussian splats, but the actual industry support for Gaussian splats isn't great.[00:37:50] It's just never... you know, it's always been meshes, and things like Unreal use meshes. And so I work on an open-source library called Spark.js, which is a [00:38:00] JavaScript rendering layer for Gaussian splats. And it's just because, you know, um, you need that support, and right now there's kind of a three.js moment that's all meshes, and so it's become kind of the default in the three.js ecosystem.[00:38:13] As part of that, to kind of exercise the library, I just build a whole bunch of cool demos. So if you see me on X, you see, like, all my demos and all the world building, but all of that is just to exercise this library that I work on, ‘cause it's actually a very tough algorithmics problem to scale a library that much.[00:38:29] And just so you know, this is ancient history now, but 30 years ago I paid for undergrad, you know, working on game engines in college in the late nineties. So it's a very old background, but I actually have a background in this, and so a lot of it's fun. You know, but the whole goal is just for this rendering library to...[00:38:47] Sarah Wang: Are you one of the most active contributors on their GitHub?[00:38:50] Martin Casado: On Spark?
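For readers who haven't met Gaussian splats: a scene is just a cloud of anisotropic 3D Gaussians with no connectivity, which is why Casado says they "don't really have topology." The sketch below uses the standard parameterization from the 3D Gaussian Splatting literature; it is not Spark.js's actual data layout or API:

```python
import numpy as np

# Standard per-Gaussian parameters a splat renderer consumes.
def make_splats(n):
    return {
        "position": np.zeros((n, 3), np.float32),          # world-space centers
        "scale":    np.ones((n, 3), np.float32),           # per-axis extents
        "rotation": np.tile([1.0, 0.0, 0.0, 0.0], (n, 1)), # quaternions (w, x, y, z)
        "opacity":  np.ones((n, 1), np.float32),
        "color":    np.zeros((n, 3), np.float32),          # RGB (or SH coefficients)
    }

def covariance(scale, quat):
    """Sigma = R S S^T R^T: the 3D covariance each Gaussian is splatted with."""
    w, x, y, z = quat
    R = np.array([  # rotation matrix from the unit quaternion
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    S = np.diag(scale)
    return R @ S @ S.T @ R.T

splats = make_splats(4)
# A unit-scale, unrotated Gaussian has the identity covariance (a sphere).
print(covariance(splats["scale"][0], splats["rotation"][0]))
```

A renderer then projects each covariance to screen space and alpha-blends depth-sorted Gaussians; there are no faces or edges to maintain, unlike a mesh, which is the scaling problem a library like this has to solve.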
Yes.[00:38:51] Sarah Wang: Yeah, yeah.[00:38:51] Martin Casado: There's only two of us there, so, yes. No, so by the way, the primary... yeah. [00:39:00] So the primary developer is a guy named Andres Quist, who's an absolute genius. He and I did our PhDs together, and so, like, um, we studied for our quals together. It was almost like hanging out with an old friend, you know?[00:39:09] So he's the core, core guy. I did mostly kind of, you know, the side stuff; I run a venture fund.[00:39:14] swyx: It's amazing. Like, five years ago you would not have done any of this, and it brought you back.[00:39:19] Martin Casado: The activation energy used to be so high, because you had to learn all the framework b******t.[00:39:23] Man, I f*****g used to hate that. And so, like, now I don't have to deal with that. I can, like, focus on the algorithmics, I can focus on the scaling.[00:39:29] swyx: Yeah. Yeah.[00:39:29] LLMs vs Spatial Intelligence + How to Value World Labs' 3D Foundation Model[00:39:29] swyx: And then, uh, I'll observe one irony and then I'll ask a serious investor question. The irony is, Fei-Fei actually doesn't believe that LLMs can lead us to spatial intelligence, and here you are using LLMs to, like, help achieve spatial intelligence. I just see some disconnect in there.[00:39:45] Martin Casado: Yeah. Yeah. So I think, I think what she would say is LLMs are great to help with coding.[00:39:51] swyx: Yes.[00:39:51] Martin Casado: But, like, that's very different than a model that actually provides... they'll never have the[00:39:56] swyx: spatial intelligence.[00:39:56] Martin Casado: And listen, our brains clearly have [00:40:00] both. Our brains clearly have a language-reasoning section, and they clearly have a spatial-reasoning section.
I mean, it's just, you know, these are two pretty independent problems.[00:40:07] swyx: Okay. And the one data point I recently had against it is the DeepMind IMO gold. So typically the answer is that this is where you start going down the neurosymbolic path, right? Like, one sort of abstract-reasoning thing and one formal thing. Um, and that's what DeepMind had in 2024 with AlphaProof and AlphaGeometry, and now they just use Deep Think and extended thinking tokens, and it's one model, and it's an LLM.[00:40:36] Martin Casado: Yeah, yeah, yeah.[00:40:37] swyx: And so that was my indication that maybe you don't need a separate system.[00:40:42] Martin Casado: Yeah. So let me step back. I mean, at the end of the day, these things are, like, nodes in a graph with weights on them, right? You know, it can be modeled that way if you distill it down. But let me just talk about the two different substrates. Let me put you in a dark room, like a totally black room, and then let me just [00:41:00] describe how you exit it: to your left there's a table, duck below this thing, right? I mean, the chances that you're gonna not run into something are very low. Now let me, like, turn on the light so you actually see, and you can do distance, and you know how far something is and where it is, or whatever.[00:41:17] Then you can do it, right? Like, language is not the right set of primitives to describe the universe, because it's not exact enough. So that's all Fei-Fei is talking about. When it comes to, like, spatial reasoning, you actually have to know that this is three feet away, like, that far away.
It is curved.[00:41:37] You have to understand, you know, the actual movement through space.[00:41:40] swyx: Yeah.[00:41:40] Martin Casado: So listen, I do think in the end these models are definitely converging, as far as models go, but there are different representations of the problems you're solving. One is language, which, you know, would be like describing to somebody what to do.[00:41:51] And the other one is actually just showing them, and spatial reasoning is just showing them.[00:41:55] swyx: Yeah, yeah. Right. Got it, got it. Uh, the investor question was on World Labs: [00:42:00] like, how do I value something like this? What work does that take? I'm just like, Fei-Fei is awesome,[00:42:07] Justin's awesome, and, you know, the other two co-founders. But, like, everyone's building cool tech. What's the value of the tech? And this is the fundamental question.[00:42:16] Martin Casado: Well, let me just maybe give you a rough sketch on the diffusion models. I'd actually love to hear Sarah, because I'm a venture person, you know, so, like, venture is always kind of wild-west-type[00:42:24] swyx: stuff. You get paid to dream, and she has to, like, actually[00:42:28] Martin Casado: marry it to reality. So I'm gonna say the venture view, and she can be like, okay, you're a little kid. Yeah. So, like, these diffusion models literally create something for almost nothing, and something that the world has found to be very valuable in the past, in our real markets, right?[00:42:45] Like a 2D image. I mean, that's been an entire market. People value them. It takes a human being a long time to create one, right? I mean, to turn me into a whatever, like, an image, would cost a hundred bucks and an hour.
The inference costs [00:43:00] us a hundredth of a penny, right? So we've seen this with speech, in very successful companies.[00:43:03] We've seen this with 2D images. We've seen this with movies. Right? Now think about 3D scenes. I mean, when's the next Grand Theft Auto coming out? It's been, what, 10 years?[00:43:14] Alessio: Yeah.[00:43:15] Martin Casado: How much would it cost to reproduce this room in 3D, right? If you hired somebody on Fiverr, in any sort of quality, probably $4,000 to $10,000.[00:43:24] And if you had a professional, probably $30,000. And we know these scenes are used: they're used in Unreal, they're used in Blender, they're used in movies, they're used in video games. So if you could generate the exact same thing from a 2D image for, [00:43:36] you know, less than a dollar, that's four or five orders of magnitude cheaper. You're bringing the marginal cost of something useful down by orders of magnitude, which historically has created very large companies. So that would be, like, the venture kind of strategic dreaming map.[00:43:49] swyx: Yeah.[00:43:50] And for listeners, uh, you can do this yourself on your own phone with, like, the Marble app,[00:43:55] Martin Casado: Yeah. Marble.[00:43:55] swyx: or, but also there are many NeRF apps where you just go on your iPhone and do this.[00:43:59] Martin Casado: Yeah. Yeah. [00:44:00] And in the case of Marble, though, what you do is you literally give it an image. So most NeRF apps, you, like, run around and take a whole bunch of pictures, and then you kind of reconstruct the scene.[00:44:08] swyx: Yeah.[00:44:08] Martin Casado: Um, things like Marble, the whole generative-3D space, will just take a 2D image and it'll reconstruct all the rest, like,[00:44:16] swyx: meaning it has to fill in
Uh,[00:44:18] Martin Casado: stuff at the back of the table, under the table, the parts of the image it doesn't see.[00:44:22] So the generative stuff is very different from reconstruction: it fills in the things that you can't see.[00:44:26] swyx: Yeah. Okay.[00:44:26] Sarah Wang: So,[00:44:27] Martin Casado: all right. So now the[00:44:28] Sarah Wang: no, no. I mean, I love[00:44:29] Martin Casado: the adult[00:44:29] Sarah Wang: perspective. Um, well, no, I was gonna say these are very much a tag team. We started this pod with that premise, and I think this is a perfect question to build on that even further,[00:44:36] ‘cause it truly is. I mean, we're tag-teaming all of these together.[00:44:39] Investing in Model Labs, Media Rumors, and the Cursor Playbook (Margins & Going Down-Stack)[00:44:39] Sarah Wang: Um, but I think every investment fundamentally starts with maybe the same two premises. One is, at this point in time, we actually believe that these are N-of-one founders for their particular craft, and that has to be demonstrated in their prior careers, right?[00:44:56] So, uh, we're not investing in every... you know, now the term is neo-[00:45:00]lab, but any company, any founder trying to build a foundation model, we're not, um, contrary to popular opinion, we're
Allen, Rosemary, and Yolanda discuss Ming Yang’s proposed $1.5 billion factory in Scotland and why the UK government is hesitating. Plus the challenges of reviving wind turbine manufacturing in Australia, how quickly a blade factory can be stood up, and whether advanced manufacturing methods could give Australia a competitive edge in the next generation of wind energy. Sign up now for Uptime Tech News, our weekly newsletter on all things wind technology. This episode is sponsored by Weather Guard Lightning Tech. Learn more about Weather Guard’s StrikeTape Wind Turbine LPS retrofit. Follow the show on YouTube, Linkedin and visit Weather Guard on the web. And subscribe to Rosemary’s “Engineering with Rosie” YouTube channel here. Have a question we can answer on the show? Email us! The Uptime Wind Energy Podcast brought to you by Strike Tape, protecting thousands of wind turbines from lightning damage worldwide. Visit strike tape.com And now your hosts. Allen Hall: Welcome to the Uptime Wind Energy Podcast. I’m your host Allen Hall, and I’m here with Yolanda Padron and Rosemary Barnes, and we’re all in Australia at the same time. We’re getting ready for Woma 2026, which is going to happen when this release is, will be through the first day. Uh, it’ll, it’s gonna be a big conference and right now. We’re so close to, to selling it out within a couple of people, so it’ll be a great event. So those of you listening to this podcast, hopefully you’re at Wilma 2026 and we’ll see, see you there. Uh, the news for this week, there’s a number of, of big, uh, country versus country situations going on. Uh, the one at the moment is [00:01:00] ING Yang in Scotland, and as we know, uh, Scotland. It has been offered by Ming Yang, uh, to build a factory there. They’re put about one and a half billion pounds into Scotland, uh, that is not going so well. So, so they’re talking about 3000 jobs, 1.5 billion in investment and then. 
building offshore turbines for Britain and the wider Europe. But the UK government is hesitating and has not approved it yet, and Scotland is kind of caught in the middle. Ming Yang is supposedly looking elsewhere; they're tired of waiting and figure they can probably get another factory somewhere in Europe. I don't think this is gonna end well for everyone. I think Ming Yang is obviously being pushed by the Chinese government to explore Scotland and try to get into Scotland, and leaders in the Scottish government have been meeting with [00:02:00] Chinese officials for a year or two, from what I can tell. If this doesn't end with a factory in Scotland, is China gonna take it out on the UK? And is Ming Yang gonna be able to build a factory in Europe? Europe at the minute is looking into Chinese investments in their wind turbine infrastructure, in terms of tax support, funding, and grants, to see if China is undercutting prices artificially, which I think the answer is gonna be yes. So where does this go? It seems like a real impasse, at a moment when the UK in particular, and the greater Europe, are talking about more than a hundred gigawatts of offshore wind. Yolanda Padron: I mean, just with the business that you mentioned that's coming into the UK, right? Without Ming Yang, will they have the ability to reach their goals? Allen Hall: So you have the Siemens [00:03:00] factory in Hull. They have a Vestas factory on the Isle of Wight, at sort of the bottom of the country, right? Vestas has had a facility there for a long time, and the UK just threw about 20 million pounds into reopening the onshore blade portion of that factory because it had been mothballed several months ago.
It does seem like maybe there's an alternative plan within the UK to stand up its own blade manufacturing and turbine manufacturing facilities, to do a lot of things in country. Who? I don't think we know. Is it Siemens? Is it GE? Is it Vestas? Or is it something completely British? Maybe all of the above. Rosemary, having been inside a blade factory for a long time with LM, you know it's pretty hard to stand up a blade factory quickly. How many years would it take you, if you wanted to start today, before you would actually produce a hundred-meter-long offshore blade? Rosemary Barnes: I reckon you could do it in a year if you had real, real strong motivation. [00:04:00] Allen Hall: Really? Rosemary Barnes: I think so. I mean, it's a big shed, and most of the delays would be regulatory and, you know, hiring: getting enough people hired and trained, that sort of thing. But if you had good support from the government and not too much red tape to deal with, and if you've got lots of manufacturing capability elsewhere, then you can move people. When I worked at LM, there were a few new factories opened while I was working there, and I'm sure that they took longer than a year in terms of when each was first thought of. But once the decision was made, I actually don't know how long it took, so it is a guess, but it didn't take as long as you would think. It wasn't years and years, that's for sure. And what they would do is this: they don't hire a whole new workforce, train them up right from the start, and then start operating once everyone's ready to go. What they'll do to start with is take a bunch [00:05:00] of really good people from the global factories, from all around, who will go over, from all roles.
And I'm not talking just management at all; it will include technicians, every role in the factory. They'll get people from another factory to go over and do some of the work while they're training up local people, so there's more of a gradual handover, and also so that best practices get spread from factory to factory and make a good global culture. Because obviously you've got the same design everywhere; you want the same quality coming out everywhere. As much as you try to document everything in work instructions that should make it, you know, impossible to do things wrong, you never quite get to that standard, and there is a lot to be said for just the know-how and the culture of the people doing the work. Allen Hall: So the infrastructure would take about a year to build, but the people would have to come from the broader Europe, at [00:06:00] least temporarily. Rosemary Barnes: That would be the fastest and safest way to do it. If it's a brand new company that has never made a wind turbine before, and someone just got, I don't know, a billion dollars and said, let's start a wind turbine factory, then I think it's gonna be a few years, and there's gonna be some learning curve before it starts making blades fast enough and with the correct quality. But if you're just talking about one more factory from a company that already has half a dozen or a dozen wind turbine blade factories elsewhere in the world, then that's where I think it can be done fast. Allen Hall: This type of situation actually pops up a lot in aerospace: power plants, engines. The jet engines on a lot of aircraft are kind of a combined effort from big multinational companies.
So if they want to build something in country, they'll hook up with a GE or a Honeywell or somebody who makes jet engines, and they'll create this division and they'll [00:07:00] stand this plant up. Maybe it's gonna be something like that here, where GB Energy is in the middle providing the funding and some of the resources, but they bring in another company, like a Siemens, like a Vestas, like a GE, or even a Nordex, to come in and do the operational aspects and maybe some of the training pieces. There's a funding arm and a technical arm, and they create a standalone British company to manufacture towers, to manufacture nacelles, to manufacture blades. Is that where you think this goes? Rosemary Barnes: It depends also on what kind of component you're talking about. I was talking about a specific example, wind turbine blades, which are a moderately complex thing to make, I would say. And then if you go to the simpler side, wind turbine towers: most countries would have the rough expertise needed to do that. Nearly all towers at the moment come out of [00:08:00] China, or out of Asia, with China being the vast bulk of those. And it's because, aside from having very, very cheap steel, they also have huge factories set up with assembly lines so that there's not very much moving of things back and forth. They have the exact right bit of equipment to do the exact right kind of rolling and welding, and they're not moving tower sections around a lot. That makes it really hard for other countries to compete. But it's not because they couldn't make towers; it's because they would struggle to make them cheap enough.
So yeah, say you set up a wind turbine tower factory in Australia. You could buy the equipment that you needed for a few hundred million dollars, and you could make it. But unless you have enough orders to keep that factory busy, with the volume that you need to keep all of that [00:09:00] modern equipment operating absolutely around the clock, your towers are gonna be expensive out of that facility. Cost is the main barrier when it comes to towers. Allen Hall: With Vestas and Mitsubishi recently having a partnership and then ending that partnership, it would seem like Vestas has the most experience in putting large corporations together to work on an advanced wind turbine project. It would make sense to me if Vestas was involved, because Vestas also has facilities in the UK. Are they the leading choice, do you think, just because they have that experience with Mitsubishi and they have something in country? Or do you think it's somebody else? Is it a GE? Rosemary Barnes: My instinct is saying Vestas, yes. Allen Hall: Me too. Okay. Rosemary Barnes: GE's wind turbine manufacturing seems to be in a bit more of an ebb rather than a flow right now, so I [00:10:00] mean, that's probably as much as what it's based on. And then yes, the location of factories: there are already some Vestas factories, Vestas people, in the UK, so that would make it easier. Ad: Delamination and bond line failures in blades are difficult problems to detect early. These hidden issues can cost you millions in repairs and lost energy production. CIC NDT are specialists in detecting these critical flaws before they become expensive burdens. Their non-destructive test technology penetrates deep into blade materials to find voids and cracks that traditional inspections completely miss. CIC NDT maps every critical defect, delivers actionable reports, and provides support to get your blades back in service. So visit cicndt.com, because catching blade problems early will save you millions. [00:11:00] Allen Hall: Can you build a renewable energy future on someone else's supply chain? Well, in Australia, the last domestic wind tower manufacturer shut down last year after losing a 15-year battle against cheaper imports from China. Now the Albanese government wants to try again, launching a consultation to revive local manufacturing. Meanwhile, giant turbines are rising at Western Australia's largest wind farm, soon to power 164,000 homes. The steel towers, blades, and nacelles all arrive on ships, and the question is whether that's going to change anytime soon. Rosemary? Rosemary Barnes: Yeah, it's a topic I've thought about a lot and done a fair bit of work on as well: local manufacturing, and whether you should or shouldn't. The Australian government does try to support local manufacturing in general, and in particular for renewables, but they've focused much more on solar and [00:12:00] batteries with their manufacturing support. The Australian government and agencies like ARENA, the Australian Renewable Energy Agency, have not traditionally supported wind, like, at all. It bothers me, because actually Australia is a fantastic place to be developing some of these supporting technologies for wind energy, and even the next generation of wind energy technologies, if not the manufacturing itself. There are heaps of things that would make Australia a really natural place to develop that. The thing about Australian projects is that they are big, right?
That makes it really attractive to developers, because in Europe, where they're still building wind, an onshore wind farm is a couple of turbines here or there, maybe five; a big wind farm would be ten turbines. In Australia it's a hundred, two hundred turbines at a time for onshore, and they're also choosing really big turbines. For some reason, Australian developers really like to [00:13:00] choose the latest technologies. And then if we think about some of the new supporting technologies for existing wind turbines, let's talk about O&M. There's a whole lot of O&M technologies, and Australia's a great place for that too, because Australian wind farms spend so much on O&M compared to other countries. So a technology provider that can improve some of those pain points can get a positive return on investment much quicker in Australia than they would be able to somewhere like America or Europe. So I think it makes sense to develop here. Allen Hall: With the number of wind farms, Rosie, I completely agree with you. And when we were talking about the Warradarge Wind Farm, the Western Australian wind farm that's gonna expand, they're adding 30 turbines to provide 283 megawatts. That's like a nine-and-a-half-megawatt machine. Those are big turbines. Those are new turbines, right? That's not something that's been around for a couple of years; they've been around for a couple of months, in terms of the lifespan of wind [00:14:00] turbines. So if Australia's gonna go down the pathway of larger turbines, the most advanced turbines, it has to make sense that some of this has to be developed in country, just because you need the knowledge to repair, modify, improve, adjust, and figure out what the next generation is, right? I don't know how this happens. Rosemary Barnes: We see some examples of that.
Right. And I think that Fortescue is the best example of a company that's trying to think forward to what they're going to need. They've got ambitious plans for putting in some big wind farms, with big wind turbines, in really remote locations, so there are a lot of obvious challenges there, and I know that they're thinking ahead and working through that. We saw their investment in Nabrawind, the Spanish company, and in particular their Nabralift. The bit of the tower that attaches to the rotor looks [00:15:00] pretty normal, but then they make it taller by slotting in a lattice framework underneath, and then they jack it up and slot in another one underneath, and jack it up and slot in another one underneath. So they don't need a gigantic crane. I mean, it's still a huge crane, but it doesn't need to be as big, because the rotor starts off already on there by the time the tower gets up to its full height. That's an innovative solution, I think, and I would be very surprised if they weren't also looking at every other technology they're gonna need in these turbines. Allen Hall: If Australia's gonna go down the pathway of large turbines onshore, then the manufacturing needs to happen in country. There's no other way to do it. And you could have manufacturing facilities in Western Australia or Victoria and still get massive turbine blades shipped or trucked to [00:16:00] wherever they need to go in country. It's not that hard to get around Australia, unlike other countries; Germany has a lot of mountains, and you had bridges and narrow roads and all that. It's much more expansive in Australia, where you can move big projects around.
And obviously, with all the mining that happens in Australia, it's pretty much normal. So I'm just trying to get over the hurdle of why the Albanese government is having an issue pushing this forward. It seems like a simple thing, because the Australian infrastructure is already ready. Someone needs to flip the switch and say go. Rosemary Barnes: I don't know if I'd say that we're ready, because Australia doesn't have a whole lot of manufacturing of anything at the moment. It's not true that we have no manufacturing; that's what Australians like to say, that we don't manufacture anything, and that's not true. We do manufacture. We have some pretty good advanced manufacturing. But if you just look at the hard economics of wind turbine manufacturing in Australia, or solar panel manufacturing, or battery manufacturing, any of that, it is cheaper to just get it from China, not least [00:17:00] because some of those components are subsidized by the Chinese government. If you start saying, okay, we're gonna have local manufacturing, you can achieve that either by supporting the local manufacturing industry, giving subsidies to our manufacturing, or by making a local content requirement: say, if you want project approval for this, then it has to have so much local content. You have to do it really carefully, because if you get the settings wrong, then you just end up with very, very expensive renewable energy. And at the moment, wind especially is expensive, and I think it's still getting more expensive in Australia; it has been basically since the pandemic. If you then said, we've gotta also make it in Australia, then you add a bunch more costs, and we would probably just not have wind energy then, or new wind energy. So there needs to be that balance. But I think that even though you can say, okay, cheapest is best, it is also not good to rely
[00:18:00] exclusively on other countries, and especially not on just one other country, to give you all of your energy infrastructure. If it was up to me, I would be much more supportive of the next wave of technologies. I would really love to see a new Australian wind turbine blade manufacturing method. At some point in the next decade, advanced manufacturing is gonna make it into wind turbine blades; it's already there in some of the other components. Allen Hall: Wait, so you just said if we were gonna build a factory in Scotland, it would take about a year. Why would it take ten years to do it in Australia? Australia's a nice place to live. Rosemary Barnes: No, I didn't say it would take ten years. I said sometime in the next decade. Around the world, wind turbine blades are basically handmade, right? There are some machines helping people, but you have a look at a picture of a wind turbine blade factory and there's twenty people walking over a blade, smoothing down glass. And at some point we're gonna start using advanced manufacturing methods. I [00:19:00] mean, there are really advanced composite manufacturing methods, with individual fiber placement and 3D printing with continuous fibers, and that's being used a lot for aerospace components. It's early days for that technology, and there is no barrier to being able to put it, say, on a gantry that just ran down the length of a whole blade. That could be done, if it was economic. That's the kind of technology Australia should be supporting, before it's the mainstream and everybody else has already done it, right?
You need to find the next thing, and ideally not just one next thing but several next things, because you don't know ahead of time which is gonna be the winner. Allen Hall: That hasn't been the tack that China has taken. The latest technology in batteries is not something that China is producing today; they're producing a generation prior, but they're doing it at scale. At some point the Chinese just said, we're stopping here, we're gonna do this kind of [00:20:00] battery, and that's it, and away we go. If we keep waiting until the next generation of blade techniques comes out, I think we're gonna be waiting forever. Rosemary Barnes: I don't see why. I think we should, you know, make the next generation of blade technologies. Yolanda Padron: I think it makes sense for someplace like Australia, right? Because we've talked about the fact that here you have to consider a lot of factors in operation that you don't have to consider in other places, especially for blades, right? So if you can eliminate most of those issues in the factory, at manufacturing, then that can really help boost the next operational projects. Allen Hall: So then what you're saying is that there are new technologies, but what stage are they at? Are they TRL 2, TRL 5, TRL 7? How close is this technology? Because I'd hate for Australia to miss out on this big opportunity. Rosemary Barnes: Fraunhofer has actually just published an article recently [00:21:00] about some small wind turbine blades; I can't remember if it was fiber tape placement or if they were printed. Small wind is a nice, bite-sized kind of thing that you can master a lot quicker: you can make a thousand small wind turbines and learn a lot more than making one hundred-meter-long blade.
That would probably be bad, because it's your first one and you haven't realized all of the downsides of the new technology yet. So I think it is kind of promising. But in terms of something major, let's say a hundred-meter-long blade made with 3D printing, that would be TRL 1. It's an idea; nobody has actually made one, or done too much, as far as I know. I think you could get to nine, like I said, sometime in the next decade; that's when I think that comes. Allen Hall: Okay, so you couldn't get to a nine that quickly? No, it is possible, yeah; you've gotta put some money into it. Rosemary Barnes: If someone wants to give me [00:22:00] enough money, then I'll make it happen. I would absolutely be able to make that happen, but I don't know when it's gonna be cheap enough. Allen Hall: I would just love to see it. If you've got a factory squirreled away somewhere in the inland of Australia that is making blades at quantity, or has the technology to do that, I would love to see it, because that would be amazing. Rosemary Barnes: Technologies don't just fall out of the sky, you know. You force them into existence. That's what you do. You know what this comes down to? Have you ever done the Myers-Briggs, is that the one where you get the letters of your personality? You and I are in opposite corners in some ways. Allen Hall: That wraps up another episode of the Uptime Wind Energy Podcast. If today's discussion sparked any questions or ideas, and it surely should, we'd love to hear from you. Reach out to us on LinkedIn, particularly Rosie: that's Rosemary Barnes on LinkedIn. Don't forget to subscribe so you never miss an episode. And if you found value in today's conversation, please leave us a review.
It really helps other wind [00:23:00] energy professionals discover the show. For Rosie and Yolanda, I'm Allen Hall, and we'll see you here next week on the Uptime Wind Energy Podcast.
A diagnosis like cancer can stir up many emotions, both in the people living with it and in their family and friends.
- A delegation of the Party Central Committee, the National Assembly, the President's Office, the Government, and the Central Committee of the Vietnam Fatherland Front laid a wreath at the mausoleum in tribute to President Hồ Chí Minh. - Acting Minister of Industry and Trade Lê Mạnh Hùng calls for securing the supply of petroleum, the "lifeblood" of the economy, in all circumstances. - After more than ten days, the first Spring Fair of 2026 will close this evening. - On the last working day before the 2026 Lunar New Year holiday, the number of people leaving the major cities surged, putting heavy pressure on traffic along gateway routes. - The leaders of the 27 EU member states agreed on a plan to restructure the economy, aiming to raise competitiveness and ensure sustainable growth under pressure from the US, China, and Russia. - For World Radio Day, February 13, UNESCO chose the message "AI is a tool, not a voice," as the radio industry enters a period of profound transformation amid the wave of technology and artificial intelligence.
Laudetur Jesus Christus - Praised be Jesus Christ. The daily Vatican Radio program from Vatican News Tiếng Việt. In today's program: 0:00 News bulletin; 16:38 Sharing the Word of God: Fr. Giuse Trần Sĩ Nghị, SJ, reflects on the Gospel for the Sixth Sunday in Ordinary Time. --- These images belong to the Dicastery for Communication of the Holy See. Any use of these images by third parties is prohibited and will lead to copyright claims, unless authorized in writing by the Dicastery for Communication. Copyright © Dicasterium pro Communicatione - All rights reserved.
You've been wondering why YouTube ads are so bad these days. I know it, you know it. Well... mea culpa, mea culpa, mea maxima culpa. Join Spencer, Ty, and Andy as they pick the ads that you are going to see for the next 20 years on YouTube, and discuss ways to make them even worse. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace.
Overthinking can be, in itself, a tremendously distressing experience.
Laudetur Jesus Christus - Praised be Jesus Christ. The daily Vatican Radio program from Vatican News Tiếng Việt. In today's program: 0:00 News bulletin; 17:05 Sharing the Word of God: Fr. Đa Minh Vũ Duy Cường, SJ, reflects on the Gospel for the Fifth Sunday in Ordinary Time; 25:15 Women religious in the Church: the Daughters of Mary of the Visitation sisters in Kenya restore families and heal hearts with love. --- These images belong to the Dicastery for Communication of the Holy See. Any use of these images by third parties is prohibited and will lead to copyright claims, unless authorized in writing by the Dicastery for Communication. Copyright © Dicasterium pro Communicatione - All rights reserved.
In this eye-opening episode, Nurse Erica sits down with Bob Funk, creator of LaborLab, the only nonprofit watchdog organization tracking corporate spending on union-busting. Bob pulls back the curtain on the multi-million dollar industry dedicated to keeping healthcare workers from organizing, revealing how hospitals and healthcare systems spend millions of dollars on union-busting consultants. They explore LaborLab's union-buster tracker and discuss the common tactics employers use to discourage nurses from organizing, from captive audience meetings to intimidation and retaliation. Bob explains the Labor Management Reporting and Disclosure Act of 1959 and how the LM-20 forms are supposed to work, along with the troubling reality that many employers and union-busters simply don't comply with legally required financial reporting. The conversation dives into the "persuader loophole" that allows consultants to hide their anti-union activities and discusses why the PRO Act matters for nursing. They don't shy away from the controversial topic of scab nurses and the damage strike-breaking causes to both patient care and the profession. Whether you're curious about organizing, already involved in union efforts, or just want to understand the forces working against nurses' collective power, this episode is essential listening! Interested in Sponsoring the Show? Email with the subject NURSES UNCORKED SPONSOR to: nursesuncorked@gmail.com Support the Show: Help keep Nurses Uncorked going and become an official Patron! Gain early access to episodes, exclusive bonus content, giveaways, Zoom parties, shout-outs, and much more. Become a Wine Cork, Wine Bottle, Decanter, Grand Preserve, or even a Vineyard Member: https://patron.podbean.com/nursesuncorkedpodcast ETSY Shop: Stop Healthcare Worker Violence! 
https://www.etsy.com/shop/TheNurseErica Labor Lab: https://laborlab.us/ https://www.tiktok.com/@laborlab.us https://www.instagram.com/laborlab_us/?hl=en https://x.com/LaborLabUS Chapters: 00:00 Introduction 03:40 Testifying Before House of Representatives 08:00 Employer Reporting Noncompliance 11:50 Persuader Loophole 14:50 Labor Lab 17:27 Union Buster Tracker 19:00 Common Union Busting Tactics 24:50 Captive Audience Meetings 28:14 Legal Protections 35:37 Union Busters 45:00 Breaking Down LM-20 Disclosure Forms 48:30 Pitfalls of Union Organizing 53:30 National Labor Relations Board 57:49 The PRO Act 1:00:12 Healthcare System Consolidations 1:01:40 Nursing Strikes 1:05:25 Strike Insurance 1:10:55 Scabs Damage the Profession 1:27:49 Conclusion Help the podcast grow by giving episodes a like, download, follow, and a 5-star rating! Please follow Nurses Uncorked at: tiktok.com/nurses-uncorked https://youtube.com/@NursesUncorkedL You can listen to the podcast at: podcasts.apple/nursesuncorked spotify.com/nursesuncorked podbean.com/nursesuncorked iheart.com/nurses-uncorked Follow Nurse Erica: @TheNurseErica on TikTok, Instagram, Facebook and YouTube! https://www.youtube.com/@thenurseerica9094 https://www.instagram.com/the.nurse.erica/ DISCLAIMER: This podcast and all related content published or distributed by or on behalf of Nurse Erica or the Nurses Uncorked Podcast is for informational, educational and entertainment purposes only and may include information that is general in nature and that is not specific to you. Any information or opinions expressed or contained herein are not intended to serve as legal advice, or replace medical advice, nor to diagnose, prescribe or treat any disease, condition, illness or injury, and you should consult the health care professional of your choice regarding all matters concerning your health, including before beginning any exercise, weight loss, or health care program. 
If you have, or suspect you may have, a health-care emergency, please contact a qualified health care professional for treatment. The views and opinions expressed on Nurses Uncorked do not reflect the views of our employers, professional organizations or affiliates. Any information or opinions provided by guest experts or hosts featured within website or on Nurses Uncorked Podcast are their own; not those of Nurse Erica or Nurses Uncorked LLC. Accordingly, Nurse Erica and Nurses Uncorked cannot be responsible for any results or consequences or actions you may take based on such information or opinions. All content is the sole property of Nurses Uncorked, LLC. All copyrights are reserved and the exclusive property of Nurses Uncorked, LLC.
What the @*$^ did you just $%%^ing say about me, you little neph? I'll have you know I graduated top of my class in the Navy Uncs, and I've been involved in numerous secret raids on Aunt-Quaeda, and I have over 300 confirmed beers. Join Spencer, Ty, and Andy as they write the greatest TV show in history: the secret history of the Uncles. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace.
VOV1 - On the afternoon of February 3, in Washington D.C., USA, Acting Minister of Industry and Trade Lê Mạnh Hùng witnessed the signing of memoranda of understanding (MOUs) on cooperation between Binh Son Refining and Petrochemical Company and leading US energy partners.
I think this is one of the meditations that best describes the heart of what mindfulness is and what characterizes it.
LM reports how public employment has surged by 523,600 people under Pedro Sánchez's tenure.
How are you naming your pitbull that. You know you can't be naming a pitbull that word. Come on with that nonsense. Join Spencer, Ty, and Andy as they decide once again who would win in a fight between King Von and Mort Rifkin. You know, normal discussions. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace. A big shout-out to our $10/month patrons: Celeste, Yung Zoe, Dane Stephen, Weedworf, James Lloyd-Jones, Sam Thomas, Josh O'Brien, Kilo, David, Sam, T, Rach, Tomix, Adam W, L M, Revidicism, Jennifer Knowles, Jeremy-Alice, Louis Ceresa, Charles Doyle, Dean, Axon, Themandme, Raouldyke, Stephen Tucker, Lawrence, Rebecca Kimpel, Malek Douglas, Jacon Sauber-Cavazos, Bernventers, William Copping, NewmansOwn, Heather-Pleather, Bunknown, Dinosarden, Bedi, Francis Wolf, King Krang, Anthony C, ASDF, Buffoonworld, Bavbiff, D Love, and Tugboat!
Sometimes we only need to give ourselves a couple of minutes at night to make a difference when we go to bed. Even just a few moments can make a difference in our sleep. Sending you a big hug.
LM reports how public debt is growing again, rising by 5,071 million euros in November and surpassing 100% of GDP.
LM publishes what Marta Serrano, who headed the General Secretariat of Land Transport between 2023 and 2025, had said.
From shipping Gemini Deep Think and IMO Gold to launching the Reasoning and AGI team in Singapore, Yi Tay has spent the last 18 months living through the full arc of Google DeepMind's pivot from architecture research to RL-driven reasoning: watching his team grow from a dozen researchers to 300+, training models that solve International Math Olympiad problems in a live competition, building the infrastructure to scale deep thinking across every domain, and driving Gemini to the top of the leaderboards in every category. Yi returns to dig into the inside story of the IMO effort and more!

We discuss:

* Yi's path: Brain → Reka → Google DeepMind → Reasoning and AGI team Singapore, leading model training for Gemini Deep Think and IMO Gold
* The IMO Gold story: four co-captains (Yi in Singapore, Jonathan in London, Jordan in Mountain View, and Tong leading the overall effort), training the checkpoint in ~1 week, a live competition in Australia with professors punching in problems as they came out, and the tension of not knowing whether they'd hit Gold until the human scores came in (because the Gold threshold is a percentile, not a fixed number)
* Why they threw away AlphaProof: "If one model can't do it, can we get to AGI?" The decision to abandon symbolic systems and bet on end-to-end Gemini with RL was bold and non-consensus
* On-policy vs. off-policy RL: off-policy is imitation learning (copying someone else's trajectory), while on-policy has the model generate its own outputs, get rewarded, and train on its own experience: "humans learn by making mistakes, not by copying"
* Why self-consistency and parallel thinking are fundamental: sampling multiple times, majority voting, LM judges, and internal verification are all forms of self-consistency that unlock reasoning beyond single-shot inference
* The data efficiency frontier: humans learn from 8 orders of magnitude less data than models, so where's the bug? Is it the architecture, the learning algorithm, backprop, off-policyness, or something else?
* Three schools of thought on world models: (1) Genie/spatial intelligence (video-based world models), (2) Yann LeCun's JEPA plus FAIR's code world models (modeling internal execution state), and (3) the amorphous "resolution of possible worlds" paradigm (curve-fitting to find the world model that best explains the data)
* Why AI coding crossed the threshold: Yi now runs a job, gets a bug, pastes it into Gemini, and relaunches without even reading the fix: "the model is better than me at this"
* The Pokémon benchmark: can models complete the Pokédex by searching the web, synthesizing guides, and applying knowledge in a visual game state? "Efficient search of novel idea space is interesting, but we're not even at the point where models can consistently apply knowledge they look up"
* DSI and generative retrieval: re-imagining search as predicting document identifiers with semantic tokens, now deployed at YouTube (semantic IDs for RecSys) and Spotify
* Why RecSys and IR feel like a different universe: "modeling dynamics are strange, like gravity is different: you hit the shuttlecock and hear glass shatter; cause and effect are too far apart"
* The closed-lab advantage is increasing: the gap between frontier labs and open source is growing because ideas compound over time, and researchers keep finding new tricks that play well with everything built before
* Why ideas still matter: "the last five years weren't just blind scaling; transformers, pre-training, RL, and self-consistency all had to play well together to get us here"
* Gemini Singapore: hiring RL and reasoning researchers, looking for a track record in RL or exceptional achievement in coding competitions, and building a small, talent-dense team close to the frontier

Yi Tay

* Google DeepMind: https://deepmind.google
* X: https://x.com/YiTayML

Full Video Episode

Timestamps

00:00:00 Introduction: Returning to Google DeepMind and the Singapore AGI Team
00:04:52 The Philosophy of On-Policy RL: Learning from Your Own Mistakes
00:12:00 IMO Gold Medal: The Journey from AlphaProof to End-to-End Gemini
00:21:33 Training IMO Cat: Four Captains Across Three Time Zones
00:26:19 Pokemon and Long-Horizon Reasoning: Beyond Academic Benchmarks
00:32:59 Reasoning, Chain of Thought, and Latent Thinking
00:36:29 AI Coding Assistants: From Lazy to Actually Useful
00:44:46 Is Attention All You Need? Architecture, Learning, and the Local Minima
00:55:04 Data Efficiency and World Models: The Next Frontier
01:08:12 DSI and Generative Retrieval: Reimagining Search with Semantic IDs
01:17:59 Building GDM Singapore: Geography, Talent, and the Symposium
01:24:18 Hiring Philosophy: High Stats, Research Taste, and Student Budgets
01:28:49 Health, HRV, and Research Performance: The 23kg Journey

Get full access to Latent.Space at www.latent.space/subscribe
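The episode's self-consistency idea (sample several reasoning paths for the same question, then majority-vote the final answers) can be sketched in a few lines. This is a minimal illustration, not any lab's implementation; `toy_sampler` is a hypothetical, deterministic stand-in for a real stochastic LLM call that would decode a chain of thought at temperature > 0 and parse out the final answer:

```python
from collections import Counter

def self_consistency(question, sampler, n_samples=16):
    """Self-consistency decoding: draw several independent answers for the
    same question, then return the majority answer and its vote share."""
    answers = [sampler(question) for _ in range(n_samples)]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / n_samples

# Hypothetical stand-in for a stochastic model call: yields a fixed stream
# of sampled answers (12 of 16 are "42") so the example is reproducible.
def toy_sampler(question, _state=iter(["42", "42", "41", "42"] * 4)):
    return next(_state)

answer, share = self_consistency("What is 6 * 7?", toy_sampler)
# answer == "42", share == 0.75
```

The single-shot sample can be wrong (here, "41" a quarter of the time), but the vote over many samples converges on the consistent answer, which is the intuition behind pairing parallel thinking with verifiers and LM judges.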
Record prices. Wild color combinations. And a white GTO that quietly told the real story. In this episode, I break down the Bachman Ferrari sale—why so many cars shattered records, why the boldest (and loudest) specs seemed to win, and what this means for the broader Ferrari market going forward. We talk about how extreme colors and one-off specifications are fueling a new wave of Tailor Made Ferraris, often with investment hopes attached, and why that strategy doesn't always end the way people expect. I also dig into the surprises: softness in cars like the Superamerica, Dinos, and Daytonas, the continued strength of Scuderia, Stradale, and Aperta models, and why the white Ferrari 330 LM / 250 GTO selling for $35 million wasn't as shocking as it looked—unless you weren't paying attention. Finally, I connect the dots to what this could mean for upcoming auctions, including RM Sotheby's Arizona, and why fundamentals still matter—even in a market that sometimes feels like a rainbow-painted Skittles car just crossed the block.
Listen to the January 2026 edition of The Postal Record. Browse the digital issue here. 00:00 Introduction 00:14 Looking back and looking forward, by President Brian L. Renfroe 05:03 News from Washington 12:05 2025 JCAM is now available 18:58 Register now for the food drive 25:14 Informal Step A Training announced 30:25 After a year of standing strong NALC is ready to fight on. 2026: A look ahead 50:07 Leadership Academy founder asks grads to serve other letter carriers back home 58:26 Important benefits new letter carriers should expect to receive from USPS 01:11:22 Caretakers of the community 01:39:04 George Meany, first president of the AFL-CIO 01:44:30 NALC Branch Publication competition call for entries 01:50:07 Carriers and the mail make news online 01:56:31 From airwaves to the page: A creative journey and tribute to lifelong friends 02:04:00 Veterans' legislative update 02:16:56 Executive Vice President Paul Barner: An update to cases pending at the Interpretive step 02:27:38 Vice President James Henry: NALC needs you 02:32:57 Secretary-Treasurer Nicole Rhine: Reporting to the DOL: Forms LM-2, LM-3 and LM-4 02:38:57 Assistant Secretary-Treasurer Mack Julion: Postal protection 02:44:33 Director of City Delivery Christopher Jackson: USPS pilot testing and additional revenue streams 02:49:48 Director of Safety and Health Manuel Peralta Jr.: Safety committees 02:56:45 Director of Retired Members Dan Toth: Roth TSP—Another tool to manage your taxes 03:02:49 Director of Life Insurance James Yates: MBA Retirement Savings Plan 2026 update 03:08:32 Director of Health Benefits Stephanie Stewart: New benefits and wellness programs 03:14:40 Contract Talk: Route inspections 03:36:41 Regional Workers' Compensation Assistant Coby Jones: Preexisting conditions 03:43:18 Staff report - CLUW: "Women of the World Unite"
That's right, folks. You thought TGOFV would never do class warfare? You simply don't know us well enough. Join Spencer, Ty, and Andy as they debate over which type of worker is the only good one to be: computer guy or construction site wolf-whistler. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace. A big shout-out to our $10/month patrons: Celeste, Yung Zoe, Dane Stephen, Weedworf, James Lloyd-Jones, Sam Thomas, Josh O'Brien, Kilo, David, Sam, T, Rach, Tomix, Adam W, L M, Revidicism, Jennifer Knowles, Jeremy-Alice, Louis Ceresa, Charles Doyle, Dean, Axon, Themandme, Raouldyke, Stephen Tucker, Lawrence, Rebecca Kimpel, Malek Douglas, Jacon Sauber-Cavazos, Bernventers, William Copping, NewmansOwn, Heather-Pleather, Bunknown, Dinosarden, Bedi, Francis Wolf, King Krang, Anthony C, ASDF, Buffoonworld, Bavbiff, D Love, and Tugboat!
Why are so many parents refusing to register birth certificates for their kids? The answer might shock you.
Allen, Joel, Rosemary, and Yolanda cover major offshore wind developments on both sides of the Atlantic. In the US, Ørsted's Revolution Wind won a court victory allowing construction to resume after the Trump administration's suspension. Meanwhile, the UK awarded contracts for 8.4 gigawatts of new offshore capacity in the largest auction in European history, with RWE securing nearly 7 gigawatts. Plus Canada's Nova Scotia announces ambitious 40 gigawatt offshore wind plans, and the crew discusses the ongoing Denmark-Greenland tensions with the US administration. Sign up now for Uptime Tech News, our weekly newsletter on all things wind technology. This episode is sponsored by Weather Guard Lightning Tech. Learn more about Weather Guard's StrikeTape Wind Turbine LPS retrofit. Follow the show on YouTube, LinkedIn and visit Weather Guard on the web. And subscribe to Rosemary's "Engineering with Rosie" YouTube channel here. Have a question we can answer on the show? Email us! The Uptime Wind Energy Podcast, brought to you by StrikeTape, protecting thousands of wind turbines from lightning damage worldwide. And now your hosts, Allen Hall, Rosemary Barnes, Joel Saxum and Yolanda Padron. Welcome to the Uptime Wind Energy Podcast. I'm Allen Hall, along with Yolanda, Joel and Rosie. Boy, a lot of action in the US courts. As you know, for weeks American offshore wind has been holding its breath, and a lot of people's jobs are at stake right now. The Trump administration suspended five major projects on December 22nd, still citing national security concerns. Billions of dollars are in the balance here. Construction vessels for most of these sites are just doing nothing at the minute, but the courts are stepping in, and Ørsted won a [00:01:00] key victory when a federal judge allowed its Revolution Wind project off the coast of Rhode Island to resume construction immediately.
So everybody's excited there, and it does sound like Ørsted is trying to finish that project as fast as they can. And Equinor and Dominion Energy, which have two of the other bigger projects, are fighting similar battles. Equinor is supposed to be heard in the next couple of days as we're recording. But the message from developers is pretty clear: they have invested too much to walk away, and if they get an opportunity to wrap these projects up quickly, they are going to do it. Now, Joel, before the show we were talking about Vineyard Wind. Vineyard Wind was on hold, and I think it may not even be on hold right now; I have to go back and look. But when they were put on hold, the question was whether the turbines that were operating were able to continue operating. The answer, I initially thought, was no. But it was yes: the turbines that were [00:02:00] producing power were allowed to continue producing power. What was in the balance were the remaining turbines that were still being installed or being upgraded. So there's a lot going on right now, but back to your earlier point, Joel, and maybe you can discuss this: there is an offshore wind farm called Block Island really close to all these other wind farms, and it's been there for four or five years at this point. No one's said anything about that wind farm. Speaker: I think it's been there, to be honest with you, since like 2016 or '17. It's been there a long time. Is it that old? Yeah, yeah. When we've been talking through all this, it gets lost in the shuffle, and it shouldn't, because that's really the first offshore wind farm in the United States. We keep talking about all these big, you know, utility-scale massive things, but that is a utility-scale wind farm as well. Correct me if I'm wrong, Yolanda, is it five turbines or six? It's five. They're decent-sized turbines sitting on jackets.
They're only a couple miles offshore, not way offshore. But throughout all of these issues that we've had with [00:03:00] these injunctions and stopping construction and reviewing permits and all these things, Block Island has just been spinning, producing power for the locals there off the coast of Rhode Island. So the question was: okay, all these other wind farms that are partially constructed, have they been spinning? Are they producing power? And my mind goes to this as a risk-reduction effort. I wonder about the cable-lay timelines being what they were. As a risk-reduction effort, and it seems really silly to have to think about this, if you have your offshore substation, was the main export cable connected on some of these, like Revolution Wind, where they have the injunction right now? Was that export cable connected, and were the inter-array cables regularly connected to turbines as they came online? It wasn't like a COD where we turned the switch and had to wait for all 62 turbines. Right. So to our [00:04:00] knowledge, and please reach out to any of us on LinkedIn or by email, the turbines that are in production have still been spinning. It's the construction activities that have been stopped. But now, hey, Revolution Wind is 90% complete, and they're back out running construction activities as of today. Speaker 2: It was in the last 48 hours. So this is a good sign, because I think as the other wind farms go through the courts, they're gonna essentially run through this same judge. That tends to happen, because they have done all the research already. So you likely get the same outcome for all the other wind farms, although they have to go through the process.
You can't do like a class action; at least that doesn't appear to be in play at the minute. They're all gonna have to go through this little bit of a process. But what the judge is saying, essentially, is that the concern from the Department of War and the Department of Interior is [00:05:00] make-believe. I don't wanna frame it that way; it's not framed that way the way it's written, there are a lot more legalistic terms about it. But basically they're saying: they tried to stop it before, the Trump administration didn't get the result they wanted, so the Trump administration ramped it up by saying it was something classified, in part, with the Department of War. The judge isn't buying it. When we initially talked about this, I think the early feeling was they're trying to stop it, but the fact that they're trying to stop it just because, and just start pulling permits, is not gonna stand up in court. And when they want to come back and do it again, they're not likely to win. If they had kept their ammunition dry and from the beginning said it's something classified, something defense-related, the Trump administration probably would've had a better shot at this. But now it just seems like everything's gonna lead down the pathway where all these projects get finished. Speaker: Yeah, I think that specific judge probably was listening to the [00:06:00] Uptime podcast last week for his research. Listened to the opinions we talked about here, saying that this is kind of all BS, it's not gonna fly. But where we're sitting is: Revolution Wind had the injunction against it. Empire Wind had an injunction against it too, but they were awaiting a similar ruling. Hopefully that's actually supposed to go down today, which is Wednesday. So we're recording this on Wednesday.
And then Dominion is suing as well, and their hearing is on Friday, two days from now. And I would expect, I mean, it's the same judge, same pieces of paper, it's going to be the same result. Some numbers to throw at this thing, just so the listeners know the impact: Dominion, for the Coastal Virginia Offshore Wind project, says that the pause in construction is costing them $5 million a day. And that's a pretty round number; it's a conservative number, to be honest with you, for how many vessels and how much stuff is out there in offshore operations. That makes sense. Yep. [00:07:00] $5 million a day. And that's one of the wind farms. Coastal Virginia is an $11 billion project with something like 176 turbines; it's gonna have enough production out there to power something like 650,000 homes when it's done. So there were five projects suspended; well, there are four now, Revolution's back running, right? Four still stopped. And across those five is $28 billion in combined capital at risk. So you can understand why some of these companies are worried; this is not peanuts. You saw a little bump in Ørsted stock in the markets when the Revolution Wind injunction was lifted. But [00:08:00] you also see that Moody's, the credit rating agency, has lowered Ørsted's outlook from stable to negative, given that political risk. Speaker 2: Well, if you haven't been paying attention, Wind Energy O&M Australia 2026 is happening relatively soon. It's gonna be February 17th and 18th at the Pullman Hotel in downtown Melbourne. And we are all looking forward to it. The roster and the agenda are nearly assembled at this point.
We have a couple of last-minute speakers, but I'm looking at the agenda and, like, wow: if you work in O&M or even are around wind turbines, this is the place to be in February. Speaker: From my seat, it's shaping up to be pretty fun. My phone has just been inundated with text messages and WhatsApps of "when are you traveling, what are your dates?" And I wanna say this right, Rosie: looking forward to "Melbun." Did I get it? Did I do it okay? Speaker 3: You know how to say it. Speaker: So we're really looking forward to it. We've got a bunch of people traveling from around the [00:09:00] world to come and share their collective knowledge and learn from the Australians about how they're doing things, what the risks are, what the problems are. The environment down there last year was very collaborative, the conversations were flowing, so we're looking forward to it in a big way from our seats over here. Speaker 2: We are announcing a lightning workshop, and that workshop will be answering all your lightning questions in regards to your turbines. Typically when we do this, it's about $10,000 per seat, and this will be free as part of WOMA 2026. We're gonna talk about some of the lightning physics, what's actually happening in the field versus what the OEMs are saying and what the IEC specification indicates. And the big one is force majeure. A lot of operators are paying for damages that are well within the IEC specification, and we'll explain [00:10:00] what that is all about and what you can do to save yourself literally millions of dollars. But that is only possible if you go to WOMA2026.com and register today, because we're running outta seats. Once they're gone, they're gone. But this is a great opportunity to get your lightning questions answered. And Rosemary promised me that we're gonna talk about Vestas turbines.
Siemens turbines, GE Vernova turbines, Nordex turbines. So if you have Nordex turbines, Suzlon turbines, bring the turbine type; we'll talk about it. We'll get your questions answered, and the goal is that everybody at WOMA 2026 is gonna go home and save themselves millions of dollars in '26 and millions of dollars in '27 and all the years after, because this lightning workshop is going to take care of those really frustrating lightning questions that just don't get answered. We're gonna do it right there. Sign up today. Speaker 3: [00:11:00] You know what, I'm really looking forward to that session, especially 'cause I've got a couple of new, or new-ish, staff, and it's a great way to get them up to speed on lightning. And I think that, for the majority of people, even if you are struggling with lightning problems every day, I bet there is a whole bunch you could learn about the underlying physics of lightning. And there are not so many places in the world to find that. I have looked, for my staff training, for the course I can send them to, to understand all about lightning. I know when I started at LM, I had an intro session one-on-one with the, you know, chief lightning guy there. That's not so easy to come by, and this is the opportunity where you can get that and better, because it's information about every OEM and a better understanding of how it works. One of the things that I find working with lightning is a lot of force majeure claims, and then the OEMs try and bamboozle you with this, like, scientific-sounding talk. If you understand it better, then you'll be able to do better in those discussions. [00:12:00] So I would highly recommend attending, if you can swing the Monday as well. Speaker: If you wanna attend now and you're coming to the event:
You can reach out to me directly, because what we want to do now is collect as much information as possible about the specific turbine types that the people in the room are gonna be responsible for, so we can tailor those messages to help you out directly. So feel free to reach out to me, joel.saxum, S-A-X-U-M, at wglightning.com, and we'll be squared away and ready to roll on Monday. I think that's Monday the 16th. Speaker 2: So while American offshore wind fights for survival in the courts, British offshore wind just had its biggest day ever. The United Kingdom awarded contracts for 8.4 gigawatts. That's right: 8.4 gigawatts of new offshore wind capacity, the largest auction in European history. Holy smokes, guys. The price came in at about 91 pounds per megawatt hour, and that's in 2024 pounds. [00:13:00] That's roughly 40% cheaper than building a new gas plant. Energy Secretary Ed Miliband called it a monumental step towards the country's 2030 clean power goals, and that it is. Critics say that prices are still higher than in previous auctions, and that the government faces challenges connecting all this new capacity to the grid, and they do; transmission is a limiting factor here. But in terms of where the UK is headed, putting in gigawatts of offshore wind is going to disconnect them from a lot of need for gas supply and other energy sources. It's a massive auction round. This was way above what I remember being talked about when we were in Scotland just a couple of weeks ago, Joel. Speaker: Yeah, that's what I was gonna say. You know, when we were up at the ORE Catapult event and talked to a lot of the different organizations, their OWGP and, of course, the ORE Catapult folks and a [00:14:00] few others, they were really excited about AR7. They were like, oh, we're so excited, it's gonna come down, it's gonna be great.
I didn't expect these kinds of numbers to come out of this thing, right? 'Cause we know that the UK currently has about 16 and a half gigawatts of offshore wind capacity, with a bunch under construction, like 11 under construction, but their goal is to have 43 gigawatts by 2030. So, Speaker 2: man. Speaker: Yeah. And to put 2030 into context, this is one of our first podcasts of the new year; that's only four years away. Right. It's soon. So, in round numbers, they've got 16 now producing and 11 in the pipe being constructed. That gets you to 27. That's another 16 gigawatts of wind, not under construction today, that they want to have completed in the next four years. That is a monumental effort. Now, we know that there are some grid complications and connection [00:15:00] requirements and things that will slow that down, but setting the grid aside, just think about the amount of effort to get those kinds of large capital projects done in that short a timeline. Kudos to the UK, 'cause they're unlocking a lot of private investment, a lot of effort to get these things done, but they're literally doing the inverse of what we're doing in the United States right now. Speaker 2: That would be a total of about 550 15-ish-megawatt turbines in the water. That does seem doable, though. The big question is who's gonna be providing those turbines. That's a massive order. Whoever the salesperson is on that transaction is gonna be very happy. Speaker: Well, the interesting thing here too is the global context of the assets needed to deliver this. We just got done talking about the troubles at these wind farms in the United States. As soon as these wind farms are finished, there are not more of them coming to the construction phase shortly, right?
So all of these assets, all these jack-up vessels, these installation vessels, these specialized cable-lay vessels, they [00:16:00] can fuel up and head right back across the Atlantic and start working on these things, if all of the engineering and the turbine deliveries are ready to roll. Because two years ago that was a problem, right? We were all forecasting a shortage of vessels and assets to be able to do installs. And now, with the US basically shutting offshore down once we're done with the wind farms we're working on, it frees those back up, right? So the vessels will be there, ready to roll. You'll have people coming off of construction projects that know what's going on, that know how to work these things. So the people and the vessels will be ready to roll. It is just: can we get the cables, the monopiles, the turbines, the nacelles, the blades all done in time to make this happen? And I know I'm rambling now, but after leaving that ORE Catapult event and talking to some of the people [00:17:00] supporting those funds being injected from the government, I think that they've got Speaker 2: the money flowing over there to get it done too. The big winner in the auction round was RWE, and they took almost seven gigawatts, so that was the larger share of the 8.4 gigawatts. RWE obviously has a relationship with Vestas. Is that where this is gonna go? They're gonna be installing Vestas turbines? And where will those turbines be built? As I was informed by a Scottish gentleman, I won't name names. Will those turbines be built in the UK? Speaker 3: It's a lot. It's one of the biggest challenges with the supply chain for wind energy: it just is so lumpy.
You get a huge eight gigawatts all at once, and then you have years of not much going on. I mean, for sure they're not gonna be building [00:18:00] eight gigawatts' worth of wind turbines in the UK in the next couple of years, because they would also have to build the capacity to manufacture that, and then they'd want to be building gigawatts every couple of years for the next 10 or 20 years. So, yeah, of course they're gonna be manufacturing at facilities around the world and transporting them. But it's one of the things I just constantly shake my head about: how come, especially when projects and plans are government supported, we can't do a better job of smoothing things out so that you can have, for example, local manufacturing, because everyone knows they've got a secure pipeline? When the government's involved, it should be possible. Speaker 2: At least the UK has been putting forth some pretty big numbers to support a local supply chain. When we were over in Scotland, they announced 300 million pounds, and that was just one of several. Over the next year there will be [00:19:00] nearly a billion pounds put into the supply chain, which will make a dramatic difference. But I think you're right. Also, they're gonna ramp up and then it's gonna ramp down. They have to find a way to feed the global marketplace at some point, because the technology and the people are there. It's a question of how you sustain it for a 20- or 30-year period. That's a different question. Speaker 3: I do agree that the UK is doing a better job than probably anybody else. It's just that the way they have chosen to organize these auctions and the government support and the planning means that they have the perfect conditions to, you know,
Make a smooth rollout and take care of all this. So I'm just a bit frustrated that they're not doing more. But you're right that they're probably doing the best. Speaker 4: Once all of these are in service, though, aren't there quite a bit of aftermarket products available in the UK on the service side? Speaker: I think there's more. Speaker 4: Which, I mean, that's a good part of it, right? Speaker: If we're talking Vestas, let's just round this [00:20:00] up. If we're talking Vestas production for blades in Europe: you have two facilities in Denmark that build V236 blades. You have one facility in Italy that builds V236 blades. Taiwan builds them too, but for the APAC market, of course. Poland has one on hold right now, V236 as well. They also just bought that factory from LM up in Poland, but I think that's for onshore blades. Oh, yes, sure. And then Scotland has the proposed facility in Leith, and that's kind of on hold as well. If that one's proposed, hey, if we get a big order, they'll spin that up quick, because I would imagine one of those funds will spool up a little bit of money, boom, boom, boom, because they're turning into local jobs, local supply Speaker 2: chain. Does this then create the condition where a lot of wind turbines, like when we were in Scotland, a lot of those wind turbines are gonna reach 20 years old, maybe a little bit older, over the next five years, where they will [00:21:00] need to be repowered, upgraded, whatever's gonna happen there? If you had internal manufacturing in country, you'd think that would lower the price to go do that. That will be a big effort, just like it is in Spain right now. Speaker: The trouble there, though, is if you're using local content in the UK, the labor prices are so much Speaker 2: higher.
I'm gonna go back to Rosie's point about the way energy is sold worldwide. The UK has high energy prices, mostly because they are buying energy from other countries, and it's expensive to get it in country. So yes, they can have higher labor prices and still be lower cost compared to the alternatives. It's not the same equation in the US versus the UK; it's totally different economics. But if they get enough power generation, which I think the UK will, they're gonna offload it, and they're already doing that now. You can send power to France, send power up [00:22:00] north. There are ways to sell that extra power and help pay for the system you built. That would make a lot of sense. It's very similar to what the Saudis have done for dang near 80 years, which is fill tankers full of oil and sell it. This is a little bit different in that we're just sending electrons through the water to adjacent European countries. It does seem like a plan. I hope they're sending them through a cable in the water and not just into the water. Well, here's the thing that was concerning early on: they were gonna turn it into hydrogen, put it on a ship, and send it over to France. That didn't make any sense at all. A cable is the way to do it right. Speaker: And actually, Allen, you and I did have a conversation with someone not too long ago about that arbitrage market, and about the project where they put that HVDC cable next to the tunnel, and it, like, paid for itself in a year or something. They didn't really wanna tell us, but yeah, it paid for itself in a year; that was the ROI on a $500 million [00:23:00] project or something. That's crazy. But that is, I would say, part of the big push in the UK: they can arbitrage that power and send it back across. I think NordLink is the cable between Peterhead and Norway, right?
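The "paid for itself in a year" claim can be sanity-checked with rough interconnector arithmetic. Only the $500 million figure is from the episode; the capacity, utilization, and price spread below are illustrative assumptions.

```python
# Rough payback sketch for a ~$500M HVDC interconnector (inputs assumed).
capex_usd = 500e6         # project cost mentioned in the episode
capacity_mw = 1000        # assumed 1 GW link
utilization = 0.7         # assumed average utilization
spread_usd_mwh = 80       # assumed average price spread captured, $/MWh

annual_mwh = capacity_mw * 8760 * utilization
annual_revenue = annual_mwh * spread_usd_mwh
payback_years = capex_usd / annual_revenue
print(f"~{payback_years:.1f} years to pay back")
```

Under those assumed numbers the payback works out to roughly a year, so the claim is at least plausible.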
So you have an arbitrage market going across to the Scandinavian countries, and you have an arbitrage market going to the mainland EU. And when they have big-time wind, they're gonna be able to do it. So when you have an RWE looking at seven gigawatts of possibility that they just procured: game on. I love it. I think it's gonna be cool. I'm happy to see it blow Speaker 2: up. Canada is getting serious about offshore wind, and international developers are paying attention. Q Energy of France and its South Korean partner Hanwha Ocean have submitted applications to develop wind projects off Nova Scotia's coast. The province has big ambitions. Premier Tim Houston wants to license enough offshore [00:24:00] wind to produce 40 gigawatts of power, far more than Nova Scotia would ever need. The extra electricity could supply more than a quarter of Canada's total demand. If all goes according to plan, the first turbines could be spinning by 2035. Now, Joel, yeah, some of this power will go to Canada, but there's a huge market in the United States also for this power, and the capacity factor up in Nova Scotia offshore is really good. Yeah, it Speaker: is simply stellar, right? That whole Nova Scotia, New Brunswick, Newfoundland area, even the whole Maritimes of Canada: the wind never stops blowing. I go up there every once in a while because my wife is from up there, and it's miserable sometimes even in the middle of summer. So the wind resource is fantastic. It is, or will be, a boon for the Canadian market, right? That maritime community, they're always [00:25:00] looking for new jobs, new jobs, new jobs, and this is gonna bring them to them. One thing I wanna flag here is, when this announcement came out, I reached out to Tim Houston's office to try to get him on the podcast, and I haven't gotten a response yet. Nova Scotia.
So if someone listening can get ahold of Tim Houston, we'd love to talk to him about the plans for Nova Scotia. But we see, just like we see overseas, the arbitrage market of: we're making power, we can sell it, we balance out the prices, we can sell it to other places. From our seats here, we've been talking about the electricity demand on the east coast of the United States for years, and how it is just climbing, climbing, climbing, especially with AI data centers. Virginia is a hub of this, right? They need power, and we're shooting ourselves in the foot on offshore wind, plus also canceling pipelines, and there's no extra generation going on there, except for some solar plants where you can squeeze them in, down in the Carolinas and whatnot. [00:26:00] There is a massive play here for the Canadians to be able to HVDC some power down to us. Speaker 2: The offshore conditions off the coast of Nova Scotia are pretty rough, and the capacity factor being so high makes me think of some of the Brazilian wind farms, where the capacity factor is over 50%. It's amazing down there, but one of the outcomes of that has been early turbine problems. And I'm wondering if the Nova Scotia market is going to demand a different kind of turbine, specifically built for those conditions. It's cold, really cold. It's really windy. There's a lot of moisture in the air, so the salt is gonna be bad. And then the sea life too, right? There's a lot of sea life off the coast of Nova Scotia, which everybody's gonna be concerned about, obviously, as this gets rolling. How do we think about this? And who's gonna be the manufacturer of turbines for Canada? Is it gonna be the Nordics? Speaker: Well, let's start from the ground up there. Or rather than the ground up, how about the sea [00:27:00] floor up? Let's start from there.
If you've ever worked in the offshore world, the maritime Canadian universities that focus on offshore construction produce some of the best engineers for those markets, right? So if you go down to Houston, Texas, where there are offshore oil and gas companies and engineering companies everywhere, you run into Canadians from the Maritimes all over the place, because they're really good at what they do. They have developed offshore oil and gas platforms off the coast of Newfoundland and up in that area, and there's some crazy stuff you have to compete with, right? You have icebergs up there. There are no icebergs in the North Sea; you don't see, you know, icebergs cruising through Hornsea 3. So they've engineered and created foundations and things that can deal with those situations up there. But you also have to remember that you're in the Canadian Shield, which is a geological formation, right? So it's very rocky. It's not [00:28:00] like the other places where we're putting in fixed-bottom wind, where you just pound the piles into the sand. That's not how it's going to go up in Canada. So there's some different engineering that's going to have to take place for the foundations. But like you said, Allen, turbine-specific: it blows up there, right? And we have seen onshore, even in the United States, when you get to areas with high capacity factors, turbines burning out main bearings, burning out generators prematurely, because the capacity factor is so high and those turbines are just churning. I don't know if any of the offshore wind turbine manufacturers are adjusting designs specifically for any markets; I just don't know that. But they may run into some tough stuff up there, right?
You might run into some overspeeding, some main bearing and maintenance issues, specifically in the wintertime, because it is nasty up there. Speaker 2: Well, if you have 40 gigawatts of capacity, you have several thousand turbines. You wanna make really [00:29:00] sure that the blade design is right, that the gearbox is right if you have a gearbox, and that everything is essentially over-designed. Heated, too: you can have de-icing systems on it; I would assume that would be something you'd be thinking about. You do the same thing for the monopiles. The whole assembly has got to have a different thought process behind it than a turbine you would stick off the coast of Germany. Still rough conditions at times, but not like Nova Scotia. Speaker: One other thing to think about, which we haven't dealt with at such extreme levels, is that off the coast of Nova Scotia is the Bay of Fundy. If you know anything about the Bay of Fundy, it has the highest tide swings in the world. The tide swings at certain times of the year can be upwards of 10 meters in a 12-hour period in this area of the ocean. And that comes with different challenges: one of the difficult things with tide swings is that they create subsea currents. [00:30:00] Subsea currents are really, really nasty against rocks, and for any kind of cable-lay activities, the longevity of cables, scour protection around turbines, and stuff like that. So that's another subsea thing we really haven't spoken about. Speaker 3: You know, when you say Bay of Fundy, I'm like, I know I have heard of that place before, and it's from when I was researching tidal power videos, for tidal stream. It's like the best place to generate electricity from tidal stream. So I guess if you are gonna be whacking wind turbines in there anyway, maybe you can share some infrastructure and, yeah,
Eke a little bit more out of your project. Speaker 2: That wraps up another episode of the Uptime Wind Energy Podcast. If today's discussion sparked any questions or ideas, we'd love to hear from you. Just reach out to us on LinkedIn, and don't forget to subscribe so you never miss an episode. And if you found value in today's conversation, please leave us a review; it really helps other wind energy professionals discover the show. For Rosie, Yolanda, and Joel, I'm Allen Hall, and we'll see you here next week on the Uptime [00:36:00] Wind Energy Podcast.
Lethal Mullet Podcast #300: Chatting Eighties with Dee Tails
Remember those good old days, at [INSERT ALMA MATER]? Good times, good times. Join Spencer, Ty, and Andy as they reminisce about their college experiences, and the scholastic opportunities that shaped them into who they are today. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace. A big shout-out to our $10/month patrons: Abbie Phelps, Adam W, Anthony Cabrera, asdf, Axon, Baylor Thornton, Bedi, bernventers, bunknown, Celeste, Charles Doyle, Dane Stephen, Dave Finlay, David Gebhardt, Dean, Francis Wolf, Heather-Pleather, Jacob Sauber-Cavazos, James Lloyd-Jones, Jennifer Knowles, Jeremy-Alice, Josh O'Brien, Kilo, LM, Lawrence, Louis Ceresa, Malek Douglas, Newmans Own, Packocamels, Phat Ass Cyberman, Rach, raouldyke, Rebecca Kimpel, revidicism, Sam Thomas, T, Tash Diehart, Themandme, Tomix, weedworf, William Copping, and Yung Zoe!
Happy New Year! You may have noticed that in 2025 we moved toward YouTube as our primary podcasting platform. As we'll explain in the next State of Latent Space post, we'll be doubling down on Substack again and improving the experience for the over 100,000 of you who look out for our emails and website updates! We first mentioned Artificial Analysis in 2024, when it was still a side project in a Sydney basement. They were then one of the few AI Grant companies to raise a full seed round from Nat Friedman and Daniel Gross, and have now become the independent gold standard for AI benchmarking, trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities. We have chatted with both Clementine Fourrier of Hugging Face's OpenLLM Leaderboard and Anastasios Angelopoulos of LMArena (freshly valued at $1.7B) about their approaches to LLM evals and trendspotting, but Artificial Analysis has staked out an enduring and important place in the toolkit of the modern AI Engineer by doing the best job of independently running the most comprehensive set of evals across the widest range of open and closed models, and charting their progress for broad industry analyst use. George Cameron and Micah Hill-Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs?
And how open is "open" really?

We discuss:
* The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after swyx's retweet
* Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google's Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers
* The mystery shopper policy: they register accounts not on their own domain and run intelligence and performance benchmarks incognito, to prevent labs from serving different models on private endpoints
* How they make money: an enterprise benchmarking insights subscription (standardized reports on model deployment: serverless vs. managed vs. leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard)
* The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs
* Omissions Index (hallucination rate): scores models from -100 to +100, penalizing incorrect answers and rewarding "I don't know"; Claude models lead with the lowest hallucination rates despite not always being the smartest
* GDP Val AA: their version of OpenAI's GDPval (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias)
* The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2's OLMo 2 leads, followed by Nous Hermes and NVIDIA Nemotron)
* The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long context, multi-turn agents)
* Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omissions Index accuracy correlates with total parameters (not active), suggesting massive sparse models are the future
* Token efficiency vs. turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (5.1 Codex has tighter token distributions)
* V4 of the Intelligence Index coming soon: adding GDP Val AA, Critical Point, and hallucination rate, and dropping some saturated benchmarks (HumanEval-style coding is now trivial for small models)

Links to Artificial Analysis
* Website: https://artificialanalysis.ai
* George Cameron on X: https://x.com/georgecameron
* Micah Hill-Smith on X: https://x.com/micahhsmith

Full Episode on YouTube

Timestamps
* 00:00 Introduction: Full Circle Moment and Artificial Analysis Origins
* 01:19 Business Model: Independence and Revenue Streams
* 04:33 Origin Story: From Legal AI to Benchmarking Need
* 16:22 AI Grant and Moving to San Francisco
* 19:21 Intelligence Index Evolution: From V1 to V3
* 11:47 Benchmarking Challenges: Variance, Contamination, and Methodology
* 13:52 Mystery Shopper Policy and Maintaining Independence
* 28:01 New Benchmarks: Omissions Index for Hallucination Detection
* 33:36 Critical Point: Hard Physics Problems and Research-Level Reasoning
* 23:01 GDP Val AA: Agentic Benchmark for Real Work Tasks
* 50:19 Stirrup Agent Harness: Open Source Agentic Framework
* 52:43 Openness Index: Measuring Model Transparency Beyond Licenses
* 58:25 The Smiling Curve: Cost Falling While Spend Rising
* 1:02:32 Hardware Efficiency: Blackwell Gains and Sparsity Limits
* 1:06:23 Reasoning Models and Token Efficiency: The Spectrum Emerges
* 1:11:00 Multimodal Benchmarking: Image, Video, and Speech Arenas
* 1:15:05 Looking Ahead: Intelligence Index V4 and Future Directions
* 1:16:50 Closing: The Insatiable Demand for
Intelligence

Transcript

Micah [00:00:06]: This is kind of a full circle moment for us in a way, because the first time Artificial Analysis got mentioned on a podcast was you and Alessio on Latent Space. Amazing.

swyx [00:00:17]: Which was January 2024. I don't even remember doing that, but yeah, it was very influential to me. I'm looking at AI News for Jan 17, or Jan 16, 2024. I said, this gem of a models and host comparison site was just launched. And then I put in a few screenshots, and I said, it's an independent third party. It clearly outlines the quality versus throughput trade-off, and it breaks out by model and hosting provider. I did give you s**t for missing Fireworks: how do you have a model benchmarking thing without Fireworks? But you had Together, you had Perplexity, and I think we just started chatting there. Welcome, George and Micah, to Latent Space. I've been following your progress. Congrats, it's been an amazing year. You guys have really come together to be the presumptive new Gartner of AI, right? Which is something that...

George [00:01:09]: Yeah, but you can't pay us for better results.

swyx [00:01:12]: Yes, exactly.

George [00:01:13]: Very important.

Micah [00:01:14]: Start off with a spicy take.

swyx [00:01:18]: Okay, how do I pay you?

Micah [00:01:20]: Let's get right into that.

swyx [00:01:21]: How do you make money?

Micah [00:01:24]: Well, very happy to talk about that. So it's been a big journey the last couple of years. Artificial Analysis is going to be two years old in January 2026, which is pretty soon now. We've run the website for free from the start, obviously, and give away a ton of data to help developers and companies navigate AI and make decisions about models, providers, and technologies across the AI stack for building stuff. We're very committed to doing that and intend to keep doing that. We have, along the way, built a business that is working out pretty sustainably. We've got just over 20 people now and two main customer groups.
So we want to be who enterprises look to for data and insights on AI; we want to help them with their decisions about models and technologies for building stuff. And then on the other side, we do private benchmarking for companies throughout the AI stack who build AI stuff. So no one pays to be on the website. We've been very clear about that from the very start, because there's no use doing what we do unless it's independent AI benchmarking. Yeah. But it turns out a bunch of our stuff can be pretty useful to companies building AI stuff.

swyx [00:02:38]: And is it like, I am a Fortune 500, I need advisors on objective analysis, and I call you guys and you pull up a custom report for me, you come into my office and give me a workshop? What kind of engagement is that?

George [00:02:53]: So we have a benchmarking and insights subscription, which looks like standardized reports that cover key topics or key challenges enterprises face when looking to understand AI and choose between all the technologies. So, for instance, one of the reports is a model deployment report: how to think about choosing between serverless inference, managed deployment solutions, or leasing chips and running inference yourself is an example of the kind of decision that big enterprises face, and it's hard to reason through; this AI stuff is really new to everybody. And so we try, with our reports and insights subscription, to help companies navigate that. We also do custom private benchmarking, and that's very different from the public benchmarking that we publicize, where there's no commercial model around it. For private benchmarking, we'll at times create benchmarks and run benchmarks to specs that enterprises want. And we'll also do that sometimes for AI companies who have built things, and we help them understand what they've built with private benchmarking. Yeah.
So that's a piece mainly that we've developed through trying to support everybody publicly with our public benchmarks. Yeah.swyx [00:04:09]: Let's talk about TechStack behind that. But okay, I'm going to rewind all the way to when you guys started this project. You were all the way in Sydney? Yeah. Well, Sydney, Australia for me.Micah [00:04:19]: George was an SF, but he's Australian, but he moved here already. Yeah.swyx [00:04:22]: And I remember I had the Zoom call with you. What was the impetus for starting artificial analysis in the first place? You know, you started with public benchmarks. And so let's start there. We'll go to the private benchmark. Yeah.George [00:04:33]: Why don't we even go back a little bit to like why we, you know, thought that it was needed? Yeah.Micah [00:04:40]: The story kind of begins like in 2022, 2023, like both George and I have been into AI stuff for quite a while. In 2023 specifically, I was trying to build a legal AI research assistant. So it actually worked pretty well for its era, I would say. Yeah. Yeah. So I was finding that the more you go into building something using LLMs, the more each bit of what you're doing ends up being a benchmarking problem. So had like this multistage algorithm thing, trying to figure out what the minimum viable model for each bit was, trying to optimize every bit of it as you build that out, right? Like you're trying to think about accuracy, a bunch of other metrics and performance and cost. And mostly just no one was doing anything to independently evaluate all the models. And certainly not to look at the trade-offs for speed and cost. So we basically set out just to build a thing that developers could look at to see the trade-offs between all of those things measured independently across all the models and providers. 
Honestly, it was probably meant to be a side project when we first started doing it.

swyx [00:05:49]: Like, you didn't get together and say, hey, we're going to stop working on all this other stuff, this is going to be our main thing. When I first called you, I think you hadn't decided on starting a company yet.

Micah [00:05:58]: That's actually true. I don't even think we'd paused anything. George had a day job; I didn't quit working on my legal AI thing. It was genuinely a side project.

George [00:06:05]: We built it because we needed it as people building in the space, and thought, oh, other people might find it useful too. So we bought a domain, linked it to the Vercel deployment we had, and tweeted about it. But very quickly it started getting attention. Thank you, swyx, for doing an initial retweet and spotlighting this project that we released. And then very quickly, it was useful to others, but it became more useful as the number of models released accelerated. We had Mixtral 8x7B, and that's a fun one: an open source model that really changed the landscape and opened up people's eyes to other serverless inference providers, to thinking about speed, thinking about cost. So that was key, and it became more useful quite quickly. Yeah.

swyx [00:07:02]: What I love about talking to people like you who sit across the ecosystem is, well, I have theories about what people want, but you have data, and that's obviously more relevant. But I want to stay on the origin story a little bit more. When you started out, I think the status quo at the time was: every paper would come out and they would report their numbers versus competitor numbers, and that's basically it. And I remember I did the legwork. I think everyone has some knowledge.
I think there's some version of Excel sheet or a Google sheet where you just like copy and paste the numbers from every paper and just post it up there. And then sometimes they don't line up because they're independently run. And so your numbers are going to look better than... Your reproductions of other people's numbers are going to look worse because you don't hold their models correctly or whatever the excuse is. I think then Stanford Helm, Percy Liang's project would also have some of these numbers. And I don't know if there's any other source that you can cite. The way that if I were to start artificial analysis at the same time you guys started, I would have used the Luther AI's eval framework harness. Yup.Micah [00:08:06]: Yup. That was some cool stuff. At the end of the day, running these evals, it's like if it's a simple Q&A eval, all you're doing is asking a list of questions and checking if the answers are right, which shouldn't be that crazy. But it turns out there are an enormous number of things that you've got control for. And I mean, back when we started the website. Yeah. Yeah. Like one of the reasons why we realized that we had to run the evals ourselves and couldn't just take rules from the labs was just that they would all prompt the models differently. And when you're competing over a few points, then you can pretty easily get- You can put the answer into the model. Yeah. That in the extreme. And like you get crazy cases like back when I'm Googled a Gemini 1.0 Ultra and needed a number that would say it was better than GPT-4 and like constructed, I think never published like chain of thought examples. 32 of them in every topic in MLU to run it, to get the score, like there are so many things that you- They never shipped Ultra, right? That's the one that never made it up. Not widely. Yeah. Yeah. Yeah. I mean, I'm sure it existed, but yeah. 
So we were pretty sure that we needed to run them ourselves, and run them in the same way across all the models. And we were also certain from the start that you couldn't look at those in isolation; you needed to look at them alongside the cost and performance stuff. Yeah.

swyx [00:09:24]: Okay, a couple of technical questions. I mean, obviously I also thought about this, and I didn't do it because of cost. Did you not worry about costs? Were you funded already? Clearly not, but you know.

Micah [00:09:36]: No, we definitely weren't at the start. We were paying for it personally. That's a lot of money. Well, the numbers weren't nearly as bad a couple of years ago. So we certainly incurred some costs, but we were probably on the order of hundreds of dollars of spend across all the benchmarking that we were doing. So, nothing. It was kind of fine. These days that's gone up an enormous amount, for a bunch of reasons that we can talk about. But it wasn't that bad, because remember that the number of models we were dealing with was hardly any, and the complexity of what we wanted to do to evaluate them was a lot less. We were just asking some Q&A-type questions, and one specific thing is that for a lot of evals initially, we were just sampling an answer: what's the answer to this? We'd go to the answer directly, without letting the models think. We weren't even doing chain-of-thought stuff initially. And that was the most useful way to get some results at the time. Yeah.

swyx [00:10:33]: And so for people who haven't done this work, literally parsing the responses is a whole thing, right?
Because the models can answer any way they see fit, and sometimes they actually have the right answer but return it in the wrong format, and they'll get a zero for that unless you work it into your parser. And that involves more work. But there's an open question whether you should give points for not following your instructions on the format.

Micah [00:11:00]: It depends what you're looking at, right? If you're trying to see whether it can solve a particular type of reasoning problem, and you don't want to test its ability to do answer formatting at the same time, then you might want to use an LLM-as-answer-extractor approach to make sure you get the answer out no matter how it's formatted. But these days it's mostly less of a problem. If you instruct a model and give it examples of what the answers should look like, it can get the answers into your format, and then you can do a simple regex.

swyx [00:11:28]: Yeah. And then there are other questions around, I guess, multiple choice: sometimes there's a bias towards the first answer, so you have to randomize the options. All these nuances. Once you dig into benchmarks, you're like, I don't know how anyone believes the numbers on any of these things. It's such dark magic.

Micah [00:11:47]: You've also got the different degrees of variance in different benchmarks, right? Yeah. So if you run a four-option multiple-choice eval on a modern reasoning model, at the temperatures suggested by the labs for their own models, the variance you can see is pretty enormous if you only do a single run of it, especially if it has a small number of questions. So one of the things that we do is run an enormous number of repeats of all of our evals when we're developing new ones and doing upgrades to our Intelligence Index to bring in new things.
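The mechanics walked through here (regex answer extraction, shuffling options to wash out position bias, and repeating runs until the confidence interval tightens) can be sketched as a few small helpers. This is a generic illustration under assumed formats ("Answer: C" style responses, A-D options), not anyone's production harness:

```python
import math
import random
import re

def extract_choice(response: str):
    """Pull a multiple-choice letter out of free-form model output.

    Tries a strict "Answer: C" style pattern first, then falls back to a
    lone letter on its own line. Returns None if nothing matches, which a
    harness can grade as incorrect or route to an LLM extractor.
    """
    patterns = [
        r"answer\s*(?:is|:)\s*\(?([A-D])\)?",  # "Answer: C" / "the answer is (C)"
        r"^\s*\(?([A-D])\)?[.)]?\s*$",          # a line containing just "C"
    ]
    for pat in patterns:
        matches = re.findall(pat, response, flags=re.IGNORECASE | re.MULTILINE)
        if matches:
            return matches[-1].upper()  # take the last match, after any reasoning
    return None

def shuffle_choices(question: str, choices: list, answer_idx: int, seed: int):
    """Randomize option order to counter first-position bias.

    Returns the formatted prompt and the letter that is now correct.
    """
    order = list(range(len(choices)))
    random.Random(seed).shuffle(order)
    letters = "ABCD"
    prompt = question + "\n" + "\n".join(
        f"{letters[i]}. {choices[j]}" for i, j in enumerate(order)
    )
    return prompt, letters[order.index(answer_idx)]

def runs_needed(per_run_std: float, target_half_width: float, z: float = 1.96) -> int:
    """Smallest repeat count so the 95% CI on the mean score is +/- target,
    under a normal approximation with independent runs."""
    return math.ceil((z * per_run_std / target_half_width) ** 2)
```

For example, a benchmark whose single-run score wobbles with a standard deviation of 2 points needs 16 repeats before the 95% interval tightens to plus or minus one point.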
That way we can dial in the right number of repeats to get to the 95% confidence intervals we're comfortable with, so that when we pull it all together, we can be confident in the Intelligence Index to at least as tight as plus or minus one point at 95% confidence. Yeah.

swyx [00:12:32]: And, again, that just adds a straight multiple to the cost. Oh, yeah.

George [00:12:37]: So that's one of many reasons that cost has gone up a lot more than linearly over the last couple of years. We report a cost to run the Artificial Analysis Intelligence Index on our website, and currently that assumes one repeat, because we want it to reflect the weighting of the index. But our actual cost is a lot higher than what we report there, because of the repeats.

swyx [00:13:03]: Yeah. And probably this is true, but just checking: you don't have any special deals with the labs, they don't discount it, you just pay out of pocket or out of your customer funds? Oh, there is a mix. So the issue is that sometimes they may give you a special endpoint, which is... Ah, 100%.

Micah [00:13:21]: Yeah, exactly. So we laser-focus, in everything we do, on having the best independent metrics and making sure that no one can manipulate them in any way. There are quite a lot of processes we've developed over the last couple of years to make that true. For the one you bring up right here: if we're working with a lab and they're giving us a private endpoint to evaluate a model, it is totally possible that what's sitting behind that black box is not the same as what they serve on a public endpoint. We're very aware of that. We have what we call a mystery shopper policy.
And we're totally transparent with all the labs we work with about this: we will register accounts not on our own domain and run both intelligence evals and performance benchmarks... Yeah, that's the job. ...without them being able to identify us. And no one's ever had a problem with that. Because a thing that turns out to actually be quite a good dynamic in the industry is that they all want to believe that none of their competitors could manipulate what we're doing either.

swyx [00:14:23]: That's true. I never thought about that. I was in the database industry prior, and there's a lot of shenanigans around benchmarking, right? So I'm just kind of going through the mental laundry list. Did I miss anything else in this category of potential shenanigans?

Micah [00:14:36]: I mean, okay, the biggest one that I'll bring up is more of a conceptual one, actually, than direct shenanigans. It's that the things that get measured become the things that get targeted by the labs in what they're trying to build, right? Exactly. Now, that doesn't mean anything that we should really call shenanigans. I'm not talking about training on the test set. But if you know that you're going to be graded on a particular thing, as a researcher there are a whole bunch of things you can do to try to get better at that thing, which hopefully are going to be helpful for the wide range of ways actual users want to use the thing you're building. But they will not necessarily do that. So, for instance, the models are exceptional now at answering competition math problems. There is some relevance of that type of reasoning, that type of work, to, say, how we might use modern coding agents. But it's clearly not one-for-one.
So the thing we have to be aware of is that once an eval becomes the thing everyone's looking at, scores can get better on it without that reflecting the overall generalized intelligence of these models getting better. That has been true for the last couple of years, and it'll be true for the next couple of years. There's no silver bullet to defeat that, other than building new stuff to stay relevant and measure the capabilities that matter most to real users. Yeah.

swyx [00:15:58]: And we'll cover some of the new stuff that you guys are building as well, which is cool. You used to just run other people's evals, but now you're coming up with your own. And obviously that's a necessary path once you're at the frontier and you've exhausted all the existing evals. I think the next point in history I have for you is AI Grant, which you guys decided to join, and you moved here. What was it like? I think you were in, like, batch two? Batch four. Batch four. Okay.

Micah [00:16:26]: I mean, it was great. Nat and Daniel are obviously great, and it's a really cool group of companies that we were in AI Grant alongside. It was really great to get Nat and Daniel on board. Obviously they've done a whole lot of great work in the space with a lot of leading companies, and they were extremely aligned with the mission of what we were trying to do. We're not quite typical of a lot of the other AI startups that they've invested in, and they were very much here for the mission of what we want to do.

swyx [00:16:53]: Did they give any advice that really affected you in some way, or were any of the events very impactful? That's an interesting question.

Micah [00:17:03]: I mean, I remember fondly a bunch of the speakers who came and did fireside chats at AI Grant.

swyx [00:17:09]: Which is also, like, a crazy list. Yeah.

George [00:17:11]: Oh, totally.
There was something about speaking to Nat and Daniel about the challenges of working through a startup: working through the questions that don't have clear answers, how to work through those methodically, and just working through the hard decisions. They've been great mentors to us as we've built Artificial Analysis. Another benefit for us was that the other companies in our batch, and in AI Grant generally, are pushing the capabilities of what AI can do at this time. Being in contact with them, and making sure Artificial Analysis is useful to them, has been fantastic for working out how we should build out Artificial Analysis to keep being useful to people building on AI.

swyx [00:17:59]: I think to some extent I'm of mixed opinion on that one, because your target audience is not people in AI Grant, who are obviously at the frontier. Do you disagree?

Micah [00:18:09]: To some extent. But a lot of what the AI Grant companies are doing is taking capabilities coming out of the labs and trying to push the limits of what they can do, across the entire stack, for building great applications. Which actually makes some of them pretty archetypal power users of Artificial Analysis, and some of the people with the strongest opinions about what we're doing well, what we're not doing well, and what they want to see next from us. Because when you're building any kind of AI application now, chances are you're using a whole bunch of different models. You're maybe switching reasonably frequently between different models for different parts of your application, to optimize what you're able to do with them at an accuracy level and to get better speed and cost characteristics. So many of them are not commercial customers of ours; we don't charge for all the data on the website.
They are absolutely some of our power users.

swyx [00:19:07]: So let's talk about the evals as well. You started out with the general MMLU and GPQA stuff. What's next? How do you build up to the overall index? What was in V1, and how did you evolve it?

Micah [00:19:22]: Okay, so first, just as background: we're talking about the Artificial Analysis Intelligence Index, which is our synthesis metric, pulled together currently from 10 different eval datasets, to give what we're pretty confident is the best single number to look at for how smart the models are. Obviously it doesn't tell the whole story; that's why we publish the whole website of charts, to dive into every part of it and look at the trade-offs. But it's the best single number. So right now it's got a bunch of Q&A-type datasets that have been very important to the industry, like the couple you just mentioned. It's also got a couple of agentic datasets, our own long-context reasoning dataset, and some other use-case-focused stuff. As time goes on, the things we're most interested in, the capabilities that are becoming more important for AI and that developers care about, are first around agentic capabilities. Surprise, surprise: we're all loving our coding agents, and how the models perform there, and doing similar things for different types of work, is really important to us. Linking to use cases, to economically valuable use cases, is extremely important to us. And then we've got some of the...
These things that the models still struggle with, like working really well over long contexts, are not going to go away as specific capabilities and use cases that we need to keep evaluating.

swyx [00:20:46]: But I guess one thing I was driving at was the V1 versus the V2, and how it changed over time.

Micah [00:20:53]: Like how we've changed the index to get where we are.

swyx [00:20:55]: And I think that reflects the change in the industry, right? So that's a nice way to tell that story.

Micah [00:21:00]: Well, V1 would be completely saturated right now by almost every model coming out, because doing things like writing the Python functions in HumanEval is now pretty trivial. It's easy to forget, actually, how much progress has been made in the last two years. We obviously play the game constantly of today's version versus last week's version and the week before, and all the small changes in the horse race between the current frontier, and who has the best smaller-than-10B model right now this week. And that's very important to a lot of developers and people, especially in this particular city of San Francisco. But when you zoom out a couple of years, literally most of what we were doing to evaluate the models then would be 100% solved by even pretty small models today. And that's been one of the key things, by the way, that's driven down the cost of intelligence at every tier of intelligence; we can talk about that more in a bit. So V1, V2, V3: we made things harder, we covered a wider range of use cases, and we tried to get closer to things developers care about, as opposed to just the Q&A-type stuff that MMLU and GPQA represented. Yeah.

swyx [00:22:12]: I don't know if you have anything to add there, or we could just go right into showing people the benchmark, looking around, and asking questions about it. Yeah.

Micah [00:22:21]: Let's do it. Okay.
This would be a pretty good way to chat about a few of the new things we've launched recently. Yeah.

George [00:22:26]: And, I think, a little bit about the direction that we want to take it, where we want to push benchmarks. Currently the Intelligence Index and evals focus a lot on raw intelligence, but we want to diversify how we think about intelligence. New evals that we've built and partnered on focus on topics like hallucination, and there are a lot of topics that I think are not covered by the current eval set that should be. And so we want to bring those forward. But before we get into that...

swyx [00:23:01]: And so for listeners, just as a timestamp: right now number one is Gemini 3 Pro High, followed by Claude Opus at 70, GPT-5.1 High (you don't have 5.2 yet), and Kimi K2 Thinking. Wow, still hanging in there. So those are the top four. That will date this podcast quickly. Yeah. I mean, I love it. No, 100%. Look back this time next year and go, how cute. Yep.

George [00:23:25]: Totally. A quick view of that is... okay, there's a lot. I love this chart. Yeah.

Micah [00:23:30]: This is such a favorite, right? In almost every talk that George or I give at conferences, we put this one up first, to situate where we are in this moment in history. This, I think, is the visual version of what I was saying before about zooming out and remembering how much progress there's been. If we go back to just over a year ago, before o1, before Claude Sonnet 3.5, we didn't have reasoning models or coding agents as a thing, and the game was very, very different. If we go back even a little bit before then, we're in the era where, when you look at this chart, OpenAI was untouchable for well over a year.
And, I mean, you would remember that time period well: there were very open questions about whether AI was going to be competitive, full stop, whether OpenAI would just run away with it, whether we would have a few frontier labs and no one else would really be able to do anything other than consume their APIs. I am quite happy overall that the world we've ended up in is one that is... Multi-model. Absolutely. And strictly more competitive every quarter over the last few years. Yeah. This year has been insane. Yeah.

George [00:24:42]: You can see it. This chart with everything added is hard to read currently; there are so many dots on it. But I think it reflects a little bit of what we felt, how crazy it's been.

swyx [00:24:54]: Why 14 as the default? Is that a manual choice? Because you've got ServiceNow in there, which is a less traditional name. Yeah.

George [00:25:01]: It's the models that we're highlighting by default in our charts, in our Intelligence Index. Okay.

swyx [00:25:07]: So you just have a manually curated list of stuff.

George [00:25:10]: Yeah, that's right. But something that I actually don't think every Artificial Analysis user knows is that you can customize our charts and choose which models are highlighted. And so if we take off a few names, it gets a little easier to read.

swyx [00:25:25]: Yeah, a little easier to read. Totally. But I love that you can see the o1 jump. Look at that: September 2024. And the DeepSeek jump. Yeah.

George [00:25:34]: Which got close to OpenAI's leadership. They were so close. I think, yeah, we remember that moment. Around this time last year, actually.

Micah [00:25:44]: Yeah, I agree. Well, it was a couple of weeks later. It was Boxing Day in New Zealand when DeepSeek V3 came out. And we'd been tracking DeepSeek, and a bunch of the other global players that were less known, over the second half of 2024, and had run evals on the earlier models.
I very distinctly remember Boxing Day in New Zealand, because I was with family for Christmas, running the evals and getting back result by result on DeepSeek V3. This was the first of their V3 architecture, the 671B MoE.

Micah [00:26:19]: And we were very, very impressed. That was the moment where we were sure that DeepSeek was no longer just one of many players, but had jumped up to be a real contender. The world really noticed when they followed that up with the RL working on top of V3, with R1 succeeding a few weeks later. But the groundwork for that was absolutely laid with an extremely strong base model, completely open weights, which we had as the best open-weights model. So yeah, that's the thing that really jumped out at us on Boxing Day last year.

George [00:26:48]: Boxing Day is the day after Christmas, for those not familiar.

swyx [00:26:55]: I'm from Singapore; a lot of us remember Boxing Day for a different reason, for the tsunami that happened. Oh, of course. Yeah, but that was a long time ago. So this is the rough pitch of AAQI. Is it A-A-Q-I or A-A-I-I? I-I. Okay. Good memory, though.

Micah [00:27:11]: Once upon a time, we did call it the Quality Index, and we would talk about quality, performance, and price, but we changed it to intelligence.

George [00:27:20]: There have been a few naming changes. We added hardware benchmarking to the site, so benchmarks at a system level, and so we changed our throughput metric to what we now call output speed, because throughput makes sense at a system level, so we took that name.

swyx [00:27:32]: Take me through more charts. What should people know? Obviously the way you look at the site is probably different from how a beginner might look at it.

Micah [00:27:42]: Yeah, that's fair. There's a lot of fun stuff to dive into.
Maybe we can skip past the standard stuff; we have lots and lots of evals. The interesting ones to talk about today are a few of our recent launches that probably not many people will be familiar with yet. The first of those is our Omniscience Index. This one is a little bit different from most of the intelligence evals that we've run. We built it specifically to look at the embedded knowledge in the models and to test hallucination, by looking at, when the model doesn't know the answer, meaning it's not able to get it correct, what its probability is of saying "I don't know" versus giving an incorrect answer. So the metric that we use for Omniscience goes from negative 100 to positive 100, because we simply take off a point if you give an incorrect answer to the question. We're pretty convinced that this is an example of where it makes the most sense to do that, because it's strictly more helpful to say "I don't know" instead of giving a wrong answer to a factual knowledge question. And one of our goals is to shift the incentive that evals create for models, and the labs creating them, to get higher scores. Almost every eval across all of AI up until this point has been graded by simple percentage correct as the main metric, the main thing that gets hyped. And so you should take a shot at everything; there's no incentive to say "I don't know." So we changed that for this one.

swyx [00:29:22]: I think there's a general field of calibration as well: the confidence in your answer versus the rightness of the answer. Yeah, we completely agree.

George [00:29:31]: On that, one reason that we didn't put that into this index is that we think the way to do that is not to ask the models how confident they are.

swyx [00:29:43]: I don't know, maybe it might be, though. You put it in, like, a JSON field, say "confidence," and maybe it spits out something. Yeah.
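The scoring rule Micah describes (a point for a correct answer, nothing for "I don't know," minus a point for a wrong answer, scaled to a -100 to +100 range) can be written directly; a sketch of the arithmetic, not their actual grader:

```python
def omniscience_score(results: list) -> float:
    """Map graded outcomes to a -100..+100 scale:
    correct = +1, declined ("I don't know") = 0, incorrect = -1,
    averaged over all questions and scaled by 100.
    """
    points = {"correct": 1, "declined": 0, "incorrect": -1}
    return 100 * sum(points[r] for r in results) / len(results)
```

Under this rule a model that guesses wrong on everything scores -100, while one that declines whatever it doesn't know can do no worse than 0, which is exactly the incentive shift being described.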
You know, we have done a few evals podcasts over the years, and when we did one with Clémentine of Hugging Face, who maintains the open-source leaderboard, this was one of her top requests: some kind of hallucination slash confidence-calibration thing. And so, hey, this is one of them.

Micah [00:30:05]: And, like anything we do, it's not a perfect metric or the whole story of everything you might think of as hallucination. But it's pretty useful and has some interesting results. One of the things that we saw in the hallucination rate is that Anthropic's Claude models are at the very left-hand side here, with the lowest hallucination rates out of the models we've evaluated Omniscience on. That is an interesting fact. I think it probably correlates with a lot of the previously not-really-measured vibes stuff that people like about some of the Claude models. Is the dataset public, or is there a held-out set? There's a held-out set for this one. We have published a public test set, but we've only published 10% of it. The reason is that for this one specifically it would be very, very easy to have data contamination, because it is just factual knowledge questions. We'll update it over time to also prevent that, but yeah, we've kept most of it held out so that we can keep it reliable for a long time. It also lets us do a bunch of really cool things, including breaking it down quite granularly by topic. We've got some of that disclosed on the website publicly right now, and there's lots more coming in terms of our ability to break out very specific topics. Yeah.

swyx [00:31:23]: Let's dwell a little bit on this hallucination one. I noticed that Haiku hallucinates less than Sonnet, which hallucinates less than Opus. Would that be the other way around in a normal capability environment? I don't know.
What do you make of that?

George [00:31:37]: One interesting aspect is that we've found there's not really a strong correlation between intelligence and hallucination. That's to say, how smart the models are in a general sense isn't correlated with their ability, when they don't know something, to say that they don't know. It's interesting that Gemini 3 Pro Preview was a big leap over Gemini 2.5 Flash and 2.5 Pro. Let me add Pro quickly here.

swyx [00:32:07]: I bet Pro's really good. Actually, no, I meant the GPT Pros.

George [00:32:12]: Oh yeah.

swyx [00:32:13]: Because the GPT Pros are rumored (we don't know for a fact) to be something like eight runs with an LLM judge on top. Yeah.

George [00:32:20]: So we saw a big jump in accuracy: this is just the percent that they get correct, and Gemini 3 Pro knew a lot more than the other models. So, a big jump in accuracy, but relatively no change between the Google Gemini models, between releases, in the hallucination rate. Exactly. And so it's likely due to just a different post-training recipe in the Claude models that's driven this. Yeah.

Micah [00:32:45]: You can partially blame us, and how we define intelligence, for having until now not counted hallucination as a negative in the way we think about intelligence. And that's what we're changing.

swyx [00:32:56]: I know many smart people who are confidently incorrect.

George [00:33:02]: Look at that, that is very human. Very true. And there's a time and a place for that. Our view is that hallucination rate makes sense in this context, where it's around knowledge. But in many cases people want the models to hallucinate, to have a go. Often that's the case in coding, or when you're trying to generate newer ideas.
One eval that we added to Artificial Analysis is Critical Point, and it's really hard physics problems. Okay.

swyx [00:33:32]: And is it sort of a HumanEval type, or something different, like a FrontierMath type?

George [00:33:37]: It's not dissimilar to FrontierMath. These are research questions that academics in the physics world would be able to answer, but models really struggle with. The top score here is only around 9%.

swyx [00:33:51]: And the people that created this, like Minway and, actually, Ofir, who was behind SWE-bench... what organization is this? Oh, is it Princeton?

George [00:34:01]: It's a range of academics from different academic institutions, really smart people. They talked about how they turn the models up in terms of temperature, as high a temperature as they can, when they're trying to explore new ideas in physics with the model as a thought partner, just because they want the models to hallucinate. Sometimes it surfaces something new. Yeah, exactly.

swyx [00:34:21]: So not right in every situation, but I think it makes sense to test hallucination in scenarios where it makes sense. Also, the obvious question: this is one of many. Every lab has a system card that shows some kind of hallucination number, and you've chosen not to endorse those; you've made your own. And that's a choice. In some sense, the rest of Artificial Analysis is public benchmarks that other people can independently rerun; you provide that as a service here. You have to fight the "well, who are we to do this?"
And your answer is that you have a lot of customers, you know. But, I guess, how do you square that?

Micah [00:35:08]: I think for hallucination specifically, there are a bunch of different things that you might reasonably care about, and that you'd measure quite differently. We've called this the Omniscience hallucination rate; we're not trying to declare that it's, like, humanity's last hallucination eval. You could have some interesting naming conventions with all this stuff. The bigger-picture answer, and something I actually wanted to mention just as George was explaining Critical Point as well, is that, going forward, we are building evals internally, and we're partnering with academia and with AI companies to build great evals. We have pretty strong views, in various ways, for different parts of the AI stack, on where there are things that are not being measured well, or things developers care about that should be measured more and better. And we intend to be doing that. We're not obsessed with doing everything entirely within our own team. Critical Point is a cool example, where we were a launch partner working with academia. We've got some partnerships coming up with a couple of leading companies; those ones, obviously, we have to be careful with on some of the independence stuff, but with the right disclosure we're completely comfortable with that. And a lot of the labs have released great datasets in the past that we've used to great success independently. So between all of those approaches, we're going to be releasing more stuff in the future. Cool.

swyx [00:36:26]: Let's cover the last couple, and then I want to talk about your trends analysis stuff. Totally.

Micah [00:36:31]: Actually, I have one little factoid on Omniscience.
If you go back up to accuracy on Omniscience: an interesting thing about this accuracy metric is that it tracks, more closely than anything else we measure, the total parameter count of models. That makes a lot of sense intuitively, right? Because this is a knowledge eval. It's the pure knowledge metric; we're not looking at the index and the hallucination-rate stuff, which we think is much more about how the models are trained. This is just: what facts did they recall? And yeah, it tracks parameter count extremely closely. Okay.

swyx [00:37:05]: What's the rumored size of Gemini 3 Pro? And to be clear, not confirmed by any official source, just rumors. But rumors do fly around. I hear all sorts of numbers; I don't know what to trust.

Micah [00:37:17]: So if you draw the line on Omniscience accuracy versus total parameters, with all the open-weights models on it, you can squint and see that the leading frontier models right now are likely quite a lot bigger than the one trillion parameters that the open-weights models cap out at. And there's an interesting extra data point that Elon Musk revealed recently about xAI: three trillion parameters for Grok 3 and 4, six trillion for Grok 5, though that's not out yet. Take those together, have a look, and you might reasonably form a view that there's a pretty good chance Gemini 3 Pro is bigger than that, that it could be in the 5 to 10 trillion parameter range. To be clear, I have absolutely no idea; that's just where you'd land based on this chart. Yeah.

swyx [00:38:07]: And to some extent, I actually kind of discourage people from guessing too much, because what does it really matter? As long as they can serve it at a sustainable cost, that's about it.
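Micah's squint-at-the-line exercise is an ordinary least-squares fit of accuracy against log parameter count, inverted to read off an implied size. The data points below are invented placeholders purely to make the sketch runnable; none of them are Artificial Analysis numbers:

```python
import math

# Hypothetical (total params, knowledge-accuracy %) pairs, for illustration only.
points = [(17e9, 12.0), (120e9, 22.0), (671e9, 33.0), (1000e9, 36.0)]

# Least-squares fit: accuracy ~ a * log10(params) + b
xs = [math.log10(p) for p, _ in points]
ys = [acc for _, acc in points]
n = len(points)
xbar, ybar = sum(xs) / n, sum(ys) / n
a = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
b = ybar - a * xbar

def implied_params(accuracy: float) -> float:
    """Invert the fit: what total parameter count would this accuracy imply?"""
    return 10 ** ((accuracy - b) / a)
```

Plugging a frontier model's accuracy into `implied_params` is exactly the kind of extrapolation Micah hedges about: it assumes the log-linear trend holds well past the biggest open-weights models on the chart.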
Like, yeah, totally.

George [00:38:17]: They've also got different incentives in play compared to open weights models, which are aiming to support others in self-deployment. For the labs doing inference at scale, I think it's less about total parameters in many cases when thinking about inference costs, and more about the number of active parameters. And so there's a bit of an incentive towards larger, sparser models. Agreed.

Micah [00:38:38]: Understood. Yeah. Great. I mean, obviously if you're a developer or a company using these things, exactly as you say, it doesn't matter. You should be looking at all the different ways that we measure intelligence, at the cost to run the index number, and at the different ways of thinking about token efficiency and cost efficiency based on the list prices, because that's all that matters.

swyx [00:38:56]: It's not as good for the content creator rumor mill, where I can say, oh, GPT-4 is this small circle, look, GPT-5 is this big circle. That used to be a thing for a while. Yeah.

Micah [00:39:07]: But that is, on its own, actually a very interesting one, right? Chances are the last couple of years haven't seen a dramatic scaling up in the total size of these models. And so there's a lot of room to go up properly in total model size, especially with the upcoming hardware generations. Yes.

swyx [00:39:29]: So, you know, taking off my shitposting hat for a minute. At the same time, I do feel like, especially coming back from Europe, people do feel like Ilya is probably right that the paradigm doesn't have many more orders of magnitude to scale out, and therefore we need to start exploring at least a different path. GDPval, I think, is only about a month or so old. I was also very positive on it when it first came out. I actually talked to Tejal, who was the lead researcher on that. Oh, cool.
And you have your own version.

George [00:39:59]: It's a fantastic dataset. Yeah.

swyx [00:40:01]: And maybe we'll recap for people who are still out of it: it's 44 occupations chosen by some kind of GDP cutoff, meant to represent broad white-collar work that is not just coding. Yeah.

Micah [00:40:12]: Each of the tasks has a whole bunch of detailed instructions, and input files for a lot of them. Within that, the 44 are divided into about 220, maybe 225, subtasks, which are the level that we run through the agent. And yeah, they're really interesting. I will say that it doesn't necessarily capture all the stuff that people do at work. No eval is perfect; there are always going to be more things to look at, largely because in order to define the tasks well enough that you can run them, they need to have only a handful of input files and very specific instructions. And so I think the easiest way to think about them is as quite hard take-home exam tasks that you might do in an interview process.

swyx [00:40:56]: Yeah, for listeners, it is no longer just a long prompt. It's like, well, here's a zip file with a spreadsheet or a PowerPoint deck or a PDF; go nuts and answer this question.

George [00:41:06]: OpenAI released a great dataset, and they released a good paper which looks at performance across the different web chatbots on the dataset. It's a great paper; I encourage people to read it. What we've done is taken that dataset and turned it into an eval that can be run on any model. So we created a reference agentic harness that can run the models on the dataset, and then we developed an evaluator approach to compare outputs. It's AI-enabled, so it uses Gemini 3 Pro Preview to compare results, which we tested pretty comprehensively to ensure that it's aligned with human preferences.
One data point there is that even with Gemini 3 Pro as the evaluator, Gemini 3 Pro interestingly doesn't actually do that well on the eval itself. So that's kind of a good example of what we've done in GDPval AA.

swyx [00:42:01]: Yeah, the thing that you have to watch out for with LLM-as-judge is self-preference, where models usually prefer their own output; and in this case, it was not so. Totally.

Micah [00:42:08]: I think the way that we're thinking about the places where it makes sense to use an LLM-as-judge approach now is quite different to some of the early LLM-as-judge stuff a couple of years ago, because some of that, and MT-Bench was a great project that was a good example of this a while ago, was about judging conversations and a lot of style-type stuff. Here, the task the grading model is doing is quite different to the task of taking the test. When you're taking the test, you've got all of the agentic tools you're working with, the code interpreter and web search, the file system, to go through many, many turns to try to create the documents. Then on the other side, when we're grading, we're running the outputs through a pipeline to extract visual and text versions of the files so we can provide them to Gemini, and we're providing the criteria for the task and getting it to pick which of two potential outputs more effectively meets those criteria. It turns out that it's just very, very good at getting that right, and it matched human preference a lot of the time. I think that's because it's got the raw intelligence, but combined with the correct representation of the outputs, the fact that the outputs were created with an agentic task that is quite different to the way the grading model works, and the fact that we're comparing against criteria, not just zero-shot asking the model to pick which one is better.

swyx [00:43:26]: Got it. Why is this an Elo?
And not a percentage, like GDPval?

George [00:43:31]: So the outputs look like documents, and there are video outputs or audio outputs from some of the tasks. It has to make a video? Yeah, for some of the tasks.

swyx [00:43:43]: What task is that?

George [00:43:45]: I mean, it's in the dataset. Like be a YouTuber? It's a marketing video.

Micah [00:43:49]: Oh, wow. What? The model has to go find clips on the internet and try to put it together. The models are not that good at doing that one for now, to be clear. It's pretty hard to do that with a code editor. I mean, the computer-use stuff doesn't work quite well enough, and so on, but yeah.

George [00:44:02]: And so there's no ground truth, necessarily, to compare against to work out percentage correct. It's hard to come up with correct or incorrect there. So it's on a relative basis, and we use an Elo approach to compare outputs from each of the models across the tasks.

swyx [00:44:23]: You know what you should do? You should pay a contractor, a human, to do the same task, and then give it an Elo, so you have a human baseline in there. I think what's helpful about GDPval, the OpenAI one, is that 50% is meant to be a normal human, and maybe a domain expert is higher than that, but 50% was the bar: if you've crossed 50, you are superhuman. Yeah.

Micah [00:44:47]: So we haven't grounded this score in that exactly. I agree that it can be helpful, but we wanted to generalize this to a very large number of models. That's one of the reasons that presenting it as an Elo is quite helpful; it allows us to add models, and it'll stay relevant for quite a long time. I also think it can be tricky comparing these exact tasks against human performance, because the way that you would go about them as a human is quite different to how the models would go about them. Yeah.

swyx [00:45:15]: I also liked that you included Llama 4 Maverick in there.
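The Elo aggregation George describes — turning pairwise judge verdicts into a relative scale — can be sketched like this. The model names and K-factor are illustrative assumptions, and a production system might fit a Bradley-Terry model over all comparisons at once rather than update sequentially:

```python
# Sketch: aggregate pairwise judge verdicts into Elo ratings.
# Each comparison is (model_a, model_b, winner) as decided by the grading model.

def expected(r_a: float, r_b: float) -> float:
    """Expected score of A against B under the standard Elo curve."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo(comparisons, k: float = 32.0, base: float = 1000.0):
    ratings = {}
    for a, b, winner in comparisons:
        ra = ratings.setdefault(a, base)
        rb = ratings.setdefault(b, base)
        sa = 1.0 if winner == a else 0.0   # actual score for A
        ea = expected(ra, rb)
        ratings[a] = ra + k * (sa - ea)    # winner moves up, loser down
        ratings[b] = rb + k * ((1.0 - sa) - (1.0 - ea))
    return ratings

# Hypothetical verdicts: model-x wins 5 of 6 head-to-head tasks.
wins = [("model-x", "model-y", "model-x")] * 5 + [("model-x", "model-y", "model-y")]
ratings = elo(wins)
```

A nice property, as Micah notes, is that new models can be added later without invalidating the scale, since ratings are relative rather than anchored to a fixed "percent correct".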
Is that like just one last, like...

Micah [00:45:20]: Well, no, no, no, it is the best model released by Meta. And so it makes it into the homepage default set, still, for now.

George [00:45:31]: Another inclusion that's quite interesting is that we also ran it across the latest versions of the web chatbots. And so we have...

swyx [00:45:39]: Oh, that's right.

George [00:45:40]: Oh, sorry.

swyx [00:45:41]: I, yeah, I completely missed that. Okay.

George [00:45:43]: No, not at all. So that's the one with the checkered pattern. So that is their harness, not yours, is what you're saying? Exactly. And what's really interesting is that if you compare, for instance, Claude 4.5 Opus using the Claude web chatbot, it performs worse than the model in our agentic harness. And in every case, the model performs better in our agentic harness than its web chatbot counterpart, the harness that they created.

swyx [00:46:13]: Oh, my backwards explanation for that would be that, well, it's meant for consumer use cases, and here you're pushing it for something else.

Micah [00:46:19]: The constraints are different, and the amount of freedom that you can give the model is different. Also, you have a cost goal. We let the models work as long as they want, basically. Yeah. Do you copy-paste manually into the chatbot? Yeah, that was how we got the chatbot reference runs. We're not going to be keeping those updated at quite the same scale as hundreds of models.

swyx [00:46:38]: Well, I don't know, talk to Browserbase; they'll automate it for you. I have thought about how we should turn these chatbot versions into an API, because they are legitimately different agents in themselves. Yes. Right. Yeah.

Micah [00:46:53]: And that's grown a huge amount over the last year, right? Like the tools.
The tools that are available have actually diverged, in my opinion, a fair bit across the major chatbot apps, and the number of data sources that you can connect them to has gone up a lot, meaning that your experience and the way you're using the model is more different than ever.

swyx [00:47:10]: What tools and what data connections come to mind? What's interesting, what's notable work that people have done?

Micah [00:47:15]: Oh, okay. So my favorite example on this is that until very recently, I would argue that it was basically impossible to get an LLM to draft an email for me in any useful way. Because most times that you're sending an email, you're not just writing something for the sake of writing it. Chances are the context required is a whole bunch of historical emails. Maybe it's notes that you've made, maybe it's meeting notes, maybe it's pulling something from wherever you store stuff at work; so for me, Google Drive, OneDrive, and our Supabase databases if we need to do some analysis on some data. Preferably the model can be plugged into all of those things and can go do some useful work based on them. The thing I find most impressive currently, that I am somewhat surprised works really well in late 2025, is that I can have models use the Supabase MCP to query, read-only of course, and run a whole bunch of SQL queries to do pretty significant data analysis, and make charts and stuff, and read my Gmail and my Notion. Okay, you actually use that. That's good. Is that a Claude thing? To varying degrees, both ChatGPT and Claude right now. I would say that this stuff barely works right now, in fairness.

George [00:48:33]: Because people are actually going to try this after they hear it.
If you get an email from Micah, odds are it wasn't written by a chatbot.

Micah [00:48:38]: So, yeah, I think it is true that I have never actually sent anyone an email drafted by a chatbot. Yet.

swyx [00:48:46]: And so you can feel it, right? This time next year, we'll come back and see where it's going. Totally. Supabase shout-out; another famous Kiwi. I don't know if you've had any conversations with him about anything in particular on AI building and AI infra.

George [00:49:03]: We have had Twitter DMs with him, because we're quite big Supabase users and power users, and we probably do some things more manually than we should; their support line has been super friendly. One extra point regarding GDPval AA is that, on the basis of the overperformance of the models compared to the chatbots, we realized that, oh, the reference harness that we built actually works quite well on generalist agentic tasks. This proves it, in a sense. And the agent harness is very minimalist. I think it follows some of the ideas that are in Claude Code, and all that we give it is context management capabilities, a web search and web browsing tool, and a code execution environment. Anything else?

Micah [00:50:02]: I mean, we can equip it with more tools, but by default, yeah, that's it. For GDPval we give it a tool to view an image specifically, because the models can just use a terminal to pull stuff in text form into context, but to pull visual stuff into context, we had to give them a custom tool. But yeah, exactly.

George [00:50:21]: So it turned out that we created a good generalist agentic harness, and so we released it on GitHub yesterday. It's called Stirrup.
So if people want to check it out, it's a great base for building a generalist agent for more specific tasks.

Micah [00:50:39]: I'd say the best way to use it is git clone, and then have your favorite coding agent make changes to it to do whatever you want, because it's not that many lines of code and the coding agents can work with it super well.

swyx [00:50:51]: Well, that's nice for the community to explore and share and hack on. I think in other, similar environments, the Terminal-Bench guys have done Harbor. It's a bundle of: we need our minimal harness, which for them is Terminus, and we also need the RL environments, the Docker deployment thing, to run independently. I don't know if you've looked at Harbor at all. Is that a standard that people want to adopt?

George [00:51:19]: Yeah, we've looked at it from an evals perspective, and we love Terminal-Bench and host Terminal-Bench benchmarks on Artificial Analysis. We've looked at it from a coding agent perspective, but we could see it being a great basis for any kind of agent. I think where we're getting to is that these models have gotten smart enough, and have gotten better tools, such that they perform better when just given a minimalist set of tools and let run; let the model control the agentic workflow, rather than using another, more built-out framework that tries to dictate the flow. Awesome.

swyx [00:51:56]: Let's cover the Openness Index, and then let's go into the report stuff. So that's the last of the proprietary AA numbers, I guess. I don't know how you classify all these. Yeah.

Micah [00:52:07]: Or call it the last of the three new things that we're talking about from the last few weeks.
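The minimalist harness idea the two describe — a tiny tool set, with the model itself controlling the loop — can be sketched generically like this. This is not Stirrup's actual API; the tool names and the stubbed "model" are illustrative assumptions so the control flow is visible:

```python
# Generic minimalist agent loop: the model picks the next tool (or finishes),
# the harness only executes tools and accumulates observations.

TOOLS = {
    "web_search": lambda q: f"results for {q!r}",       # stand-in for a real search tool
    "run_code": lambda src: str(eval(src)),             # stand-in for a sandboxed executor
}

def agent(model, task: str, max_turns: int = 10) -> str:
    context = [("task", task)]
    for _ in range(max_turns):
        action, payload = model(context)        # the model drives the workflow
        if action == "finish":
            return payload
        observation = TOOLS[action](payload)    # harness just runs the chosen tool
        context.append((action, observation))
    return "max turns reached"

# Stub "model": search once, compute once, then finish with the last observation.
def stub_model(context):
    last = context[-1][0]
    if last == "task":
        return ("web_search", "artificial analysis stirrup")
    if last == "web_search":
        return ("run_code", "2 + 2")
    return ("finish", context[-1][1])

print(agent(stub_model, "demo"))
```

The design choice George describes maps to keeping `TOOLS` small and putting no workflow logic in the harness: all planning lives in the model call.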
Because, I mean, we do a mix of stuff, where some of what we do we open-source, and some proprietary stuff we don't always open-source. The long context reasoning dataset last year, we did open-source. And then of all the work on performance benchmarks across the site, some of them we're looking to open-source, but some of them we're constantly iterating on, and so on. So there's a huge mix across the site of stuff that is open source and not. That's AA-LCR, for people. Yeah, yeah.

swyx [00:52:41]: But let's talk about open—

Micah [00:52:42]: Let's talk about the Openness Index. This here is, call it, a new way to think about how open models are. We have for a long time tracked whether models are open weights and what the licenses on them are. And that's pretty useful; it tells you what you're allowed to do with the weights of a model. But there is this whole other dimension to how open models are that is pretty important and that we haven't tracked until now, and that's how much is disclosed about how the model was made. So: transparency about data, pre-training data and post-training data, and whether you're allowed to use that data, plus transparency about methodology and training code. Basically, those are the components. We bring them together to score an Openness Index for models, so that in one place you can get the full picture of how open models are.

swyx [00:53:32]: I feel like I've seen a couple of other people try to do this, but they're not maintained. I do think this matters. I don't know what the numbers mean, though; is there a max number? Is this out of 20?

George [00:53:44]: It's out of 18 currently. We've got an Openness Index page, but essentially you get points for being more open across these different categories, and the maximum you can achieve is 18.
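A points-based rubric like the one George describes can be sketched as a simple capped sum per category. The category names and per-category weights below are illustrative assumptions; the real 18-point rubric is defined on the Artificial Analysis Openness Index page:

```python
# Sketch of a capped points-per-category openness score (max 18 total).
# Hypothetical categories/weights — not the actual AA rubric.
CATEGORIES = {
    "weights_released": 3,
    "license_permissiveness": 3,
    "pretraining_data_transparency": 3,
    "posttraining_data_transparency": 3,
    "training_data_usable": 3,
    "methodology_and_code": 3,
}

def openness_index(scores: dict) -> int:
    """Sum awarded points, clamping each category to its maximum."""
    return sum(min(scores.get(cat, 0), cap) for cat, cap in CATEGORIES.items())

fully_open = openness_index({cat: 3 for cat in CATEGORIES})   # hits the 18-point cap
weights_only = openness_index({"weights_released": 3,
                               "license_permissiveness": 2})  # open weights, little disclosure
```

The point of the structure is the one Micah makes: an open weights release with no data or methodology disclosure lands well below the maximum.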
So Ai2, with their extremely open Olmo 3 32B Think model, is the leader, in a sense.

swyx [00:54:04]: What about Hugging Face?

George [00:54:05]: Oh, with their smaller model. It's coming soon. I think we need to run the intelligence benchmarks first to get it on the site.

swyx [00:54:12]: You can't have an openness index and not include Hugging Face. We love Hugging Face. We'll have that up very soon. I mean, you know, RefinedWeb and all that stuff, it's amazing. Or is it called FineWeb? FineWeb.

Micah [00:54:23]: Yeah, yeah, no, totally. Yep. One of the reasons this is cool, right, is that if you're trying to understand the holistic picture of the models and what you can do with all the stuff the company is contributing, this gives you that picture. And so we are going to keep it up to date, alongside all the models that we run the Intelligence Index on, on the site. It's just an extra view to understand.

swyx [00:54:43]: Can you scroll down to the trade-offs chart? Yeah, yeah, that one. This really matters, right? Obviously, because you can b