Tickets for AIE Miami and AIE Europe are live, with first wave speakers announced!

From pioneering software-defined networking to backing many of the most aggressive AI model companies of this cycle, Martin Casado and Sarah Wang sit at the center of the capital, compute, and talent arms race reshaping the tech industry. As partners at a16z investing across infrastructure and growth, they've watched venture and growth blur, model labs turn dollars into capability at unprecedented speed, and startups raise nine-figure rounds before monetization.

Martin and Sarah join us to unpack the new financing playbook for AI: why today's rounds are really compute contracts in disguise, how the "raise → train → ship → raise bigger" flywheel works, and whether foundation model companies can outspend the entire app ecosystem built on top of them. They also share what's underhyped (boring enterprise software), what's overheated (talent wars and compensation spirals), and the two radically different futures they see for AI's market structure.

We discuss:

* Martin's "two futures" fork: infinite fragmentation and new software categories vs. a small oligopoly of general models that consume everything above them
* The capital flywheel: how model labs translate funding directly into capability gains, then into revenue growth measured in weeks, not years
* Why venture and growth have merged: $100M–$1B hybrid rounds, strategic investors, compute negotiations, and complex deal structures
* The AGI vs. product tension: allocating scarce GPUs between long-term research and near-term revenue flywheels
* Whether frontier labs can out-raise and outspend the entire app ecosystem built on top of their APIs
* Why today's talent wars ($10M+ comp packages, $B acqui-hires) are breaking early-stage founder math
* Cursor as a case study: building up from the app layer while training down into your own models
* Why "boring" enterprise software may be the most underinvested opportunity in the AI mania
* Hardware and robotics: why the ChatGPT moment hasn't yet arrived for robots and what would need to change
* World Labs and generative 3D: bringing the marginal cost of 3D scene creation down by orders of magnitude
* Why public AI discourse is often wildly disconnected from boardroom reality and how founders should navigate the noise

Show Notes:

* "Where Value Will Accrue in AI: Martin Casado & Sarah Wang" - a16z show
* "Jack Altman & Martin Casado on the Future of Venture Capital"
* World Labs

Martin Casado
• LinkedIn: https://www.linkedin.com/in/martincasado/
• X: https://x.com/martin_casado

Sarah Wang
• LinkedIn: https://www.linkedin.com/in/sarah-wang-59b96a7
• X: https://x.com/sarahdingwang

a16z
• https://a16z.com/

Timestamps

00:00:00 – Intro: Live from a16z
00:01:20 – The New AI Funding Model: Venture + Growth Collide
00:03:19 – Circular Funding, Demand & "No Dark GPUs"
00:05:24 – Infrastructure vs Apps: The Lines Blur
00:06:24 – The Capital Flywheel: Raise → Train → Ship → Raise Bigger
00:09:39 – Can Frontier Labs Outspend the Entire App Ecosystem?
00:11:24 – Character AI & The AGI vs Product Dilemma
00:14:39 – Talent Wars, $10M Engineers & Founder Anxiety
00:17:33 – What's Underinvested? The Case for "Boring" Software
00:19:29 – Robotics, Hardware & Why It's Hard to Win
00:22:42 – Custom ASICs & The $1B Training Run Economics
00:24:23 – American Dynamism, Geography & AI Power Centers
00:26:48 – How AI Is Changing the Investor Workflow (Claude Cowork)
00:29:12 – Two Futures of AI: Infinite Expansion or Oligopoly?
00:32:48 – If You Can Raise More Than Your Ecosystem, You Win
00:34:27 – Are All Tasks AGI-Complete? Coding as the Test Case
00:38:55 – Cursor & The Power of the App Layer
00:44:05 – World Labs, Spatial Intelligence & 3D Foundation Models
00:47:20 – Thinking Machines, Founder Drama & Media Narratives
00:52:30 – Where Long-Term Power Accrues in the AI Stack

Transcript

Latent.Space - Inside AI's $10B+ Capital Flywheel — Martin Casado & Sarah Wang of a16z

[00:00:00] Welcome to Latent Space (Live from a16z) + Meet the Guests

[00:00:00] Alessio: Hey everyone, welcome to the Latent Space podcast, live from a16z. Uh, this is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space.

[00:00:08] swyx: Hey, hey, hey. Uh, and we're so glad to be on with you guys, also a top AI podcast, uh, Martin Casado and Sarah Wang. Welcome.

[00:00:16] Martin Casado: Very happy to be here, and welcome.

[00:00:17] swyx: Yes, uh, we love this office. We love what you've done with the place. Uh, the new logo is everywhere now. It's, it's still getting... takes a while to get used to, but it reminds me of like sort of a callback to a more ambitious age, which I think is kind of

[00:00:31] Martin Casado: definitely makes a statement.

[00:00:33] swyx: Yeah.

[00:00:34] Martin Casado: Not quite sure what that statement is, but it makes a statement.

[00:00:37] swyx: Uh, Martin, I go back with you to Netlify.

[00:00:40] Martin Casado: Yep.

[00:00:40] swyx: Uh, and, uh, you know, you created software-defined networking and all, all that stuff people can read up on in your background. Yep. Sarah, I'm newer to you.
Uh, you, you sort of started working together on AI infrastructure stuff.

[00:00:51] Sarah Wang: That's right. Yeah. Seven, seven years ago now.

[00:00:53] Martin Casado: Best growth investor in the entire industry.

[00:00:55] swyx: Oh, say more.

[00:00:56] Martin Casado: Hands down, there is, there is. [00:01:00] I mean, when it comes to AI companies, Sarah, I think, has done the most kind of aggressive, um, investment thesis around AI models, right? So, worked with Noam, Mira, Fei-Fei, and so just these frontier, kind of like large AI models. [00:01:15] I think, you know, Sarah's been the, the broadest investor. Is that fair?

[00:01:20] Venture vs. Growth in the Frontier Model Era

[00:01:20] Sarah Wang: No, I, well, I was gonna say, I think it's been a really interesting tag, tag team actually, just 'cause a lot of these big AI deals, not only are they raising a lot of money, um, it's still a tech founder bet, which obviously is inherently early stage. [00:01:33] But the resources,

[00:01:36] Martin Casado: So many, I

[00:01:36] Sarah Wang: was gonna say, the resources: one, they just grow really quickly. But then two, the resources that they need day one are kind of growth scale. So the hybrid tag team that we have is quite effective, I think.

[00:01:46] Martin Casado: What is growth these days? You know, you don't wake up if it's less than a billion. Like, it's, it's actually... no, it's a very interesting time in investing, because, like, you know, take like the Character round, right? [00:01:59] These tend to [00:02:00] be like pre-monetization, but the dollars are large enough that you need to have a larger fund, and the analysis, you know, because you've got lots of users, 'cause this stuff has such high demand, requires, you know, more of a numbers sophistication. And so most of these deals, whether it's us or other firms, on these large model companies, are like this hybrid between venture and growth.

[00:02:18] Sarah Wang: Yeah. Totally.
And I think, you know, stuff like BD, for example: you wouldn't usually need BD when you were seed stage trying to get to market. Biz devrel. Biz devrel, exactly. Okay. But like now... sorry, I'm,

[00:02:27] swyx: I'm not familiar. What, what, what does biz devrel mean for a venture fund? Because I know what biz devrel means for a company.

[00:02:31] Sarah Wang: Yeah.

[00:02:32] Compute Deals, Strategics, and the 'Circular Funding' Question

[00:02:32] Sarah Wang: You know, so a, a good example is, I mean, we talk about buying compute, but there's a huge negotiation involved there in terms of, okay, do you get equity for the compute? What, what sort of partner are you looking at? Is there a go-to-market arm to that? Um, and these are just things on this scale, hundreds of millions, you know, maybe [00:02:50] six months into the inception of a company. You just wouldn't have to negotiate these deals before.

[00:02:54] Martin Casado: Yeah. These large rounds are very complex now. Like in the past, if you did a Series A [00:03:00] or a Series B, like whatever, you're writing a 20 to a $60 million check and you call it a day. Now you normally have financial investors and strategic investors, and then the strategic portion always still goes with, like, these kind of large compute contracts, which can take months to do. [00:03:13] And so it's, it's very different times. I've been doing this for 10 years; I've never seen anything like this.

[00:03:19] swyx: Yeah. Do you have worries about the circular funding from all of these strategics?

[00:03:24] Martin Casado: I mean, listen, as long as the demand is there... like, the demand is there. Like, the problem with the internet is the demand wasn't there.

[00:03:29] swyx: Exactly. All right.
This, this is like the, the whole pyramid scheme bubble thing, where, like, as long as you mark to market on, like, the notional value of, like, these deals, fine, but, like, once it starts to chip away, it really... Well,

[00:03:41] Martin Casado: No, like, as, as long as there's demand. I mean, you know, this, this is like... a lot of these sound bites have already become kind of cliches, but they're worth saying, right? Like, during the internet days, like, we were, um, raising money to put fiber in the ground that wasn't used. And that's a problem, right? Because now you actually have a supply overhang.

[00:03:58] swyx: Mm-hmm.

[00:03:59] Martin Casado: And even in the [00:04:00] time of the, the internet, like, the supply and, and bandwidth overhang, even as massive as it was, and as massive as the crash was, only lasted about four years. [00:04:09] But we don't have a supply overhang. Like, there's no dark GPUs, right? I mean, and so, you know, circular or not, I mean, you know, if, if someone invests in a company, um, you know, they'll actually use the GPUs. And on the other side of it is the, is the actual customer. So I, I think it's a different time.

[00:04:25] Sarah Wang: I think the other piece, maybe just to add onto this, and I'm gonna quote Martin in front of him, but this is probably also a unique time in that, for the first time, you can actually trace dollars to outcomes. Yeah, right. Provided that scaling laws are, are holding, um, and capabilities are actually moving forward. [00:04:40] Because if you can translate dollars into a capability improvement, there's demand there, to Martin's point. But if that somehow breaks, you know, obviously that's an important assumption in this whole thing to make it work.
But you know, instead of investing dollars into sales and marketing, you're, you're investing into R&D to get to the capability, um, you know, increase. [00:04:59] And [00:05:00] that's sort of been the demand driver, because once there's an unlock there, people are willing to pay for it.

[00:05:05] Alessio: Yeah.

[00:05:06] Blurring Lines: Models as Infra + Apps, and the New Fundraising Flywheel

[00:05:06] Alessio: Is there any difference in how you build the portfolio now that some of your growth companies are, like, the infrastructure of the early stage companies? Like, you know, OpenAI is now the same size as some of the cloud providers were early on. [00:05:16] Like, what does that look like? Like, how much information can you feed off each other between the, the two?

[00:05:24] Martin Casado: There's so many lines that are being crossed right now, or blurred, right? So we already talked about venture and growth. Another one that's being blurred is between infrastructure and apps, right? So, like, what is a model company? Mm-hmm. Like, it's clearly infrastructure, right? Because it's, like, you know, it's doing kind of core R&D. It's a horizontal platform. But it's also an app, because it, um, uh, touches the users directly. And then of course, you know, the, the growth of these is just so high. And so I actually think you're just starting to see a, a new financing strategy emerge, and, you know, we've had to adapt as a result of that. [00:05:59] And [00:06:00] so there's been a lot of changes. Um, you're right that these companies become platform companies very quickly. You've got ecosystem build-out. So none of this is necessarily new, but the timescale on which it's happened is pretty phenomenal.
And the way we'd normally cut lines before is blurred a little bit. [00:06:16] But that, that said, I mean, a lot of it also just does feel like things that we've seen in the past, like the cloud build-out, the internet build-out as well.

[00:06:24] Sarah Wang: Yeah. Um, yeah, I think it's interesting. Uh, I don't know if you guys would agree with this, but it feels like the emerging strategy is, and this builds off of your other question, um: [00:06:33] you raise money for compute, you pour that, or you, you pour the money into compute, you get some sort of breakthrough. You funnel the breakthrough into your vertically integrated application. That could be ChatGPT, that could be Claude Code, you know, whatever it is. You massively gain share and get users. [00:06:49] Maybe you're even subsidizing at that point, um, depending on your strategy. You raise money at the peak momentum, and then you rinse and repeat. Um, and so... and that wasn't [00:07:00] true even two years ago, I think. Mm-hmm. And so it's sort of, just tying it to fundraising strategy, right? And hiring strategy. All of these are tied. I think the lines are blurring even more today, where everyone is... and, but of course these companies all have API businesses, and so there are these, these frenemy lines that are getting blurred, in that a lot of... I mean, they have billions of dollars of API revenue, right? And so there are customers there. [00:07:23] But they're competing on the app layer.

[00:07:24] Martin Casado: Yeah. So this is a really, really important point. So I, I would say, for sure, venture and growth, that line is blurry. App and infrastructure, that line is blurry. Um, but I don't think that that changes our practice so much.
But, like, where the very open questions are is, like, does this layer in the same way [00:07:43] compute traditionally has? Like, during the cloud it's like, you know, like, whatever, somebody wins one layer, but then another whole set of companies wins another layer. But that might not, might not be the case here. It may be the case that you actually can't verticalize on the token string. Like, you can't build an app... like, it, it necessarily goes down, just because there are no [00:08:00] abstractions. [00:08:00] So those are kind of the bigger existential questions we ask. Another thing that is very different this time than in the history of computer science is: in the past, if you raised money, then you basically had to wait for engineering to catch up, which famously doesn't scale, like The Mythical Man-Month. It takes a very long time. [00:08:18] But, like, that's not the case here. Like, a model company can raise money and drop a model in a, in a year, and it's better, right? And, and it does it with a team of 20 people or 10 people. So this type of, like, money entering a company and then producing something that has demand and growth right away, and using that to raise more money, is a very different capital flywheel than we've ever seen before. [00:08:39] And I think everybody's trying to understand what the consequences are. So I think it's less about, like, big companies and growth and this, and more about these more systemic questions that we actually don't have answers to.

[00:08:49] Alessio: Yeah. Like, at Kernel Labs, one of our ideas is, like, if you had unlimited money to spend productively to turn tokens into products, like, the whole early stage [00:09:00] market is very different, because today you're investing X amount of capital to win a deal, because of price structure and whatnot, and you're kind of pot-committing. Yeah. To a certain strategy for a certain amount of time. Yeah.
But if you could, like, iteratively spin out companies and products and just throw... I, I wanna spend a million dollars of inference today and get a product out tomorrow.

[00:09:18] swyx: Yeah.

[00:09:19] Alessio: Like, we should get to the point where, like, the friction of, like, token to product is so low that you can do this, and then you can change the, right, the early stage venture model to be much more iterative. [00:09:30] And then every round is, like, either 100K of inference or, like, a hundred million from a16z. There's no, there's no, like, $8 million seed round anymore, right?

[00:09:38] When Frontier Labs Outspend the Entire App Ecosystem

[00:09:38] Martin Casado: But, but, but there's a, there's an industry structural question that we don't know the answer to, which involves the frontier models. Which is, let's take Anthropic. Let's say Anthropic has a state-of-the-art model that has some large percentage of market share. And let's say that, uh, you know, uh, a company's building smaller models [00:10:00] that, you know, use the bigger model in the background, OpenAI's 4.5, but they add value on top of that. Now, if Anthropic can raise three times more [00:10:10] every subsequent round, they probably can raise more money than the entire app ecosystem that's built on top of it. And if that's the case, they can expand beyond everything built on top of it. It's like, imagine, like, a star that's just kind of expanding. So there could be a systemic... there could be a, a systemic situation where the SOTA models can raise so much money that they can outpay anybody that builds on top of 'em, which would be something I don't think we've ever seen before, just because we were so bottlenecked on engineering. And this is a very open question.

[00:10:41] swyx: Yeah. It's, it is almost like the bitter lesson applied to the startup industry.

[00:10:45] Martin Casado: Yeah, a hundred percent.
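The compounding argument Martin sketches here can be made concrete with a toy calculation. All of the numbers below (starting amounts, the 3x lab multiple, the ecosystem growth rate) are illustrative assumptions, not figures from the conversation:

```python
# Toy model of the "raise -> train -> ship -> raise bigger" flywheel:
# a frontier lab that can raise 3x more each cycle, versus an app
# ecosystem whose aggregate funding grows a more typical 1.5x per
# cycle. All figures are hypothetical, in billions of dollars.
def cycles_until_lab_outraises(lab_round=1.0, ecosystem_total=20.0,
                               lab_multiple=3.0, ecosystem_multiple=1.5):
    """Count funding cycles until one lab's single round exceeds the
    aggregate raised by all the apps built on top of it."""
    cycles = 0
    while lab_round <= ecosystem_total:
        lab_round *= lab_multiple
        ecosystem_total *= ecosystem_multiple
        cycles += 1
    return cycles
```

With these made-up numbers, a lab starting at a $1B round passes a $20B ecosystem's aggregate in five cycles; the only point is that any per-round multiple above the ecosystem's growth rate gets there eventually.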
It literally becomes an issue of, like, raise capital, turn that directly into growth, use that to raise three times more. Exactly. And if you can keep doing that, you literally can outspend any company that's built... the, not any company. [00:10:57] You can outspend the aggregate of companies on top of [00:11:00] you, and therefore you'll necessarily take their share, which is crazy.

[00:11:02] swyx: Would you say that kind of happened with Character? Is that the, the sort of postmortem on what happened?

[00:11:10] Sarah Wang: Um,

[00:11:10] Martin Casado: No.

[00:11:12] Sarah Wang: Yeah, because I think so,

[00:11:13] swyx: I mean, the actual postmortem is, he wanted to go back to Google. Exactly. But like,

[00:11:18] Martin Casado: That's another difference that

[00:11:19] Sarah Wang: you said

[00:11:21] Martin Casado: it. We should talk, we should actually talk about that.

[00:11:22] swyx: Yeah,

[00:11:22] Sarah Wang: that's

[00:11:23] swyx: Go for it. Take it. Take,

[00:11:23] Sarah Wang: yeah.

[00:11:24] Character.AI, Founder Goals (AGI vs Product), and GPU Allocation Tradeoffs

[00:11:24] Sarah Wang: I was gonna say, I think, um, the, the, the Character thing raises actually a different issue, which actually the frontier labs will face as well. So we'll see how they handle it. [00:11:34] But, um, so we invested in Character in January 2023, which feels like eons ago. I mean, three years ago feels like lifetimes ago. But, um, and then they, uh, did the IP licensing deal with Google in August 2024. And so, um, you know, at the time... no, you know, he's talked publicly about this, right? He wanted to, and Google wouldn't let him put out products in the world. [00:11:56] That's obviously changed drastically. But, um, he went to go do [00:12:00] that. Um, but he had a product attached. The goal was... I mean, it's Noam Shazeer, he wanted to get to AGI. That was always his personal goal.
But, you know, I think through collecting data, right, and this sort of very human use case that the Character product [00:12:13] originally was and still is, um, that was one of the vehicles to do that. Um, I think the real reason... that, you know, if you think about the, the stress that any company feels before, um, you ultimately go one way or the other, it's sort of this AGI versus product. Um, and I think a lot of the big... I think, you know, OpenAI is feeling that. Um, Anthropic, if they haven't started, you know, felt it, certainly given the success of their products, they may start to feel that soon. [00:12:39] And the real... I think there's real trade-offs, right? It's, like, when you think about GPUs, that's a limited resource. Where do you allocate the GPUs? Is it toward the product? Is it toward new research, right? Or long-term research? Is it toward, um, you know, near-to-mid-term research? And so, um, in a case where you're resource-constrained, um, [00:13:00] of course there's this fundraising game you can play, right? [00:13:01] But the market was very different back in 2023 too. Um, I think the best researchers in the world have this dilemma of, okay, I wanna go all in on AGI, but it's the product usage revenue flywheel that keeps the revenue in the house to power all the GPUs to get to AGI. And so it does make... um, you know, I think it sets up an interesting dilemma for any startup that has trouble raising up until that level, right? [00:13:27] And certainly if you don't have that progress, you can't continue this fly... you know, fundraising flywheel.

[00:13:32] Martin Casado: I would say that, because, 'cause we're keeping track of all of the things that are different, right? Like, you know, venture/growth and, uh, app/infra, and one of the ones is definitely the personalities of the founders. [00:13:45] It's just very different this time. I've been doing this for a decade, and I've been doing startups for 20 years.
And so, um, I mean, a lot of people start this to do AGI, and we've never had, like, a unified north star that I recall in the same [00:14:00] way. Like, people built companies to start companies in the past. [00:14:02] Like, that was what it was. Like, I would create an internet company, I would create an infrastructure company. Like, it's kind of more engineering builders, and this is kind of a different, you know, mentality. And some companies have harnessed that incredibly well, because their direction is so obviously on the path to what somebody would consider AGI, but others have not. [00:14:20] And so, like, there is always this tension with personnel. And so I think we're seeing more kind of founder movement,

[00:14:27] Sarah Wang: Yeah.

[00:14:27] Martin Casado: you know, as a fraction of founders, than we've ever seen. I mean, maybe since, like, I don't know, the time of, like, Shockley and the traitorous eight or something like that, way back in the beginning of the industry. It's a very, very [00:14:38] unusual time of personnel.

[00:14:39] Sarah Wang: Totally.

[00:14:40] Talent Wars, Mega-Comp, and the Rise of Acquihire M&A

[00:14:40] Sarah Wang: And I think it's exacerbated by the fact that... talent wars, I mean, every industry has talent wars, but not at this magnitude, right? No. Yeah. Very rarely can you see someone get poached for $5 billion. That's hard to compete with. And then secondly, if you're a founder in AI, you could fart and it would be on the front page of, you know, The Information these days. [00:14:59] And so there's [00:15:00] sort of this fishbowl effect that I think adds to the deep anxiety that, that these AI founders are feeling.

[00:15:06] Martin Casado: Hmm.

[00:15:06] swyx: Uh, yes. I mean, just to briefly comment on the founder, uh, the sort of talent wars thing: I feel like 2025 was just, like, a blip. Like, I, I don't know if we'll see that again, [00:15:17] 'cause Meta built the team.
Like, I don't know if... I think, I think they're kind of done, and, like, who's gonna pay more than Meta? I, I don't know.

[00:15:23] Martin Casado: I, I agree. So it feels, it feels this way to me too. It's like, basically Zuckerberg kind of came out swinging, and then now he's kind of back to building.

[00:15:30] swyx: Yeah, yeah. You know, you gotta, like, pay up to, like, assemble the team, to rush the job, whatever. But then now, now you, like, you, you made your choices, and now they gotta ship.

[00:15:38] Martin Casado: I mean, the, the other side of that is, like, you know, like, we're, we're actually in the job hiring market. We've got 600 people here. I hire all the time. [00:15:44] I've got three open recs if anybody's interested that's listening to this, for investors. Yeah, on, on the team, like, on the investing side of the team. And, um, a lot of the people we talk to have, you know, active, um, offers for 10 million a year or something like that. And, like, you know, and we pay really, [00:16:00] really well. And just to see what's out on the market is really, is really remarkable. And so I would just say... so you're right, like, the really flashy one, like, I will get someone for, you know, a billion dollars... but, like, the inflation, um, uh, trickles down. Yeah, it is still very active today.

[00:16:18] Sarah Wang: I mean, yeah, you could be an L5 and get an offer in the tens of millions. Okay. Yeah. Easily. Yeah. So I think you're right that it felt like a blip. I hope you're right. Um, but I think the steady state now, I think, got pulled up. Yeah. Yeah.

[00:16:31] Martin Casado: Pulled up for sure. Yeah.

[00:16:32] Alessio: Yeah. And I think that's breaking the early stage founder math too. I think before, a lot of people would be like, well, maybe I should just go be a founder instead of, like, getting paid, [00:16:39] yeah, 800K, a million at Google. But if I'm getting paid
five, six million? That's different. But

[00:16:45] Martin Casado: But on the other hand, there's more strategic money than we've ever seen historically, right? Mm-hmm. And so, yep, the economics, the, the calculus on the economics, is very different in a number of ways. And, uh, it's crazy. [00:16:58] It's causing, like, a [00:17:00] ton of change and confusion in the market. Some very positive, some negative. Like, so for example, the other side of the, um, the co-founder, like, um, acquisition, you know, Mark Zuckerberg poaching someone for a lot of money, is, like, we are actually seeing a historic amount of M&A for basically acquihires, right? [00:17:20] Like, you know, really good outcomes from a venture perspective that are effective acquihires, right? So I would say it's probably net positive from the investment standpoint, even though it seems from the headlines to be very disruptive in a negative way.

[00:17:33] Alessio: Yeah.

[00:17:33] What's Underfunded: Boring Software, Robotics Skepticism, and Custom Silicon Economics

[00:17:33] Alessio: Um, let's talk maybe about what's not being invested in, like, maybe some interesting ideas that you would like to see more people build. It seems, in a way, you know, as YC's getting more popular, as accelerators are getting more popular, [00:17:47] there's a startup school path that a lot of founders take, and they know what's hot in the VC circles and they know what gets funded. Uh, and there's maybe not as much risk appetite for things outside of that. Um, I'm curious if you feel [00:18:00] like that's true, and what are maybe, uh, some of the areas, uh, that you think are under-discussed?

[00:18:06] Martin Casado: I mean, I actually think that we've taken our eye off the ball in a lot of, like, just traditional, you know, software companies. Um, so, like, I mean,
you know, I think right now there's almost a barbell: like, you're the hot thing on X, or you're deep tech.

[00:18:21] swyx: Mm-hmm.

[00:18:22] Martin Casado: Right. But I, you know, I feel like there's just kind of a long, you know, list of, like, good, good companies that will be around for a long time in very large markets. Say you're building a database, you know, say you're building, um, you know, kind of monitoring or logging or tooling or whatever. There's some good companies out there right now, but, like, they have a really hard time getting, um, the attention of investors. [00:18:43] And it's almost become a meme, right? Which is, like, if you're not basically growing from zero to a hundred in a year, you're not interesting. Which is just, is the silliest thing to say. I mean, think of yourself as, like, an investor, like, like, your personal money, right? Mm-hmm. So, your personal money: will you put it in the stock market at 7%, or will you put it in this company growing five x in a very large [00:19:00] market? [00:19:00] Of course you put it in the company growing five x. So it's just, like, we say these stupid things, like, if you're not going from zero to a hundred... but, like, those, like, who knows what the margins of those are. I mean, clearly these are good investments. True for anybody, right? True. Like, our LPs want, whatever, [00:19:12] three x net over, you know, the life cycle of a fund, right? So a, a company in a big market growing five x is a great investment. We'd... everybody would be happy with these returns. But we've got this kind of mania on these, these strong growths. And so I would say that that's probably the most underinvested sector
Yeah, but that's not what they're, they're not on the token path, right? Yeah. Let's just say that like they're software, but they're not on the token path.[00:19:41] Like these are like they're great investments from any definition except for like random VC on Twitter saying VC on x, saying like, it's not growing fast enough. What do you[00:19:52] Sarah Wang: think? Yeah, maybe I'll answer a slightly different. Question, but adjacent to what you asked, um, which is maybe an area that we're not, uh, investing [00:20:00] right now that I think is a question and we're spending a lot of time in regardless of whether we pull the trigger or not.[00:20:05] Um, and it would probably be on the hardware side, actually. Robotics, right? And the robotics side. Robotics. Right. Which is, it's, I don't wanna say that it's not getting funding ‘cause it's clearly, uh, it's, it's sort of non-consensus to almost not invest in robotics at this point. But, um, we spent a lot of time in that space and I think for us, we just haven't seen the chat GPT moment.[00:20:22] Happen on the hardware side. Um, and the funding going into it feels like it's already. Taking that for granted.[00:20:30] Martin Casado: Yeah. Yeah. But we also went through the drone, you know, um, there's a zip line right, right out there. What's that? Oh yeah, there's a zip line. Yeah. What the drone, what the av And like one of the takeaways is when it comes to hardware, um, most companies will end up verticalizing.[00:20:46] Like if you're. If you're investing in a robot company for an A for agriculture, you're investing in an ag company. ‘cause that's the competition and that's surprising. And that's supply chain. And if you're doing it for mining, that's mining. 
And so the AD team does a lot of that type of work, because they're actually set up to [00:21:00] diligence it.[00:21:01] But for horizontal technology investing, there's very little when it comes to robots, just because it's so fit for purpose. So we like to look at software solutions, or horizontal solutions like Applied Intuition, clearly from the AV wave, or DeepMap, clearly from the AV wave. I'd say Scale AI was actually a horizontal one for robotics early on.[00:21:23] That sort of thing we're very, very interested in. But the actual robot interacting with the world is probably better for a different team. Agreed.[00:21:30] Alessio: Yeah, I'm curious who these teams are supposed to be that invest in them. Everybody says robotics is important and people should invest in it.[00:21:38] But then you look at the numbers: the capital requirements early on, versus the moment of "okay, this is actually going to work, let's keep investing." That seems really hard to predict.[00:21:49] Martin Casado: I think Coatue, Khosla, GC, these have all invested in hardware companies. And [00:22:00] listen, it could work this time, for sure.[00:22:01] Just the fact that Elon's doing it means there's going to be a lot of capital and a lot of attempts for a long period of time. So that alone maybe suggests we should just be investing in robotics, because you have this north star, Elon with a humanoid, and that's going to basically will an industry into being.[00:22:17] But we've just historically found, and we're huge believers this is going to happen, that we don't feel we're in a good position to diligence these things, because again, robotics companies tend to be vertical.
You really have to understand the market they're being sold into. That competitive equilibrium with a human being is what's important.[00:22:34] It's not the core tech, and we're more horizontal, core-tech-type investors, Sarah and I. The AD team is different; they can actually do these types of things.[00:22:42] swyx: Just to clarify, AD stands for...[00:22:44] Martin Casado: American Dynamism.[00:22:45] swyx: Alright. Okay. I do have a related question. First, I want to acknowledge something on the chip side.[00:22:51] I recall a podcast you were on, I think about two or three years ago, where you said [00:23:00] something that really stuck in my head: that at some point of scale it makes sense to build a custom ASIC per run.[00:23:07] Martin Casado: Yes. It's crazy. Yeah.[00:23:09] swyx: We're here. And I think you estimated 500 billion, or something.[00:23:12] Martin Casado: No, no. A billion. At a $1 billion training run, it makes sense to do a custom ASIC, if you can do it in time. The question now is timelines, not money. Just rough math:[00:23:22] if it's a billion-dollar training run, then the inference for that model has to be over a billion dollars, otherwise it won't be solvent. So assume you could save 20%, and you could save much more than that with an ASIC. 20% is $200 million. You can tape out a chip for $200 million. So now you can literally justify, economically, not timeline-wise, that's a different issue, an ASIC per model.[00:23:44] swyx: Because that's how much we leave on the table every single time we do generic Nvidia.[00:23:48] Martin Casado: Exactly. Exactly.
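Martin's back-of-the-envelope can be written out explicitly. A minimal sketch, assuming (as he does) that lifetime inference spend at least matches the training cost; the function name is ours, not anything from the conversation:

```python
# Martin's rough ASIC math, as a sketch. Assumption from the conversation:
# a model is only solvent if lifetime inference spend exceeds its training cost,
# so training cost is a conservative lower bound on inference spend.

def asic_savings(training_cost_usd: float, savings_rate: float) -> float:
    """Dollars saved on inference by a custom ASIC, at the solvency lower bound."""
    inference_spend = training_cost_usd  # lower bound: inference >= training
    return inference_spend * savings_rate

# $1B training run, conservative 20% inference savings from a custom chip:
saved = asic_savings(1_000_000_000, 0.20)
print(f"${saved:,.0f}")  # $200,000,000 -- roughly a tape-out budget
```

At Martin's factor-of-two number (a 50% saving), the same arithmetic gives $500 million, which is the second figure he quotes.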
No, it's actually much more than that. You could probably get a factor of two, which would be $500 million.[00:23:54] swyx: Typical MFU would be like 50%.[00:23:55] Yeah, yeah. And that's good.[00:23:57] Martin Casado: Exactly.[00:23:57] swyx: A hundred percent. So, yeah. And I just want to acknowledge: here we are in 2025, and OpenAI is confirming Broadcom and all the other custom-silicon deals, which is incredible. Speaking of AD, there's a really interesting tie-in that you guys have hit on, this America-first movement, re-industrializing here,[00:24:17] moving TSMC here, if that's possible. How much overlap is there from AD[00:24:23] Martin Casado: Yeah.[00:24:23] swyx: to, I guess, growth, and investing in particular in US AI companies that are strongly bounded by their compute?[00:24:32] Martin Casado: Yeah. So I would view AD more as a market segmentation than a mission, right?[00:24:37] The market segmentation is: the company has regulatory or compliance issues, or government sales, or it deals with hardware. They're just set up to diligence those types of companies. So it's more of a market-segmentation thing. I would say the entire firm, since its inception, has had geographical biases.[00:24:58] For the longest time we said the [00:25:00] Bay Area is where the majority of the dollars go. And listen, there are a lot of compounding effects to having a geographic bias. Everybody's in the same place.
You've got an ecosystem, you've got presence, you've got a network.[00:25:12] And I would say the Bay Area is very much back. I remember pre-COVID, crypto had kind of pulled startups away from the Bay Area, to Miami. New York came up because it's so close to finance. Los Angeles had a moment because it was so close to consumer. But now it's come back here.[00:25:29] So we tend to be very Bay Area focused historically, even though of course we've invested all over the world. Then if you take the ring out one more, it's going to be the US, of course, because we know it very well. And one more out is the US and its allies, and it goes from there.[00:25:45] Sarah Wang: Yeah.[00:25:45] Martin Casado: Sorry.[00:25:46] Sarah Wang: No, no, I agree. But that's sort of where the companies are headquartered. Maybe your question is on supply chains and customer bases. I would say our companies are fairly international from that perspective.[00:25:59] They're selling [00:26:00] globally, and they have global supply chains in some cases.[00:26:03] Martin Casado: I would also say the stickiness is very different[00:26:05] Sarah Wang: Yeah.[00:26:05] Martin Casado: historically between venture and growth. There's so much company building in venture: hiring the next PM, introducing the customer, all of that.[00:26:15] Of course we're just going to be stronger where we have our network and have been doing business for 20 years. I've been in the Bay Area for 25 years, so clearly I'm more effective here than I would be somewhere else.
Where I think for some of the later-stage rounds, the companies don't need that much help.[00:26:30] They're already pretty mature historically, so they can kind of be anywhere. There's less of that stickiness. This is different in the AI era. I mean, Sarah is now the chief of staff of half the AI companies in the Bay Area. She's like ops ninja, BizDev, BizOps.[00:26:48] swyx: Are you finding much AI automation in your work? What's your stack?[00:26:53] Sarah Wang: In my personal stack?[00:26:54] swyx: The reason I ask is, [00:27:00] I'm hiring ops people. A lot of founders I know are also hiring ops people. Since you're basically helping out with ops at a lot of companies,[00:27:09] what are people doing these days? Because it's still very manual as far as I can tell.[00:27:13] Sarah Wang: Hmm. I think the things we help with are pretty network-based, in that it's "hey, how do I shortcut this process? Let's connect you to the right person." There's not quite an AI workflow for that.[00:27:26] I will say, as a growth investor, Claude Cowork is pretty interesting. For the first time, you can actually get one-shot data analysis. If you're going to take a customer database and analyze cohort retention, that's stuff you had to do by hand before. The other night it was midnight and three of us were playing with Claude Cowork.[00:27:47] We gave it a raw file. Boom. Perfectly accurate; we checked the numbers. It was amazing. That was my aha moment. It sounds so boring, but that's the kind of thing a growth investor is [00:28:00] slaving away on late at night.
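The cohort-retention analysis Sarah mentions is the kind of thing growth teams used to build by hand. A minimal sketch of that computation, with hypothetical data and function names (not anything from Cowork itself):

```python
from collections import defaultdict

def cohort_retention(rows):
    """rows: (customer_id, signup_month, active_month) tuples.
    Returns {cohort_month: {months_since_signup: fraction_of_cohort_active}}."""
    cohort_users = defaultdict(set)                  # cohort -> all customers in it
    active = defaultdict(lambda: defaultdict(set))   # cohort -> offset -> active users
    for cust, signup, month in rows:
        cohort_users[signup].add(cust)
        active[signup][month - signup].add(cust)
    return {
        c: {off: len(users) / len(cohort_users[c])
            for off, users in sorted(offsets.items())}
        for c, offsets in active.items()
    }

# Hypothetical activity log: customer 2 churns after month 0.
rows = [(1, 0, 0), (1, 0, 1), (2, 0, 0), (3, 1, 1), (3, 1, 2)]
print(cohort_retention(rows))
# {0: {0: 1.0, 1: 0.5}, 1: {0: 1.0, 1: 1.0}}
```

The raw-file-in, retention-table-out shape is what makes this a good one-shot target for an agentic tool: the input schema is simple and the numbers are easy to spot-check by hand, as Sarah's team did.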
And it's done in a few seconds.[00:28:03] swyx: Yeah. You've got to wonder about Anthropic Labs, their new sort of product studio.[00:28:10] What would that be worth as an independent startup?[00:28:14] Martin Casado: A lot.[00:28:14] Sarah Wang: Yeah, true.[00:28:16] Martin Casado: You've got to hand it to them. They've been executing incredibly well.[00:28:19] swyx: Yeah. I mean, to me, Anthropic building on Claude Code makes sense. The real pedal-to-the-metal moment is when they start coming after consumer, against OpenAI. That's a red alert at OpenAI.[00:28:35] Martin Casado: Oh, I think they've been pretty clear they're enterprise focused.[00:28:37] swyx: They have been, but...[00:28:40] Martin Casado: They've been clear publicly:[00:28:40] swyx: it's enterprise focused, it's coding. Right.[00:28:43] AI Labs vs Startups: Disruption, Undercutting & the Innovator's Dilemma[00:28:43] swyx: But here's Claude Cowork, and apparently they're running Instagram ads for Claude.[00:28:50] I get them all the time.[00:28:54] So it's kind of the disruption thing. OpenAI has been doing consumer, pursuing general intelligence in every [00:29:00] modality, and here's Anthropic, focused on only this one thing, but now they're undercutting and doing the whole innovator's-dilemma thing on everything else.[00:29:11] Martin Casado: It's very[00:29:11] swyx: interesting.[00:29:12] Martin Casado: Yeah. So for me there's a very open question. You know that meme where there's a guy and two paths?
Which way, Western man?[00:29:23] Two Futures for AI: Infinite Market vs AGI Oligopoly[00:29:23] Martin Casado: For me, the entire industry hinges on two potential futures.[00:29:29] In one potential future, the market is infinitely large. There are perverse economies of scale, because as soon as you put a model out there it kind of sublimates and all the other models catch up. Software is being rewritten and fractured all over the place, there's tons of upside, and it just grows.[00:29:48] And then there's another path: maybe these models actually generalize really well, and all you have to do is train them with three times more money. That's all you have to [00:30:00] do, and they'll just consume everything above them. If that's the case, you end up with basically an oligopoly for everything, because they're perfectly general. That would be the AGI path: these are perfectly general, they can do everything. The other path is: this is actually normal software, the universe is complicated. And nobody knows the answer.[00:30:18] The Economics Reality Check: Gross Margins, Training Costs & Borrowing Against the Future[00:30:18] Martin Casado: My belief is, if you actually look at the numbers of these companies, at the amount they're making versus how much they spent training the last model, they're gross-margin positive.[00:30:30] You say, oh, that's really working. But if you look at the current training they're doing for the next model, they're gross-margin negative. So part of me thinks a lot of them are borrowing against the future, and that's going to have to slow down.
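Martin's framing can be made concrete with made-up numbers: measured against the last model's training cost, revenue looks gross-margin positive; measured against the bigger run currently in flight, it's negative. All figures below are hypothetical, purely for illustration:

```python
# Hypothetical lab economics illustrating "borrowing against the future".
# Assumption (Martin's framing): each generation's run costs ~3x the last.

revenue         = 2_000_000_000        # annual revenue from the current model
last_train_cost = 1_000_000_000        # what the current model cost to train
next_train_cost = 3 * last_train_cost  # the run in flight for the next model

margin_vs_last = (revenue - last_train_cost) / revenue
margin_vs_next = (revenue - next_train_cost) / revenue

print(f"vs last model: {margin_vs_last:+.0%}")  # +50% -- looks healthy
print(f"vs next model: {margin_vs_next:+.0%}")  # -50% -- funded by the next round
```

The same revenue line is gross-margin positive or negative depending only on which training run you amortize against, which is why the picture hinges on the lab's ability to keep raising.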
It's going to catch up to them at some point, but we don't really know.[00:30:47] Sarah Wang: Yeah.[00:30:47] Martin Casado: Does that make sense? It could be the case that the only reason this is working is that they can raise that next round and train that next model, because these models have such a short life. At some point they won't be able to [00:31:00] raise the next round for the next model, and then things will converge and fragment again.[00:31:03] But right now it's not.[00:31:04] Sarah Wang: Totally. A meta point, by the way. I think the other lesson from the last three years, and we talk about this all the time because we're in this Twitter/X bubble: if you go back to, say, March 2024, it felt like an open-source model with benchmark-leading capability was launching on a daily basis.[00:31:27] So that's one period where suddenly it's "open source takes over the world, there's going to be a plethora, it's not an oligopoly." And if you rewind time even before that, GPT-4 was number one for nine or ten months. That's a long time.[00:31:44] And of course now we're in this era where it feels like an oligopoly, maybe with some steady-state shifts, and it could look like this in the future too. But it's just so hard to call.
And I think the thing that keeps us up at [00:32:00] night, in a good way and a bad way, is that the capability progress is actually not slowing down.[00:32:06] So until that happens, you don't know what it's going to look like.[00:32:09] Martin Casado: But I would say for sure it's not converged. For sure the systemic capital flows have not converged, meaning right now it's still borrowing against the future to subsidize current growth, which you can do for a period of time.[00:32:23] But at some point the market will rationalize that, and nobody knows what that will look like.[00:32:29] Alessio: Yeah.[00:32:29] Martin Casado: Or the drop in the price of compute will save them. Who knows?[00:32:34] Alessio: Yeah. I think the models need to saturate specific tasks. It's like, okay, now Opus 4.5 might be AGI at some specific task, and now you can depreciate the model over a longer time.[00:32:45] Right now there's no old model.[00:32:47] Martin Casado: No, but let me just change that mental model. That used to be mine; let me change it a little bit.[00:32:53] Capital as a Weapon vs Task Saturation: Where Real Enterprise Value Gets Built[00:32:53] Martin Casado: If you can raise more than the aggregate of everybody that uses your models, none of that even matters.[00:33:00] See what I'm saying? Say I have an API business. My API business is 60% margin, or 70%, or 80%; it's a high-margin business. So I know what everybody is using. If I can raise more money than the aggregate of everybody that's using it, I will consume them, whether I'm AGI or not.[00:33:14] And I'll know what they're using, because they're using it.
And unlike in the past, where engineering would stop me from doing that,[00:33:21] Alessio: Mm-hmm.[00:33:21] Martin Casado: it's very straightforward. You just train. I also used to think it was about whether the model is AGI-general. But there's also just the possibility that the capital markets will give them the ammunition to go after everybody on top of them.[00:33:36] Sarah Wang: I do wonder, though, to your point, whether for a certain task, getting marginally better isn't actually that much better. We've saturated it; call it AGI or whatever. Actually, Ali Ghodsi talks about this: we're already at AGI for a lot of functions in the enterprise.[00:33:50] For those tasks, you probably could build very specific companies that focus on getting as much value out of that task as possible, value that isn't [00:34:00] coming from the model itself. There's probably a rich enterprise business to be built there. I could be wrong on that, but there are a lot of interesting examples.[00:34:08] So, say, the legal profession, or whatnot. Maybe that's not a great one, because the models are getting better on that front too. But anywhere it's a bit saturated, the value comes from services, from implementation, from all the things that actually make it useful to the end customer.[00:34:24] Martin Casado: Sorry, one more thing I think is underdiscussed in all of this: to what extent is every task AGI-complete?[00:34:31] Sarah Wang: Mm-hmm.[00:34:32] Martin Casado: I code every day. It's so fun.[00:34:35] Sarah Wang: That's a core question. Yeah.[00:34:36] Martin Casado: And when I'm talking to these models, it's not just code. It's everything, right?
[00:34:43] swyx: It's healthcare, it's...[00:34:45] Martin Casado: It's exactly that.[00:34:47] Sarah Wang: Great support. Yeah.[00:34:48] Martin Casado: It's everything. I'm asking these models to understand compliance. I'm asking these models to search the web. I'm asking these models to talk about things I know from history. It's having a full conversation with me while I engineer. So it could be [00:35:00] the case that[00:35:01] the most AGI-complete model, and I'm not an AGI guy, will win independent of the task. We don't know the answer to that one either.[00:35:11] swyx: Yeah.[00:35:12] Martin Casado: But it seems to me that, listen, Codex in my experience is for sure better than Opus 4.5 for coding.[00:35:18] It finds the hardest bugs I work on. It's like one of the smartest developers. It's great. But I think Opus 4.5 actually has a great bedside manner, and that really matters if you're building something very complex, because it's a partner, a brainstorming partner.[00:35:38] And I think we don't discuss enough how every task has that quality.[00:35:42] swyx: Mm-hmm.[00:35:43] Martin Casado: What does that mean for capital investment, frontier models, and sub-models?[00:35:47] Why "Coding Models" Keep Collapsing into Generalists (Reasoning vs Taste)[00:35:47] Martin Casado: Like, what happened to all the special coding models? None of them worked.[00:35:51] Alessio: Some of them didn't even get released.[00:35:54] Martin Casado: There's a whole host.
We saw a bunch of them, and there was this whole theory that there could be one. And [00:36:00] I think one of the conclusions is that there's no such thing as a coding model.[00:36:04] Alessio: You know?[00:36:04] Martin Casado: That's not a thing. You're talking to another human being and it's good at coding, but it's got to be good at everything.[00:36:10] swyx: A minor disagree, only because I have pretty high confidence that OpenAI will basically always release a GPT-5 and a GPT-5-codex. The way I put it is: one for reasoning, one for taste. And someone internal at OpenAI said, yeah, that's a good way to frame it.[00:36:32] Martin Casado: That's so funny.[00:36:33] swyx: But maybe it collapses down to reasoning, and that's it. It's not a hundred dimensions; it's two dimensions. Bedside manner versus coding.[00:36:43] Martin Casado: Yeah.[00:36:44] swyx: Yeah.[00:36:46] Martin Casado: It's hilarious. For anybody listening to this: when you're coding or using these models for something like that,[00:36:52] just be aware of how much of the interaction has nothing to do with coding. It turns out to be a large portion of it. So I think the best SOTA-ish model [00:37:00] is going to remain very important no matter what the task is.[00:37:06] swyx: Yeah.[00:37:07] What He's Actually Coding: Gaussian Splats, Spark.js & 3D Scene Rendering Demos[00:37:07] swyx: Speaking of coding, I'm going to be cheeky and ask: what actually are you coding?[00:37:11] Because obviously you could code anything, and you're a busy investor and a manager of a giant team.
What are you coding?[00:37:18] Martin Casado: I help Fei-Fei at World Labs. It's one of our investments, and they're building a foundation model that creates 3D scenes.[00:37:27] swyx: Yeah, we had it on the pod.[00:37:28] Martin Casado: And these 3D scenes are Gaussian splats, just by the way that kind of AI works. You can reconstruct a scene better with radiance fields than with meshes, because they don't really have topology. So they produce beautiful 3D rendered scenes that are Gaussian splats, but the actual industry support for Gaussian splats isn't great.[00:37:50] It's always been meshes; things like Unreal use meshes. So I work on an open-source library called Spark.js, which is a [00:38:00] JavaScript rendering layer for Gaussian splats. You need that support, and the Three.js ecosystem right now is all meshes, so Spark has become kind of the default for splats in the Three.js ecosystem.[00:38:13] As part of that, to exercise the library, I build a whole bunch of cool demos. So if you see me on X, you see all my demos and all the world building, but all of that is just to exercise this library I work on, because it's actually a very tough algorithmics problem to scale a library that much.[00:38:29] And this is ancient history now, but 30 years ago I paid for undergrad working on game engines in college in the late nineties. So I actually have a background in this, a very old background, and a lot of it's fun. But the whole goal is just for this rendering library.[00:38:47] Sarah Wang: Are you one of the most active contributors on their GitHub?[00:38:50] Martin Casado: On Spark?
Yes.[00:38:51] Sarah Wang: Yeah, yeah.[00:38:51] Martin Casado: There are only two of us, so yes. The primary [00:39:00] developer is a guy named Andres Quist, who's an absolute genius. He and I did our PhDs together; we studied for our quals together. It's like hanging out with an old friend.[00:39:09] So he's the core guy. I do mostly the side stuff; I run a venture fund.[00:39:14] swyx: It's amazing. Five years ago you would not have done any of this, and it brought you back.[00:39:19] Martin Casado: The activation energy used to be so high, because you had to learn all the framework b******t.[00:39:23] Man, I f*****g used to hate that. Now I don't have to deal with that. I can focus on the algorithmics, I can focus on the scaling.[00:39:29] swyx: Yeah.[00:39:29] LLMs vs Spatial Intelligence + How to Value World Labs' 3D Foundation Model[00:39:29] swyx: I'll observe one irony and then I'll ask a serious investor question. The irony is: Fei-Fei actually doesn't believe that LLMs can lead us to spatial intelligence, and here you are using LLMs to help achieve spatial intelligence. I see some disconnect there.[00:39:45] Martin Casado: Yeah. I think what she would say is LLMs are great to help with coding,[00:39:51] swyx: Yes.[00:39:51] Martin Casado: but that's very different from a model that actually provides it. They'll never have the[00:39:56] swyx: spatial inte...[00:39:56] Martin Casado: And listen, our brains clearly have [00:40:00] both. Our brains clearly have a language-reasoning section, and they clearly have a spatial-reasoning section.
I mean, these are two pretty independent problems.[00:40:07] swyx: Okay. The one data point I recently had against that is the DeepMind IMO gold. Typically the answer is that this is where you start going down the neurosymbolic path:[00:40:21] one very abstract reasoning system and one formal system. That's what DeepMind had in 2024 with AlphaProof and AlphaGeometry, and now they just use Deep Think and extended thinking tokens. It's one model, and it's an LLM.[00:40:36] Martin Casado: Yeah, yeah.[00:40:37] swyx: And so that was my indication that maybe you don't need a separate system.[00:40:42] Martin Casado: Yeah. So let me step back. At the end of the day, these things are nodes in a graph with weights on them. It can all be modeled, if you distill it down. But let me talk about the two different substrates. Let me put you in a dark room,[00:40:56] a totally black room, and let me [00:41:00] describe how you exit it: to your left there's a table, duck below this thing, and so on. The chances that you're not going to run into something are very low. Now let me turn on the light, so you actually see, and you can judge distance, you know how far away something is and where it is. Then you can do it.[00:41:17] Language is not the right set of primitives to describe the universe, because it's not exact enough. That's all Fei-Fei is talking about. When it comes to spatial reasoning, you actually have to know that this is three feet away, that far away.
That it's curved.[00:41:37] You have to understand the actual movement through space.[00:41:40] swyx: Yeah.[00:41:40] Martin Casado: So listen, I do think these models are definitely converging as far as models go, but there are different representations of the problems you're solving. One is language, which would be like describing to somebody what to do.[00:41:51] The other is actually just showing them, and spatial reasoning is just showing them.[00:41:55] swyx: Yeah. Right. Got it. The investor question was on World Labs:[00:42:00] how do I value something like this? Fei-Fei is awesome,[00:42:07] Justin's awesome, the other co-founders are awesome, and everyone's building cool tech. But what's the value of the tech? That's the fundamental question.[00:42:16] Martin Casado: Well, let me give you a rough sketch on the diffusion models. I'd actually love to hear Sarah on this, because I'm a venture person, and venture is always kind of wild-west-type[00:42:24] swyx: stuff. You paint a dream, and she has to actually...[00:42:28] Martin Casado: I'm going to give the venture view, and she can marry it to reality. So these diffusion models literally create something for almost nothing, something the world has found to be very valuable in real markets.[00:42:45] Like a 2D image. That's been an entire market. People value them, and it takes a human being a long time to create one. To turn me into whatever, an image, would cost a hundred bucks and an hour.
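The cost comparisons Martin runs through here reduce to order-of-magnitude arithmetic. A sketch using the figures from this stretch of the conversation (the function name is ours):

```python
import math

# Martin's cost comparisons as order-of-magnitude arithmetic.
def orders_of_magnitude(human_cost: float, inference_cost: float) -> float:
    """How many powers of ten separate the human cost from the inference cost."""
    return math.log10(human_cost / inference_cost)

# 2D image: ~$100 for a human vs ~a hundredth of a penny for inference.
print(round(orders_of_magnitude(100, 0.0001)))   # 6

# 3D scene: $4k-$30k for a human vs under a dollar generated.
print(round(orders_of_magnitude(30_000, 1.0)))   # 4 -- the "four or five orders"
```

That multi-order-of-magnitude drop in marginal cost is the whole venture case: it is the same shape as what already played out for speech and 2D images.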
The inference costs [00:43:00] a hundredth of a penny, right? We've seen this with speech, in very successful companies.[00:43:03] We've seen this with 2D images. We've seen this with movies. Now think about 3D scenes. I mean, when is the next Grand Theft Auto coming out? It's been, what, 10 years?[00:43:14] Alessio: Yeah.[00:43:15] Martin Casado: How much would it cost to reproduce this room in 3D? If you hired somebody on Fiverr, at any sort of quality, probably $4,000 to $10,000. With a professional, probably $30,000. And we know these scenes are used: they're using Unreal, they're using Blender, they're used in movies and video games and all of it. So if you could generate the exact same thing from a 2D image for less than a dollar, that's four or five orders of magnitude cheaper. You're bringing the marginal cost of something useful down by orders of magnitude, which historically has created very large companies. So that would be the venture strategic dreaming map.[00:43:49] swyx: Yeah. And for listeners, you can do this yourself on your own phone with Marble.[00:43:55] Martin Casado: Yeah. Marble.[00:43:55] swyx: Or there are many NeRF apps where you just go on your iPhone and do this.[00:43:59] Martin Casado: Yeah. [00:44:00] In the case of Marble, though, what you do is literally give it an image. With most NeRF apps, you run around and take a whole bunch of pictures and then reconstruct the scene.[00:44:08] swyx: Yeah.[00:44:08] Martin Casado: Things like Marble, the whole generative-3D space, will just take a single 2D image and reconstruct everything,[00:44:16] swyx: meaning it has to fill in
Uh, [00:44:18] Martin Casado: stuff at the back of the table, under the table, the parts the image doesn't see. [00:44:22] So the generative stuff is very different from reconstruction: it fills in the things that you can't see. [00:44:26] swyx: Yeah. Okay. [00:44:26] Sarah Wang: So, [00:44:27] Martin Casado: all right. So now the, [00:44:28] Sarah Wang: no, no, I love the [00:44:29] Martin Casado: adult [00:44:29] Sarah Wang: perspective. Well, I was gonna say, we're very much a tag team. We started this pod with that premise, and I think this is a perfect question to build on that further, [00:44:36] 'cause we truly are tag-teaming all of these together. [00:44:39] Investing in Model Labs, Media Rumors, and the Cursor Playbook (Margins & Going Down-Stack) [00:44:39] Sarah Wang: But I think every investment fundamentally starts with the same two premises. One is that, at this point in time, we actually believe these are N-of-1 founders for their particular craft, and that has to be demonstrated in their prior careers, right? [00:44:56] So we're not investing in every, you know, now the term is "neolab" [00:45:00], but every foundation model, any company, any founder trying to build a foundation model, we're not, um, contrary to popular opinion, we're
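Martin's cost argument boils down to a log-ratio: how many orders of magnitude cheaper is a generated 3D scene than a human-made one? A minimal sketch of that arithmetic, using the rough dollar figures from the conversation (the $4,000/$30,000 human costs and the "less than a dollar" inference cost are his estimates, not exact prices):

```python
import math

# Rough figures from the conversation: cost of a human-made 3D
# reconstruction of a room vs. a generated one.
human_cost_freelancer = 4_000    # USD, hired on a freelance site
human_cost_professional = 30_000 # USD, professional studio work
generated_cost = 1.0             # USD, "less than a dollar" of inference

# Orders of magnitude = log10 of the cost ratio.
for cost in (human_cost_freelancer, human_cost_professional):
    orders = math.log10(cost / generated_cost)
    print(f"${cost:,} vs ${generated_cost:.0f}: ~{orders:.1f} orders of magnitude")
# ~3.6 and ~4.5 orders of magnitude — i.e. roughly the
# "four or five orders of magnitude" cited in the conversation.
```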
We love love. Don't you? You don't have to answer that, I don't really care. Join Spencer, Ty, Andy and special guest Clay Parks as they delve into the depths of AO3's archives with some fanfictions about real people who, like, live and draw breath. Is that messed up? Probably. IDK. You be the judge. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace. A big shout-out to our $10/month patrons: Celeste, Yung Zoe, Dane Stephen, Weedworf, James Lloyd-Jones, Sam Thomas, Josh O'Brien, Kilo, David, Sam, T, Rach, Tomix, Adam W, L M, Revidicism, Jennifer Knowles, Jeremy-Alice, Louis Ceresa, Charles Doyle, Dean, Axon, Themandme, Raouldyke, Stephen Tucker, Lawrence, Rebecca Kimpel, Malek Douglas, Jacon Sauber-Cavazos, Bernventers, William Copping, NewmansOwn, Heather-Pleather, Bunknown, Dinosarden, Bedi, Francis Wolf, King Krang, Anthony C, ASDF, Buffoonworld, Bavbiff, D Love, and Tugboat!
Allen, Rosemary, and Yolanda discuss Ming Yang's proposed $1.5 billion factory in Scotland and why the UK government is hesitating. Plus the challenges of reviving wind turbine manufacturing in Australia, how quickly a blade factory can be stood up, and whether advanced manufacturing methods could give Australia a competitive edge in the next generation of wind energy. Sign up now for Uptime Tech News, our weekly newsletter on all things wind technology. This episode is sponsored by Weather Guard Lightning Tech. Learn more about Weather Guard's StrikeTape Wind Turbine LPS retrofit. Follow the show on YouTube, LinkedIn and visit Weather Guard on the web. And subscribe to Rosemary's "Engineering with Rosie" YouTube channel here. Have a question we can answer on the show? Email us! The Uptime Wind Energy Podcast, brought to you by StrikeTape, protecting thousands of wind turbines from lightning damage worldwide. Visit striketape.com. And now your hosts. Allen Hall: Welcome to the Uptime Wind Energy Podcast. I'm your host Allen Hall, and I'm here with Yolanda Padron and Rosemary Barnes, and we're all in Australia at the same time. We're getting ready for WOMA 2026, which will be underway when this releases; we'll be through the first day. It's gonna be a big conference, and right now we're within a couple of people of selling it out, so it'll be a great event. So those of you listening to this podcast, hopefully you're at WOMA 2026 and we'll see you there. Uh, the news for this week: there are a number of big country-versus-country situations going on. The one at the moment is [00:01:00] Ming Yang in Scotland. As we know, Ming Yang has offered to build a factory in Scotland, putting about one and a half billion pounds into the country, and that is not going so well. They're talking about 3,000 jobs and 1.5 billion in investment, and then
building offshore turbines for Britain and the larger Europe. But the UK government is hesitating and has not approved it yet, and Scotland's kind of caught in the middle. Ming Yang is supposedly looking elsewhere; they're tired of waiting and figure they can probably get another factory somewhere in Europe. I don't think this is gonna end well, everyone. I think Ming Yang is obviously being pushed by the Chinese government to explore Scotland and try to get into Scotland, and leaders in the Scottish government have been meeting with [00:02:00] Chinese officials for a year or two, from what I can tell. If this doesn't end with the factory in Scotland, is China gonna take it out on the UK? And is Ming Yang gonna be able to build a factory in Europe? Europe at the minute is looking into Chinese investments in its wind turbine infrastructure, in terms of state support, funding, and grants, to see if China is undercutting prices artificially. Which I think the answer is gonna be yes. So where does this go? It seems like a real impasse, at a moment when the UK in particular, and the greater Europe, are talking about more than a hundred gigawatts of offshore wind. Yolanda Padron: I mean, just with the business that you mentioned that's coming into the UK, right? Will they have, without Ming Yang, the ability to reach their goals? Allen Hall: So you have the Siemens [00:03:00] factory in Hull. There's a Vestas factory on the Isle of Wight, on sort of the bottom of the country. Right. Vestas has had a facility there for a long time, and the UK just threw about 20 million pounds into reopening the onshore blade portion of that factory 'cause it had been mothballed several months ago.
It does seem like maybe there's an alternative plan within the UK to stand up its own blade manufacturing and turbine manufacturing facilities, to do a lot of things in country. Who? I don't think we know. Is it Siemens? Is it GE? Is it Vestas? Or is it something completely British? Maybe all of the above. Rosemary, having been inside a blade factory for a long time with LM, it's pretty hard to stand up a blade factory quickly. How many years would it take you, if you wanted to start today, before you would actually produce a hundred-meter-long offshore blade? Rosemary Barnes: I reckon you could do it in a year if you had, like, real strong motivation. [00:04:00] Allen Hall: Really? Rosemary Barnes: I think so. I mean, it's a big shed, and most of the delays would be regulatory and, you know, hiring: getting enough people hired and trained, that sort of thing. But if you had good support from the government and not too much red tape to deal with, then, if you've got lots of manufacturing capability elsewhere, you can move people. Like, when I worked at LM there were a few new factories opened while I was working there, and I'm sure that they took longer than a year in terms of when they were first thought of. But once the decision was made, I actually dunno how long it took. So it is a guess, but it didn't take as long as you would think. It wasn't years and years, that's for sure. And what they would do is, they don't hire a whole new workforce, train them up right from the start, and then start operating once they're ready to go. What they'll do to start with is take a bunch [00:05:00] of really good people from the global factories, all around, who will go over, from all roles.
And I'm not talking just management at all; it will include technicians, every role in the factory. They'll get people from another factory to go over, and they do some of the work while they're training up local people, so there's more of a gradual handover. And also so that the best practices get spread from factory to factory and make a good global culture, 'cause obviously you've got the same design everywhere; you want the same quality coming out everywhere. As much as you try to document everything in work instructions that should make it, you know, impossible to do things wrong, you never quite get to that standard, and there is a lot to be said for just the know-how and the culture of the people doing the work. Allen Hall: So the infrastructure would take about a year to build, but the people would have to come from the broader Europe, then, at [00:06:00] least temporarily. Rosemary Barnes: That would be the fastest and safest way to do it. Like, if it's a brand-new company that has never made a wind turbine before, and someone just got, I don't know, a billion dollars and said, let's start a wind turbine factory, then I think it's gonna be a few years, and there's gonna be some learning curve before it starts making blades fast enough and with the correct quality. Um, yeah. But if you're just talking about one more factory from a company that already has half a dozen or a dozen wind turbine blade factories elsewhere in the world, then that's where I think it can be done fast. Allen Hall: This type of situation actually pops up a lot in aerospace: power plants, engines. The jet engines on a lot of aircraft are kind of a combined effort from big multinational companies.
So if they want to build something in country, they'll hook up with a GE or a Honeywell or somebody who makes jet engines, and they'll create this division and they'll [00:07:00] stand this plant up. Maybe it's gonna be something like that, where GB Energy is in the middle providing the funding and some of the resources, but they bring in another company, like a Siemens, like a Vestas, like a GE, or a Nordex even, to come in and do the operational aspects and maybe some of the training pieces. There's a funding arm and a technical arm, and they create a standalone British company to manufacture towers, to manufacture nacelles, to manufacture blades. Is that where you think this goes? Rosemary Barnes: It depends also what kind of component you're talking about. I was talking about a specific example of wind turbine blades, which are a moderately complex thing to make, I would say. And then if you go on the simpler side, wind turbine towers: most countries would have the rough expertise needed to do that. Nearly all towers at the moment come out of [00:08:00] China, or out of Asia, with China being the vast bulk of those. And it's because, aside from having very, very cheap steel, they also have just got huge factories that are set up with assembly lines so that there's not very much moving of things back and forth. They have the exact right bit of equipment to do the exact right kind of rolling and welding, and they're not moving tower sections around a lot. That makes it really hard for other countries to compete. But it's not because they couldn't make towers; it's because they would struggle to make them cheap enough.
Um, so yeah, if you set up a wind turbine tower factory in Australia, you could buy the equipment that you needed for a few hundred million dollars, and you could make it. But unless you have enough orders to keep that factory busy, with the volume that you need to keep all of that [00:09:00] modern equipment operating absolutely around the clock, your towers are gonna be expensive out of that facility. So cost is the main barrier when it comes to towers. Allen Hall: With Vestas and Mitsubishi recently having a partnership and then ending that partnership, it would seem like Vestas has the most experience in putting large corporations together to work on an advanced wind turbine project. It would make sense to me if Vestas was involved, because Vestas also has facilities in the UK. Are they the leading choice, you think, just because they have that experience with Mitsubishi and they have something in country, or do you think it's somebody else? Is it GE? Rosemary Barnes: My instinct is saying Vestas, yes. Allen Hall: Me too. Okay. Rosemary Barnes: GE's wind turbine manufacturing seems to be in a bit more of an ebb rather than a flow right now, so I [00:10:00] mean, that's probably as much as what it's based on. And then yes, the location of factories: there are already some Vestas factories, Vestas people, in the UK, so that would make it easier. : Delamination and bondline failures in blades are difficult problems to detect early. These hidden issues can cost you millions in repairs and lost energy production. CIC NDT are specialists in detecting these critical flaws before they become expensive burdens. Their non-destructive test technology penetrates deep into blade materials to find voids and cracks traditional inspections completely miss. CIC NDT maps
every critical defect, delivers actionable reports, and provides support to get your blades back in service. So visit cicndt.com, because catching blade problems early will save you millions. [00:11:00] Allen Hall: Can you build a renewable energy future on someone else's supply chain? Well, in Australia, the last domestic wind tower manufacturer shut down last year after losing a 15-year battle against cheaper imports from China. Now the Albanese government wants to try again, launching a consultation to revive local manufacturing. Meanwhile, giant turbines are rising in Western Australia's largest wind farms, soon to power 164,000 homes. The steel towers, blades, and nacelles all arrive on ships, and the question is whether that's going to change anytime soon. Rosemary? Rosemary Barnes: Yeah, it's a topic I've thought about a lot and done a fair bit of work on as well: local manufacturing and whether you should or shouldn't. The Australian government does try to support local manufacturing in general, [00:12:00] and in particular for renewables, but they've focused much more on solar and batteries with their manufacturing support. The Australian government and agencies like ARENA, the Australian Renewable Energy Agency, have not traditionally supported wind, like, at all. It bothers me, because actually Australia is a fantastic place to be developing some of these supporting technologies for wind energy, and even the next generation of wind energy technologies, if not the manufacturing itself. There are heaps of things that make Australia a really natural place to develop that. The thing about Australian projects is that they are big, right?
That makes it really attractive to developers, because in Europe, where they're, you know, still building wind, an onshore wind farm is like a couple of turbines here or there, maybe five; a big wind farm would be 10 turbines. In Australia it's like a hundred, 200 turbines at a time for onshore, and they're also choosing really big turbines. For some reason, Australian developers really like to [00:13:00] choose the latest technologies. And then if we think about some of the new supporting technologies for existing wind turbines, let's talk about O&M. There's a whole lot of O&M technologies, and Australia's a great place for that too, because Australian wind farms spend so much on O&M compared to other countries. So a technology provider that can improve some of those pain points can get a positive return on investment much quicker in Australia than they would be able to somewhere like America or Europe. So I think it makes sense to develop here. Allen Hall: With the number of wind farms, Rosie, I completely agree with you. And when we were talking about the Warradarge Wind Farm, which is the Western Australian wind farm that's gonna expand, they're adding 30 turbines to provide 283 megawatts. That's like a nine-and-a-half-megawatt machine. Those are big turbines. Those are new turbines, right? That's not something that's been around for a couple of years; they've been around for a couple of months in terms of the lifespan of wind [00:14:00] turbines. So if Australia's gonna go down the pathway of larger turbines, the most advanced turbines, it has to make sense that some of this has to be developed in country, just because you need to have the knowledge to go repair, modify, improve, adjust, and figure out what the next generation is, right? I don't know how this happens. Rosemary Barnes: We see some examples of that.
Right. And I think that Fortescue is the best example of companies that are trying to think forward to what they're going to need. They've got ambitious plans for putting in some big wind farms with big wind turbines in really remote locations, so there are a lot of obvious challenges there, and I know that they're thinking ahead and working through that. And so, you know, we saw their investment in Nabrawind, the Spanish company, and in particular their Nabralift. The bit of the tower that attaches to the rotor looks [00:15:00] pretty normal, but then they make it taller by slotting in a lattice framework underneath, jacking it up, slotting in another one underneath, and jacking it up again. So they don't need a gigantic crane. I mean, it's still a huge crane, but it doesn't need to be as big, because the rotor starts off already on there by the time the tower gets up to its full height. So that's an innovative solution, I think, and I would be very surprised if they weren't also looking at every other technology that they're gonna need in these turbines. Allen Hall: If Australia's gonna go down the pathway of large turbines onshore, then the manufacturing needs to happen in country. There's no other way to do it. And you could have manufacturing facilities in Western Australia or Victoria and still get massive turbine blades shipped or trucked either way, to [00:16:00] wherever they need to go in country. It's not that hard to get around Australia, unlike other countries. Like, Germany has a lot of mountains, and you have bridges and narrow roads and all that; it's much more expansive in Australia, where you can move big projects around.
And obviously, with all the mining that happens in Australia, it's pretty much normal. So I'm just trying to get over the hurdle of where the Albanese government is having an issue sort of pushing this forward. It seems like a simple thing, because the Australian infrastructure is already ready. Someone needs to flip the switch and say go. Rosemary Barnes: I don't know if I'd say that we're ready, 'cause Australia doesn't have a whole lot of manufacturing of anything at the moment. It's not true that we have no manufacturing; that's what Australians like to say, that we don't manufacture anything, and that's not true. We do manufacture. We have some pretty good advanced manufacturing. But if you just look at the hard economics of wind turbine manufacturing in Australia, of solar panel manufacturing, battery manufacturing, any of that, it is cheaper to just get it from China, not least [00:17:00] because some of those components are subsidized by the Chinese government. If you start saying, okay, we're gonna have local manufacturing, you can achieve that either by supporting the local manufacturing industry, like giving subsidies to our manufacturing, or you could make a local content requirement: say, if you want project approval for this, then it has to have so much local content. You have to do it really carefully, because if you get the settings wrong, then you just end up with very, very expensive renewable energy. And at the moment, wind especially is expensive, and I think it's still getting more expensive in Australia; it has been basically since the pandemic. If you then said, we've gotta also make it in Australia, then you add a bunch more costs, and we would probably just not have wind energy then, or new wind energy. So there needs to be that balance. But I think that even though you can say, okay, cheapest is best, it is also not good to rely
[00:18:00] exclusively on other countries, and especially not on just one other country, to give you all of your energy infrastructure. If it was up to me, I would be much more supporting the next wave of technologies. I would really love to see, you know, a new Australian wind turbine blade manufacturing method. At some point in the next decade, advanced manufacturing is gonna make it into wind turbine blades. It's already there in some of the other components. Allen Hall: Wait, so you just said if we were gonna build a factory in Scotland, it would take about a year. Why would it take 10 years to do it in Australia? Australia's a nice place to live. Rosemary Barnes: No, I didn't say that it would take ten years; I said sometime in the next decade. Around the world, wind turbine blades are basically handmade, right? There are some machines that are helping people, but you have a look at a picture of a wind turbine blade factory and there's 20 people walking over a blade, smoothing down glass. And at some point we're gonna start using advanced manufacturing methods. I [00:19:00] mean, there are really advanced composite manufacturing methods, with individual fiber placement and 3D printing with continuous fibers, and that's being used a lot for aerospace components. It's early days for that technology, and there is no barrier to putting those technologies, say, on a gantry that just ran down the length of a whole blade. That could be done, if it was economic. That's the kind of technology that Australia should be supporting before it's the mainstream and everybody else has already done it, right?
You need to find the next thing, and ideally not just one next thing but several next things, because you don't know ahead of time which is gonna be the winner. Allen Hall: That hasn't been the tack that China has taken. The latest technology in batteries is not something that China is producing today; they're producing a generation prior, but they're doing it at scale. At some point the Chinese just said, we're stopping here, we're gonna do this kind of [00:20:00] battery, and that's it, and away we go. If we keep waiting until the next generation of blade techniques comes out, I think we're gonna be waiting forever. Rosemary Barnes: I don't see why. I think we should, you know, make the next generation of blade technologies. Yolanda Padron: I think it makes sense for someplace like Australia, right? Because we've talked about the fact that here you have to consider a lot of factors in operation that you don't have to consider in other places, especially for blades, right? So if you can eliminate, for the most part, all of those issues that are happening in the factory at manufacturing, then that can really help boost the next operational projects. Allen Hall: So then what you're saying is that there are new technologies, but what stage are they at? Are they TRL 2, TRL 5, TRL 7? How close is this technology? Because I'd hate for Australia to miss out on this big opportunity. Rosemary Barnes: Fraunhofer has actually just published an article recently [00:21:00] about some, I can't remember if it was fiber tape placement or if it was printed, small wind turbine blades. Small wind is a nice, bite-sized kind of thing that you can master a lot quicker: you can make a thousand small wind turbines and learn a lot more than making one 100-meter-long blade.
That would probably be bad, because it's your first one and you didn't realize all of the downsides to the new technology yet. So I think it is kind of promising. But in terms of a major, let's say a hundred-meter-long blade that was made with 3D printing, that would be TRL 1. It's an idea now; nobody has actually made one or done too much, as far as I know. I don't think you could get to nine over the next year. Like I said, I think sometime in the next decade will be when that comes. Allen Hall: Okay, so you couldn't get to a nine that quickly? Rosemary Barnes: No, it is possible. Yeah. You gotta put some money into it. If someone wants to give me [00:22:00] enough money, then I'll make it happen. I would absolutely be able to make that happen, but I don't know when it's gonna be cheap enough. Allen Hall: I would just love to see it. If you've got a factory squirreled away somewhere in the inland of Australia that is making blades at quantity, or has the technology to do that, I would love to see it, because that would be amazing. Rosemary Barnes: Technologies don't just fall out of the sky, you know. You force them into existence. That's what you do. You know what this comes down to? Have you ever done the, is it Myers-Briggs, where you get the letters of your personality? You and I are in opposite corners in some ways. Allen Hall: That wraps up another episode of the Uptime Wind Energy Podcast. If today's discussion sparked any questions or ideas, and it surely should, we'd love to hear from you. Reach out to us on LinkedIn, particularly Rosie, so that's Rosemary Barnes on LinkedIn. Don't forget to subscribe so you never miss an episode. And if you found value in today's conversation, please leave us a review.
It really helps other wind [00:23:00] energy professionals discover the show. For Rosie and Yolanda, I'm Allen Hall, and we'll see you here next week on the Uptime Wind Energy Podcast.
A diagnosis like cancer can stir up many emotions, both in the people living through it and in their family and friends.
- Delegations from the Party Central Committee, the National Assembly, the State President's Office, the Government, and the Central Committee of the Vietnam Fatherland Front laid wreaths at the mausoleum in tribute to President Hồ Chí Minh. - Acting Minister of Industry and Trade Lê Mạnh Hùng called for ensuring the supply of petrol and oil, the "lifeblood" of the economy, in all circumstances. - After more than 10 days, the first Spring Fair of 2026 will close this evening. - On the last working day before the 2026 Lunar New Year holiday, the number of people leaving the major cities surged, raising traffic pressure on gateway routes. - Leaders of the 27 EU member states agreed on a plan to restructure the economy, aiming to raise competitiveness and ensure sustainable growth under pressure from the US, China, and Russia. - For World Radio Day, February 13, UNESCO chose the message "AI is a tool, not a voice," as the radio industry enters a period of profound transformation amid the wave of technology and artificial intelligence.
Laudetur Jesus Christus - Praised be Jesus Christ. The daily Vatican Radio program from Vatican News Tiếng Việt. Today's program: 0:00 News bulletin. 16:38 Gospel reflection: Fr. Giuse Trần Sĩ Nghị, SJ, reflects on the Gospel for the Sixth Sunday in Ordinary Time. --- These images belong to the Holy See's Dicastery for Communication. Any use of these images by third parties is prohibited and constitutes copyright infringement unless authorized in writing by the Dicastery for Communication. Copyright © Dicasterium pro Communicatione - All rights reserved.
On February 13, at the Osaka Auto Messe 2026 being held at INTEX Osaka in Suminoe Ward, Osaka, LM corsa and its parent team OTG Motor Sports announced their 2026 motorsport entry plans. Super […]
You've been wondering why YouTube ads are so bad these days. I know it, you know it. Well... mea culpa, mea culpa, mea maxima culpa. Join Spencer, Ty, and Andy as they pick the ads that you are going to see for the next 20 years on YouTube, and discuss ways to make them even worse. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace. A big shout-out to our $10/month patrons: Celeste, Yung Zoe, Dane Stephen, Weedworf, James Lloyd-Jones, Sam Thomas, Josh O'Brien, Kilo, David, Sam, T, Rach, Tomix, Adam W, L M, Revidicism, Jennifer Knowles, Jeremy-Alice, Louis Ceresa, Charles Doyle, Dean, Axon, Themandme, Raouldyke, Stephen Tucker, Lawrence, Rebecca Kimpel, Malek Douglas, Jacon Sauber-Cavazos, Bernventers, William Copping, NewmansOwn, Heather-Pleather, Bunknown, Dinosarden, Bedi, Francis Wolf, King Krang, Anthony C, ASDF, Buffoonworld, Bavbiff, D Love, and Tugboat!
Overthinking can be, all by itself, a tremendously distressing experience.
Laudetur Jesus Christus - Praised be Jesus Christ. The daily Vatican Radio program from Vatican News Tiếng Việt. Today's program: 0:00 News bulletin. 17:05 Gospel reflection: Fr. Đa Minh Vũ Duy Cường, SJ, reflects on the Gospel for the Fifth Sunday in Ordinary Time. 25:15 Women religious in the Church: the Daughters of Our Lady of the Visitation sisters in Kenya restore families and heal hearts with love. --- These images belong to the Holy See's Dicastery for Communication. Any use of these images by third parties is prohibited and constitutes copyright infringement unless authorized in writing by the Dicastery for Communication. Copyright © Dicasterium pro Communicatione - All rights reserved.
In this eye-opening episode, Nurse Erica sits down with Bob Funk, creator of LaborLab, the only nonprofit watchdog organization tracking corporate spending on union-busting. Bob pulls back the curtain on the multi-million dollar industry dedicated to keeping healthcare workers from organizing, revealing how hospitals and healthcare systems spend millions of dollars on union-busting consultants. They explore LaborLab's union-buster tracker and discuss the common tactics employers use to discourage nurses from organizing, from captive audience meetings to intimidation and retaliation. Bob explains the Labor Management Reporting and Disclosure Act of 1959 and how the LM-20 forms are supposed to work—along with the troubling reality that many employers and union-busters simply don't comply with legally required financial reporting. The conversation dives into the "persuader loophole" that allows consultants to hide their anti-union activities and discuss why the PRO Act matters for nursing. They don't shy away from the controversial topic of scab nurses and the damage strike-breaking causes to both patient care and the profession. Whether you're curious about organizing, already involved in union efforts, or just want to understand the forces working against nurses' collective power, this episode is essential listening! Interested in Sponsoring the Show? Email with the subject NURSES UNCORKED SPONSOR to: nursesuncorked@gmail.com Support the Show: Help keep Nurses Uncorked going and become an official Patron! Gain early access to episodes, exclusive bonus content, giveaways, Zoom parties, shout-outs, and much more. Become a Wine Cork, Wine Bottle, Decanter, Grand Preserve, or even a Vineyard Member: https://patron.podbean.com/nursesuncorkedpodcast ETSY Shop: Stop Healthcare Worker Violence! 
https://www.etsy.com/shop/TheNurseErica Labor Lab: https://laborlab.us/ https://www.tiktok.com/@laborlab.us https://www.instagram.com/laborlab_us/?hl=en https://x.com/LaborLabUS Chapters: 00:00 Introduction 03:40 Testifying Before House of Representatives 08:00 Employer Reporting Noncompliance 11:50 Persuader Loophole 14:50 Labor Lab 17:27 Union Buster Tracker 19:00 Common Union Busting Tactics 24:50 Captive Audience Meetings 28:14 Legal Protections 35:37 Union Busters 45:00 Breaking Down LM-20 Disclosure Forms 48:30 Pitfalls of Union Organizing 53:30 National Labor Relations Board 57:49 The PRO Act 1:00:12 Healthcare System Consolidations 1:01:40 Nursing Strikes 1:05:25 Strike Insurance 1:10:55 Scabs Damage the Profession 1:27:49 Conclusion Help the podcast grow by giving episodes a like, download, follow, and a 5-star rating! Please follow Nurses Uncorked at: tiktok.com/nurses-uncorked https://youtube.com/@NursesUncorkedL You can listen to the podcast at: podcasts.apple/nursesuncorked spotify.com/nursesuncorked podbean.com/nursesuncorked iheart.com/nurses-uncorked Follow Nurse Erica: @TheNurseErica on TikTok, Instagram, Facebook and YouTube! https://www.youtube.com/@thenurseerica9094 https://www.instagram.com/the.nurse.erica/ DISCLAIMER: This Podcast and all related content published or distributed by or on behalf of Nurse Erica or Nurses Uncorked Podcast is for informational, educational and entertainment purposes only and may include information that is general in nature and that is not specific to you. Any information or opinions expressed or contained herein are not intended to serve as legal advice, or replace medical advice, nor to diagnose, prescribe or treat any disease, condition, illness or injury, and you should consult the health care professional of your choice regarding all matters concerning your health, including before beginning any exercise, weight loss, or health care program. 
If you have, or suspect you may have, a health-care emergency, please contact a qualified health care professional for treatment. The views and opinions expressed on Nurses Uncorked do not reflect the views of our employers, professional organizations or affiliates. Any information or opinions provided by guest experts or hosts featured within the website or on the Nurses Uncorked Podcast are their own, not those of Nurse Erica or Nurses Uncorked LLC. Accordingly, Nurse Erica and Nurses Uncorked cannot be responsible for any results, consequences or actions you may take based on such information or opinions. All content is the sole property of Nurses Uncorked, LLC. All copyrights are reserved and the exclusive property of Nurses Uncorked, LLC.
What the @*$^ did you just $%%^ing say about me, you little neph? I'll have you know I graduated top of my class in the Navy Uncs, and I've been involved in numerous secret raids on Aunt-Quaeda, and I have over 300 confirmed beers. Join Spencer, Ty, and Andy as they write the greatest TV show in history: the secret history of the Uncles. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace. A big shout-out to our $10/month patrons: Celeste, Yung Zoe, Dane Stephen, Weedworf, James Lloyd-Jones, Sam Thomas, Josh O'Brien, Kilo, David, Sam, T, Rach, Tomix, Adam W, L M, Revidicism, Jennifer Knowles, Jeremy-Alice, Louis Ceresa, Charles Doyle, Dean, Axon, Themandme, Raouldyke, Stephen Tucker, Lawrence, Rebecca Kimpel, Malek Douglas, Jacon Sauber-Cavazos, Bernventers, William Copping, NewmansOwn, Heather-Pleather, Bunknown, Dinosarden, Bedi, Francis Wolf, King Krang, Anthony C, ASDF, Buffoonworld, Bavbiff, D Love, and Tugboat!
VOV1 - On the afternoon of February 3, in Washington D.C., USA, Acting Minister of Industry and Trade Lê Mạnh Hùng witnessed the signing ceremony of Memoranda of Understanding (MOU) on cooperation between Binh Son Refining and Petrochemical Company (Công ty Lọc hoá dầu Bình Sơn) and leading American energy partners.
I think this is one of the meditations that best describes the core of what mindfulness is and what characterizes it.
LM reports how public employment has surged by 523,600 people under Pedro Sánchez's government.
How are you naming your pitbull that. You know you can't be naming a pitbull that word. Come on with that nonsense. Join Spencer, Ty, and Andy as they decide once again who would win in a fight between King Von and Mort Rifkin. You know, normal discussions. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace. A big shout-out to our $10/month patrons: Celeste, Yung Zoe, Dane Stephen, Weedworf, James Lloyd-Jones, Sam Thomas, Josh O'Brien, Kilo, David, Sam, T, Rach, Tomix, Adam W, L M, Revidicism, Jennifer Knowles, Jeremy-Alice, Louis Ceresa, Charles Doyle, Dean, Axon, Themandme, Raouldyke, Stephen Tucker, Lawrence, Rebecca Kimpel, Malek Douglas, Jacon Sauber-Cavazos, Bernventers, William Copping, NewmansOwn, Heather-Pleather, Bunknown, Dinosarden, Bedi, Francis Wolf, King Krang, Anthony C, ASDF, Buffoonworld, Bavbiff, D Love, and Tugboat!
Sometimes all we need is to give ourselves a couple of minutes at night to make a difference by the time we go to bed. Even just a few moments can make a difference in our sleep. Sending you a big hug.
LM reports how public debt is growing again: it rose by 5,071 million euros in November, exceeding 100% of GDP.
LM publishes what Marta Serrano, who held the post of Secretary General of Land Transport between 2023 and 2025, had said.
From shipping Gemini Deep Think and IMO Gold to launching the Reasoning and AGI team in Singapore, Yi Tay has spent the last 18 months living through the full arc of Google DeepMind's pivot from architecture research to RL-driven reasoning: watching his team go from a dozen researchers to 300+, training models that solve International Math Olympiad problems in a live competition, building the infrastructure to scale deep thinking across every domain, and driving Gemini to the top of the leaderboards in every category. Yi returns to dig into the inside story of the IMO effort and more!

We discuss:
* Yi's path: Brain → Reka → Google DeepMind → Reasoning and AGI team Singapore, leading model training for Gemini Deep Think and IMO Gold
* The IMO Gold story: four co-captains (Yi in Singapore, Jonathan in London, Jordan in Mountain View, and Tong leading the overall effort), training the checkpoint in ~1 week, a live competition in Australia with professors punching in problems as they came out, and the tension of not knowing whether they'd hit Gold until the human scores came in (because the Gold threshold is a percentile, not a fixed number)
* Why they threw away AlphaProof: "If one model can't do it, can we get to AGI?" The decision to abandon symbolic systems and bet on end-to-end Gemini with RL was bold and non-consensus
* On-policy vs. off-policy RL: off-policy is imitation learning (copying someone else's trajectory); on-policy is the model generating its own outputs, getting rewarded, and training on its own experience: "humans learn by making mistakes, not by copying"
* Why self-consistency and parallel thinking are fundamental: sampling multiple times, majority voting, LM judges, and internal verification are all forms of self-consistency that unlock reasoning beyond single-shot inference
* The data efficiency frontier: humans learn from 8 orders of magnitude less data than models, so where's the bug? Is it the architecture, the learning algorithm, backprop, off-policyness, or something else?
* Three schools of thought on world models: (1) Genie/spatial intelligence (video-based world models), (2) Yann LeCun's JEPA plus FAIR's code world models (modeling internal execution state), (3) the amorphous "resolution of possible worlds" paradigm (curve-fitting to find the world model that best explains the data)
* Why AI coding crossed the threshold: Yi now runs a job, gets a bug, pastes it into Gemini, and relaunches without even reading the fix: "the model is better than me at this"
* The Pokémon benchmark: can models complete the Pokédex by searching the web, synthesizing guides, and applying knowledge in a visual game state? "Efficient search of novel idea space is interesting, but we're not even at the point where models can consistently apply knowledge they look up"
* DSI and generative retrieval: re-imagining search as predicting document identifiers with semantic tokens, now deployed at YouTube (semantic IDs for RecSys) and Spotify
* Why RecSys and IR feel like a different universe: "modeling dynamics are strange, like gravity is different; you hit the shuttlecock and hear glass shatter; cause and effect are too far apart"
* The closed-lab advantage is increasing: the gap between frontier labs and open source is growing because ideas compound over time, and researchers keep finding new tricks that play well with everything built before
* Why ideas still matter: "the last five years weren't just blind scaling; transformers, pre-training, RL, self-consistency all had to play well together to get us here"
* Gemini Singapore: hiring RL and reasoning researchers, looking for a track record in RL or exceptional achievement in coding competitions, and building a small, talent-dense team close to the frontier

Yi Tay
* Google DeepMind: https://deepmind.google
* X: https://x.com/YiTayML

Chapters
00:00:00 Introduction: Returning to Google DeepMind and the Singapore AGI Team
00:04:52 The Philosophy of On-Policy RL: Learning from Your Own Mistakes
00:12:00 IMO Gold Medal: The Journey from AlphaProof to End-to-End Gemini
00:21:33 Training IMO Cat: Four Captains Across Three Time Zones
00:26:19 Pokémon and Long-Horizon Reasoning: Beyond Academic Benchmarks
00:36:29 AI Coding Assistants: From Lazy to Actually Useful
00:32:59 Reasoning, Chain of Thought, and Latent Thinking
00:44:46 Is Attention All You Need? Architecture, Learning, and the Local Minima
00:55:04 Data Efficiency and World Models: The Next Frontier
01:08:12 DSI and Generative Retrieval: Reimagining Search with Semantic IDs
01:17:59 Building GDM Singapore: Geography, Talent, and the Symposium
01:24:18 Hiring Philosophy: High Stats, Research Taste, and Student Budgets
01:28:49 Health, HRV, and Research Performance: The 23kg Journey

Get full access to Latent.Space at www.latent.space/subscribe
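The self-consistency idea raised in this episode (sample the model several times, then take a majority vote over the final answers) can be sketched in a few lines. This is a generic illustration, not Gemini's implementation; `sample_model` is a hypothetical stand-in for any stochastic LM call.

```python
from collections import Counter
from typing import Callable

def self_consistency(sample_model: Callable[[str], str], prompt: str, n: int = 8) -> str:
    """Sample the model n times and return the majority-vote answer.

    sample_model is any function that may return a different answer string
    on each call, e.g. an LM sampled at temperature > 0.
    """
    answers = [sample_model(prompt) for _ in range(n)]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Toy stand-in "model" that answers correctly 3 times out of 5.
_canned = iter(["42", "41", "42", "42", "17"])
majority = self_consistency(lambda p: next(_canned), "What is 6*7?", n=5)
print(majority)  # prints 42: the majority answer survives individual mistakes
```

In practice the vote can be replaced or supplemented by an LM judge or an internal verifier scoring each sample, but the structure (many parallel samples, one aggregation step) is the same.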
It's normal to worry about what might happen; at the end of the day, it's part of what makes us human. But when worry extends to things small and large and shows up all the time, it can become distressing. At the core of anxiety are fear and worry about what could happen, along with the sense of loss of control that comes with them. Working through all of this means recognizing that, regardless of what may happen, we can learn to be at peace in the process.
Record prices. Wild color combinations. And a white GTO that quietly told the real story. In this episode, I break down the Bachman Ferrari sale—why so many cars shattered records, why the boldest (and loudest) specs seemed to win, and what this means for the broader Ferrari market going forward. We talk about how extreme colors and one-off specifications are fueling a new wave of Tailor Made Ferraris, often with investment hopes attached, and why that strategy doesn't always end the way people expect. I also dig into the surprises: softness in cars like the Superamerica, Dinos, and Daytonas, the continued strength of Scuderia, Stradale, and Aperta models, and why the white Ferrari 330 LM / 250 GTO selling for $35 million wasn't as shocking as it looked—unless you weren't paying attention. Finally, I connect the dots to what this could mean for upcoming auctions, including RM Sotheby's Arizona, and why fundamentals still matter—even in a market that sometimes feels like a rainbow-painted Skittles car just crossed the block.
Listen to the January 2026 edition of The Postal Record. Browse the digital issue here. 00:00 Introduction 00:14 Looking back and looking forward, by President Brian L. Renfroe 05:03 News from Washington 12:05 2025 JCAM is now available 18:58 Register now for the food drive 25:14 Informal Step A Training announced 30:25 After a year of standing strong NALC is ready to fight on. 2026: A look ahead 50:07 Leadership Academy founder asks grads to serve other letter carriers back home 58:26 Important benefits new letter carriers should expect to receive from USPS 01:11:22 Caretakers of the community 01:39:04 George Meany, first president of the AFL-CIO 01:44:30 NALC Branch Publication competition call for entries 01:50:07 Carriers and the mail make news online 01:56:31 From airwaves to the page: A creative journey and tribute to lifelong friends 02:04:00 Veterans' legislative update 02:16:56 Executive Vice President Paul Barner: An update to cases pending at the Interpretive step 02:27:38 Vice President James Henry: NALC needs you 02:32:57 Secretary-Treasurer Nicole Rhine: Reporting to the DOL: Forms LM-2, LM-3 and LM-4 02:38:57 Assistant Secretary-Treasurer Mack Julion: Postal protection 02:44:33 Director of City Delivery Christopher Jackson: USPS pilot testing and additional revenue streams 02:49:48 Director of Safety and Health Manuel Peralta Jr.: Safety committees 02:56:45 Director of Retired Members Dan Toth: Roth TSP—Another tool to manage your taxes 03:02:49 Director of Life Insurance James Yates: MBA Retirement Savings Plan 2026 update 03:08:32 Director of Health Benefits Stephanie Stewart: New benefits and wellness programs 03:14:40 Contract Talk: Route inspections 03:36:41 Regional Workers' Compensation Assistant Coby Jones: Preexisting conditions 03:43:18 Staff report - CLUW: "Women of the World Unite"
That's right, folks. You thought TGOFV would never do class warfare? You simply don't know us well enough. Join Spencer, Ty, and Andy as they debate over which type of worker is the only good one to be: computer guy or construction site wolf-whistler. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace. A big shout-out to our $10/month patrons: Celeste, Yung Zoe, Dane Stephen, Weedworf, James Lloyd-Jones, Sam Thomas, Josh O'Brien, Kilo, David, Sam, T, Rach, Tomix, Adam W, L M, Revidicism, Jennifer Knowles, Jeremy-Alice, Louis Ceresa, Charles Doyle, Dean, Axon, Themandme, Raouldyke, Stephen Tucker, Lawrence, Rebecca Kimpel, Malek Douglas, Jacon Sauber-Cavazos, Bernventers, William Copping, NewmansOwn, Heather-Pleather, Bunknown, Dinosarden, Bedi, Francis Wolf, King Krang, Anthony C, ASDF, Buffoonworld, Bavbiff, D Love, and Tugboat!
Why are so many parents refusing to register birth certificates for their kids? The answer might shock you.
Allen, Joel, Rosemary, and Yolanda cover major offshore wind developments on both sides of the Atlantic. In the US, Ørsted's Revolution Wind won a court victory allowing construction to resume after the Trump administration's suspension. Meanwhile, the UK awarded contracts for 8.4 gigawatts of new offshore capacity in the largest auction in European history, with RWE securing nearly 7 gigawatts. Plus Canada's Nova Scotia announces ambitious 40 gigawatt offshore wind plans, and the crew discusses the ongoing Denmark-Greenland tensions with the US administration. Sign up now for Uptime Tech News, our weekly newsletter on all things wind technology. This episode is sponsored by Weather Guard Lightning Tech. Learn more about Weather Guard's StrikeTape Wind Turbine LPS retrofit. Follow the show on YouTube, Linkedin and visit Weather Guard on the web. And subscribe to Rosemary's "Engineering with Rosie" YouTube channel here. Have a question we can answer on the show? Email us! The Uptime Wind Energy Podcast, brought to you by StrikeTape, protecting thousands of wind turbines from lightning damage worldwide. Visit striketape.com. And now your hosts, Allen Hall, Rosemary Barnes, Joel Saxon and Yolanda Padron. Welcome to the Uptime Wind Energy Podcast. I'm Allen Hall, along with Yolanda, Joel and Rosie. Boy, a lot of action in the US courts. And as you know, for weeks, American offshore wind has been holding its breath, and a lot of people's jobs are at stake right now. The Trump administration suspended, uh, five major projects on December 22nd, and they're still citing national security concerns. Billions of dollars are really in the balance here. Construction vessels for most of these sites are just doing nothing at the minute, but the courts are stepping in, and Ørsted won a [00:01:00] key victory when a federal judge allowed its Revolution Wind project off the coast of Rhode Island to resume construction immediately. 
So everybody's excited there, and it does sound like Ørsted is trying to finish that project as fast as they can. And Equinor and Dominion Energy, which are two of the other bigger projects, are fighting similar battles. Equinor is supposed to have a hearing in the next couple of days as we're recording. Uh, but the message is pretty clear from developers: they have invested too much to walk away, and if they get an opportunity to wrap these projects up quickly, they are going to do it. Now, Joel, before the show, we were talking about Vineyard Wind. Vineyard Wind was on hold, and I think it may not even be on hold right now; I have to go back and look. But when they were put on hold, uh, the question was, the turbines that were operating, were they able to continue operating? And the answer initially, I thought, was no. But it was yes: the turbines that were [00:02:00] producing power were allowed to continue to produce power. What was in the balance were the remaining turbines that were still being installed or, uh, being upgraded. So there's a lot going on right now. But, back to your earlier point, Joel, and maybe you can discuss this: there is an offshore wind farm called Block Island really close to all these other wind farms, and it's been there for four or five years at this point. No one's said anything about that wind farm. Speaker: I think it's been there, to be honest with you, since like 2016 or '17. It's been there a long time. Is it that old? Yeah, yeah, yeah, yeah. So when we've been talking through all this, it gets lost in the shuffle, and it shouldn't, because that's really the first offshore wind farm in the United States. We keep talking about all these big, you know, utility-scale massive things, but that is a utility-scale wind farm as well. Correct me if I'm wrong, Yolanda, is it five turbines or six? It's five. They're decent-sized turbines sitting on jackets. 
They're just, uh, they're only a couple miles offshore. They're not way offshore. But throughout all of these issues that we've had, um, with [00:03:00] these injunctions and stopping construction and reviewing permits and all these things, Block Island has just been spinning, producing power, uh, for the locals there off the coast of Rhode Island. So the question was: okay, all these other wind farms that are partially constructed, have they been spinning? Are they producing power? And my mind goes to this, um, as a risk-reduction effort. I wonder, uh, if the cable-lay timelines were what they were, right? So, as a risk-reduction effort, and this seems really silly to have to think about, if you have your offshore substation, was the main export cable connected at some of these, like Revolution Wind, where they have the injunction right now? Was that export cable connected, and were the inter-array cables regularly being connected to turbines as they came online? Like, it wasn't a COD where we turned the switch and had to wait for all 62 turbines, right? So to our [00:04:00] knowledge, and, uh, please reach out to any of us on LinkedIn or by email or whatever, to our knowledge, the turbines that are in production have still been spinning. It's the construction activities that have been stopped. But now, hey, Revolution Wind is 90% complete, and they're back out and running, uh, on construction activities as of today. Speaker 2: It was in the last 48 hours. So this is a good sign, because I think as the other wind farms go through the courts, they're gonna essentially run through this same judge. That tends to happen, because they have done all the research already. So you likely get the same outcome for all the other wind farms, although they have to go through the process. 
You can't do like a class action; at least that doesn't appear to be in play at the minute. Uh, they're all gonna have to go through this little bit of a process. But what the judge is saying, essentially, is that the concern from the Department of War and the Department of Interior is [00:05:00] make-believe. I don't wanna frame it that way; it's not framed that way, the way it's written, there's a lot more legalistic terms about it. But basically, they're saying: they tried to stop it before, and the Trump administration didn't get the result they wanted. So the Trump administration ramped it up by saying it was something that was classified, in part under the Department of War. The judge isn't buying it. So the early action: I think when we initially talked about this, everybody, the early feeling was they're trying to stop it, but the fact that they're trying to stop it just because, and just start pulling permits, is not gonna stand up in court. And when they want to come back and do it again, they're not likely to win. If they had kept their ammunition dry and just from the beginning said it's something classified, something defense-related, the Trump administration probably would've had a better shot at this. But now it just seems like everything's gonna lead down the pathway where all these projects get finished. Speaker: Yeah, I think that specific judge probably was listening to the [00:06:00] Uptime podcast last week for his research. Um, listened to the opinions we talked about here, saying that this is kind of all BS, it's not gonna fly. Uh, but where we're sitting is: Revolution Wind had the injunction against it. Uh, Empire Wind had an injunction against it too, and they were awaiting a similar ruling that's actually supposed to go down today. That's Wednesday. Uh, so we're recording this on Wednesday. 
Um, and then Dominion is suing as well, and their, uh, hearing is on Friday, two days from now. And I would expect, I mean, it's the same judge, same pieces of paper, like it's going to be the same result. Some numbers to throw at this thing now, just so the listeners know the impact of this: uh, Dominion, for the Coastal Virginia Offshore Wind project, says that their pause in construction is costing them $5 million a day, and that is a pretty round number. It's a conservative number, to be honest with you, for offshore operations: how many vessels and how much stuff is out there? That makes sense. Yep. [00:07:00] So $5 million a day. And that's one of the wind farms. Uh, Coastal Virginia is an $11 billion project with, uh, it's like 176 turbines, I think, something like that. It's got enough production out there to power up, like, uh, 650,000 homes when it's done. So there were five projects suspended. I'm continuing with the numbers. Um, well, five; there's four now. Revolution's back running, right? So, uh, four still stopped. And those five represent 28 billion dollars in combined capital at risk, right? So you can understand why some of these companies are worried. This is not peanuts. Um, so you saw a little bump in Ørsted's stock in the markets when this, uh, Revolution Wind, uh, injunction was lifted. Uh, but you also see that, uh, Moody's, the credit [00:08:00] rating agency, has lowered Ørsted's, um, rating from stable to negative, given that political risk. Speaker 2: Well, if you haven't been paying attention, Wind Energy O&M Australia 2026 is happening relatively soon. It's gonna be February 17th and 18th at the Pullman Hotel in downtown Melbourne. And we are all looking forward to it. The roster and the agenda are nearly assembled at this point. 
Uh, we have a couple of last-minute speakers, but, uh, I'm looking at the agenda and, like, wow, if you work in O&M or are even around wind turbines, this is the place to be in February. Speaker: From my seat, it's shaping up to be pretty fun. My phone has just been inundated with text messages and WhatsApps of "When are you traveling? What are your dates?" Looking forward to it. And I wanna say this right, Rosie: looking forward to Melvin. Did I get it? Did I do it okay? Speaker 3: You know how to say it. Speaker: So we're really looking forward to it. We've got a bunch of people traveling from around the [00:09:00] world, uh, to come and share their collective knowledge, uh, and learn from the Australians about how they're doing things, what the risks are, what the problems are. Uh, really looking forward to the environment down there; what we had last year was very collaborative, the conversations were flowing. Um, so we're looking forward to it, uh, in a big way from our seats over here. Speaker 2: We are announcing a lightning workshop, and that workshop will be answering all your lightning questions in regards to your turbines. Now, typically when we do this, it's about $10,000 per seat, and this will be free as part of Wind Energy O&M Australia 2026. We're gonna talk about some of the lightning physics: what's actually happening in the field versus what the OEMs are saying and what the IEC specification indicates. And the big one is force majeure. A lot of operators are paying for damages that are well within the IEC specification, and we'll explain [00:10:00] what that is all about and what you can do to save yourself literally millions of dollars. But that is only possible if you go to Woma2026.com and register today, because we're running out of seats. Once they're gone, they're gone. But this is a great opportunity to get your lightning questions answered. And Rosemary promised me that we're gonna talk about Vestas turbines. 
Siemens turbines, GE Vernova turbines, Nordex turbines. So if you have Nordex turbines, Suzlon turbines, bring the turbine type and we'll talk about it. We'll get your questions answered, and the goal is that everybody at Wind Energy O&M Australia 2026 is gonna go home and save themselves millions of dollars in '26 and millions of dollars in '27 and all the years after, because this lightning workshop is going to take care of those really frustrating lightning questions that just don't get answered. We're gonna do it right there. Sign up today. Speaker 3: [00:11:00] You know what, I'm really looking forward to that session, especially 'cause I've got a couple of new staff, or new-ish staff, and it's a great way to get them up to speed on lightning. And I think that actually, like the majority of people, even if you are struggling with lightning problems every day, I bet there is a whole bunch you could learn about the underlying physics of lightning. And there are not so many places to find that in the world. I have looked, um, for my staff training: where is the course that I can send them to, to understand all about lightning? I know when I started at LM, I had an intro session, one-on-one, with the, you know, chief lightning guy there. That's not so easy to come by, and this is the opportunity where you can get that, and better, because it's information about every OEM and a bit of a better understanding about how it works. You know, one of the things that I find working with lightning is a lot of force majeure claims, and then, um, the OEMs try and bamboozle you with this, like, scientific-sounding talk. If you understand better, then you'll be able to do better in those discussions. [00:12:00] So I would highly recommend attending, if you can swing the Monday as well. Speaker: If you wanna attend and you're coming to the event. 
you can reach out to me directly, because what we want to do now is collect as much information as possible about the specific turbine types that the people in the room are gonna be responsible for, so we can tailor those messages to help you out directly. So feel free to reach out to me, joel.saxum, that's S-A-X-U-M, at wglightning.com, and we'll be squared away and ready to roll on Monday. I think that's Monday the 16th. Speaker 2: So while American offshore wind fights for survival in the courts, British offshore wind just had its biggest day ever. The United Kingdom awarded contracts for 8.4 gigawatts. That's right, 8.4 gigawatts of new offshore wind capacity, the largest auction in European history. Holy smokes, guys. The price came in at about 91 pounds per megawatt hour, and that's 2024 pounds, [00:13:00] and that's roughly 40% cheaper than building a new gas plant. Energy Secretary Ed Miliband called it a monumental step towards the country's 2030 clean power goals, and that it is. Critics say that prices are still higher than previous auctions, and warn that the government faces challenges connecting all this new capacity to the grid, and they do. Transmission is a limiting factor here. But in terms of where the UK is headed, putting in gigawatts of offshore wind is going to disconnect them from a lot of need on the gas supply and other energy sources. It's a massive auction round. This was way above what I remember being talked about when we were in Scotland just a couple of weeks ago, Joel. Speaker: Yeah, that's what I was gonna say. You know, when we were up at the ORE Catapult event, and we talked to a lot of the different organizations, their OWGP and, you know, of course the ORE Catapult folks and a few others, they were really excited about AR7. They were like, oh, we're so excited. It's gonna come down, it's gonna be great.
I didn't expect these kinds of numbers to come out of this thing, right? 'Cause we know that the UK currently has about 16 and a half or so gigawatts of offshore wind capacity, and they've got a bunch under construction, it's like 11 under construction, but their goal is to have 43 gigawatts by 2030. So, Speaker 2: Man. Speaker: Yeah. And 2030, put this into context now. This is one of our first podcasts of the new year. That's only four years away, right? It's soon. And to be able to do that, so you're saying, some round numbers, they got 16 now producing, 11 in the pipe, 11 being constructed. So that gets you to 27. That's another 16 gigawatts of wind that are not under construction today that they want to have completed in the next four years. That is a monumental effort. Now, we know that there's some grid complications and connection [00:15:00] requirements and things that will slow that down, but setting the grid aside, just thinking about the amount of effort to get those kinds of large capital projects done in that short of a timeline. Kudos to the UK, 'cause they're unlocking a lot of private investment, a lot of effort to get these things done, but they're literally doing the inverse of what we're doing in the United States right now. Speaker 2: There would be a total of about 550 15-ish-megawatt turbines in the water. That does seem doable, though. The big question is, who's gonna be providing those turbines? That's a massive order. Whoever the salesperson is involved in that transaction is gonna be very happy. Speaker: Well, the interesting thing here too is the global context of assets to be able to deliver this. We just got done talking about the troubles at these wind farms in the United States. As soon as these wind farms are finished, there's not more of them coming to construction phase shortly, right?
So all of these assets, all these jack-up vessels, these installation vessels, these specialized cable-lay vessels, they [00:16:00] can fuel up and freaking head right back across the Atlantic and start working on these things, if all of the engineering and the turbine deliveries are ready to roll. 'Cause, you know, two years ago that was a problem. We were all forecasting, oh, we have this forecasted problem of a shortage of vessels and assets to be able to do installs. And now with the US basically, once we're done with the wind farms we're working on offshore, now we're shutting it down, it frees those back up, right? So the vessels will be there, ready to roll. You'll have people coming off of construction projects that know what's going on, right? That know how to work these things. So the people and the vessels will be ready to roll. It is just, can we get the cables, the monopiles, the turbines, the nacelles, the blades all done in time to make this happen? And I know I'm rambling now, but after leaving that ORE Catapult event and talking to some of the people, um, that are supporting those [00:17:00] funds over there being injected from the government, I think that they've got the money flowing over there to get it done too. Speaker 2: The big winner in the auction round was RWE, and they won almost seven gigawatts, so that was the larger share of the 8.4 gigawatts. RWE obviously has a relationship with Vestas. Is that where this is gonna go? They're gonna be installing Vestas turbines? And where will those turbines be built? As I was informed by a Scottish gentleman, I won't name names. Will those turbines be built in the UK? Speaker 3: It's one of the biggest challenges with the supply chain for wind energy, that it just is so lumpy. So, you know,
you get a huge eight gigawatts all at once, and then you have years of, you know, just not much going on. I mean, for sure they're not gonna be just building [00:18:00] eight gigawatts' worth of wind turbines in the UK in the next couple of years, because they would also have to build the capacity to manufacture that, and then would wanna be building that much every couple of years for, you know, the next 10 or 20 years. So, yeah, of course they're gonna be manufacturing at facilities around the world and transporting them. But, um, yeah, I just, I don't know. It's one of the things that I just constantly shake my head about, like, how come, especially when projects are government supported, when plans are government supported, why can't we do a better job of smoothing things out so that you can have, for example, local manufacturing, because everyone knows that they've got a secure pipeline? When the government's involved, it should be possible. Speaker 2: At least the UK has been putting forth some pretty big numbers to support a local supply chain. When we were over in Scotland, they announced 300 million pounds, and that was just one of several. Over the next year, [00:19:00] there will be nearly a billion pounds put into the supply chain, which will make a dramatic difference. But I think you're right, also, they're gonna ramp up and then it's gonna ramp down. They have to find a way to feed the global marketplace at some point, because the technology and the people are there. It's a question of how do you sustain it for a 20- or 30-year period? That's a different question. Speaker 3: I do agree that the UK is doing a better job than probably anybody else. It's just that the way they have chosen to organize these auctions and the government support and the planning means that these are the perfect conditions to, you know,
make a smooth rollout and, you know, take care of all this. And so I'm just a bit frustrated that they're not doing more. But you are right that they're probably doing the best. Speaker 4: Once all of these are in service, though, aren't there quite a bit of aftermarket products that are available in the UK on the service end? Speaker: I think there's more. Speaker 4: Which, I mean, that's a good part of it, right? Speaker: If we're talking Vestas, let's just round this [00:20:00] up too. If we're talking Vestas's blade production in Europe, you have two facilities in Denmark that build V236 blades. You have one facility in Italy that builds V236 blades. Taiwan as well, but they build them for the APAC market, of course. Poland has one on hold right now, V236 as well. Well, they just bought that factory from LM up in Poland also, but I think that's for onshore blades. Oh, yes, sure. And then Scotland, they have the proposed facility in Leith that's kind of on hold as well. So if that one's proposed, I'm sure, hey, if we get a big order, they'll spin that up quick, because they'll get, I would imagine, one of the funds to spool up a little bit of money, boom, boom, boom, 'cause they're turning into local jobs, local supply chain. Speaker 2: Does this then create the condition where, like when we were in Scotland, a lot of those wind turbines are gonna reach 20 years old, maybe a little bit older, over the next five years, where they will [00:21:00] need to be repowered, upgraded, whatever's gonna happen there? If you had internal manufacturing in-country, you'd think that would lower the price to go do that. That will be a big effort, just like it is in Spain right now. Speaker: The trouble there, though, is if you're using local content in the UK, the labor prices are so much higher.
Speaker 2: I'm gonna go back to Rosie's point about sort of the way energy is sold worldwide. The UK has high energy prices, mostly because they are buying energy from other countries, and it's expensive to get it in-country. So yes, they can have higher labor prices and still be lower cost compared to the alternatives. It's not the same equation in the US versus the UK, it's totally different economics. But if they get enough power generation, which I think the UK will, they're gonna offload that, and they're already doing it now. So you can send power to France, send power up [00:22:00] north. There's ways to sell that extra power and help pay for the system you built. That would make a lot of sense. It's very similar to what the Saudis have done for dang near 80 years, which is fill tankers full of oil and sell it. This is a little bit different, in that we're just sending electrons through the water to adjacent European countries. It does seem like a plan. I hope they're sending 'em through a cable in the water and not just into the water. Well, here's the thing that was concerning early on. They were gonna turn it into hydrogen and put it on a ship and send it over to France. Like, that didn't make any sense at all. A cable's the way to do it right. Speaker: And actually, Allen, you and I did have a conversation with someone not too long ago about that arbitrage market, and how the project where they put that HVDC cable next to the tunnel, like, paid for itself in a year or something. They didn't really wanna tell us, but yeah, it paid for itself in a year. The ROI was on, like, a $500 million [00:23:00] project or something. That's crazy. But that is, I would say, part of the big push in the UK: then they can arbitrage that power and send it back across. Like, I think Nord Link is the cable between Peterhead and Norway, right?
So you have an arbitrage market going across to the Scandinavian countries, you have an arbitrage market going to the mainland EU, and when they have big-time wind, they're gonna be able to do it. So when you have an RWE looking at seven gigawatts of possibility that they just procured, game on. I love it. I think it's gonna be cool. I'm happy to see it blow up. Speaker 2: Canada is getting serious about offshore wind, and international developers are paying attention. Q Energy France and its South Korean partner, Hanwha Ocean, have submitted applications to develop wind projects off Nova Scotia's coast. The province has big ambitions. Premier Tim Houston wants to license enough offshore [00:24:00] wind to produce 40 gigawatts of power, far more than Nova Scotia would ever need. The extra electricity could supply more than a quarter of Canada's total demand. If all goes according to plan, the first turbines could be spinning by 2035. Now, Joel, yeah, some of this power will go to Canada, but there's a huge market in the United States also for this power, and the capacity factor up in Nova Scotia offshore is really good. Speaker: Yeah, it is simply, it's stellar, right? That whole Nova Scotia, New Brunswick, Newfoundland, even the whole Maritimes of Canada, the wind never stops blowing, right? Like, I go up there every once in a while 'cause my wife is from up there, and it's miserable sometimes even in the middle of summer. So the wind resource is fantastic. It is a boom, or will be a boom, for the Canadian market, right? That [00:25:00] Maritime community, they're always looking for new jobs, new jobs, new jobs, and this is gonna bring them. One thing I wanna flag here is, when this announcement came out, I reached out to Tim Houston's office to try to get him on the podcast, and I haven't gotten a response yet, Nova Scotia.
So if someone that's listening can get ahold of Tim Houston, we'd love to talk to him about the plans for Nova Scotia. But we see, just like we see overseas, the arbitrage market of: we're making power, we can sell it, we balance out the prices, we can sell it to other places. From our seats here, we've been talking about the electricity demand on the east coast of the United States for years and how it is just climbing, climbing, climbing, especially AI data centers. Virginia is a hub of this, right? They need power, and we're shooting ourselves in the foot for offshore wind, plus also canceling pipelines, and, like, there's no extra generation going on there except for some solar plants where you can squeeze 'em in down in the Carolinas and whatnot. [00:26:00] There is a massive play here for the Canadians to be able to HVDC some power down to us. Speaker 2: The offshore conditions off the coast of Nova Scotia are pretty rough, and the capacity factor being so high makes me think of some of the Brazilian wind farms where the capacity factor is over 50%. It's amazing down there, but one of the outcomes of that has been early turbine problems. And I'm wondering if the Nova Scotia market is going to demand a different kind of turbine that is specifically built for those conditions. It's cold, really cold. It's really windy. There's a lot of moisture in the air, right? So the salt is gonna be bad. And then the sea life too, right? There's a lot of sea life off the coast of Nova Scotia, which everybody's gonna be concerned about, obviously, as this gets rolling. How do we think about this? And who's gonna be the manufacturer of turbines for Canada? Is it gonna be Nordex? Speaker: Well, let's start from the ground up there. Or how about the sea [00:27:00] floor up? Let's start from there.
If you've ever worked in the offshore world, the Maritime Canadian universities that focus on offshore construction produce some of the best engineers for those markets, right? So if you go down to Houston, Texas, where there's offshore oil and gas companies and engineering companies everywhere, you run into Canadians from the Maritimes all over the place, 'cause they're really good at what they do. They are developing, or they have developed, offshore oil and gas platforms off the coast of Newfoundland and up in that area. And there's some crazy stuff you have to compete with, right? You have icebergs up there. There's no icebergs in the North Sea; nobody's cruising through Hornsea 3 with icebergs. So they've engineered and created foundations and things that can deal with those situations up there. But you also have to remember that you're in the Canadian Shield, which is a geotechnical formation, right? So it's very rocky. And it's not [00:28:00] like the other places where we're putting fixed-bottom wind in, where you just pound the piles into the sand. That's not how it's going to go up in Canada there. So there's some different engineering that's going to have to take place for the foundations. But like you said, Allen, turbine-specific: it blows up there, right? And we have seen onshore, even in the United States, when you get to areas that have high capacity factors, burning out main bearings, burning out generators prematurely, because the capacity factor is so high and those turbines are just churning. I don't know if any of the offshore wind turbine manufacturers are adjusting any designs specifically for any markets. I just don't know that. But they may run into some tough stuff up there, right?
You might run into some overspeeding main bearings and some maintenance issues, specifically in the wintertime, 'cause it is nasty up there. Speaker 2: Well, if you have 40 gigawatts of capacity, you have several thousand turbines. You wanna make sure, really [00:29:00] sure, that the blade design is right, that the gearbox is right if you have a gearbox, and that everything is essentially over-designed. Heated, you can have de-icing systems on it; I would assume that would be something you would be thinking about. You do the same thing for the monopiles. The whole assembly's gotta have just a different thought process than a turbine you would stick off the coast of Germany. Still rough conditions at times, but not like Nova Scotia. Speaker: One other thing there to think about, too, that we haven't dealt with at such extreme levels: off the coast of Nova Scotia is the Bay of Fundy. If you know anything about the Bay of Fundy, it has the highest tide swings in the world. The tide swings at certain times of the year can be upwards of 10 meters in a 12-hour period in this area of the ocean. And that comes with different challenges. One of the difficult things with tide swings is that they create subsea currents. [00:30:00] Subsea currents are really, really nasty against rocks, and for any kind of cable-lay activities, longevity of the cable lay, scour protection around turbines, and stuff like that. So that's another subsea thing that we really haven't spoken about. Speaker 3: You know, when you say Bay of Fundy, I'm like, I know that I have heard of that place before, and it's from when I was researching for tidal power videos, for tidal stream. It's like the best place to generate electricity from tidal stream. So I guess if you are gonna be whacking wind turbines in there anyway, maybe you can share some infrastructure and, yeah,
eke a little bit more out of your project. Speaker 2: That wraps up another episode of the Uptime Wind Energy Podcast. If today's discussion sparked any questions or ideas, we'd love to hear from you. Just reach out to us on LinkedIn, and don't forget to subscribe so you never miss an episode. And if you found value in today's conversation, please leave us a review. It really helps other wind energy professionals discover the show. For Rosie, Yolanda, and Joel, I'm Allen Hall, and we'll see you here next week on the Uptime [00:36:00] Wind Energy Podcast.
Lethal Mullet Podcast #300: Chatting Eighties with Dee Tails
Remember those good old days, at [INSERT ALMA MATER]? Good times, good times. Join Spencer, Ty, and Andy as they reminisce about their college experiences, and the scholastic opportunities that shaped them into who they are today. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace. A big shout-out to our $10/month patrons: Abbie Phelps, Adam W, Anthony Cabrera, asdf, Axon, Baylor Thornton, Bedi, bernventers, bunknown, Celeste, Charles Doyle, Dane Stephen, Dave Finlay, David Gebhardt, Dean, Francis Wolf, Heather-Pleather, Jacob Sauber-Cavazos, James Lloyd-Jones, Jennifer Knowles, Jeremy-Alice, Josh O'Brien, Kilo, LM, Lawrence, Louis Ceresa, Malek Douglas, Newmans Own, Packocamels, Phat Ass Cyberman, Rach, raouldyke, Rebecca Kimpel, revidicism, Sam Thomas, T, Tash Diehart, Themandme, Tomix, weedworf, William Copping, and Yung Zoe!
Laudetur Jesus Christus - Praised be Jesus Christ. The daily Vatican Radio broadcast from Vatican News Tiếng Việt. In today's program: 0:00 News bulletin; 16:52 Sharing the Word of God: Fr. Đa Minh Vũ Duy Cường, SJ, shares the Word of God for the Feast of the Baptism of the Lord; 25:25 Women religious in the Church: The mission of the Idente missionaries in Bolivia, a journey of helping people in rural areas. --- These images belong to the Dicastery for Communication of the Holy See. Any use of these images by third parties is prohibited and will lead to copyright action, unless permitted in writing by the Dicastery for Communication. Copyright © Dicasterium pro Communicatione - All rights reserved.
Happy New Year! You may have noticed that in 2025 we had moved toward YouTube as our primary podcasting platform. As we'll explain in the next State of Latent Space post, we'll be doubling down on Substack again and improving the experience for the over 100,000 of you who look out for our emails and website updates! We first mentioned Artificial Analysis in 2024, when it was still a side project in a Sydney basement. They were then one of the few of Nat Friedman and Daniel Gross's AI Grant companies to raise a full seed round from them, and they have now become the independent gold standard for AI benchmarking, trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities. We have chatted with both Clémentine Fourrier of Hugging Face's Open LLM Leaderboard and (the freshly valued at $1.7B) Anastasios Angelopoulos of LMArena on their approaches to LLM evals and trendspotting, but Artificial Analysis has staked out an enduring and important place in the toolkit of the modern AI Engineer by doing the best job of independently running the most comprehensive set of evals across the widest range of open and closed models, and charting their progress for broad industry analyst use. George Cameron and Micah Hill-Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs?
And how open is “open” really?We discuss:* The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after Swyx's retweet* Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers* The mystery shopper policy: they register accounts not on their own domain and run intelligence + performance benchmarks incognito to prevent labs from serving different models on private endpoints* How they make money: enterprise benchmarking insights subscription (standardized reports on model deployment, serverless vs. managed vs. leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard)* The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs* Omissions Index (hallucination rate): scores models from -100 to +100 (penalizing incorrect answers, rewarding ”I don't know”), and Claude models lead with the lowest hallucination rates despite not always being the smartest* GDP Val AA: their version of OpenAI's GDP-bench (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias)* The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2 OLMo 2 leads, followed by Nous Hermes and NVIDIA Nemotron)* The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long 
context, multi-turn agents)* Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omissions Index accuracy correlates with total parameters (not active), suggesting massive sparse models are the future* Token efficiency vs. turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (5.1 Codex has tighter token distributions)* V4 of the Intelligence Index coming soon: adding GDP Val AA, Critical Point, hallucination rate, and dropping some saturated benchmarks (human-eval-style coding is now trivial for small models)Links to Artificial Analysis* Website: https://artificialanalysis.ai* George Cameron on X: https://x.com/georgecameron* Micah-Hill Smith on X: https://x.com/micahhsmithFull Episode on YouTubeTimestamps* 00:00 Introduction: Full Circle Moment and Artificial Analysis Origins* 01:19 Business Model: Independence and Revenue Streams* 04:33 Origin Story: From Legal AI to Benchmarking Need* 16:22 AI Grant and Moving to San Francisco* 19:21 Intelligence Index Evolution: From V1 to V3* 11:47 Benchmarking Challenges: Variance, Contamination, and Methodology* 13:52 Mystery Shopper Policy and Maintaining Independence* 28:01 New Benchmarks: Omissions Index for Hallucination Detection* 33:36 Critical Point: Hard Physics Problems and Research-Level Reasoning* 23:01 GDP Val AA: Agentic Benchmark for Real Work Tasks* 50:19 Stirrup Agent Harness: Open Source Agentic Framework* 52:43 Openness Index: Measuring Model Transparency Beyond Licenses* 58:25 The Smiling Curve: Cost Falling While Spend Rising* 1:02:32 Hardware Efficiency: Blackwell Gains and Sparsity Limits* 1:06:23 Reasoning Models and Token Efficiency: The Spectrum Emerges* 1:11:00 Multimodal Benchmarking: Image, Video, and Speech Arenas* 1:15:05 Looking Ahead: Intelligence Index V4 and Future Directions* 1:16:50 Closing: The Insatiable Demand for 
Intelligence

Transcript

Micah [00:00:06]: This is kind of a full circle moment for us in a way, because the first time Artificial Analysis got mentioned on a podcast was you and Alessio on Latent Space. Amazing.

swyx [00:00:17]: Which was January 2024. I don't even remember doing that, but yeah, it was very influential to me. Yeah, I'm looking at AI News for Jan 17, or Jan 16, 2024. I said, this gem of a models and host comparison site was just launched. And then I put in a few screenshots, and I said, it's an independent third party. It clearly outlines the quality versus throughput trade-off, and it breaks out by model and hosting provider. I did give you s**t for missing Fireworks, like, how do you have a model benchmarking thing without Fireworks? But you had Together, you had Perplexity, and I think we just started chatting there. Welcome, George and Micah, to Latent Space. I've been following your progress. Congrats on... It's been an amazing year. You guys have really come together to be the presumptive new Gartner of AI, right? Which is something that...

George [00:01:09]: Yeah, but you can't pay us for better results.

swyx [00:01:12]: Yes, exactly.

George [00:01:13]: Very important.

Micah [00:01:14]: Start off with a spicy take.

swyx [00:01:18]: Okay, how do I pay you?

Micah [00:01:20]: Let's get right into that.

swyx [00:01:21]: How do you make money?

Micah [00:01:24]: Well, very happy to talk about that. So it's been a big journey the last couple of years. Artificial Analysis is going to be two years old in January 2026, which is pretty soon now. We've run the website for free from the start, obviously, and give away a ton of data to help developers and companies navigate AI and make decisions about models, providers, and technologies across the AI stack for building stuff. We're very committed to doing that and intend to keep doing that. We have, along the way, built a business that is working out pretty sustainably. We've got just over 20 people now and two main customer groups.
We want to be who enterprises look to for data and insights on AI, so we want to help them with their decisions about models and technologies for building stuff. And then on the other side, we do private benchmarking for companies throughout the AI stack who build AI stuff. So no one pays to be on the website. We've been very clear about that from the very start, because there's no use doing what we do unless it's independent AI benchmarking. Yeah. But it turns out a bunch of our stuff can be pretty useful to companies building AI stuff.

swyx [00:02:38]: And is it like, I am a Fortune 500, I need advisors on objective analysis, and I call you guys and you pull up a custom report for me, you come into my office and give me a workshop? What kind of engagement is that?

George [00:02:53]: So we have a benchmarking and insights subscription, which looks like standardized reports that cover key topics or key challenges enterprises face when looking to understand AI and choose between all the technologies. And so, for instance, one of the reports is a model deployment report: how to think about choosing between serverless inference, managed deployment solutions, or leasing chips and running inference yourself is an example of the kind of decision that big enterprises face, and it's hard to reason through. Like, this AI stuff is really new to everybody, and so with our reports and insights subscription we try and help companies navigate that. We also do custom private benchmarking. And so that's very different from the public benchmarking that we publicize, and there's no commercial model around that. For private benchmarking, we'll at times create benchmarks and run benchmarks to specs that enterprises want. And we'll also do that sometimes for AI companies who have built things, and we help them understand what they've built with private benchmarking. Yeah.
So that's a piece mainly that we've developed through trying to support everybody publicly with our public benchmarks. Yeah.

swyx [00:04:09]: Let's talk about the tech stack behind that. But okay, I'm going to rewind all the way to when you guys started this project. You were all the way in Sydney?

Micah [00:04:19]: Yeah. Well, Sydney, Australia for me. George was in SF; he's Australian, but he had moved here already. Yeah.

swyx [00:04:22]: And I remember I had the Zoom call with you. What was the impetus for starting Artificial Analysis in the first place? You know, you started with public benchmarks, and so let's start there. We'll get to the private benchmarking. Yeah.

George [00:04:33]: Why don't we even go back a little bit to, like, why we thought that it was needed? Yeah.

Micah [00:04:40]: The story kind of begins in 2022, 2023. Both George and I had been into AI stuff for quite a while. In 2023 specifically, I was trying to build a legal AI research assistant. It actually worked pretty well for its era, I would say. Yeah. So I was finding that the more you go into building something using LLMs, the more each bit of what you're doing ends up being a benchmarking problem. I had this multistage algorithm thing, and I was trying to figure out what the minimum viable model for each bit was, trying to optimize every bit of it as you build that out, right? Like, you're trying to think about accuracy, a bunch of other metrics, and performance and cost. And mostly, just no one was doing anything to independently evaluate all the models, and certainly not to look at the trade-offs for speed and cost. So we basically set out just to build a thing that developers could look at to see the trade-offs between all of those things, measured independently across all the models and providers.
Honestly, it was probably meant to be a side project when we first started doing it.swyx [00:05:49]: Like you didn't get together and say, hey, we're going to stop working on all this other stuff, this is going to be our main thing. When I first called you, I think you hadn't decided on starting a company yet.Micah [00:05:58]: That's actually true. I don't even think we'd paused anything. George still had his job, and I didn't quit working on my legal AI thing. It was genuinely a side project.George [00:06:05]: We built it because we needed it as people building in the space, and thought, oh, other people might find it useful too. So we bought a domain, linked it to the Vercel deployment that we had, and tweeted about it. But very quickly it started getting attention. Thank you, swyx, for, I think, doing an initial retweet and spotlighting this project that we released. It was useful to others, but it became even more useful as the number of models released accelerated. We had Mixtral 8x7B, and it was a key one. That's a fun one. Yeah. An open source model that really changed the landscape and opened up people's eyes to other serverless inference providers, to thinking about speed, thinking about cost. And so it became more useful quite quickly. Yeah.swyx [00:07:02]: What I love about talking to people like you, who sit across the ecosystem, is that, well, I have theories about what people want, but you have data, and that's obviously more relevant. But I want to stay on the origin story a little bit more. When you started out, the status quo at the time was that every paper would come out and report their numbers versus competitor numbers. And that's basically it. And I remember I did the legwork.
I think everyone has some version of an Excel sheet or a Google Sheet where you just copy and paste the numbers from every paper and post them up there. And then sometimes they don't line up, because they're independently run. And so your numbers are going to look better than... Your reproductions of other people's numbers are going to look worse because you don't hold their models correctly, or whatever the excuse is. I think then Stanford HELM, Percy Liang's project, would also have some of these numbers. And I don't know if there's any other source that you can cite. If I were to start artificial analysis at the same time you guys started, I would have used EleutherAI's eval harness. Yup.Micah [00:08:06]: Yup. That was some cool stuff. At the end of the day, running these evals, if it's a simple Q&A eval, all you're doing is asking a list of questions and checking if the answers are right, which shouldn't be that crazy. But it turns out there are an enormous number of things that you've got to control for. And I mean, back when we started the website, one of the reasons why we realized that we had to run the evals ourselves, and couldn't just take results from the labs, was that they would all prompt the models differently. And when you're competing over a few points, then you can pretty easily get... You can put the answer into the model. Yeah. That, in the extreme. And you get crazy cases, like back when Google launched Gemini 1.0 Ultra and needed a number that would say it was better than GPT-4, and constructed, I think, never-published chain-of-thought examples, 32 of them, for every topic in MMLU, to run it to get the score. There are so many things that you... They never shipped Ultra, right? That's the one that never made it out. Not widely. Yeah. I mean, I'm sure it existed, but yeah.
So we were pretty sure that we needed to run them ourselves, and just run them in the same way across all the models. Yeah. And we were also certain from the start that you couldn't look at those in isolation. You needed to look at them alongside the cost and performance stuff. Yeah.swyx [00:09:24]: Okay. A couple of technical questions. I mean, so obviously I also thought about this, and I didn't do it because of cost. Yep. Did you not worry about costs? Were you funded already? Clearly not, but you know. No. Well, we definitely weren't at the start.Micah [00:09:36]: So, I mean, we were paying for it personally at the start. That's a lot of money. Well, the numbers weren't nearly as bad a couple of years ago. So we certainly incurred some costs, but we were probably in the order of hundreds of dollars of spend across all the benchmarking that we were doing. Yeah. So nothing. Yeah. It was kind of fine. These days that's gone up an enormous amount, for a bunch of reasons that we can talk about. But it wasn't that bad back then, because, remember, the number of models we were dealing with was hardly any, and the complexity of the stuff that we wanted to do to evaluate them was a lot less. We were just asking some Q&A type questions. And one specific thing: for a lot of evals initially, we were just sampling an answer. You know, like, what's the answer for this? We'd go to the answer directly, without letting the models think. We weren't even doing chain-of-thought stuff initially. And that was the most useful way to get some results initially. Yeah.swyx [00:10:33]: And so for people who haven't done this work, literally parsing the responses is a whole thing, right?
Because sometimes the models can answer any way they see fit, and sometimes they actually do have the right answer, but they just returned the wrong format, and they'll get a zero for that unless you work it into your parser. And that involves more work. But there's an open question whether you should give it points for not following your instructions on the format.Micah [00:11:00]: It depends what you're looking at, right? If you're trying to see whether or not it can solve a particular type of reasoning problem, and you don't want to test its ability to do answer formatting at the same time, then you might want to use an LLM-as-answer-extractor approach to make sure that you get the answer out no matter how the model phrased it. But these days, it's mostly less of a problem. If you instruct a model and give it examples of what the answers should look like, it can get the answers in your format, and then you can do a simple regex.swyx [00:11:28]: Yeah, yeah. And then there's other questions around, I guess, sometimes if you have a multiple choice question, sometimes there's a bias towards the first answer, so you have to randomize the responses. All these nuances. Once you dig into benchmarks, you're like, I don't know how anyone believes the numbers on all these things. It's such dark magic.Micah [00:11:47]: You've also got the different degrees of variance in different benchmarks, right? Yeah. So, if you run a four-option multiple-choice eval on a modern reasoning model at the temperatures suggested by the labs for their own models, the variance that you can see is pretty enormous if you only do a single run, especially if it has a small number of questions. So one of the things that we do is run an enormous number of repeats of all of our evals when we're developing new ones and doing upgrades to our intelligence index to bring in new things.
Yeah. So that we can dial in the right number of repeats to get to the 95% confidence intervals that we're comfortable with, so that when we pull that together, we can be confident the intelligence index is tight to at least, like, plus or minus one at 95% confidence. Yeah.swyx [00:12:32]: And, again, that just adds a straight multiple to the cost. Oh, yeah. Yeah, yeah.George [00:12:37]: So, that's one of many reasons that cost has gone up a lot more than linearly over the last couple of years. We report a cost to run the artificial analysis intelligence index on our website, and currently that's assuming one repeat in terms of how we report it, because we want it to reflect a bit of the weighting of the index. But our cost is actually a lot higher than what we report there, because of the repeats.swyx [00:13:03]: Yeah, yeah, yeah. And probably this is true, but just checking: you don't have any special deals with the labs. They don't discount it. You just pay out of pocket, or out of your sort of customer funds. Oh, there is a mix. So, the issue is that sometimes they may give you a special endpoint, which is… Ah, 100%.Micah [00:13:21]: Yeah, yeah, yeah. Exactly. So, we laser focus, in everything we do, on having the best independent metrics and making sure that no one can manipulate them in any way. There are quite a lot of processes we've developed over the last couple of years to make that true. Like the one you bring up right here: if we're working with a lab, and they're giving us a private endpoint to evaluate a model, it is totally possible that what's sitting behind that black box is not the same as what they serve on a public endpoint. We're very aware of that. We have what we call a mystery shopper policy.
And we're totally transparent with all the labs we work with about this: that we will register accounts not on our own domain and run both intelligence evals and performance benchmarks… Yeah, that's the job. …without them being able to identify it. And no one's ever had a problem with that. Because a thing that turns out to actually be quite a good dynamic in the industry is that they all want to believe that none of their competitors could manipulate what we're doing either.swyx [00:14:23]: That's true. I never thought about that. I was in the database industry prior, and there's a lot of shenanigans around benchmarking, right? So I'm just kind of going through the mental laundry list. Did I miss anything else in this category of shenanigans? Oh, potential shenanigans.Micah [00:14:36]: I mean, okay, the biggest one that I'll bring up is more of a conceptual one, actually, than direct shenanigans. It's that the things that get measured become the things that get targeted by the labs in what they're trying to build, right? Exactly. So that doesn't mean anything that we should really call shenanigans. Like, I'm not talking about training on the test set. But if you know that you're going to be graded on a particular thing, and you're a researcher, there are a whole bunch of things that you can do to try to get better at that thing, that preferably are going to be helpful for a wide range of how actual users want to use the thing that you're building. But they will not necessarily do that. So, for instance, the models are exceptional now at answering competition maths problems. There is some relevance of that type of reasoning, that type of work, to how we might use modern coding agents and stuff. But it's clearly not one for one.
So the thing that we have to be aware of is that once an eval becomes the thing that everyone's looking at, scores can get better on it without that being a reflection of the overall generalized intelligence of these models getting better. That has been true for the last couple of years. It'll be true for the next couple of years. There's no silver bullet to defeat that, other than building new stuff to stay relevant and measure the capabilities that matter most to real users. Yeah.swyx [00:15:58]: And we'll cover some of the new stuff that you guys are building as well, which is cool. You used to just run other people's evals, but now you're coming up with your own. And I think, obviously, that is a necessary path once you're at the frontier. You've exhausted all the existing evals. I think the next point in history that I have for you is AI Grant, which you guys decided to join, and you moved here. What was it like? I think you were in, like, batch two? Batch four. Batch four. Okay.Micah [00:16:26]: I mean, it was great. Nat and Daniel are obviously great. And it's a really cool group of companies that we were in AI Grant alongside. It was really great to get Nat and Daniel on board. Obviously, they've done a whole lot of great work in the space with a lot of leading companies and were extremely aligned with the mission of what we were trying to do. We're not quite typical of a lot of the other AI startups that they've invested in.swyx [00:16:53]: And they were very much here for the mission of what we want to do. Did they give any advice that really affected you in some way, or were any of the events very impactful? That's an interesting question.Micah [00:17:03]: I mean, I remember fondly a bunch of the speakers who came and did fireside chats at AI Grant.swyx [00:17:09]: Which is also, like, a crazy list. Yeah.George [00:17:11]: Oh, totally. Yeah, yeah, yeah.
There was something about speaking to Nat and Daniel about the challenges of working through a startup: working through the questions that don't have clear answers, how to work through those methodically, and just working through the hard decisions. They've been great mentors to us as we've built artificial analysis. Another benefit for us was that other companies in the batch, and other companies in AI Grant, are pushing the capabilities of what AI can do at this time. And so being in contact with them, making sure that artificial analysis is useful to them, has been fantastic for helping us work out how we should build out artificial analysis to continue being useful to those building on AI.swyx [00:17:59]: I think to some extent, I have mixed opinions on that one, because your target audience is not people in AI Grant, who are obviously at the frontier. Yeah. Do you disagree?Micah [00:18:09]: To some extent. To some extent. But then, a lot of what the AI Grant companies are doing is taking capabilities coming out of the labs and trying to push the limits of what they can do across the entire stack for building great applications, which actually makes some of them pretty archetypal power users of artificial analysis. Some of the people with the strongest opinions about what we're doing well and what we're not doing well and what they want to see next from us. Yeah. Because when you're building any kind of AI application now, chances are you're using a whole bunch of different models. You're maybe switching reasonably frequently between different models for different parts of your application, to optimize what you're able to do with them at an accuracy level, and to get better speed and cost characteristics. So for many of them, no, they're not commercial customers of ours; we don't charge for all our data on the website. Yeah.
They are absolutely some of our power users.swyx [00:19:07]: So let's talk about just the evals as well. So you start out from the general MMLU and GPQA stuff. What's next? How do you sort of build up to the overall index? What was in V1, and how did you evolve it? Okay.Micah [00:19:22]: So first, just background: we're talking about the artificial analysis intelligence index, which is our synthesis metric that we pull together, currently from 10 different eval datasets, to give what we're pretty confident is the best single number to look at for how smart the models are. Obviously, it doesn't tell the whole story. That's why we publish the whole website of charts, to dive into every part of it and look at the trade-offs. But it's the best single number. So right now, it's got a bunch of Q&A type datasets that have been very important to the industry, like the couple that you just mentioned. It's also got a couple of agentic datasets. It's got our own long context reasoning dataset and some other use case focused stuff. As time goes on, the things that we're most interested in, the capabilities that are becoming more important for AI and that developers care about, are going to be first around agentic capabilities. So surprise, surprise. We're all loving our coding agents, and how the models perform on that, and then on similar things for different types of work, is really important to us. Linking to economically valuable use cases is extremely important to us.
And then there are the things that the models still struggle with, like working really well over long contexts, that are not going to go away as specific capabilities and use cases that we need to keep evaluating.swyx [00:20:46]: But I guess one thing I was driving at was the V1 versus the V2 and how that evolved over time.Micah [00:20:53]: Like how we've changed the index to where we are.swyx [00:20:55]: And I think that reflects the change in the industry. Right. So that's a nice way to tell that story.Micah [00:21:00]: Well, V1 would be completely saturated right now by almost every model coming out, because doing things like writing the Python functions in HumanEval is now pretty trivial. It's easy to forget, actually, how much progress has been made in the last two years. We obviously play the game constantly of today's version versus last week's version and the week before, and the horse race between the current frontier and who has the best smaller-than-10B model right now this week. Right. And that's very important to a lot of developers and people, especially in this particular city of San Francisco. But when you zoom out, a couple of years ago, literally most of what we were doing to evaluate the models then would be 100% solved by even pretty small models today. And that's been one of the key things, by the way, that's driven down the cost of intelligence at every tier of intelligence. We can talk about that more in a bit. So V1, V2, V3: we made things harder. We covered a wider range of use cases. And we tried to get closer to things developers care about, as opposed to just the Q&A type stuff that MMLU and GPQA represented. Yeah.swyx [00:22:12]: I don't know if you have anything to add there. Or we could just go right into showing people the benchmark, looking around, and asking questions about it. Yeah.Micah [00:22:21]: Let's do it. Okay.
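At its core, a synthesis metric like the one Micah describes is a weighted average of normalized per-eval scores. A minimal sketch, where the eval names and weights are invented for illustration and are not Artificial Analysis's actual composition:

```python
# Illustrative only: these eval names and weights are hypothetical, not
# the real makeup of the Artificial Analysis Intelligence Index.
def intelligence_index(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-eval scores (each on a 0-100 scale) into one weighted index."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

weights = {"agentic": 0.3, "long_context": 0.2, "qa": 0.3, "math": 0.2}
scores = {"agentic": 45.0, "long_context": 60.0, "qa": 80.0, "math": 90.0}
# Weighted mean: 0.3*45 + 0.2*60 + 0.3*80 + 0.2*90 = 67.5
```

Re-weighting between index versions (V1, V2, V3) then amounts to changing which datasets appear and how much each one counts, which is why a version built around saturated evals like HumanEval stops separating models.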
This would be a pretty good way to chat about a few of the new things we've launched recently. Yeah.George [00:22:26]: And I think a little bit about the direction that we want to take it, and where we want to push benchmarks. Currently, the intelligence index and evals focus a lot on raw intelligence. But we want to diversify how we think about intelligence. And we can talk about it, but new evals that we've built and partnered on focus on topics like hallucination. And there are a lot of topics that I think are not covered by the current eval set that should be. And so we want to bring that forth. But before we get into that.swyx [00:23:01]: And so for listeners, just as a timestamp, right now, number one is Gemini 3 Pro High, followed by Claude Opus at 70, GPT-5.1 High. You don't have 5.2 yet. And Kimi K2 Thinking. Wow. Still hanging in there. So those are the top four. That will date this podcast quickly. Yeah. Yeah. I mean, I love it. I love it. No, no. 100%. Look back this time next year and go, how cute. Yep.George [00:23:25]: Totally. A quick view of that is, okay, there's a lot. I love it. I love this chart. Yeah.Micah [00:23:30]: This is such a favorite, right? Yeah. In almost every talk that George or I give at conferences and stuff, we always put this one up first, to situate where we are in this moment in history. This, I think, is the visual version of what I was saying before about zooming out and remembering how much progress there's been. If we go back to just over a year ago, before o1, before Claude Sonnet 3.5, we didn't have reasoning models or coding agents as a thing. And the game was very, very different. If we go back even a little bit before then, we're in the era where, when you look at this chart, OpenAI was untouchable for well over a year.
And, I mean, you would remember that time period well: very open questions about whether or not AI was going to be competitive, full stop, whether or not OpenAI would just run away with it, whether we would have a few frontier labs and no one else would really be able to do anything other than consume their APIs. I am quite happy overall that the world that we have ended up in is one where... Multi-model. Absolutely. And strictly more competitive every quarter over the last few years. Yeah. This year has been insane. Yeah.George [00:24:42]: You can see it. This chart with everything added is hard to read currently. There are so many dots on it, but I think it reflects a little bit what we felt, like how crazy it's been.swyx [00:24:54]: Why 14 as the default? Is that a manual choice? Because you've got ServiceNow in there, which is a less traditional name. Yeah.George [00:25:01]: It's models that we're highlighting by default in our charts, in our intelligence index. Okay.swyx [00:25:07]: You just have a manually curated list of stuff.George [00:25:10]: Yeah, that's right. But something that I actually don't think every artificial analysis user knows is that you can customize our charts and choose what models are highlighted. Yeah. And so if we take off a few names, it gets a little easier to read.swyx [00:25:25]: Yeah, yeah. A little easier to read. Totally. Yeah. But I love that you can see the o1 jump. Look at that. September 2024. And the DeepSeek jump. Yeah.George [00:25:34]: Which got close to OpenAI's leadership. They were so close. I think, yeah, we remember that moment. Around this time last year, actually.Micah [00:25:44]: Yeah, well, give or take a couple of weeks. It was Boxing Day in New Zealand when DeepSeek V3 came out. And we'd been tracking DeepSeek and a bunch of the other global players that were less known over the second half of 2024, and had run evals on the earlier ones and stuff.
I very distinctly remember Boxing Day in New Zealand, because I was with family for Christmas and stuff, running the evals and getting back result by result on DeepSeek V3. So this was the first of their V3 architecture, the 671B MoE.Micah [00:26:19]: And we were very, very impressed. That was the moment where we were sure that DeepSeek was no longer just one of many players, but had jumped up to be a thing. The world really noticed when they followed that up with the RL work on top of V3, with R1 succeeding a few weeks later. But the groundwork for that absolutely was laid with that extremely strong base model, completely open weights, which we had as the best open weights model. So, yeah, that's the jump that you really see in the chart. It really impressed us on Boxing Day last year.George [00:26:48]: Boxing Day is the day after Christmas, for those not familiar.swyx [00:26:55]: I'm from Singapore. A lot of us remember Boxing Day for a different reason, for the tsunami that happened. Oh, of course. Yeah, but that was a long time ago. So yeah. So this is the rough pitch of AAQI. Is it A-A-Q-I or A-A-I-I? I-I. Okay. Good memory, though.Micah [00:27:11]: I don't know. I'm not used to it. Once upon a time, we did call it the Quality Index, and we would talk about quality, performance, and price, but we changed it to intelligence.George [00:27:20]: There have been a few naming changes. We added hardware benchmarking to the site, and so benchmarks at a kind of system level. And so then we changed our throughput metric to what we now call output speed, because throughput makes sense at a system level, so we took that name for that.swyx [00:27:32]: Take me through more charts. What should people know? Obviously, the way you look at the site is probably different than how a beginner might look at it.Micah [00:27:42]: Yeah, that's fair. There's a lot of fun stuff to dive into.
Maybe we can skip past all the... like, we have lots and lots of evals and stuff. The interesting ones to talk about today, that would be great to bring up, are a few of our recent things that probably not many people will be familiar with yet. So the first one of those is our omniscience index. This one is a little bit different to most of the intelligence evals that we've run. We built it specifically to look at the embedded knowledge in the models, and to test hallucination by looking at, when the model doesn't know the answer, so it's not able to get it correct, what's its probability of saying, I don't know, versus giving an incorrect answer. So the metric that we use for omniscience goes from negative 100 to positive 100, because we're simply taking off a point if you give an incorrect answer to a question. We're pretty convinced that this is an example of where it makes most sense to do that, because it's strictly more helpful to say, I don't know, instead of giving a wrong answer to a factual knowledge question. And one of our goals is to shift the incentive that evals create for models, and the labs creating them, to get higher scores. Almost every eval across all of AI up until this point has been graded by simple percentage correct as the main metric, the main thing that gets hyped. And so you should take a shot at everything; there's no incentive to say, I don't know. So we changed that for this one here.swyx [00:29:22]: I think there's a general field of calibration as well, like the confidence in your answer versus the rightness of the answer. Yeah, we completely agree. Yeah. Yeah.George [00:29:31]: One reason that we didn't put that into this index is that we think the way to do that is not to ask the models how confident they are.swyx [00:29:43]: I don't know. Maybe it might be, though. You put in, like, a JSON field, say, confidence, and maybe it spits out something. Yeah.
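The scoring scheme George describes can be written down directly: +1 for a correct answer, −1 for an incorrect one, 0 for declining, scaled to a −100 to +100 index, with hallucination rate as the share of not-correct questions that got a wrong answer rather than an "I don't know." A sketch with hypothetical helper names, following the description above rather than any published implementation:

```python
# Sketch of the described Omniscience scoring: correct answers score +1,
# incorrect answers -1, "I don't know" 0, scaled to a -100..+100 index.
def omniscience_score(correct: int, incorrect: int, declined: int) -> float:
    total = correct + incorrect + declined
    return 100.0 * (correct - incorrect) / total

# Hallucination rate: of the questions the model did NOT get right, what
# fraction did it answer wrongly instead of declining?
def hallucination_rate(incorrect: int, declined: int) -> float:
    not_correct = incorrect + declined
    return incorrect / not_correct if not_correct else 0.0
```

The incentive shift is visible in the numbers: a model with 50 correct, 10 incorrect, and 40 declines scores 40 with a 0.2 hallucination rate, while one that guesses on everything and lands 50 correct, 50 incorrect scores 0 with a 1.0 hallucination rate, even though both have the same plain accuracy.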
You know, we have done a few evals podcasts over the years. And when we did one with Clémentine of Hugging Face, who maintained the Open LLM Leaderboard, this was one of her top requests: some kind of hallucination slash confidence-calibration thing. And so, hey, this is one of them.Micah [00:30:05]: And like anything that we do, it's not a perfect metric, or the whole story of everything that you think about as hallucination. But yeah, it's pretty useful and has some interesting results. One of the things that we saw in the hallucination rate is that Anthropic's Claude models are at the very left-hand side here, with the lowest hallucination rates out of the models that we've evaluated omniscience on. That is an interesting fact. I think it probably correlates with a lot of the previously not-really-measured vibes stuff that people like about some of the Claude models. Is the dataset public, or is there a held-out set? There's a held-out set for this one. So we have published a public test set, but we've only published 10% of it. The reason is that for this one specifically, it would be very, very easy to have data contamination, because it is just factual knowledge questions. We'll update it over time to also prevent that, but yeah, we've kept most of it held out so that we can keep it reliable for a long time. It leads us to a bunch of really cool things, including breaking it down quite granularly by topic. And so we've got some of that disclosed on the website publicly right now, and there's lots more coming in terms of our ability to break out very specific topics. Yeah.swyx [00:31:23]: I would be interested. Let's dwell a little bit on this hallucination one. I noticed that Haiku hallucinates less than Sonnet, which hallucinates less than Opus. Would that be the other way around in a normal capability environment? I don't know.
What do you make of that?George [00:31:37]: One interesting aspect is that we've found there's not really a strong correlation between intelligence and hallucination. That's to say, how smart the models are in a general sense isn't correlated with their ability, when they don't know something, to say that they don't know. It's interesting that Gemini 3 Pro Preview was a big leap over here, over Gemini 2.5 Flash and 2.5 Pro. And if I add Pro quickly here.swyx [00:32:07]: I bet Pro's really good. Actually, no, I meant the GPT Pros.George [00:32:12]: Oh yeah.swyx [00:32:13]: Because the GPT Pros are rumored, we don't know for a fact, to be like eight runs and then an LLM judge on top. Yeah.George [00:32:20]: So we saw a big jump in, this is accuracy, so this is just the percent that they get correct, and Gemini 3 Pro knew a lot more than the other models. So a big jump in accuracy, but relatively no change between the Google Gemini models, between releases, in the hallucination rate. Exactly. And so it's likely just a kind of different post-training recipe for the Claude models that's driven this. Yeah.Micah [00:32:45]: You can partially blame us, and how we define intelligence, for having until now not counted hallucination as a negative in the way that we think about intelligence.swyx [00:32:56]: And so that's what we're changing. I know many smart people who are confidently incorrect.George [00:33:02]: Look at that. That is very human. Very true. And there's a time and a place for that. I think our view is that hallucination rate makes sense in this context, where it's around knowledge, but in many cases, people want the models to hallucinate, to have a go. Often that's the case in coding, or when you're trying to generate new ideas.
One eval that we added to artificial analysis is Critical Point, and it's really hard physics problems. Okay.swyx [00:33:32]: And is it sort of like a HumanEval type, or something different, or like a FrontierMath type?George [00:33:37]: It's not dissimilar to FrontierMath. These are research questions that academics in the physics world would be able to answer, but models really struggle to answer. So the top score here is only about 9%.swyx [00:33:51]: And the people that created this, like Minway, and actually Ofir, who was kind of behind SWE-bench. And what organization is this? Oh, is this... it's Princeton.George [00:34:01]: A range of academics from different academic institutions, really smart people. They talked about how they turn the models up to as high a temperature as they can when they're trying to explore new ideas in physics with a thought partner, just because they want the models to hallucinate. Sometimes it hallucinates something new. Yeah, exactly.swyx [00:34:21]: So not right in every situation, but I think it makes sense to test hallucination in scenarios where it makes sense. Also, the obvious question is, this is one of many: every lab has a system card that shows some kind of hallucination number, and you've chosen not to endorse those and have made your own. And I think that's a choice. In some sense, the rest of artificial analysis is public benchmarks that other people can independently rerun, and you provide it as a service here. You have to fight the, well, who are we to do this?
And your answer is "we have a lot of customers" — but how do you convince the individual?
Micah [00:35:08]: I think for hallucinations specifically, there are a bunch of different things you might reasonably care about, and that you'd measure quite differently. We've called this the AA-Omniscience hallucination rate — we're not trying to declare it, like, humanity's last hallucination eval. You could have some interesting naming conventions and all that. The bigger-picture answer — something I actually wanted to mention as George was explaining Critical Point — is that, going forward, we are building evals internally and partnering with academia and with AI companies to build great evals. We have pretty strong views, for different parts of the AI stack, on where things are not being measured well, or where there are things developers care about that should be measured more and better. And we intend to do that. We're not obsessed with the idea that everything we do has to be built entirely within our own team. Critical Point is a cool example, where we were a launch partner working with academia. We've got some partnerships coming up with a couple of leading companies — those, obviously, we have to be careful with on some of the independence side, but with the right disclosure we're completely comfortable. A lot of the labs have released great datasets in the past that we've used to great success independently. So between all of those approaches, we're going to be releasing more in the future. Cool.
swyx [00:36:26]: Let's cover the last couple, and then I want to talk about your trends analysis stuff. Totally.
Micah [00:36:31]: Actually, I have one little factoid on Omniscience.
If you go back up to accuracy on Omniscience — an interesting thing about this accuracy metric is that it tracks the total parameter count of models more closely than anything else we measure. That makes a lot of sense intuitively, right? Because this is a knowledge eval. This is the pure knowledge metric: we're not looking at the index or the hallucination-rate stuff, which we think is much more about how the models are trained. This is just: what facts did they recall? And it tracks parameter count extremely closely. Okay.
swyx [00:37:05]: What's the rumored size of Gemini 3 Pro? And to be clear, not confirmed by any official source, just rumors. But rumors do fly around. I hear all sorts of numbers; I don't know what to trust.
Micah [00:37:17]: If you draw the line on Omniscience accuracy versus total parameters — we've got all the open weights models — you can squint and see that the leading frontier models are likely quite a lot bigger than the roughly one trillion parameters the open weights models cap out at. There's an interesting extra data point that Elon Musk revealed recently about xAI: around three trillion parameters for Grok 3 and 4, and six trillion for Grok 5, though that's not out yet. Take those together, have a look, and you might reasonably form the view that there's a pretty good chance Gemini 3 Pro is bigger than that — that it could be in the 5 to 10 trillion parameter range. To be clear, I have absolutely no idea, but just based on this chart, that's where you'd land. Yeah.
swyx [00:38:07]: And to some extent I actually discourage people from guessing too much, because what does it really matter? As long as they can serve it at a sustainable cost, that's about it.
Like, yeah, totally.
George [00:38:17]: They've also got different incentives in play compared to open weights models, which are built with self-deployment by others in mind. For the labs doing inference at scale, I think it's less about total parameters and more about the number of active parameters when thinking about inference costs. So there's a bit of an incentive towards larger, sparser models. Agreed.
Micah [00:38:38]: Understood. Yeah. Obviously, if you're a developer or a company using these things, exactly as you say, it doesn't matter. You should be looking at all the different ways we measure intelligence, at the cost to run the index, and at the different ways of thinking about token efficiency and cost efficiency based on list prices, because that's what matters.
swyx [00:38:56]: It's not as good for the content-creator rumor mill, where I can say: GPT-4 is this small circle, look, GPT-5 is this big circle. That used to be a thing for a while. Yeah.
Micah [00:39:07]: But that is, on its own, actually a very interesting one, right? Chances are the last couple of years haven't seen a dramatic scaling up in the total size of these models. So there's a lot of room to go up properly in total model size, especially with the upcoming hardware generations. Yes.
swyx [00:39:29]: So, taking off my shitposting hat for a minute — at the same time, I do feel, especially coming back from Europe, that people feel Ilya is probably right that the paradigm doesn't have many more orders of magnitude left to scale, and therefore we need to start exploring at least a different path. GDPval, I think, is only a month or so old. I was also very positive on it when it first came out. I actually talked to Tejal, who was the lead researcher on that. Oh, cool.
And you have your own version.
George [00:39:59]: It's a fantastic data set. Yeah.
swyx [00:40:01]: Maybe we'll recap for people who are out of the loop: it's 44 tasks, based on some kind of GDP cutoff, meant to represent broad white-collar work that is not just coding. Yeah.
Micah [00:40:12]: Each of the tasks has a whole bunch of detailed instructions, and input files for a lot of them. The 44 divide into something like 220 to 225 subtasks, which is the level at which we run them through the agent. And yeah, they're really interesting. I will say it doesn't necessarily capture all the stuff people do at work — no eval is perfect; there are always going to be more things to look at — largely because, in order to define the tasks well enough that you can run them, they need to have only a handful of input files and very specific instructions. So I think the easiest way to think about them is as quite hard take-home exam tasks that you might do in an interview process.
swyx [00:40:56]: Yeah, for listeners: it's no longer just a long prompt. It's "here's a zip file with a spreadsheet or a PowerPoint deck or a PDF — go nuts and answer this question."
George [00:41:06]: OpenAI released a great dataset, and they released a good paper which looks at performance across the different web chatbots on the dataset. It's a great paper; I encourage people to read it. What we've done is take that dataset and turn it into an eval that can be run on any model. So we created a reference agentic harness that runs the models on the dataset, and then we developed an evaluator approach to compare outputs. It's AI-enabled — it uses Gemini 3 Pro Preview to compare results — and we tested it pretty comprehensively to ensure it's aligned with human preferences.
One data point there: even with Gemini 3 Pro as the evaluator, Gemini 3 Pro itself, interestingly, doesn't actually do that well. So that's a good example of what we've done in GDPval-AA.
swyx [00:42:01]: Yeah, the thing you have to watch out for with LLM-as-judge is self-preference — models usually prefer their own output — and in this case, it didn't. Totally.
Micah [00:42:08]: I think the places where it makes sense to use an LLM-as-judge approach now are quite different from some of the early LLM-as-judge work a couple of years ago — MT-Bench was a great project and a good example of that — which was about judging conversations and a lot of style-type stuff. Here, the task the grading model is doing is quite different from the task of taking the test. When you're taking the test, you've got all the agentic tools you're working with — the code interpreter, web search, the file system — going through many, many turns to try to create the documents. Then on the grading side, we run the outputs through a pipeline to extract visual and text versions of the files so we can provide them to Gemini, we provide the criteria for the task, and we get it to pick which of the two outputs more effectively meets those criteria. It turns out it's very, very good at getting that right — it matched human preference a lot of the time — because it has the raw intelligence, combined with a correct representation of the outputs, the fact that the outputs were created via an agentic task quite different from how the grading model works, and the fact that we're comparing against criteria, not just zero-shot asking the model which one is better.
swyx [00:43:26]: Got it. Why is this an Elo?
And not a percentage, like GDPval?
George [00:43:31]: The outputs look like documents, and there are video or audio outputs from some of the tasks.
swyx: It has to make a video?
George: Yeah, for some of the tasks.
swyx [00:43:43]: What task is that?
George [00:43:45]: I mean, it's in the data set.
swyx: Like, be a YouTuber?
George: It's a marketing video.
Micah [00:43:49]: Oh, wow. The model has to go find clips on the internet and try to put them together. The models are not that good at that one, for now, to be clear. It's pretty hard to do with a code editor — the computer use stuff doesn't work quite well enough, and so on.
George [00:44:02]: And so there's no ground truth, necessarily, to compare against to work out a percent correct — it's hard to say correct or incorrect there. So it's on a relative basis, and we use an Elo approach to compare outputs from each of the models across the tasks.
swyx [00:44:23]: You know what you should do? You should pay a contractor — a human — to do the same tasks, give them an Elo, and then you have a human baseline in there. I think what's helpful about GDPval, the OpenAI one, is that 50% is meant to be a normal human — maybe a domain expert is higher than that — but 50% was the bar: if you've crossed 50, you're superhuman. Yeah.
Micah [00:44:47]: So we haven't grounded this score in that exactly. I agree it can be helpful, but we wanted to generalize this to a very large number of models. That's one of the reasons presenting it as an Elo is quite helpful: it allows us to add models, and it'll stay relevant for quite a long time. I also think comparing these exact tasks against human performance can be tricky, because the way you'd go about them as a human is quite different from how the models go about them. Yeah.
swyx [00:45:15]: I also liked that you included Llama 4 Maverick in there.
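The relative, Elo-based comparison George describes can be sketched with standard Elo updates over pairwise judge verdicts. This is a minimal illustration, not AA's actual implementation — the K-factor of 32 and the 1000-point starting rating are assumptions.

```python
# Minimal Elo over pairwise judge verdicts: each comparison says which
# of two model outputs better met the task criteria; ratings update
# with the standard logistic expected-score formula.

def expected(r_a, r_b):
    # Probability that A beats B under the Elo model.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def run_elo(comparisons, k=32, start=1000.0):
    """comparisons: list of (winner, loser) model-name pairs."""
    ratings = {}
    for winner, loser in comparisons:
        ra = ratings.setdefault(winner, start)
        rb = ratings.setdefault(loser, start)
        ea = expected(ra, rb)
        ratings[winner] = ra + k * (1 - ea)
        ratings[loser] = rb - k * (1 - ea)
    return ratings

r = run_elo([("model-a", "model-b"), ("model-a", "model-c"),
             ("model-b", "model-c")])
print(sorted(r, key=r.get, reverse=True))  # → ['model-a', 'model-b', 'model-c']
```

Because scores are relative, new models can be added later by running new pairwise comparisons, without any absolute "percent correct" ground truth — which is the point George makes about tasks with no single right answer.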
Is that like just one last, like...
Micah [00:45:20]: Well, no — it is the best model released by Meta. And so it makes it into the homepage default set. Still, for now.
George [00:45:31]: Another inclusion that's quite interesting: we also ran it across the latest versions of the web chatbots. And so we have...
swyx [00:45:39]: Oh, that's right.
George [00:45:40]: Oh, sorry.
swyx [00:45:41]: Yeah, I completely missed that. Okay.
George [00:45:43]: No, not at all. That's the one with the checkered pattern.
swyx: So that is their harness, not yours, is what you're saying.
George: Exactly. And what's really interesting is that if you compare, for instance, Claude Opus 4.5 via the Claude web chatbot, it performs worse than the same model in our agentic harness. In every case, the model performs better in our agentic harness than in its web chatbot counterpart — the harness the lab created.
swyx [00:46:13]: My backwards explanation for that would be: well, the chatbot is meant for consumer use cases, and here you're pushing it toward something else.
Micah [00:46:19]: The constraints are different, and the amount of freedom you can give the model is different. You also have a cost goal; we let the models work as long as they want, basically. Yeah.
swyx: Do you copy-paste manually into the chatbot?
Micah: Yeah, that was how we got the chatbot reference. We're not going to keep those updated at quite the same scale as hundreds of models.
swyx [00:46:38]: Well, talk to Browserbase — they'll automate it for you. I have thought that we should turn these chatbot versions into an API, because they are legitimately different agents in themselves. Yes. Right. Yeah.
Micah [00:46:53]: And that's grown a huge amount over the last year, right? Like the tools.
The tools that are available have actually diverged a fair bit, in my opinion, across the major chatbot apps, and the number of data sources you can connect them to has gone up a lot, meaning that your experience and the way you're using the model is more different than ever.
swyx [00:47:10]: What tools and what data connections come to mind? What's notable work that people have done?
Micah [00:47:15]: Okay, so my favorite example: until very recently, I would argue it was basically impossible to get an LLM to draft an email for me in any useful way. Because most times you're sending an email, you're not just writing something for the sake of writing it. Chances are the context required is a whole bunch of historical emails, maybe notes you've made, meeting notes, or something pulled from wherever you store stuff at work. For me, that's Google Drive, OneDrive, and our Supabase databases if we need to do some analysis on some data. Preferably, the model can be plugged into all of those things and can go do useful work based on them. The thing I find most impressive currently — that I'm somewhat surprised works really well in late 2025 — is that I can have models use the Supabase MCP, read-only of course, to run a whole bunch of SQL queries and do pretty significant data analysis, make charts and stuff, and read my Gmail and my Notion.
swyx: Okay, you actually use that. That's good. Is that a Claude thing?
Micah: To varying degrees in both ChatGPT and Claude. Right now, I would say this stuff barely works, in fairness.
George [00:48:33]: Because people are actually going to try this after they hear it.
If you get an email from Micah, odds are it wasn't written by a chatbot.
Micah [00:48:38]: Yeah, it is true that I have never actually sent anyone an email drafted by a chatbot. Yet.
swyx [00:48:46]: And so you can feel it coming, right? This time next year, we'll come back and see where it's going. Totally. Supabase shout-out — another famous Kiwi. I don't know if you've had conversations with him about anything in particular on AI building and AI infra.
George [00:49:03]: We have had Twitter DMs with him, because we're quite big Supabase users and power users — and we probably do some things more manually than we should. The Supabase support line has been super friendly. One extra point regarding GDPval-AA: on the basis of the models' overperformance compared to the chatbots, we realized that the reference harness we built actually works quite well on generalist agentic tasks — this proves it, in a sense. And the agent harness is very minimalist. I think it follows some of the ideas in Claude Code: all we give it is context management capabilities, a web search and web browsing tool, and a code execution environment. Anything else?
Micah [00:50:02]: I mean, we can equip it with more tools, but by default, yeah, that's it. For GDPval we give it a tool to view an image specifically, because the models can just use a terminal to pull stuff in text form into context, but to pull visual stuff into context we had to give them a custom tool.
George [00:50:21]: So it turned out we'd created a good generalist agentic harness, and we released it on GitHub yesterday. It's called Stirrup.
So if people want to check it out — it's a great base for building a generalist agent for more specific tasks.
Micah [00:50:39]: I'd say the best way to use it is to git clone it and then have your favorite coding agent make changes to it to do whatever you want, because it's not that many lines of code and the coding agents can work with it super well.
swyx [00:50:51]: Well, that's nice for the community to explore, share, and hack on. In other similar environments, the Terminal-Bench guys have done Harbor — it's a bundle of their minimal harness, which for them is Terminus, plus the RL environments and Docker deployment piece so it can run independently. I don't know if you've looked at Harbor at all. Is that a standard people want to adopt?
George [00:51:19]: Yeah, we've looked at it from an evals perspective — we love Terminal-Bench, and we host Terminal-Bench benchmarks on Artificial Analysis. We've looked at it from a coding-agent perspective, but could see it being a great basis for any kind of agent. I think where we're getting to is that these models have gotten smart enough, and have good enough tools, that they perform better when just given a minimalist set of tools and let run — let the model control the agentic workflow, rather than using a more built-out framework that tries to dictate the flow. Awesome.
swyx [00:51:56]: Let's cover the openness index, and then let's go into the report stuff. That's the last of the proprietary numbers, I guess — I don't know how you classify all these. Yeah.
Micah [00:52:07]: Call it the last of the three new things that we're talking about from the last few weeks.
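The "minimalist harness" idea — give the model a few tools and let it control the flow — reduces to a short loop. The sketch below is a generic illustration under assumed action and tool shapes; it is not Stirrup's actual code or API (Stirrup itself is on GitHub).

```python
# Generic minimal agent loop: the model, not the framework, decides
# which tool to call next or when to stop. The tool names, the action
# dict shape, and call_model are all hypothetical stand-ins.

TOOLS = {
    "web_search": lambda q: f"search results for {q!r}",  # stub
    "run_code":   lambda src: "code output",              # stub
}

def run_agent(task, call_model, max_turns=20):
    """call_model(messages) must return either
    {'type': 'final', 'content': ...} or
    {'type': 'tool', 'tool': name, 'input': ...}."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        action = call_model(messages)
        if action["type"] == "final":
            return action["content"]
        # Execute the requested tool and feed the result back in.
        result = TOOLS[action["tool"]](action["input"])
        messages.append({"role": "tool", "content": result})
    return None  # ran out of turns without a final answer
```

Each tool result is appended to the transcript and the model decides the next step itself — the "let the model dictate the flow" point, as opposed to a framework that hard-codes the sequence of steps.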
Because we do a mix of stuff: work where we use open source, work we open source ourselves, and proprietary stuff that we don't always open source. The long context reasoning dataset last year — that's AA-LCR, for people — we did open source. And then there's all the work on performance benchmarks across the site; some of those we're looking to open source, but some we're constantly iterating on. So there's a huge mix of what is and isn't open source across the site.
swyx [00:52:41]: But let's talk about the openness index.
Micah [00:52:42]: Let's talk about the openness index. This is, call it, a new way to think about how open models are. For a long time, we've tracked whether models are open weights and what the licenses on them are. That's pretty useful — it tells you what you're allowed to do with the weights of a model — but there's a whole other dimension to how open models are that's pretty important, and that we haven't tracked until now: how much is disclosed about how the model was made. So, transparency about data — pre-training data and post-training data — and whether you're allowed to use that data, plus transparency about methodology and training code. Those are the components. We bring them together into an openness index score so that, in one place, you get the full picture of how open models are.
swyx [00:53:32]: I feel like I've seen a couple of other people try to do this, but they're not maintained. I do think this matters. I don't know what the numbers mean, though — is there a max? Is this out of 20?
George [00:53:44]: It's out of 18 currently. We've got an openness index page, but essentially you get points for being more open across these different categories, and the maximum you can achieve is 18.
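A sketch of the additive scoring George describes — points per disclosure category, summing to a maximum of 18. The category names and per-category maxima below are invented for illustration; AA's actual rubric isn't specified in the conversation.

```python
# Illustrative openness scoring: points per category, capped at each
# category's maximum, summing to 18. Categories and maxima are assumed,
# not AA's actual rubric.

CATEGORY_MAX = {
    "weights_and_license": 4,
    "pretraining_data": 5,
    "posttraining_data": 5,
    "methodology_and_code": 4,
}  # maxima total 18

def openness_index(scores):
    """scores: dict of category -> points earned for that disclosure."""
    return sum(min(pts, CATEGORY_MAX[cat]) for cat, pts in scores.items())

fully_open = {"weights_and_license": 4, "pretraining_data": 5,
              "posttraining_data": 5, "methodology_and_code": 4}
print(openness_index(fully_open))  # → 18
```

The cap per category means a model can't compensate for, say, undisclosed training data by being extra permissive on weights licensing — each dimension of openness counts separately.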
So AI2, with their extremely open Olmo 3 32B Think model, is the leader, in a sense.
swyx [00:54:04]: And Hugging Face?
George [00:54:05]: Oh, with their smaller model — it's coming soon. I think we need to run the intelligence benchmarks to get it on the site.
swyx [00:54:12]: You can't have an openness index and not include Hugging Face. We love Hugging Face.
George: We'll have that up very soon.
swyx: I mean, RefinedWeb and all that stuff — it's amazing. Or is it called FineWeb?
Micah: FineWeb. Yeah, totally. Yep. One of the reasons this is cool is that if you're trying to understand the holistic picture of the models and what you can do with all the stuff a company is contributing, this gives you that picture. And so we are going to keep it up to date alongside all the models we run the intelligence index on. It's just an extra view for understanding.
swyx [00:54:43]: Can you scroll down to the trade-offs chart? Yeah, that one. This really matters, right? Obviously, because you can b
Allen, Joel, and Yolanda examine TPI Composites’ Chapter 11 proceedings, including the Oaktree Capital secured debt controversy and Vestas’ acquisition of two Mexican factories. With remaining assets heading to auction in January, they discuss what operators should consider as blade supply uncertainty grows. Sign up now for Uptime Tech News, our weekly email update on all things wind technology. This episode is sponsored by Weather Guard Lightning Tech. Learn more about Weather Guard’s StrikeTape Wind Turbine LPS retrofit. Follow the show on Facebook, YouTube, Twitter, Linkedin and visit Weather Guard on the web. And subscribe to Rosemary Barnes’ YouTube channel here. Have a question we can answer on the show? Email us! The Uptime Wind Energy Podcast brought to you by Strike Tape, protecting thousands of wind turbines from lightning damage worldwide. Visit strike tape.com. And now your hosts, Allen Hall, Rosemary Barnes, Joel Saxum and Yolanda Padron. Welcome to the Uptime Wind Energy Allen Hall: Podcast. I’m your host, Allen Hall. I’m here with Yolanda Padron and Joel Saxum. Rosemary Barnes is on holiday. We’re here to talk about the TPI composites, uh, bankruptcy hearings, and there’s been so much happening there behind the scenes. It’s hard to keep track of, but we’ve done a deep dive and wanted to give everybody at least a highlight of what has happened over the last couple of months. So, uh, if you do own vessels or GE turbines, you understand what the situation is. As we all know, TPI composites, gee, was the world’s largest independent of wind blade manufacturing. Uh, they [00:01:00] were, it, they built blades for renova, Vestas, Nordex. They built blades for almost everybody, uh, names that basically power the global energy transition. And then, uh, if, and a lot of people don’t know this, but back in December of 2023, uh, TPI struck a deal that is drawing some fire. Right now, TPI swapped $436 million in preferred stock for. 
$393 million in secure debt held by Oak Tree Capital and by August of last year, just a couple of months ago, TPI filed for Chapter 11. Now the Blade Makers assets are being carved up and sold, and two of wind energy’s biggest players are stepping in to keep production running while the bankruptcy plays out. Now, Joel and Yolanda, I, I think the bankruptcy of. TPI sort of came to the industry as a little bit of a shock. Obviously [00:02:00] the, the price had fallen quite a bit. Uh, if you’ve watched the stock price of TPI composites had been dropping for a while and didn’t have a lot of of market value. However, uh, GE and Vestas both have manufacturing facilities basically with uh, TPI composites and, and needs them to produce those blades. So the filing of the bankruptcy, I’m sure was a nervous point for Vestus and GE being really the, the two main ones. Joel Saxum: Well, I think we talked about this a little bit off air. Is it, it shouldn’t just be Vestus and GE nervous about this now. It should be every operator that’s in either in development or still has blades under warranty. Uh, so, and this is a not a US problem, this is a global problem. ’cause TPI is a global company that serves, uh, global industry all over the place, right? We know that a large percentage of their throughput was GE and Vestas, but also Siemens ESAs in there, you name it, right? The, any major operator’s gonna have some blades built [00:03:00] by TPI or op major, OEM. So. There isn’t gonna be much of a, uh, dark corner of the wind industry that this issue doesn’t touch. So I think they, the, one of the issues here is, um, we’ve, we’ve, we’ve heard about some issues going on with TPI, but it was almost like a, ah, they’re not, they’ll, they’ll be okay. They, so, so something will happen. I mean, Yolanda, you had said. What was it that you said ear earlier? Like, uh, the kind of the, the, the feeling about it. Yolanda Padron: They’ll take care of it. 
You know, OEMs will take care of it and we’ll be fine. Joel Saxum: Someone’s gonna support this thing. Yolanda Padron: Yeah. I, I think teams, you’re, you’re definitely right. Teams really do need to at least think of a, of a plan B or a plan C to have when the dust settles so you’re not scrambling. Allen Hall: Yeah. And it hasn’t really played out that way. Uh, Vestas has stepped in a little bit and GE has stepped in. Not in terms of acquiring any of the major assets, but I think the first question is what is Oaktree Capital’s, [00:04:00] uh, role in all this? And that is being played out right now in front of the bankruptcy court. Uh, so when you go to bankruptcy, there’s obviously a lot of oversight that happens there, uh, and. When TPI composites entered bankruptcy, the accreditors committee had a bunch of questions about that transaction. Uh, they pointed to a December, 2023 refin refinancing deal with Oaktree and in which creditors were really suspicious of basically saying that TPI was already insolvent in 2023 and Oaktree exchanged equity for secure debt jumping ahead of everybody else in line to get paid. So because they Oaktree has secured debt, they’re first in line to get paid. If, uh, weather Guard was involved selling parts to TPI, which thank goodness we weren’t, we would be unsecured. They wouldn’t have to pay us. So Oaktree would get paid first and everybody else is unsecured, gets paid [00:05:00] later. Uh, that’s okay. I mean, that’s the way they, uh, they structured it. But this has led to a problem, right? So that oak tree. Uh, was supposed to release about $20 million in funding to keep the factories open, and that, that happened just a couple of weeks ago, and Oaktree refused to do it. So the amount of cash flow to keep the factories open was a real issue. TPI was in front of the court saying, we’re in trouble. We’re gonna become insolvent. We don’t have cash flow to keep the doors open. 
So the blade factories nearly shut down a couple of weeks ago. However, there was a, the settlement, uh, just after that, uh, in regards to Oaktree about when the payouts happen, what Oaktree will receive, and which basically it’s, most of whatever’s gonna happen here. So whatever, uh, TPI decides to sell or can sell, Oaktree is gonna be the recipient of those funds for most of it. I think the Joel Saxum: difficult thing here for. The [00:06:00] general listener, me included, is understanding that this is a very complicated legal process that’s governed and it’s global, right? So it’s governed in certain court systems in different places. And because there is also the idea of like say in the, in the United States, the SEC Securities Exchanges Commission, that kind of regulates these. Publicly traded companies. There’s a lot of lights and there’s a lot of lawyers and there’s a lot of jargon involved in this thing. And, but basically what what we’re saying is, is the way the process works when you have a, uh, a bankruptcy and insolvency, if a company has debt to certain people, there may be a list of a hundred people. There may be a list of two, doesn’t matter. There’s certain classes of debt, right? And Oaktree has secured debt, which means. If they get paid first, if there’s anything, right? If this bankruptcy goes and, and gets, sell this, sell that, sell this, whatever’s left, goes to the secured debt and then it goes to unsecured debt. And [00:07:00] there’s sometimes there can be different classes of unsecured debt as well. And, but if there’s not, some of it just goes by like date or value or everybody gets a percentage, it just kind of all depends on how it works out in the specific court system that the stuff takes care of. But that person. That is the top. Um, in this case, Oaktree Capital, right? Based out of la but offices all over the world, they got about $200 billion in real estate equity and debt assets or, uh, I guess valuation. 
I wouldn’t say assets. Um, they are the debtor in possession, so they’re the one that’s kind of like top of the heap. They’re kind of controlling how the. The restructuring and or sale goes alongside the court system. Allen Hall: And the trouble is, is that when you have unsecured and secured debt, everybody that’s unsecured wants to get paid. So any material supplier that has been for in selling product to TPI over the years [00:08:00] usually has a 30, 60, 90, maybe 120 days of, of after they deliver the product to they get paid. In that timeframe, if bankruptcy happens, all that product that’s sitting on the floor at TPI, you sort of lost it. You know, you can’t get it back and you’re not gonna get paid for it for if, if, if ever, what do you do? And so you start, you know, you start filing claims, but those, those claims most likely will never get paid. Or if they will, they’re going to get pennies on the dollar. Joel Saxum: Yeah. And I would imagine like, so, you know, when we, when we sit here and say from the weather guard hat, right? We put a. They go to a client, net 15, net 30, we expect to get paid in that amount of time. That’s kind of how our, basically US forwarding credit to someone else. That’s how it works. And if you work within the wind industry, you know that the OEMs, because they are the OEMs, they have a heavier hand. Sometimes they’re net 90, net one 20. Um, once they, once they’re cool with your invoice. So you could see that some of these people that have, [00:09:00] uh, and TPI falls within that OEM category, right? Um, you can see that they more than likely will have had longer, more favorable terms for themselves with some of these sub-suppliers. And the sub-suppliers are, think about TPI blades. It is composites, it is fabric, it’s resins, it’s all of those supply companies. Um, and you know, there may be, uh, some other. Dead in there that you’re not, we’re not sure of. 
We saw some things with some OEMs; maybe they have some exchange agreements where you paid up front for some blades and didn't get them. I don't know. But here's the part that hits home for some of our listeners. Some of our listeners are those supply chain companies that support TPI, but a lot of them are ISPs, independent service providers. We were just talking to someone a couple of weeks ago who had done some inspection work for TPI that they're not going to get paid for. We have seen ISPs we know on the creditors list who are not going to get paid, and those are people out [00:10:00] doing warranty repairs and that kind of work over a course of time. They may have had net 30, net 60, or net 90 payment terms, but I'm sure that money is long gone; they probably have invoices that have been due for a year now. This downfall of TPI affects a lot of people in the wind industry. Having been on the short end of an unsecured debt once in my career, when a purchaser of services went into bankruptcy and I lost a whole bunch of cash with nothing to do about it except be mad, stew over it, and learn from my mistakes, I can tell you that's a tough place to be.

Speaker 5: Australia's wind farms are growing fast, but are your operations keeping up? Join us February 17th and 18th at Melbourne's Pullman on the Park for Wind Energy O&M Australia [00:11:00] 2026, where you'll connect with the experts solving real problems in maintenance, asset management, and OEM relations. Walk away with practical strategies to cut costs and boost uptime that you can use the moment you're back on site. Register now at WOMA2026.com.
Wind Energy O&M Australia is created by wind professionals, for wind professionals, because this industry needs solutions, not speeches.

Allen Hall: The problem with TPI has been keeping the doors open. They went in front of the court and said, we have a liquidity problem. Vestas bought those two factories, those two LLCs, for $10 million each. That was the agreement. During that transaction, TPI asked for another $55 million; it's in the transcripts, you can go listen to it. But obviously the Vestas representatives said no [00:12:00] way, we're not doing that; we in good faith decided to buy these two pieces. So $10 million a factory is a pretty decent price, but TPI is still in a liquidity challenge. GE Vernova and Vestas don't want the blade manufacturing to stop. They have customers who need blades, so they need these TPI factories to keep running. GE Vernova is providing emergency financing through what the court calls a GE Vernova liquidity agreement. They also signed a long-lead materials agreement to keep raw materials moving into the plants. Vestas provided cash advances to keep production going at the Mexico facilities as well. So for now everything continues to run, but essentially GE and Vestas are prepaying for the materials to keep the production lines going. And on the back end of this, TPI is essentially [00:13:00] going to charge GE and Vestas less for the blades when they roll off the line, because those funds were advanced. So TPI as an organization is still trying to produce blades and honor its commitments as much as it can, but it needs cash, and the place it's going to get it, and has been getting it, is Vestas and GE Vernova.

Joel Saxum: So one would expect that either Vestas or GE Vernova would eventually just say, we've got to buy you. Is that a reality?
Because it doesn't seem like it from the court documents. It seems like they don't want to get their hands back into, in GE's case, these blade manufacturing facilities, right? They're okay right now providing cash for TPI to keep its operation running and providing them with the things they need, but they don't actually want to take it over. That's what it feels like.

Allen Hall: Well, Vestas did, right? Vestas took over two factories in Mexico. GE has not done [00:14:00] that yet, and there's no indication in any of the proceedings documents I read that GE has made any move to do that. Vestas definitely stepped in and wants to keep the two factories running. With the issues at GE Vernova and LM at the minute, and the layoffs at LM just before the new year, it's a question of what GE will do, and as of right now it doesn't seem like GE is going to buy factories. Now, that being said, TPI Composites has deadlines to meet and some auctions to run. The remaining assets, the non-Vestas portion (the Turkish operations were sold much earlier), all go up for bid on January 26th. And if no outside buyer steps in, which is very possible, Oaktree Capital can use its debt as currency to take ownership through what is called a credit bid. [00:15:00] From there, the secured lender could convert that debt into equity, and basically Oaktree Capital would be the holder of whatever remains of the company. You would think that GE Vernova would want some piece of this to keep the blade factories running, but there's no indication of that. No one from GE has said anything, and none of the filings indicate that GE Vernova wants to go ahead and buy the factories. Nothing like that has happened.
So there may be some more financial transactions at play here, but as of right now, everything that remains of TPI Composites is going to be on the auction block. Someone could walk up and, for several million dollars obviously, acquire it and, in theory, run it.

Joel Saxum: So, Alan, you and I talked about this a little this morning. We have seen more [00:16:00] layoffs at LM. We saw more people depart, and it sounds like that building is basically a ghost town over in Denmark. GE is basically scuttling LM down to nothing, and they will more than likely either sell off whatever LM has or discontinue that business model, if that's where they're going blade-wise, wind-wise. At the same time, they've also said, we're not building any more GE offshore turbines.

Allen Hall: What are they doing?

Joel Saxum: I don't see them having the thirst to go scoop up or put any money into TPI, but it's like a catch-22, because they need TPI to fulfill the orders they have. Right now, what we're staring at is basically Oaktree Composites.

Allen Hall: There's no chance of that. Oaktree doesn't know how to run that business; they would have to hire somebody to do it. Even if they did, you've still got factories in Iowa, a bunch in Mexico, and other [00:17:00] places. You have all these assets spread all over the place. It's not like running an automotive dealership on the corner; you're running a major operation with thousands of employees, producing massively complex blades. There's only a handful of companies that could even possibly acquire that and run it with any competency at all right now.

Joel Saxum: So, given that Oaktree is the debtor-in-possession lender, if it rolls this way with the toggle plan, right,
where the debt would basically convert into equity holdings and they would own it, are they the gatekeepers to who can bid? Like, do they control whether GE can bid, whether Vestas can bid? Or does the court control that?

Allen Hall: The court controls all of that. It's all part of the Chapter 11 proceedings. Anybody can walk up and put in a bid. Now, whether it qualifies or not is a good question, but anybody can walk up and [00:18:00] make a claim for what remains. There is a process that will happen there, but who else would it be? Nordex? I don't think so. Is Vestas going to buy more? I don't think so. So the concern, obviously, is what TPI is going to look like going forward. If you have purchased Vestas turbines or GE Vernova turbines, are you going to get the blades you purchased in time? Those are great questions to ask. On the other side: if you do own GE Vernova or Vestas turbines and they're made by TPI, where the technical aspects lie, what do you do? What should you be thinking about if you're a large operator of some of these turbines? How should you be planning for the future here? What are you thinking about?

Joel Saxum: So let's divide it into two categories. One is turbine blades on order in the supply [00:19:00] chain, and the other is turbine blades already in production or received.

Yolanda Padron: I'm not sure we can fully look at them separately, though, right? Because if they're yours and they're under a service agreement or something, eventually you might be in the queue for a replacement that you need, right? One that your OEM would be on the hook for.

Joel Saxum: That raises another question then, and I don't know this, maybe you do, Alan: does a bankruptcy qualify as a force majeure event?

Allen Hall: Not in the way lightning would be, but in some sense, yeah, sure.

Joel Saxum: Yeah.
But can they claim force majeure and say it was out of their control? So now the turbine supply agreements basically have to be rewritten; timelines have to be rewritten. Yolanda, to your point, if we have a blade that we need for production, am I no longer responsible for LDs because the blade manufacturer went into bankruptcy?

Yolanda Padron: I think it would be more that [00:20:00] now you're not just in the queue for TPI blades; you're in the queue for whatever they can retrofit there, right? Whatever they could put in.

Joel Saxum: Yeah. The alternative is that you need a whole set, though, right? So if we say, I need a blade from TPI, or I need an entire set of LM blades, now you're at triple the cost. Who has to pay for that?

Yolanda Padron: I really would hope they wouldn't go this route, but I think some OEMs would just pay liquidated damages and stop.

Allen Hall: That's what I think too. We've seen that happen with some of the OEMs: it's the LDs and that's it, there is nothing going forward. They're fine doing that; that's the only play they have. I am deeply concerned about what GE Vernova is about to do in the wind business, because their gas turbine business and everything else are so profitable, and they just announced that the wind business in 2026 is not likely to generate any positive cash flow. [00:21:00] The discussion inside GE Vernova, at least at the boardroom level, must be really tense, because in theory they could buy TPI's assets and the factories and run them, but they just went through essentially a liquidation process with LM. Do they want to run another company, especially when they're bleeding cash in that particular business? I think the answer for GE, historically, has been no: if we're not number one or number two, we're getting the heck out of that business.
That was the Jack Welch way of running GE, and anybody who worked for GE knew it loud and clear, because he said it all the time. Those same people who grew up in that GE culture are now in the boardroom, and what are they likely to do? They're likely to follow that advice, because it's what they know; it's the school they went to. Are they going to change their minds and say, the longer-term play is wind, [00:22:00] we want to stay in it, we're willing to lose a couple hundred million dollars a year for the next couple of years, and now we're going to run a blade factory with several thousand employees down in Mexico? I just don't see it. Not that I couldn't be totally wrong about that; I probably am. But today, sitting at the beginning of January 2026, I don't think GE Vernova wants to be in the blade manufacturing business if they can at all avoid it.

Yolanda Padron: I think it's important for owners to start thinking a lot more about educating their internal teams on what they can do. If it's through people within your OEM that you can trust, who can help you learn how to self-service some of your blades, that would be great. If it's through ISPs that you can trust, or a hodgepodge of options, fine. I think it's really important for owners to start building that up right now, because it will take a while, [00:23:00] and the risk is there.

Allen Hall: That wraps up another episode of the Uptime Wind Energy Podcast. If today's discussion sparked any questions or ideas, we'd love to hear from you. Reach out to us on LinkedIn, and don't forget to subscribe so you never miss an episode. And if you found value in today's conversation, please leave us a review; it really helps other wind energy professionals discover the show. We will catch you here next week on the Uptime Wind Energy Podcast.
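The payout ordering described in the episode (secured claims recover first, and whatever remains is split among unsecured claims, often pro rata) can be sketched as a simple waterfall. This is an illustrative sketch only; the creditor names and dollar amounts below are hypothetical, not figures from the TPI case:

```python
def waterfall(proceeds, secured, unsecured):
    """Toy Chapter 11 payout waterfall: secured claims recover first,
    then any remainder is shared pro rata by unsecured claims."""
    payouts = {}
    # Secured creditors are paid up to their claim while funds last.
    for name, claim in secured.items():
        paid = min(claim, proceeds)
        payouts[name] = paid
        proceeds -= paid
    # Unsecured creditors split whatever is left, in proportion to claim size.
    total_unsecured = sum(unsecured.values())
    for name, claim in unsecured.items():
        payouts[name] = proceeds * claim / total_unsecured if total_unsecured else 0.0
    return payouts

# Hypothetical numbers: $50M of sale proceeds, a $40M secured claim,
# and $30M of unsecured trade debt (a materials supplier and an ISP).
result = waterfall(
    50.0,
    {"secured_lender": 40.0},
    {"materials_supplier": 20.0, "inspection_isp": 10.0},
)
```

Under these assumed numbers, the secured lender recovers in full while the unsecured trade creditors recover roughly a third of their claims, which is the "pennies on the dollar" dynamic the hosts describe for TPI's sub-suppliers and ISPs.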
Daniel Muñoz and Luis F. Quintero discuss all the economic news of the day, centered on LM's report on Spain's fiscal hell.
In the days of auld lang syne, everybody!? Join Spencer, Ty, and Andy as they usher in another new year with another round of Ins and Outs, including low-waisted pants, political assassins, and who's still going to be considered white in the next 365 days. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. TGOFV Theme by World Record Pace. A big shout-out to our $10/month patrons: Abbie Phelps, Adam W, Anthony Cabrera, asdf, Axon, Baylor Thornton, Bedi, bernventers, bunknown, Celeste, Charles Doyle, Dane Stephen, Dave Finlay, David Gebhardt, Dean, Francis Wolf, Heather-Pleather, Jacob Sauber-Cavazos, James Lloyd-Jones, Jennifer Knowles, Jeremy-Alice, Josh O'Brien, Kilo, LM, Lawrence, Louis Ceresa, Malek Douglas, Newmans Own, Packocamels, Phat Ass Cyberman, Rach, raouldyke, Rebecca Kimpel, revidicism, Sam Thomas, T, Tash Diehart, Themandme, Tomix, weedworf, William Copping, and Yung Zoe!
From the frontlines of OpenAI's Codex and GPT-5 training teams, Bryan and Bill are building the future of AI-powered coding—where agents don't just autocomplete, they architect, refactor, and ship entire features while you sleep. We caught up with them at AI Engineer Conference right after the launch of Codex Max, OpenAI's newest long-running coding agent designed to work for 24+ hours straight, manage its own context, and spawn sub-agents to parallelize work across your entire codebase.

We sat down with Bryan and Bill to dig into what it actually takes to train a model that developers trust: why personality, communication, and planning matter as much as raw capability; how Codex is trained with strong opinions about tools (it loves rg over grep, seriously); why the abstraction layer is moving from models to full-stack agents you can plug into VS Code or Zed; how OpenAI partners co-develop tool integrations and discover unexpected model habits (like renaming tools to match Codex's internal training); the rise of applied evals that measure real-world impact instead of academic benchmarks; why multi-turn evals are the next frontier (and Bryan's "job interview eval" idea); how coding agents are breaking out of code into personal automation, terminal workflows, and computer use; and their 2026 vision: coding agents trusted enough to handle the hardest refactors at any company, not just top-tier firms, and general enough to build integrations, organize your desktop, and unlock capabilities you'd never get access to otherwise.

We discuss:
* What Codex Max is: a long-running coding agent that can work 24+ hours, manage its own context window, and spawn sub-agents for parallel work
* Why the name "Max": maximalist, maximization, speed and endurance—it's simply better and faster for the same problems
* Training for personality: communication, planning, context gathering, and checking your work as behavioral characteristics, not just capabilities
* How Codex develops habits like preferring rg over grep, and why renaming tools to match its training (e.g., terminal-style naming) dramatically improves tool-call performance
* The split between Codex (opinionated, agent-focused, optimized for the Codex harness) and GPT-5 (general, more durable across different tools and modalities)
* Why the abstraction layer is moving up: from prompting models to plugging in full agents (Codex, GitHub Copilot, Zed) that package the entire stack
* The rise of sub-agents and agents-using-agents: Codex Max spawning its own instances, handing off context, and parallelizing work across a codebase
* How OpenAI works with coding partners on the bleeding edge to co-develop tool integrations and discover what the model is actually good at
* The shift to applied evals: capturing real-world use cases instead of academic benchmarks, and why ~50% of OpenAI employees now use Codex daily
* Why multi-turn evals are the next frontier: LM-as-a-judge for entire trajectories, Bryan's "job interview eval" concept, and the need for a batch multi-turn eval API
* How coding agents are breaking out of code: personal automation, organizing desktops, terminal workflows, and "Devin for non-coding" use cases
* Why Slack is the ultimate UI for work, and how coding agents can become your personal automation layer for email, files, and everything in between
* The 2026 vision: more computer use, more trust, and coding agents capable enough that any company can access top-tier developer capabilities, not just elite firms

Bryan & Bill (OpenAI Codex Team)
* http://x.com/bfioca
* https://x.com/realchillben
* OpenAI Codex: https://openai.com/index/openai-codex/

Where to find Latent Space
* X: https://x.com/latentspacepod
* Substack: https://www.latent.space/

Timestamps
00:00:00 Introduction: Latent Space Listeners at AI Engineer Code
00:01:27 Codex Max Launch: Training for Long-Running Coding Agents
00:03:01 Model Personality and Trust: Communication, Planning, and Self-Checking
00:05:20 Codex vs GPT-5: Opinionated Agents vs General Models
00:07:47 Tool Use and Model Habits: The Ripgrep Discovery
00:09:16 Personality Design: Verbosity vs Efficiency in Coding Agents
00:11:56 The Agent Abstraction Layer: Building on Top of Codex
00:14:08 Sub-Agents and Multi-Agent Patterns: The Future of Composition
00:16:11 Trust and Adoption: OpenAI Developers Using Codex Daily
00:17:21 Applied Evals: Real-World Testing vs Academic Benchmarks
00:19:15 Multi-Turn Evals and the Job Interview Pattern
00:21:35 Feature Request: Batch Multi-Turn Eval API
00:22:28 Beyond Code: Personal Automation and Computer Use
00:24:51 Vision-Native Agents and the UI Integration Challenge
00:25:02 2026 Predictions: Trust, Computer Use, and Democratized Excellence

Get full access to Latent.Space at www.latent.space/subscribe
Yule City. Three months before Christmas. Children are disappearing mysteriously, almost as if taken by the winter winds themselves. Two cops -- Cash Humbug and Nick Klaussman -- will put it all on the line to discover where those kids have gone... even if it goes all the way to the North Pole. STARRING: Spencer Barrows as Det. Cash Humbug, Andy as Det. Nicholas Klaussman, Liv Agar as Kristen Kringle, John Moe as Commissioner Noel Noelle, Patrick Doran as Det. Chris Miss, Charles Austin as "Rocking" Harvey Evergreen, Raina Douris as Nancy Rudolph, Ty Wood as Narrator, and Clay Parks as Everyone Else. Listen to the rest of the series on Patreon for $5, $7, or $10: www.patreon.com/tgofv. A big shout-out to our $10/month patrons: Abbie Phelps, Adam W, Anthony Cabrera, asdf, Axon, Baylor Thornton, Bedi, bernventers, bunknown, Celeste, Charles Doyle, Dane Stephen, Dave Finlay, David Gebhardt, Dean, Francis Wolf, Heather-Pleather, Jacob Sauber-Cavazos, James Lloyd-Jones, Jennifer Knowles, Jeremy-Alice, Josh O'Brien, Kilo, LM, Lawrence, Louis Ceresa, Malek Douglas, Newmans Own, Packocamels, Phat Ass Cyberman, Rach, raouldyke, Rebecca Kimpel, revidicism, Sam Thomas, T, Tash Diehart, Themandme, Tomix, weedworf, William Copping, and Yung Zoe!
LM publishes a detailed year-in-review report on what 2025 has meant economically and on how harmful Sánchez is proving for our economy.
LM reports that Spanish cinema will close 2025 with fewer viewers and lower revenue. Despite this decline, the sector is receiving more subsidies.
LM publishes the Danish expert's statements on how costly and inefficient "green" policies are: "Cheap green electricity does not exist."
Y'all ever give it up for a Hallmark boyfriend? Y'all ever pop it open for a good-natured townie who owns a bakery? Join Spencer, Ty, and Jackson as they catch up after a long absence, discuss Jackson's crippling addiction to 7OH, and even rank some Christmas classics as we get closer to the Big Day. Support us on Patreon for $5, $7, or $10: www.patreon.com/tgofv. A big shout-out to our $10/month patrons: Abbie Phelps, Adam W, Anthony Cabrera, asdf, Axon, Baylor Thornton, Bedi, bernventers, bunknown, Celeste, Charles Doyle, Dane Stephen, Dave Finlay, David Gebhardt, Dean, Francis Wolf, Heather-Pleather, Jacob Sauber-Cavazos, James Lloyd-Jones, Jennifer Knowles, Jeremy-Alice, Josh O'Brien, Kilo, LM, Lawrence, Louis Ceresa, Malek Douglas, Newmans Own, Packocamels, Phat Ass Cyberman, Rach, raouldyke, Rebecca Kimpel, revidicism, Sam Thomas, T, Tash Diehart, Themandme, Tomix, weedworf, William Copping, and Yung Zoe!
LM publishes a report by Rotellar on the 11 reasons why the debt write-off is a legal and economic outrage.
You ever see a new AI model drop and be like.... it's so good OMG how do I use it?