Our final conversation of the Move Her Mind Summit brings together ASICS elite athletes Makenna Myler, Val Allman, and Taliyah Brooks for an inspiring and candid discussion. Each of these remarkable women shares her journey into sport, what fuels her drive to compete, the obstacles she has faced and overcome, and her vision for 2026 and beyond. It was such a fun conversation, and we hope you enjoy listening as much as we enjoyed being part of it.
The ASICS Run Challenge will have 6 stages this year, and the first 5 already have dates (there will be a Belo Horizonte stage, to the delight of the mineiros, plus more stages in the Northeast); don't forget that tonight is the Tokyo Marathon, with parallel coverage on Corrida no Ar; a gotcha in the Tribuna de Santos 10K; as expected, registration for the São Cri-Cri evaporated in 5 minutes; registration for the Cali Marathon is sold out; did you see the Chinese shoes on the podium at the Osaka Marathon? Our links - https://linktr.ee/corridanoar. The Corrida no Ar News is produced daily and posted around 6 a.m.
Tether has quietly become the largest bitcoin miner in the world, and Elektron manages 50 EH/s of the stablecoin issuer's fleet. Get your tickets to OPNEXT 2026 before prices increase! Join us on April 16 in NYC for technical discussions, investor talks, and intimate conversations with the brightest minds in Bitcoin. Welcome back to The Blockspace Podcast! Today, Rapha Zagury, CEO of Elektron, joins us to talk about the company's management of Tether's massive 50 EH/s bitcoin mining portfolio. Rapha breaks down Elektron and Tether's partnership, the incipient market bifurcation between AI/HPC and Bitcoin mining, and why he believes progress is directly correlated with energy use. We dive into the legal origins of Elektron, the company's global footprint across 32 sites, and the future of mining as Tether and Elektron double down on hashrate while the rest of the industry eyes AI. Subscribe to the newsletter! https://newsletter.blockspacemedia.com

Notes:
* Tether runs 50 EH/s with Elektron
* Greenfield sites trading at $1/MW amid AI boom
* Elektron manages ~200,000 ASICs globally
* Operations span 32 sites across 5 countries
* AI and BTC mining bifurcation expected in 6 to 12 months

Timestamps:
00:00 Start
05:31 BTC market crash
07:59 Who is Rapha?
11:16 What is Elektron?
14:46 Swan & Tether legal struggle
18:00 Asset-light build-out plan
23:20 Business setup
25:05 Why mine?
33:18 Hashrate geographic distribution
38:54 Bad places to mine BTC?
40:50 AI & HPC
48:56 3.8% staff costs
52:11 Hashrate growth
57:28 There's ALWAYS stranded energy
59:44 Elektron IPO?
Bryan Cantrill is the co-founder and CTO of Oxide Computer Company. We discuss why the biggest cloud providers don't use off-the-shelf hardware, how scaling data centers at Samsung's scale exposed problems with hard drive firmware, how the values of NodeJS are in conflict with robust systems, choosing Rust, and the benefits of Oxide Computer's rack-scale approach. This is an extended version of an interview posted on Software Engineering Radio. Related links Oxide Computer Oxide and Friends Illumos Platform as a Reflection of Values RFD 26 bhyve CockroachDB Heterogeneous Computing with Raja Koduri Transcript You can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: Today I am talking to Bryan Cantrill. He's the co-founder and CTO of Oxide Computer Company, and he was previously the CTO of Joyent, and he also co-authored the DTrace tracing framework while he was at Sun Microsystems. [00:00:14] Jeremy: Bryan, welcome to Software Engineering Radio. [00:00:17] Bryan: Uh, awesome. Thanks for having me. It's great to be here. [00:00:20] Jeremy: You're the CTO of a company that makes computers. But I think before we get into that: a lot of people who build software, now that the actual computer is abstracted away, they're using AWS or they're using some kind of cloud service. So I thought we could start by talking about data centers. [00:00:41] Jeremy: 'Cause you were previously working at Joyent, and I believe you got bought by Samsung, and you've previously talked about how you had to figure out, how do I run things at Samsung's scale. So how was your experience with that? What were the challenges there? Samsung scale and migrating off the cloud [00:01:01] Bryan: Yeah, I mean, so at Joyent, and so Joyent was a cloud computing pioneer. Uh, we competed with the likes of AWS and then later GCP and Azure. Uh, and we, I mean, we were operating at a scale, right? 
We had a bunch of machines, a bunch of DCs, but ultimately, you know, we were a VC-backed company, and a small company by the standards of, certainly by Samsung's standards. [00:01:25] Bryan: And so when Samsung bought the company, I mean, the reason, by the way, that Samsung bought Joyent is that Samsung's cloud bill was, uh, let's just say it was extremely large. They were spending an enormous amount of money every year on the public cloud. And they realized that in order to secure their fate economically, they had to be running on their own infrastructure. [00:01:51] Bryan: It did not make sense. And there was not really a product that Samsung could go buy that would give them that on-prem cloud. Uh, in that regard, the state of the market was really no different. And so they went looking for a company, uh, and bought Joyent. And it was when we were on the inside of Samsung [00:02:11] Bryan: that we learned about Samsung scale. And Samsung loves to talk about Samsung scale. And I gotta tell you, it is more than just chest thumping. Samsung scale really is, I mean, just the sheer number of devices, the number of customers, just this absolute size. They really wanted to take us out to levels of scale, certainly, that we had not seen. [00:02:31] Bryan: The reason for buying Joyent was to be able to stand up on their own infrastructure, so we did go buy a bunch of hardware. Problems with server hardware at scale [00:02:40] Bryan: And I remember just thinking, God, I hope Dell is somehow magically better. I hope the problems that we have seen in the small... you know, I just remember hoping, and hope was of course a terrible strategy, and it was a terrible strategy here too. 
Uh, and the problems that we saw at the large were... when you scale out, the problems that you see kind of once or twice, you now see all the time, and they become absolutely debilitating. [00:03:12] Bryan: And we saw a whole series of really debilitating problems. In many ways, like, comically debilitating, uh, in terms of showing just how bad the state of the art was. And we had, I mean, it should be said, we had great software and great software expertise, um, and we were controlling our own system software. [00:03:35] Bryan: But even controlling your own system software, your own host OS, your own control plane, which is what we had at Joyent, ultimately you're pretty limited. I mean, you've got the problems that you can obviously solve, the ones that are in your own software, but the problems that are beneath you, the problems that are in the hardware platform, the problems that are in the componentry beneath you, become the problems that are in the firmware. IO latency due to hard drive firmware [00:04:00] Bryan: Those problems become unresolvable, and they are deeply, deeply frustrating. Um, and we just saw a bunch of 'em. Again, they were comical in retrospect, and I'll give you a couple of concrete examples just to give you an idea of what you're looking at. One of our data centers had really pathological IO latency. [00:04:23] Bryan: We had a very, uh, database-heavy workload. And this was kind of right at the period where you were still deploying on rotating media, on hard drives. An all-flash buy did not make economic sense when we did this in 2016. It'd be interesting to know, like, when was the last time that actual hard drives made sense? [00:04:50] Bryan: 'Cause I feel this was close to it. 
So we had a bunch of pathological IO problems, but we had one data center in which the outliers were actually quite a bit worse, and there was so much going on in that system it took us a long time to figure out why. Because when you're seeing worse IO, I mean, you naturally wanna understand, like, what's the workload doing? [00:05:14] Bryan: You're trying to take a first-principles approach. What's the workload doing? So this is a very intensive database workload to support the object storage system that we had built, called Manta. And the metadata tier was stored in, uh, we were using Postgres for that. And that was just getting absolutely slaughtered. [00:05:34] Bryan: Um, and ultimately very IO-bound, with these kind of pathological IO latencies. Uh, and as we, you know, were trying to peel away the layers to figure out what was going on, I finally had this thing. So it's like, okay, we are seeing at the device layer, at the disk layer, pathological outliers in this data center that we're not seeing anywhere else. [00:06:00] Bryan: And that does not make any sense. And the thought occurred to me: do we have, like, a different rev of firmware on our HGST drives? HGST, now part of WD, Western Digital, were the drives that we had everywhere. And, um, so maybe we had a different... maybe I had a firmware bug. [00:06:20] Bryan: This would not be the first time in my life at all that I would have a drive firmware issue. Uh, and I went to go pull the firmware rev, and I'm like, Toshiba makes hard drives? I mean, I had no idea that Toshiba even made hard drives, let alone that they were in our data center. [00:06:38] Bryan: I'm like, what is this? 
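As a rough illustration of the kind of analysis Bryan describes, pulling the firmware rev and comparing latency outliers across drive cohorts might look like the sketch below. The sample data, vendor names, and firmware revisions are invented for illustration; this is not Joyent's actual tooling.

```python
from statistics import quantiles

# Hypothetical per-disk read latency samples (ms), keyed by
# (vendor, firmware_rev). In the story, one cohort of drives
# stalls reads for roughly 2,700 ms.
samples = {
    ("HGST", "A3B0"): [4, 5, 6, 5, 7, 6, 5, 8, 6, 5],
    ("TOSHIBA", "FJ2A"): [5, 6, 5, 2700, 6, 5, 2650, 7, 6, 5],
}

def p99(latencies):
    """99th-percentile latency via statistics.quantiles (100 cut points)."""
    return quantiles(latencies, n=100)[98]

for (vendor, rev), lat in samples.items():
    print(vendor, rev, "p99 =", round(p99(lat), 1), "ms")
```

Grouping by firmware revision rather than by host is the key move: a per-host view averages the outliers away, while a per-cohort view makes the misbehaving drive population jump out.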
And as it turns out, and this is, you know, part of the challenge when you don't have an integrated system, which, not to pick on them, but Dell doesn't: what Dell would routinely do is make substitutes. And they make substitutes that, you know, it's kind of like you're going to, I don't know, Instacart or whatever, and they're out of the thing that you want. [00:07:03] Bryan: So, you know, someone makes a substitute, and sometimes that's okay, but it's really not okay in a data center. You really want to develop and validate an end-to-end integrated system. And in this case, Toshiba does make hard drives, but they were, uh, basically not competitive, and they were not competitive in part for the reasons that we were discovering. [00:07:29] Bryan: They had really serious firmware issues. These were drives that would just simply stop acknowledging any reads, on the order of 2,700 milliseconds. Long time, 2.7 seconds. Um, and that was a drive firmware issue, but it highlighted a much deeper issue, which was the simple lack of control that we had over our own destiny. [00:07:53] Bryan: Um, and it's an example among many where Dell is making a decision that lowers the cost of what they are providing you marginally, but it is then giving you a system that they shouldn't have any confidence in, because it's not one that they've actually designed, and they leave it to the customer, the end user, to make these discoveries. [00:08:18] Bryan: And these things happen up and down the stack. And not just to pick on Dell, because it's true for HPE, it's true for Supermicro, uh, it's true for your switch vendors. 
It's true for storage vendors, where the one that is left actually integrating these things and trying to make the whole thing work is the end user sitting in their data center. AWS / Google are not buying off-the-shelf hardware but you can't use it [00:08:42] Bryan: There's not a product that they can buy that gives them elastic infrastructure, a cloud, in their own DC. The product that you buy is the public cloud. When you go in the public cloud, you don't worry about this stuff, because that's AWS's issue, or it's GCP's issue. And they are the ones that get this to ground. [00:09:02] Bryan: And this was kind of, you know, the eye-opening moment. Not a surprise. Uh, they are not Dell customers. They're not HPE customers. They're not Supermicro customers. They have designed their own machines, to varying degrees, depending on which one you're looking at. But they've taken the clean sheet of paper. And the frustration that we had at Joyent, beginning to wonder, and then at Samsung, kind of wondering what was next, uh, is that what they built was not available for purchase in the data center. [00:09:35] Bryan: You could only rent it in the public cloud. And our big belief is that public cloud computing is a really important revolution in infrastructure. Doesn't feel like a deep thought, but cloud computing is a really important revolution. It shouldn't only be available to rent. You should be able to actually buy it. [00:09:53] Bryan: And there are a bunch of reasons for doing that. Uh, the one we saw at Samsung is economics, which I think is still the dominant reason, where it just does not make sense to rent all of your compute in perpetuity. But there are other reasons too. There's security, there's risk management, there's latency. [00:10:07] Bryan: There are a bunch of reasons why one might wanna own one's own infrastructure. 
But, uh, that was very much the genesis for Oxide, coming out of this very painful experience. And a painful experience that... I mean, a long answer to your question about what was it like to be at Samsung scale? [00:10:27] Bryan: Those are the kinds of things... I mean, in our other data centers, we didn't have Toshiba drives. We only had the HGST drives. But it's only when you get to this larger scale that you begin to see some of these pathologies. And these pathologies then are really debilitating in terms of those who are trying to develop a service on top of them. [00:10:45] Bryan: So it was very educational in that regard. And you're very grateful for the experience at Samsung in terms of opening our eyes to the challenge of running at that kind of scale. [00:10:57] Jeremy: Yeah, because I think as software engineers, a lot of times we treat the hardware as a given, where... [00:11:08] Bryan: Yeah. [00:11:08] Bryan: Yeah. There's software in hard drives [00:11:09] Jeremy: It sounds like in this case, maybe the issue is not so much that Dell or HP as a company doesn't own every single piece that they're providing you, but rather the fact that they're swapping pieces in and out without advertising them, and then when it becomes a problem, they're not necessarily willing to deal with the consequences of that. [00:11:34] Bryan: They just don't know. I mean, I think they just genuinely don't know. I mean, it's not like they're making a deliberate decision to kind of ship garbage. It's just that they are making, I mean, I think it's exactly what you said about not thinking about the hardware. It's like, what's a hard drive? [00:11:47] Bryan: I mean, it's a hard drive. It's got the same specs as this other hard drive, and it's a little bit cheaper, so why not? 
It's like, well, there are some reasons why not, and one of the reasons why not is that even a hard drive, whether it's rotating media or flash, is not just hardware. [00:12:05] Bryan: There's software in there. And that software's, like, not the same. I mean, there are components where, if you're looking at a resistor or a capacitor or something like this, yeah, if you've got two parts that are within the same tolerance, yeah, like, sure. Maybe. Although even the EEs, I think, would be objecting a little bit. But the more complicated you get, and certainly once you get to the kind of hardware that we think of, like a microprocessor, a network interface card, a hard drive, an NVMe drive... [00:12:38] Bryan: Those things are super complicated, and there's a whole bunch of software inside of those things, the firmware. And that's the stuff that... I mean, you say that software engineers don't think about that. It's like, no one can really think about that, because it's proprietary. It's kinda welded shut, and you've got this abstraction into it. [00:12:55] Bryan: But the way that thing operates is very core to how the thing in aggregate will behave. And I think that the fundamental difference between Oxide's approach and the approach that you get at a Dell, HP, Supermicro, wherever, is really thinking holistically in terms of hardware and software together in a system that ultimately delivers cloud computing to a user. [00:13:22] Bryan: And there's a lot of software at many, many, many different layers. And it's very important to think about that software and that hardware holistically, as a single system. 
[00:13:34] Jeremy: And during that time at Joyent, when you experienced some of these issues, was it more of a case of you didn't have enough servers experiencing this? So if it would happen, you might say, well, this one's not working, so maybe we'll just replace the hardware. What was the thought process when you were working at that smaller scale, and how did these issues affect you? UEFI / Baseboard Management Controller [00:13:58] Bryan: Yeah, at the smaller scale, you see fewer of them, right? What you might see is like, that's weird, we kinda saw this in one machine, versus seeing it in a hundred or a thousand or 10,000. Um, so you just see them less frequently, and as a result they are less debilitating. [00:14:16] Bryan: Um, I think that when you go to that larger scale, those things that were unusual now become routine, and they become debilitating. Um, so it really is in many regards a function of scale. Uh, and then I think it was also, you know, a little bit dispiriting that kind of the substrate we were building on really had not improved. [00:14:39] Bryan: Um, and if you buy a computer server, buy an x86 server, there is a very low layer of firmware, the BIOS, the Basic Input/Output System, the UEFI BIOS. And this is an abstraction layer that has existed since the eighties and hasn't really meaningfully improved. Um, the transition to UEFI happened with, ironically, Itanium, um, you know, two decades ago. [00:15:08] Bryan: But beyond that, this lowest layer of platform enablement software is really only impeding the operability of the system. 
Um, you look at the baseboard management controller, which is kind of the computer within the computer. There is an element in the machine that needs to handle environmentals, that needs to, uh, operate the fans and so on. [00:15:31] Bryan: Uh, and that traditionally has been the baseboard management controller, and that architecturally just hasn't improved in the last two decades. And, you know, it's a proprietary piece of silicon, generally from a company that no one's ever heard of called ASPEED, uh, which is written in all caps, so I guess it needs to be screamed. [00:15:50] Bryan: Um, ASPEED has a proprietary part where, infamously, there is a root password encoded effectively in silicon. Uh, which is just... and for, um, anyone who goes deep into these things, it's like, oh my God, are you kidding me? Um, when we first started Oxide, the WiFi password was a fraction of the ASPEED root password for the BMC. [00:16:16] Bryan: It's kinda like a little BMC humor. Um, but those things, it was just dispiriting that the state of the art was still basically personal computers running in the data center. Um, and that's part of what was the motivation for doing something new. [00:16:32] Jeremy: And for the people using these systems, whether it's the baseboard management controller or it's the BIOS or UEFI component, what are the actual problems that people are seeing? Security vulnerabilities and poor practices in the BMC [00:16:51] Bryan: Oh man, you are going to have some fraction of your listeners, maybe a big fraction, where it's like, yeah, what are the problems? That's a good question. And then you're gonna have the people that actually deal with these things, whose heads already hit the desk, being like, what are the problems? [00:17:06] Bryan: Like, what are the non-problems? Like, what works? 
Actually, that's a shorter answer. Um, I mean, there are so many problems, and a lot of it is just, I mean, there are problems just architecturally. The problems spread to the horizon, so you can kind of start wherever you want. [00:17:24] Bryan: But as a really concrete example: okay, so the BMC, the computer within the computer, needs to be on its own network. So you now have not one network, you've got two networks. And that network, by the way, is the network that you're gonna log into to, like, reset the machine when it's otherwise unresponsive. [00:17:44] Bryan: So going into the BMC, you are able to control the entire machine. Well, it's like, alright, so now I've got a second network that I need to manage. What is running on the BMC? Well, it's running some ancient, ancient version of Linux that you've got. It's like, well, how do I patch that? [00:18:02] Bryan: How do I manage the vulnerabilities with that? Because if someone is able to root your BMC, they control the system. And now you've gotta go deal with all of the operational hair around that. How do you upgrade that system, updating the BMC? I mean, you've got this second shadow bad infrastructure that you have to go manage. [00:18:23] Bryan: Generally not open source. There's something called OpenBMC, um, which, um, people use to varying degrees, but you're generally stuck with the proprietary BMC. So you're generally stuck with iLO from HPE or iDRAC from Dell or, uh, the Supermicro BMC, and it is just excruciating pain. [00:18:49] Bryan: Um, and this is assuming, by the way, that everything is behaving correctly. The problem is that these things often don't behave correctly, and then there's the consequence of them not behaving correctly. 
It's really dire, because it's at that lowest layer of the system. So, I mean, I'll give you a concrete example. [00:19:07] Bryan: A customer of theirs reported to me, so I won't disclose the vendor, but let's just say that a well-known vendor had an issue where their temperature sensors were broken. Um, and the thing would always read basically the wrong value. So it was the BMC that had to, like, invent its own, a different kind of thermal control loop. [00:19:28] Bryan: And it would index on the actual inrush current. They would look at the current that's going into the CPU to adjust the fan speed. That's a great example of something that's an interesting idea that doesn't work, 'cause that's actually not the temperature. [00:19:45] Bryan: So that software would crank the fans whenever you had an inrush of current, and this customer had a workload that would spike the current. And when it would spike the current, the fans would kick up, and then they would slowly degrade over time. Well, this workload was spiking the current faster than the fans would degrade, but not fast enough to actually heat up the part. [00:20:08] Bryan: And ultimately, over a very long time, in a very painful investigation, this customer determined: my fans are cranked in my data center for no reason. We're blowing cold air. And this is on the order of, like, a hundred watts a server of energy that you shouldn't be spending. And ultimately what that comes down to is this kind of broken software/hardware interface at the lowest layer that has real, meaningful consequence, uh, in terms of hundreds of kilowatts, um, across a data center. So this stuff has very, very, very real consequence, and it's such a shadowy world. 
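The broken thermal loop Bryan describes can be modeled in a few lines. This is a toy simulation under invented numbers (the attack value, decay rate, and spike cadence are all assumptions, not the vendor's actual firmware logic): fan speed is keyed to inrush current rather than temperature, jumps on every spike, and decays more slowly than the spikes arrive, so the fans stay pinned high while blowing cold air.

```python
def simulate_fan(current_spikes, attack=100.0, decay=2.0, steps=200):
    """Toy model of the broken control loop: fan % jumps on a current
    spike and bleeds off slowly; temperature is never consulted."""
    fan = 0.0
    history = []
    for t in range(steps):
        if t in current_spikes:          # inrush current detected
            fan = attack                 # crank fans to 100%
        else:
            fan = max(0.0, fan - decay)  # slow linear decay
        history.append(fan)
    return history

# Spikes arriving every 10 ticks outpace the 2%-per-tick decay,
# so after warm-up the fans never fall below ~80%, with no real heat.
trace = simulate_fan(current_spikes=set(range(0, 200, 10)))
print("min fan % after warm-up:", min(trace[10:]))
```

The point of the sketch is the mismatch of time constants: because the proxy signal (current) recurs faster than the actuator (fan speed) relaxes, the loop converges to maximum cooling regardless of actual temperature.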
Part of the reason that your listeners that have dealt with this, their heads will hit the desk, is because it is really aggravating to deal with problems at this layer. [00:21:01] Bryan: You feel powerless. You don't control, or really see, the software that's on them. It's generally proprietary. You are relying on your vendor. Your vendor is telling you, like, boy, I don't know, you're the only customer seeing this. I mean, the number of times I have heard that... and I have pledged that we're not gonna say that at Oxide, because it's such an unconscionable thing to say, like, you're the only customer seeing this. [00:21:25] Bryan: It feels like, are you blaming me for my problem? It feels like you're blaming me for my problem. Um, and what you begin to realize is that, to a degree, these folks are speaking their own truth, because the folks that are running at real scale, at hyperscale, those folks aren't Dell, HP, Supermicro customers. [00:21:46] Bryan: They've actually done their own thing. So it's like, yeah, Dell's not seeing that problem, um, because they're not running at the same scale. Um, but you only have to run at modest scale before these things just become overwhelming in terms of the headwind that they present to people that wanna deploy infrastructure. The problem is felt with just a few racks [00:22:05] Jeremy: Yeah, so maybe to help people get some perspective: at what point do you think that people start noticing or start feeling these problems? Because I imagine that if you just have a few racks or... [00:22:22] Bryan: No, no, no. I think anyone who deploys any number of servers, especially now, especially if your experience is only in the cloud, you're gonna be like, what the hell is this? I mean, just to get this thing working at all. 
[00:22:39] Bryan: It is so hairy and so congealed, right? It's not designed. Um, it's accreted, and it's so obviously accreted that... I mean, nobody who is setting up a rack of servers is gonna think to themselves, yes, this is the right way to go do it, this all makes sense. Because it's just not. It feels like a kit... I mean, kit car is almost too generous, because it implies that there's, like, a set of plans to work to in the end. [00:23:08] Bryan: Uh, I mean, it's a bag of bolts. It's a bunch of parts that you're putting together. And so even at the smallest scales, that stuff is painful. Just architecturally, it's painful at the small scale, but at least you can get it working. I think the stuff that then becomes debilitating at larger scale are the things that are worse than just, this thing is a mess to get working. [00:23:31] Bryan: It's like the fan issue, um, where you are now seeing this over, you know, hundreds of machines or thousands of machines. Um, so it is painful at more or less all levels of scale. There is no level at which the PC, which is really what this is, this is the personal computer architecture from the 1980s, there is really no level of scale where that's the right unit. Running elastic infrastructure is the hardware but also hypervisor, distributed database, API, etc. [00:23:57] Bryan: I mean, where that's the right thing to go deploy, especially if what you are trying to run is elastic infrastructure, a cloud. Because the other thing is, we've kinda been talking a lot about that hardware layer. Hardware is just the start. You actually gotta go put software on that and actually run that as elastic infrastructure. [00:24:16] Bryan: So you need a hypervisor, yes. But you need a lot more than that. 
You need a distributed database, you need web endpoints, you need a CLI, you need all the stuff that you need to actually go run an actual service of compute or networking or storage. And even for compute, there's a ton of work to be done. [00:24:39] Bryan: And compute is by far, I would say, the simplest of the three. When you look at network services and storage services, there's a whole bunch of stuff that you need to go build in terms of distributed systems to actually offer that as a cloud. So, I mean, it is painful at more or less every level if you are trying to deploy cloud computing on it. What's a control plane? [00:25:00] Jeremy: And for someone who doesn't have experience building or working with this type of infrastructure, when you talk about a control plane, what does that do in the context of this system? [00:25:16] Bryan: So the control plane is everything between your API request and that infrastructure actually being acted upon. So you go say, hey, I want to provision a VM. Okay, great. We've got a whole bunch of things we're gonna provision with that. We're gonna provision a VM, we're gonna get some storage that's gonna go along with that, that's gonna come out of a network storage service, and we've got a virtual network that we're gonna either create or attach to. [00:25:39] Bryan: We've got a whole bunch of things we need to go do for that. For all of these things, there are metadata components that we need to keep track of, beyond the actual infrastructure that we create. And then we need to go actually act on the actual compute elements, the host OS, the switches, what have you, and actually go create these underlying things and then connect them. And there's, of course, the challenge of just getting that working, which is a big challenge. 
Um, but getting that working robustly... when you go to provision a VM, um, there are all the steps that need to happen, and what happens if one of those steps fails along the way? [00:26:17] Bryan: What happens if... you know, one thing we're very mindful of is these kind of long tails, of like, generally our VM provisioning happens within this time, but we get these long tails where it takes much longer. What's going on? Where in this process are we actually spending time? [00:26:33] Bryan: Uh, and there's a whole lot of complexity that you need to deal with there, effectively this workflow that's gonna go create these things and manage them. Um, we use a pattern called sagas, actually a database pattern from the eighties. [00:26:51] Bryan: Uh, Caitie McCaffrey is a researcher who, uh, I think, reintroduced the idea of sagas, um, in the last decade or so. Um, and this is something that we picked up, um, and have done a lot of really interesting things with, um, to allow for these workflows to be managed, and done so robustly, in a way that you can restart them and so on. [00:27:16] Bryan: Uh, and then you get this whole distributed system that can do all this. That whole distributed system itself needs to be reliable and available. So, you know, what happens if you pull a sled or if a sled fails? How does the system deal with that? [00:27:33] Bryan: How does the system deal with getting another sled added to the system? Like, how do you actually grow this distributed system? And then how do you update it? How do you actually go from one version to the next? And all of that has to happen across an air gap, where this is gonna run as part of the computer. 
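A minimal sketch of the saga pattern Bryan mentions: run each workflow step in order, and if one fails, run the compensating undo actions for the already-completed steps in reverse. The VM-provisioning step names here are hypothetical, and a real implementation (Oxide's is in Rust) also persists progress so a saga can be resumed after a crash; this sketch shows only the forward/compensate core.

```python
class SagaFailed(Exception):
    pass

def run_saga(steps, ctx):
    """Run (name, action, undo) steps in order; on failure, run the
    compensating undos in reverse (the classic saga pattern)."""
    done = []
    for name, action, undo in steps:
        try:
            action(ctx)
            done.append((name, undo))
        except Exception as e:
            for _, u in reversed(done):
                u(ctx)               # unwind already-completed steps
            raise SagaFailed(f"step {name} failed: {e}") from e
    return ctx

def fail(msg):
    raise RuntimeError(msg)

# Hypothetical VM-provisioning saga: allocate storage, attach the
# virtual network, then boot the instance on a sled (which fails here).
ctx = {"log": []}
steps = [
    ("alloc_storage", lambda c: c["log"].append("+disk"),
                      lambda c: c["log"].append("-disk")),
    ("attach_vnet",   lambda c: c["log"].append("+vnet"),
                      lambda c: c["log"].append("-vnet")),
    ("boot_instance", lambda c: fail("no sled capacity"),
                      lambda c: None),
]
try:
    run_saga(steps, ctx)
except SagaFailed:
    pass
print(ctx["log"])   # storage and vnet rolled back in reverse order
```

The design property that matters is the one Bryan calls out: every step has a defined undo, so a half-finished provisioning never leaves orphaned disks or networks behind, and the whole workflow can be retried from a clean state.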
[00:27:49] Bryan: So it is fractally complicated. There is a lot of complexity here in the software system, and all of that we call the control plane. And this is what exists at AWS, at GCP, at Azure: when you are hitting an endpoint that's provisioning an EC2 instance for you, [00:28:10] Bryan: there is an AWS control plane that is doing all of this, and it has some of these similar aspects and certainly some of these similar challenges. Are vSphere / Proxmox / Hyper-V in the same category? [00:28:20] Jeremy: And for people who have run their own servers with something like, say, VMware or Hyper-V or Proxmox, are those in the same category? [00:28:32] Bryan: Yeah, a little bit. vSphere, yes; VMware ESX on its own, no. ESX is kind of a key building block upon which you can build something that is a more meaningful distributed system. When it's just a machine that you're provisioning VMs on, then you, the human, might be the control plane. [00:28:52] Bryan: That's a much easier problem. But when you've got tens, hundreds, thousands of machines, you need to do it robustly. You need something to coordinate that activity: you need to pick which sled you land on, you need to be able to move these things, you need to be able to update that whole system. [00:29:06] Bryan: That's when you're getting into a control plane. So some of these things have kind of edged into a control plane. Certainly VMware, now Broadcom, has delivered something that's kind of cloudish, though I think for folks who were truly born on the cloud, it still feels somewhat like going backwards in time when you look at these kinds of on-prem offerings. [00:29:29] Bryan: But it's got these aspects to it for sure.
And some of these other things, when you're just looking at KVM or just looking at Proxmox, you kind of need to connect them to other, broader things to turn them into something that really looks like manageable infrastructure. [00:29:47] Bryan: And many of those projects are either proprietary products, like vSphere, or open source projects that are not necessarily aimed at the same level of scale. Again, Proxmox, or OpenStack. [00:30:05] Bryan: And OpenStack is just a lot of things, right? OpenStack was kind of a free-for-all for every infrastructure vendor. There was a time people were like, aren't you worried about all these companies coming together for OpenStack? [00:30:24] Bryan: I'm like, haven't you ever worked for a company? Companies don't get along. Having multiple companies work together on a thing, that's bad news, not good news. And I think one of the things OpenStack has definitely struggled with is that there are so many different vendor elements in there that it's very much not a product; it's a project that you're trying to run. [00:30:47] Bryan: But it's certainly similar in spirit. [00:30:53] Jeremy: And so, as you were alluding to earlier, it's the piece that allows you to allocate compute and storage and manage networking, and gives you that experience of: I can go to a web console or I can use an API, and I can spin up machines and get them all connected. At the end of the day, the control plane is allowing you to do that in, hopefully, a user-friendly way. [00:31:21] Bryan: That's right. Yep.
And in order to do that in a modern way, it's not just a user-friendly way: you really need to have a CLI and a web UI and an API, and those all need to be drawn from the same single ground truth. You don't wanna have any of those be an afterthought to the others; [00:31:39] Bryan: you wanna have the same way of generating all of those different endpoints and entries into the system. Building a control plane now has better tools (Rust, CockroachDB) [00:31:46] Jeremy: And if you take your time at Joyent as an example, what kind of tools existed for that, versus how much did you have to build in-house, as far as the hypervisor and managing the compute and all that? [00:32:02] Bryan: Yeah, so we built more or less everything in-house. And I think over time we've gotten slightly better tools. Maybe it's easier to talk about the tools we started with at Oxide, because we started with a clean sheet of paper at Oxide. [00:32:16] Bryan: We knew we wanted to go build a control plane, but we were able to revisit some of the components, so maybe I'll talk about some of those changes. For example, when we were building a cloud at Joyent, there wasn't really a good distributed database. [00:32:34] Bryan: So we were using Postgres as our database for metadata, and there were a lot of challenges. Postgres is not a distributed database; it runs with a primary/secondary architecture, and there's a bunch of issues there, many of which we discovered the hard way. When we were coming to Oxide, you had much better options to pick from in terms of distributed databases. [00:32:57] Bryan: There was a period, which now seems potentially brief in hindsight, of really high-quality open source distributed databases.
So there were really some good ones to pick from. We built on CockroachDB, CRDB. That was a really important component that we had at Oxide that we didn't have at Joyent. [00:33:19] Bryan: I wouldn't say we were rolling our own distributed database at Joyent; we were just using Postgres and dealing with an enormous amount of pain in the surround. On top of that, a control plane is much more than a database, obviously. There's a whole bunch of software that you need to go write [00:33:40] Bryan: to transform these API requests into reliable infrastructure, and there's a lot to that, especially when networking gets in the mix, when storage gets in the mix. There are a whole bunch of complicated steps that need to be done. At Joyent, [00:33:59] Bryan: in part because of the history of the company, and look, this is not gonna sound good, but it is what it is and I'm just gonna own it: we did it all in Node. I know right now that just sounds like, well, you built it with Tinkertoys. [00:34:18] Bryan: Did you think you could build a skyscraper with Tinkertoys? Well, okay, we actually had greater aspirations for the Tinkertoys once upon a time, and it was better than Twisted in Python and EventMachine in Ruby, and we weren't gonna do it in Java. All right. [00:34:32] Bryan: But let's just say that that experiment did ultimately end in a predictable fashion, and we decided that maybe Node was not gonna be the best decision long term. Joyent was the company behind Node.js back in the day; Ryan Dahl worked for Joyent. And then we
[00:34:53] Bryan: landed it in a foundation in about, what, 2015, something like that, and began to consider our world beyond Node. Rust at Oxide [00:35:04] Bryan: A big tool that we had in the arsenal when we started Oxide is Rust. Indeed, the name of the company is a tip of the hat to the language that we were pretty sure we were gonna be building a lot of stuff in. [00:35:16] Bryan: Namely Rust. And Rust has been huge for us, a very important revolution in programming languages. There have been different people coming in at different times, and I came to Rust in what I think was this big second expansion of Rust in 2018, when a lot of technologists were sick of Node, and also sick of Go, [00:35:43] Bryan: and also sick of C++, and wondering: is there gonna be something that gives me the performance that I get out of C, the robustness that a C program can have but that is often difficult to achieve, but with some of the velocity of development, although I hate that term, some of the speed of development that you get out of a more interpreted language? [00:36:08] Bryan: And then, by the way, can I actually have types? I think types would be a good idea. Rust obviously hits the sweet spot of all of that. It has been absolutely huge for us. We knew when we started Oxide that we were gonna be using Rust in quite a few places, but we weren't doing it by fiat. [00:36:27] Bryan: We wanted to actually make sure we were making the right decision at every layer. I think what has been surprising is the sheer number of layers at which we use Rust: we've done our own embedded firmware in Rust, and in the host operating system, which is still largely in C, very big components are in Rust.
[00:36:47] Bryan: The hypervisor, Propolis, is all in Rust, and then of course the control plane, that distributed system on top, is all in Rust. So Rust was a very important thing that we very much did not need to build ourselves; we were able to really leverage a terrific community. We were also able to use, and we had done this at Joyent as well, illumos as a host OS component; our variant is called Helios. [00:37:11] Bryan: We've used bhyve as that kind of internal hypervisor component. We've made use of a bunch of different open source components to build this thing, which has been really, really important for us, and open source components that didn't exist even five years prior. [00:37:28] Bryan: That's part of why we felt that 2019 was the right time to start the company. And so we started Oxide. The problems building a control plane in Node [00:37:34] Jeremy: You had mentioned that at Joyent, you had tried to build this in Node. What were the issues or the challenges that you had doing that? [00:37:46] Bryan: Oh boy. Yeah, I kind of had higher hopes in 2010, I would say, when we set off on this. The problem that we had, writ large: JavaScript is really designed to allow as many people on earth to write a program as possible, which is good. I mean, that's a laudable goal. [00:38:09] Bryan: That, such as it is, is ultimately the goal of JavaScript. It's actually hard to know what the goal of JavaScript is, unfortunately, because Brendan Eich never actually wrote a book, so there is not a canonical text. You've got Doug Crockford and other people who've written things on JavaScript, but it's hard to know what the original intent of JavaScript was. [00:38:27] Bryan: The name doesn't even express original intent, right?
It was called LiveScript, and it was renamed to JavaScript during the Java frenzy of the late nineties, a name that makes no sense. There is no Java in JavaScript. That is, I think, revealing of the unprincipled mess that is JavaScript. [00:38:47] Bryan: It's very pragmatic at some level, and it makes it very easy to write software. The problem is it's much more difficult to write really rigorous software. And here I should differentiate JavaScript from TypeScript, because this is really what TypeScript is trying to solve. [00:39:07] Bryan: I think TypeScript is a great step forward, because TypeScript is asking: how can we bring some rigor to this? Yes, it's great that it's easy to write JavaScript, but that's not the only problem we want to solve. [00:39:23] Bryan: We actually wanna be able to write rigorous software, and it's okay if it's a little harder to write rigorous software if that leads to more rigorous artifacts. In JavaScript, just as a concrete example, there's nothing to prevent you from referencing a property that doesn't actually exist. [00:39:43] Bryan: So if you fat-finger a property name, you are relying on something to tell you: by the way, I think you've misspelled this. But there is no type definition for this thing, so nothing knows that you've got one spelled correctly and one spelled incorrectly; the misspelled one is just undefined. So you've got this typo lurking in what you want to be rigorous software, [00:40:07] Bryan: and if you don't execute that code, you won't know it's there. And then you do execute that code, and now you've got an undefined object.
And now that's either gonna be an exception or, depending on how it's handled, it can be really difficult to determine the origin of that error. [00:40:26] Bryan: And that is a programmer error. One of the big challenges that we had with Node is that programmer errors and operational errors (I'm out of disk space is an operational error) get conflated, and it becomes really hard to tell them apart. In fact, I think the language wanted to make it easier to just kind of drive on in the event of all errors, [00:40:53] Bryan: which is actually not what you wanna do if you're trying to build a reliable, robust system. So we had no end of issues. [00:41:01] Bryan: We've got a lot of experience developing rigorous systems, again coming out of operating systems development and so on, and we brought some of that rigor, if strangely, to JavaScript. One of the things that we did is we brought a lot of postmortem diagnosability and observability to Node. [00:41:18] Bryan: So if one of our Node processes died in production, we would actually get a core dump from that process, a core dump that we could actually meaningfully process. We did a bunch of kind of wild stuff, actually wild stuff, where we could make sense of the JavaScript objects in a binary core dump. JavaScript values ease of getting started over robustness [00:41:41] Bryan: These were things that we thought were really important, and the rest of the world just looked at this like, what the hell is this? It was so out of step. The problem is that we were trying to bridge two disconnected cultures: one developing really rigorous software and really designing it for production diagnosability, and the other designing software to run in the browser, so that anyone can liven up a webpage, right?
[00:42:10] Bryan: That is kind of the origin of LiveScript and then JavaScript. And we were kind of the only ones sitting at the intersection of those. When you are the only ones sitting at that kind of intersection, you're kind of fighting a community all the time. And we just realized that there were so many things the community wanted to do that we felt were like, no, no, this is gonna make the software less diagnosable, it's gonna make it less robust. The Node.js split and why people left [00:42:36] Bryan: And then you realize, we're the only voice in the room, because we have desires for this language that it doesn't have for itself. And this is when you realize you're in a bad relationship with software and it's time to move on. And in fact, several years on, we'd already kind of broken up with Node. [00:42:55] Bryan: It was a bit of an acrimonious breakup. There was a famous slash infamous fork of Node called io.js, because the community thought that Joyent was not being an appropriate steward of Node.js and was not allowing more things to come into Node. [00:43:19] Bryan: We, of course, felt that we were being a careful steward and were actively resisting those things that would cut against its fitness for a production system. But that's how the community saw it, and they forked. And I think we knew before the fork: this is not working, and we need to get this thing out of our hands. Platform as a Reflection of Values: Node Summit talk [00:43:43] Bryan: And we are the wrong hands for this; this needs to be in a foundation. So we had kind of gone through that breakup, and maybe it was two years after that.
A friend of mine, who has unfortunately since passed away, Charles, a venture capitalist and a great guy, was running Node Summit, and he came to me in 2017. [00:44:07] Bryan: He's like, I really want you to keynote Node Summit. And I'm like, Charles, I'm not gonna do that. I've got nothing nice to say. I'm the last person you want to keynote. He's like, oh, if you have nothing nice to say, you should definitely keynote. I'm like, oh God, okay, here we go. [00:44:22] Bryan: He's like, no, I really want you to talk about the Joyent breakup with Node.js. I'm like, oh man. [00:44:29] Bryan: And that led to a talk that I'm really happy that I gave, 'cause it was a very important talk for me personally, called Platform as a Reflection of Values, really looking at the values that we had for Node and the values that Node had for itself, and how they didn't line up. [00:44:49] Bryan: And the problem is that the values that Node had for itself and the values that we had for Node are all kind of positives, right? There's nobody in the Node community who's like, I don't want rigor, I hate rigor. It's just that if they had to choose between rigor and making the language approachable, [00:45:09] Bryan: they would choose approachability every single time. They would never choose rigor. And that was a big eye-opener. I would say, do watch this talk, [00:45:20] Bryan: because I knew the audience was gonna be filled with people who had been a part of the fork, in 2014 I think it was, the io.js fork. And I knew there were some people there who had been there for the fork, and [00:45:41] Bryan: I set a little bit of a trap for the audience.
And the trap: I talked about the values that we had and the aspirations we had for Node, the aspirations that Node had for itself, and how they were different. [00:45:53] Bryan: And I'm like, look, in hindsight, a fracture was inevitable, and in 2014 there was finally a fracture. Do people know what happened in 2014? If you listen to that talk, almost everyone says in unison: io.js. I'm like, oh right, io.js. Right. That's actually not what I was thinking of. [00:46:19] Bryan: And I go to the next slide, and it's a tweet from a guy named TJ Holowaychuk, who was the most prolific contributor to Node. It was his tweet, also in 2014, before the io.js fork, explaining that he was leaving Node and that he was going to Go. And if you turn the volume all the way up, you can hear the audience gasp. [00:46:41] Bryan: And it's just delicious, because the community had never really confronted why TJ left. And I went through a couple of folks, Felix, a bunch of other early Node folks who were there in 2010 and were leaving in 2014, and they were going, primarily, to Go. They were going because they were sick of the same things that we were sick of. [00:47:09] Bryan: They had hit the same things that we had hit, and they were frustrated. I really do believe this: platforms do reflect their values, and when you are making a software decision, you are selecting values. [00:47:26] Bryan: You should select values that align with the values that you have for that software. That's way more important than other things that people look at. I think people look at, for example, quote-unquote community size way too frequently. Community size is like, eh, maybe it can be fine. [00:47:44] Bryan: I've been in very large communities, like Node.
I've been in super small open source communities, a bunch of them. There are strengths and weaknesses to both, just as there are strengths to being in a big city versus a small town. Me personally, I'll take the small community more or less every time, because the small community is almost always self-selecting based on values, for the same reason that I like working at small companies or on small teams. [00:48:11] Bryan: There's a lot of value to be had in a small community. That's not to say that large communities are valueless. But, long answer to your question of where things went south with Joyent and Node: they went south because the values that we had and the values the community had didn't line up, and that was a very educational experience, as you might imagine. [00:48:33] Jeremy: Yeah. And given that you mentioned how, because of those values, some people moved from Node to Go, and in the end, for much of what Oxide is building, you ended up using Rust: what would you say are the values of Go and Rust, and how did you end up choosing Rust? Go's decisions regarding generics, versioning, compilation speed priority [00:48:56] Bryan: Yeah. I understand why people moved from Node to Go; Go, to me, was kind of a lateral move. There were a bunch of things there. Go was still garbage collected, which I didn't like. Go is also very strange in that there are these autocratic decisions that are very bizarre. [00:49:17] Bryan: Generics is kind of a famous one, right? Go, as a point of principle, didn't have generics, even though the innards of Go itself actually did have generics. It's just that you, a Go user, weren't allowed to have them.
[00:49:35] Bryan: And there was an old cartoon years and years ago about how, when a technologist tells you that something is technically impossible, it actually means "I don't feel like it." There was a certain degree of "generics are technically impossible" in Go, and it's like, hey, actually, there they are. [00:49:51] Bryan: I just think the arguments against generics were kind of disingenuous, and indeed, they ended up adopting generics. And then there's some super weird stuff: they're very anti-assertion, which is like, how is someone against assertions? It doesn't even make any sense. But it's like, nope, [00:50:10] Bryan: there's a whole screed on it: nope, we're against assertions. And against versioning; that was another thing. Rob Pike has kind of famously been like, you should always just run at the latest commit. And you're like, does that make sense? I mean, we actually build things on this. [00:50:26] Bryan: So there are a bunch of things like that where you're just like, okay, this is just exhausting. There are some things about Go that are great, and plenty of other things that I'm just not a fan of. In the end, Go cares a lot about compile time; very quick compilation is super important for Go, right? [00:50:44] Bryan: I'm like, okay, but compile time is not unimportant, it doesn't have zero importance, but I've got other things that are lots more important than that. What I really care about is a high-performing artifact. I wanted garbage collection out of my life.
Don't think garbage collection has good trade-offs [00:51:00] Bryan: I gotta tell you, garbage collection to me is an embodiment of this larger problem of where you put cognitive load in the software development process. [00:51:21] Bryan: Garbage collection is right for plenty of other people and the software that they wanna develop. But for me and the software that I wanna develop, infrastructure software, I don't want garbage collection, because I can solve the memory allocation problem. I know when I'm done with something or not. It's really not that hard to not leak memory in a C-based system, [00:51:44] Bryan: and you can give yourself a lot of tooling that allows you to diagnose where memory leaks are coming from. So that is a solvable problem. There are other challenges with that, but when you are developing a really sophisticated system that is using garbage collection, [00:51:59] Bryan: you spend as much time trying to dork with the garbage collector to convince it to collect the thing that you know is garbage. You're like, I've got this thing, I know it's garbage, and now I need to use these tips and tricks to get the garbage collector to take it. It feels like every Java performance issue ends with some -XX flag: use the other garbage collector, whatever one you're using, use a different one, a different approach. [00:52:23] Bryan: So to me, you're in the worst of all worlds, where the reason that garbage collection is helpful is that the programmer doesn't have to think at all about this problem, but now you're actually dealing with these long pauses in production. [00:52:38] Bryan: You're dealing with all these other issues, where actually you need to think a lot about it.
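The deterministic reclamation being contrasted with garbage collection here can be seen in a tiny Rust sketch (the `Buffer` type and names are hypothetical, just for illustration): the point at which a value is freed is fixed by ownership at compile time, so there is no collector to tune and no pause to wait for.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts how many Buffers have been freed, so the effect is observable.
static FREED: AtomicUsize = AtomicUsize::new(0);

struct Buffer {
    name: &'static str,
    data: Vec<u8>,
}

impl Drop for Buffer {
    // Runs at a statically known point: when the owner goes out of scope.
    fn drop(&mut self) {
        FREED.fetch_add(1, Ordering::SeqCst);
        println!("freeing {} ({} bytes)", self.name, self.data.len());
    }
}

fn main() {
    let a = Buffer { name: "a", data: vec![0; 1024] };
    {
        let _b = Buffer { name: "b", data: vec![0; 512] };
        // `_b` is dropped right here, at the end of this block: no pauses,
        // no tuning flags, no guessing when the collector will run.
    }
    assert_eq!(FREED.load(Ordering::SeqCst), 1); // only `b` freed so far
    println!("{} is still alive", a.name);
} // `a` is dropped here, when `main` returns
```

The programmer states ownership once, and the release point follows mechanically from scope, which is the "I know when I'm done with something" knowledge Bryan says the collector otherwise throws away.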
And it's witchcraft; it's this black box that you can't see into. So what problem have we solved, exactly? So the fact that Go had garbage collection, it's like, no, I do not want that. And then you get all the other weird fatwas and everything else. [00:52:57] Bryan: I'm like, no thank you. Go is a no-thank-you for me. I get why people like it or use it, but that was not gonna be it. Choosing Rust [00:53:04] Bryan: I'm like, I want C. But there are things I didn't like about C too. I was looking for something that was gonna give me the deterministic kind of artifact that I got out of C, but I wanted library support, and C is tough because it's all convention; there's just a bunch of other things that are thorny. And I remember thinking vividly in 2018: well, it's Rust or bust. Ownership model, algebraic types, error handling [00:53:28] Bryan: I'm gonna go into Rust, and I hope I like it, because if it's not this, I'm gonna go back to C. I'm literally trying to figure out what the language is for the back half of my career. And I did what a lot of people were doing at that time, and have been doing since, of really getting into Rust and really learning it, appreciating the difference in the model for sure, the ownership model people talk about. [00:53:54] Bryan: That's obviously very important, but it was the error handling that blew me away, and the idea of algebraic types; I never really had algebraic types. Error handling is one of those things you really appreciate: how do you deal with a function that can either succeed and return something, or fail? The way C deals with that is bad, with these kind of sentinels for errors.
[00:54:27] Bryan: Does negative one mean success? Does negative one mean failure? Does zero mean failure? In some C functions, zero means failure; traditionally in Unix, zero means success. And what if you wanna return a file descriptor? Then it's like, okay, zero through positive N will be a valid result, negative numbers will be errors. [00:54:44] Bryan: And was it negative one with errno set, or is it any negative number? It's all convention, right? People do all those different things, and it's all convention: easy to get wrong, easy to have bugs, can't be statically checked, and so on. Then what Go says is, well, you're gonna have two return values, and then you're gonna have to constantly check all of these all the time, which is also kind of gross. JavaScript is like, hey, let's toss an exception: if we see an error, we'll throw an exception. [00:55:15] Bryan: There are a bunch of reasons I don't like that. And then you get what Rust does, where it's like, no, no, no, we're gonna have these algebraic types, which is to say this thing can be a this or a that, but it has to be one of these. And by the way, you don't get to process this thing until you conditionally match on one of them. [00:55:35] Bryan: You're gonna have to have a pattern match on this thing to determine if it's a this or a that. The result type is a generic: it's gonna be either an Ok that contains the thing you wanna return, or an Err that contains your error, and it forces your code to deal with that.
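The contrast can be made concrete with a small Rust sketch (`parse_port` is a hypothetical function, purely for illustration): a C-style API would return a sentinel like -1 and nothing would stop the caller from using it as a real value, while `Result` makes the failure case a separate arm of an algebraic type that the compiler forces you to match on.

```rust
// Result<u16, String> is either Ok(port) or Err(message): one of the two,
// never both, never neither, and the payload is unreachable until you
// pattern-match on which one you got.
fn parse_port(s: &str) -> Result<u16, String> {
    s.parse::<u16>()
        .map_err(|e| format!("bad port {:?}: {}", s, e))
}

fn main() {
    for input in ["8080", "not-a-port"] {
        // The compiler forces this match; there is no way to silently
        // ignore the error case the way a C caller can ignore a sentinel.
        match parse_port(input) {
            Ok(port) => println!("listening on {}", port),
            Err(msg) => println!("refusing to start: {}", msg),
        }
    }
}
```

Note that a `u16` port can never be confused with an error code here: the "valid range" and the "error" live in different arms of the type rather than different ranges of one integer.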
[00:55:57] Bryan: And what that does is shift the cognitive load from the person operating this thing in production to the actual developer, in development. And I love that shift; that shift to me is really important. That's what I was missing, and that's what Rust gives you. [00:56:23] Bryan: Rust forces you to think about your code as you write it, but as a result, you have an artifact that is much more supportable, much more sustainable, and much faster. Prefer to frontload cognitive load during development instead of at runtime [00:56:34] Jeremy: Yeah, it sounds like you would rather take the time during development to think about these issues, because dealing with garbage collection or error handling at runtime, when you're trying to solve a problem, is much more difficult than having dealt with it to start with. [00:56:57] Bryan: Yeah, absolutely. And if it's infrastructure software, the question that you should ask when you're writing software is: how long is this software gonna live? How many people are gonna use this software? If you are writing an operating system, the answer is that this thing you're gonna write is gonna live for a long time. [00:57:18] Bryan: Just look at the many aspects of the system that have been around for decades. It's gonna live for a long time, and many, many people are gonna use it. Why would we not expect the people writing that software to take on more cognitive load when they're writing it, to give us something that's gonna be a better artifact? [00:57:38] Bryan: Now, conversely, you're like, hey, I kind of don't care about this. I just wanna see if this whole thing works; I'm just stringing this together.
I don't, like, no, the software will be lucky if it survives until tonight, but then, like, who cares? Yeah. Yeah. [00:57:52] Bryan: Garbage collect it, you know, if you're prototyping something, whatever. And this is why you really do get, like, you know, different choices, different technology choices, depending on the way that you wanna solve the problem at hand. And for the software that I wanna write, I do like that cognitive load that is upfront. With LLMs maybe you can get the benefit of the robust artifact with less cognitive load [00:58:10] Bryan: Um, and although I think, I think the thing that is really wild, that is the twist that I don't think anyone really saw coming, is that in an LLM age, that cognitive load upfront almost needs an asterisk on it, because so much of that can be assisted by an LLM. And now, I mean, I would like to believe, and maybe this is me being optimistic, that in the LLM age we will see, I mean, Rust is a great fit for the LLM age because the LLM itself can get a lot of feedback about whether the software that's written is correct or not. [00:58:44] Bryan: Much more so than you can in other environments. [00:58:48] Jeremy: Yeah, that is an interesting point, in that I think when people first started trying out the LLMs to code, they were really good at these maybe looser languages like Python or JavaScript, and initially weren't so good at something like Rust. But it sounds like as that improves, if it can write it, then because of the rigor or the memory management or the error handling that the language is forcing you to do, it might actually end up being a better choice for people using LLMs. [00:59:27] Bryan: Absolutely. It gives you more certainty in the artifact that you've delivered. I mean, you know a lot about a Rust program that compiles correctly.
I mean, there are certain classes of errors that you don't have, um, that you actually don't know about in a C program or a Go program or a JavaScript program. [00:59:46] Bryan: I think that's gonna be really important. I think we are on the cusp, maybe we've already seen it, of this kind of great bifurcation in the software that we write.
The Athletic writer and author Liam Tharme joins the show to unpack the biggest shift in modern distance running: the rise of “super shoes.” Tharme's new book, Super Shoes: How Advanced Technology Revolutionized Running, traces how Nike's Vaporfly (and the carbon-plated, high-stack foam revolution that followed) helped trigger an avalanche of fast times and world records across the roads and track. In this conversation, Liam shares how his own running background fueled his curiosity, what he learned reporting the inside story of Breaking2, and why the technology boom has sparked debates around fairness, access, and sporting integrity. We dig into the science behind the gains, the key researchers who helped validate them, the brand arms race between Nike, Adidas, Puma, ASICS, Hoka, New Balance and On, and the tricky new reality super shoes introduce: when performance leaps can be explained by tech, it can get harder to interpret everything else we see on race day.
In this episode, we cover:
- How the Vaporfly changed running in 2016 and why the record books haven't looked the same since
- The origins of carbon plates + advanced foams, and what the research actually says
- Breaking2's behind-the-scenes decisions and the people who made it possible
- The “shoe doping” debate, fairness, and how accessibility has evolved
- The current footwear landscape and who's winning the innovation race now
- The next frontier: personalization, super-responders, and what “the perfect shoe” could mean
Super Shoes is available now here.
____________
Host: Chris Chavez | @chris_j_chavez
Guest: Liam Tharme | @liamtharme
Produced by: Jasmine Fehr | @jasminefehr
____________
SUPPORT OUR SPONSORS
USATF: The USATF Indoor Track and Field Championships presented by Prevagen are back in New York City from February 28th to March 1st at the Ocean Breeze Athletic Complex in Staten Island. This is where legends don't just race; they punch their ticket to the world stage.
The pressure is real, the margins are razor thin, and every athlete is fighting for one thing: a spot on Team USATF at the World Indoor Championships. Grab your tickets now at USATF.org/tickets and experience track and field at its absolute loudest.
OLIPOP: A blast from the past, Olipop's Shirley Temple combines smooth vanilla flavor with bright lemon and lime, finished with cherry juice for that nostalgic grenadine-like flavor. One sip of this timeless soda proves some flavors never grow old. Try Shirley Temple and more of Olipop's flavors at DrinkOlipop.com and use code CITIUS25 at checkout to get 25% off your orders.
In Episode 98 of Peak Pursuits, brought to you by ASICS, Sim and Vlad start out by chatting through Vlad's experience at Tarawera, including what led to his DNF and his lessons for moving forward. They are then joined by Sarah Ludowici to go through the results of the weekend's Australian Short Trail Championships, followed by a discussion on equality and inclusion after event policies rendered Sarah unable to participate in the 30km event she was registered for, due to the timing of the race briefing, her need to look after her beautiful 10-month-old Aurora at that time, and inflexibility in accommodating this need. It sparks a larger discussion around what inclusion and equality look like, and the effect policies can have on groups of people when they inadvertently disadvantage their ability to compete. The team then relay some of the results from what was a busy weekend, before previewing next week, which is just as busy. We hope you enjoy!
Results:
Snowy Mountains Trail Run
Hut 2 Hut
Run the Lighthouse
Takayna Trail Ultra
ATR Summer Series 4: Cleland National Park
Sydney Trail Summer Series 2: Manly Dam
***Don't forget, use code PEAK at https://bix-hydration.myshopify.com/en-au for 20% off Bix products, exclusive to PPP listeners!***
Thanks for tuning in to Peak Pursuits! Connect with us on Instagram @peakpursuits.pod to share your thoughts, questions, and your own trail stories, or email peakpursuitspodcast@gmail.com. Until next time, keep hitting the trails and chasing those peak pursuits!
Follow Vlad: Instagram | Strava
Follow Sim: Instagram | Strava
Follow Sarah: Instagram
Music from #Uppbeat (free for Creators!): https://uppbeat.io/t/mood-maze/trendsetter
License code: K08PMQ3RATCE215R
This week on Fuel for the Sole, we break down a “study” making the rounds on Instagram about DQ Blizzards and tackle several supplement questions, including berberine, urolithin, MCT oil, and the Santa Madre RESET gel. Want to be featured on the show? Email us (written or an audio file!) at fuelforthesolepodcast@gmail.com.
This episode is fueled by ASICS and RNWY! Head over to ASICS.com and sign up for a OneASICS account. It's completely free, and when you sign up you will receive 10% off your first purchase. You also gain access to exclusive colorways on ASICS.com, free standard shipping, special birthday month discounts and more. Try the new Salty Carbs at https://rnwy.life/ and use code FEATHERS15 for 15% off your purchase.
Disclaimer: This content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition.
Episode 73: How to Hack Dopamine to Get Out the Door (Plus: Running in the Snow + A Beautiful Listener Story) Yes… we re-recorded this one.
The Asics Gel Trabuco 12 was one of the best versions in the line's history. Starting with the Trabuco 13 (now without "Gel"), Asics went for a more versatile, smoother-rolling, lighter concept: a more "UTMB-style" shoe. The technical character was definitely lost, in favor of a greater sense of roominess and comfort, thanks in large part to the use of lighter, more pleasant compounds such as the FF Blast+ in the Trabuco 13 or the FF Blast Max in the Trabuco 14. What alternatives are there, if you miss the good feel of the Trabuco 12? The answer is in Asics' own catalog, and it's called the Trabuco Terra 3. In this episode, I explain it.
Contact: juan@ellaboratoriodejuan.com
In episode 105, we finally get the stream dialed and dive straight into hands‑on Bitcoin mining and open-source hardware updates. We share the latest on Ember One: a sneaky IO voltage domain bug uncovered by Mujina dev Ryan led to a desk‑side hardware fix that's now pushing ~2 TH/s (target is 3.6 TH/s across 12 chips with proper cooling). We unpack chip and hashboard design lore—from stacked voltage domains and reliability in long chains to the insider politics at big silicon shops like Intel. We talk why selling chips openly matters, how spec sheets unlock real builder momentum, and why third‑party system builders (think Epic Blockchain) can grease the skids between chipmakers and end products.
We cover Mujina's trajectory toward a universal, Linux‑first, open firmware for miners—auto‑detect dreams vs config realities—and near‑term support for Ember One's Intel boards and existing Antminers. We riff on home‑miner UX, remote monitoring, and agent/LLM tooling (cron‑job‑with‑superpowers, heartbeats, MCP integrations) to tune, alert, and manage miners. There's buzz around FutureBit's Apollo 3 (likely Auradine chips), open vs lawyered licenses, and the path from FPGA teaching rigs to community‑designed ASICs. We celebrate community hashing on the 256F HydroPool hash‑dash, solo‑block wins, and Heat Punk Summit prep (immersion hot tub included). Plus, a call to action: support developer freedom at change.org/billandkeonne. It's a dense, builder‑first session on chips, firmware, agents, and bringing practical hashrate‑heat products to life.
ASICS is coming in hot as we head into Spring with a big update to one of their most beloved models: the Superblast 3. Nathan and David are joined by Paul Lang (ASICS Global Footwear Senior Product Manager for Performance Running) to talk about all the changes. Version 3 introduces FF Leap, first seen in the Metaspeed series, a re-tooled upper, and much more. Tune in to hear the behind-the-scenes story of the Superblast 3!
Get your DOR Merch: https://doctors-of-running.myspreadshop.com/
We're thrilled to have Rabbit as a presenting partner! You can use code DOCTORS10 to get 10% off your entire order of $50.00 or more. Note that the code is limited to one use per customer and can't be combined with other discounts. The code is active from the 1st of every month to the last day at 11:59PM PST, but don't worry, because we'll be bringing you a new code every month. Shop now at https://www.runinrabbit.com.
Our In For Testing segment is fueled by Skratch Labs! Get 20% off your first order from Skratch with code: DOCTORSOFRUNNING! https://www.skratchlabs.com
Chapters
0:00 - Intro
3:06 - What was the vision behind the Superblast 3
5:22 - How does Asics test their shoes?
11:34 - Midsole origins and design
15:48 - David's comparative experiences between the Superblast 2 and 3
20:14 - Why no plate? Why the specific stack height?
25:12 - Upper design
33:44 - Changes to the outsole
35:02 - The "Blast" line as a whole
40:32 - If you could pick one "Blast" shoe, which one are you picking?
45:04 - What's coming next for the Blast line
47:22 - Wrap-up
Enio Augusto and Marcos Buosi talk about everything in the world of running shoes and other running-related gear.
BECOME A CHANNEL MEMBER!!!
Here you'll find analysis, reviews, tips, predictions, questions, answers, numbers, prices, and opinions. Information with good humor, questions answered, and content to spare. Send in your question. Listen, learn, teach, and have fun with us.
- Everything about the Asics Superblast 3.
- Discount coupons:
CORRA BARATO - PFC
KEEP RUNNING BRASIL - PFC
https://www.instagram.com/keeprunningbrasil/
https://www.youtube.com/@KeepRunningBrasil
https://www.facebook.com/keeprunningbrasil
https://www.linkedin.com/company/keep-running-brasil/
https://www.instagram.com/keepers.run/
- BECOME A CHANNEL MEMBER ON YOUTUBE
To kick off the Move Her Mind Event Summit, we sat down with ASICS athlete Taliyah Brooks and Olympian Allyson Felix to explore the different stages and transitions throughout life. From high school competition to the world stage, they shared lessons from their journeys at the highest level—reminding us that every chapter brings its own challenges, growth, and opportunities.
We're talking about brands this morning, friends of words, at the suggestion of Phil, from Laval, who writes: "Hello madam, could you do a segment on company names? For them, choosing a name is essential!" Thank you, Phil, what a great idea!
Hosted by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.
Yesterday the official launch of the Asics Superblast 3 took place here in Seville, and I was there. Shall we see how it went? There's even an interview with the brand's Global Marketing Director for the Running Shoe Division, and a relay we ran at night.
Our links - https://linktr.ee/corridanoar
The Corrida no Ar News is produced daily and posted around 6 in the morning.
PARTNERS
FORCELL - https://forcellperformance.com.br/
Use the coupon CORRIDANOAR for a nice discount
What starts as a quick run to get out of the house… can end up changing your entire life. In this episode of the Active Mom Podcast, I sit down with running creator Erica Kennedy, aka @inmyrunningerica, to talk about how running helped her survive an abusive marriage — and slowly rebuild her confidence, independence, and sense of self, one mile at a time. Now she's the voice behind the hilarious IG posts that we runners can't stop sharing — from what to wear in "Bald Headed Scallywags" winter-storm running weather to her spot-on anatomy jokes and injury stories that make every runner say, "wait… that's literally me." But behind the humor is real life. We talk about:
This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely “solved” through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.
Full Video Pod
On YouTube!
Timestamps
* 00:00 Introduction to Benchmarking and the “Solved” Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The “Bitter Lesson” and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the “Virtual Cell”
* 01:17:35 Interacting with Skeptical Medicinal Chemists
Key Summary
Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right “valley” in the energy landscape, they likely possess a “light understanding” of physics to refine the local minimum.
The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the “averaging” effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the “bitter lesson” of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.
Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level “spec” (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction—quantifying exactly how tightly a designed binder will stick to its target.
Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just “regurgitating” known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides “agents” for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.
Transcript
RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models, like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.Gabriel [00:06:26]: Yeah, it's interesting you say that, like, in some sense, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that, you know, that a lot of progress was made on was the ability to predict the structure of single chain proteins. So proteins can, like, be composed of many chains. And single chain proteins are, you know, just a single sequence of amino acids. And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints that come from evolutionary landscapes.
So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimensions. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in the absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved from the perspective of structure prediction; when it isn't, it's much more challenging. And I think it's also worth differentiating, because sometimes we confound them a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured state. And that, I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain, in whatever form it was originally, to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like, the different, you know, proteins are not static. They move, they take different shapes based on their energy states.
And I think we are also not that good at understanding the different states that the protein can be in, and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints, we were able to make such dramatic progress.Brandon [00:09:45]: So I want to ask why the intermediate states matter. But first, I kind of want to understand, why do we care what proteins are shaped like?Gabriel [00:09:54]: Yeah, I mean, the proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells, you know, work is typically through proteins, sometimes other molecules, sort of intermediate interactions. And through those interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology, how our body works, how diseases work, we often try to boil it down to, okay, what is going right in the case of, you know, our normal biological function, and what is going wrong in the case of the disease state. And we boil it down to kind of, you know, proteins and kind of other molecules and their interactions. And so when we try predicting the structure of proteins, it's critical to, you know, have an understanding of kind of those interactions. It's a bit like the difference between having a list of parts that you would put in a car and seeing the car in its final form; you know, seeing the car really helps you understand what it does. On the other hand, kind of going to your question of, you know, why do we care about how the protein folds, or, you know, how the car is made, to some extent, is that, you know, sometimes when something goes wrong, there are, you know, cases of, you know, proteins misfolding in some diseases and so on, and if we don't understand this folding process, we don't really know how to intervene.RJ [00:11:30]: There's this nice line, I think it's in the AlphaFold 2 manuscript, where they sort of discuss also like why we're even hopeful that we can target the problem in the first place. And there's this notion that like, well, for proteins that fold, the folding process is almost instantaneous, which is a strong, like, you know, signal that, yeah, we might be able to predict this very, like, constrained thing that the protein does so quickly. And of course that's not the case for, you know, for all proteins. And there's a lot of like really interesting mechanisms in the cells, but yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem, and part of the reason why people thought it was impossible, is that it used to be studied as kind of like a classical example of, like, an NP problem. Uh, like there are so many different, you know, types of, you know, shapes that, you know, these amino acids could take, and so this grows combinatorially with the size of the sequence. And so there used to be kind of a lot of actually kind of more theoretical computer science thinking about and studying protein folding as an NP problem. And so it was very surprising also, from that perspective, kind of seeing machine learning show so clearly that there is some, you know, signal in those sequences, through evolution, but also through kind of other things that, you know, us as humans are probably not really able to, uh, to understand, but that these models have learned.Brandon [00:13:07]: And so Andrew White, we were talking to him a few weeks ago, and he said that he was following the development of this and that there were actually ASICs that were developed just to solve this problem. So, again, there were many, many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations and that you see this again and again in different species. So explain why does that give us a good hint that they're close by to each other? Yeah.RJ [00:13:41]: Um, like, think of it this way: that, you know, if I have, you know, some amino acid that mutates, it's going to impact everything around it, right? In three dimensions. And so it's almost like the protein, through several, probably random, mutations and evolution, like, you know, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. Uh, so the whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of like different positions compensating for each other. I see.Brandon [00:14:17]: Those hints in aggregate give us a lot. Yeah. So you can start to look at what kinds of information about what is close to each other, and then you can start to look at what kinds of folds are possible given the structure, and then what is the end state.RJ [00:14:30]: And therefore you can make a lot of inferences about what the actual total shape is. Yeah, that's right.
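The co-evolution hint the speakers describe can be sketched with plain statistics. This toy Rust example (a made-up four-sequence MSA and simple mutual information, nothing the actual models learn) scores a pair of alignment columns: columns that mutate together across homologs score high, the classic signal that the two positions are likely close in 3D.

```rust
use std::collections::HashMap;

// Mutual information between two alignment columns of an MSA:
// high MI means the residues co-vary across homologs.
fn column_mi(msa: &[&str], i: usize, j: usize) -> f64 {
    let n = msa.len() as f64;
    let mut pi: HashMap<char, f64> = HashMap::new();
    let mut pj: HashMap<char, f64> = HashMap::new();
    let mut pij: HashMap<(char, char), f64> = HashMap::new();
    for seq in msa {
        let (a, b) = (seq.as_bytes()[i] as char, seq.as_bytes()[j] as char);
        *pi.entry(a).or_insert(0.0) += 1.0 / n;   // marginal of column i
        *pj.entry(b).or_insert(0.0) += 1.0 / n;   // marginal of column j
        *pij.entry((a, b)).or_insert(0.0) += 1.0 / n; // joint of (i, j)
    }
    // MI = sum over observed pairs of p(a,b) * ln( p(a,b) / (p(a) p(b)) )
    pij.iter()
        .map(|(&(a, b), &p)| p * (p / (pi[&a] * pj[&b])).ln())
        .sum()
}

fn main() {
    // Toy MSA: columns 0 and 2 co-vary perfectly (A pairs with L, D with V),
    // while column 3 mutates independently of column 0.
    let msa = ["ACLG", "DCVG", "ACLT", "DCVT"];
    println!("MI(0,2) = {:.3}", column_mi(&msa, 0, 2)); // 0.693 (= ln 2): co-evolving pair
    println!("MI(0,3) = {:.3}", column_mi(&msa, 0, 3)); // 0.000: independent columns
}
```

Real pipelines use far better statistics than raw MI (which is confounded by indirect couplings), but the underlying intuition, correlated columns hint at spatial contact, is exactly what RJ outlines.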
It's almost like, you know, you have this big, like, three dimensional valley, you know, where you're sort of trying to find these, like, low energy states, and there's so much to search through that it's almost overwhelming. But these hints, they sort of maybe put you in an area of the space that's already, like, kind of close to the solution, maybe not quite there yet. And there's always this question of, like, how much physics are these models learning, you know, versus, like, just pure, like, statistics. And, like, I think one of the things, at least, that I believe is that once you're in that sort of approximate area of the solution space, then the models have, like, some understanding, you know, of how to get you to, like, you know, the lower energy, uh, low energy state. And so maybe you have some light understanding of physics, but maybe not quite enough, you know, to know how to, like, navigate the whole space. Right. Okay.Brandon [00:15:25]: So we need to give it these hints to kind of get into the right valley, and then it finds the minimum or something. Yeah.Gabriel [00:15:31]: One interesting explanation of how AlphaFold works that I think is quite insightful, which of course doesn't cover kind of the entirety of what AlphaFold does, is one I'm going to borrow from, uh, Sergio Chinico from MIT. So the interesting thing about AlphaFold is it's got this very peculiar architecture that we have seen, you know, used, and this architecture operates on this, you know, pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other. MSA is multiple sequence alignment. Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, then it's almost as if the model is sort of running some kind of, you know, Dijkstra-like algorithm, where it's sort of decoding: okay, these have to be close. Okay, then if these are close and this is connected to this, then this has to be somewhat close. And so you decode this, and that becomes basically a pairwise kind of distance matrix. And then from this rough pairwise distance matrix, you decode kind of the actual potential structure.Brandon [00:16:42]: Interesting. So there's kind of two different things going on, the kind of coarse grain and then the fine grain optimizations. Interesting. Yeah. Very cool.Gabriel [00:16:53]: Yeah. You mentioned AlphaFold3. So maybe now is a good time to move on to that. So yeah, AlphaFold2 came out and it was, like, I think, fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold3 came out, and maybe for some more history, like, what were the advancements in AlphaFold3? And then I think maybe after that we'll talk a bit about how it connects to Boltz. But anyway. Yeah. So after AlphaFold2 came out, you know, Jeremy and I got into the field, and with many others, you know, the clear problem that, you know, was, you know, obvious after that was: okay, now we can do individual chains. Can we do interactions? Interactions between different proteins, proteins with small molecules, proteins with other molecules. And so, why are interactions important? Interactions are important because, to some extent, that's kind of the way that, you know, these machines, you know, these proteins have a function; you know, the function comes from the way that they interact with other proteins and other molecules. Actually, in the first place, you know, the individual machines are often, as Jeremy was mentioning, not made of a single chain, but made of multiple chains. And then these multiple chains interact with other molecules to give the function to those.
And on the other hand, you know, when we try to intervene of these interactions, think about like a disease, think about like a, a biosensor or many other ways we are trying to design the molecules or proteins that interact in a particular way with what we would call a target protein or target. You know, this problem after AlphaVol2, you know, became clear, kind of one of the biggest problems in the field to, to solve many groups, including kind of ours and others, you know, started making some kind of contributions to this problem of trying to model these interactions. And AlphaVol3 was, you know, was a significant advancement on the problem of modeling interactions. And one of the interesting thing that they were able to do while, you know, some of the rest of the field that really tried to try to model different interactions separately, you know, how protein interacts with small molecules, how protein interacts with other proteins, how RNA or DNA have their structure, they put everything together and, you know, train very large models with a lot of advances, including kind of changing kind of systems. Some of the key architectural choices and managed to get a single model that was able to set this new state-of-the-art performance across all of these different kind of modalities, whether that was protein, small molecules is critical to developing kind of new drugs, protein, protein, understanding, you know, interactions of, you know, proteins with RNA and DNAs and so on.Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data, data changes that made that possible?Gabriel [00:19:48]: Yeah, so one critical one that was not necessarily just unique to AlphaFold3, but there were actually a few other teams, including ours in the field that proposed this, was moving from, you know, modeling structure prediction as a regression problem. 
So where there is a single answer and you're trying to shoot for that answer, to a generative modeling problem, where you have a posterior distribution of possible structures and you're trying to sample from that distribution. And this achieves two things. One is it starts to allow us to model more dynamic systems. As we said, some of these proteins can actually take multiple structures, and you can now model that by modeling the entire distribution. But on the other hand, from a more core modeling perspective, when you move from a regression problem to a generative modeling problem, you are tackling the way that you think about uncertainty in the model in a different way. If the model is undecided between different answers, what's going to happen in a regression model is that it's going to try to make an average of those different answers it had in mind. When you have a generative model, what you're going to do is sample all these different answers, and then maybe use separate models to analyze those different answers and pick out the best. So that was one of the critical improvements. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes those pairwise representations and turns them into an actual structure. And that now looks a lot more like a traditional transformer than the very specialized equivariant architecture that it was in AlphaFold2.
Brandon [00:21:41]: So this is a bitter lesson, a little bit.
Gabriel [00:21:45]: There is some aspect of a bitter lesson, but the interesting thing is that it's very far from being a simple transformer. This field is, I'd argue, one of the very few fields in applied machine learning where we still have architectures that are very specialized.
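The regression-versus-generative point above can be made concrete with a toy example. This is a made-up, one-dimensional stand-in (the targets, the sampler, and the `score` function are all invented for illustration): when the ground truth is bimodal, a mean-squared-error regressor converges to the average of the modes, a value that never actually occurs, while a generative model samples candidates and lets a separate ranking model pick one of the real modes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal "ground truth": a flexible region sits at either x = -1 or x = +1.
targets = rng.choice([-1.0, 1.0], size=1000)

# A regression model trained with MSE converges to the conditional mean:
# roughly 0.0, a conformation the system never actually adopts.
mse_prediction = targets.mean()

# A generative model instead samples candidate answers, and a separate
# scoring model (a stand-in here) ranks them and keeps the most plausible.
samples = rng.choice([-1.0, 1.0], size=8)

def score(x):
    # Toy confidence: higher when the sample sits on a real mode.
    return -min(abs(x - 1.0), abs(x + 1.0))

best = max(samples, key=score)  # lands on an actual mode, -1.0 or +1.0
```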
And there are many people that have tried to replace these architectures with simple transformers. There is a lot of debate in the field, but I think most of the consensus is that the performance we get from the specialized architectures is vastly superior to what we get from a plain transformer. Another interesting thing, staying on the modeling and machine learning side, which I think is somewhat counterintuitive coming from some of the other fields and applications, is that scaling hasn't really worked the same way in this field. Now, models like AlphaFold2 and AlphaFold3 are still very large models.
RJ [00:29:14]: in a place, I think, where we had some experience working with the data and working with these types of models. And I think that put us already in a good place to produce it quickly. And I would even say, I think we could have done it quicker. The problem was, for a while, we didn't really have the compute, and so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had. We could only train it once. And so while the model was training, we were finding bugs left and right, a lot of them that I wrote. I remember doing surgery in the middle: stopping the run, making the fix, relaunching. And we never actually went back to the start. We just kept training it with the bug fixes along the way, which would be impossible to reproduce now. Yeah, that model has gone through such a curriculum that it learned some weird stuff.
But yeah, somehow, by miracle, it worked out.
Gabriel [00:30:13]: The other funny thing is that we were training most of that model on a cluster from the Department of Energy. But that's a shared cluster that many groups use. And so we were basically training the model for two days, and then it would go back to the queue and stay a week in the queue. Oh, yeah. And so it was pretty painful. Towards the end, I was talking with Evan, the CEO of Genesis, telling him a bit about the project and about this frustration with the compute. And luckily, he offered to help. And so we got the help from Genesis to finish up the model. Otherwise, it probably would have taken a couple of extra weeks.
Brandon [00:30:57]: Yeah, yeah.
Brandon [00:31:02]: And then there's some progression from there.
Gabriel [00:31:06]: Yeah, so I would say that both Boltz 1, but also these other sets of models that came around the same time, were a big leap from the previous open source models, really approaching the level of AlphaFold3. But I would still say that, even to this day, there are some specific instances where AlphaFold3 works better. I think one common example is antibody-antigen prediction, where AlphaFold3 still seems to have an edge in many situations. Obviously, these are somewhat different models; you run them, you obtain different results. So it's not always the case that one model is better than the other, but in aggregate, we still see that, especially at the time.
Brandon [00:32:00]: So AlphaFold3 is still having a bit of an edge.
We should talk about this more when we talk about BoltzGen, but how do you know one model is better than the other? So I make a prediction, you make a prediction; how do you know?
Gabriel [00:32:11]: Yeah, so the great thing about structure prediction, and once we get into the design space of designing new small molecules and new proteins this becomes a lot more complex, but the great thing about structure prediction is that, a bit like CASP was doing, the way you can evaluate models is that you train a model on the structures that were released across the field up until a certain time. And one of the things that we didn't talk about that was really critical in all this development is the PDB, the Protein Data Bank. It's this common resource, basically a common database where every biologist publishes their structures. And so we can train on all the structures that were put in the PDB until a certain date. And then we basically look for recent structures: okay, which structures look pretty different from anything that was published before? Because we really want to try to understand generalization.
Brandon [00:33:13]: And then on these new structures, you evaluate all these different models. And so you just know when AlphaFold3 was trained, and you intentionally train to the same date or something like that. Exactly. Right. Yeah.
Gabriel [00:33:24]: And so this is the way that you can somewhat easily compare these models. Obviously, that assumes that the training... You've always been very passionate about validation. I remember DiffDock, and then there was DiffDock-L and DockGen. You've thought very carefully about this in the past.
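The time-split evaluation described above can be sketched in a few lines. Everything here is invented for illustration (the entry IDs, dates, similarity values, and model scores are made up): hold out structures released after a shared training cutoff, further filter to those dissimilar from anything a model could have trained on, and average each model's accuracy on that set.

```python
from datetime import date

# Hypothetical records: (structure id, PDB release date, max similarity to any
# training-set structure in [0, 1], per-model accuracy scores). All made up.
entries = [
    ("7abc", date(2020, 3, 1), 0.95, {"model_a": 0.90, "model_b": 0.88}),
    ("8xyz", date(2023, 6, 1), 0.30, {"model_a": 0.55, "model_b": 0.71}),
    ("8pqr", date(2023, 9, 1), 0.25, {"model_a": 0.60, "model_b": 0.74}),
]

CUTOFF = date(2021, 9, 30)  # shared training-data cutoff for the comparison

def benchmark(entries, cutoff, max_similarity=0.4):
    """Keep only structures released after the cutoff AND dissimilar to the
    training data, then average each model's score on that generalization set."""
    test = [e for e in entries if e[1] > cutoff and e[2] <= max_similarity]
    models = test[0][3].keys()
    return {m: sum(e[3][m] for e in test) / len(test) for m in models}

result = benchmark(entries, CUTOFF)
```

The pre-cutoff, high-similarity entry is excluded, so the comparison only reflects performance on genuinely novel structures, which is the generalization question the conversation raises.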
Actually, I think DockGen is a really funny story; I don't know if you want to talk about that. It's an interesting... Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. And sometimes we get great feedback from people. But honestly, most of the time, to be honest, the most useful feedback is people sharing where it doesn't work. At the end of the day, it's critical, and this is also true across other fields of machine learning: to make progress in machine learning, you have to set clear benchmarks. And as you start making progress on certain benchmarks, you need to improve the benchmarks and make them harder and harder. And this is the progression of how the field operates. And so the example of DockGen was: we published this initial model called DiffDock in my first year of PhD, which was one of the early models to try to predict interactions between proteins and small molecules, which we built about a year after AlphaFold2 was published. Now, on the one hand, on the benchmarks that we were using at the time, DiffDock was doing really well, outperforming some of the traditional physics-based methods. But on the other hand, when we started giving these tools to many biologists, and one example was the group of Nick Polizzi at Harvard that we collaborated with, we started noticing that there was this clear pattern where, for proteins that were very different from the ones the model was trained on, the model was struggling. And so it seemed clear that this is probably where we should put our focus.
And so we first developed, with Nick and his group, a new benchmark, and then went after it and said, okay, what can we change about the current architecture to improve this pattern of generalization? And this is the same thing we're still doing today: where does the model not work? And then, once we have that benchmark, let's throw everything we have at it, any ideas that we have about the problem.
RJ [00:36:15]: And there's a lot of healthy skepticism in the field, which I think is great. And it's very clear that there's a ton of things the models don't really work well on, but I think one thing that's probably undeniable is just the pace of progress, how much better we're getting every year. And so if you assume any constant rate of progress moving forward, I think things are going to look pretty cool at some point in the future.
Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right?
RJ [00:36:45]: Yeah, it's one of those things. Even being in the field, you don't see it coming, you know? And hopefully we'll continue to have as much progress as we've had in the past few years.
Brandon [00:36:55]: So this is maybe an aside, but I'm really curious. You get this great feedback from the community by being open source, right? My question is partly, okay, if you open source, everyone can copy what you did, but it's also maybe about balancing priorities, right? Where it's like: all my users are saying, I want this, there are all these problems with the model. But my customers don't care, right? So how do you think about that?
Yeah.
Gabriel [00:37:26]: So I would say a couple of things. One is, part of our goal with Boltz, and this is also established as the mission of the public benefit company that we started, is to democratize access to these tools. But one of the reasons we realized that Boltz needed to be a company, that it couldn't just be an academic project, is that putting a model on GitHub is definitely not enough to get chemists and biologists across academia, biotech, and pharma to use your model in their therapeutic programs. And so a lot of what we think about at Boltz, beyond just the models, is all the layers that come on top of the models to get from those models to something that can really enable scientists in the industry. And that goes into building the right workflows that take in, for example, the data and try to directly answer the problems that the chemists and the biologists are asking about, and then also building the infrastructure. All this to say that, even with models fully open, we see a ton of potential for products in the space. And the critical part about a product is that, even with an open source model, running the model is not free. As we were saying, these are pretty expensive models, and especially, and maybe we'll get into this, these days we're seeing pretty dramatic inference-time scaling of these models, where the more you run them, the better the results are. But there, you start getting to a point where compute and compute costs become a critical factor.
And so putting a lot of work into building the right infrastructure, building the optimizations and so on, really allows us to provide a much better service than just the open source models. That said, even though with a product we can provide a much better service, I do still think, and we will continue to put a lot of our models open source, because the critical role of open source models is helping the community progress on the research, from which we all benefit. And so we'll continue, on the one hand, to put some of our base models open source so that the field can build on top of them. And, as we discussed earlier, we learn a ton from the way that the field uses and builds on top of our models. But then we try to build a product that gives the best experience possible to scientists, so that a chemist or a biologist doesn't need to spin up a GPU and set up our open source model in a particular way. A bit like, even though I am a computer scientist, a machine learning scientist, I don't necessarily take an open source LLM and spin it up myself; I just open the ChatGPT app or Claude Code and use it as an amazing product. We want to give the same experience for this field.
Brandon [00:40:40]: I heard a good analogy yesterday: a surgeon doesn't want the hospital to design a scalpel, right?
Brandon [00:40:48]: So just buy the scalpel.
RJ [00:40:50]: You wouldn't believe the number of people, even in my short time between AlphaFold3 coming out and the end of the PhD, the number of people that would reach out just for us to run AlphaFold3 for them, or things like that.
Just because, in our case with Boltz, it's not that easy to do that if you're not a computational person. And I think part of the goal here is that we continue to build the interface with computational folks, obviously, but also that the models are accessible to a larger, broader audience. And that comes from good interfaces and things like that.
Gabriel [00:41:27]: I think one really interesting thing about Boltz is that with the release of it, you didn't just release a model, you created a community. Yeah. That community grew very quickly. Did that surprise you? And what has the evolution of that community been, and how has it fed into Boltz?
RJ [00:41:43]: If you look at its growth, it's very much: when we release a new model, there's a big jump. But yeah, I mean, it's been great. We have a Slack community that has thousands of people on it. And it's actually self-sustaining now, which is the really nice part, because it's almost overwhelming to try to answer everyone's questions and help. It's really difficult for the few people that we were. But it ended up that people would answer each other's questions and help one another. And so the Slack has been self-sustaining, and it's been really cool to see.
RJ [00:42:21]: And that's for the Slack part, but then also obviously on GitHub we've had a nice community. I think we also aspire to be even more active on it than we've been in the past six months, which has been a bit challenging for us. But.
Yeah, the community has been really great, and there are a lot of papers that have come out with new evolutions on top of Boltz. It surprised us to some degree, because there are a lot of models out there, and people converging on ours was really cool. And I think it also speaks to the importance, when you put code out, of putting a lot of emphasis on making it as easy to use as possible, something we thought a lot about when we released the code base. It's far from perfect, but, you know.
Brandon [00:43:07]: Do you think that was one of the factors that caused your community to grow, just the focus on being easy to use, making it accessible? I think so.
RJ [00:43:14]: Yeah. And we've heard it from a few people over the years now. And some people still think it should be a lot nicer, and they're right. But yeah, I think it was, at the time, maybe a little bit easier than other things.
Gabriel [00:43:29]: The other part that I think led to the community, and to some extent the community's trust in what we put out, is the fact that it's not really been just one model. Maybe we'll talk about it: after Boltz 1, there were maybe another couple of models released or open sourced soon after. We continued that open source journey with Boltz 2, where we are not only improving structure prediction but also starting to do affinity prediction: understanding the strength of the interactions between these different molecules, which is this critical property that you often want to optimize in discovery programs.
And then, more recently, also a protein design model. And so we've been building this suite of models that come together and interact with one another, where there is almost an expectation, which we take very much to heart, of always having, across the entire suite of different tasks, the best or close to the best model out there, so that our open source tools can be the go-to models for everybody in the industry. I really want to talk about Boltz 2, but before that, one last question in this direction: was there anything about the community that surprised you? Was someone doing something where you thought, why would you do that? That's crazy. Or: that's actually genius, and I never would have thought of that.
RJ [00:45:01]: I mean, we've had many contributions. I think some of the interesting ones... we had this one individual who wrote a complex GPU kernel for part of the architecture, and the funny thing is that piece of the architecture had been there since AlphaFold2, and I don't know why it took Boltz for this person to decide to do it, but that was a really great contribution. We've had a bunch of others, people figuring out ways to hack the model to do something, like cyclic peptides. I don't know if any other interesting ones come to mind.
Gabriel [00:45:41]: One cool one, and this was something that was initially proposed as a message in the Slack channel by Tim O'Donnell: there are some cases, especially, for example, as we discussed, antibody-antigen interactions, where the models don't necessarily get the right answer.
What he noticed is that the models were somewhat stuck in their predictions for the antibodies. And in this model you can condition; basically, you can give hints. So he basically gave hints to the model: okay, you should bind to this residue, you should bind to the first residue, or the 11th residue, or the 21st residue, basically every 10 residues, scanning the entire antigen.
Brandon [00:46:33]: Residues are the...
Gabriel [00:46:34]: The amino acids. The amino acids, yeah. So the first amino acid, the 11th amino acid, and so on. So it's like doing a scan, conditioning the model to predict all of them, and then looking at the confidence of the model in each of those cases and taking the top one. It's a somewhat crude way of doing inference-time search. But surprisingly, for antibody-antigen prediction, it actually helped quite a bit. And so there are some interesting ideas where, as the person developing the model, you say, wow, why would the model be so dumb? But it's very interesting, and it leads you to start thinking: okay, how can I do this not with brute force, but in a smarter way?
RJ [00:47:22]: And so we've also done a lot of work in that direction. And that speaks to the power of scoring. We're seeing that a lot; I'm sure we'll talk about it more when we talk about BoltzGen. But our ability to take a structure and determine that that structure is good, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models.
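The epitope-scanning trick described above amounts to a simple loop. This is a runnable toy sketch, not the actual Slack-contributed code: `predict` is a made-up stand-in for a structure model that accepts a "bind here" hint and returns a confidence, with the toy epitope arbitrarily centred at residue 37.

```python
def predict(antigen, hint_residue):
    """Stand-in for conditioning a structure model on a binding-site hint and
    reading back its confidence. Toy: confidence peaks near residue 37."""
    return 1.0 / (1.0 + abs(hint_residue - 37))

def scan_epitope(antigen_length, stride=10):
    """Hint 'bind to residue r' every `stride` residues (1, 11, 21, ...),
    score each conditioned prediction, and keep the most confident hint."""
    candidates = range(0, antigen_length, stride)
    scored = {r + 1: predict(None, r + 1) for r in candidates}
    best = max(scored, key=scored.get)
    return best, scored

best, scored = scan_epitope(antigen_length=100)
```

Crude as it is, this is a form of inference-time search: spend more compute on conditioned samples, then let the model's own confidence rank them.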
If you can sample a ton, and you assume that if you sample enough you're likely to have the good structure in there, then it really just becomes a ranking problem. And part of the inference-time scaling that Gabri was talking about is very much that: the more we sample, the more the ranking model ends up finding something it really likes. And so I think our ability to get better at ranking is also what's going to enable the next big breakthroughs. Interesting.
Brandon [00:48:17]: But I guess, my understanding is, there's a diffusion model and you generate some stuff, and then, I guess it's just what you said, you rank it using a score, and then you finally... So can you talk about those different parts? Yeah.
Gabriel [00:48:34]: So, first of all, one of the critical beliefs that we had when we started working on Boltz 1 was that structure prediction models are somewhat our field's version of foundation models: they learn about how proteins and other molecules interact, and then we can leverage that learning to do all sorts of other things. And so with Boltz 2, we leveraged that learning to do affinity prediction: understanding, if I give you this protein and this molecule, how tight is that interaction. For BoltzGen, what we did was take that foundation model and fine-tune it to predict entire new proteins. The way that works is that, for the protein you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens, and you train the model to predict both the structure of that protein
and also what the different amino acids of that protein are. So the way that BoltzGen operates is that you feed in a target protein that you may want to bind to, or DNA, RNA, and then you feed in a high-level design specification of what you want your new protein to be. For example, it could be an antibody with a particular framework, it could be a peptide, it could be many other things. And that's with natural language, or? It's basically prompting. We have this spec that you specify, and you feed this spec to the model, and the model translates it into a set of tokens: a set of conditioning for the model, a set of blank tokens. And then, as part of the diffusion model, it decodes a new structure and a new sequence for your protein. And then we take that and, as Jeremy was saying, we try to score it: how good of a binder is it to the original target?
Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule, and then that kind of gives you a score? Exactly.
Gabriel [00:51:03]: So you use the model to predict the folding, and then you do two things. One is you predict the structure of the designed sequence with something like Boltz 2, and then you compare that structure with what the design model predicted. In the field this is called consistency: you want to make sure that the structure you're predicting is actually what you're trying to design. And that gives you much better confidence that it's a good design. So that's the first filter.
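The consistency filter just described can be sketched in a few lines. The functions here are hypothetical stand-ins (a real pipeline would refold the designed sequence with a structure predictor such as Boltz 2 and superimpose the structures first); the sketch assumes pre-aligned coordinates and a 2 Å cutoff, a common ballpark choice.

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-square deviation between two (N, 3) coordinate arrays,
    assumed already superimposed to keep the sketch short."""
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

def passes_consistency(designed_coords, refolded_coords, cutoff=2.0):
    """Keep a design only if refolding its sequence with an independent
    predictor reproduces the intended structure within `cutoff` angstroms."""
    return rmsd(designed_coords, refolded_coords) < cutoff

# Toy check: a refold that lands 0.5 A off in every coordinate passes.
designed = np.zeros((5, 3))
refolded = designed + 0.5
ok = passes_consistency(designed, refolded)
```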
And the second filter that we did as part of the BoltzGen pipeline that was released is that we look at the confidence that the model has in the structure. Now, unfortunately, going to your question about predicting affinity, confidence is not a very good predictor of affinity. And so one of the things where we've actually made a ton of progress since we released Boltz 2, and we have some new results that we're going to announce soon, is the ability to get much better hit rates when, instead of relying on the confidence of the model, we directly try to predict the affinity of that interaction.
Brandon [00:52:03]: Okay. Just backing up a minute. So your diffusion model actually predicts not only the protein sequence, but also the folding of it. Exactly.
Gabriel [00:52:32]: And actually, one of the big things we did differently compared to other models in the space, and there were some papers that had already done this before, but we really scaled it up, was basically merging structure prediction and sequence prediction into almost the same task. The way that BoltzGen works is that the only thing you're doing is predicting the structure. The only supervision we give is supervision on the structure. But because the structure is atomic, and the different amino acids have different atomic compositions, from the way the model places the atoms we recover not only the structure it intended, but also the identity of the amino acid the model believed was there. So instead of having these two supervision signals, one discrete and one continuous, that somewhat don't interact well together.
We built an encoding of sequences in structures that allows us to use exactly the same supervision signal that we were using for Boltz 2, largely similar to what AlphaFold3 proposed, which is very scalable. And we can use that to design new proteins. Oh, interesting.
RJ [00:53:58]: Maybe a quick shout-out to Hannes Stark on our team, who did all this work. Yeah.
Gabriel [00:54:04]: Yeah, that was a really cool idea. I mean, looking at the paper, there's this encoding where you just add a bunch of atoms, which can be anything, and then they get rearranged and basically plopped on top of each other, and that encodes what the amino acid is. There's a unique way of doing this. It was such a cool, fun idea.
RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.
Gabriel [00:54:33]: Yeah, a couple of papers had proposed this, and Hannes really took it to large scale.
Brandon [00:54:39]: A lot of the BoltzGen paper is dedicated to the validation of the model. In my opinion, all the people we talk to basically feel that this in-the-wet-lab, or whatever the appropriate real-world validation is, is the whole problem, or not the whole problem, but a big, giant part of the problem. So can you talk a little bit about the highlights from there? Because to me, the results are impressive, both from the perspective of the model and also just the effort that went into the validation by a large team.
Gabriel [00:55:18]: First of all, I should start by saying that both when we were at MIT, in Tommi Jaakkola and Regina Barzilay's lab, as well as at Boltz, we are not a biolab, and we are not a therapeutics company.
And so, to some extent, we were forced to look outside of our group, our team, for the experimental validation. One of the things that Hannes in the team really pioneered was the idea: okay, can we go not only to one specific group, find one specific system, maybe overfit a bit to that system, and try to validate, but test this model across a very wide variety of different settings? Protein design is such a wide task, with all sorts of different applications, from therapeutics to biosensors and many others, so can we get a validation that goes across many different tasks, for anyone in the field? And so he basically put together, I think it was something like 25 different academic and industry labs, that committed to testing some of the designs from the model, and some of this testing is still ongoing, and to giving results back to us, in exchange for hopefully getting some great new sequences for their task. He was able to coordinate this very wide set of scientists, and already in the paper I think we shared results from eight to ten different labs: results from designing peptides targeting ordered proteins, peptides targeting disordered proteins, results of designing proteins that bind to small molecules, results of designing nanobodies, and across a wide variety of different targets. And so that gave the paper a lot of validation of the model, validation that was broad.
Brandon [00:57:39]: And so would those be therapeutics for those animals, or are they relevant to humans as well?
They're relevant to humans as well.

Gabriel [00:57:45]: Obviously you need to do some work to, quote unquote, humanize them, making sure they have the right characteristics so they're not toxic to humans and so on.

RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern of trying to design things that are smaller: they're easier to manufacture, but that comes with other potential challenges, maybe a bit less selectivity than something that has more hands. But yes, there's a big desire to design miniproteins, nanobodies, and small peptides, because they're just great drug modalities.

Brandon [00:58:27]: Okay. I think we left off talking about validation in the lab, and I was very excited to see all the diverse validations you've done. Can you go into more detail about specific ones?

RJ [00:58:43]: The nanobody one. What was it, 15 targets? 14. 14 targets. The way this typically works is that we make a lot of designs, on the order of tens of thousands, then we rank them and pick the top, in this case 15 for each target. Then we measure the success rates: both how many targets we were able to get a binder for, and, more generally, out of all the binders we designed, how many actually proved to be good binders. Some of the other ones involved things like a small molecule where we design a protein that binds to it. That has a lot of interesting applications, for example, as Gabri mentioned, biosensing, which is pretty cool.
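The loop RJ describes, generate tens of thousands of designs per target, rank them, send the top 15 to the lab, then compute two success rates, can be sketched as follows. Everything here is a made-up stand-in: the scoring function, the simulated lab measurements, and the 1 nM "good binder" threshold are illustrative assumptions, not the actual Boltz pipeline.

```python
import random

random.seed(0)
TOP_K = 15            # designs sent to the lab per target
NANOMOLAR = 1e-9      # Kd <= ~1 nM counts as a "good" binder here (assumption)
targets = [f"target_{i}" for i in range(14)]

def in_silico_score(design: str) -> float:
    """Placeholder ranking score (higher = better); a real pipeline would
    score with structure-prediction / confidence models."""
    return random.random()

def lab_kd(design: str):
    """Placeholder wet-lab measurement: Kd in molar, or None if no binding."""
    return random.choice([5e-10, 2e-8, 1e-6, None])

results = {}
for t in targets:
    designs = [f"{t}_design_{i}" for i in range(20_000)]  # "tens of thousands"
    shortlist = sorted(designs, key=in_silico_score, reverse=True)[:TOP_K]
    results[t] = [lab_kd(d) for d in shortlist]

# Success rate 1: fraction of targets with at least one strong binder.
target_hit_rate = sum(
    any(kd is not None and kd <= NANOMOLAR for kd in kds)
    for kds in results.values()
) / len(targets)

# Success rate 2: fraction of all tested designs that bound at all.
all_kds = [kd for kds in results.values() for kd in kds]
design_hit_rate = sum(kd is not None for kd in all_kds) / len(all_kds)
print(target_hit_rate, design_hit_rate)
```

The two rates answer different questions: the first ("did any of the 15 work?") is what a design campaign cares about per target; the second is the per-design hit rate RJ mentions.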
We had a disordered protein too, I think you mentioned. Those were some of the highlights.

Gabriel [00:59:44]: The way we structured those validations was, on one end, validations across a whole set of problems that the biologists we were working with came to us with. For example, in some of the experiments we designed peptides targeting RACC, a target involved in metabolism, and we had a number of other applications where we designed peptides or other modalities against other therapeutically relevant targets. We also designed some proteins to bind small molecules. Then some of the other testing was really about getting a broader sense of how the model works, especially when tested on generalization. One thing we found in the field was that a lot of the validation, outside of validation on specific problems, was done on targets that have many known interactions in the training data. So it's always a bit hard to understand how much these models are really just regurgitating or imitating what they've seen in the training data versus really being able to design new proteins. So one of our experiments was to take nine targets from the PDB, filtering to proteins with no known interaction in the PDB. The model has never seen this particular protein, or a similar protein, bound to another protein, so there is no way the model can simply tweak something from its training set and imitate a particular interaction. And so we took those nine proteins.
We worked with a CRO, Adaptyv, and tested 15 miniproteins and 15 nanobodies against each of them. The very cool thing we saw was that on two thirds of those targets, from those 15 designs, we got nanomolar binders. Nanomolar is, roughly speaking, a measure of how strong the interaction is; a nanomolar binder has approximately the binding strength you need for a therapeutic.

Yeah. So maybe switching directions a bit: Boltz Lab was just announced this week, or was it last week? This is your first product, if you want to call it that. Can you talk about what Boltz Lab is and what you hope people take away from it?

RJ [01:02:44]: As we mentioned at the very beginning, the goal with the product has been to address what the models don't do on their own, and there are largely two categories there. Actually, I'll split it in three. The first: it's one thing to predict a single interaction, for example a single structure; it's another to very effectively search a design space to produce something of value. What we found building this product is that there are a lot of steps involved, and a real need to accompany the user through them. One of those steps is the creation of the target itself: how do we make sure the model has a good enough understanding of the target so we can design something? There are all sorts of tricks you can use to improve a particular structure prediction. So that's the first stage. Then there's the stage of designing and searching the space efficiently.
For something like BoltzGen, you design many things and then you rank them. For small molecules the process is a bit more complicated: we also need to make sure the molecules are synthesizable. The way we do that is with a generative model that learns to use appropriate building blocks, so that it designs within a space we know is synthesizable. So there's a whole pipeline of different models involved in designing a molecule. That's the first thing; we call them agents. We have a protein agent and a small molecule design agent, and that's really at the core of what powers the Boltz Lab platform.

Brandon [01:04:22]: So these agents, are they a language model wrapper, or are they just your models and you're calling them agents? Because they sort of perform a function on your behalf.

RJ [01:04:33]: They're more of a recipe, if you wish. I think we use that term because of the complex pipelining and automation that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. We need to be able to do this at very large scale for any one group running a design campaign. Say you're designing a hundred thousand possible candidates to find the good one: that is a very large amount of compute. For small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. So ideally you want to do that in parallel, otherwise it's going to take you weeks.
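The compute math RJ is gesturing at is simple: total GPU-time for a campaign is fixed, so adding GPUs buys wall-clock time, not lower cost. A toy calculation with assumed numbers (3 seconds per small-molecule design; the hourly GPU price is a made-up placeholder):

```python
N_CANDIDATES = 100_000        # the campaign size RJ mentions
SECONDS_PER_DESIGN = 3.0      # assumed; "a few seconds per design"
GPU_HOUR_PRICE = 2.0          # hypothetical $/GPU-hour

# Total GPU-hours is fixed by the workload, not by how many GPUs you use.
total_gpu_hours = N_CANDIDATES * SECONDS_PER_DESIGN / 3600
total_cost = total_gpu_hours * GPU_HOUR_PRICE  # independent of parallelism

for n_gpus in (1, 100, 10_000):
    wall_clock_h = total_gpu_hours / n_gpus
    print(f"{n_gpus:>6} GPUs: {wall_clock_h:9.4f} h wall clock, ${total_cost:.2f} total")
```

Under these assumptions one GPU needs roughly 83 GPU-hours (about three and a half days; with the longer per-design times for proteins, RJ's "weeks" follows), while 10,000 GPUs finish in about 30 seconds at the same total cost, which is the point of amortizing a shared fleet across users.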
So we've put a lot of effort into our ability to run a GPU fleet that allows any one user to do this kind of large parallel search.

Brandon [01:05:23]: So you're amortizing the cost over your users.

RJ [01:05:27]: Exactly. And to some degree, whether you use 10,000 GPUs for a minute or one GPU for God knows how long, it's the same cost, so you might as well parallelize if you can. A lot of work has gone into that, and into making it very robust, so that we can have a lot of people on the platform doing that at the same time. The third part is the interface, and the interface comes in two shapes. One is an API, which is really suited for companies that want to integrate these pipelines, these agents.

RJ [01:06:01]: We're already partnering with a few distributors that are going to integrate our API. The second shape is the user interface, and we've put a lot of thought into that too. This is what I meant earlier by broadening the audience. We've built a lot of interesting features into it, for example for collaboration: when you have multiple medicinal chemists going through the results and trying to pick out which molecules to test in the lab, it's powerful for each of them to provide their own ranking and then do consensus building. So there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, that we try to solve with that part of the platform.
So Boltz Lab is a combination of these three objectives in one cohesive platform. Who is it accessible to? Everyone. You do need to request access today; we're still ramping up usage, but anyone can request access. If you're an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you're a startup or biotech, you can also reach out, and we'll typically hop on a call to understand what you're trying to do and provide a lot of free credit to get started. And with larger companies we can deploy the platform in a more secure environment; those are more customized deals we make with partners. That's the ethos of Boltz: this idea of serving everyone, not just going after the really large enterprises. It starts with the open source, but it's also a key design principle of the product itself.

Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, the cost of a token has gone down by a factor of a thousand or so over the last three years, right? Is it possible that you can exploit economies of scale in infrastructure, so that it's cheaper to run these things on your platform than for any one person to roll their own system?

RJ [01:08:08]: A hundred percent. I mean, we're already there: running Boltz on our platform, especially at large scale, is considerably cheaper than it would take anyone to stand up the open source model and run it themselves. And on top of the infrastructure, one of the things we've been working on is accelerating the models.
Our small molecule screening pipeline is 10x faster on Boltz Lab than in the open source. That's also part of building a product, something that scales really well. We really wanted to get to a point where we could keep prices low enough that using Boltz through our platform is a no-brainer.

Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, say, monomeric proteins where you have co-evolution data. But the whole point of this is to design something that doesn't have co-evolution data, something really novel. So you're leaving the domain where you know you're good. How do you validate that?

RJ [01:09:22]: There's obviously a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab and test: with method A versus method B, how much better is my hit rate? How much stronger are my binders? It's not just about hit rate; it's also about how good the binders are. There's really no way around that. We've really ramped up the amount of experimental validation we do, so that we track progress as scientifically soundly as possible.

Gabriel [01:10:00]: Yeah, and one thing that's unique about us, and maybe about companies like us, is that we're not working on just a couple of therapeutic pipelines, where our validation would be focused on those.
When we do an experimental validation, we try to test across tens of targets, so that on the one hand we get a much more statistically significant result, and it really allows us to make progress on the methodological side without being steered by overfitting on any one particular system. And of course we choose, you know, w
This week on Fuel for the Sole, Tim Noakes has entered the chat. We break down a new research study suggesting that just 10 grams of carbs per hour may be sufficient, and that carb loading might be unnecessary after all. Plus, we tackle listener questions on what could be driving high liver enzymes, what might be sabotaging your recovery, and why bloating shows up mid-run. Want to be featured on the show? Email us (written or an audio file!) at fuelforthesolepodcast@gmail.com. This episode is fueled by ASICS and RNWY! Head over to ASICS.com and sign up for a OneASICS account. It's completely free, and when you sign up you will receive 10% off your first purchase. You also gain access to exclusive colorways on ASICS.com, free standard shipping, special birthday month discounts, and more. Try the new Salty Carbs at https://rnwy.life/ and use code FEATHERS15 for 15% off your purchase. Disclaimer: This content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition.
Texas wrestling is on the verge of a major shift, and Coach Grant Leath is right in the middle of it. In Episode 435 of Airey Bros Radio, we go belly-to-belly with Coach Grant Leath, head wrestling coach at Tarleton State and a key part of the Texas Collegiate Wrestling Foundation, to talk about building a program with national ambition, and what it could mean for the first NCAA Division I wrestling program in Texas history.

We get into Grant's Missouri roots, how injuries shaped him as a coach, the culture of "Tiger Style" (and why he's adjusted training to protect athletes who are too motivated), and what it's really like fundraising from scratch, including the wild idea of a bull-riding fundraiser.

We also spotlight what doesn't get enough love: the NCWA. Grant explains why the NCWA is one of the biggest opportunity-makers in wrestling, how it can function as a pipeline amid roster caps, and why it may be the sport's best "insurance policy" in uncertain NCAA times.

Plus: Grant's health-conscious dress shoe brand built for recovery ("a running shoe disguised as a dress shoe"), recruiting angles, tuition hacks for out-of-state athletes, and why Texas is still massively under-tapped.

Key topics: Tarleton State Wrestling, Texas D1 wrestling, NCWA, Tiger Style, Rob Cole, Stanford Wrestling, Missouri Wrestling, recruiting, fundraising, roster caps, NCAA uncertainty, in-state tuition waivers, Texas wrestling growth, Shreveport NCWA nationals.

Show Notes With Timestamps
0:00 ABR mission: spotlighting JUCO/NAIA/D2/D3/NCWA programs + getting Jersey kids everywhere
2:26 Full ABR intro + guest intro: Coach Grant Leath (Tarleton State) + Texas Collegiate Wrestling Foundation
5:20 Recruiting plugs + where to learn more (TSU wrestling site, updates, newsletter)
6:11 Grant plugs his product: health-conscious dress shoes (recovery-focused), copper threading + Hoka-style outsole
8:33 ABR pitch: "I'll run a marathon in your dress shoes" cross-promo (Leadville + 26.2 talk)
10:23 Grant's origin story: tiny-town Missouri kid hears "wrestling" and thinks WWE
11:50 First practice moment: coach tells dad "he's a natural," and Grant can't quit after that
13:13 Coaching starts in college: injuries, surgeries, and coaching teammates while sidelined
14:47 Career impact: major injuries, peak ranking, nationals finish, and the hard stop
16:07 The bitter taste + leaving wrestling… briefly (Florida job)
17:08 Lessons from injuries: film study, mental reps, never guaranteed anything, gratitude for mat time
18:50 Training philosophy shift: balancing "one more" with recovery for intrinsically motivated athletes
21:20 Breaking down Tiger Style: identity, daily choices, culture pillars, "one more" mentality
23:03 ABR adopting "one more" into coaching/PE culture
24:03 Path to Stanford: missing wrestling, Tampa Jesuit help, Stanford storyline + Rob Cole connection
25:54 The legendary Rob Cole reply: "not qualified" + equipment room joke, then the real invite
27:39 Driving 44 hours to the Bay Area + first real coaching break
30:47 Staff change, being let go, and the Texas D1 opportunity emerging
33:47 Fundraising rumor confirmed: bull-riding fundraiser idea (Tarleton rodeo culture)
36:20 Comedy fundraiser: Grant does 10 minutes opening for Greg Warren (sold-out event)
41:18 Reality check: fundraising without alumni, a room, or a built-in base; "what am I fundraising for?"
42:39 D1 timeline tease: conference acceptance + "major announcement soon" (careful not to overpromise)
43:54 Season update: roster changes, ranked progress, D1 opponents, tournament placers, NCWA ranking
46:22 Recruiting pitch: being first in Texas, trailblazer mindset, "do what Little Rock did, faster"
49:34 Mike Moyer/NWCA goal: a D1 program in every state + Texas impact
53:09 Why the NCWA matters: opportunities, roster caps pipeline, growth, and wrestling's safety net
59:11 NCWA gripe: Club Cup duals restriction conflicts with the "opportunity" mission
1:00:52 Tarleton recruiting: in-state tuition waiver for out-of-state athletes (GPA/SAT/class rank)
1:03:00 Location + campus growth + Texas A&M system resources
1:03:43 Tarleton majors: education, nursing, engineering, ag + job placement stats
1:06:00 Roster makeup: mostly Texas kids + untapped recruiting market
1:11:05 Texas wrestling participation growth + number of programs vs public schools
1:13:13 Tarleton as a new D1 athletic department + campus culture (clean campus, "don't walk on the grass")
1:15:03 "60% female population" note for the single wrestlers
Road vs Trail: The Great Running Debate | Episode 71

In Episode 71 of the Women's Running Collective podcast, Hayles and Jussie dive into one of the biggest questions in the running world: road running vs trail running. From weekly training recaps to the realities of balancing running with hormones, family life and busy schedules, this episode is relatable, honest and full of laughs. The girls break down the pros and cons of trail running and road running, chatting about everything from terrain, gear and accessibility to mindset, personality types and mental engagement. Whether you love the calm of the trails or the simplicity of the road, this episode will have you nodding along, and possibly questioning your running identity. They also touch on the challenges and risks of trail running, the sense of competition and community it brings, and whether trail running really is "calm" or just a little bit chaotic. The episode wraps up with an exciting ASICS competition announcement, community updates and Hayles' weekly recommendations. A must-listen for runners of all levels, whether you're trail curious, road loyal, or somewhere in between.

00:00 Welcome to the Women's Running Collective
00:27 Weekend running recap
01:00 Training highlights and challenges
01:23 Upcoming events and personal updates
04:16 Road running vs trail running: the great debate
06:38 Benefits of trail running
07:19 Advantages of road running
11:40 Running personality types
19:10 Trail running: calm or chaotic?
20:10 Risks and challenges of trail running
20:45 Competition, community and connection on the trails
24:19 Trail running gear, fashion and practicality
26:37 Accessibility and convenience: road vs trail
29:49 ASICS competition, recommendations and wrap-up

Hayles' Recommendations This Week
✨ Krumbled Salted Caramel Beauty Bites https://krumbledfoods.com/products/salted-caramel-beauty-bites
✨ Edenvale Non-Alcoholic Sparkling Cuvée https://www.woolworths.com.au/shop/productdetails/308942/edenvale-non-alcoholic-wine-sparkling-cuvee
In this episode of the Functional Tennis Podcast, I'm joined by Yuhi Tanigaki, Product Manager for Tennis Footwear at ASICS, to go behind the scenes of the new Solution Speed FF4. Yuhi plays a central role in shaping ASICS tennis shoes, working closely with designers, researchers, professional athletes, and recreational players to turn feedback into real on-court performance. We cover what actually goes into building a modern tennis shoe and how small changes can make a big difference.

In this episode, we discuss:
How long it really takes to develop a tennis shoe from first idea to retail
What stayed consistent from the original Solution Speed and what has evolved
The key updates in the Solution Speed FF4 and how they affect feel and movement
How ASICS balances speed, comfort, and durability without adding weight
The role of athlete feedback, including insights from Belinda Bencic
Differences between what professional players and recreational players value
Why ASICS sees the FF4 as an evolution rather than a complete redesign
Which type of player the Solution Speed FF4 is built for

This episode is a deep dive into tennis footwear design and a rare look at how performance products are actually created behind the scenes.
Quantum Blockchain Technologies PLC (AIM:QBT) CEO Francesco Gardin talked with Proactive's Stephen Gunnion about the company's participation in the recent Nashville Energy & Mining Summit (NEMS26) at Bitcoin Park in Nashville, a key networking hub for the Bitcoin community. Gardin described the conference as “extremely intensive” and valuable for meeting key industry players, including three ASIC manufacturers with whom the company has signed NDAs. He explained that Quantum Blockchain is progressing well in its relationships with these partners, particularly around its development of Method C, a neural network-based solution requiring ASIC-specific training. “The training is very ASICs oriented, specific ASICs oriented; and that's not something you do overnight,” Gardin noted. A major development for the industry, Gardin said, is the emergence of an open-source stack, including hashing board designs, control board software, and even mining pools. This marks a significant departure from the market dominance of Chinese manufacturers, who have historically restricted access to both ASIC specs and software. “This is for us, really a game changer,” Gardin said, pointing to the availability of chips and the open ecosystem as an opportunity for broader industry participation. He also clarified confusion around QBT's access to source code, explaining that access is being granted incrementally in alignment with the company's agreed path with its partners. For more updates from Quantum Blockchain Technologies and other innovative firms, visit Proactive's YouTube channel. Don't forget to like the video, subscribe to the channel, and enable notifications for future content. #QuantumBlockchain #FrancescoGardin #BitcoinMining #ASICs #MethodC #OpenSourceMining #CryptoTechnology #BlockchainInnovation #MiningHardware #BTCMining #ProactiveInvestors #TechUpdate
In this program I talk about the ASICS Trabuco Max 5 and my first impressions after taking the shoes out of the box, taking measurements and weights, and comparing them with the previous version. I explain what changes, what stays the same, and, most importantly, whether the changes it introduces are worth it and, in my opinion, whether they are the right calls. I also reflect on the option of choosing the previous version if you can find a good deal on it. Another important detail is understanding how the ASICS catalog shakes out: that is, if the Trabuco Max 5 and the Trabuco 14 end up converging this much, there would be no product left to cover the gap between them and the Fujilite 6. Contact: juan@ellaboratoriodejuan.com
I'm excited to share this conversation with Drew Hunter. I've wanted to have Drew on the show for a long time, and this episode did not disappoint. Drew runs professionally for ASICS and lives in Boulder, Colorado, but his path in the sport has been anything but typical. He went pro straight out of high school, turning down a full scholarship to Oregon, and has now spent nearly a decade navigating the ups and downs of professional running before making a sponsor change to ASICS last year. In this conversation, we talk about what it was really like going pro so young, how different that decision might look today with NIL, and how his perspective has shifted ten years into his career. Drew opens up about setting goals based on what genuinely excites him, why road racing has become such a big focus, and how he's thinking about longevity in the sport as both an athlete and a dad. We also talk about family life, faith, and the importance of community, especially as he and his wife prepare to welcome their third child. What I appreciated most about this episode is how grounded Drew is in who he is now. He reflects honestly on early loneliness, big expectations, and how his definition of success has changed over time. This conversation goes well beyond race results and gets into what it looks like to build a meaningful life alongside big athletic goals. I really enjoyed this one, and I hope you do too. If you enjoy the episode, please take a moment to leave a rating and review. It's one of the best ways to help new listeners find the show. 
Topics Discussed:
Drew's indoor season plans, upcoming 3K, and building toward Millrose
How he's thinking about 2026 goals and the World Road Running Championships in Copenhagen
What excites him when he sets goals and why he likes mixing in road racing
Why road racing feels like the future of the sport and how it brings fans into it
Becoming a dad young, building a family, and how that shifts his mindset around running
Community, church, and why having people nearby matters for parenting and life
Converting to Catholicism and how he explored faith through reading and learning
How he met his wife and how COVID changed the timeline
Choosing to go pro out of high school, how NIL changes that decision now, and the realities of pro life
Loneliness early in his pro career, moving to Boulder, and building Tin Man Elite and his training setup with his parents

Support our Sponsors:
Aletheia Run lets you see what your body is actually doing with every step by using a lightweight sensor that creates a unique force portrait of your movement. It gives personalized feedback, targeted drills, and science-backed insights to improve performance and help prevent injuries, bringing the running lab right to your everyday training.
Noogs: Noogs Nutrition is my go-to for fun, flavorful fuel with carbs and electrolytes, with flavors like Lemon Zinger, Electric Watermelon, and Blue Raspberry, plus caffeinated options too. Use code "another15" for 15% off your first order.
Amazfit Smartwatches – A wellness and recovery brand offering targeted supplements designed to support runners with energy, strength, and sleep. Use code "ANOTHER" at checkout!
Anta Sports, the giant Chinese sports brand, has bought a stake in Puma; Brigid Kosgei and other Kenyan athletes will represent Turkey at the Los Angeles 2028 Games; and today I'm off to run the Cascais Half Marathon in Portugal, a race I'll use more or less as preparation for the Seville Marathon, which I'm running in partnership with ASICS. Our links - https://linktr.ee/corridanoar
Corrida no Ar News is produced daily and posted around 6 a.m.
This week on Fuel for the Sole, we briefly touch on the new food pyramid before diving into listener questions, including the risks of overdoing sodium and hypernatremia, recommendations for iron supplementation, and whether the off-season is better spent focusing on muscle building or weight loss. Want to be featured on the show? Email us (written or an audio file!) at fuelforthesolepodcast@gmail.com. This episode is fueled by ASICS and RNWY! Head over to ASICS.com and sign up for a OneASICS account. It's completely free, and when you sign up you will receive 10% off your first purchase. You also gain access to exclusive colorways on ASICS.com, free standard shipping, special birthday month discounts, and more. Try the new Salty Carbs at https://rnwy.life/ and use code FEATHERS15 for 15% off your purchase. Disclaimer: This content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition.
Episode 144Happy New Year! This is one of my favorite episodes of the year — for the fourth time, Nathan Benaich and I did our yearly roundup of AI news and advancements, including selections from this year's State of AI Report.If you've stuck around and continue to listen, I'm really thankful you're here. I love hearing from you.You can find Nathan and Air Street Press here on Substack and on Twitter, LinkedIn, and his personal site. Check out his writing at press.airstreet.com.Find me on Twitter (or LinkedIn if you want…) for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.Outline* (00:00) Intro* (00:44) Air Street Capital and Nathan world* Nathan's path from cancer research and bioinformatics to AI investing* The “evergreen thesis” of AI from niche to ubiquitous* Portfolio highlights: Eleven Labs, Synthesia, Crusoe* (03:44) Geographic flexibility: Europe vs. the US* Why SF isn't always the best place for original decisions* Industry diversity in New York vs. 
San Francisco* The Munich Security Conference and Europe's defense pivot* Playing macro games from a European vantage point* (07:55) VC investment styles and the “solo GP” approach* Taste as the determinant of investments* SF as a momentum game with small information asymmetry* Portfolio diversity: defense (Delian), embodied AI (Syriact), protein engineering* Finding entrepreneurs who “can't do anything else”* (10:44) State of AI progress in 2025* Momentous progress in writing, research, computer use, image, and video* We're in the “instruction manual” phase* The scale of investment: private markets, public markets, and nation states* (13:21) Range of outcomes and what “going bad” looks like* Today's systems are genuinely useful—worst case is a valuation problem* Financialization of AI buildouts and GPUs* (14:55) DeepSeek and China closing the capability gap* Seven-month lag analysis (Epoch AI)* Benchmark skepticism and consumer preferences (”Coca-Cola vs. Pepsi”)* Hedonic adaptation: humans reset expectations extremely quickly* Bifurcation of model companies toward specific product bets* (18:29) Export controls and the “evolutionary pressure” argument* Selective pressure breeds innovation* Chinese companies rushing to public markets (Minimax, ZAI)* (21:30) Reasoning models and test-time compute* Chain of thought faithfulness questions* Monitorability tax: does observability reduce quality?* User confusion about when models should “think”* AI for science: literature agents, hypothesis generation* (23:53) Chain of thought interpretability and safety* Anthropomorphization concerns* Alignment faking and self-preservation behaviors* Cybersecurity as a bigger risk than existential risk* Models as payloads injected into critical systems* (27:26) Commercial traction and AI adoption data* Ramp data: 44% of US businesses paying for AI (up from 5% in early 2023)* Average contract values up to $530K from $39K* State of AI survey: 92% report productivity gains* The “slow 
takeoff” consensus and human inertia* Use cases: meeting notes, content generation, brainstorming, coding, financial analysis* (32:53) The industrial era of AI* Stargate and xAI data centers* Energy infrastructure: gas turbines and grid investment* Labs need to own models, data, compute, and power* Poolside's approach to owning infrastructure* (35:40) Venture capital in the age of massive GPU capex* The GP lives in the present, the entrepreneur in the future, the LP in the past* Generality vs. specialism narratives* “Two or 20”: management fees vs. carried interest* Scaling funds to match entrepreneur ambitions* (40:10) NVIDIA challengers and returns analysis* Chinese challengers: 6x return vs. 26x on NVIDIA* US challengers: 2x return vs. 12x on NVIDIA* Groq acquired for $20B; SambaNova markdown to $1.6B* “The tide is lifting all boats”—demand exceeds supply* (44:06) The hardware lottery and architecture convergence* Transformer dominance and custom ASICs making a comeback* NVIDIA still 90–95% of published AI research* (45:49) AI regulation: Trump agenda and the EU AI Act* Domain-specific regulators vs. 
blanket AI policy* State-level experimentation creates stochasticity* EU AI Act: “born before GPT-4, takes effect in a world shaped by GPT-7”* Only three EU member states compliant by late 2025* (50:14) Sovereign AI: what it really means* True sovereignty requires energy, compute, data, talent, chip design, and manufacturing* The US is sovereign; the UK by itself is not* Form alliances or become world-class at one level of the stack* ASML and the Netherlands as an example* (52:33) Open weight safety and containment* Three paths: model-based safeguards, scaffolding/ecosystem, procedural/governance* “Pandora's box is open”—containment on distribution, not weights* Leak risk: the most vulnerable link is often human* Developer–policymaker communication and regulator upskilling* (55:43) China's AI safety approach* Matt Sheehan's work on Chinese AI regulation* Safety summits and China's participation* New Chinese policies: minor modes, mental health intervention, data governance* UK's rebrand from “safety” to “security” institutes* (58:34) Prior predictions and patterns* Hits on regulatory/political areas; misses on semiconductor consolidation, AI video games* (59:43) 2026 Predictions* A Chinese lab overtaking US on frontier (likely ZAI or DeepSeek, on scientific reasoning)* Data center NIMBYism influencing midterm politics* (01:01:01) ClosingLinks and ResourcesNathan / Air Street Capital* Air Street Capital* State of AI Report 2025* Air Street Press — essays, analysis, and the Guide to AI newsletter* Nathan on Substack* Nathan on Twitter/X* Nathan on LinkedInFrom Air Street Press (mentioned in episode)* Is the EU AI Act Actually Useful? 
— by Max Cutler and Nathan Benaich* China Has No Place at the UK AI Safety Summit (2023) — by Alex Chalmers and Nathan BenaichResearch & Analysis* Epoch AI: Chinese AI Models Lag US by 7 Months — the analysis referenced on the US-China capability gap* Sara Hooker: The Hardware Lottery — the essay on how hardware determines which research ideas succeed* Matt Sheehan: China's AI Regulations and How They Get Made — Carnegie EndowmentCompanies Mentioned* Eleven Labs — AI voice synthesis (Air Street portfolio)* Synthesia — AI video generation (Air Street portfolio)* Crusoe — clean compute infrastructure (Air Street portfolio)* Poolside — AI for code (Air Street portfolio)* DeepSeek — Chinese AI lab* Minimax — Chinese AI company* ASML — semiconductor equipmentOther Resources* Search Engine Podcast: Data Centers (Part 1 & 2) — PJ Vogt's two-part series on XAI data centers and the AI financing boom* RAAIS Foundation — Nathan's AI research and education charity Get full access to The Gradient at thegradientpub.substack.com/subscribe
The running-shoe boom reached Mexico right after the hype wave around the usual sneaker brands collapsed: as resale margins on limited releases dried up, the lack of stories, the discomfort, and the lopsided release dynamics pushed enthusiasts to look for other options. It was no accident; the broader shift also reflects a growing search for pairs that look different and go viral in TikTok trends, and simply the work that brands like ASICS, Salomon, Hoka, Brooks, Saucony, and Mizuno are doing around the world. Either way, Mexico City can't escape this movement, and there is a spot that has spent years cultivating a community drawn to these models, one that didn't arrive via trends but through a genuine search for something different, and Jonatan González, A.K.A. Pleser Lab, was there at the right moment to lead them down that path.
We're just out of the recent earnings season, and we've seen a wild range of results and some interesting implications. Melissa Otto, CFA, head of S&P Global's Visible Alpha research team, returns to discuss what the markets have been saying and what she makes of the data with host Eric Hanselman. Macroeconomic effects are having some impact, as consumer sentiment diverges across the top and the bottom of the economy. In technology, there are mixed feelings about AI as the hunt continues for use cases with decisive revenue returns. The hyperscalers are continuing to invest capital at staggering rates and, so far, the markets have mostly approved. AI supply chain companies, like NVIDIA, are generally moving forward with solid results. The larger question is where the AI boom is headed. There are constraints not only in supply chains for data centers, but also in energy supply. Agentic AI has a lot of promise, but needs to prove out its value and earn trust, as providers look to improve efficiency with more targeted silicon, like ASICs, to stand up alongside the forests of GPUs being deployed. As investors hunt for improved returns, they may be rotating to international opportunities and small cap companies that might be able to see faster returns from AI deployments.
More S&P Global Content:
Next in Tech podcast: Agentic Customer Experience
Nvidia GTC in DC
Blackwell expectations increase
Otto: Markets are grappling with how to price AI-related stocks
Next in Tech podcast, Episode 239: AI Infrastructure
For S&P Global Subscribers:
A view of peaks and plateaus
AI to lead tech spending in 2026, but orgs losing track of energy efficiency – Highlights from Macroeconomic Outlook, SME Tech Trends
Hyperscaler earnings quarterly: Alphabet, Amazon and Microsoft charge ahead on AI capacity buildouts
Agents are already driving workplace impact and agentic AI adoption – Highlights from VotE: AI & Machine Learning
Big Picture 2026 AI Outlook: Unleashing agentic potential
Credits:
Host/Author: Eric Hanselman
Guest: Melissa Otto, CFA
Producer/Editor: Feranmi Adeoshun
Published With Assistance From: Sophie Carr, Kyra Smith
We're heading into the new year with a new round of Buy or Sell, one of our favorite podcast games. If you don't know the drill, we force ourselves to decide whether we buy or sell an idea, shoe, treatment, etc. Today, Nathan, David, and Matt debate the Saucony Endorphin Azura (Shift 3 replacement??), the venerable ASICS Nimbus, whether runner's high really exists, and much more. Want your idea on the next edition of Buy or Sell? Email us at doctorsofrunning@gmail.com! We're thrilled to introduce Rabbit as a presenting partner! You can use code DORJAN10 to get 10% off your entire order of $50.00 or more. Note that the code is limited to one use per customer and can't be combined with other discounts. The code is active from the 1st of every month to the last day at 11:59 PM PST, but don't worry because we'll be bringing you a new code every month. Shop now at https://www.runinrabbit.com/.
Get your DOR Merch: https://doctors-of-running.myspreadshop.com/
Get 20% off your first order from Skratch with code: DOCTORSOFRUNNING! https://www.skratchlabs.com
Chapters
0:00 - Intro
4:26 - In for Testing: Powered by Skratch Labs
14:20 - Buy/sell: Saucony Endorphin Azura
24:34 - The Azura as a Shift 3 replacement
29:18 - ASICS Nimbus 28 is better than the 27
35:54 - The Saucony Endorphin Pro 5 is a super shoe
44:48 - Running a marathon or ultra in 2026
53:12 - Running doubles
1:01:50 - Runner's high is real
1:11:25 - Wrap-up
This week on Fuel for the Sole, we're diving into the predicted nutrition trends for 2026 — from the rise of high-fiber and high-protein foods to a renewed focus on gut health and minimally processed eating. We also tackle listener questions on ferritin and altitude, the buzzy new supplement Nomio, and whether you really need to train with the exact same gel you plan to use on race day.Want to be featured on the show? Email us (written or an audio file!) at fuelforthesolepodcast@gmail.com. This episode is fueled by ASICS and RNWY!Head over to ASICS.com and sign up for a OneASICS account. It's completely free and when you sign up you will receive 10% off your first purchase. You also gain access to exclusive colorways on ASICS.com, free standard shipping, special birthday month discounts and more.Try the new Salty Carbs at https://rnwy.life/ and use code FEATHERS15 for 15% off your purchase. Disclaimer: This content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition.
Is Stone Island actually worth the price? What lifestyle requires techwear? Is Timothee Chalamet overrated? Why does Lady Gaga feel less authentic than Charli XCX? What is gooning, and how did it end up in Harper's Bazaar? This week, Sol and Michael sit down with Shane O'Neill, writer for The Washington Post's twice-weekly pop culture newsletter "Seriously?," to unpack the weirdest corners of the internet and how they intersect with fashion, music, and modern masculinity.The trio explore the techwear vs. lifestyle debate, why Stone Island might be "a little more serious" than necessary, the complicated appeal of Timothee Chalamet vs. Pedro Pascal, and whether Madonna's Celebration Tour was genius or needed a creative director. Shane defends his take on why Charli XCX feels more genuine than Lady Gaga, the overlooked brilliance of the Blue Man Group's first album, and how his job at the Washington Post lets him explore everything from competitive Excel spreadsheet championships to extremely niche fetish communities on TikTok.The conversation goes off the deep end: gooning culture and how Shane learned about it years before it hit mainstream media, why Jeremy Scott is "the Taco Bell of fashion" (complimentary), the NFL's official stylist whose college thesis connected SpongeBob SquarePants to the Arab Spring uprisings (not a joke), and why most luxury fashion houses are actually perfume or shoe companies pretending to sell clothes. 
They also discuss the Ice Spice SpongeBob movie premiere outfit controversy, why people pay to stand motionless at techno clubs, and Warped Tour nostalgia. Other topics include: Devoa, the Saint Laurent SS16 Surf Sound collection, ASICS x Comme des Garçons sneakers, the Oura Ring and Palantir data concerns, extreme fitness culture and dissociation, Russian seal best friends named Kroshik and Shlissik, and competitive Excel spreadsheet merch. We hope you enjoy! Lots of love! Sol
---
Episode Tags: Shane O'Neill, Washington Post Seriously, fashion podcast 2026, gooning explained, gooning Harper's Bazaar, Jeremy Scott fashion, menswear podcast, streetwear podcast, archive fashion, internet subcultures, TikTok algorithm, competitive Excel spreadsheets, niche communities, Oura Ring review, Ice Spice stylist, men's fashion trends 2026, Kyle Smith NFL stylist, Alexander McQueen shoes
Sol Thompson and Michael Smith explore the world and subcultures of fashion, interviewing creators, personalities, and industry insiders to highlight the new vanguard of the fashion world. Subscribe for weekly uploads of the podcast, and don't forget to follow us on our social channels for additional content, and join our discord to access what we've dubbed “the happiest place in fashion”. Message us with Business Inquiries at pairofkingspod@gmail.com
Subscribe to get early access to podcasts and videos, and participate in exclusive giveaways for $4 a month
Links: Instagram TikTok Twitter/X Sol's Substack (One Size Fits All) Sol's Instagram Michael's Instagram Michael's TikTok
In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Peter Schmidt Nielsen, who is building FPGA-accelerated servers at Saturn Data. The conversation explores why servers need FPGAs, how these field-programmable gate arrays work as "IO expanders" for massive memory bandwidth, and why they're particularly well-suited for vector database and search applications. Peter breaks down the technical realities of FPGAs - including why they "really suck" in many ways compared to GPUs and CPUs - while explaining how his company is leveraging them to provide terabyte-per-second bandwidth to 1.3 petabytes of flash storage. The discussion ranges from distributed systems challenges and the CAP theorem to the hardware-software relationship in modern computing, offering insights into both the philosophical aspects of search technology and the nuts-and-bolts engineering of memory controllers and routing fabrics. For more information about Peter's work, you can reach him on Twitter at @PTRSCHMDTNLSN or find his website at saturndata.com.
Timestamps
00:00 Introduction to FPGAs and Their Role in Servers
02:47 Understanding FPGA Limitations and Use Cases
05:55 Exploring Different Types of Servers
08:47 The Importance of Memory and Bandwidth
11:52 Philosophical Insights on Search and Access Patterns
14:50 The Relationship Between Hardware and Search Queries
17:45 Challenges of Distributed Systems
20:47 The CAP Theorem and Its Implications
23:52 The Evolution of Technology and Knowledge Management
26:59 FPGAs as IO Expanders
29:35 The Trade-offs of FPGAs vs. ASICs and GPUs
32:55 The Future of AI Applications with FPGAs
35:51 Exciting Developments in Hardware and Business
Key Insights
1. FPGAs are fundamentally "crappy ASICs" with serious limitations - Despite being programmable hardware, FPGAs perform far worse than general-purpose alternatives in most cases. A $100,000 high-end FPGA might only match the memory bandwidth of a $600 gaming GPU. They're only valuable for specific niches like ultra-low latency applications or scenarios requiring massive parallel I/O operations, making them unsuitable for most computational workloads where CPUs and GPUs excel.
2. The real value of FPGAs lies in I/O expansion, not computation - Rather than using FPGAs for their processing power, Saturn Data leverages them primarily as cost-effective ways to access massive amounts of DRAM controllers and NVMe interfaces. Their server design puts 200 FPGAs in a 2U enclosure with 1.3 petabytes of flash storage and terabyte-per-second read bandwidth, essentially using FPGAs as sophisticated I/O expanders.
3. Access patterns determine hardware performance more than raw specs - The way applications access data fundamentally determines whether specialized hardware will provide benefits. Applications that do sparse reads across massive datasets (like vector databases) benefit from Saturn Data's architecture, while those requiring dense computation or frequent inter-node communication are better served by traditional hardware. Understanding these patterns is crucial for matching workloads to appropriate hardware.
4. Distributed systems complexity stems from failure tolerance requirements - The difficulty of distributed systems isn't inherent but depends on what failures you need to tolerate. Simple approaches that restart on any failure are easy but unreliable, while Byzantine fault tolerance (like Bitcoin) is extremely complex. Most practical systems, including banks, find middle ground by accepting occasional unavailability rather than trying to achieve perfect consistency, availability, and partition tolerance simultaneously.
5. Hardware specialization follows predictable cycles of generalization and re-specialization - Computing hardware consistently follows "Makimoto's Wave" - specialized hardware becomes more general over time, then gets leapfrogged by new specialized solutions. CPUs became general-purpose, GPUs evolved from fixed graphics pipelines to programmable compute, and now companies like Etched are creating transformer-specific ASICs. This cycle repeats as each generation adds programmability until someone strips it away for performance gains.
6. Memory bottlenecks are reshaping the hardware landscape - The AI boom has created severe memory shortages, doubling costs for DRAM components overnight. This affects not just GPU availability but creates opportunities for alternative architectures. When everyone faces higher memory costs, the relative premium for specialized solutions like FPGA-based systems becomes more attractive, potentially shifting the competitive landscape for memory-intensive applications.
7. Search applications represent ideal FPGA use cases due to their sparse access patterns - Vector databases and search workloads are particularly well-suited to FPGA acceleration because they involve searching through massive datasets with sparse access patterns rather than dense computation. These applications can effectively utilize the high bandwidth to flash storage and parallel I/O capabilities that FPGAs provide, making them natural early adopters for this type of specialized hardware architecture.
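To put the enclosure figures quoted above in perspective (200 FPGAs, 1.3 petabytes of flash, terabyte-per-second read bandwidth), here is a hedged back-of-the-envelope sketch; the aggregate numbers come from the episode summary, while the per-device and scan-time figures are simply derived from them, not stated on the show.

```python
# Back-of-the-envelope math for the Saturn Data enclosure figures quoted
# in the episode summary. Per-FPGA values are derived, not from the show.

NUM_FPGAS = 200            # FPGAs per 2U enclosure (from the summary)
TOTAL_FLASH_PB = 1.3       # petabytes of flash (from the summary)
TOTAL_READ_BW_TBPS = 1.0   # aggregate read bandwidth, TB/s (from the summary)

# Each FPGA's share of the flash and of the read bandwidth.
flash_per_fpga_tb = TOTAL_FLASH_PB * 1000 / NUM_FPGAS      # TB per FPGA
bw_per_fpga_gbps = TOTAL_READ_BW_TBPS * 1000 / NUM_FPGAS   # GB/s per FPGA

# Time to stream the entire dataset once at full aggregate bandwidth.
scan_seconds = TOTAL_FLASH_PB * 1000 / TOTAL_READ_BW_TBPS

print(f"Flash per FPGA:     {flash_per_fpga_tb:.1f} TB")
print(f"Bandwidth per FPGA: {bw_per_fpga_gbps:.1f} GB/s")
print(f"Full scan time:     {scan_seconds:.0f} s (~{scan_seconds / 60:.0f} min)")
```

The derived ~5 GB/s per FPGA is modest on its own; the design's point, as described in the episode, is the aggregate parallel I/O across all 200 devices rather than any single chip's throughput.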
Join Chris Chavez, Aisha Praught-Leer and Eric Jenkins as they unpack all of the action at the 2026 World Cross Country Championships in Tallahassee. Uganda's Jacob Kiplimo continued his reign at the World Cross Country Championships, cruising to a third consecutive title in commanding fashion. Ethiopia's Berihu Aregawi claimed silver for the third time in his career, while Kenya's Daniel Simiu Ebenyo rounded out the podium with bronze. In the women's race, Agnes Ngetich delivered a statement performance, powering to victory to extend Kenya's streak to 10 straight World Cross Country titles. The 10km world record holder broke clear early and won by 42 seconds, the second-largest winning margin in the history of the championships.
____________
SUPPORT OUR SPONSORS
ASICS: When you move your body, amazing things happen to your mind. Lace up and feel the good vibrations. Check out all of ASICS' latest running shoes and gear here.
OLIPOP: Olipop is a better-for-you soda that puts 6-9g of fiber in every single can. This winter, Olipop's holiday cans are back featuring their Yeti Trio. Olipop is a smart, simple way to add more fiber to your day. No recipes, no resolutions, no salads required. Whether you're team Vintage Cola, Crisp Apple, or Ginger Ale, bundle up, pour yourself a can, and sip on some fiber. Visit DrinkOlipop.com and use code CITIUS25 at checkout to get 25% off your orders.
The World XC Championships return to the U.S. for the first time since 1992, with Saturday's races in Tallahassee offering a chance to reignite global attention on the best distance runners in the world.
How to watch: The races will be broadcast on Peacock (starting at 9:35 a.m.) and will be televised on CNBC (starting at 10 a.m. ET).
Schedule (All times ET):
9:45 a.m. – Mixed 4x2K Relay
10:20 a.m. – Women's U20 6K Race
10:55 a.m. – Men's U20 8K Race
11:35 a.m. – Women's Senior 10K Race
12:20 p.m. – Men's Senior 10K Race
You can read our full race preview here.
____________
Mixed Relay Preview:
Favorites: Kenya w/ Reynold Cheruiyot
Challengers:
Australia (Jessica Hull, Olli Hoare)
France (Agathe Guillemot)
USA (Sage Hurta-Klecker, Ethan Strand)
Morocco's NCAA stars
Kenya has won 3 of 4 editions since 2017.
Senior Women's Race:
Star Power Dip: No Beatrice Chebet (pregnancy), no Olympic or World medalists from 5000m/10,000m entered.
Top Teams:
Kenya: Agnes Ngetich leads (14:01/28:46 PBs); Maurine Chebor debuts
Ethiopia: Loaded with U20 grads like Senayet Getachew, Asayech Ayichew
Uganda: Joy Cheptoyek, Sarah & Rebecca Chelangat return
USA: Led by Weini Kelati; Schweizer, Kurgat, Izzo in the mix
Wild cards: Megan Keith, Lauren Ryan
Senior Men's Race:
Deepest Field Post-COVID: 145 entrants
Title contenders:
Jacob Kiplimo: Two-time champ, fresh off 2:02:23 marathon win
Ethiopia: Berihu Aregawi + rising star Biniam Mehary
Kenya: Daniel Ebenyo returns; Weldon Langat, Robert Koech support
France: Jimmy Gressier headlines strong squad
Team USA Outlook:
Potential for medal with Graham Blanks, Nico Young, Parker Wolfe, Rocky Hansen
U.S. last medaled in 2013 (silver)
In today's episode, financial journalists Anja Ettel and Holger Zschäpitz talk about a supercycle in memory chips, a double downgrade for Adidas, and comeback hopes at Novo Nordisk. The episode also covers Bitcoin, Ether, Western Digital, Seagate, Micron Technology, Samsung, Sandisk, SK Hynix, Nanya Technology, Nvidia, Johnson Controls, Trane Technologies, Carrier Global, Tesla, Eli Lilly, Ventyx Biosciences, Infineon, STMicroelectronics, Aixtron, SAP, Adidas, Nike, Asics, Puma, JD Sports, Novo Nordisk, Wisdom Tree Strategic Metals and Rare Earths Miners (WKN: A3EKKT), MP Materials, Lynas Rare Earths, Northam Platinum Holdings, Taseko Mines, Aurubis, VanEck Rare Earth and Strategic Metals (WKN: A3CRL9), PLS Group, Albemarle, China Northern Rare Earth, MP Materials, BNP Paribas Copper ETC (WKN: PB8C0P), Global X Copper Miners UCITS ETF (WKN: A3C7FZ), Sprott Pure Play Copper Miners (WKN: A3EWMH), Copper Miners ETF (WKN: A3ECC3), Freeport-McMoRan, and Southern Copper. We welcome feedback at aaa@welt.de. You can find even more "Alles auf Aktien" at WELTplus and on Apple Podcasts, including all of the hosts' articles and the AAA newsletter. Here at WELT: https://www.welt.de/podcasts/alles-auf-aktien/plus247399208/Boersen-Podcast-AAA-Bonus-Folgen-Jede-Woche-noch-mehr-Antworten-auf-Eure-Boersen-Fragen.html. The stock-market podcast. Disclaimer: The stocks and funds discussed in the podcast are not specific buy or investment recommendations. The hosts and the publisher accept no liability for any losses arising from acting on these thoughts or ideas. Listening tip: For anyone who wants to know even more, you can hear Holger Zschäpitz every week on the finance and business podcast "Deffner&Zschäpitz". +++ Advertising +++ Want to learn more about our advertising partners? You'll find all the info & discounts here!
https://linktr.ee/alles_auf_aktien Legal notice (Impressum): https://www.welt.de/services/article7893735/Impressum.html Privacy policy: https://www.welt.de/services/article157550705/Datenschutzerklaerung-WELT-DIGITAL.html
In this program, I analyze the two models in Asics' Fuji range. On one hand, the Fujilite 6, which introduces a new midsole compound (FF Blast+) and a completely different outsole pattern; on the other, the very fast Fuji Speed 4, which drops the carbon plate in favor of Pebax, making the shoe more controllable and giving it greater flexibility, comfort, and a sense of control. I explain the difference between the two and which to choose depending on your needs and terrain. Contact: juan@ellaboratoriodejuan.com
This week on Fuel for the Sole, we break down our biggest takeaways from The Running Event — from exciting new brands to the trends shaping the future of endurance fueling. We also dive into some eye-opening information we uncovered about Generation UCAN (and yeah… it was pretty wild). To wrap it up, we answer listener questions on cutting back fiber before race day and take a hard look at a new supplement called Primal Queen, which may be the ultimate case study in what not to take — ever.
Listening to stocks is good. Buying stocks is even better. Our partner Scalable Capital is now a bank and can therefore offer you even better terms. More info at: scalable.capital/oaws. Oracle finally has TikTok. NVIDIA may soon have approval for China; otherwise Tencent will simply use Datasection. Meanwhile, CoreWeave benefits from Citigroup, and Carnival benefits from the cruise boom. Lamb Weston & Nike aren't benefiting at all. Musk is getting his money. A disastrous acquisition, falling revenues, and a P/E of 8: that's Crocs (WKN: A0HM52). Asics (WKN: 860398) is booming thanks to Onitsuka Tiger and Novablast. Garmin (WKN: A1C06B) is booming only in fitness. This podcast from 22.12.2025, 3:00 a.m., is brought to you by Podstars GmbH (Noah Leidinger).
In this timely episode of The Voice of Retail, host Michael LeBlanc is joined by Aamir Lakhani, Global Director of Threat Intelligence and Artificial Intelligence at Fortinet, for a deep and sobering conversation on the evolving cyber threat landscape facing retailers as they close out 2025 and prepare for 2026.Lakhani leads adversarial AI research within FortiGuard Labs, Fortinet's global R&D arm, where his team studies how cybercriminals—ranging from lone actors to state-sponsored groups—exploit technology, human behaviour, and increasingly, artificial intelligence. With Fortinet protecting over half of the world's firewall traffic, Lakhani brings unparalleled visibility into global cybercrime trends.A central theme of the discussion is the explosion of credential-based attacks, where hackers no longer “break in” but simply log in using stolen usernames and passwords. Lakhani explains how years of data breaches have enabled automated attacks across thousands of retail, banking, and corporate systems, often at massive scale. Two-factor authentication, passkeys, and password-less systems are no longer optional—they are table stakes.The conversation then turns to AI-driven fraud, which Lakhani describes as one of the most urgent threats retailers face today. From deepfake voice scams impersonating CEOs to hyper-personalized phishing attacks fueled by social media data, AI has dramatically lowered the cost and increased the sophistication of fraud. On a scale of concern, Lakhani rates AI fraud “off the charts.”LeBlanc and Lakhani also explore deceptive domains, poisoned AI shopping results, and the risks associated with buy-now-pay-later programs, which fraudsters increasingly exploit through urgency-based scams. 
Importantly, Lakhani emphasizes that cybersecurity is now a shared responsibility across platforms, retailers, and consumers—especially as many small and mid-sized retailers rely heavily on platforms like Shopify. Looking ahead to 2026, Lakhani offers clear guidance for retail leaders: invest in education, embrace AI-powered security tools, and do not shy away from automation. Cybersecurity, he argues, is no longer just an IT issue—it is a brand trust issue, a revenue protection issue, and a core leadership responsibility. For more, see Cyberthreats Targeting the 2025 Holiday Season: What CISOs Need to Know and the report Cyber Threat Landscape Overview for the 2025 Holiday Season. The Voice of Retail podcast is presented by Hale, a performance marketing partner trusted by brands like ASICS, Saje, and Orangetheory to scale with focus and impact. Michael LeBlanc is the president and founder of M.E. LeBlanc & Company Inc, a senior retail advisor, keynote speaker and now, media entrepreneur. He has been on the front lines of retail industry change for his entire career. Michael has delivered keynotes, hosted fire-side discussions and participated worldwide in thought leadership panels, most recently on the main stage in Toronto at Retail Council of Canada's Retail Marketing conference with leaders from Walmart & Google. 
He brings 25+ years of brand/retail/marketing & eCommerce leadership experience with Levi's, Black & Decker, Hudson's Bay, CanWest Media, Pandora Jewellery, The Shopping Channel and Retail Council of Canada to his advisory, speaking and media practice. Michael produces and hosts a network of leading retail trade podcasts, including the award-winning No.1 independent retail industry podcast in America, Remarkable Retail with his partner, Dallas-based best-selling author Steve Dennis; Canada's top retail industry podcast The Voice of Retail; and Canada's top food industry podcast and one of the top Canadian-produced independent management podcasts in the country, The Food Professor with Dr. Sylvain Charlebois from Dalhousie University in Halifax. Rethink Retail has recognized Michael as one of the top global retail experts for the fifth year in a row, the National Retail Federation has designated Michael as one of their Top Retail Voices for 2025, Thinkers 360 has named him one of the Top 50 global thought leaders in retail, RTIH has named him a top 100 global thought leader in retail technology, and Coresight Research has named Michael a Retail AI Influencer. If you are a BBQ fan, you can tune into Michael's cooking show, Last Request BBQ, on YouTube, Instagram, X and yes, TikTok. Michael is available for keynote presentations helping retailers, brands and retail industry insiders understand the current state and future of the retail industry in North America and around the world.
This week on Fuel for the Sole, we dive into the new trend of “fiber maxxing” and why it's probably not for us. We also tackle listener questions on fueling and race intensity, whether downing a bunch of hot dogs can replace a nitrate supplement, and if it's possible to “catch up” in a marathon after missing a few water stops.
Liting Cong is Legal Counsel at ASICS, one of Japan's most successful sportswear companies. Liting shares her journey through the lens of Japanese aesthetics, particularly the concept of wabi-sabi or embracing imperfection, impermanence, and incompleteness. If you're considering an in-house career in Japan, curious about human-centric AI, or looking for wisdom on embracing life's uncertainties, you will enjoy the metaphor Liting shares about building a beautiful garden. More on that inside this episode! If you enjoyed this episode and it inspired you in some way, we'd love to hear about it and know your biggest takeaway. Head over to Apple Podcasts to leave a review and we'd love it if you would leave us a message here!
In this episode you'll hear:
How Japanese martial arts and dance became a source of peace and resilience during challenging times
The evolution of in-house counsel roles beyond gatekeeping and contract review
Practical strategies for unlearning perfectionism that Liting uses herself at work
Why ideation is a lawyer's secret weapon in the age of AI
Liting's favourite book and other fun facts
About Liting
Liting Cong is a Legal Counsel at ASICS Corporation, where she leads global privacy, AI governance, and digital initiatives in the Legal Department. She graduated from Grinnell College in 2011, and University of Toronto Faculty of Law in 2014. She was admitted to the bar in Ontario in 2015, and in New York in 2019. Before relocating to Japan, Liting gained diverse international experience at King & Wood in Shanghai, Shin & Kim in Seoul, and Stikeman & Elliott in Toronto, and started her own practice as a sole practitioner in Toronto. In addition to her legal credentials, Liting is a data protection professional with multiple certifications from the International Association of Privacy Professionals (IAPP) for European privacy (CIPP/E), privacy program management (CIPM), and artificial intelligence governance (AIGP). 
With over a decade of experience living and working in Canada and Japan, Liting brings not only legal expertise but also fluency in three languages (English, Chinese, and Japanese) and a deep understanding of cross-cultural business environments. In 2018, as an avid fan of Japanese arts and culture since childhood, Liting relocated to Japan. She joined Mitsubishi Tanabe Pharma Corporation in Osaka as Legal Counsel, and later SymBio Pharmaceuticals Limited in Tokyo as Legal Manager. In 2023, Liting joined ASICS Corporation at its global headquarters in Kobe, where she now leads global privacy and AI governance and manages ASICS' digital initiatives across the globe. Liting lives in Osaka with her husband and a cat who enjoys making cameos in Teams calls and supervising all her legal work.

Connect with Liting
LinkedIn: https://www.linkedin.com/in/litingcong/

Links
Gokan: https://patisserie-gokan.co.jp/item/
The Culture Map by Erin Meyer: https://amzn.asia/d/9w9muCI

Connect with Catherine
LinkedIn: https://www.linkedin.com/in/oconnellcatherine/
Instagram: https://www.instagram.com/lawyeronair
This week in bitcoin mining news, ERCOT sees 266 GW of interconnection requests in 2026, IREN closed a $2.3 billion convertible note offering, and GPUs are leaving ASICs in the dust. Subscribe to the Blockspace newsletter for market-making news as it hits the wire! Welcome back to The Mining Pod! Today, Ethan Vera, COO of Luxor, joins us as we dive into MicroBT's Whatsminer M70 launching into a challenging ASIC market, IREN's $2.3 billion convertible note offering, the precarious state of hashprice, Luxor's new GPU hardware sales business, the staggering 270% leap in ERCOT interconnection requests, and the controversial Cat bitcoin fork proposal aimed at filtering ordinals/inscriptions. Subscribe to the newsletter! https://newsletter.blockspacemedia.com

**Notes:**
- Hashprice is below $40 per PH/s per day
- Three negative difficulty adjustments
- ERCOT requests leaped 270% in 2025
- 73% of requests from data centers
- IREN raised $2.3B in convertible notes
- M70 efficiency: 12.5 J/TH

00:00 Start
02:35 Difficulty Report by Luxor
07:26 IREN note
10:44 M70 launch
20:02 Luxor launches GPU trading
27:12 ERCOT large load (LL) requests up 270% in 2025
34:10 Cry Corner: another filter fork proposal
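For listeners unfamiliar with the units in the notes above, here is a rough back-of-envelope sketch of how a machine's J/TH efficiency and the prevailing hashprice combine into a gross margin. The 12.5 J/TH and ~$40/PH/s/day figures come from the episode notes; the $0.05/kWh power price is an assumed, illustrative value, not something stated in the show.

```python
# Back-of-envelope mining margin. Efficiency in J/TH is numerically
# equal to watts drawn per TH/s of hashrate, so 1 PH/s (1000 TH/s)
# draws efficiency_j_per_th * 1000 watts.

def daily_power_cost_per_phs(efficiency_j_per_th: float,
                             power_price_usd_per_kwh: float) -> float:
    """Daily electricity cost (USD) to run 1 PH/s at a given efficiency."""
    watts = efficiency_j_per_th * 1000        # watts for 1 PH/s
    kwh_per_day = watts * 24 / 1000           # kWh consumed per day
    return kwh_per_day * power_price_usd_per_kwh

efficiency = 12.5    # J/TH, MicroBT M70 figure from the episode notes
hashprice = 40.0     # USD per PH/s per day (episode: "below $40")
power_price = 0.05   # USD/kWh, assumed for illustration only

cost = daily_power_cost_per_phs(efficiency, power_price)
margin = hashprice - cost
print(f"power cost: ${cost:.2f}/PH/s/day, gross margin: ${margin:.2f}")
# 12.5 kW * 24 h = 300 kWh/day -> $15.00 power cost, $25.00 gross margin
```

At these assumed numbers power is roughly 37% of revenue, which is why a few dollars of hashprice movement or a cheaper power contract swings mining economics so sharply.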
In this episode I get into another set of about four Asics pickups and one New Balance pickup. Then of course, it's on to the upcoming pairs! Thanks as always for listening AFS Squad! Shoutout to the Patrons: Kingsley G, Tristan S, Joshua N, John You can support this podcast, get your name listed above and get early access to episodes (paid tier) at: Patreon.com/ActualFanOfSneakers
This week on Fuel for the Sole, we break down Thomas' latest algorithm—one that's somehow convinced he (and everyone else) should be taking nicotine. We also dig into the trending supplement creatine, answering your top questions, including the best time to take it and the difference between creatine HCL and monohydrate. Plus, we cover the Tokyo Marathon's hydration rules and what runners need to know before race day.

Want to be featured on the show? Email us (written or an audio file!) at fuelforthesolepodcast@gmail.com.

This episode is fueled by ASICS and RNWY! Head over to ASICS.com and sign up for a OneASICS account. It's completely free, and when you sign up you will receive 10% off your first purchase. You also gain access to exclusive colorways on ASICS.com, free standard shipping, special birthday month discounts, and more. Try the new Salty Carbs at https://rnwy.life/ and use code FEATHERS15 for 15% off your purchase.

Disclaimer: This content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition.
A CMO Confidential interview with Michael Treff, CEO of Code and Theory, who joins us for our 150th show to share observations on the major forces impacting the B2B space. Michael details how "empowered buyers" are forcing sellers to increase their focus on customer value creation and transforming marketing and sales from "leads to information," which is also shifting spending to capital expense. Key topics include: why the next AI frontier is customer experience; the need for companies to have both long- and short-term AI plans; why budgeting won't get any easier; and the gap between CX problems and CX actions. Tune in to hear why you need an "AI plan for your humans" and learn whether you need "a personalized relationship with your mustard."

CMO Confidential #150: Michael Treff on B2B's Year-In-Review, What's Next, and How AI Will Actually Drive Growth

B2B is being rebuilt from the core. Michael explains why budgets are shifting from media to infrastructure, how the funnel is being rewritten by agentic search, and where AI must move from efficiency to growth. We also cover the KPIs that matter, budgeting realism for 2026, and three things every CMO should know by the end of next year. Sponsored by Typeface, the agentic AI marketing platform helping brands turn one idea into thousands of on-brand experiences. Learn more: typeface.ai/cmo.
**Chapters**
00:00 Intro + show setup
01:00 Sponsor: Typeface, agentic AI marketing, enterprise-grade & integrated
02:00 Guest intro: Michael Treff, CEO of Code and Theory
03:00 B2B landscape: investment shifts, changing journeys, disintermediation
07:00 From MQLs to value: sales enablement and end-to-end outcomes
10:00 Mid-roll: Typeface ARC agents & content lifecycle
11:00 Why suites win: implementation and value realization after the sale
15:00 AI phases: Wave 1 (efficiency) → Wave 2 (growth) pressures on agencies
17:00 CX as the bridge: measure outcomes, not vanity metrics
22:00 Roadmaps, humans, and culture: planning beyond point tools
26:00 Budget reality check: deliberation, polarization, and trade-offs
29:00 Personalization vs. business impact: what to fund and measure
33:00 By end of 2026: know your human plan, AI maturity, and new journeys
35:00 2026 prediction: the ROI vice tightens; agencies must be consultative
36:00 Closing advice: "Interrogate everything yourself."
38:00 Wrap + where to find past episodes
39:00 Sponsor close: Typeface, see how ASICS & Microsoft scale personalization

**About our sponsor, Typeface**
@typefaceai is the first multimodal, agentic AI marketing platform that automates workflows from brief to launch, integrates with your MarTech stack, and delivers enterprise-grade security. Named AI Company of the Year by Adweek and a TIME Best Invention. Learn more: typeface.ai/cmo.

**Tags**
B2B marketing, enterprise marketing, customer experience, AI marketing, agentic AI, marketing ROI, sales enablement, Code and Theory, Michael Treff, Mike Linton, CMO strategy, marketing budget, personalization, Martech, Typeface

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
This week on Fuel for the Sole, we recap the highlights from the New York City Marathon and share the latest updates on the LA100 Group. We also tackle listener questions on several topics, including hyponatremia, hitting the wall, the difference between ferritin and iron, and why sponge cake might just be the ultimate pre-run fuel for your big workouts and races.

Want to be featured on the show? Email us (written or an audio file!) at fuelforthesolepodcast@gmail.com.

This episode is fueled by ASICS and RNWY! Head over to ASICS.com and sign up for a OneASICS account. It's completely free, and when you sign up you will receive 10% off your first purchase. You also gain access to exclusive colorways on ASICS.com, free standard shipping, special birthday month discounts, and more. Try the new Salty Carbs at https://rnwy.life/ and use code FEATHERS15 for 15% off your purchase.

Disclaimer: This content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition.
Maya Raichoora is one of the UK's leading mental fitness and visualization experts. She is an award-winning entrepreneur and the founder and CEO of Remap Mental Fitness. Day to day she works with global brands such as Nike, Gymshark, Asics, Amex, and Lego. She has a decade of experience mastering the technique of visualization and is on a mission to share it with others. As a two-time TEDx speaker and public figure, her story and expertise have the power to inspire, empower, and educate the masses. She lives in London. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
With over a decade of experience, Kenny Santucci has made a name for himself as one of New York City's top trainers and a thought leader in the health and wellness industry. Brand ambassador for Michelob Ultra and Fitaid, Technogym Master Trainer, host of the Fitaid Morning Show, Michelob Ultra MOVEMENT Fitness Festival, Model Beach Volleyball, and more, Santucci has established himself as a force within the fitness space. He has collaborated with industry titans across the health, wellness, and lifestyle space such as Reebok, Under Armour, Adidas, ASICS, Rhone, Melin, Cellucor, Bodybuilding.com, CrossFit, the National Academy of Sports Medicine, Precision Nutrition, Nautica, TimeOut, Gregory's Coffee, and more. Kenny has also shared his training approach and wellness philosophy in top health and wellness publications such as Shape Magazine, Men's Health Magazine, Men's Journal, Well+Good, Askmen.com, Reebok.com, and Women's Health Magazine, to name a few. Kenny lives his mantra of helping others well beyond the walls of the gym. As the creator of the STRONG New York health and wellness series, he is the heart and leader behind these events, which have already raised thousands of dollars and brought awareness to the community around men's and women's health issues, with a portion of the proceeds going to different health-focused organizations such as the Alzheimer's Awareness Foundation, Movember Foundation, and Breast Cancer Research Foundation.

Work With Us: Arétē by RAPID Health Optimization

Links:
Kenny Santucci on Instagram
Anders Varner on Instagram
Doug Larson on Instagram
Coach Travis Mash on Instagram