Podcasts about hyperscale

  • 137 podcasts
  • 274 episodes
  • 34m avg duration
  • 1 weekly episode
  • Latest: Feb 27, 2026




Latest podcast episodes about hyperscale

Software Sessions
Bryan Cantrill on Oxide Computer

Feb 27, 2026 · 89:58


Bryan Cantrill is the co-founder and CTO of Oxide Computer Company. We discuss why the biggest cloud providers don't use off-the-shelf hardware, how scaling data centers at Samsung's scale exposed problems with hard drive firmware, how the values of Node.js are in conflict with robust systems, choosing Rust, and the benefits of Oxide Computer's rack-scale approach. This is an extended version of an interview posted on Software Engineering Radio. Related links: Oxide Computer · Oxide and Friends · Illumos · Platform as a Reflection of Values · RFD 26 · bhyve · CockroachDB · Heterogeneous Computing with Raja Koduri. Transcript: you can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: Today I am talking to Bryan Cantrill. He's the co-founder and CTO of Oxide Computer Company, and he was previously the CTO of Joyent, and he also co-authored the DTrace tracing framework while he was at Sun Microsystems. [00:00:14] Jeremy: Bryan, welcome to Software Engineering Radio. [00:00:17] Bryan: Uh, awesome. Thanks for having me. It's great to be here. [00:00:20] Jeremy: You're the CTO of a company that makes computers. But I think before we get into that: a lot of people who build software, now that the actual computer is abstracted away, are using AWS or some kind of cloud service. So I thought we could start by talking about data centers. [00:00:41] Jeremy: 'Cause you were previously working at Joyent, and I believe you got bought by Samsung, and you've previously talked about how you had to figure out, how do I run things at Samsung's scale. So how was your experience with that? What were the challenges there? Samsung scale and migrating off the cloud [00:01:01] Bryan: Yeah, I mean, so at Joyent, and so Joyent was a cloud computing pioneer. Uh, we competed with the likes of AWS and then later GCP and Azure. Uh, and we, I mean, we were operating at a scale, right? 
We had a bunch of machines, a bunch of dcs, but ultimately we know we were a VC backed company and, you know, a small company by the standards of, certainly by Samsung standards. [00:01:25] Bryan: And so when, when Samsung bought the company, I mean, the reason by the way that Samsung bought Joyent is Samsung's. Cloud Bill was, uh, let's just say it was extremely large. They were spending an enormous amount of money every year on, on the public cloud. And they realized that in order to secure their fate economically, they had to be running on their own infrastructure. [00:01:51] Bryan: It did not make sense. And there's not, was not really a product that Samsung could go buy that would give them that on-prem cloud. Uh, I mean in that, in that regard, like the state of the market was really no different. And so they went looking for a company, uh, and bought, bought Joyent. And when we were on the inside of Samsung. [00:02:11] Bryan: That we learned about Samsung scale. And Samsung loves to talk about Samsung scale. And I gotta tell you, it is more than just chest thumping. Like Samsung Scale really is, I mean, just the, the sheer, the number of devices, the number of customers, just this absolute size. they really wanted to take us out to, to levels of scale, certainly that we had not seen. [00:02:31] Bryan: The reason for buying Joyent was to be able to stand up on their own infrastructure so that we were gonna go buy, we did go buy a bunch of hardware. Problems with server hardware at scale [00:02:40] Bryan: And I remember just thinking, God, I hope Dell is somehow magically better. I hope the problems that we have seen in the small, we just. You know, I just remember hoping and hope is hope. It was of course, a terrible strategy and it was a terrible strategy here too. 
Uh, and the we that the problems that we saw at the large were, and when you scale out the problems that you see kind of once or twice, you now see all the time and they become absolutely debilitating. [00:03:12] Bryan: And we saw a whole series of really debilitating problems. I mean, many ways, like comically debilitating, uh, in terms of, of showing just how bad the state-of-the-art. Yes. And we had, I mean, it should be said, we had great software and great software expertise, um, and we were controlling our own system software. [00:03:35] Bryan: But even controlling your own system software, your own host OS, your own control plane, which is what we had at Joyent, ultimately, you're pretty limited. You go, I mean, you got the problems that you can obviously solve, the ones that are in your own software, but the problems that are beneath you, the, the problems that are in the hardware platform, the problems that are in the componentry beneath you become the problems that are in the firmware. IO latency due to hard drive firmware [00:04:00] Bryan: Those problems become unresolvable and they are deeply, deeply frustrating. Um, and we just saw a bunch of 'em again, they were. Comical in retrospect, and I'll give you like a, a couple of concrete examples just to give, give you an idea of what kinda what you're looking at. one of the, our data centers had really pathological IO latency. [00:04:23] Bryan: we had a very, uh, database heavy workload. And this was kind of right at the period where you were still deploying on rotating media on hard drives. So this is like, so. An all flash buy did not make economic sense when we did this in, in 2016. This probably, it'd be interesting to know like when was the, the kind of the last time that that actual hard drives made sense? [00:04:50] Bryan: 'cause I feel this was close to it. 
So we had a, a bunch of, of a pathological IO problems, but we had one data center in which the outliers were actually quite a bit worse and there was so much going on in that system. It took us a long time to figure out like why. And because when, when you, when you're io when you're seeing worse io I mean you're naturally, you wanna understand like what's the workload doing? [00:05:14] Bryan: You're trying to take a first principles approach. What's the workload doing? So this is a very intensive database workload to support the, the object storage system that we had built called Manta. And that the, the metadata tier was stored and uh, was we were using Postgres for that. And that was just getting absolutely slaughtered. [00:05:34] Bryan: Um, and ultimately very IO bound with these kind of pathological IO latencies. Uh, and as we, you know, trying to like peel away the layers to figure out what was going on. And I finally had this thing. So it's like, okay, we are seeing at the, at the device layer, at the at, at the disc layer, we are seeing pathological outliers in this data center that we're not seeing anywhere else. [00:06:00] Bryan: And that does not make any sense. And the thought occurred to me. I'm like, well, maybe we are. Do we have like different. Different rev of firmware on our HGST drives, HGST. Now part of WD Western Digital were the drives that we had everywhere. And, um, so maybe we had a different, maybe I had a firmware bug. [00:06:20] Bryan: I, this would not be the first time in my life at all that I would have a drive firmware issue. Uh, and I went to go pull the firmware, rev, and I'm like, Toshiba makes hard drives? So we had, I mean. I had no idea that Toshiba even made hard drives, let alone that they were our, they were in our data center. [00:06:38] Bryan: I'm like, what is this? 
And as it turns out, and this is, you know, part of the, the challenge when you don't have an integrated system, which not to pick on them, but Dell doesn't, and what Dell would routinely put just sub make substitutes, and they make substitutes that they, you know, it's kind of like you're going to like, I don't know, Instacart or whatever, and they're out of the thing that you want. [00:07:03] Bryan: So, you know, you're, someone makes a substitute and like sometimes that's okay, but it's really not okay in a data center. And you really want to develop and validate a, an end-to-end integrated system. And in this case, like Toshiba doesn't, I mean, Toshiba does make hard drives, but they are a, or the data they did, uh, they basically were, uh, not competitive and they were not competitive in part for the reasons that we were discovering. [00:07:29] Bryan: They had really serious firmware issues. So the, these were drives that would just simply stop a, a stop acknowledging any reads from the order of 2,700 milliseconds. Long time, 2.7 seconds. Um. And that was a, it was a drive firmware issue, but it was highlighted like a much deeper issue, which was the simple lack of control that we had over our own destiny. [00:07:53] Bryan: Um, and it's an, it's, it's an example among many where Dell is making a decision. That lowers the cost of what they are providing you marginally, but it is then giving you a system that they shouldn't have any confidence in because it's not one that they've actually designed and they leave it to the customer, the end user, to make these discoveries. [00:08:18] Bryan: And these things happen up and down the stack. And for every, for whether it's, and, and not just to pick on Dell because it's, it's true for HPE, it's true for super micro, uh, it's true for your switch vendors. 
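The outlier hunt Bryan describes, one population of drives with a pathological latency tail hiding among otherwise healthy ones, can be sketched roughly like this. All device names, drive models, and latency numbers below are invented for illustration; they are not data from Joyent's fleet:

```python
# Hypothetical sketch: group per-device I/O latencies by drive model and
# compare tail latencies to spot the misbehaving population.

def percentile(samples, p):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Simulated latency samples in milliseconds, keyed by (device, drive model).
latencies = {
    ("disk0", "HGST"): [4, 5, 6, 5, 7, 6, 5, 8, 6, 9],
    ("disk1", "HGST"): [5, 6, 4, 7, 5, 6, 8, 5, 6, 7],
    # The surprise drive: mostly fine, but with multi-second stalls.
    ("disk2", "TOSHIBA"): [5, 6, 5, 2700, 6, 5, 2650, 7, 6, 2710],
}

def tail_by_model(latencies, p=90):
    """Worst p-th percentile latency observed for each drive model."""
    tails = {}
    for (_, model), samples in latencies.items():
        tails[model] = max(tails.get(model, 0), percentile(samples, p))
    return tails

tails = tail_by_model(latencies)
print(tails)  # the TOSHIBA tail dwarfs the HGST tail
```

The point of grouping by firmware or model rather than by workload is exactly the move described above: once the outliers cluster on one drive population, the workload is exonerated and the component beneath you becomes the suspect.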
It's, it's true for storage vendors where the, the, the, the one that is left actually integrating these things and trying to make the the whole thing work is the end user sitting in their data center. AWS / Google are not buying off the shelf hardware but you can't use it [00:08:42] Bryan: There's not a product that they can buy that gives them elastic infrastructure, a cloud in their own DC The, the product that you buy is the public cloud. Like when you go in the public cloud, you don't worry about the stuff because that it's, it's AWS's issue or it's GCP's issue. And they are the ones that get this to ground. [00:09:02] Bryan: And they, and this was kind of, you know, the eye-opening moment. Not a surprise. Uh, they are not Dell customers. They're not HPE customers. They're not super micro customers. They have designed their own machines. And to varying degrees, depending on which one you're looking at. But they've taken the clean sheet of paper and the frustration that we had kind of at Joyent and beginning to wonder and then Samsung and kind of wondering what was next, uh, is that, that what they built was not available for purchase in the data center. [00:09:35] Bryan: You could only rent it in the public cloud. And our big belief is that public cloud computing is a really important revolution in infrastructure. Doesn't feel like a different, a deep thought, but cloud computing is a really important revolution. It shouldn't only be available to rent. You should be able to actually buy it. [00:09:53] Bryan: And there are a bunch of reasons for doing that. Uh, one in the one we we saw at Samsung is economics, which I think is still the dominant reason where it just does not make sense to rent all of your compute in perpetuity. But there are other reasons too. There's security, there's risk management, there's latency. [00:10:07] Bryan: There are a bunch of reasons why one might wanna to own one's own infrastructure. 
But, uh, that was very much the genesis for Oxide: it was coming out of this very painful experience. And, I mean, that's a long answer to your question about what it was like to be at Samsung scale. [00:10:27] Bryan: Those are the kinds of things that, I mean, in our other data centers, we didn't have Toshiba drives. We only had the HGST drives, but it's only when you get to this larger scale that you begin to see some of these pathologies. But these pathologies then are really debilitating for those who are trying to develop a service on top of them. [00:10:45] Bryan: So it was very educational in that regard. And we're very grateful for the experience at Samsung in terms of opening our eyes to the challenge of running at that kind of scale. [00:10:57] Jeremy: Yeah, because I think as software engineers, a lot of times we treat the hardware as a given, where, [00:11:08] Bryan: Yeah. [00:11:08] Bryan: Yeah. There's software in hard drives [00:11:09] Jeremy: It sounds like in this case, maybe the issue is not so much that Dell or HP as a company doesn't own every single piece that they're providing you, but rather the fact that they're swapping pieces in and out without advertising them, and then when it becomes a problem, they're not necessarily willing to deal with the consequences of that. [00:11:34] Bryan: They just don't know. I mean, I think they just genuinely don't know. It's not like they're making a deliberate decision to ship garbage. I think it's exactly what you said about not thinking about the hardware. It's like, what's a hard drive? [00:11:47] Bryan: I mean, it's a hard drive. It's got the same specs as this other hard drive, and it'll, you know, it's a little bit cheaper, so why not? 
It's like, well, like there's some reasons why not, and one of the reasons why not is like, uh, even a hard drive, whether it's rotating media or, or flash, like that's not just hardware. [00:12:05] Bryan: There's software in there. And that the software's like not the same. I mean, there are components where it's like, there's actually, whether, you know, if, if you're looking at like a resistor or a capacitor or something like this Yeah. If you've got two, two parts that are within the same tolerance. Yeah. [00:12:19] Bryan: Like sure. Maybe, although even the EEs I think would be, would be, uh, objecting that a little bit. But the, the, the more complicated you get, and certainly once you get to the, the, the, the kind of the hardware that we think of like a, a, a microprocessor, a a network interface card, a a, a hard driver, an NVME drive. [00:12:38] Bryan: Those things are super complicated and there's a whole bunch of software inside of those things, the firmware, and that's the stuff that, that you can't, I mean, you say that software engineers don't think about that. It's like you, no one can really think about that because it's proprietary that's kinda welded shut and you've got this abstraction into it. [00:12:55] Bryan: But the, the way that thing operates is very core to how the thing in aggregate will behave. And I think that you, the, the kind of, the, the fundamental difference between Oxide's approach and the approach that you get at a Dell HP Supermicro, wherever, is really thinking holistically in terms of hardware and software together in a system that, that ultimately delivers cloud computing to a user. [00:13:22] Bryan: And there's a lot of software at many, many, many, many different layers. And it's very important to think about, about that software and that hardware holistically as a single system. 
[00:13:34] Jeremy: And during that time at Joyent, when you experienced some of these issues, was it more of a case of you didn't have enough servers experiencing this? So if it would happen, you might say like, well, this one's not working, so maybe we'll just replace the hardware. What, what was the thought process when you were working at that smaller scale and, and how did these issues affect you? UEFI / Baseboard Management Controller [00:13:58] Bryan: Yeah, at the smaller scale, you, uh, you see fewer of them, right? You just see it's like, okay, we, you know, what you might see is like, that's weird. We kinda saw this in one machine versus seeing it in a hundred or a thousand or 10,000. Um, so you just, you just see them, uh, less frequently as a result, they are less debilitating. [00:14:16] Bryan: Um, I, I think that it's, when you go to that larger scale, those things that become, that were unusual now become routine and they become debilitating. Um, so it, it really is in many regards a function of scale. Uh, and then I think it was also, you know, it was a little bit dispiriting that kind of the substrate we were building on really had not improved. [00:14:39] Bryan: Um, and if you look at, you know, the, if you buy a computer server, buy an x86 server. There is a very low layer of firmware, the BIOS, the basic input output system, the UEFI BIOS, and this is like an abstraction layer that has, has existed since the eighties and hasn't really meaningfully improved. Um, the, the kind of the transition to UEFI happened with, I mean, I, I ironically with Itanium, um, you know, two decades ago. [00:15:08] Bryan: but beyond that, like this low layer, this lowest layer of platform enablement software is really only impeding the operability of the system. 
Um, you look at the baseboard management controller, which is kind of the computer within the computer: there is an element in the machine that needs to handle environmentals, that needs to operate the fans and so on. [00:15:31] Bryan: Uh, and that traditionally is the baseboard management controller, and that architecturally just hasn't improved in the last two decades. And, you know, it's a proprietary piece of silicon, generally from a company that no one's ever heard of called ASPEED, uh, which is written in all caps, so I guess it needs to be screamed. [00:15:50] Bryan: Um, ASPEED has a proprietary part where, infamously, there is a root password encoded effectively in silicon. Which is just, for anyone who goes deep into these things, like, oh my God, are you kidding me? Um, when we first started Oxide, the wifi password was a fraction of the ASPEED root password for the BMC. It's kinda like a little BMC humor. Um, but those things, it was just dispiriting that the state-of-the-art was still basically personal computers running in the data center. Um, and that's part of what was the motivation for doing something new. [00:16:32] Jeremy: And for the people using these systems, whether it's the baseboard management controller or the BIOS or UEFI component, what are the actual problems that people are seeing? Security vulnerabilities and poor practices in the BMC [00:16:51] Bryan: Oh man, you are going to have some fraction of your listeners, maybe a big fraction, who are like, yeah, what are the problems? That's a good question. And then you're gonna have the people that actually deal with these things whose heads already hit the desk, being like, what are the problems? [00:17:06] Bryan: Like, what are the non-problems? Like, what works? 
Actually, that's like a shorter answer. Um, I mean, there are so many problems and a lot of it is just like, I mean, there are problems just architecturally these things are just so, I mean, and you could, they're the problems spread to the horizon, so you can kind of start wherever you want. [00:17:24] Bryan: But I mean, as like, as a really concrete example. Okay, so the, the BMCs that, that the computer within the computer that needs to be on its own network. So you now have like not one network, you got two networks that, and that network, by the way, it, that's the network that you're gonna log into to like reset the machine when it's otherwise unresponsive. [00:17:44] Bryan: So that going into the BMC, you can are, you're able to control the entire machine. Well it's like, alright, so now I've got a second net network that I need to manage. What is running on the BMC? Well, it's running some. Ancient, ancient version of Linux it that you got. It's like, well how do I, how do I patch that? [00:18:02] Bryan: How do I like manage the vulnerabilities with that? Because if someone is able to root your BMC, they control the system. So it's like, this is not you've, and now you've gotta go deal with all of the operational hair around that. How do you upgrade that system updating the BMC? I mean, it's like you've got this like second shadow bad infrastructure that you have to go manage. [00:18:23] Bryan: Generally not open source. There's something called open BMC, um, which, um, you people use to varying degrees, but you're generally stuck with the proprietary BMC, so you're generally stuck with, with iLO from HPE or iDRAC from Dell or, or, uh, the, uh, su super micros, BMC, that H-P-B-M-C, and you are, uh, it is just excruciating pain. [00:18:49] Bryan: Um, and that this is assuming that by the way, that everything is behaving correctly. The, the problem is that these things often don't behave correctly, and then the consequence of them not behaving correctly. 
It's really dire because it's at that lowest layer of the system. So, I mean, I'll give you a concrete example. [00:19:07] Bryan: A customer of theirs reported this to me, so I won't disclose the vendor, but let's just say that a well-known vendor had an issue where their temperature sensors were broken. Um, and the thing would always read basically the wrong value. So it was the BMC that had to invent its own kind of thermal control loop. And it would index on the actual inrush current: they would look at the current that's going into the CPU to adjust the fan speed. That's a great example of something that's an interesting idea that doesn't work, 'cause that's actually not the temperature. [00:19:45] Bryan: So that software would crank the fans whenever you had an inrush of current, and this customer had a workload that would spike the current. When it would spike the current, the fans would kick up, and then they would slowly degrade over time. Well, this workload was spiking the current faster than the fans would degrade, but not fast enough to actually heat up the part. And ultimately, over a very long time, in a very painful investigation, its customer determined that, like, my fans are cranked in my data center for no reason. We're blowing cold air. And this is on the order of like a hundred watts a server of energy that you shouldn't be spending. Ultimately, what that comes down to is this kind of broken software-hardware interface at the lowest layer that has real, meaningful consequence, uh, in terms of hundreds of kilowatts across a data center. So this stuff has very, very real consequence, and it's such a shadowy world. 
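The failure mode in that story can be reproduced in a toy simulation: fan speed keyed to current spikes instead of temperature, with spikes recurring faster than the fan speed decays. Every number here is invented purely to illustrate the dynamics:

```python
# Toy simulation of the control-loop bug described above. If current spikes
# arrive faster than the fan speed decays, the fans stay pegged even though
# the part never actually gets hot.

SPIKE_PERIOD = 5      # workload spikes current every 5 ticks
FAN_DECAY = 10        # fan speed steps back down by 10%/tick after a spike
fan_speed = 0         # percent
temperature = 40.0    # degrees C; the machine stays cool the whole time

for tick in range(100):
    if tick % SPIKE_PERIOD == 0:
        fan_speed = 100           # inrush current observed -> crank the fans
    else:
        fan_speed = max(0, fan_speed - FAN_DECAY)
    # Temperature never rises: the spikes are brief and the air is cold.

# Spikes recur every 5 ticks but full decay takes 10 ticks, so the fans
# never drop below 60%: wasted watts blowing cold air at a cool machine.
print(fan_speed, temperature)
```

The mismatch between the spike period and the decay rate is the whole bug: the proxy signal (current) ratchets the fans up on a schedule the real signal (temperature) never justifies.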
Part of the reason that, that your listeners that have dealt with this, that our heads will hit the desk is because it is really aggravating to deal with problems with this layer. [00:21:01] Bryan: You, you feel powerless. You don't control or really see the software that's on them. It's generally proprietary. You are relying on your vendor. Your vendor is telling you that like, boy, I don't know. You're the only customer seeing this. I mean, the number of times I have heard that for, and I, I have pledged that we're, we're not gonna say that at oxide because it's such an unaskable thing to say like, you're the only customer saying this. [00:21:25] Bryan: It's like, it feels like, are you blaming me for my problem? Feels like you're blaming me for my problem? Um, and what you begin to realize is that to a degree, these folks are speaking their own truth because the, the folks that are running at real scale at Hyperscale, those folks aren't Dell, HP super micro customers. [00:21:46] Bryan: They're actually, they've done their own thing. So it's like, yeah, Dell's not seeing that problem, um, because they're not running at the same scale. Um, but when you do run, you only have to run at modest scale before these things just become. Overwhelming in terms of the, the headwind that they present to people that wanna deploy infrastructure. The problem is felt with just a few racks [00:22:05] Jeremy: Yeah, so maybe to help people get some perspective at, at what point do you think that people start noticing or start feeling these problems? Because I imagine that if you're just have a few racks or [00:22:22] Bryan: do you have a couple racks or the, or do you wonder or just wondering because No, no, no. I would think, I think anyone who deploys any number of servers, especially now, especially if your experience is only in the cloud, you're gonna be like, what the hell is this? I mean, just again, just to get this thing working at all. 
[00:22:39] Bryan: It is so it, it's so hairy and so congealed, right? It's not designed. Um, and it, it, it, it's accreted it and it's so obviously accreted that you are, I mean, nobody who is setting up a rack of servers is gonna think to themselves like, yes, this is the right way to go do it. This all makes sense because it's, it's just not, it, I, it feels like the kit, I mean, kit car's almost too generous because it implies that there's like a set of plans to work to in the end. [00:23:08] Bryan: Uh, I mean, it, it, it's a bag of bolts. It's a bunch of parts that you're putting together. And so even at the smallest scales, that stuff is painful. Just architecturally, it's painful at the small scale then, but at least you can get it working. I think the stuff that then becomes debilitating at larger scale are the things that are, are worse than just like, I can't, like this thing is a mess to get working. [00:23:31] Bryan: It's like the, the, the fan issue that, um, where you are now seeing this over, you know, hundreds of machines or thousands of machines. Um, so I, it is painful at more or less all levels of scale. There's, there is no level at which the, the, the pc, which is really what this is, this is a, the, the personal computer architecture from the 1980s and there is really no level of scale where that's the right unit. Running elastic infrastructure is the hardware but also, hypervisor, distributed database, api, etc [00:23:57] Bryan: I mean, where that's the right thing to go deploy, especially if what you are trying to run. Is elastic infrastructure, a cloud. Because the other thing is like we, we've kinda been talking a lot about that hardware layer. Like hardware is, is just the start. Like you actually gotta go put software on that and actually run that as elastic infrastructure. [00:24:16] Bryan: So you need a hypervisor. Yes. But you need a lot more than that. 
You, you need to actually, you, you need a distributed database, you need web endpoints. You need, you need a CLI, you need all the stuff that you need to actually go run an actual service of compute or networking or storage. I mean, and for, for compute, even for compute, there's a ton of work to be done. [00:24:39] Bryan: And compute is by far, I would say the simplest of the, of the three. When you look at like networks, network services, storage services, there's a whole bunch of stuff that you need to go build in terms of distributed systems to actually offer that as a cloud. So it, I mean, it is painful at more or less every LE level if you are trying to deploy cloud computing on. What's a control plane? [00:25:00] Jeremy: And for someone who doesn't have experience building or working with this type of infrastructure, when you talk about a control plane, what, what does that do in the context of this system? [00:25:16] Bryan: So control plane is the thing that is, that is everything between your API request and that infrastructure actually being acted upon. So you go say, Hey, I, I want a provision, a vm. Okay, great. We've got a whole bunch of things we're gonna provision with that. We're gonna provision a vm, we're gonna get some storage that's gonna go along with that, that's got a network storage service that's gonna come out of, uh, we've got a virtual network that we're gonna either create or attach to. [00:25:39] Bryan: We've got a, a whole bunch of things we need to go do for that. For all of these things, there are metadata components that need, we need to keep track of this thing that, beyond the actual infrastructure that we create. And then we need to go actually, like act on the actual compute elements, the hostos, what have you, the switches, what have you, and actually go. [00:25:56] Bryan: Create these underlying things and then connect them. And there's of course, the challenge of just getting that working is a big challenge. 
Um, but getting that working robustly: you know, when you go to provision a VM, um, there are all the steps that need to happen, and what happens if one of those steps fails along the way? [00:26:17] Bryan: One thing we're very mindful of is these long tails of, like, generally our VM provisioning happened within this time, but we get these long tails where it takes much longer. What's going on? Where in this process are we actually spending time? [00:26:33] Bryan: Uh, and there's a whole lot of complexity that you need to deal with there, in this workflow that's gonna go create these things and manage them. Um, we use a pattern called sagas, which is actually a database pattern from the eighties. [00:26:51] Bryan: Uh, Caitie McCaffrey is a database researcher who, I think, reintroduced the idea of sagas in the last decade or so. Um, and this is something that we picked up, um, and we've done a lot of really interesting things with, to allow these workflows to be managed and done robustly, in a way that you can restart them and so on. [00:27:16] Bryan: Uh, and then you get this whole distributed system that can do all this. That whole distributed system itself needs to be reliable and available. So, you know, what happens if you pull a sled or if a sled fails? How does the system deal with that? [00:27:33] Bryan: How does the system deal with getting another sled added to the system? Like, how do you actually grow this distributed system? And then how do you update it? How do you actually go from one version to the next? And all of that has to happen across an air gap where this is gonna run as part of the computer. 
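The saga pattern mentioned above can be sketched minimally: a workflow is a sequence of steps, each paired with a compensating "undo" action, and when a step fails the completed steps are unwound in reverse order. The step names here are hypothetical, not Oxide's actual saga nodes, and real saga engines (like the one in Oxide's control plane) also persist state so a saga can be resumed after a crash:

```python
# Minimal saga sketch: run steps forward; on failure, run the compensating
# actions of every completed step, in reverse order.

class SagaFailed(Exception):
    pass

def run_saga(steps, log):
    """Run (name, action, undo) steps; on failure, unwind completed steps."""
    done = []
    for name, action, undo in steps:
        try:
            action(log)
            done.append(undo)
        except Exception:
            for undo_action in reversed(done):
                undo_action(log)
            raise SagaFailed(name)

def boom(log):
    raise RuntimeError("simulated failure booting the VM")

log = []
steps = [
    ("alloc-ip",  lambda l: l.append("ip+"),   lambda l: l.append("ip-")),
    ("make-disk", lambda l: l.append("disk+"), lambda l: l.append("disk-")),
    ("boot-vm",   boom,                        lambda l: l.append("vm-")),
]

try:
    run_saga(steps, log)
except SagaFailed:
    pass

print(log)  # forward steps, then compensations in reverse: ip+, disk+, disk-, ip-
```

The compensations are what make a half-finished provision safe: a failed VM boot releases the disk and the IP address instead of leaking them.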
[00:27:49] Bryan: So there are, it, it is fractally complicated. There, there is a lot of complexity here in, in software, in the software system and all of that. We kind of, we call the control plane. Um, and it, this is the what exists at AWS at GCP, at Azure. When you are hitting an endpoint that's provisioning an EC2 instance for you. [00:28:10] Bryan: There is an AWS control plane that is, is doing all of this and has, uh, some of these similar aspects and certainly some of these similar challenges. Are vSphere / Proxmox / Hyper-V in the same category? [00:28:20] Jeremy: And for people who have run their own servers with something like say VMware or Hyper V or Proxmox, are those in the same category? [00:28:32] Bryan: Yeah, I mean a little bit. I mean, it kind of like vSphere Yes. Via VMware. No. So it's like you, uh, VMware ESX is, is kind of a key building block upon which you can build something that is a more meaningful distributed system. When it's just like a machine that you're provisioning VMs on, it's like, okay, well that's actually, you as the human might be the control plane. [00:28:52] Bryan: Like, that's, that, that's, that's a much easier problem. Um, but when you've got, you know, tens, hundreds, thousands of machines, you need to do it robustly. You need something to coordinate that activity and you know, you need to pick which sled you land on. You need to be able to move these things. You need to be able to update that whole system. [00:29:06] Bryan: That's when you're getting into a control plane. So, you know, some of these things have kind of edged into a control plane, certainly VMware. Um, now Broadcom, um, has delivered something that's kind of cloudish. Um, I think that for folks that are truly born on the cloud, it, it still feels somewhat, uh, like you're going backwards in time when you, when you look at these kind of on-prem offerings. [00:29:29] Bryan: Um, but, but it, it, it's got these aspects to it for sure. 
Um, and I think some of these other things, when you're just looking at KVM or just looking at Proxmox, you kind of need to connect to other, broader things to turn it into something that really looks like manageable infrastructure. [00:29:47] Bryan: And many of those projects are either proprietary products, like vSphere, um, or you are really dealing with open source projects that are not necessarily aimed at the same level of scale. Um, you know, you look at, again, Proxmox, or, uh, at OpenStack. [00:30:05] Bryan: Um, and you know, OpenStack is just a lot of things, right? OpenStack was kind of a, a free-for-all for every infrastructure vendor. Um, and, you know, there was a time people were like, aren't you worried about all these companies that are coming together for OpenStack? [00:30:24] Bryan: I'm like, haven't you ever worked for, like, a company? Like, companies don't get along. By the way, having multiple companies work together on a thing, that's bad news, not good news. And I think, you know, one of the things that OpenStack has definitely struggled with is that there are so many different kind of vendor elements in there that it's very much not a product; it's a project that you're trying to run. [00:30:47] Bryan: But that very much is similar, certainly in spirit. [00:30:53] Jeremy: And so I think this is kind of like you're alluding to earlier: the piece that allows you to allocate compute and storage, manage networking, gives you that experience of, I can go to a web console or I can use an API, and I can spin up machines, get them all connected. At the end of the day, the control plane is allowing you to do that in, hopefully, a user-friendly way. [00:31:21] Bryan: That's right. Yep.
And, I mean, in order to do that in a modern way, it's not just a user-friendly way. You really need to have a CLI and a web UI and an API, and those all need to be drawn from the same kind of single ground truth. Like, you don't wanna have any of those be an afterthought for the others. [00:31:39] Bryan: You wanna have the same way of generating all of those different endpoints and entries into the system. Building a control plane now has better tools (Rust, CockroachDB) [00:31:46] Jeremy: And if you take your time at Joyent as an example, what kind of tools existed for that, versus how much did you have to build in-house, as far as the hypervisor and managing the compute and all that? [00:32:02] Bryan: Yeah, so we built more or less everything in house. I mean, I think, you know, over time we've gotten slightly better tools. Um, and maybe it's a little bit easier to talk about the kind of tools we started with at Oxide, because we kind of started with a clean sheet of paper at Oxide. [00:32:16] Bryan: We knew we wanted to go build a control plane, but we were able to kind of go revisit some of the components. So maybe I'll talk about some of those changes. For example, at Joyent, when we were building a cloud at Joyent, there wasn't really a good distributed database. [00:32:34] Bryan: Um, so we were using Postgres as our database for metadata, and there were a lot of challenges. Postgres is not a distributed database. It's running with a primary-secondary architecture, and there's a bunch of issues there, many of which we discovered the hard way. Um, when we were coming to Oxide, you have much better options to pick from in terms of distributed databases. [00:32:57] Bryan: You know, there was a period, one that now seems potentially brief in hindsight, of really high-quality open source distributed databases.
So there were really some good ones to pick from. Um, we built on CockroachDB, on CRDB. Um, so that was a really important component that we had at Oxide that we didn't have at Joyent. [00:33:19] Bryan: Um, so we weren't rolling our own distributed database; at Joyent we were just using Postgres and, uh, dealing with an enormous amount of pain there in terms of the surround. Um, on top of that, you know, a control plane is much more than a database, obviously. Uh, and there's a whole bunch of software that you need to go write [00:33:40] Bryan: to be able to transform these kind of API requests into something that is reliable infrastructure, right? And there's a lot to that, uh, especially when networking gets in the mix, when storage gets in the mix. Uh, there are a whole bunch of complicated steps that need to be done. Um, at Joyent, [00:33:59] Bryan: um, in part because of the history of the company, and, look, this is just not gonna sound good, but it is what it is and I'm just gonna own it: we did it all in Node, um, at Joyent. Which, I know, right now just sounds like, well, you built it with Tinkertoys. Okay. [00:34:18] Bryan: You built the skyscraper with Tinkertoys? It's like, well, okay, we actually had greater aspirations for the Tinkertoys once upon a time, and it was better than, you know, Twisted from Python and EventMachine from Ruby, and we weren't gonna do it in Java. All right. [00:34:32] Bryan: So, let's just say that that experiment did ultimately end in a predictable fashion. Um, and, uh, we decided that maybe Node was not gonna be the best decision long term. Um, Joyent was the company behind Node.js. Uh, back in the day, Ryan Dahl worked for Joyent, and then we
[00:34:53] Bryan: Uh, landed that in a foundation in about, uh, what, 2015, something like that. Um, and began to consider our world beyond, uh, beyond Node. Rust at Oxide [00:35:04] Bryan: A big tool that we had in the arsenal when we started Oxide is Rust. Um, and so indeed the name of the company is a tip of the hat to the language that we were pretty sure we were gonna be building a lot of stuff in. [00:35:16] Bryan: Namely, Rust. And, uh, Rust has been huge for us, a very important revolution in programming languages. You know, there have been different people kind of coming in at different times, and I kinda came to Rust in what I think is this big kind of second expansion of Rust in 2018, when a lot of technologists were, I think, sick of Node and also sick of Go [00:35:43] Bryan: and, uh, also sick of C++, and wondering: is there gonna be something that gives me the performance that I get outta C, the robustness that I can get out of a C program but that is often difficult to achieve, but with kind of some of the velocity of development, although I hate that term, some of the speed of development, that you get out of a more interpreted language? [00:36:08] Bryan: Um, and then, by the way, can I actually have types? I think types would be a good idea. Uh, and Rust obviously hits the sweet spot of all of that. Um, it has been absolutely huge for us. I mean, we knew when we started the company, again, Oxide, uh, we were gonna be using Rust in quite a few places, but we weren't doing it by fiat. [00:36:27] Bryan: Um, we wanted to actually make sure we were making the right decision, um, at every layer. Uh, I think what has been surprising is the sheer number of layers at which we use Rust: we've done our own embedded firmware in Rust; we've done very big components in Rust in the host operating system, which is still largely in C.
[00:36:47] Bryan: The hypervisor, Propolis, is all in Rust. Uh, and then of course the control plane, that distributed system on top, is all in Rust. So Rust was a very important thing that we very much did not need to build ourselves. We were able to really leverage, uh, a terrific community. Um, we were able to use, and we'd done this at Joyent as well, but at Oxide we've used Illumos as the host OS component; uh, our variant is called Helios. [00:37:11] Bryan: Um, we've used, uh, bhyve, um, as that kind of internal hypervisor component. We've made use of a bunch of different open source components to build this thing, um, which has been really, really important for us. Uh, and open source components that didn't exist even, like, five years prior. [00:37:28] Bryan: That's part of why we felt that 2019 was the right time to start the company. And so we started Oxide. The problems building a control plane in Node [00:37:34] Jeremy: You had mentioned that at Joyent, you had tried to build this in Node. What were the issues or the challenges that you had doing that? [00:37:46] Bryan: Oh boy. Yeah. Again, I kind of had higher hopes in 2010, I would say, when we set out on this. Um, the problem that we had, just writ large: JavaScript is really designed to allow as many people on earth to write a program as possible, which is good. I mean, that's a laudable goal. [00:38:09] Bryan: That is the goal, ultimately, such as it is, of JavaScript. It's actually hard to know what the goal of JavaScript is, unfortunately, because Brendan Eich never actually wrote a book, so there is not a canonical statement of it. You've got kind of Doug Crockford and other people who've written things on JavaScript, but it's hard to know what the original intent of JavaScript is. [00:38:27] Bryan: The name doesn't even express original intent, right?
It was called LiveScript, and it was kind of renamed to JavaScript during the Java frenzy of the late nineties. A name that makes no sense: there is no Java in JavaScript. That is kind of revealing, I think, of the, uh, the unprincipled mess that is JavaScript. [00:38:47] Bryan: It's very pragmatic at some level, um, and it makes it very easy for anyone to write software. The problem is it's much more difficult to write really rigorous software. So, uh, and here I should differentiate JavaScript from TypeScript. This is really what TypeScript is trying to solve. [00:39:07] Bryan: I think TypeScript is a great step forward, because TypeScript is like, how can we bring some rigor to this? Like, yes, it's great that it's easy to write JavaScript, but that's not the only problem we wanna solve. [00:39:23] Bryan: We actually wanna be able to write rigorous software, and it's actually okay if it's a little harder to write, if that leads to more rigorous artifacts. Um, but in JavaScript, I mean, just a concrete example: there's nothing to prevent you from referencing a property that doesn't actually exist in JavaScript. [00:39:43] Bryan: So if you fat-finger a property name, you are relying on something to tell you: by the way, I think you've misspelled this. Because there is no type definition for this thing, it doesn't know that you've got one spelling that's correct and one that's incorrect; the misspelled one is just undefined. And so you've got this typo that is lurking in what you want to be rigorous software. [00:40:07] Bryan: And if you don't execute that code, you won't know that it's there. And then you do execute that code, and now you've got an undefined object.
And now that's either gonna be an exception, or, again, it depends on how that's handled; it can be really difficult to determine the origin of that programming error. [00:40:26] Bryan: And that is a programmer error. And one of the big challenges that we had with Node is that programmer errors and operational errors, like, you know, I'm-out-of-disk-space is an operational error, those get conflated, and it becomes really hard. And in fact, I think the language wanted to make it easier to just kind of, uh, drive on in the event of all errors. [00:40:53] Bryan: And that's actually not what you wanna do if you're trying to build a reliable, robust system. So we had no end of issues. [00:41:01] Bryan: We've got a lot of experience developing rigorous systems, um, again, coming out of operating systems development and so on, and we brought some of that rigor, if strangely, to JavaScript. So one of the things that we did is we brought a lot of postmortem diagnosability and observability to Node. [00:41:18] Bryan: And so if one of our Node processes died in production, we would actually get a core dump from that process, a core dump that we could actually meaningfully process. So we did a bunch of kind of wild stuff, actually wild stuff, where we could actually make sense of the JavaScript objects in a binary core dump. JavaScript values ease of getting started over robustness [00:41:41] Bryan: Um, these were things that we thought were really important, and the rest of the world just looks at this like, what the hell is this? It's so out of step. The problem is that we were trying to bridge two disconnected cultures: one developing really rigorous software and really designing it for production diagnosability, and the other really designing software to run in the browser and for anyone to be able to, you know, kind of liven up a webpage, right?
[00:42:10] Bryan: That's kinda the origin of LiveScript and then JavaScript. And we were kind of the only ones sitting at the intersection of those cultures. And when you are the only ones sitting at that kind of intersection, you're kind of fighting a community all the time. And we just realized that there were so many things that the community wanted to do that we felt were like, no, no, this is gonna make software less diagnosable, it's gonna make it less robust. The NodeJS split and why people left [00:42:36] Bryan: And then you realize, like, we're the only voice in the room, because we have got desires for this language that it doesn't have for itself. And this is when you realize you're in a bad relationship with software and it's time to actually move on. And in fact, several years later, we'd already kind of broken up with Node. [00:42:55] Bryan: Um, and it was a bit of an acrimonious breakup. There was a, uh, famous slash infamous fork of Node called io.js. Um, this happened because people, the community, thought that Joyent was not being an appropriate steward of Node.js and was, uh, not allowing more things to come into Node. [00:43:19] Bryan: And of course, we felt that we were being a careful steward, and we were actively resisting those things that would cut against its fitness for a production system. But that's how the community saw it, and they forked. Um, and I think we knew before the fork that, like, this is not working, and we need to get this thing out of our hands. Platform as a Reflection of Values Node Summit talk [00:43:43] Bryan: And we are the wrong hands for this; this needs to be in a foundation. Uh, and so we'd kind of gone through that breakup, uh, and maybe it was two years after that.
A friend of mine who was running Node Summit, who unfortunately has now passed away, Charles, um, a venture capitalist, great guy, came to me in 2017. [00:44:07] Bryan: He's like, I really want you to keynote Node Summit. And I'm like, Charles, I'm not gonna do that. I've got nothing nice to say. Like, I'm the last person you want to keynote. He's like, oh, if you have nothing nice to say, you should definitely keynote. You're like, oh God, okay, here we go. [00:44:22] Bryan: He's like, no, I really want you to talk about the Joyent breakup with NodeJS. I'm like, oh man. [00:44:29] Bryan: And that led to a talk that I'm really happy that I gave, 'cause it was a very important talk for me personally, uh, called Platform as a Reflection of Values, really looking at the values that we had for Node and the values that Node had for itself. And they didn't line up. [00:44:49] Bryan: And the problem is that the values that Node had for itself and the values that we had for Node are all kind of positives, right? Like, there's nobody in the Node community who's like, I don't want rigor, I hate rigor. It's just that if they had to choose between rigor and making the language approachable, [00:45:09] Bryan: they would choose approachability every single time. They would never choose rigor. And, you know, that was a big eye-opener. I would say, do watch this talk, [00:45:20] Bryan: because I knew that the audience was gonna be filled with people who had been a part of the fork in 2014, I think it was, the io.js fork. And I knew that there were some people there that had been there for the fork, and [00:45:41] Bryan: I set a little bit of a trap for the audience.
And for the trap, you know what, I kind of talked about the values that we had and the aspirations we had for Node, the aspirations that Node had for itself, and how they were different. [00:45:53] Bryan: And, you know, I'm like, look, in hindsight, a fracture was inevitable. And in 2014, there was finally a fracture. And do people know what happened in 2014? And if you listen to that talk, everyone almost says in unison, like, io.js. I'm like, oh right, io.js. Right. That's actually not what I was thinking of. [00:46:19] Bryan: And I go to the next slide, and it's a tweet from a guy named TJ Holowaychuk, who was the most prolific contributor to Node. And it was his tweet, also in 2014, before the fork, before the io.js fork, explaining that he was leaving Node and that he was going to Go. And if you turn the volume all the way up, you can hear the audience gasp. [00:46:41] Bryan: And it's just delicious, because the community had never really confronted why TJ left. And I went through a couple folks, Felix, a bunch of other folks, early Node folks, that were there in 2010 and were leaving in 2014, and they were going to Go, primarily, and they were going because they were sick of the same things that we were sick of. [00:47:09] Bryan: They had hit the same things that we had hit, and they were frustrated. I really do believe this: platforms do reflect their own values. And when you are making a software decision, you are selecting values. [00:47:26] Bryan: You should select values that align with the values that you have for that software. That's way more important than other things that people look at. I think people look at, for example, quote-unquote community size way too frequently. Community size is like, eh, maybe it can be fine. [00:47:44] Bryan: I've been in very large communities, Node.
I've been in super small open source communities, like illumos, and a bunch of others. There are strengths and weaknesses to both approaches, just as, like, there's a strength to being in a big city versus a small town. Me personally, I'll take the small community more or less every time, because the small community is almost always self-selecting based on values, and just for the same reason that I like working at small companies or on small teams. [00:48:11] Bryan: There's a lot of value to be had in a small community. That's not to say that large communities are valueless. But again, long answer to your question of where did things go south with Joyent and Node: they went south because the values that we had and the values the community had didn't line up, and that was a very educational experience, as you might imagine. [00:48:33] Jeremy: Yeah. And given that you mentioned how, because of those values, some people moved from Node to Go, and in the end, for much of what Oxide is building, you ended up using Rust. What would you say are the values of Go and Rust, and how did you end up choosing Rust? Go's decisions regarding generics, versioning, compilation speed priority [00:48:56] Bryan: Yeah, I mean, so I understand why people moved from Node to Go. Go to me was kind of a lateral move. Um, there were a bunch of things, uh, Go was still garbage collected, um, which I didn't like. Um, Go also is very strange in that there are these kind of [00:49:17] Bryan: autocratic decisions that are very bizarre. Um, I mean, generics is kind of a famous one, right? Where Go, kind of as a point of principle, didn't have generics, even though the innards of Go itself actually did have generics. It's just that you, a Go user, weren't allowed to have them.
[00:49:35] Bryan: And, you know, there was an old cartoon years and years ago about how, when a technologist is telling you that something is technically impossible, that actually means, I don't feel like it. Uh, and there was a certain degree of, like, generics are technically impossible in Go. It's like, hey, actually, there are generics. [00:49:51] Bryan: And I just think that the arguments against generics were kind of disingenuous. Um, and indeed, they ended up adopting generics. And then there's some super weird stuff around, like, they're very anti-assertion, which is like, what? How is someone against assertions? It doesn't even make any sense, but it's like, oh, nope. [00:50:10] Bryan: Okay, there's a whole screed on it: nope, we're against assertions. And, you know, against versioning. That was another thing: Rob Pike has kind of famously been like, you should always just run what's at the latest commit. And you're like, does that make sense? I mean, we actually built it. [00:50:26] Bryan: And so there are a bunch of things like that. You're just like, okay, this is just exhausting. I mean, there are some things about Go that are great and, uh, plenty of other things that I'm just not a fan of. Um, I think that, in the end, Go cares a lot about compile time. It's super important for Go, right? [00:50:44] Bryan: Very quick compile time. I'm like, okay, but compile time is not unimportant, it doesn't have zero importance, but I've got other things that are lots more important than that. Um, what I really care about is I want a high-performing artifact. I wanted garbage collection outta my life.
Don't think garbage collection has good trade offs [00:51:00] Bryan: I gotta tell you, garbage collection to me is an embodiment of this larger problem of where you put cognitive load in the software development process. And garbage collection may be right for plenty of other people and the software that they wanna develop. [00:51:21] Bryan: But for me and the software that I wanna develop, infrastructure software, I don't want garbage collection, because I can solve the memory allocation problem. I know when I'm done with something or not. I mean, whether that's in C or elsewhere, it's really not that hard to not leak memory in a C-based system. [00:51:44] Bryan: And you can give yourself a lot of tooling that allows you to diagnose where memory leaks are coming from. So that is a solvable problem. There are other challenges with that, but when you are developing a really sophisticated system that is using garbage collection, [00:51:59] Bryan: you spend as much time trying to dork with the garbage collector to convince it to collect the thing that you know is garbage. You're like, I've got this thing, I know it's garbage, now I need to use these tips and tricks to get the garbage collector to collect it. It feels like every Java performance issue goes to, like, -XX flags: use the other garbage collector, whatever one you're using, use a different one, use a different approach. [00:52:23] Bryan: So, to me, it's like you're in the worst of all worlds, where the reason that garbage collection is helpful is because the programmer doesn't have to think at all about this problem, but now you're actually dealing with these long pauses in production. [00:52:38] Bryan: You're dealing with all these other issues where actually you need to think a lot about it.
And it's kind of witchcraft. It's this black box that you can't see into. So it's like, what problem have we solved, exactly? And so the fact that Go had garbage collection, it's like, eh, no, I do not want that. And then you get all the other, like, weird fatwas and, you know, everything else. [00:52:57] Bryan: I'm like, no, thank you. Go is a no-thank-you for me. I get why people like it or use it, but it's just, that was not gonna be it. Choosing Rust [00:53:04] Bryan: I'm like, I want C, but there are things I didn't like about C too. I was looking for something that was gonna give me the deterministic kind of artifact that I got outta C, but I wanted library support, and C is tough because it's all convention. You know, there's just a bunch of other things that are just thorny. And I remember thinking vividly in 2018, well, it's Rust or bust. Ownership model, algebraic types, error handling [00:53:28] Bryan: I'm gonna go into Rust, and, uh, I hope I like it, because if it's not this, I'm gonna go back to C. I'm, like, literally trying to figure out what the language is for the back half of my career. Um, and I did what a lot of people were doing at that time and have been doing since, of really getting into Rust and really learning it, appreciating the difference in the model, for sure, the ownership model people talk about. [00:53:54] Bryan: That's also obviously very important. But it was the error handling that blew me away, and the idea of algebraic types. I never really had algebraic types. Um, and error handling is one of these things you really appreciate: how do you deal with a function that can either succeed and return something, or it can fail? And the way C deals with that is bad, with these kind of sentinels for errors.
[00:54:27] Bryan: And, you know, does negative one mean success? Does negative one mean failure? Does zero mean failure? In some C functions, zero means failure. Traditionally in Unix, zero means success. And, like, what if you wanna return a file descriptor? You know, it's like, oh, okay, then zero through positive N will be a valid result, [00:54:44] Bryan: and negative numbers will be errors. And, like, was it negative one and it set errno, or is it a negative number that did not? I mean, that's all convention, right? People do all those different things, and it's all convention, and it's easy to get wrong, easy to have bugs, can't be statically checked, and so on. Um, and then what Go says is, well, you're gonna have, like, two return values, and then you're gonna have to just constantly check all of these all the time. Um, which is also kind of gross. Um, JavaScript is like, hey, let's toss an exception. If we don't like something, if we see an error, we'll throw an exception. [00:55:15] Bryan: There are a bunch of reasons I don't like that. Um, and then you look at what Rust does, where it's like, no, no, no, we're gonna have these algebraic types, which is to say this thing can be a this or a that, but it has to be one of these. And by the way, you don't get to process this thing until you conditionally match on one of these things. [00:55:35] Bryan: You're gonna have to have a pattern match on this thing to determine if it's a this or a that. And in the result type, the Result is a generic: it's gonna be either an Ok that contains the thing you wanna return, or it's gonna be an Err that contains your error, and it forces your code to deal with that.
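A minimal sketch of that contrast (the function and names here are hypothetical): a C-style sentinel return is just a convention the compiler cannot check, while Rust's Result makes failure part of the type, and the caller must match on Ok or Err before the value can be used.

```rust
use std::collections::HashMap;

// In C this might be: int lookup(...) returning -1 for "not found" and
// a non-negative descriptor otherwise -- pure convention. In Rust,
// failure is in the signature:
fn lookup(table: &HashMap<&str, u32>, name: &str) -> Result<u32, String> {
    match table.get(name) {
        Some(&fd) => Ok(fd),
        None => Err(format!("{} not found", name)),
    }
}

fn main() {
    let table = HashMap::from([("console", 0u32), ("disk", 3)]);
    // The u32 inside cannot be touched until the Result is matched:
    match lookup(&table, "disk") {
        Ok(fd) => println!("disk => {}", fd),
        Err(e) => println!("error: {}", e),
    }
    match lookup(&table, "nic") {
        Ok(fd) => println!("nic => {}", fd),
        Err(e) => println!("error: {}", e),
    }
}
```

Forgetting to handle the Err arm is a compile error, not a latent production bug, which is exactly the cognitive-load shift described next.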
[00:55:57] Bryan: And what that does is it shifts the cognitive load from the person that is operating this thing in production to the actual developer, in development. And I love that shift. Um, that shift to me is really important, and that's what I was missing; that's what Rust gives you. [00:56:23] Bryan: Rust forces you to think about your code as you write it, but as a result, you have an artifact that is much more supportable, much more sustainable, and much faster. Prefer to frontload cognitive load during development instead of at runtime [00:56:34] Jeremy: Yeah, it sounds like you would rather take the time during development to think about these issues, because whether it's garbage collection or it's error handling, dealing with it at runtime, when you're trying to solve a problem, is much more difficult than having dealt with it to start with. [00:56:57] Bryan: Yeah, absolutely. And I just think, like, if it's infrastructure software, I mean, the kind of question that you should have when you're writing software is: how long is this software gonna live? How many people are gonna use this software? Uh, and if you are writing an operating system, the answer is: this thing that you're gonna write is gonna live for a long time. [00:57:18] Bryan: Like, if we just look at plenty of aspects of the system that have been around for decades, it's gonna live for a long time, and many, many, many people are gonna use it. Why would we not expect people writing that software to take on more cognitive load when they're writing it, to give us something that's gonna be a better artifact? [00:57:38] Bryan: Now, conversely, you're like, hey, I kind of don't care about this. I just wanna see if this whole thing works; I'm just stringing this together.
The software will be lucky if it survives until tonight, so then, like, who cares? Yeah. [00:57:52] Bryan: Garbage collect, you know, if you're prototyping something, whatever. And this is why you really do get different technology choices depending on the way that you wanna solve the problem at hand. And for the software that I wanna write, I do like that cognitive load that is upfront. With LLMs maybe you can get the benefit of the robust artifact with less cognitive load [00:58:10] Bryan: Um, although I think the thing that is really wild, the twist that I don't think anyone really saw coming, is that in an LLM age, that cognitive load upfront almost needs an asterisk on it, because so much of that can be assisted by an LLM. And now, I mean, I would like to believe, and maybe this is me being optimistic, that in the LLM age we will see, I mean, Rust is a great fit for the LLM age, because the LLM itself can get a lot of feedback about whether the software that's written is correct or not. [00:58:44] Bryan: Much more so than you can for other environments. [00:58:48] Jeremy: Yeah, that is an interesting point, in that I think when people first started trying out LLMs to code, they were really good at these maybe looser languages like Python or JavaScript, and initially weren't so good at something like Rust. But it sounds like, as that improves, if it can write Rust, then because of the rigor or the memory management or the error handling that the language is forcing you to do, it might actually end up being a better choice for people using LLMs. [00:59:27] Bryan: Absolutely. It gives you more certainty in the artifact that you've delivered. I mean, you know a lot about a Rust program that compiles correctly.
I mean, th there are certain classes of errors that you don't have, um, that you actually don't know on a C program or a Go program or a, a JavaScript program. [00:59:46] Bryan: I think that's gonna be really important. I think we are on the cusp. Maybe we've already seen it, this kind of great bifurcation in the software that we write…

Cloud Wars Live with Bob Evans
ServiceNow's McDermott: We Are Hungry and SaaS Is For Dinner

Cloud Wars Live with Bob Evans

Play Episode Listen Later Feb 18, 2026 3:59


In today's Cloud Wars Minute, I explore why Bill McDermott says ServiceNow is not a SaaS company and why SaaS is “on the menu.”

Highlights
00:03 — Welcome back to Cloud Wars Minute. The big thing is ServiceNow. As Bill McDermott says, ServiceNow is hungry and SaaS is on the menu. He went to great lengths in ServiceNow's recent Q4 earnings call, and also in a follow-up interview with Jim Cramer of Mad Money, to say that ServiceNow is doing great. We hit and exceeded all our numbers. We are not a SaaS company now.
00:34 — One of the reasons McDermott wants to emphasize this separation from the SaaS community is that the SaaS business has been getting ravaged by Wall Street analysts who think generative AI is going to completely gut the whole SaaS model. So they have knocked anywhere from 50, 60, 70% off the market caps of some leading SaaS companies.
01:09 — He said AI, generative AI, workflows, and data are going to be the new model; the old model of traditional SaaS applications, or what McDermott referred to repeatedly as features and functions, is a thing of the past. We are the AI platform on which a lot of these SaaS apps will work and operate.
02:03 — Hyperscaler is a nice name, but it doesn't really describe all that they do. Some of them offer applications and application development. They all offer databases. You've now got SaaS companies that got caught up in just features and functions that don't drive value and don't get companies better prepared for the AI Economy. They're all rolled together now.
03:05 — "Our stock price and our valuation have taken a huge hit because we are being misinterpreted as being part of the SaaS world." We are not in the SaaS neighborhood. We are not a SaaS company. SaaS is on the menu. We're hungry. AI and ServiceNow are going to eat a lot of these, devour a lot of these feature and function application companies.

Visit Cloud Wars for more.

David Bombal
#536: Inside the Cisco 8000: 100Tbps Capacity

David Bombal

Play Episode Listen Later Feb 18, 2026 31:58


Is this the most powerful network switch ever built? In this interview from Cisco Live, we look at the new generation of Cisco 8000 and Nexus switches capable of routing 100 Terabits per second. We break down why AI data centers are forced to move from air cooling to liquid cooling, and how a single switch chassis can now handle the equivalent mobile traffic of 100 million people.

AI models are growing faster than the infrastructure can keep up. To solve this, the network switch had to evolve. In this video, I talk to Will from Cisco about the engineering challenges of building the "G300" generation of switches, hardware so dense that air cooling is no longer enough. We discuss the massive architectural shift occurring in data centers, where liquid-cooled switches are becoming the new standard to support 1.6T Ethernet ports and massive GPU clusters.

Key Hardware Topics:
• The 100 Terabit Chassis: How Cisco architecture handles massive throughput.
• Liquid Cooling: Why switches are adopting "direct-to-chip" cooling just like gaming rigs.
• Scale-Out Networking: How these switches manage congestion for AI training jobs (Job Completion Time).
• Career Insights: Will manages 5,000 engineers and explains why understanding the physical layer and hardware constraints is a superpower for modern developers.

Big thanks to Cisco for sponsoring my trip to Cisco Live EMEA and for changing my life and the lives of many other people.

// Will Eatherton SOCIAL //
LinkedIn: /willeatherton
Newsroom: https://newsroom.cisco.com/c/r/newsro...

// David's SOCIAL //
Discord: discord.com/invite/usKSyzb
Twitter: www.twitter.com/davidbombal
Instagram: www.instagram.com/davidbombal
LinkedIn: www.linkedin.com/in/davidbombal
Facebook: www.facebook.com/davidbombal.co
TikTok: tiktok.com/@davidbombal
YouTube: /@davidbombal
Spotify: open.spotify.com/show/3f6k6gE...
SoundCloud: /davidbombal
Apple Podcast: podcasts.apple.com/us/podcast...

// MY STUFF //
https://www.amazon.com/shop/davidbombal

// SPONSORS //
Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com

// MENU //
0:00 - Coming up
0:49 - Will Eatherton introduction and projects // Hyperscale, neocloud & enterprises
08:03 - New Cisco hardware // Silicon One G300
13:18 - Data centers + AI + GPUs
16:27 - G300 use case & Cisco Nexus
21:56 - Liquid-cooled switches
24:12 - Networking as a career path
28:41 - Development and opportunities in networking
30:14 - Conclusion

Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel!

Disclaimer: This video is for educational purposes only.

#cisco #ciscolive #ciscoemea

Fiber Broadband Association - Fiber for Breakfast
FFB Episode 267 - The Hidden Infrastructure Behind AI and Hyperscale Networks

Fiber Broadband Association - Fiber for Breakfast

Play Episode Listen Later Feb 18, 2026 30:30


On this episode of Fiber for Breakfast, we pull back the curtain on In-Line Amplifier (ILA) huts—the unsung infrastructure powering long-haul fiber networks. Michael Thomas, Regional Vice President of Integration at Network Connex, joins Gary Bolton, President & CEO of the Fiber Broadband Association, to explain how ILAs keep signals strong across hundreds of miles, reduce latency, and enable the rapid growth of AI and cloud services. They'll also discuss why ILA placement is critical, how these sites are evolving into micro data centers, who's building them, and how they could create new broadband opportunities for rural communities. With Special Guest: Michael Thomas, Regional Vice President of Integration, Network Connex

Geschichten, die verkaufen - Mehr Umsatz durch Content Marketing
No. 409 - Behind the Story: From Near-Death to a 100-Million Brand with Rafael Frenk of Glow25

Geschichten, die verkaufen - Mehr Umsatz durch Content Marketing

Play Episode Listen Later Feb 15, 2026 75:26


What happens when you almost die, and then decide to use your life in a radically meaningful way? In the first episode of Behind the Story, Bernhard Kalhammer talks with Raphael Frenk: serial founder, co-founder of Glow25, and someone who knows what extreme situations feel like. From the near-fatal robbery he survived as a student, to building Primal State, to Glow25's explosion into a 100-million brand, Raphael shares openly what hyperscale really feels like, and why the nervous system is often the true price. But this conversation goes deeper. It is about: • Death and legacy • Meditation as a survival strategy • Breathwork as a breakthrough • Spirituality without esotericism • Entrepreneurship between rocket launch and nervous breakdown. An honest, unfiltered conversation about success, pain, healing, and the question: what really remains of us?

Outgrow's Marketer of the Month
EPISODE 248- Hustle at Hyperscale: Uber's EMEA Head of Network Infrastructure Vishnu Acharya Unpacks 10x, 100x, 1000x Growth

Outgrow's Marketer of the Month

Play Episode Listen Later Feb 2, 2026 36:13


Vishnu Acharya is a well-rounded technology leader and speaker with over 20 years of experience spanning engineering and operations. He has worked both as an individual contributor and in senior leadership roles. Currently, he is an engineering leader at Uber, where he focuses on hyperscale production network engineering, infrastructure, and software automation. Known for his natural curiosity to break and fix things, Vishnu is driven by a passion to learn, teach, create, and bring order out of chaos.

On The Menu:
Scaling Uber's infrastructure from 16 to 4,000 engineers
Building systems that survive 10x, 100x, 1000x growth
Network bottlenecks in AI factories and GPU connectivity
How Uber for Business started from a random networking request
Angel investing lessons: backing founders over trends and hype
China expansion: cranes, windows, and creative data center solutions
Autonomous vehicle push in 2016: when innovation moved too fast

WenMint - Cardano Culture and NFTs
Cantex.io First Look // Radix Hyperscale Breaks 700k TPS

WenMint - Cardano Culture and NFTs

Play Episode Listen Later Feb 1, 2026 70:52


THE REVIEW is recorded live on X and available on all major streaming platforms. Head to Spotify for the video recording.

Follow our hosts:
Ken - https://x.com/NFTMachinist
Aaron - https://x.com/GuyettAaron

Astrolescent Official Links:
https://astrl.trade
https://t.me/astrolescent_announcements
https://x.com/astrolescent

Cantex.io Official Links:
https://x.com/cantex_io
https://discord.gg/SQ45PCAU7b
https://news.cantex.io

THE REVIEW PODCAST DOES NOT PROVIDE FINANCIAL ADVICE

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Welcome to AI Unraveled (December 12, 2025): Your daily strategic briefing on the business impact of AI.

Today on AI Unraveled, we break down the escalation in the model wars as OpenAI rushes GPT-5.2 to market to counter Google's Gemini 3. We also analyze the massive copyright lawsuit Disney just handed to Google, Rivian's strategic pivot to proprietary AI chips, and the new space race between Bezos and Musk to build orbital data centers. Plus, why the Financial Times believes the "Hyperscale" bubble might burst in favor of specialized industrial AI.

Strategic Pillars & Topics

Kodsnack in English
Kodsnack 678 - The intent of a human, with Justyna Zander

Kodsnack in English

Play Episode Listen Later Dec 4, 2025 25:10


Recorded on-stage at Øredev 2025, Fredrik talks to Justyna Zander about AI for self-driving cars, the noise of the present, and more. Don’t let the noise of today demolish the positive signal of the future! Many thanks to Øredev for inviting Kodsnack again, they paid for the trip and the editing time of these keynote recordings, but have no say about the content of these or any other episodes. Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi.

Links:
Øredev
All the presentation videos from Øredev 2025
Justyna Zander
Physical AI: crafting resilient systems with emotional intelligence - Justyna’s keynote
Emotional intelligence
Empathy
Hyperscalers
Snowflake
Demis Hassabis

Titles:
You learn something new
We have it in the spatial sense
The policy of the machine
What did the human tell me to do?
How do you teach the machine empathy?
The first to be disrupted
The intent of a human
Engineering with purpose
Statistics on steroids

AdExchanger
From Hype To Hyperscale In AI

AdExchanger

Play Episode Listen Later Dec 2, 2025 58:51


AI hype is everywhere, but Moloco CEO Ikkjin Ahn says the real winners in ad tech will be those who can move beyond flashy demos and harness AI at true hyperscale.

Seaspray Making Waves
Financial Insight : Hyperscale Growth for Data Centres.

Seaspray Making Waves

Play Episode Listen Later Nov 25, 2025 1:57


Visit our website : https://seasprayprivate.ie

Engineering Influence from ACEC
The Data Center Boom: 5 Trends Engineering Firms Need to Know

Engineering Influence from ACEC

Play Episode Listen Later Nov 20, 2025 5:31 Transcription Available


The Data Center Boom: Five Trends Engineering Firms Need to Know

The data center market is experiencing unprecedented growth, driven by artificial intelligence adoption and changing infrastructure demands. For ACEC member firms, this represents both a substantial business opportunity and a chance to shape critical national infrastructure. ACEC's latest Market Intelligence Brief reveals a market poised to reach $62 billion in design and construction spending by 2029, with implications that extend far beyond traditional data center engineering.

The launch of ChatGPT in 2022 marked an inflection point. What began as voice assistants has evolved into sophisticated large language models that consume dramatically more energy. A standard AI query uses about 0.012 kilowatt-hours, while generating a single high-quality image requires 2.0 kWh—roughly 20 times the daily consumption of a standard LED lightbulb. As weekly ChatGPT users surged from 100 million to 700 million between November 2023 and August 2025, the infrastructure implications became impossible to ignore. AI-driven data center power demand, which stood at just 4 gigawatts in 2024, is projected to reach 123 gigawatts by 2035. Even more striking: 70 percent of data center power demand will be driven by AI workloads. This explosive growth requires engineering solutions at unprecedented scale, from power distribution and backup systems to advanced cooling technologies and grid integration strategies.

Public perception about data center water consumption often overlooks important nuances in cooling technology. While mechanical cooling systems have historically consumed significant water resources, newer approaches could dramatically reduce water use. Free air cooling, closed-loop systems, and liquid immersion technologies offer low-water use alternatives, with some methods reducing freshwater consumption by 70 percent or more compared to traditional systems. 
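As a quick sanity check, the brief's per-image figure and the LED comparison above are mutually consistent. The 10 W bulb running 10 hours a day is an assumption introduced here to reconstruct the arithmetic, not a figure from the brief:

```python
# Figures from the brief; the LED wattage and duty cycle are assumptions.
query_kwh = 0.012           # energy per standard AI query
image_kwh = 2.0             # energy per high-quality generated image
led_daily_kwh = 0.010 * 10  # 10 W LED bulb running 10 hours/day -> 0.1 kWh

print(image_kwh / led_daily_kwh)  # 20x, matching the "roughly 20 times" claim
print(image_kwh / query_kwh)      # one image costs as much as ~167 queries
```

The same back-of-the-envelope style scales up to the grid-level numbers: going from 4 GW to 123 GW of AI-driven demand by 2035 is roughly a 30-fold increase.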
As Thom Jackson, mechanical engineer and partner at Dunham Engineering, notes: "Most data centers utilize closed loop cooling systems requiring no makeup water and minimal maintenance." The "big four" hyperscale operators—Amazon Web Services, Microsoft Azure, Google Cloud Platform, and Meta—have all committed to becoming water-positive by 2030, replenishing more water than they consume. These commitments are driving innovation in cooling system design and creating opportunities for engineering firms with expertise in sustainable mechanical systems. The days of one-size-fits-all data centers are over. Latency requirements, scalability needs, and proximity to end users are accelerating adoption of diverse building types. Edge data centers bring computing closer to users for real-time applications like IoT and 5G. Hyperscale facilities support massive cloud and AI workloads with 100,000-plus servers. Colocation models enable scalable shared environments for enterprises, while modular designs—prefabricated with integrated power and cooling—offer rapid, cost-effective deployment. Each model presents distinct engineering challenges and opportunities, from specialized HVAC systems and high floor-to-ceiling ratios for hyperscale facilities to distributed infrastructure planning for edge networks. Two emerging trends deserve particular attention. First, the Department of Energy has selected four federal sites to host AI data centers paired with clean energy generation, including small modular reactors (SMRs). The Nuclear Regulatory Commission anticipates at least 25 SMR license applications by 2029, signaling strong demand for nuclear co-location expertise. Second, developers are increasingly exploring adaptive reuse of underutilized office spaces, Brownfield sites, and historical buildings. These locations offer existing utility infrastructure that can reduce construction time and costs, making them attractive alternatives despite some design constraints. 
Recent federal policy changes are streamlining data center deployment. Executive Order 14318 directs agencies to accelerate environmental reviews and permitting, while revisions to New Source Review under the Clean Air Act could allow construction to begin before air permits are issued. ACEC recently formed the Data Center Task Force to advocate for policies that balance speed, affordability, and national security in data center development, complementing EO 14318.

For engineering firms, site selection expertise has become increasingly valuable. Success hinges on sales and use tax exemptions, existing power and fiber connectivity, effective community engagement, and thorough environmental risk assessment. AI-driven planning tools like UrbanFootprint and ESRI ArcGIS are helping developers evaluate site suitability, identifying opportunities for firms. The data center market offers engineering firms a chance to lead in sustainable design, infrastructure innovation, and strategic planning at a moment when digital infrastructure has become as critical as traditional utilities.

Smartinvesting2000
The glut of apartments on the market, Are large hyperscale companies inflating earnings, China isn't just manufacturing; they're innovating & 50-year Mortgages: helpful or harmful?

Smartinvesting2000

Play Episode Listen Later Nov 15, 2025 55:37


No surprise to me that there's a glut of apartments on the market

I saw the potential for this oversupply happening in San Diego a couple of years ago. It seemed anywhere you drove within a short distance you would see the construction of new apartment buildings. It is not just here in San Diego though, as the glut of apartments is happening around the country. With the dynamics of supply and demand, if you're looking for an apartment today, you're in for a treat. In September rental rates had the steepest drop in more than 15 years. Landlords are now offering months of free rent, gift cards, free parking, and some are even paying for your moving expenses just to get you to sign a lease. You may want to play hardball because in some areas they'll even cut the rent on top of all those incentives. In September, 37% of rentals included concessions like months of free rent. What caused the problem for landlords is that during the early years of the pandemic, developers could not begin building apartments fast enough, especially in the Sunbelt area where there was a major population migration. It became the biggest apartment construction boom in 40 years, but because of the delay of construction permits and labor shortages, development took much longer than they had hoped. It seemed no one looked around to see all the apartments going up, and now they're all competing with each other for renters. The landlords are hoping they can raise rents by the end of 2026 or at least sometime in 2027, but I don't think they are factoring in how many apartments are online with more still to come. Based on the current apartment inventory and new apartments coming online, renters could be in for lower rent perhaps until 2028. This will not be good for the housing market because rent for houses will be the next to fall, and then people will have to factor in the affordability of renting vs buying a home. 
This would also likely hurt the demand for buying rental properties as an investment if you can't get as much rent as you thought.

Are the large hyperscale companies like Meta, Microsoft, and Alphabet inflating earnings?

Michael Burry, who was made famous by "The Big Short", made the claim that some of America's largest tech companies are using aggressive accounting to pad their profits. He believes they are understating depreciation expenses by estimating that chips will have a longer life cycle than is realistic. Investors are likely aware of the huge investment these companies are making in AI, but they likely don't understand how the accounting of the investments works. If a business makes an investment in these semiconductors/servers of, let's say, $100B, that doesn't hit earnings when the money is spent. Under generally accepted accounting principles, or GAAP, they are instead able to spread out the cost of that asset as a yearly expense that is based on the company's estimate of how rapidly that asset depreciates in value. From what I've seen, these companies are generally depreciating their Nvidia chips over 5 to 6 years. This seems to be a stretch considering Nvidia is on a 1-year chip production cycle, and the technology is changing quite rapidly. Burry estimated that from 2026 through 2028, the accounting maneuver would understate depreciation by about $176 billion, and if Burry is correct, hyperscalers will have to write off AI capex as a bad investment due to the depreciation/useful-life mismatch. This would then produce a major hit on earnings. While I remain a believer that AI is here to stay, I do believe there will be some big-time losers in this space given all the money that is being spent. Be careful chasing the hype, as I do worry the fallout for some of these companies could be larger than many think possible. 
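Under straight-line depreciation, stretching the assumed useful life directly shrinks the annual expense that hits earnings. A minimal sketch, using the $100B figure from the segment and a hypothetical 3-year versus 6-year useful life:

```python
def annual_depreciation(cost_billions: float, useful_life_years: int) -> float:
    """Straight-line depreciation: spread an asset's cost evenly over its life."""
    return cost_billions / useful_life_years

capex = 100.0  # $100B of chips/servers, as in the example above

# A 6-year life (roughly what the hyperscalers report) vs. a 3-year life
# (closer to Nvidia's chip cadence, per Burry's critique) halves the expense:
exp_6y = annual_depreciation(capex, 6)  # ~$16.7B/yr charged against earnings
exp_3y = annual_depreciation(capex, 3)  # ~$33.3B/yr
print(f"understated expense: ${exp_3y - exp_6y:.1f}B per year")
```

The gap compounds across companies and years, which is how Burry arrives at a nine-figure understatement estimate across 2026-2028.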
Burry has also warned this year that AI enthusiasm resembles the late-1990s tech bubble and recently disclosed put options betting against Nvidia and Palantir. He also stated that "more detail" was coming November 25th, and that readers should "stay tuned." I'm definitely curious what other information he has!

China is no longer just manufacturing; they are also beginning to innovate.

For many years innovation was generally done here in the US, and we would have the products manufactured in China. China is no longer happy with this arrangement, and its research and development spending is up nearly 9% a year, well above the 1.7% here in the United States. In 2024, China filed 70,160 international patents, which was about 16,000 more than the 54,087 patents the US filed. China also seems to be more advanced in robotics, installing 300,000 industrial robots in 2024 compared with roughly 30,000 industrial robots in the US. It also has been noted that when it comes to worldwide sales of electric vehicles, 66% came from China. While these developments seem positive for China, the country is still experiencing problems with a slowing economy, as they have seen fixed asset investment decline and a slowdown in retail sales. The population of China has also declined over the last three years, and the real estate market after four years has really taken away a lot of household wealth. China's public and private debt continue to climb rapidly, which is becoming a problem for them as well. It is estimated that China is spending around $85-$95 billion on AI capital spending, yet their economy is struggling, as noted by the China Merchants Bank, which talked about an 11% decline in consumption among customers and retail loans now being under pressure. China's exports to the US are down 27% because of the tariffs, but worldwide their exports are up 8%. 
It was recently reported that Beijing banned foreign AI chips from Nvidia, Advanced Micro Devices, and Intel from government-funded data center buildouts. Currently, China cannot pass the US and its allies in producing the most advanced semiconductors, but they're making very good progress in developing mid-level chips and parts of the AI ecosystem. The US must continue to forge ahead because if we rest, China will be the world's dominant power.

Financial Planning: 50-year Mortgage: Helpful or Hurtful?

A 50-year mortgage is being discussed as a way to reduce monthly payments and help with affordability, offering borrowers slightly lower costs that could help them qualify for homes otherwise out of reach. Critics argue that these loans would saddle buyers with far more interest paid to banks and that many borrowers would never pay off such a long mortgage, but those arguments often miss the bigger picture. Paying a low rate of interest to a bank is not inherently bad if it allows someone to invest money elsewhere at higher returns, just as today's homeowners with 30-year mortgages at 2% benefit greatly from not paying them off early. Also, most mortgages today are never fully paid off anyway because homes are sold, or loans are refinanced, long before they reach maturity. A 50-year loan would be no different, especially since borrowers could always pay more than the minimum if they wanted to accelerate payoff. In practice, savvy investors would likely use the freed-up cash flow from 50-year mortgages to invest in higher-return opportunities, but most borrowers probably wouldn't, resulting in slower wealth accumulation for the masses without addressing the root cause of housing affordability. If used correctly, this loan could be a useful tool, but I fear the overall impact could be damaging.

Companies Discussed: Axon Enterprise (AXON), Zoetis Inc. (ZTS), Elf Beauty Inc. (ELF), Sweetgreen Inc. (SG)
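The payment-versus-interest trade-off behind that argument falls out of the standard fixed-rate amortization formula. A sketch with illustrative assumptions ($400,000 at 6%; neither figure is from the episode):

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate mortgage amortization formula."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Illustrative assumptions: a $400,000 loan at 6% (not figures from the show).
p30 = monthly_payment(400_000, 0.06, 30)  # ~ $2,398/month
p50 = monthly_payment(400_000, 0.06, 50)  # ~ $2,106/month

print(f"monthly savings with 50-year term: ${p30 - p50:,.0f}")
print(f"total interest, 30-year: ${p30 * 360 - 400_000:,.0f}")
print(f"total interest, 50-year: ${p50 * 600 - 400_000:,.0f}")
```

Under these assumptions the 50-year term trims the payment by roughly $300 a month but nearly doubles lifetime interest, which is exactly the trade the segment weighs: the freed-up cash flow only wins if it is actually invested at higher returns.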

Camada 8
#70 - Network Automation in Hyperscale Environments with Leonardo Furtado

Camada 8

Play Episode Listen Later Nov 12, 2025 72:13


In the new episode of Camada 8, we invited Leonardo Furtado, Network Development Engineer, for a conversation about network automation in hyperscale environments.

Furtado explains what sets the infrastructure of these environments apart from traditional networks, how they operate at global scale, and why automation has become indispensable for ensuring availability, agility, and security. He also talks about how programmability is transforming the way critical networks are designed and operated, how providers of all sizes can apply these concepts in their day-to-day work, and much more!

Hit play and check out the new episode of Camada 8's Roteamento de Ideias segment right now!

#Camada8 #Hyperscale #Infraestrutura #NetworkAutomation #AutomaçãoDeRedes #Programabilidade #programmability #Networking

Participants:
Antonio Marcos Moreiras (Host) - Project and development manager at NIC.br https://www.linkedin.com/in/moreiras
Eduardo Barasal Morales (Host) - Coordinator of the autonomous systems training area of Ceptro.br at NIC.br https://www.linkedin.com/in/eduardo-barasal-morales
Leonardo Furtado (Guest) - Network Development Engineer https://www.linkedin.com/in/leofurtadonyc/

Links mentioned:
Brazil Internet Infrastructure Week: https://semanainfra.nic.br/
BCOP distance-learning course: https://cursoseventos.nic.br/curso/curso-bcop-ead/
Ceptro|NIC.br course schedule: https://ceptro.br/cursos-eventos

Social media:
https://www.youtube.com/nicbrvideos/
https://x.com/comuNICbr/
https://www.telegram.me/nicbr/
https://www.linkedin.com/company/nic-br/
https://www.instagram.com/nicbr/
https://www.facebook.com/nic.br/
https://www.flickr.com/NICbr/

Contact:
Ceptro.br team
cursosceptro@nic.br

Direction and audio:
Ceptro.br team
NIC.br Communications team
Full editing by Rádiofobia Podcast e Multimídia: https://radiofobia.com.br/

See also:
https://nic.br/
https://ceptro.br/

The Vestigo FinTech Podcast
#30 | Building the Backbone of the AI Economy - What It Takes to Deliver Hyperscale Compute with Éanna Murphy - CEO & Founder of Montera Infrastructure

The Vestigo FinTech Podcast

Play Episode Listen Later Nov 11, 2025 60:17


As AI shifts from training to inference, founders, investors and VCs face a new frontier: the physical infrastructure that enables massive compute.  In this episode, Frazer and Éanna discuss:  The hidden development lifecycle of a hyperscale build - from dirt to green lights - and what that means for build‑to‑suit strategies. Why power, latency and land have become the new scalers of value, and how to spot when infrastructure constraints turn into opportunity. How investors and founders can position themselves early in this "third wave" of data‑centre build‑out to win sub‑1% of the market before it becomes crowded. — Éanna Murphy is CEO and Founder of Montera Infrastructure, a Stonepeak-backed datacenter developer focused on single tenant hyperscale campuses in North America. With over 17 years in the digital infrastructure industry, Éanna has held senior roles at Google and Yondr, scaling global delivery and operations across five continents. He serves on the boards of Digital Edge and H&MV Engineering, as an advisor to XYZ Reality and Beacon AI Centers and an Operating Partner at Stonepeak.  Éanna brings a global perspective shaped by deep experience across Digital Infrastructure, tech and capital markets. Originally from Ireland, he now lives in California with his family and is a passionate sports fan, girls soccer coach and golfer.

The Preconstruction Podcast - Commercial Construction.
E155: Samuel Liesmer, Hyperscale Estimator at National Technologies

The Preconstruction Podcast - Commercial Construction.

Play Episode Listen Later Nov 10, 2025 40:15


Our host Gareth McGlynn sits down with Samuel Liesmer, Hyperscale Estimator at National Technologies (a Connex Company), to discuss the growing demand for low voltage expertise in hyperscale data center construction.

Discussion Highlights:
The surge of AI and data center growth driving electrical and low voltage packages.
Understanding Division 27 low voltage systems in large-scale builds.
The complexities of structured cabling, fiber optics, and media systems.
How CCTV, security, and access control systems are managed within hyperscale projects.
Working with owners and general contractors to balance delivery, design, and management challenges.
And much, much more.

Connect with Sam Liesmer:

JSA Podcasts for Telecom and Data Centers
Jason Walker on DC BLOX's Southeast Growth and Hyperscale Strategy | Datacloud Global Congress

JSA Podcasts for Telecom and Data Centers

Play Episode Listen Later Sep 29, 2025 8:02


JSA Podcasts for Telecom and Data Centers
Why Hyperscale Data Centers Are Flocking to the Nordics | BW Velora & Five Nines at Datacloud Global

JSA Podcasts for Telecom and Data Centers

Play Episode Listen Later Sep 29, 2025 7:39


The Data Center Frontier Show
Nomads at the Summit: Technology Infrastructure Considerations for Hyperscale, MTDC, Wholesale - A Consultant Engineer's Perspective

The Data Center Frontier Show

Play Episode Listen Later Sep 25, 2025 27:56


Speakers:  Joseph Ford, Senior Associate – Technology, Bala Consulting Engineers Eric Klaiber, Data Center Design Manager, Bala Consulting Engineers In this DCF Trends-Nomads at the Summit Podcast episode, Joseph Ford and Eric Klaiber of Bala Consulting Engineers offer a consultant engineer's hard-won perspective on the complex realities of designing infrastructure for hyperscale, MTDC, and wholesale data centers. Drawing on years of field experience, they dig into the nuanced choreography required to align incoming duct banks, meet-me room layouts, and overlapping network systems—all while staying within the spatial constraints driven by power and cooling demands. This candid conversation highlights what it really takes to create design harmony across client expectations, design teams, and contractors, with insights into space planning, coordination strategy, and the delicate balance of infrastructure coexistence that underpins modern high-performance facilities.

The Data Center Frontier Show
Nomads at the Summit: From Liquid Cooling to Hyperscale Frontiers - A Conversation with Vertiv's Greg Stover

The Data Center Frontier Show

Play Episode Listen Later Sep 25, 2025 41:39


In this DCF Trends-Nomads at the Summit Podcast episode, Data Center Frontier editors and Nomad Futurist hosts sit down with Greg Stover, Vertiv's Global Director, Hi-Tech Development. The discussion delves into Stover's work at the intersection of advanced cooling technologies, hyperscale growth, and AI-driven infrastructure design. Drawing on his experience guiding Vertiv's strategy for high-density deployments, liquid cooling adoption, and close collaboration with hyperscalers and chipmakers, Stover offers a forward-looking perspective on how evolving compute architectures, thermal management innovations, and market forces are redefining the competitive edge in the data center industry.

JSA Podcasts for Telecom and Data Centers
PowerHouse's Data Centers: Meeting the Demands of AI & Hyperscale | ITW 2025

JSA Podcasts for Telecom and Data Centers

Play Episode Listen Later Sep 16, 2025 6:40


PowerHouse Data Centers is making major moves to support the future of AI and hyperscale infrastructure. At ITW 2025, Matt Monaco, SVP of Asset Management & Development, and Vardahn Chaudhry, SVP of Asset Management, joined JSA TV to discuss PowerHouse's new talent and growth, why Nevada, Texas, Kentucky, and North Carolina are key growth zones, and how they're overcoming grid limitations with a smarter approach to power procurement. Subscribe to JSATV for more executive insights from ITW 2025.

#DataCenters #AI #Hyperscale #Sustainability #ITW2025 #PowerHouseDataCenters #DigitalInfrastructure

Built Environment Matters
AI Edge Revolution: How Data Centres are Reshaping for AI Service Serving a Vertical Need | Emmanuel Becker

Built Environment Matters

Play Episode Listen Later Sep 9, 2025 37:36


The data centre industry is experiencing unprecedented disruption. As AI applications drive explosive demand for computing power, traditional approaches to data centre design and deployment are becoming obsolete almost overnight.

In this episode, Emmanuel Becker, CEO of Mediterra Datacenters, shares insights from his extensive career spanning the evolution from on-premise to cloud to AI-driven infrastructure. With NVIDIA releasing new GPU generations every 6-12 months, data centre operators face an impossible equation: building facilities meant to last decades while needing flexibility every few months.

Emmanuel discusses Mediterra's innovative approach, including their focus on mid-voltage power infrastructure, liquid cooling readiness, and strategic positioning in tier two cities. The conversation explores how vertical AI specialists are driving regional demand, why 'permanent retrofit' is becoming standard practice, and how the industry must evolve to serve distributed intelligence rather than centralised hyperscale computing.

A must-listen for anyone involved in digital infrastructure, technology deployment, or understanding how AI is reshaping the built environment.

To learn more about Bryden Wood's Design to Value philosophy, visit www.brydenwood.com. You can also follow Bryden Wood on LinkedIn.

New Project Media
NPM Interconnections (US) – Episode 161: Putting the Scale in Hyperscale | PANEL

New Project Media

Play Episode Listen Later Aug 19, 2025 57:09


This week's episode is the full recording of an NPM webinar discussion titled “Putting the Scale in Hyperscale,” held on August 12, 2025.

Speakers include:
Craig McKesson - Chief Commercial Officer, Takanock
Bill Thomas - Chief Energy Officer, CleanArc Data Centers
Syed Ahmed - Head of Digital Infrastructure, Apterra Infrastructure Capital
Kyle Younker - Senior Editor, NPM (moderator)

The panel tackles utility constraints and policy shifts, the rise of behind-the-meter strategies, changing siting logic for training vs. inference and latency needs, and how new entrants—from renewable developers to crypto miners—are reshaping capital stacks.

NPM is a leading data, intelligence & events company providing business development led coverage of the US & European power, storage & data center markets for the development, finance, M&A and corporate community. Download our mobile app.

ลงทุนแมน
Hyperscale Data Centers: A Rising-Star Industry That Could Change the Game for the Thai Economy? | ลงทุนแมนจะเล่าให้ฟัง

ลงทุนแมน

Play Episode Listen Later Jul 24, 2025 11:47


Hyperscale Data Centers: a rising-star industry that could change the game for the Thai economy? Picture a country that had never made the world's technology lists, yet suddenly saw Amazon, Google, Microsoft, and TikTok all arrive to invest within the span of two years. That country is Thailand. In 2024 alone, it attracted 47 foreign data center investment projects worth more than 240 billion baht in total. How could data centers, a rising-star industry, transform the entire Thai economy? And can Thailand become the "digital infrastructure" of the region? ลงทุนแมน tells the story.

Leading with Curiosity
Ep.58 What distracts you from running your core business? Corey Hart - Professor of Business, Fractional CXO, Startup & Scaleup Advisor. Grand Rapids, MI

Leading with Curiosity

Play Episode Listen Later Jul 15, 2025 32:04


Summary
In this conversation, Nate Leslie interviews THE Corey Hart. Yes, he's been conveniently confused with the Boy in the Box many times. Corey is an experienced entrepreneur and educator, discussing the challenges and distractions faced by startups, particularly in fundraising. Corey emphasizes the importance of focusing on customer money over investor money, the role of curiosity in entrepreneurship, and the significance of understanding product market fit. They also explore the concept of unreasonable excellence in business, the value of mentorship, the impact of AI, and alternative funding models for startups. The discussion highlights the need for entrepreneurs to maintain their vision while navigating growth and scaling their businesses.

Take Nate's free High-Performance Index Leader Self Assessment: www.nateleslie.ca/gift
Connect with Corey: coreythehart.com

Keywords
startups, fundraising, entrepreneurship, curiosity, product market fit, mentorship, AI, hyperscale, business growth, alternative funding

Takeaways
Fundraising can distract founders from their core business.
Customer money is preferable to investor money.
Curiosity is essential for learning and adapting in business.
Understanding product market fit is crucial for success.
Unreasonable excellence can enhance customer experiences.
Mentorship programs like Spring GR are valuable for entrepreneurs.
AI can provide insights but requires careful implementation.
Hyperscale growth demands a focus on culture and team dynamics.
Alternative funding models can reduce pressure on founders.
Experience and wisdom are invaluable assets in entrepreneurship.

Sound Bites
"Why are you fundraising?"
"It's dangerous to delegate culture too soon."
"You should charge for this, man."

Chapters
00:00 Introduction to Corey Hart and Startups
02:58 The Distraction of Fundraising
05:58 The Importance of Curiosity in Entrepreneurship
08:54 Navigating Product Market Fit
11:46 Unreasonable Excellence in Business
15:04 Corey Hart's Mentorship and Community Involvement
17:45 The Role of AI in Business
20:41 Understanding Hyperscale and Growth
23:29 Alternative Funding Models for Startups
26:37 The Value of Experience in Entrepreneurship

My Climate Journey
Crusoe's Big Bet on AI Infrastructure and Energy

My Climate Journey

Play Episode Listen Later Jun 17, 2025 53:30


Cully Cavness is the co-founder, president, and COO of Crusoe, an energy-first AI infrastructure company. In this live episode recorded in Austin, Texas, Cully shares how Crusoe evolved from capturing flared gas for Bitcoin mining to becoming a leading developer of hyperscale data centers. He discusses the company's pivotal role in Project Stargate—a $500B AI infrastructure effort led by OpenAI, SoftBank, and Oracle—and how Crusoe is building a 1.2 gigawatt data center campus in Abilene, Texas. Cully reflects on the decision to divest its original Bitcoin business, the company's vertical integration strategy, and how energy abundance will shape the future of AI.

In this episode, we cover:
[00:24] An overview of Crusoe
[01:08] Its role in Project Stargate and Abilene data center
[03:41] Shift from outbound to inbound interest
[06:17] Company pivots and existential startup bets
[09:09] Sale of Bitcoin mining business to NYDIG
[11:40] Flared gas capture and climate impact overview
[14:57] From digital flare mitigation to stranded wind use
[17:27] Cully's personal energy background and worldview
[22:14] Why AI could drive climate and fusion breakthroughs
[25:47] Details of the 1.2 GW Abilene campus for Oracle
[36:42] 3,500 skilled trades supporting data center build
[44:42] Natural gas as a bridge fuel + CCS investments

Episode recorded on June 10, 2025 (Published on June 17, 2025)

Enjoyed this episode? Please leave us a review! Share feedback or suggest future topics and guests at info@mcj.vc.

Connect with MCJ:
Cody Simms on LinkedIn
Visit mcj.vc
Subscribe to the MCJ Newsletter

*Editing and post-production work for this episode was provided by The Podcast Consultant

JSA Podcasts for Telecom and Data Centers
André Busnardo of Tecto Data Centers on Hyperscale Expansion & Sustainability in Brazil at PTC'25

JSA Podcasts for Telecom and Data Centers

Play Episode Listen Later May 28, 2025 7:17


André Busnardo, Commercial Director at Tecto Data Centers, joins JSA TV at PTC'25 to share details about Tecto's new hyperscale facility in Santana do Parnaíba, the meaning behind “Big Lobster” and “Mega Lobster,” and how the company is balancing rapid expansion with sustainability. He also discusses why Brazil is the ideal location for their $1 billion expansion.

Grow Everything Biotech Podcast
130. SynBioBeta 2025 Recap: Hyperscale Biology

Grow Everything Biotech Podcast

Play Episode Listen Later May 23, 2025 60:33


In this energetic recap of SynBioBeta 2025, Erum and Karl bring listeners straight into the heart of the year's most important synthetic biology gathering. From Drew Endy's visionary keynote on distributed, localized manufacturing to the ever-present theme of AI as the enabler of hyperscale biology, this episode unpacks the biggest ideas shaping the future of biomanufacturing and materials innovation. The duo reflect on how the conference architecture—from hallway conversations to main stage moments—facilitated cross-disciplinary collisions and bold discussions about biology as the foundational infrastructure of the 21st century. Whether you're a bioengineer, startup founder, or investor, this debrief connects the dots between culture, technology, and sustainability in the evolving bioeconomy.

Grow Everything brings the bioeconomy to life. Hosts Karl Schmieder and Erum Azeez Khan share stories and interview the leaders and influencers changing the world by growing everything. Biology is the oldest technology. And it can be engineered. What are we growing?

Learn more at www.messaginglab.com/groweverything

Chapters:
00:00:00 – Kicking Off with Erum & Karl: SynBio Banter and Big Energy
00:00:34 – Straight from San Jose: SynBioBeta 2025, Unpacked
00:01:05 – Imagining 2050: Drew Endy's Blueprint for a Biological Future
00:02:25 – Bio Meets Geopolitics: Why Localized Manufacturing is Liberation
00:03:20 – Not Just Biology—Infrastructure for a New Economy
00:03:45 – Who is Drew Endy? The Mind Behind SynBio's Movement
00:04:32 – Inside the Conference: Boots on the Ground at San Jose
00:05:29 – Hyperscale Bio is Here: AI + SynBio = Exponential Potential
00:07:16 – Biopharma in the Hot Seat: What's Holding the Sector Back?
00:09:20 – Protein Design Gets Creative: Custom-Built Biology Takes Stage
00:19:53 – Space, Security, and the Pentagon's Synthetic Playbook
00:28:42 – Best of the Breakouts: Panels, Pitches, and Power Moves
00:32:02 – Bio Beyond the Hype: What Practitioners Really Need to Know
00:32:29 – The Dirty Biology Manifesto: Getting Real with Nature
00:33:14 – Strategic Alliances Panel: Where Collaboration Meets Competition
00:33:52 – Deal Drama: Why Biotech Partnerships Are So Hard to Land
00:37:48 – Beauty in 2030: The Biodesign Revolution in Personal Care
00:39:22 – State of Funding: Who's Still Betting on Biotech?
00:41:59 – Rethinking Capital: Creative Models for SynBio Startups
00:46:43 – Longevity, Hype, and the Future of Human Lifespan
00:53:12 – What We Learned: Reflections, Revelations, and Real Talk
00:54:16 – Final Words: Gratitude, Community, and What's Next

Links and Resources:
SynBioBeta
Drew Endy – Stanford Professor
Constructive Bio
Biomatter
Cradle Bio
IFF (International Flavors and Fragrances)
BIOMADE
DARPA BTO
NASA
Rhodium Scientific
Linkgevity
Amplify Ventures
Starlab Space
Cultivarium
Ample Agriculture
Cargill
P&G
Insempra
BIOWEG
Hawkwood Biotech
Green Bioactives
Cellugy
OneSkin
Plasmidsaurus

Topics Covered: biomanufacturing, cell-free biomanufacturing, enzymes, nutraceuticals, biotech, pharmaceuticals, AI, spinouts

Have a question or comment? Message us here:
Text or Call (804) 505-5553
Instagram / Twitter / LinkedIn / Youtube / Grow Everything
Email: groweverything@messaginglab.com

Music by: Nihilore
Production by: Amplafy Media

This Week in HPC
TWIHPC Episode 380 - Jensen Huang Lauds Trump Energy Policies; Pushing the Limits of Hyperscale

This Week in HPC

Play Episode Listen Later May 5, 2025 17:01


On This Week in HPC, Addison Snell and Doug Eadline take a look at Jensen Huang's visit to the White House, and analyze the future of the big players in HPC and AI.

The Data Center Frontier Show
iMasons CEO Santiago Suinaga on the Future of Sustainable AI Data Centers

The Data Center Frontier Show

Play Episode Listen Later Mar 25, 2025 24:47


For this episode of the DCF Show podcast, host Matt Vincent, Editor in Chief of Data Center Frontier, is joined by Santiago Suinaga, CEO of Infrastructure Masons (iMasons), to explore the urgent challenges of scaling data center construction while maintaining sustainability commitments, among other pertinent industry topics.

The AI Race and Responsible Construction
"Balancing scale and sustainability is key because the AI race is real," Suinaga emphasizes. "Forecasted capacities have skyrocketed to meet AI demand. Hyperscale end users and data center developers are deploying high volumes to secure capacity in an increasingly constrained global market." This surge in demand pressures the industry to build faster than ever before. Yet, as Suinaga notes, speed and sustainability must go hand in hand. "The industry must embrace a build fast, build smart mentality. Leveraging digital twin technology, AI-driven design optimization, and circular economy principles is critical." Sustainability, he argues, should be embedded at every stage of new builds, from integrating low-carbon materials to optimizing energy efficiency from the outset. "We can't afford to compromise sustainability for speed. Instead, we must integrate renewable energy sources and partner with local governments, utilities, and energy providers to accelerate responsible construction." A key example of this thinking is peak shaving—using redundant infrastructure and idle capacities to power the grid when data center demand is low. "99.99% of the time, this excess capacity can support local communities, while ensuring the data center retains prioritized energy supply when needed."

Addressing Embodied Carbon and Supply Chain Accountability
Decarbonization is a cornerstone of iMasons' efforts, particularly through the iMasons Climate Accord. Suinaga highlights the importance of tackling embodied carbon—the emissions embedded in data center construction materials and IT hardware. "We need standardized reporting metrics and supplier accountability to drive meaningful change," he says. "Greater transparency across the supply chain can be achieved through carbon labeling of materials and stricter procurement policies." To mitigate embodied emissions, companies should prioritize suppliers with validated Environmental Product Declarations (EPDs) and invest in low-carbon alternatives like green concrete and recycled steel. "Collaboration across the industry will be essential to drive policy incentives for greener supply chains," Suinaga asserts.

The Role of Modular and Prefabricated Builds
As the industry seeks more efficient construction methods, modular and prefabricated builds are emerging as game changers. "They significantly reduce construction waste, improve quality control, and shorten deployment times," Suinaga explains. "By shifting a large portion of the build process to controlled environments, we can improve worker safety and optimize material usage. Companies leveraging prefabrication will gain a competitive edge in both cost savings and sustainability." Modular construction also presents financial advantages. "It allows for deferred CapEx investments, creating attractive internal rates of return (IRRs) for investors while reducing the risk of oversupply by aligning capacity with demand," Suinaga notes. However, he acknowledges that the approach has challenges, including potential supply chain constraints and quick time-to-market pressures during demand spikes. "Maintaining a recurrent production cycle and closely monitoring market conditions are key to ensuring capacity planning aligns with real-time needs."

Innovation in Cooling and Water Use
With AI workloads driving increasing power densities, the industry is rapidly shifting toward liquid cooling, immersion cooling, and heat reuse strategies. "We're seeing innovations in direct-to-chip cooling and closed-loop water systems that significantly reduce water consumption," Suinaga says. "Some data centers are capturing and repurposing waste heat to provide energy to nearby facilities—an approach that needs to be scaled." Immersion cooling, he adds, offers the potential to shrink data center footprints and dramatically improve Power Usage Effectiveness (PUE). "A hybrid approach combining air and liquid cooling is key," Suinaga explains. "There's still uncertainty around the right mix of technologies, as hyperscalers need to support not just AI but also continued cloud growth. Flexibility in cooling design is now essential to accommodate a diverse range of workloads."

Regulatory Pressures and the Future of Sustainability Standards
Regulatory frameworks such as the SEC's climate disclosure rules and Europe's Corporate Sustainability Reporting Directive (CSRD) are pushing data center operators toward greater transparency. Suinaga believes these measures will enforce more accurate sustainability reporting and drive greener investment decisions. "This will push data center operators to adopt more energy-efficient designs early in the planning phase and, in the long term, standardize carbon reporting and create incentives for sustainable practices," he explains. He also highlights the role of investors and publicly traded companies in enforcing stricter climate reporting requirements across their portfolios. "At iMasons, we are refining existing reporting benchmarks and frameworks to provide the industry with a holistic view of best practices. This is an area where we aim to support data center operators with an analytical approach."

The Road to Net Zero: Overcoming Challenges
Despite ambitious net zero goals, execution remains a significant challenge. "The biggest roadblock to net zero is the availability of truly carbon-free energy and materials at scale," Suinaga states. Achieving net zero requires substantial investment in renewable infrastructure, grid connectivity improvements, and energy storage innovation. To accelerate progress, he emphasizes the importance of adopting circular economy practices, advocating for renewable energy policy support, and investing in next-generation cooling and power technologies. "The demand from AI is outpacing current power infrastructure and renewable options. While some net zero commitments may be delayed, investing in new technologies and clean energy solutions will ultimately put us back on the path to net zero."

Workforce Development and Addressing the Talent Shortage
The digital infrastructure industry has long faced a talent shortage, which has only become more urgent as demand increases. To help address this challenge, iMasons has launched a new job-matching platform. "It's designed to bridge the talent gap by connecting skilled professionals with opportunities in digital infrastructure," Suinaga explains. "For job seekers, it's free to use, providing a streamlined way to match with job listings based on skills, experience, and location." For employers, iMasons partners gain access to the platform to find vetted candidates efficiently. "At the pace this industry is growing, the current workforce isn't enough—we need to bring in talent from other industries and create new career pathways. Digital infrastructure is recession-proof and offers tremendous opportunities for growth."

Industry Partnerships Driving Innovation
iMasons has been expanding its partnerships, adding 15 new partners in recent months. "We've welcomed companies from various backgrounds, including AI-driven construction management firms, energy-related companies, and cooling solution providers," Suinaga shares. "iMasons is a hub for industry collaboration, helping to drive innovation across the entire digital infrastructure ecosystem. Our mission is simple: to ensure the industry thrives."

Looking Ahead
As AI accelerates the demand for digital infrastructure, the industry must embrace innovative, responsible strategies to balance scale with sustainability. iMasons, alongside major players in the sector, is committed to ensuring the next generation of data centers are not just fast to deploy but also environmentally responsible.

TechCrunch Startups – Spoken Edition
Amid calls for sovereign EU tech stack, Evroc raises $55M to build a hyperscale cloud in Europe

TechCrunch Startups – Spoken Edition

Play Episode Listen Later Mar 21, 2025 6:28


A Swedish startup aiming to build a hyperscale cloud company in Europe has raised €50.6 million ($55 million) in Series A funding. Evroc, as it's called, says it's laying the foundations for a “secure, sovereign and sustainable hyperscale cloud to reimagine the digital future of Europe.”

Today with Claire Byrne
New "hyperscale" data centres planned for Ireland

Today with Claire Byrne

Play Episode Listen Later Mar 20, 2025 9:31


Daniel Murray, Policy Editor with the Business Post

Energy Policy Now
The Future of Electricity Demand in the AI Era

Energy Policy Now

Play Episode Listen Later Feb 11, 2025 38:44


Grid Strategies' Rob Gramlich discusses the dramatic increase in electricity demand from data center and manufacturing growth, and the challenges it presents for the grid.

Electricity demand growth has returned with a vengeance in the United States due to an increase in manufacturing and, most dramatically, the growing use of AI. Across the country, technology giants are racing to build AI data centers, the largest of which will consume as much electricity as an entire mid-sized city. Yet our electrical grid was not built with such large and immediate new sources of power demand in mind, and it has become clear that solutions are urgently needed if our grid is to successfully accommodate this new load. Adding to the challenge is the fact that forecasts of future demand have been frequently and dramatically revised upwards. The future of electricity demand looks big, but just how big remains uncertain.

Rob Gramlich, president of power sector consultancy Grid Strategies and a frequent expert witness on grid issues before Congress and regulatory agencies, explores the future of electricity demand. Gramlich discusses data from a new Grid Strategies report on the pace of demand growth, and a variety of strategies by which our electric grid might meet that demand. He also considers implications for the cost of electricity and the pace of grid decarbonization.

Related Content:
Should 'Energy Hogs' Shoulder More of the Utility Cost Burden? https://kleinmanenergy.upenn.edu/research/publications/should-energy-hogs-shoulder-more-of-the-utility-cost-burden/
How Can We Improve the Efficiency of Electricity Pricing Systems? https://kleinmanenergy.upenn.edu/research/publications/how-can-we-improve-the-efficiency-of-electricity-pricing-systems/

Energy Policy Now is produced by The Kleinman Center for Energy Policy at the University of Pennsylvania. For all things energy policy, visit kleinmanenergy.upenn.edu.

See omnystudio.com/listener for privacy information.

The IBJ Podcast
With billions at stake, hyperscale data centers become charged issue in Indiana

The IBJ Podcast

Play Episode Listen Later Feb 9, 2025 33:29


You don't need to be too technically savvy to pick up on the charged atmosphere surrounding large-scale data centers. Various technology-heavy industries need data centers as a kind of way station and storage point for all the electronic information they generate and process. As technology evolves at a breakneck speed, the size of these centers grows. In October, the financial firm Blackstone forecast that over the next five years, the United States will see $1 trillion in data center investments. Indiana really wasn't on the map of the big tech firms, at least in terms of building centers, until very recently. In the last 14 months, seven data center projects have been announced for the state representing more than $15 billion in potential investment. Some Indiana legislators see them as huge economic development opportunities. Indiana House Speaker Todd Huston, R-Fishers, has said, quote, “I want every data center that we can get in the state of Indiana.” But the sudden surge in announced centers has generated a lot of concern as well about their drain on Indiana utilities and, in some cases, their water-intensive cooling systems. Indiana lawmakers are considering a spate of bills regarding data centers in the current legislative session. IBJ technology reporter Susan Orr is our guest this week on the IBJ Podcast to get us current on the demand for data centers and how that's manifesting in Indiana.

Unsupervised Learning
Ep 53: SemiAnalysis Founder Dylan Patel on New AI Regulations, Future of Chinese AI & xAI's Scrappy Surge to Hyperscale

Unsupervised Learning

Play Episode Listen Later Jan 21, 2025 84:15


In this episode of Unsupervised Learning, we sit down with Dylan Patel, Chief Analyst at SemiAnalysis, to break down what the sweeping new AI diffusion regulations really mean. From how they consolidate power among Big Tech to China's narrowing options for AI dominance, we unpack the impact of this regulatory shift.

Follow SemiAnalysis: https://semianalysis.com/

[0:00] Intro
[1:07] Grading the AI Diffusion Rule
[3:48] What Will Happen to the Malaysian Data Centers?
[7:23] How do the Regulations Favor Giant Tech Companies?
[9:07] Pre-Regulation AI Landscape
[13:00] Where Does Chinese AI Go From Here?
[22:00] The Goldilocks Approach to Regulation
[24:16] Size of Cluster Buildouts Today
[37:47] How Big Will Cluster Buildouts Get?
[43:00] Are Open-Source Models Falling Behind?
[47:51] Questions Dylan Wants the Answer To
[51:30] Hardware Startups
[1:01:05] The Future of Enterprise AI
[1:05:10] What Made CoreWeave So Successful?
[1:19:28] Quickfire

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint

Unnatural Selection
The One About AI

Unnatural Selection

Play Episode Listen Later Jan 19, 2025 84:47


On this week's episode of the Unnatural Selection Podcast, we discuss: The inside story of how an unlikely alliance of Trump and Biden led to the historic Gaza ceasefire deal. Biden's administration proposes new rules on exporting AI chips, provoking an industry pushback. Takeaways from Pete Hegseth's contentious confirmation hearing. Melania's $40m Amazon deal: another sign Bezos is capitulating to Donald Trump. The Insurance Apocalypse. Anduril Goes Big In Ohio With Arsenal 1 'Hyperscale' Drone Plant. Apple suspends AI-generated news alert service after BBC complaint.

The Unnatural Selection podcast is produced by Jorge Tsipos, Adam Direen and Tom Heath. Visit the Unnatural Selection website at www.UnnaturalShow.com for stuff and things. The views expressed are those of the hosts and their guests and do not reflect those of any other entities. Unnatural Selection is a show made for comedic purposes and should not be taken seriously by anyone.

Twitter: @JorgeTsipos @TomDHeath @UnnaturalShow
Instagram: @JorgeTsipos @Tom.Heath @UnnaturalShow

Hyperscale by Briar Prestidge
#E54 We Need to Teach Our Kids About AI Literacy With Angela Radcliffe

Hyperscale by Briar Prestidge

Play Episode Listen Later Jan 10, 2025 39:35


Season 3 of HYPERSCALE is here.  Join Briar Prestidge and Angela Radcliffe as they explore how technology and data are shaping the future of the next generation. From AI literacy in the classroom to the risks of deep fakes, they break down why we need to rethink how we educate the next generation. They also discuss how teaching kids about data literacy could open doors to solving real-world challenges. Plus, hear their perspective on why platforms like TikTok and Roblox aren't just distractions—they're an essential part of the digital world kids are already navigating. Angela Radcliffe is a trailblazer in clinical research and a leading advocate for data ethics. With over two decades of experience at the intersection of health, technology, and data, she has driven transformative change and fostered organizational agility. Her work has impacted more than 100 global clinical research initiatives across nearly every therapeutic area. A bestselling author, Angela wrote Quantum Kids: Guardians of AI, an innovative activity book that introduces elementary and middle school students to AI fundamentals. Her expertise not only advances the field but also inspires the next generation, setting new standards for ethical data use and pioneering advancements that make a difference worldwide. FOLLOW ► Instagram LinkedIn  TikTok Website

The Mike Hosking Breakfast
Vanessa Sorensen: Microsoft NZ Managing Director on the new hyperscale data centre in Auckland's Westgate

The Mike Hosking Breakfast

Play Episode Listen Later Dec 11, 2024 4:30 Transcription Available


Microsoft says its new New Zealand facility will create thousands of jobs and pump money into the economy. The tech giant is today opening its first "hyperscale" data centre in Auckland's Westgate to power cloud-based software and AI tools. It will ensure local organisations' data can be stored, processed and backed up locally, addressing sovereignty issues for the Government and banks. Managing Director Vanessa Sorensen told Heather du Plessis-Allan they plan to train up 100,000 people over the next two years. She says it's going to supercharge the country's digital transformation and enhance data residency, security and compliance.

This Week in HPC
TWIHPC Episode 374: Hyperscale AI Boom Continues What to Watch for at SC24

This Week in HPC

Play Episode Listen Later Nov 15, 2024 21:21


Addison Snell and Doug Eadline discuss the ramifications of the AI boom, Intersect360 Research's newest market forecast update, and what to watch for at SC24.

Tech Won't Save Us
Data Vampires: Going Hyperscale (Episode 1)

Tech Won't Save Us

Play Episode Listen Later Oct 7, 2024 31:47


Amazon, Microsoft, and Google are the dominant players in the cloud market. Around the world, they're building massive hyperscale data centers that they claim are necessary to power the future of our digital existence. But they also increase their power over other companies and come with massive resource demands that communities are getting fed up with. Is their future really the one we want? This is episode 1 of Data Vampires, a special four-part series from Tech Won't Save Us.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon.

The show is hosted by Paris Marx. Production is by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry.

Also mentioned in this episode:
Senior cloud consultant Dwayne Monroe and Associate Professor in Economics Cecilia Rikap were interviewed for this episode.
Interviews with Jeff Bezos and The Oregonian journalist Mike Rogoway were cited.

Renewable Energy SmartPod
Suzanne Leta from Fluence on the Policies (And Politics) of Domestic Manufacturing

Renewable Energy SmartPod

Play Episode Listen Later Sep 24, 2024 35:50 Transcription Available


Sponsored by KPMG and EDF Renewables. With the election fast approaching, many people in the renewable energy industry are wondering how a potential shift in the current political power structure in the US might impact the push for clean energy. With that uncertainty in mind, Suzanne Leta, vice president of policy and advocacy for the Americas at Fluence, joins the show to discuss tax incentives related to the Inflation Reduction Act, with a particular focus on domestic manufacturing. Suzanne also explains how tariffs related to batteries are structured in a way that makes them surprisingly vulnerable to the whims of whoever occupies the White House. Suzanne also touches on cybersecurity and shares her insights on the demands that data centers powering AI are placing on batteries and the rest of the grid, and how smart utilities and other organizations are planning to manage that increased energy demand.
More insights from Suzanne: Powering America's Future: Grid-Scale Energy Storage Boosts Grid Reliability, Jobs and US Manufacturing
More resources from KPMG: Survey results: High energy expectations for renewables
More resources from EDF Renewables: What We Do
Key highlights from Suzanne:
The buzz around domestic manufacturing (4:01)
Other IRA incentives like the PTC, ITC and other bonus tax credits (10:45)
The whimsical nature of Section 301 tariffs on batteries (13:20)
The politics of the election and the energy transition in the US (17:15)
Cybersecurity and the grid (22:12)
Managing the boom in energy demand from AI data centers (26:41)
'Hyperscale' data centers vs. typical data centers (27:24)
Suzanne's bold predictions about the future of battery storage (31:46)
Sign up for the Renewable Energy SmartBrief. Follow the show on Twitter @RenewablesPod

The Data Center Frontier Show
Prometheus Hyperscale Expands Data Center Horizons to 1 GW

The Data Center Frontier Show

Play Episode Listen Later Sep 24, 2024 28:04


Prometheus Hyperscale is the new corporate entity formed this month, which expands upon the footprint and the promise of the Wyoming Hyperscale White Box project, first reported on by DCF in 2022. For this episode of the Data Center Frontier Show podcast, we spoke with Trenton Thornock, founder of Wyoming Hyperscale, who has been appointed Chief Executive Officer of Prometheus Hyperscale; Trevor Neilson, a seasoned climate-tech CEO and energy transition investor, who joins as the company's President; and John Gross, President of J.M. Gross Engineering, who is handling the project's liquid cooling infrastructure. The Wyoming Hyperscale White Box data center has been under construction since 2022 on 58 acres of land near Aspen Mountain in Evanston, Wyoming, and represents a blueprint for creating super-efficient data centers with low impact on the environment and benefits for the local community. In the transition, Wyoming Hyperscale has merged with Prometheus Hyperscale and been expanded from a 120 MW project to plans for a data center campus with 1 GW of IT capacity. The data center is being built on land owned by Thornock's family, which has been involved in ranching for six generations. The location benefits from ready access to renewable energy from nearby wind and solar farms. Wyoming Hyperscale has a contract with Rocky Mountain Power for 120 megawatts of power and a 138 kV substation, which is fed by the same switchgear as the renewable energy generation sites. The site sits on a major east-west fiber highway that tracks the 41st parallel, along which data center hubs have emerged in places like Ohio, Iowa, Nebraska and Utah. The Union Pacific Railroad line, which provides key rights-of-way for fiber deployment, runs through nearby Aspen Mountain. The Evanston project underscores Prometheus Hyperscale's commitment to sustainability and innovation.
By integrating 100% renewable energy and advanced liquid cooling technology combined with heat reuse, the Evanston facility promises to be one of the most efficient and environmentally friendly data centers in the world. Importantly, less than 10% of the project's power development plan is grid dependent (120 MW of 1,220 MW, or 9.84%). The first facilities yielded by Phase 1 of the Evanston project are expected to come online within the next 18 months. Prometheus Hyperscale has also revealed plans to construct four other data centers across Arizona and Colorado. And as previously reported by DCF, this May saw the announcement of a 20-year power purchase agreement (PPA) by fission-based nuclear small modular reactor (SMR) specialist Oklo to deliver 100 MW of power to Prometheus, using Oklo's Aurora Powerhouse reactors for power generation. "Our partnership with Oklo not only provides us with a reliable, clean energy source but also positions us as a leader in sustainable data center operations," said Thornock. "Sam Altman's and Jacob DeWitte's vision for a sustainable future through advanced energy solutions aligns perfectly with our mission at Prometheus Hyperscale." During the podcast, Thornock discussed the evolution of the Wyoming Hyperscale project with Prometheus, highlighting its growth to a 1 GW prospect since the Evanston project's groundbreaking in 2022. For his part, Trevor Neilson emphasized increasing demand for Prometheus driven by advancements in computing power and the importance of sustainability in the energy transition. Our conversation also covered the company's partnership with Oklo, focusing on the streamlined permitting process for small modular reactors in Wyoming and the strategic use of resources for data center energy generation.

The Azure Podcast
Episode 500 - Database Watcher

The Azure Podcast

Play Episode Listen Later Jul 20, 2024


The team catches up with Daniel Taylor and Bradley Ball of the Azure FastTrack team and learns how Database Watcher can help customers better manage and debug Azure SQL Database, elastic pools, SQL Managed Instance, and Hyperscale databases and their replicas. Daniel and Bradley also share the really cool work they are doing helping Azure customers learn about Azure with their Tales from the Field YouTube channel, plus we hear about some upcoming events where you can see Daniel and Bradley in person! Media file: https://azpodcast.blob.core.windows.net/episodes/Episode500.mp3 YouTube: https://youtu.be/Z_jBWLUBY9g Resources: https://learn.microsoft.com/en-us/azure/azure-sql/database-watcher-overview?view=azuresql https://youtu.be/qr1Qxwao68M?si=mO24KZF3PAvTnnmn Other updates: https://azure.microsoft.com/en-us/updates/v2/Guest-OS-Family-Retirement https://azure.microsoft.com/en-us/updates/v2/cmk-for-backup-vaults-ga/ https://azure.microsoft.com/en-us/updates/v2/Azure-Elastic-SAN-Feature-Updates Azure classic resource providers will be retired on 31 August 2024 | Azure updates | Microsoft Azure

Inside Data Centre Podcast
Hannah Kramer, Chief Legal Officer, Apto: Working with Hyperscale customers

Inside Data Centre Podcast

Play Episode Listen Later Jul 15, 2024 37:36


In this episode I am joined by Hannah Kramer, Chief Legal Officer at Apto. We discuss Hannah's career in the sector, the dynamics of working with hyperscale customers, and how we can attract more legal professionals to the world of data centres. Hannah shares how she started her career, how she made the transition to the data centre sector, and the differences between working for the customer and the landlord. We discuss the nuances of working with hyperscale customers, how hyperscale lease provisions differ from institutional leases, and the importance of legal professionals within the sector. Finally, Hannah shares some advice on how we can attract more legal professionals to the data centre sector. Learn more about Apto here: Apto – Hyperscale data centres. Designed for you. Built to scale. (aptodc.com) The Inside Data Centre Podcast is recorded in partnership with DataX Connect, a specialist data centre recruitment company based in the UK. They operate on a global scale to place passionate individuals at the heart of leading data centre companies. To learn more about Andy Davis and the rest of the DataX team, click here: DataX Connect

Beyond The Tech
In Conversation with Max Costa; a case study in how to hyperscale social impact

Beyond The Tech

Play Episode Listen Later Jun 21, 2024 53:43


In this episode Bryan and Alex are joined by Max Costa. Max's journey is one of passion, perseverance, and profound social impact. His is a unique story that showcases how combining talent, passion, purpose and tech can change the lives of many. From the small town of Alba, Italy, Max's story begins with his passion for and subsequent career as a classical musician, before he left for New York to pursue his studies at Columbia and then joined BCG, where he developed his leadership skills as well as a deeper understanding of how tech can help transform industry and the lives of millions of people. His desire to create social impact at scale led him to join the United Nations World Food Programme, where he was instrumental in the success of the "Share The Meal" app, which won prestigious awards from Tim Cook at Apple as well as from Google. This innovative platform has facilitated the donation of over 200 million meals, making significant strides towards combating global hunger. Today, Max leads Develhope, a startup dedicated to empowering young people with future-proof skills in software development and data engineering. His vision is to bridge the gap between untapped talent and the growing demand for skilled tech professionals, particularly in regions with high youth unemployment. Tune in to hear Max's inspiring journey, exploring the challenges and triumphs that have shaped his career. Discover how Max's unique blend of musical discipline, academic excellence, and social commitment continues to drive positive change in the world. This episode is a testament to the power of following one's passion and the impact of technology in creating a better future.

Cloud Realities
CR066: Unlimited power for your data centers with Michael Crabb, Last Energy

Cloud Realities

Play Episode Listen Later May 30, 2024 43:49


There is a scary contradiction between the increased power consumption of the latest technical innovations and the impact that consumption will have on society and the planet. This week Dave and Rob talk to Michael Crabb, SVP of Commercial at Last Energy, about the huge growth in power consumption due to electrification, power perhaps becoming the new constraint, modular nuclear power, the perceived versus real danger of nuclear, whether we have any alternatives, and how long it takes to get modular power to a data centre.
TLDR:
03:00: Hyper personalization of communication
06:25: Cloud conversation with Michael Crabb
34:00: Hyperscale data centers and sustainability
52:35: Waiting for the baby!
Guest
Michael Crabb: https://www.linkedin.com/in/michaelcrabb1/
Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Sjoukje Zaal: https://www.linkedin.com/in/sjoukjezaal/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
Production
Marcel Van Der Burg: https://www.linkedin.com/in/marcel-van-der-burg-99a655/
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

JSA Podcasts for Telecom and Data Centers
CleanArc Data Centers' John Day on the Future of Hyperscale Data Center Development

JSA Podcasts for Telecom and Data Centers

Play Episode Listen Later May 17, 2024 7:18


The Data Center Frontier Show
Hyperscale LED Lighting Approach A Pathfinder for All Data Centers

The Data Center Frontier Show

Play Episode Listen Later Jan 17, 2024 33:02


For this episode of the DCF Show podcast, Data Center Frontier spoke with Sam Rabinowitz, CEO of Lantana, a supplier and provider of LED luminaires for the data center industry -- especially for hyperscalers, but also for energy-efficiency retrofits in mature facilities. Key discussion points include the following:
0:15 - Lantana broke into the data center industry by working with a hyperscaler customer to design and implement rapid deployment prototypes for their initial data center builds on the interior structure, including lighting.
3:14 - Lantana's LED fixtures run cool and are energy-efficient, achieving up to 90% efficiency over nearly a decade of use. The LED lighting fixtures are UL certified for elevated ambient operating temperatures, providing operational flexibility for data centers in hot environments.
5:45 - Sam explains how Lantana's focus on energy efficiency and materials efficiency can lead to cost savings and a positive impact on the environment.
13:26 - Sam emphasizes the importance of a "micro to macro" approach in greening data centers, starting with individual components and scaling up to entire campuses and programs.
15:46 - Data Center Frontier Editor in Chief Matt Vincent asks for takes regarding the impact of AI on the data center industry. In response, Sam discusses the need for new products and approaches to designing and engineering data centers to accommodate chip-level heat.
19:32 - Matt asks about Lantana's plans for 2024. In response, Sam describes Lantana's new products as being tailored for digital infrastructure and the expansion of the hyperscalers, as well as furnishing renovations for increased energy efficiency in data centers of all sizes.
26:46 - Sam emphasizes the importance of lighting in data centers for safety and functionality, and the discussion compares it to cabling as a core, fundamental element of every data center.
Visit Data Center Frontier.

“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
Molly Graham on Building a Powerful Network, Managing Your Emotions, and Creating High Performing Teams in Hyperscale

“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies

Play Episode Listen Later Nov 9, 2023 66:44


Molly Graham is a seasoned exec, builder, and widely-read writer in tech. She joins Kelli Dragovich and Nolan Church to discuss her lessons from working on different challenges (HR, Comms, GTM, COO) in hyper-scaling companies, career building, assembling strong leadership teams, and managing the rollercoaster emotions of zero-to-one startups. If you're looking for HR software that drives performance, check out Lattice: https://www.lattice.com/hrheretics After the interview, Nolan and Kelli discuss the uproar over whether to use LinkedIn's Open to Work badge, since Nolan was quoted in the press as being very critical of the feature and the signal it sends to hiring managers. – SPONSORS: Lattice | Continuum ✅ Discover HR software that drives performance with Lattice: https://www.lattice.com/hrheretics High performance and great culture should never be at odds; they're better together. With the Lattice People Management Platform, companies efficiently run people programs that create enviable cultures where employees want to do their best work. Serving thousands of customers of all sizes. Learn why companies from Slack to the LA Dodgers choose Lattice. https://www.lattice.com/hrheretics ✅ Hire fractional executives with Continuum using this link: https://bit.ly/40hlRa9 Have you ever had a negative experience hiring executives? Continuum connects executives and senior operators to venture-backed tech companies for fractional and full-time roles. You can post any executive-level role to Continuum's marketplace and search through our database of world-class, vetted leaders. There is no hidden cost; you only pay the person you hire. And you can cancel at any time.
– KEEP UP WITH MOLLY, NOLAN, + KELLI ON LINKEDIN Molly: https://www.linkedin.com/in/mograham/ Nolan: https://www.linkedin.com/in/nolan-church/ Kelli: https://www.linkedin.com/in/kellidragovich/ – LINKS: Molly's newsletter: https://mollyg.substack.com/ Molly's operator community: https://www.andthen.com/club-for-the-glue-people Give Away Your Legos: https://review.firstround.com/give-away-your-legos-and-other-commandments-for-scaling-startups Make Friends with the Monster Chewing At Your Leg: https://review.firstround.com/make-friends-with-the-monster-chewing-on-your-leg-and-other-tips-for-surviving-startups – TIMESTAMPS: (00:00) Episode Preview (01:06) Introduction to this episode (03:30) Behind Molly's viral articles and her years at Google and Facebook (06:15) Emotions at work (08:39) Normalizing founder/ early team depression (09:50) Lessons scaling at different stages (11:00) Becoming the 9th employee at Quip (14:20) Overnight success (15:35) You have to pick a big problem (19:10) Sponsors: Lattice | Continuum (21:30) Navigating a career with a high risk appetite (23:50) Molly's Chamath story (27:25) Creating opportunities by cementing relationships (33:40) How to get teams to work well together (37:45) Hiring great execs (40:30) Exec tenures (43:00) Moving faster on terminations (49:50) What do founders want now from a CPO? (53:10) Rapid fire: best hire, best interview question (54:30) Kelli and Nolan on the Open to Work controversy – HR Heretics is brought to you by Turpentine www.turpentine.co Producer: Natalie Toren  Production: Lauren Ligovich, Michelle Poreh  For inquiries about guests or sponsoring the podcast, email Natalie@turpentine.co