Bryan Cantrill is the co-founder and CTO of Oxide Computer Company. We discuss why the biggest cloud providers don't use off-the-shelf hardware, how scaling data centers at Samsung's scale exposed problems with hard drive firmware, how the values of NodeJS are in conflict with robust systems, choosing Rust, and the benefits of Oxide Computer's rack-scale approach. This is an extended version of an interview posted on Software Engineering Radio.

Related links: Oxide Computer, Oxide and Friends, Illumos, Platform as a Reflection of Values, RFD 26, bhyve, CockroachDB, Heterogeneous Computing with Raja Koduri.

Transcript
You can help correct transcripts on GitHub.

Intro [00:00:00] Jeremy: Today I am talking to Bryan Cantrill. He's the co-founder and CTO of Oxide Computer Company, he was previously the CTO of Joyent, and he also co-authored the DTrace tracing framework while he was at Sun Microsystems. [00:00:14] Jeremy: Bryan, welcome to Software Engineering Radio. [00:00:17] Bryan: Uh, awesome. Thanks for having me. It's great to be here. [00:00:20] Jeremy: You're the CTO of a company that makes computers. But I think before we get into that: a lot of people who build software, now that the actual computer is abstracted away, they're using AWS or they're using some kind of cloud service. So I thought we could start by talking about data centers, [00:00:41] Jeremy: 'cause you were previously working at Joyent, and I believe you got bought by Samsung, and you've previously talked about how you had to figure out, how do I run things at Samsung's scale. So how was your experience with that? What were the challenges there? Samsung scale and migrating off the cloud [00:01:01] Bryan: Yeah, I mean, so Joyent was a cloud computing pioneer. We competed with the likes of AWS, and then later GCP and Azure. And we were operating at a scale, right?
We had a bunch of machines, a bunch of DCs, but ultimately, you know, we were a VC-backed company, a small company by the standards of, certainly, Samsung. [00:01:25] Bryan: And when Samsung bought the company, I mean, the reason that Samsung bought Joyent is that Samsung's cloud bill was, uh, let's just say it was extremely large. They were spending an enormous amount of money every year on the public cloud. And they realized that in order to secure their fate economically, they had to be running on their own infrastructure. [00:01:51] Bryan: It did not make sense. And there was not really a product that Samsung could go buy that would give them that on-prem cloud. In that regard, the state of the market was really no different. And so they went looking for a company, and bought Joyent. And when we were on the inside of Samsung, [00:02:11] Bryan: that's when we learned about Samsung scale. And Samsung loves to talk about Samsung scale. And I gotta tell you, it is more than just chest thumping. Samsung scale really is, I mean, just the sheer number of devices, the number of customers, just this absolute size. They really wanted to take us out to levels of scale, certainly, that we had not seen. [00:02:31] Bryan: The reason for buying Joyent was to be able to stand up on their own infrastructure, so we were gonna go buy, and we did go buy, a bunch of hardware. Problems with server hardware at scale [00:02:40] Bryan: And I remember just thinking, God, I hope Dell is somehow magically better. I hope the problems that we have seen in the small... You know, I just remember hoping, and hope is, of course, a terrible strategy, and it was a terrible strategy here too.
And the problems that we saw at the large: when you scale out, the problems that you see kind of once or twice, you now see all the time, and they become absolutely debilitating. [00:03:12] Bryan: And we saw a whole series of really debilitating problems. In many ways, like, comically debilitating, in terms of showing just how bad the state of the art is. And we had, I mean, it should be said, we had great software and great software expertise, and we were controlling our own system software. [00:03:35] Bryan: But even controlling your own system software, your own host OS, your own control plane, which is what we had at Joyent, ultimately you're pretty limited. I mean, you've got the problems that you can obviously solve, the ones that are in your own software. But the problems that are beneath you, the problems that are in the hardware platform, the problems that are in the componentry beneath you, become the problems that are in the firmware. IO latency due to hard drive firmware [00:04:00] Bryan: Those problems become unresolvable, and they are deeply, deeply frustrating. And we just saw a bunch of 'em. Again, they were comical in retrospect, and I'll give you a couple of concrete examples just to give you an idea of what you're looking at. One of our data centers had really pathological IO latency. [00:04:23] Bryan: We had a very database-heavy workload. And this was kind of right at the period where you were still deploying on rotating media, on hard drives. An all-flash buy did not make economic sense when we did this in 2016. It'd be interesting to know when was kind of the last time that actual hard drives made sense?
'Cause I feel this was close to it. So we had a bunch of pathological IO problems, but we had one data center in which the outliers were actually quite a bit worse, and there was so much going on in that system, it took us a long time to figure out why. Because when you're seeing worse IO, naturally you wanna understand, like, what's the workload doing? [00:05:14] Bryan: You're trying to take a first-principles approach. What's the workload doing? So this is a very intensive database workload to support the object storage system that we had built, called Manta. And the metadata tier was stored in Postgres, and that was just getting absolutely slaughtered. [00:05:34] Bryan: Ultimately very IO bound, with these kind of pathological IO latencies. And as we were trying to peel away the layers to figure out what was going on, I finally had this thing. It's like, okay, at the device layer, at the disk layer, we are seeing pathological outliers in this data center that we're not seeing anywhere else. [00:06:00] Bryan: And that does not make any sense. And the thought occurred to me: do we have a different rev of firmware on our HGST drives? HGST, now part of WD, Western Digital, were the drives that we had everywhere. So maybe I had a firmware bug. [00:06:20] Bryan: This would not be the first time in my life at all that I would have a drive firmware issue. And I went to go pull the firmware rev, and I'm like, Toshiba makes hard drives? I had no idea that Toshiba even made hard drives, let alone that they were in our data center. [00:06:38] Bryan: I'm like, what is this?
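The kind of analysis that surfaces a finding like this, comparing IO latency tails across drive populations keyed by firmware rev, can be sketched roughly as below. The drive labels, firmware revs, and latency numbers here are all invented for illustration; the point is that an outlier-prone population jumps out when you look at tail percentiles rather than averages:

```rust
use std::collections::BTreeMap;

/// Nearest-rank p99: sort the samples and take the value at ceil(0.99 * n) - 1.
fn p99(samples: &mut Vec<u64>) -> u64 {
    samples.sort_unstable();
    let rank = ((samples.len() as f64) * 0.99).ceil() as usize;
    samples[rank - 1]
}

fn main() {
    // Hypothetical per-IO latencies in milliseconds, keyed by drive model
    // and firmware rev (the field that revealed the surprise vendor).
    let mut by_firmware: BTreeMap<&str, Vec<u64>> = BTreeMap::new();
    by_firmware.insert("drive A, rev 1", (0..100u64).map(|i| 5 + i % 15).collect());

    // A second population hides rare ~2,700 ms stalls among otherwise normal IOs.
    let mut stalling: Vec<u64> = (0..95u64).map(|i| 5 + i % 15).collect();
    stalling.extend([2700u64; 5]);
    by_firmware.insert("drive B, rev 7", stalling);

    // The means would look similar; the tails do not.
    for (fw, samples) in by_firmware.iter_mut() {
        println!("{fw}: p99 = {} ms", p99(samples));
    }
}
```

With data shaped like this, the stalling population's p99 lands at the 2,700 ms stall while the healthy one stays in the tens of milliseconds, which is exactly the kind of signal that points at one drive/firmware combination rather than the workload.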
And as it turns out, and this is, you know, part of the challenge when you don't have an integrated system, which, not to pick on them, but Dell doesn't: Dell would routinely make substitutes. It's kind of like you're going to, I don't know, Instacart or whatever, and they're out of the thing that you want. [00:07:03] Bryan: So someone makes a substitute, and sometimes that's okay, but it's really not okay in a data center. You really want to develop and validate an end-to-end integrated system. And in this case, Toshiba does make hard drives, but they basically were not competitive, and they were not competitive in part for the reasons that we were discovering. [00:07:29] Bryan: They had really serious firmware issues. These were drives that would just simply stop acknowledging any reads for on the order of 2,700 milliseconds. Long time, 2.7 seconds. It was a drive firmware issue, but it highlighted a much deeper issue, which was the simple lack of control that we had over our own destiny. [00:07:53] Bryan: And it's an example among many where Dell is making a decision that lowers the cost of what they are providing you marginally, but it is then giving you a system that they shouldn't have any confidence in, because it's not one that they've actually designed, and they leave it to the customer, the end user, to make these discoveries. [00:08:18] Bryan: And these things happen up and down the stack. And not just to pick on Dell, because it's true for HPE, it's true for Supermicro, it's true for your switch vendors.
It's true for storage vendors, where the one that is left actually integrating these things and trying to make the whole thing work is the end user sitting in their data center. AWS / Google are not buying off the shelf hardware but you can't use it [00:08:42] Bryan: There's not a product that they can buy that gives them elastic infrastructure, a cloud, in their own DC. The product that you buy is the public cloud. When you go in the public cloud, you don't worry about this stuff, because it's AWS's issue or it's GCP's issue, and they are the ones that get this to ground. [00:09:02] Bryan: And this was kind of, you know, the eye-opening moment. Not a surprise: they are not Dell customers. They're not HPE customers. They're not Supermicro customers. They have designed their own machines, to varying degrees, depending on which one you're looking at. But they've taken the clean sheet of paper, and the frustration that we had at Joyent, and then at Samsung, kind of wondering what was next, is that what they built was not available for purchase in the data center. [00:09:35] Bryan: You could only rent it in the public cloud. And our big belief is that public cloud computing is a really important revolution in infrastructure. Doesn't feel like a deep thought, but cloud computing is a really important revolution. It shouldn't only be available to rent. You should be able to actually buy it. [00:09:53] Bryan: And there are a bunch of reasons for doing that. One, the one we saw at Samsung, is economics, which I think is still the dominant reason, where it just does not make sense to rent all of your compute in perpetuity. But there are other reasons too. There's security, there's risk management, there's latency. [00:10:07] Bryan: There are a bunch of reasons why one might wanna own one's own infrastructure.
But that was very much the genesis for Oxide: coming out of this very painful experience. And, I mean, a long answer to your question about what it was like to be at Samsung scale: [00:10:27] Bryan: those are the kinds of things where, in our other data centers, we didn't have Toshiba drives, we only had the HGST drives. It's only when you get to this larger scale that you begin to see some of these pathologies. But these pathologies then are really debilitating for those who are trying to develop a service on top of them. [00:10:45] Bryan: So it was very educational in that regard. And we're very grateful for the experience at Samsung in terms of opening our eyes to the challenge of running at that kind of scale. [00:10:57] Jeremy: Yeah, because I think as software engineers, a lot of times we treat the hardware as a given, where... [00:11:08] Bryan: Yeah. [00:11:08] Bryan: Yeah. There's software in hard drives [00:11:09] Jeremy: It sounds like in this case, maybe the issue is not so much that Dell or HP as a company doesn't own every single piece that they're providing you, but rather the fact that they're swapping pieces in and out without advertising them, and then when it becomes a problem, they're not necessarily willing to deal with the consequences of that. [00:11:34] Bryan: They just don't know. I mean, I think they just genuinely don't know. It's not like they're making a deliberate decision to kind of ship garbage. I think it's exactly what you said about not thinking about the hardware. It's like, what's a hard drive? [00:11:47] Bryan: It's a hard drive. It's got the same specs as this other hard drive, and, you know, it's a little bit cheaper, so why not?
It's like, well, there are some reasons why not, and one of the reasons why not is that even a hard drive, whether it's rotating media or flash, that's not just hardware. [00:12:05] Bryan: There's software in there. And that software's not the same. I mean, there are components where, if you're looking at a resistor or a capacitor or something like this, yeah, if you've got two parts that are within the same tolerance, [00:12:19] Bryan: like, sure, maybe, although even the EEs, I think, would be objecting to that a little bit. But the more complicated you get, and certainly once you get to the kind of hardware that we think of, like a microprocessor, a network interface card, a hard drive, an NVMe drive, [00:12:38] Bryan: those things are super complicated, and there's a whole bunch of software inside of those things: the firmware. And that's the stuff that, you say software engineers don't think about that; no one can really think about that, because it's proprietary. It's kinda welded shut, and you've got this abstraction into it. [00:12:55] Bryan: But the way that thing operates is very core to how the thing in aggregate will behave. And I think the fundamental difference between Oxide's approach and the approach that you get at a Dell, HPE, Supermicro, wherever, is really thinking holistically in terms of hardware and software together, in a system that ultimately delivers cloud computing to a user. [00:13:22] Bryan: And there's a lot of software at many, many different layers. And it's very important to think about that software and that hardware holistically, as a single system.
[00:13:34] Jeremy: And during that time at Joyent, when you experienced some of these issues, was it more of a case of you didn't have enough servers experiencing this? So if it would happen, you might say, well, this one's not working, so maybe we'll just replace the hardware. What was the thought process when you were working at that smaller scale, and how did these issues affect you? UEFI / Baseboard Management Controller [00:13:58] Bryan: Yeah, at the smaller scale, you see fewer of them, right? What you might see is, like, that's weird, we kinda saw this in one machine, versus seeing it in a hundred or a thousand or 10,000. So you just see them less frequently, and as a result, they are less debilitating. [00:14:16] Bryan: I think that when you go to that larger scale, those things that were unusual now become routine, and they become debilitating. So it really is, in many regards, a function of scale. And then I think it was also, you know, a little bit dispiriting that the substrate we were building on really had not improved. [00:14:39] Bryan: If you buy a computer server, buy an x86 server, there is a very low layer of firmware: the BIOS, the basic input/output system, the UEFI BIOS. And this is an abstraction layer that has existed since the eighties and hasn't really meaningfully improved. The transition to UEFI happened, ironically, with Itanium, you know, two decades ago. [00:15:08] Bryan: But beyond that, this lowest layer of platform enablement software is really only impeding the operability of the system.
You look at the baseboard management controller, which is kind of the computer within the computer. There is an element in the machine that needs to handle environmentals, that needs to operate the fans and so on. [00:15:31] Bryan: And that traditionally has been the baseboard management controller, and that architecturally just hasn't improved in the last two decades. And, you know, it's a proprietary piece of silicon, generally from a company that no one's ever heard of called ASPEED, which is written in all caps, so I guess it needs to be screamed. [00:15:50] Bryan: ASPEED has a proprietary part that infamously has a root password encoded, effectively, in silicon. Which is just, for anyone who kind of goes deep into these things, like, oh my God, are you kidding me? When we first started Oxide, the wifi password was a fraction of the ASPEED root password for the BMC. [00:16:16] Bryan: It's kinda like a little BMC humor. But those things, it was just dispiriting that the state of the art was still basically personal computers running in the data center. And that's part of what was the motivation for doing something new. [00:16:32] Jeremy: And for the people using these systems, whether it's the baseboard management controller or the BIOS or UEFI component, what are the actual problems that people are seeing? Security vulnerabilities and poor practices in the BMC [00:16:51] Bryan: Oh man. You are going to have some fraction of your listeners, maybe a big fraction, where it's like, yeah, what are the problems? That's a good question. And then you're gonna have the people that actually deal with these things, whose heads already hit the desk, being like, what are the problems? [00:17:06] Bryan: Like, what are the non-problems? Like, what works?
Actually, that's a shorter answer. I mean, there are so many problems, and a lot of it is just architectural. The problems spread to the horizon, so you can kind of start wherever you want. [00:17:24] Bryan: But as a really concrete example: okay, so the BMC, the computer within the computer, needs to be on its own network. So you now have not one network; you've got two networks. And that network, by the way, that's the network that you're gonna log into to reset the machine when it's otherwise unresponsive. [00:17:44] Bryan: So going into the BMC, you're able to control the entire machine. Well, it's like, alright, so now I've got a second network that I need to manage. What is running on the BMC? Well, it's running some ancient, ancient version of Linux. It's like, well, how do I patch that? [00:18:02] Bryan: How do I manage the vulnerabilities with that? Because if someone is able to root your BMC, they control the system. And now you've gotta go deal with all of the operational hair around that. How do you upgrade that system, updating the BMC? You've got this second, shadow, bad infrastructure that you have to go manage. [00:18:23] Bryan: Generally not open source. There's something called OpenBMC, which people use to varying degrees, but you're generally stuck with the proprietary BMC, so you're generally stuck with iLO from HPE, or iDRAC from Dell, or Supermicro's BMC, and it is just excruciating pain. [00:18:49] Bryan: And this is assuming, by the way, that everything is behaving correctly. The problem is that these things often don't behave correctly, and then the consequence of them not behaving correctly
is really dire, because it's at that lowest layer of the system. So I'll give you a concrete example. [00:19:07] Bryan: A customer of theirs reported this to me, so I won't disclose the vendor, but let's just say that a well-known vendor had an issue where their temperature sensors were broken. The thing would always read basically the wrong value. So it was the BMC that had to invent a different kind of thermal control loop, [00:19:28] Bryan: and it would index on the actual inrush current. They would look at the current that's going into the CPU to adjust the fan speed. That's a great example of something that's an interesting idea that doesn't work, 'cause that's actually not the temperature. [00:19:45] Bryan: So that software would crank the fans whenever you had an inrush of current, and this customer had a workload that would spike the current. When it would spike the current, the fans would kick up, and then they would slowly degrade over time. Well, this workload was spiking the current faster than the fans would degrade, but not fast enough to actually heat up the part. [00:20:08] Bryan: And ultimately, over a very long time, in a very painful investigation, this customer determined, like, my fans are cranked in my data center for no reason. We're blowing cold air. And this is on the order of a hundred watts a server of energy that you shouldn't be spending. That ultimately comes down to this kind of broken software/hardware interface at the lowest layer, and it has real, meaningful consequence, in terms of hundreds of kilowatts across a data center. So this stuff has very, very real consequence, and it's such a shadowy world.
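The ratcheting behavior described here can be sketched with a toy simulation. The duty cycles, decay rate, and spike period below are all invented, but they show why keying fans to current spikes fails once spikes arrive faster than the fan speed can decay, even though the part never heats up:

```rust
/// Simulate `ticks` steps of a fan loop that reacts to current spikes
/// arriving every `spike_period` ticks; returns the final fan duty cycle.
/// The loop bumps the fans on each spike and lets them decay ~0.5%/tick
/// in between -- it never consults the (flat) temperature.
fn simulate(ticks: u32, spike_period: u32) -> f64 {
    let mut fan_pct: f64 = 30.0; // idle duty cycle
    for tick in 0..ticks {
        if tick % spike_period == 0 {
            fan_pct = (fan_pct + 15.0).min(100.0); // react to inrush current
        } else {
            fan_pct = (fan_pct - 0.5).max(30.0); // slow decay between spikes
        }
    }
    fan_pct
}

fn main() {
    // Spikes every 10 ticks outrun the decay: the fans ratchet toward 100%...
    println!("fast spikes: fan at {:.0}%", simulate(600, 10));
    // ...while infrequent spikes let the fans settle back to idle.
    println!("slow spikes: fan at {:.0}%", simulate(600, 100));
}
```

With the fast workload the duty cycle pins near full speed indefinitely (blowing cold air, burning watts), while the same controller under a slower workload returns to idle: the control loop is stable or pathological depending entirely on the workload's spike cadence, which is exactly the trap of using current as a proxy for temperature.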
Part of the reason that your listeners that have dealt with this, whose heads will hit the desk, is because it is really aggravating to deal with problems at this layer. [00:21:01] Bryan: You feel powerless. You don't control or really see the software that's on them. It's generally proprietary. You are relying on your vendor, and your vendor is telling you, boy, I don't know, you're the only customer seeing this. The number of times I have heard that. And I have pledged that we're not gonna say that at Oxide, because it's such an awful thing to say: you're the only customer seeing this. [00:21:25] Bryan: It feels like, are you blaming me for my problem? It feels like you're blaming me for my problem. And what you begin to realize is that, to a degree, these folks are speaking their own truth, because the folks that are running at real scale, at hyperscale, those folks aren't Dell, HPE, Supermicro customers. [00:21:46] Bryan: They've done their own thing. So it's like, yeah, Dell's not seeing that problem, because they're not running at the same scale. But you only have to run at modest scale before these things just become overwhelming, in terms of the headwind that they present to people that wanna deploy infrastructure. The problem is felt with just a few racks [00:22:05] Jeremy: Yeah, so maybe to help people get some perspective: at what point do you think that people start noticing or start feeling these problems? Because I imagine that if you just have a few racks, or... [00:22:22] Bryan: Do you have a couple racks, or... No, no, no. I think anyone who deploys any number of servers, especially now, especially if your experience is only in the cloud, you're gonna be like, what the hell is this? I mean, just to get this thing working at all.
[00:22:39] Bryan: It's so hairy and so congealed, right? It's not designed. It's accreted, and it's so obviously accreted that, I mean, nobody who is setting up a rack of servers is gonna think to themselves, yes, this is the right way to go do it, this all makes sense. Because it's just not. It feels like a kit, I mean, kit car's almost too generous, because it implies that there's a set of plans to work to in the end. [00:23:08] Bryan: It's a bag of bolts. It's a bunch of parts that you're putting together. And so even at the smallest scales, that stuff is painful. Just architecturally, it's painful at the small scale, but at least you can get it working. I think the stuff that then becomes debilitating at larger scale are the things that are worse than just, this thing is a mess to get working. [00:23:31] Bryan: It's like the fan issue, where you are now seeing this over hundreds of machines or thousands of machines. So it is painful at more or less all levels of scale. There is no level at which the PC, which is really what this is, the personal computer architecture from the 1980s, there is really no level of scale where that's the right unit. Running elastic infrastructure requires not just hardware but also a hypervisor, distributed database, API, etc. [00:23:57] Bryan: I mean, where that's the right thing to go deploy, especially if what you are trying to run is elastic infrastructure, a cloud. Because the other thing is, we've kinda been talking a lot about that hardware layer. Hardware is just the start. You actually gotta go put software on that and actually run that as elastic infrastructure. [00:24:16] Bryan: So you need a hypervisor, yes, but you need a lot more than that.
You need a distributed database, you need web endpoints, you need a CLI, you need all the stuff that you need to actually go run an actual service of compute or networking or storage. And even for compute, there's a ton of work to be done. [00:24:39] Bryan: And compute is by far, I would say, the simplest of the three. When you look at network services, storage services, there's a whole bunch of stuff that you need to go build in terms of distributed systems to actually offer that as a cloud. So it is painful at more or less every level if you are trying to deploy cloud computing on it. What's a control plane? [00:25:00] Jeremy: And for someone who doesn't have experience building or working with this type of infrastructure, when you talk about a control plane, what does that do in the context of this system? [00:25:16] Bryan: So the control plane is everything between your API request and that infrastructure actually being acted upon. So you go say, hey, I want to provision a VM. Okay, great. We've got a whole bunch of things we're gonna provision with that. We're gonna provision a VM, we're gonna get some storage that's gonna go along with that, which is gonna come out of a network storage service, and we've got a virtual network that we're gonna either create or attach to. [00:25:39] Bryan: We've got a whole bunch of things we need to go do for that. For all of these things, there are metadata components that we need to keep track of, beyond the actual infrastructure that we create. And then we need to go actually act on the actual compute elements, the host OS, the switches, what have you, and actually go [00:25:56] Bryan: create these underlying things and then connect them. And of course, the challenge of just getting that working is a big challenge.
But getting that working robustly: when you go to provision a VM, there are all the steps that need to happen, and what happens if one of those steps fails along the way? [00:26:17] Bryan: What happens if, you know, one thing we're very mindful of is these kind of long tails, where generally our VM provisioning happened within this time, but we get these long tails where it takes much longer. What's going on? Where in this process are we actually spending time? [00:26:33] Bryan: And there's a whole lot of complexity that you need to deal with effectively: this workflow that's gonna go create these things and manage them. We use a pattern called sagas, which is actually a database pattern from the eighties. [00:26:51] Bryan: Caitie McCaffrey is a researcher who, I think, reintroduced the idea of sagas in the last decade or so. And this is something that we picked up, and we've done a lot of really interesting things with, to allow these workflows to be managed, and done so robustly, in a way that you can restart them and so on. [00:27:16] Bryan: And then you get this whole distributed system that can do all this. That whole distributed system itself needs to be reliable and available. So, you know, what happens if you pull a sled, or if a sled fails? How does the system deal with that? [00:27:33] Bryan: How does the system deal with getting another sled added to the system? How do you actually grow this distributed system? And then how do you update it? How do you actually go from one version to the next? And all of that has to happen across an air gap, where this is gonna run as part of the computer.
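The saga idea can be sketched minimally as forward steps paired with compensating undos, where completed steps are unwound in reverse order when a later step fails. This is not Oxide's implementation, just the shape of the pattern, and the step names are invented:

```rust
// A saga step: a forward action plus a compensating undo for it.
struct Step {
    name: &'static str,
    action: fn() -> Result<(), String>,
    undo: fn(),
}

/// Run steps in order; on failure, undo the completed steps newest-first.
/// `log` records what happened, for inspection.
fn run_saga(steps: &[Step], log: &mut Vec<String>) -> Result<(), String> {
    let mut done: Vec<&Step> = Vec::new();
    for step in steps {
        match (step.action)() {
            Ok(()) => {
                log.push(format!("ok: {}", step.name));
                done.push(step);
            }
            Err(e) => {
                // Unwind everything that already ran, in reverse order.
                for prior in done.iter().rev() {
                    (prior.undo)();
                    log.push(format!("undo: {}", prior.name));
                }
                return Err(format!("{} failed: {e}", step.name));
            }
        }
    }
    Ok(())
}

fn main() {
    // Hypothetical provisioning steps; the last one fails partway through.
    let steps = [
        Step { name: "allocate-storage", action: || Ok(()), undo: || {} },
        Step { name: "create-vnic", action: || Ok(()), undo: || {} },
        Step { name: "boot-instance", action: || Err("no capacity".into()), undo: || {} },
    ];
    let mut log = Vec::new();
    let result = run_saga(&steps, &mut log);
    println!("{result:?}");
    for entry in &log {
        println!("{entry}");
    }
}
```

A real saga framework additionally persists each step's state so the workflow can be resumed after a crash, rather than only unwound, which is the "restart them" property mentioned above; this sketch only shows the forward/compensate skeleton.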
[00:27:49] Bryan: So it is fractally complicated. There is a lot of complexity here in the software system, and all of that we call the control plane. And this is what exists at AWS, at GCP, at Azure. When you are hitting an endpoint that's provisioning an EC2 instance for you, [00:28:10] Bryan: there is an AWS control plane that is doing all of this, and it has some of these similar aspects, and certainly some of these similar challenges. Are vSphere / Proxmox / Hyper-V in the same category? [00:28:20] Jeremy: And for people who have run their own servers with something like, say, VMware or Hyper-V or Proxmox, are those in the same category? [00:28:32] Bryan: Yeah, I mean, a little bit. It's kind of like, vSphere, yes; VMware ESX, no. VMware ESX is kind of a key building block upon which you can build something that is a more meaningful distributed system. When it's just a machine that you're provisioning VMs on, you as the human might be the control plane. [00:28:52] Bryan: That's a much easier problem. But when you've got tens, hundreds, thousands of machines, you need to do it robustly. You need something to coordinate that activity: you need to pick which sled you land on, you need to be able to move these things, you need to be able to update that whole system. [00:29:06] Bryan: That's when you're getting into a control plane. So, you know, some of these things have kind of edged into a control plane. Certainly VMware, now Broadcom, has delivered something that's kind of cloudish. I think that for folks that are truly born on the cloud, it still feels somewhat like you're going backwards in time when you look at these kind of on-prem offerings. [00:29:29] Bryan: But it's got these aspects to it, for sure.
And some of these other things, when you're just looking at KVM or just looking at Proxmox, you kind of need to connect to other, broader things to turn them into something that really looks like manageable infrastructure. [00:29:47] Bryan: And many of those are either proprietary products like vSphere, or you are dealing with open source projects that are not necessarily aimed at the same level of scale. You know, you look at, again, Proxmox, or OpenStack. [00:30:05] Bryan: And OpenStack is just a lot of things, right? OpenStack was kind of a free-for-all for every infrastructure vendor. And, you know, there was a time people were like, aren't you worried about all these companies coming together for OpenStack? [00:30:24] Bryan: I'm like, haven't you ever worked for a company? Companies don't get along. Having multiple companies work together on a thing is bad news, not good news. And I think one of the things that OpenStack has definitely struggled with is that there are so many different vendor elements in there that it's very much not a product; it's a project that you're trying to run. [00:30:47] Bryan: But that's very much similar, certainly in spirit. [00:30:53] Jeremy: And so I think this is kind of what you were alluding to earlier: the piece that allows you to allocate compute and storage and manage networking, that gives you the experience of, I can go to a web console or I can use an API and I can spin up machines and get them all connected. At the end of the day, the control plane is allowing you to do that in, hopefully, a user-friendly way. [00:31:21] Bryan: That's right. Yep.
And in order to do that in a modern way, it's not just a user-friendly way: you really need to have a CLI and a web UI and an API, and those all need to be drawn from the same single ground truth. You don't want any of those to be an afterthought to the others. [00:31:39] Bryan: You want to have the same way of generating all of those different endpoints and entries into the system. Building a control plane now has better tools (Rust, CockroachDB) [00:31:46] Jeremy: And if you take your time at Joyent as an example, what kind of tools existed for that, versus how much did you have to build in-house, as far as the hypervisor and managing the compute and all that? [00:32:02] Bryan: Yeah, so we built more or less everything in-house. And I think over time we've gotten slightly better tools. Maybe it's a little bit easier to talk about the tools we started with at Oxide, because we started with a clean sheet of paper at Oxide. [00:32:16] Bryan: We knew we wanted to go build a control plane, but we were able to revisit some of the components. So maybe I'll talk about some of those changes. For example, at Joyent, when we were building a cloud, there wasn't really a good distributed database. [00:32:34] Bryan: So we were using Postgres as our database for metadata, and there were a lot of challenges. Postgres is not a distributed database; it's running with a primary/secondary architecture, and there's a bunch of issues there, many of which we discovered the hard way. When we were coming to Oxide, you had much better options to pick from in terms of distributed databases. [00:32:57] Bryan: You know, there was a period, one that now seems maybe brief in hindsight, of really high-quality open source distributed databases.
So there were really some good ones to pick from. We built on CockroachDB, on CRDB. So that was a really important component that we had at Oxide that we didn't have at Joyent. [00:33:19] Bryan: So we weren't rolling our own distributed database; we were just using Postgres, and dealing with an enormous amount of pain there in terms of the surround. On top of that, a control plane is much more than a database, obviously. There's a whole bunch of software that you need to go write [00:33:40] to be able to transform these API requests into something that is reliable infrastructure, right? And there's a lot to that, especially when networking gets in the mix, when storage gets in the mix; there are a whole bunch of complicated steps that need to be done. [00:33:59] Bryan: At Joyent, in part because of the history of the company, and, look, this just is not gonna sound good, but it just is what it is and I'm gonna own it: we did it all in Node at Joyent. Which I know right now just sounds like, well, you built it with Tinker Toys. [00:34:18] Bryan: Did you think you could build the skyscraper with Tinker Toys? It's like, well, okay, we actually had greater aspirations for the Tinker Toys once upon a time, and it was better than, you know, Twisted in Python or EventMachine in Ruby, and we weren't gonna do it in Java. All right. [00:34:32] Bryan: But let's just say that that experiment did ultimately end in a predictable fashion. And we decided that maybe Node was not gonna be the best decision long-term. Joyent was the company behind Node.js back in the day; Ryan Dahl worked for Joyent. And then we
[00:34:53] Bryan: landed that in a foundation in about, what, 2015, something like that, and began to consider our world beyond Node. Rust at Oxide [00:35:04] Bryan: A big tool that we had in the arsenal when we started Oxide is Rust. And indeed, the name of the company is a tip of the hat to the language that we were pretty sure we were gonna be building a lot of stuff in, namely Rust. [00:35:16] Bryan: And Rust has been huge for us, a very important revolution in programming languages. You know, there have been different people coming in at different times, and I came to Rust in what I think is this big second expansion of Rust, in 2018, when a lot of technologists were sick of Node, and also sick of Go, [00:35:43] and also sick of C++, and wondering: is there gonna be something that gives me the performance that I get out of C, the robustness that I can get out of a C program but that is often difficult to achieve, and some of the velocity of development, although I hate that term, some of the speed of development, that you get out of a more interpreted language? [00:36:08] Bryan: And then, by the way, can I actually have types? I think types would be a good idea. And Rust obviously hits the sweet spot of all of that. It has been absolutely huge for us. We knew when we started Oxide that we were gonna be using Rust in quite a few places, but we weren't doing it by fiat. [00:36:27] Bryan: We wanted to actually make sure we were making the right decision at every layer. I think what has been surprising is the sheer number of layers at which we use Rust: we've done our own embedded firmware in Rust, and in the host operating system, which is still largely in C, very big components are in Rust.
[00:36:47] Bryan: The hypervisor, Propolis, is all in Rust. And then, of course, the control plane, that distributed system on top, is all in Rust. So Rust was a very important thing that we did not need to build ourselves; we were able to really leverage a terrific community. We were also able to use, and we'd done this at Joyent as well, Illumos as a host OS component; our variant is called Helios. [00:37:11] Bryan: We've used bhyve as that kind of internal hypervisor component. We've made use of a bunch of different open source components to build this thing, which has been really, really important for us, open source components that didn't exist even five years prior. [00:37:28] That's part of why we felt that 2019 was the right time to start the company. And so we started Oxide. The problems building a control plane in Node [00:37:34] Jeremy: You had mentioned that at Joyent you had tried to build this in Node. What were the issues or challenges that you had doing that? [00:37:46] Bryan: Oh boy. Yeah. Again, I kind of had higher hopes in 2010, I would say, when we set out on this. The problem that we had, writ large: JavaScript is really designed to allow as many people on earth to write a program as possible, which is good. I mean, that's a laudable goal. [00:38:09] Bryan: That is the goal, such as it is, of JavaScript. It's actually hard to know what the goal of JavaScript is, unfortunately, because Brendan Eich never actually wrote a book. So there is no canonical source; you've got Doug Crockford and other people who've written things on JavaScript, but it's hard to know the original intent of JavaScript. [00:38:27] Bryan: The name doesn't even express original intent, right?
It was called LiveScript, and it was renamed to JavaScript during the Java frenzy of the mid-nineties. A name that makes no sense: there is no Java in JavaScript. And that is, I think, revealing of the unprincipled mess that is JavaScript. [00:38:47] Bryan: It's very pragmatic at some level, and it makes it very easy to write software. The problem is it's much more difficult to write really rigorous software. And this is where I should differentiate JavaScript from TypeScript; this is really what TypeScript is trying to solve. [00:39:07] Bryan: I think TypeScript is a great step forward, because TypeScript is asking: how can we bring some rigor to this? Yes, it's great that it's easy to write JavaScript, but that's not the only problem we want to solve. [00:39:23] Bryan: We actually want to be able to write rigorous software, and it's actually okay if it's a little harder to write, if that leads to more rigorous artifacts. But in JavaScript, just as a concrete example: there's nothing to prevent you from referencing a property that doesn't actually exist. [00:39:43] Bryan: So if you fat-finger a property name, you are relying on something to tell you, by the way, I think you've misspelled this. But there is no type definition for the thing, so nothing knows that you've got one spelling that's correct and one that's incorrect; the misspelled one is just undefined. So you've got this typo lurking in what you want to be rigorous software. [00:40:07] Bryan: And if you don't execute that code, you won't know it's there. And then you do execute that code, and now you've got an undefined object.
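For contrast, here is roughly what that same fat-fingered property looks like in Rust, the language the conversation turns to later. The `Instance` type and `greet` function here are purely illustrative, not anything from a real codebase:

```rust
/// Illustrative type: in Rust, field names are part of the type,
/// so a misspelled field is a compile-time error, not `undefined`.
struct Instance {
    hostname: String,
}

fn greet(inst: &Instance) -> String {
    // In JavaScript, `inst.hostnmae` would silently evaluate to
    // `undefined` and fail somewhere far from the typo. In Rust the
    // same misspelling is rejected before the program ever runs:
    //   error[E0609]: no field `hostnmae` on type `&Instance`
    format!("hello, {}", inst.hostname)
}
```

The typo never makes it into a running artifact, which is exactly the distinction between catching a programmer error in development versus discovering it in production.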
And now that's either gonna be an exception, or, depending on how it's handled, it can be really difficult to determine the origin of that error. [00:40:26] Bryan: And that is a programmer error. And one of the big challenges that we had with Node is that programmer errors and operational errors, like, I'm out of disk space is an operational error, those get conflated, and it becomes really hard. And in fact, I think the language wanted to make it easier to just kind of drive on in the event of all errors. [00:40:53] Bryan: And that's actually not what you wanna do if you're trying to build a reliable, robust system. So we had no end of issues. [00:41:01] Bryan: We've got a lot of experience developing rigorous systems, again coming out of operating systems development and so on, and we brought some of that rigor, if strangely, to JavaScript. So one of the things that we did is we brought a lot of postmortem diagnosability and observability to Node. [00:41:18] Bryan: And so if one of our Node processes died in production, we would actually get a core dump from that process, a core dump that we could actually meaningfully process. So we did a bunch of kind of wild stuff, I mean, actually wild stuff, where we could make sense of the JavaScript objects in a binary core dump. JavaScript values ease of getting started over robustness [00:41:41] Bryan: And things that we thought were really important, the rest of the world just looks at and says, what the hell is this? It's so out of step. The problem is that we were trying to bridge two disconnected cultures: one developing really rigorous software and really designing it for production diagnosability, and the other really designing software to run in the browser and for anyone to be able to, you know, liven up a webpage, right?
[00:42:10] Bryan: That's kind of the origin of LiveScript and then JavaScript. And we were kind of the only ones sitting at the intersection of those cultures. And when you are the only ones sitting at that kind of intersection, you're fighting a community all the time. And we just realized there were so many things that the community wanted to do that we felt, no, no, this is gonna make software less diagnosable, it's gonna make it less robust. The NodeJS split and why people left [00:42:36] Bryan: And then you realize: we're the only voice in the room, because we have desires for this language that it doesn't have for itself. And this is when you realize you're in a bad relationship with software; it's time to move on. And in fact, this was several years after we'd already kind of broken up with Node. [00:42:55] Bryan: And it was a bit of an acrimonious breakup. There was a famous, slash infamous, fork of Node called io.js. And this was because the community thought that Joyent was not being an appropriate steward of Node.js and was not allowing more things to come into Node. [00:43:19] Bryan: And of course, we felt that we were being a careful steward, and that we were actively resisting those things that would cut against its fitness for a production system. But that's the way the community saw it, and they forked. And I think we knew before the fork: this is not working, and we need to get this thing out of our hands. Platform as a Reflection of Values Node Summit talk [00:43:43] Bryan: We are the wrong hands for this; this needs to be in a foundation. And so we had gone through that breakup, and maybe it was two years after that
that a friend of mine who was running the Node Summit, who has unfortunately now passed away, Charles, a venture capitalist and a great guy, came to me in 2017. [00:44:07] Bryan: He's like, I really want you to keynote Node Summit. And I'm like, Charles, I'm not gonna do that. I've got nothing nice to say; I'm the last person you want to keynote. He's like, oh, if you have nothing nice to say, you should definitely keynote. And you're like, oh God, okay, here we go. [00:44:22] Bryan: He's like, no, I really want you to talk about the Joyent breakup with Node.js. I'm like, oh man. [00:44:29] Bryan: And that led to a talk that I'm really happy that I gave, 'cause it was a very important talk for me personally, called Platform as a Reflection of Values, really looking at the values that we had for Node and the values that Node had for itself. And they didn't line up. [00:44:49] Bryan: And the problem is that the values that Node had for itself and the values that we had for Node are all kind of positives, right? There's nobody in the Node community who's like, I don't want rigor, I hate rigor. It's just that if they had to choose between rigor and making the language approachable, [00:45:09] they would choose approachability every single time. They would never choose rigor. And, you know, that was a big eye-opener. And I would say, do watch this talk. [00:45:20] Bryan: Because I knew that the audience was gonna be filled with people who had been a part of the fork, in 2014 I think it was, the io.js fork. And I knew that there were some people there who had been there for the fork, and [00:45:41] I set a little bit of a trap for the audience.
And the trap: I talked about the values that we had and the aspirations we had for Node, the aspirations that Node had for itself, and how they were different. [00:45:53] Bryan: And I'm like, look, in hindsight, a fracture was inevitable. And in 2014 there was finally a fracture. And do people know what happened in 2014? And if you listen to that talk, everyone almost says in unison: io.js. I'm like, oh right, io.js. Right. That's actually not what I was thinking of. [00:46:19] Bryan: And I go to the next slide, and it's a tweet from a guy named TJ Holowaychuk, who was the most prolific contributor to Node. And it was his tweet, also in 2014, before the io.js fork, explaining that he was leaving Node and that he was going to Go. And if you turn the volume all the way up, you can hear the audience gasp. [00:46:41] Bryan: And it's just delicious, because the community had never really confronted why TJ left. And I went through a couple folks, Felix, a bunch of other early Node folks, who were there in 2010 and were leaving in 2014, and they were going, primarily, to Go. And they were going because they were sick of the same things that we were sick of. [00:47:09] Bryan: They had hit the same things that we had hit, and they were frustrated. I really do believe this: platforms do reflect their own values, and when you are making a software decision, you are selecting values. [00:47:26] Bryan: You should select values that align with the values that you have for that software. That's way more important than other things that people look at. I think people look at, for example, quote-unquote community size way too frequently. Community size is like, eh, maybe it can be fine. [00:47:44] Bryan: I've been in very large communities, like Node.
I've been in super small open source communities, like Illumos and a bunch of others. There are strengths and weaknesses to both, just as there's a strength to being in a big city versus a small town. Me personally, I'll take the small community more or less every time, because the small community is almost always self-selecting based on values, for the same reason that I like working at small companies or on small teams. [00:48:11] Bryan: There's a lot of value to be had in a small community. It's not to say that large communities are valueless. But, again, long answer to your question of where things went south with Joyent and Node: they went south because the values that we had and the values the community had didn't line up. And that was a very educational experience, as you might imagine. [00:48:33] Jeremy: Yeah. And given that you mentioned how, because of those values, some people moved from Node to Go, and in the end, for much of what Oxide is building, you ended up using Rust: what would you say are the values of Go and of Rust, and how did you end up choosing Rust? Go's decisions regarding generics, versioning, compilation speed priority [00:48:56] Bryan: Yeah, well, so, I understand why people moved from Node to Go. Go, to me, was kind of a lateral move. There were a bunch of things I didn't like. Go was still garbage-collected, which I didn't like. Go also is very strange in that there are these kind of autocratic decisions that are very bizarre. [00:49:17] Bryan: Generics is kind of a famous one, right? Where Go, as a point of principle, didn't have generics, even though the innards of Go itself actually did have generics. It's just that you, a Go user, weren't allowed to have them.
[00:49:35] Bryan: And you know, it's kind of, there was, there was an old cartoon years and years ago about like when a, when a technologist is telling you that something is technically impossible, that actually means I don't feel like it. Uh, and there was a certain degree of like, generics are technically impossible and go, it's like, Hey, actually there are. [00:49:51] Bryan: And so there was, and I just think that the arguments against generics were kind of disingenuous. Um, and indeed, like they ended up adopting generics and then there's like some super weird stuff around like, they're very anti-assertion, which is like, what, how are you? Why are you, how is someone against assertions, it doesn't even make any sense, but it's like, oh, nope. [00:50:10] Bryan: Okay. There's a whole scree on it. Nope, we're against assertions and the, you know, against versioning. There was another thing like, you know, the Rob Pike has kind of famously been like, you should always just run on the way to commit. And you're like, does that, is that, does that make sense? I mean this, we actually built it. [00:50:26] Bryan: And so there are a bunch of things like that. You're just like, okay, this is just exhausting and. I mean, there's some things about Go that are great and, uh, plenty of other things that I just, I'm not a fan of. Um, I think that the, in the end, like Go cares a lot about like compile time. It's super important for Go Right? [00:50:44] Bryan: Is very quick, compile time. I'm like, okay. But that's like compile time is not like, it's not unimportant, it's doesn't have zero importance. But I've got other things that are like lots more important than that. Um, what I really care about is I want a high performing artifact. I wanted garbage collection outta my life. 
Don't think garbage collection has good trade offs [00:51:00] Bryan: I gotta tell you, garbage collection to me is an embodiment of this larger problem of where you put cognitive load in the software development process. And garbage collection is right for plenty of other people and the software that they wanna develop. [00:51:21] But for me and the software that I wanna develop, infrastructure software, I don't want garbage collection, because I can solve the memory allocation problem. I know when I'm done with something or not. Even in C, it's really not that hard to not leak memory in a C-based system, [00:51:44] and you can give yourself a lot of tooling that allows you to diagnose where memory leaks are coming from. So that is a solvable problem. There are other challenges with that, but when you are developing a really sophisticated system that is using garbage collection, [00:51:59] you spend as much time trying to dork with the garbage collector to convince it to collect the thing that you know is garbage. You're like, I've got this thing, I know it's garbage, now I need to use these tips and tricks to get the garbage collector to take it. It feels like every Java performance issue comes down to some -XX flag: use the other garbage collector, whatever one you're using, use a different one, use a different approach. [00:52:23] Bryan: So to me, you're in the worst of all worlds, where the reason that garbage collection is helpful is that the programmer doesn't have to think at all about this problem, but now you're actually dealing with these long pauses in production, [00:52:38] you're dealing with all these other issues, where actually you need to think a lot about it.
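The alternative Bryan prefers, knowing when you're done with something, is what Rust's ownership model makes explicit: deallocation happens at a deterministic point the compiler can see, not whenever a collector runs. A minimal sketch, where the `Buffer` type and the drop counter are illustrative only:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts destructor runs, so the deterministic drop point is visible.
static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Buffer {
    data: Vec<u8>,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        // Runs at a statically known point (end of the owning scope),
        // not whenever a collector decides the allocation is garbage.
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn scope_demo() -> usize {
    let buf = Buffer { data: vec![0u8; 1024] };
    buf.data.len()
    // `buf` is dropped, and its heap memory freed, exactly here.
}
```

There is no tuning knob to coax; the lifetime of the allocation is a property of the program text itself, which is the "cognitive load up front" trade the rest of this section is about.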
And it's witchcraft; it's this black box that you can't see into. So it's like, what problem have we solved, exactly? And so the fact that Go had garbage collection, it's like, no, I do not want that. And then you get all the other weird fatwas and everything else. [00:52:57] Bryan: I'm like, no, thank you. Go is a no-thank-you for me. I get why people like it or use it, but that was not gonna be it. Choosing Rust [00:53:04] Bryan: I'm like, I want C. But there were things I didn't like about C too. I was looking for something that was gonna give me the deterministic kind of artifact that I got out of C, but I wanted library support, and C is tough because it's all convention; there's just a bunch of other things that are thorny. And I remember thinking vividly in 2018: it's Rust or bust. Ownership model, algebraic types, error handling [00:53:28] Bryan: I'm gonna go into Rust, and I hope I like it, because if it's not this, I'm gonna go back to C. I'm literally trying to figure out what the language is for the back half of my career. And I did what a lot of people were doing at that time, and have been doing since, of really getting into Rust and really learning it, and appreciating the difference in the model. The ownership model people talk about, [00:53:54] that's obviously very important, but it was the error handling that blew me away, and the idea of algebraic types. I'd never really had algebraic types. And error handling is one of these things you really appreciate: how do you deal with a function that can either succeed and return something, or fail? The way C deals with that is bad, with these kind of sentinels for errors.
[00:54:27] Bryan: And, you know, does negative one mean success? Does negative one mean failure? Does zero mean failure? Some C functions, zero means failure. Traditionally in Unix, zero means success. And like, what if you wanna return a file descriptor, you know, it's like, oh. And then it's like, okay, then it'll be like zero through positive N will be a valid result. [00:54:44] Bryan: Negative numbers will be, and like, was it negative one and I said airo, or is it a negative number that did not, I mean, it's like, and that's all convention, right? People do all, all those different things and it's all convention and it's easy to get wrong, easy to have bugs, can't be statically checked and so on. Um, and then what Go says is like, well, you're gonna have like two return values and then you're gonna have to like, just like constantly check all of these all the time. Um, which is also kind of gross. Um, JavaScript is like, Hey, let's toss an exception. If, if we don't like something, if we see an error, we'll, we'll throw an exception. [00:55:15] Bryan: There are a bunch of reasons I don't like that. Um, and you look, you'll get what Rust does, where it's like, no, no, no. We're gonna have these algebra types, which is to say this thing can be a this thing or that thing, but it, but it has to be one of these. And by the way, you don't get to process this thing until you conditionally match on one of these things. [00:55:35] Bryan: You're gonna have to have a, a pattern match on this thing to determine if it's a this or a that, and if it in, in the result type that you, the result is a generic where it's like, it's gonna be either the thing that you wanna return. It's gonna be an okay that contains the thing you wanna return, or it's gonna be an error that contains your error and it forces your code to deal with that. 
[00:55:57] Bryan: And what that does is it shifts the cognitive load from the person that is operating this thing in production to the, the actual developer that is in development. And I think that that, that to me is like, I, I love that shift. Um, and that shift to me is really important. Um, and that's what I was missing, that that's what Rust gives you. [00:56:23] Bryan: Rust forces you to think about your code as you write it, but as a result, you have an artifact that is much more supportable, much more sustainable, and much faster. Prefer to frontload cognitive load during development instead of at runtime [00:56:34] Jeremy: Yeah, it sounds like you would rather take the time during the development to think about these issues because whether it's garbage collection or it's error handling at runtime when you're trying to solve a problem, then it's much more difficult than having dealt with it to start with. [00:56:57] Bryan: Yeah, absolutely. I, and I just think that like, why also, like if it's software, if it's, again, if it's infrastructure software, I mean the kinda the question that you, you should have when you're writing software is how long is this software gonna live? How many people are gonna use this software? Uh, and if you are writing an operating system, the answer for this thing that you're gonna write, it's gonna live for a long time. [00:57:18] Bryan: Like, if we just look at plenty of aspects of the system that have been around for a, for decades, it's gonna live for a long time and many, many, many people are gonna use it. Why would we not expect people writing that software to have more cognitive load when they're writing it to give us something that's gonna be a better artifact? [00:57:38] Bryan: Now conversely, you're like, Hey, I kind of don't care about this. And like, I don't know, I'm just like, I wanna see if this whole thing works. I've got, I like, I'm just stringing this together. 
then the software will be lucky if it survives until tonight, and who cares? Yeah. [00:57:52] Bryan: Garbage collect away, you know, if you're prototyping something, whatever. And this is why you really do get different technology choices depending on the way that you wanna solve the problem at hand. And for the software that I wanna write, I do like that cognitive load being up front. With LLMs maybe you can get the benefit of the robust artifact with less cognitive load [00:58:10] Bryan: Although I think the thing that is really wild, the twist that I don't think anyone really saw coming, is that in an LLM age, that cognitive load up front almost needs an asterisk on it, because so much of it can be assisted by an LLM. And I would like to believe, and maybe this is me being optimistic, that Rust is a great fit for the LLM age, because the LLM itself can get a lot of feedback about whether the software that's written is correct or not. [00:58:44] Bryan: Much more so than you can in other environments. [00:58:48] Jeremy: Yeah, that is an interesting point, in that I think when people first started trying out LLMs to code, they were really good at these maybe looser languages like Python or JavaScript, and initially weren't so good at something like Rust. But it sounds like as that improves, if it can write it, then, because of the rigor or the memory management or the error handling that the language is forcing you to do, it might actually end up being a better choice for people using LLMs. [00:59:27] Bryan: Absolutely. It gives you more certainty in the artifact that you've delivered. I mean, you know a lot about a Rust program that compiles correctly.
I mean, th there are certain classes of errors that you don't have, um, that you can't rule out in a C program or a Go program or a, a JavaScript program. [00:59:46] Bryan: I think that's gonna be really important. I think we are on the cusp. Maybe we've already seen it, this kind of great bifurcation in the software that we write.
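Bryan's claim, that a successful compile rules out whole classes of errors, can be made concrete with a small sketch. This example is illustrative only (the function and values are not from the episode): the fallible operation's signature carries the failure case, so the compiler forces the caller to handle it before the value can be used.

```rust
use std::num::ParseIntError;

// The signature admits failure, so a caller cannot silently ignore the
// error case the way an unchecked C return code can be ignored.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    // Both arms must be handled before a `port` value exists at all.
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
}
```

This is the frontloaded cognitive load the conversation describes: the bad-input surprise moves from a page in production to a compile-time obligation in development.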
Thomas has written a Python script, available on his github.com account, that you can run against any vSphere server to check hardware compatibility. It uses APIs from VCF Operations, which has the current compatibility matrix in its grasp. It can now also tell you about future compatibility risks. A cool, small, powerful bit of code. Thomas joins Bob, Eric, and Tony to talk about it and other side topics on compatibility.
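The core of such a compatibility check reduces to comparing a host inventory against a compatibility matrix. A minimal sketch in Python, under loudly labeled assumptions: the data shapes, model names, and field names below are hypothetical, and the real script fetches the matrix from the VCF Operations API rather than hard-coding it.

```python
# Hypothetical data shapes: the real script pulls these from the
# VCF Operations API; the model names and fields here are made up.
def check_compatibility(matrix, hosts, target_release):
    """Return names of hosts whose model isn't listed for target_release."""
    at_risk = []
    for host in hosts:
        supported = matrix.get(host["model"], [])
        if target_release not in supported:
            at_risk.append(host["name"])
    return at_risk

matrix = {"PowerEdge R650": ["8.0", "9.0"], "PowerEdge R640": ["8.0"]}
hosts = [
    {"name": "esx01", "model": "PowerEdge R650"},
    {"name": "esx02", "model": "PowerEdge R640"},
]
print(check_compatibility(matrix, hosts, "9.0"))  # ['esx02']
```

Flagging "future compatibility risks" is then just running the same comparison against a release that has not shipped to the cluster yet.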
Join us as Marian explains what AI governance means for vSphere administrators and why it matters now. Marian walks through practical governance frameworks that vSphere admins need to understand, from IEEE 7000 series standards to mapping governance controls onto infrastructure you already manage. You'll learn what your CISO will ask for, how to respond using your existing VMware stack, and why governance isn't about slowing innovation; it's about enabling it safely. This episode covers real-world scenarios from data lineage and model transparency to integrating governance tools with existing infrastructure, and addresses the gap between compliance requirements and practical implementation for virtualized environments. Timestamps 0:00 Welcome & Introduction 5:16 Marian's Background in Tech & Governance 6:37 What is Governance? 12:45 IEEE 7000 Series Standards Overview 18:22 AI Governance for vSphere Admins 24:16 Data Lineage & Model Transparency 30:41 Risk Assessment Frameworks 36:52 Practical Implementation Strategies 42:18 Integration with Existing Tools 47:35 Common Governance Challenges 51:12 Vendor Landscape Discussion 54:27 Missing Innovation in the Space 58:09 Wrap-up & Resources How to find Marian: https://www.linkedin.com/in/mariannewsome/ Links from the show: https://ethicaltechmatters.com/
The Cloud Native Computing Foundation has introduced the Certified Kubernetes AI Conformance Program to bring consistency to an increasingly fragmented AI ecosystem. Announced at KubeCon + CloudNativeCon North America 2025, the program establishes open, community-driven standards to ensure AI applications run reliably and portably across different Kubernetes platforms. VMware by Broadcom's vSphere Kubernetes Service (VKS) is among the first platforms to achieve certification. In an interview with The New Stack, Broadcom leaders Dilpreet Bindra and Himanshu Singh explained that the program applies lessons from Kubernetes' early evolution, aiming to reduce the "muddiness" in AI tooling and improve cross-platform interoperability. They emphasized portability as a core value: organizations should be able to move AI workloads between public and private clouds with minimal friction. VKS integrates tightly with vSphere, using Kubernetes APIs directly to manage infrastructure components declaratively. This approach, along with new add-on management capabilities, reflects Kubernetes' growing maturity. According to Bindra and Singh, this stability now enables enterprises to trust Kubernetes as a foundation for production-grade AI. Learn more from The New Stack about Broadcom's latest updates with Kubernetes: Has VMware Finally Caught Up with Kubernetes? VMware VCF 9.0 Finally Unifies Container and VM Management. Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The new VMUG Advantage VCP portal is going live over the next two weeks. Eric, Bob, and Corey discuss the release and how to get your VCF 9 tokens. If you need to refresh your license, that's supported too. Great show and good community participation.
Dive into KubeVirt with the vBrownBag crew and guest Eric Shanks!
Stuart Carrera (Senior Consultant, Mandiant Consulting) joins host Luke McNamara to discuss how threat actors are increasingly targeting the VMware vSphere estate, and leveraging this environment to conduct extortion and data theft. Stuart details why this has become an attractive target, and ways organizations can better engineer detections to respond to this activity.
https://cloud.google.com/blog/topics/threat-intelligence/defending-vsphere-from-unc3944
https://cloud.google.com/blog/topics/threat-intelligence/vsphere-active-directory-integration-risks
Recently a new white paper was released on the topic of latency-sensitive workloads. I invited Mark Achtemichuk (X, LinkedIn) to the show to go over the various recommendations and best practices. Mark highlights many important configuration settings, and recommends that everyone not only read the white paper but also the vSphere 8 performance documentation. Also, his VMware Explore session comes highly recommended; make sure to watch it!
Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
For episode 087 of the podcast, I invited Deji Akomolafe to the show to discuss Microsoft on VMware VCF and vSphere. Deji shares his experiences virtualizing and running Microsoft apps on top of VCF and vSphere and discusses some of the best practices and constraints. Whitepapers and tools discussed in the episode: Protecting mission critical workloads with Live Site Recovery Microsoft SQL on vSphere best practices Diskspd for bench Explore session - Improving Workload Availability Using vMotion Application Notification Explore session - Architecting Your Microsoft SQL Server Workloads on VMware Cloud Foundation Explore session - Improving Workloads Availability on VCF with vMotion Apps Notification Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
Brandon Frost shares his experience with memory tiering and the use of Intel Optane in his large private cloud environment. He explains how memory tiering helped optimize memory usage and reduce costs by offloading cold memory to a non-active tier. He also discusses the challenges of upgrading memory and the benefits of using NVMe-based tiering. Brandon expresses his interest in Project Peaberry, a memory tiering offloading card, and its potential use cases. He highlights the importance of his team in developing capacity data gathering and analytics solutions.
Takeaways:
Memory tiering helps optimize memory usage and reduce costs by offloading cold memory to a non-active tier.
Using NVMe-based tiering provides flexibility and scalability in memory allocation.
Project Peaberry, a memory tiering offloading card, offers potential benefits in terms of performance and cost-effectiveness.
Effective capacity planning requires robust data gathering and analytics solutions.
Brandon Frost emphasizes the importance of his team in developing and implementing memory optimization strategies.
Links:
VMware Explore session by Brandon and Arvind on memory tiering
Memory Tiering blog on vmware.com
Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
Arvind Jagannath, a platform product manager at Broadcom, discusses memory tiering and its benefits for customers with Duncan. He explains that enterprise applications are becoming more memory-bound and require better performance. Memory tiering helps address the CPU-to-memory imbalance that many customers face, allowing for better CPU utilization and reducing costs. Arvind also mentions the challenges of expanding memory and the need for a solution like memory tiering. He introduces Project Capitola, which has evolved into NVMe-based memory tiering, and discusses the benefits and recommendations for using NVMe devices. Arvind also touches on the integration of memory tiering with advanced vSphere features like DRS and HA, and hints at future developments with Project Peaberry and CXL.
Takeaways:
Enterprise applications are becoming more memory-bound and require better performance.
Memory tiering helps address the CPU-to-memory imbalance and improves CPU utilization.
NVMe-based memory tiering provides additional memory capacity and reduces costs.
Integration with advanced vSphere features like DRS and HA is being developed.
Future developments include Project Peaberry and the use of CXL for intelligent devices.
Links:
Explore Session - Memory Tiering: Power Your Server Infrastructure With Memory Innovations [VCFB1198LV]
Yellow-Bricks.com - Memory Tiering Blog
SNIA Recording on Memory Tiering and CXL accelerators
Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
In this conversation, Katarina Brookfield discusses her career trajectory and her current role at Broadcom. She shares defining moments in her career, including her experience working on the Black Sea Maritime Archaeology project. The conversation then shifts to the newly announced vSphere IaaS control plane and its benefits. Katarina explains that the control plane provides a comprehensive solution for deploying workloads, including additional services like storage provisioning and load balancing. The conversation also covers the self-service nature of the control plane, the different interfaces for consumers and admins, and the integration of HashiCorp Packer for building and customizing VM images. The TKG service, which allows for the deployment of managed Kubernetes clusters, is also discussed, highlighting its ease of use and integration with vSphere. The conversation concludes with a discussion of the new features in the latest version of the TKG service, including cluster auto-scaling and the decoupling of TKG from vCenter.
Takeaways:
The vSphere IaaS control plane provides a comprehensive solution for deploying workloads, including additional services like storage provisioning and load balancing.
The control plane offers a self-service experience for consumers, allowing them to easily deploy the services they need.
Different interfaces, including APIs, CLI, and UI, cater to the preferences of different users, making it accessible to both admins and consumers.
The integration of HashiCorp Packer allows for the building and customization of VM images, providing flexibility and automation.
The TKG service simplifies the deployment of managed Kubernetes clusters, making it accessible to users with little Kubernetes experience.
The latest version of the TKG service decouples it from vCenter, allowing for faster delivery of new Kubernetes versions.
New features in the TKG service include cluster auto-scaling and the integration of HashiCorp Packer for building and customizing VM images.
Chapters:
00:00 - Kat's Career Trajectory and the Role of Defining Moments
09:20 - The Comprehensive Solution of the vSphere IaaS Control Plane
11:02 - Enabling Self-Service and Catering to Different User Preferences
18:14 - Flexibility and Automation with HashiCorp Packer Integration
22:47 - Simplifying Kubernetes Deployment with the TKG Service
29:14 - Decoupling TKG from vCenter for Faster Delivery of Kubernetes Versions
38:36 - New Features in the Latest Version of the TKG Service
Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
Summary: Ala Dewberry introduces Project Marigold, a plugin that aims to make GPUs a first-class citizen on vSphere by providing visibility and observability of GPU metrics at the cluster and data center level. The plugin helps infrastructure administrators manage GPUs more strategically and enables data scientists and application developers to have a frictionless experience. Ala also discusses the use cases for private AI and encourages users to give feedback on the plugin.
Takeaways:
Defining moments in our careers can shape our trajectory and open up new opportunities for growth and learning.
Project Marigold aims to make GPUs a first-class citizen on vSphere by providing visibility and observability of GPU metrics at the cluster and data center level.
The plugin helps infrastructure administrators manage GPUs more strategically and enables data scientists and application developers to have a frictionless experience.
Private AI allows organizations to leverage their data centers for AI workloads that require data privacy, security, and compliance.
Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
Recently Broadcom announced the latest updates to vSphere 8. This new release includes updates that will help enhance operational efficiency for IT admins, supercharge the performance of demanding workloads, and accelerate the pace of innovation for DevOps engineers, developers, and anyone else that can benefit from self-service access to infrastructure services, in a secure and compliant manner. On this episode of the Virtually Speaking Podcast we welcome Féidhlim O'Leary to walk us through the details of this latest release.
Links Mentioned:
VCF Landing Page
What's New in vSphere 8 Update 3?
Announcing VMware vSphere Foundation 5.2
VMware vCenter Server 8.0 Update 3 Release Notes
Embracing Change with VMware vSphere Foundation
VMware vSphere Foundation (VVF) Licensing
Virtually Speaking YouTube Page
Virtually Speaking Podcast
vSAN on TechZone
The Virtually Speaking Podcast
The Virtually Speaking Podcast is a technical podcast dedicated to discussing VMware topics related to private and hybrid cloud. Each week Pete Flecha and John Nicholson bring in various subject matter experts from within the industry to discuss their respective areas of expertise. If you're new to the Virtually Speaking Podcast check out all episodes on vspeakingpodcast.com and follow on Twitter/X @VirtSpeaking
Comdivision CEO Yves Sandfort joins us to talk about the customer journey moving from vSphere to the full VCF stack, acceptance, and moving forward.
Yves Sandfort, Comdivision CEO, gives his perspective on progress as customers move to VCF, AI, and our cloud offerings.
Continuing our special 10-part series on the Virtually Speaking Podcast, "Exploring VMware Cloud Foundation": in Episode 4, titled "VCF Compute", Himanshu Singh, Director of vSphere Product Marketing, navigates us through the spectrum of vSphere editions, highlighting their adaptability for diverse customer needs. He then showcases the enhanced value proposition of vSphere within VMware Cloud Foundation, harnessing the synergy with NSX and Aria Automation to elevate private cloud infrastructures. Drawing from the essence of VMware vSphere, Himanshu emphasizes its role as the enterprise workload engine, integrating cutting-edge cloud infrastructure technology with DPU- and GPU-based acceleration to amplify workload performance. vSphere optimizes IT environments, bolstering availability, simplifying lifecycle management, and streamlining maintenance for heightened operational efficiency. Moreover, it establishes an intrinsically secure infrastructure engine, fortified out-of-the-box and complemented by straightforward hardening guidance for compliance adherence.
Links Mentioned:
VCF Landing Page
Announcing General Availability of VMware Cloud Foundation 5.1.1
VCF Webinars
VCF YouTube Page
Virtually Speaking YouTube Page
Virtually Speaking Podcast
Watch the Entire Series:
Ep 01: Inside the Private Cloud
Ep 02: What's Inside
Ep 03: The Cloud Admin Journey
Ep 04: VCF Compute
Ep 05: VCF Storage
Ep 06: VCF Networking
Ep 07: A Cloud Management Experience
Ep 08: VMware Private AI
Ep 09: Data Services Manager
Ep 10: VMware vDefend
The Virtually Speaking Podcast
The Virtually Speaking Podcast is a technical podcast dedicated to discussing VMware topics related to private and hybrid cloud. Each week Pete Flecha and John Nicholson bring in various subject matter experts from VMware and from within the industry to discuss their respective areas of expertise. 
If you're new to the Virtually Speaking Podcast check out all episodes on vspeakingpodcast.com and follow on Twitter/X @VirtSpeaking
In Episode 5 of the series "Exploring VMware Cloud Foundation" titled 'VCF Storage,' we dive into the dynamic world of storage solutions within VMware Cloud Foundation. Our guest, Rakesh Radhakrishnan, Senior Director of Product Management, expertly navigates through the latest storage offerings and their benefits within the new Go-to-Market strategy for VCF. Rakesh begins by dissecting the array of storage offerings integrated into the revamped VCF platform, shedding light on their diverse functionalities and advantages. He elaborates on how these innovations elevate storage capabilities within VCF, enhancing performance, scalability, and agility for users. Throughout the episode, Rakesh guides us through a compelling customer journey, illustrating the seamless transition from vSphere to VCF from a storage perspective. By highlighting real-world scenarios and best practices, he illuminates the path for customers looking to leverage VCF's storage capabilities to their fullest potential. Furthermore, Rakesh unveils exciting new innovations in storage within VCF, offering a glimpse into the future of data management and storage infrastructure. He shares insights into the strategic direction for storage within VCF, outlining VMware's vision for evolving storage solutions to meet the ever-expanding needs of modern cloud environments. Join us on this Virtually Speaking Podcast series as we unravel the intricacies of VCF Storage with Rakesh Radhakrishnan, exploring the transformative potential of storage innovations within VMware Cloud Foundation. 
Links Mentioned:
VCF Landing Page
Announcing General Availability of VMware Cloud Foundation 5.1.1
VCF Webinars
VCF YouTube Page
Virtually Speaking YouTube Page
Virtually Speaking Podcast
vSAN on TechZone
Watch the Entire Series:
Ep 01: Inside the Private Cloud
Ep 02: What's Inside
Ep 03: The Cloud Admin Journey
Ep 04: VCF Compute
Ep 05: VCF Storage
Ep 06: VCF Networking
Ep 07: A Cloud Management Experience
Ep 08: VMware Private AI
Ep 09: Data Services Manager
Ep 10: VMware vDefend
The Virtually Speaking Podcast
The Virtually Speaking Podcast is a technical podcast dedicated to discussing VMware topics related to private and hybrid cloud. Each week Pete Flecha and John Nicholson bring in various subject matter experts from VMware and from within the industry to discuss their respective areas of expertise. If you're new to the Virtually Speaking Podcast check out all episodes on vspeakingpodcast.com and follow on Twitter/X @VirtSpeaking
VMware recently released vSphere and vSAN 8.0 Update 3, and of course, we had to invite Feidhlim, Jason, and Pete back on the show to discuss what's new in these releases. There's awesome new functionality released and some great enhancements, so make sure to listen to the full episode.
Key Takeaways:
vSphere 8.0 Update 3 introduces the vSphere Live Patch update path, which allows for patching ESXi hosts without evacuating VMs or entering full maintenance mode.
Improvements in GPU functionality include the ability to use two DPUs in an ESXi host for availability, better support for vGPUs with different profiles and memory sizes, and simplified activation of GPU mobility with DRS.
The vSphere Cluster Service (vCLS) has been re-architected to reduce resource consumption and improve rollback mechanisms.
The 8.0 Update 3 release introduces stretched vVols, which customers have been asking for, and support for stretched fault tolerance.
There are enhancements in vVols, including unmap support for NVMe over Fabrics.
The updates in NVMe over Fabrics provide faster data migration and cloning.
NFS enhancements include VMK port binding and support for NFS version 4.1.
vSAN 8.0 U3 introduces new features and enhancements in flexible topologies, agile data protection, and enhanced management.
The support for a stretched cluster arrangement in VCF allows customers to take full advantage of ESA and improve performance, storage efficiency, and resilience.
The full support of vSAN Max as principal storage within a workload domain enables customers to maintain a centralized shared storage model while leveraging the capabilities of vSAN.
vSAN data protection allows users to create snapshots based on groups of VMs, set snapshotting schedules, and easily recover VMs without them being part of the inventory.
Enhancements in alerting capabilities for NVMe storage devices and proactive hardware management provide better visibility and intelligence about the health and wellbeing of storage devices.
Follow us on Twitter for updates and news about upcoming episodes: https://twitter.com/UnexploredPod. Last but not least, make sure to hit that subscribe button, rate wherever possible, and share the episode with your friends and colleagues!
Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
We interview Rutger and Luis covering their trending SDDC Lab version 6, described in a blog article that can be found at https://rutgerblom.com/2024/01/26/sddc-lab-v6-released/. The SDDC.Lab project is a series of Ansible playbooks that perform fully automated deployments of nested VMware Software-Defined Data Center environments called pods. Each pod consists of solutions like vSphere, vSAN, NSX, Tanzu, the NSX Advanced Load Balancer, Aria Operations for Logs, and a VyOS router.
In this conversation, Duncan and Cormac Hogan discuss VMware's Data Services Manager (DSM) and its role in offering data services in a full-stack private cloud. They cover topics such as the use cases for DSM, the integration with Kubernetes, the support for different databases, the automation capabilities, and the licensing model. Cormac highlights the features of DSM, including lifecycle management, backups, scaling, monitoring, and advanced settings. He also mentions the upcoming release of new features and additional data services.
Takeaways:
Data Services Manager (DSM) is a VMware product that offers data services in a full-stack private cloud.
DSM integrates with Kubernetes and allows VI administrators to maintain control of vSphere resources while offering data services.
DSM supports databases such as Postgres and MySQL, with support for other data services like AlloyDB in tech preview.
DSM provides features such as lifecycle management, backups, scaling, monitoring, and advanced settings.
DSM is included in VMware Cloud Foundation (VCF) and support can be added through the Private AI Foundation add-on.
Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
In this episode, Suresh Madhu, Lead Architect for Pure Storage, walks us through new innovations in the Pure Storage Disaster Recovery portfolio & gives us a live demo of the Pure Protect DRaaS platform! __________ Chapters 00:00 - Introduction 01:33 - The landscape today 07:46 - Data Resiliency Strategies 12:21 - What is Pure Protect // DRaaS? 23:03 - Solution Overview // DRaaS for vSphere to AWS Native 50:24 - Live Demo! __________ Resources https://www.purestorage.com/ https://www.linkedin.com/in/suresh-madhu-9544181/
Welcome back, long-awaiting listeners, as your favorite IT podcast with a healthy dose of empathy, Data Center Therapy, returns after a seven-month hiatus. It's been too long, we know, but we're back and we're recharged so we can bring you the first of many new episodes with exciting, topical and relevant content. Thanks for joining us!
On this episode, your intrepid hosts, Mr. Matt "been through the desert on a horse with no name" Cozzolino and Mr. Matt "Call me Heisenberg" Yette share Cozzo's adventures hiking from Supai to Havasupai Falls in the great state of Arizona, and talk all things Broadcom, given the big news of the acquisition of VMware completed since the last episode aired.
While sharing their thoughts on the whole VMware ecosystem and the changes, the DCT crew muse about:
The new tiers of subscription licensing that replace many of the old VMware SKUs,
The changes in licensing from sockets to cores,
The aborted licensing change back in the VMware days regarding vRAM,
The "second day" about-face on ROBO licensing,
Viable alternatives to vSphere,
The spinning off of the Horizon technology to the new firm named Omnissa,
and much, much more.
As a reminder, IVOXY are hosting a vSphere 8 Advanced Class in just two weeks, and we're developing classes and workshops for Disaster Recovery and Aria Operations (formerly VMware vRealize Operations). If you're interested in Matt Cozzolino's Networking for Server Admins Ask Me Anything on June 20th at 11AM Pacific, register at: https://ivoxy.com/ama-networkingforserveradmins. If you're looking for the DirtFish 2024 event registration, it may be found at: https://ivoxy.com/dirtfish2024. As always, if you enjoy Data Center Therapy, please tell three friends and be sure to like, share and subscribe wherever you get your podcasts. Thanks for your patience, your attention, and we eagerly look forward to sharing more in 2024 with you all on the next episode of Data Center Therapy!
In this episode of Virtually Speaking, Pete and John dive deep into the transformative impact of Data Processing Units (DPUs) on virtual infrastructure with special guest Dave Morera from vSphere Technical Marketing. They explore how vSphere Distributed Services Engine leverages DPUs to modernize infrastructure by offloading critical functions from the CPU. This advancement enables significant resource savings, accelerated networking, and enhanced workload security. Dave explains how the integration of DPU lifecycle management into vSphere simplifies operations, reduces the need for third-party tools, and strengthens security with agentless controls. Tune in to learn how DPUs are enhancing performance, improving workload consolidation, and reducing total cost of ownership while simplifying infrastructure management and boosting security.
Summary: For this special edition of the podcast Duncan invited Michael Roy to discuss the latest VMware Workstation and VMware Fusion announcements. VMware Workstation and Fusion are desktop hypervisor products that allow users to run virtual machines on their PC or Mac. Starting today, Workstation and Fusion commercial licenses will only be available through annual subscriptions. The price for both products is now $199 per year. The free versions of Fusion Player and Workstation Player are being discontinued, but the Pro versions will be available for free for personal use. Support for personal use products will be community-based, while commercial users will have support included in their subscription. The focus of future innovation will be on the integration between vSphere and Workstation/Fusion, providing a local virtual sandbox for learning, development, and testing.
Takeaways:
VMware Workstation and Fusion are desktop hypervisor products for running virtual machines on PC and Mac.
Commercial use of Workstation and Fusion is shifting from perpetual licenses to annual subscriptions.
The free versions of Fusion Player and Workstation Player are being discontinued, but the Pro versions will be available for free for personal use.
Support for personal use products will be community-based, while commercial users will have support included in their subscription.
Future innovation will focus on integrating vSphere with Workstation and Fusion to provide a local virtual sandbox for learning, development, and testing.
Links:
Announcement Blog
The Register article
Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
In episode 073 Duncan and Pete discuss various updates and changes related to vSAN, including ReadyNode configurations, licensing, vSAN Max, capacity reporting, and compression ratios. They highlight the improvements in compression ratios with vSAN ESA, which can result in significant space efficiency gains. They also discuss the use cases for vSAN Max and vSAN HCI, as well as the flexibility in making changes to ReadyNode configurations. Overall, they emphasize the ongoing development and exciting future of vSAN and VMware Cloud Foundation.
Takeaways:
vSAN ESA offers improved compression ratios, with an average of 1.5x and some customers achieving 1.7x or better.
vSAN Max is a centralized shared storage solution for vSphere clusters, providing storage services to multiple vSphere clusters.
Customers can choose between vSAN Max and vSAN HCI based on their needs, such as independent scaling of storage and compute, separate lifecycle management, extending the life of existing vSphere clusters, or specific application requirements.
Changes in ReadyNode configurations for vSAN Max have reduced the minimum number of hosts required and lowered the hardware requirements, making it more accessible for smaller enterprises.
Capacity reporting in vSAN has been improved with the introduction of L0FS overhead, providing more accurate information on capacity usage.
vSAN ESA's improved compression ratios, combined with RAID 5 or RAID 6 erasure coding, can result in significant space efficiency gains compared to the original storage architecture.
Ongoing development and updates are expected in vSAN and VMware Cloud Foundation, with exciting new capabilities on the horizon.
Links:
https://core.vmware.com/blog/smaller-vsan-esa-readynodes-accommodate-vmware-vsphere-foundations-trial-capacity-capability
https://docs.vmware.com/en/VMware-vSphere/8.0/rn/vsphere-esxi-80u2b-release-notes/index.html
https://core.vmware.com/blog/improved-capacity-reporting-vmware-cloud-foundation-51-and-vsan-8-u2
https://core.vmware.com/blog/greater-flexibility-vsan-max-through-lower-hardware-and-cluster-requirements
Follow us on X for updates and news about upcoming episodes: https://x.com/UnexploredPod. Last but not least, make sure to hit that subscribe button and share the episode with your friends and colleagues!
Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
For some strange reason, "maintenance" has been in the news quite a bit lately. Is there ever a time when maintenance is enjoyable, or appreciated?
SHOW: 814
SHOW TRANSCRIPT: The Cloudcast #814
SHOW VIDEO: https://youtube.com/@TheCloudcastNET
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS"
SHOW NOTES:
AWS increased the price of longer-running EKS clusters by 6x
Broadcom changes VMware licensing from perpetual to subscription
Broadcom offers security patches to perpetual license customers
Increasing the Kubernetes support window to 1 year
Discovering the XZ backdoor (Oxide and Friends, podcast)
IS MAINTENANCE EVER APPRECIATED OR ENJOYABLE?
Spent the day surrounded by maintenance activities (oil, AC, power-wash)
The costs of maintenance are real, and so is the opportunity cost
Maintenance often goes unappreciated and unseen
Naming: Release Notes, Technical Debt, Chaos Engineering
TECHNICAL DEBT VS. MAINTENANCE
Should we encourage a lack of maintenance vs. innovation as a priority?
Should we encourage active maintenance with lower hard costs?
Is there a way to put respect on maintenance? (e.g. OSS maintainers)
Do we undervalue maintenance (e.g. Backup/Recovery, Disaster Recovery, etc.)?
What maintenance best practices do you use? What are the good and bad of them?
FEEDBACK?
Email: show at the cloudcast dot net
Twitter: @cloudcastpod
Instagram: @cloudcastpod
TikTok: @cloudcastpod
In this episode of the Virtually Speaking Podcast, Pete and John sat down with Chen Wei, the creator of HCIBench, from VMware by Broadcom. HCIBench streamlines HCI benchmarking, simplifying a once-complex process. Chen Wei shared HCIBench's journey, emphasizing its user-friendly interface, which caters to both novices and experts. We explored its technical architecture and discussed key metrics like throughput and latency crucial for assessing HCI performance accurately. Our conversation also touched on HCIBench's collaborative development model, driven by user feedback and community contributions. Looking ahead, Chen discussed upcoming features and the tool's evolving role in the HCI industry. Tune in to gain insights into HCI benchmarking and discover how HCIBench is shaping the future of infrastructure optimization. Links Mentioned: Download HCIBench - https://via.vmware.com/hcibench Blog: How do I Identify if I have bad performance Reference Architectures - https://core.vmware.com/reference-architectures The Virtually Speaking Podcast The Virtually Speaking Podcast is a technical podcast dedicated to discussing VMware topics related to storage and availability. In each episode, Pete Flecha and John Nicholson bring in various subject matter experts from VMware and within the industry to discuss their respective areas of expertise. If you're new to the Virtually Speaking Podcast check out all episodes on vSpeakingPodcast.com and follow on Twitter @VirtSpeaking
Welcome to episode 249 of the CloudPod Podcast - where the forecast is always cloudy! This week, Justin and Ryan put on their scuba suits and dive into the latest cloud news, from Google Gemini's "woke" woes, to Azure VMware Solution innovations, and some humorous takes on Reddit and Google's unexpected collaboration. Join the conversation on AI, storage solutions, and more this week in the Cloud! Titles we almost went with this week: Gemini Has Gone Woke? Uhhh...ok. A big thanks to this week's sponsor: We're sponsorless this week! Interested in sponsoring us and having access to a specialized and targeted market? We'd love to talk to you. Send us an email or hit us up on our Slack Channel.
General News
01:48 DigitalOcean beats expectations under the helm of new CEO Paddy Srinivasan
Quick earnings chat. DigitalOcean, under new CEO Paddy Srinivasan, reported earnings of 44 cents per share, well ahead of Wall Street's target of 37 cents per share. Revenue growth was a little sluggish at 11% more than a year earlier, but the company's $181 million in reported sales still beat analysts' expectations. Full year revenue was $693M. We're really glad to see the business is still going, and instead of going back on-premise, we think it's a viable option for many workloads, so don't sleep on them.
02:46 Ryan - "I like that, you know, while they are very focused on, you know, traditional compute workloads, you can still see them dip their toes into managed services and, and, um, their interaction with the community and documentation of how to do things.
I think it's really impactful."
03:34 VMware moves to quell concern over rapid series of recent license changes
As we have reported multiple times on the VMware shellacking they are doing to the customers, VMware has released a blog post trying to convince you that they're **not** screwing you. Broadcom has realigned operations around the VMware Cloud Foundation private cloud portfolio and the data center-focused VMware vSphere suite, and no longer sells discrete products such as the vSphere hypervisor, vSAN virtual storage and NSX network virtualization software. They also are eliminating perpetual licensing in favor of subscription-only pricing, with VCF users getting vSAN, NSX and the Aria management and orchestration components bundled whether you want them or not. Broadcom says this is about focusing on best-of-breed silos, and not disparate products without an integrated experience.
Will Broadcom's bold move with VMware's licensing leave your budget on cloud nine or bring it crashing back down to earth? This episode is a whirlwind tour through the cost conundrum shaking the foundations of VMware Cloud Foundation's license portability, and we do it all with the cheeky banter you've come to love. Plus, we're not shy about calling out the elephant in the room: the industry's skeptical eye on the promised TCO reductions. So buckle up, tech enthusiasts, as we dissect just how the Broadcom-VMware alliance is reshaping the game for everyone from fledgling startups to tech goliaths. The virtualization space is at a crossroads, and VMware's path is looking as rugged as the surface of Mars. Say farewell to the free ride with the vSphere hypervisor ESXi and hello to potential new horizons with contenders like Proxmox and Nutanix. As we explore the ripples of VMware's licensing labyrinth, we also cast a spotlight on the startling layoffs at Cisco - no easy feat for a company that's been a bedrock in the tech landscape. In this cheeky chat, we're spilling the tea on Cisco's strategic shuffle and musing about how Nvidia's astronomical growth could be rewriting the rulebook for tech titans. Who needs a crystal ball when you've got the inside scoop on the cloud market's future? In the final stretch, we're breaking down the Aviatrix report's revelations on cloud cost optimization and why the big CSPs might be keeping those purse strings a bit too tight. Get ready for a lively debate on Microsoft's potential to outpace AWS by 2026 thanks to its ecosystem integration strategy.
With our usual mix of sass and savvy, we promise you won't look at the cloud - or your cloud budget - the same way again after tuning into this episode of Cables to Clouds.
Check out the Fortnightly Cloud Networking News
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on Twitter: https://twitter.com/cables2clouds
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
Art of Network Engineering (AONE): https://artofnetworkengineering.com
There are two new tools to help understand and calculate the required subscription capacity for the new VMware Cloud Foundation (VCF) and VMware vSphere Foundation (VVF) offerings, which are licensed based on physical CPU cores for compute and total raw physical storage (TiBs) for vSAN. On this episode of the Virtually Speaking Podcast Pete and John welcome William Lam to discuss the details of these new scripts, demonstrated with a live demo.
Links Mentioned:
WilliamLam.com
VMware KB 95927 - Counting Cores for VMware Cloud Foundation and vSphere Foundation and TiBs for vSAN
VMware KB 96426 - License Calculator for VMware Cloud Foundation, VMware vSphere Foundation and VMware vSAN
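For listeners who want the gist of the core-counting logic before running the scripts: compute licensing is counted per physical core, with a per-CPU minimum. A minimal sketch, assuming the commonly cited 16-core minimum per physical CPU; the KBs and William Lam's scripts above are the authoritative reference.

```python
def vcf_core_licenses(hosts, min_cores_per_cpu=16):
    """Count subscription cores for a set of hosts.

    hosts: list of (cpu_sockets, cores_per_cpu) tuples.
    Assumes a per-CPU minimum core count (16 here) - an assumption
    based on commonly cited terms; verify against the VMware KBs.
    """
    total = 0
    for sockets, cores_per_cpu in hosts:
        # Each physical CPU is counted at its core count, floored
        # at the per-CPU minimum.
        total += sockets * max(cores_per_cpu, min_cores_per_cpu)
    return total

# Example: four dual-socket hosts with 12-core CPUs. Each CPU counts
# as 16 cores because of the minimum: 4 hosts * 2 CPUs * 16 = 128.
print(vcf_core_licenses([(2, 12)] * 4))  # → 128
```

A 32-core CPU, by contrast, is counted at its full 32 cores, since it exceeds the minimum.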
On this week's episode I do a roundup of this month's Windows Updates, get into the recent VMware announcement of the end of the free vSphere hypervisor, and much more! Reference Links: https://www.rorymon.com/blog/vmware-pulls-free-hypervisor-new-cvad-ltsr-patch-tuesday-news/
Cohesity is set to acquire the data protection assets of Veritas in a huge deal. Cohesity will be picking up the NetBackup portfolio as well as SaaS offering Alta. The move is seen as a huge bolster to Cohesity both in their cloud offerings as well as traditional on-prem enterprise backup and data protection. The deal will merge the two companies into a $7 billion combined entity. The remaining parts of Veritas that aren't purchased will be rebranded as DataCo.
Time Stamps:
0:00 - Welcome to the Rundown
0:50 - NetApp and SpectraLogic Team Up for On-Prem Archival Storage
4:33 - Flipper Zero is Canada's Number One Enemy
9:18 - AI Continues to Shape Google Gemini
13:42 - Free VMware vSphere Hypervisor Ended by Broadcom
17:56 - Alcion Announces New MSP Partner Program
20:36 - Cohesity to Acquire Veritas Technologies
32:17 - The Weeks Ahead
34:10 - Thanks for Watching
Follow our Hosts on Social Media
Tom Hollingsworth: https://www.twitter.com/NetworkingNerd
Stephen Foskett: https://www.twitter.com/SFoskett
Follow Gestalt IT
Website: https://www.GestaltIT.com/
Twitter: https://www.twitter.com/GestaltIT
LinkedIn: https://www.linkedin.com/company/Gestalt-IT
#Rundown, #Storage, #AI, #Data, #vSphere, #Gemini, #Security, #AIFD4, #NFD34, #XFD11, @Cohesity, @VeritasTechLLC, @AlcionHQ, @Broadcom, @VMware, @Google, @GoogleCloud, @Flipper_Zero, @NetApp, @SpectraLogic, @TheFuturumGroup, @GestaltIT, @SFoskett, @NetworkingNerd
In this episode, join us as we delve into the intricate world of Platform Engineering with Aparna Subramanian, Director of Production Engineering at Shopify. Discover how Shopify, a powerhouse in e-commerce, masters the art of scaling platform engineering. Gain invaluable insights into their strategies, innovations, and lessons learned while navigating the complexities of sustaining and evolving a robust infrastructure to support millions, even through special peak events like Black Friday and Cyber Monday. If you're keen on understanding the backbone of a thriving online platform, don't miss out on this episode. Aparna started her career as a Software Engineer and has spent most of her almost two decades of technology experience specializing in Infrastructure and Data Platforms. In her current role she leads Shopify's Cloud Native Production Platform. Previously, she was Director of Engineering at VMware, where she was a founding member of Tanzu on vSphere, a Kubernetes platform for the hybrid cloud. She also serves as co-chair of the "CNCF End User Developer Experience" SIG and as a member of the CNCF End User technical advisory board. The episode was live-streamed on 11 January 2024 and the video is available at https://www.youtube.com/watch?v=6ShtsTTUizI
OpenObservability Talks episodes are released monthly, on the last Thursday of each month, and are available for listening on your favorite podcast app and on YouTube. We live-stream the episodes on Twitch and YouTube Live - tune in to see us live, and chime in with your comments and questions on the live chat.
https://www.youtube.com/@openobservabilitytalks
https://www.twitch.tv/openobservability
Show Notes:
00:00 - Show intro & 2023 stats
01:49 - Episode and guest intro
04:15 - Shopify's scale
06:09 - Shopify's journey to Platform Engineering
08:56 - Shopify's platform structure
11:49 - division of responsibility
13:51 - golden path vs flexibility
17:58 - balancing flexibility and abstraction
19:56 - platform group structure
23:28 - handling load spikes
28:55 - FinOps in Platform Engineering
38:38 - avoiding silos and the cultural aspect
41:13 - CNCF end-user SIG and community challenges
49:24 - KubeCon Paris and guest contact
51:03 - OpenTofu reached GA
53:33 - Isovalent acquired by Cisco
55:00 - year-end summary articles
57:07 - .NET Aspire released preview2
58:58 - Episode and show outro
Resources:
Shopify Engineering Blog: https://shopify.engineering/
Performance wins at Shopify: https://www.shopify.com/news/performance%F0%9F%91%86-complexity%F0%9F%91%87-killer-updates-from-shopify-engineering
CNCF End User SIG: https://github.com/cncf/enduser-public
OpenTofu has reached GA: https://logz.io/blog/terraform-is-no-longer-open-source-is-opentofu-opentf-the-successor/?utm_source=devrel&utm_medium=devrel
Observability in 2024: https://thenewstack.io/observability-in-2024-more-opentelemetry-less-confusion/
OpenTelemetry in 2024: https://www.apmdigest.com/2024-application-performance-management-apm-predictions-4
.NET Aspire preview2: https://devblogs.microsoft.com/dotnet/announcing-dotnet-aspire-preview-2/
Socials:
Twitter: https://twitter.com/OpenObserv
YouTube: https://www.youtube.com/@openobservabilitytalks
Dotan Horovits
============
Twitter: @horovits
LinkedIn: in/horovits
Mastodon: @horovits@fosstodon
Aparna Subramanian
=================
Twitter: @aparnastweets
LinkedIn: https://www.linkedin.com/in/subramanianaparna/
Welcome to episode 240! It's a doozy this week! Justin, Ryan, Jonathan and Matthew are your hosts in this supersized episode. Today we talk about Google Gemini, the GCP sales force (you won't believe the numbers) and Google feudalism. (There's some lovely filth over here!) Plus we discuss the latest happenings over at HashiCorp, Broadcom, and the Code family of software. So put away your ugly sweaters and settle in for episode 240 of The Cloud Pod podcast - where the forecast is always cloudy! Titles we almost went with this week:
This week VMware announced the latest release of VMware Data Services Manager (DSM), version 2.0. DSM enables the provisioning of data services (e.g., databases, object stores) on vSphere infrastructure. This release builds on earlier versions of VMware Data Services Manager 1.x, but also extends the product. On this episode of the Virtually Speaking Podcast Cormac Hogan shares the details of this new release.
vSphere 8 Update 2 introduces several significant announcements to vVols such as new VASA specs, better performance and resilience, enhanced certificate management, and support for NVMeoF. On this episode of the Virtually Speaking Podcast Pete and John welcome Jason Massae and Sudhir Balasubramanian to discuss the details of this release.
Plants by nature are designed to interact with light. Satellites can measure the light reflected by plants to detect grapevine diseases before they are visible to the human eye. Katie Gold, Assistant Professor of Grape Pathology, Susan Eckert Lynch Faculty Fellow, School of Integrative Plant Science Plant Pathology and Plant-Microbe Biology Section of Cornell AgriTech, is trailblazing remote disease detection with imaging spectroscopy, also known as hyperspectral imaging. Imaging spectroscopy was developed by NASA to tell us what Mars was made out of. By turning satellites back on Earth, Katie and a team of scientists are learning how to use the light reflected back to manage grapevine viral and foliar diseases. Listen in to the end to get Katie's number one piece of advice on the importance of data management.
Resources:
Alyssa K. Whitcraft, University of Maryland
Disease Triangle of Plant Pathology
Gold Lab
Katie Gold, Cornell University
Katie Gold - Twitter
NASA AVIRIS (Airborne Visible and InfraRed Imaging Spectrometer)
NASA Acres - applying satellite data solutions to the most pressing challenges facing U.S. agriculture
NASA EMIT Satellite
NASA JPL (Jet Propulsion Laboratory)
Planet Labs
References:
Vineyard Team Programs:
Juan Nevarez Memorial Scholarship - Donate
SIP Certified - Show your care for the people and planet
Sustainable Ag Expo - The premier winegrowing event of the year - $50 OFF with code PODCAST23
Sustainable Winegrowing On-Demand (Western SARE) - Learn at your own pace
Vineyard Team - Become a Member
Get More
Subscribe wherever you listen so you never miss an episode on the latest science and research with the Sustainable Winegrowing Podcast. Since 1994, Vineyard Team has been your resource for workshops and field demonstrations, research, and events dedicated to the stewardship of our natural resources. Learn more at www.vineyardteam.org.
Transcript

Craig Macmillan 0:00
With us today is Katie Gold, Assistant Professor of Grape Pathology at the Cornell AgriTech campus of Cornell University. Thanks for being on the show.

Katie Gold 0:08
Well, thanks for having me.

Craig Macmillan 0:09
Today, we're going to talk about some really cool technology. I've been interested in it for a long time, and I can't wait to get an update on what all is happening. There's some really exciting work being done on using remote sensing for the detection of plant diseases. Can you tell us a little bit about what that research is about, what's going on in that field?

Katie Gold 0:25
Sure, what isn't going on in this field? It's a really exciting time to be here. So I guess to put it into context, we're really at this precipice of an unprecedented era of agricultural monitoring. And this comes from the intersection of, you know, hardware becoming accessible, the data analytics becoming accessible, but also investment, you know, a lot of talk of ag tech being the next big thing. And with that comes this interest in using these cool and novel data streams for disease detection. So my group specializes in plant disease sensing, it's our bread and butter, it's what we entirely focus on. And we specialize in a technology called imaging spectroscopy for disease detection. So this is also known as hyperspectral imaging. Imaging spectroscopy is the technical term. And this is a type of remote sensing that differs from, you know, radio wave remote sensing, in that it focuses on light in the visible to shortwave infrared range.

Craig Macmillan 1:13
Talk a little bit more about that. So when we talk about hyperspectral, we're looking outside of the range of radiation, essentially, that's not just light.

Katie Gold 1:24
So yes, and no. So hyperspectral is a word that describes how the light is being measured; kind of colloquially, we assign to it more meaning than it actually has.
That's why I often like to differentiate between them for explanation's sake. What hyperspectral imaging is: when we talk about using it in the full VSWIR (visible to shortwave infrared) range, these are all types of light, you know, it's all aspects of the electromagnetic radiation scale. But this spectrum of light that ranges from the visible to the shortwave infrared spans a range of about 2100 wavelengths. So to put that into context, we see visible light only. And this spans a range of wavelengths that's about 300 nanometers, from about 450 to 750. So if you think about all the richness of radiation, the subtlety in differences in color that you see in everyday light, all of that comes from those subtle interactions of, you know, specific wavelengths of light hitting that stuff and bouncing back into our eye. So now imagine having seven times more wavelengths than that, you know, we have 2100 different wavelengths that we measure. And those wavelengths that are beyond the range that we can see, the reason why we don't see them is they're less abundant, they're less emitted by our sun, but they're still present, and they still interact with the world. In particular, they interact very strongly with chemistry, such as environmental chemistry. So imaging spectroscopy was developed by NASA to tell us what Mars was made out of, then one day they're like, let's turn this baby around and point it at the Earth. And we discovered that it's quite applicable for vegetative spectroscopy. So telling us what vegetation is made of, what the composition of the Earth is. And because plant disease impacts chemistry so dramatically, plant physiology, chemistry, morphology, such a dramatic chaotic impact, it's a really excellent technology to use for early detection. So those subtle little changes that occur within a plant before it becomes diseased to the human eye, but while it's undergoing that process of disease.

Craig Macmillan 3:12
Can you expand on that point?
Exactly how does this work in terms of the changes in the plant that are being picked up by viewing certain wavelengths? What's the connection there?

Katie Gold 3:23
Consider the leaf, right. So plants are an amazing thing to remotely sense because they're designed by nature to interact with light. Now that's in contrast to skin, right, that's designed to keep light out; plants are designed to have light go in and out, etcetera. So light will enter our atmosphere from the sun, and it will do one of three things when it encounters a plant: it'll be reflected back, it will be absorbed for photosynthesis, or it will be transmitted through the plant. And the wealth of that light is actually reflected back. And that reflected light can be detected by something as distantly placed as a satellite in orbit. And how that light is reflecting off a plant is determined by the health status of a plant. So a healthy leaf, right, it's going to be photosynthesizing. This means that it's going to be absorbing red and blue light for photosynthesis, it's going to have a lot of chlorophyll, it's going to be nice, bright and green, it's going to reflect back a lot of green light. And then it's going to reflect back near-infrared light, because that is the sort of light that corresponds really well to the cellular structure of a leaf, right, so a nice healthy leaf is going to bounce back near-infrared light. Now an unhealthy plant, it's not going to be photosynthesizing properly. So it's going to be absorbing less red and blue light. Therefore, it will be reflecting more of that red light back. It's not going to have a lot of chlorophyll, so it's going to reflect back less green light. And it's not as healthy, it's not as robust, so it will reflect back less near-infrared light. So by looking at those subtle differences, and this is where we get back to that idea of hyperspectral, right, hyperspectral is a word about how a sensor is measuring light.
And hyperspectral means that a sensor is measuring light at such narrow intervals that it's a near-continuous data product. And this is in contrast to a multispectral sensor, something like NDVI, that measures light in big chunks. The power is when you have continuous data, right? You can do more complex analyses, you just have more to work with than when you have discrete data. This is what makes hyperspectral sensors more powerful: it's how they're measuring the light, and often, that they're measuring more light than our eyes can see. But that's not necessarily a given; hyperspectral sensors do not need to measure beyond the visible range, they can solely be focused on the visible range. Because once again, hyperspectral is a word about how the light is being measured, but we oftentimes kind of colloquially assign more value to it. But let's take that in combination, right. So you have a hyperspectral sensor that's measuring light in very, very narrow intervals, a near-continuous data product, you're measuring seven times more wavelengths than the eye can see, combined together. That's how this works, right? So those subtle differences in those wavebands reflect both direct interactions with plant chemistry, you know, certain wavelengths of light will hit nitrogen bonds, go wackadoo, and bounce back all crazy. Otherwise, we're making indirect inferences, right? You know, plant disease has a chaotic impact on plant health that impacts lots of areas of the spectrum. So we're not directly measuring the chemical impact, right? We're not saying, okay, well, nitrogen is down two, sugars are up three, starch XYZ; we're measuring that indirect impact.

Craig Macmillan 6:19
That's pretty amazing. And so...

Katie Gold 6:21
I think it's cool, right? Yeah.

Craig Macmillan 6:24
The idea here is that there are changes in the leaf that can be picked up in these other wavelengths that we wouldn't see until it's too late.
Katie Gold 6:34
Exactly.

Craig Macmillan 6:35
Okay. So it's a warning sign. That gives us a chance to change management.

Katie Gold 6:40
Ideally, so. Right, so it depends on the scale at which you're operating. So now here comes another level, right. So if you're considering just that one individual plant, it's different from when you're considering the whole scale of a vineyard, right; you want your sensing to be right-sized to the intervention that you're going to take. So my group works with two types of diseases primarily: we work with grapevine viral diseases, as well as grapevine foliar diseases, for example, grapevine downy mildew, which is caused by an oomycete pathogen, and grapevine powdery mildew, which is caused by a fungal pathogen. Now the sort of intervention that you would take for those two diseases is very different, right? With a viral disease, the only treatment that you have is removal; there's no cure for being infected with the virus. Now, with a fungal pathogen or an oomycete pathogen like grape downy mildew, if you detect that early, there are fungicides you can use with kickback action. Or otherwise, you might change what sort of choice you make in a fungicide, right. If you know there's an actual risk in this location, you might put your most heavy-hitting fungicides there, whereas in areas where there is no disease detected, or the risk is incredibly low, you might feel more comfortable relying on a biological, thereby reducing the impact. So given the sort of intervention you would take, we want to right-size our sensing approach for it. So with grapevine viral diseases, when the intervention has such a vast financial impact, right, removal, we want to be incredibly sure of our data.
So we focus on high spectral resolution data products for that one, ones where we have lots of wavelengths being measured with the most precise accuracy, so that we can have high confidence in that result, right? We want to give that to someone and say, hey, we are very confident this is undergoing asymptomatic infection. Now, on the other hand, with these foliar diseases, they change at such a rapid timescale that you're more benefited by having an early warning that may be less accurate, right? So you're saying, hey, this area of your vineyard is undergoing rapid change; it might be due to disease, it might be because your kid drove a golf cart through the vineyard. However, we're warning you regardless, to send someone out there and take a look and make a decision as to what you might do. Ideally, we would have high spectral resolution regardless, right? Because more spectrum is better. But the realities of the physics and the actual logistics of doing the sensing are that we don't get to do that; we have to make a trade-off between spectral, spatial and temporal resolution. So if we want rapid return, high degrees of monitoring, and we want that high spatial resolution suitable for a vineyard, we lose our spectral resolution, so we lose our confidence in that result. But our hope is that by saying, hey, this is a high area of change, and giving you that information very quickly, you can still make an intervention that will yield a successful response, right? You'll go out there and you're like, oh, yep, that's downy mildew. Otherwise, like, I'm going to take my kid's keys so he's not out here in my vineyard again. Right? So it's kind of a balancing act, right. So we have the logistics of the real world to contend with in terms of using sensing to inform management intervention.

Craig Macmillan 9:36
This technology can be used or applied at a variety of distances, if I understand, everything from proximal, like driving through a vineyard, to satellite.
Katie Gold 9:48
Oh, yeah. And we've worked with everything.

Craig Macmillan 9:50
Yeah, yeah. And everything in between. I mean, could you fly over? There's a lot of companies that do NDVIs with flyover.

Katie Gold 9:55
You can use robots like we do. We can use robots, there's all kinds of things we can do.

Craig Macmillan
What is NDVI, for the audience? Even though that's not what we're talking about, you and I keep using it.

Katie Gold
So NDVI stands for Normalized Difference Vegetation Index. It's a normalized difference between near-infrared light reflecting and red light. And it is probably the most accurate measurement we have of how green something is, and it's quite a powerful tool. As you know, we've been using NDVI for well over 50 years to measure how green the earth is from space. That's powerful. But the power of NDVI is also its downside, in that because it is so effective at telling you how green something is, it cannot tell you why something is green, or it cannot tell you why something is not green; it's going to pick up on a whole range of subtle things that impact plant health.

Craig Macmillan 10:40
Whereas the kind of work that you're doing differs from that in that it's looking at different frequencies, and a higher resolution of frequencies.

Katie Gold 10:51
Exactly. So for the most part, we do use NDVI, but we use it more as a stepping stone, a filtering step, rather than the kind of end-all be-all. Additionally, we use an index that's a cousin to NDVI called EVI, that is adjusted for blue light reflectance, which is very helpful in the vineyard because it helps you deal with the shadow effects, given the trellising system in the vineyard. But yes, exactly. We, for the most part, are looking at more narrow intervals of light than NDVI, and ranges beyond what NDVI is measuring.

Craig Macmillan 11:22
What's the resolution from space?

Katie Gold 11:24
That's a great question.

Craig Macmillan 11:25
What's the pixel size?
Katie Gold 11:27
One of the commercial satellite products we work with has half a meter resolution from space.

Craig Macmillan 11:32
Wow.

Katie Gold 11:33
Yeah, 50 centimeters, which is amazing. Yeah, that was exactly my reaction when I heard about it, I was like, I've got to get my hands on this. But as I mentioned before, right, you know, at that resolution we trade off the spectral resolution. So actually, that imagery only has four bands, so it effectively is quite similar to an NDVI sensor, though we do have a little more flexibility, we can calculate different indices with it. So we use that data product at 50 centimeters, we use three meter data products from commercial sources. And then we're also looking towards the future: a lot of my lab is funded by NASA, in support of a future satellite that's going to be launched at the end of the decade, called Surface Biology and Geology. And this is going to put a full-range hyperspectral imager into space that will yield global coverage for the first time. So this satellite will have 30 meter resolution. And it will have that amazing spectral resolution, about a 10 day return, and that 30 meter spatial size. So again, kind of mixing and matching; you don't get to optimize all three resolutions at once. Unfortunately, maybe sometime in my career I'll get to the point where I get to optimize exactly what I want, but I'm not there yet.

Craig Macmillan 12:41
And I hadn't thought about that. So there's also a time lag between when the data comes in and when it can be used.

Katie Gold 12:48
Yes.

Craig Macmillan 12:48
What are those lags like?

Katie Gold 12:50
It depends. So with some of the NASA data that we work with, it can be quite lagged, because it's not designed for rapid response. It's designed for research grade, right? So it's assuming that you have time, and it's going through a processing stage, it's going through corrections, etc.
And this process is not designed to be rapid, because it's not for rapid response. Otherwise, sometimes when we're working with commercial imagery, that can be available, if we task it, within 24 hours. So that's if I say, hey, make me an acquisition, and they do, and then within 24 hours I get my imagery in hand. Otherwise, there are delays up to seven days. But for the most part, you can access commercial satellite imagery of a scene of your choosing, generally within 24 hours, at about three meter resolution to half a meter resolution. That is, if you're willing to pay; it's not available from the space agencies.

Craig Macmillan 13:42
I want to go back to that space agency thing in a second. Talk to me about satellites, we've got all kinds of satellites flying around out there.

Katie Gold
Oh, we do.

Craig Macmillan
All kinds. Who's doing what and where and how, and what are they, and how long are they up there, and...

Katie Gold 13:58
Well, I'll talk a little bit about the satellites that my program is most obsessed with, we'll call it that. I'll first start with the commercial satellite imagery that we use. This comes from Planet Labs. They're a commercial provider, they're quite committed to supporting research usages, and we've been using their data for three years now, both their tasked imagery, which is half a meter resolution, as well as their PlanetScope data, which is three meter resolution. And we've been looking at this for grapevine downy mildew. Planet Labs, their whole thing is that they have a constellation architecture of CubeSats. So one of the reasons why satellites are the big thing right now, they are what everyone's talking about, is because we're at this point of accessibility to satellite data that's facilitated by these advances in hardware design. So one, the design of satellites: you know, we now have little satellites called CubeSats that are the size of footballs, maybe a little bit bigger.
Craig Macmillan 14:48
Oh, really?

Katie Gold 14:48
Yeah, yeah, they're cool. They're cute. You can actually, like, kids' science fair projects can design a CubeSat now. Fancy kid school projects, at least not where I was. As well as constellation architecture: this is instead of having one big satellite the size of a bus, you have something like 10 CubeSats that are all talking to each other and working together to generate your imagery. So that's how you're able to have far more rapid returns; instead of one thing circling around the planet, you have 10 of them circling a little bit off. So you're able to get imagery far more frequently at higher spatial resolution. And this has now, you know, trickled down to agriculture. Of course, you know, what did the Department of Defense have X years ago? I'm excited to see what will finally be declassified eventually, right. But this is why satellite imagery is having such a heyday. But anyway, that's the whole Planet Labs shtick: they use CubeSats and constellation design, and that's how they're able to offer such high spatial resolution imagery.

Craig Macmillan 15:44
Just real quick, I want to try to understand this. You have X units, and they're spaced apart from each other in their orbit.

Katie Gold 15:52
That's my understanding. So remember, I'm the plant pathologist here, I just use this stuff. So that's my understanding, is that the physicists, you know, in NASA speak, they classify us into three categories. They've got applications, like myself, I use data for something. You have algorithms, which is like, I study how to make satellites talk to the world, right, like make useful data out of satellites. And then there's hardware people, right, they design the satellite, that's their whole life. And I'm on the other side of the pipeline. So this is my understanding of how this works.
But yes, they have slightly different orbits, and they talk to each other very, very intimately, so that the data products are unified.

Craig Macmillan  16:33
Got it. But there's also other satellites that you're getting data from.

Katie Gold  16:37
Yes, yeah. So now going on to the other side of things. Planet Labs has lesser spectral resolution: four to eight, maybe ten bands is the most you can get from them. We're looking towards NASA Surface Biology and Geology data, and we use NASA's AVIRIS instrument family, which includes AVIRIS Next Generation as well as the brand new AVIRIS-3; that stands for the Airborne Visible/Infrared Imaging Spectrometer. Now, this is an aircraft-mounted device, but this is the sort of sensor that will be going into space. Additionally, we're just starting to play around with data from the new NASA instrument called EMIT. EMIT is an imaging spectrometer that was initially designed to study dust emission, to tell us what the dust is made out of and where it's coming from. But they've opened up the mask to allow collection over other areas. And EMIT has outstanding spectral resolution, and about 60 meter spatial resolution. It's based on the International Space...

Craig Macmillan  17:32
Station. It's located on the International Space Station?

Katie Gold  17:36
Yes, yeah. And that actually impacts how its imagery is collected. If you take a look at a map of EMIT collections, there are these stripes across the world, and that's because it's on the ISS, so it only collects imagery wherever the ISS goes. That's a little bit different from this idea of constellation architecture, with these free-living satellites floating through orbit and talking to each other.

Craig Macmillan  17:56
Are there other things like Landsat 7, Landsat 8?

Katie Gold  18:02
Oh, we're on Landsat 9, baby!

Craig Macmillan  18:04
Oh, we're on Landsat 9 now. Cool.
Katie Gold  18:05
Yeah. Yeah, Landsat 9 was successfully launched. I'm really excited about its data.

Craig Macmillan  18:10
And it's coming in?

Katie Gold  18:11
To my understanding, yes. We don't use Landsat and Sentinel data as much, since our focus is on that spectral resolution, but Landsat 9 and its partner from the European Space Agency, Sentinel-2, are truly the workhorses of the agricultural monitoring industry. Without those two satellites, we would be in a very different place in this world.

Craig Macmillan  18:32
Right, exactly. Now, you said that your work is funded partially or all by NASA?

Katie Gold  18:37
Yes, partially.

Craig Macmillan  18:38
So partially. So what is the relationship there?

Katie Gold  18:40
So before I started with Cornell... I was hired by Cornell while I was still a graduate student, and as part of their support for my early career development, they sponsored a short postdoc for me, a fellowship, they called it (I got to say "faculty fellow" and feel better about myself), at the Jet Propulsion Laboratory, where my graduate co-advisor Phil Townsend had a relationship. So I spent nine months fully immersed at JPL. People think of JPL as, you know, the rocket launchers, which they are, but some of the things they launch turn around and study the Earth, and they had the carbon and ecosystem cycling group there. So I was able to work with them, as well as the imaging spectroscopy group, for nine months. And it completely changed my entire life; it just opened up the world to me about what was possible with NASA data, what was coming for potential use of NASA data. And it really changed the trajectory of my career. I made connections, made friends, got my first graduate student from JPL, and those things have truly defined my career path.
So I work very closely with NASA, originating from that relationship. I'm also the pest and disease risk mitigation lead for the newly established domestic agriculture consortium called NASA Acres. This is NASA's most recent investment in supporting domestic agriculture. Through this consortium we're funded to continue some of our research, myself and my good colleague Yu Jiang, who's an engineer who builds me my robots. It's funding our work continuously, as well as giving us the opportunity to try to expand our approach to other domains through interactions, one-on-one collaborations with other researchers, and, importantly, work with stakeholders. And this consortium, the Acres consortium, is led by my colleague Dr. Alyssa Whitcraft, based at the University of Maryland.

Craig Macmillan  20:20
Going back to some of the things that you mentioned earlier, and I think I just didn't ask the question at the time: how often does a satellite travel over any particular point on Earth?

Katie Gold  20:32
So it depends on the type of satellite design. Is it the big one-satellite sort of design? Or is it a constellation? Or the ISS, right? I think the ISS orbits every 90 minutes, something like that. So it really depends, but there are satellites crossing overhead every moment. At night, if you ever look up into the night sky and you see a consistent light just traveling across the sky, not blinking, that's a satellite going overhead.

Craig Macmillan  20:59
Wow, that's amazing. Actually, are there applications for this technology on other crops?

Katie Gold  21:04
Oh, certainly. So yeah. Oh, absolutely.
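The "every 90 minutes, something like that" figure for the ISS checks out with Kepler's third law. A minimal sketch, using standard textbook values (Earth's gravitational parameter and an approximate 420 km ISS altitude are my assumptions, not from the interview):

```python
import math

# Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3 / mu)
MU_EARTH_KM3_S2 = 398_600.4418   # Earth's gravitational parameter (km^3/s^2)
EARTH_RADIUS_KM = 6_371.0        # mean Earth radius
ISS_ALTITUDE_KM = 420.0          # approximate ISS altitude

def orbital_period_minutes(altitude_km: float) -> float:
    a = EARTH_RADIUS_KM + altitude_km  # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH_KM3_S2) / 60

print(round(orbital_period_minutes(ISS_ALTITUDE_KM), 1))  # ~92.8 minutes
```

Roughly 93 minutes per orbit, which is why an ISS-hosted instrument like EMIT lays down the stripes of coverage described above rather than imaging everywhere at once.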
So the use of this technology for understanding vegetative chemistry was really trailblazed by the terrestrial ecologists, in particular the forest ecologists, because it's how you study things at scale. Unlike vineyards, which have nice paths between them for researchers like myself to walk between, forests are incredibly difficult to navigate, especially the ones in more remote locations. So for the past two decades they really spearheaded and trailblazed this use. I work with vineyards for the most part; I'm a grape pathologist, I was hired to support the grape industry. They saw the research I was doing and said, great, keep doing it in grapes. I'm a reformed potato and vegetable pathologist, I like to say. But there's no reason at all why the work I'm doing isn't applicable to other crops. I just happen to be doing it in grape, and I happen to really adore working with the wine and grape industry.

Craig Macmillan  21:54
Yeah, yeah, absolutely. That totally makes sense. How is this translating, or going to translate, for growers into grower practices?

Katie Gold  22:02
That's a great question. The idea is that by trailblazing these functionalities, eventually we'll be able to partner with commercial industry to bring this to growers. We want this utility to be adopted for management intervention. But there's only so much one academic lab alone can do, and my role in the world is to trailblaze the use cases and then to partner with private industry to bring it to the people at scale. But the hope is that, you know, I want every vineyard manager to be looking at aerial images of their vineyards. Every day, right? I have a vision of interactive dashboards, maps of informed risk. One day, I want to have live risk maps informed by remote sensing. And I want every vineyard manager to be as familiar with the aerial view of their vines as they are with the side view of their vines.
Right. And I think we're getting there sooner than you realize. We're really at the precipice of this unprecedented era of monitoring ability, right? And I'm really excited about what it will hold for management.

Craig Macmillan  23:02
And so you must have cooperators, I'm guessing.

Katie Gold  23:05
Oh, I do. Yes, I have wonderful cooperators.

Craig Macmillan  23:08
At this stage, it sounds like we're still kind of in a beta stage.

Katie Gold  23:13
Oh, yes, very much in the beta stage.

Craig Macmillan  23:15
So I'm guessing that you're looking at imagery and spotting areas that would suggest that there's some kind of a pathology problem, and then you're going out and ground truthing it?

Katie Gold  23:27
So yes and no. It's more of a testbed sort of case study. We have nine acres of pathology vineyards here at Cornell AgriTech in Geneva, New York. And then we do partner with cooperators; we have wonderful cooperators based out in California, as well as here in New York. But those are more of a testbed sort of thing. So we're not just monitoring vineyards and watching them and saying, ooh, a spot appears here. We're doing more of a case study where we intentionally go out and ground truth, then build those links with the imagery, because we're not quite there yet in terms of having this whole thing automated. We're still building those algorithms, building that functionality. Now we've established proof of concept, we know this works, so we're working on the proof of practicality: building robust pipelines, ones that are resilient to varying environmental and geographic conditions, to different crop varieties, and to confounding abiotic stress (that one drives us nuts). So that's the stage that we're at. But without our collaborators and our industry stakeholders who partner with us, the sort of work I do simply would not be possible, and I'm extremely grateful for their partnership.
Craig Macmillan  24:29
So what is next? What's next in the world of Katie Gold and in the world of hyperspectral plant pathology?

Katie Gold  24:34
What's next for me is, in a week I'm boarding an airplane to Europe for a jaunt; I'm giving two international keynotes at plant pathology conferences about these methods. But what I really see as next is that I really want to see the tools, the technologies, the approach that my group is using percolate through the domain of plant pathology. We're such a small discipline; there's only about 2,000 of us around the world in plant pathology, and you know, there's not even ten grape pathologists in this country. I can name every single one of them if you wanted me to, and I think I've got their numbers in my phone, really. I strongly believe we're at the precipice of such an exciting era in plant pathology, due to the availability of these imagery data streams, simply an unprecedented era. And it will be a paradigm shift in how we ask and answer questions about plant pathology, because for the first time we have accessible, accurate imagery that we can use to study plant disease at the scale at which it occurs, in the field, in real time. So I want to see these ideas percolate through, the skill sets adopted, taken up, and embraced. And we're seeing that start, you know, we're seeing that start. There's real excitement in plant pathology about the use of remote sensing, about GIS and that skill set and its value to our discipline. But I'd really like to see that expand. I think I am the first ever plant pathologist to receive funding from NASA's Earth Science Division. When I started at JPL, they would introduce me as a disease ecologist, because no one had ever heard of plant pathology. And my wonderful colleague at JPL, Brian Pavlik, who's a JPL technologist: when we started working together, he had never once been in a vineyard.
He didn't know about plant pathology; he was the one that called me a disease ecologist. And recently I heard him explain the disease triangle to someone, which is, of course, the fundamental theory of plant pathology, and I was just so proud. But it also really represented for me this real excitement, this embrace, this acknowledgement of the challenges we face in plant pathology, in these domains that otherwise had not heard of us, right? And beyond the USDA: funding from NASA, awareness from these other organizations, excitement from engineers and AI experts about solving plant disease problems. It's truly invigorating and exciting to me. That's where I see things going next, and I'm really excited about the future.

Craig Macmillan  26:51
If there was one thing that you could say to grape growers on this topic, what would it be?

Katie Gold  26:58
Oh, that's such a great question. There's so much that I want to say.

Craig Macmillan  27:01
One thing, Katie.

Katie Gold  27:04
I would say: your data is valuable, and be aware of how you keep track of your data. Keeping track of your data, keeping your data organized, just having reproducible, organized workflows will enable you to make the most out of these forthcoming technologies. It will enable you to calibrate, it will enable you to train these technologies to work better for you. But your data is valuable; don't give it away to just anyone, and be aware of it.

Craig Macmillan  27:33
I agree wholeheartedly. And I think that applies to everything, from how much time it takes to leaf an acre of ground, to how much wood you are removing when you prune, to when and how much water you're applying. Data is gold.

Katie Gold  27:49
Data is gold.

Craig Macmillan  27:50
It takes time and energy.

Katie Gold  27:52
Institutional knowledge.
For example, my field research manager, Dave Combs, has been doing this job for over 25 years. I inherited him from my predecessor, and he trained our robot how to see disease in its imagery. The goal of our robots is not to replace expertise like Dave's, but to preserve it, right, to preserve that 25 years of knowledge in a format that will live beyond any of us. So I see keeping track of your data, keeping track of that knowledge you have (you know in your vineyard where a disease is going to show up first, you know your problem areas), keeping track of that in an organized manner, annotating your datasets. I'm starting to adopt GIS in a way, just simply: here are my field boundaries. Even simply taking notes on your data sets that are timed and dated, I think, is incredibly important.

Craig Macmillan  28:38
Where can people find out more about you and your work?

Katie Gold  28:41
Well, you can visit my website, or I've got a public Twitter page where you can see me retweet cool things that I think are cool. I tweet a lot about NASA, I tweet a lot about grape disease. If you want to see pictures of dying grapes, come to my Twitter page. Cornell also regularly publishes things about me.

Craig Macmillan  28:57
Fantastic.

Katie Gold  28:58
So be sure to Google "Katie Gold Cornell." Cornell, that's the key. Yeah, "Katie Gold Cornell," or you might get an unwelcome surprise.

Craig Macmillan  29:04
And we have lots of links and stuff on the show page, so listeners, you can go there. I want to thank our guest today.

Katie Gold  29:13
Thank you so much for having me, Craig. This has been wonderful.

Craig Macmillan  29:16
Our guest today has been Katie Gold, Assistant Professor of Grape Pathology at the Cornell AgriTech campus of Cornell University.

Nearly Perfect Transcription by https://otter.ai
Last month at VMware Explore Las Vegas, VMware announced the latest updates to vSphere 8. This includes not only Version 8 Update 2 of VMware's enterprise workload platform, but also a new cloud service to which users of vSphere+ will soon have access. These updates will help enhance the operational efficiency of IT admins, supercharge the performance of demanding workloads, and accelerate the pace of innovation for DevOps engineers, developers, and anyone else who can benefit from self-service access to infrastructure services. On this episode of The Virtually Speaking Podcast, Pete and John welcome vSphere Senior Technical Marketing Architect Féidhlim O'Leary to walk through the details of this release. Read more
VMware Data Services Manager is a software solution that provides the same convenience as a public cloud DBaaS platform, but it is installed and managed on-premises, eliminating the challenges of building a custom on-premises DBaaS platform. It helps enterprise IT teams stay in control, automates day 2 operations to free up valuable DBA time, and provides app developers with the agility they need. On this episode of the Virtually Speaking Podcast, Michael Gandy and Christos Karamanolis share the details on how VMware is helping customers implement a DBaaS on vSphere and how it fits into the "Cloud Smart" strategy. Watch the video of this episode Watch all VMware Explore Recap episodes
vSphere 8 Update 2 brings significant enhancements, and the storage side is no exception. For example, there are new vVols VASA specs, better performance and resilience, enhanced certificate management, and support for NVMe-oF, to name a few. On this episode of the Virtually Speaking Podcast, Pete and John welcome Jason Massae and Naveen Krishnamurthy to discuss the details of the vSphere 8 Update 2 core storage announcements. Watch the video of this episode Watch all VMware Explore Recap episodes
At VMware Explore there were various announcements around vSphere+. Considering vSphere+ is still a relatively new offering, we wanted to make sure that we would not only cover the basics but also hear all of the intricate details from an expert like Dave Morera! Make sure to follow Dave on Twitter (https://twitter.com/GreatWhiteTec) to keep up to date with his adventures, and check out the VMware website for more documents, whitepapers, and announcements around vSphere+ here: https://core.vmware.com/vsphere#vsphere Follow us on Twitter for updates and news about upcoming episodes: https://twitter.com/UnexploredPod. Last but not least, make sure to hit that subscribe button, rate wherever possible, and share the episode with your friends and colleagues!
We invited Féidhlim O'Leary back to the show to discuss the vSphere 8.0 Update 2 announcement. What new features is VMware introducing in vSphere 8.0 U2, and how do they benefit our customers? Make sure to visit TechZone and read up on all the announcements and enhancements introduced. Follow us on Twitter for updates and news about upcoming episodes: https://twitter.com/UnexploredPod. Last but not least, hit that subscribe button, rate wherever possible, and share the episode with your friends and colleagues!
Dive into the latest on what's going on with Pure integrations for VMware with Jason Langer, Pure's Cloud Solutions Marketing lead. Jason has great insights about Pure as a former partner and spent a few years at VMware before arriving here. We discuss the current state of the market with VMware and how VMware has pervaded every workload in the datacenter (databases, data protection, and analytics) - plus how VMware has moved aggressively into the hybrid cloud space. Jason highlights his favorite aspects of Pure's solutions for VMware, including Virtual Volumes (vVols), vSphere integrations, Site Recovery Manager (SRM) and more. For more on Pure and VMware solutions, go to www.purestorage.com/vmware.
Today VMware announces the vSphere 8 Update 1 release. With this release, customers benefit from enhanced operational efficiency for admins, supercharged performance for higher-end AI/ML workloads, and elevated security across the environment. Read more
This week we discuss digital transformation at Southwest and Delta Airlines, Shopify cancels all meetings, Salesforce's M&A strategy, and A.I. is everywhere. Plus, thoughts on bike lanes… Watch the YouTube Live Recording of Episode 396 (https://youtu.be/tmm8rH9fZEE) Runner-up Titles Work trying to get on my personal calendar Traveling with an infant =BLACKSWAN(A1:G453) Socks in a Costco Can't do the business case on savings until you lose it. Pay transparency for you, not me We don't pay for things on the Internet Semper Nimbus Privatus Rundown Dutch residents are the most physically active on earth (https://twitter.com/BrentToderian/status/1611901297552396289) Digital Transformation Travel Edition Delta plans to offer free Wi-Fi starting Feb. 1 (https://www.cnbc.com/2023/01/05/delta-plans-to-offer-free-wi-fi-starting-feb-1.html) The Southwest Airlines Meltdown (https://www.nytimes.com/2023/01/10/podcasts/the-daily/the-southwest-airlines-meltdown.html) Southwest's Meltdown Could Cost It Up to $825 Million (https://www.nytimes.com/2023/01/06/business/southwest-airlines-meltdown-costs-reimbursement.html) Southwest pilots union writes scathing letter to airline executives after holiday travel fiasco (https://www.yahoo.com/now/southwest-pilots-union-writes-scathing-011720946.html) Southwest makes frequent flyer miles offer while lots of luggage remains in limbo (https://www.cnn.com/travel/article/southwest-airlines-frequent-flyer-miles-meltdown/index.html) Point of Sale: Scan and Pay (https://twitter.com/pitdesi/status/1602843962602975233?s=20&t=YdGNYzReSf4r1twJ1hRfbA) Work Life Shopify Tells Employees to Just Say No to Meetings (https://www.bloomberg.com/news/articles/2023-01-03/shopify-ceo-tobi-lutke-tells-employees-to-just-say-no-to-meetings) Netflix Revokes Some Staff's Access to Other People's Salary Information (https://apple.news/A--bGmZgJTQCgHQ-9QdWu4w) U.S.
Moves to Bar Noncompete Agreements in Labor Contracts (https://www.nytimes.com/2023/01/05/business/economy/ftc-noncompete.html) Gartner HR expert: Quiet hiring will dominate U.S. workplaces in 2023 (https://www.cnbc.com/2023/01/04/gartner-hr-expert-quiet-hiring-will-dominate-us-workplaces-in-2023.html) Netflix revokes some staff's access to other people's salary information (https://www.marketwatch.com/story/netflix-revokes-some-staffs-access-to-other-peoples-salary-information-11673384493) SFDC Salesforce: There's no more Slack left to cut (https://www.theregister.com/2023/01/10/salesforce_comment/) Salesforce to Lay Off 10 Percent of Staff and Cut Office Space (https://www.nytimes.com/2023/01/04/business/salesforce-layoffs.html) After layoffs, Salesforce CEO still blasts worker productivity (https://www.sfgate.com/tech/article/salesforce-ceo-blasts-worker-productivity-17708474.php) AI is everywhere Google execs warn company's reputation could suffer if it moves too fast on AI-chat technology (https://www.cnbc.com/2022/12/13/google-execs-warn-of-reputational-risk-with-chatgbt-like-tool.html) Microsoft and OpenAI Working on ChatGPT-Powered Bing in Challenge to Google (https://www.theinformation.com/articles/microsoft-and-openai-working-on-chatgpt-powered-bing-in-challenge-to-google?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top) Microsoft eyes $10 billion bet on ChatGPT (https://www.semafor.com/article/01/09/2023/microsoft-eyes-10-billion-bet-on-chatgpt) Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT (https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/) Relevant to your Interests 2023 Bum Steer of the Year: Austin (https://www.texasmonthly.com/news-politics/2023-bum-steer-of-year-austin/) Twitter's Rivals Try to Capitalize on
Musk-Induced Chaos (https://www.nytimes.com/2022/12/07/technology/twitter-rivals-alternative-platforms.html) On Organizational Structures and the Developer Experience (https://redmonk.com/sogrady/2022/12/13/org-structure-devx/) KubeCon + CloudNativeCon North America 2022 Transparency Report | Cloud Native Computing Foundation (https://www.cncf.io/reports/kubecon-cloudnativecon-north-america-2022-transparency-report/) Inside the chaos at Washington's most connected military tech startup (https://www.vox.com/recode/23507236/inside-disruption-rebellion-defense-washington-connected-military-tech-startup) Elon Musk Starts Week As World's Second Richest Person (https://www.forbes.com/sites/mattdurot/2022/12/12/elon-musk-starts-week-as-worlds-second-richest-person/) 10 Tesla Investors Lose $132.5 Billion From Musk's Twitter Fiasco (https://www.investors.com/etfs-and-funds/sectors/tesla-stock-investors-lose-132-5-billion-from-musks-twitter-fiasco/) Rackspace's ransomware messaging dilemma (https://www.axios.com/newsletters/axios-login-83146574-380f-4e37-965d-7fd79bce7278.html?chunk=2&utm_term=emshare#story2) Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023 (https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/) A MultiCloud Rant (https://www.lastweekinaws.com/blog/a_multicloud_rant/) Great visualization of the revenue breakdown of the 4 largest tech companies. 
(https://twitter.com/Carnage4Life/status/1603012861017862144?s=20&t=HC2UuMCHBB408xae6tZpbQ) AG Paxton's Google Suit Makes the Perfect the Enemy of the Good (https://truthonthemarket.com/2022/12/14/ag-paxtons-google-suit-makes-the-perfect-the-enemy-of-the-good/) AWS simplifies Simple Storage Service to prevent data leaks (https://www.theregister.com/2022/12/14/aws_simple_storage_service_simplified/) Creating the ultimate smart map with new map data initiative launched by Linux Foundation (https://venturebeat.com/virtual/creating-the-ultimate-smart-map-with-new-map-data-initiative-launched-by-linux-foundation/) Spotify's grand plan to monetize developers via its open source Backstage project (https://techcrunch.com/2022/12/15/spotifys-plan-to-monetize-its-open-source-backstage-developer-project/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cubGlua2VkaW4uY29tLw&guce_referrer_sig=AQAAAAlyOmdhogtX6nuQkNHQ7mVSyci6aMv7X6QwRTvS9PHGJmjO_wjCqsJXXPKI36A9MkIclSIQoHQ_dz7wJ-WzfaYQT_clMcUijiC28ZQhEau4NOcU-70wy5m0Q9LLmtvWuQbWQQEccEbQH2Lvg4_GqfnQBYNPZWRcgpx7XMLas_2R) VMware offers subs for server consolidation vSphere cut (https://www.theregister.com/2022/12/15/vsphere_plus_standard/) Senior execs to leave VMware before acquisition by Broadcom (https://www.bizjournals.com/sanjose/news/2022/12/13/three-senior-execs-to-leave-vmware.html#:~:text=Mark%20Lohmeyer%2C%20who%20heads%20cloud,Raghuram%20announced%20in%20a%20memo) China Bans Exports of Loongson CPUs to Russia, Other Countries: Report (https://www.tomshardware.com/news/china-bans-exports-of-its-loongson-cpus-to-russia-other-countries) Dropbox buys form management platform FormSwift for $95M in cash (https://techcrunch.com/2022/12/16/dropbox-buys-form-management-platform-formswift-for-95m-in-cash/) Sweep, a no-code config tool for Salesforce software, raises $28M (https://techcrunch.com/2022/12/15/sweep-a-no-code-config-tool-for-salesforce-software-raises-28m/) Twitter Aided the Pentagon in its Covert Online Propaganda Campaign 
(https://theintercept.com/2022/12/20/twitter-dod-us-military-accounts/) Okta's source code stolen after GitHub repositories hacked (https://www.bleepingcomputer.com/news/security/oktas-source-code-stolen-after-github-repositories-hacked/) Workday appoints VMware veteran as co-CEO (https://www.theregister.com/2022/12/21/workday_co_ceo/) Top Paying Tools (https://softwaredefinedtalk.slack.com/archives/C04EK1VBK/p1671635825838769) Winging It: Inside Amazon's Quest to Seize the Skies (https://www.wired.com/story/amazon-air-quest-to-seize-the-skies/) CIS Benchmark Framework Scanning Tools Comparison (https://www.armosec.io/blog/cis-kubernetes-benchmark-framework-scanning-tools-comparison/) MSG defends using facial recognition to kick lawyer out of Rockettes show (https://arstechnica.com/tech-policy/2022/12/facial-recognition-flags-girl-scout-mom-as-security-risk-at-rockettes-show/) OpenAI releases Point-E, an AI that generates 3D models (https://techcrunch.com/2022/12/20/openai-releases-point-e-an-ai-that-generates-3d-models/) No, You Haven't Won a Yeti Cooler From Dick's Sporting Goods (https://www.wired.com/story/email-scam-dicks-sporting-goods-yeti-cooler/) The Lastpass hack was worse than the company first reported (https://www.engadget.com/the-lastpass-hack-was-worse-than-the-company-first-reported-000501559.html?utm_source=facebook&utm_medium=news_tab) IRS delays tax reporting change for 1099-K on Venmo, Paypal business payments (https://www.cnbc.com/2022/12/23/irs-delays-tax-reporting-change-for-1099-k-on-venmo-paypal-payments.html) Cyber attacks set to become 'uninsurable', says Zurich chief (https://www.ft.com/content/63ea94fa-c6fc-449f-b2b8-ea29cc83637d) Google Employees Brace for a Cost-Cutting Drive as Anxiety Mounts (https://www.nytimes.com/2022/12/28/technology/google-job-cuts.html) IBM beat all its large-cap tech peers in 2022 as investors shunned growth for safety (https://www.cnbc.com/2022/12/27/ibm-stock-outperformed-technology-sector-in-2022.html)
Europe Taps Tech's Power-Hungry Data Centers to Heat Homes (https://www.wsj.com/articles/europe-taps-techs-power-hungry-data-centers-to-heat-homes-11672309944?mod=djemalertNEWS) List of defunct social networking services (https://en.wikipedia.org/wiki/List_of_defunct_social_networking_services) 2023 Predictions | No Mercy / No Malice (https://www.profgalloway.com/2023-predictions/) Twitter rival Mastodon rejects funding to preserve nonprofit status (https://arstechnica.com/tech-policy/2022/12/twitter-rival-mastodon-rejects-funding-to-preserve-nonprofit-status/) TSMC Starts Next-Gen Mass Production as World Fights Over Chips (https://www.bloomberg.com/news/articles/2022-12-29/tsmc-mass-produces-next-gen-chips-to-safeguard-global-lead) Microsoft and FTC pre-trial hearing set for January 3rd (https://www.engadget.com/pre-trial-hearing-between-microsoft-and-ftc-set-for-january-3rd-203320387.html) The infrastructure behind ATMs (https://www.bitsaboutmoney.com/archive/the-infrastructure-behind-atms/) Apple is increasing battery replacement service charges for out-of-warranty devices (https://techcrunch.com/2023/01/03/apple-is-increasing-battery-replacement-service-charges-for-out-of-warranty-devices/) Snowflake's business and how the weakening economy is impacting cloud vendors (https://twitter.com/LiebermanAustin/status/1607376944873754626) Shift Happens: A book about keyboards (https://shifthappens.site/) Amazon to cut 18,000 jobs (https://www.axios.com/2023/01/05/amazon-layoffs-18000-jobs) CircleCI security alert: Rotate any secrets stored in CircleCI (https://circleci.com/blog/january-4-2023-security-alert/) Video game workers form Microsoft's first U.S. 
labor union (https://www.nbcnews.com/tech/tech-news/video-game-workers-form-microsofts-first-us-labor-union-rcna64103) World's Premier Investors Line Up to Partner with Netskope as the SASE Security and Networking Platform of Choice (https://www.prnewswire.com/news-releases/worlds-premier-investors-line-up-to-partner-with-netskope-as-the-sase-security-and-networking-platform-of-choice-301712417.html) omg.lol - A lovable web page and email address, just for you (https://home.omg.lol/) Alphabet led a $100 million funding of Chronosphere, a startup that helps companies monitor and cut cloud bills. (https://twitter.com/theinformation/status/1611165698868367360) Confluent expands Kafka Streams capabilities, acquires Apache Flink vendor (https://venturebeat.com/enterprise-analytics/confluent-acquires-apache-flink-vendor-immerok-to-expand-data-stream-processing/) Excel & Google Sheets AI Formula Generator - Excelformulabot.com (https://excelformulabot.com/) Has the Internet Reached Peak Clickability? 
(https://tedgioia.substack.com/p/has-the-internet-reached-peak-clickability) Adobe's CEO Sizes Up the State of Tech Now (https://www.wsj.com/articles/adobes-ceo-sizes-up-the-state-of-tech-now-11673151167?mod=djemalertNEWS) Researchers Hacked California's Digital License Plates, Gaining Access to GPS Location and User Info (https://jalopnik.com/researchers-hacked-californias-digital-license-plates-1849966295) Microsoft's New AI Can Simulate Anyone's Voice With 3 Seconds of Audio (https://slashdot.org/story/23/01/10/0749241/microsofts-new-ai-can-simulate-anyones-voice-with-3-seconds-of-audio?utm_source=slashdot&utm_medium=twitter) Observability platform Chronosphere raises another $115M at a $1.6B valuation (https://techcrunch.com/2023/01/10/observability-platform-chronosphere-raises-another-115m-at-a-1-6b-valuation/) Why IBM is no longer interested in breaking patent records, and how it plans to measure innovation in the age of open source and quantum computing (https://fortune.com/2023/01/06/ibm-patent-record-how-to-measure-innovation-open-source-quantum-computing-tech/) New research aims to analyze how widespread COBOL is (https://www.theregister.com/2022/12/14/cobol_research/) Companies are still waiting for their cloud ROI (https://www.infoworld.com/article/3675374/companies-are-still-waiting-for-their-cloud-roi.html) What TNS Readers Want in 2023: More DevOps, API Coverage (https://thenewstack.io/what-tns-readers-want-in-2023-more-devops-api-coverage/) Tech Debt Yo-Yo Cycle.
(https://twitter.com/wardleymaps/status/1605860426671177728) How a single developer dropped AWS costs by 90%, then disappeared (https://scribe.rip/@maximetopolov/how-a-single-developer-dropped-aws-costs-by-90-then-disappeared-2b46a115103a) A look at the 2022 velocity of CNCF, Linux Foundation, and top 30 open source projects (https://www.cncf.io/blog/2023/01/11/a-look-at-the-2022-velocity-of-cncf-linux-foundation-and-top-30-open-source-projects/) The golden age of the streaming wars has ended (https://www.theverge.com/2022/12/14/23507793/streaming-wars-hbo-max-netflix-ads-residuals-warrior-nun) YouTube exec says NFL Sunday Ticket will have multiscreen functionality (https://awfulannouncing.com/youtube/nfl-sunday-ticket-multiscreen-mosaic-mode.html) Nonsense The $11,500 toilet with Alexa inside can now be put inside your home (https://www.theverge.com/2022/12/19/23510864/kohler-numi-smart-toilet-alexa-ces-2022) Starbucks updating its loyalty program starting in February (https://www.axios.com/2022/12/28/starbucks-rewards-program-changes-coming) The revenue model of a popular YouTube channel about Lego. (https://paper.dropbox.com/doc/SDT-396--BwhY9F5kpz_BI2kkdw63ZpJ~Ag-MVMKwqqBEH5SzYKqYO2Jc) Conferences THAT Conference Texas Speakers and Schedule (https://that.us/events/tx/2023/schedule/), Round Rock, TX Jan 15th-18th Use code SDT for 5% off SpringOne (https://springone.io/), Jan 24-26. Coté speaking at cfgmgmtcamp (https://cfgmgmtcamp.eu/ghent2023/), Feb 6th to 8th, Ghent.
State of Open Con 2023, (https://stateofopencon.com/sponsors/) London, UK, February 7th-8th 2023 CloudNativeSecurityCon North America (https://events.linuxfoundation.org/cloudnativesecuritycon-north-america/), Seattle, Feb 1-2, 2023 Southern California Linux Expo, (https://www.socallinuxexpo.org/scale/20x) Los Angeles, March 9-12, 2023 DevOpsDays Birmingham, AL 2023 (https://devopsdays.org/events/2023-birmingham-al/welcome/), April 20 - 21, 2023 SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Get a SDT Sticker! Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! Follow us on Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), LinkedIn (https://www.linkedin.com/company/software-defined-talk/) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured). Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)! Recommendations Brandon: Industrial Garage Shelves (https://www.homedepot.com/p/Husky-5-Tier-Industrial-Duty-Steel-Freestanding-Garage-Storage-Shelving-Unit-in-Black-90-in-W-x-90-in-H-x-24-in-D-N2W902490W5B/319132842) Matt: Oxide and Friends: Breaking it down with Ian Brown (https://oxide.computer/podcasts/oxide-and-friends/1150480) Wu Tang Saga (https://www.imdb.com/title/tt9113406/) Season 3 coming next month! Coté: Mouth to Mouth (https://www.goodreads.com/en/book/show/58438631-mouth-to-mouth) by Antoine Wilson (https://www.goodreads.com/en/book/show/58438631-mouth-to-mouth). Photo Credits Header (https://unsplash.com/photos/euaDCtB_jyw) CoverArt (https://unsplash.com/photos/9xdho4stJQ8)
About Sam
Sam Nicholls: Veeam's Director of Public Cloud Product Marketing, with 10+ years of sales, alliance management and product marketing experience in IT. Sam has evolved from his on-premises storage days and is now laser-focused on spreading the word about cloud-native backup and recovery, packing in thousands of viewers on his webinars, blogs and webpages.

Links Referenced: Veeam AWS Backup: https://www.veeam.com/aws-backup.html Veeam: https://veeam.com

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Chronosphere. Tired of observability costs going up every year without getting additional value? Or being locked in to a vendor due to proprietary data collection, querying and visualization? Modern-day, containerized environments require a new kind of observability technology that accounts for the massive increase in scale and attendant cost of data. With Chronosphere, choose where and how your data is routed and stored, query it easily, and get better context and control. 100% open source compatibility means that no matter what your setup is, they can help. Learn how Chronosphere provides complete and real-time insight into ECS, EKS, and your microservices, wherever they may be, at snark.cloud/chronosphere. That's snark.cloud/chronosphere.

Corey: This episode is brought to us by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want... without making them spell it out.
Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted guest episode is brought to us by and sponsored by our friends over at Veeam. And as a part of that, they have thrown one of their own to the proverbial lion. My guest today is Sam Nicholls, Director of Public Cloud over at Veeam. Sam, thank you for joining me.

Sam: Hey. Thanks for having me, Corey, and thanks for everyone joining and listening in. I do know that I've been thrown into the lion's den, and I am [laugh] hopefully well-prepared to answer anything and everything that Corey throws my way. Fingers crossed. [laugh].

Corey: I don't think there's too much room for criticizing here, to be direct. I mean, Veeam is a company that is solidly and thoroughly built around a problem that absolutely no one cares about. I mean, what could possibly be wrong with that? You do backups, which no one ever cares about. Restores, on the other hand, people care very much about restores. And that's when they learn, "Oh, I really should have cared about backups at any point prior to 20 minutes ago."

Sam: Yeah, it's a great point. It's kind of like taxes and insurance.
It's almost like, you know, something that you have to do that you don't necessarily want to do, but when push comes to shove, and something's burning down, a file has been deleted, someone's made their way into your account and, you know, running a right mess within there, that's when you really, kind of, care about what you mentioned, which is the recovery piece, the speed of recovery, the reliability of recovery.

Corey: It's been over a decade, and I'm still sore about losing my email archives from 2006 to 2009. There's no way to get it back. I ran my own mail server; it was an iPhone setting that said, "Oh, yeah, automatically delete everything in your trash folder (or archive folder) after 30 days." It was just a weird default setting back in that era. I didn't realize it was doing that. Yeah, painful stuff.

And we learned the hard way in some of these cases. Not that I really have much need for email from that era of my life, but every once in a while it still bugs me. Which speaks to the point that the people who are the most fanatical about backing things up are the people who have been burned by not having a backup. And I'm fortunate in that it wasn't someone else's data with which I had been entrusted that really cemented that lesson for me.

Sam: Yeah, yeah. It's a good point. I can remember a few years ago, my wife migrated a very aging, polycarbonate white Mac to one of the shiny new aluminum ones and thought everything was good-

Corey: As the white polycarbonate Mac becomes yellow, then yeah, all right, you know, it's time to replace it. Yeah. So yeah, so she wiped the drive, and what happened?

Sam: That was her moment where she learned the value and importance of backup, and now she backs everything up. I fortunately have never gone through it. But I'm employed by a backup vendor and that's why I care about it. But it's incredibly important to have, of course.

Corey: Oh, yes.
My spouse has many wonderful qualities, but one that drives me slightly nuts is she's something of a digital packrat where her hard drives on her laptop will periodically fill up. And I used to take the approach of oh, you can be more efficient and do the rest. And I realized no, telling other people they're doing it wrong is generally poor practice, whereas just buying bigger drives is way easier. Let's go ahead and do that. It's a small price to pay for domestic tranquility.

And there's a lesson in that. We can map that almost perfectly to the corporate world you folks tend to operate in. You're not doing home backup, last time I checked; you are doing public cloud backup. Actually, I should ask that. Where do you folks start and where do you stop?

Sam: Yeah, no, it's a great question. You know, we started over 15 years ago when virtualization, specifically VMware vSphere, was really the up-and-coming thing, and, you know, a lot of folks were there trying to utilize agents to protect their vSphere instances, just like they were doing with physical Windows and Linux boxes. And, you know, it kind of got the job done, but was it the best way of doing it? No. And that's kind of why Veeam was pioneered; it was this agentless backup, image-based backup for vSphere.

And, of course, you know, in the last 15 years, we've seen lots of transitions, of course, we're here at Screaming in the Cloud, with you, Corey, so AWS, as well as a number of other public cloud vendors we can help protect, as well as a number of SaaS applications like Microsoft 365, metadata and data within Salesforce. So, Veeam's really kind of come a long way from just virtual machines to really taking a global look at the entirety of modern environments, and how can we best protect each and every single one of those without trying to take a square peg and fit it in a round hole?

Corey: It's a good question and a common one.
We wind up with an awful lot of folks who are confused by the proliferation of data. And I'm one of them, let's be very clear here. It comes down to a problem where backups are a multifaceted, deep problem, and I don't think that people necessarily think of it that way. But I take a look at all of the different, even AWS services that I use for my various nonsense, and which ones can be used to store data?

Well, all of them. Some of them, you have to hold it in a particularly wrong sort of way, but they all store data. And in various contexts, a lot of that data becomes very important. So, what service am I using, in which account am I using it, and in what region am I using it, and you wind up with data sprawl, where it's a tremendous amount of data that you can generally only track down by looking at your bills at the end of the month. Okay, so what am I being charged, and for what service?

That seems like a good place to start, but where is it getting backed up? How do you think about that? So, some people, I think, tend to ignore the problem, which we're seeing less and less, but other folks tend to go to the opposite extreme and we're just going to back up absolutely everything, and we're going to keep that data for the rest of our natural lives. It feels to me that there's probably an answer that is more appropriate somewhere nestled between those two extremes.

Sam: Yeah, snapshot sprawl is a real thing, and it gets very, very expensive very, very quickly. You know, your snapshots of EC2 instances are stored on those attached EBS volumes. Five cents per gig per month doesn't sound like a lot, but when you're dealing with thousands of snapshots for thousands of machines, it gets out of hand very, very quickly. And you don't know when to delete them.
Like you say, folks are just retaining them forever and dealing with this unfortunate bill shock.

So, you know, where to start is automating the lifecycle of a snapshot, right, from its creation (how often do we want to be creating them), through the retention (how long do we want to keep these for), and where do we want to keep them, because there are other storage services outside of just EBS volumes. And then, of course, the ultimate: deletion. And that's important even from a compliance perspective as well, right? You've got to retain data for a specific number of years, I think healthcare is like seven years, but then you've-

Corey: And then not a day more.

Sam: Yeah, and then not a day more because that puts you out of compliance, too. So, policy-based automation is your friend and we see a number of folks building these policies out: gold, silver, bronze tiers based on criticality of data compliance and really just kind of letting the machine do the rest. And you can focus on not babysitting backup.

Corey: What was it that led to the rise of snapshots? Because back in my very early days, there was no such thing. We wound up using a bunch of servers stuffed in a rack somewhere and virtualization was not really in play, so we had file systems on physical disks. And how do you back that up? Well, you have an agent of some sort that basically looks at all the files and according to some ruleset that it has, it copies them off somewhere else.

It was slow, it was fraught, it had a whole bunch of logic that was pushed out to the very edge, and forget about restoring that data in a timely fashion or even validating a lot of those backups worked other than via checksum. And God help you if you had data that was constantly in a state of flux, where anything changing during the backup run would leave your backups in an inconsistent state. That on some level seems to have largely been solved by snapshots. But what's your take on it?
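Sam's "gold, silver, bronze" policy-based automation can be sketched in a few lines of Python. The tier names and day counts below are illustrative assumptions, not Veeam's or AWS's actual defaults, and the function only decides what has expired; actually deleting would mean feeding the returned IDs to something like EC2's DeleteSnapshot API on a schedule.

```python
from datetime import datetime, timedelta

# Illustrative retention windows per tier -- not any vendor's real defaults.
RETENTION_DAYS = {"gold": 365, "silver": 90, "bronze": 30}

def snapshots_to_delete(snapshots, now, retention=RETENTION_DAYS):
    """Return the IDs of snapshots older than their tier's retention window.

    `snapshots` is an iterable of (snapshot_id, created_at, tier) tuples;
    an unknown tier falls back to the shortest (most aggressive) window.
    """
    fallback = min(retention.values())
    expired = []
    for snap_id, created_at, tier in snapshots:
        limit = timedelta(days=retention.get(tier, fallback))
        if now - created_at > limit:
            expired.append(snap_id)
    return expired

# Example: at 40 days old, a bronze snapshot is expired but a silver one
# is not; a gold snapshot at 400 days is past its one-year window.
now = datetime(2023, 1, 1)
doomed = snapshots_to_delete(
    [("snap-a", now - timedelta(days=40), "bronze"),
     ("snap-b", now - timedelta(days=40), "silver"),
     ("snap-c", now - timedelta(days=400), "gold")],
    now,
)
# doomed -> ["snap-a", "snap-c"]
```

In practice the tier would typically come from a tag on the volume or snapshot rather than being passed in by hand, which is what lets "the machine do the rest."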
You're a lot closer to this part of the world than I am.

Sam: Yeah, snapshots, I think folks have turned to snapshots for the speed, the lack of impact that they have on production performance, and again, just the ease of accessibility. We have access to all different kinds of snapshots for EC2, RDS, EFS throughout the entirety of our AWS environment. So, I think the snapshots are kind of like the default go-to for folks. They can help deliver those very, very quick RPOs, especially in, for example, databases, like you were saying, that change very, very quickly and we all of a sudden are stranded with a crash-consistent backup or snapshot versus an application-consistent snapshot. And then they're also very, very quick to recover from.

So, snapshots are very, very appealing, but they absolutely do have their limitations. And I think, you know, it's not a one or the other; it's that they've got to go hand-in-hand with something else. And typically, that is an image-based backup that is stored in a separate location to the snapshot, because that snapshot is not independent of the disk that it is protecting.

Corey: One of the challenges with snapshots is most of them are created in a copy-on-write sense. It takes basically an instant frozen point in time. Back once upon a time, when we ran MySQL databases on top of the NetApp Filer (which works surprisingly well), we would have a script that would automatically quiesce the database so that it would be in a consistent state, snapshot the file and then un-quiesce it, which took less than a second, start to finish. And that was awesome, but then you had this snapshot type of thing. It wasn't super portable, it needed to reference a previous snapshot in some cases, and AWS takes the same approach where the first snapshot captures every block, then subsequent snapshots wind up only taking up as much size as there have been changes since the first snapshot.
So, large quantities of data that generally don't get accessed a whole lot have remarkably small subsequent snapshot sizes.

But that's not at all obvious from the outside looking at these things. They're not the most portable thing in the world. But it's definitely the direction that the industry has trended in. So, rather than having a cron job fire off an AWS API call to take snapshots of my volumes as sort of the baseline approach that we all started with, what is the value proposition that you folks bring? And please don't say it's, "Well, cron jobs are hard and we have a friendlier interface for that."

Sam: [laugh]. I think it's really starting to look at the proliferation of those snapshots, understanding what they're good at, and what they are good for within your environment (as previously mentioned, low RPOs, low RTOs, how quickly can I take a backup, how frequently can I take a backup, and more importantly, how quickly can I restore), but then looking at their limitations. So, I mentioned that they were not independent of that disk, so that certainly does introduce a single point of failure as well as being not so secure. We've kind of touched on the cost component of that as well. So, what Veeam can come in and do is then take an image-based backup of those snapshots, right (so you've got your initial snapshot and then your incremental ones); we'll take the backup from that snapshot, and then we'll start to store that elsewhere.

And that is likely going to be in a different account. We can look at the Well-Architected Framework, AWS deeming accounts as a security boundary, so having that cross-account function is critically important so you don't have that single point of failure. Locking down with IAM roles is also incredibly important so we haven't just got a big wide open door between the two. But that data is then stored in a separate account (potentially in a separate region, maybe in the same region), in Amazon S3 storage.
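Corey's description of copy-on-write snapshots (the first one captures every block, each later one stores only the blocks that changed since the previous) can be shown with a toy model. This is an editor's sketch of the accounting idea only; real EBS block tracking, block sizes, and deletion semantics are more involved.

```python
def snapshot_stored_blocks(volume_blocks, change_sets):
    """Toy model of copy-on-write snapshot storage: the first snapshot
    stores every block on the volume; each subsequent snapshot stores
    only the blocks written since the previous snapshot was taken."""
    stored = [volume_blocks]                             # full baseline
    stored += [len(changed) for changed in change_sets]  # deltas only
    return stored

# A 1,000-block volume where only a handful of blocks change per interval:
sizes = snapshot_stored_blocks(1000, [{1, 2}, set(), {7}])
# sizes -> [1000, 2, 0, 1]
```

This is exactly why rarely-touched data has "remarkably small subsequent snapshot sizes," and also why the per-snapshot cost is so hard to eyeball from the outside: each snapshot's billed size depends on what changed before it, not on the volume's size.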
And S3 has the wonderful benefit of being still relatively performant, so we can have quick recoveries, but it is much, much cheaper. You're dealing with 2.3 cents per gig per month, instead of-

Corey: To start, and it goes down from there with sizeable volumes.

Sam: Absolutely, yeah. You can go down to S3 Glacier, where you're looking at, I forget how many points and zeros and nines it is, but it's fractions of a cent per gig per month, but it's going to take you a couple of days to recover that da-

Corey: Even infrequent access cuts that in half.

Sam: Oh yeah.

Corey: And let's be clear, these are snapshot backups; you probably should not be accessing them on a consistent, sustained basis.

Sam: Well, exactly. And this is where it's kind of almost like having your cake and eating it as well. Compliance or regulatory mandates or corporate mandates are saying you must keep this data for this length of time. Keeping, let's just say, three years' worth of snapshots in an EBS volume is going to be incredibly expensive. What's the likelihood of you needing to recover something from two years ago (actually, even two months ago)? It's very, very small.

So, the performance part of S3 is, you don't need to take it as much into consideration. Can you recover? Yes. Is it going to take a little bit longer? Absolutely. But it's going to help you meet those retention requirements while keeping your backup bill low, avoiding that bill shock, right, spending tens and tens of thousands every single month on snapshots. This is what I mean by kind of having your cake and eating it.

Corey: I somewhat recently have had a client where EBS snapshots are one of the driving costs behind their bill. It is one of their largest single line items. And I want to be very clear here, because if one of those people listening to this is thinking, "Well, hang on.
Wait, they're telling stories about us, even though they're not naming us by name?" Yeah, there were three of you in the last quarter.

So, at that point, it becomes clear it is not about something that one individual company has done and more about an overall driving trend. I am personalizing it a little bit by referring to it as one company when there were three of you. This is a narrative device, not me breaking confidentiality. Disclaimer over. Now, when you talk to people about, "So, tell me why you've got 80 times more snapshots than you do EBS volumes?" The answer is, "Well, we wanted to back things up and we needed to get hourly backups to a point, then daily backups, then monthly, and so on and so forth. And when this was set up, there wasn't a great way to do this natively and we don't always necessarily know what we need versus what we don't. And the cost of us backing this up, well, you can see it on the bill. The cost of us deleting too much and needing it as soon as we do? Well, that cost is almost incalculable. So, this is the safe way to go." And they're not wrong in anything that they're saying. But the world has definitely evolved since then.

Sam: Yeah, yeah. It's a really great point. Again, it just folds back into my whole having your cake and eating it conversation. Yes, you need to retain data; it gives you that kind of nice, warm, cozy feeling, it's a nice blanket on a winter's day knowing that, irrespective of what happens, you're going to have something to recover from. But the question is: does that need to be living on an EBS volume as a snapshot?
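The arithmetic behind this "cake and eat it" argument is simple. The EBS and S3 per-gig-per-month rates below are the ones quoted in the conversation; the Glacier figure is an illustrative assumption, since Glacier-class pricing varies by tier and region. This is a back-of-envelope sketch that ignores copy-on-write savings, retrieval fees, and request charges.

```python
# Per-GiB-month rates: the EBS snapshot and S3 Standard figures are the
# ones quoted in the conversation; "glacier" is an illustrative assumption.
RATES = {"ebs_snapshot": 0.05, "s3_standard": 0.023, "glacier": 0.004}

def monthly_cost(gib, rate_name, rates=RATES):
    """Back-of-envelope monthly storage cost in dollars."""
    return gib * rates[rate_name]

# Three years of monthly 500 GiB snapshots, retained in full:
retained_gib = 36 * 500
ebs = monthly_cost(retained_gib, "ebs_snapshot")  # about $900 per month
s3 = monthly_cost(retained_gib, "s3_standard")    # about $414 per month
```

The real EBS number would be lower because snapshots share unchanged blocks, which is this sketch's biggest omission, but the relative gap between keeping long-tail retention on snapshots versus cheaper object storage is the point Sam is making.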
Why can't it be living on much, much more cost-effective storage that's going to give you the warm and fuzzies, but is going to make your finance team much, much happier? [laugh]

Corey: One of the inherent challenges I think people have is that snapshots by themselves are almost worthless, in that I have an EBS snapshot, it is sitting there now, it's costing me an undetermined amount of money because it's not exactly clear on a per-snapshot basis exactly how large it is, and okay, great. Well, I'm looking for a file that was not modified since X date, as it was at that time. Well, great, you're going to have to take that snapshot, restore it to a volume and then go exploring by hand. Oh, it was the wrong one. Great. Try it again, with a different one.

And after, like, the fifth or sixth in a row, you start doing a binary search approach on this thing. But it's expensive, it's time-consuming, it takes forever, and it's not a fun user experience at all. Part of the problem is it seems that historically, backup systems have no context or no contextual awareness whatsoever around what is actually contained within that backup.

Sam: Yeah, yeah. I mean, you kind of highlighted two of the steps. It's more like a ten-step process to do, you know, granular file or folder-level recovery from a snapshot, right? You've got to, like you say, you've got to determine the point in time when that, you know, you knew the last time that it was around, then you're going to have to determine the volume size, the region, the OS, you're going to have to create an EBS volume of the same size, region, from that snapshot, create the EC2 instance with the same OS, connect the two together, boot the EC2 instance, mount the volume, search for the files to restore, download them manually, at which point you have your file back. It's not back in the machine where it was, it's now been downloaded locally to whatever machine you're accessing that from.
And then you've got to tear it all down.

And that is again, like you say, predicated on the fact that you knew exactly that that was the right time. It might not be, and then you have to start from scratch from a different point in time. So, backup tooling from backup vendors that have been doing this for many, many years knew about this problem long, long ago, and really seeks to not only automate the entirety of that process but make the whole e-discovery, the search, the location of those files, much, much easier. I don't necessarily want to do a vendor pitch, but I will say with Veeam, we have explorer-like functionality, whereby it's just a simple web browser. Once that machine is all spun up again, automatic process, you can just search for your individual file, folder, locate it, you can download it locally, you can inject it back into the instance where it was through Amazon Kinesis or AWS Kinesis (I forget the right terminology for it; some of it's AWS, some of it's Amazon).

But by-the-by, the whole recovery process, especially from a file or folder level, is much more pain-free, but also much faster. And that's ultimately what people care about: how reliable is my backup? How quickly can I get stuff online? Because the time that I'm down is costing me an indescribable amount of time or money.

Corey: This episode is sponsored in part by our friends at Redis, the company behind the incredibly popular open source database. If you're tired of managing open source Redis on your own, or if you are looking to go beyond just caching and unlock your data's full potential, these folks have you covered. Redis Enterprise is the go-to managed Redis service that allows you to reimagine how your geo-distributed applications process, deliver, and store data. To learn more from the experts in Redis how to be real-time, right now, from anywhere, visit redis.com/duckbill.
That's R - E - D - I - S dot com slash duckbill.

Corey: Right, the idea of RPO versus RTO: recovery point objective and recovery time objective. With an RPO, it's great, disaster strikes right now, how long is it acceptable for it to have been since the last time we backed up data to a restorable point? Sometimes it's measured in minutes, sometimes it's measured in fractions of a second. It really depends on what we're talking about. Payments databases, for example: there, the RPO basically asymptotically approaches zero.

The RTO is okay, how long is acceptable before we have that data restored and are back up and running? And that is almost always a longer time, but not always. And there's a different series of trade-offs that go into that. But both of those also presuppose that you've already dealt with the existential question of is it possible for us to recover this data. And that's where I know that you obviously have a position on this that is informed by where you work, but I don't, and I will call this out as what I see in the industry: AWS Backup is compelling to me except for one fatal flaw that it has, and that is it starts and stops with AWS.

I am not a proponent of multi-cloud. Lord knows I've gotten flack for that position a bunch of times, but the one area where it makes absolute sense to me is backups. Have your data in a rehydrate-the-business level state backed up somewhere that is not your primary cloud provider, because you're otherwise single-point-of-failure-ing through a company, through the payment instrument you have on file with that company, in the blast radius of someone who can successfully impersonate you to that vendor. There has to be a gap of some sort for the truly business-critical data. Yes, egress to other providers is expensive, but you know what also is expensive? Irrevocably losing the data that powers your business. Is it likely? No, but I would much rather do it than have to justify why I'm not doing it.

Sam: Yeah.
It wasn't likely that I was going to win that 2 billion or 2.1 billion on the Powerball, but [laugh] I still play [laugh]. But I understand your standpoint on multi-cloud and I read your newsletters and understand where you're coming from, but I think the reality is that we do live in at least a hybrid cloud world, if not multi-cloud. The number of organizations that are sole-sourced on a single cloud and nothing else is relatively small, single-digit percentage. It's around 80-some percent that are hybrid, and the remainder of them are your favorite: multi-cloud.

But again, having something that is one hundred percent sole-sourced on a single platform or a single vendor does expose you to a certain degree of risk. So, having the ability to do cross-platform backups, recoveries, migrations, for whatever reason, right, because it might not just be a disaster like you'd mentioned, it might also just be... I don't know, the company has been taken over and all of a sudden, the preference is now towards another cloud provider and I want you to refactor and re-architect everything for this other cloud provider. If all that data is locked into one platform, that's going to make your job very, very difficult. So, we mentioned at the beginning of the call, Veeam is capable of protecting a vast number of heterogeneous workloads on different platforms, in different environments, on-premises, in multiple different clouds, but the other key piece is that we always use the same backup file format. And why that's key is because it enables portability.

If I have backups of EC2 instances that are stored in S3, I could copy those onto on-premises disk, I could copy those into Azure, I could do the same with my Azure VMs and store those on S3, or again, on-premises disk, and any other endless combination that goes with that. And it's really kind of centered around, like, control and ownership of your data. We are not prescriptive by any means. Like, you do what is best for your organization.
We just want to provide you with the toolset that enables you to do that without steering you one direction or the other with fee structures, disparate feature sets, whatever it might be.

Corey: One of the big challenges that I keep seeing across the board is just a lack of awareness of what the data that matters is, where you see people backing up endless fleets of web server instances that are auto-scaled into existence and then removed, but you can create those things at will; why do you care about the actual data that's on these things? It winds up almost at the library management problem, on some level. And in that scenario, snapshots are almost certainly the wrong answer. One thing that I saw previously that really changed my way of thinking about this was back many years ago when I was working at a startup that had just started using GitHub and they were paying for a third-party service that wound up backing up Git repos. Today, that makes a lot more sense because you have a bunch of other stuff on GitHub that goes well beyond the stuff contained within Git, but at the time, it was silly. It was, why do that? Every Git clone is a full copy of the entire repository history. Just grab it off some developer's laptop somewhere.

It's like, "Really? You want to bet the company, slash your job, slash everyone else's job on that being feasible and doable, or do you want to spend the 39 bucks a month or whatever it was to wind up getting that out the door now so we don't have to think about it, and they validate that it works?" And that was really a shift in my way of thinking because, yeah, backing up things can get expensive when you have multiple copies of the data living in different places, but what's really expensive is not having a company anymore.

Sam: Yeah, yeah, absolutely. We can tie it back to my insurance dynamic earlier where, you know, it's something that you know that you have to have, but you don't necessarily want to pay for it.
Well, you know, just like with insurance, there's multiple different ways to go about recovering your data, and it's only in crunch time that you really care about what it is that you've been paying for, right, when it comes to backup?

Could you get your backup through a git clone? Absolutely. Could you get your data back? How long is that going to take you? How painful is that going to be? What's going to be the impact to the business while you're trying to figure that out, versus, like you say, the 39 bucks a month, a year, or whatever it might be to have something purpose-built for that, that is going to make the recovery process as quick and painless as possible and just get things back up online?

Corey: I am not a big fan of the fear, uncertainty, and doubt approach, but I do practice what I preach here in that yeah, there is a real fear against data loss. It's not, "People are coming to get you, so you absolutely have to buy whatever it is I'm selling," but it is something you absolutely have to think about. My core consulting proposition is that I optimize the AWS bill. And sometimes that means spending more. Okay, that one S3 bucket is extremely important to you and you say you can't sustain the loss of it ever, so one zone is not an option. Where is it being backed up? Oh, it's not? Yeah, I suggest you spend more money and back that thing up if it's as irreplaceable as you say. It's about doing the right thing.

Sam: Yeah, yeah, it's interesting, and it's going to be hard for you to prove the value of doing that when you are driving their bill up while you're trying to bring it down. But again, you have to look at something that's not itemized on that bill, which is going to be the impact of downtime. I'm not going to pretend to try and recall the exact figures, because it also varies depending on your business, your industry, the size, but the impact of downtime is massive financially.
Tens of thousands of dollars per hour for small organizations, millions and millions of dollars per hour for much larger organizations. The backup component of that is relatively small in comparison, so it pays to have something that is purpose-built, that is going to protect your data and help mitigate that impact of downtime. Because that's ultimately what you're trying to protect against. The recovery piece is the most important part of what you're buying. And like you, I would say, at least be cognizant of it and evaluate your options: what can you live with and what can you live without?

Corey: That's the big burning question that I think a lot of people do not have a good answer to. And when you don't have an answer, you either back up everything or nothing. And I'm not a big fan of doing either of those things blindly.

Sam: Yeah, absolutely. And I think this is why we see varying different backup options as well, you know? You're not going to try and apply the same data protection policies to each and every single workload within your environment, because they've all got different types of workload criticality. And like you say, some of them might not even need to be backed up at all, just because they don't have data that needs to be protected. So, you need something that is going to be flexible enough to apply across the entirety of your environment and protect each workload with the right policy: how frequently do you protect it, where do you store it, and when are you eventually going to delete it, applied on a workload-by-workload basis. And this is where the joy of things like tags comes into play as well.

Corey: One last thing I want to bring up is that I'm a big fan of watching for companies saying the quiet part out loud. And one area in which they do this (because they're forced to by brevity) is in the title tag of their website.
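Sam's point about tag-driven, per-workload policies can be sketched as a small lookup: each workload's tags select a protection tier, and untagged or explicitly opted-out workloads fall through. The tag key, tier names, and numbers here are all hypothetical, not any particular product's policy model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BackupPolicy:
    frequency_hours: int   # how often to take a backup
    retention_days: int    # how long to keep it before deletion

# Hypothetical tier-to-policy mapping; real names and values would
# come from your own backup tooling and business requirements.
POLICIES = {
    "critical": BackupPolicy(frequency_hours=1, retention_days=365),
    "standard": BackupPolicy(frequency_hours=24, retention_days=30),
}

def policy_for(tags: dict) -> Optional[BackupPolicy]:
    """Pick a backup policy from a workload's tags.

    Workloads tagged backup=none are deliberately skipped, matching
    the point that some instances carry no data worth protecting;
    everything else defaults to the standard tier.
    """
    tier = tags.get("backup", "standard")
    if tier == "none":
        return None
    return POLICIES.get(tier, POLICIES["standard"])
```

The design choice is that the *default* is to protect: a workload has to be explicitly tagged out of backup, so forgetting to tag something errs on the side of an unnecessary copy rather than an unprotected one.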
I pull up veeam.com and I hover over the tab in my browser, and it says, "Veeam Software: Modern Data Protection."

And I want to call that out because you're not framing it as explicitly backup. So, the last topic I want to get into is the idea of security. Because I think it is not fully appreciated on a lived-experience basis (although people will of course agree to this when they're having ivory tower whiteboard discussions) that every place your data lives is a potential site for a security breach. So, you want to have your data living in a bunch of places, ideally, for backup and resiliency purposes. But you also want it to be completely unworkable or illegible to anyone who is not authorized to have access to it.

How do you balance those trade-offs yourself, given that what you're fundamentally saying is, "Trust us with your Holy of Holies when it comes to the things that power your entire business"? I mean, I can barely get some companies to agree to show me their AWS bill, let alone the data that contains all the stuff that could destroy the company.

Sam: Yeah. Yeah, it's a great question. Before I explicitly answer that piece, I will just say that modern data protection does absolutely have a security component to it, and I think that backup absolutely needs to be a (I'm going to say this in air quotes) "first-class citizen" of any security strategy. I think when people think about security, their mind goes to the preventative: how do we keep these bad people out?

This is going to be a bit of the FUD that you love, but ultimately, the bad guys on the outside have an infinite number of attempts to get into your environment and only have to be right once to get in and start wreaking havoc. You, on the other hand, as the good guy with your cape and whatnot, have got to be right each and every single one of those times. And we as humans are fallible, right?
None of us are perfect, and it's incredibly difficult to defend against these ever-evolving, more complex attacks. So if someone does get in, having a clean, verifiable, recoverable backup is really going to be the only thing that saves your organization, should that actually happen.

And what's key to a secure backup? I would say separation, isolation of backup data from the production data. I would say utilizing things like immutability: in AWS, we've got Amazon S3 Object Lock, so it's that write-once-read-many state for whatever retention period you put on it. So the data that attackers are seeking to encrypt, whether it's in production or in the backup, they cannot encrypt. And then the other piece that I think is coming more and more into play, and is almost table stakes, is encryption, right? And we can utilize things like AWS KMS for that encryption.

That's there to help defend against the exfiltration attempts. Because these bad guys are realizing, "Hey, people aren't paying me my ransom because they're just recovering from a clean backup, so now I'm going to take that backup data, I'm going to leak the personally identifiable information, trade secrets, or whatever on the internet, and that's going to put them in breach of compliance and give them a hefty fine that way unless they pay me my ransom." So encryption, so they can't read that data. Not only can they not change it, but they can't read it, which is equally important. So, I would say those are the three big things for me on what's needed for backup to make sure it is clean and recoverable.

Corey: I think that is one of those areas where people need to put additional levels of thought in. I think that if you have access to the production environment and have full administrative rights throughout it, you should definitionally not (at least with that account, and ideally not you at all personally) have access to alter the backups. Full stop.
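The immutability and encryption pieces Sam describes can be sketched with the S3 API: an object written with Object Lock in COMPLIANCE mode cannot be overwritten or deleted until its retain-until date, and KMS server-side encryption covers the exfiltration angle. The bucket, key ID, and retention period below are hypothetical, and Object Lock must already have been enabled when the bucket was created.

```python
from datetime import datetime, timedelta, timezone

def locked_put_kwargs(bucket: str, key: str, body: bytes,
                      kms_key_id: str, retain_days: int) -> dict:
    """Build the put_object arguments for an immutable, encrypted backup.

    COMPLIANCE mode means the object cannot be overwritten or deleted
    by anyone, including the root account, until the retain-until date,
    which is what blocks the "we'll just encrypt your backups too" play.
    """
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate":
            datetime.now(timezone.utc) + timedelta(days=retain_days),
        "ServerSideEncryption": "aws:kms",   # KMS covers the read/exfiltration side
        "SSEKMSKeyId": kms_key_id,
    }

# With real credentials this would be used roughly as:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_object(**locked_put_kwargs("backup-bucket", "db/2024-01-01.dump",
#                                     data, "alias/backup-key", 30))
```

Choosing COMPLIANCE over GOVERNANCE mode is the deliberate part: GOVERNANCE retention can be bypassed by sufficiently privileged principals, which is exactly the privilege an attacker is assumed to have obtained.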
I would say, on some level, there should not be the ability to alter backups at all for some particular workloads, the idea being that if you get hit with a ransomware infection, it's pretty bad, let's be clear, but if you can get all of your data back, it's more of an annoyance than, again, the existential business crisis that redefines you as a company, if you still are a company.

Sam: Yeah. Yeah, I mean, we can turn to a number of organizations. Code Spaces always springs to mind for me, I love Code Spaces. It was kind of one of those precursors to...

Corey: It's amazing.

Sam: Yeah, but they were running on AWS and they had everything, production and backups, all stored in one account. The attackers got into the account: "We're going to delete your data if you don't pay us this ransom." They were like, "Well, we're not paying you the ransom. We got backups." Well, they deleted those, too. And, you know, unfortunately, Code Spaces isn't around anymore. But it really goes to show the importance of at least logically separating your data across different accounts and not having that god-like access to absolutely everything.

Corey: Yeah, when you talked about Code Spaces, I was in [unintelligible 00:32:29] talking about GitHub Codespaces specifically, where they have their developer workstations in the cloud. They're still very much around, at least last time I saw, unless you know something I don't.

Sam: Precursor to that. I can send you the link...

Corey: Oh oh...

Sam: You can share it with the listeners.

Corey: Oh, yes, please do. I'd love to see that.

Sam: Yeah. Yeah, absolutely.

Corey: And it's been a long and strange time in this industry. Speaking of links for the show notes, I appreciate you spending so much time with me. Where can people go to learn more?

Sam: Yeah, absolutely. I think veeam.com is kind of the first place that people gravitate towards.
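The Code Spaces lesson, that production credentials must not carry delete rights over the backups, is strongest when backups live in a separate account entirely; within a single account, an explicit IAM deny at least strips the god-like access Sam describes. A minimal sketch (bucket name and statement ID are hypothetical):

```python
import json

def deny_delete_policy(backup_bucket: str) -> str:
    """An IAM policy document denying destructive actions on a backup bucket.

    Attached to production roles, an explicit Deny wins over any Allow,
    so even a fully compromised production role cannot erase the
    backups, the failure mode that ended Code Spaces. Cross-account
    separation remains the stronger control; this is the in-account floor.
    """
    doc = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyBackupDeletion",
            "Effect": "Deny",
            "Action": [
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:DeleteBucket",
            ],
            "Resource": [
                f"arn:aws:s3:::{backup_bucket}",
                f"arn:aws:s3:::{backup_bucket}/*",
            ],
        }],
    }
    return json.dumps(doc, indent=2)
```

Combined with Object Lock on the bucket itself, this gives two independent layers an attacker would have to defeat before the backups are at risk.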
Me personally, I'm kind of a hands-on learning kind of guy, so we always make free product available. And you can find that on the AWS Marketplace. Simply search "Veeam" there. A number of free products; we don't put time limits on them, we don't put feature limitations on them. You can back up ten instances, including your VPCs, which we actually didn't talk about today, but which I do think is important. But I won't waste any more time on that.

Corey: Oh, configuration of these things is critically important. If you don't know how everything was structured and built out, you're basically trying to re-architect from first principles based upon archaeology.

Sam: Yeah [laugh], that's a real pain. So, we can help protect those VPCs, and we actually don't put any limitations on the number of VPCs that you can protect; it's always free. So, if you're going to use it for anything, use it for that. But for hands-on, the Marketplace; if you want more documentation, want to learn more, want to speak to someone, veeam.com is the place to go.

Corey: And we will, of course, include that in the show notes. Thank you so much for taking so much time to speak with me today. It's appreciated.

Sam: Thank you, Corey, and thanks for all the listeners tuning in today.

Corey: Sam Nicholls, Director of Public Cloud at Veeam. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry, insulting comment that takes you two hours to type out, but then you lose it because you forgot to back it up.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS.
We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
Today's Tech Bytes podcast, sponsored by VMware, dives into VMware's vSphere+. vSphere+ allows you to operate your on-prem workloads and infrastructure as if they were a public cloud. It supports VMs and Kubernetes, and provides admin, developer, and add-on services delivered via SaaS.