Podcasts about VMware

Company that makes virtualization software; formerly a publicly traded subsidiary of Dell, acquired by Broadcom in 2023

  • 1,821 podcasts
  • 7,126 episodes
  • 37m avg duration
  • 2 daily new episodes
  • Latest episode: Mar 9, 2026

Latest podcast episodes about VMware

Profiles in Leadership
Kendra Dahlstrom, Curiosity Guides You to Clarity and Purpose

Mar 9, 2026 · 62:30


Kendra Dahlstrom is a transformational executive coach, keynote speaker, and host of The Unworthy Leader Podcast™. With over two decades of experience guiding senior executives at Amazon, VMware, and other Fortune 50 companies, Kendra helps high-achieving leaders stop performing and start leading—from a place of presence, clarity, and power. Her work is rooted in the belief that imposter thoughts, self-doubt, and vulnerability aren't weaknesses—they're data. Through her signature frameworks—Your Doubt is Your Data™, Your Failure is Your Fuel™, and Your Vulnerability is Viral™—Kendra has helped hundreds of leaders turn silent suffering into strategic clarity. She brings a proven, bulletproof process that demystifies what's really happening behind the leadership mask. Her insights—including the 2 Hidden Brand Liabilities affecting executive presence, 6 Revelations about Imposter Thoughts, and her proprietary 4-Step framework to transform inner conflict into action—challenge conventional leadership models and unlock a more human, sustainable approach to influence. Kendra doesn't just talk about authentic leadership. She lives it. This isn't theory—it's lived truth, earned insight, and a revolution in how leadership gets done well.

Auto Remarketing Podcast
Widewail CEO Cuyler Owens

Mar 9, 2026 · 26:37


Cuyler Owens is CEO of Widewail, a reputation and customer experience platform for the auto industry. Cuyler joined Widewail as CEO in December, following nearly three years as chief revenue officer of TrustRadius, and time with Dell, VMware and Dealerware. Owens talks with Auto Remarketing senior editor Joe Overby about how Widewail is using artificial intelligence to help analyze Google reviews of car dealerships and make recommendations for dealers, the issues and topics that often surface in reviews, what's on the horizon for Widewail and more.

Datacenter Technical Deep Dives
Troubleshooting AWS Hallucinations from Vector Store DBs

Mar 5, 2026 · 48:04


Join us as Amelia shares the debugging story nobody tells you about: how her vector store DB couldn't surface specific data until she tested it with simplified data from ChatGPT. Amelia walks through her journey from throwing JIRA tickets into a large language model without understanding pipelines or data cleaning, to discovering why her production vector store was failing. You'll learn about the gap between chatting with data and getting accurate connections, how to validate vector similarity search results, the difference between production and synthetic test data, and practical troubleshooting workflows for AWS vector stores. This episode reveals the messy reality of RAG systems: when everything seems fine but the outputs are subtly wrong, and how testing with simplified data can expose what production complexity hides.

Timestamps:
0:00 Cold Open
1:03 Welcome & Introduction
2:06 Amelia's Background & DeepRacer Trophy
4:49 The JIRA Ticket Use Case Origin Story
5:53 Getting Into the Presentation
6:03 Accessing & Cleaning Data Sets
8:12 Losing Production Data & Recreating with ChatGPT
12:45 Understanding Vector Databases
18:22 How Embeddings Work
24:16 The Hallucination Discovery
30:41 Testing Strategies for Vector Stores
36:52 Debugging Vector Similarity Search
42:18 Real-World Troubleshooting Workflows
44:26 Where to Find Amelia & Wrap-up

How to find Amelia: https://www.linkedin.com/in/ameliahoughross/
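The validation workflow the episode describes, embedding a small set of documents whose expected nearest neighbors are known in advance and checking that similarity search actually returns them, can be sketched with no cloud dependencies at all. This is a toy illustration: the ticket names and hand-made embedding vectors below are invented, and a real pipeline would get its vectors from an embedding model rather than by hand.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Tiny synthetic "vector store": document -> embedding.
# Hand-made vectors so the expected nearest neighbors are known.
store = {
    "TICKET-1 login timeout": [0.9, 0.1, 0.0],
    "TICKET-2 billing error": [0.1, 0.9, 0.0],
    "TICKET-3 login loop":    [0.8, 0.2, 0.1],
}

def top_k(query_vec, k=2):
    """Return the k documents most similar to the query vector."""
    scored = sorted(store.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc for doc, _ in scored[:k]]

# A query embedding near the "login" cluster should surface both
# login tickets before the billing one; if it doesn't, the store
# (or the embedding step feeding it) is broken.
results = top_k([0.85, 0.15, 0.05])
```

The point of the simplified data is exactly what the episode argues: with known-answer inputs, a wrong retrieval is unambiguous, whereas with messy production data a subtly wrong neighbor can pass unnoticed and surface later as a hallucinated answer.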

Tech Hive: The Tech Leaders Podcast
#127 Special Episode: The Tesco v Broadcom Case Explained: Why Enterprise IT Leaders Should Pay Attention

Mar 5, 2026 · 58:58


This episode of The Tech Leaders Podcast takes a different format from our usual leadership interviews. The recent acquisition of VMware by Broadcom has sent significant ripples through the enterprise IT landscape. As one of the most widely deployed infrastructure platforms in the world, VMware sits at the heart of many global technology estates. The commercial changes that have followed the acquisition have prompted widespread discussion across the market. The ongoing court case involving Tesco and Broadcom has brought those issues into sharp focus. What may appear to be a contractual dispute raises much broader questions around platform dependency, vendor discretion and how resilient enterprise IT contracts are when ownership changes hands. In this special episode, Gareth Davies is joined by licensing specialist Barry Pilling to unpack the case, explore the latest developments, and discuss what it could mean for enterprise software licensing and commercial governance going forward. For CIOs, technology leaders and procurement teams responsible for major vendor relationships, this conversation looks at the wider implications and the questions organisations should be asking right now.

Timestamps:
Introduction and Barry's Background (2:04)
VMware and the Broadcom Acquisition (5:00)
The Rise of Virtualisation (8:50)
The Tesco Dispute Background (12:38)
Organisations' Exposure to Licensing Risk (22:45)
Other Broadcom Cases (29:22)
Death of the Middleman? (37:51)
Potential Consequences and Advice for VMware Customers (41:49)
Recent Developments (47:30)

https://www.bedigitaluk.com/

Datacenter Technical Deep Dives
AI Agents Made Simple: Everything You Need to Know

Feb 27, 2026 · 69:21


Join us as Du'An breaks down AI agents in a way that actually makes sense: what they are, how to use them, and how to get started today. Du'An walks through the fundamentals of AI agents with live demos and practical code examples you can use immediately. You'll learn about agent frameworks, when to use agents versus simple LLM calls, building your first agent, and real-world applications from bookmark management to automated workflows. This episode cuts through the hype with realistic expectations about what agents can and can't do, while showing you concrete examples including MCP servers, Strands Pack, and Du'An's personal second brain system.

Timestamps:
0:00 Welcome & Introduction
1:39 Du'An's Background & Previous Episode Success
3:06 Segueing from Last Week's Episode
4:03 CEOs Vibe Coding Discussion
6:49 Real Estate Developer Building Apps Story
8:23 Getting Started with the Presentation
12:45 What Are AI Agents?
18:22 Agent Frameworks Overview
24:16 When to Use Agents vs Simple LLM Calls
30:41 Building Your First Agent
36:52 Live Demo: Strands Pack
42:18 MCP Servers Explained
47:35 WriteStats MCP Demo
52:14 Real-World Applications
58:33 Du'An's Second Brain System
1:04:01 Bookmark Manager Walkthrough
1:07:17 Organizing Cloud Storage & Email
1:09:06 Wrap-up & Next Episode Teaser

How to find Du'An:
https://www.duanlightfoot.com/
https://github.com/labeveryday/

Links from the show:
https://github.com/labeveryday/strands-pack
https://github.com/labeveryday/writestat-mcp
https://github.com/labeveryday/bookmark-manager-site
https://bookmarks.duanlightfoot.com/
https://github.com/openai/whisper
https://openai.com/index/whisper/
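The "agent versus simple LLM call" distinction the episode covers comes down to a loop: a policy picks a tool, the tool's result goes back into the context, and the loop repeats until the policy declares a final answer. A minimal, model-free sketch of that loop follows; the tool names and the stubbed `decide` policy are invented for illustration, and a real agent would replace `decide` with an LLM call.

```python
# Minimal agent loop: a policy decides which tool to call next,
# observations accumulate in a scratchpad, and the loop ends when
# the policy returns a final answer. Everything here is a toy.

def add(a, b):
    return a + b

def word_count(text):
    return len(text.split())

TOOLS = {"add": add, "word_count": word_count}

def decide(task, scratchpad):
    """Stub policy: a real agent would ask an LLM what to do here."""
    if not scratchpad:
        return ("call", "word_count", (task,))
    if len(scratchpad) == 1:
        return ("call", "add", (scratchpad[0][1], 100))
    return ("finish", scratchpad[-1][1])

def run_agent(task, max_steps=5):
    scratchpad = []  # list of (tool_name, result) observations
    for _ in range(max_steps):
        action = decide(task, scratchpad)
        if action[0] == "finish":
            return action[1]
        _, name, args = action
        scratchpad.append((name, TOOLS[name](*args)))
    raise RuntimeError("agent exceeded step budget")

result = run_agent("count the words in this sentence")
```

A single LLM call has no scratchpad and no tools; the loop and the step budget are what make this an "agent", and the `max_steps` cap is the standard guard against a policy that never finishes.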

Software Sessions
Bryan Cantrill on Oxide Computer

Feb 27, 2026 · 89:58


Bryan Cantrill is the co-founder and CTO of Oxide Computer Company. We discuss why the biggest cloud providers don't use off-the-shelf hardware, how scaling data centers at Samsung's scale exposed problems with hard drive firmware, how the values of NodeJS are in conflict with robust systems, choosing Rust, and the benefits of Oxide Computer's rack-scale approach. This is an extended version of an interview posted on Software Engineering Radio. Related links Oxide Computer Oxide and Friends Illumos Platform as a Reflection of Values RFD 26 bhyve CockroachDB Heterogeneous Computing with Raja Koduri Transcript You can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: Today I am talking to Bryan Cantrill. He's the co-founder and CTO of Oxide Computer Company, and he was previously the CTO of Joyent, and he also co-authored the DTrace tracing framework while he was at Sun Microsystems. [00:00:14] Jeremy: Bryan, welcome to Software Engineering Radio. [00:00:17] Bryan: Uh, awesome. Thanks for having me. It's great to be here. [00:00:20] Jeremy: You're the CTO of a company that makes computers. But I think before we get into that: a lot of people who build software, now that the actual computer is abstracted away, they're using AWS or they're using some kind of cloud service. So I thought we could start by talking about data centers. [00:00:41] Jeremy: 'cause you were previously working at Joyent, and I believe you got bought by Samsung, and you've previously talked about how you had to figure out, how do I run things at Samsung's scale. So how, how, how was your experience with that? What, what were the challenges there? Samsung scale and migrating off the cloud [00:01:01] Bryan: Yeah, I mean, so at Joyent, and so Joyent was a cloud computing pioneer. Uh, we competed with the likes of AWS and then later GCP and Azure. Uh, and we, I mean, we were operating at a scale, right?
We had a bunch of machines, a bunch of DCs, but ultimately, you know, we were a VC-backed company and, you know, a small company by the standards of, certainly by Samsung's standards. [00:01:25] Bryan: And so when, when Samsung bought the company, I mean, the reason, by the way, that Samsung bought Joyent is Samsung's cloud bill was, uh, let's just say it was extremely large. They were spending an enormous amount of money every year on, on the public cloud. And they realized that in order to secure their fate economically, they had to be running on their own infrastructure. [00:01:51] Bryan: It did not make sense. And there was not really a product that Samsung could go buy that would give them that on-prem cloud. Uh, I mean, in that, in that regard, like the state of the market was really no different. And so they went looking for a company, uh, and bought, bought Joyent. And when we were on the inside of Samsung... [00:02:11] Bryan: That's when we learned about Samsung scale. And Samsung loves to talk about Samsung scale. And I gotta tell you, it is more than just chest thumping. Like, Samsung scale really is, I mean, just the, the sheer, the number of devices, the number of customers, just this absolute size. They really, uh, took us to levels of scale, certainly, that we had not seen. [00:02:31] Bryan: The reason for buying Joyent was to be able to stand up on their own infrastructure, so we were gonna go buy, we did go buy a bunch of hardware. Problems with server hardware at scale [00:02:40] Bryan: And I remember just thinking, God, I hope Dell is somehow magically better. I hope the problems that we have seen in the small just go away. You know, I just remember hoping, and hope is, of course, a terrible strategy, and it was a terrible strategy here too.
Uh, and the problems that we saw at the large were, when you scale out, the problems that you see kind of once or twice you now see all the time, and they become absolutely debilitating. [00:03:12] Bryan: And we saw a whole series of really debilitating problems. I mean, in many ways, like comically debilitating, uh, in terms of, of showing just how bad the state of the art is. Yes. And we had, I mean, it should be said, we had great software and great software expertise, um, and we were controlling our own system software. [00:03:35] Bryan: But even controlling your own system software, your own host OS, your own control plane, which is what we had at Joyent, ultimately, you're pretty limited. You go, I mean, you got the problems that you can obviously solve, the ones that are in your own software, but the problems that are beneath you, the, the problems that are in the hardware platform, the problems that are in the componentry beneath you, become the problems that are in the firmware. IO latency due to hard drive firmware [00:04:00] Bryan: Those problems become unresolvable and they are deeply, deeply frustrating. Um, and we just saw a bunch of 'em. Again, they were comical in retrospect, and I'll give you like a, a couple of concrete examples just to give, give you an idea of what kinda what you're looking at. One of our data centers had really pathological IO latency. [00:04:23] Bryan: We had a very, uh, database-heavy workload. And this was kind of right at the period where you were still deploying on rotating media, on hard drives. An all-flash buy did not make economic sense when we did this in, in 2016. It'd be interesting to know, like, when was the kind of the last time that actual hard drives made sense? [00:04:50] Bryan: 'cause I feel this was close to it.
So we had a bunch of pathological IO problems, but we had one data center in which the outliers were actually quite a bit worse, and there was so much going on in that system. It took us a long time to figure out like why. Because when you're seeing worse IO, I mean, you're naturally, you wanna understand, like, what's the workload doing? [00:05:14] Bryan: You're trying to take a first-principles approach. What's the workload doing? So this is a very intensive database workload to support the, the object storage system that we had built called Manta. And the metadata tier, uh, we were using Postgres for that. And that was just getting absolutely slaughtered. [00:05:34] Bryan: Um, and ultimately very IO bound with these kind of pathological IO latencies. Uh, and as we, you know, were trying to like peel away the layers to figure out what was going on, I finally had this thing. So it's like, okay, we are seeing at the, at the device layer, at the disc layer, we are seeing pathological outliers in this data center that we're not seeing anywhere else. [00:06:00] Bryan: And that does not make any sense. And the thought occurred to me, I'm like, well, maybe we have a different rev of firmware on our HGST drives. HGST, now part of WD, Western Digital, were the drives that we had everywhere. And, um, so maybe we had a different, maybe I had a firmware bug. [00:06:20] Bryan: This would not be the first time in my life at all that I would have a drive firmware issue. Uh, and I went to go pull the firmware rev, and I'm like, Toshiba makes hard drives? I had no idea that Toshiba even made hard drives, let alone that they were in our data center. [00:06:38] Bryan: I'm like, what is this?
And as it turns out, and this is, you know, part of the, the challenge when you don't have an integrated system, which, not to pick on them, but Dell doesn't, and what Dell would routinely do is make substitutes. You know, it's kind of like you're going to, like, I don't know, Instacart or whatever, and they're out of the thing that you want. [00:07:03] Bryan: So, you know, someone makes a substitute, and like, sometimes that's okay, but it's really not okay in a data center. And you really want to develop and validate an end-to-end integrated system. And in this case, like, Toshiba does make hard drives, but at the time, uh, they basically were not competitive, and they were not competitive in part for the reasons that we were discovering. [00:07:29] Bryan: They had really serious firmware issues. So these were drives that would just simply stop acknowledging any reads for on the order of 2,700 milliseconds. Long time, 2.7 seconds. Um. And that was a drive firmware issue, but it highlighted a much deeper issue, which was the simple lack of control that we had over our own destiny. [00:07:53] Bryan: Um, and it's an, it's, it's an example among many where Dell is making a decision that lowers the cost of what they are providing you marginally, but it is then giving you a system that they shouldn't have any confidence in, because it's not one that they've actually designed, and they leave it to the customer, the end user, to make these discoveries. [00:08:18] Bryan: And these things happen up and down the stack. And not just to pick on Dell, because it's, it's true for HPE, it's true for Supermicro, uh, it's true for your switch vendors.
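The kind of outlier Bryan describes, a drive that stalls reads for roughly 2.7 seconds, barely moves an average but dominates the high percentiles, which is why this class of firmware bug hides in mean-based dashboards. A sketch of that effect, with synthetic latency numbers chosen to mimic the story (the nearest-rank percentile helper is a standard technique, not anything from the episode):

```python
# Tail-latency outliers like a ~2.7 s read stall barely move the mean
# but dominate high percentiles. Latency numbers here are synthetic.

def percentile(samples, p):
    """Nearest-rank percentile (p in (0, 100]) of a list of numbers."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# 990 healthy reads at 5 ms, plus 10 firmware stalls at 2,700 ms.
latencies_ms = [5.0] * 990 + [2700.0] * 10

mean = sum(latencies_ms) / len(latencies_ms)  # ~32 ms: looks almost fine
p50 = percentile(latencies_ms, 50)            # 5 ms: the median hides it
p995 = percentile(latencies_ms, 99.5)         # 2700 ms: the stall shows up
```

Comparing the same percentile across data centers, as the investigation in the story effectively did, is what isolates the one site whose tail is pathological.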
It's, it's true for storage vendors, where the, the one that is left actually integrating these things and trying to make the whole thing work is the end user sitting in their data center. AWS / Google are not buying off the shelf hardware but you can't use it [00:08:42] Bryan: There's not a product that they can buy that gives them elastic infrastructure, a cloud, in their own DC. The, the product that you buy is the public cloud. Like, when you go in the public cloud, you don't worry about the stuff, because it's, it's AWS's issue or it's GCP's issue. And they are the ones that get this to ground. [00:09:02] Bryan: And this was kind of, you know, the eye-opening moment. Not a surprise. Uh, they are not Dell customers. They're not HPE customers. They're not Supermicro customers. They have designed their own machines, to varying degrees, depending on which one you're looking at. But they've taken the clean sheet of paper, and the frustration that we had kind of at Joyent, beginning to wonder, and then Samsung, kind of wondering what was next, uh, is that what they built was not available for purchase in the data center. [00:09:35] Bryan: You could only rent it in the public cloud. And our big belief is that public cloud computing is a really important revolution in infrastructure. Doesn't feel like a deep thought, but cloud computing is a really important revolution. It shouldn't only be available to rent. You should be able to actually buy it. [00:09:53] Bryan: And there are a bunch of reasons for doing that. Uh, the one we saw at Samsung is economics, which I think is still the dominant reason, where it just does not make sense to rent all of your compute in perpetuity. But there are other reasons too. There's security, there's risk management, there's latency. [00:10:07] Bryan: There are a bunch of reasons why one might wanna own one's own infrastructure.
But, uh, that was very much the, the, so the, the genesis for Oxide was coming out of this very painful experience. Because, I mean, a long answer to your question about, like, what was it like to be at Samsung scale? [00:10:27] Bryan: Those are the kinds of things that, I mean, in our other data centers, we didn't have Toshiba drives. We only had the HGST drives, but it's only when you get to this larger scale that you begin to see some of these pathologies. But these pathologies then are really debilitating in terms of those who are trying to develop a service on top of them. [00:10:45] Bryan: So it was, it was very educational in, in that regard. And we're very grateful for the experience at Samsung in terms of opening our eyes to the challenge of running at that kind of scale. [00:10:57] Jeremy: Yeah, because I, I think as software engineers, a lot of times we, we treat the hardware as a, as a given where, [00:11:08] Bryan: Yeah. [00:11:08] Bryan: Yeah. There's software in hard drives [00:11:09] Jeremy: It sounds like in, in this case, I mean, maybe the issue is not so much that Dell or HP as a company doesn't own every single piece that they're providing you, but rather the fact that they're swapping pieces in and out without advertising them, and then when it becomes a problem, they're not necessarily willing to, to deal with the, the consequences of that. [00:11:34] Bryan: They just don't know. I mean, I think they just genuinely don't know. I mean, it's not like they're making a deliberate decision to kind of ship garbage. It's just that, I mean, I think it's exactly what you said about, like, not thinking about the hardware. It's like, what's a hard drive? [00:11:47] Bryan: I mean, it's a hard drive. It's got the same specs as this other hard drive and, you know, it's a little bit cheaper, so why not?
It's like, well, there's some reasons why not, and one of the reasons why not is, like, uh, even a hard drive, whether it's rotating media or, or flash, like, that's not just hardware. [00:12:05] Bryan: There's software in there. And the software's, like, not the same. I mean, there are components where, if you're looking at like a resistor or a capacitor or something like this, yeah, if you've got two parts that are within the same tolerance, yeah, like, sure. Maybe, although even the EEs I think would be, would be, uh, objecting that a little bit. But the, the more complicated you get, and certainly once you get to the kind of the hardware that we think of, like a, a microprocessor, a network interface card, a hard drive, an NVMe drive. [00:12:38] Bryan: Those things are super complicated, and there's a whole bunch of software inside of those things, the firmware, and that's the stuff that you can't, I mean, you say that software engineers don't think about that. It's like, no one can really think about that, because it's proprietary; that's kinda welded shut, and you've got this abstraction into it. [00:12:55] Bryan: But the, the way that thing operates is very core to how the thing in aggregate will behave. And I think that the, the kind of, the fundamental difference between Oxide's approach and the approach that you get at a Dell, HP, Supermicro, wherever, is really thinking holistically in terms of hardware and software together in a system that, that ultimately delivers cloud computing to a user. [00:13:22] Bryan: And there's a lot of software at many, many, many, many different layers. And it's very important to think about, about that software and that hardware holistically as a single system.
[00:13:34] Jeremy: And during that time at Joyent, when you experienced some of these issues, was it more of a case of you didn't have enough servers experiencing this? So if it would happen, you might say like, well, this one's not working, so maybe we'll just replace the hardware. What, what was the thought process when you were working at that smaller scale and, and how did these issues affect you? UEFI / Baseboard Management Controller [00:13:58] Bryan: Yeah, at the smaller scale, you, uh, you see fewer of them, right? You just see it's like, okay, we, you know, what you might see is like, that's weird. We kinda saw this in one machine versus seeing it in a hundred or a thousand or 10,000. Um, so you just, you just see them, uh, less frequently as a result, they are less debilitating. [00:14:16] Bryan: Um, I, I think that it's, when you go to that larger scale, those things that become, that were unusual now become routine and they become debilitating. Um, so it, it really is in many regards a function of scale. Uh, and then I think it was also, you know, it was a little bit dispiriting that kind of the substrate we were building on really had not improved. [00:14:39] Bryan: Um, and if you look at, you know, the, if you buy a computer server, buy an x86 server. There is a very low layer of firmware, the BIOS, the basic input output system, the UEFI BIOS, and this is like an abstraction layer that has, has existed since the eighties and hasn't really meaningfully improved. Um, the, the kind of the transition to UEFI happened with, I mean, I, I ironically with Itanium, um, you know, two decades ago. [00:15:08] Bryan: but beyond that, like this low layer, this lowest layer of platform enablement software is really only impeding the operability of the system. 
Um, you look at the baseboard management controller, which is the kind of the computer within the computer; there is a, uh, there is an element in the machine that needs to handle environmentals, that needs to, uh, operate the fans and so on. [00:15:31] Bryan: Uh, and that traditionally is the baseboard management controller, and that architecturally just hasn't improved in the last two decades. And, you know, it's a proprietary piece of silicon. Generally from a company that no one's ever heard of called ASPEED, uh, which is written in all caps, so I guess it needs to be screamed. [00:15:50] Bryan: Um, ASPEED has a proprietary part that infamously has a root password, and the root password is encoded effectively in silicon. Uh, which is just, for, um, anyone who kind of goes deep into these things, like, oh my God, are you kidding me? Um, when we first started Oxide, the wifi password was a fraction of the ASPEED root password for the BMC. [00:16:16] Bryan: It's kinda like a little, little BMC humor. Um, but those things, it was just dispiriting that the, the state of the art was still basically personal computers running in the data center. Um, and that's part of what was the motivation for doing something new. [00:16:32] Jeremy: And for the people using these systems, whether it's the baseboard management controller or it's the BIOS or UEFI component, what are the actual problems that people are seeing? Security vulnerabilities and poor practices in the BMC [00:16:51] Bryan: Oh man, you are going to have like some fraction of your listeners, maybe a big fraction, where it's like, yeah, like what are the problems? That's a good question. And then you're gonna have the people that actually deal with these things, whose heads already hit the desk, being like, what are the problems? [00:17:06] Bryan: Like, what are the non-problems? Like what, what works?
Actually, that's like a shorter answer. Um, I mean, there are so many problems, and a lot of it is just, like, I mean, there are problems just architecturally; these things are just, I mean, the problems spread to the horizon, so you can kind of start wherever you want. [00:17:24] Bryan: But, as like, as a really concrete example. Okay, so the, the BMC, the computer within the computer, needs to be on its own network. So you now have, like, not one network, you got two networks. And that network, by the way, that's the network that you're gonna log into to, like, reset the machine when it's otherwise unresponsive. [00:17:44] Bryan: So going into the BMC, you're able to control the entire machine. Well, it's like, alright, so now I've got a second network that I need to manage. What is running on the BMC? Well, it's running some ancient, ancient version of Linux. It's like, well, how do I, how do I patch that? [00:18:02] Bryan: How do I, like, manage the vulnerabilities with that? Because if someone is able to root your BMC, they control the system. And now you've gotta go deal with all of the operational hair around that. How do you upgrade that system, updating the BMC? I mean, it's like you've got this, like, second shadow bad infrastructure that you have to go manage. [00:18:23] Bryan: Generally not open source. There's something called OpenBMC, um, which, um, people use to varying degrees, but you're generally stuck with the proprietary BMC, so you're generally stuck with, with iLO from HPE or iDRAC from Dell or, uh, Supermicro's BMC, and it is just excruciating pain. [00:18:49] Bryan: Um, and this is assuming, by the way, that everything is behaving correctly. The, the problem is that these things often don't behave correctly, and then the consequence of them not behaving correctly.
It's really dire, because it's at that lowest layer of the system. So, I mean, I'll give you a concrete example. [00:19:07] Bryan: A customer of theirs reported to me, so I won't disclose the vendor, but let's just say that a well-known vendor had an issue where their, their temperature sensors were broken. Um, and the thing would always read basically the wrong value. So it was the BMC that had to, like, invent its own, a different kind of thermal control loop. [00:19:28] Bryan: And it would index on the, on the actual inrush current. They would look at the current that's going into the CPU to adjust the fan speed. That's a great example of something that's a, that's an interesting idea. That doesn't work, 'cause that's actually not the temperature. [00:19:45] Bryan: So, like, that software would crank the fans whenever you had an inrush of current, and this customer had a workload that would spike the current, and when it would spike the current, the, the, the fans would kick up and then they would slowly degrade over time. Well, this workload was spiking the current faster than the fans would degrade, but not fast enough to actually heat up the part. [00:20:08] Bryan: And ultimately, over a very long time, in a very painful investigation, this customer determined that, like, my fans are cranked in my data center for no reason. We're blowing cold air. And this is on the order of like a hundred watts a server of, of energy that you shouldn't be spending, and ultimately what that comes down to is this kind of broken software-hardware interface at the lowest layer that has real meaningful consequence, uh, in terms of hundreds of kilowatts, um, across a data center. So this stuff has, has very, very, very real consequence, and it's such a shadowy world.
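The failure mode in that story, fan speed keyed to current spikes with a slow wind-down, so that frequent spikes keep the fans pinned even though the part never heats up, is easy to reproduce in a toy control loop. All constants below are invented for illustration; they are not the vendor's actual firmware parameters.

```python
# Toy model of the broken control loop described above: fans jump to
# full duty on a current spike and decay slowly, so a workload that
# spikes current faster than the fans wind down keeps them pinned
# even though the part never actually heats up. Constants invented.

def average_fan_duty(spike_period, steps=200, decay=0.05):
    fan = 0.0   # fan duty cycle, 0.0 (off) .. 1.0 (cranked)
    total = 0.0
    for t in range(steps):
        if t % spike_period == 0:
            fan = 1.0                    # inrush-current spike detected
        else:
            fan = max(0.0, fan - decay)  # slow wind-down per tick
        total += fan
    return total / steps

busy = average_fan_duty(spike_period=5)     # spikes outpace the decay
quiet = average_fan_duty(spike_period=100)  # fans get time to wind down
```

With spikes every 5 ticks the average duty stays near 0.9, the "fans cranked, blowing cold air" state; with spikes every 100 ticks it drops near 0.1, which is why the waste only shows up under that particular workload pattern.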
Part of the reason that your listeners that have dealt with this will have their heads hit the desk is because it is really aggravating to deal with problems at this layer. [00:21:01] Bryan: You, you feel powerless. You don't control or really see the software that's on them. It's generally proprietary. You are relying on your vendor. Your vendor is telling you that, like, boy, I don't know, you're the only customer seeing this. I mean, the number of times I have heard that, and I, I have pledged that we're, we're not gonna say that at Oxide, because it's such an unaskable thing to say, like, you're the only customer seeing this. [00:21:25] Bryan: It's like, it feels like, are you blaming me for my problem? Feels like you're blaming me for my problem? Um, and what you begin to realize is that to a degree, these folks are speaking their own truth, because the, the folks that are running at real scale, at hyperscale, those folks aren't Dell, HP, Supermicro customers. [00:21:46] Bryan: They've done their own thing. So it's like, yeah, Dell's not seeing that problem, um, because they're not running at the same scale. Um, but you only have to run at modest scale before these things just become overwhelming in terms of the, the headwind that they present to people that wanna deploy infrastructure. The problem is felt with just a few racks [00:22:05] Jeremy: Yeah, so maybe to help people get some perspective, at, at what point do you think that people start noticing or start feeling these problems? Because I imagine that if you just have a few racks or [00:22:22] Bryan: Do you have a couple racks, or, or just wondering? No, no, no. I would think, I think anyone who deploys any number of servers, especially now, especially if your experience is only in the cloud, you're gonna be like, what the hell is this? I mean, just again, just to get this thing working at all.
[00:22:39] Bryan: It's so hairy and so congealed, right? It's not designed. Um, it's accreted, and it's so obviously accreted. I mean, nobody who is setting up a rack of servers is gonna think to themselves, yes, this is the right way to go do it, this all makes sense. Because it's just not. It feels like a kit car, and kit car is almost too generous, because it implies that there's a set of plans to work to in the end. [00:23:08] Bryan: Uh, I mean, it's a bag of bolts. It's a bunch of parts that you're putting together. And so even at the smallest scales, that stuff is painful. Architecturally, it's painful at small scale, but at least you can get it working. I think the stuff that then becomes debilitating at larger scale are the things that are worse than just "this thing is a mess to get working." [00:23:31] Bryan: It's like the fan issue, um, where you are now seeing this over, you know, hundreds of machines or thousands of machines. Um, so it is painful at more or less all levels of scale. There is no level at which the PC, which is really what this is, the personal computer architecture from the 1980s, is the right unit. Running elastic infrastructure is the hardware but also hypervisor, distributed database, API, etc. [00:23:57] Bryan: I mean, there's no level of scale where that's the right thing to go deploy, especially if what you are trying to run is elastic infrastructure, a cloud. Because the other thing is, we've kind of been talking a lot about that hardware layer. Hardware is just the start. You actually gotta go put software on that and actually run that as elastic infrastructure. [00:24:16] Bryan: So you need a hypervisor, yes, but you need a lot more than that.
You need a distributed database, you need web endpoints, you need a CLI, you need all the stuff it takes to actually go run an actual service of compute or networking or storage. And even for compute, there's a ton of work to be done. [00:24:39] Bryan: And compute is by far, I would say, the simplest of the three. When you look at network services and storage services, there's a whole bunch of stuff that you need to go build in terms of distributed systems to actually offer that as a cloud. So it is painful at more or less every level if you are trying to deploy cloud computing on this. What's a control plane? [00:25:00] Jeremy: And for someone who doesn't have experience building or working with this type of infrastructure, when you talk about a control plane, what does that do in the context of this system? [00:25:16] Bryan: So the control plane is everything between your API request and that infrastructure actually being acted upon. So you go say, hey, I want to provision a VM. Okay, great, we've got a whole bunch of things we're gonna provision with that. We're gonna provision a VM, we're gonna get some storage that's gonna go along with that, which is gonna come out of a network storage service, and we've got a virtual network that we're gonna either create or attach to. [00:25:39] Bryan: We've got a whole bunch of things we need to go do for that. For all of these things, there are metadata components that we need to keep track of, beyond the actual infrastructure that we create. And then we need to go actually act on the actual compute elements, the host OS, the switches, what have you, and actually go [00:25:56] Bryan: create these underlying things and then connect them. And there's, of course, the challenge of just getting that working at all, which is a big challenge.
Um, but then there's getting that working robustly. You know, when you go to provision a VM, there are all the steps that need to happen, and what happens if one of those steps fails along the way? [00:26:17] Bryan: One thing we're very mindful of is these long tails: generally our VM provisioning happens within a certain time, but we get these long tails where it takes much longer. What's going on? Where in this process are we actually spending time? [00:26:33] Bryan: Uh, and there's a whole lot of complexity that you need to deal with there, a lot of complexity to deal with this effectively: this workflow that's gonna go create these things and manage them. Um, we use a pattern called sagas, actually a database pattern from the eighties. [00:26:51] Bryan: Uh, Caitie McCaffrey is a distributed systems researcher who, I think, reintroduced the idea of sagas in the last decade or so. Um, and this is something that we picked up and have done a lot of really interesting things with, um, to allow these workflows to be managed, and managed robustly, in a way that you can restart them and so on. [00:27:16] Bryan: Uh, and then you get this whole distributed system that can do all this. That whole distributed system itself needs to be reliable and available. So what happens if you pull a sled or if a sled fails? How does the system deal with that? [00:27:33] Bryan: How does the system deal with getting another sled added? How do you actually grow this distributed system? And then how do you update it? How do you actually go from one version to the next? And all of that has to happen across an air gap, where this is gonna run as part of the computer.
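The saga pattern described here can be sketched in a few lines of Rust. This is a toy illustration under simple assumptions, not Oxide's actual implementation (which also has to persist state and resume across restarts): each step pairs a forward action with a compensating undo, and if any step fails, the completed steps unwind in reverse order, so the workflow either finishes or cleans up after itself.

```rust
// Minimal saga sketch: every step has a forward action and a
// compensating "undo". On failure, completed steps are compensated
// in reverse order. All names here are invented for illustration.

type Action = fn(&mut Vec<String>) -> Result<(), String>;

struct Step {
    name: &'static str,
    forward: Action,
    undo: Action,
}

fn run_saga(steps: &[Step], log: &mut Vec<String>) -> Result<(), String> {
    let mut completed: Vec<&Step> = Vec::new();
    for step in steps {
        log.push(format!("do {}", step.name));
        match (step.forward)(log) {
            Ok(()) => completed.push(step),
            Err(e) => {
                // Unwind: compensate completed steps, newest first.
                for done in completed.iter().rev() {
                    log.push(format!("undo {}", done.name));
                    let _ = (done.undo)(log);
                }
                return Err(e);
            }
        }
    }
    Ok(())
}

fn main() {
    let ok: Action = |_log| Ok(());
    let fail: Action = |_log| Err("no capacity on any sled".to_string());

    // A provisioning-shaped workflow whose final step fails.
    let steps = [
        Step { name: "allocate-ip", forward: ok, undo: ok },
        Step { name: "create-disk", forward: ok, undo: ok },
        Step { name: "start-vm", forward: fail, undo: ok },
    ];

    let mut log = Vec::new();
    assert!(run_saga(&steps, &mut log).is_err());

    // The two completed steps were compensated in reverse order.
    assert_eq!(log, vec![
        "do allocate-ip", "do create-disk", "do start-vm",
        "undo create-disk", "undo allocate-ip",
    ]);
    println!("{:?}", log);
}
```

The restartability Bryan mentions is the part this sketch omits: a real saga executor records each step's outcome durably so the unwind (or the resume) survives a crash of the executor itself.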
[00:27:49] Bryan: So it is fractally complicated. There is a lot of complexity here in the software system, and all of that we call the control plane. Um, and this is what exists at AWS, at GCP, at Azure. When you are hitting an endpoint that's provisioning an EC2 instance for you, [00:28:10] Bryan: there is an AWS control plane that is doing all of this and has some of these same aspects, and certainly some of these same challenges. Are vSphere / Proxmox / Hyper-V in the same category? [00:28:20] Jeremy: And for people who have run their own servers with something like, say, VMware or Hyper-V or Proxmox, are those in the same category? [00:28:32] Bryan: Yeah, I mean, a little bit. Kind of: vSphere, yes; VMware ESX, no. VMware ESX is a key building block upon which you can build something that is a more meaningful distributed system. When it's just a machine that you're provisioning VMs on, it's like, okay, well, you as the human might be the control plane. [00:28:52] Bryan: That's a much easier problem. Um, but when you've got, you know, tens, hundreds, thousands of machines, you need to do it robustly. You need something to coordinate that activity. You need to pick which sled you land on, you need to be able to move these things, you need to be able to update that whole system. [00:29:06] Bryan: That's when you're getting into a control plane. So some of these things have kind of edged into a control plane. Certainly VMware, um, now Broadcom, has delivered something that's kind of cloudish. Um, I think that for folks that are truly born on the cloud, it still feels somewhat like you're going backwards in time when you look at these kind of on-prem offerings. [00:29:29] Bryan: Um, but it's got these aspects to it, for sure.
Um, and with some of these other things, when you're just looking at KVM or just looking at Proxmox, you kind of need to connect it to other, broader things to turn it into something that really looks like manageable infrastructure. [00:29:47] Bryan: And many of those projects are either proprietary products, like vSphere, um, or open source projects that are not necessarily aimed at the same level of scale. Um, you know, you look at, again, Proxmox, or, uh, OpenStack. [00:30:05] Bryan: Um, and, you know, OpenStack is just a lot of things, right? OpenStack was kind of a free-for-all for every infrastructure vendor. Um, and there was a time people were like, aren't you worried about all these companies coming together for OpenStack? [00:30:24] Bryan: I'm like, haven't you ever worked for, like, a company? Companies don't get along. Having multiple companies work together on a thing, that's bad news, not good news. And I think one of the things that OpenStack has definitely struggled with is that there are so many different vendor elements in there that it's very much not a product; it's a project that you're trying to run. [00:30:47] Bryan: But that very much is similar, certainly in spirit. [00:30:53] Jeremy: And so I think this is what you were alluding to earlier: the piece that allows you to allocate compute and storage, manage networking, gives you that experience of, I can go to a web console or I can use an API, and I can spin up machines and get them all connected. At the end of the day, the control plane is allowing you to do that in, hopefully, a user-friendly way. [00:31:21] Bryan: That's right. Yep.
And in order to do that in a modern way, it's not just a user-friendly way: you really need to have a CLI and a web UI and an API, and those all need to be drawn from the same single ground truth. You don't wanna have any of those be an afterthought for the others. [00:31:39] Bryan: You wanna have the same way of generating all of those different endpoints and entries into the system. Building a control plane now has better tools (Rust, CockroachDB) [00:31:46] Jeremy: And if you take your time at Joyent as an example, what kind of tools existed for that, versus how much did you have to build in-house, as far as the hypervisor and managing the compute and all that? [00:32:02] Bryan: Yeah, so we built more or less everything in-house. Um, I think over time we've gotten slightly better tools, and maybe it's easier to talk about the tools we started with at Oxide, because we started with a clean sheet of paper at Oxide. [00:32:16] Bryan: We knew we wanted to go build a control plane, but we were able to revisit some of the components. So maybe I'll talk about some of those changes. For example, when we were building a cloud at Joyent, there wasn't really a good distributed database. [00:32:34] Bryan: Um, so we were using Postgres as our database for metadata, and there were a lot of challenges. Postgres is not a distributed database; it runs with a primary/secondary architecture, and there's a bunch of issues there, many of which we discovered the hard way. Um, when we were coming to Oxide, you had much better options to pick from in terms of distributed databases. [00:32:57] Bryan: You know, there was a period, one that now seems potentially brief in hindsight, of really high-quality open source distributed databases.
So there were really some good ones to pick from. Um, we built on CockroachDB, on CRDB. Um, so that was a really important component that we had at Oxide that we didn't have at Joyent. [00:33:19] Bryan: Um, I wouldn't say we were rolling our own distributed database at Joyent; we were just using Postgres and dealing with an enormous amount of pain in everything surrounding it. Um, on top of that, a control plane is much more than a database, obviously. Uh, there's a whole bunch of software that you need to go write [00:33:40] Bryan: to be able to transform these API requests into something that is reliable infrastructure, right? And there's a lot to that, uh, especially when networking gets in the mix, when storage gets in the mix. There are a whole bunch of complicated steps that need to be done. Um, at Joyent, [00:33:59] Bryan: um, in part because of the history of the company, and look, this is just not gonna sound good, but it is what it is and I'm just gonna own it: we did it all in Node. Um, I know that right now that just sounds like, well, you built it with Tinkertoys. [00:34:18] Bryan: You built the skyscraper with Tinkertoys? Uh, it's like, well, okay, we actually had greater aspirations for the Tinkertoys once upon a time, and it was better than, you know, Twisted Python and EventMachine from Ruby, and we weren't gonna do it in Java, all right? [00:34:32] Bryan: So let's just say that that experiment did ultimately end in a predictable fashion. Um, and we decided that maybe Node was not gonna be the best decision long-term. Um, Joyent was the company behind Node.js back in the day; Ryan Dahl worked for Joyent. Uh, and then we
[00:34:53] Bryan: landed that in a foundation in about, what, 2015, something like that, um, and began to consider our world beyond Node. Rust at Oxide [00:35:04] Bryan: A big tool that we had in the arsenal when we started Oxide is Rust. Um, and indeed the name of the company is a tip of the hat to the language that we were pretty sure we were gonna be building a lot of stuff in, [00:35:16] Bryan: namely Rust. And Rust has been huge for us, a very important revolution in programming languages. You know, there have been different people coming in at different times, and I came to Rust in what I think of as this big second expansion of Rust, in 2018, when a lot of technologists were sick of Node, and also sick of Go, [00:35:43] Bryan: and also sick of C++, and wondering: is there gonna be something that gives me the performance that I get out of C, and the robustness that I can get out of a C program but that is often difficult to achieve, but with some of the velocity of development, although I hate that term, some of the speed of development that you get out of a more interpreted language? [00:36:08] Bryan: Um, and then, by the way, can I actually have types? I think types would be a good idea. Uh, and Rust obviously hits the sweet spot of all of that. Um, it has been absolutely huge for us. I mean, we knew when we started Oxide that we were gonna be using Rust in quite a few places, but we weren't doing it by fiat. [00:36:27] Bryan: Um, we wanted to actually make sure we were making the right decision at every layer. Uh, I think what has been surprising is the sheer number of layers at which we use Rust: we've done our own embedded firmware in Rust, and in the host operating system, which is still largely in C, very big components are in Rust.
[00:36:47] Bryan: The hypervisor, Propolis, is all in Rust. Uh, and then of course the control plane, that distributed system on top, is all in Rust. So that was a very important thing that we very much did not need to build ourselves; we were able to really leverage a terrific community. Um, and we've done this at Joyent as well, but at Oxide we've used illumos as the host OS component, and our variant is called Helios. [00:37:11] Bryan: Um, we've used bhyve as that kind of internal hypervisor component. We've made use of a bunch of different open source components to build this thing, um, which has been really, really important for us. Uh, open source components that didn't exist even five years prior. [00:37:28] Bryan: That's part of why we felt that 2019 was the right time to start the company. And so we started Oxide. The problems building a control plane in Node [00:37:34] Jeremy: You had mentioned that at Joyent, you had tried to build this in Node. What were the issues or the challenges that you had doing that? [00:37:46] Bryan: Oh boy. Yeah. Again, I kind of had higher hopes in 2010, I would say, when we set out on this. Um, the problem that we had, writ large: JavaScript is really designed to allow as many people on Earth to write a program as possible, which is good. I mean, that's a laudable goal. [00:38:09] Bryan: That is the goal, such as it is, of JavaScript. It's actually hard to know what the goal of JavaScript is, unfortunately, because Brendan Eich never actually wrote a book. So there is not a canonical source; you've got Doug Crockford and other people who've written things on JavaScript, but it's hard to know the original intent of JavaScript. [00:38:27] Bryan: The name doesn't even express original intent, right?
It was called LiveScript, and it was renamed to JavaScript during the Java frenzy of the late nineties. A name that makes no sense: there is no Java in JavaScript. That is, I think, revealing of the unprincipled mess that is JavaScript. [00:38:47] Bryan: It's very pragmatic at some level, um, and it makes it very easy to write software. The problem is that it's much more difficult to write really rigorous software. And here I should differentiate JavaScript from TypeScript, because this is really what TypeScript is trying to solve. [00:39:07] Bryan: I think TypeScript is a great step forward, because TypeScript asks: how can we bring some rigor to this? Like, yes, it's great that it's easy to write JavaScript, but that's not the only problem we want to solve. [00:39:23] Bryan: We actually wanna be able to write rigorous software, and it's okay if it's a little harder to write, if it leads to more rigorous artifacts. Um, but in JavaScript, just as a concrete example: there's nothing to prevent you from referencing a property that doesn't actually exist. [00:39:43] Bryan: So if you fat-finger a property name, you are relying on something to tell you, by the way, I think you've misspelled this. But there is no type definition for this thing, so nothing knows that you've got one spelled correctly and one spelled incorrectly; the misspelled one is just undefined. And then you've got this typo lurking in what you want to be rigorous software. [00:40:07] Bryan: And if you don't execute that code, you won't know it's there. And then you do execute that code, and now you've got an undefined object.
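The fat-fingered-property failure mode described here is exactly what static typing rules out. As a contrast, here is a minimal Rust sketch (the struct and field names are made up for illustration): the same typo that JavaScript defers to runtime as `undefined` is rejected before the program ever runs.

```rust
// In JavaScript, reading a misspelled property silently yields
// `undefined`. In a statically typed language, the typo is a
// compile-time error. Names here are invented for illustration.

struct Instance {
    hostname: String,
}

fn main() {
    let inst = Instance { hostname: String::from("web-01") };

    // Fine: the field exists and the compiler has checked it.
    println!("{}", inst.hostname);

    // Uncommenting the typo'd access does not produce an `undefined`
    // lurking far from the mistake; the build simply fails:
    //
    //     println!("{}", inst.hostnme);
    //     // error[E0609]: no field `hostnme` on type `Instance`
}
```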
And now that's either gonna be an exception or, depending on how it's handled, it can be really difficult to determine the origin of that error, that programming error. [00:40:26] Bryan: And that is a programmer error. One of the big challenges that we had with Node is that programmer errors and operational errors (you know, "I'm out of disk space" is an operational error) get conflated, and it becomes really hard. And in fact, I think the language wanted to make it easier to just kind of drive on in the event of all errors. [00:40:53] Bryan: And that's actually not what you wanna do if you're trying to build a reliable, robust system. So we had no end of issues. [00:41:01] Bryan: We've got a lot of experience developing rigorous systems, um, coming out of operating systems development and so on, and we brought some of that rigor, if strangely, to JavaScript. So one of the things that we did is we brought a lot of postmortem diagnosability and observability to Node. [00:41:18] Bryan: And so if one of our Node processes died in production, we would actually get a core dump from that process, a core dump that we could actually meaningfully process. So we did a bunch of kind of wild stuff, I mean, actually wild stuff, where we could make sense of the JavaScript objects in a binary core dump. JavaScript values ease of getting started over robustness [00:41:41] Bryan: Um, these were things that we thought were really important, and the rest of the world just looks at this like, what the hell is this? It's so out of step. The problem is that we were trying to bridge two disconnected cultures: one developing really rigorous software, really designing it for production diagnosability, and the other really designing software to run in the browser, for anyone to be able to, you know, liven up a webpage, right?
[00:42:10] Bryan: That's kinda the origin of LiveScript and then JavaScript. And we were kind of the only ones sitting at the intersection of those. And when you are the only ones sitting at that kind of intersection, you're kind of fighting a community all the time. And we just realized that there were so many things that the community wanted to do where we felt, no, no, this is gonna make software less diagnosable, it's gonna make it less robust. The Node.js split and why people left [00:42:36] Bryan: And then you realize, we're the only voice in the room, because we have desires for this language that it doesn't have for itself. And this is when you realize you're in a bad relationship with software: it's time to actually move on. And in fact we did, uh, several years later, and it was a bit of an acrimonious breakup. There was a famous, slash infamous, fork of Node called io.js. Um, and this happened because the community thought that Joyent was not being an appropriate steward of Node.js and was not allowing more things to come into Node. [00:43:19] Bryan: And of course, we felt that we were being a careful steward, and we were actively resisting those things that would cut against its fitness for a production system. But that's the way the community saw it, and they forked. Um, and I think we knew before the fork that this is not working and we need to get this thing out of our hands. Platform as a reflection of values Node Summit talk [00:43:43] Bryan: We were the wrong hands for this; this needed to be in a foundation. Uh, and so we had gone through that breakup, and maybe it was two years after that.
A friend of mine, who has unfortunately now passed away, was running the Node Summit. Charles, um, a venture capitalist, great guy. Charles was running Node Summit and came to me in 2017. [00:44:07] Bryan: He's like, I really want you to keynote Node Summit. And I'm like, Charles, I'm not gonna do that. I've got nothing nice to say. I'm the last person you want to keynote. He's like, oh, if you have nothing nice to say, you should definitely keynote. And you're like, oh God, okay, here we go. [00:44:22] Bryan: He's like, no, I really want you to talk about the Joyent breakup with Node.js. I'm like, oh man. [00:44:29] Bryan: And that led to a talk that I'm really happy that I gave, 'cause it was a very important talk for me personally, called "Platform as a Reflection of Values," really looking at the values that we had for Node and the values that Node had for itself. And they didn't line up. [00:44:49] Bryan: And the problem is that the values that Node had for itself and the values that we had for Node are all kind of positives, right? There's nobody in the Node community who's like, I don't want rigor, I hate rigor. It's just that if they had to choose between rigor and making the language approachable, [00:45:09] Bryan: they would choose approachability every single time. They would never choose rigor. And, you know, that was a big eye-opener. I do recommend watching this talk, [00:45:20] Bryan: because I knew that the audience was gonna be filled with people who had been a part of the fork, the io.js fork, in 2014, I think. And because I knew some people there had been part of the fork, [00:45:41] Bryan: I set a little bit of a trap for the audience.
And the trap: I talked about the values that we had and the aspirations we had for Node, the aspirations that Node had for itself, and how they were different. [00:45:53] Bryan: And I'm like, look, in hindsight, a fracture was inevitable. And in 2014 there was finally a fracture. And do people know what happened in 2014? If you listen to that talk, the audience almost says in unison: io.js. I'm like, oh right, io.js. Right. That's actually not what I was thinking of. [00:46:19] Bryan: And I go to the next slide, and it's a tweet from a guy named TJ Holowaychuk, who was the most prolific contributor to Node. And it was his tweet, also in 2014, before the io.js fork, explaining that he was leaving Node and that he was going to Go. And if you turn the volume all the way up, you can hear the audience gasp. [00:46:41] Bryan: And it's just delicious, because the community had never really confronted why TJ left. Um, and I went through a couple of folks, Felix and a bunch of other early Node folks, that were there in 2010 and were leaving in 2014, and they were going, primarily, to Go. And they were going because they were sick of the same things that we were sick of. [00:47:09] Bryan: They had hit the same things that we had hit, and they were frustrated. I really do believe this: platforms do reflect their own values. And when you are making a software decision, you are selecting values. [00:47:26] Bryan: You should select values that align with the values that you have for that software. That's way more important than other things that people look at. I think people look at, for example, quote-unquote community size way too frequently. Community size is like, eh, maybe it can be fine. [00:47:44] Bryan: I've been in very large communities, like Node.
I've been in super small open source communities, like illumos, and a bunch of others. There are strengths and weaknesses to both, just as there's a strength to being in a big city versus a small town. Me personally, I'll take the small community more or less every time, because the small community is almost always self-selecting based on values, for the same reason that I like working at small companies or on small teams. [00:48:11] Bryan: There's a lot of value to be had in a small community. That's not to say that large communities are valueless. But, again, long answer to your question of where things went south with Joyent and Node: they went south because the values that we had and the values the community had didn't line up, and that was a very educational experience, as you might imagine. [00:48:33] Jeremy: Yeah. And given that you mentioned how, because of those values, some people moved from Node to Go, and in the end, for much of what Oxide is building, you ended up using Rust: what would you say are the values of Go and Rust, and how did you end up choosing Rust? Go's decisions regarding generics, versioning, compilation speed priority [00:48:56] Bryan: Yeah, well, I understand why people moved from Node to Go. Go to me was kind of a lateral move. Um, there were a bunch of things I didn't like. Go was still garbage collected, um, which I didn't like. Um, Go also is very strange in that there are these kind of [00:49:17] Bryan: autocratic decisions that are very bizarre. Um, generics is kind of a famous one, right? Go, kind of as a point of principle, didn't have generics, even though the innards of Go itself actually did have generics. It's just that you, a Go user, weren't allowed to have them.
[00:49:35] Bryan: And, you know, there was an old cartoon years and years ago about how, when a technologist tells you that something is technically impossible, that actually means "I don't feel like it." Uh, and there was a certain degree of "generics are technically impossible" in Go, and it's like, hey, actually, they're not. I just think that the arguments against generics were kind of disingenuous. Um, and indeed, they ended up adopting generics. And then there's some super weird stuff, like they're very anti-assertion, which is like, what? How is someone against assertions? It doesn't even make any sense. But it's like, oh, nope. [00:50:10] Bryan: Okay, there's a whole screed on it: nope, we're against assertions. And against versioning; that was another thing. You know, Rob Pike has kind of famously been like, you should always just run at the latest commit. And you're like, does that make sense? I mean, we actually have to build things. [00:50:26] Bryan: And so there are a bunch of things like that where you're just like, okay, this is just exhausting. I mean, there are some things about Go that are great, and plenty of other things that I'm just not a fan of. Um, I think that, in the end, Go cares a lot about compile time. It's super important for Go, right? [00:50:44] Bryan: Very quick compile time. I'm like, okay, but compile time is not... it's not unimportant, it doesn't have zero importance, but I've got other things that are lots more important than that. Um, what I really care about is that I want a high-performing artifact. I wanted garbage collection out of my life.
Don't think garbage collection has good trade-offs [00:51:00] Bryan: I gotta tell you, garbage collection to me is an embodiment of this larger problem of where you put cognitive load in the software development process. And what garbage collection is saying is, it is right for plenty of other people and the software that they wanna develop, [00:51:21] Bryan: but for me and the software that I wanna develop, infrastructure software, I don't want garbage collection, because I can solve the memory allocation problem. I know when I'm done with something or not. Whether that's in C, I mean, it's really not that hard to not leak memory in a C-based system, [00:51:44] Bryan: and you can give yourself a lot of tooling that allows you to diagnose where memory leaks are coming from. So that is a solvable problem. There are other challenges with that, but when you are developing a really sophisticated system that is using garbage collection, [00:51:59] Bryan: you spend as much time trying to dork with the garbage collector to convince it to collect the thing that you know is garbage. You're like, I've got this thing, I know it's garbage, and now I need to use these tips and tricks to get the garbage collector to collect it. I mean, it feels like every Java performance issue ends in some -XX flag: whatever garbage collector you're using, use a different one, a different approach. [00:52:23] Bryan: So, to me, you're in the worst of all worlds, where the reason that garbage collection is helpful is that the programmer doesn't have to think at all about this problem, but now you're actually dealing with these long pauses in production. [00:52:38] Bryan: You're dealing with all these other issues where actually you need to think a lot about it.
And it's kind of witchcraft. It's this black box that you can't see into. So it's like, what problem have we solved exactly? And so the fact that Go had garbage collection, it's like, eh, no, I do not want that. And then you get all the other weird fatwas and everything else. [00:52:57] Bryan: I'm like, no, thank you. Go is a no-thank-you for me. I get why people like it or use it, but that was not gonna be it. Choosing Rust [00:53:04] Bryan: I'm like, I want C. But there are things I didn't like about C too. I was looking for something that was gonna give me the deterministic kind of artifact that I got out of C, but I wanted library support, and C is tough because it's all convention. You know, there's just a bunch of other things that are just thorny. And I remember thinking vividly in 2018, well, it's Rust or bust. Ownership model, algebraic types, error handling [00:53:28] Bryan: I'm gonna go into Rust, and I hope I like it, because if it's not this, I'm gonna go back to C. I'm literally trying to figure out what the language is for the back half of my career. And I did what a lot of people were doing at that time and have been doing since: really getting into Rust, really learning it, appreciating the difference in the model, for sure the ownership model people talk about. [00:53:54] Bryan: That's also obviously very important. But it was the error handling that blew me away, and the idea of algebraic types. I never really had algebraic types. And having them for error handling is one of these things you really appreciate. It's like, how do you deal with a function that can either succeed and return something, or fail? And the way C deals with that is bad, with these kind of sentinels for errors.
[00:54:27] Bryan: And, you know, does negative one mean success? Does negative one mean failure? Does zero mean failure? In some C functions, zero means failure. Traditionally in Unix, zero means success. And what if you wanna return a file descriptor? You know, it's like, oh, okay, then zero through positive N will be a valid result, negative numbers will be errors. And is it negative one and I set errno, or is it a negative number that... I mean, that's all convention, right? People do all those different things, and it's all convention, and it's easy to get wrong, easy to have bugs, can't be statically checked, and so on. And then what Go says is, well, you're gonna have two return values, and then you're gonna have to constantly check all of these all the time, which is also kind of gross. JavaScript is like, hey, let's toss an exception. If we see an error, we'll throw an exception. [00:55:15] Bryan: There are a bunch of reasons I don't like that. And then you look at what Rust does, where it's like, no, no, no, we're gonna have these algebraic types, which is to say this thing can be a this or a that, but it has to be one of these. And by the way, you don't get to process this thing until you conditionally match on one of these things. [00:55:35] Bryan: You're gonna have to have a pattern match on this thing to determine if it's a this or a that. And the Result type is a generic where it's gonna be either an Ok that contains the thing you wanna return, or it's gonna be an Err that contains your error, and it forces your code to deal with that.
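The Result type Bryan is describing can be sketched in a few lines of Rust. This is a hypothetical illustration, not code from the episode; `parse_port` and `describe` are made-up names:

```rust
// Hypothetical sketch (not code from the episode) of the Result type
// Bryan describes: an algebraic type that is either Ok(value) or
// Err(error), replacing C-style sentinels like -1 plus errno.
fn parse_port(s: &str) -> Result<u16, String> {
    s.trim()
        .parse::<u16>() // Ok(port) on success, Err(ParseIntError) on failure
        .map_err(|e| format!("bad port {:?}: {}", s, e))
}

// The compiler forces a pattern match before the value can be used;
// there is no way to silently ignore the error case.
fn describe(s: &str) -> String {
    match parse_port(s) {
        Ok(port) => format!("listening on {}", port),
        Err(msg) => msg,
    }
}

fn main() {
    println!("{}", describe("8080"));
    println!("{}", describe("not-a-port"));
}
```

In real Rust code the `?` operator propagates the Err variant up the call stack, so the match happens wherever the error is actually handled.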
[00:55:57] Bryan: And what that does is it shifts the cognitive load from the person that is operating this thing in production to the actual developer, in development. And I love that shift. That shift to me is really important, and that's what I was missing. That's what Rust gives you. [00:56:23] Bryan: Rust forces you to think about your code as you write it, but as a result, you have an artifact that is much more supportable, much more sustainable, and much faster. Prefer to frontload cognitive load during development instead of at runtime [00:56:34] Jeremy: Yeah, it sounds like you would rather take the time during development to think about these issues, because whether it's garbage collection or it's error handling, at runtime when you're trying to solve a problem, it's much more difficult than having dealt with it to start with. [00:56:57] Bryan: Yeah, absolutely. And again, if it's infrastructure software... I mean, the question that you should have when you're writing software is, how long is this software gonna live? How many people are gonna use this software? And if you are writing an operating system, the answer is that this thing you're gonna write is gonna live for a long time. [00:57:18] Bryan: If we just look at plenty of aspects of the system that have been around for decades, it's gonna live for a long time, and many, many, many people are gonna use it. Why would we not expect people writing that software to have more cognitive load when they're writing it, to give us something that's gonna be a better artifact? [00:57:38] Bryan: Now conversely, you're like, hey, I kind of don't care about this. I don't know, I just wanna see if this whole thing works. I'm just stringing this together.
No, the software will be lucky if it survives until tonight, but then, who cares? Yeah. Yeah. [00:57:52] Bryan: Garbage collect, you know, if you're prototyping something, whatever. And this is why you really do get different technology choices depending on the way that you wanna solve the problem at hand. And for the software that I wanna write, I do like that cognitive load that is upfront. With LLMs maybe you can get the benefit of the robust artifact with less cognitive load [00:58:10] Bryan: Although I think the thing that is really wild, the twist that I don't think anyone really saw coming, is that in an LLM age, that cognitive load upfront almost needs an asterisk on it, because so much of it can be assisted by an LLM. And I would like to believe, and maybe this is me being optimistic, that in the LLM age we will see... I mean, Rust is a great fit for the LLM age, because the LLM itself can get a lot of feedback about whether the software that's written is correct or not. [00:58:44] Bryan: Much more so than you can in other environments. [00:58:48] Jeremy: Yeah, that is an interesting point, in that I think when people first started trying out the LLMs to code, they were really good at these maybe looser languages like Python or JavaScript, and initially weren't so good at something like Rust. But it sounds like as that improves, if it can write it, then because of the rigor or the memory management or the error handling that the language is forcing you to do, it might actually end up being a better choice for people using LLMs. [00:59:27] Bryan: Absolutely. It gives you more certainty in the artifact that you've delivered. I mean, you know a lot about a Rust program that compiles correctly.
I mean, there are certain classes of errors that you don't have, that you actually don't know about in a C program or a Go program or a JavaScript program. [00:59:46] Bryan: I think that's gonna be really important. I think we are on the cusp, maybe we've already seen it, of this kind of great bifurcation in the software that we write.

Business of Tech
Pentagon Pressures Anthropic for AI Access; VMware Exit Costs and Compliance Risks for MSPs

Business of Tech

Play Episode Listen Later Feb 26, 2026 13:58


The episode's central development is the ongoing dispute between the U.S. Department of Defense and Anthropic regarding Pentagon demands for unrestricted access to Claude, Anthropic's AI model. According to Dave Sobel, the Pentagon has threatened to sever ties or invoke the Defense Production Act if the company does not comply, seeking capabilities that Anthropic argues may be illegal—specifically mass surveillance without warrants and autonomous weapons systems without human control. This move exposes Managed Service Providers (MSPs) serving defense contractors to unpredictable legal, operational, and compliance risks embedded in their AI workflows. The analysis highlights that a commercial AI provider's acceptable use policy now intersects directly with national security policy, and even partial vendor compliance can trigger regulatory or legal instability for dependent organizations. For MSPs, this means that building service offerings on AI infrastructures without clear fallback strategies or documented policy change clauses can lead to unmanageable risk and liability in the event of provider or legal regime shifts. Dave Sobel stresses that failing to address policy volatility as part of a managed service amounts to underwriting geopolitical risk without compensation. Other notable developments include the passage of the Small Business Artificial Intelligence Advancement Act, federal cybersecurity resource contraction as CISA operates with 38% staffing after layoffs, and heightened uncertainty around cloud infrastructure due to Microsoft's Azure Local “air-gapped” offering not wholly mitigating U.S. CLOUD Act exposure. 
Vendor news covered new AI-powered compliance features from Compliance Scorecard (version 10) and Beachhead Solutions (ComplianceEZ 2.0), Apple's accelerated retirement of Rosetta 2 translation technology, a Microsoft 365 Copilot DLP change, and continued fallout from VMware's acquisition by Broadcom, which has led to ongoing cost and trust challenges for cloud and infrastructure partners. The episode's clear implications for MSPs and IT providers are operational. Service catalogs and statements of work should actively address AI provider liability, dependency exit planning, and degraded federal cybersecurity support. Without scheduled and documented compatibility and risk reviews, MSPs absorb hidden exposure into their margins. Vendor stability can no longer be assumed, and proactive policy, renewal intelligence, and transparent advisory sessions are now required to avoid unplanned liability, budget crises, and damaged client trust.
Four things to know today:
00:00 Pentagon Threatens Anthropic Over Claude Access, Demands Autonomous Weapons Use
04:31 CISA Cuts, Azure Sovereignty Push Signal End of Federal MSP Safety Net
06:56 AI Compliance Tools Flood Market as MSPs Face Validation Gap
09:54 86% of Firms Cutting VMware Ties as Broadcom Renewal Costs Loom
This is the Business of Tech.
Supported by: Small Biz Thoughts Community

Dark Rhino Security Podcast
S18 E08 The Hidden Risks of Autonomous AI

Dark Rhino Security Podcast

Play Episode Listen Later Feb 25, 2026 50:42


Filip Verloy is a technology leader with over 25 years of experience across enterprise IT, consulting, and global vendors. Currently working on securing Agentic AI for the enterprise, he brings deep expertise in API security, infrastructure, and large-scale complex environments. Before joining Rubrik, Filip served as Global Field CTO at API security startup Noname Security and held senior architecture and solutions roles at Citrix, Dell, Riverbed, and VMware. Known for his curiosity and commitment to understanding the fundamentals behind technology, Filip challenges the "illusion of knowledge" and focuses on building secure, resilient systems from first principles.
00:00 Intro
02:30 Our Guest
05:06 Illusion of Knowledge
07:04 Unknown-Unknowns in AI
09:57 Increasing the Attack Surface
12:58 Risk in the Age of Agentic AI
17:56 How do you secure that data?
25:00 How do we deal with IAM in this world of Agentic AI?
31:22 API Security and API Access in Agentic AI
39:02 How is the model of consuming surfaces over the internet going to change?
43:00 Agentic AI Governance
49:25 More about Filip

Get Amplified
Accountability Without Authority: The Hidden Skill Of High-Performing Sellers

Get Amplified

Play Episode Listen Later Feb 23, 2026 51:29 Transcription Available


Selling complex technology isn't about the lone genius with a quota. It's about orchestrating people, timing and trust across a messy, customer-led journey. We sit down with Cliff Keast - former sales leader at VMware, SAP and Business Objects, now a coach to revenue teams - to unpack how enterprise deals really get done when 20, 30 or even over 100 people touch a single opportunity.
Separating Average Performers from Reliable Closers
Cliff shares the identity shift that separates average performers from reliable closers: stop trying to be the hero and become the integrator of value. Your credibility in the C-suite comes from your ability to marshal your company's full expertise - pre-sales, legal, services, customer success, partners - exactly when it matters.
Focusing on Soft Skills That Make the Hard Things Work
We get practical on the soft skills that make the hard things work: establish psychological safety, show trust first, share credit publicly, handle issues privately, and keep communication ruthlessly clear. A simple discipline, write actions clearly and start every meeting by reviewing them, turns vague updates into peer accountability without the drama.
Facing the Reality of Cross-Functional Friction
We also confront the reality of cross-functional friction. As organisations scale, process and function disaggregate. Quoting systems stall over irrelevant fields, legal arrives too late, and rules designed for efficiency create bottlenecks.
Finding the Selling Line
Cliff draws the line between customer-centric rule pushing and selfish rule breaking, and explains how top sellers earn an "unfair share" of scarce resources by qualifying well, setting purpose, and making it easy for specialists to win.
Shaping the Path
For sales leaders, the mandate is to shape the path: clear the runway with adjacent functions, coach orchestration skills, and measure the operating rhythm that keeps cross-functional teams moving.
Who This Is For
If you're navigating enterprise sales, team performance or revenue leadership, you'll leave with a sharper playbook for influence without authority, smarter stakeholder timing, and a renewed respect for the human side of selling. Subscribe, share with a teammate who needs a better deal rhythm, and drop a review to tell us which function is hardest to align in your world.
We would love you to follow us on LinkedIn! https://www.linkedin.com/company/amplified-group/

Datacenter Technical Deep Dives
This is Fine: Tech Employment in the AI Era

Datacenter Technical Deep Dives

Play Episode Listen Later Feb 20, 2026


Join us as Chris gets brutally honest about tech employment in the AI era: what's dying, what's thriving, and how to position yourself to survive the chaos. Chris walks through the current state of tech layoffs hitting record numbers while companies post record profits, the disappearance of entry-level roles, and practical strategies for navigating this unprecedented moment. You'll learn about skill development in the AI era, why fundamentals still matter more than hype, how to build resilience through community, and what hiring managers are actually looking for right now. This episode doesn't sugarcoat the challenges, from hollowed-out expertise at major companies to early-career professionals wondering if their degree still matters, but it also provides actionable guidance on positioning yourself and why humor and human connection remain irreplaceable in an AI-driven world. Timestamps 0:00 Welcome & Setting the Tone 3:09 Chris Miller's Background & Journey 7:30 The Current State of Tech Employment 12:45 Layoffs vs Record Profits Discussion 18:22 Entry-Level Roles Disappearing 24:16 What Skills Actually Matter Now 30:41 Building Career Resilience 36:52 The Fundamentals Still Win 42:18 Community & Support Networks 47:35 Practical Job Search Strategies 52:14 What AI Can't Replace (Yet) 55:06 Things We're Thankful For 59:00 Wrap-up & Resources How to find Chris: https://www.linkedin.com/in/chris-t-miller/ https://www.chrismiller.com/ Links from the show: https://roadmap.sh

IT Visionaries
How the Smartest Companies Build Infrastructure That Wins

IT Visionaries

Play Episode Listen Later Feb 19, 2026 60:36


Most companies don't realize it yet, but the way they built their technology foundations is quietly becoming a liability. Cloud costs are rising. Platforms change underneath you. AI is reshaping infrastructure from hardware to data to governance. And the strategies that once felt "safe" are now the ones creating the most risk. In this episode of IT Visionaries, host Chris Brandt sits down with Mano Bhattacharya, CTO of Nutanix, to unpack what's really happening inside enterprise technology right now. This isn't a conversation about chasing the newest tools or betting on a single future. It's about why adaptability has become the most important design principle in modern tech. Mano explains why many organizations are rethinking long-held assumptions about virtualization, cloud, and containers, and why the smartest teams are building infrastructure that gives them options over the next three to five years. They explore how AI changes the entire stack, not just applications, why data has become the real bottleneck, and why moving fast without a coherent plan can be more dangerous than moving slowly.
Chapters:
00:00 - The VMware Exodus Wave is Coming
03:34 - VMware Broadcom Acquisition: What Changed and Why It Matters
05:56 - Three Migration Paths: Stay, Move to Cloud, or Modernize
09:59 - Why Containers on VMs Make Sense for Most Enterprises
15:40 - The Five Stages of VMware Migration Grief
21:20 - VMware Admin to Nutanix Admin: Closing the Skills Gap
24:14 - The Cloud-in-a-Box Philosophy: From Boxes to Software
32:30 - Opening Up the Platform: Pure Storage and Third-Party Integrations
40:54 - AI Infrastructure: The End-to-End Challenge
48:01 - Enterprise AI Strategy: Use Cases, Economics, and Governance
56:44 - What's Next: Building the Invisible Platform for AI
-- This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools.
Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.---IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Motivated to Lead Podcast - Mark Klingsheim
Episode 312: Stacey Porter (replay)

Motivated to Lead Podcast - Mark Klingsheim

Play Episode Listen Later Feb 19, 2026 16:20


This week, we revisit our interview with Stacey Porter. Stacey is the Chief People Officer at PROCEPT BioRobotics. Prior to that, she was the Chief People Officer at Outset Medical. Stacey has expertise in organizational design, innovative talent practices, and employee engagement. Prior to Outset, she was the Head of Global Talent Development for Intuitive Surgical, Inc., responsible for designing performance and succession practices while developing leaders and high-performing teams. Stacey has held leadership roles at VMware and Roche, and has built a career on creating nimble, progressive organizations. She has an MSW and completed doctoral studies in industrial/organizational psychology.

VMware Communities Roundtable
#756 - HashiCorp Packer Plugins for VMware with Ryan Johnson

VMware Communities Roundtable

Play Episode Listen Later Feb 17, 2026


Guest Ryan Johnson talks about automation and his open source plugin project that enables Packer in VCF environments. Learn what Ryan does in his open source project, including the Packer examples directory on GitHub.

Datacenter Technical Deep Dives
AI Governance for Virtualized Infrastructure: What vSphere Admins Need to Know

Datacenter Technical Deep Dives

Play Episode Listen Later Feb 16, 2026


Join us as Marian explains what AI governance means for vSphere administrators and why it matters now. Marian walks through practical governance frameworks that vSphere admins need to understand, from IEEE 7000 series standards to mapping governance controls onto infrastructure you already manage. You'll learn what your CISO will ask for, how to respond using your existing VMware stack, and why governance isn't about slowing innovation; it's about enabling it safely. This episode covers real-world scenarios from data lineage and model transparency to integrating governance tools with existing infrastructure, and addresses the gap between compliance requirements and practical implementation for virtualized environments. Timestamps 0:00 Welcome & Introduction 5:16 Marian's Background in Tech & Governance 6:37 What is Governance? 12:45 IEEE 7000 Series Standards Overview 18:22 AI Governance for vSphere Admins 24:16 Data Lineage & Model Transparency 30:41 Risk Assessment Frameworks 36:52 Practical Implementation Strategies 42:18 Integration with Existing Tools 47:35 Common Governance Challenges 51:12 Vendor Landscape Discussion 54:27 Missing Innovation in the Space 58:09 Wrap-up & Resources How to find Marian: https://www.linkedin.com/in/mariannewsome/ Links from the show: https://ethicaltechmatters.com/

Unexplored Territory
#112 - Introducing VMware vDefend featuring Chris McCain!

Unexplored Territory

Play Episode Listen Later Feb 16, 2026 56:35


A few episodes ago we had Yves Hertoghs on the show to discuss networking, so I decided it was also time to discuss security. As Chris McCain has been at the forefront of networking and security at VMware for over a decade, it only made sense to reach out and invite him to introduce vDefend!
Not only was Chris part of the Explore keynote, but he also delivered various excellent sessions on networking and security in the past couple of years, and I would highly encourage you to watch those as well!
Explore 2025 - Demystifying VMware vDefend Distributed Security Within VMware Cloud Foundation
Explore 2025 - Building Secure Private AI Deep Dive

La French Connection
Épisode 0x287 - Les gardiens de la garderie

La French Connection

Play Episode Listen Later Feb 16, 2026 66:16


Synopsis: In episode 0x287, Patrick Mathieu, Jacques, Francis and Richer welcome Patrick Roy. The common thread is surveillance that reassures on paper but fails at the critical moment. They break down the "check the box" reflex, the tests that never get run, and what it costs when systems that are supposed to protect become blind spots. Among other things, they discuss a case where cameras stopped working for hours without detection, with reports validated anyway, followed by a wave of sanctions that put accountability back at the center. The discussion also touches on everyday security (daycares) and privacy, as well as CISA and VMware ESXi news tied to ransomware campaigns.
Guest: Patrick Roy
Crew: Patrick Mathieu, Jacques Sauvé, Francis Coats, Richer Dinelle
Links and resources:
Patrick Roy: Privacy: https://nophonehome.com/ ; Paper banking
Francis: Security in daycares: IT'S BAD! ; Accountability: 15 prison officials, including the warden, sanctioned!!! ; Elton John accuses the Daily Mail of wiretapping his private lines
Jacques: CISA orders US federal agencies to replace unsupported edge devices ; CISA confirms exploitation of VMware ESXi flaw by ransomware attackers
Shameless plug: Join the Hackfest/La French Connection Discord #La-French-Connection ; Join Hackfest on Mastodon ; POLAR - Québec - October 29, 2026 ; Hackfest - Québec - October 29-31, 2026
Credits: Audio editing by Hackfest Communication. Music by Planewalker - Psychic Evolution - First Light. Virtual studio by Streamyard

Azure Italia Podcast
Azure Italia Podcast - Puntata 68 - M365, AI e SMB con Gabriele Scorpaniti

Azure Italia Podcast

Play Episode Listen Later Feb 16, 2026 65:24


Welcome back to Azure Italia Podcast, the Italian-language podcast about Microsoft Azure! To make sure you never miss a new episode, click the FOLLOW button in your player.

The Modern People Leader
282 - The Career Tradeoffs No One Talks About: Liz Bronson (VP People, Skimmer)

The Modern People Leader

Play Episode Listen Later Feb 13, 2026 57:22


Liz Bronson, VP of People at Skimmer, joined us on The Modern People Leader to talk about intentionally "flatlining" her career for a period of time to prioritize parenting.
Downloadable PDF with top takeaways: https://modernpeopleleader.kit.com/episode282

Datacenter Technical Deep Dives
FinOps - What It Is & Why It Matters

Datacenter Technical Deep Dives

Play Episode Listen Later Feb 12, 2026


Join us as Peter explores the core principles and practices of FinOps that help organizations optimize cloud spend without slowing innovation. Peter walks through what FinOps really is, why it matters beyond just cost cutting, and how engineers can collaborate effectively with finance teams to design cost-aware architectures. You'll learn about the three phases of FinOps (Inform, Optimize, Operate), how to get leadership buy-in for cloud initiatives, and practical strategies for managing cloud costs from the architecture phase through operations. This episode covers real-world scenarios from hybrid cloud cost tracking to building cost models before migrations, and explains how FinOps fits into your existing team structure regardless of organization size. Timestamps 0:00 Welcome & Introduction 6:10 Peter's Background & Journey to FinOps 10:45 What is FinOps? 16:32 The Three Phases: Inform, Optimize, Operate 22:18 Getting Leadership Buy-In 28:45 Cost-Aware Architecture Design 34:20 Hybrid Cloud & On-Prem Cost Tracking 40:15 FinOps Team Structure & Roles 46:30 Tools & Platforms Discussion 52:14 Accounting & Finance Collaboration 54:13 Starting FinOps Before Cloud Migration 57:17 FinOps for Small Teams & DBAs 1:00:13 Wrap-up & Resources How to find Peter: https://www.linkedin.com/in/petercrenshaw/ Links from the show: https://finops.org https://finopsweekly.com https://thefrugalarchitect.com

VMware Communities Roundtable
#755 - VMware{code} labs with Minisform and GPU

VMware Communities Roundtable

Play Episode Listen Later Feb 11, 2026


Just a brief podcast about the VMware{code} labs at Connect and the GPUs we ordered for the Minisforum systems. Running Private AI. Technical difficulties made this a short podcast.

LINUX Unplugged
653: The Kernel Always Wins

LINUX Unplugged

Play Episode Listen Later Feb 9, 2026 65:50 Transcription Available


The news this week highlights shifts in Linux from multiple angles. What's evolving, why it matters, and that moment where the future actually works.
Sponsored By:
Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free!
Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.
Support LINUX Unplugged

7:47 Conversations
Sandy Hogan: Graceful Disruption

7:47 Conversations

Play Episode Listen Later Feb 3, 2026 54:51


"You're going to be okay." These five simple words from a 98-year-old grandmother became the cornerstone of a leadership philosophy that has driven over $20 billion in revenue influence.
In this episode of Gratitude Through Hard Times, Chris Schembra sits down with Sandy Hogan, a powerhouse revenue leader who has held the helm at tech giants like Cisco, Rackspace, VMware, and LivePerson. But this isn't a conversation about go-to-market strategies or revenue multiples. This is a deep dive into the "Graceful Disruption" of the self.
Sandy shares her incredibly raw journey from a childhood as the daughter of Yugoslavian immigrants to a mid-career health crisis that forced her to "bet on herself." We explore how resilience isn't just a buzzword, but a protective layer formed in the fires of hard work and immigrant sacrifice.
10 Memorable Quotes:
"It's a protective layer, not a punitive layer that's unfolding."
"You can get through anything your heart and mind determines you truly can."
"Progress is the touchdown."
"Work ethic and your attitude. Everything falls into place, never perfectly, but those two are everything."
"I didn't control the circumstances around me, but I choose every day what I do about it."
"Trust is a little overused and undervalued. It has to be earned."
"Mindset leads, always—as a leader, as a human."
"I need you [Younger Sandy] as a partner to walk with me on the rest of my journey."
"What this world needs are... more emotionally regulated adults that aren't running around like little babies."
"I can be in pain physically or emotionally... but boy, I get back up very, very quickly."
10 Key Takeaways:
Reframing the Past: What we often label as "childhood wounds" can be reframed as a "protective layer" that builds the resilience needed for future leadership.
The "Elder" Gap: The modern world lacks "maternal/paternal" figures who provide emotional regulation. We need leaders who can say, "You're going to be okay," to calm the collective chaos.
Immigrant Work Ethic: Success isn't just about the title; it's about bringing your best self and knowing you aren't taking shortcuts.
Self-Gratitude: We often thank our mentors and families, but rarely think to thank our "younger selves" for the grit they showed during hard times.
Moving from Sacrifice to Self: There comes a moment where you must stop working solely to honor the sacrifices of others and start working in honor of yourself.
Mindset Over Reactivity: "Graceful Disruption" is the shift from letting change happen to you, to having an intentional impact on the change.
Trust via Friction: Meaningful trust isn't built on convenience; it is earned through "inconvenient" moments of friction and accountability.
The Power of Intent: In an era of instant gratification, the most powerful tool a leader has is the ability to pause and ask, "Why the heck am I doing this?"
Radical Agency: While we cannot control external turbulence (like health crises or market shifts), we have absolute power over our choice of response.
Momentum Through Movement: Perfection is the enemy of progress. The goal is "momentum through movement," not waiting for the perfect conditions.
About our Guest: Sandy Hogan, Founder & CEO, BozQ
Sandy Hogan is a passionate, seasoned transformation architect and award-winning executive, renowned for orchestrating strategic go-to-market transformations, delivering more than $20 billion in revenue influence. With more than two decades at the helm of industry powerhouses like Cisco, VMware, Rackspace, and LivePerson, plus agile engagements with high-growth startups, Sandy has earned a reputation for turning hype into measurable results, building Customer for Life revenue engines that deliver tangible, lasting outcomes.
Her track record is underscored by multiple industry recognitions, including CRN's "Top 100 Executives" and "Power 100 Women of the Channel," as well as accolades for channel leadership and ecosystem innovation. She is known for pioneering frameworks such as the Customer-for-Life GTM model, the Digital Outcomes Approach, and orchestrating multi-billion-dollar ecosystems—initiatives that have been adopted as benchmarks by both Fortune 100s and ambitious startups alike.
Sandy's philosophy centers on "Graceful Disruption," blending operational rigor with empathy to confront hard truths and drive transformation that sticks. Whether leading high-stakes 100-day turnarounds under private equity pressure or steering multi-year industry pivots that redefine entire market landscapes, she brings authentic honesty about the political, emotional, and organizational realities beneath large-scale change.
Teams and audiences praise Sandy for her combination of strategic clarity, pragmatic real-world perspective, and the ability to demystify the complexities of transformation through stories that inspire meaningful change. Her workshop sessions are ideal for conferences and forums seeking candid insights into navigating market disruption, cultivating high-impact partner ecosystems, and scaling sustainable Customer-for-Life growth systems that deliver lasting impact.
Sandy inspires leaders to tackle transformation with courage, clarity, and the operational discipline to move from vision to execution—and she does it with a grace that makes even the most uncomfortable change possible.

Datacenter Technical Deep Dives
Observability 2.0 - More Than Just Logs, Metrics & Traces

Datacenter Technical Deep Dives

Play Episode Listen Later Feb 1, 2026


Join us as Neel explores how observability is evolving beyond traditional logs, metrics, and traces into a predictive, AI-powered discipline. Neel walks through the evolution of observability, demonstrating how OpenTelemetry, machine learning, and LLMs are transforming how we monitor and maintain modern applications. You'll learn about dynamic sampling techniques that reduce costs while maintaining visibility, how ML algorithms detect anomalies before they cause outages, and practical implementations using tools like the OpenTelemetry Collector. This episode covers real-world scenarios, from reducing massive log volumes to predicting system failures before they impact customers.

Timestamps:
0:00 Welcome & Introduction
4:29 Neel's Background & Community Work
5:03 The Evolution of Observability
6:29 The 2 AM Production Incident Scenario
8:13 OpenTelemetry's Role in Modern Observability
12:45 Dynamic Sampling Techniques
18:22 ML & AI in Anomaly Detection
24:16 LLM Observability Explained
28:32 Cost Optimization Strategies
30:04 Context Windows & Token Management
32:00 Self-Healing Systems Discussion
34:15 Edge Cases: When Dynamic Sampling Doesn't Work
36:27 Wrap-up & Resources

How to find Neel:
https://www.linkedin.com/in/neelcshah/
https://bento.me/neelshah

Links from the show:
https://neelshah.dev/blogs/observability-2
https://opentelemetry.io/
https://middleware.io/blog/observability-2-0/
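The dynamic sampling the episode teases usually boils down to one idea: keep every interesting trace (errors, slow requests) and store only a small fraction of routine, healthy traffic. In real deployments this decision lives in a collector (for example, the OpenTelemetry Collector's tail-sampling policies), not in application code. As a rough illustration only, with invented field names and an assumed slow-request threshold, a head-based version of the rule might look like:

```python
import random


def should_keep(span, base_rate=0.05, rng=random.random):
    """Decide whether to keep a trace span.

    Keep every error and every slow request; keep only a `base_rate`
    fraction of routine traffic. Field names ("status", "duration_ms")
    and the 1000 ms threshold are illustrative assumptions, not a real
    OpenTelemetry schema.
    """
    if span.get("status") == "error":
        return True  # never drop failures
    if span.get("duration_ms", 0) > 1000:
        return True  # never drop slow requests
    return rng() < base_rate  # probabilistically sample the rest
```

At a 5% base rate this cuts healthy-traffic volume by roughly 95% while preserving full visibility into errors and latency outliers, which is where the cost savings the episode mentions come from.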

The Agile World with Greg Kihlstrom
#805: Omnissa CMO Renu Upadhyay on balancing AI innovation with org chart disruption

The Agile World with Greg Kihlstrom

Play Episode Listen Later Jan 30, 2026 28:31


What if the biggest risk to your marketing AI strategy isn't the technology itself, but the org chart it's fracturing? Agility requires more than just speed; it demands a framework of trust and collaboration. When it comes to AI, this means your ability to innovate is directly tied to your ability to partner effectively across the organization, especially with IT and security. Today, we're going to talk about a critical tension point in the modern enterprise: Marketing is moving at the speed of AI, adopting powerful, often low-code tools to drive results. But this speed creates new complexities and risks, disrupting traditional roles and processes. Success is no longer just about having the best tech stack; it's about forging a strategic partnership between the CMO and IT leaders to balance innovation with governance, and productivity with security. To help me discuss this topic, I'd like to welcome Renu Upadhyay, Chief Marketing Officer at Omnissa.

About Renu Upadhyay
Renu Upadhyay is senior vice president of Marketing at Omnissa, leading global marketing strategy, demand generation, product and solution marketing and brand to establish Omnissa as the leading digital work platform company. Renu is an experienced technology marketer with a deep understanding of products, industry, and customers spanning mobile, wireless networking and collaboration solutions across large and mid-size organizations. Prior to Omnissa, she served as vice president of Marketing for VMware's End-user Computing (EUC) business. In that role, she led marketing strategy and was responsible for customer messaging, demand, content marketing, sales and technical enablement, and product pricing strategy. She oversaw marketing programs and campaigns for EUC's comprehensive portfolio of solutions including employee engagement programs. Prior to VMware, Renu held senior product marketing roles at leading companies including Good Technology, Cisco Systems and AT&T Wireless.
Renu Upadhyay on LinkedIn: https://www.linkedin.com/in/renuupadhyay/

Resources
Omnissa: https://www.omnissa.com/
Take your personal data back with Incogni! Use code AGILE at the link below and get 60% off an annual plan: https://incogni.com/agile
The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow
Catch the future of e-commerce at eTail Palm Springs, Feb 23-26 in Palm Springs, CA. Go here for more details: https://etailwest.wbresearch.com/
Drive your customers to new horizons at the premier retail event of the year for Retail and Brand marketers. Learn more at CRMC 2026, June 1-3. https://www.thecrmc.com/
Enjoyed the show? Tell us more and give us a rating so others can find the show at: https://advertalize.com/r/faaed112fc9887f3
Connect with Greg on LinkedIn: https://www.linkedin.com/in/gregkihlstrom
Don't miss a thing: get the latest episodes, sign up for our newsletter and more: https://www.theagilebrand.show
Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com
The Agile Brand is produced by Missing Link—a Latina-owned strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company

Unsupervised Learning
Ep 81: Ex-OpenAI Researcher On Why He Left, His Honest AGI Timeline, & The Limits of Scaling RL

Unsupervised Learning

Play Episode Listen Later Jan 29, 2026 62:52


This episode features Jerry Tworek, a key architect behind OpenAI's breakthrough reasoning models (o1, o3) and Codex, discussing the current state and future of AI. Jerry explores the real limits and promise of scaling pre-training and reinforcement learning, arguing that while these paradigms deliver predictable improvements, they're fundamentally constrained by data availability and struggle with generalization beyond their training objectives. He reveals his updated belief that continual learning—the ability for models to update themselves based on failure and work through problems autonomously—is necessary for AGI, as current models hit walls and become "hopeless" when stuck. Jerry discusses the convergence of major labs toward similar approaches driven by economic forces, the tension between exploration and exploitation in research, and why he left OpenAI to pursue new research directions. He offers candid insights on the competitive dynamics between labs, the focus required to win in specific domains like coding, what makes great AI researchers, and his surprisingly near-term predictions for robotics (2-3 years) while warning about the societal implications of widespread work automation that we're not adequately preparing for.

(0:00) Intro
(1:26) Scaling Paradigms in AI
(3:36) Challenges in Reinforcement Learning
(11:48) AGI Timelines
(18:36) Converging Labs
(25:05) Jerry's Departure from OpenAI
(31:18) Pivotal Decisions in OpenAI's Journey
(35:06) Balancing Research and Product Development
(38:42) The Future of AI Coding
(41:33) Specialization vs. Generalization in AI
(48:47) Hiring and Building Research Teams
(55:21) Quickfire

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint

Packet Pushers - Full Podcast Feed
TCG067: Progressive Delivery: Shipping Software is Just the Beginning with Adam Zimman

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Jan 28, 2026 55:22


In this episode, we sit down with Adam Zimman, author and VC advisor, to explore the world of progressive delivery and why shipping software is only the beginning. Adam shares his fascinating journey through tech—from his early days as a fire juggler to leadership roles at EMC, VMware, GitHub, and LaunchDarkly – and how those...

Cables2Clouds
We Flattened The Org And All I Got Was 50 Direct Reports - Monthly News Update

Cables2Clouds

Play Episode Listen Later Jan 28, 2026 27:08 Transcription Available


Layoffs, chips, and a lobster-shaped lesson in security—this month's news run is a tour of how tech's biggest bets collide with real-world constraints.

We start with Amazon's plan to complete 30,000 job cuts under the banner of "flattening the org." That might clean up charts, but it also stretches managers thin and risks slowing the very decisions teams need to ship. The human cost is harder to quantify than a balance sheet win, and we unpack where productivity gains end and morale debt begins.

From there, we get into Microsoft's Maia 200 inference chip and why efficiency is the story to watch. Performance per dollar, power budgets, and inference at scale matter more than leaderboard sprints. If the claims hold up outside marketing decks, Maia points to a future where better throughput and lower costs beat raw hype. We also dive into Satya Nadella's push to retire "AI slop" and think of these systems as scaffolding for human potential—useful framing for knowledge work, but incomplete for roles where augmentation often previews automation. It's the tension shaping careers, budgets, and product choices across the stack.

We pivot to enterprise infrastructure with Nutanix's slower-than-expected VMware migrations. Even when customers want options, they face real friction: tooling parity, skill gaps, data gravity, and the risk of moving mission-critical workloads without bulletproof rollback. The lesson is pragmatic—platforms don't win on promises, they win on migration paths that reduce toil and make costs predictable.

And then there's Moltbot, the rebranded assistant formerly known as Clawdbot, which sparked a security backlash and a reminder that agents touching calendars, email, and payments need guardrails before cleverness. Limit scopes, sandbox actions, cap spend, log everything.
AI that touches real life must be boringly safe before it's impressive. If this breakdown helped you cut through the noise, follow the show, share it with a friend, and leave a quick review. What story hit you hardest—and why?

Purchase Chris and Tim's book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/
Check out the Monthly Cloud Networking News: https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/
Visit our website and subscribe: https://www.cables2clouds.com/
Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
Follow us on YouTube: https://www.youtube.com/@cables2clouds/
Follow us on TikTok: https://www.tiktok.com/@cables2clouds
Merch Store: https://store.cables2clouds.com/
Join the Discord Study group: https://artofneteng.com/iaatj
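The guardrail checklist the hosts rattle off (limit scopes, sandbox actions, cap spend, log everything) can be made concrete with a thin wrapper around whatever the agent is allowed to do. This is a purely illustrative sketch; the class, action names, and cost model are invented for this example and are not part of Moltbot or any real agent framework:

```python
class SpendCapExceeded(Exception):
    """Raised when an action would push the agent past its spending cap."""


class GuardedAgent:
    """Minimal guardrail wrapper: an action allowlist, a hard spend cap,
    and an audit log of everything that was actually performed."""

    def __init__(self, allowed_actions, spend_cap):
        self.allowed = set(allowed_actions)   # limit scopes
        self.cap = spend_cap                  # cap spend
        self.spent = 0.0
        self.audit_log = []                   # log everything

    def act(self, action, cost=0.0):
        # Scope check: refuse anything not explicitly granted.
        if action not in self.allowed:
            raise PermissionError(f"{action!r} is not on the allowlist")
        # Spend check: refuse anything that would exceed the cap.
        if self.spent + cost > self.cap:
            raise SpendCapExceeded(f"{action!r} would exceed the cap of {self.cap}")
        self.spent += cost
        self.audit_log.append((action, cost))
        return True
```

The point of the design is that the checks run before any side effect happens, so a misbehaving agent fails loudly at the boundary instead of quietly draining a budget or touching a system it was never granted.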

Gestalt IT Rundown
NVIDIA $2B AI Play, Blue Origin TeraWave, & More | Tech Field Day News Rundown: January 28, 2026

Gestalt IT Rundown

Play Episode Listen Later Jan 28, 2026 36:49


Tom Hollingsworth and guest host Jay Cuthrell bring the latest tech news straight to you in this week's Tech Field Day News Rundown! They kick things off with Obsidian Security's new SaaS updates, giving tighter control over third-party integrations and reducing breach risks. Fidelity's legal settlement with Broadcom over VMware software gets discussed, showing why vendor changes can ripple through enterprises.

Next up, NVIDIA invests $2B in CoreWeave, boosting AI infrastructure, while Microsoft's Quantum Development Kit updates make quantum coding more practical today. AI-powered coding tools like Anthropic's Claude Code are breaking barriers, and Anthropic's new AI constitution emphasizes ethics, safety, and transparency.

Tom and Jay also cover Meta's $6B fiber deal with Corning to fuel AI data centers and Blue Origin's TeraWave satellite network for enterprise connectivity. From SaaS security to AI, quantum computing, and next-gen networking, they break down the tech moves that are shaping the future of enterprise IT.

This and more on the Tech Field Day News Rundown with Tom Hollingsworth and guest host Jay Cuthrell.
Time Stamps:
0:00 - Cold Open
0:36 - Welcome to the Tech Field Day News Rundown
1:13 - Obsidian Security Expands Protection for SaaS Integrations
3:48 - Fidelity–Broadcom Settlement Exposes VMware Licensing Tensions
7:24 - NVIDIA Invests $2B in CoreWeave to Strengthen AI Infrastructure Grip
9:59 - Microsoft Open-Sources Quantum Tools for Real-World Use
14:29 - AI Tool Ports NVIDIA CUDA Code to AMD GPUs in Minutes
18:25 - Meta Signs $6B Fiber Deal With Corning to Power AI Data Centers
23:18 - Anthropic Updates Claude's Constitution to Focus on Understanding
27:06 - Blue Origin Launches TeraWave Satellite Network for Enterprise Connectivity
30:07 - The Weeks Ahead: Upcoming Tech Field Day Events
33:39 - Thanks for Watching the Tech Field Day News Rundown

Follow our hosts Tom Hollingsworth, Alastair Cooke, and Stephen Foskett. Follow Tech Field Day on LinkedIn, on X/Twitter, on Bluesky, and on Mastodon.

Nerd Journey Podcast
An Intentional Ending: Completing the Journey for This Body of Work

Nerd Journey Podcast

Play Episode Listen Later Jan 27, 2026 40:21


When does a body of work reach completion? One answer is to end it by choice. This week in episode 356 you'll hear the reasons behind our intentional ending of the Nerd Journey Podcast. We'll rewind the clock and focus on the show's trajectory and inflection points over time just like we've done for guests, share what we learned over the course of an 8-year journey from idea to consistently released show, and discuss our favorite moments. All of our content will remain online and accessible for listeners like you to go back and enjoy. Don't miss our final call to action in this episode. Just because this body of work is complete, there is still work for all of us to do for our careers. Original Recording Date: 12-20-2025 Topics – A Purposeful Ending, Where We Started, Interview Format and Getting to Launch, The Why Behind the Ending, The Lessons We Learned, Our Favorite Moments, What to Expect from Us Moving Forward, There's More to Be Done for All of Us. 1:01 – A Purposeful Ending We'll give you the bottom line up front: this is the last episode of the Nerd Journey podcast. We still love the mission, but the time has come for us to complete this body of work. When we have interviewed guests on the show, we've talked through their career timeline and pulled out the lessons learned. Today, we're going to do it for the show itself. 1:38 – Where We Started John was working as a sales engineer at VMware and was the co-host of the VMware Community Roundtable Podcast. He loved listening to podcasts, enjoyed the medium, and wanted to find a topic for a show. At the same time Nick was in the process of joining VMware, John and Nick were discussing all the things Nick needed to know to transition into sales engineering for a technology vendor. “In that conversation, I said ‘maybe we should start a podcast.'” – John White As Nick remembers it, this happened the weekend before Nick started at VMware in December 2017 (almost exactly 8 years before this episode's recording). 
Nick wasn't sure what he would talk about on a podcast. This suggestion from John started the ideation period, and our launch of the show was in July 2018. John talks about some of the initial ideas for the focus of the show. At that time, VMware podcasts and blogs were a great way to interact with the greater community. Doing something like this was also a way to become what John calls “nerd famous.” By the way, no one else can use that term now (trademarked by John). We initially considered talking about VMware news and our opinions on it since we both were going to be working at VMware. Both John and Nick came from small-to-medium business IT operations and eventually became sales engineers at a technology vendor. One of the things the show could be for is to talk about that journey and help others understand it was a possibility for them as well. John and Nick recorded about 10 episodes before launching to help hit the release cadence. Nick doesn't remember why they chose a weekly release cadence but remembers the show launched while he was on vacation. John and Nick even recorded a podcast episode while Nick was on that vacation, which started a habit of Nick doing podcast work while on vacation. Because they had recorded so many episodes in advance, they were not going to be timely or points of authority on VMware technology. Both Nick and John's roles were as technical generalists on the VMware side. “The only evergreen stuff that we had was the career stuff, so that became a little bit more the focus. I think that we were still thinking…we'll just record more maybe VMware specific stuff later on…as that happens. For right now, here it is.” – John White Early episodes were very prescriptive about resumes and job interview processes at larger tech companies, for example. Nick points out that John had to carry the conversation in these early episodes because he was just learning to think about career focused topics (sort of like being new to lifting weights). 
But, Nick picked up a lot just from the conversations on the show. 7:50 – Interview Format and Getting to Launch Nick couldn't remember what made them bring in guests originally, but Episode 13 with Tom Delicati was our very first guest interview on the show. John feels bringing in guests was always back of mind for him, and it was what he saw happen on the VMware Community Roundtable Podcast. “We're just 2 people and we have our experience. But we can't represent that as the full breadth of all of experience. That just doesn't make any sense. So, we need to start exploring what other people's career journeys have looked like and see if we can extract some knowledge and recommendations from that.” – John White Nick doesn't remember having a prescriptive plan for interviewing guests but feels like they settled into long-form interviews as a style pretty quickly. John says this was a structure they hit upon in the beginning (talking through someone's job history). The lessons learned from career inflection points like job transitions emerged from conversations with guests. John and Nick did not know this was going to happen when they began. Nick likes being able to highlight more of one specific guest's story than otherwise could have been done if each interview was only 30 minutes with a guest. But we fully acknowledge people like different lengths of podcasts. “We wanted to tell interesting stories that had an arc: a beginning and an end and a journey in between. And we were able to find those even chopping people's long 2-hour conversations up into 2 or even 3 episodes. I think that worked for us. I don't know if it worked for everybody.” – John White “We probably spent the same time interviewing people as we would have. We just didn't interview as many as if it had been 1 episode per person.” – Nick Korte We also didn't want to release a 2-hour interview as one episode. That's a lot of editing for just one episode release. 
People might not realize how much time goes into editing and production even after recording an interview. At the beginning, John had to give Nick advice on the kind of microphone to get. Nick started recording with a headset and then bought the same mic as John. They would each later invest in nicer microphones as the show progressed. “I knew nothing about editing and really not that much about how to make a podcast.” – Nick Korte, on beginning as a podcaster There were a lot of things we had to figure out just to make the podcast publicly available. John had researched some of the administrative things. He knew there was a WordPress plugin that could be used to turn MP3 files of released episodes into publicly available audio feed that would be the podcast. John says there were some mental blocks and hurdles he had to get through before launching the show, highlighting the fact that it took 6 months to go from idea to publishing. He was getting overwhelmed trying to figure out the back-end production and publishing process. John thinks it was Nick who kept asking what needed to happen for us to launch, and we went with WordPress and the plugin mentioned but never changed anything…because we had no time to go back. Nick and John learned that once you start a show and get it going, you will never run out of ideas. 13:58 – The Why Behind the Ending We never ran out of ideas. In fact, we still have ideas. So why are we stopping the podcast? We ran out of time. Nick has run out of time to work on editing and production. This has been a weekly show (up until the last couple months of our run), and it takes a large time commitment each week. For guest interview episodes, the intro and outro were not recorded at the same time the interview took place. These had to be recorded before the episode was released. The show notes are not AI-generated. 
Nick enjoyed writing them and adding in important links and references, feeling like it allowed him to remember the episodes better and internalize the lessons within them. Nick has a teenager now with many extracurricular activities and has had a workload increase at his job. "Probably for the last year I think I've been fooling myself at how much of a toll it's been to just get an episode out each week." – Nick Korte We even tried changing the release schedule to bi-weekly and have missed that cadence a couple of times. John ran out of time about 4 years ago and hasn't had much time since to handle podcast related tasks. John experienced a job change and new baby at that time and couldn't add anything else. He also moved at some point. John and Nick have been advancing in their own careers over time as well, which has added responsibility. John and his wife recently had a second child. He also left his job in June 2025 and has been doing a job search at the same time. Before Nick and John made this decision, Nick listened back to some previous episodes to get advice and perspective. Some of the advice that echoed the loudest came from Amy Lewis in Episode 302 – Ending with Intention: Once a Geek Whisperer with Amy Lewis (2/2). The idea of ending with intention stood out. "Rather than being spotty on our releases and not keeping our promise of how often we say we're going to get the show out, we wanted to end it with intention and say, 'ok, this is it.'" – Nick Korte "We haven't lost the love of this task. We both want this to continue. But realistically, we can't do it. And rather than sputter and peter out and never be heard from again, we just thought we'll follow the lessons that we've learned from our betters and do what they did. Let's be intentional about the end." – John White 18:02 – The Lessons We Learned John learned how much we can learn from the experience of others.
He had ideas and biases about how we should handle specific aspects of our career, but doing the podcast allowed him to pressure test these ideas against the experience of others. John appreciates the breadth of background and experience our collective guests have brought to the show. It made him realize there are so many different ways to do certain things. Nick learned a ton about the mechanics of podcast production. It was around Episode 113 when Nick became the editor because John needed to take a break. If you want to hear more about how this happened, check out this blog post. Nick got hooked into podcast communities and even attended a podcast conference in 2025, meeting many other people who run their own podcast. Nick learned how much salesmanship is involved in getting a guest. You have to sell someone on the idea of being on the show and what they can bring to your listeners. How easy can you make it for them to say yes? John and Nick asked guests for 1.5 – 2 hours for an interview. “If you make it easy for someone to say yes and you build the outline of questions you might ask and you tell them what your show is about and what you want to cover, they'll say yes. And they might give you more time than that…. I learned so much about different people that I never would have met otherwise. I am thankful for all the learnings of all the people who have been on the show. And I'm thankful for everything I've learned from you, John.” – Nick Korte John is grateful for the difference in skills he and Nick have and their ability to learn from one another just by co-hosting together. He likes to apply the idea of making it easy for others to say yes when he's asking something of someone at work, for example. Nick learned how to beat perfectionism weekly. Something can always be edited more or re-recorded. There was a weekly ship date. 
“The deadline was always there to keep me honest.” – Nick Korte Seth Godin's The Practice talks about keeping a promise to the people who follow you. Having a weekly release cadence meant we were promising to ship episodes weekly. “So, whether one person listened or a million people listened, we tried to keep that promise. And it was important to us to keep it, even if it was hard.” – Nick Korte “Having a million people listen to a specific episode or even hit the site in a specific week wasn't the goal. I think the goal was the breadth of work and making it accessible and having people be able to benefit from it.” – John White We also had to learn how to tell people about the show in a clear, succinct way. When John or Nick would join video calls for work, people would see their microphones and ask if they had a podcast. We also used generative AI in our workflow for production a little bit, even if it was not for show notes. Doing the show has dragged with it some reasons to tinker with generative AI. With John's help Nick learned how to build a Gemini prompt that would take the handwritten show notes and brainstorm titles, episode descriptions, and even create a prompt for a featured image based on the themes in the episode. John shares that we never wanted to use generative AI to take a transcript and generate an episode outline. We might lose touch with the content that way. John talks about the curse of being an audio editor. It's impossible to NOT hear issues in other audio. Nick can hear mouth noises on Zoom calls like you wouldn't believe. John says we can listen to someone else's podcast and may be able to tell who is and is not the editor based on whether they speak into the microphone or move away from it and keep talking. 25:15 – Our Favorite Moments John says it's hard to pick just one favorite moment. We got to meet some of our heroes in podcasting and other people who were “nerd famous” about their career stories. 
We had some great conversations with John Nicholson about how to evaluate a job offer and personal finance. Check out these for reference: Episode 224 – Tech Marketing, Interview Questions, and Executives as Wild Bears with John Nicholson (1/3) Episode 225 – Take Stock of Your Compensation with John Nicholson (2/3) Episode 226 – Negotiating Job Offers and Personal Finance Tips with John Nicholson (3/3) Having a podcast allowed us to have lengthy conversations with people who may not have otherwise had a reason to talk to us. John doesn't think asking someone out of the blue for 2 hours of time without having a podcast would have worked well. John says he has a strong recency bias, often walking away from an interview with a guest thinking it was the best one yet. Nick's favorite moments Nick remembers the first time we interviewed Mike Burkhart (in Episode 64 and Episode 65). He was having wifi issues and had to move everything onto his living room floor to record the episode. John and Mike were kind enough to stay online and still do the interview. John and Nick live in different parts of the United States and have only been able to record together in person a handful of times. These times were special and rare. Nick remembers the time they recorded at VMware Explore and forgot to hit record…twice in a row! If John had to succumb to recency bias, he would pick the recent interview with Milin Desai. This set of interviews stands alone as the only time we were cold pitched a guest by someone we did not know, and it was a perfect fit. We got over 2 hours with a CEO! Episode 349 – Expand Your Curiosity: Build, Own, and Maintain Relevance with Milin Desai (1/3) Episode 350 – Scope and Upside: The Importance of Contextual Communication with Milin Desai (2/3) Episode 351 – Opt In: A CEO's Take on Becoming AI Native with Milin Desai (3/3) People being both generous with their time and insightful has been a pattern with guests.
Nick and John got to have conversations with people both on the air and off the air. Nick appreciated having Dale McKay on the show (a mentor of his). You can find those episodes here: Episode 288 – Guardrails for Growth: A Mentor's Experience with Dale McKay (1/2) Episode 289 – Enhance Your Personal Brand: Feedback as a Catalyst for Change with Dale McKay (2/2) Some other favorites from Nick: He enjoyed all of the conversations about the principal title and principal engineers. See also the principal tag for more of these stories. Nick also really enjoyed hearing the stories about why people went into leadership roles and why they moved away from them. One specific episode Nick highlights as a favorite is Episode 127 – Countdown to Burnout with Tom Hollingsworth (3/3). John mentions we all battle burnout from time to time, and having such great advice to go back to is a gift. Nick says being the editor is also a gift because you're going to get to listen to the recorded discussion multiple times. Many times, the questions Nick and John asked in guest interviews were things they needed help with in their own careers. Hopefully the answers to those questions helped you as a listener too! John liked the fact that we were able to clip some of the times we messed up on the air and include those sound bites at the very end of an episode for people. To find these episodes, look for the Stinger metadata tag on an episode post. Nick mentions the Barry White intro stinger. It's actually at the end of Episode 17. There are also some good stingers with guest Chris Williams. 31:05 – What to Expect from Us Moving Forward What are the things that will, won't, and might happen in the future? The Nerd Journey site will remain online and accessible so our content will not disappear. You can still enjoy past episodes, browse the show notes, and leverage the Layoff Resources Page as well as our Career Uncertainty Action Guide. 
John and Nick can keep it online in a very cost-effective way just as they have to this point since the podcast was never monetized (not even Amazon affiliate links). John still has a dream of making sure we have transcripts of all the episodes and making these available in addition to the show notes. Maybe that could be extended to an AI chat bot that was trained on the transcripts. There would be some overhead involved in doing it, but John thinks it's definitely possible. You can still reach out to John or Nick on LinkedIn or send us an e-mail. All current communication channels will remain in place. We are available for questions, if you want to talk, etc. We will definitely NOT restart this show. We have declared it complete. Even if we were going to do a show like this again in the future, we would do it differently. We might choose a different name, a different description, or a different format even. But we don't have the time to do that right now anyway. We are NOT starting a new show (at least not right now). 34:59 – There's More to Be Done for All of Us Just because the show is ending, that doesn't mean your work is complete. None of our work is complete when it comes to career. “The things that we've talked about in curating your own career and being intentional about it always apply. We're not going to be around to remind you of that every week, so I hope that people have learned those lessons and internalized them. But if not, do something to make those things intentional. You need to prioritize your career on a consistent basis.” – John White Here are some specific actions that you should take: Document your work. Generate proof of work. Show your work (similar to generating proof of work). John says this is what we were unconsciously doing when we began the podcast, sharing how we got to where we are and our job transitions so others can follow a similar path if they choose. 
The purpose of showing your work is so that others can learn from your experience and so you can remind yourself of what you've accomplished at a later time. Nick highlights that Episode 66: Three-Month Check-In as a Google Cloud Customer Engineer with John White, Part 1 remains the most downloaded episode in our catalog. Aim for small, iterative improvements. Turn information into knowledge. Some of this is through writing. We spoke several times on the show about writing being thinking, and it was specifically referenced in an episode with Josh Duffney – Episode 156 – Better Notes, Better You with Josh Duffney (1/2). Manage your knowledge in some kind of written form that isn't in your head. Make it a knowledge management system of some kind. Practice Deep Work. It's the most important work you can do because the skill of sustained attention will be the thing for which people are paid. Be mindful of technology waves and trends, and consider placing some small bets. Many guests have invested time and effort to become proficient in a newer technology before or as it was catching on. Don't be afraid to tinker with those newer technologies. Consistently invest in your professional network. One way to do this could be via meetup groups or online communities. Reach out to us if you want to talk about careers, starting a podcast, or other fun topics. Nick can also tell you what it's like to go through the John White School of Mentoring. We want to say a special thank you to every guest who took the time to be on the podcast and every listener who took the time to listen to an episode. Contact the Hosts The hosts of Nerd Journey are John White and Nick Korte. 
E-mail: nerdjourneypodcast@gmail.com DM us on Twitter/X @NerdJourney Connect with John on LinkedIn or DM him on Twitter/X @vJourneyman Connect with Nick on LinkedIn or DM him on Twitter/X @NetworkNerd_ Leave a Comment on Your Favorite Episode on YouTube If you've been impacted by a layoff or need advice, check out our Layoff Resources Page. If uncertainty is getting to you, check out our Career Uncertainty Action Guide with a checklist of actions to take control during uncertain periods and AI prompts to help you think through topics like navigating a recent layoff, financial planning, or managing your mindset when you feel overwhelmed.

Explain IT
Explain IT: Highlights from our conversations in 2025

Explain IT

Play Episode Listen Later Jan 27, 2026 38:09


2025 was a defining year for technology. From the rapid acceleration of private cloud adoption and growing security risk challenges, to major shifts in digital employee experience driven by Windows 11, businesses have been pushed to rethink what's possible with IT. In this Best of Explain IT 2025 episode, we revisit our biggest themes and most impactful conversations from the year, bringing together highlights from four standout episodes that shaped how organisations approach modern technology. Including moments from: Navigating Windows 10 End of Support (October): We revisit our episode on Windows 10 end of support, featuring Softcat's Kelly Calver and Ella Chew. This conversation explores how organisations should prepare for the transition to Windows 11, with a strong focus on application readiness, compatibility, and minimising disruption for end users. Everything Data: From Strategy to Silos (August): From our data-focused episode, Andy Crossley (Oakland) and James Wingham (Softcat) discuss why a clear data strategy must come before tackling data silos. The clip highlights how businesses can unlock real value from data by aligning technology with long-term business goals. Streamlining Your Operations (September): In this episode, Oli Meadows (Softcat) and Adam Sperring (ServiceNow) explore how organisations can streamline operations by thinking about service management across the entire business, not just IT. A must-listen for anyone looking to improve efficiency, visibility and outcomes through enterprise service management. VMware Explore on Tour: is Private Cloud the solution your business needs? – Special Edition (November): One of two special editions in 2025, this episode covers VMware Explore On Tour, featuring Richard Fraser (Softcat) and Joe Bagley (Broadcom). 
It dives into how a unified private cloud platform is driving innovation and helping businesses modernise securely and at scale. About Explain IT: Hosted by Helen Gidney, Head of Architecture at Softcat, Explain IT is where we talk tech in simple, jargon-free language. Keep Listening: If you enjoyed this episode, please consider leaving a review — it helps others discover the podcast. You can also catch up on all the episodes mentioned here, covering topics like private cloud, VMware innovation, data strategy, and service operations. We release new episodes every month, so make sure you follow the podcast. Coming up: insights into AI-driven security, and how to maximise your technology investments ahead of the 2026 public sector year-end. Thanks for listening to the Explain IT podcast from Softcat. Produced by The Podcast Coach. Hosted on Acast. See acast.com/privacy for more information.

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SANS Stormcast Monday, January 26th, 2026: FortiOS SSO Vuln Updates; Outlook OOB Update; VMware vCenter Exploited

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Play Episode Listen Later Jan 26, 2026 4:21


Analysis of Single Sign-On Abuse on FortiOS Fortinet released an advisory. FortiOS devices are vulnerable if configured with any SAML integration, not just FortiCloud. https://www.fortinet.com/blog/psirt-blogs/analysis-of-sso-abuse-on-fortios Outlook OOB Update Microsoft released a non-security OOB Update for Outlook, fixing an issue introduced with this month's security patches. https://support.microsoft.com/en-us/topic/january-24-2026-kb5078127-os-builds-26200-7628-and-26100-7628-out-of-band-cf5777f6-bb4e-4adb-b9cd-2b64df577491 VMware vCenter Server Vulnerabilities Exploited (CVE-2024-37079, CVE-2024-37080, CVE-2024-37081) A VMware vCenter vulnerability patched last June is now actively exploited. https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24453

The Segment: A Zero Trust Leadership Podcast
The Monday Microsegment for the week of 1/26/2026

The Segment: A Zero Trust Leadership Podcast

Play Episode Listen Later Jan 26, 2026 6:56


The Monday Microsegment for the week of January 26th. All the cybersecurity news you need to stay ahead, from Illumio's The Segment podcast. A critical vulnerability is being actively exploited in core infrastructure, VMware warns. Hackers looking for extortion payoff tell Nike to… just do it. And a massive database leak exposes 149 million stolen credentials. And Christer Swartz joins us for January's Boos and Bravos. Head to The Zero Trust Hub: hub.illumio.com Download The 2025 Global Cloud Detection and Response Report: https://www.illumio.com/resource-center/global-cloud-detection-and-response-report-2025 

Cyber Briefing
January 26, 2026 - Cyber Briefing

Cyber Briefing

Play Episode Listen Later Jan 26, 2026 8:48


If you like what you hear, please subscribe, leave us a review and tell a friend!

Datacenter Technical Deep Dives
Teaching AI to Terraform (So We Don't Have To)

Datacenter Technical Deep Dives

Play Episode Listen Later Jan 24, 2026


Join us as Sam demonstrates how to teach AI to write Terraform configurations using Model Context Protocol (MCP) servers. Sam introduces the Terraform MCP server and walks through practical demos showing how AI can understand and safely interact with your infrastructure. You'll see live examples of AI planning, generating, and evolving Terraform configurations, from creating landing zones to setting up workspace variables automatically. Whether you're managing complex multi-cloud environments or just getting started with infrastructure as code, this episode demonstrates how MCP servers bridge the gap between AI capabilities and real-world Terraform workflows. Learn how to get started, which Claude models work best for different tasks, and best practices for integrating AI into your IaC pipelines. Timestamps 0:00 Welcome & Introduction 4:37 Sam McGeown's Background 6:02 Introduction to Terraform MCP Server 12:35 What is Model Context Protocol? 18:22 Setting Up the Terraform MCP Server 24:16 Demo: Claude Desktop Integration 30:41 Creating Infrastructure with AI Prompts 36:52 Reading & Analyzing Existing Terraform Code 42:18 Generating Landing Zone Configurations 47:35 Working with Terraform Workspaces 50:37 Creating Variables Automatically 52:14 Model Selection: Sonnet vs Opus 55:11 Live Demo: Workspace Variable Creation 58:33 Getting Started & Resources How to find Sam: https://www.linkedin.com/in/sammcgeown/ Links from the show: https://developer.hashicorp.com/terraform/mcp-server
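The client-side setup Sam walks through can be sketched with a small config fragment. This assumes the Docker-based invocation documented in HashiCorp's terraform-mcp-server repository at the time of writing; the `"terraform"` server name is an arbitrary label, so check the linked docs for the current command:

```json
{
  "mcpServers": {
    "terraform": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "hashicorp/terraform-mcp-server"]
    }
  }
}
```

With a fragment like this in Claude Desktop's MCP configuration, the model can look up provider and module schemas through the server's tools rather than guessing at resource syntax.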

Software Defined Talk
Episode 556: This Conversation is Hardened

Software Defined Talk

Play Episode Listen Later Jan 23, 2026 67:35


This week, we discuss the end of Cloud 1.0, AI agents fixing old apps, and Chainguard vs. Docker images. Plus, the mystery of Dutch broth is finally solved. Watch the YouTube Live Recording of Episode 556 Runner-up Titles His overall deal Been there and done that been ignoring that shift key for years Cloud is just fine I'll be back in Bartertown The “F” Word Hardened-washing We'll never do this, but we should check back in in 3 months Libraries are the best Elves don't belong in space Rundown Are we at the end of cloud or cloud 1.0 It's the beginning of Cloud 2.0 Spec-driven development system for Claude Code Anthropic and App Modernization A meta-prompting, context engineering and spec-driven development What comes next, if Claude Code is as good as people say. Microsoft Spending on Anthropic Approaches $500 Million a Year Claude Code Won't Fix Your Life Coté and Tony contemplate day two AI-generated apps, and an excerpt. Why We've Tried to Replace Developers Every Decade Since 1969 Well, that escalated quickly: Zero CVEs, lots of vendors Relevant to your Interests Beijing tells Chinese firms to stop using US and Israeli cybersecurity software China blacklists VMware, Palo Alto Networks software over national security fears Kroger taps Google Gemini, announces more key AI moves Texas judge throws out second lawsuit over CrowdStrike outage Apple will pay billions for Gemini after OpenAI declined Dell wants £10m+ from VMware if Tesco case goes against it Tailscale: The Best Free App Most Mac Power Users Aren't Using How WhatsApp Took Over the Global Conversation Our approach to advertising and expanding access to ChatGPT OpenAI's ARR reached over $20 billion in 2025, CFO says Simon Willison's take on Our approach to advertising and ChatGPT The AI lab revolving door spins ever faster | TechCrunch How Markdown took over the world An Interview with United CEO Scott Kirby About Tech Transformation Conferences cfgmgmtcamp 2026, February 2nd to 4th, Ghent, BE. 
Coté speaking - anyone interested in being an SDT guest? DevOpsDayLA at SCALE23x, March 6th, Pasadena, CA Use code: DEVOP for 50% off. Devnexus 2026, March 4th to 6th, Atlanta, GA. Use this 30% off discount code from your pals at Tanzu: DN26VMWARE30. KubeCon EU, March 23rd to 26th, 2026 - Coté will be there on a media pass. VMware User Groups (VMUGs): Amsterdam (March 17-19, 2026) Minneapolis (April 7-9, 2026) Toronto (May 12-14, 2026) Dallas (June 9-11, 2026) Orlando (October 20-22, 2026) SDT News & Community Join our Slack community Email the show: questions@softwaredefinedtalk.com Free stickers: Email your address to stickers@softwaredefinedtalk.com Follow us on social media: Twitter, Threads, Mastodon, LinkedIn, BlueSky Watch us on: Twitch, YouTube, Instagram, TikTok Book offer: Use code SDT for $20 off "Digital WTF" by Coté Sponsor the show Recommendations Brandon: The Library will loan you a 5G hotspot Matt: Deep Rock Galactic: Survivor (rogue-like Vampire Survivors-type game) Coté: Streamyard shorts generation. Salesforce was inspired by dolphins.

Datacenter Technical Deep Dives
Evolution of Tool Use and MCP in Generative AI

Datacenter Technical Deep Dives

Play Episode Listen Later Jan 23, 2026


Join us as Gautam breaks down the evolution of tool use in generative AI and dives deep into MCP. Gautam walks through the progression from simple prompt engineering to function calling, structured outputs, and now MCP—explaining why MCP matters and how it's changing the way AI systems interact with external tools and data. You'll learn about the differences between MCP and traditional API integrations, how to build your first MCP server, best practices for implementation, and where the ecosystem is heading. Whether you're building AI-powered applications, integrating AI into your infrastructure workflows, or just trying to keep up with the latest developments, this episode provides the practical knowledge you need. Gautam also shares real-world examples and discusses the competitive landscape between various AI workflow approaches. Subscribe to vBrownBag for weekly tech education covering AI, cloud, DevOps, and more! ⸻ Timestamps 0:00 Introduction & Welcome 7:28 Gautam's Background & Journey to AI Product Management 12:45 The Evolution of Tool Use in AI 18:32 What is Model Context Protocol (MCP)? 24:16 MCP vs Traditional API Integrations 30:41 Building Your First MCP Server 36:52 MCP Server Discovery & Architecture 42:18 Real-World Use Cases & Examples 47:35 Best Practices & Implementation Tips 51:12 The Competitive Landscape: Skills, Extensions, & More 52:14 Q&A: AI Agents & Infrastructure Predictions 55:09 Closing & Giveaway How to find Gautam: https://gautambaghel.com/ https://www.linkedin.com/in/gautambaghel/ Links from the show: https://www.hashicorp.com/en/blog/build-secure-ai-driven-workflows-with-new-terraform-and-vault-mcp-servers Presentation from HashiConf: https://youtu.be/eamE18_WrW0?si=9AJ9HUBOy7-HlQOK Kiro Powers: https://www.hashicorp.com/en/blog/hashicorp-is-a-kiro-powers-launch-partner Slides: https://docs.google.com/presentation/d/11dZZUO2w7ObjwYtf1At4WnL-ZPW1QyaWnNjzSQKQEe0/edit?usp=sharing
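The function-calling stage Gautam describes, where the model emits a structured call and the host program executes the matching function, can be sketched in a few lines of Python. The tool name and JSON shape here are illustrative, not any particular vendor's schema:

```python
import json

# Illustrative tool: a real system might call a weather API here.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stubbed response for the sketch

# Registry mapping tool names to callables the model is allowed to invoke.
TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse a model-emitted tool call and run the matching function."""
    call = json.loads(model_output)  # e.g. {"name": ..., "arguments": {...}}
    return TOOLS[call["name"]](**call["arguments"])

print(dispatch('{"name": "get_weather", "arguments": {"city": "Austin"}}'))
# Sunny in Austin
```

MCP's contribution, as discussed in the episode, is standardizing this handshake (tool discovery, argument schemas, transport) so each client/server pair does not have to reinvent it.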

Tank Talks
Building a Solo GP Fund with Timothy Chen of Essence VC

Tank Talks

Play Episode Listen Later Jan 22, 2026 64:42


In this episode of Tank Talks, Matt Cohen sits down with Timothy Chen, the sole General Partner at Essence VC. Tim shares his remarkable journey from being a “nerdy, geeky kid” who hacked open-source projects to becoming one of the most respected early-stage infrastructure investors, backing breakout companies like Tabular (acquired by Databricks for $2.2 billion). A former engineer at Microsoft and VMware, co-founder of Hyperpilot (acquired by Cloudera), and now a solo GP who quietly raised over $41 million for his latest fund, Tim offers a unique, no-BS perspective on spotting technical founders, navigating the idea maze, and rethinking sales and traction in the world of AI and infrastructure. We dive deep into his unconventional path into VC, rejected by traditional Sand Hill Road firms, only to build a powerhouse reputation through sheer technical credibility and founder empathy. Tim reveals the patterns behind disruptive infra companies, why most VCs can't help with product-market fit, and how he leverages his engineering background to win competitive deals. Whether you're a founder building the next foundational layer or an investor trying to understand the infra and AI boom, this conversation is packed with hard-won insights. The Open Source Resume (00:03:44) * How contributing to Apache projects (Drill, Cloud Foundry) built his career when a CS degree couldn't. * The moment he realized open source was a path to industry influence, not just a hobby. * Why the open source model is more “vertical than horizontal”, allowing deep contribution without corporate red tape. From Engineer to Founder: The Hyperpilot Journey (00:13:24) * Leaving Docker to start Hyperpilot and raising seed funding from NEA and Bessemer. * The harsh reality of founder responsibility: “It's not about the effort hard, it's about all the other things that has to go right.” * Learning from being “way too early to market” and the acquisition by Cloudera. The Unlikely Path into Venture Capital 
(00:26:07) * Rejected by top-tier VC firms for a job, then prompted to start his own fund via AngelList. * Starting with a $1M “Tim Chen Angel Fund” focused solely on infrastructure. * How Bain Capital's small anchor investment gave him the initial credibility. Building a Brand Through Focus & Reputation (00:30:42) * Why focusing exclusively on infrastructure was his “best blessing”, creating a standout identity in a sparse field. * The reputation flywheel: Founders praising his help led to introductions from top-tier GPs and LPs. * StepStone reaching out for a commitment before he even had fund documents ready. The Essence VC Investment Philosophy (00:44:34) * Pattern Recognition: What he learned from witnessing the early days of Confluent, Databricks, and Docker. * Seeking Disruptors, Not Incrementalists: Backing founders who have a “non-common belief” that leads to a 10x better product (e.g., Modal Labs, Cursor, Warp). * Rethinking Sales & Traction: Why revenue-first playbooks don't apply in early-stage infra; comfort comes from technical co-building and roadmap planning. * The “Superpower”: Using his engineering background to pressure-test technical assumptions and timelines with founders. The Future of Infra & AI (00:52:09) * Infrastructure as an “enabler” for new application paradigms (real-time video, multimodal apps). * The coming democratization of building complex systems (the “next Netflix” built by smaller teams). * The shift from generalist backend engineers to specialists, enabled by new stacks and AI. Solo GP Life & Staying Relevant (00:54:55) * Why being a solo GP doesn't mean being a lone wolf; 20-30% of his time is spent syncing with other investors to learn. * The importance of continuous learning and adaptation in a fast-moving tech landscape. * His toolkit: Using portfolio company Clerky (a CRM) to manage workflow. About Timothy Chen: Founder and Sole General Partner, Essence VC. Timothy Chen is the Sole General Partner at Essence VC, a fund focused on early-stage 
infrastructure, AI, and open-source innovation. A three-time founder with an exit, his journey from Microsoft engineer to sought-after investor is a masterclass in building credibility through technical depth and founder-centric support. He has backed companies like Tabular, Iteratively, and Warp, and his insights are shaped by hundreds of conversations at the bleeding edge of infrastructure. Connect with Timothy Chen on LinkedIn: linkedin.com/in/timchen Visit the Essence VC Website: https://www.essencevc.fund/ Connect with Matt Cohen on LinkedIn: https://ca.linkedin.com/in/matt-cohen1 Visit the Ripple Ventures website: https://www.rippleventures.com/ This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit tanktalks.substack.com

The Pure Report
Top of Mind: Navigating Massive Enterprise Infrastructure Shifts Now

The Pure Report

Play Episode Listen Later Jan 21, 2026 49:27


This week we welcome Paul Joyce, who leads one of Pure Storage's largest Field Solution Architect (FSA) teams. Our discussion begins by exploring the philosophy behind building a team of super technical specialists and key capabilities for this specialized role. Paul highlights that in addition to deep technical expertise in areas like databases and virtualization, he seeks candidates who demonstrate passion, a willingness to be vulnerable to build their personal brand, and, most critically, empathy for customers. This empathy should be rooted in a foundation of hands-on experience, where the architects have "lived the pain" of IT operations and can truly understand the challenges faced by customers, allowing them to focus on delivering time and value back to the business. We then move to what's top of mind for Paul, focusing on two major industry dilemmas. First is the ongoing virtualization dilemma, the continuing need for customers to re-evaluate their virtualization strategy following changes to VMware licensing. Paul emphasizes that the key challenge is not just the technology conversion (like moving to another hypervisor) but the business risks involved—including the cost of retraining entire staff on a new enterprise-ready platform and the complications of creating complex, high-risk migration pipelines between different environments. The second dilemma, around Big Iron, covers the massive shift in mission-critical storage. Paul contrasts the legacy multi-controller, spinning-disk systems of the past, built primarily for high availability, with Pure Storage's all-flash, two-controller architecture, which he attests delivers equal or greater availability with a simpler architecture and superior performance. This simplified approach enables massive consolidation for complex database environments. Finally, Paul shares his hot takes on database trends. 
He points to the growing importance of vector embedding, noting that major enterprise databases like Oracle and Microsoft SQL Server 2025 are building native vector capabilities into their platforms to bring AI/data lake functionality directly to the data. He also discusses the implications of Oracle's 23ai release, which has focused on cloud and engineered systems, prompting on-premises customers to consult with their FSA teams on their future database strategy. The episode concludes with a classic IT mess up story from Paul's early career as a jack-of-all-trades network administrator, recounting a failed, all-weekend core switch replacement in a freezing data center. To learn more, visit https://www.purestorage.com/databases Check out the new Pure Storage digital customer community to join the conversation with peers and Pure experts: https://purecommunity.purestorage.com/ 00:00 Intro and Welcome 01:44 Building a Specialist Team 05:53 Prior Experience in IT 09:45 Working with Rockets 15:15 Virtualization Dilemma 24:09 Enterprise Storage for Databases 37:01 Hot Takes Segment
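The native vector capability Paul highlights comes down to similarity search over embeddings. A toy Python sketch of the underlying math follows; the two-dimensional vectors and document IDs are made up for illustration, since real engines like Oracle 23ai and SQL Server 2025 index thousands of dimensions natively:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Hypothetical table of stored embeddings, one per document.
rows = {
    "doc-1": (1.0, 0.0),
    "doc-2": (0.6, 0.8),
    "doc-3": (0.0, 1.0),
}

def nearest(query):
    """Return the ID of the row whose embedding best matches the query."""
    return max(rows, key=lambda rid: cosine(query, rows[rid]))

print(nearest((0.7, 0.7)))  # doc-2
```

A database that ships this natively can run the same ranking next to the data, inside a SQL query, instead of shipping rows out to a separate vector store.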

AI + a16z
How Should AI Be Regulated? Use vs. Development

AI + a16z

Play Episode Listen Later Jan 20, 2026 46:45


To Regulate AI Effectively, Focus on How It's Used. A conversation with Martin Casado on learning from past computing platform shifts, understanding marginal risk in AI, and why open source matters for US competitiveness. One of the core pillars of our roadmap for federal AI legislation makes clear AI should not excuse wrongdoing. When people or companies use AI to break the law, existing criminal, civil rights, consumer protection, and antitrust frameworks should still apply. Enforcement agencies should have the resources they need to enforce the law. If existing bodies of law fall short in accounting for certain AI use cases, any new laws should be evidence-based, clearly defining marginal risks and the optimal approach to target harms directly. In this conversation, we go deeper on what that principle means in practice with Martin Casado, general partner at a16z where he leads the firm's infrastructure practice and invests in advanced AI systems and foundational compute. Martin has lived through multiple platform shifts–as a researcher where he worked on large-scale simulations for the Department of Defense before working with the intelligence community on networking and cybersecurity, a pioneer of software-defined networking at Stanford, and the cofounder and CTO of Nicira, which was acquired by VMware–giving him a rare perspective on how breakthrough technologies are governed as they develop and scale. Martin joins Jai Ramaswamy and Matt Perault to discuss how decades of technology policy can inform addressing harmful uses of AI, defining marginal risk in AI, the importance of open source for long-term competitiveness, and more.  
Follow Jai Ramaswamy on X: https://twitter.com/jai_ramaswamy Follow Matt Perault on X: https://twitter.com/MattPerault Follow Martin Casado on X: https://twitter.com/martin_casado Read the a16z AI Policy Brief here: https://a16zpolicy.substack.com/ Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts. Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The CTO Advisor
Right-Sizing AI, Rationalizing Virtualization, and Becoming an Infrastructure Arbiter — with Melissa Palmer

The CTO Advisor

Play Episode Listen Later Jan 14, 2026


Episode Summary Enterprise leaders are facing two equally hard problems at the same time: deciding what to do next with virtualization in the post-VMware era, and figuring out how—or whether—to deploy AI infrastructure responsibly. In this episode of The CTO Advisor Podcast, Keith Townsend sits down with long-time industry peer and infrastructure expert [...]

Alexa's Input (AI)
Building with Purpose: Joe Beda on Systems and Self

Alexa's Input (AI)

Play Episode Listen Later Jan 12, 2026 79:40


In this episode of Alexa's Input (AI), Alexa sits down with Joe Beda, co-creator of Kubernetes and one of the key figures behind modern cloud computing. Joe talks through his journey from big tech to founding a startup and back again, and what it actually takes to build systems that scale technically, organizationally, and emotionally. Joe shares the origin story of Kubernetes, what people often misunderstand about open source, and why infrastructure success sometimes comes with unexpected personal costs. They also discuss tradeoffs between shipping fast and getting it right, how incentives shape engineering culture, and why identity standards like SPIFFE/SPIRE are just now getting more attention. Joe gives a wide-ranging, honest look at infrastructure, innovation, and the people behind it. Links Watch: https://www.youtube.com/@alexa_griffith Read: https://alexasinput.substack.com/ Listen: https://creators.spotify.com/pod/profile/alexagriffith/ More: https://linktr.ee/alexagriffith Website: https://alexagriffith.com/ LinkedIn: https://www.linkedin.com/in/alexa-griffith/ Find out more about the guest at: LinkedIn: https://www.linkedin.com/in/jbeda/ SPIFFE: https://spiffe.io/ Kubernetes: https://kubernetes.io/ Joe Beda Interview – Increment Magazine: https://increment.com/containers/joe-beda-interview/ Joe Beda on The Podlets Podcast: https://thepodlets.io/episodes/006-joe-beda/ GitLab Blog: Kubernetes & Community (Joe Beda): https://about.gitlab.com/blog/kubernetes-chat-with-joe-beda/ Keywords Kubernetes, Joe Beda, cloud-native, open source, technology, Google, VMware, Heptio, AI, security standards Chapters 00:00 Introduction to Joe Beda and Kubernetes 02:50 Understanding Kubernetes: The Foundation of Modern Computing 04:36 The Birth of Kubernetes: From Idea to Reality 07:38 Internal Debates: Navigating Challenges at Google 10:14 Key Innovations: What Sets Kubernetes Apart 13:30 The Role of Community: Collaborating with Red Hat 15:26 Design Challenges: Networking and Configuration Pain Points 19:28 Joe's Journey: Transitioning from Microsoft to Google 23:02 Navigating Corporate Politics: Influence and Success 25:24 Career Growth: Balancing Company Success and Personal Development 30:44 Navigating Industry Trends and Career Durability 35:46 The Balance of Work and Life 40:40 Understanding Burnout and Personal Ownership 47:49 The Journey of Founding Heptio 54:31 The Acquisition by VMware and Its Implications 01:00:15 Authenticity in Sales and Motivation 01:01:23 Career Transitions: From Engineer to Founder 01:02:11 The Evolution of Perspective in Tech Careers 01:04:26 Navigating the Challenges of Startup Life 01:06:12 Post-Acquisition Dynamics at VMware 01:09:52 Finding Purpose in Corporate Structures 01:11:32 Philanthropy and Personal Values 01:13:02 Open Source Contributions: SPIFFE and SPIRE 01:16:51 The State of Security Standards in AI 01:22:12 Advising Principles and Green Flags in Startups

SQL Server Radio
Episode 183 - Azure SQL Managed Instance Next Gen

SQL Server Radio

Play Episode Listen Later Jan 12, 2026 36:03


In the first episode of the new year, Guy and Eitan discuss the new Next-gen General Purpose tier of Azure SQL Managed Instance, which is now GA. They also discuss a few interesting customer stories, how they were resolved, and how they're not actually SQL Server's fault. Relevant links: Generally Available: Azure SQL Managed Instance Next-gen General Purpose | Microsoft Community Hub The Bitmap Index query plan operator SQL Server on VMware best practices guide How to Save Money on Your SQL Server Hardware

Good Morning, HR
HR News: Learning from the SHRM Verdict with Margarita Ramos

Good Morning, HR

Play Episode Listen Later Dec 25, 2025 48:22


Something New!  For HR teams who discuss this podcast in their team meetings, we've created a discussion starter PDF to help guide your conversation. Download it here https://goodmorninghr.com/EP232 In episode 232, Coffey talks with Margarita Ramos about the importance and future of the employee relations function following the $11.5 million SHRM discrimination verdict. They discuss the SHRM jury verdict and its implications for HR credibility; the role of employee relations at the intersection of compliance and employee experience; proactive versus reactive approaches to workplace conflict; multiple complaint channels and manager escalation obligations; why dismissing concerns as "not illegal" undermines trust; investigation failures highlighted in the SHRM case; investigator neutrality, training, and experience requirements; when and why to use outside investigators or counsel; leadership accountability and the role of the CHRO in employee relations; the three-legged stool of employee relations, HR business partners, and employment counsel; building ER infrastructure with case management systems and data analytics; handling high-performing but high-risk leaders; transparency in employee relations processes; reducing gossip through consistent and fair investigations; and the future of employee relations including responsible use of AI in investigations. Good Morning, HR is brought to you by Imperative—Bulletproof Background Checks. For more information about our commitment to quality and excellent customer service, visit us at https://imperativeinfo.com.  If you are an HRCI or SHRM-certified professional, this episode of Good Morning, HR has been pre-approved for half a recertification credit. To obtain the recertification information for this episode, visit https://goodmorninghr.com.  
About our Guest: Margarita Ramos is a highly respected Global Employee Relations executive and employment attorney with more than two decades of experience across technology, SaaS, and financial services. She is trusted by CHROs, HR Business Partners, and C-suite leaders to build scalable ER infrastructures, stabilize organizations through change, and elevate the employee experience through disciplined governance and operational excellence. With a foundation rooted in JD-trained employment law—including roles as In-House Employment Counsel at Merrill Lynch and Principal Corporate Counsel at Microsoft—Margarita developed deep legal expertise in compliance, risk mitigation, and workplace investigations.  She later translated this expertise into senior ER and HR Compliance leadership roles at VMware, Splunk, RBC, and Bank of America, where she supported complex global workforces navigating rapid growth, cultural transformation, and organizational change. Throughout her career, Margarita has been brought in to create structure where ambiguity exists. She has built and led global ER Centers of Excellence, developed investigations and performance-management frameworks, and implemented modern case-management systems such as Workday, HR Acuity, and AI-enabled governance tools. Her approach blends empathy with operational rigor, ensuring ER functions are both employee-centric and aligned with business strategy. A skilled investigator and ER strategist, Margarita advises senior leaders on workplace investigations, conflict resolution, performance management, DEI&B, and global employment compliance. She is known for her ability to translate data, case trends, and cultural signals into actionable insights—leveraging ER metrics, KPIs, and reporting to influence leadership decisions, drive fairness, and strengthen organizational culture. 
Her data-driven approach enables leaders to make well-informed, consistent decisions that reinforce trust and accountability across the enterprise.  Margarita has also led M&A HR integration efforts at VMware and Splunk, overseeing cultural alignment, workforce assessments, and change-management strategies during periods of significant transformation. Her leadership in these environments reflects her commitment to creating workplaces where clarity, belonging, and operational excellence coexist. Beyond her corporate work, Margarita is deeply committed to developing future talent. She has mentored first-generation college students and contributed to organizations such as Girls Who Code, Year Up, and Hobart & William Smith Colleges. At Microsoft, she provided pro bono support for Kids in Need of Defense (KIND). Outside of work, she enjoys ballroom dancing and cooking. Margarita is passionate about shaping modern, strategic, tech-forward ER functions that support organizational values, reduce risk, build leadership capability, and create an environment where employees can do their best work with trust, fairness, and accountability. 
Margarita Ramos can be reached at https://www.linkedin.com/in/margarita-ramos/ About Mike Coffey: Mike Coffey is an entrepreneur, licensed private investigator, business strategist, HR consultant, and registered yoga teacher. In 1999, he founded Imperative, a background investigations and due diligence firm helping risk-averse clients make well-informed decisions about the people they involve in their business. Imperative delivers in-depth employment background investigations, know-your-customer and anti-money laundering compliance, and due diligence investigations to more than 300 risk-averse corporate clients across the US, and, through its PFC Caregiver & Household Screening brand, many more private estates, family offices, and personal service agencies. Imperative has been named a Best Places to Work, the Texas Association of Business' small business of the year, and is accredited by the Professional Background Screening Association. Mike shares his insight from 25+ years of HR-entrepreneurship on the Good Morning, HR podcast, where each week he talks to business leaders about bringing people together to create value for customers, shareholders, and community. Mike has been recognized as an Entrepreneur of Excellence by FW, Inc. and has twice been recognized as the North Texas HR Professional of the Year. Mike serves as a board member of a number of organizations, including the Texas State Council, where he serves Texas' 31 SHRM chapters as State Director-Elect; Workforce Solutions for Tarrant County; the Texas Association of Business; and the Fort Worth Chamber of Commerce, where he is chair of the Talent Committee. Mike is a certified Senior Professional in Human Resources (SPHR) through the HR Certification Institute and a SHRM Senior Certified Professional (SHRM-SCP). He is also a Yoga Alliance registered yoga teacher (RYT-200) and teach...

The Brave Marketer
Navigating AI's Hidden Risks: Lessons from the Nova Bridge Chatbot Failure

The Brave Marketer

Play Episode Listen Later Dec 23, 2025 40:56


Bhavesh Mehta and Mahesh Kumar—senior technology leaders at Uber and co-authors of the practical guide AI-First Leader—discuss the lessons learned from Nova Bridge's collapse, and share best practices for mitigating hidden risks that can derail ambitious AI projects. They also share specific ways that small businesses and Fortune 500 companies can embrace AI from a place of empowerment rather than fear.

Key Takeaways:
- Ways to align C-suite leaders and engineering teams around a unified AI roadmap
- The most underestimated human factor that determines whether an AI transformation succeeds
- How overlooked vulnerabilities, insufficient oversight, and the rush to deploy led to the unexpected fallout of the Nova Bridge chatbot
- The unforeseen dangers lurking within AI systems

Guest Bio:
Bhavesh Mehta is a technology leader and co-author of AI-First Leader, a practical guide for executives navigating enterprise AI adoption. With over 20 years of experience across Cisco, Uber, and VMware, Bhavesh has architected large-scale conversational and generative AI systems that support millions of users daily. His work bridges deep technical design and executive strategy, helping organizations deploy AI responsibly and at scale.

Mahesh Kumar is a seasoned product executive and co-author of AI-First Leader, a practical guide for executives navigating enterprise AI adoption. With over 20 years of experience across Uber, Veritas, and VMware, Mahesh has led the development of multi-billion-dollar product portfolios and enterprise AI strategies. Known for bridging deep technology with strategic vision, he helps organizations move from experimentation to large-scale AI transformation. His work focuses on responsible innovation, combining business storytelling with technical fluency to make AI both accessible and actionable for leaders.
---------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte  

Datacenter Technical Deep Dives
Learn Infrastructure-as-Code [the FUN way] through Minecraft

Datacenter Technical Deep Dives

Play Episode Listen Later Dec 23, 2025


Join us for the final episode of 2025 as Mark Tinderholt (Principal Software Engineer at Microsoft Azure, HashiCorp Ambassador, and author of "Mastering Terraform") teaches us Infrastructure as Code through Minecraft! If you've ever wanted to learn Terraform in a fun, visual way, this is the episode for you. Mark demonstrates how to use the Minecraft Terraform provider to build infrastructure in-game, making complex IaC concepts tangible and engaging. You'll see live demos of provisioning Minecraft resources, managing dependencies, handling state, and even importing existing structures into Terraform. This unique approach transforms abstract infrastructure concepts into something you can literally see and interact with—perfect for visual learners, educators, or anyone looking to make IaC training more engaging. Whether you're teaching your team Terraform or just want a creative way to understand infrastructure patterns, this episode shows you how gaming and cloud engineering can come together. Subscribe to vBrownBag for weekly tech education!

Timestamps:
0:00 Welcome & Technical Difficulties
1:27 Last Episode of 2025!
4:41 Planning for 2026
5:37 Mark Tinderholt Joins
6:14 Introduction to Minecraft + Terraform
8:52 Why Use Minecraft for Teaching IaC?
12:35 Getting Started: Requirements & Setup
16:47 The Minecraft Terraform Provider
20:18 First Demo: Provisioning Basic Blocks
28:32 Managing State in Minecraft
35:41 Working with Dependencies
42:16 Advanced Patterns: For_each & Count
48:55 Importing Existing Structures
55:23 Real-World Applications & Teaching
1:00:17 Q&A: Provider Limitations & Features
1:05:24 Minecraft Level Building Tools Discussion
1:09:05 Final Giveaway & Wrap-Up

How to find Mark: https://www.linkedin.com/in/marktinderholt/
Links from the show:
Mark's repos: https://github.com/markti?tab=repositories
Mark's book: https://amzn.to/3N1rnuJ
Mark's Ignite talk: https://ignite.microsoft.com/en-US/sessions/7fa5095f-9f65-46e3-9f82-9af6603ea903

Unsupervised Learning
AI Vibe Check: The Actual Bottleneck In Research, SSI's Mystique, & Spicy 2026 Predictions

Unsupervised Learning

Play Episode Listen Later Dec 18, 2025 78:04


Ari Morcos and Rob Toews return for their spiciest conversation yet. Fresh from NeurIPS, they debate whether models are truly plateauing or if we're just myopically focused on LLMs while breakthroughs happen in other modalities. They reveal why infinite capital at labs may actually constrain innovation, explain the narrow "Goldilocks zone" where RL actually works, and argue why U.S. chip restrictions may have backfired catastrophically—accelerating China's path to self-sufficiency by a decade. The conversation covers OpenAI's code red moment and structural vulnerabilities, the mystique surrounding SSI and Ilya's "two words," and why the real bottleneck in AI research is compute, not ideas. The episode closes with bold 2026 predictions: Rob forecasts Sam Altman won't be OpenAI's CEO by year-end, while Ari gives 50%+ odds a Chinese open-source model will be the world's best at least once next year.

(0:00) Intro
(1:51) Reflections on NeurIPS Conference
(5:14) Are AI Models Plateauing?
(11:12) Reinforcement Learning and Enterprise Adoption
(16:16) Future Research Vectors in AI
(28:40) The Role of Neo Labs
(39:35) The Myth of the Great Man Theory in Science
(41:47) OpenAI's Code Red and Market Position
(47:19) Disney and OpenAI's Strategic Partnership
(51:28) Meta's Super Intelligence Team Challenges
(54:33) US-China AI Chip Dynamics
(1:00:54) Amazon's Nova Forge and Enterprise AI
(1:03:38) End of Year Reflections and Predictions

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint

The PowerShell Podcast
Mentorship, Mindset, and Microsoft Ignite with Shannon Eldridge-Kuehn

The PowerShell Podcast

Play Episode Listen Later Dec 15, 2025 78:33


In this episode of The PowerShell Podcast, Shannon Eldridge-Kuehn returns to discuss her journey since becoming a Microsoft MVP, her experiences at Microsoft Ignite, and her evolving views on technology, communication, and personal growth. Shannon shares stories from Ignite, including Mark Russinovich's fascinating demo on optical computing, and offers insight into how AI is reshaping IT work, both in efficiency and responsibility. The conversation expands beyond tech, touching on mentorship, emotional intelligence, and the importance of grace, empathy, and connection in professional and personal life. Shannon and host Andrew Pla explore how better communication, mental health awareness, and authentic collaboration can transform careers and communities alike.

Key Takeaways:
- AI as a partner, not a replacement – Shannon views AI as a powerful companion that amplifies human creativity, not a threat to jobs or individuality.
- Communication is the real superpower – Technical skills open doors, but empathy, curiosity, and active listening sustain success and build trust.
- Find your community and give grace – Whether mentoring or learning, everyone benefits from patience, understanding, and a supportive network.

Guest Bio: Shannon Eldridge-Kuehn is a Principal Solutions Architect at AHEAD and a Microsoft MVP with a unique blend of technical depth and strong communication roots. A University of Nebraska–Lincoln graduate in Communication Studies with a minor in English, she began her journey into tech through DJing and audio troubleshooting, which sparked a passion for problem-solving. Over time, she progressed from help desk roles into advanced infrastructure and cloud engineering, with experience spanning Windows systems, VMware, Exchange, Office 365, and Azure. Her career includes roles at Microsoft and 10th Magnitude, where her love for cloud truly flourished. 
Shannon leverages her background in public speaking and writing to bridge the gap between business needs and technical solutions.   Resource Links: Shannon's Blog – https://shankuehn.io Shannon on X (Twitter) – https://twitter.com/shankuehn Connect with Andrew - https://andrewpla.tech/links Microsoft Ignite – https://ignite.microsoft.com PDQ Discord – https://discord.gg/PDQ PowerShell Wednesdays – https://www.youtube.com/watch?v=lBLDfE1aiuE&list=PL1mL90yFExsix-L0havb8SbZXoYRPol0B The PowerShell Podcast on YouTube: https://youtu.be/okVO33wX5xY

Unsupervised Learning
Ep 80: CEO of Surge AI Edwin Chen on Why Frontier Labs Are Diverging, RL Environments & Developing Model Taste

Unsupervised Learning

Play Episode Listen Later Dec 15, 2025 48:01


Edwin Chen is the founder and CEO of Surge AI, the data infrastructure company behind nearly every major frontier model. Surge works with OpenAI, Anthropic, Meta, and Google, providing the high-quality data and evaluation infrastructure that powers their models. Edwin reveals why optimizing for popular benchmarks like LMArena is "basically optimizing for clickbait," how one frontier lab's models regressed for 6-12 months without anyone knowing, and why the industry's approach to measurement is fundamentally broken. Jacob and Edwin discuss what actually makes elite AI evaluators, why "there's never going to be a one size fits all solution" for AI models, and how frontier labs are taking surprisingly divergent paths to AGI.

(0:00) Intro
(0:56) The Pitfalls of Optimizing for LMArena
(4:34) Issues with Data Quality and Measurement
(9:44) The Importance of Human Evaluations
(13:40) The Rise of RL Environments
(17:21) Challenges and Lessons in Model Training
(19:59) Silicon Valley's Pivot Culture
(23:06) Technology-Driven Approach
(24:18) Quality Beyond Credentials
(27:51) Impact of Scale Acquisition
(28:35) Hiring for Research Culture
(30:48) Divergence in AI Training Paradigms
(34:16) Future of AI Models
(39:32) Multimodal AI and Quality
(43:44) Quickfire

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint

Packet Pushers - Full Podcast Feed
NB555: AI and APIs Drive HPE's Dual-Platform WLAN Strategy; Dell, HPE Dangle VMware Alternatives

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Dec 8, 2025 47:06


Take a Network Break! Our Red Alert calls out a dangerous vulnerability in the popular open-source React library. On the news front, HPE decides on a “both and” strategy for its two wireless portfolios and rolls out an option to let customers pick and choose among cross-platform features in Mist and Aruba Networking Central...