An airhacks.fm conversation with Daniel Terhorst-North (@tastapod.com) about: first computer experience with the ZX81 and its 1K memory, the 1K chess game on ZX81, the ZX Spectrum with 16K and later 48K memory, the Amstrad 128K, typing in game listings from computer magazines, Dan's brother John hacking ZX Spectrum games using a hardware freeze device and memory peeking/poking, cracking game encryption and copy protection on 8-bit tape cassette games, the arms race between game publishers and hackers, cracking the Star Wars game security before its release, ZX Spectrum fan sites and retro gaming communities, classic games including 3D Monster Maze and Manic Miner and Jet Set Willy, sprite graphics innovation on the Z80 chip, first internship at Domark publishing Empire Strikes Back on ZX Spectrum and Commodore 64, second internship at IBM Hursley Park working on CICS in PL/1 and Rexx, the contrast between casual game studio culture and IBM corporate culture in the 1980s, IBM's role as a founding partner of J2EE Enterprise Java, JMS wrapping MQ Series, the reliability of MQ Series compared to later messaging technologies, finding and reporting a concurrency bug in MQ Series with JUnit tests and IBM's rapid response with an emergency patch, IBM alphaWorks portal and experimental technologies, IBM Aglets mobile Java agent framework compared to modern A2A agent protocols, Jini and JavaSpaces from Sun Microsystems with leasing and self-healing, JXTA peer-to-peer technology, IBM Jikes compiler performance compared to javac, IBM's own JVM, JVM running on Palm Pilot around 1999, VisualAge for Java as a port of VisualAge for Smalltalk with its image-based architecture and no file system exposure, Java's coupling of class and package names to files and directories as a design weakness, the difficulty of refactoring without IDE support, Eclipse as the first IDE with proper refactoring, NetBeans IDE performance compared to Visual Studio Code, third internship writing X-ray machine control software in Turbo Pascal doing digital image processing, the pace of technological innovation slowing from kaikaku (abrupt change) to kaizen (continuous improvement), Douglas Adams quote about technology perception by age, DEC Alpha 64-bit Unix performance, commodity Linux hardware replacing exotic RISC machines, Apple M series chips rediscovering RISC architecture and system-on-chip design, innovation fatigue and signal-to-noise ratio in modern tech, LLMs and the trillion-dollar bet on the wrong technology, electric cars as an example of ongoing innovation, Tailwind CSS shutting down due to AI-generated code replacing paid expertise, Stack Overflow in trouble due to AI summarization, open source innovation continuing with tools like Astral's uv replacing the Python toolchain, cross-community collaboration between Rust and Python and Ruby ecosystems, first graduate job at Crosfield (Fuji/DuPont joint venture) doing electronic pre-press and color transformation through 4D CMYK color cubes, writing a TIFF decoder from scratch in C, Raster Image Processor technology and its connection to Adobe, transition from C++ to Java feeling quirky, joining ThoughtWorks in 2002 for enterprise Java work. Daniel Terhorst-North on twitter: @tastapod.com
Bryan Cantrill is the co-founder and CTO of Oxide Computer Company. We discuss why the biggest cloud providers don't use off-the-shelf hardware, how scaling data centers at Samsung's scale exposed problems with hard drive firmware, how the values of NodeJS are in conflict with robust systems, choosing Rust, and the benefits of Oxide Computer's rack-scale approach. This is an extended version of an interview posted on Software Engineering Radio. Related links Oxide Computer Oxide and Friends Illumos Platform as a Reflection of Values RFD 26 bhyve CockroachDB Heterogeneous Computing with Raja Koduri Transcript You can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: Today I am talking to Bryan Cantrill. He's the co-founder and CTO of Oxide Computer Company, and he was previously the CTO of Joyent, and he also co-authored the DTrace tracing framework while he was at Sun Microsystems. [00:00:14] Jeremy: Bryan, welcome to Software Engineering Radio. [00:00:17] Bryan: Awesome. Thanks for having me. It's great to be here. [00:00:20] Jeremy: You're the CTO of a company that makes computers. But I think before we get into that: for a lot of people who build software now, the actual computer is abstracted away — they're using AWS or some kind of cloud service. So I thought we could start by talking about data centers. [00:00:41] Jeremy: 'Cause you were previously working at Joyent, and I believe you got bought by Samsung, and you've previously talked about how you had to figure out how to run things at Samsung's scale. So how was your experience with that? What were the challenges there? Samsung scale and migrating off the cloud [00:01:01] Bryan: Yeah, so Joyent was a cloud computing pioneer. We competed with the likes of AWS and then later GCP and Azure. And we were operating at scale, right?
We had a bunch of machines, a bunch of DCs, but ultimately, you know, we were a VC-backed company, and a small company by the standards of — certainly by Samsung's standards. [00:01:25] Bryan: And so when Samsung bought the company — I mean, the reason that Samsung bought Joyent is that Samsung's cloud bill was, let's just say, extremely large. They were spending an enormous amount of money every year on the public cloud. And they realized that in order to secure their fate economically, they had to be running on their own infrastructure. [00:01:51] Bryan: It did not make sense, and there was not really a product that Samsung could go buy that would give them that on-prem cloud. In that regard, the state of the market was really no different. And so they went looking for a company, and bought Joyent. And it was when we were on the inside of Samsung [00:02:11] Bryan: that we learned about Samsung scale. And Samsung loves to talk about Samsung scale. And I gotta tell you, it is more than just chest thumping. Samsung scale really is — I mean, just the sheer number of devices, the number of customers, just the absolute size. They really wanted to take us out to levels of scale that we certainly had not seen. [00:02:31] Bryan: The reason for buying Joyent was to be able to stand up on their own infrastructure, so we did go buy a bunch of hardware. Problems with server hardware at scale [00:02:40] Bryan: And I remember just thinking, God, I hope Dell is somehow magically better. I hope the problems that we have seen in the small — you know, I just remember hoping, and hope is, of course, a terrible strategy, and it was a terrible strategy here too.
And the problems that we saw in the large were — when you scale out, the problems that you see kind of once or twice, you now see all the time, and they become absolutely debilitating. [00:03:12] Bryan: And we saw a whole series of really debilitating problems. In many ways, comically debilitating, in terms of showing just how bad the state of the art is. And it should be said, we had great software and great software expertise, and we were controlling our own system software. [00:03:35] Bryan: But even controlling your own system software, your own host OS, your own control plane, which is what we had at Joyent — ultimately, you're pretty limited. You can obviously solve the problems that are in your own software, but the problems that are beneath you — the problems that are in the hardware platform, the problems that are in the componentry beneath you, the problems that are in the firmware — IO latency due to hard drive firmware [00:04:00] Bryan: those problems become unresolvable, and they are deeply, deeply frustrating. And we just saw a bunch of them. Again, they were comical in retrospect, and I'll give you a couple of concrete examples just to give you an idea of what you're looking at. One of our data centers had really pathological IO latency. [00:04:23] Bryan: We had a very database-heavy workload. And this was right at the period where you were still deploying on rotating media, on hard drives. An all-flash buy did not make economic sense when we did this in 2016. It'd be interesting to know when was the last time that actual hard drives made sense, [00:04:50] Bryan: 'cause I feel this was close to it.
So we had a bunch of pathological IO problems, but we had one data center in which the outliers were actually quite a bit worse, and there was so much going on in that system that it took us a long time to figure out why. Because when you're seeing worse IO, you naturally want to understand what the workload is doing. [00:05:14] Bryan: You're trying to take a first-principles approach. What's the workload doing? So this was a very intensive database workload to support the object storage system that we had built, called Manta. The metadata tier was stored in Postgres, and that was just getting absolutely slaughtered. [00:05:34] Bryan: Ultimately it was very IO-bound, with these kind of pathological IO latencies. And we were trying to peel away the layers to figure out what was going on. And I finally had this thing: okay, at the device layer, at the disk layer, we are seeing pathological outliers in this data center that we're not seeing anywhere else. [00:06:00] Bryan: And that does not make any sense. And the thought occurred to me: do we have a different rev of firmware on our HGST drives? HGST — now part of WD, Western Digital — were the drives that we had everywhere. So maybe I had a firmware bug. [00:06:20] Bryan: This would not be the first time in my life at all that I would have a drive firmware issue. And I went to go pull the firmware rev, and I'm like, Toshiba makes hard drives? I had no idea that Toshiba even made hard drives, let alone that they were in our data center. [00:06:38] Bryan: I'm like, what is this?
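The debugging step Bryan describes — comparing latency outliers across drive populations and then pulling the firmware rev — can be sketched as a small analysis. This is an illustrative reconstruction, not Joyent's actual tooling; the sample data, field names, and firmware revs are invented.

```python
from collections import defaultdict

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * p // 100))  # ceil(n * p / 100)
    return ordered[rank - 1]

def tail_latency_by_firmware(io_records):
    """Group IO latencies by (vendor, firmware rev) and report p50/p99.

    io_records: iterable of (vendor, firmware_rev, latency_ms) tuples,
    e.g. pulled from per-device IO instrumentation.
    """
    groups = defaultdict(list)
    for vendor, fw, latency_ms in io_records:
        groups[(vendor, fw)].append(latency_ms)
    return {
        key: {"p50": percentile(vals, 50), "p99": percentile(vals, 99)}
        for key, vals in groups.items()
    }

# Hypothetical samples: one drive population stalls for ~2,700 ms.
records = (
    [("HGST", "A2B0", ms) for ms in [4, 5, 5, 6, 7, 5, 6, 4, 8, 9]]
    + [("TOSHIBA", "X1Y0", ms) for ms in [5, 6, 7, 5, 2700, 6, 5, 2650, 7, 6]]
)

report = tail_latency_by_firmware(records)
# One population's p99 is three orders of magnitude worse than its p50 --
# the signal that sends you looking at that drive model's firmware.
```

The point of grouping by firmware rev rather than averaging across the fleet is exactly the one in the story: the median can look fine while one vendor/firmware combination produces the pathological tail.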
And as it turns out — and this is part of the challenge when you don't have an integrated system, which, not to pick on them, but Dell doesn't — Dell would routinely just make substitutes. It's kind of like you're going to, I don't know, Instacart or whatever, and they're out of the thing that you want. [00:07:03] Bryan: So someone makes a substitute, and sometimes that's okay, but it's really not okay in a data center. You really want to develop and validate an end-to-end integrated system. And in this case — Toshiba does make hard drives, or they did, but they basically were not competitive, and they were not competitive in part for the reasons that we were discovering. [00:07:29] Bryan: They had really serious firmware issues. These were drives that would just simply stop acknowledging any reads for on the order of 2,700 milliseconds. A long time — 2.7 seconds. And that was a drive firmware issue, but it highlighted a much deeper issue, which was the simple lack of control that we had over our own destiny. [00:07:53] Bryan: And it's an example among many where Dell is making a decision that lowers the cost of what they are providing you marginally, but is then giving you a system that they shouldn't have any confidence in, because it's not one that they've actually designed, and they leave it to the customer, the end user, to make these discoveries. [00:08:18] Bryan: And these things happen up and down the stack. And not just to pick on Dell, because it's true for HPE, it's true for Supermicro, it's true for your switch vendors.
It's true for storage vendors, where the one that is left actually integrating these things and trying to make the whole thing work is the end user sitting in their data center. AWS / Google are not buying off the shelf hardware but you can't use it [00:08:42] Bryan: There's not a product that they can buy that gives them elastic infrastructure — a cloud — in their own DC. The product that you buy is the public cloud. When you go to the public cloud, you don't worry about this stuff, because it's AWS's issue or it's GCP's issue, and they are the ones that get this to ground. [00:09:02] Bryan: And this was kind of the eye-opening moment — not a surprise: they are not Dell customers. They're not HPE customers. They're not Supermicro customers. They have designed their own machines, to varying degrees, depending on which one you're looking at. They've taken the clean sheet of paper. And the frustration that we had at Joyent, and then at Samsung, wondering what was next, is that what they built was not available for purchase in the data center. [00:09:35] Bryan: You could only rent it in the public cloud. And our big belief is that public cloud computing is a really important revolution in infrastructure. Doesn't feel like a deep thought — but cloud computing is a really important revolution, and it shouldn't only be available to rent. You should be able to actually buy it. [00:09:53] Bryan: And there are a bunch of reasons for that. The one we saw at Samsung is economics, which I think is still the dominant reason: it just does not make sense to rent all of your compute in perpetuity. But there are other reasons too. There's security, there's risk management, there's latency. [00:10:07] Bryan: There are a bunch of reasons why one might want to own one's own infrastructure.
But that was very much the genesis for Oxide: coming out of this very painful experience. And — a long answer to your question about what it was like to be at Samsung scale — [00:10:27] Bryan: those are the kinds of things we saw. In our other data centers, we didn't have Toshiba drives; we only had the HGST drives. It's only when you get to this larger scale that you begin to see some of these pathologies. But these pathologies then are really debilitating for those who are trying to develop a service on top of them. [00:10:45] Bryan: So it was very educational in that regard. And we're very grateful for the experience at Samsung in terms of opening our eyes to the challenge of running at that kind of scale. [00:10:57] Jeremy: Yeah, because I think as software engineers, a lot of times we treat the hardware as a given, where — [00:11:08] Bryan: Yeah. [00:11:08] Bryan: Yeah. There's software in hard drives [00:11:09] Jeremy: It sounds like in this case, maybe the issue is not so much that Dell or HP as a company doesn't own every single piece that they're providing you, but rather the fact that they're swapping pieces in and out without advertising them, and then when it becomes a problem, they're not necessarily willing to deal with the consequences of that. [00:11:34] Bryan: They just don't know. I mean, I think they just genuinely don't know. It's not like they're making a deliberate decision to ship garbage. I think it's exactly what you said about not thinking about the hardware. It's like, what's a hard drive? [00:11:47] Bryan: I mean, it's a hard drive. It's got the same specs as this other hard drive, and, you know, it's a little bit cheaper, so why not?
It's like, well, there are some reasons why not. And one of the reasons is that even a hard drive, whether it's rotating media or flash — that's not just hardware. [00:12:05] Bryan: There's software in there. And that software is not the same. I mean, there are components where — if you're looking at a resistor or a capacitor or something like this, and you've got two parts that are within the same tolerance — [00:12:19] Bryan: sure, maybe, although even the EEs, I think, would be objecting a little bit. But the more complicated you get — and certainly once you get to the kind of hardware that we think of, like a microprocessor, a network interface card, a hard drive, an NVMe drive — [00:12:38] Bryan: those things are super complicated, and there's a whole bunch of software inside of them: the firmware. And that's the stuff that — you say that software engineers don't think about it; no one can really think about it, because it's proprietary. It's kind of welded shut, and you've got this abstraction into it. [00:12:55] Bryan: But the way that thing operates is very core to how the thing in aggregate will behave. And I think the fundamental difference between Oxide's approach and the approach that you get at a Dell, HP, Supermicro, wherever, is really thinking holistically in terms of hardware and software together, in a system that ultimately delivers cloud computing to a user. [00:13:22] Bryan: And there's a lot of software at many, many different layers. And it's very important to think about that software and that hardware holistically, as a single system.
[00:13:34] Jeremy: And during that time at Joyent, when you experienced some of these issues, was it more a case of you not having enough servers experiencing this? So if it would happen, you might say, well, this one's not working, so maybe we'll just replace the hardware. What was the thought process when you were working at that smaller scale, and how did these issues affect you? UEFI / Baseboard Management Controller [00:13:58] Bryan: Yeah, at the smaller scale, you see fewer of them, right? What you might see is like, that's weird — we kind of saw this in one machine, versus seeing it in a hundred or a thousand or 10,000. So you just see them less frequently, and as a result, they are less debilitating. [00:14:16] Bryan: I think that when you go to that larger scale, those things that were unusual now become routine, and they become debilitating. So it really is, in many regards, a function of scale. And then it was also, you know, a little bit dispiriting that the substrate we were building on really had not improved. [00:14:39] Bryan: If you buy a computer server — an x86 server — there is a very low layer of firmware, the BIOS, the basic input/output system, the UEFI BIOS. And this is an abstraction layer that has existed since the eighties and hasn't really meaningfully improved. The transition to UEFI happened, ironically, with Itanium, two decades ago. [00:15:08] Bryan: But beyond that, this lowest layer of platform enablement software is really only impeding the operability of the system.
You look at the baseboard management controller, which is the kind of computer within the computer: there is an element in the machine that needs to handle environmentals, that needs to operate the fans, and so on. [00:15:31] Bryan: And that traditionally has been the baseboard management controller, and that architecturally just hasn't improved in the last two decades. And, you know, it's a proprietary piece of silicon, generally from a company that no one's ever heard of called ASPEED — which is written in all caps, so I guess it needs to be screamed. [00:15:50] Bryan: ASPEED has a proprietary part that, infamously, has a root password encoded effectively in silicon. Which is just — for anyone who goes deep into these things, it's like, oh my God, are you kidding me? When we first started Oxide, the wifi password was a fraction of the ASPEED root password for the BMC. [00:16:16] Bryan: It's kind of a little BMC humor. But those things — it was just dispiriting that the state of the art was still basically personal computers running in the data center. And that's part of what was the motivation for doing something new. [00:16:32] Jeremy: And for the people using these systems, whether it's the baseboard management controller or it's the BIOS or UEFI component, what are the actual problems that people are seeing? Security vulnerabilities and poor practices in the BMC [00:16:51] Bryan: Oh man. You are going to have some fraction of your listeners, maybe a big fraction, who are like, yeah, what are the problems? That's a good question. And then you're gonna have the people that actually deal with these things, whose heads already hit the desk, being like, what are the problems? [00:17:06] Bryan: Like, what are the non-problems? Like, what works?
Actually, that's a shorter answer. I mean, there are so many problems, and a lot of it is just architectural — the problems spread to the horizon, so you can kind of start wherever you want. [00:17:24] Bryan: But as a really concrete example: okay, so the BMC — the computer within the computer — needs to be on its own network. So you now have not one network; you've got two networks. And that network, by the way, is the network that you're gonna log into to reset the machine when it's otherwise unresponsive. [00:17:44] Bryan: So going into the BMC, you are able to control the entire machine. Well, it's like, all right, so now I've got a second network that I need to manage. What is running on the BMC? Well, it's running some ancient, ancient version of Linux. It's like, well, how do I patch that? [00:18:02] Bryan: How do I manage the vulnerabilities with that? Because if someone is able to root your BMC, they control the system. And now you've gotta go deal with all of the operational hair around that — how do you upgrade that system, updating the BMC? It's like you've got this second shadow infrastructure that you have to go manage. [00:18:23] Bryan: Generally not open source. There's something called OpenBMC, which people use to varying degrees, but you're generally stuck with the proprietary BMC — so you're generally stuck with iLO from HPE, or iDRAC from Dell, or Supermicro's BMC. And it is just excruciating pain. [00:18:49] Bryan: And this is assuming, by the way, that everything is behaving correctly. The problem is that these things often don't behave correctly, and then there's the consequence of them not behaving correctly.
That consequence is really dire, because it's at that lowest layer of the system. So I'll give you a concrete example. [00:19:07] Bryan: A customer of theirs reported this to me — so I won't disclose the vendor, but let's just say that a well-known vendor had an issue where their temperature sensors were broken, and the thing would always read basically the wrong value. So the BMC had to invent its own kind of thermal control loop, a different kind of thermal control loop. [00:19:28] Bryan: And it would index on the actual inrush current — they would look at the current going into the CPU to adjust the fan speed. That's a great example of something that's an interesting idea that doesn't work, 'cause that's actually not the temperature. [00:19:45] Bryan: So that software would crank the fans whenever you had an inrush of current, and this customer had a workload that would spike the current. When it would spike the current, the fans would kick up, and then they would slowly degrade over time. Well, this workload was spiking the current faster than the fans would degrade, but not fast enough to actually heat up the part. [00:20:08] Bryan: And ultimately, over a very long time, in a very painful investigation, this customer determined: my fans are cranked in my data center for no reason — we're blowing cold air. And this is on the order of a hundred watts a server of energy that you shouldn't be spending. Ultimately, what that comes down to is this kind of broken software/hardware interface at the lowest layer, and it has real, meaningful consequence — hundreds of kilowatts across a data center. So this stuff has very, very real consequence, and it's such a shadowy world.
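The failure mode in that story — a fan loop keyed on CPU current rather than temperature, where fans jump up instantly but decay slowly — can be sketched as a toy simulation. This is an illustrative reconstruction with invented numbers and step sizes, not the vendor's actual firmware logic.

```python
def simulate_fan_controller(current_spikes, steps=200):
    """Toy model of a fan loop indexed on inrush current, not temperature.

    current_spikes: set of timesteps at which the workload spikes CPU current.
    Fans jump to 100% on a spike and decay slowly; the part's actual
    temperature barely moves because each spike is brief.
    """
    fan_speed = 0.0       # percent of max
    temperature = 40.0    # degrees C, ambient-ish baseline
    fan_history = []
    for t in range(steps):
        if t in current_spikes:
            fan_speed = 100.0          # controller reacts to current, not heat
            temperature += 0.5         # brief spike barely warms the part
        else:
            fan_speed = max(0.0, fan_speed - 2.0)   # slow decay: 2%/step
            temperature = max(40.0, temperature - 0.5)
        fan_history.append(fan_speed)
    return fan_history, temperature

# Workload spikes current every 10 steps -- faster than the ~50-step fan
# decay, so the fans stay pinned near 100% while the part never gets hot.
history, final_temp = simulate_fan_controller(set(range(0, 200, 10)))
wasted = sum(1 for s in history if s > 80.0) / len(history)
# The entire run is spent blowing cold air at high fan speed.
```

The mismatch of time constants is the whole bug: the spike interval is shorter than the fan decay, so the proxy signal (current) keeps the fans saturated even though the quantity that matters (temperature) sits at baseline.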
Part of the reason that the listeners who have dealt with this will have their heads hit the desk is because it is really aggravating to deal with problems at this layer. [00:21:01] Bryan: You feel powerless. You don't control or really see the software that's on them. It's generally proprietary. You are relying on your vendor, and your vendor is telling you, boy, I don't know, you're the only customer seeing this. I mean, the number of times I have heard that — and I have pledged that we're not gonna say that at Oxide, because it's such an awful thing to say: you're the only customer seeing this. [00:21:25] Bryan: It feels like, are you blaming me for my problem? Um, and what you begin to realize is that, to a degree, these folks are speaking their own truth, because the folks that are running at real scale, at hyperscale — those folks aren't Dell, HP, Supermicro customers. [00:21:46] Bryan: They've done their own thing. So it's like, yeah, Dell's not seeing that problem, because their customers are not running at the same scale. But you only have to run at modest scale before these things just become overwhelming in terms of the headwind that they present to people who wanna deploy infrastructure. The problem is felt with just a few racks [00:22:05] Jeremy: Yeah, so maybe to help people get some perspective: at what point do you think people start noticing or start feeling these problems? Because I imagine that if you just have a few racks, or — [00:22:22] Bryan: Do you have a couple racks, or are you just wondering? Because — no, no, no. I would think anyone who deploys any number of servers, especially now, especially if your experience is only in the cloud, you're gonna be like, what the hell is this? I mean, just to get this thing working at all.
[00:22:39] Bryan: It is so hairy and so congealed, right? It's not designed. It's accreted, and it's so obviously accreted that nobody who is setting up a rack of servers is gonna think to themselves, yes, this is the right way to go do it, this all makes sense. Because it's just not. It feels like a kit — I mean, kit car is almost too generous, because it implies that there's a set of plans to work to in the end. [00:23:08] Bryan: It's a bag of bolts. It's a bunch of parts that you're putting together. And so even at the smallest scales, that stuff is painful. Just architecturally, it's painful at the small scale — but at least you can get it working. I think the stuff that then becomes debilitating at larger scale is the stuff that's worse than just "this thing is a mess to get working." [00:23:31] Bryan: It's like the fan issue, where you are now seeing this over hundreds of machines or thousands of machines. So it is painful at more or less all levels of scale. There is no level at which the PC — which is really what this is, the personal computer architecture from the 1980s — there is really no level of scale where that's the right unit. Running elastic infrastructure is the hardware but also hypervisor, distributed database, API, etc [00:23:57] Bryan: I mean, where that's the right thing to go deploy, especially if what you are trying to run is elastic infrastructure — a cloud. Because the other thing is, we've kind of been talking a lot about that hardware layer. Hardware is just the start. You actually gotta go put software on it and run it as elastic infrastructure. [00:24:16] Bryan: So you need a hypervisor, yes. But you need a lot more than that.
You need a distributed database, you need web endpoints, you need a CLI — you need all the stuff required to actually go run an actual service of compute or networking or storage. And even for compute, there's a ton of work to be done. [00:24:39] Bryan: And compute is by far, I would say, the simplest of the three. When you look at network services and storage services, there's a whole bunch of stuff that you need to go build in terms of distributed systems to actually offer that as a cloud. So it is painful at more or less every level if you are trying to deploy cloud computing on it. What's a control plane? [00:25:00] Jeremy: And for someone who doesn't have experience building or working with this type of infrastructure, when you talk about a control plane, what does that do in the context of this system? [00:25:16] Bryan: So the control plane is everything between your API request and that infrastructure actually being acted upon. So you go say, hey, I want to provision a VM. Okay, great — we've got a whole bunch of things we're gonna provision with that. We're gonna provision a VM; we're gonna get some storage that's gonna go along with that, which is gonna come out of a network storage service; we've got a virtual network that we're gonna either create or attach to. [00:25:39] Bryan: We've got a whole bunch of things we need to go do for that. For all of these things, there are metadata components that we need to keep track of, beyond the actual infrastructure that we create. And then we need to go actually act on the actual compute elements — the host OS, the switches, what have you — and actually go [00:25:56] Bryan: create these underlying things and then connect them. And of course, the challenge of just getting that working is a big challenge.
But getting that working robustly — when you go to provision a VM, there are all these steps that need to happen, and what happens if one of those steps fails along the way? [00:26:17] Bryan: What happens if — you know, one thing we're very mindful of is these long tails: generally our VM provisioning happens within this time, but we get these long tails where it takes much longer. What's going on? Where in this process are we actually spending time? [00:26:33] Bryan: And there's a whole lot of complexity that you need to deal with there — a lot of complexity to deal with this workflow that's gonna go create these things and manage them. We use a pattern called sagas, which is actually a database pattern from the eighties. [00:26:51] Bryan: Caitie McCaffrey is a database researcher who, I think, reintroduced the idea of sagas in the last decade or so. And this is something that we picked up and have done a lot of really interesting things with, to allow these workflows to be managed, and managed robustly, in a way that you can restart them and so on. [00:27:16] Bryan: And then you get this whole distributed system that can do all this. And that whole distributed system itself needs to be reliable and available. So, you know, what happens if you pull a sled, or if a sled fails — how does the system deal with that? [00:27:33] Bryan: How does the system deal with getting another sled added to the system? How do you actually grow this distributed system? And then how do you update it? How do you actually go from one version to the next? And all of that has to happen across an air gap, where this is gonna run as part of the computer.
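The saga idea described here — a sequence of steps, each paired with a compensating undo action, so a partially failed provisioning workflow can be unwound — can be sketched in a few lines. This is a minimal illustration of the pattern only, not Oxide's actual implementation (a real saga engine also persists step state so workflows survive restarts); the step names are invented.

```python
class SagaFailed(Exception):
    pass

def run_saga(steps, ctx):
    """Execute (name, action, compensation) steps in order.

    On failure, run the compensations of completed steps in reverse,
    leaving the system as if the request had never happened.
    """
    completed = []
    for name, action, compensate in steps:
        try:
            action(ctx)
            completed.append((name, compensate))
        except Exception as exc:
            for done_name, undo in reversed(completed):
                undo(ctx)  # best-effort unwind; real sagas persist and retry
            raise SagaFailed(f"step {name!r} failed: {exc}") from exc
    return ctx

# Invented provisioning steps: allocate storage, create a VM, attach a NIC.
log = []
def make_step(name, fail=False):
    def action(ctx):
        if fail:
            raise RuntimeError("simulated fault")
        log.append(f"+{name}")
    def compensate(ctx):
        log.append(f"-{name}")
    return (name, action, compensate)

steps = [make_step("alloc-storage"), make_step("create-vm"),
         make_step("attach-nic", fail=True)]
try:
    run_saga(steps, ctx={})
except SagaFailed:
    pass
# log: ["+alloc-storage", "+create-vm", "-create-vm", "-alloc-storage"]
```

The reverse-order compensation is what makes the workflow restartable and safe to fail mid-flight: the failed attach-nic step leaves no orphaned VM or storage behind.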
[00:27:49] Bryan: So it is fractally complicated. There is a lot of complexity here in the software system, and all of that is what we call the control plane. And this is what exists at AWS, at GCP, at Azure: when you hit an endpoint that's provisioning an EC2 instance for you, [00:28:10] Bryan: there is an AWS control plane doing all of this, and it has some of these same aspects and certainly some of these same challenges. Are vSphere / Proxmox / Hyper-V in the same category? [00:28:20] Jeremy: And for people who have run their own servers with something like, say, VMware or Hyper-V or Proxmox, are those in the same category? [00:28:32] Bryan: Yeah, a little bit. vSphere, yes; VMware ESX on its own, no. ESX is a key building block upon which you can build something that is a more meaningful distributed system. When it's just a machine that you're provisioning VMs on, then you as the human might be the control plane. [00:28:52] Bryan: That's a much easier problem. But when you've got tens, hundreds, thousands of machines, you need to do it robustly. You need something to coordinate that activity: you need to pick which sled you land on, you need to be able to move these things, you need to be able to update the whole system. [00:29:06] Bryan: That's when you're getting into a control plane. So some of these things have edged into a control plane. Certainly VMware, now Broadcom, has delivered something that's kind of cloudish. I think that for folks who were truly born on the cloud, it still feels somewhat like going backwards in time when you look at these on-prem offerings. [00:29:29] Bryan: But it's got these aspects to it, for sure.
And some of these other things, when you're just looking at KVM or just looking at Proxmox, you kind of need to connect it to other, broader things to turn it into something that really looks like manageable infrastructure. [00:29:47] Bryan: And many of those projects are either proprietary products, like vSphere, or open source projects that are not necessarily aimed at the same level of scale. You look at, again, Proxmox, or you get into OpenStack. [00:30:05] Bryan: And OpenStack is just a lot of things, right? OpenStack was kind of a free-for-all for every infrastructure vendor. There was a time people were like, aren't you worried about all these companies coming together for OpenStack? [00:30:24] Bryan: I'm like, haven't you ever worked for a company? Companies don't get along. Having multiple companies work together on a thing, that's bad news, not good news. And I think one of the things OpenStack has definitely struggled with is that there are so many different vendor elements in there that it's very much not a product; it's a project that you're trying to run. [00:30:47] Bryan: But that is certainly similar in spirit. [00:30:53] Jeremy: And so, I think this is what you were alluding to earlier: the piece that allows you to allocate compute and storage, manage networking, and gives you that experience of, I can go to a web console or use an API, spin up machines, and get them all connected. At the end of the day, the control plane is allowing you to do that in, hopefully, a user-friendly way. [00:31:21] Bryan: That's right. Yep.
And in order to do that in a modern way, it's not just a user-friendly way: you really need to have a CLI and a web UI and an API, and those all need to be drawn from the same single ground truth. You don't want any of those to be an afterthought relative to the others. [00:31:39] Bryan: You want the same way of generating all of those different endpoints and entries into the system. Building a control plane now has better tools (Rust, CockroachDB) [00:31:46] Jeremy: And if you take your time at Joyent as an example, what kind of tools existed for that, versus how much did you have to build in-house, as far as the hypervisor and managing the compute and all that? [00:32:02] Bryan: Yeah, so we built more or less everything in-house. Over time we've gotten slightly better tools, and maybe it's easier to talk about the tools we started with at Oxide, because we started with a clean sheet of paper there. [00:32:16] Bryan: We knew we wanted to go build a control plane, but we were able to revisit some of the components, so maybe I'll talk about some of those changes. For example, at Joyent, when we were building a cloud, there wasn't really a good distributed database. [00:32:34] Bryan: So we were using Postgres as our database for metadata, and there were a lot of challenges. Postgres is not a distributed database; it runs with a primary/secondary architecture, and there's a bunch of issues there, many of which we discovered the hard way. When we were starting Oxide, you had much better options to pick from in terms of distributed databases. [00:32:57] Bryan: There was a period, one that now seems potentially brief in hindsight, of really high-quality open source distributed databases.
So there were really some good ones to pick from. We built on CockroachDB, on CRDB. That was a really important component that we had at Oxide that we didn't have at Joyent. [00:33:19] Bryan: At Joyent, I wouldn't say we were rolling our own distributed database; we were just using Postgres and dealing with an enormous amount of pain in terms of the surround. On top of that, a control plane is much more than a database, obviously. There's a whole bunch of software that you need to go write [00:33:40] Bryan: to transform these API requests into something that is reliable infrastructure, and there's a lot to that, especially when networking gets in the mix, when storage gets in the mix. There are a whole bunch of complicated steps that need to be done. At Joyent, [00:33:59] Bryan: in part because of the history of the company, and look, this just is not gonna sound good, but it is what it is and I'm just gonna own it: we did it all in Node. Which right now just sounds like, well, you built it with Tinker Toys. [00:34:18] Bryan: You built the skyscraper with Tinker Toys? Well, okay. We had greater aspirations for the Tinker Toys once upon a time, and it was better than Twisted in Python and EventMachine in Ruby, and we weren't gonna do it in Java. [00:34:32] Bryan: But let's just say that experiment did ultimately end in a predictable fashion, and we decided that maybe Node was not gonna be the best decision long-term. Joyent was the company behind Node.js back in the day; Ryan Dahl worked for Joyent.
[00:34:53] Bryan: We landed that in a foundation in about 2015, something like that, and began to consider our world beyond Node. Rust at Oxide [00:35:04] Bryan: A big tool that we had in the arsenal when we started Oxide is Rust. Indeed, the name of the company is a tip of the hat to the language we were pretty sure we were gonna be building a lot of stuff in, namely Rust. And Rust has been huge for us, a very important revolution in programming languages. [00:35:16] Bryan: Different people have come to Rust at different times, and I came to it in what I think of as this big second expansion of Rust in 2018, when a lot of technologists were sick of Node and also sick of Go, [00:35:43] Bryan: and also sick of C++, and wondering: is there gonna be something that gives me the performance and the robustness that I can get out of a C program, but which is often difficult to achieve there, while also giving me some of the speed of development, although I hate the term velocity, that you get out of a more interpreted language? [00:36:08] Bryan: And then, by the way, can I actually have types? I think types would be a good idea. Rust obviously hits the sweet spot of all of that, and it has been absolutely huge for us. We knew when we started Oxide that we were gonna be using Rust in quite a few places, but we weren't doing it by fiat. [00:36:27] Bryan: We wanted to make sure we were making the right decision at every layer. I think what has been surprising is the sheer number of layers at which we use Rust: we've done our own embedded firmware in Rust, and in the host operating system, which is still largely in C, very big components are in Rust.
[00:36:47] Bryan: The hypervisor, Propolis, is all in Rust, and then of course the control plane, that distributed system on top, is all in Rust. So Rust was a very important thing that we very much did not need to build ourselves; we were able to really leverage a terrific community. We were also able to use, and we'd done this at Joyent as well, illumos as the host OS component; our variant is called Helios. [00:37:11] Bryan: We've used bhyve as the internal hypervisor component. We've made use of a bunch of different open source components to build this thing, which has been really, really important for us, [00:37:28] including open source components that didn't exist even five years prior. That's part of why we felt that 2019 was the right time to start the company. And so we started Oxide. The problems building a control plane in Node [00:37:34] Jeremy: You had mentioned that at Joyent you had tried to build this in Node. What were the issues or challenges that you had doing that? [00:37:46] Bryan: Oh boy. Yeah. Again, I had higher hopes in 2010, I would say, when we set out on this. The problem that we had, writ large: JavaScript is really designed to allow as many people on earth to write a program as possible, which is good. That's a laudable goal. [00:38:09] Bryan: That is, such as it is, the goal of JavaScript. It's actually hard to know what the goal of JavaScript is, unfortunately, because Brendan Eich never actually wrote a book, so there is no canonical source. You've got Doug Crockford and other people who've written things on JavaScript, but it's hard to know the original intent of JavaScript. [00:38:27] Bryan: The name doesn't even express original intent, right?
It was called LiveScript, and it was renamed to JavaScript during the Java frenzy of the late nineties, a name that makes no sense. There is no Java in JavaScript. That is, I think, revealing of the unprincipled mess that is JavaScript. [00:38:47] Bryan: It's very pragmatic at some level, and it makes it very easy to write software. The problem is that it's much more difficult to write really rigorous software. And here I should differentiate JavaScript from TypeScript; this is really what TypeScript is trying to solve. [00:39:07] Bryan: I think TypeScript is a great step forward, because TypeScript is asking: how can we bring some rigor to this? Yes, it's great that it's easy to write JavaScript, but that's not the only problem we want to solve. [00:39:23] Bryan: We actually wanna be able to write rigorous software, and it's okay if it's a little harder to write, if that leads to more rigorous artifacts. But in JavaScript, just as a concrete example, there's nothing to prevent you from referencing a property that doesn't actually exist. [00:39:43] Bryan: So if you fat-finger a property name, you are relying on something to tell you: by the way, I think you've misspelled this. But there is no type definition for this thing, so nothing knows that you've got one spelling that's correct and one that's incorrect; the misspelled one is just undefined. And now you've got this typo lurking in what you want to be rigorous software. [00:40:07] Bryan: If you don't execute that code, you won't know it's there. And then you do execute that code, and now you've got an undefined object.
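The fat-fingered property problem Bryan describes is exactly what a statically typed language rules out: a misspelled field is a compile-time error, not a silent `undefined` at runtime. A minimal Rust sketch (the `Instance` type is invented for illustration):

```rust
// A typed record: the set of valid field names is fixed at compile time.
struct Instance {
    hostname: String,
}

fn main() {
    let i = Instance { hostname: String::from("web-01") };
    println!("{}", i.hostname);
    // println!("{}", i.hostnme);
    // ^ typo: this line does not compile ("no field `hostnme`"),
    //   whereas in JavaScript `i.hostnme` would silently be undefined.
}
```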
And now that's either gonna be an exception, or, depending on how it's handled, it can be really difficult to determine the origin of that error. [00:40:26] Bryan: And that is a programmer error. One of the big challenges that we had with Node is that programmer errors and operational errors (I'm out of disk space is an operational error) get conflated, and it becomes really hard. In fact, I think the language wanted to make it easier to just kind of drive on in the event of all errors. [00:40:53] Bryan: And that's actually not what you wanna do if you're trying to build a reliable, robust system. So we had no end of issues. [00:41:01] Bryan: We've got a lot of experience developing rigorous systems, again coming out of operating systems development and so on, and we brought some of that rigor, if strangely, to JavaScript. One of the things we did is we brought a lot of postmortem diagnosability and observability to Node. [00:41:18] Bryan: So if one of our Node processes died in production, we would actually get a core dump from that process, a core dump that we could meaningfully process. We did a bunch of wild stuff, actually wild stuff, where we could make sense of the JavaScript objects in a binary core dump. JavaScript values ease of getting started over robustness [00:41:41] Bryan: These were things that we thought were really important, and the rest of the world just looked at this like, what the hell is this? It's so out of step. The problem is that we were trying to bridge two disconnected cultures: one developing really rigorous software and really designing it for production diagnosability, and the other really designing software to run in the browser, for anyone to be able to liven up a webpage, right?
[00:42:10] Bryan: That's kind of the origin of LiveScript and then JavaScript. And we were kind of the only ones sitting at the intersection of those cultures. And when you are the only ones sitting at that intersection, you're fighting a community all the time. We realized there were so many things the community wanted to do that we felt, no, no, this is gonna make software less diagnosable, it's gonna make it less robust. The NodeJS split and why people left [00:42:36] Bryan: And then you realize: we're the only voice in the room, because we have desires for this language that it doesn't have for itself. And that's when you realize you're in a bad relationship with software and it's time to move on. And in fact, this was several years after we'd already kind of broken up with Node. [00:42:55] Bryan: It was a bit of an acrimonious breakup. There was a famous, slash infamous, fork of Node called io.js, [00:43:19] Bryan: and this came about because the community thought that Joyent was not being an appropriate steward of Node.js and was not allowing more things to come into Node. We, of course, felt that we were being a careful steward and were actively resisting those things that would cut against its fitness for a production system. But that's how the community saw it, and they forked. And I think we knew before the fork: this is not working, and we need to get this thing out of our hands. Platform as a Reflection of Values Node Summit talk [00:43:43] Bryan: We are the wrong hands for this; this needs to be in a foundation. And so we had gone through that breakup, and maybe it was two years after that.
A friend of mine, Charles, who was running Node Summit and who has unfortunately since passed away, a venture capitalist and a great guy, came to me in 2017. [00:44:07] Bryan: He said, I really want you to keynote Node Summit. And I'm like, Charles, I'm not gonna do that. I've got nothing nice to say. I'm the last person you want to keynote. He's like, oh, if you have nothing nice to say, you should definitely keynote. And you're like, oh God, okay, here we go. [00:44:22] He's like, no, I really want you to talk about the Joyent breakup with Node.js. I'm like, oh man. [00:44:29] Bryan: And that led to a talk that I'm really happy I gave, because it was a very important talk for me personally, called Platform as a Reflection of Values, really looking at the values that we had for Node and the values that Node had for itself, and how they didn't line up. [00:44:49] Bryan: And the problem is that the values that Node had for itself and the values that we had for Node are all positives, right? There's nobody in the Node community who's like, I don't want rigor, I hate rigor. It's just that if they had to choose between rigor and making the language approachable, [00:45:09] Bryan: they would choose approachability every single time. They would never choose rigor. And that was a big eye-opener. I would say, do watch this talk. [00:45:20] Bryan: Because I knew that the audience was gonna be filled with people who had been a part of the fork, the io.js fork, in 2014, I think, and I knew that some people there had been there for the fork. [00:45:41] So I set a little bit of a trap for the audience.
And the trap: I talked about the values that we had and the aspirations we had for Node, the aspirations that Node had for itself, and how they were different. [00:45:53] Bryan: And I'm like, look, in hindsight a fracture was inevitable, and in 2014 there was finally a fracture. Do people know what happened in 2014? If you listen to that talk, everyone says almost in unison: io.js. I'm like, oh right, io.js. Right. That's actually not what I was thinking of. [00:46:19] Bryan: And I go to the next slide, and it's a tweet from a guy named TJ Holowaychuk, who was the most prolific contributor to Node. It was his tweet, also in 2014, before the io.js fork, explaining that he was leaving Node and that he was going to Go. And if you turn the volume all the way up, you can hear the audience gasp. [00:46:41] Bryan: And it's just delicious, because the community had never really confronted why TJ left. And I went through a couple of folks, Felix, a bunch of other early Node folks, who were there in 2010 and were leaving in 2014, and they were going primarily to Go. And they were going because they were sick of the same things that we were sick of. [00:47:09] Bryan: They had hit the same things that we had hit, and they were frustrated. I really do believe this: platforms do reflect their own values, and when you are making a software decision, you are selecting values. [00:47:26] Bryan: You should select values that align with the values that you have for that software. That's way more important than other things people look at. I think people look at, for example, quote-unquote community size way too frequently. Community size is like, eh, maybe it can be fine. [00:47:44] Bryan: I've been in very large communities, like Node.
I've also been in super small open source communities, like illumos and a bunch of others. There are strengths and weaknesses to both, just as there's a strength to being in a big city versus a small town. Me personally, I'll take the small community more or less every time, because the small community is almost always self-selecting based on values, for the same reason that I like working at small companies or on small teams. [00:48:11] Bryan: There's a lot of value to be had in a small community. That's not to say that large communities are valueless. But, long answer to your question of where things went south with Joyent and Node: they went south because the values that we had and the values the community had didn't line up, and that was a very educational experience, as you might imagine. [00:48:33] Jeremy: Yeah. And given that you mentioned how, because of those values, some people moved from Node to Go, and in the end, for much of what Oxide is building, you ended up using Rust. What would you say are the values of Go and of Rust, and how did you end up choosing Rust? Go's decisions regarding generics, versioning, compilation speed priority [00:48:56] Bryan: Yeah, well, I understand why people moved from Node to Go; Go to me was kind of a lateral move. There were a bunch of things there: Go was still garbage collected, which I didn't like. Go also is very strange in that there are these kind of autocratic decisions that are very bizarre. [00:49:17] Bryan: Generics is kind of a famous one, right? Go, as a point of principle, didn't have generics, even though the innards of Go itself did have generics. It's just that you, a Go user, weren't allowed to have them.
[00:49:35] Bryan: And you know, it's kind of, there was, there was an old cartoon years and years ago about like when a, when a technologist is telling you that something is technically impossible, that actually means I don't feel like it. Uh, and there was a certain degree of like, generics are technically impossible and go, it's like, Hey, actually there are. [00:49:51] Bryan: And so there was, and I just think that the arguments against generics were kind of disingenuous. Um, and indeed, like they ended up adopting generics and then there's like some super weird stuff around like, they're very anti-assertion, which is like, what, how are you? Why are you, how is someone against assertions, it doesn't even make any sense, but it's like, oh, nope. [00:50:10] Bryan: Okay. There's a whole scree on it. Nope, we're against assertions and the, you know, against versioning. There was another thing like, you know, the Rob Pike has kind of famously been like, you should always just run on the way to commit. And you're like, does that, is that, does that make sense? I mean this, we actually built it. [00:50:26] Bryan: And so there are a bunch of things like that. You're just like, okay, this is just exhausting and. I mean, there's some things about Go that are great and, uh, plenty of other things that I just, I'm not a fan of. Um, I think that the, in the end, like Go cares a lot about like compile time. It's super important for Go Right? [00:50:44] Bryan: Is very quick, compile time. I'm like, okay. But that's like compile time is not like, it's not unimportant, it's doesn't have zero importance. But I've got other things that are like lots more important than that. Um, what I really care about is I want a high performing artifact. I wanted garbage collection outta my life. 
Don't think garbage collection has good trade offs [00:51:00] Bryan: I gotta tell you, garbage collection to me is an embodiment of this larger problem of where you put cognitive load in the software development process. And garbage collection is right for plenty of other people and the software that they wanna develop. [00:51:21] Bryan: But for me and the software that I wanna develop, infrastructure software, I don't want garbage collection, because I can solve the memory allocation problem. I know when I'm done with something or not. Even in C, it's really not that hard to avoid leaking memory in a C-based system, [00:51:44] Bryan: and you can give yourself a lot of tooling that allows you to diagnose where memory leaks are coming from. So that is a solvable problem. There are other challenges with that, but when you are developing a really sophisticated system that is using garbage collection, [00:51:59] Bryan: you spend as much time trying to dork with the garbage collector to convince it to collect the thing that you know is garbage. You're like, I've got this thing, I know it's garbage, and now I need all these tips and tricks to get the garbage collector to collect it. It feels like every Java performance issue comes down to some -XX flag: whatever garbage collector you're using, use a different one, use a different approach. [00:52:23] Bryan: So to me you're in the worst of all worlds, where the reason garbage collection is helpful is that the programmer doesn't have to think at all about this problem, but now you're dealing with these long pauses in production, [00:52:38] Bryan: you're dealing with all these other issues, where actually you need to think a lot about it.
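Bryan's point that he knows when he's done with something is what an ownership-based language encodes directly: deallocation happens at a statically known point (end of scope, or when the last owner goes away), not whenever a collector decides to run. A minimal Rust sketch, with an invented `Buffer` type whose destructor makes the moment of reclamation visible:

```rust
// A resource whose cleanup point we can observe.
struct Buffer(&'static str);

impl Drop for Buffer {
    fn drop(&mut self) {
        // Runs deterministically when the owner goes out of scope.
        println!("freed {}", self.0);
    }
}

fn main() {
    {
        let _a = Buffer("a");
        println!("in scope");
    } // _a is freed exactly here, not at some later GC pause
    println!("after scope");
    // Prints: "in scope", then "freed a", then "after scope".
}
```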
And it's kind of witchcraft. It's this black box that you can't see into. So it's like, what problem have we solved, exactly? So the fact that Go had garbage collection was a no for me, and then you get all the other weird fatwas and everything else. [00:52:57] Bryan: No, thank you. Go is a no-thank-you for me. I get why people like it or use it, but that was not gonna be it. Choosing Rust [00:53:04] Bryan: I'm like, I want C. But there were things I didn't like about C, too. I was looking for something that was gonna give me the deterministic kind of artifact that I got out of C, but I wanted library support, and C is tough because it's all convention, and there's just a bunch of other things that are thorny. And I remember thinking vividly in 2018: it's Rust or bust. Ownership model, algebraic types, error handling [00:53:28] Bryan: I'm gonna go into Rust, and I hope I like it, because if it's not this, I'm gonna go back to C. I was literally trying to figure out what the language is for the back half of my career. And I did what a lot of people were doing at that time, and have been doing since: really getting into Rust, really learning it, appreciating the difference in the model, and the ownership model people talk about, for sure. [00:53:54] Bryan: That's obviously very important. But it was the error handling that blew me away, and the idea of algebraic types. I'd never really had algebraic types. And you really appreciate these things when you ask: how do you deal with a function that can either succeed and return something, or fail? The way C deals with that is bad, with these sentinel values for errors.
[00:54:27] Bryan: Does negative one mean success? Does negative one mean failure? Does zero mean failure? In some C functions, zero means failure; traditionally in Unix, zero means success. And what if you wanna return a file descriptor? Then it's like, okay, zero through positive N will be a valid result, [00:54:44] and negative numbers will be errors. And was it negative one, with errno set, or just any negative number? It's all convention, right? People do all those different things, and it's all convention: easy to get wrong, easy to have bugs, can't be statically checked, and so on. And then what Go says is, well, you're gonna have two return values, and you're gonna have to constantly check all of these, all the time, which is also kind of gross. JavaScript is like, hey, let's throw an exception: if we see an error, we'll throw an exception. [00:55:15] Bryan: There are a bunch of reasons I don't like that. And then you get what Rust does, where it's like, no, no, no, we're gonna have these algebraic types, which is to say, this thing can be a this or a that, but it has to be one of them. And you don't get to use it until you conditionally match on one of those cases. [00:55:35] Bryan: You're gonna have a pattern match on this thing to determine if it's a this or a that. And the Result type is a generic: it's gonna be either an Ok that contains the thing you wanna return, or an Err that contains your error, and it forces your code to deal with that.
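The contrast Bryan draws, C's sentinel conventions versus Rust's `Result`, can be shown concretely with the standard library. `parse_port` here is an invented helper for illustration: success and failure are distinct variants of one algebraic type, and the caller cannot touch the value without matching on which variant it got.

```rust
use std::num::ParseIntError;

// Result<u16, ParseIntError> is an algebraic type: the value is either
// Ok(port) or Err(reason). There is no sentinel like -1 to misread.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // The compiler forces us to handle both cases before using the port.
    match parse_port("8080") {
        Ok(p) => println!("port {}", p),
        Err(e) => println!("bad port: {}", e),
    }
    match parse_port("not-a-port") {
        Ok(p) => println!("port {}", p),
        Err(e) => println!("bad port: {}", e),
    }
}
```

Compare the C convention, where the same function might return an `int` that is the port on success and -1 on failure, with nothing stopping the caller from using -1 as a port.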
[00:55:57] Bryan: And what that does is shift the cognitive load from the person operating this thing in production to the developer, in development. I love that shift, and that shift to me is really important. That's what I was missing, and that's what Rust gives you. [00:56:23] Bryan: Rust forces you to think about your code as you write it, but as a result, you have an artifact that is much more supportable, much more sustainable, and much faster. Prefer to frontload cognitive load during development instead of at runtime [00:56:34] Jeremy: Yeah, it sounds like you would rather take the time during development to think about these issues, because dealing with garbage collection or error handling at runtime, when you're trying to solve a problem, is much more difficult than having dealt with it to start with. [00:56:57] Bryan: Yeah, absolutely. And I just think, if it's infrastructure software, the question you should ask when you're writing software is: how long is this software gonna live? How many people are gonna use it? If you are writing an operating system, the answer is that this thing is gonna live for a long time. [00:57:18] Bryan: Just look at how many aspects of the system have been around for decades: it's gonna live for a long time, and many, many people are gonna use it. Why would we not expect people writing that software to take on more cognitive load when they're writing it, to give us a better artifact? [00:57:38] Bryan: Now, conversely, maybe you're like, hey, I kind of don't care about this. I just wanna see if this whole thing works; I'm just stringing this together.
The software will be lucky if it survives until tonight, so who cares? Yeah. Yeah. [00:57:52] Bryan: Garbage collect, you know, if you're prototyping something, whatever. And this is why you really do get different technology choices, depending on the way that you wanna solve the problem at hand. And for the software that I wanna write, I do like that cognitive load being upfront. With LLMs maybe you can get the benefit of the robust artifact with less cognitive load [00:58:10] Bryan: Um, although I think the thing that is really wild, the twist that I don't think anyone saw coming, is that in an LLM age, that cognitive load upfront almost needs an asterisk on it, because so much of it can be assisted by an LLM. And I would like to believe, and maybe this is me being optimistic, that in the LLM age we will see, I mean, Rust is a great fit for the LLM age, because the LLM itself can get a lot of feedback about whether the software that's written is correct or not. [00:58:44] Bryan: Much more so than you can in other environments. [00:58:48] Jeremy: Yeah, that is an interesting point. I think when people first started trying out LLMs to code, they were really good at these maybe looser languages like Python or JavaScript, and initially weren't so good at something like Rust. But it sounds like as that improves, if it can write it, then because of the rigor, the memory management, or the error handling that the language is forcing you to do, it might actually end up being a better choice for people using LLMs. [00:59:27] Bryan: Absolutely. It gives you more certainty in the artifact that you've delivered. I mean, you know a lot about a Rust program that compiles correctly.
There are certain classes of errors that you don't have, um, that you can't rule out in a C program or a Go program or a JavaScript program. [00:59:46] Bryan: I think that's gonna be really important. I think we are on the cusp, maybe we've already seen it, of this kind of great bifurcation in the software that we write.
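One concrete instance of the "classes of errors you don't have" in a compiling Rust program: safe Rust has no null, so a lookup that might miss returns an Option that must be unpacked before the value can be used. A minimal sketch, with names invented for the example:

```rust
use std::collections::HashMap;

// Hypothetical config lookup: `get` returns Option<&u32>, so the
// "missing key" case must be handled before the value can be used;
// the null-dereference class of bug is ruled out at compile time.
fn limit_or_default(limits: &HashMap<&str, u32>, key: &str) -> u32 {
    match limits.get(key) {
        Some(v) => *v,
        None => 0, // the compiler rejects a match that omits this arm
    }
}

fn main() {
    let mut limits = HashMap::new();
    limits.insert("max_conns", 1024);
    println!("max_conns = {}", limit_or_default(&limits, "max_conns"));
    println!("max_mem   = {}", limit_or_default(&limits, "max_mem"));
}
```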
In this episode, I sit down with Prashant Sridharan, a 30-year veteran of developer marketing who has shaped go-to-market strategies for tech giants like Sun Microsystems, Microsoft, AWS, Facebook, and Twitter, and currently runs product marketing at Supabase. We dive deep into the origins of DevRel and how marketing to developers has evolved in an increasingly noisy, AI-saturated landscape.
Topics covered:
- Transitioning from massive tech companies to the fast-paced startup world
- How to genuinely measure the success of Developer Relations without ruining communities
- Using AI tools like Claude to accelerate mechanical marketing tasks while preserving authentic storytelling
- The shift from traditional SEO to GEO (Generative Engine Optimization) for developer tools
- The thrill of live, unscripted coding demos and stories from sharing the stage with Steve Ballmer
- Prashant's upcoming novel, The Midnight Coders Children, and the craft of writing
Find more from Prashant at StrategicNerds.com and check out his non-fiction book, Picks and Shovels: https://amzn.to/4cJ2TRO
An airhacks.fm conversation with Simon Ritter (@speakjava) about: first computer experiences with TRS-80 and mainframe ALGOL68 programming via punched cards in the 1970s UK, one-week turnaround times for program execution, writing battleship games on mainframes, BBC Micro with color graphics and dual floppy drives, father's influence as a tech enthusiast with a PDP-8 in his chemistry lab, early fascination with robotics and controlling machines through programming, writing card games and Mandelbrot set fractal generators in BASIC, transition from BASIC to C programming through a sponsored university degree, working at Rocc Computers on Unix device drivers and kernel debugging, the Teleputer, memory leak debugging requiring half-inch mag tape transfers and two-week investigation periods, AT&T Unix source code license access and kernel modifications, Unix System V Release 4 and Bell Labs heritage, Motorola 68000 processor's flat memory model versus Intel's near/far pointers, Novell's acquisition of Unix from AT&T in 1993, UnixWare development and time spent in Utah, SCO's acquisition of Unix IP and subsequent IP trolling, joining Sun Microsystems in 1996 as a Solaris sales engineer, transition to Java evangelism in 1997, working under Reggie Hutcherson and Matt Thompson for nearly 10 years, building a Lego Mindstorms blackjack-dealing robot with Java speech recognition and computer vision, using Sphinx for voice recognition and FreeTTS for speech synthesis, JMF webcam integration for card recognition, the JavaOne 2004 robot demonstration, GlassFish application server evangelism and reference implementation benefits, Sun's technology focus versus business development challenges, CDE desktop environment nostalgia, Oracle's acquisition of Sun in 2010, Jonathan Schwartz's acquisition announcement email, Oracle's successful stewardship of Java through OpenJDK, praise for Brian Goetz, Mark Reinhold, John Rose, and Stuart Marks, six-month release cycle benefits, Projects Amber, Loom, Panama,
and Valhalla developments, OpenSolaris discontinuation leading to Docker adoption for server containerization, Oracle's 2015 pivot to cloud focus, a career-defining conversation in Japan about cloud versus Java evangelism, layoff during vacation in September 2015, joining Azul Systems after a three-and-a-half-hour interview with Gil Tene, ten years at Azul working on the high-performance JVM Platform Prime, garbage collection, and CRaC technology, comparison of Azul culture to Sun Microsystems' innovation environment, commercial Java distribution value propositions and runtime inventory features. Simon Ritter on Twitter: @speakjava
We think that offering different sizes is serving our customers, but is it actually? Does standard sizing make it easier for the customer or does it just make it easier for the brand? Rick Levine and Steven Heard have thoughts. They've each run multiple manufacturing businesses in different industries and are currently partners of the made-to-measure development and manufacturing studio ApparelWerks. No matter the business, their goal with product design, fit, and sizing has been the same: make each customer insanely happy. It impacts how they see production, technology, entrepreneurship, craftsmanship, customer relationships, and more. In episode 130, we cover it all. Steven and Rick were introduced when Rick was looking into body scanning for a problem his daughter, an engineer, was trying to solve. "There's a guy in Portland who's been making customized clothing for decades. He knows all about scanners and measurement." They discovered a shared appreciation for manufacturing technology, a fascination with old sewing machines, and a view that tech is only a means to an end; their past businesses were focused on making customers happy. Both were also looking for something new and interesting to do, and the result was starting ApparelWerks, a manufacturing and product development studio in Portland creating made-to-measure clothing. Steven Heard has decades of experience making clothing, starting at the Levi Strauss factory on Valencia Street in San Francisco, at a time when all patterns and samples for the company were still created there. He was a senior pattern-maker for Levi's Dockers brand, and went on to spearhead the world's first large-scale bespoke jeans production, leveraging body scanning technology to craft custom jeans for thousands of consumers. 
He founded pattern service bureau Clinton Park, doing garment development and pattern work for numerous national and start-up brands, and developed a reputation for being the go-to patternmaker for denim development. He went on to found Japanese-inspired San Francisco denim and workwear brand Dillon Montara in 2014, and was the development and manufacturing partner behind Portland's Ship John brand. Rick Levine is the engineering black sheep in a family of artists. His father was a ceramist and designer, making and using tools to create mid-century ceramic tile and lamps on a large scale. Rick spent a lot of his time growing up around clay and machinery. Rick started his career as a producer and editor for film and video, and stepped sideways into programming tools and user interfaces for computer systems. He worked at Sun Microsystems early in its existence, and then at a series of start-ups. In 2006, Rick followed his interest in manufacturing automation to found chocolate brand Sun Cups. He repurposed industrial-scale chocolate techniques to create artisanal, organic, nut-free chocolates and made them available in thousands of stores. In 2013, he and his brother, designer Neil Levine, founded sock company XOAB, focusing on creating comfortable socks with a broad palette of Merino wool and Supima® cotton colors. They created a domestic supply chain, and used modified knitting machines and pattern analysis software to take new designs from sketch to shelf in less than a week, a capability unique in the hosiery industry. 
This episode explores:
Fitting the customer
- Why the garment won't fit unless you've had a conversation with the customer
- The difference between solving fit for your brand versus solving fit for your customer
- The advantages and limitations of 3D body scanning for apparel development
- How they know when they got the fit right
Fitting the lifestyle
- Scaling on-demand production
- How one-piece production flow changes the way you see efficiency
- How Rick's and Steven's backgrounds led to their perspectives on manufacturing
- The tools Rick and Steven use to systematize custom clothing
Fitting the values
- Why there's value in both craft and technology
- Why Rick has a “healthy disrespect” for tools
People and resources mentioned in this episode: ApparelWerks website, Dillon Montara website, ApparelWerks Instagram
Do you want fashion business tips and resources like this sent straight to your inbox? Sign up for the How Fitting newsletter to receive new podcast episodes plus daily content on creating fashion that fits your customer, lifestyle, and values.
“AI shouldn't replace judgment. It should amplify it.” As AI rapidly collapses product development cycles and accelerates decision-making, the margin for getting things wrong has never been smaller. Product leaders today are navigating faster launches, heightened customer expectations, rising concerns around trust and bias, and an increasing reliance on automated intelligence, while still being accountable for outcomes, accuracy, and long-term value creation. In this episode, Ajay Singh, Chief Product Officer at Pure Storage, joins Namita Adavi, Partner at Zinnov, for a candid conversation on what it truly takes to lead products in an AI-accelerated world, where speed is abundant, but judgment remains scarce. Ajay brings over 25 years of experience spanning Silicon Valley startups and global enterprises, having led technology and product teams through multiple waves of disruption, from the PC era and cloud revolution to today's AI inflection point. Drawing from his journey across Sun Microsystems, VMware, Hewlett Packard, and as a founder whose companies were acquired by BMC Software and Intuit, Ajay offers a grounded operator's perspective on how innovation actually scales. The conversation explores how AI is reshaping product management and customer experience, why biased outcomes often originate in data rather than models, and how product teams must move from feature-centric execution to outcome-driven design. Ajay also shares why the venture-driven startup model continues to be the most effective engine for innovation, even inside large enterprises, and why, despite all the technological change, leaders must continue to place their biggest bets on people. This episode offers a pragmatic view of AI as an enabler, not a replacement, where success is defined not by adopting the latest tools, but by building resilient teams, trustworthy systems, and products that deliver real customer outcomes. 
Tune in to hear: Why AI is compressing product cycles, but not eliminating accountability How product leaders can manage trust, accuracy, and bias in AI-driven systems Why product management is shifting from features to outcomes and behavior patterns How startup-style venture models can unlock innovation inside large enterprises Leadership lessons on judgment, competitiveness, and building teams that win Tune in to hear the full episode.
In this episode of The Dutch Kubernetes Podcast, Ronald and Jan talk with Yahya Al-Salqan, CEO and co-founder of Jaffa.Net Software, about building and scaling global software companies far beyond the traditional tech hubs. Yahya shares his personal journey from academia and Silicon Valley, where he worked at Sun Microsystems, back to Palestine to found Jaffa.Net. What started as a mission-driven decision to contribute to his community has grown into a company with over 26 years of experience, serving international clients such as Intel, BMW, Fujitsu, Lufthansa, Oxford University, and several Dutch organizations. The conversation explores how modern software engineering practices and cloud-native technologies make it possible to deliver enterprise-grade solutions globally. Kubernetes and container technologies play a key enabling role by providing consistent environments, repeatable deployments, version control, and zero-downtime upgrades for customers running ERP and custom software solutions. Beyond technology, the episode highlights the Palestinian IT ecosystem, the importance of education, and how software development allows talent to transcend physical and political borders. Yahya explains why the IT sector is one of the fastest-growing contributors to the local economy and why investing in people and skills is the most sustainable path forward. The discussion also touches on future trends such as AI, blockchain, and programmable digital money, and how companies must continuously evolve to stay relevant. Throughout the episode, one theme remains central: global software scale is no longer defined by geography, but by mindset, tooling, and execution.
Powered by ACC ICT. Send us a message.
DevOps Conference, the conference for CI/CD, Kubernetes, Platform Engineering & DevSecOps: use code k8_Podcast for 15% off.
Support the show. Like and subscribe!
It helps out a lot. You can also find us on:
De Nederlandse Kubernetes Podcast - YouTube
Nederlandse Kubernetes Podcast (@k8spodcast.nl) | TikTok
De Nederlandse Kubernetes Podcast
Where can you meet us: Events
This podcast is powered by: ACC ICT - IT Continuity for Business-Critical Applications | ACC ICT
Corey Zumar is a Product Manager at Databricks, working on MLflow and LLM evaluation, tracing, and lifecycle tooling for generative AI. Jules Damji is a Lead Developer Advocate at Databricks, working on Spark, lakehouse technologies, and developer education across the data and AI community. Danny Chiao is an Engineering Leader at Databricks, working on data and AI observability, quality, and production-grade governance for ML and agent systems.
MLflow Leading Open Source // MLOps Podcast #356 with Databricks' Corey Zumar, Jules Damji, and Danny Chiao
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
Shoutout to Databricks for powering this MLOps Podcast episode.
// Abstract
MLflow isn't just for data scientists anymore, and pretending it is is holding teams back. Corey Zumar, Jules Damji, and Danny Chiao break down how MLflow is being rebuilt for GenAI, agents, and real production systems where evals are messy, memory is risky, and governance actually matters. The takeaway: if your AI stack treats agents like fancy chatbots or splits ML and software tooling, you're already behind.
// Bio
Corey Zumar
Corey has been working as a Software Engineer at Databricks for the last 4 years and has been an active contributor to and maintainer of MLflow since its first release.
Jules Damji
Jules is a developer advocate at Databricks Inc., an MLflow and Apache Spark™ contributor, and Learning Spark, 2nd Edition coauthor. He is a hands-on developer with over 25 years of experience. He has worked at leading companies, such as Sun Microsystems, Netscape, @Home, Opsware/LoudCloud, VeriSign, ProQuest, Hortonworks, Anyscale, and Databricks, building large-scale distributed systems. He holds a B.Sc. and M.Sc.
in computer science (from Oregon State University and Cal State, Chico, respectively) and an MA in political advocacy and communication (from Johns Hopkins University).
Danny Chiao
Danny is an engineering lead at Databricks, leading efforts around data observability (quality, data classification). Previously, Danny led efforts at Tecton (+ Feast, an open source feature store) and Google to build ML infrastructure and large-scale ML-powered features. Danny holds a Bachelor's Degree in Computer Science from MIT.
// Related Links
Website: https://mlflow.org/
https://www.databricks.com/
~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Corey on LinkedIn: /corey-zumar/
Connect with Jules on LinkedIn: /dmatrix/
Connect with Danny on LinkedIn: /danny-chiao/
Timestamps:
[00:00] MLflow Open Source Focus
[00:49] MLflow Agents in Production
[00:00] AI UX Design Patterns
[12:19] Context Management in Chat
[19:24] Human Feedback in MLflow
[24:37] Prompt Entropy and Optimization
[30:55] Evolving MLflow Personas
[36:27] Persona Expansion vs Separation
[47:27] Product Ecosystem Design
[54:03] PII vs Business Sensitivity
[57:51] Wrap up
An airhacks.fm conversation with Alvaro Hernandez (@ahachete) about: discussion about LLMs generating Java code with BCE patterns and architectural rules, Java being 20-30% better for LLM code generation than Python and TypeScript, embedding business knowledge in Java source code for LLM context, StackGres as a curated opinionated stack for running Postgres on Kubernetes, Postgres requiring external tools for connection pooling and high availability and backup and monitoring, StackGres as a Helm package and Kubernetes operator, comparison with Oxide hardware for on-premise cloud environments, experimenting with Incus for system containers and VMs, limitations of Ansible for infrastructure automation and code reuse, Kubernetes as an API-driven architecture abstracting compute and storage, Custom Resource Definitions (CRDs) for declarative Postgres cluster management, StackGres supporting sharding with automated multi-cluster deployment, 13 lines of YAML to create 60-node sharded clusters, three interfaces for StackGres including CRDs and web console and REST API, the operator written in Java with Quarkus unlike typical Go-based operators, a Google study showing Java faster than Go, GraalVM native compilation for 80MB container images versus 400-500MB JVM images, the fabric8 Kubernetes client for API communication, a reconciliation cycle running every 10 seconds to maintain desired state, the pod local controller as a Quarkus sidecar for local Postgres operations, dynamic extension installation without rebuilding container images, gRPC bi-directional communication between control plane and control nodes, an inverse connection pattern where nodes initiate connections to the control plane, comparison with Jini and JavaSpaces leasing concepts from Sun Microsystems, a quarter million lines of Java code in the operator mostly POJOs predating records, PostgreSQL configuration validation with 300+ parameters, automated tuning applied by default in StackGres, potential for LLM-driven optimization with
clone clusters for testing, Framework Computer laptop automation with Ubuntu auto-install and Ansible and Nix, five to ten minute full system reinstall including BIOS updates Alvaro Hernandez on twitter: @ahachete
While he was still a student, Google came to recruit this Frenchman for an ultra-secret project. 18 years later, the whole world uses it. It's a completely unique journey. A dev who, as a teenager, wrote articles for tech magazines. Translated O'Reilly books. And above all... coded in Java. At a time when open source was in its infancy and compilers were still paid products, Romain created, all by himself, a Java code editor. Free. It was so exceptional that, in addition to winning several awards, he was spotted by Sun Microsystems, the Californian company that, among other things, developed the Java language. Without a second thought, Romain packed his bags. Destination: Silicon Valley.
Have you ever wondered why the computers we use every day run on Intel or AMD chips built on the x86 architecture? Why did the legendary chips we once heard about, from Sun Microsystems, DEC Alpha, or PowerPC, once hailed as "faster" and "more advanced", vanish from the PC and server markets? This story is not just a technical contest; it is a business war that lasted nearly 20 years, a war in which giants from around the world, from America, Japan, and Europe, piled in to fight for the throne of computing. It was an era of shifting alliances, bluffing with benchmark numbers, and decisions that staked the fate of entire companies. And the starting point of it all was a small whisper from the outskirts of the empire of the blue giant, IBM. Today, we travel back in time to the legend of the war known as the "RISC Wars". Enjoy the episode, and don't forget to follow the Geek Forever's Podcast. #Intel #Computers #CPU #TechnologyHistory #ITKnowledge #SunMicrosystems #RISC #CISC #Technology #BusinessCaseStudy #GeneralKnowledge #ComputerEngineering #IBM #TechHistory #ITStories #geekstory #geekforeverpodcast
In this conversation, Ben Bajarin and Jay Goldberg engage with Benedict Evans to explore the current state of AI development, its historical context, and future predictions. They discuss the potential for an AI bubble, the importance of productization for user adoption, and the varying levels of AI integration across different industries. The conversation also touches on the comparison between Nvidia and Sun Microsystems, highlighting the challenges and opportunities in the AI landscape.
This podcast is brought to you by Outcomes Rocket, your exclusive healthcare marketing agency. Learn how to accelerate your growth by going to outcomesrocket.com AI security is no longer optional; it's the foundation that determines whether innovation in healthcare will thrive or fail. In this episode, Steve Wilson, Chief AI & Product Officer for Exabeam and author, discusses the hidden vulnerabilities inside modern AI systems, why traditional software assumptions break down, and how healthcare must rethink safety, trust, and security from the ground up. He explains the risks of prompt injection and indirect prompt injection, highlights the fragile nature of AI “intuition,” and compares securing AI to training unpredictable employees rather than testing deterministic code. Steve also explores issues such as supply chain integrity, output filtering, trust boundaries, and the growing need for continuous evaluation rather than one-time testing. Finally, he shares stories from his early career at Sun Microsystems, Java's early days, startup lessons from the 90s, and how modern AI agents are reshaping cybersecurity operations. Tune in and learn how today's most advanced AI systems can be both powerful and dangerously gullible, and what it takes to secure them! Resources Connect with and follow Steve Wilson on LinkedIn. Follow Exabeam on LinkedIn and visit their website! Buy Steve Wilson's book The Developer's Playbook for Large Language Model Security here.
Jeffrey Allen, a respected energy healer and Mindvalley author, is known for his teachings on personal transformation and spiritual awakening. His 'Duality' training with Mindvalley and 'Spirit Mind' training with his wife Hisami assist people worldwide in transforming their lives and reconnecting with their true essence. Prior to entering the world of spirituality, Jeffrey had a 15-year career as a software engineer with the US Department of Energy and Sun Microsystems. Since then he has spent over 15 years teaching clairvoyance, healing, and mediumship studies around the world. Jeffrey has studied with world-renowned teachers Michael Tamura, Mary Bell Nyman, Jim Self, John Fulton, and Nassim Haramein of the Resonance Project. We discuss: The Spirit Body Why men don't feel energy like women Types of energy healing Insight on the current energy right now How to recognize your natural gifts Follow Jeffrey Allen on Instagram @iamjeffreyallen Explore Jeffrey's Duality or Unlocking Transcendence classes with Mindvalley https://www.mindvalley.com Learn more about Jeffrey Allen www.IAMJeffreyAllen.com www.SpiritMind.com Follow Chef Whitney Aronoff on Instagram at @whitneyaronoff and @starseedkitchen Learn more about High Vibration Living with Chef Whitney Aronoff on www.StarseedKitchen.com Get 10% off your order of Chef Whitney's organic spices with code STARSEED on www.starseedkitchen.com Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of The Segment, host Raghu Nandakumara sits down with one of the most influential technology leaders of our time: Tony Scott, President & CEO of Intrusion and former U.S. Federal CIO under President Obama. With CIO roles at Microsoft, VMware, Disney, General Motors, Bristol Myers Squibb, and Sun Microsystems, Tony brings a rare, decades-wide perspective on how enterprise technology evolves and where it's heading next. Tony shares his journey through some of the world's most complex organizations, offering a candid look at the forces that drive digital transformation, why organizational silos still shape most architectures, and how AI may finally help dissolve them. He breaks down how cybersecurity models must shift in an era of ubiquitous AI, legacy infrastructure, and escalating regulatory complexity, and explains why continuous monitoring and long-term institutional memory are now essential. We also dive into Tony's leadership philosophy, how he balances transformation with cyber risk, and what he's learned transitioning from CIO to CEO of a cybersecurity company tackling some of today's hardest problems.
Key themes discussed:
- The evolution of the CIO role across decades of transformation
- Managing cyber risk amid AI proliferation, legacy systems, and modern architectures
- The importance of "useful life" frameworks for tech modernization
- Leadership lessons from navigating both public and private sector tech at scale
A must-listen for CIOs, CISOs, tech leaders, and anyone preparing their organization for what's next in AI-driven transformation and cybersecurity.
It began in the 1970s, with rumors rumbling from the outskirts of the American technology giant, IBM. A new chip architecture capable of revolutionary processing speeds. It was called RISC. The RISC Wars were fought over nearly 20 years, with the most intensive battles in the late 1980s and early 1990s. At its peak, it involved a mix of young chip upstarts and old giants across the world throwing around benchmark results. Sun Microsystems. MIPS Computer. PA-RISC. IBM. PowerPC. DEC Alpha. Fujitsu and NEC in Japan. Siemens and Philips in Europe. And of course, looming over them all: Intel and the burgeoning Wintel Death Machine. It was a time of shifting alliances, leaps of inspiration, wild technical claims, and the Iron Fist of Intel. Today, we delve into legends of the RISC Wars.
What if the real key to exploding your business isn't just innovation—but mastering the art of pivoting through change? In this episode of Sharkpreneur, Seth Greene interviews Blair LaCorte, CEO at LaCorte Ventures. Blair is a leader who has guided multiple companies from startup to IPO and through major industry disruptions. Blair's career includes C-level roles at ExoJet Vista, TPG, Autodesk, Sun Microsystems, and the world's largest live entertainment production company. He's currently training as an astronaut for Virgin Galactic, serves as Vice Chairman of the Buck Institute for Research on Aging, and has collaborated with icons like Richard Branson, Elon Musk, and Bill Clinton. In this candid conversation, Blair shares how to recognize when to pivot versus double down, why change is the ultimate business opportunity, and how to build lasting connections that fuel personal and professional growth. Key Takeaways: → The two essential skills every entrepreneur needs: fact-finding and quick-start decision-making. → How to tell if you're pivoting too much—or not enough. → Why change should be viewed as a profit opportunity, not a threat. → The biggest mistakes leaders make when reacting to disruption—and how to avoid them. → Why restructuring and scaling have more in common than most think. Blair LaCorte is a dynamic business executive with a diverse career spanning entertainment, aviation, AI, technology, aerospace, consulting, investing, and military logistics. Raised by entrepreneurs, he has held CEO and C-level roles at major companies like PRG, XOJET/Vista, TPG, Autodesk, and Sun Microsystems/Oracle. Blair has helped lead multiple startups to successful IPOs, including AEye Technologies and VerticalNet. Currently, he is an astronaut-in-training for Virgin Galactic and serves as Vice Chairman of the Buck Institute, a leader in longevity research. He also co-founded and facilitates a Mastermind group of 40 global CEOs. 
Known for his engaging leadership and strategic vision, Blair has served on nonprofit boards alongside luminaries like Steve Kerr, Phil Jackson, Richard Branson, Elon Musk, and Bill Clinton. Connect With Blair LaCorte: Website: https://mastermindinnovate.com/ LinkedIn: https://www.linkedin.com/in/blair-lacorte-68084/ Learn more about your ad choices. Visit megaphone.fm/adchoices
BONUS: The Evolution of Agile - From Project Management to Adaptive Intelligence, With Mario Aiello In this BONUS episode, we explore the remarkable journey of Mario Aiello, a veteran agility thinker who has witnessed and shaped the evolution of Agile from its earliest days. Now freshly retired, Mario shares decades of hard-won insights about what works, what doesn't, and where Agile is headed next. This conversation challenges conventional thinking about methodologies, certifications, and what it truly means to be an Agile coach in complex environments. The Early Days: Agilizing Before Agile Had a Name "I came from project management, and project management was, for me, not working. I used to be a wishful liar, basically, because I used to manipulate reports in such a way that would please the listener. I knew it was bullshit." Mario's journey into Agile began around 2001 at Sun Microsystems, where he was already experimenting with iterative approaches while the rest of the world was still firmly planted in traditional project management. Working in Palo Alto, he encountered early adopters discussing Extreme Programming and had an "aha moment" - realizing that concepts like short iterations, feedback loops, and learning could rescue him from the unsustainable madness of traditional project management. He began incorporating these ideas into his work with PRINCE2, calling stages "iterations" and making them as short as possible. His simple agile approach focused on: work on the most important thing first, finish it, then move to the next one, cooperate with each other, and continuously improve. The Trajectory of Agile: From Values to Mechanisms "When the craze of methodologies came about, I started questioning the commercialization and monetization of methodologies. That's where things started to get a little bit complicated because the general focus drifted from values and principles to mechanisms and metrics." 
Mario describes witnessing three distinct phases in Agile's evolution. The early days were authentic - software developers speaking from the heart about genuine needs for new ways of working. The Agile Manifesto put important truths in front of everyone. However, as methodologies became commercialized, the focus shifted dangerously away from the core values and principles toward prescriptive mechanisms, metrics, and ceremonies. Mario emphasizes that when you focus on values and principles, you discover the purpose behind changing your ways of working. When you focus only on mechanics, you end up just doing things without real purpose - and that's when Agile became a noun, with people trying to "be agile" instead of achieving agility. He's clear that he's not against methodologies like Scrum, XP, SAFe, or LeSS - but rather against their mindless application without understanding the essence behind them. Making Sense Before Methodology: The Four-Fit Framework "Agile for me has to be fit for purpose, fit for context, fit for practice, and I even include a fourth dimension - fit for improvement." Rather than jumping straight to methodology selection, Mario advocates for a sense-making approach. First, understand your purpose - why do you want Agile? Then examine your context - where do you live, how does your company work? Only after making sense of the gap between your current state and where the values and principles suggest you should be, should you choose a methodology. This might mean Scrum for complex environments, or perhaps a flow-based approach for more predictable work, or creating your own hybrid. The key insight is that anyone who understands Agile's principles and values is free to create their own approach - it's fundamentally about plan, do, inspect, and adapt. Learning Through Failure: Context is Paramount "I failed more often than I won. 
That teaches you - being brave enough to say I failed, I learned, I move on because I'm going to use it better next time." Mario shares pivotal learning moments from his career, including an early attempt to "agilize PRINCE2" in a command-and-control startup environment. While not an ultimate success, this battle taught him that context is paramount and cannot be ignored. You must start by understanding how things are done today - identifying what's good (keep doing it), what's bad (try to improve it), and what's ugly (eradicate it to the extent possible). This lesson shaped his next engagement at a 300-person organization, where he spent nearly five months preparing the organizational context before even introducing Scrum. He started with "simple agile" practices, then took a systems approach to the entire delivery system. A Systems Approach: From Idea to Cash "From the moment sales and marketing people get brilliant ideas they want built, until the team delivers them into production and supports them - all that is a system. You cannot have different parts finger-pointing." Mario challenges the common narrow view of software development systems. Rather than focusing only on prioritization, development, and testing, he advocates for considering everything that influences delivery - from conception through to cash. His approach involved reorganizing an entire office floor, moving away from functional silos (sales here, marketing there, development over there) to value stream-based organization around products. Everyone involved in making work happen, including security, sales, product design, and client understanding, is part of the system. In one transformation, he shifted security from being gatekeepers at the end of the line to strategic partners from day one, embedding security throughout the entire value stream. This comprehensive systems thinking happened before formal Scrum training began. Beyond the Job Description: What Can an Agile Coach Really Do? 
"I said to some people, I'm not a coach. I'm just somebody that happens to have experience. How can I give something that can help and maybe influence the system?" Mario admits he doesn't qualify as a coach by traditional standards - he has no formal coaching qualifications. His coaching approach comes from decades of Rugby experience and focuses on establishing relationships with teams, understanding where they're going, and helping them make sense of their path forward. He emphasizes adaptive intelligence - the probe, sense, respond cycle. Rather than trying to change everything at once and capsizing the boat, he advocates for challenging one behavior at a time, starting with the most important, encouraging adaptation, and probing quickly to check for impact of specific changes. His role became inviting people to think outside the box, beyond the rigidity of their training and certifications, helping individuals and teams who could then influence the broader system even when organizational change seemed impossible. The Future: Adaptive Intelligence and Making Room for Agile "I'm using a lot of adaptive intelligence these days - probe, sense, respond, learn and adapt. That sequence will take people places." Looking ahead, Mario believes the valuable core of Agile - its values and principles - will remain, but the way we apply them must evolve. He advocates for adaptive intelligence approaches that emphasize sense-making and continuous learning rather than rigid adherence to frameworks. As he enters retirement, Mario is determined to make room for Agile in his new life, seeking ways to give back to the community through his blog, his new Substack "Adaptive Ways," and by inviting others to think differently. 
He's exploring a "pay as you wish" approach to sharing his experience, recognizing that while he may not be a traditional coach or social media expert, his decades of real-world experience - with its failures and successes - holds value for those still navigating the complexity of organizational change. About Mario Aiello Retired from full-time work, Mario is an agility thinker shaped by real-world complexity, not dogma. With decades in VUCA environments, he blends strategic clarity, emotional intelligence, and creative resilience. He designs context-driven agility, guiding teams and leaders beyond frameworks toward genuine value, adaptive systems, and meaningful transformation. You can link with Mario Aiello on LinkedIn, visit his website at Agile Ways.
In this episode, we sit down with Peter Schein, co-founder and CEO of the Organizational Culture and Leadership Institute, to explore the power of asking the right questions and building open, trusting relationships. Peter, who contributed to the second and third editions of Humble Inquiry: The Gentle Art of Asking Instead of Telling (originally written by his father, Edgar), discusses how curiosity in leadership is more important than ever in today's fast-paced, innovation-driven world. Join us to discover: · How to effectively ask open-ended questions to foster trust and curiosity. · The key differences between inquiry and interrogation, and why they matter. · The transformative power of asking instead of telling. · How to navigate and overcome challenges in the modern workplace with humble inquiry. With over 30 years of leadership experience in the technology sector, including roles at Apple, SGI, and Sun Microsystems, Peter brings invaluable insights into organizational culture, leadership development, and communication. His work offers a fresh perspective on leadership, emphasizing trust and inquiry over command and control. Learn more about Peter and his work by visiting his website today! Episode also available on Apple Podcasts: https://apple.co/38oMlMr Keep up with Peter Schein socials here: Facebook: https://www.facebook.com/pschein/ X: https://x.com/scheinocli
Merriam-Webster's Word of the Day for October 10, 2025 is: obviate AHB-vee-ayt verb To obviate something (usually a need for something, or a necessity) is to anticipate and prevent it. A formal word, obviate can also mean "to make an action unnecessary." // The new medical treatment obviates the need for surgery. // Allowing workers flexibility should obviate any objections to the change. See the entry > Examples: "In 1987, a new kind of computer workstation debuted from Sun Microsystems. These workstations, as well as increasingly powerful desktop computers from IBM and Apple, obviated the need for specialized LISP machines. Within a year, the market for LISP machines evaporated." — Jeremy Kahn, Fortune, 3 Sept. 2025 Did you know? It's most often needs that get obviated. And a need that's obviated is a need that's been anticipated and prevented. That sentence may obviate your need to consult the definition again, for example. Obviate comes ultimately from the Latin adjective obviam, meaning "in the way," and obviating does often involve figuratively putting something in the way, as when an explanatory sentence placed just so blocks a need to consult a definition. (Obviam is also an ancestor of our adjective obvious.) Obviate has a number of synonyms in English, including prevent, preclude, and avert, which all can mean "to hinder or stop something." Preclude often implies that a degree of chance was involved in stopping an event, while avert always implies that a bad situation has been anticipated and prevented or deflected by the application of immediate and effective means. Obviate generally suggests the use of intelligence or forethought to ward off trouble.
Blair LaCorte is the Vice Chair of the Board of Directors at the Buck Institute for Research on Aging—the world's first biomedical research institution dedicated solely to understanding aging and age-related diseases, and the largest independent scientific institute in the Bay Area. A seasoned leader and strategist, Blair has a track record of transforming companies across five industries, leveraging his expertise in change management to drive operational alignment, scale, and market leadership. Most recently, he led AEye's $1.5B IPO, advancing the company's mission to enable safe, reliable vehicle autonomy. Prior to that, Blair served as Global President of PRG, the world's largest live event technology and services company; CEO of XOJET, one of the fastest-growing aviation companies in history; and Senior Advisor and Operating Partner at TPG, a leading private equity firm managing over $97 billion in global investments. His earlier career includes executive roles at technology innovators such as VerticalNet, Savi Technologies, Autodesk, and Sun Microsystems. Blair is an active board member and advisor to organizations spanning science, business, and education, including the Positive Coaching Alliance, the Kairos Society, the Graduate Business Foundation, and alma maters Dartmouth College and the University of Maine. His leadership has been recognized by Fast Company, Ad Age, NASA, and the ITAS “100 Most Influential Leaders in Transportation” list. His insights have been featured in Forbes, Fortune, The Wall Street Journal, and on major networks including ABC, Bloomberg, CNN, and CNBC. Holding multiple patents across hardware, software, communications, security, and defense, Blair is also an astronaut-in-training and is scheduled to fly with Virgin Galactic. Outside of his professional pursuits, he is a dedicated father to three sons and the owner of a slightly anxious Weimaraner named Bella. 
Work With Us: Arétē by RAPID Health Optimization Links: Blair LaCorte on LinkedIn Anders Varner on Instagram Doug Larson on Instagram Coach Travis Mash on Instagram
On "A Brush With Death: 5 Minutes On...," we spend 5 minutes providing listeners with quick insights into various funeral trends, products, events, organizations, and goings-on. In this episode, host, Gabe Schauf, sits down with Welton Hong, founder and CEO of Ring Ring Marketing. Welton and Gabe discuss AI's effect on search engines as well as a few things you can do to keep your website SEO working for you. Ring Ring Marketing specializes in helping funeral homes grow by making their phones ring. With a focus on generating quality leads, improving online presence, and building stronger connections with families in need, Ring Ring Marketing provides proven strategies tailored to the funeral profession. Their goal is simple: bring more at-need and pre-need families to your funeral home so you can focus on what matters most—serving them with care and compassion. Welton is a leading expert in helping funeral homes convert leads from online directly to the phone line. He's the author of the book Making Your Phone Ring with Internet Marketing for Funeral Homes and a regular contributor to NFDA's The Director magazine and several other publications. Welton has a graduate degree in Electrical Engineering from the University of Colorado at Boulder. Prior to starting Ring Ring Marketing, he was a senior technologist at R&D facilities for Intel, Sun Microsystems, and Oracle. He regularly speaks at conferences and other events for people in the death care industry. Click here to learn more about Ring Ring Marketing.
Simon Ritter has been in the IT industry for 40 years. He went from university to work on Unix in the early days, employed by AT&T and programming in the C language. In 1996, he switched gears to join Sun Microsystems, programming in Java. Years later, after the Oracle transition, he started to dig into what might be next. Outside of tech, he is married with an older son. He is a complete petrol-head, meaning he is really into cars. In fact, in the last few years, he and his son rebuilt a classic Mini from the ground up. While Simon was at Oracle, he started to crave a different opportunity, but still in the Java space. He stumbled upon a company dedicated to powering the Java platform, to make it the most secure, efficient, and trusted platform on the planet, and he, and the company, found a great fit. This is Simon's creation story at Azul. Sponsors: Full Scale, Paddle.com, Sema Software, PropelAuth, Postman, Meilisearch. Links: https://www.azul.com/ and https://www.linkedin.com/in/siritter/ Support this podcast at https://redcircle.com/code-story-insights-from-startup-tech-leaders/donations Advertising Inquiries: https://redcircle.com/brands Privacy & Opt-Out: https://redcircle.com/privacy
Join us in this episode as we dive into the art of building open and trusting relationships with Peter Schein. In 2013, Peter's father, Edgar, wrote Humble Inquiry: The Gentle Art of Asking Instead of Telling. Since then, Peter has contributed to the second and third editions of the book to bring a fresh perspective on how to see human conversational dynamics and relationships, presented in a compact, personal, and eminently practical way. Why do we need Humble Inquiry more than ever? Peter sits down to explain… Join in to discover: How to curiously ask people what's going on in their world. The key differences between inquiry and interrogation. The power of asking instead of telling. How to confidently navigate challenges inherent in today's workplace. Peter is the co-founder and CEO of the Organizational Culture and Leadership Institute in Menlo Park, California. He contributed to the 5th edition of Organizational Culture and Leadership (2017) and brings more than 30 years of experience in the technology sector. His career spans leadership roles in marketing, corporate development, and strategy at both emerging startups and global IT leaders such as Apple, SGI, and Sun Microsystems. In driving new strategies and integrating smaller ventures into larger enterprises, Peter developed a deep expertise in the organizational and cultural challenges that innovation-driven companies face. Want to learn more about Peter and his work? Click here now!
Everything except politics: Peter Filzmaier and Ali Mahlodji talk about integration. "We should finally give every person in this world the feeling: just as you are, you are good enough," says the business and political consultant. A podcast from Der Pragmaticus. The topic: Both grew up in public housing; now they stand together in the recording studio: Ali Mahlodji and Peter Filzmaier. He asks him: "What is integration?" Mahlodji answers: "The space in which everyone has the feeling: I am seen, I am heard, and it is okay as it is." This 8th episode of Alles außer Politik, the podcast series with Peter Filzmaier, is the most personal one yet. While the business and political consultant and the political scientist discuss crime among foreigners, fail the questions on the citizenship test, guess the number of foreigners living in Austria, and finally sing the Austrian national anthem, the real subject is the question of what makes life worth living. Our guest in this episode: Ali Mahlodji comes from Iran. He was two years old when he fled with his parents via Turkey to Austria. Having grown up in the Traiskirchen refugee camp, he began his career as a "stuttering school dropout," as he writes about himself on his website. He has held over 40 different jobs and owed his first position, at Sun Microsystems, to his persistence. But persistence, he says, is (unlike one's place of birth) no accident: "I was lucky, because I experienced a lot of love in my youth." He became famous through his invention of whatchado.com, a video platform where working people from every field and every level of the hierarchy tell how they came to their professions. Today Mahlodji helps individuals, organizations, institutions, and companies develop further. 
Described by the radio station Ö1 as the "philosopher of the working world," his help is very concrete: "I have never encountered a life story or a person without the potential to live their own life." The podcast Alles außer Politik with Peter Filzmaier: In "Alles außer Politik," the political scientist and political analyst Peter Filzmaier performs the tightrope act of discussing everything with scientists and experts without ever brushing against politics. Not so easy. And yet a wide field: every third Thursday of the month, conversations about everyday life, philosophy, culture, and new ideas away from the political circus. What has been discussed so far: Money with Gabriel Felbermayr; Health with Katharina Reich; The Marathon with Julia Mayer; The Country with Lisz Hirn; Democracy with Oliver Rathkolb; Security with Bruno Hofbauer; Assertiveness with Helga Rabl-Stadler. The host, Peter Filzmaier: Peter Filzmaier comes from Vienna and is the country's leading political analyst. The frequency of his appearances on ORF news programs can serve as an indicator of the intensity of a political crisis. Filzmaier then delivers, in his famous rapid-fire style, precise assessments of the state of the parties and the verdict of the voters. Otherwise, the political scientist researches and teaches at the universities of Graz and Krems, where he holds professorships in political communication and political research. He also heads the Institute for Strategic Analysis (ISA) in Vienna. Alles außer Politik is the only podcast in which he does not talk about politics. This is a podcast from Der Pragmaticus. You can also find us on Instagram, Facebook, LinkedIn, and X (Twitter).
Kohsuke Kawaguchi is a prominent software engineer, best known as the creator of Jenkins, an open-source automation server that is widely used for continuous integration and continuous delivery (CI/CD). He is currently the Co-Head of AI at leading DevOps provider CloudBees and the former Co-CEO of Launchable, an AI platform that speeds up testing to help teams expedite their continuous integration (CI) and delivery pipelines, which was acquired by CloudBees in 2024. Kawaguchi originally developed the project as a side project while working at Sun Microsystems, under the name Hudson; it was renamed Jenkins in 2011. Since then, it has become an essential tool for developers and DevOps professionals around the world, helping teams automate parts of software development, testing, and deployment. In addition to his work on Jenkins, Kawaguchi has contributed to the broader open-source community and has worked with various technologies related to software development, automation, and cloud computing. He is also known for his contributions to the world of Java and DevOps. You can find Kohsuke on the following sites: Website, X, LinkedIn, GitHub. Here are some links provided by Kohsuke: CloudBees. PLEASE SUBSCRIBE TO THE PODCAST: Spotify, Apple Podcasts, YouTube Music, Amazon Music, RSS Feed. You can check out more episodes of Coffee and Open Source on https://www.coffeeandopensource.com Coffee and Open Source is hosted by Isaac Levin
Andrew Casey remembers a moment when colleagues truly looked to him for leadership. At ServiceNow, a then‑$400 million company with little go‑to‑market infrastructure, the team faced a long list of missing elements: no functioning comp plan, no partner ecosystem, and no clear strategy for scaling sales. “Whenever people said they didn't know how,” Casey recalls, “I started raising my hand and said, I don't know either, but I know what we're going to go do… and then we're going to adjust as we go.” That willingness to lead through uncertainty became a turning point in his career.ServiceNow would grow from $400 million to $4.5 billion during his tenure, and colleagues still use the pricing and deal frameworks he created, he tells us. The experience cemented his approach: chase experiences, not titles, and transform finance into a partner that drives business outcomes.That mindset carried into his first CFO role at WalkMe in 2020, where, just two weeks in, COVID forced an immediate office shutdown. “We didn't even have a work‑from‑home policy,” he tells us. The sudden disruption forced him to navigate crisis management, team alignment, and IPO preparation simultaneously.His journey through Sun Microsystems, Symantec, Oracle, HP, ServiceNow, and Lacework sharpened his ability to guide transformation and scale. Today, as CFO of Amplitude, Casey draws on those lessons to help a smaller public company grow with discipline. Each chapter—from orchestrating 37 acquisitions at Oracle to steering turnarounds—reflects a career built on stepping into complexity, listening first, and leading change with confidence.
About 22% of adults age 65 and older reported volunteering in 2021, according to data from the U.S. Census Bureau, Current Population Survey, Volunteering and Civic Life Supplement. Around 22% of people in their 70s and 80s volunteer on a weekly basis, which is higher than the rate among older adults in their 50s. This week on the Swimming Upstream Radio Show, we'll meet two people rappelling for a cause and one who says public speaking is a path into lending a hand to people. Rappelling for a Cause: Meet Jon Hubble, age 84, and Diane Malone, both members of a senior residential community. They're choosing to raise money for an important project by rappelling (that's descending on a rope) down the front of a four-story building. They'll be back next month to tell us how it went. Bil Lewis, Toastmasters: Bil Lewis is a Computer Scientist and has worked in research and taught most of his life, most recently doing Genetics Research at MIT. He has taught at Stanford and Tufts Universities and worked for FMC, Sun Microsystems, and Nokia Data. Bil is a Past District Governor for Toastmasters (Eastern Massachusetts and Rhode Island), an Eagle Scout, a Returned Peace Corps Volunteer, and a Patriotic Citizen of the United States. Bil joined Toastmasters when his mother dragged him by the ear to a meeting after he graduated college. Bil discovered that being able to speak well in public was a very useful skill, and one he was weak in. He has improved. Using his speaking skills, Bil ran his own company for a decade, teaching and consulting in Computer Science. In 2015, Bil took on the persona of James Madison and began performing for schools, libraries, and conferences. As a District Governor, Bil got to practice his leadership skills. He had 50 direct reports and 3,000 members, with a budget of $50,000. He ran two major conferences and organized 100 contests and trainings. He learned a lot. All because of Toastmasters. 
Links: Bil Lewis on LinkedIn - https://www.linkedin.com/in/bil-lewis-4986314/ Toastmasters International - https://www.toastmasters.org Learn more about your ad choices. Visit megaphone.fm/adchoices
Despite being the son of a pharmacist turned wine professional, I did not know the purpose of an entheogen. Ross Halleck set me straight, so much so that after defining an entheogen, the value of the consumption of wine became clearer. Ross Halleck doesn't just make wine—he might just ask you to close your eyes and seek the divine within a single glass. In this episode of Wine Talks, you'll be swept past the typical vineyard tales and deep into the spiritual and mystical roots of wine itself. You'll learn how Ross stumbled into the wine trade not through family legacy or grand ambition, but with the curiosity of a seeker and a penchant for unearthing life's mysteries. Paul and Ross unravel why, for some, wine is more than a social lubricant or status symbol; it's an "entheogen"—a conduit to something sacred. Discover how the trappings of wine culture, from magazine scores to over-intellectualization, can miss the magic entirely, and why Ross is on a mission to return wine to its ancient role: bringing people together, not driving them apart. You'll step inside his West Sebastopol vineyard and hear why he believes winning top awards means little if you can't connect with people's hearts. The conversation flows from digital reviews and the democratization of taste, to the pitfalls of marketing wine as pure commerce, to modern-day plant medicine ceremonies designed to foster self-discovery, belonging, and reverence. As the layers peel back, you'll come away with a fresh perspective on wine—not just as a beverage, but as a timeless link to the sacred, the mysterious, and the collective human story. And if you've ever wondered why a certain glass makes you feel something inexplicable, or why wine alone among drinks is revered across cultures and epochs, this episode offers more than an explanation—it offers an invitation to experience the "vine intervention" for yourself. 
Links mentioned: Halleck Vineyard: halleckvineyard.com (Ross Halleck's winery; wine ceremonies are listed under events). Starbucks: starbucks.com. Hewlett-Packard (HP): hp.com. Apple: apple.com. Sun Microsystems (no longer independent; acquired by Oracle): oracle.com. Wine Spectator: winespectator.com. Robert Parker/Wine Advocate: robertparker.com. Wine of the Month Club: wineofthemonthclub.com. Michelin Guide: guide.michelin.com. Yelp: yelp.com. Foursquare: foursquare.com. Kosta Browne: kostabrowne.com. Kendall-Jackson: kj.com. Rombauer Vineyards: rombauer.com. Cheval Blanc: chateau-cheval-blanc.com. #wine #winetalks #paulkalemkiarian #rosshalleck #halleckvineyard #winepodcast #wineindustry #pinotnoir #sonomacounty #wineandspirit #wineculture #entheogen #wineexperience #winelover #winecommunity #wineclub #winemarketing #winepassion #spiritualwine #wineceremony
Peter A. Schein is the co-founder and CEO of OCLI.org in Menlo Park, California. He is a contributing author to the 5th edition of Organizational Culture and Leadership (2017). With Edgar H. Schein he is co-author of Humble Leadership (2018, 2nd ed. 2023), The Corporate Culture Survival Guide, 3rd ed. (2019), Humble Inquiry (2nd ed. 2021 and 3rd ed. 2025), and Career Anchors Reimagined (2023). Peter's work brings 30 years of technology industry experience in marketing, corporate development, and strategy, at large and small IT companies including Apple, Sun Microsystems, and numerous start-ups. While forging new strategies and merging smaller entities into a larger company, Peter developed a keen focus on the organizational development challenges faced by innovation-driven enterprises. Peter was educated at Stanford University (BA in social anthropology with honors and distinction), Northwestern University (Kellogg MBA), and the USC Marshall School of Business (HCEO Certificate). Link to claim CME credit: https://www.surveymonkey.com/r/3DXCFW3 CME credit is available for up to 3 years after the stated release date. Contact CEOD@bmhcc.org if you have any questions about claiming credit.
Back in 1994, Peter Deutsch and his colleagues at Sun Microsystems identified what they described as the "eight fallacies of distributed computing" — flawed assumptions that often get made when teams move from monolithic to distributed software architectures. In recent years, software architecture experts and regular writing partners Neal Ford and Mark Richards have identified a further three new fallacies of distributed computing: versioning is easy; compensating updates always work; and observability is optional. In this episode of the Technology Podcast, Neal and Mark join host Prem Chandrasekaran to talk through these three new fallacies, before digging deeper into other important issues in software architecture, including modular monoliths and governing architectural characteristics. Listen for a fresh perspective on software architecture and to explore key ideas shaping the discipline in 2025. Learn more about the second edition of Neal and Mark's Fundamentals of Software Architecture: https://www.oreilly.com/library/view/fundamentals-of-software/9781098175504/
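The "compensating updates always work" fallacy that Neal and Mark discuss can be made concrete with a small sketch (all names below are invented for illustration, not from the episode): in a saga-style workflow, a failed step is undone by compensating earlier steps, but the compensating update itself can fail, stranding the system in an inconsistent state.

```python
# Hypothetical sketch: a two-step "order" saga whose compensation can fail.
class PaymentService:
    def __init__(self, refunds_available=True):
        self.charged = []                    # order ids we have charged for
        self.refunds_available = refunds_available

    def charge(self, order_id):
        self.charged.append(order_id)

    def refund(self, order_id):
        # The compensating update: it can itself fail (service down,
        # funds already settled, network partition, ...).
        if not self.refunds_available:
            raise RuntimeError("refund service unavailable")
        self.charged.remove(order_id)

def place_order(payments, order_id, shipping_ok):
    payments.charge(order_id)                # step 1: charge the customer
    try:
        if not shipping_ok:                  # step 2: arrange shipping
            raise RuntimeError("shipping failed")
        return "confirmed"
    except RuntimeError:
        try:
            payments.refund(order_id)        # compensate step 1
            return "rolled back"
        except RuntimeError:
            # Charged, never shipped, refund failed: the fallacy in action.
            return "inconsistent"

payments = PaymentService(refunds_available=False)
print(place_order(payments, "o-1", shipping_ok=False))  # inconsistent
print(payments.charged)  # ['o-1'] -- the charge outlives the failed compensation
```

The point of the sketch is that a saga's "undo" path needs the same failure handling (retries, dead-letter queues, manual reconciliation) as its forward path, which is exactly what the fallacy tempts teams to skip.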
Sun Microsystems grew continuously: every six months from 1982 to 1987, its workstation shipments doubled. In 1985 the company launched the Sun 3, based on the new Motorola MC68020 processor, and introduced the Network File System (NFS) standard, which gained wide acceptance; by 1987 Sun had finally overtaken its biggest rival, Apollo Computer. Sun went public (IPO) in March 1986 with revenue of $210 million, raising $45 million, the largest technology IPO in three years. The company's revenue grew fivefold, from $210 million in 1986 to $1.1 billion in 1988. Have a listen, and don't forget to hit Follow on my Geek Forever's Podcast channel. #SunMicrosystems #SiliconValley #StartupThailand #TechnologyBusiness #BusinessLessons #TechStartup #CaseStudy #BusinessCaseStudy #HistoryOfTechnology #TechHistory #JavaProgramming #BusinessLesson #DigitalTransformation #TechCompany #DigitalDisruption #BusinessFailures #HistoryOfComputing #BusinessFailure #Innovation #StanfordUniversity #geekstory #geekforeverpodcast
In this episode, Avanish and Andrew discuss:
- Andrew's journey as an "operational CFO" from Sun Microsystems through ServiceNow, WalkMe, Lacework, and now Amplitude, being part of the team that built ServiceNow from $400M to $4.5B ARR
- Why CFOs must "play chess, not checkers" - thinking several moves ahead about decision implications and making strategic investment pivots for anticipated future growth
- The critical difference between multi-product and platform strategies: true platforms have definite customer adoption journeys where products aren't sold independently
- Recognizing platform readiness signals: when customers organically create their own workflows and use cases you never conceived, like hospitals using Amplitude for emergency room optimization
- Building effective teams by mixing "veterans with rookies" to solve problems rather than just "admire problems," and driving focused execution around single key investments
- The "fair exchange of value" approach to pricing and partnerships that emphasizes customer adoption, transparency, and simplicity over complexity

About Avanish Sahai:
Avanish Sahai is a Tidemark Fellow and served as a Board Member of Hubspot from 2018 to 2023; he currently serves on the boards of Birdie.ai, Flywl.com and Meta.com.br as well as a few non-profits and educational boards. Previously, Avanish served as the vice president, ISV and Apps partner ecosystem of Google from 2019 until 2021. From 2016 to 2019, he served as the global vice president, ISV and Technology alliances at ServiceNow. From 2014 to 2015, he was the senior vice president and chief product officer at Demandbase.
Prior to Demandbase, Avanish built and led the Appexchange platform ecosystem team at Salesforce, and was an executive at Oracle and McKinsey & Company, as well as various early-to-mid stage startups in Silicon Valley.

About Andrew Casey:
Andrew Casey is Chief Financial Officer at Amplitude, where he leads Amplitude's General & Administrative organization, which includes finance, accounting, and legal. With more than 25 years of enterprise software experience, Casey brings deep financial expertise combined with extensive go-to-market strategy and business operations experience. Casey joined Amplitude from Lacework, where he served as CFO and oversaw its successful acquisition by Fortinet. Prior to that, he was the CFO of WalkMe, where he led its Initial Public Offering (IPO) and transformed its enterprise sales motion. Casey's career also includes senior finance roles with ServiceNow, Hewlett-Packard, NortonLifeLock Inc. (formerly Symantec), Oracle, and Sun Microsystems.

About Tidemark:
Tidemark is a venture capital firm, foundation, and community built to serve category-leading technology companies as they scale. Tidemark was founded in 2021 by David Yuan, who has been investing, advising, and building technology companies for over 20 years. Learn more at www.tidemarkcap.com.

Links:
Follow our guest, Andrew Casey
Follow our host, Avanish Sahai
Learn more about Tidemark
Deepak Bhootra is the CEO of Jabulani Consulting, with over 19 years of experience in the tech industry, including significant roles at Hewlett Packard and Sun Microsystems. Deepak has a deep understanding of pricing strategies and their impact on sales performance. He is passionate about helping organizations navigate the complexities of pricing and sales operations. In this episode, Deepak shares his journey into pricing and sales, discussing the cultural nuances of negotiation in India and how they influence pricing strategies. Together, they explore the challenges salespeople face with pricing, the importance of understanding value from the customer's perspective, and how AI can play a role in pricing strategies. Why you have to check out today's podcast: Discover the common pitfalls salespeople face when discussing pricing. Explore the importance of aligning pricing with customer value and the psychological aspects of pricing. Learn how AI can enhance pricing strategies and sales effectiveness. “Pricing is something that companies use to control sales behavior. Salespeople don't like to be controlled.” – Deepak Bhootra Topics Covered: 01:46 – Deepak introduces himself and shares his background in pricing. 03:10 – The cultural significance of negotiation in India and its impact on pricing. 07:44 – The relationship between sales and pricing and the challenges salespeople face. 14:21 – Discussion on the emotional aspects of pricing and how they affect sales decisions. 17:12 – Insights into the importance of understanding value from the customer's perspective. 23:09 – The role of AI in enhancing pricing strategies and sales effectiveness. 30:35 – Deepak's pricing advice. 33:18 – Connect with Deepak. Key Takeaways: “Salespeople need to understand the value of pricing and how it relates to customer perception.” – Deepak Bhootra “Value is in the eye of the beholder. 
Understand what the customer values before discussing pricing." – Deepak Bhootra "When you ask a budget question right up front, you're actually setting yourself up for a pricing discussion." – Deepak Bhootra "Pricing is one of those conversations where you have complete control of your CRM updates, you have complete control over your forecast, your relationship, but you do not have control over the price because someone else dictates the price." – Deepak Bhootra "When you are looking at price, giving a discount is the easiest lever to pull right up front. And typically (salespeople) they do it because they can also bamboozle you with a lot of stuff." – Deepak Bhootra

People/Resources Mentioned:
Jabulani Consulting: https://jabulaniconsulting.com
Amartya Sen: https://en.wikipedia.org/wiki/Amartya_Sen

Connect with Deepak Bhootra:
LinkedIn: https://www.linkedin.com/in/deepakbhootra/
Email: deepak@jabulaniconsulting.com

Connect with Mark Stiving:
LinkedIn: https://www.linkedin.com/in/stiving/
Email: mark@impactpricing.com
An airhacks.fm conversation with Colt McNealy (@coltmcnealy) about: first computing experience with Sun workstations and network computing, background in hockey and other sports, using system76 Linux laptops for development, starting programming in high school with Java and later learning C, Fortran, assembly, C++ and Python, working at a real estate company with kubernetes and Kafka, the genesis of LittleHorse from experiencing challenges with distributed microservices and workflow management, LittleHorse as an open source workflow orchestration engine using Kafka as a commit log rather than a message queue, building a custom distributed database optimized for workflow orchestration, the recent move to fully open source licensing, comparison with AWS Step Functions but with more capabilities and open source benefits, using RocksDB and Kafka Streams for the underlying implementation, performance metrics of 12-40ms latency between tasks and hundreds of tasks per second, the multi-tenant architecture allowing for serverless offerings, integration with Kafka for event-driven architectures, the distinction between orchestration and choreography in distributed systems, using Java 21 with benefits from virtual threads and generational garbage collection, plans for Java 25 adoption, the naming story behind "LittleHorse" and its competition with MuleSoft, the Sun Microsystems legacy and innovation culture, recent adoption of Quarkus for some components, the "Know Your Customer" flow as the Hello World example for LittleHorse, the importance of observability and durability in workflow management, plans for serverless offerings and multi-tenant architecture, the balance between open source core and commercial offerings Colt McNealy on twitter: @coltmcnealy
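One distinction from the list above, Kafka as a commit log rather than a message queue, fits in a few lines of code. The in-memory Python sketch below (a hypothetical toy API, not the real Kafka or LittleHorse clients) contrasts destructive queue consumption with offset-based log reads that can be rewound and replayed.

```python
# Toy contrast between a message queue and a commit log.
# Hypothetical in-memory API; real Kafka semantics are far richer.
from collections import deque

class Queue:
    def __init__(self):
        self._q = deque()
    def send(self, msg):
        self._q.append(msg)
    def receive(self):
        return self._q.popleft()  # consuming removes the message for everyone

class CommitLog:
    def __init__(self):
        self._log = []     # append-only; nothing is ever deleted
        self.offsets = {}  # per-consumer read positions
    def append(self, record):
        self._log.append(record)
    def poll(self, consumer):
        pos = self.offsets.get(consumer, 0)
        batch = self._log[pos:]
        self.offsets[consumer] = len(self._log)
        return batch
    def replay(self, consumer, offset=0):
        self.offsets[consumer] = offset  # rewind: rebuild state from history

log = CommitLog()
for event in ["task-scheduled", "task-started", "task-completed"]:
    log.append(event)

print(log.poll("engine"))  # all three events, in order
log.replay("engine")       # a restarted engine can re-read the whole history
print(log.poll("engine"))  # the same three events again
```

Replayability is what lets a log double as the source of truth for workflow state, rather than a transient delivery channel.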
Johnson Yan, a trailblazer in real-time 3D graphics, joins the podcast to recount his remarkable journey from the earliest days of computer graphics and flight simulation. Starting in the late 1970s, Johnson tackled fundamental challenges like texture mapping, anti-aliasing, translucency, and scalability, long before today's GPU technology emerged. He shares insights into his pioneering work at Singer-Link, where he developed flight simulators utilizing vector graphics and early raster technology, laying the groundwork for both military training and future advancements in real-time visualization. In this episode, Johnson also discusses his transition into the commercial sector, detailing his impactful roles at companies like Sun Microsystems and Oak Technology. He explores his efforts to develop affordable 3D graphics chips, significantly enhancing consumer PCs' capabilities. Reflecting on industry milestones such as the rise of NVIDIA, the evolution from rasterization to ray tracing, and the integration of AI into modern graphics, Johnson provides unique historical context and personal anecdotes. His firsthand perspective offers a rare glimpse into the technological evolution of real-time graphics spanning nearly half a century.
Welcome to episode 276 of the Grow Your Law Firm podcast, hosted by Ken Hardison. In this episode, Ken sits down with Hamid Kohan, founder of Law Practice AI. Hamid is an experienced entrepreneur with a diverse background in technology and law. He earned his engineering degree at 17 from Chico State University and was quickly recruited to Silicon Valley, working for prominent companies. By 21, he completed an MBA in business marketing, propelling his career in business and technology. Hamid was integral in developing the world's first laptop at Grid Systems and later worked at Sun Microsystems, helping the company grow from 200 to 13,000 employees. He also held senior positions at Hitachi and Tandem Computers, directing business and technology development. In 1999, Hamid became Division President of Emblazed Technology, where he led the company to 300% growth and a $1 billion valuation in just one year. In 2004, he co-founded CAPLUCK Inc., launching Cap60, a data management system provider recognized as the largest service provider for nonprofits in the U.S. In 2016, Hamid entered the legal field by founding Law Practice AI (formerly Legal Soft Inc.), offering practice management solutions for law firms. Under his leadership, Law Practice AI grew rapidly, helping firms expand across the U.S. Hamid's expertise in law firm management has made him a sought-after speaker and author of three books, including How to Scale Your Stupid Law Firm. His practical approach has made him a respected figure in legal practice management.

What you'll learn about in this episode:
1. Client Follow-up and Communication:
- Law Practice AI streamlines client follow-up processes through automated calls, texts, and emails, allowing for personalized sequences and efficient communication.
- The AI technology collects and analyzes documents in real-time, providing immediate feedback and facilitating document collection during client interactions.
2. Document Summarization and Organization:
- Law Practice AI offers document summarization and analysis, enabling the rapid processing of large volumes of documents, such as medical records, in under five minutes.
- The platform allows for easy organization and filing of documents, enhancing client file management and workflow efficiency.
3. Centralized AI Solutions for Legal Operations:
- Centralized AI solutions like Law Practice AI aim to simplify legal operations by integrating with CRMs to automate data management, calendaring, and client interactions.
- Virtual staff integration alongside AI tools presents a strategic approach to scaling law firms efficiently and cost-effectively.
4. Simplified Tech Environment:
- Law firms benefit from a centralized tech environment provided by platforms like Law Practice AI, avoiding the need to navigate multiple systems for different tasks.
- Future versions of Law Practice AI feature API integrations with CRMs to automate matter opening, data storage, calendaring, and flag-setting processes.
5. Intake AI and Client Communication:
- Intake AI technology addresses challenges in client communication by providing a seamless experience, including quick escalation to live agents for high-value cases.
- Law Practice AI differentiates itself by offering personalized and efficient intake processes tailored to the legal industry's unique needs and complexities.

Resources:
Website http://www.mylawfirm.ai/
Facebook https://www.facebook.com/people/Law-Practice-AI/61556510846445/
Twitter https://x.com/LawPracticeAI
LinkedIn https://www.linkedin.com/company/law-practice-ai/

Additional Resources:
https://www.pilmma.org/aiworkshop
https://www.pilmma.org/the-mastermind-effect
https://www.pilmma.org/resources
https://www.pilmma.org/mastermind
Sponsored By AdCirrus ERP, your trusted partner for cloud ERP solutions. Learn more at adcirruserp.com.

Meet Vivek Joshi
Vivek is the founder and CEO of Entytle, a provider of Installed Base Intelligence solutions to Original Equipment Manufacturers. He has extensive leadership experience in various industries, spanning diversified industrial manufacturing, healthcare, high technology and private equity. He previously was founder and CEO of LumaSense Technologies Inc., an Operating Partner at Shah Capital Partners, and Senior Vice President of Marketing for Sun Services, a $3.6 billion division of Sun Microsystems. He also served at Webvan as Vice-President of Program Operation; at GE Transportation as General Manager, Off Highway/Transit Systems; at GE Corporate as Manager of Corporate Initiatives; at Booz Allen & Hamilton as a Management Consultant; and at Johnson & Johnson in an operations role. Vivek has an M.S. in Chemical Engineering and an M.B.A. from the Darden School of Business at the University of Virginia, Charlottesville and a B.Tech in Chemical Engineering from IIT, Mumbai.

Connect with Vivek!
Entytle
vivek.joshi@entytle.com
LinkedIn
Aftermarket Champions Podcast

Links
Kirin Holdings will begin online sales of "Electric Salt Spoon", a spoon that uses electricity to enhance salty and umami taste

Highlights
00:00 Fun Team Question: What's Your Career Theme Song?
01:55 Introducing Our Guest: Vivek Joshi
04:58 Vivek's Journey in Manufacturing
08:50 The Impact of Key Mentors
11:10 Why Entrepreneurship?
13:03 The Importance of Aftermarket Services
16:28 I Just Learned That: Fascinating Insights
21:31 Addressing the Labor Crisis in Manufacturing
24:49 Conclusion and Contact Information

Connect with the Broads!
Connect with Lori on LinkedIn and visit www.keystoneclick.com for your strategic digital marketing needs!
Connect with Kris on LinkedIn and visit www.genalpha.com for OEM and aftermarket digital solutions!
Connect with Erin on LinkedIn!
In this episode, recorded at the 2025 Abundance Summit, Vinod, Brett, & Peter dive into a Q&A on the future of humanoid robots, transport, and more. Recorded on March 11th, 2025 Views are my own thoughts; not Financial, Medical, or Legal Advice. Vinod Khosla is an Indian-American entrepreneur and venture capitalist. He co-founded Sun Microsystems in 1982, serving as its first chairman and CEO. In 2004, he founded Khosla Ventures, focusing on technology and social impact investments. As of January 2025, his net worth is estimated at $9.2 billion. He is known for his bold bets on transformative innovations in fields like AI, robotics, healthcare, and clean energy. With a deep belief in abundance and the power of technology to solve global challenges, Khosla continues to shape the future through visionary investing. Brett Adcock is an American technology entrepreneur and the founder of Figure, an AI robotics company developing general-purpose humanoid robots designed to perform human-like tasks in both industrial and home settings. In 2023, he also founded Cover, an AI security company focused on building weapon detection systems for schools. Previously, Brett founded Archer Aviation, an urban air mobility company that went public at a valuation of $2.7 billion, and Vettery, a machine learning-based talent marketplace acquired for $110 million. Learn about Figure: https://www.figure.ai/ Learn more about Vinod: https://www.khoslaventures.com/ Learn more about Abundance360: https://bit.ly/ABUNDANCE360 For free access to the Abundance Summit Summary click: diamandis.com/breakthroughs ____________ I only endorse products and services I personally use. 
To see what they are, please support this podcast by checking out our sponsors: Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/ AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter Get 15% off OneSkin with the code PETER at https://www.oneskin.co/ #oneskinpod ____________ I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Blog _____________ Connect With Peter: Twitter Instagram Youtube Moonshots
In this episode, recorded at the 2025 Abundance Summit, Vinod Khosla explores how AI will make expertise essentially free, why robots could surpass the auto industry, and how technologies like geothermal and fusion will reshape our energy landscape. Recorded on March 11th, 2025 Views are my own thoughts; not Financial, Medical, or Legal Advice. Vinod Khosla is an Indian-American entrepreneur and venture capitalist. He co-founded Sun Microsystems in 1982, serving as its first chairman and CEO. In 2004, he founded Khosla Ventures, focusing on technology and social impact investments. As of January 2025, his net worth is estimated at $9.2 billion. He is known for his bold bets on transformative innovations in fields like AI, robotics, healthcare, and clean energy. With a deep belief in abundance and the power of technology to solve global challenges, Khosla continues to shape the future through visionary investing. Learn more about Vinod: https://www.khoslaventures.com/ Learn more about Abundance360: https://bit.ly/ABUNDANCE360 For free access to the Abundance Summit Summary click: diamandis.com/breakthroughs ____________ I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/ AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter Get 15% off OneSkin with the code PETER at https://www.oneskin.co/ #oneskinpod ____________ I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Blog _____________ Connect With Peter: Twitter Instagram Youtube Moonshots
In this episode of The Eric Ries Show, I sit down with Marten Mickos, a serial tech CEO who has been at the forefront of some of the most transformative moments in open-source technology. From leading MySQL through its groundbreaking journey to guiding HackerOne as a pioneering bug bounty platform, Marten's career is a masterclass in building innovative, trust-driven organizations. Our wide-ranging conversation explores Marten's remarkable journey through tech leadership, touching on his experiences building game-changing companies and, more recently, his work coaching emerging CEOs. We dive deep into the world of open source, company culture, and the nuanced art of leadership.

In our conversation today, we talk about the following topics:
• How MySQL revolutionized open-source databases and became Facebook's database
• The strategic decision to make MySQL open source and leverage Linux distributions
• The art of building a beloved open-source project while creating a profitable business model
• How a lawsuit solidified MySQL's position in the open-source database market
• The role of transparency and direct feedback in building organizational trust
• Why Marten was drawn to HackerOne's disruptive approach to cybersecurity
• Marten's transition to coaching new CEOs
• Marten's unique "contrast framework" for making complex decisions
• And much more!

Brought to you by:
• Wilson Sonsini – Wilson Sonsini is the innovation economy's law firm. Learn more.
• Gusto – Gusto is an easy payroll and benefits software built for small businesses.
Get 3 months free.

Where to find Marten Mickos:
• LinkedIn: https://www.linkedin.com/in/martenmickos/
• Bluesky: https://bsky.app/profile/martenmickos.bsky.social

Where to find Eric:
• Newsletter: https://ericries.carrd.co/
• Podcast: https://ericriesshow.com/
• YouTube: https://www.youtube.com/@theericriesshow

In This Episode We Cover:
(00:00) Intro
(03:15) The first time Eric used MySQL
(07:10) The origins of MySQL and how Marten got involved
(13:22) Why MySQL pivoted to open source to leverage the power of Linux distros
(17:03) Open source vs. closed
(18:56) Building profitable open-source companies
(24:52) The fearless company culture at MySQL and the Progress lawsuit
(29:30) The value of not cutting any corners
(33:35) How a dolphin became part of the MySQL logo
(35:55) What it was like to build a company of true believers
(38:47) Marten's management approach emphasizes kindness and direct feedback
(42:12) Marten's hiring philosophy
(45:14) Why MySQL sold to Sun Microsystems and tried to avoid Oracle
(50:24) How Oracle has made MySQL even better
(52:22) Why Marten decided to lead at HackerOne
(55:41) An overview of HackerOne
(59:31) How HackerOne got started and landed the Department of Defense contract
(1:03:19) The trust-building power of transparency
(1:08:30) Marten's successor and the state of HackerOne now
(1:09:23) Marten's work coaching CEOs
(1:14:20) Common issues CEOs struggle with
(1:16:45) Marten's contrast framework
(1:26:12) The book of Finnish poetry that inspired Marten's love of polarities

You can find the transcript and references at https://www.ericriesshow.com/

Production and marketing by https://penname.co/. Eric may be an investor in the companies discussed.
An airhacks.fm conversation with Volker Simonis (@volker_simonis) about: early computing experiences with Schneider CPC (Amstrad in UK) with Z80 CPU, CP/M operating system as an add-on that provided a real file system, programming in BASIC and Turbo Pascal on early computers, discussion about gaming versus programming interests, using a 9-pin needle printer for school work, programming on pocket computers with BASIC in school, memories of Digital Research's CP/M and DR-DOS competing with MS-DOS, HiMEM memory management in early operating systems, programming in Logo language with turtle graphics and fractals, fascination with Lindenmayer systems (L-systems) for simulating biological growth patterns, interest in biology and carnivorous plants, transition to PCs with floppy disk drives, using SGI Iris workstations at university with IRIX operating system, early experiences with Linux installed from floppy disks, challenges of configuring X Window System, programming graphics on interlaced monitors, early work with HP using Tcl/Tk and Python around 1993, first experiences with Java around version 0.8/0.9, attraction to Java's platform-independent networking and graphics capabilities, using Blackdown Java for Linux created by Johan Vos, freelance work creating Java applets for accessing databases of technical standards, PhD work creating software for analyzing parallel text corpora in multiple languages, developing internationalization and XML capabilities in Java Swing applications, career at Sun Microsystems porting MaxDB to Solaris, transition to SAP to work on JVM development, Adabas and MaxDB, reflections on ABAP programming language at SAP and its database-centric nature Volker Simonis on twitter: @volker_simonis
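As a small aside on one topic above: an L-system generates growth patterns by rewriting every symbol of a string in parallel, generation by generation. A minimal Python sketch, using Lindenmayer's classic algae rules (an assumption for illustration, not rules discussed in the conversation):

```python
# Minimal Lindenmayer-system (L-system) sketch: repeated parallel rewriting
# of symbols, the mechanism used to model plant growth.

def lsystem(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        # rewrite every symbol simultaneously each generation;
        # symbols without a rule are copied unchanged
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae model: A -> AB, B -> A
rules = {"A": "AB", "B": "A"}
for n in range(5):
    print(n, lsystem("A", rules, n))
# 0 A
# 1 AB
# 2 ABA
# 3 ABAAB
# 4 ABAABABA
```

The string lengths follow the Fibonacci sequence; richer rule sets drive turtle graphics to draw the fractal plant shapes mentioned above.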
What to say when Steve Jobs threatens to sue you. Original text by Jonathan Schwartz. More about Lighthouse Design's Concurrence courtesy of the Apple Wikia instance. Sun famously sued Microsoft over their incompatible Java implementation variant in 1997. Microsoft settled by paying Sun a bunch of money. Please enjoy this Flash animation shown at JavaOne 2004 retelling the story. Steve Jobs quotes from Triumph of the Nerds, WWDC 1997 Q&A, and Macworld San Francisco 2003. In 1999, Sun Microsystems acquired StarDivision and its StarOffice product, which Sun open sourced as OpenOffice.org. After some entirely predictable grief from Oracle, the community forked the project and delivered what we know today as LibreOffice. Apple adopted Sun's dynamic system-wide tracing and performance profiling framework DTrace, known as Instruments in Xcode's collection of tools. Apple announced Snow Leopard Server would ship with Sun's ZFS but that ultimately never happened for licensing and patent reasons. Whether Sun's soon-to-be acquisition by Oracle and the Steve Jobs/Larry Ellison relationship would have helped or hindered this, we'll never know. Either way, Apple, I know you're reading this and I'd like APFS to checksum my data blocks too, not just the metadata. Thank you. Jonathan Schwartz and Scott McNealy quotes from Sun's NC03-Q3 (2003) keynote and JavaOne 2004. See Project Looking Glass in action.
In this episode of the IoT For All Podcast, Alper Yegin, President and CEO of the LoRa Alliance, joins Ryan Chacon to discuss the state of LoRaWAN in 2025. The conversation covers LoRaWAN adoption, LoRaWAN use cases, the role of satellite IoT, edge, and AI, LoRaWAN certification and interoperability, misconceptions about LoRaWAN, and the future of LoRaWAN.Alper Yegin is the President and CEO of the LoRa Alliance. He oversees the organization's strategic direction and supports the development and global adoption of LoRaWAN, a key standard for low-power wide-area networks (LPWAN) in the Internet of Things (IoT). Before becoming CEO, he chaired the LoRa Alliance Technical Committee for eight years and served as Vice-Chair of the board for seven years.With over 25 years of experience in the IoT, mobile, and wireless communication industries, Yegin has held senior roles, including CTO at Actility, and various positions at Samsung Electronics, DoCoMo, and Sun Microsystems. He has contributed to global standards development in organizations such as IETF, 3GPP, ETSI, Zigbee Alliance, WiMAX Forum, and IPv6 Forum. Yegin holds 16 patents and has authored numerous technical standards and papers.The LoRa Alliance is an open, non-profit association that has grown into one of the largest and fastest-growing alliances in the technology industry since its inception in 2015. 
Its members work closely together and share knowledge to develop and disseminate the LoRaWAN standard, the de facto global standard for secure, quality IoT LPWAN bearer connectivity.

Discover more about IoT at https://www.iotforall.com
Find IoT solutions: https://marketplace.iotforall.com
More about LoRa Alliance: https://lora-alliance.org
Connect with Alper: https://www.linkedin.com/in/alperyegin/

(00:00) Intro
(00:18) Alper Yegin and LoRa Alliance
(02:58) Current state of LoRaWAN adoption
(04:17) The role of LoRaWAN in the IoT ecosystem
(07:19) Certification and interoperability
(09:48) LoRaWAN use cases
(15:03) Impact of AI and edge computing
(18:09) Misconceptions about LoRaWAN
(21:14) Future of LoRaWAN and challenges
(24:14) Upcoming initiatives and events

Subscribe to the Channel: https://bit.ly/2NlcEwm
Join Our Newsletter: https://newsletter.iotforall.com
Follow Us on Social: https://linktr.ee/iot4all
From assembling elite teams at Ford Motor Company and Sun Microsystems to navigating the high standards of Bridgewater Associates, Steve Fitzgerald has honed the craft of leadership—yet every other day, you'll find him carving through fresh powder in the Rocky Mountains. As a seasoned HR leader, startup advisor, and board member, Steve has spent three decades weaving together people and profits, championing both efficient business outcomes and more fulfilling personal lives. In this episode, Ryan and Steve dive into the principles that have shaped Steve's unconventional career path, such as strategic leaps of faith and walking away from corporate safety in pursuit of authentic balance. They explore Ray Dalio's “pain plus reflection equals progress” outlook, offering tangible takeaways on how to welcome tough feedback, develop a growth mindset, and build teams that thrive on continuous practice.
If you're in SF, join us tomorrow for a fun meetup at CodeGen Night! If you're in NYC, join us for AI Engineer Summit! The Agent Engineering track is now sold out, but 25 tickets remain for AI Leadership and 5 tickets for the workshops. You can see the full schedule of speakers and workshops at https://ai.engineer!

It's exceedingly hard to introduce someone like Bret Taylor. We could recite his Wikipedia page, or his extensive work history through Silicon Valley's greatest companies, but everyone else already does that. As a podcast by AI engineers for AI engineers, we had the opportunity to do something a little different. We wanted to dig into what Bret sees from his vantage point at the top of our industry for the last 2 decades, and how that explains the rise of the AI Architect at Sierra, the leading conversational AI/CX platform.

“Across our customer base, we are seeing a new role emerge - the role of the AI architect. These leaders are responsible for helping define, manage and evolve their company's AI agent over time. They come from a variety of both technical and business backgrounds, and we think that every company will have one or many AI architects managing their AI agent and related experience.”

In our conversation, Bret Taylor confirms the Paul Buchheit legend that he rewrote Google Maps in a weekend, armed with only the help of a then-nascent Google Closure Compiler and no other modern tooling. But what we find remarkable is that he was the PM of Maps, not an engineer, though of course he still identifies as one. We find this theme recurring throughout Bret's career and worldview.
We think it is plain as day that AI leadership will have to be hands-on and technical, especially when the ground is shifting as quickly as it is today:“There's a lot of power in combining product and engineering into as few people as possible… few great things have been created by committee.”“If engineering is an order taking organization for product you can sometimes make meaningful things, but rarely will you create extremely well crafted breakthrough products. Those tend to be small teams who deeply understand the customer need that they're solving, who have a maniacal focus on outcomes.”“And I think the reason why is if you look at like software as a service five years ago, maybe you can have a separation of product and engineering because most software as a service created five years ago. I wouldn't say there's like a lot of technological breakthroughs required for most business applications. And if you're making expense reporting software or whatever, it's useful… You kind of know how databases work, how to build auto scaling with your AWS cluster, whatever, you know, it's just, you're just applying best practices to yet another problem. "When you have areas like the early days of mobile development or the early days of interactive web applications, which I think Google Maps and Gmail represent, or now AI agents, you're in this constant conversation with what the requirements of your customers and stakeholders are and all the different people interacting with it and the capabilities of the technology. And it's almost impossible to specify the requirements of a product when you're not sure of the limitations of the technology itself.”This is the first time the difference between technical leadership for “normal” software and for “AI” software was articulated this clearly for us, and we'll be thinking a lot about this going forward. 
We left a lot of nuggets in the conversation, so we hope you'll just dive in with us (and thank Bret for joining the pod!)

Timestamps

* 00:00:02 Introductions and Bret Taylor's background
* 00:01:23 Bret's experience at Stanford and the dot-com era
* 00:04:04 The story of rewriting Google Maps backend
* 00:11:06 Early days of interactive web applications at Google
* 00:15:26 Discussion on product management and engineering roles
* 00:21:00 AI and the future of software development
* 00:26:42 Bret's approach to identifying customer needs and building AI companies
* 00:32:09 The evolution of business models in the AI era
* 00:41:00 The future of programming languages and software development
* 00:49:38 Challenges in precisely communicating human intent to machines
* 00:56:44 Discussion on Artificial General Intelligence (AGI) and its impact
* 01:08:51 The future of agent-to-agent communication
* 01:14:03 Bret's involvement in the OpenAI leadership crisis
* 01:22:11 OpenAI's relationship with Microsoft
* 01:23:23 OpenAI's mission and priorities
* 01:27:40 Bret's guiding principles for career choices
* 01:29:12 Brief discussion on pasta-making
* 01:30:47 How Bret keeps up with AI developments
* 01:32:15 Exciting research directions in AI
* 01:35:19 Closing remarks and hiring at Sierra

Transcript

[00:02:05] Introduction and Guest Welcome[00:02:05] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host swyx, founder of smol.ai.[00:02:17] swyx: Hey, and today we're super excited to have Bret Taylor join us. Welcome. Thanks for having me. It's a little unreal to have you in the studio.[00:02:25] swyx: I've read about you so much over the years, like even before OpenAI, effectively. I mean, I use Google Maps to get here. So like, thank you for everything that you've done.
Like, like your story history, like, you know, I think people can find out what your greatest hits have been.[00:02:40] Bret Taylor's Early Career and Education[00:02:40] swyx: How do you usually like to introduce yourself when, you know, you talk about, you summarize your career, like, how do you look at yourself?[00:02:47] Bret: Yeah, it's a great question. You know, we, before we went on the mics here, we're talking about the audience for this podcast being more engineering. And I do think depending on the audience, I'll introduce myself differently because I've had a lot of [00:03:00] corporate and board roles. I probably self identify as an engineer more than anything else though.[00:03:04] Bret: So even when I was at Salesforce, I was coding on the weekends. So I think of myself as an engineer and then all the roles that I do in my career sort of start with that just because I do feel like engineering is sort of a mindset and how I approach most of my life. So I'm an engineer first and that's how I describe myself.[00:03:24] Bret: You majored in computer[00:03:25] swyx: science, like 1998. And, and I was high[00:03:28] Bret: school, actually my, my college degree was '02 undergrad, '03 masters. Right. That old.[00:03:33] swyx: Yeah. I mean, no, I was going, I was going like 1998 to 2003, but like engineering wasn't as, wasn't a thing back then. Like we didn't have the title of senior engineer, you know, kind of like, it was just: you were a programmer, you were a developer, maybe. What was it like in Stanford? Like, what was that feeling like? You know, was it, were you feeling like on the cusp of a great computer revolution? Or was it just like a niche, you know, interest at the time?[00:03:57] Stanford and the Dot-Com Bubble[00:03:57] Bret: Well, I was at Stanford, as you said, from 1998 to [00:04:00] 2002.[00:04:02] Bret: 1998 was near the peak of the dot com bubble. So.
This is back in the day where most people, they're coding in the computer lab, just because there were these Sun Microsystems Unix boxes there that most of us had to do our assignments on. And every single day there was a .com, like, buying pizza for everybody.[00:04:20] Bret: I didn't have to, like, I got free food, like, my first two years of university, and then the dot com bubble burst in the middle of my college career. And so by the end there was like tumbleweed going to the job fair, you know. It was hard to describe unless you were there at the time, the level of hype, and being a computer science major at Stanford was like a thousand opportunities.[00:04:45] Bret: And then, and then when I left, it was like Microsoft, IBM.[00:04:49] Joining Google and Early Projects[00:04:49] Bret: And then the two startups that I applied to were VMware and Google. And I ended up going to Google in large part because a woman named Marissa Mayer, who had been a teaching [00:05:00] assistant when I was, what was called a section leader, which was like a junior teaching assistant kind of for one of the big intro[00:05:05] Bret: Yes. Classes. She had gone there. And she was recruiting me and I knew her and it sort of felt safe, you know, like, I don't know. I thought about it much, but it turned out to be a real blessing. I realized, like, you know, you always want to think you'd pick Google if given the option, but no one knew at the time.[00:05:20] Bret: And I wonder if I'd graduated in like 1999 where I'd have been like, mom, I just got a job at pets.com. It's good. But you know, at the end I just didn't have any options. So I was like, do I want to go like make kernel software at VMware? Do I want to go build search at Google? And I chose Google. 50/50 ball.[00:05:36] Bret: I'm not really a 50/50 ball.
So I feel very fortunate in retrospect that the economy collapsed because in some ways it forced me into like one of the greatest companies of all time, but I kind of lucked into it, I think.[00:05:47] The Google Maps Rewrite Story[00:05:47] Alessio: So the famous story about Google is that you rewrote the Google Maps back end in one week after the MapQuest maps acquisition, what was the story there?[00:05:57] Alessio: Is it actually true? Is it [00:06:00] being glorified? Like how, how did that come to be? And is there any detail that maybe Paul hasn't shared before?[00:06:06] Bret: It's largely true, but I'll give the color commentary. So it was actually the front end, not the back end, but it turns out for Google Maps, the front end was sort of the hard part just because Google Maps was[00:06:17] Bret: largely the first-ish kind of really interactive web application. I say first-ish; I think Gmail certainly was, though Gmail, probably a lot of people then who weren't engineers probably didn't appreciate its level of interactivity. It was just fast. But Google Maps, because you could drag the map and it was sort of graphical.[00:06:38] Bret: My, it really in the mainstream, I think, was it Map[00:06:41] swyx: Quest back then that was, you had the arrows up and down, it[00:06:44] Bret: was up and down arrows. Each map was a single image and you just clicked left and then waited a few seconds for the new map to load. It was really small too, because generating a big image was kind of expensive on computers of that day.[00:06:57] Bret: So Google Maps was truly innovative in that [00:07:00] regard. The story on it: there was a small company called Where 2 Technologies started by two Danish brothers, Lars and Jens Rasmussen, who are two of my closest friends now. They had made a Windows app called Expedition, which had beautiful maps.
Even in 2004,[00:07:18] Bret: whenever we acquired or sort of acquired their company, Windows software was not particularly fashionable, but they were really passionate about mapping and we had made a local search product that was kind of middling in terms of popularity, sort of like a yellow pages kind of search product. So we wanted to really go into mapping.[00:07:36] Bret: We'd started working on it. Their small team seemed passionate about it. So we're like, come join us. We can build this together.[00:07:42] Technical Challenges and Innovations[00:07:42] Bret: It turned out to be a great blessing that they had built a Windows app because you're less technically constrained when you're doing native code than you are building in a web browser, particularly back then when there weren't really interactive web apps, and it ended up[00:07:56] Bret: changing the level of quality that we [00:08:00] wanted to hit with the app because we were shooting for something that felt like a native Windows application. So it was a really good fortune that we sort of, you know, their unusual technical choices turned out to be the greatest blessing. So we spent a lot of time basically saying, how can you make an interactive draggable map in a web browser?[00:08:18] Bret: How do you progressively load, you know, new map tiles as you're dragging? Even things like down in the weeds of the browser at the time: most browsers, like Internet Explorer, which was dominant at the time, would only load two images at a time from the same domain. So we ended up making our map tile servers have like[00:08:37] Bret: forty different subdomains so we could load maps in parallel. Like, lots of hacks. I'm happy to go into as much as like[00:08:44] swyx: HTTP connections and stuff.[00:08:46] Bret: They just like, there was just maximum parallelism of two.
And so if you had a map, a set of map tiles, like eight of them, so we just, we were down in the weeds of the browser anyway.[00:08:56] Bret: So it was lots of plumbing. I can, I know a lot more about browsers than [00:09:00] most people, but then by the end of it, it was fairly, it was a lot of duct tape on that code. If you've ever done an engineering project where you're not really sure of the path from point A to point B, it's almost like building a house by building one room at a time.[00:09:14] Bret: The, there's not a lot of architectural cohesion at the end. And then we acquired a company called Keyhole, which became Google Earth, which was like that too, it was a native Windows app as well, separate app, great app, but with that, we got licenses to all this satellite imagery. And so in August of 2005, we added[00:09:33] Bret: satellite imagery to Google Maps, which added even more complexity in the code base. And then we decided we wanted to support Safari. There were no mobile phones yet. So Safari was this like nascent browser on, on the Mac. And it turns out there's like a lot of decisions behind the scenes, sort of inspired by this Windows app, like heavy use of XML and XSLT and all these like[00:09:54] Bret: technologies that were briefly fashionable in the early two thousands and everyone hates now for good [00:10:00] reason. And it turns out that all of the XML functionality in Internet Explorer wasn't supported in Safari. So people are like re-implementing like XML parsers. And it was just like this like pile of s**t.[00:10:11] Bret: And am I allowed to say s**t on your pod? Yeah, of[00:10:12] Alessio: course.[00:10:13] Bret: So it went from this like beautifully elegant application that everyone was proud of to something that probably had hundreds of K of JavaScript, which sounds like nothing now. We're talking like people have modems, you know, not all modems, but it was a big deal.[00:10:29] Bret: So it was like slow.
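The subdomain trick Bret describes can be sketched in a few lines of JavaScript. Browsers of that era capped parallel downloads at roughly two connections per hostname, so sharding tile requests across many hostnames multiplies how many tiles can be in flight at once. This is an illustrative reconstruction, not Google's actual code; the `mt{n}.example.com` host pattern and the tile coordinate scheme are invented for the sketch.

```javascript
// Sketch of per-host sharding for map tile requests (illustrative only).
// Old browsers opened at most ~2 connections per hostname, so spreading
// tiles across N subdomains lets roughly 2*N tiles load in parallel.
const NUM_SHARDS = 40; // "forty different subdomains"

// Deterministically map a tile to a shard so the same tile always hits
// the same hostname (keeps browser and CDN caches effective).
function tileUrl(x, y, zoom) {
  const shard = (x + y) % NUM_SHARDS; // simple, stable hash of tile coords
  return `https://mt${shard}.example.com/tiles/${zoom}/${x}/${y}.png`;
}

// In the real app each URL would back an <img> positioned inside the
// draggable viewport; here we just generate one row of tile URLs.
const urls = [];
for (let x = 0; x < 8; x++) urls.push(tileUrl(x, 3, 12));
```

Hashing on the tile coordinates, rather than picking a random shard per request, keeps each tile on a stable hostname so caches stay warm.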
It took a while to load and just, it wasn't like a great code base. Like everything was fragile. So I just got super frustrated by it. And then one weekend I did rewrite all of it. And at the time the word JSON hadn't been coined yet, just to give you a sense. So it's all XML.[00:10:47] swyx: Yeah.[00:10:47] Bret: So we used what now you would call JSON, but I just said like, let's use eval so that we can parse the data fast. And, and again, it would literally parse as JSON, but at the time there was no name for it. So we [00:11:00] just said, let's pass down JavaScript from the server and eval it. And then somebody just refactored the whole thing.[00:11:05] Bret: And, and it wasn't like I was some genius. It was just like, you know, if you knew everything you wished you had known at the beginning, and I knew all the functionality, cause I was the primary, one of the primary authors of the JavaScript. And I just like, I just drank a lot of coffee and just stayed up all weekend.[00:11:22] Bret: And then I, I guess I developed a bit of reputation and no one knew about this for a long time. And then Paul, who created Gmail, and I ended up starting a company with him too after all of this, told this on a podcast and now it's lore, but it's largely true. I did rewrite it, and it's my proudest thing.[00:11:38] Bret: And I think JavaScript people appreciate this. Like the gzipped bundle size for all of Google Maps, when I rewrote it, was 20K gzipped. It was like much smaller for the entire application. It went down by like 10x. So what happened on Google? Google is a pretty mainstream company, and so like our usage shot up because it turns out like it's faster.[00:11:57] Bret: Just being faster is worth a lot of [00:12:00] percentage points of growth at the scale of Google. So how[00:12:03] swyx: much modern tooling did you have? Like test suites, no compilers?[00:12:07] Bret: Actually, that's not true. We did have one thing.
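The eval trick Bret describes, shipping a JavaScript object literal from the server and evaluating it client-side before the name JSON existed, looks roughly like this. A hedged sketch: the payload and field names are made up, and the modern `JSON.parse` equivalent is shown for contrast since it parses data without executing code.

```javascript
// What the server sent: a JavaScript object literal as plain text.
// There was no JSON.parse yet, so the client evaluated it directly.
// (The parentheses force eval to treat the braces as an expression,
// not a code block.)
const payload = '({ "lat": 37.422, "lng": -122.084, "zoom": 12 })';

// circa-2005 approach: eval the string. Fast, but it runs any code the
// server (or an attacker) put in the response.
const legacy = eval(payload);

// modern equivalent: JSON.parse on the bare object text. Data only,
// no code execution.
const modern = JSON.parse('{ "lat": 37.422, "lng": -122.084, "zoom": 12 }');
```

The speed argument made sense at the time: `eval` reused the browser's native parser, while a hand-written parser in JavaScript would have been far slower on 2005-era interpreters.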
So I actually think Google, I, you can download it. There's a, Google has a Closure Compiler, the Closure Compiler.[00:12:15] Bret: I don't know if anyone still uses it. It's gone. Yeah. Yeah. It's sort of gone out of favor. Yeah. Well, even until recently it was better than most JavaScript minifiers because it was more like it did a lot more renaming of variables and things. Most people use esbuild now just cause it's fast, and Closure Compiler is built on Java and super slow and stuff like that.[00:12:37] Bret: But, so we did have that, that was it. Okay.[00:12:39] The Evolution of Web Applications[00:12:39] Bret: So and that was created internally, you know, it was a really interesting time at Google at the time because there's a lot of teams working on fairly advanced JavaScript when no one was. So Google Suggest, which Kevin Gibbs was the tech lead for, was the first kind of type-ahead autocomplete, I believe, in a web browser, and now it's just pervasive in search boxes that you sort of [00:13:00] see a type ahead there.[00:13:01] Bret: I mean, ChatGPT[00:13:01] swyx: just added it. It's kind of like a round trip.[00:13:03] Bret: Totally. No, it's now pervasive as a UI affordance, but that was like Kevin's 20 percent project. And then Gmail, Paul, you know, he tells the story better than anyone, but he's like, you know, basically was scratching his own itch, but what was really neat about it is email, because it's such a productivity tool, just needed to be faster.[00:13:21] Bret: So, you know, he was scratching his own itch of just making more stuff work on the client side. And then we, because of Lars and Jens sort of like setting the bar with this Windows app, or like, we need our maps to be draggable. So we ended up.
Not only innovating in terms of having a big, what would be called a single page application today, but also all the graphical stuff, you know, we were crashing Firefox like it was going out of style because, you know, when you make a document object model with the idea that it's a document and then you layer on some JavaScript, and then we're essentially abusing all of this, it just was running into code paths that were not[00:13:56] Bret: well trodden, you know, at this time. And so it was [00:14:00] super fun. And, and, you know, in the building you had, so you had compilers people helping minify JavaScript just practically, but there was a great engineering team. So they were like, that's why Closure Compiler is so good. It was like a person who actually knew about programming languages doing it, not just, you know, writing regular expressions.[00:14:17] Bret: And then the team that is now the Chrome team, I believe, and I, I don't know this for a fact, but I'm pretty sure Google was the main contributor to Firefox for a long time in terms of code. And a lot of browser people were there. So every time we would crash Firefox, we'd like walk up two floors and say like, what the hell is going on here?[00:14:35] Bret: And they would load their browser, like, in a debugger. And we could like figure out exactly what was breaking. And you can't change the code, right? Cause it's the browser. It's like slow, right? I mean, slow to update. So, but we could figure out exactly where the bug was and then work around it in our JavaScript.[00:14:52] Bret: So it was just like new territory. Like so super, super fun time, just like a lot of, a lot of great engineers figuring out [00:15:00] new things.
And now, you know, this term is no longer in fashion, but the word Ajax, which was asynchronous JavaScript and XML, cause I keep telling you, XML, you see the word XML there. To be fair, the way you made HTTP requests from a client to a server was this[00:15:18] Bret: object called XMLHttpRequest, because Microsoft, in making Outlook Web Access back in the day, made this, and it turns out to have nothing to do with XML. It's just a way of making HTTP requests, because XML was like the fashionable thing. It was like, that was the way you, you know, you did it. But JSON came out of that, you know, and then a lot of the best practices around building JavaScript applications, this is pre-React.[00:15:44] Bret: I think React was probably the big conceptual step forward that we needed. Even at my first social network after Google, we used a lot of like HTML injection, and making real time updates was still very hand coded, and it's really neat when you [00:16:00] see conceptual breakthroughs like React because, I just love those things where it's like obvious once you see it, but it's so not obvious until you do.[00:16:07] Bret: And actually, well, I'm sure we'll get into AI, but I, I sort of feel like we'll go through that evolution with AI agents as well, that I feel like we're missing a lot of the core abstractions, that I think in 10 years we'll be like, gosh, how'd you make agents before that, you know? But it was kind of that early days of web applications.[00:16:22] swyx: There's a lot of contenders for the React.js of AI, but no clear winner yet. I would say one thing I was there for, I mean, there's so much we can go into there.
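The XMLHttpRequest pattern behind Ajax can be sketched as follows. Since there is no browser here, a stubbed transport stands in for the real `XMLHttpRequest`; the URL, payload, and field names are invented for illustration, and, as Bret notes, nothing about the mechanism requires XML.

```javascript
// Sketch of the Ajax pattern: XMLHttpRequest, despite the name, carries
// any text. Outside a browser there is no XMLHttpRequest, so a minimal
// fake transport stands in purely to show the callback flow.
class FakeXMLHttpRequest {
  open(method, url) { this.url = url; }
  send() {
    // a real browser would hit the network; the stub "responds" at once
    this.readyState = 4;   // 4 = DONE
    this.status = 200;
    this.responseText = '{ "results": ["tile loaded"] }'; // canned payload
    this.onreadystatechange();
  }
}

function fetchData(url, callback) {
  const xhr = new FakeXMLHttpRequest(); // in a browser: new XMLHttpRequest()
  xhr.open("GET", url);
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // early Ajax apps eval'd this text; JSON.parse is the safe route
      callback(JSON.parse(xhr.responseText));
    }
  };
  xhr.send();
}

let received;
fetchData("/maps/data", (data) => { received = data; });
```

The callback-driven shape is the important part: the page keeps responding while the request is in flight, which is the "asynchronous" in Ajax.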
You just covered so much.[00:16:32] Product Management and Engineering Synergy[00:16:32] swyx: One thing I just, I just observe is that I think the early Google days had this interesting mix of PM and engineer, which I think you are, you didn't, you didn't wait for PM to tell you these are my, this is my PRD.[00:16:42] swyx: This is my requirements.[00:16:44] mix: Oh,[00:16:44] Bret: okay.[00:16:45] swyx: I wasn't technically a software engineer. I mean,[00:16:48] Bret: by title, obviously. Right, right, right.[00:16:51] swyx: It's like a blend. And I feel like these days, product is its own discipline and its own lore and own industry and engineering is its own thing. And there's this process [00:17:00] that happens and they're kind of separated, but you don't produce as good of a product as if they were the same person.[00:17:06] swyx: And I'm curious, you know, if, if that, if that sort of resonates in, in, in terms of like comparing early Google versus modern startups that you see out there,[00:17:16] Bret: I certainly like wear a lot of hats. So, you know, sort of biased in this, but I really agree that there's a lot of power in combining product, design, engineering into as few people as possible because, you know, few great things have been created by committee, you know, and so,[00:17:33] Bret: if engineering is an order taking organization for product, you can sometimes make meaningful things, but rarely will you create extremely well crafted breakthrough products. Those tend to be small teams who deeply understand the customer need that they're solving, who have a maniacal focus on outcomes.[00:17:53] Bret: And I think the reason why it's, I think for some areas, if you look at like software as a service five years ago, maybe you can have a [00:18:00] separation of product and engineering because most software as a service created five years ago. I wouldn't say there's like a lot of like.
Technological breakthroughs required for most, you know, business applications.[00:18:11] Bret: And if you're making expense reporting software or whatever, it's useful. I don't mean to be dismissive of expense reporting software, but you probably just want to understand like, what are the requirements of the finance department? What are the requirements of an individual filing an expense report? Okay.[00:18:25] Bret: Go implement that. And you kind of know how web applications are implemented. You kind of know how to, how databases work, how to build auto scaling with your AWS cluster, whatever, you know, it's just, you're just applying best practices to yet another problem when you have areas like the early days of mobile development or the early days of interactive web applications, which I think Google Maps and Gmail represent, or now AI agents, you're in this constant conversation with what the requirements of your customers and stakeholders are and all the different people interacting with it.[00:18:58] Bret: And the capabilities of the [00:19:00] technology. And it's almost impossible to specify the requirements of a product when you're not sure of the limitations of the technology itself. And that's why I use the word conversation. It's not literal. That's sort of funny to use that word in the age of conversational AI.[00:19:15] Bret: You're constantly sort of saying, like, ideally, you could sprinkle some magic AI pixie dust and solve all the world's problems, but it's not the way it works. And it turns out that actually, I'll just give an interesting example.[00:19:26] AI Agents and Modern Tooling[00:19:26] Bret: I think most people listening probably use co-pilots to code, like Cursor or Devin or Microsoft Copilot or whatever.[00:19:34] Bret: Most of those tools are, they're remarkable. I couldn't, you know, imagine development without them now, but they're not autonomous yet.
Like I wouldn't let it just write most code without my interactively inspecting it. We just are somewhere between it's an amazing co-pilot and it's an autonomous software engineer.[00:19:53] Bret: As a product manager, like your aspirations for what the product is are like kind of meaningful. But [00:20:00] if you're a product person, yeah, of course you'd say it should be autonomous. You should click a button and a program should come out the other side. The requirements are meaningless. Like what matters is, based on the like very nuanced limitations of the technology,[00:20:14] Bret: what is it capable of? And then how do you maximize the leverage it gives a software engineering team, given those very nuanced trade offs? Coupled with the fact that those nuanced trade offs are changing more rapidly than any technology in my memory, meaning every few months you'll have new models with new capabilities.[00:20:34] Bret: So how do you construct a product that can absorb those new capabilities as rapidly as possible as well? That requires such a combination of technical depth and understanding the customer that you really need more integration of product, design and engineering. And so I think it's why with these big technology waves, I think startups have a bit of a leg up relative to incumbents because they [00:21:00] tend to be sort of more self actualized in terms of just like bringing those disciplines closer together.[00:21:06] Bret: And in particular, I think entrepreneurs, the proverbial full stack engineers, you know, have a leg up as well because, I think most breakthroughs happen when you have someone who can understand those extremely nuanced technical trade offs, have a vision for a product. And then in the process of building it, have that, as I said, like metaphorical conversation with the technology, right?[00:21:30] Bret: Gosh, I ran into a technical limit that I didn't expect. It's not just like changing that feature.
You might need to refactor the whole product based on that. And I think that's, that it's particularly important right now. So I don't, you know, if you, if you're building a big ERP system, probably there's a great reason to have product and engineering.[00:21:51] Bret: I think in general, the disciplines are there for a reason. I think when you're dealing with something as nuanced as the like technologies, like large language models today, there's a ton of [00:22:00] advantage of having individuals or organizations that integrate the disciplines more formally.[00:22:05] Alessio: That makes a lot of sense.[00:22:06] Alessio: I've run a lot of engineering teams in the past, and I think the product versus engineering tension has always been more about effort than like whether or not the feature is buildable. But I think, yeah, today you see a lot more of, like, models actually cannot do that. And I think the most interesting thing is on the startup side, people don't yet know where a lot of the AI value is going to accrue.[00:22:26] Alessio: So you have this rush of people building frameworks, building infrastructure, layered things, but we don't really know the shape of the compute. I'm curious at Sierra, like how you thought about building in house a lot of the tooling for evals or like just, you know, building the agents and all of that,[00:22:41] Alessio: versus how you see some of the startup opportunities that are maybe still out there.[00:22:46] Bret: We build most of our tooling in house at Sierra, not all. It's, we don't, it's not like not invented here syndrome necessarily, though, maybe slightly guilty of that in some ways, but because we're trying to build a platform [00:23:00] that's enduring, you know, we really want to have control over our own destiny.[00:23:03] Bret: And you had made a comment earlier that like, we're still trying to figure out what, like, the React of agents is, and the jury is still out. I would argue it hasn't been created yet.
I don't think the jury is still out, to go use that metaphor. We're sort of in the jQuery era of agents, not the React era.[00:23:19] Bret: And, and that's like a throwback for people listening,[00:23:22] swyx: we shouldn't rush it. You know?[00:23:23] Bret: No, yeah, that's my point is. And so, because we're trying to create an enduring company at Sierra that outlives us, you know, I'm not sure we want to like attach our cart to some, like, to a horse where it's not clear that like we've figured out, and I actually want, as a company, we're trying to enable, just at a high level, and I'll, I'll quickly go back to tech at Sierra, we help consumer brands build customer facing AI agents.[00:23:48] Bret: So, everyone from Sonos to ADT home security to SiriusXM, you know, if you call them on the phone, an AI will pick up with you; you know, chat with them on the SiriusXM homepage. It's an AI agent called Harmony [00:24:00] that they've built on our platform. What are the contours of what it means for someone to build an end to end, complete customer experience with AI, with conversational AI?[00:24:09] Bret: You know, we really want to dive into the deep end of, of all the trade offs to do it. You know, where do you use fine tuning? Where do you string models together? You know, where do you use reasoning? Where do you use generation? How do you use reasoning? How do you express the guardrails of an agentic process?[00:24:25] Bret: How do you impose determinism on a fundamentally non deterministic technology? There's just a lot there; it's a really important design space. And I could sit here and tell you, we have the best approach. Every entrepreneur will, you know. But I hope that in two years, we look back at our platform and laugh at how naive we were, because that's the pace of change broadly.[00:24:45] Bret: If you talk about like the startup opportunities, I'm not wholly skeptical of tools companies, but I'm fairly skeptical.
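One minimal way to picture Bret's question about imposing determinism on a fundamentally non-deterministic technology (purely illustrative, not Sierra's implementation): gate every action a model proposes through an explicit allow-list, and fall back to a deterministic action when validation keeps failing. The action names and the stubbed model below are invented for the sketch.

```javascript
// Illustrative guardrail sketch: the non-deterministic step (a model
// proposing an action) is wrapped in a deterministic validation loop.
// The "model" here is a stub; a real system would call an LLM instead.
const ALLOWED_ACTIONS = new Set([
  "check_order_status",
  "reset_password",
  "escalate_to_human",
]);

function guardrailedAgent(model, userMessage, maxAttempts = 3) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const proposed = model(userMessage, attempt); // non-deterministic step
    if (ALLOWED_ACTIONS.has(proposed)) return proposed; // deterministic gate
  }
  return "escalate_to_human"; // deterministic fallback, always safe
}

// Stub model: misbehaves on the first attempt, then proposes a valid action.
const flakyModel = (msg, attempt) =>
  attempt === 0 ? "delete_all_orders" : "check_order_status";
```

Whatever the model emits, the system's observable behavior stays inside a fixed, auditable action set, which is one common reading of "imposing determinism" on agentic processes.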
There's always an exception for every rule, but I believe that certainly there's a big market for [00:25:00] frontier models, but largely for companies with huge CapEx budgets. So, OpenAI and Microsoft, Anthropic and Amazon Web Services, Google Cloud, xAI, which is very well capitalized now, but I think the, the idea that a company can make money sort of pre-training a foundation model is probably not true.[00:25:20] Bret: It's hard to, you're competing with just, you know, unreasonably large CapEx budgets. And, just like the cloud infrastructure market, I think it will be largely there. I also really believe in the applications of AI. And I define that not as like building agents or things like that. I define it much more as like, you're actually solving a problem for a business.[00:25:40] Bret: So it's what Harvey is doing in the legal profession or what Cursor is doing for software engineering or what we're doing for customer experience and customer service. The reason I believe in that is I do think that in the age of AI, what's really interesting about software is it can actually complete a task.[00:25:56] Bret: It can actually do a job, which is very different than what the value proposition of [00:26:00] software was in, you know, ancient history, two years ago. And as a consequence, I think the way you build a solution for a domain is very different than you would have before, which means that it's not obvious, like, the incumbents have like a leg up, you know, necessarily. They certainly have some advantages, but there's just such a different form factor, you know, for providing a solution, and it's just really valuable.[00:26:23] Bret: You know, it's, like, just think of how much money Cursor is saving software engineering teams, or in the alternative, how much revenue it can produce. Tool making is really challenging.
If you look at the cloud market, just as an analog, there are a lot of interesting tools companies, you know, Confluent monetized Kafka, Snowflake, Hortonworks, you know, there's a, there's a bunch of them.[00:26:48] Bret: A lot of them, you know, have that mix of, sort of like, like Confluent, have the open source or open core or whatever you call it. I, I, I'm not an expert in this area. You know, I do think [00:27:00] that developers are fickle. I think that in the tool space, I probably like default towards open source being like the area that will win.[00:27:09] Bret: It's hard to build a company around this, and then you end up with companies sort of built around open source too, that can work. Don't get me wrong, but I just think that it's, nowadays the tools are changing so rapidly that I'm like, not totally skeptical of tool makers, but I just think that open source will broadly win, but I think that the CapEx required for building frontier models is such that it will go to a handful of big companies.[00:27:33] Bret: And then I really believe in agents for specific domains, which I think will, it's sort of the analog to software as a service in this new era. You know, it's like, if you just think of the cloud, you can lease a server. It's just a low level primitive, or you can buy an app like, you know, Shopify or whatever.[00:27:51] Bret: And most people building a storefront would prefer Shopify over hand rolling their e-commerce storefront. I think the same thing will be true of AI. So [00:28:00] I've.
I tend to, like, if I have, like, an entrepreneur ask me for advice, I'm like, you know, move up the stack as far as you can towards a customer need.[00:28:09] Bret: Broadly, but I, but it doesn't reduce my excitement about the what-is-the-React-of-building-agents kind of thing, just because it is, it is the right question to ask, but I think it'll probably play out in the open source space more than anything else.[00:28:21] swyx: Yeah, and it's not a priority for you. There's a lot in there.[00:28:24] swyx: I'm kind of curious about your idea maze towards, there are many customer needs. You happen to identify customer experience as yours, but it could equally have been coding assistants or whatever. I think for some, I'm just kind of curious at the top down, how do you look at the world in terms of the potential problem space?[00:28:44] swyx: Because there are many people out there who are very smart and pick the wrong problem.[00:28:47] Bret: Yeah, that's a great question.[00:28:48] Future of Software Development[00:28:48] Bret: By the way, I would love to talk about the future of software, too, because despite the fact I didn't pick coding, I have a lot of thoughts on that, but I can answer your question, though. You know, I think when a technology is as [00:29:00] cool as large language models,[00:29:02] Bret: you just see a lot of people starting from the technology and searching for a problem to solve. And I think it's why you see a lot of tools companies, because as a software engineer, you start building an app or a demo and you, you encounter some pain points. You're like,[00:29:17] swyx: a lot of[00:29:17] Bret: people are experiencing the same pain point.[00:29:19] Bret: What if I make it? That it's just very incremental. And you know, I always like to use the metaphor, like you can sell coffee beans, roasted coffee beans. You can add some value.
You took coffee beans and you roasted them, and roasted coffee beans, largely, you know, are priced relative to the cost of the beans.[00:29:39] Bret: Or you can sell a latte, and a latte is rarely priced directly as a percentage of coffee bean prices. In fact, if you buy a latte at the airport, it's a captive audience, so it's a really expensive latte. And there's just a lot that goes into how much a latte costs. And I bring it up because there's a supply chain from growing [00:30:00] coffee beans to roasting coffee beans to, like, you know, you could make one at home or you could be in the airport and buy one, and the margins of the company selling lattes in the airport are a lot higher than the, you know, people roasting the coffee beans, and it's because you've actually solved a much more acute human problem in the airport.[00:30:19] Bret: And it's just worth a lot more to that person in that moment. It's kind of the way I think about technology too. It sounds funny to liken it to coffee beans, but if you're selling tools on top of a large language model, in some ways your market is big, but you're probably going to be price compressed just because you're sort of a piece of infrastructure, and then you have open source and all these other things competing with you naturally.[00:30:43] Bret: If you go and solve a really big business problem for somebody, a meaningful business problem that AI facilitates, they will value it according to the value of that business problem. And so I actually feel like people should start there. You're like, no, that's [00:31:00] unfair if you're searching for an idea. And I love people trying things, even if, I mean, a lot of the greatest ideas have been things no one believed in.[00:31:07] Bret: So if you're passionate about something, go do it. Like, who am I to say? Yeah, a hundred percent.
Or Gmail. Like, Paul, as far as I know, I mean, some of it's lore at this point, but Gmail was Paul's own email for a long time. And then, amusingly, and Paul can correct me, I'm pretty sure he sent around a link and, like, the first comment was like, this is really neat.[00:31:26] Bret: It would be great if it was not your email, but my own. I don't know if it's a true story. I'm pretty sure it's, yeah, I've read that before. So scratch your own itch. Fine. Like, it depends on what your goal is. If you wanna do, like, a venture backed company. If it's a passion project, f*****g passion, do it, like, don't listen to anybody.[00:31:41] Bret: But if you're trying to start, you know, an enduring company, solve an important business problem. And I do think that in the world of agents, the software industry has shifted, where you're not just helping people be more productive, but you're actually accomplishing tasks autonomously.[00:31:58] Bret: And as a consequence, I think the [00:32:00] addressable market has just greatly expanded, just because software can actually do things now and actually accomplish tasks. And how much is coding autocomplete worth? A fair amount. How much is the eventual, I'm certain we'll have it, the software agent that actually writes the code and delivers it to you? That's worth a lot.[00:32:20] Bret: And so, you know, I would just maybe look up from the large language models and start thinking about the economy, and, you know, think from first principles. I don't wanna get too far afield, but just think about which parts of the economy will benefit most from this intelligence and which parts can absorb it most easily.[00:32:38] Bret: And what would an agent in this space look like? Who's the customer of it? Is the technology feasible? I would just start with these business problems more. And I think, you know, the best companies tend to have great engineers who happen to have great insight into a market.
And it's that last part that I think some people miss.[00:32:56] Bret: Whether or not they have it, it's like, people start so much in the technology, they [00:33:00] lose the forest for the trees a little bit.[00:33:02] Alessio: How do you think about the model of still selling some sort of software versus selling more packaged labor? I feel like when people are selling the packaged labor, it's almost more stateless, you know, like, it's easier to swap out if you're just putting in an input and getting an output.[00:33:16] Alessio: If you think about coding, if there's no IDE, you're just putting in a prompt and getting back an app, it doesn't really matter who generates the app, you know, you have less of a buy in. Versus the platform you're building, I'm sure on the backend customers have to, like, put in their documentation and they have, you know, different workflows that they can tie in. What's kind of, like, the line to draw there, versus going full managed customer support team as an outsourced service, versus:[00:33:40] Alessio: this is the Sierra platform that you can build on. What was that decision?[00:33:44] Bret: I'll sort of decouple the question in some ways, which is: when you have something that's an agent, who is the person using it and what do they want to do with it? So let's just take your coding agent for a second. I will talk about Sierra as well.[00:33:59] Bret: Who's the [00:34:00] customer of an agent that actually produces software? Is it a software engineering manager? Is it a software engineer? And is it their, you know, intern, so to speak? I don't know. I mean, we'll figure this out over the next few years. Like, what is that? And is it generating code that you then review?[00:34:16] Bret: Is it generating code with a set of unit tests that pass? What is the actual, for lack of a better word, contract? Like, how do you know that it did what you wanted it to do?
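One concrete form that "contract" could take is a test suite the agent's output must pass before it is accepted. A minimal sketch in Python; the function, the task, and the acceptance tests here are all hypothetical illustrations, not anything Sierra or any coding agent actually ships:

```python
# Hypothetical "contract" for an agent-generated function: the deliverable
# is accepted only if it satisfies a pre-agreed suite of tests.

def run_contract(candidate, contract_tests):
    """Return True if the candidate function passes every contract test."""
    return all(test(candidate) for test in contract_tests)

# Suppose the agent was asked to produce a slugify() helper.
def agent_generated_slugify(title: str) -> str:
    # (pretend this body came back from the coding agent)
    return "-".join(title.lower().split())

contract = [
    lambda f: f("Hello World") == "hello-world",
    lambda f: f("  spaced   out ") == "spaced-out",
    lambda f: f("already-a-slug") == "already-a-slug",
]

accepted = run_contract(agent_generated_slugify, contract)
print(accepted)  # True: the agent's code is accepted
```

The point of the sketch is that acceptance is defined up front, independent of who (or what) wrote the code.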
And then I would say the product and the pricing, the packaging model, sort of emerge from that. And I don't think the world's figured it out.[00:34:33] Bret: I think it'll be different for every agent. You know, in our customer base, we do what's called outcome based pricing. So essentially every time the AI agent solves the problem or saves a customer or whatever it might be, there's a pre negotiated rate for that. We do that because we think that that's sort of the correct way agents, you know, should be packaged.[00:34:53] Bret: I look back at the history of, like, cloud software and notably the introduction of the browser, which led to [00:35:00] software being delivered in a browser. Like Salesforce, which famously invented sort of software as a service, which is both a technical delivery model through the browser, but also a business model, which is you subscribe to it rather than pay for a perpetual license.[00:35:13] Bret: Those two things are somewhat orthogonal, but not really. If you think about the idea of software running in a browser, hosted in a data center that you don't own, you sort of needed to change the business model, because you can't really buy a perpetual license or something. Otherwise, like, how do you afford making changes to it?[00:35:31] Bret: It only worked when you were buying, like, a new version every year or whatever. But then the business model shift actually changed business as we know it, because now things like Adobe Photoshop you subscribe to rather than purchase. So it ended up where you had a technical shift and a business model shift that were very logically intertwined, and actually the business model shift turned out to be as significant as the technical shift.[00:35:59] Bret: And I think with [00:36:00] agents, because they actually accomplish a job, I do think that it doesn't make sense to me that you'd pay for the privilege of, like,
Using the software. Like that coding agent, if it writes really bad code, like, fire it, you know? I don't know what the right metaphor is, but you should pay for a job[00:36:17] Bret: well done, in my opinion. I mean, that's how you pay your software engineers, right?[00:36:20] swyx: And, well, not really. We pay to put them on salary and give them options and they vest over time. That's fair.[00:36:26] Bret: But my point is that you don't pay them for how many characters they write, which is sort of the token based, you know, whatever. Like, there's that famous Apple story where they were asking for a report of how many lines of code each engineer wrote,[00:36:40] Bret: and one of the engineers showed up with, like, a negative number, cause he had just done a big refactoring. It was like a big F you to management who didn't understand how software is written. You know, my sense is the traditional usage based or seat based thing is just going to look really antiquated,[00:36:55] Bret: cause it's like asking your software engineer, how many lines of code did you write today? Like, who cares? Cause [00:37:00] there's absolutely no correlation. So my own view is, I think it'll be different in every category, but I do think that if an agent is doing a job, paying for the job well done properly incentivizes both the maker of that agent and the customer.[00:37:16] Bret: It's not always perfect to measure. It's hard to measure engineering productivity, but you should do something other than how many keys you typed, you know. Talk about perverse incentives for AI, right? Like, I can write really long functions to do the same thing, right? So broadly speaking, you know, I do think that we're going to see a change in business models of software towards outcomes.[00:37:36] Bret: And I think you'll see a change in delivery models too.
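The contrast between seat based and outcome based pricing can be made concrete with a toy calculation. A minimal sketch; every rate, seat count, and volume below is invented for illustration and reflects nothing about Sierra's actual pricing:

```python
# Toy comparison of seat-based vs. outcome-based pricing for an AI agent.
# All rates and volumes are hypothetical.

def seat_based_cost(seats: int, price_per_seat: float) -> float:
    """Pay for access, regardless of how much work gets done."""
    return seats * price_per_seat

def outcome_based_cost(resolved_issues: int, rate_per_resolution: float) -> float:
    """Pay a pre-negotiated rate only for each problem the agent solves."""
    return resolved_issues * rate_per_resolution

# A month in which the agent autonomously resolves 10,000 support issues.
seats = 50            # hypothetical contact-center licenses
price_per_seat = 150.0  # hypothetical dollars per seat per month
resolved = 10_000
rate = 1.25           # hypothetical dollars per resolved issue

print(seat_based_cost(seats, price_per_seat))    # 7500.0, fixed cost
print(outcome_based_cost(resolved, rate))        # 12500.0, scales with work done
```

The design point is the second function's input: billing keys off the job completed, not off access, so a month where the agent solves nothing costs nothing.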
And, you know, in our customer base, we empower our customers to really have their hands on the steering wheel of what the agent does. They want and need that. But the role is different. You know, at a lot of our customers, the customer experience operations folks have renamed themselves the AI architects, which I think is really cool.[00:37:55] Bret: And, you know, it's like in the early days of the Internet, there was the role of the webmaster. [00:38:00] And, I don't know, webmaster is not a fashionable term anymore, nor is it a job. Will the AI architect title stand the test of time? Maybe, maybe not. But I do think that, again, because everyone listening right now is a software engineer:[00:38:14] Bret: what is the form factor of a coding agent? And actually I'll take a breath, cause I have a bunch of opinions on this. I wrote a blog post right before Christmas just on the future of software development. And one of the things that's interesting is, if you look at the way I use Cursor today, as an example, it's inside of[00:38:31] Bret: a repackaged Visual Studio Code environment. I sometimes use the sort of agentic parts of it, but largely, you know, I've sort of gotten a good routine of making it autocomplete code in the way I want through tuning it properly. I do wonder what the future of development environments will look like.[00:38:55] Bret: And to your point on what is a software product, I think it's going to change a lot in [00:39:00] ways that will surprise us. But I always use, I used in my blog post the metaphor of: have you all driven around in a Waymo around here? Yeah, everyone has.
And there are these Jaguars, really nice cars, but it's funny because it still has a steering wheel, even though there's no one sitting there, and the steering wheel is, like, turning and stuff. Clearly, in the future,[00:39:16] Bret: once that becomes more ubiquitous, why have the steering wheel? And also, why have all the seats facing forward? Maybe just for car sickness, I don't know, but you could totally rearrange the car. I mean, so much of the car is oriented around the driver. So it stands to reason to me that, like, well, autonomous agents for software engineering run through Visual Studio Code?[00:39:37] Bret: That seems a little bit silly, because having a single source code file open one at a time is kind of a goofy form factor for when the code isn't being written primarily by you. But it begs the question of what's your relationship with that agent. And I think the same is true in our industry of customer experience, which is:[00:39:55] Bret: who are the people managing this agent? What are the tools they need? And they definitely need [00:40:00] tools, but they're probably pretty different than the tools we had before. It's certainly different than training a contact center team. And as software engineers, I think I would like to see, particularly on the passion project side or research side,[00:40:14] Bret: more innovation in programming languages. I think that we're bringing the cost of writing code down to zero. So the fact that we're still writing Python with AI cracks me up, just cause it was literally designed to be ergonomic to write, not safe to run or fast to run. I would love to see more innovation in how we verify program correctness.[00:40:37] Bret: I studied formal verification in college a little bit, and it's not very fashionable, because it's really tedious and slow and doesn't work very well.
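Short of full formal verification, one lightweight step toward verifying program correctness without reading the code is randomized property checking: assert properties the output must satisfy across many random inputs. A minimal plain-Python sketch (the "generated" sort is a stand-in; dedicated tools like Hypothesis do this far more thoroughly):

```python
import random

# Property check: trust machine-written code by checking its behavior,
# not by reading it. The properties: output matches Python's built-in
# sorted(), and output length equals input length.

def generated_sort(xs):
    # Stand-in for machine-generated code (a simple insertion sort).
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def check_sort_properties(sort_fn, trials=200, seed=0):
    rng = random.Random(seed)  # fixed seed so the check is reproducible
    for _ in range(trials):
        xs = [rng.randint(-50, 50) for _ in range(rng.randint(0, 20))]
        ys = sort_fn(xs)
        assert ys == sorted(xs), f"wrong order for {xs}"
        assert len(ys) == len(xs), f"length changed for {xs}"
    return True

print(check_sort_properties(generated_sort))  # True
```

The reviewer's effort goes into stating the properties once, after which any amount of generated code can be checked mechanically.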
If a lot of code is being written by a machine, you know, one of the primary values we can provide is verifying that it actually does what we intend it to do.[00:40:56] Bret: I think there should be lots of interesting things in the software development life cycle, like how [00:41:00] we think of testing and everything else. Because if we have to manually read every line of code that's coming out of machines, it will just rate limit how much the machines can do. The alternative is totally unsafe.[00:41:13] Bret: I wouldn't want to put code in production that didn't go through proper code review and inspection. So my whole view is, I actually think there's an AI native, I don't think the coding agents work well enough to do this yet, but once they do: what is sort of an AI native software development life cycle, and how do you actually[00:41:31] Bret: enable the creators of software to produce the highest quality, most robust, fastest software and know that it's correct? I think that's an incredible opportunity. I mean, how much C code can we rewrite in Rust and make it safe so that there are fewer security vulnerabilities? Can we have more efficient, safer code than ever before?[00:41:53] Bret: And can you have someone who's like that guy in The Matrix, you know, staring at the little green things? Like, where could you have an operator [00:42:00] of a code generating machine be superhuman? I think that's a cool vision. And I think too many people are focused on, like, autocomplete, you know, right now. And I'm guilty as charged,[00:42:10] Bret: I guess, in some ways, but I'd just like to see some bolder ideas. And that's why, when you were joking, you know, talking about what's the React of whatever, I think we're clearly in a local maximum, you know, sort of a conceptual local maximum. Obviously it's moving really fast.
I think we're moving out of it.[00:42:26] Alessio: Yeah. At the end of '23, I read this blog post, from syntax to semantics. Like, if you think about Python, it's taking C and making it more semantic, and LLMs are like the ultimate semantic program, right? You can just talk to them and they can generate any type of syntax from your language. But again, the languages that they have to use were made for us, not for them.[00:42:46] Alessio: But the problem is, as long as you will ever need a human to intervene, you cannot change the language under it. You know what I mean? So I'm curious at what point of automation we'll need to get to before we're okay making changes [00:43:00] to the underlying languages, like the programming languages, versus just saying, hey, you just got to write Python, because I understand Python and I'm more important at the end of the day than the model.[00:43:08] Alessio: I think that will change, but I don't know if it's, like, two years or five years.[00:43:13] Bret: I think it's more nuanced, actually. So some of the more interesting programming languages bring semantics into syntax. That's a little reductive, but take Rust as an example: Rust is memory safe,[00:43:25] Bret: statically, and that was a really interesting concept. But it's why it's hard to write Rust. It's why most people write Python instead of Rust. I think Rust programs are safer and faster than Python, probably slower to compile. But broadly speaking, given the option, if you didn't have to care about the labor that went into it,[00:43:45] Bret: you should prefer a program written in Rust over a program written in Python, just because it will run more efficiently. It's almost certainly safer, et cetera, et cetera, depending on how you define safe. But most people don't write Rust because it's kind of a pain in the ass.
And [00:44:00] the audience of people who can is smaller, but it's sort of better in most ways.[00:44:05] Bret: And again, let's say you're making a web service and you didn't have to care about how hard it was to write, you just got the output of the web service: the Rust one would be cheaper to operate. It's certainly cheaper and probably more correct, just because there's so much static analysis implied by the Rust programming language that it will probably have fewer runtime errors and things like that as well.[00:44:25] Bret: So I just give that as an example. Rust, at least to my understanding, came out of the Mozilla team, because there were lots of security vulnerabilities in the browser and it needs to be really fast. They said, okay, we want to put more of a burden at authorship time to have fewer issues at runtime.[00:44:43] Bret: And we need the constraint that it has to be done statically, because browsers need to be really fast. My sense is, if you just think about the needs of a programming language today, where the role of a software engineer is [00:45:00] to use an AI to generate functionality and audit that it does in fact work as intended, maybe functionally, maybe from a correctness standpoint, some combination thereof: how would you create a programming system that facilitated that?[00:45:15] Bret: And, you know, I bring up Rust because I think it's a good example. Given a choice of writing in C or Rust, you should choose Rust today. I think most people would say that, even C aficionados, just because
C is largely less safe for very similar, you know, trade offs for the system. And now with AI, it's like, okay, well, that just changes the game on writing these things.[00:45:36] Bret: And so I just wonder about a combination of programming languages that are more structurally oriented towards the values that we need from an AI generated program, verifiable correctness and all of that. If it's tedious to produce for a person, maybe that doesn't matter. But one thing: if I asked you, is this Rust program memory safe?[00:45:58] Bret: You wouldn't have to read it, you'd just have [00:46:00] to compile it. So that's interesting. That's one example of a very modest form of formal verification. So I bring that up because I do think you can have AI inspect AI, you can have AI do AI code reviews. It would disappoint me if the best we could get was AI reviewing Python. Having scaled a few very large[00:46:21] Bret: websites that were written in Python, it's just, you know, expensive. And, trust me, every team who's written a big web service in Python has experimented with, like, PyPy and all these things just to make it slightly more efficient than it naturally is. You don't really have true multithreading anyway.[00:46:36] Bret: It's just clear that you use it because it's convenient to write. And I just feel like, I don't want to say it's insane, I just mean I do think we're at a local maximum. And I would hope that we create a programming system, a combination of programming languages, formal verification, testing, automated code reviews, where you can use AI to generate software in a high scale way and trust it.[00:46:59] Bret: And you're [00:47:00] not limited by your ability to read it, necessarily. I don't know exactly what form that would take, but I feel like that would be a pretty cool world to live in.[00:47:08] Alessio: Yeah. We had Chris Lattner on the podcast.
He's doing great work with Modular. I mean, I love LLVM. Yeah. Basically merging Rust and Python,[00:47:15] Alessio: that's kind of the idea. But I'm curious, because for them a big use case was making it compatible with Python, same APIs, so that Python developers could use it. Yeah. And so I wonder at what point, well, yeah.[00:47:26] Bret: At least my understanding is they're targeting the data science, yeah, machine learning crowd, which is all written in Python, so it still feels like a local maximum.[00:47:34] Bret: Yeah.[00:47:34] swyx: Yeah, exactly. I'll force you to make a prediction. You know, Python's roughly 30 years old. 30 years from now, is Rust going to be bigger than Python?[00:47:42] Bret: I don't know, and I don't even know if this is a prediction. I'm just sort of saying stuff I hope is true. I would like to see an AI native programming language and programming system, and I say language, but I'm not sure language is even the right thing. I hope in 30 years there's an AI native way we make [00:48:00] software that is wholly uncorrelated with the current set of programming languages,[00:48:04] Bret: or not uncorrelated, but I think most programming languages today were designed to be efficiently authored by people, and some have different trade offs.[00:48:15] Evolution of Programming Languages[00:48:15] Bret: You know, you have Haskell and others that were designed with abstractions for parallelism and things like that. You have programming languages like Python, which are designed to be very easily written, sort of the Perl and Python lineage, which is why data scientists use it.[00:48:31] Bret: It has an interactive mode, things like that. And I'm a huge Python fan.
So despite all my Python trash talk, at least two of my three companies were exclusively written in Python. And then C came out of the birth of Unix, and it wasn't the first, but certainly the most prominent first step after assembly language, right?[00:48:54] Bret: Where you had higher level abstractions, going beyond goto to [00:49:00] abstractions like the for loop and the while loop.[00:49:01] The Future of Software Engineering[00:49:01] Bret: So I just think that if the act of writing code is no longer a meaningful human exercise, maybe it will be, I don't know, I'm just saying it sort of feels like maybe it's one of those parts of history that will just sort of go away, but there's still the role of the software engineer, the person actually building the system,[00:49:20] Bret: right? And what does a programming system for that form factor look like?[00:49:25] React and Front-End Development[00:49:25] Bret: And I just have a hope. Like I mentioned, I remember I was at Facebook in the very early days when what is now React was being created. And I remember when it was released open source, I had left by that time, and I was just like, this is so f*****g cool.[00:49:42] Bret: Like, you know, to basically model your app independent of the data flowing through it just made everything easier. And now, you know, a lot of the front end software landscape is a little chaotic for me, to be honest with you. It's sort of like [00:50:00] abstraction soup right now for me, but some of those core ideas felt really ergonomic.[00:50:04] Bret: I'm just looking forward to the day when someone comes up with a programming system that feels both really like an aha moment, but completely foreign to me at the same time. Because they created it from first principles, recognizing that, like:
Authoring code in an editor is maybe not, like, the primary reason why a programming system exists anymore.[00:50:26] Bret: And I think that would be a very exciting day for me.[00:50:28] The Role of AI in Programming[00:50:28] swyx: Yeah, I would say the various versions of this discussion have happened before. At the end of the day, you still need to precisely communicate what you want. As a manager of people, as someone who has done many, many legal contracts, you know how hard that is.[00:50:42] swyx: And now we have to talk to machines doing that, and AIs interpreting what we mean and reading our minds effectively. I don't know how to get across that barrier of translating human intent to instructions. And yes, it can be more declarative, but I don't know if it'll ever cross over from being [00:51:00] a programming language to something more than that.[00:51:02] Bret: I agree with you. And I actually do think, if you look at a legal contract, you know, the imprecision of the English language is like a flaw in the system.[00:51:12] swyx: How many holes there are.[00:51:13] Bret: And I do think that when you're making a mission critical software system, I don't think it should be English language prompts.[00:51:19] Bret: I think that is silly, because you want the precision of a programming language. My point was less about that and more about the actual act of authoring it. Like, if you[00:51:32] Formal Verification in Software[00:51:32] Bret: think about it, some embedded systems do use formal verification. I know it's very common in, like, security protocols now, because the importance of correctness is so great.[00:51:41] Bret: My intellectual exercise is: why not do that for all software? I mean, probably it's silly to do literally what we do for these low level security protocols, but the only reason we don't is because it's hard and tedious, and hard and tedious are no longer factors.
So, like, if I could, I mean, [00:52:00] just think of the silliest app on your phone right now. The idea that that app should be formally verified for its correctness feels laughable right now, because, like, God, why would you spend the time on it?[00:52:10] Bret: But if it's zero cost, like, yeah, I guess so. I mean, it never crashed. That's probably good, you know, why not? I just want to set our bars really high. Software has been amazing. Like, there's a Marc Andreessen blog post, software is eating the world. And, you know, our whole life is mediated digitally,[00:52:26] Bret: and that's just increasing with AI. And now we'll have our personal agents talking to the agents on the CRM platform, and it's agents all the way down, you know. Our core infrastructure is running on these digital systems. And we've had a shortage of software developers for my entire life.[00:52:45] Bret: And as a consequence, you know, remember healthcare.gov, that fiasco? Security vulnerabilities leading to state actors getting access to critical infrastructure? I'm like, we have now created this amazing system that can [00:53:00] fix this, you know. I'm both excited about the productivity gains in the economy, but I just think as software engineers, we should be bolder.[00:53:08] Bret: We should have aspirations to fix these systems, so that, as you said, as precise as we want to be in the specification of the system, we can make it work correctly now. And I'm being a little bit hand wavy, and I think we need some systems thinking here, but that's where we should set the bar, especially when so much of our life depends on this critical digital infrastructure.[00:53:28] Bret: So I'm just super optimistic about it. But actually, let's go to w