Turbopuffer came out of a reading app. In 2022, Simon was helping his friends at Readwise scale their infra for a highly requested feature: article recommendations and semantic search. Readwise was paying ~$5k/month for their relational database, and vector search would cost ~$20k/month, making the feature too expensive to ship. In 2023, after mulling over the problem from Readwise, Simon decided he wanted to "build a search engine," which became turbopuffer.

We discuss:

• Simon's path: Denmark → Shopify infra for nearly a decade → "angel engineering" across startups like Readwise, Replicate, and Causal → turbopuffer almost accidentally becoming a company
• The Readwise origin story: building an early recommendation engine right after the ChatGPT moment, seeing it work, then realizing it would cost ~$30k/month for a company spending ~$5k/month total on infra, and getting obsessed with fixing that cost structure
• Why turbopuffer is "a search engine for unstructured data": Simon's belief that models can learn to reason, but can't compress the world's knowledge into a few terabytes of weights, so they need to connect to systems that hold truth in full fidelity
• The three ingredients for building a great database company: a new workload, a new storage architecture, and the ability to eventually support every query plan customers will want on their data
• The architecture bet behind turbopuffer: going all in on object storage and NVMe, avoiding a traditional consensus layer, and building around the cloud primitives that only became possible in the last few years
• Why Simon hated operating Elasticsearch at Shopify: years of painful on-call experience shaped his obsession with simplicity, performance, and eliminating state spread across multiple systems
• The Cursor story: launching turbopuffer as a scrappy side project, getting an email from Cursor the next day, flying out after a 4am call, and helping cut Cursor's costs by 95% while fixing their per-user economics
• The Notion story: buying dark fiber, tuning TCP windows, and eating cross-cloud costs because Simon refused to compromise on architecture just to close a deal faster
• Why AI changes the build-vs-buy equation: it's less about whether a company can build search infra internally, and more about whether they have time, especially if an external team can feel like an extension of their own
• Why RAG isn't dead: coding companies still rely heavily on search, and Simon sees hybrid retrieval (semantic, text, regex, SQL-style patterns) becoming more important, not less
• How agentic workloads are changing search: the old pattern was one retrieval call up front; the new pattern is one agent firing many parallel queries at once, turning search into a highly concurrent tool call
• Why turbopuffer is reducing query pricing: agentic systems are dramatically increasing query volume, and Simon expects retrieval infra to adapt to huge bursts of concurrent search rather than a small number of carefully chosen calls
• The philosophy of "playing with open cards": Simon's habit of being radically honest with investors, including telling Lachy Groom he'd return the money if turbopuffer didn't hit PMF by year-end
• The "P99 engineer": Simon's framework for building a talent-dense company, rejecting by default unless someone on the team feels strongly enough to fight for the candidate

—

Simon Hørup Eskildsen
• LinkedIn: https://www.linkedin.com/in/sirupsen
• X: https://x.com/Sirupsen
• https://sirupsen.com/about

turbopuffer
• https://turbopuffer.com/

Full Video Pod

Timestamps

00:00:00 The PMF promise to Lachy Groom
00:00:25 Intro and Simon's background
00:02:19 What turbopuffer actually is
00:06:26 Shopify, Elasticsearch, and the pain behind the company
00:10:07 The Readwise experiment that sparked turbopuffer
00:12:00 The insight Simon couldn't stop thinking about
00:17:00 S3 consistency, NVMe, and the architecture bet
00:20:12 The Notion story: latency, dark fiber, and conviction
00:25:03 Build vs. buy in the age of AI
00:26:00 The Cursor story: early launch to breakout customer
00:29:00 Why code search still matters
00:32:00 Search in the age of agents
00:34:22 Pricing turbopuffer in the AI era
00:38:17 Why Simon chose Lachy Groom
00:41:28 Becoming a founder on purpose
00:44:00 The "P99 engineer" philosophy
00:49:30 Bending software to your will
00:51:13 The future of turbopuffer
00:57:05 Simon's tea obsession
00:59:03 Tea kits, X Live, and P99 Live

Transcript

Simon Hørup Eskildsen: I don't think I've said this publicly before, but I just called Lachy and was like, look, Lachy, if this doesn't have PMF by the end of the year, we'll just return all the money to you. Justine and I don't wanna work on this unless it's really working. So we want to give it the best shot this year, and we're really gonna go for it. We're gonna hire a bunch of people. We're just gonna be honest with everyone. When I don't know how to play a game, I just play with open cards. Lachy was the only person that didn't freak out. He was like, I've never heard anyone say that before.

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio Fanelli of Kernel Labs, and I'm joined by swyx, editor of Latent Space.

swyx: Hello, hello. We're still recording in the Kernel studio for the first time. Very excited. And today we are joined by Simon Eskildsen of turbopuffer. Welcome.

Simon Hørup Eskildsen: Thank you so much for having me.

swyx: turbopuffer has really gone on a huge tear, and I do have to mention that you're one of the newest members of the Danish Aarhus mafia. There's a lot of legendary programmers that have come out of it, like Bjarne Stroustrup, Lars Bak and the V8 team, and the Google Maps team. You're mostly a Canadian now, but isn't that interesting?
There's such a strong Danish presence.

Simon Hørup Eskildsen: Yeah, I was writing a post not that long ago about sort of the influences. So I grew up in Denmark, right? I left when I was 18 to go to Canada to work at Shopify. And so I would still say that I feel more Danish than Canadian. Hence also the weird accent; I can't say "th," you know, and my wife is also Canadian. I think one of the things in Denmark is just that there's such a ruthless pragmatism, and there's also a big focus on aesthetics; people really care about what things look like. Canada has a lot of attributes, the US has a lot of attributes, but I think those are the great things to carry over. I don't know what's in the water in Aarhus though. And I don't know that I could be considered part of the mafia quite yet, compared to the phenomenal individuals we just mentioned. Rasmus Lerdorf is also Danish-Canadian. I don't know where he lives now, but he's the PHP creator.

swyx: Yeah. And obviously Tobi is German, but moved to Canada as well. That is an interesting talent move.

Alessio: I would love to get from you a definition of turbopuffer, because you could be a vector DB, which is maybe a bad word now in some circles, or you could be a search engine. Let's just start there, and then we'll run through the history of how you got to this point.

Simon Hørup Eskildsen: For sure. Yeah. So turbopuffer is, at this point in time, a search engine, right? We do full-text search and we do vector search, and that's really what we're specialized in.
If you're trying to do much more than that, then this might not be the right place yet, but turbopuffer is all about search. The other way that I think about it is that we can take all of the world's knowledge, all of the exabytes and exabytes of data that there is, and we can use those tokens to train a model, but we can't compress all of that into a few terabytes of weights, right? We compress into a few terabytes of weights how to reason with the world, how to make sense of the knowledge. But we have to somehow connect it to something external that actually holds that knowledge in full fidelity and truth. And that's the thing that we intend to become. That's a very holier-than-thou kind of phrasing, right? But being the search engine for unstructured data is the focus of turbopuffer at this point in time.

Alessio: And let's break that down. So people might say, well, didn't Elasticsearch already do this? And some other people might say, is this search on my data? Is this closer to RAG than to, like, an Exa, a public search thing? How do you segment the different types of search?

Simon Hørup Eskildsen: The way that I generally think about this is, there's a lot of database companies, and I think if you wanna build a really big database company, you need a couple of ingredients to be in the air, which only happens roughly every 15 years. You need a new workload. You basically need the ambition that every single company on earth is gonna have data in your database, multiple times over. You look at a company like Oracle, right? I don't think you can find a company on earth with a digital presence that doesn't somehow have some data in an Oracle database. And I think at this point that's also true for Snowflake and Databricks, 15 years later, or even more than that: there's not a company on earth that isn't indirectly or directly consuming Snowflake or Databricks or any of the big analytics databases. And I think we're in that kind of moment now, right? I don't think you're gonna find a company over the next few years that doesn't directly or indirectly have all their data available for search and connected to AI. So you need that new workload, something happening that causes that to happen, and that new workload is connecting very large amounts of data to AI.

The second condition to build a big database company is that you need some new underlying change in the storage architecture that is not possible for the databases that have come before you. If you look at Snowflake and Databricks, right, they commoditized a massive fleet of HDDs; that just wasn't in the air in the nineties, so we just didn't build these systems. S3 and so on was not around. And I think the architecture that is now possible, that wasn't possible 15 years ago, is to go all in on NVMe SSDs. It requires a particular type of architecture for the database that is difficult to retrofit onto the databases that are already there, including the ones you just mentioned. The other part is to go all in on object storage, more so than we could have done 15 years ago. We don't have a consensus layer, we don't really have anything. In fact, you could turn off all the servers that turbopuffer has, and we would not lose any data, because we have gone completely all in on object storage. And this means that our architecture is just so simple. So that's the second condition, right? The first being a new workload that means every company on earth, either indirectly or directly, is using your database. The second being that there's some new storage architecture.
That means the companies that have come before you can't do what you're doing. I think the third thing you need to do to build a big database company is that over time you have to implement more or less every query plan on the data. What that means is that you can't just get stuck in "this is the one thing that a database does." It has to be ever-evolving, because when someone has data in the database, over time they expect to be able to ask it more or less every question. So you have to do that to get the storage architecture to the limit of what it's capable of. Those are the three conditions.

swyx: I just wanted to get a little bit of the motivation, right? So you left Shopify; you were principal engineer, infra guy. You were also head of kernel labs inside of Shopify, right? And then you consulted for Readwise, and that kind of gave you the idea. I just wanted you to tell that story. Maybe you've told it before, but just introduce people to the new workload, the sort of aha moment for turbopuffer.

Simon Hørup Eskildsen: For sure. So yeah, I spent almost a decade at Shopify. I was on the infrastructure team from the fairly early days, around 2013. At the time it felt like it was growing so quickly; all the metrics were doubling year on year, which, compared to what companies are contending with today, is very cute growth. I feel like some companies are seeing that month over month. Of course, Shopify has been compounding for a very long time now. But I spent a decade doing that, and the majority of it was just: make sure the site is up today, and make sure it's up a year from now.
And a lot of that was really just, you know, the Kardashians would drive very, very large amounts of data to Shopify as they were rotating through all the merch and building out their businesses, and we just needed to make sure we could handle that. Sometimes these were events of a million requests per second. We had our own data centers back in the day, and we were moving to the cloud, and there was so much sharding work that we were doing. So I spent a decade just scaling databases, 'cause that's fundamentally the most difficult thing to scale about these sites. The database that was the most difficult for me to scale during that time, and the most aggravating to be on call for, was Elasticsearch. It was very, very difficult to deal with, and I saw a lot of projects that were just being held back in their ambition by using it.

swyx: And I mean, self-hosted?

Simon Hørup Eskildsen: Self-hosted, yeah, and this is like 2015, right? So it's a very particular vintage. It's probably better at a lot of these things now. It was difficult to contend with, and I just think about it: it's an inverted index, it should be good at these kinds of queries. And we often couldn't get it to do exactly what we needed, or basically get Lucene to do it, expose Lucene raw to what we needed. So that was just something that we did on the side and panic-scaled when we needed to, but not a particular focus of mine. So I left, and when I left I wasn't sure exactly what I wanted to do. I mean, I'd spent a decade inside of the same company. I'd grown up there. I started working there when I was 18.

swyx: You only do Rails?

Simon Hørup Eskildsen: Yeah. I mean, yeah, Rails. And Tobi's a Rails guy. Love Rails. So good.
Alessio: We all wish we could still work in Rails.

swyx: I know, I know. I tried learning Ruby; it's just too much, like too many options to do the same thing. I know there's a way to do it.

Simon Hørup Eskildsen: I love it. I don't know that I would use it now, given Claude Code and Cursor and everything, but still, if I'm just sitting down and writing a little code, that's how I think. But anyway, I left, and I talked to a couple companies, and I was like, I need to see a little bit more of the world here to know what I'm gonna focus on next. So what I decided was, I was gonna do what I called angel engineering, where I just hopped around my friends' companies in three-month increments and helped them out with something, vested a bit of equity, and solved some interesting infrastructure problem. So I worked with a bunch of companies at the time. Readwise was one of them. Replicate was one of them. Causal, I dunno if you've tried it: it's a spreadsheet engine where you can do distributions. They sold recently. We used it for FP&A at turbopuffer. A bunch of companies like this, and it was super fun. And so when the ChatGPT moment happened, I was with Readwise for a stint. We were preparing for the Reader launch, right, which is where you queue articles and read them later. And I was just getting their Postgres up to snuff, which basically boils down to tuning autovacuum. So I was doing that, and then this happened, and we were like, oh, maybe we should build a little recommendation engine and some features to try to hook in the LLMs. They were not that good yet, but it was clear there was something there. And so I built a small recommendation engine: okay, let's take the articles that you've recently read, right?
Embed all the articles, and then do recommendations. It was good enough that when I ran it on one of the co-founders of Readwise, I found out that I got articles about having a child. I'm like, oh my God, I didn't know that they were having a child. I wasn't sure what to do with that information, but the recommendation engine was good enough that it was suggesting articles about that. So there were recommendations, and it actually worked really well. But this was a company that was spending maybe five grand a month in total on all their infrastructure, and when I did the napkin math on running the embeddings of all the articles, putting them into a vector index, and putting it in prod, it was gonna be like 30 grand a month. That just wasn't tenable, right? Readwise is a proudly bootstrapped company, and paying 30 grand of infrastructure for one feature versus five in total just wasn't tenable. So it went in the bucket of: this is useful, it's pretty good, but let's return to it when the costs come down.

swyx: Did you say it grows by feature? So from five to 30, that's by the number of, like, what's the scaling factor? It scales by the number of articles that you embed?

Simon Hørup Eskildsen: It does, but what I meant is: five grand for all of the other stuff, the Heroku dynos, Postgres, all of that, and then 30 grand for one feature, right? Which is: what other articles are related to this one? So it was just too much to power everything. Their budget would've been maybe a few thousand dollars, which still would've been a lot. And so we put it in a bucket of, okay, we're gonna do that later; we'll wait for the cost to come down. And that haunted me. I couldn't stop thinking about it. I was like, okay, there's clearly some latent demand here. If the cost had been a tenth, we would've shipped it.
This was really the only data point that I had, right? I didn't go out and talk to anyone else. So I started reading; I couldn't help myself. I didn't know what a vector index was; I barely knew how to generate the vectors. This was early 2023, and there was a lot of hype about vector databases; they were raising a lot of money, and I really didn't know anything about it. So, trying these little models, fine-tuning them, I was just trying to get a lay of the land. So I just sat down. I have this GitHub repository called napkin-math, and in it there are just rows of numbers: this is how much bandwidth you have, you can do 25 gigabytes per second on average to DRAM, you can do five gigabytes per second of writes to an SSD, and so on. All of these numbers, right? And S3: how much bandwidth can you drive per connection? I was just sitting down, and I was like, why hasn't anyone built a database where you just put everything on object storage, and then you puff it into NVMe when you use the data, and you puff it into DRAM if you're querying it live? It just seems fairly obvious, and the only real downside is that if you go all in on object storage, every write will take a couple hundred milliseconds of latency. But from there it's really all upside, right? The first query takes half a second. And it sort of occurred to me: well, the architecture is really good for that. It's really good for object storage, it's really good for NVMe SSDs. You just couldn't have done that 10 years ago, back to what we were talking about before. You really have to build a database where you have as few round trips as possible, right? This is how CPUs work today. It's how NVMe SSDs work.
It's how S3 works: you want to have a very large amount of outstanding requests. Basically, go to S3, do like a thousand requests asking for data in one round trip, wait for that, get it back, make a new decision, do it again, and try to do that a maximum of maybe three times. But no databases were designed that way. With NVMe SSDs, you can drive within a very low multiple of DRAM bandwidth if you use them that way. And same with S3, right? You can fully max out the network card, which generally is not maxed out; you get very, very good bandwidth. But no one had built a database like that. So I was like, okay, well, can't you just take all the vectors, right, and plot them in the proverbial coordinate system, get the clusters, put a file on S3 called clusters.json, and then put another file for every cluster, you know, cluster1.json, cluster2.json? It's two round trips, right? You get the clusters, you find the closest clusters, and then you download the closest n cluster files. You could do this in two round trips.

swyx: And you run nearest neighbors locally.

Simon Hørup Eskildsen: Yes, yes. And then you would build this file, right? It's ultra-simplistic, but it's not a far shot from what the first version of turbopuffer was. Why hasn't anyone done that?

Alessio: In that moment, from a workload perspective, were you thinking this was gonna be a read-heavy thing, because they're doing recommendations? Is it the fact that writes are so expensive?
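The two-round-trip layout Simon describes (a clusters.json of centroids plus one file per cluster) can be sketched in a few lines. This is a toy illustration, not turbopuffer code: the dict stands in for S3, the centroids are given rather than learned, and the file names only loosely follow the ones he mentions.

```python
import json, math

# Stand-in for S3: object name -> bytes. A real system would issue GETs here.
store = {}

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_index(vectors, centroids):
    """Assign each vector to its nearest centroid; write one file per cluster,
    plus a clusters.json listing the centroids (the 'index' file)."""
    clusters = {i: [] for i in range(len(centroids))}
    for vid, v in vectors.items():
        i = min(range(len(centroids)), key=lambda c: dist(v, centroids[c]))
        clusters[i].append([vid, v])
    store["clusters.json"] = json.dumps(centroids).encode()
    for i, members in clusters.items():
        store[f"cluster_{i}.json"] = json.dumps(members).encode()

def search(query, n_probe=2, k=1):
    # Round trip 1: fetch the centroid list, pick the closest clusters.
    centroids = json.loads(store["clusters.json"])
    nearest = sorted(range(len(centroids)),
                     key=lambda c: dist(query, centroids[c]))[:n_probe]
    # Round trip 2: fetch those cluster files (concurrently, in real life).
    candidates = []
    for i in nearest:
        candidates += json.loads(store[f"cluster_{i}.json"])
    # Exact nearest neighbors locally over the downloaded candidates.
    candidates.sort(key=lambda item: dist(query, item[1]))
    return [vid for vid, _ in candidates[:k]]

vectors = {"a": [0.0, 0.0], "b": [0.1, 0.2], "c": [5.0, 5.0], "d": [5.1, 4.9]}
build_index(vectors, centroids=[[0.0, 0.0], [5.0, 5.0]])
print(search([4.8, 5.2], k=2))  # -> ['c', 'd']
```

A real system would fetch the per-cluster files in parallel, so the second round trip costs roughly one object-storage round trip no matter how many clusters are probed.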
Oh, with AI, you're actually not writing that much.

Simon Hørup Eskildsen: At that point I hadn't really thought too much about it. Well, no, actually, it was always clear to me that there was gonna be a lot of writes, because at Shopify the search clusters were doing, I don't know, tens or hundreds of queries per second, 'cause you just have a human sitting and typing. But I don't know how many updates there were per second into the cluster; I'm sure it was in the millions. So I always knew there was like a 10-to-100 ratio on the read/write. In the Readwise use case, there'd probably be even fewer reads relative to writes, right? There's just a lot of churn on the amount of stuff that was going through versus the amount of queries. But I wasn't thinking too much about that. I was mostly just thinking about: what's the fundamentally cheapest way to build a database in the cloud today using the primitives that you have available? And this is it, right? Now you have one machine, and, say, a terabyte of data in S3; you pay the $200 a month for that, and then maybe five to ten percent of that data needs to be on NVMe SSDs, and less than that in DRAM. You're paying very, very little to inflate the data.

swyx: By the way, when you say no one else has done that, would you consider Neon to be on a similar path, in terms of being sort of S3-first and separating compute and storage?

Simon Hørup Eskildsen: Yeah, I think what I meant with that is just building a completely new database. I don't know if we were the first. I just looked at the napkin math and was like, this seems really obvious. So I'm sure a hundred people came up with it at the same time, like the light bulb and every invention ever; it was just in the air. I think Neon was first to it.
And they retrofitted it onto Postgres, right? And they built this whole architecture where you have it in memory and then you sort of, you know, mmap back to S3. And I think that was very novel at the time to do it for OLTP, but I hadn't seen a database that was truly all in, right? Not retrofitting it; a database built purely for this, with no consensus layer, even using compare-and-swap on object storage to do consensus. I hadn't seen anyone go that all in. And I'm sure there was someone that did it before us, I don't know. I was just looking at the napkin math.

swyx: And when you say consensus layer, are you strongly relying on S3's strong consistency? You are. Okay.

Simon Hørup Eskildsen: That is our consensus layer. It is the consistency layer. And I think also, this is something that most people don't realize, but S3 only became consistent in December of 2020.

swyx: I remember this coming out during COVID, and it was just like a free upgrade.

Simon Hørup Eskildsen: Yeah.

swyx: They just announced it, "we have strong consistency, guys," and we were like, okay, cool.

Simon Hørup Eskildsen: And I'm sure they probably had it in prod for a while, and they were just like, it's done, right? And people were like, okay, cool. But that's a big moment, right? NVMe SSDs were also not in the cloud until around 2017. So in 2017 NVMe SSDs show up, and people are like, okay, cool, there's one SKU that does this, whatever. It takes a few years. And then the second thing is S3 becomes consistent in 2020. So now it means you don't have to have this big FoundationDB or ZooKeeper or whatever sitting there contending with the keys, which is how, you know, Snowflake and others have done it.

swyx: So much of that is just gone.

Simon Hørup Eskildsen: Exactly. Just gone, right?
And so you just push that to the, you know, however many hundreds of people they have working on S3. Solved. And then, compare-and-swap was not in S3 at this point in time.

swyx: By the way, I don't know what that is, so maybe you wanna explain.

Simon Hørup Eskildsen: Yes. So what compare-and-swap is, is basically: you can imagine that if you have a database, it might be really nice to have a file called metadata.json. And metadata.json could say things like, hey, these keys are here, and this file means that; there's lots of metadata you need to operate a database, right? And that's the simplest way to do it. So now you might have a lot of servers that wanna change the metadata. They might have written a file and want the metadata to contain that file. But you have a hundred nodes that are all contending on this metadata.json. Well, what compare-and-swap allows you to do is: you download the file, you make the modifications, and then you write it only if it hasn't changed while you did the modification; if it has, you retry. You just have these retry loops. Now, you can imagine if you have a hundred nodes doing that, it's gonna be really slow, but it will converge over time. That primitive was not available in S3. It wasn't available in S3 until late 2024, but it was available in GCP. The real story of this is certainly not that I sat down and big-brained it, like, okay, we're gonna start on GCS and S3 is gonna get it later. It was really not that; we got really lucky. We started on GCP because Shopify ran on GCP, so that was the platform I was most familiar with. And I knew the Canadian team there 'cause I'd worked with them at Shopify, and so it was natural for us to start there.
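The compare-and-swap retry loop Simon describes can be sketched like this. It's a toy model, assuming a versioned conditional put in the style of GCS's generation-match precondition (and the conditional writes S3 added in late 2024); the in-memory ObjectStore and function names are illustrative, not turbopuffer's API.

```python
import json

class ObjectStore:
    """Toy object store holding one metadata.json blob with a version number.
    put_if_version plays the role of a conditional PUT in GCS/S3."""
    def __init__(self):
        self.blob, self.version = json.dumps({"files": []}).encode(), 0

    def get(self):
        return self.blob, self.version

    def put_if_version(self, data, expected_version):
        # Compare-and-swap: the write succeeds only if nobody else wrote
        # since we read; otherwise the caller must re-read and retry.
        if self.version != expected_version:
            return False
        self.blob, self.version = data, self.version + 1
        return True

def register_file(store, filename, max_retries=100):
    """Add `filename` to metadata.json via the CAS retry loop."""
    for _ in range(max_retries):
        blob, version = store.get()        # download current metadata
        meta = json.loads(blob)
        meta["files"].append(filename)     # local modification
        if store.put_if_version(json.dumps(meta).encode(), version):
            return True                    # nobody raced us; done
        # Someone else won the race; loop re-reads and tries again.
    return False

store = ObjectStore()
for i in range(3):
    register_file(store, f"cluster_{i}.json")
print(json.loads(store.get()[0])["files"])
# -> ['cluster_0.json', 'cluster_1.json', 'cluster_2.json']
```

With a hundred writers the loop degrades, since every retry re-downloads and re-uploads the whole file, which is why Simon notes it's slow but converges; keeping the metadata file small keeps the conflict window small too.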
And so when we started building the database, we really thought we had to build a consensus layer, like have a ZooKeeper or something to do this. But then we discovered the compare-and-swap. It's like, oh, we can kick the can; we'll just do metadata.json and it's fine, it's probably fine. And we just kept kicking the can until we had very, very strong conviction in the idea, and then we kind of just hinged the company on the fact that S3 was probably gonna get this. It started getting really painful in mid-2024, 'cause we were closing a deal with Notion, actually, who were running in AWS, and we're like, trust us, you really want us to run this in GCP. And they're like, no, I don't know about that; we're running everything in AWS. And the latencies across the clouds were so big, and we had so much conviction, that we bought, you know, dark fiber between the AWS region in Oregon, at the interexchange, and GCP. They were like, we've never seen a startup do this, what's going on here? And we're just like, no, we don't wanna compromise on this. We were tuning TCP windows, everything, to get the latency down, 'cause we had such high conviction in not running a separate metadata layer outside object storage. So those were the three conditions, right? Compare-and-swap to do metadata, which wasn't in S3 until late 2024; S3 being consistent, which didn't happen until December 2020; and NVMe SSDs, which didn't land in the cloud until 2017.

swyx: I mean, in some ways it's a very big cloud success story that you were able to put this all together, but also doing things like buying dark fiber. That's actually something I've never heard.

Simon Hørup Eskildsen: I mean, it's very common when you're a big company, right? You're connecting your own data centers or whatever.
But it was uniquely a pain with Notion, because most of the time, if you're buying in Ashburn, Virginia, right, US East, the GCP and AWS data centers are within a millisecond of each other on the public exchanges. But in Oregon, uniquely, the GCP data center sits a couple hundred kilometers east of Portland, and the AWS region sits in Portland, but the network exchange they go through is through Seattle. So it's a full 14 milliseconds or something like that. So anyway, we were like, okay, we have to go through an exchange in Portland.

swyx: And you'd rather do this than run your ZooKeeper?

Simon Hørup Eskildsen: Yes, way rather. It doesn't have state. I don't want state in two systems. And I think all of that is informed by the fact that Justine, my co-founder, and I had just been on call for so long, and the worst outages are the ones where you have state in multiple places that's not syncing up. So it really came from a very pure source of pain: just imagining what we would be okay being woken up at 3:00 AM about, and something in ZooKeeper was not one of them.

swyx: When you're talking to, like, a Notion or something, do they care, or do they just...

Simon Hørup Eskildsen: They care about latency.

swyx: The latency cost. That's it.

Simon Hørup Eskildsen: They just cared about latency, right? And we just absorbed the cost. We're like, we have high conviction in this; at some point we can move them to AWS. So we'll buy the fiber, it doesn't matter. And it's like $5,000. Usually when you buy fiber, you buy multiple lines, and we're like, we can only afford one, but we tested that when it fails over to the public internet, it's super smooth.
And so we did a lot of... anyway, yeah, that's cool.

Alessio: You can imagine talking to the GCP rep and it's like, no, we're going to buy, because we know we're going to churn from you guys and go to AWS in six months. But in the meantime we'll do this.

Simon Hørup Eskildsen: I mean, this workload still runs on GCP, for what it's worth, because it was so reliable. So it was never about moving off GCP; it was just about honesty, just about giving Notion the latency that they deserved. And we didn't want them to have to care about any of this. They were like, oh, egress is going to be bad. We're like, okay, screw it, we'll just VPC-peer with you in AWS and eat the cost. Whatever needs to be done.

Alessio: And what were the actual workloads? Because when you think about AI, 14 milliseconds really doesn't matter in the scheme of a model generation.

Simon Hørup Eskildsen: Yeah, we were told the latency we had to beat. So we were just looking at the traces and then thinking, what are the other extensions of the trace? And there's a lot more to it, because if you have 14 versus seven milliseconds, you can fit in another round trip. So we had to tune TCP to send as much data in every round trip, pre-warm all the connections. There's a lot that compounds from having these kinds of round trips. But in the grand scheme it was just: we have to beat the latency of whatever we're up against.

swyx: Notion is a database company. They could have done this themselves; they do lots of database engineering. How do you even get in the door?
Just talk through that.

Simon Hørup Eskildsen: Last time I was in San Francisco, I was talking to one of the engineers who was one of our champions at Notion. They were just trying to make sure that the per-user cost matched the economics they needed. The way I think about it: I have to earn a return on whatever the clouds charge me, and then my customers have to earn a return on that. It's very simple, right? There has to be gross margin all the way up, and that's how you build the product. So our customers have to be happy with the set of trade-offs that turbopuffer makes, and if they are, that's great.

swyx: Do you feel like you're competing with build-internally versus buy, or buy versus buy?

Simon Hørup Eskildsen: Yeah, sorry, this was all to build up to your question. One of the Notion engineers told me that they'd sat down, probably on a napkin, and drawn out: why hasn't anyone built this? And then they saw turbopuffer, and it was literally that. And I think AI has also changed the buy-versus-build equation: it's not really about "can we build it," it's about "do we have time to build it." I think they felt like, okay, if this is a team that can do that, and they feel enough like an extension of our team, then we can go a lot faster, which would be very, very good for them. And they put us through the test; we had some very, very long nights to do that POC. They were our second big customer, after Cursor, which also was a lot of late nights.

swyx: Yeah. Should we go into that story? The Cursor story. They credit you a lot for working very closely with them.
I've heard this story from Sualeh's point of view, but I'm curious what it looks like from your side.

Simon Hørup Eskildsen: I actually haven't heard it from Sualeh's point of view, so maybe you can cross-reference it. The way I remember it: the day after we launched... I'd worked the whole summer on the first version. Justine wasn't part of it yet, because I didn't tell anyone that summer that I was working on this. I was just locked in on building it, because it's very easy otherwise to confuse talking about something with actually doing it. So I was like, I'm not going to do that, I'm just going to do the thing. I launched it, and at this point turbopuffer is a Rust binary running on a single eight-core machine in a tmux session. And me deploying it was looking at the request log and then Ctrl-C'ing it: okay, there's no requests, let's upgrade the binary. It was literally the scrappiest thing you could imagine, and it was on purpose, because at Shopify we did that all the time. We ran things in tmux to begin with, before something had at least an inkling of PMF. It was like, okay, is anyone even going to hear about this? And one of the Cursor co-founders, Arvid, reached out. The Cursor team are all IOI and IMO contenders, right? So they just speak in bullet points and facts. It was this amazing email exchange: this is how many QPS we have, this is what we're paying, this is where we're going, and so on. So we're just conversing in bullet points. I tried to get a call with them a few times, but they were really riding the PMF wave, this being late 2023. And one time Sualeh emails me at, like, five...
What was it, 4:00 AM Pacific time, saying, hey, are you open for a call now? I'm on the East Coast, so it was 7:00 AM. I was like, yeah, great, sure, whatever. And we just started talking, and something... I didn't know anything about sales then, but something just compelled me: I have to go see this team. There's something here. So I went to San Francisco, to their office, and the way I remember it, Postgres was down when I showed up. Did Sualeh tell you this? No? Okay. So Postgres was down and they were distracted with that, and I was trying my best to see if I could help in any way. I knew a little bit about databases, back to tuning autovacuum; it was like, I think you have to tune autovacuum. So we talked about that, and then that evening we talked about what it would look like to work with us. And I just said: look, we're all in; we will do whatever you tell us. They migrated everything over the next week or two, and we reduced their cost by 95%, which I think kind of fixed their per-user economics, and it solved a lot of other things. This is also when I asked Justine to come on as my co-founder. She was the best engineer I ever worked with at Shopify, she lived two blocks away, and we were just like, okay, we're going to get this done. And we did. We helped them migrate, and we worked like hell over the next month or two to make sure that we were never an issue. That was the Cursor story.

swyx: And is code a different workload than normal text? Is it just text, the same thing?

Simon Hørup Eskildsen: Yeah, so Cursor's workload is basically: they will embed the entire code base, right?
So they chunk it up however they do; they have their own embedding model, which they've been public about. On their evals, there's one where it's like a 25% improvement on a very particular workload; they have a bunch of blog posts about it. I think it works best on larger code bases, but they've trained their own embedding model to do this. So if you use the Cursor agent, you'll see it do searches, and they've also been public about how, I think, they post-trained their model to be very good at semantic search as well. That's how they use it. It's very good at queries like: can you find me code that's similar to this, or code that does this? And for those queries they also use grep to supplement it.

swyx: Yeah. It's been a big topic of discussion: is RAG dead, because grep, you know.

Simon Hørup Eskildsen: I mean, we see lots of demand from the coding companies for semantic search.

swyx: Search in every part. Yes.

Simon Hørup Eskildsen: We see demand. And I like case studies. I don't like doing thought pieces on "this is where it's going," trying to be all macroeconomic about AI; that has turned out to be a giant waste of time, because no one can really predict any of this. So I just collect case studies. Cursor has done a great job talking about what they're doing, and I hope some of the other coding labs that use turbopuffer will do the same. But it does seem to make a difference for particular queries. We can also do text, we can also do regex. I should also say that Cursor's security posture toward turbopuffer is exceptional. They have their own embedding model, which makes it very difficult to reverse-engineer. They obfuscate the file paths.
It's very difficult to learn anything about a code base by looking at it. And the other thing they do is that, for their customers, they encrypt it with their encryption keys in turbopuffer's bucket. So it's really, really well designed.

swyx: And so this is extra stuff they did to work with you, because you are not part of Cursor. And this is just best practice when working with any database, not just you guys. Okay, that makes sense. I think for me the learning is that all workloads are hybrid. You want the semantic, you want the text, you want the regex, you want SQL, I don't know. It's silly to be all-in on one particular query pattern.

Simon Hørup Eskildsen: I really like the way Sualeh at Cursor talks about it, and I'm going to butcher it here; I'm a database and scalability person, I don't know anything about training models other than what the internet tells me. The way he describes it is that this is just cached compute, right? You have a point in time where you're looking at some particular context, focused on some chunk, and you say: this is the layer of the neural net at this point in time. It seems fundamentally really useful to cache compute like that. How the value of that will change over time, I'm not sure, but there seems to be a lot of value in it.

Alessio: Maybe talk a bit about the evolution of the workload. Even search: maybe two years ago it was one search at the start of an LLM query to build the context. Now you have agentic search, however you want to call it, where the model is both writing and changing the code and searching it again later. Yeah.
What are maybe some of the new types of workloads, or changes you've had to make to your architecture for it?

Simon Hørup Eskildsen: I think you're right. When I think of RAG, I think of: hey, there's an 8,000-token context window and you'd better make it count. Search was a way to do that. Now everything is moving towards just letting the agent do its thing. And so, back to the thing from before: the LLM is very good at reasoning with the data, and we're just the tool call. That's increasingly what we see our customers doing. What we're seeing more demand for from our customers now is a lot of concurrency. Notion does a ridiculous number of queries in every round trip, just because they can. And now when I use the Cursor agent, I also see it doing more concurrency than I've ever seen before. So, a bit similar to how we designed the database to drive as much concurrency in every round trip as possible, that's also what the agents are doing. It means an enormous number of queries, all at once, to the dataset while it's warm, in as few turns as possible.

swyx: Can I clarify one thing on that?

Simon Hørup Eskildsen: Yes.

swyx: Are they batching multiple users, or is one user driving multiple?

Simon Hørup Eskildsen: One user driving multiple. One agent driving it.

swyx: It's parallel-searching a bunch of things.

Simon Hørup Eskildsen: Exactly.

swyx: Yeah, exactly. Cognition also did this for the fast-context thing: eight parallel at once.

Simon Hørup Eskildsen: Yes.

swyx: And an interesting problem is, how do you make sure you have enough diversity so you're not making the same request eight times?

Simon Hørup Eskildsen: And I think that's probably also where hybrid comes in. That's another way to diversify; it's a completely different way to do the search. That's a big change, right?
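The fan-out pattern described here, one agent issuing many diversified queries concurrently in a single turn, can be sketched from the client side. The `search` stub and the query strings are hypothetical illustrations, not turbopuffer's actual client API; a real client would call the search engine's query endpoint inside `search`.

```python
import asyncio


async def search(query: str) -> list[str]:
    """Stand-in for one search round trip; a real client would hit
    the search engine's query endpoint here."""
    await asyncio.sleep(0.01)  # simulate network + query latency
    return [f"doc-for:{query}"]


async def agent_turn(queries: list[str]) -> dict[str, list[str]]:
    """Issue all of an agent's queries concurrently: roughly one round
    trip of wall-clock latency instead of len(queries) sequential trips."""
    results = await asyncio.gather(*(search(q) for q in queries))
    return dict(zip(queries, results))


# Diversified queries for one agent turn: semantic, keyword, and regex
# variants of the same intent, so the parallel calls aren't duplicates.
queries = [
    "semantic: where is the auth token refreshed",
    "keyword: refresh_token",
    "regex: refresh_.*_token",
    "semantic: session expiry handling",
]
hits = asyncio.run(agent_turn(queries))
print(len(hits))  # 4
```

Mixing query types across the fan-out is one answer to the diversity problem swyx raises: each parallel slot probes the dataset a different way rather than repeating the same request.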
So before, it was really just one call, and then the LLM took however many seconds to return. But now we just see an enormous number of queries. So we've reduced query pricing. This is probably the first time I'm saying it, actually, but query pricing is being reduced about 5x, and we'll probably reduce it even more to accommodate these workloads of very large numbers of queries. That's one thing that's changed. The write-to-read ratio is still very high; there's still an enormous number of writes per read, but we're probably starting to see that change as people really lean into this pattern.

Alessio: Can we talk a little bit about the pricing? I'm curious, because traditionally a database would charge on storage, but now you have token generation, which is so expensive, where the actual value of a good search query is much higher because it's saving inference time down the line. How do you structure that, and what are people receptive to on the other side?

Simon Hørup Eskildsen: Yeah. turbopuffer's pricing in the beginning was just very simple. The pricing for search engines before turbopuffer was very server-full, right? Here's the VM, here's the per-hour cost, great. And I just sat down with a piece of paper and said: if turbopuffer was really good, this is probably what it would cost, with a little bit of margin. That was the first pricing of turbopuffer. It was vibe pricing, very vibe-priced, and I got it wrong. Well, I didn't get it wrong, but turbopuffer wasn't performing at that first-principles pricing yet. So when Cursor came on turbopuffer, it was like...
I didn't know any VCs. I didn't know anything about raising money or anything like that. I just saw that my GCP bill was a lot higher than the Cursor bill. So Justine and I were like, well, we have to optimize it. And, to the chagrin of the VCs now, it means that we're profitable, because we had so much pricing pressure in the beginning. It was running on my credit card, and Justine and I had spent tens of thousands of dollars on compute bills and on spinning up the company, on bad Canadian lawyers and things like that to get all of this done, because we just didn't know. If you're steeped in San Francisco, you just know: okay, you go out, raise a pre-seed round. I had never heard the word pre-seed at this point in time.

swyx: When you had Cursor, you had Notion, you had no funding?

Simon Hørup Eskildsen: With Cursor we had no funding. By the time we had Notion, Lachy was here. So it was really just: we vibe-priced it 100% from first principles, but it was not performing at first principles, so we just did everything we could to optimize it in the beginning, so that at least we could have a 5% margin or something and I wasn't freaking out, because Cursor's bill was also going up and up as they were growing. And so my liability was growing against my credit limit; I was actively calling my bank, like, I need a bigger credit line. Anyway, that was the beginning. But the pricing was storage, writes, and queries. And the pricing we have today is basically that pricing with duct tape and spit, trying to approach a margin on the physical underlying hardware. This year you're going to see more and more pricing changes from us.
Yeah.

swyx: And how much does stuff like VPC peering matter? Because you're working in AWS land, where egress is charged and all that.

Simon Hørup Eskildsen: We have an enterprise plan that just has a base fee, because we haven't had time to figure out SKU pricing for all of this. But yeah, you can run turbopuffer in SaaS; that's what Cursor does. You can run it in a single-tenant cluster, so it's just you; that's what Notion does. And then you can run it BYOC, where everything is inside the customer's VPC; that's what, for example, Anthropic does.

swyx: What I'm hearing is that this is probably the best CRO job for somebody who can come in and...

Simon Hørup Eskildsen: I mean...

swyx: ...help you with this.

Simon Hørup Eskildsen: turbopuffer hired, I don't know what number this was, but we had a full-time CFO as, like, the twelfth hire or something at turbopuffer. I hear a lot of companies, I don't know how they do it, have a hundred employees and no CFO.

swyx: Having a CFO is like running a business, man.

Simon Hørup Eskildsen: It's so good. Money Mike, he just handles the money and a lot of the business stuff, and he came in and helped with a lot of the operational side of the business. So COO, CFO, somewhere in between.

swyx: Just a quick mention of Lachy, because I'm curious. I've met Lachy, and he's obviously a very good investor, now at Physical Intelligence. I call him a generalist super angel, right? He invests in everything. And I always wonder: is there something appealing about an investor focused on developer tooling, on databases, someone who's invested in databases for 10 years, versus a Lachy, who can maybe connect you to all the customers that you need?

Simon Hørup Eskildsen: This is an excellent question.
No one's asked me this. Why Lachy? There were a couple of people we were talking to at the time, and when we were raising, we were a bit distressed, because one of our peers had just launched something that was very similar to turbopuffer. And someone gave me the advice at the time: choose the person where you feel like you can just pick up the phone, not prepare anything, and be completely honest. I don't think I've said this publicly before, but I just called Lachy and was like: Lachy, if this doesn't have PMF by the end of the year, we'll just return all the money to you. Justine and I don't want to work on this unless it's really working. So we want to give it the best shot this year, and we're really going to go for it; we're going to hire a bunch of people, and we're going to be honest with everyone. When I don't know how to play a game, I just play with open cards. And Lachy was the only person who didn't freak out. He was like, I've never heard anyone say that before. As I said, I didn't even know what a seed or pre-seed round was at this time, so I was just very honest with him. And I asked him: Lachy, have you ever invested in a database company? He was like, no. And at the time I was like, am I dumb? But there was something that really drew me to Lachy. He's so authentic, so honest, and there was something where I just felt like I could say everything openly. That was a perfect match at the time, and honestly still is. He was just like, okay, that's great; this is the most honest, ridiculous thing I've ever heard anyone say to me.

swyx: Why is this ridiculous?
Saying a competitor launched, this may not work out?

Simon Hørup Eskildsen: It was more just: if this doesn't work out, I'm going to close up shop by the end of the year. I don't know, maybe it's common; he told me it was uncommon. That's why we chose him, and he's been phenomenal. The other people we were talking to at the time were database experts. They knew a lot about databases, and Lachy didn't, and this turned out to be a phenomenal asset. Justine and I know a lot about databases. The people we hire know a lot about databases. What we needed was someone who didn't know a lot about databases, didn't pretend to, and just wanted to help us with candidates and customers. And he did. I have a list of the investors I have a relationship with, and Lachy has performed excellently in the number of sub-bullets we can attribute back to him. Absolutely incredible. And when people talk about no ego and just doing the best thing for the founder: even my lawyer is like, yeah, Lachy is the most friendly person you will find.

swyx: Okay, this is the most glowing recommendation I've ever heard.

Alessio: He deserves it. He's very special.

swyx: Yeah. Okay. Amazing.

Alessio: Since you mentioned candidates, maybe we can talk about team building. Especially in SF, it feels like it's easier to start a company than to join a company. I'm curious about your experience, especially not being in SF full-time and doing something that is very low-level and technical.

Simon Hørup Eskildsen: Yeah. So, joining versus starting: I never thought that I would be a founder. turbopuffer started as a blog post, and then it became a project, and then it sort of almost accidentally became a company.
And now it feels like it's becoming a bigger company. That was never the intention. The intentions were very pure: why hasn't anyone done this? And I want to be the first person to do it. I think some founders have this "I could never work for anyone else" thing; I really don't feel that way. It's just: I want to see this happen, I want to see it happen with people I really enjoy working with, and I want to have fun doing it. This has all felt very natural in that sense. So it was never join versus found; founding just found me at the right moment.

Alessio: Well, I think there's an argument that you should have joined Cursor, right? So I'm curious how you evaluated it: okay, I should actually go raise money and make this a company, versus, this is a company that's growing like crazy, it's an interesting technical problem, I should just build it within Cursor, and then they don't have to encrypt all this stuff, they don't have to obfuscate things. Was that on your mind at all?

Simon Hørup Eskildsen: Before taking the small check from Lachy, I did have a hard look at myself in the mirror: okay, do I really want to do this? Because if I take the money, I really have to do it, right? The way I almost think about it is that you kind of need to be fucked up enough to want to go all the way. And that was the conversation where I decided: okay, this is going to be part of my life's journey, to build this company and do it in the best way that I possibly can. Because if I ask people to join me, ask people to get on the cap table, then I have an ultimate responsibility to give it everything. And it doesn't occur to me that everyone takes it that seriously. Maybe I take it too seriously, I don't know.
But that was a very intentional moment. And so then it was very clear: okay, I'm going to do this and I'm going to give it everything.

Alessio: A lot of people don't take it this seriously.

swyx: Let's talk about your concept of the P99 engineer. People talk about 10x engineers, everyone's saying maybe engineers are out of a job, I don't know. But you have this definite idea of a P99 engineer, and I just want you to talk about it.

Simon Hørup Eskildsen: Yeah, so the P99 engineer was just a term that we started using internally to talk about candidates and about how we wanted to build the company. Like everyone else, we want a talent-dense company, and I think that's almost become trite at this point. What I credit the Cursor founders a lot with is that they arrived there from first principles: we just need a talent-dense team. And I've seen some teams that weren't talent-dense, which serves as the counterfactual; if you've been at a large company, you'll see that it just logically happens at a large company. So that was super important to me and Justine, and it's very difficult to maintain, so we needed wording for it. I have a document called Traits of the P99 Engineer, and it's a bullet-point list. I look at that list after every single interview that I do, and every recap we do ends with some version of: I'm going to reject this candidate, regardless of what the discourse was, because I want to see people fight for this person. The default should not be "we're going to hire this person"; the default should be "we're definitely not hiring this person."
And if everyone is just like, "ah, maybe," and no one throws a punch, then this is not the right person.

swyx: Do you operate like, there must be at least one champion who's like, yes, I will put my career on the line for this person?

Simon Hørup Eskildsen: Career on the line...

swyx: Maybe not their career, but...

Simon Hørup Eskildsen: You know, someone needs to have both fists up and be like, I'd fight for this person. And if one person says that, then okay, let's do it.

swyx: Yeah.

Simon Hørup Eskildsen: It doesn't have to be absolutely everyone, right? The interviews are all designed so you're checking for different attributes, and if someone is knocking it out of the park in every single attribute, that's fairly rare. But it's really important. And so the Traits of the P99 Engineer: there are lots of them. There are also the traits of the triple-nine engineer and the quadruple-nine engineer. It's a long list.

swyx: Okay.

Simon Hørup Eskildsen: I'll give you some samples of what we look for. I think the P99 engineer has some history of having bent their trajectory, or something, to their will. Some moment where they just made the computer do what it needed to do. Something like that will have occurred at some point in their career, and hopefully multiple times.

swyx: Give me an example from one of your engineers.

Simon Hørup Eskildsen: I'll give one. We launched this thing called ANN v3. We're also working on v4 and v5 right now, but ANN v3 can search a hundred billion vectors with a p50 of around 40 milliseconds and a p99 of 200 milliseconds.
Maybe other people have done this, I'm sure Google and others have, but we haven't seen anyone do it, at least not in a publicly consumable SaaS. And that was an engineer, the chief architect of turbopuffer, Nathan, who more or less just bent this. The software was not capable of it, and he made it capable, for a very particular workload, in a six-to-eight-week period, with the help of a lot of the team. There are numerous examples of that at turbopuffer, but that's really bending the software and x86 to your will. It was incredible to watch. You want to see some moments like that.

swyx: Isn't that triple-nine?

Alessio: Quadruple-nine? I feel like this bar is too high.

Simon Hørup Eskildsen: Nathan is... yeah, there's a lot of nines with Nathan. Okay. So I think that's one trait. I think another trait is that the P99 engineer spends a lot of time looking at maps. Generally it's their preferred UX; they just love looking at maps. Have you ever seen someone who just sits on their phone and scrolls around on a map? You guys don't look at maps?

swyx: I guess I'm not feeling it, I don't know.

Simon Hørup Eskildsen: What about trains? Do you like trains?

swyx: Not enough. Okay, this is just weaponized autism, is what I call it.

Simon Hørup Eskildsen: I love looking at maps; it's like my preferred UX. I like looking at lots of random places.

Alessio: Okay, there you go. So with the random places, how do you explore the maps?

Simon Hørup Eskildsen: No, it's just a joke.

swyx: It's the autism laugh.
It's like you're just obsessed by something and you like studying a thing.

Simon Hørup Eskildsen: The origin of this was that at some point I read an interview with some IOI gold medalist.

swyx: Uh-huh.

Simon Hørup Eskildsen: And it's like, what do you do in your spare time? "I like looking at maps." I felt so seen. I just love zooming around: oh, Canada is so big, where's Baffin Island? I love it. Anyway, so, traits of the P99: the P99 is obsessive, right? You'll find traits of that, and we do multiple interviews at turbopuffer that try to screen for some of these things. There are lots of others, but these are the kinds of traits that we look for.

swyx: I'll tell you, some people listen for some of my devrel stuff, and I do think about devrel as maps. You draw a map for people. Maps show you what is commonly agreed to be the geographical features, where a boundary is, and they also show you what the thing does not do. I think a lot of developer-tools companies try to tell you they can do everything, but let's be real: your three landmarks are here; everyone comes here, then here, then here. You draw a map, and then you draw a journey through the map. To me, that's what developer relations looks like. So I do think about things that way.

Simon Hørup Eskildsen: I think the P99 thinks in trade-offs, right? The P99 is very clear about, hey, you can't run a high-transaction workload on turbopuffer; the write latency is a hundred milliseconds. That's a clear trade-off. The P99 is very good at articulating the trade-offs in every decision, which is exactly what the map is in your case, right?

swyx: Yeah, yeah.
Alessio: How do you reconcile some of these things, when you're saying you bend the computer to your will versus, like, the trade-offs...
Bryan Cantrill is the co-founder and CTO of Oxide Computer Company. We discuss why the biggest cloud providers don't use off-the-shelf hardware, how scaling data centers at Samsung's scale exposed problems with hard drive firmware, how the values of NodeJS are in conflict with robust systems, choosing Rust, and the benefits of Oxide Computer's rack-scale approach. This is an extended version of an interview posted on Software Engineering Radio. Related links: Oxide Computer, Oxide and Friends, Illumos, Platform as a Reflection of Values, RFD 26, bhyve, CockroachDB, Heterogeneous Computing with Raja Koduri. Transcript: You can help correct transcripts on GitHub.

Intro

[00:00:00] Jeremy: Today I am talking to Bryan Cantrill. He's the co-founder and CTO of Oxide Computer Company, he was previously the CTO of Joyent, and he also co-authored the DTrace tracing framework while he was at Sun Microsystems.

[00:00:14] Jeremy: Bryan, welcome to Software Engineering Radio.

[00:00:17] Bryan: Uh, awesome. Thanks for having me. It's great to be here.

[00:00:20] Jeremy: You're the CTO of a company that makes computers. But I think before we get into that: a lot of people who build software, now that the actual computer is abstracted away, are using AWS or some kind of cloud service. So I thought we could start by talking about data centers, 'cause you were previously working at Joyent, and I believe you got bought by Samsung, and you've previously talked about how you had to figure out how to run things at Samsung's scale. So how was your experience with that? What were the challenges there?

Samsung scale and migrating off the cloud

[00:01:01] Bryan: Yeah, I mean, so at Joyent... Joyent was a cloud computing pioneer. We competed with the likes of AWS and then later GCP and Azure. And we were operating at a scale, right?
We had a bunch of machines, a bunch of DCs, but ultimately, you know, we were a VC-backed company, and a small company by the standards of, certainly, Samsung.

[00:01:25] Bryan: And so when Samsung bought the company... I mean, the reason, by the way, that Samsung bought Joyent is that Samsung's cloud bill was, uh, let's just say it was extremely large. They were spending an enormous amount of money every year on the public cloud. And they realized that in order to secure their fate economically, they had to be running on their own infrastructure.

[00:01:51] Bryan: It did not make sense. And there was not really a product that Samsung could go buy that would give them that on-prem cloud. In that regard, the state of the market was really no different. And so they went looking for a company, and bought Joyent. And it was when we were on the inside of Samsung

[00:02:11] Bryan: that we learned about Samsung scale. And Samsung loves to talk about Samsung scale. And I gotta tell you, it is more than just chest-thumping. Samsung scale really is... I mean, just the sheer number of devices, the number of customers, just the absolute size. They really wanted to take us to levels of scale, certainly, that we had not seen.

[00:02:31] Bryan: The reason for buying Joyent was to be able to stand up their own infrastructure, so we did go buy a bunch of hardware.

Problems with server hardware at scale

[00:02:40] Bryan: And I remember just thinking, God, I hope Dell is somehow magically better. I hope the problems that we have seen in the small... I just remember hoping, and hope is not a strategy. It was, of course, a terrible strategy, and it was a terrible strategy here too.
And the problems that we saw at the large... when you scale out, the problems that you see kind of once or twice you now see all the time, and they become absolutely debilitating.

[00:03:12] Bryan: And we saw a whole series of really debilitating problems. In many ways, comically debilitating, in terms of showing just how bad the state of the art is. And we had, it should be said, great software and great software expertise, and we were controlling our own system software.

[00:03:35] Bryan: But even controlling your own system software, your own host OS, your own control plane, which is what we had at Joyent, ultimately you're pretty limited. You can obviously solve the problems that are in your own software, but the problems that are beneath you, the problems that are in the hardware platform, in the componentry beneath you, become the problems that are in the firmware.

IO latency due to hard drive firmware

[00:04:00] Bryan: Those problems become unresolvable, and they are deeply, deeply frustrating. And we just saw a bunch of 'em. Again, they were comical in retrospect, and I'll give you a couple of concrete examples just to give you an idea of what you're looking at. One of our data centers had really pathological IO latency.

[00:04:23] Bryan: We had a very database-heavy workload. And this was kind of right at the period where you were still deploying on rotating media, on hard drives. An all-flash buy did not make economic sense when we did this in 2016. It'd be interesting to know when was kind of the last time that actual hard drives made sense,

[00:04:50] Bryan: 'cause I feel this was close to it.
So we had a bunch of pathological IO problems, but we had one data center in which the outliers were actually quite a bit worse, and there was so much going on in that system that it took us a long time to figure out why. Because when you're seeing worse IO, you naturally want to understand what the workload is doing.

[00:05:14] Bryan: You're trying to take a first-principles approach: what's the workload doing? So this was a very intensive database workload to support the object storage system that we had built, called Manta. The metadata tier was stored in Postgres, and that was just getting absolutely slaughtered.

[00:05:34] Bryan: And ultimately very IO-bound, with these kind of pathological IO latencies. And we were trying to peel away the layers to figure out what was going on. And I finally had this thing: okay, at the device layer, at the disk layer, we are seeing pathological outliers in this data center that we're not seeing anywhere else.

[00:06:00] Bryan: And that does not make any sense. And the thought occurred to me: well, do we have a different rev of firmware on our HGST drives? HGST, now part of WD, Western Digital, were the drives that we had everywhere. So maybe I had a firmware bug.

[00:06:20] Bryan: This would not be the first time in my life, at all, that I would have a drive firmware issue. And I went to go pull the firmware rev, and I'm like, Toshiba makes hard drives? I had no idea that Toshiba even made hard drives, let alone that they were in our data center.

[00:06:38] Bryan: I'm like, what is this?
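The kind of tail-latency hunting Bryan describes, finding the fleet members whose outliers don't match everyone else's, can be sketched with a few lines of percentile math. This is a hypothetical illustration, not Joyent's actual tooling; the drive names are made up, and only the ~2,700 ms stall figure is taken from the story.

```python
import random
import statistics

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (milliseconds)."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, max(0, int(round(p / 100.0 * len(ordered))) - 1))
    return ordered[idx]

def flag_outlier_drives(latencies_by_drive, factor=10.0):
    """Flag drives whose p99 latency is `factor`x worse than the fleet median p99."""
    p99s = {d: percentile(s, 99) for d, s in latencies_by_drive.items()}
    fleet_median = statistics.median(p99s.values())
    return sorted(d for d, p99 in p99s.items() if p99 > factor * fleet_median)

random.seed(1)
# Healthy drives: ~5 ms reads with mild jitter.
fleet = {f"hgst-{i}": [random.gauss(5, 1) for _ in range(1000)] for i in range(9)}
# One drive that occasionally stalls for ~2,700 ms, like the firmware bug described.
fleet["toshiba-0"] = [2700.0 if random.random() < 0.05 else random.gauss(5, 1)
                      for _ in range(1000)]

print(flag_outlier_drives(fleet))  # ['toshiba-0']
```

The point of comparing against the fleet median rather than an absolute threshold is exactly the investigation above: the averages looked plausible everywhere, and only the outlier distribution in one place gave the bad firmware away.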
And as it turns out, and this is part of the challenge when you don't have an integrated system, which, not to pick on them, but Dell doesn't: Dell would routinely just make substitutes. It's kind of like you're going to, I don't know, Instacart or whatever, and they're out of the thing that you want.

[00:07:03] Bryan: So someone makes a substitute, and sometimes that's okay, but it's really not okay in a data center. You really want to develop and validate an end-to-end integrated system. And in this case, Toshiba does make hard drives, but they basically were not competitive, and they were not competitive in part for the reasons that we were discovering.

[00:07:29] Bryan: They had really serious firmware issues. These were drives that would just simply stop acknowledging any reads, on the order of 2,700 milliseconds. Long time. 2.7 seconds. It was a drive firmware issue, but it highlighted a much deeper issue, which was the simple lack of control that we had over our own destiny.

[00:07:53] Bryan: And it's an example among many where Dell is making a decision that lowers the cost of what they are providing you marginally, but is then giving you a system that they shouldn't have any confidence in, because it's not one that they've actually designed, and they leave it to the customer, the end user, to make these discoveries.

[00:08:18] Bryan: And these things happen up and down the stack. And not just to pick on Dell, because it's true for HPE, it's true for Supermicro, it's true for your switch vendors.
It's true for storage vendors, where the one that is left actually integrating these things and trying to make the whole thing work is the end user sitting in their data center.

AWS / Google are not buying off the shelf hardware but you can't use it

[00:08:42] Bryan: There's not a product that they can buy that gives them elastic infrastructure, a cloud, in their own DC. The product that you buy is the public cloud. When you go into the public cloud, you don't worry about this stuff, because it's AWS's issue or it's GCP's issue. And they are the ones that get this to ground.

[00:09:02] Bryan: And this was kind of the eye-opening moment. Not a surprise: they are not Dell customers. They're not HPE customers. They're not Supermicro customers. They have designed their own machines, to varying degrees, depending on which one you're looking at. But they've taken the clean sheet of paper, and the frustration that we had at Joyent, and then at Samsung, wondering what was next, is that what they built was not available for purchase in the data center.

[00:09:35] Bryan: You could only rent it in the public cloud. And our big belief is that public cloud computing is a really important revolution in infrastructure. Doesn't feel like a deep thought, but cloud computing is a really important revolution. It shouldn't only be available to rent. You should be able to actually buy it.

[00:09:53] Bryan: And there are a bunch of reasons for doing that. The one we saw at Samsung is economics, which I think is still the dominant reason: it just does not make sense to rent all of your compute in perpetuity. But there are other reasons too. There's security, there's risk management, there's latency.

[00:10:07] Bryan: There are a bunch of reasons why one might want to own one's own infrastructure.
But that was very much the genesis for Oxide: coming out of this very painful experience. And, a long answer to your question about what it was like to be at Samsung scale:

[00:10:27] Bryan: those are the kinds of things that... I mean, in our other data centers we didn't have Toshiba drives, we only had the HGST drives. It's only when you get to this larger scale that you begin to see some of these pathologies. But these pathologies then are really debilitating for those who are trying to develop a service on top of them.

[00:10:45] Bryan: So it was very educational in that regard. And we're very grateful for the experience at Samsung, in terms of opening our eyes to the challenge of running at that kind of scale.

[00:10:57] Jeremy: Yeah, because I think as software engineers, a lot of times we treat the hardware as a given.

[00:11:08] Bryan: Yeah. Yeah.

There's software in hard drives

[00:11:09] Jeremy: It sounds like in this case, maybe the issue is not so much that Dell or HP as a company doesn't own every single piece that they're providing you, but rather the fact that they're swapping pieces in and out without advertising them, and then when it becomes a problem, they're not necessarily willing to deal with the consequences of that.

[00:11:34] Bryan: They just don't know. I mean, I think they just genuinely don't know. It's not like they're making a deliberate decision to kind of ship garbage. I think it's exactly what you said about not thinking about the hardware. It's like, what's a hard drive?

[00:11:47] Bryan: I mean, it's a hard drive. It's got the same specs as this other hard drive, and, you know, it's a little bit cheaper, so why not?
It's like, well, there are some reasons why not, and one of the reasons why not is that even a hard drive, whether it's rotating media or flash, is not just hardware.

[00:12:05] Bryan: There's software in there. And that software's not the same. I mean, there are components where... if you're looking at a resistor or a capacitor or something like this, yeah, if you've got two parts that are within the same tolerance, sure, maybe. Although even the EEs, I think, would be objecting to that a little bit.

[00:12:19] Bryan: But the more complicated you get, and certainly once you get to the kind of hardware we think of, a microprocessor, a network interface card, a hard drive, an NVMe drive:

[00:12:38] Bryan: those things are super complicated, and there's a whole bunch of software inside of them, the firmware. And that's the stuff that... you say that software engineers don't think about that; no one can really think about that, because it's proprietary. It's kind of welded shut, and you've got this abstraction into it.

[00:12:55] Bryan: But the way that thing operates is very core to how the thing in aggregate will behave. And I think the fundamental difference between Oxide's approach and the approach that you get at a Dell, HPE, Supermicro, wherever, is really thinking holistically in terms of hardware and software together, in a system that ultimately delivers cloud computing to a user.

[00:13:22] Bryan: And there's a lot of software at many, many different layers. And it's very important to think about that software and that hardware holistically, as a single system.
[00:13:34] Jeremy: And during that time at Joyent, when you experienced some of these issues, was it more of a case of you didn't have enough servers experiencing this? So if it would happen, you might say, well, this one's not working, so maybe we'll just replace the hardware. What was the thought process when you were working at that smaller scale, and how did these issues affect you?

UEFI / Baseboard Management Controller

[00:13:58] Bryan: Yeah, at the smaller scale, you see fewer of them, right? What you might see is like, that's weird, we kind of saw this in one machine, versus seeing it in a hundred or a thousand or 10,000. So you just see them less frequently, and as a result they are less debilitating.

[00:14:16] Bryan: When you go to that larger scale, those things that were unusual now become routine, and they become debilitating. So it really is, in many regards, a function of scale. And then I think it was also a little bit dispiriting that the substrate we were building on really had not improved.

[00:14:39] Bryan: If you buy a computer server, an x86 server, there is a very low layer of firmware: the BIOS, the basic input/output system, the UEFI BIOS. And this is an abstraction layer that has existed since the eighties and hasn't really meaningfully improved. The transition to UEFI happened, ironically, with Itanium, two decades ago.

[00:15:08] Bryan: But beyond that, this lowest layer of platform enablement software is really only impeding the operability of the system.
You look at the baseboard management controller, which is kind of the computer within the computer. There is an element in the machine that needs to handle environmentals, that needs to operate the fans and so on.

[00:15:31] Bryan: And that traditionally is the baseboard management controller, and that architecturally just hasn't improved in the last two decades. And it's a proprietary piece of silicon, generally from a company that no one's ever heard of called ASPEED, which is written in all caps, so I guess it needs to be screamed.

[00:15:50] Bryan: ASPEED has a proprietary part where, infamously, there is a root password encoded effectively in silicon. And for anyone who goes deep into these things: oh my God, are you kidding me? When we first started Oxide, the wifi password was a fraction of the ASPEED root password for the BMC.

[00:16:16] Bryan: It's kind of a little BMC humor. But those things... it was just dispiriting that the state of the art was still basically personal computers running in the data center. And that's part of the motivation for doing something new.

[00:16:32] Jeremy: And for the people using these systems, whether it's the baseboard management controller or the BIOS or UEFI component, what are the actual problems that people are seeing?

Security vulnerabilities and poor practices in the BMC

[00:16:51] Bryan: Oh man. You are going to have some fraction of your listeners, maybe a big fraction, where it's like, yeah, what are the problems? That's a good question. And then you're gonna have the people that actually deal with these things, whose heads already hit the desk, being like, what are the problems?

[00:17:06] Bryan: Like, what are the non-problems? Like, what works?
Actually, that's a shorter answer. I mean, there are so many problems, architecturally, and the problems spread to the horizon, so you can kind of start wherever you want.

[00:17:24] Bryan: But as a really concrete example: okay, so the BMC, the computer within the computer, needs to be on its own network. So you now have not one network, you've got two networks. And that network, by the way, is the network that you're gonna log into to reset the machine when it's otherwise unresponsive.

[00:17:44] Bryan: So going into the BMC, you're able to control the entire machine. Well, it's like, all right, so now I've got a second network that I need to manage. What is running on the BMC? Well, it's running some ancient, ancient version of Linux. It's like, well, how do I patch that?

[00:18:02] Bryan: How do I manage the vulnerabilities with that? Because if someone is able to root your BMC, they control the system. And now you've gotta go deal with all of the operational hair around that: how do you upgrade that system, updating the BMC? You've got this second, shadow, bad infrastructure that you have to go manage.

[00:18:23] Bryan: Generally not open source. There's something called OpenBMC, which people use to varying degrees, but you're generally stuck with the proprietary BMC: with iLO from HPE, or iDRAC from Dell, or Supermicro's BMC. And it is just excruciating pain.

[00:18:49] Bryan: And this is assuming, by the way, that everything is behaving correctly. The problem is that these things often don't behave correctly, and then there's the consequence of them not behaving correctly.
It's really dire, because it's at that lowest layer of the system. So I'll give you a concrete example.

[00:19:07] Bryan: A customer reported this to me, so I won't disclose the vendor, but let's just say that a well-known vendor had an issue where their temperature sensors were broken. The thing would always read basically the wrong value. So it was the BMC that had to invent a different kind of thermal control loop.

[00:19:28] Bryan: And it would index on the actual inrush current. They would look at the current that's going into the CPU to adjust the fan speed. That's a great example of something that's an interesting idea that doesn't work, 'cause that's actually not the temperature.

[00:19:45] Bryan: So that software would crank the fans whenever you had an inrush of current, and this customer had a workload that would spike the current. When it would spike the current, the fans would kick up, and then they would slowly degrade over time. Well, this workload was spiking the current faster than the fans would degrade, but not fast enough to actually heat up the part.

[00:20:08] Bryan: And ultimately, over a very long time, in a very painful investigation, this customer determined: my fans are cranked in my data center for no reason. We're blowing cold air. And this is on the order of a hundred watts a server of energy that you shouldn't be spending. And ultimately what that comes down to is this kind of broken software/hardware interface at the lowest layer, which has real, meaningful consequence, in terms of hundreds of kilowatts across a data center. So this stuff has very, very real consequence, and it's such a shadowy world.
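The broken control loop Bryan describes can be captured in a toy simulation: if fan speed is keyed to current spikes and decays slowly, a workload that spikes current faster than the decay keeps the fans pinned even though the part never heats up. This is a hypothetical sketch of the failure mode, not the vendor's actual firmware; the threshold and decay rate are made up.

```python
def simulate_fans(current_draw, spike_threshold=80.0, decay=1.0):
    """Toy model of the buggy control loop: fans jump to 100% on a current
    spike and then ramp down slowly, regardless of actual temperature."""
    fan = 0.0
    history = []
    for amps in current_draw:
        if amps > spike_threshold:
            fan = 100.0                   # inrush current detected: crank the fans
        else:
            fan = max(0.0, fan - decay)   # slow ramp-down between spikes
        history.append(fan)
    return history

# A workload that spikes current every 20 ticks: faster than the fans decay,
# but (in the real incident) not fast enough to actually heat the part.
workload = [100.0 if t % 20 == 0 else 30.0 for t in range(200)]
fans = simulate_fans(workload)
print(min(fans[20:]))  # 81.0: the fans never spin down, blowing cold air
```

With a 1%-per-tick decay and a spike every 20 ticks, the fans can never fall below about 80%, which is the "cranked fans, cold air, wasted watts" state the customer discovered.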
Part of the reason that your listeners who have dealt with this, whose heads will hit the desk, is because it is really aggravating to deal with problems at this layer.

[00:21:01] Bryan: You feel powerless. You don't control, or really see, the software that's on them. It's generally proprietary. You are relying on your vendor, and your vendor is telling you, boy, I don't know, you're the only customer seeing this. The number of times I have heard that... and I have pledged that we're not gonna say that at Oxide, because it's such an unaskable thing to say: you're the only customer seeing this.

[00:21:25] Bryan: It feels like, are you blaming me for my problem? It feels like you're blaming me for my problem. And what you begin to realize is that, to a degree, these folks are speaking their own truth, because the folks that are running at real scale, at hyperscale, those folks aren't Dell, HPE, Supermicro customers.

[00:21:46] Bryan: They've actually done their own thing. So it's like, yeah, Dell's not seeing that problem, because they're not running at the same scale. But you only have to run at modest scale before these things just become overwhelming, in terms of the headwind that they present to people that want to deploy infrastructure.

The problem is felt with just a few racks

[00:22:05] Jeremy: Yeah. So maybe to help people get some perspective: at what point do you think people start noticing or start feeling these problems? Because I imagine that if you just have a few racks, or...

[00:22:22] Bryan: Do you have a couple racks, or are you just wondering? No, no, no. I think anyone who deploys any number of servers, especially now, especially if your experience is only in the cloud, you're gonna be like, what the hell is this? Just to get this thing working at all.
[00:22:39] Bryan: It is so hairy and so congealed, right? It's not designed. It's accreted, and it's so obviously accreted. Nobody who is setting up a rack of servers is gonna think to themselves, yes, this is the right way to go do it, this all makes sense. Because it's just not. It feels like a kit... I mean, kit car is almost too generous, because that implies there's a set of plans that works in the end.

[00:23:08] Bryan: It's a bag of bolts. It's a bunch of parts that you're putting together. And so even at the smallest scales, that stuff is painful. It's architecturally painful at the small scale, but at least you can get it working. I think the stuff that becomes debilitating at larger scale is the stuff that's worse than "this thing is a mess to get working":

[00:23:31] Bryan: it's things like the fan issue, where you are now seeing this over hundreds of machines or thousands of machines. So it is painful at more or less all levels of scale. There is no level at which the PC, which is really what this is, the personal computer architecture from the 1980s, is the right unit,

Running elastic infrastructure is the hardware but also, hypervisor, distributed database, api, etc

[00:23:57] Bryan: where that's the right thing to go deploy, especially if what you are trying to run is elastic infrastructure, a cloud. Because the other thing is, we've kind of been talking a lot about that hardware layer. Hardware is just the start. You actually gotta go put software on that and run it as elastic infrastructure.

[00:24:16] Bryan: So you need a hypervisor, yes, but you need a lot more than that.
You need a distributed database, you need web endpoints, you need a CLI, you need all the stuff it takes to actually go run an actual service of compute or networking or storage. And even for compute, there's a ton of work to be done.

[00:24:39] Bryan: And compute is, by far, I would say, the simplest of the three. When you look at network services and storage services, there's a whole bunch of stuff that you need to go build, in terms of distributed systems, to actually offer that as a cloud. So it is painful at more or less every level if you are trying to deploy cloud computing on it.

What's a control plane?

[00:25:00] Jeremy: And for someone who doesn't have experience building or working with this type of infrastructure, when you talk about a control plane, what does that do in the context of this system?

[00:25:16] Bryan: So the control plane is everything between your API request and that infrastructure actually being acted upon. You go say, hey, I want to provision a VM. Okay, great. We've got a whole bunch of things we're gonna provision with that. We're gonna provision a VM, we're gonna get some storage that's gonna go along with that, which is gonna come out of a network storage service, and we've got a virtual network that we're gonna either create or attach to.

[00:25:39] Bryan: We've got a whole bunch of things we need to go do for that. For all of these things, there are metadata components that we need to keep track of, beyond the actual infrastructure that we create. And then we need to go actually act on the actual compute elements, the host OS, the switches, what have you, and actually go

[00:25:56] Bryan: create these underlying things and then connect them. And of course, the challenge of just getting that working is a big challenge.
But getting that working robustly... when you go to provision a VM, there are all the steps that need to happen, and what happens if one of those steps fails along the way?

[00:26:17] Bryan: One thing we're very mindful of is that you get these long tails: generally our VM provisioning happens within this time, but we get these long tails where it takes much longer. What's going on? Where in this process are we actually spending time?

[00:26:33] Bryan: And there's a whole lot of complexity that you need to deal with there. There's a lot of complexity in this workflow that's gonna go create these things and manage them. We use a pattern called sagas, which is actually a database pattern from the eighties.

[00:26:51] Bryan: Caitie McCaffrey is a database researcher who, I think, reintroduced the idea of sagas in the last decade or so. And this is something that we picked up and have done a lot of really interesting things with, to allow for these workflows to be managed, and done so robustly, in a way that you can restart them and so on.

[00:27:16] Bryan: And then you get this whole distributed system that can do all this. That whole distributed system itself needs to be reliable and available. So, you know, what happens if you pull a sled, or if a sled fails? How does the system deal with that?

[00:27:33] Bryan: How does the system deal with getting another sled added to the system? How do you actually grow this distributed system? And then how do you update it? How do you actually go from one version to the next? And all of that has to happen across an air gap, where this is gonna run as part of the computer.
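A saga, in the sense Bryan describes, is a workflow of steps where each step has a compensating action; if a later step fails, the completed steps are undone in reverse order. Here is a minimal sketch. It is not Oxide's actual implementation (theirs is a Rust library with durable, restartable sagas); the VM-provisioning step names below are hypothetical.

```python
class SagaFailed(Exception):
    """Raised after a failed step, once all completed steps have been undone."""

def run_saga(steps, ctx):
    """Run (name, action, compensate) steps in order; on failure, run the
    compensating actions of the completed steps in reverse order, then raise."""
    done = []
    for name, action, compensate in steps:
        try:
            action(ctx)
            done.append((name, compensate))
        except Exception as exc:
            for _, undo in reversed(done):
                undo(ctx)  # unwind in reverse order
            raise SagaFailed(f"{name} failed: {exc}") from exc
    return ctx

# Hypothetical VM-provisioning saga: allocate storage, create a virtual NIC,
# then boot the VM. If the boot fails, the NIC and volume are released again.
log = []
steps = [
    ("alloc-volume", lambda c: log.append("vol+"), lambda c: log.append("vol-")),
    ("create-vnic",  lambda c: log.append("nic+"), lambda c: log.append("nic-")),
    ("boot-vm",      lambda c: (_ for _ in ()).throw(RuntimeError("no capacity")),
                     lambda c: log.append("vm-")),
]
try:
    run_saga(steps, {})
except SagaFailed:
    pass
print(log)  # ['vol+', 'nic+', 'nic-', 'vol-']
```

The value of the pattern for a control plane is exactly the failure case: a half-provisioned VM never leaks orphaned volumes or NICs, because every completed step knows how to undo itself. Making this restartable across crashes, which the real system needs, requires persisting the saga state as well.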
[00:27:49] Bryan: So it is fractally complicated. There is a lot of complexity here in the software system, and all of that we call the control plane. And this is what exists at AWS, at GCP, at Azure: when you are hitting an endpoint that's provisioning an EC2 instance for you,

[00:28:10] Bryan: there is an AWS control plane that is doing all of this, and it has some of these similar aspects, and certainly some of these similar challenges.

Are vSphere / Proxmox / Hyper-V in the same category?

[00:28:20] Jeremy: And for people who have run their own servers with something like, say, VMware or Hyper-V or Proxmox, are those in the same category?

[00:28:32] Bryan: Yeah, I mean, a little bit. Kind of: vSphere, yes; VMware ESX alone, no. VMware ESX is kind of a key building block upon which you can build something that is a more meaningful distributed system. When it's just a machine that you're provisioning VMs on, you as the human might be the control plane. That's a much easier problem.

[00:28:52] Bryan: But when you've got tens, hundreds, thousands of machines, you need to do it robustly. You need something to coordinate that activity: you need to pick which sled you land on, you need to be able to move these things, you need to be able to update that whole system.

[00:29:06] Bryan: That's when you're getting into a control plane. So some of these things have kind of edged into a control plane. Certainly VMware, now Broadcom, has delivered something that's kind of cloud-ish. I think that for folks that are truly born on the cloud, it still feels somewhat like you're going backwards in time when you look at these kind of on-prem offerings.

[00:29:29] Bryan: But it's got these aspects to it, for sure.
And with some of these other things, when you're just looking at KVM or just looking at Proxmox, you kind of need to connect it to other, broader things to turn it into something that really looks like manageable infrastructure. [00:29:47] Bryan: And many of those projects are either proprietary products, like vSphere, or you are dealing with open source projects that are not necessarily aimed at the same level of scale. You look at, again, Proxmox, or OpenStack. [00:30:05] Bryan: And OpenStack is just a lot of things, right? OpenStack was kind of a free-for-all for every infrastructure vendor. And there was a time people were like, aren't you worried about all these companies coming together for OpenStack? [00:30:24] Bryan: I'm like, haven't you ever worked for a company? Companies don't get along. Having multiple companies work together on a thing is bad news, not good news. And I think one of the things OpenStack has definitely struggled with is that there are so many different vendor elements in there that it's very much not a product; it's a project that you're trying to run. [00:30:47] Bryan: But it very much is similar, certainly in spirit. [00:30:53] Jeremy: And so I think this is what you were alluding to earlier: the piece that allows you to allocate compute and storage, manage networking, and gives you that experience of, I can go to a web console or I can use an API, and I can spin up machines and get them all connected. At the end of the day, the control plane is allowing you to do that, in hopefully a user-friendly way. [00:31:21] Bryan: That's right. Yep.
And in order to do that in a modern way, it's not just a user-friendly way: you really need to have a CLI and a web UI and an API, and those all need to be drawn from the same single ground truth. You don't want any of those to be an afterthought to the others. [00:31:39] Bryan: You want the same way of generating all of those different endpoints and entries into the system. Building a control plane now has better tools (Rust, CockroachDB) [00:31:46] Jeremy: And if you take your time at Joyent as an example: what kind of tools existed for that, versus how much did you have to build in-house, as far as the hypervisor and managing the compute and all of that? [00:32:02] Bryan: Yeah, so we built more or less everything in-house. I think over time we've gotten slightly better tools. And maybe it's a little bit easier to talk about the tools we started with at Oxide, because we started with a clean sheet of paper at Oxide. [00:32:16] Bryan: We knew we wanted to go build a control plane, but we were able to revisit some of the components, so maybe I'll talk about some of those changes. For example, at Joyent, when we were building a cloud, there wasn't really a good distributed database. [00:32:34] Bryan: So we were using Postgres as our database for metadata, and there were a lot of challenges. Postgres is not a distributed database; it runs with a primary/secondary architecture, and there's a bunch of issues there, many of which we discovered the hard way. When we were coming to Oxide, you had much better options to pick from in terms of distributed databases. [00:32:57] Bryan: You know, there was a period, one that now seems potentially brief in hindsight, of really high quality open source distributed databases.
So there were really some good ones to pick from. We built on CockroachDB, on CRDB. That was a really important component that we had at Oxide that we didn't have at Joyent. [00:33:19] Bryan: I wouldn't say we were rolling our own distributed database at Joyent; we were just using Postgres, and dealing with an enormous amount of pain in terms of the surround. On top of that, and a control plane is much more than a database, obviously, there's a whole bunch of software that you need to go write [00:33:40] Bryan: to be able to transform these API requests into something that is reliable infrastructure, right? And there's a lot to that, especially when networking gets in the mix, when storage gets in the mix; there are a whole bunch of complicated steps that need to be done. At Joyent, [00:33:59] Bryan: in part because of the history of the company, and look, this just is not gonna sound good, but it is what it is and I'm just gonna own it: we did it all in Node at Joyent. Which I know right now just sounds like, well, you built it with Tinkertoys. [00:34:18] Bryan: You built the skyscraper with Tinkertoys? It's like, well, okay. We actually had greater aspirations for the Tinkertoys once upon a time, and it was better than Twisted in Python and EventMachine in Ruby, and we weren't gonna do it in Java. All right. [00:34:32] Bryan: But let's just say that that experiment did ultimately end in a predictable fashion, and we decided that maybe Node was not gonna be the best decision long term. Joyent was the company behind Node.js back in the day; Ryan Dahl worked for Joyent. And then we
[00:34:53] Bryan: landed that in a foundation in about, what, 2015, something like that, and began to consider our world beyond Node. Rust at Oxide [00:35:04] Bryan: A big tool that we had in the arsenal when we started Oxide is Rust. And indeed, the name of the company is a tip of the hat to the language that we were pretty sure we were gonna be building a lot of stuff in. [00:35:16] Bryan: Namely, Rust. And Rust has been huge for us, a very important revolution in programming languages. There have been different people coming in at different times, and I came to Rust in what I think is this big second expansion of Rust, in 2018, when a lot of technologists were sick of Node, and also sick of Go, [00:35:43] Bryan: and also sick of C++, and wondering: is there gonna be something that gives me the performance that I get out of C, the robustness that I can get out of a C program but that is often difficult to achieve, but with some of the velocity of development, although I hate that term, some of the speed of development, that you get out of a more interpreted language? [00:36:08] Bryan: And then, by the way, can I actually have types? I think types would be a good idea. And Rust obviously hits the sweet spot of all of that. It has been absolutely huge for us. We knew when we started the company, again, Oxide, that we were gonna be using Rust in quite a few places, but we weren't doing it by fiat. [00:36:27] Bryan: We wanted to actually make sure we were making the right decision at every layer. I think what has been surprising is the sheer number of layers at which we use Rust: we've done our own embedded firmware in Rust, and in the host operating system, which is still largely in C, very big components are in Rust.
[00:36:47] Bryan: The hypervisor, Propolis, is all in Rust, and then of course the control plane, that distributed system on top, is all in Rust. So that was a very important thing that we very much did not need to build ourselves; we were able to really leverage a terrific community. We were also able to use, and we had done this at Joyent as well, but at Oxide we've used illumos as the host OS component; our variant is called Helios. [00:37:11] Bryan: We've used bhyve as that kind of internal hypervisor component. We've made use of a bunch of different open source components to build this thing, which has been really, really important for us, open source components that didn't exist even five years prior. [00:37:28] Bryan: That's part of why we felt that 2019 was the right time to start the company. And so we started Oxide. The problems building a control plane in Node [00:37:34] Jeremy: You had mentioned that at Joyent, you had tried to build this in Node. What were the issues or the challenges that you had doing that? [00:37:46] Bryan: Oh boy. Yeah. Again, I kind of had higher hopes in 2010, I would say, when we set out on this. The problem that we had, writ large: JavaScript is really designed to allow as many people on earth to write a program as possible, which is good. I mean, that's a laudable goal. [00:38:09] Bryan: That is the goal, such as it is, of JavaScript. It's actually hard to know what the goal of JavaScript is, unfortunately, because Brendan Eich never actually wrote a book. So there is not a canonical reference; you've got Doug Crockford and other people who've written things on JavaScript, but it's hard to know the original intent of JavaScript. [00:38:27] Bryan: The name doesn't even express original intent, right?
It was called LiveScript, and it was renamed to JavaScript during the Java frenzy of the late nineties. A name that makes no sense: there is no Java in JavaScript. That is, I think, revealing of the unprincipled mess that is JavaScript. [00:38:47] Bryan: It's very pragmatic at some level, and it makes it very easy to write software. The problem is that it's much more difficult to write really rigorous software. And this is where I should differentiate JavaScript from TypeScript; this is really what TypeScript is trying to solve. [00:39:07] Bryan: I think TypeScript is a great step forward, because TypeScript is saying: how can we bring some rigor to this? Yes, it's great that it's easy to write JavaScript, but that's not the only problem we want to solve. [00:39:23] Bryan: We actually wanna be able to write rigorous software, and it's actually okay if it's a little harder to write, if it leads to more rigorous artifacts. But in JavaScript, just as a concrete example, there's nothing to prevent you from referencing a property that doesn't actually exist. [00:39:43] Bryan: So if you fat-finger a property name, you are relying on something to tell you, by the way, I think you've misspelled this. But there is no type definition for this thing, so it doesn't know you've got one spelling that's correct and one that's incorrect; the misspelled one is just undefined. And so you've got this typo lurking in what you want to be rigorous software. [00:40:07] Bryan: And if you don't execute that code, you won't know it's there. And then you do execute that code, and now you've got an undefined object.
And now that's either gonna be an exception, or, again, depending on how it's handled, it can be really difficult to determine the origin of that error. [00:40:26] Bryan: And that is a programmer error. And one of the big challenges that we had with Node is that programmer errors and operational errors (you know, "I'm out of disk space" is an operational error) get conflated, and it becomes really hard. And in fact, I think the language wanted to make it easier to just kind of drive on in the event of all errors. [00:40:53] Bryan: And that's actually not what you wanna do if you're trying to build a reliable, robust system. So we had no end of issues. [00:41:01] Bryan: We've got a lot of experience developing rigorous systems, again coming out of operating systems development and so on, and we brought some of that rigor, if strangely, to JavaScript. So one of the things that we did is bring a lot of postmortem diagnosability and observability to Node. [00:41:18] Bryan: If one of our Node processes died in production, we would actually get a core dump from that process, a core dump that we could meaningfully process. So we did a bunch of kind of wild stuff, actually wild stuff, where we could make sense of the JavaScript objects in a binary core dump. JavaScript values ease of getting started over robustness [00:41:41] Bryan: These were things that we thought were really important, and the rest of the world just looked at this like, what the hell is this? It was so out of step. The problem is that we were trying to bridge two disconnected cultures: one developing really rigorous software and really designing it for production diagnosability, and the other really designing software to run in the browser, for anyone to be able to liven up a webpage, right?
[00:42:10] Bryan: That's kinda the origin of LiveScript, and then JavaScript. And we were kind of the only ones sitting at the intersection of those. And when you are the only ones sitting at that kind of intersection, you're fighting a community all the time. We just realized there were so many things that the community wanted to do where we felt, no, no, this is gonna make software less diagnosable, it's gonna make it less robust. The NodeJS split and why people left [00:42:36] Bryan: And then you realize: we're the only voice in the room, because we have desires for this language that it doesn't have for itself. And this is when you realize you're in a bad relationship with software; it's time to actually move on. And in fact, it was a bit of an acrimonious breakup: [00:42:55] Bryan: there was a famous slash infamous fork of Node called io.js. And this happened because the community thought that Joyent was not being an appropriate steward of Node.js and was not allowing more things to come into Node. [00:43:19] Bryan: And of course we felt that we were being a careful steward, and that we were actively resisting those things that would cut against its fitness for a production system. But that's how the community saw it, and they forked. And I think we knew even before the fork: this is not working, and we need to get this thing out of our hands. Platform as a reflection of values Node Summit talk [00:43:43] Bryan: We are the wrong hands for this; this needs to be in a foundation. So we had gone through that breakup, and maybe it was two years after that
that a friend of mine, who has unfortunately now passed away, was running the Node Summit. Charles, a venture capitalist, great guy. Charles was running Node Summit and came to me in 2017. [00:44:07] Bryan: He's like, I really want you to keynote Node Summit. And I'm like, Charles, I'm not gonna do that. I've got nothing nice to say. I'm the last person you want to keynote. He's like, oh, if you have nothing nice to say, you should definitely keynote. And you're like, oh God, okay, here we go. [00:44:22] Bryan: He's like, no, I really want you to talk about the Joyent breakup with Node.js. I'm like, oh, man. [00:44:29] Bryan: And that led to a talk that I'm really happy I gave, 'cause it was a very important talk for me personally, called Platform as a Reflection of Values, really looking at the values that we had for Node and the values that Node had for itself. And they didn't line up. [00:44:49] Bryan: And the problem is that the values Node had for itself and the values we had for Node are all kind of positives, right? There's nobody in the Node community who's like, I don't want rigor, I hate rigor. It's just that if they had to choose between rigor and making the language approachable, [00:45:09] Bryan: they would choose approachability every single time. They would never choose rigor. And that was a big eye-opener. And I would say, if you watch this talk: [00:45:20] Bryan: I knew the audience was gonna be filled with people who had been a part of the fork, in 2014, I think it was, the io.js fork. And I knew there were some people there who had been there for the fork, and [00:45:41] Bryan: I set a little bit of a trap for the audience.
And the trap: I talked about the values that we had and the aspirations we had for Node, the aspirations that Node had for itself, and how they were different. [00:45:53] Bryan: And I'm like, look, in hindsight, a fracture was inevitable. And in 2014 there was finally a fracture. Do people know what happened in 2014? And if you listen to that talk, everyone almost says in unison: io.js. I'm like, oh, right, io.js. Right. That's actually not what I was thinking of. [00:46:19] Bryan: And I go to the next slide, and it's a tweet from a guy named TJ Holowaychuk, who was the most prolific contributor to Node. And it was his tweet, also in 2014, before the io.js fork, explaining that he was leaving Node, and that he was going to Go. And if you turn the volume all the way up, you can hear the audience gasp. [00:46:41] Bryan: And it's just delicious, because the community had never really confronted why TJ left. And I went through a couple of folks, Felix, a bunch of other early Node folks, who were there in 2010 and were leaving in 2014, and they were going primarily to Go. And they were going because they were sick of the same things that we were sick of. [00:47:09] Bryan: They had hit the same things that we had hit, and they were frustrated. I really do believe this: platforms do reflect their own values. And when you are making a software decision, you are selecting values. [00:47:26] Bryan: You should select values that align with the values that you have for that software. That's way more important than the other things that people look at. I think people look at, for example, quote-unquote community size way too frequently. Community size is like, eh, maybe it can be fine. [00:47:44] Bryan: I've been in very large communities, like Node.
And I've been in super small open source communities, a bunch of them. There are strengths and weaknesses to both, just as there's a strength to being in a big city versus a small town. Me personally, I'll take the small community more or less every time, because the small community is almost always self-selecting based on values, for the same reason that I like working at small companies or on small teams. [00:48:11] Bryan: There's a lot of value to be had in a small community. That's not to say that large communities are valueless. But again, long answer to your question of where things went south with Joyent and Node: they went south because the values that we had and the values the community had didn't line up, and that was a very educational experience, as you might imagine. [00:48:33] Jeremy: Yeah. And given that you mentioned how, because of those values, some people moved from Node to Go, and in the end, for much of what Oxide is building, you ended up using Rust: what would you say are the values of Go and of Rust, and how did you end up choosing Rust, given that? Go's decisions regarding generics, versioning, compilation speed priority [00:48:56] Bryan: Yeah, well, I understand why people moved from Node to Go; Go to me was kind of a lateral move. There were a bunch of things there. Go was still garbage collected, which I didn't like. Go also is very strange in that there are these kind of autocratic decisions that are very bizarre. [00:49:17] Bryan: Generics is kind of a famous one, right? Go, as a point of principle, didn't have generics, even though the innards of Go itself actually did have generics. It's just that you, a Go user, weren't allowed to have them.
[00:49:35] Bryan: And you know, there was an old cartoon years and years ago about how, when a technologist is telling you that something is technically impossible, that actually means "I don't feel like it." And there was a certain degree of "generics are technically impossible" in Go. It's like, hey, actually, they aren't. [00:49:51] Bryan: And I just think that the arguments against generics were kind of disingenuous. And indeed, they ended up adopting generics. And then there's some super weird stuff, like they're very anti-assertion, which is like, what? How is someone against assertions? It doesn't even make any sense, but it's like, oh, nope. [00:50:10] Bryan: Okay, there's a whole screed on it: nope, we're against assertions. And against versioning; there was another thing where Rob Pike has kind of famously been like, you should always just run on the latest commit. And you're like, does that make sense? I mean, we actually build things. [00:50:26] Bryan: And so there are a bunch of things like that where you're just like, okay, this is just exhausting. There are some things about Go that are great, and plenty of other things that I'm just not a fan of. I think that, in the end, Go cares a lot about compile time. It's super important for Go, right? [00:50:44] Bryan: Very quick compile time. I'm like, okay, but compile time is not unimportant, it doesn't have zero importance, but I've got other things that are lots more important than that. What I really care about is a high-performing artifact. I wanted garbage collection out of my life.
Don't think garbage collection has good trade-offs [00:51:00] Bryan: I gotta tell you, garbage collection to me is an embodiment of this larger problem of: where do you put cognitive load in the software development process? And garbage collection may be right for plenty of other people and the software that they wanna develop, [00:51:21] Bryan: but for me and the software that I wanna develop, infrastructure software, I don't want garbage collection, because I can solve the memory allocation problem. I know when I'm done with something or not. Whether that's in C or elsewhere, it's really not that hard to not leak memory in a C-based system. [00:51:44] Bryan: And you can give yourself a lot of tooling that allows you to diagnose where memory leaks are coming from. So that is a solvable problem. There are other challenges with that, but when you are developing a really sophisticated system that is using garbage collection, [00:51:59] Bryan: you spend as much time trying to dork with the garbage collector to convince it to collect the thing that you know is garbage. You're like, I've got this thing, I know it's garbage, and now I need to use these tips and tricks to get the garbage collector to collect it. It feels like every Java performance issue ends with some -XX flag: use the other garbage collector, whatever one you're using, use a different one, use a different approach. [00:52:23] Bryan: So to me, you're in the worst of all worlds, where the reason that garbage collection is helpful is that the programmer doesn't have to think at all about this problem, but now you're actually dealing with these long pauses in production. [00:52:38] Bryan: You're dealing with all these other issues where actually you need to think a lot about it.
And it's witchcraft. It's this black box that you can't see into. So it's like, what problem have we solved, exactly? So the fact that Go had garbage collection was a no for me. And then you get all the other weird fatwas, and everything else. [00:52:57] Bryan: I'm like, no, thank you. Go is a no-thank-you for me. I get why people like it or use it, but that was not gonna be it. Choosing Rust [00:53:04] Bryan: I'm like, I want C. But there are things I didn't like about C, too. I was looking for something that was gonna give me the deterministic kind of artifact that I got out of C, but I wanted library support, and C is tough because it's all convention; there's just a bunch of other things that are thorny. And I remember thinking vividly in 2018: well, it's Rust or bust. Ownership model, algebraic types, error handling [00:53:28] Bryan: I'm gonna go into Rust, and I hope I like it, because if it's not this, I'm gonna go back to C. I'm literally trying to figure out what the language is for the back half of my career. And I did what a lot of people were doing at that time, and have been doing since: really getting into Rust, really learning it, appreciating the difference in the model, for sure the ownership model people talk about. [00:53:54] Bryan: That's obviously very important. But it was the error handling that blew me away, and the idea of algebraic types; I never really had algebraic types. And error handling is one of those things you really appreciate: how do you deal with a function that can either succeed and return something, or fail? The way C deals with that is bad, with these sentinel values for errors.
[00:54:27] Bryan: And, you know, does negative one mean success? Does negative one mean failure? Does zero mean failure? In some C functions, zero means failure; traditionally in Unix, zero means success. And what if you wanna return a file descriptor? Then it's like, okay, zero through positive N will be a valid result, [00:54:44] Bryan: negative numbers will be errors. And is it negative one with errno set, or is it any negative number? It's all convention, right? People do all those different things, and it's all convention: easy to get wrong, easy to have bugs, can't be statically checked, and so on. And then what Go says is, well, you're gonna have two return values, and then you're gonna have to constantly check all of these all the time, which is also kind of gross. JavaScript is like, hey, let's toss an exception: if we see an error, we'll throw an exception. [00:55:15] Bryan: There are a bunch of reasons I don't like that. And then you look at what Rust does, where it's like, no, no, no: we're gonna have these algebraic types, which is to say this thing can be a this or a that, but it has to be one of them. And by the way, you don't get to process this thing until you conditionally match on one of those cases. [00:55:35] Bryan: You're gonna have to have a pattern match on this thing to determine if it's a this or a that. And the Result type is a generic: it's gonna be either an Ok that contains the thing you wanna return, or an Err that contains your error, and it forces your code to deal with that.
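The contrast Bryan draws, C's sentinel-value conventions versus Rust's Result, can be made concrete. A minimal sketch, with a hypothetical `open_fd` function (the paths and descriptor numbers are made up) standing in for a real open(2) wrapper:

```rust
#[derive(Debug, PartialEq)]
enum OpenError {
    NotFound,
    PermissionDenied,
}

// In C this would return an int, where >= 0 is a file descriptor and
// -1 means "go check errno": pure convention that can't be statically
// checked. Here the two outcomes are distinct variants of Result.
// (Hypothetical function for illustration; it does not touch the OS.)
fn open_fd(path: &str) -> Result<i32, OpenError> {
    match path {
        "/etc/hosts" => Ok(3),
        "/etc/shadow" => Err(OpenError::PermissionDenied),
        _ => Err(OpenError::NotFound),
    }
}

fn describe(path: &str) -> String {
    // The compiler will not let us use the fd without first matching
    // on whether we got an Ok or an Err.
    match open_fd(path) {
        Ok(fd) => format!("opened {} as fd {}", path, fd),
        Err(e) => format!("failed to open {}: {:?}", path, e),
    }
}

fn main() {
    println!("{}", describe("/etc/hosts"));
    println!("{}", describe("/etc/shadow"));
}
```

The error case is not a magic integer that a caller can forget to check; forgetting to handle the `Err` arm is a compile error, which is exactly the shift of cognitive load from operator to developer described here.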
[00:55:57] Bryan: And what that does is shift the cognitive load from the person operating this thing in production to the developer, during development. And I love that shift; that shift to me is really important. That's what I was missing, and that's what Rust gives you. [00:56:23] Bryan: Rust forces you to think about your code as you write it, but as a result, you have an artifact that is much more supportable, much more sustainable, and much faster. Prefer to frontload cognitive load during development instead of at runtime [00:56:34] Jeremy: Yeah, it sounds like you would rather take the time during development to think about these issues, because dealing with them at runtime, whether it's garbage collection or error handling, when you're trying to solve a problem, is much more difficult than having dealt with them to start with. [00:56:57] Bryan: Yeah, absolutely. And I just think, if it's infrastructure software, the question you should have when you're writing software is: how long is this software gonna live, and how many people are gonna use it? And if you are writing an operating system, the answer for this thing that you're gonna write is that it's gonna live for a long time. [00:57:18] Bryan: If we just look at plenty of aspects of the system that have been around for decades: it's gonna live for a long time, and many, many people are gonna use it. Why would we not expect the people writing that software to take on more cognitive load when they're writing it, to give us something that's gonna be a better artifact? [00:57:38] Bryan: Now, conversely, you're like, hey, I kind of don't care about this. I just wanna see if this whole thing works; I'm just stringing this together.
The software will be lucky if it survives until tonight, so who cares? Yeah. [00:57:52] Bryan: Garbage collect away, you know; if you're prototyping something, whatever. And this is why you really do get different technology choices depending on the way that you wanna solve the problem at hand. And for the software that I wanna write, I do like that cognitive load being upfront. With LLMs maybe you can get the benefit of the robust artifact with less cognitive load [00:58:10] Bryan: Although I think the thing that is really wild, the twist that I don't think anyone really saw coming, is that in an LLM age, that cognitive load upfront almost needs an asterisk on it, because so much of it can be assisted by an LLM. And I would like to believe, and maybe this is me being optimistic, that in the LLM age we will see... I mean, Rust is a great fit for the LLM age, because the LLM itself can get a lot of feedback about whether the software it has written is correct or not, [00:58:44] Bryan: much more so than you can in other environments. [00:58:48] Jeremy: Yeah, that is an interesting point, in that I think when people first started trying out LLMs to code, they were really good at these maybe looser languages like Python or JavaScript, and initially weren't so good at something like Rust. But it sounds like, as that improves, if it can write it, then because of the rigor, the memory management, or the error handling that the language is forcing you to do, it might actually end up being a better choice for people using LLMs. [00:59:27] Bryan: Absolutely. It gives you more certainty in the artifact that you've delivered. I mean, you know a lot about a Rust program that compiles correctly.
I mean, th there are certain classes of errors that you don't have, um, that you actually don't know on a C program or a Go program or a, a JavaScript program.

[00:59:46] Bryan: I think that's gonna be really important. I think we are on the cusp, maybe we've already seen it, of this kind of great bifurcation in the software that we write.
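Bryan's point about knowing "a lot about a Rust program that compiles correctly" can be made concrete with a short sketch (my example, not from the episode): a fallible operation returns a `Result`, so the failure path is part of the type signature and the compiler pushes the caller to deal with it at build time rather than at runtime.

```rust
// A fallible parse: the error path is visible in the signature, so the
// cognitive load of handling failure is paid while writing the code,
// not while debugging it in production.
fn parse_port(s: &str) -> Result<u16, String> {
    s.trim()
        .parse::<u16>()
        .map_err(|e| format!("invalid port {:?}: {}", s, e))
}

fn main() {
    // Extracting the value forces an explicit decision: match, `?`, or unwrap.
    match parse_port("8080") {
        Ok(p) => println!("listening on {}", p),
        Err(msg) => eprintln!("config error: {}", msg),
    }

    // Out-of-range and malformed inputs surface as ordinary Err values,
    // not as a class of runtime surprises.
    assert!(parse_port("70000").is_err()); // doesn't fit in u16
    assert!(parse_port("not-a-port").is_err());
}
```

This is the "frontloaded" shape Bryan describes: the same checks that make the compiler demanding also give an LLM (or a reviewer) immediate, mechanical feedback about whether the program is plausible.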
In January 2024, Simon Webster, co-founder of Thornbridge Brewery, received an email from Garrett Oliver. He'd heard a rumour that Carlsberg Britvic, the current owners of Marston's Brewery in Burton-upon-Trent, were “laying the union to rest.” The Brooklyn brewmaster went on to ask Simon if he'd be interested in taking a set, in turn saving it from being permanently erased from British brewing history. Six months later, after Simon and Thornbridge's brewing director Rob Lovatt had visited Burton to assess the situation, a single Union set was delivered to their brewery in Bakewell, Derbyshire. In the months since, it has become a focal point for their brewery, and something that has stirred plenty of excitement in the process. This has no doubt been assisted by the fact it's been used to produce some exciting collaborations, including with the likes of The Kernel and Odell Brewing. They've even produced a Strong Dark Mild with Garrett himself, a beer that would go on to become award-winning. In October 2025, host Matthew Curtis was invited to spend two days at Thornbridge and document a collaboration on the union system with Theakston Brewery of Masham, North Yorkshire. As the brewers set about making a version of the Yorkshire brewery's famous Masham Ale, Matthew set about filming, interviewing and documenting as much as he possibly could. The idea was to get to the heart of why the arrival of the Burton union at Thornbridge felt so significant. In this documentary-style episode of the Pellicle Podcast, you'll hear from several people at Thornbridge, including Simon Webster, Rob Lovatt, brewing manager Dominic Driscoll, and several others, plus Theakston's head brewer, Mark Slater. With plenty of analysis throughout, plus an original soundtrack composed by the host himself, this is the story of how Thornbridge saved the Burton Union. We're able to produce The Pellicle Podcast thanks to our Patreon subscribers, and our sponsor Get ‘Er Brewed.
If you're enjoying this podcast, or the weekly articles we publish, please consider taking out a monthly subscription for less than the price of a pint a month.
KiwiSaver holds $145 billion, so why are so many Kiwis still heading for a broken retirement? In this episode, we're joined by Dean Anderson, Founder and CEO of Kernel, to unpack what's holding KiwiSaver back - from weak incentives and disengaged members to election-year policy risks, default fund underperformance, gaps for the self-employed, and why short-term political decisions could cost New Zealand decades of future wealth. For more money tips follow us on: Facebook and Instagram. The content in this podcast is the opinion of the hosts. It should not be treated as financial advice. It is important to take into consideration your own personal situation and goals before making any financial decisions.
We were minutes away from shutting down our Matrix server when the Discord news hit. Now we're not just keeping it, we're doubling down. Can open source seize this moment? Sponsored By: Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free! Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love. Support LINUX Unplugged. Links:
The Bitcoin Kernel project is one of the most misunderstood developments in Bitcoin Core. In this conversation, Shinobi and Sedited explain how isolating validation logic increases flexibility, improves security, and enables alternative node implementations. From multi-process architecture to formal protocol specifications, this episode covers why kernel development matters now. #Bitcoin #BitcoinDevelopment #BitcoinCore⭐️⚔: SIGN UP WITH DUELBITS TODAY FOR A CHANCE TO WIN UP TO 2 BTC:
The news this week highlights shifts in Linux from multiple angles. What's evolving, why it matters, and that moment where the future actually works. Sponsored By: Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free! Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love. Support LINUX Unplugged. Links:
Bitcoin Core doesn't stand still even if consensus rules don't change. In this episode, Stéphan (Core Developer at Brink) explains how the Kernel and multiprocess projects are reshaping Bitcoin Core for long-term reliability. From modular validation logic to safer development workflows, this conversation shows why maintenance work matters. Hosted by Shinobi of Bitcoin Magazine.#BitcoinCore #BitcoinDevelopment #BitcoinKernel ⭐️⚔: SIGN UP WITH DUELBITS TODAY FOR A CHANCE TO WIN UP TO 2 BTC:
After a storm that left half the country in the dark, we took the opportunity to return to the Middle Ages (that is, contemporary Portugal). With Raspberry Pi prices rising, is it time to go back to the abacus? How was the pilgrimage to FOSDEM? What news does Diogo bring? The Firefox world is in an uproar with autos-da-fé; LibreOffice leaps ahead in the German-speaking world; Snaps can now be sniffed out by a new tool; there's a new Kernel out there and Ubuntu 26.04 images to test; Papers now lets you draw doodles and Steam has even more games to break other systems' monopoly (please don't use the term Microslop).
In our latest Open Source Startup Podcast episode, co-hosts Robby and Tim talk with Catherine Jue, Co-Founder and CEO of browser infrastructure company Kernel. Their open-source images act as browsers-as-a-service for automations and web agents. In this episode, we break down what Kernel is building today and why browser infrastructure has quietly become one of the most important layers for AI agents. We talk about Kernel's focus on fast, low-latency cloud browsers, why performance matters more than people expect, and how developers can connect agents via APIs or MCP servers without spinning up heavy infrastructure themselves. We also explore the real-world use cases driving adoption - from a new wave of RPA for industries without APIs, to real-time web analysis, sales intelligence, and voice agents that need to respond instantly. Finally, we dig into Kernel's open-source, developer-first DNA, the technical bets behind its control plane and unikernel-based browsers, and why the team believes agentic workflows are still early, but inevitable.
Bryan Johnson is the founder of Braintree and Kernel, and a futurist, biohacker and author. Is it possible for humans to never die? In recent years, Bryan Johnson has drawn global attention for the extreme experiments he's running on his own body in pursuit of radical longevity. So what does the latest science actually say about his quest to live forever, and how close are we really to immortality? Expect to learn why it is our human obligation to fight against death, the most life-changing pivots Bryan made that helped him the most, how to get perfect sleep, what it takes to build an anti-fragile routine, the most important treatments that help increase your chances of living longer, how to improve your vascular health, how Bryan deals with complex emotions like grief, Bryan's strange prediction on how he might die and much more… Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get 35% off your first subscription on the best supplements from Momentous at https://livemomentous.com/modernwisdom New pricing since recording: Function is now just $365, plus get $25 off at https://functionhealth.com/modernwisdom Get a Free Sample Pack of LMNT's most popular flavours with your first purchase at https://drinklmnt.com/modernwisdom Get up to $350 off the Pod 5 at https://eightsleep.com/modernwisdom Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: https://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: https://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: https://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast
Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
Bishop Tony Percy says think nutshell and kernel. Moses receives the Ten Words. Jesus gives the Eight Beatitudes. The Eight Beatitudes are the inner core of the Ten Commandments. Moses received, but Jesus gives
This week John Terrill from Red Hat joins us to talk about branding in Open Source. Many projects don't have a unified communication strategy, but having one is important.

-- During The Show --

00:50 Intro - Steve is in Vegas; Noah "getting deals"
04:37 Lossless Cut - Reached for Avidemux; avoiding re-encoding; learning a new program; Lossless Cut Webpage; Lossless Cut GitHub
08:30 OmniTools - Small docker image; does everything in the web browser; OmniTools GitHub
12:43 NLSH - For people who want to learn the CLI; translates natural language into CLI; making things easy can backfire; NLSH GitHub
16:00 Xyops - Features: job scheduling, workflow automation, server monitoring, alerting, incident response; self hosted; open source; Xyops
18:18 News Wire - Squid 7.4 - github.com; Openttd 15.1 - openttd.org; Rust 1.93 - rust-lang.org; GNU C - lwn.net; Extix - linux.exton.net; Liya Linux 2.5 - distrowatch.com; CachyOS 2026 - cachyos.org; GNU Guix 1.5 - quix.gnu.org; MxLinux 25.1 - mxlinux.org; Crypto Thieves in Snap Store - helpnetsecurity.com; Open Source Quantum Computer - uwaterloo.ca; AI Weather Models - kqed.org; Banks Prefer Open-Source - finainews.com; Rack-Scale AI - theasset.com; OIN 2.0 - yahoo.com
19:55 Python Built-in Web Server - Use python to host a web server; used for file transfers; use case: Steve's travel router
24:17 Proxy RCE - PSA - Basics of the problem; grahamhelton.com
26:30 Matter Camera - 1920x1080 30fps; Synology; app to turn a phone into a camera; linuxgizmos.com
30:06 MS Gives Keys to FBI - Microsoft confirms they gave the FBI BitLocker keys; ~20 requests per year; home users forced to upload keys; jump through hoops to avoid uploading; encryption is only as good as who holds the keys; pragmatism; aardwolfsecurity.com; ArsTechnica
35:20 Linux Continuity - Continuity document merged into Kernel docs; theregister.com
36:40 John Terrill - PR in open source; branding open source; one size fits all projects; extracting technical knowledge; technical accuracy; how to talk with journalists; separating personal views from
project views; negative coverage; messaging guidelines; branding and marketing; attracting media attention

-- The Extra Credit Section --

For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard. Phone Systems for Ask Noah provided by Voxtelesys. Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix.

-- Stay In Touch --

Find all the resources for this show on the Ask Noah Dashboard. Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies. Contact Noah: live [at] asknoahshow.com

-- Twitter --

Noah - Kernellinux; Ask Noah Show; Altispeed Technologies
1. I Was Involuntarily Committed to a Mental Institution. 2. This Was How God Answered my Prayer. 3. What I Thought Was the Worst Day of my Life Was the Beginning of Something Wonderful. 4. Jesus Died in Order to Bear Much Fruit. 5. Jesus' Death Is the Great Crisis of World History. 6. It Is Also the Casting Down of Satan. 7. The Cross Is the Great Crisis. 8. The Cross Changes the Focus on the Subjects of Redemption. 9. Jesus Is the Light of the World, and He Has Called us to Let our Light Shine. 10. What Does it Mean to Hate our Life in this World? 11. Sandy and I Have Enjoyed Great Fruitfulness over the Past Decade. 12. What Are you Holding onto?
Dr. Rabbi Tzemah Yoreh has a PhD in Bible from Hebrew University, as well as a PhD in Wisdom Literature of the Hellenistic period from the University of Toronto. He has written many books focusing on his reconstruction of the redaction history of Genesis through Kings. He is the author of The First Book of God, and the multi-volume Kernel to Canon series, with books like Jacob's Journey and Moses's Mission. Yoreh has taught at Ben Gurion University and American Jewish University. He is currently the leader of the City Congregation for Humanistic Judaism in New York.
Episode 157: In this episode of Critical Thinking - Bug Bounty Podcast we're joined by Hypr to talk about hacking Mediatek and his experiences with the HackerOne and Pwn2Own ecosystems.

Follow us on twitter at: https://x.com/ctbbpodcast
Got any ideas and suggestions? Feel free to send us any feedback here: info@criticalthinkingpodcast.io
Shoutout to YTCracker for the awesome intro music!

====== Links ======
Follow your hosts Rhynorater, rez0 and gr3pme on X:
https://x.com/Rhynorater
https://x.com/rez0__
https://x.com/gr3pme
Critical Research Lab: https://lab.ctbb.show/

====== Ways to Support CTBBPodcast ======
Hop on the CTBB Discord at https://ctbb.show/discord!
We also do Discord subs at $25, $10, and $5 - premium subscribers get access to private masterclasses, exploits, tools, scripts, un-redacted bug reports, etc.
You can also find some hacker swag at https://ctbb.show/merch!
Today's Guest: https://x.com/hyprdude

====== This Week in Bug Bounty ======
Top 10 web hacking techniques of 2025: call for nominations
https://portswigger.net/research/top-10-web-hacking-techniques-of-2025-nominations-open
CVE-2025-13467
https://access.redhat.com/security/cve/cve-2025-13467

====== Resources ======
Hypr's Blog: https://blog.coffinsec.com
mediatek? more like media-rekt, amirite.
https://blog.coffinsec.com/0days/2025/12/15/more-like-mediarekt-amirite.html
kernel-utils: https://github.com/mellow-hype/kernel-utils

====== Timestamps ======
(00:00:00) Introduction
(00:03:23) Heap Overflow in Mediatek Kernel Drivers
(00:19:23) Kernel Debugging & ioctl Handlers
(00:43:30) Input Structs, Sync to Source, & Privilege Escalation
(00:51:30) HackerOne Ecosystem vs Pwn2Own Ecosystem
(01:17:00) Kernel Utils
(01:26:46) Real World Bugs for Exploit Development vs CTFs
Because… it's episode 0x693!

Shameless plug
February 25 and 26, 2026 - SéQCure 2026 CfP
March 31 to April 2, 2026 - Forum INCYBER - Europe 2026
April 14 to 17, 2026 - Botconf 2026
April 28 and 29, 2026 - Cybereco Cyberconférence 2026
May 9 to 17, 2026 - NorthSec 2026
June 3 to 5, 2026 - SSTIC 2026
September 19, 2026 - Bsides Montréal

Notes
AI - Grok / minors
Grok Is Pushing AI ‘Undressing' Mainstream
Grok assumes users seeking images of underage girls have “good intent”
Dems pressure Google, Apple to drop X app as international regulators turn up heat
Tim Cook and Sundar Pichai are cowards
MCP
The 5 Knights of the MCP Apocalypse
This is a recap of the top 10 posts on Hacker News on January 08, 2026. This podcast was generated by wondercraft.ai

(00:30): Bose has released API docs and opened the API for its EoL SoundTouch speakers
Original post: https://news.ycombinator.com/item?id=46541892&utm_source=wondercraft_ai
(01:50): Google AI Studio is now sponsoring Tailwind CSS
Original post: https://news.ycombinator.com/item?id=46545077&utm_source=wondercraft_ai
(03:11): The Jeff Dean Facts
Original post: https://news.ycombinator.com/item?id=46540498&utm_source=wondercraft_ai
(04:32): Open Infrastructure Map
Original post: https://news.ycombinator.com/item?id=46536866&utm_source=wondercraft_ai
(05:53): Project Patchouli: Open-source electromagnetic drawing tablet hardware
Original post: https://news.ycombinator.com/item?id=46537489&utm_source=wondercraft_ai
(07:13): Iran Goes Into IPv6 Blackout
Original post: https://news.ycombinator.com/item?id=46542683&utm_source=wondercraft_ai
(08:34): How to Code Claude Code in 200 Lines of Code
Original post: https://news.ycombinator.com/item?id=46545620&utm_source=wondercraft_ai
(09:55): A closer look at a BGP anomaly in Venezuela
Original post: https://news.ycombinator.com/item?id=46538001&utm_source=wondercraft_ai
(11:16): Minnesota officials say they can't access evidence after fatal ICE shooting
Original post: https://news.ycombinator.com/item?id=46543457&utm_source=wondercraft_ai
(12:36): Kernel bugs hide for 2 years on average. Some hide for 20
Original post: https://news.ycombinator.com/item?id=46536340&utm_source=wondercraft_ai

This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
This episode covers: • FDA Loosens Supplement Warning Labels The FDA is considering a rule change that would allow supplement companies to include the DSHEA disclaimer only once per package rather than next to every claim. Dave explains why fewer visible warnings could make marketing look more like medical claims, and why biohackers should treat labels as advertising rather than evidence. He shares how to protect yourself now: add one variable at a time, run baseline labs, and rely on data instead of packaging. Source: https://www.nbcnews.com/health/health-news/fda-supplements-warning-label-rule-change-rfk-jr-rcna249321 • Quantum Sensors for Early Heart Attack Detection Mayo Clinic is testing a contactless heart-monitoring system called CardiAQ using quantum magnetic sensors and AI noise filtering. The device reads subtle electromagnetic signatures from the heart and compares them to invasive angiography. Dave breaks down why earlier detection of ischemia could shift heart care from reactive treatment to proactive screening — and why building baseline metrics like VO₂max, blood pressure and HRV today will pay off when next-gen diagnostics arrive. Source: https://www.sandboxaq.com/press/sandboxaq-collaborates-with-mayo-clinic-on-novel-cardiac-diagnostics • Sauna Detox for Microplastics Emerging research shows that sweating meaningfully removes plastic-related chemicals like BPA and phthalate metabolites from the body, often more efficiently than blood or urine alone. Sauna use amplifies this effect by increasing circulation, mobilizing stored toxins from tissues, and accelerating sweat-based excretion. When you combine regular heat exposure with reduced environmental plastic contact, you create a powerful detox strategy that targets a chemical burden once thought unavoidable.
Dave breaks down how sauna protocols can support toxin elimination, improve cardiovascular resilience, strengthen autonomic balance, and help counteract the metabolic and hormonal disruptions linked to microplastics in modern life.Source: https://superage.com/can-you-sweat-out-microplastics-in-the-sauna/ • Psychedelics and Longevity Biomarkers Bryan Johnson treated a guided psilocybin experience as a structured longevity experiment, collecting nearly 250 biomarkers including CGM, stress markers, HRV and Kernel brain imaging. The experiment revealed a surprising metabolic change: mean glucose dropped 8 percent, variability fell 11 percent, and estimated HbA1c moved from 4.7 to 4.4 — similar to months of metformin but after a single session. Dave explores the emerging idea that neuroplastic events might influence glucose regulation through brain-pancreas signaling, while emphasizing the need for supervised, legal use and proper clinical trials. Source: https://www.businessinsider.com/bryan-johnson-trip-on-mushrooms-five-hours-live-2025-12 • A Mitochondrial Protein that Extends Mouse Lifespan Researchers boosted the mitochondrial protein COX7RP and extended mouse lifespan by ~6.6 percent while improving insulin sensitivity, lipid handling, endurance and liver fat metabolism. COX7RP supports formation of mitochondrial “supercomplexes,” improving respiratory efficiency and ATP generation. Dave explains how this reinforces lifestyle levers — strength training, aerobic capacity, stabilizing blood sugar — as tools that likely preserve supercomplex architecture and mitochondrial resilience. Source: https://www.eurekalert.org/news-releases/1109082 All source links provided for direct access to the original research and reporting. This episode is designed for biohackers, longevity seekers and high-performance listeners who want practical strategies rooted in cutting-edge science. 
Dave Asprey translates emerging research into actionable upgrades for your biology — from metabolism and mitochondria to nervous system health, detox, and prevention. New episodes every Tuesday, Thursday, Friday, and Sunday. Keywords: FDA supplement rule change, supplement warning labels, DSHEA disclaimer removal, supplement regulation risks, quantum cardiac scanner, Mayo Clinic heart attack detection, AI heart monitoring, early ischemia detection technology, sauna detox evidence, sweating out toxins research, BPA phthalate sweat studies, microplastics sauna myth, Bryan Johnson psilocybin experiment, psychedelic longevity research, psychedelic metabolic reset, glucose control psychedelics, HbA1c psilocybin results, continuous glucose monitor insights, mitochondria lifespan research, COX7RP protein aging study, mitochondrial supercomplex benefits, ATP energy output aging, metabolic flexibility longevity, biohacking news update, anti-aging science breakthroughs, evidence-based longevity tools, biological age biomarkers Thank you to our sponsors! -BEYOND Conference 2026 | Register now at https://beyondconference.com/ -BodyGuardz | Visit https://www.bodyguardz.com/ and use code DAVE for 25% off. 
Resources: • Subscribe to my weekly newsletter: https://substack.daveasprey.com/welcome • Danger Coffee: https://dangercoffee.com/discount/dave15 • My Daily Supplements: SuppGrade Labs (15% Off) • Favorite Blue Light Blocking Glasses: TrueDark (15% Off) • Dave Asprey's BEYOND Conference: https://beyondconference.com • Dave Asprey's New Book – Heavily Meditated: https://daveasprey.com/heavily-meditated • Upgrade Collective: https://www.ourupgradecollective.com • Upgrade Labs: https://upgradelabs.com • 40 Years of Zen: https://40yearsofzen.com Timestamps: 0:00 - Intro 0:18 - Story 1: FDA Supplement Label Changes 1:43 - Story 2: CardiAQ Heart Scanner 2:59 - Story 3: Saunas and Microplastics 4:58 - Story 4: Psychedelics and Blood Sugar 8:22 - Story 5: Mitochondrial Longevity Research 10:29 - Weekly Wrap-Up See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Visit www.joniradio.org for more inspiration and encouragement! --------This Christmas, you can shine the light of Christ into places of darkness and pain with a purchase from the Joni and Friends Christmas catalog. You are sending hope and practical care to people with disabilities, all in the name of Jesus! Thank you for listening! Your support of Joni and Friends helps make this show possible. Joni and Friends envisions a world where every person with a disability finds hope, dignity, and their place in the body of Christ. Become part of the global movement today at www.joniandfriends.org. Find more encouragement on Instagram, TikTok, Facebook, and YouTube.
A page fault occurs when a process tries to access memory that isn't backed by a physical page; the kernel raises a fault and loads the page. It happens on first access, stack expansion, CoW, swap and much more. However, it comes with a cost. In this episode of the backend engineering show I dissect the need for, and the cost of, page faults in the kernel.

0:00 Intro
4:00 Virtual memory - abstraction of physical memory; memory sharing; allows more processes to run, unused pages go to disk; NUMA, the kernel can place memory near the CPU
12:00 VMA areas - text/code, data, BSS, heap, stack
19:50 Kernel mode
25:30 What is a page fault?
30:30 First-access page fault
33:00 Stack-expansion page fault
34:30 CoW page fault
38:00 Swap page fault
39:39 File-backed page fault
40:29 Permission page fault
45:30 Summary
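The first-access fault the episode describes is easy to observe for yourself on Linux: the kernel exposes each process's minor page-fault count as the `minflt` field (field 10) of `/proc/self/stat`, and simply touching freshly allocated memory makes the counter jump. A minimal std-only sketch (my illustration, not from the episode; assumes a Linux `/proc` filesystem):

```rust
use std::fs;

// Read the current process's minor page-fault count from /proc/self/stat.
// Layout: "pid (comm) state ppid ..."; minflt is the 10th field overall,
// i.e. the 8th whitespace-separated token after the closing ')' of comm.
fn minor_faults() -> u64 {
    let stat = fs::read_to_string("/proc/self/stat").expect("Linux /proc required");
    let after_comm = &stat[stat.rfind(')').unwrap() + 2..];
    after_comm.split_whitespace().nth(7).unwrap().parse().unwrap()
}

// Allocate a buffer and write one byte per 4 KiB page. The large zeroed
// allocation is typically backed lazily, so each first write forces the
// kernel to back that page, triggering a first-access (minor) fault.
fn touch_pages(bytes: usize) -> Vec<u8> {
    let mut buf = vec![0u8; bytes];
    for i in (0..buf.len()).step_by(4096) {
        buf[i] = 1;
    }
    buf
}

fn main() {
    let before = minor_faults();
    let _buf = touch_pages(16 * 1024 * 1024); // 16 MiB ≈ 4096 pages
    let after = minor_faults();
    println!("minor faults: {} -> {} (+{})", before, after, after - before);
    assert!(after > before);
}
```

Rerunning `touch_pages` on the same buffer would cause no further faults, which is exactly the asymmetry the episode is about: the cost is paid on first access, not on every access.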
If you're looking for a way to demonstrate how the UK craft beer scene has changed and evolved in the last decade, then the Bermondsey Beer Mile is a pretty good place to start. An amalgamation of breweries, taprooms and bars stretches along much of Enid Street and Druid Street in London SE1; businesses have come – and businesses have gone – since The Kernel started brewing back in 2009. While a lay person might not have heard of your favourite brewery or your favourite brew, the success of the Bermondsey Beer Mile meant it entered the lexicon of many outside of the craft bubble. The wealth of hospitality environments, not all exclusively beer-forward, has helped create a destination for thirsty patrons. And if you're remotely interested in the world of beer and great liquid then you've undoubtedly got your own anecdote about a visit to this part of Southeast London. If not, you certainly know someone that does. While establishments such as Partizan, Brew By Numbers, The Bottle Shop, Hawkes Cider, uBrew, Affinity, The Outpost and Fourpure have all vacated these environs during this period, the mile – now realistically closer to being the Bermondsey Beer Two Miles – has welcomed newer names, too. Mash Paddle, Craft Beer Junction, It Ain't Much If It Ain't Dutch and The Kernel's celebrated new Spa Road location have joined establishments like Cloudwater, Southwark Brewing, Anspach & Hobday, and Kanpai Sake in recent years. And another business that has been part of the Bermondsey fabric since 2019 is Bianca Road Brew Co. Founded by engineer Reece Wood, the brewery began trading from a unit in Bianca Road, Peckham back in 2016. A move to Bermondsey followed a year later before securing its current home on Enid Street some six years ago. Much has changed in that time, though. The brewery would initially call Brew By Numbers and The Kernel its neighbours.
Though the former's beers are now brewed outside of London by Keystone Brewing while The Kernel has its aforementioned new home a minute up the road. And earlier this year, Bianca Road Brew Co would announce a significant milestone – with a new ownership structure taking on the Bermondsey business. As of 2025, the brewery is co-owned by Matt Simpson, Jordan Fancey, and Terry Staples – familiar faces who have helped shape Bianca Road from the inside over the years. They were joined by respected figures in UK brewing including Dennis Ratliff who brings extensive experience and a fresh sense of purpose to this new chapter. Ratliff was previously head brewer at Brew By Numbers and Partizan and is known for his precision, balance, and commitment to brewing quality. And central to this evolution is the production of the core beers fans love, a wealth of seasonal and limited releases and its recently-opened new community-focused taproom. Thanks to a successful crowdfunding campaign earlier in the summer, the team were able to relocate and redesign their taproom operation. As a result, they've created a space with more comfortable seating and capacity for live events, while also freeing up critical space inside the brewery to install a dedicated cold store – a move that will allow for increased production, tighter cold-chain control, and even higher quality standards. And the new taproom, which is now fully up-and-running, is already proving to be the perfect showcase for the beers the team brew. With that in mind, we recently caught up with Dennis and Jordan to discuss their own journeys, the new-look Bianca Road Brew Co, the importance of their taproom and their plans for the road ahead.
We start off with the original Xbox getting a new cosmetic hardmod that introduces power and eject button sounds, as well as a return to form with XBMC4XBOX Redux's release. We also discuss the latest Atmosphere CFW update, as it contains some pretty important information for CFW users thanks to the latest system firmware update. The PS4 and PS5 get a lot more love thanks to a new kernel exploit release working on both systems. There are also a few entry points which have seen awesome developments, with Y2JB being paired with Lapse for a full jailbreak, as well as new entry points being seen with the Netflix app and now games using the Ren'Py engine!
Igniting Contagious Faith!Sermon Notes: https://links.kchanford.com/sunday
In this episode of The Cybersecurity Defenders Podcast, we discuss some intel being shared in the LimaCharlie community. Google's Threat Intelligence Group has observed a significant shift in 2025: threat actors are no longer using AI just to speed up operations; they are now integrating LLMs directly into the malware. Unit 42 has identified a previously undocumented Android spyware family, named LandFall, discovered during an investigation into iOS exploit chains involving malicious DNG images. Microsoft's November Patch Tuesday rollout includes fixes for over 60 vulnerabilities, one of which is a zero-day privilege escalation flaw in the Windows kernel that has already been exploited in the wild. Former executive at L3Harris Trenchant, Peter Williams, has pleaded guilty in U.S. federal court to selling 8 trade secrets valued at over $1.3 million to a Russia-based software broker involved in the zero-day exploit market. Support our show by sharing your favorite episodes with a friend, subscribe, give us a rating or leave a comment on your podcast platform. This podcast is brought to you by LimaCharlie, maker of the SecOps Cloud Platform, infrastructure for SecOps where everything is built API-first. Scale with confidence as your business grows. Start today for free at limacharlie.io.
In this month's Patch [FIX] Tuesday, Ryan Braunstein and Mat Lee dig into kernel flaws, WSL privilege escalation, and the growing risk of Agentic AI in developer environments. They explore how attackers are targeting AI extensions, why developer endpoints are prime targets, and what you can do to patch smarter and train your teams effectively. Stay ahead of new exploits and secure every system before threat actors get the chance.
Hello to you listening in Quezon City, the Philippines!Coming to you from Whidbey Island, Washington this is Stories From Women Who Walk with 60 Seconds for Wednesdays on Whidbey and your host, Diane Wyzga.In 2017 (years before the current madness) Pope Francis said, “Hitler didn't steal the power, his people voted for him, then he destroyed his people.” That's what con men do. Yes, there are days when We the People feel ashamed - even hopeless - for having been duped.At the same time I'm reminded of a line in William Faulkner's 1936 novel Absalom, Absalom!. “Well, Kernel, they kilt us but they ain't whupped us yit!” The quote captures the spirit of the post-Civil War South, suggesting a resilience despite a devastating military loss. For those who paid attention, with that quote Tim Kaine introduced Hillary Clinton ahead of her concession speech. It still applies. Work still remains. Question: If it's true - and I believe it is - we are responsible for the world in which we find ourselves because we alone can change it, how are We the People showing up, no matter how small or seemingly insignificant? How are you finding your voice in these times and what are you saying when you speak up? We the People are casting off our feelings of helplessness, committing to action, reaching for miracle. Where do you find yourself reaching for miracles? Reach! They ain't whupped us yit! You're always welcome: "Come for the stories - Stay for the magic!" Speaking of magic, I hope you'll subscribe, share a 5-star rating and nice review on your social media or podcast channel of choice, bring your friends and rellies, and join us! You will have wonderful company as we continue to walk our lives together. 
Be sure to stop by my Quarter Moon Story Arts website, check out the Services, arrange a no-obligation Discovery Call, and stay current with me as "Wyzga on Words" on Substack. Stories From Women Who Walk Production Team Podcaster: Diane F Wyzga & Quarter Moon Story Arts Music: Mer's Waltz from Crossing the Waters by Steve Schuch & Night Heron Music ALL content and image © 2019 to Present Quarter Moon Story Arts. All rights reserved. If you found this podcast episode helpful, please consider sharing and attributing it to Diane Wyzga of Stories From Women Who Walk podcast with a link back to the original source.
The 7th Commandment... This one sounds pretty simple on the surface: "You shall not steal." Most of us probably think, "I don't rob banks, I don't swipe candy bars from the store… I'm good!" But like most of the Commandments, the Seventh goes way deeper than we think. It's not just about stealing; it's about stewardship, justice, and love. In the beginning, God gave us this beautiful Earth and said, "Take care of it." He didn't say, "This is yours, keep everyone else out." He said, "Be stewards." Meaning, take care of what you have, not just for yourself, but for others too. The Catechism actually says that everything on Earth is entrusted to humanity's care. We can own things, but ownership isn't supposed to be selfish. It's a way to serve others. That's kind of a mindset shift, isn't it? What if instead of asking, "What's mine?" we asked, "How can what I have help someone else?" It reminds me of that moment with Zacchaeus in the Gospel, the tax collector who climbed the tree to see Jesus. When Jesus came to his house, Zacchaeus basically said, "Lord, I'll give half my possessions to the poor, and if I've cheated anyone, I'll pay them back four times over." That's repentance in action. That's what the Catechism calls reparation: making things right when we've taken something unjustly. And this commandment doesn't just deal with money or stuff. It's also about respecting people and creation. It even forbids slavery and using others for personal gain. That's powerful, because it shows that "stealing" can mean taking someone's dignity or freedom, not just their property. And it even extends to animals! We're called to treat them with kindness. 
God gave us creation to care for, not to exploit. That means how we consume, how we waste, how we treat the environment: it all ties into the Seventh Commandment. When I waste food or buy stuff I don't really need, I have to ask myself, am I being a good steward of what God gave me? And then Jesus takes it even further in Matthew 25, where He says, "Whatever you did not do for one of the least of these, you did not do for me." Because it's not just about not stealing; it's about actively giving. The Seventh Commandment calls us to generosity: giving alms, loving the poor, doing works of mercy. Feeding the hungry, sheltering the homeless, visiting the sick: those are all ways we show that we're not attached to stuff, but attached to God. It's funny how Jesus flips things. He's not saying, "Don't own anything." He's saying, "Don't let what you own own you." True freedom comes from trusting that God provides, and that what we have is meant to bless others. So maybe take a second and ask yourself: how attached am I to my stuff? How quick am I to share my time, my resources, my money, even my attention? Maybe it's not about stealing in the obvious way, but about those subtle ways we "take," like taking credit, taking advantage of someone's generosity, or hoarding what we could be sharing. Living out the Seventh Commandment is about living with open hands: receiving everything as a gift from God, and offering it back to Him through love of others. So next time you think of "You shall not steal," remember: it's not just about what we don't do. It's about how we give, share, and care. --------------------------------------------------------------------------------------------- "music by audionautix.com" Adventures by A Himitsu https://soundcloud.com/a-himitsu Music released by Argofox https://youtu.be/8BXNwnxaVQE Music provided by Audio Library https://youtu.be/MkNeIUgNPQ8
Farmers rely on many different businesses to maintain successful operations. Scherer Inc. has been meeting producer needs through the company's cutting-edge work in mill rolls, roller mills, and kernel processors. On the show today, Director of Kernel Processor Sales Lyndon Luckasson shares insights into the unique role Scherer plays in supporting ag.
Pastor Josh Hall is the Lead Pastor of Ocean Church, located in Estero, FL. Ocean Church exists to partner with the work of God in people's lives. To stay connected to Ocean Church: Website: https://bit.ly/2vx8M2o Ocean Church Facebook: https://bit.ly/2IXUsTq Ocean Church Instagram: https://bit.ly/2vx8x7u
As we continue our journey through Ordinary Time, Charlette and David guide us through this week's Gospel reading from Luke 17:5-10, with reflections on how, even when things are rough, Christ assures us that our faith is enough, and how Jesus tells us to "Just do it." Consider how this passage speaks to you this week! Faith to Go is a ministry of The Episcopal Diocese of San Diego. Click here to learn more about EDSD's great work in our region and how you can support this ministry. Remember to get in contact with us! Email: faithtogo@edsd.org Instagram: @faithtogo
Elizabeth Figura is a Wine developer at CodeWeavers. We discuss how Wine and Proton make it possible to run Windows applications on other operating systems. Related links WineHQ Proton CrossOver Direct3D MoltenVK XAudio2 Mesa 3D Graphics Library Transcript You can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: Today I am talking to Elizabeth Figura. She's a Wine developer at CodeWeavers, and today we're gonna talk about what that is and all the work that goes into it. [00:00:09] Elizabeth: Thank you, Jeremy. I'm glad to be here. What's Wine [00:00:13] Jeremy: I think the first thing we should talk about is saying what Wine is, because I think a lot of people aren't familiar with the project. [00:00:20] Elizabeth: So Wine is a translation layer. In fact, I would say Wine is a Windows emulator; that is what the name originally stood for. It reimplements the entire Windows, or you might say Win32, API, so that programs that make calls into the API will transfer that code to Wine, and we allow those Windows programs to run on things that are not Windows: Linux, macOS, other operating systems such as Solaris and BSD. It works not by emulating the CPU, but by reimplementing every API, basically from scratch, and translating each one to its equivalent, or writing new code in case there is no equivalent. System Calls [00:01:06] Jeremy: I believe what you're doing is you're emulating system calls. Could you explain what those are and how that relates to the project? [00:01:15] Elizabeth: Yeah. So "system call" in general can refer to a call into the operating system to execute some functionality that's built into the operating system. Often it's used in the context of talking to the kernel. Windows applications actually tend to talk at a much higher level, because there's so much high-level functionality built into Windows. 
Compared to other operating systems, we end up implementing much higher-level behavior than you would on Linux. [00:01:49] Jeremy: And can you give some examples of what some of those system calls would be and, I suppose, how they may be higher level than some of the Linux ones? [00:01:57] Elizabeth: Sure. So of course you have low-level calls like interacting with a file system, you know, CreateFile and read and write and such. You also have high-level APIs that interact with a sound driver. [00:02:12] Elizabeth: There's one I was working on earlier today, called XAudio, where you build this bank of sounds, meant to be played in a game, and then you can position them in 3D space. And the operating system, in a sense, will take care of all of the math that goes into making that work. [00:02:36] Elizabeth: That's all running on your computer, and then it'll send that audio data to the sound card once it's transformed it, so it sounds like it's coming from a certain place. A lot of other things, like parsing XML, that's another big one. The space is honestly huge. [00:02:59] Jeremy: And yeah, I can sort of see how those might be things you might not expect to be done by the operating system. Like you gave the example of 3D audio and XML parsing, and I think XML parsing in particular, you would've thought that would be something handled by the standard library of whatever language the person was writing their application in. [00:03:22] Jeremy: So that's interesting that it's built into the OS. [00:03:25] Elizabeth: Yeah. Well, in languages like C it's not; it isn't even part of the standard library. It's higher level than that. You have specific libraries that are widespread but not 
Codified in a standard. But in Windows, they are part of the operating system. And in fact, there are several different XML parsers in the operating system; Microsoft likes to deprecate old APIs and make new ones that do the same thing, very often. [00:03:53] Jeremy: And something I've heard about Windows is that they're typically very reluctant to break backwards compatibility. So you say they're deprecated, but do they typically keep all of them still in there? [00:04:04] Elizabeth: It all still works. [00:04:07] Jeremy: And that's all things that Wine has to implement as well to make sure that the software works. [00:04:14] Jeremy: Yeah. [00:04:14] Elizabeth: Yeah. And we also need to implement those things to make old programs work, because there is a lot of demand, at least from people using Wine, for getting some really old programs working, from the early nineties even. What people run with Wine (Productivity, build systems, servers) [00:04:36] Jeremy: And that's probably a good thing to talk about: what are the types of software that people are trying to run with Wine, and what operating system are they typically using? [00:04:46] Elizabeth: Oh, in terms of software, literally all kinds. Any software you can imagine that runs on Windows, people will try to run on Wine. So we're talking games, office software, productivity software, accounting. People will run build systems on Wine; they'll build their programs using Visual Studio running on Wine. People will run Wine on servers, for example software-as-a-service kind of things where you don't even know that it's running on Wine. Really super domain-specific stuff: I've run astronomy software in Wine. Design, computer-assisted design. Even hardware drivers can sometimes work under Wine; there's a bit of a gray area. 
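The translation-layer idea described above can be sketched in a few lines of C. This is a hypothetical illustration, not Wine's actual code: every name with a `_SKETCH` suffix, and the `create_file_sketch` function itself, is invented here to show the shape of the mapping from a Windows-style call onto a host facility.

```c
#include <fcntl.h>

/* Hypothetical sketch of the translation-layer idea: a Windows-style
 * file-creation entry point implemented on top of POSIX open().
 * Wine's real CreateFileA also handles sharing modes, security
 * attributes, overlapped I/O, and more; this only shows the mapping. */
typedef int HANDLE_SKETCH;           /* stand-in for a Windows HANDLE */
#define GENERIC_READ_SKETCH  0x1
#define GENERIC_WRITE_SKETCH 0x2

HANDLE_SKETCH create_file_sketch(const char *path, int access)
{
    int flags;

    /* Map the Windows-style access mask onto POSIX open() flags. */
    if ((access & GENERIC_READ_SKETCH) && (access & GENERIC_WRITE_SKETCH))
        flags = O_RDWR | O_CREAT;
    else if (access & GENERIC_WRITE_SKETCH)
        flags = O_WRONLY | O_CREAT;
    else
        flags = O_RDONLY;

    /* The "Windows call" bottoms out in a host system call. */
    return open(path, flags, 0644);
}
```

A real implementation also has to translate failure modes: open() reports errors through errno, while the Windows API reports them through GetLastError(), so a shim like this must map one error namespace onto the other.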
How games are different [00:05:29] Jeremy: Yeah, I think from the general public, or at least from what I've seen, a lot of people's exposure to it is for playing games. Is there something different about games versus all those other types of productivity software and office software that makes supporting them different? [00:05:53] Elizabeth: Um, there are some things about it that are different. Games of course have gotten a lot of publicity lately, because there's been a huge push, largely from Valve but also some other companies, to get a huge, wide range of games working well under Wine. And that's really panned out; I think we've largely succeeded. [00:06:13] Elizabeth: We've made huge strides in the past several years, 5 to 10 years, I think. So when you talk about what makes games different: one thing games tend to do is they have a very limited set of things they're working with, and they often want to make things run fast, so they're working very close to the metal. They're not going to use an XML parser, for example. [00:06:44] Elizabeth: They're just going to talk as directly to the graphics driver as they can, right? And probably do all their own sound design. You know, I did talk about that XAudio library, but a lot of games will just talk as directly to the sound driver as Windows lets them. So this is often a blessing, honestly, because it means there's less we have to implement to make them work. The other thing that makes some productivity applications harder is that Microsoft makes them, and they like to make a library for use in this one program, like Microsoft Office, and then say, well, you know, other programs might use this as well. Let's put it in the operating system and expose it and write an API for it and everything. And maybe some other programs use it. 
Mostly it's just Office, but it means that Office relies on a lot of things from the operating system that we all have to reimplement. [00:07:44] Jeremy: Yeah, that's somewhat counterintuitive, because when you think of games, you think of these really high-performance things that seem really complicated. But it sounds like, from what you're saying, because they use the lower-level primitives, they're actually easier in some ways to support. [00:08:01] Elizabeth: Yeah, certainly in some ways. They'll do things like reimplement the heap allocator because the built-in heap allocator isn't fast enough for them. That's another good example. What makes some applications hard to support (Some are hard, can't debug other people's apps) [00:08:16] Jeremy: You mentioned Microsoft's more modern office suites. I've noticed there are certain applications that aren't supported, like, for example, I think the modern Adobe Creative Suite. What's the difference with software like that, and does that also apply to the modern office suite, or is that actually supported? [00:08:39] Elizabeth: Well, in one case you have things like Microsoft using their own APIs that I mentioned; with Adobe that applies less, I suppose. But I think to some degree the answer is that some applications are just hard, and there's no way around it. And we can only spend so much time on a hard application. Debugging things can get very hard with Wine; let me explain that for a minute. Normally when you think about debugging an application, you say, oh, I'm gonna open up my debugger, pop it in, break at this point, see what all the variables are when they're not what I expect. Or maybe wait for it to crash and then get a backtrace and see where it crashed. 
And you can't do that with Wine, because you don't have the application's source, you don't have its debugging symbols. You don't know anything about the code you're running unless you take the time to disassemble and decompile and read through it. And that's difficult every time. Every time I've looked at a program and thought, I'm just gonna try and figure out what the program is doing: [00:10:00] Elizabeth: It takes so much time, and it is never worth it. Sometimes you have no other choice, but usually you end up having to rely on seeing what calls it makes into the operating system and trying to guess which one of those is going wrong. Now, sometimes you'll get lucky and it'll crash in Wine code, or sometimes it'll make a call into a function that we don't implement yet, and we know, oh, we need to implement that function. But sometimes it does something more obscure, and we have to figure out: of all of these millions of calls it made, which ones are we implementing incorrectly, so that it's returning the wrong result or not doing something that it should? And then you add onto that all the sort of harder-to-debug things like memory errors that we could make. It can be very difficult, and so some applications just suffer from those hard bugs. And sometimes it's also just a matter of not enough demand for something for us to spend a lot of time on it. [00:11:11] Elizabeth: Right. [00:11:14] Jeremy: Yeah, I can see how that would be really challenging, because, like you were saying, you don't have the symbols, you don't have the source code, so you don't know how any of this software you're supporting was actually written. And you were saying that 
A lot of times, you know, there may be some behavior that's wrong, or a crash, but it's not because Wine crashed or there was an error in Wine. [00:11:42] Jeremy: So you just know the system calls it made, but you don't know which of the system calls didn't behave the way the application expected. [00:11:50] Elizabeth: Exactly. Test suite (Half the code is tests) [00:11:52] Jeremy: I can see how that would be really challenging. And Wine runs so many different applications. I'm kind of curious how you even track what's working and what's not as you change Wine, because if you support thousands or tens of thousands of applications, how do you know when you've got a regression or not? [00:12:15] Elizabeth: So, it's a great question. Probably over half of Wine by source code volume (I haven't actually checked, but I think it's probably over half) is what we call tests. And these tests serve two purposes. One purpose is as regression tests. And the other purpose is they're conformance tests: tests that check how an API behaves on Windows and validate that we are behaving the same way. So we write all these tests, we run them on Windows and write the tests to check what Windows returns, and then we run them on Wine and make sure that that matches. And we have just such a huge body of tests to make sure that we're not breaking anything, and that for any code that gets into Wine that makes you go, wow, is it really doing that? Nope, that's what Windows does; the test says so. So pretty much any new code that we get has to have tests to validate, to demonstrate, that it's doing the right thing. 
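The conformance-test pattern described here can be sketched in plain C. Wine's real tests are written against actual Windows APIs and use an ok() macro from wine/test.h; the macro below and the function under test (`sketch_strlen`, with its invented NULL-handling behavior) are stand-ins made up for this illustration.

```c
#include <stdio.h>

/* Minimal stand-in for Wine's ok() macro: count and report a failure
 * whenever observed behavior does not match what Windows does. */
static int failures;
#define ok(cond, msg) \
    do { if (!(cond)) { failures++; printf("FAIL: %s\n", msg); } } while (0)

/* Hypothetical function under test.  Suppose we observed on real
 * Windows that the analogous API returns 0 for an empty string and
 * tolerates a NULL pointer by returning -1; Wine must match that. */
static int sketch_strlen(const char *s)
{
    int n = 0;
    if (!s) return -1;          /* match the observed Windows behavior */
    while (s[n]) n++;
    return n;
}

static void test_sketch_strlen(void)
{
    /* Each assertion encodes behavior first verified on real Windows;
     * the same test then runs under Wine and must agree. */
    ok(sketch_strlen("") == 0, "empty string should have length 0");
    ok(sketch_strlen("abc") == 3, "length of \"abc\" should be 3");
    ok(sketch_strlen(NULL) == -1, "NULL input should return -1");
}
```

The point of the pattern is that the same test binary is compiled for and run on both Windows and Wine, so every assertion doubles as documentation of what Windows actually does.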
[00:13:31] Jeremy: And so rather than testing against a specific application and seeing if it works, you're making a call to a Windows system call, seeing how it responds, and then making the same call within Wine and making sure they match. [00:13:48] Elizabeth: Yes, exactly. And that is obviously a lot more automatable, right? Because otherwise, these are all graphical applications: [00:14:02] Elizabeth: You'd have to manually do the things and make sure they work. But if you write automatable tests, you can just run them all, and the machine will complain at you if it fails in continuous integration. How compatibility problems appear to users [00:14:13] Jeremy: And because there are all these potential compatibility issues, where maybe a certain call doesn't behave the way an application expects: what are the ways that shows up when someone's using software? I mean, I think you mentioned crashes, but I imagine there could be all sorts of other types of behavior. [00:14:37] Elizabeth: Yes, very much so. Basically anything you can imagine, again, is what will happen. Crashes are the easy ones, because you know when and where it crashed and you can work backwards from there. But you can also get: it could hang, it could not render right, like maybe render a black screen. For games you could very frequently have graphical glitches, where maybe some objects won't render right, or the entire screen will be red. Who knows? In a very bad case, you could even bring down your system, and we usually say that's not Wine's fault, that's the graphics library's fault, 'cause they're not supposed to do that no matter what we do. But, you know, sometimes we have to work around that anyway. But yeah, there have been some very strange and idiosyncratic bugs out there too. [00:15:33] Jeremy: Yeah. 
And like you mentioned, there are so many different things that could have gone wrong, which I imagine makes it very difficult to find. Yeah. And when software runs through Wine... Performance is comparable to native [00:15:49] Jeremy: A lot of our listeners will probably be familiar with running things in a virtual machine, and they know that there's a big performance impact from doing that. [00:15:57] Jeremy: How does the performance of applications compare to running natively on the original Windows OS versus virtual machines? [00:16:08] Elizabeth: So, in theory (and I haven't actually measured this recently, so I can't speak too much to it) the idea is that it's a lot faster. There is a bit of a joke acronym for Wine: "Wine Is Not an Emulator," even though I started out by saying Wine is an emulator, and it was originally called a Windows emulator. What this basically means is that Wine is not a CPU emulator. When you think about emulators in a general sense, they're often emulators for specific CPUs, often older ones, like a Commodore emulator or an Amiga emulator. But in this case, you have software that's written for an x86 CPU, and it's running on an x86 CPU, executing the same instructions it would on Windows. It's just that when it says, now call this Windows function, it calls us instead. So all of that should perform exactly the same, as opposed to a virtual machine, where you have to interpret the instructions and maybe translate them to a different instruction set. The only performance difference is going to be in the functions that we are implementing ourselves, and we try to implement them to perform as well, or almost as well, as Windows. 
There's always going to be a bit of a theoretical gap, because we have to translate from, say, one API to another, but we try to make that as small as possible. And in some cases the operating system we're running on is just better than Windows, and the libraries we're using are better than Windows. [00:18:01] Elizabeth: And so our games will run faster, for example. Sometimes we can do a better job than Windows at implementing something that's under our purview. There are some games that do actually run a little bit faster in Wine than they do on Windows. [00:18:22] Jeremy: Yeah, that reminds me of how there are these gaming handhelds out now, and some of them either let you install Linux or install Windows, or they just come with one pre-installed. And I believe what I've read is that oftentimes, running the same game on both operating systems, on Linux the battery life is better, and sometimes even the performance is better with these handhelds. [00:18:53] Jeremy: So it's really interesting that that can even be the case. [00:18:57] Elizabeth: Yeah, it's really a testament to the huge amount of work that's gone into that, both on the Wine side and on the side of the graphics team and the kernel team. And, of course, the years of work that's gone into Linux, even before these gaming handhelds were even under consideration. Proton and Valve Software's role [00:19:21] Jeremy: And so, for people who are familiar with the handhelds, like the Steam Deck, they may have heard of Proton. I wonder if you can explain what Proton is and how it relates to Wine. [00:19:37] Elizabeth: Yeah. So, Proton is basically, how do I describe this? Proton is sort of a fork, although we try to avoid the term fork; we say it's a downstream distribution, because we contribute back up to Wine. 
So it is an alternate distribution of Wine, and it's also some code that basically glues Wine into an embedding application, originally intended for Steam and developed for Valve. It has also been used in other software. So, where Proton differs from Wine, besides the glue part, is that it has some extra hacks in it for bugs that are hard to fix and easy to hack around, and some quick hacks for making games work now that are in the process of going upstream to Wine, getting their code quality improved, and going through review. [00:20:54] Elizabeth: But we want the game to work now, when we distribute it. So that'll go into Proton immediately, and then once the patch makes it upstream, we replace it with the version of the patch from upstream. There are other things to make it interact nicely with Steam, and so on. And yeah, I think that's it. [00:21:19] Jeremy: Yeah. And I think for people who aren't familiar, Steam is like this, um, I don't even know what you call it, like a gaming store and a... [00:21:29] Elizabeth: A game distribution service. It's got a huge variety of games on it, and you just publish to it. It's a great way for publishers to interact with a wider gaming community; after paying a cut of their profits to Valve, they can reach a lot of people that way. And because all these games are on Steam, and Valve wants them to work well on their handheld, they contracted us to basically take their entire catalog, which is huge, enormous, and, step by step, fix every game and make them all work. [00:22:10] Jeremy: So, and I guess for people who aren't familiar, Valve Software is the company that runs Steam, and so it sounds like they've asked your company to help improve the compatibility of their catalog. [00:22:24] Elizabeth: Yes. 
Valve contracted us, and, again, when you're talking about Wine using lower-level libraries, they've also contracted a lot of other people outside of Wine. Basically, the entire stack has had a tremendous investment by Valve Software to make gaming on Linux work well. The entire stack receives changes to improve Wine compatibility [00:22:48] Jeremy: And when you refer to the entire stack, what are some of those pieces, at least at a high level? [00:22:54] Elizabeth: Let me think. There is the Wine project; the Mesa graphics libraries, another open-source software project that has existed for a long time, but Valve has put a lot of funding and effort into it; the Linux kernel, in various different ways. [00:23:17] Elizabeth: The desktop environment and window manager are also things they've invested in. [00:23:26] Jeremy: Yeah. Everything that the game needs, on any level, and that the operating system of the handheld device needs. Wine's history [00:23:37] Jeremy: And Wine's been going on for quite a while. I think it's over a decade, right? [00:23:44] Elizabeth: Oh, far more than a decade. I believe it started in the mid nineties; I wanna say about 1995, though I probably have that date wrong. [00:24:00] Jeremy: Mm. [00:24:00] Elizabeth: It's going on three decades at this rate. [00:24:03] Jeremy: Wow. Okay. [00:24:06] Jeremy: And so all this time, how has the project sustained itself? Like, who's been involved, and how has it been able to keep going this long? [00:24:18] Elizabeth: Uh, I think, as is the case with a lot of free software, it just keeps trudging along. There's been times where there's a lot of interest in Wine. 
There's been times where there's less, and we are fortunate to be in a time where there's a lot of interest in it. We've had the same maintainer for almost this entire existence, Alexandre Julliard; there was one person who started it and maintained it before him, Bob Amstadt, who left maintainership to him after a year or two. And there have been a few developers who have been around for a very long time, a lot of developers who have been around for a decent amount of time but not the entire duration, and then a very, very large number of people who come and submit a one-off fix for the individual application that they want to make work. [00:25:19] Jeremy: How does CrossOver relate to the Wine project? It sounds like you mentioned Valve Software hired you for contract work, but CrossOver itself has been around for quite a while. So how has that been connected to the Wine project? [00:25:37] Elizabeth: So the company I work for is CodeWeavers, and CrossOver is our flagship software. CodeWeavers is a couple of different things. We have a sort of porting service, where companies will come to us and say, can you port my application, usually to Mac? And then we also have a retail product, where we basically have our own, similar to Proton but older, the same idea, where we will add some hacks into it for very-difficult-to-solve bugs, and we have a nice graphical interface. And then the other thing that we're selling with CrossOver is support. So if you try to run a certain application and you buy CrossOver, you can submit a ticket saying this doesn't work, and we now have a financial incentive to fix it. We'll spend company resources to fix your bug, right? 
So CodeWeavers has been around since 1996, and CrossOver, I don't know the date, but CrossOver has been around for probably about two decades, if I'm not mistaken. [00:27:01] Jeremy: And when you mention helping companies port their software to, for example, macOS: [00:27:07] Jeremy: Is the approach that you would port it natively to macOS APIs, or is it that you would help them get it running using Wine on macOS? [00:27:21] Elizabeth: Right. So that's basically what makes us so unique among porting companies: instead of rewriting their software, we just basically stick it inside of CrossOver and make it run. [00:27:36] Elizabeth: And the idea has always been, you know, the more we implement, the more we get correct, the more applications will work. And sometimes it works out that way, sometimes not really so much, and there's always work we have to do to get any given application to work. But yeah, it's very unusual, because we don't ask companies for any of their code. We don't need it. We just fix the Windows API. [00:28:07] Jeremy: And so in that case, the ports would be, let's say someone sells a macOS version of their software: they would bundle CrossOver with their software. [00:28:18] Elizabeth: Right. And usually when you do this, it doesn't look like there's CrossOver there. It just looks like the software is native, but there is CrossOver under the hood. Loading executables and linked libraries [00:28:32] Jeremy: And so earlier we were talking about how you're basically intercepting the system calls that these binaries are making, whether that's the executable or the DLLs from Windows. But I think probably a lot of our listeners are not really sure how that's done. Like, they may have built software, but they don't know: how do I basically hijack the system calls that this application is making? 
[00:29:01] Jeremy: So maybe you could talk a little bit about how that works. [00:29:04] Elizabeth: So there's a couple of steps that go into it. When you think about a program, that's a big file that's got all the machine code in it, and then it's got stuff at the beginning saying, here's how the program works and here's where in the file the processor should start running. That's your EXE file. And then your DLL files are libraries that contain shared code, and you have a similar sort of file. It says, here's the entry point that runs this function, this parse XML function or whatever have you. [00:29:42] Elizabeth: And here's the entry point that has the generate XML function, and so on and so forth. And then the operating system will basically take the EXE file and see all the bits in it that say, I want to call the parse XML function. It'll load that DLL and hook it up, so the processor ends up just seeing: jump directly to this parse XML function, run that, then return, and so on. [00:30:14] Elizabeth: And so part of what Wine is, is a library implementing that parse XML and generate XML function, but part of it is the loader, which is the part of the operating system that hooks everything together. And when we load, we redirect to our libraries. We don't have Windows libraries; we redirect to ours, and then we run our code, and then we jump back to the program. [00:30:48] Jeremy: So it's the loader that's a part of Wine that's actually, I'm not sure if running the executable is the right term. [00:30:58] Elizabeth: No, I think that's a good term. It starts in the loader, and then we say, okay, now run the machine code in the executable, and then it runs and it jumps between our libraries and back and so on.
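Wine's actual loader is C code inside the project, but the resolve-an-entry-point-by-name step Elizabeth describes can be sketched in a few lines of Python with ctypes. This is a hypothetical stand-in, not Wine code: the C runtime library plays the role of a DLL, and strlen plays the role of an exported function like parse XML.

```python
import ctypes
import ctypes.util

# A loader's job, in miniature: find a shared library and load it, the way
# the OS loads a DLL that an EXE depends on. (Falling back to None loads
# the current process's own symbols on POSIX systems.)
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Then resolve an exported entry point by name, so later calls jump
# straight into the library's machine code -- analogous to the loader
# filling in a program's import table.
strlen = libc.strlen
strlen.argtypes = [ctypes.c_char_p]
strlen.restype = ctypes.c_size_t

print(strlen(b"hello"))  # the program now calls through to the library: 5
```

Wine's trick is that at this resolution step it points the program at its own implementations instead of Microsoft's DLLs, so the jump lands in Wine's code.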
[00:31:14] Jeremy: And like you were saying before, oftentimes when it's trying to make a system call, it ends up being handled by a function that you've written in Wine. And that in turn will call the Linux system calls or the macOS system calls to try and accomplish the same result. [00:31:36] Elizabeth: Right, exactly. [00:31:40] Jeremy: And something that I think maybe not everyone is familiar with is this concept of user space versus kernel space. Can you explain what the difference is? [00:31:51] Elizabeth: So the way I would describe a kernel is it's the part of the operating system that can do anything, right? Any program, any code that runs on your computer is talking to the processor, and the processor has to be able to do anything the computer can do. [00:32:10] Elizabeth: It has to be able to talk to the hardware. It has to set up the memory space, which is actually a very complicated task. It has to be able to switch to another task, and basically talk to other programs. You have to have something there that can do everything, but you don't want any program to be able to do everything. Not since the nineties; that's about when we realized that we can't do that. So the kernel is the part that can do everything, and when you need to do something that requires those permissions that you can't give everyone, you have to talk to the kernel and ask it: hey, can you do this for me please? And in a very restricted way, where it's only the safe things you can do. And to a degree, it's also like a library, right? Kernels have always existed, and they've always been the core standard library of the computer, doing things like reading and writing files, which are very, very complicated tasks under the hood but look very simple, because all you say is: write this file.
And talking to the hardware, abstracting away all the differences between different drivers. So the kernel is doing all of these things. Because the kernel is the part that can do everything, and because it is basically one program that is always running on your computer, but only one program, when a user program calls the kernel, you are switching from one program to another, and you're doing a lot of complicated things as part of this. You're switching to a higher privilege level where you can do anything, and you're switching the state from one program to another. So this is what we mean when we talk about user space, where you're running like a normal program, and kernel space, where you've suddenly switched into the kernel. [00:34:19] Elizabeth: Now you're executing with increased privileges, a different idea of the process space, increased responsibility, and so on. [00:34:30] Jeremy: And so when you were talking about the calls for handling 3D audio or parsing XML, are those calls considered part of user space, and then those things call into kernel space on your behalf? Or how would you describe that? [00:34:50] Elizabeth: So when you look at Windows, the vast, vast majority of the Windows library is all user space. Most of these libraries that we implement never leave user space; they never need to call into the kernel. It's only the core low-level stuff. Things like: we need to read a file, that's a kernel call. When you need to sleep and wait for some seconds, that's a kernel call. When you need to talk to a different process, things that interact with different processes in general. Not just allocating memory, but allocating a page of memory from the memory manager, which then gets sub-allocated by the heap allocator. Things like that.
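That layering can be sketched concretely. The Python snippet below is a hypothetical POSIX-flavored illustration, not Wine code: os.open and os.write are thin user-space wrappers that pass almost straight through to the kernel's open and write system calls, while the familiar open() stays in user space and adds buffering and text handling on top.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# Thin-wrapper level: these map almost one-to-one onto the kernel's
# open(2)/write(2) system calls -- "read or write a file" really is a
# kernel call underneath.
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
os.write(fd, b"hello from user space\n")
os.close(fd)

# High-level library level: open() never needs its own kernel privileges.
# It adds buffering and text decoding in user space, and only drops down
# to the kernel for the actual I/O.
with open(path) as f:
    print(f.read(), end="")
```

The same shape holds on Windows: most library calls stay entirely in user space, and only the core operations at the bottom cross into the kernel.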
[00:35:31] Jeremy: Yeah, so if I was writing an application and I needed to open a file, for example, does that mean that I would have to communicate with the kernel to read that file? [00:35:43] Elizabeth: Right, exactly. [00:35:46] Jeremy: And so for most applications, it sounds like it's gonna be a mixture. You're gonna have a lot of things that are user space calls, and then a few, you mentioned more low-level ones, that are gonna require you to communicate with the kernel. [00:36:00] Elizabeth: Yeah, basically. And it's worth noting that in all operating systems, you're almost always gonna be calling a user space library that might just be a thin wrapper over the kernel call. It's gonna do just a little bit of work and then call the kernel. [00:36:19] Elizabeth: In fact, in Windows, that's the only way to do it. In many other operating systems, you can actually tell the processor to make the kernel call directly. There is a special instruction that does this, and it'll go directly to the kernel, and there's a defined interface for this. But in Windows, that interface is not defined. It's not stable or backwards compatible like the rest of Windows is, so even if you wanted to use it, you couldn't. You basically have to call into the high-level libraries, or low-level libraries as it were, that create a file. And those don't do a lot. [00:37:00] Elizabeth: They just kind of tweak their parameters a little and then pass them right down to the kernel. [00:37:07] Jeremy: And so Wine, it sounds like, needs to implement both the user space calls of Windows, but then also the kernel calls as well. But Wine itself, is that only in Linux user space or macOS user space? [00:37:27] Elizabeth: Yes. This is a very tricky thing, but all of what is Wine runs in user space, and we use
kernel calls that are already there to talk to the host kernel. You get this sort of second nature of thinking about the Windows user space and kernel, [00:37:50] Elizabeth: and then there's the host user space and kernel, and Wine is running entirely in the host user space, but it's emulating the Windows kernel. In fact, one of the weirdest, trickiest parts is, I mentioned that you can run some drivers in Wine. And those drivers actually think they're running in the Windows kernel, which in a sense works the same way: it has libraries that it can load, and those drivers are basically libraries, and they're making calls into the kernel library that does some very, very low-level tasks that you're normally only supposed to be able to do in a kernel. And because the kernel requires some privileges, we kind of pretend we have them. In many cases, the drivers are using abstractions, and we can just implement those abstractions over the slightly higher-level abstractions that exist in user space. [00:39:00] Jeremy: Yeah, I hadn't even considered being able to use hardware devices, but I suppose if, in the end, you're reproducing the kernel, then whether you're running software or you're talking to a hardware device, as long as you implement the calls correctly, then I suppose it works. [00:39:18] Jeremy: Because you're talking about a device, like maybe it's some kind of USB device that has drivers for Windows, but it doesn't for Linux. [00:39:28] Elizabeth: That's exactly, that's kind of the example I've used. I think one of my best success stories was drivers for a graphing calculator. [00:39:41] Jeremy: Oh, wow.
[00:39:42] Elizabeth: That connected via USB, and I basically just plugged the Windows drivers into Wine and ran it. And I had to implement a lot of things, but it worked. But for example, something like a graphics driver is not something you could implement in Wine, because you need the graphics driver on the host. We can't talk to the graphics driver while the host is already doing so. [00:40:05] Jeremy: I see. Yeah. And in that case it probably doesn't make sense to do so. [00:40:12] Elizabeth: Right, it doesn't, because the transition from user space into the kernel is complicated. You need the graphics driver to be in the kernel, the real kernel. Having it in Wine would be a bad idea. [00:40:25] Jeremy: I think there's enough APIs you have to try and reproduce that doing something like that would be, [00:40:32] Elizabeth: very difficult. [00:40:33] Jeremy: Right. Poor system call documentation and private APIs [00:40:35] Jeremy: There's so many different calls, both in user space and in kernel space. I imagine the user space ones Microsoft must document to some extent. Is that the case? [00:40:51] Elizabeth: Well, sometimes. [00:40:54] Jeremy: Sometimes. Okay. [00:40:55] Elizabeth: I think it's actually better now than it used to be. But here's where things get fun, because sometimes there will be regular documented calls. Sometimes those calls are documented, but the documentation isn't very good. Sometimes programs will just look inside Microsoft's DLLs and use calls that they aren't supposed to be using. Sometimes they use calls that they are supposed to be using, but the documentation has disappeared, just because it's that old of an API and Microsoft hasn't kept it around.
Sometimes Microsoft's own software uses APIs that were never documented, because they never wanted anyone else using them, but they still ship them with the operating system. There was actually a lawsuit about this, an antitrust lawsuit, because by shipping things that only they could use, they were kind of creating a trust. And that got some things documented, at least in theory. They kind of haven't stopped doing it, though. [00:42:08] Jeremy: Oh, so even today, I guess they would call those private APIs, I suppose. [00:42:14] Elizabeth: I suppose, yeah, you could say private APIs. But if we want to get newer versions of Microsoft Office running, we still have to figure out what they're doing and implement them. [00:42:25] Jeremy: And given that, like you were saying, the documentation is kind of all over the place, if you don't know how an API is supposed to behave, how do you even approach implementing it? [00:42:38] Elizabeth: And that's what the conformance tests are for. I mentioned earlier we have this huge body of conformance tests that doubles as regression tests. If we see an API we don't know what to do with, or an API we think we know what to do with, because the documentation can just be wrong and often has been, then we write tests to figure out how it's supposed to behave. We kind of guess, and we write tests, and we pass some things in and see what comes out, and see what the operating system does, until we figure out: oh, so this is what it's supposed to do, and these are the exact parameters in. And then we implement it according to those tests. [00:43:24] Jeremy: Is there any distinction in approach when you're trying to implement something that's at the user level versus the kernel level? [00:43:33] Elizabeth: No, not really.
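The probe-and-freeze workflow she describes, pass inputs in, record what actually comes out, then keep those observations as regression tests, can be sketched in Python. This is a toy stand-in, not Wine's actual test suite (which is C): here int() plays the role of an under-documented API being probed.

```python
def probe(api, samples):
    """Record what the API actually does for each input -- the 'pass some
    things in and see what comes out' step."""
    observed = {}
    for s in samples:
        try:
            observed[s] = ("ok", api(s))
        except Exception as e:
            observed[s] = ("error", type(e).__name__)
    return observed

# Treat int() as the API under test and probe it, edge cases included.
results = probe(int, ["42", "  7 ", "0x10", ""])

# The observations then get frozen as tests: any reimplementation must
# reproduce exactly this behavior, with or without documentation.
assert results["42"] == ("ok", 42)
assert results["  7 "] == ("ok", 7)                # whitespace is accepted
assert results["0x10"] == ("error", "ValueError")  # hex prefix rejected at base 10
assert results[""] == ("error", "ValueError")
```

Because the tests capture observed behavior rather than documented behavior, they double as regression tests: once Wine passes them, any later change that breaks a real program's expectations fails the suite.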
And like I mentioned earlier, a kernel call is just like a library call. It's just done in a slightly different way, but it's still got a set of parameters; they're just encoded differently. And again, the way kernel calls are done is on a level just above the kernel, where you have a library that just passes things through almost verbatim to the kernel, and we implement that library instead. [00:44:10] Jeremy: And you've been working on Wine for, I think, over six years now. [00:44:18] Elizabeth: That sounds about right. Debugging and having broad knowledge of Wine [00:44:20] Jeremy: What does your day-to-day look like? What parts of the project do you work on? [00:44:27] Elizabeth: It really varies from day to day. Some people will work on the same parts of Wine for years, and some people will switch around and work on all sorts of different things. [00:44:42] Elizabeth: And I definitely belong to that second group. If you name an area of Wine, I have almost certainly contributed a patch or two to it. There's some areas I work on more than others: 3D graphics, multimedia, a compiler that exists in there, sockets, so networking communication is another thing I work a lot on. Day to day, I kind of just get a bug for some program or another, and I take it and I debug it and figure out why the program's broken, and then I fix it. And there's so much variety in that, because a bug can take so many different forms, like I described, and the fix can be simple or complicated, and it can be really anywhere, to a degree. [00:45:40] Elizabeth: Being able to work on any part of Wine is sometimes almost a necessity, because if a program is just broken, you don't know why. It could be anything. It could be any sort of API.
And sometimes you can hand the API to somebody who's got a lot of experience in that area, but sometimes you just do whatever, you just fix whatever's broken, and you gain experience that way. [00:46:06] Jeremy: Yeah, I was gonna ask about the specialized skills to work on Wine, but it sounds like maybe in your case it's all of them. [00:46:15] Elizabeth: There's a bit of that. The skills to work on Wine are a very unique set of skills, and it largely comes down to debugging, because you can't use the tools you'd normally use to debug. [00:46:30] Elizabeth: You have to be creative and think about it in different ways. Sometimes you have to be very creative. And programs will try their hardest to avoid being debugged, because they don't want anyone breaking their copy protection, for example, or hacking in cheats. They don't want anyone hacking them like that. [00:46:54] Elizabeth: And we have to do it anyway, for good and legitimate purposes, we would argue: to make them work better on more operating systems. And so we have to fight that every step of the way. [00:47:07] Jeremy: Yeah, it seems like it's a combination of, like you were saying, being able to debug, and you're debugging not necessarily your own code, but the behavior of someone else's program. [00:47:25] Jeremy: And then based on that behavior, you have to figure out, okay, where in all these different systems within Wine could this part be not working? [00:47:35] Jeremy: And I suppose you probably build up some kind of mental map in your head, so when you get a type of bug or a type of crash, you go, oh, maybe it's this, maybe it's here, or something. [00:47:47] Elizabeth: Yeah, there is a lot of that.
You notice some patterns after a while; experience helps. But because any bug could be new, sometimes experience doesn't help and you just kind of have to start from scratch. Finding a bug related to XAudio [00:48:08] Jeremy: At sort of a high level, can you give an example of where you got a specific bug report and then where you had to look to eventually find which parts of the system were the issue? [00:48:21] Elizabeth: One, I think, good example that I've done recently: I mentioned this XAudio library that does 3D audio. Say you come across a bug, and I'm gonna be a little bit generic here, say some audio isn't playing right. Maybe there's silence where there should be audio. So you look in and see, well, where's that getting lost? You can basically look at the input calls and say, here's the buffer the program is submitting that's got all the audio data in it. And then you look at where you think the output should be. That library will internally call a different library, which programs can also interact with directly. [00:49:03] Elizabeth: And our high-level library interacts with that one, which is the give-this-sound-to-the-audio-driver API, right? So you've got XAudio on top of mmdevapi, which is the other library that gives audio to the driver. And you see, well, the buffers that XAudio is passing into mmdevapi, they're empty, there's nothing in them. So you have to kind of work through the XAudio library to see where that sound is getting lost. Or maybe it's not getting lost; maybe it's coming through all garbled, and I've had to look at the buffer and see why it's garbled.
I'll open it up in Audacity and look at the shape of the wave and say, huh, that shape looks like we're putting silence in every 10 nanoseconds or something, or reversing something, or interpreting it wrong. Things like that. There's a lot of putting in printfs, basically all throughout Wine, to see where the state changes: where is it right, and where do things start going wrong? [00:50:14] Jeremy: Yeah. And in the audio example, because they're making a call to your XAudio implementation, you can see that, okay, the buffer, the audio that's coming in, that part is good. It's just that later on, when it sends it to what's gonna actually have it be played by the hardware, that's when it goes missing. [00:50:37] Elizabeth: We did something wrong in a library that destroyed the buffer. And I think, on a very high level, a lot of debugging Wine is about finding where things are good and finding where things are bad, and then narrowing that down until we find the one spot where things go wrong. There's a lot of processes that go like that. [00:50:57] Jeremy: And like you were saying, the more you see these problems, hopefully the easier it gets to narrow down where they are. [00:51:04] Elizabeth: Often, yeah. Especially if you keep debugging things in the same area. How much code is OS specific? [00:51:09] Jeremy: And Wine supports more than one operating system. I saw there was Linux, macOS, I think FreeBSD. How much of the code is operating-system specific, versus how much can just be shared across all of them? [00:51:27] Elizabeth: Not that much is operating-system specific, actually. When you think about the volume of Wine, the vast majority of it is the high-level code that doesn't need to interact with the operating system on a low level. Right?
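The "find where things are good, find where things are bad, narrow it down" loop she describes can be sketched as checkpoints between pipeline stages. The stage functions below are made-up stand-ins for layers like XAudio and mmdevapi, purely for illustration:

```python
def gain(buf):          # healthy stage: scales the samples
    return [x * 2 for x in buf]

def broken_mixer(buf):  # the buggy stage: silently zeroes everything
    return [0 for _ in buf]

def reverb(buf):        # healthy stage, but it never sees good data
    return [x + 1 for x in buf]

def first_bad_stage(buf, stages):
    """Run the pipeline with a checkpoint after each stage (the printfs-
    throughout-Wine trick) and report where the data first goes silent."""
    for name, stage in stages:
        buf = stage(buf)
        if all(x == 0 for x in buf):  # "is there silence where there
            return name               #  should be audio?" check
    return None

stages = [("gain", gain), ("broken_mixer", broken_mixer), ("reverb", reverb)]
print(first_bad_stage([1, 2, 3], stages))  # broken_mixer
```

The key point is that checking only the final output would implicate the whole pipeline; checkpoints between stages pin the fault to the one layer where good data turns bad.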
Because Microsoft keeps putting lots and lots of different libraries in their operating system, and a lot of these are high-level libraries. And even when we do interact with the operating system, we're using cross-platform libraries, or we're using POSIX. All these operating systems that we run on basically conform to the POSIX standard, which is basically Unix; they're all Unix-based, and POSIX is a Unix-based standard. Microsoft is, you know, the big exception that never did implement it. And so we have to translate its APIs to Unix APIs. Now, that said, there is a lot of very operating-system-specific code. Apple makes things difficult by diverging almost wherever they can, and so we have a lot of Apple-specific code in there. [00:52:46] Jeremy: Another example I can think of is, I believe macOS doesn't support Vulkan. [00:52:53] Elizabeth: Yes. Yeah, that's a great example of Mac not wanting to use generic libraries that work on every other operating system. And in some cases, we look at it and are like, alright, we'll implement a wrapper for that too, on top of your operating system. We've done it for Windows; we can do it for Vulkan. And then you get the MoltenVK project. And to be clear, we didn't invent MoltenVK. It was around before us, but we have contributed a lot to it. Direct3D, Vulkan, and MoltenVK [00:53:28] Jeremy: Yeah, I think maybe just at a high level it might be good to explain the relationship between Direct3D, or DirectX, and Vulkan. Maybe if you could go into that. [00:53:42] Elizabeth: So Direct3D is Microsoft's 3D API. The 3D APIs are basically a way to, firstly, abstract out the differences between different graphics cards, which look very different on a hardware level. [00:54:03] Elizabeth: Especially.
They used to look very different, and they still do look very different. And secondly, a way to deal with them at a high level, because actually talking to the graphics card on a low level is very, very complicated. Even talking to it on a high level is complicated, but it can get a lot worse, if you've ever done any graphics driver development. So you have a number of different APIs that achieve these two goals of building a common abstraction and building a high-level abstraction. OpenGL was broadly the free, non-Microsoft world's choice, back in the day. [00:54:53] Elizabeth: And then Direct3D was Microsoft's API. And both of these have evolved over time and come up with new versions and such. And when any API exists for too long, it gains a lot of cruft and needs to be replaced. Eventually the people who developed OpenGL decided, we need to start over, get rid of the cruft, make it cleaner, and make it lower level. [00:55:28] Elizabeth: Because to get maximum performance, games really want low-level access. And so they made Vulkan. Microsoft kind of did the same thing, but they still call it Direct3D; the newest version of Direct3D is lower level, and it's called Direct3D 12. And Mac looked at this and decided, we're gonna do the same thing too, but we're not gonna use Vulkan. [00:55:52] Elizabeth: We're gonna define our own, and they call it Metal. And so when we want to translate Direct3D 12 into something that another operating system understands, that's probably Vulkan. And on Mac, we need to translate it to Metal somehow. And we decided, instead of having a separate layer from Direct3D 12 to Metal, we're just gonna translate it to Vulkan and then translate the Vulkan to Metal.
And it also lets things written for Vulkan on Windows, which is also a thing that exists, work on Metal. [00:56:30] Jeremy: And having to do that translation, does that have a performance impact, or is that not really felt? [00:56:38] Elizabeth: It's kind of like anything when you talk about performance. Like I mentioned earlier, there's always gonna be overhead from translating from one API to another, but we put in heroic efforts to try to make sure that doesn't matter, to make sure that stuff that needs to be fast is really as fast as it can possibly be. [00:57:06] Elizabeth: And some very clever things have been done along those lines. And sometimes the graphics drivers underneath are so good that it actually does run better, even despite the translation overhead. And then sometimes, to make it run fast, we need to say, well, we're gonna implement a new API that behaves more like Windows, so we can do less work translating. And sometimes that goes into the graphics library, and sometimes that goes into other places. Targeting Wine instead of porting applications [00:57:43] Jeremy: Yeah. Something I've found a little bit interesting about the last few years is, [00:57:49] Jeremy: developers in the past would generally target Windows, and you might be lucky to get a Mac port or a Linux port. And I wonder, in your opinion, now that a lot of developers are just targeting Windows and relying on Wine or Proton to run their software, is there any downside to doing that? [00:58:17] Jeremy: Or is it all just upside, like everyone should target Windows as this common platform? [00:58:23] Elizabeth: Yeah, it's an interesting question. There's some people who seem to think it's a bad thing that we're not getting native ports in the same sense, and then there's some people who
see, no, that's a perfectly valid way to do ports: just write for this de facto common API. It was never intended as a cross-platform common API, but we've made it one. [00:58:47] Elizabeth: Right? And so why is that any worse than if it runs on a different API on Linux or Mac? And yeah, I guess that argument tends to make sense to me. I don't personally see a lot of reason to say that one library is more pure than another. [00:59:12] Elizabeth: Now, I do think Windows APIs are generally pretty bad. This might just be an effect of having to work with them for a very long time and see all their flaws and deal with the nonsense that they do. But I think that a lot of the native Linux APIs are better. But if you like your Windows API better, and if you want to target Windows and that's the only way to do it, then sure, why not? What's wrong with that? [00:59:51] Jeremy: Yeah. And doing it this way, targeting Windows, I mean, if you look in the past, even though you had some software that would be ported to other operating systems, without this compatibility layer, without people just targeting Windows, all this software that people can now run on these portable gaming handhelds or on Linux, most of that software was never gonna be ported. So yeah, absolutely, [01:00:21] Elizabeth: that's, [01:00:22] Jeremy: having that as an option. Yeah. [01:00:24] Elizabeth: That's kind of why Wine existed, because people wanted to run their software that was never gonna be ported. And then the community just spent a lot of effort in making all these individual programs run. Yeah.
[01:00:39] Jeremy: I think it's pretty amazing too that now that's become this official way, I suppose, of distributing your software, where you say, hey, I made a Windows version, but you're on your Linux machine, and it's officially supported, because we have this much belief in this compatibility layer. [01:01:02] Elizabeth: It's kind of incredible to see Wine having got this far. I mean, I started working on it six, seven years ago, and even then, I could never have imagined it would be like this. [01:01:16] Jeremy: So as we wrap up, for the developers that are listening, or people who are just users of Wine, is there anything you think they should know about the project that we haven't talked about? [01:01:31] Elizabeth: I don't think there's anything I can think of. [01:01:34] Jeremy: And if people wanna learn more about the Wine project, or see what you're up to, where should they head? Getting support and contributing [01:01:45] Elizabeth: We don't really have anything like news, unfortunately. Read the release notes, and there's some people from CodeWeavers who do blogs. So if you go to codeweavers.com/blog, there's some CodeWeavers stuff, some marketing stuff, but there's also some developers who will talk about bugs that they're solving, and how it's easy, and the experience of working on Wine. [01:02:18] Jeremy: And I suppose if someone's interested, like, let's say they have a piece of software that's not working through Wine, what's the best place for them to either get help or maybe even get involved with trying to fix it? [01:02:37] Elizabeth: Yeah. So you can file a bug on winehq.org, or, you know, there's a lot of developer resources there, and you can get involved with contributing to the software.
And there are links to our mailing list and IRC channels and the GitLab, which are all places you can find developers. [01:03:02] Elizabeth: We love to help you debug things. We love to help you fix things. We try our very best to be a welcoming community, and we've got a lot of experience working with people who want to get their application working. So we would love to have another. [01:03:24] Jeremy: Very cool. Yeah, I think Wine is a really interesting project, because for, I guess it would've been decades, it seemed very niche, like not many people [01:03:37] Jeremy: were aware of it. And now, I think maybe in particular because of the Linux gaming handhelds, like the Steam Deck, Wine is now something that a bunch of people who would've never heard about it before are aware of. [01:03:53] Elizabeth: Absolutely. I've watched that transformation happen in real time, and it's been surreal. [01:04:00] Jeremy: Very cool. Well, Elizabeth, thank you so much for joining me today. [01:04:05] Elizabeth: Thank you, Jeremy. I've been glad to be here.
Can't get enough Linux? How about multiple kernels running simultaneously, side by side, not in a VM, all on the same hardware; this week it's finally looking real.Sponsored By:Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love. 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps. Unraid: A powerful, easy operating system for servers and storage. Maximize your hardware with unmatched flexibility. Support LINUX UnpluggedLinks:
This week the crew talks about the latest AI in open source. Then the new OBS beta is out, there's a new init system in town, and Agama 17 is out for SUSE Linux 16. There's kernel drama, with a Btrfs developer stepping back, Bcachefs now maintained outside the kernel, and Rosenzweig now at Intel. And don't forget the Android bombshell: sideloading will soon be limited to verified developers. For tips, we have systemctl restart options, wpctl inspect for WirePlumber information, aptitude for more package management, and gdisk for converting an MBR drive to GPT. The show notes are available at https://bit.ly/45T37AJ, so enjoy! Host: Jonathan Bennett Co-Hosts: Rob Campbell, Ken McDonald, and Jeff Massie Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
(Some content may not be suitable for sensitive ears.)

True love requires true freedom, and that's why understanding the Sixth Commandment is so important. Hey, it's Cathy and Jake, and here's another Catholic Kernel of Truth.

God actually has a beautiful plan for love, marriage, and sexuality. When we understand His design, we experience real freedom and joy, the kind of love our hearts are craving.

In Matthew 5, Jesus says: "You have heard it said, 'You shall not commit adultery,' but I say to you, everyone who looks at someone with lust has already committed adultery in their heart." This begins with what is going on inside: our hearts, our desires, the way we see other people. Jesus cares about the whole person.

The reason God cares so much about sexuality is because He designed us, male and female, in perfect equality and complementarity. God made us for love in the deepest sense, so that we may be a self-gift. The Catechism of the Catholic Church says that God gave each of us a sexual identity that involves both body and soul. Sexuality is meant to be oriented toward marriage and family, where that love can be total, faithful, fruitful, and free. (CCC 2331-2334)

And all baptized Christians are called to chastity, not just single people or priests, but everyone. It looks different depending on your state of life: celibacy, faithfulness in marriage, or living chastity as a widow or widower. Chastity involves self-mastery and the cardinal virtue of temperance. (CCC 2337-2349)

Lust is the opposite of love. Love gives; lust takes. Lust sees someone as an object instead of a person. The Catechism points out that sins of pornography, fornication, adultery, and masturbation are ways we can misuse God's gift of sexuality. They isolate sexual pleasure from its real purpose: love and life together. (CCC 2396)

The Catechism of the Catholic Church states that "homosexual acts are intrinsically disordered. They are contrary to the natural law. They close the sexual act to the gift of life." The Catechism goes on to explain that those with same-sex attraction must be accepted with respect, compassion, and sensitivity. Same-sex attraction is a cross, and those who choose to live a chaste life are living out this commandment. (CCC 2357-2358)

When man and woman give themselves totally to each other in marriage, they become co-creators with God. Marriage has this twofold purpose: the good of the spouses and the transmission of life. And that's why the Church teaches that contraceptives or sterilization go against God's plan: they close off the openness to life. The Church does allow for Natural Family Planning, which involves discerning, for just reasons, to space out children by engaging only during certain times of a woman's cycle. (CCC 2368-2370)

And just like us, some couples struggle with infertility. It's a very heavy cross, yet the Catechism says a child is always a gift, not something "owed." Techniques like IVF, sperm and ovum donation, surrogate uterus, and artificial insemination sadly separate procreation from the loving union of husband and wife. However, even if a couple can't have a biological child, God can bring incredible spiritual fruitfulness when this suffering is united to Him on the Cross. (CCC 2376-2379)

So, to live this out practically, we must see people the way God sees them: as whole persons, body and soul. It means being intentional about what we look at online, how we think, and how we talk about love and marriage. And when we mess up... that's what confession is for. God isn't waiting to tear us down; He wants to restore us to freedom and joy.

Here's our challenge for you this week: Ask God to help you see others, and yourself, with His eyes. Where do you need healing in the area of love and chastity? And remember, God's plan for love isn't about rules to punish you or make you feel bad about yourself; it's about freedom to love fully. He wants our hearts to be whole.
In this episode of The Cybersecurity Defenders Podcast, we discuss some intel being shared in the LimaCharlie community. • Attackers are actively exploiting CVE-2023-46604, a remote code execution vulnerability in Apache ActiveMQ first disclosed in October 2023, to compromise cloud-hosted Linux servers. • AshES Cybersecurity has publicly disclosed a critical zero-day vulnerability in Elastic's Endpoint Detection and Response (EDR) platform, specifically in the Microsoft-signed kernel driver elastic-endpoint-driver.sys. • At least a dozen ransomware groups are now deploying kernel-level EDR killers, tools designed specifically to disable endpoint detection and response solutions, as part of their malware arsenal. • Microsoft has released an in-depth technical analysis of PipeMagic, a modular backdoor linked to ransomware operations carried out by Storm-2460, a financially motivated threat group associated with RansomEXX. Support our show by sharing your favorite episodes with a friend, subscribe, and give us a rating or leave a comment on your podcast platform. This podcast is brought to you by LimaCharlie, maker of the SecOps Cloud Platform, infrastructure for SecOps where everything is built API-first. Scale with confidence as your business grows. Start today for free at limacharlie.io.
Microsoft is rethinking allowing endpoint security software to run in the Windows kernel (including third-party and Microsoft’s own endpoint security software). While there are benefits to running security software in the kernel, there are also serious downsides (see the CrowdStrike outage). Dan Massameno joins JJ and Drew on Packet Protector to talk about the role...
There's drama about the latest RISC-V patches in the kernel, SparkyLinux and Kaisen Linux have updates, and GCC is looking to drop some architectures. Nvidia ships a driver update, ffmpeg and OnlyOffice add AI, and distros are shipping the soft reboot. For tips we have SystemD-Manager-TUI for managing systemd, a step-through of auditing a downloadable install script, the timeout bash command, and an interesting question about how to get colors back in grep output. You can find the show notes at http://bit.ly/4mEkufi and have a great week! Host: Jonathan Bennett Co-Hosts: Rob Campbell, Ken McDonald, and Jeff Massie Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
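Two of the tips mentioned in that episode are easy to sketch at the shell. A minimal example, assuming GNU coreutils `timeout` and GNU `grep` (the commands being timed are just placeholders):

```shell
# `timeout` runs a command but kills it after the given duration.
timeout 5s sleep 1 && echo "finished within the limit"

# When the command is killed, `timeout` itself exits with status 124.
timeout 1s sleep 10
echo "exit status: $?"

# grep disables color when its output goes into a pipe;
# --color=always forces the ANSI escape codes through.
printf 'kernel\nuser\n' | grep --color=always kernel | head -n 1
```

The `--color=always` trick is the usual answer to "my grep colors disappear when I pipe to less"; pairing it with `less -R` keeps the escape codes rendered.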
When personalities clash, the users come last. Meanwhile, Chris' hyper-tuned setup stops being a toy and starts looking like a daily driver. Sponsored By: Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love. 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps. Unraid: A powerful, easy operating system for servers and storage. Maximize your hardware with unmatched flexibility. Support LINUX Unplugged
Another fuel-mileage race, this time at Iowa Speedway, sees one driver stretch his gas tank for the final 144 laps of the race to return to victory lane. Jeff and Jordan break it all down and discuss the latest in the race to the playoffs as well as this week's NASCAR news and more.
The Threadripper leads the conversation this week, with its impressive Linux performance and many, many cores and PCIe lanes. Then Bcachefs dangles over the precipice, NetworkManager 1.54 has some impressive features, and the kernel calendar turns the page from 6.16 to the 6.17 merge window. FireWire is still around, KDE gets a day/night mode, and Wayland may never be ready. For tips we have OpenSnitch and a novel use for qemu-img. You can see the show notes at https://bit.ly/40OxbMf and until next week! Host: Jonathan Bennett Co-Host: Jeff Massie Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
This week it's a duet, with Jonathan and Jeff chatting about Clear Linux's last hurrah, and some other Intel projects. The kernel may be about to adopt an AI code policy, and Fedora debates how to handle BIOS bugs. FFmpeg is about to release 8.0, KDE is adding printer ink monitoring, and Valve has a Steam refresh in the works. Our command line tips are vity for AI help with the command line, and immich for building your own video and image store and timeline. You can catch the show notes at http://bit.ly/4lKOPZz Have a great week! Host: Jonathan Bennett Co-Host: Jeff Massie Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
There’s lots of juicy stories in our monthly security news roundup. The Scattered Spider hacking group makes effective use of social engineering to target MSPs, Microsoft pushes for better Windows resiliency by rethinking kernel access policies for third-party endpoint security software, and the US Justice Department files indictments against alleged operators of laptop farms that...
The kernel drama isn't over, gaming is just better on Linux, and driver performance is getting better in a FineWine sort of way. KDE is rolling ahead, getting closer to session restore in Wayland, doing accessibility work, and making HDR even better. Speaking of which, Blender has a preview of HDR support on Wayland, with no immediate plans to add support on Windows. Canonical had a great year in 2025, and more! You can find the show notes at https://bit.ly/4kypjFT and Happy Linuxing! Host: Jonathan Bennett Co-Hosts: Rob Campbell and Ken McDonald Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
If you've been on social media for more than 20 minutes in the past two years, chances are you know about Bryan Johnson, the tech millionaire in Silicon Valley intent on living forever. For most of his career, Bryan has been an unassuming CEO in the style of early Jeff Bezos. He has been documenting every moment of his process on social media, including the 100 pills he takes every day, constant red light therapies, experimental gene therapy, plasma swaps with his son, and nighttime boner monitoring. However, while a lot is known about his current obsession to live forever, less is known about his early life, time with the Mormon church, scandals with an ex-partner, ideologies, or even his motivations behind wanting to live forever. Support our Sponsors! Hiya Health: https://rocketmoney.com/milehigher Timestamps: Intro 0:00 An Extremely Brief 20 Minute Rant 0:53 The Early Life of Bryan Johnson 28:58 Bryan Finds his "Calling" 35:45 From "Rags" to Riches 39:48 Wanting to be Seen 45:14 The Founding of Kernel 47:35 Horrors at Neuralink 48:02 We Truly Hate AI 49:49 Silicon Valley Believes in the Absence of Choice 57:21 Taryn Southern's Controversy 1:01:29 $2M a Year to Live Forever 1:15:11 Would You Want to Live Forever? 1:17:55 Doctor Zolman's 3 Levels 1:24:18 Bryan's Unbelievable Routine 1:38:00 Trying Bryan's Supplements 1:45:24 Trying Bryan's Olive Oil 1:49:42 Trying Bryan's Cocoa Mix 1:54:48 We Were Very Disappointed 1:57:28 30 Minutes of Social Time, 7 Hours of Sleep 2:02:04 Not Beating Those 'Vampire' Allegations 2:07:03 Unofficial Trials and Risky Tests 2:14:01 Hiking With Bryan 2:23:43 Bryan's Inevitable Future(?) 2:27:12 Final Thoughts & Outro 2:32:14 Mile Higher Merch: HTTP://milehigher.shop Charity Merch for NCMEC: https://kendallrae.shop Higher Hope Foundation: https://higherhope.org Check out our other podcasts! The Sesh https://bit.ly/3Mtoz4X Lights Out https://bit.ly/3n3Gaoe Planet Sleep https://linktr.ee/planetsleep Join our official FB group! 
https://bit.ly/3kQbAxg Join our Discord community, it's free! https://discord.gg/hZ356G9 MHP YouTube: http://bit.ly/2qaDWGf Are You Subscribed On Apple Podcast & Spotify?! Support MHP by leaving a rating or review on Apple Podcast :) https://apple.co/2H4kh58 MHP Topic Request Form: https://forms.gle/gUeTEzL9QEh4Hqz88 Merch designer application: https://forms.gle/ha2ErBnv1gK4rj2Y6 You can follow us on all the things: @milehigherpod Twitter: http://www.twitter.com/milehigherpod Instagram: http://www.instagram.com/milehigherpod YouTube: https://www.youtube.com/@MileHigher Hosts: Kendall: @kendallraeonyt IG: http://instagram.com/kendallraeonyt TW: https://www.twitter.com/kendallraeonyt YT: https://www.youtube.com/c/kendallsplace Josh: @milehigherjosh IG: http://www.instagram.com/milehigherjosh TW: https://www.twitter.com/milehigherjosh Producer: Janelle: @janelle_fields_ IG: https://www.instagram.com/janelle_fie... TW: https://www.twitter.com/janelle_fields_ Editor: Tom: @tomfoolery_photo IG: https://www.instagram.com/tomfoolery_photo/ Podcast sponsor inquires: joshthomas@night.co ✉ Send Us Mail & Fan Art ✉ Kendall Rae & Josh Thomas 8547 E Arapahoe Rd Ste J # 233 Greenwood Village, CO 80112 Music By: Mile Higher Boys YT: https://bit.ly/2Q7N5QO Spotify: https://open.spotify.com/artist/0F4ik... The creator hosts a documentary series for educational purposes (EDSA). These include authoritative sources such as interviews, newspaper articles, and TV news reporting meant to educate and memorialize notable cases in our history. Videos come with an editorial and artistic value. SOURCES CITED: https://pastebin.com/zjdgTvDL