Podcasts about nvme

NVMe (Non-Volatile Memory Express) is an interface for connecting solid-state storage devices over PCI Express

  • 245 PODCASTS
  • 615 EPISODES
  • 44m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Mar 16, 2026 LATEST

POPULARITY

(popularity chart, 2019–2026)


Best podcasts about nvme


Latest podcast episodes about nvme

DekNet
The problem with NVMe NAS

DekNet

Mar 16, 2026 · 34:48


TECHNOLOGY and FREEDOM -------------------------- twitter.com/D3kkaR  #Bitcoin BTC: dekkar$paystring.crypt  Seedbox: https://members.rapidseedbox.com/ref.php?id=66848 DEKNET PRIVATE CHANNEL https://t.me/+0W_fPQXXOFAyNzE8

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Retrieval After RAG: Hybrid Search, Agents, and Database Design — Simon Hørup Eskildsen of Turbopuffer

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Mar 12, 2026 · 60:32


Turbopuffer came out of a reading app. In 2022, Simon was helping his friends at Readwise scale their infra for a highly requested feature: article recommendations and semantic search. Readwise was paying ~$5k/month for their relational database, and vector search would cost ~$20k/month, making the feature too expensive to ship. In 2023, after mulling over the problem from Readwise, Simon decided he wanted to "build a search engine," which became Turbopuffer.

We discuss:

• Simon's path: Denmark → Shopify infra for nearly a decade → "angel engineering" across startups like Readwise, Replicate, and Causal → Turbopuffer almost accidentally becoming a company
• The Readwise origin story: building an early recommendation engine right after the ChatGPT moment, seeing it work, then realizing it would cost ~$30k/month for a company spending ~$5k/month total on infra, and getting obsessed with fixing that cost structure
• Why Turbopuffer is "a search engine for unstructured data": Simon's belief that models can learn to reason, but can't compress the world's knowledge into a few terabytes of weights, so they need to connect to systems that hold truth in full fidelity
• The three ingredients for building a great database company: a new workload, a new storage architecture, and the ability to eventually support every query plan customers will want on their data
• The architecture bet behind Turbopuffer: going all in on object storage and NVMe, avoiding a traditional consensus layer, and building around the cloud primitives that only became possible in the last few years
• Why Simon hated operating Elasticsearch at Shopify: years of painful on-call experience shaped his obsession with simplicity, performance, and eliminating state spread across multiple systems
• The Cursor story: launching Turbopuffer as a scrappy side project, getting an email from Cursor the next day, flying out after a 4am call, and helping cut Cursor's costs by 95% while fixing their per-user economics
• The Notion story: buying dark fiber, tuning TCP windows, and eating cross-cloud costs because Simon refused to compromise on architecture just to close a deal faster
• Why AI changes the build-vs-buy equation: it's less about whether a company can build search infra internally, and more about whether they have time, especially if an external team can feel like an extension of their own
• Why RAG isn't dead: coding companies still rely heavily on search, and Simon sees hybrid retrieval (semantic, text, regex, SQL-style patterns) becoming more important, not less
• How agentic workloads are changing search: the old pattern was one retrieval call up front; the new pattern is one agent firing many parallel queries at once, turning search into a highly concurrent tool call
• Why Turbopuffer is reducing query pricing: agentic systems are dramatically increasing query volume, and Simon expects retrieval infra to adapt to huge bursts of concurrent search rather than a small number of carefully chosen calls
• The philosophy of "playing with open cards": Simon's habit of being radically honest with investors, including telling Lachy Groom he'd return the money if Turbopuffer didn't hit PMF by year-end
• The "P99 engineer": Simon's framework for building a talent-dense company, rejecting by default unless someone on the team feels strongly enough to fight for the candidate

—Simon Hørup Eskildsen
• LinkedIn: https://www.linkedin.com/in/sirupsen
• X: https://x.com/Sirupsen
• https://sirupsen.com/about
• https://turbopuffer.com/

Full Video Pod

Timestamps

00:00:00 The PMF promise to Lachy Groom
00:00:25 Intro and Simon's background
00:02:19 What turbopuffer actually is
00:06:26 Shopify, Elasticsearch, and the pain behind the company
00:10:07 The Readwise experiment that sparked turbopuffer
00:12:00 The insight Simon couldn't stop thinking about
00:17:00 S3 consistency, NVMe, and the architecture bet
00:20:12 The Notion story: latency, dark fiber, and conviction
00:25:03 Build vs. buy in the age of AI
00:26:00 The Cursor story: early launch to breakout customer
00:29:00 Why code search still matters
00:32:00 Search in the age of agents
00:34:22 Pricing turbopuffer in the AI era
00:38:17 Why Simon chose Lachy Groom
00:41:28 Becoming a founder on purpose
00:44:00 The "P99 engineer" philosophy
00:49:30 Bending software to your will
00:51:13 The future of turbopuffer
00:57:05 Simon's tea obsession
00:59:03 Tea kits, X Live, and P99 Live

Transcript

Simon Hørup Eskildsen: I don't think I've said this publicly before, but I just called Lachy and was like, look, Lachy, if this doesn't have PMF by the end of the year, we'll just return all the money to you. But it's just like, I don't really, we, Justine and I, don't wanna work on this unless it's really working. So we want to give it the best shot this year, and we're really gonna go for it. We're gonna hire a bunch of people. We're just gonna be honest with everyone. Like, when I don't know how to play a game, I just play with open cards. Lachy was the only person that didn't, that didn't freak out. He was like, I've never heard anyone say that before.

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio of Kernel Labs, and I'm joined by swyx, editor of Latent Space.

swyx: Hello, hello. Uh, we're still, uh, recording in the Kernel studio for the first time. Very excited. And today we are joined by Simon Eskildsen of Turbopuffer. Welcome.

Simon Hørup Eskildsen: Thank you so much for having me.

swyx: Turbopuffer has really gone on a huge tear, and I, I do have to mention that you're one of, you're now my newest member of the Danish Aarhus mafia, where there's a lot of legendary programmers that have come out of it, like, uh, Bjarne Stroustrup, Rasmus, Lars Bak and the V8 team, and, and the Google Maps team. Uh, you're mostly a Canadian now, but isn't that interesting?
There's so many, so much like strong Danish presence.

Simon Hørup Eskildsen: Yeah, I was writing a post, um, not that long ago about sort of the influences. So I grew up in Denmark, right? I left, I left when I was 18 to go to Canada to, to work at Shopify. Um, and so I would still say that I feel more Danish than, than Canadian. Hence also the weird accent. I can't say "th" because of it. You know, my wife is also Canadian, um, and I think, I think one of the things in, in Denmark is just like, there's just such a ruthless pragmatism, and there's also a big focus on just aesthetics. Like, people really care about what things look like. Um, and Canada has a lot of attributes, the US has a lot of attributes, but I think there's been lots of the great things to carry. I don't know what's in the water in Aarhus though. Um, and I don't know that I could be considered part of the mafia quite yet, uh, compared to the phenomenal individuals we just mentioned. Rasmus Lerdorf is also, uh, Danish Canadian. Okay. Yeah. I don't know where he lives now, but, and he's the PHP guy.

swyx: Yeah. And obviously Tobi, German, but moved to Canada as well. Yes. Like, this talent import, uh, that, that is an interesting, um, talent move.

Alessio: I think I would love to get from you a definition of Turbopuffer, because I think you could be a vector DB, which is maybe a bad word now in some circles, or you could be a search engine. Let's just start there, and then we'll maybe run through the history of how you got to this point.

Simon Hørup Eskildsen: For sure. Yeah. So Turbopuffer is, at this point in time, a search engine, right? We do full-text search and we do vector search, and that's really what we're specialized in.
If you're trying to do much more than that, then this might not be the right place yet, but Turbopuffer is all about search.

The other way that I think about it is that we can take all of the world's knowledge, all of the exabytes and exabytes of data that there is, and we can use those tokens to train a model, but we can't compress all of that into a few terabytes of weights, right? We compress into a few terabytes of weights how to reason with the world, how to make sense of the knowledge. But we have to somehow connect it to something external that actually holds that, like, in full fidelity and truth. Um, and that's the thing that we intend to become. Right? That's a very holier-than-thou kind of phrasing, right? But being the search engine for unstructured data is the focus of Turbopuffer at this point in time.

Alessio: And let's break down, so people might say, well, didn't Elasticsearch already do this? And then some other people might say, is this search on my data? Is this closer to RAG than to, like, an Exa, like a public search thing? Like, how do you segment the different types of search?

Simon Hørup Eskildsen: The way that I generally think about this is, there's a lot of database companies, and I think if you wanna build a really big database company, sort of, you need a couple of ingredients to be in the air, which only happens roughly every 15 years. You need a new workload. You basically need the ambition that every single company on earth is gonna have data in your database, multiple times over. You look at a company like Oracle, right? Like, I don't think you can find a company on earth with a digital presence that doesn't somehow have some data in an Oracle database. Right? And I think at this point, that's also true for Snowflake and Databricks, right? 15 years later, or even more than that, there's not a company on earth that doesn't, indirectly or directly, consume Snowflake or, or Databricks or any of the big analytics databases. Um, and I think we're in that kind of moment now, right? I don't think you're gonna find a company over the next few years that doesn't, directly or indirectly, um, have all their data available for, for search and connected to AI. So you need that new workload. You need something to be happening where there's a new workload that causes that to happen, and that new workload is connecting very large amounts of data to AI.

The second thing you need, the second condition to build a big database company, is that you need some new underlying change in the storage architecture that is not possible for the databases that have come before you. If you look at Snowflake and Databricks, right: commoditized, like, massive fleets of HDDs. That was not possible, it just wasn't in the air in the nineties, right? So we just didn't build these systems; S3 and, and so on was not around. And I think the architecture that is now possible that wasn't possible 15 years ago is to go all in on NVMe SSDs. It requires a particular type of architecture for the database that is difficult to retrofit onto the databases that are already there, including the ones you just mentioned. The other piece is to go all in on object storage, more so than we could have done 15 years ago. Like, we don't have a consensus layer, we don't really have anything. In fact, you could turn off all the servers that Turbopuffer has, and we would not lose any data, because we have gone completely all in on object storage. And this means that our architecture is just so simple. So that's the second condition, right? The first being a new workload, which means that every company on earth, either indirectly or directly, is using your database. Second being, there's some new storage architecture.
That means that the, the companies that have come before you can't do what you're doing. I think the third thing you need to do to build a big database company is that over time you have to implement more or less every query plan on the data. What that means is that you can't just get stuck in, like, this is the one thing that a database does. It has to be ever-evolving, because when someone has data in the database, they over time expect to be able to ask it more or less every question. So you have to do that to get the storage architecture to the limit of what, what it's capable of. Those are the three conditions.

swyx: I just wanted to get a little bit of the motivation, right? Like, so you left Shopify. You were, like, principal engineer, infra guy. Um, you were also head of kernel labs, uh, inside of Shopify, right? And then you consulted for Readwise, and that kind of gave you the idea. I just wanted you to tell that story. Um, maybe you've told it before, but, uh, just introduce people to the new workload, the sort of aha moment for Turbopuffer.

Simon Hørup Eskildsen: For sure. So yeah, I spent almost a decade at Shopify. I was on the infrastructure team, um, from the fairly, fairly early days, around 2013. Um, at the time it felt like it was growing so quickly, and all the metrics were, you know, doubling year on year. Compared to what companies are contending with today, that growth is very cute; I feel like some companies are seeing that month over month. Um, of course, Shopify has been compounding for a very long time now, but I spent a decade doing that, and the majority of that was just: make sure the site is up today, and make sure it's up a year from now.
And a lot of that was really just the, um, you know, uh, the Kardashians would drive very, very large amounts of, of traffic to, to, uh, to Shopify as they were rotating through all the merch and building out their businesses. And we just needed to make sure we could handle that. Right. And sometimes these were events with a million requests per second. And so, you know, we, we had our own data centers back in the day, and we were moving to the cloud, and there was so much sharding work and all of that that we were doing. So I spent a decade just scaling databases, 'cause that's fundamentally the most difficult thing to scale about these sites. The database that was the most difficult for me to scale during that time, and that was the most aggravating to be on call for, was Elasticsearch. It was very, very difficult to deal with. And I saw a lot of projects that were just being held back in their ambition by using it.

swyx: And I mean, self-hosted?

Simon Hørup Eskildsen: Self-hosted, yeah, and the commercial one. This is like 2015, right? So it's a very particular vintage. Right. It's probably better at a lot of these things now. Um, it was difficult to contend with, and I just think about it: it's an inverted index, it should be good at these kinds of queries. And we, we often couldn't get it to do exactly what we needed to do, or basically get Lucene to do it, like, expose Lucene raw to, to, to what we needed to do. Um, so that was just something that we did on the side and just panic-scaled when we needed to, but not a particular focus of mine. So I left, and when I left, I, um, wasn't sure exactly what I wanted to do. I mean, I'd spent like a decade inside of the same company. I'd, like, grown up there. I started working there when I was 18.

swyx: You only do Rails?

Simon Hørup Eskildsen: Yeah. I mean, yeah. Rails. And he's a Rails guy. Uh, love Rails. So good.
Um.

Alessio: We all wish we could still work in Rails.

swyx: I know, I know. But, um, I tried learning Ruby. It's just too much, like, too many options to do the same thing. That's my, I, I know there's a, there's a way to do it.

Simon Hørup Eskildsen: I love it. I don't know that I would use it now, like, given Claude Code and, and Cursor and everything, but, um, but still, if I'm just sitting down and writing code, that's how I think. But anyway, I left, and I talked to a couple companies, and I was like, I need to see a little bit more of the world here to know what I'm gonna focus on next. Um, and so what I decided is, I was gonna do what I called "angel engineering," where I just hopped around my friends' companies in three-month increments and just helped them out with something. Right. And, and just vested a bit of equity and solved some interesting infrastructure problem. So I worked with a bunch of companies at the time. Um, Readwise was one of them. Replicate was one of them. Um, Causal, I dunno if you've tried this, it's like a, it's a spreadsheet engine where you can do distributions. They sold recently. Yeah. Um, we used that in FP&A at, um, at Turbopuffer. Um, so a bunch of companies like this, and it was super fun. And so when the ChatGPT moment happened, I was with, with Readwise for a stint. We were preparing for the Reader launch, right? Which is where you, you queue articles and read them later. And I was just getting their Postgres up to snuff, which basically boils down to tuning autovacuum. So I was doing that, and then this happened, and we were like, oh, maybe we should build a little recommendation engine and some features to try to hook in the LLMs. They were not that good yet, but it was clear there was something there. And so I built a small recommendation engine. Just, okay, let's take the articles that you've recently read, right?
Like, embed all the articles and then do recommendations. It was good enough that when I ran it on one of the co-founders of Readwise, I found out that I got articles about, about having a child. I'm like, oh my God, I didn't, I didn't know that, that they were having a child. I wasn't sure what to do with that information, but the recommendation engine was good enough that it was suggesting articles, um, about that. And so there were, there were recommendations, and, uh, it actually worked really well. But this was a company that was spending maybe five grand a month in total on all their infrastructure, and when I did the napkin math on running the embeddings of all the articles, putting them into a vector index, putting it in prod, it's gonna be like 30 grand a month. That just wasn't tenable. Right? Like, Readwise is a proudly bootstrapped company, and paying 30 grand for infrastructure for one feature versus five just wasn't tenable. So it sort of went in the bucket of: this is useful, it's pretty good, but let's return to it when the costs come down.

swyx: Did you say it grows by feature? So from five to 30, it's by the number of, like, what's the, what's the scaling factor? It scales by the number of articles that you embed?

Simon Hørup Eskildsen: It does, but what I meant by that is, like, five grand for all of the other stuff, like the Heroku dynos, Postgres, all the other compute and storage. Yeah. And then, like, 30 grand for one feature. Right. Which is, what other articles are related to this one. Um, so it was just too much, right, to, to power everything. Their budget would've been maybe a few thousand dollars, which still would've been a lot. And so we put it in a bucket of, okay, we're gonna do that later. We will wait for the cost to come down. And that haunted me. I couldn't stop thinking about it. I was like, okay, there's clearly some latent demand here. If the cost had been a tenth, we would've shipped it.
This was really the only data point that I had. Right. I didn't, I didn't go out and talk to anyone else. So I started reading. Right. I couldn't, I couldn't help myself. I didn't know what, like, a vector index is; I barely knew how to generate the vectors. There was a lot of hype, this is early 2023, there was a lot of hype about vector databases. They were raising a lot of money, and I really didn't know anything about it. So, you know, trying these little models, fine-tuning them, I was just trying to get sort of a lay of the land. So I just sat down. I have this GitHub repository called Napkin Math. And in napkin math, there's just, um, rows of, like, oh, this is how much bandwidth: you can do 25 gigabytes per second on average to DRAM, you can do, you know, five gigabytes per second of writes to an SSD, blah blah. All of these numbers, right? And S3: how much bandwidth can you drive per connection? I was just sitting down, and I was like, why hasn't anyone built a database where you just put everything on object storage, and then you puff it into NVMe when you use the data, and you puff it into DRAM if you're, if you're querying it live? It just seems fairly obvious, and the only real downside is that if you go all in on object storage, every write will take a couple hundred milliseconds of latency. But from there it's really all upside, right? You do the first query, it takes half a second. And it sort of occurred to me, like, well, the architecture is really good for that. It's really good for object storage, it's really good for NVMe SSDs. Well, you just couldn't have done that 10 years ago. Back to what we were talking about before: you really have to build a database where you have as few round trips as possible, right? This is how CPUs work today. It's how NVMe SSDs work.
It's how, um, S3 works: you want to have a very large amount of outstanding requests, right? Like, basically go to S3, do, like, a thousand requests to ask for data in one round trip. Wait for that, get that, make a new decision, do it again, and try to do that maybe a maximum of three times. But no databases were designed that way. With NVMe SSDs, you can drive, you know, within a very low multiple of DRAM bandwidth if you use them that way. And same with S3, right? You can fully max out the network card, which generally is not maxed out; you get very, like, very, very good bandwidth. But no one had built a database like that. So I was like, okay, well, can't you just, you know, take all the vectors, right, and plot them in the proverbial coordinate system, get the clusters, put a file on S3 called clusters.json, and then put another file for every cluster, you know, cluster_1.json, cluster_2.json. You know, it's two round trips, right? So you get the clusters, you find the closest clusters, and then you download the cluster files, like, the closest n. And you could do this in two round trips.

swyx: You do nearest neighbors locally.

Simon Hørup Eskildsen: Yes. Yes. And then, and you would build this, this file, right? It's just ultra-simplistic, but it's not a far shot from what the first version of Turbopuffer was. Why hasn't anyone done that?

Alessio: In that moment, from a workload perspective, you're thinking this is gonna be, like, a read-heavy thing, because they're doing recommendations. Like, is the fact that, like, writes are so expensive now?
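The two-round-trip layout Simon describes can be written down in a few lines. Here is a minimal toy sketch, assuming local JSON files stand in for objects on S3/GCS, plain Euclidean distance, and hand-picked centroids; the class and file names are invented for illustration, not Turbopuffer's actual code or format:

```python
import json, math, os, tempfile

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class TinyClusterIndex:
    """Toy IVF-style index: one centroids file plus one file per cluster,
    standing in for objects in a bucket. Query = two 'round trips'."""

    def __init__(self, root):
        self.root = root

    def build(self, vectors, centroids):
        # Assign each vector to its nearest centroid ("cluster").
        clusters = {i: [] for i in range(len(centroids))}
        for vid, v in vectors.items():
            best = min(range(len(centroids)), key=lambda i: dist(v, centroids[i]))
            clusters[best].append([vid, v])
        # Write path: persist the centroid list and one object per cluster.
        with open(os.path.join(self.root, "clusters.json"), "w") as f:
            json.dump(centroids, f)
        for i, members in clusters.items():
            with open(os.path.join(self.root, f"cluster_{i}.json"), "w") as f:
                json.dump(members, f)

    def query(self, q, n_probe=2, k=2):
        # Round trip 1: fetch the centroid file, pick the n_probe closest.
        with open(os.path.join(self.root, "clusters.json")) as f:
            centroids = json.load(f)
        nearest = sorted(range(len(centroids)),
                         key=lambda i: dist(q, centroids[i]))[:n_probe]
        # Round trip 2: fetch those cluster files (in parallel, in real life),
        # then do exact nearest neighbors locally over the candidates.
        candidates = []
        for i in nearest:
            with open(os.path.join(self.root, f"cluster_{i}.json")) as f:
                candidates.extend(json.load(f))
        return [vid for vid, v in sorted(candidates, key=lambda m: dist(q, m[1]))[:k]]

root = tempfile.mkdtemp()
idx = TinyClusterIndex(root)
idx.build(
    vectors={"a": [0.0, 0.0], "b": [0.1, 0.1], "c": [5.0, 5.0], "d": [5.2, 4.9]},
    centroids=[[0.0, 0.0], [5.0, 5.0]],
)
print(idx.query([0.05, 0.1], n_probe=1, k=2))  # → ['b', 'a']
```

A real system would replace `open()` with object-store GETs issued concurrently, which is exactly why the bounded-round-trip design matters: each round trip costs tens to hundreds of milliseconds, so the layout is built to need only two.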
Oh, with AI, you're actually not writing that much?

Simon Hørup Eskildsen: At that point I hadn't really thought too much about, well, no, actually, it was always clear to me that there was gonna be a lot of writes, because at Shopify, the search clusters were doing, you know, I don't know, tens or hundreds of query QPS, right? 'Cause you just have to have a human sit and type in the query. But I don't know how many updates there were per second; I'm sure it was in the millions, right, into the cluster. So I always knew there was, like, a 10-to-100 ratio on the reads to writes. In the Readwise use case, it's, um, even in the Readwise use case, there'd probably be a lot fewer reads than writes, right? There's just a lot of churn on the amount of stuff that was going through versus the amount of queries. Um, but I wasn't thinking too much about that. I was mostly just thinking about: what's the fundamentally cheapest way to build a database in the cloud today, using the primitives that you have available? And this is it, right? Now you have one machine, and, you know, let's say you have a terabyte of data in S3. You pay the $200 a month for that, and then maybe five to 10% of that data needs to be on NVMe SSDs, and less than that in DRAM. Well, you're paying very, very little to inflate the data.

swyx: By the way, when you say no one else has done that, uh, would you consider Neon, uh, to be on a similar path, in terms of being sort of S3-first and, uh, separating the compute and storage?

Simon Hørup Eskildsen: Yeah, I think what I meant with that is, uh, just building a completely new database. I don't know if we were the first. I mean, I just looked at the napkin math and was like, this seems really obvious. So I'm sure, like, a hundred people came up with it at the same time, like the light bulb and every invention ever, right? It was just in the air. I think Neon, Neon was, was first to it.
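The tiering economics Simon quotes a moment earlier ("a terabyte in object storage, then 5-10% on NVMe, less in DRAM") can be checked with napkin math. The $200/TB-month figure is the one from the interview; the NVMe and DRAM per-GB prices below are my own illustrative assumptions, not real price sheets:

```python
# Napkin math for the tiered-storage claim. All prices are $/GB-month.
TOTAL_GB = 1024                  # one terabyte of vectors and documents
OBJECT_STORE_PER_GB = 0.20       # ≈ $200/month per TB, the figure quoted above
NVME_PER_GB = 0.10               # assumed local-SSD cost (illustrative)
DRAM_PER_GB = 2.00               # assumed memory cost (illustrative)

def monthly_cost(nvme_frac=0.10, dram_frac=0.01):
    """Everything lives in object storage; a hot slice is 'puffed up'
    into NVMe, and a smaller slice into DRAM for live queries."""
    return (TOTAL_GB * OBJECT_STORE_PER_GB
            + TOTAL_GB * nvme_frac * NVME_PER_GB
            + TOTAL_GB * dram_frac * DRAM_PER_GB)

cold = TOTAL_GB * OBJECT_STORE_PER_GB          # object storage only
hot = monthly_cost(nvme_frac=0.10, dram_frac=0.01)
print(f"cold: ${cold:.2f}/mo, tiered: ${hot:.2f}/mo")
```

Under these assumptions, caching 10% of the data on NVMe plus 1% in DRAM adds only about 15% on top of the bare object-storage bill, which is the "paying very, very little to inflate the data" point.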
And they retrofitted it onto Postgres, right? And then they built this whole architecture where you have, you have it in memory, and then you sort of, you know, mmap back to S3. And I think that was very novel at the time to do it for, for OLTP. But I hadn't seen a database that was truly all in, right? Not retrofitting it; a database built purely for this, with no consensus layer, even using compare-and-swap on object storage to do consensus. I hadn't seen anyone go that all in. And I, I mean, I'm sure there was someone that did that before us, I don't know. I was just looking at the napkin math.

swyx: And, and when you say consensus layer, uh, are you strongly relying on S3 strong consistency? You are. Okay. So that is your consensus layer.

Simon Hørup Eskildsen: It, it is the consistency layer. And I think also, this is something that most people don't realize, but S3 only became consistent in December of 2020.

swyx: I remember this coming out during COVID, and people were like, it was just, like, a free upgrade.

Simon Hørup Eskildsen: Yeah.

swyx: They just announced it: we have strong consistency, guys. And, like, okay, cool.

Simon Hørup Eskildsen: And I'm sure they probably had it in prod for a while, and they're just like, it's done, right? And people were like, okay, cool. But that's a big moment, right? Like, NVMe SSDs were also not in the cloud until around 2017, right? So you just sort of had, 2017, NVMe SSDs, and people were like, okay, cool, there's, like, one SKU that does this, whatever, right? Takes a few years. And then the second thing is, S3 becomes consistent in 2020. So now it means you don't have to have this big FoundationDB or, like, ZooKeeper or whatever sitting there contending with the keys, which is how, you know, Snowflake and others have to do it.

swyx: So that's gone.

Simon Hørup Eskildsen: Exactly. Just gone. Right?
And so you just push that to the, you know, whatever how many hundreds of people they have working on S3. Solved. And then, compare-and-swap was not in S3 at this point in time, by the way.

swyx: Uh, I don't know what that is, so maybe you wanna explain.

Simon Hørup Eskildsen: Yes. Yes. So, um, what compare-and-swap is, is basically, you can imagine that if you have a database, it might be really nice to have a file called metadata.json. And metadata.json could say things like, hey, these keys are here, and this file means that, and there's lots of metadata that you need to operate the database, right? And that's the simplest way to do it. So now you might have a lot of servers that wanna change the metadata. They might have written a file and want the metadata to contain that file. But you have a hundred nodes that are trying to contend over this metadata.json. Well, what compare-and-swap allows you to do is basically: you download the file, you make the modifications, and then you write it only if it hasn't changed while you did the modification, and if not, you retry. Right? You just have these retry loops. Now, you can imagine if you have a hundred nodes doing that, it's gonna be really slow, but it will converge over time. That primitive was not available in S3. It wasn't available in S3 until late 2024, but it was available in GCP. The real story of this is certainly not that I sat down and, like, big-brained it: okay, we're gonna start on GCS, S3 is gonna get it later. It was really not that. We got really lucky. We started on GCP, and we started on GCP because, um, Shopify ran on GCP, and so that was the platform I was most familiar with. Right. Um, and I knew the Canadian team there, 'cause I'd worked with them at Shopify, and so it was natural for us to start there.
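The retry loop Simon describes maps directly to code. A toy sketch, using an in-memory stand-in for the bucket; real object stores expose the same precondition as a generation or ETag match (GCS `ifGenerationMatch`, or S3 conditional writes since late 2024), and the class and method names here are invented for illustration:

```python
import json

class FakeObjectStore:
    """In-memory stand-in for a bucket holding one metadata.json object.
    `put_if_version` plays the role of a conditional (compare-and-swap) write."""

    def __init__(self):
        self.blob, self.version = json.dumps({}), 0

    def get(self):
        return json.loads(self.blob), self.version

    def put_if_version(self, data, expected_version):
        if self.version != expected_version:
            return False                      # someone else won the race; caller retries
        self.blob, self.version = json.dumps(data), self.version + 1
        return True

def cas_update(store, mutate, max_tries=100):
    """Download metadata.json, apply the change, write back only if the object
    is unchanged since we read it; otherwise re-read and retry."""
    for _ in range(max_tries):
        data, version = store.get()
        mutate(data)
        if store.put_if_version(data, version):
            return data
    raise RuntimeError("could not commit after retries")

store = FakeObjectStore()
# Two "nodes" each register a segment file they wrote; neither clobbers the other.
cas_update(store, lambda m: m.setdefault("files", []).append("seg_001"))
cas_update(store, lambda m: m.setdefault("files", []).append("seg_002"))
print(store.get()[0])  # → {'files': ['seg_001', 'seg_002']}
```

With a hundred nodes, most conditional writes fail and loop, which is the "slow but converges" behavior he mentions; the payoff is that the object store itself is the only stateful system.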
And so when we started building the database, we're like, oh yeah, we have to build a, we really thought we had to build a consensus layer, like have a ZooKeeper or something to do this. But then we discovered the compare-and-swap. It's like, oh, we can kick the can. We'll just do metadata.json, and it's fine. It's probably fine. Um, and we just kept kicking the can until we had very, very strong conviction in the idea. Um, and then we kind of just hinged the company on the fact that S3 probably was gonna get this. It started getting really painful in mid-2024, 'cause we were closing deals with, um, um, Notion, actually, that was running in AWS, and we're like, trust us, you, you really want us to run this in GCP. And they're like, no, I don't know about that, we're running everything in AWS. And the latencies across the clouds were so big, and we had so much conviction, that we bought, like, you know, dark fiber between the AWS region in, in Oregon, at the interexchange, and GCP. They're like, we've never seen a startup do this, what's going on here? And we're just like, no, we don't wanna do this. We were tuning, like, TCP windows, everything, to get the latency down, 'cause we had such high conviction in not doing a, a metadata layer on top of S3. So those were the three conditions, right? Compare-and-swap to do metadata, which wasn't in S3 until late 2024. S3 being consistent, which didn't happen until December 2020. And then NVMe SSDs, which didn't land in the cloud until 2017.

swyx: I mean, in some ways, a very big, like, cloud success story that you were able to, like, uh, put this all together, but also doing things like, uh, buying dark fiber. That, that actually is something I've never heard.

Simon Hørup Eskildsen: I mean, it's very common when you're a big company, right? You're, like, connecting your own, like, data center or whatever.
But it was uniquely just a pain with Notion, because, um, like, if you're buying in Ashburn, Virginia, right, like US East, the GCP and, and AWS data centers are, like, within a millisecond of each other on the public exchanges. But in Oregon, uniquely, the GCP data center sits, like, a couple hundred kilometers east of Portland, and the AWS region sits in Portland, but the network exchange they go through is through Seattle. So it's, like, a full, like, 14 milliseconds or something like that. And so, anyway, yeah, so we were like, okay, we have to go through an exchange in Portland.

swyx: And you'd rather do this than, like, run your ZooKeeper and, like...

Simon Hørup Eskildsen: Yes. Way rather. It doesn't have state. I don't want state in two systems. Um, and I think all that is just informed by, Justine, my co-founder, and I had just been on call for so long. And the worst outages are the ones where you have state in multiple places that's not syncing up. So it really came from, from a, like, just a, a very pure source of pain: of just imagining what we would be okay being woken up at 3:00 AM about. And having something in ZooKeeper was not one of them.

swyx: You, you're talking to, like, a Notion or something. Do they care, or do they just...

Simon Hørup Eskildsen: They just care about latency.

swyx: The latency cost. That's it.

Simon Hørup Eskildsen: They just cared about latency. Right. And we just absorbed the cost. We're just like, we have high conviction in this; at some point we can move them to AWS. Right. And so we just, we, we'll buy the fiber, it doesn't matter. Right. Um, and it's, like, $5,000. Usually when you buy fiber, you buy multiple lines. And we're like, we can only afford one, but we just tested that when it goes over the public internet, it's, like, super smooth.
And so we did a lot of... anyway, yeah, that's cool.

Alessio: You can imagine talking to the GCP rep, and it's like, no, we're gonna buy, because we know we're gonna churn from you guys and go to AWS in like six months. But in the meantime we'll do this.

Simon Hørup Eskildsen: I mean, this workload still runs on GCP, for what it's worth, 'cause it was just so reliable. So it was never about moving off GCP, it was just about honesty. It was just about giving Notion the latency that they deserved. And we didn't want them to have to care about any of this. They were also like, oh, egress is gonna be bad. It was like, okay, screw it, we're just gonna VPC-peer with you in AWS, we'll eat the cost. Whatever needs to be done.

Alessio: And what were the actual workloads? Because when you think about AI, 14 milliseconds really doesn't matter in the scheme of a model generation.

Simon Hørup Eskildsen: Yeah, we were told the latency that we had to beat. So we're just looking at the traces, and then sort of hand-drawing... looking at the trace and then thinking, what are the other extensions of the trace? And there's a lot more to it, because if you have 14 versus seven milliseconds, you can fit in another round trip. So we had to tune TCP to try to send as much data in every round trip, pre-warm all the connections. There's a lot of things that compound from having these kinds of round trips, but in the grand scheme it was just, well, we have to beat the latency of whatever we're up against.

swyx: Which is like... I mean, Notion is a database company. They could have done this themselves; they do lots of database engineering themselves. How do you even get in the door?
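"Send as much data in every round trip" usually means raising the initial congestion window so the first round trips carry more segments. As a hedged illustration only (these are generic Linux knobs and illustrative values, not turbopuffer's actual tuning), this can be set per route:

```shell
# Show the current default route, then raise the initial congestion
# and receive windows so early round trips can carry more data.
# Gateway/device and the value 20 are illustrative, not turbopuffer's.
ip route show default
sudo ip route change default via 10.0.0.1 dev eth0 initcwnd 20 initrwnd 20
```

The trade-off is bursting more packets before any loss feedback arrives, which is why this is tuned per route rather than globally.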
Yeah, just talk through that.

Simon Hørup Eskildsen: Last time I was in San Francisco, I was talking to one of the engineers who was one of our champions at Notion. And they were just trying to make sure that the per-user cost matched the economics that they needed. The way I think about it is, I have to earn a return on whatever the clouds charge me, and then my customers have to earn a return on that. It's very simple, right? There has to be gross margin all the way up, and that's how you build the product. And so our customers have to like the set of trade-offs that turbopuffer makes, and if they're happy with that, that's great.

swyx: Do you feel like you're competing with build-internally versus buy, or buy versus buy?

Simon Hørup Eskildsen: Yeah, so, sorry, this was all to build up to your question. One of the Notion engineers told me that they'd sat and, probably on a napkin, drawn out: why hasn't anyone built this? And then they saw turbopuffer, and it was like, well, it's literally that. And I think AI has also changed the buy-versus-build equation: it's not really about can we build it, it's about do we have time to build it? I think they felt like, okay, if this is a team that can do that, and they feel enough like an extension of our team, then we can go a lot faster, which would be very, very good for them. And I mean, they put us through the test, right? We had some very, very long nights to do that POC. And they were really our second big customer after Cursor, which also was a lot of late nights.

swyx: Yeah. Should we go into that story? The Cursor story. They credit you a lot for working very closely with them.
swyx: So I just wanna hear... I've heard this story from Sualeh's point of view, but I'm curious what it looks like from your side.

Simon Hørup Eskildsen: I actually haven't heard it from Sualeh's point of view, so maybe you can now cross-reference it. The way that I remember it was that, the day after we launched... I'd worked the whole summer on the first version. Justine wasn't part of it yet, 'cause I didn't tell anyone that summer that I was working on this. I was just locked in on building it, because it's very easy otherwise to confuse talking about something with actually doing it. And so I was like, I'm not gonna do that, I'm just gonna do the thing. I launched it, and at this point turbopuffer is a Rust binary running on a single eight-core machine in a tmux instance. And me deploying it was looking at the request log and then Ctrl-C'ing it, like, okay, there's no requests, let's upgrade the binary. It was literally the scrappiest thing you could imagine, and it was on purpose, because at Shopify we did that all the time. We ran things in tmux all the time to begin with, before something had at least an inkling of PMF. It was like, okay, is anyone gonna hear about this? And one of the Cursor co-founders, Arvid, reached out. The Cursor team are all IOI and IMO contenders, right? So they just speak in bullet points and facts. It was this amazing email exchange: this is how many QPS we have, this is what we're paying, this is where we're going, blah, blah, blah. And so we're just conversing in bullet points. I tried to get a call with them a few times, but they were really riding the PMF wave, just like, late 2023. And one time Sualeh emails me at like five...
Simon Hørup Eskildsen: What was it, like 4:00 AM Pacific time, saying, hey, are you open for a call now? And I'm on the East Coast, so it was like 7:00 AM. I was like, yeah, great, sure, whatever. And we just started talking, and something... I didn't know anything about sales, but something just compelled me: I have to go see this team. There's something here. So I went to San Francisco, and I went to their office, and the way that I remember it is that Postgres was down when I showed up at the office. Did Sualeh tell you this? No? Okay. So Postgres was down, so they were distracted with that, and I was trying my best to see if I could help in any way. I knew a little bit about databases, about tuning autovacuum. It was like, I think you have to tune autovacuum. And so we talked about that, and then that evening talked about what it would look like to work with us. And I just said, look, we're all in, we will just do whatever you tell us, right? They migrated everything over the next week or two, and we reduced their cost by 95%, which I think kind of fixed their per-user economics. And it solved a lot of other things. And this is also when I asked Justine to come on as my co-founder; she was the best engineer that I ever worked with at Shopify. She lived two blocks away, and we were just, okay, we're gonna get this done. And we did. We helped them migrate, and we just worked like hell over the next month or two to make sure that we were never an issue. And that was the Cursor story.

swyx: And is code a different workload than normal text? Is it just text? Is it the same thing?

Simon Hørup Eskildsen: Yeah, so Cursor's workload is basically... they will embed the entire code base, right?
So they chunk it up however they do; they have their own embedding model, which they've been public about, and on their evals... there's one of their evals where it's a 25% improvement on a very particular workload. They have a bunch of blog posts about it. I think it works best on larger code bases, but they've trained their own embedding model to do this. And so you'll see it if you use the Cursor agent: it will do searches. And they've also been public about how they've, I think, post-trained their model to be very good at semantic search as well. And that's how they use it. So it's very good at, like, can you find me the code that's similar to this, or code that does this? And for these queries they also use grep to supplement it.

swyx: Yeah.

Simon Hørup Eskildsen: Of course.

swyx: It's been a big topic of discussion, like, is RAG dead? Because grep, you know...

Simon Hørup Eskildsen: I mean, we see lots of demand from the coding companies for...

swyx: Search in every part. Yes.

Simon Hørup Eskildsen: We see demand. And I like case studies. I don't like doing thought pieces on "this is where it's going" and trying to be all macroeconomic about AI; that has turned out to be a giant waste of time, because no one can really predict any of this. So I just collect case studies, and I mean, Cursor has done a great job talking about what they're doing, and I hope some of the other coding labs that use turbopuffer will do the same. But it does seem to make a difference for particular queries. We can also do text, we can also do regex. But I should also say that Cursor's security posture in turbopuffer is exceptional, right? They have their own embedding model, which makes it very difficult to reverse-engineer. They obfuscate the file paths.
It's very difficult to learn anything about a code base by looking at it. And the other thing they do is that, for their customers, they encrypt it with their own encryption keys in turbopuffer's bucket. So it's really, really well designed.

swyx: And so this is extra stuff they did to work with you, because you are not part of Cursor. And this is just best practice when working with any database, not just you guys. Okay, that makes sense. I think for me the learning is that all workloads are hybrid. You want the semantic, you want the text, you want the regex, you want SQL, I dunno. It's silly to be all-in on one particular query pattern.

Simon Hørup Eskildsen: I really like the way that Sualeh at Cursor talks about it, which is... I'm gonna butcher it here, and, you know, I'm a database scalability person, I don't know anything about training models other than what the internet tells me. The way he describes it is that this is just cached compute, right? You have a point in time where you're looking at some particular context and focused on some chunk, and you say: this is the layer of the neural net at this point in time. That seems fundamentally really useful, to cache compute like that. And how the value of that will change over time, I'm not sure, but there seems to be a lot of value in that.

Alessio: Maybe talk a bit about the evolution of the workload, because even search, maybe two years ago, was one search at the start of an LLM query to build the context. Now you have agentic search, however you wanna call it, where the model is both writing and changing the code, and it's searching it again later. Yeah.
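The "all workloads are hybrid" point can be made concrete with rank fusion: run a lexical (grep/BM25-style) ranking and a vector ranking, then merge them. This is a generic reciprocal-rank-fusion sketch with made-up document ids, not turbopuffer's or Cursor's actual API:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc ids into one hybrid ranking.
    Each ranking lists ids best-first; k dampens the weight of top
    ranks (60 is the constant from the original RRF paper)."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from a lexical index and a vector index.
lexical = ["utils.py", "auth.py", "db.py"]
semantic = ["auth.py", "session.py", "utils.py"]
print(reciprocal_rank_fusion([lexical, semantic]))
# ['auth.py', 'utils.py', 'session.py', 'db.py']
```

Documents that appear high in both lists (here `auth.py`) float to the top, while documents only one index found still survive, which is exactly the diversification benefit the hybrid discussion is pointing at.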
Alessio: What are maybe some of the new types of workloads, or changes you've had to make to your architecture for it?

Simon Hørup Eskildsen: I think you're right. When I think of RAG, I think of: hey, there's an 8,000-token context window and you better make it count, and search was a way to do that. Now everything is moving towards just letting the agent do its thing, right? And so, back to the thing before: the LLM is very good at reasoning with the data, and so we're just the tool call, right? That's increasingly what we see our customers doing. What we're seeing more demand for now is a lot of concurrency. Notion does a ridiculous number of queries in every round trip, just because they can. And now when I use the Cursor agent, I also see them doing more concurrency than I've ever seen before. So, a bit similar to how we designed the database to drive as much concurrency in every round trip as possible, that's also what the agents are doing. So that's new. It means an enormous number of queries, all at once, to the dataset while it's warm, in as few turns as possible.

swyx: Can I clarify one thing on that?

Simon Hørup Eskildsen: Yes.

swyx: Are they batching multiple users, or is one user driving multiple?

Simon Hørup Eskildsen: One user driving multiple. One agent driving.

swyx: It's parallel-searching a bunch of things.

Simon Hørup Eskildsen: Exactly.

swyx: Yeah, exactly. Cognition also did this for the fast-context thing, like eight parallel at once.

Simon Hørup Eskildsen: Yes.

swyx: And an interesting problem is, well, how do you make sure you have enough diversity so you're not making the same request eight times?

Simon Hørup Eskildsen: And I think that's probably also where the hybrid comes in: that's another way to diversify, a completely different way to do the search. That's a big change, right?
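An agent fanning out many searches in a single turn looks roughly like this. The async `search` function is a hypothetical stand-in for a search-engine client call; this is a generic asyncio sketch, not any vendor's SDK:

```python
import asyncio

async def search(query: str) -> list[str]:
    # Hypothetical stand-in for an async search-engine client call.
    await asyncio.sleep(0.01)  # simulated network round trip
    return [f"hit for {query!r}"]

async def agent_turn(queries: list[str]) -> list[str]:
    """Issue all of one turn's searches concurrently, so total wall
    time is roughly one round trip instead of len(queries) of them."""
    results = await asyncio.gather(*(search(q) for q in queries))
    return [hit for hits in results for hit in hits]  # flatten

hits = asyncio.run(agent_turn(["auth flow", "token refresh", "session store"]))
print(len(hits))  # 3
```

From the database's side, this pattern means bursts of simultaneous queries against a warm dataset, which is the workload shift Simon describes.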
So before, it was really just one call, and then the LLM took however many seconds to return, but now we just see an enormous number of queries. So we've reduced query pricing. This is probably the first time, actually, I'm saying that, but query pricing is being reduced, like 5x. And we'll probably try to reduce it even more to accommodate some of these workloads of very large numbers of queries. That's one thing that's changed. The write ratio is still very high, right? There's still an enormous number of writes per read, but we're probably starting to see that change as people really lean into this pattern.

Alessio: Can we talk a little bit about the pricing? I'm curious, because traditionally a database would charge on storage, but now you have the token generation that is so expensive, where the actual value of a good search query is much higher, because it's saving inference time down the line. How do you structure that, and what are people receptive to on the other side?

Simon Hørup Eskildsen: Yeah. The turbopuffer pricing in the beginning was just very simple. The pricing on search engines before turbopuffer was very serverful, right? It was like: here's the VM, here's the per-hour cost, great. And I just sat down with a piece of paper and said, if turbopuffer was really good, this is probably what it would cost, with a little bit of margin. And that was the first pricing of turbopuffer. It was vibe pricing. It was very vibe-priced, and I got it wrong. Well, I didn't get it wrong, but turbopuffer wasn't at the first-principles pricing, right? So when Cursor came on turbopuffer, it was like...
I didn't know any VCs. I didn't know anything about raising money or anything like that. I just saw that my GCP bill was a lot higher than the Cursor bill. So Justine and I were like, well, we have to optimize it. And, to the chagrin of the VCs now, it means that we're profitable, because we had so much pricing pressure in the beginning. Because it was running on my credit card, and Justine and I had spent tens of thousands of dollars on compute bills and spinning up the company and, like, very bad Canadian lawyers and things to get all of this done, because we just didn't know. If you're steeped in San Francisco, you just know: okay, you go out, raise a pre-seed round. I had never heard the word pre-seed at this point in time.

swyx: When you had Cursor, you had Notion, you had no funding?

Simon Hørup Eskildsen: With Cursor we had no funding. By the time we had Notion, Lachy was here. So we really just vibe-priced it, 100%, but it wasn't performing at first principles, so we did everything we could to optimize it in the beginning, so that at least we could have like a 5% margin or something, so I wasn't freaking out. Because Cursor's bill was also going up like this as they were growing, and so my liability and my credit limit... I was actively calling my bank, like, I need a bigger credit limit. Anyway, that was the beginning. But the pricing was, yeah, storage, writes, and queries. And the pricing we have today is basically just that pricing, with duct tape and spit, trying to approach a margin on the physical underlying hardware. And this year you're gonna see more and more pricing changes from us.
Yeah.

swyx: And how much does stuff like VPC peering matter? Because you're working in AWS land, where egress is charged and all that.

Simon Hørup Eskildsen: We have an enterprise plan that just has a base fee, because we haven't had time to figure out SKU pricing for all of this. But yeah, you can run turbopuffer in SaaS, right? That's what Cursor does. You can run it in a single-tenant cluster, so it's just you; that's what Notion does. And then you can run it in BYOC, where everything is inside the customer's VPC; that's what, for example, Anthropic does.

swyx: What I'm hearing is that this is probably the best CRO job for somebody who can come in and...

Simon Hørup Eskildsen: I mean...

swyx: ...help you with this.

Simon Hørup Eskildsen: Turbopuffer hired, I don't know what number this was, but we had a full-time CFO as like the 12th hire or something. I hear about a lot of companies, I don't know how they do it, like they have a hundred employees and no CFO. Having a CFO is like...

swyx: Running a business, man.

Simon Hørup Eskildsen: It's so good. Yeah, like, Money Mike, he just handles the money and a lot of the business stuff, and so he came in and helped with a lot of the operational side of the business. Like COO/CFO, somewhere in between.

swyx: Just a quick mention of Lachy, 'cause I'm curious. I've met Lachy, and he's obviously a very good investor, and now at Physical Intelligence. I call him a generalist super angel, right? He invests in everything. And I always wonder: is there something appealing about focusing on developer tooling, focusing on databases, going "I've invested for 10 years in databases," versus being like Lachy, where he can maybe connect you to all the customers that you need?

Simon Hørup Eskildsen: This is an excellent question.
No one's asked me this. Why Lachy? Because there were a couple of people that we were talking to at the time, and when we were raising, we were almost a little distressed, because one of our peers had just launched something that was very similar to turbopuffer. And someone gave me the advice at the time: just choose the person where you feel like you can pick up the phone, not prepare anything, and just be completely honest. And I don't think I've said this publicly before, but I just called Lachy and was like, Lachy, if this doesn't have PMF by the end of the year, we'll just return all the money to you. Justine and I don't wanna work on this unless it's really working. So we want to give it the best shot this year, we're really gonna go for it, we're gonna hire a bunch of people, and we're just gonna be honest with everyone. When I don't know how to play a game, I just play with open cards. And Lachy was the only person that didn't freak out. He was like, I've never heard anyone say that before. As I said, I didn't even know what a seed or pre-seed round was at this time, so I was just very honest with him. And I asked him, Lachy, have you ever invested in a database company? He was like, no. And at the time I was like, am I dumb? But there was something that just really drew me to Lachy. He is so authentic, so honest, and I just felt like I could say everything openly. And that was, I think, a perfect match at the time, and honestly still is. He was just like, okay, that's great, this is the most honest, ridiculous thing I've ever heard anyone say to me.

swyx: Why is this ridiculous?
Say, a competitor launched, this may not work out.

Simon Hørup Eskildsen: It was more just like, if this doesn't work out, I'm gonna close up shop by the end of the year, right? I don't know, maybe it's common; he told me it was uncommon. That's why we chose him, and he's been phenomenal. The other people we were talking to at the time were database experts; they knew a lot about databases, and Lachy didn't. This turned out to be a phenomenal asset. Justine and I know a lot about databases. The people that we hire know a lot about databases. What we needed was someone who didn't know a lot about databases, didn't pretend to know a lot about databases, and just wanted to help us with candidates and customers. And he did. I have a list of the investors that I have a relationship with, and Lachy has performed excellently in the number of sub-bullets of what we can attribute back to him. Just absolutely incredible. And when people talk about no ego and just doing the best thing for the founder... even my lawyer is like, yeah, Lachy is the most friendly person you will find.

swyx: Okay, this is the most glowing recommendation I've ever heard.

Alessio: He deserves it. He's very special.

swyx: Yeah. Okay. Amazing.

Alessio: Since you mentioned candidates, maybe we can talk about team building. Especially in SF, it feels like it's just easier to start a company than to join a company. I'm curious about your experience, especially not being in SF full-time and doing something at a very low level of technical detail.

Simon Hørup Eskildsen: Yeah. So, joining versus starting: I never thought that I would be a founder. Turbopuffer started as a blog post, and then it became a project, and then sort of almost accidentally became a company.
And now it feels like it's becoming a bigger company. That was never the intention. The intentions were very pure. It's just: why hasn't anyone done this? And I wanna be the first person to do it. I think some founders have this "I could never work for anyone else" thing. I really don't feel that way. It's just, I wanna see this happen, and I wanna see it happen with some people that I really enjoy working with, and I wanna have fun doing it. This has all felt very natural in that sense. So it was never join versus found; founding just found me at the right moment.

Alessio: Well, I think there's an argument that you should have joined Cursor, right? So I'm curious how you evaluated it: okay, I should actually go raise money and make this a company, versus, this is a company that is growing like crazy, it's an interesting technical problem, I should just build it within Cursor, and then they don't have to encrypt all this stuff, they don't have to obfuscate things. Was that on your mind at all, or...

Simon Hørup Eskildsen: Before taking the small check from Lachy, I did have a hard look at myself in the mirror: okay, do I really want to do this? Because if I take the money, I really have to do it, right? The way I almost think about it is, you kind of need to be fucked up enough to want to go all the way. And that was the conversation where I was like, okay, this is gonna be part of my life's journey, to build this company and do it in the best way that I possibly can. Because if I ask people to join me, ask people to get on the cap table, then I have an ultimate responsibility to give it everything. It doesn't occur to me that everyone takes it that seriously, and maybe I take it too seriously, I don't know.
But that was a very intentional moment. And so then it was very clear: okay, I'm gonna do this, and I'm gonna give it everything.

Alessio: A lot of people don't take it this seriously.

swyx: Let's talk about your concept of the P99 engineer. People are saying 10x engineer, everyone's saying, you know, maybe engineers are out of a job, I don't know. But you definitely see a P99 engineer, and I just want you to talk about it.

Simon Hørup Eskildsen: Yeah, so the P99 engineer was just a term that we started using internally to talk about candidates and about how we wanted to build the company. And, you know, like everyone else, we want a talent-dense company, and I think that's almost become trite at this point. What I credit the Cursor founders a lot with is that they arrived there from first principles: we just need a talent-dense team. And I've seen some teams that weren't talent dense, and I've seen the counterfactual, which, if you've been in a large company, you will see just logically happens at a large company. So that was super important to me and Justine, and it's very difficult to maintain, so we needed wording for it. I have a document called Traits of the P99 Engineer, and it's a bullet-point list. I look at that list after every single interview that I do, and in every single recap that we do. And every recap we end with some version of: I'm gonna reject this candidate, completely regardless of what the discourse was, because I wanna see people fight for this person. The default should not be "we're gonna hire this person"; the default should be "we're definitely not hiring this person."
And, you know, if everyone was like, "ah, maybe," and wouldn't throw a punch, then this is not the right person.

swyx: Do you operate like... there must be at least one champion who's like, yes, I will put my career on the line for this person?

Simon Hørup Eskildsen: Career on the line...

swyx: Maybe a chair.

Simon Hørup Eskildsen: You know, someone needs to have both fists up and be like, I'd fight, right?

swyx: Yeah.

Simon Hørup Eskildsen: And if one person says that, then okay, let's do it, right?

swyx: Yeah.

Simon Hørup Eskildsen: It doesn't have to be absolutely everyone. The interviews are designed so that you're checking for different attributes, and if someone is knocking it out of the park in every single attribute, that's fairly rare. But that's really important. And so, the traits of the P99 engineer, there's lots of them. There's also the traits of the triple-nine engineer and the quadruple-nine engineer. It's a long list.

swyx: Okay.

Simon Hørup Eskildsen: I'll give you some samples of what we look for. I think the P99 engineer has some history of having bent their trajectory, or something, to their will. Some moment where they just made the computer do what it needed to do. Something like that will have occurred at some point in their career, and hopefully multiple times.

swyx: Give me an example from one of your engineers.

Simon Hørup Eskildsen: I'll give one. So, we launched this thing called ANN v3, and we're working on v4 and v5 right now, but ANN v3 can search a hundred billion vectors with a p50 of around 40 milliseconds and a p99 of 200 milliseconds.
Maybe other people have done this, I'm sure Google and others have, but we haven't seen anyone, at least not in a publicly consumable SaaS, that can do this. And that was an engineer, the chief architect of turbopuffer, Nathan. The software was not capable of this, and he just made it capable, for a very particular workload, in like a six-to-eight-week period, with the help of a lot of the team. There have been numerous examples of that at turbopuffer, but that's really bending the software and x86 to your will. It was incredible to watch. You wanna see some moments like that.

swyx: Isn't that triple nine?

Simon Hørup Eskildsen: Um, I think Nathan...

Alessio: I feel like this bar is too high.

Simon Hørup Eskildsen: Nathan is... yeah, there's a lot of nines. Okay. So I think that's one trait. I think another trait is that the P99 engineer spends a lot of time looking at maps. Generally it's their preferred UX. They just love looking at maps. You ever seen someone who just sits on their phone and scrolls around on a map? Or do you not look at maps a lot? You guys don't look at maps?

swyx: I guess I'm not feeling there, I don't know, but...

Simon Hørup Eskildsen: What about trains? Do you like trains?

swyx: Uh, I mean, not enough. Okay, this is just weaponized autism, is what I call it.

Simon Hørup Eskildsen: I love looking at maps. It's like my preferred UX, and I like lots of...

Alessio: ...like random places? So how do you explore the maps?

Simon Hørup Eskildsen: No, it's just a joke.

swyx: It's autism. [laughs]
It's like you're just obsessed by something and you like studying a thing.

Simon Hørup Eskildsen: The origin of this was that at some point I read an interview with some IOI gold medalist...

swyx: Uh-huh.

Simon Hørup Eskildsen: ...and it's like, what do you do in your spare time? "I like looking at maps." I was like, I feel so seen. I just love zooming out, like, oh, Canada is so big, where's Baffin Island? I don't know. I love it. Anyway, so the P99 engineer is obsessive, right? You'll find traits of that. We do multiple interviews at turbopuffer that try to screen for some of these things. There are lots of others, but these are the kinds of traits that we look for.

swyx: I'll tell you, some people listen for some of my devrel stuff, and I do think about devrel as maps. You draw a map for people; a map shows you what is commonly agreed to be the geographical features, where a boundary is, and it also shows you what's not there. I think a lot of developer-tools companies try to tell you they can do everything, but let's be real: your three landmarks are here, everyone comes here, then here, then here, and you draw a map, and then you draw a journey through the map. To me, that's what developer relations looks like. So I do think about things that way.

Simon Hørup Eskildsen: I think the P99 engineer thinks in trade-offs, right? The P99 engineer is very clear about, hey, you can't run a high-transaction workload on turbopuffer, right? The write latency is a hundred milliseconds. That's a clear trade-off. I think the P99 engineer is very good at articulating the trade-offs in every decision, which is exactly what the map is in your case, right?

swyx: Uh, yeah. My world.
My world. Alessio: How, how do you reconcile some of these things when you're saying you bend the computer to your will versus like the trade

PC Perspective Podcast
Podcast #858 - Intel & AMD CPU Rumors, NVIDIA Works on Linux, DDR5 Prices Trend, MOZA R5 Bundle

PC Perspective Podcast

Play Episode Listen Later Mar 7, 2026 81:25


Not sure how this got missed last week! Making up for time, now you get TWO episodes packed into 1 week! Intel might go back to a unified arch, Acer threatens people to buy now or else, and that Discord thing continues. Oh, and that Nvidia money train just keeps on rolling, plus more 12VHPWR woes. Do take a listen / look at the Moza R5 virtual driving gear bundle though, very nice.

Timestamps:
0:00 Intro
00:56 Patreon
02:18 Food with Josh
04:13 News begins - Intel unified core architecture rumor
10:24 AMD Zen 6 might not arrive until 2027
15:37 Acer sees sales jump after warning of price hikes
17:15 NVIDIA to improve Linux gaming performance
18:26 NVIDIA financials with Josh
23:10 Apple to build Mac mini in USA
26:31 Some alternatives to rising NVMe costs?
33:41 DDR5 prices possibly beginning a downward trend
35:11 WireView Pro II to help keep your 5090 from melting
41:00 Command line automation comes to AIDA64
43:22 Discord
45:31 (In)Security Corner
55:26 Gaming Quick Hits
1:03:00 Josh reviews the MOZA R5 Bundle
1:10:41 Picks of the Week
1:19:51 Outro

★ Support this podcast on Patreon ★

Unexplored Territory
#113 - Procuring hardware for a vSAN based VCF infrastructure - featuring John Nicholson!

Unexplored Territory

Play Episode Listen Later Mar 2, 2026 42:45


I've been on the Virtually Speaking podcast several times, so it was time to invite one of the hosts to the Unexplored Territory Podcast and discuss his favorite topic: hardware configurations and the bill of materials! John Nicholson goes over all the ins and outs of procuring new hardware and talks about ordering components for existing hardware. We discuss NICs, switches, Ready Node configurations, Emulated Ready Node configurations, NVMe devices, and much more. During the discussion, various blogs and videos were mentioned, make sure to check those as well!

https://blogs.vmware.com/cloud-foundation/2023/07/26/expanded-hardware-compatibility-for-vsan-express-storage-architecture/
https://knowledge.broadcom.com/external/article/326476/what-you-can-and-cannot-change-in-a-vsan.html
https://blogs.vmware.com/cloud-foundation/2023/07/27/yes-you-can-change-things-on-a-vsan-esa-readynode/
https://www.youtube.com/watch?v=jjen1ER8ASc
https://www.vmware.com/explore/video-library/video/6360757998112

Software Sessions
Bryan Cantrill on Oxide Computer

Software Sessions

Play Episode Listen Later Feb 27, 2026 89:58


Bryan Cantrill is the co-founder and CTO of Oxide Computer Company. We discuss why the biggest cloud providers don't use off-the-shelf hardware, how scaling data centers at Samsung's scale exposed problems with hard drive firmware, how the values of NodeJS are in conflict with robust systems, choosing Rust, and the benefits of Oxide Computer's rack-scale approach. This is an extended version of an interview posted on Software Engineering Radio.

Related links:
Oxide Computer
Oxide and Friends
Illumos
Platform as a Reflection of Values
RFD 26
bhyve
CockroachDB
Heterogeneous Computing with Raja Koduri
Transcript: You can help correct transcripts on GitHub.

Intro [00:00:00] Jeremy: Today I am talking to Bryan Cantrill. He's the co-founder and CTO of Oxide Computer Company, and he was previously the CTO of Joyent, and he also co-authored the DTrace tracing framework while he was at Sun Microsystems. [00:00:14] Jeremy: Bryan, welcome to Software Engineering Radio. [00:00:17] Bryan: Uh, awesome. Thanks for having me. It's great to be here. [00:00:20] Jeremy: You're the CTO of a company that makes computers. But I think before we get into that: a lot of people who build software, now that the actual computer is abstracted away, they're using AWS or they're using some kind of cloud service. So I thought we could start by talking about data centers, 'cause you were previously working at Joyent, and I believe you got bought by Samsung, and you've previously talked about how you had to figure out, how do I run things at Samsung's scale. So how was your experience with that? What were the challenges there? Samsung scale and migrating off the cloud [00:01:01] Bryan: Yeah, I mean, so at Joyent, and so Joyent was a cloud computing pioneer. Uh, we competed with the likes of AWS and then later GCP and Azure. Uh, and we, I mean, we were operating at a scale, right?
We had a bunch of machines, a bunch of DCs, but ultimately, you know, we were a VC-backed company, and a small company by the standards of, certainly by Samsung's standards. [00:01:25] Bryan: And so when Samsung bought the company, I mean, the reason, by the way, that Samsung bought Joyent is Samsung's cloud bill was, uh, let's just say it was extremely large. They were spending an enormous amount of money every year on the public cloud. And they realized that in order to secure their fate economically, they had to be running on their own infrastructure. [00:01:51] Bryan: It did not make sense. And there was not really a product that Samsung could go buy that would give them that on-prem cloud. Uh, I mean, in that regard, like, the state of the market was really no different. And so they went looking for a company, uh, and bought Joyent. And it was when we were on the inside of Samsung [00:02:11] Bryan: that we learned about Samsung scale. And Samsung loves to talk about Samsung scale. And I gotta tell you, it is more than just chest thumping. Like, Samsung scale really is, I mean, just the sheer, the number of devices, the number of customers, just this absolute size. They really wanted to take us out to levels of scale, certainly, that we had not seen. [00:02:31] Bryan: The reason for buying Joyent was to be able to stand up on their own infrastructure, so we were gonna go buy, and we did go buy, a bunch of hardware. Problems with server hardware at scale [00:02:40] Bryan: And I remember just thinking, God, I hope Dell is somehow magically better. I hope the problems that we have seen in the small... we just, you know, I just remember hoping, and hope is, of course, a terrible strategy, and it was a terrible strategy here too.
Uh, and the problems that we saw at the large... when you scale out, the problems that you see kind of once or twice you now see all the time, and they become absolutely debilitating. [00:03:12] Bryan: And we saw a whole series of really debilitating problems. I mean, in many ways, like, comically debilitating, uh, in terms of showing just how bad the state of the art is. And we had, I mean, it should be said, we had great software and great software expertise, um, and we were controlling our own system software. [00:03:35] Bryan: But even controlling your own system software, your own host OS, your own control plane, which is what we had at Joyent, ultimately you're pretty limited. I mean, you've got the problems that you can obviously solve, the ones that are in your own software, but the problems that are beneath you, the problems that are in the hardware platform, the problems that are in the componentry beneath you, become the problems that are in the firmware. IO latency due to hard drive firmware [00:04:00] Bryan: Those problems become unresolvable, and they are deeply, deeply frustrating. Um, and we just saw a bunch of 'em. Again, they were comical in retrospect, and I'll give you like a couple of concrete examples just to give you an idea of what kind of thing you're looking at. One of our data centers had really pathological IO latency. [00:04:23] Bryan: We had a very, uh, database-heavy workload. And this was kind of right at the period where you were still deploying on rotating media, on hard drives. An all-flash buy did not make economic sense when we did this in 2016. It'd be interesting to know, like, when was kind of the last time that actual hard drives made sense? [00:04:50] Bryan: 'Cause I feel like this was close to it.
So we had a bunch of pathological IO problems, but we had one data center in which the outliers were actually quite a bit worse, and there was so much going on in that system, it took us a long time to figure out why. Because when you're seeing worse IO, I mean, you naturally wanna understand like, what's the workload doing? [00:05:14] Bryan: You're trying to take a first-principles approach. What's the workload doing? So this was a very intensive database workload to support the object storage system that we had built, called Manta. And the metadata tier was stored, and we were using Postgres for that. And that was just getting absolutely slaughtered. [00:05:34] Bryan: Um, and ultimately very IO bound, with these kind of pathological IO latencies. Uh, and we were, you know, trying to like peel away the layers to figure out what was going on. And I finally had this thing. So it's like, okay, at the device layer, at the disk layer, we are seeing pathological outliers in this data center that we're not seeing anywhere else. [00:06:00] Bryan: And that does not make any sense. And the thought occurred to me. I'm like, well, maybe we have a different rev of firmware on our HGST drives? HGST, now part of WD, Western Digital, were the drives that we had everywhere. And, um, so maybe we had a different, maybe I had a firmware bug. [00:06:20] Bryan: This would not be the first time in my life at all that I would have a drive firmware issue. Uh, and I went to go pull the firmware rev, and I'm like, Toshiba makes hard drives? I had no idea that Toshiba even made hard drives, let alone that they were in our data center. [00:06:38] Bryan: I'm like, what is this?
And as it turns out, and this is, you know, part of the challenge when you don't have an integrated system, which, not to pick on them, but Dell doesn't: Dell would routinely just make substitutes, and they make substitutes where, you know, it's kind of like you're going to, I don't know, Instacart or whatever, and they're out of the thing that you want. [00:07:03] Bryan: So, you know, someone makes a substitute, and like, sometimes that's okay, but it's really not okay in a data center. And you really want to develop and validate an end-to-end integrated system. And in this case, like, Toshiba does make hard drives, but they were, uh, basically not competitive, and they were not competitive in part for the reasons that we were discovering. [00:07:29] Bryan: They had really serious firmware issues. So these were drives that would just simply stop acknowledging any reads for on the order of 2,700 milliseconds. Long time, 2.7 seconds. Um. And that was a drive firmware issue, but it highlighted a much deeper issue, which was the simple lack of control that we had over our own destiny. [00:07:53] Bryan: Um, and it's an example among many where Dell is making a decision that lowers the cost of what they are providing you marginally, but it is then giving you a system that they shouldn't have any confidence in, because it's not one that they've actually designed, and they leave it to the customer, the end user, to make these discoveries. [00:08:18] Bryan: And these things happen up and down the stack. And it's not just to pick on Dell, because it's true for HPE, it's true for Supermicro, uh, it's true for your switch vendors.
It's true for storage vendors, where the one that is left actually integrating these things and trying to make the whole thing work is the end user sitting in their data center. AWS / Google are not buying off the shelf hardware but you can't use it [00:08:42] Bryan: There's not a product that they can buy that gives them elastic infrastructure, a cloud, in their own DC. The product that you buy is the public cloud. Like, when you go in the public cloud, you don't worry about this stuff, because it's AWS's issue or it's GCP's issue. And they are the ones that get this to ground. [00:09:02] Bryan: And this was kind of, you know, the eye-opening moment. Not a surprise. Uh, they are not Dell customers. They're not HPE customers. They're not Supermicro customers. They have designed their own machines, to varying degrees, depending on which one you're looking at. But they've taken the clean sheet of paper, and the frustration that we had kind of at Joyent, beginning to wonder, and then at Samsung, kind of wondering what was next, uh, is that what they built was not available for purchase in the data center. [00:09:35] Bryan: You could only rent it in the public cloud. And our big belief is that public cloud computing is a really important revolution in infrastructure. Doesn't feel like a deep thought, but cloud computing is a really important revolution. It shouldn't only be available to rent. You should be able to actually buy it. [00:09:53] Bryan: And there are a bunch of reasons for doing that. Uh, the one we saw at Samsung is economics, which I think is still the dominant reason, where it just does not make sense to rent all of your compute in perpetuity. But there are other reasons too. There's security, there's risk management, there's latency. [00:10:07] Bryan: There are a bunch of reasons why one might wanna own one's own infrastructure.
But, uh, that was very much the genesis for Oxide: coming out of this very painful experience. Because, I mean, a long answer to your question about what was it like to be at Samsung scale: [00:10:27] Bryan: those are the kinds of things that, I mean, in our other data centers, we didn't have Toshiba drives. We only had the HGST drives, but it's only when you get to this larger scale that you begin to see some of these pathologies. But these pathologies then are really debilitating in terms of those who are trying to develop a service on top of them. [00:10:45] Bryan: So it was very educational in that regard. And we're very grateful for the experience at Samsung in terms of opening our eyes to the challenge of running at that kind of scale. [00:10:57] Jeremy: Yeah, because I think as software engineers, a lot of times we treat the hardware as a given, where, [00:11:08] Bryan: Yeah. Yeah. There's software in hard drives [00:11:09] Jeremy: It sounds like in this case, I mean, maybe the issue is not so much that Dell or HP as a company doesn't own every single piece that they're providing you, but rather the fact that they're swapping pieces in and out without advertising them, and then when it becomes a problem, they're not necessarily willing to deal with the consequences of that. [00:11:34] Bryan: They just don't know. I mean, I think they just genuinely don't know. It's not like they're making a deliberate decision to kind of ship garbage. I mean, I think it's exactly what you said about not thinking about the hardware. It's like, what's a hard drive? [00:11:47] Bryan: I mean, it's a hard drive. It's got the same specs as this other hard drive, and, you know, it's a little bit cheaper, so why not?
It's like, well, there's some reasons why not, and one of the reasons why not is, uh, even a hard drive, whether it's rotating media or flash, that's not just hardware. [00:12:05] Bryan: There's software in there. And that software's not the same. I mean, there are components where, you know, if you're looking at like a resistor or a capacitor or something like this, yeah, if you've got two parts that are within the same tolerance, yeah, [00:12:19] Bryan: like, sure, maybe. Although even the EEs, I think, would be, uh, objecting to that a little bit. But the more complicated you get, and certainly once you get to the kind of hardware that we think of, like a microprocessor, a network interface card, a hard drive, an NVMe drive, [00:12:38] Bryan: those things are super complicated, and there's a whole bunch of software inside of those things, the firmware, and that's the stuff that you can't, I mean, you say that software engineers don't think about that. It's like, no one can really think about that, because it's proprietary. It's kinda welded shut, and you've got this abstraction into it. [00:12:55] Bryan: But the way that thing operates is very core to how the thing in aggregate will behave. And I think that the kind of fundamental difference between Oxide's approach and the approach that you get at a Dell, HP, Supermicro, wherever, is really thinking holistically in terms of hardware and software together, in a system that ultimately delivers cloud computing to a user. [00:13:22] Bryan: And there's a lot of software at many, many different layers. And it's very important to think about that software and that hardware holistically as a single system.
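The Toshiba story above comes down to spotting tail-latency outliers per drive model. A minimal sketch of that kind of analysis (hedged: this is not Joyent's or Oxide's actual tooling, and the data is invented; in practice the samples would come from a tracer or iostat-style telemetry):

```python
# Sketch of per-drive-model tail-latency analysis (invented data; not the
# actual Joyent/Oxide tooling). A firmware stall shows up as a huge p99 for
# one drive model while the average stays unremarkable.
from collections import defaultdict

def p99(samples):
    """Nearest-rank 99th percentile of latency samples, in milliseconds."""
    ordered = sorted(samples)
    rank = max(0, int(len(ordered) * 0.99) - 1)
    return ordered[rank]

def tail_by_model(io_samples):
    """io_samples: iterable of (drive_model, latency_ms) pairs."""
    buckets = defaultdict(list)
    for model, latency_ms in io_samples:
        buckets[model].append(latency_ms)
    return {model: p99(vals) for model, vals in buckets.items()}

# Healthy drives: reads complete in ~5 ms. The stalling model occasionally
# stops acknowledging reads for ~2,700 ms, like the firmware bug described.
healthy = [("HGST", 5.0)] * 100
stalling = [("TOSHIBA", 5.0)] * 97 + [("TOSHIBA", 2700.0)] * 3
report = tail_by_model(healthy + stalling)
```

With real telemetry, a per-model tail like this makes a firmware-level stall stand out immediately, even when average latency looks almost normal.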
[00:13:34] Jeremy: And during that time at Joyent, when you experienced some of these issues, was it more of a case of you didn't have enough servers experiencing this? So if it would happen, you might say, well, this one's not working, so maybe we'll just replace the hardware. What was the thought process when you were working at that smaller scale, and how did these issues affect you? UEFI / Baseboard Management Controller [00:13:58] Bryan: Yeah, at the smaller scale, you see fewer of them, right? What you might see is like, that's weird, we kinda saw this in one machine, versus seeing it in a hundred or a thousand or 10,000. Um, so you just see them less frequently, and as a result, they are less debilitating. [00:14:16] Bryan: Um, I think that when you go to that larger scale, those things that were unusual now become routine, and they become debilitating. Um, so it really is in many regards a function of scale. Uh, and then I think it was also, you know, a little bit dispiriting that kind of the substrate we were building on really had not improved. [00:14:39] Bryan: Um, and if you buy a computer server, an x86 server, there is a very low layer of firmware, the BIOS, the basic input output system, the UEFI BIOS, and this is an abstraction layer that has existed since the eighties and hasn't really meaningfully improved. Um, the kind of transition to UEFI happened, I mean, ironically, with Itanium, um, you know, two decades ago. [00:15:08] Bryan: But beyond that, this low layer, this lowest layer of platform enablement software, is really only impeding the operability of the system.
Um, you look at the baseboard management controller, which is kind of the computer within the computer. There is, uh, an element in the machine that needs to handle environmentals, that needs to, uh, operate the fans and so on. [00:15:31] Bryan: Uh, and that traditionally has been the baseboard management controller, and that architecturally just hasn't improved in the last two decades. And, you know, it's a proprietary piece of silicon, generally from a company that no one's ever heard of called ASPEED, uh, which is written in all caps, so I guess it needs to be screamed. [00:15:50] Bryan: Um, ASPEED has a proprietary part where there is, infamously, a root password encoded effectively in silicon. Uh, and for anyone who kind of goes deep into these things, it's like, oh my God, are you kidding me? Um, when we first started Oxide, the wifi password was a fraction of the ASPEED root password for the BMC. [00:16:16] Bryan: It's kinda like a little BMC humor. Um, but those things, it was just dispiriting that the state of the art was still basically personal computers running in the data center. Um, and that's part of what was the motivation for doing something new. [00:16:32] Jeremy: And for the people using these systems, whether it's the baseboard management controller or it's the BIOS or UEFI component, what are the actual problems that people are seeing? Security vulnerabilities and poor practices in the BMC [00:16:51] Bryan: Oh man, you are going to have some fraction of your listeners, maybe a big fraction, where it's like, yeah, what are the problems? That's a good question. And then you're gonna have the people that actually deal with these things, whose heads already hit the desk, being like, what are the problems? [00:17:06] Bryan: Like, what are the non-problems? Like, what works?
Actually, that's like a shorter answer. Um, I mean, there are so many problems, and a lot of it is just, I mean, there are problems just architecturally. These things are just so, I mean, the problems spread to the horizon, so you can kind of start wherever you want. [00:17:24] Bryan: But, I mean, as a really concrete example: okay, so the BMC, the computer within the computer, needs to be on its own network. So you now have not one network, you've got two networks. And that network, by the way, that's the network that you're gonna log into to reset the machine when it's otherwise unresponsive. [00:17:44] Bryan: So going into the BMC, you're able to control the entire machine. Well, it's like, alright, so now I've got a second network that I need to manage. What is running on the BMC? Well, it's running some ancient, ancient version of Linux. It's like, well, how do I patch that? [00:18:02] Bryan: How do I manage the vulnerabilities with that? Because if someone is able to root your BMC, they control the system. And now you've gotta go deal with all of the operational hair around that. How do you upgrade that system, updating the BMC? I mean, it's like you've got this second, shadow, bad infrastructure that you have to go manage. [00:18:23] Bryan: Generally not open source. There's something called OpenBMC, um, which people use to varying degrees, but you're generally stuck with the proprietary BMC, so you're generally stuck with iLO from HPE or iDRAC from Dell or, uh, the Supermicro BMC, and it is just excruciating pain. [00:18:49] Bryan: Um, and this is assuming, by the way, that everything is behaving correctly. The problem is that these things often don't behave correctly, and then there's the consequence of them not behaving correctly.
It's really dire, because it's at that lowest layer of the system. So, I mean, I'll give you a concrete example. [00:19:07] Bryan: A customer of theirs reported to me, so I won't disclose the vendor, but let's just say that a well-known vendor had an issue where their temperature sensors were broken. Um, and the thing would always read basically the wrong value. So it was the BMC that had to invent its own, a different kind of thermal control loop. And it would index on the actual inrush current. They would look at the current that's going into the CPU to adjust the fan speed. That's a great example of something that's like, that's an interesting idea that doesn't work, 'cause that's actually not the temperature. [00:19:45] Bryan: So that software would crank the fans whenever you had an inrush of current, and this customer had a workload that would spike the current, and when it would spike the current, the fans would kick up, and then they would slowly degrade over time. Well, this workload was spiking the current faster than the fans would degrade, but not fast enough to actually heat up the part. [00:20:08] Bryan: And ultimately, over a very long time, in a very painful investigation, this customer determined, like, my fans are cranked in my data center for no reason. We're blowing cold air. And this is on the order of like a hundred watts a server of energy that you shouldn't be spending, and ultimately what that comes down to is this kind of broken software-hardware interface at the lowest layer that has real, meaningful consequence, uh, in terms of hundreds of kilowatts, um, across a data center. So this stuff has very, very real consequence, and it's such a shadowy world.
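The fan behavior Bryan describes can be reproduced with a toy control-loop simulation (all numbers here are invented for illustration, not measurements from the actual incident): the fans jump on every current spike and ramp down slowly, so if spikes arrive faster than the ramp-down, the fans stay pinned even though the part is cool.

```python
# Toy simulation of the broken BMC thermal loop described above (all numbers
# invented): fan speed jumps to max on every current spike and decays slowly,
# so frequent spikes keep the fans pinned while the silicon stays cool.
def simulate_fans(spike_period, ticks=600, idle=20, spike=100, decay=1):
    """Return fan speed (% of max) at each tick."""
    speed = idle
    history = []
    for t in range(ticks):
        if t % spike_period == 0:
            speed = spike                      # inrush current: crank the fans
        else:
            speed = max(idle, speed - decay)   # slow ramp-down between spikes
        history.append(speed)
    return history

# Workload spikes every 10 ticks, but ramp-down to idle takes 80 ticks:
# the fans never get anywhere near idle, blowing cold air the whole time.
pinned = simulate_fans(spike_period=10)
# With spikes every 200 ticks, the fans do return to idle in between.
recovering = simulate_fans(spike_period=200)
```

The design flaw is using a proxy signal (current) with an asymmetric response (fast attack, slow decay); any workload whose spike interval is shorter than the decay time keeps the loop saturated.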
Part of the reason that your listeners that have dealt with this, their heads will hit the desk, is because it is really aggravating to deal with problems at this layer. [00:21:01] Bryan: You feel powerless. You don't control or really see the software that's on them. It's generally proprietary. You are relying on your vendor, and your vendor is telling you, like, boy, I don't know, you're the only customer seeing this. I mean, the number of times I have heard that. And I have pledged that we're not gonna say that at Oxide, because it's such an awful thing to say. Like, you're the only customer seeing this? [00:21:25] Bryan: It feels like, are you blaming me for my problem? It feels like you're blaming me for my problem. Um, and what you begin to realize is that, to a degree, these folks are speaking their own truth, because the folks that are running at real scale, at hyperscale, those folks aren't Dell, HP, Supermicro customers. [00:21:46] Bryan: They've actually done their own thing. So it's like, yeah, Dell's not seeing that problem, um, because they're not running at the same scale. Um, but you only have to run at modest scale before these things just become overwhelming in terms of the headwind that they present to people that wanna deploy infrastructure. The problem is felt with just a few racks [00:22:05] Jeremy: Yeah, so maybe to help people get some perspective: at what point do you think that people start noticing or start feeling these problems? Because I imagine that if you just have a few racks, or... [00:22:22] Bryan: Do you have a couple racks? Or are you just wondering? No, no, no. I would think, I think anyone who deploys any number of servers, especially now, especially if your experience is only in the cloud, you're gonna be like, what the hell is this? I mean, just again, just to get this thing working at all.
[00:22:39] Bryan: It's so hairy and so congealed, right? It's not designed. Um, it's accreted, and it's so obviously accreted that, I mean, nobody who is setting up a rack of servers is gonna think to themselves, yes, this is the right way to go do it, this all makes sense. Because it's just not. It feels like a kit car. I mean, kit car is almost too generous, because it implies that there's a set of plans to work to in the end. [00:23:08] Bryan: Uh, I mean, it's a bag of bolts. It's a bunch of parts that you're putting together. And so even at the smallest scales, that stuff is painful, just architecturally. It's painful at the small scale then, but at least you can get it working. I think the stuff that then becomes debilitating at larger scale are the things that are worse than just, this thing is a mess to get working. [00:23:31] Bryan: It's like the fan issue, um, where you are now seeing this over, you know, hundreds of machines or thousands of machines. Um, so it is painful at more or less all levels of scale. There is no level at which the PC, which is really what this is, this is the personal computer architecture from the 1980s, there is really no level of scale where that's the right unit. Running elastic infrastructure is the hardware but also, hypervisor, distributed database, api, etc [00:23:57] Bryan: I mean, where that's the right thing to go deploy, especially if what you are trying to run is elastic infrastructure, a cloud. Because the other thing is, we've kinda been talking a lot about that hardware layer. Hardware is just the start. You actually gotta go put software on that and actually run that as elastic infrastructure. [00:24:16] Bryan: So you need a hypervisor, yes, but you need a lot more than that.
You need to actually, you need a distributed database, you need web endpoints, you need a CLI, you need all the stuff that you need to actually go run an actual service of compute or networking or storage. I mean, and for compute, even for compute, there's a ton of work to be done. [00:24:39] Bryan: And compute is by far, I would say, the simplest of the three. When you look at network services, storage services, there's a whole bunch of stuff that you need to go build in terms of distributed systems to actually offer that as a cloud. So, I mean, it is painful at more or less every level if you are trying to deploy cloud computing on-prem. What's a control plane? [00:25:00] Jeremy: And for someone who doesn't have experience building or working with this type of infrastructure, when you talk about a control plane, what does that do in the context of this system? [00:25:16] Bryan: So the control plane is everything between your API request and that infrastructure actually being acted upon. So you go say, hey, I want to provision a VM. Okay, great, we've got a whole bunch of things we're gonna provision with that. We're gonna provision a VM, we're gonna get some storage that's gonna go along with that, that's gonna come out of a network storage service, uh, we've got a virtual network that we're gonna either create or attach to. [00:25:39] Bryan: We've got a whole bunch of things we need to go do for that. For all of these things, there are metadata components that we need to keep track of, beyond the actual infrastructure that we create. And then we need to go actually act on the actual compute elements, the host OS, what have you, the switches, what have you, and actually go.
Um, but getting that working robustly, getting that working is, you know, when you go to provision of vm, um, the, all the, the, the steps that need to happen and what happens if one of those steps fails along the way? [00:26:17] Bryan: What happens if, you know, one thing we're very mindful of is these kind of, you get these long tails of like, why, you know, generally our VM provisioning happened within this time, but we get these long tails where it takes much longer. What's going on? What, where in this process are we, are we actually spending time? [00:26:33] Bryan: Uh, and there's a whole lot of complexity that you need to go deal with that. There's a lot of complexity that you need to go deal with this effectively, this workflow that's gonna go create these things and manage them. Um, we use a, a pattern that we call, that are called sagas, actually is a, is a database pattern from the eighties. [00:26:51] Bryan: Uh, Katie McCaffrey is a, is a database reCrcher who, who, uh, I, I think, uh, reintroduce the idea of, of sagas, um, in the last kind of decade. Um, and this is something that we picked up, um, and I've done a lot of really interesting things with, um, to allow for, to this kind of, these workflows to be, to be managed and done so robustly in a way that you can restart them and so on. [00:27:16] Bryan: Uh, and then you guys, you get this whole distributed system that can do all this. That whole distributed system, that itself needs to be reliable and available. So if you, you know, you need to be able to, what happens if you, if you pull a sled or if a sled fails, how does the system deal with that? [00:27:33] Bryan: How does the system deal with getting an another sled added to the system? Like how do you actually grow this distributed system? And then how do you update it? How do you actually go from one version to the next? And all of that has to happen across an air gap where this is gonna run as part of the computer. 
[00:27:49] Bryan: So it is fractally complicated. There is a lot of complexity here in software — in the software system — and all of that we call the control plane. And this is what exists at AWS, at GCP, at Azure: when you are hitting an endpoint that's provisioning an EC2 instance for you, [00:28:10] Bryan: there is an AWS control plane that is doing all of this, and it has some of these similar aspects and certainly some of these similar challenges. Are vSphere / Proxmox / Hyper-V in the same category? [00:28:20] Jeremy: And for people who have run their own servers with something like, say, VMware or Hyper-V or Proxmox — are those in the same category? [00:28:32] Bryan: Yeah, a little bit. vSphere, yes; VMware ESX on its own, no. VMware ESX is kind of a key building block upon which you can build something that is a more meaningful distributed system. When it's just a machine that you're provisioning VMs on, then you, the human, might be the control plane. [00:28:52] Bryan: That's a much easier problem. But when you've got tens, hundreds, thousands of machines, you need to do it robustly. You need something to coordinate that activity: you need to pick which sled you land on, you need to be able to move these things, you need to be able to update that whole system. [00:29:06] Bryan: That's when you're getting into a control plane. So some of these things have kind of edged into a control plane — certainly VMware, now Broadcom, has delivered something that's kind of cloudish. I think that for folks who are truly born on the cloud, it still feels somewhat like you're going backwards in time when you look at these on-prem offerings. [00:29:29] Bryan: But it's got these aspects to it, for sure.
And some of these other things — when you're just looking at KVM or just looking at Proxmox — you kind of need to connect to other, broader things to turn them into something that really looks like manageable infrastructure. [00:29:47] Bryan: And many of those projects are either proprietary products, like vSphere, or open source projects that are not necessarily aimed at the same level of scale. You look at, again, Proxmox, or you look at OpenStack. [00:30:05] Bryan: And OpenStack is just a lot of things, right? OpenStack was kind of a free-for-all for every infrastructure vendor. There was a time people were like, aren't you worried about all these companies coming together for OpenStack? [00:30:24] Bryan: I'm like, haven't you ever worked for, like, a company? Companies don't get along. Having multiple companies work together on a thing — that's bad news, not good news. And I think one of the things that OpenStack has definitely struggled with is that there are so many different vendor elements in there that it's very much not a product; it's a project that you're trying to run. [00:30:47] Bryan: But that's certainly similar in spirit. [00:30:53] Jeremy: And so I think this is what you were alluding to earlier — the piece that allows you to allocate compute and storage, manage networking, gives you that experience of: I can go to a web console or I can use an API, and I can spin up machines and get them all connected. At the end of the day, the control plane is allowing you to do that in, hopefully, a user-friendly way. [00:31:21] Bryan: That's right. Yep.
And in order to do that in a modern way, it's not just a user-friendly way — you really need to have a CLI and a web UI and an API, and those all need to be drawn from the same single ground truth. You don't want any of those to be an afterthought to the others. [00:31:39] Bryan: You want the same way of generating all of those different endpoints and entries into the system. Building a control plane now has better tools (Rust, CockroachDB) [00:31:46] Jeremy: And if you take your time at Joyent as an example, what kind of tools existed for that, versus how much did you have to build in-house — as far as the hypervisor, managing the compute, and all that? [00:32:02] Bryan: Yeah, so we built more or less everything in-house. Over time we've gotten slightly better tools — and maybe it's easier to talk about the tools we started with at Oxide, because at Oxide we started with a clean sheet of paper. [00:32:16] Bryan: We knew we wanted to go build a control plane, but we were able to revisit some of the components. So maybe I'll talk about some of those changes. For example, at Joyent, when we were building a cloud, there wasn't really a good distributed database. [00:32:34] Bryan: So we were using Postgres as our database for metadata, and there were a lot of challenges. Postgres is not a distributed database: it runs with a primary/secondary architecture, and there are a bunch of issues there, many of which we discovered the hard way. When we were coming to Oxide, you had much better options to pick from in terms of distributed databases. [00:32:57] Bryan: There was a period — one that now seems potentially brief in hindsight — of really high-quality open source distributed databases.
So there were really some good ones to pick from. We built on CockroachDB — on CRDB. That was a really important component that we had at Oxide that we didn't have at Joyent. [00:33:19] Bryan: I wouldn't say we were rolling our own distributed database at Joyent — we were just using Postgres, and dealing with an enormous amount of pain in the surround. On top of that — and a control plane is much more than a database, obviously — there's a whole bunch of software that you need to go write [00:33:40] Bryan: to be able to transform these API requests into something that is reliable infrastructure. And there's a lot to that, especially when networking gets in the mix, when storage gets in the mix — there are a whole bunch of complicated steps that need to be done. At Joyent, [00:33:59] Bryan: in part because of the history of the company — and look, this is just not gonna sound good, but it is what it is and I'm just gonna own it — we did it all in Node. I know right now that just sounds like, well, you built it with Tinker Toys. [00:34:18] Bryan: You built the skyscraper with Tinker Toys? Well, we had greater aspirations for the Tinker Toys once upon a time, and it was better than Twisted in Python and EventMachine in Ruby, and we weren't gonna do it in Java. [00:34:32] Bryan: But let's just say that that experiment did ultimately end in a predictable fashion. We decided that maybe Node was not gonna be the best decision long-term. Joyent was the company behind Node.js back in the day — Ryan Dahl worked for Joyent. And then —
[00:34:53] Bryan: We landed that in a foundation in about, what, 2015, something like that, and began to consider our world beyond Node. Rust at Oxide [00:35:04] Bryan: A big tool that we had in the arsenal when we started Oxide is Rust. Indeed, the name of the company is a tip of the hat to the language we were pretty sure we were gonna be building a lot of stuff in — [00:35:16] Bryan: namely, Rust. Rust has been huge for us — a very important revolution in programming languages. Different people have come to it at different times, and I came to Rust in what I think of as its big second expansion, in 2018, when a lot of technologists were sick of Node and also sick of Go [00:35:43] Bryan: and also sick of C++, and wondering: is there gonna be something that gives me the performance I get out of C, and the robustness that a C program can have but that is often difficult to achieve — and can I get that with some of the speed of development (though I hate that term, "velocity of development") that you get out of a more interpreted language? [00:36:08] Bryan: And then, by the way, can I actually have types? I think types would be a good idea. Rust obviously hits the sweet spot of all of that. It has been absolutely huge for us. We knew when we started Oxide that we were gonna be using Rust in quite a few places, but we weren't doing it by fiat. [00:36:27] Bryan: We wanted to make sure we were making the right decision at every layer. I think what has been surprising is the sheer number of layers at which we use Rust: we've done our own embedded firmware in Rust, and in the host operating system — which is still largely in C — very big components are in Rust.
[00:36:47] Bryan: The hypervisor, Propolis, is all in Rust. And then of course the control plane, that distributed system on top — that's all in Rust. So that was a very important thing that we did not need to build ourselves; we were able to really leverage a terrific community. We were also able to use — and we'd done this at Joyent as well — illumos as the host OS component; our variant is called Helios. [00:37:11] Bryan: We've used bhyve as that internal hypervisor component. We've made use of a bunch of different open source components to build this thing, which has been really, really important for us — open source components that didn't exist even five years prior. [00:37:28] Bryan: That's part of why we felt 2019 was the right time to start the company. And so we started Oxide. The problems building a control plane in Node [00:37:34] Jeremy: You had mentioned that at Joyent you had tried to build this in Node. What were the issues or challenges that you had doing that? [00:37:46] Bryan: Oh boy. Yeah. I had higher hopes in 2010, I would say, when we set out on this. The problem that we had, writ large: JavaScript is really designed to allow as many people on earth to write a program as possible, which is good. That's a laudable goal. [00:38:09] Bryan: That is the goal, such as it is, of JavaScript. It's actually hard to know what the goal of JavaScript is, unfortunately, because Brendan Eich never actually wrote a book. So there isn't a canonical statement — you've got Doug Crockford and other people who've written things on JavaScript, but it's hard to know the original intent of JavaScript. [00:38:27] Bryan: The name doesn't even express original intent, right?
It was called LiveScript, and it was renamed to JavaScript during the Java frenzy of the late nineties — a name that makes no sense. There is no Java in JavaScript. That is, I think, revealing of the unprincipled mess that is JavaScript. [00:38:47] Bryan: It's very pragmatic at some level, and it makes it very easy to write software. The problem is that it's much more difficult to write really rigorous software. And this is where I should differentiate JavaScript from TypeScript — this is really what TypeScript is trying to solve. [00:39:07] Bryan: I think TypeScript is a great step forward, because TypeScript says: how can we bring some rigor to this? Yes, it's great that it's easy to write JavaScript, but that's not the only problem we want to solve. [00:39:23] Bryan: We actually wanna be able to write rigorous software, and it's okay if it's a little harder to write — that's okay if it leads to more rigorous artifacts. But in JavaScript — just a concrete example — there's nothing to prevent you from referencing a property that doesn't actually exist. [00:39:43] Bryan: So if you fat-finger a property name, you are relying on something to tell you: by the way, I think you've misspelled this. Because there is no type definition for this thing, and I don't know that you've got one that's spelled correctly and one that's spelled incorrectly — that's often undefined. So you've got this typo lurking in what you want to be rigorous software. [00:40:07] Bryan: If you don't execute that code, you won't know it's there. And then you do execute that code, and now you've got an undefined object.
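The failure mode described here — a fat-fingered property name silently evaluating to undefined — is exactly what a statically typed language rules out at compile time. A tiny sketch of the contrast in Rust (the struct and field names are made up for illustration):

```rust
// A misspelled field in Rust is a hard compile error, caught before the
// program ever runs. Names here are purely illustrative.

struct Config {
    hostname: String,
}

fn main() {
    let c = Config { hostname: String::from("db1.internal") };

    // Correct field access:
    println!("{}", c.hostname);

    // A typo'd field does not compile in Rust:
    // println!("{}", c.hostnme);  // error[E0609]: no field `hostnme` on type `Config`
    //
    // In JavaScript, the equivalent `c.hostnme` would silently evaluate
    // to `undefined` and only fail at some later, hard-to-trace point.
}
```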
And now that's either gonna be an exception or — depending on how it's handled — it can be really difficult to determine the origin of that error, of that programming error. [00:40:26] Bryan: And that is a programmer error. One of the big challenges that we had with Node is that programmer errors and operational errors — "I'm out of disk space" is an operational error — get conflated, and it becomes really hard. In fact, I think the language wanted to make it easier to just drive on in the event of all errors. [00:40:53] Bryan: Which is actually not what you wanna do if you're trying to build a reliable, robust system. So we had no end of issues. [00:41:01] Bryan: We've got a lot of experience developing rigorous systems — again, coming out of operating systems development and so on — and we brought some of that rigor, if strangely, to JavaScript. So one of the things that we did is we brought a lot of postmortem diagnosability and observability to Node. [00:41:18] Bryan: If one of our Node processes died in production, we would actually get a core dump from that process — a core dump that we could meaningfully process. So we did a bunch of wild stuff — actually wild stuff — where we could make sense of the JavaScript objects in a binary core dump. JavaScript values ease of getting started over robustness [00:41:41] Bryan: These were things that we thought were really important, and the rest of the world just looked at it like, what the hell is this? It's so out of step. The problem is that we were trying to bridge two disconnected cultures: one developing really rigorous software, designing it for production diagnosability, and the other designing software to run in the browser, for anyone to be able to liven up a webpage, right?
[00:42:10] Bryan: That's kinda the origin of LiveScript and then JavaScript. And we were kind of the only ones sitting at the intersection of those. And when you are the only ones sitting at that intersection, you're kind of fighting a community all the time. We just realized there were so many things the community wanted to do where we felt: no, no, this is gonna make software less diagnosable, it's gonna make it less robust. The Node.js split and why people left [00:42:36] Bryan: And then you realize: we're the only voice in the room, because we have desires for this language that it doesn't have for itself. And this is when you realize you're in a bad relationship with software — it's time to move on. And in fact, several years passed after we'd already kind of broken up with Node. [00:42:55] Bryan: It was a bit of an acrimonious breakup. There was a famous-slash-infamous fork of Node called io.js. This happened because the community thought that Joyent was not being an appropriate steward of Node.js and was not allowing more things to come into Node. [00:43:19] Bryan: And of course, we felt that we were being a careful steward, and we were actively resisting those things that would cut against its fitness for a production system. But that's the way the community saw it, and they forked. And I think we knew before the fork: this is not working, and we need to get this thing out of our hands. Platform as a reflection of values: Node Summit talk [00:43:43] Bryan: We're the wrong hands for this — this needs to be in a foundation. So we had gone through that breakup, and maybe it was two years after that.
A friend of mine was running Node Summit — he's unfortunately now passed away; Charles, a venture capitalist, great guy — and Charles came to me in 2017. [00:44:07] Bryan: He's like, I really want you to keynote Node Summit. And I'm like, Charles, I'm not gonna do that. I've got nothing nice to say. I'm the last person you want to keynote. He's like, oh, if you have nothing nice to say, you should definitely keynote. And I'm like, oh God, okay, here we go. [00:44:22] Bryan: He's like, no, I really want you to talk about the Joyent breakup with Node.js. I'm like, oh man. [00:44:29] Bryan: And that led to a talk that I'm really happy I gave, 'cause it was a very important talk for me personally — "Platform as a Reflection of Values" — really looking at the values that we had for Node and the values that Node had for itself. And they didn't line up. [00:44:49] Bryan: And the thing is, the values that Node had for itself and the values that we had for Node are all kind of positives, right? There's nobody in the Node community who's like, I don't want rigor — I hate rigor. It's just that if they had to choose between rigor and making the language approachable, [00:45:09] Bryan: they would choose approachability every single time. They would never choose rigor. And that was a big eye-opener. I do recommend it, if you watch this talk — [00:45:20] Bryan: because I knew the audience was gonna be filled with people who had been a part of the fork — 2014, I think, was the io.js fork. And I knew that there were some people there who had been there for the fork, and [00:45:41] Bryan: I set a little bit of a trap for the audience.
And the trap: I talked about the values that we had and the aspirations we had for Node, the aspirations that Node had for itself, and how they were different. [00:45:53] Bryan: And I'm like, look, in hindsight, a fracture was inevitable. And in 2014 there was finally a fracture. And do people know what happened in 2014? If you listen to that talk, everyone almost says in unison: io.js. I'm like, oh right, io.js. Right. That's actually not what I was thinking of. [00:46:19] Bryan: And I go to the next slide, and it's a tweet from a guy named TJ Holowaychuk, who was the most prolific contributor to Node. It was his tweet, also in 2014 — before the io.js fork — explaining that he was leaving Node and that he was going to Go. And if you turn the volume all the way up, you can hear the audience gasp. [00:46:41] Bryan: And it's just delicious, because the community had never really confronted why TJ left. And I went through a couple of folks — Felix, a bunch of other early Node folks — [00:47:09] Bryan: who were there in 2010 and were leaving in 2014, and they were going primarily to Go. And they were going because they were sick of the same things that we were sick of. They had hit the same things that we had hit, and they were frustrated. I really do believe this: platforms do reflect their own values. And when you are making a software decision, you are selecting values. [00:47:26] Bryan: You should select values that align with the values that you have for that software. That's way more important than other things that people look at. I think people look at, for example, quote-unquote community size way too frequently. Community size is, eh — maybe it can be fine. [00:47:44] Bryan: I've been in very large communities — Node.
I've been in super small open source communities — illumos, and a bunch of others. There are strengths and weaknesses to both approaches, just as there's a strength to being in a big city versus a small town. Me personally, I'll take the small community more or less every time, because the small community is almost always self-selecting based on values — for the same reason that I like working at small companies or on small teams. [00:48:11] Bryan: There's a lot of value to be had in a small community. That's not to say that large communities are valueless. But — long answer to your question of where things went south with Joyent and Node — they went south because the values that we had and the values the community had didn't line up. And that was a very educational experience, as you might imagine. [00:48:33] Jeremy: Yeah. And given that you mentioned how, because of those values, some people moved from Node to Go — and in the end, for much of what Oxide is building, you ended up using Rust — what would you say are the values of Go and Rust, and how did you end up choosing Rust? Go's decisions regarding generics, versioning, compilation speed priority [00:48:56] Bryan: Yeah. Well, I understand why people moved from Node to Go; Go, to me, was kind of a lateral move. There were a bunch of things — Go was still garbage collected, which I didn't like. Go also is very strange in terms of [00:49:17] Bryan: these kind of autocratic decisions that are very bizarre. Generics is kind of a famous one, right? Go, as a point of principle, didn't have generics — even though the innards of Go itself did have generics. It's just that you, a Go user, weren't allowed to have them.
[00:49:35] Bryan: And there was an old cartoon years and years ago about how, when a technologist tells you that something is technically impossible, it actually means "I don't feel like it." And there was a certain degree of "generics are technically impossible" in Go — and it's like, hey, actually, there they are. [00:49:51] Bryan: I just think the arguments against generics were kind of disingenuous — and indeed, they ended up adopting generics. And then there's some super weird stuff: they're very anti-assertion, which is like — what? How is someone against assertions? It doesn't even make any sense. But no, there's a whole screed on it: nope, we're against assertions. And against versioning. [00:50:10] Bryan: Rob Pike has kind of famously been like, you should always just run at the latest commit. And you're like, does that make sense? I mean, we actually build things. [00:50:26] Bryan: So there are a bunch of things like that where you're just like, okay, this is just exhausting. There are some things about Go that are great, and plenty of other things that I'm just not a fan of. In the end, Go cares a lot about compile time — a very quick compile time is super important for Go. [00:50:44] Bryan: And I'm like, okay, but compile time is not — it's not unimportant, it doesn't have zero importance, but I've got other things that are lots more important than that. What I really care about is a high-performing artifact. I wanted garbage collection out of my life.
Don't think garbage collection has good trade-offs [00:51:00] Bryan: I gotta tell you, garbage collection to me is an embodiment of this larger problem of where you put cognitive load in the software development process. [00:51:21] Bryan: Garbage collection is right for plenty of other people and the software that they wanna develop. But for me and the software that I wanna develop — infrastructure software — I don't want garbage collection, because I can solve the memory allocation problem. I know when I'm done with something or not. Whether that's in C — it's really not that hard to not leak memory in a C-based system. [00:51:44] Bryan: And you can give yourself a lot of tooling that allows you to diagnose where memory leaks are coming from. So that is a solvable problem. There are other challenges with that, but when you are developing a really sophisticated system that is using garbage collection, [00:51:59] Bryan: you spend as much time trying to dork with the garbage collector to convince it to collect the thing that you know is garbage. You're like: I've got this thing, I know it's garbage, and now I need to use these tips and tricks to get the garbage collector to collect it. It feels like every Java performance issue ends with some -XX flag — use the other garbage collector, whatever one you're using, use a different one, a different approach. [00:52:23] Bryan: So to me, you're in the worst of all worlds: the reason garbage collection is helpful is that the programmer doesn't have to think at all about this problem. But now you're actually dealing with these long pauses in production. [00:52:38] Bryan: You're dealing with all these other issues where actually you need to think a lot about it.
And it's witchcraft — it's this black box that you can't see into. So it's like: what problem have we solved, exactly? So the fact that Go had garbage collection — no, I do not want that. And then you get all the other weird fatwas and everything else. [00:52:57] Bryan: No, thank you. Go is a "no, thank you" for me. I get why people like it or use it, but that was not gonna be it. Choosing Rust [00:53:04] Bryan: I'm like, I want C — but there are things I didn't like about C too. I was looking for something that was gonna give me the deterministic kind of artifact that I got out of C, but I wanted library support, and C is tough because it's all convention. There are just a bunch of other things that are thorny. And I remember thinking vividly in 2018: it's Rust or bust. Ownership model, algebraic types, error handling [00:53:28] Bryan: I'm gonna go into Rust, and I hope I like it — because if it's not this, I'm gonna go back to C. I'm literally trying to figure out what the language is for the back half of my career. And I did what a lot of people were doing at that time, and have been doing since: really getting into Rust, really learning it, appreciating the difference in the model — the ownership model people talk about. [00:53:54] Bryan: That's obviously very important. But it was the error handling that blew me away, and the idea of algebraic types — I'd never really had algebraic types. And error handling is one of those things you really appreciate: how do you deal with a function that can either succeed and return something, or fail? The way C deals with that is bad, with these sentinel values for errors.
[00:54:27] Bryan: Does negative one mean success? Does negative one mean failure? Does zero mean failure? In some C functions, zero means failure; traditionally in Unix, zero means success. And what if you wanna return a file descriptor? Then it's like: zero through positive N will be a valid result, negative numbers will be errors. [00:54:44] Bryan: And is it negative one with errno set, or is it a negative number that is itself the error? That's all convention — people do all those different things — and it's easy to get wrong, easy to have bugs, and it can't be statically checked. And then what Go says is: well, you're gonna have two return values, and then you're gonna have to constantly check all of these all the time — which is also kind of gross. JavaScript is like: hey, let's throw an exception if we see an error. [00:55:15] Bryan: There are a bunch of reasons I don't like that. And then you look at what Rust does, where it's like: no, no, no — we're gonna have these algebraic types, which is to say this thing can be a this or a that, but it has to be one of them. And by the way, you don't get to process this thing until you conditionally match on it. [00:55:35] Bryan: You're gonna have to have a pattern match on this thing to determine if it's a this or a that. And the result type is a generic: it's gonna be either an Ok that contains the thing you wanna return, or an Err that contains your error — and it forces your code to deal with that.
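The contrast Bryan draws — C's in-band sentinel returns versus Rust's Result — in a short sketch. The file-descriptor-flavored function below is purely illustrative, not a real syscall wrapper:

```rust
// In C, a function like open(2) overloads one integer: >= 0 is a file
// descriptor, -1 means failure with errno set on the side -- all
// convention, invisible to the compiler. Rust's Result makes success
// and failure one algebraic type that the caller must match on.

#[derive(Debug, PartialEq)]
enum FdError {
    NotFound,
}

// Hypothetical lookup: returns a "file descriptor" for known paths.
fn open_fake(path: &str) -> Result<i32, FdError> {
    match path {
        "/dev/null" => Ok(3),
        _ => Err(FdError::NotFound),
    }
}

fn main() {
    // The compiler will not let us use the fd without first
    // distinguishing Ok from Err -- the "forced match" Bryan describes.
    match open_fake("/dev/null") {
        Ok(fd) => println!("got fd {}", fd),
        Err(e) => println!("open failed: {:?}", e),
    }
}
```

The point is not the syntax but where the check happens: the set of outcomes is encoded in the return type, so forgetting to handle the error branch is a compile error rather than a latent production bug.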
[00:55:57] Bryan: And what that does is it shifts the cognitive load from the person operating this thing in production to the developer, during development. I love that shift — that shift to me is really important, and that's what I was missing. That's what Rust gives you. [00:56:23] Bryan: Rust forces you to think about your code as you write it, but as a result, you have an artifact that is much more supportable, much more sustainable, and much faster. Prefer to frontload cognitive load during development instead of at runtime [00:56:34] Jeremy: Yeah, it sounds like you would rather take the time during development to think about these issues, because dealing with garbage collection or error handling at runtime, when you're trying to solve a problem, is much more difficult than having dealt with it to start with. [00:56:57] Bryan: Yeah, absolutely. And I just think: if it's infrastructure software, the question you should ask when you're writing software is, how long is this software gonna live? How many people are gonna use this software? And if you are writing an operating system, the answer is: this thing you're writing is gonna live for a long time. [00:57:18] Bryan: Just look at plenty of aspects of the system that have been around for decades — it's gonna live for a long time, and many, many people are gonna use it. Why would we not expect the people writing that software to take on more cognitive load when they're writing it, to give us something that's gonna be a better artifact? [00:57:38] Bryan: Now conversely, you're like: hey, I kind of don't care about this. I just wanna see if this whole thing works. I'm just stringing this together.
Like, no, the software will be lucky if it survives until tonight, but then like, who cares? Yeah. Yeah. [00:57:52] Bryan: Garbage collect. You know, if you're prototyping something, whatever. And this is why you really do get, you know, different choices, different technology choices, depending on the way that you wanna solve the problem at hand. And for the software that I wanna write, I do like that cognitive load that is upfront. With LLMs maybe you can get the benefit of the robust artifact with less cognitive load [00:58:10] Bryan: Um, and although I think the thing that is really wild, the twist that I don't think anyone really saw coming, is that in an LLM age, that cognitive load upfront almost needs an asterisk on it because so much of it can be assisted by an LLM. And now, I mean, I would like to believe, and maybe this is me being optimistic, that in the LLM age we will see, I mean, Rust is a great fit for the LLM age because the LLM itself can get a lot of feedback about whether the software that's written is correct or not. [00:58:44] Bryan: Much more so than you can for other environments. [00:58:48] Jeremy: Yeah, that is an interesting point in that I think when people first started trying out the LLMs to code, it was really good at these maybe looser languages like Python or JavaScript, and initially wasn't so good at something like Rust. But it sounds like as that improves, if it can write it, then because of the rigor or the memory management or the error handling that the language is forcing you to do, it might actually end up being a better choice for people using LLMs. [00:59:27] Bryan: Absolutely. It gives you more certainty in the artifact that you've delivered. I mean, you know a lot about a Rust program that compiles correctly. There are certain classes of errors that you don't have, um, that you actually don't know about in a C program or a Go program or a JavaScript program. [00:59:46] Bryan: I think that's gonna be really important. I think we are on the cusp, maybe we've already seen it, of this kind of great bifurcation in the software that we write.

AWS Morning Brief
Your Account Name Was There All Along (It Wasn't)

AWS Morning Brief

Play Episode Listen Later Feb 9, 2026 6:12


AWS Morning Brief for the week of February 9th, with Corey Quinn. Links:
• Change the server-side encryption type of Amazon S3 objects
• Announcing memory-optimized instance bundles for Amazon Lightsail
• Amazon RDS now provides an enhanced console experience to connect to a database
• AWS Multi-party approval now requires one-time password verification for voting
• AWS Management Console now displays Account Name on the Navigation bar
• Structured outputs now available in Amazon Bedrock
• Amazon EC2 C8id, M8id, and R8id instances with up to 22.8 TB local NVMe storage are generally available
• AWS IAM Identity Center now supports multi-Region replication for AWS account access and application use
• Trigger AWS Lambda functions from Amazon RDS for SQL Server database events
• Amazon CloudFront now supports mTLS authentication to origins
• Bevar Ukraine: Empowering Ukrainian refugees with AI-powered support on AWS
• Security Findings in SageMaker Python SDK

Crazy Wisdom
Episode #522: The Hardware Heretic: Why Everything You Think About FPGAs Is Backwards

Crazy Wisdom

Play Episode Listen Later Jan 12, 2026 53:08


In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Peter Schmidt Nielsen, who is building FPGA-accelerated servers at Saturn Data. The conversation explores why servers need FPGAs, how these field-programmable gate arrays work as "IO expanders" for massive memory bandwidth, and why they're particularly well-suited for vector database and search applications. Peter breaks down the technical realities of FPGAs - including why they "really suck" in many ways compared to GPUs and CPUs - while explaining how his company is leveraging them to provide terabyte-per-second bandwidth to 1.3 petabytes of flash storage. The discussion ranges from distributed systems challenges and the CAP theorem to the hardware-software relationship in modern computing, offering insights into both the philosophical aspects of search technology and the nuts-and-bolts engineering of memory controllers and routing fabrics.

For more information about Peter's work, you can reach him on Twitter at @PTRSCHMDTNLSN or find his website at saturndata.com.

Timestamps
00:00 Introduction to FPGAs and Their Role in Servers
02:47 Understanding FPGA Limitations and Use Cases
05:55 Exploring Different Types of Servers
08:47 The Importance of Memory and Bandwidth
11:52 Philosophical Insights on Search and Access Patterns
14:50 The Relationship Between Hardware and Search Queries
17:45 Challenges of Distributed Systems
20:47 The CAP Theorem and Its Implications
23:52 The Evolution of Technology and Knowledge Management
26:59 FPGAs as IO Expanders
29:35 The Trade-offs of FPGAs vs. ASICs and GPUs
32:55 The Future of AI Applications with FPGAs
35:51 Exciting Developments in Hardware and Business

Key Insights
1. FPGAs are fundamentally "crappy ASICs" with serious limitations - Despite being programmable hardware, FPGAs perform far worse than general-purpose alternatives in most cases. A $100,000 high-end FPGA might only match the memory bandwidth of a $600 gaming GPU. They're only valuable for specific niches like ultra-low latency applications or scenarios requiring massive parallel I/O operations, making them unsuitable for most computational workloads where CPUs and GPUs excel.
2. The real value of FPGAs lies in I/O expansion, not computation - Rather than using FPGAs for their processing power, Saturn Data leverages them primarily as cost-effective ways to access massive amounts of DRAM controllers and NVMe interfaces. Their server design puts 200 FPGAs in a 2U enclosure with 1.3 petabytes of flash storage and terabyte-per-second read bandwidth, essentially using FPGAs as sophisticated I/O expanders.
3. Access patterns determine hardware performance more than raw specs - The way applications access data fundamentally determines whether specialized hardware will provide benefits. Applications that do sparse reads across massive datasets (like vector databases) benefit from Saturn Data's architecture, while those requiring dense computation or frequent inter-node communication are better served by traditional hardware. Understanding these patterns is crucial for matching workloads to appropriate hardware.
4. Distributed systems complexity stems from failure tolerance requirements - The difficulty of distributed systems isn't inherent but depends on what failures you need to tolerate. Simple approaches that restart on any failure are easy but unreliable, while Byzantine fault tolerance (like Bitcoin) is extremely complex. Most practical systems, including banks, find middle ground by accepting occasional unavailability rather than trying to achieve perfect consistency, availability, and partition tolerance simultaneously.
5. Hardware specialization follows predictable cycles of generalization and re-specialization - Computing hardware consistently follows "Makimoto's Wave" - specialized hardware becomes more general over time, then gets leapfrogged by new specialized solutions. CPUs became general-purpose, GPUs evolved from fixed graphics pipelines to programmable compute, and now companies like Etched are creating transformer-specific ASICs. This cycle repeats as each generation adds programmability until someone strips it away for performance gains.
6. Memory bottlenecks are reshaping the hardware landscape - The AI boom has created severe memory shortages, doubling costs for DRAM components overnight. This affects not just GPU availability but creates opportunities for alternative architectures. When everyone faces higher memory costs, the relative premium for specialized solutions like FPGA-based systems becomes more attractive, potentially shifting the competitive landscape for memory-intensive applications.
7. Search applications represent ideal FPGA use cases due to their sparse access patterns - Vector databases and search workloads are particularly well-suited to FPGA acceleration because they involve searching through massive datasets with sparse access patterns rather than dense computation. These applications can effectively utilize the high bandwidth to flash storage and parallel I/O capabilities that FPGAs provide, making them natural early adopters for this type of specialized hardware architecture.

Didactic Mind
Didactic Mind, Ep 123: Force of Habit

Didactic Mind

Play Episode Listen Later Jan 4, 2026 62:55


The Didactic Mind podcast is BACK with an all-new episode - the first in nearly 6 months - to kick off 2026. This podcast is all about how we avoid the stupidity and nonsense of "New Year's Resolutions", and instead focus on building life-affirming, results-building, course-sustaining habits that get us to our chosen destinations. I lay out a straightforward, logical, sensible set of ideas that will help you achieve your goals for the year - whatever those might be - and provide examples showing how you make these goals a reality.

Support the War College
If you like what I do, and you would like to express your appreciation, please feel free to do so here via my Buy Me a Coffee page. All funds go to upkeep of the site and podcast (well, whatever is left over after buying good Scotch, obviously…)

Protect Yourself From Big Tech
I make some pretty incendiary statements in this podcast, and in most of my podcasts. I can only do so because I take steps to protect myself from the Big Tech companies, and preserve my identity. You need to do the same – this is no longer optional, because if you don't, the gatekeepers WILL come for your head. If you don't know where to start, then I've got you covered right here with this post. Here are the specific steps that you can take:
• Make sure that your web traffic is safe and protected from prying eyes using a VPN – click here to get a massive 80% OFF on a 24-month subscription with Surfshark;
• Be sure also to check out Incogni, the new data and privacy management tool offered by Surfshark, which simply works behind the scenes to ensure that no malign actors can take advantage of your data ever again;
• Another solid VPN option for you is Atlas VPN, brought to you by the same company that creates NordVPN;
• The best SSD drive that you can get right now, with blazing fast speeds and near-native storage capabilities, is probably the SanDisk Extreme 1TB Portable SSD with NVMe technology – I bought this myself to keep a moving backup of all of my files, it's the size of a credit card, and it's absolutely superb;

Build Your Platform
• Get yourself a proper domain for your site or business with Namecheap;
• Put your site onto a shared hosting service using A2Hosting for the fastest, most secure, and stable hosting platform around – along with unlimited email accounts of unlimited size;
• Create beautiful websites with amazing, feature-rich content using Divi from Elegant Themes;

Stand for Western Civilisation
• Buy yourself a proper Bible;
• Get your Castalia Library books here;
• Buy yourself a proper knife for personal defence;

Telecom Reseller
OWC Expands Performance and Flexibility with Thunderbolt 5, Advanced Docking, and SoftRAID 8.6, Podcast

Telecom Reseller

Play Episode Listen Later Dec 16, 2025


Doug Green, Publisher of Technology Reseller News, sat down with Larry O'Connor, Founder of Other World Computing (OWC), for a follow-up conversation focused on OWC's latest product innovations and the company's long-standing philosophy of helping customers get more life, performance, and reliability from their technology. OWC is an ASCDI member and has built a reputation for designing solutions that “just work,” allowing users to focus on their workflows rather than managing infrastructure. O'Connor explained that OWC's roots in memory and storage upgrades naturally evolved into leadership in Thunderbolt connectivity, direct-attached storage, and enterprise NAS platforms. Today, OWC technology is deeply embedded in professional media, creative, and enterprise environments, often powering workflows behind the scenes. “Our goal is to be the boring part,” O'Connor said, noting that once OWC products are installed, they fade into the background while consistently delivering performance. A key focus of the discussion was OWC's expanded Thunderbolt 5 lineup, including the new StudioStack, which combines high-performance NVMe and spinning storage with additional downstream Thunderbolt 5 ports. Designed for systems with limited expansion options, StudioStack effectively turns a single Thunderbolt port into a powerful external PCI-style expansion point, supporting high-resolution displays, additional storage, and peripherals without sacrificing performance. O'Connor also highlighted OWC's new dual 10-gigabit Thunderbolt network dock, built to address specialized but growing needs in media, broadcast, and enterprise workflows. With two fully independent 10GbE ports, the dock enables network segmentation, bonded throughput, and dedicated traffic paths—capabilities that previously required more complex and expensive setups. “It's a game changer for customers who need predictable, high-bandwidth networking off a single cable,” he said. 
The conversation concluded with an update on SoftRAID 8.6, OWC's flagship software RAID solution, now enhanced for Windows 11 and the latest macOS. O'Connor emphasized SoftRAID's unique cross-platform interoperability between Mac and Windows, along with its ability to segment drives into multiple RAID levels for optimized performance and longevity—capabilities not possible with traditional hardware RAID. These innovations reflect OWC's continued commitment to performance, repairability, and long-term value across the technology lifecycle. For more information, visit https://www.owc.com/.

DMRadio Podcast
Breaking the IO Barrier: NVMe, Linux & the Future of Data

DMRadio Podcast

Play Episode Listen Later Dec 4, 2025 51:06


Join this DM Radio episode as Eric Kavanagh interviews David Flynn of Hammerspace and Mark Madsen of Third Nature. Together they dig into the latest breakthroughs in high-performance computing and data architecture, explore the rise of NVMe technology, Hammerspace's impressive IO500 benchmark results, and why Linux continues to anchor modern data platforms. Learn about the evolution from 2D to 3D file-system architectures and how standards-based, turnkey data layers are powering the next generation of AI and enterprise workloads.

Technology Tap
A+ Fundamentals: Mobile Tech Era | CompTIA Study Guide Chapter 9

Technology Tap

Play Episode Listen Later Dec 2, 2025 30:28 Transcription Available


professorjrod@gmail.com

Learn the essential IT skills you need to pass your CompTIA exams in mobile tech support. A detailed guide to the mobile era for tech exam prep.

Phones aren't just gadgets anymore—they're identity, payments, photos, and the keys to work. We take you on a clear, practical tour of the mobile landscape that A+ technicians need to master, from touch layers and camera flex cables to SoCs, batteries, and the accessories that turn a slab of glass into a full workstation. Along the way, we connect the dots between hardware and human stakes: why a loose port mimics a dead battery, how a single certificate blocks corporate Wi‑Fi, and what swollen cells tell you about urgency and safety.

We walk through laptop displays and storage—LCD vs OLED, CCFL vs LED backlights, SATA vs NVMe—and explain how soldered RAM and SSDs affect upgrade paths and purchasing advice. Then we map the wireless terrain: Wi‑Fi 5, Wi‑Fi 6, and Wi‑Fi 7 tradeoffs; Bluetooth profiles like A2DP and HID; NFC's tiny range with outsized impact; and mobile broadband with APN, hotspot, and plan pitfalls. On the software side, we compare iOS and Android security models, sandboxing, permissions, and backup strategies; we also show how iCloud, Google, and Exchange sync turn a reset from disaster into a routine fix.

Security gets the spotlight: strong lock combos, malware symptoms that masquerade as battery or data issues, malicious QR codes, and why remote wipe is the right call for lost corporate devices. We share a tested troubleshooting playbook—start with simple checks like rotation lock, clean charging ports before replacing batteries, reseat camera cables before swapping modules, and confirm enterprise certs before blaming antennas.
Finally, we double down on ethics and workflow: back up first, label everything, respect privacy, and return devices better than they arrived.

If you care about faster fixes, safer data, and smarter mobile support, you'll find ready-to-use steps and exam-ready insights here. Subscribe, share with a friend who's studying for A+, and leave a review telling us the toughest mobile issue you want solved next.

Support the show
Art by Sarah/Desmond
Music by Joakim Karud
Little Chacha Productions
Juan Rodriguez can be reached at:
TikTok @ProfessorJrod
ProfessorJRod@gmail.com
@Prof_JRod
Instagram ProfessorJRod

FLASH DIARIO de El Siglo 21 es Hoy
⚡ The Steam Machine and its price ⚡

FLASH DIARIO de El Siglo 21 es Hoy

Play Episode Listen Later Nov 24, 2025 8:41 Transcription Available


The Steam Machine will not be subsidized. It could cost 700 to 900 dollars. It is a PC for your television, small and quiet.

Valve confirms the Steam Machine will ship without a subsidy and signals a high price compared with today's consoles. By Félix Riaño @LocutorCo.

Valve is an American company known for creating Steam, a digital platform where millions of people buy and play video games on their computers. Steam works as a store and also as a personal library: you keep your games, install them wherever you want, and they update themselves. Now Valve is preparing the Steam Machine, a small computer for the television that runs its SteamOS system and promises a direct from-the-couch experience.

Everything sounded like a direct rival to consoles such as PlayStation or Xbox, but we now know the price will be higher than many expected. And that raises the question that will accompany us through the whole episode: at what point does such an expensive device end up competing with the series and movies we watch to pass the time?

If you want to support this independent project and receive special benefits, you can do so here: https://www.spreaker.com/podcast/flash-diario-de-el-siglo-21-es-hoy--5835407/support

Valve showed the Steam Machine as a compact cube about 15 centimeters tall, wide, and deep. It is the size of a small gift box, but inside it carries a six-core, twelve-thread AMD Zen 4 processor that can reach 4.8 GHz, plus an AMD RDNA 3 graphics unit with 28 compute units. Everything is optimized to run quietly and to pair with a television at the press of a button on the controller.

SteamOS, the Linux-based operating system created by Valve, focuses on a couch experience: turn it on, pick a game, and play. Nothing more. The idea is that this compact PC offers features a home-built computer does not easily provide, such as powering on from the controller, Bluetooth ready for four controllers, and a DisplayPort output that can drive 4K at 240 Hz. It comes with a microSD slot for moving games between the Steam Machine, Steam Deck, and Steam Frame as if they were interchangeable cartridges.

Here comes the twist: the price could land between 700 and 900 dollars, according to specialized outlets. That is a lot for something that looks like a console, even if it is a compact PC. The reason is clear: Valve does not plan to sell this device at a loss. Consoles are usually priced low because the money is made back on game sales and subscriptions, but that model does not work the same way on PC, where players are free to buy wherever they prefer.

Valve's engineers explained that the Steam Machine will cost about the same as a home-built PC with similar performance. That puts the device in a strange spot: more expensive than a console, but less customizable than a full-size PC. On top of that, RAM and storage prices are rising, which affects the final price. This detail worries those who hoped for a machine that would replace their console without a big investment.

A former Xbox executive publicly suggested that Valve should let third parties build their own Steam Machines running SteamOS in different configurations to offer cheaper options. Valve clarified that this is already possible, and devices like the Lenovo Legion Go S prove it. Even so, few manufacturers take that path, because it means investing in design, support, and constant updates.

Valve argues that the Steam Machine's value lies in the experience: silence, a small footprint, powering on from the controller, compatibility with thousands of Steam games, and the ability to move microSD cards full of games between several devices. All of that points to a console-PC built for people who want comfort without dealing with cables or big towers. The catch is that the comfort carries a high cost, and that is where the debate lies.

The machine comes with two storage options, 512 GB or 2 TB of NVMe, both complemented by high-speed microSD support. It uses Wi-Fi 6E, Bluetooth 5.3, and an antenna array designed for stable connections with several controllers. It has DisplayPort 1.4 and HDMI 2.0, enough for smooth 4K. All of it in a 2.6 kg body, lighter than many current consoles.

Valve is betting on a Steam Machine without a subsidy and with compact-PC pricing. It is small, quiet, and designed for comfortable couch gaming. The cost may surprise, but it may also open a new space between console and PC. Would you pay 800 dollars?

Follow the podcast at Flash Diario for more stories every day.

Bibliography: Wccftech, Tom's Hardware, PC Gamer, IGN, Wired, Engadget

2.5 Admins
2.5 Admins 273: Reliability Tracking

2.5 Admins

Play Episode Listen Later Nov 13, 2025 25:58


Allan tells us about the recent OpenZFS Summit including inconsistent JBODs, more details about mixed disk sizes in ZFS with AnyRaid, an upcoming standard that allows you to keep using partially dead hard drives, Seagate's roadmap for 50 and 100 TB drives, and NVMe connected mechanical drives. Plus using a separate mini PC for work. […]

Late Night Linux All Episodes
2.5 Admins 273: Reliability Tracking

Late Night Linux All Episodes

Play Episode Listen Later Nov 13, 2025 25:58


Allan tells us about the recent OpenZFS Summit including inconsistent JBODs, more details about mixed disk sizes in ZFS with AnyRaid, an upcoming standard that allows you to keep using partially dead hard drives, Seagate's roadmap for 50 and 100 TB drives, and NVMe connected mechanical drives. Plus using a separate mini PC for work.... Read More

2.5 Admins
2.5 Admins 272: NVMe Surprise

2.5 Admins

Play Episode Listen Later Nov 6, 2025 24:34


Why you should seriously consider buying refurbished hard drives, why drives might be lasting longer than they once did, Jim's M.2 NVMe drive died at an inopportune moment, using multiple partitions on disks with ZFS.   Plugs Support us on patreon and get an ad-free RSS feed with early episodes sometimes Advanced ZFS Dataset Management: […]

Late Night Linux All Episodes
2.5 Admins 272: NVMe Surprise

Late Night Linux All Episodes

Play Episode Listen Later Nov 6, 2025 24:34


Why you should seriously consider buying refurbished hard drives, why drives might be lasting longer than they once did, Jim's M.2 NVMe drive died at an inopportune moment, using multiple partitions on disks with ZFS.   Plugs Support us on patreon and get an ad-free RSS feed with early episodes sometimes Advanced ZFS Dataset Management:... Read More

Geekazine
AGeen 80 Gbps NVMe Enclosure: Worth Buying?

Geekazine

Play Episode Listen Later Oct 31, 2025 20:46 Transcription Available


Last Updated on October 31, 2025 1:27 pm by Jeffrey Powers. When you think portable NVMe enclosures, speed and compatibility are everything. The AGeen 80 Gbps NVMe Enclosure promises 80 Gbps transfer rates with USB4 and Thunderbolt 5. But is it really the all-rounder AGeen claims? Let's break it down. Get […] The post AGeen 80 Gbps NVMe Enclosure: Worth Buying? appeared first on Geekazine.

Oracle University Podcast
Cloud Data Centers: Core Concepts - Part 2

Oracle University Podcast

Play Episode Listen Later Oct 14, 2025 14:16


Have you ever wondered where all your digital memories, work projects, or favorite photos actually live in the cloud?   In this episode, Lois Houston and Nikita Abraham are joined by Principal OCI Instructor Orlando Gentil to discuss cloud storage.   They explore how data is carefully organized, the different ways it can be stored, and what keeps it safe and easy to find.   Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------   Episode Transcript:    00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hey there! Last week, we spoke about the differences between traditional and cloud data centers, and covered components like CPU, RAM, and operating systems. If you haven't listened to the episode yet, I'd suggest going back and listening to it before you dive into this one.  Nikita: Joining us again is Orlando Gentil, Principal OCI Instructor at Oracle University, and we're going to ask him about another fundamental concept: storage. 01:04 Lois: That's right, Niki. Hi Orlando! Thanks for being with us again today. You introduced cloud data centers last week, but tell us, how is data stored and accessed in these centers?  Orlando: At a fundamental level, storage is where your data resides persistently. 
Data stored on a storage device is accessed by the CPU and, for specialized tasks, the GPU. The RAM acts as a high-speed intermediary, temporarily holding data that the CPU and the GPU are actively working on. This cyclical flow ensures that applications can effectively retrieve, process, and store information, forming the backbone for our computing operations in the data center. 01:52 Nikita: But how is data organized and controlled on disks? Orlando: To effectively store and manage data on physical disks, a structured approach is required, which is defined by file systems and permissions. The process begins with disks. These are the raw physical storage devices. Before data can be written to them, disks are typically divided into partitions. A partition is a logical division of a physical disk that acts as if it were a separate physical disk. This allows you to organize your storage space and even install multiple operating systems on a single drive. Once partitions are created, they are formatted with a file system. 02:40 Nikita: Ok, sorry but I have to stop you there. Can you explain what a file system is? And how is data organized using a file system?  Orlando: The file system is the method and the data structure that an operating system uses to organize and manage files on storage devices. It dictates how data is named, stored, retrieved, and managed on the disk, essentially providing the roadmap for data. Common file systems include NTFS for Windows and ext4 or XFS for Linux. Within this file system, data is organized hierarchically into directories, also known as folders. These containers help to logically group related files, which are the individual units of data, whether they are documents, images, videos, or applications. Finally, overseeing this entire organization are permissions.  03:42 Lois: And what are permissions?
Orlando: Permissions define who can access specific files and directories and what actions they are allowed to perform-- for example, read, write, or execute. This access control, often managed by user, group, and other permissions, is fundamental for security, data integrity, and multi-user environments within a data center.  04:09 Lois: Ok, now that we have a good understanding of how data is organized logically, can we talk about how data is stored locally within a server?   Orlando: Local storage refers to storage devices directly attached to a server or computer. The three common types are hard disk drives, solid state drives, and NVMe drives. Hard disk drives are traditional storage devices using spinning platters to store data. They offer large capacity at a lower cost per gigabyte, making them suitable for bulk data storage when high performance isn't the top priority. Unlike hard disks, solid state drives use flash memory to store data, similar to USB drives but on a larger scale. They provide significantly faster read and write speeds, better durability, and lower power consumption than hard disks, making them ideal for operating systems, applications, and frequently accessed data. Non-Volatile Memory Express is a communication interface specifically designed for solid state drives that connects directly to the PCI Express bus. NVMe offers even faster performance than traditional SATA-based solid state drives by reducing latency and increasing bandwidth, making it the top choice for demanding workloads that require extreme speed, such as high-performance databases and AI applications. Each type serves different performance and cost requirements within a data center. While local storage is essential for immediate access, data centers also heavily rely on storage that isn't directly attached to a single server.  05:59 Lois: I'm guessing you're hinting at remote storage. Can you tell us more about that, Orlando?
Orlando: Remote storage refers to data storage solutions that are not physically connected to the server or client accessing them. Instead, they are accessed over the network. This setup allows multiple clients or servers to share access to the same storage resources, centralizing data management and improving data availability. This architecture is fundamental to cloud computing, enabling vast pools of shared storage that can be dynamically provisioned to various users and applications. 06:35 Lois: Let's talk about the common forms of remote storage. Can you run us through them? Orlando: One of the most common and accessible forms of remote storage is Network Attached Storage or NAS. NAS is a dedicated file storage device connected to a network that allows multiple users and client devices to retrieve data from a centralized disk capacity. It's essentially a server dedicated to serving files. A client connects to the NAS over the network, and the NAS then provides access to files and folders. NAS devices are ideal for scenarios requiring shared file access, such as document collaboration, centralized backups, or serving media files, making them very popular in both home and enterprise environments. While NAS provides file-level access over a network, some applications, especially those requiring high performance and direct block-level access to storage, need a different approach.  07:38 Nikita: And what might this approach be?  Orlando: Internet Small Computer System Interface, which provides block-level storage over an IP network. iSCSI or Internet Small Computer System Interface is a standard that allows the SCSI protocol, traditionally used for local storage, to be sent over IP networks. Essentially, it enables servers to access storage devices as if they were directly attached even though they are located remotely on the network.
This means it can leverage standard Ethernet infrastructure, making it a cost-effective solution for creating high-performance, centralized storage accessible over an existing network. It's particularly useful for server virtualization and database environments where block-level access is preferred. While iSCSI provides block-level access over standard IP, for environments demanding even higher performance, lower latency, and greater dedicated throughput, a specialized network is often deployed.  08:47 Nikita: And what's this specialized network called? Orlando: Storage Area Network or SAN. A Storage Area Network or SAN is a high-speed network specifically designed to provide block-level access to consolidated shared storage. Unlike NAS, which provides file-level access, a SAN presents storage volumes to servers as if they were local disks, allowing for very high performance for applications like databases and virtualized environments. While iSCSI SANs use Ethernet, many high-performance SANs utilize Fibre Channel for even faster and more reliable data transfer, making them a cornerstone of enterprise data centers where performance and availability are paramount. 09:42 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest technology. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 10:26 Nikita: Welcome back! Orlando, are there any other popular storage paradigms we should know about? Orlando: Beyond file-level and block-level storage, cloud environments have popularized another flexible and highly scalable storage paradigm, object storage.  
Object storage is a modern approach to storing data, treating each piece of data as a distinct, self-contained unit called an object. Unlike file systems that organize data in a hierarchy or block storage that breaks data into fixed-size blocks, object storage manages data as flat, unstructured objects. Each object is stored with a unique identifier and rich metadata, making it highly scalable and flexible for massive amounts of data. This service handles the complexity of storage, providing access to vast repositories of data. Object storage is ideal for use cases like cloud-native applications, big data analytics, content distribution, and large-scale backups thanks to its immense scalability, durability, and cost effectiveness. While object storage is excellent for frequently accessed data in rapidly growing data sets, sometimes data needs to be retained for very long periods but is accessed infrequently. For these scenarios, a specialized low-cost storage tier, known as archive storage, comes into play. 12:02 Lois: And what's that exactly? Orlando: Archive storage is specifically designed for long-term backup and retention of data that you rarely, if ever, access. This includes critical information, like old records, compliance data that needs to be kept for regulatory reasons, or disaster recovery backups. The key characteristic of archive storage is an extremely low cost per gigabyte, achieved by optimizing for infrequent access rather than speed. Historically, tape backup systems were the common solution for archiving, where data from a data center is moved to tape. In modern cloud environments, this has evolved into cloud backup solutions. Cloud-based archiving leverages highly cost-effective cloud storage tiers that are purpose-built for long-term retention, providing a scalable and often more reliable alternative to physical tapes. 
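The flat, identifier-plus-metadata layout Orlando describes can be modeled in a few lines of Python. This toy `ObjectStore` is purely illustrative -- the class and its method names are invented for this sketch and are not any cloud vendor's API:

```python
import hashlib

class ObjectStore:
    """Toy model of object storage: a flat namespace of self-contained
    objects, each carrying data, a unique identifier, and rich metadata."""

    def __init__(self):
        self._objects = {}  # flat: no directory hierarchy, just keys

    def put(self, key, data, **metadata):
        # Each object bundles its payload with metadata and is addressed
        # by a unique identifier (here, a content hash of the data).
        obj_id = hashlib.sha256(data).hexdigest()
        self._objects[key] = {"id": obj_id, "data": data, "metadata": metadata}
        return obj_id

    def get(self, key):
        # Fetch the full payload.
        return self._objects[key]["data"]

    def head(self, key):
        # Metadata can be inspected without transferring the payload.
        return self._objects[key]["metadata"]

store = ObjectStore()
store.put("backups/2025/db.dump", b"...archive bytes...",
          content_type="application/octet-stream", tier="archive")
print(store.head("backups/2025/db.dump")["tier"])  # archive
```

The key property the sketch captures is that "backups/2025/db.dump" is just an opaque key in a flat namespace, not a path through nested directories.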
13:05 Lois: Thank you, Orlando, for taking the time to talk to us about the hardware and software layers of cloud data centers. This information will surely help our listeners to make informed decisions about cloud infrastructure to meet their workload needs in terms of performance, scalability, cost, and management.  Nikita: That's right, Lois. And if you want to learn more about what we discussed today, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course.  Lois: In our next episode, we'll take a look at more of the fundamental concepts within modern cloud environments, such as Hypervisors, Virtualization, and more. I can't wait to learn more about it. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 13:47 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.  
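The user/group/other permission model discussed at the start of the episode can be demonstrated with a few lines of Python on a POSIX system. This is a minimal sketch, not code from the episode:

```python
import os
import stat
import tempfile

# Create a scratch file and restrict it to owner read/write,
# group read, and nothing for others (octal mode 640).
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)

mode = os.stat(path).st_mode
print(stat.filemode(mode))         # -rw-r----- : the user/group/other triads
print(bool(mode & stat.S_IRUSR))   # owner can read: True
print(bool(mode & stat.S_IWOTH))   # others can write: False

os.unlink(path)  # clean up the scratch file
```

Reading the `-rw-r-----` string left to right gives exactly the three permission groups Orlando names: user, group, and other.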

DLN Xtend
214: Big Endian, Big Problems | Linux Out Loud 116

DLN Xtend

Play Episode Listen Later Oct 11, 2025 67:13


This week on Linux Out Loud, we're plugging into the source! We kick things off with a look at the wild world of robotics competitions, from the destructive Norwalk Havoc Robot League to updates on our local FLL and FTC teams. Then, we dive into a heated discussion on the great Endianness debate shaking up the RISC-V community and what the 90-10 rule means for kernel support. Plus, we've got updates on a retro 3D printing project, a pro tip for backing up your SSH keys, and a horror story about Nate's poor Commodore 64x. Find the rest of the show notes at: https://tuxdigital.com/podcasts/linux-out-loud/lol-116/ Visit the Tux Digital Merch Store: https://store.tuxdigital.com/ Special Guest: Bill.

Technology Tap
A+ Fundamentals: Boot to Brains Chapter 4

Technology Tap

Play Episode Listen Later Oct 7, 2025 24:14 Transcription Available


professorjrod@gmail.com
A dead PC at the worst moment is a gut punch—unless you have a roadmap. We walk through the exact thinking that turns “no lights, no fans, no display” into a calm, step‑by‑step recovery, starting where every system truly begins: firmware. BIOS and UEFI aren't trivia; they decide how your machine discovers drives, validates bootloaders, and applies security like Secure Boot and TPM. That's why a simple post‑update check of boot order, storage mode, and firmware toggles can rescue a lab full of “no boot device” errors in minutes.
From there, we get brutally honest about power. PSUs age, rails sag, and idle tests lie. You'll learn the outside‑in “power ladder,” why a line‑interactive UPS prevents ghost errors, and how unstable XMP profiles masquerade as OS problems. We demystify boot and drive failures—wrong boot entries, NVMe lane conflicts, cloning driver mismatches—and show how SMART data, free space, cooling, and firmware updates revive sluggish SSDs. Then we cut through RAID mythology: 0 for speed, 1 for uptime, 5 for read‑heavy with risk, 6 for double‑parity safety, and 10 for fast resilience. And we repeat the rule that saves careers: RAID is not backup. Verify restores, keep copies offsite or offline, and schedule tests before disaster strikes.
Video issues get the practical treatment too. No display? Check inputs and connect to the discrete GPU, not the motherboard. Blurry or artifacting under load? Validate refresh rates, cables, thermals, and PSU capacity. We close with a field checklist and a case study where a quality PSU upgrade stabilized 3D renders instantly—proof that systems thinking beats screen-chasing every time. 
If you want a technician's mindset—evidence over assumptions, one variable at a time—this guide will sharpen your process and speed your fixes. If this helped you think like a tech, follow the show, share it with a teammate who's on call this week, and leave a quick review so more builders and troubleshooters can find it.
Support the show
If you want to help me with my research please e-mail me: Professorjrod@gmail.com
If you want to join my question/answer zoom class e-mail me at Professorjrod@gmail.com
Art By Sarah/Desmond
Music by Joakim Karud
Little chacha Productions
Juan Rodriguez can be reached at:
TikTok @ProfessorJrod
ProfessorJRod@gmail.com
@Prof_JRod
Instagram ProfessorJRod

Technology Tap
A+ Fundamentals : Power First, Stability Always Chapter 3

Technology Tap

Play Episode Listen Later Sep 30, 2025 24:45 Transcription Available


professorjrod@gmail.com
What if the real cause of your random reboots isn't the GPU at all—but the power plan behind it? We take you end to end through a stability-first build, starting with the underrated hero of every system: clean, properly sized power. You'll learn how to calculate wattage with 25–30% headroom, navigate 80 Plus efficiency tiers, and safely adopt ATX 3.0 with the 12VHPWR connector—no sharp bends, modular cable sanity, and the UPS/surge stack that prevents nasty surprises when the lights flicker.
From there, we shift into storage strategy that balances speed and safety. HDD, SATA SSD, and NVMe each earn their place, and we break down RAID 0/1/5/6/10 in plain language so you can pick the right array for your workload. We underline a hard truth: RAID protects against disk failure, not human error, so versioned offsite backups remain non-negotiable. Real-world stories—including a painful RAID 5 rebuild gone wrong—highlight why RAID 6 and RAID 10 matter for bigger or busier systems.
Memory and CPU round out the backbone. We simplify DDR4 vs DDR5, explain how frequency and CAS affect real latency, and show why matched pairs and dual channel deliver the performance you paid for. You'll get quick wins like enabling XMP/EXPO, when ECC is worth it, and how to troubleshoot training hiccups. Then we open the CPU: cores, threads, cache, sockets, chipsets, and why firmware comes before hardware when upgrades fail to post. Cooling decisions—air, AIO, or custom—tie directly to performance ceilings, along with safe overclock/undervolt practices and thermal targets under sustained load.
By the end, you'll have a practical checklist to build smarter, troubleshoot faster, and feel ready for the CompTIA A+ exam: power headroom, cable stewardship, airflow planning, RAID with backups, memory matching, BIOS compatibility, and validation testing. 
If this guide helps you ship a rock-solid PC, share it with a friend, leave a quick review, and hit follow so you never miss the next masterclass.
Support the show
If you want to help me with my research please e-mail me: Professorjrod@gmail.com
If you want to join my question/answer zoom class e-mail me at Professorjrod@gmail.com
Art By Sarah/Desmond
Music by Joakim Karud
Little chacha Productions
Juan Rodriguez can be reached at:
TikTok @ProfessorJrod
ProfessorJRod@gmail.com
@Prof_JRod
Instagram ProfessorJRod
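The RAID capacity trade-offs this episode walks through can be made concrete with simple arithmetic. The sketch below uses the standard usable-capacity formulas for each level (n identical disks of size s, ignoring filesystem and controller overhead); it is a standalone illustration, not code from the show:

```python
def usable_capacity(level, n_disks, disk_tb):
    """Usable space in TB for common RAID levels.
    Standard formulas: RAID 0 stripes every disk, RAID 1 mirrors,
    RAID 5/6 spend one/two disks' worth on parity, RAID 10 mirrors stripes."""
    if level == 0:
        return n_disks * disk_tb
    if level == 1:
        return disk_tb                      # n-way mirror keeps one copy usable
    if level == 5:
        assert n_disks >= 3, "RAID 5 needs at least 3 disks"
        return (n_disks - 1) * disk_tb
    if level == 6:
        assert n_disks >= 4, "RAID 6 needs at least 4 disks"
        return (n_disks - 2) * disk_tb
    if level == 10:
        assert n_disks >= 4 and n_disks % 2 == 0, "RAID 10 needs an even count >= 4"
        return n_disks * disk_tb // 2
    raise ValueError(f"unsupported RAID level: {level}")

# Four 4 TB disks under different levels:
for level in (0, 5, 6, 10):
    print(level, usable_capacity(level, 4, 4))  # 16, 12, 8, 8 TB
```

The numbers show the episode's point at a glance: the same four disks give 16 TB with no redundancy (RAID 0) but only 8 TB once you pay for double parity (RAID 6) or mirroring (RAID 10).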

Technology Tap
A+ Fundamentals Chapter 1 and 2: Becoming an IT Specialist: Troubleshooting and Hardware Essentials

Technology Tap

Play Episode Listen Later Sep 23, 2025 23:55 Transcription Available


professorjrod@gmail.com
The digital world seems like magic to many, but behind every functioning computer is a complex system of hardware components and methodical troubleshooting approaches. In this comprehensive episode, we pull back the curtain on what makes IT specialists effective problem-solvers and explore the physical heart of computing systems.
We begin by examining the role of IT specialists as workplace heroes who tackle everything from simple password resets to complex network outages. Through real-world stories and practical examples, we highlight how the best tech professionals combine technical knowledge with crucial soft skills like communication and organization. You'll discover why explaining complex concepts in plain language is just as important as understanding those concepts in the first place.
At the core of effective IT work lies a structured troubleshooting methodology. We break down CompTIA's six-step approach: identifying problems through careful information gathering, establishing theories of probable cause, testing those theories systematically, implementing solutions, verifying full functionality, and documenting everything for future reference. This methodology isn't just exam material—it's a framework that professionals rely on daily to solve real-world tech problems efficiently.
The episode then ventures into hardware territory, exploring the motherboard as the computer's central nervous system. We discuss different form factors, installation procedures, and potential pitfalls like electrostatic discharge. 
Our journey continues through the evolution of connection standards—from early USB and display technologies to modern Thunderbolt and USB-C implementations—and the expansion cards that enhance computer functionality.
Whether you're studying for CompTIA certification, working in IT, or simply curious about what happens when you call tech support, this episode provides valuable insights into the methodical thinking and technical knowledge that powers our digital world. We wrap up with practice questions that reinforce key concepts and prepare you for both certification exams and real-world scenarios.
Subscribe to Technology Tap for our continuing series on CompTIA A+ certification topics, with our next episode diving into storage technologies from traditional hard drives to cutting-edge NVMe solutions.
The Dom Sub Living BDSM and Kink Podcast
Curious about Dominance & submission? Real stories, real fun, really kinky.
Listen on: Apple Podcasts Spotify
Support the show
If you want to help me with my research please e-mail me: Professorjrod@gmail.com
If you want to join my question/answer zoom class e-mail me at Professorjrod@gmail.com
Art By Sarah/Desmond
Music by Joakim Karud
Little chacha Productions
Juan Rodriguez can be reached at:
TikTok @ProfessorJrod
ProfessorJRod@gmail.com
@Prof_JRod
Instagram ProfessorJRod

BSD Now
629: Host Naming Conventions

BSD Now

Play Episode Listen Later Sep 18, 2025 68:11


The Death of Industrial Design, Host Naming Conventions, Symbian reflections, bash timeouts, NVMe vs SSDs, a system to organize your life, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines The Death Of Industrial Design And The Era Of Dull Electronics (https://hackaday.com/2025/07/23/the-death-of-industrial-design-and-the-era-of-dull-electronics) Host Naming Convention (https://vulcanridr.mataroa.blog/blog/host-naming-convention) News Roundup Open, free, and completely ignored: The strange afterlife of Symbian (https://www.theregister.com/2025/07/17/symbian_forgotten_foss_phone_os/) TIL: timeout in Bash scripts (https://heitorpb.github.io/bla/timeout/) It seems like NVMe SSDs have overtaken SATA SSDs for high capacities (https://utcc.utoronto.ca/~cks/space/blog/tech/NVMeOvertakingSATAForSSDs) A system to organise your life (https://johnnydecimal.com) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions - Nelson - Books (https://github.com/BSDNow/bsdnow.tv/blob/master/629/feedback/Nelson%20-%20books.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) Join us and other BSD Fans in our BSD Now Telegram channel (https://t.me/bsdnow)

MacVoices Video
MacVoices #25239: Jeff Carlson Takes Control of Your Digital Storage

MacVoices Video

Play Episode Listen Later Sep 17, 2025 45:41


Jeff Carlson takes on the latest information in the newly updated Take Control of Your Digital Storage. Topics include choosing SSD vs HDD and NVMe, when Thunderbolt 5 matters, APFS basics, and why cables and enclosures affect speed and reliability. They cover backup strategies, OWC DIY builds, iOS/iPadOS Files support for external drives, NAS pros/cons, and even using SD cards—when it's smart, and when it's not. This MacVoices is supported by OpenCase. MagSafe Perfected. Use the code “macvoices” to save 10% at TheOpenCase.com Show Notes: Chapters: [0:00] Welcome and why storage knowledge matters [1:13] New edition: Take Control of Digital Storage [2:15] When storage goes wrong: errors, space, missing files [3:25] APFS, Finder free space, and modern Mac limits [5:46] SSD vs HDD; Thunderbolt 5 reality checks [7:55] NVMe terms, enclosures, and choosing wisely [9:13] Do you actually need max speed? [10:24] Photographer's perspective on “want vs need” [12:19] Cable chaos: labeling, charging vs data rates [16:43] Backup strategy: fast vs affordable drives [19:03] DIY builds with OWC; reliability over bargain boxes [26:02] iOS/iPadOS Files: formatting and managing externals [29:53] NAS basics: use cases, speed, and security cautions [33:41] “Sneaker-net” to NAS and Ethernet options [37:32] SD cards as storage: pros, cons, and lifespan [43:21] Pricing, page count, and where to learn more Links: Take Control of Your Digital Storage Guests: Jeff Carlson is an author, photographer, and freelance writer. Among many other projects, he publishes the Smarter Image newsletter, which explores how computational photography, AI, and machine learning are fundamentally changing the art and science of photography. He's covered the personal technology field from Macs and PalmPilots to iPhones and mirrorless cameras, publishing in paper magazines, printed books, ebooks, and websites. 
He's also the co-host of the podcasts PhotoActive, writes for Take Control, has spoken at several conferences and events. He lives in Seattle, where, yes, it is just as gray and wet and coffee-infused as you think it is. Catch up with everything he's doing at JeffCarlson.com. Support: Become a MacVoices Patron on Patreon http://patreon.com/macvoices Enjoy this episode? Make a one-time donation with PayPal Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/ Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss

MacVoices Audio
MacVoices #25239: Jeff Carlson Takes Control of Your Digital Storage

MacVoices Audio

Play Episode Listen Later Sep 17, 2025 45:42


Jeff Carlson takes on the latest information in the newly updated Take Control of Your Digital Storage. Topics include choosing SSD vs HDD and NVMe, when Thunderbolt 5 matters, APFS basics, and why cables and enclosures affect speed and reliability. They cover backup strategies, OWC DIY builds, iOS/iPadOS Files support for external drives, NAS pros/cons, and even using SD cards—when it's smart, and when it's not.  This MacVoices is supported by OpenCase. MagSafe Perfected. Use the code “macvoices” to save 10% at TheOpenCase.com Show Notes: Chapters: [0:00] Welcome and why storage knowledge matters [1:13] New edition: Take Control of Digital Storage [2:15] When storage goes wrong: errors, space, missing files [3:25] APFS, Finder free space, and modern Mac limits [5:46] SSD vs HDD; Thunderbolt 5 reality checks [7:55] NVMe terms, enclosures, and choosing wisely [9:13] Do you actually need max speed? [10:24] Photographer's perspective on “want vs need” [12:19] Cable chaos: labeling, charging vs data rates [16:43] Backup strategy: fast vs affordable drives [19:03] DIY builds with OWC; reliability over bargain boxes [26:02] iOS/iPadOS Files: formatting and managing externals [29:53] NAS basics: use cases, speed, and security cautions [33:41] “Sneaker-net” to NAS and Ethernet options [37:32] SD cards as storage: pros, cons, and lifespan [43:21] Pricing, page count, and where to learn more Links: Take Control of Your Digital Storage Guests: Jeff Carlson is an author, photographer, and freelance writer. Among many other projects, he publishes the Smarter Image newsletter, which explores how computational photography, AI, and machine learning are fundamentally changing the art and science of photography. He's covered the personal technology field from Macs and PalmPilots to iPhones and mirrorless cameras, publishing in paper magazines, printed books, ebooks, and websites. 
He's also the co-host of the podcasts PhotoActive, writes for Take Control, has spoken at several conferences and events. He lives in Seattle, where, yes, it is just as gray and wet and coffee-infused as you think it is. Catch up with everything he's doing at JeffCarlson.com. Support:      Become a MacVoices Patron on Patreon      http://patreon.com/macvoices      Enjoy this episode? Make a one-time donation with PayPal Connect:      Web:      http://macvoices.com      Twitter:      http://www.twitter.com/chuckjoiner      http://www.twitter.com/macvoices      Mastodon:      https://mastodon.cloud/@chuckjoiner      Facebook:      http://www.facebook.com/chuck.joiner      MacVoices Page on Facebook:      http://www.facebook.com/macvoices/      MacVoices Group on Facebook:      http://www.facebook.com/groups/macvoice      LinkedIn:      https://www.linkedin.com/in/chuckjoiner/      Instagram:      https://www.instagram.com/chuckjoiner/ Subscribe:      Audio in iTunes      Video in iTunes      Subscribe manually via iTunes or any podcatcher:      Audio: http://www.macvoices.com/rss/macvoicesrss      Video: http://www.macvoices.com/rss/macvoicesvideorss

Business of Tech
Austin McChord on Slide: Building a Better BCDR for MSPs and Lessons from Datto's Journey

Business of Tech

Play Episode Listen Later Aug 4, 2025 28:15


Austin McChord, co-founder and chairman of Slide, discusses the launch of his new BCDR startup, which aims to serve managed service providers (MSPs) with a fresh approach. Drawing from his experience at Datto, McChord emphasizes the importance of maintaining a strong culture and alignment with MSPs, which he feels has eroded in recent years. He addresses comparisons between Slide and Datto, explaining that while there are similarities in vision, Slide is built on a completely different technological foundation, utilizing NVMe flash storage to enhance performance and user experience.McChord expresses a commitment to ensuring Slide remains an independent company, learning from the challenges faced during Datto's journey, including its acquisition and the subsequent cultural shifts. He highlights the importance of setting clear values and operational controls to prevent a repeat of past experiences. The conversation delves into the lessons learned from building Datto, particularly regarding business operations and the unpredictability of public markets, which can lead to unwanted outcomes for company culture and employee satisfaction.The discussion also touches on the impact of AI in the tech landscape, with McChord noting that while AI can enhance efficiency, it currently struggles with tasks that require deep expertise. He believes that AI's potential lies in automating mundane tasks and improving configurations, but warns against rushing to implement AI solutions that may not yet be fully developed. McChord emphasizes the need for MSPs to understand the limitations of AI and to focus on building strong relationships with new small businesses that are emerging in the tech space.Finally, McChord shares his philanthropic endeavors, including transforming an abandoned power plant into a public park. 
He expresses excitement about projects that prioritize social returns over economic ones, and how these initiatives allow him to apply business principles to create positive change. His passion for building and creating, whether in business or philanthropy, drives his current pursuits and future aspirations. All our Sponsors: https://businessof.tech/sponsors/ Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/Looking for a link from the stories? The entire script of the show, with links to articles, are posted in each story on https://www.businessof.tech/ Support the show on Patreon: https://patreon.com/mspradio/ Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on:LinkedIn: https://www.linkedin.com/company/28908079/YouTube: https://youtube.com/mspradio/Facebook: https://www.facebook.com/mspradionews/Instagram: https://www.instagram.com/mspradio/TikTok: https://www.tiktok.com/@businessoftechBluesky: https://bsky.app/profile/businessof.tech

The Making Of
In-Camera Visual FX Leader Eric Hasso on Virtual Production Workflows, Plates, & More

The Making Of

Play Episode Listen Later Jul 25, 2025 18:44


In this episode, we welcome back Eric Hasso. Eric is a modern-day master in the art of In-Camera Visual FX, used in both film and TV production. In our conversation, he shares all about his latest projects, recent developments in his workflow and technologies — and other insights into his craft.
“The Making Of” is presented by AJA:
AJA DRM2-Plus 3RU Frame Unlocks Flexible Mini-Converter Configurations
Ideal for production and post environments where signal conversion needs vary, the AJA DRM2-Plus is a high-capacity, 3RU Mini-Converter frame that houses up to 12 full-size AJA Mini-Converters of any kind, and up to 24 of AJA's compact Mini-Converters. DRM2-Plus boasts flexible cooling and redundant power supply options and an intuitive faceplate design that lets users quickly access installed converters. Learn more about DRM2-Plus.
Massive Speed. Big Capacity. DIY Ready.
The OWC Express 4M2 delivers up to 32TB of high-performance NVMe storage with real-world speeds up to 3200MB/s over USB4. Built for demanding workflows like 4K/8K editing and VFX, it features thermally controlled fans for quiet, sustained performance. With massive capacity, a compact footprint, and easy drive installation, it's the ultimate DIY solution for creative pros who need speed and flexibility. Browse here
Bring Filmmaking to Your Classroom with LUMIX EDU
Discover how LUMIX EDU can support your school's video production goals with camera loaners, curriculum resources, and hands-on training. Learn more and get involved here
ZEISS Summer Savings Event
Now through September 1, save up to $4,000 on select Nano Prime lens sets and another $3,000 on the ZEISS Lightweight Zoom LWZ.3. Explore here
New Solutions from Videoguys:
Bring your vision to life with the SanDisk Professional Creator Series—fast, reliable storage designed specifically for content creators like you. 
Whether you're capturing footage on your iPhone or Android device, flying a drone, or shooting with a digital camera, SanDisk gives you the tools to stay in control of every shoot, every transfer, every edit, and every backup. From microSD and SD cards to portable SSDs and high-performance flash drives, the Creator Series is built to keep up with your creativity. Ready to take your content to the next level? Call Videoguys today at 800.323.2325 to learn more and find the perfect storage solution for your workflow! Browse here Podcast Rewind:July 2025 - Ep. 90…“The Making Of” is created by Michael Valinsky.To advertise your products or services to 228K filmmakers, TV production pros, and content creators reading this newsletter, send an email to: mvalinsky@me.com Get full access to The Making Of at themakingof.substack.com/subscribe

The Making Of
Director Jay Holben on Filmmaking Essentials, the State of the Industry, his New Book, & More

The Making Of

Play Episode Listen Later Jul 17, 2025 43:59


In this episode, we welcome back Jay Holben. Jay is a director, producer, educator and author of many essential filmmaking books. In our chat, we hear about his new book, The Director's Guide to Everything — as well as his thoughts on where the industry is at now, and about new technologies that he finds interesting. Jay also offers useful advice for students, younger filmmakers, and other creatives persevering through these challenging times. “The Making Of” is presented by AJA:AJA DRM2-Plus 3RU Frame Unlocks Flexible Mini-Converter ConfigurationsIdeal for production and post environments where signal conversion needs vary, the AJA DRM2-Plus is a high-capacity, 3RU Mini-Converter frame houses up to 12 full-size AJA Mini-Converters of any kind, and up to 24 of AJA's compact Mini-Converters. DRM2-Plus boasts flexible cooling and redundant power supply options and an intuitive faceplate design that lets users quickly access installed converters. Learn more about DRM2-Plus.Massive Speed. Big Capacity. DIY Ready.The OWC Express 4M2 delivers up to 32TB of high-performance NVMe storage with real-world speeds up to 3200MB/s over USB4. Built for demanding workflows like 4K/8K editing and VFX, it features thermally controlled fans for quiet, sustained performance. With massive capacity, a compact footprint, and easy drive installation, it's the ultimate DIY solution for creative pros who need speed and flexibility.Browse hereFeatured Book: The Director's Guide to Everything is the ultimate road map for mastering the craft. More than just a guide — it's a masterclass in how to turn your vision into reality. 
The Director's Guide delivers a comprehensive, no-nonsense education in the art and science of directing — with practical knowledge drawn from decades of experience in the trenches of motion picture production.
www.adakinpress.com
Get your copy here
ZEISS Summer Savings Event
Now through September 1, save up to $4,000 on select Nano Prime lens sets and another $3,000 on the ZEISS Lightweight Zoom LWZ.3. Browse here
New Solutions from Videoguys:
The SanDisk Professional G-RAID PROJECT 2 is a powerhouse 2-bay storage system built for serious creators. Pre-configured in RAID 0 and featuring Thunderbolt™ 3 connectivity, it delivers the speed and capacity you need for demanding 4K, 8K, and VR video workflows—up to a massive 52TB. With a PRO-BLADE™ SSD Mag slot for ultra-fast offloads and edits, it's the perfect solution for high-performance production environments. Call Videoguys at 800-323-2325 for free tech advice and to learn more! Visit here
Podcast Rewind: July 2025 - Ep. 89
…
“The Making Of” is created by Michael Valinsky. Advertise your products or services to over 215K filmmakers, TV production pros, studios, & content creators reading this newsletter… email us: mvalinsky@me.com
Get full access to The Making Of at themakingof.substack.com/subscribe

The Making Of
Embeth Davidtz on "Don't Let's Go to the Dogs Tonight," Crafting her Directorial Debut, Lessons from Spielberg, & More

The Making Of

Play Episode Listen Later Jul 10, 2025 38:32


In this episode, we welcome Embeth Davidtz. Embeth has her directorial debut, Don't Let's Go to the Dogs Tonight, hitting theaters nationwide this week. A veteran actor, she is known for roles in Schindler's List, Army of Darkness, Matilda, Fallen, Mansfield Park, Bridget Jones's Diary, Junebug, The Girl with the Dragon Tattoo, The Amazing Spider-Man, "Mad Men," "Ray Donovan," and "The Morning Show". In our chat, she shares her backstory, stories from working with Spielberg — and all about her new film, which she wrote, directed and starred in. Embeth also offers invaluable advice for actors and filmmakers working today.

"The Making Of" is presented by AJA:

AJA DRM2-Plus 3RU Frame Unlocks Flexible Mini-Converter Configurations
Ideal for production and post environments where signal conversion needs vary, the AJA DRM2-Plus is a high-capacity, 3RU Mini-Converter frame that houses up to 12 full-size AJA Mini-Converters of any kind, and up to 24 of AJA's compact Mini-Converters. DRM2-Plus boasts flexible cooling and redundant power supply options and an intuitive faceplate design that lets users quickly access installed converters. Learn more about DRM2-Plus.

Massive Speed. Big Capacity. DIY Ready.
The OWC Express 4M2 delivers up to 32TB of high-performance NVMe storage with real-world speeds up to 3200MB/s over USB4. Built for demanding workflows like 4K/8K editing and VFX, it features thermally controlled fans for quiet, sustained performance. With massive capacity, a compact footprint, and easy drive installation, it's the ultimate DIY solution for creative pros who need speed and flexibility. Browse here

ZEISS Summer Savings Event
Now through September 1, save up to $4,000 on select Nano Prime lens sets and another $3,000 on the ZEISS Lightweight Zoom LWZ.3. Browse here

New Solutions from Videoguys:
The SanDisk Professional G-RAID PROJECT 2 is a powerhouse 2-bay storage system built for serious creators. Pre-configured in RAID 0 and featuring Thunderbolt™ 3 connectivity, it delivers the speed and capacity you need for demanding 4K, 8K, and VR video workflows—up to a massive 52TB. With a PRO-BLADE™ SSD Mag slot for ultra-fast offloads and edits, it's the perfect solution for high-performance production environments. Call Videoguys at 800-323-2325 for free tech advice and to learn more! Visit here

Podcast Rewind:
July 2025 - Ep. 89…

"The Making Of" is created by Michael Valinsky.
Advertise your products or services to 205,000 filmmakers, TV production pros, and content creators reading this newsletter — contact us at mvalinsky@me.com

Get full access to The Making Of at themakingof.substack.com/subscribe

The Making Of
"Mission: Impossible" Aerial Cinematography Team on Filming for the Franchise, Collaborating with Cruise, & More

The Making Of

Play Episode Listen Later Jul 1, 2025 43:43


In this episode, we welcome Phil Arntz and Will Banks. Phil is an Aerial Director of Photography and Will a top Camera Pilot. This filmmaking team has worked on a range of projects including F1: The Movie, Mission: Impossible - The Final Reckoning, The Gorge, Mission: Impossible - Dead Reckoning Part One, "The Agency: Central Intelligence," "The Day of the Jackal" and "The Diplomat". In our chat, they share their backstories, how they learned their skill sets, and about working on the latest Mission: Impossible. They also talk about their unique collaboration — and reveal some of the tools, technologies and techniques used to capture these iconic shots.

"The Making Of" is presented by AJA:

AJA DRM2-Plus 3RU Frame Unlocks Flexible Mini-Converter Configurations
Ideal for production and post environments where signal conversion needs vary, the AJA DRM2-Plus is a high-capacity, 3RU Mini-Converter frame that houses up to 12 full-size AJA Mini-Converters of any kind, and up to 24 of AJA's compact Mini-Converters. DRM2-Plus boasts flexible cooling and redundant power supply options and an intuitive faceplate design that lets users quickly access installed converters. Learn more about DRM2-Plus.

Massive Speed. Big Capacity. DIY Ready.
The OWC Express 4M2 delivers up to 32TB of high-performance NVMe storage with real-world speeds up to 3200MB/s over USB4. Built for demanding workflows like 4K/8K editing and VFX, it features thermally controlled fans for quiet, sustained performance. With massive capacity, a compact footprint, and easy drive installation, it's the ultimate DIY solution for creative pros who need speed and flexibility. Browse here

Featured Book: The Horror Movie Report
Blumhouse calls The Horror Movie Report "the ultimate guide to every horror movie ever made" and says it "helped us shape how we think about horror". Get under the skin of over 27,000 horror movies with the most detailed data-led insights ever compiled at HorrorMovieReport.com. Read more here

The new ZEISS Otus ML:
Now on sale, the Otus ML 1.4/50mm photography lens from ZEISS is the new generation of high-quality optics for your photographic art. Find it at your favorite photo retailer! Learn more here

New Solutions from Videoguys:
The SanDisk Professional G-RAID PROJECT 2 is a powerhouse 2-bay storage system built for serious creators. Pre-configured in RAID 0 and featuring Thunderbolt™ 3 connectivity, it delivers the speed and capacity you need for demanding 4K, 8K, and VR video workflows—up to a massive 52TB. With a PRO-BLADE™ SSD Mag slot for ultra-fast offloads and edits, it's the perfect solution for high-performance production environments. Call Videoguys at 800-323-2325 for free tech advice and to learn more! Browse here

Podcast Rewind:
June 2025 - Ep. 88…

"The Making Of" is created by Michael Valinsky.
Advertise your products or services to 202,000 filmmakers, TV production pros, and content creators reading this newsletter — contact us at mvalinsky@me.com

Get full access to The Making Of at themakingof.substack.com/subscribe

Didactic Mind
Didactic Mind, Ep 122: Donny Trumpo's Wars

Didactic Mind

Play Episode Listen Later Jun 22, 2025 64:44


As we all know by now, the US, under President Trump's leadership, attacked Iran's nuclear enrichment facilities. This is not merely a bad decision - it is among the craziest, most risky, and most stupid decisions ever taken by any world leader. In the latest episode of Didactic Mind, I explain why Trump has now ensured the total failure of his second term, by handing over control of his foreign policy decisions to the neoclowns, and to the Iranians. I give background and context around the wars in Ukraine and Iran, and point out that, even though Ornj Boi did not start these wars, he was given an explicit mandate by the American people to end them. He decided not to do so, and instead walked straight into not one, but TWO, very obvious and very clear traps, laid out for him by the neoclowns. Because of this, and because of Trump's brinkmanship, the world is now far more dangerous, and the American Empire is one step closer to total collapse. I explain why this is the case toward the end of the podcast, and how that could happen. I also provide my own thoughts about why Drumpf appears to be so willing to bend to the will of the neoclowns - and those thoughts are dark and troubling indeed. Whatever happens next, I think we have passed a threshold, and there is no going back. Trump is now officially a failed President, who has gutted whatever credibility the FUSA once had on the global stage. No one is ever going to take the US at its word again. Its economic collapse is now totally assured. And all that remains, is for Small-Hands Donny to serve out whatever is left of his term, before he is impeached, or unalived. His legacy is, by his own hand, destroyed. Support the War College If you like what I do, and you would like to express your appreciation, please feel free to do so here via my Buy Me a Coffee page. 
All funds go to upkeep of the site and podcast (well, whatever is left over after buying good Scotch, obviously…)

Protect Yourself From Big Tech
I make some pretty incendiary statements in this podcast, and in most of my podcasts. I can only do so because I take steps to protect myself from the Big Tech companies, and preserve my identity. You need to do the same – this is no longer optional, because if you don't, the gatekeepers WILL come for your head. If you don't know where to start, then I've got you covered right here with this post. Here are the specific steps that you can take:

Make sure that your web traffic is safe and protected from prying eyes using a VPN – click here to get a massive 80% OFF on a 24-month subscription with Surfshark;

Be sure also to check out Incogni, the new data and privacy management tool offered by Surfshark, which simply works behind the scenes to ensure that no malign actors can take advantage of your data ever again;

Another solid VPN option for you is Atlas VPN, brought to you by the same company that creates NordVPN;

The best SSD drive that you can get right now, with blazing fast speeds and near-native storage capabilities, is probably the SanDisk Extreme 1TB Portable SSD with NVMe technology – I bought this myself to keep a moving backup of all of my files, it's the size of a credit card, and it's absolutely superb;

Build Your Platform

Get yourself a proper domain for your site or business with Namecheap;

Put your site onto a shared hosting service using A2Hosting for the fastest, most secure, and stable hosting platform around – along with unlimited email accounts of unlimited size;

Create beautiful websites with amazing, feature-rich content using Divi from Elegant Themes;

Stand for Western Civilisation

Buy yourself a proper Bible;

Get your Castalia Library books here;

Buy yourself a proper knife for personal defence;

ACG - The Best Gaming Podcast
Blood Message Looks Sweet, The Last 25 Game Reviews, The Best Gaming Podcast this episode: 522

ACG - The Best Gaming Podcast

Play Episode Listen Later Jun 20, 2025 254:20


Sponsor-free, brutally honest, and live every Friday—The Best Gaming Podcast dives into the week's juiciest drops.

This episode:
• Blood Message trailer breakdown. NVMe versus SSD.
• Capcom Trailers, and RE Stuff.
• News flash – studio closures, surprise Game Pass arrivals, and the changes that could result in massive industry shifts

#BestGamingPodcast #ACGReview #BloodMessage #GamingNews #HeadphoneTech #GameLeaks #NoSponsors

Didactic Mind
Domain Query: Neo-Byzantine Intrigues

Didactic Mind

Play Episode Listen Later Jun 9, 2025 29:51


We are back, at long last, after a lengthy break, with a new Domain Query podcast. Once again, we are answering a question from LRFotS Randale6, who draws an intriguing parallel between the current, parlous, state of the FUSA, and the old Byzantine Empire. Our friend argues that, like the Byzantines, the descendants of the Americans may find themselves one day surrounded by foreigners, speaking the languages of those foreigners, treating the English language (and the inheritance given to them by their English and European ancestors) as barbarous relics: As the west declines and dies I am struck by an eerie parallel between the USA and Byzantium. If I remember correctly it was the Byzantine Emperor Michael VIII Palaiologos who said words to the effect of "Latin is the language of barbarians". America I think is heading for the same fate, like the Eastern Romans we are increasingly finding ourselves surrounded by another group (for them it was the native Greeks). This group being primarily Latin American, typical on the mestizo side of the racial scale. Eventually I suspect the outcome will be the same, as the Eastern Romans became the Byzantines (Latin overtaken by Greek) we will become the Neo-Byzantines. We may "live on" as a polity, but we will not be the same. I am personally not sure what to make of this, it certainly seems preferable to collapse but... Will our descendants eventually mutter the words (in Spanish), "English is a foreigner's tongue"? Will this transfigured nation continue descending into decadence or will it give rise to renewal (with a distinctly Latin tinge)? As the USA before this change was the second coming of Northern Europe will this future USA be the New Southern Europe? Will we inherit the same toxic politics and power struggles from the former USA (much as Byzantium inherited the Roman Empire's political intrigues)? Your thoughts on this Didact? 
I went through this at some length, first by looking at the history behind the (supposed) statement of Michael VIII Palaiologos. (I'm not saying he didn't say it, I just cannot confirm it, as I am not historian enough to do so.) Then, I looked at the way the FUSA is likely to devolve and split apart, and I argued that Amerikhastan will break up into multiple nations – at least one of which will be a majority-White, majority-Christian, English-speaking country, in which aberrations and psychoses like Mohammedanism and the LGBTQWTFISTHISSHIT degeneracy will be outlawed on pain of death. This is not merely something spawned out of my fevered brain. My own reading into and around the coming breakup of the FUSA has led me to think that the future of the FUSA will NOT be quite as dire as what our friend predicts. It will likely be much more like what we saw in South Africa under apartheid – which is NOT a justification of that system – or Rhodesia, in which White Christians find a way to build and maintain a beacon of civilisation, in the midst of savagery and darkness around them, by rediscovering their core and roots. Note that I recorded this yesterday, before I saw how badly the riots in Clownipornia had spun out of control, and before I learned about the activation of the National Guard and the US Marines to go stomp on the rioters. Subsequent events lend, in my view, a certain authenticity and validity to the things I have described in this podcast.

Reading List
Victoria: A Novel of 4th Generation War by William S. Lind
The Coming Civil War by Tom Kawczynski

Support the War College
If you like what I do, and you would like to express your appreciation, please feel free to do so here via my Buy Me a Coffee page. All funds go to upkeep of the site and podcast (well, whatever is left over after buying good Scotch, obviously…)

Gamereactor TV - English
Kingston Fury Renegade G5 NVMe

Gamereactor TV - English

Play Episode Listen Later May 27, 2025 1:46


Gamereactor TV - Italiano
Kingston Fury Renegade G5 NVMe

Gamereactor TV - Italiano

Play Episode Listen Later May 27, 2025 1:46


The Making Of
Adobe's Jason Druss Shares On The Latest Solutions, AI, & Much More

The Making Of

Play Episode Listen Later May 5, 2025 38:33


In this episode, we welcome Jason Druss. Jason is Sr. Product Marketing Manager for Video at Adobe. In our chat, he shares about his backstory, experience as a colorist, and his role at Adobe, where his expertise and understanding of the creative community help drive innovation and growth of their applications. Jason also talks about the latest updates to Premiere Pro, including advanced AI solutions — and he offers priceless advice for editors and creatives alike in their career journey.

"The Making Of" is presented by AJA:

Introducing the all-new KUMO 6464-12G
Redesigned for maximum reliability and performance, KUMO 6464-12G is a high-capacity 12G-SDI router featuring 64x 12G-SDI inputs and 64x 12G-SDI outputs that enables cost-effective signal routing for production and post. Ganged dual- and quad-port routing configurations let users combine multiple inputs and outputs for Dual Link video and key, 4K, UltraHD, and Quad Link 8K workflows. KUMO offers seamless routing of uncompressed, compressed, or camera raw signals. Learn more.

Vimeo Toronto Event
May 20 | TIFF Lightbox
A night of inspiring Vimeo Staff Picks + live filmmaker commentary!
7pm Doors
7:30-9pm Films + commentary
9-11pm Reception - complimentary drinks + bites!
Screening + reception are free to attend. RSVP here

High-Capacity Storage Meets Pro-Level Docking
The OWC Gemini delivers the best of both worlds—massive dual-drive storage and essential connectivity in one sleek Thunderbolt solution. Perfect for post-production pros, it offers RAID-ready backup, 2.5Gb Ethernet, an SD card reader, and ports for all your gear. Whether you're editing footage or offloading media, Gemini keeps your workflow fast, organized, and ready for anything. Take a look here

The Glass Dome Rises - Powered by Igelkott Plates
Netflix's The Glass Dome is climbing the charts in the U.S. Behind the scenes: Igelkott Driving Plates. Built from the ground up for In-Camera VFX, captured with One-Lens 360 technology. No stitching. No compromises. No problem. Want the same power behind your production?

The Making Of
ZEISS' Tony Wisniewski on the New Otus ML, Photo Market's Resurgence, & More

The Making Of

Play Episode Listen Later Apr 29, 2025 42:27


In this episode, we welcome back Tony Wisniewski. Tony runs marketing at ZEISS for their photo and cine divisions in America, and helps support photographers, filmmakers and cinematographers every day. In our chat, we hear about their latest updates to the Otus family of lenses, including the brand-new Otus ML. Also, Tony shares about the resurgent photo market — and where things might be heading in the coming months and years.

"The Making Of" is presented by AJA:

Introducing the all-new KUMO 6464-12G
Redesigned for maximum reliability and performance, KUMO 6464-12G is a high-capacity 12G-SDI router featuring 64x 12G-SDI inputs and 64x 12G-SDI outputs that enables cost-effective signal routing for production and post. Ganged dual- and quad-port routing configurations let users combine multiple inputs and outputs for Dual Link video and key, 4K, UltraHD, and Quad Link 8K workflows. KUMO offers seamless routing of uncompressed, compressed, or camera raw signals. Learn more.

High-Capacity Storage Meets Pro-Level Docking
The OWC Gemini delivers the best of both worlds—massive dual-drive storage and essential connectivity in one sleek Thunderbolt solution. Perfect for post-production pros, it offers RAID-ready backup, 2.5Gb Ethernet, an SD card reader, and ports for all your gear. Whether you're editing footage or offloading media, Gemini keeps your workflow fast, organized, and ready for anything. Take a look here

Check out the ZEISS Otus ML:
Now on sale, the Otus ML 1.4/50mm photography lens from ZEISS is the new generation of high-quality optics for your photographic art. Find it at your favorite photo retailer! Learn more here

The Glass Dome Rises - Powered by Igelkott Plates
Netflix's The Glass Dome is climbing the charts in the U.S. Behind the scenes: Igelkott Driving Plates. Built from the ground up for In-Camera VFX, captured with One-Lens 360 technology. No stitching. No compromises. No problem. Want the same power behind your production?

The Making Of
Industry Analyst Stephen Follows on Film Financing, Festivals, Distribution, & More

The Making Of

Play Episode Listen Later Apr 24, 2025 70:20


In this episode, we welcome Stephen Follows. Stephen is an expert industry analyst, as well as a writer, producer and educator. In our chat, he shares his backstory and his thoughts on the worlds of film financing, festivals, and distribution. He also speaks about how he decodes the industry through data — and provides a "Crystal Ball" view of where things might be heading in the coming years. Stephen also offers real-world advice for filmmakers on how best to position themselves for the evolving road ahead.

"The Making Of" is presented by AJA:

Explore AJA's New Solutions for Next-Gen Production and Broadcast
Ahead of NAB 2025, AJA debuted innovative solutions for production and broadcast professionals, including the BRIDGE LIVE 3G-8 IP video bridge for remote workflows/streaming/backhaul, the DANTE-12GAM IP audio embedder/disembedder, and the KUMO 6464-12G compact SDI router. Find out how your facility, pipeline, or project can benefit from the flexibility these new tools provide here.

OWC Powers Indie Horror-Comedy Screamboat
From set to post, the Screamboat team trusted OWC to keep their horror-comedy production running smoothly. Atlas media cards captured the action, while Envoy Pro FX and ThunderBlade drives enabled fast offloads. In post, the ThunderBay Flex 8 anchored their workflow with high-capacity, high-performance storage. Explore how OWC powered this ambitious indie project every step of the way. Read more here

Featured Book: Engage Filmmakers
This new report offers a detailed look at what filmmakers read, share, and trust. Based on over 1.5 million articles and posts, Engage Filmmakers is designed for organisations that want to reach filmmaking audiences in a meaningful way. It shows what topics resonate, which formats perform best, and how to build lasting credibility.

What's inside:
• Audience segmentation by filmmaking role
• The content formats that get attention
• Platform strategies backed by real data
• Practical guidance for building trust with filmmakers

If your team creates content or campaigns for the film industry, this is built to help you do it better. Explore here — 10% off for "The Making Of" readers!

Check out the ZEISS Otus ML:
Now on sale, the Otus ML 1.4/50mm photography lens from ZEISS is the new generation of high-quality optics for your photographic art. Find it at your favorite photo retailer! Learn more here

A New Solution from Videoguys:
The SanDisk Extreme Portable SSD is built for adventure, fitting seamlessly into your mobile lifestyle while delivering blazing-fast NVMe performance with read speeds up to 1050MB/s and write speeds up to 1000MB/s. Designed for content creators and on-the-go professionals, this high-capacity drive is tested and compatible with iPhone, making it easy to free up space on your smartphone. Its rugged design offers up to three-meter drop protection, IP65 water and dust resistance, and a durable silicone shell for extra security. Backed by a 5-year limited warranty, the SanDisk Extreme Portable SSD is now available in an impressive new 8TB capacity at Videoguys.com. Browse it here

Upcoming NYC Event: Gold Women's Health & Business Conference
Join us Thursday, May 8 in NYC for the Gold Women's Health & Business Conference—an empowering afternoon focused on elevating women in business and well-being. Learn SEO, Marketing Strategies, Funding resources, & Wellness insights for women. This powerful experience is designed for Founders, Creatives, & Professionals ready to thrive in 2025 and beyond. Visit us here

Podcast Rewind:
April 2025 - Ep. 75…

"The Making Of" is created by Michael Valinsky.
Promote your products or services to 155K filmmakers, content creators, TV, broadcast & live event production pros reading this newsletter… email us at mvalinsky@me.com

Get full access to The Making Of at themakingof.substack.com/subscribe

BSD Now
607: Sign those commits

BSD Now

Play Episode Listen Later Apr 17, 2025 56:27


We should improve libzfs somewhat, Accurate Effective Storage Performance Benchmark, Debugging aids for pf firewall rules on FreeBSD, OpenBSD and Thunderbolt issue on ThinkPad T480s, Signing Git Commits with an SSH key, Pgrep, LibreOffice downloads on the rise, and more

NOTES
This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow)

Headlines
We should improve libzfs somewhat (https://despairlabs.com/blog/posts/2025-03-12-we-should-improve-libzfs-somewhat/)
Accurate Effective Storage Performance Benchmark (https://klarasystems.com/articles/accurate-effective-storage-performance-benchmark/?utm_source=BSD%20Now&utm_medium=Podcast)

News Roundup
Debugging aids for pf firewall rules on FreeBSD (https://dan.langille.org/2025/02/24/debugging-aids-for-pf-firewall-rules-on-freebsd/)
OpenBSD and Thunderbolt issue on ThinkPad T480s (https://www.tumfatig.net/2025/openbsd-and-thunderbolt-issue-on-thinkpad-t480s/)
Signing Git Commits with an SSH key (https://jpmens.net/2025/02/26/signing-git-commits-with-an-ssh-key/)
Pgrep (https://www.c0t0d0s0.org/blog/pgrep-z-r.html)
LibreOffice downloads on the rise as users look to avoid subscription costs (https://www.computerworld.com/article/3840480/libreoffice-downloads-on-the-rise-as-users-look-to-avoid-subscription-costs.html)

Tarsnap
This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.

Feedback/Questions
Felix - bhyve and NVMe (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/607/feedback/Felix%20-%20bhyve%20and%20nvme.md)

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) Join us and other BSD Fans in our BSD Now Telegram channel (https://t.me/bsdnow)

The Making Of
Director Steven C. Miller on "Werewolves," His Creative Process, Independent Filmmaking, & More

The Making Of

Play Episode Listen Later Apr 17, 2025 43:07


In this episode, we welcome Steven C. Miller. Steven is a veteran director with credits including Werewolves, Line of Duty, First Kill, Marauders, Silent Night, and Under the Bed. In our chat, he shares about his early days, education, and pathway into filmmaking. He also takes us behind the scenes of creating his recent horror film, Werewolves, starring Frank Grillo, Katrina Law, and Lou Diamond Phillips. In addition, Steven offers many insights for filmmakers getting in the game and working their way up.

"The Making Of" is presented by AJA:

Explore AJA's New Solutions for Next-Gen Production and Broadcast
Ahead of NAB 2025, AJA debuted innovative solutions for production and broadcast professionals, including the BRIDGE LIVE 3G-8 IP video bridge for remote workflows/streaming/backhaul, the DANTE-12GAM IP audio embedder/disembedder, and the KUMO 6464-12G compact SDI router. Find out how your facility, pipeline, or project can benefit from the flexibility these new tools provide here.

OWC Powers Indie Horror-Comedy Screamboat
From set to post, the Screamboat team trusted OWC to keep their horror-comedy production running smoothly. Atlas media cards captured the action, while Envoy Pro FX and ThunderBlade drives enabled fast offloads. In post, the ThunderBay Flex 8 anchored their workflow with high-capacity, high-performance storage. Explore how OWC powered this ambitious indie project every step of the way. Read more here

Featured Filmmaking Book: Kubrick: An Odyssey
The definitive biography of the creator of 2001: A Space Odyssey, The Shining, and A Clockwork Orange, presenting the most in-depth portrait yet of the groundbreaking filmmaker. The enigmatic and elusive filmmaker Stanley Kubrick has not been treated to a full-length biography in over twenty years. Stanley Kubrick: An Odyssey fills that gap. This definitive book is based on access to the latest research, especially Kubrick's archive at the University of the Arts, London, as well as other private papers, plus new interviews with family members and those who worked with him. It offers comprehensive and in-depth coverage of Kubrick's personal, private, public, and working life. Stanley Kubrick: An Odyssey investigates not only the making of Kubrick's films, but also those he wanted (but failed) to make, like Burning Secret, Napoleon, Aryan Papers, and A.I. Read more here

ZEISS Introduces the Otus ML:
The ZEISS Otus ML lenses are crafted for photographers who live to tell stories. Inspired by the legendary ZEISS Otus family, the new lenses bring ZEISS' renowned optical excellence combined with precise mechanics to mirrorless system cameras. Thanks to the distinctive ZEISS Look of true color, outstanding sharpness and the iconic "3D-Pop" of micro-contrast, your story will come to life exactly like you envisioned. A wide f1.4 aperture provides outstanding depth of field, directing attention to your focus area and providing a soft bokeh that elegantly separates subjects from the background. The aspherical design effectively minimizes distortion and chromatic aberrations, coupled with a ZEISS T* coating that reduces reflections within the lens, minimizing lens flare and enhancing image contrast and color fidelity. Learn more here

A New Solution from Videoguys:
The SanDisk Extreme Portable SSD is built for adventure, fitting seamlessly into your mobile lifestyle while delivering blazing-fast NVMe performance with read speeds up to 1050MB/s and write speeds up to 1000MB/s. Designed for content creators and on-the-go professionals, this high-capacity drive is tested and compatible with iPhone, making it easy to free up space on your smartphone. Its rugged design offers up to three-meter drop protection, IP65 water and dust resistance, and a durable silicone shell for extra security. Backed by a 5-year limited warranty, the SanDisk Extreme Portable SSD is now available in an impressive new 8TB capacity at Videoguys.com. Check it out here

Featured Event: Cine Gear Expo LA | Universal Studios Lot
June 6-7, 2025
A revered film and television production mecca, Universal Studios Lot is known for their legendary stages, beautifully appointed theatres, and outdoor city streets, parks & squares — seen in countless film and television spectacles. "We are excited to welcome the Cine Gear community to this iconic destination," announces Cine Gear Expo Co-Founder/CEO Juliane Grosso. "The Universal Lot offers an abundance of everything we look for to create a valuable and unforgettable experience." A crossroads of filmmakers and cutting-edge technology, Cine Gear Expo is known globally as the best place in filmmaking to discover groundbreaking innovations, connect with top-tier creatives, and discover the latest gear from mainstay brands and next-gen innovators at hundreds of industry booths. Attendees can hone their skills at hands-on equipment demos, pick up tips at filmmaker panels, and enjoy educational sessions, screenings, and guild & association presentations — topped off by world-class mingling with friends & colleagues. Beyond the expo, other offerings include Cine Gear's Film Series Screenings and a Master Class featuring renowned filmmaker instructors. Register here

Podcast Rewind:
April 2025 - Ep. 74…

"The Making Of" is published by Michael Valinsky.
Advertise your products or services to 152K filmmakers, video pros, TV, broadcast and live event production pros reading this newsletter, email us at mvalinsky@me.com

Get full access to The Making Of at themakingof.substack.com/subscribe

The Making Of
Aputure's Mitch Gross on Lighting Technologies, Cinematography and Filmmaking

The Making Of

Play Episode Listen Later Apr 1, 2025 50:52


In this episode, we welcome Mitch Gross. Mitch is Global Director of Product Marketing at Aputure. In our conversation, he shares about his early days, career as a cinematographer in New York City, as well as his experiences working at top companies such as AbelCine, Panasonic, and Aputure. Mitch also offers tons of educational information about cameras, lights, and lenses — and other insights for filmmakers in the trenches.
"The Making Of" is presented by AJA:
Explore AJA's New Solutions for Next-Gen Production and Broadcast
Ahead of NAB 2025, AJA debuted innovative solutions for production and broadcast professionals, including the BRIDGE LIVE 3G-8 IP video bridge for remote workflows/streaming/backhaul, the DANTE-12GAM IP audio embedder/disembedder, and KUMO 6464-12G compact SDI router. Find out how your facility, pipeline, or project can benefit from the flexibility these new tools provide here.
Vimeo NAB Event:
April 7th | The Beverly Theater
A night of inspiring Vimeo Staff Picks, creative community, and drinks! Meet fellow filmmakers, NAB community, and say hi to the Vimeo team!
7pm Doors open
8-9pm Film screening
9-11pm Vimeo Party — beer, wine, + bites!
RSVP required. Free tickets here
Igelkott Studios: Redefining Driving Plates
Say goodbye to the limitations of array rig plates. Igelkott's precision-crafted single-lens driving plates deliver perfect parallax, seamless stitching, and true-to-life depth—no mismatched angles or post headaches. The choice of top filmmakers for flawless in-camera realism. Experience the future of driving plates at www.igelkottplates.com
Oscars Night Puts OWC Jellyfish in the Middle of the Action
OWC Jellyfish was front and center on Oscars night, supporting the behind-the-scenes editing and post workflows that brought the evening to life. From pre-show prep to real-time content delivery, discover how OWC's high-performance shared storage powered the Academy's digital team. See how professionals rely on Jellyfish when the pressure's on—and the world is watching. Read the full story »
A New Solution Available from Videoguys…
The SanDisk Extreme Portable SSD is built for adventure, fitting seamlessly into your mobile lifestyle while delivering blazing-fast NVMe performance with read speeds up to 1050MB/s and write speeds up to 1000MB/s. Designed for content creators and on-the-go professionals, this high-capacity drive is tested and compatible with iPhone, making it easy to free up space on your smartphone. Its rugged design offers up to three-meter drop protection, IP65 water and dust resistance, and a durable silicone shell for extra security. Backed by a 5-year limited warranty, the SanDisk Extreme Portable SSD is now available in an impressive new 8TB capacity at Videoguys.com. Check it out here
ZEISS Cinema To Present New Solutions at NAB 2025
ZEISS Cinema is proud to be presenting our Scenario camera tracking solution at 2025 NAB CineCentral in the North Hall. Join ZEISS on Monday, April 7th at 2:30pm in North Hall for a hands-on presentation of how this technology can save you time and cost of IVFX, and post-production workflow. For more info, visit here
Cartoni Celebrates 90th Anniversary with New E-Series Launch at NAB Show
Cartoni celebrates the company's 90th anniversary at NAB Las Vegas. Find them in the show's North Hall at booth #N2539. Cartoni will showcase their latest support systems, heads, pedestals, and Lifto PTZ elevation columns in a retrospective ranging from the company's earliest 1935 cinema tripod (complete with a 1936 Mitchell NC camera courtesy of the American Society of Cinematographers) to their recently announced E-Series of broadcast/cinema Encoded Heads. Visit here
Podcast Rewind:
March 2025 - Ep. 73…
"The Making Of" is published by Michael Valinsky.
To advertise your products or services to 150K filmmakers, TV, broadcast and live event production pros reading this newsletter, email us at mvalinsky@me.com Get full access to The Making Of at themakingof.substack.com/subscribe

2.5 Admins
2.5 Admins 237: Kafkaesque

2.5 Admins

Play Episode Listen Later Mar 6, 2025 32:00


HP was forcing people to wait on hold for 15 minutes to get support, the DOGE site was embarrassingly insecure, setting up encrypted offsite backups, and mixing SATA and NVMe in a server.
Plugs
Support us on patreon and get an ad-free RSS feed with early episodes sometimes
Why FreeBSD is the Right Choice […]

Late Night Linux All Episodes
2.5 Admins 237: Kafkaesque

Late Night Linux All Episodes

Play Episode Listen Later Mar 6, 2025 32:00


HP was forcing people to wait on hold for 15 minutes to get support, the DOGE site was embarrassingly insecure, setting up encrypted offsite backups, and mixing SATA and NVMe in a server.
Plugs
Support us on patreon and get an ad-free RSS feed with early episodes sometimes
Why FreeBSD is the Right Choice […]

PC Perspective Podcast
Podcast #810 - Radeon RX 9070 Pricing, RTX 5090 CableGate, RTX 5070 Delay, MSI SUPRIM LIQUID SOC, Stop infecting yourself + MORE!

PC Perspective Podcast

Play Episode Listen Later Feb 15, 2025 73:46


Josh proves that he's the host with the most - and guides us through a brisk show that ends just before the really good stuff. Probably. Also, his cat tried to host. We've got the "best" NVME, the most expensive 5090, melty parts, EVGA talk and there is no Subnautica 2 playtest - STOP CLICKING on things!
Timestamps:
00:00 Intro
02:57 (no) Food with Josh
03:54 Radeon RX 9070 XT listed early in Canada at 700 USD
08:59 Here we go again - reports of melting RTX 5090 power connectors
17:01 How far does overclocking the RTX 5080 get you?
19:51 PassMark records first ever YoY drop in CPU performance
23:43 Is the WD_Black SN7100 the new best SSD? (TPU says yes)
28:22 FLAC encodes are now multi-threaded in version 1.5
29:53 RTX 5070 reportedly delayed until early March
33:17 EVGA closes forums
35:14 Podcast sponsor Stash
36:39 (in)Security Corner
43:20 Gaming Quick Hits
53:28 MSI RTX 5090 SUPRIM LIQUID SOC reviewed
57:05 Picks of the Week
1:12:18 Outro
★ Support this podcast on Patreon ★

The Cloud Pod
288: You Might Be Able to Retrain Notebook LM Hosts to be Less Annoyed, But Not Your Cloud Pod Hosts

The Cloud Pod

Play Episode Listen Later Jan 22, 2025 56:25


Welcome to episode 288 of The Cloud Pod – where the forecast is always cloudy! Justin, Ryan, and Jonathan are your hosts as we make our way through this week's cloud and AI news, including a return to Vertex AI, Project Digits, Notebook LM, and some major improvements to AI image generation.
Titles we almost went with this week:
Digits… I'll show you 5 digits…
The only digit the AWS local zone in New York shows me is the middle one
Keep one eye open near Mercedes with Agentic AI
A big thanks to this week's sponsor:
We're sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You've come to the right place! Send us an email or hit us up on our slack channel for more info.
General News
01:59 Nvidia announces $3,000 personal AI supercomputer called Digits
If you don't want to hand over all your money to the cloud providers, you will be able to hand over $3,000 to Nvidia… for a computer that is probably going to be obsolete in

PC Perspective Podcast
Podcast #803 - Arrow Lake Update, Arc B580 Performance and Value, Pi 500 NVMe Workaround, 8GB GPU enough? + MORE!

PC Perspective Podcast

Play Episode Listen Later Dec 21, 2024 70:22


We present our ULTIMATE podcast for the year 2024. See you next year (as in a couple of weeks from now)!
And remember, when you want more hard drive capacity, you need to get perpendicular: https://archive.org/details/get-perpendicular
00:00 Intro
04:03 Food with Josh
06:45 Intel shares update about Arrow Lake S performance
10:00 Arc B580 performance and value
17:13 Is NVIDIA brave enough to release an 8GB RTX 5060?
21:32 A fix for NVIDIA App system slowdown
23:15 When DMCA takedowns go too far
27:58 Pi 500 NVMe update
30:43 It's HAMR time
34:08 Faxing comes to Windows 11 24H2!
34:58 Mentioning the demise of physical media again
38:20 (in)Security Corner
48:29 Gaming Quick Hits
54:01 Picks of the Week
1:09:09 Outro
★ Support this podcast on Patreon ★

Hackaday Podcast
Ep 231: Hacking NVMe into Raspberry Pi, Lighting LEDs with Microwaves, and How to Keep Your Fingers

Hackaday Podcast

Play Episode Listen Later Dec 20, 2024 61:29


Twas the week before Christmas when Elliot and Dan sat down to unwrap a pre-holiday bundle of hacks. We kicked things off in a seasonally appropriate way with a PCB Christmas card that harvests power from your microwave or WiFi router, plus has the potential to be a spy tool. We learned how to grow big, beautiful crystals quickly, just in case you need some baubles for the tree or a nice pair of earrings. Speaking of last-minute gifts, perhaps you could build a packable dipole antenna, a very durable PCB motor, or a ridiculously bright Fibonacci simple add-on for your latest conference badge. We also looked into taking a shortcut to homebrew semiconductors via scanning electron microscopes, solved the mystery of early CD caddies, and discussed the sad state of table saw safety and the lamentable loss of fingers, or fractions thereof.

PC Perspective Podcast
Podcast #799 - Intel Promises Arrow Lake Fix, Ryzen 9800X3D Stock Issues, Server 2025 Mess, NVME Cooling

PC Perspective Podcast

Play Episode Listen Later Nov 17, 2024 51:19


We are on the brink of history, as the next podcast will be number 800. It's been a long ride, with our first show way back in 2007... Don't worry, this isn't goodbye. This is a podcast. A podcast about computer stuff. (And burgers.) Be prepared for D-Link whipping, massive NVME cooling, and of course, special guest Lara Croft.
00:00 Intro
02:58 Food with Josh
04:55 Intel is sorry about the Arrow Lake launch
08:40 When is more 9800X3D stock coming?
12:14 AMD has a quarter of CPU market share
14:18 A new TIM that is 72 percent better than paste?
16:42 The enormous Dark Airflow I drive cooler atop the T-Force Z540 SSD
20:35 Windows Server 2025 pushed out as KB5044284 update
21:57 NVIDIA app exits beta, replaces control panel and GeForce Experience
23:52 Rumor: RTX 40 Series production nearing end to make way for RTX 50 Series
25:00 (in)Security Corner
32:06 Gaming Quick Hits
41:18 Picks of the Week
49:41 Outro
★ Support this podcast on Patreon ★

Rebel FM
Rebel FM Episode 643 - 11/15/2024

Rebel FM

Play Episode Listen Later Nov 16, 2024 115:20


This week we're talking about some Maniac Mansion-ass puzzle solving in Rise of the Golden Idol, Half-Life 2's big anniversary update, gaming handhelds, CES, NVMe drives, Dragon Age - the works.  This week's music:  Jerry Cantrell - Off the Rails