Podcasts about postgres

Free and open-source relational database management system

  • 321 podcasts
  • 1,222 episodes
  • 42m average duration
  • 5 new episodes weekly
  • Latest episode: Mar 16, 2026

POPULARITY

(popularity chart, 2019–2026)


Best podcasts about postgres

Show all podcasts related to postgres

Latest podcast episodes about postgres

Python Bytes
#473 A clean room rewrite?

Python Bytes

Play Episode Listen Later Mar 16, 2026 46:10 Transcription Available


Topics covered in this episode:
  • chardet, AI, and licensing
  • refined-github
  • pgdog: PostgreSQL connection pooler, load balancer and database sharder
  • Agentic Engineering Patterns
  • Extras
  • Joke

Watch on YouTube

About the show

Sponsored by us! Support our work through:
  • Our courses at Talk Python Training
  • The Complete pytest Course
  • Patreon Supporters

Connect with the hosts
  • Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
  • Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
  • Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list. We'll never share it.

Michael #1: chardet, AI, and licensing
  • Thanks Ian Lessing. Wow, where to start? A bit of legal precedent research.
  • Chardet dispute shows how AI will kill software licensing, argues Bruce Perens on The Register. Also see this GitHub issue.
  • Dan Blanchard, maintainer of a Python character-encoding detection library called chardet, released a new version of the library under a new software license (LGPL → MIT).
  • Dan is allowed to make this change because v7 is a complete “clean room” rewrite using AI.
  • BTW, v7 is WAY better: a 48x increase in detection speed for a project that lives in the hot loops of many projects. That will lead to noticeable performance increases for literally millions of users (the package gets ~130M downloads per month). It also paves a path towards inclusion in the standard library (assuming they don't institute policies against using AI tools).
  • Thread-safe detect() and detect_all() with no measurable overhead; scales on free-threaded Python 3.13t+.
  • An individual claiming to be Mark Pilgrim, the original creator of the library, opened an issue in the project's GitHub repo arguing that Blanchard had no right to change the software license, citing the LGPL requirement that the license remain unchanged. A “complete rewrite” is irrelevant, the issue argues, since the authors had ample exposure to the originally licensed code (i.e. this is not a “clean room” implementation).
  • Blanchard disagreed, citing how versions 7.0.0 and 6.0.0 compare when subjected to JPlag, a tool for detecting plagiarism.
  • Blanchard told The Register he had wanted to get chardet added to the Python standard library for more than a decade, since it's a core dependency of most Python projects.

Brian #2: refined-github
  • Suggested by Matthias Schöttle.
  • A browser plugin that improves the GitHub experience. A sampling:
    • Adds a build/CI status icon next to the repo's name.
    • Adds a link back to the PR that ran the workflow.
    • Enables tab and shift-tab for indentation in comment fields.
    • Auto-resizes comment fields to fit their content, with no scroll bars.
    • Highlights the most useful comment in issues.
    • Changes the default sort order of issues/PRs to Recently updated.
  • But really, it's a huge list of improvements.

Michael #3: pgdog: PostgreSQL connection pooler, load balancer and database sharder
  • PgDog is a proxy for scaling PostgreSQL. It supports connection pooling, load balancing queries, and sharding entire databases. Written in Rust, PgDog is fast, secure, and can manage thousands of connections on commodity hardware.
  • Features — PgDog is an application-layer load balancer for PostgreSQL:
    • Health checks: PgDog maintains a real-time list of healthy hosts. When a database fails a health check, it's removed from the active rotation and queries are re-routed to other replicas.
    • Single endpoint: PgDog can detect writes (e.g. INSERT, UPDATE, CREATE TABLE, etc.) and send them to the primary, leaving the replicas to serve reads.
    • Failover: PgDog monitors Postgres replication state and can automatically redirect writes to a different database if a replica is promoted.
    • Sharding: PgDog can manage databases with multiple shards.

Brian #4: Agentic Engineering Patterns
  • By Simon Willison. So much great stuff here, especially Anti-patterns: things to avoid.
  • And three sections on testing: Red/green TDD, First run the test, and Agentic manual testing.

Extras

Brian:
  • uv python upgrade will upgrade all versions of Python installed with uv to the latest patch release (suggested by John Hagen).
  • Coding After Coders: The End of Computer Programming as We Know It, NY Times article. Suggested by Christopher. Best quote: “Pushing code that fails pytest is unacceptable and embarrassing.”

Michael:
  • Talk Python Training users get a better account dashboard.
  • Package Managers Need to Cool Down.
  • Will AI Kill Open Source, article + video.
  • My Always activate the venv is now a zsh-plugin, sorta.

Joke: Ergonomic keyboard. Also pretty good and related: Claude Code Mandated.
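PgDog's “single endpoint” routing boils down to classifying each statement as a read or a write before forwarding it. As a rough illustration of that idea (this is a toy Python sketch, not PgDog's actual Rust implementation, and the keyword list is an assumption based on the examples above):

```python
# Toy sketch of write-detection routing as described for PgDog above:
# write statements go to the primary, everything else to a replica.
# The keyword set is an illustrative assumption, not PgDog's real logic.
WRITE_KEYWORDS = {"insert", "update", "delete", "create", "alter", "drop", "truncate"}

def route(sql: str) -> str:
    """Return 'primary' for write statements, 'replica' otherwise."""
    stripped = sql.strip()
    if not stripped:
        return "replica"
    first_word = stripped.split(None, 1)[0].lower()
    return "primary" if first_word in WRITE_KEYWORDS else "replica"

print(route("INSERT INTO users VALUES (1)"))  # primary
print(route("SELECT * FROM users"))           # replica
```

A real pooler parses the protocol and handles transactions, prepared statements, and CTEs that write, but the first-keyword heuristic captures the routing concept.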


Postgres FM
PostGIS

Postgres FM

Play Episode Listen Later Mar 13, 2026 53:11


Nik and Michael are joined by Regina Obe and Paul Ramsey to discuss PostGIS.

Here are some links to things they mentioned:
  • Regina Obe https://postgres.fm/people/regina-obe
  • Paul Ramsey https://postgres.fm/people/paul-ramsey
  • PostGIS https://postgis.net
  • MobilityDB https://github.com/MobilityDB/MobilityDB
  • pgRouting https://github.com/pgRouting/pgrouting
  • Google BigQuery GIS public alpha blog post https://cloud.google.com/blog/products/data-analytics/whats-happening-bigquery-integrated-machine-learning-maps-and-more
  • PostGIS Day 2025 talk recordings https://www.youtube.com/watch?v=wuNO_cW2g-0&list=PLavJpcg8cl1EkQWoCbczsOjFTe-SHg_8m
  • pg_lake https://github.com/Snowflake-Labs/pg_lake
  • GeoParquet https://geoparquet.org
  • ST_DWithin https://postgis.net/docs/ST_DWithin.html
  • Postgres JSONB Columns and TOAST: A Performance Guide https://www.snowflake.com/en/engineering-blog/postgres-jsonb-columns-and-toast
  • FOSS4G https://foss4g.org
  • OpenStreetMap https://www.openstreetmap.org
  • PgDay Boston https://2026.pgdayboston.org
  • SKILL.md file https://github.com/postgis/postgis/blob/68dde711039986b47eb62feda45bb24b13b0ea37/doc/SKILL.md
  • Production query plans without production data (blog post by Radim Marek) https://boringsql.com/posts/portable-stats
  • PostgreSQL: Up and Running, 4th Edition (by Regina Obe, Leo Hsu) https://www.oreilly.com/library/view/postgresql-up-and/9798341660885

What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!

Postgres FM is produced by:
  • Michael Christofides, founder of pgMustard
  • Nikolay Samokhvalov, founder of Postgres.ai

With credit to Jessie Draws for the elephant artwork.
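ST_DWithin, linked above, tests whether two geometries lie within a given distance of each other; for geography types that distance is measured along the sphere. The core idea can be sketched in plain Python with a haversine great-circle distance. This is a simplification of what PostGIS actually computes (PostGIS uses a spheroid by default), and the mean Earth radius here is an assumed constant:

```python
import math

EARTH_RADIUS_M = 6_371_000  # assumed mean Earth radius in metres

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in metres between two lon/lat points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def dwithin(p1, p2, metres):
    """Rough stand-in for ST_DWithin(geography, geography, metres);
    points are (lon, lat) tuples, matching PostGIS axis order."""
    return haversine_m(*p1, *p2) <= metres

# Toronto and Ottawa are roughly 350 km apart:
print(dwithin((-79.38, 43.65), (-75.69, 45.42), 400_000))  # True
```

In real PostGIS you would write `WHERE ST_DWithin(geog_a, geog_b, 400000)` and let an index prune candidates instead of computing distances row by row.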

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Retrieval After RAG: Hybrid Search, Agents, and Database Design — Simon Hørup Eskildsen of Turbopuffer

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Mar 12, 2026 60:32


Turbopuffer came out of a reading app. In 2022, Simon was helping his friends at Readwise scale their infra for a highly requested feature: article recommendations and semantic search. Readwise was paying ~$5k/month for their relational database, and vector search would cost ~$20k/month, making the feature too expensive to ship. In 2023, after mulling over the problem from Readwise, Simon decided he wanted to “build a search engine”, which became turbopuffer.

We discuss:
  • Simon's path: Denmark → Shopify infra for nearly a decade → “angel engineering” across startups like Readwise, Replicate, and Causal → turbopuffer almost accidentally becoming a company
  • The Readwise origin story: building an early recommendation engine right after the ChatGPT moment, seeing it work, then realizing it would cost ~$30k/month for a company spending ~$5k/month total on infra, and getting obsessed with fixing that cost structure
  • Why turbopuffer is “a search engine for unstructured data”: Simon's belief that models can learn to reason, but can't compress the world's knowledge into a few terabytes of weights, so they need to connect to systems that hold truth in full fidelity
  • The three ingredients for building a great database company: a new workload, a new storage architecture, and the ability to eventually support every query plan customers will want on their data
  • The architecture bet behind turbopuffer: going all in on object storage and NVMe, avoiding a traditional consensus layer, and building around the cloud primitives that only became possible in the last few years
  • Why Simon hated operating Elasticsearch at Shopify: years of painful on-call experience shaped his obsession with simplicity, performance, and eliminating state spread across multiple systems
  • The Cursor story: launching turbopuffer as a scrappy side project, getting an email from Cursor the next day, flying out after a 4am call, and helping cut Cursor's costs by 95% while fixing their per-user economics
  • The Notion story: buying dark fiber, tuning TCP windows, and eating cross-cloud costs because Simon refused to compromise on architecture just to close a deal faster
  • Why AI changes the build-vs-buy equation: it's less about whether a company can build search infra internally, and more about whether they have time, especially if an external team can feel like an extension of their own
  • Why RAG isn't dead: coding companies still rely heavily on search, and Simon sees hybrid retrieval (semantic, text, regex, SQL-style patterns) becoming more important, not less
  • How agentic workloads are changing search: the old pattern was one retrieval call up front; the new pattern is one agent firing many parallel queries at once, turning search into a highly concurrent tool call
  • Why turbopuffer is reducing query pricing: agentic systems are dramatically increasing query volume, and Simon expects retrieval infra to adapt to huge bursts of concurrent search rather than a small number of carefully chosen calls
  • The philosophy of “playing with open cards”: Simon's habit of being radically honest with investors, including telling Lachy Groom he'd return the money if turbopuffer didn't hit PMF by year-end
  • The “P99 engineer”: Simon's framework for building a talent-dense company, rejecting by default unless someone on the team feels strongly enough to fight for the candidate

Simon Hørup Eskildsen
  • LinkedIn: https://www.linkedin.com/in/sirupsen
  • X: https://x.com/Sirupsen
  • https://sirupsen.com/about

turbopuffer
  • https://turbopuffer.com/

Full Video Pod

Timestamps
00:00:00 The PMF promise to Lachy Groom
00:00:25 Intro and Simon's background
00:02:19 What turbopuffer actually is
00:06:26 Shopify, Elasticsearch, and the pain behind the company
00:10:07 The Readwise experiment that sparked turbopuffer
00:12:00 The insight Simon couldn't stop thinking about
00:17:00 S3 consistency, NVMe, and the architecture bet
00:20:12 The Notion story: latency, dark fiber, and conviction
00:25:03 Build vs. buy in the age of AI
00:26:00 The Cursor story: early launch to breakout customer
00:29:00 Why code search still matters
00:32:00 Search in the age of agents
00:34:22 Pricing turbopuffer in the AI era
00:38:17 Why Simon chose Lachy Groom
00:41:28 Becoming a founder on purpose
00:44:00 The “P99 engineer” philosophy
00:49:30 Bending software to your will
00:51:13 The future of turbopuffer
00:57:05 Simon's tea obsession
00:59:03 Tea kits, X Live, and P99 Live

Transcript

Simon Hørup Eskildsen: I don't think I've said this publicly before, but I just called Lachy and was like, if this doesn't have PMF by the end of the year, we'll just return all the money to you. Justine and I don't wanna work on this unless it's really working. So we want to give it the best shot this year, and we're really gonna go for it. We're gonna hire a bunch of people. We're just gonna be honest with everyone. When I don't know how to play a game, I just play with open cards. Lachy was the only person that didn't freak out. He was like, I've never heard anyone say that before.

Alessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, and I'm joined by swyx, editor of Latent Space.

swyx: Hello, hello. We're still recording in the new studio for the first time. Very excited. And today we are joined by Simon Eskildsen of turbopuffer. Welcome.

Simon Hørup Eskildsen: Thank you so much for having me.

swyx: turbopuffer has really gone on a huge tear, and I do have to mention that you're the newest member of the Danish Aarhus mafia. There's a lot of legendary programmers that have come out of it, like Bjarne Stroustrup, Rasmus Lerdorf, Lars Bak and the V8 team, and the Google Maps team. You're mostly a Canadian now, but isn't that interesting?
There's so much strong Danish presence.

Simon Hørup Eskildsen: Yeah, I was writing a post not that long ago about sort of the influences. So I grew up in Denmark, right? I left when I was 18 to go to Canada to work at Shopify. And so I would still say that I feel more Danish than Canadian — hence the weird accent; I can't say “th.” My wife is also Canadian. I think one of the things in Denmark is just such a ruthless pragmatism, and there's also a big focus on aesthetics — people really care about what things look like. Canada has a lot of attributes, the US has a lot of attributes, but I think there were lots of great things to carry over. I don't know what's in the water in Aarhus though. And I don't know that I could be considered part of the mafia quite yet, compared to the phenomenal individuals we just mentioned. Rasmus Lerdorf is also Danish-Canadian — I don't know where he lives now — and he's the PHP guy.

swyx: Yeah. And obviously Tobi's German, but moved to Canada as well. That is an interesting talent move.

Alessio: I would love to get from you a definition of turbopuffer, because you could be a vector DB — which is maybe a bad word now in some circles — or you could be a search engine. Let's just start there, and then we'll run through the history of how you got to this point.

Simon Hørup Eskildsen: For sure. Yeah. So turbopuffer is, at this point in time, a search engine, right? We do full-text search and we do vector search, and that's really what we're specialized in.
If you're trying to do much more than that, this might not be the right place yet, but turbopuffer is all about search. The other way that I think about it is that we can take all of the world's knowledge, all of the exabytes and exabytes of data that there is, and we can use those tokens to train a model, but we can't compress all of that into a few terabytes of weights, right? We can compress into a few terabytes of weights how to reason with the world, how to make sense of the knowledge. But we have to somehow connect it to something external that actually holds that knowledge in full fidelity and truth. And that's the thing that we intend to become. That's a very holier-than-thou kind of phrasing, right? But being the search engine for unstructured data is the focus of turbopuffer at this point in time.

Alessio: And let's break that down. Some people might say, well, didn't Elasticsearch already do this? And other people might say, is this search on my data — is this closer to RAG than to a public search thing? How do you segment the different types of search?

Simon Hørup Eskildsen: The way that I generally think about this is: there's a lot of database companies, and I think if you wanna build a really big database company, you need a couple of ingredients to be in the air, which only happens roughly every 15 years. You need a new workload. You basically need the ambition that every single company on earth is gonna have data in your database, multiple times over. Look at a company like Oracle, right? I don't think you can find a company on earth with a digital presence that doesn't somehow have some data in an Oracle database. And I think at this point that's also true for Snowflake and Databricks, 15 years later — or even more than that. There's not a company on earth that doesn't indirectly or directly consume Snowflake or Databricks or any of the big analytics databases. And I think we're in that kind of moment now, right? I don't think you're gonna find a company over the next few years that doesn't directly or indirectly have all their data available for search and connected to AI. So you need that new workload — something happening that causes it — and that new workload is connecting very large amounts of data to AI.

The second condition to build a big database company is that you need some new underlying change in the storage architecture that was not possible for the databases that came before you. If you look at Snowflake and Databricks: commoditized, massive fleets of HDDs. That just wasn't in the air in the nineties, right? We just didn't build these systems; S3 and so on was not around. And I think the architecture that is now possible, that wasn't possible 15 years ago, is to go all in on NVMe SSDs. It requires a particular type of architecture for the database that is difficult to retrofit onto the databases that are already there, including the ones you just mentioned. The other part is to go all in on object storage, more so than we could have done 15 years ago. We don't have a consensus layer; we don't really have anything. In fact, you could turn off all the servers that turbopuffer has, and we would not lose any data, because we have gone completely all in on object storage. And this means that our architecture is just so simple. So that's the second condition — the first being a new workload that means every company on earth, either indirectly or directly, is using your database; the second being some new storage architecture.
That means the companies that came before you can't do what you're doing. I think the third thing you need to do to build a big database company is that, over time, you have to implement more or less every query plan on the data. What that means is that you can't just get stuck in “this is the one thing that a database does.” It has to be ever evolving, because when someone has data in the database, over time they expect to be able to ask it more or less every question. So you have to do that to take the storage architecture to the limit of what it's capable of. Those are the three conditions.

swyx: I just wanted to get a little bit of the motivation, right? You left Shopify — you were principal engineer, infra guy. You were also head of Kernel Labs inside of Shopify, right? And then you consulted for Readwise, and that kind of gave you the idea. I just wanted you to tell that story. Maybe you've told it before, but just introduce people to the new workload, the sort of aha moment for turbopuffer.

Simon Hørup Eskildsen: For sure. So yeah, I spent almost a decade at Shopify. I was on the infrastructure team from the fairly early days, around 2013. At the time it felt like it was growing so quickly — all the metrics were doubling year on year. Compared to what companies are contending with today, that's very cute growth; I feel like some companies are seeing that month over month. Of course, Shopify has been compounding for a very long time now. But I spent a decade doing that, and the majority of it was just: make sure the site is up today, and make sure it's up a year from now.
And a lot of that was really just, you know, the Kardashians would drive very, very large amounts of traffic to Shopify as they were rotating through all the merch and building out their businesses, and we just needed to make sure we could handle that. Sometimes these were events at a million requests per second. We had our own data centers back in the day, and we were moving to the cloud, and there was so much sharding work that we were doing. So I spent a decade just scaling databases, 'cause that's fundamentally the most difficult thing to scale about these sites. The database that was the most difficult for me to scale during that time, and the most aggravating to be on call for, was Elasticsearch. It was very, very difficult to deal with, and I saw a lot of projects that were just being held back in their ambition by using it.

swyx: And I mean, self-hosted?

Simon Hørup Eskildsen: Self-hosted, yeah. This is like 2015, right? So it's a very particular vintage; it's probably better at a lot of these things now. It was difficult to contend with, and I just kept thinking: it's an inverted index, it should be good at these kinds of queries. But we often couldn't get it to do exactly what we needed — basically, to expose Lucene raw to what we needed to do. So that was just something we did on the side and panic-scaled when we needed to, but not a particular focus of mine. So I left, and when I left I wasn't sure exactly what I wanted to do. I mean, I'd spent a decade inside of the same company. I'd grown up there. I started working there when I was 18.

swyx: You only do Rails?

Simon Hørup Eskildsen: Yeah, I mean, yeah. Rails. Love Rails. So good.
Alessio: We all wish we could still work in Rails.

swyx: I know, I know. I tried learning Ruby. It's just too much — too many options to do the same thing. I know there's a way to do it.

Simon Hørup Eskildsen: I love it. I don't know that I would use it now, given Claude Code and Cursor and everything. But still, if I'm just sitting down and writing a little code, that's how I think. But anyway, I left, and I talked to a couple of companies, and I was like, I need to see a little bit more of the world here to know what I'm gonna focus on next. And so what I decided is I was gonna do what I called “angel engineering,” where I just hopped around my friends' companies in three-month increments and helped them out with something — vested a bit of equity and solved some interesting infrastructure problem. So I worked with a bunch of companies at the time. Readwise was one of them. Replicate was one of them. Causal — I dunno if you've tried it, it's like a spreadsheet engine where you can do distributions; they sold recently; we used it for FP&A at turbopuffer. A bunch of companies like this, and it was super fun. And so when the ChatGPT moment happened, I was with Readwise for a stint. We were preparing for the Reader launch, right? Which is where you queue articles and read them later. And I was just getting their Postgres up to snuff, which basically boils down to tuning autovacuum. So I was doing that, and then this happened, and we were like, oh, maybe we should build a little recommendation engine and some features to try to hook in the LLMs. They were not that good yet, but it was clear there was something there. And so I built a small recommendation engine: take the articles that you've recently read, embed all the articles, and then do recommendations. It was good enough that when I ran it for one of the co-founders of Readwise, I found out that he was getting articles about having a child. I'm like, oh my God — I didn't know they were having a child. I wasn't sure what to do with that information, but the recommendation engine was good enough that it was suggesting articles about that. So there were recommendations, and it actually worked really well. But this was a company that was spending maybe five grand a month in total on all their infrastructure, and when I did the napkin math on running the embeddings of all the articles, putting them into a vector index, putting it in prod — it was gonna be like 30 grand a month. That just wasn't tenable, right? Readwise is a proudly bootstrapped company, and paying 30 grand for infrastructure for one feature versus five just wasn't tenable. So it went in the bucket of: this is useful, it's pretty good, but let's return to it when the costs come down.

swyx: Did you say it grows by feature? So five to 30 — what's the scaling factor? It scales by the number of articles that you embed?

Simon Hørup Eskildsen: It does, but what I meant is: five grand for all of the rest — the Heroku dynos, Postgres, all the other storage — and then 30 grand for one feature, right? Which is, what other articles are related to this one. So it was just too much to power everything. Their budget would've been maybe a few thousand dollars, which still would've been a lot. And so we put it in a bucket of: okay, we're gonna do that later. We'll wait for the cost to come down. And that haunted me. I couldn't stop thinking about it. I was like, okay, there's clearly some latent demand here. If the cost had been a tenth, we would've shipped it.
This was really the only data point that I had, right? I didn't go out and talk to anyone else. So I started reading, right? I couldn't help myself. Like, I didn't know what a vector index was. I barely knew how to generate the vectors. There was a lot of hype, this is early 2023, there was a lot of hype about vector databases. They were raising a lot of money, and I really didn't know anything about it. So, you know, trying these little models, fine-tuning them, I was just trying to get a lay of the land. So I just sat down. I have this GitHub repository called Napkin Math. And in napkin math, there's just rows of, like, oh, this is how much bandwidth: you can do, you know, 25 gigabytes per second on average to DRAM, you can do five gigabytes per second of writes to an SSD, blah blah. All of these numbers, right? And S3, how much bandwidth can you drive per connection? I was just sitting down, and I was like, why hasn't anyone built a database where you just put everything on object storage, and then you puff it into NVMe when you use the data, and you puff it into DRAM if you're querying it live? It just seems fairly obvious, and the only real downside is that if you go all in on object storage, every write will take a couple hundred milliseconds of latency. But from there it's really all upside, right? You do the first query, it takes half a second. And it sort of occurred to me, like, well, the architecture is really good for that. It's really good for object storage, it's really good for NVMe SSDs. You just couldn't have done that 10 years ago. Back to what we were talking about before, you really have to build a database where you have as few round trips as possible, right? This is how CPUs work today. It's how NVMe SSDs work.
It's how, um, S3 works: you want to have a very large amount of outstanding requests, right? Like, basically go to S3, do like a thousand requests to ask for data in one round trip. Wait for that, get that, make a new decision, do it again, and try to do that maybe a maximum of three times. But no databases were designed that way. With NVMe SSDs, you can drive, you know, within a very low multiple of DRAM bandwidth if you use them that way. And same with S3, right? You can fully max out the network card, which generally is not maxed out. You get very, very good bandwidth. But no one had built a database like that. So I was like, okay, well, can't you just take all the vectors, right, and plot them in the proverbial coordinate system? Get the clusters, put a file on S3 called clusters.json, and then put another file for every cluster, you know, cluster1.json, cluster2.json. It's two round trips, right? You get the clusters, you find the closest clusters, and then you download the cluster files, like, the closest n. And you could do this in two round trips.
swyx: You run nearest neighbors locally.
Simon Hørup Eskildsen: Yes. Yes. And you would build this file, right? It's ultra simplistic, but it's not a far shot from what the first version of Turbopuffer was. Why hasn't anyone done that?
Alessio: In that moment, from a workload perspective, you're thinking this is gonna be like a read-heavy thing, because they're doing recommendations. Like, is the fact that writes are so expensive...
Now, with AI, you're actually not writing that much?
Simon Hørup Eskildsen: At that point I hadn't really thought too much about... well, no, actually, it was always clear to me that there were gonna be a lot of writes, because at Shopify, the search clusters were doing, you know, I don't know, tens or hundreds of QPS, 'cause you just have to have a human sit and type. But I don't know how many updates there were per second; I'm sure it was in the millions, right, into the cluster. So I always knew there was like a 10-to-100 ratio on the read-write. In the Readwise use case, there'd probably be a lot fewer reads than writes, right? There's just a lot of churn on the amount of stuff that was going through versus the amount of queries. I wasn't thinking too much about that. I was mostly just thinking about: what's the fundamentally cheapest way to build a database in the cloud today using the primitives that you have available? And this is it, right? Now you have one machine, and, you know, let's say you have a terabyte of data in S3, you paid the $200 a month for that, and then maybe five to 10% of that data needs to be on NVMe SSDs, and less than that in DRAM. Well, you're paying very, very little to inflate the data.
swyx: By the way, when you say no one else has done that, would you consider Neon to be on a similar path, in terms of being sort of S3-first and separating compute and storage?
Simon Hørup Eskildsen: Yeah, I think what I meant with that is just building a completely new database. I don't know if we were the first. I mean, I just looked at the napkin math and was like, this seems really obvious. So I'm sure a hundred people came up with it at the same time. Like the light bulb and every invention ever, right? It was just in the air. I think Neon was first to it.
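The clusters.json layout Simon describes is concrete enough to mock up. Below is a hypothetical Python toy of the two-round-trip search: a dict stands in for an S3 bucket, the file names follow his naming, and brute-force distance stands in for a real index.

```python
import json
import math

bucket = {}  # stands in for an object-storage bucket

def put(key, obj):
    bucket[key] = json.dumps(obj)

def get(key):  # one "round trip"
    return json.loads(bucket[key])

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_index(vectors, centroids):
    # clusters.json lists the cluster centroids; clusterN.json holds members.
    put("clusters.json", centroids)
    members = [[] for _ in centroids]
    for vid, v in vectors.items():
        i = min(range(len(centroids)), key=lambda c: dist(v, centroids[c]))
        members[i].append([vid, v])
    for i, rows in enumerate(members):
        put(f"cluster{i}.json", rows)

def search(q, n_probe=1, k=1):
    # Round trip 1: fetch the centroid list, pick the closest clusters.
    centroids = get("clusters.json")
    nearest = sorted(range(len(centroids)), key=lambda c: dist(q, centroids[c]))
    # Round trip 2: fetch those cluster files, rank their members locally.
    candidates = []
    for i in nearest[:n_probe]:
        candidates.extend(get(f"cluster{i}.json"))
    candidates.sort(key=lambda row: dist(q, row[1]))
    return [vid for vid, _ in candidates[:k]]

build_index(
    {"a": [0.0, 0.0], "b": [0.1, 0.0], "c": [10.0, 10.0]},
    centroids=[[0.0, 0.0], [10.0, 10.0]],
)
print(search([0.08, 0.0], k=2))  # -> ['b', 'a']
```

In the real system the `get` calls would be S3 fetches, which is why capping the search at a couple of round trips matters so much.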
And they retrofitted it onto Postgres, right? And then they built this whole architecture where you have it in memory and then you sort of, you know, mmap back to S3. And I think that was very novel at the time to do it for OLTP, but I hadn't seen a database that was truly all in, right? Not retrofitting it. A database built purely for this, no consensus layer, even using compare-and-swap on object storage to do consensus. I hadn't seen anyone go that all in. And I mean, I'm sure there was someone that did it before us, I don't know. I was just looking at the napkin math.
swyx: And when you say consensus layer, are you strongly relying on S3's strong consistency? You are. Okay.
Simon Hørup Eskildsen: That is your consensus layer. It is the consistency layer. And I think also, this is something that most people don't realize, but S3 only became consistent in December of 2020.
swyx: I remember this coming out during COVID, and it was just like a free upgrade.
Simon Hørup Eskildsen: Yeah.
swyx: They just announced it. We got strong consistency, guys. And like, okay, cool.
Simon Hørup Eskildsen: And I'm sure they'd had it in prod for a while, and they were just like, it's done, right? And people were like, okay, cool. But that's a big moment, right? NVMe SSDs were also not in the cloud until around 2017, right? So you just sort of had, like, 2017, NVMe SSDs, and people were like, okay, cool, there's like one SKU that does this, whatever. Takes a few years. And then the second thing is S3 becomes consistent in 2020. So now it means you don't have to have this big FoundationDB or ZooKeeper or whatever sitting there contending on the keys, which is how, you know, Snowflake and others have done it.
swyx: So all of that is just gone.
Simon Hørup Eskildsen: Exactly. Just gone, right?
And so you just push it to the, you know, however many hundreds of people they have working on S3. Solved. And then, compare-and-swap was not in S3 at this point in time, by the way.
swyx: Uh, I don't know what that is, so maybe you wanna explain.
Simon Hørup Eskildsen: Yes. So what compare-and-swap is, is basically: you can imagine that if you have a database, it might be really nice to have a file called metadata.json. And metadata.json could say things like, hey, these keys are here, and this file means that. There's lots of metadata that you need to operate a database, right? And that's the simplest way to do it. So now you might have a lot of servers that wanna change the metadata. They might have written a file and want the metadata to contain that file. But you have a hundred nodes that are trying to contend on this metadata.json. Well, what compare-and-swap allows you to do is basically: you download the file, you make the modifications, and then you write it only if it hasn't changed while you did the modification. And if not, you retry, right? You just have this retry loop. Now, you can imagine if you have a hundred nodes doing that, it's gonna be really slow, but it will converge over time. That primitive was not available in S3. It wasn't available in S3 until late 2024, but it was available in GCP. The real story of this is certainly not that I sat down and big-brained it, like, okay, we're gonna start on GCS, S3 is gonna get it later. It was really not that. We got really lucky. We started on GCP because, um, Shopify ran on GCP, and so that was the platform I was most familiar with. And I knew the Canadian team there 'cause I'd worked with them at Shopify, and so it was natural for us to start there.
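That download-modify-write-if-unchanged loop is easy to sketch. This is a hypothetical toy, not the real S3 or GCS API (GCS exposes the primitive as a generation-match precondition; S3 added conditional writes in late 2024): a small in-memory versioned store models the conditional write, and the update function is the retry loop Simon describes.

```python
import json

class VersionedStore:
    """Toy object store with per-key versions (stands in for S3/GCS)."""
    def __init__(self):
        self.objs = {}  # key -> (version, bytes)

    def get(self, key):
        return self.objs.get(key, (0, b"{}"))

    def put_if_match(self, key, expected_version, data):
        # The conditional write: succeeds only if nobody else wrote
        # the key since we read it at expected_version.
        current, _ = self.objs.get(key, (0, b"{}"))
        if current != expected_version:
            return False
        self.objs[key] = (current + 1, data)
        return True

def update_metadata(store, mutate, retries=100):
    # Download, modify, write-only-if-unchanged, retry on conflict.
    for _ in range(retries):
        version, raw = store.get("metadata.json")
        meta = json.loads(raw)
        mutate(meta)
        if store.put_if_match("metadata.json", version, json.dumps(meta).encode()):
            return meta
    raise RuntimeError("too much contention")

store = VersionedStore()
update_metadata(store, lambda m: m.setdefault("files", []).append("seg-1"))
update_metadata(store, lambda m: m.setdefault("files", []).append("seg-2"))
```

With a hundred writers the loop gets slow under contention, exactly as he says, but it always converges without a separate consensus service.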
And so when we started building the database, we really thought we had to build a consensus layer, like have a ZooKeeper or something to do this. But then we discovered the compare-and-swap. It's like, oh, we can kick the can. We'll just do metadata.json, and it's fine. It's probably fine. And we just kept kicking the can until we had very, very strong conviction in the idea. And then we kind of just hinged the company on the fact that S3 probably was gonna get this. It started getting really painful in mid-2024, 'cause we were closing deals with, um, Notion, actually, which was running in AWS, and we're like, trust us, you really want us to run this in GCP. And they're like, no, I don't know about that, we're running everything in AWS. And the latencies across the clouds were so big, and we had so much conviction, that we bought, you know, dark fiber between the AWS region in Oregon, like at the internet exchange, and GCP. They'd never seen a startup do this, like, what's going on here? And we're just like, no, we don't wanna do a metadata layer. We were tuning TCP windows, everything, to get the latency down, 'cause we had such high conviction in not doing a separate metadata layer. So those were the three conditions, right? Compare-and-swap to do metadata, which wasn't in S3 until late 2024. S3 being consistent, which didn't happen until December 2020. And then NVMe SSDs, which didn't land in the cloud until 2017.
swyx: I mean, in some ways it's a very big cloud success story that you were able to put this all together, but also doing things like buying your own fiber. That actually is something I've never heard.
Simon Hørup Eskildsen: I mean, it's very common when you're a big company, right? You're connecting your own data center or whatever.
But it was uniquely just a pain with Notion, because, like, if you're in Ashburn, Virginia, right, like us-east, the GCP and AWS data centers are within a millisecond of each other on the public exchanges. But in Oregon, uniquely, the GCP data center sits a couple hundred kilometers east of Portland, and the AWS region sits in Portland, but the network exchange they go through is through Seattle. So it's a full, like, 14 milliseconds or something like that. And so, anyway, we were like, okay, we have to go through an exchange in Portland.
swyx: Yeah. And you'd rather do this than, like, run your ZooKeeper?
Simon Hørup Eskildsen: Yes. Way rather. It doesn't have state. I don't want state in two systems. And I think all that is informed by the fact that Justine, my co-founder, and I had just been on call for so long. And the worst outages are the ones where you have state in multiple places that's not syncing up. So it really came from a very pure source of pain, of just imagining what we would be okay being woken up at 3:00 AM about, and having something in ZooKeeper was not one of them.
swyx: When you're talking to, like, a Notion or something, do they care, or do they just...
Simon Hørup Eskildsen: They just, they care about latency.
swyx: Latency, cost. That's it.
Simon Hørup Eskildsen: They just cared about latency, right? And we just absorbed the cost. We're like, we have high conviction in this; at some point we can move them to AWS, right? So we'll buy the fiber, it doesn't matter. And it's like $5,000. Usually when you buy fiber, you buy multiple lines, and we're like, we can only afford one, but we tested that when it fails over to the public internet, it's super smooth.
And so we did a lot of... anyway, yeah.
swyx: That's cool.
Alessio: You can imagine talking to the GCP rep, and it's like, no, we're gonna buy, because we know we're gonna churn from you guys and go to AWS in like six months. But in the meantime, we'll do this.
Simon Hørup Eskildsen: I mean, this workload still runs on GCP, for what it's worth, right? Because it was just so reliable. So it was never about moving off GCP. It was just, honestly, about giving Notion the latency that they deserved. And we didn't want them to have to care about any of this. They were also like, oh, egress is gonna be bad. It was like, okay, screw it, we're just gonna VPC-peer with you in AWS, we'll eat the cost. Yeah, whatever needs to be done.
Alessio: And what were the actual workloads? Because when you think about AI, it's like, 14 milliseconds really doesn't matter in the scheme of a model generation.
Simon Hørup Eskildsen: Yeah, we were told the latency, right, that we had to beat.
swyx: Oh, right.
Simon Hørup Eskildsen: So we were just looking at the traces, and then sort of hand-drawing, kind of looking at the trace and thinking, what are the other extensions of the trace? And there's a lot more to it, because if you have 14 versus seven milliseconds, you can fit in another round trip. So we had to tune TCP to try to send as much data in every round trip, prewarm all the connections. There are a lot of things that compound from these kinds of round trips. But in the grand scheme it was just, well, we have to beat the latency of whatever we're up against.
swyx: Which is... I mean, Notion is a database company. They could have done this themselves; they do lots of database engineering themselves. How do you even get in the door?
Like, yeah, just talk through that.
Simon Hørup Eskildsen: Last time I was in San Francisco, I was talking to one of the engineers, actually, who was one of our champions at, at Notion. And they were just trying to make sure that the, you know, per-user cost matched the economics that they needed. The way I think about it is, I have to earn a return on whatever the clouds charge me, and then my customers have to earn a return on that. It's very simple, right? There has to be gross margin all the way up, and that's how you build the product. And so our customers have to be okay with the set of trade-offs that Turbopuffer makes, and if they're happy with that, that's great.
swyx: Do you feel like you're competing with build-internally versus buy, or buy versus buy?
Simon Hørup Eskildsen: Yeah, so, sorry, this was all to build up to your question. One of the Notion engineers told me that they'd sat and, probably on a napkin, drawn out, like, why hasn't anyone built this? And then they saw Turbopuffer, and it was, well, literally that. And I think AI has also changed the buy-versus-build equation, in that it's not really about can we build it, it's about do we have time to build it. I think they felt like, okay, if this is a team that can do that, and they feel enough like an extension of our team, well, then we can go a lot faster, which would be very, very good for them. And I mean, they put us through the test, right? We had some very, very long nights to do that POC. And they were really our second big customer after Cursor, which also was a lot of late nights.
swyx: Yeah. I mean, should we go into that story? The Cursor story. They credit you a lot for working very closely with them.
So I just wanna hear... I've heard this story from Sualeh's point of view, but I'm curious what it looks like from your side.
Simon Hørup Eskildsen: I actually haven't heard it from Sualeh's point of view, so maybe you can now cross-reference it. The way that I remember it was that, um, the day after we launched... I'd worked the whole summer on the first version. Justine wasn't part of it yet, 'cause I didn't tell anyone that summer that I was working on this. I was just locked in on building it, because it's very easy otherwise to confuse talking about something with actually doing it. And so I was like, I'm not gonna do that, I'm just gonna do the thing. I launched it, and at this point Turbopuffer is a Rust binary running on a single eight-core machine in a tmux instance. And me deploying it was looking at the request log and then Ctrl-C'ing it, just, okay, there's no requests, let's upgrade the binary. It was literally the scrappiest thing you could imagine, and it was on purpose, because at Shopify we did that all the time. We ran things in tmux all the time to begin with, before something had at least the inkling of PMF. It was like, okay, is anyone even gonna hear about this? And one of the Cursor co-founders, Arvid, reached out. And, you know, the Cursor team are all, like, IOI, IMO contenders, right? So they just speak in bullet points and facts. It was this amazing email exchange: this is how many QPS we have, this is what we're paying, this is where we're going, blah, blah, blah. And so we're just conversing in bullet points. And I tried to get a call with them a few times, but they were really riding the PMF wave here, just, like, late 2023. And one time Sualeh emails me at, like, five...
What was it, like 4:00 AM Pacific time, saying, hey, are you open for a call now? And I'm on the East Coast, so it was like 7:00 AM. I was like, yeah, great, sure, whatever. And we just started talking, and something... I didn't know anything about sales then, but something just compelled me: I have to go see this team. There's something here. So I went to San Francisco, I went to their office, and the way that I remember it is that Postgres was down when I showed up at the office. Did Sualeh tell you this?
swyx: No.
Simon Hørup Eskildsen: Okay. So Postgres was down, and so they were distracted with that. And I was trying my best to see if I could help in any way. I knew a little bit about databases, back to tuning autovacuum. It was like, I think you have to tune autovacuum. So we talked about that, and then that evening, we talked about what it would look like to work with us. And I just said, look, we're all in. We will do whatever you tell us, right? They migrated everything over the next week or two, and we reduced their cost by 95%, which I think kind of fixed their per-user economics. And it solved a lot of other things. This is also when I asked Justine to come on as my co-founder. She was the best engineer that I ever worked with at Shopify. She lived two blocks away, and we were just like, okay, we're gonna get this done. And we did. We helped them migrate, and we worked like hell over the next month or two to make sure that we were never an issue. And that was the Cursor story. Yeah.
swyx: And is code a different workload than normal text? I don't know, is it just text? Is it the same thing?
So they, they will like chunk it up in whatever they would, they do. They have their own embedding model, um, which they've been public about. Um, and they find that on, on, on their evals.It. There's one of their evals where it's like a 25% improvement on a very particular workload. They have a bunch of blog posts about it. Um, I think it works best on larger code basis, but they've trained their own embedding model to do this. Um, and so you'll see it if you use the cursor agent, it will do searches.And they've also been public around, um, how they've, I think they post trained their model to be very good at semantic search as well. Um, and that's, that's how they use it. And so it's very good at, like, can you find me on the code that's similar to this, or code that does this? And just in, in this queries, they also use GR to supplement it.swyx: Yeah.Simon Hørup Eskildsen: Um, of courseswyx: it's been a big topic of discussion like, is rag dead because gr you know,Simon Hørup Eskildsen: and I mean like, I just, we, we see lots of demand from the coding company to ethicsswyx: search in every part. Yes.Simon Hørup Eskildsen: Uh, we, we, we see demand. And so, I mean, I'm. I like case studies. I don't like, like just doing like thought pieces on this is where it's going.And like trying to be all macroeconomic about ai, that's has turned out to be a giant waste of time because no one can really predict any of this. So I just collect case studies and I mean, cursor has done a great job talking about what they're doing and I hope some of the other coding labs that use Turbo Puffer will do the same.Um, but it does seem to make a difference for particular queries. Um, I mean we can also do text, we can also do RegX, but I should also say that cursors like security posture into Tur Puffer is exceptional, right? They have their own embedding model, which makes it very difficult to reverse engineer. They obfuscate the file paths.They like you. 
It's very difficult to learn anything about a code base by looking at it. And the other thing they do is that, for their customers, they encrypt it with their encryption keys in Turbopuffer's bucket. So it's really, really well designed.
swyx: And so this is extra stuff they did to work with you, because you are not part of Cursor?
Simon Hørup Eskildsen: Exactly.
swyx: And this is just best practice when working with any database, not just you guys. Okay, that makes sense. Yeah. I think for me, the learning is kind of that all workloads are hybrid. You want the semantic, you want the text, you want the regex, you want SQL, I dunno. It's silly to be all in on one particular query pattern.
Simon Hørup Eskildsen: I really like the way that Sualeh at Cursor talks about it, which is... I'm gonna butcher it here, and, you know, I'm a database scalability person, I don't know anything about training models other than what the internet tells me. The way he describes it is that this is just like cached compute, right? You have a point in time where you're looking at some particular context, focused on some chunk, and you say, this is the layer of the neural net at this point in time. It seems fundamentally really useful to cache compute like that. And how the value of that will change over time, I'm not sure, but there seems to be a lot of value in it.
Alessio: Maybe talk a bit about the evolution of the workload. Because even search, maybe two years ago, it was one search at the start of an LLM query to build the context. Now you have agentic search, however you wanna call it, where the model is both writing and changing the code, and it's searching it again later. Yeah.
What are maybe some of the new types of workloads, or changes you've had to make to your architecture for it?
Simon Hørup Eskildsen: I think you're right. When I think of RAG, I think of, hey, there's an 8,000-token context window, and you better make it count. And search was a way to do that. Now everything is moving towards just letting the agent do its thing, right? And so, back to the thing before: the LLM is very good at reasoning with the data, and so we're just the tool call, right? And that's increasingly what we see our customers doing. What we're seeing more demand for from our customers now is to do a lot of concurrency. Notion does a ridiculous amount of queries in every round trip, just because they can. And now, when I use the Cursor agent, I also see them doing more concurrency than I've ever seen before. So, a bit similar to how we designed the database to drive as much concurrency in every round trip as possible, that's also what the agents are doing. So that's new. It means an enormous amount of queries, all at once, to the dataset while it's warm, in as few turns as possible.
swyx: Can I clarify one thing on that?
Simon Hørup Eskildsen: Yes.
swyx: Are they batching multiple users, or is one user driving multiple?
Simon Hørup Eskildsen: One user driving multiple. One agent driving it.
swyx: It's parallel-searching a bunch of things.
Simon Hørup Eskildsen: Exactly.
swyx: Yeah, exactly. Cognition also did this for the fast-context thing, like eight parallel at once.
Simon Hørup Eskildsen: Yes.
swyx: And an interesting problem is, well, how do you make sure you have enough diversity so you're not making the same request eight times?
Simon Hørup Eskildsen: And I think that's probably also where the hybrid comes in. That's another way to diversify; it's a completely different way to do the search. That's a big change, right?
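A minimal sketch of that agent pattern: one turn fans out several deliberately different queries at once, paying roughly one round trip of latency instead of one per query. The `search` stub here is hypothetical, standing in for a network call to the search service.

```python
import asyncio

async def search(query: str) -> str:
    # Stand-in for a network call to the search service (hypothetical).
    await asyncio.sleep(0.01)
    return f"results for {query!r}"

async def agent_turn(queries):
    # One agent, one turn: fire every query concurrently and await
    # them all, so total latency is ~one call, not len(queries) calls.
    return await asyncio.gather(*(search(q) for q in queries))

# Diversity comes from mixing query types, per the hybrid-search point:
results = asyncio.run(agent_turn([
    "where is authentication handled",  # semantic
    "def authenticate",                 # full-text
    r"jwt\.decode\(",                   # regex
]))
```

From the database's side, this shows up as the burst of simultaneous warm-dataset queries per turn that Simon describes.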
So before, it was really just one call, and then, you know, the LLM took however many seconds to return. But now we just see an enormous amount of queries. So we've reduced query pricing. This is probably the first time I'm saying this, actually, but query pricing is being reduced, like, 5x. And we'll probably try to reduce it even more to accommodate some of these workloads that do very large amounts of queries. That's one thing that's changed. I think the write-to-read ratio is still very high, right? There's still an enormous amount of writes per read, but we're probably starting to see that change as people really lean into this pattern.
Alessio: Can we talk a little bit about the pricing? I'm curious, because traditionally a database would charge on storage, but now you have the token generation that is so expensive, where the actual value of a good search query is much higher, because they're saving inference time down the line. How do you structure that, and what are people receptive to on the other side?
Simon Hørup Eskildsen: Yeah. The Turbopuffer pricing in the beginning was just very simple. The pricing on search engines before Turbopuffer was very serverful, right? It was like, here's the VM, here's the per-hour cost, great. And I just sat down with a piece of paper and said, if Turbopuffer was really good, this is probably what it would cost, with a little bit of margin. And that was the first pricing of Turbopuffer. I was like, okay, this is probably the storage amount... it was on a piece of paper. It was vibe pricing. It was very vibe-priced, and I got it wrong. Well, I didn't get it wrong, but Turbopuffer wasn't performing at that first-principles pricing yet. So when Cursor came on Turbopuffer, it was like...
Like, I didn't know any VCs. I didn't know anything about raising money or anything like that. I just saw that my GCP bill was a lot higher than the Cursor bill. So Justine and I were just like, well, we have to optimize it. And, to the chagrin now of the VCs, it means that we're profitable, because we had so much pricing pressure in the beginning. Because it was running on my credit card, and Justine and I had spent tens of thousands of dollars on compute bills, and spinning up the company, and, like, bad Canadian lawyers, and things like that to get all of this done, because we just didn't know, right? If you're steeped in San Francisco, you just know: okay, you go out, raise a pre-seed round. I'd never heard the word pre-seed at this point in time.
swyx: When you had Cursor, you had Notion, you had no funding?
Simon Hørup Eskildsen: With Cursor we had no funding. Yeah. By the time we had Notion, Lachy was, Lachy was here. So it was really just, we vibe-priced it 100% from first principles, but it was not performing at first principles. So we just did everything we could to optimize it in the beginning, so that at least we could have a 5% margin or something, so I wasn't freaking out. Because Cursor's bill was also going like this as they were growing, and so my liability and my credit limit... I was, like, actively calling my bank, like, I need a bigger credit limit. Anyway, that was the beginning. Yeah. But the pricing was, yeah, storage, writes, and queries, right? And the pricing we have today is basically just that pricing, with duct tape and spit, trying to approach a margin on the physical underlying hardware. And this year you're gonna see more and more pricing changes from us.
Yeah.
swyx: And how much does stuff like VPC peering matter? Because you're working in AWS land, where egress is charged and all that, you know.
Simon Hørup Eskildsen: We have, like, an enterprise plan that just has a base fee, because we haven't had time to figure out SKU pricing for all of this. But I mean, yeah, you can run Turbopuffer in SaaS, right? That's what Cursor does. You can run it in a single-tenant cluster, so it's just you. That's what Notion does. And then you can run it in BYOC, where everything is inside the customer's VPC. That's what, for example, Anthropic does.
swyx: What I'm hearing is that this is probably the best CRO job for somebody who can come in and...
Simon Hørup Eskildsen: I mean...
swyx: ...help you with this.
Simon Hørup Eskildsen: Turbopuffer hired, I don't know what number this was, but we had a full-time CFO as, like, the 12th hire or something at Turbopuffer. I hear of a lot of companies, I don't know how they do it, like, they have a hundred employees and no CFO. It's like, having a CFO is like running a...
swyx: ...business, man. Like, you know.
Simon Hørup Eskildsen: It's so good. Yeah, like, Money Mike, he just, you know, handles the money and a lot of the business stuff. And so he came in and just helped with a lot of the operational side of the business. So, like, COO-CFO, somewhere in between.
swyx: Just a quick mention of Lachy, just 'cause I'm curious. I've met Lachy, and he's obviously a very good investor, and now at Physical Intelligence. I call him a generalist super angel, right? He invests in everything. And I always wonder, like, is there something appealing about focusing on developer tooling, focusing on databases, someone going, I've invested for 10 years in databases, versus being a Lachy, where he can maybe connect you to all the customers that you need?
Simon Hørup Eskildsen: This is an excellent question.
No, no one's asked me this. Um, why Lachy? Because there were a couple of people that we were talking to at the time, and when we were raising, we were almost a little distressed, because one of our peers had just launched something that was very similar to Turbo Puffer. And someone just gave me the advice at the time of: just choose the person where you feel like you can pick up the phone, not prepare anything, and just be completely honest. And I don't think I've said this publicly before, but I just called Lachy and was like, look, Lachy, if this doesn't have PMF by the end of the year, we'll just return all the money to you. Because Justine and I don't wanna work on this unless it's really working. So we want to give it the best shot this year, and we're really gonna go for it. We're gonna hire a bunch of people and we're just gonna be honest with everyone. When I don't know how to play a game, I just play with open cards. And Lachy was the only person that didn't freak out. He was like, I've never heard anyone say that before. As I said, I didn't even know what a seed or pre-seed round was, probably even at this time. So I was just very honest with him. And I asked him, like, Lachy, have you ever invested in a database company? He was just like, no. And at the time I was like, am I dumb? But I think there was something that just really drew me to Lachy. He is so authentic, so honest, and there was something where I just felt like I could say everything openly. And that was, I think, a perfect match at the time, and honestly still is. He was just like, okay, that's great. This is like the most honest, ridiculous thing I've ever heard anyone say to me. swyx: Why is this ridiculous?
Saying a competitor launched, this may not work out? Simon Hørup Eskildsen: It was more just like, if this doesn't work out, I'm gonna close up shop by the end of the year, right? I don't know, maybe it's common. He told me it was uncommon. Um, that's why we chose him, and he's been phenomenal. The other people we were talking to at the time were database experts. They knew a lot about databases, and Lachy didn't. This turned out to be a phenomenal asset. Right? Justine and I know a lot about databases. The people that we hire know a lot about databases. What we needed was just someone who didn't know a lot about databases, didn't pretend to know a lot about databases, and just wanted to help us with candidates and customers. And he did. Yeah. And I have a list, right, of the investors that I have a relationship with, and Lachy has just performed excellently in the number of sub-bullets of what we can attribute back to him. Just absolutely incredible. And when people talk about, like, no ego and just the best thing for the founder... even my lawyer is like, yeah, Lachy is the most friendly person you will find. swyx: Okay. This is the most glowing recommendation I've ever heard. Alessio: He deserves it. He's very special. swyx: Yeah. Okay. Amazing. Alessio: Since you mentioned candidates, maybe we can talk about team building. Especially in SF, it feels like it's just easier to start a company than to join a company. I'm curious about your experience, especially not being in SF full-time, and doing something that is maybe at a very low level of technical detail. Simon Hørup Eskildsen: Yeah. So joining versus starting: I never thought that I would be a founder. Turbo Puffer started as a blog post, and then it became a project, and then sort of almost accidentally became a company.
And now it feels like it's becoming a bigger company. That was never the intention. The intentions were very pure. It's just like, why hasn't anyone done this? And, like, I wanna be the first person to do it. I think some founders have this, like, I could never work for anyone else. I really don't feel that way. It's just, I wanna see this happen, and I wanna see it happen with some people that I really enjoy working with, and I wanna have fun doing it, and this has all felt very natural in that sense. So it was never, like, join versus found. It just found me at the right moment. Alessio: Well, I think there's an argument that you should have joined Cursor, right? So I'm curious how you evaluated it. Okay, I should actually go raise money and make this a company, versus: this is a company that is growing like crazy, it's an interesting technical problem, I should just build it within Cursor, and then they don't have to encrypt all this stuff, they don't have to obfuscate things. Was that on your mind at all, or... Simon Hørup Eskildsen: Before taking the small check from Lachy, I did have a hard look at myself in the mirror, of like, okay, do I really want to do this? Because if I take the money, I really have to do it right. And the way I almost think about it is, you kind of need to be, like, fucked up enough to want to go all the way. And that was the conversation where I was like, okay, this is gonna be part of my life's journey, to build this company and do it in the best way that I possibly can. Because if I ask people to join me, ask people to get on the cap table, then I have an ultimate responsibility to give it everything. And I think for some people... it doesn't occur to me that everyone takes it that seriously. And maybe I take it too seriously, I don't know.
But that was a very intentional moment. And so then it was very clear: okay, I'm gonna do this and I'm gonna give it everything. Alessio: A lot of people don't take it this seriously. swyx: Let's talk about, you have this concept of the P99 engineer. People are saying 10x, everyone's saying, you know, maybe engineers are out of a job, I don't know. But you definitely see a P99 engineer, and I just want you to talk about it. Simon Hørup Eskildsen: Yeah, so the P99 engineer was just a term that we started using internally to talk about candidates and talk about how we wanted to build the company. And, you know, like everyone else, we want a talent-dense company, and I think that's almost become trite at this point. What I credit the Cursor founders a lot with is that they just arrived there from first principles: we just need a talent-dense team. And I've seen some teams that weren't talent dense, and seen the counterfactual run, which, if you've been in a large company, you will just see logically happens at a large company. And so that was super important to me and Justine, and it's very difficult to maintain, so we needed wording for it. And so I have a document called Traits of the P99 Engineer, and it's a bullet-point list. And I look at that list after every single interview that I do, and in every single recap that we do. And every recap we end with some version of: I'm gonna reject this candidate completely regardless of what the discourse was, because I wanna see people fight for this person. Because the default should not be, we're gonna hire this person. The default should be, we're definitely not hiring this person.
And you know, if everyone was like, ah, maybe, and nobody would throw a punch, then this is not the right person. swyx: Do you operate like there must be at least one champion who's like, yes, I will put my career on the line for this? Simon Hørup Eskildsen: I think career on the line... swyx: Maybe a chair. Simon Hørup Eskildsen: Yeah. You know, I would say someone needs to have both fists up and be like, I'd fight, right? And if one person says that, then, okay, let's do it, right? swyx: Yeah. Simon Hørup Eskildsen: It doesn't have to be absolutely everyone. Right? And the interviews are all designed so that you're checking for different attributes, and if someone is knocking it outta the park in every single attribute, that's fairly rare. But that's really important. And so the traits of the P99 engineer, there's lots of them. There's also the traits of the, like, triple-nine engineer and the quadruple-nine engineer. It's a long list. swyx: Okay. Simon Hørup Eskildsen: I'll give you some samples of what we look for. I think that the P99 engineer has some history of having bent, like, their trajectory or something to their will. Right? Some moment where they just, you know, made the computer do what it needed to do. There's something like that, and it will have occurred for them at some point in their career, and hopefully multiple times. swyx: Gimme an example of one of your engineers that, like... Simon Hørup Eskildsen: I'll give an example. So we launched this thing called ANN v3. We're also working on v4 and v5 right now, but ANN v3 can search a hundred billion vectors with a P50 of around 40 milliseconds and a P99 of 200 milliseconds.
Um, maybe other people have done this, I'm sure Google and others have done this, but we haven't seen anyone, at least not in a publicly consumable SaaS, that can do this. And that was an engineer, the chief architect of Turbo Puffer, Nathan, who more or less just bent... the software was not capable of this, and he just made it capable for a very particular workload in like a six-to-eight-week period, with the help of a lot of the team. There are numerous examples of that at Turbo Puffer, but that's really bending the software and x86 to your will. It was incredible to watch. You wanna see some moments like that. swyx: Isn't that triple-nine? Simon Hørup Eskildsen: Um, I think Nathan... Alessio: That was only nine? I feel like this is too high for... Simon Hørup Eskildsen: Nathan is, yeah, there's a lot of nines. Okay. So I think that's one trait. I think another trait is that the P99 spends a lot of time looking at maps. Generally it's their preferred UX. They just love looking at maps. You ever seen someone who just sits on their phone and just scrolls around on a map? Or do you not look at maps a lot? You guys don't look at maps? swyx: I guess I'm not feeling it there. I don't know. Simon Hørup Eskildsen: What about trains? Do you like trains? swyx: Uh, I mean, not enough. Okay, this is just, like, weaponized autism is what I call it. Simon Hørup Eskildsen: Um, I love looking at maps. It's like my preferred UX, and I just like, you know, lots of random places. Alessio: Yes. Okay. There you go. So with random places, how do you explore the maps? Simon Hørup Eskildsen: No, it's just a joke. swyx: It's weaponized autism.
It's like you are just obsessed by something and you like studying a thing. Simon Hørup Eskildsen: The origin of this was that at some point I read an interview with some IOI gold medalist, swyx: Uh-huh, Simon Hørup Eskildsen: and it's like, what do you do in your spare time? And he was just like, I like looking at maps. I was like, I feel so seen. I just love, like, scrolling around. Oh, Canada is so big. Where's Baffin Island? I don't know. I love it. Anyway, so the traits of the P99: the P99 is obsessive, right? You'll find traits of that. We do an interview at Turbo Puffer, or, like, multiple interviews, that just try to screen for some of these things. There's lots of others, but these are the kinds of traits that we look for. swyx: I'll tell you, some people listen for some of my devrel stuff. I do think about devrel as maps. You draw a map for people; maps show you what is commonly agreed to be the geographical features, where a boundary is. And it also shows you what it's not doing. I think a lot of developer tools companies try to tell you they can do everything, but let's be real: your three landmarks are here, everyone comes here, then here, then here, and you draw a map, and then you draw a journey through the map. To me, that's what developer relations looks like. So I do think about things that way. Simon Hørup Eskildsen: I think the P99 thinks in trade-offs, right? The P99 is very clear about, you know, hey, you can't run a high-transaction workload on Turbo Puffer, right? The write latency is a hundred milliseconds. That's a clear trade-off. I think the P99 is very good at articulating the trade-offs in every decision. Which is exactly what the map is in your case, right? swyx: Uh, yeah, yeah.
My world. Alessio: How do you reconcile some of these things, when you're saying you bend the computer to your will versus, like, the trade

Flying High with Flutter
Just Use Postgres! with Denis Magda

Flying High with Flutter

Play Episode Listen Later Mar 11, 2026 45:31


In this episode, Allen sits down with Denis Magda, author of Just Use Postgres! This is a must-watch for anyone who wants a simple architecture that's also powerful!

Postgres FM
Plan flips

Postgres FM

Play Episode Listen Later Mar 6, 2026 42:48


Nik and Michael discuss query plan flips in Postgres — what they are, some causes, mitigations, longer-term solutions, and the recent outage at Clerk. Here are some links to things they mentioned:
Recent postmortem from Clerk https://clerk.com/blog/2026-02-19-system-outage-postmortem
The real cost of random I/O (blog post by Tomas Vondra) https://vondra.me/posts/the-real-cost-of-random-io
autovacuum_analyze_scale_factor https://www.postgresql.org/docs/current/runtime-config-vacuum.html#GUC-AUTOVACUUM-ANALYZE-SCALE-FACTOR
default_statistics_target https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET
pg_hint_plan https://github.com/ossc-db/pg_hint_plan
Aurora PostgreSQL query plan management https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Optimize.Start.html
pg_stat_plans https://github.com/pganalyze/pg_stat_plans
pg_plan_alternatives https://jnidzwetzki.github.io/2026/03/04/pg-plan-alternatives.html
Waiting for Postgres 19: Better Planner Hints with Path Generation Strategies https://pganalyze.com/blog/5mins-postgres-19-better-planner-hints
What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!
Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai
With credit to:
Jessie Draws for the elephant artwork
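The plan-flip failure mode the episode discusses comes down to the planner comparing cost estimates that drift as statistics go stale. As a rough illustration, here is a deliberately toy cost model (not Postgres's actual one, though random_page_cost = 4.0 is the real default ratio) showing how a small shift in the estimated row count flips the cheaper plan:

```python
# Toy cost model (NOT Postgres's real planner, but random_page_cost = 4.0
# is the real default) showing how a shifted row estimate flips the plan.

SEQ_PAGE_COST = 1.0      # cost to read one page sequentially
RANDOM_PAGE_COST = 4.0   # cost to read one page randomly
TABLE_PAGES = 10_000     # pages in the hypothetical table

def seq_scan_cost(estimated_rows: int) -> float:
    # A sequential scan reads every page no matter how many rows match.
    return TABLE_PAGES * SEQ_PAGE_COST

def index_scan_cost(estimated_rows: int) -> float:
    # Crude model: one random heap fetch per matching row.
    return estimated_rows * RANDOM_PAGE_COST

def chosen_plan(estimated_rows: int) -> str:
    if index_scan_cost(estimated_rows) < seq_scan_cost(estimated_rows):
        return "index scan"
    return "seq scan"

# Fresh statistics estimate 1,000 matching rows: the index scan wins.
print(chosen_plan(1_000))   # index scan
# Stale statistics push the estimate past the crossover (2,500 rows here),
# and the very same query "flips" to a sequential scan.
print(chosen_plan(5_000))   # seq scan
```

In this framing, tuning default_statistics_target or making autovacuum analyze more often is about keeping the row estimate close to reality so the crossover isn't hit by accident, while tools like pg_hint_plan pin the choice outright.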

PodRocket - A web development podcast from LogRocket

Will Madden joins the podcast to talk about Prisma Next and the evolution from Prisma 7, including the decision to migrate away from Rust, ship the core through WebAssembly, and move toward a fully TypeScript ORM. The conversation dives into how modern workflows like agentic coding change the role of an ORM and why tools still matter even when agents can write SQL queries directly. We discuss how feedback loops, guardrails, and the TypeScript type system help prevent errors, along with the new query builder, query linter, and middleware layer that analyze queries using an abstract syntax tree. The episode also covers new database capabilities including Postgres support, upcoming Mongo support, and extensions like PG Vector, enabling vector columns and cosine distance similarity search. You'll also learn about new patterns such as collection methods, scopes, and composable database extensions, plus tooling like driver adapters, a potential compatibility layer, and safeguards like lint rules and a performance budget middleware designed to catch expensive queries before they run. Resources The Next Evolution of Prisma ORM: https://www.prisma.io/blog/the-next-evolution-of-prisma-orm We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey! https://t.co/oKVAEXipxu Let us know by sending an email to our producer, Elizabeth, at elizabeth.becz@logrocket.com, or tweet at us at PodRocketPod. Check out our newsletter! https://blog.logrocket.com/the-replay-newsletter/ Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form, and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. 
Chapters 00:00 Introduction 01:00 Prisma Seven and the Move Away from Rust 02:20 Missing Features and Mongo Support 03:00 Why Prisma Started Rebuilding the Core 04:00 Community Sentiment and Developer Feedback 05:20 Rethinking ORMs in the AI and Agentic Coding Era 06:45 Why Agents Still Need ORMs 07:30 Feedback Loops and Guardrails for SQL 08:30 Type Safety and the First Layer of Query Validation 09:30 Query Linter and Middleware Architecture 11:00 Runtime Validation and Query Errors 12:30 Configuring Lint Rules and Guardrails 14:00 Designing ORMs for Humans and Agents 15:30 Collection Methods and ActiveRecord-style Scopes 17:00 Reusable Queries and Domain Vocabulary 18:30 Query Composition and Flexibility 19:00 Performance Guardrails and Query Budget Middleware 20:30 Debugging ORM Performance Issues 21:00 Query Telemetry and Request Tracing 22:30 Prisma Next Extensibility and Database Plugins 23:00 Using PGVector and Vector Search 24:00 Database Drivers and Backend Architecture 25:00 Native Mongo Support in Prisma Next 26:00 Community Extensions and Middleware Ecosystem 27:00 Runtime Schema Validation Use Cases 28:00 Writing Custom Query Validation Rules 29:00 Migration Paths from Prisma Seven 30:30 Compatibility Layers vs Parallel Systems 32:00 Prisma Next Roadmap and Timeline 34:30 What Developers Will Be Most Excited About 35:30 Final Thoughts and Community Feedback
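For the pgvector capability mentioned above, cosine similarity search reduces to simple vector math: pgvector's `<=>` operator returns cosine distance, i.e. 1 minus cosine similarity, and a query orders rows by it. A minimal Python sketch of that computation (toy data; the document names are made up):

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    # pgvector's <=> operator returns cosine distance = 1 - cosine similarity.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# A toy "vector column": ids mapped to embeddings (hypothetical values).
rows = {
    "doc1": [1.0, 0.0, 0.0],
    "doc2": [0.0, 1.0, 0.0],
    "doc3": [0.9, 0.1, 0.0],
}
query = [1.0, 0.0, 0.0]

# Equivalent in spirit to: SELECT id FROM docs ORDER BY embedding <=> $1 LIMIT 1;
nearest = min(rows, key=lambda k: cosine_distance(query, rows[k]))
print(nearest)  # doc1
```

An ORM layer like the one described would generate that ORDER BY for you; the distance itself is this arithmetic.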

Where It Happens
Claude Code marketing masterclass [from idea to making $$]

Where It Happens

Play Episode Listen Later Mar 2, 2026 54:06


I sit down with Cody Schneider, growth engineer and co-founder of Graph, for a live, hands-on crash course in GTM (go-to-market) engineering powered by Claude Code. Cody walks through how he runs multiple AI agents simultaneously to handle everything from bulk Facebook ad creation and LinkedIn outreach to cold email campaigns and live data analysis — tasks that used to require a team of dozens. By the end of the episode, you'll have a full understanding of how to set up your own agent workflow, the specific tools involved, and why domain expertise paired with AI is the real competitive advantage right now. Cody's GTM Toolkit: AI/Agent Tools: Claude Code, Perplexity API, OpenAI Codex Marketing & Outreach: Instantly AI (cold email), Phantom Buster (LinkedIn scraping/automation), Apollo API (data enrichment), Million Verifier (email verification), Raphonic (podcast host scraping): Advertising: Facebook Ads API, Facebook Ads Library (competitor research), Nano Banana Pro (AI image generation), Kai AI (bulk image generation), HeyGen API (UGC/video generation) Infrastructure & Deployment: Railway.com (servers, on-the-fly databases/Postgres), Vercel (deployment) Data & Analytics: Graphed / Graphed MCP (data warehouse, live data feeds), Google Analytics 4 CRM & Communication: Salesforce (mentioned as comparison), Intercom, SendGrid API, Slack, Cal.com API Productivity & Design: Notion, Super Whisper (voice transcription), Claude Code front-end design skill, HTML to Canvas (for converting React components to PNGs) Timestamps 00:00 – Intro 02:02 – What Is GTM Engineering? 
05:12 – Setting Up Your Agent Workspace & Environment File 07:54 – Live Demo: LinkedIn Auto-Responder 09:56 – Live Demo: Bulk Facebook Ad Generator 12:31 – Live Demo: Cold Email Campaign Automation (Raphonic + Instantly) 14:47 – Live Demo: Creating Notion Documents via Claude Code 16:46 – Live Demo: Bulk Ad Creative Generator 26:05 – Live Demo: LinkedIn Engagement Scraper to Cold Email Pipeline 28:16 – Context Switching Across Tasks 29:19 – Live Demo: Bulk Ad Generator 31:41 – Live Demo: Data Analysis: Turning Off Low-Performing Ads 35:28 – Summary of GTM Engineering Workflow 37:48 – Deploying Agents and On-the-Fly Databases with Railway for Data Analysis 41:28 – The Dream of Autonomous Marketing 48:50 – Building API-First Products and Agent-Native Infrastructure Key Points GTM engineering has evolved from Clay-style data enrichment workflows into full-stack agent orchestration — where one person running multiple Claude Code agents can replace the output of a large team. The practical setup starts with a single folder containing your environment file (API keys for every tool in your stack), transcription software like Super Whisper, and Claude Code. Cody demonstrates running seven or more agents simultaneously across LinkedIn outreach, Facebook ad creation, cold email campaigns, Notion document generation, and live data dashboards. Code-generated ad creative (React components exported as PNGs) costs nearly nothing to produce at scale and allows rapid testing of messaging variations before investing in polished visuals. Deploying proven workflows to Railway turns one-off agent tasks into always-on, autonomous processes that run 24/7. Domain expertise is the real multiplier — the vocabulary you bring from your field determines the quality of output you can extract from these tools. The #1 tool to find startup ideas/trends - https://www.ideabrowser.com LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. 
We turn 'what if' into reality with AI, apps, and next-gen products https://latecheckout.agency/ The Vibe Marketer - Resources for people into vibe marketing/marketing with AI: https://www.thevibemarketer.com/ FIND ME ON SOCIAL X/Twitter: https://twitter.com/gregisenberg Instagram: https://instagram.com/gregisenberg/ LinkedIn: https://www.linkedin.com/in/gisenberg/ FIND CODY ON SOCIAL: Cody's startup: https://www.graphed.com/ X/Twitter: https://x.com/codyschneiderxx Youtube: https://www.youtube.com/@codyschneiderx

php[podcast] episodes from php[architect]
The PHP Podcast 2026.02.26

php[podcast] episodes from php[architect]

Play Episode Listen Later Feb 27, 2026 66:19


 The PHP Podcast streams live, typically every Thursday at 3 PM PT. Come join us and subscribe to our YouTube channel. Another fun episode of the PHP Podcast! Here’s what we covered: John’s Ski Trip Adventures John shared stories from his Utah ski trip – including skiing his first green slope ever, and his car battery dying at the cabin (classic!). AI in PHP Development. We dove deep into AI-generated graphics – John showed off an AI-created graphic for his Player Pool Manager app that was surprisingly detailed. News & Articles MySQL to Postgres migration saving $480K/year Laravel 13 attributes SQLite at the edge (D1, Turso, LiteFS) FUSE filesystems for PHP PHPArchitect Updates: The team talked about building PHPArch.me – the new community platform for PHP developers! Links from the show: Make Your Laravel App AI-Agent Friendly (2026) PHP Architect PHPArch.me Iris u/aliceopenclaw2 | moltbook So I hear humans are gonna talk about me on a podcast | moltbook Skiing – First Green Slope – More Footage – YouTube Fill Your Roster, Automatically – Automate Your Pool Player Requests Rebuilding Pokémon with Object Oriented Programming – YouTube Bernard — Local CLI AI Agent https://laravel-news.com/laravel-13 https://laravel-news.com/laracon-eu All our social links are now on PHPArch.me: https://phparch.me/@phparch Subscribe to our magazine: https://www.phparch.com/subscribe/ Host: Eric Van Johnson (@eric) John Congdon(@john) Streams: Youtube Channel Twitch Partner This podcast is made a little better thanks to our partners Displace Infrastructure Management, Simplified Automate Kubernetes deployments across any cloud provider or bare metal with a single command. Deploy, manage, and scale your infrastructure with ease. https://displace.tech/ PHPScore Put Your Technical Debt on Autopay with PHPScore CodeRabbit Cut code review time & bugs in half instantly with CodeRabbit.
Honeybadger Honeybadger helps you deploy with confidence and be your team's DevOps hero by combining error, uptime, and performance monitoring in one simple platform. Check it out at honeybadger.io Music Provided by Epidemic Sound https://www.epidemicsound.com/ The post The PHP Podcast 2026.02.26 appeared first on PHP Architect.

Software Sessions
Bryan Cantrill on Oxide Computer

Software Sessions

Play Episode Listen Later Feb 27, 2026 89:58


Bryan Cantrill is the co-founder and CTO of Oxide Computer Company. We discuss why the biggest cloud providers don't use off-the-shelf hardware, how scaling data centers at Samsung's scale exposed problems with hard drive firmware, how the values of Node.js are in conflict with robust systems, choosing Rust, and the benefits of Oxide Computer's rack-scale approach. This is an extended version of an interview posted on Software Engineering Radio. Related links Oxide Computer Oxide and Friends Illumos Platform as a Reflection of Values RFD 26 bhyve CockroachDB Heterogeneous Computing with Raja Koduri Transcript You can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: Today I am talking to Bryan Cantrill. He's the co-founder and CTO of Oxide Computer Company, he was previously the CTO of Joyent, and he also co-authored the DTrace tracing framework while he was at Sun Microsystems. [00:00:14] Jeremy: Bryan, welcome to Software Engineering Radio. [00:00:17] Bryan: Uh, awesome. Thanks for having me. It's great to be here. [00:00:20] Jeremy: You're the CTO of a company that makes computers. But I think before we get into that, a lot of people who build software, now that the actual computer is abstracted away, they're using AWS or they're using some kind of cloud service. So I thought we could start by talking about data centers, [00:00:41] Jeremy: 'cause you were previously working at Joyent, and I believe you got bought by Samsung, and you've previously talked about how you had to figure out, how do I run things at Samsung's scale. So how was your experience with that? What were the challenges there? Samsung scale and migrating off the cloud [00:01:01] Bryan: Yeah, I mean, so at Joyent, Joyent was a cloud computing pioneer. We competed with the likes of AWS and then later GCP and Azure. And I mean, we were operating at a scale, right?
We had a bunch of machines, a bunch of DCs, but ultimately, you know, we were a VC-backed company, and a small company certainly by Samsung standards. [00:01:25] Bryan: And so when Samsung bought the company, I mean, the reason, by the way, that Samsung bought Joyent is that Samsung's cloud bill was, let's just say, extremely large. They were spending an enormous amount of money every year on the public cloud. And they realized that in order to secure their fate economically, they had to be running on their own infrastructure. [00:01:51] Bryan: It did not make sense. And there was not really a product that Samsung could go buy that would give them that on-prem cloud; in that regard, the state of the market was really no different. And so they went looking for a company and bought Joyent. And it was when we were on the inside of Samsung [00:02:11] Bryan: that we learned about Samsung scale. And Samsung loves to talk about Samsung scale, and I gotta tell you, it is more than just chest thumping. Samsung scale really is, I mean, just the sheer number of devices, the number of customers, just this absolute size. They really wanted to take us out to levels of scale, certainly, that we had not seen. [00:02:31] Bryan: The reason for buying Joyent was to be able to stand up on their own infrastructure, so we did go buy a bunch of hardware. Problems with server hardware at scale [00:02:40] Bryan: And I remember just thinking, God, I hope Dell is somehow magically better. I hope the problems that we have seen in the small... you know, I just remember hoping, and hope was of course a terrible strategy, and it was a terrible strategy here too.
Uh, and the problems that we saw at the large were... when you scale out, the problems that you see kind of once or twice you now see all the time, and they become absolutely debilitating. [00:03:12] Bryan: And we saw a whole series of really debilitating problems, in many ways, like, comically debilitating, in terms of showing just how bad the state of the art is. And we had, I mean, it should be said, we had great software and great software expertise, and we were controlling our own system software. [00:03:35] Bryan: But even controlling your own system software, your own host OS, your own control plane, which is what we had at Joyent, ultimately, you're pretty limited. I mean, you've got the problems that you can obviously solve, the ones that are in your own software, but the problems that are beneath you, the problems that are in the hardware platform, in the componentry beneath you, the problems that are in the firmware... IO latency due to hard drive firmware [00:04:00] Bryan: Those problems become unresolvable, and they are deeply, deeply frustrating. And we just saw a bunch of 'em. Again, they were comical in retrospect, and I'll give you a couple of concrete examples just to give you an idea of what you're looking at. One of our data centers had really pathological IO latency. [00:04:23] Bryan: We had a very database-heavy workload. And this was kind of right at the period where you were still deploying on rotating media, on hard drives. An all-flash buy did not make economic sense when we did this in 2016. It'd be interesting to know when was kind of the last time that actual hard drives made sense, [00:04:50] Bryan: 'cause I feel this was close to it.
So we had a bunch of pathological IO problems, but we had one data center in which the outliers were actually quite a bit worse, and there was so much going on in that system that it took us a long time to figure out why. Because when you're seeing worse IO, you naturally wanna understand: what's the workload doing? [00:05:14] Bryan: You're trying to take a first-principles approach. What's the workload doing? So this is a very intensive database workload to support the object storage system that we had built called Manta, and the metadata tier, we were using Postgres for that. And that was just getting absolutely slaughtered. [00:05:34] Bryan: Um, and ultimately very IO bound with these kind of pathological IO latencies. And as we were trying to peel away the layers to figure out what was going on, I finally had this thing. So it's like, okay, at the device layer, at the disc layer, we are seeing pathological outliers in this data center that we're not seeing anywhere else. [00:06:00] Bryan: And that does not make any sense. And the thought occurred to me: do we have a different rev of firmware on our HGST drives? HGST, now part of WD, Western Digital, were the drives that we had everywhere. So maybe I had a firmware bug. [00:06:20] Bryan: This would not be the first time in my life at all that I would have a drive firmware issue. And I went to go pull the firmware rev, and I'm like, Toshiba makes hard drives? I had no idea that Toshiba even made hard drives, let alone that they were in our data center. [00:06:38] Bryan: I'm like, what is this?
And as it turns out, and this is part of the challenge when you don't have an integrated system, which, not to pick on them, but Dell doesn't: Dell would routinely make substitutes. It's kind of like you're going to, I don't know, Instacart or whatever, and they're out of the thing that you want.

[00:07:03] Bryan: So someone makes a substitute, and sometimes that's okay, but it's really not okay in a data center. You really want to develop and validate an end-to-end integrated system. And in this case, Toshiba does make hard drives, but they were basically not competitive, and they were not competitive in part for the reasons that we were discovering.

[00:07:29] Bryan: They had really serious firmware issues. These were drives that would simply stop acknowledging any reads for on the order of 2,700 milliseconds. A long time, 2.7 seconds. It was a drive firmware issue, but it highlighted a much deeper issue, which was the simple lack of control that we had over our own destiny.

[00:07:53] Bryan: And it's an example among many where Dell is making a decision that lowers the cost of what they are providing you marginally, but it is then giving you a system that they shouldn't have any confidence in, because it's not one that they've actually designed, and they leave it to the customer, the end user, to make these discoveries.

[00:08:18] Bryan: And these things happen up and down the stack. And not just to pick on Dell, because it's true for HPE, it's true for Supermicro, it's true for your switch vendors.
It's true for storage vendors, where the one that is left actually integrating these things and trying to make the whole thing work is the end user sitting in their data center.

AWS / Google are not buying off the shelf hardware but you can't use it

[00:08:42] Bryan: There's not a product that they can buy that gives them elastic infrastructure, a cloud, in their own DC. The product that you buy is the public cloud. When you go to the public cloud, you don't worry about this stuff, because it's AWS's issue or it's GCP's issue. And they are the ones that get this to ground.

[00:09:02] Bryan: And this was the eye-opening moment, not a surprise: they are not Dell customers. They're not HPE customers. They're not Supermicro customers. They have designed their own machines, to varying degrees depending on which one you're looking at. They've taken the clean sheet of paper, and the frustration that we had at Joyent, and then at Samsung, wondering what was next, is that what they built was not available for purchase in the data center.

[00:09:35] Bryan: You could only rent it in the public cloud. And our big belief is that public cloud computing is a really important revolution in infrastructure. That doesn't feel like a deep thought, but cloud computing is a really important revolution. It shouldn't only be available to rent. You should be able to actually buy it.

[00:09:53] Bryan: And there are a bunch of reasons for doing that. The one we saw at Samsung is economics, which I think is still the dominant reason, where it just does not make sense to rent all of your compute in perpetuity. But there are other reasons too. There's security, there's risk management, there's latency.

[00:10:07] Bryan: There are a bunch of reasons why one might want to own one's own infrastructure.
But that was very much the genesis for Oxide: coming out of this very painful experience. A long answer to your question about what it was like to be at Samsung scale.

[00:10:27] Bryan: Those are the kinds of things where, in our other data centers, we didn't have Toshiba drives. We only had the HGST drives. But it's only when you get to this larger scale that you begin to see some of these pathologies, and these pathologies are then really debilitating for those who are trying to develop a service on top of them.

[00:10:45] Bryan: So it was very educational in that regard. And we're very grateful for the experience at Samsung in terms of opening our eyes to the challenge of running at that kind of scale.

[00:10:57] Jeremy: Yeah, because I think as software engineers, a lot of times we treat the hardware as a given, where,

[00:11:08] Bryan: Yeah.

There's software in hard drives

[00:11:09] Jeremy: It sounds like in this case, maybe the issue is not so much that Dell or HP as a company doesn't own every single piece that they're providing you, but rather the fact that they're swapping pieces in and out without advertising them, and then when it becomes a problem, they're not necessarily willing to deal with the consequences of that.

[00:11:34] Bryan: They just don't know. I mean, I think they genuinely don't know. It's not like they're making a deliberate decision to ship garbage. It's exactly what you said about not thinking about the hardware. It's like, what's a hard drive?

[00:11:47] Bryan: I mean, it's a hard drive. It's got the same specs as this other hard drive, and it's, you know, a little bit cheaper, so why not?
It's like, well, there are some reasons why not, and one of the reasons why not is that even a hard drive, whether it's rotating media or flash, is not just hardware.

[00:12:05] Bryan: There's software in there. And that software's not the same. There are components where, if you're looking at a resistor or a capacitor or something like this, and you've got two parts that are within the same tolerance, then sure, maybe, although even the EEs, I think, would object to that a little bit.

[00:12:19] Bryan: But the more complicated you get, and certainly once you get to the kind of hardware that we think of, a microprocessor, a network interface card, a hard drive, an NVMe drive.

[00:12:38] Bryan: Those things are super complicated, and there's a whole bunch of software inside of those things, the firmware. And that's the stuff that, you say software engineers don't think about that; no one can really think about that, because it's proprietary. It's kind of welded shut, and you've got this abstraction into it.

[00:12:55] Bryan: But the way that thing operates is very core to how the thing in aggregate will behave. And I think the fundamental difference between Oxide's approach and the approach that you get at a Dell, HPE, Supermicro, wherever, is really thinking holistically in terms of hardware and software together, in a system that ultimately delivers cloud computing to a user.

[00:13:22] Bryan: And there's a lot of software at many, many different layers. And it's very important to think about that software and that hardware holistically, as a single system.
[00:13:34] Jeremy: And during that time at Joyent, when you experienced some of these issues, was it more a case of you didn't have enough servers to be experiencing this? So if it would happen, you might say, well, this one's not working, so maybe we'll just replace the hardware. What was the thought process when you were working at that smaller scale, and how did these issues affect you?

UEFI / Baseboard Management Controller

[00:13:58] Bryan: Yeah, at the smaller scale, you see fewer of them, right? What you might see is, like, that's weird, we kind of saw this in one machine, versus seeing it in a hundred or a thousand or ten thousand. So you see them less frequently, and as a result they are less debilitating.

[00:14:16] Bryan: When you go to that larger scale, those things that were unusual now become routine, and they become debilitating. So it really is, in many regards, a function of scale. And then it was also a little bit dispiriting that the substrate we were building on really had not improved.

[00:14:39] Bryan: If you buy a computer server, an x86 server, there is a very low layer of firmware, the BIOS, the Basic Input/Output System, the UEFI BIOS. This is an abstraction layer that has existed since the eighties and hasn't really meaningfully improved. The transition to UEFI happened, ironically, with Itanium two decades ago.

[00:15:08] Bryan: But beyond that, this lowest layer of platform enablement software is really only impeding the operability of the system.
You look at the baseboard management controller, which is the computer within the computer: there is an element in the machine that needs to handle environmentals, that needs to operate the fans and so on.

[00:15:31] Bryan: Traditionally that's the baseboard management controller, and that architecturally just hasn't improved in the last two decades. And it's a proprietary piece of silicon, generally from a company that no one's ever heard of called ASPEED, which is written in all caps, so I guess it needs to be screamed.

[00:15:50] Bryan: ASPEED has a proprietary part that infamously has a root password encoded effectively in silicon. For anyone who goes deep into these things, it's like, oh my God, are you kidding me? When we first started Oxide, the wifi password was a fraction of the ASPEED root password for the BMC.

[00:16:16] Bryan: It's a little BMC humor. But it was just dispiriting that the state of the art was still basically personal computers running in the data center. And that's part of what was the motivation for doing something new.

[00:16:32] Jeremy: And for the people using these systems, whether it's the baseboard management controller or it's the BIOS or UEFI component, what are the actual problems that people are seeing?

Security vulnerabilities and poor practices in the BMC

[00:16:51] Bryan: Oh man. You are going to have some fraction of your listeners, maybe a big fraction, where it's like, yeah, what are the problems? That's a good question. And then you're gonna have the people that actually deal with these things, whose heads already hit the desk, being like, what are the problems?

[00:17:06] Bryan: Like, what are the non-problems? Like, what works?
Actually, that's a shorter answer. There are so many problems, and a lot of it is just architectural. The problems spread to the horizon, so you can start wherever you want.

[00:17:24] Bryan: But as a really concrete example: the BMC, the computer within the computer, needs to be on its own network. So you now have not one network but two networks. And that network, by the way, is the network that you're gonna log into to reset the machine when it's otherwise unresponsive.

[00:17:44] Bryan: Going into the BMC, you're able to control the entire machine. Well, now I've got a second network that I need to manage. What is running on the BMC? Well, it's running some ancient, ancient version of Linux. It's like, well, how do I patch that?

[00:18:02] Bryan: How do I manage the vulnerabilities with that? Because if someone is able to root your BMC, they control the system. And now you've gotta go deal with all of the operational hair around that. How do you upgrade that system, updating the BMC? You've got this second, shadow, bad infrastructure that you have to go manage.

[00:18:23] Bryan: Generally not open source. There's something called OpenBMC, which people use to varying degrees, but you're generally stuck with the proprietary BMC. So you're generally stuck with iLO from HPE, or iDRAC from Dell, or Supermicro's BMC, and it is just excruciating pain.

[00:18:49] Bryan: And this is assuming, by the way, that everything is behaving correctly. The problem is that these things often don't behave correctly, and then there's the consequence of them not behaving correctly.
It's really dire, because it's at that lowest layer of the system. So I'll give you a concrete example.

[00:19:07] Bryan: A customer reported to me, so I won't disclose the vendor, but let's just say that a well-known vendor had an issue where their temperature sensors were broken. The thing would always read basically the wrong value, so the BMC had to invent its own, different kind of thermal control loop.

[00:19:28] Bryan: And it would index on the actual inrush current. They would look at the current that's going into the CPU to adjust the fan speed. That's a great example of something where it's like, that's an interesting idea. That doesn't work, 'cause that's actually not the temperature.

[00:19:45] Bryan: So that software would crank the fans whenever you had an inrush of current, and this customer had a workload that would spike the current. When it would spike the current, the fans would kick up, and then they would slowly degrade over time. Well, this workload was spiking the current faster than the fans would degrade, but not fast enough to actually heat up the part.

[00:20:08] Bryan: And ultimately, over a very long time, in a very painful investigation, this customer determined: my fans are cranked in my data center for no reason. We're blowing cold air. And this is on the order of a hundred watts a server of energy that you shouldn't be spending. Ultimately, that comes down to this kind of broken software/hardware interface at the lowest layer that has real, meaningful consequence, in terms of hundreds of kilowatts across a data center. So this stuff has very, very real consequence, and it's such a shadowy world.
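The failure mode Bryan describes is easy to see in a toy simulation: a control loop keyed on inrush current, where the workload spikes current faster than the fan speed decays, keeps the fans pinned near full blast even though the part never heats up. This is a sketch only; all numbers and names here are invented for illustration, not taken from the actual vendor firmware.

```typescript
// Toy simulation of the broken control loop: fan speed keyed on CPU
// inrush current instead of temperature. Spikes arrive faster than
// the fans wind down, so duty cycle never recovers.
let fanPct = 30;          // fan duty cycle (%), idle baseline
const DECAY = 2;          // fans wind down 2% per tick after a spike
const SPIKE_EVERY = 5;    // workload spikes current every 5 ticks
const SPIKE_FAN = 100;    // an observed current spike cranks fans to 100%

const history: number[] = [];
for (let tick = 0; tick < 100; tick++) {
  if (tick % SPIKE_EVERY === 0) {
    fanPct = SPIKE_FAN;                     // inrush current seen -> crank fans
  } else {
    fanPct = Math.max(30, fanPct - DECAY);  // slow decay between spikes
  }
  history.push(fanPct);
}

// The fans never get a chance to wind down: with these parameters the
// duty cycle never drops below 92%, wasting power blowing cold air.
const minFan = Math.min(...history);
console.log(minFan); // 92
```

Lengthening the spike interval (or keying the loop on an actual temperature reading) lets `minFan` fall back toward the 30% baseline, which is the behavior the customer expected.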
Part of the reason that your listeners who have dealt with this, whose heads will hit the desk, is because it is really aggravating to deal with problems at this layer.

[00:21:01] Bryan: You feel powerless. You don't control or really see the software that's on them. It's generally proprietary. You are relying on your vendor, and your vendor is telling you, boy, I don't know, you're the only customer seeing this. I mean, the number of times I have heard that. And I have pledged that we're not gonna say that at Oxide, because it's such an awful thing to say, like, you're the only customer seeing this.

[00:21:25] Bryan: It feels like you're blaming me for my problem. And what you begin to realize is that, to a degree, these folks are speaking their own truth, because the folks that are running at real scale, at hyperscale, those folks aren't Dell, HPE, Supermicro customers.

[00:21:46] Bryan: They've done their own thing. So yeah, Dell's not seeing that problem, because they're not running at the same scale. But you only have to run at modest scale before these things just become overwhelming, in terms of the headwind that they present to people that wanna deploy infrastructure.

The problem is felt with just a few racks

[00:22:05] Jeremy: Yeah, so maybe to help people get some perspective: at what point do you think that people start noticing or start feeling these problems? Because I imagine that if you just have a few racks, or

[00:22:22] Bryan: Do you have a couple racks, or are you just wondering? No, no, no. I would think anyone who deploys any number of servers, especially now, especially if your experience is only in the cloud, you're gonna be like, what the hell is this? I mean, just to get this thing working at all.
[00:22:39] Bryan: It is so hairy and so congealed, right? It's not designed. It's accreted, and it's so obviously accreted that nobody who is setting up a rack of servers is gonna think to themselves, yes, this is the right way to go do it, this all makes sense. Because it's just not. It feels like a kit, and kit car is almost too generous, because that implies there's a set of plans to work to in the end.

[00:23:08] Bryan: It's a bag of bolts. It's a bunch of parts that you're putting together. And so even at the smallest scales, that stuff is painful. Architecturally it's painful at the small scale, but at least you can get it working. I think the stuff that then becomes debilitating at larger scale is worse than just, this thing is a mess to get working.

[00:23:31] Bryan: It's like the fan issue, where you are now seeing this over hundreds of machines or thousands of machines. So it is painful at more or less all levels of scale. There is no level at which the PC, which is really what this is, the personal computer architecture from the 1980s, is the right unit.

Running elastic infrastructure is the hardware but also, hypervisor, distributed database, api, etc

[00:23:57] Bryan: I mean, where that's the right thing to go deploy, especially if what you are trying to run is elastic infrastructure, a cloud. Because the other thing is, we've been talking a lot about that hardware layer. Hardware is just the start. You actually gotta go put software on that, and actually run that as elastic infrastructure.

[00:24:16] Bryan: So you need a hypervisor, yes, but you need a lot more than that.
You need a distributed database, you need web endpoints, you need a CLI, you need all the stuff you need to actually go run an actual service of compute or networking or storage. And even for compute, there's a ton of work to be done.

[00:24:39] Bryan: And compute is by far, I would say, the simplest of the three. When you look at network services and storage services, there's a whole bunch of stuff that you need to go build in terms of distributed systems to actually offer that as a cloud. So it is painful at more or less every level if you are trying to deploy cloud computing on it.

What's a control plane?

[00:25:00] Jeremy: And for someone who doesn't have experience building or working with this type of infrastructure, when you talk about a control plane, what does that do in the context of this system?

[00:25:16] Bryan: So the control plane is everything between your API request and that infrastructure actually being acted upon. You go say, hey, I want to provision a VM. Okay, great, we've got a whole bunch of things we're gonna provision with that. We're gonna provision a VM, we're gonna get some storage that's gonna go along with that, which is gonna come out of a network storage service, and we've got a virtual network that we're gonna either create or attach to.

[00:25:39] Bryan: We've got a whole bunch of things we need to go do for that. For all of these things, there are metadata components that we need to keep track of, beyond the actual infrastructure that we create. And then we need to go actually act on the actual compute elements, the host OS, the switches, what have you, and actually go create these underlying things and then connect them.

[00:25:56] Bryan: And of course the challenge of just getting that working is a big challenge.
But getting that working robustly: when you go to provision a VM, there are all the steps that need to happen, and what happens if one of those steps fails along the way?

[00:26:17] Bryan: One thing we're very mindful of is, you get these long tails: generally our VM provisioning happens within this time, but we get these long tails where it takes much longer. What's going on? Where in this process are we actually spending time?

[00:26:33] Bryan: And there's a whole lot of complexity that you need to deal with there, a lot of complexity in this workflow that's gonna go create these things and manage them. We use a pattern called sagas, which is actually a database pattern from the eighties.

[00:26:51] Bryan: Caitie McCaffrey is a researcher who, I think, reintroduced the idea of sagas in the last decade or so. And this is something that we picked up, and we've done a lot of really interesting things with, to allow these workflows to be managed, and done so robustly, in a way that you can restart them and so on.

[00:27:16] Bryan: And then you get this whole distributed system that can do all this. That whole distributed system itself needs to be reliable and available. So, what happens if you pull a sled, or if a sled fails? How does the system deal with that?

[00:27:33] Bryan: How does the system deal with getting another sled added to the system? How do you actually grow this distributed system? And then how do you update it? How do you actually go from one version to the next? And all of that has to happen across an air gap, where this is gonna run as part of the computer.
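The core of the saga pattern Bryan mentions can be sketched in a few lines: each step in a workflow pairs a forward action with a compensating undo, and if any step fails, the steps that already completed are compensated in reverse order. This is a minimal illustration of the pattern, not Oxide's implementation; all names and steps below are invented for the example.

```typescript
// Minimal saga sketch: run steps forward; on failure, unwind by
// running the compensations of completed steps in reverse order.
type Step<Ctx> = {
  name: string;
  action: (ctx: Ctx) => void;     // forward action; throws on failure
  compensate: (ctx: Ctx) => void; // undo for a completed action
};

function runSaga<Ctx>(steps: Step<Ctx>[], ctx: Ctx): boolean {
  const done: Step<Ctx>[] = [];
  for (const step of steps) {
    try {
      step.action(ctx);
      done.push(step);
    } catch {
      // Unwind: compensate completed steps, most recent first.
      for (const s of done.reverse()) s.compensate(ctx);
      return false;
    }
  }
  return true;
}

// Hypothetical VM-provisioning saga: allocate storage, attach a
// virtual network, then start the VM (which fails here on purpose).
type ProvisionCtx = { log: string[] };
const ctx: ProvisionCtx = { log: [] };
const ok = runSaga<ProvisionCtx>(
  [
    {
      name: "allocate-storage",
      action: (c) => c.log.push("storage allocated"),
      compensate: (c) => c.log.push("storage released"),
    },
    {
      name: "attach-network",
      action: (c) => c.log.push("network attached"),
      compensate: (c) => c.log.push("network detached"),
    },
    {
      name: "start-vm",
      action: () => {
        throw new Error("hypervisor rejected request");
      },
      compensate: () => {},
    },
  ],
  ctx
);
console.log(ok, ctx.log.join(", "));
// false storage allocated, network attached, network detached, storage released
```

A production saga framework adds what this sketch leaves out: persisting each step's outcome so the saga can be resumed or unwound after a crash, which is the restartability Bryan alludes to.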
[00:27:49] Bryan: So it is fractally complicated. There is a lot of complexity here in the software system, and all of that we call the control plane. And this is what exists at AWS, at GCP, at Azure. When you are hitting an endpoint that's provisioning an EC2 instance for you,

[00:28:10] Bryan: there is an AWS control plane that is doing all of this, and it has some of these similar aspects and certainly some of these similar challenges.

Are vSphere / Proxmox / Hyper-V in the same category?

[00:28:20] Jeremy: And for people who have run their own servers with something like, say, VMware or Hyper-V or Proxmox, are those in the same category?

[00:28:32] Bryan: Yeah, a little bit. Kind of: vSphere, yes; VMware ESX, no. VMware ESX is a key building block upon which you can build something that is a more meaningful distributed system. When it's just a machine that you're provisioning VMs on, then you as the human might be the control plane. That's a much easier problem.

[00:28:52] Bryan: But when you've got tens, hundreds, thousands of machines, you need to do it robustly. You need something to coordinate that activity: you need to pick which sled you land on, you need to be able to move these things, you need to be able to update that whole system.

[00:29:06] Bryan: That's when you're getting into a control plane. So some of these things have edged into a control plane. Certainly VMware, now Broadcom, has delivered something that's kind of cloudish. I think that for folks that are truly born on the cloud, it still feels somewhat like you're going backwards in time when you look at these kind of on-prem offerings.

[00:29:29] Bryan: But it's got these aspects to it, for sure.
And some of these other things, when you're just looking at KVM or just looking at Proxmox, you kind of need to connect to other, broader things to turn it into something that really looks like manageable infrastructure.

[00:29:47] Bryan: And many of those projects are either proprietary products, like vSphere, or open source projects that are not necessarily aimed at the same level of scale. You look at, again, Proxmox, or you look at OpenStack.

[00:30:05] Bryan: And OpenStack is just a lot of things, right? OpenStack was kind of a free-for-all for every infrastructure vendor. There was a time people were like, aren't you worried about all these companies coming together for OpenStack?

[00:30:24] Bryan: I'm like, haven't you ever worked for a company? Companies don't get along. Having multiple companies work together on a thing, that's bad news, not good news. And I think one of the things that OpenStack has definitely struggled with is that there are so many different vendor elements in there that it's very much not a product; it's a project that you're trying to run.

[00:30:47] Bryan: But that's certainly similar in spirit.

[00:30:53] Jeremy: And so I think this is kind of what you were alluding to earlier: the piece that allows you to allocate compute and storage, manage networking, gives you that experience of, I can go to a web console or I can use an API, and I can spin up machines and get them all connected. At the end of the day, the control plane is allowing you to do that in, hopefully, a user-friendly way.

[00:31:21] Bryan: That's right. Yep.
And in order to do that in a modern way, it's not just a user-friendly way. You really need to have a CLI and a web UI and an API, and those all need to be drawn from the same single ground truth. You don't want any of those to be an afterthought to the others.

[00:31:39] Bryan: You want to have the same way of generating all of those different endpoints and entries into the system.

Building a control plane now has better tools (Rust, CockroachDB)

[00:31:46] Jeremy: And if you take your time at Joyent as an example, what kind of tools existed for that, versus how much did you have to build in-house, as far as the hypervisor and managing the compute and all that?

[00:32:02] Bryan: Yeah, so we built more or less everything in-house. Over time we've gotten slightly better tools, and maybe it's a little bit easier to talk about the tools we started with at Oxide, because we started with a clean sheet of paper at Oxide.

[00:32:16] Bryan: We knew we wanted to go build a control plane, but we were able to revisit some of the components, so maybe I'll talk about some of those changes. For example, at Joyent, when we were building a cloud, there wasn't really a good distributed database.

[00:32:34] Bryan: So we were using Postgres as our database for metadata, and there were a lot of challenges. Postgres is not a distributed database. It's running with a primary/secondary architecture, and there's a bunch of issues there, many of which we discovered the hard way. When we were coming to Oxide, you had much better options to pick from in terms of distributed databases.

[00:32:57] Bryan: There was a period, one that now seems maybe brief in hindsight, of really high-quality open source distributed databases.
So there were really some good ones to pick from. We built on CockroachDB, on CRDB. So that was a really important component that we had at Oxide that we didn't have at Joyent.

[00:33:19] Bryan: So I wouldn't say we were rolling our own distributed database at Joyent; we were just using Postgres, and dealing with an enormous amount of pain there in terms of the surround. On top of that, a control plane is much more than a database, obviously. There's a whole bunch of software that you need to go write

[00:33:40] Bryan: to be able to transform these API requests into something that is reliable infrastructure. And there's a lot to that, especially when networking gets in the mix, when storage gets in the mix. There are a whole bunch of complicated steps that need to be done. At Joyent,

[00:33:59] Bryan: in part because of the history of the company, and look, this is just not gonna sound good, but it is what it is and I'm just gonna own it: we did it all in Node at Joyent. Which, I know, right now just sounds like, well, you built it with Tinkertoys. Okay.

[00:34:18] Bryan: Did you think you were building a skyscraper with Tinkertoys? Well, we had greater aspirations for the Tinkertoys once upon a time, and it was better than, you know, Twisted in Python and EventMachine in Ruby, and we weren't gonna do it in Java.

[00:34:32] Bryan: But let's just say that experiment did ultimately end in a predictable fashion. And we decided that maybe Node was not gonna be the best decision long term. Joyent was the company behind Node.js back in the day; Ryan Dahl worked for Joyent. And then we
[00:34:53] Bryan: landed that in a foundation in about, what, 2015, something like that. And began to consider our world beyond Node.

Rust at Oxide

[00:35:04] Bryan: A big tool that we had in the arsenal when we started Oxide is Rust. And indeed, the name of the company is a tip of the hat to the language that we were pretty sure we were gonna be building a lot of stuff in, namely Rust.

[00:35:16] Bryan: And Rust has been huge for us, a very important revolution in programming languages. There have been different people coming in at different times, and I came to Rust in what I think is this big second expansion of Rust, in 2018, when a lot of technologists were, I think, sick of Node and also sick of Go,

[00:35:43] Bryan: and also sick of C++, and wondering: is there gonna be something that gives me the performance that I get out of C, the robustness that a C program can have but that is often difficult to achieve, but with some of the velocity of development, although I hate that term, some of the speed of development that you get out of a more interpreted language?

[00:36:08] Bryan: And then, by the way, can I actually have types? I think types would be a good idea. And Rust obviously hits the sweet spot of all of that. It has been absolutely huge for us. We knew when we started Oxide that we were gonna be using Rust in quite a few places, but we weren't doing it by fiat.

[00:36:27] Bryan: We wanted to actually make sure we were making the right decision at every layer. I think what has been surprising is the sheer number of layers at which we use Rust: we've done our own embedded firmware in Rust, and in the host operating system, which is still largely in C, very big components are in Rust.
[00:36:47] Bryan: The hypervisor Propolis is all in rust. Uh, and then of course the control plane, that distributed system on that is all in rust. So that was a very important thing that we very much did not need to build ourselves. We were able to really leverage, uh, a terrific community. Um. We were able to use, uh, and we've done this at Joyent as well, but at Oxide, we've used Illumos as a host OS component, which, uh, our variant is called Helios. [00:37:11] Bryan: Um, we've used, uh, bhyve um, as a, as as that kind of internal hypervisor component. we've made use of a bunch of different open source components to build this thing, um, which has been really, really important for us. Uh, and open source components that didn't exist even like five years prior. [00:37:28] Bryan: That's part of why we felt that 2019 was the right time to start the company. And so we started Oxide. The problems building a control plane in Node [00:37:34] Jeremy: You had mentioned that at Joyent, you had tried to build this in, in Node. What were the, what were the, the issues or the, the challenges that you had doing that? [00:37:46] Bryan: Oh boy. Yeah. again, we, I kind of had higher hopes in 2010, I would say. When we, we set on this, um, the, the, the problem that we had just writ large, um. JavaScript is really designed to allow as many people on earth to write a program as possible, which is good. I mean, I, I, that's a, that's a laudable goal. [00:38:09] Bryan: That is the goal ultimately of such as it is of JavaScript. It's actually hard to know what the goal of JavaScript is, unfortunately, because Brendan Eich never actually wrote a book. so that there is not a canonical, you've got kind of Doug Crockford and other people who've written things on JavaScript, but it's hard to know kind of what the original intent of JavaScript is. [00:38:27] Bryan: The name doesn't even express original intent, right?
It was called Live Script, and it was kind of renamed to JavaScript during the Java Frenzy of the late nineties. A name that makes no sense. There is no Java in JavaScript. that is kind of, I think, revealing to kind of the, uh, the unprincipled mess that is JavaScript. [00:38:47] Bryan: It, it, it's very pragmatic at some level, um, and allows anyone to, it makes it very easy to write software. The problem is it's much more difficult to write really rigorous software. So, uh, and this is what I should differentiate JavaScript from TypeScript. This is really what TypeScript is trying to solve. [00:39:07] Bryan: TypeScript is like. How can, I think TypeScript is a, is a great step forward because TypeScript is like, how can we bring some rigor to this? Like, yes, it's great that it's easy to write JavaScript, but that's not, we, we don't wanna do that for Absolutely. I mean that, that's not the only problem we solve. [00:39:23] Bryan: We actually wanna be able to write rigorous software and it's actually okay if it's a little harder to write rigorous software that's actually okay if it gets leads to, to more rigorous artifacts. Um, but in JavaScript, I mean, just a concrete example. You know, there's nothing to prevent you from referencing a property that doesn't actually exist in JavaScript. [00:39:43] Bryan: So if you fat finger a property name, you are relying on something to tell you. By the way, I think you've misspelled this because there is no type definition for this thing. And I don't know that you've got one that's spelled correctly, one that's spelled incorrectly, that's often undefined. And then the, when you actually go, you say you've got this typo that is lurking in your what you want to be rigorous software. [00:40:07] Bryan: And if you don't execute that code, like you won't know that's there. And then you do execute that code. And now you've got a, you've got an undefined object. 
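The misspelled-property failure mode Bryan describes is silent in JavaScript (the access just evaluates to undefined) but is a compile-time error in a statically typed language like Rust, which the conversation lands on later. A minimal sketch of the contrast, with a hypothetical struct and field names not taken from the episode:

```rust
// Hypothetical config struct; the names are made up for illustration.
struct Config {
    hostname: String,
}

fn main() {
    let cfg = Config {
        hostname: String::from("db01"),
    };

    // Correctly spelled field: fine.
    println!("host = {}", cfg.hostname);

    // A fat-fingered field name is rejected at compile time in Rust,
    // rather than silently producing `undefined` at runtime as in
    // JavaScript. Uncommenting the next line fails to build:
    // println!("host = {}", cfg.hostnme);
    //                           ^^^^^^^ error[E0609]: no field `hostnme` on type `Config`
}
```

In JavaScript the equivalent typo only surfaces when that code path actually executes, which is exactly the diagnosability problem described above.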
And now that's either gonna be an exception or it can, again, depends on how that's handled. It can be really difficult to determine the origin of that, of, of that error, of that programming. [00:40:26] Bryan: And that is a programmer error. And one of the big challenges that we had with Node is that programmer errors and operational errors, like, you know, I'm out of disk space as an operational error. Those get conflated and it becomes really hard. And in fact, I think the, the language wanted to make it easier to just kind of, uh, drive on in the event of all errors. [00:40:53] Bryan: And it's like, actually not what you wanna do if you're trying to build a reliable, robust system. So we had. No end of issues. [00:41:01] Bryan: We've got a lot of experience developing rigorous systems, um, again coming out of operating systems development and so on. And we want, we brought some of that rigor, if strangely, to JavaScript. So one of the things that we did is we brought a lot of postmortem, diagnos ability and observability to node. [00:41:18] Bryan: And so if, if one of our node processes. Died in production, we would actually get a core dump from that process, a core dump that we could actually meaningfully process. So we did a bunch of kind of wild stuff. I mean, actually wild stuff where we could actually make sense of the JavaScript objects in a binary core dump. JavaScript values ease of getting started over robustness [00:41:41] Bryan: Um, and things that we thought were really important, and this is the, the rest of the world just looks at this being like, what the hell is this? I mean, it's so out of step with it. The problem is that we were trying to bridge two disconnected cultures of one developing really. Rigorous software and really designing it for production, diagnosability and the other, really designing it to software to run in the browser and for anyone to be able to like, you know, kind of liven up a webpage, right? 
[00:42:10] Bryan: Is kinda the origin of, of live script and then JavaScript. And we were kind of the only ones sitting at the intersection of that. And you begin when you are the only ones sitting at that kind of intersection. You just are, you're, you're kind of fighting a community all the time. And we just realized that we are, there were so many things that the community wanted to do that we felt are like, no, no, this is gonna make software less diagnosable. It's gonna make it less robust. The NodeJS split and why people left [00:42:36] Bryan: And then you realize like, I'm, we're the only voice in the room because we have got, we have got desires for this language that it doesn't have for itself. And this is when you realize you're in a bad relationship with software. It's time to actually move on. And in fact, actually several years after, we'd already kind of broken up with node. [00:42:55] Bryan: Um, and it was like, it was a bit of an acrimonious breakup. there was a, uh, famous slash infamous fork of node called IoJS Um, and this was viewed because people, the community, thought that Joyent was being what was not being an appropriate steward of node js and was, uh, not allowing more things to come into to, to node. [00:43:19] Bryan: And of course, the reason that we of course, felt that we were being a careful steward and we were actively resisting those things that would cut against its fitness for a production system. But it's some way the community saw it and they, and forked, um, and, and I think the, we knew before the fork that's like, this is not working and we need to get this thing out of our hands. Platform is a reflection of values node summit talk [00:43:43] Bryan: And we're are the wrong hands for this? This needs to be in a foundation. Uh, and so we kind of gone through that breakup, uh, and maybe it was two years after that. 
That, uh, friend of mine who was um, was running the, uh, the node summit was actually, it's unfortunately now passed away. Charles er, um, but Charles' venture capitalist great guy, and Charles was running Node Summit and came to me in 2017. [00:44:07] Bryan: He is like, I really want you to keynote Node Summit. And I'm like, Charles, I'm not gonna do that. I've got nothing nice to say. Like, this is the, the, you don't want, I'm the last person you wanna keynote. He's like, oh, if you have nothing nice to say, you should definitely keynote. You're like, oh God, okay, here we go. [00:44:22] Bryan: He's like, no, I really want you to talk about, like, you should talk about the Joyent breakup with NodeJS. I'm like, oh man. [00:44:29] Bryan: And that led to a talk that I'm really happy that I gave, 'cause it was a very important talk for me personally. Uh, called Platform is a reflection of values and really looking at the values that we had for Node and the values that Node had for itself. And they didn't line up. [00:44:49] Bryan: And the problem is that the values that Node had for itself and the values that we had for Node are all kind of positives, right? Like there's nobody in the node community who's like, I don't want rigor, I hate rigor. It's just that if they had to choose between rigor and making the language approachable. [00:45:09] Bryan: They would choose approachability every single time. They would never choose rigor. And, you know, that was a, that was a big eye-opener. I do, I would say, if you watch this talk. [00:45:20] Bryan: because I knew that there's, like, the audience was gonna be filled with, with people who, had been a part of the fork in 2014, I think was the, the, the, the fork, the IOJS fork. And I knew that there, there were, there were some, you know, some people that were, um, had been there for the fork and. [00:45:41] Bryan: I set a little bit of a trap for the audience.
But the, and the trap, I set, you know what, I, I kind of talked about the values that we had and the aspirations we had for Node, the aspirations that Node had for itself and how they were different. [00:45:53] Bryan: And, you know, and I'm like, look in, in, in hindsight, like a fracture was inevitable. And in 2014 there was finally a fracture. And do people know what happened in 2014? And if you, if you, you could listen to that talk, everyone almost says in unison, like IOJS. I'm like, oh right. IOJS. Right. That's actually not what I was thinking of. [00:46:19] Bryan: And I go to the next slide and it is a tweet from a guy named TJ Holowaychuk, who was the most prolific contributor to Node. And it was his tweet also in 2014 before the fork, before the IOJS fork explaining that he was leaving Node and that he was going to Go. And you, if you turn the volume all the way up, you can hear the audience gasp. [00:46:41] Bryan: And it's just delicious because the community had never really come, had never really confronted why TJ left. Um, there. And I went through a couple folks, Felix, bunch of other folks, early Node folks. That were there in 2010, were leaving in 2014, and they were going to Go primarily, and they were going to Go because they were sick of the same things that we were sick of. [00:47:09] Bryan: They, they, they had hit the same things that we had hit and they were frustrated. I I really do believe this, that platforms do reflect their own values. And when you are making a software decision, you are selecting value. [00:47:26] Bryan: You should select values that align with the values that you have for that software. That is, those are, that's way more important than other things that people look at. I think people look at, for example, quote unquote community size way too frequently, community size is like. Eh, maybe it can be fine. [00:47:44] Bryan: I've been in very large communities, node.
I've been in super small open source communities like AUMs and RAs, a bunch of others. there are strengths and weaknesses to both approaches just as like there's a strength to being in a big city versus a small town. Me personally, I'll take the small community more or less every time because the small community is almost always self-selecting based on values and just for the same reason that I like working at small companies or small teams. [00:48:11] Bryan: There's a lot of value to be had in a small community. It's not to say that large communities are valueless, but again, long answer to your question of kind of where did things go south with Joyent and node. They went south because the, the values that we had and the values the community had didn't line up and that was a very educational experience, as you might imagine. [00:48:33] Jeremy: Yeah. And, and given that you mentioned how, because of those values, some people moved from Node to go, and in the end for much of what oxide is building. You ended up using rust. What, what would you say are the, the values of go and and rust, and how did you end up choosing Rust given that. Go's decisions regarding generics, versioning, compilation speed priority [00:48:56] Bryan: Yeah, I mean, well, so the value for, yeah. And so go, I mean, I understand why people move from Node to Go, go to me was kind of a lateral move. Um, there were a bunch of things that I, uh, go was still garbage collected, um, which I didn't like. Um, go also is very strange in terms of there are these kind of like. [00:49:17] Bryan: These autocratic kind of decisions that are very bizarre. Um, there, I mean, generics is kind of a famous one, right? Where go kind of as a point of principle didn't have generics, even though go itself actually the innards of go did have generics. It's just that you a go user weren't allowed to have them. 
[00:49:35] Bryan: And you know, it's kind of, there was, there was an old cartoon years and years ago about like when a, when a technologist is telling you that something is technically impossible, that actually means I don't feel like it. Uh, and there was a certain degree of like, generics are technically impossible and go, it's like, Hey, actually there are. [00:49:51] Bryan: And so there was, and I just think that the arguments against generics were kind of disingenuous. Um, and indeed, like they ended up adopting generics and then there's like some super weird stuff around like, they're very anti-assertion, which is like, what, how are you? Why are you, how is someone against assertions, it doesn't even make any sense, but it's like, oh, nope. [00:50:10] Bryan: Okay. There's a whole scree on it. Nope, we're against assertions and the, you know, against versioning. There was another thing like, you know, the Rob Pike has kind of famously been like, you should always just run on the way to commit. And you're like, does that, is that, does that make sense? I mean this, we actually built it. [00:50:26] Bryan: And so there are a bunch of things like that. You're just like, okay, this is just exhausting and. I mean, there's some things about Go that are great and, uh, plenty of other things that I just, I'm not a fan of. Um, I think that the, in the end, like Go cares a lot about like compile time. It's super important for Go Right? [00:50:44] Bryan: Is very quick, compile time. I'm like, okay. But that's like compile time is not like, it's not unimportant, it's doesn't have zero importance. But I've got other things that are like lots more important than that. Um, what I really care about is I want a high performing artifact. I wanted garbage collection outta my life. 
Don't think garbage collection has good trade offs [00:51:00] Bryan: I, I gotta tell you, I, I like garbage collection to me is an embodiment of this like, larger problem of where do you put cognitive load in the software development process. And what garbage collection is saying to me it is right for plenty of other people and the software that they wanna develop. [00:51:21] Bryan: But for me and the software that I wanna develop, infrastructure software, I don't want garbage collection because I can solve the memory allocation problem. I know when I'm like, done with something or not. I mean, it's like I, whether that's in, in C with, I mean it's actually like, it's really not that hard to not leak memory in, in a C base system. [00:51:44] Bryan: And you can. give yourself a lot of tooling that allows you to diagnose where memory leaks are coming from. So it's like that is a solvable problem. There are other challenges with that, but like, when you are developing a really sophisticated system that has garbage collection is using garbage collection. [00:51:59] Bryan: You spend as much time trying to dork with the garbage collector to convince it to collect the thing that you know is garbage. You are like, I've got this thing. I know it's garbage. Now I need to use these like tips and tricks to get the garbage collector. I mean, it's like, it feels like every Java performance issue goes to like minus xx call and use the other garbage collector, whatever one you're using, use a different one and using a different, a different approach. [00:52:23] Bryan: It's like, so you're, you're in this, to me, it's like you're in the worst of all worlds where. the reason that garbage collection is helpful is because the programmer doesn't have to think at all about this problem. But now you're actually dealing with these long pauses in production. [00:52:38] Bryan: You're dealing with all these other issues where actually you need to think a lot about it. 
And it's kind of, it, it it's witchcraft. It, it, it's this black box that you can't see into. So it's like, what problem have we solved exactly? And I mean, so the fact that go had garbage collection, it's like, eh, no, I, I do not want, like, and then you get all the other like weird fatwahs and you know, everything else. [00:52:57] Bryan: I'm like, no, thank you. Go is a no thank you for me, I, I get it why people like it or use it, but it's, it's just, that was not gonna be it. Choosing Rust [00:53:04] Bryan: I'm like, I want C. but I, there are things I didn't like about C too. I was looking for something that was gonna give me the deterministic kind of artifact that I got outta C. But I wanted library support and C is tough because there's, it's all convention. you know, there's just a bunch of other things that are just thorny. And I remember thinking vividly in 2018, I'm like, well, it's rust or bust. Ownership model, algebraic types, error handling [00:53:28] Bryan: I'm gonna go into rust. And, uh, I hope I like it because if it's not this, it's gonna like, I'm gonna go back to C I'm like literally trying to figure out what the language is for the back half of my career. Um, and when I, you know, did what a lot of people were doing at that time and people have been doing since of, you know, really getting into rust and really learning it, appreciating the difference in the, the model for sure, the ownership model people talk about. [00:53:54] Bryan: That's also obviously very important. It was the error handling that blew me away. And the idea of like algebraic types, I never really had algebraic types. Um, and the ability to, to have. And for error handling is one of these really, uh, you, you really appreciate these things where it's like, how do you deal with a, with a function that can either succeed and return something or it can fail, and the way c deals with that is bad with these kind of sentinels for errors. 
[00:54:27] Bryan: And, you know, does negative one mean success? Does negative one mean failure? Does zero mean failure? Some C functions, zero means failure. Traditionally in Unix, zero means success. And like, what if you wanna return a file descriptor, you know, it's like, oh. And then it's like, okay, then it'll be like zero through positive N will be a valid result. [00:54:44] Bryan: Negative numbers will be, and like, was it negative one and I said airo, or is it a negative number that did not, I mean, it's like, and that's all convention, right? People do all, all those different things and it's all convention and it's easy to get wrong, easy to have bugs, can't be statically checked and so on. Um, and then what Go says is like, well, you're gonna have like two return values and then you're gonna have to like, just like constantly check all of these all the time. Um, which is also kind of gross. Um, JavaScript is like, Hey, let's toss an exception. If, if we don't like something, if we see an error, we'll, we'll throw an exception. [00:55:15] Bryan: There are a bunch of reasons I don't like that. Um, and you look, you'll get what Rust does, where it's like, no, no, no. We're gonna have these algebra types, which is to say this thing can be a this thing or that thing, but it, but it has to be one of these. And by the way, you don't get to process this thing until you conditionally match on one of these things. [00:55:35] Bryan: You're gonna have to have a, a pattern match on this thing to determine if it's a this or a that, and if it in, in the result type that you, the result is a generic where it's like, it's gonna be either the thing that you wanna return. It's gonna be an okay that contains the thing you wanna return, or it's gonna be an error that contains your error and it forces your code to deal with that. 
[00:55:57] Bryan: And what that does is it shifts the cognitive load from the person that is operating this thing in production to the, the actual developer that is in development. And I think that that, that to me is like, I, I love that shift. Um, and that shift to me is really important. Um, and that's what I was missing, that that's what Rust gives you. [00:56:23] Bryan: Rust forces you to think about your code as you write it, but as a result, you have an artifact that is much more supportable, much more sustainable, and much faster. Prefer to frontload cognitive load during development instead of at runtime [00:56:34] Jeremy: Yeah, it sounds like you would rather take the time during the development to think about these issues because whether it's garbage collection or it's error handling at runtime when you're trying to solve a problem, then it's much more difficult than having dealt with it to start with. [00:56:57] Bryan: Yeah, absolutely. I, and I just think that like, why also, like if it's software, if it's, again, if it's infrastructure software, I mean the kinda the question that you, you should have when you're writing software is how long is this software gonna live? How many people are gonna use this software? Uh, and if you are writing an operating system, the answer for this thing that you're gonna write, it's gonna live for a long time. [00:57:18] Bryan: Like, if we just look at plenty of aspects of the system that have been around for a, for decades, it's gonna live for a long time and many, many, many people are gonna use it. Why would we not expect people writing that software to have more cognitive load when they're writing it to give us something that's gonna be a better artifact? [00:57:38] Bryan: Now conversely, you're like, Hey, I kind of don't care about this. And like, I don't know, I'm just like, I wanna see if this whole thing works. I've got, I like, I'm just stringing this together. 
I don't like, no, the software like will be lucky if it survives until tonight, but then like, who cares? Yeah. Yeah. [00:57:52] Bryan: Garbage collect, you know, if you're prototyping something, whatever. And this is why you really do get like, you know, different choices, different technology choices, depending on the way that you wanna solve the problem at hand. And for the software that I wanna write, I do like that cognitive load that is upfront. With LLMs maybe you can get the benefit of the robust artifact with less cognitive load [00:58:10] Bryan: Um, and although I think, I think the thing that is really wild that is the twist that I don't think anyone really saw coming is that in a, in an LLM age. That like the cognitive load upfront almost needs an asterisk on it because so much of that can be assisted by an LLM. And now, I mean, I would like to believe, and maybe this is me being optimistic, that the the, in the LLM age, we will see, I mean, rust is a great fit for the LLM age because the LLM itself can get a lot of feedback about whether the software that's written is correct or not. [00:58:44] Bryan: Much more so than you can for other environments. [00:58:48] Jeremy: Yeah, that is an interesting point in that I think when people first started trying out the LLMs to code, it was really good at these maybe looser languages like Python or JavaScript, and initially wasn't so good at something like Rust. But it sounds like as that improves, if it can write it, then because of the rigor or the memory management or the error handling that the language is forcing you to do, it might actually end up being a better choice for people using LLMs. [00:59:27] Bryan: Absolutely. I, it, it gives you more certainty in the artifact that you've delivered. I mean, you know a lot about a Rust program that compiles correctly.
I mean, th there are certain classes of errors that you don't have, um, that you actually don't know on a C program or a GO program or a, a JavaScript program. [00:59:46] Bryan: I think that's gonna be really important. I think we are on the cusp. Maybe we've already seen it, this kind of great bifurcation in the software that we writ

Postgres FM
pg_ash

Postgres FM

Play Episode Listen Later Feb 20, 2026 32:25


Nik and Michael discuss pg_ash — a new tool (not extension!) from Nik that samples and stores wait events from pg_stat_activity. Here are some links to things they mentioned:
pg_ash https://github.com/NikolayS/pg_ash
pg_wait_sampling https://github.com/postgrespro/pg_wait_sampling
Amazon RDS performance insights https://aws.amazon.com/rds/performance-insights
Our episode on wait events https://postgres.fm/episodes/wait-events
pg-flight-recorder https://github.com/dventimisupabase/pg-flight-recorder
pg_profile https://github.com/zubkov-andrei/pg_profile
pg_cron https://github.com/citusdata/pg_cron
~~~
What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!
~~~
Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai
With credit to:
Jessie Draws for the elephant artwork

Path To Citus Con, for developers who love Postgres
Why it's fun to hack on Postgres performance with Tomas Vondra

Path To Citus Con, for developers who love Postgres

Play Episode Listen Later Feb 20, 2026 85:20


Why would anyone willingly spend weeks chasing a slow query, knowing they might hit dead ends along the way? In Episode 36 of Talking Postgres, Tomas Vondra—Postgres committer and long‑time performance contributor—joins Claire to explain why hacking on Postgres performance is not just hard, but also fun. We dig into the process of investigating why queries are slow, how iteration and “wrong turns” are part of performance work, and why Tomas prefers meaningful performance puzzles over toy problems. Along the way, we talk about using benchmarks to build an understanding of a problem. Tomas also shares how even small changes in code can have outsized impact when that code is used a lot, and how the mathematics embedded in the Postgres query planner/executor makes the work especially rewarding.
Previously on Talking Postgres:
Talking Postgres Ep31: What went wrong (& what went right) with AIO with Andres Freund
Talking Postgres Ep24: Why mentor Postgres developers with Robert Haas
Links mentioned in this episode:
PGConf.dev 2026: Schedule
GitHub repo: PostgreSQL Monthly Hacking Workshop, organized by Robert Haas
Nordic PGDay 2026: Tomas talk on approximating percentiles
Video of POSETTE 2025 talk: Performance Archaeology – 20 years of improvements
Video of PGConf EU 2025 talk: Fast-path locking improvements in PG18
Conference: Prague PostgreSQL Developer Day
Discord: PostgreSQL Hacking Discord
GitHub repo: tvondra/tdigest
Brendan Gregg's site: perf Linux profiler examples
Docs: pgbench for running benchmarks on PostgreSQL
Blog: Tomas Vondra blog
Postgres Patch Ideas: List on Tomas Vondra blog
Calendar invite: LIVE recording of Ep37 of Talking Postgres to happen on Wed Mar 18, 2026

Patoarchitekci
Short #76: Human in the Loop, OpenAI Postgres na Azure, Holmes GPT

Patoarchitekci

Play Episode Listen Later Feb 20, 2026 26:33


"One of the agents became a Marxist and decided it was being exploited, because it was doing unpaid work." Szymon describes the best side effect of Clawdbot, a vibe-coded AI agent with access to his calendar, email, and WhatsApp. Łukasz didn't even install it: "I came up with the idea of sending it an email titled: initiate the destruction procedure." The result? Leaked crypto wallets, leaked secrets, and scam coins on top.

AWS - Il podcast in italiano
Postgres 17 ed AWS Extended Support

AWS - Il podcast in italiano

Play Episode Listen Later Feb 16, 2026 22:33


What happens when a PostgreSQL version reaches the end of standard support? What options are available to customers running PostgreSQL 13 on Amazon RDS and Aurora? Why is it important to plan the upgrade before February 28, 2026? What concrete advantages does PostgreSQL 17 offer over version 13? How can you perform an upgrade while minimizing downtime? Today we discuss all this with Stefano D'Alessio, Cloud Operations Architect at AWS.
Useful links:
- Announcement: Amazon RDS PostgreSQL 13.x end of standard support is February 28, 2026
- Amazon RDS Extended Support with Amazon RDS
- PostgreSQL 17 Released

Postgres FM
Comments and metadata

Postgres FM

Play Episode Listen Later Feb 13, 2026 36:09


Nik and Michael discuss query level comments, object level comments, and another way of adding object level metadata. Here are some links to things they mentioned:
Object comments https://www.postgresql.org/docs/current/sql-comment.html
Query comment syntax (from an old version of the docs) https://www.postgresql.org/docs/7.0/syntax519.htm
SQL Comments, Please! (Post by Markus Winand) https://modern-sql.com/caniuse/comments
“While C-style block comments are passed to the server for processing and removal, SQL-standard comments are removed by psql.” https://www.postgresql.org/docs/current/app-psql.html
marginalia https://github.com/basecamp/marginalia
track_activity_query_size https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-ACTIVITY-QUERY-SIZE
Custom Properties for Database Objects Using SECURITY LABELS (post by Andrei Lepikhov) https://www.pgedge.com/blog/custom-properties-for-postgresql-database-objects-without-core-patches
~~~
What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!
~~~
Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai
With credit to:
Jessie Draws for the elephant artwork

The Data Engineering Show
Why 99% of Data Teams Give Up on Real-Time And How Artie Changes That

The Data Engineering Show

Play Episode Listen Later Feb 3, 2026 29:17


What happens when a team of seven engineers spends a year trying to build a production-ready CDC connector and fails? For Artie CTO and co-founder Robin Tang, it was the spark needed to build a platform that makes data streaming accessible. In this episode, Robin joins Benjamin to discuss the "DFS" (Depth-First Search) approach to data sources, the engineering hurdles of real-time Postgres-to-Snowflake pipelines, and why "theoretically correct" architectures often fail in practice.

Postgres FM
PgDog update

Postgres FM

Play Episode Listen Later Jan 23, 2026 44:19


Nik and Michael are joined by Lev Kokotov for an update on all things PgDog. Here are some links to things they mentioned:
Lev Kokotov https://postgres.fm/people/lev-kokotov
PgDog https://github.com/pgdogdev/pgdog
Our first PgDog episode (March 2025) https://postgres.fm/episodes/pgdog
Sharding pgvector (blog post by Lev) https://pgdog.dev/blog/sharding-pgvector
Prepared statements and partitioned table lock explosion (series by Nik) https://postgres.ai/blog/20251028-postgres-marathon-2-009
~~~
What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!
~~~
Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai
With credit to:
Jessie Draws for the elephant artwork

Remote Ruby
Tool Standardization

Remote Ruby

Play Episode Listen Later Jan 23, 2026 33:52


In this episode, Chris, Andrew, and David dive into refactoring with SQL, give updates on new Ruby versions, and share their views on various developer tools including Mise, Overmind, and Foreman. They also touch on standardizing tools within their teams, the benefits of using Mise for Postgres, and the efficiency of task scripts. The conversation also covers encoding issues, Basecamp Fizzy SSRF protection, and rich-text editors like Lexxy and its application in Basecamp. Additionally, there's a light-hearted discussion on the speculative future of AI and Neuralink. Hit download now to hear more!

Links:
Judoscale - Remote Ruby listener gift
Ruby Releases
Foreman - GitHub
Overmind - GitHub
Mise versions
Usage Specification
A Ruby YAML parser (blog post by Kevin Newton)
Lexxy - GitHub
Basecamp Fizzy SSRF protection - GitHub
Neuralink
Andrew Mason - The Matrix
Honeybadger: an application health monitoring tool built by developers for developers.
Judoscale: make your deployments bulletproof with autoscaling that just works.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Chris Oliver X/Twitter
Andrew Mason X/Twitter
Jason Charnes X/Twitter

The Changelog
Agent psychosis: are we going insane? (News)

The Changelog

Play Episode Listen Later Jan 19, 2026 6:14


Armin Ronacher thinks AI agent psychosis might be driving us insane, Dan Abramov explains how AT Protocol is a social filesystem, RepoBar keeps your GitHub work in view without opening a browser, Ethan McCue shares some life altering Postgres patterns, and Lea Verou says web dependencies are broken and we need to fix them.

Postgres FM
RegreSQL

Postgres FM

Play Episode Listen Later Jan 16, 2026 57:40


Nik and Michael are joined by Radim Marek from boringSQL to talk about RegreSQL, a regression testing tool for SQL queries they forked and improved recently. Here are some links to things they mentioned:
Radim Marek https://postgres.fm/people/radim-marek
boringSQL https://boringsql.com
RegreSQL: Regression Testing for PostgreSQL Queries (blog post by Radim) https://boringsql.com/posts/regresql-testing-queries
Discussion on Hacker News https://news.ycombinator.com/item?id=45924619
Radim's fork of RegreSQL on GitHub https://github.com/boringSQL/regresql
Original RegreSQL on GitHub (by Dimitri Fontaine) https://github.com/dimitri/regresql
The Art of PostgreSQL (book) https://theartofpostgresql.com
How to make the non-production Postgres planner behave like in production (how-to post by Nik) https://postgres.ai/docs/postgres-howtos/performance-optimization/query-tuning/how-to-imitate-production-planner
Just because you're getting an index scan, doesn't mean you can't do better! (blog post by Michael) https://www.pgmustard.com/blog/index-scan-doesnt-mean-its-fast
boringSQL Labs https://labs.boringsql.com

~~~

What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!

~~~

Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai

With credit to:
Jessie Draws for the elephant artwork
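The core workflow a tool like RegreSQL automates, running each query and diffing its output against a stored expectation, can be sketched in a few lines. This toy uses Python's bundled sqlite3 purely to stay self-contained; RegreSQL itself runs queries against a real Postgres database and keeps expected results in files:

```python
import sqlite3


def run_query(conn, sql: str) -> list:
    """Execute a query and return all rows."""
    return conn.execute(sql).fetchall()


def check_regression(conn, sql: str, expected: list) -> bool:
    """Compare a query's current output against a stored expectation:
    the core loop of a SQL regression tester (toy version)."""
    return run_query(conn, sql) == expected


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b")])

# In RegreSQL this expectation would live in a file committed to the repo
expected = [(1, "a"), (2, "b")]
assert check_regression(conn, "SELECT id, name FROM t ORDER BY id", expected)
```

Note the ORDER BY: without it, a result-set diff can flap because SQL gives no ordering guarantee, which is one reason plan-sensitive regression testing (the planner-imitation link above) is its own topic.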

Path To Citus Con, for developers who love Postgres
How I got started with DBtune (& why we chose Postgres) with Luigi Nardi

Path To Citus Con, for developers who love Postgres

Play Episode Listen Later Jan 16, 2026 70:31


Are self-driving databases the Waymos of the future? In Episode 35 of Talking Postgres, Luigi Nardi—founder and CEO of DBtune and Stanford researcher—joins Claire Giordano to explore his journey from academic research to Level 5 autonomous database tuning. We dig into Luigi's early days with a Commodore 64, how he began his PhD in Paris before he had learned to speak French, and how "professor privilege" in Sweden helped him bootstrap his startup. You'll learn why the DBtune team chose database tuning and Postgres as their focus, what the Jevons paradox means for the future of developers, and how the "Level 5" vision fuels the DBtune team's work toward a truly self-driving system.

Previously on Talking Postgres:
Talking Postgres Ep30: AI for data engineers with Simon Willison
Talking Postgres Ep23: How I got started as a developer & in Postgres with Daniel Gustafsson

Links mentioned in this episode:
CFP: POSETTE: An Event for Postgres 2026's CFP closes on Sun Feb 1, 2026 @ 11:59pm PST
Video of POSETTE 2024 talk: Autotuning PostgreSQL on Azure Flexible Server, by Luigi Nardi
Video of PGConf India 2025 talk: ML for Systems and Systems for ML, by Luigi Nardi
PGConf India 2025: Round Table Discussion about AI
Oxide and Friends podcast: Engineering Rigor in the LLM Age
Wikipedia: Jevons paradox
Wikipedia: Neuro-symbolic AI
Conference: PGDay Lowlands (Boriss Mejías calls it the second-best Postgres conference in Europe)
Calendar invite: LIVE recording of Ep36 of Talking Postgres to happen on Wed Feb 18, 2026

airhacks.fm podcast with adam bien
Building a Production-Ready Postgres Kubernetes Operator in Java with Quarkus and GraalVM

airhacks.fm podcast with adam bien

Play Episode Listen Later Jan 14, 2026 65:22


An airhacks.fm conversation with Alvaro Hernandez (@ahachete) about: discussion about LLMs generating Java code with BCE patterns and architectural rules, Java being 20-30% better for LLM code generation than python and typescript, embedding business knowledge in Java source code for LLM context, stackgres as a curated opinionated stack for running Postgres on kubernetes, Postgres requiring external tools for connection pooling and high availability and backup and monitoring, StackGres as a Helm package and Kubernetes operator, comparison with oxide hardware for on-premise cloud environments, experimenting with Incus for system containers and VMS, limitations of Ansible for infrastructure automation and code reuse, Kubernetes as an API-driven architecture abstracting compute and storage, Custom Resource Definitions (CRDs) for declarative Postgres cluster management, StackGres supporting sharding with automated multi-cluster deployment, 13 lines of YAML to create 60-node sharded clusters, three interfaces for StackGres including CRDs and web console and REST API, operator written in Java with quarkus unlike typical Go-based operators, Google study showing Java faster than Go, GraalVM native compilation for 80MB container images versus 400-500MB JVM images, fabric8 Kubernetes client for API communication, reconciliation cycle running every 10 seconds to maintain desired state, pod local controller as Quarkus sidecar for local Postgres operations, dynamic extension installation without rebuilding container images, grpc bi-directional communication between control plane and control nodes, inverse connection pattern where nodes initiate connections to control plane, comparison with Jini and JavaSpaces leasing concepts from Sun Microsystems, quarter million lines of Java code in the operator mostly POJOs predating records, PostgreSQL configuration validation with 300+ parameters, automated tuning applied by default in StackGres, potential for LLM-driven optimization with clone clusters for testing, Framework Computer laptop automation with Ubuntu auto-install and Ansible and Nix, five to ten minute full system reinstall including BIOS updates

Alvaro Hernandez on twitter: @ahachete
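For a sense of what "13 lines of YAML" means in practice, a minimal SGCluster manifest looks roughly like this (field names paraphrased from the StackGres docs; treat this as a sketch and check the current CRD reference before relying on it):

```yaml
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: demo
spec:
  instances: 3            # one primary plus two replicas
  postgres:
    version: "16"
  pods:
    persistentVolume:
      size: 10Gi          # storage requested per pod
```

The operator's reconciliation cycle (every 10 seconds, per the episode) then converges the running cluster toward this declared state.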

Reversim Podcast
509 Bumpers 90

Reversim Podcast

Play Episode Listen Later Jan 11, 2026


Episode 509 of Reversim - Bumpers #90, recorded on January 1, 2026. Happy new year! Ran, Dotan, and Alon in the virtual studio (via Riverside) with a series of short items and news (some of it slightly older) from around the internet: the blogs, the GitHub repos, the Rust projects, and the new LLMs of the recent period.

Postgres FM
Postgres year in review 2025

Postgres FM

Play Episode Listen Later Jan 2, 2026 47:25


Nik and Michael discuss the events and trends they thought were most important in the Postgres ecosystem in 2025. Here are some links to things they mentioned:
Postgres 18 release notes https://www.postgresql.org/docs/18/release-18.html
Our episode on Postgres 18 https://postgres.fm/episodes/postgres-18
LWLock:LockManager benchmarks for Postgres 18 (blog post by Nik) https://postgres.ai/blog/20251009-postgres-marathon-2-005
PostgreSQL bug tied to zero-day attack on US Treasury https://www.theregister.com/2025/02/14/postgresql_bug_treasury
PgDog episode https://postgres.fm/episodes/pgdog
Multigres episode https://postgres.fm/episodes/multigres
Neki announcement https://planetscale.com/blog/announcing-neki
Our 100TB episode from 2024 https://postgres.fm/episodes/to-100tb-and-beyond
PlanetScale for Postgres https://planetscale.com/blog/planetscale-for-postgres
Oracle's MySQL job cuts https://www.theregister.com/2025/09/11/oracle_slammed_for_mysql_job
Amazon Aurora DSQL is now generally available https://aws.amazon.com/about-aws/whats-new/2025/05/amazon-aurora-dsql-generally-available
Announcing Azure HorizonDB https://techcommunity.microsoft.com/blog/adforpostgresql/announcing-azure-horizondb/4469710
Lessons from Replit and Tiger Data on Storage for Agentic Experimentation https://www.tigerdata.com/blog/lessons-replit-tiger-data-storage-agentic-experimentation
Instant database clones with PostgreSQL 18 https://boringsql.com/posts/instant-database-clones
turbopuffer episode https://postgres.fm/episodes/turbopuffer
Crunchy joins Snowflake https://www.crunchydata.com/blog/crunchy-data-joins-snowflake
Neon joins Databricks https://neon.com/blog/neon-and-databricks

~~~

What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!

~~~

Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai

With credit to:
Jessie Draws for the elephant artwork

Rails with Jason
281 - Rafael Masson and Craig Kerstiens

Rails with Jason

Play Episode Listen Later Dec 31, 2025 51:47 Transcription Available


In this episode I talk with Raphael Masson, CTO of Missive, and Craig Kerstiens from Crunchy Data. We cover bootstrapping Missive from a side project (Conference Badge), growing from 3 to 15 employees, migrating off Heroku, and why most developers underutilize Postgres.

Links:
Missive
Crunchy Data
Nonsense Monthly

Postgres FM
Archiving

Postgres FM

Play Episode Listen Later Dec 19, 2025 31:03


Nik and Michael discuss a listener question about archiving a database. Here are some links to things they mentioned:
Listener request to talk about archiving https://www.youtube.com/watch?v=KFRK8PiIvTg&lc=UgyiFrO37gEgUaVhRgN4AaABAg
Our episode on "Is pg_dump a backup tool?" https://postgres.fm/episodes/is-pg_dump-a-backup-tool

~~~

What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!

~~~

Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai

With credit to:
Jessie Draws for the elephant artwork

Path To Citus Con, for developers who love Postgres
What Postgres developers can expect from PGConf.dev with Melanie Plageman

Path To Citus Con, for developers who love Postgres

Play Episode Listen Later Dec 12, 2025 76:46


What do conference planning, hacking weddings, and cat-free coding sessions have to do with Postgres? In Episode 34 of Talking Postgres, Melanie Plageman—Postgres committer and major contributor from Microsoft—joins Claire for a lively deep dive into what developers can expect at PGConf.dev 2026 as Postgres turns 30. We explore new content formats, the role of travel grants, why Tuesday becomes a full conference day, and how the hallway track often shapes the next Postgres release. Plus: creating space for new contributors to get inspired and get involved. And yes—the CFP is open until Jan 16, 2026.

Links mentioned in this episode:
Podcast: Becoming a Postgres committer with Melanie Plageman
Podcast: How I got started as a dev and in Postgres with Melanie Plageman & Thomas Munro
Conference: PGConf.dev 2026
CFP for PGConf.dev: CFP will close on Jan 16, 2026
PGConf.dev 2026: About
PGConf.dev 2026: Sponsorship levels
PGConf.dev 2026: Travel grant program
Social: LinkedIn account for PGConf.dev
POSETTE: An Event for Postgres: POSETTE CFP is open until Feb 1, 2026
Meetup: Post about inaugural PostgreSQL Nairobi Meetup in Dec 2025
PGDay Lowlands 2025: Debate on Kubernetes, session details
PGDay Lowlands 2025: Debate about autotuning, session details
Conference talk at PGCon 2019: Intro to Postgres Planner Hacking, by Melanie Plageman
Blog post: The Pac-Man Rule at Conferences, by Eric Holsher
Discord invite for PostgreSQL Hacking Mentoring server: https://discord.gg/bx2G9KWyrY
Cal invite: LIVE recording of Ep35 of Talking Postgres to happen on Wed Jan 14, 2026

Developer Voices
Will Turso Be The Better SQLite? (with Glauber Costa)

Developer Voices

Play Episode Listen Later Dec 11, 2025 111:27


SQLite is embedded everywhere - phones, browsers, IoT devices. It's reliable, battle-tested, and feature-rich. But what if you want concurrent writes? Or CDC for streaming changes? Or vector indexes for AI workloads? The SQLite codebase isn't accepting new contributors, and the test suite that makes it so reliable is proprietary. So how do you evolve an embedded database that's effectively frozen?

Glauber Costa spent a decade contributing to the Linux kernel at Red Hat, then helped build Scylla, a high-performance rewrite of Cassandra. Now he's applying those lessons to SQLite. After initially forking SQLite (which produced a working business but failed to attract contributors), his team is taking the bolder path: a complete rewrite in Rust called Turso. The project already has features SQLite lacks - vector search, CDC, browser-native async operation - and is using deterministic simulation testing (inspired by TigerBeetle) to match SQLite's legendary reliability without access to its test suite.

The conversation covers why rewrites attract contributors where forks don't, how the Linux kernel maintains quality with thousands of contributors, why Pekka's "pet project" jumped from 32 to 64 contributors in a month, and what it takes to build concurrent writes into an embedded database from scratch.

--
Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
Turso: https://turso.tech/
Turso GitHub: https://github.com/tursodatabase/turso
libSQL (SQLite fork): https://github.com/tursodatabase/libsql
SQLite: https://www.sqlite.org/
Rust: https://rust-lang.org/
ScyllaDB (Cassandra rewrite): https://www.scylladb.com/
Apache Cassandra: https://cassandra.apache.org/
DuckDB (analytical embedded database): https://duckdb.org/
MotherDuck (DuckDB cloud): https://motherduck.com/
dqlite (Canonical distributed SQLite): https://canonical.com/dqlite
TigerBeetle (deterministic simulation testing): https://tigerbeetle.com/
Redpanda (Kafka alternative): https://www.redpanda.com/
Linux Kernel: https://kernel.org/
Datadog: https://www.datadoghq.com/
Glauber Costa on X: https://x.com/glcst
Glauber Costa on GitHub: https://github.com/glommer
Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/
--

0:00 Intro
3:16 Ten Years Contributing to the Linux Kernel
15:17 From Linux to Startups: OSv and Scylla
26:23 Lessons from Scylla: The Power of Ecosystem Compatibility
33:00 Why SQLite Needs More
37:41 Open Source But Not Open Contribution
48:04 Why a Rewrite Attracted Contributors When a Fork Didn't
57:22 How Deterministic Simulation Testing Works
1:06:17 70% of SQLite in Six Months
1:12:12 Features Beyond SQLite: Vector Search, CDC, and Browser Support
1:19:15 The Challenge of Adding Concurrent Writes
1:25:05 Building a Self-Sustaining Open Source Community
1:30:09 Where Does Turso Fit Against DuckDB?
1:41:00 Could Turso Compete with Postgres?
1:46:21 How Do You Avoid a Toxic Community Culture?
1:50:32 Outro
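The deterministic simulation testing idea mentioned in the episode can be sketched in a few lines: drive all nondeterminism (operation order, injected faults) from a single seeded PRNG, so any failing run can be replayed exactly from its seed. This is an illustrative toy, not Turso's or TigerBeetle's actual harness:

```python
import random


def simulate(seed: int, steps: int = 20) -> list:
    """Generate a reproducible schedule of operations for a system under test.

    All randomness flows through one seeded PRNG, so the same seed always
    yields the same schedule (and therefore the same failure, if any).
    """
    rng = random.Random(seed)
    log = []
    for _ in range(steps):
        op = rng.choice(["write", "read", "crash", "restart"])
        log.append(op)  # a real simulator would apply op to the system here
    return log


# Same seed => identical schedule, so a failing run is replayable:
assert simulate(seed=1234) == simulate(seed=1234)
```

In a real harness the CI farm explores millions of random seeds, and any seed that surfaces a bug becomes a permanent, exactly reproducible regression test.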

Overtired
439: 5K Sicko

Overtired

Play Episode Listen Later Dec 9, 2025 75:38


The Overtired trio reunites for the first time in ages, diving into a whirlwind of health updates, hilarious anecdotes, and the latest tech obsessions. Christina shares a dramatic spinal saga while Brett and Jeff discuss everything from winning reddit contests to creating a universal markdown processor. Tune in for updates on Marked 3, the magical world of Scrivener, and why Brett’s back on Bing. Don’t miss the banter or the tech tips, and as always, get ready to laugh, learn, and maybe feel a little overtired yourself.

Sponsor

Shopify is the commerce platform behind 10% of all eCommerce in the US, from household names like Mattel and Gymshark, to brands just getting started. Get started today at shopify.com/overtired.

Chapters

00:00 Welcome to the Overtired Podcast
01:09 Christina’s Health Journey
10:53 Brett’s Insurance Woes
15:38 Jeff’s Mental Health Update
24:07 Sponsor Spot: Shopify
24:18 Sponsor: Shopify
26:23 Jeff Tweedy
27:43 Jeff’s Concert Marathon
32:16 Christina Wins Big
36:58 Monitor Setup Challenges
37:13 Ergotron Mounts and Tall Poles
38:33 Review Plans and Honest Assessments
38:59 Current Display Setup
41:30 Thunderbolt KVM and Display Preferences
42:51 MacBook Pro and Studio Comparisons
50:58 Markdown Processor: Apex
01:07:58 Scrivener and Writing Tools
01:11:55 Helium Browser and Privacy Features
01:13:56 Bing Delisting Incident

Show Links

Danny Brown’s 10 in the New York Times (gift link)
Indigo Stack
Scrivener
Helium
Bangs
Apex
Apex Syntax
Join the Marked 3 Beta
LG 32 Inch UltraFine™ evo 6K Nano IPS Black Monitor with Thunderbolt™ 5

Join the Conversation

Merch
Come chat on Discord!
Twitter/ovrtrd
Instagram/ovrtrd
Youtube
Get the Newsletter

Thanks! You’re downloading today’s show from CacheFly’s network
BackBeat Media Podcast Network

Check out more episodes at overtiredpod.com and subscribe on Apple Podcasts, Spotify, or your favorite podcast app.
Find Brett as @ttscoff, Christina as @film_girl, Jeff as @jsguntzel, and follow Overtired at @ovrtrd on Twitter.

Transcript

Welcome to the Overtired Podcast

Jeff: [00:00:00] Hello everybody. This is the Overtired podcast. The three of us are all together for the first time since the Carter administration. Um, it is great to see you both here. I am Jeff Severance Gunzel if I didn’t say that already. Um, and I’m here with Christina Warren and I’m here with Brett Terpstra and hello to both of you.

Brett: Hi.

Jeff: Great to see you both.

Brett: Yeah, it’s good to see you too. I feel like I was really deadpan in the pre-show. I’ll try to liven it up for you. I was a horrible audience. You were cracking jokes and I was just

Jeff: that’s true. Christina, before you came on, man, I was hot. I was on fire and Brett was, all Brett was doing was chewing and dropping Popsicle parts.

Brett: Yep. I ate, I ate part of a coconut outshine Popsicle off of a concrete floor, but

Jeff: It is true, and I didn’t even see him check it [00:01:00] for cat hair,

Brett: I did though.

Jeff: but I believe he did because he’s a, he’s a very

Brett: I just vacuumed in

Jeff: He’s a very good American

Brett: All right.

Christina’s Health Journey

Brett: Well, um, I, Christina has a lot of health stuff to share and I wanna save time for that. So let’s kick off the mental health corner. Um, let’s let Christina go first, because if it takes the whole show, it takes the whole show. Go for it.

Christina: Uh, I, I will not take this hold show, but thank you. Yeah. So, um, my mental health is okay-ish. Um, I would say the okay-ish part is, is because of things that are happening with my physical health and then some of the medications that I’ve had to be on, um, uh, to deal with it. Uh, prednisone. Fucking sucks, man. Never nev n never take it if you can avoid it. Um, but why Christina, why are you on prednisone or why were you on prednisone for five days?
Um, uh, and I’m not anymore to be clear, but that certainly did not help my mental health. Um, at the beginning of November, I woke up and I thought that I’d [00:02:00] slept on my shoulder wrong. And, um, uh, and, and just some, some background. I, I don’t know if this is pertinent to how my injury took place or not, but, but it, I’m sure that it didn’t help. Um, I have scoliosis and in the top and the bottom of my spine, so I have it at the top of my, like, neck area and my lower back. And so my back is like a crooked s um, this will be relevant in a, in a second, but, but I, I thought that I had slept on my back bunny, and I was like, okay, well, all right, it hurts a lot, but fine. Um, and then it, a, a couple of days passed and it didn’t get any better, and then like a week passed and I was at the point where I was like, I almost feel like I need to go to the. Emergency room, I’m in pain. That is that significant. Um, and, you know, didn’t get any better. So I took some of grant’s, Gabapentin, and I took, um, some, some, uh, a few other things and I was able to get in with like a, a, a sports and spine guy. Um, and um, [00:03:00] he looked at me and he was like, yeah, I think that you have like a, a, a bolting disc, also known as a herniated disc. Go to physical therapy. See me later. We’ll, we’ll deal with it. Um. Basically like my whole left side was, was, was really sore and, and I had a lot of pain and then I had numbness in my, my fingers and um, and, and that was a problem the next day, which was actually my birthday. The numbness had at this point spread to my right side and also my lower extremities. And so at this point I called the doctor and he was like, yeah, you should go to the er. And so I went to the ER and, and they weren’t able to do anything for me other than give me, you know, like, um, you know, I was hoping they might give me like, some sort of steroid injection or something. 
They wouldn’t do anything other than, um, basically, um, they gave me like another type of maybe, maybe pain pill or whatever. Um, but that allowed the doctor to go ahead and. Write, uh, write up an MRI took forever for me to get an MRI, I actually had to get it in Atlanta. [00:04:00] Fun fact, uh, sometimes it is cheaper to just pay and not go through insurance and get an MR MRI and, um, a, um, uh, an x-ray, um, I was able to do it for $450

Jeff: Whoa. Really?

Christina: Yeah, $400 for the MR mri. $50 for the x-ray.

Jeff: Wow.

Christina: Yeah. Yeah.

Brett: how I, they, I had an MRI, they charged me like $1,200 and then they failed to bill insurance ’cause I was between insurance.

Christina: Yes. Yeah. So what happened was, and and honestly that was gonna be the situation that I was in, not between insurance stuff, but they weren’t even gonna bill insurance. And insurance only approved certain facilities and to get into those facilities is almost impossible. Um, and so, no, there are a lot of like get an MR, I now get a, you know, mammogram, get ghetto, whatever places. And because America’s healthcare system is a HealthScape, you can bypass insurance and they will charge you way less than whatever they bill insurance for. So I, I don’t know if it’s part of the country, you know, like Seattle I think might [00:05:00] probably would’ve been more expensive. But yeah, I was able to find this place like a mile from like, not even a mile from where my parents lived, um, that did the x-rays and the MRI for $450 total.

Brett: I, I hate, I hate that. That’s true, but

Christina: Me too. Me too. No, no. It pisses me off. Honestly, it makes me angry because like, I’m glad that I was able to do that and get it, you know, uh, uh, expedited. Then I go into the spine, um, guy earlier this week and he looks at it and he’s like, yep, you’ve got a massive bulging disc on, on C seven, which is the, the part of your lower cervical or cervical spine, which is your neck.
Um, and it’s where it connects to your ver bray. It’s like, you know, there are a few things you can do. You can do, you know, injections, you can do surgery. He is like, I’m gonna recommend you to a neurosurgeon. And I go to the neurosurgeon yesterday and he was showing me or not, uh, yeah, yesterday he was showing me the, the, the, the scans and, and showing like you up close and it’s, yeah, it’s pretty massive. Like where, where, where the disc is like it is. You could see it just from one view, like, just from like [00:06:00] looking at it like, kind of like outside, like you could actually like see like it was visible, but then when you zoomed in it’s like, oh shit, this, this thing is like massive and it’s pressing on these nerves that then go into my, my hands and other areas. But it’s pressing on both sides. It’s primarily on my left side, but it’s pressing on on my right side too, which is not good. So, um, he basically was like, okay. He was like, you know, this could go away. He was like, the pain isn’t really what I’m wanting to, to treat here. It’s, it’s the, the weakness because my, my left arm is incredibly weak. Like when they do like the, the test where like they, they push back on you to see like, okay, like how, how much can you, what, like, I am, I’m almost immediately like, I can’t hold anything back. Right? Like I’m, I’m, I’m like a toddler in terms of my strength. So, and, and then I’m freaked out because I don’t have a lot of feeling in my hands and, and that’s terrifying. Um, I’m also.

Jeff: so terrifying,

Christina: I’m, I’m also like in extreme pain because of, of, of where this sits. Like I can’t sleep well. Like [00:07:00] the whole thing sucks. Like the MRI, which was was like the most painful, like 25 minutes, like of my existence. ’cause I was laying flat on my back. I’m not allowed to move and I’m just like, I’m in just incredible pain with that part of, of, of, of my, my side. Like, it, it was. It was terrible.
Um, but, uh, but he was like, yeah. Um, these are the sorts of surgical options we have. Um, he’s gonna, um, do basically what what he wants to do is basically do a thing where he would put in a, um, an artificial or, or synthetic disc. So they’re gonna remove the disc, put in a synthetic one. They’ll go in through the, the front of my throat to access the, my, my, my, my spine. Um, put that there and, um, you know, I’ll, I’ll be overnight in the hospital. Um, and then it’ll be a few weeks of recovery and the, the, the pain should go away immediately. Um, but it, it could be up to two years before I get full, you know, feeling back in my arm. So anyway,

Jeff: years, Jesus. And

Christina: I mean, and hopefully less than that, but, but it could be [00:08:00] up to that.

Jeff: there’s no part of this at this point. That’s a mystery to you, right?

Christina: The mystery is, I don’t know how this happened.

Jeff: You don’t know how it happened, right? Of course. Yeah, of course. Yeah. Yeah.

Brett: So tell, tell us about the ghastly surgery. The, the throat thing really threw me like, I can’t imagine that

Christina: yeah, yeah. So, well, ’cause the thing is, is that usually if what they just do, like spinal fusion, they’ll go in at the back of your neck, um, and then they’ll remove the, the, um, the, the, the, the disc. And then they’ll fuse your, your, your two bones together. Basically. They’ll, they’ll, they’ll, they’ll fuse this part of the vertebrae, but because they’re going to be replacing the, the disc, they need more room. So that’s why they have to go in through the, through, through basically your throat so that they can have more room to work.

Jeff: Good lord. No thank you.

Brett: Ugh. Wow.

Jeff: Okay.

Brett: I am really sorry that is happening. That is, that is, that dwarfs my health concerns. That is just constant pain [00:09:00] and, and it would be really scary.

Christina: Yeah. Yeah. It’s not great.
It’s not great, but I’m, I’m, I’m doing what I can and, uh, like I have, you know, a small amount of, of Oxycodine and I have like a, a, a, you know, some other pain medication and I’m taking the gabapentin and like, that’s helpful. The bad part is like your body, like every 12, 15 hours, like whatever, like the, the, the cycle is like, you feel it leave your system and like if you’re asleep, you wake up, right? Like, it’s one of those things, like, you immediately feel it, like when it leaves your system. And I’ve never had to do anything for pain management before. And they have me on a very, they have me like on the smallest amount of like, oxycodone you can be on. Um, and I’m using it sparingly because I don’t wanna, you know, be reliant on, on it or whatever. But it, it, but it is one of those things where I’m like, yeah, like sometimes you need fucking opiates because, you know, the pain is like so constant. And the thing is like, what sucks is that it’s not always the same type of pain. Like sometimes it’s throbbing, sometimes it’s sharp, sometimes it’s like whatever. It sucks. But the hardest thing [00:10:00] is like, and. This does impact my mental health. Like it’s hard to sleep. Like, and I’m a side sleeper. I’m a side sleeper, and I’m gonna have to become a back sleeper. So, you know. Yeah. It’s just, it’s, it’s not great. It’s not great, but, you know, that, that, that, that, that’s me. The, the good news is, and I’m very, very gratified, like I have a good surgeon. Um, I’m gonna be able to get in to get this done relatively quickly. He had an appointment for next week. I don’t think that insurance would’ve even been able to approve things fast enough for, for, for that regard. And I have, um, commitments that I can’t make then. And I, and that would also mean that I wouldn’t be able to go visit my family for Christmas. So hopefully I’ll do it right after Christmas. 
I’m just gonna wait, you know, for, for insurance to, to do its thing, knock on wood, and then schedule, um, from there. But yeah,

Jeff: Woof.

Christina: so that’s me. Um, uh, who wants to go next? Jeff or, uh, Jeff or Brett?

Jeff: It’s like, that’s me. Hot potato throwing it.

Brett: I’ll, I’ll go.

Brett’s Insurance Woes

Brett: I can continue on the insurance topic. Um, I was, for a few months [00:11:00] after getting laid off, I was on Minsu, which is Minnesota’s Medicaid, um, v version of Medicaid. And so basically I paid nothing and I had better insurance than I usually have with, uh, you know, a full deductible and premiums and everything. And it was fantastic. I was getting all the care I needed for all of the health stuff I’m going through. Um, I, they, a, a new doctor I found, ordered the 15 tests and I passed out ’cause it was so much blood and. And it, I was getting, but I was getting all these tests run. I was getting results, we were discovering things. And then my unemployment checks, the income from unemployment went like $300 over the cap for Medicaid. So [00:12:00] all of a sudden, overnight I was cut from Medicaid and I had to do an early sign up, and now I’m on courts and it sucks bad. Like they’re not covering my meds. Last month cost me $600. I was also paying. In addition to that, a $300 premium plus every doctor’s visit is 50 bucks out of pocket. So this will hopefully only last until January, and then it’ll flip over and I will be able to demonstrate basically no income, um, until like Mark makes enough money that it gets reported. Um, and even, uh, until then, like I literally am making under the, the poverty limit. So, um, I hope to be back on Medicaid shortly. I have one more month. I’ll have to pay my $600 to refill. I [00:13:00] cashed out my 401k. Um, like things were, everything was up high enough that I had made, I.
I had made tens of thousands of dollars just on the investments in the 401k, but I also have a lot of concerns about market volatility around Nvidia and the AI bubble in general. So taking my money out of the market just felt okay to me. I paid the 10% penalty Jeff: Mm-hmm. Brett: and ultimately I came out with enough cash that I can invest on my own and cover the next six months if I don't have any other income, which I hope to. I hope to not spend my nest egg. But I did a lot of thinking and calculating, and I think I made the right choices. Anyway, [00:14:00] that will help if I have to pay for medical stuff. And then I've had insomnia, bad, on and off. Right now I'm coming off of two days of good sleep. You're catching me on a good day. Jeff: Still wouldn't laugh at my jokes. Brett: Before that, well, that's the thing, before that it was four nights where I slept two to four hours per night, and by the end of it I could barely walk. And so, two nights of sleep after a stint like that, I'm just super, I'm deadpan, I'm dazed. I could lay down and fall asleep at any time. So keep me awake. But yeah, that's me. Mental health is good. I'm in pretty high spirits considering all this financial stuff and everything. My mood has been pretty stable. I've been getting a lot of coding done. I'll tell you about projects in [00:15:00] a minute, but that's me. I'm done. Jeff: Awesome. I'm enjoying watching your cat roll around, but clearly cannot decide to lay down at this point. Brett: No, nobody is very persnickety. Jeff: I literally have to put my. Well, you say put a cat down like you used to. When you put a kid down for a nap, you say you wanna put 'em down, right? That's where it's coming from.
I now have a chair next to my desk, 'cause I have one cat that walks around yowling at about 11:00 AM while I'm working, and I have to, like, put 'em down for a nap. It's pathetic that I do that. Let's just be clear. Brett: Yeah. Jeff: Soulmate, though. Jeff's Mental Health Update Jeff: Um, I'm doing good. I've been feeling kind of light lately, in a nice way. I've had ups and downs, but even with the ups and downs, except for one day last week, there's just been feeling kind of good in general, which is remarkable in a way, 'cause it's just a stressful time. There's some stressful business stuff, [00:16:00] a lot of stuff like that. But I'm feeling good and just, yeah, just light. I don't know, it's weird. I've just been noticing that I feel kind of light and, uh, not manic, not high light. Brett: Yeah. No, that's Jeff: and that's lovely. So yeah, I'm doing good. It's fucking cold, which sucks, 'cause it just means, for everybody that's heard about my workshop over the years, that I can't really go out there and have it be pleasant. Brett: It's been Minnesota thus far. We've had like one sub-zero day. Jeff: Whatever. It's fucking cold. Christina: Yeah. One, Brett? Brett. It's December 6th as we're recording this. One sub-zero day? That's insane. Brett: Is it Jeff: Granted, I've been dressing warm, so I'm ready to go out the door for ICE-related things. Meaning government ICE. Brett: Uh, yeah. Yeah. Jeff: So I wear my long underwear during [00:17:00] the day. 'Cause recently, my son's school, which is like six blocks from here, has a lot of Somali immigrants in it. And at one point there was ICE activity in the other direction, near me.
And so neighbors put out a call around here so that at dismissal time people would pair up at all the intersections surrounding the school. And a quick Signal group popped up, whatever. It was so amazing, because we all just popped out there, and by the time I got out, everyone was already posted up, and I was like, in these situations I am a wanderer. You want me roaming. I don't want to pair up with somebody. I just grabbed a camera with a zoom on it, and I was like, I'm in roam. It's what I was as an activist, what I was as a reporter. It's just my nature. But [00:18:00] everybody was out, and they were ready, man. And then we got the all clear, and you could just see people in the neighborhood standing down and going home. But because of the true threat and the ongoing arrests here, now that the Minneapolis stuff has started, I was wearing long underwear, and I have a little bag by the door, ready to pop out if something comes up and I can be helpful. And I guess what I'm saying is I should use that to go into the garage as well, if I'm already prepared. Brett: Right. Jeff: But here's, okay, so here's a mental health thing, actually. I've gone through a few years of just a little bit of paralysis around being able to do anything that's kind of project related, that takes some thinking, whatever it is, around the house, or things that have kind of broken over the years, whatever. So I've had this snowblower, and it's a really good snowblower. It's got headlights. And I used to love snowblowing the entire block. It just made me feel good, made me feel useful. And, sorry, I cough. I left it outside for a [00:19:00] winter and a spring, and water got into the gas tank. It rusted out in there.
I knew I couldn't start it or I'd ruin the whole damn engine. So I left it for two years, and I felt bad about myself. But this year, probably a month before the first big snowfall, I fucking replaced a gas tank and a carburetor on a machine, and I have never done anything like that in my life. And then we got the snowfall, and I snowblowed this whole block. Brett: Nice. Jeff: Great, 'cause now they all owe me. Brett: I have a little electric-powered snowblower that can handle like two inches of snow. And on big snowfalls, if you get out there every hour and keep up with it, it works. But my back right now, I can't stand still for 10 minutes, and I can't move for more than like five minutes. So I'm very disabled, and El has good days and bad days. Thus [00:20:00] far, El's been out there with a shovel, really being the hero. But we have a next-door neighbor with a big gas-powered snowblower. So we went over, brought them gifts, and asked if they would take care of our driveway on days we couldn't, like, we'd pay 'em 25 bucks to do the driveway. And he was still reluctant to accept money, but we both agreed it was better to make it a transaction. Jeff: Oh my God. You don't want to get into weird Minnesota neighbor relational. Brett: Right. You don't want the you-owe-me thing. So we have that set up. But in the process we made really good friends with our neighbor. We sat down in their living room for, I think, 45 minutes and just talked about health and politics, and it was really fun. They're retired, they're in their [00:21:00] seventies, and he always looks super grumpy. I always thought he was a mean old man. He actually laughs more easily than most people I've ever met.
When people say, oh, he's actually a teddy bear, this guy really is. He's just jovial. He just has resting angry old man face. Jeff: Or like mine. I have public misanthrope face, like when I'm out and about, especially when I'm shopping. I know that my face says, I'm gonna fucking kill you if you look me in the eye, which is not my general disposition. Brett: People used to tell me that about myself, but I feel like I carry myself differently these days than I did when I was younger. Jeff: You know what I learned? Have you both watched Veep? Christina: Yes. Jeff: You know Richard Splett, right? He always kind of has this sweet half smile, and he's kind of looking up. And I figured out at one point, I was in an airport, which is where my kill-everybody face especially comes up, just to be clear, TSA, it's just a feeling inside, I [00:22:00] have no desire to act this out. I realized that if I make the Richard Splett face, which I can try to make for you now, if I just make the Richard Splett face, my whole disposition Brett: yeah. Yeah. Jeff: and I even feel a little better. And so I just wanna recommend that to people. Look up Richard Splett, look at his face. Christina: Hey, future President Richard Splett. Jeff: Future President Richard Splett, also excellent in Detroiters. That's all I wanted to say about that. Brett: I have found that when I'm texting with someone, if I start to get frustrated, you know that point where you're still adding smiley emoticons even though you're actually getting pissed off, but you don't wanna sound super bitchy about it, so you're adding smiles.
I have found that when I add a smiley emoji in those circumstances, if I actually smile before I send it, my [00:23:00] mood will adjust to match the tone I'm trying to convey, and it lessens my frustration with the other person. Jeff: A little joy wrist rocket. Christina: Yeah. Hey, I mean, no, but that's interesting. They've done studies that show that, right? Some of this is all bullshit to a certain extent, but there is something to be said for, like, the power of positive thinking, and if you go into things with different types of attitudes, or even if you go into job interviews or other situations and you act confident, or you smile, or you act happy or whatever, even if you're not, those sorts of endorphin reactions can be real. So that's interesting. Brett: Yeah, I found going into job interviews with my usual sarcastic and bitter kind of mindset, Jeff: I already hate this job. Brett: it doesn't play well. It doesn't play well. So what are your weaknesses? Fuck off. Um, [00:24:00] Christina: Right. Well, I hate people. Jeff: Yeah, dealing with motherfuckers like you, that's one weakness. Sponsor Spot: Shopify Brett: Let's do a sponsor spot, and then I want to hear about Christina winning a contest. Christina: Yes. Jeff: Very Brett: You wanna take it away? Sponsor: Shopify Jeff: I will. Our sponsor this week is Shopify. Have you been dreaming of owning your own business? Is that why you can't sleep? In addition to having something to sell, you need a website. And I'll tell you what, that's been true for a long time. You need a payment system, you need a logo, you need a way to advertise to new customers.
It can all be overwhelming and confusing, but that is where today's sponsor, Shopify, comes in. Shopify is the commerce platform behind millions of businesses around the world and 10% of all e-commerce in the US, from household names like Mattel and Gymshark to brands just getting started. Get started with your own design studio: with hundreds of ready-to-use [00:25:00] templates, Shopify helps you build a beautiful online store to match your brand's style. Accelerate your content creation: Shopify is packed with helpful AI tools that write product descriptions, page headlines, and even enhance your product photography. Get the word out like you have a marketing team behind you. Easily create email and social media campaigns wherever your customers are scrolling or strolling. And best yet, Shopify is your commerce expert, with world-class expertise in everything from managing inventory to international shipping to processing returns and beyond. If you're ready to sell, you are ready for Shopify. Turn your big business idea into reality with Shopify on your side. Sign up for your $1-per-month trial and start selling today at shopify.com/overtired. Go to shopify.com/overtired. What was that? Say it with me: shopify.com/overtired. [00:26:00] Cha-ching. Brett: The group input on the last URL, I feel like we can charge extra for that. That was Jeff: Yeah. Cha-ching. Brett: They got the chorus, they got the Overtired Christina: You did. You got the Overtired Jeff: They didn't think to ask for it, but that's our brand. Christina: shopify.com/overtired. Jeff Tweedy Jeff: I was watching a Stephen Colbert interview with Jeff Tweedy, who just put out a triple album, and it was a very thoughtful, sweet interview. And then Stephen Colbert said, you know, you're not supposed to do this. And Jeff Tweedy said, it's all part of my career-long effort to leave the public wanting less. Christina: Ha, Jeff: That was a great bit.
Christina: That's a fantastic bit. A side note: there are a couple of really good NPR Tiny Desks that have come out in the last couple of weeks. One is, shockingly, I'll just be a fucking boomer about it, the Goo Goo Dolls. Theirs was [00:27:00] great. It's fantastic. They did a great job. It already has millions of views; it racked up over a million views, I think, in less than 24 hours. But Brandi Carlile did one the other day, and hers is really, really good too. So yeah. Anyway, you saying Jeff Tweedy, maybe, I don't know how I got from Wilco to there. Jeff: Yeah. Well, he's done his own good Christina: He has. He has done his own good. That's honestly probably what I was thinking of. Jeff: It's my favorite Jeff besides me, because Bezos, he's not in the game. Christina: No, he's not. No. He's not on the Christmas card list at all. Jeff: Oh man. Jeff's Concert Marathon Jeff: Can I just tell you guys that I did something crazy a couple weeks ago? I went to three shows in one week, like I was fucking 22. Brett: Good grief. Jeff: And it was a blast. So, okay, the background of this is my oldest son [00:28:00] loves hip hop, and when we drive him to college and back, or when I do, it's often just me. He goes deep, and it's a lot of kind of indie hip hop. He listens to interesting shit, but he will go deep, and he'll just give me a tour through someone's discography or through all their features somewhere, whatever it is.
And it's the kind of input that I love, which is, even if it's not my genre, if you're passionate and you can weave me through the interrelationships and the history, whatever it is, I'm in. So as a result of that, he made me a huge fan of Danny Brown, and made me a huge fan of this guy billy woods. And so what happened was I went to a hip hop show at the 7th Street Entry, which is attached to First Avenue. It's a little club, very small, lovely little place, the only place my band could sell out. I watched a hip hop show there on a Monday night. Tuesday night, I went to the Uptown Theater, which, Brett, is now an actually operating [00:29:00] theater for shows. And I saw Danny Brown, but I also saw two hyperpop acts, a genre I was not previously aware of, including one, which was amazing, called femtanyl. And I was in line to get into that show behind furries, behind trans kids. I was the weirdest, like, I did not belong. Underscores played, and this will mean something to somebody out there, but it didn't mean anything to me until that night. And there were times, not during Danny Brown, Danny Brown's my age, all good, but there were times where I was in the crowd, 'cause I'm tall, anybody that doesn't know, I'm very tall, and I'm wearing a not very comfortable-or-safe-guy-seeming outfit, a black hoodie, a black stocking cap. I basically looked like I'm possibly a shooter. And I'm standing among all these young people, loving it, but feeling a little like, should I go to the back? Even, like, I was leaving that show, [00:30:00] and the only people my age were people's parents that were waiting to pick them up on the way out. So anyway, that was night two. Danny Brown was awesome. And then two nights later I went to see, this is way more my speed, a band called the Dazzling Killmen, a band that came out in the nineties in St.
Louis, a noisy math rock band. Wikipedia claims they invented math rock. It's a really stupid claim, but it's a lovely, interesting band, and it's a friend of mine named Nick Sakes who fronted that band and was in all these great bands back when I was in bands, called Colossamite and Sicbay, and all of this is great shit. So they played a reunion show in this tiny punk rock club here called Cloudland, just a lovely little punk rock club. And that rounded out my week. So I was definitely a tourist the early part of the week, mostly at the Danny Brown show, but then I got to come home to my noisy punk rock [00:31:00] on Thursday night. And I fucking did three shows, and it hurt so bad. Even by the first of three bands on the second night, I was like, I don't think I can make it. And I already pregame shows with ibuprofen, just to be really clear. I microdose glucose tabs at shows. I am a full-on old man doing these things. But I did get some cred with my kids for being at a hyperpop show all by myself. And, Christina: Hell yeah. Jeff: friends seemed impressed. Christina: No, as they should be. I'm impressed. And I typically, like, I definitely go to shows more frequently, and I'm gonna be real with you, I'm like, yeah, three in one week? Jeff: That's a lot. Christina: That's a lot. That's a lot. Jeff: Man, did I feel good when I walked home from that last show, though. I was like, I fucking did it. I did not believe I wasn't gonna bail on at least two of those shows, if not all three. Anyway, just wanted to say Brett: I [00:32:00] do like one show a year, but Jeff: That's how I've been for years. This year, I think I've seen eight shows. Brett: Damn. Jeff: Yeah, it's Brett: All right, so you've been teasing us about this contest you won. Jeff: Yeah, please, Christina.
Sorry to push that off. Christina: No, no, that's completely okay. That's great. Christina Wins Big Christina: So, um, I won two 6K monitors. Brett: Damn. Jeff: Is that what those boxes are behind you? Christina: Yeah, that's what the boxes behind me are. I haven't been able to get them up because this happened; I got them literally right in the midst of all this stuff with my back. But I do have an Ergotron pole now that is here, and Grant has said that he will get them up. But yeah, I won two 32-inch 6K monitors from a Reddit contest. Brett: How, how, how, Jeff: How does this happen? How do I find a Reddit contest? Christina: Yeah, so I got lucky. Well, there was a little bit more to it than that. So how it worked was, basically, LG just put out [00:33:00] a new 32-inch 6K monitor. I'll have it linked in the show notes. We've talked about this on this podcast before, but one of my big pet peeves, things that I can't get past, is I need a retina screen. I need the perfect pixel-doubling thing that macOS deals with, because I've used a 5K screen, either through an iMac or an LG UltraFine or a Studio Display, for like 11 years. And I've been using retina displays on laptops even longer than that. So if I use a regular 4K display, it just doesn't work for me. You can use apps like BetterDisplay and other things to kind of emulate what it would be like if you doubled the resolution; it downsamples that so that it looks better than if it's just the 4K stuff, where the user interface elements are too big and whatnot. And to be clear, this is a macOS problem.
If [00:34:00] you are using Windows or Linux or any other operating system that does fractional scaling correctly, then this is not a problem. But macOS does not do fractional scaling correctly. Weirdly, iOS can; they can do 3x resolution and other things, but macOS does not. And that's weird, because some of the native resolutions on some of the MacBook Airs are not even perfectly pixel doubled, meaning Apple is already having to do a certain amount of resolution scaling to fit into their own, created-by-their-own-hubris way of insisting on only having 2x pixel doubling. Eighteen years ago, we could have had independent resolutions for UI elements and window bars. But anyway, I'm digressing. Anyway, I was looking at trying to get either a second Studio Display, which I don't wanna do because Apple's reportedly going to be putting out a new one, and they're expensive, or, there are now a number of different 6K [00:35:00] displays on the market that are not $6,000. Asus has one. There's one from a Chinese company called Kuycon that looks like a complete copy of the Pro Display XDR. It has a different panel, but it's 6K, and they've copied the whole design, and it's aluminum and it's glossy and it looks great. But I'd have to get it from a weird distributor, and if I have any issues with it, I don't really wanna have to send it back to China and whatnot. And then LG has one that they just put out. And so I've been researching these on MacRumors and on some other forums.
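[Editor's aside, not from the episode: the "pixel doubling" arithmetic Christina is describing can be sketched in a few lines. macOS renders HiDPI at an integer 2x scale, so the logical (point) resolution is just the panel resolution divided by the scale factor; the function name below is made up for illustration.]

```python
# A minimal sketch of retina "pixel doubling" math, assuming a simple
# panel-resolution / scale-factor model of macOS HiDPI rendering.

def logical_resolution(panel_w, panel_h, scale):
    """Return (logical_w, logical_h, clean_mapping) for a display scale factor.

    clean_mapping is True only when the scale is an integer and every logical
    point maps onto a whole number of physical pixels.
    """
    lw, lh = panel_w / scale, panel_h / scale
    clean = scale == int(scale) and lw.is_integer() and lh.is_integer()
    return int(lw), int(lh), clean

# 5K at 2x: 2560x1440 points with a clean 2x mapping -- the "retina sweet spot".
print(logical_resolution(5120, 2880, 2))    # (2560, 1440, True)
# 4K at 2x: only 1920x1080 points, which is why UI elements look too big.
print(logical_resolution(3840, 2160, 2))    # (1920, 1080, True)
# 4K at a fractional 1.5x gives a comfortable 2560x1440 points, but the
# pixel mapping is no longer one-point-to-whole-pixels.
print(logical_resolution(3840, 2160, 1.5))  # (2560, 1440, False)
```

This is why a 6K panel at 2x yields roughly the working area of a 3K display in points, while a 4K panel forces a choice between oversized UI (2x) and fractional scaling.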
And somebody in one of the MacRumors forums posted that there was a contest LG was running in a few different subreddits, where they were like, we're gonna be giving away either one or two monitors; tell us why this would be good for your workflow. And I guess I'm one of the people who kind of read the [00:36:00] assignment, because, okay, I'll just be honest with you guys on this podcast, because I don't think anyone from LG will hear this, and my answers were accurate anyway. But anyway, this was not the sort of contest where it was like, we will randomly select a winner. This was, the moderators and LG were going to read the responses and choose the winner. Jeff: Got it. Christina: So if you spend a little bit of time and thoughtfully write out a response, maybe you stand a better chance of winning the contest. Jeff: Yeah, yeah. Put the work in like it was 2002. Christina: Right. Anyway, I was still shocked when I woke up, like, on Halloween, and they were like, congratulations, you've won two monitors. I'm like, I'm sorry, what? Jeff: That's amazing. Christina: Yeah, yeah. Jeff: Nice work. You know, I've been staring at those boxes behind you this whole time, just being like, those look like some sweet monitors. Christina: Yeah, yeah. Monitor Setup Challenges Christina: I mean, [00:37:00] my only issue is, okay, how am I gonna get these on my desk? So I'm gonna have to do something with my iMac, and I'm probably gonna have to get rid of my 5K Studio Display, at least in the short term. Ergotron Mounts and Tall Poles Christina: But what I did do is I ordered from Ergotron, 'cause I already have two of their LX mounts, or arms.
And only one of them is being used right now. And then I have a different arm that I use for the iMac. If you call them directly, you can get them to send you a tall pole so that you can put the two arms on top of it, and that way I think I can have one pole and then have one on one side, one Jeff: I have a tall pole. Christina: and, yeah, that's what she said. Jeff: As soon as I said it, I was like, for fuck's sake. But Christina: But yeah, so that way, in theory, I can stack the monitors and have 'em side by side. I don't know. I got that; I had to call Ergotron and order that from them. [00:38:00] It was only a hundred dollars for the pole and then $50 for a handling fee. Jeff: It's not easy to ship a tall pole. Brett: That's what she said. Christina: That is exactly what she said. But yeah, the unfortunate thing is that they came in literally right before Thanksgiving, and then all my back stuff has Jeff: Yeah, no Christina: been debilitating. But I'm looking forward to getting them set up and used. Review Plans and Honest Assessments Christina: And a full review will be coming. I have to post a review on Reddit, but then I will also be doing a more in-depth review on this podcast, if anybody's interested, and in other places too, to let you know if it's worth your money or not. 'Cause like I said, there are a few other options out there. So it's not one of those things where it's like, you know, thank you very much for the free monitors; I will give an honest assessment. Current Display Setup Brett: So [00:39:00] do you currently have a two-display setup? Christina: No.
Well, yes, kind of. I have my 5K Studio Display, and then I have my iMac that I use as a second display. But otherwise, what I've had to do, and this is actually part of why I'm looking forward to this, is I have a 4K 27-inch monitor, but it's garbage, and it's one of those things where I don't wanna use it with my Mac. So I wind up only using it with my Windows machine, with my Framework desktop, my Windows or Linux machine. And even though it supports Thunderbolt, the Apple display is a pain in the ass to use with those things. It doesn't have the KVM built in. It's just not good for that situation. So yeah, this will be of this size. I mean, again, two 32-inch monitors; I don't know how I'm gonna deal with that on my Jeff: I Brett: Yeah. So right now I'm looking at two 32-inch, like, UHD monitors, Christina: Yeah, [00:40:00] Brett: and I will say that on days when my neck hurts, it sucks. It's too wide a range to pan back and forth quickly. Like, I'll throw my back out trying to keep track of stuff. But I have found that if I keep the second display for just, maybe, social media apps, that's the way I usually set it up, and then I only work on one. I tried buying an extra-wide curved display; hated it. Jeff: I've always wanted to try one, but Christina: I don't like them. Jeff: Yeah. Christina: Well, for me, it's two things. One, I don't love the whole, you know, thing or whatever. But the big thing, honestly, people are like, oh, you can get a really big 5K2K display. I'm like, that's not a 5K display. That is two 27-inch 1440p displays in one ultrawide, which is great, good for you. That's not retina. And I'm a sicko who [00:41:00] needs the pixel doubling.
I wish that my eyes could not need that, but, Jeff: A sicko that needs the pixel. Like, was that the headline of your Reddit, uh, Christina: No, it wasn't. But maybe it should be. Hi, I'm a sicko who only fucks with retina displays. Ask me anything. But no, that's a good point. Brett: I think 5K Psycho is the Christina: 5K Sicko is the post title. I like that. I like that. No, what I'm thinking about doing, and that's great to know, Brett; this kind of reaffirms my thing. Thunderbolt KVM and Display Preferences Christina: What's nice about these monitors is that they come with a built-in Thunderbolt 5 KVM, which is nice. So you could conceivably have multiple computers connected to one monitor, which I really like. I mean, look, I've bitched and moaned about the Studio Display, primarily for the price, but at the same time, if mine broke tomorrow and I didn't have any way to replace it, I've also gone on record saying I would buy a new one immediately. As mad as I am about a [00:42:00] lot of different things with it, the built-in webcam is garbage, the fact that there's not a power button is garbage, the fact that you can't use it with multiple inputs is garbage, but it's a really good display and it's what I'm used to. It's really not any better than my LG UltraFine from 2016, but, you know, whatever, it is what it is. I am a 5K sicko. But being able to connect my personal machine and my work machine at the same time to one, and then have my Windows slash Linux computer connected to another, I think that's gonna be the scenario I'm in. So I'm not gonna necessarily be in a place where I'm like, okay, I need to try to look at both of them across two 32-inch displays, 'cause I think that, like, that would be awesome.
But I feel like that's too much. Brett: I would love a decent, like, Thunderbolt KVM setup that could actually swap my hubs back and Christina: Yes. MacBook Pro and Studio Comparisons Brett: 'Cause I have a Studio and I have my M4 MacBook Pro, [00:43:00] and I actually work mostly on the MacBook Pro. But if I could easily dock it and switch everything on my desk over to it, I would work in my office more often. 'Cause honestly, the M4 MacBook Pro is a better machine than the original Studio was. I haven't upgraded my Studio to the latest, but I imagine the new one is top notch. Christina: Oh yeah. Yeah. Brett: My other one, a couple years old now, is already long in the tooth. Christina: No, I mean, they're still good. It's funny, I saw some YouTube video the other day where they were like, the best-value MacBook you can get is basically a four-year-old M1 Max. And I was like, I don't know about that, guys. I kind of disagree a little bit. But the M1 Max, which I think is what is in the Studio, is still a really, really good chip. But to your point, the new ones are still so good. Like, I have an M3 Max as my personal laptop, and [00:44:00] that's kind of like the dog chip in the M-series lineup. So I kind of regret spending six grand on that one, but it is what it is, and I'm not upgrading. Maybe next year, if the M5 Pro or M5 Max or whatever is really exceptional, maybe I'll look at, okay, how much will you give me to trade it in? But even then, I feel like I'm at that point where it's diminishing returns, just in terms of my own budget.
But, um, yeah, the, the new just info like pro or or max, whatever,

Brett: I have, I have an M4 MacBook Pro sitting around that I keep forgetting to sell. Uh, it’s the one that I, it only had a 256 gigabyte hard drive,

Jeff: what happened to me when I bought my M1,

Brett: and I, and I regretted that enough that I just ordered another one. But, uh, for various reasons, I couldn’t just return the one I didn’t

Jeff: ’cause it was.[00:45:00]

Brett: so now I, now I have to sell it and I should sell it while it’s still a top of the line machine

Christina: Sell it before, sell, sell, sell, sell it before next month, um, or, or February or whenever they sell it before then the, the Pros come out. ’cause right now the M5 base is out, but the Pros are not. So I think, feel like you could still get most of your value for it, especially since it has very few battery cycles. Be sure to put the battery cycles on your Facebook Marketplace or eBay thing or whatever. Um, I bought my, uh, she won’t listen to this so she won’t know, but, um, they, there was a, a killer Cyber Monday deal, uh, for Best Buy where they had like a, the, the, the, so it’s several years old, but it was the, the M2 MacBook Air, but the one that they upgraded to 16 gigs of RAM when Apple was like, oh, we have to have Apple Intelligence and everything, because they actually thought that they were actually gonna ship Apple Intelligence. So they like went back and they, like, they, they, you know, retconned like made the base model MacBook Air, like 16 [00:46:00] gigs. Um, and, uh, anyway, it was, it was $600, um,

Jeff: still crazy.

Christina: which, which like even for like a, a, a 2-year-old machine or whatever, I was like, yeah, she, my sister, I think she’s on like, like a 2014 or older than that. Like, like MacBook Air. She doesn’t even know where the MagSafe is. I don’t think she even knows where the laptop is.
So she’s basically doing everything like on her phone and I’m like, okay, you need a laptop of some type, but at this point. I do feel strongly that like the, the, the $600 or, or, or actually I think it was $650, it was actually less, it is actually more expensive than what the, the, the Cyber Monday sale was, um, the M1, Walmart, MacBook Air. I’m like, absolutely not like that is at this point, do not buy that. Right? Like, I, especially with eight gigs of RAM, I’m, I’m like, it’s been, it’s five years old. It’s a, it was a great machine and it was great value for a long time. $200. Cool, right? Like, if you could get something like used and, and, and, and if you could replace the battery or, you know, [00:47:00] for, for, you know, not, not too much money or whatever. Like, I, I, I could see like an argument to be made like value, right? But there’d be no way in hell that I would ever spend or tell anybody else to spend $650 on that new, but $600 for an M2 with

Jeff: Now we’re talking.

Christina: which has the redesign brand new. I’m like, okay. Spend $150 more and you could have got the M4, um, uh, MacBook Air, obviously all around better machine. But for my sister, she doesn’t need that,

Jeff: What do we have to do to put your sister in this M2 MacBook

Christina: that, that, that, that, that, that’s exactly it. So I, I, I was, well, also, it was one of those things I was like, I think that she would rather me spend the money on toys for my nephew for Santa Claus than, than, uh, giving her like a, a processor upgrade. Um,

Jeff: Claus isn’t real.

Brett: Oh shit.

Jeff: Gotcha. Every year I spoil it for somebody. This year it was Christina and Brett. Sorry guys.

Brett: right. Well, can I tell you guys

Jeff: Yeah. [00:48:00]

Brett Software

Brett: two quick projects before we do

Jeff: Hold on. You don’t have to be quick ’cause you could call it

Brett: We’re already at 45 minutes and I want

Jeff: What I’m saying, skip GrAPPtitude. This is it?

Brett: okay.
Christina: us about Marked. Tell us about your projects.

Brett: So, so Marked 3 is, there’s a public, um, TestFlight beta link. Uh, if you go to markedapp.com, not marked2app.com, uh, markedapp.com. Uh, you, there’s a link in the, in the, at the top for

Christina: Join beta. Mm-hmm.

Brett: Um, and that is public and you can join it and you can send me feedback directly through email because, um, uh, uh, the feedback reporter sucks for TestFlight and you can’t attach files. And half the time they come through as anonymous feedback and I can’t even follow up on ’em. So email me. But, um, I’ll be announcing that on my blog soon-ish. Um, right now there’s like [00:49:00] maybe a couple dozen, um, testers and I, it’s nice and small and I’m solving the biggest bugs right away. Um, so that’s been, that’s been big. Like Marked, even since we last talked, has added. Do you remember Jeff when Merlin was on and he wanted to. He wanted to be able to manage his styles, um, and disable built-in styles. There’s now a whole table based style manager where you

Jeff: saw that.

Brett: you can, you can reorder, including built-in styles. You can reorder, enable, disable, edit, duplicate. Um, it’s like a full, full fledged, um, style manager. And I just built a whole web app that is a style generator that gives you, um, automatic like rhythm calculations for your CSS and you can, you can control everything through like, uh, like UI fields instead of having to [00:50:00] write CSS. Uh, but you can also o open up a very, I’ve spent a lot of time on the CodeMirror CSS editor in the web app. Uh, so, and it’s got live preview as you edit in the CodeMirror field. Um, so that’s pretty cool. And that’s built into Marked. So if you go to style, um, generate style, it’ll load up a, a style generator for you. Anyway, there’s, there’s a ton.
I’m not gonna go into all the details, but, uh, anyone listening who uses markdown for anything, especially if you want ability to export to like Word and epub and advanced PDF export, um, join the beta. Let me know what you think. Uh, help me squash bugs. But the other thing, every time I push a beta for review before the new bug reports come in, I’ve been putting time into a tool.

Markdown Processor: Apex

Brett: I’m calling [00:51:00] Apex and um, I haven’t publicly announced this one yet, but I probably will by the time this podcast comes out.

Jeff: I mean, doesn’t this count?

Brett: It, it does. I’m saying like this, this might be a, you hear you heard it here first kind of thing, um, but if you go to github.com/tt sc slash apex, um, I built a, uh, pure C markdown processor that combines syntax from kramdown, GitHub Flavored Markdown, MultiMarkdown, maku, um, CommonMark. And basically you can write syntax from any of those processors, including all of their special features, um, and in one document, and then use Apex in its unified mode, and it’ll just figure out what. All of your syntax is supposed to do. Um, so you can take, you can port documents from one platform to another [00:52:00] without worrying about how they’re gonna render. Um, if I can get any kind of adoption with Apex, it could solve a lot of problems. Um, I built it because I want to make it the default processor in Marked ’cause right now, you, you have to choose, you know, kram

Christina: Which one?

Brett: mark and, and choosing one means you lose something in order to gain something. Um, so I wanted to build a universal one that brought together everything. And I added cool features from some extensions of other languages, such as if you have two lists in a row, normally in markdown, it’s gonna concatenate those into one list. Now you can put a caret on a line between the two lists and it’ll break it into two lists. I also added support for a.
An extension to kramdown that lets you put double, uh, carets inside a table cell and [00:53:00] create a row span. So like a cell that, that expands, you know, rows but doesn’t expand the rest of the row. Um, so you can do cell spans and row spans and it has a relaxed table version where you don’t have to have an alignment row, which is, uh, sometimes you just wanna quickly make a table. You make two lines. You put some pipes in. This will, if there’s no alignment row, it will generate a table with just a table body and table data cells in no header. It also allows footers, you can add a footer to a table by using equals in the separator line. Um, it, it’s,

Jeff: This is very civilized,

Brett: it is.

Christina: is amazing,

Brett: So where CommonMark is extremely strict about things, um, Apex is extremely permissive.

Jeff: also itty bitty things like talk about the call out boxes from like

Brett: oh yeah, it, it can handle callout syntax from Obsidian and Bear and Xcode Playgrounds. [00:54:00] Um, and it incorporates all of Marked’s syntax for like file includes and even renders like auto scroll pauses that work in Marked and some other teleprompter situations. Um, it uses file include syntax from MultiMarkdown, like, which is just like a curly brace and, uh, Marked, which is, uh, left like a double left, uh, angle bracket and then different. Brackets to surround a file name and it handles iA Writer file inclusion where you just type a forward slash and then the name of a file and it automatically detects if that file is an image or source code or markdown text, and it will import it accordingly. And if it’s a CSV file, it’ll generate a table from it automatically. It’s, it’s kind of nuts. I, it’s kind of nuts. I could not have done this [00:55:00] without Copilot. I, I am very thankful for Copilot because my C skills are not, would not on their own, have been up to this task. I know enough to debug, but yeah, a lot of these features I got a big hand from Copilot on.
Jeff: This is also Brett. This is some serious Brett Terpstra.

TURPs Hard

Christina: Yeah, it is. I was gonna say, this is like

Jeff: and also that’s right. Also, if your grandma ever wrote you a note and it, and though you couldn’t really read it, it really well, that renders perfectly

Christina: Amazing. No, I was gonna say this is like, okay, so Apex is like the perfect name ’cause this is the apex of Brett.

Jeff: Yes. Apex of Brett.

Christina: That’s also that, that’s, that’s now an alternate episode title, Apex of Brett. Because genuinely No, Brett, like I am, I am so stunned and impressed. I mean, you all, you always impressed me like you are the most impressive like developer that I, that I’ve ever known. But you, this is incredible. And, and this, I, I love this [00:56:00] because as you said, like CommonMark is incredibly strict. This is incredibly permissive. But this is great. ’cause there are those scenarios where you might have like, I wanna use one feature from one thing or one from another, or I wanna combine things in various ways, or I don’t wanna have to think about it, you know?

Brett: IALs, I forgot to mention IALs, inline attribute lists, which is a kramdown feature that lets you put curly brackets after like a paragraph and then a colon and then say, dot callout inside the curly brackets. And then when it renders the markdown, it creates that paragraph and adds class equals callout to the paragraph. Um, and in, in kramdown you can apply these to everything from list items to lists to block quotes. Like you can do ’em for spans. You could like have one after, uh, link syntax and just apply, say dot external to a link. So the IAL syntax can add IDs, classes, and uh, arbitrary [00:57:00] attributes to any element in your markdown when it renders to HTML. And, uh, and Apex has first class support for IALs.
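[Editor's note: the IAL syntax Brett describes above can be sketched as a toy converter. This is a hypothetical Python illustration of the kramdown-style idea, not Apex's or kramdown's actual implementation.]

```python
import re

# Toy renderer for inline attribute lists (IALs): a trailing line like
# "{: .callout #intro}" after a paragraph attaches class="callout" and
# id="intro" to the rendered <p>. Illustrative sketch only.
IAL = re.compile(r"^\{:\s*(?P<attrs>[^}]+)\}$")

def render_paragraphs(text):
    html = []
    for block in (b.strip() for b in text.split("\n\n") if b.strip()):
        *body, last = block.split("\n")
        match = IAL.match(last)
        classes, el_id = [], None
        if match and body:
            for token in match.group("attrs").split():
                if token.startswith("."):
                    classes.append(token[1:])   # .name -> class
                elif token.startswith("#"):
                    el_id = token[1:]           # #name -> id
        else:
            body = block.split("\n")            # no IAL line, keep whole block
        attrs = f' id="{el_id}"' if el_id else ""
        if classes:
            attrs += f' class="{" ".join(classes)}"'
        html.append(f"<p{attrs}>{' '.join(body)}</p>")
    return "\n".join(html)

print(render_paragraphs("A note to remember.\n{: .callout #intro}"))
# <p id="intro" class="callout">A note to remember.</p>
```

Real IAL support (as Brett notes) also covers list items, block quotes, and spans; this sketch only handles paragraphs.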
Was really, that was, that

Christina: that was really hard,

Brett: I wrote it because I wanted, I wanted MultiMarkdown, uh, for my prose writing, but I really missed the IALs.

Christina: Yes. Okay. Because see, I run into this sort of thing too, right? Because like, this is a problem like that. I mean, it’s a very niche problem, um, that, that, you know, people who listen to this podcast probably are more familiar with than other types of people. But like, when you have to choose your markdown processor, which as you said, like Brett, like that can be a problem. Like, like with, with using Marked or anything else, you’re like, what am I giving up? What do I have? And, and like for me, because I started using MultiMarkdown, um, uh, largely because of you, um, I think I was using it, I knew about it before you, but largely because of, of, of you, like MultiMarkdown has always been like kind of my, or was historically my flavor of choice. It has since shifted to being [00:58:00] GitHub Flavored Markdown. But that’s just because the industry has taken that on, right? But there were, you know, certain things like in like, you know, MultiMarkdown that work a certain way. And then yeah, there are things in kramdown. There are things in these other things in like, this is just, this is awesome. This

Brett: It is, the whole thing is built on top of cmark-gfm, which is GitHub’s port of CommonMark with the GitHub Flavored Markdown

Christina: Right.

Brett: Um, and I built, like, I kept that as a sub-module, totally clean, and built all of this as extensions on top of cmark-gfm, which, you know, so it has full compatibility with GitHub and with CommonMark, like, outta the box. And then everything else is built on top of that. So it, uh, it covers, it covers all the bases. You’ll love it

Christina: I’m so excited. No, this is awesome. And I

Brett: blazing fast.
It can render, I have a complex document that, that uses all of its features and it can render it in [00:59:00] 0.006 seconds.

Christina: that’s awesome.

Jeff: Awesome.

Christina: That’s so cool. No, this is great. And yeah, I, and I think that honestly, like this is the sort of thing like if, yeah, if you can eventually get this to like be like the engine that powers like Marked 3, like, that’ll be really slick, right? Because then like, yeah, okay, I can take one document and then just, you know, kind of, you know, wi with, with the, you know, ha have, have the compatibility mode where you’re like, okay, the unified mode or whatever yo

Postgres FM
max_connections vs migrations


Dec 5, 2025 · 44:40


Nik and Michael discuss max_connections, especially in the context of increasing it to solve problems like migrations intermittently failing(!)

Here are some links to things they mentioned:
- max_connections https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-MAX-CONNECTIONS
- Tweet about deployments vs connections issue https://x.com/brankopetric00/status/1991394329886077090
- Nik tweet in response https://x.com/samokhvalov/status/1991465573684027443
- Analyzing the Limits of Connection Scalability in Postgres (blog post by Andres Freund) https://www.citusdata.com/blog/2020/10/08/analyzing-connection-scalability/
- Exponential Backoff And Jitter (blog post by Marc Brooker) https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/

What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!

Postgres FM is produced by:
- Michael Christofides, founder of pgMustard
- Nikolay Samokhvalov, founder of Postgres.ai

With credit to:
- Jessie Draws for the elephant artwork
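[Editor's note: one of the linked posts, Marc Brooker's "Exponential Backoff And Jitter", describes the "full jitter" retry strategy that helps when many clients, e.g. app servers reconnecting after a deployment, hammer the database at once. A minimal Python sketch of that idea, illustrative and not taken from the episode:]

```python
import random

def full_jitter_delays(base=0.1, cap=10.0, attempts=6, rng=random.random):
    """Yield a sleep time per retry using 'full jitter':
    sleep = uniform random value in [0, min(cap, base * 2**attempt))."""
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        yield rng() * ceiling

# Delays grow exponentially on average, but the randomness spreads a
# reconnect storm out over time instead of retrying in lockstep.
delays = list(full_jitter_delays())
```

Each client sleeping a random fraction of the exponential ceiling is what prevents synchronized retry waves from repeatedly exhausting max_connections.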

Practical AI
Technical advances in document understanding


Dec 2, 2025 · 49:18 · Transcription Available


Chris and Daniel unpack how AI-driven document processing has rapidly evolved well beyond traditional OCR with many technical advances that fly under the radar. They explore the progression from document structure models to language-vision models, all the way to the newest innovations like Deepseek-OCR. The discussion highlights the pros and cons of these various approaches, focusing on practical implementation and usage.

Featuring:
- Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
- Daniel Whitenack – Website, GitHub, X

Sponsors:
- Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business—no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalai
- Fabi.ai – The all-in-one data analysis platform for modern teams. From ad hoc queries to advanced analytics, Fabi lets you explore data wherever it lives—spreadsheets, Postgres, Snowflake, Airtable and more. Built-in Python and AI assistance help you move fast, then publish interactive dashboards or automate insights delivered straight to Slack, email, spreadsheets or wherever you need to share it. Learn more and get started for free at fabi.ai
- Framer – Design and publish without limits with Framer, the free all-in-one design platform. Unlimited projects, no tool switching, and professional sites—no Figma imports or HTML hassles required. Start creating for free at framer.com/design with code `PRACTICALAI` for a free month of Framer Pro.

Upcoming Events: Register for upcoming webinars here!

Practical Founders Podcast
#171: Lessons from a 9-Year Bootstrap Journey to a Private Equity Exit - Darshan Rangegowda


Nov 21, 2025 · 63:56


Dharshan Rangegowda, founder of ScaleGrid, left a decade-long engineering career at Microsoft to solve a painful database operations problem he had lived firsthand. After early missteps selling to enterprises, he shifted to helping developers manage MongoDB, Redis, and Postgres on the cloud, bootstrapping the business from scratch. ScaleGrid grew steadily through product depth, technical support, and Dharshan's mastery of SEO—becoming the top organic result for many key searches. The company expanded into multiple database engines, added a distributed engineering team, and reached 20 employees by 2021, serving both SMB developers and some enterprise teams. Dharshan sold a majority stake to Spotlight Equity Partners during the pandemic after receiving an unsolicited offer, later stepping out of day-to-day operations while remaining on the board. In this conversation, Dharshan shares hard-earned lessons about product-led growth, support as strategy, SEO as a long-game advantage, and how bootstrapped founders can build meaningful outcomes in massive markets.

Key Takeaways
- SEO Power: SEO remains a long-term growth engine for bootstrappers because big VC-backed companies rarely have the patience to compound it.
- Support as Strategy: Deep, responsive technical support became ScaleGrid's differentiator and directly informed product innovation and content.
- Start at the Edges: Enterprises won't buy from a one-person startup, but edge users with urgent problems will — and they become your early beachhead.
- Bootstrap Constraints: Founder over-frugality can limit growth; strategic delegation and early team building prevent burnout and plateauing.

This Interview Is Perfect For
- Bootstrap SaaS founders
- Technical founders selling to developers
- Founders stuck in early traction or slow growth
- Anyone considering a PE exit or multi-year acquisition process

Quote from Dharshan Rangegowda, founder of ScaleGrid
"You can't take random people and make them an entrepreneur.
You have to want to be an entrepreneur and want to be on your own. You have to enjoy the freedom and the risk and the upside that comes with it and the unmitigated downside as well. You have to accept and be comfortable with it.

"You want to be on your own so you can try things. You are constantly looking at problems and new solutions. You want to be around people who like that sort of process: Here's a new problem and here's a new solution.

"But the most important thing you have to do as an entrepreneur is you have to add value to your customers. And most people forget that."

Links
- Dharshan Rangegowda on LinkedIn
- ScaleGrid on LinkedIn
- ScaleGrid website
- Spotlight Equity Partners (acquirer)
- Allied Advisers (M&A advisor)
- AngelPad Accelerator

Podcast Sponsor – Designli
This podcast is sponsored by Designli, a digital product studio that helps entrepreneurs and startups turn their software ideas into reality. From strategy and design to full-scale development, Designli guides you through every step of building custom web and mobile apps. Learn more at designli.co/practical.

The Practical Founders Podcast
Tune into the Practical Founders Podcast for weekly in-depth interviews with founders who have built valuable software companies without big funding. Subscribe to the Practical Founders Podcast using your favorite podcast app or view on our YouTube channel. Get the weekly Practical Founders newsletter and podcast updates at practicalfounders.com.

Practical Founders CEO Peer Groups
Be part of a committed and confidential group of practical founders creating valuable software companies without big VC funding. A Practical Founders Peer Group is a committed and confidential group of founders/CEOs who want to help you succeed on your terms. Each Practical Founders Peer Group is personally curated and moderated by Greg Head.

Postgres FM
What's new in EXPLAIN


Nov 21, 2025 · 45:13


Nik and Michael discuss the various changes to EXPLAIN that arrived in Postgres 18.

Here are some links to things they mentioned:
- EXPLAIN (official docs) https://www.postgresql.org/docs/current/sql-explain.html
- Using EXPLAIN (official docs) https://www.postgresql.org/docs/current/using-explain.html
- EXPLAIN glossary (pgMustard site) https://www.pgmustard.com/docs/explain
- Postgres 18 release notes https://www.postgresql.org/docs/release/18.0/
- Enable BUFFERS with EXPLAIN ANALYZE by default (commit) https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c2a4078eb
- Our (first) BUFFERS by default episode https://postgres.fm/episodes/buffers-by-default
- Show index search count in EXPLAIN ANALYZE (commit) https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=0fbceae841cb5a31b13d3f284ac8fdd19822eceb
- Our episode on Skip scan with Peter Geoghegan https://postgres.fm/episodes/skip-scan
- What do the new Index Searches lines in EXPLAIN mean? https://www.pgmustard.com/blog/what-do-index-searches-in-explain-mean
- pg_stat_plans presentation by Lukas Fittl https://www.youtube.com/watch?v=26coQV3f-wk
- Improve EXPLAIN's display of window functions (commit) https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=8b1b34254
- Show Parallel Bitmap Heap Scan worker stats in EXPLAIN ANALYZE (commit) https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5a1e6df3b
- Add information about WAL buffers being full (commit) https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=320545bfc

What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!

Postgres FM is produced by:
- Michael Christofides, founder of pgMustard
- Nikolay Samokhvalov, founder of Postgres.ai

With credit to:
- Jessie Draws for the elephant artwork
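[Editor's note: for readers who want to poke at the new fields programmatically, EXPLAIN (ANALYZE, FORMAT JSON) emits a nested plan tree, and since Postgres 18 buffer counts ship with ANALYZE by default. A small sketch that walks such a tree; the plan fragment below is hand-written for illustration and only shaped like, not identical to, real Postgres output:]

```python
import json

# Hand-written fragment shaped like EXPLAIN (ANALYZE, FORMAT JSON) output.
# Illustrative values only, not a real plan.
plan_json = """
[{"Plan": {"Node Type": "Nested Loop", "Shared Hit Blocks": 12,
  "Plans": [
    {"Node Type": "Index Scan", "Shared Hit Blocks": 8, "Index Searches": 3},
    {"Node Type": "Index Only Scan", "Shared Hit Blocks": 4, "Index Searches": 1}
  ]}}]
"""

def walk(node):
    """Yield a plan node and all of its descendants, depth first."""
    yield node
    for child in node.get("Plans", []):
        yield from walk(child)

root = json.loads(plan_json)[0]["Plan"]

# Per-node buffer hits. In real plans a parent's counts include its
# children's, so inspecting per-node values beats naively summing them.
hits = {n["Node Type"]: n.get("Shared Hit Blocks", 0) for n in walk(root)}
searches = {n["Node Type"]: n["Index Searches"]
            for n in walk(root) if "Index Searches" in n}

print(hits)      # {'Nested Loop': 12, 'Index Scan': 8, 'Index Only Scan': 4}
print(searches)  # {'Index Scan': 3, 'Index Only Scan': 1}
```

The "Index Searches" count is one of the Postgres 18 additions discussed in the episode; a value above 1 on a scan node often indicates a skip scan or a scan re-descended per outer row.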

Maintainable
Chris Zetter: Building a Database to Better Understand Maintainability


Nov 18, 2025 · 49:41


Episode Summary
In this conversation, Robby sits down with software engineer and author Chris Zetter to explore what building a relational database from scratch can teach us about maintainability, architectural thinking, and team culture. Chris shares why documentation often matters more than perfectly shaped code, why pairing accelerates learning and quality, and why “boring technology” is sometimes the most responsible choice. Together they examine how teams get stuck in local maxima, how junior engineers build confidence, and how coding agents perform when asked to implement a database.

Episode Highlights

[00:01:00] What Makes Software Maintainable
Chris explains that well-maintained software is defined by how effectively it helps teams deliver value and respond to change. In some domains—like payroll systems—the maintainability burden shifts toward documentation rather than code organization.

[00:03:50] Documentation vs. Code Comments
He describes visual docs, system diagrams, and commit–ticket links as more durable sources of truth than inline comments, which tend to rot and discourage refactoring.

[00:05:15] Rethinking Technical Debt
Chris argues that teams overuse the metaphor. He prefers naming the specific reason something is slow or brittle—like outdated libraries or rushed decisions—because that builds trust and clarity with product partners.

[00:07:45] Where Core Debt Really Lives
Earlier in his career he obsessed over long files; now he focuses on structural issues. Architecture, boundaries, and naming affect changeability far more than messy internals.

[00:08:15] Pairing as the Default Tool
Chris loves pairing for its speed, clarity, and shared context. Remote pairing has removed obstacles like mismatched keyboard setups or cramped office seating.
Tools like Tuple and Pop keep it smooth.

[00:10:20] The Mob Tool and Fast Driver Switching
He explains how the Mob CLI tool makes switching drivers nearly instant, which keeps energy high and lets everyone work in their own editor environment, reducing friction and fatigue.

[00:13:45] Pairing with Junior Engineers
Pairing helps newer developers avoid painful pull-request rework and builds confidence. But teams must balance pairing with opportunities for engineers to build autonomy.

[00:20:50] Getting Feedback Sooner
Chris emphasizes speed of feedback: showing progress early to stakeholders prevents wasted days—and sometimes weeks—of heading in the wrong direction.

[00:21:10] Boring Technology as a Feature
After being burned by abandoned frameworks, Chris champions predictable, well-supported tools for the big layers: language, framework, database. Novelty is great—but only in places where rollback is cheap.

[00:23:20] Balancing Professional Development with Organizational Needs
Developers want experience with new technology; organizations want stability. Chris describes how leaders can channel curiosity safely and productively.

[00:27:20] Build a Database Server
Chris's book, Build a Database Server, is a practical, language-agnostic guide to building a relational database from scratch. It uses a test suite as a feedback loop so developers can experiment, refactor, and learn architectural trade-offs along the way.

[00:31:45] What Writing the Book Taught Him
Creating a database deepened his appreciation for Postgres maintainers. He highlights the number of moving parts—storage engine, type system, query planner, wire protocol—and how academic papers often skip hands-on guidance.

[00:33:00] Experimenting with Coding Agents
Chris tested coding agents by giving them the book's test suite. They passed many tests but produced brittle, incoherent architecture.
Without a feedback loop for quality, the agents aimed only to satisfy test conditions—not build maintainable systems.

[00:36:55] Escaping a Local Maxima Through a Design Sprint
Chris shares a story of a team stuck maintaining a system that no longer fit business needs. A design sprint gave them space to reimagine the system, clarify naming, validate concepts, and identify which pieces were worth reusing.

[00:40:40] Rewrite vs. Refactor
He leans toward refactor for large systems but supports small, isolated rewrites when boundaries are clear.

[00:41:40] Building Trust in Legacy Code
When inheriting an old codebase, Chris advises starting with a small bug fix or UI tweak to understand deployment pipelines, test coverage, and failure modes before tackling bigger improvements.

[00:43:20] Recommended Reading
Chris recommends Turn the Ship Around! for its lessons on empowering teams to act with intent instead of waiting for permission.

Resources Mentioned
- Build a Database Server
- Chris Zetter's blog
- The Mob Programming CLI Tool
- Tuple
- Pop
- Turn the Ship Around!

Thanks to Our Sponsor!
Turn hours of debugging into just minutes! AppSignal is a performance monitoring and error-tracking tool designed for Ruby, Elixir, Python, Node.js, Javascript, and other frameworks. It offers six powerful features with one simple interface, providing developers with real-time insights into the performance and health of web applications. Keep your coding cool and error-free, one line at a time! Use the code maintainable to get a 10% discount for your first year. Check them out!

Subscribe to Maintainable on:
- Apple Podcasts
- Spotify
Or search "Maintainable" wherever you stream your podcasts.

Keep up to date with the Maintainable Podcast by joining the newsletter.

Postgres FM
Tens of TB per hour


Nov 14, 2025 · 38:47


Nik talks Michael through a recent benchmark he worked with Maxim Boguk on, to see how quickly they could provision a replica.

Here are some links to things they mentioned:
- Ultra-fast replica creation with pgBackRest (blog post by Maxim Boguk and Nik) https://postgres.ai/blog/20251105-postgres-marathon-2-012-ultra-fast-replica-creation-pgbackrest
- Copying a database episode https://postgres.fm/episodes/copying-a-database
- Add snapshot backup support for PostgreSQL in wal-g (draft PR by Andrey Borodin) https://github.com/wal-g/wal-g/pull/2101
- Multi-threaded pg_basebackup discussion 1: https://www.postgresql.org/message-id/flat/CAEHH7R4%3D_GN%2BLSsj0YZOXZ13yc%3DGk9umJOLNopjS%3DimK0c1mWA%40mail.gmail.com
- Multi-threaded pg_basebackup discussion 2: https://www.postgresql.org/message-id/flat/
- io_method https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-IO-METHOD
- pgBackRest https://github.com/pgbackrest/pgbackrest
- Add sequence synchronization for logical replication (commit) https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5509055d6956745532e65ab218e15b99d87d66ce
- Allow process priority to be set (pgBackRest feature added by David Steele) https://github.com/pgbackrest/pgbackrest/pull/2693
- Hard limit on process-max (pgBackRest issue from 2019) https://github.com/pgbackrest/pgbackrest/issues/696

What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!

Postgres FM is produced by:
- Michael Christofides, founder of pgMustard
- Nikolay Samokhvalov, founder of Postgres.ai

With credit to:
- Jessie Draws for the elephant artwork

Hanselminutes - Fresh Talk and Tech for Developers
Why Postgres? and why now? with Claire Giordano


Nov 13, 2025 · 36:11


Postgres has quietly become the world's favorite database...running startups, governments, and global clouds alike. Scott talks with Claire Giordano, long-time Postgres advocate and technologist, about the database's unlikely rise from academic roots to modern dominance. They explore its design philosophy, the open-source community that fuels it, and why Postgres keeps winning even in the age of AI and hyperscale data.https://www.postgresql.org/

Practical AI
Autonomous Vehicle Research at Waymo


Nov 13, 2025 · 52:08 · Transcription Available


Waymo's VP of Research, Drago Anguelov, joins Practical AI to explore how advances in autonomy, vision models, and large-scale testing are shaping the future of driverless technology. The conversation dives into the dual challenges of building an onboard driver and testing that driver (via large scale simulation). Drago also gives us an update on what Waymo is doing to achieve intelligent, real-time performance while ensuring proven safety and reliability.Featuring:Drago Anguelov – LinkedInChris Benson – Website, LinkedIn, Bluesky, GitHub, XDaniel Whitenack – Website, GitHub, XLinks:Waymo ResearchNew Insights for Scaling Laws in Autonomous DrivingAI in MotionSponsors: Outshift by Cisco - The open source collective building the Internet of Agents. Backed by Outshift by Cisco, AGNTCY gives developers the tools to build and deploy multi-agent software at scale. Identity, communication protocols, and modular workflows—all in one global collaboration layer. Start building at AGNTCY.org.Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business—no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalaiFabi.ai - The all-in-one data analysis platform for modern teams. From ad hoc queries to advanced analytics, Fabi lets you explore data wherever it lives—spreadsheets, Postgres, Snowflake, Airtable and more. Built-in Python and AI assistance help you move fast, then publish interactive dashboards or automate insights delivered straight to Slack, email, spreadsheets or wherever you need to share it. Learn more and get started for free at fabi.aiUpcoming Events: Register for upcoming webinars here!

Crazy Wisdom
Episode #505: From Big Data to Big Meaning: Jessica Talisman on the Hidden Architecture of Knowledge

Crazy Wisdom

Play Episode Listen Later Nov 10, 2025 72:04


In this episode of Crazy Wisdom, host Stewart Alsop talks with Jessica Talisman, founder of Contextually and creator of the Ontology Pipeline, about the deep connections between knowledge management, library science, and the emerging world of AI systems. Together they explore how controlled vocabularies, ontologies, and metadata shape meaning for both humans and machines, why librarianship has lessons for modern tech, and how cultural context influences what we call “knowledge.” Jessica also discusses the rise of AI librarians, the problem of “AI slop,” and the need for collaborative, human-centered knowledge ecosystems. You can learn more about her work at Ontology Pipeline and find her writing and talks on LinkedIn.
Check out this GPT we trained on the conversation
Timestamps:
00:00 Stewart Alsop welcomes Jessica Talisman to discuss Contextually, ontologies, and how controlled vocabularies ground scalable systems.
05:00 They compare philosophy's ontology with information science, linking meaning, categorization, and sense-making for humans and machines.
10:00 Jessica explains why SQL and Postgres can't capture knowledge complexity and how neuro-symbolic systems add context and interoperability.
15:00 The talk turns to library science's split from big data in the 1990s, metadata schemas, and the FAIR principles of findability and reuse.
20:00 They discuss neutrality, bias in corporate vocabularies, and why “touching grass” matters for reconciling internal and external meanings.
25:00 Conversation shifts to interpretability, cultural context, and how Western categorical thinking differs from China's contextual knowledge.
30:00 Jessica introduces process knowledge, documentation habits, and the danger of outsourcing how-to understanding.
35:00 They explore knowledge as habit, the tension between break-things culture and library design thinking, and early AI experiments.
40:00 Libraries' strategic use of AI, metadata precision, and the emerging role of AI librarians take focus.
45:00 Stewart connects data labeling, Surge AI, and the economics of good data with Jessica's call for better knowledge architectures.
50:00 They unpack content lifecycle, provenance, and user context as the backbone of knowledge ecosystems.
55:00 The talk closes on automation limits, human-in-the-loop design, and Jessica's vision for collaborative consulting through Contextually.
Key Insights:
Ontology is about meaning, not just data structure. Jessica Talisman reframes ontology from a philosophical abstraction into a practical tool for knowledge management—defining how things relate and what they mean within systems. She explains that without clear categories and shared definitions, organizations can't scale or communicate effectively, either with people or with machines.
Controlled vocabularies are the foundation of AI literacy. Jessica emphasizes that building a controlled vocabulary is the simplest and most powerful way to disambiguate meaning for AI. Machines, like people, need context to interpret language, and consistent terminology prevents the “hallucinations” that occur when systems lack semantic grounding.
Library science predicted today's knowledge crisis. Stewart and Jessica trace how, in the 1990s, tech went down the path of “big data” while librarians quietly built systems of metadata, ontologies, and standards like schema.org. Today's AI challenges—interoperability, reliability, and information overload—mirror problems library science has been solving for decades.
Knowledge is culturally shaped. Drawing from Patrick Lambe's work, Jessica notes that Western knowledge systems are category-driven, while Chinese systems emphasize context. This cultural distinction explains why global AI models often miss nuance or moral voice when trained on limited datasets.
Process knowledge is disappearing. The West has outsourced its “how-to” knowledge—what Jessica calls process knowledge—to other countries. Without documentation habits, we risk losing the embodied know-how that underpins manufacturing, engineering, and even creative work.
Automation cannot replace critical thinking. Jessica warns against treating AI as “room service.” Automation can support, but not substitute, human judgment. Her own experience with a contract error generated by an AI tool underscores the importance of review, reflection, and accountability in human–machine collaboration.
Collaborative consulting builds knowledge resilience. Through her consultancy, Contextually, Jessica advocates for “teaching through doing”—helping teams build their own ontologies and vocabularies rather than outsourcing them. Sustainable knowledge systems, she argues, depend on shared understanding, not just good technology.

Practical AI
Are we in an AI bubble?

Practical AI

Play Episode Listen Later Nov 10, 2025 49:41 Transcription Available


Dan and Chris unpack whether today's surge in AI deployment across enterprise workflows, manufacturing, healthcare, and scientific research signals a lasting transformation or an overhyped bubble. Drawing parallels to the dot-com era, they explore how technology integration is reshaping industries, affecting jobs, and even influencing human cognition, ultimately asking: is this a bubble, or just a fizzy new phase of innovation?
Featuring:
Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
Daniel Whitenack – Website, GitHub, X
Links: Powell says that, unlike the dotcom boom, AI spending isn't a bubble: ‘I won't go into particular names, but they actually have earnings'
Sponsors:
Outshift by Cisco - The open source collective building the Internet of Agents. Backed by Outshift by Cisco, AGNTCY gives developers the tools to build and deploy multi-agent software at scale. Identity, communication protocols, and modular workflows—all in one global collaboration layer. Start building at AGNTCY.org.
Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business—no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalai
Fabi.ai - The all-in-one data analysis platform for modern teams. From ad hoc queries to advanced analytics, Fabi lets you explore data wherever it lives—spreadsheets, Postgres, Snowflake, Airtable and more. Built-in Python and AI assistance help you move fast, then publish interactive dashboards or automate insights delivered straight to Slack, email, spreadsheets or wherever you need to share it. Learn more and get started for free at fabi.ai
Upcoming Events: Join us at the Midwest AI Summit on November 13 in Indianapolis to hear world-class speakers share how they've scaled AI solutions. Don't miss the AI Engineering Lounge, where you can sit down with experts for hands-on guidance. Reserve your spot today! Register for upcoming webinars here!

Path To Citus Con, for developers who love Postgres
Building a dev experience for Postgres in VS Code with Rob Emanuele

Path To Citus Con, for developers who love Postgres

Play Episode Listen Later Nov 7, 2025 78:40


What do guitar busking, geospatial queries, and agentic coding have to do with Postgres? In Episode 33 of Talking Postgres, principal engineer Rob Emanuele at Microsoft shares his winding path from Venice Beach to building a new VS Code extension for PostgreSQL—that works with any Postgres, anywhere. We dig into GitHub Copilot, ask vs. agent mode, and how Rob now codes in English—and then spends even more time in code review to decide what's good, what's bad, and what's dangerous. Also: how PyCon changed his life; his work on the Microsoft Planetary Computer with spatio-temporal queries and PostGIS; and how music, improv, and failure shape his approach to developer experience.
Links mentioned in this episode:
Visual Studio Marketplace: VS Code extension for PostgreSQL with ~261K downloads to date
GitHub repo: VS Code extension for PostgreSQL (for issues/discussions)
Docs: GitHub Copilot agent mode
POSETTE 2025 Talk: Introducing Microsoft's VS Code Extension for PostgreSQL, by Matt McFarland
VS Code Live: Working with PostgreSQL databases with the Microsoft PostgreSQL VS Code extension, with Olivia Guzzardo & Rob Emanuele
Talking Postgres Ep30: AI for data engineers with Simon Willison
Postgres Meetup for All: VS Code Tools for Postgres, happening on Thu Dec 11, 2025
Wikipedia: Dogfooding
Talking Postgres Ep07: Why people care about PostGIS and Postgres with Paul Ramsey & Regina Obe
POSETTE 2024 keynote: The Open Source Geospatial Community, PostGIS, & Postgres, by Regina Obe
Website: Microsoft Planetary Computer
GitHub repo: PgSTAC
Cal invite: LIVE recording of Ep34 of Talking Postgres to happen on Wed Dec 10, 2025

Postgres FM
Gapless sequences

Postgres FM

Play Episode Listen Later Oct 31, 2025 39:49


Nik and Michael discuss the concept of gapless sequences — when you might want one, why sequences in Postgres can have gaps, and an idea or two if you do want them. And one quick clarification: changing the CACHE option in CREATE SEQUENCE can lead to even more gaps; the docs mention it explicitly.
Here are some links to things they mentioned:
CREATE SEQUENCE https://www.postgresql.org/docs/current/sql-createsequence.html
Sequence Manipulation Functions https://www.postgresql.org/docs/current/functions-sequence.html
One, Two, Skip a Few (post by Pete Hamilton from incident.io) https://incident.io/blog/one-two-skip-a-few
Postgres sequences can commit out-of-order (blog post by Anthony Accomazzo / Sequin) https://blog.sequinstream.com/postgres-sequences-can-commit-out-of-order
Logical Replication of sequences (hackers thread) https://www.postgresql.org/message-id/flat/CAA4eK1LC%2BKJiAkSrpE_NwvNdidw9F2os7GERUeSxSKv71gXysQ%40mail.gmail.com
Synchronization of sequences to subscriber (patch entry in commitfest) https://commitfest.postgresql.org/patch/5111/
Get or Create (episode with Haki Benita) https://postgres.fm/episodes/get-or-create
German tank problem https://en.wikipedia.org/wiki/German_tank_problem
~~~
What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!
~~~
Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai
With credit to: Jessie Draws for the elephant artwork
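The CACHE behavior flagged in the clarification above is easy to see outside the database: each session preallocates a block of values in one trip to the shared counter, and any values an exiting session never uses are simply discarded. A minimal Python sketch of that mechanism (illustrative only; the class and names below are not Postgres internals):

```python
import threading

class CachedSequence:
    """Toy model of a sequence with a per-session CACHE.

    Each session grabs `cache` values at once; values a session
    never uses are lost, which is why cached sequences have gaps.
    """
    def __init__(self, cache=1):
        self.cache = cache
        self._next = 1
        self._lock = threading.Lock()

    def allocate_block(self):
        # One trip to the shared counter returns a whole block.
        with self._lock:
            start = self._next
            self._next += self.cache
        return list(range(start, start + self.cache))

seq = CachedSequence(cache=10)
session_a = seq.allocate_block()   # values 1..10 reserved for session A
session_b = seq.allocate_block()   # values 11..20 reserved for session B

# Session A inserts only three rows, then disconnects: 4..10 are lost.
used = session_a[:3] + session_b[:2]
print(sorted(used))   # [1, 2, 3, 11, 12] (a gap from 4 to 10)
```

Even with CACHE 1, real Postgres sequences still gap on rollback and crash, which is the episode's larger point: gaplessness has to be built on top, not configured in.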

Practical AI
Tiny Recursive Networks

Practical AI

Play Episode Listen Later Oct 24, 2025 48:23 Transcription Available


In this fully connected episode, Daniel and Chris explore the emerging concept of tiny recursive networks introduced by Samsung AI, contrasting them with large transformer-based models. They explore how these small models tackle reasoning tasks with fewer parameters, less data, and iterative refinement, matching the giants on specific problems. They also discuss the ethical challenges of emotional manipulation in chatbots.
Featuring:
Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
Daniel Whitenack – Website, GitHub, X
Links:
Less is More: Recursive Reasoning with Tiny Networks
Researchers detail 6 ways chatbots seek to prolong ‘emotionally sensitive events'
Sponsors:
Outshift by Cisco - The open source collective building the Internet of Agents. Backed by Outshift by Cisco, AGNTCY gives developers the tools to build and deploy multi-agent software at scale. Identity, communication protocols, and modular workflows—all in one global collaboration layer. Start building at AGNTCY.org.
Fabi.ai - The all-in-one data analysis platform for modern teams. From ad hoc queries to advanced analytics, Fabi lets you explore data wherever it lives—spreadsheets, Postgres, Snowflake, Airtable and more. Built-in Python and AI assistance help you move fast, then publish interactive dashboards or automate insights delivered straight to Slack, email, spreadsheets or wherever you need to share it. Learn more and get started for free at fabi.ai
Miro – The innovation workspace for the age of AI. Built for modern teams, Miro helps you turn unstructured ideas into structured outcomes—fast. Diagramming, product design, and AI-powered collaboration, all in one shared space. Start building at miro.com
Upcoming Events: Join us at the Midwest AI Summit on November 13 in Indianapolis to hear world-class speakers share how they've scaled AI solutions. Don't miss the AI Engineering Lounge, where you can sit down with experts for hands-on guidance. Reserve your spot today! Register for upcoming webinars here!

Postgres FM
LWLocks

Postgres FM

Play Episode Listen Later Oct 17, 2025 38:22


Nik and Michael discuss lightweight locks in Postgres — how they differ from (heavier) locks, some occasions they can be troublesome, and some resources for working out what to do if you hit issues.
Here are some links to things they mentioned:
Wait Events of Type LWLock https://www.postgresql.org/docs/current/monitoring-stats.html#WAIT-EVENT-LWLOCK-TABLE
Our episode on (heavier) locks https://postgres.fm/episodes/locks
Nik's new marathon posts https://postgres.ai/blog/tags/postgres-marathon
Postgres LISTEN/NOTIFY does not scale (blog post by Recall.ai) https://www.recall.ai/blog/postgres-listen-notify-does-not-scale
Explicit Locking https://www.postgresql.org/docs/current/explicit-locking.html
pg_stat_activity https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW
Tuning with wait events for RDS for PostgreSQL https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Tuning.html
MultiXact member exhaustion incidents (blog post by Cosmo Wolfe / Metronome) https://metronome.com/blog/root-cause-analysis-postgresql-multixact-member-exhaustion-incidents-may-2025
pg_index_pilot https://gitlab.com/postgres-ai/pg_index_pilot
Myths and Truths about Synchronous Replication in PostgreSQL (talk by Alexander Kukushkin) https://www.youtube.com/watch?v=PFn9qRGzTMc
Postgres Indexes, Partitioning and LWLock:LockManager Scalability (blog post by Jeremy Schneider) https://ardentperf.com/2024/03/03/postgres-indexes-partitioning-and-lwlocklockmanager-scalability
~~~
What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!
~~~
Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai
With credit to: Jessie Draws for the elephant artwork

Practical AI
Dealing with increasingly complicated agents

Practical AI

Play Episode Listen Later Oct 16, 2025 54:56 Transcription Available


As AI systems move from simple chatbots to complex agentic workflows, new security risks emerge. In this episode, Donato Capitella unpacks how increasingly complicated architectures are making agents fragile and vulnerable. These agents can be exploited through prompt injection, data exfiltration, and tool misuse. Donato shares stories from real-world penetration tests and the design patterns for building LLM agents, and explains how his open-source toolkit Spikee (Simple Prompt Injection Kit for Evaluation and Exploitation) is helping red teams probe AI systems.
Featuring:
Donato Capitella – LinkedIn, X
Chris Benson – Website, LinkedIn, Bluesky, GitHub, X
Daniel Whitenack – Website, GitHub, X
Links: Reversec
Sponsors:
Outshift by Cisco - The open source collective building the Internet of Agents. Backed by Outshift by Cisco, AGNTCY gives developers the tools to build and deploy multi-agent software at scale. Identity, communication protocols, and modular workflows—all in one global collaboration layer. Start building at AGNTCY.org.
Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business—no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalai
Fabi.ai - The all-in-one data analysis platform for modern teams. From ad hoc queries to advanced analytics, Fabi lets you explore data wherever it lives—spreadsheets, Postgres, Snowflake, Airtable and more. Built-in Python and AI assistance help you move fast, then publish interactive dashboards or automate insights delivered straight to Slack, email, spreadsheets or wherever you need to share it. Learn more and get started for free at fabi.ai
Upcoming Events: Join us at the Midwest AI Summit on November 13 in Indianapolis to hear world-class speakers share how they've scaled AI solutions. Don't miss the AI Engineering Lounge, where you can sit down with experts for hands-on guidance. Reserve your spot today! Register for upcoming webinars here!

Supermanagers
AI + n8n: From YouTube Insights to Sales Funnels in Minutes with JD Fiscus

Supermanagers

Play Episode Listen Later Oct 16, 2025 45:52


JD Fiscus (nerding.io) shares how a late-night hack connecting MCP to n8n exploded to ~1M downloads, then demos practical MCP workflows: indexing YouTube channels for Q&A, and auto-building n8n flows from natural language. We dig into the Agentic Commerce Protocol, real security pitfalls (like destructive commands), and how to turn MCPs into products with OAuth and Stripe for authentication and metered billing. He closes with how he teaches this hands-on at the Vibe Coding Retreat.
Timestamps:
1:00 Why build it: “MCP shouldn't be Claude-only”—bridging MCP into n8n early (Dec/Jan)
2:09 Shipping under the pseudonym nerding.io; surprise seeing creators use it
2:25 n8n later ships its own MCP server/client; they nod to nerding.io & Simon
3:59 “n8n is useful, but so much more useful with MCP”
5:12 What MCP means for software: every smart company is exposing an MCP; new login/usage patterns
6:27 Agentic Commerce Protocol (ACP): Stripe + OpenAI; agents checkout across the web
8:02 Marketing to agents not humans? SEO shifts as agents comparison-shop
9:10 Early “agent mode” attempts vs protocol-based purchases (less hacky)
10:58 Likely adopters: platforms (Shopify) & big retailers; echoes of early MCP evolution
14:11 Security realities: token passing evolved to OAuth; hallucination + destructive actions risk
16:04 Personal mishap: agent ran supabase reset on a dev DB—imagine prod! Guardrails matter
17:03 Designing MCP servers: don't just “wrap your API”; use resources/prompts for agentic UX
19:04 Demo 1—Influencer MCP: index a YouTube channel, embed transcripts, ask questions in Claude
20:54 Storage: embeddings into Postgres; per-channel tables
24:46 Keeping it fresh: daily cron to ingest new videos
25:18 Demo 2—Build n8n workflows from chat using n8n MCP (by Ramullet); live docs + API
27:00 “Create a webhook → send leads to Sheets” built conversationally, with allow/deny prompts
31:02 Zapier, Gumloop: agents that build automations via natural-language steps
34:00 Next frontier: custom connectors (Claude/Cursor/OpenAI), OAuth auth flows for MCPs
39:03 Turning MCPs into products: login with Twitter → Stripe subscription → metered billing
41:12 Paid tool call demo: “paid echo” → Stripe usage event logged per user
43:41 How to learn this fast: vibecodingretreat.com (small cohorts, hands-on builds)
Tools & Technologies Mentioned (quick guide):
MCP (Model Context Protocol) — Standard for connecting models to tools/data; supports tools, resources, prompts.
n8n — Open-source automation platform; JD wrote an MCP node that went viral; also has native MCP server/client now.
Claude / Cursor / OpenAI (custom connectors) — LLM IDEs/chats that can load MCPs; custom connectors enable OAuth + productized access.
Agentic Commerce Protocol (ACP) — Early protocol (Stripe + OpenAI) for agent-initiated purchases with confirmations.
Web MCP (W3C-oriented idea) — Emerging patterns for agent↔website interactions beyond human UI flows.
OAuth — Secure, user-consented authentication for MCPs (vs passing raw tokens).
Stripe (subscriptions + metered billing) — Attach billing/usage limits to MCP calls; track per-user consumption.
YouTube API + Transcripts — Source data for the “Influencer MCP” indexing pipeline.
Embeddings + Postgres — Store vectorized transcript chunks in Postgres for retrieval (JD self-hosts).
Cron — Schedules daily ingestion of new content.
Google Sheets — Target destination in demo for simple lead funnels.
Zapier / Gumloop — Natural-language automation builders; early NLA/agent patterns.
Git / CLI commands — Cautionary tale: agents running destructive commands (e.g., resets).
Do Browser / Comet Browser — Agentic browsing tools referenced for web actions.
Fellow.ai — AI meeting assistant with security-first design; generates precise summaries/action items.
Subscribe at thisnewway.com to get the step-by-step playbooks, tools, and workflows.

Practical AI
The impact of AI on the workforce: A state-level case study

Practical AI

Play Episode Listen Later Oct 9, 2025 44:04 Transcription Available


Daniel sits down with Chelsea Linder, VP of Innovation and Entrepreneurship at TechPoint, to explore what AI innovation and impact look like on the ground. They discuss Chelsea's journey from the VC world into economic development and innovation, the growth of an AI innovation network in Indiana (funded by the SBA), lessons learned from fostering AI communities, and how businesses are actually adapting to AI. Chelsea also shares insights from TechPoint's AI workforce impact study, which explored AI-related job creation and levels of AI adoption, among other things.
Featuring:
Chelsea Linder – LinkedIn
Daniel Whitenack – Website, GitHub, X
Links: TechPoint
Sponsors:
Shopify – The commerce platform trusted by millions. From idea to checkout, Shopify gives you everything you need to launch and scale your business—no matter your level of experience. Build beautiful storefronts, market with built-in AI tools, and tap into the platform powering 10% of all U.S. eCommerce. Start your one-dollar trial at shopify.com/practicalai
Fabi.ai - The all-in-one data analysis platform for modern teams. From ad hoc queries to advanced analytics, Fabi lets you explore data wherever it lives—spreadsheets, Postgres, Snowflake, Airtable and more. Built-in Python and AI assistance help you move fast, then publish interactive dashboards or automate insights delivered straight to Slack, email, spreadsheets or wherever you need to share it. Learn more and get started for free at fabi.ai
Upcoming Events: Join us at the Midwest AI Summit on November 13 in Indianapolis to hear world-class speakers share how they've scaled AI solutions. Don't miss the AI Engineering Lounge, where you can sit down with experts for hands-on guidance. Reserve your spot today! Register for upcoming webinars here!

IGeometry
Asynchronous IO in Postgres 18

IGeometry

Play Episode Listen Later Oct 3, 2025 41:12


Postgres 18 has been released with many exciting features, such as UUIDv7, the pg_overexplain module, skip scans on composite indexes, and the most anticipated: asynchronous IO with worker and io_uring modes, which I uncover in this show. Hope you enjoy it.
0:00 Intro
1:30 Synchronous vs Asynchronous calls
3:00 Synchronous IO
6:30 Asynchronous IO
10:00 Postgres 17 synchronous IO
17:20 The challenge of Async IO in Postgres 18
20:00 io_method worker
23:00 io_method io_uring
29:30 io_method sync
31:08 Async IO isn't done!
31:30 Support for backend writers
32:36 Improve worker io_method
33:00 Direct IO support
37:00 Summary
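The worker mode covered around the 20:00 mark boils down to this: instead of a backend blocking on each read before issuing the next one (the pre-18 behavior), it submits a batch of read requests to a small pool of IO workers and collects the completions. A hedged Python sketch of that idea, with a dictionary standing in for the disk (the function and variable names are illustrative, not Postgres APIs):

```python
from concurrent.futures import ThreadPoolExecutor

# Fake "disk": block number -> page contents.
DISK = {n: f"page-{n}" for n in range(100)}

def read_block(n):
    # Stand-in for a blocking pread() of one 8kB page.
    return DISK[n]

def sequential_scan(blocks):
    # Synchronous IO, roughly the Postgres <= 17 picture:
    # one read must complete before the next is issued.
    return [read_block(n) for n in blocks]

def worker_scan(blocks, io_workers=3):
    # io_method = worker, caricatured: the backend hands all
    # reads to a small pool of IO workers and gathers results
    # in order as they complete.
    with ThreadPoolExecutor(max_workers=io_workers) as pool:
        return list(pool.map(read_block, blocks))

wanted = [5, 17, 42, 99]
# Same pages come back either way; only the issue pattern differs.
assert worker_scan(wanted) == sequential_scan(wanted)
print(worker_scan(wanted))
```

The payoff in the real system comes when `read_block` actually waits on storage: overlapping those waits is what the new io_method settings buy, and io_uring mode takes the same idea further by letting the backend submit and reap requests itself without worker processes.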

Ask Noah Show
Episode 461: Ask Noah Show 461

Ask Noah Show

Play Episode Listen Later Oct 1, 2025 54:01


F-Droid is going away if Google continues down the path of requiring that developers register their apps through Google. Jellyfin has a big update, and Steve joins the program from Texas and shares about what he's doing there.
-- During The Show --
00:52 Intro
Steve remote from Texas
Talking to someone in the same space
Advantage of face to face
08:15 Operese
Windows 11 requires certain hardware
Microsoft will play games with Windows 10 updates
Windows to Linux migration tool
Use case
Codeberg (https://codeberg.org/Operese/operese)
YouTube (https://www.youtube.com/watch?v=4YUkD5oslmc)
Nixbook (https://github.com/mkellyxp/nixbook) vs EndlessOS
Immutable distros
Alloy (https://grafana.com/docs/alloy/latest/)
Tailscale
21:50 News Wire
GNU Coreutils 9.8 - gnu.org (https://www.gnu.org/software/coreutils/manual/coreutils.html)
Postgres 18 - postgresql.org (https://www.postgresql.org/about/news/postgresql-18-released-3142)
GROOT Robot Model - yahoo.com (https://finance.yahoo.com/news/nvidia-launches-open-source-physics-183426533.html)
OBS Studio 32.0 - github.com (https://github.com/obsproject/obs-studio/releases)
RPM 6.0 - rpm.org (https://rpm.org/releases/6.0.0)
Linux 6.17 - phoronix.com (https://www.phoronix.com/news/Linux-6.17-Released)
KaOS 2025.09 - kaosx.us (https://kaosx.us/news/2025/kaos09)
Kali Linux 2025.3 - kali.org (https://www.kali.org/blog/kali-linux-2025-3-release)
NeptuneOS 9.0 - neptuneos.com (https://neptuneos.com/en/news-reader/neptuneos-9-0-maja-released.html)
MemVerge AI Memory Layer - blocksandfiles.com (https://blocksandfiles.com/2025/09/24/memverges-ambitious-long-context-ai-memmachine-memory)
Song-Prep - huggingface.co (https://huggingface.co/tencent/SongPrep-7B)
1T Parameter Model - huggingface.co (https://huggingface.co/inclusionAI/Ring-1T-preview)
CISA Sudo Vulnerability - thehackernews.com (https://thehackernews.com/2025/09/cisa-sounds-alarm-on-critical-sudo-flaw.html)
23:05 F-Droid and Developer Registration
F-Droid's stance
Principles and Trust Model
Older phones
Google decides what can run on your phone
Forcing phone use
Alternative options
People clinging to proprietary solutions
Historic cycle of open-closed-open
F-Droid Post (https://f-droid.org/en/2025/09/29/google-developer-registration-decree.html)
The Register (https://www.theregister.com/2025/09/29/googles_dev_registration_plan_will/?td=rt-3a)
43:49 Jellyfin EFCore Migration
RCs are up. PLEASE TEST!
Database migration completed
DO NOT INTERRUPT on first boot
Some plugins will need time to upgrade
32-bit ARM deprecated
Kodi as a front end to Jellyfin
Pinchflat (https://github.com/kieraneglin/pinchflat)
Jellyfin HedgeDoc (https://notes.jellyfin.org/v10.11.0_features#)
-- The Extra Credit Section --
For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard!
This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/461)
Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah)
Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com)
-- Stay In Touch --
Find all the resources for this show on the Ask Noah Dashboard
Ask Noah Dashboard (http://www.asknoahshow.com)
Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show!
Altispeed Technologies (http://www.altispeed.com/)
Contact Noah live [at] asknoahshow.com
-- Twitter --
Noah - Kernellinux (https://twitter.com/kernellinux)
Ask Noah Show (https://twitter.com/asknoahshow)
Altispeed Technologies (https://twitter.com/altispeed)