Guest Dirkjan Ochtman Panelist Richard Littauer Show Notes In this special Maintainer Month episode of Sustain, host Richard speaks with Dirkjan Ochtman, a long-time open source contributor and Rust advocate. They dive deep into what it's like maintaining critical infrastructure libraries, the motivations behind taking over "abandonware," and how funding ecosystems like GitHub Sponsors and thanks.dev help sustain low-level dependencies. Dirkjan also reflects on how Rust's design lends itself well to long-term maintainability and shares thoughts on the challenges of burnout, context switching, and ensuring project continuity. Hit the download button now! [00:01:33] Dirkjan explains how he chooses which projects to maintain, being passionate about memory safety via Rust, and maintaining tools like Rustls, Hickory DNS, and Quinn. [00:03:14] Dirkjan describes his motivation for maintaining abandonware and sees it as providing value to the community. [00:04:23] ISRG funds Dirkjan's work on memory-safe DNS and TLS libraries, and they are replacing C-based libraries with Rust equivalents. [00:05:33] Dirkjan uses thanks.dev to help fund maintainers through the full dependency graph; revenue is limited but promising. [00:08:06] Richard brings up Tidelift and Dirkjan mentions it's not yielding results for Rust projects yet because the Rust ecosystem is smaller. [00:09:30] We hear about Dirkjan's journey into Rust: he started in Python but was frustrated by its lack of type safety and performance, and created his own compiler before coming to appreciate Rust's complexity. [00:12:20] Dirkjan talks about his transition from Python to Rust. [00:13:39] Dirkjan uses PyO3 to create Python bindings for Rust libraries. [00:15:31] Richard wonders why projects become unmaintained and Dirkjan responds that people have life events, job changes, or shifting interests. [00:17:11] How are unmaintained projects flagged? Dirkjan uses the RustSec Advisory DB to detect projects with no active maintainers. 
[00:18:47] Dirkjan avoids burnout as a maintainer by keeping the scope narrow, responding only to PRs, not overcommitting, and focusing on high-efficiency, low-effort maintenance. [00:19:51] Rust has a strong type system, built-in unit tests, great CI support, and Dirkjan encourages atomic commits to simplify code review. [00:21:28] Dirkjan speaks about languages that are more maintainer safe. [00:22:18] Richard brings up attack vectors and the ‘left-pad incident.' Dirkjan shares how he builds trust via his public GitHub record. [00:24:17] We hear about Dirkjan's offboarding and succession planning as he explains handing off projects like Askama and promoting multiple maintainers to reduce bus factor. [00:26:08] Dirkjan's long-term vision for OSS sustainability: he hopes to move higher in the stack and wants to make high-quality software easier to build. [00:27:38] Dirkjan explains why he prefers asynchronous collaboration over pair programming. [00:28:52] Dirkjan discusses Rust's long-term ecosystem stability. [00:31:09] Find out where you can follow Dirkjan on the web. Quotes [00:03:23] “You call it abandonware and I call it a dependency that has a million users.” [00:19:02] “[When I take on a project], I don't take on the burden of proactively improving the project.” [00:19:11] “I will be there when someone submits a PR.” [00:20:37] “I ask folks to make small changes: atomic commits.” Spotlight [00:31:37] Richard's spotlight is Allan Day. [00:32:20] Dirkjan's spotlight is Xilem. 
Links SustainOSS (https://sustainoss.org/) podcast@sustainoss.org (mailto:podcast@sustainoss.org) richard@sustainoss.org (mailto:richard@sustainoss.org) SustainOSS Discourse (https://discourse.sustainoss.org/) SustainOSS Mastodon (https://mastodon.social/tags/sustainoss) SustainOSS Bluesky (https://bsky.app/profile/sustainoss.bsky.social) SustainOSS LinkedIn (https://www.linkedin.com/company/sustainoss/) Open Collective-SustainOSS (Contribute) (https://opencollective.com/sustainoss) Richard Littauer Socials (https://www.burntfen.com/2023-05-30/socials) Dirkjan Ochtman LinkedIn (https://www.linkedin.com/in/dochtman/?originalSubdomain=nl) Dirkjan Ochtman Blog (https://dirkjan.ochtman.nl/) Dirkjan Ochtman Mastodon (https://hachyderm.io/@djc) Dirkjan Ochtman GitHub (https://github.com/djc) Dirkjan Ochtman Bluesky (https://bsky.app/profile/djc.ochtman.nl) Rust (https://www.rust-lang.org/) Rustls (https://github.com/rustls/rustls) Hickory DNS (https://github.com/hickory-dns/hickory-dns) Quinn (https://github.com/quinn-rs/quinn) Internet Security Research Group (ISRG) (https://www.abetterinternet.org/) Let's Encrypt (https://letsencrypt.org/) Automatic Certificate Management Environment (https://en.wikipedia.org/wiki/Automatic_Certificate_Management_Environment) PyO3 user guide (https://pyo3.rs/v0.15.1/) Sustain Podcast-Episode 108: Sarah Gran and Josh Aas: Sustainable Digital Infrastructure with Memory Safe Code (https://podcast.sustainoss.org/108) Sustain Podcast-Episode 148: Ali Nehzat of thanks.dev and OSS Funding (https://podcast.sustainoss.org/148) Tidelift (https://tidelift.com/) RustSec Advisory Database-GitHub (https://github.com/RustSec/advisory-db) Askama (https://docs.rs/askama/latest/askama/) Allan Day's GNOME Blog (https://blogs.gnome.org/aday/) Xilem (https://xilem.dev/) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. 
Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guest: Dirkjan Ochtman.
Josh and Kurt talk to Brian Fox from Sonatype and Donald Fischer from Tidelift about their recent reports as well as open source. There are really interesting connections between the two reports. The overall theme seems to be that open source is huge, everywhere, and needs help. But all is not lost! There are some great ideas on what the future needs to look like. Show Notes Donald Fischer Brian Fox Tidelift Sonatype The 2024 Tidelift state of the open source maintainer report Sonatype State of the Software Supply Chain Anchore 2024 Software Supply Chain Security Report OpenSSF TAC issue 101
Josh and Kurt talk about the 2024 Tidelift maintainer report. The report is pretty big and covers a ton of ground. We focus on a few of the statistics that should worry anyone who uses open source. We've known for a while that developers are struggling, and the numbers back that up. This one feels like the old "we've tried nothing and we're all out of ideas". Show Notes THE 2024 TIDELIFT STATE OF THE OPEN SOURCE MAINTAINER REPORT Canadian passport Changelog Interviews #433 Pandas CVE
In this episode, we chat with Luis Villa, co-founder of Tidelift, about everything from supporting open source maintainers to coding with AI. Luis, a former programmer turned attorney, shares stories from his early days of discovering Linux, to his contributions to various projects and organizations including Mozilla and Wikipedia. We discussed the critical importance of open source software, the challenges faced by maintainers, including burnout, and how Tidelift works toward compensating maintainers. We also explore broader themes about the sustainability of open source projects, the impact of AI on code generation and legal concerns, and the need for a more structured and community-driven approach to long-term project maintenance. 00:00 Introduction 03:20 Challenges in Open Source Sustainability 07:43 Tidelift's Role in Supporting Maintainers 14:18 The Future of Open Source and AI 32:44 Optimism and Human Element in Open Source 35:38 Conclusion and Final Thoughts Guest: Luis Villa is co-founder and general counsel at Tidelift. Previously he was a top open source lawyer advising clients, from Fortune 50 companies to leading startups, on product development, open source licensing, and other matters. Luis is also an experienced open source community leader with organizations like the Wikimedia Foundation, where he served as deputy general counsel and then led the Foundation's community engagement team. Before the Wikimedia Foundation, he was with Greenberg Traurig, where he counseled clients such as Google on open source licenses and technology transactions, and Mozilla, where he led the revision of the Mozilla Public License. He has served on the boards at the Open Source Initiative and the GNOME Foundation, and been an invited expert on the Patents and Standards Interest Group of the World Wide Web Consortium and the Legal Working Group of OpenStreetMap. 
Recent speaking engagements include RedMonk's Monki Gras developer event, FOSDEM, and as a faculty member at the Practicing Law Institute's Open Source Software programs. Luis holds a JD from Columbia Law School and studied political science and computer science at Duke University.
In this podcast episode, host Dave Sobel interviews Paula Paul, the founder and distinguished engineer at Grayshore, about the importance of open source in businesses. Paula emphasizes that open source is already deeply integrated into most commercial applications, with a vast majority of software relying on open-source libraries. She highlights the need for businesses to effectively manage and secure their open-source dependencies, especially in light of recent instances where open-source has been used as an attack vector for social engineering. Paula discusses the challenges faced by organizations in managing dependencies on open-source packages, which have significantly increased in complexity over the years. She advises businesses to become more aware of the open-source packages they rely on and to prioritize securing customer-facing assets. Paula also recommends getting involved with organizations like the OpenJS Foundation and leveraging services from companies like Tidelift and HeroDevs to support and secure open-source dependencies. The conversation delves into the risks and benefits of using open-source software, highlighting the potential for social engineering attacks and licensing issues. Paula argues that the open-source model offers more agility and community support compared to closed-source solutions but also stresses the importance of contributing back to the open-source ecosystem. She encourages businesses to support the preservation of open source as a valuable natural resource and to align their missions with the values of the open-source community. As the discussion turns to the intersection of AI and open source, Paula sees opportunities for leveraging AI tools to enhance open-source projects, particularly in areas like code analysis and testing. She suggests that service organizations looking to engage with open source should explore projects within foundations like the OpenJS Foundation, Finos, and CNCF. 
Paula emphasizes the importance of human expertise in cybersecurity and the need for continuous monitoring and rapid response in today's threat landscape. Supported by: https://getinsync.ca/mspradio/ https://www.huntress.com/mspradio/ All our Sponsors: https://businessof.tech/sponsors/ Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/ Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/ Support the show on Patreon: https://patreon.com/mspradio/ Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on: LinkedIn: https://www.linkedin.com/company/28908079/ YouTube: https://youtube.com/mspradio/ Facebook: https://www.facebook.com/mspradionews/ Instagram: https://www.instagram.com/mspradio/ TikTok: https://www.tiktok.com/@businessoftech Bluesky: https://bsky.app/profile/businessoftech.bsky.social
In this episode of the podcast, Grizz sits down with Donald Fischer - CEO and CoFounder at Tidelift (a new Member of FINOS). We talk about Donald's journey through open source in the 90s to today, paying open source maintainers, and events on the horizon. Donald Fischer: https://www.linkedin.com/in/donaldfischer/ Tidelift: https://tidelift.com/ Upstream Event June 5 2024: https://upstream.live/ Attend the London Open Source in Finance Forum 26 June 2024: https://events.linuxfoundation.org/open-source-finance-forum-london/ 2023 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2023 FINOS Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
Josh and Kurt talk to Thomas Depierre about some of the European efforts to secure software. We touch on the CRA, MDA, FOSDEM, and more. As expected, Thomas drops a huge amount of knowledge on what's happening in open source. We close the show with a lot of ideas around how to move the needle for open source. It's not easy, but it is possible. Show Notes Thomas Depierre I am not a supplier Open Source In The European Legislative Landscape devroom Cyber Resilience Act The 2023 Tidelift state of the open source maintainer report
Mike Perham is the creator of Sidekiq, a background job processor for Ruby. He's also the creator of Faktory, a similar product for multiple language environments. We talk about the RubyConf keynote and Ruby's limitations, supporting products as a solo developer, and some ideas for funding open source like a public utility. Recorded at RubyConf 2023 in San Diego. -- A few topics covered: Sidekiq (Ruby) vs Faktory (Polyglot) Why background job solutions are so common in Ruby Global Interpreter Lock (GIL) Ractors (Actor concurrency) Downsides of Multiprocess applications When to use other languages Getting people to pay for Sidekiq Keeping a solo business Being selective about customers Ways to keep support needs low Open source as a public utility Mike Mike's blog mastodon Sidekiq faktory From Employment to Independence Ruby Ractor The Practical Effects of the GVL on Scaling in Ruby Transcript You can help correct transcripts on GitHub. Introduction [00:00:00] Jeremy: I'm here at RubyConf San Diego with Mike Perham. He's the creator of Sidekiq and Faktory. [00:00:07] Mike: Thank you, Jeremy, for having me here. It's a pleasure. Sidekiq [00:00:11] Jeremy: So for people who aren't familiar with, I guess we'll start with Sidekiq because I think that's what you're most known for. If people don't know what it is, maybe you can give like a small little explanation. [00:00:22] Mike: Ruby apps generally have two major pieces of infrastructure powering them. You've got your app server, which serves your webpages and the browser. And then you generally have something off on the side that... It processes, you know, data for a million different reasons, and that's generally called a background job framework, and that's what Sidekiq is. [00:00:41] It, Rails is usually the thing that, that handles your web stuff, and then Sidekiq is the Sidekiq to Rails, so to speak. [00:00:50] Jeremy: And so this would fit the same role as, I think in Python, there's Celery. 
and then in the Ruby world, I guess there is, uh, Resque, another kind of job queue. [00:01:02] Mike: Yeah, background job frameworks are quite prolific in Ruby. the Ruby community's kind of settled on that as the, the standard pattern for application development. So yeah, we've got, a half a dozen to a dozen different, different examples throughout history, but the major ones today are, Sidekiq, Resque, DelayedJob, GoodJob, and, and, and others down the line, yeah. Why background jobs are so common in Ruby [00:01:25] Jeremy: I think working in other languages, you mentioned how in Ruby, there's this very clear, preference to use these job scheduling systems, these job queuing systems, and I'm not. I'm not sure if that's as true in, say, if somebody's working in Java, or C sharp, or whatnot. And I wonder if there's something specific about Ruby that makes people kind of gravitate towards this as the default thing they would use. [00:01:52] Mike: That's a good question. What makes Ruby... The one that so needs a background job system. I think Ruby, has historically been very single threaded. And so, every Ruby process can only do so much work. And so Ruby oftentimes does, uh, spin up a lot of different processes, and so having processes that are more focused on one thing is, is, is more standard. [00:02:24] So you'll have your application server processes, which focus on just serving HTTP responses. And then you have some other sort of focused process and that just became background job processes. but yeah, I haven't really thought of it all that much. But, uh, you know, something like Java, for instance, heavily multi threaded. [00:02:45] And so, and extremely heavyweight in terms of memory and startup time. So it's much more frequent in Java that you just start up one process and that's it. Right, you just do everything in that one process. And so you may have dozens and dozens of threads, both serving HTTP and doing work on the side too. 
Um, whereas in Ruby that just kind of naturally, there was a natural split there. Global Interpreter Lock [00:03:10] Jeremy: So that's actually a really good insight, because... in the keynote at RubyConf, Matz, the creator of Ruby, you know, he mentioned the, how the fact that there is this global, interpreter lock, [00:03:23] or, or global VM lock in Ruby, and so you can't, really do multiple things in parallel and make use of all the different cores. And so it makes a lot of sense why you would say like, okay, I need to spin up separate processes so that I can actually take advantage of, of my, system. [00:03:43] Mike: Right. Yeah. And the, um, the GVL. is the acronym we use in the Ruby community, or GIL. Uh, that global lock really kind of is a forcing function for much of the application architecture in Ruby. Ruby, uh, applications because it does limit how much processing a single Ruby process can do. So, uh, even though Sidekiq is heavily multi threaded, you can only have so many threads executing. [00:04:14] Because they all have to share one core because of that global lock. So unfortunately, that's, that's been, um, one of the limiter, limiting factors to Sidekiq scalability is that, that lock and boy, I would pay a lot of money to just have that lock go away, but. You know, Python is going through a very long term experiment about trying to remove that lock and I'm very curious to see how well that goes because I would love to see Ruby do the same and we'll see what happens in the future, but, it's always frustrating when I come to another RubyConf and I hear another Matz keynote where he's asked about the GIL and he continues to say, well, the GIL is going to be around, as long as I can tell. [00:04:57] so it's a little bit frustrating, but. It's, it's just what you have to deal with. Ractors [00:05:02] Jeremy: I'm not too familiar with them, but they, they did mention during the keynote, I think, Ractors or something like that. 
There, there, there's some way of being able to get around the GIL but there are these constraints on them. And in the context of Sidekiq and, and maybe Ruby in general, how do you feel about those options or those solutions? [00:05:22] Mike: Yeah, so, I think it was Ruby 3.2 that introduced this concept of what they call a Ractor, which is like a thread, except it does not have the global lock. It can run independent to the global lock. The problem is, is because it doesn't use the global lock, it has pretty severe constraints on what it can do. [00:05:47] And the, and more specifically, the data it can access. So, Ruby apps and Rails apps throughout history have traditionally accessed a lot of global data, a lot of class level data, and accessed all this data in a, in a read only fashion. so there's no race conditions because no one's changing any of it, but it's still, lots of threads all accessing the same variables. [00:06:19] Well, Ractors can't do that at all. The only data Ractors can access is data that they own. And so that is completely foreign to Ruby application, traditional Ruby applications. So essentially, Ractors aren't compatible with the vast majority of existing Ruby code. So I, I, I toyed with the idea of prototyping Sidekiq and Ractors, and within about a minute or two, I just ran into these, these, uh... [00:06:51] These very severe constraints, and so that's why you don't see a lot of people using Ractors, even still, even though they've been out for a year or two now, you just don't see a lot of people using them, because they're, they're really limited, limited in what they can do. But, on the other hand, they're unlimited in how well they can scale. [00:07:12] So, we'll see, we'll see. Hopefully in the future, they'll make a lot of improvements and, uh, maybe they'll become more usable over time. 
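The isolation constraint Mike describes can be seen in a few lines of plain Ruby (3.0 or later). This is a minimal sketch, not anything from Sidekiq itself: a block that captures an outer variable is rejected when the Ractor is created, so data has to be handed in explicitly, and unshareable values are copied rather than shared.

```ruby
# Ractors run without the GVL, but they are isolated: a Ractor block
# may not capture mutable objects from the enclosing scope.
shared = [1, 2, 3]

isolation_error =
  begin
    Ractor.new { shared.sum } # captures `shared` -> rejected at creation
    nil
  rescue ArgumentError => e
    e # "can not isolate a Proc because it accesses outer variables"
  end

# Data must be passed as arguments; unshareable values are deep-copied,
# so the Ractor works on its own copy, never the caller's array.
result = Ractor.new([1, 2, 3]) { |nums| nums.sum }.take

puts isolation_error.class # ArgumentError
puts result                # 6
```

This is exactly why most existing Rails code, which leans on shared class-level and global state, can't simply be moved into Ractors.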
Downsides of multiprocess (Memory usage) [00:07:19] Jeremy: And with the existence of a job queue or job scheduler like Sidekiq, you're able to create additional processes to get around that global lock, I suppose. What are the... downsides of doing so versus another language like we mentioned Java earlier, which is capable of having true parallelism in the same process. [00:07:47] Mike: Yeah, so you can start up multiple Ruby processes to process things truly in parallel. The issue is that you do get some duplication in terms of memory. So your Ruby app may take a gigabyte per process. And, you can do copy on write forking. You can fork and get some memory sharing with copy on write semantics on Unix operating systems. [00:08:21] But you may only get, let's say, 30 percent memory savings. So, there's still a significant memory overhead to forking, you know, let's say, eight processes versus having eight threads. You know, you, you, you may have, uh, eight threads can operate in a gigabyte process, but if you want to have eight processes, that may take, let's say, four gigabytes of RAM. [00:08:48] So you, you still, it's not going to cost you eight gigabytes of RAM, you know, it's not like just one times eight, but, there's still a overhead of having those separate processes. [00:08:58] Jeremy: would you say it's more of a cost restriction, like it costs you more to run these applications, or are there actual problems that you can't solve because of this restriction. [00:09:13] Mike: Help me understand, what do you mean by restriction? Do you mean just the GVL in general, or the fact that forking processes still costs memory? [00:09:22] Jeremy: I think, well, it would be both, right? So you're, you have two restrictions right now. You have the, the GVL, which means you can't have parallelism within the same process. And then your other option is to spin up a bunch of processes, which you have said is the downside there is that you're using a lot more RAM. 
[00:09:43] I suppose my question is that Does that actually stop you from doing anything? Like, if you throw more money at the problem, you go like, we're going to have more instances, I'll pay for the RAM, it's fine, can that basically get you out of these situations or are these limitations actually stopping you from, from doing things you could do in other languages? [00:10:04] Mike: Well, you certainly have to manage the multiple processes, right? So you've gotta, you know, if one child process crashes, you've gotta have a parent or supervisor process watching all that and monitoring and restarting the process. I don't think it restricts you. Necessarily, it just, it adds complexity to your deployment. [00:10:24] and, and it's just a question of efficiency, right? Instead of being able to deploy on a, on a one gigabyte droplet, I've got to deploy to a four gigabyte droplet, right? Because I just, I need the RAM to run the eight processes. So it, it, it's more of just a purely a function of how much money am I going to have to throw at this problem. [00:10:45] And what's it going to cost me in operational costs to operate this application in production? When to use other languages? [00:10:53] Jeremy: So during the. Keynote, uh, Matz had mentioned that Rails, is really suitable as this one person framework, like you can have a very small team or maybe even yourself and, and build this product. And so I guess from... Your perspective, once you cross a certain threshold, is like, what Ruby and what Sidekiq provides not enough, and that's why you need to start looking into other languages? [00:11:24] Or like, where's the, turning point, or the, if you [00:11:29] Mike: Right, right. The, it's all about the problem you're trying to solve, right? At the end of the day, uh, the, the question is just what are we trying to solve and how are we trying to solve it? So at a higher level, you got to think about the architecture. 
if the problem you're trying to solve, if the service you're trying to build, if the app you're trying to operate. [00:11:51] If that doesn't really fall into the traditional Ruby application architecture, then you, you might look at it in another language or another ecosystem. something like Go, for instance, can compile down to a single binary, which makes deployment really easy. It makes shipping up a product. on to a user's machine, much simpler than deploying a Ruby application onto a user's desktop machine, for instance, right? [00:12:22] Um, Ruby does have this, this problem of how do you package everything together and deploy it somewhere? Whereas Go, when you can just compile to a single binary, now you've just got a single thing. And it's just... Drop it on the file system and execute it. It's easy. So, um, different, different ecosystems have different application architectures, which empower different ways of solving the same problems. [00:12:48] But, you know, Rails as a, as a one man framework, or sorry, one person framework, It, it, I don't, I don't necessarily, that's a, that's sort of a catchy marketing slogan, but I just think of Rails as the most productive framework you can use. So you, as a single person, you can maximize what you ship and the, the, the value that you can create because Rails is so productive. [00:13:13] Jeremy: So it, seems like it's maybe the, the domain or the type of application you're making. Like you mentioned the command line application, because you want to be able to deliver it to your user easily. Just give them a binary, something like Go or perhaps Rust makes a lot more sense. and then I could see people saying that if you're doing something with machine learning, like the community behind Python, it's, they're just, they're all there. [00:13:41] So Room for more domains in Ruby [00:13:41] Mike: That was exactly the example I was going to use also. 
Yeah, if you're doing something with data or AI, Python is going to be a more, a more traditional, natural choice. that doesn't mean Ruby can't do it. That doesn't mean, you wouldn't be able to solve the problem with Ruby. And, and there's, that just also means that there's more space for someone who wants to come in and make an impact in the Ruby community. [00:14:03] Find a problem that Ruby's not really well suited to solving right now and build the tooling out there to, to try and solve it. You know, I, I saw a talk, from the fellow who makes the Glimmer gem, which is a native UI toolkit. Uh, a gem for building native UIs in Ruby, which Ruby traditionally can't do, but he's, he's done an amazing job at sort of surfacing APIs to build these, um, these native, uh, native applications, which I think is great. [00:14:32] It's awesome. It's, it's so invigorating to see Ruby in a new space like that. Um, I talked to someone else who's doing the Polars gem, which is focused on data processing. So it kind of takes, um, Python and Pandas and brings that to Ruby, which is, is awesome because if you're a Ruby developer, now you've got all these additional tools which can allow you to solve new sets of problems out there. [00:14:57] So that's, that's kind of what's exciting in the Ruby community right now is just bring it into new spaces. Faktory [00:15:03] Jeremy: In addition to Sidekiq, you have, uh, another product called Faktory, I believe. And so does that serve a, a similar purpose? Is that another job scheduling, job queueing system? [00:15:16] Mike: It is, yes. And it's, it's, it's similar in a way to Sidekiq. It looks similar. It's got similar concepts at the core of it. At the end of the day, Sidekiq is limited to Ruby. Because Sidekiq executes in a Ruby VM, it executes the jobs, and the jobs are, have to be written in Ruby because you're running in the Ruby VM. [00:15:38] Faktory was my attempt to bring, Sidekiq functionality to every other language. 
I wanted, I wanted Sidekiq for JavaScript. I wanted Sidekiq for Go. I wanted Sidekiq for Python because A, a lot of these other languages also could use a system, a background job system. And the problem though is that. [00:16:04] As a single man, I can't port Sidekiq to every other language. I don't know all the languages, right? So, Faktory kind of changes the architecture and, um, allows you to execute jobs in any language. it, it replaces Redis and provides a server that you just fetch jobs from. You can use that protocol from any language to, to build your own worker processes that execute jobs in whatever language you want. [00:16:35] Jeremy: When you say it replaces Redis, so it doesn't use Redis, um, internally, it has its own. [00:16:41] Mike: It does use Redis under the covers. Yeah, it starts Redis as a child process and, connects to it over a Unix socket. And so it's really stable. It's really fast. from the outside, the, the worker processes, they just talk to Faktory. They don't know anything about Redis at all. [00:16:59] Jeremy: I see. And for someone who, like we mentioned earlier in the Python community, for example, there is, um, Celery. For someone who is using a task scheduler like that, what's the incentive to switch or use something different? [00:17:17] Mike: Well, I, I always say if you're using something right now, I'm not going to try and convince you to switch necessarily. It's when you have pain that you want to switch and move away. Maybe there's capabilities in the newer system that you really need that the old system doesn't provide, but Celery is such a widely known system that I'm not necessarily going to try and convince people to move away from it, but if people are looking for a new system, one of the things that Celery does that Faktory does not do is Celery provides like data adapters for using store, lots of different storage systems, right? [00:17:55] Faktory doesn't do that. 
Faktory is more, has more of the Rails mantra of, you know, Omakase where we choose, I choose to use Redis and that's it. You don't, you don't have a choice for what to use because who cares, you know, at the end of the day, let Faktory deal with it. it's, it's not something that, You should even necessarily be concerned about it. [00:18:17] Just, just try Faktory out and see how it works for you. Um, so I, I try to take those operational concerns off the table and just have the user focus on, you know, usability, performance, and that sort of thing. but it is, it's, it's another background job system out there for people to try out and see if they like that. [00:18:36] And, and if they want to, um, if they know Celery and they want to use Celery, more power to them. Sidekiq (Ruby) or Faktory (Polyglot) [00:18:43] Jeremy: And Sidekiq and Faktory, they serve a very similar purpose. For someone who they have a new project, they haven't chosen a job scheduling system, if they were using Ruby, would it ever make sense for them to use Faktory versus use Sidekiq? [00:19:05] Mike: Uh Faktory is excellent in a polyglot situation. So if you're using multiple languages, if you're creating jobs in Ruby, but you're executing them in Python, for instance, um, you know, if you've, I have people who are, Creating jobs in PHP and executing them in Python, for instance. That kind of polyglot scenario, Sidekiq can't do that at all. [00:19:31] So, Faktory is useful there. In terms of Ruby, Ruby is just another language to Faktory. So, there is a Ruby API for using Faktory, and you can create and execute Ruby jobs with Faktory. But, you'll find that in the Ruby community, Sidekiq is much widely... much more widely used and understood and known. So if you're just using Ruby, I think, I think Sidekiq is the right choice. [00:19:59] I wouldn't look at Faktory. But if you do need, find yourself needing that polyglot tool, then Faktory is there. 
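For readers who haven't used one of these systems, the producer/worker shape that Sidekiq, Resque, Celery, and Faktory all share can be sketched with nothing but Ruby's standard library. The `SumJob` name and JSON payload shape below are made up for illustration; they are not Sidekiq's or Faktory's actual wire format.

```ruby
require "json"

# Producers push serialized jobs onto a queue; worker threads pull and
# execute them. Real systems keep the queue in Redis so it survives
# restarts and can be shared across processes, machines, and languages.
jobs    = Queue.new
results = Queue.new

workers = 3.times.map do
  Thread.new do
    while (raw = jobs.pop)         # nil once the queue is closed and drained
      job = JSON.parse(raw)        # jobs travel as serialized payloads
      results << job["args"].sum   # "perform" the job
    end
  end
end

5.times { |i| jobs << JSON.generate({ "class" => "SumJob", "args" => [i, i] }) }
jobs.close                         # lets idle workers exit cleanly
workers.each(&:join)

totals = []
totals << results.pop until results.empty?
puts totals.sort.inspect           # [0, 2, 4, 6, 8]
```

The in-process Queue only illustrates the flow; in Sidekiq the payload sits in Redis and the work is done by a job class with a perform method, which is what lets jobs outlive the process that enqueued them.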
Temporal [00:20:07] Jeremy: And this is maybe one layer of abstraction higher, but there's a product called Temporal that has some of this job scheduling, but also this workflow component. I wonder if you've tried that out and how you think about that product. [00:20:25] Mike: I've heard of them. I don't know a lot about the product. I do have a workflow API, the Sidekiq batches, which allow you to fan out jobs and then execute callbacks when all the jobs in that batch are done. But I don't provide a high-level graphical workflow editor or anything like that. [00:20:50] Those to me are more marketing tools that you use to sell the tool for six figures, and I don't think they're usable or actually used day to day. I provide an API for developers to use, and developers don't like moving blocks of code around in a GUI. They want to write code. So yeah, Temporal, like I said, I don't know much about them. [00:21:19] Also, are they a venture capital backed startup? [00:21:22] Jeremy: They are, is my understanding. [00:21:24] Mike: Yeah. Any sort of venture capital backed startup that's building technical infrastructure, I would look long and hard at. I think open source is the right core to build on. Of course I sell commercial software, but I'm bootstrapped. I'm profitable. [00:21:46] I'm going to be around forever. VC backed startups tend to go bankrupt, because they either get big or they go out of business. So that would be my only comment: be a little bit leery about relying on commercial, venture capital backed infrastructure long term. Getting people to pay for Sidekiq [00:22:05] Jeremy: So I think that's a really interesting part about your business, because I think a lot of open source maintainers have a really big challenge figuring out how to make it a living.
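The fan-out-then-callback pattern behind the Sidekiq batches Mike mentions above can be sketched in plain Ruby. This is not the Sidekiq Pro batch API, just a self-contained illustration of the underlying idea: run a set of jobs concurrently, then fire a success callback once every job in the batch has finished.

```ruby
# Plain-Ruby sketch of the fan-out/callback pattern: the "jobs" here
# are stand-in lambdas, not real Sidekiq workers.
def run_batch(jobs, on_success:)
  results = Queue.new   # thread-safe collector for job results
  threads = jobs.map { |job| Thread.new { results << job.call } }
  threads.each(&:join)  # wait for the entire batch to finish
  on_success.call(Array.new(jobs.size) { results.pop })
end

total = nil
squares = (1..5).map { |i| -> { i * i } }  # five stand-in "jobs"
run_batch(squares, on_success: ->(rs) { total = rs.sum })
puts total  # => 55
```

In real Sidekiq Pro the jobs run on worker processes and the callback is itself enqueued as a job, but the contract is the same: the callback only fires once the whole batch is complete.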
There are so many projects that all have a very permissive license, and you can use them freely. One example I can think of: I talked with David Cramer, who's the CTO at Sentry, and, I don't think they use it anymore, but they were using Nginx, right? [00:22:39] And he's like, well, Nginx has a paid product, Nginx Plus or something, I don't know what the name is, but he was like, I'm not going to pay for it. I'm just going to use the free one. Why would I pay for the paid thing? So I'm kind of curious, from your perspective, when you were coming up with Sidekiq both as an open source product but also as a commercial one, how did you make that determination of how to make a product that's going to be useful in its open source form [00:23:15] but where I can still convince people to pay money for it? [00:23:19] Mike: Yeah. I was terrified, to be blunt, when I first started out. When I started the Sidekiq project, I knew it was going to take a lot of time. I knew that if it was successful, I was going to be doing it for the next decade, right? So I started in 2012, and here I am in 2023, over a decade later, and I'm still doing it. [00:23:38] So my expectation was met in that regard. And I knew I was not going to be able to last that long if I was making zero dollars, right? You just burn out. Nobody can last that long. Well, I guess there are a few exceptions to that rule, but money, I tend to think, makes things a little more sustainable for sure. [00:23:58] Especially if you can turn it into a full-time job solving and supporting a project that you love and that is, you know, your baby, your child, so to speak, your software creation that you've given to the world. But I was terrified. One thing I did was, at the time, I was blogging a lot. [00:24:22] And so I was telling people about Sidekiq. I was telling people what was to come.
I was talking about ideas, and the one thing that I blogged about was financial experiments. I said bluntly to the Ruby community: I'm going to be experimenting with financial stability and sustainability with this project. [00:24:42] So not only did I create this open source project, but I was also publicly saying, I need to figure out how to make this work for the next decade. And eventually that led to Sidekiq Pro. And I had to figure out how to build a closed source Ruby gem, which there's not a lot of prior art for, so I was kind of in the wild there. [00:25:11] But, you know, thankfully all the pieces came together and it was actually possible. I couldn't have done it if it wasn't possible. Like, we would not be talking if I couldn't make a private gem. So it happened to work out, and it allowed me to gate features behind a paywall, effectively. And yeah, you're right: [00:25:33] it can be tough to make people pay for software. But I'm a developer who's selling to other developers, and not just developers, open source developers, and they know that they have this financial problem, right? They know that there's this sustainability problem. And I was blunt in saying, this is my solution to my sustainability. [00:25:56] So I charge what I think is a very fair price. It's only a thousand dollars a year. To a hobbyist, that may seem like a lot of money; to a business, it's a drop in the bucket. So it was easy for developers to say, hey, listen, we want to buy this tool for a thousand bucks. It'll ensure our infrastructure is maintained for the next decade. [00:26:18] And it's relatively cheap. It's way less than, you know, a salary or even a laptop. So that's what I did, and it worked out great. People really understood. Even today, I talk to people and they say, we signed up for Sidekiq Pro to support you.
So it's really invigorating to hear people thank me, and they're actively happy that they're paying me and are customers. [00:26:49] Jeremy: It's maybe a not super common story, right, in terms of what you went through. Because when I think of open core businesses, I think of companies like GitLab, which are venture funded, a very different scenario there. I wonder, in your case, so you started in 2012, and there were probably no venture backed competitors, right? [00:27:19] People saying, we're going to make this job scheduling system and some VC is going to give me five million dollars and build a team to work on this. At the time, maybe it was Resque, which was... [00:27:35] Mike: There was a venture backed system called IronMQ. [00:27:40] Jeremy: Hmm. [00:27:41] Mike: And I'm not sure if they're still around or not, but they took one or more funding rounds. I'm not sure exactly, but they were VC backed. They were doing background jobs, scheduled jobs, running container jobs. They eventually, I think, wound up settling on Docker containers. [00:28:06] They'll basically spin up a Docker container, and that container can do whatever it wants. It can execute for a second and then shut down, or it can run for however long. But I'll stop there, because I don't know the actual details of their system, and I'm not sure if they're still around, but that's the only one that I remember offhand that was around years ago. [00:28:32] Yeah, it's mostly, you know, low-level open source infrastructure. And so anytime you have funded startups, they're generally using that open source infrastructure to build their own SaaS. And SaaSes are the vast majority of where you see commercial software.
[00:28:51] Jeremy: So I guess in that way it gave you this window, this area where you could come in, and other than that Iron product, there wasn't this big money that you were fighting against. It was you telling people openly, I'm working on this thing. [00:29:11] I need to make money so that I can sustain it, and if you like the work I do, then basically support me, right? And so I'm wondering how we can reproduce that more often, because when you see new products, a lot of times it is VC backed, right? [00:29:35] Because people say, I need to work on this, I need to be paid, and I can't ask a team to do this for nothing. So... [00:29:44] Mike: Yeah. It's a wicked problem. It's a really, really hard problem to solve. If you take VC money, that really kind of means you need to be making tens if not hundreds of millions of dollars in sales. And I put "small" in quotes here because I don't really know what that means, but if you have a small open source project, you can't charge huge amounts for it, right? [00:30:18] I mean, Sidekiq is what I would call a medium-sized open source project, and I'm charging a thousand bucks for it. So if you're building, and I don't even necessarily want to give an example, but if you're building some open source project, and it's one of 300 libraries that people's applications will depend on, [00:30:40] you can't necessarily charge a thousand dollars for that library. Depending on the size and the capabilities, maybe you can, maybe you can't. But there's going to be a long tail of open source projects that just can't charge much, if anything. So, unfortunately, you kind of have two pathways: [00:31:07] venture capital, where you've got to sell a ton, or free.
And I've kind of walked that fine line where I'm a small business; I can charge a small amount because I'm bootstrapped. I don't need huge amounts of money, and I have a project that is of the right size where I can charge a decent amount of money. [00:31:32] That means I can survive with 500 or a thousand customers. I don't need to have a hundred million dollars' worth of customers. Because when I started the business, one of the constraints I set was: I don't want to hire anybody. I'm just going to be solo. And part of my ability to keep a low price and keep running sustainably, even with only a few hundred customers, is because I'm solo. [00:32:03] I don't have the overhead of investors. I don't have the overhead of other employees. I don't have an office space. My overhead is very small. So I just kind of have a unique business in that way, I guess you might say. Keeping the business solo [00:32:21] Jeremy: I think that's interesting about your business as well, the fact that you've kept it solo, which I would imagine most businesses can't do. They need support people, they need developers beyond maybe just one. There's all sorts of other, I don't think overhead is the right word, but you just need more people, right? [00:32:45] And what do you think it is about Sidekiq that's made it possible for it to be just a one-person operation? [00:32:52] Mike: There's so much administrative overhead in a business. I explicitly create business policies so that I can run solo. You know, my support policy is officially you get one email ticket or issue per quarter. Anything more than that, I can bounce back and say, well, you're requiring too much support. [00:33:23] In reality, I don't enforce that at all, and people email me all the time. But things like...
Things like dealing with accounting and bookkeeping and taxes and legal stuff, licensing: all that is a little bit of overhead, but I've kept it as minimal as I can. And part of that is that I don't want to hire another employee, because then that increases the administrative overhead that I have. [00:33:53] And Sidekiq is so tied to me and my knowledge that if I hire somebody, they're probably not going to know Ruby and threading and all the intricate technical detail necessary to build and maintain and support the system. And so really you'd kind of regress a little bit. We wouldn't be able to give as good support, because I'd be busy helping that other employee. Being selective about customers [00:34:23] Mike: So yeah, it's a tightrope act where you've got to really figure out: how can I scale myself as far as possible without overwhelming myself? The overwhelming thing that I've never been able to solve is just dealing with billing inquiries, customers, companies emailing me saying, how do we buy this thing? [00:34:46] Can I get an invoice? Every company out there, it seems, wants an invoice. And the problem with invoicing is it takes a lot more manual labor and administrative overhead to issue that invoice and to collect payment on it. So that's one of the reasons I have a very strict credit-card-only policy for the vast majority of my customers. [00:35:11] And I demand that companies pay a lot more: you have to have a pretty big enterprise license if you want an invoice. And if the company comes back and complains and says, well, that's ridiculous, we don't want to pay that much, we don't need it that much, I say, okay, well then you have two choices: [00:35:36] you can either pay with a credit card, or you can not use Sidekiq. That's it. I don't need your money.
I don't want the administrative overhead of dealing with your accounting department. I just want to support my customers and build my software. And so yeah, I don't want to turn into a billing clerk. [00:35:55] So sometimes the best thing in business that you can do is just say no. [00:36:01] Jeremy: That's very interesting, because I think being a solo person is what probably makes that possible, right? Because if you had the additional staff, then you might say, well, I need to pay my staff, so we should be getting as much business as... [00:36:19] Mike: Yeah. Chasing every customer you can, right. But yeah. [00:36:22] Every customer is different. I mean, I have some customers that just never contact me. They pay their bill really fast or right on time, and they're paying me, you know, five figures, 20K a year. And God bless them, because those are the [00:36:40] best customers to have. And the worst customers are the ones who are paying 99 bucks a month, and everything that they don't understand or whatever is a complaint. So sometimes you want to vet your customers from that perspective and say, which of these customers are going to be good? [00:36:58] Which ones are going to be problematic? [00:37:01] Jeremy: And you're only one person... I'm not sure how many customers you have, but... [00:37:08] Mike: I have 2,000. [00:37:09] Jeremy: 2,000 customers. [00:37:10] Okay. [00:37:11] Mike: Yeah. [00:37:11] Jeremy: And has that been relatively stable, or has there been growth? [00:37:16] Mike: It's been relatively stable the last couple of years. Ruby has sort of plateaued; you don't see a lot of growth. I'm getting probably 15 to 20 percent growth, maybe. So I'm not growing like a weed, like, you know, venture capital would want to see, but steady incremental growth is wonderful, especially since I do very little
[00:37:42] sales and marketing. You know, I come to RubyConf, I tweet, or I toot out funny Mastodon posts occasionally, and I put out new releases of the software. And that's essentially my marketing: staying in front of developers and being a presence in the Ruby community. [00:38:06] But yeah, I don't see a huge amount of churn, and I see enough sales to keep my head above water and to keep growing, slowly but surely. Support needs haven't grown [00:38:20] Jeremy: And as you've had that steady growth, has the support burden not grown with it? [00:38:27] Mike: Not as much, because once customers are on Sidekiq and they've got it working, then by and large you don't hear from them all that much. There are always GitHub issues, you know; customers open GitHub issues, and I love that. But by and large, the community finds bugs and opens up issues, and so things remain relatively stable. [00:38:51] I don't get a lot of the complete newbie who has no idea what they're doing and wants me to tell them how to use Sidekiq. I just don't see much of that at all. I have seen it before, but in that case, generally, I politely tell that person: listen, I'm not here to educate you on the product. [00:39:14] There's documentation in the wiki, and there's tons of generic Ruby educational material out there. That's just not what I do. So yeah, by and large, the support burden is not too bad, because once people are up and running, it's stable and they don't need to contact me. [00:39:36] Jeremy: I wonder, too, if that's perhaps a function of the price. Because if you're a new developer, or someone who's not too familiar with how to do job processing or what they want to do, there is the open source product, of course,
but then the next step up, I believe, is about a hundred dollars a month. [00:39:58] And if you're somebody who is kind of just getting started and learning how things work, you're probably not going to pay that, is my guess. And so you'll never hear from them. [00:40:11] Mike: Right, yeah, that's a good point, too. The open source version is what people inevitably are going to use and integrate into their app at first. Because it's open source, you're not going to email me directly. And when people do email me Sidekiq support questions directly, I reply, literally: I'm sorry, I don't respond to private email unless you're a customer. [00:40:35] Please open a GitHub issue. I try to educate both my open source users and my commercial customers to stay in GitHub issues, because private email is a silo, right? Private email doesn't help anybody else but them. If I can get people to go into GitHub issues, then that's a public record [00:40:58] that people can search. Because if one person has that problem, there are probably a dozen other people that have that same problem, and then those other 11 people can search and find the solution to their problem at four in the morning when I'm asleep, right? So that's what I'm trying to do: keep everything out in the open so that people can self-serve as much as possible. Sidekiq open source [00:41:24] Jeremy: And on the open source side, are you still primarily the main contributor? Or do you have other people that are... [00:41:35] Mike: I mean, I'd say I do 90 percent of the work, which is why I don't feel guilty about keeping 100 percent of the money. A lot of open source projects, when they look for financial sustainability, also look at how to split that money amongst the team. And that's a completely different topic.
[00:41:55] That's another reason why I've stayed solo: if I hire an employee and I pay them $200,000 a year as a developer, I'm meanwhile keeping all the rest of the profits of the company. And that almost seems a little bit unfair, because we're both still working 40 hours a week, right? Why am I the one making the vast majority of the profit and the money? [00:42:19] So that's another reason I've stayed solo. But as for having a team of people working on something, I do get regular commits, regular pull requests from people fixing a bug they found, or just making a tweak that they saw, that they thought they could improve. [00:42:42] A little more rarely, I get a significant improvement or feature as a pull request. But Sidekiq is so stable these days that it really doesn't need a team of people maintaining it. The volume of changes necessary, I can easily keep up with. So I'm still doing 90 to 95 percent of the work. Are there other Sidekiq-like opportunities out there? [00:43:07] Jeremy: Yeah, so I think Sidekiq has sort of a unique positioning, where the code base itself is small enough that you can maintain it yourself, and you have some help, but primarily you're the main maintainer. And then you have enough customers who are willing to pay for the benefit it gives them on top of what the open source product provides. [00:43:36] Because, as you were talking about, every project people work on could have hundreds of dependencies, right? And to ask somebody to pay for each of them is probably not ever going to happen. And so it's interesting to think about how you have things like, say, OpenSSL: it's a library that a whole bunch of people rely on, but nobody is going to pay a monthly fee to use it. [00:44:06] You have things like, recently, HashiCorp with Terraform, right?
They decided to change their license because they wanted to get some of that value back, some of the money back, and the community basically revolted, right? And did a fork. And so I'm kind of curious where people can find these sweet spots like Sidekiq, this space where a project is just small enough that you can work on it on your own and still get people to pay for it. [00:44:43] I'm trying to picture, like, where are the spaces? Open source as a public utility [00:44:48] Mike: We need to look at other forms of financing beyond pure capitalism. If this is truly public infrastructure that needs to be maintained for the long term, then why is it that we depend on capitalism to do that? Our roads, our water, our sewer: those are not capitalist, right? Those are utilities, public infrastructure that we maintain, that the government helps us maintain. [00:45:27] And in a sense, tech infrastructure is similar, or could be thought of in a similar fashion. So things like Open Collective, or, there's an organization in Europe called NLnet, I think, out of the Netherlands. And they do a lot of grants to various open source projects to help them improve the state of digital infrastructure. [00:45:57] They support, for instance, Mastodon, an open source project that doesn't have any sort of corporate backing. They see that as necessary social media infrastructure for the long term. And I think that's wonderful. I like to see those new directions being explored, where you don't have to turn everything into a product, right? [00:46:27] And try to market and sell, and run ads, and do all this stuff.
If you can just make the case that, hey, this is useful public infrastructure that so many different technical applications and businesses could rely on, much like FedEx and DHL use our roads to the benefit of their own corporate profits. [00:46:53] Why shouldn't we think of tech infrastructure in a similar way? So yeah, I would like to see us explore more in that direction. I understand that in America that may not happen for quite a while, because we are very capitalist focused, but it's encouraging to see places like Europe a little more open to trialing things like cooperatives and grants, large long-term grants to projects, to see if they can provide sustainability in a new way. [00:47:29] Jeremy: Yeah, that's a good point, because I think right now a lot of the open source infrastructure we all rely on is either being paid for by large companies, and at the whim of those large companies: if Google decides, we don't want to pay for you to work on this project anymore, where does the money come from, right? [00:47:53] And on the other hand, there are the thousands, tens of thousands of people who are doing it just for free, out of the goodness of their hearts. And that's where a lot of the burnout comes from, right? So I think what you're saying is that perhaps a lot of these pieces we all rely on, our governments, here in the United States but also around the world, should recognize that, like you said, this is infrastructure, and we should be [00:48:29] paying these people to keep the equivalent of the roads working. [00:48:37] Mike: Yeah, I mean, I'm not claiming that it's a perfect analogy. There are lots of questions that are unanswered in that, right? How do you ensure that a project is well maintained?
What does that even look like? What does that mean? You know, you can look at a road and say, is it full of potholes or is it smooth as glass, right? [00:48:59] It's perfectly obvious. But with a digital project, it's not as clear. So yeah, we should explore those new ways, because turning everybody into a businessman so that they can keep their project going is itself not sustainable, right? And that's why everything turns into a SaaS, because a SaaS is easy to control. [00:49:24] It's easy to gatekeep behind a paywall and it's easy to charge for, whereas with a library on GitHub, what do you do there? Obviously GitHub has the Sponsors feature. You've got Patreon, you've got Open Collective, you've got Tidelift. There are other experiments that have been run, but nothing has risen to the top yet, [00:49:47] and it's still a bit of a grind. But we'll see what happens. Hopefully people will keep experimenting, and maybe governments will start thinking in the direction of: what does it mean to have a budget for digital infrastructure maintenance? [00:50:04] Jeremy: Yeah, it's interesting, because we started thinking about, okay, where can we find spaces for other Sidekiqs? But it sounds like maybe that's just not realistic, right? Like maybe we need more of a rethinking of, I guess, the structure of how people get funded. Yeah. [00:50:23] Mike: Yeah, sometimes the best way to solve a problem is to think at a higher level. You know, the sustainability problem among American, Silicon Valley based open source developers is naturally going to tend toward venture capital and capitalism. And I think that's extremely problematic in a lot of different ways.
[00:50:47] And so sometimes you need to step back and say, well, maybe we just don't have the right tool set to solve this problem. But more than that, I'm not going to speculate, because it is a wicked problem to solve. [00:51:04] Jeremy: Is there anything else you wanted to mention, or thought we should have talked about? [00:51:08] Mike: No, I loved the talk of sustainability and open source. It's a topic really dear to my heart, obviously. So I am happy to talk about it at length with anybody, anytime. Thank you for having me. [00:51:25] Jeremy: All right. Thank you very much, Mike.
Doc Searls and Simon Phipps talk with Luis Villa of Tidelift about how it helps code maintainers get paid, plus what's happening in AI, ML, regulation and more. Hosts: Doc Searls and Simon Phipps Guest: Luis Villa Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsor: kolide.com/floss
In this episode of the podcast, Grizz sits down with Cortney Stauffer (Head of UX Practice) & Chuck Danielsson (Head of Practice, Web/UI), both from Adaptive. They talk about UX, UI, FDC3, and why things should just work. Cortney Stauffer: https://www.linkedin.com/in/cortstauffer/ Chuck Danielsson: https://www.linkedin.com/in/chuck-danielsson-2141b058/ NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come A huge thank you to all our sponsors for Open Source in Finance Forum New York ( https://events.linuxfoundation.org/open-source-finance-forum-new-york/ ) that will take place this November 1st at the New York Marriott Marquis. This event wouldn't be possible without our sponsors. A special thank you to our Leader sponsors: Databricks, where you can unify all your data, analytics, and AI on one platform. And Red Hat - Open to change—yesterday, today, and tomorrow. And our Contributor and Community sponsors: Adaptive/Aeron, Connectifi, Discover, Enterprise DB, FinOps Foundation, Fujitsu, instaclustr, Major League Hacking, mend.io, Open Mainframe Project, OpenJS Foundation, OpenLogic by Perforce, Orkes, Percona, Sonatype, StormForge, and Tidelift. If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
In this episode of the podcast, Grizz sits down with Jon Gottfried, Co-Founder of Major League Hacking. They talk about hackathons in finance, and developer/engineering talent, from both the individual and hiring manager perspectives. Jon Gottfried: https://www.linkedin.com/in/jonmarkgo/ MajorLeagueHacking: https://sponsor.mlh.io/ NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come
In this episode of the podcast, our FINOS COO, Jane Gavronsky, sits down with Adrian Dale of ISLA and David Shone of ISDA to discuss the associations' contribution and backing of the FINOS CDM (Common Domain Model) to the FINOS open source community. CDM: https://cdm.finos.org/ On GitHub: https://github.com/finos/common-domain-model Adrian Dale, Head of Regulation & Markets, ISLA - https://www.linkedin.com/in/adrian-dale-27942314/ David Shone, Director of Product - Data & Digital, ISDA - https://www.linkedin.com/in/david-shone/ Jane Gavronsky, COO, FINOS - https://www.linkedin.com/in/janegavronsky/ NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come A huge thank you to all our sponsors for Open Source in Finance Forum New York (https://events.linuxfoundation.org/open-source-finance-forum-new-york/) that will take place this November 1st at the New York Marriott Marquis. This event wouldn't be possible without our sponsors. A special thank you to our Leader sponsors: Databricks, where you can unify all your data, analytics, and AI on one platform. And Red Hat - Open to change—yesterday, today, and tomorrow. And our Contributor and Community sponsors: Adaptive/Aeron, Connectifi, Discover, Enterprise DB, FinOps Foundation, Fujitsu, instaclustr, Major League Hacking, mend.io, Open Mainframe Project, OpenJS Foundation, OpenLogic by Perforce, Orkes, Percona, Sonatype, StormForge, and Tidelift. If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
In this episode of the podcast, Grizz sits down with Peter Smulovics, Executive Director at Morgan Stanley, about... well, just about everything. We hit his developer journey, metaverse, XR, spatial computing, Big Boost Mondays, autism hackathons, and painting fences. He is currently Executive Director for Windows and .NET development practices and spatial computing and metaverse development practices at Morgan Stanley, and co-chair for Open Source Readiness ( https://osr.finos.org ) and Emerging Technologies ( https://zenith.finos.org ) at The Linux Foundation / FINOS. He will be speaking at the Open Source in Finance Forum on November 1st in New York: https://sched.co/1PzH7 Peter Smulovics LinkedIn: https://www.linkedin.com/in/smulovicspeter/ FSI Hack for Autism - 2023: https://fsi-hack4autism.github.io/ Zenith Emerging Technologies: https://zenith.finos.org/ Open Source Readiness: https://osr.finos.org/ NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come A huge thank you to all our sponsors for Open Source in Finance Forum New York (https://events.linuxfoundation.org/open-source-finance-forum-new-york/) that will take place this November 1st at the New York Marriott Marquis. This event wouldn't be possible without our sponsors. A special thank you to our Leader sponsors: Databricks, where you can unify all your data, analytics, and AI on one platform. And Red Hat - Open to change—yesterday, today, and tomorrow. And our Contributor and Community sponsors: Adaptive/Aeron, Discover, FinOps Foundation, instaclustr, mend.io, Open Mainframe Project, OpenJS Foundation, OpenLogic by Perforce, Orkes, Red Hat, Sonatype, and Tidelift.
If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
In this episode of the podcast, Grizz sits down with Anna McDonald, Technical Voice of the Customer at Confluent, to talk about her OSFF talk: "Enabling Real Time Regulatory Compliance with Kafka Streams and Morphir". We talk about Kafka Streams, Morphir, Open Regulation, and what it's like to figure out your passion for coding at 5 years old. She will be speaking at the Open Source in Finance Forum on November 1st in New York: https://sched.co/1PzH7 Anna McDonald LinkedIn: https://www.linkedin.com/in/jbfletch/ NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come A huge thank you to all our sponsors for Open Source in Finance Forum New York (https://events.linuxfoundation.org/open-source-finance-forum-new-york/) that will take place this November 1st at the New York Marriott Marquis. This event wouldn't be possible without our sponsors. A special thank you to our Leader sponsors: Databricks, where you can unify all your data, analytics, and AI on one platform. And Red Hat - Open to change—yesterday, today, and tomorrow. And our Contributor and Community sponsors: Adaptive/Aeron, Discover, FinOps Foundation, instaclustr, mend.io, Open Mainframe Project, OpenJS Foundation, OpenLogic by Perforce, Orkes, Red Hat, Sonatype, and Tidelift. If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
In this episode of the podcast, Grizz sits down with Brian Douglas, CEO of OpenSauced, to talk about his OSFF talk: "Data-Driven Decisions: Uncovering the Key Metrics Shaping Success in OSS". We talk about his developer evangelist journey, open source project analytics, accessing talent, and a little Steph Curry. He will be speaking at the Open Source in Finance Forum on November 1st in New York: https://sched.co/1PzGI LinkedIn: https://www.linkedin.com/in/brianldouglas/ OpenSauced: https://opensauced.pizza/ Podcast & Videos: https://www.youtube.com/@OpenSauced/videos NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come A huge thank you to all our sponsors for Open Source in Finance Forum New York (https://events.linuxfoundation.org/open-source-finance-forum-new-york/) that will take place this November 1st at the New York Marriott Marquis. This event wouldn't be possible without our sponsors. A special thank you to our Leader sponsors: Databricks, where you can unify all your data, analytics, and AI on one platform. And Red Hat - Open to change—yesterday, today, and tomorrow. And our Contributor and Community sponsors: Adaptive/Aeron, Discover, FinOps Foundation, instaclustr, mend.io, Open Mainframe Project, OpenJS Foundation, OpenLogic by Perforce, Orkes, Red Hat, Sonatype, and Tidelift. If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
Doc Searls and Jonathan Bennett talk with Claude Warren, Jr. about open source culture going back to coffee shops in the 1600s, how open source manners matter, and much more on this episode of FLOSS Weekly. The concept of open source projects as "insurance" against risk and companies that fund them for risk reduction. InnerSource as an open source practice to develop and establish an open source-like culture within organizations. Business source licenses changing mid-project and the fallout following such a change. Alternative models like Tidelift for funding open source. The challenges of determining a single best model vs. many potential solutions. HashiCorp's shift to a business source license and forking. The impact of cultural differences on software teams and misunderstandings that can follow. Setting expectations for asking "improper" questions to learn. Social media outrage culture vs. traditional "voting with your feet." How to sustain projects as they evolve from early-stage projects. Why succession planning is needed to continue the progress when project leaders leave. The ethics of Protestware and embedding political messages. Drawing lines around appropriate levels of protest or advocacy in code. Hosts: Doc Searls and Jonathan Bennett Guest: Claude Warren, Jr. Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: bitwarden.com/twit fastmail.com/twit
In this episode of the podcast, Grizz sits down with Varsha Sundar, VP of Cloud FinOps at Chubb Insurance, to talk about her OSFF talk: "Cloud Financial Management Strategy". We talk about her journey, what FinOps is, and why it's important. She will be speaking at the Open Source in Finance Forum on November 1st in New York: https://sched.co/1Q2n3 LinkedIn: https://www.linkedin.com/in/varsha-sundar-b751b326/ FinOps Foundation: https://www.finops.org/ All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 A huge thank you to all our sponsors for Open Source in Finance Forum New York (https://events.linuxfoundation.org/open-source-finance-forum-new-york/) that will take place this November 1st at the New York Marriott Marquis. This event wouldn't be possible without our sponsors. A special thank you to our Leader sponsor: Databricks, where you can unify all your data, analytics, and AI on one platform. And our Contributor and Community sponsors: Adaptive/Aeron, Discover, FinOps Foundation, instaclustr, mend.io, Open Mainframe Project, OpenJS Foundation, OpenLogic by Perforce, Orkes, Red Hat, Sonatype, and Tidelift. If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
In this episode of the podcast, we break down the newly released schedule from the Open Source in Finance Forum (OSFF). Plus - we return to our FINOS Debrief episodes that wrap up the past month in the FINOS Ecosystem - and look forward to the next month and beyond. All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2023 State of Open Source in Financial Services Survey: https://www.research.net/r/NX3VVXM 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 A huge thank you to all our sponsors for Open Source in Finance Forum New York (https://events.linuxfoundation.org/open-source-finance-forum-new-york/) that will take place this November 1st at the New York Marriott Marquis. This event wouldn't be possible without our sponsors. A special thank you to our Leader sponsor: Databricks, where you can unify all your data, analytics, and AI on one platform. And our Contributor and Community sponsors: Adaptive/Aeron, Discover, FinOps Foundation, instaclustr, mend.io, Open Mainframe Project, OpenJS Foundation, OpenLogic by Perforce, Orkes, Red Hat, Sonatype, and Tidelift. If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
In this episode of the podcast, we discuss the formation of a new major project in FINOS around common cloud controls for financial services. Get involved now here: https://www.finos.org/common-cloud-controls-project Read the Press Release here: https://www.finos.org/press/finos-announces-formation-of-common-cloud-controls US Dept of Treasury Cloud Report: https://home.treasury.gov/system/files/136/Treasury-Cloud-Report.pdf UK HMT Critical 3rd Party Finance Sector Policy Statement: https://www.gov.uk/government/publications/critical-third-parties-to-the-finance-sector-policy-statement European Council DORA: https://www.consilium.europa.eu/en/press/press-releases/2022/11/28/digital-finance-council-adopts-digital-operational-resilience-act/ Monetary Authority of Singapore Cloud Advisory: https://www.mas.gov.sg/-/media/MAS/Regulations-and-Financial-Stability/Regulatory-and-Supervisory-Framework/Risk-Management/Cloud-Advisory.pdf All Links on Current Newsletter Here: https://www.finos.org/newsletter - more show notes to come NYC November 1 - Open Source in Finance Forum: https://events.linuxfoundation.org/open-source-finance-forum-new-york/ 2023 State of Open Source in Financial Services Survey: https://www.research.net/r/NX3VVXM 2022 State of Open Source in Financial Services Download: https://www.finos.org/state-of-open-source-in-financial-services-2022 A huge thank you to all our sponsors for Open Source in Finance Forum New York (https://events.linuxfoundation.org/open-source-finance-forum-new-york/) that will take place this November 1st at the New York Marriott Marquis, especially to our Leader sponsor: Databricks. And our Contributor and Community sponsors: Adaptive/Aeron, Discover, FinOps Foundation, instaclustr, mend.io, OpenJS, Open Mainframe Project, Perforce, Red Hat, Sonatype, and Tidelift. Registration is now open and early bird pricing is available till August 18th. Join us in NYC!
If you would like to sponsor or learn more about this event, please send an email to sponsorships@linuxfoundation.org. Grizz's Info | https://www.linkedin.com/in/aarongriswold/ | grizz@finos.org ►► Visit FINOS www.finos.org ►► Get In Touch: info@finos.org
Subscribe to Changelog++: https://changelog.com/podcast/519/discuss

Featuring: Shawn Wang – Twitter, GitHub, Website Adam Stacoviak – Mastodon, Twitter, GitHub, LinkedIn, Website Jerod Santo – Mastodon, Twitter, GitHub, LinkedIn

Notes and Links: AI Notes Why "Prompt Engineering" and "Generative AI" are overhyped Multiverse, not Metaverse The Particle/Wave Duality Theory of Knowledge OpenRAIL: Towards open and responsible AI licensing frameworks Open-ish from Luis Villa ChatGPT for Google The Myth of The Infrastructure Phase ChatGPT examples in the wild Debugging code TypeScript answer is wrong Fix code and explain fix dynamic programming Translating/refactoring Wasplang DSL AWS IAM policies Code that combines multiple cloud services Solving a code problem Explain computer networks homework Rewriting code from elixir to PHP Turning ChatGPT into an interpreter for a custom language, and then generating code and executing it, and solving Advent of Code correctly Including getting #1 place "I haven't done a single google search or consulted any external documentation to do it and I was able to progress faster than I have ever did before when learning a new thing." Build holy grail website and followup with framework, copy, responsiveness For ++ subscribers Getting Senpai To Notice You Moving to Obsidian as a Public Second Brain

Transcript

**Jerod Santo:** Alright, well we have Shawn Wang here again. Swyx, welcome back to the show.

**Shawn Wang:** Thanks for having me back on. I have lost count of how many times, but I need to track my annual appearance on the Changelog.

**Adam Stacoviak:** Is that twice this year on this show, and then once on JS Party at least, right?

**Shawn Wang:** Something like that, yeah. I don't know, it's a dream come true, because I changed careers into tech listening to the Changelog, so every time I'm asked on, I'm always super-grateful.
So yeah, here to chat about all the hottest, latest things, right?

**Adam Stacoviak:** Yeah.

**Jerod Santo:** That's right, there's so much going on right now. It seems like things just exploded this fall. So we had Stable Diffusion back in late August; it really blew up at the end of August. And then in September is when we had Simon Willison on the show to talk about Stable Diffusion breaking the internet. You've been tracking this stuff really closely. You even have a Substack, and you've got Obsidian notes out there in the wild, and then of course, you're learning in public, so whenever Swyx is learning something, we're all kind of learning along with you... Which is why we brought you back on. I actually included your Stable Diffusion 2.0 summary stuff in our Changelog News episode a couple of weeks back, and a really interesting part of that post that you have, that I didn't talk about much, but I touched on and I want you to expand upon here, is this idea of prompt engineering, not as a cool thing, but really as a product smell. And when I first saw it, I was like, "No, man, it's cool." And then I read your explainer and I'm like, "No, he's right. This is kind of a smell."

**Adam Stacoviak:** "Dang it, he's right again."

**Jerod Santo:** Yeah. We just learned about prompt engineering back in September, with Simon, and talking about casting spells and all this, and now it's like, well, you think it's overhyped. I'll stop prompting you, and I'll just let you engineer an answer.

**Shawn Wang:** Well, so I don't know if you know, but the Substack itself got its start because I listened to the Simon episode, and I was like, "No, no, no. Spellcasting is not the way to view this thing. It's not something we glorify." And that's why I wrote "Multiverse, not Metaverse", because the argument was that prompting is -- you can view prompting as a window into a different universe, with a different seed, and every seed is a different universe.
And funny enough, there's a finite number of seeds, because basically, Stable Diffusion has a 512x512 space that determines the total number of seeds. So yeah, prompt engineering [unintelligible 00:04:23.23] is not my opinion. I'm just reporting on what the AI thought leaders are already saying, and I just happen to agree with it, which is that it's very, very brittle. The most interesting finding in the academic arena about prompt engineering is that default GPT-3, they ran it against some benchmarks and it came up with like a score of 17 out of 100. So that's a pretty low benchmark of like just some logical, deductive reasoning type intelligence tests. But then you add the prompt "Let's think step by step" to it, and that increases the score from 17 to 83... Which is extremely -- like, that sounds great. Like I said, it's a magic spell that I can just kind of throw onto any problems and make it think better... But if you think about it a little bit more, like, would you actually use this in a real work environment, if you said the wrong thing and it suddenly deteriorates in quality - that's not good, and that's not something that you want to have in any stable, robust product; you want robustness, you want natural language understanding, to understand what you want, not to react to random artifacts and keywords that you give.

Since then, we actually now know why "Let's think step by step" is a magic keyword, by the way, because -- and this is part of transformer architecture, which is that the neural network has a very limited working memory, and if you ask a question that requires too many steps to calculate the end result, it doesn't have the working memory to store the result, therefore it makes one up.
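The "magic suffix" trick described here amounts to nothing more than appending a fixed string to the prompt. A minimal sketch, with an illustrative function name and question; no real model API is called, and the 17-to-83 figure is the benchmark result mentioned above, not something this code measures:

```python
# Sketch of zero-shot chain-of-thought prompting: the only "engineering"
# is appending a fixed suffix before sending the prompt to a model.

def build_prompt(question: str, chain_of_thought: bool = True) -> str:
    """Build a completion-style prompt, optionally with the CoT suffix."""
    prompt = f"Q: {question}\nA:"
    if chain_of_thought:
        # The suffix discussed above, which gives the model room to emit
        # intermediate reasoning steps instead of guessing the answer.
        prompt += " Let's think step by step."
    return prompt

print(build_prompt("If I have 3 apples and buy 2 more, how many do I have?"))
```

The brittleness complaint follows directly from this: the entire quality difference hinges on a literal string, which a model upgrade can silently invalidate.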
But if you give it the working memory, which is to ask for a longer answer, the longer answer stores the intermediate steps, therefore giving you the correct result.

**Jerod Santo:** [06:00] Talk about implementation detail, right?

**Shawn Wang:** It's -- yeah, it's leaking implementation detail, it's not great, and that's why a lot of the thought leaders - I think I quoted Andrej Karpathy, who was head of AI at Tesla, and now he's a YouTuber... [laughter] And Sam Altman, who is the CEO of -- yeah, he quit Tesla to essentially pursue an independent creator lifestyle, and now he's a YouTuber.

**Jerod Santo:** I did not know that.

**Adam Stacoviak:** All roads lead to creator land, you know what I'm saying? You'll be an expert in something for a while, and eventually you'll just eject and be like "I want to own my own thing, and create content, and educate people around X."

**Shawn Wang:** So at my day job I'm a head of department now, and I work with creators, and some of them have very valuable side hustles... And I just had this discussion yesterday, of like "Why do you still have a job if you're an independent creator? Like, isn't total independence great." And I had to remind them, "No. Like, career progression is good. You're exposed to new things etc." but that's just me trying to talk him out of quitting. [laughter] No, I have a serious answer, but we're not here to talk about that.

**Jerod Santo:** Right.

**Shawn Wang:** So I'll read out this quote... So Sam Altman, CEO of OpenAI, says "I don't think we'll still be doing prompt engineering in five years. It's not about figuring out how to hack the prompt by adding one magic word to the end that changes everything else. What will matter is the quality of ideas and the understanding that you want." I think that is the prevailing view, and I think as people change models, they are understanding the importance of this.

So when Stable Diffusion 1 came out, everyone was like, "Alright, we know how to do this.
I'm going to build an entire business on this" etc. And then Stable Diffusion 2 came out and everything broke. All the [unintelligible 00:07:40.21] stopped working, because they just expected a different model, and you have to increase your negative prompting, and people are like "What is negative prompting?" etc. These are all new techniques that arise out of the model, and this is going to happen again and again and again, because you're relying on a very, very brittle foundation.

Ultimately, what we want to get people to is computers should understand what we want. And if we haven't specified it well enough, they should be able to ask us what we want, and we should be able to tell them in some capacity, and eventually, they should produce something that we like. That is the ultimate alignment problem.

We talk about AI a lot, and you hear about this alignment problem, which is basically some amount of getting it to do what we want it to do, which is a harder problem than it sounds until you work with a programmer, and try to give them product specs and see how many different ways they can get it wrong. But yeah, this is an interesting form of the alignment problem, and it interestingly has a very strong tie with Neuralink as well, because the problem, ultimately, is the amount of bandwidth that we can transfer from our brain to an artificial brain. And right now it's prompts. But why does it have to be prompts? It could be images. That's why you have image-to-image in Stable Diffusion. And it could also be brain neural connections. So there's a lot in there; I'll give you time to pick on whatever you respond to...

**Jerod Santo:** Well, I went from -- so I was super-excited about prompting after talking with Simon a few months back, and I was super-excited about Stable Diffusion.
And I went from like giddy schoolboy who's just like "Gonna learn all the spells" very quickly to like aggravated end user who's like "Nah, I don't want to go to this other website and copy and paste this paragraph of esoterica in order to get a result that I like." And so I wonder what's so exciting about the whole prompt engineering thing to us nerds, and I think maybe there's like a remnant of "Well, I still get to have esoteric knowledge" or "I still get to be special somehow if I can learn this skill..."

[09:46] But in reality, what we're learning, I think, by all the people using ChatGPT - the ease of use of it, as opposed to the difficulty of getting an image out of Stable Diffusion 1.0 at least, is quite a bit different. And it goes from aggravating and insider baseball kind of terms, keywords, spells, to plain English, explain what you want, and maybe modify that with a follow-up, which we'll get into ChatGPT, but we don't necessarily have to go into the depths of that right now... But I changed very quickly, even though I still thought prompt engineering was pretty rad... And then when you explained to me how Stable Diffusion 2 completely broke all the prompts, I'm like, "Oh yeah, this is a smell. This doesn't work. You can't just completely change the way it works on people..." That doesn't scale.

**Shawn Wang:** Yeah. And then think about all the businesses that have been built already. There haven't been any huge businesses built on Stable Diffusion, but GPT-3 has internal models as well. So Jasper recently raised like a 1.5 billion valuation, and then ChatGPT came out, basically validating Jasper... So all the people who bought stock are probably not feeling so great right now. [laughs]

That's it. So I don't want to overstate my position. There are real moats to be built around AI, and I think that the best entrepreneurs are finding that regardless of all these flaws.
The fact that there are flaws right now is the opportunity, because so many people are scared off by it. They're like, "AI has no moats. You're just a thin wrapper around OpenAI." But the people who are real entrepreneurs figure it out. So I think it's just a really fascinating case study in technology and entrepreneurship, because here's a new piece of technology nobody knows how to use and productize, and the people who figure out the playbook are the ones who win.

**Adam Stacoviak:** Yeah. Are we back to this -- I mean, it was like this years ago, when big data became a thing... But are we back to this whole world where -- or maybe we never left, where "Data is the new oil" is the quote... Because to train these models, you have to have data. So you could be an entrepreneur, you could be a technologist, you could be a developer, you could be in ML, you could be whatever it might take to build these things, but at some point you have to have a dataset, right? Like, how do you get access to these datasets? It's the oil; you've got to have money to get these things, you've got to have money to run the hardware to enable... Jerod, you were saying before the call, there was speculation of how much it costs to run ChatGPT daily, and it's just expensive. But the data is the new oil thing - how does that play into training these models and being able to build the moat?

**Shawn Wang:** Yeah. So one distinction we must make there is there is a difference between running the models, which is just inferences, which is probably a few orders of magnitude cheaper than training the models, which are essentially a one-time task. Not that many people continuously train, which is nice to have, but I don't think people actually care about that in reality.

So the training of the models ranges between -- and let's just put some bounds for people. I love dropping numbers in podcasts, by the way, because it helps people contextualize.
You made an oblique reference to how much ChatGPT costs, but let's give real numbers. I think the guy who did an estimate said it was running at $3 million a month. I don't know if you heard any different, but that's...**Jerod Santo:** I heard a different estimate, that would have been more expensive, but I think yours is probably more reliable than mine... So let's just go with that.**Shawn Wang:** I went through his stuff, and I was like, "Yeah, okay, this is on the high end." I came in between like one to three as well. It's fine. And then for training the thing - so it's widely known or widely reported that Stable Diffusion cost 600k for a single run. People think the full thing, including R&D and stuff, was on the order of 10 million. And GPT-3 also costs something on the order of tens of millions. So I think that is the cost, but then also that is training; that is mostly like GPU compute. We're not talking about data collection, which is a whole other thing, right?[13:46] And I think, basically, there's a towering stack of open source contributions to this data collective pool that we have made over time. I think the official numbers are like 100,000 gigabytes of data that was trained for Stable Diffusion... And it's basically pooled from like Flickr, from Wikipedia, from like all the publicly-available commons of photos. And that is obviously extremely valuable, because -- and another result that came out recently that has revolutionized AI thinking is the concept of Chinchilla Laws. Have you guys covered that on the show, or do I need to explain that?**Adam Stacoviak:** Chinchilla Laws misses the mark for me. Please tell. I like the idea though; it sounds cool, so please...**Shawn Wang:** Yeah, they just had a bunch of models, and the one that won happened to be named Chinchilla, so they kind of went with it. It's got a cute name. 
But the main idea is that we have discovered scaling laws for machine learning, which is amazing. So in the sort of classical understanding of machine learning, you would have a point at which there's no further point to train. You're sort of optimizing for a curve, and you get sort of like diminishing returns up to a certain point, and then that's about it. You would typically conclude that you have converged on a global optimum, and you kind of just stop there. And basically, in the last 5 to 10 years, the very depressing discovery is that this is a mirage. This is not a global optimum, this is a local optimum... And this is called the Double Descent problem. If you google it, you'll find it on Wikipedia... Which is - you just throw more data at it, it levels off for a bit, and then it continues improving. And that's amazing for machine learning, because that basically precipitated the launch of all these large models. Because essentially, what it concludes is that there's essentially no limit to how good these models are, as long as you can throw enough data at it... Which means that, like you said, data is the new oil again, but not for the old reason, which is like "We're gonna analyze it." No, we're just gonna throw it into all these neural nets and let them figure it out.**Adam Stacoviak:** Yeah. Well, I think there's a competitive advantage though if you have all the data. So if you're the Facebooks, or the Googles, or X, Y, or Z... Instagram, even. Like, Instagram ads are so freakin' relevant that --**Jerod Santo:** Apple...**Adam Stacoviak:** Yeah, Apple for sure.**Jerod Santo:** TikTok...**Adam Stacoviak:** Yeah. Gosh... Yeah, TikTok. The point is, these have a competitive advantage, because they essentially have been collecting this data - maybe to analyze, potentially to advertise to us more - but what about the other ways these moats can be built?
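For listeners who want the rough shape of the result Shawn is summarizing: the Chinchilla work (Hoffmann et al., 2022) fits test loss as a function of parameter count and training-token count, approximately

```latex
% Approximate form of the Chinchilla loss fit:
%   N     = parameter count
%   D     = training tokens
%   E     = irreducible loss
%   A, B, \alpha, \beta = fitted constants
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Both fitted exponents come out around 0.3, so loss keeps falling as either parameters or data grow, and the compute-optimal recipe scales the two together (on the order of 20 training tokens per parameter) - which is exactly the "just throw more data at it" regime described above.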
I just think like, when you mentioned the entrepreneurial mind, being able to take this idea, this opportunity, this new AI landscape, and say, "Let me build a moat around this, and not just build a thin layer on top of GPT, but build my own thing altogether" - I've gotta imagine there's a data problem at some point, right? Obviously, there's a data problem at some point.**Shawn Wang:** So obviously, the big tech companies have a huge head start. But how do you get started collecting this data as a founder? I think the story of Midjourney is actually super-interesting. So between Midjourney, Stability AI and OpenAI, as of August, who do you think was making the most money? I'll give you the answer, it was Midjourney.**Jerod Santo:** Oh, I was gonna guess that. You can't just give us the answer...**Shawn Wang:** Oh... [laughs]**Jerod Santo:** I had it.**Shawn Wang:** But it's not obvious, right? Like, the closed-source one, the one that is not the big name, that doesn't have all the industry partnerships, that doesn't have the celebrity CEO - that's the one that made the most money.**Jerod Santo:** Yeah. But they launched with a business model immediately, didn't they? They had a subscription out of the box.**Shawn Wang:** Yeah, they did. But also, something that they've been doing from the get-go is that you can only access Midjourney through Discord. Why is that?**Jerod Santo:** Right. Because it's social, or... I don't know. What do you think? That's my guess, because they're right in front of everybody else.**Shawn Wang:** Data.**Adam Stacoviak:** Data.**Jerod Santo:** Oh...**Adam Stacoviak:** Please tell us more, Shawn.**Shawn Wang:** Because the way that you experience Midjourney is you put in a prompt, it gives you four images, and you pick the ones that you like for enhancing. So the process of using Midjourney generates proprietary data for Midjourney to improve Midjourney.
So from v3 to v4 of Midjourney they improved so much that they have carved out a permanent space for their kind of visual AI-driven art, which is so much better than everyone else's because they have data that no one else has.**Jerod Santo:** [17:55] That's really cool.**Adam Stacoviak:** And is that relevance, or is it like quality takes? What is the data they actually get?**Shawn Wang:** Preference, right?**Jerod Santo:** What's good.**Shawn Wang:** Yeah. Literally, you type in a prompt, and it gives you four low-res images, and you have to pick one of the four to upscale it. By picking one of the four, they now have the data that says "Okay, out of these four, here's what a human picks." And it's proprietary to them, and they paid nothing for it, because it's on Discord. It's amazing.**Jerod Santo:** That is awesome.**Shawn Wang:** They didn't build a UI, they just used Discord. I don't know if Discord knows this, or cares... But it's pretty freakin' phenomenal...**Jerod Santo:** That's pretty smart.**Shawn Wang:** ...because now they have this--**Adam Stacoviak:** It's the ultimate in scrappy, right? It's like, by any means necessary, right? You'll make a beat however you can to put up the track and become the star.**Jerod Santo:** Right.**Adam Stacoviak:** That's amazing.**Jerod Santo:** That's really cool.**Shawn Wang:** So just to close this out - the thing I was saying about Chinchilla was "More data is good; we've found the double descent problem. Now let's go get all the data that's possible." I should make a mention of the open source data attempts... So people understand the importance of data, and basically EleutherAI is kind of the only organization out there that is collecting data that anyone can use to train anything. So they have two large collections of data called The Stack and The Pile, I think is what it's called.
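The feedback loop Shawn describes - generate four candidates, record which one the user upscales - is essentially 1-of-k preference collection, the same kind of signal human-feedback training consumes. A minimal sketch in Python; all names and data here are hypothetical illustrations, not Midjourney's actual API:

```python
# Minimal sketch of 1-of-k preference collection, as in the flow
# described above: four candidate images come back for a prompt,
# and the user's pick is logged as preference data.
# Everything here is a hypothetical illustration.

preference_log = []

def record_pick(prompt, candidates, picked_index):
    """Store the user's choice as a (chosen, rejected) preference record."""
    chosen = candidates[picked_index]
    rejected = [c for i, c in enumerate(candidates) if i != picked_index]
    preference_log.append({
        "prompt": prompt,
        "chosen": chosen,
        "rejected": rejected,
    })

# Usage: the user upscales the third of four candidates.
record_pick("a castle at sunset", ["img_a", "img_b", "img_c", "img_d"], 2)
```

Each interaction yields one "human preferred this over those" example for free, which is why routing all usage through a chat interface like Discord doubles as a data pipeline.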
Basically, the largest collection of open source, permissively-licensed text for you to train whatever language models you want, and then a similar thing for code. And then they are training their open source equivalents of GPT-3 and Copilot and what have you. I think those are very, very important efforts to have. Basically, researchers have maxed out the available data, and part of why OpenAI Whisper is so important for OpenAI is that it's unlocking sources of text that are not presently available in the available training data. We've basically exhausted it; we're data-constrained in terms of our ability to improve our models. So the largest source of untranscribed speech is essentially on YouTube, and there's a prevailing theory that the primary purpose of Whisper is to transcribe all video, to get text, to train the models... [laughs] Because we are so limited on data.**Adam Stacoviak:** Yeah. We've helped them already with our podcasts. Not that it mattered, but we've been transcribing our podcasts for a while, so we just gave them a leg up.**Shawn Wang:** You did.**Adam Stacoviak:** And that's open source on GitHub, too. They probably -- I mean, ChatGPT knows about Changelog. It knows that -- Jerod, I don't know if I told you this yet, but I prompted it; I said "Complete the sentence: 'Who's the hosts of the Changelog podcast?'" "Well, that's the dynamic duo, Jerod Santo and Adam Stacoviak." It knows who we are. I mean, maybe it's our transcripts, I don't know, but it knows...**Jerod Santo:** Please tell me it called us "the dynamic duo"... [laughs]**Adam Stacoviak:** I promise you!**Jerod Santo:** It said that?**Adam Stacoviak:** I promise you it said that. "The dynamic duo..."**Jerod Santo:** Oh, [unintelligible 00:20:34.05]**Adam Stacoviak:** It actually reversed the order. It said Adam Stacoviak first and then Jerod Santo... Because usually, my name is, I guess, first - I have no clue why it's ever been that way, but...
It said "The dynamic duo, Adam Stacoviak and Jerod Santo..."**Jerod Santo:** That's hilarious.**Adam Stacoviak:** ...hosts of the Changelog Podcast.**Jerod Santo:** It already understands flattery.**Adam Stacoviak:** Yeah, it does. Well, actually, the first prompt didn't include us, and I said "Make it better, and include the hosts." And that's all I said - "Make it better and include the hosts." So in terms of re-prompting, or refining the response that you get from the prompts - that to me is like the ultimate human way to conjure the next available thing, which is "try again", or "do it better, by giving me the hosts, too". And the next one was flattery, and actually our names in the thing. So... It's just crazy. Anyways...**Shawn Wang:** Yeah, so that is the big unlock that ChatGPT enabled.**Jerod Santo:** Totally.**Shawn Wang:** Which is why usually I take a few weeks for my takes to marinate, for me to do research, and then for me to write something... But I had to write something immediately after ChatGPT, to tell people how important this thing is. It is the first real chat AI, which means that you get to give human feedback. And this theme of reinforcement learning through human feedback - the low-res version of it was Midjourney. Actually, the lowest-res version of it was TikTok, because every swipe is human feedback. And same for Google; every link click is human feedback. But the ability to incorporate that, and to improve the recommendations and the generations, is essentially your competitive advantage, and being able to build that as part of your UI...
Which is why, by the way, I have been making the case that frontend engineers should take this extremely seriously, because guess who's very good at making a UI?**Adam Stacoviak:** Yeah, for sure.**Shawn Wang:** But yeah, ChatGPT turns it from a one-off, zero-shot experience - where you prompt the thing, and then you get the result, and it's good or bad, and that's about the end of the story - into an interactive conversation between you and the bot, and you can shape it to whatever you want... Which is a whole different experience.**Break:** [22:31]**Adam Stacoviak:** "Complete the sentence" has been a hack for me to use, particularly with ChatGPT. "Complete the sentence" is a great way to easily say "Just give me something long, given these certain constraints."**Jerod Santo:** Well, that's effectively what these models are, right? They're auto-complete on steroids. Like, they are basically auto-completing with a corpus of knowledge that's massive, and guessing what words semantically should come next, kind of a thing... In layman's terms; it's more complicated than that, of course, but they are basically auto-completers.**Adam Stacoviak:** Yeah. On that note though, we have a show coming out... So we're recording this on a Friday, the same day we release the podcast, but it's the week before. So we had Christina Warren on, and I was like "You know what? I'm gonna use ChatGPT to give me a leg up. Let me make my intro maybe a little easier, and just spice it up a little bit." So I said "Complete the sentence 'This week on the Changelog we're talking to Christina Warren about...'", and then I ended the quote, and I said "and mention her time at Mashable, film and pop culture, and now being a developer advocate at GitHub." And I've gotta say, most of - 50% of the intro for the episode with Christina is thanks to ChatGPT. I don't know if I broke the terms of service by doing that or not, but like -- do I? I don't know. If I did, sue me. I'm sorry. But...
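Jerod's "auto-complete on steroids" framing can be illustrated with the crudest possible next-word predictor: a bigram count model over a toy corpus. Real LLMs do the same next-token prediction with a neural network over vastly more data; everything below is an illustrative toy, not how any production model is built:

```python
from collections import Counter, defaultdict

# Toy "auto-complete": count which word follows which in a tiny
# corpus, then predict the most frequent follower. LLMs do the
# same next-token prediction, just with a learned model instead
# of raw counts.

corpus = "the cat sat on the mat the cat ate the fish".split()

next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def autocomplete(word):
    """Return the most frequent word observed after `word`, or None."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(autocomplete("the"))  # "cat" follows "the" most often in the corpus
```

The "corpus of knowledge that's massive" part is what turns this party trick into something that can write a podcast intro.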
Don't sue me. Don't sue us. We'll take it down. We'll axe it out.**Jerod Santo:** We'll rewrite it.**Adam Stacoviak:** Yeah, we'll rewrite it. But, I mean, it's basically what I would have said. So...**Shawn Wang:** There's a nice poetry here -- there's a YouTuber who's been on this forever, Two Minute Papers, and what he often says is, "What a time to be alive." And this is very much what a time to be alive. But not just because we're seeing this evolve live - because we get to be part of the training data. There was a very interesting conversation between Lex Fridman and Andrej Karpathy, when he was inviting him onto the show... He said, "Our conversation will be immortalized in the training data. This is a form of immortality, because we get to be the first humans essentially baked in." [laughter]**Jerod Santo:** Essentially baked in... Hello, world.**Shawn Wang:** Like, 100-200 years from now, if someone asks about the Changelog podcast, they will keep having Jerod and Adam pop up, because they're in the goddamn training data. [laughs]**Jerod Santo:** They're like "Come on, these guys have been dead for a long time."**Adam Stacoviak:** [26:05] Let them go. Give them their RIP. [laughter]**Shawn Wang:** Which is poetic and nice. Yeah.**Adam Stacoviak:** Yeah, it is a good time to be alive... I think it is interesting, too... I just wonder -- I mean, this might be jumping the shark a little bit, but I often wonder, at what point does humanity stop creating? At some point, 100 years from now, or maybe more - I don't know; maybe sooner, given how fast this is advancing - we'll create only through what was already created. "At what point is the snake eating the snake?" kind of thing.
Like, is there an end to human creativity at some point, because we are just so reliant, at some point, shape, or form, on [unintelligible 00:26:45.20] because of training data, and this just kind of like morphing to something much, much bigger in the future?**Shawn Wang:** So I have an optimistic attitude to that... This question basically is asking, "Can we exhaust infinity?" And so my obvious answer is no. There is a more concrete stat I can give you, which is I think - this is floating around out there. Don't quote me on the exact number, but apparently, 10% of all Google searches every single year have never been asked before. And Google's been around for like 20 years.**Adam Stacoviak:** That's a big percentage.**Shawn Wang:** It's still true. So it's on that order; it might be like 7%, it might be 13%.**Adam Stacoviak:** Well, is it trending down though? Is it trending down? Is it 10% per year, but is it like trending down to like 8%?**Jerod Santo:** Is it because we put the year in our searches? [laughter]**Adam Stacoviak:** Yeah, it's true, Jerod. Good one.**Shawn Wang:** Yeah. But anyway, so that's what the SEO people talk about when they talk about long tail... The amount of infinity is always bigger than our capability of creating to fill it.**Jerod Santo:** I mean, I feel like if you look at us in an abstract way, humans, we are basically taking in inputs and then generating outputs. But that's creativity, right? So I think what we're just doing is adding more to the inputs. Now we have computers that also take in inputs and generate outputs, but like, everything's already a remix, isn't it? Our life experience and everything that goes into us, and then something else produces a brand new thing, which isn't really new, but it's a remix of something else that we experienced... 
So I feel like we're just going to keep doing that, and we'll have computer aid at doing that, and the computer eventually maybe will just do the actual outputting part, but we somehow instruct it. I'm with Swyx on this one; I don't think there's going to be an end to human creativity, as the AI gets more and more output... What's the word? When you're just -- not notorious. What's it called when you just can't stop outputting stuff?**Adam Stacoviak:** I don't know.**Jerod Santo:** Prolific!**Adam Stacoviak:** Prolific.**Jerod Santo:** As the AI gets more and more output-prolific, and overwhelms us with output, I think we're still going to be doing our thing.**Adam Stacoviak:** Yeah. It's the ultimate reduction in latency to new input, right? Think of 100 years ago - creative folks were few and far between. They had miles between them, depending on your system; maybe it's kilometers. No offense. But there's distance of some sort of magnitude, and the lack of connection and shared ideas. So that's the latency, right? And now, the latency to the next input is just so small in comparison, and will get reduced to basically nothing. So we'll just constantly be inputting and outputting creativity, we'll just become like a creative [unintelligible 00:29:31.17] system with zero latency, nonstop creativity... Go, go, go...**Shawn Wang:** Well, I think this is where you start -- I don't know about you, but I feel a little bit uncomfortable with that, right? Entropy is always increasing in the universe; we're contributing to increasing noise and not signal. And that is a primary flaw of all these language models, is just they are very confidently incorrect. They have no sense of physics, no sense of logic; they will confidently assert things that are not true, and they're trained on sounding plausible, rather than being true.**Jerod Santo:** Right. They're kind of like me when I was in college, you know?**Shawn Wang:** Exactly. 
[laughter]**Jerod Santo:** [30:10] Just so much confidence, but wrong most of the time. [laughs]**Shawn Wang:** Exactly. Which is what happened to Galactica, the sort of science LLM from Meta, where Yann LeCun, who is one of the big names in AI, was like "This thing will generate papers for you." And within three days, the internet tore it apart, and they had to take it down. It was a very, very dramatic failure for this kind of tech... Because you're talking about biology, and science, and medicine, and you can't just make stuff up like that. [laughs]**Jerod Santo:** Right. So in the world where ChatGPT operates today, which is really the world of fiction, and kind of BS-ing, for lack of a better term - like writing intros to a podcast - it doesn't have to be correct, necessarily; it can be like close enough to correct, and then you can massage it, of course; you can cherry-pick to get the one that you like... But when the rubber hits the road, on serious things, like science, or "How many of these pills do I need to take?" - I guess that is also -- that's health science. So science, and other things... It can't be correct 60% of the time, or 80%, or even like 95%. It's gotta reach that point where you actually can trust it. And because we're feeding it all kinds of information that's not correct, de facto... Like, how much of the internet's wrong? Most of it, right?**Adam Stacoviak:** I mean, medicine has evolved too, and it hasn't always been correct, though it's also very serious... You'd get advice from a doctor 10-15 years ago, and they'd say it with full confidence, but it's only based on the current dataset.**Jerod Santo:** But you can sue them for malpractice and stuff, right? Like, how do we take recourse against--**Adam Stacoviak:** You can if there's actually malpractice; they can be wrong, because it's as much science as possible to make the most educated guess.
It's malpractice when there's negligence; it's not malpractice when they're wrong.**Jerod Santo:** A good doctor will actually go up to the fringe and say, "You know what - I'm not 100% sure about this. It's beyond my knowledge."**Adam Stacoviak:** Sure. For sure.**Jerod Santo:** "Here's what you can do. Here's the risks of doing that." Whereas the chat bots, the ChatGPT thing is like, "The answer is 7", and you're like, "It actually was 12." And it's like, "Ah, shoot..." [laughter]**Adam Stacoviak:** Well, I think when there's mortality involved, maybe there's going to be a timeframe when we actually begin to trust the future MedGPT, for example; I don't know if that's a thing in the future, but something that gives you medical results or responses based upon data, real data, potentially, that you get there, but it's not today.**Jerod Santo:** Well, I think this goes back to the data point that you made, and I think where we go from like the 95 -- I'm making up numbers here, but like 95% accuracy, to get it to like 98.5%, or 99%. Like, that's gonna require niche, high-value, high-signal data that maybe this medical facility has, because they've been collecting it for all these years. And they're the only ones who have it. And so maybe that's where you like carve out proprietary datasets that take these models from a baseline of accuracy, to like, in this particular context of health it's this much accuracy. And then maybe eventually you combine all those and have a super model. I don't know... Swyx, what do you think?**Shawn Wang:** I love the term super-model. I think the term [unintelligible 00:33:23.10] in the industry is ensemble. But that just multiplies the costs, right? Like if you want to run a bank of five models, and pick the best one, that obviously 6x-es your cost. So not super-interesting; good for academic papers, but not super-interesting in practice, because it's so expensive.There's so many places to go with this stuff... 
Okay, there's one law that I love, which is Brandolini's Law. I have this tracking list of eponymous laws... Brandolini's Law is that people's ability to create bulls**t far exceeds the ability of people to refute it. Basically, if the result of all this AI stuff is that we create better bulls**t engines, that's not great. And what you were talking about, the stuff with like 90% correct, 95% correct - that is actually a topic of discussion. It's pretty interesting to have the SRE-type conversation of "How many nines do you need for your use case, and where are we at right now?" Because the number of nines will actually improve. We are working on -- sorry, "we" as in the collective human we, not me personally...**Adam Stacoviak:** [34:32] The royal we, yes.**Shawn Wang:** The royal we... Like, humanity is working on ways to improve, to get that up. It's not that great right now, so that's why it's good for creativity and not so much for precision, but it will get better. One of the most viral posts on Hacker News is something that you featured, which is the ability to simulate virtual machines inside of ChatGPT, where people literally opened -- I mean, I don't know how crazy you have to be, but open ChatGPT, type in ls, and it gives you a file system. [laughter]**Jerod Santo:** But that only exists -- it's not a real file system, it's just one that's [unintelligible 00:35:00.05]**Shawn Wang:** It's not a real file system, for now. It's not a real file system for now, because it hallucinates some things... Like, if you ask it for a Git hash, it's gonna make up a Git hash that's not real, because you can verify [unintelligible 00:35:10.25] MD5. But how long before it learns MD5? And how long before it really has a virtual machine inside of the language model? And if you go that far, what makes you so confident that we're not in one right now? [laughs]**Jerod Santo:** Now I'm uncomfortable...
That actually is a very short hop into the simulation hypothesis, because we are effectively simulating a brain... And if you get good enough at simulating brains, what else can you simulate?**Adam Stacoviak:** What else WOULD you want to simulate? I mean, that's the Holy Grail, a brain.**Shawn Wang:** Yeah. So Emad Mostaque is the CEO of Stability AI. He's like, "We're completely unconcerned with AGI. We don't know when it'll get here. We're not working on it. But what we're concerned about is the ability to augment human capability. People who can't draw now can draw; people who can't write marketing text or whatever, now can do that." And I think that's a really good way to approach this, which is - we don't know what the distant future is gonna hold, but in the near future, this can help a lot of people.**Adam Stacoviak:** It's the ultimate tool in equality, right? I mean, if you can do --**Shawn Wang:** Yeah, that's a super-interesting use case. So there was a guy who was sort of high school-educated, not very professional, applying for a job. And what he used ChatGPT to do was like "Here's what I want to say; please reword this into a professional email." And it basically helped him pass the professional-class status check. Do you know about the status checks? All the sort of informal checks that people have, like "Oh, we'll fly you in for your job interview... Just put the hotel on your credit card." Some people don't have credit cards. And likewise, when people email you, you judge them by their email, even though some haven't been trained to write professionally, right? And so yeah, GPT is helping people like that, and it's a huge enabler for those people.**Adam Stacoviak:** Hmm... I mean, I like that idea, honestly, because it does enable more people who are less able... It's a net positive.**Shawn Wang:** Yeah. I mean, I seem generally capable, but also, I have RSI in my fingers, and sometimes I can't type.
And so what Whisper is enabling me to do, and Copilot... So GitHub, at their recent GitHub Universe, announced voice-enabled Copilot... And it is good enough for me to navigate VS Code, and type code with Copilot and voice transcription. Those are the two things that you need; and they're now actually good enough that I don't have to learn a DSL for voice coding, like you would with Talon, or the prior solutions.**Adam Stacoviak:** You know, it's the ultimate -- if you're creative enough, it's almost back to the quote that Sam had said, that you liked... Well, I'm gonna try and go back to it; he says "At the end, because they were just able to articulate it with a creative eye that I don't have." So that to me is like insight, creativity; it's not skill, right? It's the ability to dream, which is the ultimate human skill, which is - since the beginning of time, we've been dreamers.**Shawn Wang:** [38:01] This is a new brush. Some artists are learning to draw with it. There'll be new kinds of artists created.**Adam Stacoviak:** Provided that people keep making the brush, though. It's a new brush...**Shawn Wang:** Well, the secret's out; the secret's out that you can make these brushes.**Jerod Santo:** Right.**Adam Stacoviak:** Yeah, but you still have to have the motivation to maintain the brush, though.**Jerod Santo:** What about access, too? I mean, right now you're talking about somebody who's made able, that isn't otherwise, with - let's just say - ChatGPT, which is free for now. But OpenAI is a for-profit entity, and they can't continue to burn money forever; they're gonna have to turn on some sort of money-making machine... And that's going to inevitably lock some people out of it. So now all of a sudden, access becomes part of the class divide, doesn't it? Like, you can afford an AI and this person cannot. And so that's gonna suck.
Like, it seems like open source could be for the win there, but like you said, Swyx, there's not much moving and shaking in that world.**Adam Stacoviak:** Well, I haven't stopped thinking about what Swyx said last time we talked, which was above or below the API - it's almost the same side of the coin that we talked about last time; it's the same thing.**Jerod Santo:** Yeah. Well, ChatGPT is an API, isn't it?**Shawn Wang:** Nice little callback. Nice. [laughter]**Adam Stacoviak:** I really haven't been able to stop thinking about it. Every time I use any sort of online service to get somebody to do something for me that I don't want to do - because I don't have the time for it, or I'd rather trade dollars for my time - I keep thinking about that above or below the API thing, which is what we talked about. And that's what Jerod has just brought up; it's the same exact thing.**Shawn Wang:** Yep, it is. One more thing I wanted to offer, which is the logical conclusion to generative AI. So that post where we talked about why prompt engineering is overrated - the second part of it is why you shouldn't think about this as generative... Because right now, the discussion we just had was only thinking about it as a generative type of use case. But really, what people want to focus on going forward is -- well, two things. One is the ability for it to summarize and understand and reason, and two, for it to perform actions. So the emerging focus is on agentic AI; AI agents that can perform actions on your behalf. Essentially, hooking it up to -- giving it legs and arms and asking it to do stuff autonomously. So I think that's super-interesting to me, because then you get to have it both ways. You get AI to expand bullet points into prose, and then to take prose into bullet points. And there's a very funny tweet from Josh Browder, who is the CEO of DoNotPay, which is kind of like a --**Adam Stacoviak:** Yeah, I'm a fan of his.**Shawn Wang:** Yeah. Fantastic, right?
So what DoNotPay does is they get rid of annoying payment UX, right? Like, it started with parking tickets, but now they are trying to broaden out into different things. So he recently tweeted that DoNotPay is working on a way to talk to Comcast to negotiate your cable bill down. And since Comcast themselves are going to have a chat bot as well, it's going to be chat bots talking to each other to resolve this... [laughter]**Adam Stacoviak:** Wow, man...**Jerod Santo:** It's like a scene out of Futurama, or something...**Shawn Wang:** Yeah. So I'm very excited about the summarization aspects. One of the more interesting projects that came out of this recent wave was Explainpaper, which is - you can throw any academic paper at it and it explains the paper to you in approachable language, and you can sort of query it back and forth. I think those are super-interesting, because that starts to reverse Brandolini's Law. Instead of generating bulls**t, you're taking bulls**t in and getting it into some kind of order. And that's very exciting.**Adam Stacoviak:** Yeah. 17 steps back - it makes me think about when I talk to my watch, and I say "Text my wife", and I think about who is using this to their betterment... And I'm thinking, we're only talking about adults, for the most part. My kid, my son, Eli - he talks to Siri as if she knows everything, right? But here's me using my watch to say "Text my wife." I say it, it puts it into the phone... And the last thing it does for me, which I think is super-interesting for the future of this AI assistant, is "Send it" - the final prompt back to me as the human; should I send this? And if I say no, Siri doesn't send it. But if I say "Send it", guess what she does? She sends it. But I love this idea of the future, like maybe some sort of smarter AI assistant like that. I mean, to me, that's a dream.
I'd love that.

**Shawn Wang:** [42:21] Yeah, I was watching this clip of the first Iron Man, when Robert Downey Jr. is kind of working with his bot to work on his first suit... And he's just talking to the bot, like "Here's what I want you to do." Sometimes it gets it wrong and he slaps it on the head... But more often than not, it gets it right. And this is why I've been -- you know, Wes Bos recently tweeted -- this is actually really scary. "Should we be afraid as engineers, like this is going to come for our jobs?" And I'm like, "No. All of us just got a personal junior developer." That should excite you.

**Jerod Santo:** Yeah. And it seems like it's particularly good at software development answers. You'd think it's because there's lots of available text... I mean, think about like things that it's good at; it seems like it knows a lot about programming.

**Shawn Wang:** I have a list. Do you want a list?

**Jerod Santo:** Yeah.

**Shawn Wang:** So writing tutorials - it's very good. Literally, tables of contents, section by section, explaining "First you should npm install. Then you should do X. Then you should do Y." Debugging code - just paste in your error, and paste in the source code, and it tells you what's wrong with it. Dynamic programming, it does really well. Translating DSLs. I think there'll be a thousand DSLs blooming, because the barrier to adoption of a DSL has just disappeared. [laughs] So why would you not write a DSL? No one needs to learn your DSL.

**Adam Stacoviak:** What is this, Copilot you're using, or ChatGPT, that you're--

**Shawn Wang:** ChatGPT-3. I have a bunch of examples here I can drop in the show notes. AWS IAM policies. "Hey, I want to do X and Y in AWS." Guess what? There's tons of documentation. ChatGPT knows AWS IAM policies. Code that combines multiple cloud services. This one comes from Corey Quinn. 90% of our jobs is hooking up one service to another service. You could just tell it what to do, and it just does it, right?
There's a guy who was like, "I fed my college computer networks homework to it, and it gave the right result", which is pretty interesting. Refactoring code from Elixir to PHP is another one that has been done... And obviously, Advent of Code, which - we're recording this in December now. The person who won -- so Advent of Code for the first 100 people is a race; whoever submits the correct answer first, wins it. And the number one place on the first Advent of Code puzzle this year was a ChatGPT guy. So it broke homework. Like, this thing has broken homework and take-home interviews, basically. [laughs]

**Jerod Santo:** Completely. It's so nice though; like, I've only used it a little bit while coding, but it's two for two, of just like drilling my exact questions. And just stuff like "How do you match any character that is not an [unintelligible 00:44:43.28] regular expression?"

**Shawn Wang:** Oh, yeah. Explaining regexes.

**Jerod Santo:** Yeah. That was my question. Like, I know exactly what I want, but I can't remember which is the character, and so I just asked it, and it gave me the exact correct answer, and an example, and explained it in more detail, if I wanted to go ahead and read it. And it warned me, "Hey, this is not the best way to test against email addresses... But here it is." So I was like, "Alright..." This is a good thing for developers, for sure.

**Shawn Wang:** Yeah. But you can't trust it -- so you have a responsibility as well. You can't write bad code, have something bad happen, and go, "Oh, it wasn't my fault. It was ChatGPT."

**Jerod Santo:** Well, you can't paste Stack Overflow answers into your code either.

**Shawn Wang:** You have the responsibility. Exactly.

**Jerod Santo:** Yeah. I mean, you can, but you're gonna get fired, right? Like, if the buck stops at you, not at the Stack Overflow answer person, you can't go find them and be like, "Why were you wrong?" Right? It stops at you.

**Shawn Wang:** Yeah.
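(Editor's note: the regex feature Jerod is describing, "match any character that is not X", is a negated character class. A minimal Python sketch; the example strings and the toy email pattern are illustrative, not from the episode:)

```python
import re

# "Any character that is NOT x" is written as the negated class [^x].
pattern = re.compile(r"[^0-9]")  # any character that is not a digit
print(pattern.findall("a1b2"))   # -> ['a', 'b']

# The caveat ChatGPT gave Jerod is the standard one: a pattern like this
# is a plausibility check, not full RFC 5322 email validation.
email_like = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
print(bool(email_like.match("user@example.com")))  # -> True
print(bool(email_like.match("not an email")))      # -> False
```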
So I think the way I phrased it was -- do you know about this trade offer meme that is going around? So it's "Trade offer - you receive better debugging, code explanation, install instructions, better documentation, elimination of your breaking of flow from copy and pasting from Stack Overflow - you receive all these benefits, in exchange for more code review." There is a cost, which is code review. You have to review the code that your junior programmer just gave you. But hey, that's better and easier than writing code yourself.

**Jerod Santo:** [46:04] Yeah, because you've got a free junior programmer working for you now. [laughter]

**Shawn Wang:** There's a guy that says, "I haven't done a single Google search or consulted any external documentation for the past few days, and I was able to progress faster than I ever had when delivering a new thing." I mean, it's just... It's amazing, and Google should be worried.

**Jerod Santo:** Yeah, that's what I was gonna say - is this an immediate threat to Google? Now, I did see a commenter on Hacker News - Swyx, I'm not sure if you saw this one - from inside of Google, talking about the cost of integration.

**Shawn Wang:** Yes. Yeah, I've read basically every thread... [laughter] Which is a full-time job, but... This is so important. Like, I don't do this for most things, right? Like, I think this is big enough that I had to drop everything and go read up on it... And not be an overnight expert, but at least try to be informed... And that's all I'm doing here, really. But yeah, do you want to read it out?

**Jerod Santo:** Yeah. So in summary, they were responding...
This is on a thread about ChatGPT, and they say -- this is a Googler, and they say "It's one thing to put up a demo that interested nerds can play with, but it's quite another thing to try to integrate it deeply in a system that serves billions of requests a day, when you take into account serving costs, added latency, and the fact that average revenue on something like a Google search is close to infinitesimal (which is the word I can't say out loud) already. I think I remember the presenter saying something like they'd want to reduce the cost by at least 10 times before it could be feasible to integrate models like this in products like Google search. A 10x or even 100x improvement is obviously an attainable target in the next few years, so I think technology like this is coming in the next few years."

So that's one insider's take on where Google stands. Obviously, Google has tons of resources dedicated to these areas of expertise, right? It's not like Google's asleep at the wheel, and is going to completely have their lunch eaten by OpenAI. But right now, there's a lot of people who are training new habits, right? They're like, "I'm not gonna use Google anymore. I'm gonna start using OpenAI." I think something on the order of one million users signed up in their first few days... How long can Google potentially bleed people before it becomes an actual problem? I don't know. I don't know the answer to these things.

**Shawn Wang:** So there's one way in which you can evaluate for yourself right now, and I think that's the most helpful, constructive piece of advice that we can give on this podcast, which is -- we're covering something that is moving very live, very fast. Everything that we say could be invalidated tomorrow by something new. But you could just run ChatGPT-3 alongside all your Google searches. That's a very, very simple way to evaluate if this would replace Google for you; just run it twice, every single time.
And so there's a Google extension - and I'll link it - [unintelligible 00:48:47.04] ChatGPT Google extension; I'll put it in the show notes. And yeah, I have it running; it's not that great. [laughs] Surprisingly. So ChatGPT is optimized for answering questions. Sometimes I don't put questions in there. I just put the thing I'm looking for, and Google's pretty good at that, it turns out... [laughs]

**Jerod Santo:** Right. See, because you are an expert-level Google prompt engineer, right? Like, you know how to talk to Google.

**Shawn Wang:** We have optimized to Google prompting, yes.

**Jerod Santo:** Exactly.

**Shawn Wang:** If I need to search within a certain date range, I know how to do that in Google. I can't do that in ChatGPT-3. If I need to look for PDFs, I know how to do that. If I want to look for Reddit, and constrain the site to Reddit, I know how to do that. ChatGPT-3 has no concept of attribution, no concept of date ranges, and stuff like that.

**Jerod Santo:** Right.

**Shawn Wang:** But yeah, it is just like better at some things, and worse at other things, and that is the nature of all new technology. It just has to be better at one thing that you cannot get anywhere else, and it has a permanent hold in your mind. Whenever you need that thing done, you will turn to ChatGPT-3, or any other new technology.

[49:53] I love this sort of meta philosophy about technology adoption, because all new toys just generally are worse than the things that they replace, except in one area, and that area needs to matter. And if it does matter, it will win, because they will fix the bugs.

**Jerod Santo:** Yeah, oftentimes with disruption, that area is cost; like acquisition cost. Sometimes it's convenience, and maybe I guess sometimes it's accuracy. There's different metrics, but it's got to be the one that matters. If it's marginally better at things that don't matter, you're not going to disrupt.
But if it's a lot better at one thing that matters a lot, even if everything else sucks, you'll use that thing.

**Shawn Wang:** Yeah, exactly. So it's interesting, because -- you know, Google has a few things going for it. By the way, it has one of the largest training repositories of text that no one else has, which is Gmail. But the most impressive thing it's been able to ship with Gmail is the little autocomplete - "Looks good", "Okay", the little buttons that you see in the smart replies.

**Jerod Santo:** Do you guys ever use those? Do you ever click on those?

**Shawn Wang:** I use that. I use that. Save some typing.

**Adam Stacoviak:** Yeah, well, I used to actually use Gmail directly to compose my emails, or respond. I would tap to complete all the time, if the response was like, "Yeah, I was gonna say that."

**Shawn Wang:** There's a billion little ways that AI is built into Google right now, that we just take for granted, because we don't feel it, because there's no prompts. [laughter]

**Jerod Santo:** We need a prompt!

**Adam Stacoviak:** Even if OpenAI did eat Google's lunch, Google would just acquire it, or something...

**Shawn Wang:** You would think so...

**Jerod Santo:** Maybe...

**Shawn Wang:** But I would say that probably OpenAI is not for sale. Like, they have this world-conquering ambition that would just not let them settle for anything less than global domination... Which is a little bit scary, right?

**Jerod Santo:** Yeah, I think they're probably going the distance, is their plan, it seems like...

**Shawn Wang:** Well, if anything, Microsoft should have bought them when they had the chance, because that was Bing's opportunity, and I don't think that ever came to pass... Probably because Sam Altman was smart enough not to do that deal. But yeah, so let's take that line of thinking to its logical conclusion. What would you feel if Google started autocompleting your entire email for you, and not just, like, individual two or three words?
You would feel different, you would feel creeped out. So Google doesn't have the permission to innovate.

**Adam Stacoviak:** I wouldn't freak out if I opted in, though. If I was like, "This technology exists, and it's helpful. I'll use that." Now, if it just suddenly started doing it, yeah, creeped out. But if I'm like, "Yeah, this is kind of cool. I opt into this enhanced AI, or this enhanced autocompletion", or whatever simplifies the usage of it, or whatever.

**Shawn Wang:** Yeah, so there's actually some people working on an email client that does that for you. So Evan Conrad is working on EveryPrompt email, which is essentially you type a bunch of things that you want to say, and you sort of batch answer all your emails with custom generated responses from GPT-3. It's a really smart application of this tech to email. But I just think, like -- you would opt in; the vast majority of people never opt into anything.

**Jerod Santo:** Yeah, most people don't opt in.

**Shawn Wang:** Like, that's just not the default experience. So I'm just saying, one reason that Google doesn't do it is "Yeah, we're just too big." Right? That is essentially the response that you read out from that engineer; like, "This doesn't work at Google scale. We can't afford it. It would be too slow", whatever. That's kind of a cop-out, I feel like... Because Google should be capable. These are the best engineers in the world; they should be able to do it.

**Jerod Santo:** Well, he does say he thinks it's coming in the next few years. So he's not saying it's impossible, he's saying they're not there yet. And I will say, I'm giving ChatGPT the benefit of my wait time that I do not afford to Google. I do not wait for Google to respond. I will give ChatGPT three to five seconds, because I know it's a new thing that everyone's hitting hard... But if they just plugged that in, it would be too slow.
I wouldn't wait three to five seconds for a Google search.

**Shawn Wang:** Yeah. By the way, that's a fascinating cloud story that you guys have got to have on - find the engineer at OpenAI that scaled ChatGPT-3 in one week from zero to one million users.

**Jerod Santo:** Yeah, totally.

**Adam Stacoviak:** [53:58] Well, if you're listening, or you know the person, this is an open invite; we'd love to have that conversation.

**Shawn Wang:** Yeah. I've seen the profile of the guy that claimed to [unintelligible 00:54:04.00] so that he would know... But I don't know who would be responsible for that. That is one of the most interesting cloud stories probably of the year. And Azure should be all over this. Azure should be going, "Look, they handled it, no problem. This is the most successful consumer product of all time - come at us", right?

**Jerod Santo:** That's true. They should.

**Shawn Wang:** They're the number three cloud right now. This is like their one thing, this is their time to shine. They've got to do it.

**Jerod Santo:** And does anybody even know that Azure is behind OpenAI? I'm sure you can find out, but is that well known? I didn't know that.

**Shawn Wang:** Oh, it's very public. Microsoft invested a billion dollars in OpenAI.

**Jerod Santo:** Okay. Did you know that, Adam?

**Adam Stacoviak:** No.

**Jerod Santo:** So I'm trying to gauge the public knowledge...

**Shawn Wang:** What we didn't know was that it was at a valuation of $20 billion, which... So OpenAI went from this kind of weird research lab type thing into one of the most highly valued startups in the world. [laughs]

**Jerod Santo:** Do you think Microsoft got their money's worth?

**Shawn Wang:** I think so... It's a wash right now, because --

**Jerod Santo:** Too early.

**Shawn Wang:** ...they probably cut them a lot of favorable deals for training, and stuff... So it's more about being associated with one of the top AI names.
Like, this is the play that Microsoft has been doing for a long time, so it's finally paying off... So I'm actually pretty happy for that. But then they have to convert that into getting people who are not [unintelligible 00:55:21.00] onto this thing.

**Break:** [55:26]

**Adam Stacoviak:** What's the long-term play here, though? I mean, if Microsoft invested that kind of money, and we're using ChatGPT right now, we're willing to give it extra seconds - potentially even a minute if the answer is that important to you - that we wouldn't afford to Google... Like, what's the play for them? Will they turn this into a product? How do you make billions from this? Do you eventually just get absorbed by the FAANGs of the world, and next thing you know this incredible future asset to humanity is now owned by essentially the folks we try to get away from by hosting our own services? Like, we're hosting Nextcloud locally, so we can get off the Google Drives and whatnot... And all this sort of anti-whatever. I mean, what's the endgame here?

**Shawn Wang:** Am I supposed to answer that? [laughs]

**Adam Stacoviak:** Do you have an answer? I mean, that's what I think about...

**Jerod Santo:** Let's ask ChatGPT what the endgame is... No, I mean, short-term - doesn't it seem like OpenAI becomes the API layer for every AI startup that's gonna start in the next 5 or 10 years? Like, aren't they just charging their fees to everybody who wants to integrate AI into their products, pretty much? That's not an endgame, but that's a short-term business model, right?

**Shawn Wang:** That is a short-term business model, yeah. I bet they have much more up their sleeves... I don't actually know. But they did just hire their first developer advocate, which is interesting, because I think you'll start to hear a lot more from them.

[58:12] Well, there's two things I will offer for you. One, it's a very common view or perception that AI is a centralizing force, right?
Which is, Adam, what you're talking about, which is, "Does this just mean that the big always get bigger?" Because the big have the scale and size and data advantage. And one of the more interesting blog posts - sorry, I can't remember who I read this from - was that actually one of the lessons from this year is that it's not necessarily true, because AI might be a more decentralized force, because it's more amenable to open source... And crypto, instead of being decentralized, turned out to be more centralized than people thought.

So the two directions of centralized versus decentralized - the common perception is that AI is very centralized, and crypto very decentralized. The reality was that it's actually the opposite, which is fascinating to me as a thesis. Like, is that the endgame - that AI eventually gets more decentralized, because people want this so badly that there are enough researchers who go to NeurIPS to present their research papers and tweet out all this stuff, that diffuses these techniques all over the place? And we're seeing that happen, helped in large part, probably, by Stability AI. The proof that Stability, as an independent, outsider company - like, not a ton of connections in the AI field - pulled off this humongous achievement is, I think, just a remarkable encouragement that anyone could do it... And that's a really encouraging thing for those people who are not FAANG and trying to make some extra headroom in this world. So that's one way to think about the future.

The second way to think about who monetizes and who makes the billion dollars on this... There's a very influential post that I was introduced to recently from Union Square Ventures, called "The Myth of the Infrastructure Phase", which is directly tackling this concept that everyone says "When you have a gold rush, sell picks and shovels", right?
And it's a very common thing, and presumably, AI being the gold rush right now, you should sell picks and shovels - which is, you should build AI infrastructure companies. But really, there are tons of AI infrastructure companies right now; they're a dime a dozen, and they're all looking for use cases. Basically, the argument of "The Myth of the Infrastructure Phase" is that technology swings back and forth between app constraint and infra constraint. And right now, we're not infrastructure-constrained, we're app-constrained. And really, it's the builders of AI-enabled products like TikTok that know what to do with the AI infrastructure tha
Forecasting is essential for every team, and it involves tracking key performance indicators and financial metrics to measure each team's success. But many people don't discuss best practices, especially when you begin to add headcount into the mix. Our host Joe Michalowski welcomes Brian Weisberg, the CFO at Tidelift, back to The Role Forward podcast. While they work through departmental headcount planning, Joe and Brian also discuss the importance of understanding the business's goals, why you need to be realistic with expectations, and why companies should invest early in a financial position. Links Referenced in This Episode: Dave Kellogg's Blog; Lauren Kelley and Tom Huntington on the R&D Magic Number. Guest-at-a-Glance
Learn from Lyn Muldrow, Maintainer Advocate, Tidelift, about how to solve the world's problems through developer education, how organizations like WWCode impacted her career, and her experience being a single mom in tech.
Josh and Kurt talk about Microsoft creating a policy of not allowing anyone to charge for open source in their app store. This policy was walked back quickly, but it raises some questions about how fair or unfair open source really is. It's mostly unfair to developers if you look at the big picture. Show Notes: Syft; Grype; Microsoft bans and unbans open source; Tidelift survey; Bruce Perens - What comes after open source
Tidelift co-founders Jeremy Katz and Luis Villa join Doc Searls and Aaron Newcomb on this episode of FLOSS Weekly to discuss how maintainers should be paid. You might think the answer would be different for every codebase, but not if there's a platform for doing it. Hosts: Doc Searls and Aaron Newcomb Guests: Jeremy Katz and Luis Villa Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit
When you look at the state of the Open-Source Software (OSS) ecosystem, what do you think some of the biggest problems are?
Why do you think we're now starting to see so much increased attention on the Software Supply Chain?
When it comes to OSS maintainers and contributors, typically this is all done voluntarily and uncompensated in many cases. How is Tidelift looking to change that paradigm?
What are some recommendations you have for organizations as they start to try and get a handle on their software supply chain?
What are some things Tidelift is focused on that you think will benefit the industry and community?
In episode 46 of EnterpriseReady, Grant speaks with Donald Fischer of Tidelift. Donald shares lessons from his early career journey in product and venture capital, and together they discuss the importance of supporting open source creators at scale.
Open Source is an essential foundation for pretty much everything. How do we fund it appropriately? What do we do about Log4Shell-type issues? Donald Fischer of Tidelift joins us to discuss these economic and human issues. Discuss this episode: https://discord.gg/nPa76qF
When someone even mentions the budgeting season, most business owners and their team members — especially the finance department — start feeling anxious. But it should not be like that, says our guest, Brian Weisberg, the Head of Finance and Business Operations at Tidelift. The key to any fruitful budgeting process is a well-thought-out financial model. Therefore, each company needs a finance expert — someone ready to go beyond their role to get to know the business and its parts and "knit the whole thing together." In this episode of The Role Forward, Brian and our host Joe Garafalo discuss the importance of mindful sales planning, the role of technology in the FP&A space, and why every company should perceive a finance expert as a team captain. Guest-at-a-Glance: Name: Brian Weisberg. What he does: Brian is the Head of Finance and Business Operations at Tidelift. Company: Tidelift. Noteworthy: Before joining Tidelift, Brian spent a considerable number of years running finance for fast-growing startups in the enterprise B2B SaaS space. Where to find Brian: LinkedIn | Twitter
Guest Nicholas C. Zakas Panelists Richard Littauer Show Notes Hello and welcome to Sustain! The podcast where we talk about sustaining open source for the long haul. You may know my guest today, Nicholas Zakas, because he is the creator of a very popular JavaScript project called ESLint, which has been downloaded 13 million times each week. Nicholas is an independent software engineer, consultant, and coach, and has written numerous books including, Understanding ECMAScript 6, The Principles of Object-Oriented JavaScript, and Maintainable JavaScript. With over sixteen years of web application development experience and speaking at conferences around the world, he's putting his focus now on mentoring and coaching the next generation of JavaScript engineers. Nicholas brings us on his journey sharing his story of becoming a developer, starting ESLint, and what he's doing to make sure everybody in the ESLint community is able to benefit from the money they are bringing in. We also learn more about an interesting blog post he wrote, how contributors get paid, and other open source projects ESLint donates to. Why should you use ESLint? Go ahead and download this episode now to find out! [00:01:39] Nicholas shares his story with us starting out as a developer and how it led him to starting ESLint. [00:03:01] What did Nicholas mean when he said he fell in love with JavaScript? [00:03:47] We find out how long ESLint has been around, how many people are working full-time, and how he keeps himself in funds. [00:05:04] Nicholas talks about the Open Collective and GitHub sponsors they set up for donations. [00:07:42] Richard brings up a blog post Nicholas wrote on, “How to talk to your company about sponsoring an open source project” and he tells us what iterations he's gone through with ESLint. [00:10:59] Nicholas talks about the difficulties in multi-tasking, and he tells us the next thing they tried with paying a straight per hour rate for team members. 
[00:17:15] Richard wonders where Nicholas came up with the less-than-standard rate for hourly work, which is not really a Silicon Valley salary. Nicholas also tells us how many hours per month he is paying out, how the people who have been paid feel about it, and why there are no caps on what people can make. [00:20:43] Nicholas mentions using Tidelift, how much money it brings in, and the money going to TSC members. [00:22:04] Find out what else Nicholas is doing with the money besides paying contributors. He mentions several other open source projects they are donating to, and one person in particular he mentions is Sindre Sorhus. [00:27:58] Richard wonders more about the governance process and how Nicholas feels about it. [00:31:52] Nicholas dives deep as he explains three things that would convince him that ESLint would be a project that he would want to use. [00:34:20] We learn some future plans for what Nicholas would do with funds to make the project more sustainable. [00:38:09] Find out where you can follow Nicholas online.
Quotes [00:03:26] “And I see ESLint as really, this will sound cheesy, as an act of love on your code that we aren't trying to change what it does.” [00:04:24] “We found that people who have kids are looking for something to do after the kids go to bed.” [00:05:52] “And so, if that is your starting point where even folks who are just coming right out of college are getting 120k each year, that means that's the minimum that you need to raise in order to hire someone full-time if they're in a major metropolitan area in the United States.” [00:22:17] “The first thing is we have what's called a contributor pool, which is money that we set aside every month to pay non-team members for contributions to ESLint.” [00:22:46] “Generally, anything that is of benefit to the project we will potentially pay you for.” [00:24:43] “So, one of the things that we were looking at in terms of sustainability is we're bringing in a certain amount of money each month.” [00:24:53] “We are building on top of the work of others. And so, why shouldn't we be spreading that money to those others, because without them ESLint either wouldn't exist or be a lot harder to maintain.” [00:28:17] “Well, what's interesting is that when I started ESLint, in my mind this was like a one-year project.” [00:29:16] “And I just kept coming back to, what's in it for them?” [00:30:44] “And so, how can I ensure the future survival of the project outside of me working on it?” Spotlight [00:38:52] Richard's spotlight is StandardJS. [00:39:27] Nicholas's spotlight is a project called Release Please. Links SustainOSS (https://sustainoss.org/) SustainOSS Twitter (https://twitter.com/SustainOSS?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) SustainOSS Discourse (https://discourse.sustainoss.org/) Nicholas C. 
Zakas Twitter (https://twitter.com/slicknet?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) Human Who Codes (http://humanwhocodes.com/) Open Collective- ESLint (https://opencollective.com/eslint) How to talk to your company about sponsoring an open source project by Nicholas C. Zakas- Human Who Codes (https://humanwhocodes.com/blog/2021/05/talk-to-your-company-sponsoring-open-source/) Reading List-Human Who Codes (https://humanwhocodes.com/reading/) Deep Work: Rules for Focused Success in a Distracted World by Cal Newport (https://www.amazon.com/Deep-Work-Focused-Success-Distracted/dp/1455586692/ref=sr_1_1?crid=20RZZIIP2GWVG&dchild=1&keywords=deep+work+cal+newport&qid=1634932822&qsid=140-9480495-9312539&sprefix=deep+work%2Caps%2C101&sr=81&sres=0349411905%2C9123832355%2C9123781467%2C912411412X%2CB07DBRBP7G%2C1401962122%2C0735211299%2C9123963255%2C0525536558%2C1443460710%2CB009CMO8JQ%2C1544512279%2CB00IWYP5NI%2CB07SBX56MC%2C0374533555%2CB08817M9SS&srpt=ABIS_BOOK) A year of paying contributors (ESLint) (https://eslint.org/blog/2020/10/year-paying-contributors-review) Sindre Sorhus (https://sindresorhus.com/) ESLint (https://eslint.org/) Standard JS-GitHub (https://github.com/standard/standard) Release Please-GitHub (https://github.com/googleapis/release-please) [Understanding ECMAScript 6: The Definitive Guide for JavaScript Developers by Nicholas C. Zakas](https://www.amazon.com/Understanding-ECMAScript-Definitive-JavaScript-Developers/dp/1593277571/ref=sr15?crid=299FWWAJ52K4H&dchild=1&keywords=nicholas+Zakas+books&qid=1634926017&qsid=140-9480495-9312539&sprefix=nicholas+zakas+book%2Caps%2C86&sr=85&sres=059680279X%2CB00I87B1H8%2C1593277571%2C1449327680%2C1118026691%2C0470109491%2CB011DBHZ2K%2C3944540573%2CB011DB19KE%2CB088P9Q6BB%2CB00BQ7RMW0%2CB01A65ALSY%2CB01A64IRUY%2CB00HK37CXS%2C0470227818%2CB089LJTMPJ&srpt=ABISBOOK)_ [The Principles of Object-Oriented JavaScript by Nicholas C.
Zakas](https://www.amazon.com/Principles-Object-Oriented-JavaScript-Nicholas-Zakas-dp-1593275404/dp/1593275404/ref=mtother?encoding=UTF8&me=&qid=1634926112) [Maintainable JavaScript: Writing Readable Code by Nicholas C. Zakas](https://www.amazon.com/Maintainable-JavaScript-Writing-Readable-Code/dp/1449327680/ref=sr15?crid=299FWWAJ52K4H&dchild=1&keywords=nicholas+Zakas+books&qid=1634926112&qsid=140-9480495-9312539&sprefix=nicholas+zakas+book%2Caps%2C86&sr=85&sres=059680279X%2CB00I87B1H8%2C1593277571%2C1449327680%2C1118026691%2C0470109491%2CB011DBHZ2K%2C3944540573%2CB011DB19KE%2CB088P9Q6BB%2CB00BQ7RMW0%2CB01A65ALSY%2CB01A64IRUY%2CB00HK37CXS%2C0470227818%2CB089LJTMPJ&srpt=ABISBOOK) Credits Produced by Richard Littauer (https://www.burntfen.com/) Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/) Show notes by DeAnn Bahr Peachtree Sound (https://www.peachtreesound.com/) Special Guest: Nicholas Zakas.
Brandon and Eric reflect on 2021 and their favorite episodes and topics of the year. Episode Links 32 - Open Source Sustainability (https://sudo.show/32) 30 - Loving Your Work with Dashaun Carter (https://sudo.show/30) 27 - Open Source Virtual Desktop Infrastructure (https://sudo.show/27) 16 - Starting a Home Lab (https://sudo.show/16) 18 - Managing Multi-Cloud with Chris Psaltis (https://sudo.show/18) 22 - Tidelift (https://sudo.show/22) 24 - Data Quality with Soda (https://sudo.show/24) 37 - Data Integration with Michel Tricot of Airbyte (https://sudo.show/37) 28 - Security Intelligence with Steve Ginty of RiskIQ (https://sudo.show/28) 35 - Busting Open Source Security Myths (https://sudo.show/35) Software Links Project Hamster (https://github.com/projecthamster) Links to the network shows Destination Linux Network (https://destinationlinux.network) Sudo Show Website (https://sudo.show) Support the Show Sponsor: Bitwarden (https://bitwarden.com/dln) Sponsor: Digital Ocean (https://do.co/dln-mongo) Sudo Show Swag (https://sudo.show/swag) Contact Us: DLN Discourse (https://sudo.show/discuss) Email Us! (mailto:contact@sudo.show) Sudo Matrix Room (https://sudo.show/matrix) Follow our Hosts: Brandon's Website (https://open-tech.net) Eric's Website (https://itguyeric.com) Red Hat Streaming (https://www.redhat.com/en/livestreaming) Chapters 00:00 Intro 00:42 Welcome 02:41 DigitalOcean 03:45 Bitwarden 05:11 Main Content 47:20 Wrap Up
With the release of Vue 3, developers now have access to the Composition API, a new way to write Vue components. This API allows features to be grouped together logically, rather than having to organize single-file components by function. Using the Composition API can lead to more readable code, and gives developers more flexibility and scalability when developing their applications, which signals a bright future for Vue. At least, this is what today's guest believes! Today, we speak with Oscar Spencer, developer at Tidelift and co-author of the Grain programming language, about Vue's Composition API and why he believes it represents great things for Vue. We touch on the Options API, our opinions of a template-first approach, and why the Composition API is infinitely better than Mixins, as well as how JavaScript can prepare developers for the Options API and what to watch out for when you first start working with the Composition API in Vue. All this plus this week's picks and so much more when you tune in today! Key Points From This Episode: An introduction to today's guest, Oscar Spencer. The panel shares what sound their Slack makes when they receive a new message. Oscar shares his personal passion for the Vue Composition API. Why he believes that Vue's bright future includes the Options API too. Why the Composition API represents great things for the future of Vue. The panel discusses commit messages, interactive rebasing, and squashing. What Oscar means when he says that the Composition API makes Vue more scalable. Oscar and the panel weigh in on taking a template-first approach. Discover Oscar's situational approach to composables when reusing business logic. Composition API versus Mixins and why Oscar believes the Composition API is superior. Whether the Options API or Composition API is easier to teach to a beginner developer. How JavaScript prepares developers for the Options API, which Oscar describes as 'cozy'. Oscar on how to know when to use the Composition API versus the Options API.
Why you would choose Composition API over simply using JavaScript: reactivity. The panel shares some of the longest Vue components they have worked on. Render functions in Vue and Oscar's perspective on React versus Vue. What to look out for if you're new to Composition API; not understanding Vue's reactivity. Why the coolest thing Oscar has done in Vue is write a backend using the reactivity API. This week's picks: Only Murders in the Building, The Artful Escape, Dyson Sphere Program, The Great Ace Attorney Chronicles, and more! Tweetables: “When I look at the Composition API, I see a very bright future for Vue.” — @oscar_spen (https://twitter.com/oscar_spen) [0:02:22] “The Composition API just gets rid of a whole host of issues that you have with Mixins. In fact, Mixins were my only complaint in Vue 2.” — @oscar_spen (https://twitter.com/oscar_spen) [0:24:05] “Don't be too scared of the [Composition API]. It was definitely designed with composition in mind. It was designed for you to have your composables consuming composables and not blowing up the world – [while] being fairly easy to follow as well.” — @oscar_spen (https://twitter.com/oscar_spen) [0:27:34] “Regular JavaScript modules only get you so far because, fundamentally, what these regular JavaScript modules are missing is the reactivity. What the Composition API is letting us do is compose things that are reactive.” — @oscar_spen (https://twitter.com/oscar_spen) [0:41:44] “By far the biggest gotcha with the Composition API is not understanding Vue's reactivity. That's going to be the biggest gotcha that you can possibly run into. 
I highly recommend, instead of trying to wing it, just go look at a tutorial.” — @oscar_spen (https://twitter.com/oscar_spen) [0:57:02] Links Mentioned in Today's Episode: Vue-oxford (https://www.npmjs.com/package/vue-oxford) Unconventional Vue - Vue as a Backend Framework (https://www.vuemastery.com/conferences/vueconf-us-2020/unconventional-vue-vue-as-a-backend-framework), Oscar Spencer (VueConf US 2020) AITA for being mad at my parents for decorating my first house without my consent? (https://www.reddit.com/r/AmItheAsshole/comments/pmgu2h/aita_for_being_mad_at_my_parents_for_decorating), iamcag07 @oscar_spen (https://twitter.com/oscar_spen) (Twitter) ospencer (https://github.com/ospenser) (Github) Grain (https://grain-lang.org) Dyson Sphere Program (https://en.wikipedia.org/wiki/Dyson_Sphere_Program) The Artful Escape (https://theartfulescape.com/) Only Murders in the Building (https://www.hulu.com/series/only-murders-in-the-building-ef31c7e1-cd0f-4e07-848d-1cbfedb50ddf), Hulu (Television Show) The Great Ace Attorney Chronicles (https://www.ace-attorney.com/great1-2), Capcom (Nintendo Switch, PlayStation 4, Steam) TERRO® Fly Magnet® Super Fly Roll (https://www.terro.com/terro-fly-magnet-super-fly-roll-t521) Tiny Beautiful Things: Advice on Love and Life from Dear Sugar (https://libro.fm/audiobooks/9780449808269-tiny-beautiful-things?bookstore=bookshoporg), Cheryl Strayed Special Guest: Oscar Spencer.
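Oscar's point in the tweetables above, that reactivity is what plain JavaScript modules are missing, can be illustrated with a toy sketch. This is not Vue's actual implementation (Vue's real `computed` tracks its dependencies automatically); the `ref` and `computed` names here only echo Vue's API for illustration:

```javascript
// Toy reactivity sketch: a ref holds a value and notifies subscribers
// on assignment, so derived values can stay in sync automatically.
function ref(value) {
  const subscribers = new Set()
  return {
    get value() { return value },
    set value(v) { value = v; subscribers.forEach(fn => fn()) },
    subscribe(fn) { subscribers.add(fn) },
  }
}

// A computed value re-runs its function whenever its (explicitly
// declared) dependency changes. Vue infers dependencies instead.
function computed(dep, fn) {
  const out = { value: fn() }
  dep.subscribe(() => { out.value = fn() })
  return out
}

const count = ref(1)
const double = computed(count, () => count.value * 2)
count.value = 5
console.log(double.value) // 10
```

A plain module exporting `let count = 1` could share state, but nothing downstream would ever hear about updates; that notification step is what the Composition API's composables get for free.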
Eric and Brandon sit down and look into some of the biggest security myths around Open Source software and one by one debunk them right on the show! Destination Linux Network (https://destinationlinux.network) Sudo Show Website (https://sudo.show) Sponsor: Bitwarden (https://bitwarden.com/dln) Sponsor: Digital Ocean (https://do.co/dln-mongo) Sudo Show Swag (https://sudo.show/swag) Contact Us: DLN Discourse (https://sudo.show/discuss) Email Us! (mailto:contact@sudo.show) Sudo Matrix Room (https://sudo.show/matrix) Heartbleed (https://heartbleed.com) Sophos: Venom Virtual Machine Escape Bug (https://nakedsecurity.sophos.com/2015/05/14/the-venom-virtual-machine-escape-bug-what-you-need-to-know) Tidelift Blog: More than Half of Maintainers Have Quit or Considered Quitting, and Here's Why (https://blog.tidelift.com/finding-5-more-than-half-of-maintainers-have-quit-or-considered-quitting-and-heres-why) Jaeger Tracing (https://www.jaegertracing.io/) Article: Measure the Health of Open Source Communities (https://www.linux.com/news/measuring-the-health-of-open-source-communities) Open Source Security Foundation (OpenSSF) (https://openssf.org) Article: Google Releases New Open Source Security Software Program Scorecards (https://www.zdnet.com/google-amp/article/google-releases-new-open-source-security-software-program-scorecards) GitHub: OSSF Scorecard (https://github.com/ossf/scorecard) LFX Insights (https://insights.lfx.linuxfoundation.org/projects) Tidelift (https://tidelift.com) Open Collective (https://opencollective.com) Chapters 00:00 Intro 00:42 Welcome 01:14 Sponsor - Bitwarden 02:40 Sponsor - Digital Ocean 03:42 OSS Has Vulnerabilities 07:45 Free means cheap 14:53 Heartbleed Bug 20:25 Open Source is Amateur 24:29 OpenSSF Scorecard 33:07 Wrap Up
This week we're bringing JS Party to The Changelog — Nick Nisi and Christopher Hiller had an awesome conversation with Luis Villa, co-founder and General Counsel at Tidelift. They discuss GitHub Copilot and the implications of an AI pair programmer and fair use from a legal perspective.
CHAOSScast – Episode 42 Hello and welcome to CHAOSScast Community podcast, where we share use cases and experiences with measuring and improving open source community health. Elevating conversations about metrics, analytics, and software from the Community Health Analytics Open Source Software, or CHAOSS Project for short, to wherever you like to listen. We are super excited to have as our guest, Josh Simmons, who is President of the Open Source Initiative and Ecosystem Strategy Lead at Tidelift. Today, we will be talking with Josh all about Open Source Foundations and the topic of "Hidden Infrastructure," which is very relevant to community health. We learn from Josh about the major challenges he sees to open source foundation sustainability and foundational sustainability in corporations. Also, there is a big discussion with everyone as each of them shares their opinions about the health of projects and foundations and how they would assess it. Download this episode now to find out much more, and don't forget to subscribe for free to this podcast and share this podcast with your friends and colleagues. [00:02:42] Josh explains the topic of "Hidden Infrastructure - The Foundations of Open Source." [00:05:24] Brian asks Josh what he sees as some of the major challenges to open source foundation sustainability. [00:08:43] Daniel wonders where Josh sees the balance between growing as a foundation or being a smaller foundation that is really focused on providing services to the projects. [00:14:10] Josh goes more in depth about foundational sustainability in corporations. [00:24:54] There is discussion with everyone about the health of projects and foundations and how you would assess that. [00:35:35] Daniel brings up development tools, some of which might not be open source that are being used, and there might be changes in the service quality, and he asks Josh if this is an issue we should be aware of or take care of.
[00:38:42] Daniel tells us about how they analyzed software development projects at GrimoireLab, which is part of CHAOSS Project, and what happened. [00:39:55] Find out where you can get in touch with Josh and follow him online. Adds (Picks) of the week: [00:40:29] Georg's picks are the answer to the “Ultimate Question of Life, the Universe, and Everything,” and his birthday coming up August 27th. [00:41:34] Brian's pick is being excited about the OSPO.Zone from the new Open Alliance in the EU. [00:42:22] Daniel's pick is taking a course on Business Anthropology. [00:43:02] Josh's pick is a project called OCEAN + ACROSS. Panelists: Georg Link Brian Proffitt Daniel Izquierdo Guest: Josh Simmons Sponsor: SustainOSS (https://sustainoss.org/) Links: CHAOSS (https://chaoss.community/) CHAOSS Project Twitter (https://twitter.com/chaossproj?lang=en) CHAOSScast Podcast (https://podcast.chaoss.community/) podcast@chaoss.community (mailto:podcast@chaoss.community) Joshua Simmons Website (https://joshsimmons.com/) Josh Simmons Twitter (https://twitter.com/joshsimmons) Josh Simmons Linkedin (https://www.linkedin.com/in/joshsimmons) Checklist for measuring the health of an open source project-Red Hat (https://www.redhat.com/en/resources/open-source-project-health-checklist) GitHub Sponsors (https://github.com/sponsors) Open Collective (https://opencollective.com/) Software Freedom Conservancy (https://sfconservancy.org/) The Apache Software Foundation (https://www.apache.org/) The Linux Foundation (https://www.linuxfoundation.org/) Mozilla (https://foundation.mozilla.org/en/) Greg Kroah-Hartman bans University of Minnesota from Linux development for deliberately buggy patches (ZD Net article) (https://www.zdnet.com/article/greg-kroah-hartman-bans-university-of-minnesota-from-linux-development-for-deliberately-buggy-patches/) Mozilla-Firefox Browser (https://www.mozilla.org/en-US/firefox/new/) Django changes its governance (LWN.net article) (https://lwn.net/Articles/815838/) 
CHAOSS Types of Contributions (https://chaoss.community/metric-types-of-contributions/) The Hitchhiker's Guide to the Galaxy (Movie) (https://www.imdb.com/title/tt0371724/) [The Hitchhiker's Guide to the Galaxy by Douglas Adams](https://www.amazon.com/Hitchhikers-Guide-Galaxy-Douglas-Adams/dp/0345418913/ref=sr11?crid=X6TY2V3GAW0F&keywords=the+hitchhiker%27s+guide+to+the+galaxy&qid=1627667766&sprefix=the+hit%2Caps%2C200&sr=8-1) OSPO.Zone (https://ospo.zone/) Amanda Casari Twitter (for Project OCEAN + ACROSS) (https://twitter.com/amcasari/status/1417836786085208064) Special Guest: Josh Simmons.
Eric and Brandon jump on their soapbox this episode to address the critical issues surrounding open source development, ongoing lifecycle management, securing the supply chain, and monetizing developers' time. Destination Linux Network (https://destinationlinux.network) Sudo Show Website (https://sudo.show) Sponsor: Bitwarden (https://bitwarden.com/dln) Sponsor: Digital Ocean (https://do.co/dln-mongo) Sudo Show Swag (https://sudo.show/swag) Contact Us: DLN Discourse (https://sudo.show/discuss) Email Us! (mailto:contact@sudo.show) Sudo Matrix Room (https://sudo.show/matrix) Elementary AppCenter (https://appcenter.elementary.io) Tidelift: Finding #5: More than half of maintainers have quit or considered quitting, and here's why. (https://blog.tidelift.com/finding-5-more-than-half-of-maintainers-have-quit-or-considered-quitting-and-heres-why) Linux.Com: Measuring the Health of Open Source Communities (Blog) (https://www.linux.com/news/measuring-the-health-of-open-source-communities) MongoDB Switches Up Its Open Source License (https://techcrunch.com/2018/10/16/mongodb-switches-up-its-open-source-license/) Twitter: Brandon's Thread (https://twitter.com/dbrandonjohnson/status/1412608646882549761?s=20) Ars Technica: No, Open Source Audacity Audio Editor Is Not Spyware (https://arstechnica.com/gadgets/2021/07/no-open-source-audacity-audio-editor-is-not-spyware/) Joplin Notes (https://joplinapp.org) Open Collective (https://opencollective.com) Chapters 00:00 Intro 00:42 Welcome 01:30 Sponsor - Digital Ocean 02:34 Sponsor - Bitwarden 04:03 The Open Source Problem 10:47 MongoDB and Elastic Search 15:19 Just Fork It 21:18 Development Isn't Just a Hobby 31:47 How Do We Fix FOSS? 41:07 Wrap Up
Software is everywhere. And the way we obtain that software has not only changed over time, it keeps getting more complex. In Más y Open, within Más allá de la Innovación, we talk about software packages. Today we install software from app stores, it updates automatically, it comes embedded in almost any consumer electronic device, and it transforms and recombines in an almost organic way. What does this mean for cybersecurity and users' rights? Can we rethink our contract with the software we use? When we talk about software packages, we mean the way software is distributed, which involves a platform (for example, an "app store" or a "repository"), an "unboxing" or initial experience with that software, and one or more strategies for keeping that software, and that experience, up to date over time. There are software packages for consumers or end users, which we install on our computers, mobile devices, vehicles, and countless other devices, but also for other kinds of users, such as IT units in organizations of all types, or even other programmers who use that software as dependencies to build other software or technology solutions. If we also consider the explosive growth in the use of free and open source software packages, the volume of certain open source ecosystems such as JavaScript libraries, and all the possible combinations of software found across different use cases, we are looking at a giant, constantly evolving "network." The libraries.io portal alone, a Tidelift project that monitors open source components, lists almost four and a half million packages; and ClearlyDefined holds almost twelve and a half million definitions.
Presented and directed by: José Miguel Parrella Contact: https://www.mypublicinbox.com/MasAlladelaInnovacion Music: https://incompetech.filmmusic.io/ by Kevin McLeod License: Creative Commons (CC BY-NC-SA)
In this episode we review the state of FLOSS software, the things going right and the things going wrong. We discuss whether it's good to be a technocrat, and when that can go wrong, and sigh with disbelief at the Google vs. Australian newspapers fight. Please contact us or support us on Patreon! Links: Tidelift - a new way to pay for open source; How one programmer broke the internet by deleting a tiny piece of code; Economic vs Rhetorical Dominance - Bryan Caplan; Contra Weyl on Technocracy; Weyl's rebuttal; Double Dutch Irish Sandwich (the breakfast of champions); Corporate tax transparency; Blurry photo of a record possibly being broken (I'm not sure any more, looks like it might be faked); Summoning Salt - The finest reviews of speed runs; Big list of coffee bets
As a developer and user of open source code, you interact with software and digital media every day. What is often overlooked are the rights and responsibilities conveyed by the intellectual property that is implicit in all creative works. Software licenses are a complicated legal domain in their own right, and they can often conflict with each other when you factor in the web of dependencies that your project relies on. In this episode Luis Villa, Co-Founder of Tidelift, explains the categories of software licenses, how to select the right one for your project, and what to be aware of when you contribute to someone else's code.