Ubuntu and Fedora are out! And Git turns 20! COSMIC is showing up everywhere, Framework has an impressive AMD-powered 13-inch laptop, and Thunderbird is rolling out the Thundermail service! For tips we have vidir for renaming multiple files at once, pw-mon for monitoring PipeWire, g as a Go replacement for ls, and todist-rs for a TUI take on Todoist. It's a great show, and the notes are at https://bit.ly/4lzTAWt. Thanks for coming! Host: Jonathan Bennett Co-Hosts: Jeff Massie, Ken McDonald, and Rob Campbell Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
Fedora 42 and Ubuntu 25.04 are here. We break down what's new, what stands out, and what we love most about each release. Sponsored By: Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices! 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps. ConfigCat Feature Flags: Manage features and change your software configuration using ConfigCat feature flags, without the need to re-deploy code. Support LINUX Unplugged. Links:
Full Press Conference. Hear from RSL Legend Brian Dunseth, LHM CEO Steve Starks, Partial Shareholder David Blitzer, LHM Board Chairman Steve Miller, MLS Commissioner Don Garber, NWSL Commissioner Jessica Berman, and Utah Governor Spencer Cox.
After a very eventful week, Samsung apologizes to users in its home country and announces, via Members, the activation of One UI 7 and Android 15, which should kick off the worldwide rollout. Also: Foxconn now assembles the iPhone 16e in Brazil; Canonical releases Ubuntu 25.04 Plucky Puffin, with support for ARM chips; and, as every day, we ask for your comments. The Trump administration considers expanding the DeepSeek ban https://www.nytimes.com/2025/04/16/technology/nvidia-deepseek-china-ai-trump.html Foxconn now assembles the iPhone 16e in Brazil https://macmagazine.com.br/post/2025/04/16/brasil-faz-parte-da-linha-de-producao-do-iphone-16e/ Samsung Members officially details the One UI 7.0 rollout https://oneuinews.com/samsung-announces-one-ui-7-rollout-timeline-for-india/ Instagram now lets you combine your Reels recommendations with your friends' https://www.engadget.com/social-media/instagram-now-lets-you-combine-your-reels-recommendations-with-friends-160023003.html? Canonical releases Ubuntu 25.04 Plucky Puffin, with support for ARM chips https://canonical.com/blog/canonical-releases-ubuntu-25-04-plucky-puffin WE LOOK FORWARD TO YOUR COMMENTS...
SPONSOR LINK Mailtrap (https://mailtrap.io/?utm_source=podcast&utm_medium=episode&utm_campaign=coder_radio_1) Coder's Socials Mike on X (https://x.com/dominucco) Mike on BlueSky (https://bsky.app/profile/dominucco.bsky.social) Mike's Blog (https://dominickm.com) Coder on X (https://x.com/coderradioshow) Coder on BlueSky (https://bsky.app/profile/coderradio.bsky.social) Show Discord (https://discord.gg/k8e7gKUpEp) Alice (https://alice.dev) Alice Forms (https://alice.dev/forms) TMB Earth Day 2025 Competition (https://dominickm.com/earth-day-25-competition/)
This show has been flagged as Clean by the host.

Running a private Ubuntu Mirror

It is possible to set up a local server that keeps a synchronized copy of all the Ubuntu packages, allowing packages to be installed later on any local machine even without an internet connection. To do this, a script called apt-mirror can be run on the server, for example nightly from cron:

crontab:
0 1 * * * /usr/local/bin/apt-mirror

The location of the mirror is specified in apt-mirror.conf:

/etc/apt/apt-mirror.conf:
set mirror_path /disk/ftp/Mirror
set cleanup_freq daily
set mirror_verbose yes

The origin servers are specified in mirror.list. It is possible to choose which architectures and Ubuntu releases to fetch, as well as whether to fetch just the binary packages or the sources too.

/etc/apt/mirror.list:
deb http://archive.ubuntu.com/ubuntu noble main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu noble-security main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu noble-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu noble-backports main restricted universe multiverse
deb-i386 http://archive.ubuntu.com/ubuntu noble main restricted universe multiverse
deb-i386 http://security.ubuntu.com/ubuntu noble-security main restricted universe multiverse
deb-i386 http://archive.ubuntu.com/ubuntu noble-updates main restricted universe multiverse
deb-i386 http://archive.ubuntu.com/ubuntu noble-backports main restricted universe multiverse
#deb-src http://archive.ubuntu.com/ubuntu noble main restricted universe multiverse
#deb-src http://security.ubuntu.com/ubuntu noble-security main restricted universe multiverse
#deb-src http://archive.ubuntu.com/ubuntu noble-updates main restricted universe multiverse
#deb-src http://archive.ubuntu.com/ubuntu noble-backports main restricted universe multiverse
clean http://archive.ubuntu.com/ubuntu

The mirrored packages can be served to local machines in a number of ways; I am using vsftpd to serve the files via FTP.

/etc/vsftpd.conf:
anonymous_enable=YES
anon_upload_enable=YES
anon_mkdir_write_enable=YES
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
listen=YES
pam_service_name=vsftpd
seccomp_sandbox=NO
isolate_network=NO
anon_root=/disk/ftp/
no_anon_password=YES
hide_ids=YES
pasv_min_port=40000
pasv_max_port=50000
write_enable=YES

On local machines, the mirror on the server can then be specified as the source for apt to use to retrieve packages.

/etc/apt/sources.list.d/ubuntu.sources:
Types: deb
URIs: ftp://server/Mirror/mirror/archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-backports
Components: main universe restricted multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

## Ubuntu security updates. Aside from URIs and Suites,
## this should mirror your choices in the previous section.
Types: deb
URIs: ftp://server/Mirror/mirror/security.ubuntu.com/ubuntu
Suites: noble-security
Components: main universe restricted multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

Provide feedback on this episode.
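As a rough sketch of the day-to-day workflow, a one-off manual sync plus a sanity check looks like the following. The path is an assumption taken from the example config above (set mirror_path /disk/ftp/Mirror); adjust it to your own setup, and note the block is guarded so it does nothing harmful on a machine without apt-mirror installed.

```shell
# One-off manual sync plus a sanity check for the mirror described above.
# MIRROR follows the example config; change it to match your mirror_path.
MIRROR=/disk/ftp/Mirror
if command -v apt-mirror >/dev/null 2>&1; then
    apt-mirror            # reads /etc/apt/mirror.list by default
    du -sh "$MIRROR"      # a full amd64+i386 noble mirror runs to hundreds of GB
else
    echo "apt-mirror not available on this machine"
fi
```

Once the first sync completes, the easiest end-to-end test is to point a client's ubuntu.sources at the mirror and run apt update with the server's uplink disconnected; if the package lists refresh, the whole chain works.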
Fedora is about to ship 42, Ubuntu is gearing up for 25.04, and we talk about a head-to-head performance comparison between the two. LMDE is working on OEM mode, OpenSSH pushes version 10, and the guys make virtual swap make sense. For tips there's cheat, sponge, and ranger, and you can find the show notes at https://bit.ly/4j2qgGg Enjoy! Host: Jonathan Bennett Co-Hosts: Rob Campbell and Jeff Massie Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
In this episode, Craig Zelizer talks with Ilco van der Linde, a Dutch social entrepreneur, storyteller, and founder of multiple global movements including Dance for Life, MasterPeace, and Ocean Love. From organizing some of the largest public festivals in the Netherlands to driving emotional, community-rooted innovation for ocean protection, Ilco shares what it takes to build movements that not only spark change but sustain it. This conversation explores personal and planetary transformation, how movements scale, and why love—and not fear—is often the most powerful catalyst for action. Whether you're leading a social enterprise, working in climate, or figuring out how to live with purpose, this episode offers hard-earned wisdom, strategy, and inspiration. Why take a listen Ocean Love Innovation Awards: What it means to build the most inclusive global initiative to spark solutions for our oceans—open to individuals and teams from any background, anywhere in the world Turning pain into purpose: How Ilco's experience as a diver witnessing coral bleaching and plastic-filled oceans led him to launch a new global platform Sustainable change requires systems thinking: From individual behavior to corporate accountability and government policy, what levers really shift the needle Practical optimism: How to lead without burning out, stay grounded in values, and work in ways that regenerate both the planet and yourself Building social movements with soul: Reflections on 40 years of activism and organizing—from HIV prevention to global peacebuilding to climate Advice for impact careers: Ilco shares tips for launching bold projects, building teams, attracting funding, and learning to live with financial and emotional uncertainty Resources from the podcast Ocean Love Innovation Award Open to applicants worldwide until September 30, 2025. Seeking creative, actionable ideas that protect oceans, rivers, biodiversity, or marine life. 
More info and to apply: https://oceanlove.news Ocean Love on Instagram For visual stories, campaign updates, and community calls: https://www.instagram.com/oceanlovenews Mandela House, Amsterdam A hub for social change, community, and cultural programming built on the values of Ubuntu. Visitors welcome. Website: https://mandelahuisje.nl Dance for Life Global youth movement using music and dance to promote sexual and reproductive health. https://www.dance4life.com MasterPeace Creative peacebuilding movement in over 50 countries connecting young people through music, art, and dialogue. https://masterpeace.org WaterBear Network Free streaming platform focused on environmental films and documentaries. https://waterbear.com Sea Shepherd Mentioned during the episode as a partner in ocean protection and direct action. https://seashepherd.org Project Drawdown Cited as a source for climate solution frameworks. https://drawdown.org Carbon Collective Example of climate-aligned investing referenced in the conversation. https://www.carboncollective.co Global Alliance for Banking on Values For those interested in ethical banking and investment. https://www.gabv.org More from PCDN Subscribe to the PCDN Career Digest Daily or weekly, human-curated global opportunities—jobs, fellowships, events, funding, and more for social impact professionals. https://pcdn.global/subscribe Listen to the Social Change Career Podcast Over 180 episodes with social entrepreneurs, changemakers, and innovators from 30+ countries. https://pcdn.global/listen Subscribe to the AI for Impact Newsletter Explore ethical AI tools, impact jobs, funding, and stories at the intersection of tech and purpose. https://impactai.beehiiv.com BIO Ilco van der Linde is a Dutch social entrepreneur, storyteller, and movement builder. He is the founder of the Bevrijdingsfestivals (Liberation Day Festivals), which attract over 1 million annual visitors in the Netherlands. 
He co-founded dance4life, active in 30 countries, and MasterPeace, a global peacebuilding initiative in 50 countries. He also founded the Mandela House in Amsterdam, a hub for community and social impact; authored Be a Nelson (Lemniscaat); and writes for The Optimist magazine and National Geographic Travel. His latest initiative, OceanLove, along with the OceanLove Innovation Award, is a global platform mobilizing emotional connection and action to protect the ocean. The work is rooted in one core belief: people protect what they love, and lasting change starts from the heart.
Brandon Liu is an open source developer and creator of the Protomaps basemap project. We talk about how static maps help developers build sites that last, the PMTiles file format, the role of OpenStreetMap, and his experience funding and running an open source project full time. Protomaps Protomaps PMTiles (File format used by Protomaps) Self-hosted slippy maps, for novices (like me) Why Deploy Protomaps on a CDN User examples Flickr Pinball Map Toilet Map Related projects OpenStreetMap (Dataset protomaps is based on) Mapzen (Former company that released details on what to display based on zoom levels) Mapbox GL JS (Mapbox developed source available map rendering library) MapLibre GL JS (Open source fork of Mapbox GL JS) Other links HTTP range requests (MDN) Hilbert curve Transcript You can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: I'm talking to Brandon Liu. He's the creator of Protomaps, which is a way to easily create and host your own maps. Let's get into it. [00:00:09] Brandon: Hey, so thanks for having me on the podcast. So I'm Brandon. I work on an open source project called Protomaps. What it really is, is if you're a front end developer and you ever wanted to put maps on a website or on a mobile app, then Protomaps is sort of an open source solution for doing that that I hope is something that's way easier to use than, um, a lot of other open source projects. Why not just use Google Maps? [00:00:36] Jeremy: A lot of people are gonna be familiar with Google Maps. Why should they worry about whether something's open source? Why shouldn't they just go and use the Google maps API? [00:00:47] Brandon: So Google Maps is like an awesome thing it's an awesome product. Probably one of the best tech products ever right? And just to have a map that tells you what restaurants are open and something that I use like all the time especially like when you're traveling it has all that data. 
And the most amazing part is that it's free for consumers but it's not necessarily free for developers. Like if you wanted to embed that map onto your website or app, that usually has an API cost, which still has a free tier and is affordable. But one motivation, one basic reason to use open source is if you have some project that doesn't really fit into that pricing model. You know, like where you have to pay the cost of Google Maps, you have a side project, a nonprofit, that's one reason. But there's lots of other reasons related to flexibility or customization where you might want to use open source instead. Protomaps examples [00:01:49] Jeremy: Can you give some examples where people have used Protomaps and where that made sense for them? [00:01:56] Brandon: I follow a lot of the use cases and I also don't know about a lot of them because I don't have an API where I can track a hundred percent of the users. Some of them use the hosted version, but I would say most of them probably use it on their own infrastructure. One of the cool projects I've been seeing is called Toilet Map. And what Toilet Map is, is if you're in the UK and you want to find a public restroom, then it maps out, sort of crowdsourced, all of the public restrooms. And that's important for like a lot of people if they have health issues, they need to find that information. And just a lot of different projects in the same vein. There's another one called Pinball Map which is sort of a hobby project to find all the pinball machines in the world. And they wanted to have a customized map that fit in with their theme of pinball. So these sorts of really cool indie projects are the ones I'm most excited about. Basemaps vs Overlays [00:02:57] Jeremy: And if we talk about, like the pinball map as an example, there's this concept of a basemap and then there's the things that you lay on top of it. What is a basemap, and then are the pinball locations part of it, or is that something separate?
[00:03:12] Brandon: It's usually something separate. The example I usually use is if you go to a real estate site, like Zillow, you'll open up the map of Seattle and it has a bunch of pins showing all the houses, and then it has some information beneath it. That information beneath it is like labels telling, this neighborhood is Capitol Hill, or there is a park here. But all that information is common to a lot of use cases and it's not specific to real estate. So I think usually that's the distinction people use in the industry between like a base map versus your overlay. The overlay is like the data for your product or your company while the base map is something you could get from Google or from Protomaps or from Apple or from Mapbox that kind of thing. PMTiles for hosting the basemap and overlays [00:03:58] Jeremy: And so Protomaps in particular is responsible for the base map, and that information includes things like the streets and the locations of landmarks and things like that. Where is all that information coming from? [00:04:12] Brandon: So the base map information comes from a project called OpenStreetMap. And I would also, point out that for Protomaps as sort of an ecosystem. You can also put your overlay data into a format called PMTiles, which is sort of the core of what Protomaps is. So it can really do both. It can transform your data into the PMTiles format which you can host and you can also host the base map. So you kind of have both of those sides of the product in one solution. [00:04:43] Jeremy: And so when you say you have both are you saying that the PMTiles file can have, the base map in one file and then you would have the data you're laying on top in another file? Or what are you describing there? [00:04:57] Brandon: That's usually how I recommend to do it. Oftentimes there'll be sort of like, a really big basemap 'cause it has all of that data about like where the rivers are. 
Or while, if you want to put your map of toilets or park benches or pickleball courts on top, that's another file. But those are all just like assets you can move around like JSON or CSV files. Statically Hosted [00:05:19] Jeremy: And I think one of the things you mentioned was that your goal was to make Protomaps or the, the use of these PMTiles files easy to use. What does that look like for, for a developer? I wanna host a map. What do I actually need to, to put on my servers? [00:05:38] Brandon: So my usual pitch is that basically if you know how to use S3 or cloud storage, that you know how to deploy a map. And that, I think is the main sort of differentiation from most open source projects. Like a lot of them, they call themselves like, like some sort of self-hosted solution. But I've actually avoided using the term self-hosted because I think in most cases that implies a lot of complexity. Like you have to log into a Linux server or you have to use Kubernetes or some sort of Docker thing. What I really want to emphasize is the idea that, for Protomaps, it's self-hosted in the same way like CSS is self-hosted. So you don't really need a service from Amazon to host the JSON files or CSV files. It's really just a static file. [00:06:32] Jeremy: When you say static file that means you could use any static web host to host your HTML file, your JavaScript that actually renders the map. And then you have your PMTiles files, and you're not running a process or anything, you're just putting your files on a static file host. [00:06:50] Brandon: Right. So I think if you're a developer, you can also argue like a static file server is a server. It's you know, it's the cloud, it's just someone else's computer. It's really just nginx under the hood. But I think static storage is sort of special. If you look at things like static site generators, like Jekyll or Hugo, they're really popular because they're a commodity or like the storage is a commodity. 
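The "it's really just a static file" point above can be sketched concretely. This is a stand-alone demo, not Protomaps code: the directory and file name are made up for illustration, and the file is a stand-in rather than a real archive. What is accurate is the mechanism: real PMTiles v3 archives begin with the ASCII magic "PMTiles", and renderers read them with exactly this kind of byte-range request, which any dumb static host (S3, nginx, most CDNs) supports over HTTP via the Range header.

```shell
# Write a stand-in "archive", then read just its first bytes with a
# byte-range request, the way a map renderer reads a real PMTiles file.
# file:// is used so the demo is self-contained; production static hosts
# honor the same Range: header over HTTP.
mkdir -p /tmp/bucket
printf 'PMTiles v3 stand-in, not a real archive' > /tmp/bucket/map.pmtiles
# Fetch only bytes 0-6. A real client uses ranges like this to pull tile
# offsets out of a multi-gigabyte archive without downloading all of it.
curl -s -r 0-6 file:///tmp/bucket/map.pmtiles
echo
```

The same trick is why no server process is needed: the intelligence lives in the client, and the host only has to answer range reads.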
And you can take your blog, make it a Jekyll blog, hosted on S3. One day, Amazon's like, we're charging three times as much so you can move it to a different cloud provider. And that's all vendor neutral. So I think that's really the special thing about static storage as a primitive on the web. Why running servers is a problem for resilience [00:07:36] Jeremy: Was there a prior experience you had? Like you've worked with maps for a very long time. Were there particular difficulties you had where you said I just gotta have something that can be statically hosted? [00:07:50] Brandon: That's sort of exactly why I got into this. I've been working sort of in and around the map space for over a decade, and Protomaps is really like me trying to solve the same problem I've had over and over again in the past, just like once and forever right? Because like once this problem is solved, like I don't need to deal with it again in the future. So I've worked at a couple of different companies before, mostly as a contractor, for like a humanitarian nonprofit for a design company doing things like, web applications to visualize climate change. Or for even like museums, like digital signage for museums. And oftentimes they had some sort of data visualization component, but always sort of the challenge of how to like, store and also distribute like that data was something that there wasn't really great open source solutions. So just for map data, that's really what motivated that design for Protomaps. [00:08:55] Jeremy: And in those, those projects in the past, were those things where you had to run your own server, run your own database, things like that? [00:09:04] Brandon: Yeah. And oftentimes we did, we would spin up an EC2 instance, for maybe one client and then we would have to host this server serving map data forever. 
Maybe the client goes away, or I guess it's good for business if you can sign some sort of like long-term support for that client saying, Hey, you know, like we're done with a project, but you can pay us to maintain the EC2 server for the next 10 years. And that's attractive, but it's also sort of a pain, because usually what happens is if people are given the choice, like a developer between like either I can manage the server on EC2 or on Rackspace or Hetzner or whatever, or I can go pay a SaaS to do it. In most cases, businesses will choose to pay the SaaS. So that's really like what creates a sort of lock-in is this preference for like, so I have this choice between like running the server or paying the SaaS. Like businesses will almost always go and pay the SaaS. [00:10:05] Jeremy: Yeah. And in this case, you either find some kind of free hosting or low-cost hosting just to host your files and you upload the files and then you're good from there. You don't need to maintain anything. [00:10:18] Brandon: Exactly, and that's really the ideal use case. So I have some users, these climate science consulting agencies, and then they might have like a one-off project where they have to generate the data once, but instead of having to maintain this server for the lifetime of that project, they just have a file on S3 and like, who cares? If that costs a couple dollars a month to run, that's fine, but it's not like S3 is gonna be deprecated, like it's gonna be on an insecure version of Ubuntu or something. So that's really the ideal set of constraints for using Protomaps. [00:10:58] Jeremy: Yeah. Something this also makes me think about is, is like the resilience of sites like remaining online, because I interviewed Kyle Drake, he runs Neocities, which is like a modern version of GeoCities.
And if I remember correctly, he was mentioning how a lot of old websites from that time, if they were running a server backend, like they were running PHP or something like that, if you were to try to go to those sites, now they're like pretty much all dead because there needed to be someone dedicated to running a Linux server, making sure things were patched and so on and so forth. But for static sites, like the ones that used to be hosted on GeoCities, you can go to the internet archive or other websites and they were just files, right? You can bring 'em right back up, and if anybody just puts 'em on a web server, then you're good. They're still alive. Case study of newsrooms preferring static hosting [00:11:53] Brandon: Yeah, exactly. One place that's kind of surprising but makes sense where this comes up, is for newspapers actually. One of the users of Protomaps is the Washington Post. And the reason they use it is not necessarily because they don't want to pay for a SaaS like Google, but because if they make an interactive story, they have to guarantee that it still works in a couple of years. And that's like a policy decision from like the editorial board, which is like, so you can't write an article if people can't view it in five years. But if your like interactive data story is reliant on a third party API and that third party API becomes deprecated, or it changes the pricing or it, you know, it gets acquired, then your journalism story is not gonna work anymore. So I have seen really good uptake among local newsrooms and even big ones to use things like Protomaps just because it makes sense for the requirements. Working on Protomaps as an open source project for five years [00:12:49] Jeremy: How long have you been working on Protomaps and the parts that it's made up of such as PMTiles? [00:12:58] Brandon: I've been working on it for about five years, maybe a little more than that. It's sort of my pandemic era project.
But the PMTiles part, which is really the heart of it only came in about halfway. Why not make a SaaS? [00:13:13] Brandon: So honestly, like when I first started it, I thought it was gonna be another SaaS and then I looked at it and looked at what the environment was around it. And I'm like, uh, so I don't really think I wanna do that. [00:13:24] Jeremy: When, when you say you looked at the environment around it what do you mean? Why did you decide not to make it a SaaS? [00:13:31] Brandon: Because there already is a lot of SaaS out there. And I think the opportunity of making something that is unique in terms of those use cases, like I mentioned like newsrooms, was clear. Like it was clear that there was some other solution, that could be built that would fit these needs better while if it was a SaaS, there are plenty of those out there. And I don't necessarily think that they're well differentiated. A lot of them all use OpenStreetMap data. And it seems like they mainly compete on price. It's like who can build the best three column pricing model. And then once you do that, you need to build like billing and metrics and authentication and like those problems don't really interest me. So I think, although I acknowledge sort of the indie hacker ethos now is to build a SaaS product with a monthly subscription, that's something I very much chose not to do, even though it is for sure like the best way to build a business. [00:14:29] Jeremy: Yeah, I mean, I think a lot of people can appreciate that perspective because it's, it's almost like we have SaaS overload, right? Where you have so many little bills for your project where you're like, another $5 a month, another $10 a month, or if you're a business, right? Those, you add a bunch of zeros and at some point it's just how many of these are we gonna stack on here? [00:14:53] Brandon: Yeah. And honestly. So I really think like as programmers, we're not really like great at choosing how to spend money like a $10 SaaS. 
That's like nothing. You know? So I can go to Starbucks and I can buy a pumpkin spice latte, and that's like $10 basically now, right? And it's like I'm able to make that consumer choice in like an instant just to spend money on that. But then if you're like, oh, like spend $10 on a SaaS that somebody put a lot of work into, then you're like, oh, that's too expensive. I could just do it myself. So I'm someone that also subscribes to a lot of SaaS products, and I think for a lot of things it's a great fit. Many open source SaaS projects are not easy to self-host [00:15:37] Brandon: But there's always this tension between an open source project that you might be able to run yourself and a SaaS. And I think a lot of projects are at different parts of the spectrum. But for Protomaps, it's very much like I'm trying to move maps to being something that is so easy to run yourself that anyone can do it. [00:16:00] Jeremy: Yeah, and I think you can really see it with, there's a few SaaS projects that are successful and they're open source, but then you go to look at the self-hosting instructions and it's either really difficult to find and you find it, and then the instructions maybe don't work, or it's really complicated. So I think doing the opposite with Protomaps. As a user, I'm sure we're all appreciative, but I wonder in terms of trying to make money, if that's difficult. [00:16:30] Brandon: No, for sure. It is not like a good way to make money because I think like the ideal situation for an open source project that is open that wants to make money is the product itself is fundamentally complicated to where people are scared to run it themselves. Like a good example I can think of is like Supabase. Supabase is sort of like a platform as a service based on Postgres. And if you wanted to run it yourself, well you need to run Postgres and you need to handle backups and authentication and logging, and that stuff all needs to work and be production ready.
So I think a lot of people, like they don't trust themselves to run database backups correctly. 'cause if you get it wrong once, then you're kind of screwed. So I think that fundamental aspect of the product, like a database is something that is very, very ripe for being a SaaS while still being open source because it's fundamentally hard to run. Another one I can think of is like Tailscale, which is like a VPN that works end to end. That's something where, you know, it has this networking complexity where a lot of developers don't wanna deal with that. So they'd happily pay for Tailscale as a service. There are a lot of products or open source projects that eventually end up just changing to becoming like a hosted service. Businesses going from open source to closed or restricted licenses [00:17:58] Brandon: But then in that situation why would they keep it open source, right? Like, if it's easy to run yourself well, doesn't that sort of cannibalize their business model? And I think that's really the tension overall in these open source companies. So you saw it happen to things like Elasticsearch, to things like Terraform, where they eventually change the license to one that makes it difficult for other companies to compete with them. [00:18:23] Jeremy: Yeah, I mean there's been a number of cases like that. I mean, specifically within the mapping community, one I can think of was Mapbox. They have Mapbox GL, which was a JavaScript client to visualize maps, and they moved from, I forget which license they picked, but they moved to a much more restrictive license. I wonder what your thoughts are on something that releases as open source, but then becomes something maybe a little more muddy. [00:18:55] Brandon: Yeah, I think it totally makes sense because if you look at their business and their funding, it seems like for Mapbox, I haven't used it in a while, but my understanding is like a lot of their business now is car companies and doing in-dash navigation.
And that is probably way better of a business than trying to serve like people making maps of toilets. And I think sort of the beauty of it is that, so Mapbox, the story is they had a JavaScript renderer called Mapbox GL JS. And they changed that to a source available license a couple years ago. And there's a fork of it that I'm sort of involved in called MapLibre GL. But I think the cool part is Mapbox paid employees for years, probably millions of dollars in total to work on this thing and just gave it away for free. Right? So everyone can benefit from that work they did. It's not like that code went away, like once they changed the license. Well, the old version has been forked. It's going its own way now. It's quite different than the new version of Mapbox, but I think it's extremely generous that they're able to pay people for years, you know, like a competitive salary and just give that away. [00:20:10] Jeremy: Yeah, so we should maybe look at it as, it was a gift while it was open source, and they've given it to the community and they're continuing on their own path, but at least the community running MapLibre, they can run with it, right? It's not like it just disappeared. [00:20:29] Brandon: Yeah, exactly. And that is something that I use for Protomaps quite extensively. Like it's the primary way of showing maps on the web and I've been trying to like work on some enhancements to it to have like better internationalization, for if you are in like South Asia, where languages don't show correctly. So I think it is being taken in a new direction. And I think like sort of the combination of Protomaps and MapLibre, it addresses a lot of use cases, like I mentioned earlier with like these like hobby projects, indie projects that are almost certainly not interesting to someone like Mapbox or Google as a business. But I'm happy to support as a small business myself.
Financially supporting open source work (GitHub sponsors, closed source, contracts)

[00:21:12] Jeremy: In my previous interview with Tom, one of the main things he mentioned was that creating a mapping business is incredibly difficult, and he said he probably wouldn't do it again. So in your case, you're building Protomaps, which you've admitted is easy to self-host. So there's not a whole lot of incentive for people to pay you. How is that working out for you? How are you supporting yourself?

[00:21:40] Brandon: There's a couple of strategies that I've tried and oftentimes failed at. Just to go down the list: so I do have GitHub sponsors, so I do have a hosted version of Protomaps you can use if you don't want to bother copying a big file around. But the way I do the billing for that is through GitHub sponsors. If you want to use this thing I provide, then just be a sponsor. And that definitely pays for itself, like the cost of running it. And that's great. GitHub sponsors is so easy to set up. It just removes you having to deal with Stripe or something, 'cause a lot of people, their credit card information is already in GitHub. GitHub sponsors I think is awesome if you want to cover costs for a project. But I think very few people are able to make that work at, like, a salary-job level. It's sort of like Twitch streaming, you know, there's a handful of people that are full-time streamers, and then you look down the list on Twitch and it's like a lot of people that have like 10 viewers. But some of the other things I've tried: I actually started out publishing the base map as a closed source thing, where I would sell sort of like a data package. Instead of being a SaaS, I'd be like, here's a one-time download of the premium data and you can buy it. And quite a few people bought it. I just priced it at like $500 for this thing. And I thought that was an interesting experiment.
The main reason it's interesting is because the people that it attracts, in terms of being curious about your products, are all people willing to pay money. While if you start out with everything being open source, then the people that are gonna try it are only the people that want to get something for free. So what I discovered is actually, once you transition that thing from closed source to open source, a lot of the people that used to pay you money will still keep paying you money, because it wasn't necessarily that the closed source thing was why they wanted to pay. They just valued the thought you've put into it, your expertise, for example. So I think that is one thing that I tried at the beginning: just start out closed source, proprietary, then make it open source. That's interesting to people. If you go the other way, people are really mad: if you start out with something open source and then later on you're like, oh, it's some other license, then people are like, that's so rotten. But I think doing it the other way is quite valuable in terms of being able to find an audience.

[00:24:29] Jeremy: And when you said it was closed source and paid, then open source, do you still sell those map exports?

[00:24:39] Brandon: I don't right now. It's something that I might do in the future, you know, like have small customizations of the data that are available for a fee. Still, the core OpenStreetMap-based map that's like a hundred gigs, you can just download. And that'll always just be a free download, just because that's already out there. All the source code to build it is open source. So even if I said, oh, you have to pay for it, then someone else can just do it, right? So there's no real reason to make that some sort of paywall thing.
But I think overall, if the project is gonna survive in the long term, it's important that I'd ideally be able to grow a team, have a small group of people that can dedicate the time to growing the project in the long term. But I'm still trying to figure that out right now.

[00:25:34] Jeremy: And when you mentioned that when you went from closed to open and people were still paying you, you don't sell a product anymore. What were they paying for?

[00:25:45] Brandon: So I have some contracts with companies, basically, like if they need a feature or they need a customization in this way, then I am very open to those. And I sort of set it up to make it clear from the beginning that this is not just a free thing on GitHub; this is something that you could pay for if you need help with it, if you need support, if you want it. I'm also a little cagey about the word support, because I think it sounds a little bit too wishy-washy. Pretty much, if you need access to the developers of an open source project, I think that's something that businesses are willing to pay for. And I think making that clear to potential users is a challenge. But I think that is one way that you might be able to make a living out of open source.

[00:26:35] Jeremy: And I think you said you'd been working on it for about five years. Has that mostly been full time?

[00:26:42] Brandon: It's been on and off. It's sort of my pandemic-era project. But I've spent a lot of time, most of my time, working on the open source project at this point. So I have done some things that were more just like I'm doing a customization or a private deployment for some client. But that's been a minority of the time. Yeah.

[00:27:03] Jeremy: It's still impressive to have an open source project that is easy to self-host and yet is still able to support you working on it full time.
I think a lot of people might make the assumption that there's nothing to sell if something is easy to use. But this sort of sounds like a counterpoint to that.

[00:27:25] Brandon: I think I'd like it to be. So when you come back to the point of it being easy to self-host, well, again, I think about it as a primitive of the web. Like for example, if you wanted to start a business today as hosted CSS files, you know, where you upload your CSS and then you get developers to pay you a monthly subscription for how many times they fetched a CSS file. Well, I think most developers would be like, that's stupid, because it's just an open specification; you just upload a static file. And really my goal is to make Protomaps the same way, where it's obvious that there's not really some sort of lock-in or some sort of secret sauce in the server that does this thing.

How PMTiles works and building a primitive of the web

[00:28:16] Brandon: If you look at video for example, a lot of the tech for how Protomaps and PMTiles works is based on parts of the HTTP spec that were made for video. And 20 years ago, if you wanted to host a video on the web, you had to have like a RealPlayer license or Flash. So you had to go license some server software from RealMedia or from Macromedia so you could stream video to a browser plugin. But now in HTML you can just embed a video file. And no one's like, oh well, I need to go pay for my video serving license. I mean, there is such a thing, like YouTube doesn't really use that, for DRM reasons, but people just have the assumption that video is a primitive on the web. So if we're able to make maps sort of that same way, like a primitive on the web, then there isn't really some obvious business or licensing model behind how that works. Just because it's a thing and it helps a lot of people do their jobs, and people are happy using it. So why bother?
[00:29:26] Jeremy: You mentioned that it uses tech that was used for streaming video. What tech specifically is it?

[00:29:34] Brandon: So it is byte range serving. So when you open a video file on the web, let's say it's like a 100 megabyte video, you don't have to download the entire video before it starts playing. It streams parts out of the file based on, I mean, it's based on the frames in the video. So it can start streaming immediately, because it's organized in a way to where the first few frames are at the beginning. And what PMTiles really is, is it's just like a video, but in space instead of time. So it's organized in a way where the zoomed-out views are at the beginning and the most zoomed-in views are at the end. So when you're panning or zooming in the map, all you're really doing is fetching byte ranges out of that file, the same way as a video. But it's organized in this tiled way on a space-filling curve. It's a little bit complicated how it works internally, and I think it's kind of cool, but that's sort of an implementation detail.

[00:30:35] Jeremy: And to the person deploying it, it just looks like a single file.

[00:30:40] Brandon: Exactly, in the same way an MP3 audio file is, or a JSON file is.

[00:30:47] Jeremy: So with a video, I can sort of see how, as someone seeks through the video, they start at the beginning and then they go to the middle if they wanna see the middle. For a map, as somebody scrolls around the map, are you seeking all over the file, or does the way it's structured have a little less chaos?

[00:31:09] Brandon: It's structured. And that's kind of the main technical challenge behind building PMTiles: you have to be sort of clever so you're not spraying the reads everywhere. So it uses something called a Hilbert curve, which is a mathematical concept of a space-filling curve, where it's one continuous curve that essentially lets you break 2D space into 1D space.
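The byte-range mechanism described here can be sketched in a few lines of TypeScript. This is not the actual PMTiles client code; the tile's offset and length, which a real client would read out of the archive's directory, are assumed values for illustration.

```typescript
// Build the header for an HTTP byte-range request. HTTP ranges are
// inclusive on both ends, so a 50-byte read starting at offset 100
// asks for "bytes=100-149".
function rangeHeader(offset: number, length: number): string {
  return `bytes=${offset}-${offset + length - 1}`;
}

// Fetch just one tile's bytes out of a single large archive file,
// the same way a browser seeks inside a video. A server that
// supports ranges answers with 206 Partial Content.
async function fetchTileBytes(
  url: string,
  offset: number,
  length: number
): Promise<ArrayBuffer> {
  const res = await fetch(url, {
    headers: { Range: rangeHeader(offset, length) },
  });
  if (res.status !== 206) {
    throw new Error(`expected 206 Partial Content, got ${res.status}`);
  }
  return res.arrayBuffer();
}
```

The key point is that the server needs no map-specific logic at all: any static file host that honors `Range` headers, S3 included, can serve tiles this way.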
So if you've seen some maps of IP space, it uses this crazy looking curve that hits all the points in one continuous line. And that's the same concept behind PMTiles: if you're looking at one part of the world, you're sort of guaranteed that all of those parts you're looking at are quite close to each other, and the data you have to transfer is quite minimal compared to if you just had it at random.

[00:32:02] Jeremy: How big do the files get? If I have a PMTiles of the entire world, what kind of size am I looking at?

[00:32:10] Brandon: Right now, the default one I distribute is 128 gigabytes, so it's quite sizable, although you can slice parts out of it remotely. So if you just wanted California, or just wanted LA, or just wanted only a couple of zoom levels, like from zero to 10 instead of zero to 15, there is a command line tool that's also called pmtiles that lets you do that.

Issues with CDNs and range queries

[00:32:35] Jeremy: And when you're working with files of this size, I mean, let's say I am working with a CDN in front of my application. I'm not typically accustomed to hosting something that's that large, and where you're seeking all over the file. Is that ever an issue, or is that something that's just taken care of by the browser and by the hosts?

[00:32:58] Brandon: That is an issue actually, so a lot of CDNs don't deal with it correctly. And my recommendation is, there is a kind of proxy server, or like a serverless proxy thing, that I wrote that runs on Cloudflare Workers or on Docker that lets you proxy those range requests into a normal URL, and then that is like a hundred percent CDN compatible. So I would say a lot of the big commercial installations of this thing use that, because it makes more practical sense. It's also faster. But the idea is that this solution sort of scales up and scales down.
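The locality property being described can be seen in a tiny sketch of the standard Hilbert curve index-to-coordinate conversion. This is the textbook algorithm, not the exact code inside PMTiles, which also has to handle multiple zoom levels within one archive.

```typescript
// Convert a position d along the Hilbert curve into (x, y) cell
// coordinates, for a grid of n x n cells (n must be a power of two).
// Classic iterative algorithm: at each scale s it reads two bits of
// d, rotates/flips the quadrant, and accumulates x and y.
function hilbertD2XY(n: number, d: number): [number, number] {
  let x = 0, y = 0, t = d;
  for (let s = 1; s < n; s *= 2) {
    const rx = 1 & Math.floor(t / 2);
    const ry = 1 & (t ^ rx);
    // Rotate the quadrant so the sub-curve is oriented correctly.
    if (ry === 0) {
      if (rx === 1) {
        x = s - 1 - x;
        y = s - 1 - y;
      }
      [x, y] = [y, x];
    }
    x += s * rx;
    y += s * ry;
    t = Math.floor(t / 4);
  }
  return [x, y];
}
```

Walking d from 0 upward visits every cell exactly once, and each step moves to an adjacent cell, which is why tiles that are close on the map end up close together in the file.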
If you wanted to host just your city in like a 10 megabyte file, well, you can just put that into GitHub Pages and you don't have to worry about it. If you want to have a global map for your website that serves a ton of traffic, then you probably want a little bit more sophisticated of a solution. It still does not require you to run a Linux server, but it might require you to use Lambda, or Lambda in conjunction with a CDN.

[00:34:09] Jeremy: Yeah. And that sort of ties into what you were saying at the beginning, where if you can host on something like Cloudflare Workers or Lambda, there's less time you have to spend keeping these things running.

[00:34:26] Brandon: Yeah, exactly. And I think also the Lambda or Cloudflare Workers solution is not perfect. It's not as perfect as S3 or as just static files, but in my experience, it still is better at building something that lasts on the time span of years than being like, I have a server that is on this Ubuntu version, and in four years there's all these security patches that are not being applied. So it's still sort of serverless, although not totally vendor neutral like S3.

Customizing the map

[00:35:03] Jeremy: We've mostly been talking about how you host the map itself, but for someone who's not familiar with these kinds of tools, how would they be customizing the map?

[00:35:15] Brandon: For customizing the map there is front end style customization and there's also data customization. So for the front end, if you wanted to change the water from one shade of blue to another shade of blue, there is a TypeScript API where you can customize it almost like a text editor color scheme. So if you're able to name a bunch of colors, well, you can customize the map in that way; you can change the fonts. And that's all done using MapLibre GL, with a TypeScript API on top of that. For customizing the data, all the pipeline to generate this data from OpenStreetMap is open source.
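As a sketch of what that front-end customization can look like, here is a small function that recolors one layer in a MapLibre-style JSON style document. The layer id `water` and the `fill-color` property follow the MapLibre style spec convention, but the exact layer ids depend on the basemap theme in use, so treat them as assumptions.

```typescript
// A minimal slice of the MapLibre style document shape: a style has
// layers, and fill layers carry their color under `paint`.
interface StyleLayer {
  id: string;
  type: string;
  paint?: Record<string, unknown>;
}

interface StyleDoc {
  version: number;
  layers: StyleLayer[];
}

// Return a copy of the style with one layer's fill color replaced,
// e.g. changing the water from one shade of blue to another.
function setFillColor(style: StyleDoc, layerId: string, color: string): StyleDoc {
  return {
    ...style,
    layers: style.layers.map((layer) =>
      layer.id === layerId
        ? { ...layer, paint: { ...layer.paint, "fill-color": color } }
        : layer
    ),
  };
}
```

On a live map the same effect is a one-liner through the runtime API, something like `map.setPaintProperty("water", "fill-color", "#4a90d9")`.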
There is a Java program using a library called Planetiler, which is awesome, which is this super fast multi-core way of building map tiles. And right now there aren't really great hooks to customize what data goes into that. But that's something that I do wanna work on. And finally, because the data comes from OpenStreetMap, if you notice data that's missing or you wanted to correct data in OSM, then you can go to osm.org. You can get involved in contributing the data to OSM, and the Protomaps build is daily. So if you make a change, then within 24 hours you should see the new base map have that change. And of course for OSM, your improvements would go into every OSM-based project that is ingesting that data. So it's not a Protomaps-specific thing. It's like this big shared data source, almost like Wikipedia.

OpenStreetMap is a dataset and not a map

[00:37:01] Jeremy: I think you were involved with OpenStreetMap to some extent. Can you speak a little bit to that for people who aren't familiar, what OpenStreetMap is?

[00:37:11] Brandon: Right. So I've been using OSM as sort of like a tools developer for over a decade now. And one of the number one questions I get from developers about what is Protomaps is, why wouldn't I just use OpenStreetMap? What's the distinction between Protomaps and OpenStreetMap? And it's sort of this funny thing, because even though OSM has map in the name, it's not really a map, in that it's mostly a data set and not a map. It does have a map that you can see, that you can pan around, when you go to the website, but the way that thing they show you on the website is built is not really that easily reproducible. It involves a lot of C++ software you have to run. But OpenStreetMap itself, the heart of it, is almost like a big XML file that has all the data in the map, and it's global. And it has tagged features, for example. So you can go in and edit that. It has a web front end to change the data.
It does not directly translate into making a map, actually.

Protomaps decides what shows at each zoom level

[00:38:24] Brandon: So a lot of the pipeline, that Java program I mentioned for building this basemap for Protomaps, is doing things like, you have to choose what data you show when you zoom out. You can't show all the data. For example, when you're zoomed out and you're looking at all of a state like Colorado, you don't see all the Chipotles when you're zoomed all the way out. That'd be weird, right? So you have to make some sort of decision in logic that says this data only shows up at this zoom level. And that's really the challenge in optimizing the size of that for the Protomaps map project.

[00:39:03] Jeremy: Oh, so those decisions of what to show at different zoom levels, those are decisions made by you when you're creating the PMTiles file with Protomaps.

[00:39:14] Brandon: Exactly. It's part of the base map's build pipeline. And those are honestly very subjective decisions. Who really decides, when you're zoomed out, should this hospital show up or should this museum show up? Nowadays in Google, I think it shows you ads. Like if someone pays for their car repair shop to show up when you're zoomed out like that, that gets surfaced. But because there is no advertising auction in Protomaps, that doesn't happen, obviously. So we have to sort of make some reasonable choice. A lot of that right now in Protomaps actually comes from another open source project called Mapzen. So Mapzen was a company that went out of business a couple years ago. They did a lot of this work in designing which data shows up at which zoom level, and open sourced it. And then when they shut down, they transferred that code into the Linux Foundation. So it's this totally open source project that, again, sort of like Mapbox GL, has this awesome legacy, in that this company funded it for years for smart people to work on it, and now it's just a free thing you can use.
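The zoom-level decision logic described above can be sketched as a simple minimum-zoom table. The feature kinds and thresholds below are invented for illustration; the real rules live in the Protomaps basemap build pipeline and its Mapzen-derived logic.

```typescript
// Each feature kind only appears once you are zoomed in far enough.
// Zoom 0 is the whole world; ~15 is street level. These numbers are
// illustrative, not the real Protomaps/Mapzen thresholds.
const MIN_ZOOM: Record<string, number> = {
  country: 0,
  city: 4,
  hospital: 12,
  museum: 13,
  restaurant: 14,
};

// Decide whether a feature should be included in the tile for a
// given zoom level; unknown kinds only show at maximum detail.
function showAtZoom(kind: string, zoom: number): boolean {
  const minZoom = MIN_ZOOM[kind] ?? 15;
  return zoom >= minZoom;
}
```

Applying a rule like this at build time is what keeps zoomed-out tiles small: a tile covering all of Colorado keeps cities but drops every individual restaurant.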
So the logic in Protomaps is really based on Mapzen.

[00:40:33] Jeremy: And so the visualization of all this... I think I understand what you mean when people say, oh, why not use OpenStreetMap, because it's not really clear, it's hard to tell: is this the tool that's visualizing the data? Is it the data itself? So in the case of using Protomaps, it sounds like Protomaps itself has all of the data from OpenStreetMap, and then it has made all the decisions for you in terms of what to show at different zoom levels and what things to have on the map at all. And then finally, you have to have a separate UI layer, and in this case, it sounds like the one that you recommend is the MapLibre library.

[00:41:18] Brandon: Yeah, that's exactly right. Protomaps has a portion, or a subset, of OSM data. It doesn't have all of it just because there's too much. Like, there's data in there where people have mapped out different bushes, and I don't include that in Protomaps. If you wanted to go in and edit the Java code to add that, you can. But really what Protomaps is positioned as is sort of a solution for developers that want to use OSM data to make a map in their app or on their website, because OpenStreetMap itself is mostly a data set; it does not really go all the way to having an end-to-end solution.

Financials and the idea of a project being complete

[00:41:59] Jeremy: So I think it's great that somebody who wants to make a map has these tools available, whether it's from what was originally built by Mapbox, what's built by OpenStreetMap now, or the work you're doing with Protomaps. But I wonder, one of the things that I talked about with Tom was, he was saying he was trying to build this mapping business, and based on the financials of what was coming in, he was stressed, right? He was struggling a bit. And I wonder for you, you've been working on this open source project for five years.
Do you have similar stressors, or do you feel like, I could keep going how things are now and I feel comfortable?

[00:42:46] Brandon: So I wouldn't say I'm a hundred percent in one bucket or the other. I'm still seeing it play out. One thing that I really respect in a lot of open source projects, which I'm not saying I'm gonna do for Protomaps, is the idea that a project is finished. I think that is amazing. If a software project can just be done, it's sort of like a painting or a novel: once you finish the last page, have it seen by the editor, and send it off to the press, you're done with the book. And I think one of the pains of software is so few of us can actually do that. And I don't know, obviously people will say, oh, the map is never finished. That's more true of OSM, but I think for Protomaps, one thing I'm thinking about is how to limit the scope to something that's quite narrow, to where we could be feature complete on the core things in the near-term timeframe. That means that it does not address a lot of things that people want. Like search: if you go to Google Maps and you search for a restaurant, you will get some hits. That's a geocoding issue, and I've already decided that's totally out of scope for Protomaps. So, in terms of trying to think about the future of this, I'm mostly looking for ways to cut scope if possible. There are some things, like better tooling around being able to work with PMTiles, that are on the roadmap. But for me, I am still enjoying working on the project. It's definitely growing. So I can see on NPM downloads, I can see the growth curve of people using it, and that's really cool. So I like hearing about when people are using it for cool projects. So it seems to still be going okay for now.

[00:44:44] Jeremy: Yeah, that's an interesting perspective, how you were talking about projects being done. Because I think when people look at GitHub projects and they go like, oh, the last commit was X months ago.
They go, oh well, this is dead, right? But maybe that's the wrong framing. Maybe you can get a project to a point where it doesn't need to be updated.

[00:45:07] Brandon: Exactly, yeah. Like I used to do a lot of C++ programming, and the best part is when you see some LAPACK matrix math library from like 1995 that still works perfectly in C++, and you're like, this is awesome. This is the one I have to use. But if you're trying to use some React component library and it hasn't been updated in like a year, you're like, oh, that's a problem. So again, I think there's some middle ground between those that I'm trying to find. I do like that Protomaps is quite dependency-light in terms of the number of hard dependencies I have in software. But I do still feel like there is a lot of work to be done in terms of project scope that needs to have stuff added.

You mostly only hear about problems instead of people's wins

[00:45:54] Jeremy: Having run it for this long, do you have any thoughts on running an open source project in general? On dealing with issues or managing what to work on, things like that?

[00:46:07] Brandon: Yeah. So I have a lot. I think one thing people point out a lot is that, especially because I don't have a direct relationship with a lot of the people using it, a lot of times I don't even know that they're using it. Someone sent me a message saying, hey, have you seen flickr.com, like the photo site? And I'm like, no. And I went to flickr.com/map and it has Protomaps for it. And I'm like, I had no idea. But that's cool: if they're able to use Protomaps for this giant photo sharing site, that's awesome. But that also means I don't really hear about when people use it successfully, because you just don't know. I guess they NPM installed it and it works perfectly, and you never hear about it. You only hear about people's negative experiences.
You only hear about people that come and open GitHub issues saying, this is totally broken, and why doesn't this thing exist? And I'm like, well, it's because there's an infinite amount of things that I want to do, but I have a finite amount of time, and I just haven't gotten to that yet. And that's honestly a lot of it, and people are like, when is this thing gonna be done? So that's honestly part of why I don't have a public roadmap, because I want to avoid that sort of bickering about it. I would say that's one of my biggest frustrations with running an open source project: how it's self-selected to only hear the negative experiences with it.

Be careful what PRs you accept

[00:47:32] Brandon: 'Cause you don't hear about those times where it works. I'd say another thing is, it's changed my perspective on contributing to open source, because I think when I was younger, or before I had become a maintainer, I would open a pull request on a project unprompted that has a hundred lines, and I'd be like, hey, just merge this thing. But I didn't realize when I was younger, well, if I just merge it and I disappear, then the maintainer is stuck with what I did forever. You know, if I add some feature, then that person that maintains the project has to do that indefinitely. And I think that's very asymmetrical, and it's changed my perspective a lot on accepting open source contributions. I wanna have it be open to anyone to contribute. But there is some amount of back and forth where it's almost like the default answer for should I accept a PR is no by default, because you're the one maintaining it. And do you understand the shape of that solution completely, to where you're going to support it for years? Because the person that's contributing it is not bound to those same obligations that you are.
And I think that's also one of the things where I have a lot of trepidation around open source: I used to think of it as a lot more bazaar-like, in terms of anyone can just throw their thing in. But then that creates a lot of problems for the people who are expected, out of social obligation, to continue this thing indefinitely.

[00:49:23] Jeremy: Yeah, I can totally see why that causes burnout with a lot of open source maintainers, because you probably, to some extent, maybe even feel some guilt, right? You're like, well, somebody took the time to make this. But then, like you said, you have to spend a lot of time trying to figure out, is this something I wanna maintain long term? And one wrong move and it's like, well, it's in here now.

[00:49:53] Brandon: Exactly. To me, I think that is a very common failure mode for open source projects: they're too liberal in the things they accept. And that's a lot of why I was talking about how that choice of what features show up on the map was inherited from the Mapzen projects. If I didn't have that, then somebody could come in and say, hey, you know, I want to show power lines on the map. And they open a PR for power lines, and now everybody who's using Protomaps, when they're zoomed out, they see power lines and are like, I didn't want that. So I think that's part of why a lot of open source projects eventually evolve into a plugin system: because there is this demand, as the project grows, for more and more features. But there is a limit on the maintainers. It's like the demand for features is exponential while the maintainers' amount of time and effort is linear.

Plugin systems might reduce need for PRs

[00:50:56] Brandon: So maybe the solution to smash that exponential down, to quadratic maybe, is to add a plugin system. But I think that is one of the biggest tensions that only became obvious to me after working on this for a couple of years.

[00:51:14] Jeremy: Is that something you're considering doing now?
[00:51:18] Brandon: The plugin system? Yeah. I think for the data customization, I eventually want to have some sort of programmatic API where you could declare a config file that says, I want ski routes. It totally makes sense. The power lines example is maybe a little bit obscure, but for example, take a skiing app: you want to be able to show ski slopes when you're zoomed out. Well, you're not gonna be able to get that from Mapbox or from Google, because they have a one-size-fits-all map that's not specialized to skiing or to golfing or to the outdoors. In theory, you could do this with Protomaps if you changed the Java code to show data at different zoom levels. And that is, to me, what makes the most sense for a plugin system, and it also makes the most product sense, because it enables a lot of things you cannot do with the one-size-fits-all map.

[00:52:20] Jeremy: It might also increase the complexity of the implementation though, right?

[00:52:25] Brandon: Yeah, exactly. So that's really where a lot of the terrifying thoughts come in, which is, once you create this config file surface area, well, what does that look like? Is that JSON? Is that TOML? Is that some weird... everything eventually evolves into some scripting language, right? Where you have logic inside of your templates. And I honestly do not really know what that looks like right now. That feels like something in the medium-term roadmap.

[00:52:58] Jeremy: Yeah, and then in terms of bug reports or issues, now it's not just your code, it's this exponential combination of whatever people put into these config files.

[00:53:09] Brandon: Exactly. Yeah. So again, I really respect the projects that have done this well, or that have done plugins well. I'm trying to think of some... I think Obsidian has plugins, for example. And that seems to be one of the few solutions to try and satisfy the infinite desire for features with the limited amount of maintainer time.
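A declarative config of the kind being imagined here might look something like the sketch below. Everything in it, the field names, the `ski_route` kind, the whole shape, is hypothetical; no such config format exists in Protomaps today.

```typescript
// Hypothetical shape for a data-customization config: each entry
// names an OSM-derived feature kind and the zoom at which it starts
// to appear in the generated tiles.
interface LayerRule {
  kind: string;    // e.g. "ski_route" -- a hypothetical kind name
  minZoom: number; // first zoom level where the data is included
}

interface BasemapConfig {
  name: string;
  rules: LayerRule[];
}

// Example: a skiing-focused basemap that surfaces ski routes while
// still zoomed out, which a one-size-fits-all map would never do.
const skiConfig: BasemapConfig = {
  name: "ski-basemap",
  rules: [
    { kind: "ski_route", minZoom: 8 },
    { kind: "peak", minZoom: 9 },
  ],
};
```

The tension Brandon describes is visible even in this toy: as soon as rules need conditions ("only groomed runs below zoom 10"), the config starts wanting to become a scripting language.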
Time split between code vs triage vs talking to users

[00:53:36] Jeremy: How would you say your time is split between working on the code versus issue and PR triage?

[00:53:43] Brandon: Oh, it varies, really. I think working on the code is a minority of it. I think something that I actually enjoy is talking to people, talking to users, getting feedback on it. I go to quite a few conferences to talk to developers or people that are interested, and figure out how to refine the message, how to make it clearer to people what this is for. And I would say maybe a plurality of my time is spent dealing with non-technical things that are neither code nor GitHub issues. One thing I've been trying to do recently is talk to people that are not really in the mapping space. For example, people that work for newspapers: a lot of them are front end developers, and if you ask them to run a Linux server, they're like, I have no idea. But that really is one of the best target audiences for Protomaps. So I'd say a lot of the reality of running an open source project is a lot like a business: it has all the same challenges as a business, in terms of you have to figure out what is the thing you're offering. You have to deal with people using it. You have to deal with feedback; you have to deal with managing emails and stuff. I don't think the payoff is anywhere near what running a business or a startup that's backed by VC money is, but it's definitely not the case that if you just want to code, you should start an open source project, because I think a lot of the work for an open source project has nothing to do with just writing the code. In my opinion, as someone having done a VC-backed business before, it is a lot more similar to running a tech company than just putting some code on GitHub.
Running a startup vs open source project

[00:55:43] Jeremy: Well, since you've done both, at a high level, what did you like about running the company versus maintaining the open source project?

[00:55:52] Brandon: So I have done some venture capital accelerator programs before, and I think there is an element of hype and energy that you get from that that is self-perpetuating. Your co-founder is gung-ho on, like, yeah, we're gonna do this thing. And your investors are like, you guys are geniuses, you guys are gonna make a killing doing this thing. And the way it's framed, it's sort of obvious to everyone that there's a much more traditional set of motivations behind that, that people understand, while it's definitely not the case for running an open source project. Sometimes you just wake up and you're like, what the hell is this thing for? It is this thing you spend a lot of time on. You don't even know who's using it. The people that use it and make a bunch of money off of it, they know nothing about it. And you know, it's just like, cool. And then you only hear from people that are complaining about it. And I think that's honestly discouraging, compared to the clearer energy and clearer motivation and vision behind how most people think about a company. But what I like about the open source project is just the lack of those constraints, you know? Where you have a mandate that you need to have this many customers that are paying by this amount of time, there's that sort of pressure on delivering a business result, instead of just making something that you're proud of, that's simple to use, and has an elegant design. I think that's really a difference in motivation as well.

Having control

[00:57:50] Jeremy: Do you feel like you have more control? Like you mentioned how you've decided, I'm not gonna make a public roadmap. I'm the sole developer. I get to decide what goes in, what doesn't.
Do you feel like you have more control in your current position than you did running the startup? [00:58:10] Brandon: Definitely, for sure. That agency is what I value the most. It is possible to go too far; I'm very wary of the BDFL title, which is how a lot of open source projects succeed. But for a project to succeed there has to be somebody who makes those decisions. Sometimes those decisions will be wrong, and then hopefully they can be rectified. Going back to what I was talking about with scope, the overall vision and the scope of the project are something I am very opinionated about: it should do these things, it shouldn't do these things, it should be easy to use for this audience. Is it gonna be appealing to this other audience? I don't know. And I think that is really one of the most important parts of that leadership role: having the power to decide we're doing this, we're not doing this. I would hope other developers would be able to get on board if they're able to make good use of the project, if they use it for their company, if they use it for their business, or if they just think the project is cool. So there are other contributors at this point, and I want to get more involved. But being able to make those decisions toward what I believe is going to be the best project is something that is very special about open source, and isn't necessarily true about running a SaaS business. [00:59:50] Jeremy: I think that's a good spot to end it on, so if people want to learn more about Protomaps or they wanna see what you're up to, where should they head? [01:00:00] Brandon: So you can go to Protomaps.com, GitHub, or you can find me or Protomaps on Bluesky or Mastodon. [01:00:09] Jeremy: All right, Brandon, thank you so much for chatting today. [01:00:12] Brandon: Great. Thank you very much.
"The people leapt forward / Seed, the hero's blood is seed / Oh, jump, João!" These verses, written by Aldir Blanc and João Bosco in the song "João do Pulo", capture the greatness of our subject. Born João Carlos de Oliveira, he found his leap on the track. Four-time Pan American champion in the triple jump and the long jump, two-time Olympic medalist, world record holder, and state legislator. From his birth to his untimely end, his 45 years traced a trajectory worthy of a film. Ubuntu Esporte Clube welcomed Thais Oliveira, João do Pulo's daughter, and Nill Marcondes, who will portray the legend in a documentary to be released soon, to honor the memory and legacy of the man who has just been inscribed in Brazil's Book of National Heroes.
I am delighted to share this imperfectly perfect and perfectly imperfect conversation with Wakanyi Hoffman, an international speaker, Indigenous African thinker, and Head of Sustainable AI Africa Research at the Inclusive AI Lab (Utrecht University). Wakanyi is a cross-cultural peace-weaver and African folklorist who speaks about Ubuntu and the art of being human. Together we discussed Wakanyi's journey and her work on the philosophy of Ubuntu, emphasizing its importance in understanding collective, relational ways of being and its extension to the natural world. The conversation also touched on the potential of AI to reflect and challenge human understanding and biases, and the importance of intercultural interactions and storytelling in fostering meaningful connections. Lastly, we explored the need for diverse voices in academia and philosophy, the significance of the feminine perspective, and the potential of AI in facilitating more considerate communication.
This show has been flagged as Explicit by the host. Background It all happened when I noticed that a disk space monitor sitting in the top right-hand corner of my GNOME desktop was red. On inspection I discovered that my root filesystem was 87% full. The root partition was only 37 GB in size, which meant there was less than 4 GB of space left. Thinking back, I remembered that my PC had been running a bit slower than usual, and that the lack of space in the root partition could have been to blame. I had some tasks I wanted to complete and thought I'd better do something about the lack of space before it became an even bigger problem. What happened As per usual, all this happened when I was short of time and in a bit of a hurry. Lesson one: don't do this sort of thing when you're in a hurry. Because I was in a hurry I didn't spend time doing a complete backup. Lesson two: do a backup. My plan was to get some space back by shrinking my home partition, leaving some empty space to allow me to increase the size of my root partition. For speed and ease I decided to use GParted, as I have used it many times in the past. Wikipedia article about GParted Official GParted webpage It's not a good idea to try to resize or move a mounted filesystem, so a bootable live version of GParted is the way to go. The reason is that if you run GParted from your normal Linux OS and the OS decides to write something to the disk while GParted is also trying to write or move things on the disk, then, as you can imagine, very bad things could and probably would happen. I knew I had an old bootable live CD-ROM with GParted on it, as I had used it many times in the past, though not for a few years. As I was short on time I thought this would be the quickest way to get the job done.
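The disk-usage check that kicked this story off can be scripted rather than left to a panel applet. A minimal sketch using df from GNU coreutils; the 80% threshold and the warning text are illustrative choices, not from the episode:

```shell
#!/bin/sh
# Warn when the root filesystem crosses a usage threshold.
# THRESHOLD is an arbitrary example value.
THRESHOLD=80
# df --output=pcent prints the used percentage; strip everything but digits.
USED=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$USED" -ge "$THRESHOLD" ]; then
    echo "Warning: / is ${USED}% full"
else
    echo "OK: / is ${USED}% full"
fi
```

Dropped into cron, a check like this can flag the problem before the desktop monitor turns red.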
I booted up the live CD and set up the various operations: shrinking the home partition, moving it to the right to leave space for the root partition, then finally increasing the size of the almost-full root partition. What I didn't notice at the time was a tiny exclamation mark on at least one of the partitions. I probably missed it because I was in a hurry. Lesson three: don't rush things, and be on the lookout for any error messages. When I clicked the green tick button to carry out the operations, it briefly seemed to start and then almost instantly stopped, saying that there were errors, that the operation was unsuccessful, and something about unsupported 64-bit filesystems. At this point I thought / hoped that nothing had actually happened. My guess is that the old live GParted distribution I was using didn't support ext4, though I could be completely wrong on this. Lesson four: don't use old versions of GParted, particularly when performing operations on modern filesystems. Wikipedia article about the ext4 filesystem I removed the GParted bootable CD and rebooted my PC. At this point I got lots of errors scrolling up the screen, then a message I'd never seen before; from memory, I think it mentioned journaling. It then said something about pass 1, pass 2, pass 3, and continued all the way to pass 5. Then it talked about recovering data blocks. At this point I got very nervous. I had all sorts of fears going through my head. I imagined I might have lost the entire contents of my hard drive. The whole experience was very scary. I let it complete all operations, and eventually my Ubuntu operating system came up and seemed okay. I rebooted the PC and this time it booted correctly with no error messages and everything was okay. I have often seen things said about journaling filesystems and how good they are, though until this point I had never seen a real example of one repairing a filesystem.
Both my root and home partitions were ext4, and thankfully ext4 supports journaling, which I believe on this occasion saved me from a great deal of pain. Lesson five: it might be a good idea to use journaling filesystems. Wikipedia article about journaling filesystems This still left me with the original problem: I had little free space on my root filesystem. This time I decided to take my time and break the task up into smaller chunks rather than doing it in one go. First I downloaded the newest live distribution version of GParted. I performed the checksum test to make sure the download was successful with no errors. The next day I tried to write it to a CD-ROM, something I haven't done for a very long time. I initially couldn't understand why I couldn't click on the write button; then I looked at my blank CD-ROM using the Ubuntu GNOME Disks application. It reported that the disc was read-only. I did a bit of googling and came across a post from someone who had hit the same problem and solved it by installing the CD burning application Brasero. Wikipedia article about Brasero Official website for Brasero Installing Brasero solved the problem and allowed me to write the image file to CD-ROM. I was actually surprised that it wasn't installed, as I've used this application in the past. Just goes to show how long it's been since I've written anything to CD-ROM! I booted the CD-ROM to check that GParted worked, and didn't see any exclamation marks on any of my partitions. I was short on time and didn't want to rush things, so I decided to stop at this point. Later on I popped the live bootable GParted CD-ROM, running version 1.6.0.3 (AMD64), into my PC and booted it up. Everything seemed okay and there were no errors showing. I took my home partition, SDA6, shrunk it down by about 20 GB, and then shifted it 20 GB to the right, to the end of the disk. This left a 20 GB gap at the end of my root partition.
I then increased the size of my root partition, SDA5, by approximately 20 GB to fill the empty space. It took GParted about one hour and 40 minutes to complete all the operations. The root partition is now reporting 61% full rather than 87% full. The root partition is now approximately 53 GB in size with 31 GB used; 22 GB is now free, which is a bit more comfortable. Picture 1 is a screenshot of GParted showing the new sizes of my root and home partitions. I removed the GParted CD from my CD-ROM drive and rebooted the PC to thankfully find all was well, with no errors reported. Conclusion My PC is now running more smoothly. All I can say after all this is that I consider myself very lucky this time, and I hope I learned some valuable lessons along the way. Provide feedback on this episode.
API hacking and bypassing Ubuntu's user namespace restrictions feature in this week's episode, as well as a bug in CimFS for Windows and revisiting the infamous NSO Group WebP bug. Links and vulnerability summaries for this episode are available at: https://dayzerosec.com/podcast/279.html [00:00:00] Introduction [00:00:28] Next.js and the corrupt middleware: the authorizing artifact [00:06:15] Pwning Millions of Smart Weighing Machines with API and Hardware Hacking [00:20:37] oss-sec: Three bypasses of Ubuntu's unprivileged user namespace restrictions [00:32:10] CimFS: Crashing in memory, Finding SYSTEM (Kernel Edition) [00:43:18] Blasting Past Webp [00:47:50] We hacked Google's A.I. Gemini and leaked its source code (at least some part) Podcast episodes are available on the usual podcast platforms: -- Apple Podcasts: https://podcasts.apple.com/us/podcast/id1484046063 -- Spotify: https://open.spotify.com/show/4NKCxk8aPEuEFuHsEQ9Tdt -- Google Podcasts: https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy9hMTIxYTI0L3BvZGNhc3QvcnNz -- Other audio platforms can be found at https://anchor.fm/dayzerosec You can also join our discord: https://discord.gg/daTxTK9
What are the #coreutils? Does GPL vs MIT matter? How to improve #ubuntu and other #linux distros using tools and software implemented in #rust. As I've told you on more than one occasion, while I'm a compulsive podcast consumer, videos are not really my thing, rather the opposite. Still, I do tend to skim the headlines of the channels I follow, even if I don't watch them, just to stay informed. That's how I came across a video by Linuxero Errante about Ubuntu possibly leaving GNU/Linux behind. As you can imagine, it caught my attention and I decided to dig a little deeper. What does "Ubuntu is leaving GNU/Linux" even mean? That's exactly what I want to talk about in this episode. It doesn't go much beyond replacing some of the tools used in Ubuntu and other distributions with alternatives. But this is nothing new. I already mentioned in episode 591, when I talked about Alpine, the best Linux distribution, that it doesn't use the GNU coreutils, so strictly speaking it isn't a GNU/Linux distribution at all. In this case, Ubuntu also wants to replace the GNU coreutils with others written in Rust. This is something I've been doing myself over the last few years, adopting various tools, mostly implemented in Rust, that replace existing ones, either because they add new features or because they improve on them. More information and links in the episode notes.
This week we're talking about Ubuntu's 25.04 Beta, SteamOS rumors, and the next big XZ release. Then EU OS has the guys scratching their heads, KDE starts planning the Plasma Login Manager, and Torvalds has another rant over hdrtest in the kernel. For tips we have pw-mididump for dumping Pipewire Midi events, ddgr for command line Duck Duck Go, and cd . for reloading the current directory. You can see the show notes at https://bit.ly/4hX44MD and we'll see you next week! Host: Jonathan Bennett Co-Hosts: Jeff Massie and Ken McDonald Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
video: https://youtu.be/LvEB5lGUqaw Comment on the TWIL Forum (https://thisweekinlinux.com/forum) This week in Linux, we have a brand new version of the Linux kernel, including a boost for gaming performance and GPU workload protection in the Linux 6.14 release. There are some new distro releases to talk about with Zorin OS and EndeavourOS, as well as some beta releases from Fedora and Ubuntu. Plus, openSUSE is adding a long-requested feature to their Zypper Package Manager, and so much more on this episode of This Week in Linux. This is the weekly news show that will keep you up to date with what's going on in the Linux and Open Source world. Now let's jump right into Your Source for Linux GNews. Download as MP3 (https://aphid.fireside.fm/d/1437767933/2389be04-5c79-485e-b1ca-3a5b2cebb006/80747735-291f-4f6b-a6c4-1f3607cbe048.mp3) Support the Show Become a Patron = tuxdigital.com/membership (https://tuxdigital.com/membership) Store = tuxdigital.com/store (https://tuxdigital.com/store) Chapters: 00:00 Intro 00:45 Linux 6.14 Released 07:01 Zorin OS 17.3 Released 13:11 EndeavourOS Mercury Neo Released 14:58 Sandfly Security, agentless Linux security [ad] 16:54 Rescuezilla 13 Released 17:54 Finnix 250 Released 19:14 Fedora 42 Beta Released 22:14 Ubuntu 25.04 Beta Released 23:23 openSUSE Adds Experimental Parallel Downloads to Zypper 24:21 HP considers SteamOS for their next Gaming Handheld 26:56 Support the show Links: Linux 6.14 Released https://lore.kernel.org/lkml/CAHk-=wg7TO09Si5tTPyhdrLLvyYtVmCf+GGN4kVJ0=Xk=5TE3g@mail.gmail.com/T/#u (https://lore.kernel.org/lkml/CAHk-=wg7TO09Si5tTPyhdrLLvyYtVmCf+GGN4kVJ0=Xk=5TE3g@mail.gmail.com/T/#u) https://www.kernel.org/category/releases.html (https://www.kernel.org/category/releases.html) https://bsky.app/profile/plagman.bsky.social/post/3lkp26xmco22k (https://bsky.app/profile/plagman.bsky.social/post/3lkp26xmco22k) https://kernelnewbies.org/Linux_6.14 (https://kernelnewbies.org/Linux_6.14) Zorin OS 17.3 Released 
https://blog.zorin.com/2025/03/26/zorin-os-17.3-is-here/ (https://blog.zorin.com/2025/03/26/zorin-os-17.3-is-here/) https://kdeconnect.kde.org/ (https://kdeconnect.kde.org/) EndeavourOS Mercury Neo Released https://endeavouros.com/news/mercury-neo-with-linux-6-13-7-and-arch-mirror-ranking-bug-fix/ (https://endeavouros.com/news/mercury-neo-with-linux-6-13-7-and-arch-mirror-ranking-bug-fix/) Sandfly Security, agentless Linux security [ad] https://thisweekinlinux.com/sandfly (https://thisweekinlinux.com/sandfly) https://destinationlinux.net/409 (https://destinationlinux.net/409) Rescuezilla 13 Released https://rescuezilla.com/ (https://rescuezilla.com/) https://github.com/rescuezilla/rescuezilla/releases/tag/2.6 (https://github.com/rescuezilla/rescuezilla/releases/tag/2.6) Finnix 250 Released https://blog.finnix.org/2025/03/22/finnix-250-released/ (https://blog.finnix.org/2025/03/22/finnix-250-released/) https://www.finnix.org/ (https://www.finnix.org/) Fedora 42 Beta Released https://fedoramagazine.org/announcing-fedora-linux-42-beta/ (https://fedoramagazine.org/announcing-fedora-linux-42-beta/) Ubuntu 25.04 Beta Released https://www.omgubuntu.co.uk/2025/03/ubuntu-25-04-beta-download (https://www.omgubuntu.co.uk/2025/03/ubuntu-25-04-beta-download) https://www.phoronix.com/news/Ubuntu-25.04-Beta (https://www.phoronix.com/news/Ubuntu-25.04-Beta) GNOME 48 https://thisweekinlinux.com/303 (https://thisweekinlinux.com/303) openSUSE Adds Experimental Parallel Downloads to Zypper https://news.opensuse.org/2025/03/27/zypper-adds-experimental-parallel-downloads/ (https://news.opensuse.org/2025/03/27/zypper-adds-experimental-parallel-downloads/) HP considers SteamOS for their next Gaming Handheld https://www.xda-developers.com/hp-hasnt-made-omen-gaming-handheld/ (https://www.xda-developers.com/hp-hasnt-made-omen-gaming-handheld/) https://www.gamingonlinux.com/2025/03/hp-are-interested-in-making-a-steamos-handheld-as-the-windows-experience-sucks/ 
(https://www.gamingonlinux.com/2025/03/hp-are-interested-in-making-a-steamos-handheld-as-the-windows-experience-sucks/) Support the show https://tuxdigital.com/membership (https://tuxdigital.com/membership) https://store.tuxdigital.com/ (https://store.tuxdigital.com/)
To become a follower of Jesus, visit: https://MorningMindsetMedia.com/MeetJesus (NOT a Morning Mindset resource) ~~~~~~~~~~~~~~ FINANCIALLY SUPPORT THE MORNING MINDSET: (not tax-deductible) -- Become a monthly partner: https://mm-gfk-partners.supercast.com/ -- Support a daily episode: https://MorningMindsetMedia.com/daily-sponsor/ -- Give one-time: https://give.cornerstone.cc/careygreen -- Venmo: @CareyNGreen -- Support our SPANISH TRANSLATION podcast: https://MorningMindsetMedia.com/supportSpanish -- Support our HINDI TRANSLATION podcast: https://MorningMindsetMedia.com/supportHindi ~~~~~~~~~~~~~~ FOREIGN LANGUAGE VERSIONS OF THIS PODCAST: Subscribe to the SPANISH version: https://MorningMindsetMedia.com/Spanish Subscribe to the HINDI version: https://MorningMindsetMedia.com/Hindi ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ CONTACT: Carey@careygreen.com ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ THEME MUSIC: “King’s Trailer” – Creative Commons 0 | Provided by https://freepd.com/ ***All NON-ENGLISH versions of the Morning Mindset are translated using A.I. Dubbing and Translation tools from DubFormer.ai ***All NON-ENGLISH text content (descriptions and titles) are translated using the A.I. functionality of Google Translate.
This show has been flagged as Clean by the host. A collection of tips and tricks that operat0r uses to make a standard Android phone more custom. The secret block extension is "11335506" - tell 'em Ken sent ya. Links UserLAnd - Linux on Android UserLAnd is an open-source app which allows you to run several Linux distributions like Ubuntu, Debian, and Kali. Widgify - DIY Live Wallpaper Widgify is a well-designed beautification tool for your phone, where you can experience a wide variety of screen widgets to easily match your super-personalized phone home screen! Nova Launcher Prime Nova Launcher is a powerful, customizable, and versatile home screen replacement. Firefox Nightly for Developers Nightly is built for testers. Help us make Firefox the best browser it can be. Expanded extension support in Firefox for Android Nightly How to use collections on addons.mozilla.org SponsorBlock SponsorBlock is an open-source crowdsourced browser extension and open API for skipping sponsor segments in YouTube videos. WireGuard (VPN) The official app for managing WireGuard VPN tunnels. DNS66 This is a DNS-based host blocker for Android. (Requires root) Hacker's Keyboard Four- or five-row soft keyboard TidyPanel Notification Cleaner Tidy up your notification panel with a simple, minimal, beautiful and intuitive UI. Provide feedback on this episode.
One of the greatest boxers in history, George Foreman died last Friday (21st) at the age of 76. Gold medalist at the 1968 Mexico City Olympics and two-time world heavyweight champion, "Big George" left his mark on the sport with his devastating knockouts, and enjoyed a successful career as an advertising icon.
What if a single moment could redefine your purpose? For Angel Jones, it was standing in Trafalgar Square, hearing Nelson Mandela's powerful words: “I want to put you in my pocket and take you home.” That spark ignited a revolution—The Homecoming Revolution—a movement dedicated to bringing talented Africans back to their roots. In this episode, Angel shares her incredible journey from a thriving advertising career in London to leading a bold mission of change and connection across Africa. She opens up about facing a midlife crisis that led to founding an executive search firm, helping top African talent return home to drive impact. You'll hear her insights on navigating identity, the power of empathy in leadership, and why she believes listening is the most vital skill for leaders today. Plus, Angel reflects on the challenges of misinformation, the role of technology, and how African values of Ubuntu offer hope in a divided world. Get ready for a powerful conversation on bold leadership, resilience, and the future of Africa.Love the show? Subscribe, rate, review & share! https://anne-pratt.com
New Afrikan Ourstory 365 continues…Listen up and listen in! In this episode of Prison Focus Radio, we are in conversation with our mighty Sis, Porshe Taylor, founder of Prison: From the Inside Out. This is another beautiful opportunity to tap into the revolutionary love in action to bring our People home, healthy in mind and spirit, and attend to the medical challenges they face after long-term capture. We do this in Ubuntu, shared humanity. I am because We are. Kan't stop, Won't stop All Power to the People Liberate Our Elders Free Em All! Free Palestine!
Mike sits down with Matt Hartley of Framework to discuss some of their exciting new announcements, some Linux goodness & of course an obligatory Rust shoutout. Matt's Socials LinkedIn (https://www.linkedin.com/in/matthartley/) Bluesky (https://bsky.app/profile/matthartleylinux.bsky.social) Framework (https://frame.work/) Coder's Socials Mike on X (https://x.com/dominucco) Mike on BlueSky (https://bsky.app/profile/dominucco.bsky.social) Mike's Blog (https://dominickm.com) Coder on X (https://x.com/coderradioshow) Coder on BlueSky (https://bsky.app/profile/coderradio.bsky.social) Show Discord (https://discord.gg/k8e7gKUpEp) Alice (https://alice.dev) TMB Earth Day 2025 Competition (https://dominickm.com/earth-day-25-competition/)
Home Assistant gets even more credible and sustainable, open source users are entitled, changes in KDE land, Fedora says hello to Plasma and goodbye to X11, Ubuntu looks to drop GNU coreutils, GIMP 3 is out and still has a terrible name, and new Pebble devices will be shipping soon™. News Home Assistant officially... Read More
Coming up in this episode * Oh GNOME! * Mozilla, Don't Watch * And a few high notes The Video Version! (https://youtu.be/FdHulOnBwEo) https://youtu.be/FdHulOnBwEo 0:00 Cold Open 1:07 Dash To Panel Needs Your Help! 27:21 Firefox's New Terms Of Use 51:33 Mark / Contact Button 1:00:34 Scott / Contact Button 1:03:22 Dan / Matrix 1:06:09 chraist / Matrix 1:08:07 bgt lover / Matrix 1:10:00 MarshMan / Discord 1:13:58 Next Time! 1:18:45 Stinger Dash to Panel Maintainer Quits Dash to panel maintainer quits (https://www.theregister.com/2025/03/14/dashtopanel_maintainer_quits/) The GitHub issue (https://github.com/home-sweet-gnome/dash-to-panel/issues/2259)
This show has been flagged as Clean by the host. Transferring Large Data Sets Very large data sets present their own problems. Not everyone has directories with hundreds of gigabytes of project files, but I do, and I assume I'm not the only one. For instance, I have a directory with over 700 radio shows; many of these directories also have a podcast, along with pictures and text files. Doing a properties check on the directory I see 450 gigabytes of data. When I started envisioning Libre Indie Archive I wanted to move the directories into archival storage using optical drives. My first attempt at this didn't work, because I lost metadata when I wrote the files to the optical discs, which are read-only once written. After further work and study I learned that tar files can preserve metadata if they are created and extracted as root. In fact, if you are running tar as root, preserving file ownership and permissions is the default. So this means that optical discs are an option if you write tar archives onto them. I have better success rates with 25 GB Blu-ray Discs than with the 50 GB discs. So, if your directory breaks up into projects that fit on 25 GB discs, that's great. My data did not do this easily, but tar does have an option to write a data set to multiple tar files, each with a maximum size, labelling them -0, -1, etc. When using this multi-volume feature you cannot use compression, so you will get tar files, not tar.gz files. It's better to break the file sets up into reasonable sizes, so I decided to divide the shows up alphabetically by title: all the shows starting with the letter a would be one data set, and so on down the alphabet, one letter at a time. Most letters would result in a single tar file, labeled -0, that would fit on a 25 GB disc.
Many letters, however, took two or even three tar files that would have to be written on different discs and then concatenated on the primary system before being extracted to the correct location in primaryfiles. There is a companion program to tar, called tarcat, that I used to combine the 2 or 3 tar files split by length into a single tar file that could be extracted. I ran engrampa as root to extract the files. So, I used a tar command on the working system where my Something Blue radio shows are stored. Then I used K3b to burn these files onto 25 GB Blu-ray Discs, carefully labeling the discs and keeping a text file that tracked which files I had already copied to disc. Then, on the Libre Indie Archive primary system, I copied the file or files for that data set from the Blu-ray to the boot drive. Then I would use tarcat to combine the files if there was more than one for that data set. And finally I would extract the files to primaryfiles by running engrampa as root. Now I'm going to go into detail on each of these steps. First make sure that the Libre Indie Archive program, prep.sh, is in your home directory on your workstation. Then, from the data directory to be archived, in my case the something_blue directory, run prep.sh like this: ~/prep.sh This will create a file named IA_Origin.txt that lists the date, the computer and directory being archived, and the users and userids on that system. All very helpful information to have if at some time in the future you need to do a restore. Next, create a tar data set for each letter of the alphabet. (You may want to divide your data set in a different way.) Open a terminal in the directory containing the data directory, so that ls displays something_blue (your data directory). I keep the Something Blue shows and podcasts in subdirectories in the something_blue directory. Here's the tar command.
Example a: sudo tar -cv --tape-length=20000000 --file=somethingblue-a-{0..50}.tar /home/larry/delta/something_blue/a* This is for the letter a, so the --file parameter includes the letter a. The numbers 0..50 in the curly brackets are the sequence numbers for the files. I only had one file for the letter a, somethingblue-a-0.tar. The last parameter is the source for the tar files, in this case /home/larry/delta/something_blue/a*, that is, all of the files and directories in the something_blue directory that start with the letter a. You may want to change the --tape-length parameter. As listed it stores up to 19.1 GB per volume. The maximum capacity of a 25 GB Blu-ray is 23.3 GB for data storage. Example b: For the letter b, I ended up with three tar files: somethingblue-b-0.tar, somethingblue-b-1.tar, and somethingblue-b-2.tar. I will use these files in the example below, using tarcat to combine them. I use K3b to burn Blu-ray data discs. Besides installing K3b you have to install some other programs, and then there is a particular setup that needs to be done, including selecting cdrecord and no multisession. Here's an excellent article that goes step by step through the installation and setup. How to burn Blu-ray discs on Ubuntu and derivatives using K3b? https://en.ubunlog.com/how-to-burn-blu-ray-discs-on-ubuntu-and-derivatives-using-k3b/ I also always check Verify data, and I use the Linux/Unix file system, not Windows, which will rename your files if the filenames are too long. I installed a Blu-ray reader in the primary system and used Thunar to copy the files from the Blu-ray Disc to the boot drive. In the primaryfiles directory I make a subdirectory, something_blue, to hold the archived shows. If there is only one file, as in example a above, you can skip the concatenation step. If there is more than one file, as in example b above, you use tarcat to concatenate the files into one tar file. You have to do this.
If you try to extract from just one of the numbered files when there is more than one, you will get an error. So if I try to extract from somethingblue-b-0.tar and I get an error, it doesn't mean there's anything wrong with that file. It just has to be concatenated with the other b files before it can be extracted. There is a companion program to tar called tarcat that should be used to concatenate the tar files. Here's the command I used for example b, above: tarcat somethingblue-b-0.tar somethingblue-b-1.tar somethingblue-b-2.tar > sb-b.tar This will concatenate the three smaller tar files into one bigger tar file named sb-b.tar. In order to preserve the metadata you have to extract the files as root. To make it easier to select the files to be extracted and where to store them, I use the GUI archive manager, engrampa. To run engrampa as root, open a terminal with Ctrl-Alt-T and use this command: sudo -H engrampa Click Open and select the tar file to extract. Then follow the path until you are in the something_blue directory and you are seeing the folders and files you want to extract. Type Ctrl-A to select them all. (Instead of the something_blue directory you will go to your data directory.) Then click Extract at the top of the window. Open the directory where you want the files to go, in my case primaryfiles/something_blue. Then click Extract again in the lower right. After the files are extracted, go to your data directory in primaryfiles and check that the directories and files are where you expect them to be. You can also open a terminal in that directory and type ls -l to review the metadata. When dealing with data chunks sized 20 GB or more, each one of these steps takes time. The reason I like using an optical disc backup to transfer the files from the working system to Libre Indie Archive is that it gives me an easy-to-store backup that is not on a spinning drive and that cannot be overwritten. Still, optical disc storage is not perfect either.
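For anyone who prefers the command line to a GUI archive manager, the extraction step can also be done with tar itself. This is a sketch rehearsed on scratch data; the demo paths are illustrative stand-ins, and on the real archive you would run the extraction as root so ownership is preserved (-p restores permissions, -C sets the destination):

```shell
#!/bin/sh
# Build a tiny stand-in archive, then extract it the same way the
# concatenated sb-b.tar would be extracted. All paths are examples.
mkdir -p demo/something_blue demo/primaryfiles/something_blue
echo "show notes" > demo/something_blue/a_show.txt

# Stand-in for the tarcat output (sb-b.tar in the episode).
tar -cf demo/sb-b.tar -C demo/something_blue .

# Extract, preserving permissions; run as root to preserve ownership too.
tar -xpf demo/sb-b.tar -C demo/primaryfiles/something_blue

# Review the result, including the metadata.
ls -l demo/primaryfiles/something_blue
```

The same ls -l check described above then confirms the files and their metadata landed where you expect.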
It's just another belt to go with your suspenders.

Another way to transfer directories into the primaryfiles directory is with ssh over the network. This is not as safe as using optical discs, and it does not provide the extra snapshot backup. It also takes a long time, but it is not as labor intensive. After I spend some more time thinking about this and testing, I will do a podcast about transferring large data sets with ssh.

Although I am transferring large data sets to move them into archival storage using Libre Indie Archive, there are many other situations where you might want to move a large data set while preserving the metadata. So what I have written about tar files, optical discs, and running thunar and engrampa as root is generally applicable.

As always, comments are appreciated. You can comment on Hacker Public Radio or on Mastodon. Visit my blog at home.gamerplus.org where I will post the show notes and embed the Mastodon thread for comments about this podcast. Thanks.

Provide feedback on this episode.
Canonical's VP of Engineering for Ubuntu reveals why they're swapping coreutils for Rust-built tools. Then we break down the GNOME 48 release, and why this one is special. Sponsored By: Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices! 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps. River: River is the most trusted place in the U.S. for individuals and businesses to buy, sell, send, and receive Bitcoin. Support LINUX Unplugged. Links:
Gimp 3 is finally here, after 7, 10, 13, or 20 years of waiting, depending on who you ask. Blender 4.4 and Calibre 8 are out, Fedora 42 goes Beta, and GNOME 48 is available. Firefox finally brings back PWA, Linux 6.15 fixes a de-randomized security misfeature, and Asahi Lina has stepped back from Linux GPU development. For tips, we have the ifne command for if not empty, pw-metadata for getting and setting options in Pipewire, Lutris and Gamescope for running old Wine games on high resolution displays, and talk for old school text chatting in a terminal. You can find the show notes at https://bit.ly/41QPaBp and have a great week! Host: Jonathan Bennett Co-Hosts: Rob Campbell, Jeff Massie, and Ken McDonald Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
In the early hours of February 24, Igor Mello de Carvalho, 31, was shot by a former military police officer after being falsely accused of robbery as he left work. As a result, he spent days in the hospital and lost a kidney. But you already know that. What you don't know is that behind the tragedy, and far beyond the pain, there is a father, a son, a husband, a journalist... in short, a human being who, like all of us, has his own story to tell.
Tune in to hear:
What does “Shirtsleeves to Shirtsleeves in three generations” refer to, and why is this concept ubiquitous across many cultures?
What is the “crab in a bucket” theory, and how do we see this play out with people? Why is this called “tall poppy syndrome” in Australia and New Zealand?
What are psychological “leveling mechanisms”? What do these look like in practice?
What is the African concept of Ubuntu and what can we learn from it?
How can we find a middle ground between Individualism and Collectivism?
Links: The Soul of Wealth
Connect with Us: Meet Dr. Daniel Crosby | Check Out All of Orion's Podcasts | Power Your Growth with Orion
Compliance Code: 0650-U-25066
This week a community resource fell offline unexpectedly. Members from all over the internet banded together to restore a community resource! -- During The Show -- 01:00 Intro Noah brought the warm weather back We need your feedback Join Geeklab (https://matrix.to/#/#geeklab:linuxdelta.com) Tag Marlin 04:10 Smart Watches Original pebble inventor New pebble smartwatches available for preorder ArsTechnica (https://arstechnica.com/gadgets/2025/03/new-pebbleos-watches-with-more-battery-and-familiar-looks-are-up-for-preorder/) Steve's smartwatch use case Noah's watch Pine Time (https://pine64.com/product/pinetime-smartwatch-sealed/) Fitness Features Eric Migicovsky doesn't stick with companies AsteroidOS (https://asteroidos.org/) BangleJS (https://banglejs.com/) 28:32 News Wire GIMP 3.0 - gimp.org (https://www.gimp.org/news/2025/03/16/gimp-3-0-released/) Digikam 8.6 - digikam.org (https://www.digikam.org/news/2025-03-15-8.6.0_release_announcement/) Peertube 7.1 - joinpeertube.org (https://joinpeertube.org/news/release-7.1) Gstreamer 1.26 - freedesktop.org (https://gstreamer.freedesktop.org/releases/1.26/) KDE Frameworks 6.12 - kde.org (https://kde.org/announcements/frameworks/6/6.12.0/) End of Nouveau OpenGL Driver - itsfoss.com (https://news.itsfoss.com/mesa-zink-nvk-switch/) Debian Bookworm 12.10 - debian.org (https://www.debian.org/releases/bookworm/#:~:text=Debian%2012.10%20was%20released%20on,release%20and%20the%20Release%20Notes.) 
Ubuntu's Rust Coreutils - ubuntu.com (https://discourse.ubuntu.com/t/carefully-but-purposefully-oxidising-ubuntu/56995) GitHub Actions Hack - infoworld.com (https://www.infoworld.com/article/3847178/thousands-of-open-source-projects-at-risk-from-hack-of-github-actions-tool.html) Open Source OSV Scanner - gbhackers.com (https://gbhackers.com/google-launches-open-source-osv-scanner/) Linux Kernel Use-After-Free Vulnerability - gbhackers.com (https://gbhackers.com/poc-exploit-released-linux-kernel-vulnerability/) Kagent - thenewstack.io (https://thenewstack.io/meet-kagent-open-source-framework-for-ai-agents-in-kubernetes/) Tencent Open Source Model - bloomberg.com (https://www.bloomberg.com/news/articles/2025-03-18/tencent-touts-open-source-ai-models-to-turn-text-into-3d-visuals) Mistral Small 3.1 - techzine.eu (https://www.techzine.eu/news/applications/129697/mistral-ai-unveils-small-powerful-and-open-source-ai-model/) - venturebeat.com (https://venturebeat.com/ai/mistral-ai-drops-new-open-source-model-that-outperforms-gpt-4o-mini-with-fraction-of-parameters/) 29:50 Linuxrocks.online Outage Noah's travel story Awake for 30+ hours Wakes up to DMs, emails, online posts, etc Linux Rocks server is down Linux Rocks server grew organically Moved into Altispeed Egan MN data center SSH connected but then kicked you out Altispeed got pulled into it User wasn't in the libvertd group Organic way information spreads on the internet Glad to see people calm down after learning someone is in the hospital Linux Rocks is now monitored by LibreNMS Michael is donating a new server Thank you for polite and kind notification To those not so kind, please consider what you are getting for free Reddit post (https://www.reddit.com/r/Mastodon/comments/1jblofg/linuxrocksonline_been_down_for_nearly_48_hours/) Everything was documented 49:45 Continuity Plan Reach out to Nerd friends Interest Old laptops show up on Noah's desk Enabling people through technology -- The Extra Credit Section -- For 
links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/433) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)
Dan Corder speaks to Gabi Le Roux, renowned South African keyboardist, composer, and music producer, best known for his work on Mandoza's iconic hit “Nkalakatha”, to unpack the inclusiveness of SA's national anthem. See omnystudio.com/listener for privacy information.
Carl sits down with Mike to talk ARM on Thelio, Linux computing, a little Rust (of course), SCALE, COSMIC and more. Carl's Socials Carl on X (https://x.com/carlrichell) System76 on X (https://x.com/system76) System76 (https://system76.com) Coder's Socials Mike on X (https://x.com/dominucco) Mike on BlueSky (https://bsky.app/profile/dominucco.bsky.social) Mike's Blog (https://dominickm.com) Coder on X (https://x.com/coderradioshow) Coder on BlueSky (https://bsky.app/profile/coderradio.bsky.social) Show Discord (https://discord.gg/k8e7gKUpEp) Alice (https://alice.dev) TMB Earth Day 2025 Competition (https://dominickm.com/earth-day-25-competition/)
Tracking WiFi devices with cheap ESP32 devices, using OSM and Google Maps together, deleting your Twitter data, “3D” images with any camera, forcing Ubuntu to give you all the available updates, efficiently importing photos, counting lines of code, and more. Discoveries espargos and demo video OSM2GoogleMaps Bookmarklet Cyd twitter-defollower Cross Views About apt upgrade... Read More
This week we're talking Rust Coreutils in Ubuntu, Intel's new CEO, and the Linux performance of AMD's newest X3D powerhouse CPU. Then CrossOver releases version 25, and ReactOS and Free95 battle for Windows reimplementation supremacy. There's the Zed Editor, Audacity updates, and news from KDE! For tips we have the Pipewire pw-profiler, ifdata for network interface quick reference, and exch for atomically swapping two files. You can find the show notes at https://bit.ly/4hgT1xo and enjoy! Host: Jonathan Bennett Co-Hosts: Ken McDonald and Rob Campbell Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
In this episode, Andreea Munteanu of Canonical discusses Data Science Stack, an out-of-the-box machine learning environment solution. Emphasizing the industry's shift to Kubernetes and cloud native applications, she outlines her vision for accessible and secure open source AI. The conversation also covers the importance of community contribution, challenges faced by data scientists, and the future of AI being open source. 00:00 Introduction 01:50 Data Science Stack Introduction 03:31 Community and Collaboration 06:30 Getting Started with Generative AI 08:56 Andreea's Journey into Data Science 10:59 The Future of AI and Open Source 14:57 Encouraging Open Source Contributions 17:28 Conclusion and Final Thoughts Guest: Andreea Munteanu helps organizations drive scalable transformation projects with open source AI. She leads AI at Canonical, the publisher of Ubuntu. With a background in data science across industries like retail and telecommunications, she helps enterprises make data-driven decisions with AI.
Yeah, right. Of course. Where did THAT come from?! Werder wins 2-0 in Leverkusen, and Thomas and Jan weren't the only ones rubbing their eyes in disbelief. The Ubuntu spirit is fully back, with a team that fights selflessly and is ice-cold in front of goal... IT'S 2024 AGAIN, BABY!! And that's how fast things can change: with one good match on Saturday, Werder is suddenly right back in the mix instead of just along for the ride. Reason enough for Thomas and Jan to pump the brakes on the euphoria... Or is it?! Enjoy!
Coming up in this episode * Syncing the Notes * The History of Snaps * And How Much We Absolutely Adore Them 0:00 Cold Open 1:34 Seeking Syncthing 16:42 The History of Snaps 33:52 How'd 9 Years of Snaps Go? 1:01:54 Next Time 1:04:49 Stinger The Video Version: https://youtu.be/izDzKkuEyRw It is all about the notes. Leo goes back to basics and uses SyncThing (https://syncthing.net/) to move his markdown files around, which he edits using a standard text editor (https://code.visualstudio.com/).
There are new GPUs that are "available"! Are either NVIDIA or AMD's new offering a good deal for the Linux user? Speaking of AMD, what's up with that AMD Microcode vulnerability? Mono is back, with a dash of Wine, Ubuntu is reverting the O3 optimizations, and we say Goodbye to Skype. For tips we have mesg for controlling console messaging, and virsh for managing and live migrating your virtual machines. You can find the show notes at https://bit.ly/3FfcqkU and we'll see you next week! Host: Jonathan Bennett Co-Host: Jeff Massie Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
This week's episode turned out quite bullfighting-themed. We begin by recalling that “La vaquilla”, one of Luis García Berlanga's best films, premiered 40 years ago. Then, on the occasion of the premiere of “Tardes de soledad”, Albert Serra's film about the matador Andrés Roca Rey that won the Golden Shell at the most recent San Sebastián festival, we revisit the chapter devoted to bulls and cinema in our “Enciclopedia curiosa del cine”. Elio Castro spoke with director Albert Solé, the head of the Brain Film Festival, an event held in Barcelona from March 12 to 16 and dedicated to films about the human brain and mental health. Albert Solé is the son of Jordi Solé-Tura, one of the fathers of the Spanish Constitution, and
We're live from SCaLE this year, and a panel joins us to learn more about how Ubuntu is trying to reach the next generation. Dinner at SCaLE We're teaming up with Jupiter Broadcasting for a meetup and dinner! Come join us Saturday night! Where: El Cholo Café 300 E Colorado Blvd Suite 214 Pasadena, CA When: Saturday, March 8, 2025 7:00 PM. -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/431) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)
This show has been flagged as Clean by the host.

Maintaining The Remote System

I have renamed the project Libre Indie Archive because the name theindiearchive is already someone else's domain. I never would have renamed The Indie Archive, but I do think that Libre Indie Archive is more descriptive, hence better. I am getting close to a pre-beta push up to Codeberg. Anyone following along who wants to help test can do this with two or three old systems. Let me know. Email hairylarry@gmail.com or on Mastodon I am @hairylarry@gamerplus.org.

I have decided to develop and document for Xubuntu first, and here are the reasons why. I bought an older HP small form factor office system with 4 GB of RAM: an HP Compaq 4000 Pro, Pentium Dual-Core E6600 3.06 GHz. Thirty dollars on eBay with shipping and taxes. I was testing Libre Indie Archive on it. Because of the age of the system, Ubuntu wouldn't install. I tested it with some BSD systems and installed Indie Archive without a GUI. GhostBSD didn't install, but MidnightBSD did, so I used the MidnightBSD GUI and installed Indie Archive. None of this was easy for me because I'm a BSD newb, and unless you already use BSD I can't recommend it for Libre Indie Archive. Remember, not all indie producers are computer programmers, and I want Indie Archive to work for those producers as well as for the computer savvy.

Then on a whim I tried the Xubuntu 24.04 distro, and it installed with no problems. Thanks, XFCE, for keeping it light. The other reason I am developing and documenting for Xubuntu is that I can use the Xubuntu install document and install on Ubuntu or Debian with only minor differences. I know because I tried it. This is probably also true for other Debian- and Ubuntu-derived distributions. So, if you want to help, you could take the Xubuntu install document and see if it works on other distributions. Write down what you had to change and let me know.
I plan on making an install checklist out of the install document, and it would be great to have a checklist with the actual commands for several distributions.

So, that was the intro. Now on to the topic. I am planning on installing remotenear and remotefar systems, remotenear being a short drive away (or maybe in your home if your studio is not in your home, like mine) and remotefar further away, to avoid losing data in a regional catastrophe like a flood, fire, tornado, or hurricane. Still, even a short drive is not something I want to make any time there might be something to check on a remote system, so I have devised a way to manage it from the secondary system.

When a remote system is delivered to a new location it will be headless: no monitor, no keyboard, and no mouse. At the remote location it is plugged into a UPS, attached to the network with an Ethernet cable, and attached to the UPS with a USB cable. Then it is turned on.

Even without a keyboard or a mouse there is still some local control of the system available. As part of the remote system install, we go into the power management settings and, next to "when power button is pressed", we select shutdown. So a short press on the power button initiates a Xubuntu shutdown, just like the shutdown you get from the menu or Alt-F4. If that doesn't work, a long press of the power button will turn the system off. This is like unplugging the system or losing power and is not recommended, but Xubuntu will rebuild the file structure when the system is restarted. And if you do lose power, the UPS will send a signal to the computer, shutting it down with a controlled shutdown, just like a short press of the power button or a shutdown from the menu.

I would like to carry this one step further and enable automatic power-up for the computer. A quick search shows CyberPower's PowerPanel software for Linux. You can also set a power restore function in the BIOS to restart the system when power is restored.
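As an aside not covered in the episode: Xubuntu's power-button handling ultimately goes through systemd-logind, so you can make the short-press shutdown independent of any desktop session. This is my assumption about the setup, not the author's documented method; check your own logind.conf:

```shell
# See whether logind already has an opinion about the power button:
grep -i '^#\?HandlePowerKey' /etc/systemd/logind.conf
# Setting "HandlePowerKey=poweroff" in that file (then restarting
# systemd-logind) gives a clean shutdown on a short press even when
# no graphical session is running on the headless box.
```

(The XFCE power-manager setting described above accomplishes the same thing from within a session.)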
I just checked, and the BIOS power restore setting worked on my little HP. So, with just the power button and an attached UPS, you can get both manual and automatic control of shutting the remote system down and restarting it. Pretty cool for a rather sparse interface. If you know more about how to set this up, please let me know. There's a big jump between doing a search to see if something is possible and actually implementing it.

Okay, that was the easy part. Now for the fun part. First off, the remote system is probably not going to be at your place but at the home or business of friends or family. They probably don't have a static IP, they may not be able to set up port forwarding in their router, and they may not be able to control their firewall. So we can't just say, "I'll ssh in when I need to fix a problem." And you don't really want to change their setup anyway, because all of the above would add to their security risk. Also, their router undoubtedly hands out dynamic IP addresses, so we want the remote system to use one of those, because when we are setting it up we might not even know what subnet their LAN uses. But at the same time it makes no sense to try to maintain a remote system that you can't log into.

So, the tool for setting up a terminal session on the remote system is called a remote tunnel reverse shell. The remote system already connects to the secondary system with rsync over ssh when the cron job fires off every day to update the files. So the secondary system is running an ssh server, and the remote system has an ssh key that allows access without entering a password.

There are two parts to setting up a remote tunnel reverse shell. The secondary system has to be listening for the remote system on a port; I use port 7070. Then the remote system runs a bash command with the -i parameter, which makes the shell interactive, and with the port, 7070. I'm using nc to set up the listener.
nc -lvnp 7070

-l is --listen
-v is --verbose
-n means numeric only (no DNS lookups)
-p is --port
7070 is the port

I chose the port number 7070. You can use any available port, but the listener has to use the same port as the remote system uses in the bash call, which is this:

bash -i >& /dev/tcp/your-static-ip-from-your-isp/7070 0>&1

This is the order of events. On the secondary system I start listening:

nc -lvnp 7070

Then a script runs on the remote system:

bash -i >& /dev/tcp/your-static-ip-from-your-isp/7070 0>&1

And then a command prompt opens up in the terminal on the secondary system that's listening. You are logged into the remote system and can look around, check things out, and even move or delete files until you exit.

Except it didn't work. Of course not; nothing ever works the first time. Two other things have to be changed, which we're going to talk about now: the firewall and port forwarding. These are already discussed in install.txt, because we had to fix the firewall and port forwarding for the remote system to log into the secondary system to pick up the new files.

To set up port forwarding, log into your router from a browser attached to the router; for instance, a browser on your secondary system. Open the browser and type 192.168.1.1 into the address bar, which is right most of the time. On my setup I type 192.168.2.1 because the ISP's router uses the 192.168.1 subnet. How do I know which to use? This also is covered in install.txt, because to connect from the primary system to the secondary system I have to connect to the static IP that I assigned to the secondary system. My primary system has the static IP 192.168.2.11 and my secondary system has the static IP 192.168.2.12, which allows me to ssh into the secondary system from the primary system. And this means my router is at 192.168.2.1. Your router is likely at 192.168.1.1, because that's the most common LAN subnet.
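Rather than guessing between 192.168.1.1 and 192.168.2.1, you can ask the system which gateway it is actually using. This is a general Linux tip, not from the episode:

```shell
# Print the default gateway, which is almost always the router's address:
ip route | awk '/^default/ {print $3}'
```

On a LAN laid out like the author's, this would print 192.168.2.1; on most home networks it prints 192.168.1.1.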
Anyway, in the browser I open the router's control console and then I have to enter the password. If you don't know what it is, you have to find out and write it down. Check the defaults for your router by searching on the internet; the defaults might work. If they do, change your login and password and write them down! Do not leave your router defaults in place; that's a big security risk.

After you're logged into the control console, check around in the menus for Port Forwarding. I already had to do this to make ssh work from the remote system to the secondary system. In that case I had to forward port 22 (the ssh port) from the internet to the secondary system. Here's how that works. On the remote system I type:

ssh indiearchive@your-static-ip-from-your-isp

Since it's coming in as ssh, the router sees port 22. The router checks the port forwarding table and sees that incoming traffic on port 22 should go to the secondary system, in my case 192.168.2.12. So the incoming ssh goes to the secondary system, which is my ssh server. What a coincidence.

So in order to use port 7070 to open a tunnel from the remote system to the secondary system, I have to add a row to the port forwarding table with 7070 as the port and 192.168.2.12 as the IP. (On your LAN the IP address may be different.)

Except it doesn't work. I bet you guessed why: it's the firewall. On the secondary system, type:

sudo ufw status

It should show that port 22 is allowed, because otherwise you wouldn't be getting ssh traffic. It probably won't show that port 7070 is allowed. So type:

sudo ufw allow 7070

Then check the status again and see if it shows 7070. Here's a nice firewall link with instructions: https://www.digitalocean.com/community/tutorials/ufw-essentials-common-firewall-rules-and-commands

It still might not work, even though it should. Why? Operator error. You may have typed 7000 instead of 7070 (I did that), or made any other little typo in any of the commands.
When this works, you are ready to test the reverse shell. The remote system can ssh into the secondary system, and we have added port 7070 to the port forwarding table on the router and to the firewall on the secondary system. This is great! But how do I know when to listen, and how do I get the remote system to issue the bash command that sets up the reverse shell? Remember, in the future the remote system is going to be sitting somewhere with no monitor, keyboard, or mouse. Only computer programmers are required to remember the future.

After all that setup, here's the clever bit. I have a text file on the secondary system named letmein.txt, and it's a flag with two values: the file reads either yes or no. If it reads yes, it means I'm here at the secondary system and I want to log into the remote system. If it reads no, not so much; I'm not trying to log into the remote system at all.

The remote system has ssh access to the secondary system, since that's how it picks up the new files with rsync over ssh. So the remote system can use rsync to copy the letmein.txt file over to its hard drive. And it does this every five minutes, with a cron job. On the remote system, type sudo -s to become root, then crontab -e to edit the root crontab, and add this line:

*/5 * * * * /home/indiearchive/check.sh

Every 5 minutes the remote system runs check.sh, which grabs the letmein.txt file and checks whether it says yes or no. If it says yes, it starts the reverse shell, assuming I remembered to start listening on port 7070 on the secondary system.

After I'm done working on the remote system while sitting at the secondary system, I type exit to close the remote terminal and come back to the terminal on the secondary system. If I forgot to do something, I can start listening again; but if I'm done, I edit letmein.txt to say no, and the remote system will quit trying to set up a reverse shell every 5 minutes.

But wait! There's more: email notifications.
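The episode describes what check.sh does without walking through its code, so here is a minimal hypothetical sketch of a script along those lines. The paths and user name are assumptions, and the address placeholder is the same one used throughout these notes:

```shell
#!/bin/bash
# check.sh (sketch, not the author's actual script):
# pull the flag file, and phone home if it says yes.
FLAG=/home/indiearchive/letmein.txt
SECONDARY=your-static-ip-from-your-isp

# Copy letmein.txt from the secondary system over the existing ssh setup.
rsync -q "indiearchive@${SECONDARY}:letmein.txt" "$FLAG"

# -x matches the whole line, so stray text around "yes" doesn't count.
if grep -qx yes "$FLAG"; then
    # Reverse shell back to the listener on port 7070.
    bash -i >& "/dev/tcp/${SECONDARY}/7070" 0>&1
fi
```

Run from root's crontab every five minutes as described above, this is the whole trigger mechanism: flip the flag file on the secondary system, and the remote system connects on its next pass.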
I set up email notifications with MailerSend for file integrity reports, using curl. To do that I wrote a script called send.sh that takes a file name as an argument and sends me an email with the contents of the file in the body. So when I run my file integrity program, if the log files are larger than they should be, it means there is a discrepancy, and that log file gets emailed to me so I can check things out (maybe with my remote tunnel reverse shell). I also check disk space with df and send a disk space report.

Using send.sh, when check.sh detects a yes in letmein.txt, I call send.sh with letmein.txt as the parameter, and I get an email that says yes, meaning the remote system is trying to set up a reverse shell. So if I change letmein.txt to yes on the secondary system and wait five or ten minutes without getting notified, I may just have to make a call. Maybe the nice people hosting my remote system have lost power. Or internet. Or maybe they will have to push a button. If that doesn't work, I may have to make a trip. I hope it's remotenear and not remotefar.

When I was testing the email notifications part of check.sh and fiddling around with the code, all of a sudden I quit getting notifications at all. I learned a lot about bash scripting trying to figure out what I did wrong, and it turned out it wasn't me. After I sent myself numerous emails saying yes from a weird email address, Gmail decided they were spam. So I went into my spam folder and marked the notification email as not spam. That fixed it for me, but if you are setting up email notifications for Libre Indie Archive, or for anything, be sure you whitelist the email address so that the email powers that be don't suddenly decide your notifications are spam and you quit getting important notifications. In Gmail you set up a filter entry with the notifier's email address and set the action to "Never send it to Spam". Because getting these emails is important.
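For the same reason, here is a hypothetical sketch of a send.sh along the lines described. The MailerSend endpoint and JSON shape are my reading of their public HTTP API, and the addresses and token variable are placeholders; verify everything against MailerSend's own documentation (the author's redacted script is in the show notes):

```shell
#!/bin/bash
# send.sh (sketch, not the author's actual script):
# email the contents of the file named in "$1" via the MailerSend API.
FILE="$1"

# Build the JSON with jq so newlines and quotes in the file are escaped safely.
PAYLOAD=$(jq -n --arg body "$(cat "$FILE")" \
  '{from: {email: "archive@example.org"},
    to:   [{email: "you@example.org"}],
    subject: "Libre Indie Archive notification",
    text: $body}')

curl -s -X POST https://api.mailersend.com/v1/email \
  -H "Authorization: Bearer $MAILERSEND_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD"
```

Usage would be, for example: ./send.sh letmein.txt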
First, these notification emails remind me to have the secondary system listen. Then they remind me to change letmein.txt from yes back to no after I'm done with the remote terminal. And while you're changing letmein.txt to no, make sure the listener is off; leaving it listening for an extended period is a security risk.

So there are a lot of little moving parts involved in this. Kind of complicated, but still fascinating. Almost done. I didn't think this would be so long, and now I'm exhausted. I am including slightly redacted and well-commented copies of check.sh and send.sh in the show notes, which will be on Hacker Public Radio and on my Delta Boogie Network-Gamer+ blog at home.gamerplus.org.

As always, I appreciate your comments. Thanks.

Provide feedback on this episode.
In this week's episode, we continue our discussion of how seeking prestige can be dangerous for writers, specifically in the form of traditional publishing and the New York Times Bestseller list. This coupon code will get you 50% off the audiobook of Dragonskull: Shield of the Knight, Book #2 in the Dragonskull series (as excellently narrated by Brad Wills), at my Payhip store: DRAGONSHIELD50. The coupon code is valid through March 21, 2025. So if you need a new audiobook for spring, we've got you covered! TRANSCRIPT 00:00:00 Hello, everyone. Welcome to Episode 241 of The Pulp Writer Show. My name is Jonathan Moeller. Today is February 28th, 2025. Today we are continuing our discussion of how to escape the trap of prestige for writers, specifically traditional publishing and The New York Times Bestseller List. Before we get to our main topic, we will do Coupon of the Week, an update on my current writing and audiobook projects, and then Question of the Week. This week's coupon code will get you 50% off the audiobook of Dragonskull: Shield of the Knight, Book Two in the Dragonskull series (as excellently narrated by Brad Wills), at my Payhip store. That coupon code is DRAGONSHIELD50. As always, I'll include the coupon code and the link to the store in the show notes. This coupon code is valid through March 21st, 2025. So if you need a new audiobook as we start to head into the spring months, we have got you covered. Now an update on my current writing projects. I'm pleased to report I am done with the rough draft of Ghost in the Assembly. I came in at 106,000 words, so it'll definitely be over a hundred thousand words when it's done. I'm about 20% of the way through the first round of edits, so I am confident in saying that if all goes well and nothing unexpected happens, I am on track to have it out in March.
I am also 10,000 words into Shield of Battle, which will be the fifth of six books in the Shield War series and I'm hoping to have that out in April, if all goes well. In audiobook news, recording for both Cloak of Dragonfire and Orc-Hoard is done. I'm just waiting for them to get through the processing on the various stores so they're available. There is also an audiobook edition of Half Elven Thief Omnibus One and Cloak Mage Omnibus Three that hopefully should be coming in March. More news with that to come. 00:01:55 Question of the Week Now let's move on to Question of the Week. Question of the Week is intended to inspire interesting discussions of enjoyable topics. This week's question: what is your favorite subgenre of fantasy, high fantasy, epic fantasy, sword and sorcery, historical fantasy, urban fantasy, LitRPG, cultivation, or something else? No wrong answers, obviously. Cindy says: Epic fantasy or those with a good history for that world. The Ghost Series are fantastic at this. Thanks, Cindy. Justin says: I enjoy all those sub-genres, if they are done well. In times past I would've said comic fantasy, but that is because Terry Pratchett at his best was just that good. Mary says: High fantasy. Surabhi says: I'd honestly read anything fantasy that's written well and has characters I'm attached to, given that it's not too gritty. Bonus points if there's humor! Also, I love your books so much and they're the perfect blend of fantasy, adventure, and characters. Your books were what really got me into Sword and Sorcery. Thanks, Surabhi. Matthew says: See, that's difficult. I love my sabers, both light and metal. I would say urban fantasy crosses the boundary the most. If it's a captivating story, it will be read. John F says: I can't choose one- Lord of the Rings or LWW, The Inheritance Cycle, The Dresden Files, Caina, Ridmark, or Nadia. I think what draws me is great characters who grow. The setting/genre is just the device. 
That's why I keep coming back to your books. You create great characters. Thanks, John F. John K says: I think I'm partial to historical fantasy. I enjoy all genres, but when I think of my favorites, they tend to be derivations of historical settings. Think Guy Gavriel Kay or Miles Cameron. That said, I was weaned on Robert E. Howard, Fritz Leiber, Michael Moorcock, Karl Edward Wagner, Jack Vance, so a strong sword and sorcery second place. Juana says: High fantasy. Belgariad, Tolkien, dragons, et cetera. Jonathan says: Sword and sorcery in space! Prehistoric sword and sorcery, sword and sorcery always. Quint says: Sword and sorcery! Michael says: Sword and sorcery. For myself, I think I would agree with our last couple of commenters and it would be sword and sorcery. My ideal fantasy novel has a barbarian hero wandering from corrupt city state to corrupt city state messing up the business of some evil wizards. I'm also very fond of what's called generic fantasy (if a fighter, a dwarf, an elf, and a wizard are going into a dungeon and fighting some orcs, I'm happy). 00:04:18 Main Topic of the Week: Escaping the Prestige Trap, Part 2 Now onto our main topic for the week, Escaping the Prestige Trap, Part 2, and we'll focus on traditional publishing and the New York Times Bestseller List this week. As we talked about last week, much of the idea of success, especially in the United States, is based on hitting certain milestones in a specific order. In the writing world, these measures of success have until fairly recently been getting an MFA, finding an agent, getting traditionally published, and hitting The New York Times Bestseller List. Last week we talked about the risks of an MFA and an agent. This week, we are going to talk about two more of those writing markers of prestige, getting traditionally published and having a book land on The New York Times Bestseller List. Why are they no longer as important? What should you devote your energy and focus to instead? 
So let's start by looking at getting traditionally published. Most writers have dreamed of seeing their book for sale, and for a long time traditional publishing was the only route to that goal. Until about 15 years ago, traditional publishing was the way the majority of authors made their living. Now big-name authors like Hugh Howey, Andy Weir, and Colleen Hoover have had success starting as self-published authors (or, in the case of Sarah J. Maas and Ali Hazelwood, as fan fiction authors) and then received traditional publishing deals for their self-published works. It's proof that self-publishing is no longer a sign that the author isn't good enough to be published traditionally. Before the rise of the Kindle, there was a common belief that if you were self-published, it was because you were not good enough to get traditionally published. That was part of the pernicious belief that traditional publishing was a meritocracy, when in fact it tended to be based on who you knew. But that was all 15 years ago, and now we are well into the age of self-publishing. So why do authors still want to be traditionally published when, in my frank opinion, self-publishing is the better path? Well, I think there are three main reasons. The first reason authors say they want to be traditionally published is to have someone else handle the marketing and the advertising. They don't realize how meager marketing budgets and staffing support are, especially for unknown authors. Many traditionally published authors are handling large portions of their own marketing and hiring publicists out of their own pocket because publishers are spending much less on marketing. The new reality is that traditional publishers aren't going to do much for you as a debut author unless you are already a public figure. Even traditionally published authors are not exempt from having to do their own marketing now. 
James Patterson set up an entire company himself to handle his marketing. To be fair to James Patterson, his background was in advertising before he came into publishing, so he wasn't exactly a neophyte in the field. But you see more and more traditionally published authors who you would think would be successful growing discontented with the system and starting to dabble in self-publishing, or looking at alternative publishers like Aethon Books and different publishing arrangements, because the traditional system is just so bad for writers. The second main reason authors want to be traditionally published is that they want to avoid the financial burden of publishing. This is an outdated way of thinking. The barrier to publishing these days is not so much financial as it is knowledge. In fact, I published a book entirely using free open source software in 2017 just to prove that it could be done. It was Silent Order: Eclipse Hand, the fourth book in my science fiction series. I wrote and edited it in LibreOffice on Ubuntu, did the formatting on Ubuntu, and did the cover in GIMP, which is a free and open source image editing program. This was all free software, so I didn't have to pay for the programs. Obviously I had to pay for the computer I was using and the Internet connection, but in the modern era, having an internet connection is in many ways almost a requirement, so that's a cost you would be paying anyway. The idea that you must spend tens of thousands of dollars on formatting, editing, covers, and marketing comes from scammy self-publishing services (self-publishing, much like traditional publishing, has more than its fair share of scams) or from people who aren't willing to take the time to learn these skills and just want to cut someone a check to solve the problem. There are many low-cost and effective ways to learn these skills, and resources designed specifically for authors. 
People like Joanna Penn have free videos online explaining how to do this, and as I've said, a lot of the software you can use to self-publish is either free or low cost, and you can get some very good programs like Atticus or Vellum or Jutoh for formatting ebooks at very low cost. The third reason writers want to be traditionally published is that many believe they will get paid more this way, which, unless you are in the top 1% of traditionally published authors, is very wrong. Every so often there's a study bemoaning the fact that most traditionally published books will only sell about $600 worth of copies, and that is true of a large percentage of them. Traditional publishers typically pay a lump sum called an advance, and then royalties based on sales. An average advance is about the same as two or three months of salary from an office job, and so not a reflection of the amount of time it typically takes most authors to finish a book. Most books do not earn out their advance, which means the advance is likely to be the only money the author receives for the book. Even well-known traditionally published authors are not earning enough to support themselves as full-time authors. So as you can see, all three of these reasons put a lot of faith in traditional publishers, faith that seems increasingly unnecessary or downright misplaced. I think it is very healthy to get rid of the idea that good writing comes only from traditional publishers and that the prestige of being traditionally published is the only way you'll be accepted as a writer or be able to earn a living as a full-time writer. I strongly recommend that people stop thinking that marketing is beneath them as authors or too difficult to learn. Whether you are indie or tradpub, you are producing a product that you want to sell, and thus you are a businessperson. The idea that only indie authors have to sell their work is outdated. 
The sooner you accept this reality, the more options you will have. Self-publishing and indie publishing are admittedly more work. However, the benefits are significant. Here are five benefits of self-publishing versus traditional publishing. The first advantage of self-publishing is that you have complete creative control. You decide what the content of your book will be; you decide what the cover will be. If you don't want to make the covers yourself or don't want to learn how, you can very affordably hire someone to do it for you, and they will make the cover exactly to your specifications. You also have more freedom to experiment with cross-genre books. As I've mentioned before, publishers really aren't fans of cross-genre books until they make a ton of money, like the new romantasy trend. Traditional publishing is very trend-driven and cautious. Back in the 2000s, before I gave up on traditional publishing and discovered self-publishing, I would submit to agents a lot. Agents all had guidelines for fantasy saying that they didn't want to see stories with elves and orcs and dwarves and other traditional fantasy creatures, because they thought that was passé. Well, when I started self-publishing, I thought, I'm going to write a traditional fantasy series with elves and orcs and dwarves and other traditional fantasy creatures just because I can, and Frostborn has been my bestselling series of all time in the time I've been self-publishing. So you can see the advantages of having creative control. The second advantage is that you can control the marketing. Tradpub authors often sign contracts requiring that their social media and website content be approved by the publisher before posting. They may even be given boilerplate or pre-written things to post. In self-publishing, you have real-time data to help you make decisions and adjust ads and overall strategy on the fly to maximize revenue. 
For example, if one of your books is selling unusually well on Google Play, it's time to adjust your BookBub ads to focus on that platform instead of Amazon. You can also easily change your cover, your blurb, and so forth after release. I've changed the covers of some of my books many times trying to optimize them for increased sales, and that is nearly impossible to do with traditional publishing. In fact, Brandon Sanderson gave a recent interview where he talked about how the original cover of his Mistborn book was so unrelated to the content of the book that it almost sank the book, and hence his career. You also have the ability to run ad campaigns as you see fit, not just at the initial launch as tradpub does. For example, in February 2025 I've been heavily advertising my Demonsouled series even though I finished writing that series back in 2013, and I've been able to increase sales and derive a significant profit from those ads. A third big advantage is that you get a far greater share of the profits. On most of the stores, if you price an ebook between $2.99 and $9.99 (prices are USD), you will get 70% of the sale price, which means if you sell an ebook for $4.99, you're probably going to get about $3.50 per sale (depending on currency fluctuations and so forth). That is vastly more than you would get from any publishing contract. You also don't have to worry about the publisher trying to cheat you out of royalties. (We talked about an agency stealing money last episode.) Every platform you publish your book on, whether Amazon, Barnes & Noble, Kobo, Google Play, Smashwords, or Apple, will give you a monthly spreadsheet of your sales, and you can look at it for yourself and see exactly how many books you sold and exactly how much money you're going to get. I have only very rarely seen traditional publishing royalty statements that are as clear and have as much data in them as a spreadsheet from Google Play or Amazon. 
A fourth advantage is that you don't have to worry about publishers abandoning you mid-series. In traditional publishing there is what's called the Publishing Death Spiral: let's say an author is contracted to write a series of five books. The author writes the first book and it sells well. Then the author publishes the second book, and it doesn't sell quite as well, but the publisher is annoyed enough by the decrease in sales that they drop the writer entirely and don't finish the series. This happens quite a bit in the traditional publishing world, and you don't have to worry about that in indie publishing because you can just publish as often as you want. If you're not happy with the sales of the first few books in a series, you can change the covers, try ad campaigns, and use other strategies. Finally, you can publish as often as you want and when you want. In traditional publishing there is often a rule of thumb that an author should only publish one book a year under their name. Considering that last year I published 10 books under my name, that seems somewhat ridiculous, but it's a function of the fact that traditional publishing has only so much capacity, and the pieces of the machine involved are slow and not very responsive. With self-publishing, you have much more freedom, and everything involved is much more responsive. There are no artificial deadlines, so you can take as long as you want to prepare a book, and if the book is ready, you don't have to wait a year to put it out because it would mess up the publisher's schedule. So what should you do instead of chasing traditional publishing? Learn about self-publishing, especially about scams and bad deals related to it. Publish your own works via platforms such as KDP, Barnes & Noble Press, Kobo Writing Life, Apple Books, Google Play, Smashwords, and possibly your own Payhip and/or Shopify store. Conquer your fear of marketing and advertising. 
Even traditionally published authors are shouldering more of this work and paying out of their own pockets to hire someone to do it, and if you are paying your own marketing costs, you might as well self-publish and keep a greater share of the profits. The second half of our main topic, another potential risk of prestige, is getting on The New York Times Bestseller List. I should note that I suppose someone could accuse me of sour grapes here, saying: oh, Jonathan Moeller, you've never been on The New York Times Bestseller List, you must just be bitter about it. That is not true. I do not want to be on The New York Times Bestseller List. What I would like to be is a number one Amazon bestseller. Admittedly that's unlikely, but a number one Amazon bestseller would make a lot more money than a number one New York Times bestseller, though because of the way the list works, if you are a number one Amazon bestseller, you might be a New York Times Bestseller, but you might not. Let's get into that now. Many writers have the dream of seeing their name on The New York Times Bestseller List. One self-help guru wrote about “manifesting” this milestone for herself by writing out the words “My book is number one on The New York Times Bestseller List” every day until it happened. Such is the mystique of this milestone that many authors crave it as a necessity. However, the list has seen challenges to its prestige in recent years. The thing that shocks most people when they dig into the topic is that the list is not an objective ranking based on the raw number of books sold. The list is “editorial content,” and The New York Times can exclude, include, or rank the books on the list however they choose. What it does not capture are perennial sellers and classics. For example, the Bible and the Quran are obviously some of the bestselling books of all time, but you won't see editions of the Bible or the Quran on The New York Times Bestseller List. 
The same goes for textbooks and classroom materials: I guarantee there are textbooks that are standards in their field that would be on the bestseller list every year, but they're not, because The New York Times doesn't track them. Also excluded are ebooks available only from a single vendor, such as Kindle Unlimited books; ebook sales from non-reporting vendors such as Shopify or Payhip; reference works, including test prep guides (I guarantee that when test season comes around, the ACT and SAT prep guides and the GRE prep guides sell a lot of copies); and coloring books and puzzle books. It would be quite a blow to the authors on the list to realize that if these excluded works were counted, they would in all likelihood be consistently below To Kill a Mockingbird, SAT prep books, citation manuals, Bibles and other religious works, and coloring books about The Eras Tour. Publishers, political figures, religious groups, and anyone with enough money can buy their way onto the list by purchasing their books in enormous quantities. In fact, it's widely acknowledged in the United States that this is essentially a legal form of bribery, and a bit of money laundering too, where a publisher will give a truly enormous advance to a public figure or politician that they like, and that advance is essentially a payment to that public figure in the totally legal form of an enormous book advance that isn't going to earn out. Because this happens with such frequency, The New York Times gave in to the pressure to acknowledge titles suspected of this strategy with a special mark next to them on the list. However, these books remain on the list and can still be called New York Times Bestsellers. Since the list is not an objective marker of sales, and certainly not some guarantee of quality, why focus on making it there? 
I think trying to get your book on The New York Times Bestseller List would be an enormous waste of time, since the list is fundamentally an artificial construction that doesn't reflect sales reality very well. So what can you do instead? Focus on raw sales numbers and revenue, not lists. Even Amazon's bestseller category lists involve a certain number of non-quantitative factors. In the indie author community, there's a saying, Bank not Rank, which means you should focus on how much revenue your books are actually generating instead of whatever sales rank they hold on whatever platform. I think that's a wiser way to focus your efforts. If you're interested in what's selling or in industry trends, you can use lists like those from Publishers Weekly instead. Those too can be manipulated, and they use only a fairly small subset of data that favors retail booksellers, but they're still a more objective measure than The New York Times. I suppose, in the end, you should try to focus on ebook and writing activities that will bring you actual revenue or satisfaction rather than chasing the hollow prestige of things like traditional publishing, agents, MFAs, and The New York Times Bestseller List. So that is it for this week. Thank you for listening to The Pulp Writer Show. I hope you found the show useful. A reminder that you can listen to all back episodes at https://thepulpwritershow.com. If you enjoyed the podcast, please leave a review on your podcasting platform of choice. Stay safe and stay healthy, and see you all next week.
This is our unabridged interview with Mpho Tutu van Furth. What does it mean to ask someone for forgiveness? The experience after Apartheid in South Africa has much to teach us. “In English, you say, ‘I'm sorry, forgive me.' It's all about me,” says Mpho Tutu van Furth, daughter of the late Desmond Tutu. But in the South African language of Xhosa, “You say ndicela uxolo, which means ‘I ask for peace.' And that's a very different thing than ‘forgive me.'” In this episode, explore the deep impact of apartheid in South Africa, the meaning of true forgiveness, and the profound philosophy of Ubuntu. Discover how Mpho carries on her father's legacy of peace and reconciliation while navigating her own journey as an Episcopalian priest and social activist. This heartfelt and enlightening conversation delves into the courage required to love, forgive, and build a just community. Show Notes Resources mentioned this episode: The Desmond & Leah Tutu Legacy Foundation Forgiveness and Reparation: The Healing Journey by Mpho Tutu The Book of Forgiving by Desmond Tutu and Mpho Tutu Truth and Reconciliation Commission of South Africa Similar NSE episodes: Azim Khamisa: Ending Violence Through Forgiveness Forgiving My Mother's Murderer: Sharon Risher Pádraig Ó Tuama: A Poet's Work in Peace and Reconciliation PDF of Lee's Interview Notes Transcript of Abridged Episode Want more NSE? JOIN NSE+ Today! Our subscriber-only community with bonus episodes designed specifically to help you live a good life, ad-free listening, and discounts on live shows Subscribe to episodes: Apple | Spotify | Amazon | Google | YouTube Follow Us: Instagram | Twitter | Facebook | YouTube Follow Lee: Instagram | Twitter Join our Email List: nosmallendeavor.com See Privacy Policy: Privacy Policy Amazon Affiliate Disclosure: Tokens Media, LLC is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program… Learn about your ad choices: dovetail.prx.org/ad-choices
Ubuntu is an ancient African word meaning “humanity to others.” It is often described as reminding us that “I am what I am because of who we all are.” It is a traditional African philosophy that emphasizes the interdependence of all people and the importance of community. Core values of Ubuntu are: Compassion: expressing compassion for others; Reciprocity: treating others as you would want to be treated; Dignity: valuing the dignity of all people; Humanity: showing humanity to others; Mutuality: working together for the benefit of the community. Upenyu Majee and Halla Jones are working to establish the Institute for Ubuntu Thought and Practice (IUTP) at Michigan State University.

Conversation Highlights:
(0:32) – Upenyu, what's your background, and what attracted you to MSU?
(1:53) – Halla, what brought you to MSU?
(2:44) – Say more about the Ubuntu Dialogues Project that initially brought you two together.
(4:24) – How did the project evolve into the institute?
(6:02) – What is the mission of the IUTP?
(11:04) – What is the change you would like to see in the world today and how can Ubuntu help us get there?
(13:47) – Why aren't we there yet? The concept of Ubuntu sounds so good. How and why are our lived experiences important to understand? “We listen to understand.”
(21:12) – How is Ubuntu strategic and deeply necessary?
(23:42) – What would you like us to keep in mind about the IUTP?
(27:33) – How would you like citizens to get involved with IUTP? How do we get others to see themselves in the institute?

Listen to “MSU Today with Russ White” on the radio and through Spotify, Apple Podcasts, and wherever you get your shows.
Bill commits to running MX Linux for a year and has issues with Ubuntu-based distros. We discuss Linux drivers, the Cosmic desktop, the Wayland display server, gaming on Linux, and much, much more.

Episode Time Stamps
00:00 Going Linux #464 · 2024 Year End Review
04:44 Bill commits to running MX Linux for a year
07:57 Bill has issues with Ubuntu based distros
17:44 Some Linux driver maintainers de-listed
21:23 New file system accepted - no bovine intervention
25:31 Good news for team green - Nvidia
28:08 The Cosmic desktop from System76 is making great progress
30:18 What's going on with Mozilla? 30% layoffs?
34:38 The Raspberry Pi Foundation has been busy
37:00 Wayland display manager on Fedora and Ubuntu
42:40 RISC
44:43 Faster installs
46:39 HEIC - HEIF image support in Linux
50:14 Linux kernel cadence changed
54:25 Better gaming for everyone
56:34 Gnome feature fest
58:02 Ubuntu's anniversary flourishes
61:05 Wayland: All the cool kids are doing it
63:09 Ubuntu's desktop security center
65:00 Ubuntu app center can install .deb packages
68:50 Advances in gaming on Linux
69:42 Steam Deck uses Arch Linux
71:39 Fedora desktops galore
74:57 Intel has problems with 13th and 14th gen chips
76:60 goinglinux.com, goinglinux@gmail.com, +1-904-468-7889, @goinglinux, feedback, listen, subscribe
77:55 End
Content Warning: suicide attempt

About This Episode
Shola Richards, the innovative CEO of Go Together Global, speaker, and author, brings a fresh perspective on being bold. Shola reveals how seemingly small acts, like setting personal boundaries, can trigger profound personal growth and transformation. He candidly shares his own experiences with toxic workplace environments, illustrating the bold decisions and inner strength needed to reclaim one's mental well-being and authenticity. Shola also delves into the significance of fostering psychological safety and civility in the workplace, rooted in the philosophy of Ubuntu and his Go Together movement. By creating spaces that honor both our commonalities and differences, we can pave the way for genuine collaboration and mutual respect. With practical strategies for engaging with opposing views, Shola inspires us to embrace discomfort as a catalyst for growth, urging us to take bold steps toward a more inclusive and respectful world.

About Shola Richards
Shola Richards is an international keynote speaker, author, and suicide survivor who has deep expertise about, and firsthand experience with, the dangers of toxic incivility. Lovingly nicknamed “Brother Teresa,” Shola has shared his transformative message of civility on three different continents, on major media platforms such as CBS This Morning, with top organizations (such as Microsoft, Google, and WebMD), on the TEDx stage, and even on Capitol Hill, where he was invited to testify in front of the House of Representatives for two hours about how to bring more civility to Congress (and he'll be the first to tell you that they need a refresher course). Shola's ideas are known to be extremely practical, deeply researched, highly inspirational, and readily applicable to people from all walks of life. 
Additional Resources
Web: sholarichards.com
Instagram: @sholarichards
LinkedIn: @SholaRichards
Support the show

Stay Connected
www.leighburgess.com
Watch the episodes on YouTube
Follow Leigh on Instagram: @theleighaburgess
Follow Leigh on LinkedIn: @LeighBurgess
Sign up for Leigh's bold newsletter