Mitchell Hashimoto, founder of HashiCorp, joins us to talk about his latest project, Ghostty: a terminal that's fast, feature-rich, and truly cross-platform. Mitchell shares the vision behind Ghostty, its architecture built around the libghostty core, how it's tackling long-standing limitations in terminal emulation, and why features like Quake-style dropdowns matter more than you'd think. We also explore how the community can get involved and the future potential of Ghostty as a platform for modern text-based applications. Tune in to explore the innovative features that make Ghostty a standout cross-platform terminal!
Links
https://mitchellh.com
https://x.com/mitchellh
https://hachyderm.io/@mitchellh
https://github.com/mitchellh
https://www.linkedin.com/in/mitchellh
https://ghostty.org/docs/about
We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).
Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!
What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr).
Special Guest: Mitchell Hashimoto.
Mitchell Hashimoto, famously the founder of HashiCorp (creators of Terraform, Vault, etc.), joins the show to discuss his latest open-source project, Ghostty, a modern terminal emulator. We discuss:
- Designing dev tools with a focus on human experience.
- Taking on large technical projects and breaking them down into achievable steps.
- Open source sustainability and the role of financial support.
- The impossible goal of building a perfect human experience with software.
- Passion and hiring: why obsession with a topic often leads to the best hires.
- Using AI as a developer and why Mitchell considers AI tooling essential.
- The motivation behind Ghostty and the idea of "technical philanthropy."
- The vision for libghostty as a reusable terminal core for other applications.
This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign-On and audit logs. https://workos.com/
Links:
Ghostty (https://ghostty.org/)
Mitchell Hashimoto on Twitter (https://twitter.com/mitchellh)
Mitchell's blog (https://mitchellh.com/)
Arnaud and Emmanuel discuss Java versions, give an overview of the WebAssembly ecosystem, cover the new Model Context Protocol, talk about observability (notably wide events), and plenty of other things. Recorded January 17, 2025. Download the episode: LesCastCodeurs-Episode-321.mp3, or watch the video on YouTube.
News
Languages
Java trends, per InfoQ https://www.infoq.com/articles/java-trends-report-2024/
- Java 17 has finally overtaken 11 and 8 (~30/33%)
- Java 21 is at 1.4%
- Commonhaus makes an appearance
- GraalVM reaches the early majority
- Spring AI and langchain4j are among the innovators
- Spring Boot 3 adoption keeps rising
A good summary of WebAssembly and its various specs, such as Wasm GC, WASI, WIT, etc. https://2ality.com/2025/01/webassembly-language-ecosystem.html
- WebAssembly (Wasm) is a binary instruction format for a stack-based virtual machine, enabling portable, efficient code.
- Wasm evolved from asm.js, a subset of JavaScript that could run at near-native speeds.
- WASI (WebAssembly System Interface) lets Wasm run outside web browsers, providing APIs for the filesystem, CLI, HTTP, and more.
- The WebAssembly component model enables interoperability between Wasm languages using WIT (Wasm Interface Type) and a canonical ABI.
- Wasm components consist of a core module plus WIT interfaces for imports/exports, enabling language-independent interaction.
- WIT interfaces describe types and functions, while WIT worlds define a component's capabilities and requirements (imports/exports).
- Wasm package management is handled by Warg, a protocol for Wasm package registries.
- A survey showed Rust is the most-used Wasm language, followed by Kotlin and C++; many other languages are emerging.
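To make "binary instruction format for a stack-based virtual machine" concrete, here is a toy stack machine in Python (not Wasm itself, just a sketch of the execution model that Wasm bytecode targets):

```python
def run(program):
    """Toy stack machine: each instruction pops its operands off a stack
    and pushes its result, which is also how Wasm bytecode executes."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown op {op}")
    return stack[-1]

# (2 + 3) * 4 expressed as stack instructions
result = run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)])
print(result)  # 20
```

Real Wasm adds types, linear memory, and structured control flow on top of this core idea.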
A counting algorithm with a bounded memory footprint has been invented https://www.quantamagazine.org/computer-scientists-invent-an-efficient-new-way-to-count-20240516/
- it randomly evicts a word, with a known probability, whenever space needs to be reclaimed
- eviction happens in rounds, and the deletion probability increases with each round
- in the end, the number of words retained divided by the probability of having survived gives an approximate but fairly accurate count
Libraries
Spring contributions move from the CLA to the DCO https://spring.io/blog/2025/01/06/hello-dco-goodbye-cla-simplifying-contributions-to-spring
- at first handled manually, and even once automated, a CLA is a complex legal document that can deter contributions
- the DCO comes from the Linux kernel, I believe, and is very simple: you agree that your contribution is under the project's license and that the code is public and distributed in perpetuity
- it relies on git's -s flag for the sign-off
Writing an MCP server with Quarkus https://quarkus.io/blog/mcp-server/
- MCP is a protocol proposed by Anthropic for integrating tools that LLMs can orchestrate
- MCP is brand new and goes further than plain tools: it offers the notions of resources (files), functions (tools), and pre-built prompts for calling a tool the best way
- we'll come back to this with agents in a later article
- there is a Quarkus extension to simplify the coding
- a more detailed article on the Quarkus integration: https://quarkus.io/blog/quarkus-langchain4j-mcp/
GreenMail, a mini mail server in Java https://greenmail-mail-test.github.io/greenmail/#features-api
- useful for integration tests
- supports SMTP, POP3, and IMAP, with TLS/SSL
- offers JUnit and Spring integrations
- a mini UI and REST APIs let you interact with the server, for instance if you share it in a container (there is no existing Testcontainers integration, but it wouldn't be hard to write)
Infrastructure
Docker Bake, in a visual way
https://dev.to/aurelievache/understanding-docker-part-47-docker-bake-4p05
- Docker Bake lets you use configuration files (HCL format) to drive your image builds and docker compose
- think of this DSL as a much-simplified Makefile for docker commands, which often take a few too many parameters
Datadog keeps expanding with the acquisition of Quickwit https://www.datadoghq.com/blog/datadog-acquires-quickwit/
- an open-source log search solution that can be deployed on-premise or in the cloud https://quickwit.io/
- your logs no longer leave your environment, which helps meet security, privacy, and regulatory requirements
Web
33 concepts in JavaScript https://github.com/leonardomso/33-js-concepts
Call Stack, Primitive Types, Value Types and Reference Types, Implicit, Explicit, Nominal, Structuring and Duck Typing, == vs === vs typeof, Function Scope, Block Scope and Lexical Scope, Expression vs Statement, IIFE, Modules and Namespaces, Message Queue and Event Loop, setTimeout, setInterval and requestAnimationFrame, JavaScript Engines, Bitwise Operators, Type Arrays and Array Buffers, DOM and Layout Trees, Factories and Classes, this, call, apply and bind, new, Constructor, instanceof and Instances, Prototype Inheritance and Prototype Chain, Object.create and Object.assign, map, reduce, filter, Pure Functions, Side Effects, State Mutation and Event Propagation, Closures, High Order Functions, Recursion, Collections and Generators, Promises, async/await, Data Structures, Expensive Operation and Big O Notation, Algorithms, Inheritance, Polymorphism and Code Reuse, Design Patterns, Partial Applications, Currying, Compose and Pipe, Clean Code
Data and Artificial Intelligence
Phi-4 and small language models https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%e2%80%99s-newest-small-language-model-specializing-in-comple/4357090
- Phi-4 is an SLM, notably for local use, with 14B parameters
- a nice jump of ~20 points on an aggregate score, bringing it close to Llama 3.3 and its 70B parameters
- strong at math (synthetic dataset)
How to use Gemini 2.0 Flash Thinking (Google's chain-of-thought reasoning model) in Java with LangChain4j https://glaforge.dev/posts/2024/12/20/lets-think-with-gemini-2-thinking-mode-and-langchain4j/
- Google released Gemini 2.0 Flash, a small model in the Gemini family
- "thinking mode" simulates reasoning paths (chain of thought, etc.) and breaks complex tasks down into several subtasks much more aggressively
- an example shows the model wrestling with a problem
Anthropic's recommendations on agent systems https://www.anthropic.com/research/building-effective-agents
- defines agents vs. workflows
- recommends against starting with frameworks (LangChain, Amazon Bedrock AI Agent, etc.): the familiar abstraction debate
- many of the patterns can be implemented in a few lines without a framework
- several building blocks of increasing complexity:
- augmented LLM (RAG, memory, etc.): Anthropic notes that LLMs know how to coordinate this themselves, via MCP for example
- workflow, prompt chaining: with gates, calling LLMs in succession; favors precision over latency, since the task is decomposed into several LLM calls
- workflow, routing: classify an input and pick the best route; separation of responsibilities
- workflow, parallelization: LLMs work in parallel on a task and an aggregator synthesizes the results
- parallelization can slice the task into sections, or have several LLMs vote on the best answer
- workflow, orchestrator-workers: for when the tasks aren't bounded or known up front (say, the number of code files to change); the subtasks aren't predefined
- workflow, evaluator-optimizer: one LLM proposes an answer, another evaluates it and asks for a better one as needed
- agents: start from a command or an interaction with a human, then run autonomously, though they can come back to ask the human for clarification; agents are typically LLMs using tools to modify their environment and reacting to feedback in a loop
- ideal for open-ended problems where the number of steps isn't known in advance
- the recommendation: ramp up complexity progressively
AI isn't cheap https://techcrunch.com/2025/01/05/openai-is-losing-money-on-its-pricey-chatgpt-pro-plan-ceo-sam-altman-says/
- OpenAI says that even at $200/month, its subscriptions don't cover the associated costs… when will the AI bubble burst?
Tooling
Ghostty, a new terminal for Linux and macOS: https://ghostty.org/
- started by Mitchell Hashimoto (HashiCorp)
- Ghostty is a native terminal emulator for macOS and Linux: written in Swift using AppKit and SwiftUI on macOS, and in Zig using the GTK4 C API on Linux
- it uses native UI components and standard keyboard and mouse shortcuts
- it supports Quick Look, Force Touch, and other macOS-specific features
- Ghostty tries to provide a rich set of features useful for daily work
How Pinterest uses Honeycomb to improve its CI https://medium.com/pinterest-engineering/how-pinterest-leverages-honeycomb-to-enhance-ci-observability-and-improve-ci-build-stability-15eede563d75
- Pinterest uses Honeycomb to improve the observability of its continuous integration (CI).
- Honeycomb lets Pinterest visualize build metrics, analyze trends, and make data-driven decisions
- Honeycomb also helps Pinterest identify likely causes of build failures and streamline on-call work
- Honeycomb can also track local iOS build metrics alongside machine details, which helps Pinterest prioritize laptop upgrades for developers
Methodologies
Following our episode on the different kinds of documentation, this article covers best practices for tutorials https://refactoringenglish.com/chapters/rules-for-software-tutorials/
- write tutorials for beginners, avoiding jargon and complex terminology
- promise a clear outcome in the title and explain the goal in the introduction
- show the final result early, to reduce ambiguity
- make code snippets copy-pasteable, avoiding shell prompts and interactive commands
- use the long form of command-line flags for clarity
- separate user-defined values from reusable logic using environment variables or named constants
- spare the reader pointless chores by using scripts
- let computers evaluate conditional logic, not the reader
- keep the code in working order throughout the tutorial
- teach one thing per tutorial and minimize dependencies
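Two of the tutorial rules above (keep user-defined values apart from reusable logic; let the computer evaluate preconditions) can be sketched in Python. The variable names here are illustrative, not taken from the article:

```python
import os
import shutil

# Rule: user-defined values live in one place, apart from the reusable logic.
# PROJECT_DIR and REQUIRED_TOOL are made-up names for the sketch.
PROJECT_DIR = os.environ.get("PROJECT_DIR", "./demo-project")
REQUIRED_TOOL = os.environ.get("REQUIRED_TOOL", "git")

def check_tool(name: str) -> bool:
    """Rule: let the computer evaluate the precondition, not the reader."""
    return shutil.which(name) is not None

print(f"{REQUIRED_TOOL} available: {check_tool(REQUIRED_TOOL)}; "
      f"working in {PROJECT_DIR}")
```

A reader can override either value with an environment variable instead of hunting through the script.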
Wide events, a "new" concept in observability https://jeremymorrell.dev/blog/a-practitioners-guide-to-wide-events/ and another article https://isburmistrov.substack.com/p/all-you-need-is-wide-events-not-metrics
- the idea is to log events (JSON logs, say) with as much information as possible: the machine, the RAM, the app version, the user, the build number that produced the app, the latest PR, and so on
- this lets you filter, group by, spot visual correlations very quickly, and zoom in
- "hey, sales are down 20%" … "actually it's coming from the Android app" … "not correlated with the app version, but with the OS version it is!"
- the second article is an easy read; the first is an exhaustive guide to using the concept
Between arguing and giving it five minutes https://signalvnoise.com/posts/3124-give-it-five-minutes
- we often want to argue, that is, ask questions while already holding the answer emotionally, which produces a lot of verbiage
- instead, give the idea five minutes of thought before arguing against it
Law, society, and organization
Federal judges strike down the principle of net neutrality https://www.lemonde.fr/pixels/article/2025/01/03/les-etats-unis-reviennent-en-arriere-sur-le-principe-de-la-neutralite-du-net_6479575_4408996.html?lmd_medium=al&lmd_campaign=envoye-par-appli&lmd_creation=ios&lmd_source=default
- net neutrality is the ban on treating a packet differently depending on its sender: for example, a Netflix packet throttled versus an Amazon packet
- Donald Trump opposes this neutrality; we'll see the concrete impact in a less regulated market
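The wide-event approach described above boils down to emitting one context-rich structured log line per unit of work. A minimal Python sketch; the context fields and their values are invented for illustration:

```python
import json
import platform
import time

def emit_wide_event(name, **fields):
    """Emit one wide, context-rich event as a single JSON log line."""
    event = {
        "event": name,
        "ts": time.time(),
        "host": platform.node(),
        "os": platform.system(),
        # These would normally be injected at build/deploy time;
        # the values here are made up for the sketch.
        "app_version": "2.4.1",
        "build_number": 8912,
        "last_merged_pr": 4521,
    }
    event.update(fields)
    print(json.dumps(event))  # one line per event, ready to filter and group by
    return event

evt = emit_wide_event("checkout.completed", user_id="u42",
                      amount_eur=19.90, client_platform="android")
```

With every event carrying the same broad context, a query engine can group by any field (OS version, build number, client platform) and surface correlations without predefined metrics.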
Beginners' corner
A short article on float vs double in Java https://www.baeldung.com/java-float-vs-double
- 4 vs 8 bytes
- maximum precision of roughly 7 vs 15 significant digits
- range on the order of 10^38 vs 10^308
- performance is roughly similar, except perhaps for AI models, which sometimes favor the smaller size
- watch out for overflow and accumulated approximation errors; BigDecimal helps
Conferences
The list of conferences, from Developers Conferences Agenda/List by Aurélie Vache and contributors:
January 20, 2025: Elastic{ON} - Paris (France)
January 22–25, 2025: SnowCamp 2025 - Grenoble (France)
January 24–25, 2025: Agile Games Île-de-France 2025 - Paris (France)
February 6–7, 2025: Touraine Tech - Tours (France)
February 21, 2025: LyonJS 100 - Lyon (France)
February 28, 2025: Paris TS La Conf - Paris (France)
March 6, 2025: DevCon #24: 100% IA - Paris (France)
March 13, 2025: Oracle CloudWorld Tour Paris - Paris (France)
March 14, 2025: Rust In Paris 2025 - Paris (France)
March 19–21, 2025: React Paris - Paris (France)
March 20, 2025: PGDay Paris - Paris (France)
March 20–21, 2025: Agile Niort - Niort (France)
March 25, 2025: ParisTestConf - Paris (France)
March 26–29, 2025: JChateau Unconference 2025 - Cour-Cheverny (France)
March 27–28, 2025: SymfonyLive Paris 2025 - Paris (France)
March 28, 2025: DataDays - Lille (France)
March 28–29, 2025: Agile Games France 2025 - Lille (France)
April 3, 2025: DotJS - Paris (France)
April 3, 2025: SoCraTes Rennes 2025 - Rennes (France)
April 4, 2025: Flutter Connection 2025 - Paris (France)
April 10–11, 2025: Android Makers - Montrouge (France)
April 10–12, 2025: Devoxx Greece - Athens (Greece)
April 16–18, 2025: Devoxx France - Paris (France)
April 23–25, 2025: MODERN ENDPOINT MANAGEMENT EMEA SUMMIT 2025 - Paris (France)
April 24, 2025: IA Data Day 2025 - Strasbourg (France)
April 29–30, 2025: MixIT - Lyon (France)
May 7–9, 2025: Devoxx UK - London (UK)
May 15, 2025: Cloud Toulouse - Toulouse (France)
May 16, 2025: AFUP Day 2025 Lille - Lille (France)
May 16, 2025: AFUP Day 2025 Lyon - Lyon (France)
May 16, 2025: AFUP Day 2025 Poitiers - Poitiers (France)
May 24, 2025: Polycloud - Montpellier (France)
June 5–6, 2025: AlpesCraft - Grenoble (France)
June 5–6, 2025: Devquest 2025 - Niort (France)
June 11–13, 2025: Devoxx Poland - Krakow (Poland)
June 12–13, 2025: Agile Tour Toulouse - Toulouse (France)
June 12–13, 2025: DevLille - Lille (France)
June 17, 2025: Mobilis In Mobile - Nantes (France)
June 24, 2025: WAX 2025 - Aix-en-Provence (France)
June 25–27, 2025: BreizhCamp 2025 - Rennes (France)
June 26–27, 2025: Sunny Tech - Montpellier (France)
July 1–4, 2025: Open edX Conference 2025 - Palaiseau (France)
July 7–9, 2025: Riviera DEV 2025 - Sophia Antipolis (France)
September 18–19, 2025: API Platform Conference - Lille (France) & Online
October 2–3, 2025: Volcamp - Clermont-Ferrand (France)
October 6–10, 2025: Devoxx Belgium - Antwerp (Belgium)
October 9–10, 2025: Forum PHP 2025 - Marne-la-Vallée (France)
October 16–17, 2025: DevFest Nantes - Nantes (France)
November 4–7, 2025: NewCrafts 2025 - Paris (France)
November 6, 2025: dotAI 2025 - Paris (France)
November 7, 2025: BDX I/O - Bordeaux (France)
November 12–14, 2025: Devoxx Morocco - Marrakech (Morocco)
April 23–25, 2026: Devoxx Greece - Athens (Greece)
June 17, 2026: Devoxx Poland - Krakow (Poland)
Contact us
To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Reach us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/
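The float-vs-double points from the beginners' corner above can be reproduced in Python, whose native float is a 64-bit double like Java's; round-tripping through struct emulates Java's 32-bit float, and decimal.Decimal plays the role of BigDecimal:

```python
import struct
from decimal import Decimal

def as_float32(x: float) -> float:
    """Round-trip through 4-byte IEEE-754, emulating Java's float.
    (Python's own float is a 64-bit double, like Java's double.)"""
    return struct.unpack("f", struct.pack("f", x))[0]

print(as_float32(0.1))  # only ~7 significant digits of 0.1 survive
print(0.1)              # a double keeps ~15-16 significant digits

# Accumulated approximation error: ten additions of 0.1
print(sum([0.1] * 10) == 1.0)           # False: binary floats only approximate 0.1
print(sum([Decimal("0.1")] * 10) == 1)  # True: exact decimal (BigDecimal-style) arithmetic
```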
Mitchell Hashimoto joins the show to discuss Ghostty, the newest terminal in town. Mitchell co-founded HashiCorp, took it all the way to IPO, exited in 2023, and now he's working on a terminal emulator called Ghostty. Ghostty is set to hit 1.0 this month, so we sat down to talk through all the details.
Christian Rocha shares his experience building the TUI framework Bubble Tea, based on The Elm Architecture. We talk about Impostor Syndrome, mentors, and how he incorporates his background in design at Charm.
Thanks to our sponsor, Logistically. Email: elmtown@logisticallyinc.com.
Music by Jesse Moore. Edited by Toni Cañete. Recording date: 2024.10.10.
Guest: Christian Rocha
Show notes:
[00:00:23] Sponsored by Logistically
[00:00:41] Introducing Christian: Charm, Bubble Tea, Elm Town 79 – Inspired: Gleam with Hayleigh, Elm Town 80 – Inspired: Roc with Richard Feldman, "learning elm, so I don't need to use javascript" by bashbunni
[00:02:12] Origin story
[00:08:17] Mentor John Weir
[00:10:58] Taking The Elm Architecture to the command line
[00:13:51] John Weir's question
[00:15:48] Bubble Tea related tools: Glow, Bubbles, Lip Gloss, Gum, Soft Serve, Glamour
[00:20:23] Background in design/branding
[00:23:35] What is VHS?: VHS, asciinema
[00:26:08] How has the architecture of Bubble Tea held up?
[00:27:35] What are you excited about these days?: Mitchell Hashimoto's Ghostty, Roc
[00:32:44] Picks. Christian's picks: Go Mecha Ball, "The Great", Tampopo, Brompton. Jared's picks: Teardown, Dan Bull, Glass Animals, Bicycle Diaries by David Byrne
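For readers unfamiliar with The Elm Architecture that Bubble Tea adapts to the terminal, here is a minimal Python sketch of its model/update/view loop. The message names are made up for illustration (Bubble Tea itself is written in Go):

```python
# The Elm Architecture in miniature: immutable model, pure update, pure view.
def update(msg, model):
    """Pure function: (message, current model) -> new model."""
    if msg == "increment":
        return model + 1
    if msg == "decrement":
        return model - 1
    return model

def view(model):
    """Pure function: model -> what to draw (a string, for a TUI)."""
    return f"count: {model}"

model = 0
for msg in ["increment", "increment", "decrement"]:  # hypothetical message stream
    model = update(msg, model)
    print(view(model))  # a TEA runtime re-renders the view after every update
```

In a real framework, messages come from keypresses, timers, or I/O, and the runtime owns the loop; the application only supplies the pure update and view functions.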
Daniel Quinn weighs in on how to develop with Docker The Right Way, Mitchell Hashimoto says Ghostty will be publicly released this coming December, Kevin Li writes about the value of learning how to learn, The Browser Company moves on from Arc & the React Native team ships its new architecture.
Join Mitchell Hashimoto, the visionary behind Terraform and co-founder of HashiCorp, on an engaging journey from his college years to his groundbreaking impact on cloud resource management. Explore the pivotal moments that led to Terraform's inception, the challenges of fostering an open-source community, and how HashiCorp evolved from a passion project into an industry powerhouse. Don't miss this captivating conversation that delves into the story behind one of the most influential software tools, offering a rare insight into the mind of a true innovator.
Mitchell is a developer living in Los Angeles, CA. He co-founded HashiCorp and was part of the initial engineering team behind most of HashiCorp's products, such as Vagrant, Packer, Consul, Terraform, Vault, Nomad, Waypoint, and more. Mitchell was CEO for 4 years, CTO for 5 years, and an individual contributor for 2 years. He is passionate about flying airplanes: he's an FAA-licensed pilot and currently flies a Cirrus SF50 Vision Jet.
In this episode of Breaking Changes, Postman Head of Product-Observability Jean Yang is joined by Mitchell Hashimoto, founder of HashiCorp, to share insights into his journey and strategic decisions. From navigating remote-work challenges to fostering a chat-literate culture, Mitchell delves into HashiCorp's multi-product approach and the changes the company has pioneered in the industry. This conversation digs deep into company culture, remote-work dynamics, hiring an external CEO, and why he decided to step away. For more on Mitchell Hashimoto, check out the following: LinkedIn: https://www.linkedin.com/in/mitchellh/ Twitter: https://twitter.com/mitchellh Personal Website: https://mitchellh.com/ HashiCorp Website: https://www.hashicorp.com/ Follow Jean on Twitter/X @jeanqasaur. And remember, never miss an episode by subscribing to the Breaking Changes Podcast on your favorite streaming platform, on the company website at https://www.postman.com/events/breaking-changes/, or on Postman's YouTube channel (just hit that bell for notifications). Timestamps [00:02:18] Naming of HashiCorp. [00:04:29] Embracing programming beyond DevOps. [00:07:32] Chat literacy and communication styles. [00:13:33] Unexpected success with Vault. [00:17:13] Breaking industry changes. [00:20:23] Total market failure and pivot. [00:24:25] Making tough decisions in leadership. [00:26:29] Hard announcements remotely. [00:30:54] CEO search process insights. [00:33:46] Defining company culture. [00:36:56] Remote work challenges. [00:40:58] Following your passion over security. [00:44:36] Changing priorities in career. [00:47:04] Reversion to speed or performance. [00:52:37] Accessibility in tech industry. #BreakingChanges #Postman #HashiCorp #TechLeadership #EntrepreneurialJourney #ProfessionalGrowth #TechInnovation #StartupSuccess #BusinessInsights #EntrepreneurMindset #careersuccess #PersonalGrowthJourney
This week we have Mitchell Hashimoto, the founder of HashiCorp. We talk about where it all started and how the company and its tools have impacted the industry. We also talk about the challenges of being a founder and the importance of thinking about open source monetization early on, along with his experience as a pilot and his technical philanthropy in the terminal emulator space. https://twitter.com/mitchellh https://mitchellh.com/ https://github.com/mitchellh Origin story of HashiCorp (https://youtu.be/Y7c_twmDxQ4) Selected writing My Approach to Building Large Technical Projects (https://mitchellh.com/writing/building-large-technical-projects) Contributing to Complex Projects (https://mitchellh.com/writing/contributing-to-complex-projects) Episode sponsored By CodeCrafters (https://codecrafters.io/devtoolsfm) 40% Discount! Become a paid subscriber on our Patreon, Spotify, or Apple Podcasts for the full episode. https://www.patreon.com/devtoolsfm https://podcasters.spotify.com/pod/show/devtoolsfm/subscribe https://podcasts.apple.com/us/podcast/devtools-fm/id1566647758 https://www.youtube.com/@devtoolsfm/membership
Mitchell co-founded HashiCorp in 2012 and created many important infrastructure tools, such as Terraform, Vagrant, Packer, and Consul. In addition to being a prolific engineer, Mitchell grew HashiCorp into a multi-billion-dollar public company. We discuss:
- How to structure large projects to avoid demotivation or burnout
- The "A.P.P.L.E" framework for defusing tense situations and handling trolls
- How to decide what to work on
- Mitchell's unconventional transitions from CEO to CTO and then back to an individual contributor (IC)
- The quality that Mitchell values the most in an engineering team
Segments:
[0:14:19] Impactful lessons from working at the Apple Store in college
[0:22:26] Origin story of HashiCorp
[0:26:08] College side project that turned into Mitchell's first financial success
[0:31:25] Why infrastructure?
[0:39:50] How individual products came about
[0:44:17] Challenges of fundraising as a company with an umbrella of products
[0:48:20] Balancing being the CTO and writing code: "I didn't want to be that CTO that just produced technical debt"
[0:53:09] Transitioning from CEO to co-CTO
[0:57:26] From CTO to Individual Contributor
[1:06:03] What's next?
Show Notes:
Mitchell's blog: https://mitchellh.com/writing
The "APPLE" principle that has guided Mitchell throughout his career: https://mitchellh.com/writing/apple-the-key-to-my-success
Mitchell's Startup Banking Story
Ned Bellavance worked in the world of tech for more than a decade before joining the family profession as an educator. He joins Corey on Screaming in the Cloud to discuss his shift from engineer to educator and content creator, the intricacies of Terraform, and how changes in licensing affect the ecosystem.
About Ned
Ned is an IT professional with more than 20 years of experience in the field. He has been a helpdesk operator, systems administrator, cloud architect, and product manager. In 2019, Ned founded Ned in the Cloud LLC to work as an independent educator, creator, and consultant. In this new role, he develops courses for Pluralsight, runs multiple podcasts, writes books, and creates original content for technology vendors. Ned has been a Microsoft MVP since 2017 and a HashiCorp Ambassador since 2020. Ned has three guiding principles: embrace discomfort, fail often, and be kind.
Links Referenced:
Ned in the Cloud: https://nedinthecloud.com/
LinkedIn: https://www.linkedin.com/in/ned-bellavance/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest today is Ned Bellavance, who's the founder and curious human over at Ned in the Cloud. Ned, thank you for joining me.
Ned: Yeah, it's a pleasure to be here, Corey.
Corey: So, what is Ned in the Cloud? There are a bunch of easy answers that I feel don't give the complete story like, “Oh, it's a YouTube channel,” or, “Oh no, it's the name that you wound up using because of, I don't know, easier to spell the URL or something.” Where do you start? Where do you stop? What are you exactly?
Ned: What am I?
Wow, I didn't know we were going to get this deep into philosophical territory this early. I mean, you got to ease me in with something. But so, Ned in the Cloud is the name of my blog from back in the days when we all started up a blog and hosted on WordPress and had fun. And then I was also at the same time working for a value-added reseller as a consultant, so a lot of what went on my blog was stuff that happened to me in the world of consulting.
And you're always dealing with different levels of brokenness when you go to clients, so you see some interesting things, and I blogged about them. At a certain point, I decided I want to go out and do my own thing, mostly focused on training and education and content creation, and I was looking for a company name. And I went through—I had a list of about 40 different names. And I showed them to my wife, and she's like, “Why don't you go Ned in the Cloud? Why are you making this more complicated than it needs to be?”
And I said, “Well, I'm an engineer. That is my job, by definition, but you're probably right. I should just go with Ned in the Cloud.” So, Ned in the Cloud now is a company, just me, focused on creating educational content for technical learners on a variety of different platforms. And if I'm delivering educational content, I am a happy human, and if I'm not doing that, I'm probably out running somewhere.
Corey: I like that, and I'd like to focus on education first. There are a number of reasons that people will go in that particular direction, but what was it for you?
Ned: I think it's kind of in the heritage of my family. It's in my blood to a certain degree because my dad is a teacher, my mom is a teacher-turned-librarian, my sister is a teacher, my wife is a teacher, her mother is a teacher. So, there was definitely something in the air, and I think at a certain point, I was the black sheep in the sense that I was the engineer. Look, this guy over here.
And then I ended up deciding that I really liked training people and learning and teaching, and became a teacher of sorts, and then they all went, “Welcome to the fold.”
Corey: It's fun when you get to talk to people about the things that they're learning because when someone's learning something, I find that it's the time when their mind is the most open. I don't think that's something you get to see nearly as much once someone already, quote-unquote, “Knows a thing,” because once that happens, why would you go back and learn something new? I have always learned the most—even about things that I've built myself—by putting it in the hands of users and seeing how they honestly sometimes hold it wrong and make mistakes that don't make sense to me, but absolutely make sense to them. Learning something—or rather, teaching something—versus building that thing is very much an orthogonal skill set, and I don't think that there's enough respect given to that understanding.
Ned: It's an interesting sphere of people who can both build the thing and then teach somebody else to build the thing because you're right, it's very different skill sets. Being able to teach means that you have to empathize with the human being that you're teaching and understand that their perspective is not yours necessarily. And one of the skills that you build up as an instructor is realizing when you're making a whole bunch of assumptions because you know something really well, and that the person that you're teaching is not going to have that context, they're not going to have all those assumptions baked in, so you have to actually explain that stuff out. Some of my instruction has been purely online video courses through, like, Pluralsight; less of a feedback loop there.
I have to publish the entire course, and then I started getting feedback, so I really enjoy doing live trainings as well because then I get the questions right away.And I always insist, like, if I'm delivering a lecture, and you have a question, please don't wait for the end. Please interrupt me immediately because you're going to forget what that question is, you're going to lose your train of thought, and then you're not going to ask it. And the whole class benefits when someone asks a question, and I benefit too. I learn how to explain that concept better. So, I really enjoy the live setting, but making the video courses is kind of nice, too.Corey: I learned to speak publicly and give conference talks as a traveling contract trainer for Puppet years ago, and that was an eye-opening experience, just because you don't really understand something until you're teaching other people how it works. It's how I learned Git. I gave a conference talk that explained Git to people, and that was called a forcing function because I had four months to go to learn this thing I did not fully understand and welp, they're not going to move the conference for me, so I guess I'd better hustle. I wouldn't necessarily recommend that approach. These days, it seems like you have a, let's say, disproportionate level of focus on the area of Infrastructure as Code, specifically you seem to be aiming at Terraform. Is that an accurate way of describing it?Ned: That is a very accurate way of describing it. I discovered Terraform while I was doing my consulting back in 2016 era, so this was pretty early on in the product's lifecycle. But I had been using CloudFormation, and at that time, CloudFormation only supported JSON, which meant it was extra punishing. And being able to describe something more succinctly and also have access to all these functions and loops and variables, I was like, “This is amazing. 
Where were you a year ago?” And so, I really just jumped in with both feet into Terraform.And at a certain point, I was at a conference, and I went past the Pluralsight booth, and they mentioned that they were looking for instructors. And I thought to myself, well, I like talking about things, and I'm pretty excited about this Terraform thing. Why don't I see if they're looking for someone to do a Terraform course? And so, I went through their audition process and sure enough, that is exactly what they were looking for. They had no getting started course for Terraform at the time. I published the course in 2017, and it has been in the top 50 courses ever since on Pluralsight. So, that told me that there's definitely an appetite and maybe this is an area I should focus on a little bit more.Corey: It's a difficult area to learn. About two months ago, I started using Terraform for the first time in anger in ages. I mean, I first discovered it when I was on my way back from one of those Puppet trainings, and the person next to me was really excited about this thing that we're about to launch. Turns out that was Mitchell Hashimoto and Armon was sitting next to him on the other side. Why he had a middle seat, I'll never know.But it was a really fun conversation, just talking about how he saw the world and what he was planning on doing. And a lot of that vision was realized. What I figured out a couple months ago is both that first, I'm sort of sad that Terraform is as bad as it is, but it's the best option we've got because everything else is so much worse. It is omnipresent, though. Effectively, every client I've ever dealt with on AWS billing who has a substantial estate is managing it via Terraform.It is the lingua franca of cloud across the board. I just wish it didn't require as much care and feeding, especially for the getting-started-with-a-boilerplate type of scenario. So, much of what you type feels like it's useless stuff that should be implicit. 
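To make the complaint concrete for readers following along: the "boilerplate" being described looks roughly like this. This is a hypothetical minimal configuration, not code from the episode; the provider, region, and bucket name are all illustrative.

```hcl
# A hypothetical minimal Terraform configuration: even a single
# resource needs a terraform block and a provider block before
# `terraform init` and `terraform apply` will do anything.
terraform {
  required_version = ">= 1.5.0"
}

provider "aws" {
  region = "us-east-1" # illustrative region
}

resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket" # illustrative name
}
```

Little of this scaffolding does visible work, which is why it can feel like it should be implicit.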
I understand why it's not, but it feels that way. It's hard to learn.Ned: It certainly can be. And you're right, there's a certain amount of boilerplate and [sigh] code that you have to write that seems pointless. Like, do I have to actually spell this all out? And sometimes the answer is yes, and sometimes the answer is you should use a module for that. Why are you writing this entire VPC configuration out yourself? And the sort of thing that you learn over time is that there are shortcuts, there are ways to make the code simpler and require less care and feeding.But I think ultimately, your infrastructure, just like your software, evolves, changes, has new requirements, and you need to manage it in the same way that you want to manage your software. And I wouldn't tell a software developer, “Oh, you know, you could just write it once and never go back to it. I'm sure it's fine.” And by the same token, I wouldn't tell an infrastructure developer the same thing. Now, of course, people do that and never go back and touch it, and then somebody else inherits that infrastructure and goes, “Oh, God. Where's the state data?” And no one knows, and then you're starting from scratch. But hopefully, if you have someone who's doing it responsibly, they'll be setting up Terraform in such a way that it is maintainable by somebody else.Corey: I'd sure like to hope so. I have encountered so many horrible examples of code and wondering what malicious person wrote this. And of course, it was me, 6 or 12 months ago.Ned: Always [laugh].Corey: I get to play architect around a lot of these things. In fact, that's one of the problems that I've had historically with an awful lot of different things that I've basically built, called it feature complete, let it sit for a while using the CDK or whatnot, and then oh, I want to make a small change to it. Well, first, I got to spend half a day doing the entire line of dependency updates and seeing what's broken and how all of that works. 
It feels like for better or worse, Terraform is a lot more stable than that, as in, old versions of Terraform code from blog posts from 2016 will still effectively work. Is that accurate? I haven't done enough exploring in that direction to be certain.Ned: The good thing about Terraform is you can pin the version of various things that you're using. So, if you're using a particular version of the AWS provider, you can pin it to that specific version, and it won't automatically upgrade you to the latest and greatest. If you didn't do that, then you'll get bit by the update bug, which certainly happens to some folks when they changed the provider from version 3 to version 4 and completely changed how the S3 bucket object was created. A lot of people's scripts broke that day, so I think that was the time for everyone to learn what the version argument is and how it works. But yeah, as long as you follow that general convention of pinning versions of your modules and of your resource provider, you should be in a pretty stable place when you want to update it.Corey: Well, here's the $64,000 question for you, then. Does Dependabot on your GitHub repo begin screaming at you as soon as you've done that because in one of its dependencies in some particular weird edge cases when they're dealing with unsanitized, internet-based input could wind up taking up too many system resources, for example? Which is, I guess, in an ideal world, it wouldn't be an issue, but in practice, my infrastructure team is probably not trying to attack the company from the inside. They have better paths to get there, to be very blunt.Ned: [laugh].Corey: Turns out giving someone access to a thing just directly is way easier than making them find it. 
But that's been one of the frustrating parts where, especially when it encounters things like, I don't know, corporate security policies of, “Oh, you must clear all of these warnings,” which well-intentioned, poorly executed seems to be the takeaway there.Ned: Yeah, I've certainly seen some implementations of tools that do static scanning of Terraform code and will come up with vulnerabilities or violations of best practice, then you have to put exceptions in there. And sometimes it'll be something like, “You shouldn't have your S3 bucket public,” which in most cases, you shouldn't, but then there's the one team that's actually publishing a front-facing static website in the S3 bucket, and then they have to get, you know, special permission from on high to ignore that warning. So, a lot of those best practices that are in the scanning tools are there for very good reasons and when you onboard them, you should be ready to see a sea of red in your scan the first time and then look through that and kind of pick through what's actually real, and we should improve in our code, and what's something that we can safely ignore because we are intentionally doing it that way.Corey: I feel like there's an awful lot of… how to put this politely… implicit dependencies that are built into things. I'll wind up figuring out how to do something by implementing it and that means I will stitch together an awful lot of blog posts, things I found on Stack Overflow, et cetera, just like a senior engineer and also Chat-Gippity will go ahead and do those things. And then the reason—like, someone asks me four years later, “Why is that thing there?” And… “Well, I don't know, but if I remove it, it might stop working, so…” there was almost a cargo-culting style of, well, it's always been there. So, is that necessary? 
Is it not?I'm ashamed by how often I learned something very fundamental in a system that I've been using for 20 years—namely, the command line—just by reading the man page for a command that I already, quote-unquote, “Already know how to use perfectly well.” Yeah, there's a lot of hidden gems buried in those things.Ned: Oh, my goodness, I learned something about the Terraform CLI last week that I wish I'd known two years ago. And it's been there for a long time. It's like, when you want to validate your code with the terraform validate, you can initialize without initializing the back-end, and for those who are steeped in Terraform, that means something and for everybody else, I'm sorry [laugh]. But I discovered that was an option, and I was like, “Ahhh, this is amazing.” But to get back to the sort of dependency problems and understanding your infrastructure better—because I think that's ultimately what's happening when you have to describe something using Infrastructure as Code—is you discover how the infrastructure actually works versus how you thought it worked.If you look at how—and I'm going to go into Azure-land here, so try to follow along with me—if you go into Azure-land and you look at how they construct a load balancer, the load balancer is not a single resource. It's about eight different resources that are all tied together. And AWS has something similar with how you have target groups, and you have the load balancer component and the listener and the health check and all that. Azure has the same thing. There's no actual load balancer object, per se.There's a bunch of different components that get slammed together to form that load balancer. When you look in the portal, you don't see any of that. You just see a load balancer, and you might think this is a very simple resource to configure. 
When it actually comes time to break it out into code, you realize, oh, this is eight different components, each of which has its own options and arguments that I need to understand. So, one of the great things that I have seen a lot of tooling up here around is doing the import of existing infrastructure into Terraform by pointing the tool at a collection of resources—whatever they are—and saying, “Go create the Terraform code that matches that thing.” And it's not going to be the most elegant code out there, but it will give you a baseline for what all the settings actually are, and other resource types are, and then you can tweak it as needed to add in input variables or remove some arguments that you're not using.Corey: Yeah, I remember when they first announced the importing of existing state. It's wow, there's an awful lot of stuff that it can be aware of that I will absolutely need to control unless I want it to start blowing stuff away every time I run the—[unintelligible 00:15:51] supposedly [unintelligible 00:15:52] thing against it. And that wasn't a lot of fun. But yeah, this is the common experience of it. I only recently was reminded of the fact that I once knew, and I'd forgotten that a public versus private subnet in AWS is a human-based abstraction, not something that is implicit to the API or the way they envision subnets existing. Kind of nice, but also weird when you have to unlearn things that you've thought you'd learned.Ned: That's a really interesting example of we think of them as very different things, and when we draw nice architecture diagrams there—these are the private subnets and these are the public ones. And when you actually go to create one using Terraform—or really another tool—there's no box that says ‘private' or ‘make this public.' It's just what does your route table look like? Are you sending that traffic out the internet gateway or are you sending it to some sort of NAT device? 
And how does traffic come back into that subnet? That's it. That's what makes it private versus public versus a database subnet versus any other subnet type you want to logically assign within AWS.Corey: Yeah. It's kind of fun when that stuff hits.Ned: [laugh].Corey: I am curious, as you look across the ecosystem, do you still see that learning Terraform is a primary pain point for, I guess, the modern era of cloud engineer, or has that sunk below the surface level of awareness in some ways?Ned: I think it's taken as a given to a certain degree that if you're a cloud engineer or an aspiring cloud engineer today, one of the things you're going to learn is Infrastructure as Code, and that Infrastructure as Code is probably going to be Terraform. You can still learn—there's a bunch of other tools out there; I'm not going to pretend like Terraform is the end-all be-all, right? We've got—if you want to use a general purpose programming language, you have something like Pulumi out there that will allow you to do that. If you want to use one of the cloud-native tools, you've got something like CloudFormation or Azure has Bicep. Please don't use ARM templates because they hurt. They're still JSON only, so at least CloudFormation added YAML support in there. And while I don't really like YAML, at least it's not 10,000 lines of code to spin up, like, two domain controllers in a subnet.Corey: I personally wind up resolving the dichotomy between oh, should we go with JSON or should we go with YAML by picking the third option everyone hates more. That's why I'm a staunch advocate for XML.Ned: [laugh]. I was going to say XML. Yeah oh, as someone who dealt with SOAP stuff for a while, yeah, XML was particularly painful, so I'm not sad that went away. JSON for me, I work with it better, but YAML is more readable. So, it's like it's, pick your poison on that. 
But yeah, there's a ton of infrastructure tools out there.They all have basically the same concepts behind them, the same core concepts because they're all deploying the same thing at the end of the day and there's only so many ways you can express that concept. So, once you learn one—say you learned CloudFormation first—then Terraform is not as big of a leap. You're still declaring stuff within a file and then having it go and make those things exist. It's just nuances between the implementation of Terraform versus CloudFormation versus Bicep.Corey: I wish that there were more straightforward abstractions, but I think that as soon as you get those, that inherently limits what you're able to do, so I don't know how you square that circle.Ned: That's been a real difficult thing is, people want some sort of universal cloud or infrastructure language and abstraction. I just want a virtual machine. I don't care what kind of platform I'm on. Just give me a VM. But then you end up very much caring [laugh] what kind of VM, what operating system, what the underlying hardware is when you get to a certain level.So, there are some workloads where you're like, I just needed to run somewhere in a container and I really don't care about any of the underlying stuff. And that's great. That's what Platform as a Service is for. If that's your end goal, go use that. But if you're actually standing up infrastructure for any sort of enterprise company, then you need an abstraction that gives you access to all the underlying bits when you want them.So, if I want to specify different placement groups about my VM, I need access to that setting to create a placement group. 
And if I have this high-level of abstraction of a virtual machine, it doesn't know what a placement group is, and now I'm stuck at that level of abstraction instead of getting down to the guts, or I'm going into the portal or the CLI and modifying it outside of the tool that I'm supposed to be using.Corey: I want to change gears slightly here. One thing that has really been roiling some very particular people with very specific perspectives has been the BSL license change that Terraform has wound up rolling out. So far, the people that I've heard who have the strongest opinions on it tend to fall into one of three categories: either they work at HashiCorp—fair enough, they work at one of HashiCorp's direct competitors—which yeah, okay, sure, or they tend to be—how to put this delicately—open-source evangelists, of which I freely admit I used to be one and then had other challenges I needed to chase down in other ways. So, I'm curious as to where you, who are not really on the vendor side of this at all, how do you see it shaking out?Ned: Well, I mean, just for some context, essentially what HashiCorp decided to do was to change the licensing from Mozilla Public licensing to BSL for, I think eight of their products and Terraform was amongst those. And really, this sort of tells you where people are. The only one that anybody really made any noise about was Terraform. There's plenty of people that use Vault, but I didn't see a big brouhaha over the fact that Vault changed its licensing. It's really just about Terraform. Which tells you how important it is to the ecosystem.And if I look at the folks that are making the most noise about it, it's like you said, they basically fall into one of two camps: it's the open-source code purists who believe everything should be licensed in completely open-source ways, or at least if you start out with an open-source license, you can't convert to something else later. 
And then there is a smaller subset of folks who work for HashiCorp competitors, and they really don't like the idea of having to pay HashiCorp a regular fee for what used to be ostensibly free to them to use. And so, what they ended up doing was creating a fork of Terraform, just before the licensing change happened and that fork of Terraform was originally called OpenTF, and they had an OpenTF manifesto. And I don't know about you, when I see the word ‘manifesto,' I back away slowly and try not to make any sudden moves.Corey: You really get the sense there's going to be a body count tied to this. And people are like, “What about the Agile Manifesto?” “Yeah, what about it?”Ned: [laugh]. Yeah, I'm just—when I see ‘manifesto,' I get a little bit nervous because either someone is so incredibly passionate about something that they've kind of gone off the deep end a little bit, or they're being somewhat duplicitous, and they have ulterior motives, let's say. Now, I'm not trying to cast aspersions on anybody. I can't read anybody's mind and tell you exactly what their intention was behind it. I just know that the manifesto reads a little bit like an open-source purist and a little bit like someone having a temper tantrum, and vacillating between the two.But cooler heads prevailed a little bit, and now they have changed the name to OpenTofu, and it has been accepted by the Linux Foundation as a project. So, it's now a member of the Linux Foundation, with all the gravitas that that comes with. 
And some people at HashiCorp aren't necessarily happy about the Linux Foundation choosing to pull that in.Corey: Yeah, I saw a whole screed, effectively, that their CEO wound up brain-dumping on that frankly, from a messaging perspective, he would have been better served as not to say anything at all, to be very honest with you.Ned: Yeah, that was a bit of a yikes moment for me.Corey: It's very rare that you will listen yourself into trouble as opposed to opening your mouth and getting yourself into trouble.Ned: Exactly.Corey: You wouldn't think I would be one of those—of all people who would have made that observation, you wouldn't think I would be on that list, yet here I am.Ned: Yeah. And I don't think either side is entirely blameless. I understand the motivations behind HashiCorp wanting to make the change. I mean, they're a publicly traded company now and ostensibly that means that they should be making some amount of money for their investors, so they do have to bear that in mind. I don't necessarily think that changing the licensing of Terraform is the way to make that money.I think in the long-term, it's not going—it may not hurt them a lot, but I don't think it's going to help them out a lot, and it's tainted the goodwill of the community to a certain degree. On the other hand, I don't entirely trust what the other businesses are saying as well in their stead. So, there's nobody in this that comes out a hundred percent clean [laugh] on the whole process.Corey: Yeah, I feel like, to be direct, the direct competitors to HashiCorp along its various axes are not the best actors necessarily to complain about what is their largest competitor no longer giving them access to continue to compete against them with their own product. I understand the nuances there, but it also doesn't feel like they are the best ambassadors for that. 
I also definitely understand where HashiCorp is coming from where, why are we investing all this time, energy, and effort for people to basically take revenue away from us? But there's also the bigger problem, which is, by and large, compared to how many sites are running Terraform and the revenues that HashiCorp puts up for it, they're clearly failing to capture the value they have delivered in a massive way. But counterpoint, if they hadn't been open-source for their life until this point, would they have ever captured that market share? Probably not.Ned: Yeah, I think ultimately, the biggest competitor to their paid offering of Terraform is their free version of Terraform. It literally has enough bells and whistles already included and plenty of options for automating those things and solving the problems that their enterprise product solves that their biggest problem is not other competitors in the Terraform landscape; it's the, “Well, we already have something, and it's good enough.” And I'm not sure how you sell to that person, that's why I'm not in marketing, but I think that is their biggest competitor is the people who already have a solution and are like, “Why do I need to pay for your thing when my thing works well enough?”Corey: That's part of the strange thing that I'm seeing as I look across this entire landscape is it feels like this is not something that is directly going to impact almost anyone out there who's just using this stuff, either the open-source version as a paying customer of any of these things, but it is going to kick up a bunch of dust. And speaking of poor messaging, HashiCorp is not really killing it this quarter, where the initial announcement led to so many questions that were unclear, such as—like, they fixed this later in the frequently asked questions list, but okay, “I'm using Terraform right now and that's fine. I'm building something else completely different. 
Am I going to lose my access to Terraform if you decide to launch a feature that does what my company does?” And after a couple of days, they put up an indemnity against that. Okay, fine.Like, when Mongo did this, there was a similar type of dynamic that was emerging, but a lot fewer people are writing their own database engine to then sell onward to customers that are provisioning infrastructure on behalf of their customers. And where the boundaries lay for who was considered a direct Terraform competitor was unclear. I'm still not convinced that it is clear enough to bet the business on for a lot of these folks. It comes down to say what you mean, not—instead of hedging, you're not helping your cause any.Ned: Yeah, I think out of the different products that they have, some are very clear-cut. Like, Vault is a server that runs as a service, and so that's very clear what that product is and where the lines of delineation are around Vault. If I go stand up a bunch of Vault servers and offer them as a service, then that is clearly a competitor. But if I have an automation pipeline service and people can technically automate Terraform deployments with my service, even if that's not the core thing that I'm looking to do, am I now a competitor? Like, it's such a fuzzy line because Terraform isn't an application, it's not a server that runs somewhere, it's a CLI tool and a programming language. So yeah, those lines are very, very fuzzy. And I… like I said, it would be better if they say what they meant, as opposed to sort of the mealy-mouthed language that they ended up using and the need to publish multiple revisions of that FAQ to clarify their position on very specific niche use cases.Corey: Yeah, I'm not trying to be difficult or insulting or anything like that. These are hard problems that everyone involved is wrestling with. It just felt a little off, and I think the messaging did them no favors when that wound up hitting. 
And now, everyone is sort of trying to read the tea leaves and figure out what does this mean because in isolation, it doesn't mean anything. It is a forward-looking thing.Whatever it is you're doing today, no changes are needed for you, until the next version comes out, in which case, okay, now do we incorporate the new thing or don't we? Today, to my understanding, whether I'm running Terraform or OpenTofu entirely comes down to which binary am I invoking to do the apply? There is no difference of which I am aware. That will, of course, change, but today, I don't have to think about that.Ned: Right. OpenTofu is a literal fork of Terraform, and they haven't really added much in the way of features, so it should be completely compatible with Terraform. The two will diverge in the future as new features get added to each one. But yeah, for folks who are using it today, they might just decide to stay on the version pre-fork and stay on that for years. I think HashiCorp has pledged 18 months of support for any minor version of Terraform, so you've got at least a year-and-a-half to decide. And we were kind of talking before the recording, 99% of people using Terraform do not care about this. It does not impact their daily workflow.Corey: No. I don't see customers caring at all. And also, “Oh, we're only going to use the pre-fork version of Terraform,” they're like, “Thanks for the air cover because we haven't updated any of that stuff in five years, so tha”—Ned: [laugh].Corey: “Oh yeah, we're doing it out of license concern. That's it. That's the reason we haven't done anything recent with it.” Because once it's working, changes are scary.Ned: Yeah.Corey: Terraform is one of those scary things, right next to databases, that if I make a change that I don't fully understand—and no one understands everything, as we've covered—then this could really ruin my week. 
So, I'm going to be very cautious around that.Ned: Yeah, if metrics are to be believed across the automation platforms, once an infrastructure rollout happens with a particular version of Terraform, that version does not get updated. For years. So, I have it on good authority that there's still Terraform version 0.10 and 0.11 running on these automation platforms for really old builds where people are too scared to upgrade to, like, post 0.12 where everything changed in the language.I believe that. People don't want to change it, especially if it's working. And so, for most people, this licensing change doesn't matter. And all the constant back and forth and bickering just makes people feel a little nervous, and it might end up pushing people away from Terraform as a platform entirely, as opposed to picking a side.Corey: Yeah, and I think that that is probably the fair way to view it at this point where right now—please, friends at HashiCorp and HashiCorp competitors don't yell at me for this—it's basically a nerd slap-fight at the moment.Ned: [laugh].Corey: And one of the big reasons that I also stay out of these debates almost entirely is that I married a corporate attorney who used to be a litigator and I get frustrated whenever it comes down to license arguments because you see suddenly a bunch of engineers who get to cosplay as lawyers, and reading the comments is infuriating once you realize how a little bit of this stuff works, which I've had 15 years of osmotic learning on this stuff. Whenever I want to upset my wife, I just read some of these comments aloud and then our dinner conversation becomes screaming. It's wonderful.Ned: Bad legal takes? Yeah, before—Corey: Exactly.Ned: Before my father became a social studies teacher, he was a lawyer for 20 years, and so I got to absorb some of the thought process of the lawyer. And yeah, I read some of these takes, and I'm like, “That doesn't sound right. 
I don't think that would hold up in any court of law.” Though a lot of the open-source licensing I don't think has been tested in any sort of court of law. It's just kind of like, “Well, we hope this stands up,” but nobody really has the money to check.Corey: Yeah. This is the problem with these open-source licenses as well. Very few have ever been tested in any meaningful way because I don't know about you, but I don't have a few million dollars in legal fees lying around to prove the point.Ned: Yeah.Corey: So, it's one of those we think this is sustainable, and Lord knows the number of companies that have taken reliances on these licenses, they're probably right. I'm certainly not going to disprove the fact—please don't sue me—but yeah, this is one of those things that we're sort of assuming is the case, even if it's potentially not. I really want to thank you for taking the time to discuss how it is you view these things and talk about what it is you're up to. If people want to learn more, where's the best place for them to find you?Ned: Honestly, just go to my website. It's nedinthecloud.com. And you can also find me on LinkedIn. I don't really go for Twitter anymore.Corey: I envy you. I wish I could wean myself off of it. But we will, of course, include a link to that in the show notes. Thank you so much for being so generous with your time. It's appreciated.Ned: It's been a pleasure. Thanks, Corey.Corey: Ned Bellavance, founder and curious human at Ned in the Cloud. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that I will then fork under a different license and claim as my own.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. 
We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.
Levi McCormick, Cloud Architect at Jamf, joins Corey on Screaming in the Cloud to discuss his work modernizing baseline cloud infrastructure and his experience being on the compliance side of cloud engineering. Levi explains how he works to ensure the different departments he collaborates with are all on the same page so that differing definitions don't end up causing miscommunications, and why he feels a sandbox environment is an important tool that leads to a successful production environment. Levi and Corey also explore the ethics behind the latest generative AI craze. About LeviLevi is an automation engineer, with a focus on scalable infrastructure and rapid development. He leverages deep understanding of DevOps culture and cloud technologies to build platforms that scale to millions of users. His passion lies in helping others learn to cloud better.Links Referenced: Jamf: https://www.jamf.com/ Twitter: https://twitter.com/levi_mccormick LinkedIn: https://www.linkedin.com/in/levimccormick/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. A longtime friend it's been a while since we've had on the show, Levi McCormick, has been promoted or punished for his sins, depending upon how you want to slice that, and he is now the Director of Cloud Engineering at Jamf. 
Levi, welcome back.Levi: Thanks for having me, Corey.Corey: I have to imagine internally, you put that very pronounced F everywhere, and sometimes where it doesn't belong, like your IAMf policies and whatnot.Levi: It is fun to see how people like to interpret how to pronounce our name.Corey: So, it's been a while. What were you doing before? And how did you wind up stumbling your way into your current role?Levi: [laugh]. When we last spoke, I was a cloud architect here, diving into just our general practices and trying to shore up some of them. In between, I did a short stint as director of FedRAMP. We are pursuing some certifications in that area and I led, kind of, the engineering side of the compliance journey.Corey: That sounds fairly close to hell on earth from my particular point of view, just because I've dealt in the compliance side of cloud engineering before, and it sounds super interesting from a technical level until you realize just how much of it revolves around checking the boxes, and—at least in the era I did it—explaining things to auditors that I kind of didn't feel I should have to explain to an auditor, but there you have it. Has the state of that world improved since roughly 2015?Levi: I wouldn't say it has improved. While doing this, I did feel like I drove a time machine to work, you know, we're certifying VMs, rather than container-based architectures. There was a lot of education that had to happen from us to auditors, but once they understood what we were trying to do, I think they were kind of on board. But yeah, it was a [laugh] it was a journey.Corey: So, one of the things you do—in fact, the first line in your bio talks about it—is you modernize baseline cloud infrastructure provisioning. That means an awful lot of things depending upon who it is that's answering the question. 
What does that look like for you?Levi: For what we're doing right now, we're trying to take what was a cobbled-together part-time project for one engineer, we're trying to modernize that, turn it into as much self-service as we can. There's a lot of steps that happen along the way, like a new workload needs to be spun up, they decide if they need a new AWS account or not, we pivot around, like, what does the access profile look like, who needs to have access to it, which things does it need to connect to, and then you look at the billing side, compliance side, and you just say, you know, “Who needs to be informed about these things?” We apply tags to the accounts, we start looking at lower-level tagging, depending on if it's a shared workload account or if it's a completely dedicated account, and we're trying to wrap all of that in automation so that it can be as click-button as possible.Corey: Historically, I found that when companies try to do this, the first few attempts at it don't often go super well. We'll be polite and say their first attempts resemble something artisanal and handcrafted, which might not be ideal for this. And then in many cases, the overreaction becomes something that is very top-down, dictatorial almost, is the way I would frame that. And the problem people learn then is that, “Oh, everyone is going to route around us because they don't want to deal with us at all.” That doesn't quite seem like your jam from what I know of you and your approach to things. How do you wind up keeping the guardrails up without driving people to shadow IT their way around you?Levi: I always want to keep it in mind that even if it's not an option, I want to at least pretend like a given team could not use our service, right? I try to bring a service mentality to it, so we're talking Accounts as a Service. And then I just think about all of the things that they would have to solve if they didn't go through us, right? 
Like, are they managing their finances w—imagine they had to go in and negotiate some kind of pricing deal on their own, right, all of these things that come with being part of our organization, being part of our service offering. And then just making sure, like, those things are always easier than doing it on their own.Corey: How diverse would you say that the workloads are that are in your organization? I found that in many cases, you'll have a SaaS-style company where there's one primary workload that is usually bearing the name of the company, and that's the thing that they provide to everyone. And then you have the enterprise side of the world where they have 1500 or 2000 distinct application teams working on different things, and the only thing they really have in common is, well, that all gets billed to the same company, eventually.Levi: They are fairly diverse in how… they're currently created. We've gone through a few acquisitions, we've pulled a bunch of those into our ecosystem, if you will. So, not everything has been completely modernized or brought over to, you know, standards, if you will, if such a thing even exists in companies. You know [laugh], you may pretend that they do, but you're probably lying to yourself, right? But you know, there are varying platforms, we've got a whole laundry list of languages that are being used, we've got some containerized, some VM-based, some serverless workloads, so it's all over the place. But you nailed it. Like, you know, the majority of our footprint lives in maybe a handful of, you know, SaaS offerings.Corey: Right. It's sort of a fun challenge when you start taking a looser approach to these things because someone gets back from re:Invent, like, “Well, I went to the keynote and now I have my new shopping list of things I'm going to wind up deploying,” and ehh, that never goes well, having been that person in a previous life.Levi: Yeah. 
And you don't want to apply too strict of governance over these things, right? You want people to be able to play, you want them to be inspired and start looking at, like, what would be—what's something that's going to move the needle in terms of our cloud architecture or product offerings or whatever we have. So, we have sandbox accounts that are pretty much wide open, we've got some light governance over those, [laugh] more so for billing than anything. And all of our internal tooling is available, you know, like if you're using containers or whatever, like, all of that stuff is in those sandbox accounts.And that's where our kind of service offering comes into play, right? Sandbox is still an account that we try to vend, if you will, out of our service. So, people should be building in your sandbox environments just like they are in your production as much as possible. You know, it's a place where tools can get the tires kicked and smooth out bugs before you actually get into, you know, roadmap-impacting problems.Corey: One of the fun challenges you have is, as you said, the financial aspect of this. When you've got a couple of workloads that drive most things, you can reason about them fairly intelligently, but trying to predict the future—especially when you're dealing with multi-year contract agreements with large cloud providers—becomes a little bit of a guessing game, like, “Okay. Well, how much are we going to spend on generative AI over the next three years?” The problem with that is that if you listen to an awful lot of talking heads or executive types, like, “Oh, yeah, if we're spending $100 million a year, we're going to add another 50 on top of that, just in terms of generative AI.” And it's like, press X to doubt, just because it's… I appreciate that you're excited about these things and want to play with them, but let's make sure that there's some ‘there' there before signing contracts that are painful to alter.Levi: Yeah, it's a real struggle. 
And we have all of these new initiatives, things people are excited for. Meanwhile, we're bringing old architecture into a new platform, if you will, or a new footprint, so we have to constantly measure those against each other. We have a very active conversation with finance and with leadership every month, or even weekly, depending on the type of project and where that spend is coming from.Corey: One of the hard parts has always been, I think, trying to get people on the finance side of the world, the engineering side of the world, and the folks who are trying to predict what the business was going to do next, all speaking the same language. It just feels like it's too easy to wind up talking past each other if you're not careful.Levi: Yeah, it's really hard. I've recently taken over the FinOps practice. It's been really important for me, for us to align on what our words mean, right? What do these definitions mean? How do we come to common consensus so that eventually the communication gets faster? But we can't talk past each other. We have to know what our words mean, we have to know what each person cares about in this conversation, or what does their end goal look like? What do they want out of the conversation? So, that's been—that's taken a significant amount of time.Corey: One of the problems I have is with the term FinOps as a whole, ignoring the fact entirely that it was an existing term of art within finance for decades; great, we're just going to sidestep past that whole mess—the problem you'll see is that it just seems like it means something different to almost everyone who hears it. And it's sort of become a marketing term more so than an actual description of what people are doing. Just because some companies will have a quote-unquote, “FinOps team,” that is primarily going to be run by financial analysts. 
And others, “Well, we have one of those lying around, but it's mostly an engineering effort on our part.”And I've seen three or four different expressions as far as team composition goes and I'm not convinced any of them are right. But again, it's easy for me to sit here and say, “Oh, that's wrong,” without having an environment of my own to run. I just tend to look at what my clients do. And, “Well, I've seen a lot of things, and they all work poorly in different ways,” is not uplifting and helpful.Levi: Yeah. I try not to get too hung up on what it's called. This is the name that a lot of people inside the company have rallied around and as long as people are interested in saving money, cool, we'll call it FinOps, you know? I mean, DevOps is the same thing, right? In some companies, you're just a sysadmin with higher pay, and in some companies, you're building extensive cloud architecture and pipelines.Corey: Honestly, for the whole DevOps side of the world, I maintain we're all systems administrators. The tools have changed, the methodologies have changed, the processes have changed, but the responsibility of ‘keep the site up' generally has not. But if you call yourself a sysadmin, you're just asking, “Please pay me less money in my next job.” No, thanks.Levi: Yeah. “Where's the Exchange Server for me to click on?” Right? That's the [laugh]—if you call yourself a sysadmin [crosstalk 00:11:34]—Corey: God. You're sending me back into twitching catatonia from my early days.Levi: Exactly [laugh].Corey: So, you've been paying attention to this whole generative AI hype monster. And I want to be clear, I say this as someone who finds the technology super neat and I'm optimistic about it, but holy God, it feels like people have just lost all sense. If that's you, my apologies in advance, but I'm still going to maintain the point.Levi: I've played with all the various toys out there. I'm very curious, you know? 
I think it's really fun to play with them, but to, like, make your entire business pivot on a dime and pursue it just seems ridiculous to me. I hate that the cryptocurrency space has pivoted so hard into it, you know? All the people that used to be shilling coins are now out there trying to cobble together a couple API calls and turn it into an AI, right?Corey: It feels like it's just a hype cycle that people are more okay with being a part of. Like Andy Jassy, on the earnings call a couple of weeks ago, saying that every Amazon team is working with generative AI. That's not great. That's terrifying. I've been playing with the toys as well and I've asked it things like, “Oh, spit out an IAM policy for me,” or, “Oh, great, what can I do to optimize my AWS bill?” And it winds up spitting out things that sound highly plausible, but they're also just flat-out wrong. And in a lot of these spaces, it feels like coming up with a plausible answer isn't the hard part; the hard part is coming up with the one that is correct. And that's what our jobs are built around.Levi: I've been trying to explain to a lot of people how, if you only have surface knowledge of the thing that it's telling you, it probably seems really accurate, but when you have deep knowledge on the topic that you're interacting with this thing, you're going to see all of the errors. I've been using GitHub's Copilot since the launch. You know, I was in one of the previews. And I love it. Like, it speeds up my development significantly.But there have been moments where I—you know, IAM policies are a great example. You know, I had it crank out a Lambda function's policy, and it was just, frankly, wrong in a lot of places [laugh]. It didn't quite imagine new AWS services, but it was really [laugh] close. The API actions just flat-out didn't exist.Corey: I love that. 
I've had some magic happen early on where it could intelligently query things against the AWS pricing API, but then I asked it the same thing a month later and it gave me something completely ridiculous. It's not deterministic, which is part of the entire problem with it, too. But it's also… it can help incredibly in some weird ways I didn't see coming. But it can also cause you to spend more time chasing that thing than just doing it yourself the first time.I found a great way it can help—you know, I've written blog posts with it. I tell it to write a blog post about a topic and give it some bullet points and say, “Write in my voice,” and everything it says I take issue with, so then I just copy that into a text editor and then mansplain-correct the robot for 20 minutes and, oh, now I've got a serviceable first draft.Levi: And how much time did you save [laugh] right? It is fun, you know?Corey: It does help because that's better, for me at least, than staring at an empty page wondering what I'm going to write. It gets me past the writer's block problem.Levi: Oh, that's a great point, yeah. Just to get the ball rolling, right, once you—it's easier to correct something that's wrong, and you're almost spite-driven at that point, right? Like, “Let me show this AI how wrong it was and I'll write the perfect blog post.” [laugh].Corey: It feels like the companies jumping on this, if you really dig into what we're talking about, it seems like they're all very excited about the possibility of we don't have to talk to customers anymore because the robots will all do that. And I don't think that's going to go the way you want it to. We just have this minor hallucination problem. Yeah, that means it lies and tries to book customers into hotel destinations that don't exist. Think about this a little more. The failure mode here is just massive.Levi: It's scary, yeah. Like, without some kind of review process, I wouldn't ship that straight to my customers, right? 
I wouldn't put that in front of my customer and say, like, “This is”—I'm going to take this generative output and put it right in front of them. That scares me. I think as we get deeper into it, you know, maybe we'll see… I don't know, maybe we'll put some filters or review process, or maybe it'll get better. I mean, who was it that said, you know, “This is the worst it's ever going to be?” Right, it will only get better.Corey: Well, the counterargument to that is, it will get far worse when we start putting this in charge [unintelligible 00:16:08] safety-critical systems, which I'm sure is just a matter of time because some of these boosters are just very, very convincing. It's just people thinking, how could this possibly go wrong? Ehhh. It's not good.Levi: Yeah, well, I mean, we're talking impact versus quality, right? The quality will only ever get better. But you know, if we run before we walk, the impact can definitely get wider.Corey: From where I sit, I want to see this really excel within bounded problem spaces. The one I keep waiting for is the AWS bill because it's a vast space, yes, and it's complicated as all hell, but it is bounded. There are a finite—though large—number of things you can see in an AWS bill, and there are recommendations you can make based on top of that. But everything I've seen that plays in this space gets way overconfident far too quickly, misses a bunch of very obvious lines of inquiry. Ah, I'm skeptical.Then you pass that off to unbounded problem spaces like human creativity and that just turns into an absolute disaster. So much of what I've been doing lately has been hamstrung by people rushing to put in safeguards to make sure it doesn't accidentally say something horrible, which has stripped out a lot of the fun and the whimsy and the sarcasm in the approach. At one point, I could bully a number of these things into ranking US presidents by absorbency. 
That's getting harder to do now because, “Nope, that's not respectful and I'm not going to do it,” is basically where it draws the line.Levi: The one thing that I always struggle with is, like, how much of the models are trained on intellectual property or, when you distill it down, pure like human suffering, right? Like, this is somebody's art, they've worked hard, they've suffered for it, they put it out there in the world, and now it's just been pulled in and adopted by this tool that—you know, how many of the examples of, “Give me art in the style of,” right, and you just see hundreds and hundreds of pieces that I mean, frankly, are eerily identical to the style.Corey: Even down to the signature, in some cases. Yeah.Levi: Yeah, exactly. You know, and I think that we can't lose sight of that, right? Like, these tools are fun and you know, they're fun to play with, it's really interesting to explore what's possible, but we can't lose sight of the fact that there are ultimately people behind these things.Corey: This episode is sponsored in part by Panoptica. Panoptica simplifies container deployment, monitoring, and security, protecting the entire application stack from build to runtime. Scalable across clusters and multi-cloud environments, Panoptica secures containers, serverless APIs, and Kubernetes with a unified view, reducing operational complexity and promoting collaboration by integrating with commonly used developer, SRE, and SecOps tools. Panoptica ensures compliance with regulatory mandates and CIS benchmarks for best practice conformity. Privacy teams can monitor API traffic and identify sensitive data, while identifying open-source components vulnerable to attacks that require patching. Proactively addressing security issues with Panoptica allows businesses to focus on mitigating critical risks and protecting their interests. Learn more about Panoptica today at panoptica.app.Corey: I think it matters, on some level, what the medium is. 
When I'm writing, I will still use turns of phrase from time to time that I first encountered when I was reading things in the 1990s. And those phrases stuck with me and became part of my lexicon. And I don't remember where I originally encountered some of these things; I just know I use those phrases an awful lot. And that has become part and parcel of who and what I am.Which is also why I have no problem telling it to write a blog post in the style of Corey Quinn and then ripping a lot of that out, but anything that's left in there, cool. I'm plagiarizing the thing that plagiarized from me and I find that to be one of those ethically just moments there. But written word is one thing depending on what exactly it's taking from you, but visual style for art, that's something else entirely.Levi: There's a real ethical issue here. These things can absorb far more information than you ever could in your entire lifetime, right? You can only, quote-unquote, you know, “copy, borrow, steal,” from a handful of other people in your entire life, right? Whereas this thing could do hundreds or thousands of people per minute. I think that's where the calculus needs to be, right? How many people can we impact with this thing?Corey: This is also nothing new, where originally in the olden times, great, copyright wasn't really a thing because writing a book was a massive, massive undertaking. That was something that you'd have to do by hand, and then oh, you want a copy of the book? You'd have to have a scribe go and copy the thing. Well then, suddenly the printing press came along, and okay, that changes things a bit.And then we continue to evolve there to digital distribution where suddenly it's just bits on a disk that I can wind up throwing halfway around the internet. And when the marginal cost of copying something becomes effectively zero, what does that change? And now we're seeing, I think, another iteration in that ongoing question. 
It's a weird world and I don't know that we have the framework in place even now to think about that properly. Because every time we start to get a handle on it, off we go again. It feels like if they were being invented today, libraries would absolutely not be considered legal. And yet, here we are.Levi: Yeah, it's a great point. Humans just do not have the ethical framework in place for a lot of these things. You know, we saw it even back in the days of Napster, right? It's just—like you said, it's another iteration on the same core problem. I [laugh] don't know how to solve it. I'm not a philosopher, right?Corey: Oh, yeah. Back in the Napster days, I was on that a fair bit in high school and college because I was broke, and oh, I wanted to listen to this song. Well, it came on an album with no other good songs on it because one-hit wonders were kind of my jam, and that album cost 15, 20 bucks, or I could grab the thing for free. There was no reasonable way to consume it. Then they started selling individual tracks for 99 cents and I gorged myself for years on that stuff.And now it feels like streaming has taken over the world to the point where the only people who really lose on this are the artists themselves, and I don't love that outcome. How do we have a better tomorrow for all of this? I know we're a bit off-topic from, you know, cloud management, but still, this is the sort of thing I think about when everything's running smoothly in a cloud environment.Levi: It's hard to get people to make good decisions when they're so close to the edge. And I think about when I was, you know, college-age, scraping by on minimum wage or barely above minimum wage, you know, it was hard to convince me that, oh yeah, you shouldn't download an MP3 of that song; you should go buy the disc, or whatever. It was really hard to make that argument when my decision was buy an album or figure out where I'm going to, you know, get my lunch. 
So, I think, now that I'm in a much different place in my life, you know, these decisions are a lot easier to make in an ethical way because that doesn't impact my livelihood nearly as much. And I think that is where solutions will probably come from. The more people doing better, the easier it is for them to make good decisions.Corey: I sure hope you're right, but something I found is that, okay, we made it easy for people to make good decisions. Like, “Nope, you've just made it easier for me to scale a bunch of terrible ones. I can make 300,000 more terrible decisions before breakfast time now. Thanks.” And, “No, that's not what I did that for.” Yet here we are. Have you been tracking lately what's been going on with the HashiCorp license change?Levi: Um, a little bit, we use—obviously use Terraform in the company and a couple other Hashi products, and it was kind of a wildfire of, you know, how does this impact us? We dove in and we realized that it doesn't, but it is concerning.Corey: You're not effectively wrapping Terraform and then using that as the basis for how you do MDM across your customer fleets.Levi: Yeah. You know, we're not deploying customers' written Terraform into their environments or something kind of wild like that. Yeah, it doesn't impact us. But it is… it is concerning to watch a company pivot from an open-source, community-based project to, “Oh, you can't do that anymore.” It doesn't impact a lot of people who use it day-to-day, but I'm really worried about just the goodwill that they've lit on fire.Corey: One of the problems, too, is that their entire write-up on this was so vague that there is no way to get an actual read on whether it's aimed at us or not without very deep analysis, and hoping that when it comes to court, the court's analysis is going to be sympathetic. It's, what is considered to be a competitor? At least historically, it was pretty obvious. Some of these databases, “Okay great. 
Am I wrapping their database technology and then selling it as a service? No? I'm pretty good.”But with HashiCorp, what they do is so vast in a few key areas that no one has that level of certainty. I was pretty freaking certain that I'm not shipping MongoDB with my own wrapper around it, but am I shipping something that looks like Terraform if I'm managing someone's environment for them? I don't know. Everything's thrown into question. And you're right. It's the goodwill that currently is being set on fire.Levi: Yeah, I think people had an impression of Hashi that they were one of the good guys. You know, the quote-unquote, “Good guys,” in the space, right? Mitchell Hashimoto is out there as a very prominent coder, he's an engineer at heart, he's in the community, pretty influential on Twitter, and I think people saw them as not one of the big, faceless corporations, so to see moves like this happen, it… I think it shook a lot of people's opinions of them and scared them.Corey: Oh, yeah. They've always been the good guys in this context. Mitch and Armon were fantastic folks. I'm sure they still are. I don't know if this is necessarily even coming from them. It's market forces, what are investors demanding? They see everyone is using Terraform. How does that compare to HashiCorp's market value?This is one of the inherent problems, if I'm being direct, of the end stages of capitalism, where it's, “Okay, we're delivering on a lot of value. How do we capture ever more of it and grow massively?” And I don't know. I don't know what the answer is, but I don't think anyone's thrilled with this outcome. Because, let's be clear, it is not going to meaningfully juice their numbers at all. They're going to be stirring up a lot of ill will against them in the industry, but I don't see the upside for them. I really don't.Levi: I haven't really done any of the analysis or looked for it, I should say. 
Have you seen anything about what this might actually impact any providers or anything? Because you're right, like, what kind of numbers are we actually talking about here?Corey: Right. Well, there are a few folks that have done things around this that people have named for me: Spacelift being one example, Pulumi being another, and both of them are saying, “Nope, this doesn't impact us because of X, Y, and Z.” Yeah, whether it does or doesn't, they're not going to sit there and say, “Well, I guess we don't have a company anymore. Oh, well.” And shut the whole thing down and just give their customers over to HashiCorp.Their own customers would be incensed if that happened and would not go to HashiCorp if that were to be the outcome. I think, on some level, they're setting the stage for the next evolution in what it takes to manage large-scale cloud environments effectively. I think basically, every customer I've ever dealt with on my side has been a Terraform shop. I finally decided to start learning the ins and outs of it myself a few weeks ago, and well, it feels like I should have just waited a couple more weeks and then it would have become irrelevant. Awesome. Which is a bit histrionic, but still, this is going to plant seeds for people to start meaningfully competing. I hope.Levi: Yeah, I hope so too. I have always awaited releases of Terraform Cloud with great anticipation. I generally don't like managing my Terraform back-ends, you know, I don't like managing the state files, so every time Terraform Cloud has some kind of release or something, I'm looking at it because I'm excited, oh finally, maybe this is the time I get to hand it off, right? Maybe I start to get to use their product. And it has never been a really compelling answer to the problems that I have.And I've always said, like, the [laugh] cloud journey would be Google's if they just released a managed Terraform [laugh] service. And this would be one way for them to prevent that from happening. 
Because Google doesn't even have an Infrastructure as Code competitor. Not really. I mean, I know they have their, what, Plans or their Projects or whatever they… their Infrastructure as Code language was, but—Corey: Isn't that what Stackdriver was supposed to be? What happened with that? It's been so long.Levi: No, that's a logging solution [laugh].Corey: That's the thing. It all runs together. No, it was their operations suite that was—Levi: There we go.Corey: —formerly Stackdriver. Yeah. Now, that does include some aspects—yeah. You're right, it's still hanging out in the observability space. This is the problem: all this stuff conflates and companies are terrible at naming and Google likes to deprecate things constantly. And yeah, but there is no real competitor. CloudFormation? Please. Get serious.Levi: Hey, you're talking to a member of the CloudFormation support group here. So, I'm still a huge fan [laugh].Corey: Emotional support group, more like it, it seems these days.Levi: It is.Corey: Oh, good. It got for loops recently. We've been asking for basically that to make them a lot less wordy only for, what, ten years?Levi: Yeah. I mean, my argument is that I'm operating at the account level, right? I need to deploy to 250, 300, 500 accounts. Show me how to do that with Terraform in a way that isn't, you know, stab your eyes out with a fork.Corey: It can be done, but it requires an awful lot of setting things up first.Levi: Exactly.Corey: That's sort of a problem. Like yeah, once you have the first 500 going, the rest are just like butter. But that's the thing: step one is massive, and then step two becomes easy. Yeah… no, thank you.Levi: [laugh]. I'm going to stick with my StackSets, thank you.Corey: [laugh]. I really want to thank you for taking the time to come back on and honestly kibitz about the state of the industry with me. 
If people want to learn more, where's the best place for them to find you? Levi: Well, I'm still active on the space formerly known as Twitter. You can reach out to me there. DMs are open. I'm always willing to help people learn how to cloud better. I'm hopefully trying to make my presence known a little bit more on LinkedIn. If you happen to be over there, reach out. Corey: And we will, of course, put links to that in the [show notes 00:30:16]. Thank you so much for taking the time to speak with me again. It's always a pleasure. Levi: Thanks, Corey. I always appreciate it. Corey: Levi McCormick, Director of Cloud Engineering at Jamf. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment that tells us that we completely missed the forest for the trees and that your programming is going to be far superior based upon generative AI. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Richard talks with HashiCorp cofounder Mitchell Hashimoto about a side project of his: a high-performance terminal emulator that he wrote using Zig and Swift, and which has become his daily driver terminal.
First, some self-administered back-patting for the Stack Overflow editorial team: great engineering blogs give tech companies an edge (The New York Times says so). Hiring aside, engineering blogs are fresh sources of knowledge, insight, and entertainment for anyone working in tech. You can learn a lot from, for instance, blog posts that break down an outage or security incident and detail how engineers got things up and running again. One classic of the genre: Amazon's explanation of how one engineer brought the internet to its knees. And here's an example from our own blog. When you've finished catching up on the Stack Overflow blog, check out those from Netflix and Uber.

Good news for late-night impulse shoppers: Instagram is removing the shopping tag from the home feed, reports The Verge. Is this a response to widespread user pushback, and does this herald the end of New Instagram? We can hope.

Sony announces Project Leonardo, an accessibility controller kit for PS5.

Did you know? Using only Tetris, you can build a machine capable of universal computation.

Developer advocate Matt Kiernander is moving on to his next adventure. If you're looking for a developer advocate or engineer, connect with him on LinkedIn or email him.

One of Matt's favorite conversations on the podcast was our episode with Mitchell Hashimoto, cofounder and CEO of HashiCorp. It's worth a (re)listen.
About Kelsey: Kelsey Hightower is the Principal Developer Advocate at Google, the co-chair of KubeCon, the world's premier Kubernetes conference, and an open source enthusiast. He's also the co-author of Kubernetes Up & Running: Dive into the Future of Infrastructure. Links: Twitter: @kelseyhightower Company site: Google.com Book: Kubernetes Up & Running: Dive into the Future of Infrastructure Transcript Announcer: Hello and welcome to Screaming in the Cloud, with your host Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of Cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: This episode is brought to us by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want… without making them spell it out. Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more. Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. I'm joined this week by Kelsey Hightower, who claims to be a principal developer advocate at Google, but based upon various keynotes I've seen him in, he basically gets on stage and plays video games like Tetris in front of large audiences. So I assume he is somehow involved with e-sports. Kelsey, welcome to the show. Kelsey: You've outed me. Most people didn't know that I am a full-time e-sports Tetris champion at home.
And the technology thing is just a side gig. Corey: Exactly. It's one of those things you do just to keep the lights on, like you're waiting to get discovered, but in the meantime, you're waiting tables. Same type of thing. Some people wait tables; you more or less sling Kubernetes, for lack of a better term. Kelsey: Yes. Corey: So let's dive right into this. You've been a strong proponent for a long time of Kubernetes and all of its intricacies and all the power that it unlocks, and I've been pretty much the exact opposite of that, as far as saying it tends to be overcomplicated, that it's hype-driven, and a whole bunch of other, shall we say, criticisms that are sometimes grounded in reality and sometimes just because I think it'll be funny when I put them on Twitter. Where do you stand on the state of Kubernetes in 2020? Kelsey: So, I want to make sure it's clear what I do. Because when I started talking about Kubernetes, I was not working at Google. I was actually working at CoreOS, where we had a competitor to Kubernetes called Fleet. And Kubernetes coming out kind of put this fork in our roadmap, like, where do we go from here? What people saw me doing with Kubernetes was basically learning in public. I was really excited about the technology because it's attempting to solve a very complex thing. I think most people will agree building a distributed system is what cloud providers typically do, right? With VMs and hypervisors. Those are very big, complex distributed systems.
And before Kubernetes came out, the closest I'd gotten to a distributed system before working at CoreOS was just reading the various white papers on the subject and hearing stories about how Google had systems like Borg, and tools like Mesos were being used by some of the largest hyperscalers in the world, but I was never going to have the chance to ever touch one of those unless I went to work at one of those companies. So when Kubernetes came out, with the fact that it was open source and I could read the code to understand how it was implemented, to understand how schedulers actually work, and then bonus points for being able to contribute to it. In those early years, what you saw me doing was just being so excited about systems that I had attempted to build on my own becoming this new thing, just like when Linux came up. So I kind of agree with you that a lot of people look at it as more of a hype thing. They're looking at it regardless of their own needs, regardless of understanding how it works and what problems it's trying to solve. My stance on it is that it's a really, really cool tool for the level that it operates at, and in order for it to be successful, people can't know that it's there. Corey: And I think that might be where part of my disconnect from Kubernetes comes into play. I have a background in ops, more or less the grumpy Unix sysadmin, because it's not like there's a second kind of Unix sysadmin you're ever going to encounter. Where everything in development works in theory, but in practice things pan out a little differently. I always joke that ops is the difference between theory and practice. In theory, devs can do everything and there's no ops needed. In practice, well, it's been a burgeoning career for a while.
The challenge with this is that Kubernetes at times exposes certain levels of detail that generally people would not want to have to think about or deal with, while papering over other things with other layers of abstraction on top. That obscures valuable troubleshooting information from running something in an operational context. It absolutely is a fascinating piece of technology, but it feels today like it is overly complicated for the use a lot of people are attempting to put it to. Is that a fair criticism from where you sit? Kelsey: So I think the reason why it's a fair criticism is because there are people attempting to run their own Kubernetes cluster, right? So when we think about the cloud, unless you're in OpenStack land, for the people who look at the cloud, you say, "Wow, this is much easier." There's an API for creating virtual machines, and I don't see the distributed state store that's keeping all of that together. I don't see the farm of hypervisors. So we don't necessarily think about the inherent complexity of a system like that, because we just get to use it. So on one end, if you're just a user of a Kubernetes cluster, maybe using something fully managed or you have an ops team that's taking care of everything, your interface to the system becomes this Kubernetes configuration language where you say, "Give me a load balancer, give me three copies of this container running." And if we do it well, then you'd think it's a fairly easy system to deal with, because you say, "kubectl apply," and things seem to start running. Just like in the cloud where you say, "AWS, create this VM," or "gcloud compute instances create." You just submit API calls and things happen. I think the fact that Kubernetes is very transparent to most people means you can now see the complexity, right? Imagine everyone driving with the hood off the car.
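Kelsey's "give me a load balancer, give me three copies of this container running" maps onto two standard Kubernetes objects. A minimal sketch (the names and image are illustrative, not from the episode) of what you'd hand to kubectl apply:

```yaml
# Three replicas of a container...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
# ...fronted by a load balancer.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

`kubectl apply -f web.yaml` is the whole workflow; the cluster's cloud-provider integration decides what "LoadBalancer" actually means on any given platform.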
You'd be looking at a lot of moving things, but we have hoods on cars to hide the complexity, and all we expose is the steering wheel and the pedals. That car is super complex, but we don't see it, so we don't attribute that complexity to the driving experience. Corey: This to some extent feels like it's on the same axis as serverless, with just a different level of abstraction piled onto it. And while I am a large proponent of serverless, and I think it's fantastic for a lot of greenfield projects, the constraints inherent to the model mean that it is almost completely untenable for a tremendous number of existing workloads. Some developers like to call it legacy, but when I hear the term legacy I hear "it makes actual money." So just treating it as, "Oh, it's a science experiment we can throw into a new environment, spend a bunch of time rewriting it for minimal gains," is just not going to happen as companies undergo digital transformations, if you'll pardon the term. Kelsey: Yeah, so I think you're right. So let's take Amazon's Lambda for example; it's a very opinionated high-level platform that assumes you're going to build apps a certain way. And if that's you, look, go for it. Now, one or two levels below that there is this distributed system. Kubernetes decided to play in that space because everyone that's building other platforms needs a place to start. The analogy I like to think of is, in the mobile space, iOS and Android deal with the complexities of managing multiple applications on a mobile device, security aspects, app stores, that kind of thing. And then you as a developer build your thing on top of those platforms and APIs and frameworks. Now, it's debatable; someone would say, "Why do we even need an open-source implementation of such a complex system? Why not just have everyone move to the cloud?" And then everyone that's not in a cloud, on-premises, gets left behind. But that's not how open source typically works, right?
The reason why we have Linux, the precursor to the cloud, is because someone looked at the big proprietary Unix systems and decided to re-implement them in a way that anyone could run those systems. So when you look at Kubernetes, you have to look at it through that lens. It's the ability to democratize these platform layers in a way that other people can innovate on top of. That doesn't necessarily mean that everyone needs to start with Kubernetes, just like not everyone needs to start with a Linux server, but it's there for you to build the next thing on top of, if that's the route you want to go. Corey: It's been almost a year now since I made an original tweet about this, that in five years, no one will care about Kubernetes. So now I guess I have four years running on that clock, and that attracted a bit of, shall we say, controversy. There were people who thought that I meant that it was going to be a flash in the pan and it would dry up and blow away. But my impression of it is that in, well, four years now, it will have become more or less systemd for the data center, in that there's a bunch of complexity under the hood. It does a bunch of things. No one sensible wants to spend all their time mucking around with it in most companies. But it's not something that people have to think about on an ongoing basis the way it feels like we do today. Kelsey: Yeah, I mean, to me, I kind of see this as the natural evolution, right? It's new, it gets a lot of attention, and kind of the assumption you make in that statement is that there's something better that should be able to arise, given that checkpoint. If this is what people think is hot, within five years surely we should see something else that can be deserving of that attention, right? Docker comes out, and almost four or five years later you have Kubernetes. So it's obvious that there should be a progression here that steals some of the attention away from Kubernetes, but it's still so new, right?
It's only five years in; Linux is over 20 years old at this point, and it's still top of mind for a lot of people, right? Microsoft is still porting a lot of Windows-only things into Linux, so we still discuss the differences between Windows and Linux. The idea that the cloud, for the most part, is driven by Linux virtual machines, that I think the majority of workloads run on virtual machines still to this day, means it's still front and center, especially if you're a system administrator managing VMs, right? You're dealing with tools that target Linux, you know the Cisco interface, and you're thinking about how to secure it and lock it down. Kubernetes is just at the very first part of that life cycle where it's new. We're all interested in even what it is and how it works, and now we're starting to move into that next phase, which is the distro phase. Like in Linux, you had Red Hat, Slackware, Ubuntu, special-purpose distros. Some will consider Android a special-purpose distribution of Linux for mobile devices. And now that we're in this distro phase, that's going to go on for another 5 to 10 years where people start to align themselves around, maybe it's OpenShift, maybe it's GKE, maybe it's Fargate for EKS. These are now distributions built on top of Kubernetes that start to add a little bit more opinionation about how Kubernetes should be put together. And then we'll enter another phase where you'll build a platform on top of Kubernetes, but it won't be worth mentioning that Kubernetes is underneath, because people will be more interested in the thing above. Corey: I think we're already seeing that now, in terms of people no longer really caring that much what operating system they're running, let alone what distribution of that operating system. The things that you have to care about slip below the surface of awareness, and we've seen this for a long time now.
Originally, to install a web server, it wound up taking a few days and an intimate knowledge of GCC compiler flags, then RPM or dpkg, and then yum on top of that, then an "ensure installed" once we had configuration management that was halfway decent. Then docker run whatever-it-is. And today, with serverless technologies being what they are, it's effectively push a file to S3 or its equivalent somewhere else and you're done. The things that people have to be aware of and the barrier to entry continually lower. The downside to that, of course, is that things that people specialize in today, and effectively make very lucrative careers out of, are not going to be front and center in 5 to 10 years the way that they are today. And that's always been the way of technology. It's a treadmill to some extent. Kelsey: And on the flip side of that, look at all of the new jobs that are centered around these cloud-native technologies, right? So you know, we're just going to make up some numbers here: imagine if there were only 10,000 jobs around just Linux system administration. Now, when you look at this whole Kubernetes landscape, people are saying we can actually do a better job with metrics and monitoring. Observability is now a thing, culturally, that people assume you should have, because you're dealing with these distributed systems. The ability to start thinking about multi-regional deployments, when I think that would've been infeasible with the previous tools, or you'd have had to build all those tools yourself. So I think now we're starting to see a lot more opportunities, where instead of 10,000 people, maybe you need 20,000 people, because now you have the tools necessary to tackle bigger projects where you didn't see that before. Corey: That's what's going to be really neat to see. But the challenge is always for people who are steeped in existing technologies: what does this mean for them?
I mean, I spent a lot of time early in my career fighting against cloud because I thought that it was taking away a cornerstone of my identity. I was a large-scale Unix administrator, specifically focusing on email. Well, it turns out that there aren't nearly as many companies that need to have that particular skill set in house as there were 10 years ago. And what we're seeing now is this sort of forced evolution of people's skill sets, or they hunker down on a particular area of technology or particular application to try and make a bet that they can ride that out until retirement. It's challenging, but at some point it seems that some folks like to stop learning, and I don't fully pretend to understand that. I'm sure I will someday, where, "No, at this point technology has come far enough. We're just going to stop here, and anything after this is garbage." I hope not, but I can see a world in which that happens. Kelsey: Yeah, and I also think one thing that we don't talk a lot about in the Kubernetes community is that Kubernetes makes hyper-specialization worth doing, because now you start to have a clear separation of concerns. Now the OS can be hyperfocused on security system calls and not necessarily packaging every programming language under the sun into a single distribution. So we can kind of move part of that layer out of the core OS and start to just think about the OS being a security boundary where we try to lock things down. And for some people that play at that layer, they have a lot of work ahead of them in locking down these system calls, improving the idea of containerization, whether that's something like Firecracker or some of the work that you see VMware doing; that's going to be a whole class of hyper-specialization. And the reason why they're going to be able to focus now is because we're starting to move into a world, whether that's serverless or the Kubernetes API, where we're saying we should deploy applications that don't target machines.
I mean, just that step alone is going to allow for so much specialization at the various layers, because even the networking front, which arguably has been a specialization up until this point, can truly specialize, because now the IP assignments, how networking fits together, have also been abstracted away one more step, where you're not asking for interfaces or binding to a specific port or playing with port mappings. You can now let the platform do that. So I think for some of the people who may not be as interested in moving up the stack, they need to be aware that the number of people we need being hyper-specialized at Linux administration will definitely shrink. And a lot of that work will move up the stack, whether that's Kubernetes or managing a serverless deployment and all the configuration that goes with that. But if Linux is, like, your bread and butter, I think there's going to be an opportunity to go super deep, but you may have to expand into things like security and not just things like configuration management. Corey: Let's call it the unfulfilled promise of Kubernetes. On paper, I love what it hints at being possible. Namely, if I build something that runs well on top of Kubernetes, then we truly have a write once, run anywhere type of environment. Stop me if you've heard that one before, 50,000 times in our industry... or history. But in practice, as has happened before, it seems like it tends to fall down for one reason or another. Now, Amazon is famous for many reasons, but the one that I like to pick on them for is, you can't say the word multi-cloud at their events. Right. That'll change people's perspective, good job.
People tend to see multi-cloud through a couple of different lenses. I've been rather anti-multi-cloud from the perspective that setting out on day one to build an application with the idea that it can be run on top of any cloud provider, or even on-premises if that's what you want to do, is generally not the way to proceed. You wind up having to make certain trade-offs along the way, you have to rebuild anything that isn't consistent between those providers, and it slows you down. Kubernetes, on the other hand, hints that, if it works and fulfills this promise, you can suddenly abstract an awful lot beyond that and just write generic applications that can run anywhere. Where do you stand on the whole multi-cloud topic? Kelsey: So I think we have to make sure we talk about the different layers that are kind of ready for this thing. So for example, multi-cloud networking, we just call that networking, right? What's the IP address over there? I can just hit it. So we don't make a big deal about multi-cloud networking. Now there's an area where people say, how do I configure the various cloud providers? And I think the healthy way to think about this is in your own data centers, right? We know a lot of people have investments on-premises. Now, if you were to take the mindset that you only need one provider, then you would try to buy everything from HP, right? You would buy HP storage devices, you'd buy HP racks, power. Maybe HP doesn't sell air conditioners, so you're going to have to buy an air conditioner from a vendor who specializes in making air conditioners, hopefully for a data center and not your house. So now you've entered this world where one vendor doesn't make every single piece that you need. Now, in the data center, we don't say, "Oh, I am multi-vendor in my data center."
Typically, you just buy the switches that you need, you buy the power racks that you need, you buy the ethernet cables that you need, and they have common interfaces that allow them to connect together, and they typically have different configuration languages and methods for configuring those components. The cloud, on the other hand, represents the same kind of opportunity. There are some people who really love DynamoDB and S3, but then they may prefer something like BigQuery to analyze the data that they're uploading into S3. Now, if this was a data center, you would just buy all three of those things, put them in the same rack, and call it good. But the cloud presents this other challenge. How do you authenticate to those systems? And then there are usually these additional networking costs, egress or ingress charges, that make it prohibitive to say, "I want to use two different products from two different vendors." And I think that's— Corey: ...what winds up causing serious problems. Kelsey: Yes, so that data gravity, the associated cost, becomes a little bit more in your face. Whereas in a data center you kind of feel that the cost has already been paid. I already have a network switch with enough bandwidth, I have an extra port on my switch to plug this thing in, and they're all standard interfaces. Why not? So I think the multi-cloud conversation gets lost in the chew problem, which is the barrier to entry of leveraging things across two different providers because of networking and configuration practices. Corey: That's often the challenge, I think, that people get bogged down in. On an earlier episode of this show we had Mitchell Hashimoto on, and his entire theory around using Terraform to configure various bits of infrastructure was not the idea of workload portability, because that feels like the windmill we all keep tilting at and failing to hit, but instead the idea of workflow portability, where different things can wind up being interacted with in the same way.
So if this one division is on one cloud provider and the others are on something else, then you at least can have some points of consistency in how you interact with those things. And in the event that you do need to move, you don't have to effectively redo all of your CI/CD process, all of your tooling, et cetera. And I thought that there was something compelling about that argument. Kelsey: And that's actually what Kubernetes does for a lot of people. For Kubernetes, if you think about it, when we start to talk about workflow consistency: if you want to deploy an application, kubectl apply some config; you want the application to have a load balancer in front of it, regardless of the cloud provider, because Kubernetes has an extension point we call the cloud provider. And that's where Amazon, Azure, Google Cloud do all the heavy lifting of mapping the high-level ingress object that specifies, "I want a load balancer, maybe a few options," to the actual implementation detail. So maybe you don't have to use four or five different tools, and that's where that kind of workload portability comes from. Like, if you think about Linux, right? It has a set of system calls, for the most part, even if you're using a different distro at this point, Red Hat or Amazon Linux or Google's container-optimized Linux. If I build a Go binary on my laptop, I can SCP it to any of those Linux machines and it's probably going to run. So you could call that multi-cloud, but that doesn't make a lot of sense, because it's just the way Linux works. Kubernetes does something very similar because it sits right on top of Linux, so you get the portability just from the previous example, and then you get the other workflow portability, like you just stated, where I'm calling kubectl apply and I'm using the same workflow to get resources spun up on the various cloud providers.
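The high-level ingress object Kelsey describes is the same manifest on every provider; only the controller behind it changes. A hedged sketch (the host and service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: example.com       # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web     # hypothetical backing Service
                port:
                  number: 80
```

Applying this on EKS, GKE, or AKS yields a provider-specific load balancer behind the scenes, which is exactly the workflow portability being described.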
Even if that configuration isn't one-to-one identical. Corey: This episode is sponsored in part by our friends at Uptycs, because they believe that many of you are looking to bolster your security posture with CNAPP and XDR solutions. They offer both cloud and endpoint security in a single UI and data model. Listeners can get Uptycs for up to 1,000 assets through the end of 2023 (that is next year) for $1. But this offer is only available for a limited time on UptycsSecretMenu.com. That's U-P-T-Y-C-S Secret Menu dot com. Corey: One thing I'm curious about is, you wind up walking through the world and seeing companies adopting Kubernetes in different ways. What does the adoption of Kubernetes look like inside of big-E Enterprise-style companies? I don't have as much insight into those environments as I probably should; that's sort of a focus area for the next year for me. But in startups, it seems that either someone goes in and rolls it out and suddenly it's fantastic, or they avoid it entirely and do something serverless. In large enterprises, I see a lot of Kubernetes and a lot of Kubernetes stories coming out of it, but what isn't usually told is: what's the tipping point where they say, "Yeah, let's try this," or, "Here's the problem we're trying to solve for. Let's chase it"? Kelsey: What I see is enterprises buy everything. If you're big enough and you have a big enough IT budget, most enterprises have a POC of everything that's for sale, period. There's some team in some pocket; maybe they came through via acquisition, maybe they live in a different state, maybe it's just a new project that came out. And what you tend to see, at least from my experience, if I walk into a typical enterprise, they may tell me something like, "Hey, we have a POC of Pivotal Cloud Foundry, OpenShift, and we want some of that new thing that we just saw from you guys. How do we get a POC going?" So there's always this appetite to evaluate what's for sale, right?
So, that's one case. There's another case: when you start to think about an enterprise, there's a big range of skill sets. Sometimes I'll go to some companies, like, "Oh, my insurance is through that company, and there are ex-Googlers that work there." They used to work on things like Borg or something else, and they kind of know how these systems work. And they have a slightly better edge at evaluating whether Kubernetes is any good for the problem at hand. And you'll see them bring it in. Now, that same company, I could drive over to the other campus, maybe it's five miles away, and that team doesn't even know what Kubernetes is. And for them, they're going to be chugging along with what they're currently doing. So then the challenge becomes, if Kubernetes is a great fit, how wide of a fit is it? How many teams at that company should be using it? So what I'm currently seeing is there are some enterprises that have found a way to make Kubernetes the place where they do a lot of new work, because that makes sense. A lot of enterprises, to my surprise though, are actually stepping back and saying, "You know what? We've been stitching together our own platform for the last five years. We had the Netflix stack, we got some Spring Boot, we got Consul, we got Vault, we got Docker. And now this whole thing is getting a little more fragile because we're doing all of this glue code. We've been trying to build our own Kubernetes, and now that we know what it is and we know what it isn't, we know that we can probably get rid of this kind of bespoke stack ourselves, just because of the ecosystem, right? If I go to HashiCorp's website, I would probably find the word Kubernetes as much as I find the word Nomad on their site, because they've made things like Consul and Vault become first-class offerings inside of the world of Kubernetes.
So I think it's that momentum that you see across even Oracle, Juniper, Palo Alto Networks; they all seem to have a Kubernetes story. And this is why you start to see the enterprise able to adopt it: because it's so much in their face and it's where the ecosystem is going. Corey: It feels like a lot of the excitement and the promise, and even the same problems that Kubernetes is aimed at today, could have just as easily been talked about half a decade ago in the context of OpenStack. And for better or worse, OpenStack is nowhere near where it once was. It felt like it had such promise and such potential, and when it didn't pan out, that left a lot of people feeling relatively sad, burnt out, depressed, et cetera. And I'm seeing a lot of parallels today, at least between what was said about OpenStack and what was said about Kubernetes. How do you see those two diverging? Kelsey: I will tell you the big difference that I saw, personally, just from my personal journey outside of Google, just having that option. And I remember I was working at a company and we were like, "We're going to roll our own OpenStack. We're going to buy a FreeBSD box and make it a file server. We're going all open source," like, do whatever you want to do. And that was just having so many issues in terms of first-class integrations, education, people with the skills to even do that. And I was like, "You know what, let's just cut the check for VMware." We want virtualization. VMware, for the cost and what it does, is good enough. Or we can just actually use a cloud provider. That space in many ways was a purely solved problem. Now, let's fast-forward to Kubernetes. Also, when you get OpenStack finished, you're just back where you started. You've got a bunch of VMs, and now you've got to go figure out how to build the real platform that people want to use, because no one just wants a VM. If you think Kubernetes is low level, consider just having OpenStack, even a perfect OpenStack.
You're still at square one for the most part. Maybe you can just say, "Now I'm paying a little less money for my stack in terms of software licensing costs," but from an abstraction and automation and API standpoint, I don't think OpenStack moved the needle in that regard. Now in the Kubernetes world, it's solving a huge gap. Lots of people had virtual machine sprawl, then they had Docker sprawl, and when you bring in a thing like Kubernetes, it says, "You know what? Let's rein all of that in. Let's build some first-class abstractions, assuming that the layer below us is a solved problem." You got to remember, when Kubernetes came out, it wasn't trying to replace the hypervisor; it assumed it was there. It also assumed that the hypervisor had APIs for creating virtual machines and attaching disks and creating load balancers, so Kubernetes came out as a complementary technology, not one looking to replace. And I think that's why it was able to stick, because it solved a problem at another layer where there was not a lot of competition. Corey: I think a more cynical take, at least one of the ones that I've heard articulated and I tend to agree with, was that OpenStack originally seemed super awesome because there were a lot of interesting people behind it, fascinating organizations, but then you wound up looking through the backers of the foundation behind it and the rest. And there were something like 500 companies behind it, and an awful lot of them were these giant organizations, big corporate IT enterprise software vendors, and you take a look at that. I'm not going to name anyone because at that point, oh, will we get letters. But at that point, you start seeing so many of their patterns being worked into it that it almost feels like it has to collapse under its own weight.
I don't, for better or worse, get the sense that Kubernetes is succumbing to the same thing, despite the CNCF having an awful lot of those same backers behind it and, as far as I can tell, significantly more money; they seem to have all the money to throw at these sorts of things. So I'm wondering how Kubernetes has managed to effectively sidestep, I guess, the open-source miasma that OpenStack didn't quite manage to avoid. Kelsey: Kubernetes gained its own identity before the foundation existed. Its purpose, if you think back to the Borg paper, goes back almost eight years prior, maybe even 10 years prior. It defined this problem really, really well. I think Mesos came out and also had a slightly different take on this problem. And you could just see at that time there was a real need; you had choices between Docker Swarm, Nomad. It seems like everybody was trying to fill in this gap because, across most verticals or industries, this was a true problem worth solving. What Kubernetes did was play in the exact same sandbox, but it kind of got put out with experience. It's not like, "Oh, let's just copy this thing that already exists, but let's just make it open." In that case, you don't really have your own identity. It's you versus Amazon; in the case of OpenStack, it's you versus VMware. And that's just really a hard place to be in, because you don't have an identity that stands alone. Kubernetes itself had an identity that stood alone. It comes from this experience of running a system like this. It comes from research and white papers. It comes after previous attempts at solving this problem. So we agree that this problem needs to be solved. We know what layer it needs to be solved at. We just didn't get it right yet, so Kubernetes didn't necessarily try to get it right. It tried to start with only the primitives necessary to focus on the problem at hand. Now to your point, the extension interface of Kubernetes is what keeps it small.
Years ago I remember plenty of meetings where we all got in rooms and said, "This thing is done." It doesn't need to be a PaaS. It doesn't need to compete with serverless platforms. The core of Kubernetes, like Linux, is largely done. Here are the core objects, and we're going to make a very great extension interface. We're going to make one for the container runtime level so that people can swap that out if they really want to, and we're going to do one that makes other APIs as first-class as the ones we have, and we don't need to try to boil the ocean in every Kubernetes release. Everyone else has the ability to deploy extensions, just like Linux, and I think that's why we're avoiding some of this tension in the vendor world, because you don't have to change the core to get something that feels like a native part of Kubernetes. Corey: What do you think is currently the most misinterpreted or misunderstood aspect of Kubernetes in the ecosystem? Kelsey: I think the biggest thing that's misunderstood is what Kubernetes actually is. And the thing that made it click for me, especially when I was writing the tutorial Kubernetes The Hard Way, was that I had to sit down and ask myself, "Where do you start trying to learn what Kubernetes is?" So I start with the database, right? The configuration store isn't Postgres, it isn't MySQL, it's etcd. Why? Because we're not trying to be this generic data store platform. We just need to store configuration data. Great. Now, do we let all the components talk to etcd? No. We have this API server, and between the API server and the chosen data store, that's essentially what Kubernetes is. You can stop there. At that point, you have a valid Kubernetes cluster, and it can understand a few things. Like I can say, using the Kubernetes command-line tool, create this configuration map that stores configuration data, and I can read it back. Great. Now, I can't do a lot of things that are interesting with that.
Maybe I just use it as a configuration store, but then if I want to build a container platform, I can install the Kubernetes kubelet agent on a bunch of machines and have it talk to the API server looking for objects; then you add in the scheduler and all the other components. So what that means is that Kubernetes' most important component is its API, because that's how the whole system is built. It's actually a very simple system when you think about just those two components in isolation. If you want a container management tool, you need a scheduler, controller manager, cloud provider integrations, and now you have a container tool. But let's say you want a service mesh platform. Well, in a service mesh you have a data plane, which can be Nginx or Envoy, and that's going to handle routing traffic. And you need a control plane. That's going to be something that takes in configuration and uses it to configure all the things in the data plane. Well, guess what? Kubernetes is 90% there in terms of a control plane with just those two components, the API server and the data store. So now when you want to build control planes, if you start with the Kubernetes API, we call it the API machinery, you're going to be 95% there. And then what do you get? You get a distributed system that can handle failures on the backend, thanks to etcd. You're going to get RBAC, so you can have permissions on top of your schemas, and there's a built-in framework, we call it custom resource definitions, that allows you to articulate a schema, and then your own control loops provide meaning to that schema. And once you do those two things, you can build any platform you want.
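The pattern Kelsey describes, a schema stored behind an API server plus control loops that give it meaning, can be sketched in a few lines. This is an illustrative sketch only, not real Kubernetes code; the `apply`/`reconcile` names and the dictionaries standing in for etcd and the real world are invented for the example.

```python
# Illustrative sketch (not real Kubernetes code) of the control-loop pattern:
# desired state lives in a config store, and a control loop repeatedly drives
# actual state toward it.

desired = {}   # stands in for objects stored in etcd via the API server
actual = {}    # stands in for the real world (running containers, DNS, etc.)

def apply(name, spec):
    """Declare desired state, as the API server would record it in etcd."""
    desired[name] = spec

def reconcile():
    """One pass of a control loop: make 'actual' converge on 'desired'."""
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actual[name] = dict(spec)   # "create or update" the real resource
    for name in list(actual):
        if name not in desired:
            del actual[name]            # garbage-collect removed resources

apply("web", {"replicas": 3})
reconcile()
print(actual["web"]["replicas"])  # 3
```

The point of the sketch is that the loop is generic: swap out what `desired` entries mean and what `reconcile` does with them, and the same machinery builds a very different platform.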
And I think that's one thing that it takes a while for people to understand about Kubernetes: that the thing we talk about today, for the most part, is just the first system that we built on top of this. Corey: I think that's a very far-reaching story with implications that I'm not entirely sure I'm able to wrap my head around. I hope to see it, I really do. I mean, you mentioned writing Kubernetes The Hard Way, your tutorial, which I'll link to in the show notes. My, of course, sarcastic response to that recently was to register the domain Kubernetes the Easy Way and just re-point it to Amazon's ECS, which is in no way, shape, or form Kubernetes, and basically has the effect of irritating absolutely everyone, as is my typical pattern of behavior on Twitter. But I have been meaning to dive into Kubernetes on a deeper level, and the stuff that you've written, not just the online tutorial but both the books, has always been my first port of call when it comes to that. The hard part, of course, is there's just never enough hours in the day. Kelsey: And one thing that I think about too is the web. We have the internet; there's webpages, there's web browsers. Web browsers talk to web servers over HTTP. There's verbs, there's bodies, there's headers. And if you look at it, that's a very big, complex system. If I were to extract out the protocol pieces, this concept of HTTP verbs, GET, PUT, POST, and DELETE, this idea that I can put stuff in a body and I can give it headers to give it other meaning and semantics. If I just take those pieces, I can build RESTful APIs. Hell, I can even build GraphQL, and those are just different systems built on the same API machinery that we call the internet or the web today. But you have to really dig into the details and pull that part out, and then you can build all kinds of other platforms, and I think that's what Kubernetes is.
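The "extract out the protocol pieces" idea above can be made concrete: a handful of verbs against a generic resource store is enough machinery to model very different systems. This is a minimal sketch with invented names, not any real framework's API.

```python
# Illustrative sketch of "API machinery": a few HTTP-style verbs dispatched
# against a generic resource store. The handler knows nothing about what the
# resources mean, which is exactly why the same machinery is reusable.

store = {}

def handle(verb, path, body=None):
    """Dispatch an HTTP-style request against a generic resource store."""
    if verb == "PUT":
        store[path] = body
        return 200, body
    if verb == "GET":
        return (200, store[path]) if path in store else (404, None)
    if verb == "DELETE":
        return 200, store.pop(path, None)
    return 405, None  # verb not supported

handle("PUT", "/configmaps/app", {"color": "blue"})
status, body = handle("GET", "/configmaps/app")
print(status, body)  # 200 {'color': 'blue'}
```

Whether the paths name config maps, videos, or service-mesh routes, the verbs and the store don't change; that's the sense in which Kubernetes' API machinery is "95% there" for a new control plane.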
It's going to probably take people a little while longer to see that piece, but it's hidden in there, and that's the piece that's going to be, like you said, probably the foundation for building more control planes. And when people build control planes, if you think about it, maybe Fargate for EKS represents another control plane for making a serverless platform that talks the Kubernetes API, even though the implementation isn't what you find on GitHub. Corey: That's the truth. Whenever you see something as broadly adopted as Kubernetes, there's always the question of, "Okay, there's an awful lot of blog posts": getting started with it, learn it in 10 minutes. I mean, at some point, I'm sure there are some people still convinced Kubernetes is, in fact, a breakfast cereal, based upon some of the stuff the CNCF has gotten up to. I wouldn't necessarily bet against it: socks today, breakfast cereal tomorrow. But it's hard to find a decent level of quality; finding a certain quality bar, a trusted source to get started with, is important. Some people believe in the hero's journey, the story of narrative building. I always prefer to go with the moron's journey, because I'm the moron. I touch technologies, I have no idea what they do, and I figure it out and go careening into edge and corner cases constantly. And by the end of it I have something that vaguely sort of works, and my understanding's improved. But I've gone down so many terrible paths just by picking a bad point to get started. So everyone I've talked to who's actually good at things has pointed to your work in this space as being something that is authoritative and largely correct, and coming from some of these people, that's high praise. Kelsey: Awesome. I'm going to put that on my next performance review as evidence of my success and impact. Corey: Absolutely. Grouchy people say, "It's all right," you know; from the right people, that counts.
If people want to learn more about what you're up to and see what you have to say, where can they find you? Kelsey: I aggregate most of my outward interactions on Twitter, so I'm @KelseyHightower and my DMs are open, so I'm happy to field any questions, and I attempt to answer as many as I can. Corey: Excellent. Thank you so much for taking the time to speak with me today. I appreciate it. Kelsey: Awesome. I was happy to be here. Corey: Kelsey Hightower, Principal Developer Advocate at Google. I'm Corey Quinn. This is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts. If you've hated this podcast, please leave a five-star review on Apple Podcasts and then leave a funny comment. Thanks. Announcer: This has been this week's episode of Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com or wherever fine snark is sold. Announcer: This has been a HumblePod production. Stay humble.
Armon Dadgar and Mitchell Hashimoto are long-time open source practitioners. It's that practitioner focus they established as core to their approach when they started HashiCorp about ten years ago. Today, HashiCorp is a publicly traded company. Before they started HashiCorp, Dadgar and Hashimoto were students at the University of Washington. Through college and afterward, they cut their teeth on open source, learning how to build software in the open. HashiCorp's business is an outgrowth of the two's experience as practitioners in open source communities, said Dadgar, co-founder and CTO of HashiCorp, in an interview at the HashiConf conference in Los Angeles earlier this month. Both of them wanted to recreate the asynchronous collaboration that they loved so much about the open source projects they worked on as practitioners, Dadgar said. They knew that they did not want bureaucracy or a hard-to-follow roadmap. Dadgar cited Terraform as an example of their approach. Terraform is HashiCorp's open source infrastructure-as-code software tool, and it reflects the company's model: control the core while providing a good user experience. That experience goes beyond community development and into the application architecture itself. "If you're a weekend warrior, and you want to contribute something, you're not gonna go read this massively complicated codebase to understand how it works, just to do an integration," Dadgar said. "So instead, we built a very specific integration surface area for Terraform." The integration is about 200 lines of code, Dadgar said. They call the integration their core plus plugin model, with a prescriptive scaffold, examples of how to integrate, and the SDK. Their "golden path" to integration is how the company has developed a program that today has about 2,500 providers. The HashiCorp open source model relies on its core and plugin model. On Twitter, one person asked why HashiCorp isn't a proprietary company.
Dadgar referred to HashiCorp's open source approach when asked that question in our interview. "Oh, that's an interesting question," Dadgar said. "You know, I think it'd be a much harder company to scale. And what I mean by that is, if you take a look at, like, the Terraform community or Vault, there's thousands of contributors. And that's what solves the integration problem, right? And so if you said we were proprietary, hey, how many engineers would it take to build 2,000 Terraform integrations? It'd be a whole lot more people than we have today. And so I think fundamentally, what open source helps you solve is the fact that, you know, modern infrastructure has this really wide surface area of integration. And I don't think you can solve that as a proprietary business." "I don't think we'd be able to have nearly the breadth of integration. We could maybe cover the core cloud providers. But you'd have 50 Terraform providers, not 2,500 Terraform providers."
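The core-plus-plugin model described above can be sketched as a small, stable interface that contributors implement while the core owns the lifecycle logic. All names here (`Provider`, `Core`, `FakeCloud`) are invented for illustration; this is not HashiCorp's actual plugin SDK, just the shape of the idea that a narrow integration surface keeps contributions small.

```python
# Illustrative sketch of a "core plus plugin" model: the core engine talks to
# providers only through a deliberately tiny interface, so a contributor can
# add an integration without reading the core's codebase.

class Provider:
    """The narrow surface a weekend contributor has to implement."""
    def create(self, resource):
        raise NotImplementedError
    def delete(self, name):
        raise NotImplementedError

class Core:
    """The core engine: registers plugins and applies changes through them."""
    def __init__(self):
        self.providers = {}
    def register(self, name, provider):
        self.providers[name] = provider
    def apply(self, provider_name, resource):
        # All the hard lifecycle logic would live here, shared by every plugin.
        return self.providers[provider_name].create(resource)

class FakeCloud(Provider):
    """A toy provider: 'creates' resources in an in-memory dict."""
    def __init__(self):
        self.resources = {}
    def create(self, resource):
        self.resources[resource["name"]] = resource
        return resource["name"]
    def delete(self, name):
        self.resources.pop(name, None)

core = Core()
core.register("fakecloud", FakeCloud())
print(core.apply("fakecloud", {"name": "vm-1", "size": "small"}))  # vm-1
```

The design choice being illustrated is the one Dadgar names: thousands of providers become feasible only because each one implements a surface this small.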
Quizzes and games like Roblox are a good way to build your knowledge, whether you're learning to code or becoming a K-pop expert.ICYMI: Listen to our conversation with HashiCorp cofounder Mitchell Hashimoto, who recently returned to an IC role after serving as CEO and CTO.Connect with Jon on LinkedIn or follow him on Twitter.Today's Lifeboat badge goes to user Roko C. Buljan for their answer to Pure CSS 3 image slider—without JavaScript or radio inputs.
Neopets: A little-known gateway into a software career. (Nineties kids will remember.)Among the products Mitchell helped build at Hashicorp: Terraform, Vagrant, and Vault.Not many C-level execs return to IC roles, but you might be surprised how many managers move back to being individual contributors.Follow Mitchell on Twitter here.
Mitchell Hashimoto's company HashiCorp is the creator of Vagrant, Packer, Serf, Consul, Terraform, Vault, and Nomad. Together, these open source projects have over 100,000 stars on GitHub and exist to give engineers consistent workflows to provision, secure, connect, and run any infrastructure for any application.
Cloud Security News this week, 22 October 2021. Hope you have been enjoying your Cloud Security News this week, and in our special third instalment for this week we bring you our best bits from HashiConf Global 2021, the conference held by HashiCorp. HashiCorp is a software company that provides open source tools and products; some of their popular products include Vagrant, Terraform, Vault, and Boundary. You can view the conference and the talks here. The opening keynote was delivered by co-founders Mitchell Hashimoto and Armon Dadgar, and CEO Dave McJannet, with key themes around Zero Trust, hybrid, and multi-cloud, looking to make Zero Trust more accessible for users. Mitchell Hashimoto spoke about the challenges developers face when deploying applications with Kubernetes and how Waypoint assists with this. They also spoke about the HashiCorp Cloud Platform (HCP) and the Packer service, which is now in public beta and free to use. Some of the features highlighted included remediation, enforcing security checks, and maintaining images. Shane Petrich from Target, in his talk "Managing Target's Secrets Platform", spoke about how Target manages and maintains its enterprise deployment of HashiCorp Vault (HashiCorp's secrets management and data protection product): everything from unattended builds to automated maintenance activities and client onboardings. Identity and account access is one of the first things you set up in the cloud, and Austin Burdine, Mike Saraf, and Yates Spearman shared how Red Ventures implemented a custom Terraform solution to automate access management, meeting the requirements of various compliance frameworks. Last year HashiCorp announced Boundary, their secure remote access solution. This year at HashiConf 2021, Susmitha Girumala and Mike Gaffney from HashiCorp showcased what is new in Boundary, with a demo of key capabilities of identity-based access, integrated secrets management with Vault, and dynamic host catalogs.
Mark Guan and Ruoran Wang from Stripe's Service Networking Team spoke about their multi-region service networking tech stack built on Consul (HashiCorp's service networking solution): how it works across AWS accounts and regions, federated multi-region clusters, and running on Kubernetes. They also generously shared the challenges they faced. Episode show notes are on the Cloud Security Podcast website. Podcast Twitter - Cloud Security Podcast (@CloudSecPod). Instagram - Cloud Security News. If you want to watch videos of this LIVE STREAMED episode and past episodes, check out: - Cloud Security Podcast: - Cloud Security Academy:
HashiCorp's co-founder and CTO, Armon Dadgar, joins me for a conversation on Cloud Native Startup. In this episode, we focus on open source and how it serves as the core of HashiCorp's identity. We also explore Armon's journey toward founding HashiCorp with Mitchell Hashimoto and what the future holds as they both lean into their respective passions. Learn a few keys to cultivating a successful open source community, why some companies don't rely on this success, his lessons learned, and more. Highlights: A look at Armon's role as Co-Founder and Chief Technology Officer and what sparked the decision to build HashiCorp alongside Mitchell Hashimoto. (00:13) Armon shares why HashiCorp began as open source and how that developed into a company. (4:46) How an open source community compares to a paid community, and Armon's take on bootstrapping an open source company. (11:23) Why creating an open source project directly from a closed source project is not the best strategy. (14:43) Keys to building a successful open source community and why this is vital to HashiCorp. (18:14) Armon shares lessons learned during the early days of HashiCorp, and his thoughts on the complexities of being a founder. (23:15) How Mitchell's decision to step back to an individual contributor role allows him to focus on his passion, and more on why they chose to monetize HashiCorp. (29:30) Links: Armon Twitter: https://twitter.com/armon GitHub: https://github.com/armon LinkedIn: https://www.linkedin.com/in/armon-dadgar/ HashiCorp Website: https://www.hashicorp.com/ Twitter: www.twitter.com/hashicorp
Video: https://youtu.be/Cxaf8E00GMM Slides: https://docs.google.com/presentation/d/1sJSqNy-t-kVxzrWlqMTp_03nI7Zo8Znr7k0f0C6L9ig/edit?usp=sharing Timestamps: [00:00:00] Intro [00:02:17] Part 1 - Components: Code Organization for Real Apps [00:04:26] What we learned from React [00:07:46] Part 2 - Architecture: Choreography vs Orchestration [00:13:05] Retries and Timeouts [00:14:37] Part 3 - Time: React vs Temporal [00:16:34] Elevator Pitch [00:17:13] Programming Model [00:18:44] Comparing React and Temporal Principles [00:19:11] Live Demo: Amazon One Click Button [00:23:49] Talk Recap [00:24:16] React and Temporal Full Comparison [00:24:42] Conclusion: Enablement. Transcript: [00:00:00] Once again, I want to thank you all for tuning in and joining React New York 2021. Without further ado, I'll pass it on to Shawn. All right, so hi everyone. Hello, React New York. It is my hometown in the US, and I miss everyone back in New York. I am currently based in Seattle, but I'm here to talk about React for the backend. In 2020 I actually thought that I had given my last React talk, because I was all tapped out. I had said everything I wanted to say, and then React New York came by and said, do you want to speak? And I was like, oh, I really wanted to speak for React New York. So here's my presentation about what I've been working on and what I think the parallels have been for React. And I think there are some generalizable lessons, even if you don't end up using Temporal. So, the inspiration for this talk came from Guillermo Rauch, the creator of Next.js. And he was the first person to point out that Temporal.io does to backend and infra what React did to frontend. The Temporal engine is quite complex, much like React, but the surface exposed to developers is a beautiful render function, and I'm a bit upset because he realized this before me, and I have been working on Temporal for a few months now. So, important caveats before I start this talk.
What I'm presenting to you is alpha for TypeScript. Temporal is typically a Go or Java based application, but we're developing TypeScript and hopefully launching it soon. And then finally, "React for the backend" is an analogy, not a design goal. The way I treat this is, it's basically like crabs. One of the most entertaining facts that I've ever found is that nature has apparently tried to evolve crabs five independent times. In fact, there's a word in evolutionary biology for it called carcinization. And of course, this is really good for a lot of memes. So, tired: convergent evolution is not uncommon, especially when species have similar selective pressures in their environments. Wired: everything is crab. And perhaps everything is React, because we have similar design space problems. So I'll tell a little bit of the story through three parts. There's Components, which we'll tell through the story of Uber; Architecture, which we'll tell through the story of YouTube; and Time, which we'll tell through the story of Amazon. So, a lot to cover. I'm going to try to go really fast; don't worry, I'll share the slides on my Twitter later on. Okay. [00:02:17] Part 1 - Components: Code Organization for Real Apps So part one is about components. You see this a lot on YouTube. Probably you're watching now on YouTube or live streaming. And yeah, you know, like a three hour live stream, and that's it. Very cool. I think we know how to break things down, and React has really helped us be more productive by being able to break things down into components and knowing how to compose them together in a predictable way. But there's a lot left unanswered in things like this, in full-stack clones of major well-known apps, which is the hard parts. A typical Uber trip will have all these steps: search, pricing, match, pick-up, drop-off, rating, tipping, payment, email, and so on and so forth.
And typically the naive way of organizing all this is basically one after the other, right? Search goes to pricing, goes to matching, goes to pick-up, goes to drop-off, goes to rating, goes to tipping, goes to payment, goes to email. Imagine that these are all managed by separate teams and scaled independently. Then you realize this is only the happy path. Then you have to throw in a whole bunch of things that can happen along the way. An Uber trip is basically a long-running process with humans in the loop, and humans are very, very messy by nature. So how would you write an Uber clone? Good luck with a lot of the data technologies that you would typically reach for just naively, because you will have to discover all these systems and all these use cases and edge cases along the way. So when people say full stack, they often really mean like this half-drawn horse meme. I think this is particularly funny, so I take every opportunity I can get to show it. But to be honest, a lot of us frontend developers are probably the other way, the half-drawn dragon, where our frontend is very good, and on the backend we'll just, like, you know, stick some stuff on Firebase or something. And in reality, if you look at the backend systems of most companies, especially at scale, they go towards some form of very complex microservice system. I don't have the chart for Uber, but Hailo is probably a good comparison. Netflix, Twitter. And it's not really avoidable. If you want to scale a company to any significant size, you probably have to break it up into independent services, because you're going to ship your org chart anyway. [00:04:26] What we learned from React The thing I realized as a React developer, as a frontend developer, is that actually we had a pretty good run in the past seven, eight years of React, in terms of the fact that frontend developers know how to organize code, at least at the component level.
So we moved from the jQuery era, where everything was just kind of spaghetti all over the place, to at least something more organized, where event handlers are locally tied with renders but essentially managed by React's runtime. So a few key lessons from React that I personally draw [00:05:00] are these. First, you want to have a component and a renderer model. So essentially the user or the developer writes components, and then the React core team writes the renderer, and that handles a lot of the boilerplate that you might typically forget. And this is everything to do with unmounting or having local state, and it gives you a very nice, non-leaky abstraction that you can write against. Second, you can also guarantee work and correctness, which is originally what drew Jordan Walke to make something like React, because he was working on Facebook Messenger, and there was a lot of inconsistent state within Facebook Messenger because of the spaghetti code. So correctness, meaning that we embrace functional programming to produce a virtual DOM: the view is a pure function of state. If you look at old enough React talks, you will see a lot of v = f(d), view as a pure function of data. And finally, the programming model. We like to say that it's just JavaScript; there's no custom templating syntax to learn. All of these three lessons, there are actually a lot more, but these three lessons are where I'm going to focus for this talk. And whenever you tackle any programming paradigm, any framework, any design question, you might want to run it through some of these ideas. So whenever I talk about React principles, I always like to bring up the fact that there's this often overlooked repo called react-basic. And it's actually in the official React organization on GitHub. And this is Sebastian Markbage, who is the tech lead of React. And he wrote down, six years ago, his principles on what he thinks makes up React on a fundamental basis.
No JSX, just: what are the principles that we're designing for? We are designing for a simple, pure transformation, abstraction, composition, state, memoization. The words that he uses are very theoretical sometimes, but you feel it every single day when you write React. So there's a lot else apart from that that React has done for frontend programming. Apart from deterministic renders, we have useState with a reduction of boilerplate around unmounting and child components, the very careful design of composition, and side effects, where we have useEffect or useMemo. And actually a lot of people don't know, I forget my source, I think it's Sophie Alpert, but one-third of the React codebase is actually just normalization of events across browsers, so you don't even have to worry about it, and creating synthetic events for that. They also produce a dev tool and manage a central scheduler, and obviously the success of React over the past five, six years has really been a testament to how great all these decisions have been. If you want to learn more about the talks that I've done and my perspectives on some of these React principles, I've done three talks: one at React Rally, the second at JSConf, and the third at React Summit. So you can check out my YouTube for more conversations on that; I don't have time here. Okay. [00:07:46] Part 2 - Architecture: Choreography vs Orchestration So that was part one, where we talked about Components and the React revolution. So part two, we're going to talk about architecture, one level higher than just components. And I'm going to motivate this with the question of how you would write YouTube. And again, if you look on YouTube for how-to-write-YouTube tutorials, you can get full-stack clones of YouTube, which is pretty impressive, you know, write YouTube in three hours using Firebase. That's very impressive. Unfortunately, the hard parts of YouTube also come in.
And there are a bunch of Googlers who actually went and interviewed YouTube engineers on how YouTube works on the backend. There's a bunch of work that goes on in the background. You need to upload your file, you need to analyze it for metadata, you need to split it up into chunks, you need to process these chunks in parallel, and then you need to stitch them back together. And by the way, in processing you have to produce an array of formats, right? From like 240p to 1440p or something like that. And then you have to stitch all these chunks back into the continuous videos that you actually see and stream. You need to notify subscribers, you need to produce automatic captions, and you need to produce thumbnails. And that is, again, just the happy path. So what about all the other features? For example, YouTube Premiere, which is the scheduled release of a YouTube video, or feeding into the recommendation algorithm, which must be the craziest batch job in the world. And you need to scale this process, whatever you design, to 30,000 hours of video uploaded every hour. That's the sheer amount of volume going onto YouTube today, which is just insane. Any design that you make at scale is going to break in some respect. So I think that's really interesting to consider. And I learned about this, and thought more about this, because I interviewed one of our users, which is Descript (hi! I'm editing this transcript in Descript rn lol). Descript is an audio transcription platform, and their entire business is transcribing audio and then making it easy for you to edit audio. I do it for my podcast every single day and millions of people use it. I think it's really cool. Their problem was that when a user hits transcribe, it kicks off an asynchronous, multi-stage, parallelized process that involves reading and encoding audio, chunk splitting,
external API calls, merging results that may potentially arrive out of order, and then verifying their alignment. So there's a lot of [00:10:00] nuance here that can get really tricky. And if any part of the process fails, you need to try it again. This is typically the kind of architecture that people build up incrementally over time, as they discover all these use cases and then find holes and patch them, because it's too late to rewrite anything. There are a lot of decisions that go into here, and this is normal, this is natural. I think you run into basically the eight fallacies of distributed computing, which were actually discovered back in 1994 by people at Sun Microsystems. I love these cartoons, but they can be a little bit hard to read, so here's a more organized version of them. At the bare minimum, don't forget distributed computing fallacy number one, which is that the network may or may not be reliable, or compute may or may not be reliable. What that means in practice is that when you're calling systems A, B, C, D, E, F, G, you may actually need to introduce hardening layers, because at every point where you cross system boundaries you have a chance of failure, and that multiplies exponentially as you have more and more services tied up in your systems, like we saw for the Uber example, like we saw for the YouTube example. You need to add in timeouts and retries. And what that means is that you need to persist the number of times you timed out, when you timed out, what jobs you timed out. So you need a database every single time, and then you need a scheduler or a timer to say when the next time is that I'm going to try this again. And you need to write this for every service. The maintainer of every service needs to maintain both the code and the infrastructure for this. This is a lot of how I was talking about things when I was exploring the serverless world.
So here's a real-life example from the AWS blog, where they said that you use dead letter queues to replay messages when things like failures occur. This is a fine-looking example until you try to scale it. And then it looks like a complete mess, a complete wreck; it's very hard to keep in your head, and pretty soon, when you're explaining this to your CTO, you look like the Pepe Silvia meme. So the solution that I found is really to have a central orchestrator. Instead of every single system maintainer writing their own API hardening layer, which is a production requirement as you find more and more of these bugs, you should centralize it with a centralized team that takes care of the orchestration of all these different services. And that's, in the business, what we call choreography, which is A to B to C, versus orchestration, which is a central orchestrator coordinating the dance between A, B, and C, and then storing both the infrastructure and the code for the scheduler and the database. There's a really good article on this by Yan Cui on theburningmonk.com, so I highly recommend checking it out, where he talks about choreography versus orchestration with real-life examples that people use in AWS. But it's not specific to any cloud; it's an architecture design pattern. If you start off with choreography, it's really hard to rearchitect to orchestration. I mean, it's possible, because people are doing it, but it's a conscious architectural choice that you might not know you're making if you don't know about it. So a lot of my message here is to tell you that orchestration is a thing. [00:13:05] Retries and Timeouts Also, you want to declaratively put retries and timeouts into your framework. So, for example, this is actually our API. You want to be able to say: all right, here's the default retry policy. Whenever I fire off an activity (an activity is just like an external API call, for example).
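A default retry policy like this can be sketched as plain data plus a function that expands it into a backoff schedule. The field names echo Temporal's retry options, but this is an illustrative sketch, not the actual SDK API.

```javascript
// Sketch of an exponential backoff schedule for a declarative retry
// policy. Field names echo Temporal's retry options but are
// illustrative, not the real SDK surface.
function backoffSchedule(policy) {
  const intervals = [];
  let interval = policy.initialIntervalMs;
  // One wait between each attempt, so maximumAttempts - 1 intervals.
  for (let attempt = 1; attempt < policy.maximumAttempts; attempt++) {
    intervals.push(Math.min(interval, policy.maximumIntervalMs));
    interval *= policy.backoffCoefficient;
  }
  return intervals;
}

const policy = {
  initialIntervalMs: 1000,   // retry every second to start
  backoffCoefficient: 2,     // double the wait each time (exponential)
  maximumIntervalMs: 100000, // cap the wait at 100 seconds
  maximumAttempts: 5,
};
console.log(backoffSchedule(policy)); // [ 1000, 2000, 4000, 8000 ]
```

The point of keeping it declarative is exactly this: changing one coefficient changes the whole schedule, with no retry loops scattered through your service code.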
So when I fire off an activity, I want it to be retried every second if it fails. I need a backoff coefficient, like exponential backoff; this is very similar to the TCP protocol, so that if the endpoint is failing or getting rate limited, I don't keep retrying and build up a DDoS attack on myself; I actually back off and put more and more interval in between, up to some maximum interval, let's say a hundred seconds. And then I give myself a maximum number of attempts, so I can say, all right, I don't want any retries: just have a maximum attempt of one. Or let's say I want a linear backoff and not an exponential one, for whatever reason, and I want to retry a maximum of five times. You want to have this all declarative so that you can tweak it as you understand your system and scale your system. Right? So I think this is a really interesting programming model that just puts retries into the code that you write. And that's only possible when you have a centralized orchestrator, no matter what system, not just Temporal. Okay. So the case that I'm making is really for choreography versus orchestration. And the analogy that I make for front end versus backend is that it's kind of like vanilla JS or jQuery versus React. React has the React runtime as the central orchestrator, orchestrating all the components. And I think that's a really interesting architectural analogy that you can make and learn from React. All right. [00:14:37] Part 3 - Time Part three: time. I'm doing very well on time, I think better than I thought, which means that we'll have time for a live demo, which is really awesome. So let's talk a little bit about Temporal. [00:14:45] What is Temporal? What is Temporal? Temporal is the open source platform for orchestrating highly reliable, mission-critical applications at scale. I love talking a little bit about the history, because our CEO started at Amazon as the tech lead for what became Amazon SQS.
Our [00:15:00] CTO was at Microsoft and was the principal architect of the Durable Task Framework, which became Microsoft's Durable Functions. And then finally they joined Uber and worked on Cadence, which is the open-source version of their workflow orchestration platform, and Cadence became so popular that they spun out and became Temporal. And since then it's been adopted by a lot of well-known, household-name companies, especially in the developer world. There are a lot of people hiring for Temporal developers, which I really like to see, because it's not just being used; it's also creating jobs for people and becoming a desirable skill set. And most recently, last week, we had Netflix presenting about how they used Temporal for their CI/CD. Temporal has three components, or produces three products, that are used together. The main star is Temporal Server, which is comparable to the React runtime; then there's the dev tools, which is the UI you might want to inspect the state of things with; and then the SDKs, which are what you use to code. I think all of those are really comparable to what we have in React, and having been in the React world for a while, it's really amazing to see the analogies. We have exactly the same thing. For me, the real seal of approval comes from Mitchell Hashimoto, who created HashiCorp, saying that without Temporal, they would have spent a significant amount of time rebuilding Temporal. To me that's the best form of validation, because Mitchell is one of the best developers in distributed systems, and he says it's hard and that Temporal does it well. All right. Enough social proof; you want actual facts? I'll just give it to you straight. [00:16:34] Elevator Pitch So: because your workloads, like the YouTube encoding or the Uber journey (and this technology was developed at Uber), are long-running and tie together multiple services.
You want to standardize timeouts and retries, and you want to make it easy for every team to have production-grade retries and timeouts. Because this work is so important, you must never drop any work; you must log all progress. In other words, you must use event sourcing. And finally, because this work is so complex, you want to use general-purpose programming languages instead of domain-specific languages. So you want to model dynamic, asynchronous logic, and then you want to reuse, test, version, and migrate it. So that's the pitch in one screen, but I'll just break down what it means, and then we'll go into a demo. [00:17:13] Programming Model To me, the closest analogy to React is the programming model, because React spends a lot of time on API design, and in the workflow orchestration world there are a lot of JSON- or DAG-based domain-specific languages. So you write a bunch of JSON, or you draw boxes and arrows, boxes and arrows; sometimes you even write XML, which is very interesting as well. What I find with all of these is that they're actually really good for manipulating visually, but they get very tricky when you need programming language constructs like variables, functions, loops, branching statements, and all the things that we've invented in programming languages over the years. If you use "just JavaScript" or "just programming languages", you have all the tooling available. You can use all the libraries that are available. You can use all the testing, code versioning, and quality controls available. If you write your own DSL, you have to rebuild all this dev tooling from scratch for yourself. So that's essentially what this is. Here's an example from one of the big clouds, where this is their workflow orchestrator model, where you write JSON, and it's really hard. It actually goes off the screen, and I couldn't really fit everything on one screen.
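To make the contrast concrete, here is a hypothetical side-by-side: the same two-step flow sketched as a JSON-style DSL (in a comment) versus plain JavaScript, where loops, variables, and data flow come for free. All the function and step names here are made up for illustration.

```javascript
// A JSON/DAG DSL might express "fetch a user, then notify them" as:
//   { "steps": [ { "call": "getUser" },
//                { "call": "notify", "input": "$.steps[0].output" } ] }
// Adding a loop or a branch means inventing new DSL syntax for it.
//
// In "just JavaScript" the same flow is ordinary code. These functions
// are stand-ins for real activities; the names are hypothetical.
async function getUser(id) {
  return { id, email: `user${id}@example.com` };
}
async function notify(email) {
  return `notified ${email}`;
}

async function workflow(userIds) {
  const results = [];
  for (const id of userIds) {               // a plain loop -- no DSL construct needed
    const user = await getUser(id);         // use one result...
    results.push(await notify(user.email)); // ...to drive the next call
  }
  return results;
}

workflow([1, 2]).then(console.log);
// [ 'notified user1@example.com', 'notified user2@example.com' ]
```

Everything here is testable with ordinary test runners and versionable with ordinary code review, which is the point of the pitch.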
And with Temporal, it's literally just JavaScript: you call an endpoint, and you use the result of that endpoint to call other endpoints, for example. It's a very simple example, but built in here are default retry policies that have been worked out. So both of these handle reliability on rails; they just differ in the programming model. And the engineering it takes to maintain one of these SDKs, I'm learning, is very, very immense. So it's really interesting. [00:18:44] Comparing React and Temporal Principles So, again, back to the core principles that we talked about early on from React. React decided on a framework, decided on correctness, and decided on a programming model, and Temporal did the same in a very similar way. The developer writes workflows, and the Temporal core team writes the orchestrator, which is Temporal Server. In terms of correctness, React insists on functional programming; Temporal insists on event sourcing and deterministic workflows. And then the programming model: you want "just JavaScript", or just programming languages, not any custom DSL syntax. [00:19:11] Live Demo: Amazon One Click Button So the final example that I'm going to motivate: I've been trying to progressively reduce the complexity of my examples. We moved from Uber, which is super long-running with a lot of humans in the loop, to YouTube, which is not so much humans in the loop; you upload it once and everything else takes over from there. Now I just want to build one feature, which is a one-click buy button. In React, or on the front end, it's actually super easy. It's a button. That's literally the simplest thing you can possibly do. You put an onClick handler, you're done. If you want to do a one-click buy, you do a setTimeout, and then say, okay, if you want to cancel this within some window (with Amazon it's 30 minutes), you can cancel it.
But if you want to persist it: imagine if someone clicks, closes the browser, then changes their mind, opens the browser again, and it's gone. You're screwed. You don't have any other way to implement one-click [00:20:00] purchases: you need to implement timers on the backend to do this. I was watching this old talk from Joel Spolsky where he talks about the engineering behind the one-click buy button. I put it up on my YouTube because it's such an old talk and I was afraid to link to the timestamp, but you can check it out; it's just a three-minute video where he tells the story of how Amazon moved from the shopping cart to one-click buy. I mean, they still have a shopping cart, but it's that important, because in online e-commerce, even up to today, the abandonment rate for shopping carts is 70%. So imagine if you implement this one feature, you improve your sales by, I don't know, what's the inverse of 70%? Three times. That's really amazing. So I think it's just fascinating, and it's not just about Amazon, it's not about one-click buy. It's about user experience. It's about making things easy and intuitive, and that often involves turning synchronous things into asynchronous things and persisting them so that they proceed in the background. So I have a little demo here. I'm going to go really, really fast, but you can check out the code in temporalio/samples-node. There's the specific path here, but it's basically a Next.js demo where I have a Next.js folder. This is going to be pretty standard for a lot of React developers; hopefully you're familiar with Next.js. It's got some pages and API routes, where I have serverless functions that call and send signals to my workflow functions. I also have a Temporal folder where I have written my workflows and activities. The activities are just little logs, obviously, because they don't interact with any backends, but they could.
And then the workflow coordinates the state of all of these in the background. I can show you the code, but essentially I kick off a one-click buy with a purchase, and then I set a timer and Promise.race it with a five-second wait. If I receive a cancel signal during that timer, it cancels; if not, it goes through and the purchase is confirmed. And what's fascinating about Temporal is that every single step is persisted and automatically saved. In other words, I can sleep for 30 days, I can sleep for a year, I can sleep for five years, and it doesn't matter, because it's all persisted and wakes up automatically. The worker, or the Temporal server itself, can go down; you can just bring it back up again, and it carries on as though nothing happened, because of event sourcing. So I'm going to go ahead and run this. I'm always stressed out about live demos. I did test it before the talk; it's just that whenever I'm streaming, it adds an extra latency thing and that goes haywire. So let's see if I have this demo available. All right. I also want to pull this out, which is the UI layer. These are my test runs. But I have here a one-click purchase UI, and I want to implement this without a shopping cart, but with the ability to cancel within a certain amount of time. So if I click buy, it handles the click and sends a workflow, and that workflow starts in the background and it's running, waiting for the timer to proceed. So I hit the timer, and you can see that a timer started and a timer ended within that five-second window that I specified; obviously I should make it longer if I really wanted to show this off. So this is purchased, and now we've confirmed it.
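The timer-versus-cancel control flow in that workflow can be sketched in plain JavaScript with Promise.race. In the real demo the timer and the signal are durable Temporal constructs (so they survive restarts), but the shape of the logic is the same; names here are illustrative, and the window is shortened from the demo's five seconds.

```javascript
// Sketch of the one-click-buy control flow: race a confirmation timer
// against a cancel signal. In the actual demo these are durable
// Temporal timers and signals; here they are plain promises.
const sleep = (ms) =>
  new Promise((resolve) => setTimeout(() => resolve('confirmed'), ms));

function oneClickBuy(windowMs) {
  let cancel;
  const cancelled = new Promise((resolve) => {
    cancel = () => resolve('cancelled'); // the "cancel signal"
  });
  // Whichever promise settles first decides the purchase outcome.
  const outcome = Promise.race([sleep(windowMs), cancelled]);
  return { outcome, cancel };
}

// Usage: cancel within the window and the purchase never confirms.
const { outcome, cancel } = oneClickBuy(100);
cancel();
outcome.then(console.log); // cancelled
```

The durable version is why closing the browser doesn't lose the purchase: the race lives in the workflow, not in the client.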
But if I click buy again and then click cancel, that also fires off a different workflow, where it receives the cancel signal from me. So I signaled it to cancel, and that's a very useful model as well. So this actually shows off a lot of the core principles of Temporal: you kick off a workflow, you can set durable timers, you can send it human signals, and you can get data out as well with queries. There are a lot of interesting elements behind that, but that's the core demo I wanted to show off. So maybe I'll write a YouTube example, and then I'll go on to an Uber example and be a billionaire. [00:23:49] Talk Recap So ultimately I just want to recap what we covered. We covered components, we covered architecture, we covered time, and these are the three elements on which I wanted to compare React and Temporal, and explain a little bit of how we think about doing the hard parts of making clones of very popular products. Why is it so interesting? It's a little bit like the crabs story, you know. Obviously the founders of Temporal are not front-end developers; they didn't know React at all. [00:24:16] React and Temporal Full Comparison But they independently evolved a lot of the same principles, and I haven't even gone into the full comparison. We talked a little bit about deterministic functions and local state and composition, but we haven't talked about normalization and how that compares, or dev tools. Testing is also a super interesting thing, as is the central runtime. So there's a lot here which I'm fascinated by, and I'm obsessed with applying the lessons from React to things that are not React. [00:24:42] Conclusion: Enablement And I think overall, when I asked my CEO what the core message is that we want to deliver, it's actually about enablement.
We enable people to do things that they're not formally trained to do, because we wrapped it all up in a central runtime or central framework. I've always loved the Alfred North Whitehead quote that [00:25:00] civilization advances by extending the number of things that we can do without thinking about them. My version of it is that B2B software advances by extending the number of jobs we can perform without formal training. And the message overall here is that Temporal lets backend developers, or just general full-stack developers, do distributed systems right. So that's it. I blasted through that; I only took 26 minutes, which is really great for me, because I was worried it would take 50. I'm happy to answer any questions. You can hit me up on Twitter at @swyx, you can read my long-form blog post about why Temporal, and then you can join our mailing list, YouTube, or Slack. Thank you. Alright, thank you very much. I think that was really, really nice, and you went through it quite quickly. When I see the comments, people loved the "because it could fail" moment; it's always like that. So, yeah, thank you for the presentation. I think this is actually the last talk of the event, and I want to thank everyone for joining us, and thanks to all the speakers, of course, for being part of this event, React New York 2021, and the sponsors. I think this would be a good afternoon, I guess, or good night, depending on where you are in the world. Have a good one, everyone.
We've just witnessed an improbable adventure on YouTube, so let's talk about it! ➡️ MY LINKEDIN: https://linkedin.com/in/cchaudier/
What happened last week? We failed to record a normal podcast episode, so we tried something different and put out a Textual Podcast instead: https://findthethread.postach.io/post/textual-podcast Management = FOMO? Some people wind up in management more due to FOMO (Fear Of Missing Out) than because they actually want to do the work — and then the rest of us get to rehabilitate the people who suffered the consequences. Management track is not the only track: Mitchell Hashimoto takes on a new individual contributor role at HashiCorp. https://www.hashicorp.com/blog/mitchell-s-new-role-at-hashicorp Quick News Zoom Bets on AI with $14.7 Billion Deal to Acquire Five9 https://www.bloomberg.com/news/articles/2021-07-19/zoom-to-buy-cloud-service-firm-five9-for-14-7-billion Akamai Edge DNS goes down, takes a chunk of the internet with it https://www.theregister.com/2021/07/22/akamai_edge_dns_outage/ (it's always DNS) A different kind of supply-chain attack: A Defunct Video Hosting Site Is Flooding Normal Websites With Hardcore Porn https://www.vice.com/en/article/qj8xz3/a-defunct-video-hosting-site-is-flooding-normal-websites-with-hardcore-porn Recommendations Dominic Clean install your OS. That's it, that's the recommendation. Follow the show on Twitter @Roll4Enterprise or on our LinkedIn page. Theme music by Renato Podestà. Please send us suggestions for topics and/or guests for future episodes!
Mitchell Hashimoto has announced he is stepping away from a leadership position at HashiCorp to focus on an engineering, individual contributor role. That is a massive change, but I also think it will be great for HashiCorp and for Mitchell himself. Mitchell's post: https://www.hashicorp.com/blog/mitchell-s-new-role-at-hashicorp ----------------------------------------------------------------------------------------------------- Patreon: https://www.patreon.com/nedinthecloud Website: https://nedinthecloud.com Pluralsight: https://app.pluralsight.com/profile/author/edward-bellavance GitHub: https://github.com/ned1313
Mitchell Hashimoto started programming in middle school, teaching himself how to code through open source libraries and zip files he could download on the internet. He is a pilot and owns his own plane, which happens to be a Cirrus. He spends an hour a day studying or practicing flying, and even takes his wife and dog up every now and again, when there is something worth flying to and they can make the oxygen work for the dog. He attended college at the University of Washington in Seattle, which was located equidistant from Amazon, Google, and other cloud-focused infrastructure companies. As you could guess, there was a huge focus on this topic while he was at school, and he was able to gain access to vast resources through his computer lab and research projects. It was these projects that put the ideas in his head on what he could make in order for infrastructure to work better. This is the creation story of HashiCorp. Links Website: https://www.hashicorp.com/ LinkedIn: https://www.linkedin.com/in/mitchellh/ Leave us a review on Apple Podcasts Amazing tools we use: If you want the best publishing platform for your podcast, with amazing support & people - use Transistor.fm Want to record your remote interviews with class? Then, you need to use Squadcast. Code Story uses the 1-click product ClipGain, sign up now to get 3hrs of podcast processing time FREE Credits: Code Story is hosted and produced by Noah Labhart. Be sure to subscribe on Apple Podcasts, Spotify, Pocket Casts, Google Play, Breaker, Youtube, or the podcasting app of your choice. Support this podcast at: https://redcircle.com/code-story/donations Advertising Inquiries: https://redcircle.com/brands
Armon was on the PhD path, poised to become an academic. But he and his college friend Mitchell Hashimoto realized they had an opportunity ahead of them: building technology to support cloud infrastructure. They founded HashiCorp in 2012 and have grown to more than 80 million open source downloads per year, with a company valuation over $5 billion. Armon shares how a stint at a San Francisco startup led to his aha moment, how HashiCorp decided on an open-source model and enterprise customers, and why the company was remote-first long before Covid.
This is truly a 2 for 1 special as newlyweds actress Amy Okuda and Mitchell Hashimoto, founder of the software company HashiCorp, join me in sharing their amazing 9 year relationship journey, what a ride! They had to balance 2 very different and demanding careers in the emotional ups and downs of an acting career and the struggles of a startup company early in dating all while dealing with long distance. How did they go from almost breaking up by putting their careers first to putting each other first? How did and does Mitchell survive Amy's constant poking? They have such a beautiful and funny proposal story with amazing chemistry to top it off! I hope you enjoy their thoughts and insights as much as I am honored for them to join the podcast. Follow and find Amy Okuda and Mitchell Hashimoto here! Instagram: @amyokuda Twitter: @mitchellh Follow for MORE FEELS here: -instagram: instagram.com/rightinthefeelspodcast -tiktok: @rightinthefeelspodcast -twitter: @jackzwchen Don't forget to rate, review and subscribe on itunes! Sharing is caring Email me ANYTHING!: rightinthefeelspod@gmail.com --- Support this podcast: https://anchor.fm/rightinthefeels/support
This time I'm talking to an absolute legend: Mitchell Hashimoto, the CTO of HashiCorp, the company behind a series of brilliant open source products (Terraform, Vault, Nomad, Vagrant) that are highly successful both business-wise and community-wise. I would even argue that HashiCorp is the only company that manages to do that. We talk about: * The path from 7 to 1000+ employees in the last 10 years
There are a few ontologies for describing the phases leaders -- and their startups -- go through, whether it's product-sales-etc. or pioneer to settler. In any case, as companies evolve, so must the leaders -- but can the same person transition across all these phases? When and when not; what are the qualities, criteria, and tradeoffs to be made? In this episode of the a16z Podcast, originally recorded as an internal hallway-style chat (pre-pandemic!), a16z general partner Martin Casado -- who co-founded but decided to remain CTO of Nicira, and previously shared his own journey, lessons learned, and advice for founders about bringing in an external CEO and the question of "to CTO or not to CTO" -- and Armon Dadgar, co-founder (with Mitchell Hashimoto) and CTO of HashiCorp, chat with Sonal Chokshi about managing their past psychology through these common questions and decisions. They also share their strategies on managing the specific tactics behind it all: everything from the "dating" process of finding an external CEO to figuring out swim lanes; handling debates and decisions; who presents, who sells. And while the conversation is a brief glimpse into their longer personal journeys, there are lessons in it for startups and leaders of all kinds on the art of hiring and sales, managing credit and conflict, and more...
Armon Dadgar is the Co-founder and CTO of HashiCorp, a commercial open source company that provides the fundamental building blocks around infrastructure and automation that developers need to create scalable, secure, high-performance applications hosted in the cloud. Valued at over $5 billion and now with over 1,000 employees worldwide, HashiCorp started out life in 2010 as an open source project and has since grown into the de facto standard for modern cloud-computing development. In this episode, which was recorded live at GGV’s annual Evolving Enterprise conference, we learn how Armon and his Co-founder, Mitchell Hashimoto, initially met, what it took to sell to the enterprise, and how to hire the right people as a distributed company.
Mitchell Hashimoto is the Co-founder and CTO of HashiCorp, the leader in multi-cloud infrastructure automation software. In this episode, which was recorded during a GGV live webinar, we learn how Mitchell and his cofounder, Armon Dadgar, built HashiCorp into a 1000-person company valued at over $5B—all with a very distributed team since day one. For companies suddenly managing remote workforces, or for startups just beginning the journey toward growth, how can you scale and succeed far into the future with a fully distributed team? HashiCorp’s experiences scaling into a global enterprise software powerhouse without opening the usual array of offices are especially valuable today, now that so many companies have gone remote almost overnight.
In episode 29 of EnterpriseReady, "Achieving Ubiquity," Grant speaks with Mitchell Hashimoto of HashiCorp. They discuss HashiCorp's founding story, fundraising tactics and pricing models, and monetizing open source projects. (Ep. #29 first appeared on Heavybit.)
Today on the show we have esteemed Kubernetes thought-leader, Kelsey Hightower, with us. We did not prepare a topic as we know that Kelsey presents talks and features on podcasts regularly, so we thought it best to pick his brain and see where the conversation takes us. We end up covering a mixed bag of super interesting Kubernetes-related topics. Kelsey begins by telling us what he has been doing and shares with us his passion for learning in public and why he has chosen to follow this path. From there, we then talk about the issue of how difficult many people still think Kubernetes is. We discover that while there is no doubting that it is complicated, at one point, Linux was the most complicated thing out there. Now, we install Linux servers without even batting an eyelid, and we think we can reach the same place with Kubernetes in the future if we shift our thinking! We also cover other topics such as APIs and the debates around them, and common questions Kelsey gets, before finally ending with a brief discussion on KubeCon. From the attendance and excitement, we saw that this burgeoning community is simply growing and growing. Kelsey encourages us all to enjoy this spirited community and the innovation happening in this space before it simply becomes boring again. Tune in today!

Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues
Hosts: Carlisia Campos, Duffie Cooley, Bryan Liles, Michael Gasch

Key Points From This Episode:
- Learn more about Kelsey Hightower, his background and why he teaches Kubernetes!
- The purpose of Kelsey's course, Kubernetes the Hard Way.
- Why making the Kubernetes cluster disappear will change the way Kubernetes works.
- There is a need for more ops-minded thinking for the current Kubernetes problems.
- Find out why Prometheus is a good example of ops-thinking applied to a system.
- An overview of the diverse ops skillsets that Kelsey has encountered.
- Being ops-minded is just an end – you should be thinking about the next big thing!
- Discover the kinds of questions Kelsey is most often asked and how he responds.
- Some interesting thinking and developments in the backup space of Kubernetes.
- Is it better to back up or to have replicas?
- If the cost of losing data is very high, then backing up cannot be the best solution.
- Debates around which instances are not the right ones to use Kubernetes in.
- The Kubernetes API is the part everyone wants to use, but it comes with the cluster.
- Why the Kubernetes API is only useful when building a platform.
- Can the Kubernetes control theory be applied to software?
- Protocols are often forgotten about when thinking about APIs.
- Some insights into the interesting work Akihiro Suda is doing.
- Learn whether Kubernetes can run on Edge or not.
- Verizon: how they are changing the Edge game and what the future trajectory is.
- The interesting dichotomy that Edge presents and what this means.
- Insights into the way that KubeCon is run and why it's structured in the way it is.
- How Spotify can teach us a lesson in learning new skills!

Quotes:
- "The real question to come to mind: there is so much of that work that how are so few of us going to accomplish it unless we radically rethink how it will be done?" — @mauilion [0:06:49]
- "If ops were to put more skin in the game earlier on, they would definitely be capable of building these systems. And maybe they even end up more mature as more operations people put ops-minded thinking into these problems." — @kelseyhightower [0:04:37]
- "If you're in operations, you should have been trying to abstract away all of this stuff for the last 10 to 15 years." — @kelseyhightower [0:12:03]
- "What are you backing up and what do you hope to restore?" — @kelseyhightower [0:20:07]
- "Istio is a protocol for thinking about service mesh, whereas Kubernetes provides the API for building such a protocol." — @kelseyhightower [0:41:57]
- "Go to sessions you know nothing about. Be confused on purpose." — @kelseyhightower [0:51:58]
- "Pay attention to the fundamentals. That's the people stuff. Fundamentally, we're just some people working on some stuff." — @kelseyhightower [0:54:49]

Links Mentioned in Today's Episode:
- The Podlets on Twitter — https://twitter.com/thepodlets
- Kelsey Hightower — https://twitter.com/kelseyhightower
- Kelsey Hightower on GitHub — https://github.com/kelseyhightower
- Interaction Protocols: It's All about Good Manners — https://www.infoq.com/presentations/history-protocols-distributed-systems
- Akihiro Suda — https://twitter.com/_AkihiroSuda_
- Carlisia Campos on LinkedIn — https://www.linkedin.com/in/carlisia/
- Kubernetes — https://kubernetes.io/
- Duffie Cooley on LinkedIn — https://www.linkedin.com/in/mauilion/
- Bryan Liles on LinkedIn — https://www.linkedin.com/in/bryanliles/
- KubeCon North America — https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2019/
- Linux — https://www.linux.org/
- Amazon Fargate — https://aws.amazon.com/fargate/
- Go — https://golang.org/
- Docker — https://www.docker.com/
- Vagrant — https://www.vagrantup.com/
- Prometheus — https://prometheus.io/
- Kafka — https://kafka.apache.org/
- OpenStack — https://www.openstack.org/
- Verizon — https://www.verizonwireless.com/
- Spotify — https://www.spotify.com/

Transcript:

EPISODE 7

[INTRODUCTION]

[0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn't reinvent the wheel. If you're an engineer, operator or technically minded decision maker, this podcast is for you.

[INTERVIEW]

[00:00:41] CC: Hi, everybody. Welcome back to The Podlets, and today we have a special guest with us, Kelsey Hightower.
A lot of people listening to us today will know Kelsey, but as usual, there are a lot of newcomers in this space. So Kelsey, please give us an introduction. [00:01:00] KH: Yeah. So I consider myself a minimalist. So I want to keep this short. I work at Google, on Google Cloud stuff. I've been involved with the Kubernetes community for what? 3, 4, 5 years, ever since it's been out, and one main goal: learning in public and helping other people do the same. [00:01:16] CC: There you go. You do have a repo on your GitHub that is about learning Kubernetes the hard way. Are you still maintaining that? [00:01:26] KH: Yeah. So every six months or so. So Kubernetes the Hard Way, for those that don't know, is a guide, a tutorial. You can copy and paste. It takes about three hours, and the whole goal of that guide was to teach people how to stand up a Kubernetes cluster from the ground up. So starting from scratch, 6 VMs, you install etcd, all the components, the nodes, and then you run a few test workloads so you can get a feel for Kubernetes. The history behind that was when I first joined Google, we were all concerned about the adoption of such a complex system as Kubernetes, right? Docker Swarm was out at the time. A lot of people were using Mesos, and a lot of the feedback at that time was that Kubernetes is too complex. So Kubernetes the Hard Way was built on the idea that if people understood how it worked, just like they understand how Linux works, because that's also complex, then having seen how the moving pieces fit together, they would complain less about the complexity and have a way to kind of grasp it. [00:02:30] DC: I'm back. This is Duffie Cooley. I'm back this week, and then we also have Michael and Bryan with us. So looking forward to this session talking through this stuff. [00:02:40] CC: Yeah. Thank you for doing that. I totally forgot to introduce who else is in this show, and me, Carlisia.
We didn't plan what the topic is going to be today. I will take a wild guess, and we are going to touch on Kubernetes. I have so many questions for you, Kelsey. But first and foremost, why don't you tell us what you would love to talk about? One thing that I love about you is that every time I hear an interview of you, you're always talking about something different, or you're talking about the same thing in a different way. I love that about the way you speak. I know you offer to be on a lot of podcast shows, which is how we ended up here, and I was thinking, "Oh my gosh! We're going to talk about what everybody is going to talk about, but I know that's not going to happen." So feel free to get a conversation started, and we are VMware engineers here. So come at us with questions, but also what you would like to talk about on our show today. [00:03:37] KH: Yeah. I mean, we're all just coming straight off the heels of KubeCon, right? So this big, 12,000-person get-together. We're super excited about Kubernetes, and the re:Invent event, things are wrapping up there as well. When we start to think about Kubernetes and what's going to happen, a lot of people saw Amazon jump in with Fargate for EKS, right? So for those unfamiliar with that offering, over the years, all the cloud providers have been providing some hosted Kubernetes offering, the idea being that the cloud provider, just like we do with hypervisors and virtual machines, would provide this base infrastructure so you can focus on using Kubernetes. You've seen this even flow down on-prem with VMware, right? VMware saying, "Hey, Kubernetes is going to be a part of this control plane where you can use the Kubernetes API to manage virtual machines and containers on-prem." So at some point now, where do we go from here? There's a big serverless movement, which is trying to eliminate infrastructure for all kinds of components, whether that's compute, databases or storage.
But even in the Kubernetes world, I think there's an appetite, and we saw this with Fargate, to make the Kubernetes cluster disappear, right? If we can make it disappear, then we can focus on building new platforms that extend the API or, hell, just using Kubernetes as is without thinking about managing nodes, operating systems and autoscalers. I think that's kind of been the topic that I'm pretty interested in talking about, because that future means lots of things disappear, right? Programming languages and compilers made assembly disappear for a lot of developers. Assembly is still there. I think people get caught up on nothing goes away. They're right. Nothing goes away, but the number of people who have to interact with that thing is greatly reduced. [00:05:21] BL: You know what, Kelsey? I'm going to have you get out of my brain, because that was the exact example that I was going to use. I was on a bus today and I was thinking about all the hubbub about the whole Fargate EKS thing, and then I was thinking, "Well, Go, for example, can generate assembler and then it compiles that down." No one complains about the length of the assembler that Go generates. Who cares? That's how we should think about this problem. That's a whole solvable problem. Let's think about bigger things.
When someone tells you, "We're going to move on from that," as if it's a good thing, you're going to be like, "Hold up. That's my job." [00:06:36] DC: Definitely. We've touched on this topic a couple of different times on this show as well, and it definitely comes back to understanding that, in my opinion, it's not about whether there will be work for people who are in operations, people who want to focus on that. The real question that comes to mind is: there is so much of that work, so how are so few of us going to be able to accomplish it unless we radically rethink how it will be done? We're vastly outnumbered. The number of people walking onto the internet for the first time every day is mind-boggling. [00:07:08] KH: In the early days, we had this goal of abstracting or automating ourselves out of a job, and anyone that's tried that a number of times knows that you're always going to have something else to do. I think if we carry that to the infrastructure, I want to see the ops folks there. I was very surprised that Docker didn't come from operations folks. It came from the developer folks. Same thing for Vagrant and the same thing for Kubernetes. These are developer-minded folks that want to tackle infrastructure problems. I think if ops were to put more skin in the game earlier on, they're definitely capable of building these systems, and maybe they'd even end up more mature as more operations people put ops-minded thinking into these problems.
I think the Prometheus community is a good example of operations-minded folks producing a system. When you meet kind of the originators of Prometheus, they took a lot of their operational knowledge and built this metrics and monitoring standard that we all kind of think about now when we talk about some levels of observability, and I think that's what happens when you have good operations people that take prior experience, their knowledge, and that can happen through code these days. This is the kind of systems they produce, and it's a very robust and extensible API that I think you start to see a lot of adoption of. [00:08:44] BL: One more thing on Prometheus. Prometheus is six years old. Just think about that, and that's not done yet, and it's just gotten better and better and better. We've got to give up our old thing so we can get better and better and better. That's just what I want to add. [00:08:58] MG: Kelsey, if you look at the – Basically your own history of coming from ops, as I understood your own history, right? Now being kind of one of the poster children in the Kubernetes world, you see the world changing to serverless, to higher abstractions, more complex systems on one hand, but then on the other side, we have ops. Looking beyond or outside the world of Silicon Valley into traditional ops, the traditional large enterprise, what do you think is the current maturity level of these ops people? I don't want to discriminate against anyone here. I'm just basically throwing this out as a question. Where do you think they need to go to keep up with these evolving and higher-level abstractions where we don't really care about nitty-gritty details? [00:09:39] KH: Yes. So this is a good, good question. I spend half of my time on this – so I probably spend time onsite with at least 100 customers a year globally. I fly on a plane and visit them on their home turf, and you definitely meet people at various skill levels and areas of responsibility.
I want to make sure that I'm clear about the areas of responsibility. Sometimes you're hired into an area of responsibility that's below your skillset. Some people are hired to manage batch jobs or to translate files from XML to JSON. That really doesn't say a lot about their skillset. It just kind of talks about the area of responsibility. So shout out to all the people that are dealing with mainframes and having to deal with that kind of stuff. But when you look at it, you have the opportunity to rise up to whatever level you want to be at in terms of your education. When we talk about this particular question, some people really do see themselves as operators, and there's nothing wrong with that. Meaning, they come in, they get a system and they turn the knobs. You give me a mainframe, I will tell you how to turn the knobs on that mainframe. You buy me a microwave, I'll tell you how to pop popcorn. They're not very interested in building a microwave. Maybe they have other things that are more important to them, and that is totally okay. Then you have people who are always trying to push the boundaries. Before Kubernetes, if I think back to 10 years ago, maybe 8, when I was working in a traditional enterprise, like kind of the ones you're talking about or hinting at, the goal had always been to abstract away all of the stuff that it means to deploy an application the right way in a specific environment for that particular company. The way I managed to do it was to say, "Hey, look. We have a very complex change management process." I worked in finance at that time. So everything had to have a ticket no matter how good the automation was. So I decided to make JIRA, their ticketing system, the front door to do everything. So you go to JIRA. There'll be a custom field that says, "Hey, here are all the RPMs that have been QA'd by the QA team. Here are all the available environments." You put those two fields in.
That ticket goes through change management and approval, and then something behind the scenes automated everything; in that case it was Puppet, Red Hat and VMware, right? So I think, if you're in the world of abstracting this stuff away and making it easier for the company to adapt, what most people have been doing is pushing these ideas that we call serverless now. I think the cloud providers put these labels on platforms to describe the contract between us and the consumer of the APIs that we present. But if you're in operations, you should have been trying to abstract away all of this stuff for the last 10 or 15 years. [00:12:14] BL: I 100% agree. Then also, think about other verticals. So 23 years ago, I did [inaudible 00:12:22] work. That was my job. But we learned how to program in C and C++ because we were on old Suns, not even SPARC machines. We were on the old Suns, and we wanted to write things in CDE and we wanted to write our own window managers. That is what we're doing right now, and that's why you see people like Mitchell Hashimoto with Vagrant, and you're seeing how we're pushing this thing. We have barely scratched the surface of what we're trying to do. For a lot of people who are just ops-minded, understand that being ops-minded is just an end. You have to be able to think outside of your boundaries so you can create the next big thing. [00:12:58] KH: Or you may not care about creating the next big thing. There are parts of my life where I just don't care. For example, I pay Comcast to get internet access, and my ops involvement was going to Best Buy and buying a modem and screwing it into the wall, and I troubleshoot this thing every once in a while when someone in the household complains the internet is down. But that's as far as I'm ever going to push the internet boundaries, right? I am not really interested in pushing that forward.
I'm assuming others will, and I think that's one thing in our industry where sometimes we believe that we all need to contribute to pushing things forward. Look, there's a lot of value in being a great operations person. Just be open to the idea that what we operate will change over time. [00:13:45] DC: Yeah, that's fair. Very fair. For me, personally, I definitely identify as an operations person. I don't consider it my life's goal to create new work necessarily, but to expand on the work that has been identified and to help people understand the value of it. I find I sit in between two roles personally. One is to help figure out all of the different edges and pieces and parts of Kubernetes or some other thing in the ecosystem. Second, to educate others on those things, right? Take what I've learned and amplify it. Having the amplifying effect. [00:14:17] CC: One thing that I wanted to ask you, Kelsey, is – I work on the Velero project, and that does backup and recovery of Kubernetes clusters. Some people ask me, "Okay. So tell me about the people who are doing backups." I'm like, "I don't want to talk about that. That's boring. I want to talk about the people who are not doing backups. Okay, let's talk about why you should maybe be thinking about that." Well, anyway. I wonder if you get a lot of questions in the area of Kubernetes operations, or cloud native in general, infrastructure, etc., where in the back of your mind you go, "That's the wrong question," or questions. Do you get that? [00:14:54] KH: Yeah. So let's use your backup example. So I think when I hear questions, at least it lets me know what people are thinking and where they're at, and if I ask enough questions, I can kind of get a pulse on the trend of where the majority of the people are. Let's take the backups question. When I hear people say, "I want to back up my Kubernetes cluster," I rewind the clock in my mind and say, "Wow!
I remember when we used to back up Linux servers," because we didn't know what config files were on the disk. We didn't know where processes were running. So we used to do these ps snapshots and we used to tar up the whole file system and store it somewhere so we could recover it. Remember Norton Ghost? You take a machine and ghost it so you can make it again. Then we said, "You know what? That's a bad idea. What we should be doing is having a tool that can make any machine look the way we want it." Config management was born. So we don't back those up anymore. So when I hear that question I say, "Hmm, what is happening in the community that's keeping people asking these questions?" Because if I hear a bunch of questions that already have good answers, that means those answers aren't visible enough and not enough people are sharing these ideas. That should be my next keynote. Maybe we need to make sure that other people know that that is no longer a boring thing; even though it's boring to me, it's not boring to the industry in general. When I hear these questions, I kind of use them to keep me up-to-date, keep me grounded. I hear stuff like, "How many Kubernetes clusters should I have?" I don't think there's a best practice around that answer. It depends on how your company segregates things, or depends on how you understand Kubernetes. It depends on the way you think about things. But I know why they're asking that question; it's because Kubernetes presents itself as a solution to a much broader problem set than it really is. Kubernetes manages a group of machines typically backed by IaaS APIs. If you have that, that's what it does. It doesn't do everything else. It doesn't tell you exactly how you should run your business. It doesn't tell you how you should compartmentalize your product teams. Those decisions you have to make independently, and once you do, you can serialize those into Kubernetes.
So that's the way I think about those questions when I hear them, like, "Wow! Yeah, that is a crazy thing that you're still asking this question six years later. But now I know why you're asking that question." [00:17:08] CC: That is such a great take on this, because, yes, in the area of backup, the people who are doing backups – yeah, they can figure that out independently of Kubernetes or not. But let's talk about the people who are not doing backups. What motivates you to not do backups? Obviously, backups can be done in many different ways. But, yes. [00:17:30] BL: So think about it this way. Some people don't exercise, because exercise is tough and it's hard, and it's easier to sit on the couch and eat a bag of potato chips than exercise. It's the same thing with backups. Well, backing up my Kubernetes cluster before Velero was so hard that I'd rather just invest brain cycles in figuring out how to make money. So that's where people come from when it comes to hard things like backups. [00:17:52] KH: There's a trust element too, right? Because we don't know if the effort we're putting in is worth it. When people do unit testing, a lot of times unit testing can be seen as a proactive activity, where you write unit tests to catch bugs in the future. Some people only write unit tests when there's a problem. Meaning, "Wow! There's odd data in the database. Maybe we should write a test to prove that our code is putting in odd data. Fix the code, and now the tests pass." I think it's really about trusting that the investment is worth it. I think when you start to think about backups – I've seen people back up a lot of stuff, like every day or every couple of hours, they're backing up their database, but they've never restored the database. Then when you read their root cause analysis, they're like, "Everything was going fine until we tried to restore a 2 terabyte database over a 100 meg link. Yeah, we never exercised that part." [00:18:43] CC: That is very true.
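Kelsey's "write the test after you find the odd data" pattern is a regression test. A minimal Go sketch of the idea; `NormalizeAmount` is a hypothetical function standing in for whatever code was writing the bad rows, not anything from the episode:

```go
package main

import "fmt"

// NormalizeAmount clamps amounts to zero. In this hypothetical scenario,
// the original bug let negative values reach the database.
func NormalizeAmount(cents int) int {
	if cents < 0 {
		return 0
	}
	return cents
}

// checkNeverNegative is the regression check written after the bug was
// found: it pins the fix so the odd data cannot quietly return. In a real
// project this would live in a _test.go file as a func TestXxx(t *testing.T).
func checkNeverNegative() error {
	if got := NormalizeAmount(-42); got != 0 {
		return fmt.Errorf("NormalizeAmount(-42) = %d, want 0", got)
	}
	return nil
}

func main() {
	if err := checkNeverNegative(); err != nil {
		panic(err)
	}
	fmt.Println("regression test passed")
}
```

The point of the pattern is exactly the trust element Kelsey describes: the test proves the fix once, then keeps proving it on every future change.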
[00:18:44] DC: Another really fascinating thing to think about with the backup piece is that, especially in Kubernetes with Velero and stuff, we're so used to having the conversation around stateless applications and being able to ensure that you can redeploy in the case of a failure. You're not trying to actually get back to a known state the way that a backup traditionally would. You're just trying to get back to a running state. So there's a bit of a dichotomy there, I think, for most folks. Maybe they're not conceptualizing the need for having to deal with some of those stateful applications when they start trying to just think about how Velero fits into the puzzle, because they've been told over and over again, "This is about immutable infrastructure. This is about getting back to running. This is not about restoring some complex state." So it's kind of interesting. [00:19:30] MG: I think part of this is also that, for the stateful services, which is why we do backups actually, things have changed a lot lately, right? With those new databases, scale-out databases, cloud services. Thinking about backup also has changed in the new world of being cloud native, which for most of the people is also a new learning experience, to understand: how should I back up Kafka? It's replicated, but can I back it up? What about etcd and all those things? Slightly different things than backing up a SQL database or a more traditional system. So backup, I think, as things become more complex, stays if needed for [inaudible 00:20:06]. [00:20:06] KH: Yeah. The case is what are you backing up and what do you hope to restore? So replication, global replication, like we do with cloud storage and S3. The goal is to give people 11 9s of reliability and replicate that data to almost as many geographies as you can. So it's almost like this active backup. You're always backing up and restoring as part of the system design versus it being an explicit action.
Some people would say the type of replication we do for object stores is much closer to actively restoring and backing up on a continuous basis versus a one-time checkpoint. [00:20:41] BL: Yeah. Just a little bit of a note: you can back up two terabytes over a 100 meg link in about 44 and a half hours. So just putting it out there, it's possible. Just like two days. But you're right. When it comes to backups, especially for, let's say, MySQL or Postgres, these days, is it better to back it up or is it better to have a replica right next to it, and then have a 10-minute delayed replica right next to that, and then replicate to Europe or Asia? Then constantly query the data that you're replicating. That's still a backup. What I'm saying here is that we can change the way that we talk about it. Backups don't have to be as conventional as they used to be. There are definitely other ways to protect your data. [00:21:25] KH: Yeah. Also, I think the other part around the backup thing is: what is the price of data loss? When you take a backup, you're saying, "I'm willing to lose this much data between the last backup and the next." If that cost is too high, then backing up cannot be your primary mode of operation. Because the cost of losing data is way too high, replication becomes a complementing factor in the whole discussion of backups versus real-time replication and shorter times to recovery. I have a couple of questions. When should people not use Kubernetes? Do you know what I mean? I visit a lot of customers, I work with a lot of eng teams, and I am in the camp of Kubernetes is not for everything, right? That's a very obvious thing to say. But some people don't actually practice it that way. They're trying to jam more and more into Kubernetes. So I'd love to get your insights on where you see Kubernetes being the wrong direction for some folks or workloads. [00:22:23] MG: I'm going to scratch this one from my question list to Kelsey.
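Bryan's roughly-44-hour figure checks out as back-of-the-envelope arithmetic, assuming a decimal 2 TB, a fully saturated 100 Mbps link, and no protocol overhead. A quick Go sketch of the calculation:

```go
package main

import "fmt"

// transferHours returns the idealized time in hours to move sizeBytes
// over a link of linkBitsPerSec: total bits divided by link speed,
// ignoring protocol overhead and retransmits.
func transferHours(sizeBytes, linkBitsPerSec float64) float64 {
	return sizeBytes * 8 / linkBitsPerSec / 3600
}

func main() {
	// 2 TB (decimal, 2e12 bytes) over a 100 Mbps (1e8 bits/sec) link
	fmt.Printf("%.1f hours\n", transferHours(2e12, 100e6)) // prints "44.4 hours"
}
```

Real restores run longer than this ideal, which is exactly why Kelsey's "we never exercised that part" root-cause story stings.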
[00:22:26] KH: I'll answer it too then. I'll answer it after you answer it. [00:22:29] MG: Okay. Who wants to go first? [00:22:30] BL: All right. I'll go first. There are cases when I'm writing a piece of software where I don't care about the service discovery. I don't care about ingress. It's just software that needs to run. When I'm running it locally, I don't need it. If it's simple enough where I could basically throw it into a VM through a cloud-init script, I think that is actually lower friction than Kubernetes, if it's simple. Now, I'm also a little bit jaded here, because I work for the dude who created Kubernetes, and I'm paid to create solutions for Kubernetes, but I'm also really pragmatic about it as well. It's all about effort for me. If I can do it faster with cloud-init, I will. [00:23:13] DC: For my part, I think that there's – I've got a couple of follow-on questions to this real quick. But I do think that if you're not actively trying to develop a distributed system, something where you're actually making use of the primitives that Kubernetes provides, then that already would kind of be a red flag for me. If you're building a monolithic application, or if you're in that place where you're just rapidly iterating on a SaaS product and you're just trying to get as many commits on this thing until it works and really rapidly prototype or even create this thing, maybe Kubernetes isn't the right thing, because although we've come a long way in improving the tools that allow for that iteration, I certainly wouldn't say that we're all the way there yet. [00:23:53] BL: I would debate you on that, Duffie. [00:23:55] DC: All right. Then the other part of it is, Kubernetes aside, I'm curious about the same question as it relates to containerization. Is containerization the right thing for everyone, or have we made that pronouncement, for example?
[00:24:08] KH: I'm going to jump in and answer this one, because I definitely think we need a way to transport applications in some way, right? We used to do it on floppy disks. We used to do it on [inaudible 00:24:18]. The container, I treat as a glorified [inaudible 00:24:23]. That's the way I've been seeing it for years. Registries store them. They replace [inaudible 00:24:28]. Great. Now we kind of have a more universal packaging format that can handle the simple use cases, scratch containers where it's just your binary, and the more complex use cases where you have to compose multiple layers to get the output, right? I think RPM spec files used to do something very similar when you started to build those things in [inaudible 00:24:48], "All right. We got that piece."

Do people really need them? The thing I get wary about is when people believe they have to have Kubernetes on their laptop to build an app that will eventually deploy to Kubernetes, right? If we took that thinking to the cloud, then everyone would be trying to install OpenStack on their laptop just to build an app. Does that even make sense? Does that make sense in that context? Because you don't need the entire cloud platform on your laptop to build an app that's going to take a request and respond. With Kubernetes, I guess because it's easier to put on your laptop, people believe that it needs to be there. So I think Kubernetes is overused, because people just don't quite understand what it does.

I think there are cases where you don't use Kubernetes, like: I need to read a file from a bucket. Someone uploaded an XML file and my app is going to translate it into JSON. That's it. In that case, this is where I think functions as a service, something like Cloud Run or even Heroku, makes a lot more sense to me, because the operational complexity is hidden within the provider and is linked almost like an SDK to the overall service, which is the object store, right?
The compute part, I don't want to make a big deal about, because it's only there to process the file that got uploaded, right? It's almost like a plug-in to an FTP server, if you will. Those are the cases where I start to see Kubernetes become less of a need, because why do I need a custom platform to do such an obvious operation?

[00:26:16] DC: There are those applications that require the primitives that Kubernetes provides: service discovery, the ability to define ingress in a normal way. When you're actually starting to figure out how you're going to platform that application with regard to those primitives, I do see the argument for having Kubernetes locally, because you're going to be using those tools locally and remotely. You have some way of defining what that platforming requirement is.

[00:26:40] KH: So let me pull on that thread. If you have an app that depends on another app, typically we used to just have a command line flag that says, "This app is over there." Localhost when it's on my laptop. Some DNS name when it's in the cluster. Or a config file can satisfy that need. So the need for service discovery usually arises when you don't know where things are. But if you're literally on your laptop, you know where the things are. You don't really have that problem. So when you bring that problem space to your laptop, I think you're actually making things worse.

I've seen people depend on Kubernetes service discovery for the app to work. Meaning, they just assume they can call a thing by name, and they don't support IPs and ports. They don't support anything, because they say, "Oh! No. No. No. You'll always be running in Kubernetes." You know what's going to happen? In 5 or 10 years, we're going to be talking like, "Oh my God! Do you remember when you used to use Kubernetes? Man! That legacy thing.
I built my whole career porting apps away from Kubernetes to the next thing." The number one thing we'll talk about is where people leaned too hard on service discovery, or people who built apps that talk to config maps directly. Why are you calling the Kubernetes API from your app? That's not a good design. I think we've got to be careful coupling ourselves too much to the infrastructure.

[00:27:58] MG: That's a fair point. Two answers from my end to your question. One is, I just built an appliance which basically tries to bring an AWS Lambda experience to the vSphere ecosystem. My approach is that I don't want ops people who need to do some one-off things, like connect this guy to another guy, to have to learn Kubernetes for that. It should be as simple as writing a function. So for that appliance, we had to decide how to build it, because it should be scalable, and we might have some function-as-a-service component running on there. So we looked around and we decided to put it on Kubernetes. We built the appliance as a traditional VM with Kubernetes on top. For me as a developer, it gave me a lot of capabilities, like the self-healing capabilities. But it's also a fair point that you raised, Kelsey, about how much we depend on, or write our applications to depend on, those auxiliary features from Kubernetes, like self-healing and restarts, for example.

[00:28:55] KH: Well, in your case, you're building a platform. I would hate for you to tell me that you rebuilt a Kubernetes-like thing just for that appliance. In your case, it's a great use case. I think the problem that we have as platform builders is: what happens when things start leaking up to the user? You tell a user all they have to care about is functions. Then they get some error saying, "Oh!
There's some Kubernetes security context that doesn't work." I'm like, "What the hell is Kubernetes?" That leakage is the problem, and I think that's the part where we have to be careful. It will take time, but we can't start leaking the underlying platform and making the original goal untrue.

[00:29:31] MG: The point where I wanted to throw this question back is: now these functions are written as simple scripts, whatever, and the operators put them in. They run on Kubernetes, but the operators don't know that they run on Kubernetes. So going back to your question, when should we not use Kubernetes? Is it me writing at a higher-level abstraction, like a function? I'm not using Kubernetes in the first sense, because I don't actually know I'm using it. But under the covers, I'm still using it. So it's kind of an answer and not an answer to your question because –

[00:29:58] KH: I've seen these single-node appliances. There's only one node, right? They're only there to provide, like, email at a grocery store. You don't have a distributed system. Now, what people want is the Kubernetes API: the way it deploys things, the way it swaps out a running container for the next one. We want that Kubernetes API. Today, the only way to get it is by essentially bringing up a whole Kubernetes cluster. I think the K3s project is trying to simplify that by re-implementing Kubernetes. No etcd, SQLite instead. A single binary that has everything. So when we start to ask what Kubernetes is, there's the implementation, which is a big distributed system, and then there's the API. I think what's going to happen is, if you want the Kubernetes API, you're going to have so many more choices on the implementation that make better sense for the target platform. So if you're building an appliance, you're going to look at K3s. If you're a cloud provider, you're going to probably look at something like what we see on GitHub, right? You're going to modify it and integrate it into your cloud platform.
[00:31:00] BL: Maybe what happens with Kubernetes over the next few years is what happened with the Linux API. Firecracker and gVisor did this, and WSL did this: we can basically swap out Linux from the backend because we can just emulate the calls. Maybe that will happen with Kubernetes as well. Maybe Kubernetes will become a standard, where there's the Kubernetes standard and then the Kubernetes implementation that we have right now. I don't even know about that one.

[00:31:30] KH: We're starting to see it, right? When you say, "Here is my pod," we can just look at Fargate for EKS as an example. When you give them a pod, their implementation is definitely different than what most people are thinking about running these days, right? One pod per VM, not using Virtual Kubelet. So they've taken that pod spec and tried to uphold its meaning. But the problem with that is you get leaks. For example, they don't allow you to bind to a host port. Well, the pod spec says you can bind to a host port. Their implementation doesn't allow you to do it, and we see the same problem with gVisor. It doesn't implement all the system calls. You couldn't run the Docker daemon on top of gVisor. It wouldn't work. So I think it works as long as we don't leak, because when we leak, then we start breaking stuff.

[00:32:17] BL: We're doing the same thing with Project Pacific here at VMware, where this concept of a pod is actually a virtual machine that boots in like a tenth of a second. It's pretty crazy how they've been able to figure that out. If we can get this right, that's huge for us. That means we can move out of our appliance and we can create better things that actually work. If I'm VMware-specific, I can use that. If I'm on AWS and I want this namespace, I can use Fargate and EKS. That's actually a great idea.

[00:32:45] MG: I remember this presentation, Kelsey, that you gave. I think two or three years ago.
It might be three years, where you took the Kubernetes architecture and you removed the boxes, and the only thing remaining was the API server. This is where it clicked for me, like, "This is right," because I had been focused on the scheduler. I wanted to understand the scheduler. But then you zoomed out, or you stripped off all these pieces, and the only thing remaining was the API server. This is where it clicked for me. It's like [inaudible 00:33:09], or like the syscall interface. It's basically my API to do some crazy things that I would have had to write on my own, in assembly or something, before I could even get started. That was the breakthrough moment for me, that specific presentation.

[00:33:24] KH: I'm working on an analogy to talk about what's happening with the Kubernetes API, and I haven't refined it yet. But when the web came out, we had all of these HTTP verbs: PUT, POST, GET. We have a body. We have headers. You can extract that out of the whole web, the web browser plus the web server. Once you extract out that one piece, then instead of building web pages, we can build APIs and GraphQL, because we can reuse many of those mechanisms, and we just call those RESTful interfaces. Kubernetes is going through the same evolution, right? The first thing we built was this container orchestration tool. But if you look at the CRDs, the way we do RBAC, the way we think about the status field in a custom object, if you extract those components out, then you end up with these Kubernetes-style APIs where we start to treat infrastructure not as code, but as data. That will be the RESTful moment for Kubernetes, right? With the web, we extracted it out, and then we had REST interfaces. In Kubernetes, once we extract it out, we'll end up with this declarative way of describing maybe any system. But right now, the perfect match is infrastructure: infrastructure as data, and using these CRDs to allow us to manipulate that data.
So maybe you start with Helm, and then Helm gets piped into something like Kustomize. That then gets piped into an admission controller. That's how Kubernetes actually works, and that data-model approach to API development, I think, is going to be the unique thing that lasts longer than the Kubernetes container platform does.

[00:34:56] CC: But if you're talking about – correct me if I misinterpret it – platform as data. Data to me is meant to be consumed, and I have actually been thinking, since you said, "Oh, developers should not be developing apps that connect directly to Kubernetes," or I think you said the Kubernetes API. Then I was thinking, "Wait. I've heard so many times people saying that that's one great benefit of Kubernetes, that the apps have that access." Now, if you see my confusion, please clarify it.

[00:35:28] KH: Yeah. Right. I remember early on, when we were doing config maps, there was a big debate about how config maps should be consumed by the average application. One way could be: let's just make a config map API and tell every developer that they need to import a Kubernetes library to call the API server, right? Now everybody's app no longer works on your laptop. So we were like, "Of course not." What we should do is have config maps be injected into the file system. That's why you can actually describe a config map as a volume and say, "Take these key values from the config map, write them as normal files, and inject them into the container so you can just read them from the file system." The other option was environment variables. You can take a config map and translate it into environment variables. And lastly, you can take those environment variables and put them into command line flags. So the whole point is to support all three of the most popular ways of configuring an app: environment variables, command line flags and files.
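A minimal sketch of those three delivery paths from the app's side: one lookup that works whether the value arrived as a CLI flag, an environment variable, or a file (e.g. a ConfigMap key mounted as a volume). The precedence order (flag beats env beats file) is a common convention, not something Kubernetes mandates, and all names here are made up:

```python
# Sketch: resolve a setting from any of the three popular sources KH lists.
# The app never touches the Kubernetes API; it just reads flags, env, files.
import os

def get_setting(name: str, flags: dict, config_dir: str, default: str) -> str:
    if name in flags:                       # 1. command-line flag wins
        return flags[name]
    env_val = os.environ.get(name.upper())  # 2. environment variable
    if env_val is not None:
        return env_val
    path = os.path.join(config_dir, name)   # 3. file (ConfigMap key as a file)
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    return default                          # 4. built-in default (laptop case)
```

The same binary then runs unchanged on a laptop (defaults, flags) or in a cluster (ConfigMap injected as files or env vars), which is exactly the decoupling being described.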
Kubernetes molded itself into that world so that developers would never tightly couple themselves to the Kubernetes API. Now, let's say you're building a platform, like a workflow engine like Argo, or a network control plane like Istio. Of course you should use the Kubernetes API. You're building a platform on top of a platform. I would say that's the exception to the rule: if you're building a platform. But for a general application that's leveraging the platform, I really think you should stay away from the Kubernetes API directly. You shouldn't be making syscalls directly [inaudible 00:37:04] of your runtime, like the unsafe package in Go. Once you start doing that, Go can't really help you anymore. You start pinning yourself to specific threads. You're going to have a bad time.

[00:37:15] CC: Right. Okay. I think I get it. But you can still use Kubernetes to decouple your app from the machine by using objects to generate those dependencies.

[00:37:25] KH: Exactly. That was the whole benefit of Kube, and Docker even, saying, "You know what? Don't worry too much about cgroups and namespaces. Don't even try to do that yourself." Because remember, there was a period of time when people were actually trying to build cgroups and network namespaces into the runtime. There were a bunch of Ruby and Python projects that were trying to containerize themselves within the runtime. Whoa! What are we doing? Having that second layer now, with containerd and runc, we don't have to implement that 10,000 times for every programming language.

[00:37:56] DC: One of the things I want to come back to is your point about the Kubernetes API being one of the more attractive parts of the project, and people needing that to move forward in some of these projects. I wonder if it's more abstract than that. I wonder if it's abstract enough to think about in terms of level-triggered versus edge-triggered stuff.
Taking control theory, the control theory that basically makes Kubernetes such a stable project, and applying that to software architecture, rather than necessarily bringing the entire API with you. Perhaps what you should take from this is the lessons that we've learned in developing Kubernetes, and apply those to your software.

[00:38:33] KH: Yeah. I had the fortune to spend some time with Mark Burgess. He came out with Promise Theory, and Promise Theory is the underpinning of Puppet, Chef, Ansible, CFEngine: this idea that we would make promises about something and eventually converge to that state. The problem was, with Puppet, Chef and Ansible, we were basically doing this with shell scripts and Ruby. We were trying to write all of these if/else statements. When those didn't work, what did you do? You made an exec statement at the bottom, and then you were like, "Oh! Just run some bash, and who knows what's going to happen?" In those early implementations of Promise Theory, we didn't own the resource we were making promises about. Anyone could go behind it and remove the user, or the user could have a different user ID on different systems but mean the same thing.

In the Kubernetes world, we push a lot of those if/else statements into the controller. We force the API not to have any code. That's the big difference. If you look at the Kubernetes API, you can't do if statements. In Terraform, you can do if statements, so you kind of fall into the imperative trap at the worst moments, when you're doing dry runs or something like that. It does a really good job of it. Don't get me wrong. But the Kubernetes API says, "You know what? We're going to go all-in on this idea." You have to change the controller first and then update the API. There are no escape hatches in the API. So it forces a discipline that I think gets us closer to the promises, because we know that the controller owns everything. There's no way to escape in the API itself.
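The level-triggered, converge-to-promised-state idea discussed above can be reduced to a toy reconcile loop: desired state is pure data with no code in it, and all the if/else logic lives in the controller. Everything here (the `replicas` field, the one-step nudge) is illustrative, not how any real controller is implemented:

```python
# Toy level-triggered reconciliation: the controller repeatedly observes the
# full actual state and nudges it toward the declared desired state.
def reconcile(desired: dict, actual: dict) -> dict:
    """One reconciliation pass: return the next actual state."""
    nxt = dict(actual)
    replicas = actual.get("replicas", 0)
    if replicas < desired["replicas"]:
        nxt["replicas"] = replicas + 1   # scale up one step
    elif replicas > desired["replicas"]:
        nxt["replicas"] = replicas - 1   # scale down one step
    return nxt

def run_to_convergence(desired: dict, actual: dict, max_steps: int = 100) -> dict:
    # Level-triggered: each pass re-reads the whole state, so a missed event
    # doesn't matter; the loop converges from whatever state actually exists.
    for _ in range(max_steps):
        nxt = reconcile(desired, actual)
        if nxt == actual:
            break
        actual = nxt
    return actual
```

The key property is that the desired state carries no logic at all, which is the "no escape hatches in the API" discipline in miniature.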
[00:40:07] DC: Exactly. That's exactly what I was pushing for.

[00:40:09] MG: I have a somewhat related question, and I'm just not sure how to frame it correctly. Yesterday I saw a good talk by someone talking about protocols, the somewhat forgotten power of protocols in the world of APIs. We've got Swagger. We've got API definitions. But he made the very simple point that if I give you an open, a close, a write and a read method in an API, you still don't know in which sequence to call them, or which one to call first. It's the same for [inaudible 00:40:36] library, if you look at that. So I always have to ask myself, "Should I do anything [inaudible 00:40:40], or am I not leaking some stuff?" So I look it up. Whereas with protocols, if you look at the RFC definitions, they are very, very precise, and they very plainly outline what you should do, how you should behave, how you should communicate between these systems. This is more about communication and less about the actual implementation of an API. I still have to go through that talk again, and I'm going to put it in the show notes. But this opened my mind again a little bit, to think more about communication between systems and contracts and promises, as you said, Carlisia. Because we make so many assumptions in our code, especially as we have to write a lot of stuff very quickly, which I think will make things brittle over time.

[00:41:21] KH: So the gift and the curse of Kubernetes is that it tries to do both all the time. For some things, like a pod or a deployment, we all feel that. If I give any Kubernetes cluster a deployment object, I'm going to get back a running pod. This is what we all believe. But the thing is, it may not necessarily run on the same kernel. It may not run on the same OS version. It may not even run on the same type of infrastructure, right? This is where I think Kubernetes ends up leaking some of those protocol promises. A deployment gets you a set of running pods.
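MG's open/close/read/write example can be sketched as a tiny state machine: the API is just the four method names, while the protocol is the set of legal orderings, written down explicitly. The class, states, and transition table here are purely illustrative:

```python
# Toy illustration: an API alone doesn't tell you the legal calling order;
# a protocol does. Here the allowed order is encoded as an explicit
# state-transition table, and violations fail loudly.
class FileLikeProtocol:
    # state -> {operation: next_state} for operations legal in that state
    TRANSITIONS = {
        "closed": {"open": "opened"},
        "opened": {"read": "opened", "write": "opened", "close": "closed"},
    }

    def __init__(self):
        self.state = "closed"

    def call(self, op: str) -> None:
        allowed = self.TRANSITIONS[self.state]
        if op not in allowed:
            raise RuntimeError(
                f"protocol violation: {op!r} is not legal in state {self.state!r}"
            )
        self.state = allowed[op]
```

Reading before opening raises immediately, which is the information a method list on its own never gives you.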
But then we drop down to a point where you can actually do your own API and build your own protocol. I think you're right. Istio is a protocol for thinking about service mesh, whereas Kubernetes provides the API for building such a protocol.

[00:42:03] MG: Yeah, good point. [inaudible 00:42:04].

[00:42:04] DC: On the Fargate stuff, I saw a really interesting article – actually, an interesting project by [inaudible 00:42:10], and I want to give him a shout out on this, because I thought it was really interesting. He wrote an admission controller that leverages the autoscaler, node affinity and pod affinity to effectively do the same thing, so that whenever a new pod is created, it will spin up a new machine and associate only that pod with that machine. I was like, "What a fascinating project." But also just seeing this come up from the whole Fargate ECS stuff, I was like –

[00:42:34] KH: I think that's the thread that Virtual Kubelet is pulling on, right? This idea that you can simplify autoscaling if you remove that layer, right? Because right now we're trying to do this musical chairs dance. Like in a cloud, imagine if someone gave you the hypervisor and told you you're responsible for attaching the hypervisor workers and the VMs. It would be a nightmare. We wouldn't be talking about autoscaling the way we do in the cloud. I think Kubernetes is moving into a world where it's one pod per resource envelope. Today we call them VMs, but I think at some point we're going to drop the VM and we'll just call it a resource envelope. VMs are how we think about that today. Firecracker is asking, "Hey, does it really need to be a complete VM?" and saying, "No. It doesn't. It just needs to be a resource envelope that allows you to run that particular workload."

[00:43:20] DC: Yeah. Same thing we're doing here. It's just enough VM to get you to the point where you can drop those containers onto it.

[00:43:25] CC: Kelsey, question. Edge?
Kubernetes on edge. Yes or no?

[00:43:29] KH: Again, compute on edge has been a topic for discussion forever. The problem is, when some people say compute on edge, they mean, like, go buy some servers from Dell and put them in some building as close to your property as you can. But then you have to go build the APIs to deploy to that edge. What people want, and I don't know how far off it is, but Kubernetes has set the bar so high that the Kubernetes API comes with a way to load balance, attach storage, all of these things, by just writing a few YAML files. What I hear people saying is, "I want that as close to my data center or store as possible." When people say Kubernetes on the edge, that's what they're saying: what we currently have at the edge is not enough. We've been providing edge for a very long time. Remember OpenStack? "Oh! We're going to do OpenStack on the edge." But then you're a pseudo cloud provider without the APIs. I think what Kubernetes is bringing to the table is that we have to have a default load balancer, a default block store, a default everything, in order for it to mean Kubernetes at the edge the way it does today, centralized.

[00:44:31] BL: Well, stores have been doing this forever in some form or another. 20 years ago I worked for a duty-free place, and literally traveled all over the world replacing point-of-sale systems. You might think of a point of sale as a cash register. There was a computer in the back, and there were RS-232 links from the cash registers to the computer in the back. Then there was dial-up, or a [inaudible 00:44:53] line, to our central thing. We've been doing edge for a long time, but now we can do edge where the central facility actually manages the compute infrastructure. All they care about is basically CPU, memory, network and storage now, and it's a lot more flexible. The journey is long, but I think we're going to do it.
It's going to happen, and I think we're almost there. People are definitely experimenting.

[00:45:16] KH: You know what, Carlisia? You know what's interesting now, though? I was watching the re:Invent announcements. Verizon is starting to allow these edge components to leverage 5G for the last mile, and that's a game-changer. Most people are very skeptical about 5G being able to provide the same coverage as 4G, because of the wavelength and point-to-point, all of these things. But for edge, this thing is a game-changer. Higher bandwidth, but shorter distance. This is exactly what edge wants, right? Now you don't have to dig up the ground and run fiber from point to point. So if you combine these Kubernetes APIs with concepts like 5G, and get that in closer to people, yeah, I think that's going to change the way we think about regions and zones. That kind of goes away. We're going to move closer to CDNs, like Cloudflare has been experimenting with their Workers technology.

[00:46:09] DC: On the edge stuff, I think there's also an interesting dichotomy happening, right? There's the definition of edge that we referred to, which is the store stuff, and one that you're alluding to, which is that there may be some way of actually having edge capability, a point of presence, in a 5G tower or some point like that. In some cases, edge means data gravity. You're taking a bunch of data from sensors, and you're trying to store it in a place where you don't have to pay the cost of moving all of the data from one point to another so that you can centralize compute. So in those edge cases, you're actually willing to invest in high-end compute to allow for the manipulation of that data where that data lake is, so that you can afford to move it into some centralized location later.
But I think that whole space is so complex right now, because there are so many different definitions and so many different levels of constraints that you have to solve for under one umbrella term, which is the edge.

[00:47:04] KH: I think Bryan was pulling on that with the POS stuff, right? Because instead of you going to buy your own cash register and gluing everything together, that whole space got so optimized that you can just buy a Square terminal, plug it into some Wi-Fi, and there you go, right? You now have that thing. So once we start to do this for ML capabilities, security capabilities, I think you're going to see that POS-like thing expand, and that computer get a little bit more robust to do exactly what you're saying, right? Keep the data local. Maybe you ship models to that thing so that it can get smarter over time, and then upload the data from the various stores over time.

[00:47:40] DC: Yup.

[00:47:40] MG: One last question from my end, switching gears a bit, if you allow it. KubeCon. I left KubeCon with some mixed feelings this year. But my perspective is different, because I'm not the typical attendee, one of the 12,000 people, most of whom were newcomers, actually. So I looked at them and I asked myself, "If I were new to this huge world of the CNCF and Kubernetes and all this stuff, what would I take from it?" I would be confused. Confused by everything from the [inaudible 00:48:10] talks, which make it sound like it's so complex to run all these things, to the keynotes, which seemed to be just a lineup of different projects that I'd all have to get through and install and run. I was missing some perspective and some clarity from KubeCon this year, especially for newcomers. Because I'm afraid that if we don't retain them, attract them, and maybe make them contributors, because that's another big problem, we'll lose the base that is using Kubernetes.
[00:48:39] BL: Before Kelsey says anything – and Kelsey was a KubeCon co-chair before I was, but I was a co-chair this time – I can tell you exactly why everything is like it is. Fortunately and unfortunately, this cloud native community is huge now. There's lots of money. There are lots of people. There are lots of interests. If we went back to the KubeCon in San Francisco years ago, or even the first Seattle one, that was a community event. We could make the event for the community. Now, there's the community, the people who are creating the products. There are the end users, the people who are consuming the products. And there are the big corporations and companies, the people who are actually financing this whole entire thing. We have to balance all three of those. As a person who just wants to learn, what are you trying to learn? Are you learning from the consumption piece? Are you learning to be a vendor? Are you learning to be a contributor? We have to think about that. At a certain point, that's good for Kubernetes. That means we've been able to do the whole chasm thing. We've crossed the chasm. This thing is real. It's big. It's going to make a lot of people a lot of money one day. But I do see the issue for the person who's trying to come in and say, "What do I do now?" Well, unfortunately, it's like anything else. Where do you start? Well, you've got to take it all in. You need to figure out where you want to be. I'm not going to be the person who tells you, "Well, go do a SIG." That's not it. What I want to tell you is, like anything else we have to learn that's really hard, whether it's a programming language or a new technique: figure out where you want to be, and you're going to have to do some research. Then hopefully you can contribute. I'm sure Kelsey has opinions on this as well.

[00:50:19] KH: I think Bryan is right. I mean, I think there's just a pyramid happening. At the very bottom, we're new.
We need to get everybody together in one space, and it becomes more of a tradeshow, like an introduction, like a tasting, right? When you're hungry, you go and just taste everything. Then when you figure out what you want, that becomes your focus, and that's going to change every year for a lot of people. Some people go from consumer to contributor, and they're going to want something different out of the conference. They're only going to want to go to the contributor day and maybe some of the deep-dive technical tracks. You're trying to serve everybody in two or three days, so you're going to have everything pulling at your attention. I think what you've got to do is commit. If you go and you're a contributor, or you're someone who's building on top, you may have to find a separate event to go with it, right? Someone told me, "Hey, when you go to all of these conferences, make sure you don't forget to invest in the one-on-one time." Me going to Oslo and spending an evening with Mark Burgess, really talking about Promise Theory outside of competing for attention with the rest of the conference. When I go, I like to meet new people. Sit down with them. Out of the 12,000 people, I call it a win if I can meet three new people that I've never met before. You know what? I'll do a follow-up hangout with them to go deeper in some areas. So I think it's more of a catch-all. It definitely has a tradeshow feel now, because it's big and there's a lot of money and opportunity involved. But at the same time, you've got to know that, "Hey, you've got to go and seek out." You go to Spotif
S01 E05: The Week's Notes. Hosted by @domix and @_marKox. July 21, 2019.

News roundup:
Linkerd 2.4: Traffic Splitting and SMI (https://linkerd.io/2019/07/11/announcing-linkerd-2.4/)
Flagger 0.17 release (https://twitter.com/stefanprodan/status/1151162591856812035?s=21)
Unleashing Predator: Precomputed Data Repositories (https://objectcomputing.com/news/2019/07/18/unleashing-predator-precomputed-data-repositories)
Multi-Cloud Service Mesh Networking for Humans (https://www.hashicorp.com/resources/multi-cloud-service-mesh-networking-for-humans)
Tweet by Mitchell Hashimoto (https://twitter.com/i/web/status/1151543328615227392)
Kubernetes: Understanding Pods vs. Containers (https://speakerdeck.com/thockin/kubernetes-understanding-pods-vs-containers)

Awesome code repos:
Nginx Admin's Handbook (https://github.com/trimstray/nginx-admins-handbook)
Library/SDK for creating operators for Kubernetes and OpenShift (https://github.com/jvm-operators/abstract-operator)
Dive: a tool for exploring each layer in a Docker image (https://github.com/wagoodman/dive)
HashiCorp has embraced the multi-cloud, and in this episode, Corey asks Founder and CTO Mitchell Hashimoto to explain how that’s working out. From Terraform’s humble beginnings to the answer to “why HCL?” Hashimoto explains what makes HashiCorp tick, and why it continues to do so.
Dave McJannet joined HashiCorp as CEO about three years ago when the company was approximately 30 people. Today the company employs more than 400 people, the company’s value has grown more than 20x, and customer adoption for both the company’s open source platform and enterprise products has exploded. Prior to joining HashiCorp, Dave ran marketing at GitHub and HortonWorks, and earlier in his career spent time at VMware, Microsoft and webMethods. In this episode, we dig into HashiCorp’s growth and how it balances open source communities and enterprise revenue models, Dave’s journey to becoming the CEO and his relationship with HashiCorp co-founders Mitchell Hashimoto and Armon Dadgar, and the secret to selling software to large enterprises. Episode Highlights: 02:36: What’s it like to join a startup as a CEO when you’re not the founder? What are some of the challenges? 05:23: What made you think you were the right person for the CEO role at HashiCorp? 06:40: How do you split things up between yourself and the two co-founders? 08:05: How do you manage disagreements? 08:54: How does your open source business model work? 10:40: Are there key indicators or metrics that you use to monitor the health of the business? 12:05: How do you think about time and resource allocation between the open source and commercial sides of the business? 13:11: How much time are the founders spending with open source communities vs. commercial customers? 14:15: How do you prioritize what's going to be part of the open source roadmap and what you're going to keep for commercial? 15:50: What do you need to do to be successful in selling to the enterprise as a young company? 17:39: In selling to the enterprise, is it just fake it till you make it? 19:21: How big a deal is support when working with enterprise customers? 20:05: What value do you get from your user conference? Do you recommend annual events for users? 22:08: As you scale through different revenue phases, what have you had to re-tool? 
What’s on your mind next? 25:07: What's your favorite book that you recommend for founders? 25:33: If you were an investor or board member in a Series A or B company, what's the one piece of advice you'd give to the founder? 26:12: What's a company that you admire and why?
In stdout.fm episode 23, we talked about April Fools' Day, Japan's new era name, and book scanning. Participants: @seapy, @raccoonyy, @nacyo_t Guest: @ecleya The stackoverflow April fools : programming Mitchell Hashimoto on Twitter: “Terraform is finally achieving its true purpose. …” Provider Purpose | terraform-provider-dominos (Japanese) How we got pizza delivered when you search “I’m hungry” on Google Pizza for bitcoins? zozi (the Itsukushima Shrine person) on Twitter: “There was a talk about delivery by drone, …” Binggrae Samanco ad - TV CF “Samanco Strikes Back” - YouTube Why I’m Ditching iOS and Becoming an Android Developer | raywenderlich.com An April Fools’ Google Doodle with the “Naver logo”? - The Hankyoreh Bixby Speaker Impressions! - YouTube Introducing Warp: Fixing Mobile Internet Performance and Security Deep packet inspection - Wikipedia “The Heisei recession era is over”: Japan cheers the new era name “Reiwa” (Japanese) The new era name, decided - Qiita unicode-range - CSS: Cascading Style Sheets | MDN chisaka on Twitter: “Fun points in handling the new era name (Reiwa)… “ Ruby 2.6.3 released by nurse · Pull Request #2023 · ruby/www.ruby-lang.org Unicode 12.1.0 FUJITSU Image Scanner ScanSnap iX500 - Fujitsu Korea Hyundai Office YG-858 A4 product overview - Danawa (price comparison) - Danawa.com Write the Docs Seoul Meetup #1: Explorable Explanations - Speaker Deck Adobe Acrobat Pro DC: PDF converter, convert PDFs from anywhere. Do you constantly buy books but never read them? - BBC News Korea AppleScript - Wikipedia Detect Text (OCR) | Cloud Vision API Documentation | Google Cloud Amazon Textract | Text and data extraction | AWS Mi Global Home GoodReader Documents “Brunch copy” - Twitter Search FUJITSU Image Scanner ScanSnap SV600 - Fujitsu Korea Google Books Adds Hand Scans | TechCrunch Smart & Professional Book Scanner with Digitization Solutions - CZUR Scanning 300 pages in under 20 minutes? Smart scanner: Aura | Wadiz Rewards - crowdfunding BFS-Auto: High Speed Book Scanner at over 250 pages/min - YouTube Exclusive - A staggering 80,000 pages a year… the Seoul National University “scan slave” scandal Aaron Swartz - Wikipedia O’Reilly Media - Technology and Business Training World Literature App - a bulk notice email sent to addresses registered during open-partner recruitment - Pastebin.com Terms of Service, Article 7 (Suspension of Service) - Ridi
In stdout.fm episode 19, we talked about the Terraform 0.12 beta, Datadog APM, Elastic APM, and the Hackers crowdfunding campaign. Participants: @seapy, @raccoonyy, @nacyo_t “Streaming” to push their idols to #1… charts bruised by obsessive fans - Chosun.com - Entertainment > K-pop Write The Docs Seoul’s first 2019 meetup | Festa! Sticker Mule: Custom stickers that kick ass 미성출력 Terraform 0.12 beta 1 release and a look at the improved HCL syntax | 44bits.io Announcing Terraform 0.12 Beta 1 Terraform 0.12-compatible providers | HashiCorp Releases HashiCorp Terraform 0.12 Preview LaTeX - Wikipedia Metafont - Wikipedia HashiCorp on Twitter: “Terraform 0.12 is coming later this summer. …” hashicorp/hcl2: Temporary home for experimental new version of HCL Mitchell Hashimoto on Twitter: “Congratulations @GitHub on launching Actions! …” Release v2.0.0 · terraform-providers/terraform-provider-aws Which editor supports Terraform best? Writing Terraform code with JetBrains IntelliJ | 44bits.io emacs.dev vim.dev 달물이 on Twitter: “I developed a Korean-style MBTI. …” 어엉부엉 on Twitter: “A revised developer MBTI made with the help of my Twitter friends… “ Modern monitoring & analytics | Datadog New Relic | Real-time insights for modern software Next-generation application performance monitoring | Datadog Open source APM with Elasticsearch | Elastic Datadog - Watchdog amatsuda/jb: A simple and fast JSON API template engine for Ruby on Rails Datadog - Notebooks Metricbeat: Lightweight metric collector | Elastic Soonson Kwon on Twitter: “News that Steven Levy’s Hackers is being reprinted via crowdfunding. …” Hackers - YES24 Facebook - Hanbit Media: Goodbye, Hackers. Hackers isn’t only English-language books. … Hackers: A Record of Madness and Secrets (Samin Books) - YES24 Aladin used books - Hackers: The Computer Geniuses Who Changed the World (unabridged edition)
Like most 12-year-old boys, Mitchell Hashimoto played a lot of videogames. But he never liked the repetitive parts of games like Neopets, where players feed and care for virtual animals. "I used a lot of bot software that other people wrote to play the more mundane parts for me, so I could do the fun stuff," he says. Those bots were often blocked by gamemakers, so Hashimoto taught himself to program and created his own bot.
HashiCorp was founded seven years ago with the goal of building infrastructure tools for automating cloud workflows such as provisioning, secret management, and service discovery. HashiCorp’s thesis was that operating cloud infrastructure was too hard: there was a need for new tools to serve application developers. HashiCorp founders Mitchell Hashimoto and Armon Dadgar began releasing… The post Scaling HashiCorp with Armon Dadgar and Mitchell Hashimoto appeared first on Software Engineering Daily.
Mitchell turned his hobby into a business that now serves 100 of the Fortune 500 companies. In this episode, he talks about how he grew the HashiCorp open source community, monetized an open-source product, and decided to bring on a CEO. Mitchell Hashimoto is best known as the creator of Vagrant, Packer, Terraform and Consul. Mitchell is the co-founder of HashiCorp, a company that builds powerful and elegant DevOps tools. He is also an O’Reilly author. He is one of the top GitHub users by followers, activity, and contributions. “Automation obsessed,” Mitchell solves problems with as much computer automation as possible. Highlights from the episode: 3:31 How did you get to where you are now? 5:30 How did you know when your hobby should be a company? 6:55 How did you find your co-founder? Why did you think a co-founder made sense? What are the positives and negatives of being best friends with your co-founder? 9:07 When you left your job, what was your vision for the company and how has it changed to your vision today? 10:05 How did you nurture and grow the HashiCorp API community? 12:08 How did you transition from an open source project to a commercialized one? How does it change your job as a founder? 14:06 Within your open source customer base, how do you identify which customers to monetize? 17:08 How did you and your co-founder decide to bring on a CEO? 21:01 How do you run a distributed business? What are some of the challenges and how have you dealt with that? What are some tools you use to overcome the lack of proximity for people? 24:21 Tell us about your user conference. Why did you decide to do an annual conference so early in the life of the company and what benefits have you seen from doing it? 26:08 As your company has scaled from 5 to 300 people, how has it changed your relationship with your customers and what burden do you feel? 28:15 What is your product philosophy and how do you share that with the product and engineering teams? 
30:35 Favorite book, blog, or piece of content? 31:04 What do you believe that not many others believe? 31:39 What is your hobby? How do you recharge?
In June of this year, Consul 1.2 was released. The release expanded Consul’s capability around service segmentation (controlling who and how services connect East and West). On this week’s podcast, Wes and Mitchell discuss Consul in detail. The two discuss Consul’s design decisions around focusing on user space networking, layer 4 routing, Go, Windows performance characteristics, the roadmap for eBPF on Linux, and an interesting feature that Consul implements called Network Tomography. The show wraps with Mitchell’s discussion of some of the research that HashiCorp is doing around machine learning and security with Consul. Why listen to this podcast: - Consul is first and foremost a centralized service registry that provides discovery. While it has a key-value store, it is Consul’s least important feature. With the June release (1.2), Consul entered more into the space of a service mesh with the focus on service segmentation (controlling how you connect and who can connect). - HashiCorp attempts to limit language fragmentation in the company and has seen a lot of success leveraging Go across its platforms. Therefore, Consul is written in Go. - Because Consul focused on layer 4 first, it is recommended to leverage the recent integration with Envoy for achieving high degrees of observability. - All of the network routing with Consul happens in user space at this point; however, kernel space routing with eBPF is planned for the near term. The focus, at this point, is safely cross-compiling to every platform and addressing the most possible use cases. The focus isn’t on the high-performance use cases (yet). - For any two servers across the globe in different data centers, Consul can instantly give you the 99th-percentile round-trip time between them, using a feature called Network Tomography.
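The Network Tomography feature discussed above builds on Vivaldi-style network coordinates: each node maintains a small synthetic coordinate, and the distance between two coordinates estimates round-trip time without a live probe. A minimal sketch of that estimate, assuming field names that mirror Consul's `/v1/coordinate/nodes` API (the sample coordinates below are made up, and the real algorithm has more refinements):

```python
import math

def estimate_rtt(a, b):
    """Estimate round-trip time (seconds) between two nodes from their
    Vivaldi network coordinates. Each coordinate is a dict with a position
    vector ("Vec", in seconds), a "Height" term modeling access-link
    latency, and a per-node error "Adjustment" term."""
    # Euclidean distance between the position vectors
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a["Vec"], b["Vec"])))
    # Heights account for the latency of each node's access link
    rtt = dist + a["Height"] + b["Height"]
    # Adjustments correct systematic error; never report a negative RTT
    return max(rtt + a["Adjustment"] + b["Adjustment"], 0.0)

# Made-up coordinates for two nodes in different data centers
a = {"Vec": [0.01, 0.02], "Height": 0.001, "Adjustment": 0.0}
b = {"Vec": [0.04, 0.06], "Height": 0.002, "Adjustment": 0.0}
print(round(estimate_rtt(a, b) * 1000, 1))  # estimated RTT in milliseconds
```

In practice you would fetch the coordinates from an agent rather than hand-writing them, or simply run `consul rtt <node1> <node2>` and let Consul do the math.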
More on this: Quick scan our curated show notes on InfoQ https://bit.ly/2S3ZiSx You can also subscribe to the InfoQ newsletter to receive weekly updates on the hottest topics from professional software development. bit.ly/24x3IVq Subscribe: www.youtube.com/infoq Like InfoQ on Facebook: bit.ly/2jmlyG8 Follow on Twitter: twitter.com/InfoQ Follow on LinkedIn: www.linkedin.com/company/infoq Check the landing page on InfoQ: https://bit.ly/2S3ZiSx
In this episode, Rachel, Kendall, and Mitchell discuss: * Mitchell's first real job working at an Apple retail store, where the employee customer service training made a big impact * How the empathy training from that experience has translated well to most of his subsequent experience in the working world * Digging into why customers are angry, and finding a place to be genuinely empathetic * How this translates to the employee-employer relationship * How we can't not talk about Mitchell's job at a smoothie place * Learning to interact with clients as a software consultant and as a leader via building things in the open source community * Mitchell's origins as a self-taught programmer while his parents limited his time on the computer * Working around that by PRINTING OUT SOURCE CODE TO STUDY IT, WTF?! * Having to level up at management fast after founding HashiCorp via reading a lot of books, hiring experienced, "professional" managers * How hard it is to scale management process beyond 300 or so people, can't personally address every issue * That people are much more complex than computers--computers are much easier to understand * The value of emphasizing shared culture when your company is fully distributed * Wondering why Mitchell is cagey about his answer to whether he'd prefer to fight a crowd of duck-sized horses or one horse-sized duck...perhaps this is related to his level of introversion? 
:) * How leading and understanding extroverted people is harder than you might think, especially when it comes to personnel reviews * The nature of video calls and introversion and how working from home allows you to recharge in between calls * Learning to be responsible with people when you are in an authority position--for example, do not message people on Friday saying you want to talk with them on Monday without any details (when you just need to talk to them about nothing) * How this totally works the other way for Kendall--freaking out about an employee wanting to talk * That someone having authority over you can be comforting if you trust them * The benefits of viewing titles as job descriptions and not "who you're the boss of" * Using (and getting caught using) management techniques with your partner You can find Mitchell by name on GitHub and Twitter, and you can email him and he'll do his best to respond :) Special thanks to Mel Stanley for our theme music
DESCRIPTION Welcome to episode 76 of the I.T. Career Energizer, a weekly podcast where your host Phil Burgess chats with inspiring I.T. professionals, consultants and experts from around the world. On today's episode Phil interviews founder of HashiCorp, creator of Vagrant, Packer, Serf, Consul, Terraform, Vault, and Nomad, Mitchell Hashimoto. Mitchell is a passionate engineer and professional speaker whose goal is to make the best DevOps tools in the world. Mitchell is also an O’Reilly author and a top GitHub user in terms of followers, activity and contributions. To find out more about the podcast visit itcareerenergizer.com And remember to “Subscribe” to get new episodes automatically downloaded to your device KEY TAKEAWAYS (00.59) Phil introduces Mitchell and asks him to tell us more about himself and his I.T. career. Mitchell eagerly responds to state that for the last 6 years he has been hyper-focused on building out HashiCorp and the open source tools around it. He names DevOps tools such as Vagrant, Packer, Serf, Consul, Terraform, Vault, and Nomad to give context to his focus. (01.47) Phil asks Mitchell to share a unique career tip with the audience. Mitchell gives the advice of time. He mentions that you should always schedule time, not to go on holiday but to try new things related to what you're trying to achieve. In Mitchell’s case this is exploring new cloud-based tools to see how they work and how to maintain them in real life projects. Mitchell mentions that by taking this time you can become inspired and keep abreast of the latest technology, and this can help in your job (03.13) After hearing Mitchell’s advice, Phil asks if he feels that this helps him stay motivated and enthused by what he does. Mitchell responds saying that this time away can keep you innovative and up-to-date with the latest trends (05.53) As a result of hearing Mitchell's worst mistake in his career, Phil asks him to share with the audience his career highlight. 
Mitchell says that a change in thinking came about with the development of 'Vault', a secrets management tool. It's different to other management tools and now has lots of adoption in businesses and Fortune 500 companies (07.07) Phil asks Mitchell to share with the audience what excites him about the I.T. industry. Mitchell says that I.T. infrastructure is being viewed more and more as disposable machines, and as a result it improves stability and makes everything more resilient (09:48) Phil begins the Reveal Round and asks Mitchell what first attracted him to a career in I.T.? Mitchell says that he has always gotten a kick out of watching things work the way they are supposed to. Mitchell saw I.T. as an opportunity to automate vast fleets of servers to do things for him (12:38) Phil asks Mitchell to share the career objectives that he's currently focusing on. Mitchell says he is continuing to work on HashiCorp, to continue building confidence in the I.T. industry, and to better integrate Vault and Terraform into more complete packages (16:37) Finally Phil asks Mitchell to share a last piece of advice with the I.T. Career Energizer audience. Mitchell responds to say, focus on automation as we’re nowhere near the limit. 
There’s no future without automation, and anytime you are doing a manual task, consider how it could be automated BEST MOMENTS (03.09) "Find time to step away from being in the trenches to motivate and inspire yourself” (05.20) "It was misconfigured checks on our IP server that caused no traffic to be sent to the ad server" (06.45) Mitchell Hashimoto mentions that: "Google made an announcement without telling HashiCorp of their integration with Vault" (06:58) Speaking about 'Vault', Mitchell states that "The thing I'm most proud of is not its success, it's a different way of thinking" (10.21) "What attracted me to I.T. was the huge automation problem waiting to be solved" (10.28) "The best career advice I've received is to 'do everything'; don't focus on one thing and be the best at it, it's the time to do everything and understand what you really love" (17.10) Mitchell confidently states that: “those that have dedicated themselves to automation are wildly successful” ABOUT THE HOST Phil Burgess, an I.T. consultant, mentor, and coach, is the creator and host of the I.T. Career Energizer Podcast. His podcast continues to inspire, assist and guide anybody wanting to start, develop and grow a career in I.T. by inviting successful I.T. professionals, consultants, and experts to share their advice, career tips and experiences. CONTACT THE HOST Website: itcareerenergizer.com LinkedIn: www.linkedin.com/in/philburgess ABOUT THE GUEST Mitchell Hashimoto is best known as the creator of Vagrant, Packer, Terraform and Consul. Mitchell is the founder of HashiCorp, an open-source software company based in San Francisco, California. HashiCorp provides open source tools and commercial products that enable developers, operators and security professionals to provision, secure, run and connect distributed application infrastructure CONTACT THE GUEST LinkedIn: https://www.linkedin.com/in/mitchellh/ Twitter: https://twitter.com/mitchellh @mitchellh Website: https://www.hashicorp.com/
In this episode we talk to Mitchell Hashimoto, founder and co-CTO of HashiCorp, and Paul Banks, software engineer on the Consul team. We get a little insight into the personalities and backgrounds of both guests, and, most importantly, we learn some amazing things about the new feature in Consul called Connect: why we need a service mesh, and how it helps with security for dynamically scheduled and legacy applications. https://www.consul.io
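To make the Connect feature concrete, here is a minimal sketch of the kind of payload an agent accepts at `PUT /v1/agent/service/register` to enroll a service in the mesh with a managed sidecar proxy. The service name and port are illustrative assumptions, and the exact field names should be checked against the current Consul API docs:

```python
import json

# Hypothetical service definition: "web" on port 8080 joins the Connect mesh.
payload = {
    "Name": "web",
    "Port": 8080,
    "Connect": {
        # An empty SidecarService asks Consul to manage a default sidecar
        # proxy, so traffic to and from "web" is mutually TLS-authenticated.
        "SidecarService": {},
    },
}

# Serialized for a PUT to /v1/agent/service/register on a local agent.
print(json.dumps(payload, indent=2))
```

The appeal for legacy applications is that the sidecar proxy handles the mTLS handshake, so the application itself needs no code changes to participate in the mesh.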
Mitchell Hashimoto (http://codemonkey.fm/guests/mitchell-hashimoto), Founder and CTO of HashiCorp (https://hashicorp.com), joins us to discuss whether code is art, designing open source code for varying audiences, and AWS CodeStar. Credits Opening Music: Another beek beep beer please (http://freemusicarchive.org/music/Rolemusic/gigs_n_contest/rolemusic_-_gigs_n_contest_-_03_Another_beek_beep_beer_please) by Rolemusic (http://freemusicarchive.org/music/Rolemusic/) Special Guest: Mitchell Hashimoto.
Join us as we have a great chat with HashiCorp founder, Mitchell Hashimoto. This is a good opportunity to become familiar with the various products in the HashiCorp ecosystem and to learn the importance of composable infrastructure and how HashiCorp products have come to be as a result of this.
In this exciting episode, ActualTech Media partners David Davis and James Green interview Mitchell Hashimoto of HashiCorp fame to find out how HashiCorp came to be, understand the way Mitchell sees the data center evolving, and hear about what's in the works at HashiCorp!
Aaron and Brian talk to Mitchell Hashimoto (@mitchellh, Founder of @HashiCorp) about the lack of VC funding for an open source Death Star, customer interest in Microsoft Azure, the need for bigger/faster schedulers, developer patterns and Zero-Trust Datacenter. Show Links: This show is sponsored by Intel Cloud For All Get a free book from O'Reilly media or use promo code PCBW for a discount on all books HashiCorp Homepage HashiCorp Adds Microsoft Azure Support Nomad 1,000,000 Container Challenge Topic 1 - It’s been a little over a year since you were last on the show. Things are changing really fast in the market. What are you seeing in the market? Topic 2 - HashiCorp recently made all of their tools Microsoft Azure compatible. Where are you seeing interest from your customers in Azure? Topic 3 - A few months ago, Docker showed that Swarm could support 30,000 containers. Cloud Foundry supposedly supports 25,000 containers. And then Nomad came out with 1M containers. Why does anyone need that many containers? Topic 4 - There is a lot of dogma in the market about “platforms”. Are we getting close to a big market shakeout or consolidation, or will the confusion continue for a while? Topic 5 - You always give us some perspective on a broader way to view application management. We don’t talk much about security on the show because we’re clueless, so give us some insight.
“SaaS, whether we want it or not, in enterprise technology or in our data centers, is coming.” Application delivery has become more complex as software architectures have moved into the cloud. Data center infrastructure has turned into code to be manipulated, and software engineering teams are adjusting their strategies. HashiCorp is a company that builds… The post Automating Infrastructure at HashiCorp with Mitchell Hashimoto appeared first on Software Engineering Daily.
02:28 - Jerod Santo Introduction Twitter GitHub Blog Object Lateral, Inc. 02:44 - Adam Stacoviak Introduction Twitter GitHub Blog 02:55 - The Changelog and Beyond Code (Background) 5by5 Wynn Netherland The Changelog #172: GitUp, Git UX, and More with Pierre-Olivier Latour The Changelog #145: 10+ Years of Rails with DHH The Changelog #139: The Rise of io.js with Mikeal Rogers The Changelog #155: The Future of Node.js with Scott Hammond JavaScript Jabber #147: io.js with Isaac Schlueter and Mikeal Rogers 13:50 - The Corporatization of Open Source 16:00 - Sharing Stories of Fascinating People and Choosing Conferences Gophercon 21:21 - Differences Between Communities Ron Evans Ruby Rogues Episode #141: Teaching Kids with Ron Evans 24:54 - What are The Changelog and Beyond Code’s future plans? OSCON Strange Loop Questions: Who is your programming hero? If you had to relearn how to code all over again, what would you tell yourself? What’s the most exciting thing in software right now? 31:57 - Interview Wishlist 20 Years of Ruby with Matz 20 Years of JavaScript with Brendan Eich Bill O’Reilly Linus Torvalds 10 Years of Git with Junio Hamano Apple: ResearchKit Sara Chipps 35:43 - Origin Stories Mitchell Hashimoto The Changelog #180: Otto, Vagrant, and Automation with Mitchell Hashimoto LEGO League Picks Personality Insights (Coraline) The Art of Empathy: A Complete Guide to Life's Most Essential Skill by Karla McLaren (Coraline) FamilySearch (Chuck) Snow (Chuck) DISTRICT Roasters (Adam) The Balvenie (Adam) Lismore Scotch (Adam) The Elixir Fountain (Jerod) Robot or Not? (Jerod) Song Exploder (Jerod)
The Hot Aisle is hosted by Brent Piatti (@brentpiatti) and Brian Carpenter (@intheDC). Joining us this episode is Mitchell Hashimoto (@mitchellh), Co-Founder of HashiCorp – creators of super cool and open source projects like Vagrant, Packer, Serf, Consul, Terraform, Vault, and more. Learn about how they use Atlas to tie those projects together to better enable […]
Mitchell Hashimoto joined the show to talk about HashiCorp’s new tool - Otto, how it compares to and complements Vagrant, automation, and we even talked to Mitchell about his history with software development in the beginning of the show.
HashiCorp's first-ever user conference, HashiConf 2015, held in Portland, Oregon, sold out in six weeks, according to HashiCorp founder Mitchell Hashimoto. “It's worked out really well,” he says of the two-day, two-track event. During the conference, Mitchell joined Alex Williams, Donnie Berkholz and Kelsey Hightower to discuss HashiCorp's announcements coming out of the event, and that discussion is on this episode of The New Stack Analysts. Watch on YouTube: https://youtu.be/Fso5yNGeStA Learn more at: https://thenewstack.io/tns-analysts-show-62-at-hashiconf-2015/
“Go is not just a programming language; it is a way of doing software development.” “It's very precise.” “It's gotten popular enough that there are people who hate it.” “It is pragmatic, painfully pragmatic.” “It's from the stone ages.” “It's a nicer C.” “There's no future without Go.” The New Stack founder Alex Williams was merely asking his guests to talk about what changes they'd noticed from a year ago. Thus begins this episode of The New Stack Analysts podcast, captured at GopherCon in Denver, with guests Alex Polvi, CoreOS CEO and Founder, and Mitchell Hashimoto, CEO and Founder of HashiCorp. Watch on YouTube: https://youtu.be/2I5iJ1X2WnE Learn more at: https://thenewstack.io/tns-analysts-show-51-hashimoto-and-polvi-on-the-painfully-pragmatic-go-and-defining-google-for-everyone/
Our guest on the podcast this week is Mitchell Hashimoto, Founder of HashiCorp. We discuss golden images which ensure that every copy of a replicated environment is identical. Image creation has historically caused organizational problems, however, new technologies utilize configuration management tools that are DevOps friendly creating an easy and valuable process that enables teams to iterate often and deliver a consistent product to end users. Listen in to learn more.
02:08 - Noah Gibbs Introduction Twitter GitHub Blog 02:38 - Rebuilding Rails: Understand Rails by Building a Ruby Web Framework by Noah Gibbs [YouTube] Noah Gibbs: GoGaRuCo 2013 - The Littlest ORM 03:06 - Sinatra 03:47 - Rack Introduction to Rack middleware rackamole 07:32 - Deploying Apps Hosting Heroku Redis Vagrant Server Provisioning Chef Puppet Ansible Capistrano 12:22 - Support, Operations, and Monitoring DevOps Database Administrator (DBA) [Confreaks] Paul Hinze: Smoke & Mirrors: The Primitives of High Availability Reliability Enterprise Tools HashiCorp Ruby Rogues Episode #192: Vagrant with Mitchell Hashimoto Learning Curve and Lack of Documentation (“Wild West”) 20:36 - Social Differences Between Communities: Ruby vs Python Ruby Rogues Episode #198: Expanding the Ruby Community Values to Other Languages with Scott Feinberg and Mark Bates COBOL, Java, C The SaltStack Ryan D. Lane: Moving away from Puppet: SaltStack or Ansible? 27:18 - Deployment Tools Targeting Polyglot Architectures 28:39 - Ease of Deployment Go 32:26 - The Success of a Language = The Deployment Story 33:51 - Feedback Cycle 34:57 - Reproducibility Bash 35:44 - Docker and Configuration Management Tools "chroot" = change root 44:06 - Deployment Problems 46:45 - Ruby Mad Science madscience_gem Community Feedback The Learning Curve Roadmap Multiple VM Setups Picks TuneMyGC (Coraline) Bear Metal: Rails Garbage Collection: Tuning Approaches (Coraline) Rbkit (Coraline) Get out and jump in a mud puddle! (Jessica) Release It!: Design and Deploy Production-Ready Software by Michael T. Nygard (Noah) Ruby DSL Handbook by Jim Gay (Noah)
Naoya Ito joins us as a guest to talk about Swift, Docker, Consul, deep learning, and more. Show Notes: a book called Serverspec is coming out; O'Reilly - Serverspec; Swift 1.2 and Xcode 6.3 beta - Swift Blog; lookup.swift; App Container and Docker; Support App Container images in the Docker Engine by jonboulle; Hosted Private Docker Registry - Quay.io; Secure hosting of private Docker repositories in Google Cloud Platform; Rebuild: 41: Kids These Days Don't Know Shell (Naoya Ito); WEB+DB PRESS Vol.85; Serf vs. Consul - Serf by HashiCorp; sorah/mamiya; stretcher, a pull-based deploy tool that integrates with Consul; Rebuild: 14: DevOps with Docker, chef and serverspec (naoya, mizzy); Mitchell Hashimoto on Twitter: "Off the plane. http://t.co/Ut3it3uRiW"; Deep learning; Facebook Open-Sources Some Of Its Deep-Learning Tools; Google Blog: Using large-scale brain simulations for machine learning and A.I.; Optimizing displayed content with multi-armed bandits; Identifying Love Live! characters with deep learning; Where's Wally? There's an algorithm for that; Mark Zuckerberg invests in CAPTCHA-crushing AI; Apple Paid More Than $200 Million For Siri To Get Into Mobile Search; Sponsorship: Rebuild
The Rogues interview Mitchell Hashimoto, the creator of Vagrant.
Aaron and Brian talk to Mitchell Hashimoto (@mitchellh, Founder of @HashiCorp) about their new DevOps framework Atlas, commercial product support, a new round of funding, and the “HashiCorp” way of thinking about DevOps and automation. Music Credit: Nine Inch Nails (www.nin.com)
Brian talks to Mitchell Hashimoto (@mitchellh, Founder of @HashiCorp; creator of Vagrant, Packer, Serf, Consul, and Terraform) about their new tool Terraform and the evolution of DevOps. They discuss how their tools complement each other and existing tools such as Chef, Puppet, and Ansible, and highlight how HashiCorp hides complexity for DevOps teams. Music Credit: Nine Inch Nails (www.nin.com)
Software Engineering Radio - The Podcast for Professional Software Developers
Charles Anderson talks to Mitchell Hashimoto about the Vagrant open source project, which can be used to create and configure lightweight, reproducible, and portable development environments. Vagrant aims to make new developers on a project productive within minutes of joining the project instead of spending hours or days setting up the developer’s workstation. The outline […]
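The workflow described in this episode — a new developer running a single command to get a working environment — can be sketched with a minimal Vagrantfile. The base box, forwarded port, and provisioned packages below are illustrative assumptions, not details from the episode.

```ruby
# Minimal Vagrantfile sketch; box name and packages are illustrative.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                # base box to build the VM from
  config.vm.network "forwarded_port", guest: 3000, host: 3000
  # Shell provisioner runs on the first `vagrant up`
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y build-essential git
  SHELL
end
```

With this file checked into the repository, a new team member runs `vagrant up` to get the same VM as everyone else, then `vagrant ssh` to work inside it — minutes of setup instead of hours or days.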
Join us as we discuss Packer with Mitchell Hashimoto.
Adam Stacoviak and Andrew Thorp talk with Mitchell Hashimoto, the creator of Vagrant and founder of HashiCorp.
Wynn caught up with Mitchell Hashimoto from the Vagrant project to talk about virtualized environments, DevOps, and more.
Our guest is Denys Dovhan; given the topic, here is a link to Denys's .config: https://github.com/denysdovhan/dotfiles Telegram: https://t.me/androidstory_chat
0:00:00 Intro and a discussion of GitHub ratings
0:01:11 The Spaceship project: a prompt for the terminal
0:03:54 Spaceship performance problems and asynchronous rendering
0:06:15 Starship: a Rust alternative to Spaceship
0:07:11 Advantages of different shells and terminal emulators (iTerm, WezTerm, Ghostty)
0:15:05 Mitchell Hashimoto, HashiCorp, and their tools (Vagrant, Terraform)
0:17:20 Why use custom terminal emulators: GPU rendering, fonts, ligatures
0:25:10 Remote development: Neovim over SSH and Docker contexts
0:28:40 Useful Zsh configurations and tools: aliases, Zoxide, Fzf, git aliases
0:44:01 A discussion of open source: definitions, licenses, VSC as open source
0:53:25 GitHub profiles in hiring and code as a reflection of skills
Patreon - https://www.patreon.com/androidstory
Mentioned in the episode: https://github.com/ajeetdsouza/zoxide @TheLinuxCast https://github.com/denysdovhan/smart-home https://github.com/denysdovhan/home-assistant-config