Podcasts about Cloud Run

  • 45 podcasts
  • 99 episodes
  • 39m average duration
  • 1 new episode per month
  • Latest episode: May 21, 2025

POPULARITY

[Chart: episode popularity by year, 2017–2024]


Best podcasts about Cloud Run

Latest podcast episodes about Cloud Run

Ardan Labs Podcast
CloudLife, Business, and Persistence with Ryan Ryke

May 21, 2025 · 84:45


In this episode, Bill Kennedy interviews Ryan Ryke, founder of CloudLife Consulting, focusing on AWS and cloud computing. They discuss the challenges of understanding AWS billing, the importance of managing cloud storage, and the benefits of using services like Cloud Run and Fargate. The conversation also touches on the evolution of engineering perspectives on complexity, the shift towards simpler infrastructure solutions, and personal experiences with technology.

00:00 Introduction
00:30 What is Ryan Doing Today?
9:00 Cloud Run Experience
13:00 Handling Complexity
21:00 Running Local LLMs
25:30 First Memory of a Computer
33:20 Entering University
36:30 Relevant Education
42:00 Early Industry
53:00 Trading Stocks
1:05:00 Discovering AWS
1:10:00 Starting a Business
1:16:00 Maintaining Steady Clients
1:22:00 Contact Info

Connect with Ryan:
LinkedIn: https://www.linkedin.com/in/ryanryke
X: https://x.com/itsacloudlife99
Email: ryan@cloudlife.io

Mentioned in this Episode:
CloudLife Consulting: https://www.cloudlife.io/
Fargate: https://aws.amazon.com/fargate/

Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/
Live Events: https://www.ardanlabs.com/live-training-events/
Blog: https://www.ardanlabs.com/blog
GitHub: https://github.com/ardanlabs

The CyberWire
Chrome & Firefox squash the latest flaws.

Apr 2, 2025 · 30:28


Google and Mozilla patch nearly two dozen security flaws. The UK's Royal Mail Group sees 144GB of data stolen and leaked. A bizarre campaign looks to recruit cybersecurity professionals to hack Chinese websites. PostgreSQL servers with weak credentials have been compromised for cryptojacking. Google Cloud patches a vulnerability affecting its Cloud Run platform. Oracle faces a class-action lawsuit over alleged cloud services data breaches. CISA releases ICS advisories detailing vulnerabilities in Rockwell Automation and Hitachi Energy products. General Paul Nakasone offers a candid assessment of America's evolving cyber threats. On today's CertByte segment, a look at the Cisco Enterprise Network Core Technologies exam. Are AI LLMs more like minds or mirrors?

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CertByte Segment
Welcome to CertByte! On this bi-weekly segment hosted by Chris Hare, a content developer and project management specialist at N2K, we share practice questions from N2K's suite of industry-leading certification resources. This week, Chris is joined by Troy McMillan to break down a question targeting the Cisco Enterprise Network Core Technologies (350-401 ENCOR) v1.1 exam. Today's question comes from N2K's Cisco CCNP Implementing and Operating Cisco Enterprise Network Core Technologies ENCOR (350-401) Practice Test. The ENCOR exam enables candidates to earn the Cisco Certified Specialist - Enterprise Core certification, which can also be used to meet exam requirements for several other Cisco certifications. Have a question that you'd like to see covered? Email us at certbyte@n2k.com. If you're studying for a certification exam, check out N2K's full exam prep library of certification practice tests, practice labs, and training courses by visiting our website at n2k.com/certify. To get the full news-to-knowledge experience, learn more about our N2K Pro subscription at https://thecyberwire.com/pro.
Please note: The questions and answers provided here, and on our site, are not actual current or prior questions and answers from these certification publishers or providers.
Additional source: https://www.cisco.com/site/us/en/learn/training-certifications/exams/encor.html

Selected Reading
Chrome 135, Firefox 137 Patch High-Severity Vulnerabilities (SecurityWeek)
Royal Mail Group Loses 144GB to Infostealers: Same Samsung Hacker, Same 2021 Infostealer Log (Infostealers)
Someone is trying to recruit security researchers in bizarre hacking campaign (TechCrunch)
Ongoing cryptomining campaign hits over 1.5K PostgreSQL servers (SC Media)
ImageRunner Flaw Exposed Sensitive Information in Google Cloud (SecurityWeek)
Google Brings End-to-End Encrypted Emails to All Enterprise Gmail Users (SecurityWeek)
Oracle now faces class action amid alleged data breaches (The Register)
CISA Releases Two ICS Advisories for Vulnerabilities, & Exploits Surrounding ICS (Cyber Security News)
Exclusive: Gen. Paul Nakasone says China is now our biggest cyber threat (The Record)
Large AI models are cultural and social technologies (Science)

Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices

Les Cast Codeurs Podcast
LCC 322 - Maaaaveeeeen 4 !

Feb 9, 2025 · 77:13


Arnaud and Emmanuel discuss the month's news. They cover JVM integrity, JDBC fetch size, MCP, prompt engineering, DeepSeek of course, but also Maven 4 and Maven repository proxies. And more besides; happy reading. Recorded on February 7, 2025. Download the episode: LesCastCodeurs-Episode-322.mp3, or watch it on YouTube.

News

Languages

The JVM's evolution toward stronger default integrity: https://inside.java/2025/01/03/evolving-default-integrity/
An article on why framework maintainers and users are tearing their hair out, and will keep doing so, as the JVM removes historically available APIs (dynamic agents, setAccessible, Unsafe, JNI) to guarantee the integrity of code and data. The article explains the risks as perceived by the JVM maintainers. Frankly, it is a bit light on the causes; it reads like self-promotion.

JavaScript Temporal, finally a clean, modern API for handling dates in JS: https://developer.mozilla.org/en-US/blog/javascript-temporal-is-coming/
JavaScript Temporal is a new object designed to replace the flawed Date object. It fixes problems such as the lack of time zone support and mutability. Temporal introduces concepts such as instants, wall-clock times, and durations, and provides classes for the various date/time representations, both time-zone-aware and not. It simplifies the use of different calendars (for example, Chinese or Hebrew) and includes methods for comparing, converting, and formatting dates and times. Browser support is experimental, with Firefox Nightly having the most complete implementation; a polyfill lets you try Temporal in any browser.

Libraries

An article on JDBC fetch size and its impact on your applications: https://in.relation.to/2025/01/24/jdbc-fetch-size/
Who knows their driver's default fetch size? Depending on your use case, it can be devastating: an application that returns 12 rows against Oracle's default fetch size of 10 makes two round trips where one would do, and if 50 rows come back, the database, not Java, becomes the limiting factor. So raising the fetch size pays off: you spend Java memory to avoid latency.
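To make the round-trip arithmetic concrete, here is a minimal sketch using plain java.sql; the connection URL, credentials, and table are invented for the example:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FetchSizeDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials: adjust for your database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//db-host:1521/app", "app", "secret");
             PreparedStatement ps = conn.prepareStatement(
                "SELECT id, label FROM items")) {
            // Ask the driver to fetch 100 rows per round trip instead of its
            // default (10 for the Oracle driver), trading a little client
            // memory for far fewer network round trips.
            ps.setFetchSize(100);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getString("label"));
                }
            }
        }
    }
}
```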
Quarkus announces the MCP servers project, collecting MCP servers written in Java: https://quarkus.io/blog/introducing-mcp-servers/
MCP is Anthropic's protocol. The first servers include a JDBC database introspector, a filesystem reader, and a JavaFX drawing tool, all easy to start with JBang and tested with Claude Desktop, Goose, and mcp-cli. It lets your AI tap into the power of Java libraries. Spring, for its part, has released version 0.6 of its MCP support: https://spring.io/blog/2025/01/23/spring-ai-mcp-0

Infrastructure

Apache Flink on Kubernetes: https://www.decodable.co/blog/get-running-with-apache-flink-on-kubernetes-2
A very complete two-part article on installing Flink on Kubernetes: installation and setup, but also checkpointing, high availability, and observability.

Data and Artificial Intelligence

10 prompt engineering techniques every beginner should know: https://medium.com/google-cloud/10-prompt-engineering-techniques-every-beginner-should-know-bf6c195916c7
If you want to go further, the article references a very good white paper on prompt engineering: https://www.kaggle.com/whitepaper-prompt-engineering
The techniques covered (a small sketch of the few-shot case follows the list):
Zero-Shot Prompting: ask the AI directly, with no prior example. Like asking someone a question without giving them any context.
Few-Shot Prompting: give the AI one or more examples of the task you want done. Like showing someone how to do something before asking them to do it.
System Prompting: define the overall context and goal of the task. Like giving the AI general instructions about what it should do.
Role Prompting: assign the AI a specific role (teacher, journalist, etc.). Like asking someone to play a specific part.
Contextual Prompting: provide additional information or context for the task. Like giving someone everything they need to answer a question.
Step-Back Prompting: ask a general question first, then use the answer to ask a more specific one. Like asking an open question before a closed one.
Chain-of-Thought Prompting: ask the AI to show, step by step, how it reaches its conclusion. Like asking someone to explain their reasoning.
Self-Consistency Prompting: ask the AI the same question several times and compare the answers to find the most consistent one. Like checking an answer by asking it in different forms.
Tree-of-Thoughts Prompting: let the AI explore several reasoning paths at once. Like weighing all the options before making a decision.
ReAct Prompting: let the AI interact with external tools to solve complex problems. Like handing someone the tools they need to solve a problem.
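As a generic illustration of the few-shot technique (not code from the article), this sketch only assembles the prompt text; the model call is left out since it depends on the SDK you use:

```java
public class FewShotPrompt {
    public static void main(String[] args) {
        // A few-shot prompt: two worked examples teach the model the
        // expected output format before the real input is appended.
        String prompt = """
            Classify the sentiment of each review as POSITIVE or NEGATIVE.

            Review: "The dashboard loads instantly now, great update."
            Sentiment: POSITIVE

            Review: "Billing doubled overnight with no explanation."
            Sentiment: NEGATIVE

            Review: "Cold starts dropped to nothing after the migration."
            Sentiment:""";

        // Send `prompt` to whichever LLM client you use (Gemini, GPT, ...).
        System.out.println(prompt);
    }
}
```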
The GenAI patterns from Thoughtworks: https://martinfowler.com/articles/gen-ai-patterns/
Very introductory and pre-RAG. Direct prompting, a straight call to the LLM, with its limits in knowledge and in control of the experience. Evals: evaluating an LLM's output with several techniques, but fundamentally a function that takes the request and the response and produces a numeric score; evaluation by an LLM (the same or another one), or human evaluation. Run the evaluations from the build pipeline, but also live, since LLMs can evolve. The article also describes embeddings, of images but also of text, with the notion of context.

DeepSeek and the end of NVIDIA's dominance: https://youtubetranscriptoptimizer.com/blog/05_the_short_case_for_nvda
An article on why NVIDIA will be challenged on its margins (a full 90%), built on having the biggest GPUs and the proprietary CUDA. More efficient alternative hardware approaches exist (TPUs and giant wafers); Google, Microsoft, and others are building their own alternative GPUs; and CUDA is becoming less of a lingua franca as Apple, Google, OpenAI, and others invest in alternative intermediate languages. The article covers DeepSeek, which landed a blow on the LLM world: they built a competitor to GPT-4o and o1 for 5 million dollars, with impressive reasoning capabilities. The key was a lot of optimization tricks, the biggest being neuron weights stored on 8 bits instead of the 32 others use, quantizing on the fly and during training. Also a lot of innovative reinforcement learning, and Mixture of Experts. So roughly 50x cheaper than OpenAI, and no more need for GPUs with tons of VRAM. Oh, and DeepSeek is open source. An article from SemiAnalysis shifts the narrative a little: the DeepSeek paper says a lot through its omissions. For example, the ~6M figure is just the GPU inference cost, not the research costs and the assorted trial and error; by comparison, Claude Sonnet cost 10M in inference. DeepSeek has a lot of GPUs, acquired pre-ban and some post-ban, valued at 5 billion in investment. Their advances and their openness remain extremely interesting all the same.

An intro to Apache Iceberg: http://blog.ippon.fr/2025/01/17/la-revolution-des-donnees-lavenement-des-lakehouses-avec-apache-iceberg/
Born of the limits of the unstructured data lake and of data warehouses constrained in data diversity and volume, enter the lakehouse, and in particular Apache Iceberg, which came out of Netflix. Schema management, but flexible. The notion of copy-on-write versus merge-on-read depending on your needs. Atomicity, consistency, isolation, and durability guarantees. Time travel and rollback. Hidden partitions (which abstract away the partition and its transformations) and partition evolution. Compatible with compute engines such as Spark, Trino, and Flink. The article explains the structure of the metadata and the data. (A sketch of hidden partitioning with the Iceberg Java API follows.)
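As a rough illustration of hidden partitioning, a sketch using the Iceberg Java API (assuming the iceberg-api/iceberg-core dependencies; the schema and field names are invented):

```java
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

public class IcebergPartitioning {
    public static void main(String[] args) {
        // Table schema: field IDs are explicit, which is what lets Iceberg
        // evolve the schema safely later on.
        Schema schema = new Schema(
                Types.NestedField.required(1, "id", Types.LongType.get()),
                Types.NestedField.required(2, "event_ts", Types.TimestampType.withZone()),
                Types.NestedField.optional(3, "payload", Types.StringType.get()));

        // Hidden partitioning: readers and writers query on event_ts, and
        // Iceberg derives the day() partition value itself, so the
        // transformation never leaks into user queries.
        PartitionSpec spec = PartitionSpec.builderFor(schema)
                .day("event_ts")
                .build();

        System.out.println(spec);
    }
}
```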
Guillaume has fun generating short science-fiction stories by programming AI agents with LangChain4j, and also with workflows: https://glaforge.dev/posts/2025/01/27/an-ai-agent-to-generate-short-scifi-stories/ and https://glaforge.dev/posts/2025/01/31/a-genai-agent-with-a-real-workflow/
An automated science-fiction story generator built in Java with Gemini and Imagen, using LangChain4j, on Google Cloud. Every night the system generates stories, completes them with illustrations created by the Imagen 3 model, and publishes them to a website. A self-reflection step uses Gemini to pick the best image for each chapter. The agent uses an explicit workflow, driven by Java code, where the steps are predefined in the code rather than relying on LLM-based planning. The code is available on GitHub and the application is deployed on Google Cloud. The article contrasts explicit workflow agents with autonomous agents, highlighting the trade-offs of each approach. Because sometimes autonomous AI agents that manage their own planning hallucinate a little too much and fail to establish a plan correctly, or don't follow it properly, or even hallucinate "function calls". The project uses Cloud Build, Cloud Run jobs, Cloud Scheduler, Firestore as the database, and Firebase for deploying and automating the frontend. In the second article the approach is different: Guillaume uses a workflow tool rather than driving the planning with Java code. The imperative approach uses explicit Java code to orchestrate the workflow, giving precise control and parallelization. The declarative approach uses a YAML file to define the workflow, specifying the steps, inputs, outputs, and execution order. The workflow comprises the steps to generate a story with Gemini 2, create an image prompt, generate images with Imagen 3, and save the result in Cloud Firestore (a NoSQL database). The main advantages of the imperative approach are precise control, explicit parallelization, and familiar programming tools. The main advantages of the declarative approach are workflow definitions that are perhaps easier to understand (even if it is YAML, ugh!), visualization, scalability, and simplified maintenance (you can just change the YAML in the console, like in the good old days of PHP in production). The drawbacks of the imperative approach include the need for programming knowledge, potential maintenance challenges, and container management. The drawbacks of the declarative approach include painful YAML authoring, limited parallelization control, the lack of a local emulator, and less intuitive debugging. The choice between the approaches depends on the project's requirements, with declarative suiting simpler workflows. The article concludes that declarative planning can help AI agents stay focused and predictable. (A minimal sketch of one explicit, code-driven step with LangChain4j follows.)
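This is not Guillaume's actual code; it is a minimal sketch of the explicit-workflow idea, assuming the langchain4j and langchain4j-vertex-ai-gemini modules (the project, location, model name, and story theme are all placeholders):

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import dev.langchain4j.service.V;

public class StoryStep {

    // One typed "step" of the workflow: LangChain4j implements this
    // interface at runtime on top of the chat model.
    interface StoryWriter {
        @SystemMessage("You are a science-fiction author.")
        @UserMessage("Write chapter {{n}} of a short story about {{theme}}.")
        String writeChapter(@V("n") int n, @V("theme") String theme);
    }

    public static void main(String[] args) {
        ChatLanguageModel model = VertexAiGeminiChatModel.builder()
                .project("my-gcp-project")   // placeholder
                .location("us-central1")     // placeholder
                .modelName("gemini-1.5-pro") // placeholder
                .build();

        StoryWriter writer = AiServices.create(StoryWriter.class, model);

        // The plan lives in ordinary Java control flow, not in the LLM:
        // the loop *is* the workflow, so the agent cannot wander off it.
        for (int chapter = 1; chapter <= 3; chapter++) {
            System.out.println(writer.writeChapter(chapter, "a lighthouse on Europa"));
        }
    }
}
```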
Tooling

Vulnerabilities in Maven proxy repositories: https://github.blog/security/vulnerability-research/attacks-on-maven-proxy-repositories/
Whatever the language or technology, it is strongly recommended to put repository managers in place as proxies, to better control the dependencies that go into your products. Michael Stepankin of the GitHub Security Lab set out to discover whether these managers are themselves a source of vulnerability, by studying a few CVEs in products such as JFrog Artifactory, Sonatype Nexus, and Reposilite. Some flaws come from the products' UIs, which display artifacts (e.g., put JavaScript in a POM file) and even let you browse inside them (e.g., view the contents of a jar/zip, then abuse the API to read, or even modify, server files outside the archives). Artifacts can also be compromised by playing with proprietary URL parameters or with naming and encodings. In short, none of it is simple. Every system adds complexity, and it is important to keep them up to date. You have to actively monitor your distribution chain through several means, and not bet everything on the repository manager. The author gave a talk on the subject: https://www.youtube.com/watch?v=0Z_QXtk0Z54

Apache Maven 4… soon, promise! What will be in it? https://gnodet.github.io/maven4-presentation/ and also https://github.com/Bukama/MavenStuff/blob/main/Maven4/whatsnewinmaven4.md
Slowly but surely; that is the Maven way. Maven 4.0.0-rc-2 is available (December 2024). Maven is more than 20 years old and widely used across the Java ecosystem. Backward compatibility has always been a priority, but it has limited flexibility. Maven 4 introduces significant changes, notably a new build schema and code improvements.

POM changes. Separation of the Build-POM and the Consumer-POM: the Build-POM contains build-only information (e.g., plugins, configuration), while the Consumer-POM contains only what artifact consumers need (e.g., dependencies). New model version 4.1.0: used only for the Build-POM, while the Consumer-POM stays at 4.0.0 for compatibility; it introduces new elements and marks some as deprecated. Modules renamed to subprojects: "modules" become "subprojects" to avoid confusion with Java modules; the new element replaces the old one, which remains supported. New "bom" (Bill of Materials) packaging type: distinguishes parent POMs from dependency-management BOMs, with support for exclusions and classifier-based imports. Explicit declaration of the project root directory: removes any ambiguity about where project roots are located. New directory variables: ${project.rootDirectory}, ${session.topDirectory}, and ${session.rootDirectory} for better path handling, replacing old unofficial workarounds and deprecated internal variables. Support for alternative POM syntaxes via the new ModelParser SPI; the Apache Maven Hocon Extension is an early example of this capability.

Improvements for subprojects. Automatic parent versioning: no need to declare the parent version in every subproject; works with model version 4.1.0 and extends to dependencies within the project. Full support for CI-friendly variables: the Flatten Maven Plugin is no longer required; variables such as ${revision} can be used for versioning and set via maven.config or the command line (mvn verify -Drevision=4.0.1).

Reactor improvements and fixes. Bug fix: better handling of --also-make when resuming builds. New --resume (-r) option to restart from the last failed subproject, skipping subprojects that already built successfully. Subfolder-aware builds: tools can be run on selected subprojects only. Recommendation: use mvn verify rather than mvn clean install. Other improvements: consistent timestamps for all subprojects in packaged archives.
Improved deployment: deployment only happens if all subprojects build successfully.

Workflow, lifecycle, and execution changes. Java 17 is the minimum JDK required to run Maven 4; older Java versions can still be targeted for compilation via Maven Toolchains. Java 17 was preferred over Java 21 because of its longer long-term support. Plugin updates and application maintenance: removal of deprecated features (e.g., Plexus Containers, ${pom.} expressions); the Super POM has been updated, changing the default plugin versions. Builds may behave differently, so pin plugin versions to avoid unexpected changes; Maven 4 warns when default versions are in use. New "Fail on Severity" parameter: the build can fail if log messages reach a given severity level (e.g., WARN), via --fail-on-severity WARN or -fos WARN. Maven Shell (mvnsh): previously, every mvn run required a full Java/Maven restart; Maven 4 introduces Maven Shell, which keeps a single resident Maven process open across commands, improving performance and reducing build times. Alternative: use Maven Daemon (mvnd), which manages a pool of resident Maven processes.

Architecture

An article on feature flags with Unleash: https://feeds.feedblitz.com//911939960/0/baeldungImplement-Feature-Flags-in-Java-With-Unleash
For A/B testing and faster development cycles, to "test in production". It shows how to run Unleash under Docker and add the library to Java code to test a feature flag; a minimal sketch follows.
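The article's own code is not reproduced here; as a rough sketch of the idea, assuming the io.getunleash Java SDK and a local Unleash server (the URL, app name, and flag name are made up, and a real setup typically also needs an API token):

```java
import io.getunleash.DefaultUnleash;
import io.getunleash.Unleash;
import io.getunleash.util.UnleashConfig;

public class FeatureFlagDemo {
    public static void main(String[] args) {
        // Points at a local Unleash server, e.g. the Docker setup from
        // the article; all names here are placeholders.
        UnleashConfig config = UnleashConfig.builder()
                .appName("checkout-service")
                .instanceId("local-dev")
                .unleashAPI("http://localhost:4242/api/")
                .build();

        Unleash unleash = new DefaultUnleash(config);

        // The SDK polls the server in the background; flipping the toggle
        // in the Unleash UI changes this branch without a redeploy.
        if (unleash.isEnabled("new-pricing-engine")) {
            System.out.println("Serving the new pricing engine");
        } else {
            System.out.println("Serving the legacy pricing engine");
        }
    }
}
```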
Security

Keycloak 26.1: https://www.keycloak.org/2025/01/keycloak-2610-released.html
Node detection now probes the database instead of using network exchanges, virtual threads for Infinispan and JGroups, OpenTelemetry tracing supported, and plenty of security features.

Law, society, and organization

The big pieces of a conference's costs and revenues, here for BDX I/O (http://bdx.io): https://bsky.app/profile/ameliebenoit33.bsky.social/post/3lgzslhedzk2a
Revenue: 44% tickets, 52% sponsors. Costs: 38% venue rental, 29% catering and coffee, 12% booth building, 5% speaker expenses (so not all of them).

Ask Me Anything

Julien de Provin: I really like Quarkus's "continuous testing" mode, and I was wondering whether an alternative exists outside Quarkus or, failing that, resources on how it works? I would love an agnostic tool usable on the non-Quarkus projects I work on, even if it takes a bit of elbow grease (or knuckle grease, in this case).
Answer: https://github.com/infinitest/infinitest/

Conferences

The list of conferences, from the Developers Conferences Agenda/List by Aurélie Vache and contributors:
February 6-7, 2025: Touraine Tech - Tours (France)
February 21, 2025: LyonJS 100 - Lyon (France)
February 28, 2025: Paris TS La Conf - Paris (France)
March 6, 2025: DevCon #24: 100% IA - Paris (France)
March 13, 2025: Oracle CloudWorld Tour Paris - Paris (France)
March 14, 2025: Rust In Paris 2025 - Paris (France)
March 19-21, 2025: React Paris - Paris (France)
March 20, 2025: PGDay Paris - Paris (France)
March 20-21, 2025: Agile Niort - Niort (France)
March 25, 2025: ParisTestConf - Paris (France)
March 26-29, 2025: JChateau Unconference 2025 - Cour-Cheverny (France)
March 27-28, 2025: SymfonyLive Paris 2025 - Paris (France)
March 28, 2025: DataDays - Lille (France)
March 28-29, 2025: Agile Games France 2025 - Lille (France)
April 3, 2025: DotJS - Paris (France)
April 3, 2025: SoCraTes Rennes 2025 - Rennes (France)
April 4, 2025: Flutter Connection 2025 - Paris (France)
April 4, 2025: aMP Orléans 04-04-2025 - Orléans (France)
April 10-11, 2025: Android Makers - Montrouge (France)
April 10-12, 2025: Devoxx Greece - Athens (Greece)
April 16-18, 2025: Devoxx France - Paris (France)
April 23-25, 2025: MODERN ENDPOINT MANAGEMENT EMEA SUMMIT 2025 - Paris (France)
April 24, 2025: IA Data Day 2025 - Strasbourg (France)
April 29-30, 2025: MixIT - Lyon (France)
May 7-9, 2025: Devoxx UK - London (UK)
May 15, 2025: Cloud Toulouse - Toulouse (France)
May 16, 2025: AFUP Day 2025 Lille - Lille (France)
May 16, 2025: AFUP Day 2025 Lyon - Lyon (France)
May 16, 2025: AFUP Day 2025 Poitiers - Poitiers (France)
May 24, 2025: Polycloud - Montpellier (France)
May 24, 2025: NG Baguette Conf 2025 - Nantes (France)
June 5-6, 2025: AlpesCraft - Grenoble (France)
June 5-6, 2025: Devquest 2025 - Niort (France)
June 10-11, 2025: Modern Workplace Conference Paris 2025 - Paris (France)
June 11-13, 2025: Devoxx Poland - Krakow (Poland)
June 12-13, 2025: Agile Tour Toulouse - Toulouse (France)
June 12-13, 2025: DevLille - Lille (France)
June 13, 2025: Tech F'Est 2025 - Nancy (France)
June 17, 2025: Mobilis In Mobile - Nantes (France)
June 24, 2025: WAX 2025 - Aix-en-Provence (France)
June 25-26, 2025: Agi'Lille 2025 - Lille (France)
June 25-27, 2025: BreizhCamp 2025 - Rennes (France)
June 26-27, 2025: Sunny Tech - Montpellier (France)
July 1-4, 2025: Open edX Conference 2025 - Palaiseau (France)
July 7-9, 2025: Riviera DEV 2025 - Sophia Antipolis (France)
September 18-19, 2025: API Platform Conference - Lille (France) & Online
October 2-3, 2025: Volcamp - Clermont-Ferrand (France)
October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium)
October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France)
October 16-17, 2025: DevFest Nantes - Nantes (France)
November 4-7, 2025: NewCrafts 2025 - Paris (France)
November 6, 2025: dotAI 2025 - Paris (France)
November 7, 2025: BDX I/O - Bordeaux (France)
November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco)
January 28-31, 2026: SnowCamp 2026 - Grenoble (France)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
June 17, 2026: Devoxx Poland - Krakow (Poland)

Contact us

To react to this episode, come discuss on the Google group: https://groups.google.com/group/lescastcodeurs
Reach us via X/Twitter: https://twitter.com/lescastcodeurs or Bluesky: https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or a crowdquestion.
Support Les Cast Codeurs on Patreon: https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/

De Nederlandse Kubernetes Podcast
#72 Kubernetes at Scale: From On-Premises to Cloud

Dec 3, 2024 · 38:10


In this special episode recorded at TechEx Cloud & Security, Ronald Kers (CNCF Ambassador) and Jan Stomphorst (Solutions Architect at ACC ICT) are joined by Adnan Hodzic, a seasoned expert in data analytics and open-source contributions. The discussion centers on the transformative power of Kubernetes, the benefits of migrating to the cloud, and the importance of open-source collaboration in advancing technology.

Key Topics Discussed:

Adnan's journey into IT and open source: an early fascination with technology, leading to impactful projects like auto-cpufreq, a Linux tool for optimizing system performance, and the open-source ethos: the rewards and challenges of contributing freely accessible tools used by major organizations.

Migrating to Google Cloud Platform: Adnan outlines the migration of large-scale analytics workloads to GCP, the advantages of Google services like Vertex AI and Kubernetes Engine for scalability and flexibility, and cost-saving practices, such as spinning up clusters on demand to reduce idle resource consumption.

Kubernetes, on-premises vs. cloud deployments: Adnan shares insights on maintaining one of the largest on-prem Kubernetes clusters in the region, the complexities of scalability, risk management, and compliance in hybrid environments, and practical use cases for disaster recovery and multi-region failovers.

Open source in the financial sector: a conversation on fostering open-source collaboration among financial institutions like Rabobank and ABN Amro, and opportunities to overcome silos and share solutions that improve efficiency across the industry.

Future of Kubernetes and cloud technologies: the trend towards serverless infrastructure and tools like Cloud Run for lightweight, scalable solutions, calls for Kubernetes Long-Term Support (LTS) to ease maintenance for large enterprises, and Adnan's vision for innovation through open source and its parallels with automotive industry disruption.

Special segment: recorded at TechEx Cloud & Security, this episode captures the lively atmosphere of one of the leading cloud and security events, showcasing how experts like Adnan are shaping the future of technology.

Closing thoughts: the episode concludes with a reflection on the importance of adaptability in a fast-evolving tech landscape. Ronald and Jan express their excitement for upcoming episodes featuring more insights on GitOps, security, and cutting-edge Kubernetes practices.

Send us a message. Like and subscribe! It helps out a lot.

You can also find us on:
De Nederlandse Kubernetes Podcast - YouTube
Nederlandse Kubernetes Podcast (@k8spodcast.nl) | TikTok
De Nederlandse Kubernetes Podcast

Where can you meet us: Events

This podcast is powered by: ACC ICT - IT-Continuïteit voor Bedrijfskritische Applicaties | ACC ICT

Kubernetes Podcast from Google
Ray & KubeRay, with Richard Liaw and Kai-Hsun Chen

Sep 3, 2024 · 54:42


In this episode, guest host and AI correspondent Mofi Rahman interviews Richard Liaw and Kai-Hsun Chen from Anyscale about Ray and KubeRay. Ray is an open-source unified compute framework that makes it easy to scale AI and Python workloads, while KubeRay integrates Ray's capabilities into Kubernetes clusters.

Do you have something cool to share? Some questions? Let us know:
- web: kubernetespodcast.com
- mail: kubernetespodcast@google.com
- twitter: @kubernetespod

News of the week:
CNCF Blog - LitmusChaos audit complete!
Kubernetes Podcast from Google episode 234 - LitmusChaos, with Karthik Satchitanand
Google Cloud Blog - Run your AI inference applications on Cloud Run with NVIDIA GPUs
Diginomica article - KubeCon China - at 33-and-a-third, Linux is a long player. So, why does Linus Torvalds hate AI?
CNCF-Hosted Co-Located Event Schedule for KubeCon NA 2024
Google Kubernetes Engine Release Notes - August 20, 2024 (1.31 available in Rapid Channel)
Kubernetes Podcast from Google - Kubernetes v1.31: "Elli", with Angelos Kolaitis
Red Hat Press Release - Red Hat OpenStack Services on OpenShift is Now Generally Available
Red Hat Enables OpenStack to Run Natively on OpenShift Platform
Broadcom Revamps Tanzu to Simplify Cloud-Native App Development and Deployment
Tanzu Platform 10 Offers Cloud Foundry Users Deep Visibility and Productivity Enhancements
VMware Explore Conference Website
CNCF Blog - Announcing 500 Kubestronauts
CNCF - Kubestronaut FAQ
Dapr Day 2024 Virtual Event Website

Links from the interview:
Kai-Hsun Chen on LinkedIn
Richard Liaw on LinkedIn
Ray from the RISE Lab at UC Berkeley
Ray: A Distributed System for AI by Robert Nishihara and Philipp Moritz - Jan 9, 2018
KubeRay Docs
KubeRay on GitHub
PyTorch
Apache Airflow
Apache Spark
Kubeflow
Apache Submarine (retired)
Jupyter Notebooks
VS Code
Examples of schedulers for Batch/AI workloads in Kubernetes: Kueue, Volcano, Apache Yunikorn
Examples of observability tools for Batch/AI workloads in Kubernetes: Prometheus, Grafana, Fluentbit
Examples of load balancers: Nginx, Istio
Ray Data: Scalable Datasets for ML
Dask - Parallel Python
Ray Serve: Scalable and Programmable Serving
HPA - Horizontal Pod Autoscaling in Kubernetes
Karpenter - "Just-in-time nodes for any Kubernetes cluster"
Lazy Computation Graphs with the Ray DAG API
Types of hardware accelerators: Google Cloud Tensor Processing Units (TPUs), AMD Instinct, AMD Radeon, AWS Trainium, AWS Inferentia
Pandas
Numpy
KubeCon EU 2024 - Accelerators (FPGA/GPU) Chaining to Efficiently Handle Large AI/ML Workloads in K8s - Sampath Priyankara, Nippon Telegraph and Telephone Corporation & Masataka Sonoda, Fujitsu Limited
NVidia Megatron

Links from the post-interview chat:
DRA - Dynamic Resource Allocation in Kubernetes
Different ways of Running RayJob on Kubernetes
Ray framework diagram in the docs

programmier.bar – der Podcast für App- und Webentwicklung
News AI #29: Cloud Run GPUs // Top100 Gen AI Consumer Apps // Dream Machine 1.5

Aug 28, 2024 · 40:31


Google updated Cloud Run so that you can now use serverless GPUs. The service still has relatively long cold-start times, but it is definitely interesting for many AI applications.

Andreessen Horowitz has published its semi-annual Top 100 Gen AI Consumer Apps report. Philipp and Fabi discuss which new apps made it into the top ten over the last half year.

Dream Machine is now available in version 1.5 and brings several improvements to video generation, including the ability to render text within videos.

The last two weeks brought several new models. Recently we were still wondering whether Aleph Alpha was doomed; last week they released the Pharia-1-LLM. Elon Musk's x.ai released Grok-2, and Microsoft released Phi MoE.

And of course Fabi and Philipp checked how well they can tell AI-generated images from real ones. Let us know what score you get on the Fake Insect Test (content note: close-ups of spiders and insects).

More links from the episode:
- Everchanging Quest
- LM Studio 0.3.0
- Fine-tune GPT-4o
- wllama

Write to us! Send us your topic requests and feedback: podcast@programmier.bar
Follow us! Stay up to date on future episodes and virtual meetups and join the community discussions.
Twitter Instagram Facebook Meetup YouTube

Screaming in the Cloud
Summer Replay - Building a User-Friendly Product with Aparna Sinha

Aug 20, 2024 · 39:51


On this Screaming in the Cloud Summer Replay, we revisit our conversation with Aparna Sinha, the Head of AI Product at Capital One. As a former Director of Product Management at Google Cloud, Aparna joins Corey to talk about GCP and how Corey was surprised to find that, in some ways, it was "its own universe." She offers up why folks can expect a developer-friendly experience when using GCP, and how it differentiates them from the litany of cloud providers out there. From its focus on developers to its vast array of customers, GCP is bringing its best forward. Check out their conversation on how GCP is keeping its focus on the user!

Show Highlights:
(0:00) Intro
(0:48) Duckbill Group sponsor read
(1:21) Role of a Director of Outbound Product Management
(2:43) Developer experiences on Google Cloud
(8:47) The philosophy of courting developers
(11:38) The shift to serverless
(17:17) Cloud Run observations
(22:59) Duckbill Group sponsor read
(23:43) Customer involvement with Google Cloud
(28:55) Cloud Build vs. Cloud Deploy
(32:50) Google and cloud security
(38:45) Where you can find Aparna

About Aparna
Aparna Sinha is Senior Vice President and Head of Enterprise AI/ML products at Capital One. She is also a startup investor/advisor at PearVC. Aparna has a track record of successful P&L ownership, creating new revenue streams and building $B+ businesses through technical and go-to-market innovation. She was Sr. Director of Developer Products at Google Cloud, leading a 100+ member PM, UX, and DevRel Engineering team responsible for >40 cloud services and open source tools. She was an early contributor to Kubernetes, built the team, and grew Google Kubernetes Engine 100x into a top-3 revenue generator for Cloud. Prior to Cloud, Aparna worked on Android, ChromeOS, and Play. Previously at McKinsey & Company she was a leader in the business technology office, working with CIOs on server virtualization strategy, pricing, and SaaS.

Aparna holds a PhD in Electrical Engineering from Stanford, and a patent from Google. She served as Chair of the Governing Board of the Cloud Native Computing Foundation (CNCF).

Links:
DevOps Research Report: https://www.devops-research.com/research.html
Twitter: https://x.com/aparnabsinha
Original Episode: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/building-a-user-friendly-product-with-aparna-sinha/
Sponsor: The Duckbill Group: https://www.duckbillgroup.com/

What's new in Cloud FinOps?
WNiCF - May 2024 - News

Jun 6, 2024 · 35:33


Summary

In this episode of What's New in Cloud FinOps, Frank and Stephen discuss a wide range of cloud-related news and updates. They cover topics such as Azure VM hibernation, Azure Compute Fleet, Google Cloud TPU, Amazon EC2 C7i Flex, DynamoDB, AWS Marketplace, Cloud Run, and more. The conversation also delves into the complexities of cloud pricing, energy progress, and the impact of cloud technology on businesses.

The Cybersecurity Defenders Podcast
#106 - Intel Chat: LockBit, TicTacToe Dropper, Google Cloud Run & I-Soon

Feb 29, 2024 · 28:10


In this episode of The Cybersecurity Defenders Podcast, we discuss some cutting-edge intel coming out of LimaCharlie's community Slack channel.

Law enforcement from 10 countries - in a joint operation called ‘Operation Cronos' - have disrupted the criminal operation of the LockBit ransomware group.

FortiGuard has identified a grouping of malware droppers used to deliver various final-stage payloads through 2023 that they are calling the TicTacToe dropper.

Cisco Talos researchers have observed a significant increase in the volume of malicious emails leveraging the Google Cloud Run service to infect potential victims with banking trojans.

A massive leak from a Chinese Ministry of Public Security contractor called I-Soon shows that Beijing's intelligence and military groups are attempting large-scale, systemic cyber intrusions against foreign governments, companies, and infrastructure.

UNMUTE IT
#47 MLOps = DevOps + Engineering + Machine Learning?

Jan 11, 2024 · 26:20


Dev, DevOps, SecDevOps… There are plenty of abbreviations in the tech world, and today we're digging into a whole different one: MLOps!

Screaming in the Cloud
The Evolving Role of a Software Engineer with Forrest Brazeal

Aug 17, 2023 · 37:04


Forrest Brazeal, Head of Developer Media at Google Cloud, joins Corey on Screaming in the Cloud to discuss how AI, current job markets, and more are impacting software engineers. Forrest and Corey explore whether AI helps or hurts developers, and what impact it has on the role of a junior developer and the rest of the development team. Forrest also shares his viewpoints on how he feels AI affects people in creative roles. Corey and Forrest discuss the pitfalls of a long career as a software developer, and how people can break into a career in cloud as well as the necessary pivots you may need to make along the way. Forrest then describes why he feels workers are currently staying put where they work, and how he predicts a major shift will happen when the markets shift.

About Forrest
Forrest is a cloud educator, cartoonist, author, and Pwnie Award-winning songwriter. He currently leads the content marketing team at Google Cloud. You can buy his book, The Read Aloud Cloud, from Wiley Publishing or attend his talks at public and private events around the world.

Links Referenced:
Personal Website: https://goodtechthings.com
Newsletter signup: https://cloud.google.com/innovators

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn and I am thrilled to have a returning guest on, who has been, some would almost say, suspiciously quiet over the past year or so. Forrest Brazeal is the Head of Developer Media over at Google Cloud, and everyone sort of sits there and cocks their head, like, “What does that mean?” And then he says, “Oh, I'm the cloud bard.” And everyone's, “Oh, right. Get it: the song guy.” Forrest, welcome back.

Forrest: Thanks, Corey. As always, it's great to be here.

Corey: So, what have you been up to over the past, oh let's call it, I don't know, a year, since I think, is probably the last time you're on the show.

Forrest: Well, gosh, I mean, for one thing, it seems like I can't call myself the cloud bard anymore because Google rolled out this thing called Bard and I've started to get some DMs from people asking for, you know, tech support on Bard. So, I need to make that a little bit clearer that I do not work on Bard. I am a lowercase bard, but I was here first, so if anything, you know, Google has deprecated me.

Corey: Honestly, this feels on some level like it's more cloudy if we define cloudy as what, you know, Amazon does because they launched a quantum computing service about six months after they launched some unrelated nonsense that they called [QuantumDB 00:01:44], which you'd think if you're launching quantum stuff, you'd reserve the word quantum for that. But no, they're going to launch things that stomp all over other service names as well internally, so customers just wind up remarkably confused. So, if you find a good name, just we're going to slap it on everything, seems to be the way of cloud.

Forrest: Yeah, naming things has proven to be harder than either quantum computing or generative AI at this point, I think.

Corey: And in fairness, I will point out that naming things is super hard; making fun of names is not. So, that is—everyone's like, “Wow, you're so good at making fun of names. Can you name something well?” [laugh]. Absolutely not.

Forrest: Yeah, well, one of the things, you know, that I have been up to over the past year or so is just, you know, getting to learn more about what it's like to have an impact in a very, very large organizational context, right? I mean, I've worked in large companies before, but Google is a different size and scale of things and it takes some time honestly, to, you know, figure out how you can do the best for the community in an environment like that. And sometimes that comes down to the level of, like, what are things called? How do we express things in a way that makes sense to everyone and takes into account people's different communication styles and different preferences, different geographies, regions? And that's something that I'm still learning.

But you know, hopefully, we're getting to a point where you're going to start hearing some things come out of Google Cloud that answer your questions and makes sense to you. That's supposed to be part of my job, anyway.

Corey: So, I want to talk a bit about the idea of generative AI because there has been an awful lot of hype in the space, but you have never given me a bum steer. You have always been a level-headed, reasonable voice. You are not—to my understanding—a VC trying desperately to prop up an industry that you may or may not believe in, but you are financially invested into. What is your take on the last, let's call it, year of generative AI enhancements?

Forrest: So, to be clear, while I do have a master's degree in interactive intelligence, which is kind of AI adjacent, this is not something that I build with day-to-day professionally. But I have spent a lot of time over the last year working with the people who do that and trying to understand what is the value that gen AI can bring to the domains that I do care about and have a lot of interest in, which of course, are cloud developers and folks trying to build meaningful enterprise applications, take established workloads and make them better, and as well work with folks who are new to their careers and trying to figure out, you know, what's the most appropriate technology for me to bet on? What's going to help me versus what's going to hurt me?

And I think one of the things that I have been telling people most frequently—because I talk to a lot of, like, new cloud learners, and they're saying, “Should I just drop what I'm doing? Should I stop building the projects I'm working on and should I instead just go and get really good at generating code through something like a Bard or a ChatGPT or what have you?” And I went down a rabbit hole with this, Corey, for a long time and spent time building with these tools. And I see the value there. I don't think there's any question.

But what has come very, very clearly to the forefront is, the better you already are at writing code, the more help a generative AI coding assistant is going to give you, like a Bard or a ChatGPT, what have you. So, that means the way to get better at using these tools is to get better at not using these tools, right? The more time you spend learning to code without AI input, the better you'll be at coding with AI input.

Corey: I'm not sure I entirely agree because for me, the wake-up call that I had was a singular moment using I want to say it was either Chat-Gippity—yes, that's how it's pronounced—or else it was Gif-Ub Copilot—yes, also how it's pronounced—and the problem that I was having was, I wanted to query probably the worst API in the known universe—which is, of course, the AWS pricing API: it returns JSON, that kind of isn't, it returns really weird structures where you have to correlate between a bunch of different random strings to get actual data out of it, and it was nightmarish and of course, it's not consistent. So, I asked it to write me a Python script that would contrast the hourly cost of a Managed NAT gateway in all AWS regions and return a table sorted by the most to least expensive. And it worked.

Now, this is something that I could have done myself in probably half a day because my two programming languages of choice remain brute force and enthusiasm, but it wound up taking away so much of the iterative stuff that doesn't work of oh, that's not quite how you'd handle that data structure. Oh, you think it's a dict, but no, it just looks like one. It's a string first; now you have to convert it, or all kinds of other weird stuff like that. Like, this is not senior engineering work, but it really wound up as a massive accelerator to get the answer I was after. It was almost an interface to a bad API. Or rather, an interface to a program—to a small script that became an interface itself to a bad API.

Forrest: Well, that's right. But think for a minute, Corey, about what's implicit in that statement though. Think about all the things you had to know to get that value out of ChatGPT, right? You had to know, A, what you were looking for: how these prices worked, what the right price [style 00:06:52] was to look for, right, why NAT gateway is something you needed to be caring about in the first place. There's a pretty deep stack of things—actually, it's what we call a context window, right, that you needed to know to make this query take a half-day of work away from you.

And all that stuff that you've built up through years and years of being very hands-on with this technology, you put that same sentence-level task in the hands of someone who doesn't have that background and they're not going to have the same results. So, I think there's still tremendous value in expanding your personal mental context window. The more of that you have, the better and faster results you're going to get.

Corey: Oh, absolutely. I do want to steer away from this idea that there needs to be this massive level of subject matter expertise because I don't disagree with it, but you're right, the question I asked was highly contextual to the area of expertise that I have. But everyone tends to have something like that. If you're a marketer for example, and you wind up with an enormous pile of entrants on a feedback form, great. Can you just dump it all in and say, can you give me a sentiment analysis on this?

I don't know how to run a sentiment analysis myself, but I'm betting that a lot of these generative AI models do, or being able to direct me in the right area on this. The question I have is—it can even be distilled down into simple language of, “Here's a bunch of comments.
Do people love the thing or hate the thing?” There are ways to get there that apply, even if you don't have familiarity with the computer science aspects of it, you definitely have aspect to the problem in which you are trying to solve.Forrest: Oh, yeah, I don't think we're disagreeing at all. Domain expertise seems to produce great results when you apply it to something that's tangential to your domain expertise. But you know, I was at an event a month or two ago, and I was talking to a bunch of IT executives about ChatGPT and these other services, and it was interesting. I heard two responses when we were talking about this. The first thing that was very common was I did not hear any one of these extremely, let's say, a little bit skeptical—I don't want to say jaded—technical leaders—like, they've been around a long time; they've seen a lot of technologies come and go—I didn't hear a single person say, “This is something that's not useful to me.”Every single one of them immediately was grasping the value of having a service that can connect some of those dots, can in-between a little bit, if you will. But the second thing that all of them said was, “I can't use this inside my company right now because I don't have legal approval.” Right? And then that's the second round of challenges is, what does it look like to actually take these services and make them safe and effective to use in a business context where they're load-bearing?Corey: Depending upon what is being done with them, I am either sympathetic or dismissive of that concern. For example, yesterday, I wound up having fun with it, and—because I saw a query, a prompt that someone had put in of, “Create a table of the US presidents ranked by years that they were in office.” And it's like, “Okay, that's great.” Like, I understand the value here. But if you have a magic robot voice from the future in a box that you can ask it any question and as basically a person, why not have more fun with it?So, I put to it the question of, “Rank the US presidents by absorbency.” And it's like, “Well, that's not a valid way of rating presidential performance.” I said, “It is if I have a spill and I'm attempting to select the US president with which to mop up the spill.” Like, “Oh, in that case, here you go.” And it spat out a bunch of stuff.That was fun and exciting. But one example he gave was it ranked Theodore Roosevelt very highly. Teddy Roosevelt was famous for having a mustache. That might be useful to mop up a spill. Now, I never would have come up in isolation with the idea of using a president's mustache to mop something up explicitly, but that's a perfect writer's room style Yes, And approach that I could then springboard off of to continue iterating on if I'm using that as part of something larger. That is a far cry from copying and pasting whatever it is to say into an email, whacking send before realizing it makes no sense.Forrest: Yeah, that's right. And of course, you can play with what we call the temperatures on these models, right, to get those very creative, off-the-wall kind of answers, or to make them very, kind of, dry and factual on the other end. And Google Cloud has been doing some interesting things there with Generative AI Studio and some of the new features that have come to Vertex AI. But it's just—it's going to be a delicate dance, honestly, to figure out how you tune those things to work in the enterprise.Corey: Oh, absolutely. I feel like the temperature dial should instead be instead relabeled as ‘corporate voice.' 
Corey: Oh, absolutely. I feel like the temperature dial should instead be relabeled as ‘corporate voice.' Like, do you want a lot of it or a little of it? And of course, they have to invert it. But yeah, the idea is that, for some things, yeah, you definitely just want a just-the-facts style of approach.

Another demo that I saw, for example, that I thought showed a lack of imagination was, “Here's a transcript of a meeting. Extract all the to-do items.” Okay. Yeah, I suppose that works, but what about: here's a transcript of the meeting. Identify who the most unpleasant, passive-aggressive person in this meeting is to work with.

And to its credit—because of course this came from something corporate—none of the systems that I wound up running that particular query through could identify anyone, because of course the transcript was very bland and dry and not actually how human beings talk, other than in imagined corporate training videos.

Forrest: Yes, well again, I think that gets us into the realm of just because you can doesn't mean you should use it for this.

Corey: Oh, honestly, most of what I use this stuff for—or use anything for—should be considered a cautionary tale as opposed to guidance for the future. You write parody songs a fair bit. So do I, and I've had it attempt to write versions of, like, parody lyrics for some random song about this theme. And it's not bad, but for a lot of that stuff, it's not great, either. It is a starting point.

Forrest: Now, hang on, Corey. You know as well as I do that I don't write parody songs. We've had this conversation before. A parody is using existing music and adding new lyrics to it. I write my own music and my own lyrics, and I'll have you know, that's an important distinction. But—

Corey: True.

Forrest: I think you're right on that, you know, having these services give you creative output. What you're getting is an average of a lot of other creative output, right, which could give you a perfectly average result, but it's difficult to get a first pass that gives you something that really stands out. I do also find, as a creative, that starting with something that's very average oftentimes locks me into a place where I don't really want to be. In other words, I'm not going to potentially come up with something as interesting if I'm starting with a baseline like that. It's almost a little bit polluting to the creative process.

I know there's a lot of other creatives that feel that way as well, but you've also got people that have found ways to use generative AI to stimulate some really interesting creative things. And I think maybe the example you gave of the presidents ranked by absorbency is a great way to do that. Now, in that case, the initial creativity, a lot of it resided in the prompt, Corey. I mean, you're giving it a fantastically creative, unusual, off-the-wall place to start from. And just about any list of five presidents that comes out of that is going to be pretty funny and weird because of just how funny and weird the idea was to begin with. That's where I think AI can give you that great writer's room feel.

Corey: It really does. It's a Yes, And approach where there's a significant way that it can build on top of stuff. I've been looking for, I guess, a writer's room style of approach for a while, but it's hard to find the right people who don't already have their own platform and voice to do this. And again, it's not a matter of payment. I'm thrilled to basically pay any reasonable amount of money to build a writer's room here of people who get the cloud industry, to work with me and workshop some of the bigger jokes.

The challenge is that those people are very hard to find and/or are conflicted out. Having just a robot with infinite patience for tomfoolery—because the writing process can look kind of dull and crappy until you find the right thing—has been awesome. There's also a sense of psychological safety in not poisoning the well with people. Like, “I thought you were supposed to be funny, but this stuff is all terrible. What's the deal here?” I've already poisoned that well with my business partner, for example.

Forrest: Yeah, there's only so many chances you get to make that first impression, so why not go with AI that never remembers you or any of your past mistakes?

Corey: Exactly. Although the weird thing is that I found out that when they first launched Chat-Gippity, it already knew who I was. So, it is in fact familiar with at least my early work—I guess with my entire life. So that's—

Forrest: Yes.

Corey: —kind of worrisome.

Forrest: Well, I know it credited to me books I hadn't written and universities I hadn't attended and all kinds of good stuff, so it made me look better than I was.

Corey: So, what have you been up to lately in the context of—well, I said generative AI is a good way to start, but I guess we can also call it Google Cloud. Because I have it on good authority that, marketing to the contrary, all of the cloud providers do other things in addition to AI and ML work. It's just that's what's getting in the headlines these days. But I have noticed a disturbing number of virtual machines living in a bunch of customer environments relative to the amount of AI workloads that are actually running. So, there might be one or two other things afoot.

Forrest: That's right. And when you go and talk to folks that are actively building on cloud services right now, and you ask them, “Hey, what is the business telling you right now? What is the thing that you have to fix? What's the thing that you have to improve?” AI isn't always in the conversation.

Sometimes it is, but very often, those modernization conversations are about, “Hey, we've got to port some of these services to a language that the people that work here now actually know how to write code in. We've got to find a way to make this thing a little faster. Or maybe more specifically, we've got to figure out how to make it run at the same speed while using less or less expensive resources.” Which is a big conversation right now. And those are conversations as old as time. They're not going away, and so it's up to the cloud providers to continue to provide services and features that help make that possible.

And so, you're seeing that, like, with Cloud Run, where they've just announced this CPU Boost feature, right, that gives you kind of an additional—it's like a boost going downhill or a push on the swing as you're getting started, to help you get over that cold-start penalty. Or you're seeing the session affinity features for Cloud Run now, where you have the sticky-session ability that might allow you to use something like, you know, a container-backed service like that instead of a more traditional load balancer service that you'd be using in the past. So, you know, you take your eye off the ball for a minute, as you know, and 10 or 20 more of these feature releases come out, but they're all kind of in service of making that experience better, broadening the surface area of applications and workloads that are able to be moved to cloud and able to be run more effectively on cloud than anywhere else.
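For reference, here is a rough sketch of what turning on those two Cloud Run features might look like through the google-cloud-run Python client; the project, region, and service names are placeholders, and the field names reflect my reading of the client library rather than an official sample:

# A rough sketch (not an official sample) of enabling Cloud Run's startup
# CPU boost and session affinity with the google-cloud-run client library.
# Assumes: pip install google-cloud-run; names below are placeholders.
from google.cloud import run_v2

client = run_v2.ServicesClient()
name = "projects/my-project/locations/us-central1/services/my-service"

service = client.get_service(name=name)

# Startup CPU boost: extra CPU during cold start to reduce startup latency.
for container in service.template.containers:
    container.resources.startup_cpu_boost = True

# Session affinity: best-effort routing of a client's requests to the same
# container instance (the "sticky session" ability Forrest mentions).
service.template.session_affinity = True

# Rolls out a new revision with the updated settings.
operation = client.update_service(service=service)
operation.result()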
Corey: There's been a lot of talk lately about how the idea of generative AI might wind up causing problems for people, taking jobs away, et cetera, et cetera. You almost certainly have a borderline unique perspective on this because of your work with, honestly, one of the most laudable things I've ever seen come out of anywhere, which is The Cloud Resume Challenge: build a portfolio site, then go ahead and roll that out into how you interview. And it teaches people how to use cloud, step by step; you have multi-cloud versions, you have them for specific clouds. It's nothing short of astonishing. So, you find yourself talking to an awful lot of very early career folks, folks who are transitioning into tech from other places, and you're seeing an awful lot of these different perspectives and AI plays come to the forefront. How do you wind up, I guess, making sense of all this? What guidance are you giving people who are worried about that?

Forrest: Yeah, I mean, you know—look, for years now, when I get questions from these, let's call them career changers, non-traditional learners, who tend to be a large percentage, if not a plurality, of the people that are working on The Cloud Resume Challenge, the questions that they've come to me with are always, like, you know, “What is the one thing I need to know, the magic technology, the magic thing that will unlock the doors and give me the inside track to a junior position?” And what I've always told them—and it continues to be true—is there is no magic thing to know, other than magically going and getting two years of experience, right? The way we hire juniors in this industry is broken; it's been broken for a long time. It's broken not because of any one person's choice, but because of this sort of tragedy-of-the-commons situation where everybody's competing over a dwindling pool of senior and staff-level talent and hopes that the next person will, you know, train the next generation for them so they don't have to expend their energy and interview cycles and everything else on it. And as long as that remains true, it's just going to be a challenge to stand out.

Now, you'll hear a lot of people saying, “Well, I mean, if I have generative AI, I'm not going to need to hire a junior developer.” But if you're saying that as a hiring manager, as a team member, then I think you always had the wrong expectation for what a junior developer should be doing. A junior developer is not your mini-me who sits there and takes the little challenges, you know, the little scripts and things like that that are beneath you to write. And if that's how you treat your junior engineers, then you're not creating an environment for them to thrive, right?

A junior engineer is someone who, in a perfect world, should be able to come in almost in more of an apprentice context, somebody who's able to sit alongside you learning what you know, right, having education integrated into their actual job experience so that at the end of that time, they're able to step back and actually be a full-fledged member of your team rather than just someone that you kind of throw tasks over the wall to, with no career advancement potential out of that.

So, if anything, I think the advancement of generative AI, in a just world, ought to give people a wake-up call that, hey, training the next generation of engineers is something that we're actually going to have to actively create programs around now. It's not something where we can just, you know, give them the scraps that fall off of our desks. Unfortunately, I do think that in some cases, the gen AI narrative, more than the reality, is being used to help people put off the idea of trying to do that. And I don't believe that that's going to be true long-term. I think that if anything, generative AI is going to open up more need for developers.

I mean, it's generating a lot of code, right, and as we know, Jevons paradox says that when you make it easier to use something and there's elastic demand for that thing, the amount of creation of that thing goes up. And that's going to be true for code just like it was for electricity and for compute and for GPUs and who knows what all else. So, you're going to have all this code that has a much lower barrier of entry to creating it, right, and you're going to need people to harden that stuff and operate it in production, be on call for it at three in the morning, debug it. Someone's going to have to do all that, you know? And what I tell these junior developers is, “It could be you, and probably the best thing for you to do right now is, like I said before, get good at coding on your own. Build as much of that personal strength around development as you can so that when you do have the opportunity to use generative AI tools on the job, you have the maximum amount of mental context to put around them to be successful.”

Corey: I want to further point out that there are a number of folks whose initial reaction to a lot of this is defensiveness. I showed that script that wound up spitting out the Managed NAT gateway ranked-by-region table to one of our contract engineers, who's very senior. And the initial response I got from them was almost defensive: “Okay, yeah. That'll wind up taking over for, like, a $20-an-hour Upwork coder, but it's not going to replace a senior engineer.” And I felt like that was an interesting response psychologically because it felt defensive, for one, and two, not for nothing, but senior developers don't generally spring fully formed from the forehead of some ancient God. They start off as—dare I say it—junior developers who learn and improve as they go.

So, I wonder what this means. If we get to a point where generative AI takes care of all the quote-unquote, “easy programming problems,” and getting the easy scripts out, what does that mean for the evolution and development of future developers?

Forrest: Well, keep in mind—

Corey: And that might be a far-future question.

Forrest: Right. That's an argument as old as time, right—or a concern as old as time—and we hear it anew with each new level of automation. So, people were saying this a few years ago about the cloud or about virtual machines, right? Well, how are people going to, you know, learn how to do the things that sit on top of that if they haven't taken the time to configure what's below the surface? And I'm sympathetic to that argument to some extent, but at the same time, I think it's more important to deal with the reality we have now than try to create an artificial version of realities past.

So, here's the reality right now: a lot of these simple programming tasks can be done by AI. Okay, that's not likely to change anytime soon. That's the new reality. So now, what does it look like to bring on juniors in that context? And again, I think that comes down to: don't look at them as someone who's there just to, you know, be a pair of hands on a keyboard, spitting out tiny bits of low-level code.

You need to look at them as someone who needs to be, you know, an effective user of generative AI services, but also someone who is being trained and given access to the things they'll need to do on top of that—the architectural decisions, the operational decisions that they'll need to make in order to be effective as a senior. And again, that takes buy-in from a team, right, to make that happen. That is not going to happen automatically. So, we'll see. That's one of those things that's very hard to automate: the interactions between people and the growth of people. It takes people that are willing to be mentors.

Corey: I'm also curious as to how you see the guidance shifting as computers get better. Because right now, one of my biggest problems that I see is that if I have an idea for a company I want to start or a product I want to build that involves software, step one is: learn to write a bunch of code. And I feel like there's a massive opportunity for skipping aspects of that, where I effectively have the robot build me the MVP that I describe. Think drag-and-drop-to-build-a-web-app style of approach.

And the obvious response to that is, well, that's not going to go to hyperscale. That's going to break in a bunch of different ways. Well, sure, but I can get an MVP out the door to show someone without having to spend a year building it myself by learning the programming languages first, just to throw it away as soon as I hire someone who can actually write code. It cuts down that cycle time massively, and I can't shake the feeling that needs to happen.

Forrest: I think it does. And I think, you know, you were talking about your senior engineer that had this kind of default defensive reaction to the idea that something like that could meaningfully intrude on their responsibilities. And I think if you're listening to this and you are that senior engineer—you're five or more years into the industry and you've built your employability on the fact that you're the only person who can rough out these stacks—I would take a very, very hard look at yourself and the value that you're providing. And you say, you know—let's say that I joined a startup and the POC was built out by this technical—or possibly not-that-technical—co-founder, right. They made it work, and that thing went from, you know, not existing to having users in the span of a week, which we're seeing more now and we're going to see more and more of. Okay, what does my job look like in that world? What am I actually coming on to help with?

I'm coming on, probably, to figure out how to scale that thing and make it maintainable, right—operate it in a way that is not going to cause significant legal and financial problems for the company down the road. So, your role becomes less about being the person that comes in and does this totally greenfield thing from scratch and becomes more about being the person who comes in as the adult in the room, technically speaking. And I think that role is not going away. Like I said, there are going to be more of those opportunities rather than less. But it might change your conception of yourself a little bit, how you think about yourself, the value that you provide. Now's the time to get ahead of that.

Corey: I think that it is myopic and dangerous to view what you do as an engineer purely through the lens of writing code, because it is a near certainty that if you are learning to write code today and build systems involving technology today, you will have multiple careers between now and retirement. And in fact, if you're entering the workforce now, the job that you have today will not exist in anything remotely approaching the same way by the time you leave the field. And the job you have then looks borderline unrecognizable, if it even exists at all, today. That is the overwhelming theme here—the tech industry moves quickly and has not solidified the way a number of other industries have. Like accountants: they existed a generation ago and will exist in largely the same form a generation from now.

But software engineering in particular—and cloud, of course, as well, tied to that—has been iterating so rapidly, with such sweepingly vast changes, that that is something that I think we're going to have a lot of challenge just wrestling with. If you want a job that doesn't involve change, this is the wrong field.

Forrest: It is the wrong field. And honestly, software engineering is, has been, and will continue to be a difficult business to make a 40-year career in. And this came home to me really strongly. I was talking to somebody a couple of months ago who, if I were to say the name—which I won't—you and I would both know it, and a lot of people listening to this would know it as well. This is someone who's very senior, very well respected, who is, by name, identified in large part with the creation of a significant movement in technology. So, someone who you would never think would be having a problem getting a job.

Corey: Is it me? And is it Route 53 as a database, as the movement?

Forrest: No, but good guess.

Corey: Excellent.

Forrest: This is someone I was talking to because I had just given a talk where I was pleading with IT leaders to take more responsibility for building on-ramps for non-traditional learners, career changers, people that are doing something a little different with their career. And I was mainly thinking of it as people that had come from a completely non-technical background, or maybe people that were, you know, like, I don't know, IT service managers with skills 20 years out of date, something like that. But this is a person who you and I would think of as someone at the forefront, the cutting edge, an incredibly employable person. And this person was a little bit farther on in their career, and they came up to me and said, “Thank you so much for giving that talk, because this is the problem I have. Every interview that I go into, I get told, ‘Oh, we probably can't afford you,' or, ‘Oh well, you say you want to do AI stuff now, but we see that all your experience is doing this other thing, and we're just not interested in taking a chance on someone like that at the salary you need to be at.'” And this person's, like, “What am I going to do? I don't see the roadmap in front of me anymore like I did 10, 15, or 20 years ago.”

And I was so sobered to hear that coming from, again, someone who you and I would consider to be a luminary, a leading light at the top of the, let's just broadly say, IT field. And I had to go back and sit with that. And all I could come up with was: if you're looking ahead and you say I want to be in this industry for 30 years, you may reach a point where you have to take a tremendous amount of personal control over where you end up. You may reach a point where there is not going to be a job out there for you, right, that has the salary and the options that you need. You may need to look at building your own path at some point. It just gets really rough out there unless you want to stagnate and stay in the same place. And I don't have a good piece of advice for that other than: you're going to have to find a path that's unique to you. There is not a blueprint once you get beyond that stage.

Corey: I get asked questions around this periodically. The problem that I have with it is that I can't take my own advice anymore. I wish I could. But what I used to love doing was, every quarter or so, I'd make it a point to go on at least one job interview somewhere else. This wound up having a few great features.

One, interviewing is a skill that atrophies if you don't use it. Two, it gives me a finger on the pulse of what the market is doing, what the industry cares about. I dismissed Docker the first time I heard about it, but after the fourth interview where people were asking about Docker—okay, this is clearly a thing. And it forced me to keep my resume current because I've known too many people who spend seven years at a company and then wind up forgetting what they did years three, four, and five—where, okay, then what was the value of being there? It also forces you to keep an eye on how you're evolving and growing, or whether you're getting stagnant.

I don't ever want to find myself in the position of the person who's been at a company for 20 years and gets laid off and discovers to their chagrin that they don't have 20 years of experience; they have one year of experience repeated 20 times. Because that is a horrifying and scary position to be in.

Forrest: It is horrifying and scary. And I think people broadly understand that that's not a position they want to be in, hence why we do see people that are seeking out this continuing education, trying to—you know, trying to reinvent themselves. I see a lot of great initiative from people that are doing that. But it tends to be more on the company side where, you know, they get pigeonholed into a position, and the company that they're at says, “Yeah, no. We're not going to give you this opportunity to do something else.”

So, we say, “Okay. Well, I'm going to go and interview other places.” And then other companies say, “No, I'm not going to take a chance on someone that's mid-career to learn something brand new. I'm going to go get someone that's fresh out of school.” And so again, that comes back to, you know, where are we as an industry on making space for non-traditional learners and career changers to take the maturity that they have, right—even if it's not specific familiarity with this technology right now—and let them do their thing, let them get on track.

You know, there's tremendous potential being untapped there and wasted, I would say. So, if you're listening to this and you have the opportunity to hire people, I would just strongly encourage you to think outside the box and consider people that are farther on in their careers, even if their technical skill set doesn't exactly line up with the five pieces of technology that are on your job req. Look for people that have demonstrated success and the ability to learn at whatever [laugh] the things are that they've done in the past, people that are tremendously highly motivated to succeed, and let them go win on your behalf. You have no idea the amount of talent that you're leaving on the table if you don't do that.

Corey: I'd also encourage people to remember that job descriptions are inherently aspirational. If you take a job where you know how to do every single item on the list because you've done it before, how is that not going to be boring? I love being given problems. And maybe I'm weird like this, but I love being given a problem where people say, “Okay, so how are you going to solve this?” And the answer is, “I have no idea yet, but I can't wait to find out.” Because at some level, being able to figure out what the right answer is and pick up the skill sets I don't have is the best way to learn something that I've ever found, at least for me.

Forrest: Oh, I hear that. And what I've found, you know, working with a lot of new learners that I've given that advice to is, typically the ones that advice works best for, unfortunately, are the ones who have a little bit of baked-in privilege, people that tend to skate by more on the benefit of the doubt. That is a tough piece of advice to fulfill if you're, you know, someone who's historically underrepresented or doesn't often get the chance to prove that you can do things that you don't already have a testament to doing successfully. So again, that takes it back to the hiring side. Be willing to bet on people, right, and not just kind of look at their resume and go from there.

Corey: So, I'm curious to see what you've noticed in the community, because I have a certain perspective on these things, and a year ago, everyone was constantly grousing about dissatisfaction with their employers in a bunch of ways. And that seems to have largely vanished. I know there have been a bunch of layoffs, and those are tragic on both sides, let's be very clear. No one is happy when a layoff hits. But I'm also seeing a lot more of people keeping their concerns to either private channels or to themselves, and I'm seeing what seems to be less mobility between companies than I saw previously. Is that because people are just now grateful to have a job and don't want to rock the boat, or is it still happening and I'm just not seeing it in the same way?

Forrest: No, I think the vibe has shifted, for sure. You've got, you know, fewer opportunities that are available; you know that if you do lose your job, you're potentially going to have fewer places to go to. I liken it to if you bought a house with a sub-3% mortgage in 2021, let's say, and now you want to move. Even though the housing market may have gone down a little bit, those interest rates are so high that you're going to be paying more, so you're kind of stuck where you are until the market stabilizes a little bit. And I think there's a lot of people in that situation with their jobs, too.

They locked in salaries at '21, '22 prices, and now here we are in 2023, and those [laugh] those opportunities are just not open. So, I think you're seeing a lot of people staying put—rationally, I would say—and waiting for the market to shift. But I think that at the point that you do see that shift, then yes, you're going to see an exodus; you're going to see a wave, and there will be a whole bunch of new think pieces about the great resignation or something. But all it is is just that pent-up demand, as people that are unhappy in their roles finally feel like they have the mobility to shift.

Corey: I really want to thank you for taking the time to speak with me. If people want to learn more, where's the best place for them to find you?

Forrest: You can always find me at goodtechthings.com. I have a newsletter there, and I like to post cartoons and videos and other fun things there as well. If you want to hear my weekly take on Google Cloud, go to cloud.google.com/innovators and sign up there. You will get my weekly newsletter, The Overwhelmed Person's Guide to Google Cloud, where I try to share just the Google Cloud news and community links that are most interesting and relevant in a given week. So, I would love to connect with you there.

Corey: I have known you for years, Forrest, and both of those links are new to me. So, this is the problem with being active in a bunch of different places. It's always difficult to—“Where should I find you?” “Here's a list of 15 places,” and some slip through the cracks. I'll be signing up for both of those, so thank you.

Forrest: Yeah. I used to say just follow my Twitter, but now there's, like, five Twitters, so I don't even know what to tell you.

Corey: Yes. The balkanization of this is becoming very interesting. Thanks [laugh] again for taking the time to chat with me, and I look forward to the next time.

Forrest: All right. As always, Corey, thanks.

Corey: Forrest Brazeal, Head of Developer Media at Google Cloud, and of course the Cloud Bard. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment that you undoubtedly had a generative AI model write for you and then failed to proofread.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Les Cast Codeurs Podcast
LCC 296 - Interview Google IA IA I/O 2023

Les Cast Codeurs Podcast

Play Episode Listen Later May 25, 2023 104:45


In this episode, Antonio, Emmanuel, and Guillaume look back at the new features and announcements made at Google I/O 2023: new Pixel phones that fold (or don't), and above all artificial intelligence from floor to ceiling! Whether in Android, Google Workspace, or Google Cloud, a ton of products are shifting into AI-supercharged mode. Guillaume, Antonio, and Emmanuel also discuss the impact they see AI having, how Large Language Models are refined and why they are made to hallucinate, and some subtleties of sign language. Recorded May 23, 2023. Episode download: LesCastCodeurs-Episode-296.mp3

Google I/O 2023
Website: https://io.google/2023/
Main keynote: https://io.google/2023/program/396cd2d5-9fe1-4725-a3dc-c01bb2e2f38a/
Developer keynote: https://io.google/2023/program/9fe491dd-cadc-4e03-b084-f75e695993ea/
10-minute video recap of all the announcements: https://www.youtube.com/watch?v=QpBTM0GO6xI&list=TLGGCy91ScdjTPYxNjA1MjAyMw
Video of all the technical sessions: https://io.google/2023/program/?q=technical-session
Google I/O was held 10 days ago in California, in the Shoreline amphitheater near the Google campus. Only 2,000 people on site, with a chat and an online game for attending remotely. The online game I/O Flip was built with Flutter, Dart, Firebase, and Cloud Run, with all the graphical assets generated by generative AI: https://blog.google/technology/ai/google-card-game-io-flip-ai/

An eyeful of Pixels!
Details on the design of the new devices: https://blog.google/products/pixel/google-pixel-fold-tablet-7a-design/
Pixel Fold
Article: https://blog.google/products/pixel/google-pixel-fold/
Google's first foldable phone (after Samsung and Oppo)
A screen on the outside and a large foldable screen inside
Handy for translation, where you can see a conversation translated into two languages, one language on one screen and the other language on the other
Creative uses of the fold: "laptop" mode, selfies, propping the device up for night photos
On the other hand... not available in France, and still almost €1,900!

Pixel Tablet
Article: https://blog.google/products/pixel/google-pixel-tablet/
A nice 11-inch tablet, with a charging dock that has a built-in speaker
Tensor G2 processor, built-in Chromecast
A bit like the Google Nest Hub Max, but with a detachable screen
A handy case with a built-in stand that doesn't prevent charging the tablet on the dock
In docked mode, it's like the Google Home app screen, and as soon as you detach it, you're in multi-user mode, each person with their own profile

Pixel 7a
Article: https://blog.google/products/pixel/pixel-7a-io-2023/
6-inch screen
Triple camera (wide-angle, main, and front camera for selfies)
€509
Magic Eraser to remove unwanted things from photos, Magic Unblur to sharpen a blurry photo, Real Tone to render darker skin tones more naturally

Android
Article on what's new in Android: https://blog.google/products/android/android-updates-io-2023/
In Messages, Magic Compose in conversations: AI helps you draft your messages in different styles (more professional, more fun, in the style of Shakespeare)
Android 14 should arrive a bit later this year, with more customization options (wallpapers generated by generative AI, emoji wallpapers, matching colors, 3D wallpapers from your photos) https://blog.google/products/android/new-android-features-generative-ai/
StudioBot: a chatbot integrated into Android Studio to help with Android app development https://io.google/2023/program/d94e89c5-1efa-4ab2-a13a-d61c5eb4e49c/
800 million users have moved to RCS for messaging
50 Android apps adapted for foldables https://blog.google/products/android/android-app-redesign-tablet-foldable/
Wear OS 4 will add backup and restore when you change watches, among other new features https://blog.google/products/wear-os/wear-os-update-google-io-2023/
800 free TV channels in Google TV on Android and in the car
Android Auto will be available in 200 million cars https://blog.google/products/android/android-auto-new-features-google-io-2023/
Waze available globally on the Play Store in all cars with Android Auto

Google Maps
Article: https://blog.google/products/maps/google-maps-updates-io-2023/
Maps serves 20 billion km of directions every day
Immersive View for Routes in 15 cities: Amsterdam, Berlin, Dublin, Florence, Las Vegas, London, Los Angeles, Miami, New York, Paris, San Francisco, San Jose, Seattle, Tokyo, and Venice
Developers can integrate with it and add 3D augmentations and markers

Google Photos
Magic Editor article: https://blog.google/products/photos/google-photos-magic-editor-pixel-io-2023/
AI-supercharged Magic Editor to improve photos by moving people, filling in cropped-off areas, or making the sky prettier
Possibly limited to Pixel phones at first

Experimental projects
Project Starline (a screen with a 3D camera that renders your conversation partner in 3D as if they were right in front of you) has been improved to take up less space https://blog.google/technology/research/project-starline-prototype/
Universal Translator: a new experiment in automatic dubbing and translation with lip-sync
Project Tailwind: a sort of notebook into which you can add all your documents from Drive, then ask questions about their content, get summaries, or brainstorm on those themes https://thoughtful.sandbox.google.com/about
MusicLM: a large language model that generates music from a text prompt (waitlist sign-up) https://blog.google/technology/ai/musiclm-google-ai-test-kitchen/
Project Gameface: using facial expressions to control a mouse and computer, for people who have lost mobility https://blog.google/technology/ai/google-project-gameface/
VisualBlocks: for experimenting with model development for TensorFlow Lite and TensorFlow.js in a drag-and-drop interface https://visualblocks.withgoogle.com/
MakerSuite: for tinkerers and developers https://makersuite.google.com/ https://developers.googleblog.com/2023/05/palm-api-and-makersuite-moving-into-public-preview.html

Search Labs
Article: https://blog.google/products/search/generative-ai-search/
Experiments adding generative AI to Google Search
Searching with more complex, phrase-like queries, integrating Bard-style answers, with links and suggestions for related follow-up searches
But also serving better-targeted ads
You can sign up for Search Labs to test this new experience, though initially only in English and only in the US
Integrations with Google Shopping to suggest and filter products matching the query
Visual search with Google Lens: 12 billion visual searches per month

PaLM and Bard
Announcement of the PaLM 2 LLM used in Bard and in Google Cloud https://blog.google/technology/ai/google-palm-2-ai-large-language-model/
PaLM 2 is being integrated into 25 Google products
Will support 100 different languages (for now only English, Japanese, and Korean), with the 40 most widely spoken languages coming by the end of the year
Now available in 180 countries... except Europe!!!
Improved reasoning capabilities
Can code in about twenty different programming languages, including Groovy
Different model sizes: Gecko, Otter, Bison, and Unicorn, but the number of parameters is not disclosed, as with OpenAI's GPT-4
Usable for queries and for chat
Fine-tuned derivative models: Med-PaLM 2, trained on medical knowledge and visual analysis of X-rays, and Sec-PaLM, trained on cybersecurity use cases, to help detect malicious scripts and attack vectors
Sundar Pichai also announced that Google is already working on the next evolution of its LLMs, a model called Gemini. Few details other than that it will be multimodal (in particular, combined image-and-text search)
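To make the Codey announcement concrete, here is a minimal sketch (not an official sample) of calling the code-generation model through the Vertex AI Python SDK; the project, location, and model version are assumptions, and depending on SDK vintage these classes may live under vertexai.preview.language_models:

# A minimal sketch of calling the Codey code-generation model announced at
# I/O 2023, via the Vertex AI Python SDK. Project, location, and the model
# version are illustrative assumptions.
import vertexai
from vertexai.language_models import CodeGenerationModel

vertexai.init(project="my-project", location="us-central1")

codey = CodeGenerationModel.from_pretrained("code-bison@001")

# Codey takes a natural-language (or partial-code) prefix and completes it.
result = codey.predict(
    prefix="Write a Groovy function that reverses a string.",
    max_output_tokens=256,
)
print(result.text)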
Partnership with Adobe: Firefly integrated into Bard to generate images https://blog.adobe.com/en/publish/2023/05/10/adobe-firefly-adobe-express-google-bard

Duet AI for Google Workspace
Article: https://workspace.google.com/blog/product-announcements/duet-ai
In Gmail and Docs, offers to help you draft your emails and documents: an extension of "smart compose" that can generate entire emails, improve style, fix grammar, and avoid repeated text
In Docs, new "smart chips" for adding variables and templates
In Slides, adding AI-generated images
Prompts in Sheets to generate a draft table
In Google Meet, the ability to create a custom background image with generative AI
These improvements are part of Workspace Labs, which you can join via the waitlist https://workspace.google.com/labs-sign-up/

Google Cloud
Generative AI integrated everywhere https://cloud.google.com/blog/products/ai-machine-learning/google-cloud-launches-new-ai-models-opens-generative-ai-studio
New A3 VMs with Nvidia H100 GPUs, ideal for training machine learning models, with 26 exaFlops of performance https://cloud.google.com/blog/products/compute/introducing-a3-supercomputers-with-nvidia-h100-gpus
Three new LLMs in Vertex AI: Imagen (private preview) for image generation, Codey for code generation, and Chirp for speech, supporting 100 different languages with 2 billion speech parameters
Model Garden: machine learning models, including external and open-source ones
Embeddings added for text and image
RLHF, Reinforcement Learning from Human Feedback, soon integrated to extend Vertex AI tuning and prompt design with a human feedback loop
Generative AI Studio for testing zero-shot, one-shot, and multi-shot prompts
Duet AI for Google Cloud https://cloud.google.com/blog/products/application-modernization/introducing-duet-ai-for-google-cloud
Code assistance in VS Code, and soon in JetBrains IDEs, via the Cloud Code plugin, and in Cloud Workstations
A chat integrated into the IDEs as a companion for discussing architecture or finding the commands to run for your project
Codey's code model works across about twenty programming languages, but a fine-tuned model was trained on all of the Google Cloud documentation, so it will be able to help in particular with using Google Cloud APIs or the gcloud command line
Duet AI is also in AppSheet, the low/no-code platform, and will let you chat with a chatbot to generate an AppSheet application

What's new in Firebase https://firebase.blog/posts/2023/05/whats-new-at-google-io

Web
Article: https://developers.googleblog.com/2023/05/io23-developer-keynote-recap.html
Flutter 3 and Dart 3.10 https://io.google/2023/program/7a253260-3941-470b-8a4d-4253af000119/
WebAssembly https://io.google/2023/program/1d176349-7cf8-4b51-b816-a90fc9d7d479/
WebGPU https://io.google/2023/program/0da196f5-5169-43ff-91db-8762e2c424a2/
Baseline https://io.google/2023/program/528a223c-a3d6-46c5-84e4-88af2cf62670/ https://web.dev/baseline/

Contact us
To react to this episode, come discuss it in the Google group https://groups.google.com/group/lescastcodeurs
Contact us via Twitter https://twitter.com/lescastcodeurs
Submit a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/

Screaming in the Cloud
Doing What You Love in Cloud with Nate Avery

Screaming in the Cloud

Play Episode Listen Later May 11, 2023 33:15


Nate Avery, Outbound Product Manager at Google, joins Corey on Screaming in the Cloud to discuss what it's like working in the world of tech, including the implications of AI technology on the workforce and the importance of doing what you love. Nate explains why he feels human ingenuity is so important in the age of AI, as well as why he feels AI will make humans better at the things they do. Nate and Corey also discuss the changing landscape of tech and development jobs, and why it's important to help others throughout your career while doing something you love.

About Nate
Nate is an Outbound Product Manager at Google Cloud focused on our DevOps tools. Prior to this, Nate has 20 years of experience designing, planning, and implementing complex systems integrating custom-built and COTS applications. Throughout his career, he has managed diverse teams dedicated to meeting customer goals. With a background as a manager, engineer, Sys Admin, and DBA, Nate is currently working on ways to better build and use virtualized computer resources in both internal and external cloud environments. Nate was also named a Cisco Champion for Datacenter in 2015.

Links Referenced:
Google Cloud: https://cloud.google.com/devops
Not Your Dad's IT: http://www.notyourdadsit.com/
Twitter: https://twitter.com/nathaniel_avery
LinkedIn: https://www.linkedin.com/in/nathaniel-avery-2a43574/

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: It's easy to **BEEP** up on AWS. Especially when you're managing your cloud environment on your own! Mission Cloud un-**BEEP**s your apps and servers. Whatever you need in AWS, we can do it. Head to missioncloud.com for the AWS expertise you need.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and my guest today is Nate Avery, who's an outbound product manager over at Google Cloud. Nate, thank you for joining me.

Nate: Thank you for having me. This is really a pretty high honor. I'm super thrilled to be here.

Corey: One of my questions about any large company, when I start talking to them and getting to know people who work over there, pretty quickly emerges, which is, “What's the deal with your job title?” And it really doesn't matter what the person does or what the company is; there's always this strange nuance that tends to wind up creeping into the company. What is an outbound product manager, and what is it you say it is you do here?

Nate: Okay. That's an interesting question because I've been here for about a year now, and I think I'm finally starting to figure it out. Sure, I should have known more when I applied for the job, [laugh] but there's what's on the paper and then there's what you do in reality. And so, what it appears to be, where I'm taking this thing now, is I talk to folks about our products and I try to figure out what it is they like, what it is they don't like, and then how do we make it better? I take that information back to our engineers, we huddle up, and we figure out what we can do, how to do it better, how to set the appropriate targets when it comes to our roadmaps.

We look at others in the industry, where we are, where they are, where we think we can maybe have an advantage, and then we try to make it happen. That's really what it is.

Corey: One of the strange things that happens at big companies, at least from my perspective, given that I've spent most of my career in small ones, is that everyone has a niche. There are very few people at large companies whose job description is, yeah, I basically do everything. Where do you start? And where do you stop? Because Google Cloud, even bounding it to that business unit, is kind of enormous. You've [got 00:02:47] products that are outbound that you manage. And I feel like I should also call out that a product being outbound is not the same thing as being outgoing. I know that people are always wondering, what's Google going to turn off next, but Google Cloud mostly does the right thing in that respect. Good work.

Nate: [laugh]. Nice. So, the products I focus on are the DevOps products. So, those are Cloud Build, Cloud Deploy, Artifact Registry, Artifact Analysis. I also work with some of our other dev tooling, such as Cloud Workstations. That's in public preview right now, but maybe by the time this goes to air, it'll actually be in general availability.

And then I also will talk about some of our other lesser-known tools, like Skaffold, or maybe on occasion I'll throw out something about minikube. And also Cloud Code, which is a really deep plugin for your IDE that gives you access to lots of different Google tools. So yeah, that's sort of my area.

Corey: Well, I'm going to start with the last thing you mentioned, where you have Cloud Code as IDE tooling and a plug-in for it. I'm relatively new to the world of IDEs because I come from the world of grumpy Unix admins; you never know what you're going to be remoting into next, but it's got vi on it, so worst case, you'll have that. So, I grew up using that, and as a result, that is still my default. I've been drifting toward VS Code a fair bit lately, as I've been regrettably learning JavaScript and TypeScript, just because having a lot of those niceties is great. But what's really been transformative for me has been a lot of the generative AI offerings from various companies around, hey, how about we just basically tab-complete your code for you, which is astonishing. I know people love to argue about that, and then they go right back to their old approach of copying and pasting their code off of Stack Overflow.

Nate: Yeah. That's an interesting one. When it works, it works, and it's magical. And those are those experiences where you say, “I'm going to do this thing forever and ever; I'm never going to go back.” And then when it doesn't work, you find yourself going back, and then you maybe say, “Well, heck, that was horrible. Why'd I ever even go down this path?”

I will say everyone's working on something along those lines. I don't think that that's much of a secret. And there are just so many different avenues of getting there. And I think that this is so early in the game that where we are today isn't where we're going to be.

Corey: Oh, just—it's accelerating. Watching the innovation right now in the generative AI space is incredible.
My light bulb moment that finally got me to start paying attention to this—and viewing it as something other than hype that people are trying to sell us on conference stages—was when I used one of them to spit out, just from a comment in VS Code, “Write a Python script that will connect to the AWS pricing API and tell me what something costs, sorted from most to least expensive regions.” Because doing that manually would have taken a couple of hours, because their data structures are a sad joke and that API is garbage. And it sat and spun for a second and then it did it. But if I tell that story as, “This is the transformative moment that opened my eyes,” I sound incredibly sad and pathetic.
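For the curious, a one-shot script along those lines might look roughly like this; it's a sketch using boto3's Price List client, and the service code, filters, and attribute names are illustrative assumptions rather than a tested query:

# A rough sketch of the kind of script Corey describes: query the AWS Price
# List API and rank regions by on-demand price. The filters below are
# illustrative assumptions, not a verified set for every SKU.
import json
import boto3

# The pricing API is only served from a couple of regions, us-east-1 included.
pricing = boto3.client("pricing", region_name="us-east-1")

prices = {}
paginator = pricing.get_paginator("get_products")
for page in paginator.paginate(
    ServiceCode="AmazonEC2",
    Filters=[
        {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "m5.large"},
        {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
        {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
        {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
    ],
):
    for item in page["PriceList"]:
        product = json.loads(item)  # each entry is a JSON string
        region = product["product"]["attributes"].get("regionCode", "unknown")
        # Dig the single price dimension out of the nested OnDemand terms.
        for term in product["terms"]["OnDemand"].values():
            for dim in term["priceDimensions"].values():
                prices[region] = float(dim["pricePerUnit"]["USD"])

for region, usd in sorted(prices.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{region}\t${usd:.4f}/hr")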
But no, they have all of this stuff at their hands and then even just in the air, right? There's ambient computing, there's any question you want answered, you could speak it into the air and it'll come out. And it'll be, let's just say, I don't know, at least 85% accurate. But my kids still ask me [laugh].Corey: Having my kids, who are relatively young, still argue and exhaust their patience on a robot with infinite patience instead of me who has no patience? Transformative. “How do I spell whatever it is?” “Ask Alexa,” becomes a story instead of, “Look it up in the dictionary,” like my parents used to tell me. It's, “If I knew how to spell it, I would need to look it up in the dictionary, but I don't, so I can't.”Nate: Right. And I would never need to spell it again because I have the AI write my whole thing for me.Corey: That is a bit of concern for me when—some of the high school teachers are freaking out about students are writing essays with this thing. And, yeah, on the one hand, I absolutely see this as alarmism, where, oh, no, I'm going to have to do my job, on some level. But the reason you write so many of those boring, pointless essays in English class over the course of the K through 12 experience is ideally, it's teaching you how to frame your discussions, how to frame an argument, how to tell a compelling story. And, frankly, I think that's something that a lot of folks in the engineering cycle struggle with mightily. You're a product slash program manager at this point; I sort of assume that I don't need to explain to you that engineers are sometimes really bad at explaining what they mean.Nate: Yeah. Dude, I came up in tech. I'm… bad at it too sometimes [laugh]. Or when I think I'm doing a great job and then I look over and I see a… you know, the little blanky, blanky face, it goes, “Oh. Oh, hold on. I'll recalibrate that for you.” It's a thing.Corey: It's such a bad trope that they have now decided that they are calling describing what you actually mean slash want is now an entire field called prompt engineering.Nate: Dude, I hate that. I don't understand how this is going to be a job. It seems to be the most ridiculous thing in the world. If you say, “I sit down for six hours a day and I ask my computer questions,” I got to ask, “Well, why?” [laugh]. You know? And really, that's the thing. It gets back—Corey: Well, most of us do that all day long. It's just in Microsoft Excel or they use SQL to do it.Nate: Yeah… it is, but you don't spend your day asking the question of your computer, “Why.” Or really, most of us ask the question, “How?” That's really what it is we're doing.Corey: Yeah. And that is where I think it's going to start being problematic for some folks who are like, “Well, what is the point of writing blog posts if Chat-GIPITY can do it?” And yes, that's how I pronounce it: Chat-GIPITY. And the response is, “Look, if you're just going to rehash the documentation, you're right. There's no point in doing it.”Don't tell me how to do something. Tell me why. Tell me when I should consider using this tool, that tool, why this is interesting to me, why it exists. Because the how, one way or another, there are a myriad ways to find out the answer to something, but you've got to care first and convincing people to care is something computers still have not figured out.Nate: Bingo. And that gets back to your question about the engineers, right? Yeah. Okay. 
So sure, the little low-level tasks of, “Hey I need you to write this API.” All right, so maybe that stuff does get farmed out.However, the overall architecture still has to be considered by someone, someone still has to figure out where and how, and when things should be placed and the order in which these things should be connected. That never really goes away. And then there's also the act of creation. And by creation, I mean, just new ideas, things that—you know, that stroke of creativity and brilliance where you just say, “Man, I think there's a better way to do this thing.” Until I see that from one of these generative AI products, I don't know if anyone should truly feel threatened.Corey: I would argue that people shouldn't necessarily feel threatened regardless because things always change; that's the nature of it. I saw a headline on Hacker News recently where it said that 90% of my skills are worthless, but 10% of them are 10x what they were was worth. And I think that there's a lot of truth to that because it's, if you want a job where you never have to—you don't have to keep up with the continuing field, there are options. Not to besmirch them, but accountants are a terrific example of this. Yes, there's change to accountancy rules, but it happens slowly and methodically. You don't go on vacation for two years as an accountant—or a sabbatical—come back and discover that everything's different and math doesn't work the way it once did. Computers on the other hand, it really does feel like it's keep up or you never will.Nate: Unless you're a COBOL guy and you get called back for y2k.Corey: Oh, of course. And I'm sure—and now you're sitting around, you're waiting because when the epic time problem hits in 2038, you're going to get your next call out. And until then, it's kind of a sad life. You're the Maytag repair person.Nate: Yeah. I'm bad at humor, by the way, in case you have noticed. So, you touched on something there about the rate of change and how things change and whether or not these generative AI models are going to be able to—you know, just how far can they go? And I think that there's a—something happened over the last week or so that really got me thinking about this. There was a posting of a fake AI-generated song, I think from Drake.And say what you want about cultural appropriation, all that sort of thing, and how horrible that is, what struck me was the idea that these sorts of endeavors can only go so far because in any genre where there's language, and current language that morphs and changes and has subtlety to it, the generative AI would have to somehow be able to mimic that. And not to say that it could never get there, but again, I see us having some situations where folks are worried about a lot of things that they don't need to worry about, you know, just at this moment.Corey: I'm curious to figure out what your take is on how you see the larger industry because for a long time—and yes, it's starting to fade on some level, because it's not 2006 anymore, but there was a lot of hero worship going on with respect to Google, in particular. It was the mythical city on the hill where all the smart people went and people's entire college education was centered around the idea of, well, I'm going to get a job at Google when I graduate or I'm doomed. And it never seems to work out that way. I feel like there's a much more broad awareness these days that there's no one magical company that has the answers and there are a lot of different paths. 
But if you're giving guidance to someone who's starting down that path today, what would it be?Nate: Do what you love. Find something that you love, figure out who does the thing that you love, and go there. Or go to a place that does a thing that you love poorly. Go there. See if you can make a difference. But either way, you're working on something that you like to do.And really, in this business, if you can't get in the door at one of those places, then you can make your own door. It's becoming easier and easier to just sort of shoehorn yourself into this space. And a lot of it, yeah, there's got to be talent, yeah, you got to believe in yourself, all that sort of thing, but the barriers to entry are really low right now. It's super easy to start up a website, it costs you nothing to have a GitHub account. I really find it surprising when I talked to my younger cousins or someone else in that age range and they start asking, like, “Well, hey, how do I get into business?”And I'm like, “Well, what's your portfolio?” You know? And I ask them, “Do you want to work for someone else? Or would you like to at least try working for yourself first?” There are so many different avenues open to folks that you're right, you don't have to go to company X or you will never be anything anymore. That said, I am at [laugh] one of the bigger companies and do there are some brilliant people here. I bump into them and it's kind of wild. It really, really is.Corey: Oh, I want to be very clear, despite the shade that I throw at Google—and contemporary peers in the big tech company space—there are an awful lot of people who are freaking brilliant. And more importantly, by far, a lot of people who are extraordinarily kind.Nate: Yeah. Yeah. So, all right, in this business, there's that whole trope about, “Yeah, they're super smart, but they're such jerks.” It doesn't have to be that way. It really doesn't. And it's neat when you run into a place that has thousands of people who do not fit that horrible stereotype out there of the geek who can't, you know, who can't get along well with others. It's kind of nice.But I also think that that's because the industry itself is opening up. I go on to Twitter now and I see so many new faces and I see folks coming in, you know, for whatever reason, they're attracted to it for reasons, but they're in. And that's the really neat part of it. I used to worry that I didn't see a lot of young people being interested in this space. But I'm starting to notice it now and I think that we're going to wind up being in good hands.Corey: The kids are all right, I think, is a good way of framing it. What made you decide to go to Google? Again, you said you've been there about a year at this point. And, on some level, there's always a sense in hindsight of, well, yeah, obviously someone went from this job to that job to that job. There's a narrative here and it makes sense, but I've never once in my life found that it made sense and was clear while you're making the decision. It feels like it only becomes clear in hindsight.Nate: Yes, I am an extremely lucky person. I am super fortunate, and I will tell a lot of people, sometimes I have the ability to fall ass-backwards into success. And in this case, I am here because I was asked. 
I was asked, and I didn't really think that I was the Google type because—I don't know what I thought the Google type was, just, you know, not me. And yet, I… talked it out with some folks, a really good, good buddy of mine, and [laugh] I'll be darned, you know, next thing you know, I'm here. So, gosh, what can I say except don't limit yourself [laugh]. We do have a tendency to do that. And oh, my God, it's great to have a champion. And what I'd like to do now—now that you mention it, and it's been something that I had on my mind for a bit—is I've got to figure out how to start, you know, giving back, paying it forward, whatever phrase it is you want to use. Because—
Corey: I like, “Send the elevator back down.”
Nate: Send the elevator back down? There you go, right? If that escalator stopped, turn it back on.
Corey: Yeah, escalator; temporarily, stairs.
Nate: Yes. You know, there are tons of ways up. But you know, if you can help someone, just go ahead and do it. You'd be surprised what a little bit of kindness can do.
Corey: Well, let's tie this back to your day job for a bit, on some level. You're working on, effectively, developer tools. Who's the developer?
Nate: Who's the developer? So, there's a general sense in the industry that anyone who works in IT or anyone who writes code is a developer. Sometimes there's that very blanket statement out there. I tend to take the view that a developer is the person who writes the code. That is a developer; that's [unintelligible 00:21:52] their job title. That's the thing that they do. The folks who assist developers, the folks who keep the servers up and running, they're going to have a lot of different names. They're DevOps admins, they're platform admins, they're server admins. Whatever they are, rarely would I call them developers, necessarily. So, I get it. We try to make blanket statements, we try to talk to large groups at a time, but you wouldn't go into your local county hospital and say, “I want to talk to the dentist,” when you really mean, like, a heart surgeon. So, let's not do that, you know? We're known for our level of specificity when we discuss things in this field, so let's try to be a little more specific when we talk about the folks who do what they do. Because I came up on that ops track, and I know the type of effort that I put in, and I looked at the folks across from me and I know the kind of hours that they put in. I know all of the blood, sweat, and tears, the sleepless nights, and answering the pagers at four in the morning. So, let's just call them what they are, [laugh] right? And it's not to say that calling them a developer is an insult in any way, but it's not a flex either.
Corey: You do work at a large cloud company, so I have to assume that this is a revelation for you, but did you know that words actually mean things? I know, it's true. You wouldn't know it from a lot of the product names that wind up getting scattered throughout the world. The trophy for the worst one ever, though, is Azure DevOps, because someone I was talking to as a hiring manager once thought that they listed that as a thing they did on their resume and was about to can the resume. It's, “Wow, when your product name is so bad that it impacts other people's careers, that's kind of impressively awful.” But I have found that back when the DevOps movement was getting started, I felt a little off-put because I was an operations person; I was a systems administrator.
And suddenly, people were asking me about being a developer and what it's like. And honestly, on some level, I felt like an imposter, just because I write configuration files; I don't write code. That's very different. Code is something smart people write, and I'm bad at doing that stuff. And in the fullness of time, I'm still bad at it, but at least now I'm unenthusiastically bad at it. And, on some level, brute force also becomes a viable path forward. But it felt like it was gatekeeping, on some level, and I've always felt like the terms people use to describe what I did weren't aimed at me. I just was sort of against the edge.
Nate: Yeah. And it's a weird thing that happens around here, how we get to these points, or… or somehow there's an article that gets written and then all of a sudden, everyone's life is changed in an industry. You go from your job being, “Hey, can you rack and stack the server?” to, “Hey, I need you to write this YAML code that's going to virtually instantiate a server and also connect it to a load balancer, and we need these done globally.” It's a really weird transition that happens in life. But like you said, that's part of our job: it morphs, it changes, it grows. And that's the fun of it. We hope that these changes are actually for the better, and that they're going to make us more productive, and they're going to make our businesses thrive and do things that they couldn't do before, like maybe be more resilient. You know, you look at the number of customers—customers; I think of them as customers—who had issues because of that horrible day on 9/11, and, you know, their business goes down the tube because there wasn't an adequate DR or COOP strategy, you know? And I know I'm going way back in the wayback, but it's real. And I knew people who were affected by it.
Corey: It is. And the tide is rising. This gets back to what we were talking about, where the things that got you here won't necessarily get you there. And cloud is a huge part of that. These days, I don't need to think about load balancers, in many cases, or all of the other infrastructure pieces, because Google Cloud—among other companies as well, lots of them—has moved significantly up the stack. I mean, people are excited about Kubernetes in a whole bunch of ways, but what an awful lot of enterprises are super excited about is that suddenly, a hard drive failure doesn't mean their application goes down.
Nate: [Isn't that 00:26:24] kind of awesome?
Corey: Like, that's a transformative moment for them.
Nate: It totally is. You know, I get here and I look at the things that people are doing and I kind of go, “Wow,” right? I'm in awe. And to be able to contribute to that in some way by saying, “Hey, you know what would be cool? How about we try this feature?” is really weird, [laugh] right? It's like, “Wow, they listened to me.” But we think about what it is we're trying to do, and a lot of it, strangely enough, is not just helping people, but helping people by getting out of the way. And that is huge, right? You know, because you just want it to work, but more than it just working, you want it to be seamless. What's easier than putting your key in the ignition and turning it? Well, not having to use a key at all. So, what are those types of changes that we can bring to these different types of experiences that folks have? If you want to get your application onto a Kubernetes cluster, it shouldn't be some Herculean feat.
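For anyone who has never seen the kind of YAML Nate mentioned earlier, the config that “virtually instantiates a server” and wires it up to a load balancer, here is a rough sketch of the Kubernetes version of that task. The names and container image are hypothetical placeholders, not anything from the episode:

```yaml
# A Deployment "virtually instantiates the server": three replicas of a container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: us-docker.pkg.dev/example-project/repo/app:v1  # hypothetical image
        ports:
        - containerPort: 8080
---
# A Service of type LoadBalancer puts those replicas behind a load balancer.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```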
Corey: And running that application responsibly should not require a team of people, each making a quarter million bucks a year, just to be able to do it safely and responsibly. There's going to be a collapsing down of what you have to know in order to run these things. I mean, web servers used to be something that required a month of your life and a fair bit of attention to run. Now, it's a checkbox in a cloud console.
Nate: Yeah. And that's what we're trying to get it to, right? Why isn't everything a checkbox? Why can't you say, “Look, I wrote my app. I did the hard part”? You know, I just need to see it go somewhere. Make it go and make it stay up. And how can I do that? And also, here's a feature that we're working on. It came out recently and we want folks to try it. It's a Cloud Deploy feature that works for Cloud Run as well as it does for GKE. And I know it's going to sound super simple: it's our canary deployment method. But it's not just canary deployment; we can also tie it into parallel deployment. And so, you can have your new version of your app stood up alongside your old version of the app, we can roll it out incrementally in parallel around the world, and you can have an actual test that says, “Hey, is this working? Is it not working?” If it does, great, let's go forward. If it doesn't, let's roll back. Some of this stuff sounds like common sense, but it's been difficult to pull off. And now we're trying to do it with just a few lines of YAML. So, you know, is it as simple as it could be? Well, we're still looking at that. But the features are in there, and we're constantly looking at what we can do to iterate and figure out what the next thing is.
Corey: I really want to thank you for taking the time to speak with me. If people want to learn more, where's the best place for them to find you?
Nate: The best place to find me used to be my blog, Not Your Dad's IT. However, I've been pretty negligent there since doing this whole Google thing, so I would say just look me up on Twitter at @nathaniel_avery, or look me up on Google. You can go to a pretty cool search engine and [laugh]—
Corey: Oh, that's right. You guys have a search engine now. Good work.
Nate: That's what I hear [laugh].
Corey: Someday maybe it'll even come to Google Docs.
Nate: [laugh]. Yes, so yeah, that's where to find me. You know, just look me up at Nathaniel Avery. I think that handle works for almost everything—Twitter, LinkedIn, wherever—and reach out. If there's something you like about our DevOps tools, let me know. If there's something you hate about our DevOps tools, definitely let me know. Because the only reason we're doing this is to try and help people. And if we're not doing that, then we need to know. We need to know why it isn't working out. And trust me, I talk to these engineers every day. The thing that really keeps them moving in the morning is knowing that they're doing something to make things better for folks. Real quick, I'll close out, and I think I may have mentioned this on some other podcasts: I come from the ops world. I was that guy who had to help get a deployment out on a Friday night, and it lasted all weekend long, and you're staring at your phone at some absurd time on a Sunday night, everyone's huddled together, and you're trying to figure out, are we going to roll back or are we going to go forward? What are we going to do by Monday?
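The canary-and-parallel rollout Nate described a few moments ago is aimed at exactly this kind of weekend. As a rough sketch of the Cloud Deploy YAML he's referring to (the pipeline and target names are invented, and the exact fields may have changed since the episode aired):

```yaml
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: my-app-pipeline        # hypothetical pipeline name
serialPipeline:
  stages:
  - targetId: prod             # hypothetical Cloud Run target
    strategy:
      canary:
        runtimeConfig:
          cloudRun:
            automaticTrafficControl: true   # let Cloud Deploy shift traffic
        canaryDeployment:
          percentages: [10, 50]  # serve 10%, then 50%, before the full rollout
          verify: true           # run a verification step before advancing
```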
Corey: I don't miss those days.
Nate: Oh, oh God, no. I don't miss those days either. But you know what I do want? I took this job because I don't want anyone else to have those days. That's really what it is. We want to make sure that these tools give folks the ability to deploy safely and to deploy with confidence, and to take that level of risk out of the equation, so that folks can, you know, just get back to doing other things. You know, spend that time with your family, spend the time reading, spend that time prompting ChatGPT with questions, [laugh] whatever it is you want to do, but you shouldn't have to sit there and wonder, “Oh, my God, is my app working? And what do I do when it doesn't?”
Corey: I really want to thank you for being as generous with your time and philosophy on this. Thanks again. I've really enjoyed our conversation.
Nate: Thank you. Thank you. I've been a big fan of your work for years.
Corey: [laugh]. Nate Avery, outbound product manager at Google Cloud. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you hate this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment that you had Chat-GIPITY write for you in YAML.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

The Cloud Pod
208: Azure AI Lost in Space

The Cloud Pod

Play Episode Listen Later Apr 21, 2023 57:43


Welcome to the newest episode of The Cloud Pod podcast! Justin, Ryan, and Matthew are your hosts this week as we discuss all the latest news and announcements in the world of the cloud and AI. Do people really love Matt's Azure know-how? Can Google make Bard fit into literally everything they make? What's the latest with Azure AI and their space collaborations? Let's find out! Titles we almost went with this week: Clouds in Space, Fictional Realms of Oracles, Oh My; The Cloud Pod Streams Lambda to the Cloud. A big thanks to this week's sponsor: Foghorn Consulting provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and cloud backlogs as soon as next week.

Screaming in the Cloud
Improving the Developer Experience with Aja Hammerly

Screaming in the Cloud

Play Episode Listen Later Apr 6, 2023 33:59


Aja Hammerly, Developer Relations Manager at Google Cloud, joins Corey on Screaming in the Cloud to discuss her unexpected career journey at Google and what she's learned about improving the developer experience throughout her career. Aja and Corey discuss the importance of not only creating tools for developers that are intuitive and easy to adopt, but also catering to different learning styles. Aja describes why it's so important to respond with curiosity when a user does something seemingly random within a piece of software, and also reveals why she feels so strongly about the principle of least surprise when it comes to the developer experience.
About Aja
Aja lives in Seattle, where she's a Developer Relations Manager at Google. She's currently excited about developer experience, software supply chain security, and becoming a better manager and mentor. In her free time she enjoys skiing, kayaking, cooking, knitting, and spending long hours in the garden.
Links Referenced:
Google Cloud: http://cloud.google.com/developers
Personal Website: https://www.thagomizer.com
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I am joined today by Aja Hammerly, who's a Developer Relations Manager over at a small company called Google Cloud. Aja, thank you for joining me.
Aja: Thank you for having me. I've been looking forward to this for quite a while.
Corey: You have been at Google for, well, let's call it eons, because that's more or less how it feels when you're coming from my position of being great at getting yourself fired from various companies as a core skill. How long have you been there now? And what has your trajectory been like over that time?
Aja: So, I've been there a little over eight years. And the funny part of that is that it was intended to be a two-year gig for me. I moved from being a consulting developer—working on, you know, building out websites for other people—to Google being like, “Hey, you want to do this advocacy [unintelligible 00:01:19] relations thing?” And I'm like, “Sure.” And at the time, I'm like, there's no way I'm going to last more than two, three years in this. I hadn't really held a job much longer than that. Turns out, I like it. And then they're like, “Hey, do you want to manage people doing this job?” And I'm like, “That sounds like fun. Let's try that.” And it turns out, I like that just as much, if not more. Because I haven't picked a major in tech yet, managing people doing a bunch of different things is a fantastic way for me to keep my squirrel brain engaged in all the different topics, you know, always bouncing around. So, it's been great. Cloud's been—well, when I started, Cloud was very, very, very small back in 2014 compared to what it is now. Google Cloud is way bigger.
Corey: Google Cloud, if you take a look at its entire portfolio, I'm probably one of the only people in the world who looks at it and says, “Yeah, that seems like a reasonably small number of services,” just because I eat, sleep, and breathe in the firehose of AWS announcing every feature as a service. But let's be clear here, Google Cloud is fairly broad in terms of what it does and what it offers.
Where do you start and where do you stop? Because increasingly, the idea of, “Oh, I am responsible for talking about everything that Google Cloud does,” is—it's clearly a fantasy.
Aja: Yeah. No, there's too much for any one person to be an expert on. I could give you a list of products, but that's not interesting, quite frankly, because I would prefer that people don't think about it as a set of products. Because—
Corey: Why is this such a rare perspective to have? It drives me nuts whenever I go to a cloud conference, and okay, here's the database track, and here's the container track, and here's the storage track. It doesn't matter if I'm building Hello World, let alone anything more complicated: I have to think about all of it.
Aja: Yeah. So, I don't know why it's a rare perspective, but at least among the folks that I look up to, the folks that I consider mentors internally, we tend to think more about audiences or problems. And the problem that I care the most about—I cared about this well before Google, and Google just pays me to care about it, which is awesome—I care about developers. I one hundred percent want to help people build cool stuff, ideally efficiently, as quickly as possible. I worked at a startup, and as part of that experience, I learned that sometimes you just need to get something out quick. I wanted tools that would let me do that. When I worked in consulting, the whole gig was to get something out quick that folks could look at, folks could touch; then we could do feedback, we could iterate, we could come back. And so, I want to make developers successful and I want to get out of their way. And I've always liked tools like that as a developer; I don't want to have to read your 10,000-page manual in order to learn your product. So, when I come to Google Cloud, I'm focused on the products that help developers build stuff quickly. And by developers, I mean developers: the people who are hands-on-keyboard with Python, with Go, with Java, building out features for their employer or, you know—
Corey: What about really crappy bash? Does that count?
Aja: Sure. If you're going to build some sort of application, a really crappy bash: awesome.
Corey: You'd be surprised. My primary development stack usually is a combination of brute force and enthusiasm.
Aja: Yeah. Honestly, there are days that I feel that way, too. And I was working on some demo stuff over the weekend and I'm like, “Well, I could sit down and actually read this code, or I could just run it, fix the first bug, and then run it again and fix the next bug. Yeah, let's do it that way.” Brute force is fine.
Corey: I think that's called iterative development.
Aja: Yeah, iterative development and/or brute force. Whatever. It works. And, you know, if people want to build cool stuff, cool. Let's help them do that. That's what I get to do every day: figure out how I can make it easier for developers to build cool stuff.
Corey: The thing that keeps confusing me, for lack of a better term, is that I see a bunch of companies talking in similar directions of, “Yeah, we want to talk to developers and we want to meet them across the stack about the problems they're having.” “Great, so what's your answer?” And then they immediately devolve it into industry verticals. As if the finance company is going to have problems that the healthcare company could never fathom happening.
It's—you get that you two look an awful lot alike, right, and things that work for one of you are going to map to at least 80, if not 90, percent of what the other is doing? But nope, nope: completely separate audiences, completely separate approaches. And I find myself increasingly skeptical about that model.
Aja: Yes. I think—I see that too. I have sometimes behaved that way. And I think it's because… it's a combination of factors. One is people want to believe that their problems are unique and special. I've worked in edtech, I've worked on real estate stuff, I've worked in… a lot of different fields. As I said, I haven't picked a major over my career. I've done a lot of different jobs, worked in a lot of different fields. I have passing knowledge of the electrical industry from an early, early job. And yeah, it's all code. At the end of the day, it's all code. But people like to believe that their problems are unique and special because they want to be unique and special. And cool. People can be unique and special. I am there to support that. I also think that different altitudes see the problems differently. So, if you're someone fairly high up at a healthcare company versus a finance company, you're going to be thinking about things like your different regulatory requirements, your different security requirements. And some of those are going to be imposed on you by law, some of those are going to be imposed on you by your company policies—ethics, I would hope. But if you're the actual developer, I need to store some stuff in a database. Like, down at the lower level where you're actually writing code, getting your hands on the keyboard, getting dirty, the problems all start looking roughly the same after a while. And so, you need different people to tell those stories to the different audiences, because the higher-level folks thinking about regulatory requirements or thinking about efficiencies are going to just have a different perspective than the folks I like to go chat with, who are the ones banging out features.
Corey: I'll take it one step further. Whenever I'm building something and I'm Googling around and talking to people in the community about how to do a certain thing and everyone looks at me like I've lost it, that is a great early warning sign that maybe I'm not doing something the right way. Now, yes, the ultimate product that I'm developing at the end of the day, maybe that's going to be different and differentiated—or at least funnier in my case—but the idea of, well, then I need to write that value back to a database, and people look at me like, “Writing data to a database? Why would you do such a thing?” Like, that's an indication that I might be somewhat misaligned somewhere. The other failure mode, of course, is when you start Googling around—because that's what we do when we're trying to figure out how to do something with a given service—and the only people ever talking about that service are the company who built that thing. That's also not a great sign. There, at least for my purposes, needs to be a critical mass of community around a particular product where I can at least be somewhat reassured that I'm not going to be out twisting in the wind as the only customer of this thing.
Aja: Yeah. No, a hundred percent agree, as someone who, in past lives, evaluated, you know, which APIs, which products, which companies we were going to work with. Having really great developer docs, having really great materials was always important.
And I don't tend to read docs, so when I say materials, I like stuff that's interactive, because I'm just going to dive in and fix the issues later. That's just how my brain works. But you know, people are different. Everyone has different learning preferences. But if there is a community, that means that you have lots of ways to get help. And, you know, super concretely: I'm not a Kubernetes expert. I did some talks on it back in 2015 when it was brand new and shiny; I can use it and understand it, but I'm not an expert. I have other people on my team who have had the time to go deep. When I need help with Kubernetes, even though I work at Google, I've actually gone to my local community. I go to my local DevOps Slack, or I go to my local Kubernetes groups and stuff to get help. And I like that because it gives me all those different perspectives. I also know that if I'm asking a question that no one understands—and I've had that experience [laugh] numerous times—either I'm doing something wrong, or—the actual thing that I've found more often—I'm explaining it in words that people don't understand. And that's always a challenge: figuring out the right words to go search for, the right phrasing of something so that everyone else understands the terms you're using. And that's a huge challenge, especially for folks that don't speak English as their primary language or their first language. Because we have lots of different ways to say the same thing, especially when it comes to code.
Corey: I've noticed that. There are almost too many ways to do certain things—and they're all terrible in different ways, let's be clear. But that does mean that whenever I'm trying to find something that's 90% done on GitHub or whatnot, I will often find something that fits pretty well, and it's, “Well, I guess I'm learning TypeScript today because that's”—
Aja: Yep.
Corey: —“what it's written in,” versus building it in the language I have a bit more familiarity with, like, you know, crappy bash.
Aja: Yep. Nope, I think that's a problem that anyone who's been developing on a deadline or, you know, spending a lot of time doing proof-of-concept stuff is super familiar with. And I think sometimes folks who haven't worked in those environments, at least not recently, forget that that's our reality. Like, “Cool. Okay, I guess today I'm learning Elastic,” was definitely a day I had when I was consulting. Or, “Cool. Okay. Swift is new”—because that's how long ago that was—“I guess we're all learning Swift this afternoon.”
Corey: I've been banging on for a fair bit now about the value of developer experience from my point of view. But given that you work with developers all the time, I'm betting you have a more evolved position on it than I do, which distills down to: the better the developer experience, the happier I am. Which is generally not something you can measure super effectively. Where do you stand on the topic?
Aja: So, this is one of my passion points. I feel very strongly that tools should fit the workflows that developers have; developers shouldn't alter themselves to work toward their tools. I also think there's kind of a misunderstanding of the nature of developer experience that I've seen from a lot of different companies. Developer experience is not actually tools. Developer experience is the experience you as a developer have while using those tools. So, APIs: I like things that don't have surprises; I like things to get out of my way.
I know that we joke about there being 9000 ways to run containers, or, you know, five different ways to do this other thing, but if that means it's faster to get something done, and most of those ways are equally bad or equally good, I'm okay with it, because it gets out of my way and lets me ship the thing I want to ship. I enjoy coding; I think computers are rad, but what I enjoy more is having something finished that I can show to other people, that I can use, that I can make better. So, one of the things I feel super strongly about with developer experience is the principle of least surprise. I was a Rubyist for years and years and years. Haven't written a lot of Ruby the last two, three years—management will do that to you—but… I loved that once you understood some of the underlying concepts behind Ruby, stuff generally worked the way you expected. I know a lot of people find the very nature of Ruby surprising, but for those of us who learned it, stuff generally worked the way I expected. So, I like it when APIs do that. I like it when it's super easy to guess. Consistent naming schemes, you know? If you're going to call the way to list a set of services ‘list' in one place, don't call it ‘directory' in another. Keep things consistent. I like it when the APIs the cloud companies provide—and I've had many thoughts about all of the big cloud companies on this—fit the language. If you're making me write TypeScript like C because your APIs are really just designed by C programmers and you've loosely skinned them, that's not a great experience, it's not going to make me happy, and it's going to slow me down. And quite frankly, my tools should speed me up, not slow me down. And that's kind of the underlying theme behind all of my feelings about developer experience: I don't want to be angry when I'm writing code unless I'm angry at myself because I can't stop writing bugs.
Corey: I don't really want to bias for that one either, personally.
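A tiny sketch of the naming consistency Aja is describing, written in Go with invented types; the point is the symmetry across resources, not any specific cloud API:

```go
package example

import "context"

// Consistent: every resource exposes the same verbs, so once you've
// learned one client, you can guess the rest of the surface area.
type ServiceClient interface {
	List(ctx context.Context) ([]string, error)
	Get(ctx context.Context, name string) (string, error)
}

type JobClient interface {
	List(ctx context.Context) ([]string, error) // not Directory(), not Enumerate()
	Get(ctx context.Context, name string) (string, error)
}
```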
Aja: Yeah. And then the other one is, I don't want my tools to be something that I have to learn as a thing. I don't want there to have to be a multi-week experience of learning this particular API. Because that is interesting, potentially, but I'm not being paid to learn an API; I'm being paid to get something done. So, all of the learning of the API needs to be in service of getting something done, and it should go as quickly as possible. Stuff should just work the way I expect it to. We're never going to fully get there. This I acknowledge. No company is ever going to get that right no matter where I work because, turns out, everyone's brains are different. We all have different expectations. But we can get closer. We can talk to folks, we can do UX studies. Everyone thinks about UI and UX, and design is very much focused on the visual. And one of the things I've learned since I've had the opportunity to hang out with some really amazing UX folks at Google—because big companies have advantages like that; you have lots of people doing UX—is that they can actually help us with our command-line interfaces, they can help us with how we name things in an API. They can do studies on that and turn, you know, “It feels good,” into numbers. And that is fascinating to me, and I think something that a lot of developers who are building tools for other developers don't realize is actually up there as an option. I spend a lot of time reading UX studies on developer experience. Managers working at big companies get to have access to data like that going back years. And I've spent a lot of time reading about this because I want to understand how we turn “feels good” into something that we can develop against, that we can design against, and that we can measure.
Corey: One of the things that I've always viewed as something of a… a smell, or a sign that ‘Here Be Dragons,' is when I'm looking for a tool to solve a problem and there's a vendor in the space. Great, awesome, terrific: someone has devoted a lot of energy and effort to it. I want the problem solved; I don't necessarily need to do it for free or cheap. But I'm looking through their website and they talk about how awesome their training programs are, and here's how you can sign up for a four-day course, et cetera, et cetera. That feels to me like, in most cases, someone has bungled the interface. If I need to spend weeks on end learning how a particular tool works in order to be effective with it, on some level, you reach a point, fairly quickly for anything small, where the cure is worse than the disease.
Aja: Yep. This is an interesting thing for me because my personal feelings are very similar to yours. I don't want to take a class. Like, if I have to take a class, we have failed. I don't really want to read extensive documentation. I want to get in, get dirty, try it, see, you know, watch the errors, come back and learn from the errors, that kind of thing. If there's code to read and it's a language I know, I will actually often go read code as opposed to reading docs, which… is weird. The interesting thing to me is that, as I've managed folks, as I've, you know, spent time working with customers, working with folks who I think would benefit from some of Google Cloud's products, there are some folks who really, really want that formal training; they want that multi-day course before they dig in. And so, while in the past I definitely would have agreed with you—if it's the only thing, maybe—if it's one of many different ways to get started, I just keep telling myself, “Hey, that's how someone else needs to learn this.” It isn't my preference, but my preference isn't necessarily better. It's just, this is the brain I got and the tools that came with it. And it doesn't do well with four days in a classroom learning all of the intricacies of something, because I need to learn this stuff in context, otherwise it doesn't stick. Whereas I have people that work for me, I've had people who I've worked with, who are like, “No, I actually need to go read the book.” And I'm like, “Let's make sure that there's both a book.”
Corey: Everyone learns differently.
Aja: Yeah. I'm just constantly reminding myself, both as a manager and also as someone who works in developer relations, that all of the above is the correct option for how are we going to teach this? How are we going to help people? We really need to offer as much as possible all of the above, because we need to be able to reach everyone in a way that works for them.
Corey: It's the height of hubris to believe that everyone thinks, processes, learns, et cetera, the same way that you do. This is a weird confession for someone who hosts a podcast: I don't learn very well by listening to podcasts. I find that when I'm trying to absorb something, if I can read it, it sticks with me in a way that listening to something or even watching a video doesn't.
I take most of my information and learn best through hearing things. So, while I don't particularly like watching video, relatively often, I'll actually have video if I'm just doing like email or something running in the background and I'm listening to the audio as I'm learning the concepts. I adore learning stuff from podcasts, I love audiobooks, but I don't retain stuff as well when I read it. And it's just because, you know, human beings are interesting and weird and not consistent, in all sorts of delightful and confusing ways.Which, you know, as an engineer sometimes drives me nuts because I really wish there was one right way to do stuff that worked for everyone, but there just isn't. There are all sorts of interesting problems. And just like there are multiple valid ways to solve problems, there are multiple valid ways to learn, and we have to support all of them. And we have to support engineers with all of those styles too. People often will say, “Oh, sure. There's lots of learning, different learning styles, but you know, most engineers are like X.” No. There is no ‘most engineers.'Corey: Early on in my career, one of the things I noticed—in myself as well, let's be clear here, I was no saint—that, oh, if people don't learn the way that I learned, then clearly they're deficient in some way. Of course that's not true. Everyone learns differently. And that, among other things, was part of the reason that I stopped being as dismissive as I was about certifications, for example, or signing up for a planned classroom experience. There is tremendous value to folks who learn well from that type of structured learning.I am just barely contained chaos. And for whatever reason, that doesn't resonate with me in the same way. If anything, I'm the one that's broken. The trick is, is making sure that when you're trying to drive adoption, no matter what method people learn best from, you need to be there with something that approximates that. One area that I think resonates with something you said earlier is this idea that the best way for me to learn something, at least is to sit down and build something with it. Bar none, that's what I actually want to experience. And that is only slightly informed by the unfortunate reality that I've been through too many cycles of an exec in a keynote getting on stage and making grandiose promises that the service does not backup.Aja: Yep. And I actually do have a bias here that I will own. I don't believe in anything until I can touch it. And by ‘touch it,' I mean, use it. And that also includes I don't believe in my own learning or the learning of others until I can see it practically applied.And so, even when I have folks on my team who are like, “Hey, I want to go read a book, take a class,” I'm like, “Cool. What else are you going to do with that? How are you going to know that you can actually take what you learned and apply it to a novel situation?” And this has been based on mentors I had early in my career who I'm like, “Hey, I just read this book.” And they're like, “That's awesome. Can you write anything with what you learned?”And I'm like, “Yes. Let me do that and prove it to myself.” So, I do have a bias there. I also have a bias, having worked in the industry for 20-plus years now, that a lot of people say a lot of things that are either theoretically true or true through, you know, a happy path lens. 
And I started my career as a tester and compu—I always joke computers run in fear of me because if there's a way to cause something to error out in a confusing and unknown way, I will find it. I will find it on accident. And when I can't find it on accident, I will find it on purpose.So, that's the other reason I don't believe stuff until I touch it. It doesn't matter if it's at a keynote, doesn't matter if it's a blog post, I want to know that this works beyond that happy case that you just showed me. And part of this is also that I've built some of those keynote demos and I know that they're explicitly designed so that we can fit in the timeframe allowed to avoid any possible dragons that might be lurking in the background. So, I always go get dirty with things, new products, new features. It's one of the things I actually love about my job is I get to try stuff before anyone else does.And I'm like, “Hey, so, um… I did this thing. You probably didn't expect anyone to do this thing, but I did this thing. Can we talk about whether this thing that I did is actually a valid use case? Because it made sense to me, but you know, I might have been thinking about this backwards, upside down, in purple, so let's back the truck up and have a discussion.”Corey: Yeah, I get to moonlight occasionally as something that looks vaguely like an analyst at a variety of different companies. And as a part of that, I'm often kicking the tires on something that they're going to be releasing soon. And a very common failure mode is that, for obvious reasons, no one has ever really looked at this thing from the perspective of I've never seen this before or heard of this before. Let me approach this as someone who's learning about it for the first time. The documentation is always treated as an afterthought at those stages where it's, “Oh yeah, just spin it up and do it. And you do the thing that we all know about, right?” “Well, okay, assume I don't have that shared understanding. What's the experience?” And, “Oh.” Yeah, if I'm not on the path of a few pre-planned test cases, then everything falls off pretty quickly. I think I share that somewhat special ability to cause chaos and destruction to all about me [laugh] when I start trying to do something in good faith on the computer.Aja: Yeah. No, it's both a blessing and a curse. It's really annoying when like, I managed to brick my work laptop on the morning that I have, you know, a super important talk and I call up, you know, internal tech support at Google and they're like, “You did what, and how?” But it's also great because I know that… I know that I get to—because I started my career in tests working at other companies, I've always done some informal testing no matter where I've worked, everything I find we at least know about, even if we don't have time to fix it. We at least know about it, so if someone else runs into it, we can at least help them untangle whatever crazy stuff they did.And I'm also just not afraid of breaking computers either, which means that I'm very willing to go off happy paths. If I see a tutorial that's close, you know, if all of the steps that work, and I'll guess on the others. And that's a thing that I don't actually see a ton of folks being always willing to do because they're afraid of breaking it. And I'm like, “It's software.”Corey: And a lot of products are designed though, that once you deviate from the happy path, well, now you've broken it and you get to keep all the pieces. 
There's little attention paid towards, okay, now you've done something else and you're bringing something back into the happy path. It feels like if you haven't been here for every step of the way, well, your problem now. I have work to do. Go away kids, you're bothering me.Aja: Yeah, I've seen that. And I've seen that open-source frameworks, too, when people—when I talk about, you know, deviating from the happy path—and this will date me—using multiple databases with Rails was one of the ones that I ran into numerous times. Just was not designed for that in the beginning. Did not work. There was also some easy security stuff, ages and ages ago, that you often wanted to do, but was not at that point integrated into the framework, so it was clunky.And so, anyone would come to, like, a Ruby meetup or something like, “Hey, I want to use three databases with my Rails application,” we'd be like, “So, you can… but you may not actually want to do it that way. Can we interest you in some microservices so that you can go one-to-one?” And that wasn't always the right choice. I worked on an app for years that had multiple databases in Rails, one was a data warehouse, one was our production database. And it was clunky.And eventually, you know, the Rails community got better. Eventually, people do improve, but people are weird. They do weird things. Like, and I don't think people truly understand that. One of my jobs at various points was I was working in education tech and I was working on an application for kindergarteners.And I don't have kids, but I think kindergarteners are just [unintelligible 00:24:44]. And until you see five-year-olds use software, I don't think people get a true appreciation for how random human beings can actually be when given a textbox or when given a mouse. And, like, we like to think that, you know, engineers and adults are better. We're not. We just, you know, have a different set of five-year-old tools available to us.So, we do have to at least acknowledge that people are going to go do weird stuff. And some of that weird stuff probably makes sense in the context they're living in, and so, the best step is not to say, “Hey, stop doing weird stuff.” The best thing to then say is, “Okay, why did you do it that way?” Because everyone has good reasons for the decisions they make most of the time. And understanding those is important.Corey: Yeah. It's very rare—not entirely unheard of, but at least rare—that when someone shows up and says, “Okay, I'm making a bunch of choices today. What are the worst ones I can possibly make that I'm going to be tripping over for the next five years and leave is my eternal legacy to every engineer who ever works at this company after I do?” But it happens all the time, for better or worse.Aja: Yeah.Corey: Never intentional, but it always hits us.Aja: Yeah. Well, one of the things that I learned in the last-ten ish years, and one of the things that I tried to bring to all of my developer relations, all my developer education work, is, “It made sense at the time.” Now, it may have been that they made a assumption six years ago, that led them down the path of chaos and sadness and now that they're deep into this, they're going to have to back up to that decision six years ago and undo it. But based on the knowledge they had, the assumptions they were making—which may or may not have been true, but you know, were likely made in good faith—they're doing their best. 
And even when that's not true, I haven't found a situation where, assuming that with regards to technical decisions is harmful.Assume that people are relatively intelligent. They may not have the time to go learn all of your tools, the intricacies and use things exactly the way that you want them to be used because time is a limited resource, but assume that they're relatively intelligent and they're doing their best. And then try to understand why. What assumptions, what skills, what previous knowledge led them down this crazy path? And you know, then you can start having a conversation about okay, well, what should the tools do? How should the tools work together? Just because I wouldn't make that decision doesn't mean that their version of it is necessarily bad. It may not be the most efficient way to get stuff done, but if it works, eh, okay.Corey: So, as we wind up coming towards the end of this episode, one thing that I want to explore a little bit is, you've been with Google Cloud for eight years now. How have you seen the organization evolve during that time? Because from my perspective, back then it was oh, “Google has a cloud? Yeah, I guess they do.” It's a very different story, but all of my perspective is external. How have you seen it?Aja: Oh, that's an interesting question. And I'll caveat that appropriately with I only see the parts I see. One of the hard parts of big companies is, I don't actually dig in on some of the areas particularly deeply. I don't go deep on data analytics, I don't go deep on AI/ML. And I will also [laugh] own the fact that when I started, I'm like, “Oh, Google has a cloud? Huh. Okay, yeah, sure, I'll work on that.”I didn't even know the list of products my first day. I knew about App Engine and I knew that it didn't work with my preferred languages so I had a sad. Some of the things that I've seen. I've seen a real focus on how we can help people with real big problems. I've seen a real focus on listening to customers that I really like.I've learned a lot of techniques that we've been shared out, things like empathy sessions, friction logging. If you're not with the community of developer relations about how we make sure that, as an engineering team, we're thinking about real customer problems. I've seen a lot of maturing thoughts around how we maintain products; how we make sure that we've got updates where we need them, as much as we can; how we talk about our products; how we listen to customers and take, you know, direct feature requests from them.The other big thing is, I've just seen us grow. And that's the big thing is that there's just so many more people than when I started. And I've never worked at a company this big before and just getting my head around the number of people who are actively trying to make cloud better, and spending every day doing their best to improve the products, to add the features that are missing, to make sure that we're keeping stuff up to date where we can, it's kind of mind-boggling. Like, when I go look at the org chart, I'm like, “Wait, there are how many people working on what?” And that in and of itself is a story because that, to me at least shows that we care about getting it right. Google cares about getting it right.I'm not Google, of course, but I feel like from the inside, I can say that Google cares about getting it right as much as we can. And you know, sometimes it's not a hundred percent what people want, which is why we iterate. 
But we've also had a couple of things that I'm particularly happy with. Cloud Run, I think, landed pretty well.
Corey: I'd say that I would agree with what you're saying. I've had nothing but positive experiences when I've been building admittedly very small-scale shitposting-style stuff on top of Google Cloud. There have been times where the biggest criticism I have is, “It's not the particular flavor of broken that I'm used to coming from AWS-land.” But that's hardly a fair criticism. I think that, by and large, it is a superior platform from the perspective of developer experience. And people don't like it when I say that, and they often like it even less when I say, “And thus, it has better security than something that does not have better user experience, because simplicity is everything in that space.” But it's true. It is foundationally and fundamentally true.
Aja: I agree with you. Obviously, it's my employer. But I do think you actually were onto something interesting with “my particular flavor of broken.” I've talked to a lot of folks who are migrating, and sometimes they struggle because there are particular magic incantations or other things that they learned to work with a different tool. It's the same thing as when you're learning a new language—a new programming language or a new framework. You're like, “Wait, I don't have to do this thing? But I'm really good at doing that thing.” And so, I do think there is, to some degree—nothing's perfect, and it happens to be, you know, hard for some folks. And I think some folks resist the better developer experience because it isn't what they're used to. And that's okay, too. Like, if I was a developer, I wouldn't want to have to relearn everything from scratch, so I get that, and I think that that is a valid piece of feedback. [unintelligible 00:31:22] make it familiar to folks coming from other clouds—we're working on it. There's stuff coming out of DevRel. There are other things that we do to try to make it easier. But no, I do think—and I'm very grateful I get to work with a lot of teams to do this—we want to make developers like working with Google Cloud. I want to make developers like working with Google Cloud. Like, at the end of the day, if I had to say the most important thing for me, it's that I want developers to enjoy their time using Google Cloud to get other stuff done. I don't need to live in a world of people who are like, “You know, I really just want to go spend some time on Google Cloud today,” but I want it to be something that they enjoy using, or that at least gets out of their way so they can do the stuff they actually want to do: you know, add features, build shitposting websites, whatever it ends up being.
Corey: As someone who does an awful lot of that, thanks. It's appreciated. I really want to thank you for spending so much time talking to me. If people want to learn more, where's the best place to find you?
Aja: Oh. The best place to find me right now is www.thagomizer.com. Thagomizer is the spiky part at the end—
Corey: Of a Stegosaurus.
Aja: —of a Stegosaurus.
Corey: Yes.
Aja: It is. That is my website, and it has my most recent socials, et cetera, on it. That's also where I theoretically blog, although it's been about a year.
I've got, as I mentioned before the show, several blog posts three-quarters of the way done that I'm hopefully going to try to get out over the next couple of weeks… on various topics.
Corey: I have a pile of those myself, that for some reason never quite end up happening when you hope they will.
Aja: Yeah, exactly.
Corey: And we'll, of course, put links to all of that in the [show notes 00:32:47]. Thank you so much for being so generous with explaining your point of view. I appreciate it.
Aja: Yeah. And thank you for having me. This was lovely.
Corey: Likewise. Aja Hammerly, Developer Relations Manager at Google Cloud. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment telling me exactly which four-week course I need to sign up for to understand that comment.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Screaming in the Cloud
The Non-Magical Approach to Cloud-Based Development with Chen Goldberg

Screaming in the Cloud

Play Episode Listen Later Nov 15, 2022 40:13


About Chen
Chen Goldberg is GM and Vice President of Engineering at Google Cloud, where she leads the Cloud Runtimes (CR) product area, helping customers deliver greater value, effortlessly. The CR portfolio includes both serverless and Kubernetes-based platforms on Google Cloud, private cloud, and other public clouds. Chen is a strong advocate for customer empathy and for building products and solutions that matter. Chen has been core to Google Cloud's open core vision since she joined the company six years ago. During that time, she has led her team to focus on helping development teams increase their agility and modernize workloads. Prior to joining Google, Chen wore different hats in the tech industry, including leadership positions in IT organizations, SI teams, and software product development, contributing to Chen's broad enterprise perspective. She enjoys mentoring IT talent both in and outside of Google. Chen lives in Mountain View, California, with her husband and three kids. Outside of work she enjoys hiking and baking.
Links Referenced:
Twitter: https://twitter.com/GoldbergChen
LinkedIn: https://www.linkedin.com/in/goldbergchen/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: Forget everything you know about SSH and try Tailscale. Imagine if you didn't need to manage PKI or rotate SSH keys every time someone leaves. That'd be pretty sweet, wouldn't it? With Tailscale SSH, you can do exactly that. Tailscale gives each server and user device a node key to connect to its VPN, and it uses the same node key to authorize and authenticate SSH. Basically, you're SSHing the same way you manage access to your app. What's the benefit here? Built-in key rotation, permissions as code, connectivity between any two devices, reduced latency, and there's a lot more, but there's a time limit here. You can also ask users to reauthenticate for that extra bit of security. Sounds expensive? Nope, I wish it were. Tailscale is completely free for personal use on up to 20 devices. To learn more, visit snark.cloud/tailscale. Again, that's snark.cloud/tailscale.
Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. When I get bored and the power goes out, I find myself staring at the ceiling, figuring out how best to pick fights with people on the internet about Kubernetes. Because, well, I'm basically sad and have a growing collection of personality issues. My guest today is probably one of the best people to have those arguments with. Chen Goldberg is the General Manager of Cloud Runtimes and VP of Engineering at Google Cloud. Chen, thank you for joining me today.
Chen: Thank you so much, Corey, for having me.
Corey: So, Google has been doing a lot of very interesting things in the cloud, and the more astute listener will realize that interesting is not always necessarily a compliment. But from where I sit, I am deeply invested in the idea of a future where we do not have a cloud monoculture. As I've often said, I want, “What cloud should I build something on in five to ten years?” to be a hard question to answer, and not just because everything is terrible.
I think that Google Cloud is absolutely a bright light in the cloud ecosystem and has been for a while, particularly with this emphasis around developer experience. All of that said, Google Cloud is sort of a big, unknowable place, at least from the outside. What is your area of responsibility? Where do you start? Where do you stop? In other words, what can I blame you for?
Chen: Oh, you can blame me for a lot of things if you want to. I [laugh] might not agree with that, but that's—
Corey: We strive for accuracy in these things, though.
Chen: But that's fine. Well, first of all, I joined Google about seven years ago to lead the Kubernetes and GKE team, and ever since, I've continued in the same area. So, that evolved, of course: Kubernetes and Google Kubernetes Engine, and leading our hybrid and multi-cloud strategy as well, with technologies like Anthos. And now I'm responsible for the entire container runtime, which includes Kubernetes and the serverless solutions.
Corey: A while back, I, in fairly typical sarcastic form, wound up doing a whole inadvertent start of a meme where I joked about there being 17 ways to run containers on AWS. And then as that caught on, I wound up listing out 17 services you could use to do that. A few months went past and then I published a sequel of 17 more services you can use to run Kubernetes. And while that was admittedly tongue-in-cheek, it does lead to an interesting question that's ecosystem-wide. If I look at Google Cloud, I have Cloud Run, I have GKE, I have GCE if I want to do some work myself. It feels like more and more services are supporting Docker in a variety of different ways. How should customers and/or people like me—though I am sort of a customer as well, since I do pay you folks every month—how should we think about containers and the services in which to run them?
Chen: First of all, I think there's a lot of credit that needs to go to Docker for making containers approachable. And so, Google has been running containers forever. Everything within Google is running on containers—even our VMs, even our cloud is running on containers—but what Docker did was create a packaging mechanism to improve developer velocity. So, that on its own is great. And one of the things, by the way, that I love about Google Cloud's approach to containers and Docker is that, yes, you can take your Docker container and run it anywhere. And it's actually really important to ensure what we call interoperability, or a low barrier to entry to a new technology. So, I can take my Docker container, I can move it from one platform to another, and so on. So, that's just to start with, on containers. Between the different solutions: first of all, I'm all about managed services. You are right, there are many ways to run Kubernetes. I'm taking a lot of pride—
Corey: The best way is always to have someone else run it for you. Problem solved. Great, the best kind of problems are always someone else's.
Chen: Yes. And I'm taking a lot of pride in what our team is doing with Kubernetes. I mean, we've been working on that for so long. And it's something that, you know—we coined the term, I think back in 2016—there is a ‘success disaster,' but there's also what we call ‘sustainable success': thinking about how to set ourselves up for success and scale. Very proud of that service. That said, not everybody, and not all your workloads, need the flexibility that Kubernetes and its whole ecosystem give you. So, if you're starting with containers for the first time, you should start with Cloud Run. It's the easiest way to run your containers. That's one. If you are already in love with Kubernetes, we won't take it away from you. Start with GKE. Okay [laugh]? Go all in. Okay, we are all in loving Kubernetes as well. But what my team and I are working on is to make sure that those will work really well together. And we actually see a lot of customers do that.
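For anyone who wants to test the “easiest way” claim, getting a container onto Cloud Run is roughly a two-command affair. A minimal sketch, where the project, service, and image names are placeholders rather than anything from the episode:

```sh
# Build the container image with Cloud Build and push it to the registry.
gcloud builds submit --tag gcr.io/example-project/hello

# Deploy that image to Cloud Run as a public, autoscaling service.
gcloud run deploy hello \
  --image gcr.io/example-project/hello \
  --region us-central1 \
  --allow-unauthenticated
```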
It's the easiest way to run your containers. That's one. If you are already in love with Kubernetes, we won't take it away from you. Start with GKE. Okay [laugh]? Go all-in. Okay, we are all in loving Kubernetes as well. But what my team and I are working on is to make sure that those will work really well together. And we actually see a lot of customers do that.Corey: I'd like to go back a little bit in history to the rise of Docker. I agree with you it was transformative, but containers had been around in various forms—depending upon how you want to define it—dating back to the '70s with logical partitions on mainframes. Well, is that a container? Is it not? Well, sort of. We'll assume yes for the sake of argument.The revelation that I found from Docker was the developer experience, start to finish. Suddenly, it was a couple commands and you were just working, where previously it had taken tremendous amounts of time and energy to get containers working in that same context. And I don't even know today whether or not the right way to contextualize containers is as sort of a lite version of a VM, as a packaging format, as a number of other things that you could reasonably call it. How do you think about containers?Chen: So, I'm going to do, first of all, a small [unintelligible 00:06:31]. I actually started my career as a system mainframe engineer—Corey: Hmm.Chen: And I will share that when you know, I've learned Kubernetes, I'm like, “Huh, we already have done all of that, in orchestration, in workload management on mainframe,” just to the side. The way I think about containers is as a—two things: one, it is a packaging of an application, but the other thing which is also critical is the decoupling between your application and the OS. So, having that kind of abstraction and allowing you to portable and move it between environments. So, those are the two things that are when I think about containers. And what technologies like Kubernetes and serverless gives on top of that is that manageability and making sure that we take care of everything else that is needed for you to run your application.Corey: I've been, how do I put this, getting some grief over the past few years, in the best ways possible, around a almost off-the-cuff prediction that I made, which was that in five years, which is now a lot closer to two, basically, nobody is going to care about Kubernetes. And I could have phrased that slightly more directly because people think I was trying to say, “Oh, Kubernetes is just hype. It's going to go away. Nobody's going to worry about it anymore.” And I think that is a wildly inaccurate prediction.My argument is that people are not going to have to think about it in the same way that they are today. Today, if I go out and want to go back to my days of running production services in anger—and by ‘anger,' I of course mean in production—then it would be difficult for me to find a role that did not at least touch upon Kubernetes. But people who can work with that technology effectively are in high demand and they tend to be expensive, not to mention then thinking about all of the intricacies and complexities that Kubernetes brings to the foreground, that is what doesn't feel sustainable to me. The idea that it's going to have to collapse down into something else is, by necessity, going to have to emerge. How are you seeing that play out? And also, feel free to disagree with the prediction. 
I am thrilled to wind up being told that I'm wrong; it's how I learn the most.

Chen: I don't know if I agree with the time horizon of when that will happen, but I actually think it's a failure on us if that won't be the truth, that the majority of people will not need to know about Kubernetes and its internals. And you know, we keep saying that, like, hey, we need to make it more, like, boring, and easy, and I've just said like, "Hey, you should use managed." And we have lots of customers that say that they're just using GKE and it scales on their behalf and they don't need to do anything for that and it's just like magic. But from a technology perspective, there is still a way to go until we can make that disappear.

And there will be two things that will push us in that direction. One is—you mentioned this as well—the talent shortage is real. All the customers that I speak with, even if they can find those great people that are experts, there are actually more interesting things for them to work on, okay? You don't need to take, like, all the people in your organization and put them on building the infrastructure. You don't care about that. You want to build innovation and promote your business.

So, that's one. The second thing is that I do expect that the technology will continue to evolve and our managed solutions will be better and better. So hopefully, with these two things happening together, people will not care that what's under the hood is Kubernetes. Or maybe not even, right? I don't know exactly how things will evolve.
One, it was really hard to evolve the platform to allow new technologies and innovation, and second thing, from a developer perspective, it was like a black box. So, the developers team that people were—the other development teams couldn't really troubleshoot environment; they were not empowered to make decisions or [unintelligible 00:12:29] in the platform. And you know, when it was just started with Kubernetes—by the way, beginning, it only supported 100 nodes, and then 1000 nodes. Okay, it was actually not for scale; it actually solved those two problems, which I'm—this is where I spend most of my time.So, the first one, we don't want magic, okay? To be clear on, like, what's happening, I want to make sure that things are consistent and I can get the right observability. So, that's one. The second thing is that we invested so much in the extensibility an environment that it's, I wouldn't say it's easy, but it's doable to evolve Kubernetes. You can change the models, you can extend it you can—there is an ecosystem.And you know, when we were building it, I remember I used to tell my team, there won't be a Kubernetes 2.0. Which is for a developer, it's [laugh] frightening. But if you think about it and you prepare for that, you're like, “Huh. Okay, what does that mean with how I build my APIs? What does that mean of how we build a system?” So, that was one. The second thing I keep telling my team, “Please don't get too attached to your code because if it will still be there in 5, 10 years, we did something wrong.”And you can see areas within Kubernetes, again, all the extensions. I'm very proud of all the interfaces that we've built, but let's take networking. This keeps to evolve all the time on the API and the surface area that allows us to introduce new technologies. I love it. So, those are the two things that have nothing to do with scale, are unique to Kubernetes, and I think are very empowering, and are critical for the success.Corey: One thing that you said that resonates most deeply with me is the idea that you don't want there to be magic, where I just hand it to this thing and it runs it as if by magic. Because, again, we've all run things in anger in production, and what happens when the magic breaks? When you're sitting around scratching your head with no idea how it starts or how it stops, that is scary. I mean, I recently wound up re-implementing Google Cloud Distinguished Engineer Kelsey Hightower's “Kubernetes the Hard Way” because he gave a terrific tutorial that I ran through in about 45 minutes on top of Google Cloud. It's like, “All right, how do I make this harder?”And the answer is to do it on AWS, re-implement it there. And my experiment there can be found at kubernetesthemuchharderway.com because I have a vanity domain problem. And it taught me he an awful lot, but one of the challenges I had as I went through that process was, at one point, the nodes were not registering with the controller.And I ran out of time that day and turned everything off—because surprise bills are kind of what I spend my time worrying about—turn it on the next morning to continue and then it just worked. And that was sort of the spidey sense tingling moment of, “Okay, something wasn't working and now it is, and I don't understand why. But I just rebooted it and it started working.” Which is terrifying in the context of a production service. 
It was understandable—kind of—and I think that's the sort of thing that you understand a lot better, the more you work with it in production, but a counterargument to that is—and I've talked about it on this show before—for this podcast, I wind up having sponsors from time to time, who want to give me fairly complicated links to go check them out, so I have the snark.cloud URL redirector.That's running as a production service on top of Google Cloud Run. It took me half an hour to get that thing up and running; I haven't had to think about it since, aside from a three-second latency that was driving me nuts and turned out to be a sleep hidden in the code, which I can't really fault Google Cloud Run for so much as my crappy nonsense. But it just works. It's clearly running atop Kubernetes, but I don't have to think about it. That feels like the future. It feels like it's a glimpse of a world to come, we're just starting to dip our toes into. That, at least to me, feels like a lot more of the abstractions being collapsed into something easily understandable.Chen: [unintelligible 00:16:30], I'm happy you say that. When talking with customers and we're showing, like, you know, yes, they're all in Kubernetes and talking about Cloud Run and serverless, I feel there is that confidence level that they need to overcome. And that's why it's really important for us in Google Cloud is to make sure that you can mix and match. Because sometimes, you know, a big retail customer of ours, some of their teams, it's really important for them to use a Kubernetes-based platform because they have their workloads also running on-prem and they want to serve the same playbooks, for example, right? How do I address issues, how do I troubleshoot, and so on?So, that's one set of things. But some cloud only as simple as possible. So, can I use both of them and still have a similar developer experience, and so on? So, I do think that we'll see more of that in the coming years. And as the technology evolves, then we'll have more and more, of course, serverless solutions.By the way, it doesn't end there. Like, we see also, you know, databases and machine learning, and like, there are so many more managed services that are making things easy. And that's what excites me. I mean, that's what's awesome about what we're doing in cloud. We are building platforms that enable innovation.Corey: I think that there's an awful lot of power behind unlocking innovation from a customer perspective. The idea that I can use a cloud provider to wind up doing an experiment to build something in the course of an evening, and if it works, great, I can continue to scale up without having to replace, you know, the crappy Raspberry Pi-level hardware in my spare room with serious enterprise servers in a data center somewhere. The on-ramp and the capability and the lack of long-term commitments is absolutely magical. What I'm also seeing that is contributing to that is the de facto standard that's emerged of most things these days support Docker, for better or worse. There are many open-source tools that I see where, “Oh, how do I get this up and running?”“Well, you can go over the river and through the woods and way past grandmother's house to build this from source or run this Docker file.” I feel like that is the direction the rest of the world is going. And as much fun as it is to sit on the sidelines and snark, I'm finding a lot more capability stories emerging across the board. 
Does that resonate with what you're seeing, given that you are inherently working at very large scale, given the [laugh] nature of where you work?

Chen: I do see that. And I actually want to double down on the open standards, which I think is also something that is happening. At the beginning, we talked about how I want it to be a hard question which cloud provider to choose. But innovation doesn't only come from cloud providers; there's a lot of companies and a lot of innovation happening that are building new technologies on top of those cloud providers, and I don't think this is going to stop. Innovation is going to come from many places, and it's going to be very exciting.

And by the way, things are moving super fast in our space. So, the investment in open standards is critical for our industry. So, Docker is one example. Google is, in [unintelligible 00:19:46] speaking, investing a lot in building those open standards. So, we have Docker, we have things like of course Kubernetes, but we are also investing in open standards of security, so we are working with other partners around [unintelligible 00:19:58], defining how you can secure the software supply chain, which is also critical for innovation. So, all of those things that reduce the barrier to entry are something that I'm personally passionate about.

Corey: Scaling containers and scaling Kubernetes is hard, but a whole 'nother level of difficulty is scaling humans. You've been at Google for, as you said, seven years and you did not start as a VP there. Getting promoted from Senior Director to VP at Google is a, shall we say, heavy lift. You also mentioned that you previously started with, I believe, it was a seven-person team at one point. How have you been able to do that? Because I can see a world in which, "Oh, we just write some code and we can scale the computers pretty easily," I've never found a way to do that for people.

Chen: So yes, I started actually—well, not 7, but the team was 30 people [laugh]. And you can imagine how surprised I was when I joined Google Cloud with Kubernetes and GKE and it was a pretty small team, at the beginning of those days. But the team was already actually on the edge of burning out. You know, pings on Slack, the GitHub issues, there were so many things happening 24/7.

And the team was just doing everything. Everybody was doing everything. And one of the things I've done in my second month on the team—I did an off-site, right, all managers; that's what we do; we do off-sites—and I brought the team in to talk about—the leadership team—to talk about our team values. And in the beginning, they were a little bit pissed, I would say, "Okay, Chen. What's going on? You're wasting two days of our lives to talk about those things. Why are we not doing other things?"

And I was like, "You know guys, this is really important. Let's talk about what's important for us." It was amazing; it worked. By the way, that work is still the foundation of the culture in the team. We talked about the three values that we care about and what that would look like.

And the reason it's important is that when you scale teams, the key thing is actually to scale decision-making. So, how do you scale decision-making? I think there are two things there. One is what you're trying to achieve. So, people should know and understand the vision and know where we want to get to.

But the second thing is, how do we work? What's important for us? How do we prioritize? How do we make trade-offs?
And when you have both the what we're trying to do and the how, you build that team culture. And when you have that, I find that you're set up more for success for scaling the team.Because then the storyteller is not just the leader or the manager. The entire team is a storyteller of how things are working in this team, how do we work, what you're trying to achieve, and so on. So, that's something that had been a critical. So, that's just, you know, from methodology of how I think it's the right thing to scale teams. Specifically, with a Kubernetes, there were more issues that we needed to work on.For example, building or [recoding 00:23:05] different functions. It cannot be just engineering doing everything. So, hiring the first product managers and information engineers and marketing people, oh my God. Yes, you have to have marketing people because there are so many events. And so, that was one thing, just you know, from people and skills.And the second thing is that it was an open-source project and a product, but what I was personally doing, I was—with the team—is bringing some product engineering practices into the open-source. So, can we say, for example, that we are going to focus on user experience this next release? And we're not going to do all the rest. And I remember, my team was like worried about, like, “Hey, what about that, and what about this, and we have—” you know, they were juggling everything together. And I remember telling them, “Imagine that everything is on the floor. All the balls are on the floor. I know they're on the floor, you know they're on the floor. It's okay. Let's just make sure that every time we pick something up, it never falls again.” And that idea is a principle that then evolved to ‘No Heroics,' and it evolved to ‘Sustainable Success.' But building things towards sustainable success is a principle which has been very helpful for us.Corey: This episode is sponsored in part by our friend at Uptycs. Attackers don't think in silos, so why would you have siloed solutions protecting cloud, containers, and laptops distinctly? Meet Uptycs - the first unified solution that prioritizes risk across your modern attack surface—all from a single platform, UI, and data model. Stop by booth 3352 at AWS re:Invent in Las Vegas to see for yourself and visit uptycs.com. That's U-P-T-Y-C-S.com. My thanks to them for sponsoring my ridiculous nonsense.Corey: When I take a look back, it's very odd to me to see the current reality that is Google, where you're talking about empathy, and the No Heroics, and the rest of that is not the reputation that Google enjoyed back when a lot of this stuff got started. It was always oh, engineers should be extraordinarily bright and gifted, and therefore it felt at the time like our customers should be as well. There was almost an arrogance built into, well, if you wrote your code more like Google will, then maybe your code wouldn't be so terrible in the cloud. And somewhat cynically I thought for a while that oh Kubernetes is Google's attempt to wind up making the rest of the world write software in a way that's more Google-y. I don't think that observation has aged very well. I think it's solved a tremendous number of problems for folks.But the complexity has absolutely been high throughout most of Kubernetes life. I would argue, on some level, that it feels like it's become successful almost in spite of that, rather than because of it. But I'm curious to get your take. 
Why do you believe that Kubernetes has been as successful as it clearly has?

Chen: [unintelligible 00:25:34] two things. One about empathy. So yes, Google engineers are brilliant and are amazing and all great. And our customers are amazing, and brilliant, as well. And going back to the point before, everyone has their job and where they need to be successful, and we, as you say, we need to make things simpler and enable innovation. And our customers are driving innovation on top of our platform.

So, that's the way I think about it. And yes, it's not as simple as it can be—probably—yet, but since the early days of Kubernetes, we have been investing a lot in what we call empathy, and the customer empathy workshop, for example. So, I partnered with Kelsey Hightower—and you mentioned yourself trying to start a cluster. The first time we did a workshop with my entire team, so then it was like 50 people [laugh], their task was to spin up a cluster without using any scripts that we had internally.

And unfortunately, not many folks succeeded in this task. And out of that came the—what do you call it—an OKR, which was our goal for that quarter: that you are able to spin up a cluster in three commands and troubleshoot if something goes wrong. Okay, that came out of that workshop. So, I do think that there is a lot of foundation on that empathetic engineering, and the open-source community helped our Google teams to be more empathetic and understand what are the different use cases that they are trying to solve.

And that actually brings me to why I think Kubernetes is so successful. People might be surprised, but the amount of investment we're making on orchestration or placement of containers within Kubernetes is actually pretty small. And it's been very small for the last seven years. Where do we invest time? One, as I mentioned before, is on what we call the API machinery.

So, Kubernetes has introduced a way that is really suitable for cloud-native technologies: the idea of the reconciliation loop, meaning that Kubernetes is, like, a powerful automation machine, which can automate, of course, workload placement, but can automate other things. Think about it as the Kubernetes API machinery observing what is the current state, comparing it to the desired state, and working towards it. Think about, like, a thermostat, which is a different automation versus the 'if this, then that,' where you need to anticipate different events. So, this idea about the API machinery and the way that you can extend it made it possible for different teams to use that mechanism to automate other things in that space.

So, that has been one very powerful mechanism of Kubernetes. And that enabled a lot of innovation; even if you think about things like Istio, as an example, that's how it started, by leveraging that kind of mechanism, to separate storage, and so on. So, there are a lot of operators—the way people are managing their databases or stateful workloads on top of Kubernetes—they're extending this mechanism. So, that's one thing that I think is key and built that ecosystem. The second thing, I am very proud of the community of Kubernetes.

Corey: Oh, it's a phenomenal community success story.

Chen: It's not easy to build a community, definitely not in open-source. I feel that the idea of values, you know, that I was talking about within my team was actually a big deal for us as we were building the community: how we treat each other, how do we help people start?
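To make the reconciliation loop Chen describes concrete, here is a minimal sketch in Python of the observe/compare/act pattern. It is a toy illustration with made-up names, not Kubernetes's actual implementation (which is written in Go and far more involved):

import time

def reconcile(desired, observe, act, interval=5):
    # Generic reconciliation loop: observe the current state, compare it to
    # the desired state, and act to close the gap -- like a thermostat.
    while True:
        current = observe()
        if current != desired:
            act(desired, current)
        time.sleep(interval)

# Toy example: converge on three running replicas of a service.
replicas = []

def observe():
    return len(replicas)

def act(desired, current):
    if current < desired:
        replicas.extend(f"replica-{i}" for i in range(current, desired))
    else:
        del replicas[desired:]

# reconcile(3, observe, act)  # loops forever, converging on three replicas

The point of the pattern is that a controller only declares and converges on desired state rather than scripting one-off imperative steps, which is why the same machinery can be reused by operators managing databases and other stateful workloads.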
You know, and we were talking before, like, am I going to talk about DEI and inclusivity, and so on. One of the things that I love about Kubernetes is that it's a new technology. There is actually—[unintelligible 00:29:39] no, even today, there is no one with ten years experience in Kubernetes. And if anyone says they have that, then they are lying.Corey: Time machine. Yes.Chen: That creates an opportunity for a lot of people to become experts in this technology. And by having it in open-source and making everything available, you can actually do it from your living room sofa. That excites me, you know, the idea that you can become an expert in this new technology and you can get involved, and you'll get people that will mentor you and help you through your first PR. And there are some roles within the community that you can start, you know, dipping your toes in the water. It's exciting. So, that makes me really happy, and I know that this community has changed the trajectory of many people's careers, which I love.Corey: I think that's probably one of the most impressive things that it's done. One last question I have for you is that we've talked a fair bit about the history and how we see it progressing through the view toward the somewhat recent past. What do you see coming in the future? What does the future of Kubernetes look like to you?Chen: Continue to be more and more boring. There is the promise of hybrid and multi-cloud, for example, is only possible by technologies like Kubernetes. So, I do think that, as a technology, it will continue to be important by ensuring portability and interoperability of workloads. I see a lot of edge use cases. If you think about it, it's like just lagging a bit around, like, innovation that we've seen in the cloud, can we bring that innovation to the edge, this will require more development within Kubernetes community as well.And that's really actually excites me. I think there's a lot of things that we're going to see there. And by the way, you've seen it also in KubeCon. I mean, there were some announcements in that space. In Google Cloud, we just announced before, like, with customers like Wendy's and Rite Aid as well. So, taking advantage of this technology to allow innovation everywhere.But beyond that, my hope is that we'll continue and hide the complexity. And our challenge will be to not make it a black box. Because that will be, in my opinion, a failure pattern, doesn't help those kinds of platforms. So, that will be the challenge. Can we scope the project, ensure that we have the right observability, and from a use case perspective, I do think edge is super interesting.Corey: I would agree. There are a lot of workloads out there that are simply never going to be hosted in the cloud provider region, for a variety of reasons of varying validity, but it is the truth. I think that the focus on addressing customers where they are has been an emerging best practice for cloud providers and I'm thrilled to see Google leading the charge on that.Chen: Yeah. And you just reminded me, the other thing that we see also more and more is definitely AI and ML workloads running on Kubernetes, which is part of that, right? So, Google Cloud is investing a lot in making an AI/ML easy. And I don't know if many people know, but, like, even Vertex AI, our own platform, is running on GKE. 
So, that's part of seeing how do we make sure that platform is suitable for these kinds of workloads and really help customers do the heavy lifting.So, that's another set of workloads that are very relevant at the edge. And one of our customers—MLB, for example—two things are interesting there. The first one, I think a lot of people sometimes say, “Okay, I'm going to move to the cloud and I want to know everything right now, how that will evolve.” And one of the things that's been really exciting with working with MLB for the last four years is the journey and the iterations. So, they started somewhat, like, at one phase and then they saw what's possible, and then moved to the next one, and so on. So, that's one. The other thing is that, really, they have so much ML running at the stadium with Google Cloud technology, which is very exciting.Corey: I'm looking forward to seeing how this continues to evolve and progress, particularly in light of the recent correction we're seeing in the market where a lot of hype-driven ideas are being stress test, maybe not in the way we might have hoped that they would, but it'll be really interesting to see what shakes out as far as things that deliver business value and are clear wins for customers versus a lot of the speculative stories that we've been hearing for a while now. Maybe I'm totally wrong on this. And this is going to be a temporary bump in the road, and we'll see no abatement in the ongoing excitement around so many of these emerging technologies, but I'm curious to see how it plays out. But that's the beautiful part about getting to be a pundit—or whatever it is people call me these days that's at least polite enough to say on a podcast—is that when I'm right, people think I'm a visionary, and when I'm wrong, people don't generally hold that against you. It seems like futurist is the easiest job in the world because if you predict and get it wrong, no one remembers. Predict and get it right, you look like a genius.Chen: So, first of all, I'm optimistic. So usually, my predictions are positive. I will say that, you know, what we are seeing, also what I'm hearing from our customers, technology is not for the sake of technology. Actually, nobody cares [laugh]. Even today.Okay, so nothing needs to change for, like, nobody would c—even today, nobody cares about Kubernetes. They need to care, unfortunately, but what I'm hearing from our customers is, “How do we create new experiences? How we make things easy?” Talent shortage is not just with tech people. It's also with people working in the warehouse or working in the store.Can we use technology to help inventory management? There's so many amazing things. So, when there is a real business opportunity, things are so much simpler. People have the right incentives to make it work. Because one thing we didn't talk about—right, we talked about all these new technologies and we talked about scaling team and so on—a lot of time, the challenge is not the technology.A lot of time, the challenge is the process. A lot of time, the challenge is the skills, is the culture, there's so many things. But when you have something—going back to what I said before—how you unite teams, when there's something a clear goal, a clear vision that everybody's excited about, they will make it work. So, I think this is where having a purpose for the innovation is critical for any successful project.Corey: I think and I hope that you're right. I really want to thank you for spending as much time with me as you have. 
If people want to learn more, where's the best place for them to find you?

Chen: So, first of all, on Twitter. I'm there, or on LinkedIn. I will say that I'm happy to connect with folks. Generally speaking, at some point in my career, I recognized that I have a voice that can help people, and that my experience can also help people build their careers. I'm happy to share that and [unintelligible 00:36:54] folks both in the company and outside of it.

Corey: I think that's one of the obligations on a lot of us, once we've gotten into a certain position in our careers, to send the ladder back down, for lack of a better term. I've never appreciated the perspective, "Well, screw everyone else. I got mine." The whole point is the next generation should have it easier than we did.

Chen: Yeah, definitely.

Corey: Chen Goldberg, General Manager of Cloud Runtimes and VP of Engineering at Google. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry rant of a comment talking about how LPARs on mainframes are absolutely not containers, making sure it's at least far too big to fit in a reasonably-sized Docker container.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.

Google Cloud Platform Podcast
Next 2022 with Forrest Brazeal

Google Cloud Platform Podcast

Play Episode Listen Later Oct 12, 2022 43:44


Forrest Brazeal joins Stephanie Wong today on the second day of Google Cloud Next ‘22. We're talking about all the exciting announcements, how the conference has changed in recent years, and what to expect in the days ahead. The excitement and energy of the first in-person Next since 2019 was one of the best parts for Forrest. With 1300 releases in just half the year, a lot has happened in BigQuery, AI, Looker, and more. Next includes announcements in many of these areas as well, as Google Cloud expands and makes Cloud easier for all types of projects and clients. Strategic partnerships and development have allowed better use of Google Cloud for the virtual work world and advancements in sustainability have helped Google users feel better about their impact on the environment. New announcements in compute include C3 VMs, the first VM in the cloud with 4th Gen Intel Xeon scalable processors with Google's custom Intel IPU. MediaCDN uses the YouTube infrastructure and the new Live Stream API optimizes streaming capabilities. Among many other announcements, Network Analyzer is now GA allowing for simplified network configuration monitoring and Google Cloud Armor has been extended to include ML-based Adaptive Protection capabilities. Software Delivery Shield and Cloud Workstations are recent offerings to help developers in each of the four areas of software supply chain management. Advancements in Cloud Build include added security benefits, and new GKE and Cloud Run logging and security alerts ensure projects remain secure through the final stages of development. The best way to ensure secure, optimized work is with well-trained developers. And in that vein, Google Cloud is introducing Innovators Plus to provide a new suite of developer benefits under a fixed cost subscription. Forrest tells us about #GoogleClout and the challenges available in the Next portal for conference-goers. Assured Workloads helps with data sovereignty in different regions, Confidential Space in Confidential Computing provides trust guarantees when companies perform joint data analysis and machine learning training, and Chronicle Security Operations are some of the exciting security announcements we saw at Next. On the show next week, we'll go in depth on data announcements at Next, but Steph gives us a quick rundown of some of the biggest ones today. She talks briefly about announcements in AI, including Vertex AI Vision and Translation Hub. Forrest wraps up by talking about predictions for the future of tech and cloud. Forrest Brazeal Forrest Brazeal is a cloud educator, author, speaker, and Pwnie Award-winning songwriter. He is the creator of the Cloud Resume Challenge initiative, which has helped thousands of non-traditional learners take their first steps into the cloud. 
Cool things of the week Unlock biology & medicine potential with AlphaFold on Google Cloud video Interview Google Cloud Next ‘22 site Google Cloud Innovators site What's next for digital transformation in the cloud blog New cloud regions coming to a country near you blog The next wave of Google Cloud infrastructure innovation: New C3 VM and Hyperdisk blog 20+ Cloud Networking innovations unveiled at Google Cloud Next blog Introducing Software Delivery Shield for end-to-end software supply chain security blog Developers - Build, learn, and grow your career faster with Google Cloud blog Advancing digital sovereignty on Europe's terms blog Introducing Confidential Space to help unlock the value of secure data collaboration blog Introducing Chronicle Security Operations: Detect, investigate, and respond to cyberthreats with the speed, scale, and intelligence of Google blog What's new in Google Cloud databases: More unified. More open. More intelligent. blog Building the most open data cloud ecosystem: Unifying data across multiple sources and platforms blog Introducing the next evolution of Looker, your unified business intelligence platform blog Vertex AI Vision site New AI Agents can drive business results faster: Translation Hub, Document AI, and Contact Center AI blog Open source collaborations and key partnerships to help accelerate AI innovation blog Google Cloud Launches First-of-Its-Kind Service to Simplify Mainframe Modernization for Customers in Financial Services, Retail, Healthcare and Other Industries article Project Starline expands testing through an early access program blog What's something cool you're working on? Steph is working on the developer keynote and DevFest and UKI Google Cloud Next Developer Day. Check out her Next talk “Simplify and secure your network for all workloads”. Hosts Stephanie Wong

The Cloud Pod
184: The CloudPod Explicitly trusts itself

The Cloud Pod

Play Episode Listen Later Oct 7, 2022 52:22


On The Cloud Pod this week, AWS announces an update to IAM role trust policy behavior, Easily Collect Vehicle Data and Send to the Cloud with new AWS IoT FleetWise, now generally available, Get a head start with no-cost learning challenges before Google Next ‘22. Thank you to our sponsor, Foghorn Consulting, which provides top notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you're having trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week. Episode Highlights ⏰ AWS announces an update to IAM role trust policy behavior. ⏰ Easily Collect Vehicle Data and Send to the Cloud with new AWS IoT FleetWise, now generally available. ⏰ Get a head start with no-cost learning challenges before Google Next ‘22. General News:

The MongoDB Podcast
Ep 130 Serverless with MongoDB and Google Cloud Run

The MongoDB Podcast

Play Episode Listen Later Sep 27, 2022 21:43


In this episode, Mike Lynn chats to Abi from Google and Mira from MongoDB to talk about all things serverless and full stack application development in the cloud. They speak about the main reasons to use serverless and why you should use serverless technologies for your development. Topics covered include Docker, Containers, Jib (for building Java Docker images), environment variables, development vs production environments, how to connect to MongoDB, security, and when to consider developing for serverless (hint - when there's significant need for scaling, reducing maintenance and freeing up developers!!) Tune in to listen to what Abi and Mira have to share.

Screaming in the Cloud
Google Cloud Carbon Footprint with Steren Giannini

Screaming in the Cloud

Play Episode Listen Later Aug 16, 2022 35:07


About Steren

Steren is a Group Product Manager at Google Cloud. He is part of the serverless team, leading Cloud Run. He is also working on sustainability, leading the Google Cloud Carbon Footprint product.

Steren is an engineer from École Centrale (France). Before joining Google, he was CTO of a startup building connected objects and multi-device solutions.

Links Referenced:
previous episode: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/google-cloud-run-satisfaction-and-scalability-with-steren-giannini/
Google Cloud Region Picker: https://cloud.withgoogle.com/region-picker/
Google Cloud regions: https://cloud.google.com/sustainability/region-carbon

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: DoorDash had a problem. As their cloud-native environment scaled and developers delivered new features, their monitoring system kept breaking down. In an organization where data is used to make better decisions about technology and about the business, losing observability means the entire company loses their competitive edge. With Chronosphere, DoorDash is no longer losing visibility into their applications suite. The key? Chronosphere is an open-source compatible, scalable, and reliable observability solution that gives the observability lead at DoorDash business confidence and peace of mind. Read the full success story at snark.cloud/chronosphere. That's snark.cloud slash C-H-R-O-N-O-S-P-H-E-R-E.

Corey: This episode is sponsored in part by our friend EnterpriseDB. EnterpriseDB has been powering enterprise applications with PostgreSQL for 15 years. And now EnterpriseDB has you covered wherever you deploy PostgreSQL: on-premises, private cloud, and they just announced a fully-managed service on AWS and Azure called BigAnimal, all one word. Don't leave managing your database to your cloud vendor because they're too busy launching another half-dozen managed databases to focus on any one of them that they didn't build themselves. Instead, work with the experts over at EnterpriseDB. They can save you time and money, they can even help you migrate legacy applications—including Oracle—to the cloud. To learn more, try BigAnimal for free. Go to biganimal.com/snark, and tell them Corey sent you.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest today was recently on the show. Steren Giannini is the product lead for Google Cloud Run, and we talked about that in a previous episode. If you haven't listened to it, you might wish to go back and listen to it, but it's not a prerequisite for what we're about to talk about today. Because apparently Google still does its 20% time, and one of the things that Steren decided to do—because, you know, everyone needs a hobby—was to go ahead and start the Google Cloud Carbon Footprint, which is—well, Steren, thanks for coming back. What the hell is that?

Steren: Thanks for having me back on the show.
So yes, we started with Cloud Carbon Footprint, and this is a product that has now launched publicly, available to every Google Cloud customer right out of the box in the Google Cloud Console.

Corey: I should also point out, because people always wonder and it's the first thing I always check: yes, this one is free. I'm trying to imagine a scenario in which you charge for this and I wasn't incensed by it, and I can't. So, good work, you aren't charging anything for it. Good job. Please continue.

Steren: So, Google Cloud Carbon Footprint helps a Google Cloud customer understand and reduce their gross carbon emissions linked to their Google Cloud usage. So yeah, what do we mean by carbon emission? Just so that we are all on the same page, these are the greenhouse gases that are emitted due to the activity of using Google Cloud that are notably responsible for climate change. And we report them in equivalents of carbon dioxide—CO2—and you know, the shortcut is just to say 'carbon.'

Corey: Now, I'm going to start with something relatively controversial. It's an opinion I have around this sort of thing. And I should also disclaim, I am not in any way, shape, or form, disputing that climate change caused by humans is real. It is. If you don't believe that, please go listen to something else, maybe Infowars. I don't know and I don't care. I just don't want you around.

Now, the problem that I have with this is, on some level, it feels like a cloud provider talking to its customers about their carbon footprint is shifting the onus of responsibility in some way away from the cloud provider and onto the customer. Now, I freely admit that this is a nuanced topic, but how do you view this?

Steren: What I mentioned is that we are exposing to customers their gross carbon emissions, but what about their net carbon emissions? Well, Google Cloud customers' net operational carbon emissions are simply zero. Why? Because if you open Google's environmental report, you will see that Google is purchasing as much renewable energy globally for the year as it is using. So, that means that on a yearly basis, worldwide, every kilowatt hour of electricity has been matched with renewable energy.

And you know, Google has been doing this since 2017. Since 2007, Google was already matching its carbon footprint with carbon offsets, but in 2017, Google went beyond and is matching the purchase of the electricity with renewable energy. So, in a sense, your net operational emissions are zero.

Now, that's not sufficient for our customers. They have some reporting obligations; they need to know, before this renewable matching, what were their gross emissions? And they also need to know where their emissions are coming from: not only the electricity usage, but maybe the data center or manufacturing. And this is all of what we expose in Google Cloud Carbon Footprint. They are before offsets, before renewable energy matching.

And you're right also to say that this is not only the customer's problem, and indeed, Google itself has set a goal to get to a hundred percent carbon-free electricity for every hour in every location. The big goal for 2030 is that at every hour, every location, the electricity comes from carbon-free sources.
This is very ambitious and never done before, of course, at the scale of Google, but this is the next goal for Google.Corey: The challenge that I have—in the abstract—with cloud providers, more or less, shaming customers—not to say that's what you're doing here—about their carbon usage and their carbon footprint is, okay, I appreciate that this is everyone's problem, and yes, it is something that we should be focusing on extensively. The counterargument is that I don't recall ever getting a meeting invite to a Google or Amazon or Microsoft or Oracle negotiation with any of your power bills or power companies or power sourcing. I have no input whatsoever as a customer on those things. And, on some level, it's “Ooh, you're causing a particular amount of carbon to be used by your usage of these services.” Like, well, at some level, it feels like that is more of a you thing than a me thing.And I want to be clear, I'm speaking more in the abstract to the industry rather than the specifics of Google Cloud, not to unfairly put you in the position of having to speak for everyone.Steren: No, but you're right. If you were to do nothing, Google is constantly working hard to sign more power purchase agreements with some renewable energy sources or optimizing its data centers. Google Cloud data centers are one of the most optimized data centers in the industry with a power usage effectiveness of 1.1, which is basically saying that the energy that is used to power the facility over the energy used to actually power the server is 1.1. So, not that much loss in between.So, all of that to say, Google Cloud and Google are working very hard anyway to reduce Google Cloud's carbon footprints and the carbon footprint of Google Cloud customers. So, if you were to do nothing, the charts that you're seeing on Google Cloud Carbon Footprint should trend to zero. But in the meantime, you know, that's not the case, so that's why we show the data. And, like, many customers want to know or have the obligation to report on this data.Corey: One of the challenges that I see—and I believe this might even be related to the carbon footprint tool you have built out on top of Google Cloud—is when I am looking at… at where to place something—first, let me just say the region experience is wildly different than I'm used to with AWS. In the AWS universe, every region is basically its own island; it is very difficult to get a holistic view between regions. Google Cloud does not have that approach. There are advantages and disadvantages to both. I'm not passing any particular value judgment—for once—on this topic in this context. But where do I want to spin something up? And I have a dropdown of the regions that I can put it in. And some of these now have a green leaf next to them and others do not. I'm going to go out on a limb and assume you had a hand in this somewhere.Steren: Exactly. That's something I worked on with the team. So, you will see on the Google Cloud Console location selectors on the Google Cloud location page, on the Google Cloud documentation, you will see a small low CO2 indicator next to some regions. And this indicator is basically saying that this region meets some criteria of high carbon-free energy percentage or low grid carbon intensity. 
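A quick unpacking of the metrics in this stretch of the conversation, since they drive everything that follows. Power usage effectiveness (PUE) is total facility energy divided by the energy delivered to the IT equipment, so a PUE of 1.1 means roughly 0.1 kWh of cooling, power distribution, and other overhead for every 1 kWh that actually reaches the servers; industry surveys have typically put the average closer to 1.5. Carbon-free energy percentage (CFE%) is, roughly, how often the electricity a region consumes is matched by carbon-free sources on the local grid, measured hourly, while grid carbon intensity is the average emissions per unit of electricity, in gCO2e/kWh; a region can earn the low CO2 indicator by scoring well on either. And on gross versus net, under an illustrative set of figures (not actual Google data): a workload whose electricity causes 10 tCO2e of grid emissions in a year has gross emissions of 10 tCO2e; if the provider purchases renewable energy matching 100% of that annual consumption, the net operational emissions come out to zero, even though during some hours the servers ran on fossil-fuel power. Carbon Footprint reports the gross figure, which is what reporting frameworks generally ask about.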
So, you don't need to go into the details; you just need to know that if you see this small leaf, that means that for a given workload, the emissions in that particular region will be way lower than in another region which doesn't have the leaf.

Often at Google, when we do a change, we A/B test it. We A/B tested those small low CO2 indicators because, you know, that's a console-wide change, so we want to make sure that it's worth it. And well, it turns out that for people who were in the experiment—so, people who were seeing the leaf—among new Google Cloud users, they were 50% more likely to pick a low-carbon region when the leaf was displayed. And among all users, it was 19%. So, you see how just by surfacing the information, we were able to significantly influence customers' behavior towards reducing their carbon emissions.

And, you know, if you ask me, I think picking the cleanest region is probably one of the simplest actions you can take—if possible, of course—to reduce your gross carbon emissions because, you know, it doesn't require you to change your architecture or your infrastructure; it just requires you to make the right choice in the first place. And just by letting people know that some regions are emitting much less carbon than others, we basically allow them to reduce their footprint.

Corey: A question I have is that as you continue to move up the stack, one of the things that Google has done extraordinarily well is the global network. And we talked previously about how I run the snark.cloud URL shortener in Google Cloud. That is homed out of us-central1 as far as regions go. But given that thing is effectively stateless—it just talks to Google Sheets for its source of truth, but then just runs a Docker invocation on every request—cool, I can see a scenario in which that becomes much more of a global service.

In other words, if you can run that in PoPs in every region around the world, on some level, there is no downside, from my perspective, to doing that. What I'm wondering then, as a result of that, is as you start seeing the evolution of services becoming more and more global, instead of highly region-specific, does that change the way that we should be thinking potentially about carbon footprint and regional selection? Or is that too much of a niche edge case to really be top of radar right now?

Steren: Oh, there are many things to talk about here. The first one is that you might be hinting at something that Google is already doing, which is location shifting of workloads in order to optimize power usage and, you know, of course, carbon emissions. So, Google itself is already doing that. For example, processing YouTube videos: that doesn't necessarily have to be done right away, and it can be done in the location in which, for example, the sun is shining. So, there are some very interesting things that can be done if you allow the workloads to be run in not necessarily a specific region.

Now, that being said, I think there are many other things that people consider when they pick a region. First, well, maybe they have some data locality constraints, right? This is very much the case in European countries where the data must stay in a given region, by law. Second, well, maybe they care about the price. And as you probably know, [laugh] the price of cloud providers is not the same in every region.
Corey: It's great to have the carbon-friendly indicator next to the region, but I also want a number of dollar signs next to it as well. Like in AWS-land, you have the tier one regions where everything is the lowest price: us-east-1, us-west-2, and a few others that escape me from time to time, where Managed NAT Gateways are really expensive. And then you go to some others and they get even more expensive, somehow. Like, talk about pushing the bounds of cloud economics. It's astonishing to me.

Steren: Yes. And so—

Corey: Because I want that display, on some level—

Steren: Exactly.

Corey: —as a customer, in many cases.

Steren: So, there is price, there is carbon, but of course, you know, if you are serving web requests, there is probably also latency that you care about, right? Even if—for example, Finland is very low carbon. You might not host your workloads in Finland if you want to serve US customers. So, in a sense, there are many dimensions to optimize when you pick a region. And I just sent you a link to something that I built, which is called Google Cloud Region Picker.

It's basically a tool with three sliders. The first one is carbon footprint; you tell us how much you care about that. Hopefully, you put it to the right. The second one is lower price. So, how much do you want the tool to optimize to lower your bill? And the third one is latency, and then you tell us where your users are coming from and if you care about latency.

Because some workloads are not subject to latency requirements. Like, if you do batch jobs, well, that doesn't serve a user request, so that can be done asynchronously at a later time or in a different place. And what this tool does is that it takes your inputs and it basically tells you which Google Cloud region is the best fit for you. And if you use it, you will see it has very small symbols, like three dollars for the most expensive regions, one dollar for the least expensive ones, three leaves for the greenest regions, and zero leaves for the non-green ones.

Corey: This is awesome. I'm a little bit disappointed that I hadn't seen this before. This is a thing of beauty.

Steren: Yeah. Again, done by me as a 20% project. [laugh]. And, you know, the goal is to educate. Like, of course, it's way more complex; like, you know that price optimization is way more complex than a slider, but the goal of this tool is to educate and to give a first result. Like, okay, if you are in France and care about carbon, then go here. If you are in Oregon, go here. There are so many parameters that this tool helps you optimize in a very simple way.

Corey: One of the challenges I think I get into when I look at this across the board, is that you have a couple of very different ends on a confusing spectrum, by which I mean that one of the things I would care about from a region picker, for example, is there sufficient capacity in that region for the things I want to run. At my scale of things, where right now on Google Cloud I run a persistent VM that hangs out all the time, and I run some Google Cloud Run stuff. Great. If you have capacity problems with either one of those, are you really a cloud?

But then we have other folks who are spinning up tens or hundreds of thousands of a very particular instance type in a very specific region. That's the sort of thing that requires a bit more in the way of capacity planning and the rest. So, I have to imagine for those types of use cases, this tool is insufficient.
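For a sense of the trade-off the Region Picker automates, here is a rough sketch of one possible weighted-scoring approach in Python. The weights, normalization constants, and per-region numbers are hypothetical placeholders, not the tool's actual data or algorithm:

# Hypothetical per-region data: grid carbon intensity (gCO2e/kWh),
# a relative price index, and latency from the user base (ms).
REGIONS = {
    "us-west1 (Oregon)":       {"carbon": 60,  "price": 1.0, "latency": 30},
    "us-west4 (Las Vegas)":    {"carbon": 400, "price": 1.0, "latency": 25},
    "europe-north1 (Finland)": {"carbon": 90,  "price": 1.1, "latency": 150},
}

def pick_region(w_carbon, w_price, w_latency):
    # Score each region as a weighted sum of roughly normalized criteria;
    # lower is better, so the best fit is the minimum score.
    def score(name):
        d = REGIONS[name]
        return (w_carbon * d["carbon"] / 500
                + w_price * d["price"] / 1.5
                + w_latency * d["latency"] / 200)
    return min(REGIONS, key=score)

# Sliders pushed toward carbon, some weight on price, little on latency:
print(pick_region(w_carbon=1.0, w_price=0.5, w_latency=0.2))  # us-west1 (Oregon)

Pushing the carbon slider up (a larger w_carbon) makes high-intensity regions score worse; in this toy data, the Oregon entry wins as soon as carbon carries any real weight, while a latency-only weighting would favor the closest region instead.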
The obvious reason, of course, if you're spinning up that much of anything, for God's sake, reach out and talk to your account manager before trying to do it willy-nilly. But yes.

Steren: That's exactly right. So, as I said, this tool is a simplified tool to give, like, the vast majority of users a sense of where to put their workloads. But of course, if you're a very big enterprise customer who is going to sign a very big deal with Google Cloud, talk to your account manager because if you do need a lot of capacity, Google Cloud might need to plan for it. And not every region has the same capacity, and we are always working with our customers to make sure we direct them to the right place and have enough capacity.

A real-life example from a very high-profile Google Cloud customer was that they were selecting a region without knowing its carbon impact, and when we started to disclose the carbon characteristics of Google Cloud regions—which is another link we can send to the audience—this customer realized that the region they selected—you know, maybe because it was close to their user base—was really not the most carbon-friendly.

So, they decided to switch to another one. And if we take an example, if you take Las Vegas, it has a carbon-free energy percentage of 20%. So, that basically means that on average, 20% of the time, the electricity comes from carbon-free sources. If you were to move this same workload to Oregon, Oregon has a carbon-free energy percentage of 90%. So, you can see how just by staying on the West Coast, moving from Las Vegas to Oregon, you have drastically reduced your carbon emissions. And your bill, by the way, because it turns out Oregon is one of the cheapest Google Cloud data centers. So, you see how just being aware of those numbers led some very important customers who care about sustainability to make some fundamental choices when it comes to the regions they select.

Corey: I guess that leads to my big obvious question, where I wind up pulling up my own footprint in Google Cloud—again, I don't run much there—and apparently over the last year, I've had something on the order of two kilograms of carbon. Great. It feels like for this scale, this is entirely academic. You can almost run my workloads based upon, I don't know, burning baby seals or something, and the ecological footprint does not materially change.

Then we go to the other extreme end of the spectrum with the hundreds of thousands of instances, where this stuff absolutely makes a significant and massive difference. My question is, when should people begin thinking about the carbon footprint of their cloud workload, at what point of scale?

Steren: So, as you said, a good order of magnitude is one transatlantic flight is a thousand kilograms of equivalent CO2. So, you see how just by flying once, you're already, like, completely overshadowing your Google Cloud carbon footprint. But that's because you are not using a lot of Google Cloud resources. Overall, you know, I think your question is basically the same as when should individuals try to optimize reducing their carbon footprint? And here I always recommend there are tons of things you can optimize.

Start by the most impactful ones. And impactful means an action will have a lot of impact in reducing the footprint, but also the footprint reduction will be significant by itself.
And two kilograms of CO2, yes indeed, it is very low, but if you start reaching into the thousands of kilograms of CO2, that starts to represent, like, one flight, for example. So, you should definitely care about it. And as I said, some actions might be rather easy, like picking the right region, which might be something you can do pretty easily for your business, and then you will see your carbon emissions being divided by, you know, sometimes five. This episode is sponsored in part by our friends at Lambda Cloud. They offer GPU instances with pricing that's not only scads better than other cloud providers, but is also accessible and transparent. Also, check this out, they get a lot more granular in terms of what's available. AWS offers NVIDIA A100 GPUs on instances that only come in one size and cost $32/hour. Lambda offers instances that offer those GPUs as single card instances for $1.10/hour. That's 73% less per GPU. That doesn't require any long term commitments or predicting what your usage is gonna look like years down the road. So if you need GPUs, check out Lambda. In beta, they're offering 10TB of free storage and, this is key, data ingress and egress are both free. Check them out at lambdalabs.com/cloud. That's l-a-m-b-d-a-l-a-b-s.com/cloud. Corey: I want to challenge your assertion, incidentally. You say that I'm not using a whole lot of Google Cloud resources. I disagree. I use roughly a dozen different Google Cloud resources tied together for some of these things, but they're built on serverless design patterns, which means that they scale to nothing. I'm not sitting there with an idle VM—except that one—that is existing on a persistent basis. For example, I look at the things that show up on the top five list. Compute Engine is number one; Cloud Run, Cloud Logging, Cloud Storage, and App Engine are the rest that are currently being used. I think there's a significant untold story around the idea of building in a serverless way for climate purposes. Steren: Yes. So, maybe for those who are not aware of what you are seeing on the dashboard: when you open this Google Cloud Carbon Footprint tool on the Cloud Console, you see a breakdown of your yearly carbon footprint and monthly carbon footprint across a few dimensions. The first one is the regions because, as we said, this matters a lot; like, the regions have a lot of impact. The second one is the month; of course, you can see how you're trending over time. The third one is a concept called a Google Cloud project, which is, for those who are not aware, a way to group Google Cloud resources into buckets. And the fourth one is Google Cloud services. So, what you described here is which of your services emits the most, and therefore which ones you should optimize first. Like, again, to go back to impactful actions. And to your point, yes, it is very interesting that if you use products which auto-scale, basically, the carbon attributed to you, the customer, will really follow this auto-scaling behavior. Compare that to a virtual machine that is always on, burning some CPU for almost nothing because you have a server that doesn't process requests. That is wasting, in a sense, resources. So, what you describe here is very interesting, which is basically: the more optimized the products you pick, the less waste you're going to have. Now, I also want to be careful, because comparing one CPU hour of Cloud Run and one CPU hour of Compute Engine is not comparing apples to apples. Why?
Because when you use Cloud Run, I'm not sure if you know, but you are using a regional product. So, a product which has built-in redundancy, which is safe in case of one zone going down in a region. But that means the Cloud Run infrastructure has to provision a few more machines than if it were a zonal product. With Compute Engine, your virtual machine lives in one zone and there is only one machine for you. So, you see how we should also be careful comparing products with other products because, fundamentally, they are not offering the same value and they are not running on the same infrastructure. But overall, I think you are correct to say that, you know, avoiding waste and using auto-scaling products is a good way to reduce your footprint. Corey: I do want to ask—and this is always a delicate topic because you're talking about cultural things—how much headwind did you have internally at Google when you had the idea to start exposing this? How difficult was it to bring this to fruition? Steren: I think we are lucky that our leadership cares about reducing carbon emissions and understood that our customers needed our help to understand their cloud emissions. Like, before we had this tool, many customers were trying to somehow estimate their cloud emissions. And it was—you know, Google Cloud was a black box for them. They did not have access to what you said, to some data that only Google has access to. And you know, to build that tool, we are using energy measurements of every machine in every data center. We are using, you know, customer-wide resource usage. And that is something that we use to divide the footprint among customers. So, there is some data used to compute those numbers that only Google Cloud has access to. And indeed, you're correct; it required some executive approval, which we received because many of our leaders believe that, you know, this is the right thing to do, and this is helping customers towards the same goal as Google, which is being net-zero and carbon-free. Many of our customers have made some sustainability commitments, and they need our help to meet those goals. So yeah, we did receive approval, first to share the per-region characteristics. This was already, you know, a first in the industry, where a cloud provider disclosed that not every region is equal and some are emitting more carbon than others. And second, another approval, which was to disclose a per-customer carbon footprint broken down by service, project, and region. To touch a little bit on the methodology: it uses energy consumption, resource usage, and carbon intensity coming from a partner of ours to compute, basically, a per-customer footprint. Corey: My question for you is, on some level, given that Google is already committed to being net-zero across the board for all of its usage, why do customers care? Why should they care? Effectively, haven't you made that entirely something that is outside of their purview? You've solved the problem, either way. Steren: This is where we should explore a bit more the kinds of carbon emissions that exist. For a customer, their emissions linked to cloud usage are all considered indirect emissions. In the Greenhouse Gas Protocol standard, this is called Scope 3. So, our Google Cloud emissions are the customers' Scope 3 emissions; they are all indirect for them.
But among those indirect emissions, what I mentioned as being net-zero are the emissions coming from electricity usage. So, to power those data centers: those data centers are located on certain electricity grids, and those electricity grids might be using energy sources that emit more or less carbon, right? Simply put, if in a given place the electricity comes from coal, it will be emitting a lot of carbon compared to when electricity comes from solar, for example. So, you see how the location itself determines the carbon intensity. And these are the emissions coming from electricity usage, right? For Google, those are Scope 2 emissions. So, these are neutralized by Google purchasing an equivalent amount of renewable energy. But there are also other types of emissions. For example, when a data center loses connection to the grid, we start up diesel generators. Those diesel generators will directly emit carbon. These are called Scope 1 emissions. And finally, there are the carbon emissions coming from the manufacturing of those servers and of those data centers. These are called Scope 3 emissions. And the goal of Google is for the emissions coming from electricity to always come from carbon-free sources. So, this is a change that we've recently released to Google Cloud Carbon Footprint: now we also break down your emissions by scope. So, they are all Scope 3 for you, the customer; they are all indirect emissions for you, the customer; but now, within those indirect emissions, you can see how much is coming from diesel generators, how much is coming from electricity consumption, and how much is coming from manufacturing of the data center and other, like, upstream and downstream activities. And yeah, overall, this is something that customers do need to report on.
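A small sketch can summarize that taxonomy. The numbers below are placeholders; only the structure (provider-side Scopes 1, 2, and 3, all rolling up into the customer's own Scope 3 reporting) follows what is described above.

```python
# Placeholder numbers; only the structure mirrors the breakdown described above.
provider_side_kg_co2e = {
    "Scope 1 (direct: e.g., backup diesel generators)": 0.02,
    "Scope 2 (purchased electricity)":                  1.50,  # neutralized via renewables
    "Scope 3 (manufacturing, upstream/downstream)":     0.48,
}

# From the customer's point of view, all of this is indirect and rolls up
# into their own Scope 3 reporting.
for scope, kg in provider_side_kg_co2e.items():
    print(f"{scope}: {kg:.2f} kg CO2e")
print(f"Customer's cloud footprint (their Scope 3): "
      f"{sum(provider_side_kg_co2e.values()):.2f} kg CO2e")
```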
We haven't talked about it, but Google Cloud Carbon Footprint will even send you some proactive recommendations of things to do to reduce your emissions. For example, if you have a project or a machine that you've forgotten, Google Cloud Carbon Footprint will recommend that you delete it and will tell you how much carbon you would save by deleting it, as well as dollars, of course. Corey: It's funny because I feel like there's a definite alignment between my view of cloud economics and the carbon perspective on this, which is step one: everyone wins if you turn things off when you're not using them. What a concept. I sometimes try and take it too far: 'turn off all of production because your company's terrible.' Yeah, it turns out that doesn't work super well. But the idea of step one, turn it off, especially when you're not using it. And if you're never using it, why would you want to pay for it? That becomes a very clear win for everyone involved. I think that in the fullness of time, economics are what are going to move the needle on driving further adoption of this. I have to guess that you see the same thing from where you are? Steren: Yes, very often working to reduce your carbon footprint is also working to reduce your bill. And we've also observed—not always—but some correlation between regions that have the lowest carbon impact and regions that are the cheapest. So, in a sense, this region selection, optimizing for price and carbon, is often optimizing for the same thing. It's not always true, but it is often true. Corey: I really want to thank you for spending so much time to talk with me. This has definitely given me a lot of food for thought, and I have to imagine that this will not be our last conversation around the topic. Steren: Well, thanks for having me. And I'm very happy to talk to you on the podcast, of course. Corey: Steren Giannini, product lead for Google Cloud Carbon Footprint and Google Cloud Run. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry screed about how climate change isn't real as you sit there wondering why it's 120 degrees in March. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started. Announcer: This has been a HumblePod production. Stay humble.
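Picking up the "delete what you've forgotten" recommendation from that episode: the underlying idea is simple enough to sketch. Everything below (the threshold, the data, the names) is invented for illustration, and the real recommender is considerably more sophisticated.

```python
# Toy version of an idle-resource recommendation: flag machines whose CPU has
# been essentially zero for a month as candidates to delete. All data and the
# 1% threshold are invented.

instances = [
    {"name": "prod-api",     "avg_cpu_30d": 0.41},
    {"name": "forgotten-vm", "avg_cpu_30d": 0.002},
]

IDLE_THRESHOLD = 0.01  # under 1% average CPU over 30 days

for vm in instances:
    if vm["avg_cpu_30d"] < IDLE_THRESHOLD:
        print(f"{vm['name']}: likely idle; deleting it saves both dollars and carbon")
```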

Google Cloud Platform Podcast
Cloud Functions (2nd gen) with Jaisen Mathai and Sara Ford

Google Cloud Platform Podcast

Play Episode Listen Later Aug 10, 2022 41:05


Stephanie Wong and Brian Dorsey are joined today by fellow Googlers Jaisen Mathai and Sara Ford to hear all about Cloud Functions (2nd gen) and how it differs from the original. Jaisen gives us some background on Cloud Functions and why it was built. Supporting seven languages, this tool allows clients to write a function without worrying about scaling, devops, and a number of other things that are handled by Cloud Functions automatically. Customer feedback led to new features, and that's how the second evolution of Cloud Functions came about. Don't worry, first gen users! This will continue to be available and supported. Features in the 2nd gen fit into three categories: performance, cost, and control. Among other benefits, costs stay low or may even be reduced with some of the new features, larger instances and longer processing times mean better performance, and traffic splitting means better control over projects. Sara details an example illustrating the power of the new concurrency features, and Jaisen helps us understand when Cloud Functions is the right choice for your project and when it's not. Our guests walk us through getting started with Cloud Functions and using the 2nd gen additions. Companies like Lucille Games are using Cloud Functions, and our guests talk more about how specific users are leveraging the new features of the 2nd gen. Jaisen Mathai Jaisen is a product manager for Cloud Functions. He's been at Google for about six years and before joining Google was both a developer and product manager. Sara Ford Sara is a Cloud Developer Advocate focusing on Cloud Functions and enjoys working on serverless. Cool things of the week No pipelines needed. Stream data with Pub/Sub direct to BigQuery blog Cloud IAM Google Cloud blog The Diversity Annual Report is now a BigQuery public dataset blog Interview Cloud Functions site Cloud Functions 2nd gen walkthrough video Cloud Functions version comparison docs Lucille Games: Playing to win with Google Cloud Platform site BigQuery site Cloud Run site Eventarc docs Cloud Shell site GCP Podcast Episode 261: Full Stack Dart with Tony Pujals and Kevin Moore podcast Working with Remote Functions docs Cloud Console site Where should I run my stuff? Choosing compute options video What's something cool you're working on? Stephanie has been working on GCP Support Shorts. Hosts Stephanie Wong and Brian Dorsey
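For a feel of what the episode covers, here is a minimal HTTP function using the open-source Functions Framework for Python. The deploy command in the comment is only a sketch of how the 2nd gen concurrency feature discussed above is enabled; flag names and values should be checked against current docs.

```python
# Minimal HTTP function for Cloud Functions, written against the Python
# Functions Framework. A 2nd gen deploy might look roughly like this
# (flag names and values are a sketch, not authoritative):
#
#   gcloud functions deploy hello --gen2 --runtime=python310 \
#       --trigger-http --concurrency=80
#
# With concurrency above 1 (a 2nd gen capability), a single instance serves
# many requests at once, which is where much of the cost benefit comes from.
import functions_framework

@functions_framework.http
def hello(request):
    name = request.args.get("name", "world")
    return f"Hello, {name}!"
```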

Screaming in the Cloud
Cloud Security and Cost with Anton Chuvakin

Screaming in the Cloud

Play Episode Listen Later Aug 2, 2022 35:47


About AntonDr. Anton Chuvakin is now involved with security solution strategy at Google Cloud, where he arrived via Chronicle Security (an Alphabet company) acquisition in July 2019.Anton was, until recently, a Research Vice President and Distinguished Analyst at Gartner for Technical Professionals (GTP) Security and Risk Management Strategies team. (see chuvakin.org for more)Links Referenced: Google Cloud: https://cloud.google.com/ Cloud Security Podcast: https://cloud.withgoogle.com/cloudsecurity/podcast/ Twitter: https://twitter.com/anton_chuvakin Medium blog: https://medium.com/anton.chuvakin TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friend EnterpriseDB. EnterpriseDB has been powering enterprise applications with PostgreSQL for 15 years. And now EnterpriseDB has you covered wherever you deploy PostgreSQL on-premises, private cloud, and they just announced a fully-managed service on AWS and Azure called BigAnimal, all one word. Don't leave managing your database to your cloud vendor because they're too busy launching another half-dozen managed databases to focus on any one of them that they didn't build themselves. Instead, work with the experts over at EnterpriseDB. They can save you time and money, they can even help you migrate legacy applications—including Oracle—to the cloud. To learn more, try BigAnimal for free. Go to biganimal.com/snark, and tell them Corey sent you.Corey: Let's face it, on-call firefighting at 2am is stressful! So there's good news and there's bad news. The bad news is that you probably can't prevent incidents from happening, but the good news is that incident.io makes incidents less stressful and a lot more valuable. incident.io is a Slack-native incident management platform that allows you to automate incident processes, focus on fixing the issues and learn from incident insights to improve site reliability and fix your vulnerabilities. Try incident.io, recover faster and sleep more.Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. My guest today is Anton Chuvakin, who is a Security Strategy Something at Google Cloud. And I absolutely love the title, given, honestly, how anti-corporate it is in so many different ways. Anton, first, thank you for joining me.Anton: Sure. Thanks for inviting me.Corey: So, you wound up working somewhere else—according to LinkedIn—for two months, which in LinkedIn time is about 20 minutes because their date math is always weird. And then you wound up going—according to LinkedIn, of course—leaving and going to Google. Now, that was an acquisition if I'm not mistaken, correct?Anton: That's correct, yes. And it kind of explains that timing in a little bit of a title story because my original title was Head of Security Solution Strategy, and it was for a startup called Chronicle. And within actually three weeks, if I recall correctly, I was acquired into Google. 
So, the title really made little sense at Google, so I kind of go with, like, random titles that include the word security, and occasionally strategy if I feel generous. Corey: It's pretty clear the fastest way to get hired at Google, given their famous interview process, is to just get acquired. Like, 'I'm going to start a company and raise it to, like, a little bit of prominence, and then do an acquihire because that will be faster than going through the loop, and ideally, there will be less algorithm solving on whiteboards.' But I have to ask, did you have to solve algorithms on whiteboards for your role? Anton: Actually, no, but it did come close to that for some other people who were seen as non-technical and had to join technical roles. I think they were forced to solve coding questions and stuff, but I was somehow grandfathered into a technical role. I don't know exactly how it happened. Corey: Yeah, how you wound up in a technical role. Let's be clear, you are Doctor Anton Chuvakin, and you have written multiple books, you were a research VP at Gartner for many years, and once upon a time, that was sort of a punchline in the circles I hung out with, and then I figured out what Gartner actually does. And okay, that actually is something fairly impressive, let's be clear here. Even as someone who categorically defines himself as not an analyst, I find myself increasingly having a lot of respect for the folks who are actually analysts and the laborious amount of work that they do that remarkably few people understand. Anton: That's correct. And I don't want to boost my ego too much. It's kind of big enough already, obviously, but I actually made it all the way to Distinguished Analyst, which is the next rank after VP. Corey: Ah, my apologies. I did not realize it. This [challenges 00:02:53] the internal structure. Anton: [laugh]. Yeah. Corey: It's like, 'Oh, I went from Senior to Staff,' or Staff to Senior because I'm external; I don't know the direction these things go in. It almost feels like a half-step away from oh, I went from [SDE3 to SDE4 00:03:02]. It's like, what do those things mean? Nobody knows. Great. Anton: And what's the top? Is it 17 or is it 113? [laugh]. Corey: Exactly. It's like, oh okay, so you're Research VP—or various kinds of VPs—the real question is, how many people have to die before you're the president? And it turns out that that's not how companies think. Who knew? Anton: That's correct. And I think Gartner was a lot of hard work. And it's the type of work that a lot of people actually don't understand. Some people understand it wrong, and some people understand it wrong, kind of, for corrupt reasons. So, for example, a lot of Gartner machinery involves soaking up insight from the outside world, organizing it, packaging it, writing it, and then giving it as advice to other people. So, there's nothing offensive about that because there is a lot of insight in the outside world, and somebody needs to be a sponge slash filter slash enrichment facility for that insight. And that, to me, is a good analyst firm, like Gartner. Corey: Yeah. It's a very interesting world. But you historically have been doing a lot of, well, let's see, I don't even know how to properly describe it, because Gartner's clientele historically has not been startups because, let's face it, Gartner is relatively expensive. And let's be clear, you're at Google Cloud now, which is a different kind of expensive, but in a way that works for startups, so good for you; gold star.
But what was interesting there is that the majority of the Gartner clientele that I've spoken to tend to be big-E Enterprise, which runs legacy businesses, which is a condescending engineering term for 'it makes money.' And they had the temerity to start their company more than 15 years ago, so they built data centers and did things in a data center environment, and now they're moving in a cloudy direction. Your emphasis has always been on security, so my question for you to start with all this is: where do you see security vendors fitting in? Because when I walk the RSA expo hall and find myself growing increasingly depressed, it seems like an awful lot of what vendors are selling looks very little removed from, 'We took a box, now we shoved it in a virtual machine, and here you go; it's in your cloud environment. Please pay us money.' The end. And it feels, if I'm looking at this from a pure cloud-native, how-I-would-build-things-in-the-cloud-from-scratch perspective, to be the wrong design. Where do you stand on it? Anton: So, this has been one of the agonizing questions. So, I'm going to kind of ignore some of the context. Of course, I'll come back to it later, but want to kind of frame it— Corey: I love ignoring context. My favorite thing; it's what makes me a decent engineer some days. Anton: So, the frame was this. One of the more agonizing questions for me as an analyst was: a client calls me and says, 'We want to do X.' Deep in my heart, I know that X is absolutely wrong; however, given their circumstances and how they got to deciding to do X, X is perhaps the only thing they can logically do. So, do you tell them, 'Don't do X; X is bad,' or do you tell them, 'Here's how you do X in a manner that aligns with your goals, that's possible, that's whatever'? So, cloud comes up a lot in this case. Somebody comes and says, I want to put my on-premise security information management tool, or SIM, in the cloud. And deep in my heart, I say, 'No, get a cloud-native tool.' But I tell them, 'Okay, actually, here's how you do it in a less painful manner.' So, this is always hard. Do you tell them they're on their own path, but you help them tread their own path with the least pain? So, as an analyst, I agonized over that. This was almost like a moral decision. What do I tell them? Corey: It makes sense. It's a microcosm of the architect's dilemma, on some level, because if you ask a typical Google-style interview whiteboard question, one of my favorites in years past was 'build a URL shortener.' Great. And you can scale it out and turn it into different things and design things on the whiteboard, and that's great. Most mid-level people can wind up building a passable design for most things in a cloud sense, when you're starting from scratch. That's not hard. The problem is that the real world is messy and doesn't fit on a whiteboard. And when you're talking about taking a thing that exists in a certain state—for whatever reason, that's the state that it's in—and migrating it to a new environment or a new way of operating, there are so many assumptions that have to break, and in most cases, you don't get the luxury of just taking the thing down for 18 months so you can rework it. And even then, it's never as easy as people think it is, so it's going to be 36. Great. You have to wind up meeting people where they are as they're contextualizing these things. And I always feel like the first step of the cloud migration has been to improve your data center environment at the cost of worsening your cloud environment.
And that's okay. We don't all need to be the absolute vanguard of how everything should be built and pushing the bleeding edge. You're an insurance company, for God's sake. Maybe that's not where you want to spend your innovation energies. Anton: Yeah. And that's why I tend to lean towards helping them get out of this situation, or maybe build a five-step roadmap of how to become a little bit more cloud-native, rather than tell them, 'You're wrong. You should just rewrite the app in a cloud-native way.' That advice almost never actually works in the real world. So, I see a lot of the security people move their security stacks to the cloud. And if I see this, deep in my heart I say, 'Holy cow. What do you mean, you want to IDS every packet between cloud instances? You want to capture every packet in cloud instances? Why? It's all encrypted anyway.' But I don't say that. I say, 'Okay, I see how this is the first step for you. Let's describe the next seven steps.' Corey: The problem I keep smacking into is that very often folks who are pushing a lot of these solutions are, yes, they're meeting customers where they are, and that makes an awful lot of sense; I'm not saying that there's anything inherently wrong about that. The challenge is it also feels, on the high end, when those customers start to evolve and transform, that those vendors act as a drag. Because if you wind up going in a full-on cloud-native approach, in the fullness of time, there's an entire swath of security vendors that do not have anything left to sell you. Anton: Yes, that is correct. And I think that—I had a fight with an EDR vendor, an Endpoint Detection and Response vendor, one day when they said, 'Oh, we're going to be XDR and we'll do cloud.' And I told them, 'You do realize that in a true cloud-native environment, there's no E? There is no endpoint the way you understand it. There is no OS. There is no server. And 99% of your IP isn't working on the clients and servers. How are you going to secure a cloud again?' And I get some kind of rambling answer from them, but the point is that you're right: I do see a lot of vendors that meet clients where they are during their first step in the cloud, and then they may become a drag, or the customer has to switch to a cloud-native vendor, or to both sometimes, and pay into two mouths. Well, shove money into two pockets. Corey: Well, first, I just want to interject for a second here because when I was walking the RSA expo floor, there were something like 15 different vendors that were trying to sell me XDR. Not a single one of them bothered to expand the acronym— Anton: Just 15? You missed half of them. Corey: Well, yeah— Anton: Holy cow. Corey: As far as I know, XDR is a cable. It's an audio thing, right? I already have a bunch of those for my microphone. What's the deal here? Like, 'I believe that's XLR.' It's like, 'I believe you should expand your acronyms.' What is XDR? Anton: So, this is where I'm going to be very self-serving and point to a blog that I've written that says we don't know what XDR is. And I'm going to— Corey: Well, but rather than a spiritual meaning, I'm going to ask, what does the acronym stand for? I don't actually know the answer to that. Anton: Extended Detection and Response. Corey: Ah. Anton: Extended Detection and Response. But the word 'extended' is extended by everybody in different directions. There are multiple camps of opinion. Gartner argues with Forrester.
If they ever had a pillow fight, it would look really ugly because they just don't agree on what XDR is. Many vendors don't agree with many other vendors, so at this point, if you corner me and say, 'Anton, commit to a definition of XDR,' I would not. I will just say, 'TBD. Wait two years.' We don't have a consensus definition of XDR at this point. And RSA notwithstanding, 30 booths with XDR on their big signs… still, sorry, I don't have it. Corey: The problem that I keep running into again and again and again has been pretty consistently that there are vendors willing to help customers in a very certain position, and for those customers, those vendors are spot-on the right thing to do. Anton: Mmm, yep. Corey: But then they try to expand, and instead of realizing that the market has moved on and the market that they're serving is inherently limited and long-term is going to be in decline, they instead start trying to fight the tide, saying, 'Oh, no, no, no, no. Those new cloud things, can't trust them.' And they start out with the FUD, the Fear, Uncertainty, and Doubt marketing model, where, 'You can't trust those newfangled cloud things. You should have everything on-prem,' ignoring entirely the fact that in their existing data centers, half the time the security team forgets to lock the door. Anton: Yeah, yeah. Corey: It just feels like there is so much conflict of interest in the space. I mean, that's the reason I started my Thursday Last Week in AWS newsletter that does security round-ups, just because everything else I found was largely either community-driven, where it understood that it was an InfoSec community thing—and the InfoSec community is generally toxic—or it was vendor-captured. And I wanted a round-up of things that I had to care about running an infrastructure, but security is not in my job title, even if the word something is or is not there. It's—I have a job to do that isn't security full time; what do I need to know? And that felt like an underserved market, and I feel like there's no equivalent of that in the world of the emerging cloud security space. Anton: Yes, I think so. But it has a high chance of also being kind of captured by legacy vendors. So, when I was at Gartner, there were a lot of acronyms being made that started with a C: Cloud. There was CSPM, there was CWPP, and after I left, they coined CNAPP, with a double P at the end. Cloud-Native Application Protection Platform. And you know, in my time at Gartner, five-letter acronyms were definitely not very popular. Like, you shouldn't have done a five-letter acronym if you could help it. So, my point is that a lot of these vendors are more from the legacy vendor side. They were not born in the cloud. They were born in the 1990s. Some are born in the cloud, but it's a mix. So, the same acronym may apply to a vendor that's from 2019, or—wait for it—1989. Corey: That is… well, I'd say on the one hand, it's terrifying, but on the other, it's not that far removed from the founding of Google. Anton: True, true. Well, '89, kind of, it's another ten years. I think that if you're from the '90s, maybe you're okay, but if you're from the '80s… you really need to have superpowers of adaptation. Again, it's possible. Funny aside: at Gartner, I met somebody who was an analyst for 32 years. So, he was, I think, at Gartner for 32 years. And how do you keep your knowledge current if you are always in an ivory tower? The point is that this person did do that because he had a unique ability to absorb knowledge from the outside world.
You can adapt; it's just hard. Corey: It always is. I'm going to pivot a bit and put you in a little bit of a hot seat here. Not intentionally so. But it is something that I've been really kicking around for a while. And I'm going to basically focus on Google because that's where you work. Yeah, I want you to go and mouth off about other cloud companies. Yeah, that's— Anton: [laugh]. No. Corey: Going to go super well and no one will have a problem with that. No, it's… we'll pick on Google for a minute because Google Cloud offers a whole bunch of services. I think it's directionally the right number of services because there are areas that you folks do not view as a core competency, and you actually—imagine that—partner with third parties to wind up delivering something great rather than building this shitty knockoff version that no one actually wants. Ahem. I might be subtweeting someone here with this, only out loud. Anton: [laugh]. Corey: The thing that resonates with me, though, is that you do charge for a variety of security services. My perspective, by and large, is that the cloud vendors should not be viewing security as a profit center but rather as something that comes baked into the platform that winds up being amortized into the cost of everything else, just because otherwise you wind up with such a perverse set of incentives. Does that sound ridiculous, or is that something that aligns with your way of thinking? I'm willing to take criticism that I'm wrong on this, too. Anton: Yeah. It's not that. It's… I almost start to see some kind of a magic quadrant in my mind that kind of categorizes some things— Corey: Careful, that's trademarked. Anton: Uhh, okay. So, some kind of vis— Corey: It's a mystical quadrilateral. Anton: Some kind of visual depiction, perhaps including four parts—not quadrants, mind you—that is focused on things that should be paid for and aren't, things that should be paid for and are paid for, and whatever else. So, the point is that if you're charging for encryption, like basic encryption, you're probably making a mistake. And we don't, and other people, I think, don't as well. If you're charging for logging, then it's probably also wrong—because charging for log retention, keeping logs, perhaps is okay because ultimately you're spending resources on this—but charging for logging, to me, is kind of in the vile territory. But how about charging for a tool that helps you secure your on-premise environment? That's fair game, right? Corey: Right. If it's something you're taking to another provider, I think that's absolutely fair. But the idea—and again, I'm okay with the reality of, 'Okay, here's our object storage costs for things, and by the way, when you wind up logging things, yeah, we'll charge you directionally what it costs to store that in an object store,' that's great, but I don't have the Google Cloud price list shoved into my head, but I know over in AWS-land that CloudWatch Logs charges 50 cents per gigabyte for ingestion. And the defense is, 'Well, that's a lot less expensive than most other logging vendors out there.' It's, yeah, but it's still horrifying, and at scale, it makes me want to do some terrifying things like I used to, which is build out a cluster of Rsyslog boxes and wind up having everything logged to those because I don't have an unbounded growth problem. This gets worse with audit logs because there's no alternative available for this.
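A quick bit of arithmetic shows why that per-gigabyte figure pushes people toward running their own log pipeline at scale; the price is the one quoted above, and the daily volume is hypothetical.

```python
ingest_price_per_gb = 0.50  # the CloudWatch Logs per-GB figure quoted above
daily_volume_gb = 2_000     # hypothetical volume for a busy fleet

monthly_cost = ingest_price_per_gb * daily_volume_gb * 30
print(f"~${monthly_cost:,.0f}/month just to ingest logs")  # ~$30,000, before storage
```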
And when companies start charging for that, either on a data plane or a management plane level, that starts to get really, really murky because you can get visibility into what happened and reconstruct things after the fact, but only if you pay. And that bugs me.Anton: That would bug me as well. And I think these are things that I would very clearly push into the box of this is security that you should not charge for. But authentication is free. But, like, deeper analysis of authentication patterns, perhaps costs money. This to me is in the fair game territory because you may have logs, you may have reports, but what if you want some kind of fancy ML that analyzes the logs and gives you some insights? I don't think that's offensive to charge for that.Corey: I come bearing ill tidings. Developers are responsible for more than ever these days. Not just the code that they write, but also the containers and the cloud infrastructure that their apps run on. Because serverless means it's still somebody's problem. And a big part of that responsibility is app security from code to cloud. And that's where our friend Snyk comes in. Snyk is a frictionless security platform that meets developers where they are - Finding and fixing vulnerabilities right from the CLI, IDEs, Repos, and Pipelines. Snyk integrates seamlessly with AWS offerings like code pipeline, EKS, ECR, and more! As well as things you're actually likely to be using. Deploy on AWS, secure with Snyk. Learn more at Snyk.co/scream That's S-N-Y-K.co/screamCorey: I think it comes down to what you're doing with it. Like, the baseline primitives, the things that no one else is going to be in a position to do because honestly, if I can get logging and audit data out of your control plane, you have a different kind of security problem, and—Anton: [laugh].Corey: That is a giant screaming fire in the building, as it should be. The other side of it, though, is that if we take a look at how much all of this stuff can cost, and if you start charging for things that are competitive to other log analytics tools, great because at that point, we're talking about options. I mean, I'd like to see, in an ideal world, that you don't charge massive amounts of money for egress but ingress is free. I'd like to see that normalized a bit.But yeah, okay, great. Here's the data; now I can run whatever analytics tools I want on it and then you're effectively competing on a level playing field, as opposed to, like, okay, this other analytics tool is better, but it'll cost me over ten times as much to migrate to it, so is it ten times better? Probably not; few things are, so I guess I'm sticking with the stuff that you're offering. It feels like the cloud provider security tools never quite hit the same sweet spot that third-party vendors tend to as far as usability, being able to display things in a way that aligns with various stakeholders at those companies. But it still feels like a cash grab and I have to imagine without having insight into internal costing structures, that the security services themselves are not a significant revenue driver for any of the cloud companies. And the rare times where they are is almost certainly some horrifying misconfiguration that should be fixed.Anton: That's fair, but so to me, it still fits into the bucket of some things you shouldn't charge for and most people don't. There is a bucket of things that you should not charge for, but some people do. 
And there's a bucket of things where it's absolutely fair to charge. I don't know the amounts, I'm not a pricing person, but I've also seen things that very clearly have cost to a provider, have value to a client, have margins, so it's very clear it's a product; it's not just a feature of the cloud to be more secure. But you're right, if somebody positions it as, 'I got cloud. Hey, give me secure cloud. It costs double,' I'd be really offended because, like, what, is your first cloud, like, broken and insecure? Yeah. Replace insecure with broken. Why are you selling broken to me? Corey: Right. You try to spin up a service in Google Cloud and it's like, 'Great. Do you want the secure version or the shitty one?' Anton: Yeah, exactly. Corey: Guess which one of those costs more. It's… yeah, in the fullness of time, of course, the shitty one costs more because you find out about security breaches on the front page of The New York Times, and no one's happy, except maybe The Times. But the problem that you hit is that I don't know how to fix that. I think there's an opportunity there for some provider—any provider, please—to be a trendsetter and say, 'Yeah, we don't charge for security services on our own stuff just because we believe that should be something that is baked in.' Like, that becomes the narrative of the secure cloud. Anton: What about tiers? What about some kind of good, better, best, or bronze, gold, platinum, where you have reasonable security, but if you want superior security, you pay money? How do you feel, what's your gut feel on this approach? Like, I can't think of an example—log analysis. You're going to get some analytics and you're going to get fancy ML. Fancy ML costs money; yay, nay? Corey: You're bringing up an actually really interesting point because I think I'm conflating too many personas at once. Right now, just pulling up last month's bill on Google Cloud: it fits in the free tier, but my Cloud Run bill was 13 cents for the month because that's what runs my snark.cloud URL shortener. And it's great. And I wound up with—I think my virtual machine costs a dozen times that much. I don't care. Over in AWS-land, I was building out a serverless nonsense thing, my Last Tweet In AWS client, and that cost a few pennies a month all told, plus a whopping 50 cents for a DNS zone. Whatever. But because I was deploying it to all regions, and the way that Config rule evaluations work, my AWS Config bill for that was 16 bucks. Now, I don't actually care about the dollar figures on this. I assure you, you could put zeros on the end of that for days and it doesn't really move the needle on my business until you get to a very certain number there, and then suddenly, I care a lot. Anton: [laugh]. Yeah. Corey: And for large enterprises, this is expected because even the sheer cost of people's time to go through these things is valuable. What I'm thinking of is almost a hobby-level side project instead, where I'm a student, and I'm learning this in a dorm room or in a bootcamp or in my off hours, or I'm a career switcher and I'm doing this on my own dime, out of hours. And I wind up getting smacked with the bill for security services that, for a company, don't even slightly matter. But for me, they matter, so I'm not going to enable them.
And when I transition into the workforce and go somewhere, I'm going to continue to work the same way that I did when I was an independent learner. Like, having a wildly generous free tier for small-scale accounts, like, even taking the perspective of: until you wind up costing, I don't know, five, ten—whatever it is—thousand dollars a month, none of the security stuff is going to be billable for you, because it is not aimed at you and we want you comfortable with and using these things. This is a whole deep dive into the weeds of economics and price-driven behavior and all kinds of other nonsense, but every time I wind up seeing that, like, in my actual production account over in AWS-land for The Duckbill Group, all things wrapped up, it's something like 1100 bucks a month. And over a third of it is monitoring, audit, and observability services, and a few security things as well. And on the one hand, I'm sitting here going, 'I don't see that kind of value coming from it.' Now, the day there's an incident and I have to look into this, yeah, it's absolutely going to be worth having, but it's insurance. But it feels like a disproportionate percentage of it. And maybe I'm just sitting here whining and grousing and I sound like a freeloader who doesn't want to pay for things, but it's one of those areas where I would gladly pay more for just having this be part of the cost and not complain at all about it. Anton: Well, if somebody sells me a thing that costs $1, and then they say, 'Want to make it secure?' I say yes, but I'm already suspicious, and they say, 'Then it's going to be 16 bucks.' I'd really freak out because, like, there are certain percentages, certain ratios of the actual thing plus security, or a secure version of it; 16x is not the answer I expect. 30% is probably still not the answer I expect, frankly. I don't know. This is, like, an ROI question [crosstalk 00:23:46]— Corey: Let's also be clear; my usage pattern is really weird. You take a look at most large companies at significant scale: their cloud environments, from a billing perspective, look an awful lot like a crap ton of instances—or possibly containers running—and a smattering of other things. Yeah, you've also got database and storage being the other two tiers, and because of… reasons, data transfer loves to show up too, but by and large, everything else was more or less a rounding error. I have remarkably few of those things, just given the weird way that I use services inappropriately, but that is the nature of me, so don't necessarily take that as being gospel. Like, 'Oh, you'll spend a third of your bill.' Like, I've talked to analyst types previously—not you, of course—who will hear a story like this, and that suddenly winds up as a headline in some report somewhere. And it's, 'Yeah, if your entire compute is based on Lambda functions and you get no traffic, yeah, you're going to see some weird distortions in your bill. Welcome to the conversation.' But it's a problem that I think is going to have to be addressed at some point, especially, as we talked about earlier, with those vendors who are catering to customers who were not born in the cloud. And as they start to see their business erode as the cloud-native way of doing things continues to accelerate, I feel like we're in for a time where they're going to be coming at the cloud providers and smacking them for this way harder than I am with my, 'As a customer, wouldn't it be nice to have this?' They're going to turn this into something monstrous. And that's what it takes, that's what it takes.
But… yeah. Anton: It will take more time than we think, I think, because again, back in the Gartner days, I loved to make predictions. And sometimes—I've learned that predictions end up coming true if you're good, but much later. Corey: I'm learning that myself. I'm about two years away from the end of it because three years ago, I said five years from now, nobody will care about Kubernetes. And I didn't mean it was going to go away, but I meant that it would slip below the surface level of awareness to the point where most people didn't have to think about it in the same way. And I know it's going to happen because it's too complex now, and it's going to be something that just gets handled in the same way that Linux kernels do today, but I think I was aggressive on the timeline. And to be clear, I've been misquoted as, 'Oh, I don't think Kubernetes is going to be relevant.' It is; it's just going to not be something that you need to spend a quarter million bucks an engineer on to run in production safely. Anton: Yeah. Corey: So, we'll see. I'm curious. One other question I had for you while I've got you here is you run a podcast of your own: the Cloud Security Podcast, if I'm not mistaken, which is— Anton: Sadly, you are not. [laugh]. Corey: —the Cloud Se—yeah. Interesting name on that one, yeah. It's like, what, the Cloud Podcast was taken? Anton: Essentially, we had a really cool name [Weather Insecurity 00:26:14]. But the naming team here said, you must be as descriptive as everybody else at Google, and we ended up with the name Cloud Security Podcast. Very, very original. Corey: Naming is challenging. I still maintain that the company was renamed Alphabet just so it could appear before Amazon in the yellow pages, but I don't know how accurate that one actually is. Yeah, to be clear, I'm not dunking on your personal fun podcast, for those without context. This is a corporate Google Cloud podcast, and if you want to make the argument that I'm punching down by making fun of Google, please, I welcome that debate. Anton: [laugh]. Yes. Corey: I can't acquire companies as a shortcut to hire people. Yet. I'm sure it'll happen someday, but I can aspire to that level of budgetary control. So, what are you up to these days? You spent seven years at Gartner, and now you're doing a lot of cloud security… I'll call it storytelling, and I want to be clear that I mean that as a compliment, not the, 'Oh, you just tell stories rather than build things?' Anton: [laugh]. Corey: Yeah, it turns out that you have to give people a reason to care about what you've built or you don't have your job for very long. What are you talking about these days? What narratives are you looking at going forward? Anton: So, one of the things that I've been obsessed with lately is that a lot of people from more traditional companies come into the cloud with their traditional on-premise knowledge, and they're trying to do cloud the on-premise way. On our podcast, we do dedicate quite some airtime to people who do cloud as if it were a rented data center, and sometimes we say the opposite is called—we don't say cloud-native, I think; we say you're doing the cloud the cloudy way. So, if you do cloud the cloudy way, you're probably doing it right. But if you're doing the cloud as a rented data center, where you copy a security stack, you lift and shift your IDS, and your network capture devices, and your firewalls, and your SIM, you maybe are okay, as a first step.
People here used to be a little bit more enraged about it, but to me, we meet customers where they are, but we need to journey with them. Because if all you do is copy your stack—security stack—from a data center to the cloud, you are losing effectiveness, you're spending money, and you're making other mistakes. I sometimes joke that you copy mistakes, not just practices. Why copy on-prem mistakes to the cloud? So, that's been bugging me quite a bit, and I'm trying to tell stories to guide people out of a situation. Not away, but out. Corey: A lot of people don't go for the idea of the lift-and-shift migration, and they say that it's a terrible pattern and it causes all kinds of problems. And they're right. The counterpoint is that it's basically the second-worst approach, and everything else seems to tie itself for first place. I don't mean to sound like I'm trying to pick a fight on these things, but: we're going to rebuild an application while we move it. Great. Then it doesn't work or, worse, works intermittently, and you have no idea whether it's the rewrite, the cloud provider, or something else you haven't considered. It just sounds like a recipe for disaster. Anton: For sure. And so, imagine that you're moving the app, you're doing a cut-and-paste of the application to the cloud, and then you cut-and-paste security, and then you end up with sizeable storage costs, possibly egress costs, possibly mistakes you used to make behind five firewalls that you now make straight on the edge. Well, not on the edge edge, but on the edge of the public internet. So, some of the mistakes do become worse when you copy them from the data center to the cloud. So, we do need to, kind of, help people get out of the situation, but not by telling them don't do it, because they will do it. We need to tell them what's step B; what's step 1.5 out of this? Corey: And cost doesn't drive it and security doesn't drive it. Those are trailing functions. It has to be a capability story. It has to be about improving feature velocity or it does not get done. I have learned this the painful way. Anton: What about 10x cost? If you do something in the data-center-ish way in the cloud and you're ten times more expensive, cost will drive it. Corey: To an extent, yes. However, the problem is that companies are looking at this from the perspective of: okay, we can cut our costs by 90% if we make these changes. Okay, great. It cuts the cloud infrastructure cost that way. What is the engineering time, what is the opportunity cost that gets baked into that, and what are the other strategic priorities that team has been tasked with this year? It has to go along for the ride with a redesign that unlocks additional capability, because a pure cost-savings play is something I have almost never found to be an argument that carries the day. There are always exceptions, to be clear, but the general case I've found is that when companies get really focused on cost-cutting rather than expanding into new markets, on some level, it feels like they are not in the best of health, corporately speaking. I mean, there's a reason I'm talking about cost optimization for what I do and not cost-cutting. It's not about lowering the bill to zero at all cost. 'Cool. Turn everything off. Your bill drops to zero.' 'Oh, you don't have a company anymore? Okay, so there's a constraint. Let's talk more about that.' Companies are optimized to increase revenue as opposed to reduce costs.
And engineers are always more expensive than the cloud provider resources they're using, unless you've done something horrifying. Anton: And some people did, by replicating their mistakes from their inefficient data centers straight into the cloud, occasionally, yeah. But you're right, yeah. It costs the—we had the same pattern at Gartner. It's like, it's not about doing it cheaper in the cloud. Corey: I really want to thank you for spending so much time talking to me. If people want to learn more about what you're up to, how you view the world, and what you're up to next, where's the best place for them to find you? Anton: At this point, it's probably easiest to find me on Twitter. I was about to say the podcast, I was about to say my Medium blog, but frankly, all of it kind of goes into Twitter at some point. And so, I think I am twitter.com/anton_chuvakin, if I recall correctly. Sorry, I haven't really— Corey: You are indeed. It's always great; it's one of those things where you have a sizable audience, and you're like, 'What is my Twitter handle, again? That's a good question. I don't know.' And it's your name. Great. Cool. 'So, you're going to spell that for us, too, while you're at it?' We will, of course, put a link to that in the [show notes 00:32:09]. I really want to thank you for being so generous with your time. I appreciate it. Anton: Perfect. Thank you. It was fun. Corey: Anton Chuvakin, Security Strategy Something at Google Cloud. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment because people are doing it wrong, but also tell me which legacy vendor you work for. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started. Announcer: This has been a HumblePod production. Stay humble.

Google Cloud Platform Podcast
Managed Service for Prometheus with Lee Yanco and Ashish Kumar

Google Cloud Platform Podcast

Play Episode Listen Later Jul 20, 2022 37:25


Hosts Carter Morgan and Anthony Bushong are in the studio this week! We're talking about Prometheus with guests Lee Yanco and Ashish Kumar and learning about the build process for Google Cloud's Managed Service for Prometheus and how Home Depot uses this tool to power their business. To begin with, Lee helps us understand what Managed Service for Prometheus is. Prometheus, a popular monitoring solution for Kubernetes, lets you know that your project is up and running, and in the event of a failure, Prometheus lets you know what happened. But as Kubernetes projects scale and spread across the globe, Prometheus becomes a challenge to manage, and that's where Google Cloud's Managed Service for Prometheus comes in. Lee describes why Prometheus is so great for Kubernetes, and Ashish talks about how CNCF's involvement helps open source tools integrate easily. With the help of Monarch, Google's Managed Service stands above the competition, and Lee explains what Monarch is and how it works with Prometheus to benefit users. Ashish talks about Home Depot's use of Google Cloud and the Managed Service for Prometheus, and how Home Depot's multiple data centers make data monitoring both trickier and more important. With Google Cloud, Home Depot is able to easily ensure everything is healthy and running across data centers, around the world, at an immense scale. He describes how Home Depot uses Managed Service for Prometheus in each of these data center environments from the point of view of a developer and talks about how easy Prometheus and the Managed Service are to integrate and use. Lee and Ashish wrap up the show with a look at how Home Depot and Google have worked together to create and adjust tools for increased efficiency. In the future, tighter integration into the rest of Google Cloud's suite of products is the focus. Lee Yanco Lee Yanco is the Product Management lead for Google Cloud Managed Service for Prometheus. He also works on Monarch, Google's planet-scale in-memory time series database, and on Cloud Monitoring's Kubernetes observability experience. Ashish Kumar Ashish Kumar is Senior Manager for Site Reliability and Production Engineering for The Home Depot. Cool things of the week Cloud Next registration is open site Introducing Parallel Steps for Workflows: Speed up workflow executions by running steps concurrently blog How to think about threat detection in the cloud blog GCP Podcast Episode 218: Chronicle Security with Dr. Anton Chuvakin and Ansh Patniak podcast Interview Prometheus site PromQL site Google Cloud Managed Service for Prometheus docs Kubernetes site CNCF site Monarch: Google's Planet-Scale In-Memory Time Series Database research Cloud Monitoring site Cloud Logging site Google Cloud's operations suite site What's something cool you're working on? Carter is focusing on getting organized, managing overwhelm, and comedy festivals. Anthony is testing a few new exciting features, working with build provenance in Cloud Build, jobs and network file systems in Cloud Run. Hosts Carter Morgan and Anthony Bushong
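As a taste of what Prometheus (managed or self-run) actually scrapes, here is a minimal app exposing a metric with the official Python client. The metric name and port are arbitrary choices, not anything specific to the managed service.

```python
# Minimal Prometheus instrumentation: a counter exposed over HTTP for any
# Prometheus-compatible scraper to collect at /metrics.
import time
from prometheus_client import Counter, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        REQUESTS.inc()       # stand-in for real work
        time.sleep(1)
```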

Screaming in the Cloud
Google Cloud Run, Satisfaction, and Scalability with Steren Giannini

Screaming in the Cloud

Play Episode Listen Later Jun 23, 2022 37:01


Full Description / Show Notes
• Steren and Corey talk about how Google Cloud Run got its name (00:49)
• Corey talks about his experiences using Google Cloud (2:42)
• Corey and Steren discuss Google Cloud Run's custom domains (10:01)
• Steren talks about Cloud Run's high developer satisfaction and scalability (15:54)
• Corey and Steren talk about Cloud Run releases at Google I/O (23:21)
• Steren discusses the majority of developer and customer interest in Google's cloud product (25:33)
• Steren talks about his 20% projects around sustainability (29:00)

About Steren: Steren is a Senior Product Manager at Google Cloud. He is part of the serverless team, leading Cloud Run. He is also working on sustainability, leading the Google Cloud Carbon Footprint product. Steren is an engineer from École Centrale (France). Prior to joining Google, he was CTO of a startup building connected objects and multi-device solutions.

Links Referenced:
• Google Cloud Run: https://cloud.run
• sheets-url-shortener: https://github.com/ahmetb/sheets-url-shortener
• snark.cloud/run: https://snark.cloud/run
• Twitter: https://twitter.com/steren

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined today by Steren Giannini, who is a senior product manager at Google Cloud, specifically on something called Google Cloud Run. Steren, thank you for joining me today.

Steren: Thanks for inviting me, Corey.

Corey: So, I want to start at the very beginning of, “Oh, a cloud service. What are we going to call it?” “Well, let's put the word cloud in it.” “Okay, great. Now, it is cloud, so we have to give it a vague and unassuming name. What does it do?” “It runs things.” “Genius. Let's break and go for work.” Now, it's easy to imagine that you spent all of 30 seconds on a name, but it never works that way. How easy was it to get to Cloud Run as a name for the service?

Steren: [laugh]. Such a good question because originally it was not named Cloud Run at all. The original name was Google Serverless Engine. But a few people know that because they've been helping us since the beginning, but originally it was Google Serverless Engine. Nobody liked the name internally, and I think at one point, we wondered, “Hey, can we drop the engine structure and let's just think about the name. And what does this thing do?” “It runs things.” We already have Cloud Build. Well, wouldn't it be great to have Cloud Run to pair with Cloud Build so that after you've built your containers, you can run them?
And that's how we ended up with this very simple Cloud Run, which today seems so obvious, but it took us a long time to get to that name, and we actually had a lot of renaming to do because we were about to ship with Google Serverless Engine.Corey: That seems like a very interesting last-minute change because it's not just a find and replace at that point, it's—Steren: No.Corey: —“Well, okay, if we call it Cloud Run, which can also be a verb or a noun, depending, is that going to change the meaning of some sentences?” And just doing a find and replace without a proofread pass as well, well, that's how you wind up with funny things on Twitter.Steren: API endpoints needed to be changed, adding weeks of delays to the launch. That is why we—you know, [laugh] announced in 2018 and publicly launched in 2019.Corey: I've been doing a fair bit of work in cloud for a while, and I wound up going down a very interesting path. So, the first native Google Cloud service—not things like WP Engine that ride on top of GCP—but my first native Google Cloud Service was done in service of this podcast, and it is built on Google Cloud Run. I don't think I've told you part of this story yet, but it's one of the reasons I reached out to invite you onto the show. Let me set the stage here with a little bit of backstory that might explain what the hell I'm talking about.As listeners of this show are probably aware, we have sponsors whom we love and adore. In the early days of this show, they would say, “Great, we want to tell people about our product”—which is the point of a sponsorship—“And then send them to a URL.” “Great. What's the URL?” And they would give me something that was three layers deep, then with a bunch of UTM tracking parameters at the end.And it's, “You do realize that no one is going to be sitting there typing all of that into a web browser?” At best, you're going to get three words or so. So, I built myself a URL redirector, snark.cloud. I can wind up redirecting things in there anywhere it needs to go.And for a long time, I did this on top of S3 and then put CloudFront in front of it. And this was all well and good until, you know, things happened in the fullness of time. And now holy crap, I have an operations team involved in things, and maybe I shouldn't be the only person that knows how to work on all of these bits and bobs. So, it was time to come up with something that had a business user-friendly interface that had some level of security, so I don't wind up automatically building out a spam redirect service for anything that wants to, and it needs to be something that's easy to work with. So, I went on an exploration.So, at first it showed that there were—like, I have an article out that I've spoken about before that there are, “17 Ways to Run Containers on AWS,” and then I wrote the sequel, “17 More Ways to Run Containers on AWS.” And I'm keeping a list, I'm almost to the third installation of that series, which is awful. So, great. There's got to be some ways to build some URL redirect stuff with an interface that has an admin panel. And I spent three days on this trying a bunch of different things, and some were running on deprecated versions of Node that wouldn't build properly and others were just such complex nonsense things that had got really bad. 
I was starting to consider something like just paying for Bitly or whatnot and making it someone else's problem.And then I stumbled upon something on GitHub that really was probably one of the formative things that changed my opinion of Google Cloud for the better. And within half an hour of discovering this thing, it was up and running. I did the entire thing, start to finish, from my iPad in a web browser, and it just worked. It was written by—let me make sure I get his name correct; you know, messing up someone's name is a great way to say that we don't care about them—Ahmet Balkan used to work at Google Cloud; now he's over at Twitter. And he has something up on GitHub that is just absolutely phenomenal about this, called sheets-url-shortener.And this is going to sound wild, but stick with me. The interface is simply a Google Sheet, where you have one column that has the shorthand slug—for example, run; if you go to snark.cloud/run, it will redirect to Google Cloud Run's website. And the second column is where you want it to go. The end.And whenever that gets updated, there's of course some caching issues, which means it can take up to five seconds from finishing that before it will actually work across the entire internet. And as best I can tell, that is fundamentally magic. But what made it particularly useful and magic, from my perspective, was how easy it was to get up and running. There was none of this oh, but then you have to integrate it with Google Sheets and that's a whole ‘nother team so there's no way you're going to be able to figure that out from our Docs. Go talk to them and then come back in the day.They were the get started, click here to proceed. It just worked. And it really brought back some of the magic of cloud for me in a way that I hadn't seen in quite a while. So, all which is to say, amazing service, I continue to use it for all of these sponsored links, and I am still waiting for you folks to bill me, but it fits comfortably in the free tier because it turns out that I don't have hundreds of thousands of people typing it in every week.Steren: I'm glad it went well. And you know, we measure tasks success for Cloud Run. And we do know that most new users are able to deploy their apps very quickly. And that was the case for you. Just so you know, we've put a lot of effort to make sure it was true, and I'll be glad to tell you more about all that.But for that particular service, yes, I suppose Ahmet—who I really enjoyed working with on Cloud Run, he was really helpful designing Cloud Run with us—has open-sourced this side project. And basically, you might even have clicked on a deploy to Cloud Run button on GitHub, right, to deploy it?Corey: That is exactly what I did and it somehow just worked and—Steren: Exactly.Corey: And it knew, even logging into the Google Cloud Console because it understands who I am because I use Google Docs and things, I'm already logged in. None of this, “Oh, which one of these 85 credential sets is it going to be?” Like certain other clouds. It was, “Oh, wow. Wait, cloud can be easy and fun? When did that happen?”Steren: So, what has happened when you click that deploy to Google Cloud button, basically, the GitHub repository was built into a container with Cloud Build and then was deployed to Cloud Run. And once on Cloud Run, well, hopefully, you have forgotten about it because that's what we do, right? 
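As a rough sketch of the pattern being described here (this is not Ahmet's actual implementation, and the Sheets API call is stubbed out with a hypothetical fetchRedirects), the whole service reduces to a lookup table with a short-lived cache in front of an HTTP redirect:

```go
package main

import (
	"log"
	"net/http"
	"strings"
	"sync"
	"time"
)

// fetchRedirects stands in for a read of the spreadsheet: column one is
// the slug, column two the destination URL. Stubbed so the sketch runs
// on its own.
func fetchRedirects() map[string]string {
	return map[string]string{"run": "https://cloud.run"}
}

var (
	mu        sync.Mutex
	cache     map[string]string
	fetchedAt time.Time
)

// lookup refreshes the cache after a short TTL, so an edit to the sheet
// is live within seconds.
func lookup(slug string) (string, bool) {
	mu.Lock()
	defer mu.Unlock()
	if cache == nil || time.Since(fetchedAt) > 5*time.Second {
		cache = fetchRedirects()
		fetchedAt = time.Now()
	}
	dest, ok := cache[slug]
	return dest, ok
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		slug := strings.ToLower(strings.Trim(r.URL.Path, "/"))
		if dest, ok := lookup(slug); ok {
			// A 302 rather than a 301 keeps destinations changeable
			// without fighting browser caches.
			http.Redirect(w, r, dest, http.StatusFound)
			return
		}
		http.NotFound(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```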
We—give us your code, in a container if you know containers if you don't just—we support, you know, many popular languages, and we know how to build them, so don't worry about that. And then we run it. And as you said, when there is low traffic or no traffic, it scales to zero.When there is low traffic, you're likely going to stay under the generous free tier. And if you have more traffic for, you know, Screaming in the Cloud suddenly becoming a high destination URL redirects, well, Cloud Run will scale the number of instances of this container to be able to handle the load. Cloud Run scales automatically and very well, but only—as always—charging you when you are processing some requests.Corey: I had to fork and make a couple of changes myself after I wound up doing some testing. The first was to make the entire thing case insensitive, which is—you know, makes obvious sense. And the other was to change the permanent redirect to a temporary redirect because believe it or not, in the fullness of time, sometimes sponsors want to change the landing page in different ways for different campaigns and that's fine by me. I just wanted to make sure people's browser cache didn't remember it into perpetuity. But it was easy enough to run—that was back in the early days of my exploring Go, which I've been doing this quarter—and in the couple of months this thing has been running it has been effectively flawless.It's set it; it's forget it. The only challenges I had with it are it was a little opaque getting a custom domain set up that—which is still in beta, to be clear—and I've heard some horror stories of people saying it got wedged. In my case, no, I deployed it and I started refreshing it and suddenly, it start throwing an SSL error. And it's like, “Oh, that's not good, but I'm going to break my own lifestyle here and be patient for ten minutes.” And sure enough, it cleared itself and everything started working. And that was the last time I had to think about any of this. And it just worked.Steren: So first, Cloud Run is HTTPS only. Why? Because it's 2020, right? It's 2022, but—Corey: [laugh].Steren: —it's launched in 2020. And so basically, we have made a decision that let's just not accept HTTP traffic; it's only HTTPS. As a consequence, we need to provision a cert for your custom domain. That is something that can take some time. And as you said, we keep it in beta or in preview because we are not yet satisfied with the experience or even the performance of Cloud Run custom domains, so we are actively working on fixing that with a different approach. So, expect some changes, hopefully, this year.Corey: I will say it does take a few seconds when people go to a snark.cloud URL for it to finish resolving, and it feels on some level like it's almost like a cold start problem. But subsequent visits, the same thing also feel a little on the slow and pokey side. And I don't know if that's just me being wildly impatient, if there's an optimization opportunity, or if that's just inherent to the platform that is not under current significant load.Steren: So, it depends. If the Cloud Run service has scaled down to zero, well of course, your service will need to be started. But what we do know, if it's a small Go binary, like something that you mentioned, it should really take less than, let's say, 500 milliseconds to go from zero to one of your container instance. Latency can also be due to the way the code is running. 
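That last point, what the container does at startup, is the crux of most cold-start complaints. A common mitigation, sketched here in Go with the slow work simulated by a sleep, is to start listening immediately and defer expensive initialization to first use; the trade-off is that the first request pays the cost instead of the deploy:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync"
	"time"
)

var (
	dataOnce sync.Once
	data     map[string]string
)

// loadData stands in for whatever is slow at boot: fetching a
// spreadsheet, warming connections, parsing config.
func loadData() {
	time.Sleep(2 * time.Second) // simulated slow fetch
	data = map[string]string{"greeting": "hello"}
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		dataOnce.Do(loadData) // first request pays; startup stays fast
		fmt.Fprintln(w, data["greeting"])
	})
	// The process is ready to accept connections almost instantly,
	// which is what the platform's cold-start clock measures.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```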
Steren: If the code is fetching things from Google Sheets at every startup, that is something that could add to the startup latency. So, I would need to take a look, but in general, we are not spinning up a virtual machine anytime we need to scale horizontally. Like, our infrastructure is a multi-tenant, rapidly scalable infrastructure that can materialize a container in literally 300 milliseconds. The rest of the latency comes from what the container does at startup time.

Corey: Yeah, I just ran a quick test of putting time in front of a curl command. It looks like it took 4.83 seconds. So, enough to be perceptible. But again, for just a quick redirect, it's generally not the end of the world and there's probably something I'm doing that is interesting and odd. Again, I did not invite you on the show to file a—

Steren: [laugh].

Corey: Bug report. Let's be very clear here.

Steren: That seems on the very high end of startup latencies. I mean, I would definitely expect under a second. We should deep-dive into the code to take a look. And by the way, building stuff on top of spreadsheets: I've done that a ton in my previous lives as a CTO of a startup because, well, that's the best administration interface, right? You just have a CRUD UI—

Corey: [unintelligible 00:12:29] world and all business users understand it. If people in Microsoft decided they were going to change the Microsoft Excel interface, even a bit, they would revert the change before noon of the same day after an army of business users grabbed pitchforks and torches and marched on their headquarters. It's one of those things that is how the world runs; it is the world's most common IDE. And it's great, but I still think of databases through the lens of thinking about them as a spreadsheet as my default approach to things. I also think of databases as DNS, but that's neither here nor there.

Steren: You know, if you have maybe 100 redirects, that's totally fine. And by the way, the beauty of Cloud Run in a spreadsheet, as you mentioned, is that Cloud Run services run with a certain identity. And this identity, you can grant it permissions. And in that case, what I would recommend, if you haven't done so yet, is to give an identity to your Cloud Run service that has the permission to read that particular spreadsheet. And how you do that: you invite the email of the service account as a reader of your spreadsheet, and that's probably what you did.

Corey: The click button to the workflow on Google Cloud automatically did that—

Steren: Oh, wow.

Corey: —and taught me how to do it. “Here's the thing to look at. The end.” It was a flawless user-onboarding experience.

Steren: Very nicely done. But indeed, you know, there is this built-in security, which is the principle of minimal permission: each of your Cloud Run services should basically only be able to read and write to the backing resources that they should. And by default, we give you a service account which has a lot of permissions, but our recommendation is to narrow those permissions to basically only look at the cloud storage buckets that the service is supposed to look at. And the same for a spreadsheet.

Corey: Yes, on some level, I feel like I'm going to write an analysis of my own security approach. It would be titled, “My God, It's Full Of Stars” as I look at the IAM policies of everything that I've configured. The idea of least privilege is great. What I like about this approach is that it made it easy to do, so I don't have to worry about it.
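Returning to that 4.83-second measurement: time in front of curl only yields a total, so it cannot say whether the delay was DNS, TLS, a cold start, or per-request work such as a sheet fetch. One way to break it down from Go's standard library, using net/http/httptrace (the URL is just the example discussed above):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"net/http/httptrace"
	"time"
)

func main() {
	start := time.Now()
	since := func() time.Duration { return time.Since(start).Round(time.Millisecond) }

	trace := &httptrace.ClientTrace{
		DNSDone:     func(httptrace.DNSDoneInfo) { fmt.Println("dns:", since()) },
		ConnectDone: func(network, addr string, err error) { fmt.Println("connect:", since()) },
		TLSHandshakeDone: func(tls.ConnectionState, error) {
			fmt.Println("tls:", since())
		},
		GotFirstResponseByte: func() { fmt.Println("first byte:", since()) },
	}

	req, err := http.NewRequest("GET", "https://snark.cloud/run", nil)
	if err != nil {
		log.Fatal(err)
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	// RoundTrip, unlike http.Get, does not follow the redirect, so the
	// timing covers exactly one request to the service itself.
	resp, err := http.DefaultTransport.RoundTrip(req)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	fmt.Println("total:", since())
}
```

If most of the time lands between connect and first byte, the instance (or its startup work) is the suspect; if it lands in DNS or TLS, the container was never the problem.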
At one point, I want to go back and wind up instrumenting it a bit further, just so I can wind up getting aggregate numbers of all right, how many times if someone visited this particular link? It'll be good to know.And I don't know… if I have to change permissions to do that yet, but that's okay. It's the best kind of problem: future Corey. So, we'll deal with that when the time comes. But across the board, this has just been a phenomenal experience and it's clear that when you were building Google Cloud Run, you understood the assignment. Because I was looking for people saying negative things about it and by and large, all of its seem to come from a perspective of, “Well, this isn't going to be the most cost-effective or best way to run something that is hyperscale, globe-spanning.”It's yes, that's the thing that Kubernetes was originally built to run and for some godforsaken reason people run their blog on it instead now. Okay. For something that is small, scales to zero, and has long periods where no one is visiting it, great, this is a terrific answer and there's absolutely nothing wrong with that. It's clear that you understood who you were aiming at, and the migration strategy to something that is a bit more, I want to say robust, but let's be clear what I mean when I'm saying that if you want something that's a little bit more impressive on your SRE resume as you're trying a multi-year project to get hired by Google or pretend you got hired by Google, yeah, you can migrate to something else in a relatively straightforward way. But that this is up, running, and works without having to think about it, and that is no small thing.Steren: So, there are two things to say here. The first is yes, indeed, we know we have high developer satisfaction. You know, we measure this—in Google Cloud, you might have seen those small satisfaction surveys popping up sometimes on the user interface, and you know, we are above 90% satisfaction score. We hire third parties to help us understand how usable and what satisfaction score would users get out of Cloud Run, and we are constantly getting very, very good results, in absolute but also compared to the competition.Now, the other thing that you said is that, you know, Cloud Run is for small things, and here while it is definitely something that allows you to be productive, something that strives for simplicity, but it also scales a lot. And contrary to other systems, you do not have any pre-provisioning to make. So, we have done demos where we go from zero to 10,000 container instances in ten seconds because of the infrastructure on which Cloud Run runs, which is fully managed and multi-tenant, we can offer you this scale on demand. And many of our biggest customers have actually not switched to something like Kubernetes after starting with Cloud Run because they value the low maintenance, the no infrastructure management that Cloud Run brings them.So, we have like Ikea, ecobee… for example ecobee, you know, the smart thermostats are using Cloud Run to ingest events from the thermostat. I think Ikea is using Cloud Run more and more for more of their websites. You know, those companies scale, right? This is not, like, scale to zero hobby project. This is actually production e-commerce and connected smart objects production systems that have made the choice of being on a fully-managed platform in order to reduce their operational overhead.[midroll 00:17:54]Corey: Let me be clear. 
When I say scale—I think we might be talking past each other on a small point here. When I say scale, I'm talking less about oh tens or hundreds of thousands of containers running concurrently. I'm talking in a more complicated way of, okay, now we have a whole bunch of different microservices talking to one another and affinity as far as location to each other for data transfer reasons. And as you start beginning to service discovery style areas of things, where we build a really complicated applications because we hired engineers and failed to properly supervise them, and that type of convoluted complex architecture.That's where it feels like Cloud Run increasingly, as you move in that direction, starts to look a little bit less like the tool of choice. Which is fine, I want to be clear on that point. The sense that I've gotten of it is a great way to get started, it's a great way to continue running a thing you don't have to think about because you have a day job that isn't infrastructure management. And it is clear to—as your needs change—to either remain with the service or pivot to a very close service without a whole lot of retooling, which is key. There's not much of a lock-in story to this, which I love.Steren: That was one of the key principles when we started to design Cloud Run was, you know, we realized the industry had agreed that the container image was the standard for the deployment artifact of software. And so, we just made the early choice of focusing on deploying containers. Of course, we are helping users build those containers, you know, we have things called build packs, we can continuously deploy from GitHub, but at the end of the day, the thing that gets auto-scaled on Cloud Run is a container. And that enables portability.As you said. You can literally run the same container, nothing proprietary in it, I want to be clear. Like, you're just listening on a port for some incoming requests. Those requests can be HTTP requests, events, you know, we have products that can push events to Cloud Run like Eventarc or Pub/Sub. And this same container, you can run it on your local machine, you can run it on Kubernetes, you can run it on another cloud. You're not locked in, in terms of API of the compute.We even went even above and beyond by having the Cloud Run API looks like a Kubernetes API. I think that was an extra effort that we made. I'm not sure people care that much, but if you look at the Cloud Run API, it is actually exactly looking like Kubernetes, Even if there is no Kubernetes at all under the hood; we just made it for portability. Because we wanted to address this concern of serverless which was lock-in. Like, when you use a Function as a Service product, you are worried that the architecture that you are going to develop around this product is going to be only working in this particular cloud provider, and you're not in control of the language, the version that this provider has decided to offer you, you're not in control of more of the complexity that can come as you want to scan this code, as you want to move this code between staging and production or test this code.So, containers are really helping with that. So, I think we made the right choice of this new artifact that to build Cloud Run around the container artifact. And you know, at the time when we launched, it was a little bit controversial because back in the day, you know, 2018, 2019, serverless really meant Functions as a Service. So, when we launched, we little bit redefined serverless. 
And we basically said serverless containers. Which at the time were two worlds that in the same sentence were incompatible. Like, many people, including internally, had concerns around—Corey: Oh, the serverless versus container war was a big thing for a while. Everyone was on a different side of that divide. It's… containers are effectively increasingly—and I know, I'll get email for this, and I don't even slightly care, they're a packaging format—Steren: Exactly.Corey: —where it solves the problem of how do I build this thing to deploy on Debian instances? And Ubuntu instances, and other instances, God forbid, Windows somewhere, you throw a container over the wall. The end. Its DevOps is about breaking down the walls between Dev and Ops. That's why containers are here to make them silos that don't have to talk to each other.Steren: A container image is a glorified zip file. Literally. You have a set of layers with files in them, and basically, we decided to adopt that artifact standard, but not the perceived complexity that existed at the time around containers. And so, we basically merged containers with serverless to make something as easy to use as a Function as a Service product but with the power of bringing your own container. And today, we are seeing—you mentioned, what kind of architecture would you use Cloud Run for?So, I would say now there are three big buckets. The obvious one is anything that is a website or an API, serving public internet traffic, like your URL redirect service, right? This is, you have an API, takes a request and returns a response. It can be a REST API, GraphQL API. We recently added support for WebSockets, which is pretty unique for a service offering to support natively WebSockets.So, what I mean natively is, my client can open a socket connection—a bi-directional socket connection—with a given instance, for up to one hour. This is pretty unique for something that is as fully managed as Cloud Run.Corey: Right. As we're recording this, we are just coming off of Google I/O, and there were a number of announcements around Cloud Run that were touching it because of, you know, strange marketing issues. I only found out that Google I/O was a thing and featured cloud stuff via Twitter at the time it was happening. What did you folks release around Cloud Run?Steren: Good question, actually. Part of the Google I/O Developer keynote, I pitched a story around how Cloud Run helps developers, and the I/O team liked the story, so we decided to include that story as part of the live developer keynote. So, on stage, we announced Cloud Run jobs. So now, I talked to you about Cloud Run services, which can be used to expose an API, but also to do, like, private microservice-to-microservice communication—because cloud services don't have to be public—and in that case, we support GRPC and, you know, a very strong security mechanism where only Service A can invoke Service B, for example, but Cloud Run jobs are about non-request-driven containers. So, today—I mean, before Google I/O a few days ago, the only requirement that we imposed on your container image was that it started to listen for requests, or events, or GRPC—Corey: Web requests—Steren: Exactly—Corey: It speaks [unintelligible 00:24:35] you want as long as it's HTTP. Yes.Steren: That was the only requirement we asked you to have on your container image. And now we've changed that. Now, if you have a container that basically starts and executes to completion, you can deploy it on a Cloud Run job. 
Steren: So, you will use Cloud Run jobs for, like, daily batch jobs. And you have the same infrastructure, so on-demand, you can go from zero to, I think for now, the maximum is a hundred tasks in parallel, for—of course, you can run many tasks in sequence, but in parallel, you can go from zero to a hundred, right away, to run your daily batch job, daily admin job, data processing. But this is more in batch mode than in streaming mode. If you would like to do more, like, streaming data processing, then a Cloud Run service would still be the best fit because you can literally push events to it, and it will auto-scale to handle any number of events that it receives.

Corey: Do you find that the majority of customers are using Cloud Run for one-off jobs that barely will get more than a single container, like my thing, or do you find that they're doing massively parallel jobs? Where's the lion's share of developer and customer interest?

Steren: It's both, actually. We have both individual developers and small startups, which really value the scale to zero and pay-per-use model of Cloud Run. Your URL redirect service probably is staying below the free tier, and there are many, many, many users in your case. But at the same time, we have big, big, big customers who value the on-demand scalability of Cloud Run. And for these customers, of course, they will probably very likely not scale to zero, but they value the fact that—you know, we have a media company who uses Cloud Run for TV streaming, and when there is a soccer game somewhere in the world, they have a big spike of requests coming in to their Cloud Run service, and here they can trust the rapid scaling of Cloud Run, so they don't have to pre-provision things in advance to be able to serve that sudden traffic spike.

But for those customers, Cloud Run is priced in a way so that if you know that you're going to consume a lot of Cloud Run CPU and memory, you can purchase Committed Use Discounts, which will lower your bill overall: if you know you are going to spend one dollar per hour on Cloud Run, well, purchase a Committed Use Discount because you will only spend 83 cents instead of one dollar. Also, Cloud Run comes with two pricing models. One, which is the default, is the request-based pricing model: you only have CPU allocated to your container instances if you are processing at least one request. But as a consequence of that, you are not paying outside of the processing of those requests. Those containers might stay up for a while, ready to receive new requests, but you're not paying for them. And so, that is—you know, your URL redirect service is probably in that mode where, yes, when you haven't used it for a while, it will scale down to zero, but if you send one request to it, it will serve that request and then it will stay up for a while until it decides to scale down. But you, the user, only pay when you are processing these specific requests, a little bit like a Function as a Service product.

Corey: Scale to zero is one of the fundamental tenets of serverless, and I think companies keep calling something serverless when it always charges you per hour anyway. Yeah, that doesn't work. Storage, let's be clear, is a separate matter entirely. I'm talking about compute. Even if your workload doesn't ever scale down to zero, that's fine, but if the workload does, you don't get to keep charging me for it.

Steren: Exactly.
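Taking Steren's numbers as a worked example: 83 cents committed against one dollar on demand is a 17 percent discount, so a service that really does burn a dollar of CPU and memory every hour, about $730 over a 730-hour month, would come down to roughly $606 with the commitment, saving around $124 a month. The exact discount depends on the commitment terms, so treat the percentages here as illustrative rather than a price sheet.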
Steren: And so, in that other mode, where you decide to always have CPU allocated to your Cloud Run container instances, you pay for the entire lifecycle of these container instances. You still benefit from the auto-scaling of Cloud Run, but you will pay for the lifecycle, and in that case, the price points are lower because you pay for a longer period of time. But that's more the pricing model that those bigger customers will take because, at their scale, they basically always receive requests, so they already pay all the time, basically.

Corey: I really want to thank you for taking the time to chat with me. Before you go, one last question that we'll be using as a teaser for the next episode that we record together. It seems like this is a full-time job being the product manager on Cloud Run, but no, Google, contrary to popular opinion, does in fact still support 20% projects. What's yours?

Steren: So, I've been lucky to work on Cloud Run since it was a prototype, and you know, for a long time, we've been iterating privately on Cloud Run, launching it, seeing it grow, seeing it adopted. It's great. It's my full-time job. But on Fridays, I still find the time to have a 20% project, which also had quite a bit of impact. I work on some sustainability efforts for Google Cloud. And notably, we've released two things last year.

The first one is that we are sharing some carbon characteristics of Google Cloud regions. So, if you have seen those small leaves in the Cloud Console next to the regions that are emitting the least carbon, that's something that I helped bring to life. And the second one, which is something quite big, is we are helping customers report and reduce the gross carbon emissions of their Google Cloud usage by providing an out-of-the-box reporting tool called Google Cloud Carbon Footprint. So, that's something that I was able to bootstrap with a team a little bit on the side of my Cloud Run project, but I was very glad to see it launched by our CEO at the last Cloud Next conference. And now it is a fully-funded project, so we are very glad that we are able to help our customers better meet their sustainability goals themselves.

Corey: And we will be talking about it significantly on the next episode. We're giving a teaser, not telling the whole story.

Steren: [laugh].

Corey: I really want to thank you for being as generous with your time as you are. If people want to learn more, where can they find you?

Steren: Well, if they want to learn more about Cloud Run, we talked about how simple that name is. It was obviously not simple to find this simple name, but the domain is https://cloud.run.

Corey: We will also accept snark.cloud/run; I will take credit for that service, too.

Steren: [laugh]. Exactly.

Corey: There we are.

Steren: And then, people can find me on Twitter at @steren, S-T-E-R-E-N. I'll be happy—I'm always happy to help developers get started or answer questions about Cloud Run. And, yeah, thank you for having me. As I said, you successfully deployed something in just a few minutes to Cloud Run. I would encourage the audience to—

Corey: In spite of myself. I know, I'm as surprised as anyone.

Steren: [laugh].

Corey: The only snag I really hit was the fact that I was riding shotgun when we picked up my daughter from school and went through a dead zone. It's like, why is this thing not loading in the Google Cloud Console? Yeah, fix the cell network in my area, please.

Steren: I'm impressed that you did all of that from an iPad. But yeah, to the audience: give Cloud Run a try.
You can really get started by connecting your GitHub repository or deploying your favorite container image. And we've worked very hard to ensure that usability was there, and we know we have pretty strong usability scores. Because simplicity is a lot of work to get to, and product excellence and developer experience are a lot of work to get right, and we are very proud of what we've achieved with Cloud Run and proud to see that the developer community has been very supportive and likes this product.

Corey: I'm a big fan of what you've built. And we will, of course, put links to all of that in the show notes. I just want to thank you again for being so generous with your time. And thanks again for building something that I think in many ways showcases the best of what Google Cloud has to offer.

Steren: Thanks for the invite.

Corey: We'll talk again soon. Steren Giannini is a senior product manager at Google Cloud, on Cloud Run. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice. If it's on YouTube, put the thumbs up and the subscribe buttons as well, but in the event that you hated it, also include an angry comment explaining why your 20% project is being a shithead on the internet.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.

AWS Morning Brief
Kubernetes Firewalln't

AWS Morning Brief

Play Episode Listen Later Jun 16, 2022 6:43 Very Popular


Links:
• Azure's continuing security woes
• The Meeting Owl videoconference device apparently had significant security problems
• Brandon Sherman writes about how Temporal structures its access control strategy with regard to AWS
• This week's S3 Bucket Negligence Award goes to Mobike
• Cloud Functions or Cloud Run launched from any GCP organization can bypass Google Kubernetes Engine (GKE) Authorized Networks restrictions
• Proof of someone migrating to SSO and disabling IAM users entirely
• AWS blog post about IAM policy types: How and when to use them
• Tailscale

Screaming in the Cloud
Creating “Quinntainers” with Casey Lee

Screaming in the Cloud

Play Episode Listen Later Apr 20, 2022 46:16


About CaseyCasey spends his days leveraging AWS to help organizations improve the speed at which they deliver software. With a background in software development, he has spent the past 20 years architecting, building, and supporting software systems for organizations ranging from startups to Fortune 500 enterprises.Links Referenced: “17 Ways to Run Containers in AWS”: https://www.lastweekinaws.com/blog/the-17-ways-to-run-containers-on-aws/ “17 More Ways to Run Containers on AWS”: https://www.lastweekinaws.com/blog/17-more-ways-to-run-containers-on-aws/ kubernetestheeasyway.com: https://kubernetestheeasyway.com snark.cloud/quinntainers: https://snark.cloud/quinntainers ECS Chargeback: https://github.com/gaggle-net/ecs-chargeback  twitter.com/nektos: https://twitter.com/nektos TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored by our friends at Revelo. Revelo is the Spanish word of the day, and its spelled R-E-V-E-L-O. It means “I reveal.” Now, have you tried to hire an engineer lately? I assure you it is significantly harder than it sounds. One of the things that Revelo has recognized is something I've been talking about for a while, specifically that while talent is evenly distributed, opportunity is absolutely not. They're exposing a new talent pool to, basically, those of us without a presence in Latin America via their platform. It's the largest tech talent marketplace in Latin America with over a million engineers in their network, which includes—but isn't limited to—talent in Mexico, Costa Rica, Brazil, and Argentina. Now, not only do they wind up spreading all of their talent on English ability, as well as you know, their engineering skills, but they go significantly beyond that. Some of the folks on their platform are hands down the most talented engineers that I've ever spoken to. Let's also not forget that Latin America has high time zone overlap with what we have here in the United States, so you can hire full-time remote engineers who share most of the workday as your team. It's an end-to-end talent service, so you can find and hire engineers in Central and South America without having to worry about, frankly, the colossal pain of cross-border payroll and benefits and compliance because Revelo handles all of it. If you're hiring engineers, check out revelo.io/screaming to get 20% off your first three months. That's R-E-V-E-L-O dot I-O slash screaming.Corey: Couchbase Capella Database-as-a-Service is flexible, full-featured and fully managed with built in access via key-value, SQL, and full-text search. Flexible JSON documents aligned to your applications and workloads. Build faster with blazing fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best price performance of any fully managed document database. Visit couchbase.com/screaminginthecloud to try Capella today for free and be up and running in three minutes with no credit card required. Couchbase Capella: make your data sing.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest today is someone that I had the pleasure of meeting at re:Invent last year, but we'll get to that story in a minute. 
Casey Lee is the CTO with a company called Gaggle, which is—as they frame it—saving lives. Now, that seems to be a relatively common position that an awful lot of different tech companies take. “We're saving lives here.” It's, “You show banner ads and some of them are attack platforms for JavaScript malware. Let's be serious here.” Casey, thank you for joining me, and what makes the statement that Gaggle saves lives not patently ridiculous?Casey: Sure. Thanks, Corey. Thanks for having me on the show. So Gaggle, we're ed-tech company. We sell software to school districts, and school districts use our software to help protect their students while the students use the school-issued Google or Microsoft accounts.So, we're looking for signs of bullying, harassment, self-harm, and potentially suicide from K-12 students while they're using these platforms. They will take the thoughts, concerns, emotions they're struggling with and write them in their school-issued accounts. We detect that and then we notify the school districts, and they get the students the help they need before they can do any permanent damage to themselves. We protect about 6 million students throughout the US. We ingest a lot of content.Last school year, over 6 billion files, about the equal number of emails ingested. We're looking for concerning content and then we have humans review the stuff that our machine learning algorithms detect and flag. About 40 million items had to go in front of humans last year, resulted in about 20,000 what we call PSSes. These are Possible Student Situations where students are talking about harming themselves or harming others. And that resulted in what we like to track as lives saved. 1400 incidents last school year where a student was dealing with suicide ideation, they were planning to take their own lives. We detect that and get them help within minutes before they can act on that. That's what Gaggle has been doing. We're using tech, solving tech problems, and also saving lives as we do it.Corey: It's easy to lob a criticism at some of the things you're alluding to, the idea of oh, you're using machine learning on student data for young kids, yadda, yadda, yadda. Look at the outcome, look at the privacy controls you have in place, and look at the outcomes you're driving to. Now, I don't necessarily trust the number of school administrations not to become heavy-handed and overbearing with it, but let's be clear, that's not the intent. That is not what the success stories you have alluded to. I've got to say I'm a fan, so thanks for doing what you're doing. I don't say that very often to people who work in tech companies.Casey: Cool. Thanks, Corey.Corey: But let's rewind a bit because you and I had passed like ships in the night on Twitter for a while, but last year at re:Invent something odd happened. First, my business partner procrastinated at getting his ticket—that's not the odd part; he does that a lot—but then suddenly ticket sales slammed shut and none were to be had anywhere. You reached out with a, “Hey, I have a spare ticket because someone can't go. Let me get it to you.” And I said, “Terrific. Let me pay you for the ticket and take you to dinner.”You said, “Yes on the dinner, but I'd rather you just look at my AWS bill and don't worry about the cost of the ticket.” “All right,” said I. I know a deal when I see one. We grabbed dinner at the Venetian. I said, “Bust out your laptop.” And you said, “Oh, I was kidding.” And I said, “Great. I wasn't. 
Bust it out.”And you went from laughing to taking notes in about the usual time that happens when I start looking at these things. But how was your recollection of that? I always tend to romanticize some of these things. Like, “And then everyone's restaurant just turned, stopped, and clapped the entire time.” Maybe that part didn't happen.Casey: Everything was right up until the clapping part. That was a really cool experience. I appreciate you walking through that with me. Yeah, we've got lots of opportunity to save on our AWS bill here at Gaggle, and in that little bit of time that we had together, I think I walked away with no more than a dozen ideas for where to shave some costs. The most obvious one, the first thing that you keyed in on, is we had RIs coming due that weren't really well-optimized and you steered me towards savings plans. We put that in place and we're able to apply those savings plans not just to our EC2 instances but also to our serverless spend as well.So, that was a very worthwhile and cost-effective dinner for us. The thing that was most surprising though, Corey, was your approach. Your approach to how to review our bill was not what I thought at all.Corey: Well, what did you expect my approach was going to be? Because this always is of interest to me. Like, do you expect me to, like, whip a portable machine learning rig out of my backpack full of GPUs or something?Casey: I didn't know if you had, like, some secret tool you were going to hit, or if nothing else, I thought you were going to go for the Cost Explorer. I spend a lot of time in Cost Explorer, that's my go-to tool, and you wanted nothing to do with Cost Exp—I think I was actually pulling up Cost Explorer for you and you said, “I'm not interested. Take me to the bills.” So, we went right to the billing dashboard, you started opening up the invoices, and I thought to myself, “I don't remember the last time I looked at an AWS invoice.” I just, it's noise; it's not something that I pay attention to.And I learned something, that you get a real quick view of both the cost and the usage. And that's what you were keyed in on, right? And you were looking at things relative to each other. “Okay, I have no idea about Gaggle or what they do, but normally, for a company that's spending x amount of dollars in EC2, why is your data transfer cost the way it is? Is that high or low?” So, you're looking for kind of relative numbers, but it was really cool watching you slice and dice that bill through the dashboard there.Corey: There are a few things I tie together there. Part of it is that this is sort of a surprising thing that people don't think about but start with big numbers first, rather than going alphabetically because I don't really care about your $6 Alexa for Business spend. I care a bit more about the $6 million, or whatever it happens to be at EC2—I'm pulling numbers completely out of the ether, let's be clear; I don't recall what the exact magnitude of your bill is and it's not relevant to the conversation.And then you see that and it's like, “Huh. Okay, you're spending $6 million on EC2. Why are you spending 400 bucks on S3? Seems to me that those two should be a little closer aligned. What's the deal here? Oh, God, you're using eight petabytes of EBS volumes. Oh, dear.”And just, it tends to lead to interesting stuff. Break it down by region, service, and use case—or usage type, rather—is what shows up on those exploded bills, and that's where I tend to start. 
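That biggest-numbers-first pass can be reproduced programmatically. Here is a sketch using the AWS SDK for Go v2 against the Cost Explorer API, grouping one month of unblended cost by service and sorting descending; the dates, and the choice of metric, are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"sort"
	"strconv"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/costexplorer"
	"github.com/aws/aws-sdk-go-v2/service/costexplorer/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	ce := costexplorer.NewFromConfig(cfg)

	out, err := ce.GetCostAndUsage(ctx, &costexplorer.GetCostAndUsageInput{
		TimePeriod: &types.DateInterval{
			Start: aws.String("2022-03-01"),
			End:   aws.String("2022-04-01"),
		},
		Granularity: types.GranularityMonthly,
		Metrics:     []string{"UnblendedCost"},
		GroupBy: []types.GroupDefinition{
			{Type: types.GroupDefinitionTypeDimension, Key: aws.String("SERVICE")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	type line struct {
		service string
		usd     float64
	}
	var lines []line
	for _, result := range out.ResultsByTime {
		for _, group := range result.Groups {
			amount, _ := strconv.ParseFloat(*group.Metrics["UnblendedCost"].Amount, 64)
			lines = append(lines, line{group.Keys[0], amount})
		}
	}
	// Biggest numbers first: the six-million-dollar line items deserve
	// attention; the six-dollar ones do not.
	sort.Slice(lines, func(i, j int) bool { return lines[i].usd > lines[j].usd })
	for _, l := range lines {
		fmt.Printf("%12.2f  %s\n", l.usd, l.service)
	}
}
```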
It also is one of the easiest things to wind up having someone throw into a PDF and email my way if I'm not doing it in a restaurant with, you know, people clapping standing around.Casey: [laugh]. Right.Corey: I also want to highlight that you've been using AWS for a long time. You're a Container Hero; you are not bad at understanding the nuances and depths of AWS, so I take praise from you around this stuff as valuing it very highly. This stuff is not intuitive, it is deeply nuanced, and you have a business outcome you are working towards that invariably is not oriented day in day out around, “How do I get these services for less money than I'm currently paying?” But that is how I see the world and I tend to live in a very different space just based on the nature of what I do. It's sort of a case study and the advantage of specialization. But I know remarkably little about containers, which is how we wound up reconnecting about a week or so before we did this recording.Casey: Yeah. I saw your tweet; you were trying to run some workload—container workload—and I could hear the frustration on the other end of Twitter when you were shaking your fist at—Corey: I should not tweet angrily, and I did in this case. And, eh, every time I do I regret it. But it played well with the people, so that does help. I believe my exact comment was, “‘me: I've got this container. Run it, please.' ‘Google Cloud: Run. You got it, boss.' AWS has 17 ways to run containers and they all suck.”And that's painting with an overly broad brush, let's be clear, but that was at the tail end of two or three days of work trying to solve a very specific, very common, business problem, that I was just beating my head off of a wall again and again and again. And it took less than half an hour from start to finish with Google Cloud Run and I didn't have to think about it anymore. And it's one of those moments where you look at this and realize that the future is here, we just don't see it in certain ways. And you took exception to this. So please, let's dive in because 280 characters of text after half a bottle of wine is not the best context to have a nuanced discussion that leaves friendships intact the following morning.Casey: Nice. Well, I just want to make sure I understand the use case first because I was trying to read between the lines on what you needed, but let me take a guess. My guess is you got your source code in GitHub, you have a Docker file, and you want to be able to take that repo from GitHub and just have it continuously deployed somewhere in Run. And you don't want to have headaches with it; you just want to push more changes up to GitHub, Docker Build runs and updates some service somewhere. Am I right so far?Corey: Ish, but think a little further up the stack. It was in service of this show. So, this show, as people who are listening to this are probably aware by this point, periodically has sponsors, which we love: We thank them for participating in the ongoing support of this show, which empowers conversations like this. Sometimes a sponsor will come to us with, “Oh, and here's the URL we want to give people.” And it's, “First, you misspelled your company name from the common English word; there are three sublevels within the domain, and then you have a complex UTM tagging tracking co—yeah, you realize people are driving to work when they're listening to this?”So, I've built a while back a link shortener, snark.cloud because is it the shortest thing in the world? 
Not really, but it's easily understandable when I say that, and people hear it for what it is. And that's been running for a long time as an S3 bucket full of redirects, behind CloudFront. So, I wind up adding a zero-byte object with a redirect parameter on it, and it just works.

Now, the challenge that I have here as a business is that I am increasingly prolific these days. So, anything that I am not directly required to be doing, I probably shouldn't necessarily be the one to do it. And care and feeding of those redirect links is a prime example of this. So, I went hunting, and the things that I was looking for were, obviously, do the redirect. Now, if you pull up GitHub, there are hundreds of solutions here. There are AWS blog posts. One that I really liked and almost got working was Eric Johnson's three-part blog post on how to do it serverlessly, with API Gateway, and DynamoDB, no Lambdas required. I really liked aspects of what that was, but it was complex, I kept smacking into weird challenges as I went, and front end is just baffling to me. Because I needed a front-end app for people to be able to use here; I need to be able to secure that because it turns out that if you just have a URL that anyone who stumbles across it can use to redirect things to other places, well, you've just empowered a whole bunch of spam email, and you're going to find that service abused, and everyone starts blocking it, and then you have trouble. Nothing lasts the first encounter with jerks.

And I was getting more and more frustrated, and then I found something by a Twitter engineer on GitHub, with a few creative search terms, who used to work at Google Cloud. And what it uses as a client is it doesn't build any kind of custom web app. Instead, as a database, it uses not S3 objects, not Route 53—the ideal database—but a Google sheet, which sounds ridiculous, but every business user here knows how to use that.

Casey: Sure.

Corey: And it looks for the two columns. The first one is the slug after the snark.cloud, and the second is the long URL. And it has a TTL of five seconds on cache, so make a change to that spreadsheet, five seconds later, it's live. Everyone gets it, I don't have to build anything new, I just put it somewhere where the relevant people can access it, I gave them a tutorial and a giant warning on it, and everyone gets that. And it just works well. It was, “Click here to deploy. Follow the steps.” And the documentation was a little, eh, okay, I had to undo it once and redo it again. Getting the domain registered was—getting it ported over took a bit of time, and there were some weird SSL errors as the certificates were set up, but once all of that was done, it just worked. And I tested the heck out of it, and cold starts are relatively low, and the entire thing fits within the free tier. And it is reminiscent of the magic that I first saw when I started working with some of the cloud providers' services, years ago. It's been a long time since I had that level of delight with something, especially after three days of frustration. It's one of the, “This is a great service. Why are people not shouting about this from the rooftops?” That was my perspective. And I put it out on Twitter and oh, Lord, did I get comments. What was your take on it?
I assumed of the 17 ways AWS has to run containers, Corey just doesn't understand. And so I went after it. And I said, “Okay, let me see if I can find a way that solves his use case, as I understand it, through a quick tweet.”And so I tried to App Runner; I saw that App Runner does not meet your needs because you have to somehow get your Docker image pushed up to a repo. App Runner can take an image that's already been pushed up and deployed for you or it can build from source but neither of those were the way I understood your use case.Corey: Having used App Runner before via the Copilot CLI, it is the closest as best I can tell to achieving what I want. But also let's be clear that I don't believe there's a free tier; there needs to be a load balancer in front of it, so you're starting with 15 bucks a month for this thing. Which is not the end of the world. Had I known at the beginning that all of this was going to be there, I would have just signed up for a bit.ly account and called it good. But here we are.Casey: Yeah. I tried Copilot. Copilot is a great developer experience, but it also is just pulling together tons of—I mean just trying to do a Copilot service deploy, VPCs are being created and tons IAM roles are being created, code pipelines, there's just so much going on. I was like 20 minutes into it, and I said, “Yeah, this is not fitting the bill for what Corey was looking for.” Plus, it doesn't solve my the way I understood your use case, which is you don't want to worry about builds, you just want to push code and have new Docker images get built for you.Corey: Well, honestly, let's be clear here, once it's up and running, I don't want to ever have to touch the silly thing again.Casey: Right.Corey: And that's so far has been the case, after I forked the repo and made a couple of changes to it that I wanted to see. One of them was to render the entire thing case insensitive because I get that one wrong a lot, and the other is I wanted to change the permanent 301 redirect to a temporary 302 redirect because occasionally, sponsors will want to change where it goes in the fullness of time. And that is just fine, but I want to be able to support that and not have to deal with old cached data. So, getting that up and running was a bit of a challenge. But the way that it worked, was following the instructions in the GitHub repo.The developer environment had spun up in the Google's Cloud Shell was just spectacular. It prompted me for a few things and it told me step by step what to do. This is the sort of thing I could have given a basically non-technical user, and they would have had success with it.Casey: So, I tried it as well. I said, “Well, okay, if I'm going to respond to Corey here and challenge him on this, I need to try Cloud Run.” I had no experience with Cloud Run. I had a small example repo that loosely mapped what I understood you were trying to do. Within five minutes, I had Cloud Run working.And I was surprised anytime I pushed a new change, within 45 seconds the change was built and deployed. So, here's my conclusion, Corey. Google Cloud Run is great for your use case, and AWS doesn't have the perfect answer. But here's my challenge to you. 
I think that you just proved why there are 17 different ways to run containers on AWS: because there are that many different types of users with different needs, and you just happen to be number 18 that hasn't gotten the right attention yet from AWS.
Corey: Well, let's be clear: my gag about 17 ways to run containers on AWS was largely a joke, and it went around the internet three times. So, I wrote a list of them in the blog post “17 Ways to Run Containers on AWS” and people liked it. And then a few months later, I wrote “17 More Ways to Run Containers on AWS,” listing 17 additional services that all run containers. And my favorite email that I think I've ever received in feedback was from a salty AWS employee, saying that one of them didn't really count because of some esoteric reason. And it turns out that when I'm trying to make the point that you have a sarcastic number of ways to run containers, pointing out that, well, one of them isn't quite valid doesn't really shatter the argument, let's be very clear here. So, I appreciate the feedback, I always do. And it's partially snark, but there is an element of truth to it in that customers don't want to run containers, by and large. That is what they do in service of a business goal. And they want their application to run, which in turn serves the business goal, which continues to abstract out into, “Remain a going concern via the current position the company stakes out.” In your case, it is saving lives; in my case, it is fixing horrifying AWS bills and making fun of Amazon at the same time; and in most other places, there are somewhat more prosaic answers to that. But containers are simply an implementation detail, to some extent—to my way of thinking—of getting to that point. An important one [unintelligible 00:18:20], let's be clear; I was very anti-container for a long time. I wrote a talk, “Heresy in the Church of Docker,” that then was accepted at ContainerCon. It's like, “Oh, boy, I'm not going to leave here alive.” And the honest answer, many years later, is that Kubernetes solves almost all the criticisms that I had, with the downside of, well, first you have to learn Kubernetes, and that continues to be mind-bogglingly complex from where I sit. There's a reason that I've registered kubernetestheeasyway.com and repointed it to ECS, Amazon's container service that does not require you to cosplay as a cloud provider yourself. But even ECS has a number of challenges to it, I want to be very clear here. There are no silver bullets in this. And you're completely correct in that I have a large, complex environment, and the application is nuanced, and I'm willing to invest a few weeks in setting up the baseline underlying infrastructure on AWS with some of these services—ideally not all of them at once, because that's something a lunatic would do—but getting them up and running. The other side of it, though, is that if I am trying to evaluate a cloud provider's handling of containers and how this stuff works, the reason that everyone starts with a Hello World-style example is that it delivers, ideally, the minimum time to dopamine. There's a reason that Hello World doesn't have 18 different dependencies across a bunch of different databases and message queues and all the other complicated parts of running a modern application: because you just want to see how it works out of the gate.
And if getting that baseline empty container that just returns the string ‘Hello World' is that complicated and requires that much work, my takeaway is not that this user experience is going to get better once I make the application itself more complicated. So, I find that off-putting. My approach has always been: find something that I can get the easy, minimum viable thing up and running on, and then, as I expand, know that you'll be there to catch me as my needs intensify and become ever more complex. But if I can't get the baseline thing up and running, I'm unlikely to be super enthused about continuing to beat my head against the wall like, “Well, I'll just make it more complex. That'll solve the problem.” Because it often does not. That's my position.
Casey: Yeah, I agree that the dopamine hit is valuable in getting attached enough to want to invest in whatever tech stack you're using. The challenge is the second part of that: will it grow with me and scale with me and support the complex edge cases that I have? And the problem I've seen is a lot of organizations will start with something that's very easy to get started with, then quickly outgrow it, and then come up with all sorts of weird Rube Goldberg-type solutions, because they jumped all in before seeing—I've got kind of an example of that. I'm happy to announce that there's now 18 ways to run containers on AWS. Because of your use case, in the spirit of AWS customer obsession—I hear your use case—I've created an open-source project that I want to share called Quinntainers—
Corey: Oh, no.
Casey: —and it solves—yes. Quinntainers is live and is ready for the world. So, now we've got 18 ways to run containers. And if you have Corey's use case of, “Hey, here's my container. Run it for me,” now we've got one command that you can run to get things going for you. I can share a link for you and you can check it out. This is a [unintelligible 00:21:38]—
Corey: Oh, we're putting that in the [show notes 00:21:37], for sure. In fact, if you go to snark.cloud/quinntainers, you'll find it.
Casey: You'll find it. There you go. The idea here was this: there is a real use case that you had, and I looked at it—AWS does not have an out-of-the-box simple solution for you. I agree with that. And Google Cloud Run does. Well, the answer from AWS would have been, “Well, then, we need to make that solution.” And so that's what this was: a way to demonstrate that it is a solvable problem. AWS has all the right primitives; just that use case hadn't been covered. So, how does Quinntainers work? Real straightforward: it's a command-line—it's an npm tool. You just run npx quinntainer; it sets up a GitHub Actions role in your AWS account, it then creates a GitHub Actions workflow in your repo, and then it uses the Quinntainer GitHub action—a reusable action—that creates the image for you every time you push to the branch, pushes it up to ECR, and then automatically pushes that new version of the image up to App Runner for you. So, it's using App Runner under the covers, but it's providing that nice developer experience that you are getting out of Cloud Run. Look, is Quinntainer really the right way to go with running containers? No, I'm not making that point at all. But the point is it is a—
Corey: It might very well be.
Casey: Well, if you want to show a good Hello World experience, Quinntainer's the best, because within 30 seconds, your app is now set up to continuously deliver containers into AWS for your very specific use case.
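The last hop Casey describes—rolling a freshly pushed ECR image out to App Runner—can be sketched in a few lines of boto3. This illustrates the mechanism rather than Quinntainers' actual implementation (which lives in a GitHub Action), and the service ARN is hypothetical:

```python
import boto3

apprunner = boto3.client("apprunner")

# Hypothetical ARN of an App Runner service created during initial setup.
SERVICE_ARN = (
    "arn:aws:apprunner:us-east-1:123456789012:"
    "service/my-app/8fe1e10304f84fd2b0df550fe98a71fa"
)

# After CI pushes a new image tag to ECR, trigger a roll-out of the
# service's configured image (useful when auto-deployments are off).
operation = apprunner.start_deployment(ServiceArn=SERVICE_ARN)
print("deployment operation:", operation["OperationId"])
```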
Casey: The problem is, it's not going to grow for you. I mean, it was something I did over the weekend just for fun; it's not something that would ever be worthy of hitching a real production workload to. So, the point there is, you can build frameworks and tools that are very good at getting that initial dopamine hit, but that are not necessarily going to be there for you as you mature and get more complex.
Corey: And yet, I've tilted a couple of times at the windmill of integrating GitHub Actions in anything remotely resembling a programmatic way with AWS services, as far as instance roles go. Are you using permanent credentials for this as stored secrets, or are you doing the OIDC handoff?
Casey: OIDC. So, what happens is the tool creates the IAM role for you with the trust policy on GitHub's OIDC provider, sets all that up for you in your account, locks it down so that just your repo and your main branch are able to assume the role, and the role is set up just to allow deployments to App Runner and the ECR repository. And then that's it. At that point, it's out of your way. And you just git push, and a couple minutes later, your updates are running on App Runner for you.
Corey: This episode is sponsored in part by our friends at Vultr. Optimized cloud compute plans have landed at Vultr to deliver lightning-fast processing power, courtesy of third-gen AMD EPYC processors, without the IO or hardware limitations of a traditional multi-tenant cloud server. Starting at just 28 bucks a month, users can deploy general-purpose, CPU-, memory-, or storage-optimized cloud instances in more than 20 locations across five continents. Without looking, I know that once again, Antarctica has gotten the short end of the stick. Launch your Vultr optimized compute instance in 60 seconds or less on your choice of included operating systems, or bring your own. It's time to ditch convoluted and unpredictable giant tech company billing practices and say goodbye to noisy neighbors and egregious egress forever. Vultr delivers the power of the cloud with none of the bloat. Screaming in the Cloud listeners can try Vultr for free today with $150 in credit when they visit getvultr.com/screaming. That's G-E-T-V-U-L-T-R dot com, slash screaming. My thanks to them for sponsoring this ridiculous podcast.
Corey: Don't undersell what you've just built. This is something that—is this what I would use for a large-scale production deployment? Obviously not. But it has streamlined and made incredibly accessible things that previously have been very complex for folks to get up and running. One of the most disturbing themes behind some of the feedback I got was, at one point I said, “Well, have you tried running a Docker container on Lambda?” Because now it supports containers as a packaging format. And I said no, because I spent a few weeks getting Lambda up and running back when it first came out, and I've basically been copying and pasting what I got working ever since, the way most of us do. And the response was, “Oh, that explains a lot.” With the implication being that I'm just a fool. Maybe, but let's be clear: I am never the only person in the room who doesn't know how to do something; I'm just loud about what I don't know. And the failure mode of a bad user experience is that a customer feels dumb. And that's not okay, because this stuff is complicated, and when a user has a bad time, it's a bug. I learned that in 2012 from Jordan Sissel, the creator of Logstash.
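The trust relationship Casey just described maps to an IAM role whose trust policy names GitHub's OIDC provider and pins the token's `sub` claim to one repo and branch. A sketch with boto3 of the general GitHub-to-AWS OIDC pattern—not Quinntainers' exact policy; the account ID, repo, and role name are hypothetical:

```python
import json
import boto3

ACCOUNT_ID = "123456789012"        # hypothetical
REPO = "example-org/example-repo"  # hypothetical

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Federated": f"arn:aws:iam::{ACCOUNT_ID}:"
                         "oidc-provider/token.actions.githubusercontent.com"
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {
                # Tokens must be minted for AWS...
                "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
                # ...by workflows on main in this one repository.
                "token.actions.githubusercontent.com:sub":
                    f"repo:{REPO}:ref:refs/heads/main",
            }
        },
    }],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="quinntainer-deploy",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```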
Corey: He has been an inspiration to me for the last ten years. And that's something I try to live by: if a user has a bad time, something needs to get fixed. Maybe it's the tool itself, maybe it's the documentation, maybe it's restructuring the way that GitHub repo's readme reads so that it's actually accessible. Because I am not a trailblazer in most things, nor do I intend to be. I'm not the world's best engineer by a landslide. Just look at my code and you'd dispute the fact that I'm an engineer at all. But, “If it's bad and it works, how bad is it?” is sort of the other side of it. So, my problem is that there needs to be a couple of things. Ignore for a second the aspect of making it the right answer to get something out of the door. The fact is that I want to take this container and just run it, and you and I both reach for App Runner as the default AWS service that does this, because I've been swimming in the AWS waters a while and you're a frickin' AWS Container Hero, where it is expected that you know what most of these things do. For someone who shows up on the containers webpage—which, by the way, lists, I believe, 15 ways to run containers on mobile and 19 ways to run containers on non-mobile, which is just fascinating in its own right—it's overwhelming, it's confusing, and it does not make it abundantly clear what the golden path is. First, get it up and working, get it running, then you can add nuance and flavor and the rest, and I think that's something that's gotten overlooked in our mad rush to pretend that we're all Google engineers, circa 2012.
Casey: Mmm. I think people get stressed out when they try to run containers in AWS because they think, “What is that golden path?” You said golden path. And my advice to people is: there is no golden path. And the great thing about AWS is they do continue to invest in the solutions they come up with. I'm still bitter about Google Reader.
Corey: As am I.
Casey: Yeah. I put so much time into getting my perfect set of RSS feeds, and then I had to find somewhere else to—with AWS, the different offerings that are available for running containers, those are there intentionally; it's not by accident. They're there to solve specific problems, so the trick is finding what works best for you, and don't feel like one is better than another or is going to get more attention than the others. They each have different use cases. And I approach it this way. I've seen a couple of different people do some great flowcharts—I think Forrest did one, Vlad did one—on ways to make the decision on how to run your containers. And I break it down to three questions. I ask people, first of all: where are you going to run these workloads? If someone says, “It has to be in the data center,” okay, cool, then ECS Anywhere or EKS Anywhere, and we'll figure out if Kubernetes is needed. If they have specific requirements—so if they say, “No, we can run in the cloud, but we need privileged mode for containers,” or, “We need EBS volumes,” or, “We want really small container sizes,” like, less than a quarter-vCPU or less than half a gig of RAM—or if you have custom log requirements, Fargate is not going to work for you, so you're going to run on EC2. Otherwise, run it on Fargate. But that's the first question: figure out where you are going to run your containers. That leads to the second question: what's your control plane? Those are different—sort of related, but different—questions. And I only see six options there.
That's App Runner for your control plane; Lightsail for your control plane; ROSA, if you're invested in OpenShift already; EKS, if you either have momentum in Kubernetes or you have a bunch of engineers that have a bunch of experience with Kubernetes—if you don't have either, don't choose it—or ECS. The last option is Elastic Beanstalk, but let's leave that as a—if you're not currently invested in Elastic Beanstalk, don't start today. But I look at those as: okay, first question, where am I going to run my containers? Second question, what do I want to use for my control plane? And there are different pros and cons to each of those. And then the third question: how do I want to manage them? What tools do I want to use for managing deployment? All those other tools like Copilot or App2Container or Proton—those aren't my control plane; those aren't where I run my containers; that's how I manage, deploy, and orchestrate all the different containers. So, I look at it as those three questions. But I don't know, what do you think of that, Corey?
Corey: I think you're onto something. I think that is a terrific way of exploring that question. I would argue that setting up a framework like that—that one, or one very similar—is what the AWS containers page should be, just coming from the perspective of what the neophyte customer experience is. On some level, you almost need a slider of “choose your level of experience,” ranging from, “What's a container?” to, “I named my kid Kubernetes because I make terrible life decisions,” and anywhere in between.
Casey: Sure. Yeah, well, and I think that really dictates the control plane level. So, for example, Lightsail: where does Lightsail fit? To me, the value of Lightsail is the simplicity. I'm looking at a monthly price: seven bucks a month for a container. I don't know how [unintelligible 00:30:23] works, but I can think in terms of monthly pricing. And it's tailored towards a console user, someone who just wants to click in and point to an image. That's a very specific user; there are thousands of customers that are very happy with that experience, and they use it. App Runner presents that scale to zero. That's one of the big selling points I see with App Runner. Likewise with Google Cloud Run: I've got that scale to zero. I can't do that with ECS, or EKS, or any of the other platforms. So, if you've got something that has a ton of idle time, I'd really be looking at those. I would argue—I think I did the math—Google Cloud Run is about 30% more expensive than App Runner.
Corey: Yeah, if you disregard the free tier, I think that—having it running persistently at all times throughout the month, without the drop to cold starts—would cost something like 40-some-odd bucks a month or something like that. Don't quote me on it. And to be clear, I wound up doing this very congratulatory and complimentary tweet about them on, I think it was, Thursday, and then they immediately, apparently, took one look at this and said, “Holy shit. Corey's saying nice things about us. What do we do? What do we do?” Panic. And the next morning, they raised prices on a bunch of cloud offerings. Whew, that'll fix it. Like—
Casey: [laugh].
Corey: Di—did you miss the direction you're going in here? No, that's the exact opposite of what you should be doing. But here we are. Interestingly enough, to tie our two conversation threads together, when I look at an AWS bill, unless you're using Fargate, I can't tell whether you're using Kubernetes or not, because EKS is a small charge.
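Stepping back to Casey's three questions for a moment: his first two can be encoded as a rough decision helper. This is purely an illustration of his rules of thumb, not official AWS guidance:

```python
def pick_container_platform(on_prem: bool, needs_privileged: bool,
                            needs_ebs: bool, tiny_tasks: bool,
                            custom_logging: bool,
                            kubernetes_momentum: bool) -> str:
    """Illustrative encoding of Casey's decision heuristic."""
    # Question 1: where do the workloads run?
    if on_prem:
        return "ECS Anywhere or EKS Anywhere"
    # Per his rules of thumb, Fargate is out if you need privileged mode,
    # EBS volumes, sub-quarter-vCPU tasks, or custom log handling.
    data_plane = ("EC2" if (needs_privileged or needs_ebs or
                            tiny_tasks or custom_logging) else "Fargate")
    # Question 2: which control plane? (App Runner, Lightsail, ROSA, and
    # Elastic Beanstalk elided here for brevity.)
    control_plane = "EKS" if kubernetes_momentum else "ECS"
    return f"{control_plane} on {data_plane}"

print(pick_container_platform(False, False, False, False, False, False))
# -> "ECS on Fargate"
```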
Corey: And in almost every case, that's for the control plane, or Fargate under it. Everything else just manifests as EC2 spend, from the perspective of the cloud provider. If you're running a Kubernetes cluster, it is a single-tenant application that can have some very funky behaviors, like cross-AZ chatter back and forth, because there's no internal mechanism to say, talk to the free thing rather than the two-cents-a-gigabyte thing. It winds up spinning up and down in a bunch of different ways, and the behavior patterns, because of how placement works, are not necessarily deterministic, depending upon workload. And that becomes something that people find odd when it's, “Okay, we looked at your bill for a week. What can you say?” “Well, first question: are you running Kubernetes at all?” And they're like, “Who invited these clowns?” Understand, we're not prying into your workloads, for a variety of excellent legal and contractual reasons here. We are looking at how they behave, and for specific workloads, once we have a conversation with the engineering team, yeah, we're going to dive in. But it is not at all intuitive from the outside to make any determination of whether you're running containers, or whether you're running VMs that you just haven't done anything with in 20 years, or what exactly is going on. And that's just an artifact of the billing system.
Casey: We ran into this challenge at Gaggle. We don't use EKS, we use ECS, but we have some shared clusters, lots of EC2 spend, and it's hard to figure out which team is creating the services that are running that up. We actually ended up creating a tool—we open-sourced it—ECS Chargeback. And what it does is it looks at the CPU and memory reservations for each task definition, prorates the overall charge of the ECS cluster, and then creates metrics in Datadog to give us a breakdown of cost per ECS service. And it also measures what we like to refer to as waste, right? Because if you're reserving four gigs of memory but your utilization never goes over two gigs, we're paying for that reservation, but you're underutilizing it. So, we're able to also show which services have the highest degree of waste, not just utilization, and it helps us go after it. But this is a hard problem. I'd be curious: how do you approach these shared ECS resources and slicing and dicing those bills?
Corey: Everyone has a different approach, too. There is no unifiable, correct answer. A previous show guest, Peter Hamilton, over at Remind, had done something very similar and open-sourced a bunch of these things. Understanding what your spend is, is important on this, and it comes down to getting at the actual business concern, because in some cases, effectively dead reckoning is enough. You take a look at the cluster that is really hard to attribute because it's a shared service. Great. It is 5% of your bill. First pass, why don't we just agree that it is a third for Service A, two-thirds for Service B, and we'll call it mostly good at that point? That can be enough in a lot of cases. With scale, [laugh] you're just sort of hand-waving over many millions of dollars a year there. How about we get into some more depth? And then you start instrumenting and reporting to something—be it CloudWatch, be it Datadog, be it something else—and understanding what the use case is. In some cases, customers have broken apart shared clusters for that specific reason. I don't think that's necessarily the best approach from an engineering perspective, but again, this is not purely an engineering decision.
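The proration at the heart of ECS Chargeback, as Casey describes it, reduces to arithmetic along these lines—a sketch of the idea rather than the tool's actual code, with made-up numbers:

```python
# Prorate a shared ECS cluster's monthly cost across services by their
# CPU/memory reservations, and flag reservation waste.
CLUSTER_MONTHLY_COST = 10_000.00  # dollars; hypothetical

# service -> (reserved CPU units, reserved memory MiB, peak memory MiB)
services = {
    "api":    (1024, 4096, 1800),
    "worker": (512,  2048, 1900),
}

total_cpu = sum(cpu for cpu, _, _ in services.values())
total_mem = sum(mem for _, mem, _ in services.values())

for name, (cpu, mem, peak) in services.items():
    # Weight CPU and memory equally -- one plausible policy among many.
    share = 0.5 * (cpu / total_cpu) + 0.5 * (mem / total_mem)
    cost = share * CLUSTER_MONTHLY_COST
    waste = 1 - (peak / mem)  # fraction of reserved memory never used
    print(f"{name}: ${cost:,.2f}/month, memory waste {waste:.0%}")
```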
Corey: It comes down to serving the business need. And if you're taking partial credits on that cluster—for a tax credit for R&D, for example—you want that position to be extraordinarily defensible, and spending a few extra dollars to ensure that it is, is the right business decision. I mean, again, we're pure advisory; we advise customers on what we would do in their position, but people often mistake that to mean we're going to go for the lowest possible price—bad idea—or that we're going to wind up doing this from a purely engineering-centric point of view. It's: be aware that in almost every case, with some very notable weird exceptions, the AWS bill costs significantly less than the payroll expense that you have of people working on the AWS environment in various ways. People are more expensive, so the idea of, well, you can save a whole bunch of engineering effort by spending a bit more on your cloud—yeah, let's go ahead and do that.
Casey: Yeah, good point.
Corey: The real mark of someone who's senior enough is that their answer to almost any question is, “It depends.” And I feel I've fallen into that trap as well. Much as I'd love to sit here and say, “Oh, it's really simple. You do X, Y, and Z”—yeah… honestly, my answer, the simple answer, is that we orchestrate a cyberbullying campaign against AWS through the AWS wishlist hashtag; we get people to harass their account managers with repeated requests for, “Hey, could you go ahead and [dip 00:36:19] that thing in—give that a plus-one for me in whatever internal system you're using?” Just because this is a problem we're seeing more and more. Given that it's an unbounded growth problem, we're going to see it more and more for the foreseeable future. So, I wish I had a better answer for you, but yeah, that stuff's super hard, is the honest answer, but it's also not the most useful answer for most of us.
Casey: I'd love feedback from anyone—from you or your team—on that tool that we created. I can share the link after the fact. ECS Chargeback is what we call it.
Corey: Excellent. I will follow up with you separately on that. That is always worth diving into. I'm curious to see new and exciting approaches to this. Just be aware that we have an obnoxious talent sometimes for seeing these things and, “Well, what about”—asking about some weird corner edge case that either invalidates the entire thing, or you're like, “Who on earth would ever have a problem like that?” And the answer is always, “The next customer.”
Casey: Yeah.
Corey: For a bounded problem space of the AWS bill. Every time I think I've seen it all, I just have to talk to one more customer.
Casey: Mmm. Cool.
Corey: In fact, the way that we approached your teardown in the restaurant is how we launched our first-pass approach. Because the value in something like that is different than the value of a six-to-eight-week-long, deep-dive engagement into every nook and cranny. And—
Casey: Yeah, for sure.
It was valuable to us.
Corey: Yeah. Having someone come in to just spend a day with your team, diving into it up one side and down the other, seems like a weird thing—like, “How much good could you possibly do in a day?” And the answer in some cases is—we had Honeycomb saying that in a couple of days of something like this, we wound up blowing 10% off their entire operating budget for the company; it led to an increased valuation. Liz Fong-Jones has said—on multiple occasions—that the company would not be what it is without our efforts on their bill, which is just incredibly gratifying to hear. It's easy to get lost in the idea of, well, it's the AWS bill. It's just making big companies spend a little bit less with another big company. And that's not exactly, you know, saving the lives of K-through-12 students here.
Casey: It's opening up opportunities.
Corey: Yeah. It's about optimizing for the win for everyone. Because now AWS gets a lot more money from Honeycomb than they would if Honeycomb had not continued on their trajectory. You can charge customers a lot right now, or you can charge them a little bit over time and grow with them in a partnership context. I've always opted for the second model rather than the first.
Casey: Right on.
Corey: But here we are. I want to thank you for taking so much time out of, well, several days now to argue with me on Twitter, which is always appreciated, particularly when it's, you know, constructive—thanks for that—
Casey: Yeah.
Corey: —for helping me get my business partner to re:Invent—although then he got me that horrible puzzle of 1,000 pieces for the Cloud Native Computing Foundation landscape, and now I don't ever want to see him again, so, you know, that happens—and of course, for spending the time to write Quinntainers, which is going to be at snark.cloud/quinntainers as soon as we're done with this recording. Then I'm going to kick the tires and send some pull requests.
Casey: Right on. Yeah, thanks for having me. I appreciate you starting the conversation. I would just conclude with: I think that, yes, there are a lot of ways to run containers on AWS; don't let it stress you out. They're there intentionally; they're there by design. Understand them. I would also encourage people to go a little deeper, especially if you've got a significantly large workload. You've got to get your hands dirty. As a matter of fact, there's a hands-on lab that a company called Liatrio does. They call it their Night Lab; it's a one-day, free, hands-on lab where you run legacy monolithic Java applications on Kubernetes. It gives you first-hand experience and gets all the way up into observability and doing things like canary deployments. It's a great, great lab. But you've got to do something like that to really get your hands dirty and understand how these things work. So, don't sweat it; there's not one right way. There's a way that will probably work best for each user, so just take the time to understand the options and make sure you're applying the one that's going to give you the most runway for your workload.
Corey: I will definitely dig into that myself. But I think you're right; I think you have nailed a point that is, again, a nuanced one and challenging to put in a rage tweet. But the services don't exist in a vacuum. They're not there because, despite the joke, someone wants to get promoted.
It's because there are customer needs that are going on there, and this is another way of meeting those needs. I think there could be better guidance, but I also understand that there are a lot of nuanced perspectives here and that… hell is someone else's workflow—
Casey: [laugh].
Corey: —and there's always value in broadening your perspective a bit on those things. If people want to learn more about you and how you see the world, where's the best place to find you?
Casey: Probably on Twitter: twitter.com/nektos, N-E-K-T-O-S.
Corey: That might be the first time Twitter has been described as the best place for anything. But—
Casey: [laugh].
Corey: —thank you once again for your time. It is always appreciated.
Casey: Thanks, Corey.
Corey: Casey Lee, CTO at Gaggle and AWS Container Hero. And apparently writing code in anger to invalidate my points, which is always appreciated. Please do more of that, folks. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice—or the YouTube comments, which is always a great place to go reading—whereas if you've hated this podcast, please leave a five-star review in the usual places and an angry comment telling me that I'm completely wrong, and then launch your own open-source tool to point out exactly what I've gotten wrong this time.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Announcer: This has been a HumblePod production. Stay humble.

Google Cloud Platform Podcast
Celebrating Women's History Month with Vidya Nagarajan Raman

Google Cloud Platform Podcast

Play Episode Listen Later Mar 30, 2022 41:05


Stephanie Wong and Debi Cabrera host a special episode highlighting the amazing accomplishments of our guest Vidya Nagarajan Raman as we celebrate Women's History Month! With her more than 20 years of experience fostering growth and monetization in enterprise and education platforms, investing and working in the holistic lifestyle space, and earning her MBA while raising her two children, Vidya has certainly done a lot!
Vidya tells us about her latest blog post stressing the importance of being an event-driven organization. In this business structure, reactions to events are planned in advance, and developers consider how services are integrated for maximum efficiency. With synchronous extensions, projects retain flexibility in existing applications as they work with Cloud Functions to extend to new areas. Vidya gives our listeners examples of how this works.
The journey from engineer to Head of Product Management was an interesting one for Vidya, and she describes how she got started in computer engineering. Her passion for connecting with users later pushed her to product management. She tells us about her contributions to Chromebooks for Education as well as other milestones during her time with Google. Vidya talks about the support system she credits with helping her along the way and gives our listeners advice for finding mentors in their fields. She touches on the challenges she faced, describes what it was like for a woman in the industry when she first started, and offers encouragement to women getting started now. Balancing work, continuing her education, and raising children was tough, but Vidya says that, along with her incredible professional and personal support systems, defining priorities is vital.
Vidya offers our listeners the insights she's gained as she's watched Google and workplace teams change and adapt over the years. Building an inclusive team, encouraging diverse perspectives, and defining a framework for settling disagreements are some of the pieces of advice she shares. Don't be afraid to fail, and be a risk-taker, Vidya says, because that promotes growth and learning. If you learn something new every day and have fun doing it, then you will be successful.
In her spare time, Vidya leads a charitable foundation that partners with organizations in countries like India and Peru to further education, build orphanages and libraries, and provide medical care for women. She is an angel investor and runs workshops on creating a holistic lifestyle to help others lead well-rounded, fulfilling lives.
Vidya Nagarajan Raman
Vidya Nagarajan Raman is the Head of Product Management for Serverless at Google Cloud. She is also an angel investor, advisor, and co-founder of a holistic lifestyle platform that empowers people to grow and transform their lives.
Cool things of the week
Ready to solve for the future? Data Cloud Summit ‘22 is coming April 6 blog
Visualizing Google Cloud: 101 Illustrated References for Cloud Engineers and Architects site
Interview
Evolving to a programmable cloud blog
Cloud Functions site
Cloud Run site
Eventarc docs
Workflows site
Chromebook site
What's something cool you're working on?
Debi is working on an Apache Beam series with Mark Mirchandani.
Stephanie is working on scripts for a series about getting into a career in cloud.
Hosts
Stephanie Wong and Debi Cabrera

Google Cloud Platform Podcast
Redesigning the Cloud SDK and CLI with Wael Manasra and Cody Oss

Google Cloud Platform Podcast

Play Episode Listen Later Feb 2, 2022 44:09


This week on the podcast, Wael Manasra and Cody Oss join hosts Carter Morgan and Mark Mirchandani to chat about new branding in the Cloud SDK and gcloud CLI. Google Cloud SDK was built and designed to take over mundane development tasks, allowing engineers to focus on specialized features and solutions. The SDK documentation and tutorials are an important part of this as well. With clear instructions, developers can easily make use of the Cloud SDK.
Software Development Kits have evolved so much over the years that recently, Cody, Wael, and their teams have found it necessary to redefine and rethink SDKs. The popularity of cloud projects and distributed systems, for example, means changes to kit requirements. The update is meant to reevaluate the software included in SDKs and CLIs and to more accurately represent what the products offer. Giving developers the tools they need in the place they work means giving developers code-language options, providing thorough instruction, and listening to feedback. These are the goals of this redesign.
The Google Cloud SDK contains downloadable parts and web publications. Our guests explain the types of software and documentation in each group and highlight the importance of documentation and supporting materials like tutorials. The Cloud Console is a great place for developers to start building solutions using the convenient point-and-click tools that are available. When these actions need to be repeated, the downloadable command-line interface tool can do the work. Cody talks about authentication and gcloud, including its relationship to client libraries—a relationship sketched in code after this entry. He walks us through the steps a typical developer might take when using Google products and how they relate to the SDK and CLI. Through examples, Wael helps us further understand client libraries and how they can interact with the CLI. The Cloud SDK is a work in progress. Our guests welcome your feedback for future updates!
Wael Manasra
Wael manages the gcloud CLI, the client libraries for all GCP services, and the general Cloud SDK developer experience.
Cody Oss
Cody works on the Go Cloud Client Libraries, where he strives to provide a delightful and idiomatic experience to all the Gophers on Google Cloud.
Cool things of the week
Google Tau VMs deliver over 40% price-performance advantage to customers blog
Find products faster with the new All products page blog
Interview
Cloud SDK site
Cloud SDK Documentation docs
Go site
Google Cloud site
Cloud Storage site
Cloud Storage Documentation docs
Cloud Code site
Cloud Run site
GKE site
Cloud Functions site
Cloud Client Libraries docs
Cloud Shell site
Cloud Shell Editor docs
What's something cool you're working on?
Carter is working on his comedy.
Hosts
Carter Morgan and Mark Mirchandani
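To make the gcloud-to-client-library relationship concrete: once credentials exist (for example, after running `gcloud auth application-default login`), a client library discovers them automatically. A minimal sketch with the Cloud Storage library for Python—the bucket name is hypothetical:

```python
from google.cloud import storage  # pip install google-cloud-storage

# The client picks up Application Default Credentials on its own --
# the same credentials `gcloud auth application-default login` writes.
client = storage.Client()

for blob in client.list_blobs("example-bucket"):  # hypothetical bucket
    print(blob.name)
```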

Screaming in the Cloud
Building a User-Friendly Product with Aparna Sinha

Screaming in the Cloud

Play Episode Listen Later Dec 8, 2021 42:53


About Aparna
Aparna Sinha is Director of Product for Kubernetes and Anthos at Google Cloud. Her teams are focused on transforming the way we work through innovation in platforms. Before Anthos and Kubernetes, Aparna worked on the Android platform. She joined Google from NetApp, where she was Director of Product for storage automation and private cloud. Prior to NetApp, Aparna was a leader in McKinsey and Company's business transformation office, working with CXOs on IT strategy, pricing, and M&A. Aparna holds a PhD in Electrical Engineering from Stanford and has authored several technical publications. She serves on the Governing Board of the Cloud Native Computing Foundation (CNCF).
Links:
DevOps Research Report: https://www.devops-research.com/research.html
Twitter: https://twitter.com/apbhatnagar
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is sponsored in part by our friends at Redis, the company behind the incredibly popular open-source database that is not the BIND DNS server. If you're tired of managing open-source Redis on your own, or you're using one of the vanilla cloud caching services, these folks have you covered with the go-to managed Redis service for global caching and primary database capabilities: Redis Enterprise. Set up a meeting with a Redis expert during re:Invent, and you'll not only learn how you can become a Redis hero, but also have a chance to win some fun and exciting prizes. To learn more and deploy not only a cache but a single operational data platform for one Redis experience, visit redis.com/hero. That's r-e-d-i-s dot com slash hero. And my thanks to my friends at Redis for sponsoring my ridiculous nonsense.
Corey: You know how Git works, right?
Announcer: Sorta, kinda, not really. Please ask someone else.
Corey: That's all of us. Git is how we build things, and Netlify is one of the best ways I've found to build those things quickly for the web. Netlify's Git-based workflows mean you don't have to play slap-and-tickle with integrating arcane nonsense and webhooks, which are themselves about as well understood as Git. Give them a try and see what folks ranging from my fake Twitter-for-Pets startup to global Fortune 2000 companies are raving about. If you end up talking to them—because you don't have to; they get why self-service is important—but if you do, be sure to tell them that I sent you and watch all of the blood drain from their faces instantly. You can find them in the AWS marketplace or at www.netlify.com. That's N-E-T-L-I-F-Y dot com.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. We have a bunch of conversations on this show covering a wide gamut of different topics—things that I find personally interesting, usually, and also things I'm noticing in the industry. Fresh on the heels of Google Next, we get to, ideally, have conversations about both of those things. Today, I'm speaking with the Director of Product Management at Google Cloud, Aparna Sinha. Aparna, thank you so much for joining me today. I appreciate it.
Aparna: Thank you, Corey. It's a pleasure to be here.
Corey: So, Director of Product Management is one of those interesting titles.
We've had a repeat guest here, Director of Outbound Product Management Richard Seroter, which is great. I assume—as I told him—outbound products are the ones that are about to be discontinued. He's been there a year and somehow has failed to discontinue a single thing, so okay, I'm sure that's going to show up on his review. What do you do? The products aren't outbound; they're just products, and you're managing them, but that doesn't tell me much. Titles are always strange.
Aparna: Yeah, sure. Richard is one of my favorite people, by the way. I work closely with him. I am the Director of Product for Developer Platform—that's Google Cloud's developer platform. It includes many different products—actually, 30-plus products—but the primary pieces are usually, when a developer comes to Google Cloud, the pieces that they interact with, like our command-line interface, like our Cloud Shell, and all of the SDK pieces that go behind it, and then also our DevOps tooling. So, as you're writing the application in the IDE and as you're deploying it into production, that's all part of the developer platform. And then I also run our serverless platform, which is one of the most developer-friendly capabilities from a compute perspective. It's also integrated into many different services within GCP. So, behind the title, that's really what I work on.
Corey: Okay, so you're, I guess, in part responsible for, well, a disappointment of mine from a few years ago. I have a habit on Twitter—because I'm a terrible person—of periodically spinning up a new account on various cloud providers, kicking the tires, and then live-tweeting the experience, and I was really set to dunk on Google Cloud; I turned this into a whole blog post. And I came away impressed, where the developer experience was pretty close to seamless for getting up and running. It was head and shoulders above what I've seen from other cloud providers, and on the one hand, I want to congratulate you, and on the other, it doesn't seem like that's that high of a bar, to be perfectly honest with you, because it seems that companies get stuck in their own ways and presuppose that everyone using the product is the same as the people building the product. Google Cloud has been and remains a shining example of great developer experience across the board. If I were starting something net new and did not have deep experience with an existing cloud provider—which, let's face it, the most valuable thing about the cloud is knowing how it's going to break, because everything breaks—I would be hard-pressed not to pick GCP, if not as the choice, at least as a strong number two. So, how did that come to be? I take a look at a lot of Google's consumer apps, and, “This is a great user experience,” isn't really something I find myself saying all that often. Google Cloud is sort of its own universe. What happened?
Aparna: Well, thank you, first of all, for the praise. We are very humble about it, actually. I think that we're grateful if our developers find the experience to be seamless. It is something that we measure all the time. That may be one of the reasons why you found it to be better than other places. We are continuously trying to improve the time to value for developers—how long it takes them to perform certain actions. And so, what you measure is what you improve, right? If you don't measure it, you don't improve it. That's one of our SRE principles.
Corey: I wish. I've been measuring certain things for years, and they don't seem to be improving at all.
It's like, “Wow, my code is still terrible, but I'm counting the bugs and the number isn't getting smaller.” Turns out there might be additional steps required.
Aparna: Yes, you know, we measure it, we look at it, we take active OKRs to improve these things, especially usability. Usability is extremely important; for the developer platform—for my group—that's something that's extremely important. I would say, stepping back, you said it's not that common to find a good user experience in the cloud. I think, in general—you know, I've spent the majority of my career, if not all of my career, working on enterprise software—enterprise software is not always designed in the most user-friendly way; it's not something that people always think about. Some of the enterprise software I've used has been really pretty… pretty bad. Just a list of things.
Corey: Oh, yeah. And it seems like their entire philosophy—I did a bit of a dive into this, and I think it was Stripe's Patrick McKenzie who wound up pointing this out originally, though the internet is big and people always share and reshare ideas—is that the actual customer for enterprise software is very often procurement, or a business unit that is very organizationally distant from the person who's using it. And I think in the world of cloud platforms, that is no longer true. Yeah, there's a strategic decision of what cloud do we use, but let's be serious: that decision often comes into play long after there's already been a shadow-IT-slash-groundswell uprising. The sales process starts to look a lot less like, “Pick our cloud,” and a lot more like, “You've already picked our cloud. How about we formalize the relationship?” And developer experience with platforms is incredibly important, and I'm glad to see that this is a—well, it's bittersweet to me. I am glad to see that this is something that Google is focusing on, and I'm disappointed to admit that it's a differentiator.
Aparna: It is a differentiator. It is extremely important. At Google, there are a couple of reasons why this is part of our DNA, and it is actually related to the fact that we are also a consumer products company. We have a very strong user experience team, a very strong measurement orientation—they measure everything, and they design everything, and they run focus groups. So, we have an extraordinary usability team, and it's actually one of the groups that—just like every other group—is fungible; you can move between consumer and cloud. There's no difference in terms of your training and skill set. And so, I know you said that you're not super impressed with our consumer products, but I think that the practice behind treating the user as king—treating the user as the most important part of your development—is something that we bring over into cloud. And it's just a part of how we do development, and I think that's part of the reason why our products are usable. Again, I shy away from taking any really high credit on these things, because I always have a very high bar. I want them to be delightful, super delightful, but we do have good usability scores on some of the pieces. I think our command line is quite good. I think—there are always improvements, by the way, Corey—but I think that there are certain things that are delightful. And a lot of thought goes into it, and a lot of multi-functional—meaning across product—work between user experience, engineering, and developer relations.
We have sort of this four-way communication—you know, with friction logs and with lots of trials and lots of discussion and measurements—and that's how we improve the user experience. And I would love to see that in more enterprise software. I think that my experience in the industry is that the user is becoming more important, generally, even in enterprise software, probably because of the migration to cloud. You can't ignore the user anymore. This shouldn't be all about procurement. Anybody can procure a cloud service. It's really about how easily and how quickly they can get to what they want to do as a user. Which I think also—the definition of what a developer is, is changing, and I think that's one of the most exciting things about our work: the developer can be anybody; it can be my kids, and it can be anyone across the world. And our goal is to reach those people and to make it easy for them.
Corey: If I had to bet on a company not understanding that distinction, on some level, Google's reputation lends itself to that, where, oh, great. It's like, I'm a little old to go back to school and join a fraternity and be hazed there, so the second option was, oh, I'll get an interview to be an SRE at Google, where, “Oh, great, you've done interesting things, but can you invert a binary tree on a whiteboard?” “No, I cannot. Let's save time and admit that.” So, the concern that I would have had—which you just directly contradicted—was the idea that you see at some companies where there's the expectation that all developers are like their developers. Google, for better or worse, has a high technical bar for hiring. A number of companies do not have a similar bar along similar axes, and they're looking for different skill sets to achieve different outcomes, and that's fine. To be clear, I am not saying that, oh, the engineers at Google are all excellent and the engineers at a bank are all crap. Far from it. That is not true in either direction, but there are differences as far as how they concern themselves with software development, how they frame a lot of these things. And I am surprised that Google is not automatically assuming that developers are the type of developers that you have at Google. Where did that mindset shift come from?
Aparna: Oh, absolutely not. I think we would be in trouble if we did that. I studied electrical engineering in school. This would be like assuming that the top of the class is the kind of people that we want to reach, and it's just absolutely not. Like I said, I want to reach total beginners; I want to reach people who are non-developers with our developer platform. That's our explicit goal, and so we view developers as individuals with a range of superpowers that they've gained throughout their lives, professionally and personally, and people who are always on a path to learn new things, and we want to make it easy for them. We don't treat them as bodies in an employment relationship with some organization, or people with certain minimum-bar degrees, or whatever it is. As far as interviewing goes, Corey, in product management, which is the practice that I'm part of, we actually look for, in the interview, that the candidate is not thinking about themselves; they're not imposing themselves on the user base. So, can you think outside of yourself? Can you think of the user base? And are you inquisitive? Are you curious? Do you observe?
And how well do you observe differences and diversity, and how well are you able to grasp what might be needed by a particular segment? How well are you able to segment the user base? That's what we look for, certainly in product management, and I'm quite sure also in user experience. You're right, in engineering, of course, we're looking for technical skills, and so on, but that's not how we design our products; that's not how we design the usability of our products.
Corey: “If you people were just a little bit smarter slash more like me, then this would work a lot better,” is a common trope. Which brings us, of course, to the current state of serverless. I tend to view serverless as a largely failed initiative so far. And to be clear, I'm viewing this from an AWS-centric lens; that is the… we'll be charitable and call it pool in which I swim. They announced Lambda in 2015; that's great. “The only code you will ever write in the future is business logic.” Yeah, I might have heard that one before about 15 other technologies dating back to the '60s, but okay. And the expectation was that it was going to take off and set the world on fire. You just needed to learn the constraints of how this worked. And there were a bunch of them, and they were obnoxious, and it didn't have a learning curve so much as a learning cliff. And nowadays, we do see it everywhere, but also in small doses. It's mostly used as digital spackle to plaster over the gaps between various AWS services. What I'm not seeing across the board is a radical mindset shift in the way that developers are engaging with cloud platforms that would be heralded by widespread adoption of serverless principles. That said, we are on the heels here of Google Cloud Next, and you had a bunch of serverless announcements, so I'm going to go out on a limb and guess you might not agree with my dismal take on the serverless side of the world?
Aparna: Well, I think this is a great question because, despite the fact that I like not to be wishy-washy about anything, I actually both agree and disagree [laugh] with what you said. And that's funny.
Corey: Well, that's why we're talking about this here instead of on Twitter, where two contradictory things can't possibly both be true. Wow, imagine that; nuance—it doesn't fit in 280 characters. Please, continue.
Aparna: So, what I agree with is—I agree with you that the former definition of serverless, and the constrained way that we are conditioned to think about serverless, is not as expansive as originally hoped, from an adoption perspective. And I think that at Google, serverless is just no longer about only event-driven programming or microservices; it's about running complex workloads at scale while still preserving the delightful developer experience. And this is where the connection to the developer experience comes in. Because the developer experience, in my mind, is about time to value: how quickly can I achieve the outcome that I need for my business? And what are the things that get in the way of that? Well, setting up infrastructure gets in the way of that, having to scale infrastructure gets in the way of that, having to debug pieces that aren't actually related to the outcome that you're trying to get to gets in the way of that. And the beauty of serverless is, it's all in how you define serverless: what does this name actually mean?
If serverless only means functions and event-driven applications, then yes, actually, it has a better developer experience, but it is not expansive, and then it is limited, and it's trapped in its skin the way that you mentioned. [laugh].
Corey: And it doesn't lend itself very well to legacy applications—legacy, of course, being condescending engineering-speak for ‘it makes money.' But yeah, that's the stuff that powers the world. We're not going to be redoing all those things as serverless-powered microservices anytime soon, in most cases.
Aparna: At Google Cloud, we are redefining serverless. And so, what we are taking from serverless is the delightful user experience and the fact that you don't have to manage the infrastructure, and what we're putting into serverless is essentially serverless containers. And this is the big revolution in serverless—at least at Google Cloud, with serverless containers and our Cloud Run offering—that serverless is able to run much bigger varieties of applications, and we are seeing large enterprises running legacy applications, like you say, on Cloud Run, which is serverless from a developer experience perspective. There's no cluster, there is no server, there's no VM, there's nothing for you to set up from a scaling perspective. And it essentially scales infinitely. And it is very developer-focused; it's meant for the developer, not for the operator or the infrastructure admin. In reality, in enterprise, there is very much a segmentation of roles. And even in smaller companies, there's a segmentation of roles, even within the same person: they may have to do some infrastructure work and they may do some development work. And what serverless—at least in the context of Google Cloud—does is remove the infrastructure work and maximize the development work, so that you can focus on your application and you can get to that end result, that business value that you're trying to achieve. And with Cloud Run, what we've done is we've preserved that—and I would say, arguably, improved it, because we've done usability studies that show that we're 22 points above every other serverless offering from a usability perspective. So, it's super important to me that anybody can use this service. Anybody. Maybe even a non-developer can use this service. And that's where our focus is. And then what we've done underneath is remove many of the restrictions that are traditionally associated with serverless. So, it doesn't have to be event-driven, it is not only a particular set of languages or a particular set of runtimes, it is not only stateless applications, it's not only request-based billing, and it's not only short-running jobs. These are the kinds of things that we have removed, and I think we've just redefined serverless.
Corey: [unintelligible 00:17:05], on some level, the idea of short-lived functions with a maximum cap feels like a lazy answer to one of the hard problems in computer science, the halting problem. For those not familiar, my layman's understanding of it is, “Okay, you have a program that's running in a loop. How do you deterministically say that it is done executing?” And the functional answer to that is, “Oh, after 15 minutes, it's done. We're killing it.” Which I guess is an answer, but probably not one that's going to get anyone a PhD. It becomes very prescriptive, and it leads to really weird patterns trying to work around some of those limitations.
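Aparna's point about removing restrictions is concrete: the contract Cloud Run asks of a container is minimal—roughly, serve HTTP on the port passed in the PORT environment variable. A sketch of about the smallest service that satisfies it, as an illustration (Flask is an arbitrary choice of framework):

```python
import os
from flask import Flask  # pip install flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World"

if __name__ == "__main__":
    # Cloud Run injects the port to listen on via the PORT env var.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```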
Corey: And historically, yeah, by working within the constraints of the platform, it works super well. What interests me about Cloud Run is that it doesn't seem to have many of those constraints in quite the same way. It's, “Can you shove whatever monstrosity you've got into a container? You can't? Well, okay, there are ways to get there.” Full disclosure: I was very anti-container; the industry has yet again proven to me that I cannot predict the future. Here we are. “Great, can you shove a container in and hand it to some other place to run it?”—where, spoiler, people will argue with me on this and they are wrong, Google engineers are better at running infrastructure to run containers than you are. Full stop. That is the truism of how this works: economies of scale. I love the idea of being able to take something, throw it over a wall, and not have to think about the rest of it. But everything that I'm thinking about in this context looks certain ways, and it's shaped by the type of application that I'm working on or that I'm looking at most recently. What are you seeing in Cloud Run as far as interesting customer use cases? What are people doing with it that you didn't expect them to?
Aparna: Yeah, I think this is a great time to ask that question, because with the pandemic last year—I guess we're still in the pandemic, but with the pandemic, we had developers all over the world become much more important and much more empowered, just because there wasn't really much of an operations team, and there wasn't really as much coordination even possible. And so we saw a lot of customers, a lot of developers, moving to cloud, and they were looking for the easiest thing that they could use to build their applications. And as a result, serverless—and Cloud Run in particular—became extremely popular; I would say hockey stick in terms of usage. And we're seeing everything under the sun. ecobee—this is a home automation company that makes smart thermostats—they're using Cloud Run to launch a new camera product with multi-factor authentication and security built in, and they had a very tight launch timeline. They were able to very quickly meet that need. Another company—and you talk about, you know, brick and mortar—is IKEA, which you and I both like to shop [laugh] at—
Corey: Oh, I love building something from 500 spare parts, badly. It's basically bringing my AWS architecture experience into my living room. It's great. Please continue.
Aparna: Yeah, it's like, yeah—
Corey: The Swedish puzzle manufacturer.
Aparna: Yes. They're a great company, and I think that in the downturn and the lockdown, it was actually a very dicey time, a very tricky time, particularly for retailers. Of course, everybody was refurbishing their home or [laugh], you know, improving their home environment and their furniture. And IKEA started using serverless containers along with serverless analytics—so with BigQuery, and Cloud Run, and Cloud Functions—and one of the things they did is they were able to cut their inventory refresh rate from more than three hours to less than three minutes. This meant that when you were going to drive up and do some curbside pickup, you knew the order that you placed was actually in stock, which was fantastic for CSAT and everything. But that's the technical piece that they were able to do.
When I spoke with them, the other thing that they were able to do with Cloud Run and Cloud Functions is improve the work-life balance of their engineers, which I thought was maybe the biggest accomplishment. Because the platform, they said, was so easy for them to use and so easy for them to accomplish what they needed to accomplish, they had a better [laugh] life. And I think that's very meaningful.

Another company is MediaMarktSaturn; we've talked about them before. I don't know if I've spoken to you about them, but we've certainly talked about them publicly. They're a retailer in EMEA, and because of their use of Cloud Run, they were able to combine the speed of serverless with the flexibility of containers, and their development team was able to go eight times faster while handling a 145% increase in digital channel traffic. Again, there was a lot more digital channel traffic during COVID. And perhaps my favorite example is the COVID-19 exposure notifications work that we did with Apple.

Corey: An unfortunate example, but a useful one. I—

Aparna: Yes.

Corey: —we all—I think we all wish it wasn't necessary, but here's the world in which we live. Please, tell me more.

Aparna: I have so many friends in engineering and mathematics and these technical fields, and they're always looking at ways that technology can solve these problems. And I think especially something like the pandemic, which is so difficult to track, so difficult with the time that it takes for this virus to incubate and so on, so difficult to track these exposures: using the smartphone, using Bluetooth, to have a record of who has it and who they've been in contact with is a really interesting engineering problem, and a really interesting human problem. So, we were able to work on that, and of course, when you need a platform that's going to be easy to use, that's going to be something that you can put into production quickly, you're going to use Cloud Run. So, they used Cloud Run, and they also used Cloud Run for Anthos, which is the more hybrid version, for the on-prem piece. And so both of those were used in conjunction to back all of the services that were used in the notifications work.

So, those are some of the examples. Net-net, I think usability, especially in enterprise software, is extremely important, and I think that's the direction in which software development is going.

Corey: Are you building cloud applications with a distributed team? Check out Teleport, an open source identity-aware access proxy for cloud resources. Teleport provides secure access to anything running somewhere behind NAT: SSH servers, Kubernetes clusters, internal web apps, and databases. Teleport gives engineers superpowers! Get access to everything via single sign-on with multi-factor. List and see all SSH servers, Kubernetes clusters, or databases available to you. Get instant access to them all using tools you already have. Teleport ensures best security practices like role-based access, preventing data exfiltration, providing visibility, and ensuring compliance. And best of all, Teleport is open source and a pleasure to use. Download Teleport at https://goteleport.com. That's goteleport.com.

Corey: It's easy for me to watch folks—like you—in keynotes at events—like Cloud Next—talk about things and say, “This is how the world is building things, and this is what the future looks like.” And I can sit there and pick it to pieces all day, every day.
It's basically what I do, because of deep-seated personality problems. It's very different to say that about a customer who has taken that thing and built it into something transformative that solves a very real problem they have. I may not relate to that problem, but when customers have real problems, find solutions like this, and fix them, I do not believe they are wrong in how they're approaching things.

No one sees the constraints that shape things; no one shows up in the morning hoping to do a crap job today, unless, you know, you're the VP of Integrity at Facebook or something. But there's a very real sense that companies have a bunch of different drivers, and if a tool or a service or a platform solves a problem for them, you'd better be very sure before you step up and start saying, “No, you're doing it wrong.” In earlier years, I did not see a whole lot of customer involvement with Cloud Next. It was always a, “Well, a bunch of Googlers are going to tell me how this stuff works, and they'll talk about theoretical things.”

That's not the case anymore. You have a whole bunch of highly respectable reference customers out there doing a whole lot of really interesting things. And more to the point, they're willing to go on record talking about this. And I'm not talking about fun startups that are, “Great, it's Twitter, only for pets.” I'm talking banks, companies where mistakes are going to show and leave a mark. It's really hard to reconcile what I'm seeing with Google Cloud in 2021 with what I was seeing, let's say, five or six years ago. What drove that change?

Aparna: Yes, Corey, I think you're definitely correct about that. There's no doubt about it that we have a number of really tremendous customers, really tremendous enterprise references, and so on. I run the Google Cloud Developer Platform, and for me, the developers that I work with and the developers that this platform serves are the inspiration for what we do. And in the last six or seven years that I've worked in Google Cloud, that has always been the case. So, nothing has changed from my perspective, in that regard.

If anything, what has changed is that we have far more users, we have been growing exponentially, and we have many more large enterprise customers. But in terms of my journey, I started with the Kubernetes open-source project, I was one of the very early people on that, and I was working with a lot of developers, in that case, in the open-source community. A lot of them became GKE customers, and it just grew. And now we have so many [laugh] customers and so many developers, and we have developed this platform with them. It's been a matter of co-innovation, especially on Kubernetes. It has been very much, “Okay, you tell us,” and it's a need-based relationship, you know? Something is not working, we are there and we fix it.

Going back to 2017 or whenever it was that Pokemon Go was running on GKE, that was a moment when we realized, “Oh, this platform needs to scale. Okay, let's get at it.” And that's where, Corey, it really helps to have great engineers. For all the pros and cons, I think that's where you want those super-sharp, super-driven, super-intelligent folks, because they can make things like that happen, they can make it happen in less than a week—they can make it happen over a Saturday, so that Pokemon Go can go live in Japan and everybody can be playing that game.
And that's what inspires me.

And that's a game, but we have a lot of customers that are running health applications. We have a customer that's running ambulances on the platform. And so this is life-and-death stuff; we have to take that very seriously, and we have to be listening to them and working with them. But I'm inspired, and I think that our roadmap, and the products, and the features that we build are inspired by what they are building on the platform. And they're combining all kinds of different things. They're taking our machine learning capabilities, they're taking our analytics capabilities, they're taking our Maps API, and they're combining it with Cloud Run, they're combining it with GKE. Often they're using both of those.

And they're running new services. We've got a customer in Indonesia that's running a food delivery service; I've got customers that are analyzing the cornfields in the middle of the country to improve crop yield. So, that's the kind of inspiring work, and each of those customers, each of those users, are coming back to us and saying, “Oh, you know, I need a different type of”—it's very detailed, like, “I need a different type of file system that gives me greater speed or better performance.” We just had a gaming company running on GKE where we won out over a different cloud because of the performance improvements we were able to provide on container startup times. It was just a significant performance improvement. We'll probably publish it in the coming few months.

That's the kind of thing that drives it, and I'm very glad that I have a strong engineering team in Google Cloud, and I'm very glad that we have these amazing customers that are trying to do these amazing things, and that they're directly engaging with us and telling us what they need from us, because that's what we're here for.

Corey: To that end, one more area I want to go into before we call this a show: you've had Cloud Build for a little while, and that's great. Now, hot off the presses, you wound up effectively taking that one step further with Cloud Deploy. And I am still mostly someone with terrible build and release practices that people would be ashamed of, and I struggle to understand the differentiation between what I would do with Cloud Build and what I would do with Cloud Deploy. I understand they're both serverless. I understand that they are things that large companies care about. What is the story there?

Aparna: Yeah, it's a journey. As you start to use containers—and these days, like you said, Corey, a lot of people are using them—you start to have a lot of microservices, and one of the benefits of container usage is that it's really quick to release new versions. You can have different versions of your application, you can test them out, you can roll them out. And so these DevOps practices become much more attainable, much more reachable. And we just put out, I think, the seventh version of the DevOps Research report—the DORA report—which shows that customers that follow best practices achieve results two times better in terms of business outcomes, and so on.

And there are many metrics that show that this kind of thing is important.
But I think the most important thing I learned during the pandemic, as we were coming out of it, is that a lot of—and you mentioned enterprises—large banks, large companies' CIOs and CEOs who basically were not prepared for the lockdown, not prepared for the fact that people weren't going to be going into branches, came to Google Cloud and said, “I wish that I had implemented DevOps practices. I wish that I had implemented the capability to roll out changes frequently, because I need that now. I need to be able to experiment with a new banking application that's mobile-only. I need to be able to experiment with curbside delivery. I'm much more dependent on the software than I used to be, and I wish that I had put those DevOps practices in place.”

And so at the beginning of 2021, all our conversations were with customers—especially those, you know, you said ‘legacy,' and I don't think that's the right word, but the traditional companies that have been around for hundreds of years—and all of them said, “Software is much more important. Even if I'm not a software company, at least a large division of my group is now a software group, and I want to put DevOps practices into play, because I know that I need that and that's a better way of working.”

By the way, there's a security aspect to this that I'd like to come back to, because it's really important—especially in banking, financial services, and public sector—as you move to a more agile DevOps workflow, to have security built into that. So, let me come back to that. But with regard to Cloud Build and Cloud Deploy: Cloud Deploy is something I've been wanting to bring to market for a couple of years. We've been talking about it, and we've been working on it actively for more than a year on my team. And I'm very, very excited about this service, because what it does is let you put this DevOps practice into play: your artifacts are built and stored in the artifact repository, and they can then automatically be deployed into your runtime—GKE today, and Cloud Run in the future—and you can set how you want to deploy them.

Do you want to deploy them to a particular environment that you designate the test environment, the environment to which your developers have access in a certain way? Like, it's a test environment, so they can make a lot of changes. And then when do you want to graduate from test to staging, and when do you want to graduate to production and do that gradual rollout? Those are some of the things that Cloud Deploy does.

And I think it's high time, because how do you manage microservices at scale? How do you really take advantage of container-based development? It's through this type of tooling. And that's what Cloud Deploy does. It's just the beginning of that, but it's a delightful product. I've been playing around with it; I love it, and we've seen just tremendous reception from our users.
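To make that test-to-staging-to-production flow concrete, here is a minimal sketch of a Cloud Deploy pipeline definition in the DeliveryPipeline/Target YAML format the service uses. The pipeline, project, and cluster names are placeholder assumptions; the staging and prod targets would look like the test one.

    # clouddeploy.yaml: a pipeline that promotes a release test -> staging -> prod
    apiVersion: deploy.cloud.google.com/v1
    kind: DeliveryPipeline
    metadata:
      name: my-app-pipeline
    serialPipeline:
      stages:
      - targetId: test
      - targetId: staging
      - targetId: prod
    ---
    apiVersion: deploy.cloud.google.com/v1
    kind: Target
    metadata:
      name: test
    gke:
      cluster: projects/my-project/locations/us-central1/clusters/test-cluster

    # Register the pipeline, then create a release to start it through the stages:
    #   gcloud deploy apply -f clouddeploy.yaml --region us-central1
    #   gcloud deploy releases create rel-001 --delivery-pipeline my-app-pipeline --region us-central1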
Corey: I'm looking forward to kicking the tires on it myself. I want to circle back to talk about the security aspect of it. Increasingly, I'm spending more of my attention looking at cloud security, because everyone else is, too, and some of us have jobs that don't include the word security but need to care about it. That's why I have a Thursday edition of my newsletter now, talking specifically about that.

What is the story around security these days, from your perspective? And again, it's a huge overall topic, and let's be clear here, I'm not asking, “What does Google Cloud think about security?” That would fill an encyclopedia. What is your take on it? And where do you want to talk about this in the context of Cloud Deploy?

Aparna: Yeah, so I think about security from the perspective of the Google Cloud Developer Platform, and specifically from the perspective of the developer. And like you said, security is not often in the title of anybody in the developer organization, so how do we make it seamless? How do we make it such that security is not something that is going to catch you as you're doing your development? That's the critical piece. And at the same time, one of the things we saw during 2020 and 2021 is that the number of cyberattacks went through the roof. I think there was a 400 to 600% increase in the number of software supply chain attacks. These are attacks where some malicious hacker has come in and inserted some malicious code into your software. [laugh]. Your software, Corey. You know, you, the unsuspecting developer—

Corey: Well, it used to be my software; now there's some debate about that.

Aparna: Right. That's true, because most software is using open-source dependencies, and these open-source dependencies have a pretty intricate web of dependencies that they are themselves using. So, it's a transitive problem, whether you're using a language like Python or whatever language you're using. And there's a number of—

Corey: Crappy bash by default. But yes.

Aparna: Well, it was actually a bash script vulnerability, I think, in the Codecov breach that happened, I believe, earlier this year, where a malicious bash script was injected into Codecov's build system. And there are all these new attack vectors that are specifically targeting developers. And whether it's nation-states or whoever it is that's causing some of these attacks, it's a problem of national and international magnitude. And so I'm really excited that we have the expertise in Google Cloud, and beyond Google Cloud.

Google is a very security-conscious company. [laugh]. And we have built a lot of tooling internally to avoid those kinds of attacks, so with Cloud Build, and with what we're going to do with Cloud Deploy, we're building in the capability for code to be signed, for artifacts to be signed with cryptographic keys, and for that signing—that attestation, we call it an attestation—to be checked at various points along the software supply chain. So, as you're writing code, as you're submitting the code, as you're building the containers, as you're storing the containers, and then finally as you're deploying them into whatever environment you're deploying to, we check these keys, and we make sure that the software that is going through the system is actually what you intended, and that there isn't malicious code injection taking place.
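Aparna doesn't name a specific product here, but on Google Cloud this sign-then-verify pattern is what Binary Authorization implements. As a hedged sketch, a policy that blocks any container image lacking an attestation from the build pipeline might look roughly like this; the project and attestor names are placeholder assumptions.

    # policy.yaml: refuse to run images without a build-pipeline attestation
    globalPolicyEvaluationMode: ENABLE
    defaultAdmissionRule:
      evaluationMode: REQUIRE_ATTESTATION
      enforcementMode: ENFORCED_BLOCK_AND_AUDIT
      requireAttestationsBy:
      - projects/my-project/attestors/built-by-cloud-build

    # Apply it with:
    #   gcloud container binauthz policy import policy.yaml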
And also, we scan the software, we scan the code, we scan the artifacts to check for vulnerabilities, known vulnerabilities as well as unknown vulnerabilities. Known vulnerabilities from a Google perspective; so, Google's always a little bit ahead, I would say, in terms of knowing what the vulnerabilities are out there, because we work on so much software across operating systems and programming languages, across the full gamut of software in the industry, and we are constantly securing software. So, we check for those vulnerabilities, we alert you, and we help to remediate those vulnerabilities.

Those are the types of things that we're doing. And it's all in service of keeping enterprise developers secure, certainly, but also the long tail, the average developer, everybody: helping them to be secure so that they don't get hacked and their companies don't get hacked.

Corey: It's nice to see people who are not directly security vendors talking about this stuff. By which I mean, you're not using this as the fear, uncertainty, and doubt angle to sell a given service: “We have to talk about this exploit because otherwise, no one will ever buy this.” Something like Cloud Deploy is very much aligned with a best-practices approach to release engineering. It's not, strictly speaking, a security product, but being able to wrap things that are very security-centric around it is valuable.

Now, sponsors are always going to do interesting things at various expo halls, and, oh yeah, it's the same product warmed over. This is very much not that, and I don't interpret anything you're saying as trying to sell something via the fear, uncertainty, and doubt model. There are a lot of different areas where I will be skeptical hearing about this from different companies; I do take security words from Google extremely seriously because, let's be clear, in the past 20-however-many years, you have established a clear track record of caring about these things.

Aparna: Yeah. And I have to go back to my initial mission statement, which is to help developers accelerate time to value. And one of the things that will certainly get in the way of accelerating time to value is security breaches, by their nature. If you are not running a supply chain that is secure, then it is very difficult for you to empower your developers to do those releases frequently and to update the software frequently, because what if the update has an issue? What if the update has a security vulnerability?

That's why it's really important to have a toolchain that protects against that, that checks for those things, that logs those things so that there's an audit trail available, and that has the capability for your security team to set policies to avoid those kinds of things. I think that's how you get speed: you get it with security built in, and that's extremely important to developers, and especially cloud developers.

Corey: I want to thank you for taking the time to speak to me about all the things that you've been working on and how you view this industry unfolding. If people want to learn more about what you're up to, and how you think about these things, where can they find you?

Aparna: Well, Corey, I'm available on Twitter, and that may be one of the best ways to reach me. I'm also available at various customer events that we are having; most of them are online now. And so I'll provide you more details on that, and I can be reached that way.

Corey: Excellent. I will, of course, include links to that in the [show notes 00:38:43]. Thank you so much for being so generous with your time. I appreciate it.

Aparna: Thank you so much.
I greatly enjoyed speaking with you.

Corey: Aparna Sinha, Director of Product Management at Google Cloud. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. And that sentence needed the word ‘cloud' about four more times in it. And if you've enjoyed this episode, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with a loud, angry comment telling me that I just don't understand serverless well enough.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.

Google Cloud Platform Podcast
Serverless, Redefined with Jason Polites

Google Cloud Platform Podcast

Play Episode Listen Later Dec 1, 2021 23:06


Guest Jason Polites joins Stephanie Wong and Bukola Ayodele this week to talk about advances in serverless computing with Cloud Run and how developers and wallets are benefiting.

Cloud Run, a managed service which allows developers to run containers, is now available in all GCP regions and offers increased resource access, global load balancing, and more. Jason tells us how this evolution of Cloud Run has led to support for bigger, more complicated, and even legacy software functioning fully and efficiently in a serverless environment. The team at Google continues to expand offerings in order to bring the benefits of auto-scaling and other managed services to all workloads. Always On CPU, for example, supports projects that run background tasks.

Later, Jason gives us examples of projects that best fit a serverless infrastructure and the cost benefits of using Cloud Run. He offers cost-saving tips for projects, like committed use discounts and auto-scaling limits. Balancing cost efficiency with global reliability is important, and Jason tells us how this is easily achieved with Cloud Run features like scaling to zero.

To lower the barrier to entry for new Cloud Run and container users, Jason and his team have been working on open-source buildpacks. Developers can turn code into a container without creating Dockerfiles. The containers running in Cloud Run are highly portable as well, giving companies the freedom to move their containers freely.

Jason Polites

Jason leads the Serverless Compute product team in Google Cloud, including products like Cloud Run and App Engine.

Cool things of the week

  • Illicit coin mining, ransomware, APTs target cloud users in first Google Cybersecurity Action Team Threat Horizons report blog
  • Microservices architecture on Google Cloud blog

Interview

  • Cloud Run site
  • Cloud Run CPU Allocation docs
  • Run more workloads on Cloud Run with new CPU allocation controls blog
  • Docker site
  • Google Cloud Buildpacks site
  • App Engine site
  • Cloud Functions site
  • GCP Podcast Episode 173: Cloud Run with Steren Giannini and Ryan Gregg podcast
  • GCP Podcast Episode 203: Cloud Run GKE with Donna Malayeri podcast
  • GCP Podcast Episode 261: Full Stack Dart with Tony Pujals and Kevin Moore podcast

What's something cool you're working on?

Bukola just finished Season 2 of the Click to Deploy series.
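As a hedged sketch of the two features described above (the service name and region are placeholder assumptions): buildpacks let you deploy straight from source with no Dockerfile, and a separate flag keeps CPU always allocated for background work.

    # Deploy from source; Google Cloud Buildpacks turn the code into a
    # container automatically, with no Dockerfile required.
    gcloud run deploy my-service --source . --region us-central1

    # Opt in to always-allocated ("Always On") CPU so background tasks can
    # keep running between requests.
    gcloud run services update my-service --no-cpu-throttling --region us-central1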

Screaming in the Cloud
The Future of Google Cloud with Richard Seroter

Screaming in the Cloud

Play Episode Listen Later Nov 11, 2021 40:47


About Richard

He's also an instructor at Pluralsight, a frequent public speaker, and the author of multiple books on software design and development. Richard maintains a regularly updated blog (seroter.com) on topics of architecture and solution design and can be found on Twitter as @rseroter.

Links:

  • Twitter: https://twitter.com/rseroter
  • LinkedIn: https://www.linkedin.com/in/seroter
  • Seroter.com: https://seroter.com

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R, because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high-performance cloud compute at a price that—while, sure, they claim it's better than AWS pricing—and when they say that, they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r.com slash screaming.

Corey: You know how git works, right?

Announcer: Sorta, kinda, not really. Please ask someone else!

Corey: That's all of us. Git is how we build things, and Netlify is one of the best ways I've found to build those things quickly for the web. Netlify's git-based workflows mean you don't have to play slap and tickle with integrating arcane nonsense and webhooks, which are themselves about as well understood as git. Give them a try and see what folks ranging from my fake Twitter-for-pets startup to global Fortune 2000 companies are raving about. If you end up talking to them—because you don't have to; they get why self-service is important—be sure to tell them that I sent you and watch all of the blood drain from their faces instantly. You can find them in the AWS Marketplace or at www.netlify.com. N-E-T-L-I-F-Y.com

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Once upon a time, back in the days of VH1—which was like MTV, except it played music videos—they would have a show that was “Where are they now?” looking at former celebrities. I will not use the term washed up, because that's going to be insulting to my guest.

Richard Seroter is a returning guest here on Screaming in the Cloud. We spoke to him a year ago, when he was brand new in his role at Google as director of outbound product management.
At that point, he basically had stars in his eyes and was aspirational around everything he wanted to achieve. And now it's a year later and he has clearly failed, because it's Google, so outbound products are clearly the things that they are going to be deprecating. And in the past year, I am unaware of a single Google Cloud product that has been outright deprecated. Richard, thank you for joining me, and what do you have to say for yourself?

Richard: Yeah, “Where are they now?” I feel like I'm the Leif Garrett of cloud here, joining you. So yes, I'm still here, I'm still alive. A little grayer after twelve months in, but happy to be here chatting cloud, chatting whatever else with you.

Corey: I joke a little bit about, “Oh, Google winds up killing things.” And let's be clear, your consumer division, you know, Google is prone to that. And understanding a company's org chart is a challenge. A year or two ago, I was of the opinion that I didn't need to know anything about Google Cloud because it would probably be deprecated before I really had to know about it. My opinion has evolved considerably based upon a number of things I'm seeing from Google.

Let's be clear here, I'm not saying this to shine you on or anything like that; it's instead that I've seen some interesting things coming out of Google that I consider to be the right moves. One example of that is publicly signing multiple ten-year deals with very large, serious institutions like Deutsche Bank, and others. Okay, you don't generally sign contracts with companies of that scale and intend not to live up to them. You're hiring Forrest Brazeal as your head of content for Google Cloud, which is not something you should do lightly, and not something that is a short-term play in any respect. And the customer experience has continued to improve; Google Cloud products have not gotten worse, and I'm seeing in my own customer conversations that discussions about Google Cloud have become significantly less dismissive than they were over the past year. Please go ahead and claim credit for all of that.

Richard: Yeah. I mean, the changes were already underway a year ago when I joined. Thomas Kurian has made a huge impact on some of that. You saw us launch the enterprise APIs thing a while back, which was, “Hey, here's, for the most part, every one of our products that has a fixed API. We're not going to deprecate it without a year's notice, whatever it is. We're not going to make certain types of changes.” Maybe that feels like, “Well, you should have had that before.” All right, all we can do is improve things moving forward. So, I think that was a good change.

Corey: Oh, I agree. I think that was a great thing to do. You had something like 80-some-odd percent coverage of Google Cloud services, and great, that's only going to increase with time, I can imagine. But I got a little pushback from a few Googlers for not being more congratulatory towards them for doing this. And look, it's a great thing, don't get me wrong, but you don't exactly get a whole lot of bonus points and kudos and positive press coverage—not that I'm press—for doing the thing you should have been doing [laugh] all along.

It's, “This is great.
This is necessary.” And it demonstrates a clear awareness that there was—rightly or wrongly—a perception issue around the platform's longevity, and that you've gone significantly out of your way to address that in ways that go far beyond just yelling at people on Twitter that they don't understand the true philosophy of Google Cloud. Which is the right thing to do.

Richard: Yeah, I mean, as you mentioned, look, the consumer side is very experimental in a lot of cases. I still mourn Google Reader. Like, those things don't matter—

Corey: As do we all.

Richard: Of course. So, I get that. Google Cloud—and of course we have the same cultural thing, but at the same time, there's a lifecycle management that's different in Google Cloud. We do not deprecate products that much. You know, enterprises make decade-long bets. I can't be swapping databases or just turning off messaging things. Instead, we're building a core set of things and making them better.

So, I like the fact that we have a pretty stable portfolio that keeps getting a little bit bigger. Not crazy bigger; I like that we're not just throwing everything out there saying, “Rock on.” We have some opinions. But I think that's been a positive trend; customers seem to like that we're making these long-term bets. We're not going anywhere for a long time, and our earnings quarter after quarter show it—boy, this will actually be a profitable business pretty soon.

Corey: Oh, yeah. People love to make hay, and by people, I stretch the term slightly and talk about, “Investment analysts say that Google Cloud is terrible because at your last annual report you're losing something like $5 billion a year on Google Cloud.” And everyone looked at me strangely when I said, “No, this is terrific. What that means is that they're investing in the platform.” Because let's be clear, folks at Google tend to be intelligent, by and large, or at least intelligent enough that they're not going to start selling cloud services for less than it costs to run them.

So yeah, it is clearly an investment in the platform and growth of it. The only way it should be turning a profit at this point is if there's no more room to invest that money back into growing the platform, given your market position. I think that's a terrific thing, and I'm not worried at all about it losing money. I don't think anyone should be.

Richard: Yeah, I mean, strategically, look, this doesn't have to be the same type of moneymaker that it has to be for some other clouds' portfolios. Look, this is an important part, but you look at those ten-year deals that we've been signing: when you look at Univision, that's a YouTube partnership; you look at Ford, that had to do with Android Auto; you look at these others. This is where us being also a consumer and enterprise SaaS company is interesting, because this isn't just who's cranking out the best IaaS. I mean, that can be boring stuff over time. It's like, who's actually doing the stuff that maybe makes a traditional company more interesting, because they partner on some of those SaaS services. So, those are the sorts of deals and those sorts of arrangements where cloud needs to be awesome, and successful, and make money, but it doesn't need to be the biggest revenue generator for Google.

Corey: So, when we first started talking, you were newly minted as a director of outbound product management. And now, you are not the only one; there are apparently 60 of you there, and I'm no closer to understanding what the role encompasses. What is your remit?
Where do you start? Where do you stop?

Richard: Yeah, that's a good question. So, there are outbound product management teams, mostly associated with a portfolio area: network, storage, AI, analytics, database, compute, application modernization-y sort of stuff—which is what I cover—containers, dev tools, serverless. Basically, I am helping make sure the market understands the product and the product understands the market. And not to be totally glib, but a lot of that is, we are amplification.

I'm amplifying product out to market, analysts, field people, partners: “Do you understand this thing? Can I help you put this in context?” But then, really importantly, I'm trying to help make sure we're also amplifying the market back to our product teams. You're getting real customer feedback: “Do you know what that analyst thinks? Have you heard what happened in the competitive space?”

And so sometimes companies seem to miss that, and PMs poke their heads up only when they're about to plan a product or about to launch a product, because they need some feedback. But keeping that constant pulse on the market, on customers, on what's going on, I think that can be a secret weapon. I'm not sure everybody does that.

Corey: Spending as much time as I do on bills, admittedly AWS bills, this is a pattern that tends to unfold across every provider I've seen. The keynotes are chock-full of awesome managed service announcements, things that are effectively turnkey at further-up-the-stack levels, but the bills invariably look a lot more like, yeah, we spend a bit of money on that, and then we run 10,000 virtual instances in a particular environment and we just treat it like it's an extension of our data center. And that's not exciting; that's not fun, quote-unquote, but it's absolutely what customers are doing, and I'm not going to sit here and tell them that they're wrong for doing it. That is the hallmark of a terrible consultant: “I don't understand why you're doing what you're doing, so it must be foolish.” How about you stop and gain some context into why customers do the things that they do?

Richard: No, I send around a goofy newsletter every week to a thousand or two people, just on things I'm learning from the field, from customers, trying to make sure we're thinking bigger. A couple of weeks ago, I wrote about an idea: modernization is awesome, and I love when people upgrade their software. By the way, for most people, migration is a heck of a lot easier. If I can just get this into your cloud, yeah, love that; moving VMs around is not the most interesting thing, but most people, in their budget, don't have time to rewrite every Java app in Go. Everybody's not changing .NET Framework to .NET Core.

Like, who do I think everybody is? No, I just need to try to get some incremental value first. Yes, then hopefully I'll swap out my self-managed SQL database for a Spanner or a managed service. Of course, I want all of that, but this idea that I can turn my line-of-business loan processing app into a thousand functions overnight is goofy. So, how are we instead thinking more pragmatically about migration, and then modernizing some of it? But even that sort of mindset, look, Google thinks about innovation and modernization first. So, I'm also just trying to help us take a step back and go, “Gosh, what is the normal path?
Well, it's a lot of migration first, some modernization, and then there's some steady-state work there.”

Corey: One of the things that surprised me the most about Google Cloud in the market, across the board, has been the enthusiastic uptake for enterprise workloads. And by enterprise workloads, I'm talking about things like SAP HANA doing a whole bunch of deployments there; we're talking Big Iron-style enterprise-y things that, let's be honest, contravene most of the philosophy that Google has always held and espoused publicly, at least on conference stages, about how software should be built. And I thought that would cut against you and make it very difficult for you folks to gain headway in that market, and I could not have been more wrong. I'm talking to large enterprises who are enthusiastically talking about Google Cloud. I've got to level with you: compared to a year or two ago, I don't recognize the place.

Richard: Mmm. I mean, some of that, honestly, in the conversations I have (and I do a handful of customer calls every week), I think folks still want something familiar, but they're looking for maybe a further step on some of it. And that means, like, yes, is everybody going to offer VMs? Yeah, of course. Is everyone going to have MySQL? Obviously.

But if I'm an enterprise and I'm doing these generational bets, can I cheat a little bit? If I partner with more of an innovation partner versus maybe just the easy next step, am I buying some more relevance for the long term? So, am I getting into an environment that has some really cool native zero-trust stuff? Am I getting into an environment with global backend services, where I'm not just stitching together a bunch of regional stuff? How can I cheat by using a more innovative vendor versus just lifting and shifting to what feels like hosted software in another cloud? I'm seeing more of that, because these migrations are tough; nobody should be just randomly switching clouds. That's insane.

So, can I make, maybe, one of these big bets with somebody who feels like they might actually even improve my business as a whole, because I can work with Google Pay and improve how I do mobile payments, or I could do something here with Android? Or, heck, all my developers are using Angular and Flutter; aren't I going to get some benefit from working with Google? So, we're seeing that, kind of, add-on effect of, “Maybe this is a place not just to host my VMs, but to take a generational leap.”

Corey: And I think that you're positioning yourselves in a way to do it. Again, talk about things that you wouldn't have expected to come out of Google of all places, but your console experience has been first-rate, and has been for a while. The developer experience is awesome; I don't need to learn the intricacies of 12 different services for what I'm trying to do just in order to get something basic up and running. I can stop all the random little billing things in my experimental project with a single click, which, admittedly, has a confirm, which you kind of want. But it lets you reason about these things.

It lets you get started building something, and there's a consistency and cohesiveness to the console that, again, I am not a graphic designer, by any stretch of the imagination. My most commonly used user interface is a green-screen shell prompt, and then I'm using Vim to wind up writing something horrifying, ideally in Python, but more often in YAML.
And that has been my experience, but just clicking around the console, it's clear that there was significant thought put into the design, the user experience, and the way of approaching folks who are starting to look very different from a user persona perspective.

Richard: I can—I mean, I love our user research team; they're actually fun to hang out with and watch what they do. But you have to remember, Google as a company, I don't know, cloud is the first thing we had to sell. We didn't have to sell Gmail. I remember 15 years ago, people were waiting for invites. And who buys Maps, or who buys YouTube? For the most part, we've had to build things that were naturally interesting and easy to use, because otherwise you would just switch to anything else, because everything was free.

So, some of that does infuse Google Cloud: “Let's just make this really easy to use. And let's just make sure that, maybe, you don't hate yourself when you're done jumping into a shell from the middle of the console.” It's like, that should be really easy to do—or upgrade a database, or make changes to things. So, I think some of the things we've learned from the consumer side have made their way into how we think of UX and design, because maybe this stuff shouldn't be terrible.

Corey: There's a trope going around where I wound up talking about the next million cloud customers. And I'm going to have to write a sequel to it, because it turns out that I've made a fundamental error, in that I've accepted the narrative that all of the large cloud vendors are pushing, to the point where I heard it from so many folks I just accepted it unthinkingly and uncritically, and that's not what I should be doing. And we'll get to what I was wrong about in a minute, but the thinking goes that the next big growth area is large enterprises, specifically around corporate IT. And those are folks who are used to managing things in a GUI environment—which is fine—and clicking around in web apps. Now, it's easy to sit here on our high horse and say, “Oh, you should learn to write code,” or YAML, which is basically code. Cool.

As an individual, I agree someone should, because as soon as they do that, they are able to go out and take that skill to a more lucrative role. The company then has to backfill someone into the role that they just got promoted out of, and the company still has that dependency. And you cannot succeed in that market with a philosophy of, “Oh, you built something in the console. Now, throw it away and do it right.” Because that is maddening to that user persona. Rightfully so.

I'm not that user persona, and I find it maddening when I have to keep tripping over that particular thing. How did that come to be, from your perspective? First, do you think that is where the next million cloud customers come from? And have I adequately captured that user persona, or am I completely off in the weeds somewhere?

Richard: I mean, I shared your post internally when that one came out, because it resonated with how we were thinking about it. Again, it's easy to think about the cloud-native operators, it's Spotify doing something amazing, or this team at Twitter doing something, or whatever. And it's not even to be disparaging. Like, look, I spent five years in enterprise IT and I was surrounded by operators who had to run a dozen different systems; they weren't dedicated to just this thing or that.
So, what are the tools that make my life easy? A lot of software just comes with UIs for quick install and upgrades, and how does that logic translate to this cloud world? I think that stuff does matter. How are you meeting these people a little better where they are? I think the hard part that we will always have, in every cloud provider, is—I think you've said this in different forums—how do I not sometimes rub the data center on my cloud, or vice versa? I also don't want to change the experience so much that I degrade it over the long term, and I've actually somehow done something worse.

So, can I meet those people where they are? Can we pull some of those experiences in, but not accidentally do something that kind of messes up the cloud experience? I mean, that's a fine line to walk. Does that make sense to you? Do you see where there's a… I don't know, you could accidentally cater to a certain audience too much, and change the experience for the worse?

Corey: Yes, and no. My philosophy on it is that you have to meet customers where they are, but only to a point. At some point, what they're asking for becomes actively harmful or disadvantageous to provide for them. “I want you to run my data center for me,” is on some level what some cloud environments look like, and I'm not going to sit here and tell people they're inherently wrong for that. Their big reason for moving to the cloud was because they keep screwing up replacing failed hard drives in their data center, so we're going to put it in the cloud.

Is it more expensive that way? Well, sure, in terms of actual cash outlay, it almost certainly is, but they're also not going down every month when a drive fails, so what's the value of that? It's a capability story. That becomes interesting to me, and I think that trying to sit here in isolation and say, “Oh, this application is not how we would build it at Google,” misses the point. It's, “Yeah, you're Google. There's an entire universe of different industries that look nothing whatsoever like Google.” The constraints are different, the resources are different, and—

Richard: Sure.

Corey: —their approaches to problem-solving are different. When you built out Google, and even when you're building out Google Cloud, look at some of the oldest, craftiest stuff you have in your entire Google environment, and then remember that there are companies out there that are hundreds of years old. It's a different order of magnitude as far as era, as far as understanding of what's in the environment, and that's okay. It's a very broad and very diverse world.

Richard: Yeah. I mean, that's, again, why I've been thinking more about migration than even some of the modernization piece. Should you bring your network architecture from on-prem to the cloud? I mean, I think in most cases, no. But I understand sometimes that edge firewall, that internal trust model you had on-prem, okay, you're trying to replicate that.

So, yeah, like you say, I want to meet people where they are. Can we at least find some strategic leverage points to upgrade aspects of things as you get to a cloud, to save you from yourself in some places? Because all of a sudden, you have ten regions and you only had one data center before. So, much more room for mistakes. Where are the right guardrails? We're probably more opinionated than others at Google Cloud.
I mean, I think we've loosened up a lot more than maybe people [laugh] would have thought a few years ago, from being hyper-opinionated on how you run software.Corey: I will actually push back a bit on the idea that you should not replicate your on-premises data center in your cloud environment. Sure, are there more optimal ways to do it that are arguably more secure? Absolutely. But a common failure mode in moving from data center to cloud is, “All right, we're going to start embracing this entirely new cloud networking paradigm.” And it is confusing, and your team that knows how the data center network works really well are suddenly in way over their heads, and they're inadvertently exposing things they don't intend to or causing issues.The hard part is always people, not technology. So, when I glance at an environment and see things like that, perfect example, are there more optimal ways to do it? Oh, from a technology perspective, absolutely. How many engineers are working on that? What's their skill set? What's their position on all this? What else are they working on? Because you're never going to find a team of folks who are world-class experts in every cloud? It doesn't work that way.Richard: No doubt. No doubt, you're right. There's areas where we have to at least have something that's going to look similar, let you replicate aspects of it. I think it's—it'll just be interesting to watch, and I have enough conversations with customers who do ask, “Hey, where are the places we should make certain changes as we evolve?” And maybe they are tactical, and they're not going to be the big strategic redesign their entire thing. But it is good to see people not just trying to shovel everything from one place to the next.Corey: This episode is sponsored in part by something new. Cloud Academy is a training platform built on two primary goals. Having the highest quality content in tech and cloud skills, and building a good community the is rich and full of IT and engineering professionals. You wouldn't think those things go together, but sometimes they do. Its both useful for individuals and large enterprises, but here's what makes it new. I don't use that term lightly. Cloud Academy invites you to showcase just how good your AWS skills are. For the next four weeks you'll have a chance to prove yourself. Compete in four unique lab challenges, where they'll be awarding more than $2000 in cash and prizes. I'm not kidding, first place is a thousand bucks. Pre-register for the first challenge now, one that I picked out myself on Amazon SNS image resizing, by visiting cloudacademy.com/corey. C-O-R-E-Y. That's cloudacademy.com/corey. We're gonna have some fun with this one!Corey: Now, to follow up on what I was saying earlier, what I think I've gotten wrong by accepting the industry talking points on is that the next million cloud customers are big enterprises moving from data centers into the cloud. There's money there, don't get me wrong, but there is a larger opportunity in empowering the creation of companies in your environment. And this is what certain large competitors of yours get very wrong, where it's we're going to launch a whole bunch of different services that you get to build yourself from popsicle sticks. Great. That is not useful.But companies that are trying to do interesting things, or people who want to found companies to do interesting things, want something that looks a lot more turnkey. 
If you are going to be building cloud offerings that, for example, are terrific building blocks for SaaS companies, then it behooves you to make actual investments, rather than just a generic credit offer, in spurring the creation of those types of companies. If you want to build a company that does payroll systems in a SaaS, cloud way: “Partner with us. Do it here. We will give you a bunch of credits. We will introduce you to your first ten prospective customers.”

And effectively, actually invest in a company's success, as opposed to pitch-deck investing, which is, “Yeah, we'll give you some discounting and some credits, and that's our quote-unquote, ‘investment.'” Actually be there with them as a partner. And that's going to take years for folks to wrap their heads around, but I feel like that is the opportunity that is significantly larger, even, than the embedded existing IT space, because rather than fighting each other for slices of the pie, I'm much more interested in expanding that pie overall. One of my favorite questions to get asked, because I think it so profoundly misses the point, is, “Do you think it's possible for Google to go from number three to number two,” or whatever the number happens to be at some point, and my honest, considered answer is, “Who gives a shit?” Because number three, or number five, or number twelve—it doesn't matter to me—is still how many hundreds of billions of dollars in the fullness of time. Let's be real for a minute here; the total addressable market is expanding faster than any cloud, or all clouds combined, are going to be able to capture.

Richard: Yeah. Hey, look, whoever will be more profitable solving user problems; I really don't care about the final revenue number. I can be the number one cloud tomorrow by making Google Cloud free. What's the point? That's not a sustainable business. So, if you're just going for who can deploy the most vCPUs, or who can deploy the most whatever, there are ways to game that. I want to make sure we are just uniquely solving problems better than anybody else.

Corey: Sorry, forgive me. I just sort of zoned out for a second there, because I'm just so taken aback and shocked by the idea of someone working at a large cloud provider who expresses a philosophy that isn't lying awake at night fretting over the possibility of someone who isn't them making money somewhere.

Richard: [laugh]. I mean, your idea there, it'll be interesting to watch, kind of, the maker's approach of, are you enabling that next round of startups, the next round of people who want to take—I mean, honestly, I like the things we're doing building-block-wise, even with our AI: we're not just handing you a vision API, we're giving you a loan-processing AI that can process certain types of docs, that more packaged version of AI. Same with healthcare, same with whatever. I can imagine certain startups or a company idea going, “Hey, maybe I could disrupt or serve a new market.”

I always love what Square did. They disrupted emerging markets, small merchants here in North America, wherever, where I didn't need a big, expensive point-of-sale system. You just gave me the nice, right building blocks to disrupt and run my business. Maybe Google Cloud can continue to provide better building blocks, but I do like your idea of actual investment zones, being part of this.
Maybe the next million users are founders, and it's not just getting into some of these companies with, frankly, 10, 20, 30,000 people in IT.

I think there's still plenty of room in these big enterprises to unlock many more of those companies, much more of their business. But to your point, there's a giant market here that we're not all grabbing yet. For crying out loud, there's tons of opportunity out here. This is not zero-sum.

Corey: Take it a step further beyond that, and today, if you have someone who's enterprising, early on in their career, maybe they just got out of school, maybe they have just left their job and are ready to snap, or they have some severance money that they want to throw into something. Great. What do they want to do if they have an idea for a company? Well, today, that answer looks a lot like: time to go to a boot camp and learn to code for six months so you can build a badly done MVP well enough to get off the ground and get some outside investment, and then go from there. Well, what if we cut that part out entirely?

What if there were building blocks of, “I don't need to know or care that there's a database behind it, or what a database looks like”? Picture Visual Basic in a web browser for building apps: just take this bit of information I give you, store it, and give it back to me later. Sure, you're going to have some significant challenges in the architecture or something like that as it goes from this thing that I'm talking about as an MVP to something planet-scale—like a Spotify, for example—but that's not most businesses, and that's okay. Get out of the way and let people innovate and iterate on what it is they're doing more rapidly, and make it more accessible to teach people. That becomes huge; that gets the infrastructure bits that cloud providers excel at out of the way, and all it really takes is packaging those things into a golden path of what a given company of a particular profile should be doing—unless they have reason to deviate from it—and instead of having this giant paradox-of-choice issue, it's, “Oh, okay, I'll drag, drop, build things accordingly.”

And under the hood, it's doing all the configuration of services, and that's great. But suddenly, you've made being a founder of a software company—fundamentally—accessible to people who are not themselves software engineers. And I know that's anathema to some people, and I don't even slightly care, because I am done with gatekeeping.

Richard: Yeah. No, it's exciting if that can be pulled off. I mean, it's not like years ago, where so much capital was required to find the rack and do all sorts of things with tech, and hire some developers. It's an amazing time to be a software creator now. The more we can enable that—yeah, I'm along for that journey, sign me up.

Corey: I'm looking forward to seeing how it winds up shaking out. So, I want to talk a little bit about the paradox-of-choice problem that I just mentioned. If you take a look at the various compute services that every cloud provider offers, there are an awful lot of different choices as far as what you can run. There's the VM model, there's containers—if you're in AWS, you have 17 ways to run those—there's the serverless function story, and other things here and there, and managed services. I mean, honestly, Google has a lot of them: nowhere near as many as you have failed messaging products, but still an awful lot of compute options. How do customers decide?

What are the decision criteria that you see?
Because the worst answer you can give someone who doesn't really know what they're doing is, “It depends,” because people don't know how to make that decision. It's, “What factors should I consider then, while making that decision?” And the answer has to be something somewhat authoritative because otherwise, they're going to go on the internet and get yelled at by everyone, because no one is ever going to agree on this, except that everyone else is wrong.

Richard: Mm-hm. Yeah, I mean, on one hand, look, I like that we intentionally have fewer choices than others because I don't think you need 17 ways to run a container. I think that's excessive. I think more than five is probably excessive because as a customer, what is the trade-off? Now, I would argue first off, I don't care if you have a lot of options as a vendor, but boy, the backends of those better be consistent. Meaning if I have a CI/CD tool in my portfolio and it only writes to two of them, shame on me. I should make sure that at least CI/CD, identity management, log management, monitoring—arguably your compute runtime should be a late-binding choice. And maybe that's blasphemous, because somebody says, “I want to start up front knowing it's a function,” or, “I want to start knowing it's a VM.” How about, as a developer, I couldn't care less. How about I just build cool software, and maybe even at deploy time I say, “This better fits running in Kubernetes,” or, “This is better in a virtual machine.” And my cost of changing that later is meaningless because, hey, if it is in a container, I can switch it between three or four different runtimes, the identity management is the same, it logs the exact same way, I can deploy CI/CD the same way. So, first off, if those things aren't the same, then the vendor is messing up. So, the customer shouldn't have to pay the cost of that. And then there gets to be other actual criteria. Look, I think you are looking at the workload itself, the team who makes it, and the strategy to figure out the runtime. It's easy for us: Google Compute Engine for VMs; containers go in GKE; managed services that need some containers—some apps around them—are Cloud Functions and Cloud Run. Like, it's fairly straightforward, and it's going to be an AND situation, not an OR, which is great. But we're at least saying the premium way to run containers in Google Cloud for systems is GKE. There you go. If you do have a bunch of managed services in your architecture and you're stitching them together, then you want more serverless things like Cloud Run and Cloud Functions. And if you want to just really move some existing workload, GCE is your best choice. I like that that's fairly straightforward. There's still going to be some “it depends,” but it feels better than nine ways to run Kubernetes engines.

Corey: I'm sure we'll see them in the fullness of time.

Richard: [laugh].

Corey: So, talk about Anthos a bit. That was a thing that was announced a while back and it was extraordinarily unclear what it was. And then I looked at the pricing and it was $10,000 a month with a one-year minimum commitment, and I was like, “Oh, it's not for me. That's why I don't get it.” And I haven't really looked back at it since. But it is something else now. It almost feels like a wrapper brand, in some respects. How's it going? [unintelligible 00:29:26]?

Richard: Yeah.
Consumption, we'll talk more in upcoming months on some of the adoption, but we're finally getting the hockey stick, which always comes delayed with platforms because nobody adopts platforms quickly. They buy the platform, and a year later they start to actually build new development, migrate the things they have. So, we're starting to see that sort of growth. But back to your first point—and I even think I poorly tried to explain it a year ago with you—basically, look, Anthos is the ability to manage fleets of GKE clusters, wherever they are. I don't care if they're on-prem, I don't care if they're in Google Cloud, I don't care if they're on Amazon. We have one customer who only uses Anthos on AWS. Awesome, rock on. So, how do I put GKE clusters everywhere, but then do fleet management? Because look, some people are doing an app per cluster. They don't want to jam 50 apps in the cluster from different teams, because they don't like the idea that this app requires root access; now you can screw around with mine. Or, you didn't update; that broke the cluster. I don't want any of that. So, you're going to see companies more, doing even app per cluster, app per developer per cluster. So, now I have a fleet problem. How do I keep it in sync? How do I make sure policy is consistent? Those sorts of things. So, Anthos is kind of solving the fleet management challenge and replacing people's first-gen app platform. We're seeing a lot of those use cases: “Hey, we're retiring our first version of Docker Enterprise, Mesos, Cloud Foundry, even OpenShift,” saying, “All right, now's the time for our next version of our app platform. How about GKE, plus Cloud Run on top of it, plus other stuff?” Sounds good. So, it's going well. And, as you mentioned, there's a brand story here, mainly because we've also done two things that probably matter to you. A, we changed the price a lot. No minimum commit, remarkably at 20% of the cost it was when we launched, on purpose, because we've gotten better at this. So, much cheaper, no minimum commit, pay as you go. B, on-premises, on bare metal with GKE. Pay by the hour, I don't care; sounds great. So, you can do that sort of stuff. But then more importantly, if you're a GKE customer and you just want config management, service mesh, things like that, now you can buy all of those independently as well. And Anthos is really the brand for fleet management of GKE. And if you're on Google Cloud only, it adds value. If you're off Google Cloud, if you're multi-cloud, I don't care. But I want to manage fleets of compute clusters and create them. We're going to keep doubling down on that.

Corey: The big problem historically for understanding a lot of the adoption paradigm of Kubernetes has been that it was, to some extent, a reimagining of how Google ran and built software internally. And I thought at the time, the idea was—from a cynical perspective—that, “All right, well, your crappy apps don't run well on Google-style infrastructure, so we're going to teach the entire world how to write software the way that we do.” And then you end up with people running their blog on top of Kubernetes, where it's one of those, like, the first blog post is, like, “How I spent the last 18 months building Kubernetes.” And, okay, that is certainly a philosophy and an approach, but it's almost approaching Windows 95 launch level of hype, where people who didn't own computers were buying copies of it, on some level.
And I see the term come up in conversations in places where it absolutely has no place being brought up. “How do I run a Kubernetes cluster inside of my laptop?” And, “What do you got going on in there, buddy?”

Richard: [laugh].

Corey: “What do you think you're trying to do here? Because you just said something that means something that I think is radically different to me than it is to you.” And again, I'm not here to judge other people's workflows; they're all terrible, except for mine, which is an opinion held by everyone about their own workflow. But understanding where people are, figuring out how to get there, how to meet customers where they are and empower them. And despite how heavily Google has been into the Kubernetes universe since its inception, you're very welcoming to companies—and loud-mouth individuals on Twitter—who have no use for Kubernetes. And working through various products you offer, I don't ever feel like a second-class citizen. There's really something impressive about that, of not letting the hype dictate the product and marketing decisions of it.

Richard: Yeah, look, I think I tweeted it recently: I think the future of software is managed services with containers in the gap, for the most part. If you can use managed services, please do. Use them wherever you can. And if you have to sling some code, maybe put it in a really portable thing that's really easy to run in lots of places. So, I think that's smart. But for us, look, I think we have the best container workflow from dev tools, and build tools, and artifact registries, and runtimes, but plenty of people are running containers, and you shouldn't be running Kubernetes all over the place. Where it makes sense for the workload, I think it's better than a VM. At the retail edge, can I run a small cluster instead of a weird point-of-sale Windows app? Maybe. Maybe it makes sense to have a lightweight Kubernetes cluster there for consistency purposes. So, for me, I think it's a great medium for a subset of software. Google Cloud is going to take whatever you got, which is great. I think containers are great, but at the same time, I'm happily going to let you deploy a function that responds to you adding a storage item to a bucket, while at the same time giving you a SaaS service that replaces the need for any code. All of those are terrific. So yeah, we love Kubernetes. We think it's great. We're going to be the best version to run it. But that's not going to be your whole universe.

Corey: No, and I would argue it absolutely shouldn't be.

Richard: [laugh]. Right. Agreed. Now again, for some companies, it's a great replacement for this giant fleet of VMs that all run at eight percent utilization. Can I stick this into a bunch of high-density clusters? Absolutely you should. You're going to save an absolute fortune doing that and probably pick up some resilience and functionality benefits. But to your point, “Do I want to run a WordPress site in there?” I don't know, probably not. “Do I need to run my own MySQL?” I'd prefer you not do that. So, in a lot of cases, don't use it unless you have to. That should go for all compute nowadays. Use managed services.

Corey: I'm a big believer in going down that approach, just because it is so much easier than trying to build it yourself from popsicle sticks because you theoretically might have to move it someday in the future, even though you're not going to.

Richard: [laugh].
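For a concrete sense of the storage-triggered function Richard mentions, here is a minimal sketch of a first-generation Cloud Function that fires when an object lands in a Cloud Storage bucket. The event name and deploy flags are the standard ones for this trigger type; the bucket name and the processing step are hypothetical.

```js
// index.js — a minimal sketch of a Cloud Function that responds to an
// object being added to a Cloud Storage bucket.
exports.processUpload = (file, context) => {
  // `file` carries the metadata of the Cloud Storage object that fired
  // the event; `context` carries the event ID, type, and timestamp.
  console.log(`New object: gs://${file.bucket}/${file.name}`);
  console.log(`Content type: ${file.contentType}`);
  // ...hypothetical processing step goes here...
};

// Deployed with something like (bucket name is made up):
// gcloud functions deploy processUpload \
//   --runtime=nodejs16 \
//   --trigger-event=google.storage.object.finalize \
//   --trigger-resource=my-uploads-bucket
```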
Right.

Corey: And it lets me feel better about a thing that isn't going to be used by anything that I'm doing in the near future. I just don't pretend to get it.

Richard: No, I don't install a general-purpose electric charger in my garage for any electric car I may get in the future; I charge for the one I have now. I just want it to work for my car; I don't want to plan for some mythical future. So yeah, premature optimization of your architecture is death in IT, especially nowadays where speed matters. Don't waste your time building something that can run in nine clouds.

Corey: Richard, I want to thank you for coming on again a year later to suffer my slings, arrows, and other various implements of misfortune. If people want to learn more about what you're doing, how you're doing it, possibly to pull a Forrest Brazeal and go work with you, where can they find you?

Richard: Yeah, we're a fun place to work. So, you can find me on Twitter at @rseroter—R-S-E-R-O-T-E-R—hang out on LinkedIn, annoy me on my blog seroter.com as I try to at least explore our tech from time to time and mess around with it. But this is a fun place to work. There's a lot of good stuff going on here, and if you work somewhere else, too, we can still be friends.

Corey: Thank you so much for your time today. Richard Seroter, director of outbound product management at Google. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment into which you have somehow managed to shove a running container.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.

Empower Apps
What is Firebase with Peter Friese

Empower Apps

Play Episode Listen Later Nov 9, 2021 42:07


Guest: Peter Friese — Blog / Twitter - @peterfriese / YouTube - @peterfriese / GitHub - @peterfriese / YouTube video - https://youtu.be/8m8oppa-Los
Related Episodes:
Episode 87 - Core Data Fun with Tim Mitra
Episode 85 - AWS Amplify with Kilo Loco
Episode 75 - Year of the Server with Tim Condon - Part 2
Episode 35 - SwiftUI - The Good, the Bad, and the Benefits
Episode 8 - Cloud and Backend Services For Apps with Erik Gillespie
Related Links:
Firebase Summit - November 10th
Firebase YouTube Channel
Firebase Blog
Firebase Medium channel
Firebase vs GCP Videos: Projects & storage; Functions & Firestore
the LansingCodes app that never was
Authenticate with Firebase Anonymously on Apple Platforms
Make It So - Replicating Apple's Reminders app with SwiftUI and Firebase
Swift on Cloud Run by Grant Timmerman from GCP
Sponsors:
RevenueCat — Using RevenueCat to power your in-app purchase infrastructure solves for edge cases you don't even know you have, protects from outages your team hasn't seen yet, and saves you time on future maintenance as well as new features released by the app stores. It also empowers your marketing teams with clean, reliable IAP data. All that to say, RevenueCat handles all the headaches of in-app purchases so you can get back to building your app. Try RevenueCat today at revenuecat.com.
Linode — A cloud experience developers love. Great for setting up a backend for your app, with a variety of VM configurations and settings, reasonable pricing starting at $5 per month, and global data centers. The Developer Cloud Simplified. Try it today with this special link: https://www.linode.com/?r=97e09acbd5d304d87dadef749491d245e71c74e7
Check out OrchardNest today: https://orchardnest.com
Show Notes:
What does Firebase consist of
How to use it properly in your iOS app
How Firestore's NoSQL DB works
How to work with Firestore and SwiftUI
What are Firebase's authentication features
Setting up security properly
How to authenticate with Firebase anonymously?
Register for Firebase Summit
Social Media:
Email - leo@brightdigit.com
GitHub - @brightdigit
Twitter - BrightDigit - @brightdigit / Leo - @leogdion
Reddit - Leo - /u/leogdion
LinkedIn - BrightDigit / Leo
Instagram - @brightdigit
Patreon - empowerappshow
Credits:
Music from https://filmmusic.io
"Blippy Trance" by Kevin MacLeod (https://incompetech.com)
License: CC BY (http://creativecommons.org/licenses/by/4.0/)
★ Support this podcast on Patreon ★

The Swyx Mixtape
[Weekend Drop] Abhi Aiyer & Ward Peeters: Gatsby 4 and the Jamstack Endgame

The Swyx Mixtape

Play Episode Listen Later Oct 17, 2021 51:01


The following is my conversation with Abhi Aiyer and Ward Peeters, two lead engineers behind Gatsby Cloud and the recently announced Gatsby v4, which is at the forefront of what I think is the most significant change in the Jamstack landscape in the past 2 years. Watch the video version here. Links: Gatsby 4 Netlify DPR My blogpost on Smart Clients vs Smart Servers Timestamps:  [00:00:00] Cold Open  [00:00:28] Swyx Intro  [00:01:59] Call Start  [00:03:07] Gatsby v4  [00:06:23] Incremental Builds  [00:07:16] Cache Invalidation  [00:09:03] Gatsby DSG vs Netlify DPR  [00:09:35] Abandoning Redux for LMDB  [00:11:50] Parallel Queries (PQR)  [00:13:32] Gatsby DSG  [00:15:24] Netlify DPR vs Gatsby DSG  [00:19:19] The End of Jamstack  [00:22:12] Tradeoffs and Performance  [00:24:34] Image Processing  [00:27:25] Automatic DSG  [00:29:33] Gatsby Cloud vs Netlify  [00:33:34] Gatsby vs Next.js  [00:35:41] Gatsby and the Content Mesh  [00:37:19] React 18 and Gatsby  [00:39:45] Custom rendering page fragments with React 18  [00:42:10] Server Components in Limbo  [00:43:33] Smart Servers vs Smart Clients  [00:45:21] Apollo and Open Source Startup Strategy  [00:47:06] TMA: Too Many Acronyms  [00:49:16] Gatsby for Docs  Transcript [00:00:00] Cold Open  [00:00:00] Abhi Aiyer: And so with LMDB in place, right? We have workers that can read and write to LMDB, which allows us to run parallel queries. So PQR was a huge advancement for us. I think we saw up to like 40% reduction in query running time. And build times went down. We had a goal, I think it was like, we'd try to look for at least 20% reduction in build times, and I think we hit 26%, so all cool wins, you know? [00:00:28] Swyx Intro  [00:00:28] swyx: The following is my conversation with Abhi Aiyer and Ward Peeters, two lead engineers behind Gatsby Cloud and the recently announced Gatsby V4, which is at the forefront of what I think is the most significant change in the JAMstack landscape in the past two years. We discussed how parallel query running (PQR) and deferred static generation (DSG) are achieving 40% faster queries and 300% faster overall builds. [00:00:53] And they did a wonderful job handling the most impolite questions I could think of, including whether Gatsby Cloud is a Netlify clone, whether Gatsby should just be a data layer on top of Next.js, and how they're dealing with TMA (too many acronyms) in web development. This conversation should be viewed together with my past discussions with Sunil Pai and Misko Hevery in considering the cutting edge of web development today. Online discussions often present a binary split, in that your technical choices either have to optimize for developer experience or user experience. [00:01:25] But I find that it is builders like Abhi and Ward and Misko and Sunil who are constantly trying to improve the experience of developers in building great user experiences by default. I hope you enjoy these long-form conversations I'm trying to produce with amazing developers. I still don't have a name for it. [00:01:41] And I still don't know what the plan is. I just know that I really enjoy it. And the feedback from you guys has been really great. So if you like this, share with a friend; if you have other requests for guests, tag them on social media. I basically like to make this a space where passionate builders and doers can talk about their craft and where things are going. [00:01:58] So here's the interview.  [00:01:59] Call Start  [00:01:59] Abhi Aiyer: I'm Abhi Aiyer. I'm a principal engineer at Gatsby.
Thanks for having us.  [00:02:05] Ward Peeters: My name is Ward Peeters. I'm a staff software engineer at Gatsby and I'm from Belgium. And I've been working mostly on the open source side.  [00:02:15] Abhi Aiyer: I forgot to say where I'm from. I'm from Los Angeles, you know, Hollywood,  [00:02:21] swyx: I'm actually heading down to LA,  [00:02:22] Abhi Aiyer: in a couple of weeks, there's,  [00:02:24] swyx: I'm going to KubeCon, which is like a very interesting thing for a front-end engineer to end up at. But that's where my career has taken me.  [00:02:34] So this conversation started because I had a chat with Sunil, on this podcast that I accidentally launched. I don't think we did Gatsby much of a good favor. [00:02:45] Like, we both saw the new updates and I didn't get to say the nice things that I thought about Gatsby. I should also say that I used to have my blog on Gatsby and I no longer do. I used to work at Netlify and I no longer do. There's a lot of history here for me with Gatsby. It's been a while since I caught up, and I'm curious to get the latest. [00:03:07] Gatsby v4  [00:03:07] swyx: Maybe we should start off with like a quick summary of what's new with Gatsby with Gatsby V4, right?  [00:03:13] Abhi Aiyer: Is that a good place to start? Yeah, I think so. [00:03:17] swyx: So first of all, I think the marketing was really nice. GatsbyCamp—it seems like a really big push, and qualitatively very different from Gatsby 3. Tell me about what the behind the scenes was like. [00:03:30] Abhi Aiyer: Yeah, we're getting better at the marketing side of what we're doing these days, and Gatsby 4 was a big push. It really changed how we approach the framework as a whole.  [00:03:43] For those who don't know, traditionally Gatsby was a static site generator, purely static. We hold ourselves high on our connections to a content management system. [00:03:55] And we provide a really good data layer there, that takes all those requests that you would normally make to a content management system and turns them into, like, a store of data that you can then use and query from GraphQL. And the big thing that we were hitting before Gatsby 4 was: the company was growing. [00:04:17] And as more customers were using Gatsby Cloud, we started realizing that we couldn't scale to really large sites. And "large sites" is like a misnomer: you could be a 50,000-page site and be considered large, given the data that you may have. But we're talking like hundreds of thousands of pages. [00:04:38] And the thing that we kind of realized is not all pages are created equal on your site. Especially the ones from, like, 2015, 2014, where, you know, no one's looking at those pieces of content. If you're a site with a huge archive of content, yeah, you should probably go check Google Analytics to see how many people are actually viewing your pages. [00:05:02] And the way Gatsby works—and we'll get into this a little bit later—but today Gatsby isn't as smart as it should be about what pages should be rebuilt. For example, we're looking at the gatsbyjs.com page right here, but there are other data dependencies. This is pure content. Like, if you look at the nav bar, there's a whole bunch of items there. [00:05:22] And we have this nav bar on all of our pages, 'cause that's what a website has. But the problem with Gatsby today—and this will be changed in the future—is:
If I change anything about this nav bar, any page that depends on the nav bar now has a dependency that needs to be invalidated. [00:05:39] And so now I have a hundred thousand pages with this nav bar—I have 200,000 pages with this nav bar—and I spelled “Gastby” instead of “Gatsby” or something in the nav bar. I made a typo, and now I'm paying for a hundred thousand pages to rebuild. And so we just saw that this happens a lot, right? [00:05:57] This is a very small example, but this happens a lot to our customers, and building a hundred thousand pages is not necessarily an easy thing to do. There's memory requirements that come in. There is: what kind of machine are you doing this thing on? And so we had to figure out a way to scale Gatsby and do things differently. We were traditionally static, and now we're trying to be somewhere in between: you can go dynamic or you could go static, and it's up to you.  [00:06:23] Incremental Builds  [00:06:23] swyx: So the new rendering options are SSG, DSG and SSR. Is ISR gone?  [00:06:32] Ward Peeters: Well, that's what Next.js is doing. And I'm like, wait,  [00:06:36] swyx: we never have, I guess. [00:06:38] We had an incremental mode.  [00:06:41] Abhi Aiyer: What do you  [00:06:41] Ward Peeters: call it? Yes. And that's still all static. So we have it in open source, where in V3 we enabled it so we only build the HTML that's necessary. On Cloud, we have a more pumped-up version of it where, when you get a data change, we only update that page, more rapidly than in open source. But still, when you change your nav bar, with what Abhi said, you still have to rebuild everything because all the pages get invalidated. [00:07:09] So incremental builds work for data updates, but not so much for code changes.  [00:07:16] Cache Invalidation  [00:07:16] swyx: Right. Okay. Well, I guess, how do you solve cache invalidation? [00:07:26] Ward Peeters: Well, the thing is that because Gatsby owns the data—like, we have the data layer and source plugins like WordPress—when we source data, they give us a webhook, or we just go to WordPress and say, like, hey, what data has changed? [00:07:40] It's like, okay, these nodes have changed, or these pieces—like a post or a page—have been changed. Gatsby knows which node is used where—like, for example, this post is used on this section page, it's used in this article—and that's all happening already behind the scenes because of GraphQL shenanigans. [00:07:59] And that's how we can build incremental builds. So we know, okay, only these pages need to be built. And that's also where DSG comes in, because as a user, you don't have to care about cache invalidation anymore, 'cause it's using the same engine you were used to with, like, incremental builds. [00:08:15] When you use SSG—and I think that's a major benefit of it—you as a user don't really have to care about cache control, because it is difficult to manage on a large scale. Like, a lot of corporations just say, like, every 10 minutes we'll remove the cache, because it is difficult to track what changed. [00:08:37] Yeah.   [00:08:39] swyx: That's pretty funny. At Netlify, one of the reasons that we constantly talk about for CDN-level caching—like, people say, you know, why don't you just enable CDN-level caching and then just have a regular server render—one of the points Matt Billman always makes is that people turn it off the moment there's a bug; it's like, oh, schedule a call, and turn it off. And then don't turn it back on again. [00:09:03] Gatsby DSG vs Netlify DPR  [00:09:03] swyx: Okay. So let's, let's talk about, like, DSG. That's the fancy new one, and DPR, right? So maybe we should—is there—yeah, there's a graphic. Okay. All right. This is new. So first of all, like, what was the inspiration? What's the backstory? I'm always interested in how these things come about. [00:09:21] Abhi Aiyer: I think we were inspired by DPR a lot, you know? But where we saw the benefit of our approach is our data layer, you know, and it took us many steps even before getting to DSG.  [00:09:35] Abandoning Redux for LMDB  [00:09:35] Abhi Aiyer: So it started actually in, like, Gatsby 3.10. We had to redo Gatsby's node store. [00:09:42] So traditionally we were using Redux to persist all the data that we get from content management systems. And we had a particular customer who could not even persist the cache—like, a Gatsby cache—between builds, because they had so much data that it would OOM when they tried to persist the cache. Right. [00:10:03] So for them, they were running cold builds for every build. Even if you had like a warm cache, or you had your pods—you know, we use Kubernetes—so like, if you have your pods up, you're doing like an hour-and-a-half cold build for everything. You could, like, change a typo and it'd be an hour and a half. [00:10:19] And so from there we were like, we need to reduce peak memory utilization, and Redux is not going to help us there. And so we started looking into alternatives. We looked at SQLite, we looked at Redis, and we landed on LMDB, which is Lightning Memory-Mapped Database. It's like a file-system DB, which is really cool for us because, one, it's pretty fast; it allows you to have, like, a query interface, which is good; and you can store more data than available RAM. So for a site like this customer's, who was pretty much blowing up the pod on every warm build just to try to even have a warm build, we could actually store their data now, which then unlocked warm builds for them. [00:11:05] So an hour and a half went to 25 minutes, which is pretty good for them. Now we have this thing—we call it Gatsby DB internally. And so now Gatsby's node store is in LMDB. And the cool thing about LMDB is it's just comprised of a bunch of files. You have a lock file and database files. [00:11:26] And if you have files, that means you can move files around. They don't have to be in one place, right? They could be in storage, they can be in a serverless function, they could be anywhere you really want. Right? And so that was step one: we needed to redo the node store. And we did that, and memory utilization for a lot of customers went down. Enough to unlock something as simple as a warm build. [00:11:50] Parallel Queries (PQR)  [00:11:50] Abhi Aiyer: So then the second thing that these other customers were having was like, wow, it takes so long to run queries. Right. And people have like 25,000, 50,000 queries. And I don't know if they should have that many, but they do. [00:12:05] Right. They do have that much. And it's a huge part of the build time. Right. A lot of people complained that, you know, Gatsby builds are sometimes slow for large sites, and we agree. That's totally true. And so our next foray into improvement was this thing called parallel queries.
Which would allow Gatsby to run chunks of queries at a given time—and that's what PQR is; here's a diagram of it—you know, query running does take a huge percentage of your builds. [00:12:39] But now we can parallelize that across workers in the Gatsby process. But if you were to do this naively with Redux, like, a child process can't write to a JavaScript object in the main process. Right. It's hard to maintain state. There's no easy way to communicate between workers to write state and read it. [00:12:59] And so with LMDB in place, we have workers that can read and write to LMDB, which allows us to run parallel queries. Right. We don't need to do things serially anymore. So PQR was a huge advancement for us. I think we saw up to like 40% reduction in query running time. And build times went down. We had like a goal—I think it was like, we'd try to look for at least 20% reduction in build times. [00:13:27] And I think we hit 26%, so all cool wins, you know?  [00:13:32] Gatsby DSG  [00:13:32] Abhi Aiyer: And so then Ward and I, and the team, were all just thinking like, okay, we have LMDB. We've got PQR. All right, well, really we have a Gatsby data layer that can be accessed from anywhere, right? 'Cause if you can access it from a worker, you can access it in a serverless function, Cloud Run, you know, anywhere—you spin up your own machine in your own office if you want it. Well, [00:13:56] swyx: steady coast. [00:13:57] How about that? Like an S3  [00:14:00] Abhi Aiyer: bucket—you put it in an S3 bucket, for sure. You know, like, you could put those files there and then retrieve them from wherever you want. And so that's when we started thinking like, okay, we have this information now; what can we do to improve the life of our users even more? [00:14:19] And then we started thinking about DPR, and like, we saw the approach and we were like, wow, this is exactly what we need. But we have Gatsby's data layer, which kind of complicates things—but it's not complicated anymore, because we can essentially use the data layer wherever we want. So I'll let Ward kind of go from there on how DSG came about after these fundamental pieces. [00:14:42] Ward Peeters: Yeah. So then we looked at, like, ISR, DPR, and what's the difference between them. And then we saw, like, ISR—that's where you have a refresh timeout, and now, in the latest Next, you can also ping an endpoint to get the cache invalidated, but it's all manual work. And there were many complaints about it in Next.js—it's nothing in Gatsby—and they complained about stale data, because what Next.js does is you visit the page, and then the next time it will update. So I think it's a refresh or something. Yeah. [00:15:15] swyx: Alright. Alright. We don't have to dig through issues on our call, but I just wanted to illustrate the problem. [00:15:24] Ward Peeters: Yeah.   [00:15:24] Netlify DPR vs Gatsby DSG  [00:15:24] Ward Peeters: And then that's where we took it away and said, okay, DPR. And then I looked at the spec of DPR, like, okay, can we use the same name or not? And the problem with DPR was they had atomic deploys, so every change means blow the whole cache away and do everything new. And we were like, wait—we have incremental builds. We don't want to, like, invalidate the whole cache; we just want to invalidate the pages that got changed. And there's, like, a GitHub discussion about it, where I commented as well.
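To ground the LMDB-plus-workers idea in something concrete, here is a toy sketch of the mechanic being described. This is not Gatsby's actual code; the lmdb npm package (lmdb-js) is the store Gatsby adopted, but the keys, data shapes, and chunking here are made up for illustration.

```js
// parallel-queries.js — toy sketch of PQR: one process writes sourced
// nodes into LMDB, then worker threads run "queries" against the same
// store in parallel, since LMDB is just files on disk that any thread
// or process can open.
const { Worker, isMainThread, workerData } = require("node:worker_threads");
const { open } = require("lmdb"); // lmdb-js, the store Gatsby moved to

const db = open({ path: "./gatsby-db", compression: true });

if (isMainThread) {
  (async () => {
    // Pretend these came from a CMS during sourcing.
    await db.put("post:1", { title: "Hello", views: 120 });
    await db.put("post:2", { title: "World", views: 3 });

    // Fan the "queries" out across workers instead of running serially.
    const chunks = [["post:1"], ["post:2"]];
    chunks.forEach((keys) => new Worker(__filename, { workerData: keys }));
  })();
} else {
  // Each worker reads straight from the shared on-disk store.
  for (const key of workerData) {
    console.log(`[worker] ${key}:`, db.get(key));
  }
}
```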
And it felt like they didn't want to change the name. Yep. There you go.  [00:16:04] swyx: So you said to me, DPR doesn't need to be opinionated about if the file is part of the atomic deploy. Can you reiterate why? [00:16:13] Ward Peeters: Yeah, the thing is basically because they mentioned, like, every deploy needs to blow the cache away and needs to be fresh. And for me, like, it shouldn't be in a spec. Like, DPR should just say: you build pieces at build time and you build those pieces at runtime. That's basically what I was trying to say. And then, because we have incremental builds, we only want to invalidate, like, five pages. Even if you use SSG or DSG, we still want to say: if you only changed five pages, we only invalidate the cache for five pages. I couldn't get that from the spec. I think that's also because Netlify does it their way, which is totally fine, but then that's why we created a new acronym, called DSG. And I think it also probably explains what we offer a little bit better too, because it's Deferred Static Generation. It's like lazy SSG, something like that, because that's what we do. Like, you can mark a page as deferred, and that just means we don't do it at build time; but the first time you hit it with a request, we rebuild it in, like, a Lambda—I could use Cloud Run—we build it, and then we give the response to a user, and then also we save it to disk. So from there on, the second request, it's technically an SSG page. [00:17:29] We store it—like, you have the CDN cache, but we also have it inside our bucket, like your S3 buckets or whatever you want to call it. [00:17:37] Abhi Aiyer: Yeah. We're caching responses, but we're also making that file exist as if it existed at build time. And that's a big distinction for us, because what that allows us to do in the future would be, like: if nothing changed about the data for the given page, then you don't need to DSG again. Right? Like, say there's five builds, and in build A something changed in your data dependencies. So now you have a DSG page, and then nothing changed for the next five builds, but a user comes and actually visits that page, generates the files, and it gets cached in our data layer—our file storage layer—and on build five, because nothing changed, you're not DSGing. Right? You're not going to go through this process again. And so that's, we think, the big thing about DSG. [00:18:31] Yeah. And then I think the extra piece of it is, because of the data layer—you can say it's a benefit or a negative point of Gatsby—we source all the data at build time. So even if your APIs go down, even with DSG, you still go to our local database. So that will never go down. 'Cause if, like, your site is down, your database will be down as well, but you're not dependent on other APIs. So let's say GitHub goes down, or X goes down, and you need to get that data—we have it locally in our database, so you're still good to go; you still keep that resilience. [00:19:06] And the security, even, that you used to have with Gatsby. And I think that's a main benefit of the whole data layer piece of Gatsby and DSG. [00:19:17] Yeah. [00:19:18] swyx: Yeah. Perfect.  [00:19:19] The End of Jamstack  [00:19:19] swyx: So something I always wonder about, like: is this basically the last stage of JAMstack? Like, I feel like we have explored all possible varieties of rendering. [00:19:30] And this is like the end.
This is like, this is it, right? Like, we have all the options.  [00:19:34] Ward Peeters: And now it's mixing them together. It's the next step: having static and, on bits of your pages, SSR. Uh,  [00:19:43] swyx: okay. I'll put it this way. Do you think that JAMstack, at the end of the day, after this maybe five-year journey, was like: hey, WordPress sucks. [00:19:53] Then everyone moves to static. Right. And then, and then we found, like, oh yeah, static doesn't scale. Big surprise. We were telling you that from the beginning. And now, okay. Right. Hybrid. Is that it? Like, was that the Jamstack movement, in like a five-year period? [00:20:10] Abhi Aiyer: I think it's a yes or no. Like, evolution is like—I think we're, you know, we're all coming full circle, and I think in engineering particularly we do the same thing all the time, every 10 years or something. Right. But where DSG came into play is for use cases that we saw, you know, and our customers still prefer static. So I know we're talking about DSG like it's a great thing—and it is—but a lot of our customers prefer static, and it's really up to their use case. If you're a small site with a bunch of top-of-funnel pages, any lag in anything matters, right? 'Cause DSG is not, like, instant, right? Like, you're doing a runtime build, essentially. [00:20:51] Right? So in some cases it could be, you know, a longer response time than what the standard should be. And we have customers that won't DSG anything, because essentially most pages are top-of-funnel or high-traffic, and they would rather just wait. They don't mind waiting for the performance that they would want. [00:21:11] But we also have customers that have hundreds of thousands of pages. Like, there's one customer that has, like, a company handbook or something, where you can see every employee, and, like, if they like dogs, and—you know what I'm saying? Like, bios and stuff. And they have a lot of employees worldwide, and before DSG, they could only build their site once a week, [00:21:33] 'cause it takes like 24 hours to build. And now with DSG, they don't really care about someone whose profile no one's going to view. No offense to that person, but no one's viewing the non-CEO's profile. So—and there are other people that are important too, I'm sure—but now they can actually, you know, make changes to their site. You know, we actually had to work with them to make sure that they can build. I mean, previous to DSG, they could build at some cadence that we don't necessarily support, but we helped support that. So, like I said: static is still king when it makes sense. For sure.  [00:22:12] Tradeoffs and Performance  [00:22:12] swyx: I feel like it's a bit scary when you tell people, like, okay, you're deferring the build. [00:22:16] And then on the first request, someone's going to build that. It's not going to take that long. Yeah. Right. It's not like it's that bad. I think, bottom line is, I think people are very scared whenever you say, like, okay, there's a trade-off, but you don't quantify the trade-offs. And then they're like, oh, it's bigger in their mind than it really is. [00:22:37] Ward Peeters: Yeah, I think a big problem with the plugin ecosystem is that it's difficult to quantify, like, what's slow and what's not slow.
For example, generating an MDX page is more time-consuming, because it has to, like, get some dependencies, make sure they're bundled together, then use React to render and then render again—because of how the Gatsby plugin is built right now—and that takes more time than a simple React renderToString of something. [00:23:07] And I think that's the difficult thing to say: like, okay, some pages will be instant; some pages might take a second to build, or half a second.  [00:23:18] swyx: Yeah. The important thing is that there are not data dependencies that you're waiting on. Right. That's usually the slowest part: fetch all the data upfront, and then you store it in an LMDB cache. And that's written to a serverless function, or written to, I guess, your build process or whatever. And then people can render that whenever, which I think is great. Like, it should be fairly fast; like, we're talking tens of milliseconds difference for, like, first render, right? [00:23:44] Like, something like that. Like, I think when you quantify, like, okay, we're talking tens of milliseconds, not hundreds of milliseconds and not thousands of seconds, that really helps me put these things in perspective. [00:23:56] Abhi Aiyer: Yeah. But then, you know, people always find a way to screw it up. So say that like, of  [00:24:01] swyx: course. [00:24:01] Yeah. But, but you give a realistic benchmark, and then you go, like, yeah, for these benchmarks, we tested it like a hundred times or something. The median was this, the P95 was that. That's it. Like, I mean, people can't really fault you for not accounting for every use case, because no one can, but at least you can give a reasonable basis and say, like,  [00:24:22] Abhi Aiyer: there's,  [00:24:23] swyx: there's an upper bound, you know, on how bad the trade-off is. Like, you know, whenever you mention trade-offs, I like to quantify them, basically. [00:24:32] Um, that's a good, that's a good idea.  [00:24:34] Image Processing  [00:24:34] Abhi Aiyer: And like, one thing to know for DSG is, like, your data may be available—and that's cool, that may not be the long pole—but let's say you have a portfolio site that generates 20 different types of images for every image. Now you're getting into image processing at runtime, you know? [00:24:54] And so there are ways to kind of not do this properly. Right. Like, for example, let's say your homepage—I love this example—your homepage has links to every other page on your site, and it's all DSG, right? So you load the homepage, and because Gatsby does prefetch for link tags that are doing Gatsby Link to other pages, [00:25:17] we go and prefetch every page on your site. And essentially you're doing your build at runtime. So we're going to try to prevent these cases from happening, but just naively going through DSG for everything is not my recommendation. That's for sure. [00:25:32] Not today. At least not today.  [00:25:35] swyx: So, a couple of things on that. So, this Gatsby image point is very interesting. So how does Gatsby image work with DSG? [00:25:42] Abhi Aiyer: So yeah, it works how it does in a Gatsby build. Currently, today, Gatsby uses gatsby-plugin-sharp and the sharp transformers to take one image and turn it into another. And even in Gatsby Cloud, before we implemented parallel image processing, images were like the slowest part of the Gatsby build, because a lot of time, a lot of memory, et cetera. And so we solved that problem. And so in the DSG runtime, we do image processing there for a particular page. [00:26:15] So you will have to wait for image processing if you're image-heavy on a DSG page. [00:26:21] swyx: Which, I mean, does that mean that you cannot do DSG in a serverless function? [00:26:26] Abhi Aiyer: In total? We do—we actually do DSG in serverless. And that's totally fine. Like, you can do image processing, you know? [00:26:42] But, like, I would say your mileage may vary, given what kind of transformations you have going on, how many images you have, right? But like you said, there's trade-offs, right? If the page makes sense for it—you know, we have a bunch of examples that do have images, and they work great, you know? But I don't know if I'd go full-on portfolio with, like, you know, like a masonry thing where, like, there's tons of images, and they have sub-images, and, like, a carousel of images and stuff—that may not be good for you. [00:27:06] I don't know. But the choice is the user's; what we're trying to get at is, like, we're trying to give as many options. We're going to give guidance, and, like, we're having our own opinions, but you can choose to listen or not, or, you know, do your own thing, and we should try to support you as much as we can.  [00:27:25] Automatic DSG  [00:27:25] swyx: Yeah, that gives me some thought about, like, having sort of like a browserslist type of API where you can say, like, okay, top 100 most visited pages—no, this is not it, you know what I mean?—like, as handholding for what should be DSG and what should be statically generated. You know: plug into my Google Analytics, tell me, like, top hundred pages, statically render those, everything else DSG. I'm sure you've thought about it. And I think, like, maybe four years ago, Gatsby and Guess.js had some collaboration, which I assume went nowhere. But let me know if there's— [00:27:59] Ward Peeters: Uh, okay.  [00:28:02] For now. Yeah, because there is a new way to do it, because Guess.js stored everything in one file, so we sometimes have to download a five-megabyte JSON file to make Guess.js work. Nowadays, switching it around, you could make Guess smarter, to say, like, a guess for this route—you only need that bit of the JSON file. But we never implemented it. So, [00:28:26] Abhi Aiyer: yeah. And we have this—so I'm speaking from the Gatsby Cloud perspective, but, like, you're right, Shawn. Like, if you can hook into Google Analytics, you'll get the same thing. [00:28:36] But if you host with Gatsby Cloud, we know what routes are coming through our hosting layer. We know what routes for your site are the most hit, you know; we know the requests per route; I mean, how much bandwidth you're using, like, per route. And so we could be smarter and tell people exactly how to DSG, right? How should you DSG, and get it done that way, for sure. [00:29:04] swyx: Okay. So, like, to be, to be complete, uh—  [00:29:08] Abhi Aiyer: complete—you know, we're still in beta for Gatsby 4, so I would say, like, maybe after we launch, for sure, we'll start adding some sugar on.  [00:29:17] swyx: Got it. So first of all—so this was my first time trying out Gatsby Cloud. I think it was behind, like, a signup wall, like a private beta, in the past. And I never really gave it a spin, but again, you know, the V4 announcement really got me going. And yeah, I mean, I'm pretty impressed.  [00:29:33] Gatsby Cloud vs Netlify  [00:29:33] swyx: So how much of this—you know, the hard question, right—how much of this is a Netlify clone? What are you doing differently? [00:29:40] Abhi Aiyer: Let's talk about that. How much does, like, DSG—  [00:29:45] swyx: how much of Gatsby Cloud? Isn't it—  [00:29:48] Abhi Aiyer: like? 0%. Ooh, okay. Yeah. Probably 0% of it is a Netlify clone. [00:29:56] swyx: I do like, when you provision it, it gives me, like, a really good set of options. Uh, let's see, uh, you know, connect CMS guests. Netlify does not have that. [00:30:07] Abhi Aiyer: Yeah. I mean, I would, yeah—we're far from a Netlify clone, in multiple ways. We've built our whole system based on the needs of Gatsby. The way our cloud front end and our back ends talk to our customers' Gatsby sites is a novel way of doing it. We've exposed that in open source, and I think Netlify actually did implement something for external jobs or something with Google Pub/Sub—I saw that—but everything we do in Gatsby Cloud is for Gatsby. We have no other framework that we need to maintain nor care about—sorry, Luke's or whatever. Like, we don't care about that. On Gatsby Cloud, we've optimized our hosting layer with Fastly to be part of the data layer. And so if Gatsby changes, Gatsby Cloud changes, and if we need the Gatsby framework to change, it will, for Gatsby Cloud. So—and we use Google Cloud, so we're not on AWS.  [00:31:09] I would say we have similar features, though, and that's a valid point to bring out. [00:31:13] We have, we have functions, right? [00:31:15] We have domains—and we don't have purchasing of domains or anything yet, but, you know, we have the whole hosting product and everything like that.  [00:31:27] swyx: Is that, is that what you would need for Gatsby Cloud to come out of beta? Like, domains, or, like, what, what are you waiting  [00:31:35] Abhi Aiyer: for, essentially? Well, Gatsby Cloud isn't in beta. It's like a  [00:31:38] Oh Gatsby v4  [00:31:40] swyx: is in beta  [00:31:41] Abhi Aiyer: yeah. V4, it's in beta. Yeah. Sorry. Yeah. Yeah, domains—like, domain registry and all that stuff—is more sugar on top that we'll probably be releasing mid next year. But we're just trying to be—I mean, Gatsby Cloud, from that perspective, we just want to be at the table with all the other JAMstack providers. [00:31:59] But our edge is, if you want to build a Gatsby site the best way, you know, you have our support team that'll help you. Right? As a customer of ours, you're like our family—the Gatsby family, you know; we'll help. We help our customers. We have great support, and everything we do on the platform is for Gatsby and making Gatsby better. [00:32:18] So there's, like, so many things in the framework that we've improved by having Gatsby Cloud, 'cause we didn't know all the sites that could exist and not do things nicely or have problems. You know, because of Gatsby Cloud, the framework is getting so much better, because we have real user feedback, and they have a lot of demands, and we like to, you know, fulfill them. Yeah.  [00:32:41] swyx: Okay. Actually, I should probably clarify this earlier: how much of what we just talked about is Gatsby Cloud-only? [00:32:48] Abhi Aiyer: Pretty much all of it. DSG, SSR—they're all capable, you know; you can run it locally and stuff. And I know Netlify has a Gatsby plugin as well that will allow you to run DSG and SSR as well. [00:33:03] For those who are not using those platforms—it's like, maybe you're using Amplify or whatever—you're going to have to implement this yourself. I don't recommend it, though, because it was a pain in the ass to put it together. But yeah, it should work the best on Gatsby Cloud. [00:33:19] Ward Peeters: So technically, all of what we're building with v4 is in open source. [00:33:22] So you could wire it all up yourself, but why bother if you can use, like, Gatsby Cloud? Yeah, you don't have to care about it.  [00:33:34] Gatsby vs Next.js  [00:33:34] swyx: That's true. Okay. So, just on the rendering side of things, right? I made this observation that Gatsby now has SSR, now has serverless rendering—all the different rendering modes. Like, this looks very similar to Next.js. Is it possible to basically say, like, okay, Gatsby is the data layer—and is this the best data layer, the most advanced, or whatever—because this is basically what Next.js does, right? Like, it's a very, very constrained rendering layer. Why can't you—I mean, you know, sunk costs aside—why can't you be a Next.js layer? [00:34:08] Ward Peeters: Well, technically we could now, because they, like, implemented everything too: they have SSG, they have ISR, they have SSR. And we could technically move the data layer out and use it with Next. That could be a possibility, but we've come so far, and I think we already have built this. And now they're also at parity. I think, having two separate ones and having different dev experiences—and maybe Next.js is winning now and Gatsby will win in two months, or vice versa—I think it's just a healthy balance. And I think it's the same thing as the browser wars: like, everyone is going to Chrome or Chromium, and then there is still, like, Firefox and iOS, but how long will they survive? And I think just the competition is what you need. And I think that's a good reason why we keep separate. And also, I don't think that Next.js is up for, like, merging with Gatsby, or, like, having the same— [00:35:13] swyx: Oh, I think Next.js would be super happy about it, because then they own the server-for-React, you know, role, and you focus on the data role, right? Like, uh, makes sense to me. Obviously I'm brushing over a lot—the plugins actually have a rendering portion as well. So how much can you separate those things? [00:35:41] Abhi Aiyer: If—in the Next—no, this is possible. I don't—I mean, we're not going to, like, say that it's happening or anything.  [00:35:41] Gatsby and the Content Mesh  [00:35:41] Abhi Aiyer: Like, if we look at Gatsby, like, this is how it's set up: it's what we call the content mesh. You have all these different data warehouses that exist—WordPress, Drupal, et cetera; it can even be a freaking Google Sheet, you know, like, whatever—and we assemble this data layer at build time. And in doing DSG and SSR, we built something called the query engine that allows you to query this LMDB store that has, like, the manifested data in there. So it really opens up the gates. For—yeah, if you want to use our data layer in a Next.js app, like, I mean, go ahead. Like, once we expose this API to our customers, then you can essentially have Gatsby data in an iOS app, or an Android app, React Native—like, it's just an API call at that point. And, you know, Gatsby Cloud hosts, like, a GraphQL API for you that you can just query your data with. [00:36:38] I don't know if any data scientists would care for that—they could add that into Looker or something. You know, like, I remember they want to do it like—that stuff would be available, and it's almost like a content data lake versus, you know, a traditional data lake, I guess. It's purely for content, and you would have the benefits of Gatsby, because we normalize and we create structures, and, like, the user can customize the schema however you want. And then now you can use it on multiple platforms, right? It's not an immediate goal for us to do so; it's a logical next step. Yeah.  [00:37:15] swyx: Awesome. Awesome. Cool. Yeah, I feel like that's a really good and in-depth coverage.  [00:37:19] React 18 and Gatsby  [00:37:19] swyx: Maybe let's end off with talking about the future of React 18 and your plans there. First of all, what's happening in React 18? Is it out? Like, the plan for React 18 was published in June? Okay. All right. Let's talk about it. What's, what's going on? [00:37:35] Ward Peeters: So, yeah, so we are working closely with the React team—we're also in the working group—to figure out, like, okay, how can we help the team make it more stable and get it in user hands. [00:37:46] So I think from May or something, we have introduced React 18 as part of Gatsby. So you can now install the React 18 alpha, and we just move you to the new rendering mode: so the async mode, Suspense, and all those things. What we're planning on, at least when you use Gatsby: we have page queries and we have static queries, and a big pain point is static queries, 'cause it's a GraphQL query but you cannot have any variables, which means you're kind of limited in how you can use it. And you have to move everything to page queries and know all the content up front. We're waiting for the new async rendering bits of React to get into, like, a useQuery, because you can yield the rendering of React at any time. [00:38:34] 'Cause async doesn't mean you have to go use, like, Apollo's getDataFromTree or something, or other pieces; you have React async mode or React Suspense in SSR, and we can move it all to the page components or the components of your React tree. So basically, you're just creating a React application, and then every async bit—like, using react-fetch or a useQuery—it all just works. [00:39:02] I think that's where we anticipate benefits a lot, where it just removes a lot of cruft that you have to do now. It gets you out of having to be in the Gatsby mindset when you're developing; you basically go to creating a React app, and you have a data layer. But I think React 18 opens so many doors, with the new cache APIs—it just becomes way smarter. And when you look at it from a performance perspective, with the whole concurrent mode, where input gets priority over rendering, it's just going to be way smoother than what they had so far. [00:39:39] Abhi Aiyer: And hopefully people stop complaining about Lighthouse scores and stuff. That'll be great.  [00:39:45] Custom rendering page fragments with React 18  [00:39:45] Abhi Aiyer: Another cool thing that React 18 kind of unlocked for Gatsby in particular is a concept of fragments. And so we were talking about that nav bar example earlier with the hundred thousand pages. And we want to leverage React 18 with, like, custom renderers, so that we can essentially create fragments of a page that have data dependencies.
[00:40:07] Because there's no page query or static query anymore; that's just a query. Your navbar component has a query, and essentially Gatsby can make that navbar a navbar fragment, and your body has a fragment, or maybe your footer has a fragment, your sidebar has a fragment. And as data changes incrementally, we only rebuild fragments, and our hosting layer stitches fragments together. This is an old concept, like ESI includes: if you did PHP back in the day, you know, you're very familiar with this stuff. Like I said, every 10 years things come back around, and we're going to try to do that. We're going to try to build fragments of pages and stitch them together, so a navbar change doesn't break the bank, you know? But we can only do that once React 18 is, like, you know, fully there. I mean, we could do it now, but, like, why, when we should just work off the work of others.  [00:41:02] swyx: So when you say fragments, are you referring to GraphQL fragments or, or, like  [00:41:06] Abhi Aiyer: A page fragment, maybe; we call it, like, you know, today, like, an HTML page that has specific… [00:41:13] You know, I like to call them, like, the rectangles that we all draw around our websites, right? They all have independent data isolation, right? And so these are, like, maybe a Gatsby slice of a page, or a fragment, or some type of include, you know, like in the templating days, right? And that's what I kind of mean there. [00:41:31] So these includes or templates or whatever you want to call them would be independently built and then independently stitched at the cache layer. And then, you know, the data dependencies don't cross, and now I'm not building a hundred thousand pages because I misspelled Gatsby and it should've been, you know,  [00:41:51] swyx: sounds like it happens a lot,  [00:41:54] Abhi Aiyer: but definitely those,  [00:41:56] Ward Peeters: and it looks a lot like donut caching, [00:41:58] if you're more familiar with that piece: like, you have a page where, let's say, each part has a different cache limit than another one. So that's more or less the technical piece of it.  [00:42:10] Server Components in Limbo  [00:42:10] swyx: Server components: anything on, any implications of that for Gatsby?  [00:42:15] Ward Peeters: Not yet, I would say, because they're not going to ship it with React 18. [00:42:19] We've been talking about it, but it's still very fresh, or very new; like, even the React team hasn't worked more on it, so they did their demo, but then it got, like, a little bit  [00:42:31] swyx: stagnated. Oh my God. [00:42:37] Ward Peeters: All the pieces. Like, they need to build all the pieces underneath it to make it work. [00:42:45] swyx: They jumped the gun, maybe, in announcing. I got so excited. I was like, wow, okay, I can cut my JavaScript bundle by 40% and run backend functions in my React component. And then nothing, nothing for 10 months,  [00:43:01] Ward Peeters: because we were super excited about it too. Because when you look at especially marketing sites, like marketing pages or blogs, there's only a small piece of JavaScript that you actually need. [00:43:13] Like, maybe you need a bit for your newsletter button, or, like, something like that. And why bring 200 kilobytes of JavaScript when you technically only need maybe 10, 20 kilobytes? So I think it's great for static or, like, marketing pages. Uh, [00:43:33] Smart Servers vs Smart Clients  [00:43:33] Abhi Aiyer: yeah, so the world was server rendered.
Then we went client-side rendered. Then we went static rendered. Now we're DSG rendered, and then we're going to go back to server rendered. So, you know, time just keeps spinning. Partially server rendered.  [00:43:47] swyx: I called it smart server versus smart clients; that's my term for it. So this is, I think, maybe my most recent post, because I have been trying to write more, but then I keep having real life get in the way. [00:44:01] But why is traditional, which is server rendered, different from the new server rendered? It's essentially exactly the same, but there's a thin runtime, and the stuff that we send over the wire changes. And we're actually doing rendering in the browser, but, like, partial rendering, maybe I should say. [00:44:20] And yeah, I dunno. I think this is a very interesting exploration. Phoenix LiveView is also the other one that gets a lot of love for this. And then Rails is also adopting Hotwire. So, I don't know where this goes. I mean, it seems like we fully explored the smart client space, and the smart server revolution is just kind of getting going. [00:44:41] Ward Peeters: We're going back to Meteor.  [00:44:44] swyx: Back to Meteor, but not so opinionated, I think. You know, I was very excited about Meteor. Like, when I first started as a web dev, I was like, oh yeah, okay, everything is in there. I actually mentioned Meteor here because it had the Minimongo data store, which I thought was just such a great experience. [00:44:59] Did you use it?  [00:45:02] Abhi Aiyer: Oh, yeah. At my last company, we used Meteor for our backend, and then we had to kind of migrate slowly off of it, cause they were just ahead of their time. You know, now all those concepts, those are the concepts of today, right? And that's the beautiful thing; they were  [00:45:19] swyx: just ahead of their time. [00:45:21] Apollo and Open Source Startup Strategy  [00:45:21] swyx: I mean, you know, what they did was they became Apollo. They were just like, oh, no one's handling all the hard parts of GraphQL. Well,  [00:45:29] Abhi Aiyer: okay, we'll do it. Yeah, they did a good job of that too,  [00:45:33] swyx: which is, by the way, like, in terms of just, honestly, I'm interested in startups, entrepreneurship; you know, we work so hard on web dev stuff. [00:45:41] A lot of this we never charge a cent for, and someday I would like to make money on the smart things that we do in tech.  [00:45:47] Taking an under-specified spec, which most of the time is intentionally under-specified, and then building all the hard parts around it, is a very interesting formula for success. [00:45:58] So essentially React is an under-specified framework, and Next.js came in and went, like, oh, okay, well, we'll build the getInitialProps that you guys forgot. And, great, very successful. Gatsby, same thing. And then Apollo and Relay, by the way; but Relay was not a serious company effort. [00:46:19] I mean, Relay is a serious effort; it's just not a startup that was, like, existentially relying on its success. Whereas Apollo was like, okay, GraphQL is under-specified; there's a reference JS implementation, but no one's building the production-quality standard. We'll do it. And then, and yeah, like, it's really interesting.
Cause as the spec grows, or as adoption of the thing grows, you grow with it, and you serve the audience, and you also capture a lot of the value. And you essentially have Facebook working for you, in the sense of, like, oh, there are the spec maintainers; whatever the spec is, they're working for you, because every time they contribute to the spec, you benefit. [00:47:06] TMA: Too Many Acronyms  [00:47:06] Abhi Aiyer: Yeah, maybe that's what's going to happen with DPR, right?  [00:47:10] swyx: The naming sucks; too many three-letter acronyms. I'm sure, like, look, you and I, and everyone in, like, the WebDev Twitter sphere or whatever, we don't mind new things and, like, understanding the differences and nuances, but anyone who is, like, just a regular web dev, or, like, not a web dev but talking to web devs, they think we're crazy. [00:47:36] This is actually bad. Like, we look like the nerds who are talking about all these minor differences and inventing new acronyms for them. I don't know how to fix it. Jargon is important for specialists to understand, in a very short amount of time, the differences between what we're referring to. Jargon is important, but we don't do ourselves or our industry a favor when we have all these acronyms and then people just throw them onto a page or a blog post or a slide deck. [00:48:05] And then people just go, like, okay, yeah, the JS ecosystem  [00:48:09] Abhi Aiyer: is crazy. And you end up explaining the same thing all the time, right? Cause you use some acronym. It was funny: like, on the way to Gatsby Camp, we realized, like, every release we had before had an acronym. Yeah, like PQR, parallel query running; DSG, SSR, SSG, man. [00:48:26] We were, like, trying to figure out how many more acronyms we can fit, to get, like, the acronym count up. But it's a serious problem for us too, because some of our customers have never used Gatsby before; they're coming from a full-on WordPress background, and our sales team, marketing, we all need to be able to convey, like, yeah, this is what it really is. [00:48:45] And this is what it means. And maybe the acronym sticks after they understand it, but that's a really uphill battle to explain right out of the gate. So I would love it if, as a community, we all got together and, like, kind of just understood it. You know, it's kind of like the GraphQL spec: have a formal definition for what this is. [00:49:02] Don't be too heavy-handed in approach; let people implement it however they want to. And then there's just a concept that has different flavors. Yeah. Oh, it's different  [00:49:14] swyx: flavors. Okay. That'd be interesting.  [00:49:16] Gatsby for Docs  [00:49:16] swyx: Is there anything else that we haven't covered that you wanted to shout out?  [00:49:21] Abhi Aiyer: This is fun. I really enjoyed talking to you too.  [00:49:24] swyx: Yeah, I love, uh, I love catching up. Um, uh, fun fact: at my workplace, we use Docusaurus right now for our docs. We're actually considering moving to Gatsby. [00:49:35] Nice. Not something I thought I would do this year, but we're running into enough limitations in Docusaurus that we're essentially customizing so much that we don't get much benefit anymore. So maybe a good standard docs implementation.
It would be interesting for you guys, actually, because a lot of the reason that people pick Docusaurus is basically that it has docs in the name and it's got a lot of good defaults for docs, right? [00:50:04] And Gatsby maybe doesn't have such a developed theme for docs.  [00:50:07] Ward Peeters: We've mostly pointed people to the Apollo team. Like, the whole Apollo docs site is built with Gatsby and is open source; the building blocks are up there. So you could start from there, and then, oh  [00:50:20] Abhi Aiyer: yeah.  [00:50:23] New Relic is with Gatsby too, and they're working on something similar.  [00:50:30] swyx: Awesome. Awesome. Yeah. All right. Cool. Well, thanks for those pointers. I'm actually going to go explore them. [00:50:38] Abhi Aiyer: Yeah. If you need any help, we'll do.  [00:50:41] swyx: And there's no reason why we shouldn't move to Gatsby Cloud, if that makes sense for us as well. Okay. Okay.  [00:50:47] Ward and Abhi, thanks so much; this was really great chatting. Thanks for reaching out. And, yeah, I hope  [00:50:52] Abhi Aiyer: people will try out Gatsby. [00:50:54] Thanks for having us.
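As an editor's footnote to the fragments discussion above: the stitching idea (ESI-style includes, donut caching) can be sketched as below. This is a purely hypothetical illustration of the concept, not Gatsby's implementation; every name here is made up.

```ts
// Hypothetical sketch of ESI-style fragment stitching: each page region
// is built and cached independently, and the hosting layer composes them
// per request. A navbar edit then invalidates one fragment instead of
// triggering a hundred thousand full page rebuilds.
import http from "node:http";

const fragmentCache = new Map<string, string>([
  ["navbar", "<nav>site nav</nav>"],
  ["body", "<main>page body</main>"],
  ["footer", "<footer>site footer</footer>"],
]);

// A build pipeline would call this whenever a fragment's data
// dependencies change.
export function rebuildFragment(name: string, html: string): void {
  fragmentCache.set(name, html);
}

http
  .createServer((_req, res) => {
    // Stitch the cached fragments into a full page.
    const page = ["navbar", "body", "footer"]
      .map((name) => fragmentCache.get(name) ?? "")
      .join("\n");
    res.writeHead(200, { "content-type": "text/html" });
    res.end(`<!doctype html><html><body>${page}</body></html>`);
  })
  .listen(8080);
```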

The Cloud Pod
135: The Cloud Pod Goes to Google Cloud Toronto, Eh?

The Cloud Pod

Play Episode Listen Later Sep 24, 2021 41:57


On The Cloud Pod this week, AWS releases OpenSearch and EKS Anywhere, Google Cloud is now available in the Toronto region, and Microsoft deals with two critical security issues.  A big thanks to this week's sponsors: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. JumpCloud, which offers a complete platform for identity, access, and device management — no matter where your users and devices are located.  This week's highlights

Google Cloud Platform Podcast
Building the I/O Adventure Game with Valentin Deleplace

Google Cloud Platform Podcast

Play Episode Listen Later Sep 8, 2021 32:48


Carter Morgan and Guillaume Laforge co-host this week's episode about what it took to develop the Google I/O Adventure Game. Our guest Valentin Deleplace and Guillaume introduce us to the game, designed to encourage interaction among I/O attendees at the virtual conference held this year. Adventure takes the look of a 90s role-playing game. The online world facilitates the meeting of hundreds of conference attendees and presenters to mimic the in-person conference setting and facilitate meaningful conversations. With avatars, text chatting capabilities, and mini games, attendees' experiences go beyond simply watching online technical presentations. The development of the Adventure Game required scalability to handle varying attendee numbers. It takes advantage of many GCP products, including Compute Engine and Cloud Run. Valentin describes why he and the team chose Cloud Run and how they used it to stress test the game. He talks about challenges the team faced and how they overcame them to produce a smooth, enjoyable experience for conference-goers. Being a game that's live for specific periods of time rather than indefinitely presented different challenges as well. Valentin explains that scaling down, for example, is treated differently for this type of game. Adventure will be available at future conferences. Valentin Deleplace: Valentin Deleplace is a developer advocate at Google. He's also a senior cloud backend engineer, interested in performance and UX, and an enthusiastic Gopher. Cool things of the week: New Cloud Functions min instances reduces serverless cold starts blog; What's the key to a more secure Cloud Function? It's a secret! blog; Shift security left with on-demand vulnerability scanning blog; All you need to know about Cloud Storage blog. Interview: Google I/O site; Chrome Dev Summit site; Join the Adventure at Google I/O video; Google's I/O Adventure was almost as good as being there article; Set Snail site; Compute Engine site; Cloud Run site; Using WebSockets docs; App Engine site; Agones site. What's something cool you're working on? Carter is working on VM End to End. Guillaume is working on new features for Cloud Workflows and helping with the Serverless Expeditions videos.
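The episode doesn't include code, but since the game relied on Cloud Run's WebSocket support, here is a minimal sketch of that kind of server. The ws package and the echo behavior are assumptions for illustration; only the PORT environment variable is a Cloud Run convention.

```ts
// Minimal WebSocket server of the kind Cloud Run can host.
// Cloud Run injects PORT; the ws package is a common Node library.
// Nothing here is taken from the actual I/O Adventure implementation.
import { WebSocketServer } from "ws";

const port = Number(process.env.PORT ?? 8080);
const wss = new WebSocketServer({ port });

wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    // Echo for demonstration; a game server would instead broadcast
    // player positions and chat to nearby players.
    socket.send(`echo: ${data}`);
  });
});

console.log(`WebSocket server listening on ${port}`);
```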

#strobofm
ep.113 Monday Night Fever (@mookjp)

#strobofm

Play Episode Listen Later Sep 6, 2021 40:10


Sponsors: Put an end to ambulance turnaways! Doctors Prime is hiring software engineers. Notes: Implementing a GraphQL server with NestJS at Mercari Shops | Mercari Engineering; Mercari Shops' tech stack and the reasons for choosing it | Mercari Engineering; NestJS - A progressive Node.js framework; Apollo GraphQL | Apollo Data Graph Platform - unify APIs, microservices, and databases into a data graph that you can query with GraphQL; Cloud Run: deploy containers to production in seconds | Google Cloud; If you're curious about how Mercari Shops is developed, come talk to us!!!!; The story of the new Mercari Web | Mercari Engineering; About Mercari, Merpay, and Mercoin / English 🆗; mookjp's PC Saturday Night • A podcast on Anchor; Saturday Night Fever - Wikipedia; Pasokon Sunday - Wikipedia; Latest infection trends in Tokyo | Tokyo Metropolitan Government COVID-19 information site; Asken | Asken Diet - dietitians support your diet for free; Tweet: "I really haven't been writing blog posts or giving community talks lately, so by the time I have news for everyone it'll be 'who is this person?' I really need to keep up daily community activity, personal posts, and posts through company media... powerless..." - mook (@mookjp) August 23, 2021

Screaming in the Cloud
Helping Avoid the Kubernetes Hiccups with Rich Burroughs

Screaming in the Cloud

Play Episode Listen Later Aug 24, 2021 37:05


About Rich: Rich Burroughs is a Senior Developer Advocate at Loft Labs where he's focused on improving workflows for developers and platform engineers using Kubernetes. He's the creator and host of the Kube Cuddle podcast where he interviews members of the Kubernetes community. He is one of the founding organizers of DevOpsDays Portland, and he's helped organize other community events. Rich has a strong interest in how working in tech impacts mental health. He has ADHD and has documented his journey on Twitter since being diagnosed. Links: Loft Labs: https://loft.sh Kube Cuddle Podcast: https://kubecuddle.transistor.fm LinkedIn: https://www.linkedin.com/in/richburroughs/ Twitter: https://twitter.com/richburroughs Polywork: https://www.polywork.com/richburroughs Transcript Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: This episode is sponsored in part by Cribl LogStream. Cribl LogStream is an observability pipeline that lets you collect, reduce, transform, and route machine data from anywhere, to anywhere. Simple, right? As a nice bonus it not only helps you improve visibility into what the hell is going on, but also helps you save money almost by accident. Kind of like not putting a whole bunch of vowels and other letters that would be easier to spell in a company name. To learn more visit: cribl.io Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org, in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of: canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached, the hard way. Take a look at this. It's one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That's canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Periodically, I like to have, well, let's call it fun, at the expense of developer advocates; the developer relations folks; DevRelopers as I insist on pronouncing it. But it's been a while since I've had one of those come on the show and talk about things that are happening in that universe.
So, today we're going back to change that a bit. My guest today is Rich Burroughs, who's a Senior Developer Advocate—read as Senior DevReloper—at Loft Labs. Rich, thanks for joining me.Rich: Hey, Corey. Thanks for having me on.Corey: So, you've done a lot of interesting things in the space. I think we first met back when you were at Sensu, you did a stint over at Gremlin, and now you're over at Loft. Sensu was monitoring things, Gremlin was about chaos engineering and breaking things on purpose, and when you're monitoring things that are breaking that, of course, leads us to Kubernetes, which is what Loft does. I'm assuming. That's probably not your marketing copy, though, so what is it you folks do?Rich: I was waiting for your Kubernetes trash talk. I knew that was coming.Corey: Yeah. Oh, good. I was hoping I could sort of sneak it around in there.Rich: [laugh].Corey: But yeah, you know me too well.Rich: By the way, I'm not dogmatic about tools, right? I think Kubernetes is great for some things and for some use cases, but it's not the best tool for everything. But what we do is we really focus a lot on the experience of developers who are writing applications that run in Kubernetes cluster, and also on the platform teams that are having to maintain the clusters. So, we really are trying to address the speed bumps, the things that people bang their shins on when they're trying to get their app running in Kubernetes.Corey: Part of the problem I've always found is that the thing that people bang their shins on is Kubernetes. And it's one of those, “Well, it's sort of in the title, so you can't really avoid it. The only way out is through.” You could also say, “It's better never begin; once begun, better finish.” The same thing seems to apply to technology in a whole bunch of different ways.And that's always been a strange thing for me where I would have bet against Kubernetes. In fact, I did, and—because it was incredibly complicated, and it came out of Google, not that someone needed to tell me. It was very clearly a Google-esque product. And we saw it sort of take the world by storm, and we are all senior YAML engineers now. And here we are.And now you're doing developer advocacy, which means you're at least avoiding the problem of actually working with Kubernetes day-in-day out yourself, presumably. But instead, you're storytelling about it.Rich: You know, I spent a good part of my day a couple days ago fighting with my Kubernetes cluster at Docker Desktop. So, I still feel the pain some, but it's a different kind of pain. I've not maintaining it in production. I actually had the total opposite experience to you. So, my introduction to Kubernetes was seeing Kelsey Hightower talk about it in, like, 2015.And I was just hooked. And the reason that I was hooked is because of what Kubernetes did, and I think especially the service primitive, is that it encoded a lot of these operational patterns that had developed into the actual platform. So, things like how you check if an app is healthy, if it's ready to start accepting requests. These are things that I was doing in the shops that I was working at already, but we had to roll it ourselves; we had to invent a way to do that. 
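As a concrete illustration of the operational patterns Rich describes being encoded into the platform, here is a hedged sketch of the hand-rolled health and readiness endpoints teams used to build themselves; the /healthz and /readyz paths are common conventions, not anything from the episode.

```ts
// Sketch of the pattern Kubernetes standardized as liveness and
// readiness probes. Paths and warm-up timing are illustrative.
import http from "node:http";

let ready = false;
// Pretend warm-up work (cache priming, DB connections) finishes after 2s.
setTimeout(() => {
  ready = true;
}, 2000);

http
  .createServer((req, res) => {
    if (req.url === "/healthz") {
      // Liveness: the process is up at all.
      res.writeHead(200).end("ok");
    } else if (req.url === "/readyz") {
      // Readiness: the app can actually accept traffic.
      res.writeHead(ready ? 200 : 503).end(ready ? "ready" : "warming up");
    } else {
      res.writeHead(200).end("hello");
    }
  })
  .listen(8080);
```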
But when Kelsey started talking about Kubernetes, it became apparent to me that the people who designed this thing had a lot of experience running applications in distributed systems, and they understood what you needed to be able to do that competently.Corey: There's something to be said for packaging and shipping expertise, and it does feel like we're on a bit of a cusp, where the complexity has risen and risen and risen, and it's always a sawtooth graph where things get so complicated that you then are paying people a quarter-million dollars a year to run the thing. And then it collapses in on itself. And the complexity is still there, but it's submerged to a point where you don't need to worry about it anymore. And it feels like we're a couple years away from Kubernetes hitting that, but I do view that as inevitable. Is that, basically, completely out to sea? Is that something that you think is directionally correct, or something else?Rich: I mean, I think that the thing that's been there for a long time is, how do we take this platform and make it actually usable for people? And that's a lot more about the whole CNCF ecosystem than Kubernetes itself. How do we make it so that we can easily monitor this thing, that we can have observability, that we can deploy applications to it? And I think what we've seen over the last few years is that, even more than Kubernetes itself, the tools that allow you to do those other things that you need to do to be able to run applications have exploded and gotten a lot better, I think.Corey: The problem, of course, is the explosion part of it because we look at the other side, at the CNCF landscape diagram, and it is a hilariously overwrought picture of all of the different offerings and products and tools in the space. There are something like 400 blocks on it, the last time I checked. It looks like someone's idea of a joke. I mean, I come up with various shitposts that I'm sort of embarrassed I didn't come up with one anywhere near that funny.Rich: I left SRE a few years ago, and this actually is one of the reasons. So, the explosion in tools gave me a huge amount of imposter syndrome. And I imagine I'm not the only one because you're on Twitter, you're hanging around, you're seeing people talk about all these cool tools that are out there, and you don't necessarily have a chance to play with them, let alone use them in production. And so what I would find myself doing is I would compare myself to these people who were experts on these tools. Somebody who actually invented the thing, like Joe Beda or something like that, and it's obviously unfair to do because I'm not that person. But my brain just wants to do that. You see people out there that know more than you and a lot of times I would feel bad about it. And it's an issue, I really think it is.Corey: So, one of the problems that I ran into when I left SRE was that I was solving the same problem again and again, in rapid succession. I was generally one of the first early SRE-type hires, and, “Oh, everything's on fire, and I know how to fix those things. We're going to migrate out of EC2 Classic into VPCs; we're going to set up infrastructure as code so we're not hand-building these things from scratch every time.” And in time, we wind up getting to a point where it's, okay, there are backups, and it's easy to provision stuff, and things mostly work. 
And then it becomes tedium, where the day starts to look too much alike.And I start looking for other problems elsewhere in the organization, and it turns out that when you don't have strategic visibility into what other orgs are doing but tell them what they're doing wrong, you're not a popular person; and you're often wrong. And that was a source of some angst in my case. The reason I started what I do now is because I was looking to do something different where no two days look alike, and I sort of found that. Do you find that with respect to developer advocacy, or does it fall into some repetitive pattern? Not there's anything wrong with that; I wish I had the capability to do that, personally.Rich: So, it's interesting that you mentioned this because I've talked pretty publicly about the fact that I've been diagnosed with ADHD a few months ago. You talked about the fact that you have it as well. I loved your Twitter thread about it, by the way; I still recommend it to people. But I think the real issue for me was that as I got more advanced in my career, people assumed that because you have ‘senior' in your title, that you're a good project manager. It's just assumed that as you grow technically and move into more senior roles, that you're going to own projects. And I was just never good at that. I was always very good at reactive things, I think I was good at being on call, I think I was good at responding to incidents.Corey: Firefighting is great for someone with our particular predilections. It's, “Oh, great. There's a puzzle to solve. It's extremely critical that we solve it.” And it gets the adrenaline moving. It's great, “Cool, now fill out a bunch of Jira tickets.” And those things will sit there unfulfilled until the day I die.Rich: Absolutely. And it's still not a problem that I've solved. I'll preface this with the kids don't try this at home advice because everybody's situation is different. I'm a white guy in the industry with a lot of privilege; I've developed a really good network over the years; I don't spend a lot of time worried about what happens if I lose my job, right, or how am I going to get another one. But when I got this latest job that I'm at now, I was pretty open with the CEO who interviewed me—it's a very small company, I'm like employee number four.And so when we talked to him ahead of time, I was very clear with him about the fact that bored Rich is bad. If Rich gets bored with what he's doing, if he's not engaged, it's not going to be good for anyone involved. And so—Corey: He's going to go find problems to solve, and they very well may not align with the problems that you need solved.Rich: Yeah, I think my problem is more that I disengage. Like, I lose my passion for what it is that I'm doing. And so I've been pretty intentional about trying to kind of change it up, make different kinds of content. I happen to be at this place that has four open-source projects, right, along with our commercial project. And so, so far at least, there's been plenty for me to talk about. I haven't had to worry about being bored so far.Corey: Small companies are great for that because you're everyone does everything to some extent; you start spreading out. And the larger a company gets, the smaller your remit is. The argument I always made against working at Google, for example was, let's say that I went in with evil in mind on day one. 
I would not be able—regardless of how long I was there, how high in the hierarchy I climbed—to take down google.com for one hour—the search engine piece.If I can't have that much impact intentionally, then the question really becomes how much impact can I have in a positive direction with everyone supposedly working in concert with me? And the answer I always came up with was not that much, not in the context of a company like that. It's hard for me to feel relevant to a large company. For better or worse, that is the thing that keeps me engaged is, “You know, if I get this wrong enough, we don't have a company anymore,” is sort of the right spot for me.Rich: [laugh]. Yeah, I mean, it's interesting because I had been at a number of startups last few years that were fairly early stage, and when I was looking for work this last time, my impulse was to go the opposite direction, was to go to a big company, you know, something that was going to be a little more stable, maybe. But I just was so interested in what these folks were building. And I really clicked with Lukas, the CEO, when we talked, and I ended up deciding to go this route. But there's a flip side to that.There's a lot of responsibility that comes with that, too. Part of me wanting to avoid being in that spotlight, in a way; part of me wanted to back off and be one of the million people building things. But I'm happy that I made this choice, and like I said, it's been working out really well, so far.Corey: It seems to be. You seem happy, which is always a nice thing to be able to pick up from someone in how they go about these things. Talk to me a little bit about what Loft does. You're working on some virtual cluster nonsense that mostly sails past me. Can you explain it using small words?Rich: [laugh]. Yeah, sure. So, if you talk to people who use Kubernetes, a lot, you are—Corey: They seem sad all the time. But please continue.Rich: One of the reasons that they're sad is because of multi-tenancy in Kubernetes; it just wasn't designed with that sort of model in mind. And so what you end up with is a couple of different things that happen. Either people build these shared clusters and feel a whole lot of pain trying to share them because people commonly use namespaces to isolate things, and that model doesn't completely work. Because there are objects like CRDs and things that are global, that don't live in the namespace, and so that can cause pain. Or the other option that people go with is that they just spin up a whole bunch of clusters.So, every team or every developer gets their own cluster, and then you've got all this cluster sprawl, and you've got costs, and it's not great for the environment. And so what we are really focused a lot on with the virtual cluster stuff is it provides people what looks like a full-blown Kubernetes cluster, but it just lives inside the namespace on your host cluster. So, it actually uses K3s, from the Rancher folks, the SUSE folks. And literally, this K3s API server sits in the namespace. And as a user, it looks to you like a full-blown Kubernetes cluster.Corey: Got it. So, basically a lightweight [unintelligible 00:13:31] that winds up stripping out some of the overwrought complexity. Do you find that it winds up then becoming a less high-fidelity copy of production?Rich: Sure. It's not one-to-one, but nothing ever is, right?Corey: Right. It's a question of whether people admit it or not, and where they're willing to make those trade-offs.Rich: Right. 
And it's a lot closer to production than using Docker Compose or something like that. So yeah, like you said, it's all about trade-offs, and I think that everything that we do as technical people is about trade-offs. You can give everybody their own Kubernetes cluster, you know, run it in GKE or AWS, and there's going to be a cost associated with that, not just financially, but in terms of the headaches for the people administering things. Corey: The hard part from where I've always been sitting has just been—because again, I deal with large-scale build-outs; I come in in the aftermath of these things—and people look at the large Kubernetes environments that they've built and it's expensive, and you look at it from the cloud provider perspective, and it's just one big noisy application that doesn't make much sense from the outside because it's obviously not a single application. And it's chatty across availability zone boundaries, so it costs two cents per gigabyte. It has no [affinity 00:14:42] for what's nearby, so instead of talking to the service that is three racks away, it talks to the thing over an expensive link. And that has historically been a problem. And there are some projects being made in that direction, but it's mostly been a collective hand-waving around it. And then you start digging into it in other directions from an economics perspective, and at large scale, in the extreme corner cases, it always becomes this, “Oh, it's more trouble than it's worth.” But that is probably unfair for an awful lot of the real-world use cases that don't rise to my level of attention. Rich: Yeah. And I mean, like I said earlier, I think that it's not the best use case for everything. I'm a big fan of the HashiCorp tools. I think Nomad is awesome. A lot of people use it, they use it for other things. I think that one of the big use cases for Nomad is, like, running batch jobs that need to be scheduled. And there are people who use Nomad and Kubernetes both. Or you might use something like Cloud Run or App Runner, whatever works for you. But like I said, from someone who spent literally decades figuring out how to operate software and operating it, I feel like the great thing about this platform is the fact that it does sort of encode those practices. I actually have a podcast of my own. It's called Kube Cuddle. I talk to people in the Kubernetes community. I had Kelsey Hightower on recently, and the thing that Kelsey will tell you, and I agree with him completely, is that, you know, we talk about the complexity in Kubernetes, but all of that complexity, or a lot of it, was there already. We just dealt with it in other ways. So, in the old days, I was the Kubernetes scheduler. I was the guy who knew which app ran on which host, and deployed them and did all that stuff. And that's just not scalable. It just doesn't work. Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure: networking, databases, observability, management, and security. And - let me be clear here - it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account.
This means you can provision a virtual machine instance or spin up an autonomous database that manages itself, all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers, needed to support the application that you want to build. With Always Free you can do things like run small scale applications, or do proof of concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free. No asterisk. Start now. Visit https://snark.cloud/oci-free that's https://snark.cloud/oci-free. Corey: The hardest part has always been the people aspect of things, and I think folks have tried to fix this through a lens of, “The technology will solve the problem, and that's what we're going to throw at it, and see what happens by just adding a little bit more code.” But increasingly, it doesn't work. It works for certain problems, but not for others. I mean, take a look at the Amazon approach, where every team communicates via APIs—there's no shared data stores or anything like that—and their entire problem is a lack of internal communication. That's why they launch services that do basically the same thing as each other: because no one bothers to communicate with one another. And half my job now is introducing Amazonians to one another. It empowers some amazing things, but it has some serious trade-offs. And this goes back to our ADHD aspect of the conversation. Rich: Yeah. Corey: The thing that makes you amazing is also the thing that makes you suck. And I think that manifests in a bunch of different ways. I mean, the fact that I can switch between a whole bunch of different topics and keep them all in state in my head is helpful, but it also makes me terrible at an awful lot of different jobs, where I don't come back to finish things, like completing the Jira ticket (to hit on Jira a second time in the same recording). Rich: Yeah, I'm the same way, and I think that you're spot on. I think that we always have to keep the people in mind. You know, when I made this decision to come to Loft Labs, I was looking at the tools and the tools were cool, but it wasn't just that. It's that they were addressing problems that people I know have. You hear these stories all the time about people struggling with the multi-tenancy stuff and I could see very quickly that the people building the tools were thinking about the people using them, and I think that's super important. Corey: As I check your LinkedIn profile, turns out, no, we met back in your Puppet days, the same era that I was a traveling trainer, teaching people how to Puppet and hoping not to get myself ejected from the premises for using sarcastic jokes about the company that I was conducting the training for. And that was fun. And then I worked at a bunch of places, you worked in a bunch of places, and you mentioned a few minutes ago that we share this privilege where if one of us loses our job, the next one isn't going to be a difficult thing for us to find, given the skill set that we have, the immense privilege that we enjoy, and the way that this entire industry works. Now, I will say that has changed somewhat since starting my own company. It's no longer the fear of, “Well, I'm going to land on my feet.” Rich: Right. Corey: Yeah, but I've got a bunch of people who are counting on me not to completely pooch this up. So, that's the thing that keeps me awake at night, now.
But I'm curious, do you feel like that's given you the flexibility to explore a bunch of different company types and a bunch of different roles and stretch yourself a little, with the understanding that, yeah, okay, if you've never lasted five years at the same company, that's not an inherent problem. Rich: Yeah, it's interesting. I've had conversations with people about this. If you do look up my LinkedIn, you're going to see that a lot of the recent jobs have been less than two years: year, year and a half, things like that. And I think that I do have some of that freedom, now. Those exits haven't always been by choice, right? And that's part of what happens in the industry, too. I think I've been laid off, like, four or five times now in my career. The worst one by far was when the bubble burst back in 2000. I was working at WebMD, and they ended up closing our office here. Corey: You were Doctor Google. Rich: I kind of was. So, I was actually the guy who would deploy the webmd.com site back then. And it was three big Sun servers. And I would manually go in and run shell scripts and take one out of the load balancer and roll the new code on it, and then move on to the next one. And those are early days; I started in the industry in about '95. Those early days, I just felt bulletproof because everybody needed somebody with my skills. And after that layoff in 2000, it was a lot different. The market just dried up, I went 10 months unemployed. I ended up taking a job where I took a really big pay cut in a role that wasn't really good for me, career-wise. And I guess it's been a little bit of a comfort to me, looking back. If I get laid off now, I know it's not going to be as bad as that was. But I think that's important, and one of the things that's helped me a lot and I'm assuming it's helped you, too, is building up a network, meeting people, making friends. I sort of hate the word networking because it has really negative connotations to me. The salespeople slapping each other on the back at the bar and exchanging business cards is the image that comes to my mind when I think of networking. But I don't think it has to be like that. I think that you can make genuine friendships with people in the industry that share the interests and passions that you have. Corey: That's part of it. People, I think, also have the wrong idea about passion and how that interplays with career. “Do a thing that you love, and the money will follow,” is terrific advice in the United States to make about $30,000 a year. I assure you, when I started this place, I was not deeply passionate about AWS billing. I developed a passion for it as I rapidly had to become an expert in this thing. I knew there was an expensive business problem there that leveraged the skill set that I already had, and I could apply it to something that was valuable to more than just engineers because, let's face it, engineers are generally terrible customers for a variety of reasons. And by doing that and becoming the expert in that space, I developed a passion for it. I think it was Scott Galloway who in one of his talks said he had a friend who was a tax attorney. And do you think that he started off passionate about tax law? Of course not. He was interested in making a giant pile of money. Like, his preferred seat when he flies is ‘private.' So, he's obviously very passionate about it now, but he found something that he could enjoy that would pay a bunch of money because it was an in-demand, expensive skill.
I often wonder if instead of messing around with computers, my passion had been oil painting, for example. Would I have had anything approaching the standard of living I have now? The answer is, “Of course not.” It would have been a very different story. And that says many deeply troubling things about our society across the board. I don't know how to fix any of them. I'm one of those people that, rather than sitting here talking about how the world should be, I deal with the world as I encounter it. And at times, that doesn't feel great, but it is the way that I've learned to cope, I guess, with the existential angst. I'm envious in some ways of the folks who sit here saying, “No, we demand a better world.” I wish I shared their optimism or ability to envision it being different than it is, but I just don't have it. Rich: Yeah, I mean, there are oil painters who make a whole lot of money, but it's not many of them, right? Corey: Yeah, but you shouldn't have to be dead first. Rich: [laugh]. I used to… know a painter whose big canvas Jim Carrey bought for quite a lot of money. So, they're not all dead. But again, your point is very valid. We are in this bubble in the tech industry where people do make, on average, I think, a lot more money than people do in many other kinds of jobs. And I recently started thinking about possibly going into ADHD coaching. So, I have an ADHD coach myself; she has made a very big difference in my life so far. And I actually have started taking classes to prepare for possibly getting certified in that. And I'm not sure that I'm going to do it. I may stay in tech. I may do some of both. It doesn't have to be either-or. But it's been really liberating to just have this vision of myself working outside of tech. That's something that I didn't consider was even possible for quite a long time. Corey: I have to confess I've never had an ADHD coach. I was diagnosed when I was five years old and back then—my mother had it as well, and the way that it was dealt with in the '50s and '60s when she was growing up was, she had a teacher once physically tie her to a chair. Which—Rich: Oh, my gosh. Corey: —is generally frowned upon these days. And coaching was never a thing. They decided, “Oh, we're going to medicate you to the gills,” in my case. And that was great. I was basically a zombie for a lot of my childhood. When I was 17, I took myself off of it and figured I'd white-knuckle it for the next 10 years or so. Again, everyone's experience is different, but for me, it didn't work, and it led to some really interesting tumultuous times in my '20s. I've never explored coaching just because it feels like so much of what I do is the weirdest aspects of different areas of ADHD. I also have constraints upon me that most folks with ADHD wouldn't have. And conversely, I have tremendous latitude in other areas. For example, I keep dropping things from time to time; I have an assistant. Turns out that most people who bring in an assistant to help them with stuff will find themselves fired, because you're not supposed to share inside company data with someone who is not an employee of that company. But when you own the company, as I do, it's, well, okay, I'm not supposed to share confidential client data or give access to it to someone who's not an employee here. “Da da da da da. Welcome aboard. Your first day is Monday.” And now I've solved that problem in a way that is not open to most people.
That is a tremendous benefit, and I'm constantly aware of how much privilege is just baked into that. It's a hard thing for me to reconcile, so I've never explored the coaching angle. I also, on some level—and this is an area that I understand is controversial—in no way, shape, or form want anyone to take anything negative away from this. There are a number of people I know where ADHD is a cornerstone of their identity, where that is the thing that they are. That is the adjective that gets hung on them the most—by choice, in many cases—and I'm always leery about going down that path because I'm super strange on a whole bunch of different angles, and even, “Oh, well he has ADHD. Does that explain it?” No, not really. I'm still really, really strange. But I never wanted to go down that path of it being, “Oh, Corey. The guy with ADHD.” And again, part of this is growing up with that diagnosis. I was always the odd kid, but I didn't want to be, quote-unquote, “the freak” that always had to go to the nurse's office to wind up getting the second pill later in the day. I swear people must have thought I had irritable bowel syndrome or something. It was never, “Time to go to the nurse, Corey.” It was one of those [unintelligible 00:27:12]. “Wow, 11:30. Wow, he is so regular. He must have all the fiber in his diet.” Yeah, pretty much. Rich: I think that from reading that Twitter thread of yours, it sounds like you've done a great job at mitigating some of the downsides of ADHD. And I think it's really important when we talk about this that we acknowledge that everybody's experience is different. So, your experience with ADHD is likely different than mine. But there are some things that a lot of us have in common, and you mentioned some of them, like the idea of creating that Jira ticket and never following through. You put yourself in a situation where you have people around you and structures, external structures, that compensate for the things that you might have trouble with. And that's kind of how I'm looking at it right now. My question is, what can I do to be the most successful Rich Burroughs that I can be? And for me right now, having that coach really helps a lot, because being diagnosed as an adult, there's a lot of self-image problems that can come from ADHD. You know that you failed at a lot of things over time; people have often pointed that out to you. I was the kid in high school whose counselors or teachers were always telling me I needed to apply myself. Corey: “If you just tried harder and sucked a little less, then you'll be much better off.” Yeah, “Just apply yourself. You have so much potential, Rich.” Does any of that ring a bell? Rich: Yeah, for sure. And, you know, something my coach said to me not too long ago: I was talking about something and I said to her, I can't do X. Like, I'm just not—it's not possible. And her response was, “Well, what if you could?” And I think that's been one of the big benefits to me: she helps me think outside of my preconceptions of what I can do. And then the other part of it, that I'm personally finding really valuable, is having the goal setting and some level of accountability. She helps with those things as well. So, I'm finding it really useful. I'm sure it's not for everybody.
And like we said, everybody's experience with ADHD isn't the same, but one of the things that I've had happened since I started talking about getting diagnosed, and what I've learned since then, is I've had a bunch of people come to me.And it's usually on Twitter; it's usually in DMs; you know, they don't want to talk about it publicly themselves, but they'll tell me that they saw my tweets and they went out and got diagnosed or their kid got diagnosed. And when I think about the difference that could make in someone's life, if you're a kid and you actually get diagnosed and hopefully get better treatment than it sounds like you did, it could make a really big positive impact in someone's life and that's the reason that I'm considering putting doing it myself is because I found that so rewarding. Some of these messages I get I'm almost in tears when I read them.Corey: Yeah. The reason I started talking about it more is that I was hoping that I could serve as something of, if not a beacon of inspiration, at least a cautionary tale of what not to do. But you never know if you ever get there or not. People come up and say that things you've said or posted have changed the trajectory of how they view their careers and you've had a positive impact on their life. And, I mean, you want to talk about weird Gremlins in our own minds?I always view that as just the nice things people say because they feel like they should. And that is ridiculous, but that's the voice in my head that's like, “You aren't shit, Corey, you aren't shit,” that's constantly whispering in my ear. And it's, I don't know if you can ever outrun those demons.Rich: I don't think I can outrun them. I don't think that the self-image issues I have are ever going to just go away. But one thing I would say is that since I've been diagnosed, I feel like I'm able to be at least somewhat kinder to myself than I was before because I understand how my brain works a little bit better. I already knew about the things that I wasn't good at. Like, I knew I wasn't a good project manager; I knew that already.What I didn't understand is some of the reasons why. I'm not saying that it's all because of ADHD, but it's definitely a factor. And just knowing that there's some reason for why I suck, sometimes is helpful. It lets me let myself off the hook, I guess, a little bit.Corey: Yeah, I don't have any answers here. I really don't. I hope that it becomes more clear in the fullness of time. I want to thank you for taking so much time to speak with me about all these things. If people want to learn more, where can they find you?Rich: I'm @richburroughs on Twitter, and also on Polywork, which I've been playing around with and enjoying quite a bit.Corey: I need to look into that more. I have an account but I haven't done anything with it, yet.Rich: It's a lot of fun and I think that, speaking of ADHD, one of the things that occurred to me is that I'm very bad at remembering the things that I accomplish.Corey: Oh, my stars, yes. People ask me what I do for a living and I just stammer like a fool.Rich: Yeah. And it's literally this map of, like, all the content I've been making. And so I'm able to look at that and, I think, appreciate what I've done and maybe pat myself on the back a little bit.Corey: Which is important. Thank you so much again, for your time, Rich. I really appreciate it.Rich: Thanks for having me on, Corey. This was really fun.Corey: Rich Burroughs, Senior Developer Advocate at Loft Labs. 
I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with a comment telling me what the demon on your shoulder whispers into your ear and that you can drive them off by using their true name, which is Kubernetes.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

Coding talks with Vishnu VG
GCP Series - Google Serverless Services

Coding talks with Vishnu VG

Play Episode Listen Later Aug 24, 2021 8:16


A GCP series episode featuring Google serverless services: App Engine, Cloud Functions, Cloud Run, and Cloud Run for Anthos --- Send in a voice message: https://anchor.fm/vishnu-vg/message

airhacks.fm podcast with adam bien
Java, Serverless, Google App Engine, gVisor, Kubernetes

airhacks.fm podcast with adam bien

Play Episode Listen Later Aug 22, 2021 74:49


An airhacks.fm conversation with Ludovic Champenois (@ludoch) about: Amstrad CPC 64 with audio tape, listen to bugs, first project: a family tree in Basic, 8-bit music over gaming, learning APL with Game of Life then fortran, inventing the iPad with Apple II, Pascal and assembler, working with computers on boats with Vax VMS and Fortran, refactoring logistics software from VAX to Unix C++ and DEC Alphas, starting at Sun Microsystems in 1996, from Java 0.9 to 1.0, Javasoft vs. Sun Tools, TeamWare was like git but developed by Sun, interviewing the CEO of NetBeans at Sun, working on Netbeans Enterprise Edition, xdoclet was forbidden by Sun Microsystems, Javasoft was the church, using Netbeans at Google, improving application servers usability, writing deployment descriptors by hand, Java EE 5 was a revolution, it was impossible to write an EJB 2 with vi, starting to work on iPlanet Netscape and Sun Server, Java EE Reference Implementation was the ancestor of Glassfish, using Glassfish as Reference Implementation and commercial offering at the same time, implementing HK2 - the dependency injection for Glassfish, generating JAX-RS resources with asm, starting at the Google AppEngine Team in 2011, Google AppEngine (GAE) is one of the first Platform as a Service (PaaS) offerings, serverless and elastic Google AppEngine, GAE came with JPA-like persistence, GAE ships with a single JAR which communicates to various Google services, GAE supports Java 11, GAE supports Servlets and jetty, kubernetes was created at the GAE team, GAE is a single application running on Google's infrastructure, GAE was not able to secure Java 8 like it secured Java 6 and Java 7, using gVisor as replacement for Java's security model, gVisor is the basis of Cloud Run, gVisor rewrites syscalls, gVisor is the new implementation of the libc library, gVisor is the matrix for JVM, Ludovic's presentation about GAE: Evolution of a Platform as a Service from the inside Ludovic Champenois on twitter: @ludoch

Cloud Engineering – Software Engineering Daily
Cloud Run: Serverless Applications with Steren Giannini

Cloud Engineering – Software Engineering Daily

Play Episode Listen Later Aug 12, 2021 46:45


Serverless computing is a cloud computing solution that lets developers deploy applications to containers without managing the servers themselves. Servers and resources are provisioned automatically, you pay only for what you use, and you experience little to no errors or downtime (ionos). Google Cloud Run is a managed compute platform that enables you to run containers that The post Cloud Run: Serverless Applications with Steren Giannini appeared first on Software Engineering Daily.
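To make the description concrete, here is a minimal sketch of a container-friendly service of the kind Cloud Run executes. The only Cloud Run-specific detail assumed is the injected PORT environment variable; everything else is plain Node.

```ts
// Minimal HTTP service suitable for a Cloud Run container:
// listen on the PORT the platform provides and serve requests.
import http from "node:http";

const port = Number(process.env.PORT ?? 8080);

http
  .createServer((_req, res) => {
    res.writeHead(200, { "content-type": "text/plain" });
    res.end("Hello from a container\n");
  })
  .listen(port, () => {
    console.log(`listening on ${port}`);
  });
```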

RWpod - a podcast about the world of Ruby and Web technologies
Season 09, Episode 31. Nokogiri 1.12 supports HTML5 parsing, Pushing Boundaries, Babosa, Meyda, Kid Pix, and more

RWpod - a podcast about the world of Ruby and Web technologies

Play Episode Listen Later Aug 8, 2021 34:13


Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby - Nokogiri 1.12 supports HTML5 parsing (after assimilating Nokogumbo), Running Rails on the Cloud Run environment, Using ActiveRecord's #update_counters to Prevent Race Conditions, 6 Command Line Tools for Productive Programmers, Babosa - a library for creating human-friendly identifiers, aka "slugs", Yukihiro "Matz" Matsumoto - Pushing Boundaries (video). Web - Vue.js has been selected as Wikimedia Foundation's future JavaScript framework, How MDN's autocomplete search works, Accessibility testing with Storybook, TensorFlow.js tutorial: Get started with the ML library, Meyda is a JavaScript audio feature extraction library, Kid Pix as a JavaScript App, RWpod Cafe 25 (04.09.2021) - collecting and voting on news topics

Flutter 101 Podcast
Dart in the Cloud, Backend, Command Line, and Shelf with Kevin Moore

Flutter 101 Podcast

Play Episode Listen Later Jul 23, 2021 44:56


My guest in this episode is Kevin Moore. Kevin is a Product Manager at Google working on Dart and Flutter.
In one of the last episodes of the Flutter 101 podcast, I talked to Ryan Knell, the author of the Alfred package. Kevin, who works as Product Manager at Google, listened to the episode. He then shared on Twitter that he would love to come on and explain more about his thoughts on pkg:shelf and Dart on the server and CLI. Of course, I invited him immediately!
Most people know Dart as the language behind Flutter. Flutter code is powered by the world-class Dart platform, which enables compilation to 32-bit and 64-bit ARM machine code for iOS and Android, as well as JavaScript for the web and Intel x64 for desktop devices.
Dart is also used for tooling, command-line apps, running web servers, and more. pub.dev also runs on Dart, and it serves millions! It's a great match as a backend language for teams and developers who already write Flutter. If your Flutter app needs a backend, or you need to glue some services together, Dart is a great match.
We talked about how you can run Dart in the cloud today. You can use Cloud Run's container support, combined with Dart's Docker images, to run server-side Dart code.
We briefly talked about the Functions Framework, which makes it easy to write Dart functions instead of server applications for handling web requests. Using the framework, you can create functions that handle HTTP requests and CloudEvents, and deploy your Dart functions to Google Cloud.
Lastly, we also talked about command-line apps, and Kevin shared his tips on which packages can improve your development experience while writing and using Dart on the command line.
Guest: Kevin Moore - Twitter @kevmoo | GitHub @kevmoo | LinkedIn linkedin.com/in/kevmoo | Reddit @kevmoo | Medium @kevmoo
Host: Vince Varga - Twitter @vincevargadev | GitHub @vincevargadev | LinkedIn @vincevargadev | Web vincevarga.dev
Most relevant past episodes from Flutter 101:
Dart Server Framework Alfred with Ryan Knell (Episode 11): Ryan Knell is the author of the performant, Express.js-like Dart server framework Alfred. We talked about the state of full-stack Dart, ORMs, backend frameworks, Flutter, and much more!
Dart on AWS Lambda and Serverless Computing with Sebastian Döll (Episode 6): We talked to Sebastian Döll (GitHub, Microsoft, previously Solutions Architect at AWS) about serverless computing, the state of serverless Dart, and how he implemented a custom AWS Lambda Runtime for Dart.
Backend and Frontend Web with Dart with Jermaine Oppong (Episode 7): We talked about backend and frontend Dart with Web Developer and YouTuber Jermaine Oppong. Shelf, Alfred, Aqueduct, Angel, AngularDart, and more.
Mentioned packages:
pub.dev/packages/shelf: A model for web server middleware that encourages composition and easy reuse
pub.dev/packages/functions_framework: FaaS (Function as a Service) framework for writing portable Dart functions
pub.dev/packages/shelf_router: A convenient request router for the shelf web framework, with support for URL parameters, nested routers, and routers generated from source annotations
pub.dev/packages/args: Library for defining parsers for parsing raw command-line arguments into a set of options and values using GNU and POSIX style options
pub.dev/packages/build_cli: Parse command-line arguments directly into an annotation class using the power of build_runner and source_gen
pub.dev/packages/completion: A package to add shell command completion to your Dart application
Other resources:
Pop, Pop, Win! + source code: demonstration app for the open-source Dart project from Google; an implementation of Minesweeper in Dart
The Chromium Projects
Ars Technica: Google has released Dartium, a Chromium build with a Dart VM (2012)
Fandom Google Wiki: Dartium: "Dartium is a modified version of Chromium that is designed to support the Dart language."
Dart News and Updates: The new AdWords UI uses Dart - we asked why (2016)
Good to Great: Why Some Companies Make the Leap... and Others Don't by Jim C. Collins
Rack: a Ruby Webserver Interface
Announcing Dart 2.13: (...) Official Docker support and Dart on Google Cloud
Docker | Docker Official Images | Dart Docker Official Images
Kubernetes - production-grade container orchestration
Dart Docs - Google Cloud (Cloud Run, Functions, Kubernetes, Compute Engine, App Engine)
Announcing Dart support for GitHub Actions
It's All Widgets! hosted by Hillel Coren: Kevin Moore
YouTube, Kevin Moore: Code generation with the Dart build system
YouTube, Google I/O Q&A: Cloud, Dart, and full-stack Flutter

Reversim Podcast
413 GitOps with Yaron from Soluto

Reversim Podcast

Play Episode Listen Later Jul 17, 2021


This is episode 413 of Reversim (רברס עם פלטפורמה), recorded on July 8th, 2021 - and this is take two: the first take was particularly good, but it wasn't recorded... So here you are, in take 2, together with us - yes, I know that for you this is the first take, fine. Today we're in our studio in Yokneam Illit (!); Ori is on vacation, and filling in for Ori is Yonatan from Outbrain - hi Yonatan, how are you?
(Yonatan) Hi, what's up?
(Ran) Excellent, welcome - and with us is also Yaron from Soluto - hi Yaron!
(Yaron) Hi, how's it going? Nice to meet you...
(Ran) Good to have you - today we're going to talk about GitOps, for the first time. Before we talk about GitOps, a short round of introductions - Yonatan, you've been here many times before [introduction in the previous episode], but tell us a bit more about yourself -
(Yonatan) I came to Outbrain 10 years ago as a Backend engineer, and for the last five years I've been managing development. And a long-time listener of Reversim [and also a guest - 328 The tension between Agility and Ownership, Final Class 23: IDEs, 131 uijet, 088 Final Class 2, and of course 412 Serverless at Via].
(Ran) Excellent - good to have you here. Yaron - two sentences about yourself?
(Yaron) I'm Yaron Idan, and I lead the DevOps team at Soluto. I've been playing with computers for more than 20 years - I started in the army as a DBA and then moved on to being a developer. A few years ago I discovered the DevOps world and switched to it completely - and I've been enjoying it ever since. I've been doing it at Soluto for about five years now. I'll also say a bit about Soluto, the company I work for - Soluto wants to make technology friendlier, mainly for people for whom technology is "not their first language". Our users can access interfaces on the Web or on Mobile and get the most out of their digital subscriptions, make sure all their data is secure and backed up; if they have a smart washing machine at home, they can make sure it works properly and that they manage to use it... And on the other side - they can also open a chat session with Experts, who also use the platform we develop in Tel Aviv, called Anywhere Expert. It lets technical supporters run those sessions from home, from an app, like with Uber - they no longer have to sit in a cubicle with a headset in a grayish setting, but can work on their own terms. This creates a huge disruption in the whole Tech-support industry in the US, where most of our customers are.
(Ran) So it's effectively a Marketplace of supporters and supportees - on one side the supportees, people who have a problem with their phone or their washing machine or any other device, and on the other side the supporters, who - maybe on top of their day job, as you said, like with Uber - supplement their income in their free time.
(Yaron) Yes - we like to think we manage to give a lot to both sides: making people feel they're getting the most out of what they paid for, and letting people do their work under more liberating conditions.
(Ran) Here's a surprising question you haven't been asked in a long time - you said you've been programming for a long time, so I wondered what was the first processor that suffered at your hands?
(Yaron) I'm having deja vu... I think it was a 386? But I think I went through the whole series; somewhere in my childhood I found a booklet in Hebrew that teaches BASIC programming, opened it - and never stopped.
(Ran) Still in BASIC?
(Yaron) I've made progress since - now I'm on Pascal... [nothing better than that]
(Ran) Nice... Turbo Pascal [OK, there is better...], Object Pascal... lovely - that blue and cyan, exemplary human engineering. [You're not being sarcastic, right? It really was wonderful] OK - we've gathered today to talk about GitOps. Everyone more or less knows what Git is, and everyone more or less knows what Ops is - the interesting part of DevOps, mind you... but what is GitOps? What is this combination of the two?
(Yaron) So yes - first of all, the trend today really is to slap an "Ops" suffix on everything... there's DataOps and there's MLOps, and now there's also a new buzzword, which is GitOps. At Soluto we've been doing it for many years without giving it that name, but I'll explain what it actually means. GitOps is a mechanic of CD, an implementation, that lets a developer deliver their product to Production in a way where Git - or the code sitting in Git - represents the state of Production. If in regular Continuous Delivery there's a serious maze the code has to traverse from the moment it's committed to the main branch until you can actually see it in Production, GitOps tries to eliminate as many of those barriers as possible - and, via a component that performs an operation called Reconciliation, to check the state of the code in Git, see whether Production matches those definitions, and if needed, sync the two.
(Ran) When you say "maze", do you mean, say, deploying first as a Canary, and then maybe rolling out 25% in one Data Center and then in another? Is that the maze you're referring to?
(Yaron) That maze can actually exist in GitOps too, but we at Soluto specifically do use Canary - it even became much more accessible for us thanks to GitOps. The two aren't mutually exclusive. The maze I'm describing is mainly going into the CI infrastructure, clicking "Deploy", seeing something get stuck, remembering the script that does it needed upgrading...
(Ran) Right... you're really talking about the human intervention required after the code is already in Master...
(Yaron) Exactly - and also detecting mistakes that happen during Deployment. We often saw that, because of the relative opacity of Deployment pipelines, there's an error in Production that isn't reflected back to the Continuous Delivery pipeline - and the developer just sits there saying "well, it's probably just taking a while... the gnomes carrying the boxes to AWS must have been delayed..." - and only after 20 or 30 minutes does anyone realize something went terribly wrong...
(Ran) That is - the moment I merge a branch to Master - I would never commit to Master, that's forbidden... - the moment I merge to Master, I should assume everything is in Production, right?
(Yaron) Not necessarily. There are GitOps tools that take that as a first-class citizen: they count on there being a series of protections - whether tests and Smoke Tests, or Canary and gradual releases, as you mentioned - and they simply assume the user employs them. We prefer more contextual Deployments, which is why we initially adopted Flux, a WeaveWorks tool that pushes this methodology forward and really tries to "forbid the user" from creating changes... from creating differences between Production and the code. Then we moved to Argo - a tool from Intuit - which shares a lot of code with Flux but changes much of the dynamics and mechanics. It lets you first get the code into Master, and only then tells the developer "do your sync deliberately". There's also an option for automatic sync; once those protections are in place and you're confident that what goes into Master can reach Production, you can flip that flag and enjoy life with far less toil, far less manual work.
(Ran) So you talked about Reconciliation, and about gaps between what's in Master, which is supposed to describe the Production environment, and the real Production environment. Those gaps can come from several places - code that entered Master but hasn't been deployed yet, but also from some Ops person changing Production... logging into AWS or SSH-ing into a server and changing something there. There can be drifts of all kinds, and I'm sure anyone in the operations world has run into these. But when did it reach the point where it really bothered you, and you said "enough! Here we must act! With us there will be no difference between Master and Production!"... was there a formative incident that drove you to it?
(Yaron) There was... but before I explain that incident, I'll explain how we even got to a place where we could think about this concept. As I said, we did this long before it was called GitOps, and we started in much lower-stakes places than the Production environment, which today serves content to close to a hundred million users... We started with our monitoring infrastructure - that was more than five years ago. We wanted to democratize monitoring; we didn't want it to be something where a developer says "I'd like to monitor, please... take this query and put it on the tool". So we created a repository, put a giant JSON file in it, and told developers: "just write everything you want to monitor here, and it will reach the infrastructure 'by magic'". That was our first Production where every commit to Master synced with the code, and the beauty of it was that we really could play at lower stakes. Breaking the monitoring infrastructure is a second-order problem - something that can last a minute or two without users noticing, and it usually happens in a controlled way, while the developers are in the office - so it let us experiment safely.
(Ran) So your initial motivation was actually to provide a better developer experience - instead of a developer coming to you and asking "please add monitoring for me", or having to know all the tricks of the monitoring tool, they can edit a JSON file and commit - that's their editing interface - and they understand that once they commit, there's a hook that takes that file and applies it to Production. So from your perspective it's a better developer experience - but I still don't see how it addresses Production incidents...
(Yonatan) In principle, you could implement this with regular CI/CD too, right? Without the "trick" of syncing, or checking the gaps?
(Yaron) Right - and I'll even say that at some stage we moved to exactly that: we had one central repository, and then we introduced a "CD-like" infrastructure that takes commits from one repository and injects them into that central repo. And there we started feeling the drawbacks I mentioned - the tool broke many times; situations arose where this classic pipeline, trying to reach the GitOps-y place, got stuck because of all sorts of mistakes we hadn't foreseen, and it was hard to get Visibility into that. It requires building a pile of tools just so the thing works flawlessly.
(Yonatan) One more thing I wanted to ask - how does GitOps stand relative to Infrastructure-as-Code? Does it complement it? Is it the next stage of it?
(Yaron) That's an excellent question, because a lot of the time we still do Infrastructure-as-Code with tools that are more "push-like". We worked a bit with Terraform, and leaned more toward a tool called Pulumi, which is a sort of wrapper around Terraform using more common programming languages. And there you feel exactly this: to change my infrastructure now, I need to run Apply... and before Apply, I'll want a Preview, and to show it to the developers so they understand what change is about to happen. That almost-imperative mechanism is very noticeable - it's very apparent that a change is about to occur and an action is required to make it happen. One of the tools we're looking at more and more is called Xstate; it puts the infrastructure you want to create as Custom Resources inside a Kubernetes cluster, and then a Reconciliator, instead of talking to the Kubernetes API, talks to the AWS or Azure or GCP API and creates the objects there. Again, a fairly massive shift in thinking, because it means that the moment a developer commits to Master, the tool picks it up and syncs it to the cloud. There's no intermediate stage of Apply, of Preview... all those things must happen in the PR, before the code changes.
(Ran) On one hand, it sounds terribly elegant... Production and Master speak with one voice. Cool, very symmetric, very simple... On the other hand, it also sounds terribly dangerous: I committed... sorry - I merged to Master, I didn't commit to Master... I merged to Master, and maybe I don't quite know what it's going to produce - I don't know that it's now going to create an unholy mess in Production... Don't I have some small place where I can experiment a little before the commit? How do you handle it? How do you mitigate this enormous wingspan that everyone suddenly gets?
(Yaron) That's a great question, because it loops back to your earlier one - "what problem were we trying to solve?" In a reversal of that picture, the lack of confidence actually came before we had GitOps. We had the problem that... simply, when you work with infrastructures like Kubernetes, the infrastructure becomes very, very complex. It also reaches a state where, to bring up a new cluster during some Production issue, a person has to do a manual operation that can take hours - a very uncomfortable situation. With a limited number of clusters - in our case, two - if one of them goes down, I enter a race against the clock to bring up a new cluster. That also kept us from creating what you hinted at - some "playground" or sandbox on the side, where you can make all the changes safely and know that whatever I break, Production won't know about it. That's one problem we wrestled with long before reaching the promised land of GitOps. Another problem, which I'll also explain how GitOps solved for us, is developer ergonomics around Kubernetes. Our developers were used to working with self-contained Services running on a PaaS, like Heroku - in our case it was Azure, but it was a closed environment, virtual machines each developer got, running their Services. Suddenly moving to a multi-tenant cluster, where everyone works together, where you have to know not to step on each other's toes... with many new objects the developers don't know - created a perspective shift that wasn't easy to instill in the development teams. Our team worked with Kubernetes a lot, but the developers didn't always want to grapple with that complexity, and it was hard to expose them to it without upsetting that balance somewhere...
(Ran) So in this context too, it sounds like you're essentially building an interface to Kubernetes for developers... that is: "you don't need to go use kubectl or other tools - you just need to commit some JSON file, and we'll take it from there".
(Yaron) Right... well, today it's all YAML, but yes - that's the central change we made...
(Ran) ...we've upgraded...
(Yaron) ...now there are dashes instead of curly braces...
(Yonatan) Warn Avishai - he has a thing against YAMLs...
(Ran) Avishai, our listener - cover your ears for a moment... the whole world is YAML already, nothing to be done...
(Yaron) In many places I've seen they're already recruiting YAML developers...
(Ran) Yes - maybe the retired XML developers will become YAML developers... Let's rewind a moment - you spoke earlier about your product, and said it can support washing machines, personal computers, and so on. So every time I commit to Master, say to the app...
(Yonatan) You don't commit to Master, Ran... remember - again, I'll Reject it...
(Ran) How the mighty have fallen... that's the Freudian in me speaking... So every time I commit to a branch, and merge to Master - after Code Review, of course, and fixing all the comments, and passing CI - then the latest cool app I wrote for my grandmother's washing machine immediately goes out to every washing machine in the world? Every phone in the world? Is that really what we want?
(Yaron) The answer is "no"... as I said, we keep a very strong measure of Control, because that's how we wanted to roll out this change. We didn't want to scare people and tell everyone working on our code "be very careful with Master!" The idea was to give people more Visibility, more transparency - and slowly reach the maturity model where we feel comfortable syncing things automatically. Today, most of our Backend services are deployed using a GitOps tool - using Argo, which I mentioned earlier. What developers get out of it is replacing the convoluted, hard-to-follow Pipeline with a single commit to Master: instead of doing the Deployment at that stage of the Pipeline, there's just one commit that changes the Version Argo looks at. Once that Version change is made, the developer goes to a different UI - Argo's - and sees very clearly that a particular field changed in the Deployment YAML, with the new Tag. This produces a lot of good things for us - even when you do sync the thing, it's no longer a Context-switch to go check the state: Argo shows, very nicely, with green hearts or broken red hearts, the status of the old version and the new version. It gives very fast feedback - does this change work? Did it fail? Do I need to go back to the drawing board and fix it? In practice, this actually increased our Control over Production, and did not create instability or uncertainty.
(Yonatan) That state, say the version change - suppose I'm an engineer with a new version - you're saying that first I do a Branch and Commit and Merge, because I want a new version with some Tag - is that in the Repository of my application, or in the Repository of the application that manages the GitOps?
(Yaron) A great question, because we really do something non-traditional there - we make all those changes in the code's Repository. Usually - I listen at conferences, and I read a nice post by someone from Riskified describing how they work with Argo - the consensus, more or less, is to separate the Repository the machines read from the Repository the humans read... So what's customary is to create, say, "Yaron-API", decide that's going to be my Service, and that's where I, as a human, write code - and then let the machine commit to some other Repository, called "Yaron-API-Deployment", and take from there the State Argo looks at. We, simply because we wanted what I said earlier - to improve developer ergonomics - didn't want them to have that Context-switch, constantly moving between the repo where the code is written and the repo where the Deployment happens...
(Yonatan) There's another advantage - you can run git log and see the Deployments... many people say the opposite... they say "I don't want to see machine commits" [a great podcast name, by the way], it's not sacred. But we believe those commits really do, as you say, represent the change of State.
(Ran) I assume this debate, or dilemma, is less relevant in a Mono-Repo situation - there's still the deliberation; suppose we're in a Mono-Repo world, and I gather you're not - whether to put the configuration close to the code, or all of the configuration in one place, say in some branch or subtree of the Mono-Repo. I've faced that dilemma many times, and I think there's a Trade-off - on one hand it's nice for the configuration to be close to the code, sometimes literally inside the code; on the other hand, it's also nice to get a view over the configuration of all the different Services, and understand that way how things happen. So I gather you lean toward putting the configuration close to the code, if we can call it "configuration"...?
(Yaron) Here too there are several answers... first, we're not against Mono-Repo; we very much take the approach of "let the developers simply decide what's best in the Context of the current task". So we have one whole group working inside a single Mono-Repo that holds all the Services, without any context; we have a group working Repo-per-Service; and a third group working with contextual Mono-Repos...
(Yonatan) You're not against Mono-Repo - you simply have many of them, is what you're saying... ["many of my best friends", etc...]
(Yaron) Exactly... we're oligo-Repo...
(Ran) No, there's even a name for it - Multi-Mono-Repo... they wrote it before us...
(Yaron) Yes... so we haven't reached a state where we can get to Facebook's level and the magic they did with Mono-Repos there. At the end of the day, a giant Mono-Repo holding all the code is, in my opinion, a formidable engineering undertaking, and it has to be done very deliberately. And again - given the distributed, non-prescriptive way we work, where we don't want to tell people how to work, it's nearly impossible to imagine "all Soluto developers write to one Repository": things spring up, people want to change things, to experiment with something new - and once we allow that, we'll never get the control to tell people "all your Commits will now go only to Soluto-Code" [and even then - only if they're good]. And that's nice, because it keeps pushing us forward...
(Ran) Yes... I have to remark that for all my respect for developers' judgment - and often I also don't want to tell other developers what to do - sometimes there's a lot of wisdom in telling developers what to do. Often the decisions are arbitrary, and one decision is as good as another; the problem is when two decisions, both equally sound, are both taken - then you're in a mess... So on this I tend to be a bit stricter, and say "folks, true, there are two opinions here, but I'm choosing this one, 'just because' - and let's roll with it", because otherwise you simply get a mess - and I've seen it happen in big companies... I've seen it happen at Google, and I gather from what you say it happens at Facebook too... Arbitrary decisions get made; the developers may grumble, but they follow them - and in the end that does a lot of good. "My message to the nation" is: don't be afraid to make decisions on behalf of developers; the bottom line is I think it helps, because it's easier when things are uniform.
(Yaron) Very interesting - and I think what you're pointing at is also a function of growth. There's a stage where a company can afford to spread out and try more things, and a stage where you must converge and say "OK, the Business has reached a very high SLA it has to deliver; the company is big enough that we can't let 15 Frankensteins run in parallel, the monsters of..."
(Ran) Every monster on its own is fine... I'm not saying the decisions are wrong; the problem is there are other decisions, sometimes contradictory decisions, decisions that don't work well together. Or even if not contradictory - building an Infrastructure that fits... take programming languages: an Infrastructure that fits Python and Ruby and Java is possible, I'm sure it's possible - it's just harder. Most companies do settle on programming languages, that's not the problem - but there are still many other choices: how to do Messaging, how to store, which Database to use, and so on.
(Yonatan) I think I agree - for me, the gauge for when such a decision, such an "enlightened dictatorship", is needed is when you have to "work horizontally". You mentioned infrastructure - the moment infrastructure becomes "a thing", it's hard to support all the different variants... [yes...]
(Ran) Let's get back to GitOps... back to basics: we understood the concept of "Master and Production must be equal". Now suppose I start from scratch and want to do GitOps, OK? What do I do? I create a Git-Hook, and I essentially need to take care of two things. One - code that can describe the Production environment: how many Servers, how many Services, what their multiplicity is, and so on... say, a YAML file that describes the Production environment, and there are surely languages aplenty that can do that. So I need code that describes the Production environment. Two - I need some Git-Hook, say, so that every time someone commits, Production is updated according to the latest change. That's GitOps from 30,000 feet - and you mentioned a few tools that do this: Flux from WeaveWorks and Argo from Intuit - and I assume there are others around. So if a person wakes up in the morning and says "let's go - I feel like GitOps!" [quite serious] or "I must have GitOps!" [a new side effect?] or "my managers tell me I need GitOps..." [the more common case?] - how do you recommend they start?
(Yaron) I'll mention Kelsey Hightower, a fellow from Google who likes talking about Kubernetes, and I really like the way he makes complex knowledge accessible. Years ago he wrote Kubernetes-the-hard-way [already linked], a guide to bringing Kubernetes up from scratch, doing by hand all the operations that are normally done for us - and working through it gave me a much better acquaintance with this infrastructure, which today wakes me up at night when it has a problem... And he gives an amazing talk - there are already conferences called GitOps Days, that's how trendy this is - he gave a really interesting talk at last year's event, where he shows how to build a Reconciliation Loop from scratch... they literally show code he writes in Go live on screen, during the conference - and in his case the thing creates Cloud Run functions, Google's sort-of Serverless. I think that talk is an amazing opener for demystifying something that, as you said, can sound complex and even scary, because it shows the thing can be very much under control. After clearing that hurdle of understanding the concept, I would recommend picking one of the big tools - Argo or Flux, because they currently dominate the market. But it's also very important to understand your problem - Argo and Flux are a great fit for maintaining Kubernetes, so if you want to maintain something outside Kubernetes, pick a tool that can do it without that power. As I said - Puppet and Chef can do it for Linux machines, and there's a tool called Atlantis that does it for Terraform; each of these tools can be an Entry-point to GitOps, whatever challenge is in front of you.
(Ran) It's worth finding the parallels between the different tools - for those who know Puppet and Chef: they all work in some mode of a Reconciliation Loop. That is - look at the desired state and Apply, and do Reconciliation each cycle. If something you defined - say, a file that should exist on a machine - isn't there, it will be recreated each time it's deleted. Both Puppet and Chef work that way - and Kubernetes often works the same way: Kubernetes also has a Reconciliation Loop that watches the Resources and re-applies a Resource whenever needed. So the motif of the Reconciliation Loop runs through many of the tools you mentioned.
(Yonatan) From what you described, there's also the ability to update from the other side - maybe in Production not only is a file missing, but someone changed it, or someone went into the Kubernetes management UI and changed the state - what's the central Flow... what happens then?
(Yaron) That, in my eyes, is the real Added-Value, maybe even the Killer-Feature of GitOps, because those are the surprises that would always catch us at the least prepared moment, and I believe that has happened to many Production teams [what?! no way!] - some change made during an emergency, or perhaps offhand out of a misunderstanding, with no Notification of it whatsoever; and then you usually discover it months, if not years, later, when the knowledge is already forgotten... There's that saying - code you wrote half a year ago is like code written by someone else [it even has a name - Eagleson's law]... it holds for Production changes too - all the more so since they're far less documented... With GitOps, the way we work, Argo is connected to Slack - and every time someone deploys there's a cute message saying "your code successfully changed Production". If, say, my code relies on some external Redis, and that Redis suddenly went down, I'll get a "broken, sad heart" telling me "run, quick! Something changed, it no longer looks like Production, I'm broken and struggling"...
(Yonatan) OK...
(Ran) What's your favorite hack? The kind of thing where you woke up at two or three in the morning and discovered "who the hell did that?!"... I'll start with mine - say you go in and discover that someone, half a year ago, as you say, edited the hosts file and added an Entry, because apparently it once fixed something for them... Now you discover that Name Resolution outside the host works completely differently from Name Resolution inside the host - and that usually explains a lot...
(Yonatan) I suddenly started wondering whether this can help with Security problems? Or with someone who changed something indirectly in Production without Git-Merge and all that ceremony...
(Yaron) Definitely... I'll just say, about the hacks - it will always be DNS, no doubt... My favorite hack is when working in High Availability, you send one version of Production to one Site and another version to another Site - and now go figure out why half the Traffic returns one answer and half returns something else...
(Ran) There's the famous story about the trading firm...
(Yaron) Yes, Knight - poor souls...
(Ran) ...they rolled out a new version, but apparently one or two servers were left behind, maybe Offline during the rollout - and it bankrupted the company, a company worth roughly a billion dollars, which lost all its money in Algo-Trading because of a Deployment that didn't go up correctly...
(Yaron) Yes, it's a horrifying story to read; your knuckles go white because you think it might be happening to you right now...
(Ran) Right now... but right this very second...
(Yonatan) ...let me just check the Inspection for a moment...
(Yaron) So regarding Security - there's a huge advantage here. In traditional Continuous Delivery, I must give my infrastructure the ability to access Production; if it's a Jenkins sitting on your own server, you probably don't mind giving it write access to Production. We work with SaaS, with Codefresh - an Israeli company doing really nice CI for Docker and Kubernetes things - and still, with all the love and goodwill, we prefer that they not be able to access Production... Once we make that separation, they can reach only as far as the code - and the only component that can access Production is the GitOps Reconciler, in our case an Argo server sitting on the Cluster. The Attack Surface is then much smaller - it sits there anyway and makes the changes anyway, so it's natural for the keys to live with it. I'll also give a Shout-out to an Open-Source project we wrote at Soluto called kamus - it too is based on GitOps. The idea came from showing that Secrets in Kubernetes are not yet mature - a Secret in Kubernetes is just an object encoded in Base64, which means anyone who opens the Kubernetes UI and clicks the eye button simply sees the Secret, sees the Plain-text, and we weren't comfortable with that... So we wrote a Controller that can be installed on any Cluster, and what it does is let developers encrypt the values locally on their machine and commit them to Git; then each Container gets an added Init-Container that performs the decryption on the Cluster. This also strengthens Security considerably, because decryption can now happen only in the Production environment. It's similar to, say, Vault, but comes with much lower Operational overhead.
(Ran) You mentioned earlier - and maybe we'll close the evening with this - that with the move to Kubernetes, developers felt a rise in complexity, that they suddenly needed to understand Production more, and then you built a user interface - or at least that's how I described it - a developer interface, by means of GitOps. The observation I wanted to offer is that Kubernetes enables GitOps, right? Maybe it's not the only tool that enables GitOps, but certainly one of the tools that do, because it lets you describe the Production environment and apply it relatively easily. So Kubernetes enables GitOps; on the other hand, before Kubernetes maybe you didn't need GitOps, because things were simpler... I just wonder whether that observation seems justified, correct, to you?
(Yaron) I think everyone knows the Netflix "Death Star" diagram, showing a billion microservices talking to each other; and they wrote Spinnaker, a tool that very much accommodates the complexity that can arise in Continuous Delivery. With all that tool's beauty and elegance, I think it serves something that, if you don't need it, is nice to be rid of. And again I'll quote Kelsey Hightower, who says the magic and novelty of Kubernetes is that the infrastructure is described as data, not as configuration - it's not a series of imperative steps needed so a Linux machine can serve Ruby code - it's always data - always YAML files stored in a Database, and they are what bring Production up.
(Ran) Yes - and that's maybe one of the things that let you do GitOps relatively simply, because all you need to do is commit a YAML file - and Apply...
(Yaron) Definitely.
(Ran) So thank you very much, Yaron, this was absolutely fascinating. Anything you'd like to add before we finish?
(Yaron) I'd love to say that we're hiring - both for my team and for a variety of roles at Soluto. If what you heard here sounds interesting, whether the company's vision or the geekier bits - come interview, get hired...
(Ran) Excellent...
(Yonatan) Oh, can I send a dedication, Ran?
(Ran) Go ahead... with which song? Hold on, let me get the record collection ready...
(Yonatan) To listener Uri Lahav, sailing his dinghy along the Mediterranean coast...
(Ran) Uri - we miss you, come home! OK - many thanks to you both, and we'll talk... goodbye. And good luck to Reversim Summit 2021...
The file is here; happy listening, and many thanks to Ofer Porer for the transcription.
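To make the reconciliation-loop motif from this conversation concrete, here is a minimal sketch in Python. The desired state is read from a Git-tracked YAML file (PyYAML assumed); read_live_state and apply_changes are hypothetical stand-ins for calls to a real platform API such as Kubernetes, not how Argo or Flux are actually implemented.

```python
# A minimal sketch of a reconciliation loop: compare desired state (from a
# Git-tracked file) with live state, and apply any drift that is found.
import time

import yaml  # PyYAML, assumed available


def read_desired_state(path="production.yaml"):
    # In a real GitOps setup this file would come from a fresh Git checkout.
    with open(path) as f:
        return yaml.safe_load(f)


def read_live_state():
    # Hypothetical: query the platform for what is actually running.
    return {"replicas": 2, "image": "app:v1"}


def apply_changes(diff):
    # Hypothetical: push the missing changes to the platform API.
    print(f"applying: {diff}")


def reconcile_forever(interval_seconds=30):
    while True:
        desired = read_desired_state()
        live = read_live_state()
        # Any key whose live value differs from the desired one is drift,
        # whether it came from a new commit or from a manual change.
        diff = {k: v for k, v in desired.items() if live.get(k) != v}
        if diff:
            apply_changes(diff)
        time.sleep(interval_seconds)


if __name__ == "__main__":
    reconcile_forever()
```

The same loop structure covers both directions Yaron describes: a new commit changes the desired state, while a manual change in Production changes the live state; either way the next pass detects and repairs the difference.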

The Cloud Pod
118: The Cloud Pod talks LaMDA, which one?

The Cloud Pod

Play Episode Listen Later May 31, 2021 55:57


This week on The Cloud Pod, the team discusses the fine art of writing the podcast show notes so there are bullet points for when Peter shows up without doing the homework.
A big thanks to this week's sponsors: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure.
This week's highlights
Amazon is catering to the unimaginative with its version of a vanilla milkshake.
Google now performs commitment ceremonies, but they come at a cost.
Azure did an online pastry course and can now make croissants.
General News: La France Méconnaît Amazon (France Is Ignoring Amazon)
VMware picks longtime executive Raghuram as its new CEO. So many people were overlooked for this position.
France says Google and Microsoft cloud services are OK for storing sensitive data. Bit of a snub for Amazon.
Amazon Web Services: Busy Little Bees
AWS SaaS Boost released as open source. Sounds more like a product than it actually is.
AWS announces general availability of AWS Application Migration Service. If the play is to lift and shift, with no thought of transformation at all, this is for you.
AWS CloudFormation Guard 2.0 is now generally available. It's great that this supports more than just CloudFormation.
AWS Premium Support launches Support Automation Workflows (SAW). This will make the exchange of data so much easier.
Amazon Elasticsearch Service announces a new lower-cost storage tier. This is great news for everybody.
Amazon announces the release of EKS 1.20 - the raddest release ever.
AWS launches another way to run containers with App Runner. Just in case you don't want to use one of the other billion container services.
Google Cloud Platform: Here To Confuse You
Google will bring Starlink satellite connectivity to enterprises in late 2021. Cool!
Google is offering new committed use discounts for Cloud Run. Commit yourself to Google.
Google is announcing several new ways to ensure your Cloud Run environment is secure. Thumbs up!
Enhance DDoS protection and get predictable pricing with Google's new Cloud Armor service. Making this a per-customer thing doesn't make a huge amount of sense.
Google launches new managed machine learning platform Vertex AI. A very powerful release.
Google releases LaMDA to improve conversation technology for chatbots. Not to be confused with AWS Lambda...
Google launches the next generation of its custom AI chips. But it's not ready yet.
Azure: Viva La France!
Microsoft to upgrade cloud products' data controls in the European Union. Lots of interesting use cases for this one.
Microsoft is shutting down its Azure Blockchain service on September 10, 2021. We'll help you - it will only cost you a suitcase full of cash and a helicopter.
Azure Static Web Apps is now generally available. Static is great until you want to have any type of interaction.
TCP Lightning Round
Justin exacts his revenge on Ryan for stealing his jokes last week and takes this week's point, leaving scores at Justin (8), Ryan (4), Jonathan (7).
Other headlines mentioned:
General availability: Azure Key Vault SLA raised to 99.99%
Amazon EMR 6.3 now supports Apache Ranger for fine-grained data access control
AWS WAF adds support for log filtering
Amazon Connect adds near real-time insights into voice call, chat, and task activity
Now use AWS Systems Manager Change Calendar to prevent desired-state updates during critical events
AWS License Manager now provides historical license usage reporting
Amazon Macie supports criteria-based bucket selection for sensitive data discovery jobs
Google's Translation API Advanced can translate business documents across 100+ languages
Amazon Transcribe improves live subtitling with partial results stabilization
Google now allows you to automate your budgeting with the Billing Budgets API
General availability: Announcing lower pricing for provisioned throughput on Azure Ultra Disks
Ground processing with space data 5x faster with Azure
Things Coming Up
Announcing Google Cloud 2021 Summits [frequently updated]
Google Financial Services Summit - May 27th
Harness Unscripted Conference - June 16-17
Google Cloud Next - not announced yet (one site says Moscone is reserved June 28-30)
Amazon re:Inforce - August 24-25 - Houston, TX
Google Cloud Next 2021 - October 12-14, 2021
AWS re:Invent - November 29-December 3 - Las Vegas
Oracle Open World (no details yet)

Google Cloud Reader
3 Ways of receiving Events in Cloud Run

Google Cloud Reader

Play Episode Listen Later May 27, 2021 5:32


Learn 3 different ways to read messages directly from Pub/Sub in Cloud Run, and how to decide which method to use.
Original blog post
Using Pub/Sub with Cloud Run tutorial
Documentation: Option 1 - Audit Logs via Eventarc; Option 2 - Pub/Sub via Eventarc; Option 3 - Pub/Sub direct
Pub/Sub based build notifications
Audit Logs of Cloud Build
More Google Cloud Reader here
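For flavor, here is a minimal sketch of the "Pub/Sub direct" option in Python, assuming Flask and a push subscription pointed at the Cloud Run service. The envelope fields follow Pub/Sub's documented push format, but the handler itself is illustrative, not the blog post's own code.

```python
# A sketch of a Cloud Run endpoint receiving Pub/Sub push messages directly.
# Pub/Sub POSTs a JSON envelope whose "message.data" field is base64-encoded.
import base64
import os

from flask import Flask, request

app = Flask(__name__)


@app.route("/", methods=["POST"])
def receive():
    envelope = request.get_json(silent=True)
    if not envelope or "message" not in envelope:
        return "Bad Request: no Pub/Sub message received", 400
    message = envelope["message"]
    # Decode the base64 payload placed inside the push envelope.
    data = base64.b64decode(message.get("data", "")).decode("utf-8")
    print(f"Received message {message.get('messageId')}: {data}")
    # A 2xx response acknowledges the message; anything else triggers a retry.
    return ("", 204)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```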

Google Cloud Platform Podcast
Full Stack Dart with Tony Pujals and Kevin Moore

Google Cloud Platform Podcast

Play Episode Listen Later May 26, 2021 36:18


On the podcast this week, we're diving into what full stack development looks like on Google Cloud. Guests Tony Pujals and Kevin Moore join your hosts Stephanie Wong and Grant Timmerman to help us understand how developers can leverage Dart and Google Cloud to create powerful and effective front end and back end systems for their projects. Kevin takes us through the evolution of Dart and Flutter and how they have become an experience-first solution: developers can focus on the experience they want to create, then decide which platforms to run on. With Dart, Google provides business logic that allows developers to build the front end and back end experience for users in one programming language. Our guests talk about the types of projects that will benefit most from the use of Dart and how Dart is expanding to offer more features and better usability. Flutter offers a high-fidelity, rich framework that supports mobile and can be deployed on any platform. When paired with Dart on Docker Hub, developers can easily build optimized front end and back end systems. Kevin and Grant tell us about the new Functions Framework for Dart and how it helps developers handle deploying to serverless technologies. We hear more about how Dart, Flutter, and Cloud Run working together can make any project easy to build, deploy, and use.
Tony Pujals: Tony is a career engineer who's now on the serverless developer relations team, focused on helping full stack developers succeed in building their app backends.
Kevin Moore: Kevin is the Product Manager of Dart & Flutter at Google.
Cool things of the week
What is Vertex AI? Developer advocates share more (blog)
Google Cloud launches from Google I/O 2021 (blog)
Secure and reliable development with Go | Q&A (video)
Google CloudEvents - Go (site)
Interview
Flutter (site), Dart (site), Go (site), Datastore (site), Dart on Docker (site), Functions Framework for Dart (GitHub), Cloud Run (site), Dart Documentation (docs), Google APIs with Dart (docs), App Engine (site), Dart Functions Qwiklab (site), Flutter Startup Namer Qwiklab (site), Cloud, Dart, and full-stack Flutter | Q&A (video), Go full-stack with Kotlin or Dart on Google Cloud | Session (video)
What's something cool you're working on?
Grant has been working on libraries for CloudEvents.

Google Cloud Reader
Eventarc: A unified eventing experience in Google Cloud

Google Cloud Reader

Play Episode Listen Later May 11, 2021 5:22


Learn how Eventarc provides a unified eventing experience in Google Cloud. Original blog post. More Google Cloud Reader here
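As a rough illustration of what "unified eventing" means for a receiving service: Eventarc delivers events to Cloud Run as HTTP requests in the CloudEvents format, with event attributes carried in ce-* headers. A minimal Python sketch follows, assuming Flask; the handler and its logging are illustrative, not taken from the blog post.

```python
# A sketch of a Cloud Run handler for Eventarc deliveries. Eventarc uses the
# CloudEvents HTTP binary mode: metadata arrives in ce-* headers and the
# event payload in the request body.
import os

from flask import Flask, request

app = Flask(__name__)


@app.route("/", methods=["POST"])
def handle_event():
    # Standard CloudEvents attributes carried as HTTP headers.
    event_type = request.headers.get("ce-type")
    source = request.headers.get("ce-source")
    print(f"Received {event_type} from {source}")
    print(f"Payload: {request.get_data(as_text=True)}")
    return ("", 204)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```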

Google Cloud Platform Podcast
The Power of Serverless with Aparna Sinha and Philip Beevers

Google Cloud Platform Podcast

Play Episode Listen Later May 5, 2021 42:00


On the show this week, Mark Mirchandani joins Stephanie Wong to talk about serverless computing and the Cloud OnAir Serverless event with our guests. Aparna Sinha and Philip Beevers start the show giving us a thorough definition of serverless infrastructures and how this setup can help clients run efficient and cost-effective projects with easy scalability and observability. Serverless has grown exponentially over the last decade, and Aparna talks about how that trajectory will continue in the future. At its core, the serverless structure allows large enterprise companies to do what they need to do, from analyzing real-time information to ensuring dinner is delivered piping hot. Aparna describes the three aspects of next-generation serverless - developer centricity, versatility, and built-in best practices - and how Google is using these to empower developers and company employees to create robust projects efficiently and economically. Phil tells us about the experience of using serverless products and the success of the three pillars in Google serverless offerings. Enterprise customers like MediaMarktSaturn and Ikea are taking advantage of the serverless system for e-commerce, data processing, machine learning, and more. Our guests describe client experiences and how customer feedback is used to help improve Google serverless tools. With so many serverless tools available, our guests offer advice on choosing the right products for your project. We also hear all about the upcoming Cloud OnAir event and what participants can expect, from product announcements and live demos to thorough reviews of recently added serverless features.
Aparna Sinha: Aparna Sinha is Director of Product at Google Cloud and the product leader for Serverless Application Development and DevOps. She is passionate about transforming businesses through faster, safer software delivery. Previously, Aparna helped grow Kubernetes into a widely adopted platform across industries. Aparna holds a PhD in Electrical Engineering from Stanford. She is Chair of the Governing Board of the Cloud Native Computing Foundation (CNCF). She lives in Palo Alto with her husband and two kids.
Philip Beevers: Phil has been at Google for seven years. He currently leads the Serverless Engineering teams and previously ran the Site Reliability Engineering team for Google Cloud and Google's internal Technical Infrastructure. Phil holds a BA in Mathematics from Oxford University.
Cool things of the week
The evolution of Kubernetes networking with the GKE Gateway controller (blog)
Network Performance for all of Google Cloud in Performance Dashboard (site)
Go from Database to Dashboard with BigQuery and Looker (blog)
Introducing Open Saves: Open-source cloud-native storage for games (blog)
Interview
Cloud Run (site), Cloud Functions (site), Serverless Computing (site), The power of Serverless: Get more done easily (site), App Engine (site), Building Serverless Applications with Google Cloud Run (book), MediaMarktSaturn (site), Ikea (site), Airbus (site), Veolia (site)
Sound Effects Attribution
"Fanfare1" by N2P5 of Freesound.org
"Banjo Opener" by Simanays of Freesound.org

Google Cloud Platform Podcast
GKE Autopilot with Yochay Kiriaty and William Denniss

Google Cloud Platform Podcast

Play Episode Listen Later Apr 28, 2021 37:31


Kaslin Fields joins Stephanie Wong hosting the podcast this week as we talk all about GKE Autopilot with our guests Yochay Kiriaty and William Denniss. GKE Autopilot manages tasks like the quantity and size of nodes, so deploying workloads is faster and machines are used efficiently. Autopilot also offers cluster management options, including monitoring the health of nodes and other components. William and Yochay explain that GKE Autopilot was built to aid companies in the efficient use of resources and give clients more time to focus on their projects. Important efficiency features that are optional in GKE, like multidimensional pod autoscaling, are employed automatically for clients in Autopilot, giving clients peace of mind. Kubernetes best practices are auto-deployed for projects, so clients can rest assured things will run as quickly and smoothly as possible without extra work. Kubernetes is a great way to manage containers, and our guests describe cases where this tool is best suited. We compare GKE standard mode and Autopilot, and Yochay tells us when developers might choose standard mode to allow for more specific customization. He talks about migrating between standard and Autopilot clusters, with the goal of easy migration by the end of this year. Security is important for GKE, and we talk about the Autopilot security configurations and why they were chosen. Later, our guests walk us through the process of a Kubernetes project on Autopilot, highlighting decisions this tool makes automatically for you and why. Though Autopilot sounds very much like a serverless offering, William explains the differences between tools like Cloud Run and GKE Autopilot. We also hear about the future of Autopilot, including some exciting new features coming soon.
Yochay Kiriaty: Yochay is a Product Manager for GKE responsible for security.
William Denniss: William is a Product Manager for GKE Autopilot. He's currently writing a book called Kubernetes Quickly.
Cool things of the week
Google Cloud Region Picker (site)
Faster, cheaper, greener? Pick the Google Cloud region that's right for you (blog)
5 resources to help you get started with SRE (blog)
Interview
Kubernetes (site), GKE (site), Autopilot Overview (docs), GCP Podcast Episode 252: GKE Cost Optimization with Kaslin Fields and Anthony Bushong (podcast), Multidimensional Pod Autoscaling (docs), Docker (site), Cloud Run (site), Introducing GKE Autopilot: a revolution in managed Kubernetes (blog), Creating an Autopilot cluster (docs)
What's something cool you're working on?
Kaslin has been working on KubeCon EU as a volunteer and will be presenting there as well.

Voces de la Nube
#3 - Anthos for Achieving Agility and Reducing IT Costs

Voces de la Nube

Play Episode Listen Later Apr 23, 2021 22:40


As we mentioned in the first episode, uncertainty is a given in any planning; what we are really trying to do is minimize risk through planning. That's why resilience and agility are great allies, as long as they help the company adapt to different scenarios. Solutions that offer an extra layer of security and flexibility are perfect for these environments, especially when they don't require infrastructure changes. But we're not just talking about migrating everything to the public cloud, not even when hybrid solutions exist to modernize systems without sacrificing control. When we talk about Anthos, we're talking about modernizing applications on any cloud, including multi-cloud and hybrid strategies. This third episode of Voces de la Nube puts the service at the center of the debate on cost optimization. Rodrigo Perez and Carlos Rojas, both Customer Engineers at Google Cloud, discuss using Anthos in a strategy for increasing performance and reducing IT costs. Voces de la Nube is the official Google Cloud podcast for Latin America. Every two weeks, we cover digital transformation and the journey to the cloud with executives and specialists, plus special guests. Links from this episode:
Learn more about Anthos: https://bit.ly/3lFhQI0
Try Kubernetes, our open-source tool: https://bit.ly/39LXPLb
Read our Google Cloud Adoption Framework report to learn how to identify the maturity level of the teams supporting your applications: https://bit.ly/31Mi5bl
Watch the video about our security layers: https://bit.ly/3tR9Uqd
Learn more about the relationship between DevOps and SRE: https://bit.ly/3cRriWb
Learn more about Cloud Run: https://bit.ly/3rNo1LL
Learn more about the features of Anthos Service Mesh: https://bit.ly/3mnZYSd
Learn more about Cloud Code: https://bit.ly/3sWpUa4
Read the Forrester report on Anthos, which explains its major economic benefits: https://bit.ly/321KMRL
Learn more about building a hybrid processing farm: https://bit.ly/3sRpty0
Did you enjoy this episode? Have a suggestion? Email us at vocesdelanube@google.com

Google Cloud Reader
3 ways to optimize Cloud Run response times

Google Cloud Reader

Play Episode Listen Later Apr 13, 2021 6:56


Learn how to optimize your app for fast startup using Cloud Run. Original blog post. Season of Scale on YouTube. More Google Cloud Reader here
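One common startup optimization in this vein is keeping expensive initialization out of the container's boot path so cold starts stay fast and warm instances reuse the work. The Python sketch below assumes Flask; the lazily built client is a hypothetical stand-in for any slow dependency, and whether this matches the post's exact three tips is an assumption.

```python
# A sketch of lazy global initialization for a Cloud Run service: defer
# building an expensive client until the first request that needs it, and
# cache it in global scope so warm instances reuse it.
import os

from flask import Flask

app = Flask(__name__)

_client = None  # cached across requests within the same instance


def get_client():
    global _client
    if _client is None:
        # Deferred until first use, instead of paying the cost at startup.
        _client = object()  # stand-in for e.g. a database or API client
        print("initialized expensive client")
    return _client


@app.route("/")
def handler():
    client = get_client()
    return f"using {client!r}"


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```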

Serverless Chats
Episode #96: Serverless and Machine Learning with Alexandra Abbas

Serverless Chats

Play Episode Listen Later Apr 12, 2021 44:19


About Alexa Abbas
Alexandra Abbas is a Google Cloud Certified Data Engineer & Architect and an Apache Airflow contributor. She currently works as a Machine Learning Engineer at Wise. She has experience with large-scale data science and engineering projects. She spends her time building data pipelines using Apache Airflow and Apache Beam and creating production-ready machine learning pipelines with TensorFlow.
Alexandra was a speaker at Serverless Days London 2019 and presented at the TensorFlow London meetup.
Personal links:
Twitter: https://twitter.com/alexandraabbas
LinkedIn: https://www.linkedin.com/in/alexandraabbas
GitHub: https://github.com/alexandraabbas
datastack.tv's links:
Web: https://datastack.tv
Twitter: https://twitter.com/datastacktv
YouTube: https://www.youtube.com/c/datastacktv
LinkedIn: https://www.linkedin.com/company/datastacktv
GitHub: https://github.com/datastacktv
Link to the Data Engineer Roadmap: https://github.com/datastacktv/data-engineer-roadmap
This episode is sponsored by CBT Nuggets: cbtnuggets.com/serverless and Stackery: https://www.stackery.io/
Watch this video on YouTube: https://youtu.be/SLJZPwfRLb8
Transcript
Jeremy: Hi, everyone. I'm Jeremy Daly, and this is Serverless Chats. Today I'm joined by Alexa Abbas. Hey, Alexa, thanks for joining me.
Alexa: Hey, everyone. Thanks for having me.
Jeremy: So you are a machine learning engineer at Wise and also the founder of datastack.tv. So I'd love it if you could tell the listeners a little bit about your background and what you do at Wise and what datastack.tv is all about.
Alexa: Yeah. So as you said, I'm a machine learning engineer at Wise. Wise is an international money transfer service; we are aiming for very transparent fees and very low fees compared to banks. At Wise, I'm basically designing, maintaining, and developing the machine learning platform, which serves data scientists and analysts, so they can train their models and deploy their models easily.
Datastack.tv is basically a video service or a video platform for data engineers. We create bite-sized educational videos for data engineers. We mostly cover open source topics, because we noticed that some of the open source tools in the data engineering world are quite underserved in terms of educational content. So we create videos about those.
Jeremy: Awesome. And then, what about your background?
Alexa: So I actually worked as a data engineer and machine learning engineer, so I've always been a data engineer or machine learning engineer in terms of roles. I also worked, for a small amount of time, as a data scientist as well. In terms of education, I did a big data engineering Master's, but actually my Bachelor's is in economics, so quite a mix.
Jeremy: Well, it's always good to have a ton of experience and that diverse perspective. Well, listen, I'm super excited to have you here, because machine learning is one of those things where it probably is more of a buzzword, I think, to a lot of people, where every startup puts it in their pitch deck, like, "Oh, we're doing machine learning and artificial intelligence..." stuff like that.
But I think it's important to understand, one, what exactly it is, because I think there's a huge confusion there in terms of what we think of as machine learning, and maybe we think it's more advanced than it is sometimes, as I think there's lower versions of machine learning that can be very helpful. And obviously, this being a serverless podcast, I've heard you speak a number of times about the work that you've done with machine learning and some experiments you've done with serverless there. So I'd love to just pick your brain about that and just see if we can educate the users here on what exactly machine learning is, how people are using it, and where it fits in with serverless and some of the use cases and things like that. So first of all, I think one of the important things to start with anyways is this idea of MLOps. So can you explain what MLOps is? Alexa: Yeah, sure. So really short, MLOps is DevOps for machine learning. So I guess in traditional software engineering projects, you have a streamlined process and you can release really often, really quickly, because you already have all these best practices that all these traditional software engineering projects implement. Machine learning is still in a quite early stage, and MLOps is in a quite early stage. But what we try to do in MLOps is we try to streamline machine learning projects, as well as traditional software engineering projects are streamlined. So data scientists can train models really easily, and they can release models really frequently and really easily into production. So MLOps is all about streamlining the whole data science workflow, basically. And I guess it's good to understand what the data science workflow is, so I'll talk a bit about that as well. So before actually starting any machine learning project, the first phase is an experimentation phase. It's a really iterative process where data scientists are looking at the data, they are trying to find features, and they are also training many different models; they are doing architecture search, trying different architectures, trying different hyperparameter settings with those models. So it's a really iterative process of trying many models, many features. And then by the end, they probably find a model that they like and that hits the benchmark that they were looking for, and then they are ready to release that model into production. And this usually looks like ... so sometimes they use shadow models, in the beginning, to check if the results are as expected in production as well, and then they actually release into production. So basically MLOps tries to create the infrastructure and the processes that streamline this whole process, the whole life cycle. Jeremy: Right. So the question I have is, so if you're an ML engineer or you're working on these models and you're going through these iterations and stuff, so now you have this, you're ready to release it to production, so why do you need something like an MLOps pipeline? Why can't you just move that into production? Where's the barrier? Alexa: Well, I guess ... I mean, to be honest, the thing is there shouldn't be a barrier. Right now, that's the whole goal of MLOps. They shouldn't feel that they need to do any manual model artifact copying or anything like that. They just, I don't know, press a button and they can release to production. So that's what MLOps is about really, and we can version models, we can version the data, things like that. And we can create reproducible experiments.
So I guess right now, I think many bits in this whole lifecycle are really manual, and that could be automated. For example, releasing to production, sometimes it's a manual thing. You just copy a model artifact to a production bucket or whatever. So sometimes we would like to automate all these things. Jeremy: Which makes a lot of sense. So then, in terms of actually implementing this stuff, because we hear all the time about CI/CD. If we're talking about DevOps, we know that there's all these tools that are being built and services that are being launched that allow us to quickly move code through some process and get into production. So are there similar tools for deploying models and things like that? Alexa: Well, I think this space is quite crowded. It's getting more and more crowded. I think there are many ... So there are the cloud providers, who are trying to create tools that help these processes, and there are also many third-party platforms that are trying to create the ML platform that everybody uses. So I think there is no go-to thing that everybody uses; I think there are many tools that we can use. Some examples: TensorFlow is a really popular machine learning library, and on top of TensorFlow they created a package, which is called TFX, TensorFlow Extended, which is exactly for streamlining this process and serving models easily. So I would say TFX is a really good example. There is Kubeflow, which is a machine learning toolkit for Kubernetes. I think there are many custom implementations in-house in many companies; they create their own machine learning platforms, their own model serving APIs, things like that. And for the cloud providers, on AWS we have SageMaker. They are trying to cover many parts of the data science lifecycle. And on Google Cloud, we have AI Platform, which is really similar to SageMaker. Jeremy: Right. And what are you doing at Wise? Are you using one of those tools? Are you building something custom? Alexa: Yeah, it's a mix actually. We have some custom bits. We have a custom API, a serving API, for serving models. But for model training, we are using many things. We are using SageMaker Notebooks. And we are also experimenting with SageMaker endpoints, which are actually serverless model serving endpoints. And we are also using EMR for model training and data preparation, so some Spark-based things, a bit more traditional type of model training. So it's quite a mix. Jeremy: Right. Right. So I am not well-versed in machine learning. I know just enough to be dangerous. And so I think that what would be really interesting, at least for me, and hopefully be interesting to listeners as well, is just talk about some of these standard tools. So you mentioned things like TensorFlow and then Kubeflow, which I guess is that end-to-end piece of it, but if you're ... Just how do you start? How do you go from, I guess, building and training a model to then productizing it and getting that out? What's that whole workflow look like? Alexa: So, actually, the data science workflow I mentioned, the first bit is that experimentation, which is really iterative, really free, so you just try to find a good model. And then, when you found a good model architecture and you know that you are going to receive new data, let's say, I don't know, daily or weekly, then you need to build out a retraining pipeline.
And that is, I think, what the productionization of a model really means: that you can build a retraining pipeline, which can automatically pick up new data and then prepare that new data, retrain the model on that data, and release that model into production automatically. So I think that means productionization really. Jeremy: Right. Yeah. And so by being able to build and train a model and then having that process where you're getting that feedback back in, is that something where you're just taking that data and assuming that that is right and fits in the model, or is there an ongoing testing process? Is there supervised learning? I know that's a buzzword. I'm not even sure what it means. But those ... I mean, what types of things go into that retraining of the models? Is it something that is just automatic, or is it something where you need constant, babysitting's probably the wrong word, but somebody to be monitoring that on a regular basis? Alexa: So monitoring is definitely necessary. Especially, I think, when you've trained your model, you shouldn't release automatically into production just because you've trained on new data. I mentioned this shadow model thing a bit. Usually, after you retrained the model in this retraining pipeline, then you release that model into shadow mode; and then you will serve that model in parallel to your actual production model, and then you will check the results from your new model against your production model. And that's a manual thing, you need to ... or maybe you can automate it as well, actually. So if it performs like ... If it is comparable with your production model or if it's even better, then you will replace it. And also, in terms of the data quality in the beginning, you should definitely monitor that. And I think that's quite custom, it really depends on what kind of data you work with. So it's really important to test your data. I mean, there are many ... This space is also quite crowded. There are many tools that you can use to monitor the distribution of your data and see that the new data actually corresponds to your already existing data set. So there are many bits that you can monitor in this whole retraining pipeline, and you should monitor. Jeremy: Right. Yeah. And so, I think of some machine learning use cases like sentiment analysis, for example... looking at tweets or looking at customer service conversations and trying to rate those things. So when you say monitoring or running them against a shadow model, is that something where ... I mean, how do you gauge what's better, right? If you've got a shadow... I mean, what's the success metric there as to say X number were classified as positive versus negative sentiment? Is that something that requires human review or some sampling for you to kind of figure out the quality of the success of those models? Alexa: Yeah. So actually, I think that really depends on the use case. For example, when you are trying to catch fraudsters, your false positive rate and true positive rate, these are really important. If your true positive rate is higher, that means, oh, you are catching more fraudsters. But let's say with your new model the false positive rate is also higher, which means that you are catching more people who are actually not fraudsters, and you have more work, because I guess that's a manual process to actually check those people. So I think it really depends on the use case. Jeremy: Right. Right.
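As a concrete illustration of the retraining pipeline described here, a minimal sketch as an Apache Airflow DAG is below. The three task callables are hypothetical placeholders, not code from Wise or the episode:

```python
# Sketch of a scheduled retraining pipeline as an Airflow DAG:
# prepare new data, retrain, then evaluate and (maybe) release.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def prepare_data():
    ...  # pick up and clean the newly arrived data


def train_model():
    ...  # retrain the model on the prepared data


def evaluate_and_release():
    ...  # compare against the current (shadow) model, release if better


with DAG(
    dag_id="model_retraining",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@weekly",  # assumes new data lands weekly
    catchup=False,
) as dag:
    prepare = PythonOperator(task_id="prepare_data", python_callable=prepare_data)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    release = PythonOperator(task_id="evaluate_and_release", python_callable=evaluate_and_release)

    prepare >> train >> release  # linear dependency chain
```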
And you also said that the market's a little bit flooded and, I mean, I know of SageMaker and then, of course, there's all these tools like, what's it called, Rekognition, a bunch of things at AWS, and then Google has a whole bunch of the Vision API and some of these things, and Watson's Natural Language Processing over at IBM and some of these things. So there's all these different tools that are just available via an API, which is super simple and great for people like me that don't want to get into building TensorFlow models and things like that. So is there an advantage to building your own models beyond those things, or are we getting to a point where with things like ... I mean, again, I know SageMaker has a whole library of models that are already built for you and things like that. So are we getting to a point where some of these models are just good enough off the shelf, or do we really still need ... And I know there are probably some custom things. But do we still really need to be building our own models around that stuff? Alexa: So to be honest, I think most of the data scientists, they are using off-the-shelf models, maybe not the serverless API type of models that Google has, but just off-the-shelf TensorFlow models or SageMaker; they have these built-in containers for some really popular model architectures like XGBoost, and I think most of the people don't tweak these, I mean, as far as I know. I think they just use them out of the box, and they really try to tweak the data instead, the data that they have, and try to feed these off-the-shelf models with higher and higher quality data. Jeremy: So shape the data to fit the model as opposed to the model to fit the data. Alexa: Yeah, exactly. Yeah. So you don't actually have to know ... You don't have to know how those models work exactly. As long as you know what the input should be and what output you expect, then I think you're good to go. Jeremy: Yeah, yeah. Well, I still think that there's probably a lot of value in tuning the models though against your particular data sets. Alexa: Yeah, right. But also there are services for hyperparameter tuning. There are services even for neural architecture search, where they try a lot of different architectures for your data specifically and then they will tell you what is the best model architecture that you should use, and the same for the hyperparameter search. So these can be automated as well. Jeremy: Yeah. Very cool. So if you are hosting your own version of this ... I mean, maybe we'll go back to the MLOps piece of this. So I would assume that a data scientist doesn't want to be responsible for maintaining the servers or the virtual machines or whatever it is that it's running on. So you want to have this workflow where you can get your models trained, you can get them into production, and then you can run them through this loop you talked about and be able to tweak them and continue to retrain them as things go through. So on the other side of that wall, if we want to put it that way, you have your ops people that are running this stuff. Is there something specific that ops people need to know? How much do they need to know about ML, as opposed to ... I mean, the data scientists, hopefully, they know more. But in terms of running it, what do they need to know about it, or is it just a matter of keeping a server up and running? Alexa: Well, I think ... So I think the machine learning pipelines are not yet as standardized as a traditional software engineering pipeline.
So I would say that you have to have some knowledge of machine learning, or at least some understanding of how this lifecycle works. You don't actually need to know about research and things like that, but you need to know how this whole lifecycle works in order to work as an ops person who can automate this. But I think the software engineering skills and DevOps skills are the base, and then you can just build this knowledge on top of that. So I think it's actually quite easy to pick this up. Jeremy: Yeah. Okay. And what about, I mean, you mentioned this idea of a lot of data scientists aren't actually writing the models, they're just using the preconfigured model. So I guess that begs the question: How much does just a regular person ... So let's say I'm just a regular developer, and I say, "I want to start building machine learning tools." Is it as easy as just pulling a model off the shelf and then just learning a little bit more about it? How much can the average person do with some of these tools out of the box? Alexa: So I think most of the time, it's that easy, because usually the use cases that someone tries to tackle, those are not super edge cases. So for those use cases, there are already models which perform really well. Especially if you are talking about, I don't know, supervised learning on tabular data, I think you can definitely find models that will perform really well off the shelf on those types of datasets. Jeremy: Right. And if you were advising somebody who wanted to get started... I mean, because I think where it might come down to is going to be things like pricing. If you're using the Vision API and you're maybe limited on your quota, and then you can ... if you're paying however many cents per, I guess, lookup or inference, then that can get really expensive as opposed to potentially running your own model on something else. But how would you suggest that somebody get started? Would you point them at the APIs or would you want to get them up and running on TensorFlow or something like that? Alexa: So I think, actually, for a developer, just using an API would be super easy. So getting started with those APIs just to understand the concepts is very useful, but getting started with TensorFlow itself or just Keras, I definitely would recommend that, or just use scikit-learn, which is a more basic package for more basic machine learning. So those are really good starting points. And there are so many tutorials to get started with, and if you have an idea of what you would like to build, then I think you will definitely find tutorials which are similar to your own use case, and you can just use those to build your custom pipeline or model. So I would say, for developers, I would definitely recommend jumping into TensorFlow or scikit-learn or XGBoost or things like that. Jeremy: Right, right. And how many of these models exist? I mean, are we talking there's 20 different models or are we talking there's 20,000 models? Alexa: Well, I think ... Wow. Good question. I think we are more towards the latter; today maybe not 20,000, but definitely many thousands, I think. But there are popular models that most of the people use, and I think there are maybe 50 or 100 models that are the most popular, and most companies use them, and you are probably fine just using those for any use case or most of the use cases. Jeremy: Right.
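To make the off-the-shelf idea concrete, here is a minimal sketch using scikit-learn: a stock classifier with default settings applied to a bundled tabular dataset, with all the real effort left for the data side. The dataset choice is illustrative only, not something from the episode:

```python
# Off-the-shelf model on tabular data: no custom architecture work.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in tabular dataset; in practice this is where the effort goes.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# An unmodified stock model with sensible defaults.
model = GradientBoostingClassifier()
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```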
Now, speaking of use cases, so, again, I try to think of use cases for machine learning, whether it's classifying movies into genres or sentiment analysis, like I said, or maybe trying to classify news stories, things like that. Fraud detection, you mentioned. Those are all great use cases, but what are ... I know you've worked on a bunch of projects. So what are some of the projects that you've done and what were the use cases that were being solved there, because I find these to be really interesting? Alexa: Yeah. So I think a nice project that I worked on was a project with Lush, which is a cosmetics company. They manufacture soaps and bath bombs. And they have this nice mission that they would like to eliminate packaging from their shops. So they asked us, when I worked at Datatonic, we worked on a small project with them. They asked us to create an image recognition model, to train one, and then create a retraining pipeline that they can use afterwards. So they provided us with many hundred thousand images of their products, and they made photos from different angles with different lightings and all of that, so a really high-quality image data set of all their products. And then, we used a MobileNet model, because they wanted this model to be built into their mobile application. So when users actually use this model, they download it with their mobile application. And then, they created a service called Lush [inaudible], which you can use from within their app. And then, people can just scan the products and they can see the ingredients and how-to-use guides and things like that. So this is how they are trying to eliminate all kinds of packaging from their shops, so they don't actually need to put the papers there or put packaging with ingredients and things like that. And in terms of what we did on the technical side, as I mentioned, we used a MobileNet model, because we needed to quantize the model in order to put it on a mobile device. And we used TF Lite to do this. TF Lite is specifically for models that you want to run on an edge device, like a mobile phone. So that was already a constraint. So this is how we picked a model. I think, back then, there were only a few model architectures supported by TF Lite, and I think there were only two, maybe. So we picked MobileNet, because it had a smaller size. And then, in terms of the retraining, we automated the whole workflow with Cloud Composer on Google Cloud, which is a managed version of Apache Airflow, the open source scheduling package. The training happened on AI Platform, which is Google Cloud's SageMaker. Jeremy: Yeah. Alexa: Yeah. And what else? We also had an image pre-processing step just before the training, which happened on Dataflow, which is an auto-scaling processing service on Google Cloud. And after we trained the model, we just saved the model artifact in a bucket, and then ... I think we also monitored the performance of the model, and if it was good enough, then we just shipped the model to developers, who then manually updated the model file that went into the application that people can download. So we didn't really see if they used any shadow model thing or anything like that. Jeremy: Right. Right.
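The TF Lite step described here looks roughly like the following sketch: convert a trained Keras MobileNet into a quantized .tflite file small enough to ship inside a mobile app. The pretrained weights and optimization flags are illustrative assumptions, not the actual Lush pipeline:

```python
# Convert a Keras MobileNet to a quantized TF Lite model for edge devices.
import tensorflow as tf

# Stand-in for the trained model; the real project retrained on product images.
model = tf.keras.applications.MobileNet(weights="imagenet")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization

tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # this artifact ships inside the mobile app
```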
And I think that is such a cool use case, because, if I'm hearing you right, there was just, like, a bar of soap or something like that with no packaging, no nothing, and you just hold your mobile phone camera up to it, it looks at it, determines which particular product it is, gives you all that ... so no QR codes, no bar codes, none of that stuff. How did they ring them up though? Do you know how that process worked? Did the employees just have to know what they were, or did the employees use the app as well to figure out what they were billing people for? Alexa: Good question. So I think they wanted the employees as well to use the app. Jeremy: Nice. Alexa: Yeah. But when the app was wrong, then I don't know what happened. Jeremy: Just give them a discount on it or something like that. That's awesome. And that's the thing you mentioned there about ... Was it Tensor Lite, was it called? Alexa: TF Lite. Yeah. Jeremy: TF Lite. Yes. TensorFlow Lite or TF Lite. But, basically, that idea of being able to really package a model and get it to be super small like you said. You said edge devices, and I'm thinking serverless compute at the edge, I'm thinking Lambda functions. I'm thinking other ways that if you could get your models small enough to package, that you could run it. But that'd be a pretty cool way to do inference, right? Because, again, even if you're using edge devices, if you're on an edge network or something like that, if you could do that at the edge, that'd be a pretty fast response time. Alexa: Yeah, definitely. Yeah. Jeremy: Awesome. All right. So what about some other stuff that you've done? You've mentioned some things about fraud detection and things like that. Alexa: Yeah. So fraud detection is a use case for Wise. As I mentioned, Wise is an international money transfer service; that's one of its services. So, obviously, if you are doing anything with money, then a fraud use case is for sure something that you will have. So, I mean, in terms of ... I don't actually develop models at Wise, so I don't know actually what models they use. I know that they use H2O, which is a Spark-based library that you can use for model training. I think it's quite an advanced library, but I haven't used it myself too much, so I cannot talk about that too much. But in terms of the workflow, it's quite similar. We also have Airflow to schedule the retraining of the models. And they use EMR for data preparation, so quite similar to Dataflow, in a sense: a Spark-based auto-scaling cluster that processes the data, and then they train the models on EMR as well but using this H2O library. And then in the end, when they are happy with the model, we have this tool that they can use for releasing shadow models in production. And then, if they are satisfied with the performance of the model, then they can actually release it into production. And at Wise, we have a custom microservice, a custom API, for serving models. Jeremy: Right. Right. And that sounds like you need a really good MLOps flow to make all that stuff work, because you just have a lot of moving parts there, right? Alexa: Yeah, definitely. Also, I think we have many bits that could be improved. I think there are many bits that are still a bit manual and not streamlined enough. But I think most of the companies struggle with the same thing. It's just we don't yet have those best practices that we can implement, so many people try many different things, and then ... Yeah, so I think it's still a work in progress. Jeremy: Right. Right.
And I'm curious if your economics background helps at all with the fraud and the money laundering stuff? Alexa: No. Jeremy: No. All right. So what about ... you worked on another data engineering project for Vodafone, right? Alexa: Yeah. Yeah, so that was a data engineering project purely, so we didn't do any machine learning. Well, Vodafone has their own Google Analytics library that they use in all their websites and mobile apps and things like that, and that sends clickstream data to a server in a Google Cloud Platform project, and we consumed that data in a streaming manner with Dataflow. So, basically, the project was really about processing this data by writing an Apache Beam pipeline, which was always on and always expected messages to come in. And then, we dumped all the data into BigQuery tables, which is the data warehouse in Google Cloud. And then, these BigQuery tables powered some of the dashboards that they use to monitor the uptime and, I don't know, different metrics for their websites and mobile apps. Jeremy: Right. But collecting all of that data is a good source for doing machine learning on top of that, right? Alexa: Yeah, exactly. Yeah. I think they already had some use cases in mind. I'm not sure if they actually did those or not, but it's a really good base for machine learning, having the data collected there in BigQuery, because that is an analytical data warehouse, so some analysts can already start and explore the data as a first step of the machine learning process. Jeremy: Right. I would think anomaly detection and things like that, right? Alexa: Yeah, exactly. Jeremy: Right. All right. Well, so let's go on and talk about serverless a little bit more, because I know I saw you do a talk where you ran some experiments with serverless. And so, I'm just kind of curious, where are the limitations that you see? And I know that there continues ... I mean, we now have EFS integration, and we've got 10 gigs of memory for Lambda functions, you've even got Cloud Run, which I don't know how much you could do with that, but where are still some of the limitations for running machine learning in a serverless way, I guess? Alexa: So I think, actually, for this data science lifecycle, for many bits, the cloud providers offer a lot of serverless options. For data preparation, there is Dataflow, which is, I think, kind of like a serverless data processing service, so you can use that for data processing. For model training, there are SageMaker and AI Platform, which are kind of serverless, because you don't actually need to provision these clusters that you train your models on. And for model serving, in SageMaker, there are the serverless model endpoints that you can deploy. So there are many options, I think, for serverless in the machine learning lifecycle. In my experience, many times, it's a cost thing. For example, at Wise, we have this custom model serving API, where we serve all our models. And if we used SageMaker endpoints ... I think a single SageMaker endpoint is about $50 per month, that's the minimum price, and that's for a single model and a single endpoint. And if you have hundreds or thousands of models, then your price can go up pretty quickly. So I think, in my experience, the limitation could be just price. But in terms of ... So I think, for example, if I compare Dataflow with a Spark cluster that you program yourself, then I would definitely go with Dataflow.
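A minimal sketch of a streaming pipeline along the lines of the Vodafone project described here: an Apache Beam job that reads clickstream messages from Pub/Sub and appends them to BigQuery. The topic, table, and message format are hypothetical, and the sketch assumes the BigQuery table already exists:

```python
# Always-on streaming Beam pipeline: Pub/Sub -> parse -> BigQuery.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadClickstream" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/clickstream"  # placeholder
        )
        | "Parse" >> beam.Map(json.loads)  # assumes JSON-encoded messages
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:analytics.clickstream",  # placeholder table
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```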
I think it's just much easier, and maybe cost-wise as well you might be better off, I'm not sure. But in terms of comfort and developer experience, it's a much better experience. Jeremy: Right. Right. And so, we talked a little bit about TF Lite there. Is that something possible where maybe the training piece of it, running that on Functions as a Service or something like that, maybe isn't the most efficient or cost-effective way to do that, but what about running models or running inference on something like a Lambda function or a Google Cloud Function or an Azure Function or something like that? Is it possible to package those models in a way that's small enough that you could do that type of workload? Alexa: I think so. Yeah. I think you can definitely do inference using a Lambda function. But in terms of model training, I think that's not a ... Maybe there were already experiments for that, I'm sure there were. But I think it's not the kind of workload that would fit Lambda functions. That's a typical parallelizable, really large-scale workload ... You know the MapReduce type of data processing workloads? I think those are not necessarily a fit for Lambda functions. So I think for model training and data preparation, maybe those are not the best options, but for model inference, definitely. And I think there are many examples using Lambda functions for inference. Jeremy: Right. Now, do you think that ... because this is always something where I find with serverless, and I know you're more of a data scientist, ML expert, but I look at serverless and I question whether or not it needs to handle some of these things. Especially with some of the endpoints that are out there now, we talked about the Vision API and some of the other NLP things, are we putting in too much effort maybe to try to make serverless be able to handle these things, or is it just something where there's a really good way to handle these by hosting your ... I mean, even if you're doing SageMaker, maybe not SageMaker endpoints, but just running SageMaker machines to do it or whatever, are we trying too hard to squeeze some of these things into a serverless environment? Alexa: Well, I don't know. I think, as a developer, I definitely prefer the more managed versions of these products. So the less I need to bother with, "Oh, my cluster died and now we need to rebuild a cluster," and things like that, I think serverless can definitely solve that. I would definitely prefer the more managed version. Maybe not serverless, because, for some of the use cases or some of the bits of the lifecycle, serverless is not the best fit, but a managed product is definitely something that I prefer over a non-managed product. Jeremy: Right. And so, I guess one last question for you here, because this is something that always interests me. There are relevant things that we need machine learning for. I mean, I think fraud detection is a hugely important one. Sentiment analysis, again. Some of those other things are maybe, I don't know, I shouldn't call them toy things, but personalization and some of those things, they're all really great things to have, and it seems like you can't build an application now without somebody wanting some piece of that machine learning in there.
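For the inference-on-Lambda case considered feasible here, a sketch might look like the following: load a small TF Lite model once per container, then run inference on each invocation. The model path, input wiring, and packaging of numpy and the tflite-runtime wheel into the deployment artifact are assumptions, not anything described in the episode:

```python
# AWS Lambda handler running TF Lite inference on a small packaged model.
import json

import numpy as np
import tflite_runtime.interpreter as tflite

# Loaded at import time, so warm invocations reuse the interpreter.
interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()


def handler(event, context):
    # Assumes the request body carries a JSON list of floats shaped
    # to match the model's input tensor.
    features = np.array(
        json.loads(event["body"])["features"], dtype=np.float32
    )
    interpreter.set_tensor(input_details[0]["index"], features)
    interpreter.invoke()
    prediction = interpreter.get_tensor(output_details[0]["index"])
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": prediction.tolist()}),
    }
```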
So do you see that as where we are going, where in the future we're just going to have more of these APIs? I mean, out of AWS, because I'm more familiar with the AWS ecosystem, but they have Personalize and they have Connect and they have all these other services, they have the recommendation engine thing, all these different services ... Lex, or whatever, that will read text, natural language processing and all that kind of stuff. Is that where we're moving to, just all these pre-trained, canned products that I can just access via an API, or do you think that if you're somebody getting started and you really want to get into the ML world that you should start diving into the TensorFlows and some of those other things? Alexa: So I think if you are building an app and your goal is not to become an ML engineer or a data scientist, then these canned models are really useful, because you can have a really good recommendation engine in your product, you could have a really good personalization engine in your product, things like that. And so, those are, I think, really useful and you don't need to know any machine learning in order to use them. So I think we definitely go in that direction, because most of the companies won't hire data scientists just to train a recommender model. I think it's just easier to use an API endpoint that is already really good. So I think, yeah, we are definitely heading in that direction. But if you are someone who wants to become a data scientist or wants to be more involved with MLOps or machine learning engineering, then I think jumping into TensorFlow and understanding, maybe not, as we discussed, getting into the model architectures and things like that, but just understanding the workflow and being able to program a machine learning pipeline from end to end, I think that's definitely recommended. Jeremy: All right. So one last question: If you've ever used the Watson NLP API or the Google Vision API, can you put on your resume that you're a machine learning expert? Alexa: Well, if you really want to do that, I would give it a go. Why not? Jeremy: All right. Good. Good to know. Well, Alexa, thank you so much for sharing all this information. Again, I find the use cases here to be much more complex than maybe some of the surface ones that you sometimes hear about. So, obviously, machine learning is here to stay. It sounds like there's a lot of really good opportunities for people to start kind of dabbling in it and using that without having to become a machine learning expert. But, again, I appreciate your expertise. So if people want to find out more about you or more about the things you're working on and datastack.tv, things like that, how do they do that? Alexa: So we have a Twitter page for datastack.tv, so feel free to follow that. I also have a Twitter page, feel free to follow me, account, not page. There is a datastack.tv website, so it's just datastack.tv. You can go there, and you can check out the courses. And also, we have created a roadmap for data engineers specifically, because there was no good roadmap for data engineers. I definitely recommend checking that out, because we listed most of the tools that a data engineer and also a machine learning engineer should know about. So if you're interested in this career path, then I would definitely recommend checking that out. So under datastack.tv's GitHub, there is a roadmap that you can find. Jeremy: Awesome. All right.
And that's just, like you said, datastack.tv. Alexa: Yes. Jeremy: I will make sure that we get your Twitter and LinkedIn and GitHub and all that stuff in there. Alexa, thank you so much. Alexa: Thanks. Thank you.
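For comparison with the custom pipelines discussed in the interview, the "canned model" route really is one API call. A minimal sketch using the Google Cloud Vision API client library, assuming credentials are configured, the API is enabled, and a local image file to classify:

```python
# Label detection with a pre-trained Vision API model: no training involved.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("product.jpg", "rb") as f:  # placeholder image file
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, label.score)
```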

Google Cloud Platform Podcast
Workflows with Kris Braun and Guillaume Laforge

Google Cloud Platform Podcast

Play Episode Listen Later Apr 7, 2021 39:56


Brian Dorsey joins Stephanie Wong this week for an in-depth discussion on Workflows. Guests Kris Braun and Guillaume Laforge introduce us to Google Cloud Workflows, explaining that this fully managed serverless product helps connect services in the cloud. By facilitating the creation of an end-to-end schema, Workflows lets developers specify which microservices or other software respond when certain events occur, in a detailed, visual format. Kris and Guillaume list the benefits of using Workflows and detail the many uses for this powerful tool. The ability to add detailed descriptors, for example, helps companies avoid errors in calling up other pieces of software. New employees have an easier time getting acquainted when the steps are clearly defined as well. Our guests use real-world examples to illustrate the three main uses for Workflows: event-driven, batch, and infrastructure automation. Workflows are flexible and customizable. Later, we hear about Cloud Composer and its relation to Workflows, and our guests help us understand which product is right for each client type. The Workflows team continues to expand offerings. More connectors are being added to allow developers to call other GCP services. Working with lists will soon be easier, allowing Workflows to run steps in parallel. And Kris details other exciting updates coming soon, including Eventarc. Kris Braun Kris Braun is the Product Manager for three Google Cloud products that connect services to build applications: Workflows, Tasks, and Scheduler. Before Google, Kris’ adventures include founding and growing startups, leading a team of network security researchers investigating threats like Stuxnet, and writing the original BlackBerry simulator for app development. He’s a passionate advocate for opening job opportunities to skilled refugees displaced by war and disaster. Guillaume Laforge Guillaume Laforge is a Developer Advocate for Google Cloud, focusing on serverless technologies. More recently, he dived head first into Workflows and started presenting the product at online events, writing articles, tips and tricks, and videos on the topic. Cool things of the week How sweet it is: Using Cloud AI to whip up new treats with Mars Maltesers blog Turbo boost your Compute Engine workloads with new 100 Gbps networking blog Benchmarking higher bandwidth VM instances docs Interview Workflows site Spanner site Cloud SQL site Cloud Composer site Pub/Sub site Cloud Run site Eventarc site Eventarc Documentation docs Workflows Insiders site Quickstarts site How-To Guides site Syntax Reference site Guillaume’s Workflow Tips and Tricks blog A first look at serverless orchestration with Workflows blog Orchestrating the Pic-a-Daily serverless app with Workflows blog Better service orchestration with Workflows blog Get to know Workflows, Google Cloud’s serverless orchestration engine blog 3 common serverless patterns to build with Workflows blog Introduction to serverless orchestration with Workflows codelab Pic-a-Daily Serverless Workshop codelab Pic-a-daily: Lab 6—Orchestration with Workflows codelab What’s something cool you’re working on? Brian is working on use cases around VMs. Stephanie has been writing about database migration.
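For developers curious what driving Workflows from code looks like, here is a hedged sketch that starts an execution of an already-deployed workflow with the google-cloud-workflows Python client. The project, location, and workflow names are placeholders, and the snippet assumes credentials are already configured:

```python
# Start an execution of a deployed workflow and print its resource name.
from google.cloud.workflows import executions_v1

client = executions_v1.ExecutionsClient()

# projects/{project}/locations/{location}/workflows/{workflow}
parent = client.workflow_path("my-project", "us-central1", "my-workflow")

execution = client.create_execution(request={"parent": parent})
print("Started execution:", execution.name)
```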

R Weekly Highlights
Issue 2021-W13 Highlights

R Weekly Highlights

Play Episode Listen Later Mar 30, 2021 10:58


The Minard System in R, ggplot2 wizardry, a slackbot created with plumber and googleCloudRunner Episode Links This week's curator: Tony Elhabr (@TonyElHabr (https://twitter.com/tonyelhabr)) "The Minard System" in R (http://minard.schochastics.net/) A guide to creating a Slackbot that sends weekly updates via plumber, googleCloudRunner and Cloud Run (https://code.markedmondson.me/googleCloudRunner/articles/usecase-slackbot-google-analytics.html) ggplot2 Wizardry: My Favorite Tricks and Secrets for Beautiful Plots in R, Cédric Scherer (https://www.cedricscherer.com/slides/useR2021.pdf) Entire issue available at rweekly.org/2021-W13 (https://rweekly.org/2021-W13.html) Supplemental Resources DataViz History Series: Edward Tufte, Charles Minard, Napoleon and the Russian Campaign of 1812 - Part 2 (https://datavizblog.com/2013/05/18/dataviz-history-edward-tufte-charles-minard-napoleon-and-the-russian-campaign-of-1812-part-2) and Part 5 (https://datavizblog.com/2013/05/26/dataviz-history-charles-minards-flow-map-of-napoleons-russian-campaign-of-1812-part-5) {ggplot2} Wizardry recorded talk (https://www.youtube.com/watch?v=5KHvEXYtnOo) from UseR Oslo meetup (25 March 2021) Introducing googleCloudRunner - serverless R on Google Cloud Platform (https://code.markedmondson.me/googleCloudRunner-intro/)

CarahCast: Podcasts on Technology in the Public Sector
Securely Modernize Your Workloads with Google Cloud

CarahCast: Podcasts on Technology in the Public Sector

Play Episode Listen Later Mar 4, 2021 47:06


Listen to this podcast to hear insights on: GCP Administration Fundamentals: reviewing various services, billing, and support cases and navigating the UI, IAM, etc.; Running VMware workloads natively on GCP; Application Modernization by deploying containerized apps into Cloud Run, Google Kubernetes Engine (GKE), and Google Anthos; and Modernization of Data and Analytics workloads on GCP using Pub/Sub, Dataflow, BigQuery, Data Catalog, Data Studio, and Looker.

Google Cloud Platform Podcast
Botcopy with Dustin Dye and Alex Seegers

Google Cloud Platform Podcast

Play Episode Listen Later Feb 3, 2021 39:06


Dustin Dye and Alex Seegers of Botcopy are on the show today, chatting with hosts Mark Mirchandani and Priyanka Vergadia. Botcopy uses advanced AI technology along with excellent copywriting to create better chat bots. The software works directly on a company website and doesn’t require a login, allowing chats to stay anonymous. Our guests explain that their chat bots are treated like virtual employees, built and trained to function and speak appropriately for their specific job. Copywriting is an important part of this, as the conversational AI should continue to support the brand being represented and conversations should flow naturally. The bot personalities are developed through written copy and interactions with customers in instances like customer service, lead generation, and even some internal employee management needs. Later, we talk about how Dialogflow and Botcopy work together, including how Botcopy adds important user context to the conversation to facilitate more accurate bot responses. We hear more about Dialogflow CX and how the modular builder makes designing and controlling bot conversations easier. CX has also made managing multiple bots on a single account easier and team collaboration more efficient. The visual builder available in CX offers a better chatbot design experience, especially when multiple teams are working on the same bot. We hear examples of great use-cases for Botcopy, like restaurant menus, clinical trials, and more. Alex and Dustin give developers valuable advice about working with clients to build their bots. Test early and often to build a robust bot capable of handling many situations. It’s important to have an analytics system in place to identify possible improvement areas as well. Dustin Dye Dustin Dye is co-founder and CEO of Botcopy. After developing branded character and dialogue content for the #1 business bots on Messenger and Slack, Dustin launched Botcopy in 2017. Before co-founding Botcopy, Dustin had co-founded Expert Dojo, one of Silicon Beach’s largest startup incubators, serving, mentoring, and securing funding for some of the most exciting businesses coming out of LA. Dustin is a frequent keynote speaker at leading Chatbot conferences in the US and abroad. Alexander Seegers Alexander Seegers is a co-founder and COO of Botcopy and heads up the product team. He holds a Business degree from Northeastern and UX certification from General Assembly. Alex has consulted tech leaders at Fortune 500 companies worldwide, spearheading their forays into conversational AI for multiple use cases at the enterprise level. In addition to big-picture leadership and vision, Alex is adept at numerous coding languages and complex systems architecture. Cool things of the week Introducing WebSockets, HTTP/2 and gRPC bidirectional streams for Cloud Run blog Take the first step toward SRE with Cloud Operations Sandbox blog Interview Botcopy site Botcopy Blog blog Contact Botcopy email Dialogflow site Miro site What’s something cool you’re working on? Priyanka is working on Dialogflow CX episodes for the Deconstructing Chatbots series.

School of Cloud
Serverless Cloud Run

School of Cloud

Play Episode Listen Later Nov 8, 2020 35:31


Cloud Run lets you develop and deploy highly scalable containerized applications on a fully managed serverless platform. GitHub repo: https://github.com/hom-bahrani/terraform-gcp-cloudrun-actions Twitter feedback: https://twitter.com/original_hom
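The core of Cloud Run's container contract is simply a web server listening on the port the platform provides in the PORT environment variable. A minimal sketch in Python, independent of the Terraform repo linked above:

```python
# Smallest possible Cloud Run-compatible service using only the stdlib.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Hello from Cloud Run!")


if __name__ == "__main__":
    # Cloud Run sets PORT; default to 8080 for local runs.
    port = int(os.environ.get("PORT", 8080))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```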

Google Cloud Platform Podcast
HPC with Senanu Aggor and Ilias Katsardis + Deloitte Cyber Analytics with Eric Dull

Google Cloud Platform Podcast

Play Episode Listen Later Oct 21, 2020 40:37


Mark and Brian are together this week, hosting our guests Senanu Aggor and Ilias Katsardis as we discuss High Performance Computing with Google. HPC uses powerful computers to solve problems that would otherwise be too large or take too long for standard machines. Innovation and advances in cloud technology have made this resource more accessible, more scalable, and more affordable. Senanu lists some great use cases for HPC, including vehicle manufacturing and the medical field, and describes how these markets benefit from the extra power HPC offers. Ilias talks tech and helps us understand the evolution of the Google HPC offering and the architecture most often used with HPC. He explains the benefits of HPC on the cloud over the old way, emphasizing the flexibility of choosing machines based on your code rather than forcing your code onto small machines. Storage of data is flexible, scalable, and secure as well. Reducing VM-to-VM latency has been an important advancement in HPC, and Ilias describes how Google has decreased latency. Google Cloud customers are using the HPC offering for all kinds of large computing jobs, and Senanu details some of these real world instances. From COVID vaccine research to disaster evacuation planning, HPC on the cloud is changing the way we process data. Later, Ilias tells our listeners how to get started with their HPC project. Senanu Aggor Senanu Aggor is the Product Marketing Manager for Google Cloud’s High Performance Computing (HPC) solution. Ilias Katsardis Ilias Katsardis is the HPC Solution Lead for the Customer Engineering team (EMEA) at Google. In this role, Ilias brings over 14 years of experience in the cloud computing and high-performance computing industries to promote Google Cloud’s state-of-the-art infrastructure for complex HPC workloads. Previously, he worked as an applications analyst at Cray Inc., where he was a dedicated analyst to the European Centre for Medium-Range Weather Forecasts (ECMWF), and, prior to that, was an HPC application specialist at ClusterVision. Ilias also founded two startups, Airwire Networks in 2006 and Performance Hive in 2017. Cool things of the week What’s happening in BigQuery: Time unit partitioning, Table ACLs and more blog BigQuery explained: Blog series blog BigQuery Spotlight videos Cloud Functions vs. Cloud Run video Interview High Performance Computing site GCP Podcast Episode 237: NVIDIA with Bryan Catanzaro podcast GCP Podcast Episode 167: World Pi Day with Emma Haruka Iwao podcast Compute Engine site Compute Engine Machine Types site Cloud Storage site Cloud Firestore site Google Cloud with Intel site Cloud GPUs site Best practices for running tightly coupled HPC applications on Compute Engine site Super Computing Event site Stackchat at home This week, Max Saltonstall is talking cyber analytics with Eric Dull from Deloitte.

Google Cloud Platform Podcast
reCAPTCHA Enterprise with Kelly Anderson + Spring ML Potholes with Eric Clark

Google Cloud Platform Podcast

Play Episode Listen Later Oct 14, 2020 49:11


This week, your hosts Mark Mirchandani and Priyanka Vergadia are joined by Google Product Marketing Lead for Online Fraud Protection, Kelly Anderson, to talk about reCAPTCHA Enterprise. Kelly’s main focus at Google, reCAPTCHA, is a service that helps online companies determine if a user is a human or an automated system. With an advanced risk analysis engine, Google’s reCAPTCHA system has been defending sites from fraud for more than a decade. We talk about the evolution of reCAPTCHA and learn about the specific attributes of each version. Later, we focus on reCAPTCHA Enterprise which caters this security software to enterprise companies. Like version three, Enterprise boasts detailed risk analysis and actions. Companies can also expect reason codes for high risk scores and the ability to use the risk analysis engine customized to their company. They are able to find bots impersonating users and neutralize them easily. reCAPTCHA Enterprise is easy to use, whether you’re a Google Cloud customer or not. With the handy API, any company can take advantage of this security feature. Kelly leads us through the steps of deploying the API and details instances where this software plays a vital role in the protection of your website across multiple devices. Kelly takes us through the reCAPTCHA Enterprise dashboard and talks about the future as we conclude the show. reCAPTCHA hopes to move beyond behavioral protection to continue to stay ahead of malicious actors online. As Kelly explains, Machine Learning will become more and more important as fraud detection continues to grow and change. Kelly Anderson Kelly Anderson currently leads product marketing for Google Cloud’s online fraud protection business. She devotes most of her time to working on reCAPTCHA Enterprise and Web Risk and is passionate about helping businesses and their customers stay protected from fraud and abuse. Prior to Google, she worked in product marketing for Microsoft in Azure, where she marketed a variety of security products and business continuity products. For more information about Kelly Anderson and her leather jacket, you can follow her on Twitter. Cool things of the week Introducing HTTP/gRPC server streaming for Cloud Run blog Introducing Google Workspace blog Interview reCAPTCHA Enterprise site reCAPTCHA enterprise product demo video Top 10 use cases for reCAPTCHA Enterprise to defend against OWASP Web-Automated Attacks site Google reCAPTCHA Enterprise: Frictionless, Flexible, and Effective Web App Security site Stackchat at home This week, Max Saltonstall is fixing potholes in Memphis with Eric Clark of SpringML. What’s something cool you’re working on? Priyanka has been working on Google Cloud Whiteboard episodes Operations and CDN. She’s been working on new comic strips for zero trust and new GCPSketchnotes as well!
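The assessment flow Kelly describes boils down to sending the client-side token to the reCAPTCHA Enterprise API and reading back a risk score. A hedged sketch with the google-cloud-recaptcha-enterprise Python client; the project ID, site key, and token are placeholders, and credentials are assumed to be configured:

```python
# Create a reCAPTCHA Enterprise assessment for a token from the frontend.
from google.cloud import recaptchaenterprise_v1

client = recaptchaenterprise_v1.RecaptchaEnterpriseServiceClient()

event = recaptchaenterprise_v1.Event(
    token="user-response-token",  # token returned by the reCAPTCHA widget
    site_key="your-site-key",
)
assessment = recaptchaenterprise_v1.Assessment(event=event)

response = client.create_assessment(
    request={"parent": "projects/your-project-id", "assessment": assessment}
)

# Scores run from 0.0 (likely a bot) to 1.0 (likely a human).
print("risk score:", response.risk_analysis.score)
```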

Electro Monkeys
Docker Compose with Nicolas de Loof

Electro Monkeys

Play Episode Listen Later Oct 6, 2020 63:13


Docker is a tool developers love for its simplicity. Docker Compose borrows that same philosophy, letting you bring up an application stack with a single UP command. But the arrival of Kubernetes changed the game, and the well-known "developer experience" is no longer anything like what it used to be. In this context, it's not surprising to see some developers turn to Fargate or Cloud Run, which remove the orchestrator's complexity from the equation entirely. But cloud providers are not exactly known for making developers' lives simpler. In this episode, I talk with Nicolas de Loof. Nicolas describes himself as a dangerous tinkerer and works at Docker Inc on the new version of Compose: a version that can run locally as well as in the cloud. With him, I revisit the value a tool like Compose brings, and the reasons it had to evolve. Support the show (https://www.patreon.com/electromonkeys)

Google Cloud Platform Podcast
IKEA Retail (Ingka Group) with Matthew Lawson

Google Cloud Platform Podcast

Play Episode Listen Later Sep 16, 2020 25:14


Matthew Lawson of IKEA Retail (Ingka Group) joins Mark Mirchandani and Priyanka Vergadia today, telling us all about IKEA Retail (Ingka Group)’s move to the cloud. Engineering Manager Matt and his team primarily focus on the early stages of development at IKEA Retail (Ingka Group), helping the company with research and planning as well as development. Lately, they have been focused on incrementally moving IKEA Retail (Ingka Group)’s digital presence to the cloud. Matt explains the digital shift process for IKEA Retail (Ingka Group) and why they chose to modernize and move pieces to the cloud over time. By illustrating through examples, he details projects the team worked on during this digital transformation. Matt also talks about the changes to the IKEA Retail (Ingka Group) Digital DNA, emphasizing progress made in their digital culture to allow for the drastic change from on-prem to the cloud. Using managed services like Google Cloud Run, IKEA Retail (Ingka Group) has been able to adapt and grow in the cloud. Because IKEA Retail (Ingka Group)’s culture is developer-supportive, Matt and his team were able to research and convince the company that managed services in the cloud was the way to go, and developers were allowed some autonomy to choose things like GKE to create an effective cloud environment for IKEA Retail (Ingka Group). Next year, Matt and his engineering team are hoping to run some online hackathons and other events. Matthew Lawson Matthew Lawson is responsible for leading a small innovation team at IKEA Retail (Ingka Group) in southern Sweden. He has worked within the IT/Digital industry for 13 years and has deep experience and knowledge in application development, automation, DevOps and cloud technologies - especially serverless. He has a deep passion for enabling teams to quickly provide business value across the entire digital and physical customer journey. Cool things of the week Next OnAir as it happens: All the announcements in one place blog A developer’s take: Get the most out of Cloud AI Week at Next OnAir blog BANDAI NAMCO Entertainment Inc. brings PAC-MAN to the real world in PAC-MAN GEO blog Interview IKEA Retail (Ingka Group) site Matt’s Next Session: Serverless Functions (FaaS): Secure, Scalable, Resilient, Anywhere site ML Kit site Compute Engine site Cloud Run site Google Cloud Functions site Pub/Sub site BigQuery site GKE site Firestore site IKEA Retail (Ingka Group) is hiring! site Tip of the week This week, we get a great tip from our friend Grant on using Google Cloud Functions! github What’s something cool you’re working on? Priyanka is working on sketches like this summary of Google Cloud Next and more GCP Comics!

Google Cloud Platform Podcast
Traffic Director and Microservices with Stewart Reichling and John Laham

Google Cloud Platform Podcast

Play Episode Listen Later Aug 5, 2020 47:37


On the podcast this week, Mark Mirchandani and Brian Dorsey talk with fellow Googlers John Laham and Stewart Reichling about Traffic Director, a managed control plane for service mesh. Traffic Director solves many common networking problems developers face when breaking apart monoliths into multiple, manageable microservices. We start the conversation with some helpful definitions of terms like data plane (the plane that data passes through when one service calls on another) and service mesh (the art of helping these microservices speak with each other) and how Traffic Director and the Envoy Proxy use these concepts to streamline distributed services. Envoy Proxy can handle all sorts of networking solutions, from policy enforcement to routing, without adding hundreds of lines of code to each project piece. The proxy can receive a request, process it, and pass it on to the next correct piece, speeding up your distributed system processes. But Envoy can do more than a regular proxy. With its xDS APIs, services can configure proxies automatically, making the process much more efficient. In some instances, the same benefits developers see with a distributed system can be gained from distributed proxies as well. To make distributed proxy configuration easy and manageable, a managed control plane system like Traffic Director is the solution. Traffic Director not only helps you facilitate communication between microservices, it also syncs distributed states across regions, monitors your infrastructure, and more. Stewart Reichling Stewart is a Product Manager on Google Cloud Platform (GCP), based out of Cambridge, Massachusetts. Stewart leads Product Management for Traffic Director (Google’s managed control plane for open service mesh) and Internal HTTP(S) Load Balancing (Google’s managed, Envoy-based Layer 7 load balancer). He is a graduate of Georgia Institute of Technology and has worked across strategy, marketing, and product management at Google. John Laham John is an infrastructure architect and cloud solutions architect who works with customers to help them build their applications and platforms on Google Cloud. Currently, he leads a team of consultants and engineers as part of the Google Cloud Professional Services organization, aligned to the telco, media, entertainment and gaming verticals. Cool things of the week Week four sessions of Cloud Next: Security site Weekly Cloud Talks by DevRel Week 2 site Weekly Cloud Talks by DevRel Week 3 site Cost optimization on Google Cloud for developers and operators site GCP Podcast Episode 217: Cost Optimization with Justin Lerma and Pathik Sharma podcast Interview Traffic Director site Envoy Proxy site NGINX site HAProxy site Kubernetes site Cloud Run site Service Mesh with Traffic Director site Traffic Director Documentation site gRPC site Traffic Director and gRPC—proxyless services for your service mesh blog Tip of the week This week, we’re talking about IAM Policy Troubleshooter. What’s something cool you’re working on? Brian is working on the Weekly Cloud Talks by DevRel we mentioned in the cool things this week and continuing his Terraform studies. Check out the Immutable Infrastructure video we talked about last week. Sound Effect Attribution “Jingle Romantic” by Jay_You of Freesound.org

Snippets Tech
GCP - Cloud Functions

Snippets Tech

Play Episode Listen Later Aug 1, 2020 16:27


We're in a new season, this time talking about Google Cloud Platform. In this episode I talk about Cloud Functions, a serverless, event-driven service. I explain everything about this service, compare it with App Engine and Cloud Run, and give advice on when to use each one. Remember to share if you believe in the community.
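To give a feel for why Cloud Functions is lighter than managing an App Engine service or a Cloud Run container, here is a minimal sketch of an HTTP-triggered function in Go: the platform supplies the server and routes each request to the exported handler. The names are illustrative.

```go
// Package function holds an HTTP-triggered Cloud Function. There is no
// main() and no server setup: the platform routes each HTTP request to
// the exported handler.
package function

import (
	"fmt"
	"net/http"
)

// HelloHTTP responds to an HTTP request.
func HelloHTTP(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello from Cloud Functions")
}
```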

Snippets Tech
GCP - Cloud Run

Snippets Tech

Play Episode Listen Later Jul 30, 2020 13:15


We're in a new season, this time talking about Google Cloud Platform. In this episode I talk about Cloud Run, one of the services most loved by developers, built on the Knative standard, which makes it easier to manage your workloads. Remember to share if you believe in the community.

Electro Monkeys
Google Cloud Functions avec Guillaume Laforge

Electro Monkeys

Play Episode Listen Later Jul 28, 2020 62:25


With the rise of microservices, functions have become increasingly popular in modern architectures. A function is typically a microservice that can be invoked, for example, through a URL or in response to an event. One of the things that makes functions so attractive is that they cost nothing as long as they're not being used. What's more, a developer calling a function doesn't have to manage the infrastructure that runs it. This type of architecture is called serverless. In a previous episode, we explored what Google offers with Cloud Run. Today, I have the pleasure of welcoming Guillaume Laforge. Guillaume is a developer advocate for Google Cloud Platform, and together we discuss the use cases for functions and the specifics of Google Cloud Functions. Support the show (https://www.patreon.com/electromonkeys)
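As a sketch of the event-driven case described above, a background Cloud Function in Go receives a context plus the event payload. Assuming a Pub/Sub trigger (the type and names below are illustrative), it could look like this:

```go
// Package function holds a background Cloud Function triggered by a
// Pub/Sub event, following the documented context-plus-payload shape.
package function

import (
	"context"
	"log"
)

// PubSubMessage mirrors the payload Cloud Functions delivers for
// Pub/Sub-triggered functions.
type PubSubMessage struct {
	Data []byte `json:"data"`
}

// HelloPubSub runs every time a message lands on the trigger topic.
func HelloPubSub(ctx context.Context, m PubSubMessage) error {
	log.Printf("received event payload: %s", string(m.Data))
	return nil // returning an error would signal a failed invocation
}
```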

Electro Monkeys
Google Cloud Run avec Steren Giannini

Electro Monkeys

Play Episode Listen Later Jul 10, 2020 60:35


From its origins, the cloud's purpose has been to simplify the developer experience by letting developers deploy their applications easily while handling the complexity of running them. When we think of that simplicity, Heroku, Cloud Foundry, or Google App Engine come straight to mind. But the cloud has another face, made up of virtual machine instances, VPCs, firewalls, and load balancers. These components are generally complex and often tend to put off the average developer. That's why containers have taken off so much in recent years: they let developers deploy their applications quickly while abstracting away the complexity of the infrastructure. However, to manage these containers you need an orchestrator, and that orchestrator today is Kubernetes. And Kubernetes is itself a piece of infrastructure that developers don't want to manage. That's why Google launched Cloud Run: it combines the simplicity of App Engine with the flexibility of containers. In this episode, I have the pleasure of welcoming Steren Giannini. Steren is a product manager for Google Cloud Platform, and he has had the chance to work on both App Engine and Cloud Run. With him, we'll discover the challenges Cloud Run addresses and why it's an ideal platform for deploying your applications. Support the show (https://www.patreon.com/electromonkeys)

Electro Monkeys
Embrasser le serverless et les fonctions avec Alain Rouen

Electro Monkeys

Play Episode Listen Later Jun 5, 2020 58:08


While serverless is a concept that's already more than ten years old, function as a service has done much to bring it back into the spotlight, even though functions are only one part of that ecosystem. It raises the question of whether we should keep control over the execution environment at all. In this landscape there are two kinds of players: those who build the products, like AWS with Lambda or Google with Cloud Run, and who let us dream of ethereal architectures floating in the air; and those who use those same products. The experience reports of the latter are more valuable than ever for answering the questions we rightly ask about putting a serverless architecture in place. In this episode, I have the pleasure of welcoming Alain Rouen. Alain is CTO at Smile open source solutions, and he has always been keen to make the most of the technical innovations available to us. With him, we'll learn more about the use cases and the good and bad practices of serverless and function as a service. Support the show (https://www.patreon.com/electromonkeys)

Google Cloud Platform Podcast
Strise with Marit Rødevand

Google Cloud Platform Podcast

Play Episode Listen Later May 12, 2020 40:56


Priyanka Vergadia hops back into the host seat this week, joining Mark Mirchandani to talk to Marit Rødevand of Strise. Strise is an AI-driven enterprise company using knowledge graphs to gather and analyze massive amounts of information, depositing it into a web-based interface to help large clients such as banks solve data-driven problems. Strise's web-based data platform is customizable, flexible, and capable of keeping up with the fast-paced world of technology, so enterprise companies aren't constantly putting time and resources into reworking old software or building new software. To do this, Strise uses knowledge graphs rather than typical databases to create what Marit calls a future-proof data model. Marit explains knowledge graphs in detail, emphasizing that they can reduce the training of machine models, allow new data to be input easily, and make analyzing unstructured data much easier. Knowledge graphs take data that would normally only make sense to humans and, in effect, translate it for computers. Using banking as an example, she details how information about customers can be collected and analyzed thoroughly to help the bank come to conclusions about credit-worthiness or possible criminal activity. On Strise's platform, Marit tells us, the information is now available to the end user, who provides feedback to the system, marking things as relevant or irrelevant, rather than leaving those decisions to a data scientist outside of the client's field. This means that massive amounts of information can be stored in the knowledge graph, across many industries, while each user only gets the data he or she needs. Google Cloud tools such as Kubernetes Engine, Dataproc, and Pub/Sub have played an integral role in the creation of the Strise data pipeline. Marit explains how Strise gets their data, how it's input into the knowledge graph, and how these Google tools help to keep Strise running.

Marit Rødevand
Marit Rødevand is the CEO & co-founder of Strise, an AI startup from Norway that is signaling a new era of enterprise software. Strise makes the world's information useful across the enterprise. Its novel approach of utilizing a knowledge graph to power its data platform allows Strise to break data silos, end customization projects, and bring new insights from unstructured data. Strise is currently helping leading Nordic banks and financial institutions to solve their hardest data-driven problems within KYC/AML, risk, and sales. Strise recently announced their Seed round from Maki.vc, the leading Nordic early-stage investor, which invests in deep tech & brand-led startups. Marit has a background in Cybernetics & Robotics from the Norwegian University of Science and Technology (NTNU). In university, she co-founded Rendra, a construction SaaS, which was later acquired. Marit started as Entrepreneur in Residence at NTNU, where she spun Strise out of a research project that focused on new data models as a foundation for better AI.
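As a toy illustration of the triple-based model Marit describes (not Strise's actual implementation), a knowledge graph can be thought of as a set of subject-predicate-object facts that can be traversed from any entity. All names below are made up:

```go
// A toy knowledge graph: facts as subject-predicate-object triples.
package main

import "fmt"

// Triple is the basic unit of a knowledge graph.
type Triple struct {
	Subject, Predicate, Object string
}

func main() {
	graph := []Triple{
		{"AcmeCorp", "registeredIn", "Norway"},
		{"AcmeCorp", "hasOwner", "Jane Doe"},
		{"Jane Doe", "boardMemberOf", "OtherCorp"},
	}
	// A trivial "query": everything the graph knows about AcmeCorp.
	for _, t := range graph {
		if t.Subject == "AcmeCorp" {
			fmt.Printf("%s -[%s]-> %s\n", t.Subject, t.Predicate, t.Object)
		}
	}
}
```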
Cool things of the week
- BakeML site
- David East's Firebase Podcast podcast
- Automating BigQuery exports to an email blog
- Cloud OnBoard site

Interview
- Strise site
- A world in text — Strise blog
- GKE site
- Helm site
- Dataproc site
- Operations site
- Cloud Run site
- Cloud Pub/Sub site
- Cloud DNS site
- Cloud Storage site
- GCP Podcast Episode 198: SeMI Technologies with Laura Ham podcast
- Building on Google Cloud with SeMI Technologies - Stack Chat video
- Knowledge graphs with Weaviate - Stack Chat video
- Natural Language Data Processing with Mito.ai - Stack Chat video

Question of the week
Zach answers the question "What's a cool thing in Cloud that many people may not have thought about?" GSuite and Apps Script

What's something cool you're working on?
Priyanka wrote this post on 13 Most Common Google Cloud Reference Architectures, and her parents were on the news for the vegetable garden they've been working on. She's also been working with Material Design components and Firebase hosting. And The Data Show with Yufeng and Felipe is going strong!

Google Cloud Platform Podcast
SAP with Thomas Jung and Lucia Subatin

Google Cloud Platform Podcast

Play Episode Listen Later Apr 7, 2020 42:00


Brian Dorsey and Mark Mirchandani team up this week to speak with Thomas Jung and Lucia Subatin about SAP. SAP, the company that builds software and other technology components, is probably best known for its Enterprise Resource Planning software, which helps businesses with everything from accounting to order management. Its Customer Relationship Management software helps companies with things like marketing campaign management and sales, while SAP's Supplier Relationship Management software helps clients manage large supply chains. Thomas explains how companies can get started with SAP products and integrate them into their current systems. Once SAP products are employed, clients benefit from a better understanding of their complete business and a more efficient company. SAP developers must have great communication skills, as they take SAP products and tailor them to each individual company, whether on-prem or in the cloud. Later, Lucia and Thomas describe instances when clients may want to take their core businesses and enhance them with technologies like AI, and how this is possible. To wrap up the episode, Thomas and Lucia introduce us to SAP products like HANA, their Cloud Application Programming Model, and security measures.

Thomas Jung
Thomas Jung is Head of Developer Advocacy - a team within the SAP Developer & Community Relations organization. The Developer Advocates inform and educate about SAP and related development technologies, and also act as the voice of external developers within the SAP organization.

Lucia Subatin
Chocolate, cats, computers. Lucia spends a lot of time with the computer, either solving problems or starting trouble. She likes to architect technology solutions to help enterprises run more efficiently. Her cats and chocolate help her bring innovations.

Cool things of the week
- Google Cloud learning resources at no cost for 30 days blog
- Powering up caching with Memorystore for Memcached blog

Interview
- SAP site
- OData site
- Pub/Sub site
- Cloud Knative site
- SAP TechEd site
- Cloud Run site
- Google Cloud Platform Podcast Episode 166: SAP HANA with Lucia Subatin and Kevin Nelson podcast
- Qwiklabs site

Question of the week
How do I get started with caching? Why should I cache? Brian tells us more about caching, Memcached, and Redis.

Where can you find us next?
Brian is taking it day by day right now. Mark will be making more videos!

Google Cloud Platform Podcast
Python with Katie McLaughlin

Google Cloud Platform Podcast

Play Episode Listen Later Feb 18, 2020 28:20


Aja Hammerly and Brian Dorsey are here this week to start off a new year of podcasts! In an interview with Google Developer Advocate Katie McLaughlin, we talk about the advantages of Python 3 and why version 2 has been retired, as well as the cool things you can do with Django. Later, Katie discusses the complexities of deployment and how she makes it work smoothly with GCP, and we have some fun with emojis!

Katie McLaughlin
Katie has worn many different hats over the years. She is currently a Developer Advocate at Google Cloud, and a Director of the Python Software Foundation. When she's not changing the world, she enjoys making tapestries, cooking, and seeing just how well various application stacks handle emoji.

Cool things of the week
- Running workloads on dedicated hardware just got better blog
- Container security summit is going on as we record this site
- Easily upgrade Windows Server 2008 R2 while migrating to Google Cloud blog
- Launch of the BigQuery Weekly Data Challenge! site
- New data engineering learning path site

Interview
- Python Software Foundation site
- PyCascades site
- Django Demo site
- Emojipedia site
- App Engine site
- Compute Engine site
- Cloud Run site
- Cloud Build site
- Secrets Manager site
- Kakapo Mountain Parrot site
- The Power ⚡️ and Responsibility 😓 of Unicode Adoption ✨ video

Question of the week
I need to run something later, but cron isn't a good fit. What do I do?

Where can you find us next?
We'll be at Cloud Next in San Francisco in April! Katie will also be at PyCon US in April!

Sound Effects Attribution
- "African Gray" by Jmagiera of Freesound.org

PyDataMCR
Episode 11 - A Year in Review

PyDataMCR

Play Episode Listen Later Jan 26, 2020 47:52


This month we thought we would take a break from our usual episode format and have ourselves as the guests. Listen to us reflect on our year volunteering with PyDataMCR. We talk about what we did this year, including Google Next! We also talk a little about learning to rank. We realise this is an insider view, so this month's meetup will be an open retro, so have a think about what you want from next year.

Request: If you know about recording events for YouTube and can help us out, feel free to DM us (see our social channels below).

Sponsors
- LadBible - ladbible.com/
- Cathcart Associates - cathcartassociates.com/
- Horsefly Analytics - horseflyanalytics.com/

Our Collaborators
- HER+data - meetup.com/HER-Data-MCR/
- Pyladies - twitter.com/pyladiesnwuk
- Django Girls - djangogirls.org/
- Python NW - meetup.com/Python-North-West-Meetup/
- Open Data Manchester - opendatamanchester.org.uk/
- Lambda Lounge - http://lambdalounge.org.uk/

What we've done this year
- hacktoberfest - hacktoberfest.digitalocean.com
- blog posts - tinyurl.com/tnkzafr, tinyurl.com/urn8twp, tinyurl.com/too86wj

Learning to rank resources
- Reinforcement Learning to Rank with Markov Decision Process - http://bigdatalab.ac.cn/~junxu/publications/SIGIR2017_RL_L2R.pdf
- Reinforcement Learning to Rank in E-Commerce Search Engine: Formalization, Analysis, and Application - arxiv.org/abs/1803.00710
- medium.com/@alitech_2017/unlocking-insights-from-multi-round-searches-with-reinforcement-learning-74f7143acf08
- youtube.com/watch?v=AXa3CW68xks
- Matt Crooks Medium: Louvain clustering - medium.com/@DrMattCrooks

Google NEXT
- Google NEXT - cloud.withgoogle.com/next/uk/
- Cloud Build - cloud.google.com/cloud-build/
- Cloud Run - cloud.google.com/run/
- Trifacta - trifacta.com
- Cloud Dataprep by Trifacta - cloud.google.com/dataprep/

Tool
- Great Expectations - github.com/great-expectations/great_expectations

Social
- Meetup - meetup.com/PyData-Manchester/
- Slack - http://bit.ly/35KGOgR
- Twitter - @PyDataMCR

The InfoQ Podcast
Kelsey Hightower on Extending Kubernetes, Event-Driven Architecture, and Learning

The InfoQ Podcast

Play Episode Listen Later Jan 10, 2020 26:19


In this podcast, Daniel Bryant sat down with Kelsey Hightower, Staff Developer Advocate at Google. Topics covered included: the extensibility of Kubernetes, and why it has become the platform that other platforms are being built on top of; creating event-driven architectures and deploying these onto Function-as-a-Service (FaaS) platforms like the Kubernetes-based Knative and Google Cloud Run; and the benefits of learning, sharing knowledge, and building communities.

Why listen to this podcast:
- Kubernetes is a platform for building platforms. It may not be as opinionated as traditional Platform-as-a-Service (PaaS) offerings, but it has become popular due to its extensibility. There are PaaS-like solutions built on top of Kubernetes, such as OpenShift, Knative, and Cloud Run.
- The creation of common interfaces within Kubernetes -- such as Custom Resource Definitions (CRDs), the Container Networking Interface (CNI), and the Container Runtime Interface (CRI) -- enabled the adoption of the platform by vendors and the open source community without everyone needing to agree on exactly how to implement extensions. A sketch of the CRD pattern follows this list.
- Although not every workload can be effectively implemented using an event-driven architecture, for those that can, Kubernetes-based Function-as-a-Service (FaaS) platforms like Knative and Cloud Run can handle a lot of the operational management tasks for developers.
- Engineers may be able to get ~90% of the "service mesh" traffic management functionality they need from using a simple proxy. However, the separation of the control and data planes within modern service meshes, in combination with the rise in popularity of the sidecar deployment model, has provided many benefits within Kubernetes.
- A lot of learning within software development and information technology is transferable. If you spend time going deep in a technology when you begin your career, much of what you learn will be useful when you come to learn the next technology.

More on this:
- Quick scan our curated show notes on InfoQ: https://bit.ly/30alHC1
- Subscribe to the InfoQ newsletter to receive weekly updates on the hottest topics from professional software development: bit.ly/24x3IVq
- Subscribe: www.youtube.com/infoq
- Like InfoQ on Facebook: bit.ly/2jmlyG8
- Follow on Twitter: twitter.com/InfoQ
- Follow on LinkedIn: www.linkedin.com/company/infoq
- Check the landing page on InfoQ: https://bit.ly/30alHC1
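To ground the CRD point made above: extending Kubernetes typically starts by declaring a new resource type, which in Go is often written in the controller-runtime/kubebuilder style sketched below. The Widget resource is entirely hypothetical; the pattern, not the names, is the point.

```go
// Package v1 sketches the Go half of a Custom Resource Definition in
// the kubebuilder/controller-runtime style. Once the CRD is installed,
// the API server stores and serves Widget objects like built-ins.
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// WidgetSpec is the desired state a user writes.
type WidgetSpec struct {
	Replicas int32  `json:"replicas"`
	Message  string `json:"message,omitempty"`
}

// WidgetStatus is the observed state a controller reports back.
type WidgetStatus struct {
	Ready bool `json:"ready,omitempty"`
}

// Widget is the custom resource itself (a hypothetical example).
type Widget struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WidgetSpec   `json:"spec,omitempty"`
	Status WidgetStatus `json:"status,omitempty"`
}
```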

Kubernetes Podcast from Google
Monitoring, Metrics and M3, with Martin Mao and Rob Skillington

Kubernetes Podcast from Google

Play Episode Listen Later Dec 17, 2019 35:37


Martin Mao and Rob Skillington are co-founders of Chronosphere; CEO and CTO respectively. They both worked on the monitoring team at Uber, where they created M3: a metrics platform with an open source time-series database built for scale. They join Craig and Adam to talk about monitoring, metrics, and M3 on the last episode of 2019.

Do you have something cool to share? Some questions? Let us know:
- web: kubernetespodcast.com
- mail: kubernetespodcast@google.com
- twitter: @kubernetespod

Chatter of the week
- Test message from Delta Airlines

News of the week
- CSI migration and CSI volume snapshots
- AKS Private Clusters in preview
- GKE maintenance windows and exclusions is GA
- Google Cloud E2 VMs: introduction and understanding dynamic resource management
- New features in Cloud Run for Anthos
- Best practices for performing forensics on containers
- Infrastructure at Cliqz, and introducing Hydra
- Envoy CVEs
- Istio security bulletin
- The Top 3 Service Mesh Developments in 2019 by Zack Jory
- Istio Service Mesh Explained in 5 Minutes by Ram Vennam
- Ambassador Edge Stack
- Solo.io WebAssembly Hub
- Episode 55, with Idit Levine
- Kafka Envoy Protocol Filter
- Talos 0.3 beta
- AutoTiKV tuning
- OpenPolicyAgent's KubeCon recap
- Episode 42, with John Murray
- A first look at Antrea from Alex Brand
- TODO: read this article by Patrick DeVivo
- Does Testing Kubernetes Conformance Leave You in the Dark? Get Progress Updates as Tests Run by John Schnake
- Demystifying Kubernetes as a Service – How Alibaba Cloud Manages 10,000s of Kubernetes Clusters
- How Jaeger Helped Grafana Labs Improve Query Performance and Root Out Tough Bugs
- Adopting Kubernetes at Quora by Taylor Barrella
- CNCF announces schedule for Bengaluru/Delhi Forums

Links from the interview
- M3 website
- M3: Uber's Open Source, Large-scale Metrics Platform for Prometheus
- Before: Graphite and its Whisper database
- Prometheus
- Why pull rather than push?
- AlertManager
- PromQL
- RRDtool
- M3 on GitHub: open source from the start
- Chronosphere
- Rob's 2019 KubeCon talks: EU: M3 and Prometheus, Monitoring at Planet Scale for Everyone; NA: Deep Linking Metrics and Traces with OpenTelemetry, OpenMetrics and M3
- Twitter: Rob Skillington, Martin Mao, M3, Chronosphere
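For a sense of the pull model that M3 inherits from Prometheus (scrapers fetch metrics rather than apps pushing them), an application only has to expose a /metrics endpoint. A minimal Go sketch using the official Prometheus client library; the metric name is illustrative:

```go
// Expose a counter on /metrics for a Prometheus-compatible collector
// (Prometheus itself, or an M3 ingestion pipeline) to scrape.
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var requestsTotal = promauto.NewCounter(prometheus.CounterOpts{
	Name: "myapp_requests_total",
	Help: "Total HTTP requests handled.",
})

func main() {
	http.Handle("/metrics", promhttp.Handler()) // scrape endpoint
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.Inc()
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", nil)
}
```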

Google Cloud Platform Podcast
End of the Year Recap

Google Cloud Platform Podcast

Play Episode Listen Later Dec 10, 2019 37:46


Hosts new and old gather together for this special episode of the podcast! We'll talk about our favorite episodes of the year, the coolest things from 2019, and wrap up another great year together doing what we love! Happy Holidays to all of our listeners, and we'll see you in the new year!

Top episodes of the year
- GCP Podcast Episode 173: Cloud Run with Steren Giannini and Ryan Gregg podcast
- GCP Podcast Episode 165: Python with Dustin Ingram podcast
- GCP Podcast Episode 175: MongoDB with Andrew Davidson podcast
- GCP Podcast Episode 160: Knative with Mark Chmarny and Ville Aikas podcast
- GCP Podcast Episode 180: Firebase with Jen Person podcast
- GCP Podcast Episode 164: Node.js with Myles Borins podcast
- GCP Podcast Episode 174: Professional Services with Ann Wallace and Michael Wallman podcast
- GCP Podcast Episode 176: Human-Centered AI with Di Dang podcast
- GCP Podcast Episode 168: NVIDIA T4 with Ian Buck and Kari Briski podcast
- GCP Podcast Episode 163: Cloud SQL with Amy Krishnamohan podcast

Favorite episodes of the year
Mark Mirchandani's favorites:
- GCP Podcast Episode 193: Devoted Health and Data Science with Chris Albon podcast
- GCP Podcast Episode 177: Primer with John Bohannon podcast
- GCP Podcast Episode 202: Supersolid with Kami May podcast
Mark Mandel's favorites:
- GCP Podcast Episode 186: Blockchain with Allen Day podcast
- GCP Podcast Episode 196: Phoenix Labs with Jesse Houston podcast
Jon's favorites:
- GCP Podcast Episode 199: Data Visualization with Manuel Lima podcast
- GCP Podcast Episode 196: Phoenix Labs with Jesse Houston podcast
- GCP Podcast Episode 206: ML/AI with Zack Akil podcast
- GCP Podcast Episode 201: FACEIT with Maria Laura Scuri podcast
Gabi's favorites:
- GCP Podcast Episode 199: Data Visualization with Manuel Lima podcast
- GCP Podcast Episode 167: World Pi Day with Emma Haruka Iwao podcast
- GCP Podcast Episode 206: ML/AI with Zack Akil podcast
- GCP Podcast Episode 198: SeMI Technologies with Laura Ham podcast

Favorite things of the year
Mark Mirchandani's favorites: Cloud Run.
Mark Mandel's favorites: Stadia; Samurai Shodown available on Stadia; all the new podcast hosts!
Jon's favorites: First time doing the podcast at NEXT, and it was quite the experience; going to Nvidia's offices to do an episode; getting to talk to guests in the gaming industry and hear how passionate they are about the things they are building; joining the podcast; podcast outtakes!
Gabi's favorites: Visited a bunch of offices! Joining the podcast. Cloud NEXT talk, where my demo failed but I recovered! Spreading the love and joy of databases.

Where can you find us next?
Mark Mirch' will be sleeping as much as possible! Mandel will be working on plans for Next, GDC, and I/O 2020! Gabi will be running away to warm weather for her winter vacation! Jon will be home! He'll also be planning gaming content for next year and wrapping up this year with some deep dives into multiplayer games and some possible content!

Sound Effects Attribution
- "Small Group Laugh 4, 5 & 6" by Tim.Kahn of Freesound.org
- "Incorrect" by RicherLandTV of Freesound.org
- "Correct" by Epon of Freesound.org
- "Fireworks 3 Bursts" by AtomWrath of Freesound.org
- "Jingle Romantic" by Jay_You of Freesound.org
- "Dark Cinematic" by Michael-DB of Freesound.org
- "Bossa Loop" by Reinsamba of Freesound.org

The Podlets - A Cloud Native Podcast
Kubernetes as per Kelsey Hightower (Ep 7)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later Dec 9, 2019 56:12


Today on the show we have esteemed Kubernetes thought-leader Kelsey Hightower with us. We did not prepare a topic, as we know that Kelsey presents talks and features on podcasts regularly, so we thought it best to pick his brain and see where the conversation takes us. We end up covering a mixed bag of super interesting Kubernetes-related topics. Kelsey begins by telling us what he has been doing and shares with us his passion for learning in public and why he has chosen to follow this path. From there, we then talk about the issue of how difficult many people still think Kubernetes is. We discover that while there is no doubting that it is complicated, at one point, Linux was the most complicated thing out there. Now, we install Linux servers without even batting an eyelid, and we think we can reach the same place with Kubernetes in the future if we shift our thinking! We also cover other topics such as APIs and the debates around them and common questions Kelsey gets, before finally ending with a brief discussion on KubeCon. From the attendance and excitement, we saw that this burgeoning community is simply growing and growing. Kelsey encourages us all to enjoy this spirited community and the innovation happening in this space before it simply becomes boring again. Tune in today!

Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io
https://github.com/vmware-tanzu/thepodlets/issues
Hosts: Carlisia Campos, Duffie Cooley, Bryan Liles, Michael Gasch

Key Points From This Episode:
- Learn more about Kelsey Hightower, his background, and why he teaches Kubernetes!
- The purpose of Kelsey's course, Kubernetes the Hard Way.
- Why making the Kubernetes cluster disappear will change the way Kubernetes works.
- There is a need for more ops-minded thinking for the current Kubernetes problems.
- Find out why Prometheus is a good example of ops-thinking applied to a system.
- An overview of the diverse ops skillsets that Kelsey has encountered.
- Being ops-minded is not an end in itself: you should be thinking about the next big thing!
- Discover the kinds of questions Kelsey is most often asked and how he responds.
- Some interesting thinking and developments in the backup space of Kubernetes.
- Is it better to back up or to have replicas?
- If the cost of losing data is very high, then backing up cannot be the best solution.
- Debates around which instances are not the right ones to use Kubernetes in.
- The Kubernetes API is the part everyone wants to use, but it comes with the cluster.
- Why the Kubernetes API is only useful when building a platform.
- Can the Kubernetes control theory be applied to software?
- Protocols are often forgotten about when thinking about APIs.
- Some insights into the interesting work Akihiro Suda is doing.
- Learn whether Kubernetes can run on Edge or not.
- Verizon: how they are changing the Edge game and what the future trajectory is.
- The interesting dichotomy that Edge presents and what this means.
- Insights into the way that KubeCon is run and why it's structured in the way it is.
- How Spotify can teach us a lesson in learning new skills!

Quotes:
- "The real question to come to mind: there is so much of that work that how are so few of us going to accomplish it unless we radically rethink how it will be done?" — @mauilion [0:06:49]
- "If ops were to put more skin in the game earlier on, they would definitely be capable of building these systems. And maybe they even end up more mature as more operations people put ops-minded thinking into these problems." — @kelseyhightower [0:04:37]
- "If you're in operations, you should have been trying to abstract away all of this stuff for the last 10 to 15 years." — @kelseyhightower [0:12:03]
- "What are you backing up and what do you hope to restore?" — @kelseyhightower [0:20:07]
- "Istio is a protocol for thinking about service mesh, whereas Kubernetes provides the API for building such a protocol." — @kelseyhightower [0:41:57]
- "Go to sessions you know nothing about. Be confused on purpose." — @kelseyhightower [0:51:58]
- "Pay attention to the fundamentals. That's the people stuff. Fundamentally, we're just some people working on some stuff." — @kelseyhightower [0:54:49]

Links Mentioned in Today's Episode:
- The Podlets on Twitter — https://twitter.com/thepodlets
- Kelsey Hightower — https://twitter.com/kelseyhightower
- Kelsey Hightower on GitHub — https://github.com/kelseyhightower
- Interaction Protocols: It's All about Good Manners — https://www.infoq.com/presentations/history-protocols-distributed-systems
- Akihiro Suda — https://twitter.com/_AkihiroSuda_
- Carlisia Campos on LinkedIn — https://www.linkedin.com/in/carlisia/
- Kubernetes — https://kubernetes.io/
- Duffie Cooley on LinkedIn — https://www.linkedin.com/in/mauilion/
- Bryan Liles on LinkedIn — https://www.linkedin.com/in/bryanliles/
- KubeCon North America — https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2019/
- Linux — https://www.linux.org/
- Amazon Fargate — https://aws.amazon.com/fargate/
- Go — https://golang.org/
- Docker — https://www.docker.com/
- Vagrant — https://www.vagrantup.com/
- Prometheus — https://prometheus.io/
- Kafka — https://kafka.apache.org/
- OpenStack — https://www.openstack.org/
- Verizon — https://www.verizonwireless.com/
- Spotify — https://www.spotify.com/

Transcript:
EPISODE 7
[INTRODUCTION]
[0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn't reinvent the wheel. If you're an engineer, operator or technically minded decision maker, this podcast is for you.
[INTERVIEW]
[00:00:41] CC: Hi, everybody. Welcome back to The Podlets, and today we have a special guest with us, Kelsey Hightower. A lot of people listening to us today will know Kelsey, but as usual, there are a lot of newcomers in this space. So Kelsey, please give us an introduction.
[00:01:00] KH: Yeah. So I consider myself a minimalist. So I want to keep this short. I work at Google, on Google Cloud stuff. I've been involved with the Kubernetes community for what? 3, 4, 5 years, ever since it's been out, and one main goal: learning in public and helping other people do the same.
[00:01:16] CC: There you go. You do have a repo on your GitHub that's about learning Kubernetes the hard way. Are you still maintaining that?
[00:01:26] KH: Yeah. So every six months or so. So Kubernetes the Hard Way, for those that don't know, is a guide, a tutorial. You can copy and paste. It takes about three hours, and the whole goal of that guide was to teach people how to stand up a Kubernetes cluster from the ground up. So starting from scratch, 6 VMs, you install etcd, all the components, the nodes, and then you run a few test workloads so you can get a feel for Kubernetes.
The history behind that was when I first joined Google, we were all concerned about the adoption of such a complex system as Kubernetes, right? Docker Swarm is out at the time. A lot of people are using Mesos, and a lot of the feedback at that time was that Kubernetes is too complex. So Kubernetes the Hard Way was built on the idea that if people understand how it works, just like they understand how Linux works, because that's also complex, that if people just saw how the moving pieces fit together, then they would complain less about the complexity and have a way to kind of grasp it.
[0:02:30] DC: I'm back. This is Duffie Cooley. I'm back this week, and then we also have Michael and Bryan with us. So looking forward to this session talking through this stuff.
[00:02:40] CC: Yeah. Thank you for doing that. I totally forgot to introduce who else is in this show, and me, Carlisia. We didn't plan what the topic is going to be today. I will take a wild guess, and we are going to touch on Kubernetes. I have so many questions for you, Kelsey. But first and foremost, why don't you tell us what you would love to talk about? One thing that I love about you is that every time I hear an interview of you, you're always talking about something different, or you're talking about the same thing in a different way. I love that about the way you speak. I know you offer to be on a lot of podcast shows, which is how we ended up here, and I was thinking, "Oh my gosh! We're going to talk about what everybody is going to talk about, but I know that's not going to happen." So feel free to get a conversation started, and we are VMware engineers here. So come at us with questions, but also what you would like to talk about on our show today.
[00:03:37] KH: Yeah. I mean, we're all just coming straight off the heels of KubeCon, right? So this big, 12,000 people getting together. We're super excited about Kubernetes, and things are wrapping up there as well. When we start to think about Kubernetes and what's going to happen, a lot of people saw Amazon jump in with Fargate for EKS, right? For those unfamiliar with that offering: over the years, all the cloud providers have been providing some hosted Kubernetes offering, the idea being that the cloud provider, just like we do with hypervisors and virtual machines, would provide this base infrastructure so you can focus on using Kubernetes. You've seen this even flow down on-prem with VMware, right? VMware saying, "Hey, Kubernetes is going to be a part of this control plane, and you can use the Kubernetes API to manage virtual machines and containers on-prem." So at some point now, where do we go from here? There's a big serverless movement, which is trying to eliminate infrastructure for all kinds of components, whether that's compute, databases, or storage. But even in the Kubernetes world, I think there's an appetite, and we saw this with Fargate, that we need to make the Kubernetes cluster disappear, right? If we can make it disappear, then we can focus on building new platforms that extend the API or, hell, just using Kubernetes as is without thinking about managing nodes, operating systems and autoscalers. I think that's kind of been the topic that I'm pretty interested in talking about, because that future means lots of things disappear, right? Programming languages and compilers made assembly disappear for a lot of developers. Assembly is still there. I think people get caught up on nothing goes away. They're right.
Nothing goes away, but the number of people who have to interact with that thing is greatly reduced.
[00:05:21] BL: You know what, Kelsey? I'm going to have you get out of my brain, because that was the exact example that I was going to use. I was on a bus today and I was thinking about all the hubbub about the whole Fargate EKS thing, and then I was thinking, "Well, Go, for example, can generate assembler and then it compiles that down." No one complains about the length of the assembler that Go generates. Who cares? That's how we should think about this problem. That's a whole solvable problem. Let's think about bigger things.
[00:05:51] KH: I think it's because in operations we tend to identify ourselves as the people responsible for running the nodes. We're the people responsible for tuning the API server. When someone says it's going to go away, in ops – and you see this in some parts, right? Ops, some people focus a lot more on observability. They can care less about what machine something runs on. They're still going to try to observe and tune it. You see this in SRE and some various practices. But a lot of people who came up in a world like I have, in a traditional ops background, you were the one that PXE-booted the server. You installed that Linux OS. You configured it with Puppet. When someone tells you, "We're going to move on from that," as if it's a good thing, you're going to be like, "Hold up. That's my job."
[00:06:36] DC: Definitely. We've touched on this topic a couple of different times on this show as well, and it definitely comes back to understanding that, in my opinion, it's not about whether there will be work for people who are in operations, people who want to focus on that. The real question that comes to mind is: there is so much of that work, so how are so few of us going to be able to accomplish it unless we radically rethink how it will be done? We're vastly outnumbered. The number of people walking onto the internet for the first time every day is mind-boggling.
[00:07:08] KH: In the early days, we had this goal of abstracting or automating ourselves out of a job, and anyone that's tried that a number of times knows that you're always going to have something else to do. I think if we carry that to the infrastructure, I want to see the ops folks there. I was very surprised that Docker didn't come from operations folks. It came from the developer folks. Same thing for Vagrant, and the same thing for Kubernetes. These are developer-minded folks that want to tackle infrastructure problems. I think if ops were to put more skin in the game earlier on, they're definitely capable of building these systems, and maybe they even end up more mature as more operations people put ops-minded thinking into these problems.
[00:07:48] BL: Well, that's exactly what we should do. Like you said, Kelsey, we will always have a job. Whenever we solve one problem, we can think about more interesting problems. We don't think about Linux on servers anymore. We just put Linux on servers and we run it. We don't think about the 15 years where it was a little rocky. That's gone now. So think about what we did there and let's do that again with what we're doing now.
[00:08:12] KH: Yeah. I think the Prometheus community is a good example of operations-minded folks producing a system.
When you meet the kind of the originators of Prometheus, they took a lot of their operational knowledge and kind of built this metrics and monitoring standard that we all kind of think about now when we talk about some levels of observability, and I think that's what happens when you have good operations people that take prior experience, the knowledge, and that can happen over code these days. These are the kinds of systems they produce, and it's a very robust and extensible API that I think you start to see a lot of adoption of.
[00:08:44] BL: One more thing on Prometheus. Prometheus is six years old. Just think about that, and that's not done yet, and it's just gotten better and better and better. We've got to give up our old thing so we can get better and better and better. That's just what I want to add.
[00:08:58] MG: Kelsey, if you look at the – basically your own history of coming from ops, as I understood your own history, right? Now being kind of one of the poster children in the Kubernetes world, you see the world changing to serverless, to higher abstractions, more complex systems on one hand, but then on the other side, we have ops. Looking beyond or outside the world of Silicon Valley into the traditional ops, traditional large enterprise, what do you think is the current maturity level of these ops people? I don't want to discriminate anyone here. I'm just basically throwing this out as a question. Where do you think they need to go in terms of keeping up with these evolving, higher-level abstractions where we don't really care about nitty-gritty details?
[00:09:39] KH: Yes. So this is a good, good question. I spend half of my time on this. I probably spend time onsite with at least 100 customers a year globally. I fly on a plane and visit them in their home turf, and you definitely meet people at various skill levels and areas of responsibility. I want to make sure that I'm clear about the areas of responsibility. Sometimes you're hired in an area of responsibility that's below your skillset. Some people are hired to manage batch jobs or to translate files from XML to JSON. That really doesn't say a lot about their skillset. It just kind of talks about the area of responsibility. So shout out to all the people that are dealing with mainframes and having to deal with that kind of stuff. But when you look at it, you have the opportunity to rise up to whatever level you want to be in in terms of your education. When we talk about this particular question, some people really do see themselves as operators, and there's nothing wrong with that. Meaning, they could come in. They get a system and they turn the knobs. You give me a mainframe, I will tell you how to turn the knobs on that mainframe. You buy me a microwave, I'll tell you how to pop popcorn. They're not very interested in building a microwave. Maybe they have other things that are more important to them, and that is totally okay. Then you have people who are always trying to push the boundaries. Before Kubernetes, if I think back to 10 years ago, maybe 8, when I was working in a traditional enterprise, like kind of the ones you're talking about or hinting at, the goal has always been to abstract away all of the stuff it takes to deploy an application the right way in a specific environment for that particular company. The way I managed to do it was to say, "Hey, look. We have a very complex change management process." I worked in finance at that time.
So everything had to have a ticket no matter how good the automation was. So I decided to make JIRA, the ticketing system, their front door to do everything. So you go to JIRA. There'll be a custom field that says, "Hey, here are all the RPMs that have been QA'd by the QA team. Here are all the available environments." You put those two fields in. That ticket goes to change management for approval, and then something behind the scenes automated everything; in that case it was Puppet, Red Hat and VMware, right? So I think what most people have been doing, if you're in the world of abstracting this stuff away and making it easier for the company to adapt, you've already been pushing these ideas that we call serverless now. I think the cloud providers put these labels on platforms to describe the contract between us and the consumer of the APIs that we present. But if you're in operations, you should have been trying to abstract away all of this stuff for the last 10 or 15 years.
[00:12:14] BL: I 100% agree. Then also, think about other verticals. So 23 years ago, I did [inaudible 00:12:22] work. That was my job. But we learned how to program in C and C++ because we were on old Suns, not even SPARC machines. We're on the old Suns, and we wanted to write things in CDE and we wanted to write our own window managers. That is what we're doing right now, and that's why you see like Mitchell Hashimoto with Vagrant, and you're seeing how we're pushing this thing. We have barely scratched the surface of what we're trying to do. For a lot of people who are just ops-minded, understand that being ops-minded is not the end. You have to be able to think outside of your boundaries so you can create the next big thing.
[00:12:58] KH: Or you may not care about creating the next big thing. There are parts of my life where I just don't care. For example, I pay Comcast to get internet access, and my ops involvement was going to BestBuy and buying a modem and screwing it into the wall, and I troubleshoot this thing every once in a while when someone in the household complains the internet is down. But that's just as far as I'm ever going to push the internet boundaries, right? I am not really interested in pushing that forward. I'm assuming others will, and I think that's one thing in our industry where sometimes we believe that we all need to contribute to pushing things forward. Look, there's a lot of value in being a great operations person. Just be open to the idea that what we operate will change over time.
[00:13:45] DC: Yeah, that's fair. Very fair. For me, personally, I definitely identify as an operations person. I don't consider it my life's goal to create new work necessarily, but to expand on the work that has been identified and to help people understand the value of it. I find I sit in between two roles personally. One is to help figure out all of the different edges and pieces and parts of Kubernetes or some other thing in the ecosystem. Second, to educate others on those things, right? Take what I've learned and amplify it. Having the amplifying effect.
[00:14:17] CC: One thing that I wanted to ask you, Kelsey, is – I work on the Velero project, and that does backup and recovery of Kubernetes clusters. Some people ask me, "Okay. So tell me about the people who are doing backups?" I'm like, "I don't want to talk about that. That's boring. I want to talk about the people who are not doing backups." "Okay. Let's talk about why you should be doing, maybe thinking about that." Well, anyway.
I wonder if you get a lot of questions in the area of Kubernetes operations or cloud native in general, infrastructure, etc., that in the back of your mind you go, "That's the wrong question or questions." Do you get that?
[00:14:54] KH: Yeah. So let's use your backup example. So I think when I hear questions, at least it lets me know what people are thinking and where they're at, and if I ask enough questions, I can kind of get a pulse on the trend of where the majority of the people are. Let's take the backups questions. When I hear people say, "I want to back up my Kubernetes cluster," I rewind the clock in my mind and say, "Wow! I remember when we used to back up Linux servers," because we didn't know what config files were on the disk. We didn't know where processes were running. So we used to do these PS snapshots and we used to tar up the whole file system and store it somewhere so we could recover it. Remember Norton Ghost? You take a machine and ghost it so you can make it again. Then we said, "You know what? That's a bad idea." What we should be doing is having a tool that can make any machine look the way we want it. Config management is boring. So we don't back those up anymore. So when I hear that question I say, "Hmm, what is happening in the community that's causing people to ask these questions?" Because if I hear a bunch of questions that already have good answers, that means those answers aren't visible enough and not enough people are sharing these ideas. That should be my next keynote. Maybe we need to make sure that other people know that that is no longer a boring thing; even though it's boring to me, it's not boring to the industry in general. When I hear these questions, I kind of use them as a way to keep me up-to-date, keep me grounded. I hear stuff like how many Kubernetes clusters should I have? I don't think there's a best practice around that answer. It depends on how your company segregates things, or depends on how you understand Kubernetes. It depends on the way you think about things. But I know why they're asking that question: because Kubernetes presents itself as a solution to a much broader problem set than it really is. Kubernetes manages a group of machines typically backed by IaaS APIs. If you have that, that's what it does. It doesn't do everything else. It doesn't tell you exactly how you should run your business. It doesn't tell you how you should compartmentalize your product teams. Those decisions you have to make independently, and once you do, you can serialize those into Kubernetes. So that's the way I think about those questions when I hear them, like, "Wow! Yeah, that is a crazy thing that you're still asking this question six years later. But now I know why you're asking that question."
[00:17:08] CC: That is such a great take on this, because, yes, in the area of backup, people who are doing backups, in my mind – yeah, they should be independent of Kubernetes or not. But let's talk about the people who are not doing backups. What motivates you to not do backups? Obviously, backups can be done in many different ways. But, yes.
[00:17:30] BL: So think about it like this way. Some people don't exercise, because exercise is tough and it's hard, and it's easier to sit on the couch and eat a bag of potato chips than exercise. It's the same thing with backups. Well, backing up my Kubernetes cluster before Velero was so hard that I'd rather just invest brain cycles in figuring out how to make money.
So that’s where people come from when it comes to hard things like backups. [00:17:52] KH: There’s a trust element too, right? Because we don’t know if the effort we’re putting in is worth it. When people do unit testing, a lot of times unit testing can be seen as a proactive activity, where you write unit tests to catch bugs in the future. Some people only write unit test when there’s a problem. Meaning, “Wow! There’s an odd things in a database. Maybe we should write a test to prove that our code is putting odd things. Fix the code, and now the test pass.” I think it’s really about trusting that the investment is worth it. I think when you start to think about backups – I’ve seen people back up a lot of stuff, like every day or every couple of hours, they’re backing up their database, but they’d never restored the database. Then when you read their root cause analysis, they’re like, “Everything was going fine until we tried to restore a 2 terabyte database over 100 meg link. Yeah, we never exercised that part.” [00:18:43] CC: That is very true. [00:18:44] DC: Another really fascinating thing to think about the backup piece is that especially like in the Kubernetes with Valero and stuff, we’re so used to having the conversation around stateless applications and being able to ensure that you can redeploy in the case of a failure. You’re not trying to actually get back to a known state the way that like a backup traditionally would. You’re just trying to get back to a running state. So there’s a bit of a dichotomy there I think for most folks. Maybe they’re not conceptualizing the need for having to deal with some of those stateful applications when they start trying to just think about how Valero fits into the puzzle, because they’ve been told over and over again, “This is about immutable infrastructure. This is about getting back to running. This is not about restoring some complex state.” So it’s kind of interesting. [00:19:30] MG: I think part of this is also that for the stateful services that why we do backups actually, things change a lot lately, right? With those new databases, scale out databases, cloud services. Thinking about backup also has changed in the new world of being cloud native, which for most of the people, that’s also a new learning experiment to understand how should I backup Kafka? It’s replicated, but can I backup it? What about etcd and all those things? Little different things than backing up a SQL database like more traditional system. So backup, I think as you become more complex, stays if needed for [inaudible 00:20:06]. [00:20:06] KH: Yeah. The case is what are you backing up and what do you hope to restore? So replication, global replication, like we do with like cloud storage and S3. The goal is to give some people 11 9s of reliability and replicate that data almost as many geographies as you can. So it’s almost like this active backup. You’re always backing up and restoring as a part of the system design versus it being an explicit action. Some people would say the type of replication we do for object stores is much closer to active restoring and backing up on a continuous basis versus a one-time checkpoint. [00:20:41] BL: Yeah. Just a little bit of a note, you can back up two terabytes over 100 meg link in like 44 hours and a half. So just putting out there, it’s possible. Just like two days. But you’re right. When it comes to backups, especially for like – Let’s say you’re doing MySQL or Postgres. 
These days, is it better to back it up, or is it better to have a replica right next to it, and then have like a 10 minute delayed replica right next to that, and then replicate to Europe or Asia? Then constantly query the data that you're replicating. That's still a backup. What I'm saying here is that we can change the way that we talk about it. Backups aren't as conventional as they used to be. There are definitely other ways to protect your data.
[00:21:25] KH: Yeah. Also, I think the other part too around the backup thing is what is the price of data loss? When you take a backup, you're saying, "I'm willing to lose this much data between the last backup and the next." If that cost is too high, then backing up cannot be your primary mode of operation, because the cost of losing data is way too high; then replication becomes a complementing factor in the whole discussion of backups versus real-time replication and shorter times to recovery. I have a couple of questions. When should people not use Kubernetes? Do you know what I mean? I visit a lot of customers, I work with a lot of eng teams, and I am in the camp of Kubernetes is not for everything, right? That's a very obvious thing to say. But some people don't actually practice it that way. They're trying to jam more and more into Kubernetes. So I'd love to get your insights on where you see Kubernetes being like the wrong direction for some folks or workloads.
[00:22:23] MG: I'm going to scratch this one from my question list to Kelsey.
[00:22:26] KH: I'll answer it too then. I'll answer it after you answer it.
[00:22:29] MG: Okay. Who wants to go first?
[00:22:30] BL: All right. I'll go first. There are cases when I'm writing a piece of software where I don't care about the service discovery. I don't care about ingress. It's just software that needs to run. When I'm running it locally, I don't need it. If it's simple enough where I could basically throw it into a VM through a cloud-init script, I think that is actually lower friction than Kubernetes if it's simple. Now, but I'm also a little bit jaded here, because I work for the dude who created Kubernetes, and I'm paid to create solutions for Kubernetes, but I'm also really pragmatic about it as well. It's all about effort for me. If I can do it faster with cloud-init, I will.
[00:23:13] DC: For my part, I think that there's – I have a couple of – I got follow-on questions to this real quick. But I do think that if you're not actively trying to develop a distributed system, something where you're actually making use of the primitives that Kubernetes provides, then that already would kind of be a red flag for me. If you're building a monolithic application, or if you're in that place where you're just rapidly iterating on a SaaS product and you're just trying to get as many commits on this thing until it works and really rapidly prototype or even create this thing, maybe Kubernetes isn't the right thing, because although we've come a long way in improving the tools that allow for that iteration, I certainly wouldn't say that we're all the way there yet.
[00:23:53] BL: I would debate you on that, Duffie.
[00:23:55] DC: All right. Then the other part of it is, Kubernetes aside, I'm curious about the same question as it relates to containerization. Is containerization the right thing for everyone, or have we made that pronouncement, for example?
[00:24:08] KH: I’m going to jump in and answer on this one, because I definitely think we need a way to transport applications in some way, right? We used to do it on floppy disks. We used to do it on [inaudible 00:24:18]. I think the container to me I treat as a glorified [inaudible 00:24:23]. That’s the way I’ve been seeing it for years. Registry store them. They replace [inaudible 00:24:28]. Great. Now we kind of have a more maybe universal packaging format that can handle simple use cases, scratch containers where it’s just your binary, and the more complex use cases where you have to compose multiple layers to get the output, right? I think RPM spec files used to do something very similar when you start to build those thing in [inaudible 00:24:48], “All right. We got that piece.” Do people really need them? The thing I get weary about is when people believe they have to have Kubernetes on their laptop to build an app that will eventually deploy to Kubernetes, right? If we took that thinking about the cloud, then everyone would be trying to install open stack on their laptop just to build an app. Does that even make sense? Does that make sense in that context? Because you don’t need the entire cloud platform on your laptop to build an app that’s going to take a request and respond. I think Kubernetes people, I guess because it’s easier to put your on laptop, people believe that it needs to be there. So I think Kubernetes is overused, because people just don’t quite understand what it does. I think there’s a case where you don’t use Kubernetes, like I need to read a file from a bucket. Someone uploaded an XML file and my app is going to translate it into JSON. That’s it. In that case, this is where I think functions as a service, something like Cloud Run or even Heroku make a lot more sense to me because the operational complexity is kind of hitting within a provider and is linked almost like an SDK to the overall service, which is the object store, right? The compute part, I don’t want to make a big deal about, because it’s only there to process the file that got uploaded, right? It’s almost like a plug-in to an FTP server, if you will. Those are the cases where I start to see Kubernetes become less of a need, because I need a custom platform to do such an obvious operation. [00:26:16] DC: Those applications that require the primitives that Kubernetes provides, service discovery, the ability to define ingress in a normal way. When you’re actually starting to figure out how you’re going to platform that application with regard to those primitives, I do see the argument for having Kubernetes locally, because you’re going to be using those tools locally and remotely. You have some way of defining what that platforming requirement is. [00:26:40] KH: So let me pull on that thread. If you have an app that depends on another app, typically we used to just have a command line flag that says, “This app is over there.” Local host when it’s on my laptop. Some DNS name when it’s in the cluster, or a config file can satisfy that need. So the need for service discovery usually arises where you don’t know where things are. But if you’re literally on your laptop, you know where the things are. You don’t really have that problem. So when you bring that problem space to your laptop, I think you’re actually making things worse. I’ve seen people depend on Kubernetes service discovery for the app to work. Meaning, they just assume they can call a thing by name and they don’t support IPs, and ports. 
They don't support anything, because they say, "Oh! No. No. No. You'll always be running in Kubernetes." You know what's going to happen? In 5 or 10 years, we're going to be talking like, "Oh my God! Do you remember when you used to use Kubernetes? Man! That legacy thing. I built my whole career porting apps away from Kubernetes to the next thing." The number one thing we'll talk about is where people leaned too hard on service discovery, or people who built apps that talk to config maps directly. Why are you calling the Kubernetes API from your app? That's not a good design. I think we've got to be careful coupling ourselves too much to the infrastructure.

[00:27:58] MG: It's a fair point too. Two answers from my end to your question. So one is, I just built an appliance, which basically tries to bring an AWS Lambda experience to the vSphere ecosystem. Because we don't – or actually, my approach is that I don't want any ops person who needs to do some one-off thing, like connect this guy to another guy, to have to learn Kubernetes for that. It should be as simple as writing a function. So for that appliance, we had to decide how to build it, because it should be scalable, and we might have some function-as-a-service component running on there. So we looked around and we decided to put it on Kubernetes: we built the appliance as a traditional VM with Kubernetes on top. For me as a developer, it gave me a lot of capabilities, like the self-healing capabilities. But it's also a fair point that you wrote about, Kelsey: how much do we depend on, or write our applications to depend on, those auxiliary features from Kubernetes, like self-healing and restarts, for example?

[00:28:55] KH: Well, in your case, you're building a platform. I would hate for you to tell me that you rebuilt a Kubernetes-like thing just for that appliance. In your case, it's a great use case. I think the problem that we have as platform builders is: what happens when things start leaking up to the user? You tell a user all they have to care about is functions. Then they get some error saying, "Oh! There's some Kubernetes security context that doesn't work." They're like, "What the hell is Kubernetes?" That leakage is the problem, and I think that's the part where we have to be careful. It will take time, but we can't start leaking the underlying platform, making the original goal untrue.

[00:29:31] MG: The point where I wanted to throw this question back is: now these functions are being written as simple scripts, whatever, and the operators put them in. They run on Kubernetes, but the operators don't know that they run on Kubernetes. Going back to your question of when we should not use Kubernetes: is me writing in a higher-level abstraction, like a function, not using Kubernetes in the first sense, because I don't actually know that I'm using it? Under the covers, I'm still using it. So it's kind of an answer and not an answer to your question, because –

[00:29:58] KH: I've seen these single-node appliances. There's only one node, right? They're only there to provide, like, email at a grocery store. You don't have a distributed system. Now, what people want is the Kubernetes API: the way it deploys things, the way it swaps out a running container for the next one. We want that Kubernetes API. Today, the only way to get it is by essentially bringing up a whole Kubernetes cluster. I think the K3s project is trying to simplify that by re-implementing Kubernetes: no etcd, SQLite instead.
A single binary that has everything. So when we start to say what Kubernetes is, there's the implementation, which is a big distributed system, and then there's the API. I think what's going to happen is, if you want the Kubernetes API, you're going to have so many more choices on the implementation that make better sense for the target platform. If you're building an appliance, you're going to look at K3s. If you're a cloud provider, you're probably going to look at something like what we see on GitHub, right? You're going to modify it and integrate it into your cloud platform.

[00:31:00] BL: Maybe what happens with Kubernetes over the next few years is what happened with the Linux API, or the ABI. Firecracker and gVisor did this, and WSL did this. We can basically swap out Linux from the backend because we can just act on the calls. Maybe that will happen with Kubernetes as well. So maybe Kubernetes will become a standard, where there's the Kubernetes standard and then the Kubernetes implementation that we have right now. I don't even know about that one.

[00:31:30] KH: We're starting to see it, right? When you say, "Here is my pod," we can just look at Fargate for EKS as an example. When you give them a pod, their implementation is definitely different from what most people are thinking about running these days, right? One pod per VM, not using Virtual Kubelet. So they've taken that pod spec and tried to uphold its meaning. But the problem with that is you get leaks. For example, they don't allow you to bind to a host port. Well, the pod spec says you can bind to a host port. Their implementation doesn't allow you to do it, and we see the same problem with gVisor. It doesn't implement all the system calls. You couldn't run the Docker daemon on top of gVisor. It wouldn't work. So I think we're fine as long as we don't leak, because when we leak, then we start breaking stuff.

[00:32:17] BL: We're doing the same thing with Project Pacific here at VMware, where this concept of a pod is actually a virtual machine that boots in like a tenth of a second. It's pretty crazy how they've been able to figure that out. If we can get this right, that's huge for us. That means we can move out of our appliance and we can create better things that actually work. If I'm VMware-specific, I can use that. If I'm on AWS and I want this namespace, I can use Fargate and EKS. That's actually a great idea.

[00:32:45] MG: I remember this presentation, Kelsey, that you gave, I think two or three years ago. It might be three years, where you took the Kubernetes architecture and you removed the boxes, and the only thing remaining was the API server. This is where it clicked for me, like, "This is right," because I was focused on the scheduler. I wanted to understand the scheduler. But then you zoomed out, or you stripped off all these pieces, and the only thing remaining was the API server. It's like [inaudible 00:33:09], or like the syscall interface. It's basically my API to do some crazy things that I would have had to write on my own, in assembly or something, before I could even get started. That was the breakthrough moment for me, that specific presentation.

[00:33:24] KH: I'm working on an analogy to talk about what's happening with the Kubernetes API, and I haven't refined it yet. But when the web came out, we had all of these HTTP verbs: PUT, POST, GET. We have a body. We have headers. You can extract that out of the whole web, the web browser plus the web server.
If you extract out that one piece, then instead of building web pages, we can build APIs and GraphQL, because we can reuse many of those mechanisms, and we just call those RESTful interfaces. Kubernetes is going through the same evolution, right? The first thing we built was this container orchestration tool. But if you look at the CRDs, the way we do RBAC, the way we think about the status field in a custom object, if you extract those components out, then you end up with these Kubernetes-style APIs where we start to treat infrastructure not as code, but as data. That will be the RESTful moment for Kubernetes, right? With the web, we extracted it out, and then we had REST interfaces. In Kubernetes, once we extract it out, we'll end up with this declarative way of describing maybe any system. But right now, the fit, or the perfect match, is infrastructure: infrastructure as data, and using these CRDs to allow us to manipulate that data. So maybe you start with Helm, and then Helm gets piped into something like Kustomize, and that then gets piped into an admission controller. That's how Kubernetes actually works, and that data-model approach to API development, I think, is going to be the unique thing that lasts longer than the Kubernetes container platform does.

[00:34:56] CC: But if you're talking about – correct me if I'm misinterpreting it – platform as data: data to me is meant to be consumed, and I have actually been thinking since you said, "Oh, developers should not be developing apps that connect directly to Kubernetes," or I think you said the Kubernetes API. Then I was thinking, "Wait. I've heard so many times people saying that that's one great benefit of Kubernetes, that the apps have that access." Now, if you see my confusion, please clarify it.

[00:35:28] KH: Yeah. Right. I remember early on, when we were doing config maps, there was a big debate about how config maps should be consumed by the average application. One way could be: let's just make a config maps API and tell every developer that they need to import a Kubernetes library to call the API server, right? Now everybody's app doesn't work anymore on your laptop. So we were like, "Of course not." What we should do is have config maps be injected into the file system. That's why you can actually describe a config map as a volume and say, "Take these key values from the config map, write them as normal files, and inject them into the container so you can just read them from the file system." The other option was environment variables. You can take a config map and translate it into environment variables, and lastly, you can take those environment variables and put them into command line flags.

The whole point of that is that those are the three most popular ways of configuring an app: environment variables, command line flags, and files. Kubernetes molded itself into that world so that developers would never tightly couple themselves to the Kubernetes API. Now, let's say you're building a platform, like a workflow engine like Argo, or a network control plane like Istio. Of course you should use the Kubernetes API. You're building a platform on top of a platform. I would say that's the exception to the rule: if you're building a platform. But for a general application that's leveraging the platform, I really think you should stay away from the Kubernetes API directly. You shouldn't be making syscalls directly [inaudible 00:37:04] of your runtime. The unsafe package in Go.
Once you start doing that, Go can't really help you anymore. You start pinning yourself to specific threads. You're going to have a bad time.

[00:37:15] CC: Right. Okay. I think I get it. But you can still use Kubernetes to decouple your app from the machine by using objects to generate those dependencies.

[00:37:25] KH: Exactly. That was the whole benefit of Kube, and Docker even, saying, "You know what? Don't worry too much anymore about cgroups and namespaces. Don't even try to do that yourself." Because remember, there was a period of time where people were actually trying to build cgroups and network namespaces into the runtime. There were a bunch of Ruby and Python projects that were trying to containerize themselves within the runtime. Whoa! What are we doing? Having that second layer now, with containerd and runC, we don't have to implement that 10,000 times for every programming language.

[00:37:56] DC: One of the things I want to come back to is the point that you made about the Kubernetes API being one of the more attractive parts of the project, and people needing that to move forward in some of these projects, and I wonder if it's more abstract than that. I wonder if it's abstract enough to think about in terms of level-triggered versus edge-triggered stuff. Taking the control theory that basically makes Kubernetes such a stable project and applying that to software architecture, rather than necessarily bringing the entire API with you. Perhaps what you should take from this is the lessons that we've learned in developing Kubernetes, and apply those to your software.

[00:38:33] KH: Yeah. I had the fortune to spend some time with Mark Burgess. He came out with Promise Theory, and Promise Theory is the underpinning of Puppet, Chef, Ansible, CFEngine: this idea that we would make promises about something and eventually converge to that state. The problem was, with Puppet, Chef, and Ansible, we were basically doing this with shell scripts and Ruby. We were trying to write all of these if-and-else statements. When those didn't work, what did you do? You made an exec statement at the bottom, and then you're like, "Oh! Just run some bash, and who knows what's going to happen?" In those early implementations of Promise Theory, we didn't own the resource that we were making promises about. Anyone could go behind it and remove the user, or the user could have a different user ID on different systems but mean the same thing.

In the Kubernetes world, we push a lot of those if-else statements into the controller. We force the API not to have any code. That's the big difference. If you look at the Kubernetes API, you can't do if statements. In Terraform, you can do if statements, so you kind of fall into the imperative trap at the worst moments, when you're doing dry runs or something like that. It does a really good job of it, don't get me wrong. But the Kubernetes API says, "You know what? We're going to go all-in on this idea." You have to change the controller first and then update the API. There are no escape hatches in the API. So it forces a set of discipline that I think gets us closer to the promises, because we know that the controller owns everything. There's no way to escape in the API itself.

[00:40:07] DC: Exactly. That's exactly what I was pushing for.

[00:40:09] MG: I have a somewhat related question, and I'm just not sure how to frame it correctly.
Yesterday I saw a good talk by someone talking about protocols, the somewhat forgotten power of protocols in the world of APIs. We've got Swagger. We've got API definitions. But he made the very simple point that if I give you an open, a close, and a write and read method in an API, you still don't know how to call them in sequence, or which one to call first. This is the same for the [inaudible 00:40:36] library, if you look at that. So I always have to ask myself, "Should I do anything [inaudible 00:40:40], or am I leaking some stuff?" So I look it up. Whereas with protocols, if you look at the RFC definitions, they are very, very precise, and they very plainly outline what you should do, how you should behave, how you should communicate between these systems. This is more about the communication and less about the actual implementation of an API. I still have to go through that talk again, and I'm going to put it in the show notes. But it opened my mind again a little bit to think more about communication between systems and contracts and promises, as you said, Carlisia. Because we make so many assumptions in our code, especially as we have to write a lot of stuff very quickly, which I think will make things brittle over time.

[00:41:21] KH: So the gift and the curse of Kubernetes is that it tries to do both all the time. For some things, like a pod or a deployment, we all feel that. If I give any Kubernetes cluster a deployment object, I'm going to get back a running pod. This is what we all believe. But the thing is, it may not necessarily run on the same kernel. It may not run on the same OS version. It may not even run on the same type of infrastructure, right? This is where I think Kubernetes ends up leaking some of those protocol promises. A deployment gets you a set of running pods, but then we drop down to a point where you can actually do your own API and build your own protocol. I think you're right. Istio is a protocol for thinking about service mesh, whereas Kubernetes provides the API for building such a protocol.

[00:42:03] MG: Yeah, good point. [inaudible 00:42:04].

[00:42:04] DC: On the Fargate stuff, I thought there was a really interesting article – or actually, an interesting project – by [inaudible 00:42:10], and I want to give him a shout-out on this, because I thought that was really interesting. He wrote an admission controller that leverages the autoscaler, node affinity, and pod affinity to effectively do the same thing, so that whenever a new pod is created, it will spin up a new machine and associate only that pod with that machine. I was like, "What a fascinating project." But also just seeing this come up from the whole Fargate ECS stuff, I was like –

[00:42:34] KH: I think that's the thread that Virtual Kubelet is pulling on, right? This idea that you can simplify autoscaling if you remove that layer, right? Because right now we're trying to do this musical-chairs dance, right? In a cloud, imagine if someone gave you the hypervisor and told you you're responsible for attaching hypervisor workers and the VMs. It would be a nightmare. We wouldn't be talking about autoscaling the way we do in the cloud. I think Kubernetes is moving into a world of one pod per resource envelope. Today we call them VMs, but I think at some point we're going to drop the VM and we'll just call it a resource envelope. VMs are the way we think about that today. Firecracker is like, "Hey, does it really need to be a complete VM?" Firecracker is saying, "No. It doesn't.
It just needs to be a resource envelope that allows you to run that particular workload."

[00:43:20] DC: Yeah. It's the same thing we're doing here. It's just enough VM to get you to the point where you can drop those containers onto it.

[00:43:25] CC: Kelsey, question. Edge? Kubernetes on edge. Yes or no?

[00:43:29] KH: Again, compute on edge has been a topic for discussion forever. The problem is, when some people say compute on edge, they mean, like, go buy some servers from Dell and put them in some building as close to your property as you can. But then you have to go build the APIs to deploy to that edge. What people want – and I don't know how far off it is – is this: Kubernetes has set the bar so high that the Kubernetes API comes with a way to load balance, attach storage, all of these things, by just writing a few YAML files. What I hear people saying is, "I want that as close to my data center or store as possible." When they say Kubernetes on the edge, that's what they're saying: what we currently have at the edge is not enough. We've been providing edge for a very long time. OpenStack was – remember OpenStack? "Oh! We're going to do OpenStack on the edge." But then you're a pseudo cloud provider without the APIs. I think what Kubernetes is bringing to the table is that we have to have a default load balancer, we have to have a default block store, we have to have a default everything, in order for it to mean Kubernetes like it does today, centralized.

[00:44:31] BL: Well, stores have been doing this forever in some form or another. 20 years ago I worked for a duty-free place and literally traveled all over the world replacing point-of-sale systems. You might think of a point of sale as a cash register. There was a computer in the back, and there were RS-232 links from the cash registers to the computer in the back. Then there was dial-up, or a [inaudible 00:44:53] line, to our central thing. We've been doing edge for a long time, but now we can do edge where the central facility actually manages the compute infrastructure. All they care about is basically CPU and memory and network storage now, and it's a lot more flexible. The journey is long, but I think we're going to do it. It's going to happen, and I think we're almost there – people are definitely experimenting.

[00:45:16] KH: You know what, Carlisia? You know what's interesting now, though? I was watching the re:Invent announcements. Verizon is starting to allow these edge components to leverage 5G for the last mile, and that's a game-changer, because most people are very skeptical about 5G being able to provide the same coverage as 4G, because of the wavelength and point-to-point, all of these things. But for edge, this thing is a game-changer: higher bandwidth, but shorter distance. This is exactly what edge wants, right? Now you don't have to dig up the ground and run fiber from point to point. So if you combine these Kubernetes APIs with concepts like 5G and get that closer to people, yeah, I think that's going to change the way we think about regions and zones. That kind of goes away. We're going to move closer to CDNs, like Cloudflare has been experimenting with their Workers technology.

[00:46:09] DC: On the edge stuff, I think that there's also an interesting dichotomy happening, right?
There's the definition of edge that we referred to, which is the storage stuff, and the one that you're alluding to, which is that there may be some way of actually having some edge capability in a point of presence at a 5G tower, or something like that. In some cases, edge means data gravity: you're actually taking a bunch of data from sensors and you're trying to store it in a place where you don't have to pay the cost of moving all of the data from one point to another, where you can actually centralize compute. So in those edge cases, you're actually willing to invest in high-end compute to allow for the manipulation of that data where that data lake is, so that you can afford to move it into some centralized location later. But I think that whole space is so complex right now, because there are so many different definitions and so many different levels of constraints that you have to solve for under one umbrella term, which is the edge.

[00:47:04] KH: I think Bryan was pulling on that with the POS stuff, right? Because instead of you going to buy your own cash register and gluing everything together, that whole space got so optimized that you can just buy a Square terminal, plug it into some Wi-Fi, and there you go, right? You now have that thing. So once we start to do this for, like, ML capabilities, security capabilities, I think you're going to see that POS-like thing expand, and that computer get a little bit more robust to do exactly what you're saying, right? Keep the data local. Maybe you ship models to that thing so that it can get smarter over time, and then upload the data from various stores over time.

[00:47:40] DC: Yup.

[00:47:40] MG: One last question from my end, switching gears a bit, if you allow it. KubeCon. I left KubeCon with some mixed feelings this year. But my perspective is different, because I'm not the typical attendee, one of the 12,000 people – most of them were newcomers, actually. So I looked at them and I asked myself, "If I were new to this huge world of the CNCF and Kubernetes and all this stuff, what would I take from it?" I would be confused. Confused by everything from the [inaudible 00:48:10] talks, which make it sound like it's so complex to run all these things, to the keynotes, which seem to be just a lineup of different projects that I all have to get through and install and run. I was missing some perspective and some clarity from KubeCon this year, especially for newcomers. I'm afraid that if we don't retain them, attract them, and maybe make them contributors – because that's another big problem – we'll lose the base that is using Kubernetes.

[00:48:39] BL: Before Kelsey says anything – Kelsey was a KubeCon co-chair before I was, but I was a KubeCon co-chair this time – I can tell you exactly why everything is like it is. Fortunately and unfortunately, this cloud native community is huge now. There's lots of money. There are lots of people. There are lots of interests. If we went back to KubeCon when it was in San Francisco years ago, or even the first Seattle one, that was a community event. We could make the event for the community. Now, there's the community, the people who are creating the products; there are the end users, the people who are consuming the products; and there are these big corporations and companies, the people who are actually financing this whole entire thing. We have to balance all three of those. As a person who just wants to learn, what are you trying to learn from?
Are you learning from the consumption piece? Are you learning to be a vendor? Are you learning to be a contributor? We have to think about that. At a certain point, that's good for Kubernetes. That means that we've been able to do the whole chasm thing. We've crossed the chasm. This thing is real. It's big. It's going to make a lot of people a lot of money one day. But I do see the issue for the person who's trying to come in and say, "What do I do now?" Well, unfortunately, it's like anything else. Where do you start? You've got to take it all in. You need to figure out where you want to be. I'm not going to be the person that's going to tell you, "Well, go do a SIG." That's not it. What I want to tell you is that, like anything else we have to learn, it's real hard, whether it's a programming language or a new technique. Figure out where you want to be, and you're going to have to do some research. Then hopefully you can contribute. I'm sure Kelsey has opinions on this as well.

[00:50:19] KH: I think Bryan is right. I mean, I think it's just like a pyramid happening. At the very bottom, we're new. We need to get everybody together in one space, and it becomes more of a tradeshow, like an introduction, like a tasting, right? When you're hungry, you go and just taste everything. Then when you figure out what you want, that will be your focus, and that's going to change every year for a lot of people. Some people go from consumer to contributor, and they're going to want something else out of the conference. They're only going to want to go to the contributor day and maybe some of the deep-dive technical tracks. You're trying to serve everybody in two or three days, so you're going to have everything pulling for your attention.

I think what you've got to do is commit. If you go and you're a contributor, or you're someone who's building on top, you may have to find a separate event to go with it, right? Someone told me, "Hey, when you go to all of these conferences, make sure you don't forget to invest in the one-on-one time." Me going to Oslo and spending an evening with Mark Burgess, really talking about Promise Theory, outside of competing for attention with the rest of the conference. When I go, I like to meet new people, sit down with them. Out of the 12,000 people, I call it a win if I can meet three new people that I've never met before. You know what? I'll do a follow-up hangout with them to go deeper in some areas. So I think it's more of a catch-all. It definitely has a tradeshow feel now, because it's big and there's a lot of money and opportunity involved. But at the same time, you've got to know that, "Hey, you've got to go and seek out." You go to Spotif
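Earlier in this transcript, Kelsey walks through the three ways Kubernetes surfaces config maps to an app: files, environment variables, and command line flags. That maps directly to YAML. Here is a minimal sketch, with hypothetical names (app-config, example.com/demo:v1), showing the same ConfigMap consumed all three ways, so the app itself never has to call the Kubernetes API:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug
  settings.json: '{"featureX": true}'
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: example.com/demo:v1  # hypothetical image
      env:
        # Option 1: a key becomes an environment variable.
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
      # Option 2: the env var is reused as a command line flag.
      args: ["--log-level=$(LOG_LEVEL)"]
      # Option 3: keys become plain files under /etc/app.
      volumeMounts:
        - name: config
          mountPath: /etc/app
  volumes:
    - name: config
      configMap:
        name: app-config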

Kubernetes Podcast from Google
Vitess, with Jiten Vaidya and Sugu Sougoumarane

Kubernetes Podcast from Google

Play Episode Listen Later Nov 26, 2019 37:26


Vitess is a cloud native database clustering system for horizontal scaling of MySQL. It was built for YouTube, open sourced, and has recently graduated from the CNCF. Two members of the team who wrote and ran Vitess at YouTube, Jiten Vaidya and Sugu Sougoumarane, are CEO and CTO of PlanetScale, a company they founded to support Vitess commercially. They join Craig and Adam to talk databases. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod Chatter of the week Our meetup at KubeCon The WeWork booth at KubeCon You Must Build A Boat You Have To Win The Game News of the week Anthos news from Next UK Multi-cluster management with Anthos GitLab Serverless with Cloud Run for Anthos Project Antrea from VMware Managed Istio GA on IBM Kubernetes Service IBM open sources developer tools Kui and iter8 Episode 47 with Kim Lewandowski Solo.io introduces Autopilot Hubble, from Cilium ByteBuilders introduces Kubeform Cloudbees adds Jenkins X UI to their community distribution Juniper updates Contrail Slack Vitess case study Debugging network stalls on Kubernetes by Theo Julienne at GitHub Volterra’s control plane for distributed PaaS Gravitational takes $25m investment Datadog’s 2019 container report Aqua Security acquires Cloudsploit CNCF 2019 award winners Episode 72 with Lachlan Evenson Episode 77 with Katharine Berry Links from the interview Vitess About Jiten and Sugu Graduated from the CNCF Database shards Vitess history YouTube acquired by Google in 2006 Go; 10 years old Google storage systems: Bigtable Colossus Scaling MySQL in the cloud with Vitess and Kubernetes and Cloud Native MySQL Sharding with Vitess and Kubernetes by Anthony Yeh, Google Cloud Case studies: Stitch Labs, HubSpot, JD.com Vitess at KubeCon: Vitess: Stateless Storage in the Cloud by Sugu Sougoumarane Geo-partitioning with Vitess by Deepthi Sigireddi and Jiten Vaidya How to Migrate a MySQL Database to Vitess by Sugu Sougoumarane & Morgan Tocker Gone in 60 Minutes: Migrating 20 TB from AKS to GKE in an Hour with Vitess by Derek Perkins from Nozzle Postgres support PlanetScale Announcing PlanetScale’s CNDb The name Voltron Strong Bad’s advice on naming things Jiten Vaidya and Sugu Sougoumarane on Twitter

Google Cloud Platform Podcast
End to End Java on Google Cloud with Ray Tsang

Google Cloud Platform Podcast

Play Episode Listen Later Nov 19, 2019 38:05


Mark Mirchandani hosts solo today but is later joined by fellow Googler and Developer Advocate Ray Tsang to talk Java! Ray tells us what’s new with Java 11, including more memory and fewer restrictions for developers. One of the greatest things for Ray is using Java 11 in App Engine because of the management support that it provides. Later, we talk about Spring Boot on GCP. Ray explains the many benefits of using this framework. Developers can get their projects started much more quickly, for example, and with Spring Cloud GCP, it’s easy to integrate GCP services like Spanner and run your project in the cloud. For users looking to containerize their Java projects, Jib can help you do this without having to write a Dockerfile. At the end of the show, Ray and Mark pull it all together by explaining how Spring Boot, Cloud Code, Skaffold, and proper dev-ops can work together for a seamless Java project. Ray Tsang Ray is a Developer Advocate for the Google Cloud Platform and a Java Champion. Ray works with engineering and product teams to improve Java developer productivity on GCP. He also helps Alphabet companies migrate and adopt cloud native architecture. Prior to Google, Ray worked at Red Hat, Accenture, and other consulting companies, where he focused on enterprise architecture, managed solutions delivery, and contributed to open source projects. Aside from technology, Ray enjoys traveling and adventures. Cool things of the week Cloud Run is now GA blog Budget API in Beta blog Interview App Engine site Micronaut site Quarkus site Java 11 on App Engine blog and docs Spring Boot and Spring Cloud site Spring Cloud GCP Projects site Cloud Spanner site Spring Cloud Sleuth site Stackdriver site Bootiful GCP: To Production! blog Effective Cloud Native Spring Boot on Kubernetes & Google Cloud Platform blog JDBC drivers site Hibernate ORM with Cloud Spanner docs Dev to Prod with Spring on GCP in 20 Minutes (Cloud Next ‘19) video Cloud Code site Jib site Skaffold site Debugger site Troubleshooting & Debugging Microservices in Kubernetes blog Cloud Code Quickstart docs Spring (or Java) to Kubernetes Faster and Easier blog GCP Podcast Episode 58: Java with Ray Tsang and Rajeev Dayal podcast Question of the week How do I dockerize my Java app? video github Where can you find us next? Ray is taking a break for the holidays, but in the future, you can find him at Java and JUG conferences. Mark is hanging out in the Bay Area, but Google Cloud Next in London and KubeCon and CloudNativeCon are happening now! Sound Effect Attribution “Small Group Laugh 4, 5 & 6” by Tim.Kahn of Freesound.org “Tre-Loco1” by Sonipro of Freesound.org “Mens Sincere Laughter” by Urupin of Freesound.org “Party Pack” by InspectorJ of Freesound.org “DrumRoll” by HolyGhostParty of Freesound.org “Tension” by ERH of Freesound.org
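The Dockerfile-free containerization Ray describes comes from Jib’s Maven and Gradle plugins. A minimal sketch, assuming a standard Maven project with the jib-maven-plugin (groupId com.google.cloud.tools) registered in pom.xml; the project and image names are placeholders:

# Build the container image and push it straight to a registry.
# No Dockerfile and no local Docker daemon required.
mvn compile jib:build -Dimage=gcr.io/my-project/my-app

Jib layers the app (dependencies, resources, classes) separately, so a rebuild after a code change only pushes the changed layer, which is what makes this fast in an inner dev loop.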

Kurz informiert – die IT-News des Tages von heise online
Bezahldienste, Instagram, Google Cloud Run, Mars-Lander | Kurz informiert vom 15.11.2019

Kurz informiert – die IT-News des Tages von heise online

Play Episode Listen Later Nov 15, 2019


More info on today’s topics: Payment services: https://heise.de/-4586773 Instagram: https://heise.de/-4586802 Google Cloud Run: https://heise.de/-4586837 Mars lander: https://heise.de/-4586888

Google Cloud Platform Podcast
Cloud Run GKE with Donna Malayeri

Google Cloud Platform Podcast

Play Episode Listen Later Nov 12, 2019 32:13


Jon and Aja host our guest Donna Malayeri this week to learn all about Cloud Run and Anthos! Designed to provide serverless containers, Cloud Run has two versions: fully managed and Cloud Run for Anthos. Donna’s passion for serverless projects and containers shows as we discuss how these options benefit developers and customers. With containers, developers are able to go serverless without a lot of the typical restrictions, and because they are a standard format, containers are fairly easy to learn to use. Tools such as Ko can even do the work of generating Docker containers for you. One of Cloud Run’s most distinctive features is that it allows developers to bring existing applications. You don’t have to rewrite your entire app to make it serverless! Developers can also reuse instances, making the process more efficient and cost effective. Cloud Run for Anthos allows projects to stay on-prem while still enjoying the benefits of containers and the Cloud Run platform. Later in the show, Donna tells us about Knative, which is the API Cloud Run is based on that helps create portability between Cloud Run versions, as well as portability to other vendors. We also get to hear the weirdest things she’s seen put in a container and run in Cloud Run! Donna Malayeri Donna Malayeri is a product manager for Cloud Run for Anthos. She’s worked in the serverless space since 2016 and is bullish on the future of serverless. Prior to joining Google, she was the first product manager at the Seattle startup Pulumi. She was also a product manager on the Azure Functions team at Microsoft, guiding the developer experience from its beta through the first year of general availability. Donna is passionate about creating products that developers love and has worked on programming languages such as F# and Scala. Cool things of the week Bringing Google AutoML to 3.5 million data scientists on Kaggle blog GCP Podcast has a website on dev site Command and control now easier in BigQuery with scripting and stored procedures blog Skaffold now GA blog Interview Cloud Run site Cloud Run for Anthos site Anthos site Ko site Buildpacks site Google Cloud Functions site Kubernetes site Knative site Serverless: An ops experience of a programming model? video Question of the week How do I write a Matchmaking function in OpenMatch? OpenMatch 0.8RC OpenMatch Slack OpenMatch Twitter Jon’s Twitter Where can you find us next? Donna will be at Google Cloud Next in London. Aja will also be attending Google Cloud Next in London. Jon will be at AnimeNYC, Kubecon in November and Google Kirkland for an internal hackweek. Sound Effect Attribution “Small Group Laugh 4, 5 & 6” by Tim.Kahn of Freesound.org “Anime Cat Girl” by KurireeVA of Freesound.org “Anime Sword Hit” by Syna-Max of Freesound.org “Wedding Bells” by Maurice_J_K of Freesound.org “Big Dinosaur Whirrs” by RobinHood76 of Freesound.org “Cat Purring & Meow” by SkyMary of Freesound.org
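To get a feel for the two flavors Donna compares: deploying the same container image to each is nearly the same command, with only the platform target changing. A minimal sketch, assuming an image already pushed to a registry; the project, service, and cluster names are placeholders:

# Fully managed Cloud Run:
gcloud run deploy my-service \
  --image gcr.io/my-project/my-app \
  --platform managed \
  --region us-central1

# Cloud Run for Anthos, running in your own GKE cluster:
gcloud run deploy my-service \
  --image gcr.io/my-project/my-app \
  --platform gke \
  --cluster my-cluster \
  --cluster-location us-central1-a

Because both targets speak the Knative serving API, moving a service between them is mostly a matter of redeploying with different flags.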

Serverless Chats
Episode #22: Serverless Containers on Google Cloud Run with Bret McGowen

Serverless Chats

Play Episode Listen Later Nov 11, 2019 52:56


Jeremy chats with Bret McGowen about how Google Cloud Run can make containers serverless, why Knative is important for building developer platforms, and what the future of serverless looks like at Google Cloud.

De Dataloog
DTL Google Cloud Summit Special 4 – Serverless werken met Cloud Run

De Dataloog

Play Episode Listen Later Oct 21, 2019 28:04


The new paradigm is not running apps on a cloud server. No, the new paradigm is serverless. Simply put, this means that as a data scientist you no longer have to worry about what happens behind your code. Bringing your app into production, running it, and, better still, scaling it are all handed over to the cloud. There is no server, only a framework in which your application runs. This makes data science simple. The serverless paradigm has big consequences for the roles in a data science project. Consider that ML engineering is no longer needed. And from the previous Dataloog recording with Erwin Huizinga, we had already learned that AutoML will change the role of the data scientist. In short, the serverless paradigm can have a big influence on the labor market’s demand for cloud engineers, ML engineers, and data scientists. In this Dataloog we talk with Wytze Venema of Binx.IO. He shares his view on the technology and the impact of serverless. Lex is not often excited, but secretly this time he kind of was. Or was that because Walter was at the controls this time instead of Jurjen?

Kubernetes Podcast from Google
containerd, with Derek McGowan

Kubernetes Podcast from Google

Play Episode Listen Later Sep 17, 2019 32:51


containerd was born from community desire for a core, standalone runtime to act as a piece of plumbing that applications like Kubernetes could use. It sits between command line tools like Docker, which it was spun out from, and lower-level runtimes like runC or gVisor, which execute the container’s code. This week’s guest is Derek McGowan, a Software Engineer at Docker and a containerd maintainer-d. Along with the news of the week, Adam and Craig discuss the many Vancouvers. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod Chatter of the week Vancouver, Vancouver, and George Vancouver South Bend, North Bend, and Bend Cosmopolis “50 Year Sensation: the Dave McMacken Retrospective” (album art show in Astoria, Oregon) News of the week Istio 1.3 is out Google’s Anthos now includes Anthos Service Mesh, Cloud Run for Anthos and more Cloud Native Application Bundles hit 1.0 Episode 61 with Ralph Squillace and Jeremy Rickard Nominations for the annual CNCF Community Awards Bloomberg hits 90% utilization with Kubernetes Mistakes that “cost” thousands by Gajus Kuizinas Kubernetes Edge working group publishes whitepaper Isopod, by Cruise Pulumi 1.0 5 RBAC mistakes you must avoid (number 4 will shock you) OpenShift 4.2 disconnected install Red Hat Quay 3.1 Microsoft AKS brings Scale Sets and Standard LB to GA Upstream kernel bugs Amazon EKS adds cluster tagging and IAM roles for service accounts Deep dive into AWS Fargate by Abhishek Ray from Amazon Kong introduces Kuma, “universal service mesh” Google introduces Cloud Dataproc for Kubernetes Apache Flink operator from Google Cloud Container runtime security bypasses on Falco by Mark “Antitree” Manning Rafay Systems lands $8m in Series A funding Links from the interview containerd Original announcement The many meanings of ‘container runtime’ kubelet and Container Runtime Interfaces runC, gVisor, Kata Containers, and the Windows Host Compute Service (HCS) ctr debug tool containerd’s graduation from the CNCF containerd shim API gVisor shim Firecracker containerd integration Kata Containers shim Windows Container shim rkt announced in 2014 with appc spec Open Container Initiative libcontainer, which became runC Web Assembly (WASM) BuildKit 1.3.0 releases are coming Contribution opportunities: Reporting issues Plugin ecosystem Derek McGowan and containerd on Twitter
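Because containerd ships with the ctr debug tool mentioned in the links, you can drive this plumbing layer directly, with no Docker on top. A minimal sketch, assuming a host where containerd is running and you have root privileges; the redis image and task name are just examples:

# Pull an image through containerd itself:
ctr images pull docker.io/library/redis:alpine

# Run it as a container; containerd hands execution to a
# lower-level runtime like runC through its shim API:
ctr run --rm docker.io/library/redis:alpine redis-test

# In another shell, list the running tasks:
ctr tasks ls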

Google Cloud Platform Podcast
ML with Dale Markowitz

Google Cloud Platform Podcast

Play Episode Listen Later Sep 10, 2019 30:11


On the podcast this week, we have a great interview with Google Developer Advocate Dale Markowitz. Aja Hammerly and Jon Foust are your hosts, as we talk about machine learning, its best use cases, and how developers can break into machine learning and data science. Dale talks about natural language processing as well, explaining that it’s basically the intersection of machine learning and text processing. It can be used for anything from aggregating and sorting Twitter posts about your company to sentiment analysis. For developers looking to enter the machine learning space, Dale suggests starting with non-life-threatening applications, such as labeling pictures. Next, consider the possible mistakes the application can make ahead of time to help mitigate issues. To help prevent the introduction of bias into the model, Dale suggests introducing it to as many different types of project-appropriate data sets as possible. It’s also important to continually monitor your model. Later in the show, we talk Google shop, learning about all the new features in Google Translate and AutoML. Dale Markowitz Dale Markowitz is an Applied AI Engineer and Developer Advocate for ML on Google Cloud. Before that she was a software engineer in Google Research and an engineer at the online dating site OkCupid. Cool things of the week Build a dev workflow with Cloud Code on a Pixelbook blog Feminism & Agile blog New homepage and improved collaboration features for AI Hub blog Interview TensorFlow site Natural Language API site AutoML Natural Language site Content Classification site Sentiment Analysis site Analyzing Entities site Translation API site AutoML Translate site Google Translate Glossary Documentation docs Google News Lab site AI Platform’s Data Labeling Service docs Question of the week How many different ways can you run a container on GCP? GKE Cloud Run App Engine Flexible Environment Compute Engine VM as a computer Where can you find us next? Dale will be at DevFest Minneapolis, DevFest Madison, and London NEXT. Jon will be at the internal Google Game Summit and visiting Montreal. Aja will be holding down the fort at home. Sound Effect Attribution “Mystery Peak2” by FoolBoyMedia of Freesound.org “Collect Point 00” by LittleRobotSoundFactory of Freesound.org “Cinematic Piano” by Ellary of Freesound.org
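The sentiment analysis Dale describes is a single REST call to the Natural Language API. A minimal sketch, assuming a GCP project with the API enabled and gcloud authenticated locally; the sample text is a placeholder:

# Score the sentiment of a piece of text:
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  https://language.googleapis.com/v1/documents:analyzeSentiment \
  -d '{"document": {"type": "PLAIN_TEXT", "content": "I love this podcast!"}}'

# The response includes a documentSentiment object whose score
# runs from -1.0 (negative) to 1.0 (positive).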

44BITS 팟캐스트 - 클라우드, 개발, 가젯
stdout_045.log: 서비쿠라 님 블로그 수익, 22일 깃헙 장애, Nand2Tetris, 클라우드 런 버튼 w/ Subicura, Han

44BITS 팟캐스트 - 클라우드, 개발, 가젯

Play Episode Listen Later Sep 3, 2019 64:18


In stdout.fm log #45, we talked about Subicura’s blog revenue, the GitHub outage on the 22nd, Nand2Tetris, the Cloud Run button, and more. Guests: @subicura, han Participants: @seapy, @nacyo_t, @raccoonyy Topic-by-topic listening: in preparation Subicura’s blog revenue Subicura’s Blog Docker - Google Search kubernetes - Google Search Docker Guide for Beginners - What Is Docker? - Subicura Why Use Docker (Containers) at All? - Reasons to Use Containers | 44bits.io Getting Started with Kubernetes - What Is Kubernetes? - Subicura Earn Money Online with Your Website | Google AdSense – Google Google Ads - Attract More Customers with Easy Online Advertising The GitHub outage on the 22nd GitHub Status on Twitter: “We are investigating reports of issues affecting a number of services. https://t.co/2N4cXhg6B3” GitHub Status Choosing an Executor Type - CircleCI GitHub Status Enterprise · A smarter way to work together Asynchronous stand-up meetings in Slack | Geekbot #1 Slack Standup Bot for Project Management Automation Pull Panda: Tools for teams using GitHub GitHub website fingerprint authentication support GitHub supports Web Authentication (WebAuthn) for security keys - The GitHub Blog Web Authentication: An API for accessing Public Key Credentials Level 1 Download Firefox - a free web browser - Mozilla piroor/treestyletab: Tree Style Tab, Show tabs like a tree. Firefox Multi-Account Containers –

Learn With Us
#32- Intermission: Nikos ventures in Tech Land

Learn With Us

Play Episode Listen Later Aug 11, 2019 4:25


The tech Discord communities I've been building that I mentioned in the podcast: Tensorflow: https://discord.gg/KNm5Epj Google Cloud: https://discord.gg/9yGuxRV Visual Studio: https://discord.gg/ca72qgM

Computas AS
Podcast: Serverless with Knative and .Net apps on Google Cloud

Computas AS

Play Episode Listen Later Jun 27, 2019 11:30


Want to know more about serverless and running your .NET applications on Google Cloud Platform? Last week at NDC Conferences, we had a chat with Mete Atamel from Google about Cloud Run, Knative, AppEngine and containers. Enjoy!

Google Cloud Platform Podcast
Human-Centered AI with Di Dang

Google Cloud Platform Podcast

Play Episode Listen Later May 7, 2019 38:04


Mark Mirchandani and Michelle Casbon take over the show this week to discuss AI and the PAIR Guidebook to Human-Centered AI. Mark Mandel pops in on the interview, and Di Dang, Design Advocate at Google, talks about her role in designing and building the guidebook with the intent of helping others create quality AI projects. Di describes human-centered AI as a practice of not only being conscious of the project being built, but also considering how this AI project will impact us as humans at the end of the day. We influence machine learning so much, both intentionally and unintentionally, and it’s our job to look at the project and results as a whole. In the guidebook, topics like data bias in machine learning, what design patterns work, how to establish trust with the user, and more are addressed. Di explains that the guidebook is a work in progress that will develop with input from users and advances in technology. Di Dang Di Dang recently joined Google’s Design Relations team as a Design Advocate supporting emerging technologies such as augmented reality and machine learning. Previously, she worked as a Senior UX Designer and led the Emerging Tech group at Seattle-based digital agency POP, advising clients on how VR/AR, web/mobile, conversational UI, and machine learning could benefit their end users. With a degree in Philosophy and Religion, she considers herself an optimistic realist who is passionate about ethical design. You can find Di onstage doing improv or on Twitter @dqpdang. Cool things of the week Bringing the best of open source to Google Cloud customers blog James Ward’s Cloud Run button site Michelle’s favorite codelabs from I/O TPU-speed data pipelines site Your first Keras model site Convolutional neural networks site Modern convnets, squeezenet, with Keras with TPUs site Interview People + AI Guidebook site PAIR site GCP Podcast Episode 114: Machine Learning Bias and Fairness with Timnit Gebru and Margaret Mitchell podcast Machine Learning Crash Course site Google Clips site Google Brain Team site Question of the week How do I get started with practical AI? Build an Appointment Scheduler Chatbot with Dialogflow Where can you find us next? Michelle will be at Google I/O and Kubecon Europe. No I/O event in your area? You can host one!

44BITS 팟캐스트 - 클라우드, 개발, 가젯
stdout_028.log: 다크룸, 애플 분기 실적 발표, 인터넷전문은행 사용성 w/ ecleya

44BITS 팟캐스트 - 클라우드, 개발, 가젯

Play Episode Listen Later May 2, 2019 87:55


In stdout.fm log #28, we talked about Darkroom, Apple’s quarterly earnings announcement, the usability of internet-only banks, and more. Participants: @seapy, @nacyo_t, @raccoonyy Guest: @ecleya We covered the Node.js 12 release, which is not currently an LTS version: version 12 was released as Current on April 23, 2019, and is scheduled to become Active LTS on October 22, 2019. stdout.fm : Audioclip stdout.fm recurring support - Patreon Ubuntu 14.04 Trusty Tahr LTS support ends as of April 30 | 44bits.io Google announces Cloud Run, a Docker container based serverless service | 44bits.io AWS Systems Manager Parameter Store improvements announced: advanced parameters added and increased API throughput | 44bits.io April 25: Docker Hub account data leak; password changes and other measures required | 44bits.io AWS RDS for PostgreSQL adds support for importing data from S3 | 44bits.io Amazon Web Services (AWS) adds a new Hong Kong region (ap-east-1) in Asia Pacific | 44bits.io CNCF announces the graduation of the Fluentd project | 44bits.io Ubuntu release cycle | Ubuntu Ubuntu Advantage Releases - Ubuntu Wiki Nintendo pulls Switch eShop game after dev reveals he sneaked in basic code editor “Easter egg” • Eurogamer.net Amir Rajan: “Dear Ruby devs and game devs. I have a crazy anno…” - Ruby.social tootsuite/mastodon: Your self-hosted, globally interconnected microblogging community Building a game for the Nintendo Switch using Ruby - RubyKaigi 2019 2011 PlayStation Network outage - Wikipedia The Nonghyup computer network paralysis incident - Wikipedia Apple Watch Series 4 - Apple Apple Watch Hermès - Apple Music - Apple (KR) ruby.social - Ruby.social Apple announces Q2 results - Apple (KR) Q2 FY19 Consolidated Financial Statements.pdf AAPL hits $1 trillion market cap, Apple first publicly traded company to reach 13 digit valuation - 9to5Mac Apple hits $1 trillion market cap once again following better-than-expected earnings results, stock up 6% - 9to5Mac Counterpoint: iPhone X was the best-selling device of 2018 - Gizchina.com Mac Pro - Wikipedia Microsoft OneDrive Apple’s Plan to Buy $75 Billion of Its Stock Fuels Spending Debate - The New York Times Regulators: “We can’t conclude Toss is financial capital”… a variable in internet-only bank licensing - Maeil Business K bank KakaoBank The power of user experience (UX): the KakaoBank mobile app | Daum News KakaoBank blog :: Why doesn’t KakaoBank use public key certificates? KakaoBank’s 26-week installment savings passes 300,000 accounts 20 days after launch | Hankyung Toss launches ‘Toss Card’, supporting online and offline payments Manage my money even when I’m not paying attention | BankSalad Controversy over jeonse loan holders having credit card applications denied and limits reduced - Money Today News Whooing household ledger (whooing) Budget Tracker & Planner | Free Online Money Management | Mint Bancassurance - Wikipedia Kakao applies for KakaoBank majority shareholder eligibility review - Maeil Business All-in on the cloud: Korean Air, “every employee will learn AWS” - Byline Network

44BITS 팟캐스트 - 클라우드, 개발, 가젯
stdout_027.log: 샌프란시스코 여행, Google Cloud Next 2019 w/ subicura

44BITS 팟캐스트 - 클라우드, 개발, 가젯

Play Episode Listen Later Apr 24, 2019 85:29


In stdout.fm log #27, we were joined by @subicura and talked about a trip to San Francisco and his Google Cloud Next attendance report. Participants: @seapy, @nacyo_t, @raccoonyy Guest: @subicura RubyKaigi 2020 Nagoya Airport - route to Matsumoto Station - Google Maps Muan International Airport - Wikipedia Seocho.rb first meetup: Serverless Ruby | Festa! Subicura’s Blog Docker Guide for Beginners - What Is Docker? Google Cloud Next ’19 | April 9-11 | San Francisco Moscone Center - Google Maps Argonaut Hotel - A Noble House Hotel deals, 2019 hotel booking (San Francisco, USA) recommendations | Hotels.com stdout_003.log: GitHub Universe, HashiConf w/ @Outsideris | developer podcast stdout.fm Apple Park Visitor Center - Apple Buy iPad mini - Apple (KR) Samsung Galaxy Fold Non-Review: We Are Not Your Beta Testers - WSJ Googleplex - Google Maps Android lawn statues - Wikipedia MPK 12, facebook hq building - Google Maps Aladin: Chaos Monkeys - seizing opportunity in an age of chaos Trust, but Verify: What Facebook’s Electronics Vending Machines Say About the Company - The Atlantic Chrome Remote Desktop - extension (Japanese) Drecom is participating in RubyKaigi 2019 as a yatai sponsor this year! - Tech Inside Drecom #rubykaraoke - Twitter Search / Twitter Jeff Bezos and Robert Downey Jr. will headline re:MARS fest in Vegas – GeekWire Google I/O Viewing Party 2019 | Festa! Google - Site Reliability Engineering Home | OCI Micronaut | OCI Micronaut Framework on Twitter: “Love it when we run into fellow #micronautfw enthusiasts at events! @subicura … LogRocket | Logging and Session Replay for JavaScript Apps stdout_016.log: the government’s SNI-based internet access blocking w/ han | developer podcast stdout.fm Many popular iPhone apps secretly record your screen without asking | TechCrunch Continuous Integration and Delivery - CircleCI HashiCorp: Multi-Cloud Management, Security, Automation Anthos | Anthos | Google Cloud Canalys Newsroom- Cloud market share Q4 2018 and full year 2018 Announcing the AWS China (Beijing) Region Google Cloud announces new regions in Seoul and Salt Lake City | Google Cloud Blog Apple’s HomePod delayed until next year - The Verge Apple cancels AirPower wireless charger - The Verge BigQuery - analytics data warehouse | BigQuery | Google Cloud Amazon Athena – serverless interactive query service – AWS AWS CloudTrail – Amazon Web Services Data partitioning - Amazon Athena Bringing the best of open source to Google Cloud customers | Google Cloud Blog Memorystore | Google Cloud Cloud Code | Google Cloud AWS Toolkit for Visual Studio Code Atom Cloud Run | Google Cloud Cloud Functions - event-driven serverless computing | Cloud Functions | Google Cloud AWS Fargate – run containers without managing servers or clusters Pricing | Cloud Run | Google Cloud API management | Apigee | Google Cloud Outsider on Twitter: “When gathering articles for the newsletter, there aren’t many written in Korean …” Outsider’s Dev Story - Newsletter itcle - page load error BigQuery - analytics data warehouse | BigQuery | Google Cloud Google announces new AI, smart analytics tools | ZDNet

Google Cloud Platform Podcast
Cloud Run with Steren Giannini and Ryan Gregg

Google Cloud Platform Podcast

Play Episode Listen Later Apr 16, 2019 32:32


Mark Mirchandani is our Mark this week, joining new host Michelle Casbon in a recap of their favorite things at Next! The main story this episode is Cloud Run, and Gabi and Mark met up with Steren Giannini and Ryan Gregg at Cloud Next to learn more about it. Announced at Next, Cloud Run brings serverless to containers! It offers great options and security, and the client only pays for what they use. With containers, developers can use any language, any library, any software, anything! Two versions of Cloud Run were released last week. Cloud Run is the fully managed, hosted service for running serverless containers. The second version, Cloud Run on GKE, provides a lot of the same benefits, but runs the compute inside your Kubernetes cluster. It’s easy to move between the two if your needs change as well. Steren Giannini Steren is a Product Manager in the Google Cloud Platform serverless team. He graduated from École Centrale Lyon, France and then was CTO of a startup that created mobile and multi-device solutions. After joining Google, Steren managed Stackdriver Error Reporting, Node.js on App Engine, and Cloud Run. Ryan Gregg Ryan is a product manager at Google, working on Knative and Cloud Run. He has over 15 years of experience working with developers on building and extending platforms and is passionate about great documentation and reducing developer toil. After more than a decade of working on enterprise software platforms and cloud solutions at Microsoft, he joined Google to work on Knative and building great new experiences for serverless and Kubernetes. Cool things of the week News to build on: 122+ announcements from Google Cloud Next ‘19 blog Mark’s Favorite Announcement: Network service tiers site Michelle’s Favorite Announcements: Cloud Code site Cloud SQL for Postgres now supports v11 release notes Cloud Data Fusion for visual code-free ETL pipelines site Cloud AI Platform site AutoML Natural Language site Google Voice for G Suite blog Hangouts Chat in Gmail site Kubeflow v0.5.0 release site Interview Cloud Run site Knative site Knative Docs site Firestore site App Engine site Cloud Functions site GKE site Cloud Run on GKE site Understanding cluster resource usage site Docker site Cloud Build site Gitlab site Buildpacks site Jib (Java Image Builder) site Pub/Sub site Cloud VPC site Google Cloud Next ‘19 All Sessions videos Question of the week If I want to try out Cloud Run, how do I get started? Get started with the beta version by logging in site Quicklinks site Codelab site Where can you find us next? Gabi is at PyTexas Jon and Mark Mandel are at East Coast Game Conference Michelle & Mark Mirchandani will be at Google IO in May Michelle will be at Kubecon Barcelona in May
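One detail worth knowing before trying the Question of the Week yourself: the main contract a Cloud Run container has to honor is listening for HTTP on the port given in the PORT environment variable. A minimal sketch in Go, with a placeholder greeting:

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Cloud Run injects PORT at runtime; default to 8080 for local testing.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from Cloud Run!") // placeholder response
	})
	log.Fatal(http.ListenAndServe(":"+port, nil))
}

Containerize that with whatever you like (Cloud Build, Docker, or Jib for Java), and the codelab linked above takes the deploy step from there.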

Google Cloud Platform Podcast
Next 2019 Day 3

Google Cloud Platform Podcast

Play Episode Listen Later Apr 11, 2019 20:43


Welcome to day three of Next! More awesome interviews await in this episode, as hosts Mark Mirchandani, Aja Hammerly, Mark Mandel, Jon Foust and their guests explore more of Next. To start, Dan of Viacom joins Mark and Jon to talk about his job in the TV business and why he loves Istio. Host-turned-guest Aja and Lauren of the Developer Relations team sat in the booth to talk with the Marks about the developer keynote at Next. Aja and Lauren elaborate on how they work to promote Next and put together content inclusive of all aspects of Google Cloud. Mark and Mark hear how Yuri from Scotiabank is using Kubernetes to help advance Scotiabank’s latest projects. Anthony from Google joins the conversation, too. And lastly, we tease you with a short interview with Andrew of MongoDB to speak more on the partnership between MongoDB Atlas and Google Cloud. Andrew will be joining us for a full interview on the podcast later this year! Interviews Cloud Next site Next On Air site Google Cloud Next ‘19: Day 3 Run Channel video Google Cloud Next ‘19: Day 3 Build Channel video Google Cloud Next ‘19: Day 3 Collaborate Channel video Day 3 at Next ‘19: A look back at an amazing week blog Playlist: All Sessions - Google Cloud Next ‘19 videos Viacom site How Viacom modernized its Intelligent Content Discovery Platform with Google Cloud blog GKE site Anthos site Istio site Developer Keynote: Get to the Fun Part (Cloud Next ‘19) video Jenkins site Slack site Cloud Run site Announcing Cloud Run, the newest member of our serverless compute stack blog GCP Podcast Episode 167: World Pi Day with Emma Haruka Iwao podcast Dev Zone Walkthrough (Cloud Next ‘19) video Dev Zone Experiment Pizza Authenticator (Cloud Next ‘19) video Scotiabank site Kubernetes site Google Cloud Next ‘19: Day 2 Product Innovation Keynote (Justin Arbuckle at 25:23) video Securing Kubernetes Secrets (Cloud Next ‘19) video MongoDB site MongoDB Atlas site Where can you find us next? The GCP Podcast will be back to its regular schedule next week!

Gestalt IT Rundown
Run, Google Cloud Run | Gestalt IT Rundown: April 10, 2019

Gestalt IT Rundown

Play Episode Listen Later Apr 10, 2019 25:02


This week on the Gestalt IT Rundown, Tom Hollingsworth and Rich Stroffolino discussed the news from Google Cloud Next, the HPE-Nutanix partnership, AWS playing dirty, and more.

Music: “New Friendly” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/

Google Cloud Platform Podcast
Next 2019 Day 2

Google Cloud Platform Podcast

Play Episode Listen Later Apr 10, 2019 19:08


The podcast celebrates day two of Next as our hosts speak with some more conference attendees. Andre came by to talk with Aja and Jon about his work with Stackdriver IRM and its mission of fewer, shorter, and smaller outages. We had three hosts in the booth with guest Anne, who works on the GCP Trust and Security product team. Brian, Mark, and Aja find out exactly what Anne does at GCP and how she’s enjoying Next! Brian and Mark also met up with Mario, who came all the way from Munich, Germany. Mario runs the cloud community in his hometown, and he shared his thoughts on Anthos and what he’s excited about at Next. Last but not least, Valentin stopped by to talk with Mark and Jon about Go and the presentation he’s giving at Next on site performance.

Interviews
Cloud Next site
Next On Air site
Google Cloud Next ‘19: Day 2 Run Channel video
Google Cloud Next ‘19: Day 2 Build Channel video
Stackdriver site
Stackdriver Incident Response and Management site
Stackdriver Incident Response and Management documentation docs
Data Management: The New Best Practice for Incident Response (Cloud Next ‘19) video
Stackdriver Profiler site
GKE site
Increasing trust in Google Cloud: visibility, control and automation blog
GKE Sandbox site
gVisor site
Hybrid Cloud Sessions - Google Cloud Next ‘19 videos
Google Cloud Next ‘19: Day 1 Secure Channel video
Google Cloud Next ‘19: Day 2 Secure Channel video
Anthos site
Meet Anthos! (Cloud Next ‘19) video
Introducing Anthos: An entirely new platform for managing applications in today’s multi-cloud world blog
Cloud SQL site
Making Google Cloud the best place to run your Microsoft Windows applications blog
How to Migrate Windows Workloads to Google Cloud (Cloud Next ‘19) video
Qwiklabs site
Dev.to site
Go site
Go Tools site
Cloud Run site
Announcing Cloud Run, the newest member of our serverless compute stack blog

Where can you find us next?
We’re at Next this week! Stop by and say hi!

Techmeme Ride Home
Tue. 04/09 - Google Cloud Next Announcements

Techmeme Ride Home

Play Episode Listen Later Apr 9, 2019 17:05


The headlines from the Google Cloud Next conference, China seeks to ban cryptominers, why ads on streaming services are the new hotness, and also, why Roblox is the new hotness.

Sponsors:
Tiny.website
GetQuip.com/ride

Links:
Google Cloud announces 7 open source partners, Seoul and Salt Lake City regions (VentureBeat)
Google’s hybrid cloud platform is coming to AWS and Azure (TechCrunch)
Google announces Cloud Run for open and portable serverless compute (VentureBeat)
China, home to the world’s biggest cryptocurrency mining farms, now wants to ban them completely (South China Morning Post)
To cut down on spam, Twitter cuts the number of accounts you can follow per day (TechCrunch)
The next front of the streaming wars is the battle for ad-supported programming (Digiday)
Roblox hits milestone of 90M monthly active users (TechCrunch)
Why Are People Still Playing Google’s Halloween Ghost Game in April? (Slate)

Subscribe to the ad-free feed! Right in your podcast app! No muss! No fuss!

Kubernetes Podcast from Google
Anthos Migrate, with Issy Ben-Shaul

Kubernetes Podcast from Google

Play Episode Listen Later Apr 9, 2019 32:19


Anthos (previously known as Cloud Services Platform) has just gone GA at Google Cloud Next. One of its new features is Anthos Migrate, a tool for migrating monolithic apps directly to containers. Issy Ben-Shaul is a Director of Software Engineering at Google Cloud and led the team building Anthos Migrate. He talks to Craig and Adam about it.

Do you have something cool to share? Some questions? Let us know:
web: kubernetespodcast.com
mail: kubernetespodcast@google.com
twitter: @kubernetespod

Chatter of the week
Adam enjoyed:
Umbrella Academy on TV
Hearthstone on the computers
Jarrod Alonge in his ears
Death metal in water wings
Craig enjoyed:
Saturday 2/3 of “On The Basis Of Sex”

News of the week
Anthos from Google Cloud
Launch announcement
Launch keynote
What’s in the box?
Cloud Run and Cloud Run on GKE
Project Eirini updates bring Kubernetes to Cloud Foundry
OPA graduates to the CNCF incubator
CRI-O joins the CNCF incubator
Buildpacks: defense against the Docker arts
Local persistent volumes in Kubernetes - PVC pipes!
Istio 1.1.2 out for the Envoy CVEs:
NUL parsing bug
Path normalisation bug
Why Rainforest moved from Heroku to GKE
Enabling RBAC for Groups in GKE
Metalkube

Runified Podcast
Ep.050: Andy Cloud - Run Grow Serve

Runified Podcast

Play Episode Listen Later May 22, 2017 44:43


Andy Cloud is a 39-year-old family man from Carthage, MO. When he’s not busy with his job, family, church, or other hobbies, he can be found running around southwest Missouri, competing in everything from 5Ks to ultramarathons, but this wasn’t always the case. About five years ago, Andy was very overweight, pre-diabetic, and had other health issues such as sleep apnea. He realized that he needed to change something. Through his faith, his community, and inspiration from the 2013 Boston Marathon, Andy found the courage and discipline to improve his lifestyle. Andy shares how changes in his eating habits, his exercise, and his connections with other people enabled him to make this significant change in his life, losing over 130 pounds. He encourages our listeners to lean on others and their community, reminding us that running and life are hard and we don’t need to do them alone. Andy gives back to the running community by co-hosting the Heartland Running Podcast and serving as race director for the Rallye 66 5K and Ranger Run benefiting Peterson Outdoor Ministries.

This episode is hosted by Matt Sorenson and is sponsored by goodr. Use code “runified2017” at playgoodr.com for a special Runified discount and to help support Runified and our running podcast!

Runified’s Website, Instagram, Facebook, Twitter
Heartland Running Website, Heartland Running Facebook, Instagram, YouTube, Rallye 66 5K and Ranger Run, Peterson Outdoor Ministries