Podcasts about cloud code

  • 18 podcasts
  • 24 episodes
  • 37m average duration
  • 1 new episode per month
  • Latest episode: May 26, 2025

POPULARITY

[Popularity chart covering 2017 to 2024]


Best podcasts about cloud code

Latest podcast episodes about cloud code

En.Digital Podcast
La Tertul-IA #52: You no longer write the code yourself (news from OpenAI, Google, Meta and more)


Released May 26, 2025 · 67:09


May 2025 will be remembered as the month artificial intelligence became impossible to keep up with. In this special roundtable, Lu once again welcomes Javi Santos, AI hacker and tireless experimenter, to work through an avalanche of announcements: OpenAI, Google, Meta, Anthropic and many startups have rolled out their entire arsenal.

Pi Tech
News: the philosophy of morality and objectivity, how to simplify the work of coding agents, and why is there so little AI in our lives?


Released Mar 12, 2025 · 54:39


The question of AGI has been discussed for a long time, but have we really come closer to creating an artificial intelligence capable of thinking for itself? In this episode, our hosts Pavlo Dmytriiev, Mykhailo Hirniak and Yevhen Moskvita analyze the development of AI and its impact on society, business and technology. The question of AI objectivity remains open: is it possible to create algorithms free of human biases? We also talk about how companies integrate AI into their products and whether Apple Intelligence really opens a new era of technology. Particular attention is paid to automating everyday tasks, voice assistants, and the future of AI solutions in business.

00:24 — modeling human intelligence through LLMs
07:04 — today's artificial intelligence models
14:20 — features of the new models
23:24 — Cloud Code: new possibilities for developers
34:15 — problems with artificial intelligence
40:14 — Apple Intelligence and its limitations
44:05 — generating infographics with AI
46:50 — the development and integration of artificial intelligence

Glitch
015 - [Interview] Secrets of the Sky: Cracking the Cloud Code


Released Nov 15, 2024 · 58:44


Les Cast Codeurs Podcast
LCC 296 - Interview Google IA IA I/O 2023


Released May 25, 2023 · 104:45


In this episode, Antonio, Emmanuel and Guillaume go over the new features and announcements from Google I/O 2023: new Pixel phones that fold (or don't), and above all artificial intelligence from floor to ceiling! Whether in Android, Google Workspace or Google Cloud, a ton of products are getting an AI supercharge. Guillaume, Antonio and Emmanuel also discuss the impact they see AI having, how Large Language Models are refined and why they are made to hallucinate, and the subtleties of sign language. Recorded May 23, 2023. Episode download: LesCastCodeurs-Episode-296.mp3

Google I/O 2023
Website: https://io.google/2023/
Main keynote: https://io.google/2023/program/396cd2d5-9fe1-4725-a3dc-c01bb2e2f38a/
Developer keynote: https://io.google/2023/program/9fe491dd-cadc-4e03-b084-f75e695993ea/
10-minute video summary of all the announcements: https://www.youtube.com/watch?v=QpBTM0GO6xI&list=TLGGCy91ScdjTPYxNjA1MjAyMw
All the technical sessions on video: https://io.google/2023/program/?q=technical-session
Google I/O was held ten days ago in California, in the Shoreline amphitheater near the Google campus. Only 2,000 people attended on site, with a chat and an online game for remote attendees. The online game, I/O Flip, was built with Flutter, Dart, Firebase and Cloud Run, with all the graphic assets generated by generative AI: https://blog.google/technology/ai/google-card-game-io-flip-ai/

An eyeful of Pixels!
Details on the design of the new devices: https://blog.google/products/pixel/google-pixel-fold-tablet-7a-design/

Pixel Fold
Article: https://blog.google/products/pixel/google-pixel-fold/
Google's first foldable phone (after Samsung and Oppo). A screen on the front and a large foldable screen inside. Handy for translation: a conversation can be shown in one language on one screen and in the other language on the other. Creative uses of the fold: a "laptop" mode, selfies, propping up the device for night shots. On the other hand… not available in France, and still almost €1,900!

Pixel Tablet
Article: https://blog.google/products/pixel/google-pixel-tablet/
A nice 11-inch tablet with a charging dock that includes a built-in speaker. Tensor G2 processor, built-in Chromecast. A bit like the Google Nest Hub Max, but with a detachable screen. A practical case with a built-in kickstand that doesn't get in the way of charging the tablet on the dock. Docked, it works like the Google Home App screen; undock it and it switches to multi-user mode, each person with their own profile.

Pixel 7a
Article: https://blog.google/products/pixel/pixel-7a-io-2023/
6-inch screen. Triple camera (wide-angle, main, and a front camera for selfies). €509. Magic Eraser to remove unwanted elements from a photo, Magic Unblur to sharpen a blurry photo, Real Tone to render darker skin tones more naturally.

Android
What's new in Android: https://blog.google/products/android/android-updates-io-2023/
In Messages, Magic Compose helps draft your messages in different styles (more professional, more fun, in the style of Shakespeare). Android 14 should arrive later in the year, with more customization options (generative-AI wallpapers, emoji wallpapers, matching color schemes, 3D wallpapers built from your photos): https://blog.google/products/android/new-android-features-generative-ai/
StudioBot: a chatbot built into Android Studio to help with Android app development: https://io.google/2023/program/d94e89c5-1efa-4ab2-a13a-d61c5eb4e49c/
800 million users have moved to RCS for messaging.
50 Android apps adapted for foldables: https://blog.google/products/android/android-app-redesign-tablet-foldable/
Wear OS 4 will add backup and restore when you change watches, among other new features: https://blog.google/products/wear-os/wear-os-update-google-io-2023/
800 free TV channels in Google TV on Android and in the car.
Android Auto will be available in 200 million cars: https://blog.google/products/android/android-auto-new-features-google-io-2023/
Waze is available globally on the Play Store in every car with Android Auto.

Google Maps
Article: https://blog.google/products/maps/google-maps-updates-io-2023/
Maps serves 20 billion km of directions every day. Immersive View for Routes in 15 cities: Amsterdam, Berlin, Dublin, Florence, Las Vegas, London, Los Angeles, Miami, New York, Paris, San Francisco, San Jose, Seattle, Tokyo and Venice. Developers can plug in and add 3D augmentations and markers.

Google Photos
Magic Editor article: https://blog.google/products/photos/google-photos-magic-editor-pixel-io-2023/
An AI-supercharged Magic Editor to improve photos: moving people, filling in cropped-off areas, or making the sky prettier. Possibly limited to Pixel phones at first.

Experimental projects
Project Starline (a screen with a 3D camera that renders the person you are talking to in 3D, as if they were sitting across from you) has been improved to take up less space: https://blog.google/technology/research/project-starline-prototype/
Universal Translator: a new experiment in automatic dubbing and translation, with lip-sync.
Project Tailwind: a sort of notebook into which you can pull all your documents from Drive, then ask questions about their content, request summaries, or brainstorm on those topics: https://thoughtful.sandbox.google.com/about
MusicLM: a large language model that generates music from a text prompt (sign up on the waitlist): https://blog.google/technology/ai/musiclm-google-ai-test-kitchen/
Project Gameface: controlling a mouse and computer with facial expressions, for people who have lost mobility: https://blog.google/technology/ai/google-project-gameface/
VisualBlocks: a drag-and-drop interface for experimenting with model development for TensorFlow Lite and JS: https://visualblocks.withgoogle.com/
MakerSuite: for tinkerers and developers: https://makersuite.google.com/ and https://developers.googleblog.com/2023/05/palm-api-and-makersuite-moving-into-public-preview.html

Search Labs
Article: https://blog.google/products/search/generative-ai-search/
Experiments bringing generative AI into Google Search: queries phrased as more complex sentences, with Bard-style answers, links, and suggestions for related searches. But also better-targeted ads. You can sign up for Search Labs to try the new experience, initially in English only and in the US only. Integrations with Google Shopping to suggest and filter products matching the query. Visual search with Google Lens: 12 billion visual searches per month.

PaLM and Bard
Announcement of the PaLM 2 LLM, used in Bard and in Google Cloud: https://blog.google/technology/ai/google-palm-2-ai-large-language-model/
PaLM 2 is being integrated into 25 Google products. It will support 100 languages (for now only English, Japanese and Korean, with the 40 most widely spoken languages by the end of the year). Now available in 180 countries… except Europe! Improved reasoning ability. It can code in some twenty programming languages, including Groovy. Several model sizes: Gecko, Otter, Bison and Unicorn, though the parameter counts are not disclosed, as with OpenAI's GPT-4. Usable for one-off queries and for chat. Fine-tuned derivative models: Med-PaLM 2 for medical knowledge and visual analysis of X-rays, and Sec-PaLM, trained on cybersecurity use cases to help detect malicious scripts and attack vectors. Sundar Pichai also announced that Google is already working on the next evolution of its LLMs, a model called Gemini. Few details so far, except that it will be multimodal (combined image-and-text search, for example). Partnership with Adobe to integrate Adobe Firefly into Bard for image generation: https://blog.adobe.com/en/publish/2023/05/10/adobe-firefly-adobe-express-google-bard

Duet AI for Google Workspace
Article: https://workspace.google.com/blog/product-announcements/duet-ai
In Gmail and Docs, Duet AI helps you draft emails and documents: an extension of "smart compose" that can generate entire emails, improve the style, fix the grammar, and avoid repetition. In Docs, new "smart chips" for adding variables and templates. In Slides, AI-generated images. Prompts in Sheets to generate a draft table. In Google Meet, the ability to create a custom background image with generative AI. These improvements are part of Workspace Labs, which has a waitlist: https://workspace.google.com/labs-sign-up/

Google Cloud
Generative AI integrated everywhere: https://cloud.google.com/blog/products/ai-machine-learning/google-cloud-launches-new-ai-models-opens-generative-ai-studio
New A3 VMs with Nvidia H100 GPUs, ideal for training machine-learning models, with 26 exaFlops of performance: https://cloud.google.com/blog/products/compute/introducing-a3-supercomputers-with-nvidia-h100-gpus
Three new LLMs in Vertex AI: Imagen (private preview) for image generation, Codey for code generation, and Chirp for speech, supporting 100 languages with 2 billion vocal parameters.
Model Garden: machine-learning models, including external and open-source ones.
Embeddings added for text and images.
RLHF (Reinforcement Learning from Human Feedback) will soon extend Vertex AI tuning and prompt design with a human feedback loop.
Generative AI Studio for testing zero-shot, one-shot and multi-shot prompts.
Duet AI for Google Cloud: https://cloud.google.com/blog/products/application-modernization/introducing-duet-ai-for-google-cloud
Code assistance in VS Code (and soon the JetBrains IDEs) via the Cloud Code plugin, and in Cloud Workstations. A chat built into the IDEs acts as a companion for discussing architecture or finding the commands to run for your project. Codey's code model covers some twenty programming languages, and a fine-tuned variant trained on all of the Google Cloud documentation can help in particular with the Google Cloud APIs and the gcloud command line. Duet AI is also in AppSheet, the low/no-code platform, where you will be able to generate an AppSheet application by chatting with a chatbot.

What's new in Firebase: https://firebase.blog/posts/2023/05/whats-new-at-google-io

Web
Article: https://developers.googleblog.com/2023/05/io23-developer-keynote-recap.html
Flutter 3.10 and Dart 3: https://io.google/2023/program/7a253260-3941-470b-8a4d-4253af000119/
WebAssembly: https://io.google/2023/program/1d176349-7cf8-4b51-b816-a90fc9d7d479/
WebGPU: https://io.google/2023/program/0da196f5-5169-43ff-91db-8762e2c424a2/
Baseline: https://io.google/2023/program/528a223c-a3d6-46c5-84e4-88af2cf62670/ and https://web.dev/baseline/

Contact us
To react to this episode, come discuss on the Google group: https://groups.google.com/group/lescastcodeurs
Reach us on Twitter: https://twitter.com/lescastcodeurs
Submit a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon: https://www.patreon.com/LesCastCodeurs
All the episodes and all the info at https://lescastcodeurs.com/

Screaming in the Cloud
Doing What You Love in Cloud with Nate Avery


Released May 11, 2023 · 33:15


Nate Avery, Outbound Product Manager at Google, joins Corey on Screaming in the Cloud to discuss what it's like working in the world of tech, including the implications of AI technology on the workforce and the importance of doing what you love. Nate explains why he feels human ingenuity is so important in the age of AI, as well as why he feels AI will make humans better at the things they do. Nate and Corey also discuss the changing landscape of tech and development jobs, and why it's important to help others throughout your career while doing something you love.

About Nate
Nate is an Outbound Product Manager at Google Cloud focused on our DevOps tools. Prior to this, Nate has 20 years of experience designing, planning, and implementing complex systems integrating custom-built and COTS applications. Throughout his career, he has managed diverse teams dedicated to meeting customer goals. With a background as a manager, engineer, sysadmin, and DBA, Nate is currently working on ways to better build and use virtualized computer resources in both internal and external cloud environments. Nate was also named a Cisco Champion for Datacenter in 2015.

Links Referenced:
Google Cloud: https://cloud.google.com/devops
Not Your Dad's IT: http://www.notyourdadsit.com/
Twitter: https://twitter.com/nathaniel_avery
LinkedIn: https://www.linkedin.com/in/nathaniel-avery-2a43574/

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: It's easy to **BEEP** up on AWS. Especially when you're managing your cloud environment on your own! Mission Cloud un-**BEEP**s your apps and servers. Whatever you need in AWS, we can do it.
Head to missioncloud.com for the AWS expertise you need.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and my guest today is Nate Avery, who's an outbound product manager over at Google Cloud. Nate, thank you for joining me.

Nate: Thank you for having me. This is really a pretty high honor. I'm super thrilled to be here.

Corey: One of the questions that I have about any large company, when I start talking to them and getting to know people who work over there, pretty quickly emerges, which is: "What's the deal with your job title?" And it really doesn't matter what the person does or what the company is, there's always this strange nuance that tends to wind up creeping into the company. What is an outbound product manager, and what is it you say it is you do here?

Nate: Okay. That's an interesting question, because I've been here for about a year now and I think I'm finally starting to figure it out. Sure, I should have known more when I applied for the job, [laugh] but there's what's on the paper and then there's what you do in reality. And so, what it appears to be, where I'm taking this thing now, is I talk to folks about our products and I try to figure out what it is they like, what it is they don't like, and then how do we make it better? I take that information back to our engineers, we huddle up, and we figure out what we can do, how to do it better, and how to set the appropriate targets when it comes to our roadmaps. We look at others in the industry, where we are, where they are, where we think we can maybe have an advantage, and then we try to make it happen. That's really what it is.

Corey: One of the strange things that happens at big companies, at least from my perspective, given that I've spent most of my career in small ones, is that everyone has a niche. There are very few people at large companies whose job description is "yeah, I basically do everything." Where do you start?
And where do you stop? Because Google Cloud, even bounding it to that business unit, is kind of enormous. You've [got 00:02:47] products that are outbound that you manage. And I feel like I should also call out that a product being outbound is not the same thing as being outgoing. I know that people are always wondering what Google is going to turn off next, but Google Cloud mostly does the right thing in that respect. Good work.

Nate: [laugh]. Nice. So, the products I focus on are the DevOps products. Those are Cloud Build, Cloud Deploy, Artifact Registry, Artifact Analysis. I also work with some of our other dev tooling such as Cloud Workstations. That's in public preview right now, but maybe by the time this goes to air, it'll actually be in general availability. And then I also will talk about some of our other lesser-known tools like Skaffold, or maybe on occasion I'll throw out something about minikube. And also Cloud Code, which is a really deep browser plugin for your IDE that gives you access to lots of different Google tools. So yeah, that's sort of my area.

Corey: Well, I'm going to start with the last thing you mentioned, where you have Cloud Code as IDE tooling and a plug-in for it. I'm relatively new to the world of IDEs because I come from the world of grumpy Unix admins; you never know what you're going to be remoting into next, but it's got vi on it, so worst case, you'll have that. So, I grew up using that, and as a result, that is still my default. I've been drifting toward VS Code a fair bit lately, as I've been regrettably learning JavaScript and TypeScript, just because having a lot of those niceties is great. But what's really transformative for me has been a lot of the generative AI offerings from various companies around "hey, how about we just basically tab-complete your code for you," which is astonishing.
I know people love to argue about that, and then they go right back to their old approach of copying and pasting their code off Stack Overflow.

Nate: Yeah. That's an interesting one. When it works, it works and it's magical. And those are those experiences where you say, "I'm going to do this thing forever and ever, I'm never going to go back." And then when it doesn't work, you find yourself going back, and then you maybe say, "Well, heck, that was horrible. Why'd I ever even go down this path?" I will say everyone's working on something along those lines. I don't think that that's much of a secret. And there are just so many different avenues to getting there. And I think that this is so early in the game that where we are today isn't where we're going to be.

Corey: Oh, it's accelerating. Watching the innovation right now in the generative AI space is incredible. My light bulb moment that finally got me to start paying attention to this, and viewing it as something other than hype that people are trying to sell us on conference stages, was when I used one of them to spit out, from a comment in VS Code, "Write a Python script that will connect to the AWS pricing API and tell me what something costs, sorted from most to least expensive regions." Because doing that manually would have taken a couple hours because their data structures are a sad joke and that API is garbage. And it sat and spun for a second and then it did it. But if I tell that story as, "This is the transformative moment that opened my eyes," I sound incredibly sad and pathetic.

Nate: No, I don't think so. I think that what it does is, one, it will open up more eyes, but the other thing that it does is you have to take that to the next level, which is great. That's great work, gone. Now that I have this information, what do I do with it?
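The script Corey describes above is concrete enough to sketch. What follows is a rough, hypothetical reconstruction, not the script actually generated in the episode: the boto3 Pricing API calls are real, but the instance type, the filter values, and the parsing helpers are illustrative choices, and the nested JSON digging reflects exactly the "sad joke" data structures he complains about.

```python
# Sketch: list on-demand EC2 prices for one instance type,
# sorted from most to least expensive region.
import json


def on_demand_usd(price_item: str) -> float:
    """Pull the hourly USD price out of one Pricing API record.

    Each record is a JSON string with a deeply nested 'terms'
    structure; we take the first on-demand term's first dimension.
    """
    product = json.loads(price_item)
    on_demand = product["terms"]["OnDemand"]
    dimensions = next(iter(on_demand.values()))["priceDimensions"]
    return float(next(iter(dimensions.values()))["pricePerUnit"]["USD"])


def most_to_least_expensive(prices: dict[str, float]) -> list[tuple[str, float]]:
    """Sort a {region: hourly_usd} map from most to least expensive."""
    return sorted(prices.items(), key=lambda kv: kv[1], reverse=True)


def fetch_region_prices(instance_type: str = "m5.large") -> dict[str, float]:
    """Query the AWS Pricing API for one on-demand Linux SKU per region.

    Requires AWS credentials; the Pricing endpoint lives in us-east-1.
    Deliberately not called at import time.
    """
    import boto3

    pricing = boto3.client("pricing", region_name="us-east-1")
    prices: dict[str, float] = {}
    for page in pricing.get_paginator("get_products").paginate(
        ServiceCode="AmazonEC2",
        Filters=[
            {"Type": "TERM_MATCH", "Field": "instanceType", "Value": instance_type},
            {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
            {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
            {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
            {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
        ],
    ):
        for item in page["PriceList"]:
            region = json.loads(item)["product"]["attributes"].get("location", "unknown")
            prices[region] = on_demand_usd(item)
    return prices
```

With credentials configured, `most_to_least_expensive(fetch_region_prices())` gives the sorted list; the extra filters (tenancy, pre-installed software, capacity status) are the usual way to isolate a single SKU, but treat them as a starting point rather than gospel.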
That's really where we need to be going and where we need to think about what this AI revolution is going to allow us to do, and that's to actually put this stuff into context. That's what humans do, which the computers are not always great at. And so, for instance, I see a lot of posts online about, "Hey, you know, I used to do job X, where I wrote up all these things," or, "I used to write a blog, and now because of AI, my boss wants me to write five times the output." And I'm thinking, "Well, maybe the thing that you're writing doesn't need to be written if it can be easily queried and generated on the fly." You know? Maybe those blog posts just don't have that much value anymore. So, what is it that we really should concentrate on in order to help us do better stuff, to have a higher order of importance in the world? That's where I think a lot of this really will wind up going. Just as people, we've got to be better. And this will help us get there.

Corey: One area of nuance on this, though: you're right. When I talked about this with some of my developer friends, some of their responses were basically to become immediately defensive. Like, "Sure, it's great for the easy stuff, but it's not going to solve the high-level stuff that senior engineers are good at." And I get that. This ridiculous thing that I had to do is not a threat to a senior engineer, but it is arguably a threat to someone I find on Upwork or Fiverr or whatnot to go and write this simple script for me.

Nate: Oh yeah.

Corey: Now, the concern that I have is one of approachability and accessibility, because senior engineers don't spring fully formed from the forehead of some god somewhere and emerge from Google. They start off as simply people who have no idea what they're doing and have a burning curiosity about something, in many cases. Where is the next generation going to get the experience of writing a lot of that small-scale stuff, if it's done for them?
And I know that sounds alarmist, and "oh, no, the sky is falling," and "are the children going to be all right," as most people my age start to get into. But I do wonder what the future holds.

Nate: That's legit. That's a totally legit question, because it's always kind of hanging out there. I look at what my kids have access to today. They have freaking Oracle, the Oracle at Delphi, on their phone, you know, and—

Corey: If they had Oracle the database on their phone, I would hate to imagine what the cost of raising your kids to adulthood would be.

Nate: Oh, it's mighty, mighty high [laugh]. But no, they have all of this stuff at their hands and then even just in the air, right? There's ambient computing; there's any question you want answered, you could speak it into the air and it'll come out. And it'll be, let's just say, I don't know, at least 85% accurate. But my kids still ask me [laugh].

Corey: Having my kids, who are relatively young, still argue and exhaust their patience on a robot with infinite patience instead of me, who has no patience? Transformative. "How do I spell whatever it is?" "Ask Alexa" becomes a story instead of "look it up in the dictionary," like my parents used to tell me. It's, "If I knew how to spell it, I wouldn't need to look it up in the dictionary, but I don't, so I can't."

Nate: Right. And I would never need to spell it again, because I have the AI write my whole thing for me.

Corey: That is a bit of a concern for me when some of the high school teachers are freaking out about students writing essays with this thing. And, yeah, on the one hand, I absolutely see this as alarmism, where, "oh, no, I'm going to have to do my job," on some level. But the reason you write so many of those boring, pointless essays in English class over the course of the K through 12 experience is, ideally, that it's teaching you how to frame your discussions, how to frame an argument, how to tell a compelling story.
And, frankly, I think that's something that a lot of folks in the engineering cycle struggle with mightily. You're a product slash program manager at this point; I sort of assume that I don't need to explain to you that engineers are sometimes really bad at explaining what they mean.

Nate: Yeah. Dude, I came up in tech. I'm… bad at it too sometimes [laugh]. Or when I think I'm doing a great job and then I look over and I see a… you know, the little blanky, blanky face, and I go, "Oh. Oh, hold on. I'll recalibrate that for you." It's a thing.

Corey: It's such a bad trope that describing what you actually mean slash want has now been declared an entire field called prompt engineering.

Nate: Dude, I hate that. I don't understand how this is going to be a job. It seems to be the most ridiculous thing in the world. If you say, "I sit down for six hours a day and I ask my computer questions," I've got to ask, "Well, why?" [laugh]. You know? And really, that's the thing. It gets back—

Corey: Well, most of us do that all day long. It's just in Microsoft Excel, or they use SQL to do it.

Nate: Yeah… it is, but you don't spend your day asking your computer the question, "Why?" Or really, most of us ask the question, "How?" That's really what it is we're doing.

Corey: Yeah. And that is where I think it's going to start being problematic for some folks who are like, "Well, what is the point of writing blog posts if Chat-GIPITY can do it?" And yes, that's how I pronounce it: Chat-GIPITY. And the response is, "Look, if you're just going to rehash the documentation, you're right. There's no point in doing it." Don't tell me how to do something. Tell me why. Tell me when I should consider using this tool or that tool, why this is interesting to me, why it exists.
Because for the how, one way or another, there are a myriad of ways to find out the answer to something; but you've got to care first, and convincing people to care is something computers still have not figured out.

Nate: Bingo. And that gets back to your question about the engineers, right? Yeah. Okay. So sure, maybe the little low-level tasks of, "Hey, I need you to write this API," do get farmed out. However, the overall architecture still has to be considered by someone; someone still has to figure out where, how, and when things should be placed and the order in which these things should be connected. That never really goes away. And then there's also the act of creation. And by creation, I mean just new ideas, things that, you know, that stroke of creativity and brilliance where you just say, "Man, I think there's a better way to do this thing." Until I see that from one of these generative AI products, I don't know if anyone should truly feel threatened.

Corey: I would argue that people shouldn't necessarily feel threatened regardless, because things always change; that's the nature of it. I saw a headline on Hacker News recently that said 90% of my skills are now worthless, but 10% of them are worth 10x what they were. And I think that there's a lot of truth to that, because if you want a job where you don't have to keep up with a continually changing field, there are options. Not to besmirch them, but accountants are a terrific example of this. Yes, there are changes to accountancy rules, but they happen slowly and methodically. You don't go on vacation for two years as an accountant, or a sabbatical, and come back to discover that everything's different and math doesn't work the way it once did. Computers, on the other hand: it really does feel like it's keep up or you never will.

Nate: Unless you're a COBOL guy and you get called back for Y2K.

Corey: Oh, of course.
And I'm sure now you're sitting around, you're waiting, because when the epoch time problem hits in 2038, you're going to get your next call-out. And until then, it's kind of a sad life. You're the Maytag repair person.

Nate: Yeah. I'm bad at humor, by the way, in case you haven't noticed. So, you touched on something there about the rate of change, how things change, and whether or not these generative AI models are going to be able to keep up; just how far can they go? And I think that something happened over the last week or so that really got me thinking about this: the posting of a fake AI-generated song, I think from Drake. And say what you want about cultural appropriation, all that sort of thing, and how horrible that is; what struck me was the idea that these sorts of endeavors can only go so far, because in any genre where there's language, current language that morphs and changes and has subtlety to it, the generative AI would have to somehow be able to mimic that. Not to say that it could never get there, but again, I see us having some situations where folks are worried about a lot of things that they don't need to worry about, you know, just at this moment.

Corey: I'm curious to figure out what your take is on how you see the larger industry, because for a long time, and yes, it's starting to fade on some level because it's not 2006 anymore, there was a lot of hero worship going on with respect to Google in particular. It was the mythical city on the hill where all the smart people went, and people's entire college education was centered around the idea of, "well, I'm going to get a job at Google when I graduate, or I'm doomed." And it never seems to work out that way. I feel like there's a much broader awareness these days that there's no one magical company that has the answers, and there are a lot of different paths.
But if you're giving guidance to someone who's starting down that path today, what would it be?

Nate: Do what you love. Find something that you love, figure out who does the thing that you love, and go there. Or go to a place that does the thing you love poorly, go there, and see if you can make a difference. Either way, you're working on something that you like to do. And really, in this business, if you can't get in the door at one of those places, then you can make your own door. It's becoming easier and easier to just sort of shoehorn yourself into this space. And a lot of it, yeah, there's got to be talent; yeah, you've got to believe in yourself, all that sort of thing; but the barriers to entry are really low right now. It's super easy to start up a website, and it costs you nothing to have a GitHub account. I really find it surprising when I talk to my younger cousins or someone else in that age range and they start asking, like, "Well, hey, how do I get into the business?" And I'm like, "Well, what's your portfolio?" You know? And I ask them, "Do you want to work for someone else? Or would you like to at least try working for yourself first?" There are so many different avenues open to folks that, you're right, you don't have to go to company X or you will never be anything. That said, I am at [laugh] one of the bigger companies, and there are some brilliant people here. I bump into them and it's kind of wild. It really, really is.

Corey: Oh, I want to be very clear: despite the shade that I throw at Google, and contemporary peers in the big tech company space, there are an awful lot of people who are freaking brilliant. And more importantly, by far, a lot of people who are extraordinarily kind.
And it's neat when you run into a place that has thousands of people who do not fit that horrible stereotype out there of the geek who can't, you know, who can't get along well with others. It's kind of nice.But I also think that that's because the industry itself is opening up. I go on to Twitter now and I see so many new faces and I see folks coming in, you know, for whatever reason, they're attracted to it for reasons, but they're in. And that's the really neat part of it. I used to worry that I didn't see a lot of young people being interested in this space. But I'm starting to notice it now and I think that we're going to wind up being in good hands.Corey: The kids are all right, I think, is a good way of framing it. What made you decide to go to Google? Again, you said you've been there about a year at this point. And, on some level, there's always a sense in hindsight of, well, yeah, obviously someone went from this job to that job to that job. There's a narrative here and it makes sense, but I've never once in my life found that it made sense and was clear while you're making the decision. It feels like it only becomes clear in hindsight.Nate: Yes, I am an extremely lucky person. I am super fortunate, and I will tell a lot of people, sometimes I have the ability to fall ass-backwards into success. And in this case, I am here because I was asked. I was asked and I didn't really think that I was the Google type because, I don't know what I thought the Google type was, just, you know, not me.And yet, I… talked it out with some folks, a really good, good buddy of mine and [laugh] I'll be darned, you know, next thing, you know, I'm here. So, gosh, what can I say except, don't limit yourself [laugh]. 
We do have a tendency to do that and oh, my God, it's great to have a champion and what I'd like to do now, now that you mention it and it's been something that I had on my mind for a bit is, I've got to figure out how to, you know how to start, you know, giving back, paying it forward, whatever the phrase it is you want to use? Because—Corey: I like, “Send the elevator back down.”Nate: Send the elevator back down? There you go, right? If that escalator stopped, turn it back on.Corey: Yeah, escalator; temporarily, stairs.Nate: Yes. You know, there are tons of ways up. But you know, if you can help someone, just go ahead and do it. You'd be surprised what a little bit of kindness can do.Corey: Well, let's tie this back to your day job for a bit, on some level. You're working on, effectively, developer tools. Who's the developer?Nate: Who's the developer? So, there's a general sense in the industry that anyone who works in IT or anyone who writes code is a developer. Sometimes there's the very blanket statement out there. I tend to take the view that a developer is the person who writes the code. That is a developer, that's [unintelligible 00:21:52] their job title. That's the thing that they do.The folks who assist developers, the folks who keep the servers up and running, they're going to have a lot of different names. They're DevOps admins, they're platform admins, they're server admins. Whatever they are, rarely would I call them developers, necessarily. So, I get it. We try to make blanket statement, we try to talk to large groups at a time, but you wouldn't go into your local county hospital and say that, “I want to talk to the dentist,” when you really mean, like, a heart surgeon.So, let's not do that, you know? We're known for our level of specificity when we discuss things in this field, so let's try to be a little more specific when we talk about the folks who do what they do. 
Because I came up on that ops track and I know the type of effort that I put in, and I looked at folks across from me and I know the kind of hours that they put in, I know all of the blood, sweat, and tears and sleepless nights and answering the pagers at four in the morning. So, let's just call them what they are, [laugh] right? And it's not to say that calling them a developer is an insult in any way, but it's not a flex either.
Corey: You do work at a large cloud company, so I have to assume that this is a revelation for you, but did you know that words actually mean things? I know, it's true. You wouldn't know it from a lot of the product names that wind up getting scattered throughout the world. The trophy for the worst one ever, though, is Azure DevOps because someone I was talking to as a hiring manager once thought that they listed that as a thing they did on their resume and was about to can the resume. It's, “Wow, when your product name is so bad that it impacts other people's careers, that's kind of impressively awful.”
But I have found that back when the DevOps movement was getting started, I felt a little offput because I was an operations person; I was a systems administrator. And suddenly, people were asking me about being a developer and what it's like. And honestly, on some level, I felt like an imposter, just because I write configuration files; I don't write code. That's very different. Code is something smart people write and I'm bad at doing that stuff.
And in the fullness of time, I'm still bad at it, but at least now unenthusiastically bad at it. And, on some level, brute force also becomes a viable path forward. But it felt like it was gatekeeping, on some level, and I've always felt like the terms people use to describe what I did weren't aimed at me. I just was sort of against the edge.
Nate: Yeah.
And it's a weird thing that happens around here, how we get to these points, or… or somehow there's an article that gets written and then all of a sudden, everyone's life is changed in an industry. You go from your job being, “Hey, can you rack and stack the server?” To, “Hey, I need you to write this YAML code that's going to virtually instantiate a server and also connect it to a load balancer, and we need these done globally.” It's a really weird transition that happens in life.
But like you said, that's part of our job: it morphs, it changes, it grows. And that's the fun of it. We hope that these changes are actually for the better and that they're going to make us more productive and they're going to make our businesses thrive and do things that they couldn't do before, like maybe be more resilient. You know, you look at the number of customers—customers; I think of them as customers—who had issues because of that horrible day in 9/11 and, you know, their business goes down the tube because there wasn't an adequate DR or COOP strategy, you know? And I know, I'm going way back in the wayback, but it's real. And I knew people who were affected by it.
Corey: It is. And the tide is rising. This gets back to what we were talking about where the things that got you here won't necessarily get you there. And Cloud is a huge part of that. These days, I don't need to think about load balancers, in many cases, or all of the other infrastructure pieces because Google Cloud—among other companies, as well, lots of them—have moved significantly up the stack.
I mean, people are excited about Kubernetes in a whole bunch of ways, but what an awful lot of enterprises are super excited about is suddenly, a hard drive failure doesn't mean their application goes down.
Nate: [Isn't that 00:26:24] kind of awesome?
Corey: Like, that's a transformative moment for them.
Nate: It totally is. You know, I get here and I look at the things that people are doing and I kind of go, “Wow,” right?
I'm in awe. And to be able to contribute to that in some way by saying, “Hey, you know what'd be cool? How about we try this feature?” is really weird, [laugh] right?
It's like, “Wow, they listened to me.” But we think about what it is we're trying to do and a lot of it, strangely enough, is not just helping people, but helping people by getting out of the way. And that is huge, right? You know, because you just want it to work, but more than it just working, you want it to be seamless. What's easier than putting your key in the ignition and turning it? Well, not having to use a key at all.
So, what are those types of changes that we can bring to these different types of experiences that folks have? If you want to get your application onto a Kubernetes cluster, it shouldn't be some Herculean feat.
Corey: And running that application responsibly should not require a team of people, each making a quarter million bucks a year, just to be able to do it safely and responsibly. There's going to be a collapsing down of what you have to know in order to run these things. I mean, web servers used to be something that required a month of your life and a fair bit of attention to run. Now, it's a checkbox in a cloud console.
Nate: Yeah. And that's what we're trying to get it to, right? Why isn't everything a checkbox? Why can't you say, “Look, I wrote my app. I did the hard part.” Let's—you know, I just need to see it go somewhere. You know? Make it go and make it stay up. And how can I do that?
And also, here's a feature that we're working on. Came out recently and we want folks to try it. It's a Cloud Deploy feature that works for Cloud Run as well as it does for GKE. And it's… I know it's going to sound super simple: it's our canary deployment method.
But it's not just canary deployment, but also we can tie it into parallel deployment.
And so, you can have your new version of your app stood up alongside your old version of the app and we can roll it out incrementally in parallel around the world and you can have an actual test that says, “Hey, is this working? Is it not working?” If it does, great, let's go forward. If it doesn't, let's roll back. And some of this stuff sounds like common sense, but it's been difficult to pull off.
And now we're trying to do it with just a few lines of YAML. So, you know, is it as simple as it could be? Well, we're still looking at that. But the features are in there and we're constantly looking at what we can do to iterate and figure out what the next thing is.
Corey: I really want to thank you for taking the time to speak with me. If people want to learn more, where's the best place for them to find you?
Nate: Best place to find me used to be my blog, Not Your Dad's IT. However, I've been pretty negligent there since doing this whole Google thing, so I would say, just look me up on Twitter at @nathaniel_avery, look me up on Google. You can go to a pretty cool search engine and [laugh]—
Corey: Oh, that's right. You guys have a search engine now. Good work.
Nate: That's what I hear [laugh].
Corey: Someday maybe it'll even come to Google Docs.
Nate: [laugh]. Yes, so yeah, that's where to find me. You know, just look me up at Nathaniel Avery. I think that handle works for almost everything, Twitter, LinkedIn, wherever, and reach out.
If there's something you like about our DevOps tools, let me know. If there's something you hate about our DevOps tools, definitely let me know. Because the only reason we're doing this is to try and help people. And if we're not doing that, then we need to know. We need to know why it isn't working out.
And trust me, I talk to these engineers every day.
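The canary-plus-parallel rollout described in the interview can indeed come down to a short manifest. The sketch below is illustrative only: the pipeline name, target name, and traffic percentages are hypothetical, and the field names follow the publicly documented Cloud Deploy delivery-pipeline schema rather than anything stated in the episode.

```yaml
# Hypothetical Cloud Deploy pipeline: roll a new revision out as a
# canary (10% of traffic, then 50%) before promoting it fully.
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: web-app-pipeline        # hypothetical pipeline name
serialPipeline:
  stages:
  - targetId: prod              # hypothetical target (Cloud Run or GKE)
    strategy:
      canary:
        runtimeConfig:
          cloudRun:
            automaticTrafficControl: true
        canaryDeployment:
          percentages: [10, 50] # incremental traffic steps
          verify: false         # set true to run a verification job between steps
```

If a canary step fails verification, the release can be rolled back and the old revision keeps serving the remaining traffic, which is exactly the "go forward or roll back" decision described above.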
That's the thing that really keeps them moving in the morning is knowing that they're doing something to make things better for folks. Real quick, I'll close out, and I think I may have mentioned this on some other podcasts. I come from the ops world. I was that guy who had to help get a deployment out on a Friday night and it lasted all weekend long and you're staring there at your phone at some absurd time on a Sunday night and everyone's huddled together and you're trying to figure out, are we going to roll back or are we going to go forward? What are we going to do by Monday?
Corey: I don't miss those days.
Nate: Oh, oh God no. I don't miss those days either. But you know what I do want? I took this job because I don't want anyone else to have those days. That's really what it is. We want to make sure that these tools give folks the ability to deploy safely and to deploy with confidence and to take that level of risk out of the equation, so that folks can, you know, just get back to doing other things. You know, spend that time with your family, spend that time reading, spend that time prompting ChatGPT with questions, [laugh] whatever it is you want to do, but you shouldn't have to sit there and wonder, “Oh, my God, is my app working? And what do I do when it doesn't?”
Corey: I really want to thank you for being as generous with your time and philosophy on this. Thanks again. I've really enjoyed our conversation.
Nate: Thank you. Thank you. I've been a big fan of your work for years.
Corey: [laugh]. Nate Avery, outbound product manager at Google Cloud. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you hate this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment that you had Chat-GIPITY write for you in YAML.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

The Cloud Pod
193: The cloud pod was less productive in 2022

The Cloud Pod

Play Episode Listen Later Dec 29, 2022 60:54


On this episode of The Cloud Pod, the team wraps up 2022, comparing the predictions they made with the year's events while projecting into 2023 as the year comes to a close. They discuss the S3 security changes coming from Amazon, the new control plane connectivity options with GCP, and Microsoft's achievement, finally topping a list within the cloud space. A big thanks to this week's sponsor, Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. This week's highlights

Google Cloud Platform Podcast
Redesigning the Cloud SDK and CLI with Wael Manasra and Cody Oss

Google Cloud Platform Podcast

Play Episode Listen Later Feb 2, 2022 44:09


This week on the podcast, Wael Manasra and Cody Oss join hosts Carter Morgan and Mark Mirchandani to chat about new branding in Cloud SDK and gcloud CLI. Google Cloud SDK was built and designed to take over mundane development tasks, allowing engineers to focus on specialized features and solutions. The SDK documentation and tutorials are an important part of this as well. With clear instructions, developers can easily make use of Cloud SDK. Software Development Kits have evolved so much over the years that recently, Cody, Wael, and their teams have found it necessary to redefine and rethink SDKs. The popularity of cloud projects and distributed systems, for example, means changes to kit requirements. The update is meant to reevaluate the software included in SDKs and CLIs and to more accurately represent what the products offer. Giving developers the tools they need in the place they work means giving developers code language options, providing thorough instruction, and listening to feedback. These are the goals of this redesign. The Google Cloud SDK contains downloadable parts and web publications. Our guests explain the types of software and documentation in each group and highlight the importance of documentation and supporting materials like tutorials. The Cloud Console is a great place for developers to start building solutions using the convenient point-and-click tools that are available. When these actions need to be repeated, the downloadable Command Line Interface tool can do the work. Cody talks about authentication and gcloud, including its relationship to client libraries. He walks us through the steps a typical developer might take when using Google products and how they relate to the SDK and CLI. Through examples, Wael helps us further understand client libraries and how they can interact with the CLI. The Cloud SDK is a work in progress. Our guests welcome your feedback for future updates! 
Wael Manasra Wael manages the gcloud CLI, the client libraries for all GCP services, and the general Cloud SDK developer experience. Cody Oss Cody works on the Go Cloud Client libraries where he strives to provide a delightful and idiomatic experience to all the Gophers on Google Cloud. Cool things of the week Google Tau VMs deliver over 40% price-performance advantage to customers blog Find products faster with the new All products page blog Interview Cloud SDK site Cloud SDK Documentation docs Go site Google Cloud site Cloud Storage site Cloud Storage Documentation docs Cloud Code site Cloud Run site GKE site Cloud Functions site Cloud Client Libraries docs Cloud Shell site Cloud Shell Editor docs What's something cool you're working on? Carter is working on his comedy. Hosts Carter Morgan and Mark Mirchandani

Google Cloud Platform Podcast
Working with Kubernetes and KRM with Megan O'Keefe

Google Cloud Platform Podcast

Play Episode Listen Later Aug 25, 2021 35:58


This week on the podcast, we welcome guest Megan O'Keefe to talk about KRM and Kubernetes with your hosts Mark Mirchandani and Anthony Bushong. To start the show, Megan gives us a quick rundown of Kubernetes, an open-source tool to orchestrate containers and manage other GCP resources. She explains the difference between declarative and imperative to help us better understand the basics of Kubernetes. We tackle the challenges people face when beginning their Kubernetes journey and how it works with other open-source projects, like Anthos. This year, Megan and her team have been working to help developers understand the Kubernetes Resource Model, a concept that helps define how companies can organize and run clusters, enforce policies, and more for improved standardization across multiple teams. Megan explains GitOps, a deployment model for Kubernetes focusing on Git, and takes us through examples of implementation. We learn about Config Sync and how it helps with optimizing and automating GitOps. Megan goes over some other valuable tools, including Open Policy Agent and Gatekeeper, which help developers specify not just which resources are allowed, but also what kinds of things are allowed within each resource. We wrap up the show with a discussion on streamlining the development process with strategic use of Kubernetes and the help of open-source tools like Skaffold. Megan also talks about controllers like Config Connector that help with deploying to a GCP project and the things she finds most exciting about this space. Megan O'Keefe Megan O'Keefe is a Developer Relations Engineer at Google Cloud, helping developers build platforms with Kubernetes and Anthos. Cool things of the week Listen up! Google Cloud Reader reaches 50 episodes blog Private Pools Overview docs Interview Kubernetes site GKE site KRM site KRM Tutorial Demos site Build a platform with KRM: Part 1 - What's in a platform? 
blog Build a platform with KRM: Part 2 - How the Kubernetes resource model works blog Build a platform with KRM: Part 3 - Simplifying Kubernetes app development blog Build a platform with KRM: Part 4 - Administering a multi-cluster environment blog Build a platform with KRM: Part 5 - Manage hosted resources from Kubernetes blog I do declare! Infrastructure automation with Configuration as Data blog Multi-cluster Use Cases docs CNCF Kubernetes Overview site Anthos site Anthos Technical Overview docs Anthos Config Management site Config Sync Overview docs Guide To GitOps site Policy Controller Overview docs Kustomize site Cloud Code site Config Connector Overview docs Crossplane site Skaffold site Open Policy Agent site Backstage site What’s something cool you’re working on? Anthony shared info about GKE on the podcast last week and he’s been working on his video series on GKE cost optimization. The solutions guide and white paper are great resources for this topic.
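The declarative-versus-imperative distinction discussed in this episode is easiest to see in a Kubernetes Resource Model manifest. The sketch below is a generic illustration, not taken from the episode; the app name and image are hypothetical. You declare a desired end state, and Kubernetes controllers do the imperative work of creating, replacing, or scaling Pods to match it.

```yaml
# Hypothetical KRM manifest: declare three replicas of a web app.
# Applying it (e.g. `kubectl apply -f app.yaml`) states the desired
# state; the Deployment controller reconciles the cluster toward it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web           # hypothetical app name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: example.com/hello-web:1.0   # hypothetical image
```

Because the manifest is plain data, it can live in Git and be reconciled by tools like Config Sync, which is the basis of the GitOps model covered in the interview.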

Employment Matters - Europe
Episode 21: The EU Cloud Code of Conduct

Employment Matters - Europe

Play Episode Listen Later Jul 26, 2021 13:07


In this episode, we discuss what the EU Cloud Code of Conduct is, what its objective is, and how companies can join. Please visit the EU Cloud CoC website here. Subscribe to our podcast today to stay up to date on employment issues from law experts worldwide.Moderator: Philippe Durand (August Debouzy / France)Guest Speaker: Bastiaan Bruyndonckx  (Lydian / Belgium)

Employment Matters
Episode 273: The EU Cloud Code of Conduct

Employment Matters

Play Episode Listen Later Jul 26, 2021 13:07


From Employment Matters - Europe: In this episode, we discuss what the EU Cloud Code of Conduct is, what its objective is, and how companies can join. Please visit the EU Cloud CoC website here. Subscribe to our podcast today to stay up to date on employment issues from law experts worldwide.Moderator: Philippe Durand (August Debouzy / France)Guest Speaker: Bastiaan Bruyndonckx (Lydian / Belgium)

IBM Cloud Podcast
Get developers out of the infrastructure business with IBM Cloud Code Engine

IBM Cloud Podcast

Play Episode Listen Later Jun 10, 2021 20:41


A conversation with Doug Davis about how IBM Cloud Code Engine allows developers to spend more time coding. What is Code Engine? Get started with Code Engine Tutorial Code samples and examples Mining large data sets of biomedical Omics Data made easy (KubeCon EU 2021) Music: Mercury by Shane Ivers - https://www.silvermansound.com

Voces de la Nube
#3 - Anthos para Lograr Agilidad Y Reducir los Costos de TI

Voces de la Nube

Play Episode Listen Later Apr 23, 2021 22:40


As we mentioned in the first episode, uncertainty is a given in any planning; what we really seek is to minimize risk in spite of it. That is why resilience and agility are great allies, as long as they help the company adapt to different scenarios. Solutions that offer an extra layer of security and flexibility are perfect for these environments, especially when they require no infrastructure changes. But we are not only talking about migrating everything to the public cloud, not even when hybrid solutions exist to modernize systems without sacrificing control. When we talk about Anthos, we mean modernizing applications in any cloud, including multi-cloud and hybrid strategies. This third episode of Voces de la Nube puts the service at the center of the debate on cost optimization. Rodrigo Perez and Carlos Rojas, both Customer Engineers at Google Cloud, discuss using Anthos in a strategy to increase performance and reduce IT costs. Voces de la Nube is Google Cloud's official podcast for Latin America. Every 15 days, we cover digital transformation and the journey to the cloud with executives and specialists, along with special guests.
Below you will find the links for this episode: Learn more about Anthos: https://bit.ly/3lFhQI0 Try Kubernetes, our open-source tool: https://bit.ly/39LXPLb Read our Google Cloud Adoption Framework report to learn how to identify the maturity level of the teams supporting your applications: https://bit.ly/31Mi5bl Watch the video about our security layers: https://bit.ly/3tR9Uqd Learn more about the relationship between DevOps and SRE: https://bit.ly/3cRriWb Learn more about Cloud Run: https://bit.ly/3rNo1LL Learn more about the features of Anthos Service Mesh: https://bit.ly/3mnZYSd Learn more about Cloud Code: https://bit.ly/3sWpUa4 Read the Forrester report on Anthos, which explains its significant economic benefits: https://bit.ly/321KMRL Learn more about building a hybrid processing farm: https://bit.ly/3sRpty0 Did you enjoy this episode? Do you have any suggestions? Email us at vocesdelanube@google.com

Serious Privacy
On Cloud 9 for the EU Cloud Code of Conduct

Serious Privacy

Play Episode Play 40 sec Highlight Listen Later Apr 7, 2021 35:49


Demonstrating compliance is certainly not always easy, but under many laws, including the GDPR, it is a mandatory requirement. To facilitate the process, codes of conduct and certification schemes are becoming more popular, and it is no wonder they have been included in the GDPR as well. As we are on the verge of seeing the first codes of conduct to demonstrate GDPR compliance approved, Paul Breitbarth and K Royal discuss the EU Cloud Code of Conduct, which TrustArc is proud to support. Join us and learn more about what the EU Cloud Code of Conduct entails, how it is supposed to work, and what the benefits are of adhering to such a code. Oh, and don't be surprised by a little April Fools and Easter conversation this week too - the recording was made on 1 April...  As always, if you have comments or questions, please contact us at seriousprivacy@trustarc.com.
Resources
A downloadable version of the EU Cloud Code of Conduct
Details on the future Third Country Module, intended for international data transfers
Webinar with Paul on the Third Country Module

Serverless Chats
Episode #95: Going Serverless with IBM Cloud Code Engine with Jason McGee

Serverless Chats

Play Episode Listen Later Apr 5, 2021 39:26


About Jason McGee
Jason McGee, IBM Fellow, is VP and CTO at IBM Cloud Platform. Jason is currently responsible for technical strategy and architecture for all of IBM's Cloud Platform, across public, dedicated, and local delivery models. Previously Jason has served as CTO of Cloud Foundation Services, Chief Architect of PureApplication System, WebSphere Extended Deployment, WebSphere sMash, and WebSphere Application Server on distributed platforms.
Twitter: @jrmcgee
LinkedIn: https://www.linkedin.com/in/jrmcgee/
IBM Cloud Code Engine: Learn more during this live virtual event on April 14th (also available on-demand after April 14th)
Read more: https://www.ibm.com/cloud/code-engine
Get started today: https://cloud.ibm.com/docs/codeengine?topic=codeengine-getting-started
Watch this episode on YouTube: https://youtu.be/yH_mgW2kGzU
This episode sponsored by IBM Cloud.
Transcript:
Jeremy: Hi, everyone. I'm Jeremy Daly and this is Serverless Chats. Today I'm joined by Jason McGee. Hey Jason, thanks for joining me.
Jason: Thanks for having me.
Jeremy: So you are an IBM fellow and the VP and CTO of the IBM Cloud platform. So I'd love it if you could tell our guests a little bit about yourself and what it is that you do at IBM.
Jason: Sure. I spend my day at IBM worried about developers and platform services on our public cloud. So I'm responsible for both the technical strategy and the delivery of our Kubernetes and OpenShift platforms, our serverless environments, and kind of all the things that surround that space, logging, and monitoring and other developer tools that kind of make up the developer platform for IBM Cloud.
Jeremy: And what about yourself? What's your background?
Jason: Been a software, kind of middleware guy, my whole life. I used to be the chief architect for WebSphere app server. So I spent the last 20 plus years working on enterprise application platforms and helping companies be able to build mission-critical business systems.
Jeremy: Awesome.
So I had Michael Behrendt on the show not too long ago and it was great. We talked about a whole bunch of different things. IBM's point of view of serverless. We talked a little bit about the future of serverless and we talked about the IBM Cloud Code Engine, which I want to get into, but for the benefit of our listeners and just because I'm so fascinated by some of the things that IBM is doing now with serverless, it's just super interesting. So could you sort of give me your point of view or IBM's point of view on serverless and just sort of refresh the listeners' memory sort of about how IBM is thinking about serverless and how they're probably thinking about it maybe differently than some of the other cloud providers?
Jason: Yeah, sure. I mean, it's such a fascinating space and it's really changed a lot, I think, over the last five years or so from its kind of maybe beginnings in being very aligned with serverless functions and kind of event-driven computing and becoming a more general concept about how developers especially can consume cloud platforms. I think if you look at the IBM perspective on serverless, there's a couple layers to the problem that we think about. First is we've been pretty clear that we think Kubernetes and distributions of Kubernetes like OpenShift are kind of the key foundation compute environment for developers to use going forward. And we've done a ton of work in kind of building out our Kubernetes and OpenShift platforms and delivering them as a service on our public cloud. And that's an incredibly flexible platform that you can really build any kind of application. I think over the last five years, we've proven we can run anything on Kubernetes, databases and AI and stateless apps and whatever you want.
Jeremy: Right.
Jason: So very, very flexible. However, sometimes flexible also means complicated and it means that there's lots to manage and there's lots of concepts to get your head around.
And so we've been thinking a lot about, well, how do you actually consume a platform like Kubernetes more easily? How does the developer stay more focused on what they're really trying to do, which is like build application logic, solve problems? They don't really want to stand up Kube clusters and configure security policies. They just want to write code and run code and they want to get the power of cloud to do that. Right? And so I think serverless has kind of morphed to be, for us, more about the experience that we can build on top of that container platform that's more oriented around how developers get work done and allows them to kind of more easily take advantage of the scale and power of public clouds without having to kind of take on the burden of a lot of that kind of work and management.
And so the work that we've been doing is really aligned in that direction, that we've been working in projects like Knative, in the open source community to build simpler abstractions on top of Kubernetes. And we've been starting to deliver those in our cloud through things like Code Engine.
Jeremy: Yeah. And I think that's interesting too because I always have, this is probably the wrong way to say it, but it's sort of a chip on my shoulder about Kubernetes because it just got so complicated. Right? It's just so many things that you have to do, so hard to manage. And as a serverless guy myself, I love just the simplicity of being able to write some code and just get it out there, have it auto scale, tie into all those events. So I think that a lot of cloud providers have sort of moved that way to say like, "Well, we're going to manage your Kubernetes cluster for you." Right? Which essentially is just, I think moving backwards, but also moving forwards at the same time, if that makes sense.
But so in terms of the use cases that this opens up because now you're not necessarily limited to a sort of bespoke implementation of some serverless platform, you have a lot more capabilities. So what types of use cases does this open up?
Jason: Yeah. I mean, I have a couple of comments on that. I mean, so I think with Kubernetes, you have the complexity of managing the Kubernetes environment, but even if that's totally taken care of for you, and even if you're using a managed Kubernetes service like the things we offer on IBM Cloud, you still have that kind of resource burden of using Kubernetes. You have services and pods and replica sets and namespaces and all kinds of concepts that you have to kind of wrap your head around and know how to use in the right way. And so there's a value in like, "Can we abstract that? Can we move away from that?" And it's not like this idea hasn't been tried before. I mean, we've had PaaS platforms, like kind of Cloud Foundry style, Heroku, very opinionated PaaS environments in the past and they definitely simplify the user experience. However, they came with this negative, which is if you don't fit within the box of the opinion ...
Jeremy: Right.
Jason: ... then you can't do what you want to do. And the cost of going outside the box was super high. Maybe you had to completely switch platforms. You were completely blocked. You had to switch to some other approach. And so part of what's informing us as we think about this is how do you have more of a continuum? You have a simple model. It's aligned around what you're doing. Just run my source code, just run my container image. I want to run a batch job, but it's all running on one platform. They're running next to each other. You can drop down a layer into Kubernetes if you want to. If what you're trying to accomplish needs some of that flexibility, you should have access to it without having to kind of start over.
And so that's how we've approached the problem a little bit differently: bringing this all together into one unified serverless environment on top of Kubernetes. And that lets us handle different use cases. That lets us handle stateless data processing and functions. That lets us handle simple web apps. That lets us handle very data-intensive, high-scale computation and data processing, async processing like batch, all in one combined way.

Jeremy: Right. Yeah. And I think it's interesting because there are artificial limitations put in place sometimes on serverless platforms. If you think about AWS Lambda, for example, you get 15 minutes of compute, and they've bumped things up. And again, I just sort of grew up in the AWS environment, but they have things like 10 gigs for a function or something like that. So they've increased these things, but they are sort of artificial limits that, depending on the type of workload that you're doing, can really get in your way, especially if, like you said, you're doing these data-intensive things. So from an IBM perspective, I mean, that's sort of gone, right?

Jason: Right. Exactly. That's a great, very concrete way to look at the problem. The approach that has been taken in some of the other cloud environments is that these different use cases, serverless functions, single containers, batch processing, are different services. And every service has its own limitations or rules about what you can and cannot do: how long your thing can execute, how big your code can be, how much data you can transfer. We've taken a different approach and said, "Let's eliminate all those limits and have one logical service, one environment, that supports all those styles."
We can still expose a simplified consumption model for the developer, like "just give me your source code" or "just give me your image," but I can run it in a way that doesn't have those computational limits, and therefore I can do more. Right? I can run more kinds of workloads. I don't run up against some of those walls that stop me from getting my work done.

Jeremy: Right. Right. Yeah. And I like that approach too because I'm a big fan of managed services. I think that if you have a service that does image recognition for you, that's great. And if you have a service that does queuing for you, that's great. But in some cases, you start stringing together so many different services, and I feel like you lose a lot of that control. So I like that idea of basically being able to say, "Look, I've got the compute. I can do whatever I need to do with it. It will scale to whatever I need it to scale to." And I think that's where this idea of IBM Cloud Code Engine comes in, which just became GA, so I'd love it if you could tell the listeners exactly what that is.

Jason: Yeah, absolutely. So Code Engine is the new service that we launched that makes some of these concepts I've been talking about real. It is a service that allows developers to deploy functions, containers, source code, and batch jobs into IBM Cloud. The entire environment behind that application is managed for you: you don't manage clusters, you don't provision infrastructure. You can scale all the way to zero, so you can literally only pay for what you're using. You can scale up to thousands of cores processing your application in parallel, and we manage that entire runtime environment for you. So you can think of it as a multi-tenant, shared, Kubernetes-based runtime environment that you can run your workloads on, and that presents to you the personality that you need for different workloads.
And because it's all in one service, if you have an application that's a mix of some single containers and batch jobs, they can actually talk to each other over a private network connection. They can work together instead of being siloed in completely different environments.

Jeremy: Right. Yeah. And so from the developer perspective, you mentioned that you can deploy just code, or you can deploy a container if you want to. So what does that developer experience look like? Is this something where I could just say, "Look, I don't need a whole ops team managing this for me, I just want to write code and deploy it"? I'm sure there are some things I need to know, but for the most part, what does that developer experience look like?

Jason: Yeah. So you absolutely could do it without a whole ops team. Right now there are maybe three basic entry points. You can give me source code, and we will take care of compiling that source code, combining it with a runtime, executing it for you, giving it a web endpoint, scaling it. You can give me some hints about how much resource you think you need and things like that, and we can scale it up and down and manage it for you, including all the way down to zero. That's nice if you're coming from a historical PaaS background, or if it's just, "Here's my code, run it for me." You can have that experience with Code Engine. You could also start with a container image. Lots of developers now, because of things like Kubernetes and Docker, are very familiar and comfortable with packaging up their application as a container image, but you don't want to then deal with creating a cluster and dealing with Kube. So you can just say, "Here's my image, run it for me." And one of the advantages we have with Code Engine is we can really do that with any container image.
You don't have to have a container image that follows some particular framework or is built in a very special way. We can take any container image, and you can literally point me at the image and say, "Run this for me," and Code Engine will execute it and scale it and manage it for you. Or you can start with a batch job interface, more of an async, parallel job submission model. So maybe I'm doing Monte Carlo simulations or data processing and I want to parallelize that across a whole bunch of machines and cores; Code Engine gives you an interface for that. So as a developer, you start with one of those three entry points and let Code Engine take care of how to run that, scale it, and keep it highly available.

Jeremy: Right. So I love the idea of the batch jobs. I want to talk about that a little bit more, but let's go back to some of the use cases here. What if I was building just a REST API? That seems to be a very popular serverless use case. What would I do for that? Do I need some sort of API gateway in front of it, or how does that work?

Jason: No, Code Engine provides all that for you. So you would literally either take your implementation and package it in a container, or point us at your source code directory. If you have source code, we use things like Paketo Buildpacks to build a runtime around that source code, and so you can use different languages. With our CLI tool, you point us at the source code directory and we'll build it, package it in a runtime, and run it for you. Or you point us at a container image that you've uploaded to our container registry, or to your container registry of choice, and Code Engine will execute that for you. It will give you that web endpoint, right? So it'll give you an HTTP endpoint that you can use to access that service.
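As a concrete aside, the workload behind that endpoint can be an ordinary HTTP server; nothing Code Engine specific has to appear in the code. Here is a minimal sketch in Python, assuming the Knative-style convention that the platform injects a PORT environment variable telling the container where to listen (check the Code Engine docs for the exact contract):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A plain HTTP response; the platform fronts this with a
        # routed, TLS-terminated public endpoint for you.
        body = b'{"message": "hello from the platform"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Assumed convention: the platform tells the container which port to use.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()
```

Packaged in a container, or handed over as source for the buildpack to wrap, something this small is enough to end up behind a scaled, HTTPS-fronted endpoint.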
And it will watch the demand on that system and scale it up and down as needed. By default, we'll just scale it to zero, so it'll just be registered in the system, and it'll take care of scaling up as needed to handle the demand on the app.

Jeremy: All right. Cool. And then what about these batch jobs? I talked a little bit about this with Michael, this idea of being able to run massively parallel execution. So how does that all work?

Jason: Yeah, it's similar. Obviously with batch there's a little bit more metadata that you have to provide to describe the job, what you want to execute, and how things relate to each other. So there's some input data you provide along with the implementation of the batch job, which itself could just be a container image, and you submit that job. So the CLI interface is a little bit different. You're not standing up a long-running REST endpoint; you're submitting a job to Code Engine for execution, and it will take that job, execute it, and parallelize it for you. You can also use frameworks on top. One of the things we've been doing a lot of work on, maybe Michael talked about it a little bit when he was here, is some work around Ray. Ray is a really interesting new project that lets you do distributed computing, especially around data workloads, in a really easy way. And so you can actually stand up Ray on top of Code Engine: Ray acts as the application interface for the developer to easily parallelize their code, particularly Python code, and Code Engine acts as the runtime below it. You can take a simple function in Python, mark it as Ray remote, and it'll now execute on the cloud and distribute itself across a thousand cores. And you get your answer back 20 times faster than you would have running it locally. And so you can have those kinds of async environments as well.

Jeremy: Awesome. And so what about some customers?
Do you have customers that are having success with this now?

Jason: Yeah, we have a number. We have the European Molecular Biology Laboratory, which is using it to do science processing and provide scientists with access to the large-scale compute environments of the cloud. We have some airlines that are leveraging this. The airline scenario, I think, is actually kind of interesting because it shows the power of combining REST endpoints, more interactive workloads, with batch workloads. In their case, they're exploring using it to do dynamic pricing. If you think about how you do dynamic pricing, there are two dimensions. There's a very interactive one: somebody is getting a price on a ticket or a route, and you want to present them with dynamic price information as part of that web interaction. But then there's a data processing angle. You're looking at all kinds of data coming from your backend systems, from route data, from the fleet, and historical information, and you're trying to decide what the right price table is for that route. So you're doing batch processing in the background, and you're doing this interactive processing in the foreground. You can implement both halves on serverless with Code Engine, and they scale as needed. If you're getting a lot of traffic on the web front end, it scales up without you having to do anything. So they can combine both halves in one environment.

Jeremy: Right. Right. And so, I think we talked about this a little bit, but when you see all these different services, whether it's Google's Kubernetes Engine or EKS on AWS or something like that, I think a lot of people look at these and think, "Oh, it's just another managed Kubernetes cluster." Right? So what are the major differences?
I know we talked about it a little bit, but maybe you could be a bit more succinct and talk about why it's so different from previous generations of tools or some of the other competing products out there.

Jason: Yeah. So if you look behind the curtain on Code Engine, you'd see a couple of things. One is that there is a Kubernetes environment there. The difference is that the Kubernetes environment is completely managed by the Code Engine service. In IBM Cloud, we have the IBM Cloud Kubernetes Service and our Red Hat OpenShift service. In those services, we're managing a cluster on your behalf, but we give you the cluster. It's like, "Here's your Kube cluster. We'll manage its life cycle, but you have direct access to it." With Code Engine, we have a Kube cluster there, and we completely manage it in all respects. You have no direct access to it. That allows us to manage scale and capacity. We run it in a multi-tenant way; we have security and isolation between tenants, but logically you can think of it as a big Kube cluster that lots of users are sharing, which is how the pay-as-you-go model ultimately works, because we're keeping track of what you're actually running and just charging you for that. So one part of it is fully managing that runtime environment. We've layered things like Knative on top of that so we have that developer abstraction, a simpler way to define services, to do the source code and image stuff that I talked about. That's coming through largely through things like Knative, which again we're completely running for you, but it gives you some of that simple interface we talked about, and we're doing that in an open-source way with the community. So it's not proprietary to IBM Cloud. And then on top of that, we built the batch processing system.
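(As a concrete aside: the Knative layer Jason describes works in terms of declarative resources roughly like the one below, which a service such as Code Engine can generate and reconcile on your behalf. The service name and image here are made up for illustration, and the exact autoscaling annotation keys can differ between Knative versions.)

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                                  # hypothetical app name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"  # idle apps scale to zero
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: icr.io/my-namespace/hello:latest   # hypothetical image
          resources:
            requests:
              cpu: "1"
              memory: 512Mi
```

The point is that the developer never has to author this by hand; it's the abstraction the managed layer maintains underneath the "here's my code, run it" experience.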
So: batch scheduling, and some of these unique interfaces, the command line interface, and the user experience to get into that environment for the different workflows I talked about. And one of the cool things is, because we built it on top of that Kubernetes layer, we can also expose the Kubernetes API if we want. So take the Ray example I gave you: Ray doesn't really know anything about Code Engine, but Ray knows how to deploy to and leverage a Kube cluster. So we're able to actually hand Ray the Kubernetes API server endpoint inside of Code Engine for your instance, and that framework can use Kubernetes to stand itself up. And then you can use the simple abstractions on top, and that's still all in Code Engine: it's still pay as you go, and it still scales to zero. That's what I meant by blending the lines: you, or the framework, can drop down to something like Kubernetes as needed to get that flexibility.

Jeremy: Yeah, that's awesome. So you mentioned you have a fully managed Kubernetes service, and then you also have a bunch of other serverless services that run within IBM Cloud. So OpenWhisk, or, I guess, IBM Cloud Functions now. And then also, you mentioned Cloud Foundry, which is sort of a PaaS, but it's also sort of an easy-to-use serverless environment in a sense. Right? And so I guess, is this an evolution? Is this where you suggest people go?

Jason: Yeah. So I think the simplest way to think about it is: yes, Code Engine is the evolution of those ideas. It doesn't always have a direct technical lineage to those projects, but the problem that IBM Cloud Functions and OpenWhisk were trying to solve, and the problem that Cloud Foundry was trying to solve with its start-from-source-code PaaS, are both represented in what we're doing in Code Engine.
So Code Engine will be the natural evolution path for those workloads and for the problems that those users are solving on those platforms. The Cloud Foundry one, I think, is super interesting, in the sense that the rise of Kubernetes has clearly pivoted many people who were doing Cloud Foundry into doing Kubernetes.

Jeremy: Yeah.

Jason: People are using Kubernetes as their foundation, and the Cloud Foundry project, which we're deeply involved in, has done a lot of work to realign Cloud Foundry with Kubernetes in a better way. But what never went away, what people always saw value in with Cloud Foundry, was the simple "push my source code" developer experience. Right? And so that still carries forward. With Code Engine, we're taking that same experience that we had in Cloud Foundry and bringing it into this new service, onto Kubernetes, so the developer still gets that similar experience, but without the boundaries that we talked about. The challenge with Cloud Foundry was always that as soon as you wanted to do stateful things, or async jobs, Cloud Foundry didn't solve that problem: go use a Kube cluster, or go use some completely different environment. So it's the same experience with the boundaries removed, and that's where we would see people go.

Jeremy: Right. So if I'm in one of those services now, if I've got things written in Cloud Functions or in Cloud Foundry, and I've hit some of those limits, or I just want to take advantage of some of the cooler things that Code Engine does, is there a simple migration path?

Jason: Yeah. In general, yes. For Cloud Foundry, for sure. It's pretty straightforward to take the same source code directory that you have and just push it to Code Engine instead. There are edge cases with everything, obviously, but the basic workflow is the same.
You can use the same source input directories. We map to Paketo Buildpacks, and a lot of that stuff came out of Cloud Foundry, so that has a really clean path. For Cloud Functions, there's a little bit of a timing thing. In general, yes, you can take your same functions and run them on Code Engine, but OpenWhisk still has some advantages that we haven't quite gotten built into Code Engine yet. It's got faster startup times, for example. In the runtime model behind Code Engine, we're still starting a container, like a full container. In OpenWhisk, we had done a bunch of work on warm starts and container pooling, so we can get startup times of a small number of milliseconds on those functions, and some of that hasn't worked its way into Code Engine yet. So there are still some cases where Cloud Functions has a capability that doesn't quite exist in Code Engine yet, but over time that will get filled in, and there'll be a simple path to move all those workloads over to Code Engine as well.

Jeremy: Right. So with Code Engine, you mentioned this idea of cold starts. Does Code Engine keep containers warm for a certain amount of time, or is it always a cold start?

Jason: It is, in general, a cold start. In the scale-up, scale-down cycle, it may keep containers around for a while, so it isn't overly aggressive about scaling them down and bringing them right back. But it's not yet doing some of the warm-start tricks OpenWhisk was doing, where we have a pool of primed container instances and we're injecting code into them and running them. That's work in progress. There's work to do both in Knative, to improve that stack, and in Code Engine. There's a balancing act there too ...

Jeremy: Yeah, definitely.

Jason: ... on things like network isolation and getting onto customer VPC networks and other things that are harder to do in that warm-start model.

Jeremy: Yeah, definitely. All right. So if somebody wanted to get started with Code Engine, what's the best way for them to do that? Just sign up and start writing some code?

Jason: Yeah, kind of. Obviously, we've been talking a lot about how developers use these things, and I always think the best way to get started is either to build something on it or to try out a specific source code project. We've done a lot to make that easy. There's a Code Engine landing page on IBM Cloud with some great examples to guide you through those three starting points I talked about: start from source code, start from an image, and do batch. We have some really nice tutorials, like specific text analysis tutorials, that'll show you how to build applications on Code Engine. And we have a pretty cool Git repo that will take you through tons of samples of how to use Code Engine to solve all kinds of problems. So there are a lot of really good code assets out there that a developer could use to try something real on Code Engine, and the getting-started experience is super easy. You've got IBM Cloud, you log in, you go to Code Engine, you create a project, you push an image, and in a couple of minutes you'll have something up and running that you can play with.

Jeremy: Amazing. All right. So I love watching the evolution of things, and again, just this different way that IBM is thinking about serverless, trying to make it easier. Because I always look back and think of Lambda when it first came out. I was like, "Oh, it's so easy. You just put some code there and it's just done for you." And then we got more and more complex.
And not that we didn't need to, I mean, some of this complexity is absolutely necessary. But I'm just curious. Seeing the evolution and where things have gone, I talked to a bunch of people earlier, Rodric Rabbah, for example, who was one of the first people involved with the OpenWhisk project, before it became Apache OpenWhisk. Seeing that evolution, seeing the changes these different cloud providers have gone through, the changes IBM has gone through, and where you are now with Code Engine, I'd love to get your perspective on where you think this is going, not just what the future is for IBM, but what you think the future of serverless is, and cloud computing in general. I know that's a lot of question.

Jason: I'll give you a long answer.

Jeremy: Perfect.

Jason: So that brings to mind two things. First, let me talk about the complexity thing for a second. Managing complexity is always hard. You are so right that many things start out with a value prop of "this is easy," and then as people use it, you add more, and three years later we're saying, "We need a new thing that's easy, because that other thing is too hard now." There's no magic pill for that; it's always a hard problem to manage. However, one of the things I like about the approach we're taking with Code Engine is that, because we've layered it on Kubernetes, it gives us a way to decide where we want that complexity to show up. When we had a Cloud Functions OpenWhisk stack, a Cloud Foundry stack, and a Kubernetes stack, you had to try to solve all problems within each stack. So each stack was getting more complex because you were trying to say, "Oh, I need storage. And I need private networking. And I need all these things."
With Code Engine, I think we have an opportunity to say, once you cross some line, we're just going to ask you to drop down a layer and use Kubernetes directly. You can push some of the complexity down, and that allows us to hold a harder line on complexity in the developer layer on top. So the balancing act we're trying to play is: because we built it on a common platform, we don't have to solve all problems in Code Engine directly.

Jeremy: Right.

Jason: So that's my viewpoint on the complexity problem. On the evolution, it's really interesting. One of the other things my team has been working on, and launched recently, is this thing called IBM Cloud Satellite, which is about distributing cloud outside of cloud data centers, so you can consume cloud services anywhere you want. Cloud computing in general, and this is not just an IBM thing, is diversifying across the industry to be omnipresent. You can consume cloud on-prem, at the edge, in our cloud data centers, wherever you want. There's a programming-model dimension to that problem, too. Especially as you go to the edge, you want something simple to consume, easy to deploy, scale-to-zero, resource-efficient. You need some kind of model like that because at the edge, especially, you don't have 2,000 cores worth of compute to deal with. You have one box in a retail store, or two servers in the back of a distribution center. And so I think things like Code Engine layered on top of distributed cloud, in our case things like Satellite, is actually a really powerful combination. I think we're going to see serverless become the dominant application development and deployment model, especially for these edge use cases, because it combines ease of deployment and management with efficiency and a scale-to-zero footprint, which are all really attractive when you get outside of a mega data center like you have in cloud.

Jeremy: Right. Right.
So I love this idea, too, of exposing the complexity only when the complexity needs to be exposed. I love this idea of creating sane defaults. If you could configure Kubernetes to do all the optimal things it needs to do for use case X, just do that for me, and then if I say, "Oh, I want to tweak this one thing," let me drop down to that level. But I love this idea you mentioned about edge too, because that's one of those things where, from a programming model, as you said: how do you write code that's, I guess, environment-aware? How does it know whether it's running at the edge versus in a data center versus maybe in a hybrid cloud, partially in your own private data center? Just wrapping your head around that model, from a developer standpoint, is incredibly complex.

Jason: Yeah, it is. And sometimes it's "how do they know?", and sometimes it's "how do I just operate at a high enough level of abstraction that the differences between those environments get handled below me?" If I'm consuming Kubernetes clusters directly, the shape of that Kubernetes cluster in a retail store, or a telco data center in Atlanta somewhere, or in the cloud, is going to be different in each case, because you have a different amount of capacity, a different networking setup. So you're going to have to deal with the differences. If I'm giving you a container image and saying, "Run this," the developer doesn't have to deal with those differences. The provider might have to deal with those differences, but the developer doesn't.
So that's where I think things like serverless, and approaches like Code Engine, become much more valuable, because you're dealing at this higher level of abstraction, and then Satellite and Code Engine and other services can magically deal with the complexity for you.

Jeremy: Yeah. And so I know we talked a lot about Kubernetes and what's running underneath a lot of these services. Is that something you see as being the common format across all these different services, or do you think something will evolve beyond Kubernetes to become a standard?

Jason: Right now, I really think Kubernetes will become the base platform. What Kubernetes is will probably keep evolving, and I'm not saying it's Kubernetes forever, but I don't think we should underestimate the power of the industry-wide alignment that exists around containerization and Kubernetes as the next infrastructure platform, if you will, because that's really what it is. I told you at the beginning, I used to build WebSphere app servers, so I was very involved in the whole Java app server era of the late 90s and early 2000s. At that time, the industry aligned around two platforms, Java and .NET, as the two dominant, at least enterprise, application platforms. Now we have everyone aligned on Kube. Literally, there's nobody in the industry who isn't saying, "Kubernetes is the platform." So I think it will be the abstraction for infrastructure in all these environments. The question will be: how do you consume it? Who manages it? How's it delivered? How does it optimize itself? And at what level do you consume it? And I don't think Code Engine is the end of it at all. I think there's lots of room for improving the consumption experience on top of Kubernetes for these developer use cases.

Jeremy: Yeah. Yeah. And that was actually going to be my next question: where do you see, what's the next evolution of Code Engine, right?
Is that going to be driving into specific use cases and trying to solve those, or becoming more flexible? How do you see developers in five years, and this is probably a hard question, but in five years, how are we going to be writing cloud applications?

Jason: Yeah, it's a great and super hard question, but projects like Ray, I think, are an interesting forward look into where this might go. One of the things I've always felt, looking at the whole history of PaaS over the last five, six, seven years, is that PaaS has always been about simplifying the experience for developers, but fundamentally, most PaaS environments don't change anything about how you write the code. They change how you package the code, how you deploy it, how it's executed, and how its dependencies are satisfied. But the actual code you write probably wasn't any different. Right? And that's where I think the next step is: how do we actually get into the languages, into the code structure itself, to take advantage of cloud capacity, to take advantage of scale? Lots of projects have taken attempts at that. Ray, as an example, I think is a particularly interesting one, because there are some good examples where you can take a Python function, literally add one annotation to it in the language, and now it becomes remotely executable and horizontally scalable for you.

Jeremy: Right.

Jason: It's that kind of stuff that I think, three or four years from now, there'll be a lot more of, where we're actually changing how code is written, because that code can assume there's some containerized, scalable fabric out there somewhere that it can execute on top of.

Jeremy: Right. Yeah.
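The pattern Jason is pointing at, decorate a function and it becomes remotely executable, can be imitated in miniature with the standard library. This sketch fans work out to a local thread pool rather than a cluster, so it only illustrates the shape of the programming model; the `remote`/`get` names mirror Ray's API style, not its implementation:

```python
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=8)  # stand-in for cluster capacity

def remote(fn):
    """Imitation of Ray's @ray.remote: decorated calls return futures."""
    class Remote:
        def remote(self, *args, **kwargs):
            # Submit to the pool instead of running inline.
            return _pool.submit(fn, *args, **kwargs)
    return Remote()

def get(futures):
    """Imitation of ray.get: block until all results are ready."""
    return [f.result() for f in futures]

@remote
def simulate(seed):
    # In a real batch or Ray job this could be one Monte Carlo trial.
    return seed * seed

# Fan eight tasks out across the pool, then gather the answers in order.
results = get([simulate.remote(i) for i in range(8)])
```

With Ray running on a platform like Code Engine, the same shape of code would be scheduled across containers in a cluster instead of threads in one process; the calling code barely changes.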
And I think about that pendulum swing for developers, especially developers in the cloud, who used to just write a bunch of code, whether it was JavaScript or Python or Java, and then all of a sudden they have to switch context and say, "All right, now I have to write a YAML file in order to configure my cloud resources," and that sort of back and forth. So yeah, that marrying, basically a programming language for the cloud, is a really interesting concept.

Jason: And I think the distributed cloud notion, funnily enough, is a big enabler of that. Because the other tension I see right now is: let's say you wanted to use Lambda, or serverless functions. That only works in your cloud environment, but you're also running something at the edge, or in your data center, so you're forced to use different approaches, which tends to push you toward common-denominator models.

Jeremy: Right. Right.

Jason: And so you're holding back from really adopting some of these newer models because of the diversity. Well, if cloud goes everywhere and those services go everywhere, then I can just say, "I'll use the serverless model everywhere," and so I can really deeply adopt it. So I think the distributed cloud thing will open up the opportunity to embed these approaches more deeply in day-to-day development activities.

Jeremy: Yeah. No, I love that. I'm all for that approach because I think this split-brain approach is getting very complex, and it's not super easy. So is there anything else that you'd like to let the listeners know about IBM Cloud Code Engine?

Jason: No, I think we touched on a lot of the motivation behind it and the core capabilities. I would just encourage you to go check it out, give it a try, and we'd love to hear people's feedback as they do.

Jeremy: Awesome.
Well, first of all, I've got to make sure I thank IBM Cloud for sponsoring this episode, because the team over there and everything that all of you are working on is amazing stuff, and I appreciate the support. We appreciate the support in the community for what you're doing. So if people want to find out more about you or more about Cloud Code Engine, how do they do that?

Jason: Yeah. You can find me on Twitter, JRMcGee, or LinkedIn. For me personally, I love to talk to people. For Code Engine, I think the best place to start is the product page, which is ibm.com/cloud/code-engine. And from there, you can get to all of the code examples I talked about.

Jeremy: Awesome. All right. Well, I will put all that stuff in the show notes. Thanks again, Jason.

Jason: Yeah. Great. Thanks, Jeremy.
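The Ray pattern Jason describes, where one annotation makes a plain Python function remotely executable, can be sketched with the standard library. This is a toy stand-in, not Ray itself: Ray's actual API is @ray.remote with f.remote() and ray.get(), and here a local thread pool plays the role of the cluster scheduler, just to show the shape of the programming model.

```python
# Sketch of the "one annotation makes it scalable" pattern. A stdlib thread
# pool stands in for Ray's cluster scheduler; the decorator is hypothetical.
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=8)  # stand-in for cluster capacity

def remote(fn):
    """Attach fn.remote(...): returns a future instead of running inline."""
    fn.remote = lambda *args, **kwargs: _pool.submit(fn, *args, **kwargs)
    return fn

@remote
def square(x):
    return x * x

# Fan out ten calls "to the cluster", then gather the results.
futures = [square.remote(i) for i in range(10)]
results = [f.result() for f in futures]
```

In real Ray the call sites look the same, but the futures resolve on remote workers across a cluster instead of local threads.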

Google Cloud Platform Podcast
Kubernetes Config Connector with Emily Cai


Mar 3, 2020 · 26:57


Emily Cai of Google is on the podcast today with hosts Brian Dorsey and Mark Mirchandani to talk about Kubernetes Config Connector, which went GA last month. The program helps users manage their Google Cloud resources in a way that is familiar to Kubernetes developers. Emily explains that it's a great tool for Kubernetes developers looking to easily manage their infrastructure in one place. A platform team managing other teams at a large company is a perfect example of who could benefit from this tool, Emily explains. Walking listeners through the development cycle before and after Kubernetes Config Connector, Emily shines some light on specific instances when this powerful tool could streamline the process of building your project, making it faster and more efficient. She elaborates on the ways Config Connector and Anthos can work together as well. In the future, the Config Connector team hopes to cover all GCP resources, to create a clearer end-to-end experience for Kubernetes developers, and to allow Config Connector to be enabled directly on a cluster.

Emily Cai

Emily is an engineer on Google Cloud's Config Connector team focused on creating a declarative way for users to manage their non-Kubernetes resources. She has been with Google since November 2018 after interning twice (once in Irvine, once in Zurich). Currently living in Seattle, she is an avid frisbee player and winter sports enthusiast who is always open to new experiences.

Cool things of the week
- SQL Server, managed in the cloud (blog)
- Now, you can explore Google Cloud APIs with Cloud Code (blog)

Interview
- Kubernetes (site)
- Kubernetes Docs (site)
- Kubernetes Config Connector on GitHub (site)
- Kubernetes Config Connector Docs (site)
- Unify Kubernetes and GCP resources for simpler and faster deployments (blog)
- keeprunning.io (blog)
- Cloud SQL (site)
- Compute Engine (site)
- Pub/Sub (site)
- Terraform (site)
- Anthos (site)

Question of the week
How can I improve reliability/availability with the least amount of work?
- Regional Persistent Disks (site)
- High Availability Regional Persistent Disks (site)

Where can you find us next?
Our guest will be at KubeCon Europe and speaking at Next. Mark and Brian will also be at Next!
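The "familiar for Kubernetes developers" idea is that a GCP resource becomes just another Kubernetes manifest. A sketch of what that looks like, following the resource naming in the Config Connector docs; the metadata values here are illustrative, not from the episode:

```yaml
# A Pub/Sub topic declared as a Kubernetes object. Once applied with
# `kubectl apply -f topic.yaml`, Config Connector's controller creates and
# reconciles the real GCP resource like any other Kubernetes controller.
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: example-topic        # illustrative name
  namespace: platform-team   # illustrative namespace
```

Because the topic lives in the cluster's desired state, the same GitOps review and rollout flow used for Deployments applies to infrastructure too.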

Google Cloud Platform Podcast
ML/AI with Zack Akil


Dec 3, 2019 · 27:10


Gabi Ferrara and Jon Foust are joined today by fellow Googler Zack Akil to discuss machine learning and AI advances at Google. First up, Zack explains some of the ways AutoML Vision and Video can be used to make life easier. One example is how Google Photos are automatically tagged, allowing them to be searchable thanks to AutoML. Developers can also train their own AutoML models to detect specific scenarios, such as laughing in a video. We also talk Cloud Next 2019 and learn how Zack comes up with ideas for his cool demos. His goal is to inspire people to incorporate machine learning into their projects, so he tries to combine hardware and exciting technology to think of fun, creative ways developers can use ML. Recently, he made a smart AI bicycle that alerts riders of possible danger behind them through a system of lights, and a project to track and photograph balls as they fly through the air after being kicked. To wrap it all up, Zack tells us about some cool projects he's heard people use AutoML for (like bleeping out TV show spoilers in online videos!) and the future of the software.

Zack Akil

When he's not teaching machine learning at Google, Zack likes to teach machine learning at his hands-on data science meetup, Central London Data Science Project Nights. Although he works in the cloud, most of his hobby projects look at different ways you can embed machine learning into low-power devices like Raspberry Pis and Arduinos. He also likes to have a bit of banter with his mixed tag rugby teams.

Cool things of the week
- Stackdriver Logging comes to Cloud Code in Visual Studio Code (blog)
- Open Match v0.8 was released last month (site)
- Cloud Spanner now supports the WITH clause (blog)

Interview
- Zack's Website (site)
- Cloud AutoML (site)
- AutoML Video (docs)
- AutoML Vision (site)
- AutoML Vision Object Detection (docs)
- Coral (site)
- TensorFlow.js (site)
- Central London Data Science Meetup (site)

Question of the week
How do I run Cloud Functions in a local environment?

Where can you find us next?
Zack will be at DevRelCon. Gabi will be taking time to recharge after conference season, then visiting family. Jon will be attending several baby showers.

Sound Effect Attribution
- "Small Group Laugh 4, 5 & 6" by Tim.Kahn of Freesound.org
- "Sparkling Effect A" by CetSoundCrew of Freesound.org

Google Cloud Platform Podcast
End to End Java on Google Cloud with Ray Tsang


Nov 19, 2019 · 38:05


Mark Mirchandani hosts solo today but is later joined by fellow Googler and Developer Advocate Ray Tsang to talk Java! Ray tells us what's new with Java 11, including more memory and fewer restrictions for developers. One of the greatest things for Ray is using Java 11 in App Engine because of the management support that it provides. Later, we talk about Spring Boot on GCP. Ray explains the many benefits of using this framework: developers can get their projects started much more quickly, for example, and with Spring Cloud GCP, it's easy to integrate GCP services like Spanner and run your project in the cloud. For users looking to containerize their Java projects, Jib can help you do this without having to write a Dockerfile. At the end of the show, Ray and Mark pull it all together by explaining how Spring Boot, Cloud Code, Skaffold, and proper DevOps can work together for a seamless Java project.

Ray Tsang

Ray is a Developer Advocate for the Google Cloud Platform and a Java Champion. Ray works with engineering and product teams to improve Java developer productivity on GCP. He also helps Alphabet companies migrate to and adopt cloud-native architecture. Prior to Google, Ray worked at Red Hat, Accenture, and other consulting companies, where he focused on enterprise architecture, managed solutions delivery, and contributed to open source projects. Aside from technology, Ray enjoys traveling and adventures.

Cool things of the week
- Cloud Run is now GA (blog)
- Budget API in Beta (blog)

Interview
- App Engine (site)
- Micronaut (site)
- Quarkus (site)
- Java 11 on App Engine (blog and docs)
- Spring Boot and Spring Cloud (site)
- Spring Cloud GCP Projects (site)
- Cloud Spanner (site)
- Spring Cloud Sleuth (site)
- Stackdriver (site)
- Bootiful GCP: To Production! (blog)
- Effective Cloud Native Spring Boot on Kubernetes & Google Cloud Platform (blog)
- JDBC drivers (site)
- Hibernate ORM with Cloud Spanner (docs)
- Dev to Prod with Spring on GCP in 20 Minutes (Cloud Next '19) (video)
- Cloud Code (site)
- Jib (site)
- Skaffold (site)
- Debugger (site)
- Troubleshooting & Debugging Microservices in Kubernetes (blog)
- Cloud Code Quickstart (docs)
- Spring (or Java) to Kubernetes Faster and Easier (blog)
- GCP Podcast Episode 58: Java with Ray Tsang and Rajeev Dayal (podcast)

Question of the week
How do I dockerize my Java app? (video, GitHub)

Where can you find us next?
Ray is taking a break for the holidays, but in the future, you can find him at Java and JUG conferences. Mark is hanging out in the Bay Area, but Google Cloud Next in London and KubeCon and CloudNativeCon are happening now!

Sound Effect Attribution
- "Small Group Laugh 4, 5 & 6" by Tim.Kahn of Freesound.org
- "Tre-Loco1" by Sonipro of Freesound.org
- "Mens Sincere Laughter" by Urupin of Freesound.org
- "Party Pack" by InspectorJ of Freesound.org
- "DrumRoll" by HolyGhostParty of Freesound.org
- "Tension" by ERH of Freesound.org
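Ray's point about Jib, containerizing a Java app without writing a Dockerfile, comes down to a build-plugin entry. A minimal Maven sketch, using the plugin coordinates from the Jib docs; the version and target image below are illustrative, not prescribed by the episode:

```xml
<!-- pom.xml fragment: `mvn compile jib:build` builds a layered image and
     pushes it to the registry, with no Dockerfile and no Docker daemon. -->
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.0</version> <!-- illustrative; use the current release -->
  <configuration>
    <to>
      <image>gcr.io/my-project/my-app</image> <!-- illustrative target -->
    </to>
  </configuration>
</plugin>
```

Because Jib builds from the build tool's own view of classes and dependencies, unchanged layers are reused across builds, which keeps the inner loop fast.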

Google Cloud Platform Podcast
ML with Dale Markowitz


Sep 10, 2019 · 30:11


On the podcast this week, we have a great interview with Google Developer Advocate Dale Markowitz. Aja Hammerly and Jon Foust are your hosts, as we talk about machine learning, its best use cases, and how developers can break into machine learning and data science. Dale talks about natural language processing as well, explaining that it's basically the intersection of machine learning and text processing. It can be used for anything from aggregating and sorting Twitter posts about your company to sentiment analysis. For developers looking to enter the machine learning space, Dale suggests starting with non-life-threatening applications, such as labeling pictures. Next, consider the possible mistakes the application can make ahead of time to help mitigate issues. To help prevent the introduction of bias into the model, Dale suggests introducing it to as many different types of project-appropriate data sets as possible. It's also important to continually monitor your model. Later in the show, we talk Google shop, learning about all the new features in Google Translate and AutoML.

Dale Markowitz

Dale Markowitz is an Applied AI Engineer and Developer Advocate for ML on Google Cloud. Before that she was a software engineer in Google Research and an engineer at the online dating site OkCupid.

Cool things of the week
- Build a dev workflow with Cloud Code on a Pixelbook (blog)
- Feminism & Agile (blog)
- New homepage and improved collaboration features for AI Hub (blog)

Interview
- TensorFlow (site)
- Natural Language API (site)
- AutoML Natural Language (site)
- Content Classification (site)
- Sentiment Analysis (site)
- Analyzing Entities (site)
- Translation API (site)
- AutoML Translate (site)
- Google Translate Glossary Documentation (docs)
- Google News Lab (site)
- AI Platform's Data Labeling Service (docs)

Question of the week
How many different ways can you run a container on GCP?
- GKE
- Cloud Run
- App Engine flexible environment
- Compute Engine (VM as a computer)

Where can you find us next?
Dale will be at DevFest Minneapolis, DevFest Madison, and London NEXT. Jon will be at the internal Google Game Summit and visiting Montreal. Aja will be holding down the fort at home.

Sound Effect Attribution
- "Mystery Peak2" by FoolBoyMedia of Freesound.org
- "Collect Point 00" by LittleRobotSoundFactory of Freesound.org
- "Cinematic Piano" by Ellary of Freesound.org
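To make the sentiment analysis discussion concrete: the Cloud Natural Language API returns a score (negative to positive) and a magnitude (overall emotional weight) per document. The toy scorer below only imitates that output shape with a hypothetical word list; it is not the API and not a real model, just an illustration of the contract a caller sees.

```python
# Toy sentiment scorer mimicking the (score, magnitude) shape of the Cloud
# Natural Language API response. The word lists are illustrative only.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def analyze_sentiment(text):
    words = text.lower().split()
    hits = [1 for w in words if w in POSITIVE]
    hits += [-1 for w in words if w in NEGATIVE]
    if not hits:
        return {"score": 0.0, "magnitude": 0.0}
    # score: average polarity in [-1, 1]; magnitude: total emotional weight
    return {"score": sum(hits) / len(hits), "magnitude": float(len(hits))}

result = analyze_sentiment("I love this great product")
```

The real call goes through the Cloud client libraries and a trained model rather than word lists, but the returned fields have this same shape.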

Kubernetes Podcast from Google
Cloud Code, with Sarah D'Angelo and Patrick Flynn


Jul 30, 2019 · 33:49


Cloud Code provides everything you need to write, debug, and deploy Kubernetes applications, including extensions to IDEs such as Visual Studio Code and IntelliJ. Joining Craig and Adam are Sarah D'Angelo, a UX Researcher, and Patrick Flynn, an engineering lead, both on the Cloud Code team at Google.

Do you have something cool to share? Some questions? Let us know:
- web: kubernetespodcast.com
- mail: kubernetespodcast@google.com
- twitter: @kubernetespod

Chatter of the week
- All-meat diet (do not try this at home)
- Warmest UK day on record

News of the week
- Happy first birthday, Knative!
- Episode 14, with Oren Teich
- Episode 47, with Kim Lewandowski
- Episode 44, with Tracy Miranda
- Grafana Labs: How a production outage was caused using Kubernetes pod priorities
- Episode 38, with Henning Jacobs
- Banzai Cloud: Kafka on Istio performance
- Docker Enterprise 3.0 is GA, and their new Technology Partner program
- Tim Hockin on reconciliation
- Episode 41, with Tim Hockin
- Fairwinds Polaris
- Container platform security with Cruise
- YuniKorn
- KubeCon China transparency report
- Kazuhm
- Kubernetes as a Service: Morpheus v4

Links from the interview
- Cloud Code
- IntelliJ
- VS Code
- Skaffold
- Episode 6, with Matt Rickard
- Jib
- GitHub issues: IntelliJ, VS Code
- Sign up for a Cloud Code research study
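Cloud Code's write/debug/deploy loop is driven by Skaffold under the hood: one config file describes how to build and deploy, and the tooling reruns it on every change. A minimal sketch of such a config, with field names following the Skaffold docs; the image name and manifest path are invented for illustration:

```yaml
# skaffold.yaml: `skaffold dev` (or Cloud Code's run/debug command) watches
# sources, rebuilds the image (here via Jib), and redeploys on change.
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: gcr.io/my-project/hello   # illustrative image name
      jib: {}                          # build with Jib, no Dockerfile needed
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                     # illustrative manifest path
```

Keeping build and deploy in one declarative file is what lets the IDE extensions and plain CLI runs share the exact same pipeline.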

Last Week in DevOps
July 19th - Hybrid Cloud, Code Reviews, and Kubernetes


Jul 19, 2019 · 15:04


In this episode, Josh and Eddie briefly discuss the Red Hat merger, and introduce us to articles about Containers and Kubernetes, advice for Hybrid Cloud deployment, and the essentials of a good code review.

44BITS 팟캐스트 - 클라우드, 개발, 가젯
stdout_027.log: 샌프란시스코 여행, Google Cloud Next 2019 w/ subicura


Apr 24, 2019 · 85:29


In the 27th log of stdout.fm, we invited @subicura to talk about his trip to San Francisco and his impressions of Google Cloud Next. Participants: @seapy, @nacyo_t, @raccoonyy. Guest: @subicura

- RubyKaigi 2020
- Nagoya Airport to Matsumoto Station route - Google Maps
- Muan International Airport - Wikipedia
- Seocho.rb first meetup: Serverless Ruby | Festa!
- Subicura's Blog
- Docker guide for beginners - what is Docker?
- Google Cloud Next '19 | April 9-11 | San Francisco
- Moscone Center - Google Maps
- Argonaut Hotel - A Noble House Hotel, San Francisco (2019) | Hotels.com
- stdout_003.log: GitHub Universe, HashiConf w/ @Outsideris | stdout.fm developer podcast
- Apple Park Visitor Center - Apple
- Buy iPad mini - Apple (KR)
- Samsung Galaxy Fold Non-Review: We Are Not Your Beta Testers - WSJ
- Googleplex - Google Maps
- Android lawn statues - Wikipedia
- MPK 12, Facebook HQ building - Google Maps
- Aladin: Chaos Monkeys (Korean edition)
- Trust, but Verify: What Facebook's Electronics Vending Machines Say About the Company - The Atlantic
- Chrome Remote Desktop - extension
- (Japanese) Drecom is a yatai sponsor at RubyKaigi 2019! - Tech Inside Drecom
- #rubykaraoke - Twitter Search / Twitter
- Jeff Bezos and Robert Downey Jr. will headline re:MARS fest in Vegas - GeekWire
- Google I/O Viewing Party 2019 | Festa!
- Google - Site Reliability Engineering
- Home | OCI
- Micronaut | OCI
- Micronaut Framework on Twitter: "Love it when we run into fellow #micronautfw enthusiasts at events! @subicura …"
- LogRocket | Logging and Session Replay for JavaScript Apps
- stdout_016.log: The government's SNI-based internet access blocking w/ han | stdout.fm developer podcast
- Many popular iPhone apps secretly record your screen without asking | TechCrunch
- Continuous Integration and Delivery - CircleCI
- HashiCorp: Multi-Cloud Management, Security, Automation
- Anthos | Google Cloud
- Canalys Newsroom: Cloud market share Q4 2018 and full year 2018
- Announcing the AWS China (Beijing) Region
- Google Cloud announces new regions in Seoul and Salt Lake City | Google Cloud Blog
- Apple's HomePod delayed until next year - The Verge
- Apple cancels AirPower wireless charger - The Verge
- BigQuery - analytics data warehouse | Google Cloud
- Amazon Athena - serverless interactive query service - AWS
- AWS CloudTrail - Amazon Web Services
- Data partitioning - Amazon Athena
- Bringing the best of open source to Google Cloud customers | Google Cloud Blog
- Memorystore | Google Cloud
- Cloud Code | Google Cloud
- AWS Toolkit for Visual Studio Code
- Atom
- Cloud Run | Google Cloud
- Cloud Functions - event-driven serverless computing | Google Cloud
- AWS Fargate - run containers without managing servers or clusters
- Pricing | Cloud Run | Google Cloud
- API management | Apigee | Google Cloud
- Outsider on Twitter: "Collecting articles for the newsletter, there aren't many written in Korean …"
- Outsider's Dev Story - Newsletter
- itcle - page read error
- Google announces new AI, smart analytics tools | ZDNet

Google Cloud Platform Podcast
Cloud Run with Steren Giannini and Ryan Gregg


Apr 16, 2019 · 32:32


Mark Mirchandani is our Mark this week, joining new host Michelle Casbon in a recap of their favorite things at Next! The main story this episode is Cloud Run, and Gabi and Mark met up with Steren Giannini and Ryan Gregg at Cloud Next to learn more about it. Announced at Next, Cloud Run brings serverless to containers! It offers great options and security, and the client only pays for what they use. With containers, developers can use any language, any library, any software, anything! Two versions of Cloud Run were released last week. Cloud Run is the fully managed, hosted service for running serverless containers. The second version, Cloud Run on GKE, provides a lot of the same benefits, but runs the compute inside your Kubernetes cluster. It's easy to move between the two if your needs change as well.

Steren Giannini

Steren is a Product Manager in the Google Cloud Platform serverless team. He graduated from École Centrale Lyon, France, and then was CTO of a startup that created mobile and multi-device solutions. After joining Google, Steren managed Stackdriver Error Reporting, Node.js on App Engine, and Cloud Run.

Ryan Gregg

Ryan is a product manager at Google, working on Knative and Cloud Run. He has over 15 years' experience working with developers on building and extending platforms and is passionate about great documentation and reducing developer toil. After more than a decade of working on enterprise software platforms and cloud solutions at Microsoft, he joined Google to work on Knative and building great new experiences for serverless and Kubernetes.
Cool things of the week
- News to build on: 122+ announcements from Google Cloud Next '19 (blog)
- Mark's favorite announcement: Network service tiers (site)
- Michelle's favorite announcements:
  - Cloud Code (site)
  - Cloud SQL for Postgres now supports v11 (release notes)
  - Cloud Data Fusion for visual code-free ETL pipelines (site)
  - Cloud AI Platform (site)
  - AutoML Natural Language (site)
  - Google Voice for G Suite (blog)
  - Hangouts Chat in Gmail (site)
  - Kubeflow v0.5.0 release (site)

Interview
- Cloud Run (site)
- Knative (site)
- Knative Docs (site)
- Firestore (site)
- App Engine (site)
- Cloud Functions (site)
- GKE (site)
- Cloud Run on GKE (site)
- Understanding cluster resource usage (site)
- Docker (site)
- Cloud Build (site)
- GitLab (site)
- Buildpacks (site)
- Jib (Java Image Builder) (site)
- Pub/Sub (site)
- Cloud VPC (site)
- Google Cloud Next '19 All Sessions (videos)

Question of the week
If I want to try out Cloud Run, how do I get started?
- Get started with the beta version by logging in (site)
- Quicklinks (site)
- Codelab (site)

Where can you find us next?
Gabi is at PyTexas. Jon and Mark Mandel are at East Coast Game Conference. Michelle and Mark Mirchandani will be at Google I/O in May. Michelle will be at KubeCon Barcelona in May.
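Cloud Run's container contract is deliberately small: the container listens for HTTP on the port named in the PORT environment variable (Cloud Run sets it, conventionally 8080). Any stack that can do that works, which is the "any language, any library" point above. A minimal Python sketch of such a service, with a local self-test request at the end; port 0 is used here so the OS picks a free local port, and a real Cloud Run service would bind 0.0.0.0 rather than loopback:

```python
import os
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any response works; the contract is just "answer HTTP on $PORT".
        body = b"Hello from a serverless container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Cloud Run injects PORT; falling back to 0 lets the OS pick a free port locally.
port = int(os.environ.get("PORT", "0"))
server = HTTPServer(("127.0.0.1", port), Handler)
bound_port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = urllib.request.urlopen(f"http://127.0.0.1:{bound_port}/").read()
server.shutdown()
```

Packaged into a container image, the same program runs unchanged on fully managed Cloud Run or on Cloud Run on GKE, which is what makes moving between the two easy.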

UKFast - Corporate Film Production
Cloud Code of Practice Round Table Part 4


May 20, 2011 · 3:59


Cloud Code of Practice Round Table Part 4. Suppliers are jumping on the cloud technology bandwagon hoping to benefit from the hype surrounding internet-based IT. But how do you know who to trust to build and maintain the right cloud platform for your business? UKFast's Round Table panellists discuss the importance of industry regulations and a code of practice for suppliers and offer advice on how to avoid the cowboys. Panellists include Andy Burton for Cloud Industry Forum, Simon Howitt for Outsourcery, Andrew Saunders for Zen Internet, Ian Moyse for Webroot, Andrew Corbett for UK IT Association, Lawrence Jones for UKFast, and hosted by Jonathan Bowers for UKFast.

Digital Success Daily
Last Week In Digital : New MS Excel, New Google Chrome, IBM Cloud Code Engine and more


5:18


Welcome to another edition of Last Week In Digital, helping you get up to speed with the latest updates from digital platforms.

- New Google Chrome: Google announced limited availability of Federated Learning of Cohorts (FLoC) to replace third-party cookies by 2022. I have created a quick demo for this new capability.
- New MS Excel: Microsoft announced the public preview of Microsoft Power Fx, a low-code, open-source programming language that can potentially change the way we use MS Excel. Check out the cool demo.
- Azure Communication Services: The technology that powers MS Teams is now globally available across all regions for enterprises. If you support your customers on video, audio, and chat, check out ACS capabilities to level up your contact center.
- IBM Cloud Code Engine: A new capability from IBM Cloud that manages and scales cloud infrastructure automatically when you deploy your apps. Most importantly, you pay for what you use. Here's more details.
- Amazon CloudWatch Metric Streams: A real-time data stream that connects your Amazon CloudWatch data to any destination, such as a Power BI dashboard, New Relic/Datadog, or Amazon Athena for cost optimization. Don't miss out on more details here.